

How robot swarms are learning to find what we lose at sea

As we perfect wireless underwater communication, robots will criss-cross the abyss finding everything from lost planes to chemical leaks

By Sandrine Ceurstemont

27 July 2016


Cabled arrays allow real-time ocean floor monitoring

NOAA Office of Ocean Exploration and Research


METHODICALLY, a team of robots patrols the turbid waters of Venice’s lagoon. Some glide through the gloom, seeking their quarry. Others sit silent on the silty floor, waiting to relay intel back to base. If Thomas Schmickl has his way, this vision will soon be reality. The robotic fleets he and his colleagues are developing are part of a struggle to crack one of the biggest challenges in the ocean: finding things.

We all followed the story of Malaysia Airlines flight MH370, which vanished shortly after take-off from Kuala Lumpur airport in 2014, and seemingly crashed somewhere in the Indian Ocean. Debris from the plane washed up more than a year later, but the sea search was a failure. The search zone initially covered an area about half the size of the UK, but “we can safely say that nobody has ever seen a part of MH370 underwater,” says Jules Jaffe at the Scripps Institution of Oceanography in San Diego, California.

Lost planes might grab headlines, but there are many other reasons to improve our ability to search at sea: locating and plugging holes in poison-leaking shipwrecks, for example, or recapturing the thousands of shipping containers estimated to be lost overboard each year. Robotic search could be the answer – if some tricky problems can be overcome first.

When we explore the deep ocean now, we generally do so in submarines with human pilots or with robots that, if not physically tethered to a boat, must regularly surface to broadcast their findings. The search for MH370 involved ships criss-crossing the ocean towing an underwater microphone on a 6-kilometre cable to listen for the ping of the plane’s black box.

To make progress, we need autonomously powered searchers that can communicate wirelessly from the depths of the ocean and orient themselves underwater. The difficulties start with radio: high-frequency radio signals, our go-to communication medium in air, are quickly absorbed in water. Lower frequencies travel further, but can't carry much information.
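
To get a feel for the scale of the problem, the standard skin-depth formula for a conducting medium shows how fast radio fades in seawater. The sketch below is a back-of-the-envelope illustration rather than anything the researchers describe: it assumes a typical seawater conductivity of about 4 siemens per metre, and the good-conductor approximation becomes rough at gigahertz frequencies, but the trend is clear.

```python
import math

MU_0 = 4 * math.pi * 1e-7   # permeability of free space, H/m
SIGMA_SEAWATER = 4.0        # assumed conductivity of seawater, S/m

def skin_depth(freq_hz, sigma=SIGMA_SEAWATER):
    """Depth (m) at which a radio wave's field falls to 1/e of its surface
    value, using the good-conductor approximation."""
    return 1.0 / math.sqrt(math.pi * freq_hz * MU_0 * sigma)

for freq in (10e3, 1e6, 100e6, 2.4e9):   # from VLF up to Wi-Fi frequencies
    print(f"{freq:>12.0f} Hz -> signal fades to 1/e within {skin_depth(freq):.3f} m")
```

At Wi-Fi frequencies the signal fades to a fraction of its strength within about 5 millimetres of water, while a 10-kilohertz signal manages a couple of metres, which is why useful ranges demand very low frequencies and correspondingly low data rates.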

Sound, on the other hand, travels much more easily through water than it does through air. Sonar systems were first developed after the sinking of the Titanic in 1912 to detect submerged objects like icebergs. But encoding information in sound pulses is difficult and the acoustic modems that do it are slow. And with whales and dolphins as well as ships already using echolocation systems, there’s plenty of noise that will interfere with an underwater searcher trying to send information back to the surface.

Schmickl is a member of the Subcultron project, whose robot swarms are testing out alternative approaches. Their scheme involves “mussel” robots that sit on the sea floor and act as a coordinating grid, floating to the surface when they need to talk to a base station. They chat to “fish” explorer robots that can also talk among themselves, reducing the distance any signal needs to travel. The result is a “dynamic seabed-exploring carpet, which can then slowly crawl through a larger habitat”, says Schmickl, who is based at the University of Graz in Austria.
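
As a rough illustration of why a relay grid helps, here is a minimal sketch, not Subcultron's actual software, of an explorer finding a chain of short hops back to a base station through nearby relay nodes. The positions, names and 50-metre hop range are invented for the example.

```python
from collections import deque

# A "fish" explorer wants to report to the base station, but the base is out
# of direct range, so the message hops through nearby "mussel" relay nodes.
# Positions (metres) and the hop range are made up for this sketch.
COMM_RANGE = 50.0

nodes = {
    "fish":     (10.0, 140.0),
    "mussel_a": (20.0, 100.0),
    "mussel_b": (40.0, 60.0),
    "mussel_c": (55.0, 20.0),
    "base":     (60.0, 0.0),
}

def in_range(a, b):
    (x1, y1), (x2, y2) = nodes[a], nodes[b]
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 <= COMM_RANGE

def relay_path(src, dst):
    """Breadth-first search for the shortest chain of in-range hops."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in nodes:
            if nxt not in seen and in_range(path[-1], nxt):
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no chain of in-range nodes connects the two

print(relay_path("fish", "base"))
# ['fish', 'mussel_a', 'mussel_b', 'mussel_c', 'base']
```

The search returns the chain with the fewest hops; in a real swarm the robots would also have to discover one another's positions and cope with nodes drifting in and out of range.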


Underwater LED communication systems could aid search

Sonardyne/NASA

In Venice’s lagoon the water is rarely more than 2 metres deep. Here, the robots will look for hazards like burst sewage pipes or chemical spills, and test communication methods. Options include spherical electromagnetic fields that would be detectable to other robots, or lasers or blue LEDs; blue because that’s the colour of light that travels furthest in water.

Communicating with light pulses is likely to be even more effective in the clearer and darker water of the deep ocean. James Kinsey at the Woods Hole Oceanographic Institution in Massachusetts and his colleagues recently established a high-speed data connection between an autonomous underwater vehicle and a sensor node on the sea floor when they were up to 100 metres apart. “We were reaching the same transfer speed you would expect from home internet,” says Kinsey.


The Subcultron robots can chat to each other, but Chiara Petrioli at the Sapienza University of Rome wants to create something more extensive – a submarine version of the internet of things. The idea is that any device or person could log on to the network and access information from divers, autonomous sensors and robots anywhere in the ocean, all speaking the same language. “It would let us monitor the underwater world, for example to understand climate change and underwater volcanoes, or to research an accident or oil spill,” says Petrioli. That would all work without the kilometres of difficult-to-install wiring that characterises existing underwater observatories (see “Conquering the deep: wiring up an undersea volcano“).

Petrioli is part of the Europe-wide Sunrise project, which has already created a small-scale version of such a wireless network. In 2014, the team unleashed three autonomous vehicles into Porto harbour in Portugal. Equipped with sonar for probing the seabed, they could talk among themselves and respond to instructions. The swarm found a “lost” shipping container in 20 minutes flat. During a similar test at an archaeological site in Sicily, they found an ancient wreck. Later this year, a bigger test will see if 12 different robots and other devices can be persuaded to communicate and collaborate on a task.

There are plenty of other problems to be ironed out if we want to find objects like fallen planes in enormous expanses of open ocean. Rough seas can easily destroy robots, so making them cheap enough to be considered disposable is an important goal. Another is making them intelligent enough to switch to the most appropriate communication mode as they travel through regions of the seas that are choppy, murky, shallow or remote.

But we are making progress. At the moment, underwater robots only know where they are by resurfacing and receiving signals from positioning satellites or by listening out for the pings of positioning buoys, painstakingly placed at the surface during a mission. “Putting out pingers is a pain and we would prefer a scheme that doesn’t add extra noise to the oceans,” says Jaffe. His team are investigating whether robots can triangulate their relative positions by listening for ambient noise from shipping routes – or possibly even the surprisingly loud claw snaps from shrimp colonies.
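
For a sense of how today's buoy-based positioning works, here is a toy sketch of a robot fixing its position from the travel times of pings from surface buoys at known locations. The buoy layout and timings are invented, and a real system would also correct for the varying speed of sound, clock offsets and more.

```python
import numpy as np

SOUND_SPEED = 1500.0  # rough speed of sound in seawater, m/s

# Hypothetical surface buoys at known (x, y, z) positions in metres,
# and the one-way ping travel times the robot measures, in seconds.
buoys = np.array([[0.0,   0.0,   0.0],
                  [800.0, 0.0,   0.0],
                  [0.0,   800.0, 0.0],
                  [800.0, 800.0, 0.0]])
travel_times = np.array([0.3727, 0.4583, 0.3727, 0.4583])
ranges = SOUND_SPEED * travel_times              # distance to each buoy

def locate(guess=(400.0, 400.0, -200.0), iterations=20):
    """Gauss-Newton fit of the robot's position to the measured ranges."""
    pos = np.array(guess, dtype=float)
    for _ in range(iterations):
        offsets = pos - buoys                        # vectors buoy -> robot
        predicted = np.linalg.norm(offsets, axis=1)  # ranges at current guess
        residuals = predicted - ranges
        jacobian = offsets / predicted[:, None]      # d(range)/d(position)
        step, *_ = np.linalg.lstsq(jacobian, residuals, rcond=None)
        pos -= step
    return pos

print(locate())  # roughly [300, 400, -250], the position used to fake the timings
```

Ambient-noise positioning of the kind Jaffe describes would replace the known buoys with opportunistic sound sources whose positions and timing the robots must infer for themselves, which is what makes it so much harder.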

Likewise for power. The Subcultron robots top up their juice by docking with floating “lilypads” of solar panels, but at great depths another solution is needed. Schmickl suggests a fuel cell powered by bacteria. The basic technology already exists, and it has the added benefit that the bacteria munch on polluting chemicals to keep going. “Dirtiness could be a benefit,” says Schmickl. “The robots would be helping to clean up the oceans.”

None of this is easy, but Schmickl’s team are used to ambitious feats. In 2014, they got 41 underwater robots collaborating in a tank. This summer the Venice lagoon experiments will kick off, and within two years their swarm should be large enough to explore a sizeable area. “We will have 150 robots in total,” says Schmickl. “We’re aiming for the largest underwater swarm in the world.”

From sounding line to sonar

A knotted sounding line was the instrument of choice for the first systematic attempts to map the seabed. This map of the Cocos Islands in the Indian Ocean was made by the crew of HMS Beagle in 1835. The soundings helped the ship’s on-board naturalist, one Charles Darwin, develop his theory for how coral atolls form in the wake of subsiding volcanoes

Cocos Islands

Scottish Geographical Map

Today, surveys wield multibeam sonar to chart the sea floor in 3D at metre resolution. The difference between this and single-beam echo-sounding is apparent in these images of Northwind Ridge and the Chukchi margin off northern Alaska (top image 2012, bottom 1979). Less than 15 per cent of the seabed is mapped in high resolution now, but the General Bathymetric Chart of the Oceans initiative hopes to cover it all by 2030


General Bathymetric Chart of the Oceans / Center for Coastal and Ocean Mapping, University of New Hampshire, USA

Read more in our ocean special: “High tech goes deep: A new age of ocean exploration”

This article appeared in print under the headline “Dropped in the ocean”
