by Jennifer Chu for MIT News
Cambridge MA (SPX) Jul 18, 2012
For years, the U.S. Navy has employed human divers, equipped with sonar cameras, to search for underwater mines attached to ship hulls. The Navy has also trained dolphins and sea lions to search for bombs on and around vessels. While animals can cover a large area in a short amount of time, they are costly to train and care for, and don't always perform as expected.

In the last few years, Navy scientists, along with research institutions around the world, have been engineering resilient robots for minesweeping and other risky underwater missions. The ultimate goal is to design completely autonomous robots that can navigate and map cloudy underwater environments - without any prior knowledge of those environments - and detect mines as small as an iPod.

Now Franz Hover, the Finmeccanica Career Development Assistant Professor in the Department of Mechanical Engineering, and graduate student Brendan Englot have designed algorithms that vastly improve such robots' navigation and feature-detecting capabilities. Using the group's algorithms, the robot is able to swim around a ship's hull and view complex structures such as propellers and shafts. The goal is to achieve a resolution fine enough to detect a 10-centimeter mine attached to the side of a ship.

"A mine this small may not sink the vessel or cause loss of life, but if it bends the shaft, or damages the bearing, you still have a big problem," Hover says. "The ability to ensure that the bottom of the boat doesn't have a mine attached to it is really critical to vessel security today."

Hover and his colleagues have detailed their approach in a paper to appear in the International Journal of Robotics Research.
Seeing a shape in the dots

Fully viewing a massive structure such as a naval combat vessel - as well as all its small features, including bolts, struts and any small mines - is a tricky planning problem, according to Hover.

"It's not enough to just view it from a safe distance," Hover says. "The vehicle has to go in and fly through the propellers and the rudders, trying to sweep everything, usually with short-range sensors that have a limited field of view."

The group approached the challenge in two stages. For the first stage, the researchers programmed the robot to approach the ship's hull from a safe 10-meter distance, swimming in a square around the structure. The vehicle's sonar camera emits signals that boomerang back as the robot makes its way around the ship; the researchers process the sonar signals into a grainy point cloud.

At such a low resolution, Hover says, one can clearly make out a ship's large propeller, though not an iPod-sized mine. "We get an immense point cloud," Hover says. "And believe it or not, we see the shape of the ship's structures emerge."

However, Hover describes the point cloud as a "mist" that doesn't necessarily tell a robot where a ship's structures begin and end - crucial information for the robot to avoid colliding with a ship's propellers. To translate this "mist" into a solid structure, the researchers adapted computer-graphics algorithms to their sonar data, generating a three-dimensional, "watertight" mesh model.
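The article does not say which computer-graphics technique the group adapted. A standard way to turn an oriented point cloud into a watertight surface is Poisson reconstruction; the following is a minimal sketch of that step using the open-source Open3D library, assuming the sonar returns have already been assembled into an N x 3 point array. The library, function, and parameter choices here are illustrative, not the authors' implementation.

```python
# Hypothetical sketch: turning a sonar point cloud into a watertight mesh.
# Open3D and Poisson reconstruction are illustrative choices, not the
# authors' stated method.
import numpy as np
import open3d as o3d

def point_cloud_to_mesh(points_xyz: np.ndarray) -> o3d.geometry.TriangleMesh:
    """points_xyz: (N, 3) array of sonar returns, in meters."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points_xyz)

    # Estimate and orient surface normals from local neighborhoods;
    # Poisson reconstruction needs oriented normals.
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.5, max_nn=30))
    pcd.orient_normals_consistent_tangent_plane(k=15)

    # Fit a smooth, watertight surface through the noisy "mist" of points.
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=9)

    # Trim poorly supported triangles (regions with very few sonar returns).
    densities = np.asarray(densities)
    keep = densities > np.quantile(densities, 0.05)
    mesh.remove_vertices_by_mask(~keep)
    return mesh
```

The depth parameter trades mesh detail against smoothing of sonar noise; real sonar data would also need outlier filtering and registration of the individual scans before this step.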
Up close and mine-able

With a watertight mesh model of the ship in hand, the robot can move on to the second stage: planning a path that brings its short-range sensors close enough to inspect every surface of the hull.

One approach, Hover says, might be to have the robot sweep over the structure much like one would mow a lawn, one strip at a time - a common technique in robotic inspection. But such rectangular surveys can be tedious and time-consuming.

Instead, the researchers came up with a more efficient approach, using optimization algorithms to program the robot to sweep across the structures while taking into account their complicated 3-D shapes. The group's technique significantly shortens the path a robot needs to follow to view an entire ship.

"Over a minute or two of computation, we're able to make tremendous improvements to the length of this path, and do so while keeping every single point in view," Englot says.

Gaurav Sukhatme, a professor of computer science at the University of Southern California who was not involved in this work, sees the group's work as an integrated approach to multiple problems, including "the extraordinary amount of data the vehicle has to intelligently process, and the safety considerations when operating near a hull that is being imaged or examined."

"I think a big application is going to be in servicing existing underwater rigs, and in decommissioning rigs no longer slated for use," Sukhatme says.

The team has tested its algorithms in the field, creating underwater models of two vessels: the Curtiss, a 183-meter military support ship in San Diego, and the Seneca, an 82-meter cutter in Boston. The group is performing tests this month in Boston Harbor.

"The goal is to be competitive with divers in speed and efficiency, covering every square inch of a ship," Englot says. "We think we're close."

This research is supported by the Office of Naval Research.
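The article does not describe the path-planning optimization in detail. One simplified way to illustrate the underlying idea - choose a small set of viewpoints that together see every point of the mesh, then order them into a short tour - is a greedy set cover followed by a nearest-neighbor ordering. The sketch below assumes a precomputed visibility matrix and is a stand-in for the group's method, which also handles sensor geometry and collision avoidance near the hull.

```python
# Hypothetical illustration of coverage path planning: pick a few viewpoints
# that together see every hull point, then order them into a short tour.
# A simplified stand-in for the group's optimization, not their code.
import numpy as np

def plan_inspection_tour(viewpoints: np.ndarray, visible: np.ndarray):
    """
    viewpoints: (M, 3) candidate robot positions (orientation omitted for brevity).
    visible:    (M, N) boolean matrix; visible[i, j] is True if hull point j
                can be seen by the sensor from viewpoint i.
    Returns indices of the chosen viewpoints in visiting order.
    """
    n_points = visible.shape[1]
    uncovered = np.ones(n_points, dtype=bool)
    chosen = []

    # Greedy set cover: repeatedly take the viewpoint that sees the most
    # still-uncovered hull points, until every point is in view at least once.
    while uncovered.any():
        gains = (visible & uncovered).sum(axis=1)
        best = int(np.argmax(gains))
        if gains[best] == 0:
            break  # remaining points are not visible from any candidate
        chosen.append(best)
        uncovered &= ~visible[best]

    if not chosen:
        return []

    # Nearest-neighbor ordering: a crude way to shorten the travel distance
    # between the chosen viewpoints.
    tour = [chosen.pop(0)]
    while chosen:
        last = viewpoints[tour[-1]]
        dists = np.linalg.norm(viewpoints[chosen] - last, axis=1)
        tour.append(chosen.pop(int(np.argmin(dists))))
    return tour
```

In practice the planner must also respect sensor range and viewing angles, keep the vehicle clear of propellers and rudders, and refine the tour further, which is where the minute or two of optimization that Englot describes comes in.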