Automatic vision-based local positioning, navigation, and seafloor mapping for unmanned submersible vehicles

Shahriar Negahdaripour, Ph.D.
University of Miami

Wednesday, May 3, 2000
3:00 p.m., Pacific Forum

For nearly a decade, research at the Underwater Imaging Lab has aimed at developing vision-based technologies that would enable an unmanned underwater submersible vehicle to function with some level of intelligence (or some degree of autonomy). Of immediate interest is the realization of a number of capabilities, including automatic station keeping, local navigation, building of 2-D visual maps and mosaics, and 3-D shape reconstruction, all of which are extremely valuable in operator-assisted ROV missions. The goal is to relieve the operator of tedious low-level tasks such as continuous manual control to fly the vehicle, to permit high-level interaction between the operator and the vehicle, and to provide a more global picture of the environment than is available in a single live image.

In this talk, I will summarize the most recent results of our work on a number of projects, including automatic station-keeping experiments off Key Biscayne, a real-time mosaic-building system, and photometric stereo for 3-D shape reconstruction. I will also give an overview of ongoing work to improve the accuracy of vehicle self-positioning and local navigation.
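For readers unfamiliar with the last technique: classical Lambertian photometric stereo recovers per-pixel albedo and surface orientation from several images of the same scene lit from different known directions. The sketch below is a minimal, generic illustration of that idea, not the speaker's implementation; the function name, the three-light setup, and the Lambertian assumption are all choices made here for the example.

```python
import numpy as np

def photometric_stereo(intensities, light_dirs):
    """Recover per-pixel albedo and surface normals under the Lambertian
    model I = rho * (L . n), from k >= 3 images with known lighting.

    intensities: (k, npix) array, one row of pixel values per image.
    light_dirs:  (k, 3) array of unit lighting directions.
    """
    # Least-squares solve for G = rho * n at every pixel: L @ G = I.
    G, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)
    albedo = np.linalg.norm(G, axis=0)            # |G| = rho
    normals = G / np.maximum(albedo, 1e-12)       # G / |G| = n
    return albedo, normals.T                      # (npix,), (npix, 3)

# Synthetic check: one pixel with a known normal and albedo.
n_true = np.array([0.0, 0.6, 0.8])
rho_true = 0.5
L = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
L /= np.linalg.norm(L, axis=1, keepdims=True)
I = (rho_true * L @ n_true)[:, None]              # shape (3, 1)
albedo, normals = photometric_stereo(I, L)
```

With noise-free inputs the least-squares solve recovers the albedo and normal exactly; real underwater imagery would additionally require handling attenuation, scattering, and non-Lambertian reflectance, which this sketch ignores.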


Last updated: December 19, 2000