Automated Video Event Detection
In order to study the distribution and abundance of oceanic animals, MBARI uses high-resolution video equipment on remotely operated vehicles (ROVs). Quantitative video transects (QVTs) supplant traditional net tows for assessing the quantity and diversity of organisms in the water column. QVTs are run from 50 m to 4000 m depth and provide high-resolution data at the scale of individual animals as well as their natural aggregation patterns. However, the current, manual method of analyzing QVTs is labor-intensive and tedious.
We have developed an automated system for detecting marine organisms visible in the videos. Video frames are processed with a neuromorphic selective attention algorithm, and the resulting candidate objects of interest are tracked across frames using linear Kalman filters. Objects that can be tracked successfully over several frames are labeled as potentially “interesting” and marked in the video frames. By marking candidate objects, the system is intended to enhance the productivity of human video annotators and/or cue a subsequent object classification module.
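The per-object tracking step described above can be sketched as a linear Kalman filter with a constant-velocity motion model, tracking a candidate object's (x, y) centroid from frame to frame and labeling the track "interesting" once it survives several consecutive updates. This is a minimal illustrative sketch, not the system's actual implementation; the class name, noise parameters, and hit threshold are all assumptions.

```python
import numpy as np

class CentroidKalmanTracker:
    """Hypothetical sketch: constant-velocity Kalman filter for one object's centroid."""

    def __init__(self, x, y, dt=1.0, process_var=1.0, meas_var=4.0):
        # State vector: [x, y, vx, vy]; only position (x, y) is measured.
        self.state = np.array([x, y, 0.0, 0.0])
        self.P = np.eye(4) * 100.0           # large initial state uncertainty
        self.F = np.array([[1, 0, dt, 0],    # constant-velocity state transition
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],     # measurement model: observe (x, y)
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * process_var     # process noise covariance (assumed)
        self.R = np.eye(2) * meas_var        # measurement noise covariance (assumed)
        self.hits = 1                        # frames in which the object was found

    def predict(self):
        # Project the state and covariance forward one frame.
        self.state = self.F @ self.state
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.state[:2]

    def update(self, zx, zy):
        # Correct the prediction with a new detection at (zx, zy).
        z = np.array([zx, zy])
        y = z - self.H @ self.state                   # innovation
        S = self.H @ self.P @ self.H.T + self.R       # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)      # Kalman gain
        self.state = self.state + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        self.hits += 1

    def is_interesting(self, min_hits=5):
        # Label the track "interesting" once it has been followed
        # successfully over several frames (threshold is an assumption).
        return self.hits >= min_hits
```

For example, feeding the tracker detections of an object drifting steadily across the frame causes the velocity estimate to converge within a few updates, after which the filter can predict where to look for the object in the next frame even if the attention algorithm briefly misses it.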
The continued use of ROVs and the future use of Autonomous Underwater Vehicles (AUVs) for QVTs offer the potential for even more data, perhaps many times what we currently collect and analyze. Hence, we see tremendous benefit in automating portions of the analysis. We also see great benefit in automating the analysis of video from fixed ocean-observatory cameras, where autonomous response to potential events (e.g., panning and zooming to them) and automated processing of largely “boring” (event-sparse) video streams from tens, hundreds, or even thousands of network cameras could be key to making those cameras useful, practical scientific instruments.