The ocean soundscape is a continuously changing mosaic of sounds that originate from living organisms (communication and foraging), natural processes (breaking waves, wind, rain, earthquakes), and human activities (shipping, construction, and resource extraction). Listening to sound in the sea offers a rich exploration of the marine environment, including some of the ways human activities may influence marine life. We record ocean sound using a hydrophone, an underwater microphone.
Being a good listener
Because acoustic information spans a tremendous range of frequencies, we must record sound across a very broad spectrum. Capturing information at the high-frequency end of this range requires that we sample sound very frequently (more than 250,000 times each second). This frequent sampling of sound across a broad spectrum generates a tremendous flow of data.
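The link between "capturing high frequencies" and "sampling very frequently" is the Nyquist criterion: a recorder sampling at rate fs can faithfully represent frequencies only up to fs / 2. A minimal sketch, using an illustrative 256 kHz rate consistent with "more than 250,000 times each second" (not a published specification for this instrument):

```python
# Nyquist criterion: sampling at rate fs captures frequencies up to fs / 2.
# The 256 kHz rate below is an illustrative assumption, not an instrument spec.
sample_rate_hz = 256_000
nyquist_hz = sample_rate_hz / 2
print(nyquist_hz)  # 128000.0 -- well into the ultrasonic range
```

Sampling any slower than twice a signal's frequency would alias that signal, folding it down onto lower frequencies and corrupting the recording.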
Many self-contained marine acoustic recording systems rely on battery power and internal data storage. Such systems are limited in how long they can be deployed and in how long they can record each day. This hydrophone does not have these constraints because it is connected to the MARS cabled observatory, which supplies power from shore and high-speed communication to data storage on shore, thereby enabling the hydrophone to record 24 hours a day for long periods of time.
On July 28, 2015, a digital broadband hydrophone was connected to the MARS cabled observatory. Deployment went smoothly thanks to the ship’s crew and remotely operated vehicle (ROV) pilots. Shortly after the sound of the ROV faded from the MARS node as the ROV ascended to the ship, marine mammal vocalizations became clearly audible in the hydrophone recordings.
This little hydrophone generates big data—about 24 terabytes in one year. Understanding this voluminous and dense data requires a variety of analysis methods, from automated recognition of vocalizations by marine mammals to long-term statistical description of all recorded sounds. Automated methods that sift through the data to detect and classify vocalizations of different species are being developed and applied. These methods will allow us to examine how the presence of different species in the Monterey Bay area varies in relation to changes in the environment.
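The yearly total is easy to sanity-check with back-of-the-envelope arithmetic. Assuming, hypothetically, a 256 kHz sample rate and 24-bit (3-byte) samples (plausible for a broadband hydrophone, but not confirmed specifics of this one), continuous recording works out to roughly 24 terabytes per year:

```python
# Back-of-the-envelope check of the ~24 TB/year figure.
# Sample rate and bit depth are illustrative assumptions,
# not confirmed specifications of this hydrophone.
sample_rate = 256_000        # samples per second (assumed)
bytes_per_sample = 3         # 24-bit audio (assumed)
seconds_per_year = 365 * 24 * 3600
tb_per_year = sample_rate * bytes_per_sample * seconds_per_year / 1e12
print(round(tb_per_year, 1))  # 24.2
```

That is about 768 kilobytes of raw audio every second, around the clock, which is why shore-side storage over the cabled observatory matters.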
In addition to listening to recordings, we can also visually represent the soundscape using a spectrogram. A spectrogram quantifies sound energy as a function of frequency and time. Animation of spectrograms through time enables “soundscape visual browsing.”
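A spectrogram is built by slicing the audio into short, overlapping windows and measuring the energy in each frequency band of each slice. The pure-Python sketch below (a toy illustration with made-up rates, not the observatory's processing pipeline) computes one via a windowed discrete Fourier transform and confirms that a synthetic tone lands in the expected frequency bin:

```python
import cmath
import math

def spectrogram(samples, rate, win=64, hop=32):
    """Magnitude spectrogram via a windowed DFT (pure-Python sketch)."""
    frames = []
    for start in range(0, len(samples) - win + 1, hop):
        # Apply a Hann window to reduce spectral leakage at slice edges
        seg = [samples[start + n] * (0.5 - 0.5 * math.cos(2 * math.pi * n / (win - 1)))
               for n in range(win)]
        # DFT magnitudes for the non-negative frequency bins
        mags = [abs(sum(seg[n] * cmath.exp(-2j * math.pi * k * n / win)
                        for n in range(win)))
                for k in range(win // 2 + 1)]
        frames.append(mags)
    bin_hz = rate / win  # frequency width of each bin
    return frames, bin_hz

# Synthetic 1 kHz tone sampled at 8 kHz (illustrative values only)
rate = 8000
tone = [math.sin(2 * math.pi * 1000 * t / rate) for t in range(512)]
frames, bin_hz = spectrogram(tone, rate)
peak_bin = max(range(len(frames[0])), key=lambda k: frames[0][k])
print(peak_bin * bin_hz)  # 1000.0 -- the tone's energy lands in the right bin
```

Stacking these frames side by side, with time on one axis and frequency on the other, gives the familiar spectrogram image; animating successive frames through time is what enables the "soundscape visual browsing" described above.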
This MBARI team is working in collaboration with the Naval Postgraduate School, Stanford Hopkins Marine Station, Moss Landing Marine Laboratories, University of California at Santa Cruz, and the Monterey Bay National Marine Sanctuary.