Google DeepMind has developed an AI model designed to decipher dolphin communication.
The model, DolphinGemma, has 400 million parameters and was trained on data from the Wild Dolphin Project (WDP), a non-profit organization that studies the behavior of Atlantic spotted dolphins.
The neural network is based on Gemma, Google’s family of open AI models. It can generate “dolphin-like” sound sequences and is efficient enough to run on a smartphone.
This summer, WDP plans to use the Pixel 9 smartphone to run a platform that generates synthetic dolphin vocalizations and listens for the animals’ sounds in order to identify an appropriate “response.”
“By identifying recurring sound patterns, clusters, and reliable sequences, the model can assist researchers in uncovering hidden structures and potential meanings in natural dolphin communication. This task previously required enormous human effort. Ultimately, these patterns, supplemented by synthetic sounds created by researchers to denote objects dolphins enjoy playing with, could establish a shared vocabulary for interactive communication with the mammals,” Google wrote in a blog post.
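To make that idea concrete, here is a minimal sketch of this kind of pattern discovery, not DolphinGemma itself: it turns recordings into spectral features, clusters them into recurring “sound types,” and counts which short sequences of types repeat. The data directory, cluster count, and other parameters are illustrative assumptions.

```python
# A toy illustration of pattern discovery in vocalizations, NOT DolphinGemma:
# extract spectral features, cluster them into "sound types," and count
# recurring short sequences of those types. Paths and parameters are
# hypothetical assumptions for the sketch.
import glob
from collections import Counter

import librosa
import numpy as np
from sklearn.cluster import KMeans

N_SOUND_TYPES = 32  # assumed number of distinct sound categories

frames = []
for path in glob.glob("recordings/*.wav"):  # hypothetical data directory
    audio, sr = librosa.load(path, sr=48_000)
    # MFCCs summarize the spectral shape of each short frame of audio
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20, hop_length=512)
    frames.append(mfcc.T)  # shape: (n_frames, 20)

X = np.concatenate(frames)
labels = KMeans(n_clusters=N_SOUND_TYPES, n_init=10, random_state=0).fit_predict(X)

# Frequent label trigrams hint at the "reliable sequences" the researchers
# describe; real analysis would be far more careful about segmentation.
trigrams = Counter(zip(labels[:-2], labels[1:-1], labels[2:]))
for seq, count in trigrams.most_common(10):
    print(f"sound-type sequence {seq}: {count} occurrences")
```

A real system would segment whistles before clustering and control for recordings running together, but the pipeline shape (features, clusters, sequence statistics) is the part the quote is describing.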
WDP previously used the Pixel 6 for a similar task. The Pixel 9 will let researchers run AI models and pattern-matching algorithms at the same time.
“The use of Pixel smartphones significantly reduces the need for custom equipment, improves system repairability, lowers power consumption, and decreases the cost and size of the device—key advantages for field research in the open ocean,” noted Google.
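The pattern-matching half of such a pipeline can be pictured as simple template matching: score an incoming sound against a stored vocabulary of synthetic whistles and report the associated object when one matches well enough. The sketch below is a hypothetical illustration, not WDP’s actual field software; the whistle names, files, and threshold are assumptions.

```python
# Hypothetical template matching, not WDP's actual software: compare an
# incoming clip's spectrogram against stored reference whistles.
import numpy as np
from scipy import signal

SR = 48_000  # assumed sample rate

def norm_spec(audio: np.ndarray) -> np.ndarray:
    _, _, spec = signal.spectrogram(audio, fs=SR, nperseg=1024)
    return spec / (np.linalg.norm(spec) + 1e-9)  # unit norm so scores compare

# Assumed vocabulary: synthetic whistles tied to play objects (clips are
# pre-trimmed to one fixed length so the spectrograms line up).
TEMPLATES = {
    "scarf": norm_spec(np.load("whistle_scarf.npy")),
    "sargassum": norm_spec(np.load("whistle_sargassum.npy")),
}

def best_match(incoming: np.ndarray, threshold: float = 0.6):
    """Return the best-matching whistle name, or None if nothing is close."""
    spec = norm_spec(incoming)
    scores = {name: float((spec * ref).sum()) for name, ref in TEMPLATES.items()}
    name, score = max(scores.items(), key=lambda kv: kv[1])
    return name if score >= threshold else None
```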
In March, Google announced a range of AI products and features for the healthcare sector, including TxGemma, a family of “open” models for drug discovery.
Earlier still, the company had released Gemma itself, an open AI model built for responsible AI development.
