Inspired by the workings of a bat’s ear, Rolf Mueller, a professor of mechanical engineering at Virginia Tech, has created technology that determines where a sound comes from.

Mueller’s development works from a simpler and more accurate model of sound location than previous approaches, which have traditionally been modeled after the human ear. His work marks the first new insight for determining sound location in 50 years.

The findings were published in Nature Machine Intelligence by Mueller and a former Ph.D. student, lead author Xiaoyan Yin.

“I have long admired bats for their uncanny ability to navigate complex natural environments based on ultrasound and suspected that the unusual mobility of the animal's ears might have something to do with this,” said Mueller.

A new model for sound location

Bats navigate as they fly by using echolocation, judging how close an object is by continuously emitting sounds and listening for the echoes. Ultrasonic calls are emitted from the bat’s mouth or nose, bounce off elements of the environment, and return as echoes. Bats also gain information from ambient sounds. One cue they exploit is the Doppler effect: the shift in a sound’s frequency caused by motion between the sound’s source and the listener.
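
As a rough, back-of-the-envelope illustration of that frequency shift (the call frequency and speed below are illustrative numbers, not values from the study), an echo off an object approaching at speed v comes back shifted to approximately

```latex
f_{\text{echo}} \approx f_0 \left( 1 + \frac{2v}{c} \right)
```

where f_0 is the emitted frequency and c is the speed of sound, about 343 meters per second in air. An object closing at 5 meters per second would shift an 80 kilohertz call upward by roughly 80 kHz × 2 × 5 / 343, or about 2.3 kHz.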

Sound localization works differently in human ears. A 1907 discovery showed that humans can locate sounds by virtue of having two ears, receivers that relay sound data to the brain for processing. Operating on two or more receivers makes it possible to tell the direction of a sound that contains only one frequency, a situation familiar to anyone who has heard a car horn pass by. The horn produces essentially a single frequency, and the two ears work together with the brain to build a map of where the car is going.
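
A minimal sketch of this two-receiver idea, written in Python (the microphone spacing and time delay are made-up numbers for illustration, not values from the study):

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air at roughly room temperature
MIC_SPACING = 0.20       # m between the two receivers (illustrative value)

def direction_from_delay(time_delay_s):
    """Far-field angle of arrival, in degrees from straight ahead,
    for a given arrival-time difference between the two receivers."""
    # The path-length difference between the receivers is spacing * sin(angle).
    sin_angle = SPEED_OF_SOUND * time_delay_s / MIC_SPACING
    return np.degrees(np.arcsin(np.clip(sin_angle, -1.0, 1.0)))

# A single-frequency horn arriving 0.29 ms earlier at one receiver ...
print(round(direction_from_delay(0.29e-3), 1))  # ... comes from about 30 degrees off center
```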

A 1967 discovery then showed that even when the number of receivers is reduced to one, a single human ear can locate sounds, provided they contain several different frequencies. In the case of the passing car, this might be the horn paired with the roar of the engine.
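
A toy sketch of the single-receiver, multiple-frequency idea (the directional patterns below are invented for illustration; they are not measured ear responses): because the ear filters each frequency differently depending on direction, the level difference between two frequency components of a sound points back to the direction it came from.

```python
import numpy as np

angles = np.linspace(-90, 90, 181)    # candidate directions, in degrees

# Invented direction-dependent sensitivities of one "ear" at two frequencies.
gain_low = 0.6 + 0.4 * np.cos(np.radians(angles - 20))
gain_high = 0.6 + 0.4 * np.cos(np.radians(angles + 40))

# The cue a single receiver can measure: level difference between the components.
cue_table_db = 20 * np.log10(gain_high / gain_low)

def estimate_direction(measured_cue_db):
    """Return the candidate direction whose predicted cue best matches the measurement."""
    return angles[np.argmin(np.abs(cue_table_db - measured_cue_db))]

# A sound arriving from +40 degrees (index 130) produces a cue the lookup recovers.
true_cue = 20 * np.log10(gain_high[130] / gain_low[130])
print(estimate_direction(true_cue))   # 40.0
```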

According to Mueller, the workings of the human ear have inspired past approaches to pinpointing sound location, which have used pressure receivers, such as microphones, paired with either the ability to collect multiple frequencies or the use of multiple receivers. Building on a career of research with bats, Mueller knew that their ears were much more versatile sound receivers than the human ear. This prompted his team to pursue a design that needs only a single frequency and a single receiver, rather than multiple receivers or frequencies.

Creating the ear

As they worked from the one-receiver, one-frequency model, Mueller’s team sought to replicate a bat’s ability to move its ears.

They created a soft synthetic ear inspired by horseshoe and Old World leaf-nosed bats and attached it to a string and a simple motor, timed so that the ear fluttered just as an incoming sound arrived. These particular bats have ears that perform a complex transformation of incoming sound waves, so nature’s ready-made design was a logical choice. That transformation starts with the shape of the outer ear, called the pinna, which changes shape as it moves, presenting a series of different receiving surfaces that channel sounds into the ear canal.
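
A simplified numerical picture of why the motion matters (this uses plain amplitude modulation as a stand-in for the true motion-induced Doppler modulation, and all numbers are invented): when the ear’s sensitivity changes rapidly while a single-frequency sound arrives, the received signal picks up sidebands whose strength depends on the sound’s direction.

```python
import numpy as np

fs = 200_000                              # sample rate, Hz
t = np.arange(0, 0.02, 1 / fs)            # 20 ms of signal
tone = np.sin(2 * np.pi * 40_000 * t)     # incoming single-frequency sound, 40 kHz

FLUTTER_RATE = 150                        # Hz, rate of the ear's motion (illustrative)

def received(direction_deg):
    # Invented rule: modulation depth grows the farther off-axis the sound is.
    depth = 0.2 + 0.6 * abs(direction_deg) / 90
    gain = 1 + depth * np.sin(2 * np.pi * FLUTTER_RATE * t)   # fluttering sensitivity
    return gain * tone

# Sounds from 10 and 70 degrees leave different sideband-to-carrier ratios near 40 kHz.
freqs = np.fft.rfftfreq(len(t), 1 / fs)
for d in (10, 70):
    spectrum = np.abs(np.fft.rfft(received(d)))
    carrier = spectrum[np.argmin(np.abs(freqs - 40_000))]
    sideband = spectrum[np.argmin(np.abs(freqs - (40_000 + FLUTTER_RATE)))]
    print(d, round(sideband / carrier, 2))   # larger ratio for the more off-axis sound
```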

[Video: sound source tracking demonstration]

The biggest challenge Yin and Mueller faced with their single-receiver, single-frequency model was interpreting the incoming signals. How do you turn incoming sound waves into data that is readable and interpretable?

The team placed the ear above a microphone, creating a mechanism similar to that of a bat. The fast motions of the fluttering pinna created Doppler shift signatures that were clearly related to the direction of the source, but not easily interpretable because of the complexity of the patterns. To deal with this, Yin and Mueller turned to a deep neural network: a machine-learning approach that mimics the many layers of processing found in the brain. They implemented such a network on a computer and trained it to provide the source direction associated with each received echo.
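
A rough sketch of that learning step, written in Python (the features, network size, and training data below are placeholders standing in for the authors’ actual measurements and network):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def synthetic_echo_features(angle_deg, n_features=32):
    """Stand-in for a real measurement: a feature vector whose pattern
    depends on the direction the sound came from, plus a little noise."""
    phases = np.linspace(0, np.pi, n_features)
    clean = np.cos(phases * (1 + angle_deg / 90.0))
    return clean + 0.05 * rng.standard_normal(n_features)

# Training set: many simulated receptions labeled with their true direction.
train_angles = rng.uniform(-60, 60, size=2000)
train_features = np.stack([synthetic_echo_features(a) for a in train_angles])

# Small fully connected network mapping received-signal features to direction.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(train_features, train_angles)

# Query with a reception from a direction the network has not seen.
test_angle = 25.0
predicted = model.predict(synthetic_echo_features(test_angle).reshape(1, -1))[0]
print(f"true {test_angle:.1f} deg, predicted {predicted:.1f} deg")
```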

To test the performance of the system consisting of the ear and machine learning, they mounted the ear on a rotating rig that also included a laser pointer. Sounds were then emitted from a loudspeaker that was placed in different directions relative to the ear.

Once the direction of the sound was determined, the control computer would rotate the rig so that the laser pointer hit a target attached to the loudspeaker, pinpointing the direction to within half a degree. Human hearing typically determines location only to within about 9 degrees, even when working with two ears, and the best existing technology has achieved accuracy within 7.5 degrees.

A fluttering bat ear is able to pinpoint the origin of a sound with higher accuracy than both current technology and the human ear.

“The capabilities are completely beyond what is currently in the reach of technology, and yet all this is achieved with much less effort,” said Mueller. “Our hope is to bring reliable and capable autonomy to complex outdoor environments, including precision agriculture and forestry; environmental surveillance, such as biodiversity monitoring; as well as defense and security-related applications.”

- Written by Alex Parrish
