Thinking Tech

Why hearing aids fail (and what will make them better)


Why hearing aids are frustratingly imperfect and continue to be a tough challenge for technology to solve.

Hearing aids have improved greatly in recent years, but they remain a surprisingly frustrating experience for new wearers. Today's hearing aids are tiny, nearly invisible in fact, and they amplify sound across a wider range of frequencies than ever before. But they have not solved the problem of amplifying the sounds we just don't want to hear. For a new wearer, the crumpling of a paper bag on the other side of a room can sound like a jackhammer.

To explain why this is such a huge challenge for technology, SmartPlanet spoke with an expert in psychoacoustics, the study of how the brain perceives sound: Andrew J. Oxenham, a psychologist and hearing expert at the University of Minnesota.

SmartPlanet: Why can’t hearing aids work like glasses, and simply grant us healthy hearing?

Andrew Oxenham: Glasses compensate for lenses that aren’t focusing properly. With the ear it’s different. The ear works by analyzing sound and breaking it into different frequencies. And with many forms of hearing impairment it’s this frequency selectivity that is impaired.

SP: What does that mean?

AO: What that means is that the ear doesn't filter sound as well as it did before. Instead of having very sharp tuning that separates out different frequencies, the filtering becomes much broader, and there is no real way of compensating for that. You can't sharpen the filters, and you can't pre-process the sound so that it arrives sharp. It's like a broken TV set: you can process the signal going into the TV as much as you like, but you still won't get a clear picture at the output.
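This loss of frequency selectivity can be illustrated with a toy model. The sketch below is mine, not Oxenham's: the "auditory filters" are idealized brick-wall band-passes, and all frequencies and bandwidths are made-up illustrative values. A sharply tuned filter centered on a target tone rejects a nearby competing tone; a broadened filter lets the competitor leak through, and no amount of pre-processing at the input can restore the lost separation.

```python
import numpy as np

def bandpass_output_power(signal, fs, center, bandwidth):
    """Toy 'auditory filter': an ideal band-pass applied in the
    frequency domain by keeping only the FFT bins that lie within
    +/- bandwidth/2 of the center frequency."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    spec[np.abs(freqs - center) > bandwidth / 2] = 0
    out = np.fft.irfft(spec, len(signal))
    return (out ** 2).mean()

fs = 16000
t = np.arange(fs) / fs
target = np.sin(2 * np.pi * 1000 * t)   # tone at the filter's center
masker = np.sin(2 * np.pi * 1300 * t)   # nearby competing tone
mixture = target + masker

# Healthy ear: 200 Hz-wide filter passes the target, rejects the masker.
sharp = bandpass_output_power(mixture, fs, center=1000, bandwidth=200)
# Impaired ear: 800 Hz-wide filter lets the masker leak through too.
broad = bandpass_output_power(mixture, fs, center=1000, bandwidth=800)
```

With the sharp filter the output power is essentially that of the target alone; with the broadened filter it roughly doubles, because the masker now passes through as well and the two tones can no longer be separated.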

SP: So have hearing aids improved recently?

AO: Recent hearing aids have made a lot of progress, like being able to present frequencies of up to 6,000 Hz, as opposed to being limited to about 4,000 Hz.

SP: How?

AO: By using digital signal processing, and a lot more computing power on a lot smaller chip.

SP: You mean it's due to Moore's Law?

AO: Yes, that's right.

SP: I didn't realize that.

AO: Well, that's not all. Another big leap has been directional hearing. The aids can focus their microphones toward the front and filter out much of the sound coming from the sides and back. It's a fairly simple technique, but it involves signal processing that wasn't possible with earlier hearing aids.
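The directional trick Oxenham describes can be sketched as a classic delay-and-subtract beamformer with two microphones. This is a toy illustration, not the interview's specifics: the sample rate, the two-sample inter-mic delay (roughly 1.4 cm of spacing at 48 kHz), and the signals are all assumed values.

```python
import numpy as np

DELAY = 2  # inter-mic acoustic travel time in samples (assumed: ~1.4 cm at 48 kHz)

def cardioid_beamform(front, rear, delay=DELAY):
    """Delay-and-subtract beamformer.

    The rear-mic signal is delayed by the acoustic travel time between
    the two microphones and subtracted from the front-mic signal.
    A wave arriving from behind hits the rear mic first, so after the
    delay the two copies line up and cancel; a wave arriving from the
    front does not line up and passes through."""
    delayed_rear = np.concatenate([np.zeros(delay), rear[:-delay]])
    return front - delayed_rear
```

Subtracting one microphone from a delayed copy of the other carves a null toward the back of the head, which is why the speaker in front survives while sound from behind is attenuated.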

SP: I understand from hearing aid wearers that ambient sound is horribly distracting for them. A paper bag being crumpled across a room sounds screechingly loud. Why?

AO: This is a common complaint from people who have recently started wearing a hearing aid. Their hearing has deteriorated over time, often without their being completely aware of it. Then they are fitted with a hearing aid and hear sounds they had become used to not hearing. The sounds are suddenly annoying and distracting. It's a contrast effect: for a long time those sounds were inaudible, and now they are amplified, audible, and distracting.

SP: So it's a perception thing? One rooted in the brain's ability to analyze sound?

AO: It's a complex interaction between the ear and the brain. The ear sends signals up to the brain; the brain does an awful lot of processing on top of that and then sends signals back down to the ear. These signals change the way the ear accepts input. This is partly why hearing aids are not perfect: the hearing aid is not part of that natural feedback loop. With current aids there is no way for the brain to interface with the hearing aid directly and change its characteristics.

SP: To deal with background noise there are things called “hearing loops.” What are they?

AO: These are systems that are set up within places like concert halls and churches that interface directly with the hearing aid. It's like sending a radio signal to the hearing device.

SP: How?

AO: The idea is that the hearing loop picks up the sound directly from the microphone in front of a speaker. Say you're in a church and the priest is talking into a microphone. Normally we hear the sound acoustically, through the air. A regular hearing aid's microphone will pick up that airborne sound, but along with all the background noise and reverberation in the church. A hearing loop sends the signal directly from the microphone to the ear, bypassing the acoustics of the building itself. So the ear gets a much clearer, cleaner version of what's coming into the microphone.

SP: Where else could we use hearing loops?

AO: Concert halls, movie theaters.

SP: Why are we now fitted with two hearing aids, as opposed to one? And how does this help our hearing?

AO: It's only recently that people have routinely been fitted with two hearing aids; often people got only one. The way we localize sound, to know where it is coming from, is by the brain comparing the signals arriving at the two ears. If a sound is slightly louder on one side, the brain knows it is coming from that side. But more important is the difference in time of arrival between the two ears. Think of a sound coming from the right: it reaches your right ear a little before it reaches your left ear. We are talking about millionths of a second, but your brain needs two ears to make that distinction. If you have only one, you lose the ability to localize sound. Localization is also an important part of filtering out noise. The brain can determine that there is speech right in front and background noise behind and to the side, and it can use those differences in location to make the speech more intelligible.
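The time-of-arrival cue Oxenham describes, the interaural time difference (ITD), can be estimated by finding the lag at which the left- and right-ear signals best line up. The sketch below is a toy illustration under assumed conditions (a 48 kHz sample rate and a simulated click offset by 20 samples, about 417 microseconds); it is not how any particular hearing aid works.

```python
import numpy as np

FS = 48000  # sample rate in Hz (assumed)

def estimate_itd(left, right, fs=FS, max_lag=40):
    """Estimate the interaural time difference by brute-force
    cross-correlation: try every lag and keep the one where the two
    ear signals are most similar. A positive result means the sound
    reached the right ear first. (np.roll wraps around, which is fine
    here because the test signal sits well away from the edges.)"""
    lags = range(-max_lag, max_lag + 1)
    best = max(lags, key=lambda k: np.dot(left[max_lag:-max_lag],
                                          np.roll(right, k)[max_lag:-max_lag]))
    return best / fs  # in seconds

# Simulated click arriving from the right: the right ear hears it
# 20 samples (~417 microseconds) before the left ear.
click = np.zeros(2000)
click[1000] = 1.0
right_ear = click
left_ear = np.roll(click, 20)
```

With one ear, there is no second signal to compare against, so this computation, and the localization it supports, is simply unavailable, which is the argument for fitting both ears.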

SP: So the biggest technical challenge is developing hearing aids that can focus on what we really need and want to listen to. How are we going to solve this problem?

AO: We are hoping that through even more sophisticated signal-processing schemes we'll be able to work on artificial source segregation. That means analyzing the incoming signal, figuring out what is speech and what isn't, and presenting only the wanted signal to the ear.

SP: How are you going to distinguish between a wanted signal and just plain noise?

AO: The assumption is that what you really want to listen to is speech. There are certain acoustical properties of speech that we can recognize, and certain acoustical properties of noise that differ from speech. If you can get your algorithm to distinguish between the two, that helps you filter out the unwanted signal.
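One acoustical property that separates speech from steady noise is amplitude modulation: speech energy rises and falls at the syllable rate of a few hertz, while stationary noise has nearly constant energy. The sketch below is my own crude illustration of that single cue, not the algorithm Oxenham's field actually uses; the frame size, sample rate, and test signals are all assumed.

```python
import numpy as np

def modulation_score(signal, fs=16000, frame_ms=20):
    """Crude speech-vs-noise cue: split the signal into short frames,
    compute each frame's energy, and return the coefficient of
    variation of those energies. Speech-like signals, whose energy
    fluctuates at the syllable rate, score high; steady noise, whose
    frame energies are nearly constant, scores low."""
    n = int(fs * frame_ms / 1000)
    frames = signal[:len(signal) // n * n].reshape(-1, n)
    energy = (frames ** 2).mean(axis=1)
    return energy.std() / (energy.mean() + 1e-12)

# Illustrative signals (assumed, not real recordings):
fs = 16000
t = np.arange(fs) / fs
# A 200 Hz tone modulated at 4 Hz, mimicking the syllable rhythm of speech.
speechlike = np.sin(2 * np.pi * 200 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
# Steady background noise with roughly constant energy.
noise = np.random.default_rng(0).standard_normal(fs) * 0.1
```

A real system would combine many such cues, but even this one statistic cleanly separates the modulated "speech" from the stationary noise.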

SP: Could a brain-computer interface be part of the hearing aid systems of the future?

AO: That could possibly be a direction for the future: a hearing aid that taps into brain responses to pick up the signal the person wants to pay attention to.

SP: When are we going to solve this problem of distinguishing a wanted sound from noisy ambient background?

AO: It's an ongoing process of incremental steps, and we will continue to see improvements over the next 15 years.


Christie Nicholson

Contributing Writer

Christie Nicholson produces and hosts Scientific American's podcasts 60-Second Mind and 60-Second Science and is an on-air contributor for Slate, Babelgum, Scientific American, Discovery Channel and Science Channel. She has spoken at MIT/Stanford VLAB, SXSW Interactive, the National Science Foundation, the National Research Council, the Space Studies Board and Brookhaven National Laboratory. She holds degrees from the Columbia University Graduate School of Journalism and Dalhousie University in Canada. She is based in New York.