We’ve all been to an office or a café where there was too much noise to either concentrate or hear a conversation properly. But now a new type of sound mapping software based on human hearing has been developed that will help architects design out unwanted noise from buildings that are being planned.
The research has been funded by the Engineering and Physical Sciences Research Council and led by Professor John Culling from Cardiff University. The software generates audibility maps of proposed room designs, showing areas where conversations would be hard to follow if the room were busy. Professor Culling explains more.
Professor John Culling [JC]
So what we have is some software which enables us to predict the intelligibility of speech against interfering noise sources in any kind of room and we can actually map the room and find out which parts of the room would be easy to hear speech in and which parts would be more difficult.
Something that we suspect may happen in rooms is that there are hot spots where it is particularly difficult to understand speech, and it would be nice to be able to predict them from the plan and know whether or not you’re going to have this sort of thing happening in a room once it has actually been built.
So the first thing that we need to do is to understand how the auditory system works. So we need to understand exactly what it is that enables people to understand speech in background noise. And that’s why we do most of our experiments doing things like measuring people’s ability to detect tones, or to understand speech in noise, under different circumstances.
Once you’ve got a really clear understanding of how that works, then you are able to create a model which will make predictions for any other situation that you might encounter, and it’s now pretty clear that, for understanding speech in noise, there are two mechanisms which are important in determining people’s performance.
One is the relative level of the sound at the two ears, which enables people to do a thing called better-ear listening, and that’s simply a matter of listening with one ear or the other. The second is binaural unmasking. Binaural unmasking is a bit more clever: it relies on the actual timing of sounds as they pass across your head from one ear to the other. If that timing is different for the signal, for the speech, and for the noise, then the auditory system is able to understand that speech, or detect a tone, better than if they are the same, and they would be the same if the sounds were coming from the same direction.
What the research team have done is develop a mathematical equation that represents this binaural unmasking aspect of human hearing. Professor Culling says that it looks at how people take in sound through both ears as it travels round busy rooms and how noise sources are affected by each other.
In terms of what happens in the space around the head, what happens with binaural unmasking is that if the noise source and the speech are coming from the same place, then it will provide you with no advantage, because the acoustics are the same at the two ears. However, if they are separated in space, so that you have slightly different acoustics at each ear, then this binaural unmasking effect can kick in. So what the equation is doing is looking at the acoustic features that are apparent when you separate two sound sources at the two ears. It is then transforming those parameters into a number that goes from zero, when the sound sources are in the same place, to some other value when they are in different places.
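The interview does not give the equation itself, so the following is only an illustrative sketch of the idea described: the unmasking benefit is zero when the speech and the noise come from the same direction, and grows as their interaural timing differs. The frequency, the 12 dB ceiling, and the cosine form are assumptions for the toy model, not Professor Culling’s published equation.

```python
import math

def binaural_unmasking_gain_db(itd_target_s, itd_noise_s, freq_hz=500.0):
    """Toy model (not the published equation): the benefit grows with
    the mismatch between the target's and the noise's interaural
    timing, and is zero when both sources share a direction."""
    # Convert each source's interaural time difference to a phase at freq_hz
    phase_target = 2 * math.pi * freq_hz * itd_target_s
    phase_noise = 2 * math.pi * freq_hz * itd_noise_s
    # Misalignment ranges 0..2: 0 when phases match, 2 when opposite
    misalignment = 1 - math.cos(phase_target - phase_noise)
    max_gain_db = 12.0  # assumed ceiling, roughly the size of reported unmasking effects
    return max_gain_db * misalignment / 2

# Co-located sources give no advantage:
print(binaural_unmasking_gain_db(0.0, 0.0))    # 0.0
# A noise offset by a 1 ms interaural delay at 500 Hz gives the full benefit:
print(binaural_unmasking_gain_db(0.0, 0.001))
```

The key property matching the description above is that the output is exactly zero for co-located sources and positive otherwise.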
The particular thing that’s unique about our work is that we’ve created a model that goes very fast. It’s very computationally efficient, and that means that we can run it on hundreds or thousands of situations. We can map whole rooms, so we can take a situation where we say there is a target source of speech in one place and we’ve got some interfering noise coming from some other place, or maybe there are several interfering noises, and we can test what would happen if there were a listener at every different place in the room. So we create a map of how intelligible that speech would be against that noise at every different place in that environment.
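The mapping idea can be sketched in a few lines. This is a deliberately simplified stand-in, assuming free-field inverse-square attenuation only: the real software also models room reflections and the binaural mechanisms described above, and its actual interface is not public here.

```python
import math

def snr_map(room_w, room_h, speech_pos, noise_pos, step=1.0):
    """Sketch of an intelligibility map: signal-to-noise ratio in dB
    at each listener position on a grid. Free-field 1/r decay only;
    the real model also handles reflections and binaural hearing."""
    def level_db(src, listener):
        d = max(math.dist(src, listener), 0.1)  # clamp to avoid log(0)
        return -20 * math.log10(d)              # 1/r amplitude falloff in dB
    grid = {}
    for iy in range(int(room_h / step) + 1):
        for ix in range(int(room_w / step) + 1):
            pos = (ix * step, iy * step)
            grid[pos] = level_db(speech_pos, pos) - level_db(noise_pos, pos)
    return grid

# A talker in one corner and a noise source in the opposite corner:
m = snr_map(4.0, 3.0, speech_pos=(0.0, 0.0), noise_pos=(4.0, 3.0))
# Positions near the talker get a positive SNR (speech easy to hear);
# positions near the noise source get a strongly negative one.
```

Because each grid point is evaluated independently, a fast model like the one described can sweep a whole room, or many candidate designs, cheaply.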
In addition to helping improve the listening environment of busy social areas the research could also help in the future developments of hearing aids and cochlear implants.
One of the serendipitous things that has come out of doing this research is that we have started to get a new insight into what it’s like to only have one ear, which to most people may not seem obviously connected with cochlear implants and hearing aids, but certainly for many years people tended to be fitted with only one hearing aid, and today people tend to be fitted with only one cochlear implant.
I think bilateral hearing aids are becoming much more common now, but bilateral cochlear implants are not, they tend only to be given to children.
What we can see in our maps of rooms is what happens if you take one of the ears away. It’s quite easy to make a one-eared map once you can do this. And when you have got a one-eared map you can see that there are large areas of the room which, if you had two ears, would be quite accessible, and you could stand there and understand what’s going on; but if you take away one ear you can’t stand there anymore and you’ve got to move somewhere else. I think a lot of people will be familiar with that scenario of having to move somewhere else to be able to hear what is going on.
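The one-eared comparison follows directly from better-ear listening: with two ears you effectively get the better of two signal-to-noise ratios, with one ear you are stuck with whatever that ear receives. The sketch below assumes the same free-field simplification as before; the ear spacing of 0.18 m is an illustrative head width, not a figure from the research.

```python
import math

def ear_snr_db(speech, noise, ear):
    """SNR in dB at one ear position (free-field 1/r decay only,
    a simplified stand-in for the full room-acoustic model)."""
    def level(src):
        return -20 * math.log10(max(math.dist(src, ear), 0.1))
    return level(speech) - level(noise)

def better_ear_snr_db(speech, noise, left_ear, right_ear):
    """Two-eared listening: attend to whichever ear has the better SNR."""
    return max(ear_snr_db(speech, noise, left_ear),
               ear_snr_db(speech, noise, right_ear))

# A listener midway between a talker (left) and a noise source (right),
# with an assumed 0.18 m ear spacing:
speech, noise = (0.0, 0.0), (4.0, 0.0)
left, right = (1.91, 0.0), (2.09, 0.0)
two_ears = better_ear_snr_db(speech, noise, left, right)  # positive SNR
one_ear = ear_snr_db(speech, noise, right)                # negative SNR
# Losing the ear nearer the talker turns a usable listening position
# into a poor one, which is what a one-eared map shows room-wide.
```

Running this comparison at every grid point of a room map would reproduce the effect described: regions accessible with two ears disappear from the one-eared map.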
What would be nice to be able to do is to predict not only what a room would be like for a normally hearing person, and how well they will understand speech in noise in this environment, but also what would happen to a hearing-impaired person, or someone who was relying on a cochlear implant for their hearing: how are they going to do, and is it possible to design a room such that they will find it more amenable than they do at present?
So what does the future hold for the research?
We think what we have at the moment should be marketable as, say, a plug-in to architectural software, which could be used both by architects in order to design rooms and by acoustic consultants when they are trying to ameliorate problems caused by existing rooms.
From our point of view, doing the research, we want to deal with modulated noises and, generally speaking, sounds that are more like speech than what we have dealt with so far, because strictly speaking the software only produces accurate results for continuous noise interferers, and if we want to sell this as a solution for social spaces, we really need to know what is going on with speech, which is a very complicated process.
Professor John Culling from Cardiff University talking about his research to design out unwanted noise from busy buildings.