Burn it Off

my self-inflicted panopticon failed.

3.03.2004

311 - Love Song (from the 50 First Dates soundtrack...GREAT soundtrack)
attempt at writing



so i had to write an essay for my sound & hearing class (BME)....first essay i've had to write in a LONG time...i even had to install microsoft word :) sometimes i amaze even myself. either way, don't feel obligated to read it... just thought it might be interesting (4 pages, double spaced) to some of you....both the content and the fact that i'm a really shitty writer, hah. but you already knew i'm a shitty writer.

without further ado:

*****


Directional Sensitivity

Directional sensitivity is the process by which creatures use both mechanical and neural machinery to locate the source of a sound. According to Masakazu Konishi in “Listening with Two Ears,” this process has only been thoroughly understood in barn owls, which Konishi discusses, and in a fish that senses electric fields (often emitted by other fish of the same species). A lot is known about human hearing, but there are still steps in the process that are not understood.
The strongest sense of directional sensitivity comes from a process called “binaural fusion,” in which your brain combines a single source signal that is received differently by the two ears. As best I can figure, all animals use two ears. Having two ears works a lot like having two eyes for seeing. By seeing one object from two different angles, our brain can deduce depth by putting the two images together. The ears just receive a different kind of information – vibrations in the atmosphere instead of light waves.
I remember an interesting analogy from a physics class I had a long time ago (I don’t even remember where). If we were to try to ‘simulate’ hearing, it would be like making a large pond with two small outlets. If we were only allowed to monitor these two outlets (our “ears”), then hearing would be like throwing a single pebble into the pond and trying to figure out exactly where it hit the water by studying only the ripples in the two little outlets. Imagined visually, it seems like an impossible task; but somehow our brain can take these seemingly lacking bits of data and, indeed, locate the source. The analogy with eyes now seems rather unfair because light is much more tangible (a point that could be argued, but this is not the place).
The simplest algorithm the brain uses to process the information from the two ear signals is to compare time. If a sound source is directly in front of you, by basic symmetry we can tell that since the sound travels radially outwards from the source it will reach both ears at the same time. The brain will compare the two signals, see that they’re identical, and indicate that the source is directly in front of you.
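Just to put rough numbers on this (my own back-of-the-envelope sketch, not anything from class or the reading – the ear spacing and the simple straight-line geometry are both assumptions):

    # rough sketch (my own numbers): how big the timing difference
    # between the two ears can get, using the simple straight-line
    # approximation itd = width * sin(angle) / speed_of_sound
    import math

    HEAD_WIDTH = 0.18        # meters, assumed ear-to-ear distance
    SPEED_OF_SOUND = 343.0   # m/s in air, roughly

    def itd(angle_deg):
        """Extra travel time (seconds) to the far ear for a source
        at angle_deg off of straight ahead (0 = center, 90 = side)."""
        return HEAD_WIDTH * math.sin(math.radians(angle_deg)) / SPEED_OF_SOUND

    for angle in (0, 15, 45, 90):
        print(f"{angle:3d} degrees -> {itd(angle) * 1e6:6.1f} microseconds")

Even at the extreme the difference is only about half a millisecond, which gives a feel for how fine the brain’s timing comparison has to be.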
It gets more complicated when you start to move the source horizontally to different positions, though. It is a common misconception that directionality is a result of different intensities received in the two ears. Oftentimes when an amateur is configuring a stereo system to try to “move” the source (called a phantom image), he/she will pan the signal left or right (I myself am guilty of having tried this). From experience and another class of mine called “Music Signal Processing” (Prof. Alexandros Eleftheriadis), I know that when you do this the phantom image will “flip” abruptly from one side to the other instead of gliding gradually across as most would expect. The reason for this, I think, is that different intensities between the two ear signals do not provide enough information to locate a signal source.
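To be concrete about what “panning” means here (my own toy illustration, assuming a mono signal stored as a numpy array and simple linear gains – real mixers often use fancier pan laws):

    # toy sketch of naive intensity panning – the thing that fails to
    # move the phantom image smoothly (my illustration, assumed mono input)
    import numpy as np

    def pan_linear(mono, position):
        """position: 0.0 = hard left, 0.5 = center, 1.0 = hard right.
        Returns an (n, 2) stereo array. Both channels carry the same
        waveform at the same instants – only the levels differ."""
        left = mono * (1.0 - position)
        right = mono * position
        return np.stack([left, right], axis=1)

Notice that the two channels stay perfectly aligned in time; all panning changes is level, which is exactly the cue the brain mostly ignores when judging left–right position.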
A stereo speaker system is not a natural thing. In nature, the same signal never comes from two different places at once. What this means is that, for example, if you have a sound source to your right, the sound will hit your right ear first, travel across your head, and then hit your left ear. The only difference in intensity will be the sound pressure lost over the short distance across the width of your head. Even with an oscilloscope, this intensity difference might be impossible to detect. Instead what the ears do, as I mentioned before, is monitor the timing of the two ear signals.
When a wave gets transmitted from the outside world to the outer ear, to the middle ear, and finally to the inner ear, the brain receives a train of pulses with some type of periodic pattern based on the physical wave that was received. When you compare these two patterns in the time domain, any difference is readily apparent. Intensity isn’t really a factor anymore, because no matter the intensity of a sound source, the shape of its waveform in the time domain is the same. Therefore, the pulse-train patterns the two inner ears send to the brain will be identical, but if the sound is off center (left or right) one of the patterns will be lagging. This lag is detected in the brain, and I imagine that through childhood development the brain learns which length of lag corresponds to how far left or right the source is relative to our “forward,” which is usually where our eyes are facing.
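If I had to fake that comparison in code (again my own sketch – the brain presumably does something like this with coincidence-detecting neurons, not with arrays), it would amount to cross-correlating the two ear signals and picking the lag where they line up best:

    # sketch: estimate the lag between two ear signals by cross-correlation
    # (my toy version of the comparison, not a model of actual neurons)
    import numpy as np

    def estimate_lag(left, right, sample_rate):
        """Delay (seconds) of the right-ear signal relative to the left.
        Positive means the sound reached the left ear first."""
        corr = np.correlate(right, left, mode="full")
        lag_samples = np.argmax(corr) - (len(left) - 1)
        return lag_samples / sample_rate

    # tiny demo: a click that reaches the left ear 10 samples earlier
    fs = 44100
    click = np.zeros(100)
    click[20] = 1.0
    left = click
    right = np.roll(click, 10)            # delayed copy = the farther ear
    print(estimate_lag(left, right, fs))  # ~0.000227 s, so the source is leftward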
Unfortunately, this timing difference between the two ears is only one tool the brain uses to locate a sound. It is, however, a process seen in almost every animal and one of the strongest indications of direction. From there, directional sensitivity becomes rather animal-specific. For example, the research done on barn owls by Konishi, documented in “Listening with Two Ears,” indicates that owls have another tool that allows them to sense directionality up and down as well.
An owl’s ears face in two different directions: one slightly up and one slightly down. Through a series of tests (which are very impressive), it was found that to sense vertical direction an owl does indeed use intensity differences. I explained earlier why this doesn’t work from left to right, and in humans it won’t work for our vertical perception either. With owls, though, because the ears physically face in two different directions, a sound coming from up high will be heard much more intensely in the ear facing upwards. I imagine an analogy for humans would be to cover one ear. When a sound is up high, an owl might hear it (at the extreme) in only one ear, whereas down low it would hear it only in the other ear. By spreading out the range of differences, it becomes plausible for the brain to sense the magnitude of the differences. Again, in humans the only intensity difference would be the intensity lost in the distance from ear to ear, which would be very minute.
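To put a number on “very minute” (my own toy arithmetic, not anything from Konishi’s paper – the pressure values are invented): intensity differences between ears are usually expressed in decibels, and the comparison is just

    # toy arithmetic (mine): an ear-to-ear intensity difference in decibels
    import math

    def level_difference_db(p_near, p_far):
        """Sound pressure level difference between the two ears, in dB."""
        return 20.0 * math.log10(p_near / p_far)

    print(level_difference_db(2.0, 1.0))   # ~6.0 dB – twice the pressure, owl-style
    print(level_difference_db(1.01, 1.0))  # ~0.09 dB – a human-head-sized difference

The owl’s tilted ears stretch that tiny second number out into something more like the first, which is what makes the cue usable.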
All this raises the question of how humans locate sound sources vertically. It is obvious that we do, because you can always tell when a bird is chirping overhead, a plane is flying above you, or even when you are walking on dry leaves. I unfortunately don’t know the specific process by which this happens, but to the best of my knowledge (from class discussion) it has something to do with our outer ear filtering out certain bands of frequencies when sound hits it from different elevations. The contours of the outer ear would definitely do different things to sounds arriving from different directions, but I can’t even imagine the complexity of the processing involved in extracting directional data from that.
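I can at least sketch the kind of filtering I mean (a made-up toy – the notch frequencies and the one-notch-per-elevation mapping are my inventions; a real outer ear’s response is far messier):

    # made-up sketch of the elevation idea: pretend each elevation carves
    # a notch at one frequency (the 6–10 kHz mapping here is invented)
    import numpy as np
    from scipy.signal import iirnotch, lfilter

    FS = 44100  # sample rate, Hz

    def fake_pinna(sound, elevation_deg):
        """Filter `sound` as if it arrived from elevation_deg above the
        horizon, by notching out one elevation-dependent frequency."""
        notch_hz = 6000.0 + 4000.0 * (elevation_deg / 90.0)
        b, a = iirnotch(notch_hz, Q=10.0, fs=FS)
        return lfilter(b, a, sound)

    # the same white noise "heard" from two elevations now differs
    # only in its spectrum, not its timing or overall level
    noise = np.random.randn(FS)
    from_low = fake_pinna(noise, 0)
    from_high = fake_pinna(noise, 60)

The brain would then have to learn that “a dip around 8 kHz” means “up high,” which hints at why this cue is so much harder to pin down than timing.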



*gunshot*

