http://cercor.oxfordjournals.org/cgi/content/full/11/10/946
I came across this fact researching my final paper for Perception class: the left auditory cortex is known to resolve temporal changes in sound better, while the right auditory cortex resolves tonal and harmonic information more finely. As soon as I read this I thought of the FFT. Could it be that the difference between the hemispheres is related to the auditory cortices having different block sizes? -Chuckk
Okay, I just had this material in a class this semester... and I've done some research. There has been a long-standing debate over functional specialization of the different hemispheres. One hypothesis is that the left hemisphere deals with high-frequency information, and the right deals with low-frequency information. This does not mean that the auditory information is actually divided disjointly between the hemispheres, but as a general theme in lateralization, the left hemisphere resolves higher-frequency information. This hypothesis says that the left hemisphere can represent fine timing information by having a fuller set of frequencies, and that the right hemisphere resolves pitch contour by attending more to slow changes in frequency.
But the neurological side doesn't really square with this hypothesis. It's not something that can be boiled down to a single hypothesis. What's really going on is that subcortical structures in the auditory pathway differentially project to the right and left hemispheres.
In an article by Liégeois-Chauvel (2001, New York Academy of Sciences), intracerebrally recorded evoked potentials (IEPs) were taken during a simple pitch experiment. The IEPs showed that the right hemisphere tonotopically encodes pitch information: there were positionally distinct "signatures", like event-related potentials, that varied with frequency. In the left hemisphere there was no tonotopic organization; the areas of the brain under study responded equally to a wide band of frequencies.
While the coding in the auditory cortex is "like" an FFT, it is NOT an FFT. There are many fine differences, and it's not entirely clear where functions are localized along the auditory pathway. At different stages in the auditory pathway, frequencies can be tonotopically or rate encoded. The cochlea is actually a dynamic organ, in and of itself: it's not a passive organ with a system of resonators and transducers, it's like a bank of critically tuned resonators at unstable equilibrium (like a hair trigger). The cochlea encodes a train of phase-locked pulses, which are transmitted by the auditory nerve, which contains both tonotopically and timing encoded (fine timing) information. The cochlea can represent frequencies with timing encoding (I'm searching my brain for a better term than timing encoding, but not sure) up to around 5 kHz (it seemed a little high to me, when I first read the number...I thought it would be around 1 kHz). As we proceed through the auditory pathway, the ability to encode timing decreases, so that at the cortical level, timing encoding only pertains to the low <100-200 Hz frequencies. Hence, the primary auditory cortex accomplishes things by rate encoding (i.e., the strength of the tone is represented by the frequency of neuron action potentials, not by the timing between them).
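To make the phase-locking / rate-coding distinction concrete, here is a deliberately crude sketch in plain Python/NumPy (a toy, not a physiological model: the 2 ms refractory period is an illustrative round number, and real fibers have jitter that actually degrades phase locking above a few kHz). A "fiber" fires on upward zero crossings of a tone but never faster than its refractory period allows, so its firing rate saturates around 500 spikes/s while the spikes it does produce stay locked to a fixed phase of the cycle.

```python
import numpy as np

def spike_times(freq_hz, dur_s=0.1, fs=44100, refractory_s=0.002):
    """Toy auditory fiber: fire on upward zero crossings of a sine,
    but never sooner than the refractory period after the last spike.
    At low frequencies it marks every cycle; at high frequencies it
    skips cycles but still fires at a fixed phase (phase locking)."""
    t = np.arange(int(dur_s * fs)) / fs
    x = np.sin(2 * np.pi * freq_hz * t)
    crossings = t[1:][(x[:-1] < 0) & (x[1:] >= 0)]  # upward zero crossings
    spikes, last = [], -np.inf
    for tc in crossings:
        if tc - last >= refractory_s:               # refractory gate
            spikes.append(tc)
            last = tc
    return np.array(spikes)

for f in (100, 200, 1000, 5000):
    s = spike_times(f)
    rate = len(s) / 0.1                   # spikes per second (rate code)
    phase = (s * f) % 1.0                 # spike phase within the stimulus cycle
    print(f"{f:5d} Hz: rate {rate:5.0f} sp/s, "
          f"mean ISI {np.diff(s).mean() * 1000:5.2f} ms, "
          f"phase spread {phase.std():.3f}")
```

The printed rate tops out near 500 spikes/s for everything above a few hundred Hz, which is one way of seeing why, as frequency goes up, timing information has to be carried by spike timing across many fibers rather than by the rate of any single one.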
(I can't remember these too well at the moment, I just graduated this spring and my brain's a little fried... there may be more. I just feel like I've left some out) Okay, it goes cochlea -> cochlear nucleus (at the pons/medulla junction) -> superior olivary complex (of the pons) -> lateral lemniscus -> inferior colliculus (of the midbrain) -> medial geniculate nucleus (of the thalamus) -> primary auditory cortex. (The named areas occur on both sides of the brain. There is also, of course, an auditory crossing (decussation) taking place somewhere around the superior olivary complex.)
It's not really clear what each of these things does, or how they contribute to specific functions of the auditory system. There are some differences in encoding at different areas, such as the inferior colliculus. The IC is organized into isofrequency layers; each layer encodes a different band of frequencies (this is more "like" a wavelet transform), and fine timing information here works to accomplish sound localization (which also has to do with eye-position information from the superior colliculus). There may be other information encoded at the level of the IC (I'm leaning strongly towards pitch in the IC). Different kinds of information are projected from the subcortical structures to the cortical levels... it just depends on what kind of processing is taking place, and how it's organized.
But the point (there IS a point) is that there are many different representations of frequency information along the auditory pathway. Each structure named does some kind of processing and passes the result along to the next structure. Ultimately, there are several different ways that sound is encoded up to and including the primary auditory cortex. It is "like" an FFT, but it is also "like" a wavelet transform, and it is also "like" a bank of hair-trigger resonators.
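Since everything above keeps coming back to "like an FFT / like a wavelet transform / like a bank of hair-trigger resonators", here is a minimal resonator-bank sketch in Python (purely illustrative, not a cochlear model: the centre frequencies, Q and channel count are arbitrary, and there is no gain normalization or nonlinearity). A handful of two-pole resonators with log-spaced centre frequencies all see the same input, and the channel that rings hardest gives a crude "place" picture of the input's frequency content.

```python
import numpy as np

FS = 44100.0

def resonate(x, f0, q=8.0, fs=FS):
    """Two-pole resonator centred on f0, a crude stand-in for one
    'place' along the basilar membrane. Higher q = narrower and ringier."""
    bw = f0 / q                           # constant-Q bandwidth, Hz
    r = np.exp(-np.pi * bw / fs)          # pole radius from bandwidth
    w0 = 2.0 * np.pi * f0 / fs
    a1, a2 = 2.0 * r * np.cos(w0), -r * r
    y = np.zeros_like(x)
    y1 = y2 = 0.0
    for n, xn in enumerate(x):
        y[n] = xn + a1 * y1 + a2 * y2
        y1, y2 = y[n], y1
    return y

centres = np.geomspace(100.0, 6400.0, 7)  # log-spaced channels (wavelet-ish)

t = np.arange(int(0.1 * FS)) / FS
x = 0.5 * np.sin(2 * np.pi * 440.0 * t)   # a 440 Hz tone...
x[int(0.05 * FS)] += 1.0                  # ...plus a click at 50 ms

for f0 in centres:
    peak = np.abs(resonate(x, f0)).max()
    print(f"{f0:7.1f} Hz channel: peak response {peak:8.2f}")
```

The channel nearest 440 Hz dominates for the tone, while the click briefly excites every channel, which is roughly the place-versus-timing distinction in miniature.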
Chuck
On 5/24/06, Charles Henry czhenry@gmail.com wrote:
resonators at unstable equilibrium (like a hair trigger). The cochlea encodes a train of phase-locked pulses, which are transmitted by the auditory nerve, which contains both tonotopically and timing encoded (fine timing) information. The cochlea can represent frequencies with timing encoding (I'm searching my brain for a better term than timing encoding, but not sure) up to around 5 kHz (it seemed a little high to me, when I first read the number...I thought it would be around 1 kHz).
I mentioned this a little in my other response. According to my teacher, the nerves carrying impulses from the hair cells are bundled in groups of 10, allowing higher frequencies to be perceived this way. But from about 100 Hz, the same frequencies are also registered by place on the basilar membrane. I guess. By phase-locked pulses, do you mean that the hair-cell trigger happens at the same point in each cycle, so that the resulting pulses correspond to the same phase of each vibration? If you mean something else, it's probably not worth trying to explain. But please do.
(I can't remember these too well at the moment, I just graduated this spring and my brain's a little fried... there may be more. I just feel like I've left some out)
"One had to cram all this stuff into one's mind for the examinations, whether one liked it or not. This coercion had such a deterring effect on me that, after I had passed the final examination, I found the consideration of any scientific problems distasteful to me for an entire year." -Albert Einstein
But the point (there IS a point) is that there are many different representations of frequency information along the auditory pathway. Each structure named does some kind of processing and passes the result along to the next structure. Ultimately, there are several different ways that sound is encoded up to and including the primary auditory cortex. It is "like" an FFT, but it is also "like" a wavelet transform, and it is also "like" a bank of hair-trigger resonators.
But even if it's not an FFT, doesn't it still make sense that you can gather more information about a complex sound if you wait for a larger chunk of it and average it all? If nothing else, a shorter sound would seem less likely to set all the relevant areas of the basilar membrane in motion before it stops. Supposedly one of the downfalls of infrared MIDI guitar pickups is that there has to be a full cycle of every frequency before it knows what notes to use, and the lowest note on a guitar is around 82.5 Hz. The popular Pd solution is overlapping windows, but I'm wondering if there's something to running two simultaneous FFTs with different block sizes, gathering fine resolution in both domains. I guess it depends on whether you're trying to reproduce the sound or just know what it is. For that matter, recording a sound at several different sampling rates would give a finer time-domain picture.

There's probably someone somewhere who would know how to model a cochlea in Pd. I wish it were me. If all of the resonators are moved by every motion, but a resonator's motion only amplifies if it is repeatedly pushed in the direction it's already moving at that part of its cycle, maybe it wouldn't be too hard. Then again, the energy absorbed by that resonator would be removed. Hm. This is the sort of thing I would love to show my commercial-software musician friends, a virtual cochlea modeled in free software. Of course, few of them know what a cochlea is anyway.
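For what it's worth, the "two simultaneous FFTs with different block sizes" idea is easy to try outside Pd. Here's a rough NumPy sketch (block sizes, hop, and the test signal are arbitrary choices, nothing canonical): the same signal, a low guitar E plus a click, is analysed with a 256-sample block and a 4096-sample block, and the printout shows the trade: the long block puts the 82 Hz fundamental within a bin or so, while the short block's bins are 172 Hz apart, but its frames are about 16 times closer together in time.

```python
import numpy as np

FS = 44100

def stft_mag(x, block, hop=None):
    """Magnitude STFT with a Hann window; returns (frames, bins)."""
    hop = hop or block // 2
    win = np.hanning(block)
    frames = [x[i:i + block] * win
              for i in range(0, len(x) - block + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

# test signal: low guitar E (~82.4 Hz) plus a single-sample click midway
t = np.arange(FS) / FS
x = 0.3 * np.sin(2 * np.pi * 82.4 * t)
x[FS // 2] += 1.0

for block in (256, 4096):
    mag = stft_mag(x, block)
    freq_res = FS / block                   # spacing between FFT bins, Hz
    time_res = (block // 2) / FS            # hop, i.e. time between frames
    peak_hz = np.argmax(mag[2]) * freq_res  # strongest bin of an early frame
    print(f"block {block:5d}: {freq_res:6.2f} Hz/bin, "
          f"{time_res * 1000:5.1f} ms between frames, "
          f"tone's strongest bin at {peak_hz:6.1f} Hz")
```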
-Chuckk
Very interesting. You two Chuck(k)s certainly got me thinking over some old stuff, but this left right thing is really fascinating. For one thing it makes me think maybe there's more to localisation than we know about right now.
There has been a long-standing debate over functional specialization of the different hemispheres.
Can only comment from a cog sci perspective, but I remember a general theme in Fodor and some other texts that we often see complementary behaviour, with one hemispherical faculty taking up the loose ends or blind spots left by the other. In that regard all our senses are "stereoscopic", or at least not as discrete per organ as we assume; we like to get two points of view from which perception emerges (e.g. taste is linked to smell).
Since time and frequency acuity are mutually exclusive at the limit, wouldn't it make sense for the brain to evolve to have one side process things using a principle complementary to the other?
What's really going on is that subcortical structures in the auditory pathway differentially project to the right and left hemispheres.
Precisely.
The cochlea is actually a dynamic organ, in and of itself
And it can exhibit very localised and very fast adaptive behaviour almost within itself as a protection mechanism. This is how it achieves such an awesome dynamic range: after lying in a soundproof isolation tank for many hours, subjects report hearing a hissing noise, which we assume is the Brownian motion of air molecules. On the other hand, an impulse at around 130 dB can effectively shut down the auditory system and tense the tympanic membrane with the same kind of reflex as an eyelid closing. I've always found it amazing that our senses are capable of picking up the smallest practically measurable "quanta" of information (e.g. the retina can detect a single photon of light).
tonotopically
What do you mean by that term please? In simple language.
I'm searching my brain for a better term than timing encoding, but not sure
PPM (pulse position modulation/encoding)? Well, that's what I'd call it with my electronic engineer's hat on.
As we proceed through the auditory pathway, the ability to encode timing decreases,
See below vis Shannon, timing is increasingly less relevant higher up the tree.
so that at the cortical level, timing encoding only pertains to the low <100-200 Hz frequencies.
The magic number at ten to twenty milliseconds has fascinated me for ages. It comes up time and again, in Gabor's work, and in Warren, Jones and Lee (all cognitive scientists - check the McAdams and Bigand compilation "Thinking in Sound"). Something very important is happening here. It is the essence of granular synthesis and marks the important point where discontinuity becomes continuity (e.g. grains "fuse" at this point and we have to move to a wavelet-like model where position and frequency become combined). Can you throw any hypothesis at this, speaking as a neurobiologist? Do you think maybe one side of the brain is taking over from the other?
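Just to put rough numbers on that 10-20 ms region (a back-of-the-envelope calculation, not a claim about the auditory system): Gabor's uncertainty relation for Gaussian grains is sigma_t * sigma_f >= 1/(4*pi), so the frequency spread of a grain that short is on the order of the spacing between low musical pitches, which is one way of seeing why this is where "grain" and "pitch" trade places.

```python
import math

# Gabor limit for Gaussian grains: sigma_t * sigma_f >= 1 / (4*pi)
for sigma_t_ms in (5, 10, 20, 50):
    sigma_t = sigma_t_ms / 1000.0
    sigma_f = 1.0 / (4.0 * math.pi * sigma_t)    # best-case frequency spread, Hz
    print(f"grain sigma_t = {sigma_t_ms:2d} ms  ->  sigma_f >= {sigma_f:5.2f} Hz")

# for comparison, the width of one semitone around A2 (110 Hz):
semitone = 110.0 * (2.0 ** (1.0 / 12.0) - 1.0)
print(f"one semitone near 110 Hz is about {semitone:.1f} Hz wide")
```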
Okay, it goes cochlea -> cochlear nucleus (at the pons/medulla junction) -> superior olivary complex (of the pons) -> lateral lemniscus -> inferior colliculus (of the midbrain) -> medial geniculate nucleus (of the thalamus) -> primary auditory cortex.
<Vader voice> Impressive :) I bet you didn't have to look that one up either.
Ultimately, there are several different ways that sound is encoded up to and including the primary auditory cortex. It is "like" an FFT, but it is also "like" a wavelet transform, and it is also "like" a bank of hair-trigger resonators.
It's not defeatist to assume that maybe the brain's internal encoding is quite unlike, and doesn't map elegantly onto, our traditional mathematics at all, not even at the level of fuzzy ANNs. That would make these mathematical "utilities" no less powerful, rather as wave-particle duality doesn't really weaken either of the models. In fact it makes perfect sense: in Shannon's terms, a function of time is being compressed semantically, reduced to an ever smaller set of more salient features.

The crazy thing about this adaptive system is that it's the brain itself which is deciding (in a dynamic fashion) what these salient features are according to context. I remember from somewhere that in 60 ms a human or monkey can tell the difference between a bolt of lightning, a twig snapping and a raindrop on a leaf. All this information must be in the very small attack of a signal. It happens long before the frontal brain can classify and tag the sound with a word. While it's obvious from a survival POV in evolutionary biology, it also indicates that adaptive feedback must be occurring at a very low level. In that sense the ear (cochlea/transducer) can "focus" rather like the eye. I wish I could remember the experiment, but I thought at the time it seemed a bit woolly, involving lots of retrospective timing assumptions. It would be interesting to know if any advances have been made with invasive potential measurement since.

My synthetic sound design work has frequently confirmed this in practice, where very tiny changes to the spectrum or shape of the attack portion from 10-100 ms completely change the emotional response to a sound.
On 5/25/06, padawan12 padawan12@obiwannabe.co.uk wrote:
Very interesting. You two Chuck(k)s certainly got me thinking over some old stuff, but this left right thing is really fascinating. For one thing it makes me think maybe there's more to localisation than we know about right now.
Glad to have awakened something interesting.
There has been a long-standing debate over functional specialization of the different hemispheres.
Can only comment from a cog sci perspective, but I remember a general theme in Fodor and some other texts that we often see complementary behaviour, with one hemispherical faculty taking up the loose ends or blind spots left by the other. In that regard all our senses are "stereoscopic", or at least not as discrete per organ as we assume; we like to get two points of view from which perception emerges (e.g. taste is linked to smell).
It makes sense this way, since the system grew around itself so to speak, improvements cropping up and lingering without ever overhauling the whole thing. I was happy that my perception teacher acknowledged that there are more than 5 senses. He was reluctant to give a number. I think it's inane to teach children they have 5 senses and ignore equilibrium and temperature. Like children wouldn't be able to comprehend these things.
Doesn't it also make sense, however, that even if two sides, or just two areas, performed some function fairly similarly at one time, gradually whichever side or area was more acute, or earlier in the chain, in more individuals would come to be emphasized, and the importance of the other area performing that function would decrease? Some birds have photosensitive cells in their pineal glands! We've no use for that, the thing being so insulated in there.
Since time and frequency acuity are mutually exclusive at the limit, wouldn't it make sense for the brain to evolve to have one side process things using a principle complementary to the other?
I don't know much about the brain, but I don't see why it would have to be the sides in that case; it could just as well be other structures in the brain.
The cochlea is actually a dynamic organ, in and of itself
And it can exhibit very localised and very fast adaptive behaviour almost within itself as a protection mechanism. This is how it achieves such an awesome dynamic range: after lying in a soundproof isolation tank for many hours, subjects report hearing a hissing noise, which we assume is the Brownian motion of air molecules. On the other hand, an impulse at around 130 dB can effectively shut down the auditory system and tense the tympanic membrane with the same kind of reflex as an eyelid closing. I've always found it amazing that our senses are capable of picking up the smallest practically measurable "quanta" of information (e.g. the retina can detect a single photon of light).
And the ossicles can actually be twisted to angles of less amplification. Our perception teacher pointed out that we put our clothes on at the beginning of the day and almost immediately stop feeling them. Leonard B. Meyer said that emotional response to music comes from "inhibited tendencies". His example was of a man reaching into his shirt pocket for a cigarette. He pulls the pack out, removes a cigarette, lights it and smokes it, and isn't aware of any of it. He might not even be able to remember later whether he smoked a cigarette. But if he reaches in and there are no cigarettes, he experiences immediate awareness and emotional affect. Same with drones vs. modulations in music. This is why I love Frank Zappa.
tonotopically
What do you mean by that term please? In simple language.
"The IEPs showed that the right hemisphere tonotopically encodes pitch information->there were position different "signatures" like event-related potentials that varied with respect to frequency. In the left hemisphere, there was no tonotopic organization, the areas of the brain under study responded equally to a wide band of frequencies."
I take "tonotopically" to refer to different frequencies exciting different physical places. Like on the basilar membrane. It isn't only the frequency of excitation, it's also the area of most excitation. See below...
I'm searching my brain for a better term than timing encoding, but not sure
PPM (pulse position modulation/encoding)? Well, that's what I'd call it with my electronic engineer's hat on.
Wow, now there's a term that would get a blank stare from me! I'd say temporal, depending on the context. Or "frequency of impulses". I'd also use "localized" as opposed to "tonotopic", but that's just my own vocab.
See below vis Shannon, timing is increasingly less relevant higher up the tree.
"increasingly less" lol The things we type when we find something stimulating.
so that at the cortical level, timing encoding only pertains to the low <100-200 Hz frequencies.
The magic number at ten to twenty milliseconds has fascinated me for ages. It comes up time and again, in Gabor's work, and in Warren, Jones and Lee (all cognitive scientists - check the McAdams and Bigand compilation "Thinking in Sound"). Something very important is happening here. It is the essence of granular synthesis and marks the important point where discontinuity becomes continuity (e.g. grains "fuse" at this point and we have to move to a wavelet-like model where position and frequency become combined). Can you throw any hypothesis at this, speaking as a neurobiologist? Do you think maybe one side of the brain is taking over from the other?
Here is what I've gathered from both my own reading and my perception class; I'm sure Charles can elucidate, and I suspect this is part of what he said somewhere in the parts I didn't quite follow. The nerves triggered by the cilia in the basilar membrane can only fire 300-500 times a second (in my notes I have written 333x, but I'm not sure why). But they are innervated in bundles of 10, and so it is possible to hear sounds up to 3-5 kHz just from the frequency of nerve impulses (is that what he means by timing encoding? awful coincidence of numbers there). Fish and amphibians only have this method of pitch recognition, and can't hear any higher frequencies. At low frequencies-- up to 100 Hz is the magic number I read somewhere-- the entire basilar membrane undulates. But famously we can hear up to 20 kHz (24 kHz according to my teacher). At higher frequencies, less and less of the basilar membrane undulates, and specific points of excitation register as specific pitches. Later, when we discussed sensitivity to frequencies, I asked my teacher if the overlap of methods between about 100 Hz and 3 kHz has anything to do with our hearing being most sensitive in that range, and that being where speech tends to occur. He didn't know. Ha! If I understand correctly, Charles has had some experience with how the brain actually processes these two frequency detection devices differently.
I also wonder if this entire-membrane-excitation-at-low-frequencies thing has to do with why people are so moved by bass. But I'm sure there are many reasons for that.
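The "bundles of 10" picture above is essentially the volley principle, and it can be caricatured in a few lines of Python (a toy, not physiology: real fibers are noisy, and the 500/s ceiling and the neat one-cycle offsets are illustrative assumptions). Each fiber fires at a fixed phase of the stimulus but no faster than its ceiling, so it skips cycles; pooled across ten fibers, nearly every cycle is still marked up to a few kHz, and then coverage falls apart.

```python
import numpy as np

def pooled_spikes(freq_hz, n_fibers=10, max_rate=500.0, dur_s=0.05):
    """Toy volley principle: each fiber locks to the stimulus phase but
    fires no faster than max_rate; fibers start on different cycles."""
    period, refractory = 1.0 / freq_hz, 1.0 / max_rate
    pooled = []
    for k in range(n_fibers):
        t, last = k * period, -np.inf       # fiber k starts k cycles late
        while t < dur_s:
            if t - last >= refractory:      # fire, then sit out the refractory period
                pooled.append(t)
                last = t
            t += period                     # wait for the next stimulus cycle
    return np.sort(np.array(pooled))

for f in (200, 1000, 3000, 8000):
    spikes = pooled_spikes(f)
    n_cycles = int(0.05 * f)
    covered = len(np.unique(np.round(spikes * f))) / n_cycles
    print(f"{f:5d} Hz: {len(spikes):4d} pooled spikes, "
          f"{100 * covered:5.1f}% of stimulus cycles marked")
```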
The crazy thing about this adaptive system is that it's the brain itself which is deciding (in a dynamic fashion) what these salient features are according to context. I remember from somewhere that in 60 ms a human or monkey can tell the difference between a bolt of lightning, a twig snapping and a raindrop on a leaf. All this information must be in the very small attack of a signal. It happens long before the frontal brain can classify and tag the sound with a word. While it's obvious from a survival POV in evolutionary biology, it also indicates that adaptive feedback must be occurring at a very low level.
I've read of how these shortcuts appear in response to emotional situations. You jump when the snake starts moving, and only understand what it was afterward. I was thinking of this in assuming that there would be a difference depending on which ear the sound comes in, even though the information is shared between the hemispheres. It gets to the area that does the processing before it gets to the area that receives messages from that area.
Just to throw another anecdote out there, http://www.mit.edu/~perfors/oldhotornot.htm She says women found men more attractive if their names had higher-formant vowels. In that case I'm doubly screwed with my first and last name... I thought her hypothesis was ridiculous, that women today are more attracted to guys who seem smaller and less threatening. I suggested to her it had to do with the complexity of the tone, like birdsong. Rock singers have always been loved for high-pitched screaming, guitarists for searing solos, and it isn't because it makes them seem small. It could also just be increased muscle tension in the mouth saying some words.
To get somewhat on-topic, Charles, what brings you to Pd? Have you used it in research at all? For auditory stuff or DSP? I think this list could benefit from people whose primary interest is other than programming or composing.
-Chuckk