Lesson four concerns some observations from a totally different field, linguistics, that make the same point: that what we hear is really quite different, in a puzzling way, from the physical characteristics of the sounds that come to the ear. This demonstration is called the McGurk effect, and what this diagram is showing you is what you're going to experience yourselves in a second. There is an observer sitting here watching a video screen, and a soundtrack that's playing the very same sound, the very same syllable, each time it's enunciated. But the syllable being mouthed by the speaker on the screen doesn't correspond with the syllable on the soundtrack. So you have a disconnect between the sound that's coming to your ear and what's being conveyed by your visual input, the position of the lips and the tongue as they articulate different syllables. And the result of this combination of discrepant sound input and visual input is, as you'll hear from the observer's report, that she hears a different sound when she's looking and getting the visual input than when she's just listening with her eyes closed. So I want you to experience this for yourself. Here is an individual, same setup: you're looking at this video, and you're going to hear this person speak a syllable. The syllable that you're hearing, the syllable on the audio track, is the very same syllable each time. I want you to experience this first with your eyes open, looking at the visual input, and then with your eyes closed, so that you just get the audio input. So, let's listen to this. >> Bah, bah, bah, bah. >> I think you all heard, as I do listening to this, that when you're getting the visual input, even though the sound signal is exactly the same, you hear this individual enunciating three different syllables that are significantly and easily heard as different. But now I want you to go back and listen to this with your eyes closed, and you'll hear. >> Bah, bah, bah, bah.
>> When your eyes are closed, you can hear that the speaker is really saying the very same syllable each time. What's the point of this? The point is that, again, there is an obvious discrepancy between the physical sound signal that's coming to your ear and what you end up hearing subjectively. And this is another nice example of how input from a different sensory modality, vision, affects what you hear. Not surprising; it makes perfectly good sense that you want to use information from vision and audition together to discriminate as best you can, particularly if, like me, you're a little bit deaf. You get a lot of information out of seeing the position of the lips and the tongue. This is lip-reading, of course, and it's very helpful for people who have diminished hearing. I want to tell you about one more example, carried out by the linguist Peter Ladefoged about 15 years ago. What he did was to generate a synthetic sentence with different qualities of the speaker's voice. He changed something we're going to talk about in a later module, the formant relationships, in the speaker's utterance of the sentence "Please say what this word is." That sentence, spoken with different voice qualities, was followed by a test sound signal that was a b-vowel-t word, like bit, bet, bat, or but, each one using a different vowel. The task of the listener was to identify the vowel in the test word: was it bit, was it bat, was it but, and so on. And what he found was that, depending on the quality of the speaker's voice, people made very different judgments about the vowel they were hearing in the test word. So again, the point of this additional observation is that what we hear is not simply related to the sound signal that's out there.
It can be changed markedly, either by visual information from another sensory modality, as in the McGurk effect, or by the preceding context, in which the interpretation of the vowel is changed by the quality of the speaker's voice.