2018 AANS Annual Scientific Meeting
538. Hearing the Written Word: Listening and Reading Recruit Shared Lexical Semantic Networks
Video Transcription
Dr. Forseth will be talking about language networks. All right. Slides up, please. Hello. My name is Kiefer Forseth. I'm a fourth-year MD-PhD candidate studying the electrophysiology of language in the lab of Dr. Nitin Tandon at UT Health Science Center. So language is a unique human evolutionary adaptation. It's been critical to our success as a species. And importantly, we have two methods by which we're able to transmit language to each other. The first origins of language are maybe 250,000 years old; that's the current thinking. That's the genetic adaptation that enabled us to develop this unique ability. But reading, as opposed to spoken language, is a much more recent development, and it's a particularly striking adaptation of our existing cerebral systems to accomplish a new and impressive task. So what I want to talk about today is the relationship between these two direct means of language transmission: spoken language and reading.

Cognitive neuroscience has proposed two distinct pathways for listening and reading. In both of them, you begin with sensory processing. When you're listening to someone speak, you extract acoustic features. Interestingly, those features seem to be invariant to pitch, accent, and volume. And in visual processing, you are able to read something regardless of its font size or even its case, which is a particularly striking example. But what I want to get into more today is the next step. Once you've extracted these features, how do you extract meaning from them? How do you go from an acoustic signal or a visual signal to meaning?

So in listening, there are dual streams of speech processing; the ventral stream is concerned with perception. In reading, you also have a dual-stream hypothesis. There are two routes: a phonological route, which is essentially verbalizing the word to yourself in your own head, used for words with regular pronunciation or words you don't use very often; and a lexical route, used for frequent words, which maps directly from the orthographic signature to meaning. And finally, both of these methods can be used to prime naming. So if you hear a short description, you can come up with that object, and that involves semantic access. Both of these tasks involve those three steps.

Now, where this lives in the brain is an open question. For verbal processing, we think it lives in the superior temporal sulcus or in the posterior middle temporal gyrus. But the correlates of this for reading are still open to debate, and it's a key question whether speaking and reading use the same cortical mechanisms. Now, in reading, we have this phonological route and this lexical route. The phonological route is shown on the right: you go from letters to phonemes, you pronounce the word in your head, and you go from the pronunciation back to meaning. It's a bit of a roundabout way, but there's good evidence that that's how we learn to read. The other way, which emerges as you become a more mature reader, is to go directly from words to meaning.

So the questions I want to answer today are: What are the temporal dynamics of lexical and phonological processing in the brain? Are they the same for listening and for reading? Do these two discrete processes share common substrates for accomplishing this goal? And finally, when do these cognitive processes converge before articulation?
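As a rough illustration of the dual-route idea described above, the toy sketch below maps frequent words straight from orthography to meaning (lexical route) and falls back to grapheme-to-phoneme rules for everything else (phonological route). The word list, rule table, and function names are hypothetical, chosen only to make the concept concrete; they are not part of the talk or of any real model implementation.

```python
# Toy sketch of the dual-route reading model (illustrative only; the
# lexicon and grapheme-to-phoneme table below are made up for this example).

# Lexical route: frequent or irregular words map directly from orthography to meaning.
LEXICON = {
    "cat": "small domesticated feline",
    "yacht": "light sailing vessel",  # irregular spelling; lexical route needed
}

# Phonological route: grapheme-to-phoneme rules, then meaning via the spoken form.
GRAPHEME_TO_PHONEME = {
    "c": "k", "a": "ae", "t": "t", "s": "s", "p": "p", "o": "aa",
}

def read_word(word: str) -> str:
    """Return a meaning (lexical route) or a phoneme string (phonological route)."""
    if word in LEXICON:  # frequent or known word: direct orthography-to-meaning mapping
        return f"meaning: {LEXICON[word]}"
    phonemes = [GRAPHEME_TO_PHONEME.get(ch, "?") for ch in word]
    return "phonemes: " + "-".join(phonemes)  # novel or regular word: sound it out

print(read_word("cat"))   # lexical route
print(read_word("spat"))  # phonological route (not in the toy lexicon)
```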
So I'm going to do this with electrocorticography. Electrocorticography has a couple of strengths, namely extremely high spatial and temporal resolution and direct recordings from the human brain. But it does have some limitations. First, it's typically done in patients with epilepsy. And more importantly, we very rarely have good coverage of the entire cortical network that we're studying. To compensate for that, we need to study this problem at the group level, and I'll show some measures for doing that later.

The two tasks we looked at are naming to definition, either by spoken word or by rapid serial visual presentation; the stimuli are shared. In the auditory domain, the control condition was reversed speech, where patients just name the speaker as male or female. In the reading domain, we changed the last word to make it impossible to bind a semantic concept, so in this case, "around read time": there's no concept you could come up with that it describes, so you just say "nonsense." We looked at patients with grid electrodes and with depth electrodes, providing complementary coverage of gyri and sulci. We did this in 27 patients, 24 with depth electrodes: more than 4,000 total electrodes and more than 4,000 total trials.

Let me begin with one patient who has exceptional coverage of the entire lateral and ventral cortical network in the language-dominant hemisphere. I'm going to start on the ventral surface. We have two electrodes that are less than three millimeters apart in the anterior fusiform gyrus. Zero here is the end of the stimulus, so in reading it's the last word presentation and in listening it's the end of the acoustic stimulus. You can see that with each word, you get a parallel peak corresponding to letter processing. But after the stimulus ends, one of those electrodes goes silent while the other maintains its activity. So this is a lexical semantic electrode, whereas the electrode just two millimeters behind it is a purely lexical electrode. It shows you the fine-grained nature of this process.

Now moving to the lateral temporal surface, we have two electrodes in posterior middle temporal gyrus. One of them is active throughout the reading task, maybe 300 milliseconds delayed from the early visual processing. But the other electrode, just a couple of millimeters in front of it, ramps up throughout the stimulus. That's, again, a distinction between a lexical and a lexical semantic electrode. And then finally, you can see the articulatory network: IFG in purple, somatosensory cortex in green, and primary auditory cortex in red.

Now if we quickly look at the listening task, not surprisingly, early auditory cortex is active throughout the task. But again, we see this ramping up of the anterior pMTG electrode and a late activation of this anterior fusiform electrode. Also, very interestingly, during the reading task we see activation of early auditory cortex. This is a depth electrode that goes through the transverse temporal sulcus and Heschl's gyrus, and with each presentation of a word, you can actually see the patient articulating that word internally and generating the speech signal. This is the most direct evidence for the phonological route that exists in the literature. So, as I mentioned, we're going to do this at a group level.
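The per-electrode traces described in this segment are trial-averaged activity aligned to stimulus offset. A minimal sketch of one common way such traces are computed from ECoG voltage data is shown below, assuming a high-gamma band (70-150 Hz), a 1 kHz sampling rate, a Hilbert-envelope estimate of power, and a simple baseline window; the talk does not specify any of these details, so treat them as illustrative assumptions rather than the study's actual pipeline.

```python
# Minimal sketch: band-limited (high-gamma) power for one electrode, averaged
# over trials aligned to stimulus offset. Band, sampling rate, baseline window,
# and array names are assumptions for illustration.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 1000             # assumed sampling rate (Hz)
BAND = (70.0, 150.0)  # assumed "high-gamma" band (Hz)

def high_gamma_power(trials: np.ndarray) -> np.ndarray:
    """trials: (n_trials, n_samples) voltage traces for one electrode,
    each aligned so t=0 is the end of the stimulus.
    Returns the trial-averaged, baseline-normalized amplitude envelope."""
    b, a = butter(4, [f / (FS / 2) for f in BAND], btype="bandpass")
    filtered = filtfilt(b, a, trials, axis=-1)                   # zero-phase band-pass
    envelope = np.abs(hilbert(filtered, axis=-1))                # instantaneous amplitude
    baseline = envelope[:, :200].mean(axis=-1, keepdims=True)    # assumed baseline window
    return (envelope / baseline).mean(axis=0)                    # average across trials

# Example with synthetic data: 50 trials, 2 s of signal per trial.
rng = np.random.default_rng(0)
fake_trials = rng.standard_normal((50, 2 * FS))
avg_power = high_gamma_power(fake_trials)
```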
And we do this by solving the inverse problem in electrocorticography, identifying the substrates that contribute to the signal we're measuring, and then integrating that across patients with a surface-based measure. This lets us build these kinds of movies. So this is 500 milliseconds following word presentation. You can see that, beginning in early visual cortex, the activity flows along the ventral temporal surface, terminating in mid-fusiform gyrus; it then activates lateral temporal regions, the posterior middle temporal gyrus and the superior temporal sulcus, and then very late activates IFG.

Let's break this down into time windows. In the middle third of the sentence, with listening on the left and reading on the right, you see that early auditory cortex is dominant for listening and early visual cortex is dominant for reading. But there are also shared association areas in between, concentrated in the superior temporal sulcus and middle temporal gyrus. Now when you get to the last word, that's when you can begin to bind semantic concepts. Here you see a much stronger activation of IFG and a residual activation of fusiform gyrus, which is now coming on for the first time in the listening task. And when you continue, so now you're almost to the point of articulation, this really emphasizes how global the production of language is: it involves the entire dominant hemisphere.

We're going to pare this down by comparing it to the control condition. Now these are just regions whose activity is greater when listening to or reading a coherent stimulus than an incoherent one. This pulls out the lexical semantic network, the interaction between those two levels of language production: IFG, intraparietal sulcus and supramarginal gyrus, precuneus, and also mid-fusiform, which in this case is a bit buried by the fact that you have visual processing in both tasks.

So what can we learn from all this? Well, first of all, we can describe for the first time the discrete temporal dynamics of the entire cortical reading network. At about 150 milliseconds, you get the letterbox activation of the visual word form area. By 300 milliseconds, you see the phonological route in superior temporal gyrus and sulcus. And finally, around 400 milliseconds, the lexical route in posterior middle temporal gyrus and perhaps inferior frontal gyrus.

So the question I asked originally was: do listening and reading utilize shared lexical cortical networks? The answer is yes. Lexical access seems to rely on a receptive lexical interface in pMTG. Semantic access seems to be more distributed, across intraparietal sulcus, mid-fusiform, and IFG. Finally, when do these processes converge in the common process of articulation? It's within 200 milliseconds after the presentation of the stimulus. So 200 milliseconds after hearing a sentence or reading a sentence, your brain is doing an identical task. I'd like to acknowledge my lab and Dr. Nitin Tandon for giving me the opportunity to do this work, and I'd be happy to take any questions. Thank you.
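The group-level, surface-based integration described at the start of this segment might look roughly like the sketch below: each electrode's activity estimate is spread over nearby vertices of a shared template cortical surface with a Gaussian distance weight, and vertices are then combined across patients by coverage-weighted averaging. The kernel shape, radius, and averaging scheme are illustrative assumptions, not the actual analysis pipeline used in the study.

```python
# Sketch of surface-based group integration of electrode effects (assumptions:
# Gaussian distance kernel, 5 mm width, coverage-weighted averaging).
import numpy as np

def project_to_surface(vertices, electrode_xyz, electrode_effect, sigma_mm=5.0):
    """vertices: (n_vertices, 3) template-surface coordinates in mm.
    electrode_xyz: (n_elec, 3); electrode_effect: (n_elec,) activity estimates.
    Returns per-vertex weighted-effect and weight (coverage) maps for one patient."""
    effect = np.zeros(len(vertices))
    weight = np.zeros(len(vertices))
    for xyz, eff in zip(electrode_xyz, electrode_effect):
        d = np.linalg.norm(vertices - xyz, axis=1)
        w = np.exp(-(d ** 2) / (2 * sigma_mm ** 2))  # Gaussian falloff with distance
        effect += w * eff
        weight += w
    return effect, weight

def group_map(per_patient_effects, per_patient_weights, min_coverage=1e-3):
    """Coverage-weighted average across patients; uncovered vertices stay NaN."""
    total_w = np.sum(per_patient_weights, axis=0)
    total_e = np.sum(per_patient_effects, axis=0)
    out = np.full_like(total_w, np.nan)
    covered = total_w > min_coverage
    out[covered] = total_e[covered] / total_w[covered]
    return out
```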
Video Summary
In this video, Dr. Forseth discusses language networks in the brain. He explains that language is a unique human adaptation that can be transmitted through speaking and reading. He describes the distinct pathways involved in listening and reading and how meaning is extracted from acoustic and visual signals. Dr. Forseth also discusses the brain regions involved in speech and reading processing, and the temporal dynamics of lexical and phonological processing. He presents findings from electrocorticography studies and highlights the shared cortical networks used in listening and reading. The video ends with Dr. Forseth acknowledging his lab and Dr. Nitin Tandon for their support and inviting questions.
Asset Caption
Kiefer Forseth
Keywords
language networks
brain
speaking
reading
pathways