Comprehensive World Brain Mapping Course
Contemporary Model of Language Organization
Video Transcription
Thank you. Well, I want to really thank the organizers for inviting me. It's a pleasure and an honor to be here at this first international mapping meeting. I'm not going to talk about a model, because I'm not quite sure what the model would be. Instead, I'm going to go over the history of our intracranial recording of language, which has provided insights into how things seem to work and where things don't match classic theory. And if we have time at the end, I'd like to show you some of our efforts to develop a speech prosthetic device. This is all clinical work; none of it would happen without these fantastic collaborators, and I'm going to be presenting work from each of these investigators. It all started with Mitch, actually, in 2003, when he contacted Berkeley and asked whether anybody was interested in working with neurosurgeons. It's been a really fantastic journey.

So, the first thing: a lot of what I'm going to discuss is focused on the recently described high-frequency activity in the cortex. The high-frequency band, 60 to 250 hertz, seems to be the best surface index of underlying cortical single-unit activity. It's very reliable at the single-trial level, so you can do statistics on individual trials, and that's why it's potentially so good for a prosthetic device.

The project almost ended with the first study, because we nearly drove Mitch crazy. We decided to see if we could get quality mismatch data in the operating room. The mismatch negativity is really simple: beep, beep, beep, beep, boop. We knew you got nice auditory cortical activity from it, and it's just as simple as that. I'm not going to go into everything it reflects; it's an automatic deviance-detection mechanism that we all have, very important for the orienting response. The bottom line is that in this first study, lo and behold, we found that when we looked at the repeated tones, we got the expected power in the lower frequency range in a time-frequency analysis, but also a huge amount of activity in the high-frequency range, which we didn't expect. We actually thought it was an artifact. It turns out it wasn't. We replicated this with tones and then with phoneme deviants, and this gave us the idea that we had a really nice signal for understanding the cortical physiology of various things, in this particular case language.

Now, another important finding is that this activity is independent at four-millimeter resolution. This is some intraoperative data: there are nine electrodes, and eight of them respond to hearing while only the center one doesn't; it responds to speaking. This will be important as we go further in the talk. There is a very rich, complex representation; the idea that the cortex is a blob of this and a blob of that is just not true. There's a lot of interleaved activity that matters, and we can exploit it with various analysis methods. For instance, if we take a grid — here's a grid with some artifacts, bad electrodes — and present a phoneme for 700 milliseconds and then a word, you can see this incredibly rich high-frequency activity to the more complex word. This is going to be really important for some of the things I'll show you. It demonstrates the complexity and strength of the signal you can get to language stimuli with simple, direct intraoperative recording.
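A minimal sketch of the kind of signal extraction described above: band-passing one ECoG channel to the high-gamma range and taking the analytic amplitude, which is what supports the single-trial statistics mentioned here. The band edges follow the 70-160 Hz range quoted later in the talk; the sampling rate and data are synthetic stand-ins, not the lab's actual pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                              # sampling rate in Hz (assumed)
raw = np.random.randn(int(2 * fs))       # stand-in for 2 s of one ECoG channel

# Band-pass to the high-gamma range, zero-phase so timing is preserved.
b, a = butter(4, [70.0, 160.0], btype="band", fs=fs)
band = filtfilt(b, a, raw)

# Analytic amplitude via the Hilbert transform, z-scored against a
# pre-stimulus baseline so single trials are comparable across electrodes.
amplitude = np.abs(hilbert(band))
baseline = amplitude[: int(0.5 * fs)]    # first 500 ms treated as baseline
high_gamma = (amplitude - baseline.mean()) / baseline.std()
print(high_gamma.shape)                  # one envelope sample per raw sample
```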
Now, the first thing we did that really challenged or clarified some issues in the literature concerned the phenomenon of categorical perception, which is ubiquitous in the sensory world. You don't perceive things linearly; you perceive one thing, then one thing, and then you switch into another category. This phenomenon has been around since 1956. Let me play this for you: if you listen, you'll hear that these three phonemes have categorical shifts. Can you hear that? They shift at a boundary, rapidly; they don't change linearly. You hear ba, then da, then ga, and each person shifts at a slightly different point. Any progress on the sound back there? No? We give up on that? We'll do it this way. It's working. Okay.

So the question had been whether, in the brain, these transitions are organized in a linear fashion in accord with the physical stimulus, or whether the brain rapidly pulls them apart into categories. The answer comes from work done by Jochem Rieger and Eddie Chang in the lab. They first ran pre-op psychophysics on the patients; each patient had a different categorical boundary where the perceptual shift occurs. What they showed, using clustering analysis and various mathematical tools, is that your brain very beautifully pulls the different phonemes apart into clusters, and it does so very, very rapidly. This pretty simple experiment in the operating room clarified something that had been unresolved for literally decades in the literature: your brain uses a rapid clustering mechanism. I will just say that these clusters interleave, again. These are the different transitions for ba-da, da-ga, and so on, and even in this little patch of cortex you can see how different transitions sit in different, interleaved electrodes. Again, a very complex organization (see the clustering sketch below).

How about going a little higher up the food chain and trying to figure out where a word's semantic meaning is represented? There's Wernicke's area up there on the right. A really simple experiment: the patient hears words, and the same words scrambled acoustically with the same energy. So they hear: babble, laugh, knock. The math is that you take the word and subtract from it the acoustically matched scrambled version; the idea is that what's left is where the meaning of the word is actually represented. Here are just a couple of patients, and you're seeing this activity evolve in time. Of course, this is stretched out; it's all over in about 200 milliseconds. You can see classic language areas on the superior temporal surface and the superior temporal plane activated in the word-minus-nonword contrast. And it fits absolutely beautifully with data from Nina's lab, which you're going to hear about, where chronic lesions with damage in the STS and the middle temporal gyrus produce Wernicke's aphasia. So, a really nice fusion of electrophysiology and classic neuropsychology.
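To make the clustering idea concrete, here is a toy version in Python: synthetic population responses along a /ba/-/da/-/ga/ continuum are grouped with k-means to ask whether they fall into discrete categories rather than varying linearly with the stimulus. K-means and the response model here are illustrative assumptions, not the specific analysis used in the study.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_electrodes = 20
# Three category "templates" standing in for /ba/, /da/, /ga/ population codes.
templates = rng.normal(size=(3, n_electrodes))

# 14 continuum steps: in a categorical code, each step's response snaps to its
# category template (plus noise) instead of interpolating between templates.
steps = np.repeat([0, 1, 2], [5, 4, 5])
responses = templates[steps] + 0.3 * rng.normal(size=(steps.size, n_electrodes))

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(responses)
print(labels)  # contiguous runs of a single label mark the category boundaries
```

If perception were linear, the responses would drift smoothly across the continuum and the cluster labels would not form clean contiguous runs.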
How about generating verbs, and the output side of the system? This is work done by Eric Edwards. Really simple task: you hear a noun and you just have to generate a verb. Ball, throw, et cetera. When you do this with fMRI, you see all these nodes — the temporal lobe, Broca's area, motor cortex — because you're perceiving, selecting the verb, putting together the articulatory plan, and saying it. But if you do ECoG and use this high-frequency response, in this case between 70 and 160 hertz, you can watch language evolve in real time in the human cortex: understand the meaning, select and produce the motor program — look at the time code again — say it; the sound hits the airway and refires your auditory cortex. So you can map, with temporal precision at the single-trial level, the unfolding of language from posterior temporal areas to frontal areas to motor areas, and then the refiring of auditory cortex as you speak. A really powerful way to probe the brain.

So what about Broca's area? I think every one of us assumes that Broca's area is active as I speak, as I say the word. That has been the standard party dogma for many, many years. Is it true? It turns out that if you use ECoG, it's not. And we suspected it might not be, because we know — this is again some data from Nina — that the first Broca's case was the patient Tan. But there was another patient of Broca's who had a Broca's aphasia yet had only a tiny lesion in the Broca's region. Tiny. And it turns out that if you look at Nina's patients, their lesions all overlap deep, not in Broca's area: they overlap deep in the arcuate fasciculus and insula. We also know neurologically that if you get a small infarct limited to Broca's area, you're mute for about 7 to 10 days and often recover. So the story didn't fit.

Adeen Flinker did a really simple electrocorticography study in which people were speaking, repeating words, and reading. To cut to the chase, what you see is, not surprisingly, that auditory cortex is activated when you hear something you have to repeat, and Broca's area follows very closely, within about 150 milliseconds. But when you actually speak, Broca's area shuts off, and premotor and motor areas come on. It doesn't fit the classic organization, and it's highly reliable at the single-trial level. I show you this because single-trial analysis is really important for what I'll show you next: how we might use this to produce devices for patients. STG, locked to the stimulus; Broca's following right behind; Broca's shutting off when you begin to speak; motor areas active. Adeen's idea is that this reflects phonemic articulatory planning, and that a large part of Broca's area is really premotor cortex.
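One simple way to visualize the kind of single-trial temporal unfolding described here is to order electrodes by the peak latency of their trial-averaged high-gamma envelope. This is a hypothetical sketch on synthetic data, not the lab's analysis; the array shapes and the envelope rate are assumptions.

```python
import numpy as np

fs_env = 100.0                              # envelope sampling rate in Hz (assumed)
n_electrodes, n_trials, n_samples = 64, 40, 300
rng = np.random.default_rng(1)
hg = rng.normal(size=(n_electrodes, n_trials, n_samples))  # stand-in envelopes

mean_resp = hg.mean(axis=1)                 # average across trials per electrode
peak_latency_ms = mean_resp.argmax(axis=1) / fs_env * 1000.0

# Earliest-responding sites first: posterior temporal electrodes should lead,
# then frontal, then motor, then the auditory refiring during speech.
for e in np.argsort(peak_latency_ms)[:5]:
    print(f"electrode {e}: peak at {peak_latency_ms[e]:.0f} ms")
```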
Now, what about the hippocampus? Classically, people don't think of the hippocampus as engaged in language. It turns out that's not true either. We do know that with strokes in the hippocampal region you get more verbal memory problems with left than with right hippocampal lesions; we know that, but typically people don't treat it as language cortex. This is an extremely simple study done by Vitória Piai, who has just returned to the Max Planck in Nijmegen. The patients — who, as I'll show you in a minute, have intracranial electrodes — hear sentences: "He locked the door with the... key" — that's constrained; "She walked in here with the... key" — unconstrained. So we have context-constrained and unconstrained conditions, and in the constrained condition you're faster. Okay, that's a well-known behavioral phenomenon.

But interestingly, if you look at depth recordings — stereo-EEG with electrodes predominantly on the left side, in hippocampal and MTL regions — what you find, in 10 of the 12 subjects we studied, is that as the context in the sentence builds up, and only in the context sentence, you get rhythmic hippocampal theta activity. Again, this suggests that the hippocampus is online during the processing of the unfolding semantics and context of the sentence — that it is engaged in language. We think it will probably relate to connectivity with lateral left-hemisphere cortex, but that work is still in progress.
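As a rough illustration of the contrast just described, the sketch below compares theta-band (4-8 Hz) power in a hippocampal depth electrode between constrained and unconstrained sentences with a paired test across trials. The data are synthetic and the paired t-test is my assumption; the published analysis may differ.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import ttest_rel

fs = 500.0
rng = np.random.default_rng(2)
n_trials, n_samples = 30, int(2 * fs)        # 2 s of sentence context per trial
constrained = rng.normal(size=(n_trials, n_samples))
unconstrained = rng.normal(size=(n_trials, n_samples))

def theta_power(trials):
    """Mean 4-8 Hz power per trial, estimated with Welch's method."""
    f, pxx = welch(trials, fs=fs, nperseg=int(fs))
    band = (f >= 4) & (f <= 8)
    return pxx[:, band].mean(axis=1)

t, p = ttest_rel(theta_power(constrained), theta_power(unconstrained))
print(f"t = {t:.2f}, p = {p:.3f}")           # positive t: more theta with context
```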
How plastic is your language cortex? Remarkably plastic, it turns out. This is a powerful behavioral phenomenon, so listen to this. You're going to hear a scrambled sentence whose spectrotemporal content — its sound essence — has been taken apart. Then you'll hear the real sentence, and then the scrambled version again. And what you'll find, behaviorally, is that after you've heard the real sentence, you can understand the scrambled sentence, and you can understand it very well. This is the percentage of words you get right in the scrambled sentence after you've heard the real one. Just listen; it's pretty amazing. "Bagpipes and bongos are musical instruments." "The high-security prison was surrounded by barbed wire." Just like that — boom — everybody perceives it. A powerful phenomenon. "Move the garbage nearer to the large window." Very, very powerful. How does it happen? It's not memory for the sentence.

What Chris Holdgraf has shown in the lab, again using straight intraoperative recording — this is mostly grid data — is that the spectrotemporal tuning profile of your individual electrodes changes. You change within a second, so that the same scrambled stimulus now evokes a speech-like representation. An STRF, a spectrotemporal receptive field, is basically a way of identifying the part of the time-frequency spectrum a site is tuned to, and you track it through; the STRF is then correlated with activity in a particular electrode. I'm going to skip some of the technical details — he would be upset, but that's life. Basically, what he showed is this: here are the STRFs, the tuning properties, of these temporal electrodes, and the question is whether they are close to the tuning required by the real sentence or not. It turns out that when you first hear the scrambled sentence, not many electrodes have STRFs matching real speech. However, immediately afterward, within a second, the tuning properties of most of these electrodes change so that they take on the tuning of the real speech — and that's why you perceive it. Your brain is incredibly plastic.

So how could you use this? We've seen that you can use this signal to map out where language is, with pretty remarkable fidelity, reliably at the single-trial level. There are challenges to the classic theory of what Broca's area and the hippocampus do in language. And we've just seen this remarkable plasticity, where within a second of hearing one thing your brain changes how it responds to scrambled sound, somehow pulling out information you can perceive. So the question was: could we use this to develop an implantable speech prosthesis for people who can't speak? Think ALS, think bad Broca's aphasia, think locked-in patients, et cetera.

This is work done by Eddie with Brian Pasley, who was a grad student in the lab at the time. They did a really simple but, I think, really elegant study. They presented a hundred or so words — let's be context-appropriate: "football." They presented "football," recorded the electrocorticogram, and built a reconstruction model; they had a spectral model and a modulation model, and all the details are in the paper, which went viral when they published it. Basically, the model forces the electrodes, in a way, to match the football spectrogram. Then, because with enough parameters you can model anything to anything, you have to hold out words: you present the held-out words to the model and see if it really works. So they held out words, presented them to the model, and had the model pick between two words — say, football and soccer — where chance is 50%. They got 91% accuracy, which is really remarkable. What they're doing, in a way, is assigning to each electrode a certain frequency response at a certain time, like piano keys: if you're Beethoven and you're deaf and you watch someone playing the piano, you know what each key represents in frequency and you know the timing, so you can reconstruct the sound. That's essentially what they're doing, and it's really quite amazing.

You might ask whether it means anything. I think it really does, and here's an example. This is what the patients heard, cut off at 7 kilohertz, mainly because of reconstruction power and computational issues. Here are four words: Waldo, structure, doubt, property. And here is the reconstruction from direct brain recordings, both intraoperative, where you see a smaller grid, and from a higher-density grid at UCSF. The key areas are always these regions around meaning in the brain. This reconstruction is played back on the speaker straight from the brain recording, and you can decide for yourself whether it works: Waldo, Waldo; structure, structure; doubt, doubt; property, property.
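The logic of the hold-out test can be sketched in a few lines: fit a linear map from neural features to the stimulus spectrogram on training data, reconstruct a held-out segment, and identify it by correlation against two candidates — the two-alternative test with 50% chance described above. Ridge regression on synthetic data is a simplified stand-in for the paper's spectral and modulation models.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
n_samples, n_features, n_freq = 2000, 60, 32
X = rng.normal(size=(n_samples, n_features))            # lagged high-gamma features
W = rng.normal(size=(n_features, n_freq))
S = X @ W + 0.5 * rng.normal(size=(n_samples, n_freq))  # "true" stimulus spectrogram

train, test = slice(0, 1600), slice(1600, None)         # hold out the final segment
model = Ridge(alpha=1.0).fit(X[train], S[train])
S_hat = model.predict(X[test])

def corr(a, b):
    """Flattened Pearson correlation between two spectrograms."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

# Two-alternative identification: pick whichever candidate spectrogram
# correlates better with the reconstruction ("football" vs. "soccer").
foil = rng.normal(size=S_hat.shape)
print("true word" if corr(S_hat, S[test]) > corr(S_hat, foil) else "foil")
```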
So this was exciting to us, because it says we can decode the sensory driving of language. The question then — and I'll close with this — is whether we can decode when someone imagines. Decoding heard speech doesn't by itself help a patient, but if you could decode when the patient is imagining a word or a thought, then you've really got something. Before that: it also works for whole phrases. "That's my fire escape" — that's decoding directly from the brain. And it works for music. I once ran into Mitch at a Pink Floyd concert, so I think this is very appropriate. We basically had 28 patients who, during lulls in, quote, cognitive testing, could listen to pretty much whatever they wanted, and in 28 of them we had Pink Floyd. So we were able to put together a super-grid — this is the overlap of all the patients — and we again did modeling. "Another Brick in the Wall" is four minutes long, so we took the first two minutes, built a model, and checked whether the model could decode the last two minutes of the song. Pretty simple. These were the informative electrodes: out of all of them, about 140 really contributed to the reconstruction, and not so surprisingly they were centered in the usual suspects, auditory and association cortices. So I'll just play you the results. Here is — no, I'm sorry, this is not the reconstruction; this is our target to reconstruct: the last two minutes of "Another Brick in the Wall," filtered at 7 kilohertz. That's what they heard. And here's what we were able to reconstruct.
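The split-song evaluation just described reduces to training on the first half of one continuous stimulus and scoring the reconstruction on the unseen second half. Below is a minimal sketch under that scheme, again with ridge regression and synthetic data standing in for the actual model; the 140-electrode figure comes from the talk.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)
n_samples, n_elec, n_freq = 4000, 140, 32          # ~140 informative electrodes
X = rng.normal(size=(n_samples, n_elec))           # high-gamma features over the song
W = rng.normal(size=(n_elec, n_freq))
S = X @ W + rng.normal(size=(n_samples, n_freq))   # stand-in song spectrogram

half = n_samples // 2                              # first two minutes vs. last two
model = Ridge(alpha=10.0).fit(X[:half], S[:half])
S_hat = model.predict(X[half:])

# Score each frequency band of the reconstruction on the held-out half.
r = [np.corrcoef(S_hat[:, k], S[half:, k])[0, 1] for k in range(n_freq)]
print(f"mean held-out correlation: {np.mean(r):.2f}")
```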
I think I've got two more minutes. Okay. So, again: we can decode language input, and we can decode music — probably not so surprising, since music is in a way easier than language, not as complex, but you can hear it; it's quite clear. Now the question is whether we can decode when you imagine, because that's the $64,000 question — the kind of thing that could give the people in this audience a potential implantable device to treat a whole host of problems.

This is work done by Stephanie Martin, a PhD student in bioengineering who goes between EPFL and Berkeley. She's super talented — Mitch, you'd like this: she was number five in the tryouts for the Olympic team in Switzerland, she throws the javelin, and at a lab picnic, having never touched a football, she picked one up and threw it better than all the guys. Pretty impressive. This was work done with Gerwin Schalk. We needed some timing — when exactly do people imagine? There are a lot of technical details I'm glossing over; they're all in the papers. So the first thing we did was run a ticker tape of the Gettysburg Address across the screen. It came by once and the person had to say it aloud, as you'll hear, and then it came by again and the person simply had to imagine saying it. The imagined pass gave us a time code to look for activity. So here we go: "Four score and seven years" — that's the patient actually talking, driving the cortex, which was used to decode the overt speech — "on this continent." And you can see that in every subject we get very reliable decoding accuracy for the overt speech. Again, you're driving your cortex. Even though you have speech suppression and the signal is a bit lower, we could still decode. But importantly, in every patient we could significantly decode when they were imagining. And again, not so surprisingly, the informative decoding electrodes are really centered in auditory cortices. I'm not going into the math of the decoding, but it's all laid out, and if anybody wants to get into this, we're happy to give you any of the code we have in our lab for your own projects.

So this was really exciting. She just had another paper accepted this week in which we ran a slightly different, controlled experiment: you had an auditory cue and heard a word; then a visual cue and you had to imagine the word; then another visual cue and you had to say the word. So we had two ways to drive the cortex — auditory input and speaking — two things to compare the imagination modeling against. I want you to look here; this again makes the point about the complexity of the signal space. Here's a patient with some grids being evaluated, and you can see listening, imagining, and overt speech, with the analytic high-gamma amplitude in each electrode. Basically, this electrode is active when you hear and when you speak; this one only when you speak; this one when you hear, imagine, and speak; and this one only when you imagine and speak. That's just the way the data are. It reminded me a little of the earlier motor talk: things are not the way they seem to be in the classic literature. But we got all these data, and she was again able to use them for modeling to ask whether we can get at imagined speech. And the answer is she could. This was actually categorical imagination: we had six words — or six phrases you can imagine a patient needing: "I'm hungry," "I love you," "I have to go to the bathroom" — and asked whether, from imagination alone, we could significantly select one of the six (a toy version follows below). Again, I've already shown you that we can decode the words during listening, but in four of the five patients we got reliable decoding of imagination, with most of the significant activity in temporal areas. Note that the two right-hemisphere patients were not as good as the left-hemisphere patients — maybe not so surprising; maybe there's a lateralization for language imagination, and most likely there is.
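A toy version of the six-way categorical decoding: classify which of six words or phrases was imagined from a high-gamma feature vector, cross-validated against the one-in-six chance level. The classifier choice (linear discriminant analysis) and the synthetic features are assumptions for illustration; the study's actual decoder is laid out in the papers.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_words, trials_per_word, n_features = 6, 20, 50
# Each word gets a noisy class-specific pattern across electrode-time features.
patterns = rng.normal(size=(n_words, n_features))
X = np.vstack([patterns[w] + rng.normal(size=(trials_per_word, n_features))
               for w in range(n_words)])
y = np.repeat(np.arange(n_words), trials_per_word)

acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
print(f"accuracy: {acc.mean():.2f} (chance = {1 / n_words:.2f})")
```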
In conclusion, two slides and I'm done. This is another case we just did with Eddie: a young man who needed surgery and who is a musician. Remember, we go into these hospital rooms and there's no practice; we ask, "Can you do this?" And I think you really need to be trained to do imagination. This young man agreed to spend four to six weeks practicing two pieces of music. In one case he played while listening to the music, and in the other he played with no music and imagined it. So he was trained to imagine a Brahms and a Beethoven piece, and we recorded ECoG from this patient. Here is some of the data. You'll see one electrode: as the Brahms piece unfolds, this electrode tracks both the perception — where he's striking the keys, and of course the nice thing is we have the keys as a time code telling us where to look — and the imagination. Another electrode in the same middle temporal gyrus really encodes only perception and not imagery. That's just the way it is. And the STRFs — this is the perception of the music, and these are the STRFs — show beautiful activity in temporal regions, centered up into the high-frequency range. If you look at the imagined case, you see a very similar pattern: not as robust, but clear, clear physiology of imagination. This is the most significant imagination activity we've gotten in any subject, and I think it's because he practiced. So what we're hoping to do in the future, if we can get access to patients at home before they come into the hospital, is train them for four or five weeks — have someone go to the house and work with them — so that when they're in the hospital, maybe we can do even better.

As for the future of this field: number one, I think the interaction with neurosurgeons is absolutely critical for human neuroscience; it's amazing what has happened in the last two years. I think computational neuroscience is critical for analyzing these data sets. I think we need higher-density grids; there's probably independent activity as close as one millimeter, and maybe we can start using those at least intraoperatively, if not chronically. And in terms of developing a prosthetic device, I think simple behavioral training of the patient — before they're actually implanted by neurosurgeons for mapping for tumor or epilepsy — will be incredibly important. With that, I'll stop. Thank you very much for your attention.
Video Summary
In this video, the speaker discusses his work on intracranial recording of language and the insights it provides into cortical physiology. He highlights the importance of high-frequency cortical activity, particularly in the 60-250 hertz range, and its potential for developing a speech prosthetic device. He presents studies on categorical perception and on the representation of semantic meaning in the brain, and explores language production and the roles of brain areas such as Broca's area and the hippocampus. He also demonstrates the decoding of heard and imagined language and music from direct brain recordings, discusses the potential of these decoding techniques for an implantable speech prosthesis for patients who cannot speak, and emphasizes the plasticity of language cortex, concluding with future directions for the field.
Asset Subtitle
Robert T. Knight, MD
Keywords
intracranial recording
cortical physiology
speech prosthetic device
semantic meaning
language decoding
implantable speech prosthesis