AANS Beyond 2021: Full Collection
BrainGate Turning Thought Into Action
Video Transcription
Hi. My name is Dan Rubin. I'm a neurointensivist at Mass General Hospital in Boston and an investigator in the BrainGate clinical trial. Today I'll be talking about the development of brain-computer interfaces for the restoration of communication and functional independence for people with paralysis. These are our research disclosures. Within the BrainGate trial, we have three simple goals. We want to be able to tell all people diagnosed with ALS that they will never lose the ability to communicate. We want to be able to tell people who've had a brainstem stroke or locked-in syndrome, or their families, that they'll be able to communicate again easily tomorrow. We want to tell people who've had spinal cord injuries or strokes causing motor weakness that they'll be able to move again tomorrow. Not in a month, not in a year, but with this technology, tomorrow. How does that work? BrainGate is a brain-computer interface, and all brain-computer interfaces, be they BrainGate or one of the others, have three similar components: a neural sensor, a decoder, and the technology to which that decoder is paired. With each of these components, there are decisions to be made. When it comes to the neural sensor, we have to ask: what signal do we want to record? Do we want to record LFP, action potentials, or some surface signal like EEG or ECoG? From what area are we going to record? That has a lot to do with what sort of signal you're trying to decode. And then, what sensor are you going to use? A microelectrode array, a stentrode, an ECoG or EEG array? Within the BrainGate system, I'll tell you that we use microelectrode arrays to record from motor cortex, and we record both LFP and single-unit action potentials. We pair this signal with the decoder. So this is the black box. This is, to some extent, the magic.
This is all the math that takes the activity we record, here represented mathematically as a firing rate vector f(t), and transforms it, here represented with a matrix transform A, though in fact this A is implemented by thousands of lines of computer code, to output some useful signal. And here we've got intended hand motion, v(t), as our output signal. So really, all the magic of BrainGate, and really of any brain-computer interface, lies in this decoder, in this black box full of mathematics and computer code. Then we take this decoder and we pair it with some technology. And this is, to some extent, the fun part. This is the part where we get to do really neat stuff with the technology. Some things that you might pair this technology with are writing, typing, searching the internet, producing speech, but also prosthetic and robotic limbs and functional electrical stimulation. And we hope someday soon to close the loop and have feedback mechanisms, so the same technology can be used not only for assistive technologies but also to augment neurorehabilitation. So, as I mentioned, we use microelectrode arrays in the BrainGate trial. And this is an example of one of the arrays we use. It's sometimes called the Utah Array because it was invented at the University of Utah. We get ours from Blackrock. This array has 100 microelectrodes, each one either 1 or 1.5 millimeters long, all on this 4-millimeter by 4-millimeter array, here shown next to the Lincoln Memorial on the back of a penny for scale. And with this array, we can record action potentials (spikes), multi-unit activity, and local field potentials (LFPs). This is what it looks like under a scanning electron microscope. And this is how the system is all put together to actually do what we want it to do. So, as I mentioned, these are the electrode arrays we use, the 100-microelectrode arrays.
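The decoder described above, a transform A applied to a firing-rate vector to produce an intended velocity, can be sketched in its simplest linear form. Everything below (the channel count, the random weights, the simulated firing rates) is purely illustrative; the actual BrainGate decoders are far more sophisticated, with calibration, filtering, and thousands of lines of code standing in for this single matrix.

```python
import numpy as np

# Hypothetical dimensions: 96 recorded channels, 2-D cursor velocity.
n_channels, n_dims = 96, 2

rng = np.random.default_rng(0)
# Decoder weights A; in a real system these are fit from calibration data,
# not drawn at random.
A = rng.standard_normal((n_dims, n_channels)) * 0.01

def decode_velocity(firing_rates):
    """Map a vector of per-channel firing rates (Hz) to an intended 2-D velocity."""
    return A @ firing_rates

# Simulated firing-rate vector f(t) for one time step.
f_t = rng.poisson(20.0, size=n_channels).astype(float)
v_t = decode_velocity(f_t)  # decoded 2-D velocity for this time step
```

In practice the weights would be learned while the participant imagines prescribed movements, and the decoded velocity would be smoothed before driving a cursor.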
Currently, each of our participants has two of these placed on motor cortex. I'll describe that in a little bit. They're connected by a bundle of microwires to a percutaneous pedestal, and that's this little piece here. This is made of titanium; it's affixed to the surface of the skull, protrudes through the scalp, and has an attachment point up top where either the patient cable or a transmitter can be attached to send the signals to the computer. That's represented by this oscilloscope screen here. We don't actually use an oscilloscope anymore, but this is meant to show that there's a rack of computers reading out the neural signals in real time, picking out action potentials and LFPs, and decoding them. Then this is fed into a computer system, and that computer does whatever we ask it to do with those signals. So, in this cartoon example, a participant is using the BrainGate technology to play a hypothetical computer game. Here, the participant is imagining moving his hand as if he were operating a joystick, moving all around the screen to collect little computer icons. So far, we've implanted BrainGate devices in 14 participants. I've listed here on the side of the slide about another 14 or so who have been implanted with similar technology by other groups. So, overall, there's experience in the world with about 30-some-odd patients or participants using this technology. The BrainGate study is a clinical trial, formally a feasibility study of the BrainGate neural interface system to augment communication and restore functional independence. To be enrolled in the trial, all of our participants must have tetraparesis or tetraplegia, whether from motor neuron disease, spinal cord injury, brainstem stroke, or muscular dystrophy. We have only recruited, and are only recruiting, adults.
We ask that they be one year from their injury or time of diagnosis to ensure that they have stable neurologic symptoms. They must be able to communicate and consent for themselves. That doesn't mean they need to be able to talk; they can communicate through eye blink or eye gaze, but they have to provide their own consent. They otherwise need to be healthy enough to undergo the surgery to have the device placed, and they must live within three hours of one of our study sites so that our research technicians and investigators can get to their home to perform the research. That's because all of the actual research happens in our participants' place of residence, be it their home or a long-term care facility. Right now, our research sites are in Boston, Providence, and Palo Alto, California. This is what it looks like when an array is placed. I couldn't present to a group of neurosurgeons without showing some intra-op photos. This was one of our participants having a single microelectrode array placed. You can see there's a craniotomy and a durotomy, the cortex is exposed, and the electrode array is very gently and very precisely tapped onto the surface of the cortex. In fact, it's not tapped by hand, but with a pneumatic device that applies a very precise amount of pressure. We ask that our neurosurgeons make a small ramp in the bone so that the wire can come out without being too aggressively kinked. Then, of course, the dura and the bone are replaced. This wire attaches to the pedestal, and you can see the edge of that percutaneous pedestal here; it will be affixed to the surface of the skull, and the scalp will wrap around it as it's closed. As I mentioned, we've placed BrainGate devices in 14 participants so far, which has given us a lot of great data about safety.
In total, we've got almost 12,000 implant days of experience over our 14 participants, with several participants being in the trial for over three and in some cases even five years. If you count two arrays as separate implant days, we actually have over 17,000 days of experience. I have here on the right a partial list of some of the adverse events that we've seen. Most of these are, unfortunately, the sort of adverse events that happen to folks with paralysis regardless of whether or not they've got an investigational device placed. Fortunately, we haven't had any unanticipated adverse device events to date. So what do we do with this device once it's placed? I'll show you some examples now of the things that we're using this technology for. This first example is from one of our participants who had advanced ALS. He was unable to move his arms or legs, unable to talk, and required a ventilator to breathe. In fact, there was a little bit of pushback when we were first enrolling this participant, questioning whether or not we could even record from the motor cortex of someone with ALS, or whether the degeneration of the upper motor neurons was going to leave us with an unusable signal. But in fact, when we placed the array, we very quickly saw lots of active units, and so we were able to decode the motor activity quite easily. And this is an example of that participant using the technology. One of our investigators is giving instructions on what to do. The participant is here in the foreground, and he's thinking about moving his hand to control a joystick or a computer mouse in order to control that cursor. And just by thinking about moving his hand, because he can't actually move his hand at this point in his disease, the mouse is going where he wants it to. He's been asked to move the mouse to whichever box turns yellow and to dwell the mouse over that box for some period of time.
And when he does, another box turns yellow. He's able to do this pretty quickly, pretty easily, and has pretty excellent control right away. Now, you can probably tell from looking at this computer screen that this video is over a dozen years old, and our technology has come a long way since then. So in this video, one of our more recent participants is using the same technology, but in a much improved and upgraded form. Over the past 15 years, in addition to improvements in our hardware and our physical technology, as you all know, there's been an explosion of developments in machine learning, computational techniques for artificial intelligence, and other learning algorithms. So our decoders are better. And with better decoders and faster computers, our participants now have more precise control and can move a tinier cursor to tinier targets more quickly and accurately. I'll show you some other things that they're able to do, but this is just one example. Another thing to highlight from this video, of course, is what's in the background: the artwork in our participant's home. Because again, all this research is happening in our participants' homes, not in a university or hospital laboratory. This is another example of one of our participants using the technology, this time not for controlling a computer cursor on a screen, but rather for controlling a prosthetic or robotic arm. This participant had a brainstem stroke about 10 years before she enrolled in the trial, and she was left with tetraparesis and anarthria. Here she's been asked to think about using her own hand to pick up this thermos and bring it over to her mouth. And as she's thinking about doing that, the robotic arm, driven by the BrainGate device, is lining up the thermos, because that's what she would do with her hand. And you'll see, she's going to bring it over to take a sip. This was exciting for us for a couple of reasons.
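The dwell-to-select task described earlier, moving the cursor to the highlighted box and holding it there until it registers as a click, can be sketched as a simple timer loop. The dwell time, target radius, and sampling interval below are assumed values for illustration, not the trial's actual parameters.

```python
import math

DWELL_SECONDS = 1.0   # assumed hold time before a target counts as selected
TARGET_RADIUS = 0.05  # assumed target radius, in normalized screen units

def dwell_select(cursor_positions, target, dt=0.02):
    """Return the elapsed time at which selection occurs, or None.

    cursor_positions: sequence of (x, y) cursor samples taken every dt seconds.
    target: (x, y) center of the currently highlighted box.
    """
    held = 0.0
    for i, (x, y) in enumerate(cursor_positions):
        if math.hypot(x - target[0], y - target[1]) <= TARGET_RADIUS:
            held += dt
            if held >= DWELL_SECONDS:
                return (i + 1) * dt  # selection fires once the dwell is met
        else:
            held = 0.0  # leaving the target resets the dwell timer
    return None
```

In a real session the cursor samples would come from the decoder in real time, and a new target would be highlighted after each selection.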
One, because she had what we thought was pretty good control over a very complex robotic arm with multiple degrees of freedom. The other thing that was really meaningful for our participant was that this was the first time she had given herself anything to eat or drink under her own volition since the time of her injury. She was able to eat, but she relied on being fed. Here, this is the first time she's really given herself something to drink since her stroke, many years prior to starting on this trial. So as I mentioned, one of our primary goals is restoring communication. And for anyone who's worked with patients whose paralysis results in deficits of communication, you know that while there exist many forms of assistive and augmentative communication, things like eye gaze or letter boards or other devised systems, they're often very slow and clunky. And it's not uncommon for patients to have technology available that they just don't use, because it's too slow and cumbersome. So one of our big goals is making this technology easy, so that people want to use it. To that end, for this example we paired our best motion decoder, the fastest and most precise control of a computer cursor, with an optimized keyboard where all of the most commonly used letters are in the middle of the screen, and asked our participant to copy some phrases and later to write some phrases of their own. And you can see that they're able to write pretty quickly: in this example, 39 correct characters per minute, which, for someone who was previously communicating with either eye blinks or selecting letters from a large letter board, is really transformative in terms of the speed at which they can communicate.
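As a rough illustration of the metric quoted above, correct characters per minute can be computed from a copy-typing session like this. The position-wise match below is a simplification of my own; published studies typically use stricter, edit-distance-based scoring.

```python
def correct_chars_per_minute(typed, reference, seconds):
    """Count position-wise matches between typed and reference text, per minute."""
    correct = sum(t == r for t, r in zip(typed, reference))
    return correct * 60.0 / seconds

# Toy example: 13 of 13 characters correct in 20 seconds is 39 per minute,
# matching the rate mentioned in the talk.
rate = correct_chars_per_minute("hello, world!", "hello, world!", 20.0)
```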
We were excited by that work, but of course many participants and their families, when they're thinking about whether or not to join, will often say: well, that's all well and good, but I don't want to use just a black-and-white keyboard on a screen. I'd like to just use the computer I know how to use, which can do a lot more than just type out messages. And we said, well, sure, that seems pretty straightforward. Once you've got good control of a cursor, there's nothing really stopping you from plugging that into any off-the-shelf commercial computer product. And so that's what we did. These are two participants enrolled in the trial, both using unmodified, off-the-shelf Google Nexus tablets. The participant on the left is writing an email, actually to one of our investigators, who had congratulated her on 1,000 days in the trial. And the participant on the right is using the Google Nexus tablet to search YouTube. A couple of things to note from this pair of videos: one, both of these participants are using what we call the patient cable that connects the arrays to the computer system, and I'm going to show you in a couple of slides that we don't need to use those anymore. The other thing to note is that they both have tracheostomies and require ventilators; both of these participants have ALS and are ventilator-dependent. So in addition to augmenting and replacing lost movement with assistive technology, we've also partnered with the functional electrical stimulation team at Case Western to try to restore participants' intrinsic mobility through FES, functional electrical stimulation. In FES, as you all know, rather than recording electrodes, which is what the BrainGate device uses, stimulating electrodes are placed on the participant's peripheral nerves. In isolation, those are used as part of rehab after stroke or spinal cord injury.
But here what we've done is pair the BrainGate device, recording and decoding a participant's intended movements, with FES, translating those intended movements into signals that are then used to stimulate the peripheral nerves so that the participant can actually activate his own upper extremity. What you'll see here is a participant who had been quadriplegic from a spinal cord injury for many years prior to joining the trial, now able to use, through a combination of BrainGate and FES, his own right arm to feed himself again for the first time since his injury. You'll see there are a couple of braces in place just to help him maintain a particular range of motion, but all of the actual lifting he's doing with his own muscles. And you get the idea. One of the keys to the BrainGate research is that we keep it patient-centered. Yes, this is a clinical trial, and along the way we like to think that we've done a lot of groundbreaking neuroscience to help us develop this technology. But at the end of the day, our goal is to develop technology to help people with paralysis. So the research is really focused on understanding what our participants' needs are and trying to meet them. To that end, we highlight that we've got broadly defined inclusion criteria; we like to think that most people with severe paralysis, with quadriplegia, will be able to enroll under our inclusion criteria. There's obviously no placebo group; no one's getting sham surgery. When we enroll participants, we ask them to consider committing for one year, so the trial formally lasts a year. But once enrolled in the trial, many of our participants find that they really like using the technology. And for many, it's transformative. It gives them an ability to communicate that they didn't have before.
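One way to picture the BrainGate-to-FES pairing described above is as a mapping from a decoded movement intent to per-muscle stimulation levels. The synergy names and gains below are entirely hypothetical, chosen only to illustrate the shape of such a mapping; they are not the Case Western system's actual parameters.

```python
# Hypothetical muscle-synergy table: intent -> relative stimulation gains.
SYNERGIES = {
    "elbow_flex": {"biceps": 0.8, "brachialis": 0.6},
    "hand_close": {"flexor_digitorum": 0.9},
}

def stimulation_command(intent, effort):
    """Scale a named synergy by decoded effort (clamped to 0..1).

    Returns a dict of per-muscle stimulation levels; unknown intents
    produce no stimulation.
    """
    effort = max(0.0, min(1.0, effort))
    pattern = SYNERGIES.get(intent, {})
    return {muscle: gain * effort for muscle, gain in pattern.items()}
```

The real system decodes continuous kinematics rather than discrete intents, but the core idea, translating a decoded signal into stimulation patterns on peripheral nerves, is the same.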
And so at the end of the formal one-year period, all participants are given the opportunity to continue in an extension period, which can run for five years or more. And many of our participants, as you saw in one of the earlier slides, have chosen to do so. Significantly, people with advanced disease are not excluded. This is a big issue in trials for people with ALS: a lot of the drug trials for ALS specifically exclude participants who have advanced disease or require a ventilator, because those trials are aimed at showing that a drug slows disease progression. But we don't exclude those participants. And as you saw in several videos, individuals who use a ventilator are welcome and have been enrolled. Again, importantly, all research occurs in our participants' homes. Our goal is to develop a technology that's going to help people where they live, and so we want to make sure that it works where they live, that it works in their home. After the initial surgery, which of course happens in an operating room, and a short two-day hospital stay for post-op recovery, participants go home. And all of the rest of the research, up to the point that the device is explanted when they choose to leave the trial, happens in their home. Within a couple of days, a rack of computers shows up, we find a good place in the living room to put it, and then we come to their home roughly twice a week to run research sessions. Those research sessions could be trying out new decoding algorithms, or pairing reliable decoding algorithms with some novel technology that we want to try, for example, a robotic arm. And even during the COVID pandemic, we were able to keep this going by switching to remote sessions.
So in the spring of 2020, when it became clear that we weren't going to be able to set foot in other people's homes for some period of time, we rapidly switched everything that we could to remote sessions. We trained caregivers or family members on how to affix the wireless transmitter so that they could basically turn on the system, and then we would remote in to our participants' computers and run sessions virtually. That actually worked quite well and gave us a lot of experience in letting participants use the technology when we weren't there, which has been quite popular. Importantly, the research sessions emphasize our participants' priorities and feedback. When participants tell us what they want, we listen. No one on the BrainGate team has paralysis, so we rely on our participants to understand what sorts of things they want to use this technology for. We might want to build a more precise cursor control decoder, and our participants will tell us: no, the cursor control is great, it doesn't need to be any better, it works fine. What I really want is to use this technology to control the lights in my home, or to engage with other technology around me, for example. One piece of feedback we often receive is that we need a faster way to communicate. Even really good 2D cursor control for point-and-click selection of letters on a keyboard only gets you so fast, because you can only click on so many letters per minute. Is there some faster way to get letters out? And so we had the idea, really Frank Willett and his part of the team at Stanford had the idea: let's see if we can decode letters directly by asking our participants to imagine writing. So they asked one of the participants to imagine holding a pen and just writing out the letters.
And so rather than decoding a 2D position to drive a cursor on a screen, they decoded the two-dimensional trajectory that they modeled the pencil tip making, as the participant imagined moving their hand to write, and used that to decode the letters directly. In doing so, with some very sophisticated neural network learning algorithms, they were able to decode letters at previously unseen speeds. This is a real-time video. This is a participant who's got a spinal cord injury resulting in quadriplegia. He's imagining holding a pen in his hand and writing out the message that's written here, and this is the BrainGate system decoding what he's imagining writing with his hand, in real time. In addition to copying messages, he can also just write. He was asked where he was born, and he just imagined holding a pen or pencil in his hand and writing out the message that you'll see here. With this approach, we're able to get around 90 correct characters per minute, which is pretty fast, and that's without implementing any sort of word or language prediction; with that, you can get even faster. But 90 characters per minute, to provide a basis of comparison, is about as fast as an able-bodied and facile smartphone user can text. So when you see people texting really fast, that's about 90 characters a minute. Again, for someone with paralysis or locked-in syndrome, really life-changing, we think. Where are we going next? How are we going to get even faster? Well, the next frontier is really speech. Even faster than handwriting would be decoding entire words wholesale, and that's where we're going next. We've done some work showing that we can reliably decode phonemes; phonemes are the individual sounds that make up the syllables that make up words. And so we're now working on pairing our ability to precisely decode phonemes with more sophisticated decoders that can decode entire words wholesale.
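One small piece of the handwriting pipeline described above can be sketched directly: integrating decoded 2D pen-tip velocities into a trajectory. The character recognition itself, done with the neural network learning algorithms mentioned above, is not shown; the time step below is an assumed value.

```python
import numpy as np

def velocities_to_trajectory(velocities, dt=0.01):
    """Integrate decoded 2-D pen-tip velocities into an (x, y) trajectory.

    velocities: array of shape (T, 2), one decoded velocity per time step.
    Returns positions of shape (T, 2), accumulated from the origin.
    """
    return np.cumsum(np.asarray(velocities) * dt, axis=0)

# Toy example: 100 steps of constant rightward velocity trace a
# horizontal stroke ending near x = 1.0.
v = np.tile([1.0, 0.0], (100, 1))
xy = velocities_to_trajectory(v)
```

In the published approach the decoded trajectory is not reconstructed and then matched; the network maps neural activity to character probabilities directly, but the intuition of an imagined pen tip moving in two dimensions is the same.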
And hopefully, in another year or so, we'll be able to show you some positive results from this work. One last recent accomplishment that we're quite proud of is that we've been able to make this entire system wireless. Most of the videos I've shown you thus far use the cable that connects the pedestal, this bit here, to the rack of computers. But we realized, of course, that this big bulky cable is not the future, and so we've worked to develop a wireless transmitter system, shown here. This is a 3D rendering, but the thing looks exactly the same in real life. To put it to scale, this barrel component here is roughly the diameter of a AA battery, though a little bit shorter; it's like a medical-grade AA, half as long but the same diameter. The whole thing screws on top of the pedestal, makes contact, and then this becomes your transmitter. There's no longer a cable connecting the participant to the rack of computers; it's all done wirelessly. And this really brings us one step closer to having technology that someone can really imagine using in their home. So this is one of our participants. He's in the foreground, but he's kind of fuzzy here because the tablet computer is in focus. And really, the key to this video is how quotidian the whole thing is. This is his little neck pillow. This is him lying here. This is the wireless transmitter. What you're going to see is that he's lying in bed, using the BrainGate device to control his Microsoft Surface tablet, just picking out some songs to listen to on Pandora. And that's all it is: him listening to Pandora and choosing his music before he goes to bed. And that, again, is the idea. The beauty of this whole thing is that it's a commonplace thing that you'd want to do with a computer. This is the team. Obviously, this picture is from before the pandemic. There are lots of us working on this, and we're all excited to be doing this work.
And I want to thank you all for listening. If you have any questions or comments or any feedback, I welcome it. You can email me directly at drubin4 at mgh.harvard.edu. You can reach out to us at clinicaltrials at braingate.org or visit our website. If you have any patients who are in the Providence or Palo Alto area who you think might be interested in learning more about participating, please shoot me an email. And any other questions about the technology, I'm happy to take them. Thanks.
Video Summary
The video features Dan Rubin, a neurointensivist at Mass General Hospital in Boston. He discusses the development of brain-computer interfaces (BCIs) for people with paralysis. He focuses on the BrainGate clinical trial and its goals of restoring communication and functional independence for individuals with conditions such as ALS, brainstem stroke, and spinal cord injuries.

Rubin explains that BCIs consist of three components: a neural sensor, a decoder, and technology that is paired with the decoder. He discusses the choices involved in selecting the sensor, such as the signal to record and the type of electrode array to use. Rubin emphasizes the importance of the decoder, which processes brain activity and outputs useful signals.

The video showcases examples of how the technology is used, including controlling a computer cursor, using a prosthetic arm, and restoring mobility through functional electrical stimulation. Rubin mentions the patient-centered approach of the BrainGate research, with emphasis on meeting participants' needs and incorporating their feedback.

He also highlights the wireless transmitter system developed by the team, eliminating the need for cables and making the technology more user-friendly. Rubin concludes by inviting questions, feedback, and potential participants for the trial.

Credits: This summary is based on a video presented by Dr. Dan Rubin at the Neural Interfacing: Moving Out of the Lab IV conference in 2021.
Keywords
Dan Rubin
BCIs
BrainGate clinical trial
neural sensor
decoder
wireless transmitter system