AANS Online Scientific Session: Stereotactic & Fun ...
Restoration of volitional hand grasp in cervical quadriplegia with a brain computer interface that decodes continuously
Video Transcription
Hello, and thanks for tuning in to my talk. My name is Benyamin Meschede-Krasa, and I'm a Technical Research Associate at MIT and Massachusetts General Hospital in the Neuroscience and Statistics Research Lab. Today, I'll be presenting my work in collaboration with Dr. Iahn Cajigas at the University of Miami Health System on a brain-computer interface for cervical quadriplegia that decodes continuously. Brain-computer interfaces, or BCIs, are a promising intervention for improving the independence and quality of life of the approximately 5.3 million paralysis patients in the USA. Survey data show that regaining arm and hand function is a top priority for quadriplegic patients. We describe a BCI implementation for a patient with complete cervical quadriplegia (C5), with the goal of decoding hand-grasp intent. Specifically, today I'll be focusing on decoding algorithm development and adaptivity, but first, I'll describe the BCI. This BCI uses a chronically implanted ECoG device with two channels. Most chronic BCIs have used multi-unit depth electrodes, which suffer from glial scarring and signal dropout over time. These limitations may be mitigated by ECoG because it's less invasive. Acutely implanted ECoG devices have demonstrated that motor decoding is possible. The device we use has fewer channels than multi-unit depth electrodes and fewer channels than most acutely implanted ECoG arrays, but we demonstrate that reliable decoding of motor intent is still possible. Next, I'll give you an overview of the implantation procedure. Here, you can see imaging of the subject's spinal cord injury in a sagittal T2-weighted MRI, showing a small fluid-filled cavity in the spinal cord at C5. The dominant hand region was localized using fMRI during hand motor intent, seen in red, and controlled for shoulder movement, seen in green. 
Diffusion tractography was also obtained to confirm that these regions overlie the corticospinal tracts, shown here in blue and purple. Here, you can see the intraoperative image showing the electrodes oriented in the anterior-posterior direction, located at the hand motor region of the left hemisphere precentral gyrus, or motor cortex. And thanks to our wonderful surgical team, there were no complications and the implantation was successful. Our objective is to decode hand-grasp intent from just two ECoG channels. Hand grasp as a binary target is feasible with a limited channel count, and it's also a patient priority, as I mentioned previously. Data were collected in two- to six-minute trials of continuous motor instruction presented as a move or rest command on a computer screen. We decided to use frequency information in the ECoG data for decoding, as it appears to highlight the signal differences between move and rest states. Power spectral estimation is a technique that quantifies the power of oscillations across frequencies for a window of time-series data. This technique is often applied to neural data, as it can be related to the underlying circuit activity. Here, the power spectral density, or PSD, is plotted for all move states in black and rest states in red in both channels one and three. It's clear that there's some separation in the beta range between 13 and 25 hertz, especially in channel three, but also in channel one. In these two spectrograms, you can observe the dynamics of transitioning from a rest to a move state for both channels. The spectrogram is a plot colored by power across frequencies, which are plotted along the y-axis, over time on the x-axis. It shows which frequency bands are most prominent over time in the neural data. 
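The beta-band comparison described above can be sketched in a few lines. This is a minimal illustration with synthetic data, not the study's pipeline: the sampling rate, window length, and signal model are all assumptions chosen only to show how Welch's method quantifies beta (13-25 Hz) power in move versus rest windows.

```python
import numpy as np
from scipy.signal import welch

fs = 250  # assumed sampling rate in Hz; the actual device rate may differ
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / fs)

# Synthetic stand-ins for one ECoG channel: rest carries a strong 20 Hz
# (beta) oscillation that attenuates (desynchronizes) during movement intent.
rest = np.sin(2 * np.pi * 20 * t) + rng.normal(0, 1, t.size)
move = 0.2 * np.sin(2 * np.pi * 20 * t) + rng.normal(0, 1, t.size)

def beta_power(x, fs, band=(13, 25)):
    """Mean power spectral density in the beta band via Welch's method."""
    f, pxx = welch(x, fs=fs, nperseg=fs)  # 1-second windows -> 1 Hz bins
    mask = (f >= band[0]) & (f <= band[1])
    return pxx[mask].mean()

print(beta_power(rest, fs), beta_power(move, fs))  # rest beta exceeds move beta
```

The same per-window beta-power number, computed on sliding windows, is what a spectrogram displays over time.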
In these two spectrograms, the data are averaged over all transitions from a rest to a move state, and this transition is easy to see here as a decrease in beta activity, again especially in channel three. Below, the raw ECoG waveforms for individual transitions are plotted, and it's more difficult to see the difference between states in the raw ECoG waveform, so that motivates us to use the power spectral estimates. Here, data from an individual trial are shown; the top two plots are the spectrograms for channel one and channel three, and you can see those beta desynchronizations during move states relative to rest, but it's less clear than when averaged over transitions. Below, the motor instruction is plotted, with green for move states and black for rest states, and you can see the random transitions there. Thirty-four trials of data were collected, and they were then split into a train set and a test set. This is a common machine learning strategy for better generalization and to prevent overfitting. To select between decoding strategies and regularization parameters, we used leave-one-out cross-validation on the train data set. This is a principled machine learning technique for model selection. All decoders were designed for continuous and fast decoding so that they could be used for closed-loop implementation. Cross-validation selected a decoder architecture with dimensionality reduction and temporal modeling. For any technical details, please reach out to me via email, which is given on the first and last slides. The decoder was able to generalize well to the test set, as seen in the receiver operating characteristic curve, or ROC curve. This plot shows decoder performance across increasing thresholds from 0 to 1. The false and true positive rates at each threshold are calculated, generating this curve in blue over increasing thresholds. 
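Since the talk defers the technical details, here is a hedged sketch of what leave-one-trial-out model selection can look like. Everything below is an assumption for illustration: the features, the logistic-regression classifier, and the candidate regularization strengths stand in for the actual architecture, which the talk says used dimensionality reduction and temporal modeling.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

# Hypothetical features: one row per time window (e.g. band powers from the
# two ECoG channels), labeled move (1) vs rest (0), grouped by trial.
n_trials, n_windows = 8, 50
X = rng.normal(size=(n_trials * n_windows, 2))
y = rng.integers(0, 2, size=n_trials * n_windows)
X[y == 1, 0] -= 1.0  # move windows: lower beta power on channel one
groups = np.repeat(np.arange(n_trials), n_windows)

# Leave-one-trial-out CV: each fold holds out one whole trial, respecting
# trial structure while comparing candidate regularization strengths.
logo = LeaveOneGroupOut()
for C in (0.01, 1.0, 100.0):
    scores = cross_val_score(LogisticRegression(C=C), X, y,
                             groups=groups, cv=logo, scoring="roc_auc")
    print(f"C={C}: mean held-out AUC = {scores.mean():.2f}")
```

Holding out whole trials, rather than random windows, is what keeps this comparison honest: adjacent windows within a trial are correlated, so splitting them across folds would inflate the estimate.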
A decoder making random predictions would have equal false and true positive rates, which would be represented by this black dotted line here. The area under the ROC curve, or AUC, is a threshold-independent measure of performance, with 1 meaning perfect decoding and 0.5 meaning random decoding. Over the entire test set, the AUC was 0.73, which demonstrates greater-than-chance decoding. And this green dot over here corresponds to the true and false positive rate at the optimal threshold calculated from the training data. Now when we break out AUC and calculate it per trial, the median AUC per trial increases to 0.78, seen here in this black line. And it's heavily skewed towards 0.8, or even above 0.9. And it's clear that a few trials are performing especially poorly, and these two are performing even worse than random. So let's explore what's going on in these poorly performing trials compared to these well-performing trials. Plotted here are the decoder predictions for the highest-performing trial, trial A, which had an AUC of 0.94. The top two plots again are the spectrograms for both ECoG channels, and the third plot is the move/rest instruction, which is the target of the decoder. The bottom plot shows the decoder predictions as the probability of motor intent over the course of the trial. We can see that the decoder tracks the motor instruction well here, with a low predicted probability during rest instruction, seen here in black, and a high predicted probability of motor intent during green move instructions. And these transitions are very fast, on the order of seconds or faster. Now in trial B, the worst-performing trial, with an AUC of 0.32, we see a pretty notable change in the beta band signal. It's no longer clear to the eye that changes in beta activity correspond to changes in motor intent, at least not as they did previously. And you can tell the signal is different from trial A. 
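The ROC/AUC machinery described above is standard and can be sketched directly. The predicted probabilities below are synthetic stand-ins for a decoder's output, not the study's data; the point is only how the threshold sweep generates the curve and how AUC summarizes it.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical predicted probabilities of motor intent vs. true move/rest
# labels, standing in for one trial's decoder output.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 200)
y_prob = np.clip(y_true * 0.3 + rng.normal(0.35, 0.25, 200), 0, 1)

# Sweeping the decision threshold yields one (FPR, TPR) point per threshold;
# joined together, these points trace the ROC curve.
fpr, tpr, thresholds = roc_curve(y_true, y_prob)
auc = roc_auc_score(y_true, y_prob)
print(f"AUC = {auc:.2f}")  # 1 = perfect, 0.5 = chance (the dotted diagonal)

# Youden's J statistic (TPR - FPR) is one common way to pick an "optimal"
# operating threshold from training data, as the green dot illustrates.
best = thresholds[np.argmax(tpr - fpr)]
```

Computing AUC per trial, as in the talk, just means running this on each trial's predictions separately and then looking at the distribution of the resulting scores.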
This is also observed in the decoder's predictions, which don't align well with the motor instructions. You can see here in the beginning that a high probability of motor intent is predicted during rest instruction, and then it remains pretty low despite changes in instruction here. So next, we explored whether adaptive decoding could recover performance in these poorly performing trials. We hypothesized that if there was a change in encoding from the train data that the model hadn't been exposed to yet, performance could be recovered by updating parameters using the first half of the test trial. We could then observe performance on the second half of the trial using these updated parameters, and performance would increase if the way the data were encoded had changed. The parameter-refitting method was selected using cross-validation adapted for time-series data. We call the decoder shown on the previous two slides the static decoder, as its parameters are not updated for individual trials. The ROC curves for static and adaptive decoders on the test set are shown, and the addition of adaptive parameter refitting did not cause much of an improvement in performance. The adaptive decoder had an AUC of 0.69, while the static decoder had an AUC of 0.68. The reason the static decoder's AUC and ROC curve differ from what I showed on a previous slide is that the test set now includes only the second half of each trial, because the first half was used for parameter refitting. When we calculate the per-trial differences in AUC between adaptive and static decoders, most trials did show an improvement, seen here as a positive difference, and in these three trials, there was a negative difference, meaning the adaptive decoder actually performed worse. 
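The static-versus-adaptive comparison scheme above can be sketched as follows. This is an illustrative toy, assuming a logistic-regression decoder and synthetic two-channel features in which the move/rest encoding deliberately drifts between training and test; the study's actual refitting method was selected by time-series cross-validation and is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def make_trial(n, shift):
    """Synthetic trial: 'shift' sets how move windows differ from rest on
    channel one; changing its sign mimics a change in encoding."""
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, 2))
    X[y == 1, 0] += shift
    return X, y

X_train, y_train = make_trial(1000, shift=-1.0)
static = LogisticRegression().fit(X_train, y_train)

# A test trial whose encoding has drifted relative to the training data.
X_test, y_test = make_trial(200, shift=+1.0)
half = len(y_test) // 2

# Static decoder: score the second half with unchanged parameters.
auc_static = roc_auc_score(y_test[half:],
                           static.predict_proba(X_test[half:])[:, 1])

# Adaptive decoder: refit on the first half of the trial, then score the
# same second half, mirroring the comparison described in the talk.
adaptive = LogisticRegression().fit(X_test[:half], y_test[:half])
auc_adaptive = roc_auc_score(y_test[half:],
                             adaptive.predict_proba(X_test[half:])[:, 1])
print(f"static {auc_static:.2f} vs adaptive {auc_adaptive:.2f}")
```

In this toy the drift is large by construction, so refitting helps a lot; the talk's empirical finding is that on the real data the gain was small, which is what motivates the alternative attention hypothesis that follows.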
The median improvement was 0.012, seen here in this blue line, which is a pretty small increase, and that was surprising to us because we expected that refitting the parameters to the first half of the trial would be quite relevant to decoding the data directly following it. Low attention levels could be an alternative hypothesis to explain these poorly performing trials. This would be further supported by noting that the static decoder performed worse when tested on the second half of test-set trials, with an AUC of 0.68, versus an AUC of 0.73 over the entire test set, which would be expected if the patient was losing focus over the course of a trial. In conclusion, adaptivity may not increase decoder performance, but this would need to be further studied. Adaptivity may be more relevant over the long run if these signals change over time during chronic implantation, which has not been well studied for chronic ECoG implantations. Additionally, we're working towards closing the loop, meaning running the decoder in real time and allowing the subject to learn how to use it. It's possible that if our attention hypothesis is correct, closed-loop performance will increase if the subject is more engaged when seeing and feeling the output of the decoder. Furthermore, neuroplasticity has been shown to improve performance of paralysis BCIs, and so that could play a role in further improving performance in a closed-loop setting. We've developed a Python package for a real-time decoder using the parameters selected from the methods I just presented, and we're preparing to gather closed-loop data soon. This goes hand-in-hand with our work to bring this BCI into the patient's home so they can start to reap the benefits of this research in their everyday life. 
My colleagues have developed a very easy-to-use phone app that will allow the patient to control the BCI untethered and independently of clinician involvement. So that concludes my talk. I'd like to thank Dr. Iahn Cajigas for really being the driving force behind every step of this project, and thank you to Dr. Emery Brown for supporting me and this work. Additionally, John Abel, John Tauber, and Indie Garwood were essential teammates in developing the decoding methods I presented, and thank you to Jonathan Jagid, Abhishek Prasad, Dr. Michael Ivan, and Noeline Prins for their work on device implementation and data collection. If you have any questions about methods or code or really anything, please email me at benmk@mit.edu. I'm happy to chat or hear your thoughts, and thanks again for watching.
Video Summary
The video features Benyamin Meschede-Krasa, a Technical Research Associate at MIT and Massachusetts General Hospital, presenting his work on a brain-computer interface (BCI) for cervical quadriplegia. The BCI aims to decode hand-grasp intent in patients with complete cervical quadriplegia. The system uses a chronically implanted electrocorticography (ECoG) device with two channels, and the decoding algorithm is based on frequency information in the ECoG data. The results show that the decoder can generalize well to the test set with an AUC of 0.73. Adaptive decoding was explored but did not significantly improve performance. The ultimate goal is to bring the BCI into the patient's home for real-time use.
Asset Subtitle
Benyamin Meschede-Krasa
Keywords
brain-computer interface
cervical quadriplegia
electrocorticography
decoding algorithm
real-time use