2018 AANS Annual Scientific Meeting
508. Convolutional Neural Networks Provide Rapid Intraoperative Diagnosis Of Neurosurgical Specimens Imaged With Stimulated Raman Histology
Video Transcription
Our next speaker, who I'm proud to introduce, is Dr. Todd Hollon, who has received the Ronald Bittner Award in Brain Tumor Research. He's going to be speaking to us about how convolutional neural networks provide rapid intraoperative diagnosis of neurosurgical specimens imaged with stimulated Raman histology. Thank you so much for your attention, and I'd like to thank the AANS and the tumor section for the Bittner Award. It truly is an honor. So this is a talk about rapid histology and how it can be combined with artificial intelligence to help inform decisions in the operating room. Accurate intraoperative diagnosis is essential for providing optimal surgical care, and our current intraoperative pathology workflow starts with a surgical specimen, whether it's from a resection cavity or a stereotactic biopsy; we heard a bit about how important stereotactic biopsies and repeat biopsies are going to be in the future. The specimen is then sent to a frozen section lab where it's snap frozen, cut, mounted, and stained, which introduces a fair amount of freezing and sectioning artifact that can really limit interpretation and requires a neuropathologist who is physically present to read the slides. This can really limit our ability to provide efficient and timely neurosurgical care. So what we're proposing is a completely new pathway, where the specimen is imaged using a technique called stimulated Raman histology. This is a label-free imaging technique: it requires no tissue processing of any kind, no sectioning or staining. Then, to interpret those images, we use artificial intelligence, namely a convolutional neural network, which can give you an interpretation of the image and a diagnosis in a fraction of the time, an order of magnitude less time.
So to achieve this, we use convolutional neural networks, which are the current state of the art in computer vision. These are built on computational neurons, which were inspired by biological neurons and were developed significantly in the 1980s. A neuron has presynaptic inputs that go through a weighted sum and then pass through a nonlinear activation function, which generates the computational equivalent of an action potential. The other component is convolutional filters, which highlight specific features within images, for example edges, contrast, or certain colors. A convolutional neural network works by passing the image through multiple sequential layers of convolutions, extracting meaningful image features to achieve an image classification task. As you move through these convolutional layers, the spatial information is progressively removed until you're left with a feature vector that numerically describes the image. For example, if you're trying to classify histologic images, that vector may capture how hypercellular the image is, whether there's microvascular proliferation, or the nuclear-to-cytoplasmic ratio, and you can then use those features to classify the image into a brain tumor subclass. So the question becomes: can we develop a convolutional neural network to rapidly evaluate fresh surgical specimens and provide accurate brain tumor diagnoses? We started with a raw SRH image and tiled it into 300-by-300-pixel tiles. This serves two purposes: it significantly boosts our data set, which is very important for training neural networks, and it allows us to use very high-resolution images to extract the most information from each image. We used the Google Inception V3 CNN, which is well studied and well validated.
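The computational neuron and convolutional filter described above can be sketched in a few lines. This is an illustrative toy example only (the filter here is a standard Sobel edge detector), not the network used in the study.

```python
import numpy as np

# A single computational neuron: a weighted sum of inputs followed by a
# nonlinear activation (ReLU here), the computational analogue of an
# action potential.
def neuron(inputs, weights, bias):
    z = np.dot(weights, inputs) + bias   # weighted sum of presynaptic inputs
    return max(0.0, z)                   # nonlinear activation

# A convolutional filter slides over the image and responds strongly
# wherever the local patch matches the feature it encodes.
def convolve2d(image, kernel):
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy image: dark left half, bright right half -> one vertical edge.
img = np.zeros((5, 5))
img[:, 3:] = 1.0

# Sobel filter for vertical edges: responds where intensity changes
# left-to-right, an example of the "edge" features mentioned above.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
edges = convolve2d(img, sobel_x)
print(edges.max())  # strongest response sits on the edge column
```

A full CNN simply stacks many such filters in sequential layers, learning the filter weights from data rather than hand-coding them as above.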
And we reprogrammed the network so that its outputs were 13 different diagnostic classes covering the most common brain tumor types. We can also consolidate those output probabilities into important clinical decision points: for example, is it lesional tissue or normal tissue? Surgical or non-surgical? Is it a lymphoma? Is it high-grade or low-grade, et cetera? One of the deep, important breakthroughs of modern AI and neural networks is that you do not explicitly train the individual weights or convolutions. These are learned and optimized from a large set of training examples, basically experience for the neural net. In our case, we trained on 2.4 million images from 415 prospectively enrolled patients, and we held out a random set of 21 patients as a validation set to test the network's performance. To implement this, we used Keras, which is a high-level API that wraps a low-level symbolic mathematical library called TensorFlow, also from Google. You also need a fair amount of high-performance computing, so we used two NVIDIA graphics cards, and the code is available on my GitHub page. So how did we do? On the 21-patient held-out set, looking at just the small tiles, so very, very small fields of view, we achieved an accuracy of 75.6%, which really exceeded our expectations given that this is several orders of magnitude less image than what your standard neuropathologist uses in the operating room. You can see some of the errors: some malignant gliomas called low-grade, some low-grades called pseudoprogression (a class in which we also included gliosis), as well as some pilocytics called low-grade. All are understandable mistakes given that, again, there's going to be a fair amount of histologic heterogeneity within any given image.
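Repurposing Inception V3 for the 13 diagnostic classes might look like the following minimal Keras sketch. The layer choices, optimizer, and use of `weights=None` here are illustrative assumptions, not the authors' exact training configuration.

```python
# Sketch: adapt InceptionV3 to a 13-class histologic diagnosis task.
# The classification head and hyperparameters are assumptions for
# illustration; consult the authors' repository for their actual setup.
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models

NUM_CLASSES = 13  # most common brain tumor types plus non-lesional classes

# Drop the stock 1000-class ImageNet head (include_top=False) and
# accept the 300x300 SRH tiles described above.
base = InceptionV3(weights=None, include_top=False, input_shape=(300, 300, 3))

x = layers.GlobalAveragePooling2D()(base.output)          # spatial info -> feature vector
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)  # class probabilities

model = models.Model(base.input, outputs)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

The softmax output is what makes the downstream consolidation possible: probabilities over the 13 classes can be summed into coarser decision points (lesional vs. normal, surgical vs. non-surgical) without retraining.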
So now that we have these individual tile probabilities, how do you integrate them into a diagnosis for a patient or a specific specimen? This is our usual imaging size, 3 millimeters by 3 millimeters, and when you pool this information using a technique we developed in our lab called local probability pooling, you can see that one diagnosis clearly dominates over all the others. The top example is a glioblastoma. The next is a patient ultimately diagnosed with an oligodendroglioma, called an LGG intraoperatively, then a metastatic carcinoma, and the bottom row here is a patient with lymphoma. This was a stereotactic biopsy, and you can see very dense, high probability for lymphoma in the tumor core. So now that we're able to integrate all the tile information, how do we perform at the patient level, which is ultimately the most important thing? We had only a single error, for 95% accuracy: one patient ultimately diagnosed with a pilocytic astrocytoma was labeled as a low-grade glioma, which isn't really that bad of an error, and I think it is ultimately due to not having a sufficient number of training examples of pilocytics. This will improve in the future as we continue to image more patients. So now that you have a trained neural network that's performing well, with optimized convolutions and weights, you can ask: how similar do individual tumors look to one another given the numerical representation, that feature vector I mentioned earlier? Using a technique called stochastic neighbor embedding, which is a dimensionality reduction and data visualization technique, you can look to see whether individual tumor classes cluster together, which is what you expect to see, and it is what we ultimately saw.
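Local probability pooling is the lab's own technique; the sketch below shows only the simpler underlying idea of averaging tile softmax outputs and taking the dominant class, with an illustrative subset of class names. It is not the authors' pooling algorithm.

```python
import numpy as np

# Each row is one tile's softmax output over the diagnostic classes.
# A simple specimen-level call averages tile probabilities and takes
# the argmax; "local probability pooling" refines this basic idea.
CLASSES = ["glioblastoma", "low-grade glioma",
           "metastasis", "lymphoma"]  # illustrative subset of the 13 classes

def pool_tiles(tile_probs):
    specimen_probs = tile_probs.mean(axis=0)        # average over all tiles
    return CLASSES[int(np.argmax(specimen_probs))]  # dominant diagnosis

# Three tiles from one hypothetical 3 mm x 3 mm specimen: despite
# tile-to-tile heterogeneity, one diagnosis clearly dominates.
tile_probs = np.array([
    [0.70, 0.20, 0.05, 0.05],
    [0.55, 0.30, 0.10, 0.05],
    [0.80, 0.10, 0.05, 0.05],
])
print(pool_tiles(tile_probs))
```

This is why tile-level errors (75.6% accuracy) can still yield 95% patient-level accuracy: individual noisy tiles are outvoted when probabilities are aggregated across the whole specimen.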
So this is a downsampling of our total data set. These are medulloblastomas and lymphomas here, pituitaries, mets, meningiomas, and then this is our glioma cloud, or sorry, our glial cloud, and you can see how well it separates. The glial tumors actually form very tight clusters, and if you focus just on this region, you see an interesting shape forming: these are GBMs. This is the pilocytic cloud, and then you see this bridge of low-grade gliomas, with gliosis down here. This is normal gray matter, and then white matter, which has a very distinct signature in SRH images and so clusters very tightly. One of the unique and important things about intraoperative pathology is that you want to differentiate specific pathologies in order to inform decision-making in the operating room. For example, again: is this a normal specimen, surgical, high-grade, et cetera? When we look at just these small image tiles, look at these important decision points in the operating room, and perform ROC analysis, our areas under the curve were very, very good, greater than 0.90 in the majority of cases. What this ultimately means is that using a very small amount of tissue, we're able to very confidently inform decisions in the operating room. This is the last thing I'll mention, and it is really an entire talk in itself, but we are able to automatically detect infiltrating tumor. This is a patient we imaged at the margin of a glioblastoma, and you can see very clearly in this specimen dense tumor here; this is a high-magnification view on the left-hand side, this is dense tumor, and here you can see gliotic brain.
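The ROC analysis for a binary decision point (for example, lesional vs. normal tissue) can be sketched with scikit-learn as below. The labels and scores here are synthetic, purely to illustrate how an area under the curve above 0.90 is computed; they are not the study's data.

```python
# ROC analysis for one binary intraoperative decision point.
# Labels and scores are synthetic illustrations, not study data.
from sklearn.metrics import roc_auc_score

y_true  = [1, 1, 1, 0, 0, 1, 0, 0]   # 1 = lesional tile, 0 = normal tile
y_score = [0.9, 0.8, 0.7, 0.4, 0.3, 0.85, 0.2, 0.75]  # model P(lesional)

# AUC = probability that a random lesional tile scores higher than a
# random normal tile; 1.0 is perfect, 0.5 is chance.
auc = roc_auc_score(y_true, y_score)
print(round(auc, 4))
```

Pooling softmax probabilities into each binary decision point and thresholding the pooled score is what turns the 13-class output into the go/no-go answers a surgeon actually needs.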
This is actually a reactive astrocyte here and here, and if you perform the exact same analysis I just described, you can very clearly see an infiltrating tumor margin: this is dense tumor, this is the margin of the tumor right here, and you can see we're getting into normal brain, gliotic brain here. So in conclusion, artificial intelligence-based classification of SRH images can be used to inform decision-making in brain tumor surgery, and the combination of artificial intelligence and SRH is a promising approach that may allow surgeons to achieve more complete resections of brain tumors. I certainly hope this qualifies as a disruptive innovation, and I'd like to acknowledge my PI, Dr. Daniel Orringer, my chairman, Dr. Karin Muraszko, and my program director, Dr. Cormac Maher, who have been very supportive of me. Thank you very much. Thank you.
Video Summary
Dr. Todd Hollon discusses the use of convolutional neural networks (CNNs) and stimulated Raman histology (SRH) for rapid intraoperative diagnosis of neurosurgical specimens. The traditional pathology workflow involves freezing, cutting, mounting, and staining specimens, which can limit interpretation and delay care. In contrast, SRH is a label-free imaging technique requiring no tissue processing. CNNs, specifically the Google Inception V3 CNN, can interpret SRH images and provide a diagnosis in a fraction of the time. The CNN is trained on a large dataset of SRH images and achieves an accuracy of 75.6% on small image tiles. When tile probabilities are integrated at the patient level, the CNN achieves 95% accuracy. Classification across tumor classes and important operating-room decision points is highly accurate, and the method can also detect infiltrating tumor margins. The combination of artificial intelligence and SRH shows promise in improving brain tumor surgery outcomes. Dr. Hollon credits his mentors, Dr. Daniel Orringer, Dr. Karin Muraszko, and Dr. Cormac Maher, for their support.
Asset Caption
Todd Hollon, MD
Keywords
convolutional neural networks
stimulated Raman histology
neurosurgical specimens
rapid intraoperative diagnosis
label-free imaging