Artificial Intelligence for Neurosurgical PAs - Daniel Donoho
Video Transcription
Hi, good morning. My name is Daniel Donoho. I'm a pediatric neurosurgeon at Children's National Hospital in Washington, D.C. I'm here to talk to you today about some of the things you may have been seeing and reading about, some of which may already have affected your practice: the effects of the growing revolution in artificial intelligence and what neurosurgical PAs need to know. We're going to keep this lecture fairly brief, and we have a structured Q&A session later in the day, so I look forward to sharing this information with you and talking about it more in the sessions around lunchtime. As far as disclosures, my work is funded by the National Institutes of Health. I have some unrelated ventures and a nonprofit that I work in, which I'll tell you a little bit about during the course of this talk, but for the most part they are not related to and don't pertain to the content we're covering. The main disclosures I have for this talk are that I am not an AI enthusiast, and I don't know that there are AI experts; I certainly don't think I am one. But it's an area I care a lot about and pay close attention to, because I think its impacts are actually still underhyped. I don't think we fully understand how much it's going to affect our lives on a daily basis. I don't think it's going to replace physicians or PAs, and I don't think it's going to bring about the end of the world or create existential risk, but I do think it's going to have both a jarring and a more subtle, imperceptible impact on our daily practice, and we'll talk about that during this talk. This is a common narrative, "will AI replace your doctor," and we'll address whether we're all really going to see robots coming to the bedside to hold grandma's hand. We're going to explain why this is happening, define some terms you'll come across in popular and other media, provide a framework to consider when you encounter these AI products, software, and other solutions, and give some demonstrative examples that pertain to neurosurgery specifically. These are the core messages of this talk, so if you remember nothing else, this is the most important slide, which is why I put it first. First, the digital transformation of medicine is inevitable. It's not something to be feared or fought against; it is part of our reality. We've already seen this, and we'll point out some of the ways we've already been through it, so we're experienced in encountering these kinds of changes. Second, the capabilities of a digital transformation increase very rapidly, and it's important to understand that what you see today, or yesterday, or the day before does not map linearly onto what we're going to see in the coming months and years. Third, AI is not magic; in medicine especially, it is decidedly not magic. It is something that can be understood, and its implementation can be assessed by the people who know the subject matter best. You are the constituents. You are the deciders who are going to evaluate whether these systems are actually any good. And as users, it's important to know that your competency in using these tools determines how well they work in actual practice; a competent user of these tools makes all the difference in the world. So digital transformation is coming, or is it? I want to examine the other side of this coin first.
This is what some of the digital transformation in medicine looked like in the beginning: a very tired neurosurgery resident on call with about seven pagers on their belt, which is progress relative to the prior state of the art of sitting by the telephone. We're going to talk a little bit about this concept of variation in practice and how, even though technologies or innovations might diffuse into medicine, they're not uniformly adopted, and standardization is actually pretty limited. Lastly, we're going to examine the effect of the electronic health record to see whether that should tell us anything about what's coming in AI. I want to start with that for a second. We've lived through this period in which we developed both electronic health records and electronic prescribing of medication, and most of us do this on a daily basis. For the most part, we don't write prescriptions out by hand anymore, especially for certain high-risk prescriptions like narcotics and other controlled substances, but there are differences in adoption, and those differences are real. Depending on whether you're in a hospital or in an office, you might be more or less likely to write a prescription out by hand rather than enter it into a computer. This is something we can all think about in our daily practice, and it's a very simple change compared to AI. It's not immediately obvious to me that any kind of digital transformation or AI is always going to be everywhere; we're going to have to pick and choose how we adopt it. We've also lived through version 2.0 of this. As a fun exercise, can anyone guess what year this might be? The graph shows the utilization of telehealth for care, so if you guessed 2020, you were right. There was essentially no telehealth in January, and then something happened globally around March to April of 2020 that dramatically changed the utilization of telemedicine, a digital technology that had been in existence for a very long time: more than a decade in common consumer usage and many decades in more boutique and specialized applications. In fact, even after this peak subsided, there's massive variation in how telehealth is used today. It's heavily used in psychiatry, and neurosurgery is somewhere near the bottom of the list. This should tell us that any digital technology or digital transformation is going to be unevenly applied across specialties, even irrespective of these massive global changes. On the left is a physician from thousands of years ago writing on a cuneiform tablet, and on the right is the modern practice of medicine. Maybe not much has changed in some ways, but there's a lot going on underneath the hood. We're going to talk about what this third wave of digital disruption in medicine could look like. Just to recap: we went through the electronic medical record transformation, we went through telehealth, and now we're coming into this third wave of AI. First of all, it's important to say that AI is still very underutilized in medicine compared to other industries, where both firms and workers are increasingly using it on a daily basis. The second major reason this is happening now in medicine is this cost graph, which shows that hospital services, at the top, have had the greatest change in price compared to almost any other good in our economy, particularly technologically intensive hospital services, though all medical care has increased in price.
This creates pressure to reduce costs, or to reduce the price of goods through digital transformation, and it creates opportunities here that might not be seen in, for example, TVs, which are actually far more affordable now than they were 10 or 20 years ago. This is a massive area of investment, both in the private sector and in government, and that essentially sets the societal playing field for what we should be looking at and thinking about. So what is this data that could be studied using AI, and what might we actually be talking about? We might be talking about data that looks like the examples on the left: common medical data. There's text, there are laboratory results, an EKG, an MRI showing a large pituitary adenoma. This looks nothing like the data on the right, which is a nice, neat Excel sheet with rows and columns and headers and titles, where maybe some formulas are even being used to generate some of the data. The data on the left is what we call unstructured data. It doesn't inherently have a shape or a relationship, although there certainly are shapes and relationships within it. And it doesn't inherently come with labels, which is to say the EKG you're looking at doesn't always have a read attached to it, a ground truth, or a description of what exactly is going on. But it sometimes does, and when it does, that can be quite useful for us. It's very different from that sales chart, which requires all of those elements to be present in order for humans to understand it and for it to be useful. So when we talk about what is happening in AI, we're really talking about this concept of a deep learning revolution. We'll get to what that means in a second, but the important thing to know when you hear this term is that these are systems that can model relationships in really, really big (and we'll talk about what really big means) quantities of data that look like our medical data. We've had Excel models for a long time. We've had standard statistics and regression and all of these things for a long time. But they don't work out of the box on our medical data; we have to do something to get that medical data to look like that Excel sheet, which, if anyone has ever done a research project, is a big part of what we actually end up doing at the beginning of a lot of our research. But now, with this revolution, we might potentially be able to understand that messy medical data in the same way that we can understand that Excel sheet.
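To make the structured-versus-unstructured distinction concrete, here is a minimal illustrative sketch in Python (not from the presentation; the note text and field names are invented). The structured row is immediately analyzable, while pulling even a single value out of the free-text note takes a hand-written rule, which is exactly the kind of manual structuring work that traditionally precedes analysis of medical data.

```python
import re

# Structured data: already shaped like the tidy Excel sheet.
structured_row = {"mrn": "12345", "sodium_mmol_l": 138, "wbc_k_ul": 11.2}

# Unstructured data: a free-text note with no inherent rows, columns, or labels.
unstructured_note = (
    "POD#1 s/p endoscopic resection of pituitary adenoma. "
    "Sodium 138, WBC 11.2. No CSF leak. Vision stable."
)

def extract_sodium(note: str):
    """Pull a sodium value out of free text with a brittle, hand-written rule."""
    match = re.search(r"sodium\s+(\d+)", note, flags=re.IGNORECASE)
    return int(match.group(1)) if match else None

print(extract_sodium(unstructured_note))  # 138 -- one field, one rule; deep learning
                                          # aims to model the whole note instead
```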
So we've talked about some reasons to have pause about this revolution. We've talked about the fact that even consumer trends don't get adopted uniformly, that there are differences between hospitals and practices. We've talked about differences between specialties. And we've also lived through this technological disruption of daily work. So before we get too far down this rabbit hole, I think there's good reason to take a pause and hold a somewhat more conservative view about how far-reaching or how limited this might be. Now that we've given you a taste, we've shown you why this is happening, that there are real pressures to digitize medicine, and we've talked a little bit about what's being digitized, this medical data that's otherwise difficult to understand, I want to go over a couple of terms that you'll see over and over again. Artificial intelligence is a very widely applied label, especially in 2024. It refers both to the phenomenon of systems that can think and reason like human beings and to a family of techniques that fall under that umbrella term. At a high level, we can draw a distinction between what's called weak or narrow AI and what's called strong or general AI. Weak or narrow AI might require a heavily prescribed set of rules, or very specific relationships to exist in the data, in order to work. Strong or general AI, in principle, might require nothing more than what a human, or even less than what a human, would require to understand, say, to read a book and extract its themes, or to read a medical report and establish whether the standard of care was met. There's another term we use a lot, generative AI, which we'll get to in this talk; it simply means an AI that can actually create, not just report on what it was given, but generate new text, images, video, sounds, and so on. To talk about machine learning for a second, this refers to systems that can learn patterns from data without explicit instruction. We've seen examples that feel a little bit like machine learning, and that even includes things like linear regression, but it also includes the type of machine learning called a neural network. Neural networks, at an abstract level, look sort of like neurons in the brain; for those of us in our field, they're not really neurons. But the point about a neural network is that it takes some information into a node, where computation is performed on that piece of input, and the output then goes to another node within the network, where further computation is performed. And you can imagine that as these networks get bigger, vaster, and more interconnected, they can become quite powerful. Deep learning, which we touched on briefly before, refers to these highly interconnected neural networks with multiple layers between where the data is input and where the result is returned, with potentially many, many connections between those neurons and many, many sets of those neurons through which information flows.
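As a toy illustration of that node-and-layer picture (illustrative only; the layer sizes and weights below are arbitrary and untrained), the sketch passes an input through two small fully connected layers in NumPy. Each "node" computes a weighted sum of its inputs followed by a simple nonlinearity, and stacking many such layers is what the word "deep" refers to.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One fully connected layer: a weighted sum at each node plus a ReLU nonlinearity."""
    w = rng.normal(size=(x.shape[-1], n_out))   # connection strengths (random, untrained here)
    b = np.zeros(n_out)
    return np.maximum(0, x @ w + b)

x = rng.normal(size=(1, 8))          # e.g. 8 input features
h1 = layer(x, 16)                    # first hidden layer of nodes
h2 = layer(h1, 16)                   # second hidden layer
y = h2 @ rng.normal(size=(16, 1))    # output node, e.g. a single prediction
print(y.shape)                       # (1, 1)
```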
This is not a new concept. In fact, people have been talking about AI for almost 70 years or more, depending on where you trace its actual origins. And it's important to understand a little about what's changed and why we're talking about this today, generally for four reasons. The first is that the prevalence of smartphones and digital data has increased significantly, with tons of devices carrying cameras and sensors. The second is an acceleration in computation. The third is the accessibility of that data, which is no longer trapped on individual devices but stored where it can be computed across. And the fourth is a set of innovations in computer science that have been extremely relevant. We'll go through each of these very briefly. Smartphone users: there's essentially one smartphone per person now, which is new and was not the case 20 years ago. Data creation has been astronomical; we're currently creating the equivalent of multiple copies of the circa-1992 internet on a daily or weekly basis. And there have been significant innovations in parallel processing chips. I mention this only as an interesting side note for folks interested in electrical engineering and computer science, who will know that this wasn't natural or necessary. We didn't have to go this route. We did it as a field to overcome a problem we had with chips in the early 2000s, when it became very difficult to get more performance without increasing energy consumption dramatically. We'll come back to that at the end, because energy consumption is a major feature of AI. So what has this resulted in? This graph, with computational power in floating point operations per second on the y-axis and the publication date of papers on the x-axis, shows a steady, linear rate of progress between roughly the 1950s and 2010. Since 2010, there has been a dramatic acceleration in the computational power used to produce published computational modeling results. That's something that really came about in the deep learning era we talked about, when these large neural networks with high numbers of connections and neurons in each network became extremely popular. The second thing that happened is a massive acceleration in the amount of data available for these models: again, a similar linear trend over the prior decades, and then things shooting up and to the right in the last five to ten years. This is a schematic of that neural network, and this is a slightly more involved diagram, but the point is that a particular kind of neural network called the transformer, the T in ChatGPT, became extremely popular in the mid-2010s and particularly important in the AI revolution. Lastly, I'd be remiss if I didn't mention that there were computational best practices and habits, essentially, that we developed as a field to make all of this happen. We talked a little bit about generative AI, so I want to explain briefly how that process actually works. The most common type of generative neural network design includes an encoder and a decoder. An encoder might take as an input sequence the phrase "I am walking," perform a computation on that sequence, and then provide an output sequence through the decoder, which can produce the same phrase in German. The key point of all of this is that these neural networks are capable of not just outputting a prediction, like "this is a pituitary adenoma" or "this is a craniopharyngioma," but of outputting actual text themselves. That's where a lot of the power of generative AI comes from.
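As a small, concrete version of that encoder-decoder example (a sketch only; it assumes the open-source Hugging Face transformers package and its model dependencies are installed, and it downloads the small public t5-small model), the encoder reads the English sequence and the decoder generates the German sequence token by token:

```python
from transformers import pipeline

# Encoder-decoder translation: text in, newly generated text out.
translator = pipeline("translation_en_to_de", model="t5-small")
print(translator("I am walking."))
# e.g. [{'translation_text': 'Ich gehe.'}] -- the model emits a new sequence rather
# than a single class label, which is what makes it "generative".
```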
In imaging, the example is: what if you could turn the high-resolution, photorealistic image on the left into something that looks more like Starry Night? What can we do in neuroscience and neuroimaging? Well, we can do this with faces, where we can turn sketches into people who have never existed. We can remove rain from scenes. And we can even do this in video: let's say you have this lovely golden-retriever-type dog, but you want to turn him into a Dalmatian. With generative AI, you can actually do that, and that's, I think, one of the most bizarre and eerie features we'll see. You can do this in an image or in a video. So we'll segue into some of the tools we're seeing in generative AI. We've talked a little bit about ChatGPT, and I'm sure this is something many people have come across, so I want to spend a few minutes describing exactly what it is. First of all, let's explain what these words mean. This is a chat interface, which is probably the most interesting fact about it. The technology underlying GPT had actually been around for a while before it became so popular, but the chat interface is really what pushed it over the edge. It relies on what's called a generative pre-trained transformer. Generative, as we showed in the earlier examples, means it can accept text as input and actually create text as output. Pre-trained means that when you use ChatGPT, it doesn't need to go out and learn a bunch of new patterns; it has already seen a tremendous amount of data from the internet. And it uses the transformer neural network architecture that we briefly showed. We mentioned that the quality of the user of these systems is tremendously important, so when you're using a GPT, it's important to ask questions like: what data was it trained on? This matters tremendously, because the model will perform differently on data that looks very similar to what it has seen before, data within its training set, than on out-of-sample data it has never seen. Sometimes these models can be unexpectedly delightful, but they can also be quite disappointing. In healthcare this is particularly important, because if a GPT has never seen healthcare data, it may not be able to provide the same quality of output in the healthcare domain that it provides elsewhere. So you might look at it and say, oh, it can do poetry, so surely it can tell me whether this is a subdural or an epidural. That's not necessarily the case. This is what working with ChatGPT looks like. You can see that it can create some images, that it has a chat interface, and that you can even store prior chats where you've asked it to do things. This is one example of a type of interface, and there are different models you can select with different performance characteristics. You can link it to a web browser, perform data analysis, or even incorporate third-party components.
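For those who encounter these models programmatically rather than through the chat window, here is a hedged sketch of what a call to a GPT-style model can look like (it assumes the openai Python package, version 1.x, and an API key in the environment; the model name is a placeholder that changes over time, and nothing here implies institutional approval for clinical use):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; substitute whatever model your institution approves
    messages=[
        {"role": "system",
         "content": "You are assisting a neurosurgical PA. Explain concepts; do not give patient-specific advice."},
        {"role": "user",
         "content": "In two sentences, how does a subdural hematoma differ from an epidural hematoma on CT?"},
    ],
)
print(response.choices[0].message.content)  # output still requires clinician verification
```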
What does it feel like to use these kinds of tools, and how do they actually affect our performance? Well, it looks like there is a significant difference between using ChatGPT within the domain where it's experienced and outside of that domain. Within the domain where ChatGPT is very experienced, its performance can often meet or exceed that of human users on specific tasks. In fact, whether the users are skilled or not, both groups improve. But it's important to note that the top skilled performers without ChatGPT still do better than the bottom half of performers do once they start using ChatGPT themselves. So it doesn't erase skill gaps, but it makes them a lot smaller. Unfortunately, outside of that frontier, on tasks the model did not really come to understand during pre-training, it can actually decrease performance. That's an important thing to note. How does this work in the medical domain? This is an example using standard USMLE, medical-licensing-exam-style questions, and it shows significant improvement in models over the last four or five years, culminating in models that now exceed the passing threshold for USMLE-style questions. Now, it's important to say that this does not mean these models could pass the USMLE. They can't get a Social Security number yet; they can't actually sit down and take the test. And their performance on new questions they haven't seen before might be a lot lower than what we've seen. But in some ways they do answer better than humans. Physicians are more likely, for example, to omit information and to speak much more briefly about cases, whereas AI models don't get tired; they don't care whether they're outputting a one-page answer or a one-sentence answer. They'll give you essentially what you've asked for. So these models can be used differently from, and complementarily to, humans in medicine. And they generally performed fairly comparably to human experts. So it's important to say that on this medical question-answering task, models are performing at or near the level of human experts. And this is already old news; the next generation of models is getting even better, whereas, trust me, the next generation of physicians isn't necessarily getting any smarter. We can use these to generate text for onerous applications like grants and so on, and some folks have even used them to get funded grants, which is interesting. For those of us in clinical practice, there's a dizzying landscape of potential applications that you may come in contact with. I'll give you a few examples that are closer to home. One is communication software for stroke triage, allowing imaging findings to be integrated, alerts to be sent to care teams, and an entire multidisciplinary team to be brought onto the same page, potentially also including automated interpretation of images and automation of the notification process, which could reduce the critical time to treatment for acute ischemic stroke. Other examples include the creation of patient-specific plans for correction in spinal deformity surgery, which you might see, or the use of pathology solutions that can bring diagnosis into the operating room in near real time. We've also, again, seen a significant number of generative solutions for medical text, and we mentioned some examples of how that could happen. The moral of the story today is that we need to learn a lot more about how these systems work before fully embracing them, but I encourage you, if they become available through your institutions, to try them out and think about how they actually work in your hands. These could include ambient dictation systems, which can record the speech that's occurring, convert that speech to text, diarize that speech so that each speaker is noted, and then, hopefully, successfully translate it into a ready-to-go medical note. There are several examples of this software made by various commercial partners.
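As a rough sketch of only the first stage of such an ambient dictation pipeline (assumptions: the open-source openai-whisper package and its ffmpeg dependency are installed, and "clinic_visit.wav" is a hypothetical local recording; speaker diarization and note formatting would be separate downstream steps handled by other tools):

```python
import whisper

model = whisper.load_model("base")             # small general-purpose speech-to-text model
result = model.transcribe("clinic_visit.wav")  # hypothetical recording of a clinic encounter
print(result["text"])                          # raw transcript: not yet diarized by speaker,
                                               # and not yet structured into a medical note
```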
Lastly, I think this is filling the research pipeline, and everywhere we look we're seeing more and more papers in machine learning and AI, even if some of them may be similar to ones we've written in the past. It's important to look at some of these and ask how AI could be deployed in ways that are actually different. Going beyond outcome prediction, for example, what's interesting about these kinds of large language models trained on large corpora of healthcare data is that they can perform many different tasks, whereas a regression model built to predict mortality could not reasonably be expected to predict insurance denial or length of stay. There's simply no way that would work, but this single model may be able to exhibit multiple different behaviors. There are some significant benefits and risks to using these kinds of chatbots and foundation models in medicine, and I wanted to briefly review what they are. On the plus side, a lot of what we do in medicine is actually similar to this problem of next-token prediction, which is to say: if I give you the beginning of a sentence, can you complete it logically? There are a lot of automated processes, dot phrases, and texts that we're writing for the hundred-millionth time that might quite reasonably be written by computer systems. There are some things that probably won't ever work from this standpoint, including the use of expert judgment in rare events, and these models may not perform in every situation the way we would expect. And there is one critical limitation, which is that it's often difficult for us to know how well humans do at a particular task, which makes it tough for us to trust models. If you had an ultrasound tech that looked like this robot over here, you might be screaming at it too. I wanted to end by showing some brief examples of how we're using this in the operating room right now, and then we'll go straight to the Q&A. As you all know from being in the neurosurgical operating room, this is an arena that is full of sensors. Once we're in arenas like this, we're immediately thinking that this could be an imaging problem. That is to say, if we're trying to classify what's in an image, or to determine what event might be likely to happen next, or the outcome of a procedure based on an image, that's an image classification problem, for example. The history of these kinds of problems is really positive: computer systems have been able to outperform humans at detecting whether an image is, let's say, a sailboat or a fire hydrant or a car or a bicycle. As anyone who has filled out one of those CAPTCHA image-recognition challenges on the internet will know, sometimes those are really tricky, but machines do really well at classifying images. In vision problems we also think about localization and object detection, knowing where things are and what they are, or even knowing their more precise boundaries; and potentially, without knowing anything about a particular scene, finding where all of the objects are and then classifying them can be useful. In general, medical image interpretation is almost exclusively the domain of radiology, and that's because there's so much more data and so much more effort being put into that space than any other.
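A minimal sketch of the generic image-classification problem just described follows (assumptions: PyTorch and torchvision are installed, and "photo.jpg" is a placeholder for any local image, whether a sailboat, a fire hydrant, or a single frame pulled from an endoscope video; this is the textbook version of the task, not the lab's surgical models):

```python
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT            # a network pre-trained on everyday images
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()             # matching resize/normalization steps

img = Image.open("photo.jpg")                 # placeholder image path
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))
label = weights.meta["categories"][logits.argmax().item()]
print(label)                                  # one class label per image
```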
But I want to outline some questions we have around data from the operating room that could be relevant to PAs in the future. This probably pertains to any procedure that can be watched with a camera, and I wanted to talk about how we think about this particular problem as an example of other things you might see. So the first question we've asked in our lab is: is there a detectable and clinically meaningful signal in video, in this case video from an endoscope or a laparoscope? Can we actually capture and process the data, create really simple and then progressively more complicated measures of surgical performance, and then maybe even develop systems that can watch surgery and provide quantitative and qualitative assessments of performance? So why do we think anyone can watch a surgeon and figure out how well they're going to do and how well their patients are going to do? There's earlier work in general surgery showing that in bariatric surgery, specifically laparoscopic gastric bypass, when a small group of surgeons edited their own videos, sent them to peer experts, and were rated by those peers (and again, these are all practicing surgeons), there were significant differences in the skill ratings. Some surgeons scored fives, the expert mastery level, and some 2.5, which is maybe more on the level of a junior faculty member. And these ratings were strongly associated with risk-adjusted complication rates. So surgeons can watch video and identify which surgeons are more or less likely to have complications, even after controlling for everything we can measure about their patients. And these are complications that matter: things like bowel obstruction, infection, lung complications, death, return to the operating room, readmission, and re-operation. This tells us that there's probably some signal within video, and I think most of us intuitively know this whenever we're watching any performance; if we know the domain, we have a sense of whether the person doing it is skilled or not. So we undertook this challenge a few years ago. We published a very large data set of surgeons performing a hemorrhage control task during pituitary tumor surgery. This is an open data set that anyone can download, and we're committed to making our work more accessible. We explained our methods, how we go about this process and how we handle the life cycle of the data, and all of it is available for you to read. What we did first was simply identify where the instruments were within the scene as the surgeons tried to stop the bleeding: which instruments were present, for how long, and how the surgeons moved them. And it's no surprise that even in very simple models, knowing how a surgeon moves instruments tells you much more about them and their likelihood of success than any other factor, such as their age, their experience, or the number of similar cases they've done. Because if you've done a thousand cases but you still operate like you've done one, clearly the kinematics are what matter here.
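As a simplified sketch of what "knowing how a surgeon moves instruments" can look like computationally (illustrative only, not the lab's actual pipeline; the tracked positions below are synthetic), even crude per-frame kinematics such as path length, mean speed, and idle time carry signal:

```python
import numpy as np

def kinematic_features(centers_xy: np.ndarray, fps: float = 30.0) -> dict:
    """centers_xy: (n_frames, 2) array of an instrument tip's pixel position per video frame."""
    steps = np.diff(centers_xy, axis=0)                # frame-to-frame displacement
    dists = np.linalg.norm(steps, axis=1)              # pixels moved per frame
    return {
        "path_length_px": float(dists.sum()),
        "mean_speed_px_per_s": float(dists.mean() * fps),
        "idle_fraction": float((dists < 1.0).mean()),  # fraction of frames with almost no motion
    }

# Synthetic example: ~10 seconds of tracked suction-tip positions at 30 frames per second.
rng = np.random.default_rng(1)
track = np.cumsum(rng.normal(0, 2, size=(300, 2)), axis=0) + 256
print(kinematic_features(track))
```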
We then did something a little bit different: we took only the first minute of these videos of surgeons trying to stop the bleeding and showed them to experts, asking, based on just this first minute, do you think they're going to succeed or fail? The experts were not always right, but they were very consistent, which is a good thing because it showed us that they were all watching and finding the same signal within the video. We then showed that video to an AI system trained on a very small amount of data (future iterations will certainly do better), and that model was able to at least match expert performance in predicting which surgeons would succeed or fail, and it was much better than the surgeons themselves at predicting blood loss, because as everyone knows, surgeons are notoriously not great at that. So this shows us that you can, in fact, train models to watch surgery and make judgments about surgeons' performance, and predictions, at least at the level of experts. And we can do a little bit more. This is an example from a team I worked with in robotic surgery, in urology, which was able to show that we can figure out not just who is going to be successful at this particular type of surgery, but which individual movements they're making, called gestures, the smallest discernible actions that deform surgical tissue and advance the operation, and which of those gestures actually matter. We're getting closer to understanding this in other operations as well, and to understanding which of the things we do in surgery matter: what we should do more of or less of. And this is very exciting. We believe we can actually transform surgery from an art that is quite difficult to teach into a science with a real process and real data collection, and we're very excited to continue this work and, to be honest, to invite all of you to join if this is something you're interested in. We want to move surgery from the domain where we are now, where surgery is largely uncoached, there is minimal feedback, it's very hard to collect data, we all have individual practices, and every time we do a case we're kind of reinventing the wheel, into an approach to surgery that is guided by the history of tens of thousands of prior procedures, where we may even get feedback in real time or at least right after the case, where there's an automatic bounty of data collected from all of those camera systems and other sensors in the operating room, where knowledge is shared globally, and where we shift from reinventing the wheel to updating a larger neurosurgical knowledge bank of how we actually perform procedures. This is something we're very excited about, and it could be of interest to PAs as well. No talk on AI would be complete without discussing the significant ethical and legal considerations in its use. For historical context, it's important to say that there have been significant AI winters in the past, when progress stalled for decades and public trust was not really present, and we could see those happen again, so we have to tread very carefully here. Data usage is still a contested domain as far as who owns the data and concerns of that nature. Can patients use AI on their own without involving their physicians? A very controversial area. What is the responsibility of the people creating, and also of those using, AI models? There are authorship and even medicolegal concerns. If you get a suggestion from ChatGPT and implement it, but the suggestion is wrong, who's at fault? That's maybe a trite example, but what if the AI is helping interpret laboratory tests and providing automated results notification to physicians, and the AI is wrong? Or what if the AI is built into the laboratory analyzer itself? Some of these questions become a little less clear. Are we actually required to disclose the use of AI, and can we even do so if we don't always know it's happening?
And to underscore again, these societal attitudes can change rapidly, and memory is long, so it's important that we all act as good citizens here. To summarize where we've been: we wanted to explain why we're having this conversation and why there is hype now around AI and ML, and we showed that there is a confluence of events, as well as factors within medicine itself, that are leading to very rapid shifts in what is actually a very consistent underlying progression in technology, and that this is coming for healthcare. We defined some terminology to hopefully arm you with a slightly better understanding of what these terms are and how we think about these problems. We gave a framework for deciding whether AI products are safe and ready to use, including understanding how the models are trained, thinking about whether they'll actually work in your setting, what the costs of deploying them are, for example, and who is responsible for reporting the existence of the model and its information. And we gave some demonstrative examples across a variety of domains, including the integration of care teams, the use of ambient dictation, and some of our own work in computer vision to understand surgical scenes. Additionally, it's worth noting that almost all of the images in this presentation were created by generative AI. I commonly write with AI assistants, and I occasionally use AI assistants for writing computer code as well. I want to thank you all for your time and attention. If you want to reach me with any questions about this or any other topic, here are my coordinates. It's a real pleasure and honor to be with you today, and I look forward to the upcoming Q&A. Thank you.
Video Summary
Daniel Donoho, a pediatric neurosurgeon at Children's National Hospital, discusses the rising influence of artificial intelligence (AI) in neurosurgery. He emphasizes that while AI won't replace physicians, its subtle and significant impacts are inevitable as part of the field's digital transformation. AI technologies, particularly generative AI and machine learning, are evolving rapidly, and understanding these innovations is crucial for effective use. Donoho explains how these technologies can improve tasks like medical text generation, stroke triage, and surgical planning through enhanced communication and data analysis. He also highlights the potential of AI in video analysis to assess surgical performance, stressing its transformative capability in making surgery a more data-driven science. Furthermore, ethical and legal considerations must be addressed to ensure responsible AI deployment. Understanding AI's training data, models, and application contexts is critical for safe integration into medical practice. Donoho concludes by inviting open discussions on AI's role in neurosurgery, emphasizing continuous learning and adaptation to harness AI's full potential.
Keywords
pediatric neurosurgery
artificial intelligence
machine learning
surgical planning
data-driven science
ethical considerations
medical innovation