2024 AANS Neurosurgical Topics for APPs - On-Deman ...
Artificial Intelligence Q&A Session - Daniel Donoho
Video Transcription
Hey, folks. So I'm Dr. Donoho. I'm a pediatric neurosurgeon here in Washington, D.C. I'm happy to be with you all here virtually, and I know that there's some prerecorded content, but I'm happy to dive in if folks have any questions or anything else of that nature. You know, Mike, if folks have taken the survey or had anything like that, then we can start with those; otherwise we'll basically just get into what's probably just the ending slides of the talk and then have another opportunity for Q&A. Nothing in the chat so far, Dr. Donoho. Okay, awesome. So I will then go ahead and share my screen, and we'll start from the end part of it. And obviously, you've got the video recorded, so you can go back and watch some of the beginning. But live with you guys, I wanted to just sort of survey some of the technologies that are actually out there that you may come into contact with. So just as a view of the landscape: this image kind of makes me want to throw up, but I think it's actually a fair description of the state of AI in clinical medicine, which basically shows that it is permeating every aspect of, particularly, the back office of practices, operations, and administration, and starting to have some effects in particular areas within clinical medicine itself. And we'll dive into those that are maybe closest to neurosurgery. But for this, suffice it to say that there's a lot going on here, and there's a lot that's kind of going to get thrown at us, and we want to be informed and active users and leaders in this space. So some examples of things that you may have already seen or may already be familiar with are things like communications tools to enhance and tighten linkages between multidisciplinary teams in stroke care. So you might have an AI that's reading over the shoulder of your radiologist, detects a large vessel occlusion, and automatically notifies the entire stroke team.
So you're not sitting there calling and figuring out the phone tree of who's on for what, and maybe it bumps that scan to the top of the radiologist's queue to confirm the finding, for example. So this is a classic integration function, where AI is able to read text, is able to look at images, and make some potentially clinically relevant interpretations, but is heavily working with humans to enhance human team performance. On the other end of the spectrum, we have things like automated virtual surgical planning. In spine deformity, for example, let's say you want to get a given amount of deformity correction for a particular change you want to see in a patient's lumbar lordosis. It might suggest or assess: let's say I'll do an L3 PSO, or I want this much correction, or I want to fuse from this level to this level. What is the postoperative correction likely to be from that? And specifically, focally, how is that going to look? And then of course, we're all familiar with these essentially fancy navigation arms that can help us place instrumentation stereotactically, whether pedicle screws, stereotactic EEG electrodes, or other devices in the brain. And all of these are integrating different levels of automation and of AI to do this kind of work. Another example that's really exciting, and maybe not quite in clinical practice yet, is some of the work being done in digital pathology. This is using automated image interpretation systems to give, within about a minute or less, a provisional fresh-tissue diagnosis of intraoperative pathology. And this is something that's really exciting for tumor surgeons like me, because it allows us potentially to get answers about tissue much more quickly and maybe even more precisely. We might even be able to make molecular judgments, or best guesses, based on frozen pathology that human beings can't currently make.
So these are some things that are currently touching us in various ways in the operating room. But probably the best-known use case that everyone sees is generative AI for medical text. So just to back up a second, what does it mean for an AI to be generative? What that basically means is that rather than making a prediction saying, oh, that's a dog, or that's a cat, or there's a 25% chance that's glioblastoma, it's actually generating text as the output of the AI model, or images, or video, or any other data type. So if you give it some text, it gives you some text back. Pre-trained means that this model has already seen a vast amount of data, as large as or larger than the publicly available versions of the internet, with many, many thousands of examples of text, image-text pairs, and other things. And that allows it to have a basic understanding of many, many different domains, so that the text that it probabilistically outputs is more like what it has already seen, because it's seen so much. It also uses a type of neural network architecture called a transformer, which we're not going to get into, but that's what GPT stands for: generative pre-trained transformer. It's important to say that all of these actual models are variable. Some of them are publicly available, particularly those made by Meta (Facebook), but many of them are secret, and we don't really know what's in them. We don't know what they're trained on, and we don't know how well they're going to perform until we try them. And then, of course, they get updated, and they perform differently. So these are, as you can get a sense, things that might give us a little bit of hesitation before adopting them in the clinical domain. We got our first chat question in. So the question is: I've heard of phone apps that translate patient and provider conversations into notes directly. Do we have any thoughts? It seems too good to be true.
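As a toy illustration of the "generative, pre-trained" idea described above (text generated probabilistically, shaped by what the model has already seen), here is a tiny bigram model in Python. The miniature corpus and every name here are made up for illustration; a real GPT uses a transformer over web-scale data, but the sampling principle is the same.

```python
import random
from collections import Counter, defaultdict

# A stand-in for web-scale pre-training data: the model will only ever
# produce word sequences that resemble what it has seen here.
corpus = (
    "the patient has a headache . the patient has a fever . "
    "the scan shows a mass ."
).split()

# "Pre-training": count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, n_words, seed=0):
    """Sample a continuation one token at a time, weighted by counts."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_words):
        counts = following[out[-1]]
        if not counts:
            break  # no observed continuation for this word
        words, weights = zip(*counts.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the", 5))
```

Because "the patient" appears twice in the corpus and "the scan" once, the model is twice as likely to continue "the" with "patient"; scale that idea up enormously and you get the probabilistic text generation the talk describes.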
And I'd ask maybe people to thumbs-up this if they're using it, and if they're accurate. Any thoughts, Dr. Donoho? Well, you led into the next one. I love that question. The way that this works is it combines a series of things, some of which we've been able to do for a long time, others of which are newer in the way that they're being packaged into sort of a new product that we're all interested in. It combines speech-to-text transcription (there are even some very good open-source AI models for this), but that's something we've been doing for a long time; we've been dictating into phones and using Dragon and those kinds of things. It adds another function called diarization, meaning that it knows which speaker, which voice, is saying what content. And then it typically adds a large language model that can take that sort of screenplay-like dialogue and convert it into a medical note. So to answer your question, this is something that I've used, and for conflict-of-interest reasons I'm not going to talk about some of the things that we've done in this space, but we'll talk just about the publicly available stuff, the stuff we're using in the hospital today. It's stuff that generally makes a lot of sense. It can help in certain ways. Even though you're in the room with the patient, it's very important that you go out of the room and look at that note. And this is my personal experience using some of these commercially available tools: it helps in some ways, and it makes some really laughable errors, as you might expect. And in general, I feel like I don't have to take notes when I'm using this, because I'm listening to the patient. I'm actually listening better, to be honest, because I'm not sitting there at a computer kind of scribing away.
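The three-stage pipeline just described (speech-to-text, diarization, then a language model drafting the note) can be sketched as follows. This is a structural sketch only: the function bodies are simplified stand-ins, not any vendor's real API, and the sample dialogue is invented.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    speaker: str  # e.g. "clinician" or "patient", assigned by diarization
    text: str

def transcribe_and_diarize(audio_path: str) -> list[Segment]:
    # Stand-in: a real system runs speech recognition plus speaker
    # diarization on the recorded visit audio at `audio_path`.
    return [
        Segment("clinician", "What brings you in today?"),
        Segment("patient", "Headaches for two weeks, worse in the morning."),
    ]

def draft_note(segments: list[Segment]) -> str:
    # Stand-in for the LLM step: format the screenplay-like dialogue,
    # then (in a real product) prompt a model to write the HPI from it.
    dialogue = "\n".join(f"{s.speaker}: {s.text}" for s in segments)
    return "HPI (draft -- review before signing):\n" + dialogue

note = draft_note(transcribe_and_diarize("visit.wav"))
print(note)
```

The draft header is deliberate: as the talk says, the output is a starting point to be verified outside the room, the way you would verify a medical student's scribed note.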
I'll say, for me and my personal experience, I don't say my physical exam out loud as I'm doing it, because I just can't bring myself to do it. And also, our exams are pretty, you know, templated, and I'll change exactly what I want to change. It's very important that the exam is exactly the way I want it, and the assessment and plan for me is exactly the way I want it. So I will actually currently go through and write those myself. And there are other, I'll just say this at a high level, there are other systems that could go back through all of the notes that you've written and write in your voice. That is certainly a technically possible thing to do, but whether it's going to get implemented in the medical record at your hospital is a different question. And until then, basically I use it the way I would use a medical student if they were scribing for me, which is to say: trust, but verify, and really rely on them much more for the HPI than for almost anything else. So I think it's useful. I think it's interesting. It makes me feel better to leave a room and have a note partly written, to be honest. And I use it when I'm alone in clinic, if I don't have a team and a resident and a PA, and I find it helpful in that context. Any other questions, Mike? Yeah, one other question just came through: I'm curious about the ethical and legal implications of AI. Where does that patient data go? How can we as providers ensure we are protecting patients' personal private information when using this technology? Yes, you should be very concerned about this. I'm going to make an assumption, which is that the PA legal framework is the same as the licensed MD framework when using it. And this is an area that should give people a lot of heartburn. Your institution, when they adopt such a tool, if it's adopted by your institution, is responsible for making sure, before they put it in your hands, that this tool is storing the data responsibly.
So typically, what will often happen is your institution will be given a secure area within a third-party server, Microsoft Azure, Amazon Web Services, Oracle Cloud, whatever it may be, that is basically your hospital's, rather than building a big server tower with a bunch of compute in your hospital. It's sort of a virtual private cloud, a virtual computer. And that's where your data should go. That's where the AI model should live, inside that area. That's where the computation should occur, and that's where the results should come back to you from. Which means that that data should not be going to, let's say, ChatGPT to train the next ChatGPT. When you're using commercially available tools personally, like ChatGPT, like other tools, you should expect that anything you put into that system is going to be used for whatever purpose that company wants to use it for. And you should be very careful: you should not put protected health information into those commercially available tools. Most institutions have an IT and security review process that's probably not strong enough to deal with this, and we're relying a lot on the good faith of our commercial partners. So this should be top of mind. I don't know how much we can do about that as physicians and PAs. But I think the other thing to think about is disclosure: there are some requirements even to disclose whether AI tools are being used, and that can be very bothersome, right? So, like, let's say a digital pathology tool is being used in your pathology lab; let's say it's not being used in the operating room, and you don't know anything about it. Or AI is reading a radiology report. Is the radiologist going to call your patient and say, hey, by the way, we used, you know, ChatGPT radiology? That's not going to happen, right?
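To make the PHI caution above concrete, here is a toy illustration of screening text before it leaves your environment. Real de-identification is far harder than a few regular expressions (names, addresses, and free-text identifiers defeat simple patterns), so treat this as a sketch of the idea, not a compliance tool; the patterns and example text are invented.

```python
import re

# Illustrative-only patterns for a few obviously structured identifiers.
PATTERNS = {
    "mrn": re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "phone": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a bracketed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(redact("Pt MRN: 1234567, seen 03/14/2024, call 555-867-5309"))
```

The safer architecture, as described above, is the other way around: keep the data inside the institution's virtual private cloud so nothing needs redacting in the first place.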
So it's always going to fall down to us, you know, we're actually seeing the patient. And this is an area that I think brings a lot of heartburn for us because of the ways that this could work in the future. I'd say right now, there's very little interaction with those tools in medical record elements. But I think in the next 12 to 24 months, we're going to continue to see more and more, and these are questions you should be asking at your institution when you hear about these things being adopted. I'm encouraged; I think that these are potentially great tools, but they all come with risks and downsides, like anything else. I guess to follow up on that: are you optimistic that AI can kind of help us, you know, as we all kind of feel bogged down in terms of writing and documentation? Can it help us spend more time with patients? So again, I'll just talk about my personal experience. I think using these tools helps me be a better listener, because I'm more concerned with listening holistically, right? And I'm less concerned with listening to extract the little bits that I'm going to have to put in my note later. So I think in that way, it can be helpful. Again, in my personal experience, it hasn't been a game changer. But it could be, if I was seeing, you know, 100 patients who all had the same complaint, and I was basically going to say the same thing, and I just needed to stop drowning in the, you know, seven minutes it takes me to type. That's possible. So I think it will help. But every context is just so important and different. I just got a really great question.
Do you see a role for AI in the triage of clinic patients, to help prioritize new patient appointments, or any needed evaluation before clinic? Have you ever heard of that? Yes, this is a really exciting area, and there are a couple of people working on this, particularly at large academic systems with many, many sites. I don't know of a commercial product that currently does this, but I think this stuff is coming. I think this is a great area. Well, first of all, it's a great problem to have: you've got too many new patients. That's awesome, congrats to you. Second of all, I think prioritizing these could be done; it will all depend on how effectively your practice can get and digitize health information. So if you're in a heavily digitized practice, where everything's on Epic and all the radiology is in PACS, and you're not dealing with a ton of outside paper records, that would work really well. In other practices, so for example, for us, we're a quaternary referral center; we're seeing kids from all over the world that come with, you know, hard-copy films sometimes. AI can't triage that; AI is never going to be able to see that film. So I think the context is really important, but it could be a really good help. This one just came in: are there ways that AI is helping, or can help, improve postoperative care and monitoring of recovery for neurosurgical patients, which would maybe lead to fewer complications or readmissions? Yes, and we'll get to one interesting piece of that in a second. But before we get into that, I think this digital health world and the tracking of PROMs, patient-reported outcome measures, is really interesting. And you can easily imagine that better monitoring through all of these devices we have would really help improve the situation dramatically.
So if your Apple Watch and your ring and your sleep trackers and all these things are all talking, and they know that you had surgery, and you're feeding that into an AI, you probably could pick up: hey, this person's not taking a lot of steps, I bet they're in pain, maybe we should send them to PT or, you know, check on them a little bit more; their heart rate's up, maybe they have a fever. Absolutely. There's a very interesting project that we're working on, it's a little too early to talk about, that's looking at wound care, and some research that we and others are doing globally to look at wound healing. We're looking at the intraoperative piece of this, but other people are looking at the postoperative and preoperative pieces of how we care for wounds. So I think this is definitely an area that is ripe for work. Right, we've got about six minutes left in our session. So if there are things you want to get to on the slide deck, I'll keep letting you know if we have questions that come through, if you want to go through any more here. Well, I'm going to skip this, because there are questions about readmissions, questions about ethics, those kinds of things. And, you know, just to show that these large language models can be used to help work on some of these things, even in predicting whether a claim is going to be denied by insurance, or how long the patient's going to stay in the hospital. So those are all obviously useful things, helping even write your responses to insurance. The problem is the insurance companies have their own AIs, where they're writing their response to your response. So yeah, it's going to be a lot of weird AIs talking to AIs in this world. There's just too much money at stake. I do want to show a couple of things about what we're doing, but to keep this really brief and answer any last questions that come up.
So I'm going to skip through some of the background questions, but just share about, you know, what we do and think about a lot here. We think a lot about surgical data, particularly video data, and whether we can generate, you know, meaningful assessments that may help improve care and improve outcomes by understanding what happens in the operating room. So to this question of whether there's anything we can take out of the operating room that can improve surgeon performance: there's good evidence, from an older New England Journal paper in bariatric surgery, that when surgeons watch each other's videos and give each other ratings, practicing surgeons and attendings, not trainees, there's a wide variety in skill. I think you guys know this probably about as well as any group. And those varieties and dispersions in skill cause dispersions in outcome, real outcomes, things like death or return to the operating room, readmission, emergency department visits, those kinds of things. So what we do is we take videos of the operating room. This is an example of an open-source data set that we published, including all the methodology used to acquire it. It's been used in NIH and NSF proposals, and we take that operating room data and we study it, right? So this is a de-identified, privacy-preserved example of data that we've released in the public domain. What we do with that data is we can actually create performance metrics, kinematics, studying how surgeons are actually moving during surgery. This is most often during endoscopic procedures and microscopic procedures, because it's easy to know what the surgeon is seeing, because you're seeing what they're seeing.
And this is an example just showing that when you know the kinematics, which are depicted on the left for the different instruments being used, how long they're dwelling in various areas, you can create these very accurate predictions, on the right, of how much blood a surgeon's going to lose or how well they're going to perform during a particular surgery. In fact, with slightly more advanced models, you can even outperform expert surgeons in performance assessments. So we had some expert skull base surgeons watch some of these videos of surgeons performing a standardized task, and they made their predictions: were they going to get it right or wrong? But the surgeons were really impatient. They would only watch the first minute of video; they wouldn't watch the whole thing. So we had to handicap the AI, and we only gave it the first minute of video too. And with just a minute of watching a surgeon operate, it was actually able to make as good a prediction as an expert skull base fellowship-trained surgeon as to how they would do, and a better prediction of how much blood they were going to lose, because surgeons always lie about blood loss, even when it's not their own cases. It's crazy. On a more advanced level, we can actually take these longer videos of surgery and break them down into their constituent actions automatically, by understanding the context of what surgeons are doing, what tools are doing, and identifying various gestures. This is a team that I worked with that's working in prostate and bladder surgery, in urology, and working in these multicenter datasets to actually generate these very large models that have the ability to predict really important outcomes, in this case continence and neurologic function after these kinds of surgeries, which is something that experts actually can't do very well.
So by understanding the very minute elements of surgery and building up those building blocks, we're actually even beginning to point to: hey, maybe we should be doing this, or shouldn't be cutting that, or should be operating in this way around this particular structure. And as a foundational belief, I think that we should transform surgery from an art into a science. So what we're trying to do, and I'll close with this last thought, is we're trying to turn surgery from something that's uncoached and without feedback, where it's very hard to collect data, as anyone who's ever written a paper knows. You've got to go through the EMR and pull all that stuff in; like, that's absurd. Surgery is based on our local and personal practices. I just did this weird, complex neuroendoscopy case. I don't know anyone who's done a case like this. I published an operative video on it. This is just how I do it; I'm reinventing the wheel, even with this case, right? Whereas surgery could be guided by the experience of tens of thousands of surgeons using real-time feedback. Data could be collected automatically from the EMR, because we know these large language models can help with that. And we can start to move towards updating this trove of globally shared knowledge, rather than operating in independent silos. So we talked a little bit about the ethical and legal considerations. That's usually the last slide. I'll only point to one last point in this, which is the last sentence: that societal attitudes can change rapidly, and it's important for all of us to be a part of that change, for good and for the betterment of our patients. A lot of these images were generated with AI; I usually give this slide as a disclosure. These are all automatically generated images. Finally, that's how you contact me. It's a pleasure of mine to continue this conversation and, of course, answer any last questions there are.
But I want to thank you all for your time and attention to this interesting area. It's something I spend a lot of time thinking about and working in, so I'd love to talk with you all if there are any follow-ups from this. Thank you. Thanks, Dr. Donoho. We'll give it just a couple of seconds to see if anyone has last questions. There's a little delay, and we'll see. All right, no last-minute questions. You're getting a lot of thank-yous in the chat, but I think that is it. So thank you for your time. It is now lunchtime, and we'll see you guys, the rest of the crew, in a half an hour or so to start the rest of the conference for today. All right, thanks all. It's a pleasure to be with you today. See you later.
Video Summary
Dr. Donoho, a pediatric neurosurgeon, discussed the increasing role of AI in healthcare, emphasizing its integration in clinical operations and medicine. AI tools enhance team communication, automate surgical planning, and assist in digital pathology by offering swift tissue analysis. Generative AI, which creates outputs like text or images, is emerging, though concerns about data privacy and ethical implications persist, especially with tools like transcription apps that translate medical conversations into notes. While current AI applications have room for improvement, they are aiding in patient care by potentially allowing clinicians to focus more on patient interactions. The talk also highlighted AI's role in surgical assessment, enabling better predictions of outcomes and performance metrics. Dr. Donoho calls for active engagement with AI technologies in healthcare, underlining the need for ethical oversight and adaptable societal attitudes toward its use.
Keywords
AI in healthcare
pediatric neurosurgery
digital pathology
generative AI
ethical implications
surgical assessment