Emerging Technologies in Spine Surgery
Designing a Robot for Spinal Surgery
Video Transcription
Alright, we're going to move on to the third section on robotics. It's an honor to invite Dr. Theodore to talk about robotics and spine surgery. Thanks, Daniel. So we're going to shift topics this morning. We had some beautiful presentations this morning, some on image guidance and robotics. What I'd like to talk about now is this whole paradigm of the use, or potential use, of a robot in spine surgery. Neal showed a beautiful use of the da Vinci. This is going to be another paradigm, sort of an image-guided robotic system. These are my disclosures, which are relevant to this presentation. We'll have a presentation afterward on the Excelsius robot, which I developed. I think at this point we're in a very interesting place in surgery, and spinal surgery in general. We've got advances in imaging: the quality of CT, MRI, even plain radiographs has really increased over the past several years. We've got surgical advances too; we're doing things more minimally invasively. And then we've got this field of robotics, and these things are coming together. So we saw beautiful presentations earlier on image guidance and surgery, and you see that that's commonplace now, but still the adoption rate hasn't been high, and we'll talk about that. And of course surgery and robotics with the da Vinci, those two things are together. But right now I think what we can talk about really is the next step, which is bringing all three of those together. So when we talk about brain surgery, there really isn't a brain tumor anymore that's done in this country without the use of image guidance. The problem is just this, and this is one of the setbacks, right? Here's a guy with a sharp probe touching the brain, and what's everybody doing? They're all looking the other direction, at the monitor. And that's sort of the problem, right? We want to use image guidance. We want to see where we are, but to see where we are we have to look at another screen.
So why do we want to use image guidance in general? We want to have high accuracy and high precision. And the reality, shown repeatedly, and we're not going to belabor the literature, is that the use of image guidance definitely does raise our game. Why? Because what we don't want, some of the cases I mentioned earlier, this is a case from Ireland, there have been other notable cases, obviously it just takes one bad screw to ruin your day or ruin somebody else's life. When we look at the meta-analyses of all the studies looking at screw placement, where do we come out? And we talked about this earlier; the bottom line, not to belabor the point, is that with the use of x-ray, then 2D imaging, and then 3D imaging, our accuracy incrementally goes up, so the ability to do things gets better. We're still then, while we're using image guidance, lining everything up, this is a paper from Dave Polly, and we're pretty good. Nobody's perfect at anything. But as I said before, still to this day, even though image guidance use has been increasing, there was still, and this is a paper by Roger Härtl just a few years ago, and I think the numbers have definitely gone up, but there's been a bit of a delay. People haven't done it. And I think part of the reason for that is that when you look at the history of image guidance, when it first came out, we lost a generation of surgeons. We lost a generation of surgeons who had to touch the spine, touch the screen, touch the spine, and then manually register it. That doesn't occur anymore. And what was noted in that article was the fact that really the fiddle factor was very high. So we need to progress. This is really the spine surgeon with the Commodore 64 on her back, and our cranial colleagues here with the iPhone, and I think we can do better.
When you look at this whole, what's happening in the future, what's happening today as far as technological advances, realize everything's happening and it's hitting us all at one time. This whole explosion of data, artificial intelligence, computing power, and then number four, or number five I should say, which is that unstoppable freight train, which is automation. It's going to occur. It's happening every day in every field of our life, right? Driverless cars, everything. So why aren't we automating the process of surgery? This is a great article that came out about a year ago. The first five jobs that are going to be taken by robots: middle management, we don't need those guys; salespeople, it's still daunting when I go to Target and there's no salesperson, you have to check yourself out; report writers and authors, a lot of that technical stuff is being done; then accountants and bookkeepers; and then finally doctors. Doctors, how is that possible? When you look at the amount of data that an oncologist needs, for example, it's impossible to keep up with the literature. So when you have an 18-year-old woman with a Burkitt's lymphoma, with a certain genetic subtype, who lives in Baltimore, supercomputers are going to help refine that. And there have been studies done now that show oncologists are doing nowhere near as well as supercomputers when it comes to looking at all the data and trying to figure out what's going on. When you look at robotics, they've been around in surgery for a while; this is really the first surgical robot, from the 1980s, the PUMA robot. This lady seems very happy to have a hole drilled in her head by a guy who's not scrubbed in. But the reality is that, you know, they're there, people have been thinking about this; this isn't some completely ridiculous topic out of left field.
So in 1988, or 1998 I should say, Curtis Dickman, who was my partner, I was a resident at the time in Phoenix, was doing a lot of thoracoscopic surgery. And this really is the first use of a robot in spinal surgery. When we were doing the thoracoscopic surgery, as a resident I would hold the endoscope for him for about eight hours while he did the disc. And these were long, tedious procedures. Curtis is an amazing surgeon. But when this robot came out, it was a very happy day in my life. And I would be the first one to report that I was the first casualty of robotic usage in spinal surgery when I didn't have to hold the damn endoscope anymore. But it really changed things, because just moving changes everything, right? And after six hours, when you're getting a little tired and the scope is moving a little bit, that arm really helped out a lot. The da Vinci came out in 1999 and really has revolutionized intracavitary surgery and even transoral surgery, as Neil showed us. I mean, these are some beautiful applications of this technology. It's a master-slave robot, sort of different from what we need in our lives. John Adler, with the CyberKnife, really took us into this field of utilizing robotics in radiation therapy. A lot of things are coming out. Mako has made a splash; this is a robot for jigging knees for knee arthroplasty. I'm going to come back to this. At first, the orthopedic surgeons said, why do we need a robot? I can do a knee arthroplasty in two hours; I don't need a robot. And I'm going to come back to the reason why it's proven to be beneficial. Rosa has been around for a little bit now, and that's placing stereotactic electrodes in the brain. They've had some exposure in the spine. And then, you know, I think that Mike will share his experience with the Mazor robot, which has been around for a while as well in spinal surgery. When we look at spinal surgery, the reality is this is going to sort of start off at an entry level.
What are we using this for right now? Really, to place hardware. Can we place hardware percutaneously? Things that we do every day. Automating that accuracy is what's important. If you're in a pub in Ireland and you're playing against this robot, you're going to lose. If you look in the lower left-hand corner, every time it throws with that arm, it's a bullseye every single time. And that's what we're talking about. Are we able to take what we do on a daily basis and elevate our game so we can automate that accuracy? When we look at the goals of robotic surgery, you know, there are a few. Obviously, patients are asking for less invasive surgery. As a surgeon, I would like to eliminate the use of radiation to myself and to the OR team. And can we decrease it for the patient as well? Improve our procedural consistency? And then, is there an option to do planning? When you talk about designing a robot, this is a situation that took about 19 years of my life. In reality, this is Neil Crawford, who is my co-developer. This is an iterative process over years with engineering staff, computer programmers, software engineers, et cetera. This is what we first came out with, and the working prototype of what you'll see here a little later during the demonstration, which is the Excelsius robot. When you talk about robots, you have to ask yourself a few questions. And the questions are: what does it look like? Is it mounted to the bed? Is it on the floor? Is it on wheels? Is it mounted to the ceiling? All of these are considerations that will play into your workflow. The other considerations are: how does it work? You know, is it integrated with image guidance? Is it separate from image guidance? You want to be able to navigate, right? So we saw beautiful talks about taking tumors out, knowing where our margins are, knowing that we can do osteotomies and other beautiful advanced surgical techniques with the use of image guidance.
So the robot should be able to either help with that or, at the beginning at least, help us with image guidance. Where does the robot sit? Everybody's worried about bringing it into the room. Being able to bring the robot in and out of the room is a big consideration for workflow, right? How does it fit in? Are you using the O-arm? Are you not using the O-arm? What's the footprint of the device? All of these are critical questions when it comes to the adoption of this technology, because nothing will kill technology faster than having a system that you cannot use, or that is so cumbersome or so ridiculous that it's not going to fit into the workflow of your operating room. A lot also has to do with software. And as you're looking at and evaluating these devices: what's the look and feel? What's the user interface? How does the surgeon interact with the machine? And the reality is that this is a tool that's meant to help us. It's not supposed to hurt us. It's not supposed to slow us down. It's really supposed to integrate seamlessly into what we do. So the questions of workflow are ones that we need to address and that we should be asking ourselves on a daily basis as we look at these technologies, as we're evaluating these technologies. How does it get in the room? How does it get draped? What's sterile? What's not sterile? Et cetera. Can you unplug it to move it when the C-arm comes in? Does the thing die if it gets unplugged? What happens if you're using other equipment? Is there interference? So all of these things are considerations. When it comes to robotics, the imaging considerations are key. And there are three. What are the paradigms that we use now? Can you navigate off just two images, an AP and lateral fluoroscopic shot? And the answer is yes, you can. Can you use a preoperative CT and then get some images, AP and lateral x-rays, to marry and register that preoperative CT to the patient's anatomy? And the answer is yes.
Or can we go into utilizing an intraoperative cone-beam CT, like the Airo, the Ziehm, or the O-arm? And the reality is you'd like to be able to have all of those workflows, right? Why should you be limited to one? Other considerations obviously include the stiffness of the arm. Obviously you'd like to get away from K-wires; if the arm is stiff enough, you can do that. Maintaining your trajectory, the end effector, what else can it do? And then you want some feedback, right? Because if you're doing something percutaneously, and Pat showed some beautiful cases, where you want to place a K-wire somewhere and it feels like it's okay, but the screw doesn't feel okay. So you want to have tactile feedback, and you want to have some feedback as to how things are going, because that becomes a critical part of what we do. And then of course the most devastating thing, and Pat alluded to this as well: loss of registration. What happens when we're in the operating room and something changes? Can we understand that that's happened? Because if you don't understand that that's happened, that becomes a real problem. And then of course assessment of reachability. Can the robot get to everything that you need to do? If you're doing T4 to the pelvis, can it span that entire distance? So, with real-time feedback, one of the things that has always been interesting is the use of the tracker, which has been around forever. So think about a surveillance marker, and we'll talk about that during the demonstration: being able to have an extra piece of feedback information. If something moves, if the tracker or the surveillance marker moves, you have one extra piece of feedback from a very small percutaneous fiducial. Again, with the use of image guidance, can you track your instruments in real time to know where you are? And then of course, having some sort of feedback on two things. Number one, is your registration, is the fidelity, good? Are you accurate in what you're doing?
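The surveillance-marker check described here comes down to simple arithmetic: compare the marker's currently tracked position against the position captured at registration, and flag the case if the drift exceeds some tolerance. A minimal sketch of that idea, assuming 3D coordinates in millimeters and an illustrative 2 mm threshold (the function names and threshold are assumptions, not any vendor's implementation):

```python
import math

# Drift beyond which registration is considered suspect (assumed value).
DRIFT_TOLERANCE_MM = 2.0

def drift_mm(registered_pos, current_pos):
    """Euclidean distance between the marker position captured at
    registration and its currently tracked position, in millimeters."""
    return math.dist(registered_pos, current_pos)

def registration_intact(registered_pos, current_pos,
                        tolerance_mm=DRIFT_TOLERANCE_MM):
    """True if the surveillance marker has not moved beyond tolerance."""
    return drift_mm(registered_pos, current_pos) <= tolerance_mm

# Example: the marker drifts 3 mm along one axis, so the system
# should warn that registration may have been lost.
at_registration = (10.0, 25.0, -4.0)
now = (10.0, 28.0, -4.0)
print(registration_intact(at_registration, now))  # False
```

The point of the single extra fiducial is exactly this comparison: one fixed reference that the tracking camera can re-measure continuously and cheaply throughout the case.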
And then secondly, what happens if the tip of the drill is deflected? Do you get feedback for that? So while you're drilling or starting to put a screw in, if something skives or moves off to the side, do you get feedback for that? So this is, very quickly, an MIS TLIF case with a preoperative CT. You can go ahead and, in two minutes, literally at breakfast, plan where your screws are going to go so that you have an understanding. The neat thing is then, you get just a couple of x-rays in surgery, you're registered to that CT, and the robot rolls in, not attached to the patient. You use the robot, the position of the arm, to see where your incision is going to be. You're not putting K-wires in somebody and shooting 65 x-rays anymore. You're making a dot with a marking pen, making two paramedian incisions, going through the fascia with the bovie, and then ultimately going right to drilling and putting a screw in. We've now gotten down to putting four screws in within about the first ten minutes of the procedure; the screws are in. At that point, we're bringing the MIS retractor in, utilizing navigation to dock the retractor, and then bringing the microscope in to do the TLIF portion of the operation. But this is really where we want to be. We want to be able to, within ten minutes of the patient being positioned, get in and get the screws in. You're going to be able to do things that you weren't normally able to do. Pat alluded to putting screws in with the patient in the lateral position. Who's done that before? Freehand? The patient's in the lateral position, and you put pedicle screws in. Was that fun? Yeah, it's not easy. So, utilizing robotics with the patient fixed, if you do a lateral interbody, you are then able to get image acquisition and then go ahead and use the robot. The nice thing about the robot is it's going to hold that trajectory. Part of the whole problem in the lateral position is that that's not something we're familiar with. We don't do it on a daily basis.
It's not like it's foreign, but it is a little odd, and having the robot arm hold that trajectory for you can collapse the time of that to probably 35 to 40 minutes to percutaneously fixate a patient in the lateral position. These are just a few other cases. Just recently, this is one of my Baltimore specials: a lady came in with a little hump on her back from osteomyelitis. Again, you can use this in an open fashion. Robotics should be able to be used open and help us with percutaneous. MIS TLIF; this is a vertebrectomy for spinal column shortening. Then, of course, thinking about other ways to treat pathologies. Can we do something like transvertebral screws? You can do that with image guidance, but the reality is this gives us an opportunity, as image guidance does now, and the next step would be robotics. What have we found, in the last couple of slides here, is accuracy. The neat thing is that we've got this preoperative plan of where I want the screw to go. We put the screws in, and now we're able to overlay the plan on the implant to see how we did. Ultimately, I can conceive of a time where you're going to get a report card at the end of the case. What you want is not to have to take the patient back. At the end of surgery, if you've got a preoperative plan, it's a software issue. We're going to be able to, at some point in the future, take an x-ray and say, okay, let's marry that x-ray to the plan and to what's going on and see what our accuracy is. This is our first foray into publishing this. You can see, is everything perfect? No, but you can see the deviations are not egregious. We're not talking about screws in the canal. We're talking about what's happened here, these parallel deviations, usually from skiving of the drill at the entry point, which allows a parallel entry into the body. Again, nothing terrible, and nothing is 100% accurate, but we're certainly improving our accuracy.
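The "report card" comparison described here, overlaying the plan on the implant, reduces to two numbers per screw: how far the actual entry point sits from the planned one, and the angle between the planned and actual axes. A parallel deviation is exactly the case of a nonzero entry offset with a near-zero angular error. A minimal sketch, assuming 3D points and direction vectors already expressed in a shared coordinate frame (the data layout here is illustrative, not from any planning software):

```python
import math

def angular_deviation_deg(planned_dir, actual_dir):
    """Angle in degrees between the planned and actual screw axes."""
    dot = sum(p * a for p, a in zip(planned_dir, actual_dir))
    norm = (math.sqrt(sum(p * p for p in planned_dir))
            * math.sqrt(sum(a * a for a in actual_dir)))
    # Clamp for floating-point safety before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def entry_offset_mm(planned_entry, actual_entry):
    """Translational offset between planned and actual entry points."""
    return math.dist(planned_entry, actual_entry)

# Example: a screw that entered 1.5 mm lateral to plan but on a
# parallel trajectory -- the "parallel deviation" pattern above.
planned = {"entry": (0.0, 0.0, 0.0), "dir": (0.0, 0.0, 1.0)}
actual = {"entry": (1.5, 0.0, 0.0), "dir": (0.0, 0.0, 1.0)}
print(entry_offset_mm(planned["entry"], actual["entry"]))    # 1.5
print(angular_deviation_deg(planned["dir"], actual["dir"]))  # 0.0
```

Reporting both metrics separately is what distinguishes a benign parallel shift at the entry point from a trajectory that actually converges toward the canal.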
And then when we look at the data, what's out there right now, there have been a few studies now looking at robotics. We just don't have enough data. The freehand data and the image-guided data are fairly robust; we're not quite there yet with robotics. When you look at this whole concept of accuracy, I'm going to take you back to the knee experience. It's been about five or six years since the Mako came out, and now articles are starting to come out that show, five years later, that those patients who had a computer-assisted, navigated robotic arthroplasty have less wear on the implant itself and are doing better clinically. Why is that? Because if things are jigged perfectly, there's less wear and tear, and biomechanically things seem to hold up better. So in our field, what are we worried about? We're worried about adjacent level disease. If you can place a screw well away from that adjacent facet and don't have to disrupt it, there's probably a good chance that our adjacent level disease may go down too. That's going to be years in the making, but the reality is we can certainly plan for that. And finally, this was just submitted for publication: 28 consecutive patients, our most recent patients, compared to controls utilizing just freehand and fluoroscopy. And what we found is that there's a learning curve, but it gets better, and we're at about four and a half minutes per case now. So like I said, at this point a single-level MIS TLIF is under two hours in every single case, and you can take a mid-level resident through that. It used to be a little bit more painful a teaching process. We're at the point now where we're looking at, behind us here, we see these millions of dollars of technology, and we want everything to sort of plug and play, right? Robot with image guidance with different types of imaging paradigms, and this is, I think, where the future's going to go.
We start off, and when you look at general surgery and the use of the da Vinci, it starts off with procedures that are done widely, and then you sort of get into this realm of science fiction. Can we get to a point where we're doing these things on a daily basis? In spine surgery, we start off with something like pedicle screw placement and microdiscectomy, and then who knows where we're going to end up. So thank you very much for your time and attention. I appreciate it.
Video Summary
In this video, Dr. Theodore discusses the use of robotics in spine surgery. He begins by talking about the advancements in imaging and surgical techniques that have contributed to the field of robotics. He mentions that while image guidance and robotics have been used in surgery, adoption rates have not been very high. Dr. Theodore explains that the next step is to bring all three components together: imaging, surgery, and robotics. He highlights the need for high accuracy and precision in surgery and how image guidance can improve this. He also discusses the challenges and considerations of using robotics in surgery, including workflow, imaging options, feedback mechanisms, and usability. Dr. Theodore demonstrates the benefits of robotics in different spinal surgeries and presents the potential for automation in surgical procedures. He concludes by suggesting that robotics could revolutionize spine surgery and improve patient outcomes.
Asset Subtitle
Nicholas Theodore, MD, FAANS
Keywords
robotics in spine surgery
advancements in imaging
image guidance
challenges of using robotics
improving patient outcomes