Science of Neurosurgical Practice
Critical Appraisal of a Therapeutic Trial
Video Transcription
We're going to discuss critical appraisal of a therapeutic trial. So as always, I'm going to start with a patient, and here's the patient that I've selected. This is a healthy 25-year-old young woman, the kind of patient most of us would see in the office, and Dr. Rizek and Dr. Brenmeier see these people all the time, who has her first-ever generalized tonic-clonic seizure. She had no aura, no lateralizing features either during or after the seizure, and no obvious precipitating factors, and her neurologic examination is normal. The only really interesting thing in the family history is a brother who has an occasional generalized seizure without any clear etiology. Patient interactions always generate these information needs, and the goal of evidence-based medicine, as David Sackett, whom someone mentioned earlier as a father of evidence-based medicine, would say, is to judiciously and conscientiously apply the best evidence that we can find, in this case, to this patient's seizure. You could ask a bunch of questions, and we're always trying to ask: how could I make a mistake here? That's what evidence-based medicine helps us with. And there's a formalized process, and it's the same process we use when we're looking at other people's work, when we're evaluating manuscripts as a reviewer, when we're evaluating clinical trials, and when we're developing our own papers and manuscripts. First, ask a focused question. It was striking, when Dr. Nanda showed us a bunch of historical papers, that the ones that really made a difference, the ones that we remember or act on today, asked a very narrow, well-defined, focused question. Broad questions generally aren't amenable to an evidence-based approach, and they don't have long-term usefulness. Then find the best evidence and critically appraise it, both in terms of whether it's reliable and whether it applies to our patient, and then apply it.
Okay, so we're going to go through each of these for this young woman who's here now in the clinic sitting in front of you. And you can add some others if you'd like. And unless you have a preference, and even if you do, since I've already tracked down the articles, we're going to focus on this particular question: should this young woman, an otherwise normal woman, be prescribed anticonvulsant medication following her first unprovoked seizure? Opinions, by the way? How many people would treat this young woman? Just a handful. And so everybody else, no? Like 38 people here, and four people said yes. So let's look at the evidence. Okay, we're going to start the way we usually look at the evidence, with the PICO format. You fill them in; ask the question for me, and I'll push the buttons. Patient group? Always the same: in patients with a first unprovoked seizure. Intervention? Sure, so you want to say early anticonvulsant therapy. Compared with? Sure, no treatment. And an outcome? Yeah, you can think of a lot: recurrent seizures, seizure-free outcome, how many people have seized after one, two, five years. A whole bunch of potential outcomes. So does that sound like what you came up with? That's really the first step: we've asked a very focused, answerable question. How are we going to go about finding the evidence? What are you going to do? This focused question helps you, because we're all busy and you don't have a lot of time to go messing around. So how are you going to use that question to help answer your information need and respond to this patient? Yeah, this is not a hard question. The way you find evidence is by using the hierarchy that we've talked about and focusing on the best evidence. Some people don't think that meta-analyses belong at the top; they at least don't provide any new evidence. But certainly they're up there, along with randomized controlled trials. How are you going to identify those trials?
Yeah, that's what I would do: some interface with Medline, right? Everybody has access to PubMed, and some places that want to pay for it have access to Ovid and some others. These are all just interfaces for the same database, though. You probably know, and we'll talk about this, that if you're doing an exhaustive search you shouldn't restrict yourself to Medline interfaces. But if you do that for this question about early anticonvulsant therapy in a patient like the one we've discussed, you'll come up with hundreds of hits, and we'll talk about how you filter that down to the relevant papers. If you filter further, you'll find that at the top of the evidence pyramid there are two randomized controlled trials and a meta-analysis. So we'll focus on those two randomized controlled trials. I talk a lot, but if you're doing this yourself, you could be at this point in less than a couple of minutes, right? So this is a quick, efficient way to ask a question and find the evidence, and now we've got to appraise the evidence and see what it says. So how are we going to go about that? Sure. So let's ask those questions. And there are a bunch of guides for this. We basically want to know: can we trust what we read? And is it generalizable; can we apply it to the patient who's sitting in front of us? As I said, there are lots of resources for that, all of them available for free, and there are a lot of published guidelines about how to evaluate a trial. CONSORT is perhaps the best known; it's easily available, and we'll talk about it more. To illustrate the point that this is the same technique for analyzing someone else's work and for preparing your own, CONSORT was actually developed as a guideline for publication, and all the major journals have adopted it.
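The search step above can be sketched in a few lines of code. This is only an illustration: the helper function and the exact terms are invented for this lecture's question, though the `[Publication Type]` filter is standard PubMed syntax for narrowing results toward randomized controlled trials.

```python
def pubmed_query(terms, rct_filter=True):
    """Assemble a simple PubMed query string from PICO-derived terms.

    The publication-type filter narrows the hundreds of hits toward
    the top of the evidence pyramid (randomized controlled trials).
    """
    query = " AND ".join(f"({t})" for t in terms)
    if rct_filter:
        query += ' AND "randomized controlled trial"[Publication Type]'
    return query

# Terms drawn from the PICO question built above
q = pubmed_query(["first unprovoked seizure",
                  "early anticonvulsant therapy",
                  "seizure recurrence"])
print(q)
```

Pasting the resulting string into any Medline interface (PubMed, Ovid) does the filtering the lecture describes in one step.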
Since I'm usually in a rush and behind, I've narrowed the CONSORT three-page document down to a couple of really specific questions, focusing here on intervention trials. You can do the same thing for diagnostic questions, prognostic questions, and associations. So let's go through this. And if you're really pressed for time, these are the two critical questions, and you can focus on those. So here's the trial. This is a multicenter trial where they randomized almost 470 patients immediately after the type of seizure we discussed: all ages, all types of seizures except febrile seizures. They were randomized to receive treatment immediately, that is, after this first seizure, or not until they had a second seizure. The physician got to decide what drug to use, and because this is a study from 20 years ago, they didn't use some of the drugs we'd perhaps use today; Tegretol, Dilantin, phenobarbital, and Depakote were the options. They did a nice power calculation. This was a randomized trial, and the randomization worked, so the patient characteristics were well balanced between the groups. And that was the trial. So, randomized study? Yeah. Why do you randomize? What's the point of randomization? We've sort of talked about this, but let's be really explicit. What's the reason? What does this illustrate? Why do we randomize in a trial? Because it's a pain in the neck? Or, more broadly, to make sure that prognostically important features are equally distributed between the groups you're interested in. Why not just count up the patients and assign them yourself? You could say, well, someone who presented in status probably has a greater risk of having a recurrent seizure, so I'm going to make sure that equal numbers of those patients are in the two groups. And you can go on: elderly patients are probably more likely to recur, so I'll make sure that the ages are balanced. Why not do that? It's certainly less expensive than randomizing.
Yeah, exactly. So randomization is our best, perhaps our only, way of balancing both known and unknown prognostic factors, potential confounders. It's the only really efficient way to do that. So we randomize. So yes, this was randomized. We didn't talk a lot about this: do you know what allocation concealment, concealment of the randomization, is? Before you answer the question of whether the study was concealed, you have to know what that is. That's exactly it, perfect. And I think Dr. Barker was telling us how investigators will sometimes go to great lengths to subvert the randomization, you know, holding envelopes up to the light, steaming envelopes open. So it's really important. Concealment is different from masking or blinding. Blinding, or masking, we do to protect the randomization sequence after patients have been assigned, right? It prevents assessment bias, or misclassification: knowing what a patient has been assigned to, and then having that subtly adjust your impression of an outcome. You can't always accomplish masking; we've talked about how surgical trials, for example, make that difficult. But concealment is different. Concealment, just as you said, comes before the randomization, and it protects the randomization by keeping knowledge of what patients are going to get away from the investigators. Because, you know, we all want the best for our patients, and in general we all have an idea about what's best. So if you don't like the randomization that's coming up for a given patient, it's kind of easy to slip into: well, we'll hold this patient back, someone else can get that randomization, and come back after lunch and maybe we'll get a different assignment, okay? So that's allocation concealment. And you can see how its absence could lead to really pervasive selection bias. And it has; there are lots of examples of that.
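To make the concealment idea concrete, here is a minimal sketch of the kind of permuted-block randomization list a central office would hold. The function name, arm labels, and numbers are all invented for illustration; the point is that the sequence is generated and kept away from the enrolling investigators, so the next assignment cannot be predicted or subverted.

```python
import random

def permuted_block_sequence(n_blocks, block_size=4, seed=None):
    """Build a randomization list in permuted blocks.

    Each block holds equal numbers of 'immediate' and 'deferred'
    assignments in shuffled order, so the arms stay balanced over
    time while the next assignment stays unpredictable.  The list
    would be held centrally (or in sealed opaque envelopes) to
    preserve allocation concealment.
    """
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_blocks):
        block = (["immediate"] * (block_size // 2)
                 + ["deferred"] * (block_size // 2))
        rng.shuffle(block)
        sequence.extend(block)
    return sequence

seq = permuted_block_sequence(n_blocks=5, seed=1)
# Balance is guaranteed by construction, whatever the shuffle does
assert seq.count("immediate") == seq.count("deferred") == 10
```

Note the design choice: blocking balances the arms even if the trial stops early, while the shuffle within each block is what keeps an investigator who has seen recent assignments from guessing the next one (larger or variable block sizes make guessing even harder).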
When you don't conceal, and this is actually a more important flaw in randomized controlled trials than not masking, it tends to distort the association, not always, but almost always, in which direction? In favor of or against the intervention? In favor, yeah, almost always, and by a big margin. And it happens a lot. People have done a lot of these studies in the medical literature, looking at the last five or ten years of randomized controlled trials in the Lancet or the New England Journal, and it turns out that the majority of studies, even today, either are not concealed or don't tell you about the concealment. Okay? It makes a huge difference. So: allocation concealment before the randomization occurs, masking after; that protects your randomization. So was this allocation concealed? You didn't actually see the article, so I'll tell you that they don't discuss how they did the allocation. In general, you'd suspect that a big multicenter trial used a strategy for concealment, and if you actually talk to the investigators, you find out that they did, but there's no way to tell from the published article. And so you have to give them a score of: I don't know, they didn't say. How about patients accounted for? Why is that an important issue? We've touched on this. Why do you care? Suppose a trial randomized 486 patients and got data on 250. That's a big number, and your statistics will protect you against random error. So why do we have to spend any time on this issue of lost patients, patients not accounted for? It affects the validity and the outcome of the trial; you have to ask why those patients left the trial. Yeah, exactly. So at the very least, if you lose a lot of patients, it's going to make it harder to show a difference if a difference really exists, right? But you have no way of knowing why those patients disappeared. It could have been non-random; you can't tell.
And if it's non-random loss to follow-up, then you potentially bias your result. How about if all of the patients who were randomized to not receive an anticonvulsant dropped out because they had seizures, got fed up with their doctor, and went to another one? That would bias the result. There are a bunch of ways of handling loss to follow-up. The best way, by far, is just not having any, okay? That's the best and most important. People will sometimes compare the patients who were lost with those who weren't, to see whether they're comparable. But a lot of times, unknown, unknowable factors will influence loss, and so that's not very satisfying. People will sometimes use this idea of last observation carried forward. Do you like that as a strategy? It's not so good, because, especially in disorders that are progressive, and you see this in Alzheimer's disease studies all the time, well, shoot, the last observation was three months ago, and you're assuming that the patient is the same at your cutoff point three months later. So basically there are no good strategies to really compensate for loss to follow-up. The best may be the sensitivity analysis, where you assume that all the lost patients did as badly as possible, and then that all the lost patients did as well as possible. At least then you get a sort of confidence interval: your result is likely to be somewhere between the two. But there's no good strategy to correct for loss to follow-up. And in the same way that the religious devotion to 0.05 as a p-value arose primarily from what someone thought when he woke up one morning, this idea that we'll allow up to 20% loss to follow-up was sort of hatched in Sackett's mind as something that would be reasonable, and it's stuck. So we generally give studies a pass if they have less than 20% loss to follow-up. But it's a threat to reliability. This is the way you should convey this information; it's right from the CONSORT document.
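The best-case/worst-case sensitivity analysis described above is simple arithmetic. Here it is as a sketch; the counts are made up for illustration and are not from either trial discussed.

```python
def loss_to_followup_bounds(events, analyzed, randomized):
    """Bound an event rate when some randomized patients were lost.

    best  : assume every lost patient was event-free
    worst : assume every lost patient had the event
    The true rate must lie somewhere between the two, giving a sort
    of confidence interval around the observed result.
    """
    lost = randomized - analyzed
    best = events / randomized
    worst = (events + lost) / randomized
    return best, worst

# Illustrative only: 40 recurrences among 180 analyzed, 200 randomized
best, worst = loss_to_followup_bounds(events=40, analyzed=180, randomized=200)
assert (round(best, 2), round(worst, 2)) == (0.20, 0.30)
```

Notice how quickly the bounds widen: losing just 10% of patients here turns a 20% event rate into a 20-to-30% range, which is why the best strategy is not losing patients in the first place.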
We want to know how many patients were eligible for the study, how many of those eligible were actually randomized, and, of the randomized ones, how many you were able to analyze, and, if you couldn't analyze everyone, why not. So in this study, here are the numbers to ask whether all patients were accounted for. What do you think? They didn't provide a flow diagram; CONSORT hadn't been published when this study came out. But you can put together the numbers from their article. Are you comfortable with what they've done? It's not a bad job. They don't tell us the characteristics of those 300-some patients who were eligible but didn't get randomized, and that would have been nice, but they do tell us the characteristics of the 515 that were randomized. So we can at least use that information to tell whether those patients are similar to our 25-year-old. And they did lose some patients. I'm not sure what "incorrectly randomized" means; I couldn't press Dr. Bakey to tell me, but it's less than 10%, so we'll probably give them a pass and say that all the patients are accounted for within the limits that we've set. How about blinded intervention and outcome? Was this a masked-assessment trial? Remember how we told you this was done: how did they decide which anticonvulsant to use? Do you recall? They had a choice of four drugs. Yeah, so this was not masked, right? The physician decided what drug to use; they knew what was going on. How about intention to treat? Why is that important? We've had a little discussion about this, and it is galling. I agree that, as clinicians, we do clinical trials to improve patient care, and it's really upsetting, in a way, to say, well, the patient didn't get the treatment. How is it fair? How does it contribute to our honest knowledge to include a patient who didn't get the treatment in the group of patients that we analyze?
I'm going to try to persuade you that this is essential. All of the benefits of randomization, and I think we agree that it's important, protect whole groups of patients. You lose the protection of the randomization in subgroups if you eliminate patients. And there are very subtle ways. You see all the time, in articles in big journals, "we did an intention-to-treat analysis in the per-protocol group." That's another one of those oxymorons: per protocol is not intention to treat. They've eliminated patients by some criteria, which they may or may not tell you. So there are a lot of these subtle terms that get used in place of "we didn't really do an intention-to-treat analysis." Okay, and just like loss to follow-up, the problem with patients who don't get the treatment you've assigned, who cross over, who stop taking it, is that it's potentially non-random. And if it's a non-random alteration, then the risk of bias is huge. Okay, so Dr. Haynes showed this data; I thought it was worth repeating once. This is an old study. The drugs are no longer really that relevant, and that's probably why the investigators were willing to give us the patient-level data. But here, just for a minute: this is an intervention for patients who've had an MI, and they're looking at five-year vascular mortality in the drug-treated and placebo-treated groups. Okay, and because these are huge numbers, they got a statistically significant difference in mortality. We can argue about whether that's clinically relevant, but it was certainly statistically significant. Now I'm going to look at just the drug group. Okay, this is just the drug group, and this was not a planned analysis for the study, but like I said, since we had the numbers, I thought it was fun to look at. And we looked at compliance, or adherence, which is the more PC term for it now.
So we just looked at patients who were compliant with their drug, which we defined as taking the drug 80 or more percent of the time, and patients who were non-compliant. Okay, here are the numbers. Look at the big difference in mortality. These are people assigned to the same treatment arm: the ones who took the drug had a 15% five-year mortality, and the ones who didn't, almost 25%. You know what the next slide is going to be, right? This is the same data, just looking at the placebo arm. So these patients got placebo. You could say it doesn't matter whether they took it or not; it was a placebo. But it does matter, to almost the same extent that it mattered with the drug, and the difference is certainly more statistically significant than the overall outcome. There are a lot of things you can say about this result, but at least one of them, I think, isn't controversial: patients who are compliant are different from patients who aren't. This is, I think, a compelling example of why intention to treat is just essential. We don't like it, but we've got to do it, because of this. Okay. So, intention to treat here? Yes, these investigators analyzed it, as you saw in the flow diagram. A bunch of patients crossed over; a lot of the patients who were assigned to anticonvulsants eventually, over the two years, stopped taking them. But they were analyzed according to the group they were assigned to. And I think we already said that masked assessment wasn't done. Let me just show you kind of a cool example from the neurology literature about why masking is so important, because sometimes I think people feel that they're ethical enough to overcome the problem. I think Dr. Haynes showed us a reference to an article where masked and unmasked randomized controlled trials were performed on the same question, and I'm eager to see the actual article. But here's an interesting study done by some really smart people; this was run by the Mayo Clinic.
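The placebo-arm comparison above can be reproduced with a few lines of arithmetic. The counts below are hypothetical numbers chosen only to echo the 15% versus 25% pattern quoted in the talk; they are not the actual trial data.

```python
# Hypothetical (deaths, n) by adherence, within the PLACEBO arm alone.
# Chosen to reproduce the 15% vs 25% pattern quoted in the talk --
# these are NOT the real study's counts.
placebo = {"compliant": (150, 1000), "noncompliant": (125, 500)}

def mortality(deaths, n):
    """Five-year mortality as a proportion."""
    return deaths / n

# Compliers die less often even though nobody here received active
# drug: a prognostic difference between the subgroups, not a treatment
# effect.  An "as treated" or per-protocol analysis imports exactly
# this bias, which is why we analyze by assigned group (ITT).
assert mortality(*placebo["compliant"]) == 0.15
assert mortality(*placebo["noncompliant"]) == 0.25
```

The punchline is in the comment: any analysis that regroups patients by what they actually did, rather than by what they were assigned, silently replaces the randomized comparison with a confounded one.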
Okay, and the institutions were all high-caliber institutions. What they were looking at is multiple sclerosis and plasma exchange. They did plasma exchange and sham plasma exchange, and the way the study worked was that when the disability score, this EDSS thing, increased by half a point, the patient had progressed and dropped off the study. Okay, and they started the study, and you can see how this would be cumbersome to mask, like a surgical trial; they actually had to do sham pheresis, and so on. But they decided about halfway through the study that they'd better mask it. So it started off unblinded, and halfway through they changed to a blinded study. And so you have this neat data: this is not just the same question, it's actually the same study. Just look at the plasma exchange line, for example, or the placebo line. When investigators were unblinded, when they knew their patient was getting placebo, patients came off the study at six months on average. When it was blinded, when they didn't know, it took a year longer for patients to come off. Now just compare top to bottom. Compare the blinded study, the dashed line, top and bottom: when the study was blinded, placebo a year and a half, plasma exchange two years, no statistical difference. When it was unblinded, placebo six months, plasma exchange two and a half years, a dramatic difference. You can sort of imagine how people were saying: well, placebo, I know this isn't going to work; the score is up, they've progressed, let's get the patient off. And when they're on plasma exchange, and you want to believe plasma exchange is helping, well, let's rethink this at the next evaluation. And so that's what happens. Unmasked assessment, even by smart people who want the best for their patients, really distorts a trial. So we said this was not a masked study, right? The physicians knew what their patients were getting and selected the anticonvulsant.
You didn't get to read the study, but they did treat the patients equally in both groups apart from the anticonvulsant. And that's a basic tenet of trials in general: the only thing of importance that should differ between the two treatment groups is the intervention you're interested in examining. If the groups differ in other ways, that potentially reduces the reliability. And that's again why we randomize. We use adequate numbers and we randomize. Randomization doesn't always work, the same way that flipping a coin 10 times will sometimes give you seven heads. But if you use big enough numbers and your randomization is done well, this is our best way to ensure that the groups are equally balanced. Okay? In this particular trial, they were. So what do you think about the overall strength of this study to answer our question? Decent? I'd give it either a moderately high or a high level. But this is the problem with assessing reliability: it's not quantitative. You won't argue with me when we decide to look at the numbers, but in assessing reliability there can be a little bit of disagreement. So now we've asked the question, found the evidence, and critically appraised it. What did it say? And can it help us answer what to do about this patient? So here are the numbers. Okay? This is risk of seizure recurrence at two years, their primary outcome: a twenty-six percent absolute risk reduction. And so the number needed to treat is approximately one over 26 percent. A little point that I saw people discovering the other day: when you're calculating the number needed to treat, you need to turn the percentage into a proportion, right? So it's one over 0.26, which is about four. Is that a good number needed to treat? Is that persuasive to you? Yeah, that's one of the best you'll see. Just as a sort of grounding in numbers needed to treat, there is a very cool article by a guy called Sam Wiebe, whom we'll mention again later.
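The arithmetic quoted above, as a quick check. The two individual risks are illustrative values chosen only to be consistent with the 26% absolute risk reduction the talk quotes, not exact figures from the paper.

```python
def arr_and_nnt(risk_control, risk_treated):
    """Absolute risk reduction and number needed to treat.

    NNT = 1 / ARR, where the ARR must be expressed as a proportion
    (0.26), not a percentage (26) -- the slip mentioned in the talk.
    """
    arr = risk_control - risk_treated
    return arr, 1.0 / arr

# Illustrative two-year recurrence risks, consistent with the quoted
# 26% absolute risk reduction (not exact figures from the paper)
arr, nnt = arr_and_nnt(risk_control=0.51, risk_treated=0.25)
assert round(arr, 2) == 0.26
assert 3 < nnt < 5   # roughly four patients treated per recurrence prevented
```

Using 26 instead of 0.26 would give an "NNT" of about 0.04, which is the mistake the speaker warns about: the formula only makes sense with risks as proportions.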
He's a neurologist, and he looked at neurology interventions, just to give a sense of the numbers needed to treat for interventions that we use routinely in medicine. A number needed to treat of four is one of the best you'll see. Okay? So that's that; there are the numbers. Here's the second article. And now I'm going to daydream and you're going to work; I'm just going to flip slides. We're going to fill out the same assessment, and you do a little reading and then tell me the numbers. Okay, a little information. You can get a lot of information, sometimes most of it, from the title and the abstract; we're going to do a project like this together in a small group. Since time is short, you should make use of that. So here's the study. This is another multicenter randomized study in patients of all ages with first seizures, not febrile seizures: almost 1,500 patients. They were randomized to start anticonvulsants at the time of their first seizure or not until the second seizure. Again, the physician got to choose the antiepileptic drug, and they had a bunch of primary and secondary outcome measures. It's a lot like the other trial that we saw. So, one more little tidbit and then we'll start answering some questions. Let's go. Is that a randomized trial? Sure. Randomization concealed? I didn't actually tell you that, but they did tell you that there was central randomization, and that's a good indicator: you called in to a central facility and got the assignment, and that's pretty hard to subvert, right? You don't know what's coming next, and you don't know what three or four or ten other randomizations occurred at other institutions before yours. So I think it's okay to give them credit for that, but they should have said more explicitly what they did. All the patients accounted for? Here's the data. This is a newer study, so they did use the CONSORT document. So what do you think? I gave them a yes.
They lost maybe about, what, 12%? I didn't actually count the numbers. Intention to treat? I'll give you a little bit more to read, and we'll answer the rest of the questions. Okay, and one more little piece of the article. So, intention to treat? Yeah, they were pretty explicit. Masked outcome assessment? No, same as the study before, right? The physician chose the antiepileptic drug. Aside from anticonvulsants, were they treated equally? They told you how they were treated. How about similar at baseline? Where do you find this information? Again, we're always pressed for time. It's almost always table one, exactly right. So here's their table one. These are almost all percentages, except for age. So what do you think? Are the groups comparable? Did they include the features of each group that you might want to know? That's another thing you have to think about. So, strength of this study? Yeah, well, I guess you'd have to call it the same as whatever you called the last one. So high, or moderate, but reasonably good. Again, we've asked the question and found the evidence, we've critically appraised it, so now we're going to apply the evidence, or you are. Why don't we pick the two-year mark, because that's what we did for the other study. And they have a risk difference, and you can see, again, that about half of patients, about 50%, who didn't get anticonvulsants early had second seizures. Okay. There are some problems with that study, and you didn't get to read it in detail, but they had a lot of patients who were eligible who didn't get enrolled. Again, we know the characteristics of the enrolled patients, so that helps us decide whether they're relevant to our patient, but you wonder what was going on with the almost one quarter of patients who declined participation. And there's a big spectrum of patients: we have a 25-year-old, and they were looking at two-year-olds and 80-year-olds.
It's an issue of what, by the way? Internal validity, bias, concealment, random error, masking? Or external validity, generalizability? Yeah, this is an issue of generalizability. And, as with the other study I mentioned, there are a bunch of crossovers, so we keep that in mind. So, back to our patient, our 25-year-old. What are you going to tell her? What's the reasoning you're going through, out loud, as you're deciding what to do with this patient? You've now reviewed the sum of the randomized controlled trials addressing this issue; there's no more published evidence that you can acquire. We have two randomized trials that show that. Yeah, I think that's a good analysis. I agree. So this question of antiepileptic drugs: just as you said, about half of people will have a second seizure, and seizures are bad, you know, they affect driving and employment and social status and a lot of other things. Anticonvulsants reduce the risk substantially, and we're pretty confident in that result; we have two randomized controlled trials. But, like you said, that requires, and here's the step beyond evidence-based medicine, using judgment and patient values and interpretation: is this applicable to my patient? But I think it's really reasonable to make that recommendation. Okay, so this was not that hard, right, this assessment of these clinical trials? And you'll see, you've got in your day-two folder, I think, the things that we looked at: the therapeutic-trial assessment, asking a question when you're in a rush, and applying the evidence.
Video Summary
In this video, the speaker discusses the critical appraisal of a therapeutic trial for a young woman who had a first-ever generalized tonic-clonic seizure. The goal is to apply evidence-based medicine to determine whether the patient should be prescribed anticonvulsant medication. The speaker emphasizes the importance of asking a focused question, finding the best evidence, and critically appraising it. Two randomized controlled trials are examined, both of which show a significant reduction in the risk of seizure recurrence with early anticonvulsant therapy. The speaker highlights the importance of intention-to-treat analysis and masked assessment in clinical trials. The overall strength of the studies to answer the question is considered high. Based on the evidence, it is reasonable to recommend anticonvulsant medication for the patient. The speaker also notes the importance of considering patient values and the applicability of the evidence to individual patients. Overall, the video provides a framework for assessing therapeutic trials and applying the evidence to patient care.
Asset Subtitle
Presented by Michael J. Glantz, MD
Keywords
therapeutic trial
evidence-based medicine
anticonvulsant medication
randomized controlled trials
seizure recurrence
intention-to-treat analysis
patient values