Science of Neurosurgical Practice
Practical Evidence-Based Medicine: An Introduction
Video Transcription
Welcome, and thank you again for coming. So I think that the title of the course is maybe a little bit obscure. What we'll really mostly be talking about, you could characterize as sort of bedside evidence-based medicine, because we're going to emphasize the relevance to clinical care, or maybe practical biostatistics, because we're going to minimize the emphasis on mathematics and try and convince you that statistics is really easy and the hard part is understanding the application. But I think more importantly, we're going to look at a bunch of recent literature and think about how we can be misled. That's the biggest threat, I think, to both clinicians and researchers, and constantly ask the question, how can we be wrong? So that's really the theme of the entire day. And so it's a little bit weighted on the first day to some didactic talks, but that's not going to be the model for the entire course. We're going to spend most of our time in these small groups doing things hands-on, and we'd like to start kind of getting into that model with an example way outside of neurosurgery. And we'll do this periodically to emphasize that the principles are the important thing, and we don't want to get lost in the details of the clinical subspecialty. So I picked something that I'm hoping people aren't familiar with, and I'm going to do a really quick lesson on thermal expansion, something well-known to physicists. And we're going to restrict ourselves just to metals, because nonmetals cause other problems. So here's thermal expansion. When metals heat up, their atoms begin to move more, their kinetic energy increases because of the heat, and the atoms get farther away from each other. That's almost all you need to know about thermal expansion: the expansion is driven by the change in temperature. And there's some math that goes with this that you don't have to know, but if you divide the degree of expansion by the change in temperature, you get this thing called the thermal expansion coefficient; that's what it looks like. And there are big tables that list the coefficients for different materials at different temperatures, and so you can look these up in the CRC handbook if you want. But it actually is relevant to day-to-day life, this concept that the atoms get further away from each other when they're heated. So for example, I don't know if you've ever thought of this, I had a big train trip recently to Boston, and I was thinking, in 2014, why can't they get rid of the ch-ch-ch that trains do? You'd think that they'd be able to figure that out. And the reason they can't do that is that you need these separators after every length of rail, to avoid this. And you know, there's this really vigorous online community of train aficionados; you can find amazing things online about trains, and people have actually posted how much of a gap you have to leave between each rail in order to avoid this. And that's why suspension bridges also look the way they do, and if you don't take that into consideration, this kind of thing happens. I took this out of the back window of the train when I was still in North Carolina. I didn't get a really good picture; I should have gotten out of the train, but I was afraid they'd leave without me. But it has some real-life applications, this thermal expansion. So here's the test now, here's the problem, okay? Consider a rectangular plate of metal, a flat plate, with a circular hole in it, okay?
When the plate is uniformly heated, what happens to the diameter of the circle? Okay? What do you think? Does it increase? Got to have a show of hands. Circle get bigger? A couple. Stay the same? Be bold. Another couple. Get smaller? Must be everybody else. So look around, find someone who disagreed with you, and convince them that you're right, and I'll give you a couple of minutes. Go ahead. Talk them into your point of view. You guys come to some conclusions? We're running just a minute or two late, so I'm going to cut the discussion short and talk about this. The point of this, before we go over the answer, is that after you have this discussion, and before we figure out the answer, I want to just analyze what we just did, because this is going to be the model for the whole course, okay? You had to make a commitment, right? We got everybody to vote, and you had to externalize your answer when you found someone else who disagreed with you. You had to reason with them, and you sort of got invested in the process. If you were standing up here, removed from the discussion, it was pretty loud, okay? People were laughing. People were interacting. That's what we want to do, okay? That's how this course is going to run. Get you committed, get you fired up, get you engaged in what we're doing. So throughout the course, that's what we'd like you to do. So increase? Anybody change their minds? Stay the same? Decrease? Decrease? Like, 100% said decrease? It actually increases, right? You want me to convince you? What would happen, for example, if you have the sheet of metal and just the circle, but you don't cut it out, right? Imagine that, and heat it up. You would see how that would enlarge. Well, you'll probably say that's not the same as having a piece of metal cut out, because, I don't know, because it isn't. You know that colloquially, right? When you have a jar that you can't open, you put it under hot water, right? We can kind of analyze this a bit. What really happens is that it gets bigger. It's like a photographic enlargement. That's what really happens. But think about the atoms on the outside, on the rim of the hole, okay? If that got smaller, then the atoms would be closer together, and that can't happen, right? We just learned about thermal expansion. The hole has to get bigger in order for the atoms to follow our rule that things get further apart when you heat them, okay? So you won't forget this now, right? So that's our plan for the rest of the day, and the rest of the three days, actually. So, the course proper now: I'm going to start off with an introduction and then move through the syllabus that you all have, okay? And I'm going to start with a clinical scenario, because that's sort of what all evidence-based medicine starts with. And again, I'm going to use one that's not quite neurosurgery. I promise that most of what we do will be directly relevant, but I don't want to make you think that this is limited by discipline. So here's the scenario. A patient comes to see you who, within the last two days, has suddenly developed facial weakness on the right, which you're convinced represents a Bell's palsy, and you decide that you want to give that patient corticosteroids to improve their condition. And you discuss with your colleague what dose of corticosteroids she usually uses. She looks at you aghast and says, well, I don't use steroids at all. They don't work. And so your discussion escalates into an argument and then a fight, and you need to find some support for your decision, okay?
So for this question about whether you use steroids for Bell's palsy, I'm going to introduce this metaphor of having a support, some sort of column of support for giving or not giving steroids, for the question that you're asking: sort of clinical reasoning. And in fact, that's another name for evidence-based medicine. Some people call it science-based or reason-based medicine, and I think practitioners get into trouble that way because, of course, everybody wants to think that they practice reasonably. And in fact, they do. You know, there's this kind of implicit reasoning that clinicians do to make decisions, and it's usually successful, but what we're talking about is a more explicit and transparent system of reasoning; that's really evidence-based medicine. So that's what we're going to pursue. In the next maybe 25 minutes or so, depending on how fast I talk, this is what we're going to try and cover. We're going to talk about reasoning in general and some of the faulty reasoning that we see all the time on the wards, and then some more reasonable types of reasoning, deductive and inductive reasoning. We're going to introduce the concept of a hierarchy of evidence, so different reliabilities of different types of evidence, and then we'll just sort of dip a foot into the quantitative aspects of evidence-based medicine that we'll spend most of our time on later, talk about sources of error, and then as a last step, talk about what we do when logic kind of leaves us short, when we still have to make a decision. And we'll start with faulty reasoning, okay? One type of faulty reasoning is outright lying, and you hear this probably all the time when you're reviewing a study and someone says, I don't believe that; that study was sponsored by a pharmaceutical company. The implication is that the investigators are not telling you the truth in order to persuade you, okay? So you've probably heard people say that all the time. That's a dead end for us, okay? Because if you believe that people are lying to you, then what follows is that you only believe the studies that kind of corroborate what you already think, okay? And fortunately, this kind of deceit is really rare in the medical literature. It doesn't happen often, and so a tenet of doing any kind of evidence-based medicine is that we have to believe the studies. Now, they can be biased, and people can put spin on them, but bias is not the same thing as lying, okay? We have to agree that, by and large, the medical literature is not dishonest; maybe biased, but not dishonest. So here are a couple of other really common kinds of fallacious reasoning. So: the use of steroids for Bell's palsy is the standard of care in our community, right? How often do you hear people say standard of care, or the consequences of the disease are so devastating that I have to do this treatment? So what's wrong with those types of reasoning? Why are they, well, they're persuasive, right? But why are they fallacious? What's wrong with that type of reasoning? Sure, they're not evidence-based, and they're really largely irrelevant. They sort of beg the question, right? They're rhetorical, but they're emotion-driven rather than evidence-driven. They sort of appeal to the psyche, and in fact, the whole point of those types of arguments is to persuade, okay? And often they're very persuasive, but they're not relevant, okay? Just because, well, they're just not relevant. We'll talk more about it.
The way to focus on the evidence, as you pointed out, is by focusing your question, and George already kind of anticipated this for us. I know a bunch of you are familiar with this format of asking questions. I know that from the analyses of articles that you sent in, but is everybody familiar with this sort of PICO format of asking questions? No? So for those of you who are, remind us what these letters stand for. The P is what? We're going to do another physics topic if you don't play my game. Patients? Mm-hmm. And the I? The intervention, and the C? Any of those are good. The comparison intervention or the control, or, sometimes, although I hate the jargon, if you consider exposure for the intervention, then it sort of broadly applies, because these kinds of questions work for interventions. They work for risk factors. They work for diagnostic tests if the exposure is the test that you're using. And then the co-intervention, and then the O? Yeah, the outcome, the outcome that you're looking for, okay? This is a great thing to know. I know the residents at our place are sick of me saying this, but if you do this for any question, it focuses you really nicely. It's a nice shortcut for discussing problems. It helps with your literature search. It's actually very cool, too, if you're analyzing other people's articles, because if you can frame their question, whether it's a research trial or a manuscript, in a PICO format, then you're on the right track. And if you can't, then there's something wrong with the article. That's a red flag that there's going to be a problem, that the article is going to come up short, that their reasoning is not good. So anyway, PICO. And in this particular case, would you agree that we could formulate it something like this: in patients with Bell's palsy, does the early use of steroids, compared to not using steroids, improve some sort of facial recovery? Does that sound reasonable? And people will often add, should often add, a time component to the outcome, you know, when does the outcome occur? So some people will call this a PICOT format, but it sounds too French to me, and it's a language that I could never master, so I don't do it. But anyway, these appeals to rhetoric, this standard of care business and this business about the consequences being so devastating: this is an argument that bad outcomes are bad, which is true, but it begs the question of whether steroids improve outcomes. Just because everybody is doing it, just because the consequences of a Bell's palsy are potentially bad, doesn't mean the treatment works, and so it shouldn't mean that you do that. These are kind of irrelevant arguments, and there are a bunch of others. I mean, you hear this all the time, right: if I don't give steroids and the patient has a bad outcome, I'll be sued. Or how often do people, unfortunately, resort to this: I get reimbursed for the intervention, and so I'm going to do it. So none of these are relevant to the question of whether steroids work for Bell's palsy, and they're all examples of this kind of fallacious reasoning. Now we want to move on to reasoned, logical reasoning, okay, the evidence-based medicine approach, which is relevant to the question, okay, and it's not emotion-based but logic-driven and data-driven, and it doesn't seek to persuade; it seeks to find the truth. I think, probably, someone told me that I should update this slide.
Does anybody know the relevance of the picture in this slide, or should I, I probably should, update this? I think it's getting dated. So let's talk just for a few minutes about the kind of reasoning that evidence-based medicine demands, this deduction and induction, and getting beyond this kind of argument that you hear all the time about, well, that's how people did it where I trained, right, which sometimes works, but it assumes that your professors were competent and that medical science hasn't changed in the time since you finished training. We want to get beyond that and find a more secure pillar to support our decision, and one of those is reasoning from principles, okay, and you hear this all the time; you actually hear this as an argument against evidence-based medicine. We do this in neurosurgery and neurology all the time. Here's an example of reasoning from neuroanatomical principles, right? The right side of the brain, by and large, controls the left side of the body, and my patient can't control the left side of his body, and so he has a problem on the right side of his brain. This is deductive reasoning from really solid neuroanatomical principles, okay, and we can probably apply this to the question of Bell's palsy, or maybe. We know from autopsies of individuals with acute Bell's palsy who happen to have died, you know, gotten hit by a car or something, that the nerve itself is swollen. We know the course of the nerve through the temporal bone exposes it to compression, and that the nerve itself suffers demyelination and sometimes axonal loss. And we also know that steroids reduce swelling, so perhaps steroids will reduce compression of the seventh nerve within the temporal bone, and so we should use them. Is that good reasoning, logical reasoning? Do you like that reasoning? Yeah, I mean, it's at least logical. It's transparent and explicit, but maybe not quite enough to support the decision. You could say, for example, that there's also some evidence that Bell's palsy is caused by a viral infection. It's related to herpes virus, and we know that steroids are immunosuppressive, so maybe that would be a bad thing to do. So it doesn't quite get us as far as we need to go. As an aside, you hear this argument a lot against evidence-based medicine: people will say, for example, that the practitioner of evidence-based medicine would insist on a randomized controlled trial to decide whether or not to jump from a plane with a parachute. There's a fairly famous article with this title, but this is not actually what evidence-based medicine says. We have fairly solid principles. We know about gravity. We know about aerodynamics, and we have vast experience to suggest that you don't need a trial, that you should use a parachute if you're going to jump from an airplane, or you should just not jump; insisting on a trial is not what evidence-based medicine tells you to do. Sometimes experience and principles sort of tell us what to do, as with jumping from an airplane. Sometimes that also is not enough, and you sort of have to go a little bit further. So let's think about how to get there using this kind of reasoning. So one next step when principles don't support an action is to use our experience. So you could perhaps think back to, well, the last three patients that I had with Bell's palsy and what happened to them. So John had a Bell's palsy, I treated him with steroids, he got better. Same thing with Sue: Bell's palsy, treated her with steroids, she got better.
And Bob didn't get steroids, had a Bell's palsy, and was left with disfiguring facial problems. So now with our new patient, I'm going to give that patient steroids. Is that good reasoning? Again, it's logical, it's transparent, it's explicit. What's the problem with this kind of reasoning? There are at least two real threats to its reliability. What problems do you see with this kind of reasoning? The suggestion was chance, random error. We just sampled three people, and this is what we got. Really good. And one other more insidious type of threat to this kind of reasoning: your memory is unreliable, it's selective. You tend to remember your most recent cases, your most outstanding cases, or the cases that conform to the idea that you've already had. So what do we call that in evidence-based terms? It's bias, yeah. So how do you fix those problems? Even though this is reasoned, what do you do to get around that? How do you fix the concern about chance? Yeah. So if three cases point you in a direction, a whole pile of cases, a whole experience, which is the same thing really as evidence, more is going to be better. So let's sort of look into that a little bit and talk about that, and more generally, talk about what makes evidence more reliable. So there's this concept that there's a hierarchy of evidence, and we'll come back and talk about that a lot during the next two days. But there's a spectrum. There's strong evidence and weak evidence. It's not right, probably, and we shouldn't in this course anyway, to talk about evidence being not valid. It's sort of deprecating. I'm sure you all know from your own published work that it's hard to publish papers, and it's not right to say that your work is not valid. And these inferences are never certain anyway, but sometimes we're more or less confident, along that spectrum, in the evidence that's presented. And we've just talked about what makes sort of strong and weak evidence. So what do you think about our inference from our three patients about what to do? Is that going to be strong evidence or weak evidence? Yeah, fairly weak, I would say. And the reasons, again, for being weak? Too few cases, so random error. And this problem about selective recall. And to some extent, experts give us sort of a better bet. They have more cases in their head, and they are better at guessing, at reasoning. But they're not immune to this problem. And so this kind of evidence is relatively weak. So formally recording a bigger number of cases seems like a good solution. So how would you go about doing that? What would be an easy way to get more evidence? Go back through the data, review it, and see what the ratio is for Bell's palsy. Sure, sure. So go back to your institution, like we did, over the last 10 years, and look at all of the cases of Bell's palsy. You come up with this big number in a decade. And now you've created a problem for yourself. Because when you had four cases, it was easy just to list them. But what do you do now when you've got 300 and some? That's not going to be a useful, effective way. And so now we're going to introduce the quantitative aspects of biostatistics. And I promise you that if you can count, then you can do this. We're going to really minimize the mathematics a lot. And I feel like I can do that, because that's sort of what I did in graduate school; I feel like I can now dismiss it. But we're going to talk about this two-by-two table. So here are the 319 patients.
And all I've done here is just sort of listed the outcomes. So I've listed the good outcomes, 239, and listed the bad outcomes. So we sort of have an outcome table. And I've just percentaged it, so you sort of get a sense of how people did as a whole. This outcome is, in statistics, referred to as a variable, something that can assume more than one value. And in this case, it can assume two values, good or bad. Does that seem reasonable? And now I've listed the other component of our data. This is whether or not patients received steroids. So this is another variable. It also has two possibilities, yes or no. Steroids or no steroids. What do you want to do from here? What are we trying to do? So this is describing all of our data. But that really hasn't gotten us the answer that we want. What is it that you're looking to do? Yeah. Yeah, we're looking to see an association or a relationship. And how are you going to do that? We want to know whether steroids influence the likelihood of improving. And that's where this two-by-two table comes in. And it's worth your time, a little bit of time, to understand two-by-two tables. Because once you do, then really almost everything else in evidence-based medicine follows. So all I've done is put the numbers in, with the total column. The rows are the same numbers we had, steroids or no steroids; in the columns, the same numbers we saw originally, good outcome or bad outcome. And I've just filled in the boxes. So is this an effective way of conveying that information? Well, certainly a reasonable way. But how often have you read a paper where you've seen this? Can you recall a paper where you saw a two-by-two table in the manuscript? Almost never. It's cumbersome. What you'd really like is what? So you've got the data displayed. So you'd like a single number that conveys the association. So how are we going to do that? I'm going to just percentage them again. So for this set of data, and this is now, the 319 is true, the numbers are made up, what would you say about the effect of corticosteroids? Is there an association between steroids and outcome in patients with Bell's palsy according to these data? No. This is the exact set of data for no benefit, no change, no difference. Same outcome, whether or not you had steroids. But let me change the results a little bit, the actual data. And let's pursue that a little bit. So here are the numbers. And now I've percentaged them out a little bit. So now what do you think? So now steroids look like they're maybe useful. But again, we're still stuck with this two-by-two table. So what do you do to get this into a more concise expression of association? Now, there are statisticians, many statisticians, who have spent their entire careers figuring out ways to do that. But here's what people do typically. So I've just replaced the numbers for a minute with letters. It just helps us to kind of generalize. And there are a handful of ways to do that. One of them is this thing called an odds ratio. So this is looking at the odds of a good outcome, which is A over B for the steroids, compared to the odds of a good outcome with no steroids, which is C over D. So it's an odds ratio. That's one way. We can do a risk ratio. Some people call it a relative risk. And this is looking not at odds, but at risk. So the risk of a good outcome in people with steroids is A, the good outcomes, over the entire group, A plus B. And then to get the ratio, you look at the risk in the group of patients who didn't get steroids, C over C plus D. So a risk ratio.
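To make those two measures concrete, here is a minimal sketch in Python. It is not part of the talk, and the cell counts are invented for illustration only (chosen to roughly match the proportions quoted later): A and B are the good and bad outcomes with steroids, C and D the good and bad outcomes without.

```python
# Illustrative two-by-two table (hypothetical counts, not the study's data)
a, b = 180, 20   # steroids:    good outcome, bad outcome
c, d = 70, 49    # no steroids: good outcome, bad outcome

odds_ratio = (a / b) / (c / d)         # odds of a good outcome, steroids vs. no steroids
risk_with = a / (a + b)                # risk (proportion) of a good outcome with steroids
risk_without = c / (c + d)             # risk of a good outcome without steroids
risk_ratio = risk_with / risk_without  # relative risk

print(f"Odds ratio: {odds_ratio:.2f}")
print(f"Risk ratio: {risk_ratio:.2f} ({risk_with:.0%} vs {risk_without:.0%})")
```

With these made-up counts the risk ratio comes out near the 1.5 discussed below, while the odds ratio is much larger; when the outcome is common, the two measures can differ a lot, which is one reason not to use them interchangeably.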
And one other even simpler way, one we haven't done yet, there are lots of ways, but one other really common way: instead of dividing, you can just subtract. Exactly, you can do a risk difference. You can look at the risk of a good outcome in the steroid group, minus the risk of a good outcome in the non-steroid group, and you get a risk difference. We'll talk a lot about this, but why might you prefer that risk difference, besides the fact that it's easy? What mathematical property does a risk difference have that might be useful to you? If you do the inverse, one over the risk difference, you get a thing called a number needed to treat. And we'll talk about that in a while. Now, why don't we just settle on the risk ratio, for no particular reason. If you do that, and we've percentaged these, the relative risk is 90% over 59%, or about 1.5. So what does that tell you if you put that in words? So now we've gone from 319 individual cases to a single number. What does that tell you, that number 1.5? OK, well, this is the only talk where I'm going to do some of the answers. So one way of interpreting that is that if you have a Bell's palsy, and you're given steroids, you're one and a half times as likely to have a good outcome as if you didn't receive steroids. That's with a 1.5, or you're 50% more likely. Does that sound reasonable? Good. So are you done? Is that the end of the, well, it's not the end of the talk, so what do you think? What's missing now? Do you know now whether you should use corticosteroids for patients with Bell's palsy? Are you persuaded? Why not? I see heads shaking no, so why not? This seems pretty good; one and a half times as likely? Seems like a slam dunk. You need to know the statistical significance. Well, let's look at that, because that's a really sophisticated response. So let's look at all of those concerns about whether this is reliable. You're suggesting that we've got a problem potentially with random error, which I agree with, and so we'll take a look at that. And we've mentioned risk of bias. It's bias and confounding, so we're going to explore those really briefly, and start with bias. Some people call this systematic error, but what it really is, is this tendency for a study to give you incorrect results, to sort of move the risk ratio, in this case, in one direction or the other. So bias can go in either direction, but it goes in one or the other. So what I've done here: the fat arrow, the bottom arrow there, is our 1.5. That's what we've measured. That's the result of our study. And I've just, for the purpose of this problem, assumed that there's no difference, that the truth is that the ratio is 1, that steroids have no effect. So in this case, the bias has sort of distorted the apparent association in the positive direction. How do you measure bias? It's kind of a trick question. How do you measure bias? I'm sorry? Yeah, yeah, you definitely can grade it, and that's what we do. But that's sort of semi-quantitative, right? We can't actually measure bias yet. We're trying to figure out how to do that. Because to actually measure bias, you'd have to know the truth. And we don't generally know the truth when we're asking these questions. So you can't measure bias. You can only get a sense of it in a semi-quantitative way. And most societies, including the AANS, measure bias by assigning a class of evidence. And we'll be doing a lot with that later on. But it's a semi-quantitative problem. And it's why we argue so much about bias.
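Picking up the risk difference and number needed to treat mentioned above, here is the same kind of minimal sketch, continued with the same hypothetical counts (again, not the study's actual numbers):

```python
# Risk difference and number needed to treat (NNT), same hypothetical counts as before
a, b = 180, 20   # steroids:    good outcome, bad outcome
c, d = 70, 49    # no steroids: good outcome, bad outcome

risk_difference = a / (a + b) - c / (c + d)  # absolute difference in the chance of a good outcome
nnt = 1 / risk_difference                    # patients treated per one additional good outcome

print(f"Risk difference: {risk_difference:.0%}")
print(f"Number needed to treat: {nnt:.1f}")
```

The useful property hinted at in the talk is exactly this: the reciprocal of the risk difference is the number needed to treat, roughly three patients per extra good outcome with these invented numbers.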
People don't argue about random error very much. It's important, and they look at it. But in the end, you get a number. With bias, you don't get a number. And so we argue about that quite a bit. So how do you get rid of bias? What I'm representing now, as we discuss this, is this group of patients with Bell's palsy. I couldn't figure out a way to do 319 dots. But that's representative. How are we going to get rid of bias? What kind of bias are you worried about? Before we get to random error, what kind of bias are you worried about? Is that a core patient? Is that a building patient? Yeah. Is that a building patient? Sure. They knew, for example, what the patient was getting. And they had to look back in the record. The way we did this study that we're describing is that they looked back through records, and the records are incomplete. And it's possible that you said to yourself, well, I'm not sure if that patient got better, but they got steroids, so they probably did. What kind of bias is that, by the way? It has a name; it's a common one. A lot of people call it misclassification bias. A patient is misclassified as getting better when, in fact, they didn't. So what other kinds of problems, what other kinds of biases, are you likely to see in a study like this, where you look back at 319 cases in the medical records from your institution? How about if I told you that more people in the group that did not receive steroids had diabetes, had hypertension, and were elderly? You wouldn't be surprised, right? You try and avoid giving steroids to diabetics. So it wouldn't surprise you that there were more of them in that group. Yeah, because you also know that those are three risk factors for bad outcomes in Bell's palsy: hypertension, diabetes, and advanced age. What's that called? Yeah, confounding, right? Another type of bias where there's a mixing of effects, where there's something that's related to the exposure and related to the outcome, but it's not directly on that line. So diabetes, right? It's related to getting steroids; you're less likely to give someone who's diabetic steroids. It's related to the outcome; it makes it worse. It's a confounder. How do you get rid of confounding? So our best way is to randomize, right? Randomization, if it works, will get rid of the confounders that you know. And importantly, it'll also get rid of the confounders that you don't know, right? Because there may be some risk factors for bad outcomes in Bell's palsy that we're unaware of, but randomization deals with that. So that's what this is supposed to show, OK? So we randomize people to steroids or no steroids, and then we get this sort of outcome. And how do you get rid of that misclassification bias that you mentioned? Again, let's assume that investigators are not trying to lie to us. It's sort of human nature, physician nature, to want your intervention to work. And so there can be a spin on the data, a bias on the data, but we're not trying to lie to our colleagues. So how do you address that problem? Yeah, so those are two distinct and really important points. So some outcome measures are so objective, not in this case, but a really objective outcome measure that you'd be confident in even without masking: yeah, so death, survival, that's a good one, blood type, some laboratory value that's not subject to interpretation. Or you mask the assessment. So now the investigator doesn't know what the treatment assignment was, and so this misclassification bias can't occur, OK?
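As a hedged illustration of that point (not from the talk), here is a small Python simulation: assign a made-up cohort of 319 patients to steroids or no steroids by coin flip and check that a confounder like diabetes ends up roughly balanced between the arms, without anyone having to know in advance that it was a confounder.

```python
# Toy simulation of why randomization handles confounders, known or unknown.
# All numbers are invented for illustration; nothing here comes from the study.
import random

random.seed(1)
patients = [{"diabetes": random.random() < 0.25} for _ in range(319)]  # assume ~25% are diabetic

for p in patients:
    p["steroids"] = random.random() < 0.5  # coin-flip treatment assignment

for arm, label in ((True, "steroids"), (False, "no steroids")):
    group = [p for p in patients if p["steroids"] == arm]
    diabetic = sum(p["diabetes"] for p in group) / len(group)
    print(f"{label}: n = {len(group)}, diabetic = {diabetic:.0%}")
```

The proportions won't be identical in any single run, which is part of why trials still report baseline tables, but on average randomization removes the systematic imbalance that drives confounding.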
So we talked about the confounding part and the misclassification and how to fix them, OK? So now we've randomized and we've masked the assessment. So that's why randomized controlled trials stand so high on that hierarchy of evidence, and it's why I think, if you thought about this particular study and applied sort of a standard classification of evidence, you'd say the evidence is kind of weak. It's a little better than a handful of cases, right, a case series, but it's still weak evidence. Does that seem fair to say? And now, real briefly, we'll come to that question that you mentioned about statistical significance, about the role of chance. Now, where bias distorts an apparent association in one direction, random error can go either way. It's random, right? So you're all familiar with this. If you, for example, had a coin, an honest coin that's equally likely to land on heads or tails, and you flipped it an infinite number of times, you would expect to get half heads and half tails. How surprised would you be if you flipped it four times and got four heads? Not that unexpected, right? And later on, we'll actually calculate how unexpected that is, but it's not that hard to believe. And it might lead you to suggest that the coin wasn't honest, that it was weighted in order to give you heads. How do you deal with that random error issue? It's probably not a good slide on which to ask the question. So, right, p-values and confidence intervals, and we'll talk a lot about these. What happens if I told you, because it's true, that the confidence interval for the risk ratio was that, 1.3 to 1.8? How would you interpret that? There are lots of ways, but what's a good practical way to interpret that interval? And I'm going to suggest that confidence intervals actually provide a lot more information than p-values. Okay, so this is a 95% confidence interval. Tell me what that says to you. If you had a p-value, would it be greater than or less than 0.05? Good, less than, because the numbers are all greater than one. And what exactly does it say? So a statistician might glare at you, but you can just tell them that we're really Bayesian, and so that's okay to say. Another way that they might prefer is that if you did this exact same study a hundred times, about 95 of the intervals you calculated would contain the true risk ratio. Another important way to interpret this is that although your best guess is that the risk ratio is 1.5, the study is equally consistent, equally consistent, with a benefit of 30%, 1.3, or a benefit of 80%, 1.8, and you can't tell the difference. You can't argue for one or the other. The study is equally consistent with anything in that range. So that's another important way, and we'll talk more about that. More cases fix that, as we talked about. So now the last little topic on this. So we've looked at magnitude of effect and quality of evidence, and we've decided this is relatively weak evidence. Are you ready to make a treatment decision? Because you've got a patient now, and you can't tell patients, come back in five years when we've done the definitive studies. So what are you going to do? Is this persuasive enough? Are the magnitude of the effect and the strength of the evidence good enough? What's the last pillar in that column to support your decision? Yeah, I think I heard what you said. So when evidence in print and reasoning, deducing from principles, are not enough, and this is really critical, because evidence-based medicine is not evidence-only medicine, right?
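(A brief aside before the last step on judgment: one standard, large-sample way to put a 95% confidence interval around a risk ratio is to work on the log scale and then exponentiate. The sketch below reuses the same invented counts as the earlier examples, so the interval it prints only happens to resemble the 1.3 to 1.8 quoted in the talk; it illustrates the method, it does not reconstruct the study.)

```python
# Approximate 95% confidence interval for a risk ratio (log-scale method),
# using the same hypothetical two-by-two counts as in the earlier sketches.
import math

a, b = 180, 20   # steroids:    good outcome, bad outcome
c, d = 70, 49    # no steroids: good outcome, bad outcome

rr = (a / (a + b)) / (c / (c + d))
se_log_rr = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))  # standard error of ln(RR)
lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
upper = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"Risk ratio {rr:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
```

Because the whole interval sits above 1, the corresponding p-value would be below 0.05, which is the point made in the talk about reading significance off a confidence interval.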
Always, always there's this last step, which is judgment and experience and patient preference and values, okay? So that's it, that's judgment, or some people call it intuition, and it's what good clinicians do all the time. They do it, though, in an implicit way, and it's really hard to make it explicit. Okay, we'll try, because a few features of judgment are kind of obvious, and experts are better than others at doing it, okay? So you weigh the particular characteristics of your patient and the preferences of your patient, and also recognize that sometimes the evidence just doesn't apply to your patient. A brittle diabetic, for example, who might otherwise benefit from corticosteroids might not be an appropriate patient. That's the role of judgment, okay? So the really important thing here is that this is not evidence-only medicine, okay? No one is advocating for eliminating judgment and experience and experts, or for not weighing the risks and benefits in the individual patient. You have to incorporate that to make a good decision. So anyway, to finish up, the sort of three pillars that support a clinical decision are principles, evidence, and judgment. Evidence, which we'll spend most of our time talking about in the next few days, sort of ranges from strong to weak, or reliable to not very reliable. The quantitative aspects of evidence-based medicine you can do if you can count, and really, we'll show you that. And then over and over again, we'll talk about these three sources of error, okay? Bias, confounding, and chance. So here's sort of the whole process that we'll just recapitulate again and again: asking a focused question, gathering the evidence, interpreting the evidence, both in terms of its strength and, quantitatively, the magnitude of effect, and then using our judgment to make a final decision, okay? And so that's sort of what, in my mind, this final metaphor looks like. And now, for all the rest of the course, you've sort of got the understanding, we're going to spend our time providing you with the algorithms to do this stuff, okay? So that's going to be the entire rest of the course. This is my algorithm for staying employed. I shouldn't have shown it when my boss is watching, but it's helped me to avoid a lot of bad things. So good. So that's sort of the introduction. And you only have to bear with me one more time, okay? I'm going to do one more talk, and then I'm going to disappear.
Video Summary
In the video, the speaker introduces a course he characterizes as bedside evidence-based medicine, emphasizing relevance to clinical care and practical biostatistics. The speaker stresses the importance of looking at recent literature and being aware of the ways studies can mislead. The course will include didactic talks and hands-on activities. The speaker then gives an example involving thermal expansion, explaining that when metals are heated, their atoms move more and get farther away from each other, and discusses its relevance to day-to-day life, such as gaps in train tracks and suspension bridges. The audience is asked what happens to the diameter of a circular hole in a rectangular metal plate when the plate is uniformly heated, and the speaker uses the discussion to emphasize commitment, reasoning, and engagement in the learning process. The video then transitions to evidence-based medicine. The speaker introduces the PICO format for asking clinical questions and discusses reasoning from principles, experience, and evidence. The speaker explains the concept of a hierarchy of evidence and the roles of bias, confounding, and chance in research studies. The video concludes with a discussion of the role of judgment and experience in making clinical decisions and provides an overview of the course content.
Asset Subtitle
Presented by Michael J. Glantz, MD
Keywords
bedside evidence-based medicine
clinical care
practical biostatistics
thermal expansion
day-to-day life
PICO format
hierarchy of evidence
clinical decisions