Science of Neurosurgical Practice
Appreciating Confounding - A Demonstration and Discussion
Video Transcription
I just want to review a little bit of what we did yesterday and then move on to the next set of sessions. We talked about different types of trials, and we discussed why randomized controlled trials are the gold standard for establishing practice. We also had some discussion of alternatives to randomized controlled trials. So now we're going to do a little bit of hypothesis generating and discuss how we weigh the evidence. I've developed a hypothesis, and Dr. Zalatimo and I are going to test it. We have to record our data, so we're doing this all above board. Oh, the other one. A dud. Okay. So there's our data. Conclusion? [Inaudible aside.] What problem is this illustrating in the interpretation of clinical trials?

Yeah, this is confounding, right? It has nothing to do with the color of the plate, but with the composition. The color is associated with the composition, and the composition is associated with the outcome of breaking, but the color is not on that causal pathway. That's the problem, and it's worth pointing out that it isn't limited to the interpretation of scientific studies. I heard a news report during the last election that said that no incumbent president had ever won re-election without winning the state of Ohio, and I thought, that's ridiculous. So I decided that I would look at correlations like that. The ones I'm going to show you are all like this; I didn't write in the p-values, just the R values, and we'll talk about that at a later session. I decided that by examining divorce and marriage rates in the various states of the United States, you could tell a lot about all sorts of other characteristics of life in this country. Divorce rate in Mississippi and murders by bodily force. Marriage rate in Alabama and people electrocuted by power lines. Really good R values there. Divorce rate in Maine and per capita consumption of margarine. Omar and I were talking about margarine recently. Marriage rate in Kentucky and people who drowned falling out of a fishing boat. What do you think about these? This is the kind of thing you pick up out of the newspapers all the time. What are the potential explanations for these besides a reliable correlation? Yeah, you have no idea how many of these I went through to get these graphs. [A short sketch of how easily impressive correlations like these turn up by chance appears after the transcript.]

How about this one? Also ridiculous on the surface. Could you develop a hypothesis for this? No? If you were interested in this area of investigation, would you consider taking a further look, rather than just my random gathering of data off the Internet?

How about this? I saw this in the airport. What do you think about that? Using age to screen people for hepatitis C. Does date of birth have any causal relationship to the disease? And yet this is a really useful public health strategy. It has gotten people treated, and it has reduced the frequency of hepatitis C. Even though this is a confounder, an example of confounding, public health people like to call it a surrogate, which is sort of a nicer way of putting it. This is a useful confounder. In one sense it befuddles an association, because year of birth has nothing causally to do with contracting hepatitis C. In another sense, it's a really important public health surrogate for screening people.

Is this a real difference? How do you know this isn't just due to chance? What test characteristic would you want to know to help you decide whether this is random error? How would you calculate a p-value for this? What statistical test would you use? Sure, the numbers are big.
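[Editor's note: the slide with the actual counts is not reproduced in the transcript, so the two-by-two table below is hypothetical, chosen only to show the mechanics of the two tests raised here and in the next passage: the chi-squared test when counts are large, and the Fisher exact test when a cell is small, like the "four women" cell mentioned below. A minimal sketch in Python, assuming scipy is available:]

```python
# Hypothetical two-by-two table (the real slide counts are not in the
# transcript): rows are women/men, columns are in-specialty/not.
from scipy.stats import chi2_contingency, fisher_exact

table = [[4, 96],      # women: hypothetical counts
         [60, 140]]    # men: hypothetical counts

# Chi-squared test: appropriate when expected cell counts are large.
chi2, p_chi2, dof, expected = chi2_contingency(table)
print(f"chi-squared: statistic={chi2:.2f}, p={p_chi2:.4g}")
print("expected counts under independence:", expected.round(1))

# Fisher exact test: preferred when a cell count is small, like the
# "four women" cell discussed in the talk.
odds_ratio, p_fisher = fisher_exact(table)
print(f"Fisher exact: odds ratio={odds_ratio:.2f}, p={p_fisher:.4g}")
```

[When every cell is comfortably large, as the speaker notes ("sure, the numbers are big"), the two tests give essentially the same answer; the choice only matters when a cell is small.]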
Someone might say that with four women in one of the boxes, you might want to use something more appropriate for small samples, so a Fisher exact test, or one of those. What assumption would you have to make if you want to fill out a two-by-two table and you only have three of the four numbers? Why not just assume that they're half and half? If you do that, what do you probably get? What's your interpretation of that? Yeah, that this is not a random finding.

How about this? The latest data we have is from 2012. These are current medical students in 2012 by gender. Is there a difference? A significant difference? How would you tell? The same way? Well, it's certainly statistically significant; that did not happen by chance. You have to decide whether it's clinically significant. But just looking doesn't help you, right? A lot of people shook their heads no. Do you see the same thing?

This was in the New York Times last week. But if you actually look at the data, some of it was significant and some of it wasn't; they really misinterpreted a lot of that. We heard a little bit from Dr. Nanda, I think, about the autism controversy. This stuff is persuading the public in important ways about their health.

How about this? You don't hear it quite as much anymore, but when they used to talk about climate change on the news, you would get a person talking about the evidence in favor of climate change and then another person, with equal time, talking about why that was not scientifically sound. What's the evidence-based fallacy of that practice, this sort of equal-time fallacy on the news? It's not a good evidence-based practice, because it's not the same evidence behind each view. Yeah, they don't deserve equal time: there's scientific support for one view and not for the other, yet they get equal time. It's sort of analogous to classifying the evidence. We don't give case reports equal time with randomized controlled trials.

I have a friend who is now a director at the ACLU, and she told me that one of their big 2014 projects is going to involve police lineups. She said that currently, in small jurisdictions, they do the lineups sort of like you see on TV, where there's a one-way mirror and people look through it and choose, and in a lot of larger jurisdictions they use photo arrays. Analyze that practice for me in an evidence-based way. What are the potential threats to reliability? As I say, this is going to be number two on the ACLU's list to change this year. What are the potential problems? Both the police officer and the victim know that the suspect is within the photo array, and the police officer knows who the suspect is. Does that help? What evidence-based terms would you apply to explain some of the problems with that? Certainly potential bias: there's no masked assessment, and there's no allocation concealment. You know that the suspect is in that group, right? It would be more appropriate to have an array in which the suspect may or may not be present. So there are huge problems with that. Anyway, that was just to make the point that this stuff applies to everyday life. So I'm going to move on now to some other myths and see what we can find.
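[Editor's note, the sketch promised above: the speaker's aside about the divorce-and-margarine graphs, "you have no idea how many of these I went through to get these graphs," is the statistical heart of the matter. A minimal sketch, assuming Python with numpy and scipy, of how scanning many unrelated series manufactures impressive R values from pure noise:]

```python
# Scan many unrelated random series against a target series and keep
# the best |R|. Every series here is independent noise, standing in
# for things like "divorce rate in Maine" and "margarine consumption".
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_years = 10           # a short series, like a decade of annual data
n_candidates = 500     # how many unrelated series we try

target = rng.normal(size=n_years)            # the series to "explain"
best_r, best_p = 0.0, 1.0
for _ in range(n_candidates):
    candidate = rng.normal(size=n_years)     # an unrelated series
    r, p = pearsonr(target, candidate)
    if abs(r) > abs(best_r):
        best_r, best_p = r, p

print(f"best R = {best_r:.2f}, nominal p = {best_p:.3f}")
```

[The nominal p-value printed here ignores the 500 comparisons that produced it; once the search is accounted for, "really good R values" are exactly what chance predicts.]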
Video Summary
In this video, the speaker reviews the previous session on types of trials and the role of randomized controlled trials as the gold standard for establishing best practice. The speaker then demonstrates confounding, using examples such as spurious correlations between divorce rates and margarine consumption and the use of age as a surrogate in hepatitis C screening, and touches on statistical tests, p-values, statistical versus clinical significance, and the misinterpretation of data in the news. The video concludes with a discussion of the evidence-based fallacy of giving equal time to opposing views without weighing the strength of the scientific evidence, illustrated by climate change coverage and police lineup practices.
Asset Subtitle
Presented by Michael J. Glantz, MD
Keywords
randomized controlled trials
confounding
correlations
statistical tests
evidence-based fallacy