As a psychology teacher, I regularly discuss the elements that make up a good scientific study. Things like double-blind designs, randomization, and reliability are crucial to a study's usefulness and interpretation. One of the things I notice is how often my students' eyes glaze over when I start talking about them.
So it occurred to me: how many of us really know how to read scientific studies? In today's world, we use them to prove everything from why you should eat meat…to why you shouldn't. Every doctor, scientist, and marketer has a study on hand that proves what they are trying to sell or teach. So how do you know what's real and what is biased? If you find the study they reference, how do you know whether it has limitations or flaws?
Find the original study. What was its intent? What were the goals of the researchers, and where was the study published? What were the limitations or possible flaws of the study? Often, when people use a study as the basis of their argument, they may be summarizing the results in a way that supports their points, or pulling a finding out of the context of the larger conclusion.
Take a moment to understand the terms. We have all heard words like placebo, randomization, and double-blind before, and we often think we know what they mean. But if you had to come up with a definition, say when your ten-year-old asks, how would you put it? Let's start with a scenario:
Imagine you wanted to conduct an experiment on whether chocolate improves the mood of normal people.
- First, you would gather a representative sample of people. This means that these people fairly represent the population in terms of diversity, economics, etc. If you only got oompa-loompas, they wouldn't be representative of human beings across the spectrum. You would then test them to ensure that they fall within normal, healthy guidelines. Then you would begin your experiment.
- An experiment is simply a controlled environment in which a hypothesis is tested to see whether the evidence supports it.
- The experimental group is the group of people or animals that will receive the situation or substance being tested. So in our chocolate study, this is the group eating chocolate.
- A control group is the counterpart of the experimental group: it does not receive the substance or situation being tested. So this group would not receive any chocolate.
- Every study has an independent variable – the substance or element being tested, and a dependent variable – the outcome as a result of the application of the independent variable on the experimental group. For our purposes, the chocolate is the independent variable, and the participants’ moods as a result of the chocolate is our dependent variable.
- A blind study means that no one participating in the study knows whether they are in the experimental or control group, so their expectations can't throw off the results (because let's face it: when we eat chocolate, we're convinced life is getting better, right?)
- A double-blind study means that neither the participants nor the researchers running the study know who is in which group. This ensures that the researcher's bias can't affect the outcome of the study.
- A placebo is often used during drug studies, where one group – the experimental group – receives the drug being tested, and the control group receives what they are told is the drug as well, but is actually a fake pill, often made of sugar. So in our study, we would give the control group something that looked and tasted like chocolate, but really wasn’t (imagine such a thing!)
- A randomized trial means that our “normal, healthy” people were randomly assigned to the experimental and control groups.
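If it helps to see the group-assignment ideas above concretely, here is a minimal sketch in Python. Everything in it is invented for illustration (the participant names, the group sizes); it simply shows what "randomly assigned" means in practice: a shuffle decides who gets chocolate and who gets the placebo, not the researcher.

```python
import random

# Hypothetical participant pool; names are made up for illustration.
participants = [f"person_{i}" for i in range(20)]

random.shuffle(participants)  # randomization: chance, not the researcher, decides group membership
midpoint = len(participants) // 2
experimental_group = participants[:midpoint]  # these people eat real chocolate
control_group = participants[midpoint:]       # these get the fake "chocolate" placebo

print(len(experimental_group), len(control_group))  # prints: 10 10
```

Because neither the participants nor anyone handing out the treats can predict the shuffle, the same idea underlies blinding: the assignment list can be kept hidden from everyone until the study is over.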
Always be a skeptic. We know that advertisers are trying to get us to buy their product or service. So when they reference a study or a trial, ask for a copy of it and read it with a critical eye. Did they mention the fact that all of their participants were an odd shade of orange with green hair?
Never confuse correlational studies with experimental studies. Correlations are just that: data that rises or falls along with other data, in a positive or negative direction, without proving that one thing causes the other. For example, when I was in school getting my degree in psychology, the famous example we used was that when the consumption of ice cream goes up, so does the incidence of rape. Does that mean that ice cream causes people to commit sexual assault more often? Of course not! Both simply rise in warm weather; temperature is the hidden third variable. But that is the danger when we rely on correlational information: just because two things follow the same pattern does not mean one causes the other.
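To make the third-variable point concrete, here is a toy simulation in Python. All of the numbers are invented, and I've swapped in a tamer second series (sunburn cases), but the logic is the same: temperature drives both series, and they end up strongly correlated with each other even though neither causes the other.

```python
import random
import statistics

random.seed(0)

# Simulated daily data: temperature is the hidden "third variable"
# driving both series. Every number here is made up for illustration.
temps = [random.uniform(0, 35) for _ in range(200)]
ice_cream_sales = [t * 2.0 + random.gauss(0, 3) for t in temps]
sunburn_cases = [t * 0.5 + random.gauss(0, 2) for t in temps]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, computed by hand."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(ice_cream_sales, sunburn_cases)
print(round(r, 2))  # strongly positive, even though neither series causes the other
```

The correlation comes out well above zero despite there being no causal link between the two series; remove the shared temperature term and the correlation collapses. That is exactly the trap the ice-cream example warns about.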
If the claim sounds too good to be true, get the research and evaluate it carefully. Studies can be overwhelming to read at first, so focus on the conclusions, discussion, and limitations at the end of the published article; these sections will often clear up any confusion or questionable logic.
Have you ever looked up a study to find out if the claims were true? What was the result?