kencko | How to read scientific research – and understand it



Want to dig deeper into the science behind nutrition and health? With these five simple tools, you’ll be able to navigate the most daunting research reports with ease.

Understanding the latest advice on healthy living takes some work – but if you’re like us, you’re willing to do it. You read the stories and follow the feeds. You’re not fooled by the fakes and fads. But there’s still so much misinformation out there that it makes sense to dig right down to the source: the science itself.

Reading scientific papers and academic research can be a minefield for us laypeople. We don’t have the training, let alone the time, to find our way through this stuff on our own.

At kencko, we’re lucky to have experts on staff – like Registered Dietitian Carolina Schneider – to help us navigate such things. That means she’s here to help you, too. Here are Carolina’s five essential tips for reading scientific research: where to start, what to believe, and how to use it.

1. Check your sources

Most scientific articles are published in an academic journal affiliated with a larger health organization or institution. Before you begin reading a study, check its credentials. An example of a credible and reputable journal is the Journal of Nutrition, the official journal of the American Society for Nutrition. It’s also important to note who the authors are – are they practicing doctors, Ph.D. students, professors, or other scientists? – and their expertise on the relevant topic.

2. Know the structure

Most articles are organized into the same five sections. Knowing what to look for in each can help you be more effective when reading studies.

Abstract: provides an overview and summary of the study, including the research question, the methods, the results and the conclusions.


Introduction: includes the research question the study is aiming to answer and describes how this question fits into current science. This section includes a literature review – an overview of other relevant studies on the topic – and introduces the topic to the reader.

Methods/Study Design: provides the methodology of the study and how it was conducted. It includes a description of the participants/subjects, the design of the study, and any processes or materials used to conduct it. Pay attention to the ‘n’ number – this is the sample size, the number of participants or observations in the study. Usually, the larger the sample, the more precise the results, and the more confidently you can draw conclusions.

Results: describes the objective data that was collected and the results of any statistical tests. This section tells you what the study found. Keep an eye on the ‘p-value,’ which estimates how likely results at least as striking as these would be if there were no real effect: the lower the p-value, the stronger the evidence. By convention, a p-value below 0.05 is considered statistically significant.

Conclusion/Discussion: explains how the results address the topic in question and whether or not they are strong enough to draw a conclusion. This section also describes the limitations of the study and offers suggestions for future research on the topic. 
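The ‘n’ and ‘p-value’ ideas above can be made concrete with a short sketch. The numbers below are invented purely for illustration, and the function estimates a p-value with a simple permutation test – one common approach, not necessarily the one any particular study uses:

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

def permutation_p_value(group_a, group_b, n_permutations=10_000):
    """Estimate a p-value for the difference in group means by shuffling
    the labels many times and counting how often a difference at least
    as large as the observed one arises by chance alone."""
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = group_a + group_b
    n_a = len(group_a)
    count = 0
    for _ in range(n_permutations):
        random.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            count += 1
    return count / n_permutations

# Hypothetical fasting blood sugar readings (mg/dL), n = 10 per group.
treated = [88, 90, 85, 87, 91, 86, 89, 84, 90, 87]
control = [95, 97, 92, 96, 94, 98, 93, 95, 96, 94]

p = permutation_p_value(treated, control)
print(f"p-value = {p:.4f}")  # well below 0.05: unlikely to be chance alone
```

Sample size matters here too: with more participants per group, each group’s average is pinned down more precisely, so a real difference of the same size becomes easier to distinguish from random noise.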

3. Correlation does not imply causation

Let’s suppose I carry a lucky coin to ward off tigers. It’s highly effective: not one tiger has attacked me in all the years I’ve had it. This is an example of confusing correlation with causation, and it’s one of the most dangerous fallacies in research.

Correlation is the relationship between two variables, whether that relationship is positive or negative. In a positive correlation, the variables move in the same direction – for example, as one increases, the other also increases. In a negative correlation, the variables move in opposite directions, for example, as one increases, the other decreases. However, this doesn’t always mean that one variable causes the other to occur. After all, my lucky coin just, like, seriously, does not keep tigers away.

Causation, also known as “cause and effect,” is when one event directly causes another to occur. For example, a high intake of sugary foods causes blood sugar levels to rise. This is both a positive correlation and a cause-and-effect scenario.

4. Not all studies are created equal

If you’re looking for really watertight research findings, keep an eye out for ‘randomized controlled trials’ (RCTs). This is the ‘gold standard’ of study design, especially for measuring the effectiveness of an intervention or treatment, as in health and nutrition studies. In an RCT, one group of participants is given the real treatment, while the others – the ‘control group’ – receive a placebo (which seems like the treatment but is actually inert). Participants are assigned to the groups at random, and in a ‘double-blind’ trial neither the participants nor the researchers know who is in which group during the experiment – minimizing bias and providing a rigorous method of testing.

Although no study can prove causality on its own, RCTs come closest to showing cause-and-effect relationships between the intervention or treatment and the outcomes.
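The random assignment at the heart of an RCT can be sketched in a few lines. The participant IDs below are hypothetical stand-ins for enrolled subjects:

```python
import random

random.seed(7)  # fixed seed so this sketch is reproducible

# Hypothetical participant IDs for a 20-person trial.
participants = [f"P{i:02d}" for i in range(1, 21)]

# Shuffle, then split down the middle: first half treatment, second half control.
random.shuffle(participants)
half = len(participants) // 2
treatment_group = participants[:half]
control_group = participants[half:]

# In a double-blind trial, the ID-to-group mapping is held by a third party,
# so neither participants nor researchers know who received the placebo.
print("treatment:", treatment_group)
print("control:  ", control_group)
```

Because chance, not choice, decides who gets the treatment, known and unknown differences between people tend to even out across the two groups – which is exactly what makes the comparison fair.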

5. Watch out for conflicts of interest!

After the conclusion section of research articles, you’ll find ‘conflicts of interest’ – a section that discloses anything about the researchers or any components of the study that could bias or skew the results. Conflicts can often go unnoticed, but they are key to determining if the researchers’ professional judgment could be compromised. For example, if a researcher studying the possible benefits of a new medication works for the pharmaceutical company that is trying to get approval for that drug, there’s a big temptation to cherry-pick results that support their case. 

Similarly, the study’s funding sources can raise questions when an industry or organization funds work that may support their interests. This is known as ‘funding bias’ or ‘industry sponsorship bias,’ and can compromise the integrity and legitimacy of the research. It is very common to find nutrition research sponsored by food industry bodies. So if you see a study on, say, the health benefits of celery, funded by a celery farmers’ lobby group – take it with a pinch of salt!

Further Reading

Now that you’re ready to put your new tools to the test, here are two research reports we think you might be interested in reading.

- Fruit and vegetable intake and the risk of cardiovascular disease, total cancer and all-cause mortality—a systematic review and dose-response meta-analysis of prospective studies 

- A randomised controlled trial using a whole food plant-based diet in the community for obesity, ischaemic heart disease or diabetes