How to interpret a research paper

 

Professor Steve Kamper offers some tips to help physiotherapists identify high-quality research that will be relevant to their work.

It’s a sad fact of life that the peer review process doesn’t guarantee the quality of studies published in scientific journals. 

What is published ranges from the scientifically robust and reliable all the way to the useless and fraudulent. 

So how is a busy clinician—who wants to stay up to date with the good stuff—to know where to spend their precious reading time? 

Can I just read the abstract?* 

Use the abstract to work out whether you want to read the paper. 

It is good for telling you whether the subject of the study might be relevant to you but not whether the results should inform your practice. 

Many abstracts relevant to physiotherapy contain misleading representations of the study, known as ‘spin’ (Nascimento et al 2020). 

There is no way of assessing study quality from an abstract. 

It’s also important not to comment on, publicise or distribute the findings of a study via your favourite social media outlet if you’ve only read the abstract. 

You may very well be disseminating rubbish. 

*No. 

The introduction 

The aim of the introduction is to show that the problem the study is addressing is important and that the study targets a critical research gap. 

To do this, authors often present a selective review of hand-picked studies that support their argument. 

For example, the authors might report findings from a study that showed a strong relationship between a clinical feature and an outcome (that supports the rationale for their study) but totally ignore the findings from other studies that show no relationship. 

Unfortunately, peer reviewers and editors rarely push authors to present a balanced or nuanced overview of the relevant field in the introduction. 

The problem for readers is that you don’t know what is missing. 

The upshot: don’t base your clinical decisions (or tweets or LinkedIn posts) on what appears in the introduction. 

The most important part of the introduction (or sometimes the start of the methods section) is the research question. 

If you remember nothing else, take this away: if the research question is not clear and unambiguous to you, stop reading. I guarantee you have better things to do. 

You cannot assess whether the study is any good or make a sensible interpretation of the findings if the research question is not clear. 

By clear, I mean that you should be able to describe in plain language what the study is trying to achieve to someone who isn’t a physio. 

Note that many studies will aim to answer more than one question; all the same considerations apply for every question. 

Conveniently, all research questions fit into one of three types (Kamper 2020a): 

  • descriptive—use data to present a summary of the world. Data may be numbers (quantitative) or words (qualitative)
  • predictive—test whether data at a certain time point predicts an outcome in the future
  • causal—estimate the degree to which a specific factor causes an outcome or whether a certain treatment changes an outcome more than another treatment (or no treatment). 

Knowing the question type will help you assess whether the methods are appropriate. 

Methods 

Research methods are a set of processes that give structure to data collection and analysis to directly address the study question with the minimal risk of bias (Kamper 2018a, Kamper 2020b). 

The most important thing about the methods is that they match the research question (see Table 1). 

If they don’t, or if they are not described clearly enough to judge, there is no point reading any further.

In the methods section, the researchers will describe who the participants are (recruitment processes, inclusion and exclusion criteria), what data they collected (baseline variables, outcomes) and how it was analysed (statistical analysis).

They will also set out whether data was collected at one time point (cross-sectional) or at several time points from the same participants (longitudinal). 

Recruitment and inclusion need to ensure that the study participants (the sample) match and represent the group referenced in the question. 

The baseline data needs to provide a picture of the relevant demographics and clinical status of the participants, and outcomes need to use reliable and valid instruments (Kamper 2019a) that directly address the study question. 

The mere mention of statistics strikes fear and loathing into many clinicians but in most cases a basic understanding is all you need. 

Descriptive questions are usually answered with simple proportions and means; predictive statistics show how well a set of indicators categorises future outcomes; and causal studies present the difference between the means of two datasets or associations between variables (Kamper 2019b). 

If you are interested in learning more about the key components of various study types, the Equator network (equator-network.org) houses checklists relevant to most study designs. 

Table 1: Question types and study designs (Kamper 2020a)

| Question type | Study aims | Study designs |
| --- | --- | --- |
| Descriptive | Prevalence | Cross-sectional population survey |
| | Incidence | Longitudinal population survey |
| | Practice audits, case mix | Clinical notes review |
| | Cost of illness | Health systems data review |
| | Clinical/natural course | Longitudinal observational cohort |
| | Diagnostic test accuracy | Cross-sectional study (clinical sample) |
| | Understanding experiences | Qualitative study |
| Predictive | Risk or prognostic models | Longitudinal study |
| Causal | Treatment effectiveness | RCT, quasi-RCT, controlled cohort study, natural experiment |
| | Treatment target(s) | Longitudinal study (clinical sample), case-control study, natural experiment |
| | Treatment effect mechanisms or pathological mechanisms | Mediation analyses in longitudinal studies or RCTs |

Results (the answer) 

The results section contains two useful bits of information. 

First, a quantitative description of the study sample, usually in the first table. 

This helps you tell how similar the study participants are to your patients so you can judge how well the findings apply (generalisability). 

Second, the answer to the research question. 

Obviously, the findings need to directly answer the research question and follow the analysis methods. 

What you need to do then is interpret the findings. 

For quantitative research, this typically means judging how important the difference between group means, or the strength of an association, is likely to be for your patients. 

Judging the importance of the difference depends to some extent (but not solely) on the size (Kamper 2019c) and the precision (Kamper 2019d) of the finding. 

An alternative to deciding yourself whether the effects are important might be to take these study findings into a discussion with your patient as part of treatment planning. 

The discussion 

The discussion section should provide the researchers’ interpretation of the results, set them in the context of the wider body of research, detail the limitations and discuss their implications, and propose the implications for practice. 

The real value among these is placing the study results in the context of other research—but only if this is done comprehensively and systematically. 

The problem is that authors do not always do a good job. 

Like the introduction, the discussion section is often a cherry-picked review of research that aligns with the authors’ interpretation. 

While researchers are usually not shy about offering their own interpretation and recommending what clinicians should do, arguably your own interpretation—adapted to your context—is more relevant. 

If you are not familiar with other research in the field, you might do a quick search for an up-to-date systematic review on the topic and see how the current study fits in with the findings of the review. 

A quick word on evidence hierarchies 

You may have heard of the evidence hierarchy that places systematic reviews of randomised controlled trials (RCTs) at the top, followed by individual RCTs, then well-controlled observational studies and so on, all the way down to opinion pieces at the bottom. 

While there is value in the hierarchy—randomisation, in particular, is a powerful technique in reducing the risk of some types of bias (Kamper 2018b)—its usefulness rests on a couple of key assumptions. 

First, the hierarchy applies most specifically to studies addressing the effectiveness of interventions. 

Obviously, other research may be relevant to clinicians. 

Second, the hierarchy assumes equivalent study quality between the different types, but a poorly conducted systematic review is of less value than a well-conducted observational study. 

So understanding risk of bias matters, regardless of where a study design fits in an evidence hierarchy. 

One last (difficult) thing 

On reading a research article, your job is to work out how heavily the findings should influence your clinical decision-making. 

Assessing the believability of the study is just one part. 

Arguably more challenging is deciding how to weigh up and integrate the findings with other information relevant to the decision. 

This includes your own clinical experience and the preferences and values of your patient. 

But don’t forget that information from these sources is at risk of bias too. 

Considering these biases and assessing their implications is just as important as assessing the quality of research. 

Evidence-based practice requires a set of skills, one of which is the ability to critically appraise research articles. 

Like any skill, doing it well requires investment in learning and practice. 

Resources like the Evidence in Practice series in the Journal of Orthopaedic & Sports Physical Therapy and the CASP tools can help if you want to make this investment. 

>> Professor Steve Kamper APAM is a physiotherapist and professor of allied health at the University of Sydney and Nepean Blue Mountains Local Health District. 


© Copyright 2026 by Australian Physiotherapy Association. All rights reserved.