
How to Read a Scientific Paper: Part One

(First of a three-part series)

April 2001

A note from TheBody.com: Since this article was written, the HIV pandemic has changed, as has our understanding of HIV/AIDS and its treatment. As a result, parts of this article may be outdated. Please keep this in mind, and be sure to visit other parts of our site for more recent information!

So Many Papers, So Little Time: What Can I Trust?

When it comes to coverage of health issues on TV and in the papers, it seems like the media has a new, and often contradictory, story every week. First, eggs are cholesterol-laden orbs of death; then they're a great source of protein. One day, estrogen replacement can protect against heart disease in older women; the next day, it can't. Last year, a high-fiber diet was said to be your best protection against colon cancer; now they say it's irrelevant (although anything that gets you to eat all your veggies can't be too bad).

In a complex and fast-paced field like AIDS research, the confusion can be even worse. Every single month, over a hundred scientific articles and conference presentations come out, each intended to expand our knowledge about HIV. And that's not even counting all the reports (like basic immunology studies) that are key to understanding HIV/AIDS but are not technically "AIDS research." While many of these scientific reports help piece together the jigsaw puzzle of HIV, filling in a picture of the virus and its nefarious activities, the absolute truth may be as difficult to grasp here as it is in the mainstream media.

Two years ago d4T (Zerit) was widely thought to be the least toxic of all the NRTIs (nucleoside reverse transcriptase inhibitors: drugs in the same class as AZT, ddI and abacavir). More recently, findings suggest that d4T may be one of the prime suspects for causing damage to mitochondria (energy "factories" within our cells). Similarly, not long ago, missing a single dose of one's antiretroviral medication was thought to be risking catastrophe. In 2000, our spring fashion line featured the concept of strategic treatment interruption (STI) as the next great hope.

The Internet, an indispensable research tool that brings the libraries of the world to our fingertips, has also increased the deluge of information. Now hundreds of articles -- some in agreement, some contradictory -- vie for our attention, along with the commentary of experts pointing out confirmation of their pet theories -- and refutation of their rivals' -- in the newest research.

How can we make sense of this? Obviously, not all research is equally good or useful. How can a healthcare consumer critically evaluate this torrent of information? And when two reputable sources disagree, do we have to believe one over the other, or do we try to find some useful synthesis?

The highly technical language of most papers doesn't help matters. While much of the jargon serves as a kind of "shorthand" among scientists to refer to common methods or basic findings, it can make scientific papers difficult or impossible to understand for those who aren't involved in the field.

The good news is that there are some common rules and conventions for writing science reports that make reading, understanding and interpreting the results much easier. Certain standards and terminology are universal, whether one is discussing immunology or botany. I hope to provide you with an understanding of these conventions and how they are used -- and misused. While it's probably a good policy to mistrust anyone who claims to have the one and only answer to a controversy, it's a lot easier to evaluate someone's claims (or your own discoveries) when you have a good understanding of the scientific conventions.

Anatomy Lesson (The Components of a Research Paper)

Before we dig into the specifics of analyzing a research paper, let's take a look at how scientific reports are organized and how the information is presented. Usually, the basic structure of a scientific paper is the same, whether it's about AIDS or nuclear physics. Roughly speaking, you can always expect to find these sections in a paper:

  • the abstract,

  • the background and rationale,

  • a description of the methods used,

  • the actual results, and finally

  • the discussion -- the author's interpretation of the results.

Hopefully the discussion makes sense of the results within the context of the issues that were raised in the background and rationale.

Let's take a closer look at each of these sections.

The Abstract

The abstract isn't technically a part of a paper; rather, it's the "Reader's Digest" version: a short synopsis that summarizes the key points. These key points should give you a brief recap of each part of the paper listed above. Generally, when one consults Medline, AEGIS, or other on-line information sources, an abstract is all that's available. The abstract is an invaluable tool that can help you decide which papers are worth getting and reading in their entirety. As a tool for fully understanding the research, however, abstracts have substantial limitations. Specific details beyond those necessary to convey the main finding are scarce. Often the description of methods is cursory or missing, making it hard to assess the scientific rigor of the study. Only the primary results are summarized, omitting potentially valuable supporting data. Still, abstracts play an indispensable role by allowing you to quickly extract the gist of a paper.

Background and Rationale

The background and rationale section tells you the reasons why the paper was written. It describes previous related research, identifies the questions that are still unanswered and proposes exactly what the paper will address. For the purposes of this article I will refer to an imaginary antiretroviral drug I call "X-100" to provide examples. (X-100 does not exist, and all the data relating to X-100 is for illustration purposes only.)

In a paper describing the usefulness of X-100 for patients who have previously failed a protease inhibitor (PI), the background and rationale should explain why the X-100 experiments were done in the first place. It might discuss the rate of virologic failure in patients treated with PIs, the clinical implications of virologic failure, and why new therapies like X-100 are needed. Next, it might give us some background on X-100 itself: its basic chemistry and research results, including in vitro (in the laboratory) activity, animal studies, and any human research to date. The background and rationale is exactly what it sounds like. It provides you with the basic context of the research, and offers the rationale (reason) why this particular study is needed. It should also describe the primary hypothesis (the key question that drives the research) as well as any secondary objectives.

Methods

The methods section is one of the most important, and also most neglected, parts of a scientific paper. Here the investigators describe exactly how they did the research: how they set it up, what measurements were taken, what mathematical methods were used to analyze the data, etc. The methods section is where eligibility criteria (who was, and was not, allowed in a clinical trial) and endpoints (the exact definition of what is to be measured, and why) are defined. It is also in the methods section that the primary hypothesis and secondary objectives (introduced in the rationale) are specified in full detail.

In the case of our fictional "X-100" trial, the hypothesis might go something like this: "In adults with CD4 counts between 50 and 300, who have HIV RNA (viral load) of over ten thousand while on a protease inhibitor containing regimen, X-100 provides a greater and more prolonged decrease in HIV RNA compared to an optimized regimen using approved agents, guided by genotypic resistance testing." The study then attempts to prove (or disprove) this hypothesis. Secondary objectives might compare X-100 with standard treatments in terms of toxicity, quality of life, or other important considerations.
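To see what makes that hypothesis useful, notice that its eligibility criteria amount to an explicit, checkable rule. Here is a minimal sketch in Python (purely illustrative; X-100 is fictional, and the cutoffs come from the made-up hypothesis above, not from any real protocol):

    # Illustrative only: X-100 is a fictional drug, and these cutoffs come
    # from the invented hypothesis above, not from any real trial.

    def eligible_for_x100_trial(age, cd4_count, viral_load, on_pi_regimen):
        """Return True if a patient meets the fictional X-100 entry criteria:
        an adult with a CD4 count between 50 and 300 and an HIV RNA (viral
        load) over 10,000 copies/mL while on a protease inhibitor regimen."""
        return (
            age >= 18
            and 50 <= cd4_count <= 300
            and viral_load > 10_000
            and on_pi_regimen
        )

    # Example: a 42-year-old with CD4 = 120 and a viral load of 45,000 on a PI.
    print(eligible_for_x100_trial(42, 120, 45_000, True))   # True
    print(eligible_for_x100_trial(42, 400, 45_000, True))   # False: CD4 too high

Writing the criteria down this explicitly is exactly what a good methods section does in prose: anyone can apply the same rule and get the same answer.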

In this age of high technology, with sophisticated tests like HIV RNA, sometimes the focus drifts from the specific and detailed construction of the hypothesis. It's tempting to add all kinds of extra measurements to a trial without first being clear about exactly what questions they will address. More than one scientist has observed that properly framing and describing the research question is half the job. By starting with a clearly thought out and well-described research question, much of the subsequent planning is just filling in blanks for the specifics that are dictated by the nature of the question itself. [One amusing sociological side note on the methods section: At scientific conferences you can always spot someone who works in the same field as an author -- they jump straight to the methods section, just as others tend to skip past it. But after all, if they're in the same discipline, they hardly need convincing of the background and rationale!]

Results

The heart of a paper, the results section presents the actual findings of the study. You will also read a description of the conditions at the start of the trial. In our fictional X-100 trial, we might see a summary of the patients' baseline (at the beginning) CD4s, their viral loads, their history of AIDS-related conditions, and, since this is a salvage trial, probably a summary of their previous treatment histories. As for the results themselves, we might expect to see the average viral loads of the group who got X-100 compared to those who didn't. We also might see information about the duration of response, relative rates of clinical disease, toxicities, and other objectives of the trial. We can certainly expect to see all the data relating to the objectives described in the methods section. Editorializing about the meaning and implications of the results is supposed to be restricted to the discussion section. Rarely are things divided this neatly, though, and the results section can contain a fair amount of analysis and, sometimes, spin.
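To illustrate the kind of comparison a results section boils down to, here is a small Python sketch (all numbers invented for illustration) that computes each arm's average change in viral load on the log10 scale, the way HIV RNA changes are conventionally summarized:

    import math

    # Invented example data: (baseline, follow-up) viral loads in copies/mL
    # for a handful of patients in each arm of the fictional X-100 trial.
    x100_arm = [(45_000, 400), (120_000, 900), (30_000, 50)]
    control_arm = [(50_000, 8_000), (90_000, 20_000), (60_000, 5_000)]

    def mean_log10_change(patients):
        """Average change in log10 viral load from baseline to follow-up;
        negative values mean the viral load went down."""
        changes = [math.log10(follow) - math.log10(base)
                   for base, follow in patients]
        return sum(changes) / len(changes)

    print(f"X-100 arm:   {mean_log10_change(x100_arm):+.2f} log10")
    print(f"Control arm: {mean_log10_change(control_arm):+.2f} log10")

A real results section would add confidence intervals and significance tests, but underneath it all is this simple comparison of group averages.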

Discussion

This is where the authors try to wrap up the whole package. The findings are discussed in the context laid out in the background and rationale, and the significance of the results is established. In general, this is the most editorial section of the paper, where authors not only discuss the raw data, but attempt to generalize (extend) these findings to groups of people who were not in the trial. By nature, some speculation is not only tolerated but expected, even if it's as bland as "X-100 holds great promise for the treatment of HIV-infected individuals failing existing therapies."

Now that we have discussed how a paper is constructed, we can get to the interesting stuff -- the actual contents of the paper. Of the thousands of papers published every year, some are far more reliable and relevant than others. There are certain specific characteristics you can look for when deciding which papers to trust and use for making important decisions.

Who Is Telling You This?

It may seem very obvious, but one of the most important criteria for evaluating any piece of information is to consider its source. Many people feel more comfortable with research coming from a university or the National Institutes of Health than if it comes from a drug company. Yet most research designed to lead to U.S. Food and Drug Administration approval of a company's new drug application (NDA) comes from trials conducted by drug companies or by hand-picked teams of investigators. These studies are often the first and only source of information available about a new drug. Obviously it would be naïve and unproductive to ignore this research just because of the source, but knowing the source can certainly help you to critically evaluate the research, and may affect how much you are willing to take on faith.

Sad as it is, it's not just drug companies that have a personal stake in "favorable" research results. The careers of modern-day academics flourish or die based on the papers they publish. And rarely does anyone publish papers with negative results (those where the experiment failed). So there's considerable psychological pressure on everybody involved with trials to produce positive research results. Furthermore, many people feel that data is inherently less trustworthy when it's been summarized and presented by someone with a financial stake in the outcome. Fortunately, more and more journals are adopting policies that obligate investigators with a financial stake in the research results to disclose their interests.

One relatively little-known fact is the significance of the last author listed. Obviously, everybody looks at the name of the lead author. The research is likely to be their personal project, and they probably have more tied up in it in terms of time, thought, and effort than anyone else (although let's not forget the grad student who is probably listed fifth or sixth). But the last author in the list is usually the senior scientist in whose lab the work was done. So intramural (on-campus) research at the National Institute of Allergy and Infectious Diseases (NIAID) always lists Anthony Fauci, the director of NIAID and its several labs, as the last author, even when his only participation was advisory.

Looking Forward, Looking Back (Prospective vs. Retrospective Designs)

One of the things that can most affect the usefulness and generalizability of a study is whether it was prospective or retrospective. A prospective study is an experiment that is designed to answer a specific question (prospective -- meaning literally "looking forward"), with every aspect of the design meant to facilitate getting that particular answer. Alternatively, a retrospective study starts with a bunch of information that was collected for some other purpose, perhaps not even with research in mind (clinic charts are a good example). The investigator then tries to answer a new question by piggybacking onto the existing set of data.

Retrospective designs can never produce the kind of convincing results that prospective studies can, for reasons I will shortly discuss. However, prospective designs are not always practical or possible. For example, if we wanted to know if the introduction of HAART had a major impact on AIDS-related dementia, we would have to look at people from the pre-HAART era to make that comparison. This is a question we simply cannot answer prospectively today. It would be criminal to deprive PWAs of optimal treatment simply to do that study.

Sometimes there may be only a narrow window of time during which one can answer a question prospectively. In the example above, dementia-related questions could have been included in the initial trials of HAART when the new combinations were being compared to only one or two drugs. Today, however, we know that HAART is effective and that no one should be denied it. But in the case of our mythical X-100, since we don't really know what effect it has yet, we have a unique opportunity to collect some neurocognitive data that lets us compare X-100 to standard HAART for prevention of dementia.

One of the biggest problems with retrospective data is that we have no idea who was excluded from the data set, and why. For example, if we used chart review to try to get an idea of who had dementia in the early nineties, we might stumble into some unexpected pitfalls. It's possible that one particular clinic had a very good relationship with the neurology clinic, so that all potential dementia cases were taken to neurology for evaluation and diagnosis. This clinic might report a high rate of dementia. Another clinic that serves many people with multiple addictions might not carefully distinguish between early dementia and other cognitive problems. They'd report a low incidence rate. One of the biggest problems with missing data is that, by definition, we don't know how much data are missing or why they are missing. One of the advantages of a prospective design is that we can make sure that all participants have, say, their neurocognitive functions measured in the same way at baseline, so that we do not have to compare apples and oranges.
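To see how this kind of ascertainment problem can manufacture a difference out of thin air, here is a small Python simulation (all numbers invented for illustration): both clinics have exactly the same true dementia rate, but one detects most cases and the other detects few, so a retrospective chart review appears to find a threefold difference that doesn't exist.

    import random

    random.seed(1)

    TRUE_DEMENTIA_RATE = 0.10   # identical in both clinics, by construction

    def observed_rate(n_patients, detection_prob):
        """Simulate a chart review: a case shows up in the charts only if
        the clinic actually evaluated and diagnosed it."""
        found = 0
        for _ in range(n_patients):
            has_dementia = random.random() < TRUE_DEMENTIA_RATE
            detected = random.random() < detection_prob
            if has_dementia and detected:
                found += 1
        return found / n_patients

    # Clinic A refers nearly everyone to neurology; Clinic B rarely does.
    print(f"Clinic A (90% detection): {observed_rate(5000, 0.9):.3f}")
    print(f"Clinic B (30% detection): {observed_rate(5000, 0.3):.3f}")
    # Both clinics truly have a 10% rate; the charts suggest otherwise.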

Next Month

In our next installment, we'll look at inclusion and exclusion criteria, endpoints, and then we'll flip a few coins to understand the role that chance plays in all of this.
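As a small preview of those coin flips, here's an illustrative Python sketch showing how often pure chance produces an apparently lopsided result (a fair coin coming up heads at least 14 times out of 20):

    import random

    random.seed(7)

    def lopsided(flips=20, threshold=14):
        """Flip a fair coin `flips` times; return True if heads reach the
        threshold -- an 'impressive' result produced by luck alone."""
        heads = sum(random.random() < 0.5 for _ in range(flips))
        return heads >= threshold

    trials = 10_000
    hits = sum(lopsided() for _ in range(trials))
    print(f"{hits / trials:.1%} of experiments look lopsided by chance alone")

That happens in roughly 6 percent of runs -- not far from the 5 percent threshold that scientists conventionally treat as "statistically significant," which is exactly the territory we'll explore next.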

GMHC's Treatment Education Library

If you live in New York and would like to read some of the scientific information about HIV/AIDS, visit GMHC's Treatment Education Library. The address is 119 West 24 Street on the 7th floor.

The Treatment Library offers a wide range of information about HIV/AIDS -- from easy-to-read pamphlets to the latest scientific journals. The friendly staff can also help you search for information and abstracts on the Internet. Basic classes on using the Internet are offered for people living with HIV/AIDS. Call 212/367-1458 for more information or to make an appointment to use the library.

See also Part 2 and Part 3 of this series.



This article was provided by Gay Men's Health Crisis. It is a part of the publication GMHC Treatment Issues. Visit GMHC's website to find out more about their activities, publications and services.
 