Monday, June 3, 2019

How I evaluate a research abstract/report - Did researchers do a good job?

Earlier, I explained the basics of reading a research abstract.  But I'm not a scientist, and many of the statistics are over my head.  Despite that, there are still intelligent questions I can ask to determine if research results are valid.  For example:

How many people were studied?  In a Phase 1 trial, focused on safety, the number will often be very small, especially with a technique not previously tried on humans.  (Remember: Phase 1 tests safety; Phase 2 tests whether the treatment works, plus safety, still with a relatively small group; Phase 3 tests efficacy, often dosing, and safety in a larger group; Phase 4 is continuing evaluation after FDA approval.)

But not all research is a clinical trial that fits into one of the Phases; sometimes scientists are just observing, as was the case in the sample research abstract about an exercise survey used in an earlier blog post.  Or the research is on cells or on animals.  Or the research paper is a review of current research on a particular topic, or even a meta-study, where data from multiple earlier studies is evaluated in fresh ways. This means the format may not be the same as the description in the earlier blog post.

But getting back to the number of people studied: the results are more meaningful when the number of research subjects is 200, not 20 or 2.
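
To make this concrete, here is a small sketch in Python (my own illustration with made-up numbers, using the textbook rule-of-thumb formula for the margin of error of a proportion, not anything from an actual study): the uncertainty around a reported result shrinks sharply as the number of subjects grows.

    import math

    def margin_of_error(successes, n, z=1.96):
        # Rough 95% margin of error for an observed proportion,
        # using the normal approximation: z * sqrt(p * (1 - p) / n).
        # Illustration only; real studies use more careful methods.
        p = successes / n
        return z * math.sqrt(p * (1 - p) / n)

    # Suppose 60% of subjects improve on a hypothetical treatment.
    for n in (2, 20, 200):
        improved = round(0.6 * n)
        print(f"n = {n:3d}: give or take about {margin_of_error(improved, n) * 100:.0f} points")

With 2 subjects the honest answer is "give or take about 69 points" (in other words, almost anything); with 20 it is about 21 points; with 200 it is about 7 points.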

Another related question is: did the researchers use flawed data?  A recent study relied only on medical records, specifically records of people diagnosed with PD or with a parkinsonism who also had an EEG on file.  Since most people with PD don't have EEGs done as part of their care, few records qualified, and only 19 PD patients ended up in the study.  Given the diversity of pwp, drawing conclusions by comparing just 19 PD patients with patients with MSA (multiple system atrophy) and other parkinsonisms is clearly a problem, and these researchers never mentioned it.

I don't have the advanced statistical skills to evaluate the statistics myself, but I have learned a few things:
For example, p < 0.05 means that if there were really no effect, a result at least this strong would show up by chance less than 5% of the time; in other words, the finding is unlikely to be just luck.  (The p stands for probability.)
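
As a rough illustration of where such a number comes from, here is a small sketch in Python (my own example with made-up symptom scores, using the SciPy library's standard two-sample t-test, not anything from a real paper):

    from scipy import stats

    # Hypothetical symptom scores for two groups (lower is better); made-up numbers.
    treatment = [42, 38, 35, 40, 37, 33, 36, 39]
    placebo   = [45, 47, 44, 48, 43, 46, 49, 45]

    # The t-test asks: if the two groups really had the same average,
    # how often would a difference at least this large appear by chance alone?
    t_stat, p_value = stats.ttest_ind(treatment, placebo)
    print(f"p-value: {p_value:.4f}")

If the printed p-value comes out below 0.05, a difference that large would rarely appear by chance alone, which is what researchers mean when they call a result statistically significant.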

There are lots of common sense questions to ask, such as:

  • Is the age of research subjects appropriate?  (In one seriously flawed paper, subjects ranged from ages 20 to 78.)
  • Are subjects taking other medications?  (In the previous example, the researchers were evaluating an herbal supplement, and included subjects who were also taking other unspecified herbal supplements at the same time.)
  • Is their basic math accurate?  (In the previous example, the number of subjects was different in different parts of the paper, with no explanation.)
  • Was this research done in humans, in cells, or in rats/mice?  What happens in mice, or even in petri dishes, doesn't always hold up once it's moved to humans; in fact, studies of potential drugs in mice have not translated well at all.
  • Do the researchers have a monetary interest that may bias them?  Did the company/foundation funding the research restrict publication of results?  Who is funding the study?  Researchers are now required to identify conflicts of interest at the end of the full paper, but this information may not be as obvious in earlier research, so you may need to evaluate it yourself.  (In the example above, some of the researchers are company owners.)
  • Is the mix of genders reasonable?  (A paper from Iran had only male subjects, possibly for cultural reasons, but this arbitrary exclusion calls the results into question.)
  • Have researchers excluded too many people, or not enough?
  • What kinds of side effects were experienced? 
  • Do researchers have a control group, a group they can compare the experimental group against?  A related question: have they found a way to account for the placebo effect?  If researchers know who is taking the trial drug, their observations can be colored by their expectations; does the study find a way around this, for example by blinding?
  • Did researchers ask the right questions?
  • Finally, have the results been replicated (especially in the case of new research)?  If the results can't be repeated by different researchers, the original study's results can be questioned.

No easy answer to this one:  Have researchers made assumptions about pwp that get in the way?

There are probably other questions you can think of, too.  These sorts of questions arm you when reading research papers, and even those press reports that trumpet "cure for PD found."

