Adverse effects underreported in clinical trials

21:43, 21 September 2016

The adverse events (AEs) of medical treatments appear to be underreported in peer-reviewed journal articles documenting the results of clinical trials, according to a new systematic review.

"There is strong evidence that much of the information on adverse events remains unpublished and that the number and range of adverse events is higher in unpublished than in published versions of the same study," write Su Golder, PhD, FRSA, from the University of York, United Kingdom, and colleagues.

"The extent of 'hidden' or 'missing' data prevents researchers, clinicians, and patients from gaining a full understanding of harm, and this may lead to incomplete or erroneous judgements on the perceived benefit to harm profile of an intervention," the authors write.

The researchers published the results of their study online September 20 in PLoS Medicine.

The authors note that serious concerns have emerged about publication bias and outcome reporting bias in clinical trials, which can lead to overestimation of treatment benefits and underreporting of negative results. Moreover, studies have found significant underreporting of AEs in published trial data compared with unpublished data from the same trial.

However, the extent of underreporting of AEs of medical treatments in peer-reviewed journal articles remains unknown.

With this in mind, Dr Golder and colleagues conducted a systematic review to quantify the underreporting of AEs in peer-reviewed publications documenting the results of clinical trials, compared with unpublished sources. The authors also wanted to measure the effect of this underreporting on systematic reviews of AEs.

The researchers searched several databases as well as other sources, including the Cochrane Library, hand-searches of key journals, and unpublished studies. They included studies in their review that quantified the reporting of AEs of any medical intervention in both published and unpublished formats.

"Published" articles were manuscripts in peer-reviewed journals. In contrast, "unpublished" data comprised information obtained through other routes (such as regulatory websites and trial registries), and also included "gray literature," which comprised print or electronic information not controlled by commercial or academic publishers (such as press releases and conference proceedings).

Dr Golder and colleagues identified 28 studies from 31 publications that met the inclusion criteria.

Eight of the studies compared the proportion of trials reporting AEs by publication status; all found that a higher proportion of unpublished than published documents included information on AEs (95% vs 46%, respectively).

Eleven studies performed matched comparisons of AEs in published and unpublished documents. "All the studies, without exception, identified a higher number of all or all serious adverse events in the unpublished versions compared to the published version," the authors write.

Specifically, readers who relied only on published documents to evaluate clinical trial data would have missed 43% to 100% (median, 64%) of the AEs associated with medical treatments, including 2% to 100% of serious AEs.

Further, of 24 comparisons of named AEs, such as death or respiratory AEs, 18 showed that unpublished documents reported more of the named AEs than did publications. Two other studies showed that matched unpublished documents reported substantially more types of AEs than published documents; in one of these, 67.6% of serious AEs and 93.3% of fatal AEs reported in unpublished company trial reports were not included in the published documents.

Dr Golder and colleagues also identified several examples of meta-analyses that included published AE data both with and without unpublished AE data. In most instances, inclusion of the unpublished data narrowed the 95% confidence interval for the pooled risk estimate of an AE but did not dramatically change the direction or magnitude of the risk. In several instances, however, inclusion of the unpublished data made the pooled risk estimate statistically significant, whereas the estimate based on published data alone was not.

The authors acknowledge several limitations of this review and note that the included studies may themselves suffer from publication bias, whereby significant differences between published and unpublished data are more likely to be published.

Nevertheless, these findings suggest that underreporting of AEs, selective outcome reporting, and publication bias represent significant threats to the validity of systematic reviews and meta-analyses of harms associated with medical treatments.

As a consequence, the authors say researchers should search beyond peer-reviewed journal publications for information on AEs associated with medical treatments. They also highlight the need for the drug industry to release full data on AEs to provide a more complete picture to healthcare providers, policy makers, and patients.

"Our findings suggest that it will not be possible to develop a complete understanding of the harms of an intervention unless urgent steps are taken to facilitate access to unpublished data," they conclude.

This study was supported by the National Institute for Health Research. The authors have disclosed no relevant financial relationships.

© NEWS.am Medicine