In epidemiology, reporting bias is defined as "selective revealing or suppression of information" by subjects (for example about past medical history, smoking, or sexual experiences). In artificial intelligence research, the term reporting bias refers to people's tendency not to report all of the information available to them, typically omitting details they consider obvious or unremarkable, so that the frequency with which something is mentioned does not reflect its real-world frequency.
In empirical research, authors may under-report unexpected or undesirable experimental results, attributing them to sampling or measurement error, while trusting expected or desirable results more readily, even though these may be subject to the same sources of error. In this context, reporting bias can eventually lead to a status quo in which multiple investigators discover and discard the same results, and later experimenters justify their own reporting bias by observing that previous experimenters reported different results. Thus, each incident of reporting bias can make future incidents more likely.
Sometimes, investigators fail to publish the results of entire studies. The Declaration of Helsinki and other consensus documents have outlined the ethical obligation to make results from clinical research publicly available.
Reporting bias occurs when the dissemination of research findings is influenced by the nature and direction of the results; it is a particular threat to the validity of systematic reviews, which depend on the published record being representative of all the evidence.
Various attempts have been made to overcome the effects of the reporting biases, including statistical adjustments to the results of published studies. None of these approaches has proved satisfactory, however, and there is increasing acceptance that reporting biases must be tackled by establishing registers of controlled trials and by promoting good publication practice. Until these problems have been addressed, estimates of the effects of treatments based on published evidence may be biased.
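One family of such statistical adjustments examines funnel-plot asymmetry. A minimal sketch of Egger's regression test is shown below, assuming only that per-study effect estimates and their standard errors are available; the input numbers are hypothetical and the implementation is illustrative, not a substitute for a dedicated meta-analysis package.

```python
import numpy as np

def egger_test(effects, ses):
    """Egger's regression test for funnel-plot asymmetry.

    Regress the standardized effect (effect / SE) on precision (1 / SE).
    An intercept far from zero suggests small-study effects, such as
    publication bias. Returns (intercept, slope).
    """
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    z = effects / ses            # standardized effects
    precision = 1.0 / ses
    # Ordinary least squares: z = intercept + slope * precision
    X = np.column_stack([np.ones_like(precision), precision])
    intercept, slope = np.linalg.lstsq(X, z, rcond=None)[0]
    return intercept, slope
```

When the studies' effects scatter symmetrically around a common value regardless of study size, the intercept is near zero; if smaller studies systematically report larger effects, the intercept shifts away from zero.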
Another strategy was the selective choice of journal and timing of publication. Trials with statistically significant findings were generally published more often, and in academic journals with higher circulation, than trials with nonsignificant findings. The timing of publication was also manipulated: the company tried to optimize the interval between the release of two studies, and trials with nonsignificant findings were published in a staggered fashion so that no two consecutive publications lacked salient findings. Ghostwriting was also an issue: professional medical writers who drafted the published reports were not properly acknowledged.
Fallout from this case is still being settled by Pfizer in 2014, 10 years after the initial litigation.
Over the past two decades, evidence has accumulated that failure to publish research studies, including clinical trials testing intervention effectiveness, is pervasive. Almost all failure to publish is due to the investigator's failure to submit; rejection by journals accounts for only a small fraction of unpublished studies.
The most direct evidence of publication bias in the medical field comes from follow-up studies of research projects identified at the time of funding or ethics approval. These studies have shown that the presence of "positive findings" is the principal factor associated with subsequent publication: researchers usually say that the reason they did not write up and submit their research for publication is that they were "not interested" in the results (editorial rejection by journals is a rare cause of failure to publish).
Even those investigators who have initially published their results as conference abstracts are less likely to publish their findings in full unless the results are "significant". This is a problem because data presented in abstracts are frequently preliminary or interim results and thus may not be reliable representations of what was found once all data were collected and analyzed. In addition, abstracts are often not accessible to the public through journals, MEDLINE, or easily accessed databases. Many are published in conference programs, conference proceedings, or on CD-ROM, and are made available only to meeting registrants.
The main factor associated with failure to publish is negative or null findings. Controlled trials that are eventually reported in full are published more rapidly if their results are positive.
Publication bias leads to overestimates of treatment effect in meta-analyses, which in turn can lead doctors and decision makers to believe a treatment is more useful than it is.
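The inflation mechanism can be demonstrated with a small simulation. The numbers below (a true standardized effect of 0.2 and a common standard error of 0.2) are hypothetical, chosen only to show how selecting statistically significant trials shifts the apparent effect upward.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scenario: many small trials of a treatment whose
# true standardized effect is 0.2.
true_effect = 0.2
n_trials = 2000
se = 0.2                                   # common standard error per trial
estimates = rng.normal(true_effect, se, n_trials)

# "Publish" only trials reaching p < 0.05 (|estimate| > 1.96 * SE).
published = estimates[np.abs(estimates) > 1.96 * se]

print(f"true effect:            {true_effect:.2f}")
print(f"mean of all trials:     {estimates.mean():.2f}")
print(f"mean of published only: {published.mean():.2f}")
```

The mean over all simulated trials sits close to the true effect, while the mean over the "published" subset is noticeably larger, which is exactly the overestimate a meta-analysis of the published literature would inherit.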
It is now well established that the source of funding for studies is associated with publication of more favorable efficacy results, an association that is not otherwise explained by the usual risk-of-bias assessments.
Selective pooling of results in a meta-analysis is a form of citation bias that is particularly insidious in its potential to influence knowledge. To minimize bias, pooling of results from similar but separate studies requires an exhaustive search for all relevant studies. That is, a meta-analysis (or pooling of data from multiple studies) must always have emerged from a systematic review (not a selective review of the literature), even though a systematic review does not always have an associated meta-analysis.
Outcome reporting bias is the selective reporting of some outcomes but not others, depending on the nature and direction of the results.
Selective reporting of suspected or confirmed adverse treatment effects is an area of particular concern because of the potential for patient harm. In a study of adverse drug events submitted to Scandinavian drug licensing authorities, published studies were less likely than unpublished studies to record adverse events (for example, 56% vs. 77%, respectively, for Finnish trials involving psychotropic drugs). Recent attention in the lay and scientific media to the failure to accurately report adverse events for drugs (e.g., selective serotonin reuptake inhibitors, rosiglitazone, rofecoxib) has resulted in additional publications, too numerous to review, indicating substantial selective outcome reporting (mainly suppression) of known or suspected adverse events.
Types of reporting bias include publication bias, time lag bias, multiple (duplicate) publication bias, location bias, citation bias, language bias, knowledge reporting bias, and outcome reporting bias.