
Nine out of Ten Doctors Prefer Camels! Three Brief Essays Regarding How We Use and Abuse Statistics

Persuasive arguments often include numbers. Clichés in advertisements include the 'survey' in which a substantial proportion of respondents support a particular product. Politicians pay close attention to public opinion polls, particularly those regarding the popularity of candidates running for election. Numbers also dominate medical science and are used to support the adoption or repudiation of diagnostic tools, laboratory tests and therapeutic interventions. Often missing from these discussions are the clinical significance or relevance of the results being presented and any accounting for inherent confounding biases.

Three brief essays appear in this issue of the International Journal of Clinical Practice (1–3), all related to the general theme of how we use, and potentially abuse, statistical tools. The first, by Joel Lexchin (1), outlines several concerns about how data may be presented to both clinicians and the public at large through advertising. Although some of the most egregious examples are from some time ago, the confusion between relative and absolute measures persists (4,5) and is perpetuated not only by industry but also by researchers, with all parties eager to demonstrate large effects for perceived benefits. Continuing education on the philosophy and tools of evidence-based medicine, in particular on the simple and intuitive statistics available, is highly desirable (6).
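To make the relative-versus-absolute distinction concrete, here is a minimal sketch using invented trial numbers (a 2% event rate on placebo halved to 1% on treatment; no real drug is implied). The same result can be reported as a dramatic 50% relative risk reduction or as a modest one-percentage-point absolute risk reduction, corresponding to a number needed to treat of 100:

```python
# Hypothetical two-arm trial; all numbers are invented for illustration.
control_risk = 0.02   # 2 events per 100 patients on placebo
treated_risk = 0.01   # 1 event per 100 patients on active treatment

rrr = (control_risk - treated_risk) / control_risk  # relative risk reduction
arr = control_risk - treated_risk                   # absolute risk reduction
nnt = 1 / arr                                       # number needed to treat

print(f"RRR = {rrr:.0%}")   # 50% -- the headline-friendly figure
print(f"ARR = {arr:.1%}")   # 1.0% -- the clinically informative figure
print(f"NNT = {nnt:.0f}")   # treat 100 patients to prevent one event
```

Both figures describe the same trial; only the framing differs, which is precisely the confusion these essays address.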

The second essay, by Ghaemi (2), illustrates the dangers of confounding bias, particularly as it applies to epidemiological studies, but also shows how it may arise in randomised controlled studies, which are often thought to be immune to this problem.

The third essay is a more 'nuts and bolts' type of paper, in which Williamson (3) describes calculating and publishing a 'wrong answer' and then, in a valiant effort to set the record straight, goes back to correct it. The essay concludes with a discussion of confounding bias, our recurring theme: a problem that has not gone away despite our awareness that it exists.

These three essays underwent peer review, and the undersigned made the conscious decision to select reviewers with rather strong points of view at polar ends of the opinion spectrum. This generated a lively and at times lengthy exchange between author and reviewer, author and editor, and reviewer and editor, as well as among the editors involved. Some of this is reflected in Ghaemi's essay (2), where the different viewpoints are explicitly articulated. Lexchin's article (1) was reviewed by both a well-known critic of the pharmaceutical industry and a researcher employed by a major pharmaceutical company. In discussions with the latter, the concept of 'pharmism' came up, whereby all persons employed by the pharmaceutical industry, and their work products, are viewed with automatic suspicion and derision. This is rather unfortunate given the great lengths to which scientists at these companies have gone to ensure methodological rigour and accuracy, as well as to inform those in marketing about what makes sense and what does not. To be sure, misguided decisions and poor judgment have been exhibited at times, but academic physicians and front-line clinicians are not exempt from such behaviours.

In the end, the best strategy for dealing with information regarding diagnostic and therapeutic options is to be aware of the common distortions and biases inherent in study designs and in how data are displayed. This does not require a degree in statistics, but it does require being a bit sceptical and curious about the claims being made. The process of evidence-based medicine (Figure 1) can help the practitioner adapt the available research evidence to the care of the individual patient. The reader is invited to peruse the many resources available.

Figure 1: The five-step evidence-based medicine process. Reproduced with permission from Citrome L & Ketter TA (6)

EBM Resources *

Books

  • Gray GE. Concise Guide to Evidence-Based Psychiatry. Washington, DC: American Psychiatric Publishing, Inc, 2004
  • Guyatt GH, Rennie D. Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice. Chicago, IL: AMA Press, 2001
  • Guyatt GH, Rennie D. Users' Guides to the Medical Literature: Essentials of Evidence-Based Clinical Practice. Chicago, IL: AMA Press, 2001
  • Straus SE, Richardson WS, Glasziou P, et al. Evidence-Based Medicine: How to Practice and Teach EBM, 3rd edn. Edinburgh: Elsevier Churchill Livingstone, 2005

On the lighter side

  • Citrome L. Evidence-based flying: a new paradigm for frequent flyers. Int J Clin Pract 2010; 64: 667–8.

*Adapted with permission from Citrome L & Ketter TA (6).

Disclosures

No writing assistance or external financial support was utilised in the production of this article. Leslie Citrome is a consultant for, has received honoraria from, or has conducted clinical research supported by the following: Abbott Laboratories, AstraZeneca Pharmaceuticals, Avanir Pharmaceuticals, Azur Pharma Inc, Barr Laboratories, Bristol-Myers Squibb, Eli Lilly and Company, Forest Research Institute, GlaxoSmithKline, Janssen Pharmaceuticals, Jazz Pharmaceuticals, Merck, Novartis, Pfizer Inc and Vanda Pharmaceuticals. As Psychiatry Section Editor for the Journal, Leslie Citrome withdrew from the review process and deferred all editorial decisions to Graham Jackson.

References

  1. Lexchin J. Statistics in drug advertising: what they reveal is suggestive, what they hide is vital. Int J Clin Pract 2010; 64: 1015–8.
  2. Ghaemi SN, Thommi SB. Death by confounding: bias and mortality. Int J Clin Pract 2010; 64: 1009–14.
  3. Williamson DF. The population attributable fraction and confounding: buyer beware. Int J Clin Pract 2010; 64: 1019–23.
  4. Jackson G, Citrome L. JUPITER: wake up and smell the coffee—the absolute and relative merits of statin use. Int J Clin Pract 2009; 63: 347–8.
  5. Citrome L. Relative vs. absolute measures of benefit and risk: what's the difference? Acta Psychiatr Scand 2010; 121: 94–102.
  6. Citrome L, Ketter TA. Teaching the philosophy and tools of evidence-based medicine: misunderstandings and solutions. Int J Clin Pract 2009; 63: 353–9.

Death by confounding: bias and mortality
S. N. Ghaemi, S. B. Thommi Int J Clin Pract Volume 64 Issue 8, Pages 1009 - 1014

All observations – whether those of one clinician on one patient, one clinician on a thousand patients over decades, or entire nations – are inherently wrong or, at least, probably wrong in some way. Even if huge databases are used and the fanciest statistical regression models are employed, observations are still observations, limited by the context in which they are made.

It is a basic scientific, statistical and medical fact that all our observations are flawed to some degree. This is what we mean by bias.

Bias means systematic error, as opposed to the random error of chance: one makes the same mistake over and over again because of some inherent problem with the observations being made. Confounding bias has to do with factors, of which we may be aware or unaware, that influence our observed results (Figure 1).

The only way to fully remove confounding bias is by randomisation, hence the enhanced validity of randomised clinical trials (RCTs). For many topics, though, and in much of the medical literature, RCTs are either unavailable or inadequate. Thus, we are forced to examine and interpret observations, and, as a result, we need to fully appreciate and understand confounding bias.

The confounding factor is associated with the exposure (or what we think is the cause) and leads to the result. The real cause is the confounding factor; the apparent cause, which we observe, is a bystander. An example of this relationship is the statement: coffee causes cancer. Even though large epidemiological studies show that those who drink coffee are more likely to have cancer (2), this results from the fact that coffee drinkers are more likely to be cigarette smokers (3), and it is the smoking that causes cancer in those persons. Coffee is the apparent cause, whereas cigarette smoking, the real cause, is the confounding factor.

Confounding bias, as described here, should be distinguished from effect modification, where the putative confounding factor is not the only cause, but rather a contributor, along with the exposure being studied (4). For instance, cigarette smoking causes deep venous thrombosis in women who take oral contraceptives. Cigarette smoking by itself does not cause much deep venous thrombosis, nor do oral contraceptives alone; however, each carries some risk of that outcome, and together the risk is greatly enhanced. This is effect modification. The coffee example is different because coffee carries no increased risk of cancer and is thus an exposure without any real influence on that outcome. Cigarette smoking is the real and only cause, and thus a confounding factor.
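A small simulation can make the coffee and smoking illustration tangible. In this hedged sketch (all prevalences and risks are invented), smoking drives both coffee drinking and cancer, while coffee itself has no effect; the crude comparison nevertheless shows coffee drinkers at roughly twice the risk, and the association vanishes once the analysis is stratified by smoking:

```python
import random

random.seed(0)
n = 100_000
rows = []
for _ in range(n):
    smoker = random.random() < 0.30
    # Smokers are more likely to drink coffee (the exposure under study)...
    coffee = random.random() < (0.70 if smoker else 0.30)
    # ...and it is smoking, not coffee, that raises cancer risk.
    cancer = random.random() < (0.10 if smoker else 0.01)
    rows.append((coffee, smoker, cancer))

def risk(group):
    return sum(cancer for _, _, cancer in group) / len(group)

crude_rr = risk([r for r in rows if r[0]]) / risk([r for r in rows if not r[0]])
print(f"Crude RR, coffee vs no coffee: {crude_rr:.2f}")   # about 2.3

for s, label in ((True, "smokers"), (False, "non-smokers")):
    stratum = [r for r in rows if r[1] == s]
    rr = risk([r for r in stratum if r[0]]) / risk([r for r in stratum if not r[0]])
    print(f"RR within {label}: {rr:.2f}")                  # about 1.0
```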

This is the lesson of confounding bias: we cannot believe our eyes. Or, perhaps more accurately, we cannot be sure when our observations are right and when they are wrong. Sometimes they are one or the other but, more often than not, observation is wrong rather than right because of the high prevalence of confounding factors in the world of medical care.

Confounding bias is handled either by preventing it through randomisation in study design, or by removing it through regression models in data analysis. Neither option is guaranteed to remove all confounding bias from a study, but randomisation comes much closer to being definitive than regression (or any other statistical analysis): one can better prevent confounding bias than remove it after the fact (1).
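As a complementary sketch (again with invented numbers), the simulation below assigns the exposure by coin flip instead of letting it track smoking. Randomisation balances the unmeasured confounder across arms, so the crude comparison of a truly inert treatment correctly comes out near a risk ratio of 1 without any statistical adjustment:

```python
import random

random.seed(1)
n = 100_000
rows = []
for _ in range(n):
    smoker = random.random() < 0.30        # confounder, never measured
    treated = random.random() < 0.50       # assigned by coin flip, not by smoking
    event = random.random() < (0.10 if smoker else 0.01)  # treatment is inert
    rows.append((treated, smoker, event))

treated = [r for r in rows if r[0]]
control = [r for r in rows if not r[0]]

def rate(arm):
    return sum(event for _, _, event in arm) / len(arm)

# Randomisation balances smoking across the two arms...
for label, arm in (("treated", treated), ("control", control)):
    print(f"Smoker proportion, {label}: {sum(s for _, s, _ in arm) / len(arm):.2f}")

# ...so the unadjusted risk ratio is about 1.0, as it should be.
print(f"Crude RR, treated vs control: {rate(treated) / rate(control):.2f}")
```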

Statistics in drug advertising: what they reveal is suggestive, what they hide is vital
J. Lexchin Int J Clin Pract Volume 64 Issue 8, Pages 1015 - 1018

Pharmaceutical companies frequently (mis)use statistics in their promotional material to present a biased picture of the value of their products. This article reviews some of the common practices that companies engage in. In promotion to doctors, companies make claims about statistical significance that are not justified by the cited references; they misuse or omit confidence intervals and discussions of power; they use graphs and charts with design features that lead to visual overestimation or underestimation of metrics; and they present benefits as relative risk reductions instead of absolute risk reductions (ARR). In direct-to-consumer advertisements, they rarely use tables or charts, preferring instead to present benefits and risks in narrative form; and, as with doctors, they rarely discuss ARR.

The population attributable fraction and confounding: buyer beware
D. F. Williamson Int J Clin Pract Volume 64 Issue 8, Pages 1019 - 1023

In 1997, my colleagues and I estimated the fraction of new cases of diabetes in the United States population attributable to a 10-year weight gain of ≥ 5 kg (1). To estimate this population attributable fraction (PAF), we used a formula that multiplied just two quantities: (i) diabetes 'relative risk' (RR) – the probability of developing diabetes in those who gained ≥ 5 kg divided by the probability in those who gained < 5 kg and (ii) the proportion that gained ≥ 5 kg. We estimated that 27% of new cases of diabetes in the United States were attributable to gaining ≥ 5 kg.

Unfortunately, we got the wrong answer. The correct answer is 21%; we over-estimated the PAF by nearly 30%. What did we do wrong? We followed a well-established, but incorrect, tradition of putting adjusted RRs in the crude (unadjusted) PAF formula. The crude formula is only appropriate when the impact of exposure (i.e. ≥ 5 kg gain) on the development of disease (i.e. diabetes) is not confounded by other factors, i.e. the crude RR accurately estimates the exposure–disease relationship. We ignored the fact that our RR was adjusted for confounders, i.e. factors correlated with weight gain and associated with developing diabetes independent of weight gain.

We had adjusted for age, gender, race, education, smoking status, cholesterol, blood pressure, antihypertensive medication, body mass index and alcohol consumption. Our error attributed too many cases of diabetes to weight gain, when some of these cases were attributable to the factors we had adjusted for. We could have used an alternative PAF formula that fully adjusts for confounding factors by using the proportion of cases exposed to ≥ 5 kg gain, instead of the proportion of the total sample exposed (cases + non-cases). Ironically, we reported all of the information needed to estimate the PAF correctly in the study itself (1, tables 2 and 4) (see Appendix).
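To illustrate the arithmetic, here is a hedged sketch with an invented two-stratum cohort (not the data from the 1997 study): within each stratum the exposure exactly doubles risk, so the adjusted RR is 2. Putting that adjusted RR into the crude formula, PAF = p(RR − 1)/[1 + p(RR − 1)] with p the proportion of the whole population exposed (often attributed to Levin), overstates the PAF, whereas the case-based formula, PAF = pc(RR − 1)/RR with pc the proportion of cases exposed (attributed to Miettinen), recovers the true value:

```python
# Invented two-stratum cohort; a confounder shifts baseline risk between strata,
# while within each stratum the exposure exactly doubles risk (adjusted RR = 2).
strata = [
    # (n_exposed, risk_exposed, n_unexposed, risk_unexposed)
    (3000, 0.10, 1000, 0.05),   # low-baseline-risk stratum
    (1000, 0.20, 3000, 0.10),   # high-baseline-risk stratum
]

cases_exp = sum(ne * re_ for ne, re_, _, _ in strata)     # 300 + 200 = 500
cases_unexp = sum(nu * ru for _, _, nu, ru in strata)     # 50 + 300 = 350
cases = cases_exp + cases_unexp                           # 850
n_exp = sum(ne for ne, _, _, _ in strata)                 # 4000
n_total = n_exp + sum(nu for _, _, nu, _ in strata)       # 8000

adj_rr = 2.0                     # stratum-specific (adjusted) relative risk
p_pop = n_exp / n_total          # proportion of the whole population exposed
p_case = cases_exp / cases       # proportion of the cases exposed

# Crude (Levin-type) formula -- only valid with the *crude* RR:
paf_crude_formula = p_pop * (adj_rr - 1) / (1 + p_pop * (adj_rr - 1))
# Case-based (Miettinen-type) formula -- valid with the adjusted RR:
paf_case_formula = p_case * (adj_rr - 1) / adj_rr

# Ground truth: recount cases with the exposure removed, stratum by stratum.
cases_no_exposure = sum(ne * ru + nu * ru for ne, _, nu, ru in strata)  # 600
paf_true = (cases - cases_no_exposure) / cases

print(f"Crude formula with adjusted RR: {paf_crude_formula:.1%}")  # 33.3% (too high)
print(f"Case-based formula:             {paf_case_formula:.1%}")   # 29.4%
print(f"True PAF:                       {paf_true:.1%}")           # 29.4%
```

The direction and size of the discrepancy depend on the confounding structure; in this example, as in the diabetes analysis, the crude formula fed an adjusted RR overstates the attributable fraction.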

Our error of putting adjusted RRs in the crude formula has been referred to as 'Probably the most common error…' associated with the PAF (2, p. 16). One review estimated that at least one in four published studies makes this mistake (3). A simulation study that put adjusted RRs in the crude PAF formula yielded results that '…were severely biased in most situations' (4, p. 2087). In a recent analysis, this error was shown to over-estimate United States mortality attributable to obesity by > 100,000 deaths (5).

Although this error has been discussed in the technical literature, that discussion is not easily accessible to clinicians. In this commentary, I try to provide a relatively non-technical understanding of the issue. If epidemiologists continue to make this error, journal readers will at least be alert to it and its implications.

Guest Editor Profile

Leslie Citrome, MD, MPH

Leslie Citrome, MD, MPH, is Director of the Clinical Research and Evaluation Facility at the Nathan S. Kline Institute for Psychiatric Research in Orangeburg, New York, and Professor of Psychiatry at the New York University School of Medicine. He is an associate editor for IJCP, heading the psychiatry section. Les has a special interest in the philosophy and tools of evidence-based medicine.
