Deaton Dispels the Magic of Randomized Controlled Trials at USC Lecture

Simply thinking, and thinking creatively, is still the most important element of scientific inquiry and discovery, said Nobel laureate Professor Sir Angus Deaton at a lecture hosted by the USC Dornsife Center for Self-Report Science, the Center for Economic and Social Research, and the Schaeffer Center for Health Policy & Economics. It worries him, he said, that this seems to get lost when randomized controlled trials are in the mix.

The lecture, his second at USC since he was named a USC Presidential Professor, explored randomized controlled trials and their gold-standard reputation across a variety of fields. “The preference for randomized trials has spread beyond trialists to the general public and the media, which typically reports favorably on them. They are seen as accurate, objective, and largely independent of ‘expert’ knowledge that is often regarded as manipulable, politically biased, or otherwise suspect,” wrote Deaton in a paper he coauthored, from which he drew the findings for his talk.

Their reputation as a foolproof study design has led randomized controlled trial results to be overstated, with exaggerated claims of strength and precision. In his lecture, Deaton identified the major misunderstandings that plague researchers and the broader scientific community in interpreting RCTs: incorrect assumptions about the balance of treatment and control groups, which lead to overstated precision and misread standard errors, and unreasonable extrapolation of results to broader populations.

A common mistake is assuming that randomization within a trial will sufficiently balance the control and treatment groups. But the power of randomization comes from averaging over numerous trials, not from any one trial. In a single trial, the groups may look nothing alike. This assumption has consequences: overstated precision in the findings, inadequate attention to the problem of outliers, and misunderstandings of standard errors. Simply put, “precision is about balance,” said Deaton, and there is nothing inherent in an RCT that ensures this balance.
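Deaton's point about balance can be illustrated with a small simulation (a hypothetical sketch, not an example from the lecture; the covariate, sample size, and parameters are invented for illustration). A single random split of subjects can leave a baseline characteristic visibly imbalanced between the two groups, while the imbalance averaged over many re-randomizations is close to zero: randomization balances in expectation, not in any one trial.

```python
import random
import statistics

random.seed(0)

# Hypothetical baseline covariate (say, prior test scores) for 20 subjects.
scores = [random.gauss(50, 10) for _ in range(20)]

def one_randomization(scores):
    """Randomly split subjects into equal treatment and control groups;
    return the gap in mean baseline score between the two groups."""
    idx = list(range(len(scores)))
    random.shuffle(idx)
    half = len(scores) // 2
    treat = [scores[i] for i in idx[:half]]
    control = [scores[i] for i in idx[half:]]
    return statistics.mean(treat) - statistics.mean(control)

# A single trial can leave the groups noticeably imbalanced...
single_gap = one_randomization(scores)

# ...but averaged over many hypothetical re-randomizations of the same
# subjects, the imbalance is close to zero.
gaps = [one_randomization(scores) for _ in range(10_000)]
avg_gap = statistics.mean(gaps)

print(f"baseline imbalance in one trial:        {single_gap:+.2f}")
print(f"average imbalance over 10,000 splits:   {avg_gap:+.3f}")
```

With only ten subjects per arm, individual splits routinely produce gaps of several points in either direction, which is exactly the sense in which, in a single trial, "the groups may look nothing alike."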

Finally, the results of an RCT have to be integrated into a larger body of knowledge. And, though it is tempting, simple extrapolation isn’t the right way to think about it; context is necessary. Consider a simple experiment in Kenya showing that deworming children improves educational outcomes, said Deaton. Obviously, this doesn’t apply to a child performing poorly in school in Britain. It is easy to laugh at the absurdity of transporting these results, he said. But somewhere along the line, in this case somewhere between Britain and Kenya, extrapolating results from a trial stops working. The critical question is at what point that happens.

These flaws in the way the scientific community has come to think about RCTs deserve attention, said Deaton, especially given the large resources expended on, and the relatively small sample sizes usually associated with, such trials.

“We need to stop expecting [RCTs] to deliver magical solutions,” said Deaton. “Don’t expect these things to deliver what they can’t.”