What’s the deal with modafinil?

The following is from a blog post we wrote recently for The Guardian. You can find the original here.

A great deal of excitement has been generated in recent weeks by a review paper examining the literature on the drug modafinil, which concluded that “modafinil may well deserve the title of the first well-validated pharmaceutical ‘nootropic’ [cognitive enhancing] agent”. Coverage in the Guardian, the Telegraph, the British Medical Journal, and the Independent called attention to the work, with a press release from Oxford University trumpeting “Review of ‘smart drug’ shows modafinil does enhance cognition”.

The paper in question is a well-written summary of the recent literature (although it probably underestimates side effects, as pointed out in the British Medical Journal). A deeper problem is that reviews do not “show” anything. Reviews can be educational and informative, but that’s not the same as using all of the available data to test whether something works or not. Two different scientists can write reviews on the same topic and come to completely different conclusions. You can think of a review as a watercolour painting of current knowledge. We sometimes forget that this is a far cry from a technical drawing, in which each element is measured, quantified, and bears a strict resemblance to reality.

How do we know what works?

Scientists, and the public, trying to figure out what works face a tricky problem: there will often be many papers on a given topic, offering a variety of sometimes conflicting conclusions. Fortunately, we have a well-developed toolkit for assessing the state of the current literature and drawing conclusions from it. This procedure is called meta-analysis; it combines the available sources of data (e.g., published studies), and is extensively used to assess the efficacy of medical interventions. Initiatives such as the Cochrane Collaboration use meta-analyses to synthesize available evidence into a consensus on what works and what doesn’t.
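
To make the machinery less abstract, here is a minimal sketch of the core calculation behind a fixed-effect meta-analysis: each study’s effect size is weighted by the inverse of its variance, so large, precise studies count for more than small, noisy ones. The numbers here are invented for illustration, not taken from the modafinil literature.

```python
import math

def fixed_effect_meta(effects, variances):
    """Inverse-variance weighted (fixed-effect) pooled estimate.

    effects:   per-study effect sizes (e.g. standardised mean differences)
    variances: per-study sampling variances of those effects
    Returns the pooled effect and its 95% confidence interval.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Four hypothetical studies with conflicting conclusions.
effects = [0.45, 0.10, 0.30, -0.05]
variances = [0.04, 0.01, 0.02, 0.03]

pooled, (lo, hi) = fixed_effect_meta(effects, variances)
print(f"pooled effect = {pooled:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Real meta-analyses go much further – random-effects models, heterogeneity statistics, publication-bias checks – but the weighting principle is the same: the conclusion is driven by the quality and quantity of the evidence, not by whichever study the reviewer found most memorable.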

Meta-analyses, importantly, consider effect sizes: not just whether something works, but how much difference it actually makes. This is of huge importance when considering the use of a drug to enhance cognition in everyday life. Few would spend hundreds of pounds a month to improve their IQ test scores by 1%; many would for a 10% enhancement.
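
To see why, consider the standardised mean difference (Cohen’s d), the workhorse effect size in this literature: the gap between treated and untreated groups, expressed in units of the test’s natural spread. A toy calculation of our own, on an invented IQ-style scale:

```python
def cohens_d(mean_treated, mean_control, sd):
    """Standardised mean difference: the group gap in units of the score's SD."""
    return (mean_treated - mean_control) / sd

SD = 15.0  # typical spread of an IQ-style test

# A 1% score gain barely registers; a 10% gain is a substantial effect.
print(f"1% gain:  d = {cohens_d(101, 100, SD):.2f}")  # ~0.07, trivial
print(f"10% gain: d = {cohens_d(110, 100, SD):.2f}")  # ~0.67, medium-to-large
```

A drug whose pooled d hovers near zero can still be “statistically significant” in a large enough sample while making no difference anyone would pay for.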

The clinical community has learned time and time again the dangers of relying upon subjective assessments of drug efficacy. For instance, the drug digoxin has been treated warily by clinicians for the past 20 years following a slew of observational studies that concluded that it increased mortality. A randomised controlled trial shed light on why: doctors preferentially gave the drug to sicker patients, producing a false link between mortality and the drug itself. Since the patients given digoxin were more likely to die in the first place, the fact that more of them died does not mean that digoxin failed to help them. Perhaps even more worryingly, there are also examples where qualitative summaries suggest a drug is helpful when it is in fact harmful. We have to be careful not to fall into a similar trap with neuroenhancement.
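
A small simulation of our own – invented numbers, not the digoxin data – shows how this kind of confounding by indication manufactures a spurious link: give a completely inert drug preferentially to sicker patients, and raw mortality still looks far worse in the treated group.

```python
import random

random.seed(1)

patients = []
for _ in range(100_000):
    severity = random.random()                       # 0 = healthy, 1 = very sick
    treated = random.random() < severity             # sicker patients more likely to get the drug
    died = random.random() < 0.05 + 0.30 * severity  # death depends on severity alone
    patients.append((treated, died))

def mortality(group):
    return sum(died for _, died in group) / len(group)

treated_group = [p for p in patients if p[0]]
untreated_group = [p for p in patients if not p[0]]
print(f"mortality with drug:    {mortality(treated_group):.1%}")    # ~25%
print(f"mortality without drug: {mortality(untreated_group):.1%}")  # ~15%
```

The drug has no causal effect here, yet an observational comparison makes it look lethal. Randomisation breaks the link between sickness and who gets the drug, which is exactly what the digoxin trial did.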

Communication and neuroenhancement

Sometimes science communication is about making your subject sexy and interesting. Sometimes your subject is so sexy and interesting already that the opposite approach is required. The temptation to water down or oversimplify science is actively harmful in this case – an overly emphatic statement that elicits a wry smile from a colleague could cause an enthusiast to start taking off-label medication. The Royal Society’s motto, “take nobody’s word for it”, is a laudable principle for scientists to live by. The public, however, do not have this luxury: not everybody has been trained to critically appraise the data upon which assertions are made. As scientists working in the field, we must be exquisitely careful to avoid hype around topics that hold such broad media and public appeal, and where the incentives are so heavily weighted towards positive findings. Bias enters every stage of the scientific process, and its cumulative effect across primary research, knowledge synthesis, and science reporting can produce headlines that bear a convoluted relationship to the truth.

It is always tempting for scientists to blame the media when a claim made in a paper is over-emphasised in the public domain. It’s true that scientists are frequently misquoted, but it’s also true that we are human, and we love being written about. Even a professor enjoys their mum sending them a newspaper clipping about their research. Those of us working in the field need to be extremely careful not to over-egg the pudding when describing our work, an increasingly acknowledged problem when discussing, for example, whether brain stimulation has any beneficial effects. Articles about neuroenhancement are frequently taken by enthusiasts as an operating guide to their enhancer of choice, or used by companies to sell drugs. Basic researchers finding themselves unwittingly guru-ised need to realise this, and fast.

Should I take modafinil?

In our opinion, the jury is still out. We’re not convinced it works, the consequences of long-term use are uncertain, and its side effects may have been underreported. Exercising more and avoiding sleep deprivation are both options with a much greater weight of evidence behind their cognitive benefits. It’s possible that modafinil will prove to be “the first well-validated pharmaceutical ‘nootropic’ [cognitive enhancing] agent”. But it isn’t yet.

Archy de Berker is a PhD student in neuroscience at University College London
Sven Bestmann is a Reader in motor neuroscience at University College London