Sometime last year I agreed that a research group could call me to participate in Alzheimer’s research. I’m not entirely sure what I was thinking: I’m not interested in trying experimental drugs. However: you never know, and so it was that I got a phone call a few months later asking me to come in for a buccal swab DNA test.
There is a lot they don’t know about Alzheimer’s, but one thing they do know is that there is a connection to the APOE gene, and there are three variants: E2 is protective, E3 is neutral, and E4 increases the risk of developing the disease. People have these in pairs — one from each parent. Mostly, I wanted to know what my genetic makeup showed. No need for suspense: E3-E3. I couldn’t qualify for that particular trial, even if I had wanted to.
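For what it's worth, the pairing arithmetic is easy to enumerate: one allele from each parent, order irrelevant, gives six possible genotypes. A throwaway Python sketch, purely illustrative:

```python
from itertools import combinations_with_replacement

# One APOE allele from each parent; order doesn't matter, so there
# are six possible genotypes. E3-E3 (mine) is one of them.
alleles = ["E2", "E3", "E4"]
genotypes = ["-".join(pair) for pair in combinations_with_replacement(alleles, 2)]
print(genotypes)
# ['E2-E2', 'E2-E3', 'E2-E4', 'E3-E3', 'E3-E4', 'E4-E4']
```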
But the more interesting question about this occurred to me while listening to a talk delivered by Ben Goldacre, who summarised the work of the COMPare project to eliminate mistakes and fraud in medical papers. In this project, Goldacre and colleagues examined each trial as it was published in any of a handful of the most prestigious medical journals, and compared the outcomes reported in the paper with the outcomes the researchers had registered as their intentions at the outset. When there was a discrepancy, they sent a letter to the journal pointing it out.
The problem this catches is known as "outcome switching", a phenomenon familiar to skeptics from the history of research in parapsychology. As a simple example, say you're studying whether a given subject can predict the symbol that will appear on the next Zener card you turn over. You run a batch of trials, and it turns out that your subject does no better than chance at predicting Zener cards overall, but scattered among those trials are a few where the subject did really well, because randomness works that way. You publish the successful ones and drop the rest. Or, in the case of medicine, you discover that the drug you're testing on a sample representative of the general population shows no benefit across the whole sample, but looks good if you look only at teenagers and elderly women, and so you publish that subgroup result as if it were representative. Bad hoodoo!
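If you want to see how reliably randomness manufactures "stars", here's a quick Python sketch (every parameter invented for illustration): a hundred subjects guess at Zener cards, nobody has any real ability, and a handful still post impressive scores.

```python
import random

# Illustrative only: 100 subjects each guess 25 Zener cards.
# Five symbols, so chance accuracy is 1/5; nobody here has any
# real ability, yet selective reporting makes a few look psychic.
random.seed(1)
CHANCE = 1 / 5
N_SUBJECTS, N_CARDS = 100, 25

scores = [sum(random.random() < CHANCE for _ in range(N_CARDS))
          for _ in range(N_SUBJECTS)]

expected = CHANCE * N_CARDS  # 5 correct guesses on average
print(f"expected score by chance: {expected:.0f}/{N_CARDS}")
print(f"best subject's score:     {max(scores)}/{N_CARDS}")
print(f"subjects scoring >= 9:    {sum(s >= 9 for s in scores)}")
# Publishing only the top scorers and dropping the rest is exactly
# the selective reporting that outcome switching enables.
```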
So, it occurred to me to wonder: if you are a prospective participant, how do you determine that the trial you're considering joining will be conducted with appropriate scientific rigour? It's bad enough to take on the risk of swallowing an experimental drug every day (in the trial I was called for, every day for eight years); how much worse to find out later that you were part of a trial that published false results and effectively defrauded the public.
I don’t know what the answer is to this. The lab rat really isn’t in a position to dictate terms because at the beginning everything looks fine. In the case of the trial I was called for, the initial stages included reading a bunch of stuff, then watching a video (“most people enjoy this”, the research assistant told me when I started arguing with the video about the perkiness of the presenter), and then answering a bunch of questions to show you’ve understood what you’ve been told. (All right, the video did have one new fact: one copy of E4 increases your risk by about 25%; two copies increase it by about 55%.) The point of all this is to ensure that each patient is given the same set of instructions, verbatim, and also to ensure that each patient’s consent really is informed.
The uncertain aspect I was focused on was the hypothesis that amyloid plaques, the protein deposits found in the damaged brains of people with Alzheimer's, are a cause of the disease rather than an effect. The drug being tested is supposed to interfere with their formation. It could turn out that the drug interferes just fine with the formation of amyloid plaques but does nothing to halt the progressive cognitive loss.
But the far bigger danger is that sometime much later, after tens, maybe hundreds, of millions of dollars have been spent, and despite the hard work of the nice, honourable principal investigator I met, someone along the line will cherry-pick the patients or switch the outcomes in order to wring profits out of this experimental drug. Goldacre suggested a fix: medical researchers should write and publish, in advance, the computer code they're going to use to analyse the results. I suppose a patient entering a trial could ask: have you done this? And, now, we can at least search the internet for evidence of past mistakes.
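I can only guess at what such published analysis code might look like; here is a minimal sketch in Python (the outcome name, column layout, and choice of test are all my assumptions, nothing from any real trial). The point is that the primary outcome and the statistical test are pinned down in public before the data exist, so swapping them later leaves a visible discrepancy anyone can check.

```python
"""Pre-registered analysis plan: published BEFORE the trial unblinds.

The primary outcome, the trial arms, and the test are fixed here;
any deviation in the eventual paper is then visible to anyone who
compares the two. (All names and the file layout are illustrative.)
"""
import csv
from scipy.stats import ttest_ind  # Welch's t-test

PRIMARY_OUTCOME = "cognitive_score_change"  # declared once, up front
GROUP_COLUMN = "arm"                        # "drug" or "placebo"

def load_outcomes(path):
    """Read the per-patient primary outcome, split by trial arm."""
    groups = {"drug": [], "placebo": []}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            groups[row[GROUP_COLUMN]].append(float(row[PRIMARY_OUTCOME]))
    return groups

def primary_analysis(path):
    """The one and only pre-specified comparison."""
    g = load_outcomes(path)
    stat, p = ttest_ind(g["drug"], g["placebo"], equal_var=False)
    return {"outcome": PRIMARY_OUTCOME, "t": stat, "p": p,
            "n_drug": len(g["drug"]), "n_placebo": len(g["placebo"])}

if __name__ == "__main__":
    print(primary_analysis("trial_results.csv"))
```

A sketch like this does nothing to stop deliberate fraud on its own, but it turns "we changed our minds about the analysis" from an invisible decision into a public, datable one.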
Ultimately, trials depend on the willingness and trust of the participants. Randomised controlled trials are still the gold standard of medical evidence, so researchers need us. I don't think it ever occurs to those who commit statistical errors or perpetrate outright fraud that they're poisoning their own research pool.