While the skeptics began as a debunking group focused on presenting rational explanations of paranormal claims, Sense about Science campaigns to “challenge the misrepresentation of science and evidence in public life”. Obviously, there’s some overlap.
Among other activities, Sense about Science runs workshops for young scientists. At a recent event there were three panels: journalists, policy makers, and university PR people (I appeared on the journalists panel). The young scientists asked the kinds of questions you might expect from a group of highly dedicated people with careers to make: how do you get policy makers to consider your evidence? How do you get journalists interested in your story? Beforehand, I was asked to discuss questions like, how do you write for different audiences? How do you work with experts? How do you explain technical information?
That last question turned out to be a minefield. There are two obvious dangers in approaching any technical subject. The first is that you overwhelm your audience with information. In the case of one of my favourites, public key cryptography, the important fact for most people is that it protects their credit card and other sensitive information in transit from their computer to that of the online retailer (or government site, etc) they’re using. If your audience is a group of anxious activists heading to work in a location where hostile state actors will snoop on their email and use the contents against them, you add to that: setting up the software involves generating a complementary pair of keys that lock and unlock each other’s encryption, keeping the private key secret is crucial, and so on. Only a handful of cryptanalysts really want details of the algorithm and the source code to scrutinise for vulnerabilities.
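That complementary-pair idea can even be shown in a few lines of code. What follows is a toy sketch, not real cryptography: the numbers come from the standard textbook RSA example and are absurdly small, and real systems add padding and use vetted libraries with keys hundreds of digits long. But it does show how one key undoes what the other does, and why the private half has to stay secret.

```python
# Toy RSA, for illustration only: the arithmetic is real,
# the key sizes are laughably insecure.

p, q = 61, 53            # two (tiny) secret primes
n = p * q                # 3233: the public modulus
phi = (p - 1) * (q - 1)  # 3120: used to derive the private exponent

e = 17                   # public exponent: (e, n) is the public key
d = pow(e, -1, phi)      # 2753: private exponent; (d, n) is the private key

message = 65                       # a message encoded as a number < n
ciphertext = pow(message, e, n)    # anyone can encrypt with the public key
decrypted = pow(ciphertext, d, n)  # only the private-key holder can reverse it

print(ciphertext, decrypted)       # 2790 65
```

Anyone who knows the public pair (e, n) can lock a message; unlocking it requires d, and recovering d from the public numbers means factoring n — easy here, infeasible at real key sizes.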
A cognitive neuroscientist was much more bothered by the opposite problem: oversimplification. How do you explain something very complicated without misrepresenting it entirely, doing worse damage than if you’d stayed silent?
I suggested analogy. In public key cryptography, the field’s own nomenclature is already there: keys lock and unlock doors, keys encrypt and decrypt. Vulnerabilities planted in the software by untrustworthy interlopers are “back doors”. The field’s own words give outsiders important clues about how it all works. Someone writing about it can then expand upon and enhance that initial analogy to explain how cryptography fits into the grander scheme of computer security. It helps that computers themselves have long been designed using metaphor to help people understand how to use them: our machines have “desktops” and “folders”, and email has a little postal letter icon. As time goes on, the background that produced the metaphor fades, leaving the icons and analogies meaninglessly dangling. Familiarity grants them new associations.
But he was shaking his head before I got that far. Analogy doesn’t work, he said, because then people think the brain works like a computer, an idea long discredited by Elizabeth Loftus. Computers spit back out whatever you give them (hence “garbage in, garbage out”). Human memory, as Loftus and others have shown, is malleable. Sometimes you have garbage in, fascinating stories out. Sometimes you have facts in, garbage out. And sometimes the facts seem to go in and yet … my mind’s a blank.
Well, OK, I said. The analogy to a computer is wrong. But there must be better ones. I suggested Silly Putty, a clay-like toy I remember from my childhood; Wikipedia tells me it’s made of silicone polymers. In any event, what I had in mind was one of its unusual characteristics: it would pick up ink when pressed on a page of newsprint. The image would stay for a while, but as you manipulated the putty it would slowly break up and disperse through the silicone blob. It wasn’t a perfect analogy, I thought, but it might be a start, if you could assume some way of reassembling some of the dispersed parts of the image.
Nope, he said. Absolutely wrong. So wrong as to be worse than not explaining at all. This was exactly why he had abandoned the idea of using analogy, because the analogies were so misleading as to leave people with completely misguided ideas about how memory works. Or, rather, memories. There are 12 kinds of memory, did I know that?
Well, I knew there was more than one. Offhand, I could think of short-term and long-term memory and muscle memory (which I guess is more properly called procedural memory; musicians and athletes have good reason to be hyper-aware of this one, since it’s what you’re building through all those hours of practice). Looking it up online, I see there’s also sensory memory and various subcategories of implicit and explicit memory.
I plead that he didn’t specify. And in the absence of specification I assumed that in his original question he meant “memory” as most people use it, to mean remembering things they’ve experienced or been told. The main point, though, is that I didn’t manage to solve his problem, even though I wanted to.