A decade ago, two academics published that rare thing: a book that went on to influence public policy. Both authors were well placed to be heard at the highest levels of government: Cass Sunstein, a legal theorist, would soon serve in the Obama administration, while Richard Thaler later became a Nobel laureate. In Nudge, they urged the application of our growing knowledge of how the mind works to guiding people to make better choices. People routinely choose suboptimally: we can nudge them to make better choices.
Nudges are supposed to work by taking advantage of our unconscious biases. For that reason, they’re highly controversial. Many critics see them as manipulative, or disrespectful of us as rational beings. I think those critics are wrong. But the defenders of nudges are wrong too. Both sides share a mistaken view of the nature of our minds. Reject that view and we’ll come to see that nudging is not necessarily manipulative, or disrespectful, or threatening to our autonomy. We’ll also see that nudging has to be done appropriately in order to be respectful. That’s not because there’s anything special about nudging: it’s just a fact that follows from the nature of our rationality.
Before I get to the controversy and my response to it, we need to lay out how proponents and opponents of nudging understand the nature of nudges.
Nudges are conceived by Sunstein and Thaler as taking advantage of our irrational biases, and turning them to our benefit. They mention a range of cognitive biases, of the sort influentially identified by Daniel Kahneman (another Nobel laureate, with whom Thaler once worked). Together, these biases are supposed to make us predictably irrational (as Dan Ariely entitled a book about these dispositions).
For example, we are subject to the confirmation bias: a disposition to look for evidence that confirms a hypothesis and overlook evidence against it. We are subject to the availability heuristic, a disposition to overemphasise evidence or possibilities that come easily to mind. We are subject to various ballot order effects, centrally a disposition to prefer candidates whose name appears at the top of the ballot over those lower down. We discount the future to a greater extent than is rational, and therefore don’t save enough for retirement even when we agree we should. We are subject to framing effects, assessing identical options differently depending on whether they’re presented as losses or foregone gains. Sunstein and Thaler present a compelling case that these kinds of biases lead us to choose irrationally, in important areas of our lives.
If our biases and irrationalities lead to us choosing badly, they argue, the solution is to better design the contexts in which we decide so that we choose better. For instance, if we’re presented with a default option, then we tend to stick with it (just as we tend to prefer the candidate whose name appears at the top of the ballot). So to ensure that people choose better, we should make the option that is actually in most people’s interests the default. For example, we can make a prudent savings option the default when people sign a job contract, taking advantage of their cognitive laziness to ensure that they have more comfortable retirements. Similarly, while people are unwilling to sacrifice any of their current income to save more, they are willing to sign up to plans that will increase their rates of saving when they receive a pay increase. Knowing the details of just how we are irrational allows us to take advantage of these quirks of our minds, to nudge us towards better options.
Thaler and Sunstein call their program “libertarian paternalism”. It is paternalistic, because it aims to influence our choices, for our own good. It manipulates people into choosing well. But it is libertarian because the manipulation is – they argue – easily avoided. It is not coercive. Options are not restricted (to choose a savings plan other than the default, for example, all you have to do is tick a box). It doesn’t even place a burden on our behaviour, in the way that (for example) the use of taxation to encourage the right choices does.
Critics have not been convinced by the “libertarian” credentials of nudges. They argue that nudges are too manipulative to respect our autonomy. We treat one another as autonomous agents when we give reasons to one another, they point out. We act autonomously – we govern ourselves – when we choose for reasons. Nudges bypass our capacity to engage in reasoning, because they work by taking advantage of our heuristics and biases: the (allegedly) irrational mechanisms that make up so much of our minds. For many critics, because nudges subvert autonomy, it would be wrong for governments to use them to change behaviour. Nudging might increase our welfare, but at the cost of treating us as things to be manipulated, not agents to be addressed.
Thaler and Sunstein, and their supporters, have a variety of responses to this criticism. They point out that nudges can be transparent and democratically endorsed: people may vote for a program of nudges. They point out that nudging may actually improve our reasoning, by making us more consistent over time. They also maintain that, given the pervasiveness of the kind of influences that nudges build on, we can’t avoid nudging. Consider the finding that people tend to prefer food choices presented at eye level. We can take advantage of that disposition to bring people to eat healthier diets. Or we can refuse to nudge, allowing corporations that aim to sell as much junk food as possible to nudge in our place, or leaving it to chance which food items are at eye level and therefore tend to be selected. Whatever we do or refrain from doing, people’s agency is bypassed and they are influenced. We might as well nudge, because there is no alternative to manipulating people.
I used to think that this kind of response was the right one. I thought we had to get used to thinking of ourselves as partially rational agents and partially irrational objects to be manipulated, by ourselves if not by others. We can’t fully respect rational agency, I thought, because we’re not sufficiently rational for that. I now think that’s the wrong response to worries about nudging. We should, instead, reject certain features of the account of cognitive architecture shared by proponents of nudges and their opponents. Once we do that, we can respond to the worries much more powerfully.
Proponents and opponents of nudges share the dual process account of mind that was influentially defended by Kahneman. According to that account, the mind is composed of two kinds of mechanisms. Our biases are representative of one type, which consists of mechanisms that are fast, automatic in their operation, unconscious, and sensitive to only a narrow range of information. They therefore have characteristic limitations. The other type consists of our capacity for slow, effortful and conscious reflection. It is what enables us to do science and philosophy. Type one processes produce rapid responses, with which we are usually satisfied. Type two thinking allows us to override these responses, to assess them in the light of reason.
Genuine reasons can only be addressed to reflective mechanisms. Nudges, however, work by bypassing reasoning: they are addressed instead to type one – irrational – processes. What is distinctive about us, what makes us rational animals, is our capacity for genuine reasoning; that’s why addressing arguments to us is maximally respectful. The debate between proponents and opponents of nudges is largely a debate over whether bypassing reasoning by addressing type one mechanisms is permissible.
I think we should abandon a central feature of the account of cognitive architecture both sides share. Whether due to genetic evolution or cultural evolution, we do seem to exhibit the kinds of dispositions that dual process theorists have emphasised. We really do seem pervasively subject to the confirmation bias, the availability heuristic, and so on. But we should deny that these mechanisms are irrational components of our minds. Rather, we should see them as reasoning mechanisms.
Reasoning is, very roughly, the capacity to respond to information in an appropriate way. A reasoning mechanism takes information fed to it (through the senses or from other mechanisms) and transforms it or responds to it in a way that matches its actual reason-giving force. Paradigmatically, it infers conclusions from the information (the pavement is wet, therefore it has rained), but as Aristotle noted, it may issue in an action instead of a thought. Reflective processes are impressive reasoning mechanisms, but so are type one mechanisms.
Why do dual process theorists think system one mechanisms are irrational? There seem to be two reasons. First, they point to the limitations of these mechanisms. One reason we have these mechanisms at all is that some challenges are too important or too urgent to leave to slow and demanding reflection. If there is some possibility that the long thin object lying in my path is a snake, I should take evasive action, not spend the time required to check whether it’s really a stick. Type two reasoning would allow me to come to a justified belief about whether it’s a stick or a snake, but I might be dead before then, so instead I react, guided by inflexible and limited, but very rapid, type one mechanisms.
The second reason type one processes are seen as irrational is that there may be a mismatch between the environment for which these mechanisms are designed (by evolution) and the contemporary world. It might have been rational, say, to prefer immediate reward in the environment of the Palaeolithic, but today we should save for retirement. Those of us in developed countries live in a world in which food is abundant and lifespans are long, but our irrational minds have not caught up with these developments.
Even in the contemporary world, however, these mechanisms guide us in ways that track reasons much more often than both sides seem to recognise. Take our preference for the default option. Is that just a reflection of our intellectual laziness? Probably not: there is evidence that we take the default to be an implicit recommendation. Similarly, we respond as we do to frames because the choice of frame itself conveys information, and both those who choose frames and those who respond to them are implicitly aware of this. This is what explains the ballot order effect, too: in many situations the first option is more likely to be the best option, and that’s why people prefer it (it is important to emphasise that the effects are weak: it is only those who are more or less indifferent between political candidates, or ignorant about the options, who are swayed by the effect). The confirmation bias is not so irrational, either: rather, it is probably an adaptation for group deliberation.
Is it really irrational to respond to the ambiguous object as if it’s a snake, without waiting to check? Surely not. Ah, the dual process theorist will say, but there aren’t many snakes around in London or New York. Maybe type one mechanisms were reasoning mechanisms once, but they’re not now. They don’t match our environment; in the contemporary world, they’re not reasoning mechanisms. They have a point: given the wrong inputs, type one mechanisms do a bad job at reasoning. But that’s just as true of reflection.
There are lots of situations in which type one processes are more accurate than reflection. For example, expert handball players pick worse options when they have time to think than when they are required to respond immediately. Similarly, when people are encouraged to think about which option to choose from among several, and there are multiple competing factors to weigh, they tend to pick worse options than when they must choose without deliberation. Being asked to think about one’s reasons also lowers the accuracy of predictions about the outcomes of basketball matches, compared with being asked simply to report one’s gut feelings. In the right domains, we do better to rely on the gut than on reflection: it is not just quicker and less demanding of effort, it is actually more accurate.
The fact that a mechanism can lead us badly astray when the inputs don’t match those it is designed to handle doesn’t show it’s not a reasoning mechanism. That’s just as true of reflection as of type one processes. We can mislead people by giving them arguments, at least as effectively as we can mislead them by nudging them. We can lie to them, or present data in a format they can’t easily process (we understand statistical data much better when it is presented as frequencies – e.g. out of every 1000 people who are exposed to this chemical, 4 will get cancer – than when it is presented as percentages). Even the classical syllogism, our paradigm of rational argument, may mislead: people tend to judge syllogisms as valid when the conclusion is true, even when they’re invalid (“All elephants have big ears; Dumbo has big ears; therefore Dumbo is an elephant”). People respond to arguments in predictable ways, as conmen know all too well. We don’t and shouldn’t conclude from that fact that presenting arguments is disrespectful or manipulative in general. It may be disrespectful or manipulative when it is done in a way that is designed to mislead – that is, when the inputs into the reasoning mechanism don’t match the kind of inputs it’s designed to process – but not in general.
Perhaps we have tended to think that type one thinking isn’t really reasoning because type two gets to do the philosophy and the psychology, and it doesn’t have access to the workings of type one. It, understandably, overvalues itself. More generally, we tend to identify ourselves with consciousness. But that’s a mistake. Nudging doesn’t bypass our agency (as John Doris, in his recent book Talking to Our Selves thinks). The nonconscious processes that respond to nudges and to features of our context are partially constitutive of our agency. As Daniel Dennett has said, in a related context, “You are not out of the loop; you are the loop”.
Reflective thinking – type two mechanisms – and the type one mechanisms that underlie our intuitions and our (so-called) biases should be thought of as on a par. They’re both reasoning mechanisms. Their purpose is to produce appropriate responses to information. They can both mislead, if the inputs they’re presented with are misleading or not the kind they’re designed to handle. We can manipulate people by giving them arguments (most obviously by lying, but also by misleading them using only true statements, or by taking advantage of the limitations of reflective thought), and we can manipulate people by nudging them without giving explicit reasons. Both are disrespectful, both threaten autonomy. But they’re not disrespectful because of the kind of mechanism they address.
We need to nudge appropriately, just as we need to argue appropriately: respectfully, by giving reasons (of the kind the mechanisms are designed to process). Nudging is a way of giving reasons to agents, not bypassing their reasoning or their agency.