The free will debate is one of the oldest in philosophy and is considered by many still to be one of the most intractable. Hume thought he knew why. He believed that whenever a dispute persists for very long without resolution, “we may presume that there is some ambiguity in the expression, and that the disputants affix different ideas to the terms employed in the controversy.” And so he thought that, once the ambiguity was resolved, all people of good sense would see they had nothing to disagree about.
Two hundred years later, when P F Strawson had his stab at the problem, the disagreements were as wide as ever, and unlike Hume, Strawson was under no illusion that he would resolve them. “This lecture is intended as a move towards reconciliation,” he said at the beginning of his classic 1962 essay “Freedom and Resentment,” “so it is likely to seem wrongheaded to everyone.”
There’s a lot still to be said for Hume’s diagnosis of the problem, and for his solution. Hume argued that there was no contradiction between accepting that human beings are fully part of nature, their actions subject to the same laws of cause and effect as anything else, and believing that we have free will. Free will is not some magical power to escape the necessity of nature but a capacity to make choices free from coercion.
Hume’s thesis was the parent of a family of similar positions collectively known as compatibilism. The PhilPapers Survey, which has canvassed the views of thousands of philosophers, shows it to be the majority position today. But it commands far from unanimous support. So why is agreement elusive?
Perhaps the most fundamental reason why the free will debate never ends is that many see the compatibilist version of free will as a “watered-down” version of the real thing, as Robert Kane puts it. Others dismiss compatibilist accounts of free will in less temperate terms. For Sam Harris, it amounts to nothing more than the assertion “A puppet is free as long as he loves his strings.” Kant called it a “wretched subterfuge,” James a “quagmire of evasion” and Wallace Matson “the most flabbergasting instance of the fallacy of changing the subject to be encountered anywhere in the complete history of sophistry.” For many, the free will which compatibilism offers is never as attractive as what they set out to look for, and so we are caught between settling for what we can get and holding out for the elusive ideal.
Personally, I cannot make sense of the supposedly commonsense alternative to compatibilist free will. Even if I could, I don’t see why we should argue about which version of free will is the real thing. We might simply have more than one notion of free will, in which case the right question to ask would not be “Do we have free will?” but “What sort of free will do we have?”
Even if compatibilist free will is not what most people understand by the term, there is no reason why the right understanding of free will must be exactly the same as the commonsense one. Conceptions can evolve, and it is unreasonable to insist that anyone who proposes an alteration is simply changing the subject. “Imagine a discussion with someone in the fourteenth century articulating a pre-chemical theory of water,” says Manuel Vargas. “It would strike us as unreasonable if such a person were to declare: ‘Either our pre-chemical theory of water will be vindicated by natural philosophy, or we will have watered down the meaning of water!’”
Vargas suggests that “we might have free will but it might be different than we tend to suppose.” Accordingly, he advocates a “revisionist” conception of free will, which he says “can be considered a species of compatibilism” which is “a replacement and upgrade of commonsense.” As he wittily puts it, this “revisionist free will is even better than the real thing, for on my view it has the comparative advantage of existing.”
That might be true, but for many it isn’t good enough. Daniel Dennett explained to me why he thought some remain unpersuaded, using one of his favourite quotes, from the magician Lee Siegel: “I’m writing a book on magic, I explain, and I’m asked, Real magic? By real magic people mean miracles, thaumaturgical acts, and supernatural powers. No, I answer, conjuring tricks, not real magic. Real magic, in other words, refers to the magic that is not real, while the magic that is real, that can actually be done, is not real magic.”
“For many people, if free will isn’t real magic, then it’s not real,” says Dennett. They will not settle for any account in which it turns out to be plainly, boringly real: nothing more than a function of lumps of physical matter obeying the laws of physics. “Because both free will and consciousness have been inflated in the popular imagination, and in the philosophical imagination, this is a big deal. Anybody who has a view of either one which is chopped down to size, this is, as Kant said, a ‘wretched subterfuge’. Well, it’s not the overwhelming, supercalifragilisticexpialidocious phenomenon that you thought it was, but it’s still real.”
The resistance of many to what I see as realistic accounts of free will seems to me to be no more than a kind of philosophical table-thumping, an insistence that “it’s just not the same!” But how can it be that an account which seems sensible to many informed experts looks to others as though it completely misses the point? Philosophy has generally shied away from using personality to explain theoretical differences, but in doing so I think it has skated over an important but embarrassing truth: no one simply goes “wherever the argument, like a wind, carries us,” as Plato described the philosophical ideal. Everyone is, at least in part, led by their dispositions.
Free will sceptic Saul Smilansky expressed something along these lines when he told me that the issue is complicated, “partly because it’s philosophy and partly because philosophers are human beings and they come from different places and have different values. Even if there is agreement about different notions of free will, some philosophers will be – I think there’s no other word – temperamentally inclined to set the bar high and therefore say that there is no free will, and others set the bar lower and say obviously there is free will, and some people like me will say it’s complex and we have various bars. I’m even inclined to think that to some extent some people have an optimistic or pessimistic temperament and therefore they tailor the bar that they intuitively feel will satisfy them.”
Smilansky is speculating about optimism and pessimism. But one study has come up with some empirical evidence that extraversion and introversion are correlated with beliefs about free will, concluding that “extraversion predicts, to a significant extent, those who have compatibilist versus incompatibilist intuitions.”
Many are appalled by this idea, as it goes against the whole notion that philosophy is about arguments, not arguers. But you only need to read the biographies and autobiographies of great philosophers to see that their personalities are intimately tied up with their ideas. W V O Quine, for instance, recalled how as a toddler he sought the unfamiliar way home, which he interpreted as reflecting “the thrill of discovery in theoretical science: the reduction of the unfamiliar to the familiar.” Later, he was obsessed with crossing state lines and national borders, ticking each off on a list as he did so. Paul Feyerabend recalled how, not yet ten, he was enchanted by magic and mystery and captivated by “the many strange events that seemed to make up our world.” Only a philosopher with delusions of her subject’s objectivity would be surprised to find that Quine and Feyerabend went on to write very different kinds of philosophy: Quine’s in a formal, logical, systematising tradition (though typically on the limits of such formalisations); Feyerabend’s anti-reductive and anti-systematising. It would take a great deal of faith in the objectivity of philosophy and philosophers to think that the two arrived at their respective positions simply by following the arguments where they led, when their inclinations so obviously seem to be in tune with their settled conclusions.
Smilansky is sanguine about what this means for philosophy, believing that the only way forward is simply for people to follow their own paths. “In a way it’s like there used to be this notion of my station and my duties. You cannot be somebody else, you can try to understand people with different views but in the end maybe the most productive thing is that you be obsessive and try to develop your position in the best way possible and then see what happens, whether it seems plausible to other people and what objections they have to it.”
There is more to this than simply how personality affects the positions we adopt. It is also about what attitude we take to those positions. Two people may agree exactly on the best way to describe the kind of freedom we have, but whereas one will take the attitude “that’s good enough,” the other will not be satisfied. I sometimes call this the problem of intonation. One person says, calmly, “human freedom consists in nothing more than the capacity to make choices and guide action in accordance with one’s settled, reflective beliefs and desires.” Another says the same thing, but an octave higher, with incredulity, the sentence ending in an alarmed exclamation mark, the “nothing more” a condemnation rather than a statement of fact.
What we see here is an emotional as well as a purely intellectual element in people’s philosophical judgements. “That’s really an under-reflected upon feature of philosophy, and I think there’s a good reason for it,” says Dennett. “It’s dangerous and even verges on the offensive to draw attention to the emotional stake that philosophers often have and betray in their argumentation. But that doesn’t mean it’s not there. I see it a lot. I see what I think is white-knuckled fear driving people to defend views that are not really well-motivated, but they want to dig the moat a little further out than is defensible because they’re afraid of the thin end of the wedge. I think that fear of the slippery slope motivates a lot of going for absolutes that just don’t exist.” And it’s because people have different temperaments and personalities that on an issue like free will, where there are no killer facts to settle the debate, disagreements will continue until the end of time.
There is one more thing about free will which might explain why no one theory of it commands universal assent: it may not be a truly universal idea at all, but one particular to modern Western culture. Even within the West, scholars like Michael Frede have argued compellingly that the Ancient Greeks lacked the modern idea of free will, even though their ideas of responsibility appear to be very similar to ours. And there is some evidence that differences in ideas of responsibility and freedom around the world today may be greater still than those the West has seen over its own history.
The key area of difference appears to hinge on the relationship between free will and responsibility. Tamler Sommers argues that the Western idea of responsibility rests on what he calls a “robust control condition: in order to be genuinely blameworthy for a state of affairs, you must have played an active role in bringing it about.” Indeed, the need for a control condition would seem to be obvious. How could you be held responsible for something you did not cause to happen?
However, “like other intuitions and beliefs about moral responsibility,” says Sommers, “it is not nearly as universal as we might think.” He gives as an example the reaction of the Korean community in America to the shootings by Seung-Hui Cho at Virginia Tech in April 2007. Cho killed 32 people and injured 17 others before committing suicide, in what was then the deadliest massacre by a lone gunman in US history. The reaction of Hong Sung Pyo, a 65-year-old textile executive in Seoul, was typical of many Koreans: “We don’t expect Koreans to shoot people, so we feel very ashamed and also worried.” It was this sense of shame which led the South Korean ambassador to the US to fast for 32 days, one for each of the victims.
Many Americans were baffled by this, but every expert on South Korea approached by a newspaper, television programme or magazine had the same explanation. “It’s a notion of collective responsibility,” said Mike Breen, author of The Koreans. “I can smell a collective sense of guilt,” said Lim Jie-Hyun, a history professor at Hanyang University in Seoul. “There is confusion [in Korea] between individual responsibility and national responsibility.” As Sommers concludes, “Koreans did not merely feel shame for the act of the Virginia Tech killer, they felt responsible. They wished to apologise and atone for the act.”
The psychologist Richard Nisbett has assembled an impressive array of evidence which suggests that deep cultural differences like these do actually change the way people think. In particular, the very idea of who performs an action differs across cultures. “For Westerners,” writes Nisbett, “it is the self that does the acting; for Easterners, action is something that is undertaken in concert with others or that is the consequence of the self operating in a field of forces.” This means Easterners have a sense of “collective agency” largely absent in the West. Given that, it should not be surprising that there is not the same emphasis on a control condition in the East as in the West.
Korean culture is not the only one that does not require a control condition for responsibility. Sommers quotes the anthropologist Joseph Henrich, who says it is “common knowledge among anthropologists that in most small-scale societies you can be blamed for actions you don’t intend to do.” In several ancient myths, the control condition is even more conspicuously absent. Gods manipulate how people will act and then hold them responsible for what they do. Hence God told Moses that when he visited the Pharaoh, “I will harden his heart so that he will not let the people go.” In Greek mythology, Agamemnon was compelled to murder his daughter because Zeus sent Ate to confound his wits. But, says Sommers, “In spite of the constraint and the manipulation, Clytemnestra and the Chorus (in Aeschylus’s version) hold Agamemnon morally responsible for the act.” This judgement “is not illogical,” but “it is counterintuitive from a contemporary Western perspective.”
Of course, it does not follow from this that notions of responsibility which lack a control condition are just as good as those which have one. Saul Smilansky responds to these kinds of cases with a robust defence of the superiority of modern, Western ideas about personal responsibility. “The fundamental idea that control is a condition for responsibility and therefore for desert and just punishment, I think is a discovery,” he says, the roots of which can be found in the Bible’s principle that “you do not take the sins of the fathers upon the sons.” So “if it is true that certain cultures do not respect this fundamental moral principle then so much the worse for them, they just earn bad grades morally from my side. The idea of punishing somebody who did not commit the crime, unless there is some very strange story going on, is a barbaric practice.”
Smilansky might well be right. However, I think it would be too quick to dismiss notions of responsibility that do not rest on a robust control condition as simply primitive or misguided. A more positive way of looking at it is that responsibility is not some kind of morally basic notion, but one which is tied up with social practices. As such, it may be fitting that it appears in slightly different guises depending on what the social setting is.
In the Korean example, to use a standard anthropological distinction, the key point is that, as is often the case in South and East Asia, Korea has a shame rather than a guilt culture. In shame cultures, the emphasis is on honour and maintaining face, usually collectively as a group. In guilt cultures, the emphasis is on the individual and conscience. So, as Sommers puts it, “if agents violate norms in a shame culture but the violation is undiscovered, the agents are less likely to hold themselves responsible; agents in guilt cultures will likely hold themselves responsible whether or not the offence is discovered.”
To those who have grown up in a guilt culture, which is dominant (but not universal) in Christendom, shame cultures can appear bizarre. But it’s not difficult to see how the guilt culture, taken to its logical conclusion, has absurdities of its own. Given what we know about the importance of nature and nurture, for example, isn’t it actually unreasonable to hold the individual, and the individual alone, responsible for all the bad things they do? The control condition sounds sensible, but no one can completely satisfy it, since we are simply not in control of everything that makes us who we are, and hence of what we do.
Looked at in that light, what makes shame cultures different may not be that they lack the control condition; rather, they may attribute control to something wider than the individual. Koreans shared responsibility for Seung-Hui Cho’s violence because they accepted that he was a product of their culture and not simply an atomised individual who acted in a vacuum.
Shame and guilt cultures may not be two alternative models but ends of a spectrum. And it could be healthy not to be far out at either pole. In the British Parliament, for example, there used to be a convention of ministerial responsibility, in which ministers would resign over a major failure in their departments, whether it was their fault or not. This is an example of people taking personal responsibility in a way that does not make sense on a highly individualistic guilt model. But perhaps that is precisely why it was a good convention, the erosion of which has not helped improve the degree to which government is held to account. A healthy society needs to see responsibility falling at different levels, collectively and individually, and acting in whatever way is appropriate to the particular case.
Given all that we have considered, it should not be surprising that philosophers, or indeed all those who philosophise, are unlikely to end their disagreements about free will any time soon. That does not mean, however, that we cannot reach a kind of peace settlement in the dispute. All parties need to accept that there is more than one notion of free will, and that simply dismissing the ones you disagree with as “not really free will” won’t do. Furthermore, many of the ideas tied up with free will, such as responsibility, also admit of wide variation, some of it cultural. We do not need to accept that all are equally good in order to accept that we cannot simply ignore or dismiss those we disagree with.
Ultimately, we might have to accept that whether we accept or deny the existence of free will is a kind of choice: do we want to describe what compatibilists say we have as free will or not? That doesn’t mean that, like Humpty Dumpty, we can mean whatever we want by “free will.” When there is a choice of how to describe something, and neither description is objectively right or wrong, the question of which description is better becomes one of value. We are no longer asking what is true and what is false but what matters most. So it is that we can think of free will in different ways, and once we’ve clarified what each way of thinking entails, we have to decide which, if any, captures what we value, or ought to value, about human freedom. And we also have to decide whether or not the kind of freedom we have merits the label “free will.”
My own judgement is that the kind of real freedom we have can rightly be called free will. Those who claim that free will is an illusion are overstating their case. However, as we know from Ancient Greece, it also seems possible to have a robust notion of human freedom and responsibility without the modern concept of free will. This makes the notion of “free will” an unusual one. The assertion of its existence is what I’d call a discretionary truth. Truth is standardly taken to be a bivalent property, meaning that a statement is either true or it isn’t. From this it follows that if you do not accept that something is false, then, unless you simply don’t know what to think, you accept that it is true. However, I think there is a class of statements which, although it is wrong to say they are false, we can equally well do without asserting to be true.
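To make the logical shape of the idea explicit, here is a rough sketch of my own (the predicates T, F and A, standing for truth, falsity and assertion, are purely illustrative labels, not standard notation):

```latex
% Bivalence: every proposition is either true or false.
\forall p \,\bigl( T(p) \lor F(p) \bigr)

% So, for anyone with a settled view on p, rejecting its falsity
% commits them to its truth:
\neg F(p) \rightarrow T(p)

% A discretionary truth is a proposition p such that T(p) holds,
% yet declining to assert p remains legitimate, because withholding
% assertion is weaker than asserting the negation:
\neg A(p) \;\not\Rightarrow\; A(\neg p)
```

The last line is the crucial one: refusing to say “we have free will” is not the same as saying “we have no free will.”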
The concept of “terrorism” might provide one such example. Some have argued that “terrorism” is an empty concept and that we should just do away with it. I think this is too strong: there is a meaningful sense of the word. Nonetheless, it is a contested one, and it is perfectly possible to describe different forms of violent action in terms of the perpetrators’ goals, methods, legitimacy and so on without using the word. You can argue about the comparative morality of bombing military targets in a way that causes some civilian casualties and of suicide bombing without ever having to use the word “terrorist” or to decide whether “terrorism” is wrong. Because of this, “x is a terrorist act or group” is a discretionary truth: you don’t need to assert it, and it may in some ways be better to avoid this kind of language, but to say it is false goes too far.
A much less serious example is a claim such as “blueberries are superfruits.” The term “superfruit” is not a particularly robust one and many nutritionists think it’s best avoided. Nonetheless, it is not completely meaningless. A superfruit is one which contains particularly high concentrations of important nutrients, and there are indeed some such fruits. So, again, although we don’t need the term and in some ways would be better off without it, saying blueberries aren’t superfruits is just false.
As these examples show, discretionary truths concern discretionary concepts: ones that have some meaning but are not needed to make sense of the world. Free will is one such concept. The claim “we have no free will” is false, because there is at least one meaningful kind of free will which we do have. However, that is not to say that we need the concept of free will, or even that it is useful. Someone who says that “free will” is too ambiguous and has too many misleading connotations may well be right. But they are not thereby saying that free will does not exist.
I sometimes find myself attracted to this view. No matter how much compatibilists insist that free will requires no magic, no unmoved mover or uncaused cause, the assumption that free will does indeed require such things stubbornly persists. Perhaps, then, it would be better to drop the term. But on balance, I think this is unnecessary and unhelpful. To cease talking about our “free will” would inevitably risk the appearance of actively denying it. Furthermore, I think the idea that human action is somehow free-floating and unconstrained is too pervasive to be dispelled by the simple act of retiring the term “free will.” So there is no short cut here. The best route forward is simply to continue to argue that we do have free will, but that it is just not what many, perhaps most, assume it to be. However, because that is a discretionary truth, some will persist in exercising their discretion and refuse to endorse it.
If we can accept what I have argued for, then I think we can accept that there are legitimate notions of free will and responsibility that can help us to think about how we can take control of our own lives and what the limits on that control are. The last word on free will may not have been written, but we do know enough to come to the most important conclusions about it.