Technologies such as nanotechnology, synthetic biology, and information technology often give rise to heated, emotional public debates. In such debates, a typical pattern can be observed: society is alarmed and worried about the risks and uncertain negative consequences of such new technologies, whereas experts assure us that the risks are negligible. The experts say that the worries of the public are due to a lack of understanding. They emphasise that a dialogue with the public is impossible, as it is supposedly ill-informed and so emotional about risks that it won’t listen to rational, objective, scientific information. Policy makers usually respond to this in one of two ways: they ignore the emotions of the public, or they take them as a reason to prohibit or restrict a technology. Let me call these the technocratic pitfall and the populist pitfall, respectively.
The technocratic pitfall ignores the emotions and concerns of the public in favour of the formal risk analyses of experts. This happens frequently in public policy about risky technologies, and it fuels the concerns of the public, who feel that they are not being heard. The populist pitfall, on the other hand, takes the emotions and concerns of the public as a given, as the endpoint of deliberation, and bows to these concerns without further debate. This happened in Germany after the accident at the Japanese Fukushima Daiichi nuclear power plant in March 2011: without any public debate, the German government immediately decided to stop its nuclear energy programme. In both pitfalls there is no genuine debate about emotions, concerns and moral values. This means that both pitfalls actually fall short of genuine democratic decision making, which should involve explicit deliberation about moral values.
The technocratic and populist pitfalls occur repeatedly in public debates concerning nuclear energy, cloning, genetic modification, shale gas extraction, carbon capture and storage, and vaccination, to mention just a few of many recent, hotly debated, controversial technological developments, which have typically led to stalemates between proponents and opponents. Such stalemates may seem unavoidable, at least as long as we take it for granted that emotions are irrational and impenetrable by rational information. However, there are developments in the psychological and philosophical study of emotions that can shed an entirely new light on these issues. But let us first take a look at the state of the art in psychological and philosophical research on risk and decision making under uncertainty.
Social scientists, psychologists and philosophers have argued against the technocratic approach for decades. They have pointed out that risk is more than a quantitative, scientific notion. Risk is more than the probability of an unwanted effect that we can assess with cost-benefit analysis, as conventional, technocratic approaches take it to be. Risk concerns the wellbeing of human beings and involves ethical considerations such as fairness, equity and autonomy. There is a strong consensus amongst risk scholars that ethical considerations should be included in risk assessment. Interestingly, as we know from the influential work of the psychologist Paul Slovic, for example in his book The Perception of Risk, these considerations do figure in the risk perceptions of laypeople. Apparently, the pre-theoretical connotations that people have with risk include ethical considerations that are excluded from the quantitatively oriented approach to risk used by experts. Several risk scholars have argued that laypeople have a different, but equally legitimate, rationality from that of experts.
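For concreteness, the narrow quantitative notion that these scholars criticise can be sketched as follows; this is a standard textbook formalisation of expected harm, not a formula taken from any of the works cited here:

\[
  R \;=\; \sum_{i} p_i \, c_i
\]

where \(p_i\) is the probability of unwanted outcome \(i\) and \(c_i\) the magnitude of its consequences, expressed in some common unit. Cost-benefit analysis then deems a technology acceptable when its expected benefits outweigh this expected harm. Notice that everything the surrounding paragraphs are concerned with (fairness, equity, autonomy, the voluntariness of exposure) is absent from the two variables \(p\) and \(c\).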
However, it has become more and more clear that laypeople’s risk perceptions are largely influenced by their emotions. Social scientists struggle with how to deal with this, as they take emotions to be irrational, which seems to undermine the idea that laypeople might employ an alternative, legitimate rationality. For example, in his work on risk emotions, Paul Slovic distinguishes between “risk as feeling”, which has to be corrected by “risk as analysis”, thereby re-invoking a technocratic approach after all. This leads to the following puzzle: research on risk perception seems to indicate that laypeople provide important insights into ethical aspects of risk, whereas research on risk emotions seems to undermine this, because laypeople’s risk perceptions are emotional and hence supposedly irrational.
It seems to be a platitude that reason and emotion are opposing faculties: reason provides us with objective, rational information about the world, while emotion provides us with basic survival mechanisms but no reliable knowledge about the world. This is by and large the picture endorsed by many scholars who study emotional responses to risk and uncertainty.
In the empirical literature on decision making under uncertainty, this approach is called Dual Process Theory. Nobel Prize winner Daniel Kahneman has recently popularised this approach in his bestseller Thinking, Fast and Slow. According to Dual Process Theory, there are two systems through which we apprehend reality. System 1 is intuitive, affective and provides us with unreflective gut reactions; System 2 is rational, deliberative and analytical. From the standpoint of evolution, System 1 is older. It provides us with fast heuristics, or rules of thumb, that help us navigate smoothly through a complex world, but this comes at the price of occasional unreliability.
System 2 provides us with reliable knowledge, but its operations require a lot of effort and attention and are time-consuming. So there seems to be a trade-off between the fast but occasionally unreliable System 1 and the reliable but slow System 2. Emotions are generally seen as the work of System 1, while rational reflection falls under the purview of System 2.
This approach could be seen to justify the two pitfalls. Given the unreliability of emotions, we should avoid them and endorse the conclusions of System 2; this leads to the technocratic approach, and it is the line taken by Cass Sunstein in his book Laws of Fear. Alternatively, some scholars have argued that because we live in a democratic society, we should follow the emotional concerns of the public even if they are irrational. This would mean endorsing the operations of System 1, but it would lead to populism, because it leaves no hope for genuine democratic deliberation.
However, contrary to what the model of Dual Process Theory suggests, emotions are not necessarily a threat to rationality: developments in emotion scholarship challenge the dichotomy between reason and emotion. The neuropsychologist Antonio Damasio has famously shown in his book Descartes’ Error that without emotions, we cannot be practically rational. People with damage to emotion-processing areas of the brain, such as the ventromedial prefrontal cortex, lose much of their capacity to have emotions. Even though their IQ is unaffected, they are incapable of making concrete practical and moral judgements. They also lose their capacity to make reasonable risk judgements: the so-called Iowa gambling task shows that such patients have no inhibitions in gambling and are willing to take huge risks that others would find unacceptable. Damasio argues that emotions function as “somatic markers” with which we perceive morally and practically salient aspects of the world.
The dominant approach in emotion research these days is a so-called cognitive theory of emotions, according to which emotions are a form or source of cognition and knowledge. The psychologists Klaus Scherer and Nico Frijda (the latter for example in his book The Emotions) have developed so-called appraisal theories according to which emotions contain cognitions or appraisals with which we assess the world in an evaluative way. Philosophers of emotions have also developed theories according to which emotions are a source of moral knowledge and practical rationality. Robert Solomon in The Passions and Martha Nussbaum in Upheavals of Thought have developed accounts according to which emotions contain or even are judgements of value. These approaches indicate that there are classes of emotions that are affective and cognitive at the same time.
These ideas can shed completely new light on the role of emotions in debates about risky technologies. They can help solve an important theoretical and practical puzzle by reconciling the idea that laypeople have an alternative, valuable rationality with the fact that they are emotional. The fact that laypeople are emotional about risks does not so much show that they are irrational; rather, their emotions might be the very ground on which they are capable of including ethical considerations in their risk assessments. Rather than being opposed to rationality and hence inherently misleading, emotions can then be seen as an invaluable source of wisdom when it comes to assessing the moral acceptability of risk.
The emotions of the public can provide insight into reasonable moral considerations that should be taken into account in decisions about risky technologies; we need moral emotions in order to form well-founded insights into the moral acceptability of technological risks. Sympathy, empathy and compassion can point out unfair distributions of risks and benefits, and so contribute to our understanding of what a fair distribution would be. Indignation and resentment can point to moral transgressions such as involuntary risk impositions, and may indicate violations of autonomy in cases where we are exposed to risks against our will. Fear and anxiety can point to concerns about unforeseen negative consequences of a technology, or indicate that a technology is a threat to our well-being. Disgust can point to the ambiguous moral status of, for example, clones and human-animal hybrids. And experts might feel responsible and worried about the technologies they develop.
It is often thought that emotions are by definition anti-technology and therefore one-sided, but this is not necessarily the case. Enthusiasm for a technology, for example, can suggest benefits for our well-being. Emotions can draw our attention to important moral considerations that may otherwise be insufficiently addressed.
These insights allow for a different way of dealing with risk emotions in public debates, one that avoids the technocratic pitfall and the populist pitfall alike. I call this alternative the “emotional deliberation approach to risk”. It connects with theories of deliberative democracy that have been developed in political philosophy, according to which political decision making should be based on genuine public deliberation in which moral values are explicitly addressed, discussed and reflected upon. Such theories have also been applied to risk, by proposing public participation in decision making about risky technologies. However, they are based on a rationalist ideal of deliberation, whereas the approach proposed here argues for the importance of emotions in moral and political deliberation, in order to be sensitive to moral saliences. This offers a fruitful alternative to the two pitfalls: in the technocratic pitfall public emotions and moral concerns are ignored, while in the populist pitfall public emotions are taken as a given that makes discussion impossible. The emotional deliberation approach instead gives the public a genuine voice, in which its emotions and concerns actually get heard and discussed.
This “emotional deliberation approach to risk” has a lot of potential. It can provide us with ideas on how to communicate about risks in a morally responsible way, and moral emotions can provide important insights into moral constraints on, and desirable parameters of, technology development. In debates, for example, experts should not only focus on the small probabilities of possible risks, but should also provide a balanced outlook on possible positive and negative consequences, allowing individuals to make an informed assessment. Debates about nuclear energy, for instance, often focus primarily on meltdowns, where experts emphasise the low probability while laypeople are mainly concerned with the consequences. Given the potentially catastrophic consequences of a meltdown, it is reasonable to have an ethical discussion about this even if a meltdown is considered unlikely to occur, as the simple calculation below illustrates. In addition, debates about nuclear energy should also take into account potential consequences for future generations, given that we burden them with waste that is radiotoxic for thousands of years. Such debates should not so much be about whether or not nuclear energy is acceptable, but under which circumstances it might be acceptable. My colleagues Behnam Taebi, Ibo van de Poel and I have developed these arguments in more detail in an article called “The ethics of nuclear power: Social experiments, intergenerational justice, and emotions” (Energy Policy, 2012).
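To see why probability alone cannot settle the ethical question, consider a toy calculation; the numbers are hypothetical and serve only to make the structure of the problem visible. Under the expected-value definition of risk sketched earlier,

\[
  \underbrace{10^{-6} \times 10^{6}\ \text{deaths}}_{\text{rare catastrophe}}
  \;=\;
  \underbrace{1 \times 1\ \text{death}}_{\text{certain single fatality}}
  \;=\; 1\ \text{expected death},
\]

a one-in-a-million chance of a catastrophe killing a million people comes out as exactly equivalent to the certainty of a single fatality. Morally and politically, however, these two prospects are far from equivalent: the first raises questions about catastrophe, irreversibility and the distribution of harm across generations that the single number on the right-hand side cannot express. This is precisely the kind of consideration that emotions such as fear may track, and that a purely quantitative debate flattens.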
In addition, a discussion about the moral acceptability of energy technologies should not take place in isolation, but should be embedded in a broader outlook on what we as a society consider a morally desirable energy mix. This also involves considering the risks of other forms of energy. Given that at this point most possible sources of energy are contested, appealing to emotions such as feelings of responsibility can also help create awareness that using controversial sources of energy may be unavoidable if we are not prepared to adjust our lifestyle.
Involving emotions in deliberation and communication about risks can also contribute to much-needed changes in behaviour. For example, appealing to emotions in campaigns about climate change can instil the currently lacking “sense of urgency” and at the same time provide motivation for environmentally friendly behaviour, as emotions are a predominant source of motivation.
On this approach, debates about risky technologies include the emotions and moral concerns that have to be taken into account in order to come to a well-grounded ethical assessment. Involving the public in an open and respectful way, in which emotions are explicitly addressed and acknowledged, can result in morally better decision making. It is morally better in a procedural sense, because it is more genuinely democratic: participants are treated with full respect, rather than having their contributions dismissed as irrational or based on a lack of knowledge, which is what happens in the two pitfalls whenever laypeople respond emotionally. But the decision making is also morally better in a substantive sense, concerning the outcomes, as acknowledging and explicitly addressing people’s emotions leaves more room to address the moral concerns that underlie them.
At the same time, this approach can help overcome the gap between experts and laypeople that occurs over and over again in debates about risky technologies. If the public feels that it is taken seriously, it will also be more willing to make concessions. Risk communication should not be understood as the marketing of a certain point of view. Rather, it should be a symmetrical process in which all parties have a genuine voice, and in which emotions are listened to. This increases the prospect that all parties are willing to give and take, which can eventually allow for genuine consensus rather than a merely pragmatic compromise. Endorsing my proposed “emotional deliberation approach” to risk avoids both the technocratic and the populist pitfalls, and makes decision making about risk and uncertainty more genuinely democratic.