Christopher Knight was arrested in Maine in 2013 for the theft of modest amounts of alcohol and candy. Unusually for a petty thief, his case received a certain amount of media interest. Journalists stalked him in prison, ignoring his repeated pleas for solitude. Ironically, it was his yearning for solitude that drew them to him: this man had lived alone, in a tent, in a forest near a summer camp, for the previous three decades. He’d stolen from his neighbours, yet the owners of the nearby cabins had barely detected his presence. Some noticed items were disappearing but were unable to explain it; some resorted to blaming ghosts.
The appeal of Knight’s story was twofold. First, there was the fascination of someone living authentically as a hermit in 2013. For thirty years he hadn’t spoken, or seen a human face. Those who met him afterwards described him as “inexpressive”; he himself once remarked on how uncanny the experience of seeing a human face had become to him.
Even more than this, people were interested in the insight they thought he could offer. With all that time to think, he must, between a sip of bottled strawberry daiquiri and a 1990s Tom Clancy book, have uncovered some profound truths about existence – right? This thought isn’t new. Hermits have always been seen as having privileged access to hidden knowledge. Their seclusion, we think, allows them to gain a better purchase on reality.
But is it really that simple? In a time of open-plan offices and enforced engagement with the wider world, it seems that hardly anyone has the time to think. Some might conclude that this is the root of our poor decisions: if only we could isolate ourselves for a few hours, we’d be able to actually “hear our thoughts”.
One possible explanation for the epistemic credibility we grant to hermits is that they seem free from mundane, earthly concerns. They aren’t supposed to have personal relationships or possessions. In fact, a much-highlighted feature of a hermit’s life seems to be poverty. But why? Can’t one bask in one’s wisdom in a secluded Jacuzzi? No: somehow that would taint the experience, because, the thought goes, wealth implies involvement with the world. We expect objectivity from hermits because they have nothing to defend; we think they have fewer biases.
But do hermits really have an edge on us when it comes to knowing? Let’s leave the romanticised world of contemplative loners and get clinical with some studies on reasoning. According to the psychologist Hugo Mercier’s theory of reasoning, we are lazy and biased arguers. Lazy, because we don’t spend much time or effort producing high-quality arguments to support our claims. Biased, because we have a strong tendency towards “confirmation bias”: when new information is introduced, reasoners tend to reabsorb it into their worldview, thereby confirming what they already thought was true. We are ill-equipped to reason on our own; when we do, we end up with overly conservative conclusions, an inflated sense of self, and a feeling of moral superiority.
Moreover, Mercier claims that the only remedy is exposure to opposing views. Why? Because we evolved to work together, and we make better decisions together. The mechanism is clever, and rests on the fact that we are more objective and demanding when assessing other people’s arguments than when assessing our own.
Consider a dialogue. The first person to speak proposes an argument; the opponent responds with a counterargument, the strongest they can think of, drawn from a wide range of possible responses. Assuming a cooperative environment in which the aim is to reach a conclusion together, it doesn’t make much sense for the interlocutors to try to anticipate what the other will say. It is more cognitively economical simply to wait and address the specific objection brought to the table. Anything else would be a poor use of brainpower.
Each participant throws arguments at the other and sees what sticks. Since everyone’s a critic, the weakest arguments die at the hands of the opponent, and by the end of the debate only the most persuasive are left, which Mercier assumes have a good chance of also being correct. That is, the more people interact and weigh up each other’s arguments, the better the outcome.
A hermit, however, is denied these epistemic resources.
But it’s not all smooth sailing for the proponent of collaborative reasoning. As many have noticed, since Plato at least, groups can go wrong in a lot of ways. Irving Janis pins this down with the concept of “groupthink”. He argues that the need for cohesion can lead to bad decisions. What’s more, groups are easily enticed by charismatic leaders and suspicious of outsiders. Delusions of moral supremacy, owed to a sense of collective power, have been the “justification” for a catalogue of heinous actions.
Groups’ disturbing features are well known in political settings. Groupthink has been observed, for instance, in cults, squads, boardrooms and gangs, but also in whole nations: Nazi Germany is of course the most disquieting example.
It is also true, however, that the endeavours we humans are proudest of are mostly collective. Science is fundamentally collaborative. Of course, individual scientists sometimes work on their own, but any significant achievement would be impossible without building on colleagues’ work and forming research groups. It’s very unusual to see an article in a peer-reviewed science journal authored by just one person.
True, things appear to be different for philosophers. It is rare to see an “et al.” in philosophical citations. Yet philosophy is also a profoundly collective enterprise: the best-quality work comes from constant discussion with other philosophers, especially when fierce, yet respectful, criticism is involved. Whenever an argument is made in the wild, reactions will help to refine the original proposal. Theories are like pebbles, getting smoother and smoother by rubbing against each other.
What shall we make of all this? Are groups cruel, irrational monsters or healthy, epistemologically rich communities? My answer: it depends.
Let’s go back to groupthink. According to Janis’s original formulation, groupthink flourishes in a specific, intellectually claustrophobic habitat. The group needs to be homogeneous, to hold unity as a value to begin with, and to feel powerful, often backed up by what group members take to be unquestionable moral authority. Strong leadership needs to keep hold of the situation. The group is protected from outside information that might introduce doubt. Opposition is banned. Outsiders are seen in a negative, stereotyped way. If such a group is in a position to make a decision, that decision is likely to be irrational. This kind of group simmers in a broth of its own confirmation bias: only information that supports its values and worldview is examined.
Essentially, a groupthink-prone group doesn’t act like a group at all, from an argumentative perspective: it acts, in fact, as an epistemic hermit. In a homogeneous group, members mirror each other’s beliefs, letting their preconceived opinions grow stronger and stronger. To build an epistemically virtuous group, members should be encouraged to express their own points of view: groupthink is the result of the suppression of intellectual diversity.
What, then, is the ideal setting for arriving at truthful conclusions and sensible decisions? Disagreements are useful, and for genuine disagreement the minimum number of people is two. One person starts, and her opponent comes up with counterarguments. But more people are really needed, because one of the two contenders might simply be a better orator, or introduce false information into the debate in order to win. Manipulation, in other words, is a risk. So, to reduce manipulation attempts, it helps to add more people to our theoretical model who can call out the offender, bring her back to the rules of the game, or challenge factual claims.
Interestingly, fostering disagreement, encouraging accountability and introducing expert opinion are among the common antidotes to groupthink: for the suffocating atmosphere of closed groups, we should substitute a well-oxygenated environment where outside influences and accurate information help us avoid the pitfalls of bad reasoning. It is not merely a quantitative matter of having a plurality of points of view; it’s a matter of quality. Attitude is important: participants need to oppose each other in the reasoning arena while being aware that, from the system’s viewpoint, they are cooperating. They have a common end goal, which they can reach by challenging each other fairly.
Hermits deliberately leave the playing field. Rather than revering them as fonts of truth, it might be better to think that their isolation earns them undeserved admiration.
But how should we apply all this to our own case? Are we in trouble? We tend to hang out with people who share our opinions. One context in which this mechanism is becoming more apparent is online: when we search the internet, or open our Facebook timeline, everything has been selected for us by algorithms that deepen the effects of confirmation bias by showing us results that already agree with us. Since a very loose form of peer review is the main fact-checking mechanism online, false information is becoming a serious problem. We don’t have the energy to argue all the time, or perhaps don’t feel like it. What should we do about the conclusions pushed on us online? Greater exposure to a variety of opinions might be the answer. Perhaps, if we were more used to debating cooperatively, we would be at ease with opposing viewpoints, which in turn would foster a more breathable epistemic atmosphere.
If we assume that we come to better decisions, and more accurate knowledge, when we act within an epistemically healthy and diverse community, then Facebook, Google and other major players are isolating us, reducing us, in the worst case, to misinformed, biased hermits. I see this as perpetrating a form of epistemic injustice – a concept we owe to Miranda Fricker, which means, as she puts it, “harming people in their capacity as knowers”. The same applies to the censorship of certain points of view in universities. As long as contradicting a claim remains an option, people are used to arguing and thinking critically about what they hear, factual claims are checked as well as possible, and unusual views can be freely expressed, we stand a chance of flexing our arguing muscles and keeping them fit.
Arguing is a moral duty, an antidote to a form of injustice; living in an environment that encourages argument should be a right. We recognise a right to free speech, which is a right to share one’s opinion; I think we should defend its mirror image – the right to know the opinions of other people too.