How should the well-informed counter propaganda? I am thinking about this question in light of recent campaigns trying to sell citizens on two claims: that there is widespread voter fraud requiring legal action in the form of tighter voting restrictions; and that critical race theory (CRT), a purportedly hateful “Marxist” ideology that teaches students to hate white people and the US, is being taught at every level of our school systems and must be stopped – once again – through legislative means. As various people with expertise in these areas have pointed out, both claims rely on lies and distortions. (For instance, see NYU’s Brennan Center for Justice’s analysis of the Heritage Foundation’s voter fraud database.) Both claims have broad support among significant segments of the populace. According to a Quinnipiac poll, 77 percent of Republicans believed Donald Trump’s claim of widespread voter fraud during the election. A Pew Research Center study also found a difference in confidence about the integrity of mail-in ballots between Republicans who relied on Trump as a primary news source and those who did not, the former being less confident. With respect to CRT, an Economist/YouGov poll found that 58 percent of Americans have an unfavourable view of it.
For some, those sounding the alarm about voting integrity and CRT are offering useful information to which any rational person should pay attention. For others, they are bad faith actors disseminating dangerous propaganda that threatens to disenfranchise and further marginalise vulnerable people. The situation confronts us with at least two important questions: What distinguishes propaganda from information? And how ought we to respond to it? In what follows, I offer a brief meditation on a few problems that I believe emerge when we attempt to get a handle on the notion of propaganda.
Propaganda and information. Perhaps a reasonable way to distinguish between propaganda and information is to think of the former as biased speech in the service of some ideological goal or agenda and the latter as unbiased speech in the service of truth or justice. This kind of distinction has an intuitive appeal when we consider, for example, Brazilian President Jair Bolsonaro’s unfounded claims that voter fraud is a real concern for upcoming elections (plagiarising Donald Trump’s playbook). It does not take much to see that he is basically preparing the ground to justify not giving up power if he loses. If that is right, Bolsonaro’s claim would certainly be an instance of biased speech, in this case for personal gain.
The emphasis in this kind of characterisation is on the interests or attitude of the speaker. Whether some speech counts as information or propaganda depends on whether the speaker acts in good faith. This implies, though, that someone could not unknowingly pass along propaganda, which seems wrong. An anti-vaxxer may believe her claims about COVID vaccines causing infertility in women are true, but that does not mean she is not engaging in propagandistic speech.
Edward Bernays provides a different characterisation of propaganda as “a consistent, enduring effort to create or shape events to influence the relations of the public to an enterprise, idea or group.” Bernays focuses on propaganda’s function as a means of manipulation aimed at shaping public opinion. We witness propaganda in this sense in Chris Rufo’s campaign against CRT. Rufo believed the phrase critical race theory contained language that triggered negative associations for conservative audiences and would be useful for coalescing several issues under one banner. He routinely appears on cable news shows and writes on Twitter about the dangers of CRT, casting it as a pernicious ideology.
For Bernays, propaganda is not always a bad thing. Consider situations in which some kind of consensus is needed to move on big policy issues. If information is supposed to be the no-frills presentation of facts that leaves it to people to decide for themselves how to interpret them, which courses of action (or inaction) are best, and so forth, this will predictably lead to gridlock due to contrasting interpretations of the data. It could also result in a paralysis of thought for many, since that kind of openness can stultify decision-making. Thus, in some circumstances, having a consistent mechanism for shaping public opinion might be a good thing.
Characterisations like these tend to make propaganda extremely pervasive, perhaps even inescapable. In fact, we might believe, along with Jacques Ellul, that propaganda relies on information in order to be effective; those swayed by propaganda must first be informed to a certain degree. Ellul notes a kind of symbiotic relationship between information and propaganda: “Almost inevitably information turns into propaganda; it makes propaganda possible, feeds it, and renders it necessary.” Ellul’s characterisation paints a picture of propaganda as a kind of parasite operating on an already present factual host.
These considerations might lead to a further revision of the information vs. propaganda distinction, one that defines information in a narrower way. Instead of making bias the relevant factor, our new version identifies interpretation as the distinguishing feature. As a result, information becomes the mere display of uninterpreted facts while introducing a specific interpretation of those facts is propagandistic. This might make virtually everything an instance of propaganda since even when we think we are giving a dispassionate presentation of the facts we must make choices about which ones to include, to leave out, to emphasise, and so on.
There are a number of questions this way of thinking raises. What is the difference – if there is one – between good and bad propaganda? If this difference depends on your motives for making a claim, how can you tell you have the right ones? Is there a difference between making a false claim and making a (bad?) propagandistic one? To help us think a bit more about a distinction between good and bad propaganda, consider some of the things being said around COVID, masks, and vaccines. One notable case involves a former vice president of Pfizer, Michael Yeadon, who co-authored a petition to the European Medicines Agency (EMA) asking for it to halt COVID vaccine trials. The petitioners claimed, without evidence according to a Reuters story, that vaccines may cause infertility in women. The Reuters story goes on to note that in a survey taken in January 2021, 13 percent of unvaccinated people in the US “had heard that ‘COVID-19 vaccines have been shown to cause infertility.’” Yeadon hadn’t worked at Pfizer since 2011, but his status as a former vice president certainly lent him credibility with people already sceptical of vaccines.
Of course, officials at the EMA refuted the petition’s claims. But how could you determine that Yeadon’s claims were an instance of bad propaganda and those of the EMA an instance of good propaganda? Here one might be tempted to bring back some kind of distinction between propaganda and information, which has already proven tricky. The meandering way thorny issues continually pop up when we attempt to draw this distinction is what we might call the problem of demarcation. Though the sense that there is a distinction feels obvious, actually making it plain appears almost intractably difficult.
A second problem concerns what I will call the problem of vulnerability. Even if we cannot decisively distinguish information from propaganda, we can note that some people appear more susceptible to misinformation than others. Explanations for this susceptibility can draw on various factors, but I want to highlight two. First, ordinary working conditions make it difficult for people to find time for careful deliberation and thought. According to a 2015 report by the RAND Corporation on working conditions in the United States, 32 percent of men and 20 percent of women report working more than 48 hours per week. Further, approximately one-half of American workers report conducting work at home in their “free time.” These two points already suggest citizens generally have less time for activities like reading and analysing data.
The second factor combines with the first to exacerbate the general public’s vulnerability. As Edward Herman and Noam Chomsky point out in Manufacturing Consent, the structure of the mass media system systematises propaganda in a way that functions to keep people in the dark. Herman and Chomsky construct a propaganda model that highlights five “filters” used to “filter out the news fit to print, marginalize dissent, and allow government and private interests to get their messages across to the public.” These include the concentration of wealth and ownership of mass media firms, advertising as the primary source of income, reliance on government, business, and paid “experts” for source material, the use of “flak” to keep media in line, and the use of “anticommunism” as a control mechanism. These two factors make the public vulnerable to embracing misinformation because they help ensure limited exposure to information not heavily filtered by powerful interests.
Lastly, there is what I will refer to as the problem of trust. Since the lack of a clear demarcation between information and propaganda makes it easier to call something into question, and the material conditions of ordinary life significantly reduce the time and energy available for careful scrutiny, many are left more vulnerable to the interests and choices of an elite. This is not lost on people, leaving them with fewer reasons to trust the information coming out of various mainstream sources. According to a Gallup/Knight Foundation poll, 57 percent of Americans believe their own news sources are biased, while 69 percent express concern about bias in others’ news sources. Fifty-two percent believe reporters misrepresent facts, and 70 percent believe media ownership influences coverage. These phenomena are also affected by factors like political affiliation. This dynamic arguably diminishes trust in at least two ways. First, it reduces trust in information sources themselves, as evidenced by the poll numbers just mentioned. And second, it minimises trust between citizens. If people doubt that their fellow citizens are drawing from trustworthy information sources, this will undoubtedly erode trust in the testimony of others.
How should we respond to propaganda? Given these three problems accompanying propaganda, how then are we to respond to it? The demarcation problem threatens to leave us without a clear target, the vulnerability problem poses a challenge for audience uptake, and the trust problem immerses one in a fight over credibility. These are formidable challenges to overcome. Social media activity via comments on Twitter and Facebook, news consumption habits tracked by age, race, and political affiliation, and polarisation give one the impression that the problems of propaganda are intractable. Let’s take a moment to consider three strategies that have been used to approach them: fighting bad information with good, censorship, and aesthetic engagement.
First consider fighting bad information with good information. We saw this strategy used, for instance, by news pundits during the 2020 presidential election campaign in response to Donald Trump’s repeated claims of widespread voter fraud. This is also a tactic being used by some responding to the CRT offensive. Marc Lamont Hill and Joy Reid, for instance, have conducted segments on their respective shows while others like Sam Hoadley-Brill take to Twitter and op-ed sections in The Washington Post and The New York Times to set the record straight.
While these are admirable efforts – and perhaps even necessary ones – their efficacy is significantly limited. For one, this strategy essentially increases the volume of speech, which may compound the vulnerability and trust problems. If propaganda is effective because people’s attention is overwhelmed, contributing yet more speech is unlikely to lessen that attentional load. With respect to trust, those countering anti-CRT speech generally do so in venues untrusted by their target audience. The avid Fox News watcher is typically not in the habit of checking out what folks are saying over on MSNBC or CNN. Lastly, the demarcation problem also causes trouble, since the counter-speech often looks like propaganda itself to those being addressed.
Next, there is the censorship strategy. Think back again to 2020 and various social media outlets’ attempts to deal with pernicious speech from Trump and his supporters. Twitter and Facebook initially took up a corrective speech strategy, attaching content warnings about veracity to Trump’s posts. But after a prolonged effort and Trump’s refusal to amend his behaviour, both suspended, and ultimately banned, his accounts. An obvious effect of this effort is that it cut off a source of misinformation, but it likely had little effect on people’s attitudes. If anything, those platforms’ efforts contributed to the trust problem by making them less credible as sources of information in his supporters’ eyes.
Lastly, there is what I will call aesthetic engagement. This strategy addresses propaganda primarily through artworks, standup comedy, film and TV shows, and literature. Where deliberative exchange by way of assertions reaches its limits, expressive actions that appeal to the heart as well as the head may make more headway. Such campaigns have been prevalent. In New York state, for instance, TV ads showing neighbours talking with one another about the virtues of getting the COVID vaccine ran frequently. This strategy also motivated major art movements like the Harlem Renaissance, whose figures believed anti-Black propaganda could be successfully countered through artistic portrayals of Black life that express its essential humanity.
But once again, the returns on this kind of aesthetic investment are uncertain. The use of political humour is often touted for its persuasive potential as a means of aesthetic engagement. Studies by humour researchers have shown that humour can reduce counterargumentation (argument scrutiny), but they have not found strong, consistent evidence of its power to persuade.
Obviously, we are not obliged to follow just one of these strategies; we can combine and deploy them in ways that might be more effective given the particular circumstances. One thing these reflections show is that addressing misinformation effectively may take a multipronged approach. Not only must individuals be better informed, but our information structures must also be addressed to break up the monopoly of powerful interests.