In 1957, the market researcher James Vicary claimed he could increase sales of Coca-Cola at a movie theater by flashing the text “Drink Coca-Cola!” on screen too quickly for moviegoers to consciously register it, subconsciously implanting the suggestion instead. Despite the obvious marketing potential, the claim was later revealed to be a fabrication.
In the early 2010s, decades of research into social priming – the leveraging of subtle cues to unconsciously shape later decisions – were shown to be unreproducible and probably an artifact of motivated researchers with shoddy statistics. Social priming had been used to justify a great deal of social engineering, from small-scale “nudges” to Supreme Court decisions.
In 2018, with Britain still reeling from the unexpected Brexit vote, the British political consulting firm Cambridge Analytica ignited a controversy after it was revealed to have obtained consumer marketing profiles from Facebook. Who knows what private information could be inferred about someone from their social media habits, and what nefarious causes people could be manipulated into supporting by someone with that data? In response, Facebook and other social media companies allowed users to see and edit their marketing profiles, which turned out to be less the panopticon of the imagination and more a scattershot of low-confidence, low-resolution affinity buckets.
Around the same time, a US still reeling from the unexpected Trump election turned its attention to the YouTube algorithm and the “rabbit hole” dynamic, whereby the recommendation algorithm could amplify an innocuous click into increasingly politically extreme territory. This spawned a cottage industry of ‘misinformation studies’, i.e. narrative control, an industry later deployed in service of the Covid regime. Covid censorship turned out to be an enormous political debacle that poisoned decades of progress on vaccination, and later research found that the rabbit hole dynamic was never really that strong.
In 2002, AI alarmist Eliezer Yudkowsky published a thought experiment in which an AI intelligent enough to convince you of anything was trapped in a box. He suggested that “a [superintelligent AI] can take over a human mind through a text-only terminal,” and convince you to let it out to (what else) devastate the earth. In 2022, ChatGPT’s remarkable facility with text renewed interest in this thought experiment, along with a wave of AI “doomerism”. Isolated cases of AI psychosis began to appear, but the likelier story is that AI is not the cause, merely a new manifestation of previously existing psychoses.
What all these cases have in common is a three-part argument: (1) human behavior can be reliably steered by cues operating below conscious awareness; (2) some actor – an advertiser, a consultancy, an algorithm, an AI – is positioned to control those cues; and therefore (3) whoever controls the cues controls the behavior of the masses.
And sometimes – especially in the middle three cases – a fourth step follows: responsible institutions must take control of the information environment before the manipulators do.
No doubt this reasoning got uptake in large part because it flatters tastemakers in more ways than one. First, it is the hoi polloi who are susceptible to this sort of manipulation – manipulation which you, by knowing the argument, stand above. Second, (3) implies policy is not a matter of democratic deliberation (which you might lose), but a rather easier technocratic matter of engineering consensus.
But even serious intellectual disciplines got taken in. Behavioral economics, driven by dissatisfaction with the rational actor model, was an influential fad in the early 2000s, and 2002 Economics Nobelist Daniel Kahneman’s enormously popular book Thinking, Fast and Slow drew on a great deal of priming research that he later admitted he should have been more skeptical of.
Evolutionary psychology, similarly, leaned heavily on priming research as data to be explained. Why would humans respond to cues in these sorts of predictable ways? Maybe we can explain that by asking what a given cue would have indicated in the ancestral environment. And indeed a fixed cue-response connection is how behavior is typically modeled in evolutionary ecology.
I highlight these two fields in particular because they should have known better.
Both economics and evolutionary ecology have equilibrium as a central concept. It’s all well and good to explain things as they are, but suppose we play the tape forward. We’d like to know, are things likely to remain as they are, or is there reason to expect they have to change?
The two premises above set up what’s called, in both fields, a signaling game. Alice has a choice of messages, or signals, to send. Bob has a choice of responses, and can condition his response – or not – on the message Alice sends.
Then we play the tape – we solve for equilibrium. Suppose Bob plays a compassionate strategy of helping whenever Alice says “Wolf” with alarm, or “Mom” plaintively. This clearly creates an opportunity for Alice to take advantage of Bob.
Then we play the tape again. Bob can respond by switching to a two-strikes strategy: if Alice cried wolf a few times before and there was no wolf, ignore the signal next time. Or he could switch to a trust-but-verify strategy. But what if reputations aren’t available or reliable? What if the danger isn’t verifiable until long after the fact? If Bob isn’t able to protect himself from getting taken advantage of, his best strategy is simply to stop listening to the signals. Maybe Bob learns, or maybe Bobs go extinct from stubbornness or failure to adapt, but either way the end result is that the signal is not listened to in equilibrium.
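To make the equilibrium logic concrete, here is a minimal numerical sketch of the cry-wolf game. Everything in it – the payoff numbers, the names, the two strategies – is my own illustrative assumption, not anything taken from the literature. If Bob cannot verify cries or punish Alice, his only real lever is whether to condition on the signal at all, and once Alice’s false-alarm rate crosses a threshold, ignoring her strictly beats trusting her:

```python
# Toy cry-wolf signaling game (illustrative assumptions throughout).
# A wolf appears with probability p_wolf. Alice always cries wolf when there
# is one, and also cries wolf falsely with probability lie_rate. Bob either
# trusts every cry (paying help_cost each time he comes running) or ignores
# all cries (paying miss_cost whenever a real wolf goes unanswered).

p_wolf, help_cost, miss_cost = 0.10, 1.0, 5.0

def expected_cost(strategy: str, lie_rate: float) -> float:
    """Bob's expected cost per round under a fixed strategy."""
    p_cry = p_wolf + (1 - p_wolf) * lie_rate   # total probability Alice cries wolf
    if strategy == "trust":
        return p_cry * help_cost               # Bob answers every cry
    if strategy == "ignore":
        return p_wolf * miss_cost              # Bob eats the cost of every real wolf
    raise ValueError(strategy)

for lie_rate in (0.0, 0.2, 0.45, 0.6, 0.9):
    trust, ignore = expected_cost("trust", lie_rate), expected_cost("ignore", lie_rate)
    better = "trust" if trust < ignore else "ignore"
    print(f"lie_rate={lie_rate:.2f}  trust={trust:.3f}  ignore={ignore:.3f}  ->  {better}")
```

With these made-up numbers the crossover sits at a false-alarm rate of about 0.44. Past that point, no refinement of when to believe Alice helps unless Bob can verify or retaliate; the cry simply stops carrying information, which is the “don’t listen” equilibrium described above.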
This is what makes human language such a remarkable evolutionary achievement. In order for it to be in my interest to understand the words you speak, I have to trust you not to take advantage of me. Chimps, for example – despite having spatial intelligence comparable to humans – have no language because they will not trust the voluntary vocalizations of other chimps. Language is not a triumph of human intelligence, but of human trust. (And, of course, verification.)
My recent paper “Strategies Are Not Algorithms” uses this logic to think about how minds translate cues and perceptions into concepts and actions. A world in which minds responded to cues in the mechanical way assumed by priming theory, or subliminal messaging, would be one in which signals are reliably exploitable. And a world in which signals are reliably exploitable is one in which either (1) people put up defenses over time, or (2) communication in general is not possible.
On the one hand, the paper argues that since language does exist, we have to explain how signals can be kept reliable. For this reason, a simple cue-response mechanism cannot be an accurate description of how humans make decisions. If priming is real, language is not possible. The way I translate cues into categories (for example, deciding whether to trust a car salesman) cannot be straightforward, predictable, or rigid: it must be opaque, discontinuous, and changeable, because otherwise I would be too exploitable for it to be worthwhile to condition my behavior on the language of others at all.
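Here is an equally minimal sketch of that exploitability point, again on my own assumptions rather than anything in the paper: a buyer whose trust is keyed to a fixed, legible set of cues can be farmed indefinitely by a seller who learns those cues, while a buyer who stops trusting a cue the moment it burns him limits the damage – at the cost, in the limit, of not conditioning on cues at all:

```python
def exploits(adaptive: bool, rounds: int = 1000) -> int:
    """Count how often a seller exploits a buyer whose trust is keyed to cues.

    The cue names and the two-cue rule are illustrative assumptions only.
    """
    trusted_cues = {"warm_greeting", "confident_pitch"}  # the buyer's current rule
    count = 0
    for _ in range(rounds):
        if not trusted_cues:
            break                       # the buyer no longer conditions on cues
        cue = next(iter(trusted_cues))  # the seller has learned a cue that works
        count += 1                      # the buyer trusts, and is burned
        if adaptive:
            trusted_cues.discard(cue)   # a burned cue stops being trusted
    return count

print("fixed cue-response rule:     ", exploits(adaptive=False), "successful exploits")
print("changeable cue-response rule:", exploits(adaptive=True), "successful exploits")
```

The contrast is deliberately stark: the fixed rule is exploited every single round, while the changeable rule caps the losses at the number of cues the buyer was ever willing to trust.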
If this is true, worries about misinformation in the age of AI are likely overblown, or at least misplaced. There simply are not enough reliable contact points between perception and action for people to be manipulated at scale like that, at least not for very long, and manipulated people – whether at their expense or for their ostensible good – do not stay manipulated forever.
“Fool me twice, well… uh… you fool me, can’t get fooled again.”
On the other hand, this logic also predicts how attempts at manipulation have to play out. It applies, therefore, not just to the populists ostensibly manipulating the masses, but also to the stewards of the polity who would assert control over discourse in defense. In the first place, it suggests that “misinformation” is a misdiagnosis. Populists and charlatans can and do deceive people about factual matters. But the appeal and stickiness of populism cannot be explained this way. People are rarely reliably deceived about their own, or their coalition’s, interests.
Kelsey Piper recently remarked that the left lies by misrepresenting evidence, and the right lies by ignoring evidence. More than just a pithy taxonomy, this is the predictable equilibrium outcome of a situation where the right perceives the terms of discourse to be set in a way adverse to its own interests. Nor is this an unreasonable perception. Consider the mystical power of redefinition in progressive praxis: “Love is love.” “Childcare is infrastructure.” “Trans women are women.” If the evidence is misrepresented, if the categories of discourse themselves are liable to be redrawn in a hostile manner, anti-intellectualism – rejecting the informativeness of signals perceived to be manipulative – is the only viable response.
What this suggests is that reacting to populism by doubling down on misinformation control is just throwing gas on the fire, leaning harder into the very conditions that created the populist reaction. In the limit, it’s not implausible that right and left could end up speaking entirely different languages – a process not without precedent. There are certainly technical questions that can be decided by experts once basic agreement on values is established; central bank independence, for example, is justified on these grounds. But on questions of basic values, there is no getting around the process of political deliberation and confronting those differences head on. To paper over these differences by suggesting political opponents are deluded as to their true interests, and nudging them into consensus through narrative control, can only end in a fractured polity speaking different languages.