A second pass at the themes in The Meta Level Doesn’t Justify Itself. I’ll roll the two together at some point in the indefinite future.
Imagine you need money, and somehow you find yourself sitting across from Warren Buffett pitching a new business venture.
From a purely self-interested perspective, your best scenario is to get the money, blow it all on raves and parties, and then run off to Cancun before he catches on.
Of course, if he knows you’re going to do this, he doesn’t give you the money at all. That’s your worst scenario.
Your second best scenario is to get the money, plow it into your business venture, and return it to Mr. Buffett later on along with a good portion of your returns. So in order to get to your second best, you have to be able to deny yourself the first best. This is hard because once you’ve achieved your second best, you have the option of jumping to your first best.
This is the problem of time-inconsistency. Kydland and Prescott formulated it in their Nobel-winning paper in terms of rules vs discretion. By not following your interests at every point in time, you can actually do better than if you did. This seems similar to what Scott Alexander had in mind in distinguishing between the meta vs object levels of thought, even though his pitch is in terms of epistemic virtue rather than practical interest. Still, the common thread is that there are cases when it’s desirable not to act (or formulate opinions) in your own immediate interest.
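The logic of the loan story can be sketched by backward induction. The payoff numbers below are illustrative stand-ins of my own, not anything from the essay; all that matters is the ordering: the borrower ranks running off above repaying above no deal, and the lender would rather make no deal than be run off on.

```python
# Illustrative (assumed) payoffs preserving the essay's ordering:
# borrower: run_off > repay > no_deal; lender: repay > no_deal > run_off.
BORROWER = {"run_off": 3, "repay": 2, "no_deal": 1}
LENDER = {"run_off": -1, "repay": 2, "no_deal": 0}

def outcome(committed: bool) -> str:
    """Solve the game by backward induction.

    Without commitment, the borrower picks the ex post best action
    (running off); the lender anticipates this and refuses the loan.
    With commitment, running off is off the table, so the loan is made.
    """
    choices = ["repay"] if committed else ["run_off", "repay"]
    borrower_action = max(choices, key=BORROWER.get)
    # The lender only lends if the anticipated outcome beats no deal.
    if LENDER[borrower_action] > LENDER["no_deal"]:
        return borrower_action
    return "no_deal"

print(outcome(committed=False))  # no_deal: discretion leaves both worse off
print(outcome(committed=True))   # repay: commitment reaches the second best
```

Nothing hinges on the particular numbers; any payoffs with the same ranking produce the same flip from no deal to repayment once commitment removes the ex post temptation.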
I argued in the previous installment that the institutions of modernism – rationalism, capitalism, and liberalism – could all be formulated as rules over discretion, or meta over object level, and that modernism in general consists in the predilection for ever more abstract rules of this sort. Because problems of the Warren Buffett sort are pervasive impediments to social cooperation, these institutions were largely responsible for sparking the takeoff of Western civilization.
Both ways of formulating the problem are odd though. Presumably there’s some set of (unarticulated?) rules that direct discretionary action too; otherwise it’d be random action and we couldn’t call it purposeful at all. And not even an abstract rule like “pursue your own interests”, but a whole panoply of rule-bound heuristics and cognitive processes that turn environmental input into a course of action. In the end, everything is rules. So if we want to make a distinction like Kydland and Prescott do, it has to refer to the content of the directing rules rather than the existence of rules in general.
A more exact way to formulate the difference is in the capacity to commit to behaving in ways that might not be in your immediate self-interest. Returning to the Buffett example, it’s in your interest ex ante to commit, since this moves you from your worst option to your second best. But once you have the money, it’s no longer in your interest to have committed ex post, since this prevents you from moving to your first best. Commitment might consist in allowing a third party to coerce you in the event of defection (arbitration), or in having other-regarding preferences (where I value your interests even at my own expense – Bowles and Gintis argue that this is necessary for any social cooperation at all). Rationalism, capitalism, and liberalism were powerful because they were commitment technologies; they were legitimators of cooperative strategies. Similarly, following discretion = operating on the object level = failing to commit, all of which leave you in your least-preferred scenario.
The problem becomes more acute in situations where both parties have to cooperate. We can imagine a similar dilemma facing Mr. Buffett. In his first-best scenario, he convinces you to get to work, telling you he’ll give you the loan if the first results are solid. He then pretends to be dissatisfied with the results in order to avoid paying for it. Free labor for him!
Of course, if you think he’s going to do this, you won’t bother asking in the first place, which is his worst case scenario assuming you’re honest. So, like you, he has to commit to forswearing his first best in order to get his second best.
This is the essence of a prisoner’s dilemma: there are big gains to be made, but both parties have to commit ex ante to not follow their interests ex post in order to do better than the baseline. If either one fails to commit, neither one gets the benefits.1
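The two-sided version has the familiar payoff structure of a one-shot prisoner’s dilemma. The numbers below are again illustrative assumptions: mutual commitment is the joint second best, yet defecting is each party’s best reply to anything the other does.

```python
C, D = "cooperate", "defect"
# PAYOFFS[(row, col)] = (row player's payoff, column player's payoff).
# Assumed numbers: temptation (3) > mutual cooperation (2) > baseline (0) > sucker (-1).
PAYOFFS = {
    (C, C): (2, 2),   # both commit: joint gains
    (C, D): (-1, 3),  # the committed party is exploited
    (D, C): (3, -1),
    (D, D): (0, 0),   # baseline: no deal, no gains
}

def best_reply(opponent_move: str) -> str:
    """Row player's best response to a fixed move by the other side."""
    return max([C, D], key=lambda m: PAYOFFS[(m, opponent_move)][0])

# Defection dominates: it is the best reply whatever the other side does...
assert best_reply(C) == D and best_reply(D) == D
# ...yet mutual defection leaves both worse off than mutual commitment.
assert PAYOFFS[(D, D)][0] < PAYOFFS[(C, C)][0]
print("dominant strategy:", best_reply(C))  # defect
```

This is exactly the gap commitment closes: ex ante, each side wants the (cooperate, cooperate) cell; ex post, each is pulled toward defection unless the rule binds.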
With this in mind, I’m going to try to reformulate the problem of applying the meta level to itself, or trying to justify a rule with itself. If you value political neutrality between interests (a meta-level rule), this might look like being neutral between neutralists and anti-neutralists. If you value tolerance, this might look like tolerating the intolerant. If you value epistemic humility, this might look like being willing to countenance epistemic hubris. Something similar can be done with any rule that commits you to acting against your own ex post interest.
All of these cases describe committing to cooperate with a party you know won’t cooperate in return. You know Mr. Buffett won’t come through with the loan, but you come in with printouts of your credit score and swear on your mother’s grave that you’ve never defaulted on a loan. Mr. Buffett knows you’re going to run off with his money, but he brings you in and solemnly promises to appraise your progress with an impartial third party.
In cases like this, where the other party hasn’t committed to cooperate, it’s not even in your ex ante interest to cooperate. Doing so will leave you even worse off than your previous worst-case scenario!
So to say that “rules don’t justify themselves”, or that “the existence of the meta level is itself an object level norm”, is just to say that commitment to cooperate has to be grounded in your interests. It has to be in your interests ex ante to commit to denying your interests ex post, which in general is only true when dealing with other committed cooperators.
Putting it this way makes it sound perfectly obvious. Of course it makes no sense to cooperate with defectors. And yet, few people think of the point of morals, rules, institutions, ideologies, and so forth as being to foster cooperation.
The modern world being characterized by increasingly abstract rules, therefore, means that it is characterized by increasing capacity for commitment, and increasing scope for cooperation. This is a good thing – the Great Divergence, history’s most dramatic and sustained increase in the scope of social cooperation, would have been unthinkable without it. And yet, the fetishization of these abstract rules – the idea that they could justify themselves (modernism) without reference to some other end (cooperation) – resulted in disillusionment when they were shown to have failed to do so (postmodernism). The drive for abstract rules turns on itself, delegitimizes the project entirely, and reduces the existing normological order to ex post interest (deconstructionism). From such a standpoint, cooperation never pays.
At this point there are two main responses. Right-deconstruction says, “Stop cooperating! Never be a sucker!” – a workable survival strategy, but an impoverishing one that undoes everything that made the West successful and distinctive. Left-deconstruction says, “Cooperate, though the heavens fall!” – the ultimate commitment, but vulnerable to sudden extinction. Unconditional cooperation cannot be an equilibrium, since now the other party has no reason not to defect.
The West is in a situation now where large swaths of the population are unconditional cooperators (at least in an important subset of games), masses of people from a society of defectors are washing in, and homegrown defectors are starting to build mass in response.
The sensible and stable strategy, of course, is conditional cooperation: cooperate only so long as it remains in your ex ante interests, and punish defectors to ensure that it does. Punishing defectors is itself a prisoner’s dilemma, or a commons problem – and the dominant ideology in the West holds that justice requires cooperating with defectors, which mostly precludes punishing them. And so antisocial behavior is subsidized on a gargantuan scale. Such is the result of taking justice as an end in itself, of assuming that it can justify itself, rather than justifying it by reference to cooperation as its end.
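The contrast between the three stances can be run as a toy iterated-dilemma simulation, with tit-for-tat standing in for conditional cooperation. The payoff numbers are illustrative assumptions (temptation > mutual cooperation > baseline > sucker), not anything from the essay.

```python
# Assumed payoffs: (my move, your move) -> (my payoff, your payoff).
PAYOFF = {("C", "C"): (2, 2), ("C", "D"): (-1, 3),
          ("D", "C"): (3, -1), ("D", "D"): (0, 0)}

def always_cooperate(opp_history):  # "cooperate, though the heavens fall"
    return "C"

def always_defect(opp_history):     # "never be a sucker"
    return "D"

def tit_for_tat(opp_history):       # cooperate first, then mirror the opponent
    return opp_history[-1] if opp_history else "C"

def play(strat_a, strat_b, rounds=10):
    """Total payoffs from repeated play; each strategy sees the other's history."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa; score_b += pb
        hist_a.append(a); hist_b.append(b)
    return score_a, score_b

# The unconditional cooperator is exploited every single round...
print(play(always_cooperate, always_defect))  # (-10, 30)
# ...while the conditional cooperator is burned once, then punishes.
print(play(tit_for_tat, always_defect))       # (-1, 3)
# Against fellow cooperators, conditional cooperation keeps the full gains.
print(play(tit_for_tat, tit_for_tat))         # (20, 20)
```

The point of the sketch is the middle line: conditional cooperation is the only stance that both captures the gains from cooperation and denies defectors an ongoing subsidy.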
In closing, what might a society with healthy defection-punishment mechanisms look like, relative to the current political reality?
The upshot is the same in each case: promote cooperation by efficiently weeding out defectors. Sustainable cooperation cannot be an end in itself; it must be grounded in (but not limited to) a more narrowly conceived interest.2 If this proves unachievable, the alternative is the end of impersonal cooperation and the extended order.