[Image: Gustave Doré, Barthelemi Undergoing the Ordeal of Fire]

Inside and Outside Perspectives on Legitimacy

An Economic Theory of the Noble Lie

This paper is also available on SSRN as a PDF.

Leeson and Suarez (2015) argue that “some superstitions, and perhaps many, support self-governing arrangements. The relationship between such scientifically false beliefs and private institutions is symbiotic and socially productive.” This paper stakes out a stronger claim: that something like superstition is essential for any governance arrangement, self- or otherwise.

Specifically, we argue that human social structure both requires and maintains a systematic divergence between subjective preferences and objective payoffs, in a way that usually (though not necessarily in principle) entails “scientifically false beliefs” for at least a subset of agents. We will refer to the basis of such preferences from the perspective of those holding them as an “inside perspective”, as opposed to a functionalist-evolutionary explanation of their existence, which we will call an “outside perspective”. Drawing on the theory of cooperation, we then show that the two perspectives are in principle irreconcilable, and discuss some implications of that fact for political economy and the prospects of social organization.

Inside and Outside Perspectives: The Example of Ordeals

For reasons that will become clearer in subsequent sections, the distinction between the inside and outside perspectives can be seen more clearly the more distant we are from the society in question. Before proceeding to more familiar examples, therefore, we will use as our initial paradigmatic example something utterly foreign to most contemporary people’s experience: trial by ordeal.

In both medieval Europe (Leeson 2012) and contemporary Liberia (Leeson and Coyne 2012), belief that God punishes the guilty is operationalized in criminal justice rituals – ordeals – designed to discover the judgment of God in ambiguous cases. In Europe, the accused plunged his hand into boiling water. If God protected him from scalding, it was a sign of innocence; otherwise he was guilty. Similarly, in contemporary Liberia, the accused imbibes the brew of a toxic bark. If the spirits cause him to vomit it up, he is innocent; otherwise his resulting illness indicates guilt.

Such is the inside perspective on ordeals, something like the account one would get from someone living in that society. From the vantage point of modern criminal justice on the other hand, which does not share the basic presuppositions of divine justice, such practices seem backward and arbitrary. But Leeson argues that such procedures are feasible second-bests in the absence of modern standards of evidence and the state capacity to act on them. Specifically, trial by ordeal results in a separating equilibrium whereby the innocent willingly undergo the ordeal and the guilty refuse, meaning that willingness to undergo the ordeal conveys valuable information on the innocence of the accused.1 He then shows that a large proportion of ordeals were in fact successfully passed, establishing innocence, and implicating the administrators (priests) in consciously or unconsciously adjusting the severity of the ordeal conditional upon the accused’s willingness to undergo it.

This is an outside perspective on the institution of ordeals, analyzing its functionality from some remove and without committing to the society’s presuppositions. Indeed, the two perspectives are fundamentally incompatible, not only in the basic sense of being alternative explanations of the same phenomena, but in the deeper sense that introducing an outside perspective into medieval Europe or contemporary Liberia would destroy the possibility of trial by ordeal. If a criminal were to “see through” the ordeal from the outside, a criminal justice regime predicated on a belief in the justice of God could not function, as his assent to the ordeal would carry no informational value. And indeed, Leeson shows that known nonbelievers were not subjected to ordeals.

Leeson demonstrates the stability of such an institution provided people are believers. This assumption is not taken entirely for granted: Leeson (2013b), for example, builds a Bayesian model of belief, so believers are not totally credulous in the face of crass manipulation, and derives an equilibrium quantity of manipulation. What is missing, however, is a strategic model of belief; an explanation of why susceptibility to manipulation would be a viable phenotype in the first place.2 This is not to say that one necessarily has a choice in one’s beliefs, but rather that the viability of belief (and of honesty about one’s belief) in a population will erode over the longer run if unbelief results in persistently higher payoffs, eventually leaving an ordeal system nonfunctional.3 In such an environment believers will be outcompeted by nonbelievers – and particularly by nonbelievers who profess belief – regardless of their updating strategy. As is shown below, other institutional features intended to reverse the advantage (say, burning heretics) displace the problem elsewhere, but ultimately do not solve it.

The following section shows briefly the inability of selfish and rational agents – the homines æconomici of classical theory – to cooperate in large groups. Later sections then show where in such a model the inside/outside perspective distinction arises, how agents who do make this distinction can outcompete homines æconomici who do not, and some implications of the distinction for social theory.

There Is No Incentive-Compatible Social Organization

More generally, a signaling game with a costless signal reduces to a social dilemma, a class of games that includes prisoner’s dilemmas (two-person) and public goods, commons, and collective action problems (N-person). Social dilemmas are non-zero-sum games which are characterized by mutual gains from cooperation, but also a Nash equilibrium of mutual defection. In other words, the Nash equilibrium is not Pareto-optimal. Social behavior is defined by cooperation in such games, against one’s own narrow interests.4
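
To make the structure concrete, consider a two-person social dilemma with illustrative payoffs (row player’s payoff listed first):

$$\begin{array}{c|cc} & \text{Cooperate} & \text{Defect} \\ \hline \text{Cooperate} & (3,3) & (0,4) \\ \text{Defect} & (4,0) & (1,1) \end{array}$$

Defection strictly dominates for each player (4 > 3 against a cooperator, 1 > 0 against a defector), so mutual defection is the unique Nash equilibrium; yet both players would prefer mutual cooperation (3 > 1). The particular numbers are arbitrary – any payoffs with this ordering generate the same dilemma.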

Unfortunately, much of the literature on cooperation and governance generalizes from two-person games to N-person games, and therefore concludes that repeated play is sufficient to establish incentive-compatible governance structures even in the latter.5 This section shows that this inference is not warranted. We use for our paradigmatic game, therefore, a public goods game (or, equivalently, a collective action problem) rather than a prisoner’s dilemma. Governance, broadly speaking, consists in collective action of some sort or another. If it is true that rational agents cannot cooperate in a public goods game, then governance and society more broadly will also be impossible for them.

The lack of an airtight solution, we argue, necessitates distinct “inside” and “outside” perspectives: that is, a divergence between the objective game structure, a social dilemma with an equilibrium of universal defection, and – for at least a subset of agents – a different reckoning of the subjective costs, which transforms the game into one with a cooperative equilibrium.

Social Behavior Poses a Problem

The basic difficulty with sustaining social behavior can be seen in a one-shot public goods game. Suppose there are N agents, each with an endowment of 1. Each agent i has the choice of contributing ci ∈ [0, 1] to a communal pot, in which case γci (with γ > 1) is distributed equally among all agents. Agent i’s payoff function, then, is

(1) $$p_i = 1-c_i + \sum^N_{n=1}\frac{\gamma c_n}{N}$$

whereas the total payoff function, summing pi over all i, simplifies to

(2) $$P = \sum_{n=1}^N (1 + (\gamma-1) c_n)$$

The total payout is maximized if cn = 1 for all n, but – provided cn is independent of ci for all n ≠ i – individual payoff is maximized at ci = 0 so long as γ < N. There is a divergence between the private cost of non-contribution (∂pi/∂ci = γ/N − 1) and the social cost of non-contribution (∂P/∂ci = γ − 1).
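
A minimal numerical sketch of equation (1) makes the divergence concrete; the parameter values (N = 10, γ = 3) are illustrative only:

```python
# One-shot public goods game of equation (1); N and gamma are illustrative.
N, gamma = 10, 3.0  # gamma > 1, but gamma < N

def payoff(i, contributions):
    """Agent i keeps his unspent endowment plus an equal share of the
    multiplied communal pot."""
    pot = gamma * sum(contributions)
    return 1 - contributions[i] + pot / N

print(payoff(0, [1.0] * N))                # 3.0: universal contribution (Pareto-optimal)
print(payoff(0, [0.0] + [1.0] * (N - 1)))  # 3.7: agent 0 defects while the rest contribute
print(payoff(0, [0.0] * N))                # 1.0: universal defection (the Nash equilibrium)
```

Whatever the other agents do, withholding one’s own contribution raises one’s own payoff by 1 − γ/N = 0.7 here, which is why defection is dominant in what follows.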

In any game of this structure, defection is the dominant strategy for rational and self-interested agents. The public good is not provided; the signal has no informational value; the commons is depleted; collective action is not undertaken; society does not get off the ground.6 Ordeals may have been an important supporting institution in the medieval criminal justice regime, but they were not, by themselves, sufficient to establish peaceful society.

Repeated Play Isn’t A Solution

It is well known that repeated play can sustain cooperation in two-person prisoner’s dilemmas, provided the end of the game is not known, by allowing players to punish defectors. By playing a trigger strategy, for example, where one player responds to defection by defecting in all future games, one player can threaten the other with the loss of all future gains from cooperation. For narrower but still plausible ranges of discount rates, more forgiving strategies such as tit-for-tat (where the retaliation lasts only for a single subsequent period), or even tit-for-double-tat (where one period of retaliation is triggered only after two defections) can be cooperation-supporting equilibrium strategies (Axelrod 1984), especially where mistakes are made.

The same would be true in N-person dilemmas, provided players had some way to punish defectors individually. In a repeated N-person dilemma, however, punishment is diffused over all agents, unlike in the dyadic game, where it is possible to punish the defector and only the defector. Consider an infinitely repeated version of (1), with the simplifying modification that ci ∈ {0,1}, meaning the agent has a binary choice of whether or not to contribute. A trigger strategy is not robust to the commission of any mistakes. If we introduce a parameter ε ∈ (0, 0.5) for the probability that an agent mistakenly fails to contribute where he meant to, or vice versa,7 a more forgiving strategy will be necessary if a single mistake is not to snowball into defection.

The more forgiving the strategy, however, the lower the difference in payoffs between cooperation and defection, and therefore the lower the discount rate must be for cooperation to remain feasible. Consider the N-person analogue to a tit-for-tat strategy: “contribute unless some fraction µ of the population failed to contribute in the last period”. As N becomes arbitrarily large, any individual’s choice of strategy matters less and less for the payoffs of his peers, making it increasingly difficult for him to punish defection and, in turn, for others to punish him. The strategy can therefore be invaded by a “never contribute” strategy. As Bowles and Gintis (2011: 63-67) show using a simulation, and Fehr and Gächter (2000) confirm in the lab, contribution to a public good drops off precipitously as N rises beyond about 5, even for very low error rates (0-0.02) and discount rates (0-0.04).8
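
The dilution of punishment can be seen in a minimal simulation sketch (an illustration in the spirit of the Bowles-Gintis result, not a reconstruction of their model; all parameter values are arbitrary):

```python
import random

def average_payoffs(N, gamma=3.0, mu=0.2, eps=0.02, periods=500, seed=0):
    """Repeated binary public goods game with per-period payoffs as in (1).
    Agent 0 plays 'never contribute'; all others play the N-person
    tit-for-tat analogue: contribute unless more than a fraction mu of the
    group failed to contribute last period. Each intended action is flipped
    with probability eps. Returns average per-period payoffs for the free
    rider and for a conditional cooperator."""
    rng = random.Random(seed)
    defection_rate = 0.0
    totals = [0.0] * N
    for _ in range(periods):
        intents = [False] + [defection_rate <= mu] * (N - 1)
        acts = [x != (rng.random() < eps) for x in intents]  # errors flip actions
        pot = gamma * sum(acts)
        for i in range(N):
            totals[i] += 1 - acts[i] + pot / N
        defection_rate = 1 - sum(acts) / N
    return totals[0] / periods, totals[1] / periods

for N in (3, 5, 50, 100):
    fr, cc = average_payoffs(N)
    print(f"N={N:3d}  free rider: {fr:.2f}  conditional cooperator: {cc:.2f}")
```

With these parameters, defection in a small group quickly pushes the defection rate past µ and collapses cooperation, so the free rider gains little from his strategy; in a large group his lone defection never crosses the threshold, so he permanently out-earns the conditional cooperators and “never contribute” invades. Raising µ to tolerate more mistakes only widens the range of group sizes over which free riding goes unpunished.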

Targeted Punishment Defers, But Does Not Solve, The Problem

The public goods game as set out so far is somewhat more limited than real-world public goods games. In particular, it is overly restrictive to assume that the only margin of choice is contribution or noncontribution. Real-world social behavior operates on many different margins, and choices along one can influence cooperation in another, for better or for worse (Reiter, et al. 2018).

Suppose now that individual i can pay some cost νij to “punish” non-contributor j by subtracting νij from j’s payoffs that period (the total product therefore falls by 2νij). If ∑i νij > 1–γ/N, there is no divergence between private and social cost for j, and j’s dominant strategy is to contribute.

Things look different from the perspective of agent i, however. An agent facing the choice of whether to punish j faces the cost νij, but – because this is a public good – the benefits of j’s future cooperation accrue to all agents. In other words, the “punish non-contributors” game is simply another public goods game superimposed upon the first. Even in a repeated game, it will be in i’s interest to free-ride on the punishment of j.
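
A quick calculation with uniform punishments (νij = ν for all i, and illustrative values N = 10, γ = 3, ν = 0.1) shows why:

```python
# Second-order free riding on punishment; all values are illustrative.
N, gamma, nu = 10, 3.0, 0.1

gain_from_defection = 1 - gamma / N   # 0.7: j's gain from withholding his contribution
total_punishment = (N - 1) * nu       # 0.9: if every other agent punishes j

print(total_punishment > gain_from_defection)        # True: j is deterred
# If any single punisher i abstains, deterrence still holds...
print(total_punishment - nu > gain_from_defection)   # True: 0.8 > 0.7
# ...and i saves the cost nu while still collecting his share gamma/N of j's
# induced contribution. Abstaining from punishment is therefore dominant.
```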

Targeted enforcement of contribution brings us into strategic territory that begins to look like governance. And immediately we run into the fundamental problem of governance: quis custodiet ipsos custodes? – “who watches the watchmen?” – a phrase whose original formulation in Latin testifies to its pervasiveness and intractability. In our case, we can reformulate it as: who punishes the punishers who fail to punish? Second-order punishment is beset by the same problem on another level, along with third- and fourth-order punishment and so on.

We have, therefore, an unbridgeable gulf between Pareto optimality and Nash equilibrium in large groups, provided there is no outside authority to appeal to. Whether or not this is true of any particular group, it is always necessarily true of society as a whole. We will call this problem the incentive gap: the impossibility in broad classes of social dilemmas of eliminating the temptation to defect among some subset of agents, whose defection would lead to the total unravelling of cooperation.

The Incentive Gap in the Firm

The problem of incentive alignment has been studied most systematically in the theory of the firm, where incentive mechanisms and organizational relationships are most explicit. If we regard the firm as a locus of joint production (Alchian & Demsetz 1972),9 effort – to the extent that it is imperfectly monitorable – becomes a public good. The question then is: how can production be organized so that no agent has an incentive to shirk?

The conventional wisdom, per Alchian and Demsetz, holds that entrepreneurs provide monitoring services, and their own incentive to avoid shirking is ensured by their status as residual claimants. Holmström (1982), however, proved that there exists no set of incentives that can motivate employees to avoid shirking in joint production so long as the budget is balanced. This is the problem of imperfect monitoring from the section on repeated play above: where effort is partially unmonitorable, any incentive system will face a tradeoff between capricious punitiveness and allowing scope for profitable defection (i.e., between Type I and Type II errors). Eswaran & Kotwal (1984) then showed that even incentive schemes which align incentives for employees by failing to balance the budget (i.e. enforcing effort via a penalty of paying out less than the entire product) create perverse incentives for the residual claimant. This is the second-order punishment problem from the section on targeted punishment above. They conclude that “the crucial necessity of monitoring the monitor is thus not met. . . . the problem of moral hazard [i.e. defection in the firm’s social dilemma] takes a different form but remains unsolved.”
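
The mechanism can be seen in miniature in a worked instance under one balanced-budget sharing rule, an equal split; the functional forms here are illustrative assumptions, and Holmström’s theorem generalizes the failure to every balanced rule:

```python
# Moral hazard in teams: a worked instance, not Holmstrom's general proof.
# Two agents choose effort; joint output is e1 + e2; effort costs e**2 / 2.
# Budget balance with equal sharing: each agent receives (e1 + e2) / 2.

def utility(my_effort, other_effort):
    """An agent's payoff under the equal-split sharing rule."""
    output = my_effort + other_effort
    return output / 2 - my_effort ** 2 / 2

# Each agent keeps only half of his marginal product, so his best response
# maximizes e/2 - e**2/2, giving e* = 1/2 regardless of the other's effort.
# The first-best maximizes e - e**2/2 per agent, giving e = 1.
print(utility(0.5, 0.5))  # 0.375: payoff at the shirking Nash equilibrium
print(utility(1.0, 1.0))  # 0.500: payoff if both could commit to first-best effort
```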

Relationships between firms are similarly fraught. The inability to write complete contracts, for example, is a common assumption in organizational economics, which is simply to say that the variety of choice margins open to two transactors precludes the ability to ensure incentive compatibility ex post, even assuming perfect enforcement of written contracts. As Alchian and Demsetz (1972) note, “it is hard to imagine any contract, which, when taken solely in terms of its stipulations, could not be evaded by one of the parties.” Trust is commonly taken as an exogenous feature of functional social systems, but it is underappreciated that the problem of trust consists in the fact that trustworthiness is not individually rational. Without special assumptions, the capitalized value of reputation is necessarily less than the value of a one-time defection in intertemporal markets (such as credit or money issue), as the value of defection rises in proportion with the value of reputation (Harwick & Caton 2019; Bulow & Rogoff 1989; Taub 1985). Where this is the case, to the extent that information is not public and reliable, reputation will be insufficient to ensure cooperation even for dyadic interactions if they are drawn from a larger population (e.g. in Kandori 1992).

The Incentive Gap in Society

The class of social dilemmas is vast, particularly in the context of governance. Besides the number of people involved, games can vary on the imperfection of information and monitoring, agents may have a wider or narrower range of choices in contribution or punishment, games may be conditioned upon the results of other games, and so on.

Depending on these various factors, many of the dilemmas encountered by people in a society can be adequately solved by repeated play, especially if interactions are dyadic. Indeed, many social institutions – most importantly, property rights and market exchange – have the function of transforming would-be N-person social dilemmas into soluble dyadic interactions. Nevertheless, the enforcement and/or voluntary respect of the rules constituting these transformative institutions are themselves irreducibly public goods. Despite the importance in the developed world and (especially) in economic theory of opportunities for dyadic exchange, the very existence of a market – and, for that matter, of a state – rests on the provision of a number of genuinely public goods on both micro and macroeconomic levels. Similarly to the second-order punishment problem, even if we suppose that the provision of property rights could in turn be transformed into a dyadic game through some supervening institution, that institution itself would constitute a public good.

The open-endedness of human strategies can also be an impediment to cooperation and commitment (Stewart et al. 2016), analogous to the problem of incomplete contracts in organizational economics. In broader society, this problem inhibits the establishment of both potential exchange relationships (Harwick 2017) and governance solutions.10 The much-celebrated fact in economics that incentive-compatible Pareto-optimal resource allocations exist given well-defined property rights, complete contracts, and limited behavioral repertoire, should not blind us to the gulf separating the Arrow-Debreu world of general equilibrium from the real world, where open-ended behavior makes complete contracts impossible and property rights costly to establish.

The upshot is that, for plausible rates of discount and error, there exists no potential structure or set of strategies to ensure that every member in a large group has an incentive to cooperate in the face of social dilemmas, a problem as true for society broadly (Bowles & Gintis 2011, ch. 4-5) as it is for a firm. To the extent that social sanctions are effective to render cooperation the dominant strategy, they do so by placing the enforcers – whether the entire population or some specialized subset – into another social dilemma.

What then of the infinite variety of equilibrium strategies possible under the folk theorem? Bowles and Gintis argue that most of these equilibria, even for dyadic interactions, are “evolutionarily irrelevant” – that is, there is no reason to expect the folk theorem to be actually operational under conditions of imperfect information, and there is no feasible path for the evolution of such strategies from a starting point of noncooperation. A viable strategy must be robust to error, and be able to outcompete noncooperators under a wide variety of unfavorable situations. In other words, relevant cooperative strategies must – in addition to being Nash equilibria – be evolutionarily stable. This requirement binds even more tightly for N-person games. “Knife-edge” equilibria, trigger strategies, and other strategies that do not meet these criteria shed no light on the strategies actually employed by humans.

Self-Deception and Cooperation

The pessimism of the previous section is belied by the Great Fact that functional firms and high-trust societies do in fact exist. Many, especially economists, have taken this as prima facie evidence that there is some incentive mechanism, even if we have not perfectly understood it yet, that bridges the incentive gap. In light of the previous section, however, this assumption is not tenable. How, then, do we account for the Great Fact?

The Phylogeny of Self-Deception

Any decisionmaking agent employing a strategy that fails to optimize according to its rational self-interest can be outcompeted by a strategy that optimizes more closely. In the environments we have considered so far, this means that cooperative strategies are ineluctably maladaptive for individuals qua individuals. Human social life, however, is not a mere public goods game; it is characterized by a group structure. Within groups, assortativity can ensure cooperators reap enough of the benefits of cooperation to outcompete noncooperators (Alger & Weibull 2013; Bergstrom 2003). Competition between groups presents itself to members as a coordination game, the gains from which can outweigh the losses from cooperating in social dilemmas. In other words, as Sober and Wilson (1998) argue, selection for cooperative groups can more than balance the within-group selection against cooperative individuals.
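
A minimal sketch of the assortativity mechanism, under the standard index-of-assortment assumption that each of an agent’s group-mates is of the agent’s own type with probability r and a random draw from the population otherwise (an illustration in the spirit of Bergstrom 2003, not a reconstruction of any particular model):

```python
def payoff_gap(r, n=20, gamma=3.0):
    """Expected payoff of a cooperator minus that of a defector in the
    public goods game of (1), in groups of size n formed with index of
    assortment r. The population share of cooperators cancels out of the
    difference, which reduces to gamma * (1 + (n - 1) * r) / n - 1."""
    return gamma * (1 + (n - 1) * r) / n - 1

print(payoff_gap(0.0))  # -0.85: without assortment, cooperation is strictly dominated
print(payoff_gap(0.3))  # ~0.005: just above the threshold r = (n - gamma) / (gamma * (n - 1))
print(payoff_gap(0.5))  # +0.575: cooperators now out-earn defectors
```

Cooperation is favored whenever r exceeds (n − γ)/(γ(n − 1)); note, however, that whatever enforces the assortment is itself a public good in the sense of the previous section (see footnote 12).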

A conflict arises, however, because – in humans, anyway – strategies are employed by individuals, and selection on individuals necessarily favors noncooperation. The individual, if we regard him as making conscious decisions on the basis of his interests, must deceive himself regarding those interests. In other words, his subjective preferences must systematically diverge from the objective payoffs.11 Note that both assortativity and group competition require mechanisms to enforce the assortativity and the group structure.12 They do not therefore make cooperation individually rational; rather, they make irrationality (with respect to the objective payoffs) viable. In either case, agents must at minimum find intrinsic utility in punishing or excluding noncooperators in order to solve the second-order punishment problem. If a subpopulation employing such a strategy manages to achieve a mass sufficient to impose its preferences on the remaining selfish rational maximizers, cooperation can be stable, and the incentive gap can be closed – both in the firm (Miller 1992 ch. 10-11)13 and in society (Bowles & Gintis 2004; 2011 ch. 9).

Furthermore, experimental evidence shows unequivocally that such a divergence does in fact exist: humans are, on many margins, genuinely altruistic (Bowles and Gintis 2011, ch. 1; Tomasello 2009). If strict correspondence is incompatible with human (or any) social organization among distantly related agents with imperfect information, social organization must depend on some such tendency, at least in the breach.

Economists have traditionally sidestepped the question of preferences versus payoffs by regarding beliefs and preferences as given, and may indeed see no reason why subjective preferences should correspond to objective payoffs in the first place – after all, de gustibus non est disputandum. Even the thoroughgoing subjectivist, however, has to assume some correspondence in order for analysis to get off the ground at all. In orthodox analysis, this is done by treating preferences as preferences over consumer goods. If all goods have a positive income elasticity of demand, we do not even have to assume economic rationality as basic to the model. Rather, a strict preference for income can be derived by similar selection logic: agents without a strict preference for income get outcompeted in the marketplace, and do not ultimately affect the conclusions of the model (Alchian 1950; Becker 1962). Per the argument on group structure, it is on precisely this point that the disanalogy between altruism and consumer goods as objects of preference arises.

To call altruism “self-deception” is not a claim about the psychology of altruism. Rather, it is to take strict correspondence, as simple Darwinian (or, Alchian-Becker) logic demands, as the benchmark of strategic rationality. The actual psychological heuristics an organism uses are irrelevant if we assume sufficient selective pressure (Koppl 2002; Satz & Ferejohn 1994). “Self-deception” – even referring to a pure preference phenomenon – points to the fact that cooperative strategies such as humans in fact employ systematically fail to maximize individual fitness, and that the individual has adopted the fitness of something else (whether the group as a whole or other individual members) as a terminal goal.

The inside and the outside perspectives correspond, respectively, to looking at an institution from the perspective of its members’ subjective preferences and beliefs, and of the objective payoffs. And because these consist of (respectively) preferences for and costs of punishing the noncontributing or nonconforming behavior of others, we can think of them as inside and outside perspectives on legitimacy – i.e., on the determination of and coordination upon focal punishment strategies. The question of legitimacy is important, in particular, for preventing punishment from devolving into warring factions of mutual punishers, as often still manages to happen in human societies (cf. Bowles & Gintis 2011: 26ff).

The inside-outside perspective distinction is not identical with the fact-value distinction, but the latter does follow straightforwardly from the former. If social organization necessarily relies upon individually maladaptive altruistic preferences in the breach, and if the function of human morality is to coordinate cooperative strategies (Curry 2016; Curry et al. 2019), then it will be impossible to derive a morality that sustains human society from the nature of things (i.e. from the objective payoffs). To accept the broad and universal features of human moral life is ipso facto to deny the ability to derive normative force from the objective payoffs. Facts and values are related, of course – where else would morality come from if not the nature of things? – but the relationship cannot be a deductive one.

The Ontogeny of Self-Deception: Preference vs. Belief

The human capacity to deliberate is the capacity to explicitly justify behavior; that is, to ground strategy choice in terms of a more basic objective function. For an organism with this capacity, this divergence will consist either in failure to perceive the lack of correspondence, or a deliberate decision to ignore the objective payoffs. The former corresponds to an inside perspective as a belief phenomenon; the latter to an inside perspective as a preference phenomenon.

The initial example of ordeals was a belief phenomenon. Agents must be convinced that it is really in their interest to employ a strategy which is in fact dominated by another – a “Noble Lie”, so to speak. If this strategy entails punishing others who fail to deploy the strategy in question, then this belief can even be self-fulfilling in most cases – it really will be in the interest of most people to deploy the strategy – and operate mainly in the breach without being disconfirmed. In this way, larger-scale political organization can get off the ground without depending on highly or uniformly altruistic preferences. Only a relatively small number of altruists in key positions will be sufficient to maintain cooperative norms.

There are tradeoffs to closing the incentive gap using beliefs versus preferences. Though less reliant on deliberate self-sacrifice, belief-based inside perspectives are not necessarily robust to outsider contact, for example: it is more difficult to maintain rich factual beliefs when confronted with other functional cultures maintaining incompatible factual beliefs (see Leeson 2013a for an example). Ecumenical polytheism and evangelical monotheism were both institutional technologies to deal with this problem, either by creating ideological space to preserve local norms, or through homogenization.

In principle, a population that preferred cooperation sufficiently strongly for its own sake could dispense with noble lies entirely, provided they were still willing to punish defection wherever it did arise.14 Nevertheless, in practice, a preference for altruism can only withstand so much defection. Humans do make deliberative choices on the margin, and in experimental public goods games, even groups highly inclined to cooperate at first will quickly decay to negligible contributions (Ledyard 1995). For this reason, any nonauthoritarian society – that is, one where overt punishment can be kept to a reasonable minimum – must rely on some combination of false facts and maladaptive preferences among the masses to maintain the divergence between objective and subjective reckonings of costs. The more intrinsically altruistic will be able to get by with fewer factual commitments, and (therefore) with more abstract religions and ideologies. A richer belief system can satisfy both groups with a single body of doctrine: metaphysics and theology for cooperators; the wrath of God for would-be defectors. Indeed, vengeful deities appear in the historical record to be strongly linked with the rise of large-scale political organization (Norenzayan, et al. 2016). Finally, for those who nevertheless expect gains or derive pleasure from defection, there’s punishment – which, when effective, itself relies primarily on the altruism of the first group.

Normative Drift and the Invisibility of the Inside Perspective

Belief-based inside perspectives are a deliberative blind spot, almost by definition. If the criminal from the ordeal example above were to form his beliefs in full view of the objective payoffs, he could exploit the value of the signal and increase his own fitness. There is an unexploited arbitrage opportunity which he systematically overlooks. Similarly for preference-based inside perspectives: altruistic preferences, as an ultimate fact, cannot be argued about. If one prefers helping others over one’s own convenience in full view of the cost, there is no convincing him otherwise except on the basis of an even more fundamental preference or value.

Both of these situations make it difficult to criticize a culture’s norms from within that culture – again, unless this is done from the vantage point of another shared norm. An effective inside perspective must appear self-evident; in other words it must, whether through beliefs (e.g. in a moralistic deity) or preferences (e.g. the self-evidence of the Golden Rule among post-Christian Westerners), make itself invisible and present itself as an ultimate fact. Inside perspectives which fail to provide for their own survival this way, quite simply, do not persist.

In the absence of outside contact, therefore, insular societies are prone to normative drift, which is to say there are no internal or external forces tending to select for prosocial rather than antisocial norms: no internal forces because its inside perspective remains invisible to its own practitioners, and no external forces by hypothesis. Bowles and Gintis (2011, ch. 10) show that, in the absence of strong external pressure, fitness-reducing norms can hitchhike on a more general norm-internalization capacity, and the invisibility of the inside perspective indicates how exactly the human deliberative capacity fails to weed out pathological norms, especially antisocial punishment.15 The same logic also holds for hegemonic societies: without effective inter-societal competition to weed out pathological norms, beliefs, and practices, normative drift may set in and allow highly Pareto-suboptimal norm complexes to thrive unchecked.16

Implications for Political Economy

Inside and Outside Perspectives in Political Economy

The fact that economists frequently find themselves on the wrong side of sacred values should not be taken to imply that economics as a discipline stands firmly on the ground of the outside perspective. Indeed, there is a rich tradition of inside-perspective economics, most notably welfare economics, which in its pure axiomatic form is unfalsifiable by design (Buchanan 1969). Various other simplifying constructs, such as perfect competition, are not descriptively valid. To point this out is not, pace some radical critiques of neoclassical economics, to impugn their status and utility. To the extent that neoclassical welfare economics obscures opportunities for strategic rent-seeking from policymakers with the assumption of competitive markets, such “lies” may be truly noble, cooperation-enhancing, and self-fulfilling in exactly the same sense as a belief in the wrath of God.17

The distinction also runs directly through the middle of the economics of institutions, with Nobel prizes on both sides. On the outside are economic historians such as North (e.g. 1990; 2005), Greif (2009), and Acemoglu and Robinson (2005; 2012), who – though they have a normative goal of economic development – approach the question functionally and historically. On the inside are “rational reconstructions” such as Buchanan and Tullock (1962)18 and Rawls (1971), who are concerned to connect existing or potential institutions with widely shared moral intuitions (sacred values) using thought experiments rather than history. The same distinction can be traced very far back through the Western canon: quite apart from the quality of the respective analyses, Hobbes ([1668] 2012) and Locke ([1690] 1960) for example were engaged in projects on decisively different sides of the divide. If it is true that the inside and outside perspectives are irreducible one into the other, then it is hardly surprising that the arguments of Hobbes and Locke have both persisted despite their basic incompatibility.19

The divide does not map precisely onto what have traditionally been understood as “positive” and “normative” economics; indeed, one of the more persistent critiques of the distinction is that a truly wertfrei economics is impossible, and the very act of interpreting sense data is a fundamentally normative exercise. Even so, the inside-outside distinction seems to capture much of what the positive-normative distinction was meant to: there is a basic difference between analysis and legitimation, with most “positive” economics counting as the former, despite its implicit normative commitments.

Critical Theory and the Ethics of Political Economy

Nevertheless, it would be a mistake to draw the lines of economics, or of science more broadly, so as to exclude exercises in legitimation. Such is the goal of critical theory, broadly conceived – itself perhaps the most salient example of normative drift in the developed world.

Critical theory can be understood as a method for analyzing social institutions in terms of the objective payoffs; a systematic outsiding. Divergence between subjective preferences and objective payoffs constitutes “false consciousness” or some equivalent term: in other words, supposedly oppressed classes could do better for themselves by minding their own fitness and declining to buy into their community’s noble lies.

Per the foregoing analysis, this contention is correct – or at least, there always exists some such class. And yet, it also follows from this analysis that reducing higher values to power relations renders social cooperation strictly impossible (cf. Hayek 1988: 68). We have argued that there will always be parties in a society whose dominant strategy is defection. Critical theory is simply a method for identifying those parties and alerting them to that possibility20 – perhaps the most deliberate method of doing so, but far from the only method. Indeed, the game theory in this paper points to the very same possibility.

This poses an ethical dilemma for the political economist as a student of society, no less for the game theorist and the new institutionalist than the critical theorist. On the one hand, a functionalist outside perspective is valuable for identifying systemic problems in economic development and institution-building (e.g. Acemoglu 2003). Without the ability to accurately identify the source of institutional failure, efforts at foreign aid and development are likely to be quixotic, if not harmful (cf. Easterly 2001, ch. 2-7). On the other hand, given that approaching social institutions from an outside perspective (whether critically or not) can render them impossible to maintain, it may be the case that a scientific approach itself will do more harm than good.

For the same reason that explicit rules have a limited ability to support cooperation, there can likely be no hard-and-fast set of prescriptions for dealing with this problem. Nevertheless, there is some reason for optimism. First, many inside-perspective beliefs are quite resilient to disconfirmation, especially where an aspect of sacredness is involved – a fact which has often been a source of consternation to iconoclastic intellectuals, but which may limit any damage done by the scientist interested in understanding rather than revolution. Trial by ordeal may be impossible to maintain in a population of atheists, but there is evidence that people believe in order to support such institutions, rather than the institutions existing to prop up belief (Chen 2010; Ager and Ciccone 2016; Auriol et al. 2018). It may, therefore, be difficult to “deconvert” a population without obviating the institution, a fact which would give the scientist much more latitude in inquiry.

Second, the scientist may resort to what Melzer (2014, ch. 6) called “protective esotericism” and self-censor in popular works, a tactic with a long history (as Melzer documents) among intellectuals dealing with contemporary inside perspectives. To the extent that academics can be relied upon for preference-based rather than belief-based altruism (as suggested by Eisenberg-Berg [1979] and Millet & Dewitte [2007], though see Madison et al. [2017]), it will not be necessary to censor more technical social-scientific work.

Even so, the task of deriving policy implications from social scientific work is complicated by this analysis. As organizational and new institutional economists have long recognized, optimal policies may be outside the feasible opportunity set in the absence of commitment power. But in a fitness landscape riddled with local optima and varying distributions of altruists willing to take up the slack left by failing belief, the importation of scientific-rationalistic modes of thought, even in full recognition of the commitment problems in the way, may clear away the coordinating power that previous institutions offered. And without a sufficient proportion of preference-altruists to maintain Western-style liberal democratic institutions, more virulent ideologies may rush in to fill the gap.21

Conclusion

The logic of social behavior entails a structure of human motivation that implies an irreducible distinction between inside and outside perspectives on social institutions – that is, between legitimating exercises on the one hand, and analytical exercises on the other. That same logic implies that the distinction will in normal circumstances be invisible to the member of a particular society, to the extent that invisibility aids the internalization of an inside perspective. Because of the problems inherent in the social organization of distantly related agents, it is necessary that the subjective preferences of those agents diverge from their objective payoffs in precisely the places that support the provision of public goods, and the punishment of noncontributors.

This connects the game theory of cooperation not only with the logical structure of human morality, but also with the relationship between informal institutions and economic growth. With the developed world relying upon and exporting norms and modes of governance that require preference-based altruism, the problem facing the world going forward will be to create conditions conducive to the survival and flourishing of that altruism, as well as restraining its drift into antisocial punishment.

Footnotes

  1. Superstition also results in a separating equilibrium via a similar mechanism in Iannaccone (1992) and Leeson (2013a), where visible commitment to a burdensome superstition filters out noncooperators. These examples also result in separate inside and outside perspectives, and the remarks below on costless signaling will also apply.
  2. In this sense we are considering belief not as a Nash equilibrium, but as an evolutionarily stable strategy.
  3. This consideration also militates against the theory that language evolved for the purpose of manipulation or deception (e.g. Dawkins and Krebs 1978). It must be incentive-compatible not only to send a signal, but also to receive and act upon a signal (Fitch & Hauser 2002; Searcy and Nowicki 2005: 8). Knight (1998) takes these considerations and comes to a similar conclusion to this paper, with trust in the veracity of language backstopped by “literally false but metaphorically true statements” drawn from the quarries of the costly rituals implicit in a normative community.
  4. Sociality in this sense is distinct from gregariousness (e.g. herding behavior), which is incentive-compatible.
  5. E.g. the classic simulation in Axelrod (1984) which showed the dominance of tit-for-tat when paired against other strategies for some number of periods. Hardin (1985) criticizes his generalization to N-person games. Kandori (1992), similarly, shows that cooperation-sustaining strategies exist for repeated pairwise games where the pairings are sampled randomly from a population, but not for non-pairwise interactions. Alger & Weibull (2013) examine the divergence between preferences and payoffs in a similar spirit to the present paper, but only for pairwise interactions with positive assortativity.
  6. For the signaling game in particular, if ci is an unobservable or imperfectly observable cost that one may bear for the benefit of the group (say, refraining from crime), and the cost of signaling compliance is known to be zero (say, enthusiastically assenting to undergo an ordeal), then the signal’s value as an indicator of ci will be a commons which free riders will be motivated to deplete by falsifying the signal.
  7. Or, equivalently, the probability that any agent assesses another agent to have failed to contribute when he in fact did not, or vice versa.
  8. The same argument also applies to inclusive fitness explanations for cooperation, i.e. that altruistic genes can proliferate on the basis of kin selection. As Bowles and Gintis (2011: 60) note, relatedness enters into the structure of payoffs from a gene’s perspective in exactly the same way as the probability of a repeated interaction, which is to say that the relatedness coefficient within human groups must be implausibly high in order for kin selection to support altruistic behavior.
  9. The importance of joint production is that it forecloses the possibility of paying by marginal product, as product exhaustion will not hold where each agent’s marginal product is not independent of the effort of other agents. In this situation, compensation on the basis of inputs (i.e. effort) can be more feasible than compensation on the basis of value added. The fact that this requires monitoring is what creates the incentive gap.
  10. Ostrom’s (1990: 185; 2005: 259) famous design principles for the management of common pool resources, particularly the ones relating to monitoring, sanctions, and punishment, do presuppose altruistic preferences of some form or another. In this sense, Ostromian agents are not rational maximizers of a single-valued objective function. See the following section.
  11. Bear, Kagan, and Rand (2017) show that deliberation (the capacity for rationally assessing our interests) leads to lower levels of cooperation, and that cooperative strategies are pervasive, but generally non-deliberative. Alger, et al. (2018) offer a formal model showing that exactly such a divergence can sustain social behavior. Note that this argument would apply to inclusive fitness explanations as well (see above, Footnote 8): if altruism is selected for at the gene level, but strategies are effectuated by individual organisms, such a divergence must still arise.
  12. Thus, even if assortativity is a property of a dyadic matching model, the enforcement of assortativity will still be a public good in the sense of the previous section.
  13. Miller is concerned here to show that the incentive gap in the firm can be closed by a non-maximizing “company culture” which allows credible commitments. This constitutes the divergence necessary to approximate Pareto-optimality in joint production.
  14. This suggests a novel interpretation of the rise of scientific rationalism (in the Weberian sense) in the West as a transition from belief-based cooperation to preference-based cooperation. This interpretation is supported by the facts that 1) scientific rationalism has been accompanied from the beginning by persistent worries of social decay, 2) that decay has so far failed to materialize, at least in terms of organizational capacity, and 3) that people from Weberian-rationalist cultures do indeed seem to have a stronger preference for altruism (specifically, they are far more generous in one-shot dictator and ultimatum games than those from more traditional cultures – see Henrich et al. 2010). I will not pursue this line of thought here, however.
  15. Nikiforakis (2008) shows that antisocial punishment is a viable strategy in a public goods experiment (cf. also Rand et al. 2008). Herrmann et al. (2008) demonstrate varying individual potential for antisocial punishment across various societies. Edgerton (1992) shows that such punishment may gain the imprimatur of legitimacy (i.e. an expression of altruistic preferences rather than being a purely self-interested strategy) in more isolated societies.
  16. Rappaport (1971b) argues that inside perspectives with fewer material referents are more adaptable as they can be reinterpreted rather than disconfirmed.
  17. Krugman (1993) is a particularly self-aware example. The real point of classical trade theory, Krugman argues, is not that tariffs can never be welfare enhancing, but to obscure opportunities for rent-seeking that an “optimal tariff” policy would allow. Buchanan and Wagner (1977) lament the eclipse of classical public finance principles by Keynesian aggregate demand management on the same basis.
  18. Buchanan, at least, seems to have been self-aware on his assumed role as Noble Liar: “Our normative role, as social philosophers, is to shape this civic religion” (Brennan & Buchanan 1985: 166). See also Brennan and Buchanan (1988) and Leeson (2018).
  19. From the perspective of this paper, it could be argued that the solution to Hobbes’ dilemma is not an overawing Leviathan – which, per above, poses its own dilemmas – but the fact that humans are apt to generate and internalize Locke-esque ideologies. Locke’s work is not itself an effective answer to Hobbes, but the existence of Locke’s work is.
  20. Science itself is a microcosm of the dilemma. In recent years critical theory has been aimed at the institutions of science, even the scientific method itself, as upholding certain power structures. The “objectivity” of science is derided as a self-serving myth. It is of course true that science does not grant the scientist an Archimedean vantage point from which to view the world without bias. It is, rather, a process of replacing descriptions of objects in terms of our senses with descriptions in terms of other objects (Hayek 1952: 3) – a process which can in principle never reach perfection, and one which benefits some interests over others. The benefits of science, like the benefits of society in general, are vast, but predicated on a prosocial myth – in this case, objectivity.
  21. A large proportion of Islamic fundamentalist leaders, for example, are Western educated (Devarajan et al. 2016), not traditionalists in any meaningful sense. Similarly, scientists and engineers (whom we may take as exemplars of education in the Western scientific-rationalist tradition) are dramatically overrepresented among extreme Hindu nationalists in India (Lutz 2007: 151). The West has also seen its intellectuals swept by waves of ideological extremism, for example in the attraction of fascism and communism.
