Inside and Outside Perspectives on Legitimacy

The Implications of Incentive-Incompatibility

This paper is also available on SSRN as a PDF.

Leeson and Suarez (2015) argue that “some superstitions, and perhaps many, support self-governing arrangements. The relationship between such scientifically false beliefs and private institutions is symbiotic and socially productive.” This paper stakes out a stronger claim: that something like superstition is essential for any governance arrangement, self- or otherwise.

Specifically, we argue that social organization requires that agents’ subjective preferences systematically diverge from the objective fitness landscape in a way that usually (though not necessarily in principle) entails “scientifically false beliefs”. We will refer to the basis of such preferences from the perspective of those holding them as an “inside perspective”, as opposed to a functionalist-evolutionary explanation of their existence, which we will call an “outside perspective”. Drawing on the theory of cooperation, we then show that the two perspectives are in principle irreconcilable, and discuss some implications of that fact for political economy and the prospects of social organization.

Inside and Outside Perspectives: The Example of Ordeals

For reasons that will be clearer in the following sections, the distinction between the inside and outside perspectives can be seen more clearly the more distant we are from the society in question. For this reason we will use as a paradigmatic example something utterly foreign to most contemporary people’s experience: trial by ordeal.

In both medieval Europe (Leeson 2012) and contemporary Liberia (Leeson and Coyne 2012), belief that God punishes the guilty is operationalized in criminal justice rituals – ordeals – designed to discover the judgment of God. In Europe, the accused plunged his hand into boiling water. If God protected him from scalding, it was a sign of innocence; otherwise he was guilty. Similarly, in contemporary Liberia, the accused imbibes the brew of a toxic bark. If the spirits cause him to vomit it up, he is innocent; otherwise his resulting illness indicates guilt.

Such is the inside perspective on ordeals, something like the account one would get from someone living in that society. From the vantage point of modern criminal justice on the other hand, which does not share the basic presuppositions of divine justice, such practices seem backward and arbitrary. But Leeson argues that such procedures are feasible second-bests in the absence of modern standards of evidence and the state capacity to act on them. Specifically, trial by ordeal results in a separating equilibrium whereby the innocent willingly undergo the ordeal and the guilty refuse, meaning that willingness to undergo the ordeal conveys valuable information on the innocence of the accused. He then shows that a large proportion of ordeals were in fact successfully passed, establishing innocence, and implicating the administrators (priests) in consciously or unconsciously adjusting the severity of the ordeal conditional upon the accused’s willingness to undergo it.

This is an outside perspective on the institution of ordeals, analyzing its functionality from some remove and without committing to the society’s presuppositions. Indeed, the two are fundamentally incompatible, not only in the basic sense of their being alternative explanations of the same phenomena, but in the deeper sense that introducing an outside perspective to medieval Europe or contemporary Liberia would destroy the possibility of trial by ordeal. If a criminal were to “see through” the institution from the outside, a criminal justice regime predicated on a belief in the justice of God could not function, as his assent to the ordeal would carry no informational value. And indeed, Leeson shows that known nonbelievers were not subject to ordeals.

Leeson derives several interesting implications from his model of the institution. What is missing, however, is any consideration of strategic behavior at the belief-forming stage. In other words, he does not show why it would be individually rational to believe in the justice of God if one had a choice, except for a suggestion that such belief was bundled with other positive-value services offered by the Catholic church. Indeed, given the structure of the game, deception as to one’s status as a believer must be individually rational. If all agents are rational and equally informed about one another, such a system may be able to persist on the strength of bundling once established, but the question of its origin remains a mystery. How can a signaling game with imaginary costs result in a stable separating equilibrium? – this is a question that Leeson’s oeuvre does not answer.1

In order to approach this question, and before attempting to apply the inside-outside distinction to more familiar institutions, it will be useful to show more formally 1) the necessity of the distinction in principle for any society, 2) the necessity that members of any particular community do not make the distinction with respect to their own community, and 3) the origin and the stability of these conditions in a population of meta-rational agents.

There Is No Incentive-Compatible Social Organization

More generally, a signaling game with a costless signal reduces to a social dilemma, a class of games that includes prisoner’s dilemmas (two-person) and public goods, commons, and collective action problems (N-person). Social dilemmas are non-zero-sum games which are characterized by mutual gains from cooperation, but also a Nash equilibrium of mutual defection. Social behavior is defined by cooperation in such games, against one’s own narrow interests.2

Unfortunately, much of the literature on cooperation and governance generalizes from two-person games to N-person games, and therefore concludes that repeated play is sufficient to establish incentive-compatible governance structures even in N-person social dilemmas.3 This section shows that this inference is not warranted. We use for our paradigmatic game, therefore, a public goods game (or, equivalently, a collective action problem) rather than a prisoner’s dilemma. Governance, broadly speaking, consists in collective action of some sort or another. If it is true that rational agents cannot cooperate in a public goods game, then governance and society more broadly will also be impossible for them.

The lack of an airtight solution, we argue, necessitates distinct “inside” and “outside” perspectives: that is, a divergence between the objective game structure, a social dilemma with an equilibrium of universal defection, and – for at least a subset of agents – a different reckoning of the subjective costs, which transforms the game into one with a cooperative equilibrium.

Social Behavior Poses a Problem

The basic difficulty with sustaining social behavior can be seen in a one-shot public goods game. Suppose there are N agents, each with an endowment of 1. Each agent i has the choice of contributing ci ∈ [0, 1] to a communal pot, in which case γci (with γ > 1) is distributed equally to all agents. Agent i’s payoff function, then, is

(1) $$p_i = 1-c_i + \sum^N_{n=1}\frac{\gamma c_n}{N}$$

whereas the total payoff function, summing pi over all i, simplifies to

(2) $$P = \sum_{n=1}^N (1 + (\gamma-1) c_n)$$
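The simplification can be made explicit: summing (1) over all i, each cn appears in the double sum N times with weight 1/N, so the shares collapse:

$$P = \sum_{i=1}^N p_i = \sum_{i=1}^N (1 - c_i) + \sum_{i=1}^N \sum_{n=1}^N \frac{\gamma c_n}{N} = N - \sum_{n=1}^N c_n + \gamma \sum_{n=1}^N c_n = \sum_{n=1}^N \left(1 + (\gamma - 1) c_n\right)$$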

The total payoff is maximized if cn = 1 for all n, but – provided cn is independent of ci for all n ≠ i – individual payoff is maximized at ci = 0 so long as γ < N. There is a divergence between the private cost of non-contribution (∂pi/∂ci = γ/N – 1) and the social cost of non-contribution (∂P/∂ci = γ – 1).
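The dominance argument can be verified numerically in a few lines. The sketch below is illustrative and not part of the paper's formal apparatus; the values of N, γ, and the other agents' contributions are arbitrary choices.

```python
import random

def payoff(i, contributions, gamma):
    """Agent i's payoff per equation (1): the unspent endowment
    plus an equal share of the multiplied communal pot."""
    N = len(contributions)
    return 1 - contributions[i] + gamma * sum(contributions) / N

N, gamma = 10, 3.0          # gamma < N, so defection should dominate
random.seed(0)
others = [random.random() for _ in range(N - 1)]  # arbitrary behavior by the rest

# Whatever the other N-1 agents do, agent 0 is better off contributing nothing:
p_defect = payoff(0, [0.0] + others, gamma)
p_coop = payoff(0, [1.0] + others, gamma)
assert abs((p_defect - p_coop) - (1 - gamma / N)) < 1e-9  # private loss from contributing

# Yet the total payoff (equation (2)) is maximized by universal contribution:
total = lambda cs: sum(payoff(i, cs, gamma) for i in range(len(cs)))
assert total([1.0] * N) > total([0.0] * N)
```

The private gain from defecting, 1 − γ/N, is positive precisely when γ < N, independent of what anyone else does.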

The public goods game is isomorphic to a commons problem or a collective action problem, all of which are equivalent to an N-person prisoner’s dilemma. In all of these cases, defection is the dominant strategy for rational agents. The public good is not provided; the commons is depleted; collective action is not undertaken; society does not get off the ground.

In addition to these, a signaling game with a zero-cost signal, as in the previous section, is also isomorphic to a public goods game. If ci is an unobservable or imperfectly observable cost that one may bear for the benefit of the group (say, refraining from crime), and the cost of signaling one’s compliance is zero whether or not one bears it (say, enthusiastically assenting to undergo an ordeal), then the signal’s value as an indicator of ci will be a commons which free riders will be motivated to deplete by falsifying the signal. The signal will, therefore, have no informational value in equilibrium. Ordeals may have been an important supporting institution in the medieval criminal justice regime, but they were not, by themselves, sufficient to establish peaceful society.

Repeated Play Isn’t A Solution

It is well known that repeated play can sustain cooperation in two-person prisoner’s dilemmas for large ranges of payoffs and discount rates, provided the end of the game is not known, by allowing players to punish defectors. By playing a trigger strategy, for example, where one player responds to defection by defecting in all future games, one player can threaten the other with the loss of all future gains from cooperation, which – for a wide range of discount rates – is substantially larger than the one-shot gain from defection. For narrower but still plausible ranges of discount rates, more forgiving strategies such as tit-for-tat (where the retaliation lasts only for a single subsequent period), or even tit-for-double-tat (where one period of retaliation is triggered only after two defections) can be cooperation-supporting equilibrium strategies (Axelrod 1984), especially where mistakes are made.

The same will be true in an N-person public goods game, provided players have perfect information and make no mistakes. Consider an infinitely repeated version of (1), with the simplifying modification that ci may only take the values of 0 or 1, meaning the agent has a binary choice of whether or not to contribute. A trigger strategy, by definition, is not robust to the commission of any mistakes. If we introduce a parameter ε ∈ (0, 0.5) for the probability that an agent mistakenly fails to contribute where he meant to or vice versa,4 a more forgiving strategy will be necessary if a single mistake is not to snowball into defection. Intuitively, this lowers the difference in payoffs between cooperation and defection, and therefore requires a lower threshold discount rate in order to be feasible.

There is a wide variety of possible strategies in such a game, each of which entails a different payoff structure. The general problem, however, is that punishment in a repeated public goods game is diffused over all agents, unlike in a prisoner’s dilemma where the dyadic structure makes it straightforward to punish the defector and only the defector. As Bowles and Gintis (2011: 63-67) show using a simulation, and Fehr and Gächter (2000) confirm in the lab, contribution to a public good drops off precipitously as N rises beyond about 5, even for very low error rates (0-0.02) and discount rates (0-0.04).5
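The flavor of this result can be seen in a minimal simulation (ours, not Bowles and Gintis's actual model): every agent plays a noisy trigger strategy, intending to contribute if and only if everyone contributed last period, with an error rate ε. All parameter values below are illustrative.

```python
import random

def avg_contribution(N, eps, T=200, seed=0):
    """Average contribution rate over T periods when all N agents play a
    noisy trigger strategy: intend to contribute iff everyone contributed
    last period, but flip the intended action with probability eps."""
    rng = random.Random(seed)
    everyone_contributed = True
    total = 0.0
    for _ in range(T):
        actions = [everyone_contributed != (rng.random() < eps)  # bool XOR = error flip
                   for _ in range(N)]
        total += sum(actions) / N
        everyone_contributed = all(actions)
    return total / T

def mean_over_runs(N, eps, runs=50):
    return sum(avg_contribution(N, eps, seed=s) for s in range(runs)) / runs

eps = 0.02
small_group = mean_over_runs(2, eps)
large_group = mean_over_runs(20, eps)
assert small_group > large_group  # cooperation unravels faster as N rises
```

The intuition is that the expected time until someone's first mistake triggers the collapse shrinks roughly in proportion to 1/(Nε), so larger groups spend almost no time in the cooperative phase.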

Targeted Punishment Defers, But Does Not Solve, The Problem

The public goods game as set out so far is somewhat more limited than real-world public goods games. In particular, it is overly restrictive to assume that the only margin of choice is contribution or noncontribution. Real-world social behavior operates on many different margins, and choices along one can influence cooperation in another, for better or for worse (Reiter, et al. 2018).

Suppose now that individual i can pay some cost νij to “punish” non-contributor j by subtracting νij from j’s payoffs that period (the total product therefore falls by 2νij). From the perspective of j in an effective punishment regime, if ∑νij over all i ≠ j is greater than the gains from defecting, there is no divergence between private and social cost, and j’s dominant strategy is to contribute.

Things look different from the perspective of agent i, however. An agent facing the choice of whether to punish j faces the cost νij, but – because this is a public good – the benefits of j’s future cooperation accrue to all agents. In other words, the “punish non-contributors” game is simply another public goods game superimposed upon the first. Even in a repeated game, it will be in i’s interest to free-ride on the punishment of j.
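The asymmetry can be put in numbers. With hypothetical parameters (N = 10, γ = 3, ν = 0.5), suppose i's punishment deters j, who then contributes his full endowment:

```python
# Hypothetical parameters: punishment as a second-order public good.
N, gamma, nu = 10, 3.0, 0.5

# If i's punishment deters j, j contributes 1 next period. i pays nu,
# but receives only a 1/N share of j's multiplied contribution:
private_gain_to_i = gamma / N - nu

# The group gains (gamma - 1) from j's contribution, less the deadweight
# cost of the punishment episode itself (nu paid by i, nu lost by j):
social_gain = (gamma - 1) - 2 * nu

# Punishing is socially beneficial but privately irrational:
assert private_gain_to_i < 0 < social_gain
```

Here i's private return to punishing is 0.3 − 0.5 < 0 even though the group as a whole nets 2 − 1 > 0: the same wedge between private and social cost as in the first-order game.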

Targeted enforcement of contribution brings us into strategic territory that begins to look like governance. And immediately we run into the fundamental problem of governance: quis custodiet ipsos custodes? – “who watches the watchmen?” – a phrase whose original formulation in Latin testifies to its pervasiveness and intractability. In our case, we can reformulate it as: who punishes the punishers who fail to punish? Second-order punishment is beset by the same problem on another level, along with third- and fourth-order punishment and so on.

We will call this problem the incentive gap: the impossibility in broad classes of social dilemmas of eliminating the temptation to defect among some subset of agents, whose defection would lead to the total unravelling of cooperation.

The Impossibility of Incentive-Compatible Governance

A public goods game is not, of course, exhaustive of the class of social dilemmas. Others, such as sequential dyadic interactions selected from a larger group, have mechanisms such as reputation to direct punishment at noncooperators under certain conditions (Kandori 1992).6 Information may be more or less public in various contexts, and agents may have a wider or narrower range of choices in contribution or punishment. One may choose to model the conflict as one between the interests of one agent at multiple points in time rather than between multiple agents at one point in time (Kydland & Prescott 1977; Root 1989), in which case we have a Commitment Problem. Nevertheless, for plausible rates of discount and error, there exists no potential structure or set of strategies to ensure that every member in a large group has an incentive to cooperate in the face of social dilemmas, as Miller (1992) shows for organizations, and Bowles & Gintis (2011, ch. 4-5) show for social behavior more broadly. To the extent that social sanctions are effective in rendering cooperation the dominant strategy, they do so by placing the enforcers – whether the entire population or some specialized subset – into another social dilemma.

What then of the infinite variety of equilibrium strategies possible under the folk theorem? Bowles and Gintis argue that most of these equilibria, even for dyadic interactions, are “evolutionarily irrelevant” – that is, there is no reason to expect the folk theorem to be actually operational under conditions of imperfect information, and there is no feasible path for the evolution of such strategies from a starting point of noncooperation. A viable strategy must be robust to error, and be able to outcompete noncooperators under a wide variety of unfavorable situations. This requirement is even more stringent for N-person games. “Knife-edge” equilibria and other strategies that do not meet these criteria shed no light on the strategies actually employed by humans.

Social dilemmas, actual and potential, are pervasive. Despite the importance in the developed world and (especially) in economic theory of opportunities for dyadic exchange, the very existence of a market (and, for that matter, of a state) rests on the provision of a number of genuinely public goods on both micro and macroeconomic levels, most importantly the respect and/or enforcement of property rights. Miller (1992), for example, is concerned with the equivalent problem of joint production in a firm. The literature on organizational economics, likewise, commonly assumes the inability to write complete contracts, which is simply to say that the variety of choice margins open to two transactors precludes the ability to ensure incentive compatibility ex post, even assuming perfect enforcement of written contracts. This very open-endedness is an impediment to cooperation and commitment (Stewart et al. 2016), a problem which inhibits the establishment of both potential exchange relationships (Harwick 2017) and governance solutions.7 The much-celebrated fact in economics that incentive-compatible Pareto-optimal resource allocations exist given well-defined property rights, complete contracts, and limited behavioral repertoire, should not blind us to the gulf separating the Arrow-Debreu world of general equilibrium from the real world, where open-ended behavior makes complete contracts impossible and property rights costly to establish.

Self-Deception and Cooperation

Nevertheless, though perfectly rational agents cannot under realistic circumstances jointly provide public goods in large groups, self-deceiving agents can do so, and – furthermore – can establish themselves in a population of rational maximizers. At a minimum, such agents must find intrinsic utility in punishing non-contributors, and deliberately ignore the potential for free riding. In other words, their subjective preferences must systematically diverge from the objective fitness landscape. If such a subpopulation manages to achieve a mass sufficient to impose its preferences on the remaining rational maximizers, cooperation can be stable (Bowles & Gintis 2004; 2011 ch. 9), and the incentive gap can be closed.

But why should there be a correspondence between subjective preferences and the objective fitness landscape in the first place? – after all, de gustibus non est disputandum. Economists are accustomed to regard preferences – at least, preferences over consumer goods – as strictly exogenous to their analysis, with the formation of preferences left to other disciplines such as sociology. This circumscription to preferences over consumer goods – which, in a Walrasian world, includes all preferences – allows the economist to operationalize his model with a strict preference for income (Buchanan 1969). Indeed, an agent whose utility function was decreasing in income would be utterly irrelevant for the economist analyzing any market. In usual economic analysis, the fitness landscape corresponds to income, so closely in fact that the assumption of utility maximization can be dispensed with entirely, with market outcomes selecting for strategies corresponding more closely to it (Alchian 1950; Becker 1962). Economic rationality – which is to say, convergence between subjective preferences and the objective fitness landscape – does not have to be assumed as basic to the model; it can be derived from the environment. Even the thoroughgoing subjectivist, therefore, must assume some correspondence between subjective preferences and the objective fitness landscape for analysis to proceed at all. In a Walrasian world, agents’ preferences for consumer goods can be safely regarded as “data” without impairing the necessary correspondence, so long as consumer goods all exhibit a positive income elasticity of demand.

Simple Darwinian logic, such as that employed by Alchian and Becker, seems to demand correspondence. If an agent is not interested in its own survival, it is less likely to survive. And yet, by the logic of Bowles, Gintis, and Miller, strict correspondence is basically incompatible with human (or any) social organization among distantly related agents in a world with imperfect information. Furthermore, experimental evidence shows unequivocally that such a divergence does in fact exist: humans are, on some margin, genuinely altruistic (Bowles and Gintis 2011, ch. 1; Tomasello 2009), and social organization depends on some such tendency, at least in the breach.

To call altruism “self-deception” is not a claim about the psychology of altruism. Rather, it is to take strict correspondence, as simple Darwinian (or, Alchian-Becker) logic demands, as the benchmark of strategic rationality: for an organism to behave in a way that advances anything other than its own fitness goals, whether farsighted or myopic, is ineluctably maladaptive. The actual psychological heuristics an organism uses are irrelevant if we assume sufficient selective pressure.8 “Self-deception” – even referring to a pure preference phenomenon – points to the fact that cooperative strategies such as humans in fact employ systematically fail to maximize individual fitness.

The inside and the outside perspectives correspond, respectively, to looking at an institution from the perspective of its members’ subjective preferences and beliefs, and of the objective fitness landscape. And because these consist of (respectively) preferences for and costs of punishing the noncontributing or nonconforming behavior of others, we can think of them as inside and outside perspectives on legitimacy – i.e., on the determination of and coordination upon focal punishment strategies. The question of legitimacy is important, in particular, for preventing punishment from devolving into warring factions of mutual punishers, as often still manages to happen in human societies (c.f. Bowles & Gintis 2011, p. 26ff).

The inside-outside perspective distinction is not identical with the fact-value distinction, but the latter does follow straightforwardly from the former. If social organization necessarily relies upon maladaptive altruistic preferences in the breach, and if the function of human morality is to coordinate cooperative strategies (Curry 2016; Curry et al. Forthcoming), then it will be impossible to derive a morality that sustains human society from the nature of things (i.e. from the objective fitness landscape). To accept the broad and universal features of human moral life is ipso facto to deny the ability to derive normative force from the objective fitness landscape. Facts and values are related, of course – where else would morality come from if not the nature of things? – but the relationship cannot be a deductive one.

The Inside Perspective as Preference or Belief

The initial example of ordeals had the inside perspective as a belief phenomenon. Agents must be convinced, either by themselves or by others, that it is really in their interest to employ a strategy which is in fact dominated by another – a “Noble Lie”, so to speak. If part of this strategy includes punishing others who fail to deploy the strategy in question, then this belief can even be self-fulfilling in most cases – it really will be in the interest of most people to deploy the strategy – and operate mainly in the breach without being disconfirmed. In this way, larger-scale political organization can get off the ground without depending on highly or uniformly altruistic preferences. A relatively small number of altruists9 in key positions will be sufficient to maintain cooperative norms.

There are tradeoffs to closing the incentive gap using beliefs versus preferences. Belief-based inside perspectives are not necessarily robust to outsider contact, for example: it is more difficult to maintain rich factual beliefs when confronted with other functional cultures maintaining incompatible factual beliefs. Ecumenical polytheism, such as that practiced by the Roman empire, was an institutional technology to preserve local norms and beliefs in the face of contact with distant invaders, which allowed some degree of coexistence rather than the total war that had been characteristic of pre-empire conflict. Evangelical monotheism was another, which dealt with inter-perspective conflict by homogenization.

What characterized the rise of scientific rationalism (in the Weberian sense) in the West was a historically unprecedented method of maintaining cooperation: an inside perspective relying among the masses primarily on internalized altruistic preferences rather than beliefs. It is good to cooperate regardless of the facts. With beliefs “freed” from their function of enhancing cooperation, we should expect reduced cultural-selective pressure against atheism and iconoclasm, though not necessarily any positive selection for them. And indeed this is what we see: there are certainly stable iconoclastic niches in the West, though the strong secularization hypothesis has not been borne out (Finke & Stark 2005). This dynamic explains both the persistent worries of decay in the social order during Western Europe’s secularization, as well as the failure of any such decay to materialize, at least in terms of organizational capacity, which increased spectacularly throughout.

In principle, a population that preferred cooperation sufficiently strongly for its own sake could dispense with noble lies entirely, provided its members were still willing to punish defection wherever it did arise. Nevertheless, in practice, a preference for altruism can only withstand so much defection. Humans do make deliberative choices on the margin, and in experimental public goods games, even groups highly inclined to cooperate at first will quickly decay to negligible contributions (Ledyard 1995). For this reason, any nonauthoritarian society – that is, one where overt punishment can be kept to a reasonable minimum – must rely on some combination of false facts and maladaptive preferences among the masses to maintain the divergence between objective and subjective reckonings of costs. The more intrinsically altruistic will be able to get by with fewer factual commitments, and (therefore) with more abstract religions and ideologies. A richer belief system can satisfy both groups with a single body of doctrine: metaphysics and theology for cooperators; the wrath of God for defectors. Indeed, vengeful deities appear in the historical record to be strongly linked with the rise of large-scale political organization (Norenzayan, et al. 2016). Finally, for those who nevertheless expect gains from or derive pleasure from defection, there is punishment – which, when effective, itself relies primarily on the altruism of the first group.

Normative Drift and the Invisibility of the Inside Perspective

The fact that inside and outside perspectives are generally incompatible explanations of the same phenomena means that it will be difficult to criticize a culture’s norms from within that culture. An effective inside perspective must appear self-evident; in other words it must, whether through beliefs (e.g. in a moralistic deity) or preferences (e.g. the self-evidence of the Golden Rule among post-Christian Westerners), make itself invisible and present itself as an ultimate fact. Inside perspectives which fail to do so, quite simply, do not persist.

In the absence of outside contact, therefore, insular societies are prone to normative drift, which is to say there are no internal or external forces tending to select for prosocial rather than antisocial norms: no internal forces because its inside perspective remains invisible to its own practitioners, and no external forces by hypothesis. Bowles and Gintis (2011, ch. 10) show that, in the absence of strong external pressure, fitness-reducing norms can hitchhike on a more general norm-internalization capacity, and the invisibility of the inside perspective indicates how exactly the human deliberative capacity fails to weed out pathological norms, especially antisocial punishment.10 The same logic also holds for hegemonic societies (Woodley et al. 2017): without effective inter-societal competition to weed out pathological norms, beliefs, and practices, normative drift may set in and allow self-destructive norm complexes to thrive unchecked.

Implications for Political Economy

Inside and Outside Perspectives in Political Economy

The fact that economists frequently find themselves on the wrong side of sacred values should not be taken to imply that economics as a discipline stands firmly on the ground of the outside perspective. Indeed, there is a rich tradition of inside-perspective economics, most notably welfare economics, which in its pure axiomatic form is unfalsifiable by design (Buchanan 1969). Various other simplifying constructs, such as perfect competition, are not descriptively valid. To point this out is not, pace some radical critiques of neoclassical economics, to impugn the validity or the usefulness of the theoretical constructs. Indeed, to the extent that neoclassical welfare economics obscures opportunities for strategic rent-seeking from policymakers with the assumption of competitive markets, such “lies” may be truly noble, cooperation-enhancing, and self-fulfilling in exactly the same sense as a belief in the wrath of God.

The distinction also runs directly through the middle of the economics of institutions, with Nobel prizes on both sides. On the outside are economic historians such as North (e.g. 1990; 2005), Greif (2009), and Acemoglu and Robinson (2005; 2012), who – though they have a normative goal of economic development – approach the question functionally and historically. On the inside are “rational reconstructions” such as Buchanan and Tullock (1962) and Rawls (1971), who are concerned to connect existing or potential institutions with widely shared moral intuitions (sacred values) using thought experiments rather than history. The same distinction can be traced very far back through the Western canon: quite apart from the quality of the respective analyses, Hobbes ([1668] 2012) and Locke ([1690] 1960) for example were engaged in projects on decisively different sides of the divide. If it is true that the inside and outside perspectives are irreducible one into the other, then it is hardly surprising that the arguments of Hobbes and Locke have both persisted despite their basic incompatibility.11

The divide does not map precisely onto what have traditionally been understood as “positive” and “normative” economics; indeed, one of the more persistent critiques of the distinction is that a truly wertfrei economics is impossible, and the very act of interpreting sense data is a fundamentally normative exercise. Even so, the inside-outside distinction seems to capture much of what the positive-normative distinction was meant to: there is a basic difference between analysis and legitimation, with most “positive” economics counting as the former, despite its implicit normative commitments.

Critical Theory and the Ethics of Political Economy

Nevertheless, it would be a mistake to draw the lines of economics, or of science more broadly, to exclude exercises in legitimation. Such is the goal of critical theory, broadly conceived – perhaps the most salient example of normative drift in the developed world.

Beginning with the Marxian logic of “base” and “superstructure”, critical theory can be understood as a method for analyzing social institutions (the superstructure) in terms of the objective fitness landscape (the base), i.e. a relentless outsiding.12 Divergence between subjective preferences and the objective fitness landscape constitutes “false consciousness” or some equivalent term: in other words, supposedly oppressed classes could do better for themselves by minding their own fitness and declining to buy into their community’s noble lies.

Per the foregoing analysis, this contention is correct – or at least, there always exists some such class. And yet, it also follows from this analysis that reducing higher values to power relations renders social cooperation strictly impossible (cf. Hayek 1988: 68). We have argued that there will always be parties in a society whose dominant strategy is defection. Critical theory is simply a method for identifying those parties and alerting them to that possibility13 – perhaps the most deliberate method of doing so, but far from the only method. Indeed, the game theory in this paper points to the very same possibility.

This poses an ethical dilemma for the political economist as a student of society, no less for the game theorist and the new institutionalist than for the critical theorist. On the one hand, a functionalist outside perspective is valuable for identifying systemic problems in economic development and institution-building (e.g. Acemoglu 2003). Without the ability to accurately identify the source of institutional failure, efforts at foreign aid and development are likely to be quixotic, if not harmful (cf. Easterly 2001, ch. 2-7). On the other hand, given that approaching social institutions from an outside perspective (whether critically or not) can render them impossible to maintain, it may be the case that a scientific approach itself will do more harm than good.

For the same reason that explicit rules have a limited ability to support cooperation, there can likely be no hard-and-fast set of prescriptions for dealing with this problem. Nevertheless, there is some reason for optimism. First, many inside-perspective beliefs are quite resilient to disconfirmation, especially where an aspect of sacredness is involved – a fact which has long frustrated iconoclastic intellectuals, but which may limit any damage done by the scientist interested in understanding rather than revolution. Trial by ordeal may be impossible to maintain in a population of atheists, but there is evidence that people believe in order to support such institutions, rather than the institutions existing to prop up belief (Chen 2010; Ager and Ciccone 2016; Auriol et al. 2018). It may, therefore, be difficult to “deconvert” a population without obviating the institution, a fact which would give the scientist much more latitude in inquiry.

Second, the scientist may resort to what Melzer (2014, ch. 6) called “protective esotericism” and self-censor in popular works, a tactic with a long history (as Melzer documents) among intellectuals dealing with contemporary inside perspectives. To the extent that academics can be relied upon for preference-based rather than belief-based altruism (as suggested by Eisenberg-Berg [1979] and Millet & Dewitte [2007], though see Madison et al. [2017]), it will not be necessary to censor more technical social-scientific work.

Even so, the task of deriving policy implications from social scientific work is complicated by this analysis. As organizational and new institutional economists have long recognized, optimal policies may be outside the feasible opportunity set in the absence of commitment power. But in a landscape riddled with local optima and varying distributions of altruists willing to take up the slack left by failing belief, the importation of scientific-rationalistic modes of thought, even in full recognition of the commitment problems in the way, may clear away the coordinating power that previous institutions offered. And without a sufficient proportion of preference-altruists to maintain Western-style liberal democratic institutions, more virulent ideologies may rush in to fill the gap.14

Conclusion

The logic of social behavior entails a structure of human motivation that implies an irreducible distinction between inside and outside perspectives on social institutions – that is, between legitimating exercises on the one hand, and analytical exercises on the other. That same logic implies that the distinction will in normal circumstances be invisible to the member of a particular society, to the extent that invisibility aids the internalization of an inside perspective. Because of the problems inherent in the social organization of distantly related agents, it is necessary that the subjective preferences of those agents diverge from their objective payoffs in precisely the places that support the provision of public goods, and the punishment of noncontributors.

This connects the game theory of cooperation not only with the logical structure of human morality, but also with the relationship between informal institutions and economic growth. With the developed world relying upon and exporting norms and modes of governance that require preference-based altruism, the problem facing the world going forward will be to create conditions conducive to the survival and flourishing of that altruism, and to restrain its drift into antisocial punishment.14


  1. Similar considerations militate against the theory that language evolved for the purpose of manipulation or deception (e.g. in Dawkins and Krebs 1978). It must be incentive-compatible not only to send a signal, but also to receive and act upon a signal (Fitch & Hauser 2002; Searcy and Nowicki 2005: 8). Knight (1998) takes up these considerations and reaches a conclusion similar to this paper’s, with trust in the veracity of language backstopped by “literally false but metaphorically true statements” drawn from the quarries of the costly rituals implicit in a normative community.
  2. Sociality in this sense is distinct from gregariousness (e.g. herding behavior), which is incentive-compatible. Sociality generally depends on a favorable mix of coordination games and social dilemmas faced by the cooperating group (Bear et al. 2017), but – as this section shows – the dilemma aspect is irreducible. Because coordination games have cooperative equilibria, we leave those to the side and focus on social dilemmas as the more difficult impediment to social behavior.
  3. E.g. the classic simulation in Axelrod (1984). Kandori (1992) proves a similar result for repeated pairwise games where the pairings are sampled randomly from a population.
  4. Or, equivalently, the probability that any agent assesses another agent to have failed to contribute when in fact he did contribute, or vice versa.
  5. The same argument also applies to inclusive fitness explanations for cooperation, i.e. that altruistic genes can proliferate on the basis of kin selection. As Bowles and Gintis (2011: 60) note, relatedness enters into the structure of payoffs from a gene’s perspective in exactly the same way as the probability of a repeated interaction, which is to say that the relatedness coefficient within human groups must be implausibly high in order for kin selection to support altruistic behavior.
  6. Which – however – has the limitation that information must be public and reliable lest agents have an incentive to lie.
  7. Ostrom’s (1990: 185; 2005: 259) famous design principles for the management of common pool resources, particularly those relating to monitoring, sanctions, and punishment, do presuppose altruistic preferences of one form or another. In this sense, Ostromian agents are not rational maximizers. See the following section.
  8. Koppl (2002) makes this argument in the context of Schutz’ phenomenology: to the extent the “system constraint” (in the economic context, market entry and exit on the basis of profit and loss; in the biological context, selective pressure) binds tightly, we can employ more abstract types in scientific analysis and legitimately ignore actual psychological processes.
  9. Which is, needless to say, not synonymous with “good person” even from any given inside perspective. See the following subsection.
  10. Herrmann et al. (2008) demonstrate varying individual potential for antisocial punishment across societies. Edgerton (1992) shows that such punishment may gain the imprimatur of legitimacy (i.e. become an expression of altruistic preferences rather than a purely self-interested strategy) in more isolated societies.
  11. From the perspective of this paper, it could be argued that the solution to Hobbes’ dilemma is not an overawing Leviathan – which, per above, poses its own dilemmas – but the fact that humans are apt to generate and internalize Locke-esque ideologies. Locke’s work is not itself an effective answer to Hobbes, but the existence of Locke’s work is.
  12. Critical theory is distinguished from orthodox Marxism, however, in its rejection of historical materialism.
  13. Science itself is a microcosm of the dilemma. In recent years critical theory has been aimed at the institutions of science, even the scientific method itself, as upholding certain power structures. The “objectivity” of science is derided as a self-serving myth. On a factual basis, it is of course true that science does not grant the scientist an Archimedean vantage point from which to view the world without bias. It is, rather, a process of replacing descriptions of objects in terms of our senses with descriptions in terms of other objects (Hayek 1952: 3) – a process which can in principle never reach perfection, and one which (in the last resort) benefits some interests over others. The benefits of science, like the benefits of society in general, are vast, but predicated on a prosocial myth – in this case, objectivity.
  14. A large proportion of Islamic fundamentalist leaders, for example, are Western educated (Devarajan et al. 2016), not traditionalists in any meaningful sense. Similarly, scientists and engineers (whom we may take as exemplars of education in the Western scientific-rationalist tradition) are dramatically overrepresented among extreme Hindu nationalists in India (Lutz 2007: 151). The West, of course, has also seen its own intellectuals swept by waves of ideological extremism, as for example in the attraction of fascism and communism.

