JSCOPE 2006
Strategies, Rationality, and Game Theory in the Philosophy of War
Randall R. Dipert
C.S. Peirce Professor of American Philosophy
Department of Philosophy
SUNY at Buffalo
rdipert@buffalo.edu
In this paper I would like to develop three observations
about current work in the philosophy of war:
1. The contemporary
philosophy of war shows little awareness of, and has not incorporated, major
contemporary results of game theory.
These results have had a major influence in political science and economics. Relevant research in game theory includes most notably the game-theoretic studies of Robert Axelrod, but also the Nobel Prize-winning work of John Nash (Nobel, 1994) and of Robert J. Aumann and Thomas C. Schelling (Nobel, 2005).[1]
This oversight is somewhat surprising, since recent results in evolutionary biology, some of which are themselves products of game theory (by Richard Dawkins, John Maynard Smith, and others), have been taken up in moral theory—notably in a recent discussion of the possible evolutionary naturalness of altruism and Humean sympathy.
2. Contemporary
philosophy of war has shown little patience with what is ordinarily called
Geopolitical Realism (Realpolitik), either ignoring it, rejecting it out of hand, or defining it only as a philosophically unpalatable version of Military Realism: the view that there is no morality whatsoever in war. Probably the majority of international affairs experts are geopolitical realists to some degree.
3. Contemporary
philosophy of war has scarcely discussed in a thorough way the morality of
preventive war, (a) often distorting historical attitudes to preventive war
(e.g., saying that traditional philosophy of war has uniformly rejected it),
(b) confusing it with a procedural-legal consideration, namely the legality of preventive war, which is not directly relevant to the morality of war and is in any case far more controversial than usually portrayed; and finally, (c) much of the extensive contemporary hostility to preventive war apparently comes from the undemonstrated if initially plausible assumption that a strategy of preventive war, if adopted by all nations, would lead to more wars and destruction than would a ban on preventive war.
Game theory
(since 1948) has been thought to be
relevant to a discussion of war in political science because it appears to
capture salient elements of what could be called "idealized conflict"
between two or more parties. As a branch
of mathematics, game theory's results have the status of being a priori truths. The fact that some results have been obtained
only through large-scale, randomized computer simulations might give the
impression of their being a posteriori
and experimental results. However, some
complex a priori results can
presently be known by us only through a
posteriori observations, and there is nothing inherently mysterious or
troubling about this. To ignore the results of game theory in philosophy because of their superficially experimental and idealized character would be like ignoring the results of arithmetic, such as addition and subtraction.
Imagine a discussion of utilitarianism
or another consequentialist theory that began by doubting arithmetic
because sometimes, when you "add" two rabbits together, you get—in
short order—many more rabbits through reproduction. Game theory surely has the epistemological
character of arithmetic.
Geopolitical realism often might appear to be amoral or even immoral because of realists' reluctance to treat moral considerations as primary or even important. Indeed, the impatience of some realists with any discussion of moral considerations, such as the human rights of other countries' citizens, might give the impression that they believe that moral theory has no place in determining correct strategic policy. One key tenet of geopolitical
realists seems to be roughly that war is a "given" in human
existence, and that we must then deal with this fact in a rational way. There are three versions of this tenet, which
might be called the psychological-anthropological thesis about war, the
historical argument, and a more rarely articulated rational argument. If we believe in scientific induction, and
notice the frequent appearance of violent conflict whenever humans have
organized themselves into groups in human history and shared geographical
proximity, the pull of the historical tenet of realism should be very
strong. Both the psychological and the
historical versions of the Realist thesis of the ubiquity of war typically have
given rise to a sterile philosophical discussion about the "nature" of human beings and human societies, about what such a nature is, and about fatalist or determinist issues of whether we can or should prevent the manifestation of this nature. Namely, even if human beings, and especially organized groups of them, are inclined or prone to violent attack on one another (whether from greed or from suspicion), surely these inclinations do not have the force of a necessary compulsion. However, the fact that any given individual
can overcome this inclination does not detract from the fact that each of us is
all but certain to live in a world where other nations and other individuals do
initiate violence. Given this fact, some
version of Geopolitical Realism is the most rational of our unpleasant
options. It is moral because it is
rational in a world like ours.
I would propose that game theory gives us considerable evidence that, whatever conclusion we draw about the inherent violent natures or inclinations of human beings, a further argument can be made that violent conflict is, simply put, rational—one might say unfortunately rational. If it is rational to engage in war, perhaps even sometimes to initiate wars, then refraining from war is more likely to be morally supererogatory than obligatory.
A distinction here must be made between a rationality that is narrowly
based on self-interest, and what I will call an interest in the general
welfare—in others' or total welfare and in the future welfare of others. It has been fairly easy to dismiss the possible rationality of short-term self-interest as morally insignificant; it is much more difficult to do so if an argument can be made that all or most others are better off in the long term if we engage in wars.
This is precisely what I will argue: everyone is better off in the long-term future if all parties have policies of initiating certain kinds of wars, namely retaliatory ones and even preemptive and preventive wars, while refraining from others. Furthermore, this is a matter of game-theoretic, and hence mathematical, fact.
(There are some important assumptions and caveats in this
conclusion.) In particular, there are
wars that would be clearly unjust by traditional Just War criteria but which
nevertheless are in the long term general interest.
One important class of games has been of considerable interest almost from the beginning of game theory. These are games that mirror the salient features of conflict between two parties, especially "non-cooperative" games. In this general sense it does not matter whether the parties are individuals or nations. The "conflicted" aspect of conflict games arises from the fact that what is advantageous to one party is to the disadvantage of the other.
Zero-sum games of course fall into this category. Of special interest however are games that
resemble the Prisoner's Dilemma. If one
party makes a move, say, initiating violence, and the other does not, then the
first party comes out markedly ahead.
One further twist is necessary, namely that if both parties should
initiate attacks, then both parties would be slightly worse off than if they
both did not initiate this attack. There
are various ways of conceptualizing this scenario in terms of international
conflict. One can think of this in terms of military expenditures. It is fairly clear, for example, that each nation would be better off if it did not have military expenses because it did not need them—that is, if every nation refrained from even having armed forces and thus lacked the capacity to attack.
Because of work
by Nash and others—the so-called Nash equilibrium—it has plausibly been argued that the rational action for both parties is to make these expenditures and thus be slightly worse off than if they could both agree and be bound that neither party has military capacity. From that position, if either side made a different move, it would be far worse off. In the real world the problem is not just agreement but verification—adequate knowledge that the other party is indeed foregoing this capacity to attack. The intuitive attractiveness, and hence rationality, of the Nash equilibrium grows as the difference grows between the payoff to an attacking or armed nation and the payoff to one that does not counterattack or is unarmed.
It is not so important that we believe those taking part in this
conflict are bloodthirsty and aggressive, or that they are greedy for the
other's captured wealth. Instead we might view such relatively frequent attacks and military expenditures as the rational response to a future threat.
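The structure of this dilemma can be made concrete with a small illustrative sketch (in Python, purely for exposition; the payoff values are the standard ones introduced in the scoring section below, and the move labels are my own). Attacking is the best response whatever the opponent does, so mutual attack is the Nash equilibrium even though mutual peace would leave both parties better off:

# Illustrative sketch only: a one-shot "armament" dilemma.
# Moves: "P" = refrain/disarm ("cooperate"), "A" = attack/arm ("defect").
PAYOFF = {  # (my move, opponent's move) -> my payoff
    ("P", "P"): 3, ("P", "A"): 0,
    ("A", "P"): 5, ("A", "A"): 1,
}

def best_response(opponent_move):
    # Return the move that maximizes my payoff against a fixed opponent move.
    return max("PA", key=lambda my: PAYOFF[(my, opponent_move)])

# Attacking is the best response to either move, so (A, A) is the equilibrium,
# although mutual peace (P, P) would pay 3 to each rather than 1.
assert best_response("P") == "A" and best_response("A") == "A"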
I have assumed up to now that we are looking only at a single opportunity to engage or not engage in conflict. This is a description of the simple, one-play Prisoner's Dilemma (PD). Of considerably more interest is the so-called Iterated Prisoner's
Dilemma (IPD): here we engage in conflict over and over again. This means that over time each party can gain
some understanding of the other party's policies: one can
"communicate" with the other party and can also persuade them to
adopt certain policies through one's own policy.
In computer
simulations of the consequences of parties' adoptions of certain policies,
Robert Axelrod has shown that a consistently superior strategy over almost all other strategies in the IPD is Tit-for-Tat: if your enemy attacked you on the last move, then you attack him on this move; otherwise refrain from aggression. He did this by staging a competition in which game theorists could submit what they thought was the most powerful strategy, and then playing these submitted strategies against one another (with hundreds of iterations).
Furthermore, he gave a number of analyses of actual behavior—in international conflict and in legislative maneuvering—that seem to show how rational parties would indeed react in something like the IPD. In other words,
he proposed that IPD was a useful model in understanding a wide variety of
real-world activities in which parties sought to understand others' probable
policies and to create optimal strategies themselves for dealing with most of
the behavior they would encounter. Later
researchers have added hundreds of real-world scenarios that seem to duplicate
these payoffs and in which parties slowly reach an equilibrium that approaches
Tit-for-Tat. (Related techniques have also been employed to model processes in evolutionary biology, in which successful strategies constitute adaptations to an environment.)
Axelrod speculated that Tit-for-Tat had these advantages over other strategies because it had several features. First, it was generous: it did not initiate attacks, attacking only if attacked on the previous move. Second, it was clear: an opponent could readily determine what strategy you were using. Third, it was universalizable: if every party adopted it, all would be tolerably well off—at least in the sense of the Nash equilibrium. Fourth, it was relatively forgiving: whatever past or accumulated damage had been done to you, you attack only if the opponent's very last move was an attack. Fifth, it acted to compel others to adopt a similar strategy.
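A minimal sketch of Tit-for-Tat and a toy round-robin tournament, in the spirit of (though far simpler than) Axelrod's competitions, is given below. The payoff values, strategy names, and number of rounds are illustrative assumptions for exposition, not Axelrod's actual code:

import itertools

# (my move, their move) -> (my payoff, their payoff); "P" = peace, "A" = attack
PAYOFF = {("P", "P"): (3, 3), ("A", "P"): (5, 0), ("P", "A"): (0, 5), ("A", "A"): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Peaceful on the first move; thereafter repeat the opponent's last move.
    return their_history[-1] if their_history else "P"

def always_attack(my_history, their_history):
    return "A"

def play(strategy1, strategy2, rounds=200):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = strategy1(h1, h2), strategy2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        s1 += p1; s2 += p2
    return s1, s2

strategies = {"tit_for_tat": tit_for_tat, "always_attack": always_attack}
for (n1, f1), (n2, f2) in itertools.combinations(strategies.items(), 2):
    print(n1, "vs", n2, "->", play(f1, f2))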
It is important
to recognize limitations on these results.
Axelrod's initial research was limited to two-party, symmetric
conflicts. The payoff matrix was fixed
to four standard values. Each party had
perfect knowledge of the history of the conflict—what that party and its
opponent had done last move or ten moves ago.
The strategies tested were drawn from a fairly limited grammar, and there were no adaptive, learning strategies in which one party made serious attempts to learn the other party's strategy and adapt to it. (Later work took account of the
evolution of strategies that evolved according to success against other
strategies.) Finally, the measurement of
success was crude, measuring only the relative success of one strategy over
another.
It is fairly
obvious that these conditions are highly idealized when compared with
real-world scenarios. The simulations
did not take into account the multi-national character of international
conflict and the ability to form shifting alliances. They did not mirror each nation's poor knowledge of history and of other nations' intentions or current strategies. Finally, there is a huge array of means by
which one nation can punish another for harming its interests, or threatening
to harm those interests, from embargos and tariffs to massive nuclear attack.
Measuring Deep Rationality and Morality in IPD
In order to allow summative scoring of many iterations, and
to make it so that a high total score indicates a successful strategy, Axelrod
used the following payoff matrix: each party gets 3 points if both refrain from
attacking ("cooperating" in traditional terminology); a party gets 5
points if it ambushes (attacks/"defects") the other party, while the
ambushed party gets 0 points; and if both parties attack each other
("defect"), they each get 1 point.
The basic PD or IPD requires only that the ordering of these payoffs be
structured in the way they are here, namely, 5 > 3, 3 > 1, and 1 > 0, and
not that their absolute values be fixed in any way. The values could just as well be 500 for
attacking a peaceful player (and 0 for the peaceful player), 499 for mutual
peace, and 498 if both attack. While
these new values would preserve the essential elements of PD, they would create
a strong incentive to avoid at almost all costs the victimization of the
surprise attack. Consequently, it is far more informative to keep track of component counts of the total number of times one succeeds in attacking a peaceful player (AP), both parties are peaceful (PP), both attack (AA), and one is peaceful but attacked (PA). The required ordering is AP > PP > AA > PA. If both parties
attack (AA), the result might be that we survive but have spent a great deal of
money for an anti-ballistic missile system, while if we have no such system and
are attacked the result could be our annihilation.
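The bookkeeping suggested here can be sketched as follows; this is an illustrative fragment with assumed function names, whose point is that the same history of moves can be re-scored under different absolute payoff values while preserving the ordering AP > PP > AA > PA:

from collections import Counter

def outcome(my_move, their_move):
    # Classify one round into the four components discussed above.
    return {("A", "P"): "AP",   # I attack a peaceful opponent
            ("P", "P"): "PP",   # mutual peace
            ("A", "A"): "AA",   # mutual attack
            ("P", "A"): "PA"}[(my_move, their_move)]  # I am ambushed

def tally(my_moves, their_moves):
    return Counter(outcome(m, t) for m, t in zip(my_moves, their_moves))

# The same four-round history re-scored under two payoff schemes that share
# the required ordering.
counts = tally(list("PAAP"), list("PPAA"))
for payoffs in ({"AP": 5, "PP": 3, "AA": 1, "PA": 0},
                {"AP": 500, "PP": 499, "AA": 498, "PA": 0}):
    print(payoffs, "->", sum(payoffs[k] * n for k, n in counts.items()))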
A number of
other modifications would seem to be necessary for minimal realism. First, occasionally one is mistaken about
whether the opponent really did attack on the last move (this is handled in
game theory as "noise").
Occasionally, as a result, one attacks mistakenly, thinking the opponent attacked on the last move. Second, if one is employing Tit-for-Tat and has any reason to suspect that the opponent might also be using a Tit-for-Tat-like strategy (one triggered by noise, or by the opponent's occasionally launching a probing attack), and for some reason the opponent has started
attacking, then it would be useful to attempt to break out of a cycle of
mutually attacking each other. After n attacks, try one peaceful move and
absorb the damage (a unilateral pause), hoping the opponent replies in kind. Finally, scoring should distinguish between a
"just," retaliatory attack and an attack that had its origins in
other sources, such as in initiating an attack simply because one always
attacks after an opponent has been peaceful for 5 moves.
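A hedged sketch of the first two modifications, misperception ("noise") and a unilateral pause that breaks cycles of mutual retaliation, might look as follows; the parameter names and default values are illustrative assumptions rather than the settings used in my simulations:

import random

def noisy_tit_for_tat(my_history, their_history, noise=0.05, pause_after=3):
    # Misperceive the opponent's last move with probability `noise`.
    last = their_history[-1] if their_history else "P"
    if random.random() < noise:
        last = "A" if last == "P" else "P"
    # If both sides have been attacking for `pause_after` straight moves,
    # absorb one hit: play peacefully and hope the opponent replies in kind.
    recent = list(zip(my_history, their_history))[-pause_after:]
    if len(recent) == pause_after and all(m == "A" and t == "A" for m, t in recent):
        return "P"
    return last  # otherwise plain Tit-for-Tat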
Several observations about scoring are in order. First, notice that it might be rational to choose a strategy that is more likely to lose to some other strategies if the absolute damage from that strategy is less. For example, in some cases one might achieve more victories, but the damage to one's own nation might be higher than it would be if one used a "losing" strategy. Consider these scores:
Your strategy A
against opponent: you win, 50 to
opponent's 45.
Your strategy C
against opponent: you lose, 100 to
opponent's 120.
Here, strategy C is preferable to us, even though it loses
to the opponent.
Secondly, morality would seem to require that some attention be paid in one's calculations to the likely total damage, to oneself and to the opponent, of using a certain strategy. There is some value, perhaps very small, that one would sacrifice in order to keep total damage low. Consider:
Your strategy D
against opponent: you win, 50 to opponent's 45.
Your strategy E
against opponent: you win, 51 to opponent's
-1000.
In other words, under strategy E the opponent suffers enormous losses (possibly innocent lives) for your slight gain of 51 rather than 50 under strategy D. Particularly if the opponent's low score indicates losses of innocent lives, under some circumstances one should prefer winning strategies that do not entail such heavy losses for the opponent.
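One simple way to capture this kind of non-summative comparison, maximizing one's own payoff while giving some small weight to total welfare, is sketched below; the particular weighting is an illustrative assumption:

def evaluate(outcomes, welfare_weight=0.1):
    # outcomes: list of (my_score, opponent_score) pairs, one per candidate strategy.
    # Prefer my own score, plus a small weight on combined welfare.
    return max(outcomes, key=lambda o: o[0] + welfare_weight * (o[0] + o[1]))

print(evaluate([(50, 45), (100, 120)]))   # strategy C: (100, 120), despite "losing"
print(evaluate([(50, 45), (51, -1000)]))  # strategy D: (50, 45), sparing the opponent

On this sketch, strategy C is preferred to A because our own score is higher despite losing the head-to-head comparison, and strategy D is preferred to E because one point of relative gain is not worth enormous losses to the opponent.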
Since in the popular (and perhaps
distinctively Western-Christian) conception, retaliation and reprisal are
almost always equated with atavistic revenge, we have what I would call tutored
intuitions against all forms of retaliation.
These tutored intuitions about retaliation and deterrence have been
proven, as an a priori matter, to be
problematic by the mathematics of game theory.
With minimal assumptions about rationality, a strong rejection of retaliatory punishment is a strategy that invariably produces a world in which there is more total destruction than in worlds in which parties practice retaliation.
In a series of computer simulations of the
so-called Iterated (Two-player) Prisoner's Dilemma (IPD) with the modifications
(and others) sketched above, I have taken standard models of conflict and
applied them to preventive war.[2] I have also incorporated some of the
"moralizing" and non-summative features of scoring that are discussed
above. These programs allow two players
to employ a wide variety of probabilistic strategies, including ones where one
player or both are using ‘preventive war’ strategies. In these strategies, one party occasionally
has some fallible knowledge of what the opponent's next move will be. The strategy I used had a baseline probabilistic
Tit-for-Tat strategy in which war was probabilistically initiated 1-15% of the
time (termed ‘noise’), returned to non-attack mode (‘unilateral cease-fire’)
after its own attack on the last move
5-15% of the time, and otherwise applied Tit-for-Tat. The range of values for probabilities
indicates that I had tournaments (of thousands of plays) in which each
incremented value within the range was tried against all other combinations of
parameters. The baseline preventive war strategy was one in which an attack was
initiated 10-90% of the time if the opponent's next move (or two moves in the
future) was an attack; however, this also included 1-20% of ‘false’ preventive
attacks—i.e., preventive attacks where the opponent was in fact not attacking. Otherwise, the strategy was as in the
baseline Tit-for-Tat strategy. One must
remember that there is no single best strategy for IPD simpliciter,
particularly if both relative advantage and absolute measures of destruction
are applied. That is, one strategy may outperform
another strategy, but the absolute destruction even to the winning player might
be relatively high. Even Tit-for-Tat can
be bested by the Constant Attack strategy—as well as by other strategies.
The results of
applying tournaments to dozens of these strategies, each for thousands of
iterations, are described as follows. Preventive war strategies typically slightly
outperformed baseline Tit-for-Tat strategies—roughly by blunting the effects of
the enemy's attack or forcing them into ceasefires. This was the case even when they always
initiated some percentage of false preventive attacks, which then led to strings of retaliations against other Tit-for-Tat-like strategies. The total destruction to both parties was,
with certain values for preventive strategies, less than with any other similar
retaliatory strategy without preventive war.
The preventive war strategies displayed a significant ‘threshold
effect’, namely if mistaken prevention was much higher than 10% of the cases,
then the total destruction to both players increased even if one player
retained a relative advantage. This threshold value, a maximum of approximately 10% mistaken preventive wars, was highly sensitive to the absolute values in the payoff matrix and to the other probabilistic values in the strategy.
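For concreteness, the logic of the baseline preventive-war strategy described above can be sketched roughly as follows. The actual programs were written in Prolog (see note [2]); this Python fragment, with its assumed parameter names and default values, is only an illustrative approximation:

import random

def preventive_tft(my_history, their_history, opponents_next_move,
                   prevent_rate=0.5,   # chance of acting on genuine (fallible) foreknowledge
                   false_rate=0.1):    # rate of mistaken ("false") preventive attacks
    # Attack preventively when fallible foreknowledge says the opponent will attack...
    if opponents_next_move == "A" and random.random() < prevent_rate:
        return "A"
    # ...and occasionally attack on a false alarm, when no attack is in fact coming.
    if random.random() < false_rate:
        return "A"
    # Otherwise behave as the baseline Tit-for-Tat strategy.
    return their_history[-1] if their_history else "P"

A tournament then sweeps prevent_rate and false_rate over the ranges given above (10-90% and 1-20%), playing each parameter combination against the others for thousands of iterations.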
Given
the idealizations inherent in almost any game-theoretic description of a
real-world conflict, this does not prove conclusively that some policy of
preventive war is morally workable in actuality. However, it does strongly suggest that the
intuition or presumption against preventive war is, like the intuition against
deterrent and retaliatory strategies, misguided. Everyone's having such policies does not lead
to more war than if no one had them—at least not in the general case in which
opponents attack sometimes. Perhaps more
to the point, there is reason to suspect that these game-theoretic threshold
effects (e.g., of sharply increasing total destruction for some values of
mistaken preventive war) are deeply related to what would be a reasonable value
for what I have elsewhere called (JSCOPE
2005) "epistemic thresholds." That
is, preventive war is moral only if we have a strategy in which we believe the
opponent will attack with some degree of reliability. Our ability to determine these values by
computationally intensive means indicates possible a priori constraints on epistemic thresholds, and thus that some
universally adopted policies of preventive war are not only morally permissible
but are morally preferable to their non-preventive variants.
Conclusion
I cannot possibly investigate all the ways in which game theory might be relevant to the morality of war—jus in bello as well as jus ad bellum.
A second
assumption is that this incorporation of game theory via its connection to
rationality implicitly endorses a "rule" as opposed to an act-based
conception of morality. Namely, it is
strategies that become the proper object of moral assessment and not individual
actions. I do not make much of an
apology for this, since I believe that some forms of rule utilitarianism (or
rule egoism) seem less indefensible to us than they did a decade or so ago.[4] There is of course no problem at all with
non-consequentialist theories, since they are typically rule-based. However, there is also a key difference
between moral theory for individuals and that for states. For one individual, it is rarely an important factor in the calculation of consequences to keep track of the probable life-strategies of every other individual. One's behavior and perceived strategy do not have an effect on many others. There are too many other individuals, and most
of your life is too hidden from me to infer a pattern even if we are
acquaintances. States, at least
relatively large states, are different in these respects. Histories of nations are accessible in ways
that the histories of different individuals are not. Hence it is possible for a leader to infer
the probable strategy of another nation.
Furthermore, especially in the post-industrial age it is possible for a
nation to punish, and be seen as punishing according to an inferred strategy, any
other nation in a way that is utterly unthinkable for billions of
individuals. (Observe that this makes
visible retaliation especially incumbent on larger states and alliances.) In game-theoretic terms, there is closer-to-complete knowledge of the past behavior of states, and it is possible to make at least crude inductions about their future behavior. If a
rule-based moral theory is plausible for individuals, it is considerably more
plausible for larger states.[5]
The strategy, and the recognition by others of our consistent application of this strategy, are more important than any effects that may seem cruel and unnecessary in a given case. In terms of Just War theory, war
damage as punishment may thus fail both Chance of Success and Proportionality with respect to that
conflict; what may be more important is other nations' observations of the
ruthless application of our strategy.
This is a well-known aspect of Geopolitical Realism that is typically
maligned in moral-philosophical circles.
If game theory is correct, then we should also punish those who themselves
fail to punish flagrant malefactors.
This does not meet the traditional criterion of Just Cause. As we have already seen, game theory suggests
we should accept some preventive strategies that may also violate Just Cause.
There are some other oddities that a game-theoretic approach
to the morality of war will have—at least when compared with Just War
theory. For example, we might think that
proper strategies should always have all parameters public: we tell everyone
what they are and others can see from our actions that we are conforming to
them. This is indeed the case for the
basic Tit-for-Tat strategy. However, let
us suppose that we make public our algorithmic criteria that would be necessary
for us to initiate preventive war: n
amount of evidence of m amount of
opponent's weaponry. That would
unfortunately invite an opponent to work especially hard to hide exactly the
kinds or amounts of evidence n, and
it invites all opponents to accumulate weapons—up to just below m.
Consequently, with regard to some parameters, it is important not to be
consistent and essentially to randomize our actions to some degree. This will serve to obscure these parameters,
and produce a less destructive world.
(This result is widely understood in strategic game theory, but it is
easy to decry in the public arena.)
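The point about randomizing parameters can be illustrated with a toy example; the function, threshold, and jitter values are hypothetical. Rather than publishing a fixed evidence threshold that an opponent could deliberately stay just beneath, one draws the effective threshold from a band around a secret base value in each period:

import random

def should_initiate(evidence, base_threshold=0.7, jitter=0.15):
    # The effective threshold varies randomly around the (unpublished) base value,
    # so an opponent cannot safely accumulate weapons "up to just below" the cutoff.
    effective = base_threshold + random.uniform(-jitter, jitter)
    return evidence >= effective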
Whither
traditional Just War theory? I am not
sure I am ready to throw it out just yet, and certainly not ready to throw it
all out. In fact, some aspects of a
sophisticated strategy (such as Proportionality) would merely translate the
Just War component into some game-theoretic element of a wise and successful
strategy. Indeed, Tit-for-Tat—if we
look at the Just Cause condition and not Success and Proportionality—is a
formulation that accords with Just War theory in a majority of cases. We might even view Just War theory as a
"folk," or somewhat amateur but helpful attempt to capture the game-theoretic
strategy that would be best for all. Just War theory might even come close to
describing some equilibrium or plateau: it is just not the optimal equilibrium
we can reasonably hope to attain. Certainly it would be good (indeed perfect) if every state were to adopt it. As it now stands, however, it lacks three
features. First, it lacks epistemic
criteria: the extent to which we should have evidence that a given condition is
met. This I have discussed
elsewhere. Second, it lacks a feature of
basic Tit-for-Tat, namely its public clarity.
It would be difficult to determine if another nation were following
Proportionality and Chance of Success, for example, although Just Cause would
presumably be more transparent. Finally,
and most grievously, it lacks the compelling, enforcing character of the
Tit-for-Tat variants I have been discussing.
It does not contain within itself a feature that promotes compliance
with it. It does not advocate punishing to
some degree those who tolerate others' noncompliance. It does not advocate possibly punishing noncompliance with more harm than the attacking nation caused. It rejects producing more harm to everyone than simply ignoring an enemy's attack would produce. Here we see the damage that a preoccupation with an act-moralistic approach produces as compared with a game-theoretic approach: we avoid the short-term harm to all parties from this one event, when there is now strong mathematical evidence that we are thereby producing more total long-term harm.
Whither
Geopolitical Realism? Philosophers: sit
down to dinner with the devil, and listen.
References
Aumann R.J. (1959): “Acceptable points in general
cooperative n-person games”, in R. D. Luce and A. W. Tucker (eds.), Contributions to the Theory of Games IV, Annals of Mathematics Study
40, 287-324, Princeton University Press, Princeton NJ.
Aumann R.J. (1976): "Agreeing to disagree”, The Annals of Statistics 4, 1236-1239.
Aumann R.J. (1981): "Survey of repeated games", in Essays in Game Theory and Mathematical Economics in Honor of Oskar Morgenstern, pages 11-42, Wissenschaftsverlag.
Aumann R.J. (1985): "What is game theory trying to accomplish?" in K. Arrow and S. Honkapohja (eds.), Frontiers of Economics, Basil Blackwell, Oxford.
Aumann R.J. and A. Brandenburger (1995): "Epistemic
condition for Nash equilibrium”, Econometrica
64, 1161-1180.
Axelrod, R. (1984): The Evolution of Cooperation, Basic Books, New York.
Axelrod, R. (1997): The Complexity of Cooperation, Princeton University Press, Princeton, NJ.
Bennett, D. and Stam, A. (2004): The Behavioral Origins of War, University of Michigan Press, Ann Arbor, MI.
Christopher, P. (2004): The Ethics of War and Peace: An Introduction to the Legal and Moral Issues, Prentice Hall, Upper Saddle River, NJ.
Dipert, Randall R.: "Preventive War and the Epistemological Dimension of the Morality of War", Journal of Military Ethics (forthcoming, Spring 2006); an earlier version at JSCOPE 2005: http://www.usafa.edu/isme/JSCOPE05/Dipert05.html
Fudenberg D. and J. Tirole (1991): Game Theory, MIT Press.
Greif A., P. Milgrom, and B.R. Weingast (1994):
“Coordination, commitment, and enforcement”, Journal of Political Economy 102, 745-776.
Güth W., K. Ritzberger and E. van Damme (2004): “On
the Nash bargaining solution with noise”, European
Economic Review 48, 697-713.
Luban, D. (2004): "Preventive War", Philosophy and Public Affairs 32(3): 207-248.
McMahan, J. (1994): "Self-Defense and the Problem of the Innocent Attacker", Ethics 104(2): 252-290.
McMahan, J. (2004): "War as Self-Defense", Ethics and International Affairs 18(1): 75-80.
McMahan, J. (2004): "The Moral Case against the Iraq War", posted at http://leiterreports.typepad.com/blog/2004/09/the_moral_case_.html; authorship acknowledged in correspondence.
Maynard Smith, J. (1982): Evolution and the Theory of Games, Cambridge University Press, Cambridge.
Reiter, D. (1995): "Exploding the powder keg myth: Preemptive wars almost never happen", International Security 20, 5-34.
Schelling, T.C. (1960): The Strategy of Conflict, Harvard University Press, Cambridge, MA.
Schelling, T.C. (1966): Arms and Influence, Yale University Press, New Haven, CT.
Schelling, T.C. (1967): "What is game theory?" in J.C. Charlesworth (ed.), Contemporary Political Analysis, Free Press, New York.
Selten R. (1975): “Re-examination of the perfectness
concept for equilibrium points in extensive games”, International Journal of Game Theory 4, 25-55.
Snyder, G.H. and P. Diesing (1977): Conflict among Nations: Bargaining, Decision Making, and System Structure in International Crises, Princeton University Press, Princeton, NJ.
Tuck, R. (1999): The Rights of War and Peace: Political Thought and the International Order from Grotius to Kant, Oxford University Press, Oxford.
Walzer, M. (1992): Just and Unjust Wars, Basic Books, New York.
Walzer, M. (2004): Arguing about War, Yale University Press, New Haven, CT.
Wester, F. (2004): "Preemption and Just War: Considering the Case of Iraq", Parameters 34(4).
Wohlstetter A. (1959): “The delicate balance of
terror”, Foreign Affairs 37, 211-234.
Notes:
[1] See the
summary of their work accompanying the Nobel Prize at http://nobelprize.org/economics/laureates/2005/ecoadv05.pdf.
[2] They were written in the computer language Prolog over 2003-2006. Each strategy was tested for runs of 500-1000 iterations (compared with Axelrod's original 400) and then these runs were repeated up to ten times and averaged so as to eliminate artifacts of the random-number generator. Strategies were compared not just for relative advantage over other strategies, but also for total net destruction to both parties, a rough measure of the Just War criterion of Proportionality. My purpose was originally to calculate a possible value for the "epistemic threshold" discussed in Dipert 2006.
[3] Specifically that in some repeated games benefits to every party in a multi-party conflict will fall below what is otherwise guaranteed by Nash Equilibrium.
[4] See the excellent article on "Rule Consequentialism" in the Stanford Internet Encyclopedia of Philosophy.
[5] There are other, metaphysical reasons why a rule-based theory is preferable to an action-based one. Act-utilitarianism for example runs into the calculation problem in part because it is examining consequences of an action conceived of merely as an event in the physical world. However, actions are not just events. To be an action, an event must minimally be the product of deliberation; this deliberation must involve the consideration of rules by the actor and the conclusion of the deliberation must be "guided by" (but not determined only by) some normative rules (i.e., what are normally called desires in the belief-desire model of action). An action is thus ipso facto rule-guided and any moral flaw in the act must harken back to a flaw in the rule. (Faulty beliefs, if one is culpable for them, can only have been produced by faulty normative belief-forming rules. Thus again we arrive at rules.)