The Axelrod library is an open source Python package that allows for reproducible game theoretic research into the Iterated Prisoner’s Dilemma. This area of research began in the 1980s but suffers from a lack of documentation and test code. The goal of the library is to provide such a resource, with facilities for the design of new strategies and interactions between them, as well as conducting tournaments and ecological simulations for populations of strategies.

With a growing collection of 139 strategies, the library is also a platform for an original tournament that, in itself, is of interest to the game theoretic community.

This paper describes the Iterated Prisoner’s Dilemma, the Axelrod library and its development, and insights gained from some novel research.

Several Iterated Prisoner’s Dilemma tournaments have generated much interest; Axelrod’s original tournaments [

To complicate matters further, a new strategy is often studied in isolation with opponents chosen by the creator of that strategy. Often such strategies are not sufficiently described to enable reliable recreation (in the absence of source code), with [

This paper introduces a software package: the Axelrod-Python library. The Axelrod-Python project has the following stated goals:

To enable the reproduction of Iterated Prisoner’s Dilemma research as easily as possible

To produce the de-facto tool for any future Iterated Prisoner’s Dilemma research

To provide as simple a means as possible for anyone to define and contribute new and original Iterated Prisoner’s Dilemma strategies

The presented library is partly motivated by an ongoing discussion in the academic community about reproducible research [

Open: all code is released under an MIT license;

Reproducible and well-tested: at the time of writing there is an excellent level of integrated testing with 99.73% coverage (including property-based tests: [

Well-documented: all features of the library are documented for ease of use and modification

Extensive: 139 strategies are included, with infinitely many available in the case of parametrised strategies

Extensible: easy to modify to include new strategies and to run new tournaments

As stated in [

The tournament is based on an iterated game (see [

Two players must choose between cooperation (C) and defection (D):

If both choose C, each receives the reward R for mutual cooperation;

If both choose D, each receives the punishment P for mutual defection;

If one chooses C and the other chooses D, the defector receives the temptation T and the cooperator the sucker's payoff S;

and the following reward matrix results from the Cartesian product of the two decision vectors ⟨C, D⟩:

  | C | D |
---|---|---|
C | (R, R) | (S, T) |
D | (T, S) | (P, P) |

where each entry gives (row player's score, column player's score) and the conventional values are (R, S, T, P) = (3, 0, 5, 1).

The game is a Prisoner's Dilemma when the payoffs satisfy T > R > P > S (so that defection strictly dominates in a single round) and 2R > T + S (so that mutual cooperation outscores alternating exploitation in repeated play).
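As a standalone illustration (independent of the library itself), the conventional payoff values (R, S, T, P) = (3, 0, 5, 1) can be checked against the defining inequalities and used to score a short match; the helper below is a sketch, not the library's match engine:

```python
# Sketch: conventional Iterated Prisoner's Dilemma payoffs.
R, S, T, P = 3, 0, 5, 1

# The defining Prisoner's Dilemma inequalities.
assert T > R > P > S
assert 2 * R > T + S

# Map a pair of moves to (row player's score, column player's score).
PAYOFFS = {
    ("C", "C"): (R, R),
    ("C", "D"): (S, T),
    ("D", "C"): (T, S),
    ("D", "D"): (P, P),
}

def score(moves_1, moves_2):
    """Total the scores of two equal-length move sequences."""
    totals = [0, 0]
    for pair in zip(moves_1, moves_2):
        s_1, s_2 = PAYOFFS[pair]
        totals[0] += s_1
        totals[1] += s_2
    return tuple(totals)

# One exploited cooperation followed by mutual defection: (S + 4P, T + 4P).
print(score("CDDDD", "DDDDD"))  # -> (4, 9)
```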

An overview of a selection of published tournaments. Not all tournaments were ‘standard’ round robins; for more details see the indicated references.

Year | Reference | Number of Strategies | Type | Source Code |
---|---|---|---|---|
1979 | [ | 13 | Standard | Not immediately available |
1979 | [ | 64 | Standard | Available in FORTRAN |
1991 | [ | 13 | Noisy | Not immediately available |
2002 | [ | 16 | Wildlife | Not a computer based tournament |
2005 | [ | 223 | Varied | Not available |
2012 | [ | 13 | Standard | Not fully available |

In [

As mentioned before, IPD tournaments have been studied in an evolutionary context: [

These works investigate particular evolutionary contexts within which cooperation can evolve and persist. This can be in the context of direct interactions between strategies or population dynamics for populations of many players using a variety of strategies, which can lead to very different results. For example, in [

Further to these evolutionary ideas, [

The library is written in Python (

Algorithmic Game Theory [

Astrophysics [

Data manipulation [

Machine learning [

Mathematics [

Visualisation [

Furthermore, in [

The library itself is available at

This is a hosted Git repository; the use of version control is one of the recommended practices for reproducible research [

As stated in the

Source code for the Grudger strategy.

Test code for the Grudger strategy.
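The figures referred to above show the library's actual implementation and unit tests, in which strategies subclass the library's player class. Purely as an illustration of the behaviour, a hypothetical standalone sketch of Grudger with some behavioural checks (the class and method names below are assumptions, not the library's API) might read:

```python
class Grudger:
    """Cooperates until the opponent defects, then defects forever."""

    name = "Grudger"

    def __init__(self):
        self.grudge = False  # flipped once the opponent ever defects

    def play(self, opponent_history):
        """Return 'C' or 'D' given the opponent's history so far."""
        if "D" in opponent_history:
            self.grudge = True
        return "D" if self.grudge else "C"


# Behavioural checks in the spirit of the library's test suite.
player = Grudger()
assert player.play("") == "C"     # opens with cooperation
assert player.play("CCC") == "C"  # keeps cooperating with a cooperator
assert player.play("CCD") == "D"  # a single defection triggers the grudge
assert player.play("CCC") == "D"  # ...and is never forgiven
```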

You can see an overview of the structure of the source code in Figure

An overview of the source code.

To date the library has had contributions from 26 contributors from a variety of backgrounds, not solely academic. These contributions have mostly been new strategies. One strategy is the creation of an undergraduate mathematics student with little prior knowledge of programming; multiple other strategies were written by a 15 year old secondary school student. Both of these students are authors of this paper. As well as these strategy contributions, vital architectural improvements to the library itself have also been contributed.

The Axelrod library runs on all major operating systems: Linux, Mac OS X and Windows.

The library is continuously tested for compatibility with Python 2.7 and the two most recent Python 3 releases.

There are no specific additional system requirements.

Support is readily available in multiple forms:

An online chat channel:

An email group:

The following Python libraries are required dependencies:

Numpy 1.9.2

Matplotlib 1.4.2 (only a requirement if graphical output is required)

Tqdm 3.4.0

Hypothesis 3.0 (only a requirement for development)

Not all contributors' names are known: contributions were mainly made through GitHub, and some contributors have not provided their name or responded to a request for further details. Here is an incomplete list:

Owen Campbell

Marc Harper

Vincent Knight

Karol M. Langner

James Campbell

Thomas Campbell

Martin Jones

Georgios Koutsovoulos

Holly Tibble

Jochen Müller

Geraint Palmer

Paul Slavin

Alex Carney

Martin Chorley

Cameron Davidson-Pilon

Kristian Glass

Nikoleta Glynatsi

Tomáš Ehrlich

Timothy Standen

Luis Visintini

Karl Molden

Jason Young

Andy Boot

Anna Barriscale

The Axelrod library has been designed with sustainable software practices in mind. There is an extensive documentation suite:

The availability of a large number of strategies makes this tool an excellent and obvious example of the benefits of open research which should positively impact the game theory community. This is evidently true already as the library has been used to study and create interesting and powerful new strategies.

Installation of the library is straightforward via standard Python package repositories (

Figure

A simple set of commands to create a demonstration tournament. The output is shown in Figure

The results from a simple tournament.
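In the library, the commands referred to above construct a tournament object over a list of players. As an assumption-laden sketch of what such a round robin computes (the strategy functions and scoring below are illustrative stand-ins, not the library's classes), consider:

```python
import itertools

# Conventional payoffs as (row player's score, column player's score).
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

# Each toy strategy sees (its own history, the opponent's history).
def cooperator(my_hist, opp_hist):
    return "C"

def defector(my_hist, opp_hist):
    return "D"

def tit_for_tat(my_hist, opp_hist):
    return opp_hist[-1] if opp_hist else "C"

def play_match(s1, s2, turns=10):
    """Play `turns` rounds and return the two total scores."""
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(turns):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFFS[(m1, m2)]
        h1.append(m1)
        h2.append(m2)
        score1 += p1
        score2 += p2
    return score1, score2

def round_robin(strategies, turns=10):
    """Every strategy meets every other once; return total scores."""
    totals = {s.__name__: 0 for s in strategies}
    for s1, s2 in itertools.combinations(strategies, 2):
        p1, p2 = play_match(s1, s2, turns)
        totals[s1.__name__] += p1
        totals[s2.__name__] += p2
    return totals

results = round_robin([cooperator, defector, tit_for_tat])
print(sorted(results, key=results.get, reverse=True))
```

With only three toy strategies the defector tops this ranking; the interest of the full tournament lies in how rankings change as the strategy pool grows.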

Due to the open nature of the library the number of strategies included has grown at a fast pace, as can be seen in Figure

The number of strategies included in the library.

Nevertheless, because some previous research was carried out irreproducibly (for example, with no published source code or only vaguely described strategies), not all earlier tournaments can yet be reproduced. In fact, some of the early tournaments might be impossible to reproduce, as the source code is apparently forever lost. This library aims to ensure reproducibility in the future.

One tournament that is possible to reproduce is that of [

Cooperator

Defector

ZD-Extort-2

Joss: 0.9

Hard Tit For Tat

Hard Tit For 2 Tats

Tit For Tat

Grudger

Tit For 2 Tats

Win-Stay Lose-Shift

Random: 0.5

ZD-GTFT-2

GTFT: 0.33

Hard Prober

Prober

Prober 2

Prober 3

Calculator

Hard Go By Majority

This can be reproduced as shown in Figure

The results from [

Source code for reproducing the tournament of [

In parallel to the Python library, a tournament is being kept up to date that pits all available strategies against each other. Figure


Results from the library tournament (2016-06-13).

The current winning strategy is new to the research literature: Looker Up. This is a strategy that maps a given set of states to actions. The state space is defined generically by

The example of (2) is an incomplete illustration of the mapping for
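The general idea can be sketched without the library: key a lookup table on recent joint history (in the library, Looker Up's state also includes the opponent's opening moves; the table below is an illustrative assumption that happens to encode Tit For Tat, not a trained Looker Up table):

```python
def lookup_strategy(table, my_hist, opp_hist, depth=1, default="C"):
    """Choose an action by looking up the last `depth` rounds of joint play."""
    if len(my_hist) < depth:
        return default  # not enough history yet: fall back to cooperation
    key = (tuple(my_hist[-depth:]), tuple(opp_hist[-depth:]))
    return table.get(key, default)

# A depth-1 table: respond only to the opponent's last move (Tit For Tat).
TFT_TABLE = {
    (("C",), ("C",)): "C",
    (("C",), ("D",)): "D",
    (("D",), ("C",)): "C",
    (("D",), ("D",)): "D",
}

assert lookup_strategy(TFT_TABLE, [], []) == "C"
assert lookup_strategy(TFT_TABLE, ["C"], ["D"]) == "D"
```

Because a depth-d table has 4^d keys, such tables form a finite parameter space over which the entries can in principle be searched or optimised.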

There are various other insights that have been gained from ongoing open research on the library, details can be found in [

A closer look at zero determinant strategies, showing that extortionate strategies obtain a large number of wins: the number of times they outscore an opponent during a given match.

This negative relationship between wins and performance does not generalise. Some strategies perform well both in terms of matches won and overall performance: Back Stabber, Double Crosser, Looker Up, and Fool Me Once. These strategies continue to perform well in noisy tournaments; however, some of them (Back Stabber and Double Crosser) have knowledge of the length of the game. This is not necessary to rank well in both wins and score, as demonstrated by Looker Up and Fool Me Once.

Strategies like Looker Up and Meta Hunter seem to be generally cooperative yet still exploit naive strategies. The Meta Hunter strategy is a particular type of Meta strategy which uses a variety of other strategy behaviours to choose a best action. These strategies perform very well in general and continue to do so in noisy tournaments.
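The flavour of such Meta strategies can be shown standalone: poll a team of simpler strategies and act on a majority vote (the team and tie-breaking rule here are illustrative assumptions; the library's Meta players draw on its full strategy collection):

```python
def meta_majority(team, my_hist, opp_hist):
    """Play the action chosen by most members of the team (ties favour C)."""
    votes = [member(my_hist, opp_hist) for member in team]
    return "C" if votes.count("C") >= votes.count("D") else "D"

# Toy team members, each seeing (my history, opponent's history).
def cooperator(my, opp):
    return "C"

def defector(my, opp):
    return "D"

def tit_for_tat(my, opp):
    return opp[-1] if opp else "C"

team = [cooperator, defector, tit_for_tat]
assert meta_majority(team, [], []) == "C"        # votes C, D, C -> C
assert meta_majority(team, ["C"], ["D"]) == "D"  # votes C, D, D -> D
```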

This paper has presented a game theoretic software package that aims to address reproducibility of research into the Iterated Prisoner’s Dilemma. The open nature of the development of the library has led rapidly to the inclusion of many well known strategies, many novel strategies, and new and recapitulated insights.

The capabilities of the library mentioned above are by no means comprehensive; its current capabilities include:

Noisy tournaments.

Tournaments with probabilistic ending of interactions.

Ecological analysis of tournaments.

Moran processes.

Morality metrics based on [

Transformation of strategies (in effect giving an infinite number of strategies).

Classification of strategies according to multiple dimensions.

Gathering of full interaction history for all interactions.

Parallelization of computations for tournaments with a high computational cost.
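As an example of one item in this list, a Moran process repeatedly copies one player chosen in proportion to fitness over one chosen uniformly at random, until the population fixates on a single strategy. A minimal sketch with placeholder fitness values (in the library, fitnesses are derived from match scores) is:

```python
import random

def moran_step(population, fitness, rng):
    """One birth-death step: fitness-proportional birth, uniform death."""
    weights = [fitness[s] for s in population]
    parent = rng.choices(population, weights=weights)[0]
    victim = rng.randrange(len(population))
    population[victim] = parent

def moran_process(population, fitness, rng, max_steps=10_000):
    """Iterate until the population fixates; return the fixated strategy."""
    for _ in range(max_steps):
        if len(set(population)) == 1:
            return population[0]
        moran_step(population, fitness, rng)
    return None  # did not fixate within max_steps

rng = random.Random(0)
pop = ["TitForTat"] * 5 + ["Defector"] * 5
winner = moran_process(pop, {"TitForTat": 3, "Defector": 1}, rng)
print(winner)  # one strategy eventually takes over the whole population
```

The fitter strategy is more likely, but not certain, to fixate; estimating fixation probabilities requires repeating the process many times.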

These capabilities are constantly being updated.

The authors would like to thank all contributors. Also, they thank Robert Axelrod himself for his well wishes with the library.

The authors declare that they have no competing interests.