Multiagent systems—collections of computer programs capable of autonomous action—have been used for social simulation for two decades, and especially to generate artificial societies in which features of their real counterparts can be studied ‘in silico.’1 Most of these social simulations model communication between agents as if they could write into and read from each other's memories directly. In other words, information is passed in such a way that every agent reacts to it in an identical manner, regardless of each agent's path dependence (i.e., history).
However, for modelling the emergence and evolution of norms (a special case of behavioural rules that require obedience), agents need to be able to interpret the contents of messages sent by other agents in light of their own experience. They will act differently even in very similar situations, and will be particularly influenced by information they receive from their fellow agents. Understanding these dynamics is the primary objective of the European Commission's 6th Framework Future and Emerging Technologies project EMIL (Emergence In the Loop: The Two-Way Dynamics of Norm Innovation),2–4 which developed an architecture for norm-learning and -adopting agents.
We have conceived and implemented a simulation tool that enables modellers to design agents which evaluate other agents' behaviour, send them positive or negative norm invocations (i.e., approval or disapproval), and use received norm invocations to adapt their behavioural rules. Agents act in an environment which includes passive objects such as obstacles, tools and resources, as well as other agents (of several different types, when necessary). Our simulation tool comprises two levels. One, which we call EMIL-S,5,6 provides the artificial intelligence ‘mind’ required for decision making. A second, ‘physical’ level represents the ‘bodies’ which are responsible for the agents' actions.
EMIL-S interfaces with a variety of standard agent-modelling toolkits, which means that the physical level can be an open-source product such as Repast.7 The artificial intelligence level mainly consists of a graphical agent designer which allows the user to specify the behavioural rules for different agents with the help of event-action trees. On the basis of past experience, these trees define which actions can be taken and, within each group of alternative actions, with what probability each will be chosen (the probabilities can change over time).
Figure 1. Example of an event-action tree in the simulation tool EMIL-S. A: Action. E: Event. G: Groups of actions. NI: Norm invocation.
Figure 1 provides such an example. A car driver agent observing a pedestrian beginning to cross a street in front of its car (event E10) has several actions to choose from. If it decides to react physically (G1) to this observation (which it will do with a probability of 0.5), it can slow down (A10) with a probability of 0.1, accelerate (A11) with probability 0.4 or stop (A12) with the remaining probability of 0.5. This agent might also decide to take one of two possible norm-invocation actions (GNI1-A, with a probability of 0.6): admonishing the pedestrian (ANI10) with probability 0.2, or just honking the horn (ANI11) with probability 0.8. Two short videos that are available online show some of the results of our work. In one of the videos,8 children cross a street between two meadows at the start of a longer simulation. In the second video,9 two car drivers have adopted a norm of stopping in front of a striped area of the street, while nearly all of the children have adopted a norm of crossing the street at exactly that spot.
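To make the mechanics concrete, the Figure 1 tree can be sketched as a small data structure from which an agent samples its reaction. This is only an illustrative sketch: EMIL-S's internal representation is not reproduced here, so the dictionary layout, function names, and the inferred probability of 0.5 for A12 (chosen so the physical group sums to one) are our assumptions, not the tool's actual code.

```python
import random

# Hypothetical encoding of the Figure 1 event-action tree. Each event maps to
# action groups; each group has an entry probability, and each action inside a
# group has a selection probability.
tree = {
    "E10": {  # pedestrian starts crossing in front of the agent's car
        "G1": {  # physical reactions, entered with probability 0.5
            "prob": 0.5,
            "actions": {"A10": 0.1,   # slow down
                        "A11": 0.4,   # accelerate
                        "A12": 0.5},  # stop (remainder, assumed so sum is 1)
        },
        "GNI1-A": {  # norm invocations when directly involved
            "prob": 0.6,
            "actions": {"ANI10": 0.2,   # admonish the pedestrian
                        "ANI11": 0.8},  # honk the horn
        },
    }
}

def choose(weights):
    """Draw one key from a dict mapping keys to probabilities."""
    keys, probs = zip(*weights.items())
    return random.choices(keys, weights=probs, k=1)[0]

def react(event):
    """Return the list of actions an agent takes in response to an event."""
    taken = []
    for group_spec in tree[event].values():
        if random.random() < group_spec["prob"]:       # enter this group?
            taken.append(choose(group_spec["actions"]))  # pick one action
    return taken
```

Because the two groups are sampled independently, a single event can trigger both a physical action and a norm invocation, only one of them, or neither, matching the branching shown in the figure.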
If the car driver of Figure 1 merely observes that a pedestrian is crossing the street in front of another car, it could decide to issue the same norm invocations (GNI1-O) only with probability 0.3 (as it is not directly involved in the imminent collision). Thus, agents not involved in an interaction can also evaluate behaviour or misbehaviour and issue positive or negative norm invocations. Once received, these will change the probabilities of actions within the respective action groups and, consequently, the individual and collective behaviour of all the agents.
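The adaptation step above, in which received norm invocations shift the probabilities within an action group, can be sketched as follows. EMIL-S's actual learning rule is not spelled out in this article, so the additive update with renormalisation below is an assumption used purely to illustrate the mechanism; the function name and step size are hypothetical.

```python
def apply_norm_invocation(actions, action, positive, step=0.1):
    """Shift probability mass toward (positive invocation) or away from
    (negative invocation) one action in a group, then renormalise so the
    group's probabilities again sum to 1. Illustrative rule only."""
    updated = dict(actions)
    delta = step if positive else -step
    updated[action] = max(0.0, updated[action] + delta)
    total = sum(updated.values())
    return {a: p / total for a, p in updated.items()}

# Example: other agents repeatedly disapprove of accelerating (A11).
group = {"A10": 0.1, "A11": 0.4, "A12": 0.5}
for _ in range(3):
    group = apply_norm_invocation(group, "A11", positive=False)
# A11's share shrinks while A10 and A12 grow, so over many such exchanges
# stopping in front of the crossing can become the dominant behaviour.
```

Repeated invocations across a population would drive exactly the collective shift described above: individual probability updates accumulate into a shared behavioural rule.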
EMIL-S has been applied to several different scenarios, including discussions among contributors to wiki pages criticizing each other for good or bad style, and between a group of borrowers and a bank in microfinance scenarios. It has also been used to model the behaviour of people waiting in queues and similar situations where in real life people of different cultural backgrounds would or would not line up and wait their turn. In all these scenarios, agents behaved realistically. Indeed, on the basis of our findings, the EMIL agent architecture with EMIL-S as its implementation is a promising model of how the human mind internalizes social norms. In artificial societies of agents acting according to EMIL-S, rules emerge in a way that feels familiar to human observers. As a next step, we will use the simulation tool to model even more complex situations to find out whether it will still yield believable results.