Trust and cooperation for critical resource sharing in sensor networks

Giuseppe Persiano and Angelo De Caro
Groups that work together improve not just overall performance but also the individual benefits of their components.

Pervasive systems are often composed of small artefacts with very limited resources (most notably, energy needed to communicate). The overall system efficiency depends crucially on how the scarce resources are used and shared among the artefacts. If they are controlled by the same organization, they can be assumed to cooperate, and the optimization problem of using, say, battery energy to maximize system lifetime can be stated and solved algorithmically in an exact or (at least) approximate way.

Consider instead the following simple scenario. We have two or more possibly competing authorities, each with its own base station and its own sensors that collect messages for it. Each sensor, in turn, has a limited battery that mainly provides energy for transmission; we neglect the energy needed for computation. The energy needed to transmit a message grows with the square of the distance to the recipient. Thus, splitting a long-distance communication into two (or more) short-distance transmissions results in an energy saving. For example, in Figure 1, each sensor is closer to the base station of its counterpart (e.g., sB is closer to A than to B) than to its own station. In this case, if the two sensors do not cooperate and each sends its message directly to its own base station, each spends energy 4l². Instead, each sensor could send the information destined for its base station to the other sensor (at a cost of l²), which would then forward the message (for an additional cost of l²).

Thus, cooperation makes it possible to halve the energy needed for communication. Obviously, cooperation can only exist if there is trust: each sensor trusts the other to forward its information to the base station. Our thesis is that if sensors are rational, trust will emerge naturally. Indeed, a rational and selfish agent will prefer spending energy 2l² to spending 4l².
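As a quick check of this arithmetic, the following Python snippet compares the two options under the quadratic energy model (a minimal sketch; the unit length l and the positions are our assumptions for illustration, not taken from the experiments):

    # Quadratic energy model: transmitting over distance d costs d**2.
    # (As in the text, we neglect the energy needed for computation.)
    def tx_cost(d):
        """Energy to transmit one message over distance d."""
        return d ** 2

    l = 1.0  # unit length (assumed for illustration)

    # Layout on a line: A = 0, sB = l, sA = 2l, B = 3l. Each sensor is at
    # distance 2l from its own base station but only l from the other
    # sensor, which in turn is at distance l from that station.
    direct = tx_cost(2 * l)            # sA -> A directly: 4*l**2
    relayed = tx_cost(l) + tx_cost(l)  # sA -> sB -> A: l**2 + l**2

    print(direct, relayed)  # 4.0 2.0: cooperation halves the energy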


Figure 1. Saving energy by cooperating. A and B are base stations, and sA and sB are sensors.

As part of the European Commission's FRONTS project,1 we have conducted extensive experiments in which we simulated cooperative and noncooperative behaviours. The sensors are first organized into groups, and each group chooses a cooperation strategy that all of its sensors implement. The options are a full-cooperation strategy, under which a sensor forwards messages coming from any group, or a group-cooperation strategy, under which only messages coming from the sensor's own group are forwarded.
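In code, the choice between the two strategies reduces to a simple forwarding predicate. The sketch below is our own illustration (the names Strategy and will_forward are assumptions, not identifiers from the FRONTS simulator):

    from enum import Enum

    class Strategy(Enum):
        FULL_COOPERATION = 1   # forward messages coming from any group
        GROUP_COOPERATION = 2  # forward only messages from the own group

    def will_forward(strategy, own_group, msg_group):
        """Does a sensor with this strategy relay a message that
        originates from msg_group?"""
        if strategy is Strategy.FULL_COOPERATION:
            return True
        return own_group == msg_group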

The simulation proceeds in steps. At each step, a sender sensor is chosen at random; it transmits its message either to an intermediate sensor or directly to its own base station. An intermediate sensor is one that is closer than the sender to the target base station and, among the sensors whose cooperation strategy allows them to relay the sender's message, is the closest to the sender. When a message is sent to an intermediate sensor, that sensor in turn applies the same routing rule to forward the message toward the sender's base station.
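The step can be reconstructed along the following lines (a hedged sketch that reuses the will_forward predicate above; the Sensor class and the selection details are our reading of the described procedure, not the original simulator):

    import math, random

    class Sensor:
        def __init__(self, pos, group, strategy, energy):
            self.pos, self.group = pos, group
            self.strategy, self.energy = strategy, energy

    def next_hop(node, target_base, msg_group, sensors):
        """Among live sensors closer to the target base station that are
        willing to relay a message from msg_group, pick the one closest
        to the current node; None means 'transmit directly'."""
        candidates = [s for s in sensors
                      if s is not node and s.energy > 0
                      and math.dist(s.pos, target_base) < math.dist(node.pos, target_base)
                      and will_forward(s.strategy, s.group, msg_group)]
        return min(candidates,
                   key=lambda s: math.dist(node.pos, s.pos), default=None)

    def step(sensors, bases):
        sender = random.choice([s for s in sensors if s.energy > 0])
        target = bases[sender.group]
        node = sender
        while True:  # each hop is strictly closer to the base, so this ends
            hop = next_hop(node, target, sender.group, sensors)
            nxt = hop.pos if hop is not None else target
            node.energy -= math.dist(node.pos, nxt) ** 2  # quadratic cost
            if hop is None:
                return  # the message has reached the base station
            node = hop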

We have considered two different scenarios. In the first, two groups of 20 sensors each are positioned at random on a segment of length 100 units, with one base station at each of the two extremes. In the second, four groups of 20 sensors each are positioned at random on a square plane of side 200 units, with four base stations, one per group, placed at the corners.
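The two layouts can be generated along these lines (again a sketch reusing the Sensor class and Strategy enum from above; the uniform placement and the initial energy value are our assumptions):

    def line_scenario(initial_energy=100.0):
        # Segment of length 100 units, one base station at each extreme,
        # two groups of 20 sensors placed uniformly at random.
        bases = [(0.0,), (100.0,)]
        sensors = [Sensor((random.uniform(0.0, 100.0),), g,
                          Strategy.FULL_COOPERATION, initial_energy)
                   for g in (0, 1) for _ in range(20)]
        return sensors, bases

    def plane_scenario(initial_energy=100.0):
        # 200 x 200 plane, one base station per corner (one per group),
        # four groups of 20 sensors placed uniformly at random.
        bases = [(0.0, 0.0), (0.0, 200.0), (200.0, 0.0), (200.0, 200.0)]
        sensors = [Sensor((random.uniform(0.0, 200.0),
                           random.uniform(0.0, 200.0)), g,
                          Strategy.FULL_COOPERATION, initial_energy)
                   for g in range(4) for _ in range(20)]
        return sensors, bases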

Figure 2 plots the system lifetime as a function of the initial energy of each sensor under different levels of cooperation. For the line scenario, the system lifetime is maximized when both groups use the full-cooperation strategy: see Figure 2(a). When the groups do not cooperate, the lifetime drops noticeably, because cooperation dramatically reduces the energy needed to transmit. Furthermore, the lifetime increases even if just one of the two groups cooperates. Finally, each group has an incentive to cooperate regardless of what the other group does.


Figure 2. System lifetime against initial energy for each group of sensors under different levels of cooperation. Results of (a) the line experiment, and (b) and (c) the plane experiment. The ⋆ indicates that the group is using the full-cooperation strategy.

Figures 2(b) and 2(c) show that cooperation is also very efficient in the plane scenario. Here, groups have an incentive to cooperate provided that at least one other group chooses to do so. When all groups use the full-cooperation strategy—see Figure 2(b)—they maximize the system lifetime even if they are starting from low energy levels. Figure 2(c) shows that when only one group uses the full-cooperation strategy, it is quickly cannibalized by the other three groups. Conversely, when the number of groups using the full-cooperation strategy increases, their system lifetime extends correspondingly.

The existence of an equilibrium in which artefacts cooperate only guarantees that it is possible to instruct/program each artefact so that sharing resources is mutually convenient. This assumes that the programmer is aware of all possible situations that might arise in the future, which is an ambitious assumption indeed. We would prefer that the artefacts be given some very-high-level instructions by the programmer and then adapt on their own to the changing environment. Consequently, we ask: Is there a simple set of rules by which a set of small artefacts can reach a cooperative equilibrium? How fast can they reach it? Similar questions have been posed elsewhere,2 where it is shown that a very simple local rule can take the system to a specific, risk-dominant equilibrium.3

Finally, it would be interesting to know whether history-dependent strategies, in which the decision to cooperate depends on past interactions (e.g., cooperate only if more than a certain threshold of previous interactions resulted in mutual cooperation), can give better results. We will devote our future efforts to this end.
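One concrete form such a strategy could take (entirely our sketch of the idea; the threshold and window values are arbitrary) is to keep, per group, a sliding window of past interactions and relay only when the observed rate of reciprocation exceeds a threshold:

    from collections import defaultdict, deque

    class HistoryStrategy:
        """Cooperate with a group only if more than `threshold` of the
        last `window` interactions with it were mutually cooperative."""
        def __init__(self, threshold=0.5, window=50):
            self.threshold = threshold
            self.history = defaultdict(lambda: deque(maxlen=window))

        def will_forward(self, msg_group):
            h = self.history[msg_group]
            if not h:
                return True  # cooperate by default until evidence accrues
            return sum(h) / len(h) > self.threshold

        def record(self, msg_group, mutual_cooperation):
            self.history[msg_group].append(1 if mutual_cooperation else 0)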


Authors

Giuseppe Persiano
Dipartimento di Informatica ed Applicazioni, Università di Salerno

Giuseppe Persiano is a professor. He received his PhD in computer science from Harvard University in 1993. His long-term research goal is to develop an algorithmic theory of secure and robust systems that takes cryptography and game theory into account.

Angelo De Caro
Dipartimento di Informatica ed Applicazioni, Università di Salerno

Angelo De Caro is a PhD student in computer science. His research focuses mainly on cryptography and trust.


References
  1. Foundations of Adaptive Networked Societies of Tiny Artefacts (FRONTS). European Commission Seventh Framework Programme project. http://fronts.cti.gr Accessed 12 February 2010.

  2. Andrea Montanari and Amin Saberi, Convergence to equilibrium in local interaction games and Ising models, CoRR abs/0812.0, 2008. Also in Proc. IEEE Symp. Foundations of Computer Science (FOCS), 2009.

  3. John C. Harsanyi and Reinhard Selten, A General Theory of Equilibrium Selection in Games, MIT Press, Cambridge (MA), USA, 1988.


 
DOI: 10.2417/2201002.002607