1 Introduction

We live in an increasingly complex and interconnected world, with a growing need for autonomous systems to control processes that lie beyond the capabilities of human operators. To be useful, however, these autonomous systems must be able to be trusted, even in scenarios that cannot be predicted in advance. This is particularly important in safety-critical systems, where a mistake may lead to loss of life. At the same time, failing to take advantage of the performance benefits of autonomous systems could also lead to loss of life. One of the key issues to be addressed in developing trusted autonomous systems is dealing with the phenomenon of ‘emergence’, either by taking advantage of it or by avoiding it.

In simple terms, emergence is behaviour at the global level that was not programmed in at the individual level and cannot be readily explained by behaviour at the individual level. More formally, De Wolf identifies that “A system exhibits emergence when there are coherent emergents at the macro-level that dynamically arise from the interactions between the parts at the micro-level. Such emergents are novel w.r.t. the individual parts of the system” [1]. A well-known example of emergence is the appearance of ‘gliders’ in Conway’s Game of Life [2]. The gliders are an outcome of the code that implements the Game of Life, but the objects themselves were never explicitly ‘designed in’ as part of the code. In nature, the complex patterns displayed by flocks of birds and schools of fish are an emergent property of the interaction of many individual units without any centralised control.
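To make this concrete, the short sketch below (an illustration added here, not part of the cited works) implements Conway's update rule and seeds a glider. Nothing in the rule mentions motion, yet the five-cell pattern translates diagonally across the grid, one cell every four generations.

```python
import numpy as np

def step(grid):
    """One synchronous Game of Life update on a toroidal grid."""
    # Count the eight neighbours of every cell at once.
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # Birth on exactly three neighbours; survival on two or three.
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(np.uint8)

grid = np.zeros((20, 20), dtype=np.uint8)
# Seed a glider: the rules say nothing about 'gliding', yet this
# pattern reappears displaced by one cell diagonally every four steps.
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[y, x] = 1

for _ in range(8):
    grid = step(grid)
print(np.argwhere(grid))  # the same five-cell shape, shifted by (2, 2)
```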

Emergence is closely related to the concepts of ‘complexity’ and ‘self-organisation’. Encompassing both of these concepts, Goldstein defines emergence as “... the arising of novel and coherent structures, patterns and properties during the process of self-organization in complex systems” [3]. Complexity has been defined by Kennedy et al. as “The interaction of many parts of a system, giving rise to behaviours and/or properties that are not found in the individual elements of the system” [4]. Or, as Wolfram put it: “It is possible to make things of great complexity out of things that are very simple. There is no conservation of simplicity” [5]. Self-organisation is defined by Camazine et al. as “... a process in which pattern at the global level of a system emerges solely from numerous interactions among the lower-level components of the system” [6]. Features of self-organising systems that are essential to emergent behaviour are: positive feedback, which amplifies fluctuations; negative feedback, which counterbalances amplification and provides stabilisation; multistability, the coexistence of many stable states; and state transitions, in which system behaviour changes dramatically, i.e. ‘bifurcations’ occur when one or more parameters are varied.
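A standard textbook illustration of such state transitions (added here for concreteness; it is not drawn from the cited works) is the logistic map: varying a single parameter carries the long-run behaviour from a single stable state, through periodic oscillation among several states, to chaos via a cascade of bifurcations.

```python
# Bifurcation in one line of dynamics: the logistic map x -> r*x*(1-x).
def attractor(r, x=0.5, burn=500, keep=8):
    for _ in range(burn):              # discard the transient
        x = r * x * (1 - x)
    states = set()
    for _ in range(keep):              # sample the long-run behaviour
        x = r * x * (1 - x)
        states.add(round(x, 4))
    return sorted(states)

for r in (2.8, 3.2, 3.5, 3.9):
    print(r, attractor(r))  # 1 state, then 2, then 4, then many (chaos)
```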

Goldstein [3] identifies five essential features of emergence:

  • Radical novelty—novel behaviour occurs that cannot be predicted.

  • Coherence or correlation—the novel behaviour has some level of coherence over time.

  • Global or macro-level behaviour—coherence occurs at the macro level.

  • Dynamical—the macro-level, while having some coherence in time, also evolves over time.

  • Ostensive—emergent behaviours are recognised ostensively, i.e. by showing themselves.

While Goldstein identifies that emergence is inherently unpredictable, Fromm [7] proposes that there are four types of emergence, only two of which are unpredictable. The four types of emergence proposed by Fromm are shown in Table 6.1. Using Fromm’s classification scheme, there is a clear gradation in the complexity of systems that display emergent behaviour, from the least complex in Type I to the most complex in Type IV.

Table 6.1 Fromm’s classification of types of emergence

  Type I    Simple emergence       Predictable
  Type II   Weak emergence         Predictable in principle
  Type III  Multiple emergence     Not predictable
  Type IV   Strong emergence       Not predictable

The following sections examine the implications of emergent behaviour in swarm intelligence systems, specifically in relation to their potential use in autonomous systems. As identified in Table 6.1, under Fromm’s classification scheme swarm intelligence systems fall into Type II, ‘Weak and predictable’ emergence.

2 Emergence in Swarm Intelligence

Swarm intelligence systems, based on the local interaction of a large number of relatively simple agents, display complex, goal-oriented behaviour at the global level. Swarm intelligence is defined by Kordon as “... coherence without choreography and is based on the emerging collective intelligence of simple artificial individuals” [8]. Swarm intelligence systems have proven useful in solving a wide range of complex, non-linear, real-world problems based on their ability to search complex problem spaces where other methods are unsuitable or ineffective.
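As a concrete illustration of this kind of search, the sketch below implements a minimal particle swarm optimisation, one representative swarm intelligence algorithm (the objective function and coefficient values are illustrative defaults, not taken from the cited works). Each particle knows only its own best position and the best position found by the swarm, yet the population collectively homes in on the optimum.

```python
import random

def pso(f, dim=2, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal particle swarm optimisation: purely local memory plus one
    piece of shared information (the swarm best) drives a global search."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]          # each particle's best so far
    gbest = min(pbest, key=f)[:]         # best position found by the swarm
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=f)[:]
    return gbest

sphere = lambda x: sum(v * v for v in x)  # simple convex test objective
print(pso(sphere))                         # converges close to [0.0, 0.0]
```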

Swarm intelligence systems, commonly comprising large numbers of relatively simple, homogeneous agents, are one form of multi-agent system. Alternative implementations exist: for example, in Chap. 5, Bryant and Miikkulainen examine the advantages and disadvantages of homogeneous versus heterogeneous agents and the benefits of agent adaptability.

Examples of swarm intelligence systems relevant to trusted autonomy include swarm robotics [9, 10, 11, 12], control of groups of unmanned aerial vehicles [13, 14, 15, 16], control of autonomous land and underwater vehicles [17, 18], network switching [19], economic load dispatch [20], the control of switching networks [21], and the control of chaotic non-linear networks [22].

The ‘intelligence’ displayed by swarm intelligence systems is an emergent property of the system, arising without any form of external control, synchronous clock or shared memory, and in the absence of any system-wide communication mechanism [23].

While the emergent behaviour of swarm intelligence systems has proven useful in solving complex real-world problems, as Parunak notes: “Neither self-organization nor emergence is necessarily good” [15]. Emergence, therefore, can be both a blessing and a curse in the application of swarm intelligence techniques to the development of trusted autonomous systems.

3 The ‘Blessing’ of Emergence

The emergent behaviour of swarm intelligence systems can be a ‘blessing’ in some complex problem-solving situations, based on a number of advantages that emergent behaviour offers. The first of these advantages is simplicity: individual agents tend to be quite simple, yet together they can produce very complicated behaviour. Programming is therefore straightforward, because the complexity of individual agents is low [24, 25, 26]. And because agents are relatively simple, programming errors are less likely, and debugging and validating the performance of the individual agents is relatively easy.
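For illustration, the sketch below implements three standard ‘boids’-style rules (cohesion, alignment, separation); the coefficients are illustrative rather than taken from the cited works. Each agent’s program is a few lines, yet a population of such agents flocks.

```python
import math, random

N, R, DT = 40, 2.0, 0.1   # swarm size, neighbourhood radius, time step
pos = [[random.uniform(0, 10), random.uniform(0, 10)] for _ in range(N)]
vel = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(N)]

def step():
    """Each agent reacts only to neighbours within radius R; the
    flock-level pattern is never specified anywhere in the code."""
    new_vel = []
    for i in range(N):
        nbrs = [j for j in range(N)
                if j != i and math.dist(pos[i], pos[j]) < R]
        v = vel[i][:]
        for d in (0, 1):
            if nbrs:
                centre = sum(pos[j][d] for j in nbrs) / len(nbrs)
                heading = sum(vel[j][d] for j in nbrs) / len(nbrs)
                v[d] += 0.02 * (centre - pos[i][d])     # cohesion
                v[d] += 0.05 * (heading - vel[i][d])    # alignment
                for j in nbrs:                           # separation
                    if math.dist(pos[i], pos[j]) < R / 4:
                        v[d] -= 0.03 * (pos[j][d] - pos[i][d])
        new_vel.append(v)
    for i in range(N):                                   # synchronous update
        vel[i] = new_vel[i]
        pos[i] = [pos[i][d] + DT * vel[i][d] for d in (0, 1)]

for _ in range(100):
    step()
```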

The second is robustness: swarming systems are able to continue to operate, albeit at lower performance, even when some individuals fail or the environment is disturbed [12, 27]. Robustness also comes from the lack of centralised control, which means there is no single point of failure.

The third is flexibility: the system is self-adjusting, able to adapt quickly to changing circumstances without changing individual agents’ behaviour [12, 26, 27]. Closely related to flexibility is the concept of environment integration: environmental dynamics are directly integrated into the swarm’s behaviour, and can enhance system performance [25].

The fourth is scalability: the swarm can operate with different swarm sizes with little, if any, change to its coordination mechanisms. Processing requirements, therefore, tend to increase linearly as the swarm size increases [12, 26].

The fifth is autonomy: swarm intelligence systems operate without external control or supervision, providing the capacity to control systems that are too complex, or that require a response beyond the capacity of human involvement [17, 18, 19].

The sixth is parallelism: swarm intelligence systems inherently use parallel computation for problem solving [19].
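This parallelism can be sketched directly (a minimal illustration assuming the common pattern of independent fitness evaluations; the objective and data are placeholders): because agents share no memory, their evaluations can be distributed across processes without any change to the algorithm.

```python
from concurrent.futures import ProcessPoolExecutor

def fitness(x):
    """Each agent's evaluation depends only on its own state."""
    return sum(v * v for v in x)

if __name__ == "__main__":
    positions = [[i * 0.1, -i * 0.1] for i in range(100)]  # placeholder swarm
    with ProcessPoolExecutor() as ex:                      # evaluate in parallel
        scores = list(ex.map(fitness, positions))
    print(min(scores))
```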

Together, these factors make the emergent nature of swarm intelligence systems attractive for solving complex problems that cannot be broken down into simple parts. They can therefore be attractive for use in autonomous systems, and the advantages they offer potentially contribute towards trust in those systems.

4 The ‘Curse’ of Emergence

The emergent behaviour of swarm intelligence systems can also be a ‘curse’ in some complex problem-solving situations, owing to a number of inherent limitations of swarm intelligence systems. These limitations can lead to a lack of trust in autonomous systems that rely on swarm intelligence, which in turn relies on emergence.

The first of these limitations is the challenge of predicting the behaviour of swarm intelligence systems. Fromm categorises swarming systems as Type II emergent behaviour, predictable in principle, but in practice predictability is difficult to achieve [7]. A simple swarming/not-swarming prediction may be possible; predicting the detailed characteristics of swarming behaviour, however, is more challenging. Predictability is particularly important in relation to phase boundaries, where fundamental changes in behaviour occur [28]. As Wright et al. note, in real-world systems “... the presence of undesirable behaviours that are a result of unforeseen non-linear interactions with the different components of these systems ... can have catastrophic consequences ...” [29]. If predictability cannot be guaranteed, at least within acceptable bounds, swarm intelligence systems will be ruled out a priori for safety-critical applications. In one approach to improving the predictability of swarming systems, Harvey et al. have used measures typically associated with chaotic dynamics to quantify and predict swarming behaviour [30, 31].
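Harvey et al.’s measures are drawn from chaotic dynamics; a simpler illustration of the kind of quantification involved (not the cited method itself) is an order parameter such as polarisation, which jumps sharply at the swarming/not-swarming phase boundary as a control parameter such as noise is varied:

```python
import math

def polarisation(velocities):
    """Order parameter in [0, 1]: near 1 when headings align (swarming),
    near 0 when they are disordered. A sharp jump in this value as some
    parameter is varied marks a phase boundary."""
    ux = sum(vx / math.hypot(vx, vy) for vx, vy in velocities)
    uy = sum(vy / math.hypot(vx, vy) for vx, vy in velocities)
    return math.hypot(ux, uy) / len(velocities)

print(polarisation([(1.0, 0.1), (0.9, 0.0), (1.1, -0.1)]))  # high: ordered
print(polarisation([(1.0, 0.0), (-1.0, 0.1), (0.0, 1.0)]))  # low: disordered
```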

The second limitation, closely related to unpredictability, is the inability to control the behaviour of swarm intelligence systems [28]. As Everitt and Hutter note in Chap. 3, “... with increasing autonomy and responsibility, and with increasing intelligence and capability, there inevitably comes a risk of systems causing substantial harm.” Control of swarming systems is inherently difficult due to the emergent, non-linear nature of their dynamics. Lack of control may be unacceptable in problem domains where safety is critical. The inherent absence of centralised or higher-level control in swarming systems means that control of behaviour must be achieved indirectly, through the rules that govern individual agent behaviour or the parameters that ‘tune’ those rules. Developing appropriate rules at the individual level can be a complex task. As Chevrier notes, the complexity is “... proportional to the distance between the simplicity of individuals and the complexity of the collective property” [32]. Choosing parameters to achieve a particular behavioural outcome is also difficult and in many cases may not be possible [12, 33, 34]. An alternative approach is to adjust parameters until a particular behaviour is achieved, based on an objective measure of behaviour used within an optimisation routine [35]. Another possible approach is to incorporate dynamic tuning, effectively a form of adaptation, in the model, but this considerably increases the complexity of the agents and potentially the processing overhead and the unpredictability of the system’s behaviour [36].
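The optimisation-based tuning mentioned above can be sketched as a closed loop: simulate, score the collective behaviour with an objective measure, adjust an individual-level parameter, repeat. In the sketch below, simulate() is a hypothetical stand-in for a real swarm simulation, included only so the loop runs; the bisection assumes the behaviour responds monotonically to the parameter, which real swarms need not do.

```python
import random

def simulate(noise):
    """Hypothetical stand-in for running a swarm model and measuring its
    collective alignment in [0, 1]; real behaviour is rarely this tame."""
    return max(0.0, 1.0 - noise) * random.uniform(0.9, 1.0)

def tune(target=0.8, lo=0.0, hi=1.0, iters=20, reps=30):
    """Indirect control: bisect an individual-level parameter (noise)
    until the measured collective behaviour approaches the target."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        score = sum(simulate(mid) for _ in range(reps)) / reps  # average runs
        if score > target:
            lo = mid      # too ordered: more noise is tolerable
        else:
            hi = mid      # too disordered: reduce the noise
    return (lo + hi) / 2

print(tune())  # noise level at which alignment is roughly 0.8
```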

The third major limitation of swarm intelligence systems relates to the time required to reach a solution, which may limit their usefulness for online control tasks and time-critical tasks [33]. Options for reducing the time to obtain a solution include increasing the number of swarm members and increasing the complexity of the members, for example by incorporating adaptation. These same changes, however, can also increase the processing time needed to converge to a solution. Balancing these competing factors is itself a complex optimisation task, and may still not lead to an acceptable outcome in the time required.

5 Taking Advantage of the Good While Avoiding the Bad

As systems become too complex and/or too dynamic for human control, some form of trusted autonomy will be required. In attempting to control such complex systems, emergent behaviour is likely, and probably necessary. Parunak observes, therefore, that what is required are principles for designing and developing systems whose emergent behaviour is beneficial, or at least benign [15].

Swarm intelligence systems have shown they can be beneficial in solving complex real-world problems. This beneficial behaviour depends on emergence, but processes are not currently available to guarantee that behaviour will be benign in all possible circumstances. In an effort to take advantage of the benefits of swarm intelligence systems while avoiding their limitations, Winfield et al. [37] have introduced the concept of “swarm engineering”, which they see as a fusion of dependable systems engineering and swarm intelligence. They acknowledge the need to validate the behaviour of such systems, but argue there is no reason that validating swarm intelligence systems should be any more complex than validating other complex systems. Winfield et al. discuss two key features of a system in relation to dependability: ‘liveness’, which relates to the swarm doing the right thing; and ‘safety’, which relates to the swarm not doing the wrong thing. The two concepts are related but distinct. As Winfield et al. note: “A system that is provably safe could, for example, do the wrong thing safely” [37].

Promising mathematical modelling approaches have been developed to validate the ‘liveness’ aspect of swarm intelligence systems. In the context of swarm robotics, examples include: Lancaster, who uses networks of simple probabilistic graphs to predict swarm behaviour [38]; Dixon et al., who have investigated the verification of swarms using temporal logic and model checking [39]; and Brambilla et al., who have introduced an approach to the top-down design and verification of swarms via formal specification and model checking [40]. Less progress has been made on validating safety aspects, but Harper has shown the potential of Lyapunov stability techniques [41].
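In the spirit of these probabilistic approaches (a hedged sketch, not a reproduction of the cited methods), a swarm’s macro-behaviour can be abstracted as a Markov chain over macro-states with assumed transition probabilities; the stationary distribution then gives a liveness-style estimate of how much of the time the swarm is doing useful work.

```python
import numpy as np

# Illustrative macro-states and assumed transition probabilities;
# in practice both would be estimated from simulation or field data.
states = ["searching", "aggregating", "tasked", "dispersed"]
P = np.array([              # P[i, j] = Pr(next = j | current = i)
    [0.70, 0.20, 0.05, 0.05],
    [0.10, 0.60, 0.25, 0.05],
    [0.05, 0.10, 0.80, 0.05],
    [0.40, 0.10, 0.00, 0.50],
])

# The stationary distribution is the eigenvector of P^T with eigenvalue 1:
# the long-run fraction of time spent in each macro-state.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()
for s, p in zip(states, pi):
    print(f"{s}: {p:.3f}")
```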

But even if ‘liveness’ and ‘safety’ aspects can be unambiguously determined, there is still a body of work to be done to determine what ‘trusted’ means in the real world. As Devitt notes in Chap. 10, “We have different thresholds for trust depending on the risk of the decisions that have to be made and this in turn depends on impact of decisions.” Consider a scenario where a swarm of robots is tasked to find all the survivors after a disaster. If the robots find 90% of the survivors but can be guaranteed not to injure anyone in the search process, can that system be considered ‘trusted’? What if the swarm of robots can find 99% of survivors but there is a 10% chance of injuring a survivor during the search; would that system be trusted? Which would be the more trusted?
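A simple expected-value calculation makes the trade-off explicit (assuming, purely for illustration, 100 survivors and reading the stated 10% as one expected injury per search):

```python
survivors = 100                       # illustrative population

# System A: finds 90%, guaranteed to injure no one.
found_a, injuries_a = 0.90 * survivors, 0.0
# System B: finds 99%, with a 10% chance of one injury during the search.
found_b, injuries_b = 0.99 * survivors, 0.10 * 1

print(f"A: {found_a:.0f} found, {injuries_a:.2f} expected injuries")
print(f"B: {found_b:.0f} found, {injuries_b:.2f} expected injuries")
# B saves 9 more people in expectation at a cost of 0.1 expected injuries;
# whether that makes it more 'trusted' depends on the risk threshold.
```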

6 Conclusion

There is an increasing need for autonomous systems to control an increasingly complex world. To solve real-world problems, however, autonomous systems must be able to be trusted. Swarm intelligence systems are one form of autonomous system that has proven useful in controlling complex real-world systems. The intelligence displayed by these systems is an emergent property of swarming, and this emergent behaviour is both a blessing and a curse. It provides the potential to solve problems that may not be solvable by other means. But without the ability to verify and trust the emergent behaviour of swarm intelligence systems across the full range of situations in which they will be applied, there will be strict limits to their applicability in real-world systems. This is particularly important for safety-critical systems.