Swarm Intelligence

Swarm intelligence is a form of collective intelligence that emerges as a result of a group of natural or artificial agents interacting with each other and the environment.

Principal Author: Kumseok Jung

Abstract

While symbolic AI techniques involving logic and reasoning have been successful in solving many computational problems, it has been difficult to devise practical symbolic algorithms for problems of high dimensionality and complexity[1]. Mechanisms found in nature have inspired many concepts in Artificial Intelligence, a prominent example being the neural network, and such approaches have proven useful for problems involving complex and dynamic models. On this page we discuss the idea of swarm intelligence and some of the nature-inspired heuristic algorithms used to solve difficult combinatorial optimization problems.

Related Pages

A swarm is modelled as being made up of multiple simple agents, and can therefore be viewed as a kind of multi-agent system.

Content

Swarm intelligence is a form of intelligence observed in the collective behaviour of individual agents interacting with each other under simple rules[2]. Each member of the swarm follows a simple decision-making policy without being aware of the group's objective, and acts under uncertainty, using only information locally available to it. In this sense, a swarm is a self-organizing system coordinated by a decentralized control mechanism. The interactions between the individual members give rise to the emergence of a collective intelligence. As a result, the swarm is able to solve problems that would be too complex for any individual member to solve on its own.

Although the term multi-agent system is rarely used in the context of natural systems, a swarm can be described as a kind of multi-agent system. In contrast, the term swarm intelligence is used broadly, to describe both natural and artificial systems. Like many nature-inspired disciplines in Artificial Intelligence, the engineering efforts in artificial swarm intelligence are rooted in the scientific study of swarm behaviour in nature. Swarm algorithms such as Ant Colony Optimization mimic the foraging behaviour of ants, while Particle Swarm Optimization takes inspiration from the flocking behaviour of birds.

Swarm Intelligence in Nature

Ant Colonies

Ant paths built from pheromone traces

A colony of ants is able to find the shortest path to the most rewarding food source, with each ant following a simple policy[3]. This can be viewed as solving an optimization problem. An ant deposits pheromone along its trail when returning to the nest with food, in an amount proportional to the quality of the food. When an ant leaves the nest to forage, it follows the path with the strongest pheromone intensity. Other ants travelling along a path further intensify the pheromone levels along the trail, resulting in a positive feedback cycle: the path with the stronger pheromone trail is chosen more frequently, which further reinforces its attractiveness. Because the shortest path supports a higher frequency of round trips, it accumulates pheromone faster; after enough ants travel back and forth, less attractive paths fade away and the colony converges on the shortest path. In the case of equidistant food sources, the food with the higher quality is selected, since a stronger pheromone trail leads to it.
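
The positive feedback loop can be illustrated with a small simulation of the classic double-bridge setup: two paths of different lengths connect the nest and the food source, each ant picks a path with probability proportional to its pheromone level, and pheromone is deposited in proportion to how often a path can be completed. This is only a minimal sketch; the path lengths, deposit rule, and evaporation rate are illustrative values rather than figures from any particular study.

```python
import random

# Two paths from nest to food, one shorter than the other (illustrative values).
paths = {"short": 1.0, "long": 2.0}        # path lengths (arbitrary units)
pheromone = {"short": 1.0, "long": 1.0}    # start with equal pheromone
EVAPORATION = 0.05                         # fraction of pheromone lost per step
ANTS_PER_STEP = 10

for step in range(200):
    for _ in range(ANTS_PER_STEP):
        # Each ant chooses a path with probability proportional to pheromone.
        total = pheromone["short"] + pheromone["long"]
        choice = "short" if random.uniform(0, total) < pheromone["short"] else "long"
        # A returning ant deposits pheromone; the shorter path is completed more
        # often per unit time, modelled here as a deposit inversely proportional
        # to path length.
        pheromone[choice] += 1.0 / paths[choice]
    # Pheromone evaporates everywhere, weakening unused paths.
    for p in pheromone:
        pheromone[p] *= 1.0 - EVAPORATION

print(pheromone)  # the short path ends up with far more pheromone
```

Running the sketch, the pheromone on the short path grows much larger than on the long path, mirroring how the colony converges on the shortest route.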

Bird Flocking

Large birds typically migrate in V-echelon formations, which yield significant aerodynamic gains. Every bird can see ahead and towards one side, making a good arrangement for protection.

When birds migrate or forage, they maintain a group flying formation, a behaviour known as flocking. Studies indicate that the flying formation yields beneficial aerodynamic properties and thus helps the flock save energy over a long migratory route. Additionally, the timely rotation of flying positions within the formation helps distribute fatigue over the different members of the flock. Each bird in the formation follows three simple rules: keep a certain distance away from its neighbours, steer towards the average heading of its neighbours, and steer towards the average position of its neighbours. As a result of these individual actions, the flock is able to settle into an energy-efficient flying formation.
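
These three rules are essentially the boids model of flocking. The sketch below performs one update step for every bird, assuming a fixed neighbourhood radius; the radius and the rule weights are arbitrary illustrative choices.

```python
import numpy as np

def flocking_step(positions, velocities, radius=1.0,
                  w_sep=0.05, w_align=0.05, w_coh=0.01):
    """One update of the three flocking rules.
    positions, velocities: arrays of shape (n_birds, 2).
    The weights are illustrative, not taken from any particular study."""
    new_velocities = velocities.copy()
    for i in range(len(positions)):
        # Neighbours are the birds within the given radius (excluding bird i).
        dists = np.linalg.norm(positions - positions[i], axis=1)
        neighbours = (dists > 0) & (dists < radius)
        if not neighbours.any():
            continue
        # Separation: keep a certain distance away from the neighbours.
        separation = (positions[i] - positions[neighbours]).mean(axis=0)
        # Alignment: steer towards the average heading of the neighbours.
        alignment = velocities[neighbours].mean(axis=0) - velocities[i]
        # Cohesion: steer towards the average position of the neighbours.
        cohesion = positions[neighbours].mean(axis=0) - positions[i]
        new_velocities[i] += w_sep * separation + w_align * alignment + w_coh * cohesion
    return positions + new_velocities, new_velocities
```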

Swarm Algorithms

Swarm algorithms belong to a class of problem-solving techniques called metaheuristics[4]. They are called "meta" heuristics because they are higher-level heuristics used to find a good search heuristic for solving optimization problems. These techniques are useful when the solution space is too large to search exhaustively, when computing power is limited, or when information is incomplete. In contrast to traditional optimization methods, metaheuristic techniques do not guarantee convergence to a globally optimal solution; instead, they aim to find near-optimal solutions at a lower computational cost. Metaheuristic techniques work at a higher level, without assumptions about the specific problem, making them applicable to a wide range of problems.

Stochastic Diffusion Search


Stochastic Diffusion Search (SDS), described in 1989[5], was the first swarm intelligence algorithm. It is a population-based global search algorithm, useful for solving optimization problems where the objective function is composed of partial functions that can be evaluated independently.

The operation can be easily understood by the following analogy of The Restaurant Game[6]:

A group of students travel to an unfamiliar town to attend a long seminar. In this town there are many restaurants with a variety of menus to choose from, and the group wants to find the best restaurant: the one where the maximum number of people would enjoy dining. To solve this optimization problem, they employ Stochastic Diffusion Search. Every student first picks her own candidate for the best restaurant in town. Then every night, each student visits her candidate restaurant and eats a random item from the menu. The next morning, if a student, say Alice, did not enjoy her meal the previous night, she asks one random friend, say Bob, to share his experience. If Bob's dinner was enjoyable, Alice adopts Bob's restaurant as her candidate restaurant. If Bob says his dinner was awful, then Alice picks a random restaurant. If Alice enjoyed her dinner, she returns to the same restaurant but tries a different menu item. By the end of the seminar period, a single most popular restaurant emerges among the students, which represents the global optimum.
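
The analogy maps directly onto a test phase (did the student enjoy a random dish at her candidate restaurant?) followed by a diffusion phase (unhappy students consult a random friend). The sketch below follows that structure; the restaurant "quality" values, the number of students, and the number of nights are invented for illustration, and in a real problem the test would be a partial evaluation of the objective function.

```python
import random

quality = [0.2, 0.5, 0.9, 0.4, 0.7]   # probability that a random dish is enjoyed
n_students = 20
candidate = [random.randrange(len(quality)) for _ in range(n_students)]

for night in range(50):
    # Test phase: each student tries one random dish at her candidate restaurant.
    happy = [random.random() < quality[c] for c in candidate]
    # Diffusion phase: unhappy students consult one randomly chosen friend.
    for i in range(n_students):
        if not happy[i]:
            friend = random.randrange(n_students)
            if happy[friend]:
                candidate[i] = candidate[friend]                 # adopt the friend's restaurant
            else:
                candidate[i] = random.randrange(len(quality))    # pick a new restaurant at random

print(candidate)  # most students end up at the best restaurant (index 2 here)
```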

Although many metaheuristic techniques are based on empirical study and lack formal analysis, SDS is a well-studied metaheuristic. Its computational complexity[7] and robustness have been analyzed, and its convergence has been proven[8].

Ant Colony Optimization

Ant Colony Optimization applied to the Travelling Salesman Problem

Inspired by the behaviour seen in ant colonies, Ant Colony Optimization (ACO) is a probabilistic technique for solving optimization problems that can be reduced to finding optimal paths on a graph[9]. The algorithm differs from Stochastic Diffusion Search in the way the "ants" communicate with each other. Instead of direct ant-to-ant communication, in ACO the ants communicate through stigmergy, an indirect form of communication mediated by the environment. ACO has been applied to find near-optimal solutions to the Travelling Salesman Problem, the NP-hard problem of finding the shortest tour that visits every city exactly once. In this application, artificial ants travel along the edges between the vertices, keeping a memory of the vertices already visited. The choice of the next edge is probabilistic, based on a pheromone value and a heuristic value associated with each edge: the higher the pheromone level on an edge, the more likely it is to be selected, and with some probability an ant takes a random edge to allow exploration. Once an ant has visited every vertex, it has constructed a candidate solution. At the end of a trip, the pheromone values on the edges are updated: they are first decreased by a small fraction to emulate evaporation, and then an amount proportional to the quality of the solution found is added. The emulated evaporation is important for preventing the ants from getting stuck in a local optimum. After enough ants have completed tours, edges with high pheromone levels emerge, and together these edges form the solution.
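
A compact sketch of ACO applied to the Travelling Salesman Problem is given below. It follows the steps described above: probabilistic edge selection weighted by pheromone and a 1/distance heuristic, then evaporation and a deposit proportional to solution quality. The parameter values are common defaults rather than tuned settings, and the function name is only illustrative.

```python
import random

def aco_tsp(dist, n_ants=20, n_iters=100, alpha=1.0, beta=2.0,
            evaporation=0.5, q=1.0):
    """dist: symmetric matrix of positive city-to-city distances."""
    n = len(dist)
    pheromone = [[1.0] * n for _ in range(n)]
    best_tour, best_len = None, float("inf")

    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            # Each ant builds a tour, remembering the vertices already visited.
            tour = [random.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                choices = [j for j in range(n) if j not in tour]
                # Edge attractiveness combines pheromone and a heuristic (1/distance).
                weights = [(pheromone[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta)
                           for j in choices]
                tour.append(random.choices(choices, weights=weights)[0])
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # Evaporation on every edge, then deposits proportional to solution quality.
        for i in range(n):
            for j in range(n):
                pheromone[i][j] *= 1.0 - evaporation
        for tour, length in tours:
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                pheromone[i][j] += q / length
                pheromone[j][i] += q / length
    return best_tour, best_len

# Example: five cities on a plane, Euclidean distances.
cities = [(0, 0), (1, 0), (1, 1), (0, 1), (2, 2)]
dist = [[((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 for (x2, y2) in cities]
        for (x1, y1) in cities]
print(aco_tsp(dist))
```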

Particle Swarm Optimization

A particle swarm searching for the global minimum of a function

PSO is a population-based optimization technique that iteratively improves candidate solutions according to a heuristic inspired by the flocking behaviour of birds and fish. The heuristic does not involve a gradient, so it does not require the problem to be differentiable[10]. The usual formulation of the problem is to minimize an objective function over an n-dimensional space; an example could be a school of fish trying to find the coldest spot in a body of water. In PSO, several particles navigate around the solution space, and the position of each particle represents a possible solution to the objective function. Each particle keeps a personal best and a neighbourhood best position, whose values are evaluated by applying the objective function to the position coordinates. In each iteration, each particle's velocity is updated using the personal best position and the neighbourhood best position. The velocity update involves three terms characterizing the flocking behaviour. The momentum component, involving an adjustable inertia coefficient, prevents the particle from drastically changing its direction. The cognitive component reflects the tendency of the particle to move towards its personal best. Finally, the social component represents its tendency to follow the swarm's direction. The coefficients in the three terms are adjusted to suit the problem at hand. As with the other swarm intelligence metaheuristics, this algorithm does not guarantee that an optimal solution will be found.
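
The velocity update described above can be written compactly. The sketch below minimizes a user-supplied objective, using the global best as the neighbourhood best; the inertia, cognitive, and social coefficients are typical textbook choices, not values tuned for any specific problem.

```python
import random

def pso(objective, dim, n_particles=30, n_iters=100,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                       # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]      # swarm (neighbourhood) best

    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: momentum + cognitive + social components.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Example: minimize the sphere function; the minimum is at the origin.
print(pso(lambda x: sum(v * v for v in x), dim=3))
```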

Relevant Work

Site Selection for Wireless Networks using Stochastic Diffusion Search

Wireless network infrastructure is established by transmitters placed at different locations, each providing coverage over a local region. From the perspective of a service provider, it is most economical to minimize the number of transmitters while providing the best coverage. This problem of selecting the best site to place a transmitter is an NP-hard optimization problem. Hurley et al.[11] provide a solution to this problem using SDS (Stochastic Diffusion Search). In this solution, a fixed number of agents are initially placed at different locations. Each agent can partially test the quality of its site by assessing either the strength of the signal or the noise level. If the test passes a specified standard, the agent remains in the active state; otherwise it becomes inactive. After every testing phase, each inactive agent chooses another random agent to consult; this phase is referred to as the diffusion phase. If the randomly chosen agent is active, the inactive agent moves to the active agent's site; otherwise it moves to a random location. The testing and diffusion phases are repeated until a desired proportion of the agents are active, and the final site is then selected from the population of active agents.
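
The loop is the same test-and-diffuse cycle as in the Restaurant Game sketch; what changes is that each agent's partial test evaluates only one randomly chosen criterion per iteration, and the algorithm halts once a chosen proportion of agents are active. The site data, thresholds, and stopping proportion below are invented for illustration; the original work evaluates sites with real radio propagation models.

```python
import random

# Candidate sites with made-up signal and noise figures.
sites = [{"signal": random.uniform(0, 1), "noise": random.uniform(0, 1)}
         for _ in range(50)]
n_agents = 30
agent_site = [random.randrange(len(sites)) for _ in range(n_agents)]

def partial_test(site):
    # Evaluate only one randomly chosen criterion per iteration.
    if random.random() < 0.5:
        return site["signal"] > 0.8     # is the signal strong enough?
    return site["noise"] < 0.3          # is the noise low enough?

active = [False] * n_agents
for _ in range(500):                    # iteration cap, just for the sketch
    # Test phase.
    active = [partial_test(sites[s]) for s in agent_site]
    if sum(active) >= 0.9 * n_agents:   # desired proportion of active agents
        break
    # Diffusion phase: each inactive agent consults one random agent.
    for i in range(n_agents):
        if not active[i]:
            other = random.randrange(n_agents)
            agent_site[i] = (agent_site[other] if active[other]
                             else random.randrange(len(sites)))

print({s for s, a in zip(agent_site, active) if a})   # sites held by active agents
```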

AntNet

The use of ACO (Ant Colony Optimization) in communication networks was first proposed in 1996[12], in the form of Ant-based Control for routing and load balancing in a circuit-switched network. AntNet was introduced in 1998[13] for routing in packet-switched networks. Traditional routing techniques involve flooding the network with packets containing information about the link states, after which each router updates its routing table with the received information. In AntNet, control packets, or ants, are flooded through the routers to serve as mobile agents that iteratively update the routing tables. An ant updates a score value for the edge taken to arrive at a router, and then chooses the next hop probabilistically, using the scores as probabilities. Over time the packets tend to favour the better paths and the routing is optimized. The performance of AntNet depends on the network size and topology, though it has been demonstrated that AntNet outperforms commonly used algorithms such as Bellman-Ford and OSPF, and is comparable to Dijkstra's shortest path algorithm in most cases. AntNet has been shown to adapt better to varying traffic loads, and works best for sparse random graphs.
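
A small sketch of the probabilistic next-hop rule is shown below. The routing-table structure, the reinforcement amount, and the renormalization step are simplified illustrations rather than the exact AntNet update equations.

```python
import random

# For each destination, a score per neighbouring router; the scores act as
# selection probabilities (illustrative values).
routing_table = {
    "D": {"B": 0.6, "C": 0.4},
}

def next_hop(dest):
    # Forward ants choose the next hop probabilistically, using the scores
    # in the routing table as probabilities.
    neighbours = list(routing_table[dest])
    weights = [routing_table[dest][n] for n in neighbours]
    return random.choices(neighbours, weights=weights)[0]

def reinforce(dest, neighbour, reward=0.1):
    # A backward ant reinforces the neighbour it arrived through, and the
    # scores are renormalized so they remain probabilities.
    table = routing_table[dest]
    table[neighbour] += reward
    total = sum(table.values())
    for n in table:
        table[n] /= total

reinforce("D", "B")     # a good path through B was found
print(next_hop("D"))    # B is now chosen more often
```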

Particle Swarm Optimization in Data Mining

A study by Poli[14] in 2008 suggests that there are about 650 publications on the application of PSO (Particle Swarm Optimization) in various fields including biomedicine, communication networks, finance, and computer graphics, to name a few. One example is the application of PSO to object tracking in video[15]. In a traditional brute-force approach, the pixels of an entire frame would be assessed to locate the region of pixels that meets a certain criterion. In the PSO approach, several rectangular windows of defined width and height, each representing a "particle", are initialized at random positions in the frame. The target object is described by converting its pixel values into HSV format and building a histogram of the hue values. At each iteration of the PSO algorithm, each window's histogram is compared with the target histogram using a user-defined comparison function. After the assessment, each particle's personal best position is updated, and its velocity is updated so that it moves towards its personal best position and the global best position. The algorithm eventually finds the region of the image that best fits the target's histogram. The PSO approach has an additional advantage for video, where the detection algorithm must be applied frame after frame: since a subsequent frame usually differs only slightly from the previous one, retaining each particle's best position allows the PSO algorithm to track the target object much faster in subsequent frames.
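
As a sketch of how the particle fitness might look, the snippet below builds a normalized hue histogram for a rectangular window and compares it to the target histogram using histogram intersection. The intersection measure is a common choice standing in for the paper's user-defined comparison function; in a full tracker this fitness would be maximized (or its negation minimized) by a PSO loop like the one sketched earlier.

```python
import numpy as np

def hue_histogram(hue_patch, bins=16):
    """Normalized histogram of hue values in a window.
    Hue is assumed to lie in [0, 180), as in the common HSV convention."""
    hist, _ = np.histogram(hue_patch, bins=bins, range=(0, 180))
    return hist / max(hist.sum(), 1)

def window_fitness(frame_hue, x, y, w, h, target_hist):
    """Fitness of one particle: similarity between the hue histogram of the
    window at (x, y) with size w x h and the target's histogram, measured by
    histogram intersection (higher means more similar)."""
    patch = frame_hue[int(y):int(y) + h, int(x):int(x) + w]
    return np.minimum(hue_histogram(patch), target_hist).sum()
```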

Annotated Bibliography

  1. Stuart J. Russell and Peter Norvig. 2003. Artificial Intelligence: A Modern Approach (2 ed.). Pearson Education.
  2. E. Bonabeau, M. Dorigo, and G. Theraulaz. Swarm Intelligence: From Natural to Artificial Systems. Oxford University Press, New York, 1999.
  3. J.-L. Deneubourg, S. Aron, S. Goss, and J.-M. Pasteels. The self-organizing exploratory pattern of the Argentine ant. Journal of Insect Behavior, 3:159-168, 1990.
  4. Michael A. Lones. 2014. Metaheuristics in nature-inspired algorithms. In Proceedings of the Companion Publication of the 2014 Annual Conference on Genetic and Evolutionary Computation (GECCO Comp '14). ACM, New York, NY, USA, 1419-1422.
  5. Bishop, J.M., Stochastic Searching Networks, Proc. 1st IEE Int. Conf. on Artificial Neural Networks, pp. 329-331, London, UK, (1989).
  6. Williams, H. and Bishop, M. 2014. Stochastic Diffusion Search: A Comparison of Swarm Intelligence Parameter Estimation Algorithms with RANSAC. Algorithms. 7, 4 (May 2014), 206–228.
  7. Nasuto, S.J., Bishop, J.M. & Lauria, S., Time complexity analysis of the Stochastic Diffusion Search, Proc. Neural Computation '98, pp. 260-266, Vienna, Austria, (1998).
  8. Nasuto, S.J., & Bishop, J.M., (1999), Convergence of the Stochastic Diffusion Search, Parallel Algorithms, 14:2, pp: 89-107
  9. M. Dorigo and T. Stützle. Ant Colony Optimization. MIT Press, Cambridge, MA, 2004
  10. J. Kennedy and R. C. Eberhart. Particle swarm optimization. Proceedings of IEEE International Conference on Neural Networks, IEEE Press, Piscataway, NJ, pp. 1942-1948, 1995
  11. Steve Hurley and Roger M. Whitaker. 2002. An agent based approach to site selection for wireless networks. In Proceedings of the 2002 ACM symposium on Applied computing (SAC '02). ACM, New York, NY, USA, 574-577.
  12. R. Schoonderwoerd, O. Holland, J. Bruten and L. Rothkrantz. Ant-based Load Balancing in Telecommunications Networks. Adaptive Behavior, 5(2):169-207, 1996.
  13. G. Di Caro and M. Dorigo. AntNet: Distributed stigmergetic control for communications networks. Journal of Artificial Intelligence Research, 9:317-365, 1998.
  14. Riccardo Poli. 2008. Analysis of the publications on the applications of particle swarm optimisation. J. Artif. Evol. App. 2008, Article 4 (January 2008), 10 pages.
  15. Y. Zheng and Y. Meng, "Adaptive Object Tracking using Particle Swarm Optimization," 2007 International Symposium on Computational Intelligence in Robotics and Automation, Jacksonville, FL, 2007, pp. 43-48.


