Swarm intelligence is a form of collective intelligence that emerges as a result of a group of natural or artificial agents interacting with each other and the environment.
Principal Author: Kumseok Jung
While symbolic AI techniques involving logic and reasoning have been successful in solving many computational problems, it has been difficult to devise practical symbolic algorithms for problems of larger dimension and complexity^{[1]}. Mechanisms found in nature have inspired many concepts in Artificial Intelligence, a prominent example being the neural network, and they have been useful in solving problems in complex and dynamic models. In this page we discuss the idea of swarm intelligence and some of the nature-inspired heuristic algorithms that are used to solve difficult combinatorial optimization problems.
The swarm is modelled as a collection of multiple simple agents, and therefore it can be viewed as a kind of multi-agent system.
Swarm intelligence is a form of intelligence observed in the collective behaviour of individual agents interacting with each other under simple rules^{[2]}. Each member of the swarm follows a simple decision-making policy without being aware of the group's objective, and acts under uncertainty, using only information locally available to it. In this sense, a swarm is a self-organizing system coordinated by a decentralized control mechanism. The interactions between the individual members give rise to an emergent collective intelligence. As a result, the swarm is able to solve problems that would otherwise be too complex or unsuitable for an individual.
Although the term multi-agent system is rarely used in the context of natural systems, a swarm can be described as a kind of multi-agent system. In contrast, the term swarm intelligence is used broadly, to describe both natural and artificial systems. Like many nature-inspired disciplines in Artificial Intelligence, the engineering efforts in artificial swarm intelligence are rooted in the scientific study of swarm behaviour in nature. Swarm algorithms such as Ant Colony Optimization mimic the foraging behaviour of ants, while Particle Swarm Optimization takes inspiration from the flocking behaviour of birds.
A colony of ants is able to find the shortest path to the most rewarding food source, with each ant following a simple policy^{[3]}. This can be viewed as solving an optimization problem. An ant deposits pheromone along its trail when returning to the nest with food, in an amount proportional to the quality of the food. When an ant leaves the nest to forage, it follows the path with the strongest pheromone intensity. Other ants travelling along a path further intensify its pheromone level, resulting in a positive feedback cycle – i.e. the path with the stronger pheromone trail is chosen more frequently, which further reinforces the attractiveness of the path. After enough ants travel back and forth, less attractive paths are eliminated and the colony eventually converges on the shortest path, since the shortest path supports a higher frequency of trips and therefore a higher pheromone intensity. In the case of equidistant food sources, the food with the higher quality will be selected, as a stronger pheromone trail will lead to it.
When birds migrate or forage, they maintain a group flying formation, a behaviour known as flocking. Studies indicate that the flying formation yields beneficial aerodynamic properties and thus helps the flock save energy over a long migratory route. Additionally, the timely rotation of flying positions within the formation helps distribute fatigue across the members of the flock. Each bird in the formation follows three simple rules: keep a certain distance away from its neighbours, steer towards the average heading of its neighbours, and steer towards the average position of its neighbours. As a result of these individual actions, the flock is able to find an optimal flying formation with the lowest energy cost.
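The three rules above can be sketched as a velocity update for a single bird. The function below is a minimal two-dimensional illustration; the weighting coefficients and minimum-distance threshold are invented for the example, not taken from any particular study.

```python
import math

def boid_velocity_update(pos, vel, neighbours, min_dist=1.0,
                         w_sep=1.0, w_align=0.5, w_coh=0.5):
    """Return a new velocity for one bird, given its neighbours as
    (position, velocity) pairs. Weights are illustrative."""
    if not neighbours:
        return vel
    n = len(neighbours)
    # Rule 1: keep a certain distance away from neighbours (separation).
    sep = [0.0, 0.0]
    for npos, _ in neighbours:
        d = math.dist(pos, npos)
        if 0 < d < min_dist:
            sep[0] += (pos[0] - npos[0]) / d
            sep[1] += (pos[1] - npos[1]) / d
    # Rule 2: steer towards the average heading of neighbours (alignment).
    avg_vel = [sum(v[0] for _, v in neighbours) / n,
               sum(v[1] for _, v in neighbours) / n]
    align = [avg_vel[0] - vel[0], avg_vel[1] - vel[1]]
    # Rule 3: steer towards the average position of neighbours (cohesion).
    centre = [sum(p[0] for p, _ in neighbours) / n,
              sum(p[1] for p, _ in neighbours) / n]
    coh = [centre[0] - pos[0], centre[1] - pos[1]]
    return (vel[0] + w_sep * sep[0] + w_align * align[0] + w_coh * coh[0],
            vel[1] + w_sep * sep[1] + w_align * align[1] + w_coh * coh[1])
```

Applying this update to every bird each time step produces the emergent flocking behaviour, even though no bird computes anything about the flock as a whole.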
Swarm algorithms belong to a class of problem-solving techniques called metaheuristics^{[4]}. They are called “meta” heuristics because they are higher-level heuristics used to find a good search heuristic for solving optimization problems. These techniques are useful when the solution space is too large to search exhaustively, when computing power is limited, or when information is incomplete. In contrast to traditional optimization methods, metaheuristic techniques do not guarantee convergence to a globally optimal solution; instead, they are capable of finding near-optimal solutions at a lower computational cost. Metaheuristic techniques work at a higher level, without assumptions about the specific problem, making them applicable to a wide range of problems.
Stochastic Diffusion Search (SDS), first described in 1989^{[5]}, was the first swarm algorithm. It is a population-based global search algorithm, useful in solving optimization problems where the objective function is composed of partial functions that can be evaluated independently.
The operation can be easily understood by the following analogy of The Restaurant Game^{[6]}:
A group of students travels to an unfamiliar town to attend a long seminar. In this town there are many restaurants with a variety of menus to choose from, and the group wants to find the best restaurant – the one where the maximum number of people would enjoy dining. To solve this optimization problem, they employ Stochastic Diffusion Search. Every student first picks her own candidate for the best restaurant in town. Then, every night, each student visits their candidate restaurant and eats a random item from the menu. The next morning, if a student, say Alice, did not enjoy her meal the previous night, she asks one random friend, say Bob, to share his experience. If Bob's dinner was enjoyable, Alice adopts Bob's restaurant as her candidate restaurant. If Bob says his dinner was awful, then Alice picks a random restaurant. In the case that Alice enjoyed her dinner, she returns to the same restaurant but tries a different menu item. By the end of the seminar period, a popular restaurant emerges among the students, which is the global optimum.
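The restaurant game can be sketched directly as code. In the toy model below, each restaurant is reduced to a single number – the probability that a random menu item is enjoyable – and the quality values, agent count, and iteration count are all invented for illustration.

```python
import random

def sds(quality, n_agents=50, iterations=200, seed=0):
    """Toy Stochastic Diffusion Search. quality[i] is the probability
    that a random menu item at restaurant i is enjoyable. Returns the
    restaurant most agents settle on."""
    rng = random.Random(seed)
    n = len(quality)
    hypothesis = [rng.randrange(n) for _ in range(n_agents)]  # candidate restaurants
    active = [False] * n_agents
    for _ in range(iterations):
        # Test phase: each agent tries one random menu item at its restaurant.
        for a in range(n_agents):
            active[a] = rng.random() < quality[hypothesis[a]]
        # Diffusion phase: each unhappy agent consults a random friend.
        for a in range(n_agents):
            if not active[a]:
                b = rng.randrange(n_agents)
                if active[b]:
                    hypothesis[a] = hypothesis[b]     # adopt the friend's restaurant
                else:
                    hypothesis[a] = rng.randrange(n)  # pick a new one at random
    # The most popular hypothesis is the swarm's answer.
    return max(set(hypothesis), key=hypothesis.count)

best = sds([0.2, 0.4, 0.9, 0.3])  # restaurant 2 has the best menu
```

Note that no agent ever evaluates a restaurant fully; each only samples one menu item per night, yet the population converges on the restaurant with the best menu overall.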
Although many metaheuristic techniques are based on empirical study and lack formal analysis, SDS is a well-studied metaheuristic: its computational complexity^{[7]} and robustness have been analyzed, and its convergence has been proven^{[8]}.
Inspired by the foraging behaviour of ant colonies, Ant Colony Optimization (ACO) is a probabilistic technique for solving optimization problems that can be reduced to the problem of finding optimal paths on a graph^{[9]}. The algorithm differs from Stochastic Diffusion Search in the way the “ants” communicate with each other. Instead of direct ant-to-ant communication, in ACO the ants communicate through stigmergy, an indirect form of communication mediated by the environment. ACO has been applied to find near-optimal solutions to the Travelling Salesman Problem, the NP-hard problem of finding the shortest route a salesman can take to visit every city exactly once. In this application, artificial ants travel along the edges between the vertices, keeping a memory of the vertices already visited. The choice of the next edge to follow is probabilistic, based on a pheromone value and a heuristic value associated with each edge: the higher the pheromone level on an edge, the more likely it is to be selected. With a certain probability the ant takes a random edge instead, to allow exploration. Once an ant has visited every vertex, it has found a candidate solution. At the end of a trip, the pheromone values on the edges are updated: each value is first decreased by a small fraction to emulate evaporation, and then increased by an amount proportional to the quality of the solution found. The emulated evaporation is important in preventing the ants from getting stuck in a local optimum. After enough ants have travelled, paths with higher pheromone levels emerge, and their edges form the solution.
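A minimal sketch of ACO applied to a small Travelling Salesman instance is shown below. The parameter names (alpha, beta, rho) follow common ACO conventions; all values are illustrative rather than tuned, and the heuristic value of an edge is taken to be the inverse of its length.

```python
import math
import random

def aco_tsp(cities, n_ants=20, n_iters=100, alpha=1.0, beta=2.0,
            rho=0.5, q=1.0, seed=0):
    """Toy Ant Colony Optimization for the TSP over 2D city coordinates.
    Returns (best_tour, best_length)."""
    rng = random.Random(seed)
    n = len(cities)
    dist = [[math.dist(cities[i], cities[j]) for j in range(n)] for i in range(n)]
    tau = [[1.0] * n for _ in range(n)]            # pheromone on each edge
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}  # memory of visited vertices
            while unvisited:
                i = tour[-1]
                # Probabilistic edge choice: pheromone^alpha * (1/distance)^beta.
                cand = list(unvisited)
                weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                           for j in cand]
                j = rng.choices(cand, weights=weights)[0]
                tour.append(j)
                unvisited.remove(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # Evaporation, then deposit proportional to solution quality.
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1 - rho)
        for tour, length in tours:
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += q / length
                tau[b][a] += q / length
    return best_tour, best_len
```

The random component in the edge choice plays the exploration role described above, while the evaporation factor `rho` keeps early, possibly suboptimal trails from dominating forever.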
PSO is a population-based optimization technique that iteratively improves candidate solutions according to a heuristic inspired by the flocking behaviour of birds and fish. The heuristic function does not involve a gradient, and hence does not require the problem to be differentiable^{[10]}. The usual formulation of the problem is minimizing an objective function over an n-dimensional space; an example could be a school of fish trying to find the coldest spot in a body of water. In PSO, several particles navigate around the solution space. The position of a particle represents a possible solution to the objective function. Each particle keeps a personal best and a neighbourhood best position, whose values are evaluated by applying the objective function to the position coordinates. In each iteration, each particle's velocity is updated factoring in the personal best position and the neighbourhood best position. The velocity update function involves three terms characterizing the flocking behaviour. The momentum component, involving an adjustable inertia coefficient, prevents the particle from drastically changing its direction. The cognitive component reflects the tendency of the particle to move towards its personal best. Finally, the social component represents its tendency to follow the swarm's direction. The coefficients of the three terms are adjusted to suit the problem at hand. As with the other swarm intelligence metaheuristics, this algorithm does not guarantee that an optimal solution will be found.
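The three-term velocity update described above can be sketched as follows. The coefficient values used here (inertia 0.7, cognitive and social coefficients 1.5) are common defaults from the PSO literature, not prescribed values; this variant uses the global best as every particle's neighbourhood best.

```python
import random

def pso(f, dim, bounds, n_particles=30, n_iters=200,
        inertia=0.7, c_cog=1.5, c_soc=1.5, seed=0):
    """Toy PSO minimizing f over [lo, hi]^dim. Returns (best_position, best_value)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # neighbourhood (global) best
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (inertia * vel[i][d]                       # momentum
                             + c_cog * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + c_soc * r2 * (gbest[d] - pos[i][d]))    # social
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Example: minimize a shifted sphere function; the optimum is at (3, 3).
best, value = pso(lambda x: sum((xi - 3.0) ** 2 for xi in x), dim=2, bounds=(-10, 10))
```

Note that `f` is only ever evaluated at points, never differentiated, which is why PSO applies to non-differentiable objectives.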
Wireless network infrastructure is established by transmitters placed at different locations, each providing coverage over a local region. From the perspective of a service provider, it is most economical to minimize the number of transmitters while providing the best coverage. This problem of selecting the best sites to place transmitters is an NP-hard optimization problem. Hurley et al.^{[11]} provide a solution to this problem using SDS (Stochastic Diffusion Search). In this solution, a fixed number of agents are initially placed at different locations. Each agent can partially test the quality of its site by assessing either the strength of the signal or the noise level. If the test passes a specified standard, the agent remains in the active state; otherwise it becomes inactive. After every testing phase, each inactive agent chooses another random agent to consult; this phase is referred to as the diffusion phase. If the randomly chosen agent is active, the inactive agent moves to the active agent's site; otherwise it moves to a random location. The testing and diffusion phases are repeated until a desired proportion of the agents are active. The final site is then selected from the population of active agents remaining.
The use of ACO (Ant Colony Optimization) in communication networks was first proposed in 1996^{[12]}, using Ant-based Control for routing and load balancing in a circuit-switched network. AntNet was implemented in 1998^{[13]} for routing in packet-switched networks. Traditional routing techniques involve flooding the network with packets containing information about the link states, with each router updating its routing table from the received information. In AntNet, control packets, or ants, are flooded through the routers to serve as mobile agents that iteratively update the routing tables. The ants update a score value for the edge taken to arrive at a router, and then choose the next hop probabilistically, using the scores as probabilities. Over time the packets tend to favour the better paths and the routing is optimized. The performance of AntNet depends on the network size and topology, though it has been demonstrated that AntNet outperforms commonly used algorithms such as Bellman-Ford and OSPF, and is comparable to Dijkstra's shortest path algorithm in most cases. AntNet has been shown to adapt better to varying traffic loads, and works best for sparse random graphs.
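The probabilistic routing table at the heart of this scheme can be sketched as follows. The reinforcement rule shown – increase the chosen neighbour's probability and scale the others down so the row still sums to one – is a common AntNet-style update; in practice the reward `r` would be derived from the ant's measured trip time, which is not modelled here.

```python
import random

class RoutingTable:
    """Toy AntNet-style routing table for a single router: for each
    destination, a probability distribution over neighbouring routers."""

    def __init__(self, neighbours, destinations):
        p = 1.0 / len(neighbours)
        # table[d][n] = probability of forwarding a packet for d via neighbour n
        self.table = {d: {n: p for n in neighbours} for d in destinations}

    def next_hop(self, dest, rng=random):
        """Choose the next hop probabilistically, using scores as probabilities."""
        neighbours = list(self.table[dest])
        weights = [self.table[dest][n] for n in neighbours]
        return rng.choices(neighbours, weights=weights)[0]

    def reinforce(self, dest, neighbour, r):
        """Reward 0 < r < 1 for the edge an ant arrived on: boost the chosen
        neighbour and shrink the rest, keeping the row normalized."""
        row = self.table[dest]
        row[neighbour] += r * (1.0 - row[neighbour])
        for n in row:
            if n != neighbour:
                row[n] *= (1.0 - r)
```

Because forwarding is sampled from these probabilities rather than fixed, some traffic always explores alternative paths, which is what lets AntNet adapt to varying traffic loads.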
A study by Poli^{[14]} in 2008 suggests that there are about 650 publications on the application of PSO (Particle Swarm Optimization) in various fields, including biomedicine, communication networks, finance, and computer graphics, to name a few. To give an example, PSO has been applied to object tracking in video^{[15]}. In a traditional brute-force approach, the pixels of an entire frame would be assessed to locate the region of pixels that meets a certain criterion. In the PSO approach, several rectangular windows of defined width and height – each representing a "particle" – are initialized at random positions in the frame. The target object is described by transforming its pixel values into HSV format, then taking the hue values to build a histogram. At each iteration of the PSO algorithm, each window's histogram is compared with the target histogram using a user-defined comparison function. After the assessment, the window's personal best position is updated, and then its velocity is updated to move it towards its personal best position and the global best position. The algorithm eventually finds the region of the image that best fits the target's histogram. The PSO approach has an additional advantage in processing video, which requires applying the object-detection algorithm one frame after another: since a subsequent frame usually differs from the previous frame only slightly, retaining each particle's best position lets the PSO algorithm track the target object much faster in subsequent frames.
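A toy version of this idea is sketched below, using a single-channel frame as a stand-in for the hue channel and histogram intersection as the user-defined comparison function. The frame, window size, bin count, and all coefficient values are invented for illustration; a real tracker would operate on actual HSV image data.

```python
import random

def histogram(frame, x, y, w, h, bins=8):
    """Histogram of pixel values (0-255) inside a w-by-h window at (x, y)."""
    counts = [0] * bins
    for j in range(y, y + h):
        for i in range(x, x + w):
            counts[frame[j][i] * bins // 256] += 1
    return counts

def intersection(h1, h2):
    """Histogram intersection: larger means a better match."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def pso_track(frame, target_hist, w, h, n_particles=20, n_iters=80, seed=0):
    """Toy PSO localizer: each particle is a w-by-h window; fitness is the
    intersection of its histogram with the target histogram (maximized)."""
    rng = random.Random(seed)
    W, H = len(frame[0]) - w, len(frame) - h     # valid range of window corners
    fit = lambda p: intersection(histogram(frame, int(p[0]), int(p[1]), w, h),
                                 target_hist)
    pos = [[rng.uniform(0, W), rng.uniform(0, H)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [fit(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(2):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], 0), (W, H)[d])
            v = fit(pos[i])
            if v > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v > gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return int(gbest[0]), int(gbest[1])
```

For the next frame, the particle states would be carried over instead of re-randomized, which is the speed advantage described above.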