Tuesday 11 December 2012

Latest ideas in brief

In light of the continuing plans for the Mosaic project (check out the new website here) I've been trying to flesh out more of an idea of what I can contribute to this field in the coming years. Thankfully, with confirmation from my supervisors that it's not a stupid idea (always worth knowing!), I'm beginning to build a more concrete plan of what I can (at least try to) implement.
Mosaic project logo. Not representative of our choice of UAVs...
The logic behind it:
It looks very much like the Mosaic demonstrator we're planning for next summer will consist of heterogeneous UAVs: specifically fixed-wing gliders and some form of 'copters. Given that these clearly have very different capabilities, speeds, flight durations and so on, how best can they act as a team to find the targets in a space? There has been work in this area already, specifically in coalition formation, but there doesn't seem to be much emphasis on a message-passing hierarchy between different forms of UAV. The kind of example I envisage is one where a high-level glider does a few passes over a search area and generates some initial guesses at the locations of search objects (in real life, these might be casualties), before passing these on to the lower-level 'copters to deal with in some way, perhaps as a max-sum task allocation since the framework for this already exists. In a non-discretised case, where perhaps it would be best not to treat individual locations as point-like tasks, I'd also be interested in using rapidly-exploring random trees as a basis for working out optimal routes over a probability map produced by the glider, maximising some utility in finding objects.

Animation of a rapidly-exploring random tree. After growth is complete the UAV can pick an optimal path to follow.
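For my own reference, here's a minimal sketch of how such a tree grows over a rectangular search area (the step size, area bounds and function names are my own illustrative choices, not from any particular implementation):

```python
import math
import random

def rrt_grow(start, n_nodes=200, step=1.0, xmax=50.0, ymax=50.0, seed=0):
    """Grow a rapidly-exploring random tree from `start`.

    Returns the node positions and, for each node, the index of its
    parent, from which a path back to the root can be traced.
    """
    rng = random.Random(seed)
    nodes = [start]
    parents = [None]
    for _ in range(n_nodes):
        # Sample a random point in the search area...
        sample = (rng.uniform(0, xmax), rng.uniform(0, ymax))
        # ...find the nearest existing node...
        near_i = min(range(len(nodes)),
                     key=lambda i: math.dist(nodes[i], sample))
        nx, ny = nodes[near_i]
        # ...and extend one fixed step towards the sample.
        theta = math.atan2(sample[1] - ny, sample[0] - nx)
        new = (nx + step * math.cos(theta), ny + step * math.sin(theta))
        nodes.append(new)
        parents.append(near_i)
    return nodes, parents

def path_to_root(nodes, parents, i):
    """Trace the branch from node i back to the root."""
    path = [nodes[i]]
    while parents[i] is not None:
        i = parents[i]
        path.append(nodes[i])
    return path[::-1]
```

Scoring each branch against the glider's probability map would then be the interesting part: the tree just supplies candidate routes.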
Well that's the summary. Now comes the hard part of making all of this actually tenable! Back to work...


Friday 16 November 2012

Project progress, search patterns, and a DIY UAV

Following more meetings on our shiny new UAV project we've begun to thrash out a few more of the scenario details including a loosely defined goal of having a working prototype demonstrator next summer which involves multiple UAVs locating some sort of object (perhaps radio beacons) in a predefined search space. The ultimate goal is to scale this up over coming years into something resembling a radiation/chemical leak response package which could meet a number of targets, for example locating people in the path of the contaminant, alerting emergency services to their location, etc.

With that in mind I've been focussing my reading more around the search-pattern, search-algorithm area and trying to find out about existing methods and their relative pros and cons. Bayesian methods make a recurring appearance simply because having some pre-conceived belief and then updating it as you find more data is an extremely logical approach to the problem. There's also the issue of what can be achieved with multiple drones, and how best to confirm a target's identity once a potential sighting has been made. To that end I've had a few thoughts and ideas on possible approaches, based partly on existing work:

Utilising multiple similar UAVs in a heterogeneous way
Given a few different types of UAV in a situation, it's clearly logical to treat them differently. But what if you have identical drones that can perform different tasks? For example, given three drones and a search space, rather than having them all running search patterns independently at the same height (and so at the same level of detail), would it be advantageous to have a hierarchy where one (or more) UAVs takes high-altitude, low-detail images and then co-ordinates the other drone/s towards possible matches?
This...
... vs this
This introduces a quasi-centralised system - which would obviously need to be robust enough that if the higher UAV fails the search can still continue (perhaps reverting to the first arrangement). It would also need to be established if there is any benefit to this method.

Inter-UAV task allocation
Going on the basis that the MaxSum algorithm mentioned in previous posts is an extremely versatile task-allocation tool, I've been considering whether there's scope for its implementation in this area. For example, say a search pattern is being carried out by a swarm of UAVs using either of the two arrangements given above, and a possible sighting is made. Could the co-ordination then switch from Bayesian searching to a MaxSum task assignment as one UAV creates a task of "go and take a closer look at this location" and broadcasts it to surrounding vehicles? In the same-height arrangement such a method might turn out to be moot, since the utility function would be maximised by each UAV checking out its own possible sightings; but with the second method the higher-altitude drone could easily interrupt the search of the lower UAVs by creating a task of looking in detail at a location.

The bottom line of this is that the UAVs could have two modes, a Bayesian search mode and a MaxSum task mode, designed to complement each other.
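To make that concrete for myself, here's a toy sketch of the mode switch (the threshold, the likelihood-ratio observation model and all names are my own assumptions, nothing from the literature):

```python
CONFIRM_THRESHOLD = 0.8  # hypothetical belief level that triggers a task

def step(belief, observations):
    """One co-ordination step: update beliefs, then either keep searching
    (Bayesian mode) or hand confident cells to the task-allocation layer
    (MaxSum mode).

    `belief` maps cell -> probability of a target being there;
    `observations` maps cell -> likelihood ratio from this pass's sensors.
    """
    # Bayesian update: multiply in the evidence and renormalise.
    for cell, lr in observations.items():
        belief[cell] *= lr
    total = sum(belief.values())
    for cell in belief:
        belief[cell] /= total

    # Mode switch: any sufficiently confident cell becomes a task
    # to broadcast (here just returned as a list).
    tasks = [cell for cell, p in belief.items() if p > CONFIRM_THRESHOLD]
    mode = "maxsum" if tasks else "bayesian"
    return mode, tasks
```

In the real thing the returned tasks would feed a MaxSum assignment rather than a list, but the shape of the two-mode loop would be the same.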

Prior planning and search space restriction
One might imagine a number of these situations where, once the area to be searched has been established, time could be saved in the co-ordination stage by assigning each UAV a portion of the area to search, to which it is then restricted (barring a MaxSum task being received). Indeed, the UAVs need not even be aware of the other drones' behaviour, since each could have its own Bayesian belief map of its portion. If the designated target is found by one UAV, it need only communicate this to the others for them to stop searching. This could also reduce message overhead. A pithy summary, then, is "do the UAVs actually need the information gained by the other UAVs in the area?"
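The simplest version of that pre-planning stage is just carving the grid into strips, one per UAV; a quick sketch (vertical strips are an arbitrary choice, any disjoint cover would do):

```python
def partition_area(width, height, n_uavs):
    """Split a width x height cell grid into vertical strips, one per UAV.

    Each UAV then only needs its own belief map over its strip, with no
    knowledge of the others until a target is found.
    """
    strips = []
    base = width // n_uavs
    extra = width % n_uavs  # spread any leftover columns evenly
    x = 0
    for i in range(n_uavs):
        w = base + (1 if i < extra else 0)
        cells = [(cx, cy) for cx in range(x, x + w) for cy in range(height)]
        strips.append(cells)
        x += w
    return strips
```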

More questions:
The seemingly simple task of "find some objects in an area" actually has considerable elements that need to be accounted for in any solution. As well as the things mentioned above, the formalised problem has the following points to contemplate:

  • What uncertainty will the UAVs' images produce (presumably related to sensor resolution and altitude)?
  • How accurately do they need to know a target's position to consider it a successful task?
  • Is there any other metric to measure success beyond "finding the targets quickly"?
  • Do the UAVs know in advance how many targets there are?
  • If the answer to the above is "no", will they ever stop their search or is it actually desirable that they keep looking indefinitely?
A final word, from a DIY unmanned vehicle search team
For interest, I found the blog of a team from Australia who competed in (and almost aced) a competition held there: finding a missing person and dropping a water bottle to them in a pre-defined search space using an unmanned aerial vehicle. While not offering a huge amount of inspiration on the co-ordination front (a simple "lawnmower" search pattern was used with a recognition algorithm), it still makes interesting reading. I'm thoroughly impressed by how well it all worked considering it was an amateur effort done essentially by hobbyists. Check it out!

-C


Friday 2 November 2012

Some disjointed thoughts on UAV deployment 2

Using a Mini UAV to Support Wilderness Search and Rescue: Practices for Human Robot Teaming -Goodrich et al

Carrying on from yesterday, with the added caveat that following a meeting it seems we may be looking more into the realms of radiation-leak scenarios (which actually overlap a lot with this sort of thing; more on that later), this is the second paper I meant to ramble about yesterday but didn't get a chance to.

The unique perspective provided by this publication is that it uses actual search and rescue data and methods, and examines how UAVs could augment these searches. The basic process is Bayesian, with an initial belief which is then updated with various different time horizons in mind. As before there are considerations about terrain, paths, undergrowth etc. which directly affect how a person might behave.

Terrain mapping
There are three methods outlined which detail how a plan of action is carried out by the various members of a search and rescue team. At this stage these are all scenarios where the UAV has a human pilot controlling it directly (albeit remotely).

Sequential:

  • A search plan is designed by a mission planner or director based on initial beliefs
  • This is then carried out by an overhead UAV
  • Any signs of the missing person are followed up by ground crews 
  • If the person isn't found, a new plan is thought of and the process begins again
This method is especially useful over inaccessible terrain where a ground team can't easily move to track the missing person on foot, and where the area is very large. 

Remote-Led:
  • A ground team performs a search as quickly as possible from the last known location of the missing person, following any clues of their whereabouts
  • The UAV is tasked with orbiting overhead and tracking the ground team, extending their effective range of vision
  • Telemetry is sent back to the mission director on the ground to further the search
A good plan for cases where the missing person has left evidence of their location or the area is quite local.

Base-Led:
  • A Bayesian search pattern is planned based on chosen waypoints
  • The UAV carries out the pattern, sending data back to base and circling waypoints looking for evidence of the missing person
  • The ground crew then follows the UAV's search pattern, ready to close in on potential signs of the missing person, but otherwise remaining central to the UAV's circles
Plan works well for easily mobile ground crews who don't have access to enough information to perform a remote-led search.

The paper also outlines, in brief, a stochastic approach for modelling likely search positions (essentially calculating the initial probability distribution which is then updated in a Bayesian fashion) by modelling a probabilistic quasi-random walk with functions taking into account terrain direction and environment.
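A toy version of that idea, as I understand it, is to Monte Carlo many terrain-weighted random walks and histogram where they end up. The terrain types, weights and all parameters below are invented for illustration, not the paper's:

```python
import random

# Hypothetical preference weights: a lost person is more likely to step
# onto easy terrain than onto difficult terrain.
TERRAIN_WEIGHT = {"path": 5.0, "grass": 2.0, "scrub": 1.0, "cliff": 0.1}

def simulate_prior(terrain, start, steps=30, walks=2000, seed=0):
    """Build an initial probability map by simulating many quasi-random
    walks from the last known position and counting where they finish."""
    rng = random.Random(seed)
    h, w = len(terrain), len(terrain[0])
    counts = [[0] * w for _ in range(h)]
    for _ in range(walks):
        r, c = start
        for _ in range(steps):
            moves = [(r + dr, c + dc)
                     for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= r + dr < h and 0 <= c + dc < w]
            weights = [TERRAIN_WEIGHT[terrain[mr][mc]] for mr, mc in moves]
            r, c = rng.choices(moves, weights=weights)[0]
        counts[r][c] += 1
    total = float(walks)
    return [[n / total for n in row] for row in counts]
```

The resulting map could be computed remotely (where computing power is cheap) and transmitted to the UAV as its starting belief.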

So, what's useful here?
Well clearly I now have a potential new context to think about (radiation or chemical leaks). It might even be worth asking the emergency services what sort of process they would carry out in this case, and then seeing whether it resembled the above at all. The initial probability distribution generation is worth considering as one might potentially have more computing power available (consider calculating it remotely and then transmitting to the UAV?). More broadly, it highlights the need for the consideration of the command structure of an operation when deciding what the UAVs need to do and who they need to report to. In any case, a Base-led or Sequential process for mapping out an emerging pollutant cloud might not be a bad starting point.

-C

Thursday 1 November 2012

Some disjointed thoughts on UAV deployment

In light of our continuing group efforts to have a separate UAV project linking Orchid work with UAV engineers in the University (entitled MOSAIC), I've been scouring papers for various suggestions on how to implement UAV co-ordination in a selection of scenarios.

In essence, since we might end up with a few different scenarios it'd be beneficial if we had a range of options we could refer to in a reference-like way. "Hey! I want to get some UAVs to do this task" says an enquirer. "Ok", says we, "we think This is the best algorithm, for these reasons given your situation, hardware, environment etc etc". Since I'm ideally placed to research existing work in this area I'm trying to start gathering methods, reviews, tests, trials, and all the various pitfalls of different algorithms into a coherent lump that may end up as a literature review.



A couple of papers caught my eye recently in the specific area of search and rescue of a missing person in some given terrain area or wilderness location. While specific, they do list a few useful classes of problem which could be given future thought.

Supporting Search and Rescue Operations with UAVs -S Waharte and N Trigoni

This is a nice paper giving a very broad overview of three approaches to searching for a missing person, with an evaluation of their respective efficacy in a simulation. Nearly all such scenarios can be characterised by the exploitation-vs-exploration trade-off, which goes something like this:

How much time should I spend searching areas I haven't yet searched, compared to looking more closely at areas I have searched?

Clearly both extremes are undesirable: you would not want your UAV to zip quickly over a huge area and miss the missing person because of lack of attention to detail, nor would you want it to spend four hours staring ever closer at a person-shaped rock.

Like this one, on Mars.

Unlike the Max-Sum scenarios, the methods here deal only with minimising the time taken to find the missing person: the difference being that there is typically only one overriding 'task' (albeit split into possible sub-tasks) for the UAVs to undertake. Nonetheless it is important to consider the algorithms outlined, to avoid being funnelled into one specific line of thinking:

Greedy Heuristics
Each UAV maximises its own utility (i.e. search coverage) in a Bayesian way, developing its own guesses as to the location of the missing person and acting on them. Various methods of route-choosing were explored, including those maximising immediate gain and those that actually plan slightly into the future.
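The immediate-gain variant is easy to sketch: move to the most promising neighbouring cell, then apply Bayes' rule after a negative observation there. The detection probability and grid representation are my own placeholders:

```python
P_DETECT = 0.9  # hypothetical chance of spotting a target that is present

def greedy_step(pos, belief):
    """One greedy move: fly to the neighbouring cell with the highest
    belief, then downweight that cell after a (negative) observation.

    Bayes: P(target | not seen) is proportional to
    P(not seen | target) * P(target), followed by renormalisation.
    """
    r, c = pos
    h, w = len(belief), len(belief[0])
    moves = [(r + dr, c + dc)
             for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
             if 0 <= r + dr < h and 0 <= c + dc < w]
    nr, nc = max(moves, key=lambda m: belief[m[0]][m[1]])
    belief[nr][nc] *= (1 - P_DETECT)
    total = sum(sum(row) for row in belief)
    for i in range(h):
        for j in range(w):
            belief[i][j] /= total
    return (nr, nc)
```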

Potential Heuristics
Areas of interest are modelled as attractive potentials on a 2D surface, and less accessible areas as repulsive potentials. Force is calculated (as in physics) as the negative of the potential gradient. Potential increases with subsequent visits to discourage loitering, and some message-passing is allowed.
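A rough sketch of that force calculation, with fall-off shapes and parameters that are my guesses rather than the paper's:

```python
def force(pos, attractors, repulsors, visits):
    """Net 2D force on a UAV from attractive wells (areas of interest)
    and repulsive hills (inaccessible areas).

    Each attractor's pull weakens with distance and with the number of
    previous visits to it, discouraging loitering.
    """
    fx, fy = 0.0, 0.0
    x, y = pos
    for (ax, ay), strength in attractors.items():
        dx, dy = ax - x, ay - y
        d2 = dx * dx + dy * dy + 1e-9  # avoid division by zero
        w = strength / (1.0 + visits.get((ax, ay), 0))
        fx += w * dx / d2  # points towards the attractor
        fy += w * dy / d2
    for (rx, ry), strength in repulsors.items():
        dx, dy = x - rx, y - ry
        d2 = dx * dx + dy * dy + 1e-9
        fx += strength * dx / d2  # points away from the repulsor
        fy += strength * dy / d2
    return fx, fy
```

The UAV would simply step along the net force direction each tick, with the visit counts steadily flattening out already-searched wells.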

POMDPs
Partially observable Markov decision processes are a well-known branch of decision making in computer science and provide a forward-looking strategy for action based on noisy data which may not actually represent the real situation. For instance, the chance of recording a false positive increases with decreased height, and the model takes this into account. The question then becomes one of maximising coverage, with a view to doing so in future, given the uncertainty of existing data. Again some message-passing was allowed, but in a very computationally intensive way, with UAVs sharing their entire belief set periodically when they came in range of another UAV.

Despite the very simple scenario and simulation (only a few tens of square metres of simulated woodland), the tests showed clear advantages to a POMDP method including message passing. A brief concluding thought here is that such a method has two very big problems: a terribly costly message overhead, and a required computing power which increases exponentially with the grid size (since essentially every possible path is considered). Possible, but unwieldy without some serious solution-space pruning.

More thoughts to follow

-C

Friday 26 October 2012

Few videos of interest

A couple of interesting videos related to decentralised co-ordination (in the second case, with an overhead view from a UAV).


This is a nice talk and demo from Vijay Kumar at TED, showing agile flight control, formation flying and (more interestingly from my point of view) decentralised co-ordination. Definitely worth reading some papers from here. He talks at one point about disaster relief work, although I think the reference is to lower-level building infiltration. The video concludes with the now well-known James Bond theme played by drones.

This second video shows an interesting amalgamation of a flying co-ordinator sending instructions to co-operative ground-based agents. I'm wondering if this could be applied to our forest fire scenario, where both UAV and UGV agents will be working together. Also, the colour algorithm they demonstrate would surely be improved by using simple number-selection and passing algorithms wirelessly, rather than visible light? This would remove the need for line-of-sight visibility, as long as each robot can broadcast some kind of identifier.

Food for thought I think.

-C

Tuesday 23 October 2012

Disaster news out of Italy

In a pretty shocking turn of events, the BBC today reported that seven Italian scientists have been sentenced to imprisonment for "falsely reassuring" people about the possibility of an earthquake, despite their statements being qualified by the fact that such predictive work is extremely difficult.

I think this is an appalling turn of events which can only damage the resolve of future research into the vital areas of disaster relief and prediction. Certainly if in three years I've managed to make a contribution to UAV co-ordination and this verdict stands, I would feel uncomfortable seeing any project involved in a country which persecutes those trying to act for the public good.

News story here: http://www.bbc.co.uk/news/world-europe-20039769

Anyway, I shall leave it at that, and go and simmer in my outrage.

-C

Paper: Bayesian modelling of lost-person behaviour. Lin and Goodrich

Fully titled: A Bayesian approach to modelling lost person behaviors based on terrain features in wilderness search and rescue

While not directly about UAV co-ordination, this paper holds some relevance to automated behaviour in the sense that it presents a way of modelling probability of a person's path over certain terrain: potentially providing a template map for autonomous searching in this area. More relevantly for what I'm interested in, the exact method it uses could have applications in the forest-fire tracking project which has recently been kicked into gear. A quick look at the method:

Bayesian statistics are a particular interpretation of what "probability" actually is; in this case, a 'belief' about a system. Taking that into the realm of computer science, a Bayesian model seems to be one where some prior probability distribution is assumed to start with, and this is then continually updated as new information becomes available. The current distribution directly determines the areas to be searched. The initial probability distribution is generated from experts in search and rescue operations, and from analysis of search and rescue data, which was used to generate a Markov transition matrix of the probabilities of transitioning from one terrain type to another.
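To get a feel for how such a transition matrix might drive the model, here's a toy prediction step that spreads a cell's probability mass to its neighbours according to terrain-type transitions. I've used square cells rather than the paper's hexagons, and the terrain types and probabilities are made up:

```python
# Hypothetical Markov transition probabilities between terrain types:
# P(step onto column type | currently on row type).
TRANSITION = {
    "trail":  {"trail": 0.7, "forest": 0.2, "rock": 0.1},
    "forest": {"trail": 0.3, "forest": 0.6, "rock": 0.1},
    "rock":   {"trail": 0.4, "forest": 0.4, "rock": 0.2},
}

def predict(belief, terrain):
    """One prediction step: each cell's probability mass moves to its
    neighbours (or stays put) in proportion to the terrain transition
    probabilities, conserving total probability."""
    h, w = len(terrain), len(terrain[0])
    new = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            moves = [(r + dr, c + dc)
                     for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1), (0, 0))
                     if 0 <= r + dr < h and 0 <= c + dc < w]
            weights = [TRANSITION[terrain[r][c]][terrain[mr][mc]]
                       for mr, mc in moves]
            total = sum(weights)
            for (mr, mc), wgt in zip(moves, weights):
                new[mr][mc] += belief[r][c] * wgt / total
    return new
```

For the fire-tracking idea, the "terrain types" would become fuel and vegetation classes, with the transition probabilities learned from past fire data.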


Why this could be useful:
The nature of forest fires is clearly not entirely predictable, else this project would be redundant and there would be a lot less charred woodland in the world. A Bayesian model such as this, adjusted to take into account existing data of past fires, could be used to model both high-risk areas worthy of preventative monitoring, and also (in the event a fire is detected) the most likely path of propagation for the fire. That is, with terrain and vegetation data one could build a map showing the most likely areas of fire spreading, which is then continuously updated by the incoming sensory data from UAVs and UGVs in the area, allowing adaptive behaviour and monitoring. This could, in theory, also be tied into early warning systems for population areas at risk.

Discretising the area into hexagonal 'cells' proved a useful method of simplification in the paper and could help manage computing resources in a forest fire scenario too.

Shortcomings:
The paper had a number of shortcomings worthy of discussion, stemming mainly from an abundance of assumptions. Whilst it is clear the paper was very much the beginning of possible future research, some of these appeared to be so broad as to render the results questionable. One example is the definition of a 'slope' (affecting a person's travel time and direction) as any elevation difference over adjacent cells resulting in at least a 20-degree angle in the terrain. From some experience in hiking and off-road biking, a 20-degree slope is pretty harsh, amounting to a rise of greater than 1 in 3. Practically, it must be expected that much smaller inclinations than this would affect the missing person's path.

Another rather glaring omission is the lack of consideration of a possible destination. The paper quotes another study which showed that in missing-person incidents a person's desired destination 'strongly correlates' with their behaviour. This strikes me as something of a truism. Indeed, an example involving geocachers claims that the location of the 'treasure' they intend to find "play[s] an important role in the person's behavior in the wilderness": surely this statement amounts to the admission that people will tend to move towards a destination? This sort of consideration, while not directly applicable to forest fires (fires generally don't desire to move anywhere), would be vital in the context of trajectories and (as stated) paths tending towards populated areas.

Final word:
With some modification it seems this paper lays a potentially useful groundwork for forest fire tracking. Bayesian statistics are well established in autonomous agents and I believe it would be a method worth pursuing. It would also be instructive to see if such a model could perhaps be married to a max-sum co-ordination algorithm (possibly through influence of the utility function).

-C

Thursday 18 October 2012

Paper: Decentralised situational awareness. Settembre et al


Info: Cite as: A Decentralized Approach to Cooperative Situation Assessment in Multi-Robot Systems, G. P. Settembre, P. Scerri, A. Farinelli, K. Sycara, and D. Nardi, Proc. of 7th Int. Conf. on Autonomous Agents and Multiagent Systems (AAMAS 2008), Padgham, Parkes, Müller and Parsons (eds.), May 12-16, 2008, Estoril, Portugal, pp. XXX-XXX.
Copyright © 2008, International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.


I picked up another paper which focusses on robot co-ordination in a slightly different way, by combining observations from the robots to come to a group decision. The Max Sum literature I've been reading has been very instructive, and having been directed to the author Professor Paul Scerri and finding this paper, I thought it worthwhile exploring alternative methods for agent co-ordination. It seems the case that Max-Sum is extremely adept at the UAVs-over-a-disaster-zone scenario, but since our UAV team is possibly going to be dealing with slightly different tasks in the near future a broad knowledge of some problem solving algorithms seems useful. This particular example was tested on a computer model of UGVs in a search space.



The algorithm (referred to only as the MAS [Multi Agent System] Policy) involves essentially transposing an argumentative framework into a situation where the agents are working together. Rather than having message passing and a form of argumentative negotiation to ensure your best gains (such as might be encountered in bidding software), they use this method to challenge plans of action based on each robot's own data gathered about the world, and their 'belief' in a certain set of circumstances. Optimal plans of action are known by each agent in advance, and using their own 'beliefs' they can compute what they believe to be the best course of action, given especially that they don't know their belief to be definitely true. 

For example, in a burning building, if the robot encounters what it recognises as a human, which it believes to be moving, is it best to implement a plan to get the human to follow the robot out of the building? Or is it better, depending on the exact level of belief, to assume the human is unconscious and remotely summon the emergency services to that location?

Such decision making turns out to be surprisingly in-depth. The model relies on the basic principles that as specific a solution as possible is best, but that a general solution is better than being wrong. So in the above example, leading the human out would be best if they were conscious, but it would be less bad to call the firefighters if they were conscious than to try to lead them out if they were unconscious. Applying numerical values to the different scenarios effectively determines an individual robot's ultimate plan.
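That "numerical values" step is just an expected-utility calculation under the robot's belief. A sketch with entirely invented payoff numbers (the relative ordering is what matters: the specific plan wins when right, the general plan is the safe middle ground):

```python
# Hypothetical payoffs: rows are plans, columns are true states.
# Leading a conscious person out is best; leading an unconscious one
# is worst; calling the fire crew is acceptable either way.
PAYOFF = {
    "lead_out":  {"conscious": 10.0, "unconscious": -5.0},
    "call_crew": {"conscious": 4.0,  "unconscious": 6.0},
}

def best_plan(p_conscious):
    """Pick the plan with the highest expected payoff under the robot's
    belief that the person is conscious."""
    def expected(plan):
        return (PAYOFF[plan]["conscious"] * p_conscious
                + PAYOFF[plan]["unconscious"] * (1 - p_conscious))
    return max(PAYOFF, key=expected)
```

With these numbers the robot leads the person out only when its belief that they are conscious is fairly high, and otherwise falls back on the general plan.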

Once a robot has worked out what it believes to be the best plan of action, the message passing begins. The robot tells a random neighbour its plan. The neighbour may just agree, and pass the message on. But, it may have its own beliefs which affect the plan:

In this case, the first robot can either stick to its guns and send its original plan back to the arguer, along with its own data to support its case, or it can change its mind. Every time an agreement is reached the message is sent on randomly until a predefined number of robots have agreed with the plan.

The paper itself points out two downsides to this arrangement. Firstly, it produces poor decisions in scenarios with too few robots (and therefore not enough data to share around). Secondly, constantly changing dynamic environments will result in outdated information by the time an agreement is reached. In practice, this places some restrictions on what sort of tasks might be carried out. Specifically, information density is key, and this would clearly not be suitable in a sparsely populated environment (say, a handful of UAVs in a large search area).

The paper records performance as being almost as good as a centralised situation where every robot knows all the information gained by every other robot. Clearly that is infeasible in terms of the message sizes being passed...

Final thought
Indeed, this is where I think this method falls down. Compared to something like Max Sum, which is specifically optimised to minimise interaction, this algorithm seems to carry a huge message overhead which could increase polynomially with the number of agents and the amount of agreement required. It could still prove feasible in a densely populated environment, but as yet it seems a little too specific.

-C

Disasters: the UN report

Been very busy recently. It looks like our group is going to have an independent team looking at UAVs (which is cool). Anyway I've spent some time recently going back-to-basics and having a glance at disaster relief papers to see what sort of requirements might be needed (see my last post).

As a quick addition, here's a nice quote from a UN paper on the issue:

High-resolution imagery—defined here as being able to see to the level of one meter—has not traditionally been available at the field level for operating agencies immediately after a disaster. Imagery can be critical to making operational decisions, especially in regards to logistics. But imagery also is time consuming to process and analyze—a task for which field staff has precious little time. During the response to the 2004 Indian Ocean tsunami, the UN staff in Aceh had asked for detailed imagery of bridge outages so that they could design supply chains around the island. They had also requested nighttime infrared imagery of the island, hoping that heat of campfires and congregations of (warm) people would help the UN identify where the populations of destroyed villages had moved. While they eventually received some imagery, they never received a data set that answered their operational questions.

In the Haiti instance, these images were provided from satellites and overflights, but the requirement for imagery is clearly a vital one, lending credence to the use of UAVs in this way.


Tuesday 16 October 2012

Disasters: how to make them less disastrous

It occurred to me yesterday that, before coming up with too many esoteric ideas on how to extend the capabilities of a Max-Sum co-ordinated swarm of UAVs, it might be an idea to see if I can find out exactly what first responders might be interested in, and what they need, in a disaster area.

To that end, I found a report commissioned in the wake of the Haiti earthquake (see it here) about lessons learned and problems which need addressing. The paper is very broad in scope and deals in large part with the reconstruction and long-term aftermath of the event, which is not relevant to what I'm studying. There were a number of discussions on the general structure of the relief effort, which is relevant in the sense that it's important to know who might actually be using the final product. It seems that typically relief work is separated into 'clusters' of people who are responsible for certain areas: e.g. food, medicine, reconstruction, etc. They also specifically refer to the work done by ground first responders (USAR teams) who, in the Haiti case, were grouped into 53 teams which managed to save 211 people.

53 teams inputting different tasks into UAVs would be quite a large task:agent ratio, if one imagines there might be ten or so vehicles in the skies. Clearly this ties in to remarks I read in the earlier paper on extending flight tests to domains where the number of tasks largely outweighs the number of UAVs.

On the software front, it seems language considerations could play a surprisingly important role in rolling out the PDA applications to communicate with the UAVs. Apparently there were even protests against some foreign aid groups who the locals felt were ignoring their cultural and personal considerations. "Ownership" of a relief effort is highlighted as being valuable. In such a case it would make sense that the local first responders (and not just English-speaking international workers) would be able to use the system.

Another software consideration is a system called OneResponse which has been developed as a website to centralise data collection in disaster relief. Basically this was motivated by scattered data being available to different groups in an uncoordinated way in previous scenarios. A centralised repository such as this might be a valuable way of allowing additional parties access to valuable data collected by UAVs.

As much as the paper was a little broader in scope than I was hoping for, it did spur a few ideas about the possible utility of the UAVs. Indeed, a section mentioning that initial imagery is currently collected via satellite certainly suggests close-in, higher-resolution data would be useful. It also got me wondering about the exact nature of the images being collected; would it be worth giving a responder direct control once the UAV was over a target? That way things like angle and altitude could be adjusted to view things in more detail or more clearly at the discretion of the responder. Something to consider, I think.

-C


Friday 12 October 2012

Paper: Max Sum algorithm on UAVs. Delle Fave et al

Fully titled: Deploying the Max-Sum Algorithm for Decentralised Coordination and Task Allocation of Unmanned Aerial Vehicles for Live Aerial Imagery Collection
This was the first paper I was asked to read, mostly because it provides a pithy overview of what has so far been achieved in the field of UAV co-ordination in disaster relief. The content of this post may vary between technical discussion and general overview, so I apologise in advance if it seems a bit two-toned.

The paper introduces a method called the Max-Sum algorithm as a way of co-ordinating UAVs as required, and outlines its virtues. It then mentions several results in simulations before showing some empirical examples from a real-world demonstration involving two hexacopters. 


Picture of a hexacopter from the paper in question
I already mentioned in the introduction a bunch of the challenges of co-ordinating the UAVs (or rather, allowing the UAVs to co-ordinate themselves). The paper re-iterates these and adds that existing co-ordination methods and algorithms don't address the uncertainty of a task's completion: obviously if a UAV fails, if the task changes, or if the emergency services require a different task to take priority, this could be a big problem for real-world applications. A first responder isn't going to know precisely how long they require imagery from a location; it could depend on what they see in the first place.

Addressing the de-centralised nature of the system (i.e. no central computer: everything shared out), the paper notes that they've included the computing power of the first responders' PDAs to help with the calculations. Since the imagery will be sent directly to the PDAs, they'll have to be in communication with the UAVs anyway, so this seems sensible.

Max Sum
The best way to begin finding a solution to a problem like this is to express it mathematically. To that end, the authors of the paper present something called a "utility function", which essentially gives a numerical value to a task which might be attended to by a UAV. The higher the number, the better. In principle one can imagine the UAVs working out these values for every task they can go to, and then working out how to get the highest overall number for the whole system.


Simple example where each UAV can attend one task
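For a tiny system like the one in the figure, you could simply enumerate every joint assignment and keep the best one. A minimal sketch in Python (all the utility numbers are made up for illustration):

```python
from itertools import product

# Hypothetical utilities: utilities[uav][task] is the value of
# sending that UAV to that task.
utilities = [
    [4.0, 7.0, 1.0],  # UAV 0
    [6.0, 3.0, 5.0],  # UAV 1
]

tasks = range(len(utilities[0]))

# Enumerate every joint assignment (one task per UAV) and keep the
# one with the highest total utility for the whole system.
best = max(
    product(tasks, repeat=len(utilities)),
    key=lambda assign: sum(utilities[u][t] for u, t in enumerate(assign)),
)

print(best)  # (1, 0): UAV 0 takes task 1, UAV 1 takes task 0
```

Of course, this brute force grows exponentially with the number of UAVs, which is exactly why a decentralised message-passing scheme like Max-Sum is attractive.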

Max Sum is a message-passing algorithm which allows this to happen through UAVs passing messages to their nearest neighbours. As such, no single agent knows the whole "picture"- but by knowing how they affect the system they can all work out the best course of action. 

Explicitly, the paper describes the utility function of a task i as a function of x_i, the set of agents (UAVs) which can attend the task:


The first two terms codify the priority and urgency of the task at hand (the urgency being dependent on the submission time and the current time). The last term in brackets is the solution to a probability function describing how likely it is that the UAV's battery will run out during the task. The reason this can't be stated exactly is that the paper enforces the condition that it's impossible to know in advance exactly how long a task will take.
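Putting that description into a rough formula (my own schematic paraphrase of the text above, not the paper's exact notation; the weights and symbols are invented for illustration):

```latex
U_i(x_i) \approx \underbrace{\alpha\, p_i}_{\text{priority}}
        + \underbrace{\beta\,\big(t_{\text{now}} - t_i^{\text{submit}}\big)}_{\text{urgency}}
        + \Big(1 - \Pr[\text{battery runs out during task } i \mid x_i]\Big)
```

where p_i is the task's priority, the middle term grows the longer a request has been waiting, and the bracketed term rewards assignments whose batteries are likely to last.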
With the utility function designed, the Max-Sum implementation involves passing limited messages between 'functions' and 'variables', which in this case are the PDAs and UAVs respectively. Messages from variables to functions take the form: "here are the maximum values the rest of the system can achieve for the possible task allocations, if you take one of a number of options". For example, if there are three tasks, the variable (UAV) will tell the function (PDA) the maximum utility values for the rest of the system for each possible task assignment of whatever other variables the function is attached to.

The messages from the functions to the variables are similar, in the sense that they return values corresponding to what task is taken by the variable they're attached to, giving the maximum possible utility that their variable can produce.
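For a toy system with a single shared utility function and two UAVs, the function-to-variable message boils down to a max over the other UAV's possible choices. A sketch under that assumption (the utility table is hypothetical):

```python
# F[x0][x1]: total utility if UAV 0 picks task x0 and UAV 1 picks x1
# (made-up numbers for illustration).
F = [
    [10, 7],
    [13, 4],
]

domain = [0, 1]  # the two available tasks

# Function-to-variable messages: for each of my choices, the best the
# function can achieve over the *other* variable's choices.
r_to_0 = [max(F[x0][x1] for x1 in domain) for x0 in domain]
r_to_1 = [max(F[x0][x1] for x0 in domain) for x1 in domain]

# Each variable (UAV) then picks the choice with the largest message.
x0 = r_to_0.index(max(r_to_0))
x1 = r_to_1.index(max(r_to_1))

print(x0, x1)  # 1 0: together they recover the best joint assignment
```

With a single function this toy version is exact in one pass; with many overlapping functions the real algorithm iterates messages in both directions.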

By adding together incoming messages, variables reach the stage where they have a series of 'maximums' for each available task they could undertake. They then choose the highest value available to them. For example, if a variable receives messages from all its neighbours giving the maximums as (2, 19, 8) for tasks 1, 2 and 3 respectively, then the variable (i.e. the UAV) will move to task 2, regardless of what its own individual utility for that task is.
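That (2, 19, 8) example amounts to an element-wise sum of the incoming messages followed by picking the largest total. A quick sketch (the individual message values are invented so that the totals match):

```python
# Messages from three neighbours, one value per task: "this is the
# best the rest of the system can achieve if you pick that task".
incoming = [
    [1, 9, 3],
    [0, 6, 4],
    [1, 4, 1],
]

# Add the messages element-wise to get a combined 'maximum' per task...
totals = [sum(vals) for vals in zip(*incoming)]

# ...then commit to the task with the highest combined value.
choice = totals.index(max(totals))

print(totals, "-> task", choice + 1)  # [2, 19, 8] -> task 2
```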

This happens continuously, with the UAVs updating what they're doing all the time. It also doesn't have to happen in a particular order, as UAVs will change their behaviour as soon as information is available to them. That's a particularly non-rigorous explanation of Max-sum and I fear I've grossly understated its utility and robustness for this kind of situation.

Results
Simulations in the paper dealt with a few different scenarios, including some where parts of the utility function were left out: for instance, some scenarios ignored the urgency of a task. Those experiments where the full function was used produced better results in returning high-priority tasks and avoiding running out of power mid-task. This is important, as it confirms the viability of using this sort of method in real situations where everything needs to be considered.

Finally, empirical evidence is shown from real UAV flight tests conducted in Australia, where two UAVs faced three different scenarios: two tasks arriving at the same time; two tasks arriving one after the other (the second with a higher priority); and a situation with three eventual tasks, where one UAV doesn't know about one of them. In brief, the UAVs conducted the tasks properly. A video can be seen here, which includes a quick overview of the simulations too.

Thoughts
Most of the paper seems pretty straightforward, and with a bit of extra reading I'm fairly happy I understand the Max-Sum algorithm, which is good since it seems to form the backbone of this project. The video of the real UAVs is nice as it presents things in a very obvious and straightforward way, but it does raise a few questions. Firstly, I'm curious why in the examples the UAVs don't follow a straight path to the tasks, but rather 'wiggle' their way there. It would be interesting to find out if this is a result of the algorithm itself or something to do with the UAVs' hardware. Another curiosity is that in the second example the UAVs actually swap over midway through moving to their tasks. Initially they move to the closest target (which makes sense), but thereafter they swap over before settling over the target.

Possibly this is because of the battery-life aspect of the utility function (which isn't obvious just from looking at the video), since the only other factor is really the distance to the target. If battery life were not a factor, this would suggest something funny going on with the algorithm that isn't at all intuitive.

The paper finishes by suggesting that the scenario needs to be extended to more complex systems with more UAVs: something I'd be very keen to be involved in.

-C

Welcome and introduction

Hello internet, and welcome to a humble blog about my life as a PhD student, which should hopefully contain a great deal of thoughts and musings on the work I've got myself into for the next three years. The purpose of this is twofold: firstly, it provides me somewhere to share what I understand from reading and learning about the algorithms I'm supposed to be dealing with (and, by extension, a place for people to tell me how wrong I am). Secondly, in the unlikely event you have a vested interest in my life, you can use it to keep track of what I'm doing over the next few years.

As a brief overview: following my graduation with an MPhys in Physics, I'm now being funded for a PhD in computer science by something called the Orchid Project. This is a big collaboration of companies and universities which work together in three specific areas of computer science, under the broad umbrella of "agents".

I have to admit I had no idea what an "agent" was until I went to a seminar on available PhDs. In broad terms, as I understand it, an agent is essentially a distinct unit of computation which can act independently and (in the sense that it can be intelligent) autonomously to achieve certain tasks in an environment, in the presence of other such agents. This makes the field extremely broad, covering everything from smart thermostats and bidding software to fully fledged autonomous unmanned aerial vehicles (UAVs). Specifically, Orchid focusses on Smart Grid (energy distribution) work, Citizen Science (which involves crowdsourcing), and Disaster Relief. It was this latter area which piqued my interest, not least because the method they plan on using involves deploying intelligent agents in the form of UAVs to assist first responders and emergency services in disaster situations, for example earthquake zones.

Orchid is looking to have the UAVs act autonomously: removing the need for that clunky remote control.
The ultimate goal looks something like this: in the event of a disaster, a fleet of UAVs is deployed into the sky above the area. Thereafter, emergency services will have a PDA or smartphone app of some sort, with which they can request imagery of a particular area from the UAVs (which are equipped with cameras), whilst also specifying things like urgency and duration required. This information is sent to the UAVs, which then act autonomously to work out between themselves which vehicle goes to which task, such that they achieve the best overall coverage the emergency services need. Clearly they need to co-operate to achieve this.

This is no simple task, for a couple of reasons. Firstly, a great deal needs to be taken into account before simply deciding that UAV 1 will go and look at task 1. Obviously distance to the target is a factor, as are the task's urgency, priority, and duration. Then there's the issue of battery life, i.e. is it possible for the UAV to complete the task before running out of juice? Is it better to send two UAVs to the same task to try and maximise effectiveness?
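To get a feel for how these factors might be traded off, here's a toy scoring function. Every name, weight, and threshold in it is invented for illustration; it isn't Orchid's actual formulation:

```python
def task_score(distance_m, priority, urgency,
               battery_s, speed_mps, task_duration_s):
    """Toy score for 'should this UAV take this task?' (hypothetical)."""
    travel_time = distance_m / speed_mps
    # Hard feasibility check: no score at all if the battery can't
    # cover the flight out plus the task itself.
    if travel_time + task_duration_s > battery_s:
        return 0.0
    # Otherwise: more important/urgent tasks score higher,
    # distant ones lower.
    return priority * urgency / (1.0 + travel_time)

print(task_score(300, 5, 2, 900, 10, 600))  # feasible: ~0.32
print(task_score(300, 5, 2, 500, 10, 600))  # battery too small: 0.0
```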

Considering how tricky this exercise would be on paper, implementing it on UAVs is quite a challenge. The vehicles aren't exactly equipped with supercomputers, so computation has to be kept fairly simple. There's also no central computer to which they all connect, which means they can only compute by talking to each other; and of course the number of available UAVs, tasks, and PDAs is changing all the time. It's not good enough to make a decision and carry it out: the UAVs have to constantly update and check on what else is going on.

Anyway: with co-ordinating the little beasts being the ultimate aim, there are (thankfully) some useful existing methods for getting them to work together nicely. I'll go into those in the next post. For now, that's what I'm hoping to contribute to. Let's see how it goes.

-C