Showing posts with label flight test. Show all posts

Monday, 1 June 2015

UAV Demo Vid

I made a video. Lucky me! Not much of my work in it, but it was a nice distraction from the day-to-day.


Friday, 16 November 2012

Project progress, search patterns, and a DIY UAV

Following more meetings on our shiny new UAV project, we've begun to thrash out a few more of the scenario details. The loosely defined goal is a working prototype demonstrator next summer, in which multiple UAVs locate objects of some sort (perhaps radio beacons) in a predefined search space. The ultimate goal is to scale this up over the coming years into something resembling a radiation/chemical leak response package, which could meet a number of targets: locating people in the path of the contaminant, alerting the emergency services to their location, and so on.

With that in mind I've been focussing my reading on search patterns and search algorithms, trying to find out about existing methods and their relative pros and cons. Bayesian methods make a recurring appearance, simply because holding a preconceived belief and then updating it as more data arrives is an extremely logical approach to the problem. There's also the question of what can be achieved with multiple drones, and of how best to confirm a target's identity once a potential sighting has been made. To that end I've had a few thoughts and ideas on possible approaches, based partly on existing work:

Utilising multiple similar UAVs in a heterogeneous way
Given a few different types of UAV in a situation, it's clearly logical to treat them differently. But what if you have identical drones that can perform different tasks? For example, given three drones and a search space, rather than have them all run search patterns independently at the same height (and so at the same level of detail), would it be advantageous to have a hierarchy in which one or more UAVs take high-altitude, low-detail images and co-ordinate the other drone(s) towards possible matches?
This introduces a quasi-centralised system, which would obviously need to be robust enough that the search can continue if the higher UAV fails (perhaps by reverting to the first arrangement). It would also need to be established whether this method offers any benefit at all.

Inter-UAV task allocation
Going on the basis that the MaxSum algorithm mentioned in previous posts is an extremely versatile task-allocation tool, I've been considering whether there's scope for its implementation in this area. For example, say a search is being carried out by a swarm of UAVs using either of the two arrangements above, and a possible sighting is made. Could the co-ordination then switch from Bayesian searching to MaxSum task assignment, with one UAV creating a task of "go and take a closer look at this location" and broadcasting it to the surrounding vehicles? In the same-height arrangement such a method might turn out to be moot, as the utility function might be maximised by each UAV checking out its own possible sightings; but in the second arrangement the higher-altitude drone could easily interrupt the search of the lower UAVs by creating a task of looking in detail at a location.

The bottom line is that the UAVs could have two modes, a Bayesian search mode and a MaxSum task mode, designed to complement each other.
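To make the two-mode idea concrete, here's a minimal sketch of such a controller (the mode names and callbacks are entirely hypothetical, my own invention rather than anything from the literature): a UAV drops out of Bayesian searching when a MaxSum-assigned task arrives, and resumes once the task is done.

```python
from enum import Enum, auto

class Mode(Enum):
    SEARCH = auto()   # running a Bayesian search over its own belief map
    TASK = auto()     # servicing a MaxSum-assigned task

class UAV:
    """Hypothetical two-mode controller; structure and names are my own."""
    def __init__(self):
        self.mode = Mode.SEARCH
        self.task = None

    def on_task_assigned(self, task):
        # A MaxSum assignment (e.g. "inspect this location") interrupts the search.
        self.mode = Mode.TASK
        self.task = task

    def on_task_complete(self):
        # Fall back to Bayesian searching once the task is done.
        self.mode = Mode.SEARCH
        self.task = None

uav = UAV()
uav.on_task_assigned(("inspect", (12, 34)))
assert uav.mode is Mode.TASK
uav.on_task_complete()
assert uav.mode is Mode.SEARCH
```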

Prior planning and search space restriction
One might imagine a number of situations where, once the area to be searched is established, time could be saved in the co-ordination stage by assigning each UAV a portion of the area which it is then restricted to (barring a MaxSum task being received). Indeed, the UAVs need not even be aware of each other's behaviour, since each could maintain its own Bayesian belief map of its portion. If the designated target is found by one UAV, it need only tell the others to stop searching. This could also reduce message overhead. A pithy summary, then: "do the UAVs actually need the information gained by the other UAVs in the area?"
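As a rough illustration of the per-partition belief idea, here's a sketch of a Bayesian update for one cell of a single UAV's private belief map. The detection and false-alarm probabilities are illustrative values I've picked, not figures from any of the papers.

```python
import numpy as np

def bayes_update(belief, cell, detected, p_d=0.9, p_fa=0.1):
    """Update the belief that the target is in `cell` after one observation.
    p_d = probability of detecting a present target; p_fa = false-alarm
    probability. Both are assumed, illustrative values."""
    b = belief.copy()
    prior = b[cell]
    if detected:
        post = p_d * prior / (p_d * prior + p_fa * (1 - prior))
    else:
        post = (1 - p_d) * prior / ((1 - p_d) * prior + (1 - p_fa) * (1 - prior))
    b[cell] = post
    return b

# Each UAV searches only its own strip of the area: here, one half of a
# 10x10 grid, with a uniform prior over its 50 cells.
belief = np.full((5, 10), 1 / 50)
belief = bayes_update(belief, (2, 3), detected=False)
assert belief[2, 3] < 1 / 50   # a negative observation lowers that cell's belief
```

Since each UAV only ever touches its own partition, the only inter-UAV message needed is the final "target found, stop searching" broadcast.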

More questions:
The seemingly simple task of "find some objects in an area" actually involves a considerable number of elements that need to be accounted for in any solution. As well as the things mentioned above, the formalised problem has the following points to contemplate:

  • What uncertainty will the UAVs' images produce (presumably related to sensor resolution and altitude)?
  • How accurately do they need to know a target's position to consider it a successful task?
  • Is there any other metric to measure success beyond "finding the targets quickly"?
  • Do the UAVs know in advance how many targets there are?
  • If the answer to the above is "no", will they ever stop their search or is it actually desirable that they keep looking indefinitely?

A final word, from a DIY unmanned vehicle search team
For interest, I found the blog of a team from Australia who competed in (and almost aced) a competition held there: the goal was to find a missing person in a predefined search space using an unmanned aerial vehicle and drop a water bottle to them. While not offering a huge amount of inspiration on the co-ordination front (a simple "lawnmower" search pattern was used with a recognition algorithm), it still makes interesting reading. I'm thoroughly impressed by how well it all worked, considering it was essentially an amateur effort by hobbyists. Check it out!

-C


Thursday, 1 November 2012

Some disjointed thoughts on UAV deployment

Alongside our continuing group efforts to set up a separate UAV project (entitled MOSAIC), which links Orchid work with UAV engineers in the University, I've been scouring papers for suggestions on how to implement UAV co-ordination in a selection of scenarios.

In essence, since we might end up with a few different scenarios, it would be beneficial to have a range of options we could refer to in a reference-like way. "Hey! I want to get some UAVs to do this task," says an enquirer. "OK," say we, "we think this is the best algorithm, for these reasons, given your situation, hardware, environment, etc." Since I'm ideally placed to research existing work in this area, I'm trying to gather methods, reviews, tests, trials, and the various pitfalls of different algorithms into a coherent lump that may end up as a literature review.



A couple of papers caught my eye recently in the specific area of search and rescue of a missing person in some given terrain area or wilderness location. While specific, they do list a few useful classes of problem which could be given future thought.

Supporting Search and Rescue Operations with UAVs - S. Waharte and N. Trigoni

This is a nice paper giving a very broad overview of three approaches to searching for a missing person, with an evaluation of their respective efficacy in simulation. Nearly all such scenarios can be characterised by the exploration-vs-exploitation trade-off, which goes something like this:

How much time should I spend searching areas I haven't yet searched, compared to looking more closely at areas I have searched?

Clearly both extremes are undesirable: you would not want your UAV to zip quickly over a huge area and miss the missing person because of lack of attention to detail, nor would you want it to spend four hours staring ever closer at a person-shaped rock.

Like this one, on Mars.

Unlike the Max-Sum utility, the methods here deal only with minimising the time taken to find the missing person: the difference being that there is typically only one overriding 'task' (albeit split into possible sub-tasks) for the UAVs to undertake. Nonetheless it is important to consider the algorithms outlined, to avoid being funnelled into one specific line of thinking:

Greedy Heuristics
Each UAV maximises its own utility (i.e. search coverage) in a Bayesian way, developing its own guesses as to the location of the missing person and acting on them. Various route-choosing methods were explored, including those maximising immediate gain and those that plan slightly into the future.
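A one-step greedy route-chooser of this kind might look something like the following sketch (my own simplification, maximising immediate gain only rather than planning ahead):

```python
import numpy as np

def greedy_next(pos, belief):
    """One-step greedy move: fly to the adjacent cell with the highest
    current belief. An illustrative sketch, not the paper's exact heuristic."""
    r, c = pos
    rows, cols = belief.shape
    neighbours = [(r + dr, c + dc)
                  for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]
                  if 0 <= r + dr < rows and 0 <= c + dc < cols]
    # Pick the neighbouring cell where the target is currently believed
    # most likely to be (ties broken arbitrarily).
    return max(neighbours, key=lambda rc: belief[rc])

belief = np.zeros((3, 3))
belief[0, 1] = 0.5               # strongest belief just north of the UAV
assert greedy_next((1, 1), belief) == (0, 1)
```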

Potential Heuristics
Areas of interest are modelled as attractive potentials on a 2D surface, and less accessible areas as repulsive potentials. Force is calculated (as in physics) as the negative of the potential gradient. Potential increases with subsequent visits to discourage loitering, and some message-passing is allowed.
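The force-from-potential calculation can be sketched numerically. Here's a 1-D toy example (my own construction, not the paper's setup) with a single attractive well at an area of interest:

```python
import numpy as np

# Potential along a 1-D transect: a quadratic well whose minimum (the most
# attractive point) sits at x = 2, the hypothetical area of interest.
x = np.linspace(-5, 5, 101)
potential = (x - 2.0) ** 2

# As in physics, force is the negative of the potential gradient.
force = -np.gradient(potential, x)

# The force points towards the well from both sides: positive (rightward)
# to its left, negative (leftward) to its right.
assert (force[x < 1.85] > 0).all()
assert (force[x > 2.15] < 0).all()
```

A repulsive obstacle would be a local maximum instead, and re-raising the potential at visited cells is what discourages loitering.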

POMDPs
Partially observable Markov decision processes (POMDPs) are a well-known branch of decision-making in computer science, providing a forward-looking strategy for action based on noisy data which may not actually reflect reality. For instance, the chance of recording a false positive is increased with decreased height, and the model takes this into account. The question then becomes one of maximising coverage, with a view to doing so in future, given the uncertainty of existing data. Again some message-passing was allowed, but in a very computationally intensive way, with UAVs periodically sharing their entire belief set when they came into range of another UAV.
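As an aside on what "sharing their entire belief set" might involve: one simple (assumed, not from the paper) fusion rule for two independently-formed belief maps is to multiply them cell-wise and renormalise.

```python
import numpy as np

def fuse(b1, b2):
    """Fuse two independent belief maps over the target's location by
    multiplying cell-wise and renormalising. An illustrative rule only;
    the paper may combine beliefs differently."""
    fused = b1 * b2
    return fused / fused.sum()

b1 = np.array([0.1, 0.2, 0.7])   # UAV 1's belief over three cells
b2 = np.array([0.3, 0.3, 0.4])   # UAV 2's belief over the same cells
fused = fuse(b1, b2)
assert abs(fused.sum() - 1.0) < 1e-12
assert fused.argmax() == 2       # both favour cell 2, so the fusion does too
```

Even this toy version hints at the overhead: every cell of the map has to cross the radio link each time two UAVs meet.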

Despite the very simple scenario and simulation (only a few tens of square metres of simulated woodland), the tests showed clear advantages to a POMDP method with message passing. A brief concluding thought: such a method has two very big problems, a terribly costly message overhead and a required computing power which increases exponentially with the grid size (since essentially every possible path is considered). Possible, but unwieldy without some serious solution-space pruning.

More thoughts to follow

-C

Friday, 26 October 2012

Few videos of interest

A couple of interesting videos related to decentralised co-ordination (in the second case, with an overhead view from a UAV).


This is a nice talk and demo from Vijay Kumar at TED, showing agile flight control, formation flying and (more interestingly from my point of view) decentralised co-ordination. Definitely worth a read of some of the papers from his group. He talks at one point about disaster relief work, although I think the reference is to lower-level building infiltration. The video concludes with the now well-known James Bond theme played by drones.

This second video shows an interesting amalgamation of a flying co-ordinator sending instructions to co-operative ground-based agents. I wonder whether this could be applied to our forest fire scenario, where both UAV and UGV agents will be working together. Also, surely the colour algorithm they demonstrate would be improved by using simple number-selection-and-passing algorithms over a wireless link rather than visible light? This would remove the need for line-of-sight visibility, as long as each robot can broadcast some kind of identifier.

Food for thought I think.

-C

Friday, 12 October 2012

Paper: Max Sum algorithm on UAVs. Delle Fave et al

Fully titled: Deploying the Max-Sum Algorithm for Decentralised Coordination and Task Allocation of Unmanned Aerial Vehicles for Live Aerial Imagery Collection
This was the first paper I was asked to read, mostly because it provides a pithy overview of what has so far been achieved in the field of UAV co-ordination for disaster relief. The content of this post may vary between technical discussion and general overview, so I apologise in advance if it seems a bit two-toned.

The paper introduces a method called the Max-Sum algorithm as a way of co-ordinating UAVs as required, and outlines its virtues. It then mentions several results in simulations before showing some empirical examples from a real-world demonstration involving two hexacopters. 


Picture of a hexacopter from the paper in question
I already mentioned in the introduction a bunch of the challenges of co-ordinating UAVs (or rather, of allowing the UAVs to co-ordinate themselves). The paper reiterates these, and adds that existing co-ordination methods and algorithms don't address the uncertainty of a task's completion: if a UAV fails, if the task changes, or if the emergency services require a different task to take priority, this could be a big problem for real-world applications. A first responder isn't going to know precisely how long they will require imagery from a location; it could depend on what they see in the first place.

Addressing the de-centralised nature of the system (i.e. no central computer: everything shared out), the paper notes that the computing power of the first responders' PDAs is included to help with the calculations. Since the imagery will be sent directly to the PDAs, they'll have to be in communication with the UAVs anyway, so this seems sensible.

Max Sum
The best way of beginning to find a solution to a problem like this is to express it mathematically. To that end, the authors of the paper present something called a "Utility function", which essentially gives a numerical value to a task which might be attended to by a UAV. The higher the number, the better. In principle one can imagine the UAVs working out all of these values for every task they can go to, and then working out how to get the highest overall number for the whole system. 


Simple example where each UAV can attend one task

Max-Sum is a message-passing algorithm which allows this to happen through UAVs passing messages to their nearest neighbours. As such, no single agent knows the whole "picture", but by knowing how they affect the system they can all work out the best course of action.

Explicitly, the paper gives the utility function of a task i as a function of x_i, the assignment of agents (UAVs) which can attend that task:


The first two terms codify the priority and urgency of the task at hand (the urgency being dependent on the submission time and the current time). The last term, in brackets, is the solution to a probability function describing how likely it is that the UAV's battery will run out during the task. The reason this can't be stated exactly is that the paper enforces the condition that it's impossible to know in advance exactly how long a task will take.
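Since the equation itself isn't reproduced here, the following is only a guessed sketch of how such a utility might be structured (priority plus urgency, discounted by the probability the battery lasts); the functional form is mine, not the paper's.

```python
def utility(priority, t_submitted, t_now, p_battery_ok):
    """Illustrative task utility: a priority weight, an urgency term that
    grows with time since submission, and a discount by the probability
    that the UAV's battery survives the task. The actual form in the paper
    differs; this only shows how the three ingredients interact."""
    urgency = t_now - t_submitted          # older tasks are more urgent
    return (priority + urgency) * p_battery_ok

# A high-priority task with a healthy battery outscores a low-priority one,
# even if the low-priority task has been waiting a while.
assert utility(5, 0, 10, 0.9) > utility(1, 8, 10, 0.9)
```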
With the utility function designed, the Max-Sum implementation involves passing limited messages between 'functions' and 'variables', which in this case are the PDAs and UAVs respectively. Messages from variables to functions take the form "here are the maximum values the rest of the system can achieve for the possible task allocations, if you take one of a number of options". For example, if there are three tasks, the variable (UAV) will tell the function (PDA) the maximum utility values for the rest of the system under each possible task assignment of whatever other variables the function is attached to.

The messages from the functions to the variables are similar in the sense that they return values corresponding to what task is taken by the variable they're attached to, and they give the maximum possible utility that their variable can produce. 

By adding together incoming messages, variables reach the stage where they have a 'maximum' for each available task they could undertake. They then choose the highest value available to them. For example, if a variable's messages from all its neighbours sum to (2, 19, 8) for tasks 1, 2 and 3 respectively, then the variable (i.e. the UAV) will move to task 2 regardless of its own individual utility for that task.
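That decision rule for a single variable can be sketched directly from the (2, 19, 8) example (the message values here are toy numbers of my own choosing):

```python
# Messages arriving at one variable (UAV), one per neighbouring function:
# each maps task -> the maximum utility the rest of the system can achieve
# if this UAV takes that task.
incoming = [
    {1: 1, 2: 10, 3: 5},
    {1: 1, 2: 9, 3: 3},
]

# The variable sums the incoming messages per task...
totals = {task: sum(msg[task] for msg in incoming) for task in (1, 2, 3)}
assert totals == {1: 2, 2: 19, 3: 8}

# ...and commits to the task with the highest total, exactly as in the
# (2, 19, 8) example: it moves to task 2.
chosen = max(totals, key=totals.get)
assert chosen == 2
```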

This happens continuously, with the UAVs updating what they're doing all the time. It also doesn't have to happen in a particular order, as UAVs change their behaviour as soon as information becomes available to them. That's a particularly non-rigorous explanation of Max-Sum, and I fear I've grossly understated its utility and robustness for this kind of situation.

Results
Simulations in the paper dealt with a few different scenarios, including some where parts of the utility function were left out: for instance, some ignored the urgency of a task. The experiments using the full function produced better results, returning high-priority tasks and avoiding running out of power mid-task. This is important, as it supports the viability of using this sort of method in real situations where everything needs to be considered.

Finally, empirical evidence is shown from real UAV flight tests conducted in Australia, where two UAVs faced three different scenarios: two tasks arriving at the same time; two tasks arriving one after the other (the second with a higher priority); and a situation with three eventual tasks, where one UAV doesn't know about one of them. In brief, the UAVs handled the tasks properly. A video can be seen here, which includes a quick overview of the simulations too.

Thoughts
Most of the paper seems pretty straightforward, and with a bit of extra reading I'm fairly happy I understand the Max-Sum algorithm, which is good since it seems to form the backbone of this project. The video of the real UAVs is nice as it presents things in a very obvious and straightforward way, but it does raise a few questions. Firstly, I'm curious why in the examples the UAVs don't follow a straight path to the tasks, but rather 'wiggle' their way there. It would be interesting to find out whether this is a result of the algorithm itself or of the UAVs' hardware. Another curiosity is that in the second example the UAVs actually swap over midway to their tasks. Initially each moves to its closest target (which makes sense), but they then swap over before settling over the targets.

Possibly this is because of the battery-life term in the utility function (which isn't obvious just from looking at the video), since the only other real factor is the distance to the target. If battery life were not a factor, it would suggest something funny going on in the algorithm that isn't at all intuitive.

The paper finishes suggesting that the scenario needs to be extended to more complex systems with more UAVs: something I'd be very keen to be involved in. 

-C