As part of our continuing group efforts to set up a separate UAV project (entitled MOSAIC), linking Orchid work with UAV engineers in the University, I've been scouring papers for suggestions on how to implement UAV co-ordination in a selection of scenarios.
In essence, since we might end up with a few different scenarios, it'd be beneficial to have a range of options we could refer to in a reference-like way. "Hey! I want to get some UAVs to do this task," says an enquirer. "OK," say we, "we think this is the best algorithm, for these reasons, given your situation, hardware, environment, and so on." Since I'm ideally placed to research existing work in this area, I'm starting to gather methods, reviews, tests, trials, and the various pitfalls of different algorithms into a coherent lump that may end up as a literature review.
A couple of papers caught my eye recently in the specific area of search and rescue of a missing person in some given terrain area or wilderness location. While specific, they do list a few useful classes of problem which could be given future thought.
Supporting Search and Rescue Operations with UAVs - S. Waharte and N. Trigoni
This is a nice paper giving a very broad overview of three approaches to searching for a missing person, with an evaluation of their respective efficacy in simulation. Nearly all such scenarios can be characterised by the exploration-vs-exploitation trade-off, which goes something like this:
How much time should I spend searching areas I haven't yet searched, compared to looking more closely at areas I have searched?
Clearly both extremes are undesirable: you would not want your UAV to zip quickly over a huge area and miss the missing person because of lack of attention to detail, nor would you want it to spend four hours staring ever closer at a person-shaped rock.
Unlike the Max-Sum utility, the methods here deal only with minimising the time taken to find the missing person: the difference being that there is typically only one overriding 'task' (albeit split into possible sub-tasks) for the UAVs to undertake. Nonetheless it is important to consider the algorithms outlined, to avoid being funnelled into one specific line of thinking:
Greedy Heuristics
Each UAV maximises its own utility (i.e. search coverage) in a Bayesian way, developing its own guesses as to the location of the missing person and acting on them. Various methods for route-choosing were explored, including those maximising immediate gain and those that plan a little way into the future.
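To make the greedy, one-step-lookahead idea concrete, here's a minimal sketch in Python (not the paper's implementation): a belief grid over cells, a myopic move rule that heads for the most promising neighbouring cell, and a Bayesian update after each imperfect observation. The grid size, detection probability and 4-connected movement are all illustrative assumptions.

```python
import numpy as np

GRID = (10, 10)    # assumed search area, discretised into cells
P_DETECT = 0.8     # assumed probability of spotting the target if present

def neighbours(pos, shape):
    """4-connected moves that stay inside the grid."""
    r, c = pos
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < shape[0] and 0 <= nc < shape[1]:
            yield (nr, nc)

def greedy_step(belief, pos):
    """Move to the adjacent cell with the highest current belief (myopic gain)."""
    return max(neighbours(pos, belief.shape), key=lambda p: belief[p])

def bayes_update(belief, pos, detected):
    """Update the belief grid after observing cell `pos`."""
    new = belief.copy()
    if detected:
        new[:] = 0.0
        new[pos] = 1.0
    else:
        # Missed detection: scale down the searched cell, renormalise.
        new[pos] *= (1.0 - P_DETECT)
        new /= new.sum()
    return new

# Usage: start with a uniform prior and walk greedily for a few steps.
belief = np.full(GRID, 1.0 / (GRID[0] * GRID[1]))
pos = (0, 0)
for _ in range(20):
    pos = greedy_step(belief, pos)
    belief = bayes_update(belief, pos, detected=False)  # simulate a miss
```

The one-step rule above is the most myopic variant; the less greedy versions mentioned in the paper amount to scoring short sequences of moves rather than a single step.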
Potential Heuristics
Areas of interest are modelled as attractive potentials on a 2D surface, and less accessible areas as repulsive potentials. Force is calculated (as in physics) as the negative of the potential gradient. Potential increases with subsequent visits to discourage loitering, and some message-passing is allowed.
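Again purely as an illustrative sketch rather than the paper's method: combine an attractive "interest" term, a repulsive obstacle term and a visit-count penalty into one scalar potential, then step towards the neighbouring cell of lowest potential, which is the discrete analogue of following F = -grad(U). The weights and grid are made up for the example.

```python
import numpy as np

# Assumed relative weights of the three contributions to the potential.
W_INTEREST, W_OBSTACLE, W_VISIT = 1.0, 2.0, 0.5

def potential(interest, obstacles, visits):
    """Combined scalar potential over the grid (low = attractive)."""
    return -W_INTEREST * interest + W_OBSTACLE * obstacles + W_VISIT * visits

def step(interest, obstacles, visits, pos):
    """Move to the 4-neighbour with the lowest potential (steepest descent)."""
    U = potential(interest, obstacles, visits)
    r, c = pos
    candidates = [(r + dr, c + dc)
                  for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                  if 0 <= r + dr < U.shape[0] and 0 <= c + dc < U.shape[1]]
    nxt = min(candidates, key=lambda p: U[p])
    visits[nxt] += 1   # loitering penalty: revisits raise the potential there
    return nxt

# Usage with a small random interest map and no obstacles.
rng = np.random.default_rng(0)
interest = rng.random((8, 8))
obstacles = np.zeros((8, 8))
visits = np.zeros((8, 8))
pos = (4, 4)
for _ in range(10):
    pos = step(interest, obstacles, visits, pos)
```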
POMDPs
Partially observable Markov decision processes (POMDPs) are a well-known branch of decision making in computer science, and provide a forward-looking strategy for action based on noisy data which may not accurately represent reality. For instance, the chance of recording a false positive increases as height decreases, and the model takes this into account. The question then becomes one of maximising coverage, with a view to continuing to do so in future, given the uncertainty of the existing data. Again some message-passing was allowed, but in a very computationally intensive way, with UAVs sharing their entire belief set periodically whenever they came within range of another UAV.
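A hedged sketch of the flavour of this: a height-dependent observation model (the exact probabilities below are invented for illustration) plugged into a Bayesian belief update over the grid. A full POMDP would go on to plan over future observations; this only shows the belief-maintenance part.

```python
import numpy as np

def sensor_model(altitude):
    """Return (p_true_positive, p_false_positive) for a given altitude.

    Illustrative assumption: detection probability held constant, while the
    false-positive rate grows as the UAV flies lower, mirroring the height
    dependence described above.
    """
    p_tp = 0.9
    p_fp = float(np.clip(0.3 - 0.005 * altitude, 0.02, 0.3))
    return p_tp, p_fp

def belief_update(belief, cell, observation, altitude):
    """Bayes update of the belief grid after one observation of `cell`."""
    p_tp, p_fp = sensor_model(altitude)
    mask = np.zeros(belief.shape, dtype=bool)
    mask[cell] = True
    if observation:
        likelihood = np.where(mask, p_tp, p_fp)          # positive reading
    else:
        likelihood = np.where(mask, 1 - p_tp, 1 - p_fp)  # negative reading
    posterior = likelihood * belief
    return posterior / posterior.sum()

# Usage: a positive reading nudges a uniform belief towards the observed
# cell, with the strength of the nudge governed by the altitude-dependent
# error rates.
belief = np.full((6, 6), 1.0 / 36)
belief = belief_update(belief, cell=(2, 3), observation=True, altitude=10.0)
```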
Despite the very simple scenario and simulation (only a few tens of square metres of simulated woodland), the tests showed clear advantages for the POMDP method with message passing. A brief concluding thought here is that such a method has two very big problems: a terribly costly message overhead, and a computing requirement which increases exponentially with the grid size (since essentially every possible path is considered). Possible, but unwieldy without some serious pruning of the solution space.
More thoughts to follow
-C