Monday 23 December 2013

2013 in ... one post?

Whoops. So it seems I've managed to neglect this blog for almost an entire year. Not intentionally, of course; it's just that other things seem to take precedence. Anyway, a bunch of progress has been made, both in my own tiny little section of the UAV-algorithm field and with UAVs as a whole.

My Stuff
Post progress-report (not the transfer thesis: that's next year), I've been thinking more about what first responders actually want or need in a disaster situation. I had set up a meeting with Rescue Global (a sort of... real-life International Rescue), but they had to cancel after needing to dash off to the Philippines. As excuses to miss meetings go, I think "biggest typhoon in history" is one of the better ones. Hope all is well out there.

I honestly would love to know that the work I've done could help in these sorts of situations. Source: Dailymail
Anyway, I came across this interesting assessment from the Red Cross, which outlines a bunch of goals and implementations: what gets done, and what still needs doing, after a disaster strikes. This actually helped me answer a key question in my work, which goes something like this:

If you're REALLY sure that there are people in a location who need help, do you need to bother sending a UAV to image them anyway?

Put another way: are we taking images for the sake of it, or to work out where people are? Because if it's the latter, you can actually ignore the parts of the map where you're certain there are people, since you already know.

It turns out, from the wording of the report, that there's merit in imaging anyway: it suggests emergency responders value being able to see and assess the situation the victims are in, even if they know in advance where they are. That makes sense to me, since it might inform what sort of response is needed (helicopter? boat? ground crew?).

Anyway, my ideal system for controlling the UAVs now works something like this: given an area, a prior belief about where people are, a belief about what danger they're in (quantified as an expected death rate), and the assumption that taking images of people lets them be rescued more effectively, what is the optimal route around the space, taking pictures with multiple UAVs, that minimises the overall death rate?
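To make that a bit more concrete, here's a toy sketch of the sort of objective I mean, in Python. Everything in it (the cells, the numbers, the constant-speed UAV starting at the origin) is invented for illustration, and it only handles a single UAV; it is not my actual model.

```python
import itertools

# Toy sketch of the objective. The cells, probabilities, death rates and
# constant-speed UAV are all invented for illustration.

CELLS = {
    # cell id: (x, y, prob. people are here, expected death rate per hour)
    "a": (0.0, 0.0, 0.9, 0.10),
    "b": (3.0, 4.0, 0.5, 0.30),
    "c": (6.0, 0.0, 0.2, 0.05),
}
UAV_SPEED = 10.0  # distance units per hour


def visit_times(route):
    """Time at which each cell on the route gets imaged, at constant speed."""
    t, (px, py) = 0.0, (0.0, 0.0)  # assume the UAV launches from the origin
    times = {}
    for cell in route:
        x, y, _, _ = CELLS[cell]
        t += ((x - px) ** 2 + (y - py) ** 2) ** 0.5 / UAV_SPEED
        times[cell] = t
        px, py = x, y
    return times


def expected_deaths(route):
    """Expected deaths if imaging a cell at time t effectively 'rescues' it.
    Toy hazard model: the fraction lost by time t is 1 - (1 - rate) ** t."""
    times = visit_times(route)
    total = 0.0
    for cell, (_, _, p_people, rate) in CELLS.items():
        t = times.get(cell, float("inf"))
        lost = 1.0 if t == float("inf") else 1.0 - (1.0 - rate) ** t
        total += p_people * lost
    return total


# Brute-force the best single-UAV route over this tiny example.
best = min(itertools.permutations(CELLS), key=expected_deaths)
print(best, expected_deaths(best))
```

Brute-forcing permutations obviously stops being an option once the map is bigger than a handful of cells, which is where the search part comes in.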

I think the main motivation for phrasing it like this (apart from the fact that no one has done it before, and a PhD, if nothing else, is supposed to break new ground) is that I'm keen on producing something that is actually practical and useful, not just an interesting exercise in computer science. I'm increasingly convinced that discretising the needs of disaster workers into "tasks" doesn't really reflect the more cohesive picture that can be painted of a disaster area.

Anyway the exact mechanism for this has yet to be decided, but it'll probably be some sort of Monte-Carlo Tree Search with factoring to account for UAV co-ordination. So here's hoping for a paper early next year.
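I haven't written the coordinated, factored version, but for anyone wondering what "some sort of Monte-Carlo Tree Search" actually means, here's a bare-bones single-UAV UCT loop over the same kind of route-building problem. The cell values, discount and everything else are invented for the example.

```python
import math
import random

# Bare-bones UCT sketch: single UAV, no co-ordination or factoring yet.
# The "map" is tiny and the reward just prefers visiting high-value cells
# earlier in the route.

CELL_VALUE = {"a": 0.9, "b": 0.5, "c": 0.2}
DISCOUNT = 0.7  # each later visit is worth less


def reward(route):
    return sum(CELL_VALUE[c] * DISCOUNT ** i for i, c in enumerate(route))


class Node:
    def __init__(self, route=(), parent=None):
        self.route, self.parent = route, parent
        self.children, self.visits, self.total = [], 0, 0.0

    def untried(self):
        return [c for c in CELL_VALUE if c not in self.route]

    def uct_child(self, c=1.4):
        return max(self.children,
                   key=lambda n: n.total / n.visits
                   + c * math.sqrt(math.log(self.visits) / n.visits))


def mcts(iterations=500):
    root = Node()
    for _ in range(iterations):
        node = root
        # 1. Selection: walk down fully-expanded nodes by UCT score.
        while not node.untried() and node.children:
            node = node.uct_child()
        # 2. Expansion: add one unvisited cell as a child.
        if node.untried():
            cell = random.choice(node.untried())
            child = Node(node.route + (cell,), parent=node)
            node.children.append(child)
            node = child
        # 3. Rollout: complete the route at random and score it.
        rest = [c for c in CELL_VALUE if c not in node.route]
        r = reward(list(node.route) + random.sample(rest, len(rest)))
        # 4. Backpropagation: push the result back up to the root.
        while node:
            node.visits += 1
            node.total += r
            node = node.parent
    # Recommend the most-visited first move.
    return max(root.children, key=lambda n: n.visits).route


print(mcts())
```

The real thing would need the factoring across UAVs and a proper reward model, but the select/expand/rollout/backpropagate skeleton is the same.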

Other stuff
Seems that UAVs have made the news in a few places recently. For a bit of interest, there's this BBC article on the RAF trying to soften their image as all-purpose killing machines. Much more bizarrely, there's the recent revelation that Amazon are thinking of using UAVs to deliver packages:
In news next year: People now hunting for Xboxes using rifles. Source: Amazon
On our end, we've also had the Beeb round to film some of our stuff for an upcoming episode of Bang Goes the Theory. I'll post the link up here once it airs, and I might even be in the background. Somewhere. Very briefly.

Anyway there's more out there if you look for it, and it seems UAVs are going to be big business in years to come. Hopefully that'll mean more cheap drones for us to buy and test on :)


I promise I'll be better at updating in the New Year. It'll be a resolution. Or not, since those are so often discarded. Anyway until then, have fun and enjoy your not-yet-delivered-by-drone presents!

Wednesday 16 January 2013

Quick NY update

Haven't had much to say recently, but then I doubt there are that many readers out there desperate to hear about my every move anyway. Short summary time, then.

Basically, I've spent the last few weeks trying to get my head around ROS, a software framework for writing robot code: control protocols, algorithms, navigation, message passing between components, and so on. It's good for simulations too, which is what I'd like to have working in a week or so. I suffered a big-ish setback when I found out the version I was using was obsolete and I had to start again with the new build environment, "catkin". I'm guessing the changes are mostly under the hood, because on the surface it seems somewhat less intuitive than the old one.
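For what it's worth, the node code itself doesn't change much between the two build systems; a minimal rospy publisher is still essentially the standard tutorial pattern, something like this (the topic name and message are placeholders, nothing UAV-specific yet):

```python
#!/usr/bin/env python
# Minimal ROS (rospy) publisher node, essentially the standard tutorial
# pattern. Topic name and message contents are placeholders.
import rospy
from std_msgs.msg import String


def talker():
    rospy.init_node('talker', anonymous=True)
    pub = rospy.Publisher('chatter', String)
    rate = rospy.Rate(1)  # publish at 1 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data='hello from the simulation'))
        rate.sleep()


if __name__ == '__main__':
    try:
        talker()
    except rospy.ROSInterruptException:
        pass
```

As far as I can tell, catkin mostly changes how the package around a node like this gets declared and built, rather than the node code itself.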



So now I'm spending my time trying to work out how to get Max-Sum into a form suitable for simulation. That's quite daunting: I have no prior experience in software engineering, and with multiple interacting parts to consider, that's very much what I'm faced with. The sums and functions themselves aren't all that complex, although translating my understanding of Max-Sum from a simple graph-colouring exercise into task allocation is proving trickier than I first imagined. Let's see how it goes.
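For anyone curious what the message passing actually involves, here's a toy Max-Sum pass on a tiny made-up factor graph: two UAVs, two tasks, a coverage utility per task, plus a small per-UAV preference factor to break the symmetry. All the names and numbers are invented, and it's nowhere near the real simulation, but it shows the two message updates and the final decoding step.

```python
from itertools import product

# Toy Max-Sum pass on a made-up factor graph: two UAVs, two tasks.
# Each UAV (a variable) picks one task; each task has a coverage utility,
# and a small per-UAV preference factor breaks the symmetry.

DOMAIN = ["task1", "task2"]
VARIABLES = ["uav1", "uav2"]


def coverage(task, assignment):
    """Utility of a task given how many UAVs picked it (diminishing returns)."""
    count = sum(1 for v in assignment.values() if v == task)
    return {0: 0.0, 1: 10.0, 2: 6.0}[count]


FACTORS = {
    # factor name: (scope, utility over an assignment of the scope variables)
    "f_task1": (["uav1", "uav2"], lambda a: coverage("task1", a)),
    "f_task2": (["uav1", "uav2"], lambda a: coverage("task2", a)),
    "f_pref1": (["uav1"], lambda a: 2.0 if a["uav1"] == "task1" else 0.0),
    "f_pref2": (["uav2"], lambda a: 2.0 if a["uav2"] == "task2" else 0.0),
}

# Messages are tables mapping each domain value to a number; start at zero.
q = {(v, f): {d: 0.0 for d in DOMAIN} for v in VARIABLES for f in FACTORS}
r = {(f, v): {d: 0.0 for d in DOMAIN} for f in FACTORS for v in VARIABLES}

for _ in range(10):  # a handful of synchronous iterations is plenty here
    # Variable -> factor: sum of the messages from all the *other* factors.
    for v in VARIABLES:
        for f in FACTORS:
            q[(v, f)] = {d: sum(r[(g, v)][d] for g in FACTORS if g != f)
                         for d in DOMAIN}
    # Factor -> variable: maximise utility plus the other variables' messages.
    for f, (scope, util) in FACTORS.items():
        for v in scope:
            others = [u for u in scope if u != v]
            msg = {}
            for d in DOMAIN:
                best = float("-inf")
                for combo in product(DOMAIN, repeat=len(others)):
                    assignment = dict(zip(others, combo))
                    assignment[v] = d
                    value = util(assignment) + sum(q[(u, f)][assignment[u]]
                                                   for u in others)
                    best = max(best, value)
                msg[d] = best
            r[(f, v)] = msg

# Decode: each variable takes the value with the largest total incoming message.
# (A real implementation would normalise q so the numbers stay bounded.)
for v in VARIABLES:
    z = {d: sum(r[(f, v)][d] for f in FACTORS) for d in DOMAIN}
    print(v, "->", max(z, key=z.get))
```

On this toy graph the decode should come out as uav1 taking task1 and uav2 taking task2, i.e. both tasks covered rather than both UAVs piling onto one.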

-Chris