Supplementary content information

In this video, we provide an overview of key research challenges being addressed by ORCHID researchers in the development of Human-Agent Collectives (HAC) for disaster response applications. Such challenges arise at the various stages of the disaster management process. In the preparedness phase, there is a need to determine where key resources will be placed; hence, high-fidelity simulations of possible disasters are required, and we have developed such simulations in collaboration with Hampshire County Council.

For the initial response phase, we have developed crowdsourcing techniques that can help gather information quickly from volunteers and people on the ground in order to improve situational awareness. Moreover, we have developed algorithms for UAVs to work in tandem with human emergency responders to search disaster areas. Based on the information gleaned, software agents can then optimise the allocation of tasks to responders to maximise the number of lives saved and resources secured. To test these agents, we have developed a platform based on mixed-reality games. Our platform allows us to evaluate mechanisms for human-agent coordination in the real world and hence acts as a benchmark for HAC.


Professor Nick Jennings - ORCHID Director, University of Southampton [NJ]

In modern disasters the role of information and communications technologies (ICT) is becoming increasingly important. There are obviously lots more people on the ground with phones, with cameras, able to really provide a picture of what's going on. What you want to be able to do in these complex, uncertain, dynamic situations is pull together collections of humans with relevant software support, so relevant software agents in a team, and that's what we call a human-agent collective. And so the ORCHID project is very much about the science and the engineering of human-agent collectives: how we can build them, how we can maintain them and how we can operate them in particular scenarios.

Our aim is for the ORCHID project and the technologies that we develop to help first responders to make better decisions and ultimately save lives. We've looked at a number of disasters around the world, we've looked at simulations and training exercises that have been run in America and in the UK by disaster response organisations to really understand how it currently works today, what the major challenges are and how things can be made better. We've done some work with Hampshire County Council in order to be able to construct and simulate how a disaster would play out in that particular scenario. And they have a disaster response plan that they have to put in place and one of the key challenges here was to be able to have accurate maps and simulations of how people would get out of the area.

Dr Gopal Ramchurn - Lecturer, University of Southampton

So what you are seeing here is a map of Fawley Oil Refinery, right next to Southampton, and as you can see on this map there are a number of villages that contain thousands and thousands of people. So we developed simulations of pedestrians and cars moving out of these disaster areas and using very powerful computers we were able to scale these simulations to thousands and thousands of pedestrians and cars. In a real-world deployment of this platform, what you will have is that people would be taking pictures of the area and they would be uploading these with GPS annotations and that would allow us to keep track of where these buildings and connections are and populate a map with all of this information.

We can run these simulations faster than real time, and by doing that we will be able to estimate very accurately how many people would get onto the roads from those buildings and therefore the congestion that would result during an evacuation scenario.
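The kind of faster-than-real-time estimate described above can be illustrated with a toy queueing model. This is a minimal sketch, not the ORCHID simulator: pedestrians with hypothetical random departure times queue onto a single road of limited capacity, and the queue length at each step stands in for congestion.

```python
import random

def simulate_evacuation(n_pedestrians, road_capacity, steps):
    """Toy evacuation model (illustrative only, not the ORCHID simulator).

    Pedestrians leave buildings at random times and queue onto a single
    road with limited capacity; we track the queue length (congestion)
    and how many people have evacuated at each time step."""
    random.seed(0)  # deterministic departures for a reproducible sketch
    departures = sorted(random.randrange(steps // 2) for _ in range(n_pedestrians))
    queued, evacuated, congestion = 0, 0, []
    idx = 0
    for t in range(steps):
        # pedestrians whose departure time has arrived join the road queue
        while idx < n_pedestrians and departures[idx] <= t:
            queued += 1
            idx += 1
        # the road clears at most `road_capacity` people per time step
        moved = min(queued, road_capacity)
        queued -= moved
        evacuated += moved
        congestion.append(queued)
    return evacuated, congestion

evacuated, congestion = simulate_evacuation(1000, 40, 60)
```

Because the model runs in milliseconds rather than real time, many scenarios (different road capacities, departure patterns) can be compared quickly, which is the point the speaker makes about scaling to thousands of pedestrians and cars.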


In a disaster there is very little information: typically you may not know what's caused it, and you may not know its extent, so really there is a lack of information there. You don't know who is in the area, what people are around, or what resources you have at your disposal to be able to understand how you can deal with this problem.

In some aspects of disaster response, what you have at your disposal is unmanned autonomous systems (UAS). What we are interested in is exploring how the human first responders and these UAS can work together as a really close-knit team.

Dr Luke Teacy - Research Fellow, University of Southampton

In any kind of disaster, having an aerial view is really invaluable for gathering information about the situation on the ground. Whereas before this always meant manned flight, UAVs are now providing a really versatile, cheap alternative, but at the moment those kinds of platforms are really labour intensive: someone has to decide where am I going to fly this thing, what am I going to look at next? Someone else might then have to trawl through all of that video footage in order to figure out what are the key things here that we are really interested in. So, in ORCHID what we are interested in doing is supporting this and allowing more UAVs to be used by fewer people: how can we get the UAVs to decide for themselves what to look at next given what they have seen already? For instance, if I am flying as a UAV and have a look down, and I see something on the ground that I am not quite sure about, should I take a closer look at it, or should I pass it on to a human and ask, is this something that we are really interested in? The second thing is coordination. If we have more than one UAV, how can they decide amongst themselves how to break up the task to make the best use of their combined resources?

And finally flexible autonomy: neither the machine nor the person acts in isolation, so we want to make some things easier for the human by letting the UAVs take control of some of the decisions, but at the same time you want the human to be able to understand those decisions and also be able to intervene in those decision processes as well.

So this is the interface that Theng has developed to allow a first responder to control multiple UAVs at the same time. He can specify tasks and allow the UAVs to decide amongst themselves which ones are best placed to do which task, but at the same time he can override these allocations manually if he likes; he can take control of an individual UAV and have a closer look at what it is looking at. So this really shows the idea of going from full autonomy right down to full manual control, depending on what the human thinks is most appropriate.
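The mixed-initiative control described above can be sketched in a few lines. This is a hypothetical illustration (the names, positions and the greedy nearest-task rule are all invented, not the ORCHID interface): UAVs pick their closest unassigned task autonomously, but any allocation the operator pins manually takes precedence.

```python
from math import hypot

def allocate_tasks(uavs, tasks, manual_overrides=None):
    """Greedy nearest-task allocation with optional manual override.

    `uavs` and `tasks` map names to (x, y) positions; `manual_overrides`
    maps a UAV name to the task the operator has pinned for it."""
    manual_overrides = manual_overrides or {}
    assignment = dict(manual_overrides)          # operator decisions win
    free_tasks = set(tasks) - set(manual_overrides.values())
    # remaining UAVs claim their closest free task, nearest pair first
    pairs = sorted(
        (hypot(ux - tx, uy - ty), u, t)
        for u, (ux, uy) in uavs.items() if u not in assignment
        for t, (tx, ty) in tasks.items() if t in free_tasks
    )
    for _, u, t in pairs:
        if u not in assignment and t in free_tasks:
            assignment[u] = t
            free_tasks.discard(t)
    return assignment

uavs = {"uav1": (0, 0), "uav2": (10, 0)}
tasks = {"survey": (1, 1), "search": (9, 1)}
auto = allocate_tasks(uavs, tasks)                        # full autonomy
pinned = allocate_tasks(uavs, tasks, {"uav1": "search"})  # operator override
```

Sliding between `auto` and `pinned` is the "full autonomy down to full manual control" spectrum in miniature: the same allocator runs either way, with the human's pinned choices simply removed from the autonomous search space.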


One of the vignettes we have looked at is Fukushima, the recent nuclear incident, and that was very interesting. There were over 500 people who built their own radiation sensors, their own Geiger counters, to track the amount of radiation around them, and these readings could then be integrated with information coming from official sensors in order to construct a picture of what was actually going on. Now, some people were good at building their sensors and gave very accurate readings, and some people were less good at building their sensors, so their readings weren't very accurate. We combined these, fused these, with the official high-quality sensors in order to get a much more comprehensive view of the information space.

Dr Meritxell Vinyals - Researcher, University of Southampton

These are very challenging tasks. First, because data provided by the crowd is sparse. Second, because some reports might be untrustworthy: for example, some sensors might be malfunctioning, or some people might be reporting based on their emotional state rather than reality. So, to face these challenges, what we did was develop a novel trust-based model that allows us to learn the spatio-temporal patterns that are present in the data.


The smart bit, the smart technology, is the algorithm that we constructed for putting that information together, for learning what is likely to be a good reading, what is likely to be a bad reading, and combining them in the most efficient way that we can.
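A standard way to combine good and bad readings, and a simplified stand-in for the trust-based model described above, is inverse-variance (precision-weighted) fusion: trusted sensors get small variances and dominate the fused estimate. The readings and variances below are invented for illustration.

```python
def fuse_readings(readings):
    """Inverse-variance (precision-weighted) fusion of sensor readings.

    Each reading is a (value, variance) pair: an accurate official sensor
    has a small variance, a home-built Geiger counter a larger one, so
    good readings dominate the fused estimate."""
    precisions = [1.0 / var for _, var in readings]
    fused = sum(v * p for (v, _), p in zip(readings, precisions)) / sum(precisions)
    fused_var = 1.0 / sum(precisions)  # fusing data always tightens the estimate
    return fused, fused_var

# hypothetical radiation readings: (value, variance)
official = (2.0, 0.01)    # calibrated sensor, highly trusted
crowd_good = (2.2, 0.1)   # well-built home Geiger counter
crowd_bad = (9.0, 25.0)   # malfunctioning or misreported
fused, fused_var = fuse_readings([official, crowd_good, crowd_bad])
```

Note how the wildly wrong crowd reading barely moves the fused value, because its large variance gives it almost no weight; this is the "learning what is likely to be a good reading" step reduced to its simplest form.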

Dr Victor Naroditskiy, Research Fellow, University of Southampton

During disaster response, it is very important to gather accurate information about conditions on the ground, and in order to do that we need to motivate people to contribute this information and to verify it. So we are looking at a number of incentive mechanisms, and we are tackling this problem using lessons learned from a recent social experiment called the 'Tag Challenge', where the task was to find people walking around cities based just on their pictures. The internet and social media made it possible for participants to easily invite their friends to share information and also to track these referrals. We found three out of five people, showing that it is indeed possible. These are the kinds of techniques that will help during disaster response to gather information fast and to verify it.

Dr Joel Fischer - Research Fellow, University of Nottingham [JF]

One of the most important things that we are interested in is studying how people actually interact with technology.

Professor Alex Rogers - Electronics and Computer Science, University of Southampton [AR]

In agile teaming we are really interested in how we can form dynamic teams of actors so that those teams can perform tasks that none of the individuals can perform and then we are interested in how we can disband and reform as appropriate. Within the disaster response setting, those actors may be humans or agents, they could be first responders, they could be unmanned aerial vehicles collecting imagery, they could be human analysts.

We are interested in the algorithmic challenges, which come down to how we can calculate the best teams. We are interested in algorithms that scale: in a disaster response setting there may be tens of thousands of actors, so how can we build algorithms that calculate the best teaming as fast as possible? And also, how do the natural issues of trust and accountability within human teams play out when the teams have been formed dynamically through some sort of automated negotiation?
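Calculating good teams fast usually means accepting an approximation. The toy illustration below (actors, skills and tasks are all invented, and this is not an ORCHID algorithm) uses a greedy rule: for each task, repeatedly recruit the free actor whose skills cover the most still-unmet requirements.

```python
def form_teams(actors, tasks):
    """Greedy team formation: assign each task the free actor covering
    the most still-unmet skill requirements, until the task is covered
    or no one left can help. A scalable heuristic, not an optimal solver.

    `actors` maps name -> set of skills; `tasks` maps name -> set of
    required skills. Returns task -> ordered list of team members."""
    free = set(actors)
    teams = {}
    for task, needed in tasks.items():
        team, unmet = [], set(needed)
        while unmet:
            # the free actor covering the most unmet skills joins next
            best = max(free, key=lambda a: len(actors[a] & unmet), default=None)
            if best is None or not (actors[best] & unmet):
                break                  # no remaining actor can contribute
            team.append(best)
            unmet -= actors[best]
            free.discard(best)
        teams[task] = team
    return teams

actors = {
    "medic": {"first_aid"},
    "pilot": {"uav"},
    "analyst": {"imagery", "mapping"},
}
tasks = {"survey": {"uav", "imagery", "mapping"}, "triage": {"first_aid"}}
teams = form_teams(actors, tasks)
```

Greedy set cover like this runs in near-linear time per task, which is why heuristics of this family are attractive when there are tens of thousands of actors; the trade-off is that the teams are approximately, not provably, optimal.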

Dr Trung Dong Huynh - Research Fellow, University of Southampton [TH]

In order to build intelligent tools to support these teams, it is crucial for us to understand how they work, how they interact and how they collaborate with each other.


So we have designed a game to study how teams of people coordinate. Atomic ORCHID is a location-based mobile game in which field responders and HQ work together to rescue as many targets as possible before they get engulfed by a spreading radioactive cloud. The agent monitors the people's locations and the targets, and on that basis can construct a plan of how to optimally save the targets. So the agent forms a plan and sends messages to the players, thereby instructing them on which targets to prioritise and who to team up with.

It will suggest for you to pair up with a certain player and it will also suggest which task to rescue, to go for. What we are seeing is how the agent teams up players, sends them to certain targets and then those targets are then being collected by the players.


We know the connections between these events, how one event connects to another. From that, graph analysis can pick up common interaction patterns, who the key players in the game were, or even bottlenecks in the operation. Such intimate insight into what happens in an operation, I believe, would undoubtedly help disaster response planners to improve their training and their coordination for future operations.
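A simple proxy for the graph analysis described above is to count how many interaction events each participant takes part in: high-degree nodes are likely key players (or bottlenecks, if everything routes through them). The event list and actor names below are invented for illustration.

```python
from collections import Counter

def key_players(events):
    """Rank participants by how many interaction events they appear in.

    `events` is a list of (actor_a, actor_b) pairs, e.g. derived from
    provenance records of who messaged or teamed up with whom. Degree
    counting is a crude but fast stand-in for richer centrality measures."""
    degree = Counter()
    for a, b in events:
        degree[a] += 1
        degree[b] += 1
    return degree.most_common()

# hypothetical interaction log: HQ coordinates every field responder
events = [
    ("hq", "fr1"), ("hq", "fr2"), ("hq", "fr3"),
    ("fr1", "fr2"),
]
ranking = key_players(events)
```

Here HQ tops the ranking because every coordination event passes through it, which is exactly the kind of bottleneck pattern an after-action review would want to surface.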

Professor Luc Moreau - Professor of Computer Science, University of Southampton

As information needs to be exchanged, there needs to be a common format, a common understanding, and it's crucial to have standards for that. We have defined a standard for provenance on the web. For that we worked with the World Wide Web Consortium, the body that standardises web technologies, and we are using the standard in the ORCHID applications.


So what we would really like is the most efficient way of dynamically mixing resources with different skills, different talents, different capabilities and to do that continually as new tasks come along and disbanding those teams as tasks change or new priorities come along.


One of the key tasks is to be able to make predictions to figure out from what we've seen in the past and what we know, what's likely to happen in the future. So, in particular we have done some simulations around clouds of radiation. We've developed a range of very clever, advanced, machine-learning, artificial intelligence techniques that will take the data from a particular place and make predictions about what is going to happen in the future.

Dr Steve Reece - Researcher, University of Oxford

What we want to be able to do is figure out where that cloud is actually going to be, using these monitors, which are sort of like Geiger counters, and we want to be able to determine the wind dynamics, in order to evacuate people from that environment in such a way that they are not subject to the radiation cloud, not passing through it. So, what we are able to do is use fairly complicated mathematics to model the radiation cloud, combine observations of the radiation cloud, and also generate paths through the environment which avoid it.
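The path-generation step can be sketched with textbook shortest-path search. This is a minimal illustration under strong simplifying assumptions (a static radiation grid with invented values; the real system models cloud dynamics, wind and uncertainty): Dijkstra's algorithm over a grid where each cell's cost is its radiation level finds the route of least total exposure.

```python
import heapq

def safest_path(radiation, start, goal):
    """Dijkstra over a grid whose cell costs are radiation levels, so the
    returned route minimises total exposure rather than distance."""
    rows, cols = len(radiation), len(radiation[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + radiation[nr][nc]  # exposure accrued entering the cell
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    # walk back from goal to start to recover the route
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# hypothetical radiation map: the middle column is heavily contaminated
grid = [
    [1, 9, 1],
    [1, 9, 1],
    [1, 1, 1],
]
route = safest_path(grid, (0, 0), (0, 2))
```

The returned route detours around the contaminated column rather than cutting straight across, trading a longer walk for lower exposure, which mirrors the evacuation goal the speaker describes.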


There will always be new forms of disaster, man-made and natural, and so when we have the ORCHID technologies out there, being used for real on a routine basis, first responders and people on the ground will be able to understand the situation they find themselves in, figure out the best thing to do and how best to allocate the resources at their disposal, respond quicker, make better decisions and ultimately save lives.