Killing Two Birds with One Stone: Drones, Convolutional Neural Networks, and Reinforcement Learning for Disaster Response

By Lekan Sodeinde

In the aftermath of a natural disaster, disaster management efforts usually shift towards disaster response. This stage of disaster management involves warning and evacuation, providing immediate assistance, assessing damage, and restoring public infrastructure [1]. The effectiveness and outcome of these efforts depend on the speed of responders and the quality of information available to them. A slow response can be costly and can draw criticism of the government and the disaster management agency. For example, in August of 2005, in the aftermath of Hurricane Katrina, President George W. Bush [2] and the Federal Emergency Management Agency (FEMA) were heavily criticized for their slow response in providing aid to the people of New Orleans [3]. While no one knows the exact death toll and cost of Hurricane Katrina, the death toll was estimated at more than 1,000 [4] and the cost at about $125 billion US, making it the costliest hurricane in US history [5]. The response to Hurricane Sandy, the second most expensive hurricane in US history, was also criticized as slow [6]. That hurricane damaged public infrastructure, shut down the New York City subway system, and led to a total of 157 fatalities in the US [7]. Damaged infrastructure, especially roads, is a major contributor to slow response, delaying relief efforts or making them impossible altogether. In some cases, the damage is so severe that the affected areas are only accessible by helicopter, plane, or drone.

Drones could be very effective in the response stage of disaster management. They have proven very helpful for search and rescue, situational awareness, and mapping [8]. Drones used for disaster response are usually fitted with cameras to collect images, which are later analyzed. That analysis takes time in a situation that demands rapid response, and it usually involves human experts who are prone to errors. Two machine learning techniques could significantly reduce errors and shorten response time by speeding up the conversion of images into information: a convolutional neural network (CNN) embedded on the drone could provide timely information on damaged buildings and infrastructure while reducing error, and reinforcement learning could speed up response planning.

Convolutional Neural Networks

Convolutional neural networks are already popular in image classification, and they are becoming more popular for extracting information from images collected after a natural disaster. These images usually come from cameras mounted on planes, satellites, or drones, and they reveal the changes that a natural disaster has caused. There are many ongoing efforts to use convolutional neural networks to extract information about damaged buildings from satellite images and to classify them into different levels of damage [9]. Projects such as the xView challenges, supported by the Defense Innovation Unit, aim to automate the assessment of building damage after natural disasters [10]. The second and latest of the xView challenges was dubbed xView2.

The xView2 challenge was released with a dataset covering over 45,000 square kilometers of polygon-labeled pre- and post-disaster imagery. Released along with this dataset were baseline models for localization and classification. Localization is an instance segmentation task: detecting and delineating the buildings in the imagery. The localization model was a fork of the SpaceNet project, another challenge focused on building detection, and was in turn based on U-Net, a convolutional neural network for medical image segmentation [26]. One of U-Net's advantages is that it requires few training images to perform instance segmentation. It achieves this by first performing contraction, applying 3 x 3 convolution layers, each followed by a ReLU activation and a 2 x 2 max pooling that reduces the x and y sizes of the input; it then performs expansion, using up-convolutions and concatenation with high-resolution feature maps from the contraction path [26]. Once the buildings have been identified, the next step is to classify them by damage level. The classification model performs horizontal-flip augmentation and uses the Adam optimizer. It uses three convolution layers with a stride of one, each followed by a 2 x 2 max pooling; the kernels are 5 x 5, 3 x 3, and 3 x 3 respectively. The output of the last layer is flattened, and a ReLU dense layer then outputs the four classes: no damage, minor damage, major damage, or destroyed. The goal of the challenge is to improve on these two models.
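To make the contraction path concrete, here is a minimal NumPy sketch of a single contraction step (a "valid" 3 x 3 convolution, ReLU, then 2 x 2 max pooling). This is only an illustration of the operations involved, not the actual xView2 baseline code; the image and kernel are random stand-ins for a real tile and a learned filter.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D convolution (cross-correlation) of a single-channel image."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """ReLU activation: zero out negative responses."""
    return np.maximum(x, 0.0)

def max_pool_2x2(x):
    """2 x 2 max pooling: halves the x and y sizes, as in the contraction path."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# One contraction step: 3 x 3 convolution -> ReLU -> 2 x 2 max pooling.
image = np.random.rand(32, 32)
kernel = np.random.randn(3, 3)
features = max_pool_2x2(relu(conv2d_valid(image, kernel)))
print(features.shape)  # (15, 15): 32 -> 30 after the valid 3 x 3 conv, then halved
```

A real U-Net stacks several such steps with many filters per layer, then reverses them in the expansion path; the sketch only shows why each step shrinks the spatial dimensions.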

Although the challenge currently focuses on extracting information about damaged buildings, the dataset also has potential for extracting information about obstructed road segments, routing around obstructed roads, and identifying the force of nature responsible for the damage. One advantage of CNNs is that they can be built on pretrained models through transfer learning. This approach provides quick gains in accuracy and allows effort to be concentrated on improving existing models. Models commonly used for transfer learning include AlexNet, VGG19, GoogLeNet, and ResNet50. AlexNet and VGG19 were pretrained on more than a million images and can classify over 1,000 different objects. GoogLeNet is 22 layers deep and comes in two variants, trained on the ImageNet and Places365 datasets respectively. The ImageNet variant can classify objects such as pencils, keyboards, and animals, while the Places365 variant can identify different kinds of places, such as a park, a runway, or a lobby [16].
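The core idea of transfer learning can be sketched without any deep learning framework: keep a pretrained feature extractor frozen and train only a new classification head on top of its features. In the sketch below the "backbone" is just a fixed random projection standing in for real pretrained weights (in practice one would load, say, a ResNet or VGG19 from a framework's model zoo), and the images and labels are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained backbone (e.g. ResNet or VGG19 features): a frozen
# projection whose weights are never updated during training.
W_backbone = rng.normal(size=(64, 16)) / np.sqrt(64)

def backbone(x):
    """Frozen feature extractor: raw 64-pixel inputs -> 16 features."""
    return np.tanh(x @ W_backbone)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Synthetic "images" and labels for four damage classes
# (no damage, minor, major, destroyed); labels are made learnable on purpose.
X = rng.normal(size=(200, 64))
y = backbone(X)[:, :4].argmax(axis=1)

# Transfer learning: train ONLY the new head, by full-batch gradient
# descent on the softmax cross-entropy loss.
W_head = np.zeros((16, 4))
for _ in range(300):
    F = backbone(X)                      # frozen features, backbone untouched
    G = softmax(F @ W_head)
    G[np.arange(len(y)), y] -= 1.0       # dLoss/dLogits for cross-entropy
    W_head -= 0.1 * (F.T @ G) / len(y)

acc = (softmax(backbone(X) @ W_head).argmax(axis=1) == y).mean()
print(f"training accuracy of the new head: {acc:.2f}")
```

Because only the small head is trained, far fewer labeled disaster images are needed than training a full network from scratch, which is the practical appeal of transfer learning here.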

Many recent works build on one of these pretrained models. For example, Jigar Doshi et al. used convolutional neural networks on satellite images to cluster the areas most affected by a disaster so that resources and efforts could be concentrated there. They used the Residual Inception Skip Network segmentation model, a model based on ResNet, to extract roads from satellite images. In this model, a 3 x 3 convolution is replaced by a 3 x 1 convolution, followed by batch normalization and then a 1 x 3 convolution; the activation layers are leaky ReLUs with a negative slope of 0.1 [11]. The model is run on images collected before and after a disaster, and the differences between the two results indicate the areas that were impacted. ResNet, on which this network was built, is a pretrained convolutional neural network trained on more than a million images from the ImageNet database [13]. Pretrained convolutional neural networks have also been used to identify areas affected by flooding in satellite images and aerial photographs [14]: Siti Nor Khuzaimah Binti Amit et al. built a convolutional neural network inspired by AlexNet to extract flooded areas with an accuracy of 80-90%. Convolutional neural networks can also be used to map damage caused by storms: Zayd Mahmoud Hamdi et al. modified a VGG19 network to speed up the assessment of storm damage after a disaster [15]. Asmamaw Gebrehiwot et al. modified GoogLeNet to map the extent of flooding in images collected by a drone. Drone images are becoming more accessible because they are cheaper to collect and because drones are more portable and easier to deploy than an airplane or a remote sensing satellite. CNNs are thus useful for extracting damage information from images collected by drones, and when embedded on the drone itself, they could provide damage information to responders in real time.
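The 3 x 1 followed by 1 x 3 substitution can be checked directly: the two thin convolutions are equivalent to a single 3 x 3 convolution whose kernel is their outer product, but they use 6 parameters instead of 9. A small NumPy verification (ignoring the batch normalization step between the two convolutions; the kernels here are random stand-ins, and the leaky ReLU uses negative slope 0.1):

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D convolution (cross-correlation)."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def leaky_relu(x, slope=0.1):
    """Leaky ReLU: negative inputs are scaled by `slope` instead of zeroed."""
    return np.where(x > 0, x, slope * x)

rng = np.random.default_rng(1)
img = rng.normal(size=(10, 10))
v = rng.normal(size=(3, 1))   # 3 x 1 vertical kernel: 3 parameters
h = rng.normal(size=(1, 3))   # 1 x 3 horizontal kernel: 3 parameters

# Applying the 3 x 1 kernel and then the 1 x 3 kernel equals one 3 x 3
# convolution with their outer product kernel -- 6 parameters instead of 9.
separable = conv2d_valid(conv2d_valid(img, v), h)
full = conv2d_valid(img, v @ h)
print(np.allclose(separable, full))  # True
activated = leaky_relu(separable)    # the activation used in the model
```

The saving seems small for one filter, but across hundreds of filters and layers it cuts parameters and computation by roughly a third, which matters on resource-limited platforms such as drones.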

Drone Images

Of all the approaches for collecting images for convolutional neural network models after a natural disaster, drones are among the most effective, because they can be deployed easily to support humanitarian efforts. For example, in 2018, public safety officers used drones to capture images of the California Camp Fire to assist in the recovery efforts. The images were used by the Federal Emergency Management Agency (FEMA) for planning purposes and for supporting insurance claims [17]. Similarly, after Hurricane Irma in 2017, insurance inspectors used drone images to assess damage to properties on the island of St. Martin. In the same year, images from drones aided rebuilding efforts in Mexico City after its earthquake. After the Balkan floods of 2014, drone images were also used to locate landmines displaced by the water so that returning villagers could be warned [17]. All of these examples show how drone imaging benefits disaster response efforts.

Drone images have historically been post-processed to extract information for disaster response efforts. Because the timeliness of this information is critical to saving lives and property, it has been suggested that convolutional neural networks be embedded on a platform integrated into the drone for real-time information extraction [18]. In some cases, a convolutional neural network has been embedded on a drone to detect objects; Alberto Rivas demonstrated how this could be used to detect cattle [19]. In another case, a convolutional neural network was used with a drone to map flood extent [20]. Most of the information extracted from these images is used for planning: planning humanitarian logistics, determining the best routes to affected areas, and deciding how best to distribute resources. Planning is a major component of disaster response; it is required for search and rescue, for humanitarian assistance, and for disaster recovery. Conveniently, reinforcement learning is another area of machine learning that could help with planning.

Reinforcement Learning

Reinforcement learning allows an agent, such as a drone, to take actions in an environment, such as an area affected by a disaster, in order to maximize rewards, such as lives saved. Reinforcement learning has been used for years in gaming, an environment of rewards and punishments, and the same approach could be used by drones responding to a natural disaster. A drone could learn the best policy for distributing resources to the affected area. It could also learn to visit the areas most affected by the disaster instead of areas with little to no impact. Conveniently, a drone can perform reinforcement learning while collecting images for a convolutional neural network; if the CNN is embedded onboard, its output could inform the reward and punishment signal of the drone's reinforcement learning. Chunxue Wu et al. developed an algorithm called Snake to enable autonomous search and rescue [21]. Huy Xuan Pham et al. also proposed a reinforcement learning algorithm for finding and rescuing people after a natural disaster [23]. Reinforcement learning has also been studied for obstacle detection and collision avoidance, which is important for autonomous drones [22].
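A minimal illustration of the idea is tabular Q-learning on a toy grid world: a drone learns, by trial and error, a policy that flies it to the most-damaged cell by a short route. This sketch is not any of the cited algorithms; the grid size, rewards, and hyperparameters are arbitrary choices for demonstration.

```python
import numpy as np

# A drone on a 4 x 4 grid learns to fly from its launch point (0, 0) to the
# most-damaged cell (3, 3). Reaching that cell earns +10; every other move
# costs -1, which pushes the agent toward short routes.
rng = np.random.default_rng(42)
SIZE, GOAL = 4, (3, 3)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]    # up, down, left, right
Q = np.zeros((SIZE, SIZE, len(ACTIONS)))        # one value per (cell, action)
alpha, gamma, eps = 0.5, 0.9, 0.2               # learning rate, discount, exploration

def step(state, a):
    """Deterministic grid dynamics: move one cell, clipped at the borders."""
    r = min(max(state[0] + ACTIONS[a][0], 0), SIZE - 1)
    c = min(max(state[1] + ACTIONS[a][1], 0), SIZE - 1)
    nxt = (r, c)
    return nxt, (10.0 if nxt == GOAL else -1.0)

for _ in range(500):                            # training episodes
    s = (0, 0)
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the current Q-table, sometimes explore.
        a = int(rng.integers(4)) if rng.random() < eps else int(np.argmax(Q[s]))
        nxt, reward = step(s, a)
        Q[s][a] += alpha * (reward + gamma * Q[nxt].max() - Q[s][a])
        s = nxt

# Fly the learned greedy policy from the launch point.
s, path = (0, 0), [(0, 0)]
for _ in range(20):                             # safety cap on rollout length
    s, _ = step(s, int(np.argmax(Q[s])))
    path.append(s)
    if s == GOAL:
        break
print(path)  # typically a shortest 6-move route from (0, 0) to (3, 3)
```

In a real deployment the reward signal could come from the onboard CNN's damage classifications instead of a hand-coded goal cell, which is exactly the coupling between the two techniques described above.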

Beyond a single drone, multiple drones can work together using multi-agent reinforcement learning. This can ease disaster response and allow several planning activities to run in parallel. For example, while each drone autonomously collects images or information in real time, the drones could separately determine the best policies for the different planning efforts. This approach of using multiple drones to pursue a common objective, with each drone altering its behavior based on communication with the others, is called a swarm [24].

Drones embedded with convolutional neural network and reinforcement learning algorithms could make disaster response cost-effective, life-saving, and timely. Such an application could save governments money and accelerate disaster recovery, the next stage of disaster management [25].

References

1. https://en.wikipedia.org/wiki/Disaster_response
2. https://www.usnews.com/news/the-report/articles/2015/08/28/hurricane-katrina-was-the-beginning-of-the-end-for-george-w-bush
3. https://www.pbs.org/newshour/politics/government_programs-july-dec05-fema_09-09
4. https://www.usnews.com/news/blogs/data-mine/2015/08/28/no-one-knows-how-many-people-died-in-katrina
5. https://www.statista.com/statistics/744015/most-expensive-natural-disasters-usa/
6. https://www.csis.org/analysis/hurricane-sandy-evaluating-response-one-year-later
7. https://en.wikipedia.org/wiki/Hurricane_Sandy
8. https://www.heliguy.com/blog/2019/03/22/six-times-drones-have-helped-with-disaster-response/
9. http://openaccess.thecvf.com/content_CVPRW_2019/papers/cv4gc/Gupta_Creating_xBD_A_Dataset_for_Assessing_Building_Damage_from_Satellite_CVPRW_2019_paper.pdf
10. https://xview2.org/
11. https://research.fb.com/wp-content/uploads/2018/11/From-Satellite-Imagery-to-Disaster-Insights.pdf
13. ImageNet. http://www.image-net.org
14. https://ieeexplore.ieee.org/document/8228593
15. remotesensing-11-01976.pdf (local file)
16. https://www.mathworks.com/help/deeplearning/ref/googlenet.html
17. https://www.heliguy.com/blog/2019/03/22/six-times-drones-have-helped-with-disaster-response/
18. http://openaccess.thecvf.com/content_CVPRW_2019/papers/UAVision/Kyrkou_Deep-Learning-Based_Aerial_Image_Classification_for_Emergency_Response_Applications_Using_Unmanned_CVPRW_2019_paper.pdf
19. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6068661/
20. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6479537/
21. https://ieeexplore.ieee.org/abstract/document/8787847
22. https://www.mdpi.com/2072-4292/11/18/2144/htm
23. https://ieeexplore.ieee.org/document/8468611
24. https://warontherocks.com/2019/02/drones-of-mass-destruction-drone-swarms-and-the-future-of-nuclear-chemical-and-biological-weapons/