Machines to the Rescue: Artificial Intelligence in Humanitarian Action
Shanzana Yeasin Khan
Shahriar Yeasin Khan
According to estimates in the UN’s Global Humanitarian Overview, the number of people in need of humanitarian assistance and protection was 235 million in 2021, 274 million in 2022, and rose to 339 million in 2023. Given such staggering numbers, the time and resources of humanitarian organisations are stretched more than ever before.
​
That is why, as various technologies become more accessible, many humanitarian organisations are gradually incorporating digital solutions into humanitarian action. Machines can process large amounts of data very quickly, saving valuable time, which may be crucial in crisis situations. Very importantly, they also have the potential to save resources, which can then be allocated to areas where the need is more acute.
​
In that regard, perhaps the most notable development in the humanitarian sector in recent years is the increased use of Artificial Intelligence (AI). The way in which AI works is a complex subject and will not be discussed in detail in this paper. In lay terms, AI manifests itself through machines that run complex mathematical algorithms specifically designed to collect and analyse data and to produce outputs or make decisions without the need for human assistance.
​
As will be further discussed in this paper, humanitarian organisations are increasingly collaborating with technology giants to develop AI solutions, particularly in anticipatory approaches to humanitarian action. However, the technology is still at a relatively nascent stage and is yet to flourish. AI depends on large datasets, which are still under development. There are also ethical concerns about AI, and, as suggested below, human rights principles must be at the forefront of AI development if the use of AI in humanitarian action is to be deemed a ‘trustworthy’ approach.
​
​
TECHNOLOGICAL EXPERTISE OF HUMANITARIAN ORGANISATIONS
​
One of the major barriers to deploying AI in humanitarian action has been the lack of requisite in-house expertise at humanitarian organisations. Various technology corporations, such as Microsoft, have stepped up to support and implement various projects across the world. One such project is the Humanitarian OpenStreetMap Team, which is currently developing AI-assisted map datasets for humanitarian relief programmes, particularly in disaster-affected areas. Google is also developing initiatives on using AI in humanitarian action, notably in predicting flood patterns in Bangladesh and India through the Google Flood Forecasting Initiative. Amazon, too, is developing AI solutions for humanitarian organisations, particularly for disaster response situations.
​
In the shorter term, more collaborations between governments, humanitarian organisations and such technology giants can be expected. In the longer term, as AI technologies become more accessible, humanitarian organisations may be able to develop their own expertise in this field. Efforts are already underway, such as the UN Global Pulse, which was created to support innovation and to proliferate the use of AI in humanitarian action by expanding the datasets available to the humanitarian community. It is also developing various tools for the collection and analysis of data, such as PulseSatellite. Another example is NetHope, a consortium of humanitarian organisations collaborating to develop digital solutions, including AI, for humanitarian action.
​
DIFFICULTIES IN INCORPORATING AI IN HUMANITARIAN ACTION
It should be noted that in some humanitarian sectors, the use of AI is not yet viable because datasets are not always available. For example, MSF piloted the REaction Assessment Collaboration Hub (REACH) platform in 2017 to monitor real-time changes on the ground based on data inputs by MSF along with open-source data, including from social media. However, despite using an automated process, the REACH project did not integrate AI into its platform because an algorithm based on a clean and sufficient dataset was not viable at the time. This highlights that, when it comes to using AI in the humanitarian sector, the availability of a suitable dataset is the most important factor. Particularly in reactive approaches, datasets such as those drawn from social media are not always helpful or even easy to analyse through an AI model, as can be seen from UNHCR’s social media data analysis of the refugee crisis in Europe, published in 2017.
​
For AI to be successfully used in the humanitarian sector, it is pertinent that datasets are built and AI models are thoroughly tested before implementation. In instances where appropriate datasets are available, organisations such as the World Food Programme (WFP) have made much more progress with AI. The WFP’s Skai project uses AI to analyse satellite images to determine on-the-ground realities in emergency situations and to decide the appropriate response required. This is extremely helpful in disaster-hit areas where time is of the essence, as AI can significantly reduce the time the same task would require if done manually. The WFP has also created HungerMap LIVE, which uses datasets from WFP’s monitoring systems along with publicly available data streams, which are then analysed and presented in visual form in real time using AI technology.
​
ANTICIPATORY AND REACTIVE APPROACHES
As can be seen in some of the examples above, the most prolific use of AI in the humanitarian sector has been in anticipatory rather than reactive approaches to humanitarian action, providing newfound possibilities for pre-emptive action with the potential of “mitigating the adverse impact on vulnerable people”. The Danish Refugee Council (DRC) has developed the Foresight model to predict future forced displacement in certain specific countries. This helps humanitarian organisations to be better prepared, particularly in budgeting, as they engage in relief and aid operations. NASA is developing the Landslide Hazard Assessment for Situational Awareness (LHASA) model to predict landslides globally, which is estimated to have very high predictive accuracy and can support pre-emptive evacuations of communities at risk.
AI can also be used in reactive approaches to humanitarian action, although few such models exist as yet. One example is the United Nations Satellite Centre (UNOSAT), which has for some time provided the Humanitarian Rapid Mapping Service to governments and other organisations. It has incorporated AI into its services to monitor flood-affected areas and assist in a speedy response to such disasters; this was already tested in Mozambique in 2021. In 2022, UNOSAT began collaborating with NVIDIA to further develop its AI capabilities. These are still early days, and it remains to be seen to what extent, and with what level of effectiveness, AI can be used in reactive approaches to humanitarian action.
​
ADAPTIVE APPROACHES
Particularly in protracted humanitarian crises where large datasets are already available, AI can be incorporated to great advantage, providing greater operational flexibility to humanitarian actors pursuing adaptive approaches to humanitarian action.
​
For regular context monitoring as well as programme monitoring, AI can work round the clock, continuously analysing large amounts of data. If AI models can improve the quality and timeliness of monitoring and evaluation, decision-making in humanitarian action will be more effective and relevant. AI-equipped machines can also be used to collect feedback from aid recipients in much higher numbers and to analyse that feedback in real time, saving time and resources.
​
One particular area where AI can be gradually rolled out is refugee camps, to streamline operational activities. Data collection is the most obvious application. For instance, cameras installed in public spaces can, using AI, monitor behavioural and activity patterns of refugees as they develop within the camps over time. This can help humanitarian actors gain a better contextual understanding and adapt their programmes to the needs and concerns that may arise over time.
​
Children growing up in refugee camps particularly lack sufficient educational facilities due to, amongst other factors, host-country policies, financial constraints, lack of infrastructure, the volatility of the region and the lack of local expertise available in neighbouring areas. Robots equipped with AI have the potential to become ‘teachers’ with the capacity to teach various subjects at various levels. Admittedly, human teachers remain necessary for refugee children, who benefit from human interaction, but human teaching can be supplemented with AI solutions.
​
It is to be noted that AI technology is still under development. Although it has advanced considerably, it is still not easily accessible and may not necessarily be a financially sound option for most humanitarian organisations to use in adaptive approaches to humanitarian action. But it is anticipated that in the coming years the technology will become cheaper and more accessible. Considering that building such AI models to an acceptable standard will take time, organisations with the financial capability and technical expertise should begin creating datasets and testing models in the immediate future.
​
ETHICAL CHALLENGES
Despite all the potential benefits, there are multifarious ethical concerns in using AI in humanitarian action. For instance, there are concerns about the use of AI to collect and analyse personal data. In that regard, the safeguards in place in humanitarian organisations, and the data protection rules applicable across sectors globally, must likewise be applied, as appropriate, to AI models employed in humanitarian action. There should be clear and precise guidelines for collecting data ethically and using it responsibly, in line with global data privacy standards and within a framework that minimises potential risks.
The most concerning ethical issue is the potential for mathematical manipulation of AI algorithms, which may, for example, favour one ethnicity over another, or cause certain vulnerable communities to receive less favourable results. Appropriate measures must be put in place when developing AI models to counteract such possibilities, whether caused by inadvertent mistake, ignorance or even bad faith. Creating datasets that address potential biases is one such measure; the Global Index on Responsible AI, being built through primary research involving over 120 countries, is one tool seeking to include the perspectives of marginalised groups. More tools and datasets, as well as accountability and transparency mechanisms, still need to be developed.
​
It is emphasised that the ethical use of AI can only be ensured if human rights principles are at the forefront of the development of AI models. To that effect, it should be highlighted that the OECD AI Principles provide a robust set of guidelines to ensure a human rights based approach to the use of AI in humanitarian action. Many governments around the world have already committed to these AI Principles. But more needs to be done, and it is perhaps even necessary to create a legally binding global framework for the effective as well as ethical use of AI in humanitarian action.
