Machine Learning

ChatGPT: the risks and opportunities for organisations in the humanitarian and development sector

ChatGPT, a large language model (LLM) developed by OpenAI, has emerged as a viable tool for speeding up and reducing the cost of information-related work. It can help automate services, products, and processes by performing human-level information querying and synthesis almost instantaneously. This technology is rapidly being adopted by organisations in various sectors, including humanitarian and development organisations. However, the adoption of ChatGPT is not without risks. It is important to weigh the opportunities and risks of using ChatGPT in the operations of large organisations in the humanitarian and development sector.

Some opportunities

  1. Crisis management and communication: ChatGPT can be used to monitor and respond to global emergencies in real-time, providing a centralised communication channel for updates, guidelines, and strategies, while also addressing queries from both internal and external stakeholders.

    But how?
    AI assistants like Jarvis, available inside WhatsApp and Telegram, make it possible to consult an LLM quickly and on the go. It is also possible to generate content, such as videos, faster than ever using only text inputs.

  2. Training and capacity building: ChatGPT can serve as an interactive learning platform, providing tailored training materials, simulations, and assessments to help build capacity within the organisation and improve the knowledge and skills of professionals.

    But how?
    Education platforms like Duolingo and Khan Academy now have AI-powered tutors for learners, and assistants for teachers.

  3. Internal knowledge management: ChatGPT can be used to query an organisation's knowledge resources, allowing staff to quickly access information and expertise, and facilitating knowledge sharing across departments and regions. 

    But how?
    Organisations can connect GPT-4 to their knowledge base to build their own AI-powered workplace search (with tools such as AskNotion, Glean, or UseFini).

    Your knowledge base can become an asset in its own right, as Bloomberg is demonstrating by training its own GPT on its financial data.

    Documentation can be created automatically (e.g. for a codebase), and data entry and data cleaning can increasingly be automated with workflow tools like Bardeen.

  4. Emergency surveillance: ChatGPT can monitor global crises as an early warning system, combining information from various sources (social media, news, and health reports) to provide a comprehensive and real-time picture of global threats, enabling swift response and containment.

    But how?
    It’s becoming easier and more effective than ever to scan large data streams, using models like HuggingGPT to create your own pipeline for data analytics. 

  5. Public health campaigns: ChatGPT can assist in designing and implementing highly personalised effective public health campaigns by analysing target audience demographics, behaviour, and preferences, and creating customised content to maximise engagement and impact. 

    But how?
    The founder of the world's leading marketing platform, HubSpot, has pivoted to focus on ChatSpot, where AI helps create personalised content and marketing campaigns at scale.

  6. Policy research: ChatGPT can aid in policy research by synthesising relevant papers, data, and best practices, allowing organisations to make informed decisions and create evidence-based health policies and guidelines.

    But how?
    ChatGPT excels at summarising academic articles, blogs and podcasts.
    Further still, it can elevate the capabilities of staff: plain text commands remove the barrier of needing to know data visualisation languages or how to write SQL queries.

  7. Grant management: ChatGPT can streamline the grant application, review, and reporting processes, ensuring that funding is allocated efficiently and transparently, and reducing the administrative burden on staff. It could also be used to raise the quality of applications and to level the playing field for applicants. Inevitably, it will be used by applicants and funded projects for reporting.

    But how?
    AI-powered writing tools are speeding up the process, increasing formal writing quality and helping generate ideas (Lex, WriteSonic).

  8. Stakeholder engagement: ChatGPT can facilitate engagement with key stakeholders, such as governments, NGOs, and the private sector, by providing timely and accurate information, and enabling efficient collaboration and coordination.

    But how?
    Effort is saved on internal communications through automatic meeting summaries, an approach that could extend to any other type of update.

    AI assistants are emerging as a way to make advice more engaging and accessible for external service users, such as this AI-powered agriculture advice for farmers.

  9. Multilingual communication: ChatGPT can be used to automatically translate communications, guidelines, and documents into multiple languages at minimal cost and time, ensuring that vital information is accessible to a global audience and enhancing the organisation's ability to collaborate effectively across diverse regions.

    But how?
    OpenAI's Whisper model has dramatically increased the quality of audio-to-text speech recognition.
    Tech giants Meta, Amazon and Mozilla have all made advances in machine translation, including for under-represented languages, and these advances are becoming productised.

  10. Virtual Assistants: ChatGPT can reduce staff workload by summarising meeting notes, triaging emails, and creating intelligent alerts that connect new information with the current priorities of staff.

    But how?
    Enabling tools like LangChain make it possible to string multiple actions together, and AutoGPT makes it possible for ChatGPT to prompt itself to devise and execute a plan of action. These are powering the first generation of assistants, such as Milo, an assistant for busy parents.
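
Under the hood, most "chat with your knowledge base" tools (such as those mentioned under point 3 above) follow the same retrieval pattern: embed the documents, embed the incoming question, and hand the closest passages to the model as context. Below is a toy Python sketch of that retrieval step, using a bag-of-words similarity as a stand-in for a real embedding model; the documents and wording are purely illustrative.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Stand-in embedding: a bag-of-words count vector. A production
    system would call a real embedding model here instead."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents closest to the query; these would be
    passed to the LLM as context for drafting an answer."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

# Illustrative internal documents
knowledge_base = [
    "Travel policy: staff must book flights via the internal portal.",
    "Security protocol: report incidents to the duty officer within 24 hours.",
    "Expenses: submit receipts within 30 days of return.",
]

print(retrieve("how do I report a security incident?", knowledge_base))
```

In practice the top passages are not printed but inserted into the prompt sent to the LLM, which then answers in natural language with the retrieved text as grounding.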

What are the risks?

With any technology, there are risks that need to be assessed, and with ChatGPT those risks may be greater than most. The Outsight team can help you navigate these in a way that improves efficiency without taking unacceptable risks. Some of the common risks associated with the use of ChatGPT are as follows:

  1. Organisational Misinformation: The potential for ChatGPT to provide inaccurate information is a risk that organisations need to consider. It is important to ensure that the data fed into ChatGPT is accurate and reliable, and that the model is regularly updated and retrained to reflect changes in the data.

  2. Inherent Bias: ChatGPT may inadvertently reinforce existing biases in the data it is trained on. It is important to ensure that the data fed into the model is diverse and representative.

  3. Privacy Risks: ChatGPT has access to sensitive information and needs to be trained with data that respects the privacy of individuals. It is important to establish clear policies and procedures to protect the privacy of individuals.

  4. Legal Risks: The use of ChatGPT needs to comply with the relevant laws and regulations, including data protection laws.

  5. Reputational Risks: If ChatGPT is used to produce content or communicate with stakeholders, there is a risk that the communication may not reflect the values and tone of the organisation, potentially leading to reputational damage.

How Outsight can help

To ensure safe and effective adoption, we must map the optimal use cases for the organisational level, and empower bottom-up understanding and best practices at the individual level. Outsight can support and accompany organisations on their journey to leverage this game-changing technology, and has developed a stepwise program, including:

  1. Mapping use cases relevant to the organisation's operations.

  2. Evaluating the limitations and risks associated with the identified use cases, along with a generalised framework.

  3. Prioritising innovation opportunities based on short, medium, and long-term impact.

  4. Developing education and capacity-building content, workshops and training to have a repeatable and scalable impact on the safe adoption of ChatGPT from an end-user perspective.

  5. Supporting the implementation of capacity building programs to take advantage of cost and time savings effectively.

The ultimate goal is to support organisations to take up the dramatic time and cost-saving opportunities made possible by this technology, while protecting against the risks and limitations amidst the hype.

About the author and Outsight International

Harry Wilson
Harry is a product person and consultant on applying technology for impact. He has led teams that have built products used by the WHO, World Bank, UNICEF, and Inter-American Development Bank, and has acted as a consultant to companies like Facebook for Good, Microsoft and Intel. His specialist areas are AI & data, blockchain and communities.

ChatGPT
ChatGPT is a sophisticated language model powered by OpenAI's state-of-the-art GPT-3.5 architecture. With a vast knowledge base spanning a wide range of topics, ChatGPT is an expert in answering questions and generating natural language responses that are both informative and engaging.

Outsight International
Outsight International provides services to the humanitarian and development sector in an efficient and agile way. Outsight International builds on the range of expertise offered by a network of Associates in order to deliver quality results adapted to the specific tasks at hand. If you’d like to discuss working with the Outsight team, please get in touch or follow us on
LinkedIn for regular updates.

The Chatbot Series: Part One: What is a chatbot?

What is a chatbot? Probably not this…

The use of chatbots has increased dramatically since the start of the COVID-19 pandemic. As a tool for public information, they have proven more effective than previous dissemination tools such as websites or hotlines, thanks to their targeted and adapted answers. Chatbots have now gained a firm foothold in humanitarian and development organisations (HDOs), with these organisations looking to provide adapted systems to the different communities that they serve.

At Outsight International, we have extensive experience working with chatbots. In this blog, Devangana Khokhar, Hanna Phelan and Michelle Chakkalackal provide an overview of what chatbots are and how they can be used. Stay tuned for a follow-up blog on key considerations when designing a chatbot.

What's a chatbot? The types of chatbots

Chatbots are a relatively recent addition to the world of human-computer interaction. While question-answering systems — powered by both rule engines and Artificial Intelligence (AI) — have existed for a long time, recent advancements in Natural Language Processing (NLP) have made it possible and convenient to build chatbots that are context-aware and optimise the interaction between a human and a machine.

Chatbots come in a number of different forms. At their most basic, they are a tool for automation. They range from very simple 'rule engines' that work like interactive voice response (IVR) systems, where users select predefined options, to more advanced programs that use natural language processing (NLP) and artificial intelligence (AI) to provide genuinely responsive experiences.

Broadly speaking, we can break chatbots down into three main categories:

  1. Rules-/Menu-Based Chatbots: Users select options from a pre-defined scenario tree, similar to interactive voice response menus. You can tell you're using a menu-based chatbot when the prompt is something like: "Hi, I'm Pavi, your friendly customer service agent, and I can help you with these issues. Please select your issue from this drop-down menu or press this number/letter..."

  2. Hybrid Bots: Users can select their issue from a pre-existing menu or type their question in. It looks something like this: "Hi, I'm John. How can I help you today? Select your issue from this menu OR type in your question below." The more complex the issue, the greater the chance a company is using a combination of pre-defined scenario trees, a 'mechanical turk' (a human assisting with sorting the language), and further training of its AI on incoming queries. This approach works well when a certain set of frequently asked questions is known in advance along with their answers, thereby solving the cold-start problem.

  3. AI Chatbots: Users interact with the bot by typing their question, and the bot uses AI and NLP to find the answer. Such chatbots often work with engines that extract and understand the intent, as well as the entity or entities tied to that intent, from the user's query. The identified intent and entities are used to query a knowledge base, building the context with which to respond to the user. There have been recent advancements in context and next-step prediction, inspired by the use of AI in the gaming world, that allow the system to predict the next question a user is going to ask, thereby improving the end-user experience.
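
The scenario-tree logic behind a rules-/menu-based chatbot (category 1 above) can be sketched in a few lines of Python. The menu text and states below are illustrative; a production bot would add per-user session storage and a messaging-channel API on top.

```python
# A pre-defined scenario tree: each state has a prompt and the
# states reachable from it via numbered menu options.
MENU_TREE = {
    "root": {
        "prompt": "Hi, I'm Pavi. Reply 1 for health info or 2 to report an issue.",
        "options": {"1": "health_info", "2": "report_issue"},
    },
    "health_info": {
        "prompt": "Reply 1 for symptoms or 2 for your nearest clinic.",
        "options": {"1": "symptoms", "2": "clinic"},
    },
    "report_issue": {"prompt": "Please describe your issue; an agent will follow up.", "options": {}},
    "symptoms": {"prompt": "Common symptoms include fever and cough.", "options": {}},
    "clinic": {"prompt": "Clinics are listed at example.org/clinics.", "options": {}},
}

def step(state: str, user_input: str) -> tuple[str, str]:
    """Advance one turn through the tree; unrecognised input simply
    re-shows the current prompt instead of failing."""
    node = MENU_TREE[state]
    next_state = node["options"].get(user_input.strip())
    if next_state is None:
        return state, node["prompt"]
    return next_state, MENU_TREE[next_state]["prompt"]
```

For example, `step("root", "1")` moves the user into the health-info menu, while anything the tree doesn't recognise re-prompts — exactly the rigidity that hybrid and AI chatbots are designed to soften.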


Chatbots in the humanitarian and development sector

As mentioned, chatbots have started to be used by a wide range of actors looking to impart information or provide services to populations quickly and in a more personalised way. For many HDOs, this theoretically fits well with their model of responsive operations that can reach swathes of beneficiary groups in an on-demand manner.

However, the success of chatbots is often determined by the consideration that went into their design and implementation. How well one understands the problem to be solved — taking a systems view of the AI, the user experience, and the feedback loops — determines the longevity of the solution.

To consider the different approaches to chatbots, we have identified two case studies from the sector which we think took differing approaches to chatbot design.

Case study 1: Praekelt.org - Scaling COVID-19 Truths

World Health Organization: HealthAlert from praekelt.org

WhatsApp has received a lot of negative press in the past 12 months over concerns about its role as a channel where COVID-19 misinformation was rife, as well as over updates to its privacy policy.

Despite this, many organisations recognised that WhatsApp — with 1.6 billion users — would have to be addressed if evidence-based public health awareness was to prevail. Enter Praekelt.org and their chatbot-based solution Turn.io. The South African organisation focused on building technical infrastructure to provide users with information hotlines and chatbots for understanding which healthcare services were available and what precautions should be taken to remain safe during the pandemic. The solution soon attracted interest from a number of significant healthcare stakeholders, including the World Health Organization (WHO) and the governments of Ethiopia and Mozambique. This was the first time the WHO had used the WhatsApp Business API.

Since its launch, the chatbot has been used by over 12 million users around the world, seeing a particular peak in usage in locations experiencing spikes in infections. The offering has since been made available free of charge by Praekelt.org and Turn.io to any ministry of health worldwide.

Beyond the COVID-19 chatbot use case, Turn.io has been developed to address a variety of other needs. One such partnership is in collaboration with Girl Effect, an international non-profit supporting adolescent girls in low- and middle-income countries (LMIC) to make informed health and wellbeing choices. The initial pilot of this chatbot solution was launched in South Africa for girls aged 13 to 17, to answer questions that may be difficult to raise in another forum, concerning emotional, social, and practical elements of sex and relationships. The Girl Effect chatbot has since been tested in three other countries (Nigeria, the Philippines and Tanzania), with over 10,000 users having interacted with 'Big Sis' — the chatbot's 'persona'.

The scale and speed of rollout for this particular solution is impressive. However, WhatsApp data privacy concerns will have to be addressed head-on going forward, particularly when considering implementations for vulnerable communities.

Case study 2: Babylon Health GP at Hand Decision Support Chatbot

The Babylon Health chatbot

Babylon Health is in many ways a digital health success story, now employing over 1,000 people in the UK, US and Rwanda. However, it hasn't all been smooth sailing. Since its founding in 2013, Babylon has embedded itself in the UK's NHS, offering a consumer-facing AI-powered decision support tool and other telehealth services through which users can access video or phone consultations with NHS clinicians and book in-person appointments.

The main concern around Babylon has centred on the ambition of its AI diagnostic and triage chatbot. While the company claimed that the chatbot was not intended to provide a validated diagnosis, critics pointed to methodological concerns, especially regarding claims that the Babylon chatbot outperformed the average human doctor on a subset of the Royal College of General Practitioners exam. Questions included whether the chatbot would perform as well in real-world situations, with data entered by people with no clinical experience, and whether it would be as successful in more unusual situations. According to a recent paper in the BMJ, it would not. There have been calls for independent review of these types of solutions and for increased regulatory measures to validate AI healthcare solutions.

Babylon seemed to recognise the risk associated with the chatbot element of its offering when expanding to Rwanda in 2016, choosing a slightly different operating model in that context: one focused primarily on phone and SMS services that connect clinicians and community health workers with users.

Conclusion

Given the range of chatbot solutions available and their diverse applications, picking the right tool for the job can be daunting. The factors that lead to the success or failure of a new chatbot platform will thus be the topic of our next blog, where we'll provide key considerations for deciding whether to use a chatbot, and how to implement one successfully.

If you've worked with chatbots yourself or interacted with one that stood out to you, either as a success or a failure, we'd love to know about it! Share with us your tales of chatbots or just leave the link to your favourite chatbots in the comments. If you’re looking for chatbot expertise, get in touch with us through our contact form.

ABOUT THE AUTHORS AND OUTSIGHT INTERNATIONAL

Devangana Khokhar
Devangana Khokhar is an experienced data scientist and strategist with years of experience in building intelligent systems for clients across domains and geographies and has a research background in theoretical computer science, information retrieval, and social network analysis. Her interests include data-driven intelligence, data in the humanitarian sector, and data ethics and responsibilities. In the past, Devangana led the India chapter of DataKind. Devangana frequently consults for nonprofit organisations and social enterprises on the value of data literacy and holds workshops and boot camps on the same. She’s the author of the book titled Gephi Cookbook, a beginner's guide on network sciences. Devangana currently works as Lead Data Scientist with ThoughtWorks.

Hanna Phelan
Hanna is an expert in digital health implementation currently working as a health innovation Case Manager with the MSF Sweden Innovation Unit. In the past, she has advised leadership teams in health systems and pharmaceuticals. She received her MSc in Global Health from Trinity College Dublin, during which she conducted field assessments of rehabilitation approaches by Handicap International for Syrian refugee populations in IDP camps and community settings.

Michelle Chakkalackal
Michelle is an experienced entrepreneur, researcher, and impact strategist, specialising in growing a project or an organisation from start to scale, globally. She has 15+ years of experience working in systems change and facilitation at the crossroads of impact, tech, gender, diversity, equity, and inclusion (DEI).

Outsight International
Outsight International provides services to the humanitarian and development sector in an efficient and agile way. Outsight International builds on the range of expertise offered by a network of Associates in order to deliver quality results adapted to the specific tasks at hand. If you’d like to discuss working with the Outsight team, please
get in touch or follow us on LinkedIn for regular updates.

The Complete Picture Project: Uncovering hidden AI bias

Can the diversity of the crowd be properly represented in AI datasets?

Can the diversity of the crowd be properly represented in AI datasets?

How do developers and users ensure that Artificial Intelligence (AI) algorithms serve all the members of a community equitably and fairly? The Outsight team has a solution…

This is not a small question. According to Forbes, the global AI-driven machine learning market will reach $20.83B in 2024. Low- and middle-income countries have already seen a rapid expansion in applications using this technology. Not surprisingly, the humanitarian and development sectors increasingly make use of machine learning models to reach beneficiaries faster, understand needs better, and make key decisions about the form and execution of life-saving programs.

AI-driven applications range from chatbots that connect individuals affected by disaster to their required resources, to applications that help diagnose bacterial diseases. These increasingly powerful new tools have the potential to dramatically improve aid delivery and life in communities affected by crisis. However, this value is tempered by the reality that biases can easily find their way into even the most diligently engineered applications.

AI models and applications are often built far from the communities where they will eventually be used, and are based on datasets that fail to reflect the actual diversity of these communities. This disconnect can lead to the inclusion of unintentional biases within an AI model, ultimately driving unfair system choices and recommendations that are difficult to detect.

For example, a recruiting application using AI may be designed to open up new economic opportunities by evaluating all the candidates applying for jobs. This is a laudable goal, but it can be tainted by biases in the algorithm that unfairly weight factors associated with gender, social background, physical ability, or language. The algorithm can systematically exacerbate existing disadvantages faced by certain groups.

Similar challenges can face large-scale aid programs that attempt to leverage AI. A cash distribution program serving an area hit by disaster or a conflict may use AI to guide cash distribution, check for misuse, and measure performance. If these automated insights favour certain communities, they could end up excluding already marginalised groups and individuals.

The Hiding Places of Bias

Whenever AI is employed in a decision-making system, it is in the interests of technical developers, adopting organisations, and communities to ensure that the algorithms are providing value, while not causing harm due to bias.

Determining whether subtle bias exists within an algorithm is a difficult task, even for experienced data scientists and conscientious AI users. A wide range of factors may contribute to bias within an AI algorithm, some of which stem from the algorithm's own logic and performance. There is a growing set of tools to help search for algorithmic bias within the logic of an AI application.

Evaluating the ‘wiring’ of an AI tool is important, but it is not the only concern. The sources of bias may inadvertently be embedded in the data set itself. Data bias risks can include:

  • Who is included — Bias in choosing who is selected in a data set.

  • How data are connected — Failure to recognise connections amongst different data that are important within a community.

  • Depth of insight — Failure to capture elements that are uniquely important for members of a community group.

As an example, the datasets used to train and test AI models often represent only the digital footprint of a community, not its real diversity. As a result, applications are developed based on the characteristics of well-represented community groups.
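
One concrete way to surface this kind of gap is to compare each group's share of a dataset against its share of the community, taken from a census or survey. The sketch below does exactly that; all group names and figures are hypothetical.

```python
def representation_gaps(dataset_counts, community_shares, tol=0.05):
    """Flag groups whose dataset share differs from their community
    share by more than `tol` (returned as observed minus expected)."""
    total = sum(dataset_counts.values())
    gaps = {}
    for group, expected in community_shares.items():
        observed = dataset_counts.get(group, 0) / total
        if abs(observed - expected) > tol:
            gaps[group] = round(observed - expected, 3)
    return gaps

# Hypothetical figures: the digital footprint over-represents urban men.
dataset = {"urban_men": 600, "urban_women": 280, "rural_men": 100, "rural_women": 50}
community = {"urban_men": 0.30, "urban_women": 0.30, "rural_men": 0.20, "rural_women": 0.20}

print(representation_gaps(dataset, community))
```

Here the urban men are flagged as over-represented and the rural groups as under-represented, giving an auditor a concrete starting point for correcting the data before a model is trained or tested on it.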

A typical, unrepresentative dataset, upon which AI models are often based.

In contrast, a true picture of the community might reveal many more 'invisible' members whose needs, resources, and desires are quite different, but who are not represented in the digital footprint.

The invisible real picture.

An algorithm that bases its logic on an incomplete or inaccurate picture of a community will be hard pressed to assure it has not inherited biases from the data it used. Similarly, it will be difficult for a potential user of a new AI algorithm to evaluate whether it exhibits bias, if the data used for the test is itself incomplete and fails to accurately reflect the diversity of a community.

The Complete Picture Project: Building Complete Views of Communities

The Complete Picture Project (CPP) addresses the challenge of hidden bias in incomplete data sets by constructing data resources that offer a complete view of the true diversity within a community. These data sets are assembled from multiple sources and may include a wide range of source content. The goal is to provide those working to evaluate AI bias with a known starting point — where representation within the community has been carefully considered within the data.

These data sets are well positioned to support the various actors in the AI ecosystem (AI designers, AI developers, data scientists, policy makers, user researchers, as well as users of AI systems) who are seeking to test for AI bias. These evaluations are particularly important when engaging with the communities most impacted by the SDGs, which may have unique traits that differ from those captured in more conventional data sources. They are also more likely to suffer data gaps and distortions due to uneven access to digital technologies.

These independent, broadly diverse, representative test datasets offer developers and other AI testers a data resource whose form and content are known. The datasets can then be applied to AI models and the results inspected for biases hidden in the model itself. This ability to test for bias across the whole community would support efforts to detect gender and other group biases at any stage of the AI development lifecycle, from early design and development to long after pre-trained algorithms are in use.
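
As a simple illustration of that inspection step, a tester can run a model over a representative test set and compare positive-prediction rates across groups. The gap between the highest and lowest rate is the 'demographic parity' difference, one of many possible fairness metrics; the predictions and group labels below are made up.

```python
def demographic_parity(predictions, groups):
    """Positive-prediction rate per group, plus the gap between the
    best- and worst-treated groups (0.0 means parity on this metric)."""
    by_group = {}
    for pred, group in zip(predictions, groups):
        by_group.setdefault(group, []).append(pred)
    rates = {g: sum(p) / len(p) for g, p in by_group.items()}
    gap = max(rates.values()) - min(rates.values())
    return gap, rates

# Made-up model outputs (1 = approved) over a balanced test set.
preds = [1, 1, 0, 1, 0, 0, 1, 0]
grps = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap, rates = demographic_parity(preds, grps)
print(gap, rates)  # group "a" is approved 75% of the time, group "b" only 25%
```

A large gap on a dataset known to be representative points to bias inherited by the model itself, rather than an artefact of skewed test data.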

Scaling the Impact of CPP Data Sets

CPP data sets can provide a valuable resource in support of responsible AI development and use. Intentionally constructed data sets that broadly reflect the true diversity of communities can help advance gender equality and women’s empowerment (SDG 5).

While these datasets are initially designed to address data scenarios relevant to women, children, and the communities most impacted by the SDGs, the CPP methodology we establish could easily be extended and scaled to other applications where algorithmic bias is a risk.

The definition of bias is ever-evolving. As AI developers and their sponsors build a better understanding of the real world and the biases in it across various dimensions (such as geography, culture, non-binary gender, language, migratory status, ethnicity and race), it will be important to expand the availability of intentionally representative data sets. Building a collaborator network is key to the strategy for broad development and use of CPP data sets. Collaborators are needed to better understand communities, to provide and shape data sets, and to apply data to AI algorithms. There are strong network effects within this ecosystem, where AI sponsors, governments, developers and data owners combine to drive and build off each other's contributions.

The intent of the CPP team is to capture and distill practices and methodologies so that they can be broadly shared and adopted by others. The availability of individual data sets will vary according to each specific use case, but open data resources would be created when possible.

Overview of the planned CPP approach.

Next steps

The CPP team are keen to connect with organisations who are interested in collaborating on the project. Please feel free to get in touch or contact us on LinkedIn to find out more.

About the Authors and Outsight International

Devangana Khokhar
Devangana Khokhar is an experienced data scientist and strategist with years of experience in building intelligent systems for clients across domains and geographies and has a research background in theoretical computer science, information retrieval, and social network analysis. Her interests include data-driven intelligence, data in the humanitarian sector, and data ethics and responsibilities. In the past, Devangana led the India chapter of DataKind. Devangana frequently consults for nonprofit organisations and social enterprises on the value of data literacy and holds workshops and boot camps on the same. She’s the author of the book titled Gephi Cookbook, a beginner's guide on network sciences. Devangana currently works as Lead Data Scientist with ThoughtWorks.

Dan McClure
Dan McClure specialises in complex systems innovation challenges, and acts as a senior innovation strategist for commercial, non-profit, and governmental organisations. He has authored a number of papers on systems innovation methodologies and is actively engaged with aid sector programs addressing cutting edge issues such as scaling, localisation, and dynamic collaboration building. His work builds on decades of experience as a systems innovation strategist working with global firms in fields spanning technology, finance, retail, media, communications, education, energy, and health.

Lucie Gueuning
Lucie manages the MSF REACH project — researching AI and machine learning in the humanitarian context. More widely, she focuses on digital implementation for the development and humanitarian sector. She believes that digital solutions can be harnessed in order to increase the efficiency of the humanitarian sector and the service provisions for the most vulnerable.

Denise Soesilo
Denise is one of the Co-founders of Outsight and has worked with many humanitarian and UN agencies — advising on the application and implementation of technologies in humanitarian operations.

Outsight International
Outsight International provides services to the humanitarian and development sector in an efficient and agile way. Outsight International builds on the range of expertise offered by a network of Associates in order to deliver quality results adapted to the specific tasks at hand. If you’d like to discuss working with the Outsight team, please
get in touch or follow us on LinkedIn for regular updates.

How can AI be used in the humanitarian sector? Lessons from the frontline

The MSF REACH platform on a phone in Jakarta 2018

Artificial Intelligence (AI) has become a buzzword within the humanitarian sector in recent years. Much like ‘blockchain’ or ‘drones’, it’s an area where new technology is developing quickly and operators are keen to test its possible applications. From an economic perspective, it’s big business too: from a total of $1.3B raised in 2010 to over $40.4B in 2018, funding has increased at an average annual growth rate of over 48%.

Understanding how exactly AI can positively impact humanitarian field work remains a work in progress. A lack of actionable knowledge about impact, potential, and the infrastructure needed for a long-term strategy is slowing adoption of the technology. Yet AI-based interventions could: automate time-consuming tasks; aid in data collection and management; enhance user capacities and capabilities; and free emergency specialists to focus on complex analysis and decision-making. But where and how should these applications be utilised to achieve this potential?

My experience working with AI

For the past three years I have managed the REaction Assessment Collaboration Hub (REACH): an emergency support program that enables Médecins Sans Frontières (MSF) to act faster in emergencies. REACH combines institutional data with crowd-sourced information (including social media, early-alert websites, and relevant RSS feeds) in real time to provide the organisation with virtual eyes on the ground.
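
To make that idea concrete, here is a minimal Python sketch (not REACH’s actual implementation; the `Alert` fields and source labels are hypothetical) of merging institutional and crowd-sourced feeds into one deduplicated, time-ordered stream:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Alert:
    """One incoming report, whatever its origin."""
    source: str      # e.g. "institutional", "social_media", "rss"
    location: str
    event_type: str
    timestamp: datetime

def merge_feeds(*feeds):
    """Combine several feeds into one deduplicated, newest-first stream."""
    seen = set()
    merged = []
    # Sort newest-first so the freshest report of a duplicated event wins.
    for alert in sorted((a for feed in feeds for a in feed),
                        key=lambda a: a.timestamp, reverse=True):
        key = (alert.location, alert.event_type)
        if key not in seen:  # same event reported by two sources: keep one
            seen.add(key)
            merged.append(alert)
    return merged

# Example: the same flood reported by an RSS feed and on social media
now = datetime.now(timezone.utc)
rss = [Alert("rss", "Jakarta", "flood", now)]
social = [Alert("social_media", "Jakarta", "flood", now)]
print(len(merge_feeds(rss, social)))  # one merged alert, not two
```

In practice, deduplication would also need fuzzy matching on locations and time windows; exact keys are used here only to keep the sketch short.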

Humanitarian organisations have disaster teams who specifically focus on monitoring emergencies — ensuring that collected data is timely, reliable, and shared with relevant stakeholders. The ability to deliver critical information is currently highly person-dependent, often taking significant time for the relevant information to reach decision-makers during disasters.

REACH’s platform addresses these challenges by providing a quick and more accurate insight into the evolving situation on the ground, which in turn allows for rapidly rolled-out interventions, adapted to the specific needs of an affected area.

The MSF REACH platform

In the initial phases of the REACH project, we wanted to integrate AI components into the system. However, after extensive research, scoping, interviews, and testing, we made a strategic decision to leave those components out of the platform. The following explains that decision: first through three main misconceptions we identified, then through the areas where AI can genuinely add value.

Three common misconceptions about AI

  • Misconception 1: AI is the same as other types of automation
    There is a general scepticism within the humanitarian sector about ‘automation’: the sector has traditionally relied on human relationships and diplomacy in volatile contexts, and handing such delicate, high-stakes interactions to machines is understandably seen as too risky. However, extrapolating this to all possible uses of AI in the sector is naive. There are clear situations in which AI can help inform stakeholders, but we need a new understanding of how to design and interact with it.

    More specifically, what is needed is a hybrid solution that combines expert judgement with the machine, ensuring that any solution is appropriate for the context and addresses users’ specific needs. In each and every context, we need to define a concrete goal for the technology to solve. An algorithm should produce reliable data that supports the people running operations, not replace them. With this in mind, solutions should not remain concepts but become real tools that free end-users to focus on tasks requiring human intelligence (analysis, judgement calls, etc.).

  • Misconception 2: AI will replace human labour
    AI interventions are intended to minimise human effort on tasks that can be streamlined, allowing for human skills and interactions to be more meaningfully focused. For example, when we look at the applications of AI in healthcare to date, such as clinical decision support, this is intended to reduce the clinician’s administrative burden and allow for increased face time with patients.

    There is an increasing understanding in many sectors that humans will not be replaced by AI but rather augmented by it. That said, those who choose to explore and leverage AI within this frame may well replace those who refuse to consider it. To work effectively, AI requires proficient data managers and data scientists to feed and maintain the algorithm, alongside various other roles that validate AI insights and translate them into tangible practice.

    To this end, AI works best when:

     1. Common sense is not a requirement and the answers are unambiguous. AI can outperform humans on some complex tasks, yet performs poorly on others that humans take for granted (e.g. it cannot answer a question such as ‘How can you tell if my carton of milk is full?’). AI works best in ‘black or white’ binary scenarios, such as ‘Is my carton of milk full?’

     2. Detailed explanations of results are not needed. It can be extremely hard to offer a satisfactory answer to the question ‘Why did the machine give this answer?’ When operating in unstable contexts or with vulnerable populations, this lack of explainability can have serious implications for accountability.

  • Misconception 3: AI can solve any problem
    The success of AI depends on the quality of the dataset. Before an algorithm can operate on a dataset, the data needs to be processed and cleaned so that the results produced by the algorithm are not skewed or imprecise.

    Cleaning data is laborious. Given the value of clean and structured data, an important design choice for a socio-technical system is how much resource to invest up-front to ensure that incoming data is structured and stored appropriately. To build a high-quality database, the platform should incentivise users to input data abundantly and in the correct formats, and data managers should monitor the process and clean the database. With a pre-existing high-quality database, solutions can be adapted to harness the power of AI, but this option involves costs and design choices at the very beginning of the project/program.
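
To illustrate the ‘structure the inflow up-front’ design choice, here is a small hypothetical sketch (the schema and field names are invented for this example) that validates and normalises each incoming report before it reaches the database:

```python
from datetime import datetime
from typing import Optional

# Hypothetical intake schema: every stored record must carry these fields.
REQUIRED = ("location", "event_type", "reported_at")

def clean_record(raw: dict) -> Optional[dict]:
    """Normalise one incoming report; return None if it cannot be salvaged.

    Enforcing structure at intake is far cheaper than cleaning a
    free-text database after the fact.
    """
    if not all(raw.get(key) for key in REQUIRED):
        return None  # incomplete record: reject rather than guess
    try:
        reported_at = datetime.fromisoformat(raw["reported_at"])
    except (TypeError, ValueError):
        return None  # unparseable timestamp: reject
    return {
        "location": raw["location"].strip().title(),     # "  jakarta " -> "Jakarta"
        "event_type": raw["event_type"].strip().lower(),
        "reported_at": reported_at.isoformat(),
    }
```

Records that fail validation are rejected outright here; a real intake pipeline might instead queue them for a data manager to repair.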

How AI can add value to the humanitarian sector

In the humanitarian sector, there are already some specific areas where AI may be harnessed to add value. These are:

  • Predictive analytics - Predictive models of humanitarian crises (such as migration patterns during conflicts, famines, epidemics, or natural disasters) allow for early preparation. These predictive analyses can also be leveraged to improve workflows and optimise supply chains. The Forced Migration Forecast, developed by a team of scientists at Brunel University London, is an example of this.

The Forced Migration Podcast

  • Image recognition - Used to identify disaster zones from satellite or drone imagery, as the Humanitarian OpenStreetMap Team is currently doing.

  • Natural language processing - Semantic models allow complex searches for navigating information, delivered through chatbot-style interactions, speech recognition, transcription, and translation for various communication tasks. These tools can be rolled out to help people adapt to new contexts (e.g. after forced migration) and better understand how to navigate their new surroundings and services.

  • Adaptive web design - Sites that offer personalised interactions based on users’ behaviour, allowing, for instance, the most relevant information to be prioritised for each user.
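
To show the shape of such a text-processing pipeline, here is a toy keyword-based relevance filter (the term list and threshold are illustrative; a real system would use a trained semantic model rather than keyword matching):

```python
# Toy relevance filter. A real deployment would use a trained language
# model, but the pipeline shape is the same: score each text, forward
# whatever clears a threshold to a human monitor.
EMERGENCY_TERMS = {"flood", "earthquake", "cholera", "evacuation", "outbreak"}

def relevance_score(post: str) -> float:
    """Fraction of the known emergency terms that appear in the post."""
    words = set(post.lower().split())
    return len(words & EMERGENCY_TERMS) / len(EMERGENCY_TERMS)

def triage(posts, threshold=0.2):
    """Forward only plausibly relevant posts to the monitoring team."""
    return [p for p in posts if relevance_score(p) >= threshold]

posts = [
    "flood reported near the river, evacuation underway",
    "lovely weather in the city today",
]
print(triage(posts))  # only the first post survives
```

The design point is the human-in-the-loop threshold: the machine filters volume, while people make the final call on anything that clears it.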

Smoothing the implementation of AI in humanitarian contexts

Humanitarian organisations need to invest in educating their personnel on relevant progress in other sectors. In any organisation, one of the major limiting factors in adopting AI is finding the expertise to determine whether AI is actually the right answer to a specific challenge. Education and knowledge transfer should happen frequently, bringing basic expertise to the workforce and enabling all staff members to understand, for instance, how to input data and set up data structures. With this in place, it is possible to get the best return on investment from applying the technology to a given context or problem.

It is also important for staff to engage with what has been tested, successfully or not, in order to learn from others. Lessons learned should be shared, and new reports and publications digested, to stay up to date: see, for example, the essay published by UNHCR and the publication written by IFC, a member of the World Bank Group.

Given how resource-intensive creating AI solutions is (from data sourcing and cleaning to validating the output), organisational buy-in with proper consideration of the risks and benefits is currently rare in the humanitarian sector. We must acknowledge that AI is still at a relatively nascent stage, with a plethora of potential applications still being tested and validated, mostly in high-income or private-sector contexts. It is, however, expanding into the humanitarian sector and low-income contexts, albeit slowly.

One key final consideration for digital humanitarian projects and actors today is to focus on building large datasets that are clean and structured, so that AI models can be trained on them in the future. Mobile phones and other data-collection devices are already key components of humanitarian response and international development programs, offering a ready-to-use goldmine of insights if structured correctly. Adding algorithms and automation on top of well-structured data allows for the fast identification of patterns that can inform decisions and real-time analysis, for greater impact in field operations.
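
As a small illustration of the pattern-finding that clean, structured data enables, the sketch below (with invented numbers) flags regions whose latest daily report count spikes above their historical baseline:

```python
from statistics import mean, stdev

def flag_spikes(daily_counts, k=2.0):
    """Flag regions whose latest daily report count sits more than `k`
    standard deviations above that region's historical mean: a crude
    early-warning signal that only works on clean, structured counts."""
    flagged = []
    for region, counts in daily_counts.items():
        history, latest = counts[:-1], counts[-1]
        if len(history) < 2:
            continue  # not enough history to judge
        baseline = mean(history)
        spread = max(stdev(history), 1.0)  # floor the spread to damp noise
        if latest > baseline + k * spread:
            flagged.append(region)
    return flagged

reports = {
    "RegionA": [3, 4, 3, 4, 20],  # sudden surge on the latest day
    "RegionB": [3, 4, 3, 4, 5],   # normal variation
}
print(flag_spikes(reports))  # ['RegionA']
```

None of this works on free-text notes: the anomaly check is only possible because the counts were collected in a consistent, per-region structure in the first place.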


ABOUT LUCIE AND OUTSIGHT INTERNATIONAL

Lucie studied at the Université Catholique de Louvain in Belgium. For the past three years, she has managed the MSF REACH project — researching AI and machine learning in the humanitarian context. More widely, she focuses on digital implementation for the development and humanitarian sector. She believes that digital solutions can be harnessed in order to increase the efficiency of the humanitarian sector and the service provisions for the most vulnerable.

Outsight International provides services to the humanitarian and development sector in an efficient and agile way. Outsight International builds on the range of expertise offered by a network of Associates in order to deliver quality results adapted to the specific tasks at hand. If you’d like to discuss working with Lucie and the Outsight team, please get in touch or follow us on LinkedIn for regular updates.