
ChatGPT: the risks and opportunities for organisations in the humanitarian and development sector

ChatGPT, a large language model (LLM) developed by OpenAI, has emerged as a viable tool for improving the speed and reducing the cost of information-related work. It can help automate services, products, and processes by performing human-level information querying and synthesis almost instantaneously. The technology is being adopted rapidly by organisations across many sectors, including humanitarian and development organisations. However, adoption of ChatGPT is not without risks. It is therefore important to weigh the opportunities and risks of using ChatGPT in the operations of large organisations in the humanitarian and development sector.

Some opportunities

  1. Crisis management and communication: ChatGPT can be used to monitor and respond to global emergencies in real-time, providing a centralised communication channel for updates, guidelines, and strategies, while also addressing queries from both internal and external stakeholders.

    But how?
    AI assistants like Jarvis inside WhatsApp and Telegram make it possible to consult quickly on the go. It is also possible to generate content, such as videos, faster than ever, using only text inputs.

  2. Training and capacity building: ChatGPT can serve as an interactive learning platform, providing tailored training materials, simulations, and assessments to help build capacity within the organisation and improve the knowledge and skills of professionals.

    But how?
    Education platforms like Duolingo and Khan Academy now have AI-powered tutors for learners, and assistants for teachers.

  3. Internal knowledge management: ChatGPT can be used to query an organisation's knowledge resources, allowing staff to quickly access information and expertise, and facilitating knowledge sharing across departments and regions. 

    But how?
    Organisations connect GPT-4 to their knowledge base to build their own AI-powered workplace search (with tools like AskNotion, Glean, or UseFini).

    Your knowledge base can become an asset, as Bloomberg is doing by training its own GPT on its financial data.

    Documentation can be created automatically (e.g. for a codebase), and data entry and data cleaning can be increasingly automated with workflow tools like Bardeen.
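
Under the hood, such workplace-search tools typically rely on retrieval rather than retraining: documents are embedded as vectors, the closest matches to a staff query are found, and those passages are passed to the LLM as context. A minimal sketch of the retrieval step, using a toy bag-of-words embedding (the documents and function names are illustrative assumptions, not any vendor's API):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count vector.
    # A real system would call an embedding model here instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank knowledge-base passages by similarity to the query.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

kb = [
    "Travel policy: staff must book flights through the approved agency.",
    "Leave policy: annual leave requests need manager approval.",
    "Security policy: report lost devices to IT within 24 hours.",
]
context = retrieve("how do I request annual leave", kb)
# The best-matching passage would then be sent to the LLM as context.
```

In production the toy `embed` would be replaced by a proper embedding model, and the retrieved passages would be prepended to the user's question in the prompt.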

  4. Emergency surveillance: ChatGPT can monitor global crises as an early warning system, combining information from various sources (social media, news, and health reports) to provide a comprehensive and real-time picture of global threats, enabling swift response and containment.

    But how?
    It’s becoming easier and more effective than ever to scan large data streams, using models like HuggingGPT to create your own pipeline for data analytics. 
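
As a toy illustration of the filtering stage such a pipeline might begin with, incoming items can be scored against crisis-related keywords before candidates are handed to an LLM for verification and summarisation. The keywords, weights and threshold below are invented for illustration:

```python
CRISIS_TERMS = {"outbreak": 3, "flood": 3, "earthquake": 3, "displacement": 2, "shortage": 1}

def score(item: str) -> int:
    # Sum the weights of crisis-related keywords present in the item.
    words = item.lower().split()
    return sum(weight for term, weight in CRISIS_TERMS.items() if term in words)

def triage(stream: list[str], threshold: int = 3) -> list[str]:
    # Keep only items that cross the alert threshold; these would be
    # passed on to an LLM for summarisation and verification.
    return [item for item in stream if score(item) >= threshold]

feed = [
    "local football results announced",
    "cholera outbreak reported after flood in the region",
    "fuel shortage at the northern depot",
]
alerts = triage(feed)
```

A real pipeline would draw the stream from social media, news and health-report APIs, but the triage-then-analyse shape stays the same.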

  5. Public health campaigns: ChatGPT can assist in designing and implementing highly personalised effective public health campaigns by analysing target audience demographics, behaviour, and preferences, and creating customised content to maximise engagement and impact. 

    But how?
    The founder of the world’s leading marketing platform, HubSpot, has pivoted to focus on ChatSpot, where AI helps create personalised content and marketing campaigns at scale.

  6. Policy research: ChatGPT can aid in policy research by synthesising relevant papers, data, and best practices, allowing organisations to make informed decisions and create evidence-based health policies and guidelines.

    But how?
    ChatGPT excels at summarising academic articles, blogs and podcasts.
    Further still, it can elevate the capabilities of staff: there is no longer a barrier to producing data visualisations or SQL queries, since plain-text commands suffice.
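
In practice, letting staff query data in plain language usually means wrapping their question and the table schema into a prompt for the model, then reviewing the SQL it returns before execution. A sketch of that prompt assembly (the schema, question and helper name are made up for illustration; the model call itself is omitted):

```python
def build_sql_prompt(schema: str, question: str) -> str:
    # Assemble a natural-language-to-SQL prompt. The result would be
    # sent to a model such as GPT-4, and the returned SQL reviewed
    # by a human before it is run against the database.
    return (
        "You are a helpful analyst. Given this table schema:\n"
        f"{schema}\n"
        f"Write a single SQL query that answers: {question}\n"
        "Return only the SQL."
    )

schema = "projects(id, country, budget_usd, start_date)"
prompt = build_sql_prompt(schema, "total budget per country in 2022")
```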

  7. Grant management: ChatGPT can streamline the grant application, review, and reporting processes, ensuring that funding is allocated efficiently and transparently, and reducing the administrative burden on staff. It could also be used to raise the quality and level the playing field for applicants. Inevitably, it will be used by applicants and funded projects for reporting. 

    But how?
    AI-powered writing tools are speeding up the process, increasing formal writing quality and helping generate ideas (Lex, WriteSonic).

  8. Stakeholder engagement: ChatGPT can facilitate engagement with key stakeholders, such as governments, NGOs, and the private sector, by providing timely and accurate information, and enabling efficient collaboration and coordination.

    But how?
    Effort is saved on internal communications by automatic meeting summaries, which could extend further to any type of update.

    AI assistants are emerging as a way to make advice more engaging and accessible for external service users, such as AI-powered agriculture advice for farmers.

  9. Multilingual communication: ChatGPT can be used to automatically translate communications, guidelines, and documents into multiple languages at minimal cost and in near real-time, ensuring that vital information is accessible to a global audience and enhancing the organisation's ability to collaborate effectively across diverse regions.

    But how?
    OpenAI's Whisper has dramatically increased the quality of speech-to-text transcription.
    Tech giants Meta, Amazon and Mozilla have all made translation advances, including for under-represented languages, which are becoming productised.

  10. Virtual Assistants: ChatGPT can reduce staff workload, by summarising meeting notes, triaging emails, and creating intelligent alerts that connect new information with the current priorities of staff. 

    But how?
    Enabling tools like LangChain make it possible to string multiple actions together, and AutoGPT makes it possible for ChatGPT to prompt itself to devise and execute a plan of action. These advances are producing the first such assistants, like Milo, an assistant for busy parents.

What are the risks?

With any technology, there are risks that need to be assessed, and with ChatGPT those risks may be greater than most. The Outsight team can help you navigate them in a way that improves efficiency without taking unacceptable risks. Some of the common risks associated with the use of ChatGPT are as follows:

  1. Organisational Misinformation: The potential for ChatGPT to provide inaccurate information is a risk that organisations need to consider. It is important to ensure that the data fed into ChatGPT is accurate and reliable, and that the model is regularly updated and retrained to reflect changes in the data.

  2. Inherent Bias: ChatGPT may inadvertently reinforce existing biases in the data it is trained on. It is important to ensure that the data fed into the model is diverse and representative.

  3. Privacy Risks: ChatGPT has access to sensitive information and needs to be trained with data that respects the privacy of individuals. It is important to establish clear policies and procedures to protect the privacy of individuals.

  4. Legal Risks: The use of ChatGPT needs to comply with the relevant laws and regulations, including data protection laws.

  5. Reputational Risks: If ChatGPT is used to produce content or communicate with stakeholders, there is a risk that the communication may not reflect the values and tone of the organisation, potentially leading to reputational damage.

How Outsight can help

To ensure safe and effective adoption, we must map the optimal use cases at the organisational level, and empower bottom-up understanding and best practices at the individual level. Outsight can support and accompany organisations on their journey to leverage this game-changing technology, and has developed a stepwise program, including:

  1. Mapping use cases relevant to the organisation's operations.

  2. Evaluating the limitations and risks associated with the identified use cases, along with a generalised framework.

  3. Prioritising innovation opportunities based on short, medium, and long-term impact.

  4. Developing education and capacity building content, workshops and trainings to have a repeatable and scalable impact on the safe adoption of ChatGPT from an end-user perspective.

  5. Supporting the implementation of capacity building programs to take advantage of cost and time savings effectively.

The ultimate goal is to support organisations to take up the dramatic time and cost-saving opportunities made possible by this technology, while protecting against the risks and limitations amidst the hype.

About the author and Outsight International

Harry Wilson
Harry is a product person and consultant on applying technology for impact. He has led teams that have built products used by the WHO, World Bank, UNICEF and the Inter-American Development Bank, and has acted as a consultant to companies like Facebook for Good, Microsoft and Intel. His specialist areas are AI & data, blockchain and communities.

ChatGPT
ChatGPT is a sophisticated language model powered by OpenAI's state-of-the-art GPT-3.5 architecture. With a vast knowledge base spanning a wide range of topics, ChatGPT is an expert in answering questions and generating natural language responses that are both informative and engaging.

Outsight International
Outsight International provides services to the humanitarian and development sector in an efficient and agile way. Outsight International builds on the range of expertise offered by a network of Associates in order to deliver quality results adapted to the specific tasks at hand. If you’d like to discuss working with the Outsight team, please get in touch or follow us on LinkedIn for regular updates.

Worth the risk? Humanitarian innovation's risk challenge

Any meaningful change comes with new risks. The merit of the change depends on the balance of benefits and risks it offers. Ideas that deliver essential value that cannot be obtained elsewhere may easily justify the risks incurred. Weighing the risk against the potential benefit offers a way of visualising whether an intervention can be justified.

Humanitarian innovators have become increasingly aware of the risks associated with new creative processes, services and products. These risks are of concern when they are borne by already vulnerable people. In particular, technology change has the unintended potential to create widely distributed ripple effects that are often not immediately visible. Understanding these consequences can be daunting in their scope, as illustrated by the 2018 ICRC report “Doing No Harm in the Digital Era”, which catalogued over 100 pages of digital risks in the humanitarian context. The current humanitarian discourse is to do no harm. But is doing no harm possible when also innovating?

The Dilemma - Risk as a Barrier to Beneficial Change

The range of innovation risks is not limited to digital technologies. Drones, robotics, and even construction projects all inevitably create new risks when they change the status quo. Considering risks is an essential step in any proposed innovation, particularly one that affects people with limited resources or resilience. However, too narrow a focus on risk can bring even valuable change to a standstill.

Whilst it is clearly wrong to needlessly expose people to risks and harm, it is also unreasonable to deny communities potentially beneficial innovations that could substantially improve overall wellbeing.

The risks and benefits of an innovation should be assessed and measured on the same scale, with the same common indicators, as status quo programming, helping innovators to compare, contrast and make an informed decision on whether an idea carries acceptable risk. This is especially important because there can be a tendency to veto innovation proposals over small risks due to perception biases. For example, risks are perceived as irrationally high when:

  1. The risk taken is involuntary.

  2. Prevalence and reach of the innovation increase to affect more people.

  3. An innovation is particularly novel.

Overall, this inherently tips the scale in favour of the status quo, even though more good may be achieved through innovation at equal or lesser risk than the status quo.

And what type of risk? Usually, we do not go further in depth during risk assessments. Any sort of ‘harm’ closes the door and the idea is put ‘on hold’ indefinitely. ‘Risk’ as a general term is vague and abstract: harm needs to be considered in relative terms, asking whether it is life-threatening, financial or legal, or whether it compromises the future plans of a specific person. This needs to enter the calculation before a new idea is paused.

Within the humanitarian and development space there is also an added imperative to include financial risk in this calculation: money spent on an innovation that fails could have been spent on proven methods such as vaccinations or supplies instead. This seems a legitimate point, but it is not the whole picture. As a new report from Elrha will detail, there are financial resources available to humanitarians outside of an organisation’s operational budget, e.g. through organisations like Grand Challenges Canada, foundations and impact investment grants. Through these, the level of financial risk can be mitigated.

Finally, on how risk is assessed, we reach the problem of individual prestige. Identifying such risks in projects is a profession, and ensuring that there are people to raise risks where they have been missed is undoubtedly important. However, such assessments often lean towards detail rather than the bigger picture and, as such, can lead to excessive scrutiny and stop a project in its tracks.

Comparing risk and benefit on the same scale

The relative weighing of benefit, or utility from a philosophical standpoint, harks back to the political philosophers of the past. John Stuart Mill, an ardent supporter of individual liberty, famously described the correct weighing of utility as:

"that actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness."

Mill is one of the founders of modern liberalism, widely regarded as underpinning many of the foundational principles of current world governance. Why, then, would we choose not to apply this principle to humanitarian innovation when it is good enough for the operation of modern democracy?

Ultimately, the benefit and risk of two whole systems need to be compared. For example, mortality rates for women undergoing childbirth in remote areas can be dramatically reduced through drone deliveries of blood supplies. The first system, unassisted childbirth, is the status quo, which carries substantial unmitigated risks of death. The second system leverages the delivery of blood supplies by drone; it offers strong medical benefits that are amplified by the lack of other effective alternatives. Yet drones also come with concerns about safe operation in a shared airspace. These whole systems need to be compared and contrasted with each other.

If, as often happens, the questions around privacy or the risk of crashing a drone are seen in isolation, it is easy to understand why permissions are difficult to obtain. Yet if you consider the possible gains of the overall system in terms of lives or disability-adjusted life years (DALYs), the situation can look significantly different.
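
That comparison can be made concrete with a small calculation: the DALYs averted by the new system minus the DALYs attributable to its new risks. All figures below are hypothetical, chosen purely to illustrate the arithmetic:

```python
def net_dalys_averted(deaths_prevented: float, years_lost_per_death: float,
                      harm_events: float, dalys_per_harm_event: float) -> float:
    # Net benefit of the new system = DALYs averted by the intervention
    # minus DALYs attributable to its novel risks.
    benefit = deaths_prevented * years_lost_per_death
    risk = harm_events * dalys_per_harm_event
    return benefit - risk

# Hypothetical drone blood-delivery programme over one year:
# 40 maternal deaths prevented, ~30 years of life lost per death,
# versus an expected 2 drone incidents costing ~5 DALYs each.
net = net_dalys_averted(40, 30, 2, 5)
```

A positive net value argues for the new system despite its novel risks; the point of the sketch is that both sides of the ledger are expressed on the same scale.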

When deciding whether the risks of a clinical trial are acceptable, an Ethical Review Board will weigh them against the possible improvements in patient outcomes. It seems odd that this luxury is rarely extended to innovation projects, which often deal less directly with patients. Indeed, many innovation projects are deemed unacceptable because of a perceived risk to privacy or data management. Whilst this is a significantly less serious risk than the risk of side-effects in a clinical trial, it is given disproportionately high prominence.

Finally, when considering potential harms, it is important to consider how we each operate within the social norms of our societies. Engaging with beneficiaries’ points of view is commonly accepted as best practice. Yet there lie significant contradictions in the normative nature of humanitarian and development work. One classic example is identity and privacy. Those operating from Europe and North America tend to see the right to privacy as fundamentally essential: take the UK public’s resistance to identity cards, or the French law prohibiting the collection of ethnic data, for example. However, in many other regions, especially where having a recognised official identity can lead to greater access to social service provision, there is less concern for hiding personal details. Whilst this may be based on levels of trust in government, the debate is far from definitive. Given the decolonisation of aid narrative in the humanitarian and development space, these cultural differences seem to rarely be accounted for.

Using Systems to Support Responsible Innovation Tradeoffs

Discussions surrounding risk and harm need to be based on a broader view of the opportunity for change. This does not imply a blank cheque for change: a rigorous review of the benefits and harms, alongside a consideration of alternative systems, should be carried out for any proposed innovative change.

A well-reasoned discussion can only be had with a big picture of both the current situation and an open mind to the proposed new combination of benefit and risk. The work that has already been done to identify potential sources of risk has laid a solid foundation on which to take this next step in analysis.  

It is now time to routinely embrace taking a more holistic view of status quo challenges and the alternative systems that are proposed to replace them. This whole systems view would not only allow a more balanced view of the value of change, it would also offer a broader range of alternatives for mitigating potential risks, or at the very least make them better understood to those involved.


About the authors and Outsight International

Dan McClure, Lucie Gueuning, Denise Soesilo, Monique Duggan, Louis Potter for Outsight International
Outsight International provides services to the humanitarian and development sector in an efficient and agile way. Outsight International builds on the range of expertise offered by a network of Associates in order to deliver quality results adapted to the specific tasks at hand. If you’d like to discuss working with the Outsight team, please get in touch or follow us on LinkedIn for regular updates.

With data, responsibility: The Importance of Data Protection Impact Assessments (DPIAs) in aid

Is your organisational data sufficiently secure?

Aid agencies, public health bodies, and health innovators are harnessing the rapidly accelerating improvements in data capabilities to deliver better health and wellbeing outcomes for service users and beneficiaries. Increasingly, smaller organisations are empowered to gather, process, analyse, and act on larger databases with attractively small investments in time and capital. Ostensibly, the calculus is clear: if gathering large quantities of personal data that informs strengthened decision-making is becoming easier, it would be irresponsible for an organisation not to build databases with the intention of improving outcomes.

Yet this era of year-on-year emergence of new, reality-changing tools has demonstrated an unavoidable truth: technology is never neutral. Technology used in aid contexts is usually developed far from where it is deployed, and can carry with it implicit biases that distort its utility and curb its benefits. Equally, improvements in the technological capabilities of healthcare and aid providers can serve, at least initially, to further widen inequalities between those with access to innovation and those without. In aid, these inequalities often manifest clearly along provider/recipient lines.

Technology is never neutral: it can magnify implicit biases and, if deployed irresponsibly, further entrench inequalities, particularly in aid settings

The ability to gather large sets of personal data is an acute example of this divide. Take, for example, healthcare and aid providers working in low-resource settings. If they choose to harness large-scale personal data gathering and processing tools to build datasets of personal information relating to local beneficiaries, they are at once equipped with technological potential that is likely inaccessible locally and entrusted with highly sensitive material relating to many local individuals. It is incumbent on such actors, at the privileged end of a power disparity, to use their position with the utmost responsibility.

Good data practice is not only an ethical responsibility: international regulation now makes it compulsory

This is where Data Protection becomes paramount. Many humanitarian actors are now subject to the European General Data Protection Regulation (GDPR). The donor community, including EU Humanitarian Aid, now requires partners to demonstrate good data practices, including the implementation of Data Protection Impact Assessments for projects that may process, store or share personal data. This includes names, photographs of people, and even CVs. Data ethics goes beyond the procedural programming of safeguards, and several guidelines and frameworks exist that can help build projects and teams on solid ethical foundations.

With the growing complexity of data-driven services and the risk of social exclusion inherent in opting out of various technologies, individuals are at a disadvantage when asked to provide informed consent for their data to be collected and used. This gulf between uptake and understanding has been met by legal frameworks implemented by governments and intergovernmental organisations (such as the EU’s GDPR), aimed at regulating data policies and enabling individuals to trust that their information is being handled responsibly. We are ready to support you in implementing the most appropriate tools and frameworks for your operations: analysing your systems in order to apply the most relevant adaptations smoothly and efficiently, without disturbing your day-to-day operations. We help your organisation anticipate needs, and actively shape its data ecosystems to meet them.

For more information, see our complete DPIA Service offering here.

If you would like to collaborate with Outsight International, please use our contact form to get in touch.