The Complete Picture Project: Uncovering hidden AI bias

Can the diversity of the crowd be properly represented in AI datasets?

How do developers and users ensure that Artificial Intelligence (AI) algorithms serve all the members of a community equitably and fairly? The Outsight team has a solution…

This is not a small question. According to Forbes, the global AI-driven machine learning market will reach $20.83B in 2024. Low- and middle-income countries have already seen a rapid expansion in applications using this technology. Not surprisingly, the humanitarian and development sectors increasingly use machine learning models to reach beneficiaries faster, understand needs better, and make key decisions about the form and execution of life-saving programs.

AI-driven applications range from chatbots that connect individuals affected by disaster to their required resources, to applications that help diagnose bacterial diseases. These increasingly powerful new tools have the potential to dramatically improve aid delivery and life in communities affected by crisis. However, this value is tempered by the reality that biases can easily find their way into even the most diligently engineered applications.

AI models and applications are often built far from the communities where they will eventually be used, and are based on datasets that fail to reflect the actual diversity of these communities. This disconnect can lead to the inclusion of unintentional biases within an AI model, ultimately driving unfair system choices and recommendations that are difficult to detect.

For example, a recruiting application using AI may be designed to broaden economic opportunity by evaluating all candidates who apply for jobs. This is a laudable goal, but it can be undermined by biases in the algorithm that unfairly weight factors associated with gender, social background, physical ability, or language. The algorithm can systematically exacerbate existing disadvantages faced by certain groups.

Large-scale aid programs that attempt to leverage AI face similar challenges. A cash distribution program serving an area hit by disaster or conflict may use AI to guide cash distribution, check for misuse, and measure performance. If these automated insights favour certain communities, they could end up excluding already marginalised groups and individuals.

The Hiding Places of Bias

Whenever AI is employed in a decision-making system, it is in the interests of technical developers, adopting organisations, and communities to ensure that the algorithms are providing value, while not causing harm due to bias.

Determining whether subtle bias exists within an algorithm is a difficult task — even for experienced data scientists and conscientious AI users. A wide range of factors may contribute to bias within an AI algorithm, some of which stem from the algorithm itself rather than from the data it is trained on. There is a growing set of tools to help search for algorithmic bias within the logic of an AI application.

Evaluating the ‘wiring’ of an AI tool is important, but it is not the only concern. The sources of bias may inadvertently be embedded in the data set itself. Data bias risks can include:

  • Who is included — Bias in choosing who is selected in a data set.

  • How data are connected — Failure to recognise connections amongst different data that are important within a community.

  • Depth of insight — Failure to capture elements that are uniquely important for members of a community group.

As an example, the datasets used to train and test AI models often represent only the digital footprint of a community, not its real diversity. As a result, applications are developed based on the characteristics of well-represented community groups.

A typical, unrepresentative dataset, upon which AI models are often based.

In contrast, a true picture of the community might reveal many more ‘invisible’ members whose needs, resources, and desires are quite different, but who are not represented in the digital footprint.

The invisible real picture.

An algorithm that bases its logic on an incomplete or inaccurate picture of a community will be hard-pressed to guarantee that it has not inherited biases from the data it used. Similarly, it will be difficult for a potential user of a new AI algorithm to evaluate whether it exhibits bias if the data used for the test is itself incomplete and fails to accurately reflect the diversity of a community.
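
As a rough illustration of what such a check might look like in practice, the sketch below compares the group composition of a dataset against a reference picture of the community (for example, census figures). The group labels, proportions, and tolerance threshold are hypothetical and used only for illustration; they are not part of any specific CPP data set.

```python
# Minimal sketch: flag groups that are under-represented in a dataset
# relative to a reference picture of the community (e.g. census figures).
# All group labels, proportions, and the tolerance are hypothetical.

from collections import Counter

def representation_gaps(records, group_field, reference_shares, tolerance=0.05):
    """Compare each group's share of the dataset with its reference share."""
    counts = Counter(r[group_field] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if expected - observed > tolerance:
            gaps[group] = {"expected": expected, "observed": round(observed, 3)}
    return gaps

# Hypothetical example: a 'digital footprint' dataset vs. the real community.
dataset = [{"group": "urban_connected"}] * 800 + [{"group": "rural_offline"}] * 50
community_shares = {"urban_connected": 0.55, "rural_offline": 0.45}

print(representation_gaps(dataset, "group", community_shares))
# -> {'rural_offline': {'expected': 0.45, 'observed': 0.059}}
```

A fuller audit would also look at intersections of attributes (for example, gender and location together), reflecting the ‘how data are connected’ risk described above, but even a simple per-group comparison like this can surface the ‘invisible’ members of a community.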

The Complete Picture Project: Building Complete Views of Communities

The Complete Picture Project (CPP) addresses the challenge of hidden bias in incomplete data sets by constructing data resources that offer a complete view of the true diversity within a community. These data sets are assembled from multiple sources and may include a wide range of source content. The goal is to provide those working to evaluate AI bias with a known starting point — where representation within the community has been carefully considered within the data.

These data sets are well positioned to support various actors in the AI ecosystem (AI designers, AI developers, data scientists, policy makers, user researchers, as well as users of AI systems) who are seeking to test for AI bias. These evaluations are particularly important when engaging with communities most impacted by the SDGs. Such communities may have unique traits that differ from those captured in more conventional data sources, and they are more likely to be affected by data gaps and distortions arising from uneven access to digital technologies.

These independent, broadly diverse, representative test datasets offer developers and other AI testers a data resource whose form and content are known. These datasets can then be applied to AI models and the results inspected for biases that are hidden in the model itself. This ability to test for bias across the whole community would support efforts to detect gender and other group biases at any stage of the AI development lifecycle, from early design and development to long after pre-trained algorithms are already in use.
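
As a hedged sketch of how such a test might work, the example below runs a trained model over a representative test set and compares outcomes (here, the rate of favourable decisions) across community groups. The predict function, field names, and disparity threshold are assumptions made for illustration, not part of the CPP methodology itself.

```python
# Minimal sketch: inspect a model's decisions across community groups using a
# representative (CPP-style) test dataset. The predict function, field names,
# and disparity threshold are hypothetical placeholders.

from collections import defaultdict

def group_selection_rates(test_records, predict, group_field):
    """Rate of favourable (positive) decisions the model makes for each group."""
    positives, totals = defaultdict(int), defaultdict(int)
    for record in test_records:
        group = record[group_field]
        totals[group] += 1
        positives[group] += int(predict(record) == 1)
    return {g: positives[g] / totals[g] for g in totals}

def flag_disparities(rates, max_gap=0.2):
    """Flag groups whose favourable-decision rate falls well below the best-served group."""
    best = max(rates.values())
    return {g: r for g, r in rates.items() if best - r > max_gap}

# Hypothetical usage with a CPP-style test set and a pre-trained model:
# rates = group_selection_rates(cpp_test_set, model.predict_one, "group")
# print(flag_disparities(rates))
```

Comparing per-group outcome rates in this way is only one possible starting point; the value of a deliberately representative test set is that such comparisons cover every group in the community, not just those visible in the digital footprint.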

Scaling the Impact of CPP Data Sets

CPP data sets can provide a valuable resource in support of responsible AI development and use. Intentionally constructed data sets that broadly reflect the true diversity of communities can help advance gender equality and women’s empowerment (SDG 5).

While these datasets are initially being designed to address data scenarios relevant to women, children, and communities most impacted by the SDGs, the CPP methodology we establish could easily be extended and scaled to other applications where algorithmic bias is a risk.

The definition of bias is ever-evolving. As AI developers and their sponsors build a better understanding of the real world and the biases within it across various dimensions (such as geography, culture, non-binary gender, language, migratory status, ethnicity, and race), it will be important to expand the availability of intentionally representative data sets. Building a collaborator network is key to the strategy for broad development and use of CPP data sets. Collaborators are needed to better understand communities, to provide and shape data sets, and to apply data to AI algorithms. There are strong network effects within this ecosystem, where AI sponsors, governments, developers, and data owners combine to drive and build on each other's contributions.

The intent of the CPP team is to capture and distill practices and methodologies so that they can be broadly shared and adopted by others. The availability of individual data sets will vary according to each specific use case, but open data resources would be created when possible.

Overview of the planned CPP approach.

Next steps

The CPP team are keen to connect with organisations who are interested in collaborating on the project. Please feel free to get in touch or contact us on LinkedIn to find out more.

About the Authors and Outsight International

Devangana Khokhar
Devangana Khokhar is a data scientist and strategist with years of experience building intelligent systems for clients across domains and geographies, and a research background in theoretical computer science, information retrieval, and social network analysis. Her interests include data-driven intelligence, data in the humanitarian sector, and data ethics and responsibility. In the past, Devangana led the India chapter of DataKind. She frequently consults for nonprofit organisations and social enterprises on the value of data literacy and runs workshops and boot camps on the topic. She is the author of Gephi Cookbook, a beginner's guide to network science. Devangana currently works as Lead Data Scientist at ThoughtWorks.

Dan McClure
Dan McClure specialises in complex systems innovation challenges, and acts as a senior innovation strategist for commercial, non-profit, and governmental organisations. He has authored a number of papers on systems innovation methodologies and is actively engaged with aid sector programs addressing cutting edge issues such as scaling, localisation, and dynamic collaboration building. His work builds on decades of experience as a systems innovation strategist working with global firms in fields spanning technology, finance, retail, media, communications, education, energy, and health.

Lucie Gueuning
Lucie manages the MSF REACH project — researching AI and machine learning in the humanitarian context. More widely, she focuses on digital implementation for the development and humanitarian sector. She believes that digital solutions can be harnessed to increase the efficiency of the humanitarian sector and improve service provision for the most vulnerable.

Denise Soesilo
Denise is one of the Co-founders of Outsight and has worked with many humanitarian and UN agencies — advising on the application and implementation of technologies in humanitarian operations.

Outsight International
Outsight International provides services to the humanitarian and development sector in an efficient and agile way, building on the range of expertise offered by a network of Associates to deliver quality results adapted to the specific tasks at hand. If you’d like to discuss working with the Outsight team, please get in touch or follow us on LinkedIn for regular updates.