By Leila Toplic, Head of Emerging Technologies Initiative at NetHope
Adoption of AI is accelerating, and the potential is significant.[1] Today, AI is being integrated into nearly every industry, from healthcare and finance to education and manufacturing. It’s being used to decide everything from who gets hired, to who is offered credit and how much, to who gets access to healthcare first. This means that AI systems are increasingly critical to women’s participation in all sectors of society, and the ability to access, use, and shape AI is essential for the future of women’s human rights.
While AI has the potential to help us tackle some of our toughest problems and transform how we live and work, it could also further exacerbate inequity and digital divides. We already have an alarming digital gender divide. According to UN Women, women make up more than two-thirds of the world’s 796 million illiterate people. Of the estimated 2.9 billion people who remain unconnected, the majority are women and girls. Women are 25% less likely than men to know how to leverage digital technology for basic uses, and four times less likely to know how to program computers. Women and girls also lack access to, and participation in, science, technology, engineering, and math (STEM) fields. With such a significant gap in education, it’s no surprise that women are underrepresented in technology roles in the workforce: according to the World Economic Forum’s 2021 Global Gender Gap Report, only 32% of those in data and AI roles are women.
The results of this digital gender gap are twofold. First, barriers to accessing and using digital technologies (including AI) shut women and girls out of opportunities in education, the economy, and society. Second, the underrepresentation of women and girls in the technology industry, including in the development of AI systems, reinforces and amplifies existing gender biases and stereotypes, because the resulting AI does not reflect their needs, contexts, experiences, and ideas.
So, it’s no surprise that women and girls are disproportionately affected by AI. There are numerous cases of AI systems discriminating based on gender. Facial analysis software has been shown to have higher error rates for women, especially women with darker skin tones, for whom it failed roughly one in three times. In an infamous case, a large tech company’s experimental hiring tool was found to discriminate against women. Amazon’s Alexa and Apple’s Siri have been found to entrench harmful gender biases. A recent study of how an algorithm delivered ads promoting STEM jobs showed that women were less likely to be shown the ad, because reaching women was more expensive and therefore less ‘cost-effective’ for the ad-delivery algorithm. Word embeddings, one of the most important techniques in natural language processing (NLP) and widely used by commercial companies, reinforce gender stereotypes by associating women with the same old biased roles and attributes, a picture that is not based on facts or centered on equity. It’s important to note that these are mostly Global North examples; we lack comparable evidence from the Global South.
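To make the word-embedding point concrete, here is a minimal sketch of how such associations can be probed, assuming the open-source gensim library and its publicly downloadable GloVe vectors (the specific occupation words are just an illustration):

```python
# Minimal sketch: probe a pretrained word-embedding model for gendered
# associations. Assumes gensim is installed; the GloVe vectors are
# downloaded automatically on first use.
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-50")  # pretrained GloVe word vectors

occupations = ["nurse", "engineer", "receptionist", "programmer", "homemaker"]
for word in occupations:
    # Positive value: the occupation sits closer to "she" in the vector space;
    # negative value: closer to "he".
    lean = model.similarity(word, "she") - model.similarity(word, "he")
    print(f"{word:>14}: {lean:+.3f}")
```

On widely used public embeddings, probes like this typically show occupations such as ‘nurse’ leaning toward ‘she’ and ‘engineer’ toward ‘he’, which is exactly the kind of encoded stereotype described above.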
In summary, without an intentional focus on gender equity, AI may be deployed as a tool of discrimination, oppression, and control.
Where do we go from here?
Gender equity — it’s a term that dominates many of our conversations about AI. But what does gender equitable AI look like in practice and how do we get there?
Last month, I had the opportunity to host a discussion about gender equitable AI at the Global Digital Development Forum 2022. The topic is more important than ever, and I felt that the conversation with Neema Iyer (Pollicy), Shachee Doshi (USAID), and Bo Percival (Humanitarian OpenStreetMap Team) brought clarity and ideas around how to center gender equity in the development and use of AI, and emphasized that the decisions and choices we make now will determine what kind of world we live in.
We started the conversation by talking about what gender equitable AI is, what it’s not, and key challenges to achieving gender equitable AI. In summary:
- Gender equitable AI is built using inclusive data practices (e.g. consent, privacy) that take into account power dynamics evident in society (e.g., who’s at the table and deciding how data will be used). It provides transparency and accountability (e.g., How do I even know that I’ve been treated unfairly by an algorithm?). Gender equitable AI integrates women and girls in the design of the solutions and products from the start, and it gives them knowledge and resources to succeed. It advances women’s rights and works towards social justice and fairness.
- Gender equitable AI is not simply AI built on sex-disaggregated data, or AI built for women and girls without their involvement, or AI developed elsewhere and ‘imposed’ on people in lower-capacity countries.
- Challenges to achieving gender equitable AI include: lack of AI literacy among women and girls; policy and protection lagging behind technological advances; participation of women and girls in project design being treated as a nice-to-have; and a nonprofit and public sector that lacks access to experts who can work on equity and fairness, and that focuses on gender awareness rather than actively working towards gender equity in AI.
We all agreed that we are at an opportune turning point. Marginal, incremental changes won’t address systemic challenges. We not only need to de-risk AI so that it does not harm half of the world’s population; we can also design and use AI in ways that help close the digital gender gap.
How do we do that?
In the session, we discussed several technological and non-technological approaches that put dignity, respect, and empowerment of women and girls at the core of how AI is designed, deployed, and used.
- Build awareness, capacity, and communities of practice. To build awareness and empower anyone to develop the capacity to scan for and mitigate risks, start by providing information about complex topics in an approachable way, i.e., jargon-free and delivered in formats that are accessible to anyone, including guidebooks, toolkits, and research reports, as well as fiction writing, TV, and creative media. To build communities of practice, consider existing communities, networks, and movements, and connect practitioners who are already working in this space. For example: Pollicy, a feminist civic technology collective based in Uganda, is working to incorporate AI into the existing feminist movement and to engage local women’s networks to guide their work. Through its Equitable AI Challenge, USAID aims to connect those working in this space and build a wider community of practice focused on increasing the transparency and accountability of AI systems so that their outputs are not inequitable across genders.
- Prioritize local knowledge. One approach is to envision and conduct all research locally and action it with local communities. According to Pollicy, this builds an understanding of what is actually happening with AI on the African continent from the Afrofeminist perspective. Building on their research work (see Resources below), Pollicy plans to convene African women to develop an Afrofeminist Ethical AI framework to guide future development and use of AI on the continent.
- Address risks holistically. AI systems and data are reflections of those who collected the data and those who designed the technology, and they carry both personal and institutional biases. This means that risks such as bias need to be addressed holistically, using both technological and non-technological approaches. For example, when constructing data sets for training machine learning (ML) models, ensure that the data are diverse and adequately representative of women and girls, document the data collection methodology, ensure that a diverse group of people labels the training data, and audit your AI systems for gender bias (a minimal sketch of such a disaggregated audit follows this list). We also need to look beyond the technology itself and consider bias in the stakeholders, contexts (e.g., political, cultural, economic), and systems that may have prejudice built in and reflect inequities in our society.
- Implement empowered participation. To truly ensure systemic change, we need to set our end goal not as situational inclusion, but as empowered participation across all stages of the research and innovation process. Empowered participation means that women and girls have the opportunity, knowledge, and resources to participate meaningfully, and the agency to effect change. A key component of this is advancing gender representation in STEM education and the workforce. Gender diversity in the AI industry is essential for developing AI tools that are gender sensitive and equitable, and it is also beneficial for radical innovation outcomes.
- Use AI to advance gender equity and address injustice. Achieving gender equity requires more than just de-risking AI. AI can and should be used to address issues related to gender equity and to improve the quality of life for women. Examples of tools being developed to advance gender equity and address injustice include applications that help detect gender bias in job advertisements, identify unequal pay, or support victims of gender-based violence and human trafficking. We need to place a higher priority on such tools so that more of them become available, faster.
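As a concrete illustration of the auditing step mentioned under ‘Address risks holistically’, here is a minimal sketch of a disaggregated error-rate check; the toy data and column names are illustrative placeholders rather than any real dataset, and a production audit would run on the system’s actual evaluation data:

```python
# Minimal sketch: disaggregate a model's error rate by gender before deployment.
# The DataFrame stands in for real evaluation data (true labels vs. predictions).
import pandas as pd

results = pd.DataFrame({
    "gender":     ["female", "female", "female", "male", "male", "male"],
    "label":      [1, 0, 1, 1, 0, 1],
    "prediction": [0, 0, 0, 1, 0, 1],
})

for gender, group in results.groupby("gender"):
    error_rate = (group["label"] != group["prediction"]).mean()
    print(f"{gender}: error rate {error_rate:.0%} over {len(group)} examples")
```

A large gap between the per-group error rates is the signal to go back and examine the training data, the labeling process, and the deployment context before the system is put in front of users.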
The actions proposed here are not exhaustive, but they offer a starting point for building AI that advances equity, inclusion, and the empowerment of women and girls.
Gender equity matters more than ever. We cannot afford to wait another 136 years to achieve equity for half of the world’s population. The time is now to close the global gender gap and ensure responsible, humane, and beneficial deployment of AI for all.
RESOURCES
Global Digital Development Forum 2022 session on gender equitable AI
Pollicy
- Inclusion, Not Just an Add-On
- Afrofeminist Data Futures
- Encoded Biases & Future Imaginaries
- Engendering AI
- Digital Rights Are Women’s Rights!
USAID
- How Can We Address Gender Inequity in Artificial Intelligence? | by USAID | US Agency for International Development | Medium
- Reflecting the Past, Shaping the Future: Making AI Work for International Development
- Managing Machine Learning Projects in International Development: A Practical Guide
- Exploring Fairness in Machine Learning for International Development | MIT D-Lab
- MIT OpenCourseWare: Exploring Fairness in Machine Learning for International Development
- USAID’s Equitable AI Challenge: Funding Available for Solutions that Address Gender Inequity in AI Technology
- Gender Equality and Women’s Empowerment | US Agency for International Development
- US Government Announces Largest-Ever Budget Request, $2.6 Billion, to Advance Gender Equity and Equality Around the World | Press Release | US Agency for International Development
- Fact Sheet: National Strategy on Gender Equity and Equality | The White House
- Locally Led Partnerships | US Agency for International Development
[1] AI could enable the accomplishment of 134 of the 169 targets across all U.N. Sustainable Development Goals. Furthermore, the AI market is projected to reach $190 billion by 2025, and by 2030 AI could contribute $15.7 trillion to global gross domestic product (GDP).