Pioneering Responsible, Gender-Inclusive AI for an Equitable Future
Marine Rabeyrin, EMEA Education Segment Director at Lenovo, has led the company’s diversity network in France for 15 years. After noticing the first impacts of Artificial Intelligence (AI) on diversity and inclusion (D&I), she led the French NGO workgroup that created the Women & AI Pledge, which helps companies take practical steps to mitigate risk and build accountable, gender-fair AI. By spearheading the implementation of the Pledge across Lenovo worldwide, she is helping the company lead the industry through early, impactful action.
Following an April conversation on this crucial topic with Lenovo’s Global Product Diversity Office Manager, Ada Lopez, Marine discusses how we are achieving change.
When did you first realise the potential for AI to undermine gender equity?
There are a few examples that triggered my concern. AI tools have been shown to diagnose breast cancer more accurately, for example, but on the flip side, we know that in the case of heart disease more data has come from men than women, so an AI system may struggle to identify it in women. Translation tools are another area I noticed early on. When a job title is neutral in one language, like ‘Doctor’ in English, it tends to be translated into the masculine in other languages, as do the more ‘valued’ jobs in society, while the less valued ones are translated into the feminine. The real eye-opener for me, though, came when I read about a pilot run by a large tech company five years ago that used AI to select CVs. It showed that the tool discriminated against women for engineering jobs and many leadership roles. Fortunately, the pilot was stopped as soon as the company realised that, but it really illustrates the concrete impact AI may have on gender equity in the workplace.
Why does gender bias in AI matter, specifically in education?
In education, AI can affect students, teachers, and institutions themselves, both positively and negatively. There are lots of new intelligent tutoring systems that can support students on their learning journey, as well as tools to support students with disabilities. ChatGPT has raised concerns about plagiarism, but it can also help teachers detect it and curate materials for their lessons. AI can also support teachers in student assessment, though there is a risk here I would illustrate: a study by a research group in 2018 showed that an AI system used to assess students gave higher marks to boys than girls, and one of the explanations was that the system was biased, having been trained on data that reflected gender stereotypes. For institutions themselves, a useful AI benefit is increasing efficiency and supporting admissions, but again there may also be a risk. A 2017 Stanford study found that an AI system used to recommend courses was less likely to recommend women for maths or science, even after accounting for grades and other performance factors. A key point about AI in education is that this is where we are teaching and equipping the future workers who will be using, or even developing, AI. There is a pressing need to address the risk of gender bias in schools, because these students will, in time, be the ones supporting the AI journey itself.
What is the aim of the Women & AI workgroup, and how is it helping to create a positive AI future?
In France, Lenovo is part of an NGO called Cercle InterElles, made up of 16 companies from the STEM industry that have been working together for 20 years to further gender equity at work. Five years ago, we were discussing AI, especially regarding CV selection, and realised that with the development of AI this work could be put at risk. We felt we needed to do something about that, and who better than an IT company? We set up the Women & AI workgroup with members from several tech companies, drawn from very different fields, from AI experts to people in marketing, logistics, and sales. They all shared the goal of doing something positive, and together we created the Women & AI Pledge for accountable and gender-fair AI. We launched the Pledge three years ago to support companies and show them what’s possible, because the majority of businesses do want to structure themselves around ethical AI and guard against gender bias, but don’t really know where to start.
What does the Women & AI Pledge involve?
The Pledge has seven key principles that any company can use to take action if it wants to produce or use AI in a responsible and gender-fair way. These include building stronger governance, developing ethics by design, and controlling or mastering what is happening in the company. Companies can then work on the technical aspects, such as the data and algorithms, and put processes in place to monitor what the AI is becoming, because it can develop bias over time. Ensuring that the teams building the AI are diverse enough to identify biases others may not initially see is also vital. Finally, it’s important to drive awareness among all types of employees inside the company. For example, HR may purchase an AI tool for career development, but if they don’t know that those tools may be biased, they will not be able to challenge them. The Pledge proposes different steps to drive this journey. The first is to make a commitment by signing a charter to say, yes, we want to do something about this. The second is to make an assessment, using our grid based on those seven principles to gauge the company’s level of maturity in this area. The third is to take action to reinforce strengths or mitigate weaknesses, and we have a toolkit to help companies act quickly here. And the last step is to lead by example, demonstrating that it’s possible to do something with a positive impact.
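To make the monitoring idea above concrete, here is a minimal sketch, assuming a hypothetical CV-screening model, of one way a team might track a simple fairness signal over time: the gap in selection rates between groups. The metric, threshold, and data shown are illustrative assumptions, not part of the Pledge’s actual toolkit.

```python
# Minimal sketch: monitoring a hypothetical CV-screening model for drift in
# selection rates between gender groups. Names, data, and the alert threshold
# are illustrative assumptions only.

from collections import Counter

def selection_rates(predictions, groups):
    """Return the positive-prediction (e.g. 'shortlisted') rate per group."""
    positives, totals = Counter(), Counter()
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        if pred == 1:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical monthly audit of the model's outputs (1 = shortlisted).
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups      = ["F", "F", "M", "M", "F", "M", "F", "M", "M", "F"]

gap = demographic_parity_gap(predictions, groups)
ALERT_THRESHOLD = 0.2  # hypothetical tolerance set by the governance body
if gap > ALERT_THRESHOLD:
    print(f"Review needed: selection-rate gap of {gap:.2f} exceeds threshold")
else:
    print(f"Selection-rate gap of {gap:.2f} is within tolerance")
```

In practice, a check like this would feed into the governance and assessment steps described above, triggering a human review whenever the gap crosses the agreed tolerance.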
How is Lenovo applying this pledge in its own culture?
When I shared the Pledge with Lenovo internally, I was pleased because the stakeholders quickly said, ‘Let’s adopt it worldwide,’ and I feel it was a real door opener for Lenovo to start taking action around responsible and equitable AI. We started by signing the charter in 2021, making a public commitment and creating the Responsible AI committee. Then we did assessments to understand where we were, and from that we realised that D&I is one of our strongest assets. With a strong D&I culture, our people developing AI were already aware of the need to pay attention to bias, but we needed to reinforce the governance around that. Recently we also made huge progress by embedding all AI topics in our Product Diversity Office, which reviews every product we are about to launch to make sure it matches expectations in terms of diversity. Progress requires focus and effort, but it can also be simple, and I think our journey is a good example of how a company can start to address the topic.