Artificial intelligence (AI) is dramatically transforming the workplace. When it comes to gender and other forms of diversity, however, AI may be perpetuating much of the same old bias. A recent study released by UNESCO found that large language models, the machine learning tools that underpin AI platforms, reproduce gender bias as well as racial and sexual stereotyping. Despite multiple documented cases of algorithmic bias negatively impacting women, the perception persists that algorithms are less biased than humans.
Algorithms have become increasingly incorporated into routine workplace processes such as hiring, where they offer employers an efficient tool for evaluating resumes. We have long known that gender bias in recruitment and hiring is an entrenched problem. But algorithms pose problems of their own, because the biases of the humans who build and train them are reflected in their output. The belief that algorithms are objective and impartial decision-makers is troubling, as it can lead people to trust algorithms more than human evaluators. This is particularly critical for people who run the risk of being stereotyped in the hiring process, such as women.
Will AI Decrease Gender Bias in Hiring?
In our own research, we examined why some women might prefer to have AI review their application rather than a human. Our recently published study addressed this question in three online experiments with more than 1,100 participants. In experiment 1, we asked unemployed participants whether they believed an algorithm or a human would give them better chances of finding a job within the following six months. In experiments 2 and 3, we asked participants in hypothetical hiring (experiment 2) and career-development (experiment 3) scenarios whether they would prefer to be evaluated by an algorithm or by a human resources (HR) manager. In each experiment, we randomized the gender of the human rater (male vs. female).
The results were consistent across all three experiments: women were more likely to prefer the algorithm when the alternative was a male rater than when it was a female rater.
For example, in experiment 1, 66% of unemployed women chose the algorithm to evaluate their job chances when the alternative rater was male, while only 39% chose the algorithm when the alternative rater was female. Surprisingly, we did not find this effect for men: the gender of the human rater did not significantly influence men's preference for an algorithm over a human rater. The perception that algorithms are more objective appeared to be a factor in women's decision-making. Our study therefore underscores the importance of algorithmic literacy and the danger that women may be misled into accepting potentially biased algorithmic evaluations.
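For readers who want a concrete sense of how such a gap in choice rates can be tested, here is a minimal sketch of a two-proportion z-test in Python. The 66% and 39% figures are from our study; the per-condition group sizes in the code are hypothetical placeholders, not the study's actual cell counts.

```python
# Illustrative only: comparing the share of women who chose the algorithm
# in the male-rater condition vs. the female-rater condition.
# Group sizes (n = 100 each) are hypothetical, not the study's real counts.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

chose_algorithm = np.array([66, 39])  # successes: 66% of 100, 39% of 100
group_sizes = np.array([100, 100])    # hypothetical per-condition samples

z_stat, p_value = proportions_ztest(chose_algorithm, group_sizes)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")  # small p suggests a real gap
```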
Looking Critically at Algorithms
Structural discrimination has not dissipated with recent technological advances. Job seekers, particularly women, as well as employers need to be aware that algorithms may reinforce discrimination, and it is important to raise awareness that algorithms can exhibit the same biases as humans in decision-making processes. Because people attribute to algorithms the ability to overcome the limits of human subjectivity, they become more open to algorithmic evaluations.
Algorithms can influence important decisions while operating in the background, and proving sex discrimination in workplace hiring decisions is already notoriously difficult. As AI becomes further integrated into hiring, this may exacerbate gender bias in employment: victims of discrimination have no one to point a finger at, since an algorithm cannot be held accountable or brought to justice for bias.
For women, discrimination embedded in algorithms could therefore be more problematic than discrimination by biased humans, especially if women themselves see algorithms as neutral and objective. That perception could reduce their awareness of potentially discriminatory decisions.
Working with AI
When it comes to designing fair and effective employment assessments, it is not a question of choosing between algorithms and human judgment. Both human and AI evaluations have strengths and weaknesses, and they can be used together to minimize bias. Algorithms offer speed, consistency, and the ability to process large amounts of data efficiently. They are not immune to bias, however: they can inherit the biases embedded in the data on which they were trained, as the toy example below illustrates.
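To make that inheritance concrete, here is a toy illustration in Python, using entirely synthetic data and a generic classifier rather than any real hiring tool. Because the historical labels encode a preference for men, the trained model scores two otherwise identical applicants differently by gender.

```python
# Toy example: a model trained on historically biased hiring decisions
# reproduces that bias. All data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)  # 0 = woman, 1 = man (synthetic)
skill = rng.normal(0, 1, n)     # identically distributed across genders

# Historical "hired" labels: skill matters, but past raters also favored men.
hired = (skill + 0.8 * gender + rng.normal(0, 1, n)) > 0.5

model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)

# Two applicants with identical skill who differ only in recorded gender:
woman, man = [[0, 1.0]], [[1, 1.0]]
print("P(hire | woman):", round(model.predict_proba(woman)[0, 1], 2))
print("P(hire | man):  ", round(model.predict_proba(man)[0, 1], 2))
```

The model never "decides" to discriminate; it simply learns the pattern present in its training data, which is exactly why audited, representative data matters.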
Human judgment brings empathy, contextual awareness, and the ability to interpret complex situations. At the same time, humans are susceptible to unconscious biases shaped by cultural norms, personal beliefs, and social hierarchies. By combining algorithms and human judgment, we can try to harness the strengths of both while compensating for their respective weaknesses; one possible hybrid design is sketched below.
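As a hypothetical example of such a combination (a sketch under assumed design choices, not any organization's actual process), an algorithmic score could be used only to widen the pool that reaches human reviewers, never to reject candidates outright:

```python
# Hypothetical hybrid screening: the algorithm triages, humans decide.
from dataclasses import dataclass

@dataclass(frozen=True)
class Candidate:
    name: str
    algo_score: float  # 0..1 from a screening model (assumed to be audited)

def shortlist(candidates, top_k=10, floor=0.3):
    """Advance the top-k candidates by score, plus anyone above a low
    floor, so borderline candidates still reach structured human review."""
    ranked = sorted(candidates, key=lambda c: c.algo_score, reverse=True)
    keep = set(ranked[:top_k]) | {c for c in candidates if c.algo_score >= floor}
    return sorted(keep, key=lambda c: c.algo_score, reverse=True)

# Every shortlisted candidate then goes to a structured human interview,
# and outcomes are logged by demographic group for routine bias audits.
```

The point of this design is that the algorithm's efficiency is used for triage, while the final decision, and the accountability for it, stays with people.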
Raising awareness of algorithmic bias can also help make the case for policy responses that require organizations to be transparent about their use of AI in hiring decisions.
Our results also speak to the need to educate both applicants and evaluators on the potential pitfalls of AI-driven decision-making. Just as education on data privacy (such as using secure passwords and being mindful when sharing sensitive data) has become standard, algorithmic literacy should be a prerequisite both for people interacting with digital technologies and for employers deploying them. Ultimately, an educational program should help develop a more critical orientation toward algorithms, which can promote fair treatment and equal employment opportunities for all people, regardless of gender, race, or background. And because victims of discrimination might be misled into accepting biased algorithmic evaluations, organizations should be held accountable through mandated and enforced fair-hiring practices.