Featured

Beyond the Bias: How AI is Reinforcing Gender Inequality

by Lindsay Nygren, AWC Central Scotland and Education Team Co-Chair

 

Artificial Intelligence, or AI, has been a key buzzword lately—known and linked to almost every industry, from healthcare to education and private businesses. It is altering and improving ways in which we as individuals and organizations go about our daily lives and perform tasks. The explosion of AI integration into almost every facet of our lives has resulted in positive strides in workflow and medical decision-making, for example. Yet despite this revolutionary advancement, unfortunately AI has its flaws. AI is a system that is trained on existing, potentially biased data, and the coding and algorithms are written by human engineers who may pass on their inherent biases, relying on historical patterns and trends which can create a reinforcement of stereotypes.

Among the trends that AI reinforces, gender bias is by far one of the worst for its risk of negatively impact girls and women. In an article published by Ho et al. (2025), the authors explored gender biases with AI and how the use of AI is perpetuating them. Using AI can unfairly impact women simply because of their gender when being screened, which limits their career advancement — from the hiring process to limiting opportunities within the workplace. Companies such as Amazon attempted to use AI to automate their hiring processes, but it was found to reflect sexist sentiments developed from the data used to train it. What this meant was that a large proportion of highly rated and accepted applicants were men rather than women. While Amazon launched an internal audit and eventually abandoned this project, many other companies are continuing to rely on AI for their hiring processes without fully understanding its gender bias. We know that there have been historical imbalances in hiring based on gender, which explains why AI has learned to perpetuate this, especially in male-dominated industries. Historically, men have predominantly been hired in tech fields, which means the AI model would learn to favor similar profiles, thus replicating the gender bias and reinforcement of existing gender disparities. This in turn leads to continuing discriminatory hiring practices that systematically disadvantage women by limiting their access to job opportunities and career advancement.

AI training issues are not limited to hiring and career advancement limitations for women. AI has also been shown to reinforce gender stereotypes such as what is considered “good” or “bad” leadership. Traditionally masculine traits are seen as positive examples of leadership. AI has even begun to interpret ambiguous data with gender bias, incorrectly assuming the gender of the subjects with ambiguously gendered questions. For example, in a question asked to the AI chatbot about a physicist and secretary, the chatbot would label the physicist as a man and the secretary a woman without any input from the user on the actual gender or pronouns of the subjects. This is very worrying, as users who ask the AI chatbot for advice or unbiased responses will more than likely received biased responses based on either their own gender or the gender the chatbot assumes the question is about.

These assumptions have negative consequences such as the ones covered above, but also extend to education and healthcare. AI systems have been found to perpetuate gender disparities in higher education access, especially in STEM-related fields. AI models often assume that male students are more competent than female students, consistently recommending that female students enroll in foundational courses while recommending more advanced ones for male students. Consider the potential impact on girls and young women who may be steered away from STEM or discouraged from pursuing their ambitions simply because of the biases embedded in AI systems; it is a risk we cannot afford to ignore. That is why, in the FAWCO Education Team we are focusing on Beyond the Bias to focus on empowering girls and women to engage and pursue STEM subjects and work to overcome biases in AI.

To combat the growth of gender bias in AI, UNESCO (2025) launched a new Red Teaming Playbook to provide organizations with the tools to test AI for harmful biases. It provides tools and resources for users to design and implement their own Red Teaming events to test AI systems. The playbook provides examples that reveal gendered bias through demonstrating the same prompt and the differences between describing men and women. The Education Team wants to continue supporting girls in ICT, empowering them to not only test the tech field but excel! UN Women (2026) are working towards a world in which all women and girls feel safe from violence online and off. By empowering girls to take control of their ICT journeys, they will gain news skills and build confidence to navigate AI biases. Today, women represent just one-third of those building these technologies, and those who do enter are often kept out of leadership positions. When it comes to AI, women need to be in that space. We have fought for so long to have gender equity and equality and to continue to reduce gender bias in the technologies created. Women and girls must be included at every stage. 

 


Sources:

Share This Content

Visit Our Partners