In part one of this series, we explored whether a male-dominated AI ecosystem risks widening the gender gap. The conclusion was cautious but clear nonetheless: representation matters, and who builds AI influences how it behaves.
But the conversation doesn’t stop at representation. The more pressing question is what happens next, and whether bias is already shaping real-world outcomes.
According to experts across AI, ethics, policy and product, the answer is a resounding yes. The risk isn’t necessarily intentional bias, but rather structural imbalance. When the teams designing AI systems lack diversity, the assumptions, data and priorities embedded within them can quietly shape how those systems behave in the real world.
Ivana Bartoletti, Chief Privacy Officer at Wipro and Founder of the Women Leading in AI Network, argues that this is fundamentally a design issue, not just a diversity one. She notes that AI reflects the assumptions and priorities of those who build it, and with men still heavily overrepresented in AI development, those perspectives can unintentionally shape outcomes.
In practice, she points to examples such as hiring tools that downrank women’s CVs, credit scoring systems that penalise career breaks and generative tools that create harmful or non-consensual content. All of these illustrate how bias can emerge through design choices rather than explicit intent.
Bias Is Already Influencing Real-World Systems
Several experts highlighted that the impact of AI bias is no longer theoretical. From hiring and financial services to online safety and workplace progression, AI systems are already influencing decisions that affect people’s lives.
Dr. Laura Bishop, AI and Cyber Security Digital Sector Lead at BSI, warns that women are already experiencing disproportionate harm in some AI-powered environments. She points to examples like deepfakes, technology-facilitated abuse and conversational AI systems that can reinforce coercive or manipulative behaviour. These risks are compounded by the fact that women still make up only around a quarter of the technology workforce, which means that the perspectives shaping these tools remain limited.
This imbalance is also reflected in adoption. Research from the Markkula Center for Applied Ethics at Santa Clara University suggests that AI usage isn’t growing evenly across demographics. Ann Skeet, Senior Director of Leadership Ethics at the centre, notes that studies are emerging showing a widening gender gap in AI adoption, which could have long-term implications for workforce advancement. If AI becomes a productivity multiplier, slower adoption among women could translate into reduced visibility, fewer opportunities and widening career gaps.
Elizabeth Ngonzi, Board Member and Founding Ethics and Responsible AI Committee Chair at the American Society for Artificial Intelligence, echoes this concern, explaining that value is already flowing to those who design, deploy and confidently use AI – groups where men remain overrepresented. As a result, the benefits of AI-driven productivity and decision-making may not be distributed evenly.
The Bias Isn’t Always Obvious
One of the most difficult aspects of AI bias is that it often operates subtly. It rarely appears as overt discrimination; instead, it shows up as small distortions that build up over time.
Johanna Faigelman, Founder and CEO at HumanBranding, notes that bias can be embedded in everything from dataset selection to how problems are framed and prioritised. When those decisions are made by relatively homogenous groups, entire perspectives can be overlooked – not intentionally, but through omission. And the problem is that once those omissions are embedded into AI systems, they scale, and they scale quickly.
This reflects a broader structural issue in tech. Data examining the best metros for women in tech from CoworkingCafe highlights ongoing disparities in representation, pay and leadership opportunities. These imbalances feed directly into the AI talent pipeline, meaning the same structural inequalities risk being reproduced in the technology itself.
Anne DeSpain, Founder of DeSpain Consulting, describes the gender gap in AI as operating at two levels: those who build the technology and those who use it. While user bias receives attention, she argues that training bias is equally important. If the worldview shaping large language models is narrow, the outputs will reflect that. However, she also notes that the current AI wave offers a rare opportunity: because the technology is new to everyone, the playing field is not yet fully entrenched.
Workplace Outcomes May Be the Biggest Impact
Beyond technical outputs, experts say AI bias could also shape workplace dynamics. As AI becomes embedded into recruitment, performance evaluation and productivity tools, the risk is that existing inequalities become automated.
Pippa van Praagh, VP of Operations at Perkbox, warns that AI is already influencing recruitment, communication and progression. Without equal involvement in building and testing these systems, she suggests that existing workplace gaps could be reinforced at scale.
Similarly, Brennan Kolar, Founder at Atlas CPA Index, points to research showing that women remain underrepresented in AI talent and research roles. He also highlights documented cases where medical AI trained primarily on male data missed diagnostic patterns in women, illustrating how biased training data can translate into real-world consequences.
The Risk Isn’t Intent – It’s Acceleration
A recurring theme across expert commentary is that bias in AI is rarely deliberate. Instead, it emerges from uneven representation, incomplete datasets and assumptions embedded early in development. Once deployed, AI simply scales those patterns.
Matt Seiler, Founder and Board Member at Excelerators Inc., describes this as an organisational issue. AI systems inherit the structure, incentives and priorities of the organisations that build them. When those structures lack diversity, the resulting technology reflects that imbalance – not intentionally, but systematically.
This is what makes AI different from previous technologies. Bias that might once have affected a small group can now influence millions of decisions simultaneously, from hiring and lending to search and discovery.
The Window To Shape AI Outcomes Is Narrowing
Despite the risks, experts also emphasise that the outcome is not fixed. Representation, governance and deliberate design choices can still shape how AI evolves.
Eshaan Jain, Senior Manager, Product at Mphasis, warns that the industry is currently building the future with limited diversity, which risks creating technology that serves some groups better than others. But he also suggests that recognising this gap now provides an opportunity to correct course before these patterns become entrenched.
Taken together, the message from experts is clear. AI bias may not always be intentional, but it is already influencing real-world outcomes. The technology is not inherently discriminatory – it simply reflects the people, data and systems behind it.
If part one asked whether male-dominated AI could widen the gender gap, part two suggests that the effects may already be emerging. The real question now is whether the industry moves quickly enough to address these imbalances, before they become embedded at scale.
Our Experts:
- Ivana Bartoletti: Chief Privacy Officer at Wipro and Founder of the Women Leading in AI Network
- Dr. Laura Bishop: AI and Cyber Security Digital Sector Lead at BSI
- Robin Emiliani: Co-Founder and CMGO at Catalyst Marketing
- Sahar Danesh: CEng FIET, Digital Policy Lead at BSI
- Adonis Celestine: Senior Director, Global Automation Practice Lead at Applause
- Brennan Kolar: Founder at Atlas CPA Index
- Elizabeth Ngonzi: Board Member and Founding Ethics and Responsible AI Committee Chair at American Society for Artificial Intelligence (ASFAI)
- Noe Ramos: VP of AI Operations at Agiloft
- Johanna Faigelman: Founder and CEO at HumanBranding
- Anne DeSpain: Founder of DeSpain Consulting
- Pippa van Praagh: VP of Operations at Perkbox
- Matt Seiler: Founder, Multi-Time CEO and Board Member at Excelerators Inc.
- Rev. Dylan Thomas Cotter: Publicist, Author, Activist, Dating Expert and Motivational Speaker at Cotter The Creative
- Eshaan Jain: Senior Manager – Product at Mphasis
- Ann (Gregg) Skeet: Senior Director, Leadership Ethics at Markkula Center for Applied Ethics at Santa Clara University
Ivana Bartoletti, Chief Privacy Officer at Wipro and Founder of the Women Leading in AI Network
“AI is not a neutral tool. It reflects the assumptions, blind spots, and priorities of those who build it and, right now, those people are overwhelmingly men. This is not just a diversity problem. It is a design problem.
“The systems being built today are already reproducing gendered harm at scale: hiring tools that downrank women’s resumes, credit scoring systems that penalise career breaks, and image tools that generate non-consensual content. These are not accidents. They are the predictable result of deliberate design choices.
“Simultaneously, representation without power is theatre. We do not just need more women writing code, we need women at the top of organisations deciding what AI is for, whose needs it centres, and who it serves.”
Dr. Laura Bishop, AI and Cyber Security Digital Sector Lead, BSI
“Women make up only around 20–25% of the technology workforce, highlighting the need for meaningful change within education. The consequences of inaction are already visible. AI systems are being used in ways that disproportionately harm women, from chatbots that enable and normalise coercive control, to deepfakes and other forms of technology‑facilitated abuse.
“AI is often framed as life‑changing for women through tools that automate domestic labour. But technology alone will not dismantle deeply embedded social norms. We must ask harder questions: how is AI being designed, whose values are embedded within it, and who gets to decide its future? If women are not meaningfully involved in shaping AI, it will continue to shape them – often to their detriment.
“This is not only a question of fairness, but of quality and impact. Women bring essential perspectives, lived experience, and interdisciplinary thinking to AI development. Effective and ethical technology does not emerge from a single discipline or viewpoint; it requires the convergence of technical expertise, social insight, ethics and creativity. Without this, AI risks reinforcing the very inequalities it claims to disrupt.”
Robin Emiliani, Co-Founder and CMGO at Catalyst Marketing
“Women are not passive recipients of AI technology. We’re consumers, decision-makers, business owners, and increasingly the ones evaluating and purchasing AI tools for our organizations. And we notice when something wasn’t built with us in mind. The UX feels off. The assumptions are wrong. The outputs miss the mark.
“There’s a consumer reckoning coming for AI products that don’t hold up under female scrutiny, and it’s going to be louder than the industry expects. Women talk. We share recommendations, flag problems, and influence purchasing decisions at a scale that tends to get underestimated until it’s too late to ignore. AI companies that want long-term market relevance need to understand that building for women isn’t charity. The ones who get that now are going to have a significant head start when the market catches up.”
Sahar Danesh CEng FIET, Digital Policy Lead, BSI
“It’s important to highlight that there are AI systems that are really helping women. For instance in healthcare, AI is detecting breast cancer early. But equally, if women are not involved in developing AI they will miss out on creating systems that can benefit them directly. Things are being developed that are not designed to support women and don’t have their considerations in mind.
“When we start thinking about things like large language models that are being let loose on the Internet, if they are not inclusive by design and women are not engaged in shaping them, not bringing their perspective, or their data isn’t incorporated, it will really be corrosive to women, and ultimately negatively impact the gender gap.
“In my experience, women approach AI with a bit more scepticism, which is welcome. It’s not a silver bullet, it needs to be implemented and utilised in a very well-governed way, including using standards, to protect all groups including women and those who may be vulnerable.”
Adonis Celestine, Senior Director, Global Automation Practice Lead at Applause
“Human oversight is critical, but when the teams developing AI do not accurately reflect real-world diversity, they can inadvertently introduce bias during data collection, labelling and evaluation. Relying on a narrow group of people to train and test AI systems increases the risk of skewed outcomes. Effective algorithm training requires large volumes of high‑quality, diverse inputs, achievable only by sourcing data from a broad pool of participants representing different genders, races, languages and other attributes.
“A global community of independent testers can provide the breadth and depth of real-world data needed to validate AI systems at scale. Representative, reliable and unbiased inputs are essential from early development to final release and beyond. Our research shows 61% of organizations rely on human input to validate AI performance, with 46% stating that human sentiment and usability determine AI readiness.”
Brennan Kolar, Founder at Atlas CPA Index
“Women make up about 22% of AI talent globally and only 12% of AI researchers. Hiring tools trained on historical data have penalized resumes with women’s colleges on them. Medical AI trained primarily on male patient data has missed diagnostic patterns in women. Those are documented outcomes from systems built by a workforce that is 88% male.
“About 50% of men have used generative AI in the past year compared to 37% of women, and a 2026 study found that women make up 86% of workers most exposed to AI job displacement and least equipped to adapt to it. The women’s share of AI adoption is growing faster than men’s right now, but the people building the technology haven’t changed. I think the question isn’t whether AI can serve both genders equally, it’s whether anyone with budget authority is hiring to make that possible.”
Elizabeth Ngonzi, Board Member and Founding Ethics and Responsible AI Committee Chair at American Society for Artificial Intelligence (ASFAI)
“AI absolutely could widen the gender gap if we are not intentional, but not because women are less capable. Value is flowing to the people who design, deploy, and confidently use AI at work, and right now men are overrepresented in all three. That means they are more likely to capture the productivity gains, visibility, and promotions that come with being seen as “AI‑savvy.”
“From decades in tech and training more than 12,000 people in AI worldwide since 2023, I’ve seen that AI is not just a data tool but a communication tool. The best results come from asking clear, contextual questions, reading outputs critically, and iterating with nuance. Those are strengths many women already demonstrate every day, yet they are often under‑recognized in AI initiatives.
“Leaders who want AI to be an advantage, not a liability, should invest in women’s AI training and experimentation time, put women at the table as designers and decision‑makers, and track who actually benefits from AI projects, not just whether “AI is being used”.”
Noe Ramos, VP of AI Operations at Agiloft
“A male-dominated AI ecosystem doesn’t just risk bias in outputs, it risks narrowing the very definition of what ‘good’ looks like. AI systems are shaped by the data, assumptions, and lived experiences of the people building them. When those perspectives lack diversity, gaps don’t just exist, they scale. As a minority woman who has spent her career navigating tech spaces not built for people like me, I’ve seen firsthand how homogeneity shapes what gets prioritized, questioned, or simply never considered.
“That dynamic carries quietly into AI design, influencing everything from datasets to decision frameworks. The solution isn’t slowing innovation. It’s expanding who gets to define it. Responsible AI can’t be a checkbox or an afterthought. It has to be a cultural operating model built on cross-functional collaboration and real human context. If we want AI that works for everyone, we have to build it with everyone.”
Johanna Faigelman, Founder and CEO at HumanBranding
“As an anthropologist, I’ve spent two decades studying what happens when the people designing systems don’t reflect the people using them: you get blind spots baked in at the foundation. AI is no different. The data sets, design assumptions, and framing of what counts as a “problem worth solving” all carry the cultural fingerprints of whoever is in the room. Right now, that room is predominantly male, and that has consequences we’re only beginning to understand.
“Will this widen the gender gap?
“Almost certainly, if we don’t intervene deliberately. AI isn’t neutral. It learns from historical data, and history is already unequal. When the people curating that data and defining those models are predominantly men, the system inherits their blind spots — not out of malice, but out of omission. The gap gets encoded and scaled.
“Can AI genuinely serve both men and women equally under these conditions?
“Not without significant course correction. We’ve seen this pattern before — in clinical drug trials that excluded women for decades, producing treatments that worked less effectively for half the population. AI risks repeating that same error, but faster and across every sector simultaneously: healthcare, finance, hiring, education etc.
“What concerns me most as an anthropologist and behavioral scientist is that the bias isn’t always visible. It’s embedded in what questions get asked, what problems get prioritized, and whose experiences are treated as the default. Women’s behavioral realities, emotional contexts, and decision-making patterns are being systematically underrepresented in technology that will shape how we work, access care, and navigate daily life.
“Are we too dazzled by AI to see the problem?
“I think many are. The pace of innovation creates pressure to move fast, and questions of equity can feel like friction. But that’s exactly the thinking that produces systems which need to be rebuilt later — at far greater cost.
“The solution isn’t performative inclusion. It’s structural. Women need to be present not just as users, but as architects shaping the questions being asked, not just as validators of the answers.”
Anne DeSpain, Founder of DeSpain Consulting
“The gender gap in AI is real, and it operates at two levels, those who use the technology and those who train it. User bias gets attention, but training bias matters just as much. When the people shaping large language models bring a predominantly male worldview to that work, the outputs reflect it.
“What gives me genuine optimism is that this AI wave is new to everyone at the same time. The learning curve is democratic in a way that past technology cycles were not. Women do not have to catch up to decades of entrenched expertise, they can start now, at the same point as everyone else.
“I was at a workshop recently where a woman was talking about teaching AI to a room that skewed heavily male. In a previous tech era, that stage would have belonged to a man. Women are not just in the room anymore, they are leading it from the stage.”
Pippa van Praagh, VP of Operations at Perkbox
“AI could absolutely widen the gender gap if we carry on building it in rooms where women are underrepresented. Technology is never neutral in practice. It reflects the assumptions, priorities and blind spots of the people creating it. If the people shaping AI are mostly men, then the risk is that products, systems and workplace tools will be designed around male perspectives, while being presented as universal.
“We are talking so much about how powerful AI is that we are not paying enough attention to who gets to shape that power. In the workplace, that has real consequences. AI is already influencing recruitment, performance, communication and progression. If women are not equally involved in building, testing and challenging these systems, existing inequalities could easily be reinforced at scale.
“This is not a side issue for DEI teams to worry about later. It is a leadership issue right now. If businesses want AI to close gaps rather than deepen them, women need a bigger and stronger voice in how it is built and deployed.”
Matt Seiler, Founder, Multi-Time CEO and Board Member at Excelerators Inc.
“The central risk is not simply who uses AI, but who defines the conditions under which it is built and deployed. As the People Last white paper explains, outcomes are shaped upstream – by objectives, strategy, structure, and process – long before technology ever reaches end users. When those foundational decisions are made by a narrow group, the technology reflects that narrowness by default.
“In professional services and technology organizations, AI is being integrated under pressure, often before leadership has clearly defined what success looks like, who owns which outcomes, and where human judgment ends and automated execution begins. The paper argues that when these questions are unresolved, failure is routinely blamed on people or tools rather than on organizational design itself. In that context, unequal representation at the decision‑making level doesn’t correct itself; it compounds.
“The Digital Workforce Transition paper reinforces this point by reframing AI not as a neutral tool but as a “digital worker.” Digital workers inherit the assumptions, priorities, and processes designed by the humans who deploy them. If those processes are shaped without broad participation, the resulting systems execute at scale what was only partially considered at the start. Speed and consistency amplify whatever design choices came first.
“This is where the risk widens. The papers emphasize that people protect territory when ownership is unclear and accountability is diffuse. When entire groups are underrepresented in defining strategy, structure, and process, their perspectives are absent not by malice but by omission. The technology doesn’t ask who is missing. It simply executes the system it is given.
“The result is not that AI intentionally excludes, but that it reliably mirrors the structure and incentives of the organizations that build it. As the papers note, “getting the sequence wrong” is an existential threat, because once processes are encoded into digital workers, they become harder to see, harder to question, and harder to change.
“Whether AI closes gaps or widens them depends less on how advanced the technology becomes and more on whether organizations do the hard work first: defining objectives clearly, designing structure deliberately, assigning ownership explicitly, and involving the full range of human judgment necessary to serve real outcomes. Without that work, AI doesn’t challenge existing imbalances; it operationalizes them. Faster. At scale.”
Rev. Dylan Thomas Cotter, Publicist, Author, Activist, Dating Expert and Motivational Speaker at Cotter The Creative
“Without laying the foundation for equality within developing and coding systems, this will negatively impact the quality of life for all. The ability to develop inclusive systems with non-discriminatory and/or non-stereotypical algorithms has always been a possibility; it seems, however, as if presently the AI industry has more of a misogynistic heteronormative agenda at play.
“The uneducated and socially unaware masses may become distracted by technological advances; however, those of us who are socially aware have been sounding the alarms. The reality is that some people will code anything for a check, and that’s the tech industry’s problem right there, promoting capitalism over diversity, equity, and inclusion.
“For individuals seeking much more in-depth research on the AI industry and this particular matter, I would suggest reviewing the DAIR Institute’s work and the backstory of why this particular organization was formed by a team of engineers, activists, and academic scholars, along with its mission in the Artificial Intelligence space.”
Eshaan Jain, Senior Manager – Product at Mphasis
“We’re racing to build the future with AI—but mostly with half the team. Today, men dominate both AI workspaces and usage, meaning the people shaping datasets, algorithms, and products are overwhelmingly male. The result? Technology that risks serving men’s needs first, while women’s perspectives remain an afterthought.
“Are we so dazzled by AI’s potential that we’re missing a dangerous blind spot? If women are hardly part of the conversation, how can we build AI that truly serves everyone equally? Without urgent change, we may unintentionally widen the very gender gap we hoped to close.”
Ann (Gregg) Skeet, Senior Director, Leadership Ethics at Markkula Center for Applied Ethics at Santa Clara University
“. . . it [AI] may not be growing equally among all demographics.
“Studies are emerging about the growing gender gap in the use of AI, suggesting that women are adopting the technology more slowly and foreshadowing negative implications for women’s workforce advancement in the coming years as a result.”