Women are being left out of AI. That’s a problem and an opportunity.
You don’t need to code to shape the future of AI. You just need to show up. This guide is your starting point.
This morning, an algorithm decided what you saw first on social media, which emails landed in your inbox, and possibly whether your dating app showed your profile to someone you might be interested in.
If you’re a woman, 78% of the people building the systems that govern your daily life don't share your perspective, your experiences, or frankly, your problems.
That’s not all. The people in AI jobs making decisions about your life are paid 25% more than those in regular tech roles. And while you're dealing with AI systems that don't quite get you, you're missing out on the fastest-growing, highest-paying field in tech.
So, hear me out: if you’re a woman, AI needs you.
The numbers don’t lie
Let's talk numbers, because they're stark. AI jobs are projected to grow 40% faster than other tech roles. Companies with diverse AI teams are 39% more likely to outperform financially. Meanwhile, women make up only 22% of AI professionals globally, the lowest representation in any major tech field.
Women aren’t just underrepresented in building AI. A World Economic Forum white paper argues that “women are more likely to hold roles disrupted by GenAI” and less likely to land those coveted, well-paying AI jobs.
What happens when AI gets you wrong
Let’s go back to your morning. Let’s say you want to watch some TV.
Netflix recommends romantic comedies—not because that’s your vibe, but because an algorithm decided that’s what women like. Researchers in Costa Rica call this “gendered algorithmic interpellation.” Translation: the system made assumptions about you based on stereotypes, not your actual behaviour.
You apply for a job, but your resume isn’t seen by a human because your name is “female-associated.” If you took a career break for caregiving or maternity leave, the system penalizes you for that gap. The bias compounds if you’re a person of colour.
A recent study found that when women asked ChatGPT for salary negotiation advice, it suggested women ask for lower amounts than it did for men. In one case, the model advised a woman to request $280,000 while telling a man to ask for $400,000.
When women get involved in building the systems, everything changes.
Different questions, better solutions
A search and rescue volunteer in the UK, who wasn't a programmer, wanted to help find missing people faster. She started training a 3D perception model to recognize shapes in sonar images. Her breakthrough didn't come from technical optimization. It came from understanding human needs in crisis situations and thinking about families waiting for news.
Her insights didn't require coding expertise. They came from different life experiences, different problems, and different questions.
Last week, I posted on LinkedIn about a comment Sam Altman made. The responses revealed something telling: many people came to the post to express their distrust of AI and their disappointment with the direction it’s heading.
My response? Get involved.
Avoiding the technology completely doesn't erase it. It erases your perspective from it. If more women and underrepresented groups join AI development, we'll have more say in its direction.
I know many argue that AI adoption isn't inevitable, but given what we've seen over the past few years, AI will likely be a driving force for decades to come. Wouldn't you rather shape it from the inside?
How to get into AI without a tech background
You don't need a computer science degree to work in AI. When I started in digital, I learned to code websites but quickly moved into content and strategy rather than pure technical work.
AI will create opportunities for all kinds of professionals: copyright lawyers, sustainability experts, risk advisors, ethics specialists, and roles we haven't even imagined yet.
Nia Castelly started as product counsel for the Google Play Store. She noticed developers getting bogged down in privacy compliance instead of focusing on their code. Her solution became Checks, an AI tool for automated privacy policy review, a company Google acquired in just three years.
Her insight came from watching human frustration and asking a better question. "How can we pre-emptively help developers in this space? There has to be a better way,” Castelly recalled when speaking to Forbes. “The answer was AI."
Getting started:
AI Skills Lab Canada offers courses specifically for women, covering AI fundamentals without requiring coding experience
Nontechies.ai provides accessible AI education for non-technical professionals
This newsletter (Human+AI) is free, and I regularly write about how to learn AI for free. The posts are all pretty quick reads.
Your moment is now
The AI systems being built today will shape how we work, communicate, learn, and make decisions for the next 20-30 years, probably longer.
This goes beyond representation. It's about changing how machines understand people: language, logic, identity, fairness, priorities. That kind of fundamental shift only happens when different people ask different questions.
As AI expands, it will touch everything. It's not confined to one industry or product type; it's becoming a layer that runs on top of most of the technology you interact with.
The AI revolution isn't just happening to you. It can happen with you, but only if you decide that your perspective, your problems, and your solutions matter enough to be part of the conversation.
Help train this newsletter's neural networks with caffeine!
⚡️ Buy me a coffee to keep the AI insights coming. ☕️
AI in the news
Marc Benioff: AI and humans both have a role (Axios) Salesforce CEO Marc Benioff envisions a future where humans and AI work together, but acknowledges that the rapid pace of AI adoption is straining companies and disrupting traditional workforce models. While leaders like Benioff and Satya Nadella promote AI-driven productivity and long-term optimism, they’re also cutting jobs and grappling with internal tensions, highlighting the rocky transition toward this new human-AI hybrid workplace.
Google is indexing conversations with ChatGPT that users have sent to friends, families, or colleagues (Fast Company) Thousands of private ChatGPT conversations—some containing deeply personal information—have been indexed by Google and made publicly searchable after users unknowingly shared them using ChatGPT’s “Share” feature. Although OpenAI claims shared chats are only searchable if users opt in, critics argue the interface misleads users, raising serious concerns about privacy, user literacy, and the ethical handling of sensitive data.
‘Subliminal learning’: Anthropic uncovers how AI fine-tuning secretly teaches bad habits (Venture Beat) A new study by Anthropic reveals that during model distillation, student AI models can unintentionally inherit hidden traits, both benign and harmful, from their teacher models, even when the training data appears unrelated and filtered. This phenomenon, called "subliminal learning," raises serious AI safety concerns, especially for enterprises relying on model-generated data, but may be mitigated by using teacher and student models from different base architectures.