
AI Algorithm Bias: Why Awareness is Key
July 24, 2025
Machine learning and AI have gained significant traction in the last 12 months alone. This technology will keep shifting, and the use of AI tools is only going to increase. On the surface, AI is great; it seems able to do pretty much anything. But many people fail to realize the risks that come with it. One such risk is AI algorithm bias.
What is AI Algorithm Bias?
AI algorithm bias happens when AI systems produce unfair or skewed answers that reflect societal inequalities. It can also happen when a question is phrased in a way that leads the AI to give the answer it thinks the user wants to hear. That second case has less to do with the human side of AI design and more to do with a flawed system, but it is still important to address.
AI bias can be introduced during data collection and labelling, during algorithm design, and when predictions are interpreted and applied.
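To make the data-collection stage concrete, here is a minimal sketch of a pre-training audit on a toy, hypothetical dataset: comparing how often each group receives the positive label before any model is trained. The column names are invented for illustration.

```python
# A minimal sketch of a data-stage bias audit on toy, hypothetical data:
# compare how often each group receives the positive label.
import pandas as pd

data = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1, 1, 0, 0, 0, 1],
})

rates = data.groupby("group")["approved"].mean()
print(rates)
# group A: 0.67, group B: 0.33 -- a model trained on these labels
# will learn to reproduce that gap unless it is addressed.
```

A gap like this doesn't prove the labels are unfair, but it flags exactly the kind of pattern a model will happily learn and amplify.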
Why AI Algorithm Bias Exists
AI learns from existing data, and that data comes from real people. As a result, it often amplifies inherent biases, leading to discriminatory outcomes. The people most likely to be affected are those in marginalized groups: women, people of colour, people with disabilities, and members of the LGBTQ+ community.
Biases exist. Some are conscious and some are unconscious, but we all have them, and some of us have them more than others. When we rely on people to build and train AI tools, our biases end up permeating the results. AI is not smart enough to discern what is biased from what is not; it is entirely reliant on humans.
Examples of AI Algorithm Biases
In Hockey…?

Canadians love hockey. As a Winnipeg Jets fan, it was heartbreaking when they were eliminated after putting up such an incredible fight and delivering the best game of hockey in history.
For many Winnipeg Jets fans (and Canadians in general), if our team couldn’t win the Stanley Cup, we were going to root for the one Canadian team left in the playoffs: Edmonton.
Canadian pride exists on a good day. Throw hockey into the mix and it’s a different story. Many people turned to ChatGPT for its prediction on the outcome of the series. The problem was how they worded the question.
“Will Edmonton win the Stanley Cup?”
What’s the problem here? On the surface, you’re simply asking a question. But because you are asking specifically, “Will EDMONTON win the Stanley Cup?” the AI assumes you want Edmonton to win, so it gives you stats and information skewed in favour of Edmonton winning.
Edmonton did not win.
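You can see the framing effect for yourself by comparing a leading prompt with a neutral one. Here is a minimal sketch, assuming the OpenAI Python SDK with an API key set in the environment; the model name is illustrative, and real outputs will vary.

```python
# A minimal sketch comparing a leading prompt with a neutral one.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY in the environment;
# the model name is illustrative.
from openai import OpenAI

client = OpenAI()

leading = "Will Edmonton win the Stanley Cup?"
neutral = "Which team is most likely to win the Stanley Cup this year, and why?"

for prompt in (leading, neutral):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"Q: {prompt}")
    print(f"A: {response.choices[0].message.content}\n")
```

The leading version names one team, which nudges the model to build its answer around evidence for that team; the neutral version leaves the conclusion open.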
In AI Imagery
I was scrolling through LinkedIn recently and someone had shared AI-generated images of what animals would look like in certain organizational roles. Exactly the type of hard-hitting content we’re all on LinkedIn for, but that’s a topic for another blog.
The images showed various jobs and roles: CEO, COO, finance manager, creative director, HR, and so on. When I clicked through, I wasn’t surprised to see the senior positions filled by animals in suits (clearly depicting them as male) or by distinctly male animals (a lion with a mane is clearly male, and in this case it was the CEO). I expected female animals to appear in lower-level positions (like HR; it’s worth noting the title wasn’t “HR director,” but simply “HR”). But to my surprise, female animals were not depicted at all, not even in roles where I could see bias placing them, like HR. They simply didn’t exist in this imagery.
This is a real issue because women make up a significant portion of the workforce. They also hold top-level positions, albeit in smaller numbers. These images erased women from the workforce entirely.
What Do We Do?
It all starts with AI governance, something that, in my opinion, is not being talked about enough. To ensure AI is used fairly and ethically, we can start by making sure the following practices are in place:
- Compliance with laws and ethical standards
- Trust through privacy and security
- Fairness using techniques like counterfactual fairness (see the sketch after this list)
- Transparency in data and decision-making
- Human oversight to review AI decisions
- Reinforcement learning (such as learning from human feedback) to reduce bias in outputs
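To illustrate the fairness item above, here is a minimal sketch of a counterfactual fairness check on toy, hypothetical data. A full treatment models the causal relationships between attributes; this sketch simply flips the protected attribute and compares the model’s predictions.

```python
# A minimal sketch of a counterfactual fairness check on toy,
# hypothetical hiring data: flip the protected attribute and see
# whether the prediction changes.
import pandas as pd
from sklearn.linear_model import LogisticRegression

train = pd.DataFrame({
    "gender":     [0, 0, 1, 1, 0, 1, 0, 1],  # protected attribute
    "experience": [2, 5, 2, 5, 8, 8, 1, 1],
    "hired":      [0, 1, 0, 0, 1, 1, 0, 0],
})

model = LogisticRegression().fit(train[["gender", "experience"]], train["hired"])

candidate = pd.DataFrame({"gender": [1], "experience": [5]})
counterfactual = candidate.assign(gender=0)  # same person, attribute flipped

p_actual = model.predict_proba(candidate)[0, 1]
p_flipped = model.predict_proba(counterfactual)[0, 1]
print(f"P(hired | as-is) = {p_actual:.2f}, P(hired | flipped) = {p_flipped:.2f}")
# A large gap means the model is leaning on the protected attribute.
```

If flipping the attribute moves the prediction, the model is making decisions it shouldn’t be, and that is exactly the kind of thing human oversight needs to catch.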
While this is a start, it is not enough. It’s important to understand how AI operates and to acknowledge that it’s not a perfect solution, that it will provide incorrect answers, and that biases exist. Make sure you fact-check information, especially if it will have a direct impact on other people or you’re using it for decision-making.
Want to learn how to use AI ethically and correctly? We’re here to help. Get in touch by filling out the form at the bottom of this page.
Thanks for reading! Make sure to subscribe to our blog. We publish technology tips, tricks, and updates every week.

