
Image generated with ChatGPT
This week's reading and listening made me think about the first time I experienced AI bias in action. It was years ago in Hong Kong, when a basic search showed Filipinos mainly as domestic helpers. This wasn't totally unexpected, but it showed me how AI systems, trained on huge amounts of data shaped by human choices, can produce biased and harmful stories. That experience stuck with me, especially now that AI is quickly changing education.
AI tools like ChatGPT learn from massive amounts of human-created data. If that data contains stereotypes, gaps, or historical inequalities, those patterns can show up in AI responses. Several types of bias exist in artificial intelligence: interaction bias (how users influence systems), selection bias (who and what gets included in the data), and latent bias (hidden assumptions buried in information). In schools, this is especially concerning: it subtly influences what students see as true or normal, creating a hidden curriculum where algorithmic responses seem right and unquestionable. In this way, AI is transforming education both positively and negatively.
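To make the selection-bias idea concrete, here is a toy sketch, with entirely made-up data and group names, of how a model that simply predicts the most frequent pattern in its training data will reproduce whatever skew that data contains:

```python
from collections import Counter

# Hypothetical "training corpus" of (group, role) pairs.
# The skew is deliberate: one group appears mostly in a single role,
# which is what selection bias in real data can look like.
corpus = [
    ("group_a", "domestic helper"), ("group_a", "domestic helper"),
    ("group_a", "domestic helper"), ("group_a", "engineer"),
    ("group_b", "engineer"), ("group_b", "doctor"),
    ("group_b", "teacher"), ("group_b", "engineer"),
]

def most_likely_role(group, data):
    """Predict a role by raw frequency -- a crude stand-in for the
    pattern-matching a large model performs at much greater scale."""
    roles = Counter(role for g, role in data if g == group)
    return roles.most_common(1)[0][0]

print(most_likely_role("group_a", corpus))  # -> "domestic helper"
print(most_likely_role("group_b", corpus))  # -> "engineer"
```

The "model" here is nothing but counting, yet it already echoes the imbalance it was fed; nothing in the code is prejudiced, only the data.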
The big question here is, what can we do?
- AI developers need to carefully review training data, allow external audits, design inclusive teams, and build tools that identify false or harmful outputs.
- Educational institutions can devise programs to teach students how to use GenAI. They can modify assessments to check for new ideas and thinking skills. Also, they can set rules that are easy to understand, so everything is fair for everyone.
- Educators play a huge role in identifying and countering power and bias. We can facilitate open conversations about how algorithms work, use real-world examples of bias, and teach students to question AI-generated content instead of automatically trusting it.
There is no denying that AI is powerful. But as Lucretia Fraga underscored, an informed user is even more powerful. By talking openly about bias and teaching students how to think critically about technology, we shift the balance. AI can shape education; we just have to ensure it does so for the better.
In my catalyst discussion this Monday, I will address the question, "What are teachers' responsibilities when they don't control AI?" I will explore several things educators can consider to develop their own digital literacy so they can effectively help their students navigate AI critically and ethically. This video introduces Monday's discussions.
Reflect
Share a time when an AI tool may have quietly influenced your child's or student's beliefs, identity, or opportunities without anyone even noticing.
Links to the materials I explored for this post:
https://www.youtube.com/watch?v=cygpAm4ooGs
https://www.youtube.com/watch?v=eeyL9jBky68
https://www.youtube.com/watch?v=59bMh59JQDo
https://pubmed.ncbi.nlm.nih.gov/39850144/
https://www.eff.org/deeplinks/2022/05/podcast-episode-teaching-ai-its-targets
Hi Kris,
I really enjoyed reading your blog. Your reflection question made me think about my niece Jill’s recent experiences with algorithms. Jill is 10 years old and currently going through a bit of a tomboy phase. She’s taking a karate class, so many of her YouTube searches are for instructional karate videos. Her best friend’s older brother is a die-hard Edmonton Oilers fan, which means she’s now watching highlights of the Oilers on a regular basis.
Her mom has also been encouraging her to read the Harry Potter series, which was a favorite of my sister’s as a child. Recently, when Jill watches YouTube videos or listens to stories from Alexa, the content is generally age-appropriate, but many of these stories feature same-sex parents. This has prompted Jill to ask her parents questions about topics she hadn’t previously considered.
I was talking to my sister about this, and she feels that Jill’s latest interests are shaping an algorithm that doesn’t fully represent who Jill is. On the bright side, it’s creating space for meaningful conversations between Jill and her mom, which I think is incredibly valuable.
Hi, Jordan. Thanks for sharing Jill’s story. It’s such a good example of how algorithms can run ahead of kids in ways they’re not always ready for. I actually think this is where parents and teachers still matter so much. Kids don’t just need filters; they need humans who can step in, slow things down, and help them make sense of what they’re seeing. The best part of your example is how it opened the door for real conversations between Jill and her mom. Those moments, where adults can frame things, answer questions, and turn something unexpected into a teachable moment, are exactly why human judgment and guidance are still important in how young people navigate AI. Good luck completing your project.