Sunday, March 22, 2026

AI and School: Some Notes on my Digital Citizenship and Literacy Journey


Image created with ChatGPT

I’m nearing the end of my course in Digital Literacy and Citizenship as part of my master’s program, where I’ve spent a lot of time exploring how technology is shaping the way we learn. The cluster of articles this week was about AI and the future of education. I felt that particular mix of professional curiosity and personal unease that comes with reading about something that's actively changing the field we are studying. Howard Gardner called AI as fundamental a shift in education as anything seen in a thousand years, and as someone currently sitting inside an educational institution trying to understand digital transformation, that landed differently than it might for a casual reader. He suggested that most cognitive aspects of the mind - the disciplined, synthesizing, and creative - will eventually be performed so well by large language models that whether humans engage in them at all will become optional. That's a provocative claim for anyone in education, but for someone studying digital literacy specifically, it raises a direct question: what exactly are we preparing students for if the cognitive heavy lifting is increasingly outsourced?

Jim Shimabukuro pushed that question further for me. The argument is that AI's personalization, accessibility, and efficacy could shift the balance away from traditional schools as the dominant medium for academic learning, potentially within the next decade. The numbers cited are hard to sit with neutrally: some AI-first schools report that students master core curricula in two hours a day and score in the 99th percentile nationally. In my digital literacy coursework, we talk constantly about the difference between access to information and genuine understanding, and I find myself wondering whether those test scores reflect deep learning or something more like optimized performance. Mary Burns and Rebeca Winthrop speak directly to this tension, and it's the piece I'll probably be citing in my next assignment. Their research found that AI tools prioritize speed and engagement over learning and well-being, and that cognitive offloading and dependency can atrophy students' mastery of foundational knowledge and critical thinking, which in essence is the exact skill set a digital literacy education is supposed to build. There's something deeply ironic about that.

What keeps me from sliding into full pessimism, and what I think is actually the most useful framing I've encountered across all three pieces, is the insistence that none of this is fixed. As Burns and Winthrop highlighted, the future of AI in education is in the hands of individuals and institutions, and we all have a role to play as active participants rather than spectators. Gardner's collaborator Anthea Roberts said something that resonates with me both as a student and as someone training to work in this space: we have the chance to cognitively offload, and we have the chance to cognitively expand. It is our duty to figure out how to pursue expansion rather than replacement. That feels like the core question of digital literacy to me. It's not really about the tools themselves, but rather, it's about the habits of mind we build around them.

I came into this master's program thinking I'd learn frameworks for evaluating technology. Increasingly, I think the deeper work is learning how to protect the kind of thinking that technology can't, and shouldn't, replace.


Saturday, March 14, 2026

Redesigning Schools: But Why?

Image generated with ChatGPT AI.

Although artificial intelligence (AI) is becoming more prevalent in classrooms, there is still a discernible difference in how teachers and students use it. According to Tyton Partners, as cited by the College of Education at the University of Illinois Urbana-Champaign, the rates at which students and teachers regularly use AI show a considerable gap between the two groups. These statistics highlight how quickly AI is entering students’ learning environments and suggest that schools may need to rethink how curriculum and assessments are designed.

Disclaimer: Please note that I ran these ideas through ChatGPT, but you can find my original notes here. 

Effects on Learning and Assessments. AI is already shaping how students learn and access information. AI tools can generate essays, summaries, and explanations almost instantly. While these tools can support learning, they also challenge traditional assignments that rely heavily on written responses. When students can easily generate work using AI, it becomes harder for teachers to assess what students truly understand. As a result, schools may need to design assessments that emphasize critical thinking, problem-solving, and real-world applications rather than simple recall or writing tasks (Rupp, 2024).

Outdated Curriculum. Many current curriculum frameworks were developed before AI technologies existed. Much of the traditional school structure still reflects a “factory model” of education that focuses on standardized content and testing. However, scholars argue that education systems must move toward learning environments grounded in relevance, rigor, and relationships. This shift would prioritize deeper learning, collaboration, and meaningful engagement rather than narrow test-based accountability systems (Darling-Hammond, 2025).

Role of Teachers. AI also raises important questions about the role of teachers in the classroom. Rather than replacing educators, AI should be seen as a tool that supports teachers’ work. Educators can act as collaborators with AI technologies, using them to enhance instruction while maintaining professional judgment and human oversight. Teachers remain essential in guiding students, providing ethical perspectives, and supporting the social and emotional aspects of learning (McRae, 2025).

Equitable Access. Another urgent concern is equity in access to technology. Access to AI tools and digital resources is not equal across schools and communities. If curriculum and policies are not redesigned thoughtfully, these differences could deepen existing educational inequities. Schools must therefore develop policies that ensure responsible and equitable use of AI while integrating these technologies into culturally responsive and student-centered learning environments (All4Ed, 2025).

The Future. Redesigning curriculum and assessment is necessary because AI will continue to shape future careers and industries. Schools have a responsibility to prepare students for a workforce where AI technologies are increasingly common. This includes helping students develop digital literacy, critical thinking skills, and the ability to evaluate and monitor AI systems. Although AI can present risks such as bias or misinformation, education systems should not avoid innovation out of fear. Instead, schools should carefully evaluate these technologies while adapting their practices to prepare students for the future (Rupp, 2024).

The urgency for schools to redesign curriculum and assessments comes from the reality that AI is already influencing education. The challenge for educators is not whether AI will be part of learning environments, but how schools can integrate it responsibly while preserving meaningful learning experiences. By rethinking curriculum goals, assessment practices, and the role of teachers, schools can ensure that students are prepared to thrive in a world increasingly shaped by artificial intelligence.


Friday, March 6, 2026

Under Surveillance: Teaching Shaped by Cameras

Image generated with ChatGPT


My classrooms in Macao and Hong Kong had CCTVs. As a new, foreign teacher in these places, the cameras made me feel uncomfortable within those four walls. While we were assured that the cameras were there to manage students' behavior, the idea of "being constantly watched" created a sense that everything happening in the classroom might be viewed and interpreted by someone else. But over time, I started to see this experience through the lens of power and surveillance.


In her post entitled, The Danger of Facial Recognition in Our Children’s Classrooms, Nila Bala argues that schools often operate within clear power imbalances. Similar to my experience in Macao and Hong Kong, teachers have little influence over institutional policies, including decisions about surveillance. As a classroom teacher, I had little or no voice in whether cameras should be present in my classroom. Going to school, I always feared that I might snap or act in a way that would raise questions or trigger an inquiry into my classroom conduct. Fortunately, I never experienced such a situation. Still, the constant awareness of being watched created a lingering sense of caution.


I got used to CCTVs over time, but the experience changed my understanding of my teaching environment, making me more aware of how authority and surveillance shape everyday classroom practice. We teachers often work in systems where rules and policies are made without our input, but we still have to deal with the consequences in our daily work. 


Surveillance in the classroom does not only come with the installation of CCTVs. In his post entitled Are Teachers Under Increasing Surveillance, Ross Morrison McGill summarizes the three types of teacher surveillance that Damian Page explored in his paper Conceptualizing The Surveillance of Teachers: vertical surveillance, horizontal surveillance, and interpersonal surveillance. These forms of surveillance reflect consumerist and neoliberal ideologies that portray teachers as commodities: products that can be evaluated and compared, and entrepreneurial individuals who must continually improve themselves to stay competitive and employable.


But the most concerning aspect is that many teachers may not fully recognize these forms of surveillance. Without this awareness, teachers may feel pressure to constantly monitor their own work, gather evidence of their efficacy, and prove their value as professionals. Over time, this can normalize a culture in which teaching becomes performative, with an emphasis on proving productivity and accountability rather than providing meaningful learning opportunities.


Recognizing these dynamics is important. When teachers are aware of them, they can think more critically about the systems they work in and reflect on how surveillance influences not only their professional identity but also the learning environment they create for their students.


Friday, February 27, 2026

AI Is Powerful - But Are We Aware of Its Bias?



Image generated with ChatGPT

This week's reading and listening made me think about the first time I experienced AI bias in action. It was years ago in Hong Kong, and a basic search showed Filipinos mainly as domestic helpers. This wasn't totally unexpected, but it showed me how AI systems, trained on huge amounts of data shaped by human choices, can produce biased and harmful narratives. That experience really stuck with me, especially now that AI is quickly changing education.

AI tools like ChatGPT learn from massive amounts of human-created data. If that data contains stereotypes, gaps, or historical inequalities, those patterns can show up in AI responses. There are several types of biases that exist in artificial intelligence: interaction bias (how users influence systems), selection bias (who and what gets included in the data), and latent bias (hidden assumptions buried in information). In schools, this bias is concerning. It subtly influences what students see as true or normal, creating a hidden curriculum where algorithmic responses seem right and unquestionable. Artificial intelligence is transforming education both positively and negatively.

The big question here is, what can we do?
  • AI developers need to carefully review training data, allow external audits, design inclusive teams, and build tools that identify false or harmful outputs. 
  • Educational institutions can devise programs to teach students how to use GenAI. They can modify assessments to check for new ideas and thinking skills. Also, they can set rules that are easy to understand, so everything is fair for everyone. 
  • Educators play a huge role in identifying and countering power and bias. We can facilitate open conversations about how algorithms work, use real-world examples of bias, and teach students to question AI-generated content instead of automatically trusting it.
There is no denying that AI is powerful. But as Lucretia Fraga underscored, an informed user is even more powerful. By talking openly about bias and teaching students how to think critically about technology, we shift the balance. AI can shape education; we just have to ensure it does so for the better.

In my catalyst discussion this Monday, I will address the question, "What are teacher responsibilities when they don't control AI?" I will explore several things educators can consider to develop their own digital literacy to be effective in helping their students navigate AI critically and ethically. This video introduces us to Monday's discussions.

Reflect
Share a time when an AI tool may have quietly influenced your child's or student's beliefs, identity, and opportunities without anyone even noticing.

Links to the materials I explored for this post:
https://www.youtube.com/watch?v=cygpAm4ooGs
https://www.youtube.com/watch?v=eeyL9jBky68
https://www.youtube.com/watch?v=59bMh59JQDo
https://pubmed.ncbi.nlm.nih.gov/39850144/
https://www.eff.org/deeplinks/2022/05/podcast-episode-teaching-ai-its-targets





Thursday, February 19, 2026

The Teacher Advantage in an AI-driven Classroom

Image generated with ChatGPT

Personal Learning Experience with AI

In my capstone class, our professor had us pull out the main ideas from our research and use an AI application to find similar topics. The themes generative AI tools identified were pretty shallow and did not really connect to our sources all that well. While the generative AI tools could identify patterns and repeated words, they overlooked the important ideas we found by closely reading the articles. This highlighted the AI's weaknesses: it can see patterns, but it lacks the judgment inherent to us humans. 

Given the rapid developments in AI, educators today need to develop critical skills, including good judgment, data and ethical literacy, and the ability to facilitate learning with curiosity. With these skills, educators will be able to teach kids about artificial intelligence, work with it themselves, and prepare students for a digital world.

Strong Professional Judgment
In her opinion article, Tanishia Lavette Williams highlights that AI is always behind the times and only uses old information. Teachers have to carefully look at AI's answers and teach students to do the same. AI systems are often biased, and they look to the past. She notes that we need judgment, care, and cultural knowledge to teach with a human touch, which computers, though sophisticated, cannot do. To make good judgments, teachers need to think about what they are doing, carefully analyze what AI provides them, and identify bias. Teachers who lack these skills may unknowingly perpetuate unfair systems and rely too much on automated solutions. But with these skills, teachers can ensure that learning is meaningful for everyone.

Ethical Awareness and Data Literacy
In the podcast with Tom Vander Ark, Doug Fisher emphasized the importance of teaching children about AI from elementary school onward, so they can use the tool critically and responsibly. Many kids start checking out AI on their own by high school, so he thinks we should introduce it as early as second grade. Teachers should get kids curious, help them understand data, and teach them how to talk to AI. If teachers and students know how to understand data, they can check information, question facts, and make good choices. Fisher believes that to make good decisions, everyone needs to know more about data. To help teachers grow in this area, they can work together and set clear classroom rules for using AI responsibly. Without these things, students might not use AI the right way or might just believe whatever it tells them. But if these skills are taught well, they can encourage responsibility, respect for privacy, and good moral judgment.

Nurturing Curiosity and Professional Agency
Fisher also points out that some students think there's just one right answer. We need to pair curiosity with good judgment in this AI age - the ability to look at results, improve them, and make them better. Teachers can show how this is done by asking questions, creating activities where AI helps solve problems, and helping students become skilled with AI instead of just using it passively. As students learn to think critically about AI, they start to see themselves as professionals in training instead of just people trying to finish tasks.

Reflective Question:
What other skills do you think teachers need to keep up with AI and guide students as they use it?


Saturday, February 7, 2026

Rethinking Our Relationship With AI

Image generated using ChatGPT

The question, "Will AI replace the human workforce?" has been bothering me for a while now. It was somehow answered by the podcast The Latest on AI and Work in Higher Ed by Educause Shop Talk, in which the panelists explored the place, challenges, and opportunities of AI in the workplace and in higher education.

Theme 1: The importance of collaboration. The panelists highlighted that across all fields, substantial AI policies have already been put in place, many of which directly address the concerns we keep raising. The real challenge is not drafting more rules but working together to learn how to govern data consistently across the board. As several panelists noted, we need to be honest about which policies are actually helping and which ones are falling short.

Theme 2: The need to shift the narrative around AI and work. While headlines often warn that AI is “taking over” human jobs, data simply does not support that fear. Instead, the panelists argued that the real risk comes from our own resistance to developing digital fluency. When we avoid learning new tools, we create the very conditions that make us feel replaceable (a self‑fulfilling prophecy). 

What’s real? 

AI is a far more effective tool for increasing human productivity than it is as a replacement for human knowledge. The panelists urged us to view AI as a partner rather than a competitor. AI can help us brainstorm, generate ideas, and elevate our work to a more strategic level. One of Sarah Eaton’s tenets of postplagiarism supports this idea by underlining that human creativity is not threatened but rather strengthened and expanded by artificial intelligence. We are still the ones driving the process. Instead of giving our jobs to machines, the idea is to utilize AI to enhance the aspects of our jobs that require human interaction, creativity, and judgment. 

What’s the role of educators?

The panelists explained that our role as educators is not to shield students from AI but to create safe, thoughtful “playgrounds” where they can explore information, experiment with tools, and learn from one another. When we secure the integrity of the data students interact with, we free them to focus on the deeper work, such as developing values, strengthening collaboration, and sharpening their human skills. AI becomes a catalyst for learning, not a shortcut around it. This means moving beyond transactional uses of AI and instead embracing a strategic mindset. As AI becomes an increasingly integral part of our everyday lives, we therefore need to reskill and upskill, not because it will replace us, but because it will augment our capabilities. 

Next week's post?

I will explore the role of teachers next week, reflecting on the necessary qualities teachers need to leverage AI and the importance of viewing teachers from a humanistic perspective when discussing teaching and learning with AI. 

What teacher qualities do you think will be most beneficial in today’s AI-dominated school environment? Feel free to share your thoughts.

Saturday, January 31, 2026

Fakes and Reals in AI

I've been thinking a lot about how quickly AI is changing everything, from our beliefs to our way of life. AI is all around us now, affecting our daily lives, choices, and even what we think is true. It impacts both individuals and established infrastructures. 


Qadri’s research highlights several ways generative AI introduces serious ethical and societal risks, including political propaganda and media weaponization, social engineering and psychological manipulation, and economic and legal impacts of synthetic media. In the Real or Fake podcast by KRQE, guest speakers Melanie Moses and Sonia Rankin of UNM see AI as both thrilling and risky. Similar to our class discussions, Moses and Rankin note that AI can be biased and misleading yet sound so convincing that it is hard to tell what is real and what is not, and they warn that AI technology cannot simply keep spreading without any control. It is up to us to understand what AI can achieve, where it falls short, and how it gradually alters what we trust, whether it is information, companies, or even each other.


AI is not just about the technology itself but about the ethics and how it all affects us as humans. AI is forcing us to rethink how we do things, from schooling to what we even consider real. Drawing an analogy to the value of money, the speakers linked AI's potential to construct fake individuals to the construction of fake realities, underlining that the real concern is not the lying itself but the erosion of trust, which is more damaging than anything being put out there.


There is a need for us to slow down, think things through, and point out the fakes when we spot them. How does this work in school settings? Sometimes spotting fakes is easier said than done, especially since fakes come like wolves in sheep's clothing. Moses and Rankin threw out the idea that we need to experiment and see what the limits of these AI tools are, while still staying connected to tactile learning (hands-on learning and real-world experiences). After all, school should not just be about getting quick answers. It is about the process of learning: learning how to think, ask questions, and be creative.


At the same time, there's a lot of potential here. AI could make learning better, boost creativity, and show us new things we never thought about. But to shape that future and control how AI impacts us, we need to decide what kind of world we want to create with it. People will always want something real, those with a human touch in it: real voices, real art, real relationships. What we need right now is to use these tools wisely, stay informed, and speak up about how we want them to change. There are things to be careful about, but there's also a lot to be excited about. With AI, we just need to pay attention to both.


Some videos I found on the internet about misinformation, fake news and deepfakes:



Reflective Question:

When AI can create things that look real, what human skills and learning experiences should schools focus on to keep learning meaningful and authentic?



