Sunday, March 22, 2026

AI and School: Some Notes on my Digital Citizenship and Literacy Journey


Image created with ChatGPT

I’m nearing the end of my course in Digital Literacy and Citizenship as part of my master’s program, where I’ve spent a lot of time exploring how technology is shaping the way we learn. The cluster of articles this week was about AI and the future of education. I felt that particular mix of professional curiosity and personal unease that comes with reading about something that's actively changing the field we are studying. Howard Gardner called AI as fundamental a shift in education as anything seen in a thousand years, and as someone currently sitting inside an educational institution trying to understand digital transformation, that landed differently than it might for a casual reader. He suggested that most cognitive aspects of the mind - the disciplined, synthesizing, and creative - will eventually be performed so well by large language models that whether humans engage in them at all will become optional. That's a provocative claim for anyone in education, but for someone studying digital literacy specifically, it raises a direct question: what exactly are we preparing students for if the cognitive heavy lifting is increasingly outsourced?

Jim Shimabukuro pushed that question further for me. The argument is that AI's personalization, accessibility, and efficacy could shift the balance away from traditional schools as the dominant medium for academic learning, potentially within the next decade. The numbers cited are hard to sit with neutrally: some AI-first schools report that students master core curricula in two hours a day and score in the 99th percentile nationally. In my digital literacy coursework, we talk constantly about the difference between access to information and genuine understanding, and I find myself wondering whether those test scores reflect deep learning or something more like optimized performance. Mary Burns and Rebecca Winthrop speak directly to this tension, and theirs is the piece I'll probably be citing in my next assignment. Their research found that AI tools prioritize speed and engagement over learning and well-being, and that cognitive offloading and dependency can atrophy students' mastery of foundational knowledge and critical thinking - in essence, the exact skill set a digital literacy education is supposed to build. There's something deeply ironic about that.

What keeps me from sliding into full pessimism, and what I think is actually the most useful framing I've encountered across all three pieces, is the insistence that none of this is fixed. As Burns and Winthrop highlighted, the future of AI in education is in the hands of individuals and institutions, and we all have a role to play as active participants rather than spectators. Gardner's collaborator Anthea Roberts said something that resonates with me both as a student and as someone training to work in this space: we have the chance to cognitively offload, and we have the chance to cognitively expand. It is our duty to figure out how to pursue expansion rather than replacement. That feels like the core question of digital literacy to me. It's not really about the tools themselves; it's about the habits of mind we build around them.

I came into this master's program thinking I'd learn frameworks for evaluating technology. Increasingly, I think the deeper work is learning how to protect the kind of thinking that technology can't, and shouldn't, replace.


