The ancient Greek philosopher Epictetus once observed, “It's not what happens to you, but how you react to it that matters.” In a world where artificial intelligence (AI) is rapidly transforming the nature of work, that wisdom has never felt more urgent. Jobs across every sector—including education—are being redefined in real time. For many, today’s job description may soon bear little resemblance to tomorrow’s expectations.
Epictetus reminds us that our true power lies not in resisting change, but in choosing how we respond to it—with intentionality, integrity, and clarity. In the face of an AI revolution, it’s not the disruption itself that will define us, but whether we lead our reaction or let someone else script it for us.
I first picked up Mel Robbins’ Let Them Theory during a personal quest for clarity—and found it couldn’t feel more relevant to our AI moment. Robbins’ “Let Them” idea boils down to one question: what will I choose to think, feel, and do next? It’s about letting the AI wave crash around us—and then stepping forward with a decisive “Let me” mindset.
“Let me” isn’t passive detachment; it’s radical ownership. Let me define the future we’ll build together. Let me reshape the teaching profession, redesign pedagogy, rethink assessments, transform school environments, streamline procedures, and deepen relationships. We don’t abdicate our roles or desires; we clarify them, then pursue them with purpose and urgency.
In the context of education, this means we must not fixate on whether others embrace AI, misuse it, or ignore it entirely. Instead, we must define what kind of future we want to co-create—and start building it in what will be an AI-saturated world, whether that means the teaching profession, pedagogical practice, assessment, our school environments, relationships, or procedures.
We have the power to shape how our work, our educational environments, and the broader ecosystem evolve. Our reaction, individually and collectively, is what will define the present and the future. In the face of the AI revolution, this mindset isn’t just helpful—it’s a moral imperative.
Designing for Deep Thinking
AI offers powerful shortcuts—but at what cost? Concerns about cognitive offloading and attention erosion are growing. Tools that instantly generate answers or summarize content risk weakening students’ capacity for reflection, inquiry, and memory. And yet, with the right design and intention, these tools can also be catalysts for deeper thinking.
Students must be supported in using AI to stretch—not replace—their cognitive effort. That means designing learning experiences that prompt verification, curiosity, and critical engagement. Teachers play a vital role here, creating pedagogical activities, scope and sequence, and assessments that require students to challenge AI-generated responses, compare sources, and reflect on their learning process. This must be done with intent at every level of the scaffolding.
Teachers experimenting with prompt design, co-authoring assignments with AI, and crafting assessment models that reward interpretation over replication are leading this charge. The work of the University of Sydney on “assessment for learning” and “assessment of learning” offers a compelling example of teachers—professors, in this instance—beginning to shape the kind of educational future they want to see. It’s a future where assessment aligns with purpose, and AI is used not to substitute thought, but to deepen it. Their work reminds us that with intention, AI can be a thinking partner—not a thinking crutch.
Cultivating Critical Judgment
Bias, hallucinations, and black-box algorithms raise the question: how do we make sure students don’t blindly trust AI? Students need to develop AI fluency grounded in epistemic awareness—understanding how knowledge is generated, validated, and manipulated. ‘Epistemic’ refers to how we know what we know: in essence, critical judgment about where knowledge comes from and how reliable it is. In this context, it means teaching students to question not just the content AI provides, but the processes and assumptions behind that content.
This is a system-wide imperative. From rethinking digital citizenship to explicitly teaching source validation and bias detection, AI fluency must be embedded across curriculum and professional learning. Trust in AI must be built—not assumed.
We must also address the growing threat of automation bias—the tendency to over-trust automated systems even when they are wrong. This cognitive shortcut can undermine students’ ability to reason independently. Activities should guide students in comparing AI outputs with diverse, credible sources, evaluating discrepancies, and articulating why one claim holds more merit than another. These habits cultivate discernment and skepticism—foundations of healthy digital judgment. Learning to critically engage with AI must be as central as learning how to use it.
Beyond formal learning, we must also confront the real-world consequences of unchecked algorithmic influence. As author Michael Lewis has pointed out, drawing on US examples for the moment, certain platforms—particularly in the gambling sector—use AI systems that actively target vulnerable individuals, especially young men. These algorithms are optimized not for well-being, but for engagement and profit, sometimes resulting in addiction, bankruptcy, shame, and even suicide.
Education cannot ignore this.
Students must be equipped to recognize when their behavior is being shaped, nudged, or manipulated by systems designed without their best interests in mind. This is not only about critical thinking—it’s about protecting agency, identity, and mental health in a data-driven world.
Growing Human Connection
As social media platforms and chatbots increasingly simulate relationships, how do we preserve the uniquely human elements of teaching and learning? While generative AI tools increasingly simulate warmth and responsiveness, true emotional nuance, empathy, and mentorship require human judgment, shared context, and relational presence—qualities that can’t be reliably automated.
That is why our response cannot be passive; we must actively model, teach, and defend the forms of human connection that sustain emotional development across schools, families, and communities. Teachers, parents, mentors, and caregivers all remain relational anchors in young people’s lives—offering the kind of presence, understanding, and emotional guidance that no algorithm can authentically replicate.
AI can already perform some repetitive or administrative tasks with speed and reliability—providing a genuine opportunity to reallocate human time toward what matters most. The hard work ahead for education lies in identifying which tasks can and should be automated, and which must remain rooted in relational presence and human insight. In classrooms and schools, this means teachers and school leaders need time, space, and support to make those decisions deliberately—not reactively—so they can reinvest their energy in emotional presence, care, and connection. As one example, training in socio-emotional learning and relational pedagogy could become core coursework in pre-service programs, viewed as part of a holistic pre-service-to-retirement professional development continuum.
We must also acknowledge the broader mental health dimensions AI introduces and commit to truly understanding the new societal and learning context our children are growing up in. Students face increased pressure to perform efficiently, with little space for vulnerability or rest. In a culture shaped by algorithmic feeds and influencer perfection, many young people feel they must curate constant excellence—online and in life. AI-generated feedback loops can perpetuate anxiety if not grounded in supportive pedagogy and parenting. Digital wellness must be taught intentionally—helping students develop boundaries, self-awareness, and resilience in a landscape designed to hijack attention and amplify comparison.
As generative tools become more immersive and “personable,” it is increasingly pressing that we model, protect, and co-create spaces for authentic human connection. This responsibility belongs to all of us—private and public sectors, policymakers, system leaders, school leaders, parents, caregivers, mentors, and peers. Emotional presence isn’t something we can delegate to devices; it must be lived, nurtured, and safeguarded across homes, classrooms, and communities. Even in countries like Australia, where social media has, in theory, been banned in schools, the broader emotional and cognitive effects persist beyond the classroom. Digital wellness and human connection must be intentionally cultivated through culture, curriculum, and community.
Closing the Equity Divide
Without intentional design, AI risks amplifying existing divides in access, participation, and opportunity. Wealthier families and districts may benefit from early access and tailored tools, while under-resourced communities fall behind.
This is where system leaders and policymakers must act. National strategies like Singapore's EdTech Masterplan 2030—which integrates ethics, data literacy, and targeted rollout to under-resourced schools—demonstrate that inclusion can be planned and prioritized.
A growing divide is also emerging internationally between students with formal AI learning opportunities and those navigating it informally through social media. Most jurisdictions have not yet built AI learning into their curricula, apart from a few such as Gwinnett County Public Schools in metro Atlanta. This matters because many students rely on entertainment platforms for their digital knowledge, while their formal education lags behind. Without bridging this gap, AI fluency risks becoming another axis of inequity across the world. Digital, and now AI, is treated as everyone’s responsibility—yet owned by no one. That diffusion leads to inconsistent access, unclear pedagogy, and shallow implementation. We must ask: what would it look like to treat Digital–Data–AI as a formal discipline, just like literacy, mathematics, or the sciences, with its own scope, sequence, and professional expertise? That question brings necessary trade-offs, such as whether we are willing to take time away from another discipline to make it happen. If we don’t, we risk normalizing a world where AI fluency is earned by privilege, not guaranteed by design.
Survey data from across European countries shows that while 74% of students expect AI to play a major role in their future careers, only 46% feel their schools are preparing them for that reality—and just 44% believe their teachers are ready to integrate AI in a meaningful way. This mismatch between expectation and perceived preparedness reveals the need for coordinated curriculum, teacher development, and school-wide planning.
Choosing with Intentionality
AI should not prompt wholesale reinvention. Instead, it should push us to clarify what must be protected—like human relationships, foundational literacy programs, and ethical discernment—and what can be redefined. This is the creative tension teachers, policymakers, and curriculum developers now face.
A possible answer lies in moving from techno-enthusiasm to what Cal Newport calls "techno-selectionism": the deliberate process of choosing which technologies, platforms, forms of automation, and tools to adopt, delay, or reject based on core values and educational purpose. This discernment could be embedded as a Digital–Data–AI philosophy that shapes how technology is implemented across curriculum design, pedagogical practice, assessment models, education technology and data infrastructure, institutional leadership, and beyond.
Leading with Purpose and Integrity
What is the purpose of education?
It’s a big question, one that all education systems and societies must now ponder with urgency.
Sarah Eaton’s work on "postplagiarism" reminds us that the boundaries between authorship, assistance, and automation are now blurred—and she proposes that our task is not to draw hard lines, but to teach students how to engage responsibly, ethically, and reflectively with these tools. Purpose today must include helping learners understand not just how to use AI, but what it means to think with it.
In this new age, education cannot simply be about keeping pace with technology. It must be about anchoring learners in purpose, belonging, and discernment. Students must not only be trained for employment but also prepared for citizenship, relationships, and ethical decision-making in an AI-saturated world.
This vision is not new.
The Delors Report (the 1996 report of UNESCO’s International Commission on Education for the Twenty-first Century), Learning: The Treasure Within, outlined four foundational pillars of education: learning to know, learning to do, learning to live together, and learning to be. These remain more relevant than ever in the age of AI. If we lose sight of these humanist foundations, we risk reducing education to technical optimization and losing its role as a social and moral compass.
Moral passivity happens not by malicious intent, but through convenience. When efficiency becomes the primary goal, we may cease to question how decisions are made, or who is accountable. All education stakeholders (policymakers, system leaders, school leaders, teachers, parents, and students), and indeed society at large, risk outsourcing not only tasks but judgment itself.
The risk extends to neglecting broader concerns—such as how AI shapes narratives, whose values it encodes, and its environmental footprint. These are not theoretical questions; they demand real deliberation. Every jurisdiction must hold discussions about fairness, bias, surveillance, and sustainability. The results of those discussions will become the foundation for sound decision-making.
The antidote to moral passivity is conscious, transparent decision-making. Teachers and leaders must model deliberation, embedding discussions of bias, consent, and fairness in both tech use and pedagogy. Systems must require human-in-the-loop governance—not as an afterthought, but as a foundational design principle. Education, at its core, must cultivate discernment—not just performance.
Rather than responding to AI with reaction or resistance, we must respond proactively with clarity of purpose. That means not just asking what skills students need—but what kind of people we hope they become and what kind of world they will live in. It is not enough to prepare students for the future of work. We must prepare them for the future of being human in an age of machines.
And don’t forget, they need to be part of these conversations.
Step Forward with Intention
The stakes are high. No doubt about it.
If we fail to actively define our collective response to AI, the trajectory of education will be dictated by external, often profit-driven forces that are misaligned with the values we claim to hold dear.
The lessons from Epictetus and Mel Robbins remind us that personal and collective agency are not abstract ideals—they are foundational commitments. When faced with disruption, we have a choice: surrender to uncertainty or step forward with intention. We must begin shaping what education should look like when technology accelerates everything but understanding. We need leadership that reaffirms: humanity, not machinery, must define our direction.
Let us allow AI to exist—but never let it define us. Let us guide it, question it, and ensure it serves the fullness of what education can and must be. Because our future depends not on AI itself, but on how we choose to react—with clarity, courage, and care.