You’re watching TV. Every other ad talks about how a business is “powered by AI.” You scroll through your feed—AI-generated portraits, AI-written captions, AI-curated content shaping what you see. Driving down the highway, even the billboards pitch “AI-enhanced logistics” or “smarter with AI.”
It’s everywhere. But… what does that even mean?
Artificial intelligence has quietly but steadily intertwined itself into everyday life over the past 25 years—from early spam filters to facial recognition and now to generative models like ChatGPT, which power everything from personalized Netflix trailers to Snapchat’s built-in chatbot, My AI (parents, be aware: it often appears as the first “friend” in your child’s account). And that doesn’t even touch on what’s possibly coming next: agentic AI systems that make decisions and take action on their own.
This evolution brings both benefits and trade-offs. I appreciate my email filter—but not when it misses a key message. Facial recognition lets me unlock my phone and pick up calls while juggling four kids and a puppy, but it’s unsettling in the context of state surveillance. Netflix’s recommendations make finding a movie easier, yet I wonder what’s lost when algorithms decide what I like. And My AI? It raises real questions: what are our kids sharing? What kinds of relationships are they forming with these bots?
For parents and teachers, it’s becoming increasingly important to move beyond the headlines. AI isn’t just another tech phase. It’s a foundational shift—redefining how we live, learn, work, and connect. And while we may not have all the technical answers, we need to stay present in the conversation—because opting out is not really an option.
Let me be clear: I’m not a computer scientist. I don’t pretend to be an expert in the mechanics of AI. But I am a parent, teacher, and advisor—someone watching how these tools are reshaping the social and emotional infrastructure of our youth, ourselves, and our communities. So, before we go any further, let’s get on the same page about what we actually mean when we say “AI.”
What Does “AI” Actually Mean Right Now?
Artificial Intelligence is not a single tool—it’s many technologies. It includes machine learning, computer vision, voice recognition, robotics, recommendation systems, and more. But in everyday conversation—especially in classrooms, boardrooms, and the media—“AI” usually refers to generative AI (GenAI): tools like ChatGPT, DALL·E, Midjourney, and GrammarlyGO that generate text, images, code, or audio based on prompts. These are powerful and visible, but they’re only the tip of the iceberg.
Behind the scenes, AI is already shaping society through predictive analytics, process automation, and decision-support systems. Whether it’s a healthcare algorithm flagging a diagnosis, a learning platform adapting to a student’s pace, or a credit card company detecting fraud, these AI systems are operating quietly but pervasively. Understanding AI today means expanding our lens—beyond the chatbots—to the infrastructure and intent behind these tools.
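If you’re curious what “operating quietly” looks like under the hood, here is a deliberately tiny sketch in Python: a toy message filter that learns word patterns from a handful of invented examples. It is nothing like the real systems banks or email providers run, but it illustrates the core move behind much of this quiet AI: scoring new inputs against patterns learned from labeled data, rather than following hand-written rules.

```python
# Toy sketch: the pattern-learning idea behind "quiet" AI such as spam or
# fraud filters. All data here is invented purely for illustration.
from collections import Counter

spam = ["win a free prize now", "free money claim your prize"]
ham = ["team meeting moved to noon", "can you review the draft"]

def word_counts(messages):
    """Count how often each word appears across a set of messages."""
    counts = Counter()
    for message in messages:
        counts.update(message.split())
    return counts

spam_words, ham_words = word_counts(spam), word_counts(ham)

def spam_score(message):
    """Each word votes: more common in spam -> +1, more common in ham -> -1."""
    score = 0
    for word in message.split():
        score += (spam_words[word] > ham_words[word]) - (ham_words[word] > spam_words[word])
    return score

print(spam_score("claim your free prize"))    # positive score: flagged
print(spam_score("review the meeting draft")) # negative score: passes
```

Real systems learn from millions of examples and far richer signals, but the principle (learn from data, then score and flag) is the same one at work when your bank texts you about a suspicious charge.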
AI as a General-Purpose Technology
AI is recognized as a General-Purpose Technology (GPT, not to be confused with the “GPT” in ChatGPT), like electricity or the internet, fundamentally shifting societal structures across all sectors (Bovell, 2024). Education experts Neil Selwyn and Wayne Holmes emphasize how AI transforms educational environments, profoundly impacting teaching methods, student assessment, and even classroom interactions (Selwyn, 2022; Holmes, 2023).
Yet these transformative benefits demand that we think deeply about the ethical implications. Philosopher Luciano Floridi warns that excessive reliance on algorithmic decision-making could erode human moral agency, leading to cognitive atrophy and passive consumption (Floridi, 2018). Striking a balance between using AI for cognitive enhancement and avoiding mental complacency is something we must all reflect on.
From Hype to Reality: Companies Jump on AI
Every company seems eager to brand itself with AI, driven by breakthroughs and market pressure. Nearly half of S&P 500 companies prominently featured AI in early 2025 communications. While some engage in “AI washing” (slapping the “AI” label on products that barely use it, or don’t use it at all, confusing people and making it harder to know what’s real), significant real-world applications exist across finance, healthcare, retail, and government services, making AI an unavoidable element in daily interactions.
However, not all promises about AI’s capabilities have materialized. Education technology critic Audrey Watters argues that despite bold claims, many AI tools remain underdeveloped and prematurely marketed, falling short of expectations (Watters, 2024). Yet, Ethan Mollick emphasizes that even if current AI systems represent their weakest form, their societal impact remains undeniably profound (Mollick, 2023).
Everyday Snapshots of AI Integration
Here are a few examples of how AI has been integrated into daily activities:
• Smart Homes: AI is becoming the invisible choreographer of home life—voice assistants like Alexa or Google Assistant manage calendars, control lighting and temperature, suggest recipes based on what’s in your fridge, and even tell bedtime stories powered by generative language models.
• Entertainment: From Netflix’s recommendations to TikTok’s addictive For You feed, AI curates and amplifies content based on your behavior—while platforms like Meta are now introducing virtual AI personas, some modeled after celebrities, that interact with users like digital characters in real-time.
• Health: AI supports doctors in diagnostics and administrative tasks. For example, DeepMind’s AlphaFold significantly advances medical research by accurately predicting protein structures, accelerating breakthroughs in biology and medicine. Meanwhile, AI scribes powered by GPT-4 are now drafting patient notes and saving physicians hours per week.
• Agriculture: AI-driven technologies are revolutionizing farming practices. Precision agriculture utilizes AI for crop monitoring, soil analysis, and predictive analytics, leading to increased yields and sustainable practices. For instance, AI-powered drones can assess crop health, enabling timely interventions.
• Transportation: The transportation sector is undergoing a transformation with AI at its core. From autonomous vehicles to intelligent traffic management systems, AI enhances safety and efficiency. Self-driving cars, for example, rely on AI for navigation, obstacle detection, and decision-making processes.
• Finance: AI powers the back end of banking and customer service, from chatbots that answer account questions to real-time fraud detection systems that spot unusual transactions and alert you instantly, making financial services faster, smarter, and more secure.
• Retail and Shopping: AI enhances how we shop, with personalized product recommendations, dynamic pricing algorithms, and even cashier-less stores where AI-driven sensors track what you pick up and charge you automatically when you walk out—making convenience the new norm, but at what expense to workers and society?
The Global AI Race: Competition Shaping Our Children’s Tools
Beneath the surface of AI’s rapid spread is an intensifying geopolitical struggle that will shape the tools our children use—and the values embedded within them. The U.S. and China are not just racing to develop better AI; they are competing to set the global rules, standards, and economic models for this technology.
American tech giants like OpenAI, Google DeepMind, Meta, Microsoft, and Amazon Web Services dominate global deployment. Their tools power everything from school platforms to customer service bots. Meanwhile, China, with companies like Baidu, Tencent, and ByteDance, has been filing generative AI patents at a pace six times faster than the U.S., accounting for over 70% of the global total in the past decade (WIPO, 2023). This race goes far beyond innovation—it’s about data governance, censorship, algorithmic values, and influence over future generations.
Europe, by contrast, is further along in articulating a coherent societal framework for AI governance. With the GDPR already in place and the AI Act coming into force in phases, the EU has also prioritized public education around data rights and algorithmic accountability.
What’s less visible but increasingly urgent is how economic and regulatory tactics are being used to shape the AI landscape. The U.S. administration has taken a more protectionist turn, using tariffs and trade leverage not only over AI chips and compute power but also in negotiations with Europe. American policymakers are pressing for exceptions or alignment that favor Big Tech—arguing, in effect, that U.S. innovation leadership should not be hampered by European compliance models. For example, under current EU rules, certain data practices that power U.S. AI models may not be legally deployable in Europe, putting schools and companies in a complex crossfire.
Canada’s AI Moment: Local Values and Digital Sovereignty
In Canada, the question remains: which path will we take? The Artificial Intelligence and Data Act (AIDA), introduced as part of Bill C-27, was shelved with the prorogation of Parliament. With a new government, it’s unclear whether Canada will continue mirroring the EU’s rights-based model or pivot toward a more American, innovation-first approach. One route emphasizes precaution and public oversight; the other accelerates deployment and market readiness.
Meanwhile, the AI supply chain itself is fragile and climate-intensive. Training large language models like GPT-4 demands immense compute power and vast quantities of water for cooling. Microsoft and Google data centers have already faced scrutiny for their energy and water demands, with some models consuming millions of liters of fresh water during training cycles. Yet these impacts are rarely part of mainstream AI conversations. The urgency of climate action—so prominent just years ago—is being drowned out by AI hype, tech nationalism, and the broader geopolitical conversation.
This raises a systemic dilemma: why is the burden for “ethical AI” always placed on users—families, teachers, and citizens—rather than on the producers? Much like the bottled water industry asked consumers to recycle instead of shifting away from single-use plastics altogether, we now risk creating a narrative where individuals must “use AI responsibly” while tech companies face little pressure to build sustainable models in the first place.
We cannot allow this pattern to repeat. If children are growing up in an AI-powered society, then governments, tech developers, and regulators have a responsibility to create conditions that are safe, inclusive, and sustainable by design. That includes:
• Prioritizing green AI innovations—energy-efficient models, transparent carbon reporting, and incentives for low-impact AI design.
• Demanding supply chain visibility, especially in critical minerals, chip manufacturing, and data center operations.
• Embedding youth and education impact assessments into any national AI strategy, akin to child-impact assessments in other public policies.
The global AI race is not just about economic dominance; it’s about what kind of digital and physical world our children will inherit. And if we don’t actively shape that world, it will be shaped for us—not by communities acting in the best interests of the next generation.
AI’s Dual Potential: Liberation or Subjugation?
In his thought-provoking essay Machines of Loving Grace, Dario Amodei, CEO of Anthropic (the company behind Claude), grapples with AI’s profound duality—its potential to liberate or subjugate, to elevate learning and efficiency or deepen inequality and dependence. This is not just about what technology can do; it’s about the choices we make in how it is developed, implemented, and governed.
To understand the stakes, we must revisit Neil Postman’s classic Technopoly: The Surrender of Culture to Technology. Writing in the early 1990s, Postman warned of a society where technology no longer supports human purposes but begins to replace them. He argued that we risk surrendering our cultural narratives, moral judgment, and civic responsibilities to the authority of machines and algorithms. “Technological change is not additive,” he wrote. “It is ecological.” Each new technology doesn’t simply fit into our lives—it reshapes the very fabric of how we think, learn, and decide.
That warning feels chillingly prescient in the age of generative AI.
Postman’s core message is more urgent now than ever: we must not abdicate our role as interpreters, critics, and stewards of technological change. AI may be powerful, but it does not—and cannot—replace human responsibility. Parents and teachers must be the ones asking: Is this tool aligned with our values? Is it empowering our communities? Is it built with equity, sustainability, and human dignity in mind?
We cannot outsource moral judgment to machines. Nor can we allow policy decisions, curriculum choices, or parenting strategies to be passively shaped by market trends or platform incentives. If we want AI to reflect humanistic values, we must actively shape its trajectory—not as resistors of innovation, but as co-authors of a future where technology enhances, rather than erodes, our shared humanity.
In short, Postman reminds us that awareness is not enough. We must advocate. We must demand transparency, prioritize pedagogy over product, and center child and societal well-being above efficiency metrics or novelty. This isn’t about resisting AI—it’s about guiding it. And that guidance begins with reclaiming our agency as citizens, parents, and teachers.
Keeping Humanity in the Loop
A colleague once remarked, “You can bring a horse to water, but you can’t force it to drink.” It’s a familiar phrase, but in the context of AI, it surfaces deeper questions: Should everyone embrace this technology? What’s holding them back? And perhaps more importantly—what does meaningful engagement with AI actually look like?
For teachers and parents, “drinking” isn’t about jumping on every new tool or trend. It’s about choosing wisely. It means adopting AI with discernment and intentionality—using it to amplify creativity, deepen critical thinking, and model ethical responsibility. It means staying curious, asking better questions, and helping our children build the judgment to do the same.
AI’s rapid expansion calls for presence, not panic. We don’t need to have all the answers. But we do need to show up—with clarity, humility, and moral courage. Our role is not to chase the latest update, but to keep humanity in the loop—to stay grounded, model integrity, and protect what matters most: connection, meaning, and care.
In a world flooded by noise, we are the ones holding the umbrella. We are the ones helping our children walk through the storm with confidence and agency.
Because ultimately, it’s not the technology that defines the future—it’s the choices we make today, together.
References:
· Bovell, S. (2024). LinkedIn Post.
· Selwyn, N. (2022). Should Robots Replace Teachers? Wiley.
· Holmes, W. (2023). AI in Education. Springer.
· Floridi, L. (2018). The Ethics of Artificial Intelligence. Oxford University Press.
· Watters, A. (2024). Substack Newsletter.
· Mollick, E. (2023). "AI in Business and Society," Wharton School.
· WIPO. (2023). World Intellectual Property Organization Report.
· Government of Canada. (2025). Investment in Cohere.
· Amodei, D. (2024). Machines of Loving Grace. Anthropic.
· Wasson, B. (2023). Data Literacy Work at the Norwegian Parliament, SLATE, University of Bergen.
· Stanford HAI. (2023). What is Artificial Intelligence? Stanford Institute for Human-Centered AI.
· UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence.
· OECD. (2023). OECD AI Literacy Framework.
· Brookings Institution. (2023). Why AI Is a General-Purpose Technology.
· Cohere. (2025). Canada’s AI Infrastructure Startup.
· UNICEF. (2021). Policy Guidance on AI and Children.
Further Reading: