Leading AI in Schools: Navigating the Three Lanes
- Ryan Smith

- Oct 16
- 5 min read

Artificial intelligence isn’t coming to schools — it’s already here. Students use it to study, summarize, and create, while teachers and staff experiment with it to plan lessons, communicate with families, and manage workload.
For education leaders, the question isn’t whether to use AI but how to lead when everyone is using it in different ways.
AI is already reshaping how students learn, how educators work, and how systems respond. The opportunity, and the risk, for schools lies not in the tools themselves but in leading their use with purpose. That kind of leadership requires coherence, clarity, and a shared framework for action.
I’ve found it helpful to imagine three lanes: policy, teaching and learning, and productivity. Each has a purpose and moves at its own speed, but together they form a smooth, forward path for innovation. The lanes depend on one another: policy gives direction, teaching and learning brings purpose, and productivity sustains both.
1. The Policy Lane: Guardrails for Leading AI in Schools
This lane builds the trust and clarity that make innovation possible. Students and staff are already exploring AI, often without clear direction. Policy gives everyone a shared understanding of what’s allowed, what’s protected, and what’s expected.
Start simple:
- Define your principles. Keep them grounded in values like privacy, transparency, and human oversight.
- Include early adopters. Teachers, tech staff, and even students who already use AI can help shape smart, realistic guidelines.
- Keep it living. AI changes fast. Review policies every year and adjust as understanding deepens.
Policy shouldn’t feel restrictive; it should feel reassuring. When done right, it gives people confidence to explore responsibly within shared boundaries.
In the Bellflower Unified School District, we’ve tried to take this lane seriously. Our Board of Education's policy on artificial intelligence establishes clear expectations around safety, ethics, equity, and data privacy, while still encouraging exploration within responsible guardrails. It’s built on a simple belief: structure enables innovation. Clear expectations make it safer for people to try new things.
2. The Teaching and Learning Lane: Frameworks for Transformation
If policy sets the guardrails, teaching and learning define the purpose. This is the most challenging lane to navigate because it is not about how teachers use AI, but about how students learn with it.
AI will not replace teaching; it expands what is possible for learners. Many students already use it fluently, while others have little access or understanding. The challenge is not introducing AI; it is doing so with equity, intention, and depth. Schools must build frameworks that develop AI-literate learners and support educators as AI-confident guides.
AI literacy begins with strong foundations. Students must first be secure in essential skills such as reading, writing, and mathematics, and develop habits of critical and creative thinking. These core abilities give meaning to what AI produces and allow students to question, verify, and build on it rather than depend on it.
AI literacy should also begin with thinking, not tools. In the early grades, that means curiosity, logic, and fairness—the foundations of computational thinking. As students grow, it includes understanding how algorithms learn, where bias can appear, and how to use AI responsibly. By high school, most already know how to prompt and create. The focus shifts to reflection and originality, understanding not just what AI can do, but how to use it thoughtfully, creatively, and ethically.
The age and developmental readiness of students also matter. Younger students can explore pattern recognition, reasoning, and digital citizenship long before using generative AI tools directly. Introducing such tools should be gradual and purposeful, with guidance that ensures safety, comprehension, and context.
Students also need to learn to verify what AI produces. Even convincing responses can be wrong or biased. Reading critically, checking sources, and asking, “Does this make sense?” are habits that sustain truth and trust in an AI-powered world.
Across grade levels, AI learning should strengthen the literacies that matter most:
- Information literacy – evaluating accuracy, sources, and bias
- Verification and discernment – recognizing when AI output may be flawed or incomplete
- Ethical reasoning – understanding authorship, originality, and responsible collaboration
- Digital citizenship – protecting privacy and using data ethically
- Creativity and communication – prompting, writing, and revising with clarity and voice
Teachers do not have to be experts before they begin. They can learn alongside students, modeling curiosity, transparency, and discernment. By showing how they explore AI and where it falls short, educators demonstrate responsible innovation in real time.
AI integration is not a program; it is a mindset. It begins with small, safe experiments, open reflection, and trust to learn together. In schools that cultivate this kind of culture, AI becomes more than a tool. It becomes a catalyst for connection, creativity, and deeper learning.
3. The Productivity Lane: Time, Trust, and the Human Element
This is often the easiest on-ramp for schools because it meets people where they are. It’s where AI can help teachers, support staff, and administrators reclaim something they never have enough of: time.
Educators across every role carry more responsibilities than hours in the day. AI can help by taking on repetitive tasks and organizing information, creating more space for work that requires human connection and professional judgment. Whether designing lessons, coordinating schedules, supporting students, or managing operations, AI can simplify the process and give people back time to focus on what matters most.
Still, we must name a truth: many employees worry that AI could replace them. That fear is real, and leaders must address it directly.
AI should never replace the relationships, care, or creativity that define education. It should remove friction so people can do their best work, not replace the people doing it.
That’s why leadership matters in this lane. Providing clear guidance, modeling ethical use, and celebrating examples where AI amplifies human work helps shift the narrative from fear to trust. When educators and staff see AI giving time back rather than taking work away, they begin to view it as a support, not a substitute.
Ultimately, productivity isn’t about doing more; it’s about creating space to do what matters most. When schools use AI to simplify the work, they strengthen the human connections at the center of it.
Driving Forward
Most conversations about AI in education are reactive. They chase headlines about cheating, new tools, and sudden breakthroughs. Lasting change does not come from reacting. It comes from building a system where policy, teaching, and productivity work together intentionally.
When all three lanes align, AI becomes a thoughtful, balanced, and learner-centered part of how schools operate.
- Policy sets the guardrails that make innovation safe.
- Teaching and learning give the work meaning and direction.
- Productivity creates the time and capacity to sustain it.
Together, they form a system where AI supports—not replaces—the relationships, creativity, and judgment at the heart of education.
The opportunity ahead is not to chase what is new, but to shape what lasts.
Dr. Ryan Smith, with more than 20 years of leadership experience in public education, is dedicated to ensuring every student receives an outstanding education and reaches their highest potential. Through his current service as Deputy Superintendent in the Bellflower Unified School District and previous experience as Superintendent of the Monrovia Unified School District, his commitment to putting students first has driven success and positive change across various schools and districts. Learn more about Dr. Smith at his website, on LinkedIn, or X.





