Feb 19, 2026

Campfire Session — AI Ethics, Safety, and Responsible Use

Learn how to create AI policies, handle moderation and privacy, and align classroom AI use with pedagogy using Flint.

Jacob Edington, Head of Customer Success at Flint

In this Flint Campfire Session on AI Ethics, Safety, and Responsible Use in Schools, we focused on how schools can move beyond reactive AI rules and instead build proactive, community-driven AI ethics frameworks. The conversation centered on practical policy design, early AI literacy, data privacy, moderation systems, and how Flint supports safe, pedagogy-first AI implementation across grade levels.

Slides from the presentation can be found here.

Content covered in this session includes:

  • Context-setting on school AI policy design, featuring a case study from Yokohama International School where a cross-stakeholder task force (leadership, teachers, staff, parents) created a shared AI ethics booklet to establish common expectations and language around responsible use.

  • Practical rollout guidance for school AI frameworks, stressing that effective AI policy requires collaboration, visibility, and ongoing discussion rather than a one-time document.

  • Embedding AI guidance directly inside Flint, including using policy documents and AI literacy materials inside chats so both students and teachers learn responsible usage within the same environment where they practice it.

  • Early AI literacy for young learners, highlighting the importance of teaching students the difference between humans and machines before heavy interaction, especially for students already familiar with voice assistants.

  • Relational boundaries with AI, discussing concerns about students forming overly personal relationships with chatbots and the role of proactive digital citizenship instruction.

  • Moderation and student safety workflows, including flagging protocols, counselor involvement, and transparent communication processes when concerning behavior appears in student interactions.

  • AI as a thought partner in the classroom, showing how assistive AI supports differentiation, accessibility (audio/oral responses), and deeper insight into student understanding rather than replacing teaching.

  • Pedagogy-first AI integration principles, distinguishing assistive AI (questioning, analysis, scaffolding) from generative shortcut-seeking use, and emphasizing that instructional goals should determine AI use—not the tool itself.

  • District-wide AI literacy playbook strategies, including classroom posters, shared prompting frameworks, and school-wide initiatives that normalize responsible AI habits.

  • Collaborative AI learning culture, proposing events where students and teachers share AI projects to encourage low-risk experimentation and peer learning.

  • Data privacy and retention discussion, clarifying that student data is not used to train models and explaining the contained environment approach to school AI safety.

  • Mission & background and policy context inside Flint, demonstrating how schools can embed institutional values, policies, and branding so Sparky tailors responses to local expectations.

  • Moderation customization capabilities, allowing schools to define how AI responds to harmful language, self-harm concerns, or policy violations.

  • Future direction toward student profiles and automated accommodations, aiming for context-aware support across subjects while maintaining teacher oversight.

  • Overall shift from restriction to responsible empowerment, reinforcing that safe AI adoption comes from education, transparency, and shared norms—not banning tools.

  • Community collaboration and next steps, encouraging continued discussion, parent communication resources, and ongoing refinement of AI practices as schools learn together.

Got more questions, comments, or feedback for this topic? Feel free to raise them within the Flint Community.

Join our next Campfire Session 🏕️

Subscribe to our events calendar to be notified when upcoming Campfire Sessions and other Flint events get scheduled.

See Events Calendar

Introduction • 00:00

  • Jacob introduces the session and agenda.

Teacher shareout • 01:37

  • Nick from Yokohama International School presents the school's AI Guidelines and emphasizes creating a cross-stakeholder task force to develop a physical booklet for shared understanding of AI ethics.

Open forum discussion on AI ethics in education • 08:46

  • Participants share experiences with student and teacher engagement using Flint, including deploying AI documents into chatbots for both staff and students to learn AI literacy and ethics.

  • A Q&A opens the floor for concerns about child interaction with chatbots, especially in early education, and requests for guidance on curriculum alignment and ethics.

  • Concern about relational boundaries with Sparky is raised, prompting consideration of how students interact with the AI and whether relationships are becoming too personal.

  • Participants highlight how Flint supports differentiation, oral and audio accessibility, and how it helps teachers identify what students know beyond written work.

  • James Bender shares a classroom example using Flint to simulate electoral systems, highlighting the value of assistive AI in data gathering and deeper questioning.

  • The group discusses using AI to teach responsible use, moving away from merely providing quick answers, and highlighting Flint and Sparky as prompting tools to guide students toward next steps.

  • A district-wide AI literacy initiative is described, including developing a classroom-wide playbook and placing visible posters in every classroom and on monitors to keep AI concepts in sight.

  • A detailed explanation of how Flint supports mission/background context at multiple levels, including school context, policy integration, and watermarking for image provenance.

  • The group discusses future features and a plan to differentiate student accommodations, emphasizing a unified approach and the potential for automated, profile-driven support.

Conclusion • 58:00

Learning feels different when it fits you.
