Guest Article
The Question of Artificial Intelligence and the Christian School (CESA)
This article was originally published in the CESA Blog by Joshua Crane and Jeffrey A. Smith from The Stony Brook School. It has been republished here with their permission.
CESA (Council on Educational Standards & Accountability) is committed to the rigorous application of aspirational standards through the relationships and expertise of like-minded peers, those who believe that excellence in all areas of our schools and Christian witness are complementary, not contradictory. School leaders who are committed in humility to the continuous pursuit of the CESA Standards of Accountability©, accountability to the Council, and the responsibility to other CESA schools should be encouraged to learn more about joining this movement.
While Flint doesn't exclusively serve Christian schools, our view is that AI should lead to more personalization at the school and teacher levels. So, we’ve been excited to see how religious schools like Christian schools have made use of the customizability of Flint to cater to their needs.
Introduction
ChatGPT—Or, How We Got Here
The introduction of ChatGPT has clearly touched a nerve within our society in a way that previous iterations of AI have not. After all, people talk to Siri and Alexa all day long and, aside from passing fears that maybe we are being listened to more than we think, we have integrated these technologies into our routines. Similarly, we have no problem when Netflix suggests programming based on what we have previously watched. However, ChatGPT—a technology built on a large language model (LLM), the architecture characteristic of generative AI, with natural language processing (NLP) capabilities—can go far beyond the simple commands that Siri and Alexa can execute ("Hey Siri, play 'Can't Buy Me Love' by the Beatles") to robust answers to complex existential questions ("Why do I feel so sad all the time?").
Additionally, and of more import to educators, ChatGPT can skillfully complete the homework assignments that used to be staples of a secondary school education—in particular the five-paragraph essay. In a matter of five to ten seconds, a student can produce a well-written paper in response to a wide array of writing prompts without having to do much more than type in "Write a five-paragraph essay for a high school English paper highlighting Byron's use of irony in his poem 'Don Juan'." The ChatGPT paper will easily pass muster in terms of complexity of thought, development of ideas, and textual citations. And unless the teacher is intimately familiar with the student's writing, the paper could easily pass as the student's own. This is a major development indeed.
In layman's terms, as far as our experience with the technology is concerned, ChatGPT-4 has made artificial intelligence a little less artificial and a little more human—and this is precisely what has people so concerned. Theoretically, the more human the technology becomes, the more dangerous it becomes to the human race—from concerns about job replacement (some estimates suggest that as many as 60% of the jobs that exist today could be replaced by AI) all the way to robots wiping humans off the face of the Earth and taking over the planet (more on this a little later). It is important to note that from a historical perspective, these fears of technological and scientific breakthroughs running amok are nothing new under the sun (Ecclesiastes 1:9b).
During the Industrial Revolution of the 19th century, the Luddites, textile workers who took their name from the possibly apocryphal figure Ned Ludd, staged a brief but violent uprising in which they smashed the new machines that were costing them their jobs. It is from this episode that we get the epithet "Luddite" to describe anyone who is resistant to technology. Perhaps no work of fiction better delineates the fears, questions, and implications of technological development than Mary Shelley's 1818 novel Frankenstein, which is astoundingly relevant today. The novel explores the dangers and ethical considerations of unbridled scientific and technological pursuits. It tells the story of Victor Frankenstein, who became obsessed with alchemy and used his skills and abilities to create a human being. Shelley describes the moment Frankenstein's creation came to life: "I saw the hideous phantasm of a man stretched out, and then, on the working of some powerful engine, show signs of life, and stir with an uneasy, half vital motion. Frightful must it be; for supremely frightful would be the effect of any human endeavour to mock the stupendous mechanism of the Creator of the world."4 Frankenstein's creation would become a terror to its creator and to the fearful, pitchfork-wielding townspeople who drive the monster away.
Mary Shelley recognized something which would have been much more readily accepted in the 19th century than it is today—namely, that life ultimately comes from God. His are the "stupendous mechanisms" that bring forth life. Anytime we seek to create human characteristics through technology, we are treading squarely on God's domain. And it is here that we should realize that conversations about AI are every bit as theological as they are technological. There is a lot in play here, and it is critical that before we talk about implementation, we first build the foundations of our understanding on the Rock so that when the winds of fear, hype, and danger come, our house will not fall with a great crash (Matthew 7:24-27). As we build these foundations, we will discover that we have an amazing opportunity to engage our students in levels of thought that inevitably drive them to the core of their beliefs—either strengthening those with existing faith, driving the unsure to dig deeper, or unsettling the overconfident in their unbelief.
Theological Foundations
As Christian school leaders, we need to employ our theology to consider all things, even robots. The creation of technology with human characteristics and doomsday projections of a world dominated by such technology should create a healthy dose of unease in us for two primary reasons.
As exciting as new technology is, we should harbor a native suspicion about the intentions of the creators of this technology. Since the Garden of Eden, we have all been susceptible to the temptation to try to be like God. Adam and Eve were deceived by Satan (who had earlier made his own play to be like God) into believing that they could achieve at least coequal status with their Creator. Remember Satan's great lie when enticing Adam and Eve? God doesn't want you to eat this fruit because then you will be like Him. Ergo, don't you want to eat this so that you can be like Him? (Genesis 3:5) Since that time, man's fallen impulse has been to control, dominate, and exalt himself as God. Attempts to create human-like technology are the latest iteration of this quixotic quest and in many ways represent its apotheosis. After all, what could render God more irrelevant than being able to replicate, at least in some ways, the pinnacle of His creation?
Again, there is nothing new under the sun. The Scriptures tell us that the intent behind the construction of the Tower of Babel was to utilize technology to build a tower that would unite humanity and theoretically remove mankind's need for dependence on God (Genesis 11:4). As Christians, this knowledge should temper our unbridled enthusiasm for and adoption of technological tools. When fallen people create powerful technology to put in the hands of other fallen people, we would do well to remember Paul's statement in 1 Corinthians 6:12, "All things are lawful for me, but I will not be dominated by anything."
To best prepare students for an AI future, Christian schools should be teaching a robust concept of freedom. I am not talking about freedom in a political or a religious practice sense, as important as these may be, nor am I talking about the kind of freedom that enables one to do whatever one wants, whenever one wants to do it. Rather, I am talking about the kind of freedom that is internal—the freedom that comes from Holy Spirit powered self-control. What does it look like? It looks like the freedom to be able to say “no” to lesser things in favor of higher things, like saying “no” to checking your news feed for the fifth time in a day in order to spend time in prayer or meditating on God’s Word. It looks like the freedom to say “no” to sin, like fasting from social media for a season in order to quell envy. It is the freedom to know and follow the Shepherd’s voice above all the other voices in our lives. As AI tools add to the cacophony and become more and more prevalent in our lives, will we be able to put them down when they threaten to consume more of our disposable time? As is the case with all technology, it will mean maintaining a Gospel-centered independence from anything that threatens our shalom. This kind of freedom is only the product of a focused Christian discipleship of which the Christian school has much to say.
The second source of disequilibrium is the realization that deep down we all know there is an order to creation. Man is the pinnacle of God's creation, and all things on earth are subject to him. This is the creation mandate of Genesis 1:28, where God calls mankind to subdue the earth and have dominion over it. It has always been a central tenet of the Christian school that our programs do not exalt man over God, but a close second should be that we do not exalt other created things as equal to or above man. This is so critical at this moment in history. Man is the pinnacle of God's creation and always will be because he bears the very image of God. Nothing else on planet earth has this distinction. Our humanity is sacred and needs to be elevated accordingly despite our culture's many protestations to the contrary. Christians cannot lose the battle for our minds and must remain confident regarding what God has declared about humanity's place in the cosmos. It will help our students ward off the doomsday predictions and keep proper perspective.
Understanding the Technology
A central reason there is so much fear about and avoidance of AI within educational settings has everything to do with not understanding how it works. While ChatGPT's inner workings are modeled after the human brain, it is not the same as a human brain—not even close. ChatGPT and generative AI run on statistical probabilities. It's math combined with clever programming concepts. Here is how it works in a nutshell: unlike a Google search, which returns matches from a vast index, ChatGPT answers your question by predicting, word by word, the most statistically likely continuation of your prompt based on its context and intent. ChatGPT is neither omniscient nor discerning—it's just really good at probabilities and odds.
This sentence-completion probability model is only one part of the technology; the other part accounts for the contextual, conversational way in which it answers our queries. This was accomplished through thousands of simulated conversations entered by human trainers, who played the role of both the chatbot and the human being so that the model could learn from the data to predict the appropriate response to a question. It learns the patterns, contexts, and meanings of various inputs so that it can predict how it should respond. Again, the focus here is on probabilities and odds based on models of past conversations fed to it by human hands. ChatGPT produces the highest-probability response type for the context of your question. It doesn't read your mood or the moment, nor does it employ wisdom—instead it scours its learned conversational patterns and selects the best one, which, when you think about it, is very robotic.
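To make the "probabilities and odds" concrete, here is a deliberately tiny sketch in Python. It is not how ChatGPT is actually built (real models use neural networks trained on billions of parameters, not word-pair counts), but it illustrates the core idea of predicting the most statistically likely next word from observed frequencies:

```python
from collections import Counter, defaultdict

# Toy training text: the model only "knows" what it has seen here.
corpus = "the cat sat on the mat and the cat ran".split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_likely_next(word):
    """Return the highest-probability next word and its probability."""
    counts = following[word]
    total = sum(counts.values())
    best, count = counts.most_common(1)[0]
    return best, count / total

word, prob = most_likely_next("the")
print(word, round(prob, 2))  # "cat" follows "the" in 2 of its 3 occurrences
```

A model like this has no understanding of cats or mats; it only tallies what tends to come next. Scaled up enormously and given conversational training data, the same predict-the-next-word principle yields the fluent responses that feel so human.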
ChatGPT is a remarkable piece of technology. Its ability to parse an unfathomably high volume of data at the speed it does far exceeds anything the human brain can do. However, it must be noted that from its inception, ChatGPT has been dependent on human intervention. It possesses derivative intelligence, not primary intelligence. It is only as good as the data it is given by humans and the conversational etiquette programmed into it by humans. It cannot teach itself. The proliferation of AI technology in our schools and daily lives will provide us with ample opportunities to thoughtfully juxtapose it with the capabilities and characteristics of the imago Dei and marvel afresh at what God has done in creation.
In his seminal work Teaching Redemptively, Donovan Graham outlines the characteristics of the imago Dei in the students in our classrooms that our educational programs must acknowledge lest we be guilty of reductionism and therefore educational malpractice. These are never more relevant than they are right now. Graham’s list includes the following: Active and Purposeful, Creative, Rational, Free and Responsible, Moral, Faithful, Relational, Merciful, Loving (7). When you put ChatGPT capabilities up against these characteristics, you realize just how wide the chasm between the technology and the imago Dei really is. I would recommend performing these comparisons from time to time as a way of fortifying reality amidst the hype.
We have spent a disproportionate amount of time thus far on the philosophical aspects of AI because this is the hardest work to do. In comparison, discussions of implementation of generative AI in Christian schools are relatively easy and quite fun. However, if we fail on the former, we will inevitably foul up the latter. Secular schools will likely have discussions relative to the ethics of AI as they develop policy and approach, but they will not be able to go down to the roots. Christian schools with their understanding of creation, dominion, and purpose have the ability to speak sanity and life into a highly disruptive technological development. This is our time to lead—if we will take it.
Getting Started
I’d like to share our experience at The Stony Brook School, in the hope that it may offer a framework to approach the implementation of generative AI in the Christian school setting. Here are the concrete steps we took to get started with AI at our school.
Convening: The first thing we did was put our best thinkers on this topic. Forming a task force on AI will draw the philosophers, technologists and the curious from your community together—and you need all three. We started with the premise that we would be implementing generative AI; the questions were, how would we do it, and at what scale? We opened up the discussion for a robust exploration of our purpose and mission as an institution, and how ChatGPT fit into the mission.
Exploring: We set as our first task to understand the technology to the very best of our ability—and that required investment. We purchased a subscription to ChatGPT 4 and Khanmigo as a way of giving the task force free rein to investigate and experiment. Our experience was that even those who would call themselves technology novices were able to obtain a level of proficiency with these technologies in under an hour. One of the beauties of this technology is its ease of use.
Experimenting: To put some focus and motivation to our learning, we set a few deliverables that we worked on together. The first was a presentation to our faculty of actual lessons we had conducted in our classrooms incorporating ChatGPT. We shared what went well and what did not and opened it up for questions. Ideas and curiosity were sparked, and more teachers became open to the possibilities of AI in their classrooms as a result. If you desire to see wide-scale adoption of AI in your school, we see no better path than having a group of early adopters share their learnings with their colleagues.
Collaborating: The second deliverable was a series of webinars that we planned for Christian schools worldwide to provide both philosophical and practical guidance for implementation. (At the time of this publication, we have conducted two webinars which are available on YouTube). You do not have to publish to the world regarding AI, but we would recommend directing your learning towards a presentation of some kind at this initial stage. Accountability is crucial here because this technology is moving fast. Procrastination and inertia will only make it harder to catch up—our advice is just get started, and to learn from other schools in the process.
All of the above was in place before we began formally engaging students with AI. This leads us to the next part of our school’s story.
Engaging Students
We operated from the premise that our students were already practiced in the art of using ChatGPT (may I humbly suggest yours are too) and therefore banning student use of the technology was never on the table. Instead, we employed a comprehensive strategy that was equal parts defense and offense. Here are the steps and actions we took as a school in terms of engaging students with AI.
Setting Policy: We made an AI policy consistent with our existing plagiarism policy, which states that a student's work submitted for grading must be entirely his or her own unless the teacher has specified otherwise. Any use of ChatGPT must be acknowledged, or else the student faces an academic penalty for plagiarism. This ensured that we were addressing the issue of academic honesty, which is crucial not only for any academic program but also for the character development and integrity we desire as outcomes for our students.
Limiting Assessments: On the defensive side, as we navigate the new realities of ChatGPT, we are assigning more in-class assessments without technology than we have previously done. This is particularly critical in classrooms where teachers do not yet know the capabilities or writing styles of their students. If take-home assessments are the norm, it would be very easy for a student to take advantage of the newness of the relationship and use ChatGPT exclusively on all take-home assignments, thereby creating a false impression of his or her abilities.
Selecting an AI Platform: On the offensive side, we have made learning about the capabilities of ChatGPT a collaborative effort between teachers and students through a technology called Flint AI. Flint effectively sets guardrails around ChatGPT so that teachers can effortlessly program ChatGPT to provide students only the feedback that is helpful to the learning task, without doing the assignment for them. Perhaps an apt analogy for Flint is that it provides the sandbox for teachers and students alike to play by the rules together with ChatGPT. Here is an example of how Flint works, drawn from a recent class. A history teacher assigned a research paper that took weeks of in-class work and research. In addition to the standard writing conferences and graded feedback, this teacher uploaded to Flint a handful of files about the writing expectations for the course, including a grading rubric, citation information, primary sources, and a style guide. Students then had a customized chatbot relying directly and exclusively on the uploaded information. This chatbot served as a tutor for the writing process, and students iteratively uploaded their drafts and asked for feedback. When the teacher met with students in a writing conference the next week, they had already edited out many of the first-draft issues. This allowed the teacher to focus on the major ideas and themes of the writing.
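The grounding pattern the teacher used here, supplying course materials that constrain what the chatbot may say, can be sketched in broad strokes. This is a hypothetical illustration of the general technique, not Flint's actual implementation (which is not public); the function name and prompt wording are our own invention:

```python
# Hypothetical sketch: ground a tutoring chatbot in teacher-supplied
# course files by folding them into the model's standing instructions.

def build_tutor_prompt(materials: dict) -> str:
    """Assemble a system prompt that restricts feedback to uploaded files.

    `materials` maps a file's name (e.g. "Rubric") to its text content.
    """
    sections = "\n\n".join(
        f"## {name}\n{text}" for name, text in materials.items()
    )
    return (
        "You are a writing tutor. Give feedback on student drafts using "
        "ONLY the course materials below. Do not write or rewrite the "
        "essay for the student.\n\n" + sections
    )

# Example: the teacher's uploaded expectations become the guardrails.
prompt = build_tutor_prompt({
    "Rubric": "Thesis clarity: 40%. Primary sources: 30%. Style: 30%.",
    "Style guide": "Chicago citations; formal register; no first person.",
})
```

The assembled prompt would then be sent, along with each student draft, to the underlying language model, so every piece of feedback is anchored to the teacher's own rubric and sources rather than to the model's general habits.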
While anecdotal feedback from our students has been predominantly positive, Jeffrey Smith, our Academic Dean, has conducted a focus group on AI with our upper-class students. Their comments not only addressed their overall impression of AI, but also showed specific ways in which they used the technology in class:
“AI is already an engaging way to learn about things. And Flint AI kind of combines your teachers with the AI. It's definitely a really helpful way to maximize and become more efficient.”
“We use Flint AI to work on our essays and our writing assignments. And I think it's really useful as opposed to using ChatGPT, because Flint has your teachers’ materials already loaded in it.”
“I’m using it to make flashcards for Latin and other subjects to study. It can produce a list of topics or a list of words, and then students can find the definitions and generate a flashcard.”
“When writing a paper on Aristotle, I used it to dissect some passages because I couldn’t fully understand them—for example, the idea of goodness is really abstract. So I used it to help me fully understand it.”
Despite these benefits, students were acutely aware of the potential dangers of over-relying on AI:
“I think that AI is helpful for some things. It’s definitely helpful for diminishing time on certain things. But I think in the future it might detract from our own ability to do those things if we rely on it too much.”
“[With ChatGPT] it’s very easy to just type in an exact question that you're struggling with, and get the exact answer for that, and not have to learn about the rest of the topic.”
“If we become too reliant on it, then we're going to stop doing the work that's maybe necessary to actually become educated people. Sure, we'll pick up the information. But we're going to skip a lot of the details along the way, I think. We’re going to also lose our stamina. When it comes to doing any sort of thing that requires work, it will deteriorate and diminish I think, as time goes on.”
As educators, we are encouraged that our students are concerned about the quality of their own learning. As always, we have a lot to learn not only alongside, but also from, our students.
Engaging Teachers
As we see it, ChatGPT’s highest near-term value to The Stony Brook School is time compression. Time is our most precious commodity as educators and there is never enough of it. As is the case in the above example, generative AI technology was able to extend the reach of teachers. The technology compressed otherwise laborious, time-intensive grading into valuable feedback delivered virtually in seconds, thereby granting teachers a better starting point in their face-to-face conferences with students. We all want to provide more individualized attention to our students, but inevitably our limitation is time. Generative AI technology helps us get closer to this ideal and we celebrate this development.
In addition to a virtual tutor, AI is also being used by our teachers to:
Create sentences and examples for English teachers’ grammar worksheets
Create review sheets quickly for teachers when they upload their own materials or notes
Identify trends in writing or data across multiple student assignments
Synthesize announcements or content from a faculty meeting to a follow-up email
Assist with parent communication and emails
All of these ChatGPT-assisted tasks free up teacher time, letting us concentrate on the work that advances our ultimate mission. Our students sense this as well, as one focus group participant explained:
“Flint AI enables teachers to give feedback as soon as [we submit our work] without actually being there—the AI is basing its feedback on the same criteria. So AI can definitely help teachers with their workload, and they have more time for teaching us seven new things rather than giving the same feedback to twenty students.”
Concluding Thoughts
Christian schools have the opportunity to move toward generative AI and capture it for strategic purposes. It might seem a little scary at first, but up close, we see it is not one of the Four Horsemen of the Apocalypse. Rather, it has enormous benefits for us as a powerful productivity tool. This doesn’t mean we plow ahead without thoughtfulness and intentionality, however. Again, the words of one of our students best capture this:
“We are open minded about the use of AI. We don't say it's bad, and we don't say it's good. However, we see both sides of it. We use it in an ethical manner. We learn about it while it's still at the beginning. We see how we can start using it, how we can navigate it, and how we can set rules for it.”
Finally, we can use the question of AI as a relevant entrée into the kinds of conversations with students that inevitably lead to the things of God. Generative AI forces questions of ethics, of values, of creation, of life itself. Instead of being intimidated by all the smoke around the topic, let us see it for what it is, see ourselves for who we are, and trust that our Sovereign God has placed us in our schools for a time such as this.
About the Authors
Joshua Crane is in his eleventh year as head of school of The Stony Brook School (a 7th-12th grade boarding and day school on the north shore of Long Island) and in his 19th year of school leadership (previously head of school at Central Christian School, a 3K-6 school in St. Louis, Missouri). Prior to school leadership, he was Chief Operating Officer of a software company in Nashville, Tennessee, as well as a sixth grade English teacher. He holds a BA in English from Vanderbilt University, an MPhil in European Romanticism from the University of Glasgow, and an M.Ed. in Educational Leadership from Covenant College. He is passionate about seeing Christian schools be the exemplars of academic excellence in the cities and towns where they are placed.
Jeffrey A. Smith is a humanities teacher and Academic Dean at The Stony Brook School, a Christian boarding school on Long Island. He has an undergraduate degree in religion from Dartmouth College (USA) and a master’s degree in history from the University of Birmingham (UK). He is the author of Themistocles: The Powerbroker of Athens and The Corinthian War 395-387 BC: The Twilight of Sparta's Empire.
Bibliography
“What Is Generative AI?” McKinsey & Company, 19 January 2023, https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-generative-ai#/
Shrier, David L. Welcome to AI: A Human Guide to Artificial Intelligence. E-book, Harvard Business Review Press, 2024.
“Three Laws of Robotics.” Wikipedia, Wikimedia Foundation, 8 November 2008, https://en.wikipedia.org/wiki/Three_Laws_of_Robotics
Shelley, Mary. Frankenstein. 3rd ed., edited by D.L. MacDonald and Kathleen Scherf, Broadview Press, 2012, p. 9.
Peterson, Eugene. Leap Over a Wall: Earthly Spirituality for Everyday Christians. HarperCollins, 1997, p. 134.
Ash, Arvin. “So How Does ChatGPT Really Work? Behind the Screen.” YouTube, uploaded by Arvin Ash, 8 April 2023.
Graham, Donovan. Teaching Redemptively. Purposeful Design, 2003, pp. 96-97.
Peterson, Eugene. Leap Over a Wall: Earthly Spirituality for Everyday Christians. HarperCollins, 1997, p. 136.