Dec 28, 2023

AI's changing role in education: Monthly highlights of 2023

2023’s evolution of AI attitudes in education and predictions for what’s to come in 2024.

Number 2023 surrounded by education- and AI-related icons

Illustrated by AI, written by humans.

January - Fear of cheating

Large language models were skyrocketing in both quality of output and virality of adoption. ChatGPT set a new record for the fastest-growing user base, hitting an estimated 100 million monthly active users by the end of January.

This incited a fear of cheating in education, prompting a string of ChatGPT bans in school districts in Seattle, LA, NYC, and more. Teachers also scrambled to try AI writing detectors like ZeroGPT, but had little success in reining in student use of AI.

ChatGPT banned in a school

February - AI is here to stay

A report by Global Market Insights projected the market for AI in education to reach $30 billion by 2032.

News media outlets also started covering not just the initial shock and concern over AI in education, but some of the positive possibilities as well. These included how AI could boost teacher productivity by automating repetitive tasks, and how AI could tutor students by giving them personalized and immediate feedback.

Pros and cons scale about AI

March - The Generative AI Race Continues; GPT-4 is released

Microsoft added DALL·E to Bing, which had just integrated ChatGPT in February. Google launched Bard, its own AI chatbot. Anthropic released Claude, a competitor LLM to ChatGPT. OpenAI unveiled GPT-4, a model with more advanced reasoning capabilities, a longer context window, and more varied input types than GPT-3.5, which had brought OpenAI to the world stage just a few months prior.

Logo compilation of Microsoft, Google, Claude, and ChatGPT

This burst of development in general-purpose AI models foreshadowed the growth of AI tools for education. As that landscape grew more and more crowded, educators needed to discern what exactly they wanted out of an AI learning experience and how to achieve it.

April - Fear turns into curiosity

Google searches for the phrase “personalized learning AI” peaked in April. Educators were starting to consider how AI could be the key to solving age-old education challenges like Bloom’s 2-sigma problem.

Google Trends screenshot showing spike in searches for "personalized learning AI" in April 2023.

Italy banned and then unbanned ChatGPT within the span of this single month. The original ban was instituted over concerns about users’ control of their data privacy and age verification for people attempting to access the platform. Student data privacy would prove to be a continuing concern for educators shaping their AI strategies.

Late April was also when the first version of Flint was released. Over 60% of startups in the Y Combinator Summer ‘23 cohort were working on some form of AI, but only two companies focused on using AI to improve education.

We at Flint had built an AI copilot for teachers: a tool that any K-12 teacher could use to create classroom activities, worksheets, or lesson plans.

Initial Flint landing page screenshot showing the software's focus on generating teaching materials.

May - Guidelines for AI in schools

The US Department of Education released its first-ever report on AI in education, in which officials shared insights and recommendations for integrating AI into education. These included:

  1. Emphasize Humans-in-the-Loop

  2. Align AI Models to a Shared Vision for Education

  3. Design AI Using Modern Learning Principles

  4. Prioritize Strengthening Trust

  5. Inform and Involve Educators

  6. Focus R&D on Addressing Context and Enhancing Trust and Safety

  7. Develop Education-specific Guidelines and Guardrails

June - Talks of regulating AI

President Biden met with leaders in the AI ethics and development space to discuss how best to move forward with developing safe, secure, and transparent AI technology. The final voluntary agreements are outlined in this fact sheet from the White House and were agreed upon by leading AI companies Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI.

Image of President Joe Biden at a podium to speak to a room of people

Commitments these companies made include:

  1. “Sharing information across the industry and with governments, civil society, and academia on managing AI risks”

  2. “Developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system”

  3. “Prioritizing research on the societal risks that AI systems can pose, including on avoiding harmful bias and discrimination, and protecting privacy”

Each commitment has caveats: the transparency in #1 may conflict with the interests of profit-driven companies, the watermarking in #2 remains an unsolved engineering problem, and the incentive to fully commit to #3 is unclear because such research could threaten a company’s reputation. Since all of these commitments are voluntary, we’ll have to keep an eye on how legislation progresses and how transparent companies are willing to be with their audiences.

July - AI exploration led by teachers, not students

Quizlet’s research on the state of AI in education found teachers outpacing students in both AI usage and optimism. Beyond asking how effective AI is for teaching and studying, the study also covered how AI might help students rebound post-pandemic and how AI could bring more equity to education quality for students regardless of background. Teachers’ views on all these topics were generally more hopeful than students’.

To explore how AI can foster deeper learning and push the current boundaries of education, we refocused Flint on tutoring. The new and improved Flint aimed to help schools embrace AI. Our platform was built to create new modes of teaching through flexible, interactive, and personalized AI learning experiences, not just to help with existing teacher tasks.

New Flint landing page with messaging about helping schools embrace AI

August - Enterprise and upskilling

OpenAI launched ChatGPT Enterprise, a version of ChatGPT catered to businesses that would address the privacy and security qualms keeping companies from allowing AI in the workplace. However, the waitlist is thousands of companies long, which has kept many businesses, educational ones included, from moving forward with exploring AI solutions.

Results from a survey by IBM gave insights into AI’s projected impact on the workforce. The data showed a large scope of disruption across industries and a growing need for upskilling to leverage AI.

September - Turns out, cheating concerns were overblown

The International Journal for Educational Integrity published research showing that rates of cheating did not increase because of AI. This was not only reassuring for educators, but also moved the conversation to the next step: how can AI be used to make assessment inherently harder to plagiarize? As we discussed in a previous blog post, AI detection is untrustworthy and ethically questionable, so the new data on overblown cheating concerns allowed for more focus on how AI can revolutionize learning.

Student high-fiving an AI robot in a classroom