The State of Washington Embraces AI for Public Schools
Educational institutions may be warming up to generative AI
Washington state issued new guidelines for K-12 public schools last week based on the principle of “embracing a human-centered approach to AI,” which also endorses the use of AI in the education process. The state’s Superintendent of Public Instruction, Chris Reykdal, commented in a letter accompanying the new guidelines:
Washington state is remarkably positioned to integrate AI in our classrooms and campuses across our state.
It is with great excitement and appropriate caution that we distribute guidance to schools and districts now. Like many of the innovations in technology that came before it, the world of AI is evolving at lightning speed. Also like many of the technology innovations that came before it, young people are accessing these tools and wanting to use them in their daily lives. In other words, AI is here and slowing down isn’t an option. Students and educators are already engaging with AI, but the key question remains: How will we use it in a way that empowers critical thinking? As this technology revolutionizes industries, communities, sciences, and workplaces, our responsibility is to prepare students and educators to use these tools in ways that are responsible, ethical, and safe.
…
I encourage all stakeholders—caregivers, families, teachers, education partners, and community members—to join us in this groundbreaking journey. Your insights and participation are invaluable as we chart this path and learn together. Our state leads by example, setting a standard for how technology and human ingenuity can work hand in hand to prepare the next generation of leaders for success in careers, jobs, and communities that don’t yet exist.
Techno-Optimism
The Superintendent’s guidance will affect over one million public school students across the state. It carries a very different message than the one the New York City Department of Education, the Los Angeles Unified School District, and several others issued one year ago. Seattle Public Schools banned ChatGPT in early 2023, citing concerns about inaccurate information and the risk of plagiarism.
Similar to the recent announcement of Arizona State University’s (ASU) partnership with OpenAI, Washington State’s Superintendent sees AI as an opportunity and inevitability for education. In an interview with the local King 5 NBC news affiliate in Olympia, Washington, Reykdal commented:
"This is an embrace. This is us saying 'Let's run toward this,'" he said. "By raising the consciousness, getting students to really understand (AI), our kids will be the innovators of it in the future, they will create entire companies and entire ideas and concepts. If it's going to dominate global productivity, Washington kids should be the best in the world at it."
It is not entirely surprising that the state with the world headquarters for both Microsoft and Amazon is embracing AI. The more significant element is how other states might react to this move. As of August 2023, the Center for Reinventing Public Education (CRPE) identified no states with official AI guidelines. By September, California and Oregon had issued guidance to school districts, and another 11 were identified as actively developing standards. King 5 states that Washington is the fifth U.S. state to publish on the topic.
The Guidelines
Washington State’s guidelines center on the concept of human-centered AI, blending the use of the technology to help people with efforts to reduce the risk of harm. While the guidelines use the broader term AI, they specifically call out generative AI; in fact, the definitions are largely limited to the generative AI subdomains of AI technology. Key goals for adoption include:
A human-centered AI learning environment is one that prioritizes the needs, abilities, and experiences of students, educators, and administrators. An educational leader can support a human-centered learning environment by considering the following:
Developing students’ AI literacy by helping them understand the concepts, applications, and implications of AI in various domains, and empowering them to use AI as a tool for learning and problem-solving.
Ensuring ethical, equitable, and safe use of AI by protecting the privacy and security of data, addressing potential biases and harms, and promoting digital citizenship and responsibility.
Providing professional development and support for educators by helping them integrate AI into their pedagogy, curriculum, and assessment, and by facilitating their collaboration and innovation with AI.
Applying human-centered design principles to the development and implementation of AI solutions, such as involving stakeholders in the design process, testing and iterating the solutions, and evaluating the impact and outcomes.
Aligning AI solutions with the best practices and principles of learning, such as supporting learner agency, fostering collaboration, enhancing feedback, and promoting critical thinking.
While the guidelines are overtly positive in their outlook regarding AI, they do provide balance by stressing that large language models (LLMs) and assistants often produce factually inaccurate results, and that there are potential risks to individual privacy, security, and safety. From the document:
Potential Opportunities for Using AI in Education
Personalize learning and feedback in real time
Lesson plan and assessment design with customized planning for differentiation
Translation between languages
Develop critical thinking through human input, data output, and elevated human analysis
Aid in creativity, simulation, and skill development
Streamline operational and administrative functions
Potential Risks That Need to Be Mitigated When Using AI in Education
Increasing and/or creating inequitable learning environments
Unauthorized access to protected user information and unauthorized data collection
Perpetuating institutional and systemic biases
Plagiarism and academic dishonesty
Over-relying on technology and undermining the importance of human intelligence in education
What it Means
For most of 2023, the twin narratives around generative AI were amazing innovation driven by startups and large technology companies on the one hand and warnings of impending doom on the other. Doomerism was trending up in news coverage for most of the year, fueled by the shock of ChatGPT and a parade of other innovative foundation models. However, a subtle shift took place in the fourth quarter of 2023.
The Russell Group of Universities in the UK and ASU appear equally bullish on the potential for AI to positively impact education. They also recognize that knowledge and skills related to generative AI are likely to be an advantage in the workforce that may also extend to personal daily activities. Education is a competitive market. Rivals to the schools that embrace AI will have to decide whether it affects their competitiveness. Like Chris Reykdal, I suspect more primary and secondary educational institutions will conclude that rising generative AI adoption is inevitable and that embracing it will be more beneficial than opposing it.
If schools start to embrace AI as critical for students and teachers, it will be hard for doomers or other educators to restrain adoption. An AI pause was always a bad idea, pushed by doomers and by companies that wanted help catching up with OpenAI. It seems extremely unlikely at this point. Hopefully, some of the students trained on AI from an early age will also become skilled at ensuring its positive uses outweigh any risks that arise.