Friends and colleagues,
I am convinced that the future of instruction is hybrid—with teachers still squarely in the driver's seat of teaching, receiving meaningful assistance from helpful AI solutions.
This year, I’m seeing 20 student-facing AI solutions in action, talking to teachers using the products frequently, and interviewing product developers about their design choices and business models. I'm doing this to get a sense of how these products may change instruction and the teacher role in the years to come. Here is the list of products I’m learning about this year.
Though all student-facing AI solutions are also teacher-facing, many teacher-facing solutions have no direct application for students. I'm focusing on student-facing solutions (i.e., solutions from which students get questions or feedback from AI) designed for use in school because I sense that these programs have the most potential to fundamentally change the instructional setup—and make the job of teaching more enjoyable, easier, and more effective. I also think that student-facing solutions have the most promise to improve instructional coherence. I am prioritizing seeing products in action because I find it’s easier to be wowed by or critical of the wrong things in a product demo.
So far, I have researched all 20 solutions, spoken to 14 product founders and eight teacher superusers, and observed four products in classrooms. Recognizing that I’m still in the early stages of learning, let me outline the range of instructional design choices I’m beginning to see:
- Solutions aim to do different jobs. Some solutions are designed around a complete vision of instruction for one content area (e.g., this product is designed to support 6th-grade reading instruction; students use the product in specific ways at specific times throughout lessons and units). Some solutions are built to do a single instructional task (e.g., this product gives students feedback on exit tickets for all subjects and grades). Some solutions offer a one-stop shop for a range of instructional jobs (e.g., this product gives students a range of different tools they can use to help them do their assignments). Some solutions are designed to be used in core classes; others are designed to be used outside of core time.
- Product developers think about instructional quality in a range of ways. Some developers have clear ideas about instructional quality (e.g., “we have four categories and 24 sub-requirements for what makes a good math question, and we test every question against a rubric of these requirements”). Some developers put instructional quality in the hands of teachers and leaders (e.g., “we think teachers and leaders will know best what approach serves students, so we are building responsive to their interests—whatever they want, we will try to do”). Some products use only high-quality instructional materials as base content; some develop original content with a range of parameters.
- Solutions play different roles vis-a-vis students. Some solutions ask students questions. Others help students answer questions that their teachers ask. Some are designed to be used by individual students, others by small groups or partners.
- Solution developers train AI in different ways. Solutions use different large language models and add different things to them. Some provide extensive pedagogical training to guide how AI interacts with students. Some are trained by processing a range of student work. Some train the AI to refer to specific sources of information.
I’m fired up about the value these solutions offer in four ways:
- AI can dramatically increase the number of at-bats with immediate feedback students get. Though the quality of the feedback varies, every solution I’ve seen in action offers students far more feedback than teachers can structurally deliver. Students generally like getting more feedback.
- AI can track and analyze student work and tee up potential next steps faster than a teacher can walk around the room. AI can do jobs we currently think about in separate siloes (e.g., “curriculum,” “assessment,” and “intervention”) in a single dynamic way and suggest teacher actions to support better student learning.
- When the solution asks students questions and evaluates their responses, I notice a higher degree of student effort and enthusiasm to get the right answer than I see in most class discussions. This could be a novelty effect, but I see more students giving high fives and celebrating strong answers than I usually see in whole-group discussions or paper-pencil independent practice.
- Solutions designed for social use are increasing student talk time. To be fair, it doesn’t take much to outperform the baseline here. According to TeachFX data, students in American classrooms average 27 seconds of talk time per hour of schooling in teacher-delivered lessons. The solutions I have seen in action get students to dramatically outperform that baseline.
I also have four major concerns:
- In certain setups, AI makes it easy for students to offload the hard thinking. When the AI is positioned to help students answer questions rather than to ask questions of the students, students take the shortcuts.
- The way teachers and schools use technology can be a lonely setup for students. As a parent, I have no interest in my kids spending the whole school day on the computer, even if it has the best AI. Implementation of all products is shaped by old patterns in how tech is typically deployed in classrooms—kids with headphones, working alone at computer stations. Even solutions designed for social use are often implemented this way.
- Extensive use of AI could compromise teachers’ sense of personal accountability for student learning. AI can add a lot of instructional value but—at the end of the day—it’s teachers’ responsibility to ensure that their students learn. Though we currently ask teachers to do an extraordinarily hard job, at least it’s clear: teachers are accountable for their students’ learning. We will all lose if that mission gets confused. I worry about AI that encourages teachers to lean back rather than lean in.
- Market demand might reward solutions designed to make jobs easier without attention to student impact. It is unclear to me at this moment whether the products designed to help students do harder work—with all that such products demand of teachers and students—will be the ones favored in purchasing decisions.
AI has a lot to offer instruction. The question isn’t whether it will matter but how we can shape it to be most helpful to learning for all students. Seeing products in action makes clear the degree to which design and implementation choices will shape how helpful AI actually is—which means we humans need to get smart about how to support and shape selection, design, and implementation.
I’m excited about this learning tour, and I look forward to sharing more. For now, I have three requests for you:
- Let me know what questions you have. I’d like this to be as useful as possible to those who lead and support schools. Understanding your questions will help me do that. Just reply to this email and I will address common questions in future emails.
- Let me know if there are interesting student-facing AI solutions that you do not see on the list that you think we should consider adding to the tour.
- If you know someone who wants to follow this learning tour, they can sign up for these emails here.
One step at a time, together,
Emily