The AI Plateau in Course Design
Anthropic published a report looking at around 74,000 conversations educators had with Claude.

The top three ways educators used AI were curriculum development, research, and assessment evaluation. In each case, AI assists rather than takes over. So why not let AI handle everything? Educators tell us that course development feels personal, and they don’t want to give up control over it. Instead, they use AI mostly for organizing, handling administrative tasks, and polishing their courses. One professor in the report noted that custom simulations and interactive experiments, which used to take too much time to build, are now feasible, and that makes learning more engaging for students.
Most faculty use AI tools for fast turnaround
Faculty use AI for tasks where it gives quick, clear help. When a task is well-defined, they can check the results right away. For example:
- Drafting a discussion prompt for Week 5’s reading.
- Generating practice problems at varying difficulty levels.
- Reformatting lecture notes into a study guide.
- Brainstorming alternative explanations and examples.
- Building a rough outline for a new course, then adjusting it to match their own teaching style.
These workflows share a few traits:
- The task is self-contained.
- Dependencies are minimal or obvious.
- Faculty can validate quality quickly with domain expertise.
When these conditions are met, tools like Claude, ChatGPT, Mistral, and Gemini work well. They save time, help you get started, and surface new ideas. After that, the main job is making sure the content fits your course and lesson goals. What Anthropic’s report doesn’t show, however, is where faculty run into AI’s limits in course development and instructional design. Frankly, that’s not a fault of Anthropic or any other tool vendor; it’s a consequence of how courses are built.
The AI plateau
Course development usually runs into problems because the connections between course components are hard to see, track, or maintain. When we say “course,” we mean a system where the following hold (a short sketch after the list shows how these connections fit together):
- Outcomes are defined first, and the course is developed around them.
- Skills are built on prerequisites.
- Readings are assigned to serve different goals.
- Discussions reference prior readings.
- Assessments measure what was taught and practiced.
- Later weeks assume earlier foundations.
- Small edits propagate into other decisions.
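One way to picture these connections is as a dependency graph. Below is a minimal, hypothetical sketch of that idea; the `CourseItem` class, the item IDs, and the `affected_by` helper are our own illustration, not a Curriculus feature or anything described in the Anthropic report. Swap one reading, and the graph surfaces every downstream item that still references it.

```python
from dataclasses import dataclass, field

@dataclass
class CourseItem:
    """One component of a course: a reading, a lesson, a discussion prompt, an assessment item."""
    item_id: str
    kind: str                                     # e.g. "reading", "discussion", "assessment"
    depends_on: set = field(default_factory=set)  # IDs of the items this one references

def affected_by(changed_id: str, items: dict) -> set:
    """Return every item that directly or indirectly depends on `changed_id`."""
    affected, frontier = set(), {changed_id}
    while frontier:
        frontier = {i.item_id for i in items.values()
                    if i.depends_on & frontier and i.item_id not in affected}
        affected |= frontier
    return affected

# A tiny slice of the organizational behavior course from Scenario 1 below.
course = {i.item_id: i for i in [
    CourseItem("reading-2018", "reading"),
    CourseItem("week6-lesson", "lesson", {"reading-2018"}),
    CourseItem("week8-discussion", "discussion", {"reading-2018"}),
    CourseItem("midterm-q12", "assessment", {"reading-2018"}),
    CourseItem("week11-project", "project", {"week8-discussion"}),
]}

# Swapping the 2018 article surfaces every downstream item that still references it.
print(sorted(affected_by("reading-2018", course)))
# ['midterm-q12', 'week11-project', 'week6-lesson', 'week8-discussion']
```

In a real course, these dependencies live in syllabi, slide decks, and the instructor’s memory rather than in a data structure, which is exactly why they are so hard to see, track, and maintain.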
General AI tools can help you build strong parts of a course. They can even help you rebuild sections if you keep giving them enough context. But keeping everything coordinated is still a challenge for AI alone. Someone still has to track how all the pieces fit together, and that’s usually the instructor. Here are a few situations where faculty still have to put in extra effort, even with AI.
Scenario 1: The better reading
A professor teaching organizational behavior finds a 2024 article during summer prep that explains psychological safety better than the 2018 article she has used for years. She asks an AI tool to update Week 6 based on the new reading. The conversation goes well, and the lesson plan gets better.
While reviewing the rest of the course for cohesion, she notices that Week 8’s discussion prompt uses terminology introduced only in the old article. The midterm includes a question that asks students to compare the old article’s framework with a model from Week 4. Week 11’s group project scaffold assumes students encountered a specific concept from the original reading. The final exam includes an essay prompt tied to a case study that no longer appears in the course.
None of these connections came up in the Week 6 conversation. They are spread throughout the course, across different documents, drafts, and the instructor’s memory. Now, changing one thing means reviewing the whole course.
The professor has a few options:
- A) Search manually across everything and repair the chain.
- B) Make the swap and discover the break later, when students are confused and time is tight.
- C) Remove all dependencies so future professors never have to deal with this problem again.
- D) Use her ample free time to build a brand-new course done “the right way”.
Scenario 2: The prerequisite chain that was never explicit
A professor is building a new data analytics course. He uses AI to plan each week and has good discussions about each lesson. Week 4 covers data cleaning, Week 7 introduces visualization, and Week 10 covers predictive modeling. Each week looks good on its own.
When Week 7 begins, students struggle. The visualization lesson expects them to work with DataFrames, which Week 4 mentioned but never had them practice. Week 10 will expect skills from Week 7 that students might not have. The intended progression existed in the instructor’s head, but it was never made explicit. Each week was refined separately, so the course became a set of good lessons instead of a coherent learning path.
In this case, AI can help spot potential problems in the course and suggest fixes, but only if it knows the expected student outcomes and the students’ starting skills. And if AI is used to fix the problem, the instructor still has to make sure the course stays balanced. All of this can be done by hand, but it takes time. AI can save some of that time, yet on its own it can also introduce new gaps.
Scenario 3: Assessment alignment under real teaching constraints
A professor teaching research methods uses AI to create weekly quiz questions from the readings. The questions seem fine at first. But when the midterm comes, some questions cover topics that were skipped earlier because class discussions focused on a recent event. Other questions go deeper than what was taught in class.
The AI created questions based on the reading, but it didn’t know what was emphasized, skipped, or practiced in class. Now the professor has to review and change a bunch of questions. For a 50-question exam covering the whole semester, checking and fixing questions can take as much time as writing them from scratch.
And yet, these situations are common. Real-life events happen, courses change, and keeping everything aligned is still necessary. In most cases, an instructor will talk through the changes in class, and it works. But what would it look like if the instructor could rapidly update the course to accommodate the shift? Much like Scenario 2 above, AI can be used to adjust the course around recent events, but does it know what to change, where to change it, and how to keep the core goals consistent? For quick one-offs, yes. For overall course cohesion, probably not.
On its own, AI will run into these issues:
- When you move teaching goals between lessons, AI chat windows can’t easily keep the course aligned without a cascade of manual fixes.
- When you shift the focus of a lesson, AI chat windows can’t maintain balance across your overall goals.
- When you change one reading or one assignment, AI chat windows can’t cascade the changes throughout the entire course.
What the Anthropic “augmentation” finding implies for AI chat window workflows
On the topic of chat windows, Anthropic’s report includes a line that captures the position faculty are taking with AI: “It’s the conversation with the LLM that’s valuable, not the first response.” That mindset is healthy, and it also explains the plateau.
Conversation works well for simple, self-contained tasks, but it gets harder when the work involves many connected parts. At that point, the instructor is the one who keeps everything together. The platform and the AI tool each help, but only the faculty member truly understands how the course fits as a whole. This is the limit educators are reaching, even when they use AI carefully.
What purpose-built course architecture (like Curriculus!) changes
All of these scenarios share one root cause: the relationships between course components are managed by hand. That work is essential to course development, even though it isn’t teaching itself.
This is the “course development overhead” problem that Curriculus is designed to solve.
- Curriculus maintains a reference-based architecture in which each reading has a unique identifier that is tracked throughout the course. When you update a reading, the system can identify every discussion prompt, assessment item, and progression element that references the old material, so you can make targeted decisions and cascade their impacts throughout the course where required.
- Curriculus also makes prerequisite relationships explicit through structure. Using our Outside-In methodology, courses are designed backward from proficiency goals, through breakthrough moments, to foundational skills. Prerequisites are structural relationships that the system enforces and validates. Read: https://www.curriculus.com/blog/beyond-backward-design-in-instructional-design
- And finally, we’ve implemented a learning domain layer. Learning domains provide a second level of structure: they turn implicit skill tracks into explicit, designable ones. That matters because most courses build multiple capabilities in parallel, and faculty need a way to balance and scaffold those capabilities intentionally across a term (a small sketch of how prerequisites and domains might be represented follows this list). Read: https://www.curriculus.com/blog/learning-domains-in-course-design and https://www.curriculus.com/blog/choosing-the-best-domain-structure
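For readers who like to see the idea in code, here is a small, hypothetical sketch of how explicit prerequisites and learning domains might be represented and validated across a term. The `Skill` and `Week` classes and the `validate_progression` check are our own simplification for illustration, not Curriculus’s actual data model or API.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Skill:
    skill_id: str
    domain: str                                    # learning domain, e.g. "data-wrangling"
    prerequisites: list = field(default_factory=list)

@dataclass
class Week:
    number: int
    teaches: list = field(default_factory=list)    # skill IDs introduced and practiced

def validate_progression(weeks, skills):
    """Flag any skill taught before all of its prerequisites have been taught."""
    taught, issues = set(), []
    for week in sorted(weeks, key=lambda w: w.number):
        for sid in week.teaches:
            missing = [p for p in skills[sid].prerequisites if p not in taught]
            if missing:
                issues.append(f"Week {week.number}: '{sid}' assumes {missing}, not yet taught")
        taught.update(week.teaches)
    return issues

# The data analytics course from Scenario 2, with the gap made explicit.
skills = {s.skill_id: s for s in [
    Skill("dataframes", "data-wrangling"),
    Skill("data-cleaning", "data-wrangling", ["dataframes"]),
    Skill("visualization", "communication", ["dataframes"]),
    Skill("predictive-modeling", "analysis", ["visualization", "data-cleaning"]),
]}
weeks = [
    Week(4, ["data-cleaning"]),        # mentions DataFrames but never teaches them
    Week(7, ["visualization"]),
    Week(10, ["predictive-modeling"]),
]

for issue in validate_progression(weeks, skills):
    print(issue)
# Week 4: 'data-cleaning' assumes ['dataframes'], not yet taught
# Week 7: 'visualization' assumes ['dataframes'], not yet taught

# Domains make the parallel skill tracks visible, e.g. for balance checks across the term.
print(Counter(skills[sid].domain for w in weeks for sid in w.teaches))
# Counter({'data-wrangling': 1, 'communication': 1, 'analysis': 1})
```

Applied to the data analytics course from Scenario 2, a check like this would surface the DataFrames gap at design time rather than in Week 7.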
About Curriculus
Curriculus is designed to help with the complex parts of course development, even if you already use AI. Our goal is to handle the structural details, so faculty and course developers like you can focus on what matters most for learning: pacing, emphasis, feedback, student support, and the art of teaching. Contact us to learn more: https://www.curriculus.com/contact