
Think back to the first digital learning technology you used. Maybe it was an LMS course builder, a screen-recording tool, or an early authoring platform. How would you describe it? Clunky or labor-intensive? Did you spend more time navigating the tool than making design decisions?
For a long time, that tradeoff felt normal. Digital learning tools helped us distribute training, track completions, and standardize delivery. We learned to design within the constraints of the technology. Efficiency came from workarounds and personal expertise.
AI has changed that working dynamic. It lowers the effort required to get to a first draft. It also makes adaptation easier across roles and regions. As production gets faster, the focus shifts to what keeps learning credible: content quality and measurable impact.
Our research shows AI moving from experimentation into everyday instructional design work. Practitioners report the clearest value in time saved during content creation. Those savings create capacity for measurement and iteration. Instructional designers have always cared about outcomes, but delivery pressure can push follow-through aside.
AI makes it more realistic to bring evidence closer to the workflow and improve learning based on what people do on the job. So what does everyday use look like in practice? The next section shows where teams are getting value first, and how adoption is spreading across the workflow.
How has AI changed the day-to-day work of instructional design?
Here's what adoption looks like in our research. In our 2026 AI in Learning & Development Report, 87% of respondents report using AI already. Adoption is strongest in production-heavy tasks where teams feel time pressure most acutely. Usage concentrates in voice generation (63%), quiz and content drafting (60%), video creation (52%), and translation (38%). These use cases pay off quickly because they speed up first drafts and make revisions easier, especially when SME time is limited.
Peer-reviewed research points in the same direction. A mixed-methods study found instructional designers using GenAI across responsibilities, with usefulness varying by task and context. Other research echoes that GenAI supports rapid drafting and course-structure generation, while design judgment remains essential for quality, context, and learner fit.
Where are teams starting, and where is adoption spreading next?
AI can support the full ADDIE lifecycle, and our data shows where teams start and where adoption is heading.

What matters about this pattern is the direction. Design and Develop are a natural on-ramp because the work is visible and the cycle time is short. As teams extend AI into Implement and Evaluate, the emphasis shifts. More of the value comes from consistency at scale, tighter feedback loops, and evidence that supports decisions about what to reinforce, revise, or retire.
Workplace research on AI in people analytics points to the same pattern. As organizations scale AI into higher-stakes decisions, governance and review structures become enabling infrastructure. Accountability becomes explicit.
The same dynamic shows up in L&D as AI moves downstream into implementation and evaluation. Decisions sit closer to the workflow and carry more consequence. Governance keeps standards consistent and makes measurement sustainable at scale. Once those standards exist, teams can measure learning and iterate with more confidence.
How can AI support workflow measurement and iteration?
When AI supports implementation and evaluation, it can shorten the time between insight and revision. Transfer research has long shown that outcomes depend on what happens after training in the work environment. AI makes that evidence easier to collect and act on. Here's how:
- Write the observable behavior, then use AI to tighten the language
Start with a behavior you can observe in a real situation. Use AI to generate three to five tighter versions and surface assumptions.
Example: You draft: "Managers give good feedback." AI sharpens it into options like: "When a manager observes a missed standard, they give specific, actionable feedback within 24 hours so the employee can correct it on the next attempt."
- Select one workflow signal, then use AI to identify the system of record
Choose a signal you can collect consistently from systems that already exist. Use AI to draft a measurement plan that names the data source, owner, and collection cadence.
Example: For "specific feedback within 24 hours," your workflow signal is "% of documented coaching notes created within 24 hours of a performance miss." AI maps it to where it lives (HRIS coaching notes field or manager check-in form), who owns it (HRBP or People Ops analyst), and cadence (weekly rollup by team).
- Design the learning signal, then use AI to find friction
Decide what the learning experience should reveal beyond completion. Use AI to summarize themes from learner comments, common questions, and scenario misses. Use it to propose revisions tied to those patterns, then review and select changes with SMEs.
Example: In a practice scenario, 42% of managers choose "Be more careful next time" instead of giving specific guidance. AI summarizes the pattern as "avoidance of specificity," pulls learner comments like "I don't want to sound harsh," and proposes a revision: add a 20-second model clip plus a rewrite exercise that forces specificity.
- Set the decision rule, then use AI to draft the if-then actions
Decide what counts as movement and how long you will wait. Use AI to draft decision rules in plain language and suggest likely interventions.
Example: "If the share of coaching notes logged within 24 hours stays below 60% after four weeks, add a manager reinforcement nudge and revise the scenario to include language prompts. If it rises above 75% for two consecutive review cycles, scale the module to the next manager population."
- Establish a review cadence, then use AI to produce the decision brief
After launch, use AI to compile weekly or monthly summaries from your signals. Ask it to highlight trends, outliers, and the top questions from the field. Turn that into a short decision brief for the asset owner.
Example: Each month, AI produces a one-page brief: "Timely coaching notes improved from 52% to 68% in Sales, flat in Support. Support teams cite 'no time' and 'unclear expectation.' Top replay segment is the 'specific vs. vague' example. Recommendation: add a 2-minute reinforcement and align expectation in the manager toolkit."
- Publish a new version, then use AI to summarize what changed
When you update the learning, label the version and capture the reason. Use AI to produce a one-line change note and a short "what changed and why" summary so measurement stays interpretable over time.
Example: You publish "Manager Coaching v1.2." AI generates: "Added 20-second model language and specificity rewrite exercise to address recurring vague-feedback choices in Scenario 2." It also drafts a 3-5 sentence change log you can paste into your asset record.
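The decision rule in step 4 is concrete enough to sketch in code. Here is a minimal Python sketch of the "timely coaching notes" signal and its if-then rule; the sample records, field layout, and action strings are hypothetical, with thresholds borrowed from the example above:

```python
from datetime import datetime, timedelta

# Hypothetical records: (performance_miss_time, coaching_note_logged_time)
records = [
    (datetime(2026, 1, 5, 9, 0), datetime(2026, 1, 5, 15, 0)),   # within 24h
    (datetime(2026, 1, 6, 10, 0), datetime(2026, 1, 8, 10, 0)),  # late
    (datetime(2026, 1, 7, 11, 0), datetime(2026, 1, 7, 18, 0)),  # within 24h
    (datetime(2026, 1, 8, 14, 0), datetime(2026, 1, 10, 9, 0)),  # late
]

def timely_rate(records, window_hours=24):
    """Share of coaching notes logged within the window after a miss."""
    timely = sum(1 for miss, note in records
                 if note - miss <= timedelta(hours=window_hours))
    return timely / len(records)

def decision(rate_history, low=0.60, high=0.75):
    """Apply the if-then rule to weekly rates (oldest to newest)."""
    if len(rate_history) >= 4 and all(r < low for r in rate_history[-4:]):
        return "add reinforcement nudge and revise scenario"
    if len(rate_history) >= 2 and all(r > high for r in rate_history[-2:]):
        return "scale module to next manager population"
    return "hold and keep monitoring"

rate = timely_rate(records)  # 0.5 for the sample above
print(f"Timely rate: {rate:.0%}")
print(decision([0.52, 0.55, 0.58, 0.57]))  # four weeks below 60%
```

The point of the sketch is the structure, not the numbers: once the signal and thresholds are named explicitly, AI can draft the plain-language version for stakeholders while the rule itself stays auditable.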
What should stay human-led in an AI-enabled workflow?
AI accelerates production. Instructional designers keep ownership of accuracy, context, and the decisions that follow from evidence. Here's what stays human-led when designing with AI:
- Standards. IDs define what "good" looks like for the asset and the audience. That includes accuracy requirements, tone, accessibility, and what "done" means.
- Sources of truth. Humans decide what inputs AI is allowed to use, especially for policy, compliance, and process learning. When sources are unclear, AI output becomes inconsistent across versions.
- Review and risk. Humans decide who reviews what, based on consequence. A manager coaching module, a safety procedure, and a product update do not carry the same risk. Review should reflect that reality.
- Measurement decisions. Humans choose the workflow signal, the learning signal, and the decision rule. AI can help locate data, synthesize feedback, and draft a brief.
Used this way, AI shortens the distance between insight and revision. Instructional design turns that speed into learning the business can trust, because humans keep ownership of standards, risk, and measurement.
Reach out if you have questions about how Synthesia can modernize your workflow, or try it out for yourself.
About the author
Amy Vidor
Learning and Development Evangelist
Amy Vidor, PhD is a Learning & Development Evangelist at Synthesia, where she researches emerging learning trends and helps organizations apply AI to learning at scale. With 15 years of experience across the public and private sectors, she has advised high-growth technology companies, government agencies, and higher education institutions on modernizing how people build skills and capability. Her work focuses on translating complex expertise into practical, scalable learning and examining how AI is reshaping development, performance, and the future of work.

Frequently asked questions
How can instructional designers use AI across the full ADDIE lifecycle?
Instructional designers can use AI in every phase of ADDIE. It can help with analysis by synthesizing needs and performance gaps. It can support design by drafting objectives, outlines, and assessments. It can speed development through scripts, scenarios, and localized variants. It can also help with implementation and evaluation by reducing admin work and making it easier to learn from feedback and workflow signals.
Why do many instructional designers start by using AI for content creation?
Because it delivers immediate, visible time savings in the parts of the work that consume the most hours. In our AI in L&D research, practitioners report the clearest value from AI in time saved during content creation, so early use often centers on drafting, adapting, and localization.
What is the biggest opportunity for instructional design right now?
AI creates capacity to reset priorities. Measurement and iteration are often squeezed by delivery timelines. With less time spent on first drafts, IDs can design for evidence earlier and improve learning based on what happens in the flow of work.
What does human-in-the-loop mean in AI-enabled instructional design?
Human-in-the-loop means people stay accountable for the decisions that carry risk. That includes accuracy, context, ethical standards, and quality. AI can accelerate production and variation, but instructional designers define what "good" looks like and how outputs get reviewed before learners see them.
What governance is required as AI use expands beyond content creation?
As AI moves into implementation and evaluation, teams need clearer guardrails. That means agreed tools, approved sources, defined reviewers, and rules on what data can be used. This matters because practitioners cite security and accuracy as major blockers, and many are asking for support with governance and measuring impact in the workflow.













