AI Scales Learning. Instructional Design Scales Impact.

Written by
Amy Vidor
March 11, 2026


Think back to the first digital learning technology you used. Maybe it was an LMS course builder, a screen-recording tool, or an early authoring platform. How would you describe it? Clunky or labor-intensive? Did you spend more time navigating the tool than making design decisions?

For a long time, that tradeoff felt normal. Digital learning tools helped us distribute training, track completions, and standardize delivery. We learned to design within the constraints of the technology. Efficiency came from workarounds and personal expertise.

AI has changed that working dynamic. It lowers the effort required to get to a first draft. It also makes adaptation easier across roles and regions. As production gets faster, the focus shifts to what keeps learning credible: content quality and measurable impact.

Our research shows AI moving from experimentation into everyday instructional design work. Practitioners report the clearest value in time saved during content creation. Those savings create capacity for measurement and iteration. Instructional designers have always cared about outcomes, but delivery pressure can push follow-through aside.

AI makes it more realistic to bring evidence closer to the workflow and improve learning based on what people do on the job. So what does everyday use look like in practice? The next section shows where teams are getting value first, and how adoption is spreading across the workflow.

TL;DR 🚀

AI has made production faster and localization easier. As production speeds up, credibility depends on two things: content quality and measurable impact.

What to know: AI adoption often starts in production work and then moves downstream into implementation and evaluation, where decision-making gets more complex and governance matters more.

What to do: Use AI-created capacity to build measurement into the flow of work. Pair one workflow signal with one learning signal, set a decision rule before launch, and use AI to shorten the loop between insight and revision.

What stays human-led: Standards, sources of truth, review by risk, and measurement decisions.

How has AI changed the day-to-day work of instructional design?

Here's what adoption looks like in our research. In our 2026 AI in Learning & Development Report, 87% of respondents report using AI already. Adoption is strongest in production-heavy tasks where teams feel time pressure most acutely. Usage concentrates in voice generation (63%), quiz and content drafting (60%), video creation (52%), and translation (38%). These use cases pay off quickly because they speed up first drafts and make revisions easier, especially when SME time is limited.

Peer-reviewed research points in the same direction. A mixed-methods study found instructional designers using GenAI across responsibilities, with usefulness varying by task and context. Other research echoes that GenAI supports rapid drafting and course-structure generation, while design judgment remains essential for quality, context, and learner fit.

Where are teams starting, and where is adoption spreading next?

AI can support the full ADDIE lifecycle, and our data shows where teams start and where adoption is heading.

‍

Bar chart showing the stages of ADDIE where teams use AI tools. Current use is highest in Design (66%) and Develop (65%), followed by Analyze (44%). Implement (24%) and Evaluate (19%) have lower current use, with more teams experimenting or planning AI in those stages.
Synthesia's AI in Learning & Development Report 2026, Chart 11

What matters about this pattern is the direction. Design and Develop are a natural on-ramp because the work is visible and the cycle time is short. As teams extend AI into Implement and Evaluate, the emphasis shifts. More of the value comes from consistency at scale, tighter feedback loops, and evidence that supports decisions about what to reinforce, revise, or retire.

Workplace research on AI in people analytics points to the same pattern. As organizations scale AI into higher-stakes decisions, governance and review structures become enabling infrastructure. Accountability becomes explicit.

The same dynamic shows up in L&D as AI moves downstream into implementation and evaluation. Decisions sit closer to the workflow and carry more consequence. Governance keeps standards consistent and makes measurement sustainable at scale. Once those standards exist, teams can measure learning and iterate with more confidence.

How can AI support workflow measurement and iteration?

When AI supports implementation and evaluation, it can shorten the time between insight and revision. Transfer research has long shown that outcomes depend on what happens after training in the work environment. AI makes that evidence easier to collect and act on. Here's how:

  1. Write the observable behavior, then use AI to tighten the language
    Start with a behavior you can observe in a real situation. Use AI to generate three to five tighter versions and surface assumptions.
    Example: You draft: "Managers give good feedback." AI sharpens it into options like: "When a manager observes a missed standard, they give specific, actionable feedback within 24 hours so the employee can correct it on the next attempt."
  2. Select one workflow signal, then use AI to identify the system of record
    Choose a signal you can collect consistently from systems that already exist. Use AI to draft a measurement plan that names the data source, owner, and collection cadence (see the sketch after this list).
    Example: For "specific feedback within 24 hours," your workflow signal is "% of documented coaching notes created within 24 hours of a performance miss." AI maps it to where it lives (HRIS coaching notes field or manager check-in form), who owns it (HRBP or People Ops analyst), and cadence (weekly rollup by team).
  3. Design the learning signal, then use AI to find friction
    Decide what the learning experience should reveal beyond completion. Use AI to summarize themes from learner comments, common questions, and scenario misses. Use it to propose revisions tied to those patterns, then review and select changes with SMEs.
    Example: In a practice scenario, 42% of managers choose "Be more careful next time" instead of giving specific guidance. AI summarizes the pattern as "avoidance of specificity," pulls learner comments like "I don't want to sound harsh," and proposes a revision: add a 20-second model clip plus a rewrite exercise that forces specificity.
  4. Set the decision rule, then use AI to draft the if-then actions
    Decide what counts as movement and how long you will wait. Use AI to draft decision rules in plain language and suggest likely interventions.
    Example: "If the share of coaching notes logged within 24 hours stays below 60% after four weeks, add a manager reinforcement nudge and revise the scenario to include language prompts. If it rises above 75% for two consecutive review cycles, scale the module to the next manager population."
  5. Establish a review cadence, then use AI to produce the decision brief
    After launch, use AI to compile weekly or monthly summaries from your signals. Ask it to highlight trends, outliers, and the top questions from the field. Turn that into a short decision brief for the asset owner.
    Example: Each month, AI produces a one-page brief: "Timely coaching notes improved from 52% to 68% in Sales, flat in Support. Support teams cite 'no time' and 'unclear expectation.' Top replay segment is the 'specific vs vague' example. Recommendation: add a 2-minute reinforcement and align expectations in the manager toolkit."
  6. Publish a new version, then use AI to summarize what changed
    When you update the learning, label the version and capture the reason. Use AI to produce a one-line change note and a short "what changed and why" summary so measurement stays interpretable over time.
    Example: You publish "Manager Coaching v1.2." AI generates: "Added 20-second model language and specificity rewrite exercise to address recurring vague-feedback choices in Scenario 2." It also drafts a 3-5 sentence change log you can paste into your asset record.
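
To make steps 2 and 4 concrete, here is a minimal sketch, in Python with pandas, of what the weekly signal rollup and the decision rule could look like. The file name, the columns (miss_at, coaching_note_at, team), and the thresholds are illustrative assumptions, not a prescribed schema; your system of record determines the real fields.

    # Minimal sketch of steps 2 and 4: compute the workflow signal from a
    # hypothetical export of performance misses, then apply the decision rule
    # as plain if-then checks. Column names and thresholds are illustrative.
    import pandas as pd

    # Hypothetical export: one row per performance miss, with the time of the
    # miss and the time of the documented coaching note (blank if none exists).
    misses = pd.read_csv(
        "performance_misses.csv",
        parse_dates=["miss_at", "coaching_note_at"],
    )

    # Workflow signal: share of misses with a coaching note within 24 hours.
    # Misses with no note at all count as not-within-24-hours.
    misses["within_24h"] = (
        (misses["coaching_note_at"] - misses["miss_at"]) <= pd.Timedelta(hours=24)
    )

    # Weekly rollup by team, matching the cadence named in the measurement plan.
    weekly = (
        misses
        .groupby([pd.Grouper(key="miss_at", freq="W"), "team"])["within_24h"]
        .mean()
        .mul(100)
        .round(1)
    )
    print(weekly)

    # Decision rule from step 4, expressed over one team's recent weekly values
    # (oldest to newest). The 60% and 75% thresholds come from the example rule
    # above, not from any benchmark.
    def decide(recent_weekly_pcts: list[float]) -> str:
        if len(recent_weekly_pcts) >= 4 and all(p < 60 for p in recent_weekly_pcts[-4:]):
            return "Revise: add a manager reinforcement nudge and language prompts."
        if len(recent_weekly_pcts) >= 2 and all(p > 75 for p in recent_weekly_pcts[-2:]):
            return "Scale: roll the module out to the next manager population."
        return "Hold: keep collecting before changing the asset."

    # Example: decide([58.0, 55.0, 52.0, 57.0]) suggests a revision.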

What should stay human-led in an AI-enabled workflow?

AI accelerates production. Instructional designers keep ownership of accuracy, context, and the decisions that follow from evidence. Here's what stays human-led when designing with AI:

  • Standards. IDs define what "good" looks like for the asset and the audience. That includes accuracy requirements, tone, accessibility, and what "done" means.
  • Sources of truth. Humans decide what inputs AI is allowed to use, especially for policy, compliance, and process learning. When sources are unclear, AI output becomes inconsistent across versions.
  • Review and risk. Humans decide who reviews what, based on consequence. A manager coaching module, a safety procedure, and a product update do not carry the same risk. Review should reflect that reality.
  • Measurement decisions. Humans choose the workflow signal, the learning signal, and the decision rule. AI can help locate data, synthesize feedback, and draft a brief.

Used this way, AI shortens the distance between insight and revision. Instructional design turns that speed into learning the business can trust, because humans keep ownership of standards, risk, and measurement.

Practical example: Manager feedback training (human-in-the-loop, end to end)

Goal: Improve the consistency and quality of manager feedback after a performance miss.

1) Define the performance moment (then have AI sharpen it)

Initial behavior statement: "When a manager spots a miss, they give specific feedback within 24 hours."

How AI helps: Generate 3-5 tighter versions, flag vague terms ("specific," "miss"), and surface assumptions (what counts as a miss, what counts as feedback).

Human-led decision: The ID finalizes the definition and confirms it matches company expectations and manager norms.

Example output (final): "When a manager observes a missed standard, they document the issue and provide one actionable next step within 24 hours."

2) Pick one workflow signal (then use AI to map where the data lives)

Workflow signal: Percentage of performance misses that receive documented, actionable feedback within 24 hours.

Where the data lives: Manager check-in form field, HRIS coaching note, or performance log (depending on your org's system of record).

How AI helps: Draft a lightweight measurement plan naming the system, field, owner, and cadence (e.g., weekly rollup by team).

Human-led decision: Confirm the signal is feasible to collect consistently and interpret responsibly.
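
One way to keep this step honest is to capture the measurement plan as a small, reviewable record rather than a sentence on a slide. A minimal sketch, assuming hypothetical system, field, and owner names; the structure, not the specific values, is the point.

    # Minimal sketch of a measurement plan captured as data, so the signal, the
    # system of record, the owner, and the cadence are explicit and reviewable.
    # Every name and value below is an illustrative placeholder.
    measurement_plan = {
        "asset": "Manager Feedback v1.0",
        "workflow_signal": {
            "definition": "% of performance misses with documented, actionable "
                          "feedback within 24 hours",
            "system_of_record": "HRIS coaching notes",  # or manager check-in form
            "field": "coaching_note_at",
            "owner": "People Ops analyst",
            "cadence": "weekly rollup by team",
        },
        "learning_signal": {
            "definition": "share of scenario decisions coded as vague feedback",
            "source": "course platform scenario analytics",
            "owner": "instructional designer",
            "cadence": "monthly",
        },
        "decision_rule": {
            "revise_if": "workflow signal below 60% after four weeks",
            "scale_if": "workflow signal above 75% for two consecutive reviews",
        },
    }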

3) Design the learning signal (then use AI to find friction)

Learning signal: Scenario decisions that indicate "vague feedback" vs "actionable feedback," plus replay and drop-off patterns.

How AI helps: Summarize learner comments and questions, cluster common misconceptions, and highlight scenario steps where learners hesitate or choose vague language.

Example insight: "Learners avoid specificity when they worry about tone. They default to 'Be more careful next time' instead of naming the missed standard and next step."

Human-led decision: The ID and SME select which friction points are true skill gaps versus policy or culture constraints.

4) Write the decision rule (then have AI generate if-then options)

Decision rule: Define what "movement" looks like, how long you'll wait, and what you'll change.

How AI helps: Draft clear if-then rules and propose likely interventions tied to the patterns you're seeing.

Example rule: "If documented feedback within 24 hours stays below 60% after four weeks, revise the practice activity and add a manager reinforcement touchpoint. If it rises above 75% for two consecutive reviews, scale the module to the next manager cohort."

Human-led decision: The ID confirms the threshold is realistic and aligned to business expectations.

5) Shorten the feedback loop (use AI synthesis after launch)

Cadence: Monthly review during the first quarter after launch.

How AI helps: Compile a short brief from workflow and learning signals, highlight trends and outliers, and draft recommended revisions.

Example brief summary: "Sales improved from 52% to 68%. Support is flat at 49%. Top replay segment is 'specific vs vague' language. Recommendation: add a 2-minute reinforcement and a 'what to say' model for common Support scenarios."

Human-led decision: The ID and stakeholders choose what to change and what to test next.
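
As a rough illustration of what that brief pulls together, here is a minimal sketch that formats one from the two signals. In practice an AI assistant drafts the narrative; this only shows the inputs it should draw on, using the example figures above, and the function and field names are hypothetical.

    # Minimal sketch of the monthly decision brief as a template over the two
    # signals. The figures mirror the example brief above; names are hypothetical.
    def decision_brief(team_trends: dict[str, tuple[float, float]],
                       top_themes: list[str],
                       recommendation: str) -> str:
        lines = ["Decision brief: Manager Feedback"]
        for team, (start, latest) in team_trends.items():
            lines.append(f"- {team}: timely coaching notes {start:.0f}% -> {latest:.0f}%")
        lines.append("Top themes from learners: " + "; ".join(top_themes))
        lines.append("Recommendation: " + recommendation)
        return "\n".join(lines)

    print(decision_brief(
        {"Sales": (52, 68), "Support": (49, 49)},
        ["no time", "unclear expectation"],
        "Add a 2-minute reinforcement and align expectations in the manager toolkit.",
    ))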

6) Version the change (then let AI document what changed)

Version update: "Manager Feedback v1.2"

How AI helps: Generate a concise change note and a short "what changed and why" summary for traceability.

Example change note: "Added model language and a rewrite exercise to reduce vague feedback choices in Scenario 2."

Human-led decision: The ID confirms the change note matches what actually changed and is safe to share.

Result: The workflow can scale, and accountability stays explicit. AI accelerates drafting, synthesis, and documentation. Humans own standards, review, and decisions.

👋 Reach out if you have questions about how Synthesia can modernize your workflow, or try it out for yourself.

How to get started

See how you can make a video in 3 minutes, then try creating your own learning video with Synthesia text-to-video.

Next steps:

  1. Pick one learning asset.
  2. Define one performance moment you can observe on the job.
  3. Choose one workflow signal and set a decision rule before you launch.
  4. Use AI to shorten the loop between insight and revision.

About the author

Amy Vidor
Learning & Development Evangelist

Amy Vidor, PhD, is a Learning & Development Evangelist at Synthesia, where she researches emerging learning trends and helps organizations apply AI to learning at scale. With 15 years of experience across the public and private sectors, she has advised high-growth technology companies, government agencies, and higher education institutions on modernizing how people build skills and capability. Her work focuses on translating complex expertise into practical, scalable learning and examining how AI is reshaping development, performance, and the future of work.


Frequently asked questions

How can instructional designers use AI across the full ADDIE lifecycle?

Instructional designers can use AI in every phase of ADDIE. It can help with analysis by synthesizing needs and performance gaps. It can support design by drafting objectives, outlines, and assessments. It can speed development through scripts, scenarios, and localized variants. It can also help with implementation and evaluation by reducing admin work and making it easier to learn from feedback and workflow signals.

‍

Why do many instructional designers start by using AI for content creation?

Because it delivers immediate, visible time savings in the parts of the work that consume the most hours. In our AI in L&D research, practitioners report the clearest value from AI in time saved during content creation, so early use often centers on drafting, adapting, and localization.

What is the biggest opportunity for instructional design right now?

AI creates capacity to reset priorities. Measurement and iteration are often squeezed by delivery timelines. With less time spent on first drafts, IDs can design for evidence earlier and improve learning based on what happens in the flow of work.

What does human-in-the-loop mean in AI-enabled instructional design?

Human-in-the-loop means people stay accountable for the decisions that carry risk. That includes accuracy, context, ethical standards, and quality. AI can accelerate production and variation, but instructional designers define what "good" looks like and how outputs get reviewed before learners see them.

What governance is required as AI use expands beyond content creation?

As AI moves into implementation and evaluation, teams need clearer guardrails. That means agreed tools, approved sources, defined reviewers, and rules on what data can be used. This matters because practitioners cite security and accuracy as major blockers, and many are asking for support with governance and measuring impact in the workflow.
