
Scaling Avatar adoption responsibly with our new Avatar Governance Framework and Policy Template
Synthesia has spent the last decade working with our customers to move Avatars from a cool experiment inside an academic lab to a product used by 90% of Fortune 100 companies. We've seen teams using Avatars for employee onboarding, multilingual marketing, compliance training, and customer-facing communications. In many organizations, this happened gradually and then all at once: a single team ran a pilot, the results were good, and within months the technology was embedded in workflows across departments.
What hasn't kept pace is governance, and that gap creates a specific kind of organizational tension that we've come to know well.
The pattern we see
The conversation usually starts with a legal or compliance team asking a reasonable question: how should we be managing this new technology? They look for an existing policy that covers AI-generated content involving human likenesses and find that nothing quite fits. Data protection frameworks address some of the issues but miss others. Content policies are too broad. Acceptable use policies written for traditional software don't contemplate a tool that can generate a photorealistic person delivering a scripted message in 140+ languages.
Meanwhile, the business teams that adopted Avatars are producing real results. They've cut video production timelines from weeks to hours or localized hundreds of hours of training content that previously existed only in English. They don't want to stop, and they shouldn't have to. But they also can't keep operating in a policy vacuum, especially as usage scales and the content becomes more visible.
We've watched this dynamic play out at hundreds of organizations over the past several years, and we've learned something from it: the companies that navigate it well start by establishing a shared vocabulary for thinking about the risks, and then they layer on controls that are proportionate to what they're actually doing.
That observation is the basis for two resources we're publishing today: an Avatar Governance Framework [link] and a practical Avatar Use Policy template [link].
Why Avatars require their own governance model
It would be convenient if existing frameworks covered this adequately. They don't, and the reason is worth understanding.
An Avatar sits at the intersection of several distinct concerns that are usually governed separately. There are identity questions: whose likeness is being used, and under what terms? And content questions: what is the Avatar saying, and could it be mistaken for a real person expressing genuine views? And distribution questions: where will this content appear, and will audiences understand that it was generated by AI? And increasingly, there's the regulatory question: as jurisdictions implement AI-specific legislation such as the EU AI Act, how does Avatar usage map to emerging compliance obligations?
Traditional content governance assumes a human author and performer. Traditional data protection assumes you're processing information about a person, not generating a synthetic representation of one. Neither framework was designed for a tool that can produce a convincing human presenter from a text prompt, and asking either to stretch that far produces awkward results. Organizations end up with controls that are either too restrictive to allow productive use or too loose to address the actual risks.
We've spent years thinking about this problem because it's central to our business. Synthesia's platform generates more enterprise-focused AI videos with Avatars than any other, and that means we encounter the edge cases and failure modes before most of our customers do. Our responsible AI framework has guided our own product decisions since 2017. The governance resources we're publishing today extend that thinking outward, giving organizations a way to apply similar principles to their own Avatar programs regardless of which tools they use.
What the new framework covers
The Avatar Governance Framework is organized around three core ideas that we've found hold up well in practice.
The first is that different types of Avatars carry different levels of governance complexity. An Avatar based on a specific employee's likeness, created with their documented consent and used for internal training, presents a different risk profile than a stock Avatar used in external marketing, which in turn is different from a fully synthetic Avatar that doesn't correspond to any real person. Treating all of these the same, as many organizations do by default, leads to either unnecessary friction for low-risk uses or insufficient oversight for higher-risk ones. The framework provides a classification model that helps teams make that distinction quickly.
The second is that governance should follow the Avatar lifecycle rather than exist as a single checkpoint. The relevant decisions don't all happen at the moment of creation. They extend through how the content is approved, where it's distributed, how long it remains in circulation, and what happens when an employee whose likeness was used leaves the organization. A lifecycle approach means controls are applied at the points where they actually matter, rather than being front-loaded in a way that creates bottlenecks.
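To make the lifecycle idea concrete, the stages and the controls attached to each can be modeled as a simple lookup. This is a minimal, hypothetical sketch: the stage names and the controls listed are illustrative assumptions for this post, not the actual contents of the framework.

```python
from enum import Enum, auto

class LifecycleStage(Enum):
    """Stages at which governance controls can attach to an Avatar."""
    CREATION = auto()       # likeness sourcing and consent capture
    APPROVAL = auto()       # content review before publication
    DISTRIBUTION = auto()   # channel- and audience-specific checks
    CIRCULATION = auto()    # periodic re-review of content still live
    RETIREMENT = auto()     # takedown, e.g. when a depicted employee leaves

# Hypothetical control mapping: which checks run at each stage.
CONTROLS = {
    LifecycleStage.CREATION: ["documented consent", "likeness license on file"],
    LifecycleStage.APPROVAL: ["script review", "disclosure statement present"],
    LifecycleStage.DISTRIBUTION: ["channel allowlist", "AI-content labeling"],
    LifecycleStage.CIRCULATION: ["scheduled re-review", "consent still valid"],
    LifecycleStage.RETIREMENT: ["takedown executed", "retention policy met"],
}

def controls_for(stage: LifecycleStage) -> list[str]:
    """Return the checks that apply at a given lifecycle stage."""
    return CONTROLS[stage]
```

Spreading the checks across stages like this is what avoids the front-loaded bottleneck: consent is verified once at creation, while re-review and retirement checks fire only when the content is live or the depicted person leaves.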
The third is proportionality. A thirty-second internal training clip explaining a new expense policy does not require the same level of review as a customer-facing campaign featuring a named executive. The framework provides a way to calibrate oversight to actual risk, which is what makes it possible to scale Avatar usage without scaling the compliance burden in lockstep.
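Taken together, the classification and proportionality ideas amount to a small decision function: avatar type and distribution context in, review tier out. The sketch below is illustrative only; the type names, tiers, and thresholds are assumptions made for this example, and any real policy would define its own.

```python
from enum import Enum

class AvatarType(Enum):
    """Classification axis 1: whose likeness does the Avatar use?"""
    EMPLOYEE_LIKENESS = "employee_likeness"  # real person, documented consent
    STOCK = "stock"                          # vendor-provided, licensed likeness
    SYNTHETIC = "synthetic"                  # no corresponding real person

class Audience(Enum):
    """Classification axis 2: where will the content appear?"""
    INTERNAL = "internal"
    EXTERNAL = "external"

class ReviewTier(Enum):
    STANDARD = 1   # normal content workflow, no extra sign-off
    ELEVATED = 2   # legal/compliance reviewer approves before release
    EXECUTIVE = 3  # named individual, customer-facing: full review

def review_tier(avatar: AvatarType, audience: Audience,
                named_individual: bool = False) -> ReviewTier:
    """Map an Avatar use case to a proportionate review tier.

    Illustrative thresholds only; each organization sets its own.
    """
    if named_individual and audience is Audience.EXTERNAL:
        return ReviewTier.EXECUTIVE
    if audience is Audience.EXTERNAL or avatar is AvatarType.EMPLOYEE_LIKENESS:
        return ReviewTier.ELEVATED
    return ReviewTier.STANDARD
```

Under these illustrative rules, the thirty-second internal expense-policy clip built on a stock Avatar lands in the standard tier, while the external campaign featuring a named executive triggers the highest one.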
The policy template
The governance framework is useful for alignment conversations between legal, product, and business teams. But most employees need a policy they can read in five minutes that tells them what they can do, what requires approval, and what's off limits.
That's what the Avatar Use Policy template provides. It's written in plain language and structured around the decisions that come up most often in day-to-day Avatar use: when consent is required and what form it should take, what kinds of content need additional review, how to handle transparency and disclosure, and what to do when an Avatar is used in a way that wasn't originally anticipated.
We've designed it to be adoptable with minimal modification. Most organizations should be able to take it, adjust the specifics to reflect their internal processes, and put it into use within a week. For teams that are already producing Avatar content without any formal policy, this alone represents a meaningful improvement in governance posture.
The regulatory context
These resources don't exist in a vacuum. The regulatory landscape for AI-generated content is evolving rapidly, and Avatar-specific questions are increasingly on the agenda.
Article 50 of the EU AI Act introduces transparency obligations for AI systems that generate synthetic content, including requirements around disclosure and provenance. In the United States, jurisdictions at both the state and federal level are developing or have enacted rules around the use of digital likenesses, with particular attention to consent and the potential for misrepresentation. Other countries such as India are also introducing new requirements to govern synthetic media. Industry standards are beginning to coalesce as well: Synthesia became the first generative AI company to achieve ISO 42001 certification, and we've participated actively in shaping the codes of practice and technical standards that will define how this technology is governed at scale.
We built the framework and policy template with this landscape in mind. They're designed to be compatible with emerging regulatory requirements rather than requiring wholesale revision as new rules take effect. Our experience working with enterprise customers across jurisdictions, many of whom operate under multiple overlapping compliance regimes, informed the approach throughout.
How to use these resources
For organizations that already have some internal discussion underway about Avatar governance, the framework can serve as a reference point. It provides a shared structure for conversations that often stall because different teams are thinking about different aspects of the problem without a common vocabulary.
For organizations that need something operational right now, the policy template is the place to start. It provides enough specificity to guide day-to-day decisions without requiring a lengthy implementation process.
In either case, we'd encourage treating these as living documents. The technology is still evolving, and the regulatory environment is still taking shape. And most organizations are still in the relatively early stages of figuring out how Avatars fit into their broader content and communication strategies. A governance approach that's too rigid will become an obstacle; one that's too loose will eventually create problems. The goal is to find the middle ground that allows productive use of the technology while maintaining the kind of oversight that builds trust internally and externally.
Both resources are available now on our AI Governance Portal. We welcome feedback from customers, partners, and the broader community. Getting Avatar governance right is not something any single company can do alone, and we're committed to continuing this work in the open.