How we think about content moderation

Victor Riparbelli
Updated:
August 24, 2023

When we founded Synthesia in 2017 we set out to make it easier for everyone to make video content without the need for cameras, studios and microphones. 

Our mission was clear: we wanted to democratise content creation and at the same time pioneer ethical use of synthetic media. We envision a future where great video content can come from anywhere, not just from those with access to the right gatekeepers and budgets. 

Since then we’ve gone on to do both; we now serve thousands of customers and use our consent, control and collaboration framework to guide product and business decisions.

How do we ensure that our ethical principles align with our mission to democratise content creation? Opening up our platform to empower individual creators from all over the world also means that people with bad intentions can more easily slip through.

In this post I'll outline how we think about this balance, our content guidelines and how we’re implementing solutions.

Where are we now?

This year we have been growing exponentially and have onboarded thousands of new users. We’re seeing an entirely new form of media being shaped in real-time by our users who range from artists to small businesses to some of the world’s largest enterprises. 

Our customers create content for education, marketing and business – from teaching school subjects to spreading vaccine information to underrepresented communities in their native language.

But just as video can be used to inform, it can also be used by bad actors to spread misinformation or to bully individuals or groups.

Luckily this has not yet been a problem – in the span of a year we’ve only had to suspend 10 accounts for not adhering to our content guidelines. But we want to get ahead of this problem, and today we’re launching an update to our moderation flows.

For obvious reasons I will not outline the nitty-gritty details here, but this should give you a rough idea of how we’re tackling this problem.

Should we even moderate content?

Traditionally, content creation platforms do not moderate what you can and cannot create. Anyone can open up Photoshop, Word, PowerPoint or a game engine like Unreal or Unity and create whatever they would like, even the most hateful and vile content.

We take a different approach and have decided to police what content you can and cannot create on Synthesia. We offer an ‘on-rails’ experience that enables users to create content within the constraints we provide. Customers don’t get access to the underlying AI technology.

What content are users allowed to create?

For starters, here are our terms of service, which outline what content is allowed on our platform.

There’s content that we obviously don't want on our platform, for example profanity and hate speech. This type of content is filtered out automatically and alerts a member of our content moderation team, who will take the appropriate action.
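As a purely illustrative sketch (not our actual implementation), an automatic first pass along these lines could check a script against a blocklist and raise an alert for a human moderator. The terms, function names and the `ModerationAlert` structure below are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical blocklist; a real system relies on far richer signals than an exact-match term list.
BLOCKED_TERMS = {"example_slur", "example_profanity"}

@dataclass
class ModerationAlert:
    script_id: str
    matched_terms: list[str]

def first_pass_filter(script_id: str, script_text: str) -> ModerationAlert | None:
    """Return an alert for human follow-up if the script contains blocked terms."""
    words = set(script_text.lower().split())
    matched = sorted(words & BLOCKED_TERMS)
    if matched:
        # In a real pipeline this would notify the content moderation team.
        return ModerationAlert(script_id=script_id, matched_terms=matched)
    return None
```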

Where it gets more difficult is content that falls in the grey zone. For example, we want our customers to be able to make videos on sexual health or the history of the LGBTQIA+ movement. But these videos will often use terms that get flagged by content moderation systems. Other examples include training videos on unconscious bias or the history of the Second World War.

We want to help teams who create such content. On the other hand, bad actors might try to game the system by appearing to create training materials while covertly promoting radical or abusive views. This is the type of behaviour we fight against using content moderation. Given the scale and the grey zones, our approach involves both machine moderation and manual, human review.

Human & Machine Moderation

So how do we moderate content? At its core, our system aims to bring together the best of human-machine collaboration. Machines are great at pattern recognition at scale. Humans are great at context. We’re combining both to create a safe space where we can democratise video creation while defending against bad actors.

At a high level, the system works by monitoring the content that gets generated on our platform. Content that sits in the grey zone can easily be detected, but it is much harder for computers to understand the context around the message. So in cases like these we have a team that manually reviews content to ensure it adheres to our terms of service.
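To make the triage idea concrete, here is a hedged sketch that assumes a machine-assigned risk score between 0 and 1: clearly safe content passes, clearly violating content is blocked and escalated, and the grey zone goes to a human reviewer. The thresholds and names are illustrative assumptions, not a description of our production system.

```python
from enum import Enum

class Decision(Enum):
    APPROVE = "approve"
    BLOCK_AND_ESCALATE = "block_and_escalate"
    HUMAN_REVIEW = "human_review"

# Illustrative thresholds; a real system tunes these per policy and per signal.
SAFE_BELOW = 0.2
VIOLATION_ABOVE = 0.9

def triage(risk_score: float) -> Decision:
    """Route a piece of content based on a machine-assigned risk score in [0, 1]."""
    if risk_score < SAFE_BELOW:
        return Decision.APPROVE              # clear-cut: machines handle the bulk
    if risk_score > VIOLATION_ABOVE:
        return Decision.BLOCK_AND_ESCALATE   # clear violation: block and alert the team
    return Decision.HUMAN_REVIEW             # grey zone: context needs a human
```

In this sketch, something scoring 0.5 would be queued for human review, which is where the context-heavy judgement calls belong.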

Who can make AI Avatars?

Creating a custom avatar requires a recorded consent statement and goes through a manual approval process. As we scale this feature of the platform, we will make KYC checks an integrated part of the onboarding process.
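Purely as an illustration of the workflow described above, a custom avatar request could be tracked with a record like the one below. The field names and statuses are hypothetical, not our internal data model.

```python
from dataclasses import dataclass
from enum import Enum

class AvatarRequestStatus(Enum):
    AWAITING_CONSENT = "awaiting_consent"
    PENDING_MANUAL_REVIEW = "pending_manual_review"
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class AvatarRequest:
    requester_id: str
    consent_video_url: str | None = None   # recorded consent statement from the person being filmed
    kyc_verified: bool = False              # identity check as onboarding scales
    status: AvatarRequestStatus = AvatarRequestStatus.AWAITING_CONSENT
```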
