Welcome to Synthesia AI Research
About our AI research
At Synthesia, our world-class team of researchers and engineers is working to make easy video creation possible. Today we are pioneering new techniques to create photorealistic synthetic actors. Tomorrow, we will create synthetic scenes and place those actors in context. In the future, you will be able to create everything you see on the big screen with just a laptop and a bit of imagination.
“In research at Synthesia, we develop photorealistic synthetic actors that look and sound exactly like a real person in videos, our ‘AI Avatars’. Discover more about our research below.”

About our AI avatars
Our mission is to make video creation easy. We generate lifelike synthetic humans and create a performance directly from a script, a process we call text-to-video (TTV).
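To make the idea concrete, here is a minimal sketch of what a TTV pipeline could look like: a script is turned into speech, the speech drives facial motion, and a renderer turns that motion into frames. The three stage functions are placeholder assumptions for illustration, not Synthesia's actual system.

```python
# A minimal text-to-video (TTV) pipeline sketch. The stages below are
# hypothetical placeholders standing in for real speech-synthesis,
# audio-driven animation, and neural-rendering models.
from dataclasses import dataclass
from typing import List

@dataclass
class Performance:
    audio: List[float]         # synthesized speech samples
    frames: List[List[float]]  # rendered avatar frames (placeholder data)

def synthesize_speech(script: str) -> List[float]:
    # Placeholder: a real system would run a text-to-speech model here.
    return [0.0] * len(script)

def predict_motion(audio: List[float]) -> List[List[float]]:
    # Placeholder: map audio to per-frame facial-motion parameters.
    return [[0.0] for _ in audio]

def render_avatar(motion: List[List[float]]) -> List[List[float]]:
    # Placeholder: a neural renderer would turn motion into photoreal frames.
    return motion

def text_to_video(script: str) -> Performance:
    audio = synthesize_speech(script)
    frames = render_avatar(predict_motion(audio))
    return Performance(audio=audio, frames=frames)

if __name__ == "__main__":
    perf = text_to_video("Welcome to Synthesia.")
    print(f"{len(perf.frames)} frames generated")
```

The point of the decomposition is that each stage can be trained and improved independently, while the script remains the single input a user has to provide.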
What makes this hard? We need to match the quality you see in conventional video. The challenge is that we are all attuned to the subtle nuances of how a person should look, act and sound on camera: everything that makes a person seem real. Any approximation ruins the illusion, and you hit the "uncanny valley."
At Synthesia, we solve this through neural video synthesis. We train neural networks to reproduce the photorealistic look and movements you see in existing videos. We have traversed the uncanny valley and now create synthetic humans that look real.
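As a rough illustration of what "training neural networks to reproduce existing videos" can mean, the sketch below supervises a toy generator with a photometric loss against real frames. The `Generator` architecture and the driving-signal inputs are assumptions made for this example; production systems use far larger models and add perceptual and adversarial losses.

```python
# Toy neural video synthesis training loop: a generator maps per-frame
# driving signals to images and is supervised with an L1 photometric loss.
import torch
import torch.nn as nn

class Generator(nn.Module):
    # Stand-in model: a real renderer would be a deep convolutional network.
    def __init__(self, signal_dim: int = 16, image_pixels: int = 64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(signal_dim, 256), nn.ReLU(),
            nn.Linear(256, image_pixels), nn.Sigmoid(),
        )

    def forward(self, signal: torch.Tensor) -> torch.Tensor:
        return self.net(signal)

model = Generator()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()  # photometric loss; real systems add perceptual/GAN terms

# Dummy batch standing in for (driving signal, ground-truth frame) pairs.
signals = torch.randn(8, 16)
real_frames = torch.rand(8, 64 * 64 * 3)

for step in range(200):
    fake_frames = model(signals)
    loss = loss_fn(fake_frames, real_frames)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(f"final L1 loss: {loss.item():.4f}")
```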
Our open source research
HumanRF research paper
HumanRF enables temporally stable novel-view synthesis of humans in motion. It reconstructs long sequences at state-of-the-art quality and high compression rates by adaptively partitioning the temporal domain into 4D-decomposed feature grids.
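To give a flavour of the temporal partitioning idea, here is a hedged sketch that greedily splits a long capture into segments whenever accumulated motion exceeds a budget, so fast motion gets shorter segments (each of which, in HumanRF, would get its own feature grid). The splitting heuristic below is illustrative only and is not the paper's exact algorithm.

```python
# Illustrative adaptive temporal partitioning: cut the timeline whenever
# accumulated per-frame motion exceeds a budget.
import numpy as np

def partition_sequence(frame_motion: np.ndarray, budget: float) -> list:
    """Return (start, end) frame-index segments.

    frame_motion: per-frame motion magnitude, e.g. mean vertex displacement.
    """
    segments, start, accumulated = [], 0, 0.0
    for t, m in enumerate(frame_motion):
        accumulated += float(m)
        if accumulated > budget:
            segments.append((start, t + 1))
            start, accumulated = t + 1, 0.0
    if start < len(frame_motion):
        segments.append((start, len(frame_motion)))
    return segments

# Example: fast motion in the middle of a clip yields shorter segments there.
motion = np.concatenate([np.full(50, 0.1), np.full(20, 1.0), np.full(50, 0.1)])
print(partition_sequence(motion, budget=5.0))
```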
ActorsHQ dataset
ActorsHQ is our high-fidelity, publicly available dataset of clothed humans in motion. The dataset features multi-view recordings from 160 synchronized cameras, each simultaneously capturing its own 12 MP video stream. As such, it is tailored to the tasks of photorealistic novel-view and novel-pose synthesis of humans.
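For illustration, the sketch below iterates over the synchronized views of one time step in a multi-view capture like ActorsHQ. The directory layout and file naming are assumptions made up for this example; consult the dataset's documentation for the real structure.

```python
# Hypothetical multi-view iteration: one time step, all 160 cameras.
from pathlib import Path

def iter_views(root: Path, frame_idx: int, num_cameras: int = 160):
    """Yield (camera_id, image_path) for one time step across all cameras."""
    for cam in range(num_cameras):
        # Assumed naming scheme, not the dataset's documented layout.
        path = root / f"Cam{cam:03d}" / f"frame_{frame_idx:06d}.jpg"
        if path.exists():
            yield cam, path

# Usage: collect all synchronized views of frame 0 for novel-view training.
views = list(iter_views(Path("ActorsHQ/Actor01/Sequence1"), frame_idx=0))
print(f"{len(views)} views found for frame 0")
```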
Our research challenges
Our goal is high-fidelity, photorealistic and controllable neural video synthesis. We are replacing physical cameras with generative synthesis. We conduct foundational research with our co-founders Prof. Matthias Niessner (TUM) and Prof. Lourdes Agapito (UCL) to develop 3D neural rendering techniques that synthesize realistic video.
Here are some of the problems we tackle today.
Synthesia AI Research leadership
Jonathan Starck
I'm the CTO at Synthesia, where I lead our AI Research teams. We solve fundamental and exciting AI problems every day.
Matthias Niessner
I'm a co-founder of Synthesia and a Professor at the Technical University of Munich where I lead the Visual Computing Lab.
Lourdes Agapito
I'm a co-founder of Synthesia and a Professor of 3D Vision in the Department of Computer Science at University College London (UCL).
Our ethics standards
As a pioneer in synthetic media, we are aware of the responsibility we have. It is clear to us that artificial intelligence cannot be built with ethics as an afterthought. Ethics needs to be front and center, an integral part of the company, reflected in our company policy, in the technology we build, and in our commitment to security. This is also why we are an active member of the Content Authenticity Initiative, which Adobe started in 2019.
All AI video content goes through our content moderation process before being released to our clients. Read more about our content moderation framework and see our content moderation course.
We will never re-enact someone without their explicit consent, and that includes politicians and celebrities re-enacted for satirical purposes. As an example of consented work, our NGO campaign with Malaria Must Die and David Beckham won the CogX Outstanding Achievement in Social Good Use Of AI Award in 2019.
We actively work with media organisations, governments and research institutions to develop best practices and educate on video synthesis technologies.