It’s time to elect a new President of the United States, which means video streams everywhere are crowded with more talking heads than an early 1980s pop music chart. But now, thanks to some innovative tech born of a collaboration between several international research institutions, while those heads talk, viewers can take full control of those heads’ mouths and facial features in real time.
In other words, welcome to a super advanced version of Conan O’Brien’s celebrity-via-satellite moving mouth gag.
The technology, dubbed “Face2Face,” is being developed by researchers at the University of Erlangen-Nuremberg, the Max Planck Institute for Informatics, and Stanford University; its stated goal is “to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion.” If you move your mouth and make expressions with your face, those movements and expressions are tracked and then translated onto a “target” person’s face, so it looks like he or she is making those exact movements.
The developers of Face2Face, who presented their work at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), say the source actor’s facial expressions are captured with facial tracking tech, and that tracking data drives the animation of the target’s face in the video.
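For the technically curious: at its core, this style of reenactment boils down to estimating expression parameters for a face model and swapping them between two people. Here’s a minimal Python/NumPy sketch of that idea using a simple linear blendshape model; the vertex counts, blendshape counts, and random placeholder geometry are all hypothetical illustrations, not the researchers’ actual model or code.

```python
import numpy as np

# Minimal sketch: expression transfer with a linear blendshape face model.
# All sizes and the random "geometry" below are hypothetical stand-ins.

N_VERTS = 5000  # vertices in the face mesh (hypothetical)
N_EXPR = 76     # number of expression blendshapes (hypothetical)

rng = np.random.default_rng(0)

# Each person has their own neutral (identity) geometry...
source_neutral = rng.normal(size=(N_VERTS, 3))
target_neutral = rng.normal(size=(N_VERTS, 3))

# ...but both share one set of expression offsets, so an expression is
# just a small coefficient vector, independent of whose face it's on.
expr_basis = rng.normal(size=(N_EXPR, N_VERTS, 3))

def reconstruct(neutral, expr_coeffs):
    """Blendshape model: neutral shape plus weighted expression offsets."""
    return neutral + np.tensordot(expr_coeffs, expr_basis, axes=1)

# In a real system these coefficients would be estimated by fitting the
# model to each webcam frame of the source actor; here we hand-pick one.
source_expr = np.zeros(N_EXPR)
source_expr[3] = 0.8  # pretend blendshape 3 is "smile" (hypothetical)

# Expression transfer: apply the source's coefficients to the target's
# identity, yielding the deformed target mesh to re-render.
target_mesh = reconstruct(target_neutral, source_expr)
print(target_mesh.shape)  # (5000, 3)
```

Because the expression lives in a small coefficient vector separate from identity, the same “smile” can be pasted onto anyone’s face; the hard parts Face2Face tackles are recovering those coefficients from plain video in real time and re-rendering the result photo-realistically.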
The video notes that Face2Face improves on previous expression-replacing technology because it works on ordinary RGB video (no depth sensor required), re-renders the mouth’s interior along with its exterior, and doesn’t rely on “a geometric teeth proxy.” All of this amounts to the power to make people look like they’re saying something they never actually said. Sounds like politics as usual! [Insert sad rimshot here.]
What do you think of this uncanny real-time expression-swapping tech? Can you already see a nefarious future for Face2Face, or is this the only way you’ll want to watch a political debate from now on? Let us know in the comments section below!
—
HT: Gizmodo
Images: Matthias Niessner