Dynalips – a new start-up specialising in lip synchronisation based in Lorraine

13 May 2024

Dynalips, a pioneering new start-up specialising in speech modelling and audiovisual speech synthesis, has been officially launched. Built on research carried out at Loria (CNRS, Université de Lorraine, Inria, CentraleSupélec), matured through a programme run by the Société d’Accélération du Transfert de Technologie – SATT Sayens and supported by the Incubateur Lorrain, it provides animators and video game developers with an innovative lip-sync solution that synchronises a 3D or 2D character’s lip movements with speech precisely, automatically and rapidly.

Slim Ouni, head of the start-up; Théo Biasutto–Lervat, an Inria engineer who worked on this subject during his thesis; Louis Abel, a PhD student at the Université de Lorraine.

The animation sector is constantly evolving, with sets, special effects and characters becoming increasingly realistic. But what about characters’ speech? Modelling mouth movements and facial expressions is among the main challenges facing professional animators. Current lip animation techniques are far from perfect because they do not take coarticulation into account: the phenomenon by which the pronunciation of a sound is influenced by the sounds around it. When someone says “too”, for example, the lips begin to round for the /u/ even while the /t/ is still being articulated.

The new Dynalips start-up offers an automatic lip-sync solution based on the latest research results in this area. In particular, it takes coarticulation into account, which is essential for speech that is more fluid, natural and, above all, intelligible.
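To make the idea concrete, here is a minimal sketch in Python contrasting a naive phoneme-to-viseme mapping with a simple coarticulation-aware blend. It is purely illustrative: the phoneme set, the single “lip openness” parameter and the blending weights are all invented for the example, and Dynalips’s DeepLipSync is a learned deep-learning model, not a hand-tuned rule like this.

```python
# Illustrative only: all values are invented for this sketch.
# Hypothetical "lip openness" target per phoneme (0 = closed, 1 = wide open).
VISEME_OPENNESS = {"p": 0.0, "a": 1.0, "t": 0.3, "u": 0.2}

def naive_lipsync(phonemes):
    """One fixed mouth shape per sound: the mouth jumps between targets."""
    return [VISEME_OPENNESS[p] for p in phonemes]

def coarticulated_lipsync(phonemes, spread=0.5):
    """Blend each frame with its neighbours' targets, so every sound's
    mouth shape is influenced by the sounds around it (coarticulation)."""
    frames = []
    for i in range(len(phonemes)):
        weights = [spread ** abs(i - j) for j in range(len(phonemes))]
        blended = sum(w * VISEME_OPENNESS[p] for w, p in zip(weights, phonemes))
        frames.append(round(blended / sum(weights), 2))
    return frames

if __name__ == "__main__":
    word = ["p", "a", "t", "u"]
    print("naive:        ", naive_lipsync(word))          # abrupt jumps
    print("coarticulated:", coarticulated_lipsync(word))  # smoothed by context
```

In the blended version, the closed /p/ already anticipates the open /a/ that follows it, which is exactly the effect a naive one-shape-per-sound mapping misses.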

Dynalips – the result of ten years of research and development in speech modelling

The DeepLipSync technology developed by the Dynalips start-up derives from the proven expertise of the Multispeech team at Loria (the Lorraine Research Laboratory in Computer Science and its Applications – CNRS, Université de Lorraine, Inria, CentraleSupélec). In 2015, Slim Ouni, a professor at the Université de Lorraine and head of the team, began working on lip-sync after concluding that the tools then available for automatically synchronising characters’ lips with their speech were unsatisfactory. The technology was developed collectively, with the researchers involved contributing their expertise in deep learning and speech processing. Slim Ouni, the head of the start-up, explains that “the PACTE Law provided for a public-private bridging scheme which has meant I could spend a year working on the launch of the start-up that I head through a secondment that enables me to divide my time between the start-up and teaching or research at the Université de Lorraine.”

The lip-sync platform

A powerful technology to revolutionise the animation world

Dynalips’s technology is now being opened up to professionals, particularly animation studios and video game development studios. Currently, avatars’ mouths are animated by hand, which is particularly time-consuming. “Our tool means they can animate a character’s lips in synchronisation with speech in a few seconds, whereas this process would take a day’s work in the studio when done by hand,” points out Slim Ouni. “So Dynalips can offer shorter production times and means animators can concentrate on artistic tasks like facial expressions and a character’s unique characteristics.” The system is simple to use: the user provides an audio recording of the speech and receives the character’s facial animation in return.
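As a rough picture of that audio-in, animation-out workflow, the sketch below shows what a client call to such a service might look like. Dynalips has not published an API, so the endpoint URL, audio format and JSON response structure here are assumptions made purely for illustration.

```python
# Hypothetical sketch only: the endpoint, payload format and response
# structure below are illustrative assumptions, not a published Dynalips API.
import json
import urllib.request

def request_lipsync(audio_path: str, endpoint: str) -> dict:
    """Send a speech recording to a (hypothetical) lip-sync service and
    return animation data, e.g. per-frame mouth-shape parameters."""
    with open(audio_path, "rb") as f:
        audio_bytes = f.read()
    req = urllib.request.Request(
        endpoint,
        data=audio_bytes,
        headers={"Content-Type": "audio/wav"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Placeholder file name and URL.
    animation = request_lipsync("line_01.wav", "https://api.example.com/lipsync")
    print(f"Received {len(animation.get('frames', []))} animation frames.")
```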

Beyond animation studios, Dynalips has wide-ranging applications in which precise lip movements are essential for a message to be intelligible, including language-learning systems, virtual assistants and applications to help the hard of hearing.

Brilliant results

The start-up has already achieved some impressive results. It won the tenth Pépite prize run by the French Ministry of Higher Education, Research and Innovation and has also been selected for an ‘Epic MegaGrant’ by the international video game studio Epic Games, which enabled Dynalips to demonstrate a proof of concept of its technology. “We applied our technology to the animation of MetaHumans, hyper-realistic 3D human models, which meant we could validate our solution’s accuracy and realism.”

So what’s the next challenge? To develop a multilingual, real-time version of the solution so it can be opened up to the global market.