ISCApad #189 | Saturday, March 15, 2014 | by Chris Wellekens
*** Postdoc #1
Post-doctoral Fellow - INRIA - Nancy, France
The goal of this work is to develop an accurate 3D lip model that can be integrated within a talking head; a control model will also be developed. The lip model should be as dynamically accurate as possible, so the design will focus on the dynamics. For this reason, we will start from a static 3D lip mesh, based on a generic 3D lip model, and then use MRI images or 3D scans to obtain a more realistic lip shape. To take the dynamic aspect of lip deformation into account, we will use an electromagnetic articulograph (EMA) and motion-capture techniques to track sensors or markers on the lips, and the mesh will be adapted to these data, as sketched below.
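As a concrete illustration of adapting a mesh to tracked points, here is a minimal sketch of a least-squares rigid alignment (the Kabsch/Procrustes method) between mesh landmark vertices and measured marker positions. The function name and array shapes are our own assumptions, not part of the position description; a full adaptation would of course also deform the mesh non-rigidly.

    import numpy as np

    def rigid_align(markers, landmarks):
        """Least-squares rigid alignment (Kabsch/Procrustes) of mesh
        landmark vertices onto tracked EMA/motion-capture markers.

        markers, landmarks: (N, 3) arrays of corresponding 3D points.
        Returns rotation R (3, 3) and translation t (3,) such that
        landmarks @ R.T + t approximates markers.
        """
        mu_m, mu_l = markers.mean(axis=0), landmarks.mean(axis=0)
        # Cross-covariance of the centered point sets
        H = (landmarks - mu_l).T @ (markers - mu_m)
        U, _, Vt = np.linalg.svd(H)
        # Guard against reflections (det = -1)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = mu_m - R @ mu_l
        return R, t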
To control the lips, we will consider skeletal animation driven by the EMA sensors or motion-capture markers, using inverse kinematics, a technique widely used in 3D modeling. As in conventional skeletal animation, an articulated armature rigged inside the mesh is mapped to vertex groups on the lip mesh through a weight map, which can be defined automatically from the envelope of the armature's shape and adjusted manually if required; manipulating the armature's components then deforms the surrounding mesh accordingly (see the sketch after this paragraph).
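For illustration, the following is a minimal sketch of the weight-mapped deformation step (linear blend skinning) that underlies this kind of armature control. The function name, array shapes, and use of NumPy are our own assumptions, not part of the position description.

    import numpy as np

    def blend_skinning(vertices, weights, bone_transforms):
        """Deform mesh vertices with linear blend skinning.

        vertices:        (V, 3) rest-pose vertex positions
        weights:         (V, B) weight map; each row sums to 1
        bone_transforms: (B, 4, 4) homogeneous transform per armature bone
        """
        V = vertices.shape[0]
        # Homogeneous coordinates: (V, 4)
        vh = np.hstack([vertices, np.ones((V, 1))])
        # Transform every vertex by every bone: (B, V, 4)
        per_bone = np.einsum('bij,vj->bvi', bone_transforms, vh)
        # Blend the per-bone results with the weight map: (V, 4)
        blended = np.einsum('vb,bvi->vi', weights, per_bone)
        return blended[:, :3]

Here the weight map plays the role described above: each row ties one mesh vertex to the armature's bones, so moving a bone drags the vertices weighted to it.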
The main challenge is to find the best topology of the sensors or markers on the lips, so that their dynamics can be captured accurately. The main outcome is to accurately model and animate the lips based on articulatory data. It is very important that the resulting lips be readable, i.e., that they can be lip-read by hard-of-hearing people.
For a full description and to apply, please visit this website:
Applications will be considered as soon as they are received.
Slim Ouni
University of Lorraine
Parole - Inria Nancy - Grand Est
************************************************************
PhD #1
Position type: PhD Student - INRIA - Nancy, France
Title: Emotion modeling during expressive audiovisual speech
The goal of this thesis is to study expressivity from the articulatory and visual points of view. The articulatory and facial gestures will be characterized for the different sounds of speech (phonemes) in different expressive contexts. The aim is to determine how facial expressions interact with speech gestures (lips, tongue, face and acoustics), and how this interaction is embedded within phoneme articulation and its acoustic consequences. How to quantify the intensity of an expression during the articulation of a given phoneme (or sequence of phonemes) also needs to be determined. One important objective of this work is to develop an expressive control model that describes the interaction of facial expressions with audiovisual speech.
To achieve these goals, two corpora will be acquired using electromagnetic articulography (EMA) and motion-capture techniques, synchronously with acoustics. EMA uses electromagnetic sensors, glued on the tongue, teeth, lips and possibly the face, each providing a 3D position and two orientation angles. We will use marker-less motion capture to retrieve the dynamics of the face due to speech and expressivity. The corpora will cover sentences pronounced in several emotional contexts, providing the articulatory trajectories of the tongue and the lips in addition to the acoustic signal. The acquired data will be processed and analyzed, and the control model will be developed based on the results of this analysis.
For a full description and to apply, please visit this website:
Applications will be considered as soon as they are received.
Slim Ouni
University of Lorraine
Parole - Inria Nancy - Grand Est