Dynamic Facial Expression Analysis and Synthesis with MPEG-4 Facial Animation Parameters

Citation

Y. Zhang, Q. Ji, Z. Zhu and B. Yi, “Dynamic Facial Expression Analysis and Synthesis With MPEG-4 Facial Animation Parameters,” in IEEE Transactions on Circuits and Systems for Video Technology, vol. 18, no. 10, pp. 1383-1396, Oct. 2008, doi: 10.1109/TCSVT.2008.928887.

Abstract

This paper describes a probabilistic framework for faithfully reproducing dynamic facial expressions on a synthetic face model with MPEG-4 facial animation parameters (FAPs) while achieving a very low bitrate in data transmission. The framework consists of a coupled Bayesian network (BN) that unifies facial expression analysis and synthesis in one coherent structure. At the analysis end, we cast the FAPs and the facial action coding system (FACS) into a dynamic Bayesian network (DBN) to account for uncertainties in FAP extraction and to model the dynamic evolution of facial expressions. At the synthesis end, a static BN reconstructs the FAPs and their intensities. The two BNs are connected through a data stream link. Using the coupled BN to analyze and synthesize dynamic facial expressions is the major novelty of this work, and it brings several benefits. First, a very low bitrate (9 bytes per frame) can be achieved in data transmission. Second, facial expressions are inferred through both spatial and temporal inference, so the perceptual quality of the animation is less affected by misdetected FAPs. Third, more realistic-looking facial expressions can be reproduced by modeling the dynamics of human expressions.
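
To make the two central ideas concrete, the sketch below illustrates forward filtering in a dynamic model (a simple HMM-style stand-in for the paper's DBN over FAPs and FACS) and packing each frame's result into 9 bytes for transmission. The expression classes, the persistence transition model, and the byte layout are all illustrative assumptions; the abstract specifies only the 9-bytes-per-frame figure, not the actual network structure or stream format.

```python
import struct

import numpy as np

# Hypothetical state space: six basic expressions plus neutral. The paper's
# actual DBN structure and transmitted byte layout are not given in the
# abstract, so everything below is an illustrative assumption.
EXPRESSIONS = ["neutral", "happiness", "sadness", "surprise", "anger", "disgust", "fear"]
N = len(EXPRESSIONS)

# Simple persistence transition model: an expression tends to continue
# from one frame to the next.
TRANSITION = np.full((N, N), 0.02)
np.fill_diagonal(TRANSITION, 1.0 - 0.02 * (N - 1))

def forward_filter(prior, likelihood):
    """One forward-filtering step (HMM-style stand-in for the paper's DBN):
    propagate the previous frame's belief through the transition model and
    fuse it with the current frame's FAP-derived evidence."""
    predicted = TRANSITION.T @ prior    # temporal prediction
    posterior = predicted * likelihood  # spatial evidence from extracted FAPs
    return posterior / posterior.sum()

def pack_frame(posterior, intensity):
    """Pack one frame into 9 bytes: 7 quantized class probabilities, 1
    intensity byte, 1 MAP-class index byte (an assumed layout that merely
    matches the paper's 9-bytes-per-frame figure)."""
    probs = np.clip(np.round(posterior * 255), 0, 255).astype(int)
    return struct.pack("9B", *probs, int(round(intensity * 255)), int(posterior.argmax()))

# Demo: three frames of noisy evidence favoring "surprise". The temporal
# model smooths over per-frame detection errors, which is why misdetected
# FAPs degrade the animation less.
belief = np.full(N, 1.0 / N)
for _ in range(3):
    evidence = np.full(N, 0.05)
    evidence[EXPRESSIONS.index("surprise")] = 0.70
    belief = forward_filter(belief, evidence)
    payload = pack_frame(belief, intensity=float(belief.max()))
    assert len(payload) == 9  # the very low per-frame cost cited above
```

At the receiving end, a synthesizer in this scheme would unpack the 9 bytes and map the expression class and intensity back to FAPs for the face model, mirroring the role of the paper's static BN.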

