
Research

Head Pose Estimation from a Single Image

Head pose estimation is an important task in many face analysis applications, such as face recognition systems and human-computer interaction. We aim to address the pose estimation problem under challenging conditions, e.g., from a single image, with large pose variation, and under uneven illumination. Our approach combines non-linear dimension reduction with a learned distance metric transformation. The learned metric yields better intra-class clustering, and therefore preserves a smooth low-dimensional manifold even when the input images vary strongly due to illumination changes. Experiments show that our method achieves an error within 2-3 degrees for face images with varying pose, and within 3-4 degrees for face images with both pose and illumination changes.



We developed the first dimension-reduction (DR) based method for head pose estimation under harsh illumination conditions. It preserves a smooth manifold (bottom right) in the presence of large illumination changes, where traditional DR methods such as ISOMAP (bottom left) fail.
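To illustrate the general idea, here is a minimal sketch of metric-based pose regression. The function names, the Mahalanobis-style within-class whitening, and the toy features are our own assumptions for illustration, not the learned metric or the DR pipeline from the paper; the sketch only shows how a metric that shrinks illumination-dominated directions helps nearest-neighbor pose estimation.

```python
import numpy as np

# Hypothetical sketch: learn a linear metric L so that distances
# ||L(x1 - x2)|| downweight directions dominated by within-class
# (illumination-induced) scatter, then regress pose with k-NN.

def whiten_within_class(X, labels):
    """Return L = Sw^{-1/2}, where Sw is the within-class scatter matrix."""
    d = X.shape[1]
    Sw = np.zeros((d, d))
    for c in np.unique(labels):
        Xc = X[labels == c]
        Xc = Xc - Xc.mean(axis=0)
        Sw += Xc.T @ Xc
    Sw += 1e-6 * np.eye(d)                 # regularize near-singular scatter
    vals, vecs = np.linalg.eigh(Sw)
    return vecs @ np.diag(vals ** -0.5) @ vecs.T

def estimate_pose(x, X_train, poses, L, k=3):
    """k-NN pose regression in the metric-transformed space."""
    proj = (X_train - x) @ L.T
    d2 = (proj ** 2).sum(axis=1)
    nearest = np.argsort(d2)[:k]
    return poses[nearest].mean()

# Toy demo: feature 0 encodes pose angle, feature 1 is an
# illumination-dominated nuisance dimension with large variance.
X = np.array([[p, 10.0 * i] for p in (-30, -15, 0, 15, 30)
              for i in (-1, 0, 1)])
labels = np.repeat(np.arange(5), 3)
L = whiten_within_class(X, labels)
print(estimate_pose(np.array([14.0, 10.0]), X, X[:, 0], L))  # -> 15.0
```

In the demo, plain Euclidean k-NN would be pulled toward wrong neighbors by the large illumination component, while the learned transform suppresses that direction so only pose differences matter, mirroring the intra-class clustering effect described above.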

People:
Ruigang Yang, Xianwang Wang, Xinyu Huang, Jizhou Gao

Related Publication:
Illumination and Person-Insensitive Head Pose Estimation Using Distance Metric Learning, ECCV 2008

Sponsors:
NSF, DHS, Bosch