FaceInCar: Real-time Dense Monocular Face Tracking of a Driver

Talk 2016

This exhibit demonstrates the first approach for real-time dense face tracking of a driver from a monocular video stream. It reconstructs and tracks a high-fidelity mathematical model of the driver's facial geometry and expressions using just a commodity webcam. In addition to geometry, it also recovers the skin reflectance and an estimate of the scene illumination. Because the illumination is estimated explicitly, our approach is robust to the rapidly changing lighting conditions often encountered while driving, e.g. when passing through a forest. The system can be used to monitor the driver and trigger a warning if their attention level drops, e.g. due to microsleep or anger. In contrast to previous real-time face trackers, which rely on sparse feature points, our approach leverages dense per-pixel information to obtain higher-quality reconstructions.

Dense face tracking has many other important use cases. In the field of human-machine interaction, detecting and tracking facial movements is of paramount importance. In the film industry, face tracking can be used to capture and analyze the facial motion of actors; the acquired information can also be used to animate virtual characters in video games. Our project also has applications in medical research: there, we have already analyzed the healing process of patients suffering from cleft lip and palate. Head tracking is also important for other medical purposes, e.g. tracking during surgery. Many psychologists are interested in our system for treating people who suffer from psychological disorders. In addition, our technique can be used in virtual and augmented reality applications, e.g. to implement a virtual mirror or to enable telepresence.
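To make the contrast between sparse and dense tracking concrete, the following is a minimal, hypothetical sketch (not the actual system's code) of the two kinds of fitting energies. A sparse tracker minimizes distances between a few dozen projected model landmarks and detected image landmarks, while a dense tracker minimizes a per-pixel photometric difference between a rendering of the current model estimate and the observed camera frame. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def sparse_landmark_energy(model_pts, detected_pts):
    """Sum of squared 2D distances between projected model landmarks and
    detected image landmarks (typically only a few dozen points)."""
    return float(np.sum((model_pts - detected_pts) ** 2))

def dense_photometric_energy(rendered, observed, mask):
    """Sum of squared per-pixel intensity differences over the face region.
    'rendered' is a synthetic image of the current face-model estimate,
    'observed' is the camera frame; 'mask' selects the visible face pixels."""
    diff = (rendered - observed) * mask
    return float(np.sum(diff ** 2))

# Toy example: a slightly misaligned model estimate.
rng = np.random.default_rng(0)
observed = rng.random((4, 4))          # 4x4 grayscale "camera frame"
rendered = observed + 0.1              # model rendering, off by a constant
mask = np.ones((4, 4))                 # whole image counts as face region

e_dense = dense_photometric_energy(rendered, observed, mask)
```

In a real tracker, an energy of this form would be minimized over the model parameters (pose, identity, expression, reflectance, illumination) every frame; the dense term uses orders of magnitude more measurements than the landmark term, which is where the higher reconstruction quality comes from.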

Project Page / Videos


National IT Summit 2016 (17 Nov)