Decaf is a method for tracking 3D hand-face interactions from single monocular RGB videos. It models the hands as articulated objects that induce non-rigid deformations of the face during active interaction. By overcoming the challenges of 3D tracking from monocular RGB video, Decaf improves the realism of downstream applications such as AR/VR, 3D virtual avatar communication, and character animation. The method builds on a new hand-face motion and interaction capture dataset with realistic face deformations, acquired with a markerless multi-view camera system. The resulting 3D hand and face reconstructions are realistic and more plausible than those of several baselines applicable in this setting.
Publication date: 29 Sep 2023
Project Page: https://vcai.mpi-inf.mpg.de/projects/Decaf
Paper: https://arxiv.org/pdf/2309.16670