This article discusses a new method for reconstructing high-quality, animatable, dynamic garments from monocular videos. Traditional garment digitization pipelines are time-consuming and require expert knowledge. In contrast, the new method introduces a learnable garment deformation network that formulates garment reconstruction as a pose-driven deformation problem, allowing it to generate plausible deformations for a wide range of unseen poses. To estimate 3D garments from monocular video, the method adds a multi-hypothesis deformation module that alleviates the ambiguity inherent in single-view observations. Experimental results show that the method reconstructs high-quality dynamic garments with coherent surface details, which can be easily animated under unseen poses.
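To make the two core ideas concrete, here is a minimal PyTorch sketch of a pose-driven deformation network with a multi-hypothesis head. The class name `PoseDrivenDeformer`, the layer sizes, the 72-dimensional (SMPL-style) pose vector, and the soft blending of hypotheses are all illustrative assumptions, not the paper's actual architecture; see the linked paper for the real design.

```python
import torch
import torch.nn as nn

class PoseDrivenDeformer(nn.Module):
    """Maps a body pose vector to K candidate per-vertex garment offsets
    plus a score per hypothesis (a sketch of multi-hypothesis deformation).
    All dimensions and layers here are hypothetical, not from the paper."""
    def __init__(self, num_verts, pose_dim=72, hidden=256, num_hyp=4):
        super().__init__()
        self.num_verts, self.num_hyp = num_verts, num_hyp
        self.trunk = nn.Sequential(
            nn.Linear(pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # K hypotheses, each a full set of 3D vertex offsets
        self.offsets = nn.Linear(hidden, num_hyp * num_verts * 3)
        # One confidence score per hypothesis
        self.scores = nn.Linear(hidden, num_hyp)

    def forward(self, pose, template):
        # pose: (B, pose_dim); template: (num_verts, 3) rest-pose garment mesh
        feat = self.trunk(pose)
        d = self.offsets(feat).view(-1, self.num_hyp, self.num_verts, 3)
        w = torch.softmax(self.scores(feat), dim=-1)      # (B, K)
        # Soft-blend the K hypotheses into one deformation per frame
        blended = (w[:, :, None, None] * d).sum(dim=1)    # (B, V, 3)
        return template[None] + blended

# Usage: deform a toy 500-vertex garment under two random poses
model = PoseDrivenDeformer(num_verts=500)
verts = model(torch.randn(2, 72), torch.zeros(500, 3))
print(verts.shape)  # torch.Size([2, 500, 3])
```

Predicting several candidate deformations and blending them by learned scores is one common way to cope with the depth ambiguity of single-view input: rather than committing to a single, possibly wrong deformation, the network keeps multiple plausible ones in play.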

Publication date: 3 Nov 2023
Project Page: http://cic.tju.edu.cn/faculty/likun/projects/DGarment
Paper: https://arxiv.org/pdf/2311.01214