The study introduces a face swapping method that leverages the progressively grown structure of a pre-trained StyleGAN. Instead of the traditional encoder-decoder architectures with embedding-integration networks, the method disentangles identity and attribute features, concatenates them, and maps the result into the generator's extended latent space, which yields high-quality results. It outperforms other face swapping techniques both qualitatively and quantitatively. The study contributes to the field of deepfake technology, offering improved control over the generation of manipulated images and videos.
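The pipeline described above — identity features from the source face, attribute features from the target face, concatenation, and a mapping into the extended latent space (W+) of a frozen StyleGAN — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual architecture: the encoder and mapper modules, feature dimensions, and the 18 × 512 W+ shape (standard for a 1024 × 1024 StyleGAN) are all assumptions.

```python
import torch
import torch.nn as nn

class IdentityEncoder(nn.Module):
    """Stand-in identity encoder (in practice, e.g. a face-recognition backbone)."""
    def __init__(self, dim=512):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.LazyLinear(dim))

    def forward(self, x):
        return self.net(x)

class AttributeEncoder(nn.Module):
    """Stand-in attribute encoder (pose, expression, lighting, background)."""
    def __init__(self, dim=512):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.LazyLinear(dim))

    def forward(self, x):
        return self.net(x)

class LatentMapper(nn.Module):
    """Maps the concatenated features into the extended W+ latent space."""
    def __init__(self, in_dim=1024, n_layers=18, w_dim=512):
        super().__init__()
        self.n_layers, self.w_dim = n_layers, w_dim
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 512),
            nn.LeakyReLU(0.2),
            nn.Linear(512, n_layers * w_dim),
        )

    def forward(self, f):
        # One 512-dim style code per StyleGAN layer -> (B, 18, 512)
        return self.mlp(f).view(-1, self.n_layers, self.w_dim)

def swap_latent(source, target, id_enc, attr_enc, mapper):
    """Identity comes from the source face, attributes from the target face."""
    f_id = id_enc(source)                  # (B, 512) identity features
    f_attr = attr_enc(target)              # (B, 512) attribute features
    f = torch.cat([f_id, f_attr], dim=1)   # (B, 1024) concatenated features
    return mapper(f)                       # (B, 18, 512) W+ codes for a frozen StyleGAN

if __name__ == "__main__":
    src = torch.randn(2, 3, 256, 256)
    tgt = torch.randn(2, 3, 256, 256)
    w_plus = swap_latent(src, tgt, IdentityEncoder(), AttributeEncoder(), LatentMapper())
    print(tuple(w_plus.shape))  # (2, 18, 512)
```

The resulting W+ codes would then be fed to the pre-trained StyleGAN generator (not shown) to synthesize the swapped face; keeping the generator frozen is what lets the method inherit its image quality.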
Publication date: 19 Oct 2023
Project Page: https://arxiv.org/abs/2310.12736
Paper: https://arxiv.org/pdf/2310.12736