The paper addresses the challenge of protecting the integrity and ownership of large transformer models, whose rapid growth in size makes traditional watermarking impractical: embedding a unique identifier into the model typically requires training or fine-tuning, which is computationally costly at this scale. The authors instead leverage the model's invariance properties to generate functionally equivalent copies of the model that carry the watermark. The method leaves the model's outputs unchanged and requires no training. The paper demonstrates the effectiveness and robustness of this approach.
Publication date: 19 Oct 2023
Project Page: Not provided
Paper: https://arxiv.org/pdf/2310.11446
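To make the invariance idea concrete, here is a minimal illustrative sketch (not the authors' code, and simplified to a toy two-layer MLP rather than a transformer): permuting the hidden neurons of a layer, while applying the same permutation consistently to the adjacent weight matrices, yields a model that computes exactly the same function. The secret permutation itself can then serve as the watermark.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, d_out = 4, 8, 3

# Toy two-layer MLP weights.
W1 = rng.standard_normal((d_hidden, d_in))
b1 = rng.standard_normal(d_hidden)
W2 = rng.standard_normal((d_out, d_hidden))

def mlp(x, W1, b1, W2):
    h = np.maximum(W1 @ x + b1, 0.0)  # ReLU hidden layer
    return W2 @ h

# Choose a secret permutation of the hidden units (the "watermark").
perm = rng.permutation(d_hidden)

# Permute the rows of W1/b1 and the columns of W2 consistently:
# the permutation cancels out, so the function is unchanged.
W1_p, b1_p, W2_p = W1[perm], b1[perm], W2[:, perm]

x = rng.standard_normal(d_in)
assert np.allclose(mlp(x, W1, b1, W2), mlp(x, W1_p, b1_p, W2_p))
```

Because the watermarked copy is bit-for-bit different in its weights yet functionally identical, no retraining is needed, which is what makes this family of methods cheap for very large models.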