This study presents a binary linear tree commitment-based ownership protection model for distributed machine learning. The model ensures computational integrity with limited overhead and concise proofs. Its commitment scheme yields a maintainable tree structure, reducing the cost of updating proofs, and it leverages inner product arguments for efficient proof aggregation. To prevent forgery or duplication, proofs of model weights are watermarked with worker identity keys. Performance analysis and comparison with SNARK-based hash commitments confirm the model's effectiveness in preserving computational integrity within distributed machine learning.
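
To make the "maintainable tree structure" idea concrete, below is a minimal hash-based sketch of a binary tree commitment over serialized weight chunks. This is an illustrative stand-in rather than the paper's construction: the actual scheme relies on algebraic commitments and inner product arguments rather than hashing, and the class name, method names, and helper `h` here are invented for the example. The point it illustrates is that replacing one chunk only recomputes the O(log n) nodes on its root path, which is why proof maintenance stays cheap as weights are updated during training.

```python
import hashlib
from typing import List


def h(data: bytes) -> bytes:
    """Toy hash used as a stand-in for the paper's commitment primitive."""
    return hashlib.sha256(data).digest()


class BinaryTreeCommitment:
    """Illustrative binary (Merkle-style) tree over weight chunks.

    Updating one leaf touches only the O(log n) nodes on its root path;
    opening a leaf yields a concise O(log n) membership proof.
    """

    def __init__(self, chunks: List[bytes]):
        # Pad to a power of two so the tree is a complete binary tree.
        n = 1
        while n < len(chunks):
            n *= 2
        self.leaves = [h(c) for c in chunks] + [h(b"")] * (n - len(chunks))
        # 1-indexed heap layout: internal nodes at 1..n-1, leaves at n..2n-1.
        self.nodes = [b""] * n + self.leaves
        for i in range(n - 1, 0, -1):
            self.nodes[i] = h(self.nodes[2 * i] + self.nodes[2 * i + 1])

    def root(self) -> bytes:
        """The commitment to the current set of weight chunks."""
        return self.nodes[1]

    def update(self, index: int, new_chunk: bytes) -> None:
        """Replace one chunk and refresh only the hashes on its root path."""
        n = len(self.leaves)
        i = n + index
        self.nodes[i] = h(new_chunk)
        i //= 2
        while i >= 1:
            self.nodes[i] = h(self.nodes[2 * i] + self.nodes[2 * i + 1])
            i //= 2

    def open(self, index: int) -> List[bytes]:
        """Sibling hashes from leaf to root: a concise membership proof."""
        n = len(self.leaves)
        i, path = n + index, []
        while i > 1:
            path.append(self.nodes[i ^ 1])  # sibling of the current node
            i //= 2
        return path
```

A worker could, for instance, commit to its weight chunks once, call `update` after each local training step, and publish `root()` together with an opening as evidence of the computation; the watermarking of proofs with worker identity keys described in the paper is an additional layer not reproduced in this sketch.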

 

Publication date: 15 Jan 2024
Project Page: https://doi.org/XXXXXXX.XXXXXXX
Paper: https://arxiv.org/pdf/2401.05895