This research introduces FedPEAT, a novel framework that combines Federated Learning, Parameter-Efficient Fine-Tuning (PEFT), and Emulator-Assisted Tuning (EAT). The framework addresses the challenges of deploying and fine-tuning large AI models such as GPT-3 and BERT, particularly collaborative training, model ownership, and the limited compute and memory of client devices. FedPEAT pairs adapters (small trainable modules) with emulators (compressed stand-ins for the frozen backbone), so the full model weights never have to be shared; this preserves model privacy and keeps on-device fine-tuning memory-efficient. The framework also incorporates a control mechanism that uses deep reinforcement learning to tune critical hyperparameters. Experiments demonstrate FedPEAT's practical applicability and efficacy on the challenges associated with fine-tuning large language models.
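The adapter-plus-emulator mechanics can be illustrated with a short sketch. The PyTorch toy below is not the paper's code: the server keeps the full model, ships a compressed, frozen emulator of it to clients, each client trains only a small adapter, and the server aggregates the adapter weights with plain FedAvg. All names (`build_emulator`, `client_update`, `fedavg`) are illustrative assumptions, and layer dropping stands in for whatever compression scheme the paper actually uses.

```python
# Minimal, self-contained sketch of the FedPEAT idea (illustrative, not the paper's API).
import copy
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Small trainable bottleneck applied after the frozen emulator."""
    def __init__(self, dim, bottleneck=4):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))  # residual update

def build_emulator(full_blocks, keep_every=2):
    """Compress the backbone by dropping layers -- a stand-in for the
    server-side compression; clients never see the full model."""
    kept = [copy.deepcopy(b) for i, b in enumerate(full_blocks) if i % keep_every == 0]
    emulator = nn.Sequential(*kept)
    for p in emulator.parameters():
        p.requires_grad_(False)  # backbone stays frozen on the client
    return emulator

def client_update(emulator, adapter, data, steps=5, lr=1e-3):
    """One client round: train only the adapter against the frozen emulator."""
    opt = torch.optim.SGD(adapter.parameters(), lr=lr)
    x, y = data
    for _ in range(steps):
        loss = nn.functional.mse_loss(adapter(emulator(x)), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return {k: v.detach().clone() for k, v in adapter.state_dict().items()}

def fedavg(states):
    """Server-side aggregation: average the clients' adapter weights."""
    return {k: sum(s[k] for s in states) / len(states) for k in states[0]}

if __name__ == "__main__":
    dim = 16
    full_blocks = [nn.Linear(dim, dim) for _ in range(6)]  # full model stays on the server
    emulator = build_emulator(full_blocks)
    global_adapter = Adapter(dim)

    for rnd in range(3):                       # federated rounds
        states = []
        for _ in range(4):                     # four clients with local data
            local = copy.deepcopy(global_adapter)
            data = (torch.randn(8, dim), torch.randn(8, dim))
            states.append(client_update(emulator, local, data))
        global_adapter.load_state_dict(fedavg(states))
    print("finished 3 federated rounds of adapter-only tuning")
```

Only the lightweight adapter weights cross the network in either direction, which is what makes the scheme both communication-efficient and protective of the server's full model.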

Publication date: 27 Oct 2023
Project Page: Not Provided
Paper: https://arxiv.org/pdf/2310.17491