The article presents FedPEAT, a framework that brings together federated learning, parameter-efficient fine-tuning (PEFT), and emulator-assisted tuning for AI foundation models in mobile edge computing settings. It targets the challenges of deploying and fine-tuning large models such as GPT-3 and BERT, in particular collaborative training, model ownership, and the limited memory and compute of edge devices. In FedPEAT, clients fine-tune small adapter modules against a compressed emulator of the foundation model, which enhances model privacy while improving memory and computational efficiency. An adaptive control mechanism based on deep reinforcement learning selects critical hyperparameters and orchestrates resources efficiently across the server and participating devices. The authors demonstrate the practical applicability and efficacy of the framework through experimental evaluation.
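
As a rough illustration of the adapter-plus-emulator idea, the sketch below shows one way a federated round could look: each client holds a frozen, compressed emulator of the server-held backbone plus small trainable adapters, trains only the adapter and task-head weights locally, and the server averages those weights. This is a minimal sketch under assumed PyTorch conventions, not the paper's implementation; the names Adapter, ClientModel, local_update, and fed_avg are illustrative and do not come from the paper.

```python
# Minimal sketch (not the authors' implementation): clients hold a frozen,
# compressed "emulator" of the foundation model plus small trainable adapters;
# only the adapter/head weights are trained on-device and federated.
import copy

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset


class Adapter(nn.Module):
    """Residual bottleneck adapter: the only backbone-side weights trained on-device."""

    def __init__(self, dim: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))


class ClientModel(nn.Module):
    """Frozen emulator of the backbone, wrapped with a trainable adapter and task head."""

    def __init__(self, emulator: nn.Module, dim: int, num_classes: int = 2):
        super().__init__()
        self.emulator = emulator
        for p in self.emulator.parameters():
            p.requires_grad = False  # the emulator itself is never updated on-device
        self.adapter = Adapter(dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        return self.head(self.adapter(self.emulator(x)))


def local_update(model: ClientModel, loader: DataLoader, epochs: int = 1, lr: float = 1e-3):
    """One round of on-device training; returns only the trainable parameters."""
    opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return {k: v.detach().clone() for k, v in model.state_dict().items()
            if k.startswith(("adapter", "head"))}


def fed_avg(updates):
    """Server-side averaging over the clients' adapter/head parameters only."""
    return {k: torch.stack([u[k] for u in updates]).mean(dim=0) for k in updates[0]}


if __name__ == "__main__":
    dim = 32
    emulator = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())  # stand-in for a compressed backbone
    global_model = ClientModel(emulator, dim)

    # Three simulated edge clients with random local data.
    loaders = [DataLoader(TensorDataset(torch.randn(64, dim), torch.randint(0, 2, (64,))),
                          batch_size=16) for _ in range(3)]

    for _ in range(2):  # two federated rounds
        updates = [local_update(copy.deepcopy(global_model), loader) for loader in loaders]
        global_model.load_state_dict(fed_avg(updates), strict=False)
```

The point of this arrangement, as the summary above describes, is that the full foundation model weights stay with their owner and only compact adapter updates cross the network, which is what yields the privacy and memory/computation savings; the paper's deep-reinforcement-learning controller (not sketched here) would additionally tune hyperparameters such as the emulator and adapter configuration per device.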

Publication date: 27 Oct 2023
Project Page: Not provided
Paper: https://arxiv.org/pdf/2310.17491