This paper introduces Investigate-Consolidate-Exploit (ICE), a strategy for improving the adaptability and flexibility of AI agents through inter-task self-evolution. Unlike existing methods that focus on learning within a single task, ICE transfers knowledge between tasks for genuine self-evolution, mirroring human experiential learning. The strategy investigates planning and execution trajectories as they unfold, consolidates them into simplified, reusable workflows, and exploits those workflows to enhance later task execution. Experiments on the XAgent framework demonstrate ICE’s effectiveness, reducing API calls by up to 80% and significantly lowering the demand on the model’s capabilities. The authors argue that this self-evolution paradigm represents a significant shift in agent design, contributes to a stronger agent community and ecosystem, and moves agents closer to full autonomy.
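
To make the three stages concrete, here is a minimal Python sketch of an inter-task experience loop in the spirit of ICE. It is not the authors' implementation: the `Workflow`, `ExperienceStore`, and `solve_task` names, the stubbed trajectories, and the goal-matching heuristic are all hypothetical placeholders for the paper's actual planning/execution machinery.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Workflow:
    """A consolidated, simplified workflow distilled from a past task (hypothetical structure)."""
    goal: str
    steps: List[str]


@dataclass
class ExperienceStore:
    """Keeps consolidated workflows so later tasks can reuse them (Exploit stage)."""
    workflows: List[Workflow] = field(default_factory=list)

    def consolidate(self, goal: str, trajectory: List[str]) -> None:
        # Consolidate: keep only the successful, non-redundant steps of the raw trajectory.
        steps = [s for s in trajectory if not s.startswith("FAILED:")]
        self.workflows.append(Workflow(goal=goal, steps=steps))

    def retrieve(self, goal: str) -> Optional[Workflow]:
        # Exploit: naive retrieval by goal overlap (placeholder for a real matcher).
        for wf in self.workflows:
            if wf.goal.lower() in goal.lower() or goal.lower() in wf.goal.lower():
                return wf
        return None


def solve_task(goal: str, store: ExperienceStore) -> List[str]:
    """One Investigate-Consolidate-Exploit pass with stubbed planning/execution."""
    reused = store.retrieve(goal)
    if reused is not None:
        # Exploit: replay the stored workflow instead of re-planning from scratch;
        # skipping fresh planning is where the reduction in API calls would come from.
        return [f"replay: {step}" for step in reused.steps]

    # Investigate: record the planning/execution trajectory of a fresh attempt.
    trajectory = [
        f"plan: break down '{goal}'",
        f"execute: subtask 1 of '{goal}'",
        "FAILED: dead-end tool call",
        f"execute: subtask 2 of '{goal}'",
    ]

    # Consolidate: distill the trajectory into a simplified workflow for future tasks.
    store.consolidate(goal, trajectory)
    return trajectory


if __name__ == "__main__":
    store = ExperienceStore()
    first = solve_task("summarize a research paper", store)   # investigates + consolidates
    second = solve_task("summarize a research paper", store)  # exploits the stored workflow
    print(first)
    print(second)
```

In this toy setup the second call replays the consolidated workflow rather than re-planning, which is the mechanism the paper credits for the drop in API calls and model-capability demands.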

Publication date: 26 Jan 2024
Project Page: Not provided
Paper: https://arxiv.org/pdf/2401.13996