L-Eval: Instituting Standardized Evaluation for Long Context Language Models
The research team from Fudan University, The University of Hong Kong, and the University of Illinois Urbana-Champaign proposes L-Eval, an evaluation benchmark for Long Context Language Models (LCLMs). They aimed to…