The paper examines the use of Large Language Models (LLMs) for generating visualizations, noting both their potential and the challenges they present. It introduces EvaLLM, a theoretical evaluation stack designed to assess the efficacy of LLM-generated visualizations, along with an evaluation platform that uses EvaLLM to benchmark visualization generation tasks. Two case studies illustrate the benefits of EvaLLM and shed light on the current state of LLM-generated visualizations.
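As a rough illustration of what benchmarking LLM-generated visualizations can involve, here is a minimal, hypothetical Python sketch. It assumes generated charts are expressed as Vega-Lite-style JSON specs and scores them against a reference spec in layers (parse validity, chart type, encoding agreement). The names, data, and scoring below are illustrative assumptions, not EvaLLM's actual API or metrics.

```python
import json

# Hypothetical reference spec and raw LLM output; EvaLLM's real formats may differ.
reference_spec = {
    "mark": "bar",
    "encoding": {"x": {"field": "category"}, "y": {"field": "value"}},
}
generated_text = (
    '{"mark": "bar", "encoding": {"x": {"field": "category"}, '
    '"y": {"field": "count"}}}'
)

def score_generated_spec(text, reference):
    """Score an LLM-generated chart spec in layers of increasing strictness."""
    try:
        spec = json.loads(text)  # layer 1: is the output well-formed JSON at all?
    except json.JSONDecodeError:
        return {"parses": False, "mark_match": False, "encoding_overlap": 0.0}
    # layer 2: does the chart type (mark) match the reference?
    mark_match = spec.get("mark") == reference.get("mark")
    # layer 3: fraction of reference encoding channels reproduced exactly.
    ref_enc = reference.get("encoding", {})
    gen_enc = spec.get("encoding", {})
    shared = [k for k in ref_enc if gen_enc.get(k) == ref_enc[k]]
    overlap = len(shared) / max(len(ref_enc), 1)
    return {"parses": True, "mark_match": mark_match, "encoding_overlap": overlap}

print(score_generated_spec(generated_text, reference_spec))
# {'parses': True, 'mark_match': True, 'encoding_overlap': 0.5}
```

Layered checks like this reflect the general idea of an evaluation stack: a spec must first be syntactically valid before higher-level properties such as chart type or encoding fidelity are worth scoring.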
Publication date: 7 Feb 2024
Project Page: Not provided
Paper: https://arxiv.org/pdf/2402.02167