Data analysis is a complex task that combines domain knowledge, statistical expertise, and programming skill. AI assistants like ChatGPT, powered by large language models (LLMs), can help analysts by converting natural language instructions into code. However, the assistant’s output may not align with the analyst’s intent, and acting on misaligned output can lead to incorrect conclusions, so validating AI assistance is crucial. This study investigates how analysts of varying backgrounds and expertise verify AI-generated analyses. The researchers developed a design probe that let analysts employ different verification workflows using natural language explanations, code, visualizations, data table inspection, and common data operations. The study identified common verification workflow patterns shaped by the analysts’ programming, analysis, and AI backgrounds, and it highlighted challenges and opportunities for improving future AI analysis assistants.
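
The paper studies verification behavior rather than prescribing an API, but a small sketch can make the workflow concrete. The pandas snippet below is a hypothetical illustration (the DataFrame, column names, and the “average sales per region” task are invented, not taken from the paper): it pairs an AI-generated aggregation with the kinds of checks the probe supports, such as data table inspection, a common data operation as a sanity check, and independent recomputation.

```python
import pandas as pd

# Hypothetical dataset standing in for an analyst's data.
df = pd.DataFrame({
    "region": ["north", "south", "north", "south", "north"],
    "sales":  [100.0, 250.0, None, 300.0, 150.0],
})

# --- AI-generated analysis (as an assistant might return it) ---
# Stated intent: "average sales per region"
ai_result = df.groupby("region")["sales"].mean()

# --- Verification steps of the kind the probe supports ---
# 1. Data table inspection: look at the raw rows the code consumed.
print(df.head())

# 2. A common data operation as a sanity check: count missing values,
#    since pandas' mean() silently drops NaN, which may not match
#    the analyst's intent.
print(df["sales"].isna().sum(), "missing sales values")

# 3. Independent recomputation for one group to cross-check the result.
north = df[df["region"] == "north"]["sales"].dropna()
assert abs(ai_result["north"] - north.sum() / len(north)) < 1e-9
print(ai_result)
```

Each step mirrors a verification affordance named in the abstract; in practice an analyst might mix and match these depending on their programming and analysis background.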

Publication date: 21 Sep 2023
Project Page: Not provided
Paper: https://arxiv.org/pdf/2309.10947