This article surveys the use of Large Language Models (LLMs) in chemistry, discussing the complexities and innovations at this interdisciplinary juncture. The paper examines how molecular information is incorporated into LLMs and categorizes chemical LLMs by their input data. It also reviews pretraining objectives adapted for chemical LLMs and explores their diverse applications in chemistry. The paper concludes by highlighting promising research directions, including deeper integration of chemical knowledge, advances in continual learning, and improvements in model interpretability.
Publication date: 5 Feb 2024
Project Page: not provided
Paper: https://arxiv.org/pdf/2402.01439