This study presents three pathways to implementing models and policies in neurosymbolic artificial intelligence (AI), which combines the interpretability and explicit reasoning of symbolic methods with the pattern-recognition power of data-driven neural approaches. The paper studies a class of neural networks that incorporate interpretable semantics directly into their architecture. The authors highlight both the promise and the challenges of combining logic, simulation, and learning, and discuss the trade-off between learnability and interpretability. They also raise several open questions concerning the limits of rule-based controllers, the scalability of differentiable interpretable approaches, and what it would take to achieve true interpretability.
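
To make the idea of a "differentiable interpretable" building block concrete, here is a minimal sketch (my illustration, not the paper's code): a policy expressed as a soft if-then rule whose weights and threshold are learned by gradient descent, yet which can be read back as a crisp symbolic rule after training. The class name, the `extract_rule` helper, and the temperature value are assumptions made for illustration.

```python
# A minimal sketch (not the paper's implementation) of a differentiable
# rule-based controller: one "rule" is a soft predicate with learnable
# weights and threshold, so the policy trains by gradient descent yet
# reads as an if-then rule afterwards. All names are illustrative.
import torch
import torch.nn as nn


class SoftRulePolicy(nn.Module):
    """Policy of the form: IF (w . state > theta) THEN action 1 ELSE action 0,
    relaxed with a sigmoid so that w and theta are learnable."""

    def __init__(self, state_dim: int):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(state_dim))
        self.threshold = nn.Parameter(torch.zeros(1))
        self.temperature = 10.0  # sharpness of the soft comparison

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # Soft truth value of "w . state > theta", in [0, 1]; interpreted
        # as the probability of taking action 1.
        return torch.sigmoid(
            self.temperature * (state @ self.weights - self.threshold)
        )

    def extract_rule(self) -> str:
        # After training, the policy can be read back as a crisp rule.
        w = self.weights.detach().numpy().round(2)
        t = self.threshold.item()
        return f"IF {w} . state > {t:.2f} THEN action 1 ELSE action 0"


if __name__ == "__main__":
    policy = SoftRulePolicy(state_dim=4)
    s = torch.randn(4)
    print(f"P(action=1) = {policy(s).item():.3f}")
    print(policy.extract_rule())
```

The sigmoid temperature makes the learnability/interpretability trade-off tangible: a softer comparison gives smoother gradients for training, while a sharper one makes the learned policy behave more like the crisp rule it is read back as.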

Publication date: 9 Feb 2024
Project Page: https://arxiv.org/abs/2402.05307
Paper: https://arxiv.org/pdf/2402.05307