This work introduces imagination-augmented hierarchical reinforcement learning (IAHRL), a navigation algorithm that improves the safety and interactivity of autonomous driving in urban environments. The agent is hierarchical: low-level policies produce safe, structured behaviors that follow task-specific rules, while a high-level policy infers interactions with surrounding vehicles by interpreting the behaviors imagined with those low-level policies, using a permutation-invariant attention mechanism to select the low-level policy that yields the most interactive behavior. Evaluated on five complex urban driving tasks, the hierarchical agent performs safety-aware behaviors, interacts properly with surrounding vehicles, and achieves higher success rates and fewer average episode steps than the baselines.
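
To make the selection step concrete, below is a minimal sketch (not the authors' code) of how a high-level policy could score imagined trajectories from several low-level policies with a permutation-invariant attention mechanism. All module names, tensor shapes, and the dot-product scoring rule are assumptions for illustration.

```python
import torch
import torch.nn as nn


class HighLevelSelector(nn.Module):
    """Scores K imagined trajectories and selects one low-level policy."""

    def __init__(self, traj_dim: int, ego_dim: int, hidden: int = 64):
        super().__init__()
        self.query = nn.Linear(ego_dim, hidden)   # ego state -> attention query
        self.key = nn.Linear(traj_dim, hidden)    # imagined trajectory -> key
        self.scale = hidden ** -0.5

    def forward(self, ego_state: torch.Tensor, imagined: torch.Tensor) -> torch.Tensor:
        # ego_state: (B, ego_dim); imagined: (B, K, traj_dim)
        q = self.query(ego_state).unsqueeze(1)    # (B, 1, hidden)
        k = self.key(imagined)                    # (B, K, hidden)
        scores = (q * k).sum(-1) * self.scale     # (B, K) dot-product scores
        # Each candidate is scored independently, so permuting the K candidates
        # only permutes the scores; the selected policy does not depend on order.
        return torch.softmax(scores, dim=-1)


# Usage: 4 low-level policies each imagine a flattened trajectory of length 32.
selector = HighLevelSelector(traj_dim=32, ego_dim=8)
weights = selector(torch.randn(1, 8), torch.randn(1, 4, 32))
chosen_policy = weights.argmax(dim=-1)            # index of the preferred behavior
```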


Publication date: 20 Nov 2023
Project Page: N/A
Paper: https://arxiv.org/pdf/2311.10309