SafePLUG

Empowering Multimodal LLMs with Pixel-Level Insight and Temporal Grounding for Traffic Accident Understanding

Zihao Sheng1, Zilin Huang1, Yen-Jung Chen2, Yansong Qu2,
Yuhao Luo1, Yue Leng3, Sikai Chen1
1University of Wisconsin-Madison, 2Purdue University, 3Google
Corresponding Author

SafePLUG supports both image/video-level and pixel-level understanding through accident description, temporal localization, region-level question answering, and pixel-level grounding, enabling comprehensive traffic accident analysis.

🔍 Abstract

Multimodal large language models (MLLMs) have achieved remarkable progress across a range of vision-language tasks and demonstrate strong potential for traffic accident understanding. However, existing MLLMs in this domain primarily focus on coarse-grained image-level or video-level comprehension and often struggle to handle fine-grained visual details or localized scene components, limiting their applicability in complex accident scenarios. To address these limitations, we propose SafePLUG, a novel framework that empowers MLLMs with both Pixel-Level Understanding and temporal Grounding for comprehensive traffic accident analysis. SafePLUG supports both arbitrary-shaped visual prompts for region-aware question answering and pixel-level segmentation based on language instructions, while also enabling the recognition of temporally anchored events in traffic accident scenarios. To advance the development of MLLMs for traffic accident understanding, we curate a new dataset containing multimodal question-answer pairs centered on diverse accident scenarios, with detailed pixel-level annotations and temporal event boundaries. Experimental results show that SafePLUG achieves strong performance on multiple tasks, including region-based question answering, pixel-level segmentation, temporal event localization, and accident event understanding. These capabilities lay a foundation for fine-grained understanding of complex traffic scenes, with the potential to improve driving safety and enhance situational awareness in smart transportation systems.


🚀 Highlights

1. SafePLUG Framework - Developed a novel multimodal large language model framework that integrates pixel-level understanding and temporal grounding for fine-grained traffic accident analysis.


2. Comprehensive Accident Dataset - Built a large-scale benchmark covering region QA, pixel-level grounding, accident description, and temporal localization, with detailed bounding boxes, segmentation masks, and event boundaries (an illustrative annotation record is sketched after this list).


3. State-of-the-Art Performance - Outperforms strong baselines on four key tasks and generalizes well to different shapes and positions of visual prompts.
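
To make the annotation format concrete, below is a minimal Python sketch of what a single record in such a benchmark might look like. Every field name, coordinate, and value here is an illustrative assumption, not the released schema.

example_record = {
    "video": "accident_0042.mp4",        # hypothetical clip name
    "task": "region_qa",                 # or "pixel_grounding", "description", "temporal_localization"
    "question": "What is the vehicle in <region> about to do?",
    "region": {                          # arbitrary-shaped visual prompt
        "type": "polygon",               # "box", "polygon", or "mask"
        "points": [[412, 230], [468, 228], [471, 295], [409, 297]],
    },
    "answer": "The sedan is merging into the ego lane without signaling.",
    "event_span": [3.2, 5.8],            # start/end time in seconds, for temporal tasks
}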



SafePLUG Architecture

SafePLUG is a multimodal large language model framework for fine-grained traffic accident understanding. It accepts video frames, image-level context, and user-defined visual prompts (bounding boxes, polygons, or free-form masks) as input. Images and video frames are encoded by CLIP-L/14; frame indices are overlaid on the frames as lightweight number prompts to provide temporal cues, and a visual prompt encoder extracts region features from the specified boxes or masks. A unified multimodal fusion module within the LLM enables joint reasoning over these inputs, and dual decoders produce either natural-language answers or SAM-based pixel-level segmentation masks. This design lets SafePLUG accurately connect what happens, where it happens, and when it happens in complex accident scenarios.
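
The following minimal Python sketch (not the authors' released code) illustrates two input-side steps described above: overlaying a frame index as a number prompt, and mask-pooling CLIP patch features inside an arbitrary-shaped visual prompt to obtain a region feature. Function names, tensor shapes, and drawing details are assumptions for illustration.

import torch
from PIL import Image, ImageDraw

def add_number_prompt(frame: Image.Image, index: int) -> Image.Image:
    """Overlay the frame index in a corner as a lightweight temporal cue."""
    frame = frame.copy()
    draw = ImageDraw.Draw(frame)
    draw.rectangle([0, 0, 40, 24], fill="black")       # backing box for legibility
    draw.text((6, 4), str(index), fill="white")        # the number prompt itself
    return frame

def region_feature(patch_feats: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Mask-pool ViT patch features over a user-drawn region.

    patch_feats: (H, W, D) CLIP patch embeddings for one frame.
    mask:        (H, W) binary mask resized to the patch grid.
    """
    weights = mask.float() / mask.float().sum().clamp(min=1.0)    # normalized pooling weights
    return (patch_feats * weights.unsqueeze(-1)).sum(dim=(0, 1))  # (D,) region token

Mask pooling is one common way to reduce an arbitrary-shaped prompt to a single region token; the actual visual prompt encoder in SafePLUG may differ in detail.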


🎬 Demos

Demo 1 (Region QA)

Demo 2 (Region QA)

Demo 3 (Pixel-level Grounding)

Demo 4 (Pixel-level Grounding)

BibTeX

@inproceedings{sheng2025safeplug,
  title={SafePLUG: Empowering Multimodal LLMs with Pixel-Level Insight and Temporal Grounding for Traffic Accident Understanding},
  author={Sheng, Zihao and Huang, Zilin and Chen, Yen-Jung and Qu, Yansong and Luo, Yuhao and Leng, Yue and Chen, Sikai},
  year={2025}
}