A Survey on Video Temporal Grounding with Multimodal Large Language Model

BibTeX
@misc{liu2025surveyvideotemporal,
      title={A Survey on Video Temporal Grounding with Multimodal Large Language Model},
      author={Wei Liu and Liqiang Nie and Ye Liu and Zhouchen Lin and Chang Wen Chen and Jianlong Wu and Meng Liu},
      year={2025},
      eprint={2508.10922},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2508.10922},
}
GitHub: Awesome-MLLMs-for-Video-Temporal-Grounding (38 stars)

HTTPS: https://github.com/ki-lw/Awesome-MLLMs-for-Video-Temporal-Grounding
SSH: git@github.com:ki-lw/Awesome-MLLMs-for-Video-Temporal-Grounding.git
CLI: gh repo clone ki-lw/Awesome-MLLMs-for-Video-Temporal-Grounding