alphaXiv


LVLM-Interpret: An Interpretability Tool for Large Vision-Language Models

BibTeX
@misc{duan2024lvlminterpretinterpretabilitytool,
      title={LVLM-Interpret: An Interpretability Tool for Large Vision-Language Models}, 
      author={Nan Duan and Chenfei Wu and Gabriela Ben Melech Stan and Estelle Aflalo and Shao-Yen Tseng and Vasudev Lal and Yaniv Gurwicz and Matthew Lyle Olson and Raanan Yehezkel Rohekar and Anahita Bhiwandiwalla},
      year={2024},
      eprint={2404.03118},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2404.03118}, 
}
GitHub: IntelLabs/lvlm-interpret (103 stars)

HTTPS: https://github.com/IntelLabs/lvlm-interpret
SSH:   git@github.com:IntelLabs/lvlm-interpret.git
CLI:   gh repo clone IntelLabs/lvlm-interpret