Skip Tuning: Pre-trained Vision-Language Models are Effective and Efficient Adapters Themselves

BibTeX:
@misc{zhang2025skiptuningpretrained,
      title={Skip Tuning: Pre-trained Vision-Language Models are Effective and Efficient Adapters Themselves}, 
      author={Ji Zhang and Lianli Gao and Jingkuan Song and Heng Tao Shen and Pengpeng Zeng and Shihan Wu},
      year={2025},
      eprint={2412.11509},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.11509}, 
}
GitHub: Koorye/SkipTuning
HTTPS: https://github.com/Koorye/SkipTuning
SSH: git@github.com:Koorye/SkipTuning.git
CLI: gh repo clone Koorye/SkipTuning