VLA-Adapter: An Effective Paradigm for Tiny-Scale Vision-Language-Action Model

BibTeX
@misc{wang2025vlaadaptereffectiveparadigm,
      title={VLA-Adapter: An Effective Paradigm for Tiny-Scale Vision-Language-Action Model},
      author={Yihao Wang and Pengxiang Ding and Lingxiao Li and Can Cui and Zirui Ge and Xinyang Tong and Wenxuan Song and Han Zhao and Wei Zhao and Pengxu Hou and Siteng Huang and Yifan Tang and Wenhui Wang and Ru Zhang and Jianyi Liu and Donglin Wang},
      year={2025},
      eprint={2509.09372},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2509.09372},
}
GitHub: OpenHelix-Team/VLA-Adapter (578 stars)
HTTPS: https://github.com/OpenHelix-Team/VLA-Adapter
SSH:   git@github.com:OpenHelix-Team/VLA-Adapter.git
CLI:   gh repo clone OpenHelix-Team/VLA-Adapter