VLA-Adapter: An Effective Paradigm for Tiny-Scale Vision-Language-Action Model