OV-MER: Towards Open-Vocabulary Multimodal Emotion Recognition

BibTeX
@misc{chen2024openvocabularymultimodalemotion,
      title={Open-vocabulary Multimodal Emotion Recognition: Dataset, Metric, and Benchmark}, 
      author={Haoyu Chen and Siyuan Zhang and Haiyang Sun and Bin Liu and Mingyu Xu and Rui Liu and Kang Chen and Jianhua Tao and Zheng Lian and Licai Sun and Ya Li and Jiangyan Yi and Hao Gu and Zhuofan Wen and Shun Chen and Lan Chen and Shan Liang and Hailiang Yao},
      year={2024},
      eprint={2410.01495},
      archivePrefix={arXiv},
      primaryClass={cs.HC},
      url={https://arxiv.org/abs/2410.01495}, 
}
GitHub: AffectGPT (280 stars)
HTTPS: https://github.com/zeroQiaoba/AffectGPT
SSH: git@github.com:zeroQiaoba/AffectGPT.git
CLI: gh repo clone zeroQiaoba/AffectGPT