Do Large Language Models Perform Latent Multi-Hop Reasoning without Exploiting Shortcuts?

BibTeX:

@misc{geva2024dolargelanguage,
      title={Do Large Language Models Perform Latent Multi-Hop Reasoning without Exploiting Shortcuts?},
      author={Mor Geva and Sebastian Riedel and Elena Gribovskaya and Nora Kassner and Sohee Yang},
      year={2024},
      eprint={2411.16679},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2411.16679},
}
GitHub repository: google-deepmind/latent-multi-hop-reasoning

HTTPS: https://github.com/google-deepmind/latent-multi-hop-reasoning
SSH: git@github.com:google-deepmind/latent-multi-hop-reasoning.git
CLI: gh repo clone google-deepmind/latent-multi-hop-reasoning