Large Language Models (LLMs) have significantly impacted nearly every domain
of human knowledge. However, the explainability of these models, especially to
laypersons, which is crucial for instilling trust, has been examined through
various skeptical lenses. In this paper, we introduce a novel notion of LLM
explainability to laypersons, termed
ReQuesting, across three
high-priority application domains -- law, health, and finance -- using multiple
state-of-the-art LLMs. The proposed notion exhibits faithful generation of
explainable, layperson-understandable algorithms on multiple tasks, as
demonstrated by a high degree of reproducibility. Furthermore, we observe
notable alignment between the explainable algorithms and the intrinsic
reasoning of the LLMs.