The CTL learning problem consists in finding, for a given sample of positive
and negative Kripke structures, a distinguishing CTL formula, i.e., one that
is satisfied by the former but not by the latter. Further constraints may
bound the size and shape of the desired formula, or even require minimality in terms of
syntactic size. This synthesis problem is motivated by explanation generation
for dissimilar models, e.g. comparing a faulty implementation with the original
protocol. We devise a SAT-based encoding for CTL formulas of a fixed size, then
provide an incremental approach that guarantees minimality. We further report
on a prototype implementation whose contribution is twofold: first, it allows
us to assess the efficiency of various output fragments and optimizations;
second, it lets us evaluate the approach experimentally by randomly mutating
Kripke structures or syntactically introducing errors in higher-level models,
and then learning distinguishing CTL formulas.
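The incremental approach mentioned above can be sketched as a loop that queries a size-bounded oracle for increasing sizes; since sizes are tried in ascending order, the first success is syntactically minimal. The following is a minimal illustrative sketch, not the paper's implementation: `oracle` stands in for "encode a size-n formula and run the SAT solver", and the toy oracle is a hypothetical placeholder.

```python
def learn_minimal(oracle, max_size=20):
    """Return (n, formula) for the smallest size n at which oracle(n)
    succeeds, mimicking the incremental SAT loop: sizes are tried in
    increasing order, so the first success is minimal in syntactic size."""
    for n in range(1, max_size + 1):
        result = oracle(n)  # stands in for: encode(sample, n); SAT-solve
        if result is not None:
            return n, result
    return None  # no distinguishing formula within the size bound

# Toy oracle: pretend a distinguishing formula exists only from size 3 on.
toy = lambda n: "EF p" if n >= 3 else None
print(learn_minimal(toy))  # (3, 'EF p')
```

In the actual SAT-based setting, each iteration builds a propositional encoding whose models correspond to size-n CTL formulas consistent with the sample, so the loop terminates as soon as such a formula exists.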