A comprehensive review examines the intersection of interpretable machine learning and physics, categorizing different aspects of interpretability while evaluating ML models across various physics subfields and establishing frameworks for combining physical principles with machine learning approaches.
A framework is presented for extracting human-readable, closed-form interpretations of concepts encoded within neural network latent spaces. The method leverages symbolic gradients to uncover underlying scientific laws from Siamese neural networks without requiring prior knowledge, demonstrating this capability across matrix invariants, conserved quantities in dynamical systems, and spacetime intervals.
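The gradient-based idea behind the second summary can be illustrated with a minimal sketch (not the paper's code, and using a hand-built surrogate in place of a trained Siamese network): if a latent scalar g depends on the inputs only through some invariant f, then ∇g = g′(f)∇f, so ratios of g's partial derivatives expose ∇f up to an overall factor, which is exactly the signal a symbolic regressor could then fit to a closed form.

```python
# Hypothetical sketch: recover the gradient direction of a hidden invariant
# from a latent scalar that encodes it through an unknown monotone map.
import numpy as np

def invariant(p):
    # True law to recover: the spacetime interval s^2 = t^2 - x^2 - y^2 - z^2.
    t, x, y, z = p
    return t**2 - x**2 - y**2 - z**2

def latent(p):
    # Surrogate for a learned embedding: an arbitrary monotone map of the
    # invariant (a Siamese network trained on "same-interval" pairs could,
    # ideally, only depend on the inputs through s^2).
    return np.tanh(0.1 * invariant(p))

def grad(f, p, h=1e-5):
    # Central finite-difference gradient.
    g = np.zeros_like(p)
    for i in range(len(p)):
        e = np.zeros_like(p)
        e[i] = h
        g[i] = (f(p + e) - f(p - e)) / (2 * h)
    return g

p = np.array([1.3, 0.4, -0.7, 0.2])
g_lat = grad(latent, p)
g_inv = grad(invariant, p)

# Ratios of partials cancel the unknown reparametrization g'(f):
ratios_lat = g_lat / g_lat[0]
ratios_inv = g_inv / g_inv[0]   # analytically: (1, -x/t, -y/t, -z/t)
print(np.allclose(ratios_lat, ratios_inv, atol=1e-4))
```

The match between the two ratio vectors is what makes the latent space interpretable: the unknown map g′(f) drops out, leaving only the geometry of the underlying law.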