Goodfire AI
Recovering meaningful concepts from language model activations is a central aim of interpretability. Existing feature extraction methods seek concepts that form independent directions, but it is unclear whether this assumption can capture the rich temporal structure of language. Specifically, via a Bayesian lens, we demonstrate that Sparse Autoencoders (SAEs) impose priors that assume independence of concepts across time, implying stationarity. Meanwhile, language model representations exhibit rich temporal dynamics, including systematic growth in conceptual dimensionality, context-dependent correlations, and pronounced non-stationarity, in direct conflict with the priors of SAEs. Taking inspiration from computational neuroscience, we introduce a new interpretability objective -- Temporal Feature Analysis -- which possesses a temporal inductive bias to decompose representations at a given time into two parts: a predictable component, which can be inferred from the context, and a residual component, which captures novel information unexplained by the context. Temporal Feature Analyzers correctly parse garden path sentences, identify event boundaries, and more broadly delineate abstract, slow-moving information from novel, fast-moving information, whereas existing SAEs show significant pitfalls on all of these tasks. Overall, our results underscore the need for inductive biases that match the data when designing robust interpretability tools.
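To make the decomposition concrete, here is a minimal numerical sketch: each activation is split into a component predicted from the preceding context and a residual carrying the remainder. The linear ridge predictor, random stand-in activations, and all dimensions are assumptions for illustration only; the paper's Temporal Feature Analyzer is trained on real model activations with its own objective.

```python
# Minimal sketch (assumption-laden): split each activation x_t into a part
# predicted from the previous activation and a residual. The real Temporal
# Feature Analyzer is trained differently; this only illustrates the idea.
import numpy as np

rng = np.random.default_rng(0)
T, d = 256, 64                     # sequence length, hidden size (made up)
X = rng.standard_normal((T, d))    # stand-in for LLM activations over time

# Fit a linear map from x_{t-1} to x_t by ridge regression (hypothetical choice).
ctx, tgt = X[:-1], X[1:]
lam = 1e-2
W = np.linalg.solve(ctx.T @ ctx + lam * np.eye(d), ctx.T @ tgt)

predictable = ctx @ W              # component inferable from the context
residual = tgt - predictable       # novel information at each step

# Fast-moving (novel) positions should carry larger residual norms.
novelty = np.linalg.norm(residual, axis=1)
print(novelty[:5])
```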
Large language models (LLMs) have demonstrated emergent in-context learning (ICL) capabilities across a range of tasks, including zero-shot time-series forecasting. We show that text-trained foundation models can accurately extrapolate spatiotemporal dynamics from discretized partial differential equation (PDE) solutions without fine-tuning or natural language prompting. Predictive accuracy improves with longer temporal contexts but degrades at finer spatial discretizations. In multi-step rollouts, where the model recursively predicts future spatial states over multiple time steps, errors grow algebraically with the time horizon, reminiscent of global error accumulation in classical finite-difference solvers. We interpret these trends as in-context neural scaling laws, where prediction quality varies predictably with both context length and output length. To better understand how LLMs are able to internally process PDE solutions so as to accurately roll them out, we analyze token-level output distributions and uncover a consistent three-stage ICL progression: beginning with syntactic pattern imitation, transitioning through an exploratory high-entropy phase, and culminating in confident, numerically grounded predictions.
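As a rough illustration of the zero-shot setup, the sketch below simulates a discretized 1D heat equation, serializes the states as plain text, and outlines a recursive rollout loop. The serialization format, precision, and the placeholder `llm_continue` call are assumptions for illustration, not the study's actual prompting pipeline.

```python
# Sketch only: how a discretized PDE trajectory might be serialized as text
# and rolled out recursively. `llm_continue` is a hypothetical stand-in for
# whatever text-completion call is used; it is not a real API.
import numpy as np

def heat_step(u, dt=0.1, dx=1.0, alpha=0.5):
    """One explicit finite-difference step of the 1D heat equation (periodic boundaries)."""
    return u + alpha * dt / dx**2 * (np.roll(u, 1) - 2 * u + np.roll(u, -1))

def serialize(u, digits=2):
    """Render one spatial state as a comma-separated string of numbers."""
    return ",".join(f"{v:.{digits}f}" for v in u)

def llm_continue(prompt: str) -> str:
    raise NotImplementedError("placeholder for a text-completion call")

# Build the in-context prompt from a short simulated history.
u = np.exp(-0.5 * np.linspace(-3, 3, 32) ** 2)
history = []
for _ in range(16):
    history.append(serialize(u))
    u = heat_step(u)
prompt = ";".join(history) + ";"

# Multi-step rollout: feed each predicted state back in as context.
# for _ in range(horizon):
#     nxt = llm_continue(prompt)      # model predicts the next spatial state
#     prompt = prompt + nxt + ";"     # recursive extension of the context
```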
Researchers from Goodfire AI and Anthropic demonstrated that mechanistic interpretability tools, specifically logit lens analysis, can decode ROT-13-encoded reasoning inside a fine-tuned large language model without supervision. The pipeline they developed successfully reconstructed human-readable reasoning transcripts, showing robustness against simple forms of internal textual obfuscation.
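For readers unfamiliar with the two ingredients, the sketch below pairs a logit-lens readout (here via the open-source TransformerLens library, which the paper may or may not use) with ROT-13 decoding from the Python standard library. The model, layer index, and prompt are placeholders; the actual pipeline operates on a fine-tuned model and is considerably more involved.

```python
# Hedged sketch of the two ingredients named above: a logit-lens readout and
# ROT-13 decoding of the result. Model choice, layer, and prompt are
# illustrative placeholders, not the paper's setup.
import codecs
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
_, cache = model.run_with_cache("The pbqrq ernfbavat fgrc")

layer = 6                                   # arbitrary intermediate layer
resid = cache["resid_post", layer]          # residual stream after that layer
layer_logits = model.unembed(model.ln_final(resid))
tokens = layer_logits.argmax(dim=-1)        # logit-lens token readout
readout = model.to_string(tokens[0])

# If the internal text is ROT-13 encoded, a plain codec recovers it.
print(codecs.decode(readout, "rot_13"))
```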
This research investigates how different synthetic data generation strategies affect the generalization of probes designed to monitor internal Large Language Model (LLM) behaviors. The researchers found that while the choice of data generation strategy had varied effects, shifts in the data's domain consistently caused significantly greater degradation in probe performance, pointing to substantial generalization failures for high-stakes behaviors such as deception.
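A toy version of that evaluation protocol might look like the following: train a linear probe on features from a source setting and compare in-domain with shifted-domain accuracy. The synthetic features, the spurious-coordinate shift mechanism, and the logistic-regression probe are all stand-ins chosen for illustration, not the study's data or probes.

```python
# Toy probe-generalization setup: the "domain shift" is simulated by a
# spurious coordinate that tracks the label in-domain but not out-of-domain.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 64
w_true = rng.standard_normal(d)

def make_split(n, spurious_strength):
    Z = rng.standard_normal((n, d))
    y = (Z @ w_true > 0).astype(int)                 # behavior label (e.g., "deceptive")
    spurious = spurious_strength * (2 * y - 1)[:, None] + rng.standard_normal((n, 1))
    return np.hstack([Z, spurious]), y

X_train, y_train = make_split(2000, spurious_strength=3.0)   # source domain
X_in, y_in = make_split(500, spurious_strength=3.0)          # same-domain eval
X_out, y_out = make_split(500, spurious_strength=0.0)        # shifted domain

probe = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print("in-domain accuracy:     ", probe.score(X_in, y_in))
print("shifted-domain accuracy:", probe.score(X_out, y_out))
```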
This paper mathematically characterizes "filter equivariant functions," a class of list functions whose behavior on arbitrary input lengths is entirely determined by their actions on short, finite examples. It provides a formal framework to explain how certain functions can extrapolate reliably to unseen lengths, which has implications for designing AI systems with robust generalization capabilities.
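Under one natural reading of the definition, a list function is filter equivariant when it commutes with deleting all symbols outside any chosen subset of the alphabet, which is why its behavior on short lists can pin it down everywhere. The brute-force check below is an illustrative sketch of that idea; the alphabet, length bound, and example functions are assumptions, not the paper's formalism.

```python
# Illustrative sketch (assumed reading of the definition): check whether a
# list function commutes with every "filter" that keeps only symbols from a
# subset S of the alphabet, testing all short inputs.
from itertools import chain, combinations, product

ALPHABET = "abc"

def keep(S, xs):
    return [x for x in xs if x in S]

def is_filter_equivariant(f, max_len=4):
    subsets = [set(S) for S in chain.from_iterable(
        combinations(ALPHABET, r) for r in range(len(ALPHABET) + 1))]
    for n in range(max_len + 1):
        for xs in product(ALPHABET, repeat=n):
            xs = list(xs)
            for S in subsets:
                if f(keep(S, xs)) != keep(S, f(xs)):
                    return False
    return True

print(is_filter_equivariant(lambda xs: xs[::-1]))   # reversal: True
print(is_filter_equivariant(lambda xs: xs[1:]))     # drop first element: False
```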