Institute for Advanced Consciousness Studies
I consider motivation and value-alignment in AI systems from the perspective of (constrained) entropy maximization. Though the structures encoding knowledge in any physical system can be understood as energetic constraints, only living agents harness entropy in the endogenous generation of actions. I argue that this exploitation of "mortal" or thermodynamic computation, in which cognitive and physical dynamics are inseparable, is the essence of desire, motivation, and value, while the lack of true endogenous motivation in simulated "agents" predicts pathologies like reward hacking.
Researchers propose a framework for integrating emotional valence into active inference models of dyadic social interaction, formalizing affect as recursive inference over self-model coherence. The framework introduces geometric hyperscanning, which uses the entropy of Forman-Ricci curvature to empirically track inter-brain network reconfigurations as a proxy for shared affective dynamics.
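As a rough illustration of the curvature-entropy quantity this abstract names: in its simplest combinatorial form, the Forman-Ricci curvature of an edge (u, v) in an unweighted, undirected graph is F(u, v) = 4 - deg(u) - deg(v), and one can take the Shannon entropy of the empirical distribution of edge curvatures as a scalar summary of network geometry. This is a minimal sketch under that assumption; the paper's hyperscanning pipeline presumably works with weighted, time-resolved inter-brain networks and may use an augmented form of the curvature.

```python
from collections import Counter
from math import log

def forman_curvature_entropy(edges):
    """Shannon entropy of the edge-curvature distribution of a graph.

    Uses the simplest combinatorial Forman-Ricci curvature for an
    unweighted, undirected graph: F(u, v) = 4 - deg(u) - deg(v).
    `edges` is an iterable of (u, v) pairs.
    """
    # Node degrees from the edge list.
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    # Curvature of every edge.
    curvatures = [4 - deg[u] - deg[v] for u, v in edges]
    # Shannon entropy (natural log) of the curvature histogram.
    n = len(curvatures)
    counts = Counter(curvatures)
    return -sum((c / n) * log(c / n) for c in counts.values())

# A 4-cycle is curvature-homogeneous (every edge has F = 0),
# so its curvature entropy is zero; a path graph mixes two
# curvature values and so has positive entropy.
print(forman_curvature_entropy([(0, 1), (1, 2), (2, 3), (3, 0)]))
print(forman_curvature_entropy([(0, 1), (1, 2), (2, 3)]))
```

Higher entropy indicates a more heterogeneous local geometry; tracking this quantity over sliding windows of an inter-brain connectivity graph is one plausible reading of how curvature entropy could serve as a reconfiguration proxy.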
We draw on the predictive processing theory of perception to explain why healthy, intelligent, honest, and psychologically normal people might easily misperceive lights in the sky as threatening or extraordinary objects, especially in the context of WEIRD (Western, educated, industrialized, rich, and democratic) societies. We argue that the uniquely sparse properties of skyborne and celestial stimuli make it difficult for an observer to update prior beliefs, which can be easily fit to observed lights. Moreover, we hypothesize that humans have likely evolved to perceive the sky and its perceived contents as deeply meaningful. Finally, we briefly discuss the possible role of generalized distrust in scientific institutions and ultimately argue for the importance of astronomy education in producing a society whose prior beliefs support veridical perception.