In the modern language sciences, the core computational operation of syntax, 'Merge', is defined as an operation that combines two linguistic units (e.g., 'brown', 'cat') to form a categorized structure ('brown cat', a Noun Phrase). The resulting structure can then be combined with additional linguistic units on the basis of this categorial information, obeying non-associativity so that abstract grouping is preserved. Some linguists have embraced the view that Merge is an elementary, indivisible operation that emerged in a single evolutionary step. From a neurocognitive standpoint, different mental objects constructed by Merge may be supported by distinct mechanisms: (1) simple command constructions (e.g., "eat apples"); (2) the merging of adjectives and nouns ("red boat"); and (3) the merging of nouns with spatial prepositions ("laptop behind the sofa"). Here, we systematically investigate participants' comprehension of sentences with increasing levels of syntactic complexity. Clustering analyses revealed behavioral evidence for three distinct structural types, which we discuss as potentially emerging at different developmental stages and subject to selective impairment. While a Merge-based syntax may still have emerged suddenly in evolutionary time, responsible for the structured symbolic turn our species took, different cognitive mechanisms seem to underwrite the processing of different types of Merge-based objects.
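To make the binary, non-associative character of Merge concrete, here is a minimal Python sketch; the `MergeNode` class, the `merge` function, and the explicit label argument are illustrative assumptions for exposition, not the paper's formalism.

```python
from dataclasses import dataclass
from typing import Union

# A unit is either a lexical item (a string) or a previously merged object.
Unit = Union[str, "MergeNode"]

@dataclass(frozen=True)
class MergeNode:
    """A labeled constituent built by one application of Merge."""
    label: str  # projected category, e.g. "NP", "VP", "PP"
    left: Unit
    right: Unit

def merge(left: Unit, right: Unit, label: str) -> MergeNode:
    """Combine exactly two units into a new categorized structure."""
    return MergeNode(label, left, right)

# The three object types discussed in the abstract:
vp = merge("eat", "apples", "VP")                       # (1) simple command
np = merge("brown", "cat", "NP")                        # (2) adjective + noun
pp = merge("behind", merge("the", "sofa", "NP"), "PP")  # (3) noun + spatial preposition

# Non-associativity: the same terminal string under different groupings
# yields distinct objects, so abstract grouping is preserved.
a = merge(merge("brown", "cat", "NP"), "sleeps", "S")
b = merge("brown", merge("cat", "sleeps", "S"), "S")
assert a != b
```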
We introduce Temporal Ensemble Logic (TEL), a monadic, first-order modal logic for linear-time temporal reasoning. TEL includes primitive temporal constructs such as ``always up to $t$ time later'' ($\Box_t$), ``sometimes before $t$ time in the future'' ($\Diamond_t$), and ``$t$-time later'' ($\varphi_t$). TEL is motivated by the need for rigor and reproducibility in cohort specification and discovery for clinical and population health research, filling a gap in formalizing temporal reasoning in biomedicine. Existing logical frameworks are either too restrictive to express temporal and sequential properties in biomedicine, as with linear temporal logic, or too permissive in their semantic constructs, as with Halpern-Shoham logic, to serve this purpose. In this paper, we first introduce TEL in a general setup, with discrete and dense time as special cases. We then focus on the theoretical development of discrete TEL on the temporal domain of positive integers $\mathbb{N}^+$, denoted ${\rm TEL}_{\mathbb{N}^+}$. ${\rm TEL}_{\mathbb{N}^+}$ is strictly more expressive than standard monadic second-order logic, which is characterized by Büchi automata. We present its formal semantics and a proof system, and prove that satisfiability for ${\rm TEL}_{\mathbb{N}^+}$ is undecidable. We also include initial results on expressiveness and decidable fragments of ${\rm TEL}_{\mathbb{N}^+}$, followed by an application outlook and discussion.
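As a rough illustration of how the discrete semantics might be evaluated, the Python sketch below interprets the three primitive constructs over a trace on $\mathbb{N}^+$; the AST encoding, the trace representation, and the inclusive bounds chosen for $\Box_t$ and $\Diamond_t$ are our assumptions for exposition, not the paper's definitions.

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet, Union

# A trace assigns a set of atomic propositions to each time point in N+.
Trace = Callable[[int], FrozenSet[str]]

@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class AlwaysUpTo:       # Box_t phi: phi holds at every point up to t later
    t: int
    sub: "Formula"

@dataclass(frozen=True)
class SometimesBefore:  # Diamond_t phi: phi holds at some point within t
    t: int
    sub: "Formula"

@dataclass(frozen=True)
class Later:            # phi_t: phi holds exactly t time later
    t: int
    sub: "Formula"

Formula = Union[Atom, AlwaysUpTo, SometimesBefore, Later]

def holds(sigma: Trace, i: int, phi: Formula) -> bool:
    """Evaluate phi at time point i of trace sigma (assumed, inclusive bounds)."""
    if isinstance(phi, Atom):
        return phi.name in sigma(i)
    if isinstance(phi, AlwaysUpTo):
        return all(holds(sigma, j, phi.sub) for j in range(i, i + phi.t + 1))
    if isinstance(phi, SometimesBefore):
        return any(holds(sigma, j, phi.sub) for j in range(i, i + phi.t + 1))
    if isinstance(phi, Later):
        return holds(sigma, i + phi.t, phi.sub)
    raise TypeError(phi)

# Example trace: "fever" holds only at time 2, "treated" everywhere else.
trace = lambda i: frozenset({"fever"}) if i == 2 else frozenset({"treated"})
print(holds(trace, 1, SometimesBefore(3, Atom("fever"))))  # True
print(holds(trace, 3, AlwaysUpTo(2, Atom("treated"))))     # True
print(holds(trace, 1, Later(1, Atom("fever"))))            # True
```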
A core component of a successful artificial general intelligence would be the rapid creation and manipulation of grounded compositional abstractions and the demonstration of expertise in the family of recursive hierarchical syntactic objects necessary for the creative use of human language. We evaluated the recently released o3 model (OpenAI; o3-mini-high) and discovered that while it succeeds on some basic linguistic tests relying on linear, surface statistics (e.g., the Strawberry Test), it fails to generalize basic phrase structure rules; it fails with comparative sentences involving semantically illegal cardinality comparisons ('Escher sentences'); it fails to correctly rate and explain acceptability dynamics; and it fails to distinguish between instructions to generate semantically unacceptable vs. syntactically unacceptable outputs. When tasked with generating simple violations of grammatical rules, it is seemingly incapable of representing multiple parses to evaluate against various possible semantic interpretations. In stark contrast to many recent claims that artificial language models are on the verge of replacing the field of linguistics, our results suggest not only that deep learning is hitting a wall with respect to compositionality (Marcus 2022), but that it is hitting [a [stubbornly [resilient wall]]] that cannot readily be surmounted to reach human-like compositional reasoning simply through more compute.