Explaining the efficiency and generality of human language use
Lio Wong · January 23, 2026

Language is a strikingly general medium. We can talk about anything from poetry to mathematics, and we can use what someone tells us to shape our beliefs and behavior. Our current computational models of this kind of general language understanding and reasoning require either vast amounts of experience (like contemporary neural networks) or comparatively vast amounts of time and memory (like idealized Bayesian models or expected utility planners). But human minds aren't trained on Internet-scale data, nor do they run on supercomputers. So how do minds pull off the hat trick of general language understanding with such efficiency in experience and mental resources? I'll describe two related projects in this direction, each of which seeks to answer serious challenges that human-like language understanding poses for existing computational accounts of language and cognition. In the first part of this talk, I'll focus on the generality of language understanding: how can people make reasonable use of arbitrary information in unbounded language? I'll present work that frames minds as continually constructing small, ad-hoc structured mental representations containing just enough relevant knowledge for a given situation, like reasoning about a particular conversation or question. In the second part of the talk, I'll elaborate on potential implications of this idea for the relative data efficiency of language learning. I'll present work that frames sentence processing specifically as lightweight translation from natural language into structured conceptual representations of its content, and I'll discuss kinds of language-based reasoning (like reasoning about physical or social world knowledge expressed in language) that can be modeled without assuming that this content is itself learned directly from language.