Inductive bias for temporal structure in humans and AI

Phenomena of human learning and memory such as spacing effects and power-law forgetting suggest the brain is attuned to a natural environment with rich temporal structure, including events and correlations at a wide range of timescales. I will show how these phenomena are remarkably well explained by a Bayesian cognitive model in which the prior for temporal dynamics is fractional Brownian motion, a process with a 1/f-type power spectrum that is a ubiquitous statistical signature of natural complex systems. I will then describe how the same prior can be built into Bayesian neural networks in machine learning. Networks trained in this way show human-like forgetting curves and spacing effects, and they outperform standard methods in task environments with realistic autocorrelation.
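As a concrete illustration of the kind of temporal structure the abstract refers to, the sketch below generates a 1/f-noise signal by spectral synthesis (deterministic amplitudes scaled as f^(-beta/2) with random phases). This is only a minimal sketch of the statistical signature itself, not the Bayesian model or training procedure described in the talk; the function name and parameters are illustrative.

```python
import numpy as np

def one_over_f_noise(n, beta=1.0, seed=0):
    """Generate a length-n real signal whose power spectrum scales as
    1/f^beta, via spectral synthesis: amplitudes f^(-beta/2), random phases.
    beta = 1 gives classic 1/f (pink) noise with long-range correlations."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n)
    amplitudes = np.zeros_like(freqs)
    # power ~ amplitude^2 ~ 1/f^beta; leave the DC bin at zero
    amplitudes[1:] = freqs[1:] ** (-beta / 2)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(freqs))
    spectrum = amplitudes * np.exp(1j * phases)
    x = np.fft.irfft(spectrum, n)
    return x / x.std()  # standardize to unit variance

x = one_over_f_noise(4096, beta=1.0)

# Check the spectral slope: in log-log coordinates the periodogram of a
# 1/f^beta signal is a line of slope -beta (Nyquist bin excluded).
freqs = np.fft.rfftfreq(len(x))
power = np.abs(np.fft.rfft(x)) ** 2
slope = np.polyfit(np.log(freqs[1:-1]), np.log(power[1:-1]), 1)[0]
print(f"fitted spectral slope: {slope:.2f}")  # close to -1 for beta = 1
```

Because the synthesis fixes the amplitudes exactly and randomizes only the phases, the fitted log-log slope recovers -beta almost exactly; signals like this exhibit the slowly decaying autocorrelation characteristic of natural environments with events at many timescales.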