For over two decades, my research has been directed toward a form of implicit learning referred to as “statistical learning”. Although initially focused on the task of word segmentation from fluent speech, studies of statistical learning have since been extended to other domains, such as musical tones, phonetic categories, sequences of visual shapes, sequences of motor responses, and combinations of objects (or object parts) in complex visual scenes. An important goal of these studies is to reveal the computational constraints that make statistical learning tractable given the complexity of the input and the infinite number of statistical computations that are possible over any set of inputs. Initial computational models of statistical learning focused on bigram statistics and conditional probabilities, but more recent work has broadened to include Bayesian ideal-learner models. Empirical studies of statistical learning have also evolved to explore order effects in learning multiple structures and to understand how statistical patterns trigger the formation of categories in artificial grammars.
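The transitional-probability computation at the heart of early segmentation models can be sketched as follows. This is a minimal illustration rather than any specific published model, and the syllable stream and trisyllabic “words” used here are hypothetical:

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Forward transitional probability P(Y | X) for each adjacent syllable pair.

    In word-segmentation experiments, transitional probabilities tend to be
    high within words and to dip at word boundaries, which is one cue a
    statistical learner can exploit.
    """
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(x, y): n / first_counts[x] for (x, y), n in pair_counts.items()}

# Hypothetical stream built from two made-up trisyllabic words: "bidaku" and "golatu".
stream = "bi da ku go la tu bi da ku bi da ku go la tu".split()
tps = transitional_probabilities(stream)
# Within-word transitions (e.g., bi -> da) have TP 1.0 in this stream,
# while the boundary transition ku -> go is lower (2/3).
```

Segmenting the stream then amounts to positing word boundaries wherever the transitional probability dips below its neighbors.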
A related line of research focuses on spoken word recognition in infants, toddlers, and adults using eye-tracking and EEG methods. Once an auditory word-form has been extracted from fluent speech, how does the infant map that sequence of sounds onto meaning? Recent and ongoing studies have examined how infants and toddlers recognize the meaning of the unfolding speech signal, both for previously known and recently learned words, as well as for mispronounced words or words preceded by a disfluency. Most of these studies have employed eye-tracking, while more recent work uses machine-learning techniques to decode word identity from EEG signals.
In the past few years, my research has moved toward studies of brain function in adults and infants using fMRI, fNIRS, and EEG. Up to 80 channels of fNIRS can be obtained from infants and over 120 channels from adults, providing a dense sampling of hemodynamic activity in the superficial layers of cortex during both task and rest. This system enables us to assess activations in various regions of the infant brain, as well as functional connectivity, thereby revealing the neural correlates of behavioral measures such as looking time. We are particularly interested in how these optical signals change over time as a way of understanding aspects of habituation and statistical learning, and we use multivariate methods of pattern classification to explore decoding accuracy for visual and auditory stimuli.
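As a rough illustration of what multivariate pattern classification over channel data involves, here is a minimal nearest-centroid decoder run on simulated hemodynamic patterns. The channel count, condition labels, and signal values are all invented for the example, and real analyses typically use cross-validated classifiers such as SVMs or regularized logistic regression rather than this bare-bones method:

```python
import random

def nearest_centroid_decode(train, test):
    """Assign each test pattern the label of the nearest class centroid.

    train: dict mapping label -> list of channel-activation patterns.
    test:  list of channel-activation patterns to classify.
    A deliberately simple stand-in for multivariate pattern classification.
    """
    centroids = {
        label: [sum(p[i] for p in patterns) / len(patterns)
                for i in range(len(patterns[0]))]
        for label, patterns in train.items()
    }
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(centroids, key=lambda lab: sq_dist(p, centroids[lab]))
            for p in test]

# Simulated 4-channel patterns for two hypothetical stimulus conditions.
random.seed(0)
def trial(mean_pattern, noise=0.1):
    return [random.gauss(m, noise) for m in mean_pattern]

train = {
    "visual":   [trial([1, 0, 1, 0]) for _ in range(20)],
    "auditory": [trial([0, 1, 0, 1]) for _ in range(20)],
}
test = [trial([1, 0, 1, 0]), trial([0, 1, 0, 1])]
predictions = nearest_centroid_decode(train, test)
```

Decoding accuracy is then simply the proportion of held-out trials whose predicted label matches the true condition, computed across cross-validation folds.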