Colloquia meet on selected Mondays, 4:00 pm - 5:00 pm, in Room PY 101.
Colloquia titles will be posted as they become available.
Organizer: Rob Goldstone
Jan 13, 2014: Steve Franconeri, Northwestern University
Title: Visual attention creates structure over space and time
Abstract: Selective attention allows us to filter visual information, amplifying what is relevant and suppressing what competes. But recent work in our lab suggests another role – extracting and manipulating visual structure. I will describe four such lines of research, showing a role for selective attention in grouping objects with similar features, extracting spatial relationships between objects, imagining manipulations of objects, and maintaining object identity over time. I will also describe interactions of these processes with spatial language and highlight potential applications for improving pedagogy and displays related to math and science education.
Feb 3, 2014: Michael Frank, Stanford University
Title: Modeling pragmatic inference in referential communication and word learning
Abstract: A short, ambiguous message can convey a lot of information to a listener who is willing to make inferences based on assumptions about the speaker and the context of the message. Pragmatic inferences are critical in facilitating efficient human communication, and have been characterized informally using tools like Grice's conversational maxims. They may also be extremely useful for language learning. In this talk, I'll propose a probabilistic framework for referential communication in context. This framework shows good fit to adults' and children's judgments. In addition, it makes interesting novel predictions about both language acquisition and processing, some of which we have already begun to test.
Feb 17, 2014: Richard Aslin, Department of Brain and Cognitive Sciences, University of Rochester
Title: Behavioral, computational, and neural mechanisms of statistical learning in infants and adults
Abstract: In the past 15 years, a substantial body of evidence has confirmed that a powerful distributional learning mechanism is present in infants, children, and adults. I will briefly review this literature, which began in the temporal domain as a solution to the word segmentation problem, and then make the point that this mechanism is modality general, domain general, and species general. However, to be tractable, this powerful statistical learning engine must be constrained, and those constraints are both subtle and diverse; e.g., how infants allocate their attention to sequences of events plays an important role in the efficiency and effectiveness of learning. A variety of computational models have been proposed to account for statistical learning, including those that treat learning of exemplars and learning of rules as separate mechanisms. I will argue that this exemplar-rule distinction can more parsimoniously be accounted for by a single statistical learning mechanism that is sensitive to the patterning of the input. Variations in how learners judge the grammaticality of utterances from an artificial grammar, and a single model that accounts for that variation, will then be reviewed. Finally, time permitting, I will provide a brief glimpse at some recent data on the neural correlates of statistical learning.
Mar 10, 2014: Marian Stewart Bartlett, University of California, San Diego and Emotient, Inc.
Title: Exploring the facial expression perception-production link using real-time facial expression recognition
Abstract: This talk reviews recent research in my lab exploring natural facial behavior with automatic facial expression recognition from computer vision. Automated systems enable new avenues for the study of facial expression, including explorations of learning in paradigms that respond to the subject’s facial expressions in real time. The talk first describes development of an intervention for children with autism. Children with autism spectrum disorders (ASD) are impaired in their ability to produce and perceive dynamic facial expressions, which may contribute to social deficits. Here I will describe a collaboration with Jim Tanaka and Bob Schultz to develop a novel intervention system for improving facial expression perception and production in children with ASD. The intervention employs the computer vision system developed in my lab to train facial expression production, provide practice in facial mimicry, and provide immediate feedback on the child’s facial expressions. Next I will describe an experiment to explore the link between perception and production in learning facial expressions. Motor production may play an important role in learning to recognize facial expressions. The present study explores the influence of facial production training on the perception of facial expressions by employing a novel production training intervention built on feedback from automated facial expression recognition. We hypothesized that production training using the automated feedback system would improve an individual’s ability to identify dynamic emotional faces. Thirty-four participants were administered a dynamic expression recognition task before and after either interacting with a production training video game called the Emotion Mirror or playing a control video game. 
Consistent with the prediction that perceptual benefits are tied to expression production, individuals with high engagement in production training improved more than individuals with low engagement or individuals who did not receive production training. These results suggest that the visual-motor associations involved in expression production training are related to perceptual abilities. Additionally, this study demonstrates a novel application of computer vision for real-time facial expression intervention training. Lastly, I will describe a project to measure spontaneous mimicry during a task for detecting deceptive expressions of pain. We show that facial mimicry correlates with the ability to detect when a person is lying. This had long been hypothesized by embodied theories of cognition but never previously shown. These findings were made possible by the use of novel computer vision techniques that allowed us to obtain rich quantitative information about facial dynamics.
Mar 31, 2014: Dennis Proffitt, University of Virginia
Title: Perception Viewed as a Phenotypic Expression
Abstract: Visual experience relates the optically-specified environment to people’s ever-changing purposes and the embodied means by which these purposes are achieved. Depending upon their purpose, people turn themselves into walkers, throwers, graspers, etc., and in so doing, they perceive the world in relation to what they have become. People transform their phenotype to achieve ends and scale their perceptions with that aspect of their phenotype that is relevant for their purposive action. Within near space, apparent distances are scaled with morphology, and in particular, to the extent of an actor’s reach. For large environments, such as fields and hills, spatial layout is scaled by changes in physiology – the bioenergetic costs of walking relative to the bioenergetic resources currently available. When appropriate, behavioral performance scales apparent size; for example, a golf hole looks bigger to golfers when they are putting well. Research findings show that perception is influenced by both manipulations of and individual differences in people’s purposive action capabilities.
Apr 7, 2014: Tom Palmeri, Vanderbilt University
Title: Neurocognitive modeling of perceptual decision making
Abstract: Mathematical psychology and systems neuroscience have converged on stochastic accumulation of evidence models as a general theoretical framework to explain the time course of perceptual decision making. I describe collaborative research that aims to forge strong connections between computational and neural levels of explanation in the context of saccade decisions by awake behaving monkeys. Viable models predict both the dynamics of monkey behavior and the dynamics of neural activity. I describe how neural measures can be used to constrain models, how model predictions of neural measures can be used for model selection, and how cognitive models inform understanding of neurophysiological signals. I also discuss strengths and limitations of mapping between computational and neural levels.
Apr 21, 2014: Arthur Glenberg, Arizona State University
Title: Individual differences as a crucible for testing embodied theories of language comprehension
Abstract: I will present the results from two projects that examine individual differences in the embodiment of language comprehension. Both projects test the claim that language comprehension is a simulation process that uses neural and bodily systems of perception, action, and emotion. But, are embodied effects found only when reading special texts designed to elicit them, or is simulation the basis for comprehension of all types of texts? If the latter is correct, then people who best simulate should best understand. This prediction was tested in the first project. We first measured reading comprehension skill using the Gates-MacGinitie standardized reading test, and then the same participants read a passage using Zwaan and Taylor's reading-by-rotation paradigm, which provides a measure of embodiment. The embodiment prediction is for a positive correlation between the two measures, whereas non-embodied positions predict either a negative correlation (simulation is a waste of resources) or a zero correlation (simulation is epiphenomenal). The second project takes seriously the claim that bodily resources contribute to simulation. In this case, large (tall/heavy) people should find it relatively easier to understand sentences such as 'You pushed the SUV to the gas station for a fill-up' compared to sentences such as 'You entered the house by crawling through the doggie door.' And, smaller people should show the reverse.