What type of neural activity leads to better memory?

Repeated exposure to a stimulus is associated with corresponding reductions in neural activity. This phenomenon of repetition suppression may be related to facilitated processing or implicit memory. Repeated exposure to a stimulus can also be considered in terms of the similarity of the pattern of neural activity elicited at each exposure -- a measure that has recently been linked to explicit memory. The extent to which these two measures differentially (or similarly) relate to memory is not well established. With Marvin Chun and Brice Kuhl, I compared repetition suppression and pattern similarity as predictors of both implicit and explicit memory. Using fMRI, we scanned participants while they viewed and categorized repeated presentations of scenes. Repetition priming (facilitated categorization across repetitions) was used as a measure of implicit memory, and subsequent scene recognition was used as a measure of explicit memory. We found that repetition priming was predicted by repetition suppression; however, repetition priming was not predicted by pattern similarity. In contrast, subsequent explicit memory was predicted by pattern similarity (across repetitions); however, explicit memory was not related to repetition suppression. This striking double dissociation indicates that repetition suppression and pattern similarity differentially track implicit and explicit learning.
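
To make the distinction between the two measures concrete, here is a minimal sketch (not the actual analysis code) of how each can be computed from the same data, using hypothetical voxel-pattern arrays in place of real fMRI responses:

```python
import numpy as np

# Hypothetical data: voxel activation patterns for one scene at its
# first and second presentation (e.g., from a region of interest).
# Shapes and values are illustrative only.
rng = np.random.default_rng(0)
pattern_rep1 = rng.normal(loc=1.0, size=200)  # 200 voxels, 1st exposure
pattern_rep2 = rng.normal(loc=0.8, size=200)  # 200 voxels, 2nd exposure

# Repetition suppression: the drop in mean activation across
# repetitions (a univariate measure).
repetition_suppression = pattern_rep1.mean() - pattern_rep2.mean()

# Pattern similarity: the correlation between the two voxel patterns
# across repetitions (a multivariate measure), here Pearson's r.
pattern_similarity = np.corrcoef(pattern_rep1, pattern_rep2)[0, 1]

print(f"repetition suppression: {repetition_suppression:.3f}")
print(f"pattern similarity (r): {pattern_similarity:.3f}")
```

Note that the two measures are computed from the same patterns but capture different things: one tracks overall signal reduction, the other tracks the stability of the pattern's shape across voxels.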



What type of information can change unconscious processing?

Gary Lupyan and I investigated whether language-based activation of visual representations -- the visual properties of an object that seem to be activated by linguistic labels (e.g. "chair") -- can affect the ability to simply detect the presence of an object. We used continuous flash suppression to render pictures of familiar objects invisible. In this procedure, an image is presented to one eye and a dynamic noise pattern to the other eye. The noise suppresses the image, making it invisible for long durations. During the experiments, however, when participants heard the name of the invisible object, they became aware of the image. An otherwise invisible picture of a kangaroo, for example, was boosted into visual awareness when participants heard the word "kangaroo". Hearing the wrong word made pictures even more difficult to detect. We theorize that words can affect how even the most basic visual processes work.
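
For readers unfamiliar with continuous flash suppression, here is a toy sketch of the kind of dynamic noise involved. It is not the actual stimulus code; it simply generates Mondrian-style mask frames with numpy, and the frame size, rectangle counts, and 10 Hz flash rate are assumptions for illustration:

```python
import numpy as np

def mondrian_frame(height=256, width=256, n_rects=60, rng=None):
    """Generate one Mondrian-style mask: a field of randomly
    placed, randomly shaded rectangles on a mid-gray background."""
    if rng is None:
        rng = np.random.default_rng()
    frame = np.full((height, width), 0.5)  # mid-gray background
    for _ in range(n_rects):
        y, x = rng.integers(0, height), rng.integers(0, width)
        h, w = rng.integers(10, 60, size=2)
        frame[y:y + h, x:x + w] = rng.random()  # random luminance
    return frame

# In continuous flash suppression, a fresh mask like this is flashed
# to one eye at roughly 10 Hz while a static target image is shown to
# the other eye; the flicker suppresses the target from awareness.
masks = [mondrian_frame(rng=np.random.default_rng(i)) for i in range(10)]
```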



What type of unconscious information can influence conscious behavior?

What we see is a function not only of incoming stimulation, but of unconscious inferences made during visual processing. Bistable images, in which the same stimulus alternates between two very different percepts, are a powerful demonstration of this. But what causes the percepts to switch? Brian Scholl and I used the Spinning Dancer illusion, a silhouette that is bistable in terms of depth and rotation direction, although many people see only extended rotation in the same direction, interrupted only rarely by involuntary switches. We introduced briefly flashed contour cues on the dancer that unambiguously specified her direction of rotation. Participants failed to notice the contours throughout the entire experiment, yet the cues had a strong and systematic effect: they typically triggered seemingly random perceptual switches shortly thereafter, especially when they conflicted with the current percept. Thus, unconscious inferences in visual processing can extract the content of incoming information even when the existence of that information never reaches awareness, and a sense of randomness should not be taken to imply a corresponding lack of underlying systematicity.
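
The reason the silhouette is bistable can be shown in a few lines. Under orthographic projection a silhouette discards depth, so clockwise and counterclockwise rotation produce identical image sequences. The sketch below tracks a single hypothetical point on the figure rotating about the vertical axis:

```python
import numpy as np

# A point on the dancer at angle theta projects to horizontal image
# position cos(theta); its depth, sin(theta), is lost in the silhouette.
theta = np.linspace(0, 2 * np.pi, 100)
x_clockwise = np.cos(-theta)         # projected position, one direction
x_counterclockwise = np.cos(theta)   # projected position, the other

# cos is an even function, so the two image trajectories are identical:
# the 2D stimulus is literally the same for both rotation directions.
assert np.allclose(x_clockwise, x_counterclockwise)
```

Because the image itself carries no direction information, the perceived rotation must be supplied by the visual system's own inferences, which is what makes the unnoticed contour cues so diagnostic.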



How does the brain encode information about visual scenes?

In previous work with Russell Epstein, I investigated natural scene encoding in the parahippocampal place area (PPA), a region of the brain that responds preferentially to images of scenes. We believe that the response in the PPA derives from the spatial layout of the images, and not from contextual relationships among the objects in the scene. Consistent with this, we found that scene-selective regions of the brain are sensitive to viewpoint relative to the eyes (as in earlier visual areas), rather than encoding the scene in a more stable, head- or body-centered (spatiotopic) frame of reference.
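
The logic of this kind of viewpoint test can be sketched with fMRI adaptation. The values below are hypothetical, not the published data; the point is only the inference pattern: if a region encodes scenes in an eye-centered frame, its response should recover (show less adaptation) whenever the retinal image changes, even when the scene's position relative to the head or body is unchanged.

```python
# Hypothetical condition means (arbitrary units), for illustration only.
response = {
    "novel scene": 1.00,
    "repeat, same eye-centered position": 0.70,        # strong adaptation
    "repeat, same head/body position only": 0.95,      # little adaptation
}

def adaptation_index(condition):
    # Proportional response reduction relative to novel scenes.
    return 1 - response[condition] / response["novel scene"]

for condition in list(response)[1:]:
    print(f"{condition}: adaptation = {adaptation_index(condition):.2f}")
```

On this logic, strong adaptation only when the eye-centered view repeats would point to an eye-centered code, which is the pattern described above.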