In rapid serial visual presentation, identification of the second of two targets is impaired when it closely follows the first target. This phenomenon is known as the attentional blink (AB) effect. Awh and colleagues (2004) found that face discrimination was immune to the AB when performed together with a digit...
Perea, Vergara-Martínez, and Gomez (2015) claimed a late locus of the case-mixing effect in visual word recognition. In their masked priming study, participants performed a lexical-decision task on an uppercase target, which was preceded by an identity or unrelated prime (e.g., “plane” or “music” followed by “PLANE”, respectively) in lowercase or...
Visual working memory (VWM) allows us to temporarily hold images in our minds and manipulate them. For example, you can remember a face you just saw, or imagine how a room would look with a different arrangement of furniture. Previous studies have shown that individuals with low VWM...
Alexithymia is a trait in which individuals have difficulty identifying feelings and finding words to express emotions. Some studies have suggested that this deficit is due to dissociation (repression), or an inability to perceive emotions, whereas others have argued that the deficit is due to suppression of emotional information after it...
Previous studies have suggested that LEET words can automatically activate lexical information because of their physical similarity to real words (e.g., Perea, Duñabeitia, & Carreiras, 2008). Lien, Allen, and Martin (in press) recently used electrophysiological measures (event-related brain potentials; ERPs) to show similar lexical/semantic activation (based on the N400 effect,...
Previous studies have shown that younger and older adults exhibit similar brain activity while anticipating monetary gain, but older adults exhibit less brain activity than younger adults while anticipating monetary loss. Anderson et al. (2011) found that visual search was slower with a salient, task-irrelevant...
Previous studies have suggested that negatively valenced faces (e.g., angry faces) automatically capture attention away from faces with other emotional valences (e.g., happy and neutral faces). The present study comprised two experiments examining age-related differences: the first assessed recognition memory for pictures of faces and how it is modulated...
Previous studies have suggested that negatively valenced faces (e.g., angry faces) automatically capture attention away from faces with other emotional valences (e.g., happy and neutral faces). The present study evaluated whether this attentional bias enhances memory for the negative emotional faces. Participants first performed a gender discrimination task on...
Some studies have found that responses are faster when the orientation of an object’s graspable part corresponds with the response location than when it does not (i.e., the object-based correspondence effect). We examined Goslin et al.’s (2012) claim that the effect is the result of object-based attention (visual-action binding). As...
Lien, Ruthruff, and Johnston (2010) reported that the attentional control system is able to rapidly and fully switch between different search settings (e.g., red to green), with no carryover. The present study examined whether such impressive flexibility is possible even with more complicated switches, namely singleton search and the feature...