PsyResearch
ψ   Psychology Research on the Web   



Journal of Experimental Psychology: Human Perception and Performance - Vol 43, Iss 7

The Journal of Experimental Psychology: Human Perception and Performance publishes studies on perception, control of action, and related cognitive processes.
Copyright 2017 American Psychological Association
  • Our own action kinematics predict the perceived affective states of others.
    Our movement kinematics provide useful cues about our affective states. Given that our experiences furnish models that help us to interpret our environment, and that a rich source of action experience comes from our own movements, in the present study, we examined whether we use models of our own action kinematics to make judgments about the affective states of others. For example, relative to one’s typical kinematics, anger is associated with fast movements. Therefore, the extent to which we perceive anger in others may be determined by the degree to which their movements are faster than our own typical movements. We related participants’ walking kinematics in a neutral context to their judgments of the affective states conveyed by observed point-light walkers (PLWs). As predicted, we found a linear relationship between one’s own walking kinematics and affective state judgments, such that faster participants rated slower emotions more intensely relative to their ratings for faster emotions. This relationship was absent when observing PLWs where differences in velocity between affective states were removed. These findings suggest that perception of affective states in others is predicted by one’s own movement kinematics, with important implications for perception of, and interaction with, those who move differently. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
    Citation link to source

  • Upside-down: Perceived space affects object-based attention.
    Object-based attention influences the subjective metrics of surrounding space. However, does perceived space influence object-based attention, as well? We used an attentive tracking task that required sustained object-based attention while objects moved within a tracking space. We manipulated perceived space through the availability of depth cues and varied the orientation of the tracking space. When rich depth cues were available (appearance of a voluminous tracking space), the upside-down orientation of the tracking space (objects appeared to move high on a ceiling) caused a pronounced impairment of tracking performance compared with an upright orientation of the tracking space (objects appeared to move on a floor plane). In contrast, this was not the case when reduced depth cues were available (appearance of a flat tracking space). With a preregistered second experiment, we showed that those effects were driven by scene-based depth cues and not object-based depth cues. We conclude that perceived space affects object-based attention and that object-based attention and perceived space are closely interlinked. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
    Citation link to source

  • Abstract feature codes: The building blocks of the implicit learning system.
    According to the Theory of Event Coding (TEC; Hommel, Müsseler, Aschersleben, & Prinz, 2001), action and perception are represented in a shared format in the cognitive system by means of feature codes. In implicit sequence learning research, it is still common to make a conceptual distinction between independent motor and perceptual sequences. This supposedly independent learning is held to take place in encapsulated modules (Keele, Ivry, Mayr, Hazeltine, & Heuer, 2003) that process information along single dimensions. These dimensions have so far remained underspecified. In particular, it is not clear whether stimulus and response characteristics are processed in separate modules. Here, we suggest that feature dimensions as described in the TEC should be viewed as the basic content of the modules of implicit learning. This means that the modules process all stimulus and response information related to certain feature dimensions of the perceptual environment. In 3 experiments, we used a serial reaction time task to investigate the nature of the basic units of implicit learning. As a test case, we used stimulus location sequence learning. The results show that a stimulus location sequence and a response location sequence cannot be learned without interference (Experiment 2) unless one of the sequences can be coded via an alternative, nonspatial dimension (Experiment 3). These results support the notion that spatial location is one module of the implicit learning system and, consequently, that there are no separate processing units for stimulus versus response locations. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
    Citation link to source

  • Cerebral hemodynamics during scene viewing: Hemispheric lateralization predicts temporal gaze behavior associated with distinct modes of visual processing.
    Systematic patterns of eye movements during scene perception suggest a functional distinction between 2 viewing modes: an ambient mode (characterized by short fixations and large saccades) thought to reflect dorsal activity involved with spatial analysis, and a focal mode (characterized by long fixations and small saccades) thought to reflect ventral activity involved with object analysis. Little neuroscientific evidence exists supporting this claim. Here, functional transcranial Doppler ultrasound (fTCD) was used to investigate whether these modes show hemispheric specialization. Participants viewed scenes for 20 s under instructions to search or memorize. Overall, early viewing was right lateralized, whereas later viewing was left lateralized. This right-to-left shift interacted with viewing task (more pronounced in the memory task). Importantly, changes in lateralization correlated with changes in eye movements. This is the first demonstration of right hemisphere bias for eye movements servicing spatial analysis and left hemisphere bias for eye movements servicing object analysis. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
    Citation link to source

  • The phonological unit of Japanese Kanji compounds: A masked priming investigation.
    Using the masked priming paradigm, we examined which phonological unit is used when naming Kanji compounds. Although the phonological unit in the Japanese language has been suggested to be the mora, Experiment 1 found no priming for mora-related Kanji prime-target pairs. In Experiment 2, significant priming was only found when Kanji pairs shared the whole sound of their initial Kanji characters. Nevertheless, when the same Kanji pairs used in Experiment 2 were transcribed into Kana, significant mora priming was observed in Experiment 3. In Experiment 4, matching the syllable structure and pitch-accent of the initial Kanji characters did not lead to mora priming, ruling out potential alternative explanations for the earlier absence of the effect. A significant mora priming effect was observed, however, when the shared initial mora constituted the whole sound of the initial Kanji characters in Experiment 5. Lastly, these results were replicated in Experiment 6. Overall, these results indicate that the phonological unit used when naming Kanji compounds is not the mora but the whole sound of each Kanji character. We discuss how different phonological units may be involved when processing Kanji and Kana words, as well as the implications for theories dealing with language production processes. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
    Citation link to source

  • The role of error processing in the contextual interference effect during the training of perceptual-cognitive skills.
    The contextual interference (CI) effect refers to the learning benefits that occur from a random compared with a blocked practice order. In this article, the cognitive effort explanation for the CI effect was examined by investigating the role of error processing. In 2 experiments, a perceptual-cognitive task was used in which participants anticipated 3 different tennis skills across a pretest, 3 practice sessions, and a retention test. During practice, the skills were presented in either a random or a blocked practice order. In Experiment 1, cognitive effort was examined using a probe reaction time (RT) task. In Experiment 2, cognitive effort was manipulated for 2 groups by inserting a cognitively demanding secondary task into the intertrial interval. The CI effect was found in both experiments, as the random groups displayed superior learning in the retention test compared with the blocked groups. In Experiment 1, cognitive effort during practice was greater in the random than in the blocked practice groups. In Experiment 2, greater decrements in secondary task performance following an error were reported for the random group than for the blocked group. The suggestion is that it is not only the frequent switching of tasks in randomized orders that causes increased cognitive effort and the CI effect, but also error processing in combination with task switching. The findings extend the cognitive effort explanation for the CI effect and propose an alternative hypothesis highlighting the role of error processing. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
    Citation link to source

  • Embodied memory allows accurate and stable perception of hidden objects despite orientation change.
    Rotating a scene in a frontoparallel plane (rolling) yields a change in orientation of its constituent images. When using only the information provided by static images to perceive a scene after an orientation change, identification performance typically decreases (Rock & Heimer, 1957). However, rolling generates optic flow information that relates the discrete, static images (before and after the change) and forms an embodied memory that aids recognition. The embodied memory hypothesis predicts that upon detecting a continuous spatial transformation of image structure, in other words, seeing the continuous rolling process and the objects undergoing rolling, observers should accurately perceive objects during and after the motion. Thus, in this case, orientation change should not affect performance. We tested this hypothesis in three experiments and found that (a) using combined optic flow and image structure, participants identified locations of previously perceived but currently occluded targets with great accuracy and stability (Experiment 1); (b) using combined optic flow and image structure information, participants identified hidden targets equally well with or without 30° orientation changes (Experiment 2); and (c) when the rolling was unseen, identification of hidden targets after an orientation change became worse (Experiment 3). Furthermore, when the rolling was unseen, although target identification was better when participants were told about the orientation change than when they were not, performance was still worse than when there was no orientation change. Therefore, combined optic flow and image structure information, not mere knowledge about the rolling, enables accurate and stable perception despite orientation change. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
    Citation link to source

  • The motor-cognitive model of motor imagery: Evidence from timing errors in simulated reaching and grasping.
    Motor imagery represents an important but theoretically underdeveloped area of research in psychology. The motor-cognitive model of motor imagery was presented and contrasted with the currently prevalent view, the functional equivalence model. In 3 experiments, the predictions of the two models were pitted against each other through manipulations of task precision and the introduction of an interference task, while comparing their effects on overt actions and motor imagery. In Experiments 1a and 1b, the motor-cognitive model predicted an effect of precision whereby motor imagery would overestimate simulated movement times when a grasping action involved a high level of precision; this prediction was upheld. In Experiment 2, the motor-cognitive model predicted that an interference task would slow motor imagery to a much greater extent than overt actions; this prediction was also upheld. Experiment 3 showed that the effects observed in the previous experiments could not be due to failures to match the motor imagery and overt action tasks. None of the above results were explainable by either a strong version of the functional equivalence model or any reasonable adaptation thereof. It was concluded that the motor-cognitive model may represent a theoretically viable advance in the understanding of motor imagery. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
    Citation link to source

  • The magical number one-on-square-root-two: The double-target detection deficit in brief visual displays.
    How limited representational capacity is divided when multiple items need to be processed simultaneously is a fundamental question in cognitive psychology. The double-target deficit is the finding that, when monitoring multiple locations or information streams for targets, identification of 2 simultaneous targets is substantially worse than is predicted from the cost of divided attention alone. This finding suggests that targets and nontargets are treated differently by the cognitive system. We investigated the double-target deficit in 4 different visual decision tasks using noisy, backwardly masked targets presented for a range of exposure durations to test the theory that the deficit reflects a capacity limitation of visual short-term memory (VSTM). We quantified the deficit using a sample-size model of VSTM and 2 different models of the decision process: a signal detection MAX model and an optimum likelihood ratio model. We found a double-target deficit in all 4 tasks, which increased in magnitude for briefer displays, consistent with the capacity limits of VSTM. We explained the exposure dependency using a competitive interaction model in which nontargets compete for access to VSTM at a slower rate than targets. Our findings support 2-stage models of visual processing in which the most target-like stimuli gain priority access to VSTM before the decision process begins. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
    Citation link to source
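    A minimal worked reading of the title's figure, assuming the standard sample-size model the abstract invokes (notation ours, not the paper's): if discriminability grows with the square root of the number of memory samples an item receives, then splitting a fixed pool of N samples evenly across 2 simultaneous targets predicts

        d' \propto \sqrt{n}, \qquad d'_{\mathrm{two}} = \sqrt{N/2} = \frac{d'_{\mathrm{one}}}{\sqrt{2}} \approx 0.707\, d'_{\mathrm{one}},

    so per-target sensitivity should fall to one-on-square-root-two of its single-target value from divided capacity alone; the double-target deficit reported above is the additional drop beyond this baseline.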

  • Avoiding the conflict: Metacognitive awareness drives the selection of low-demand contexts.
    Previous research has attempted to explain how humans strategically adapt their behavior to achieve successful task performance. Recently, it has been suggested that 1 potential strategy is to avoid tasks that are too demanding. Here, we report 3 experiments that investigate the empirically neglected role of metacognitive awareness in this process. In these experiments, participants could freely choose between performing a task in either a high-demand or a low-demand context. Using subliminal priming, we ensured that participants were not aware of the visual stimuli creating these different demand contexts. Our results showed that participants who noticed a difference in task difficulty (i.e., metacognitively aware participants) developed a clear preference for the low-demand context. In contrast, participants who experienced no difference in task difficulty (i.e., metacognitively unaware participants) based their choices on variables unrelated to cognitive demand (e.g., the color or location associated with a context) and did not develop a preference for the low-demand context. Crucially, this pattern was found despite identical task performance in the two metacognitive awareness groups. A multiple regression approach confirmed that metacognitive awareness was the main factor driving the preference for low-demand contexts. These results argue for an important role of metacognitive awareness in the strategic avoidance of demanding tasks. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
    Citation link to source

  • The role of allograph representations in font-invariant letter identification.
    The literate brain must contend with countless font variants for any given letter. How does the visual system handle such variability? One proposed solution posits stored structural descriptions of basic letter shapes that are abstract enough to deal with the many possible font variations of each letter. These font-invariant representations, referred to as allographs in this paper, while frequently posited, have seldom been empirically evaluated. The research reported here helps to address this gap with 2 experiments that examine the possible influence of allograph representations on visual letter processing. In these experiments, participants responded to pairs of letters presented in an atypical font in 2 tasks: visual similarity judgments (Experiment 1) and same/different decisions (Experiment 2). By using representational similarity analysis (RSA) in conjunction with linear mixed-effects models (LMEM; RSA-LMEM), we show that the similarity structure of the responses to the atypical font is influenced by the predicted similarity structure of allograph representations, even after accounting for font-specific visual shape similarity. Similarity due to symbolic (abstract) identity, name, and motor representations of letters is also taken into account, providing compelling evidence for the unique influence of allograph representations in these tasks. These results support the role of allograph representations in achieving font-invariant letter identification. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
    Citation link to source
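    As an illustration of the RSA step only (hypothetical data and names; the paper's RSA-LMEM pipeline additionally fits linear mixed-effects models over participants, which is not reproduced here), one might correlate an observed letter-pair dissimilarity matrix with a model RDM derived from allograph descriptions:

        # Minimal RSA sketch: hypothetical data, not the authors' pipeline.
        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(0)
        n_letters = 26

        # Placeholder matrices: observed pairwise dissimilarities (e.g., from
        # same/different decision latencies) and a model RDM predicted from
        # hypothesized allograph (font-invariant shape) representations.
        observed_rdm = rng.random((n_letters, n_letters))
        observed_rdm = (observed_rdm + observed_rdm.T) / 2  # symmetrize
        model_rdm = rng.random((n_letters, n_letters))
        model_rdm = (model_rdm + model_rdm.T) / 2

        # Compare only the unique letter pairs (upper triangle, no diagonal).
        iu = np.triu_indices(n_letters, k=1)
        rho, p = spearmanr(observed_rdm[iu], model_rdm[iu])
        print(f"model-observed RDM correlation: rho={rho:.3f}, p={p:.3f}")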

  • Domain-general biases in spatial localization: Evidence against a distorted body model hypothesis.
    A number of studies have proposed the existence of a distorted body model of the hand. Supporting this hypothesis, judgments of the location of hand landmarks without vision are characterized by consistent distortions: wider knuckle spacing and shorter finger lengths. We examined an alternative hypothesis in which these biases are caused by domain-general mechanisms, whereby participants overestimate the distance between consecutive localization judgments that are spatially close. To do so, we examined performance on a landmark localization task with the hand (Experiments 1–3) using a lag-1 analysis. We replicated the widened knuckle judgments of previous studies. Using the lag-1 analysis, we found evidence for a constant overestimation bias along the mediolateral hand axis, such that consecutive stimuli were perceived as farther apart when they were closer (e.g., index-middle knuckle) than when they were farther (index-pinky) in space. Controlling for this bias, we found no evidence for a distorted body model along the mediolateral hand axis. To examine whether similar widening biases could be found with noncorporeal stimuli, we asked participants to localize remembered dots on a hand-like array (Experiments 4–5). Mean localization judgments were wider than the actual positions along the primary array axis, similar to previous work with hands. As with proprioceptively defined stimuli, we found that this widening was primarily due to a constant overestimation bias. These results provide substantial evidence against a distorted body model hypothesis and support a domain-general model in which responses are biased away from the uncertainty distribution of the previous trial, leading to a constant overestimation bias. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
    Citation link to source
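    A minimal simulation of the lag-1 logic described above (entirely hypothetical data and parameter values): if each response is pushed a roughly constant distance away from the previous response, judged separations are inflated proportionally more for close consecutive stimuli than for far ones:

        # Lag-1 sketch: constant repulsion from the previous response inflates
        # judged separations most for spatially close consecutive trials.
        import numpy as np

        rng = np.random.default_rng(1)
        n_trials = 5000

        true_loc = rng.uniform(0.0, 10.0, n_trials)   # stimulus positions (cm)
        judged = true_loc + rng.normal(0.0, 0.4, n_trials)
        for t in range(1, n_trials):
            # Constant bias away from the preceding response.
            judged[t] += 0.3 * np.sign(judged[t] - judged[t - 1])

        true_sep = np.abs(np.diff(true_loc))
        judged_sep = np.abs(np.diff(judged))
        for lo, hi in [(0, 2), (2, 4), (4, 6), (6, 8), (8, 10)]:
            m = (true_sep >= lo) & (true_sep < hi)
            ratio = judged_sep[m].mean() / true_sep[m].mean()
            print(f"true separation {lo}-{hi} cm: judged/true = {ratio:.2f}")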

  • Conceptual biases explain distortion differences between hand and objects in localization tasks.
    Recent studies have shown the presence of distortions in proprioceptive hand localization tasks. Those results were interpreted as reflecting specific perceptual distortions bound to a body model. In particular, it was suggested that hand distortions could be related to distortions of somatotopic cortical maps. In this study, we show that the hand distortions measured in localization tasks might be partly driven by a general false belief about hand landmark locations (a conceptual bias). We demonstrate that hand and object distortions are of similar magnitude when correcting for the conceptual bias about the knuckles (Experiment 1) or when asking participants to directly locate spatially well-represented landmarks (i.e., landmarks without conceptual biases) on their hand (Experiment 2). Altogether, our results suggest that localization task distortions are not specific to the body and that similar perceptual distortions could underlie localization performance measured on objects and hands. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
    Citation link to source

  • Attention mediates the flexible allocation of visual working memory resources.
    Although it is clear that visual working memory (VWM) cannot store an unlimited amount of information, the limiting mechanisms remain elusive. While several models of VWM limitations exist, these typically characterize changes in performance as a function of the number of to-be-remembered items. Here, we examine whether changes in spatial attention could better account for VWM performance, independent of load. Across 2 experiments, performance was better predicted by the prioritization of memory items (i.e., attention) than by the number of items to be remembered (i.e., memory load). This relationship followed a power law, and it held regardless of whether performance was assessed based on overall precision or on any of 3 measures in a mixture model. Moreover, at large set sizes, even minimally attended items could receive a small proportion of resources, without any evidence for a discrete capacity limit on the number of items that could be maintained in VWM. Finally, the observed data were best fit by a variable-precision model in which response error was related to the proportion of resources allocated to each item, consistent with a model of VWM in which performance is determined by the continuous allocation of attentional resources during encoding. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
    Citation link to source
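    One way to write the power-law relation described above, as a sketch under an illustrative parameterization (the paper's exact form may differ): if an item receives proportion p of the attentional resource, its memory precision J (the inverse variance of report error) and its error SD \sigma scale as

        J(p) = J_1\, p^{\alpha}, \qquad \sigma(p) = 1/\sqrt{J(p)} = J_1^{-1/2}\, p^{-\alpha/2},

    with J_1 and \alpha free parameters, so that error depends continuously on the resources allocated to each item rather than on a discrete item limit.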

  • Sync or separate? No compelling evidence for unintentional interpersonal coordination between Usain Bolt and Tyson Gay on the 100-meter world record race.
    In a recent observation article in the Journal of Experimental Psychology: Human Perception and Performance (JEP:HPP; Varlet & Richardson, 2015), the 100-m sprint final of the 2009 World Championships in Athletics in Berlin (i.e., the current world record race) was analyzed. That study reported the occurrence of spontaneous, unintentional interpersonal synchronization between Usain Bolt and Tyson Gay, the respective winner and runner-up of that race. In the present commentary article, however, we argue that the results and conclusion of that study cannot be sustained because of methodological shortcomings. We addressed the same research question and reassessed the same race using an alternative data analysis method. These results revealed that there is as yet insufficient ground to conclude that synchronization between Bolt and Gay occurred in the 100-m world record race. Our reanalysis suggested, however, that even at this very elite level the individual movement frequencies varied to such an extent that synchronization would theoretically still be possible, thereby providing an incentive for further examination of potential unintentional synchronization in coactive sports. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
    Citation link to source


