PsyResearch
ψ   Psychology Research on the Web   



Journal of Experimental Psychology: Human Perception and Performance - Vol 40, Iss 2

The Journal of Experimental Psychology: Human Perception and Performance publishes studies on perception, control of action, and related cognitive processes.
Copyright 2014 American Psychological Association
  • Percepts, not acoustic properties, are the units of auditory short-term memory.
    For decades, researchers have sought to understand the organizing principles of auditory and visual short-term memory (STM). Previous work in audition has suggested that there are independent memory stores for different sound features, but the nature of the representations retained within these stores is currently unclear. Do they retain perceptual features, or do they instead retain representations of the sound’s specific acoustic properties? In the present study we addressed this question by measuring listeners’ abilities to keep one of three acoustic properties (interaural time difference [ITD], interaural level difference [ILD], or frequency) in memory when the target sound was followed by interfering sounds that varied randomly in one of the same properties. Critically, ITD and ILD evoked the same percept (spatial location), despite being acoustically different and having different physiological correlates, whereas frequency evoked a different percept (pitch). The results showed that listeners found it difficult to remember the percept of spatial location when the interfering tones varied either in ITD or ILD, but not when they varied in frequency. The study demonstrates that percepts are the units of auditory STM, and provides testable predictions for future neuroscientific work on both auditory and visual STM. (PsycINFO Database Record (c) 2014 APA, all rights reserved)

  • Shape information mediating basic- and subordinate-level object recognition revealed by analyses of eye movements.
    This study examines the kinds of shape features that mediate basic- and subordinate-level object recognition. Observers were trained to categorize sets of novel objects at either a basic (between-families) or subordinate (within-family) level of classification. We analyzed the spatial distributions of fixations and compared them to model distributions of different curvature polarity (regions of convex or concave bounding contour), as well as internal part boundaries. The results showed a robust preference for fixation at part boundaries and for concave over convex regions of bounding contour, during both basic- and subordinate-level classification. In contrast, mean saccade amplitudes were shorter during basic- than subordinate-level classification. These findings challenge models of recognition that do not posit any special functional status to part boundaries or curvature polarity. We argue that both basic- and subordinate-level classification are mediated by object representations that make explicit internal part boundaries and distinguish concave from convex regions of bounding contour. The classification task constrains how shape information in these representations is used, consistent with the hypothesis that both parts-based and image-based operations support object recognition in human vision. (PsycINFO Database Record (c) 2014 APA, all rights reserved)

  • Can’t touch this: The first-person perspective provides privileged access to predictions of sensory action outcomes.
    Previous studies have shown that viewing others in pain activates cortical somatosensory processing areas and facilitates the detection of tactile targets. It has been suggested that such shared representations have evolved to enable us to better understand the actions and intentions of others. If this is the case, the effects of observing others in pain should be obtained from a range of viewing perspectives. Therefore, the current study examined the behavioral effects of observed grasps of painful and nonpainful objects from both a first- and third-person perspective. In the first-person perspective, a participant was faster to detect a tactile target delivered to their own hand when viewing painful grasping actions, compared with all nonpainful actions. However, this effect was not revealed in the third-person perspective. The combination of action and object information to predict the painful consequences of another person’s actions when viewed from the first-person perspective, but not the third-person perspective, argues against a mechanism ostensibly evolved to understand the actions of others. (PsycINFO Database Record (c) 2014 APA, all rights reserved)

  • The flexible focus: Whether spatial attention is unitary or divided depends on observer goals.
    The distribution of visual attention has been the topic of much investigation, and various theories have posited that attention is allocated either as a single unitary focus or as multiple independent foci. In the present experiment, we demonstrate that attention can be flexibly deployed as either a unitary or a divided focus in the same experimental task, depending on the observer’s goals. To assess the distribution of attention, we used a dual-stream Attentional Blink (AB) paradigm and 2 target pairs. One component of the AB, Lag-1 sparing, occurs only if the second target pair appears within the focus of attention. By varying whether the first target pair could be expected in a predictable location (always in-stream) or not (unpredictably in-stream or between-streams), observers were encouraged to deploy a divided or a unitary focus, respectively. When the second target pair appeared between the streams, Lag-1 sparing occurred for the Unpredictable group (consistent with a unitary focus) but not for the Predictable group (consistent with a divided focus). Thus, diametrically different outcomes occurred for physically identical displays, depending on the expectations of the observer about where spatial attention would be required. (PsycINFO Database Record (c) 2014 APA, all rights reserved)

  • The spatiotemporal dynamics of scene gist recognition.
    Viewers can rapidly extract a holistic semantic representation of a real-world scene within a single eye fixation, an ability called recognizing the gist of a scene, and operationally defined here as recognizing an image’s basic-level scene category. However, it is unknown how scene gist recognition unfolds over both time and space—within a fixation and across the visual field. Thus, in 3 experiments, the current study investigated the spatiotemporal dynamics of basic-level scene categorization from central vision to peripheral vision over the time course of the critical first fixation on a novel scene. The method used a window/scotoma paradigm in which images were briefly presented and processing times were varied using visual masking. The results of Experiments 1 and 2 showed that during the first 100 ms of processing, there was an advantage for processing the scene category from central vision, with the relative contributions of peripheral vision increasing thereafter. Experiment 3 tested whether this pattern could be explained by spatiotemporal changes in selective attention. The results showed that manipulating the probability of information being presented centrally or peripherally selectively maintained or eliminated the early central vision advantage. Across the 3 experiments, the results are consistent with a zoom-out hypothesis, in which, during the first fixation on a scene, gist extraction extends from central vision to peripheral vision as covert attention expands outward. (PsycINFO Database Record (c) 2014 APA, all rights reserved)

  • A new view through Alberti’s window.
    In his famous treatise on perspective, Alberti compared picture perception with looking through a window. Although Alberti himself was more concerned with picture production than perception, the window metaphor is still widely used to describe picture perception. By performing depth perception experiments, we investigated whether Alberti’s hypothesis makes sense in a geometrical fashion. If pictures are regarded as windows, the locus of objects with equal depth should be similar for pictorial and real space—ideally, spherical. Furthermore, if the loci of equidistance are indeed similar for real and pictorial space, their difference should be flat. We designed two experiments to investigate this claim. In the first experiment, a pairwise depth comparison task was used to compute the global perceived depth structure of a complex scene. We found that perception of the real space is more accurate and less ambiguous than pictorial space. More interestingly, we found that the relative differences between these two spaces (locus of relative equidistance) are curved, which contradicts the window hypothesis. In the second experiment, we wanted to measure the absolute locus of equidistance that we believed was diagnostic for the difference between real and pictorial space perception. We found that under normal circumstances, the distribution of equally perceived depths is curved in real space, and relatively flat in pictorial space. However, we also found exceptions. For example, viewing real space with one eye yielded similar results as normal pictorial space perception. We conclude that Alberti’s hypothesis needs a revision. (PsycINFO Database Record (c) 2014 APA, all rights reserved)

  • The aperture capture illusion: Misperceived forms in dynamic occlusion displays.
    Visual illusions can reveal unconscious representations and processes at work in perception. Here we report a robust illusion that involves the misperception of moving, partially occluded objects. When a dynamically occluded object is seen through 2 misaligned apertures, the object appears misaligned in the direction of the apertures, creating the Aperture Capture Illusion. Specifically, when part of a dynamically occluded object disappears behind an occluding surface and then another part of the object comes into view immediately afterward, the 2 parts appear misaligned in the direction of the offset of the apertures through which they were seen. This illusion can be nulled: Separating the 2 object parts to increase the time interval between their appearance produced the percept of alignment. The ability to null the illusion in this manner demonstrates that dynamically occluded regions of moving objects continue to persist in perceptual awareness but, we argue, are perceived to move at a slower velocity than visible regions. We report 7 experiments establishing the existence of the illusion and ruling out several classes of explanation for it. We interpret the illusion and the ability to nullify it within the context of Palmer, Kellman, and Shipley’s (2006) theory of spatiotemporal object formation. (PsycINFO Database Record (c) 2014 APA, all rights reserved)

  • Degree of certainty modulates anticipatory processes in real time.
    In the present study, we investigated how degree of certainty modulates anticipatory processes using a modified spatial cuing task in which participants made an anticipatory hand movement with the computer mouse toward one of two probabilistic targets. A cue provided information about the location of the upcoming target with 100% validity (certain condition) or 75% validity (semicertain condition), or gave no information about the location (uncertain condition). We found that the degree of certainty associated with the probabilistic precue on the upcoming target location affected the spatiotemporal characteristics of the anticipatory hand movements in a systematic way. In the case of semicertainty, we found evidence that the anticipatory processes were modulated in a way consistent with a model of graded probability matching biased toward certainty. In the case of uncertainty regarding two equally likely locations, we observed large between- and within-subject variability in the patterns of anticipatory hand movements, suggesting that individual differences in the strategies employed may become relevant when the likelihoods of response options are equal. (PsycINFO Database Record (c) 2014 APA, all rights reserved)

  • Lexically guided phonetic retuning of foreign-accented speech and its generalization.
    Listeners use lexical knowledge to retune phoneme categories. When hearing an ambiguous sound between /s/ and /f/ in lexically unambiguous contexts such as gira[s/f], listeners learn to interpret the sound as /f/ because gira[f] is a real word and gira[s] is not. Later, they apply this learning even in lexically ambiguous contexts (perceiving knife rather than nice). Although such retuning could help listeners adapt to foreign-accented speech, research has focused on single phonetic contrasts artificially manipulated to create ambiguous sounds; however, accented speech varies along many dimensions. It is therefore unclear whether analogies to adaptation to accented speech are warranted. In the present studies, the to-be-adapted ambiguous sound was embedded in a global foreign accent. In addition, conditions of cross-speaker generalization were tested with focus on the extent to which perceptual similarity between 2 speakers’ fricatives is a condition for generalization to occur. Results showed that listeners retune phoneme categories manipulated within the context of a global foreign accent, and that they generalize this short-term learning to the perception of phonemes from previously unheard speakers. However, generalization was observed only when exposure and test speakers’ fricatives were sampled across a similar perceptual space. (PsycINFO Database Record (c) 2014 APA, all rights reserved)

  • Evidence of unlimited-capacity surface completion.
    Capacity limitations of perceptual surface completion were assessed using a simultaneous–sequential method. Observers searched among multiple surfaces requiring perceptual completion in front of other objects (modal completion) or behind other objects (amodal completion). In the simultaneous condition, all surfaces were presented at once, whereas in the sequential condition, they appeared in subsets of 2 at a time. For both modal and amodal surface completion, performance was as good in the simultaneous condition as in the sequential condition, indicating that surface completion unfolds independently for multiple surfaces across the visual field (i.e., has unlimited capacity). We confirmed this was due to the formation of surfaces defined by the pacmen inducers, and not simply to the detection of individual features of the pacmen inducers. These results provide evidence that surface-completion processes can be engaged and unfold independently for multiple surfaces across the visual field. In other words, surface completion can occur through unlimited-capacity processes. These results contribute to a developing understanding of capacity limitations in perceptual processing more generally. (PsycINFO Database Record (c) 2014 APA, all rights reserved)

  • Looming motion primes the visuomotor system.
    A wealth of evidence now shows that human and animal observers display greater sensitivity to objects that move toward them than to objects that remain static or move away. Increased sensitivity in humans is often evidenced by reaction times that increase in rank order from looming, to receding, to static targets. However, it is not clear whether the processing advantage enjoyed by looming motion is mediated by the attention system or the motor system. The present study investigated this by first examining whether sensitivity is to looming motion per se or to certain monocular or binocular cues that constitute stereoscopic motion in depth. None of the cues accounted for the looming advantage. A perceptual measure was then used to examine performance with minimal involvement of the motor system. Results showed that looming and receding motion were equivalent in attracting attention, suggesting that the looming advantage is indeed mediated by the motor system. These findings suggest that although motion itself is sufficient for attentional capture, motion direction can prime motor responses. (PsycINFO Database Record (c) 2014 APA, all rights reserved)

  • The spatially asymmetric cost of memory load on visual perception: Transient stimulus-centered neglect.
    Recent evidence suggests that visual working memory (VWM) load reduces performance accuracy on a concurrent visual recognition task, particularly for objects presented in the left hemifield. It has also been shown that high VWM load causes suppression of activity in the right temporoparietal junction (TPJ). Given the resemblance of VWM load effects to symptoms of unilateral neglect (i.e., impaired perception on the left side and lesion to the right TPJ), we investigated whether VWM load effects are restricted to the left side of space or extend to object-centered reference frames. In other words, akin to object-centered neglect, can high VWM load cause a perceptual cost in attending to the left side of the stimulus? We addressed this question using an object recognition task (Experiment 1) and a visual search task (Experiment 2) showing that this transient left-neglect can indeed be modulated by an object-centered frame of reference. These findings suggest that load-induced impairments of visual attention are spatially asymmetric and can emerge within multiple spatial reference frames. Therefore, the attentional consequences of high VWM load on conscious perception may serve as a useful model of unilateral perceptual neglect. (PsycINFO Database Record (c) 2014 APA, all rights reserved)

  • Pushing typists back on the learning curve: Revealing chunking in skilled typewriting.
    Theories of skilled performance propose that highly trained skills involve hierarchically structured control processes. The present study examined and demonstrated hierarchical control at several levels of processing in skilled typewriting. In the first two experiments, we scrambled the order of letters in words to prevent skilled typists from chunking letters, and compared typing words and scrambled words. Experiment 1 manipulated stimulus quality to reveal chunking in perception, and Experiment 2 manipulated concurrent memory load to reveal chunking in short-term memory (STM). Both experiments manipulated the number of letters in words and nonwords to reveal chunking in motor planning. In the next two experiments, we degraded typing skill by using a laser-projection keyboard that altered the usual haptic feedback, so that typists had to monitor keystrokes. Neither the number of motor chunks (Experiment 3) nor the number of STM items (Experiment 4) was influenced by the manipulation. The results indicate that the utilization of hierarchical control depends on whether the input allows chunking but not on whether the output is generated automatically. We consider the role of automaticity in hierarchical control of skilled performance. (PsycINFO Database Record (c) 2014 APA, all rights reserved)

  • Distractor devaluation in a flanker task: Object-specific effects without distractor recognition memory.
    Previous research has shown that ignored stimuli are affectively devalued (i.e., distractor devaluation effect). Whereas previous research used feature-based selection tasks to investigate distractor devaluation, we used an object-based paradigm, allowing us to investigate open questions regarding underlying mechanisms. First, by using an object-based paradigm, we expected to find distractor devaluation for specific distractors (in contrast to general effects for certain categories). Second, we expected distractor devaluation in the absence of explicit recall of the to-be-evaluated stimulus’ prior status (e.g., distractor), which is an important and previously untested factor, in order to exclude alternative explanations for distractor devaluation. Third, derived from the devaluation-by-inhibition hypothesis, we predicted that conditions of stronger distractor interference would result in stronger distractor devaluation. These predictions were confirmed in two experiments. We thus provide evidence that distractor devaluation can be a consequence of selective attention processes and that the evaluative consequences of ignoring can be tied to the mental representation of specific distractors. (PsycINFO Database Record (c) 2014 APA, all rights reserved)

  • The visual system’s intrinsic bias influences space perception in the impoverished environment.
    A dimly lit target in the intermediate distance in the dark is judged at the intersection between the target’s projection line from the eye to its physical location and an implicit slanted surface, which is the visual system’s intrinsic bias. We hypothesize that the intrinsic bias also contributes to perceptual space in the impoverished environment. We first showed that a target viewed against sparse texture elements delineating the horizontal ground surface in the dark is localized along an implicit slanted surface that is less slanted than that of the intrinsic bias, reflecting the weighted integration of the weak texture information and intrinsic bias. We also showed that while the judged egocentric locations are similar across exposure durations from 0.15 to 5 s, the judged precision improves with duration. Furthermore, the precision for the judged target angular declination does not vary with the physical angular declination and is better than the precision of the judged eye-to-target distance. Second, we used both action and perceptual tasks to directly reveal the perceived surface slant. Confirming our hypothesis, we found that an L-shaped target on the horizontal ground with sparse texture information is perceived with a slant that is less than that of the intrinsic bias. (PsycINFO Database Record (c) 2014 APA, all rights reserved)

  • The attentional effects of single cues and color singletons on visual sensitivity.
    Sudden changes in the visual periphery can automatically draw attention to their locations. For example, the brief flash of a single object (a “cue”) rapidly enhances contrast sensitivity for subsequent stimuli in its vicinity. Feature singletons (e.g., a red circle among green circles) can also capture attention in a variety of tasks. Here, we evaluate whether a peripheral cue that enhances contrast sensitivity when it appears alone has a similar effect when it appears as a color singleton, with the same stimuli and task. In four experiments we asked observers to report the orientation of a target Gabor stimulus, which was preceded by an uninformative cue array consisting either of a single disk or of 16 disks containing a color or luminance singleton. Accuracy was higher and contrast thresholds lower when the single cue appeared at or near the target’s location, compared with farther away. The color singleton also modulated performance but to a lesser degree and only when it appeared exactly at the target’s location. Thus, this is the first study to demonstrate that cueing by color singletons, like single cues, can enhance sensory signals at an early stage of processing. (PsycINFO Database Record (c) 2014 APA, all rights reserved)

  • Dissociating preview validity and preview difficulty in parafoveal processing of word n + 1 during reading.
    Many studies have shown that previewing the next word n + 1 during reading leads to substantial processing benefit (e.g., shorter word viewing times) when this word is eventually fixated. However, evidence of such preprocessing in fixations on the preceding word n, when the preview information is actually acquired, is far less consistent. A recent study suggested that such effects may be delayed into fixations on the next word n + 1 (Risse & Kliegl, 2012). To investigate how the time course of parafoveal information acquisition affects the control of eye movements during reading, we conducted 2 gaze-contingent display-change experiments and orthogonally manipulated the processing difficulty (i.e., word frequency) of an n + 1 preview word and its validity relative to the target word. Preview difficulty did not affect fixation durations on the pretarget word n but on the target word n + 1. In fact, the delayed preview-difficulty effect was almost of the same size as the preview benefit associated with the n + 1 preview validity. Based on additional results from quantile-regression analyses on the time course of the 2 preview effects, we discuss consequences as to the integration of foveal and parafoveal information and potential implications for computational models of eye guidance in reading. (PsycINFO Database Record (c) 2014 APA, all rights reserved)

  • Differential familiarity effects in amodal completion: Support from behavioral and electrophysiological measurements.
    We studied the effects of learning on amodal completion of partly occluded shapes. Amodal completion may originate from local characteristics of the partly occluded contours, resulting in local completions, or from global characteristics, resulting in global completions. Two classes of occlusion patterns were constructed: convergent occlusion patterns, in which global and local completions resulted in the same shape, and the much more ambiguous divergent occlusion patterns, in which these completions resulted in different shapes. We used a sequential matching paradigm and obtained behavioral responses (Experiments 1 and 2) and electroencephalogram recordings (Experiment 3) to investigate whether previously learned shapes influenced completions of partly occluded shapes. Experiment 1 revealed the preferred completions for both classes of occlusion patterns. In Experiment 2, learning effects were found only for test shapes following divergent occlusion patterns. Experiment 3 showed differential effects with regard to convergent and divergent occlusion patterns on a positive event-related potential in the 150- to 300-ms range, before learning. After learning, modulation of this effect was only found for the divergent occlusion patterns. The results show that amodal completion of shapes can be influenced by a simple learning task when multiple completions of partly occluded shapes are perceptually plausible. (PsycINFO Database Record (c) 2014 APA, all rights reserved)

  • Effects of attention to and awareness of preceding context tones on auditory streaming.
    This study determined whether facilitation of auditory stream segregation could occur when facilitating context tones are accompanied by other sounds. Facilitation was measured as the extent to which a repeated context tone, matching either the low (A) or high (B) frequency of a repeating ABA test sequence, increased the likelihood of hearing the test as segregated. We observed this type of facilitation when matching tones were alone, or with simultaneous bandpass noises or continuous speech, neither of which masked the tones. However, participants showed no streaming facilitation when a harmonic complex masked the context tones. Mistuning or desynchronizing the context tone relative to the rest of the complex did not facilitate streaming, despite the fact that the context tone was accessible to awareness and attention. Even presenting the context tone in a separate ear from the rest of the harmonic complex did not facilitate streaming, ruling out peripheral interference. Presenting the test as mistuned or desynchronized tones relative to complex tones eliminated the possibility that timbre changes from context to test interfered with facilitation resulting from the context. These results demonstrate the fragility of streaming facilitation and show that awareness of and attention to the context tones are not sufficient to overcome interference. (PsycINFO Database Record (c) 2014 APA, all rights reserved)

  • Perceptual animacy: Visual search for chasing objects among distractors.
    Anthropomorphic interactions such as chasing are an important cue to perceptual animacy. A recent study showed that the detection of interacting (e.g., chasing) stimuli follows the regularities of a serial visual search. In the present set of experiments, we explore several variants of the chasing detection paradigm in order to investigate how human observers recognize chasing objects among distractors although there are no distinctive visual features attached to individual objects. Our results indicate that even a spatially separated presentation of potentially chasing pairs of objects requires attention at least for object selection (Experiment 1). In the chasing detection framework, a chase among nonchases is easier to find than a nonchase among chases, suggesting that cues indicating the presence of a chase prevail during chasing detection (Experiment 2). Spatial proximity is one of these cues toward the presence of a chase because decreasing the distance between chasing objects leads to shorter detection latencies (Experiment 3). Finally, our results indicate that single objects provide the basis of chasing detection rather than pairs of objects. Participants appear to search for a single object that is approaching any other object in the display rather than for a pair of objects involved in a chase (Experiments 4 and 5). Taken together, these results suggest that participants recognize a chase by detecting one object that is approaching any of the other objects in the display. (PsycINFO Database Record (c) 2014 APA, all rights reserved)

  • End-state comfort trumps handedness in object manipulation.
    A goal of research on human perception and performance is to explore the relative importance of constraints shaping action selection. The present study concerned the relative importance of two constraints that have not been directly contrasted: (1) the tendency to grasp objects in ways that afford comfortable or easy-to-control final postures; and (2) the tendency to grasp objects with the dominant rather than the nondominant hand. We asked participants to reach out and grasp a horizontal rod whose left or right end was to be placed into a target after a 90° rotation. In one condition, we told participants which hand to use and let them choose an overhand or underhand initial grasp. In another condition, we told participants which grasp to use and let them choose either hand. Participants sacrificed hand preference to perform the task in a way that ensured a comfortable or easy-to-control thumb-up posture at the time of object placement, indicating that comfort trumped handedness. A second experiment confirmed that comfort was indeed higher for thumb-up postures than thumb-down postures. A third experiment confirmed that the choice data could be linked to objective performance differences. The results point to the importance of identifying constraint weightings for action selection and support an account of hand selection that ascribes hand preference to sensitivity to performance differences. The results do not support the hypothesis that hand preference simply reflects a bias to use the dominant hand. (PsycINFO Database Record (c) 2014 APA, all rights reserved)

  • Guiding attention to specific locations by combining symbolic information about direction and distance: Are human observers direction experts?
    Spatial symbols can guide attention to a specific location only when they convey information about both direction and distance. However, the spatial symbols that have been used in previous cuing studies only convey information about direction, but not distance. Consequently, previous studies have only demonstrated that spatial symbols can exert partial control over the guidance of attention to specific locations. The present study investigated whether spatial symbols can also exert a more complete form of control over the guidance of attention to specific locations by presenting symbolic cues that conveyed information about both direction and distance. The effects of each spatial dimension were isolated by varying the spatial validity of each dimension separately. Consistent with the notion of more complete control, the results of 4 experiments showed that observers routinely combined symbolic information about direction and distance to guide their attention to specific locations. Perhaps more importantly, the results also suggested that observers demonstrated greater expertise orienting in response to direction symbols, though this expertise was only observed when these symbols were both familiar and commonly used to orient attention in the outside world. These results extend current theories, and set a new standard for studying symbolic control. (PsycINFO Database Record (c) 2014 APA, all rights reserved)
    Citation link to source

  • Nonverbal communicative signals modulate attention to object properties.
    We investigated whether the social context in which an object is experienced influences the encoding of its various properties. We hypothesized that when an object is observed in a communicative context, its intrinsic features (such as its shape) would be preferentially encoded at the expense of its extrinsic properties (such as its location). In 3 experiments, participants were presented with brief movies, in which an actor either performed a noncommunicative action toward 1 of 5 different meaningless objects, or communicatively pointed at 1 of them. A subsequent static image, in which either the location or the identity of an object changed, tested participants’ attention to these 2 kinds of information. Throughout the 3 experiments we found that communicative cues tended to facilitate identity change detection and to impede location change detection, whereas in the noncommunicative contexts we did not find such a bidirectional effect of cueing. The results also revealed that the effect of the communicative context was a result of the presence of ostensive-communicative signals before the object-directed action, and not of the pointing gesture per se. We propose that such an attentional bias forms an inherent part of human communication and functions to facilitate social learning by communication. (PsycINFO Database Record (c) 2014 APA, all rights reserved)
    Citation link to source

  • When vision influences the invisible distractor: Tactile response compatibility effects require vision.
    Research on the nature of crossmodal interactions between vision and touch has shown that even task-irrelevant visual information can support the processing of tactile targets. In the present study, we implemented a tactile variant of the Eriksen flanker task to investigate the influences of vision on the processing of tactile distractors. In particular, we analyzed whether the size of the flanker effect at the level of perceptual congruency and at the level of response compatibility would differ as a function of the availability of vision (Experiments 1 and 2). Tactile distractors were processed up to the level of response selection only if visual information was provided (i.e., no flanker effects were observed at the level of response compatibility for blindfolded participants). In Experiment 3, we manipulated whether the part of the body receiving the tactile target or distractor was visible, while the other body part was occluded from view. Flanker effects at the level of response compatibility were observed in both conditions, meaning that vision of either the body part receiving the tactile target or the body part receiving the tactile distractor was sufficient to further the processing of tactile distractors from the level of perceptual congruency to the level of response selection. Taken together, these results suggest that vision modulates tactile distractor processing, allowing tactile distractors to be processed up to the level of response selection. (PsycINFO Database Record (c) 2014 APA, all rights reserved)
    Citation link to source

  • Babies in traffic: Infant vocalizations and listener sex modulate auditory motion perception.
    Infant vocalizations and “looming sounds” are classes of environmental stimuli that are critically important to survival but can have dramatically different emotional valences. Here, we simultaneously presented listeners with a stationary infant vocalization and a 3D virtual looming tone for which listeners made auditory time-to-arrival judgments. Negatively valenced infant cries produced more cautious (anticipatory) estimates of auditory arrival time of the tone over a no-vocalization control. Positively valenced laughs had the opposite effect, and across all conditions, men showed smaller anticipatory biases than women. In Experiment 2, vocalization-matched vocoded noise stimuli did not influence concurrent auditory time-to-arrival estimates compared with a control condition. In Experiment 3, listeners estimated the egocentric distance of a looming tone that stopped before arriving. For distant stopping points, women estimated the stopping point as closer when the tone was presented with an infant cry than when it was presented with a laugh. For near stopping points, women showed no differential effect of vocalization type. Men did not show differential effects of vocalization type at either distance. Our results support the idea that both the sex of the listener and the emotional valence of infant vocalizations can influence auditory motion perception and can modulate motor responses to other behaviorally relevant environmental sounds. We also find support for previous work that shows sex differences in emotion processing are diminished under conditions of higher stress. (PsycINFO Database Record (c) 2014 APA, all rights reserved)
    Citation link to source

  • Multiple spatial representations determine touch localization on the fingers.
    Touch location can be specified in different anatomical and external reference frames. Temporal order judgments (TOJs) in touch are known to be sensitive to conflict between reference frames. To establish which coordinates are involved in localizing touch to a finger, participants performed TOJ on tactile stimuli to 2 out of 4 possible fingers. We induced conflict between hand- and finger-related reference frames, as well as between anatomical and external spatial coding, by selectively crossing 2 fingers. TOJ performance was impaired when both stimuli were applied to crossed fingers, indicating conflict between anatomical and external finger coordinates. In addition, TOJs were impaired when stimuli were mapped to the same hand based on either anatomical or external spatial codes. Accordingly, we observed a benefit rather than impairment with finger crossing when both stimuli were applied to 1 hand. Complementarily, participants systematically mislocalized touch to nonstimulated fingers of the targeted hand. The results indicate that touch localization for the fingers involves integration of several sources of spatial information: the anatomical location of the touched finger, its position in external space, the stimulated hand, and the hand to which the touch is (re)mapped in external space. (PsycINFO Database Record (c) 2014 APA, all rights reserved)
    Citation link to source

  • Misjudgment of direction contributes to curvature in movements toward haptically defined targets.
    The trajectories of arm movements toward visually defined targets are curved, even if participants try to move in a straight line. A factor contributing to this curvature may be that participants systematically misjudge the direction to the target, and try to achieve a straight path by always moving in the perceived direction of the target. If so, the relation between perception of direction and initial movement direction should not only be present for movements toward visually defined targets, but also when making movements toward haptically defined targets. To test whether this is so, we compared errors in the initial movement direction when moving as straight as possible toward haptically defined targets with errors in a pointer setting task toward the same targets. We found a modest correlation between perception of direction and initial movement direction for movements toward haptically defined targets. The amount of correlation depended on the geometry of the task. (PsycINFO Database Record (c) 2014 APA, all rights reserved)
    Citation link to source

  • Do masked orthographic neighbor primes facilitate or inhibit the processing of Kanji compound words?
    In the masked priming paradigm, when a word target is primed by a higher frequency neighbor (e.g., blue–BLUR), lexical decision latencies are slower than when the same word is primed by an unrelated word of equivalent frequency (e.g., care–BLUR). This inhibitory neighbor priming effect (e.g., Davis & Lupker, 2006; Segui & Grainger, 1990) is taken as evidence for the lexical competition process that is an important component of localist activation-based models of visual word recognition (Davis, 2003; Grainger & Jacobs, 1996; McClelland & Rumelhart, 1981). The present research looked for evidence of an inhibitory neighbor priming effect using words written in Japanese Kanji, a logographic, nonalphabetic script. In 4 experiments (Experiments 1A, 1B, 3A, and 3B), inhibitory neighbor priming effects were observed for low-frequency targets primed by higher frequency Kanji word neighbors (情報-情緒). In contrast, there was a significant facilitation effect when targets were primed by Kanji nonword neighbors (情門-情緒; Experiments 2 and 3). Significant facilitation was also observed when targets were primed by single constituent Kanji characters (情-情緒; Experiment 4). Taken together, these results suggest that lexical competition plays a role in the recognition of Kanji words, just as it does for words in alphabetic languages. However, in Kanji, and likely in other logographic languages, the effect of lexical competition appears to be counteracted by facilitatory morphological priming due to the repetition of a morphological unit in the prime and target (i.e., in Kanji, each character represents a morpheme). (PsycINFO Database Record (c) 2014 APA, all rights reserved)
    Citation link to source

  • Learned reward association improves visual working memory.
    Statistical regularities in the natural environment play a central role in adaptive behavior. Among other regularities, reward association is potentially the most prominent factor that influences our daily life. Recent studies have suggested that pre-established reward association yields strong influence on the spatial allocation of attention. Here we show that reward association can also improve visual working memory (VWM) performance when the reward-associated feature is task-irrelevant. We established the reward association during a visual search training session, and investigated the representation of reward-associated features in VWM by the application of a change detection task before and after the training. The results showed that the improvement in VWM was significantly greater for items in the color associated with high reward than for those in low reward-associated or nonrewarded colors. In particular, the results from control experiments demonstrate that the observed reward effect in VWM could not be sufficiently accounted for by attentional capture toward the high reward-associated item. This was further confirmed when the effect of attentional capture was minimized by presenting the items in the sample and test displays of the change detection task with the same color. The results showed significantly larger improvement in VWM performance when the items in a display were in the high reward-associated color than those in the low reward-associated or nonrewarded colors. Our findings suggest that, apart from inducing space-based attentional capture, the learned reward association could also facilitate the perceptual representation of high reward-associated items through feature-based attentional modulation. (PsycINFO Database Record (c) 2014 APA, all rights reserved)
    Citation link to source

  • Temporal integration of consecutive tones into synthetic vowels demonstrates perceptual assembly in audition.
    Temporal integration is the perceptual process combining sensory stimulation over time into longer percepts that can span over 10 times the duration of a minimally detectable stimulus. Particularly in the auditory domain, such “long-term” temporal integration has been characterized as a relatively simple function that acts chiefly to bridge brief input gaps, and which places integrated stimuli on temporal coordinates while preserving their temporal order information. These properties are not observed in visual temporal integration, suggesting they might be modality specific. The present study challenges that view. Participants were presented with rapid series of successive tone stimuli, in which two separate, deviant target tones were to be identified. Critically, the target tone pair would be perceived as a single synthetic vowel if they were interpreted to be simultaneous. During the task, even though the targets were always sequential and never actually overlapped, listeners frequently reported hearing just one sound, the synthetic vowel, rather than two successive tones. The results demonstrate that auditory temporal integration, like its visual counterpart, truly assembles a percept from sensory inputs across time, and does not just summate time-ordered (identical) inputs or fill gaps therein. This finding supports the idea that temporal integration is a universal function of the human perceptual system. (PsycINFO Database Record (c) 2014 APA, all rights reserved)
    Citation link to source

  • Measuring psychometric functions with the diffusion model.
    The diffusion decision model (Ratcliff, 1978) was used to examine discrimination for a range of perceptual tasks: numerosity discrimination, number discrimination, brightness discrimination, motion discrimination, speed discrimination, and length discrimination. The model produces a measure of the quality of the information that drives decision processes, a measure termed drift rate in the model. As drift rate varies across experimental conditions that differ in difficulty, a psychometric function that plots drift rate against difficulty can be constructed. Psychometric functions for the tasks in this article usually plot accuracy against difficulty, but for some levels of difficulty, accuracy can be at ceiling. The diffusion model extends the range of difficulty that can be evaluated because drift rates depend on response times (RTs) as well as accuracy, and when RTs decrease across conditions that are all at ceiling in accuracy, then drift rates will distinguish among the conditions. Signal detection theory assumes that the variable driving performance is the z-transform of the accuracy value, and, somewhat surprisingly, this closely matches drift rate extracted from the diffusion model when accuracy is not at ceiling, but sometimes not when accuracy is high. Even though the functions are similar in the middle of the range, the interpretations of the variability in the models (e.g., perceptual variability, decision process variability) are incompatible. (PsycINFO Database Record (c) 2014 APA, all rights reserved)
    Citation link to source
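    The comparison described in this abstract can be illustrated with a minimal sketch (not the authors' code). It uses the standard closed-form accuracy of a simple symmetric Wiener diffusion process, P(correct) = 1 / (1 + exp(-a·v/s²)), where v is drift rate, a is boundary separation, and s is the diffusion coefficient, alongside the signal-detection z-transform of accuracy; the parameter values are illustrative assumptions:

    ```python
    from statistics import NormalDist
    import math

    def p_correct(drift, boundary=1.0, noise=1.0):
        # Closed-form accuracy of a symmetric (unbiased start point) Wiener
        # diffusion process: P(correct) = 1 / (1 + exp(-a*v/s^2)).
        return 1.0 / (1.0 + math.exp(-boundary * drift / noise ** 2))

    def z_accuracy(p):
        # Signal-detection-style z-transform of an accuracy value:
        # the inverse standard normal CDF.
        return NormalDist().inv_cdf(p)

    # Compare drift rate against z-transformed accuracy across difficulty
    # levels (here, illustrative drift rates from hard to easy).
    for v in [0.2, 0.5, 1.0, 2.0, 4.0]:
        p = p_correct(v)
        print(f"drift={v:4.1f}  accuracy={p:.3f}  z(accuracy)={z_accuracy(p):+.3f}")
    ```

    As the printout shows, z(accuracy) grows roughly in step with drift rate at mid-range accuracies but compresses the differences between easy conditions as accuracy approaches ceiling, which is where RT-informed drift rates retain their diagnostic value.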

  • The influence of object height on maximum grip aperture in empirical and modeled data.
    During a grasping movement, the maximum grip aperture (MGA) is almost linearly scaled to the dimension of the target along which it is grasped. There is still a surprising uncertainty concerning the influence of the other target dimensions on the MGA. We asked healthy participants to grasp cuboids with their thumb and index finger, always along the object’s width. Independently of variations in object width, we systematically varied the height and depth of these target objects. We found that taller objects were generally grasped with a larger MGA. At the same time, the slope of the regression of MGA on object width decreased with increasing target height. In contrast, we found no effect of varying target depth on the MGA. Simulating these movements with a grasping model that implemented the objective of avoiding contact of the digits with the target object at positions other than the goal positions yielded larger effects of target height than of target depth on MGA. We concluded that MGA depends not only on the dimension of the target object along which it is grasped; furthermore, the effects of the other 2 dimensions are considerably different. This pattern of results can partially be explained by the aim to avoid contacting the target object at positions other than the goal positions. (PsycINFO Database Record (c) 2014 APA, all rights reserved)
    Citation link to source
