PsyResearch: Psychology Research on the Web



Journal of Experimental Psychology: Learning, Memory, and Cognition - Vol 50, Iss 5

The Journal of Experimental Psychology: Learning, Memory, and Cognition publishes original experimental studies on basic processes of cognition, learning, memory, imagery, concept formation, problem solving, decision making, thinking, reading, and language processing.
Copyright 2024 American Psychological Association
  • Attention-based rehearsal: Eye movements reveal how visuospatial information is maintained in working memory.
    The human eye scans visual information along scan paths, that is, series of successive fixations. By analogy with these scan paths during actual “seeing,” we investigated whether similar scan paths are also observed while subjects are “rehearsing” stimuli in visuospatial working memory. Participants performed a continuous recall task in which they rehearsed the precise location and color of three serially presented discs during a retention interval, and later reproduced either the precise location or the color of a single probed item. In two experiments, we varied the direction along which the items were presented and investigated whether scan paths during rehearsal followed the pattern of stimulus presentation during encoding (left-to-right in Experiment 1; left-to-right/right-to-left in Experiment 2). In both experiments, we confirmed that the eyes follow similar scan paths during encoding and rehearsal. Specifically, we observed that during rehearsal participants refixated the memorized locations they saw during encoding. Most interestingly, the precision with which these locations were refixated was associated with smaller recall errors. Assuming that eye position reflects the focus of attention, our findings suggest a functional contribution of spatial attention shifts to working memory and are in line with the hypothesis that maintenance of information in visuospatial working memory is supported by attention-based rehearsal. (PsycInfo Database Record (c) 2024 APA, all rights reserved)

  • Environmental regularities mitigate attentional misguidance in contextual cueing of visual search.
    Visual search is faster when a fixed target location is paired with a spatially invariant (vs. randomly changing) distractor configuration, thus indicating that repeated contexts are learned, thereby guiding attention to the target (contextual cueing [CC]). Evidence for memory-guided attention has also been revealed with electrophysiological (electroencephalographic [EEG]) recordings, starting with an enhanced early posterior negativity (N1pc), which signals a preattentive bias toward the target, and, subsequently, attentional and postselective components, such as the posterior contralateral negativity (PCN) and contralateral delay activity (CDA), respectively. Despite effective learning, relearning of previously acquired contexts is inflexible: The CC benefits disappear when the target is relocated to a new position within an otherwise invariant context and corresponding EEG correlates are diminished. The present study tested whether global statistical properties that induce predictions going beyond the immediate invariant layout can facilitate contextual relearning. Global statistical regularities were implemented by presenting repeated and nonrepeated displays in separate streaks (mini blocks) of trials in the relocation phase, with individual displays being presented in a fixed and thus predictable order. Our results revealed a significant CC effect (and an associated modulation of the N1pc, PCN, and CDA components) during initial learning. Critically, the global statistical regularities in the relocation phase also resulted in a reliable CC effect, thus revealing effective relearning with predictive streaks. Moreover, this relearning was reflected in an enhanced PCN amplitude for repeated relative to nonrepeated contexts. Temporally ordered contexts may thus adapt memory-based guidance of attention, particularly the allocation of covert attention in the visual display. (PsycInfo Database Record (c) 2024 APA, all rights reserved)

  • Positional encoding of morphemes in visual word recognition.
    Reading morphologically complex words requires analysis of their morphemic subunits (e.g., play + er); however, the positional constraints of morphemic processing are still little understood. The current study involved three unprimed lexical decision experiments to directly compare the positional encoding of stems and affixes during reading and to investigate the role of semantics during the position encoding of morphemes. Experiment 1 revealed that transposed compound words were harder to reject than their controls (e.g., dreamday vs. shadeday), whereas there was no difference between transposed suffixed words and their controls (e.g., fulpain vs. adepain). Experiment 2 replicated the results for transposed compound words of the first experiment and further showed that there was no difference between transposed prefixed words and their controls (e.g., qualifydis vs. qualifymis). Experiment 3 investigated the role of semantic transparency in morpheme transposition effects and revealed a larger morpheme transposition effect for semantically transparent transposed compound words (e.g., cuptea vs. taptea) than for semantically opaque transposed compound words (e.g., linedead vs. deskdead). These results bring to light important differences in the positional encoding of stems and affixes, suggesting that prefixes and suffixes are recognized in a position-dependent manner compared to the position-independent encoding of embedded stems and that morpheme transposition effects are guided by semantics. The current findings call for more clearly specified theoretical models of visual word recognition that reflect the distinct positional constraints of stems and affixes, as well as the influence of semantics on morphological processing. (PsycInfo Database Record (c) 2024 APA, all rights reserved)

  • Testing expectations and retrieval practice modulate repetition learning of visuospatial arrays.
    One of the best-known demonstrations of long-term learning through repetition is the Hebb effect: Immediate recall of a memory list repeated amidst nonrepeated lists improves steadily with repetitions. However, previous studies often failed to observe this effect for visuospatial arrays. Souza and Oberauer (2022) showed that the strongest determinant for producing learning was the difficulty of the test: Learning was consistently observed when participants recalled all items of a visuospatial array (difficult test) but not if only one item was recalled, or recognition procedures were used (less difficult tests). This suggests that long-term learning was promoted by increased testing demands over the short term. Alternatively, it is possible that lower testing demands still lead to learning but prevented the application of what was learned. In four preregistered experiments (N = 981), we ruled out this alternative explanation: Changing the type of memory test midway through the experiment from less demanding (i.e., single item recall or recognition) to a more demanding test (i.e., full item recall) did not reveal hidden learning, and changing it from the more demanding to a less demanding test did not conceal learning. Mixing high and low demanding tests for nonrepeated arrays, however, eventually produced Hebb learning even for the less demanding testing conditions. We propose that testing affects long-term learning in two ways: Expectations of the test difficulty influence how information is encoded into memory, and retrieval consolidates this information in memory. (PsycInfo Database Record (c) 2024 APA, all rights reserved)

  • Hebb repetition effects in complex and simple span tasks are based on the same learning mechanism.
    The Hebb repetition effect shows improvement in serial recall of repeated lists compared to random nonrepeated lists. Previous research using simple span tasks found that the Hebb repetition effect is limited to constant uninterrupted lists, suggesting chunking as the mechanism of list learning. However, the Hebb repetition effect has been found in complex span tasks, which challenges the chunking explanation, as successive list items are separated by distractor processing, possibly interfering with the unified representations. We tested the possibility that Hebb repetition learning arises from chunking in simple span, but from position–item associations in complex span. In a series of five experiments, we found evidence that contradicts that hypothesis. Results show that (a) Hebb repetition learning in a complex span task can be transferred to a simple span task; (b) Hebb repetition learning from a complex span task cannot be transferred to a partially repeated simple span task; (c) partial repetition in a complex span task does not lead to learning; (d) Hebb repetition learning from a simple span task can be transferred to a complex span task; and (e) repeating the distractors in complex span has no impact on the Hebb repetition effect. These results suggest that the same mechanism underlies the Hebb repetition effect in simple and complex span tasks and point to the creation of chunks that exclude the distractors from the long-term memory representation. (PsycInfo Database Record (c) 2024 APA, all rights reserved)

  • Transfer of task-probability-induced biases in parallel dual-task processing occurs in similar, but is constrained in distinct task sets.
    Although humans often multitask, little is known about how the processing of concurrent tasks is managed. The present study investigated whether adjustments in parallel processing during multitasking are local (task-specific) or global (task-unspecific). In three experiments, participants performed one of three tasks: a primary task or, if this task did not require a response, one of two background tasks (i.e., prioritized processing paradigm). To manipulate the degree of parallel processing, we presented blocks consisting mainly of primary or background task trials. In Experiment 1, the frequency manipulation was distributed equally across the two background tasks. In Experiments 2 and 3, only one background task was frequency-biased (inducer task). The other background task was presented equally often in all blocks (diagnostic task) and served to test whether processing adjustments transferred. In all experiments, blocks with frequent background tasks yielded stronger interference between primary and background tasks (primary task performance) and improved background task performance. Thus, resource sharing appeared to increase with high background task probabilities even under triple task requirements. Importantly, these adjustments generalized across the background tasks when they were conceptually and visually similar (Experiment 2). Implementing more distinct background tasks limited the transfer: Adjustments were restricted to the inducer task in background task performance and only small transfer was observed in primary task performance (Experiment 3). Overall, the results indicate that the transfer of adjustments in parallel processing is unrestricted for similar, but limited for distinct tasks, suggesting that task similarity affects the generality of resource allocation in multitasking. (PsycInfo Database Record (c) 2024 APA, all rights reserved)

  • How statistical correlations influence discourse-level processing: Clause type as a cue for discourse relations.
    Linguistic phenomena (e.g., words and syntactic structure) co-occur with a wide variety of meanings. These systematic correlations can help readers to interpret a text and create predictions about upcoming material. However, to what extent these correlations influence discourse processing is still unknown. We address this question by examining whether clause type serves as a cue for discourse relations. We found that the co-occurrence of gerund-free adjuncts and specific discourse relations found in natural language is also reflected in readers’ offline expectations for discourse relations. However, we also found that clause structure neither facilitated the online processing of these discourse relations nor led readers to prefer these relations in a paraphrase selection task. The present research extends previous research on discourse relation processing, which mostly focused on lexical cues, by examining the role of non-semantic cues. We show that readers are aware of correlations between clause structure and discourse relations in natural language, but that, unlike what has been found for lexical cues, this information does not seem to influence online processing and discourse interpretation. (PsycInfo Database Record (c) 2024 APA, all rights reserved)

  • A corpus-based examination of scalar diversity.
    The phenomenon of scalar diversity refers to the well-replicated finding that different scalar expressions give rise to scalar implicatures (SIs) at different rates. Previous work has shown that part of the scalar diversity effect can be explained by theoretically motivated factors. Although the effect has been established only in controlled experiments using manually constructed stimuli, there has been a tendency to assume that the marked differences in inference rates that have been observed reflect differences to be found in naturally occurring discourse. We explore whether this is the case by sampling actual language usage involving a wide range of scalar expressions. Adopting the approach in Degen (2015), we investigated the scalar diversity effect in a corpus of Twitter data we constructed. We find that the phenomenon of scalar diversity attenuates significantly when measured in a corpus-based paraphrase task. Although the degree of “scalar diversity” varies, we find that factors derived from theories of SI can explain nearly two-thirds of the variation. This remains the case whether the variation is observed in controlled experiments or in the context of natural language use. As for the remaining variation, we hypothesize that it may be due to a high level of uncertainty about whether adjectival scalar expressions should undergo scalar enrichment. (PsycInfo Database Record (c) 2024 APA, all rights reserved)

  • Beyond quantity of experience: Exploring the role of semantic consistency in Chinese character knowledge.
    Most printed Chinese words are compounds built from the combination of meaningful characters. Yet, there is a poor understanding of how individual characters contribute to the recognition of compounds. Using a megastudy of Chinese word recognition (Tse et al., 2017), we examined how the lexical decision of existing and novel Chinese compounds was influenced by two properties of individual characters: family size (the number of distinct words that embed a character) and family semantic consistency (the average semantic relatedness between a character and all words containing it). Results revealed that both variables influence word and nonword processing: Words are recognized more quickly and accurately when they contain characters that occur frequently across different words and that make consistent meaningful contributions to those words, while nonwords containing those types of characters are rejected more slowly. These findings suggest that the learning of individual characters is based not only on the quantity of experience with them but also on the reliability of the semantic information they communicate. In addition, readers are able to generalize character knowledge acquired from previous word experiences to their daily encounters with familiar and unfamiliar words. We close by discussing how word experience shapes character knowledge when different ways of calculating family properties are considered. (PsycInfo Database Record (c) 2024 APA, all rights reserved)

  • It is not all about you: Communicative cooperation is determined by your partner’s theory of mind abilities as well as your own.
    We investigated the relationship between Theory of Mind (ToM) and communicative cooperation. Specifically, we examined whether communicative cooperation is affected by the ToM ability of one’s cooperative partner as well as their own. ToM is the attribution of mental states to oneself and others; cooperation is the joint action that leads to achieving a shared goal. We measured cooperation using a novel communicative cooperation game completed by participants in pairs. ToM was measured via the Movies for Assessment of Social Cognition (MASC) task and fluid intelligence via the Raven task. Findings from 350 adults show that ToM scores of both players were predictors of cooperative failure, whereas Raven scores were not. Furthermore, participants were split into low- and high-ToM groups through a median split of the MASC scores: high-ToM individuals committed significantly fewer cooperative errors compared to their low-ToM counterparts. Therefore, we found a direct relationship between ToM and cooperation. Interestingly, we also examined how ToM scores of paired participants determine cooperation. We found that pairs with two high-ToM individuals committed significantly fewer errors compared to pairs with two low-ToM individuals. We speculate that reduced cooperation in low–low ToM pairs is a result of less efficient development of conceptual alignment and recovery from misalignment, compared to high–high ToM dyads. For the first time, we thus demonstrate that it is not all about you; both cooperative partners make key, independent contributions to cooperative outcomes. (PsycInfo Database Record (c) 2024 APA, all rights reserved)


