PsyResearch
ψ   Psychology Research on the Web   



Psychological Review - Vol 131, Iss 3

Psychological Review publishes articles that make important theoretical contributions to any area of scientific psychology.
Copyright 2024 American Psychological Association
  • Probabilistic origins of compositional mental representations.
    The representation of complex phenomena via combinations of simple discrete features is a hallmark of human cognition. But it is not clear exactly how (or whether) discrete features can effectively represent the complex probabilistic fabric of the environment. This article introduces information-theoretic tools for quantifying the fidelity and efficiency of a featural representation with respect to a probability model. In this framework, a feature or combination of features is “faithful” to the extent that knowing the value of the features reduces uncertainty about the true state of the world. In a single dimension, a discrete feature is faithful if the values of the feature correspond isomorphically to distinct classes in the probability model. But in multiple dimensions, the situation is more complicated: The fidelity of each feature depends on the direction in multidimensional feature space in which the feature is projected from the underlying distribution. More interestingly, distributions may be more effectively represented by combinations of projected features—that is, compositionality. For any given distribution, a variety of compositional forms (features and combination rules) are possible, which can be quite different from one another, entailing different degrees of fidelity, different numbers of features, and even different induced regularities. This article proposes three specific criteria for a compositional representation: fidelity, simplicity, and robustness. The information-theoretic framework introduces a new and potentially useful way to look at the problem of compositionality in human mental representation. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
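The abstract's notion of fidelity — a feature is "faithful" to the extent that knowing its value reduces uncertainty about the true state of the world — corresponds to mutual information between feature and state. A minimal sketch of that measure, with a toy two-state world (the distributions and the `rain`/`sun` labels are illustrative assumptions, not from the article):

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a probability vector."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def mutual_information(joint):
    """I(S; F) = H(S) - H(S | F) for a joint distribution p(state, feature)."""
    states = sorted({s for s, f in joint})
    feats = sorted({f for s, f in joint})
    p_s = [sum(joint.get((s, f), 0.0) for f in feats) for s in states]
    p_f = [sum(joint.get((s, f), 0.0) for s in states) for f in feats]
    h_s_given_f = 0.0
    for j, f in enumerate(feats):
        if p_f[j] == 0:
            continue
        cond = [joint.get((s, f), 0.0) / p_f[j] for s in states]
        h_s_given_f += p_f[j] * entropy(cond)
    return entropy(p_s) - h_s_given_f

# A perfectly faithful binary feature: its values map isomorphically
# onto the two world states.
faithful = {("rain", 1): 0.5, ("sun", 0): 0.5}
# A feature statistically independent of the state: knowing it
# reduces no uncertainty.
useless = {("rain", 1): 0.25, ("rain", 0): 0.25,
           ("sun", 1): 0.25, ("sun", 0): 0.25}

print(mutual_information(faithful))  # 1.0 bit
print(mutual_information(useless))   # 0.0 bits
```

A compositional representation would then be scored by how much uncertainty a *combination* of such features removes, relative to how many features it needs.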

  • A signal detection–based confidence–similarity model of face matching.
    Face matching is the ability to decide whether two face images (or more) belong to the same person or to different identities. Face matching is crucial for efficient face recognition and plays an important role in applied settings such as passport control and eyewitness memory. However, despite extensive research, the mechanisms that govern face-matching performance are still not well understood. Moreover, to date, many researchers hold on to the belief that match and mismatch conditions are governed by two separate systems, an assumption that likely thwarted the development of a unified model of face matching. The present study outlines a unified unequal variance confidence–similarity signal detection–based model of face-matching performance, one that facilitates the use of receiver operating characteristics (ROC) and confidence–accuracy plots to better understand the relations between match and mismatch conditions, and their relations to factors of confidence and similarity. A binomial feature-matching mechanism is developed to support this signal detection model. The model can account for the presence of both within-identities and between-identities sources of variation in face recognition and explains a myriad of face-matching phenomena, including the match–mismatch dissociation. The model is also capable of generating new predictions concerning the role of confidence and similarity and their intricate relations with accuracy. The new model was tested against six alternative competing models (some postulate discrete rather than continuous representations) in three experiments. Data analyses consisted of hierarchically nested model fitting, ROC curve analyses, and confidence–accuracy plot analyses. All of these provided substantial support for the signal detection–based confidence–similarity model. The model suggests that the accuracy of face-matching performance can be predicted by the degree of similarity/dissimilarity of the depicted faces and the level of confidence in the decision. Moreover, according to the model, confidence and similarity ratings are strongly correlated. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
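The core of an unequal-variance signal detection account can be sketched in a few lines: "same" pairs generate similarity signals with a higher mean and larger variance than "different" pairs, and sweeping a decision criterion over those signals traces out the ROC. The specific means and variances below are illustrative assumptions, not the article's fitted parameters:

```python
import random

random.seed(1)

def simulate_trials(n, d_prime=1.5, sigma_match=1.25):
    """Unequal-variance SDT sketch: match trials draw similarity from
    N(d', sigma_match); mismatch trials from N(0, 1)."""
    same = [random.gauss(d_prime, sigma_match) for _ in range(n)]
    diff = [random.gauss(0.0, 1.0) for _ in range(n)]
    return same, diff

def roc_points(same, diff, criteria):
    """Hit rate vs. false-alarm rate as the 'same' criterion is swept.
    Confidence can be read as distance of the signal from the criterion."""
    points = []
    for c in criteria:
        hit = sum(s > c for s in same) / len(same)
        fa = sum(d > c for d in diff) / len(diff)
        points.append((fa, hit))
    return points

same, diff = simulate_trials(10_000)
for fa, hit in roc_points(same, diff, [-1, 0, 1, 2]):
    print(f"FA = {fa:.2f}  Hit = {hit:.2f}")
```

Plotting hit rate against false-alarm rate (or their z-transforms) yields the curved, asymmetric ROC characteristic of unequal variances.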

  • Processing speed and executive attention as causes of intelligence.
    Individual differences in processing speed and executive attention have both been proposed as explanations for individual differences in cognitive ability, particularly general and fluid intelligence (Engle et al., 1999; Kail & Salthouse, 1994). Both constructs have long intellectual histories in scientific psychology. This article attempts to describe the historical development of these constructs, particularly as they pertain to intelligence. It also aims to determine the degree to which speed and executive attention are theoretical competitors in explaining individual differences in intelligence. We suggest that attention is the more fundamental mechanism in explaining variation in human intelligence. (PsycInfo Database Record (c) 2024 APA, all rights reserved)

  • A maturational frequency discrimination deficit may explain developmental language disorder.
    Auditory perceptual deficits are widely observed among children with developmental language disorder (DLD). Yet, the nature of these deficits and the extent to which they explain speech and language problems remain controversial. In this study, we hypothesize that disruption to the maturation of the basilar membrane may impede the optimization of the auditory pathway from brainstem to cortex, curtailing high-resolution frequency sensitivity and the efficient spectral decomposition and encoding of natural speech. A series of computational simulations involving deep convolutional neural networks that were trained to encode, recognize, and retrieve naturalistic speech are presented to demonstrate the strength of this account. These neural networks were built on top of biologically truthful inner ear models developed to model human cochlea function, which—in the key innovation of the present study—were scheduled to mature at different rates over time. Delaying cochlea maturation qualitatively replicated the linguistic behavior and neurophysiology of individuals with language learning difficulties in a number of ways, resulting in (a) delayed language acquisition profiles, (b) lower spoken word recognition accuracy, (c) word finding and retrieval difficulties, (d) “fuzzy” and intersecting speech encodings and signatures of immature neural optimization, and (e) emergent working memory and attentional deficits. These simulations illustrate many negative cascading effects that a primary maturational frequency discrimination deficit may have on early language development and generate precise and testable hypotheses for future research into the nature and cost of auditory processing deficits in children with language learning difficulties. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
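The proposed mechanism — coarse frequency resolution collapsing nearby speech sounds into overlapping encodings — can be illustrated with a toy quantizer. This is a drastic simplification of the article's cochlea-plus-network simulations; the frequencies and resolution values are hypothetical:

```python
def encode(freq_hz, resolution_hz):
    """Toy cochlear channel assignment: a coarser frequency resolution
    maps nearby frequencies onto the same channel, standing in for the
    'fuzzy' encodings produced by delayed cochlear maturation."""
    return round(freq_hz / resolution_hz)

# Two vowel-like tokens 50 Hz apart in a formant dimension
mature = (encode(410, 50), encode(460, 50))     # fine resolution
delayed = (encode(410, 200), encode(460, 200))  # coarse resolution

print(mature, mature[0] != mature[1])    # distinct channels: discriminable
print(delayed, delayed[0] == delayed[1])  # collapsed: the tokens intersect
```

In the full simulations, such collapsed encodings at the front end cascade into slower word learning, poorer recognition, and retrieval failures downstream.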

  • The violation-of-expectation paradigm: A conceptual overview.
    For over 35 years, the violation-of-expectation paradigm has been used to study the development of expectations in the first 3 years of life. A wide range of expectations has been examined, including physical, psychological, sociomoral, biological, numerical, statistical, probabilistic, and linguistic expectations. Surprisingly, despite the paradigm’s widespread use and the many seminal findings it has contributed to psychological science, so far no one has tried to provide a detailed and in-depth conceptual overview of the paradigm. Here, we attempted to do just that. We first focus on the rationale of the paradigm and discuss how it has evolved over time. We then show how improved descriptions of infants’ looking behavior, together with the addition of a rich panoply of brain and behavioral measures, have helped deepen our understanding of infants’ responses to violations. Next, we review the paradigm’s strengths and limitations. Finally, we end with a discussion of challenges that have been leveled against the paradigm over the years. Through it all, our goal was twofold. First, we sought to provide psychologists and other scientists interested in the paradigm with an informed and constructive analysis of its theoretical origins and development. Second, we wanted to take stock of what the paradigm has revealed to date about how infants reason about events, and about how surprise at unexpected events, in or out of the laboratory, can lead to learning, by prompting infants to revise their working model of the world. (PsycInfo Database Record (c) 2024 APA, all rights reserved)

  • A social inference model of idealization and devaluation.
    People often form polarized beliefs, imbuing objects (e.g., themselves or others) with unambiguously positive or negative qualities. In clinical settings, this is referred to as dichotomous thinking or “splitting” and is a feature of several psychiatric disorders. Here, we introduce a Bayesian model of splitting that parameterizes a tendency to rigidly categorize objects as either entirely “Bad” or “Good,” rather than to flexibly learn dispositions along a continuous scale. Distinct from the previous descriptive theories, the model makes quantitative predictions about how dichotomous beliefs emerge and are updated in light of new information. Specifically, the model addresses how splitting is context-dependent, yet exhibits stability across time. A key model feature is that phases of devaluation and/or idealization are consolidated by rationally attributing counter-evidence to external factors. For example, when another person is idealized, their less-than-perfect behavior is attributed to unfavorable external circumstances. However, sufficient counter-evidence can trigger switches of polarity, producing bistable dynamics. We show that the model can be fitted to empirical data, to measure individual susceptibility to relational instability. For example, we find that a latent categorical belief that others are “Good” accounts for less changeable, and more certain, character impressions of benevolent as opposed to malevolent others among healthy participants. By comparison, character impressions made by participants with borderline personality disorder reveal significantly higher and more symmetric splitting. The generative framework proposed invites applications for modeling oscillatory relational and affective dynamics in psychotherapeutic contexts. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
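The dynamics the abstract describes — counter-evidence rationally attributed to external factors, stabilizing the current polarity until enough of it accumulates to trigger a switch — can be sketched as a Bayesian update with tempered counter-evidence. The likelihoods and the discounting rule below are illustrative assumptions, not the article's fitted model:

```python
def update(p_good, behavior_positive,
           lik_pos_given_good=0.9, lik_pos_given_bad=0.2,
           external_discount=0.5):
    """One Bayesian update of the belief that the other is 'Good'.
    Evidence contradicting the currently dominant belief is partly
    attributed to external circumstances, modeled here (hypothetically)
    by tempering its likelihoods toward 1."""
    lik_good = lik_pos_given_good if behavior_positive else 1 - lik_pos_given_good
    lik_bad = lik_pos_given_bad if behavior_positive else 1 - lik_pos_given_bad
    is_counter = (p_good > 0.5) != behavior_positive
    if is_counter:
        lik_good **= external_discount  # raising p < 1 to a power < 1
        lik_bad **= external_discount   # weakens the evidence
    post = p_good * lik_good
    post /= post + (1 - p_good) * lik_bad
    return post

p = 0.9  # idealized: the other is strongly believed 'Good'
history = []
for positive in [False] * 6:  # a run of negative behaviors
    p = update(p, positive)
    history.append(round(p, 3))
print(history)  # belief resists at first, then flips polarity
```

Early negative acts are absorbed (the belief stays above 0.5), but once the posterior crosses the midpoint the discounting flips sides and the devalued belief consolidates just as rigidly — the bistable switching the model is built around.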

  • Optimal metacognitive control of memory recall.
    Most of us have experienced moments when we could not recall some piece of information but felt that it was just out of reach. Research in metamemory has established that such judgments are often accurate; but what adaptive purpose do they serve? Here, we present an optimal model of how metacognitive monitoring (feeling of knowing) could dynamically inform metacognitive control of memory (the direction of retrieval efforts). In two experiments, we find that, consistent with the optimal model, people report having a stronger memory for targets they are likely to recall and direct their search efforts accordingly, cutting off the search when it is unlikely to succeed and prioritizing the search for stronger memories. Our results suggest that metamemory is indeed adaptive and motivate the development of process-level theories that account for the dynamic interplay between monitoring and control. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
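The control policy described — cut off the search when it is unlikely to succeed, persist when the feeling of knowing is strong — is essentially an optimal-stopping rule: keep retrieving while the expected gain of one more step exceeds its cost. A minimal sketch with hypothetical values and costs (not the article's fitted parameters):

```python
def should_continue(p_recall, value_of_recall=10.0, cost_per_step=1.0):
    """Myopic optimal-stopping rule: another retrieval step is worth
    taking only while its expected payoff exceeds its cost."""
    return p_recall * value_of_recall > cost_per_step

def search(feeling_of_knowing, decay=0.7, value=10.0, cost=1.0, max_steps=20):
    """Feeling of knowing sets the initial recall probability; each
    failed step lowers it, until the search is no longer worthwhile."""
    p = feeling_of_knowing
    steps = 0
    while steps < max_steps and should_continue(p, value, cost):
        steps += 1
        p *= decay  # each failure makes eventual success look less likely
    return steps

print(search(0.9))   # strong feeling of knowing: a long search
print(search(0.15))  # weak feeling of knowing: cut off almost immediately
```

This reproduces the qualitative pattern in the experiments: search effort tracks the monitored memory strength, with early termination for targets judged unlikely to be recalled.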

  • One thought too few: An adaptive rationale for punishing negligence.
    Why do we punish negligence? Some current accounts raise the possibility that it can be explained by the kinds of processes that lead us to punish ordinary harmful acts, such as outcome bias, character inference, or antecedent deliberative choices. Although they capture many important cases, these explanations fail to account for others. We argue that, in addition to these phenomena, there is something unique to the punishment of negligence itself: People hold others directly responsible for the basic fact of failing to bring to mind information that would help them to avoid important risks. In other words, we propose that at its heart negligence is a failure of thought. Drawing on the current literature in moral psychology, we suggest that people find it natural to punish such failures, even when they do not arise from conscious, volitional choice. This raises a question: Why punish somebody for a mental event they did not exercise deliberative control over? Drawing on the literature on how thoughts come to mind, we argue that punishing a person for such failures will help prevent their future occurrence, even without the involvement of volitional choice. This provides new insight on the structure and function of our tendency to punish negligent actions. (PsycInfo Database Record (c) 2024 APA, all rights reserved)


