PsyResearch: Psychology Research on the Web



Psychological Review - Vol 131, Iss 5

Psychological Review publishes articles that make important theoretical contributions to any area of scientific psychology.
Copyright 2024 American Psychological Association
  • How do people predict a random walk? Lessons for models of human cognition.
    Repeated forecasts of changing values are common in many everyday tasks, from predicting the weather to financial markets. A particularly simple and informative instance of such fluctuating values is the random walk: a sequence in which each point is a random movement from only its preceding value, unaffected by any previous points. Moreover, random walks often yield basic rational forecasting solutions in which predictions of new values should repeat the most recent value, and hence replicate the properties of the original series. In previous experiments, however, we have found that human forecasters do not adhere to this standard, showing systematic deviations from the properties of a random walk such as excessive volatility and extreme movements between subsequent predictions. We suggest that such deviations reflect general statistical signatures of cognition displayed across multiple tasks, offering a window into underlying mechanisms. Using these deviations as new criteria, we here explore several cognitive models of forecasting drawn from various approaches developed in the existing literature, including Bayesian, error-based learning, autoregressive, and sampling mechanisms. These models are contrasted with human data from two experiments to determine which best accounts for the particular statistical features displayed by participants. We find support for sampling models in both aggregate and individual fits, suggesting that these variations are attributable to the use of inherently stochastic prediction systems. We thus argue that variability in predictions is strongly influenced by computational noise within the decision-making process, with less influence from “late” noise at the output stage. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
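
A minimal sketch, in Python, of the forecasting setup described in the abstract above, assuming a Gaussian-step random walk. The rational forecast simply repeats the most recent observed value; a hypothetical "sampling" forecaster corrupted by internal noise shows the kind of excess volatility the authors describe. The noise model and all parameter values are illustrative assumptions, not the paper's fitted models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a random walk: each value is the previous value plus a random step.
T = 200
series = np.cumsum(rng.normal(loc=0.0, scale=1.0, size=T))

# Rational forecast for a random walk: repeat the most recent observed value.
rational_forecast = series[:-1]          # prediction for series[1:]

# Hypothetical noisy ("sampling") forecaster: the same rule corrupted by
# internal computational noise, producing excessive volatility.
noisy_forecast = series[:-1] + rng.normal(scale=1.5, size=T - 1)

def volatility(x):
    """Mean absolute change between successive values."""
    return np.mean(np.abs(np.diff(x)))

print("series volatility:           ", round(volatility(series), 3))
print("rational forecast volatility:", round(volatility(rational_forecast), 3))
print("noisy forecast volatility:   ", round(volatility(noisy_forecast), 3))
```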

  • Bayesian confidence in optimal decisions.
    The optimal way to make decisions in many circumstances is to track the difference in evidence collected in favor of the options. The drift diffusion model (DDM) implements this approach and provides an excellent account of decisions and response times. However, existing DDM-based models of confidence exhibit certain deficits, and many theories of confidence have used alternative, nonoptimal models of decisions. Motivated by the historical success of the DDM, we ask whether simple extensions to this framework might allow it to better account for confidence. Motivated by the idea that the brain will not duplicate representations of evidence, in all model variants decisions and confidence are based on the same evidence accumulation process. We compare the models to benchmark results, and successfully apply four qualitative tests concerning the relationships between confidence, evidence, and time, in a new preregistered study. Using computationally cheap expressions to model confidence on a trial-by-trial basis, we find that a subset of model variants also provides a very good to excellent account of precise quantitative effects observed in confidence data. Specifically, our results favor the hypothesis that confidence reflects the strength of accumulated evidence penalized by the time taken to reach the decision (Bayesian readout), with the applied penalty not perfectly calibrated to the specific task context. These results suggest there is no need to abandon the DDM or single accumulator models to successfully account for confidence reports. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
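
A rough illustration, not the authors' model: a basic drift diffusion simulation in which confidence is read out from the accumulated evidence penalized by elapsed decision time. The logistic readout and every parameter value are assumptions chosen only to make the idea concrete.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_trial(drift=0.2, threshold=1.0, dt=0.005, noise_sd=1.0, time_penalty=0.5):
    """One drift diffusion trial; returns choice, response time, and a
    hypothetical confidence readout (evidence strength minus a time penalty)."""
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise_sd * np.sqrt(dt) * rng.normal()
        t += dt
    choice = 1 if x > 0 else 0
    confidence = 1.0 / (1.0 + np.exp(-(abs(x) - time_penalty * t)))  # assumed form
    return choice, t, confidence

choices, rts, confs = map(np.array, zip(*(simulate_trial() for _ in range(1000))))
print("accuracy:", choices.mean())
print("mean RT:", round(rts.mean(), 2))
# Under this readout, slower decisions carry lower confidence.
print("corr(RT, confidence):", round(np.corrcoef(rts, confs)[0, 1], 2))
```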

  • Imprecise probabilistic inference from sequential data.
    Although the Bayesian paradigm is an important benchmark in studies of human inference, the extent to which it provides a useful framework to account for human behavior remains debated. We document systematic departures from Bayesian inference under correct beliefs, even on average, in the estimates by experimental subjects of the probability of a binary event following observations of successive realizations of the event. In particular, we find underreaction of subjects’ estimates to the evidence (“conservatism”) after only a few observations and at the same time overreaction after longer sequences of observations. This is not explained by an incorrect prior nor by many common models of Bayesian inference. We uncover autocorrelation in the estimates, which suggests that subjects carry imprecise representations of the decision situations, with noise in beliefs propagating over successive trials. But even taking into account these internal imprecisions and assuming various incorrect beliefs, we find that subjects’ updates are inconsistent with the rules of Bayesian inference. We show how subjects instead considerably economize on the attention that they pay to the information relevant to the decision, and on the degree of control that they exert over their precise response, while giving responses fairly adapted to the task. A “noisy-counting” model of probability estimation reproduces the several patterns we document in subjects’ behavior. In sum, human subjects in our task perform reasonably well while greatly minimizing the amount of information that they pay attention to. Our results emphasize that investigating this economy of attention is crucial in understanding human decisions. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
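
A small sketch contrasting an ideal Bayesian observer (Beta-Binomial updating with a uniform prior) with a hypothetical "noisy-counting" estimator whose running counts are corrupted by internal noise before the estimate is formed. The noise model and parameter values are assumptions for illustration, not the model fitted in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sequence of binary outcomes with true probability p_true.
p_true = 0.7
outcomes = rng.random(30) < p_true

n = np.arange(1, len(outcomes) + 1)
successes = np.cumsum(outcomes)

# Ideal Bayesian observer with a uniform Beta(1, 1) prior: posterior mean
# after each observation is (successes + 1) / (n + 2).
bayes_estimate = (successes + 1) / (n + 2)

# Hypothetical noisy-counting observer: the counts themselves carry noise.
noisy_successes = successes + rng.normal(scale=1.0, size=len(n))
noisy_estimate = np.clip((noisy_successes + 1) / (n + 2), 0.0, 1.0)

for i in (4, 14, 29):
    print(f"after {i + 1:2d} observations: "
          f"Bayes = {bayes_estimate[i]:.2f}, noisy count = {noisy_estimate[i]:.2f}")
```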

  • Counterfactuals and the logic of causal selection.
    Everything that happens has a multitude of causes, but people make causal judgments effortlessly. How do people select one particular cause (e.g., the lightning bolt that set the forest ablaze) out of the set of factors that contributed to the event (the oxygen in the air, the dry weather … )? Cognitive scientists have suggested that people make causal judgments about an event by simulating alternative ways things could have happened. We argue that this counterfactual theory explains many features of human causal intuitions, given two simple assumptions. First, people tend to imagine counterfactual possibilities that are both a priori likely and similar to what actually happened. Second, people judge that a factor C caused effect E if C and E are highly correlated across these counterfactual possibilities. In a reanalysis of existing empirical data, and a set of new experiments, we find that this theory uniquely accounts for people’s causal intuitions. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
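
A toy simulation of the counterfactual account summarized above, using the lightning-and-oxygen example: counterfactual worlds are sampled with a bias toward states that are a priori likely and similar to what actually happened, and each factor's causal strength is scored as its correlation with the effect across those samples. The sampling scheme and parameter values are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Actual world: lightning struck (a priori unlikely), oxygen was present
# (a priori near-certain), and the fire occurred because both held.
prior = {"lightning": 0.1, "oxygen": 0.99}
actual = {"lightning": 1, "oxygen": 1}

n_samples = 100_000
stickiness = 0.5  # assumed probability of keeping a factor at its actual value

def sample_factor(name):
    """Resample a factor: keep the actual value with probability `stickiness`,
    otherwise redraw it from its prior."""
    keep = rng.random(n_samples) < stickiness
    fresh = (rng.random(n_samples) < prior[name]).astype(int)
    return np.where(keep, actual[name], fresh)

lightning = sample_factor("lightning")
oxygen = sample_factor("oxygen")
fire = lightning & oxygen  # the effect requires both factors

# The a priori unlikely factor correlates far more strongly with the effect,
# matching the intuition that the lightning, not the oxygen, caused the fire.
print("corr(lightning, fire):", round(np.corrcoef(lightning, fire)[0, 1], 2))
print("corr(oxygen, fire):   ", round(np.corrcoef(oxygen, fire)[0, 1], 2))
```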

  • Prejudice model 1.0: A predictive model of prejudice.
    The present research develops a predictive model of prejudice. For nearly a century, psychology and other fields have sought to scientifically understand and describe the causes of prejudice. Numerous theories of prejudice now exist. Yet these theories are overwhelmingly defined verbally and thus lack the ability to precisely predict when and to what extent prejudice will emerge. The abundance of theory also raises the possibility of undetected overlap between constructs theorized to cause prejudice. Predictive models enable falsification and provide a way for the field to move forward. To this end, here we present 18 studies with ∼5,000 participants in seven phases of model development. After initially identifying major theorized causes of prejudice in the literature, we used a model selection approach to winnow constructs into a parsimonious predictive model of prejudice (Phases I and II). We confirm this model in a preregistered out-of-sample test (Phase III), test variations in operationalizations and boundary conditions (Phases IV and V), and test generalizability on a U.S. representative sample, an Indian sample, and a U.K. sample (Phase VI). Finally, we consulted the predictions of experts in the field to examine how well they align with our results (Phase VII). We believe this initial predictive model is limited and bad, but by developing a model that makes highly specific predictions, drawing on the state of the art, we hope to provide a foundation from which research can build to improve the science of prejudice. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
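
Purely to illustrate the kind of model selection step mentioned above, the sketch below winnows a set of simulated predictor constructs by comparing the BIC of every subset under ordinary least squares. The construct names, data, and criterion are invented for the example and do not reflect the study's constructs or results.

```python
import itertools
import numpy as np

rng = np.random.default_rng(4)

# Simulated stand-in data: five hypothetical predictor constructs, with the
# outcome driven by only two of them (names and effects are made up).
n = 500
predictors = {name: rng.normal(size=n)
              for name in ["threat", "contact", "ideology", "disgust", "norms"]}
X_all = np.column_stack(list(predictors.values()))
y = 0.8 * predictors["threat"] - 0.5 * predictors["contact"] + rng.normal(size=n)

def bic(X, y):
    """BIC of an ordinary least squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return len(y) * np.log(np.mean(resid ** 2)) + X1.shape[1] * np.log(len(y))

# Winnowing: score every non-empty subset of constructs and keep the best.
names = list(predictors)
subsets = (s for r in range(1, len(names) + 1)
           for s in itertools.combinations(range(len(names)), r))
best = min(subsets, key=lambda s: bic(X_all[:, list(s)], y))
print("selected constructs:", [names[i] for i in best])
```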

  • Unifying approaches to understanding capacity in change detection.
    To navigate changes within a highly dynamic and complex environment, it is crucial to compare current visual representations of a scene to previously formed representations stored in memory. This process of mental comparison requires integrating information from multiple sources to inform decisions about changes within the environment. In the present article, we combine a novel systems factorial technology change detection task (Blunden et al., 2022) with a set size manipulation. Participants were required to detect 0, 1, or 2 changes of low and high detectability between a memory and probe array of 1–4 spatially separated luminance discs. Analyses using systems factorial technology indicated that the processing architecture was consistent across set sizes but that capacity was always limited and decreased as the number of distractors increased. We developed a novel model of change detection based on the statistical principles of basic sampling theory (Palmer, 1990; Sewell et al., 2014). The sample size model, instantiated parametrically, predicts the architecture and capacity results a priori and quantitatively accounts for several key results observed in the data: (a) increasing set size acted to decrease sensitivity (d′) in proportion to the square root of the number of items in the display; (b) the effect of redundancy benefited performance by a factor of the square root of the number of changes; and (c) the effect of change detectability was separable and independent of the sample size costs and redundancy benefits. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
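
A minimal sketch of the two square-root relationships reported above, under a fixed-sample-size assumption: a fixed pool of samples is divided among the items in the display, so per-item sensitivity falls with the square root of set size, while redundant changes pool evidence and give a square-root benefit. The baseline d' value is arbitrary; this is not the paper's fitted model.

```python
import numpy as np

D_PRIME_SINGLE = 2.0  # assumed sensitivity for one item and one change

def predicted_d_prime(set_size, n_changes=1):
    """Predicted sensitivity under the square-root cost and benefit above."""
    per_item = D_PRIME_SINGLE / np.sqrt(set_size)   # set-size cost
    return per_item * np.sqrt(n_changes)            # redundancy benefit

for set_size in (1, 2, 3, 4):
    for n_changes in (1, 2):
        if n_changes > set_size:
            continue
        print(f"set size {set_size}, {n_changes} change(s): "
              f"predicted d' = {predicted_d_prime(set_size, n_changes):.2f}")
```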

  • The relation between learning and stimulus–response binding.
    Perception and action rely on integrating or binding different features of stimuli and responses. Such bindings are short-lived, but they can be retrieved for a limited amount of time if any of their features is reactivated. This is particularly true for stimulus–response bindings, allowing for flexible recycling of previous action plans. A relation to learning of stimulus–response associations suggests itself, and previous accounts have proposed binding as an initial step of forging associations in long-term memory. The evidence for this claim is surprisingly mixed, however. Here we propose a framework that explains previous failures to detect meaningful relations of binding and learning by highlighting the joint contribution of three variables: (a) decay, (b) the number of repetitions, and (c) the time elapsing between repetitions. Accounting for the interplay of these variables provides a promising blueprint for innovative experimental designs that bridge the gap between immediate bindings on the one hand and lasting associations in memory on the other hand. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
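
A deliberately simple toy model of how the three variables highlighted above might interact: each repetition boosts an association strength that then decays exponentially over the interval before the next repetition. The functional forms and parameter values are assumptions for illustration only, not commitments of the proposed framework.

```python
import math

def association_strength(n_repetitions, interval, gain=1.0, decay_rate=0.1):
    """Toy trace: each repetition adds `gain`; the trace decays exponentially
    during the interval that elapses before the next repetition."""
    strength = 0.0
    for _ in range(n_repetitions):
        strength += gain
        strength *= math.exp(-decay_rate * interval)
    return strength

# More repetitions help, but long gaps between them erode the accumulated trace.
for interval in (1.0, 5.0, 20.0):
    print(f"interval {interval:5.1f}:",
          [round(association_strength(n, interval), 2) for n in (1, 3, 10)])
```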


