Psychological Methods
PsyResearch
ψ   Psychology Research on the Web   



Psychological Methods - Vol 29, Iss 6

Psychological Methods is devoted to the development and dissemination of methods for collecting, analyzing, understanding, and interpreting psychological data. Its purpose is the dissemination of innovations in research design, measurement, methodology, and quantitative and qualitative analysis to the psychological community; its further purpose is to promote effective communication about related substantive and methodological issues.
Copyright 2025 American Psychological Association
  • Subgroup discovery in structural equation models.
    Structural equation modeling is one of the most popular statistical frameworks in the social and behavioral sciences. Often, detection of groups with distinct sets of parameters in structural equation models (SEM) is of key importance for applied researchers, for example, when investigating differential item functioning for a mental ability test or examining children with exceptional educational trajectories. In the present article, we present a new approach, termed SubgroupSEM, combining subgroup discovery—a well-established toolkit of supervised learning algorithms and techniques from the field of computer science—with structural equation models. We provide an overview and comparison of three approaches to modeling and detecting heterogeneous groups in structural equation models, namely, finite mixture models, SEM trees, and SubgroupSEM. We provide a step-by-step guide to applying subgroup discovery techniques for structural equation models, followed by a detailed and illustrated presentation of pruning strategies and four subgroup discovery algorithms. Finally, the SubgroupSEM approach is illustrated on two real data examples, examining measurement invariance of a mental ability test and investigating interesting subgroups for the mediated relationship between predictors of educational outcomes and the trajectories of math competencies in 5th grade children. The illustrative examples are accompanied by examples of the R package subgroupsem, which is a viable implementation of our approach for applied researchers. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
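As a rough sketch of the subgroup-discovery idea (not the SubgroupSEM method itself, which fits structural equation models to candidate subgroups via the subgroupsem R package), the toy example below scans simple covariate conditions and scores each subgroup with a hypothetical size-times-deviation quality function, using a correlation as a stand-in for an SEM parameter:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy data: the x-y relationship is stronger when covariate c0 == 1.
n = 2000
covs = rng.integers(0, 2, size=(n, 3))        # binary covariates c0..c2
x = rng.normal(size=n)
slope = np.where(covs[:, 0] == 1, 0.8, 0.2)   # hidden heterogeneity
y = slope * x + rng.normal(size=n)

# Subgroup discovery: scan conditions "cj == v" and score each subgroup
# by how far its parameter (here, a correlation) departs from the
# overall value, weighted by subgroup size.
r_all = np.corrcoef(x, y)[0, 1]
best_q, best_cond = -np.inf, None
for j in range(3):
    for v in (0, 1):
        m = covs[:, j] == v
        r_s = np.corrcoef(x[m], y[m])[0, 1]
        q = np.sqrt(m.sum()) * abs(r_s - r_all)   # size-weighted deviation
        if q > best_q:
            best_q, best_cond = q, f"c{j} == {v}"
print(best_cond)   # a condition on c0, the covariate driving the heterogeneity
```

The exhaustive scan over single conditions stands in for the pruning strategies and search algorithms discussed in the article.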

  • Distributional causal effects: Beyond an “averagarian” view of intervention effects.
    The usefulness of mean aggregates in the analysis of intervention effectiveness is a matter of considerable debate in the psychological, educational, and social sciences. In addition to studying “average treatment effects,” the evaluation of “distributional treatment effects” (i.e., effects that go beyond means) has been suggested to obtain a broader picture of how an intervention affects the study outcome. We continue this discussion by considering distributional causal effects. We present formal definitions of causal effects that go beyond means and utilize a distributional regression framework known as generalized additive models for location, scale, and shape (GAMLSS). GAMLSS allows one to characterize an intervention effect in its totality through simultaneously modeling means, variances, skewnesses, kurtoses, as well as ceiling and floor effects of outcome distributions. Based on data from a large-scale randomized controlled trial, we use GAMLSS to evaluate the impact of a teacher classroom management program on student academic performance. Results suggest the teacher classroom management training increased mean academic competence as well as the chance to obtain the maximum score on the academic competence scale. These effects would have been completely overlooked in a traditional evaluation of mean aggregates. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
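A minimal simulation (all numbers hypothetical) illustrates why a mean-only evaluation can miss distributional effects; fitting an actual GAMLSS, which regresses location, scale, and shape parameters on covariates, requires a dedicated package:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical outcome on a 0-10 scale: this made-up "intervention"
# leaves the mean essentially unchanged but halves the spread and
# changes the ceiling rate, so a mean comparison sees nothing while
# the distributions clearly differ.
n = 5000
control = np.clip(rng.normal(6.0, 2.0, n), 0, 10)
treated = np.clip(rng.normal(6.0, 1.0, n), 0, 10)

print("mean difference:", round(treated.mean() - control.mean(), 2))
print("sd control:", round(control.std(), 2), "sd treated:", round(treated.std(), 2))
print("ceiling rate control:", round((control == 10).mean(), 3))
print("ceiling rate treated:", round((treated == 10).mean(), 3))
```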

  • Improving hierarchical models of individual differences: An extension of Goldberg’s bass-ackward method.
    Goldberg’s (2006) bass-ackward approach to elucidating the hierarchical structure of individual differences data has been used widely to improve our understanding of the relationships among constructs of varying levels of granularity. The traditional approach has been to extract a single component or factor on the first level of the hierarchy, two on the second level, and so on, treating the correlations between adjoining levels akin to path coefficients in a hierarchical structure. This article proposes three modifications to the traditional approach with a particular focus on examining associations among all levels of the hierarchy: (a) identify and remove redundant elements that perpetuate through multiple levels of the hierarchy; (b) (optionally) identify and remove artefactual elements; and (c) plot the strongest correlations among the remaining elements to identify their hierarchical associations. Together these steps can offer a simpler and more complete picture of the underlying hierarchical structure among a set of observed variables. The rationale for each step is described, illustrated in a hypothetical example and three basic simulations, and then applied in real data. The results are compared with the traditional bass-ackward approach together with agglomerative hierarchical cluster analysis, and a basic tutorial with code is provided to apply the extended bass-ackward approach in other data. (PsycInfo Database Record (c) 2024 APA, all rights reserved)

  • Spurious inference in consensus emergence modeling due to the distinguishability problem.
    Researchers use consensus emergence models (CEMs) to detect when the scores of group members become similar over time. The purpose of this article is to review how CEMs often lead to spurious conclusions of consensus emergence due to the problem of distinguishability, or the notion that different data-generating mechanisms sometimes give rise to similar observed data. As a result, CEMs often cannot distinguish between observations generated from true consensus processes versus those generated by stochastic fluctuations. It will be shown that a distinct set of mechanisms, none of which exhibit true consensus, nonetheless yield spurious inferences of consensus emergence when CEMs are fitted to the observed data. This problem is demonstrated via examples and Monte Carlo simulations. Recommendations for future work are provided. (PsycInfo Database Record (c) 2024 APA, all rights reserved)

  • Comparing random effects models, ordinary least squares, or fixed effects with cluster robust standard errors for cross-classified data.
    Cross-classified random effects modeling (CCREM) is a common approach for analyzing cross-classified data in psychology, education research, and other fields. However, when the focus of a study is on the regression coefficients at Level 1 rather than on the random effects, ordinary least squares regression with cluster robust variance estimators (OLS-CRVE) or fixed effects regression with CRVE (FE-CRVE) could be appropriate approaches. These alternative methods are potentially advantageous because they rely on weaker assumptions than those required by CCREM. We conducted a Monte Carlo simulation study to compare the performance of CCREM, OLS-CRVE, and FE-CRVE across conditions where homoscedasticity and exogeneity assumptions held and conditions where they were violated, as well as conditions with unmodeled random slopes. We found that CCREM outperformed the alternative approaches when its assumptions were all met. However, when homoscedasticity assumptions were violated, OLS-CRVE and FE-CRVE provided similar or better performance than CCREM. When the exogeneity assumption was violated, only FE-CRVE provided adequate performance. Further, OLS-CRVE and FE-CRVE provided more accurate inferences than CCREM in the presence of unmodeled random slopes. Thus, we recommend two-way FE-CRVE as a good alternative to CCREM, particularly if the homoscedasticity or exogeneity assumptions of the CCREM might be in doubt. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
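The cluster-robust "sandwich" idea can be sketched in a few lines (a basic CR0 estimator on simulated data with illustrative parameters, not the small-sample-corrected CRVE usually recommended in practice):

```python
import numpy as np

rng = np.random.default_rng(2)

# Clustered data (illustrative values): both the predictor and the error
# have a shared within-cluster component, which is exactly the situation
# where naive OLS standard errors are too small.
G, m = 40, 25                                  # clusters, members each
g = np.repeat(np.arange(G), m)
x = rng.normal(size=G)[g] + rng.normal(size=G * m)
y = 1.0 + 0.5 * x + rng.normal(size=G)[g] + rng.normal(size=G * m)

X = np.column_stack([np.ones(G * m), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ beta

# CR0 sandwich: (X'X)^-1 [ sum_g (X_g'e_g)(X_g'e_g)' ] (X'X)^-1
bread = np.linalg.inv(X.T @ X)
meat = sum(np.outer(X[g == j].T @ e[g == j], X[g == j].T @ e[g == j])
           for j in range(G))
se_cr = np.sqrt((bread @ meat @ bread)[1, 1])

# Naive OLS standard error for the slope, for comparison:
se_ols = np.sqrt((e @ e / (G * m - 2)) * bread[1, 1])
print("naive:", round(se_ols, 3), "cluster-robust:", round(se_cr, 3))
```

With both the regressor and the error cluster-correlated, the robust standard error comes out noticeably larger than the naive one.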

  • Reliable network inference from unreliable data: A tutorial on latent network modeling using STRAND.
    Social network analysis provides an important framework for studying the causes, consequences, and structure of social ties. However, standard self-report measures—for example, as collected through the popular “name-generator” method—do not provide an impartial representation of such ties, be they transfers, interactions, or social relationships. At best, they represent perceptions filtered through the cognitive biases of respondents. Individuals may, for example, report transfers that did not really occur, or forget to mention transfers that really did. The propensity to make such reporting inaccuracies is both an individual-level and item-level characteristic—variable across members of any given group. Past research has highlighted that many network-level properties are highly sensitive to such reporting inaccuracies. However, there remains a dearth of easily deployed statistical tools that account for such biases. To address this issue, we provide a latent network model that allows researchers to jointly estimate parameters measuring both reporting biases and a latent, underlying social network. Building upon past research, we conduct several simulation experiments in which network data are subject to various reporting biases, and find that these reporting biases strongly impact fundamental network properties. These impacts are not adequately remedied using the most frequently deployed approaches for network reconstruction in the social sciences (i.e., treating either the union or the intersection of double-sampled data as the true network), but are appropriately resolved through the use of our latent network models. To make implementation of our models easier for end-users, we provide a fully documented R package, STRAND, and include a tutorial illustrating its functionality when applied to empirical food/money sharing data from a rural Colombian population. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
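A toy simulation (with made-up error rates) shows why treating the union or the intersection of double-sampled reports as the true network is inadequate, which is the gap the latent network model addresses:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy true directed network and two noisy self-report layers (error
# rates are illustrative, not estimates from any real study).
N = 60
true_net = (rng.random((N, N)) < 0.10).astype(int)
np.fill_diagonal(true_net, 0)

def noisy_report(net, p_miss=0.3, p_invent=0.02):
    """Real ties are forgotten with prob p_miss; absent ties are
    reported anyway with prob p_invent."""
    keep = (rng.random(net.shape) >= p_miss).astype(int)
    invent = (rng.random(net.shape) < p_invent).astype(int)
    rep = np.where(net == 1, keep, invent)
    np.fill_diagonal(rep, 0)
    return rep

# Double-sampled data, e.g., giver's report and receiver's report.
r1, r2 = noisy_report(true_net), noisy_report(true_net)
density = lambda a: a.sum() / (N * (N - 1))

print("true:        ", round(density(true_net), 3))
print("union:       ", round(density(r1 | r2), 3))   # inflated by invented ties
print("intersection:", round(density(r1 & r2), 3))   # deflated by forgetting
```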

  • Correcting bias in extreme groups design using a missing data approach.
    Extreme groups design (EGD) refers to the use of a screening variable to inform further data collection, such that only participants with the lowest and highest scores are recruited in subsequent stages of the study. It is an effective way to improve the power of a study under a limited budget, but produces biased standardized estimates. We demonstrate that the bias in EGD results from its inherent missing at random mechanism, which can be corrected using modern missing data techniques such as full information maximum likelihood (FIML). Further, we provide a tutorial on computing correlations in EGD data with FIML using R. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
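The upward bias in standardized estimates under EGD is easy to reproduce by simulation; the sketch below shows only the inflation, while the FIML correction described in the tutorial requires an SEM or missing-data package:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a screening variable x and an outcome y with true r = 0.5.
n = 10_000
r_true = 0.5
x = rng.normal(size=n)
y = r_true * x + np.sqrt(1 - r_true**2) * rng.normal(size=n)

# Extreme groups design: keep only the bottom and top 25% on x.
lo, hi = np.quantile(x, [0.25, 0.75])
keep = (x <= lo) | (x >= hi)

r_full = np.corrcoef(x, y)[0, 1]
r_egd = np.corrcoef(x[keep], y[keep])[0, 1]

# Selecting extremes inflates the variance of x while leaving the
# residual variance of y untouched, so the standardized correlation
# in the selected sample is biased upward.
print("full sample r:", round(r_full, 3), " EGD sample r:", round(r_egd, 3))
```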

  • Causal relationships in longitudinal observational data: An integrative modeling approach.
    Much research in psychology relies on data from observational studies that traditionally do not allow for causal interpretation. However, a range of approaches in statistics and computational sciences have been developed to infer causality from correlational data. Based on conceptual and theoretical considerations on the integration of interventional and time-restrainment notions of causality, we set out to design and empirically test a new approach to identify potential causal factors in longitudinal correlational data. A principled and representative set of simulations and an illustrative application to identify early-life determinants of cognitive development in a large cohort study are presented. The simulation results illustrate the potential but also the limitations for discovering causal factors in observational data. In the illustrative application, plausible candidates for early-life determinants of cognitive abilities in 5-year-old children were identified. Based on these results, we discuss the possibilities of using exploratory causal discovery in psychological research but also highlight its limits and potential misuses and misinterpretations. (PsycInfo Database Record (c) 2024 APA, all rights reserved)

  • Using natural language processing and machine learning to replace human content coders.
    Content analysis is a common and flexible technique to quantify and make sense of qualitative data in psychological research. However, the practical implementation of content analysis is extremely labor-intensive and subject to human coder errors. Applying natural language processing (NLP) techniques can help address these limitations. We explain and illustrate these techniques to psychological researchers. For this purpose, we first present a study exploring the creation of psychometrically meaningful predictions of human content codes. Using an existing database of human content codes, we build an NLP algorithm to validly predict those codes, at generally acceptable standards. We then conduct a Monte Carlo simulation to model how four dataset characteristics (i.e., sample size, unlabeled proportion of cases, classification base rate, and human coder reliability) influence content classification performance. The simulation indicated that the influence of sample size and unlabeled proportion on model classification performance tended to be curvilinear. In addition, base rate and human coder reliability had a strong effect on classification performance. Finally, using these results, we offer practical recommendations to psychologists on the dataset characteristics necessary to achieve valid prediction of content codes, guiding researchers on the use of NLP models to replace human coders in content analysis research. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
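The workflow of predicting human content codes from text can be caricatured with a tiny bag-of-words nearest-centroid classifier (toy data and labels, nothing like the article's tuned NLP pipeline):

```python
import numpy as np

# Toy labeled examples standing in for a database of human content codes.
labeled = [
    ("great team supportive manager", "positive"),
    ("helpful coworkers good culture", "positive"),
    ("long hours low pay", "negative"),
    ("poor management high stress", "negative"),
]
vocab = sorted({w for text, _ in labeled for w in text.split()})

def vec(text):
    """Bag-of-words count vector over the training vocabulary."""
    words = text.split()
    return np.array([words.count(w) for w in vocab], dtype=float)

X = np.array([vec(t) for t, _ in labeled])
codes = sorted({c for _, c in labeled})
centroids = {c: X[[i for i, (_, ci) in enumerate(labeled) if ci == c]].mean(0)
             for c in codes}

def predict(text):
    """Assign the code whose centroid is nearest in count space."""
    v = vec(text)
    return min(codes, key=lambda c: np.linalg.norm(v - centroids[c]))

print(predict("supportive culture good manager"))  # → positive
```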

  • Ubiquitous bias and false discovery due to model misspecification in analysis of statistical interactions: The role of the outcome’s distribution and metric properties.
    Studies of interaction effects are of great interest because they identify crucial interplay between predictors in explaining outcomes. Previous work has considered several potential sources of statistical bias and substantive misinterpretation in the study of interactions, but less attention has been devoted to the role of the outcome variable in such research. Here, we consider bias and false discovery associated with estimates of interaction parameters as a function of the distributional and metric properties of the outcome variable. We begin by illustrating that, for a variety of noncontinuously distributed outcomes (i.e., binary and count outcomes), attempts to use the linear model for recovery lead to catastrophic levels of bias and false discovery. Next, focusing on transformations of normally distributed variables (i.e., censoring and noninterval scaling), we show that linear models again produce spurious interaction effects. We provide explanations offering geometric and algebraic intuition as to why interactions are a challenge for these incorrectly specified models. In light of these findings, we make two specific recommendations. First, a careful consideration of the outcome’s distributional properties should be a standard component of interaction studies. Second, researchers should approach research focusing on interactions with heightened levels of scrutiny. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
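A small arithmetic example shows one way a linear model of a binary outcome manufactures interactions: a purely additive logistic model already implies a nonzero difference-in-differences on the probability scale (the coefficients below are arbitrary illustrative values):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Additive logistic model: NO interaction term on the logit scale.
b0, b1, b2 = -2.0, 1.0, 1.0

# Cell probabilities for a 2x2 design with x1, x2 in {0, 1}.
p00 = sigmoid(b0)
p10 = sigmoid(b0 + b1)
p01 = sigmoid(b0 + b2)
p11 = sigmoid(b0 + b1 + b2)

# Difference-in-differences on the probability scale: this is what a
# linear (OLS) model of the 0/1 outcome estimates as the "interaction".
did = (p11 - p01) - (p10 - p00)
print(round(did, 4))  # nonzero, despite no interaction on the logit scale
```

Because the sigmoid is nonlinear, additive effects on the logit scale are non-additive on the probability scale, which a misspecified linear model reports as an interaction.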

  • Regression with reduced rank predictor matrices: A model of trade-offs.
    A regression model of predictor trade-offs is described. Each regression parameter equals the expected change in Y obtained by trading 1 point from one predictor to a second predictor. The model applies to predictor variables that sum to a constant T for all observations; for example, proportions summing to T = 1.0 or percentages summing to T = 100 for each observation. If predictor variables sum to a constant T for all observations and if a least squares solution exists, the predicted values for the criterion variable Y will be uniquely determined, but there will be an infinite set of linear regression weights and the familiar interpretation of regression weights does not apply. However, the regression weights are determined up to an additive constant and thus differences in regression weights β_v − β_v* are uniquely determined, readily estimable, and interpretable. β_v − β_v* is the expected increase in Y given a transfer of 1 point from variable v* to variable v. The model is applied to multiple-choice test items that have four response categories, one correct and three incorrect. Results indicate that the expected outcome depends, not just on the student’s number of correct answers, but also on how the student’s incorrect responses are distributed over the three incorrect response types. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
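The non-uniqueness of the weights, and the uniqueness of their differences, can be verified numerically (hypothetical compositional data with T = 1):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical compositional predictors: each row sums to T = 1.
n, k = 50, 4
X = rng.dirichlet(np.ones(k), size=n)
beta_true = np.array([0.0, 1.0, 2.0, 3.0])
y = X @ beta_true + rng.normal(0, 0.1, size=n)

# Least squares with an intercept; the design is rank-deficient because
# the intercept equals the sum of the predictor columns, so lstsq picks
# one of infinitely many weight vectors.
Z = np.column_stack([np.ones(n), X])
coef = np.linalg.lstsq(Z, y, rcond=None)[0]
b0, b = coef[0], coef[1:]

# Shifting every slope by c while absorbing c*T into the intercept
# leaves the fitted values unchanged...
c = 5.0
assert np.allclose(Z @ coef, (b0 - c) + X @ (b + c))

# ...but slope *differences* are unique and estimate the trade-offs:
print(np.round(b - b[0], 1))   # approximately [0, 1, 2, 3]
```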

  • Comparison of noncentral t and distribution-free methods when using sequential procedures to control the width of a confidence interval for a standardized mean difference.
    A sequential stopping rule (SSR) can generate a confidence interval (CI) for a standardized mean difference d that has an exact standardized width, ω. Two methods were tested using a broad range of ω and standardized effect sizes δ. A noncentral t (NCt) CI used with normally distributed data had coverages that were nominal at narrow widths but were slightly inflated at wider widths. A distribution-free (Dist-Free) method used with normally distributed data exhibited superior coverage and stopped on average at the expected sample sizes. When used with moderate to severely skewed lognormal distributions, the coverage was too low at large effect sizes even with a very narrow width where Dist-Free was expected to perform well, and the mean stopping sample sizes were absurdly elevated (thousands per group). SSR procedures negatively biased both the raw difference and the “unbiased” Hedges’ g in the stopping sample with all methods and distributions. The d was the less biased estimator of δ when the distribution was normal. The poor coverage with a lognormal distribution resulted from a large positive bias in d that increased as a function of both ω and δ. Coverage and point estimation were little improved by using g instead of d. Increased stopping time resulted from the way an estimate of the variance is calculated when it encounters occasional extreme scores generated from the skewed distribution. The Dist-Free SSR method was superior when the distribution was normal or only slightly skewed but is not recommended with moderately skewed distributions. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
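A stripped-down SSR can be sketched as follows; note this uses a large-sample normal CI for d rather than the noncentral t or distribution-free methods compared in the article, and all parameter values are illustrative:

```python
import math
import numpy as np

rng = np.random.default_rng(5)

def ssr_ci_width_d(omega, batch=10, n_min=20, z=1.96):
    """Sketch of a sequential stopping rule: keep sampling two groups in
    batches until the (large-sample normal) CI for Cohen's d is narrower
    than omega. Returns (d, n per group)."""
    g1 = list(rng.normal(0.5, 1, n_min))   # true delta = 0.5 (illustrative)
    g2 = list(rng.normal(0.0, 1, n_min))
    while True:
        n = len(g1)
        sp = math.sqrt((np.var(g1, ddof=1) + np.var(g2, ddof=1)) / 2)
        d = (np.mean(g1) - np.mean(g2)) / sp
        se = math.sqrt(2 / n + d**2 / (4 * n))   # approximate SE of d
        if 2 * z * se <= omega:
            return d, n
        g1.extend(rng.normal(0.5, 1, batch))
        g2.extend(rng.normal(0.0, 1, batch))

d, n = ssr_ci_width_d(omega=0.4)
print("stopped at n per group:", n, "with d =", round(d, 2))
```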

  • One-tailed tests: Let’s do this (responsibly).
    When preregistered, one-tailed tests control false-positive results at the same rate as two-tailed tests. They are also more powerful, provided the researcher correctly identified the direction of the effect. So it is surprising that they are not more common in psychology. Here I make an argument in favor of one-tailed tests and address common mistaken objections that researchers may have to using them. The arguments presented here only apply in situations where the test is clearly preregistered. If power is truly as urgent an issue as statistics reformers suggest, then the deliberate and thoughtful use of preregistered one-tailed tests ought to be not only permitted, but encouraged in cases where researchers desire greater power. One-tailed tests are especially well suited for applied questions, replications of previously documented effects, or situations where directionally unexpected effects would be meaningless. Preregistered one-tailed tests can sensibly align the researcher’s stated theory with their tested hypothesis, bring a coherence to the practice of null hypothesis statistical testing, and produce generally more persuasive results. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
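The power advantage is straightforward to quantify under a large-sample normal approximation (the effect size and sample size below are arbitrary):

```python
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

def power(delta, n, z_crit):
    """Approximate power of a two-sample z test of means with effect
    size delta and n per group, rejecting when z > z_crit."""
    ncp = delta * math.sqrt(n / 2)   # noncentrality of the z statistic
    return 1 - norm_cdf(z_crit - ncp)

delta, n = 0.4, 50
p_one = power(delta, n, 1.645)   # one-tailed, alpha = .05
p_two = power(delta, n, 1.960)   # two-tailed (ignoring the tiny lower tail)

print("one-tailed:", round(p_one, 3), " two-tailed:", round(p_two, 3))
```

For these illustrative values the one-tailed test has noticeably higher power at the same false-positive rate, which is the trade the article argues preregistration makes safe.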


