PsyResearch
ψ Psychology Research on the Web



Psychological Methods - Vol 29, Iss 1

Psychological Methods is devoted to the development and dissemination of methods for collecting, analyzing, understanding, and interpreting psychological data. Its purpose is the dissemination of innovations in research design, measurement, methodology, and quantitative and qualitative analysis to the psychological community; its further purpose is to promote effective communication about related substantive and methodological issues.
Copyright 2024 American Psychological Association
  • A modified approach to fitting relative importance networks.
    Most researchers have estimated the edge weights for relative importance networks using a well-established measure of general dominance for multiple regression. This approach has several desirable properties including edge weights that represent R² contributions, in-degree centralities that correspond to R² for each item when using other items as predictors, and strong replicability. We endorse the continued use of relative importance networks and believe they have a valuable role in network psychometrics. However, to improve their utility, we introduce a modified approach that uses best-subsets regression as a preceding step to select an appropriate subset of predictors for each item. The benefits of this modification include: (a) computation time savings that can enable larger relative importance networks to be estimated, (b) a principled approach to edge selection that can significantly improve specificity, (c) the provision of a signed network if desired, (d) the potential use of the best-subsets regression approach for estimating Gaussian graphical models, and (e) possible generalization to best-subsets logistic regression for Ising models. We describe, evaluate, and demonstrate the proposed approach and discuss its strengths and limitations. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
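    A minimal, illustrative sketch of the two-step idea described above, not the authors' implementation: for each item, an exhaustive best-subsets search (scored here by a simple BIC) first selects predictors, and general-dominance weights (average incremental R-squared across subset sizes) are then computed among the selected predictors only. All function names and the BIC-based selection rule are assumptions for illustration, and the exhaustive search is only practical for small item sets.

```python
import numpy as np
from itertools import combinations

def r2(X, y):
    """R-squared of an OLS regression of y on X (intercept included)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return 1.0 - (y - X1 @ beta).var() / y.var()

def best_subset(X, y):
    """Exhaustive subset search minimizing a simple BIC; returns column indices."""
    n, p = X.shape
    best, best_bic = (), np.inf
    for k in range(p + 1):
        for S in combinations(range(p), k):
            rss = (1.0 - r2(X[:, list(S)], y)) * y.var() * n
            bic = n * np.log(rss / n) + (k + 1) * np.log(n)
            if bic < best_bic:
                best, best_bic = S, bic
    return best

def general_dominance(X, y):
    """Average incremental R-squared of each predictor over all subset sizes."""
    p = X.shape[1]
    weights = np.zeros(p)
    for j in range(p):
        others = [i for i in range(p) if i != j]
        by_size = []
        for k in range(p):
            incs = [r2(X[:, list(S) + [j]], y) - r2(X[:, list(S)], y)
                    for S in combinations(others, k)]
            by_size.append(np.mean(incs))
        weights[j] = np.mean(by_size)
    return weights

def relative_importance_network(data):
    """data: n x m item matrix; returns an m x m matrix of edge weights (row = target item)."""
    m = data.shape[1]
    W = np.zeros((m, m))
    for i in range(m):
        y, X = data[:, i], np.delete(data, i, axis=1)
        keep = best_subset(X, y)                          # (a) edge selection
        if keep:
            gd = general_dominance(X[:, list(keep)], y)   # (b) R-squared contributions
            W[i, np.delete(np.arange(m), i)[list(keep)]] = gd
    return W
```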

  • Reassessment of innovative methods to determine the number of factors: A simulation-based comparison of exploratory graph analysis and next eigenvalue sufficiency test.
    Next Eigenvalue Sufficiency Test (NEST; Achim, 2017) is a recently proposed method to determine the number of factors in exploratory factor analysis (EFA). NEST sequentially tests the null hypothesis that k factors are sufficient to model correlations among observed variables. Another recent approach to detect factors is exploratory graph analysis (EGA; Golino & Epskamp, 2017), which sets the number of factors equal to the number of nonoverlapping communities in a graphical network model of observed correlations. We applied NEST and EGA to data sets under simulated factor models with known numbers of factors and scored their accuracy in retrieving this number. Specifically, we aimed to investigate the effects of cross-loadings on the performance of NEST and EGA. In the first study, we show that NEST and EGA performed less accurately in the presence of cross-loadings on two factors compared with factor models without cross-loadings: we observed that EGA was more sensitive to cross-loadings than NEST. In the second study, we compared NEST and EGA under simulated circumplex models in which variables showed cross-loadings on two factors. Study 2 magnified the differences between NEST and EGA in that NEST was generally able to detect factors in circumplex models while EGA preferred solutions that did not match the factors in circumplex models. In total, our studies indicate that the assumed correspondence between factors and nonoverlapping communities does not hold in the presence of substantial cross-loadings. We conclude that NEST is more in line with the concept of factors in factor models than EGA. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
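    For readers who want to experiment with the sequential logic behind NEST, here is a simplified sketch of the idea: simulate reference eigenvalue distributions under a fitted k-factor model and stop once the next observed eigenvalue is no longer exceptional. It is not Achim's (2017) published algorithm; the factor-fitting routine, the number of simulations, and the stopping rule details are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

def nest_like(data, alpha=0.05, n_sim=200, k_max=8, seed=0):
    """Sequentially test whether k factors suffice by comparing the (k+1)-th
    observed eigenvalue with its simulated distribution under a k-factor model."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    Z = (data - data.mean(0)) / data.std(0)
    eig_obs = np.sort(np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False)))[::-1]
    for k in range(k_max + 1):
        if k == 0:
            implied = np.eye(p)                       # no common factors
        else:
            implied = FactorAnalysis(n_components=k).fit(Z).get_covariance()
            d = np.sqrt(np.diag(implied))
            implied = implied / np.outer(d, d)        # rescale to a correlation matrix
        ref = []
        for _ in range(n_sim):
            sim = rng.multivariate_normal(np.zeros(p), implied, size=n)
            ev = np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
            ref.append(ev[k])
        if eig_obs[k] <= np.quantile(ref, 1 - alpha):  # next eigenvalue not exceptional
            return k                                   # -> k factors are sufficient
    return k_max
```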

  • Estimating the number of factors in exploratory factor analysis via out-of-sample prediction errors.
    Exploratory factor analysis (EFA) is one of the most popular statistical models in psychological science. A key problem in EFA is to estimate the number of factors. In this article, we present a new method for estimating the number of factors based on minimizing the out-of-sample prediction error of candidate factor models. We show in an extensive simulation study that our method slightly outperforms existing methods, including parallel analysis, Bayesian information criterion (BIC), Akaike information criterion (AIC), root mean squared error of approximation (RMSEA), and exploratory graph analysis. In addition, we show that, among the best performing methods, our method is the one that is most robust across different specifications of the true factor model. We provide an implementation of our method in the R-package fspe. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
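    The prediction-error criterion can be sketched in a few lines: for each candidate number of factors, fit the factor model on the training folds, derive the model-implied covariance, predict each held-out variable from the remaining variables, and keep the solution with the smallest out-of-sample error. The R package fspe implements the authors' full procedure; the sketch below is an illustrative approximation using sklearn's FactorAnalysis.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.model_selection import KFold

def n_factors_oos(data, k_max=6, n_folds=10, seed=0):
    """Choose the number of factors by minimizing out-of-sample prediction error."""
    p = data.shape[1]
    errors = []
    for k in range(1, k_max + 1):
        fold_err = []
        for train, test in KFold(n_folds, shuffle=True, random_state=seed).split(data):
            mu, sd = data[train].mean(0), data[train].std(0)
            tr, te = (data[train] - mu) / sd, (data[test] - mu) / sd
            S = FactorAnalysis(n_components=k).fit(tr).get_covariance()
            for j in range(p):               # predict each variable from the others
                rest = [i for i in range(p) if i != j]
                w = np.linalg.solve(S[np.ix_(rest, rest)], S[rest, j])
                fold_err.append(np.mean((te[:, j] - te[:, rest] @ w) ** 2))
        errors.append(np.mean(fold_err))
    return int(np.argmin(errors)) + 1, errors
```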

  • Factor analyzing ordinal items requires substantive knowledge of response marginals.
    In the social sciences, measurement scales often consist of ordinal items and are commonly analyzed using factor analysis. Either data are treated as continuous, or a discretization framework is imposed in order to take the ordinal scale properly into account. Correlational analysis is central in both approaches, and we review recent theory on correlations obtained from ordinal data. To ensure appropriate estimation, the item distributions prior to discretization should be (approximately) known, or the thresholds should be known to be equally spaced. We refer to such knowledge as substantive because it may not be extracted from the data, but must be rooted in expert knowledge about the data-generating process. An illustrative case is presented where absence of substantive knowledge of the item distributions inevitably leads the analyst to conclude that a truly two-dimensional case is perfectly one-dimensional. Additional studies probe the extent to which violation of the standard assumption of underlying normality leads to bias in correlations and factor models. As a remedy, we propose an adjusted polychoric estimator for ordinal factor analysis that takes substantive knowledge into account. Also, we demonstrate how to use the adjusted estimator in sensitivity analysis when the continuous item distributions are known only approximately. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
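    A minimal sketch of a polychoric correlation whose thresholds are fixed from assumed (substantive) marginal category probabilities rather than estimated from the sample margins, which is the spirit of the adjustment described above. It is not the authors' estimator; the maximum-likelihood setup, the example counts, and the use of +/- 8 as effectively infinite outer thresholds are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm
from scipy.optimize import minimize_scalar

def rect_prob(a1, b1, a2, b2, rho):
    """P(a1 < X <= b1, a2 < Y <= b2) for a standard bivariate normal with correlation rho."""
    cdf = lambda x, y: multivariate_normal.cdf([x, y], mean=[0, 0], cov=[[1, rho], [rho, 1]])
    return cdf(b1, b2) - cdf(a1, b2) - cdf(b1, a2) + cdf(a1, a2)

def adjusted_polychoric(table, probs_x, probs_y):
    """ML polychoric correlation with thresholds fixed by assumed marginal probabilities."""
    table = np.asarray(table, dtype=float)
    # +/- 8 acts as +/- infinity for a standard normal variable
    tx = np.concatenate(([-8.0], norm.ppf(np.cumsum(probs_x)[:-1]), [8.0]))
    ty = np.concatenate(([-8.0], norm.ppf(np.cumsum(probs_y)[:-1]), [8.0]))
    def negloglik(rho):
        ll = 0.0
        for i in range(table.shape[0]):
            for j in range(table.shape[1]):
                pij = max(rect_prob(tx[i], tx[i + 1], ty[j], ty[j + 1], rho), 1e-12)
                ll += table[i, j] * np.log(pij)
        return -ll
    return minimize_scalar(negloglik, bounds=(-0.99, 0.99), method="bounded").x

# hypothetical 3x3 contingency table and assumed marginal category probabilities
counts = [[120, 40, 10], [50, 90, 40], [10, 45, 95]]
print(adjusted_polychoric(counts, probs_x=[0.2, 0.5, 0.3], probs_y=[0.25, 0.5, 0.25]))
```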

  • Equivalence testing for linear regression.
    We introduce equivalence testing procedures for linear regression analyses. Such tests can be very useful for confirming the lack of a meaningful association between a continuous outcome and a continuous or binary predictor. Specifically, we propose an equivalence test for unstandardized regression coefficients and an equivalence test for semipartial correlation coefficients. We review how to define valid hypotheses, how to calculate p values, and how these tests compare to an alternative Bayesian approach with applications to examples in the literature. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
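    As a concrete illustration, a standard two one-sided tests (TOST) procedure for an unstandardized regression coefficient can be written in a few lines. The article's exact proposals may differ in detail, and the equivalence bound delta and the simulated data below are purely illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import t as t_dist

def tost_regression(X, y, coef_index, delta):
    """Two one-sided tests (TOST) that a regression slope lies within (-delta, delta).
    A small TOST p value supports equivalence (no meaningful association)."""
    model = sm.OLS(y, sm.add_constant(X)).fit()
    b, se, df = model.params[coef_index], model.bse[coef_index], model.df_resid
    p_lower = t_dist.sf((b + delta) / se, df)   # H0: b <= -delta
    p_upper = t_dist.cdf((b - delta) / se, df)  # H0: b >= +delta
    return max(p_lower, p_upper)

# hypothetical usage: test whether the slope of x1 is practically zero (|b| < 0.1)
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = 0.02 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200)
print(tost_regression(X, y, coef_index=1, delta=0.1))  # index 1 = x1 (0 = intercept)
```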

  • Waldian t tests: Sequential Bayesian t tests with controlled error probabilities.
    Bayesian t tests have become increasingly popular alternatives to null-hypothesis significance testing (NHST) in psychological research. In contrast to NHST, they allow for the quantification of evidence in favor of the null hypothesis and for optional stopping. A major drawback of Bayesian t tests, however, is that error probabilities of statistical decisions remain uncontrolled. Previous approaches in the literature to remedy this problem require time-consuming simulations to calibrate decision thresholds. In this article, we propose a sequential probability ratio test that combines Bayesian t tests with simple decision criteria developed by Abraham Wald in 1947. We discuss this sequential procedure, which we call Waldian t test, in the context of three recently proposed specifications of Bayesian t tests. Waldian t tests preserve the key idea of Bayesian t tests by assuming a distribution for the effect size under the alternative hypothesis. At the same time, they control expected frequentist error probabilities, with the nominal Type I and Type II error probabilities serving as upper bounds to the actual expected error rates under the specified statistical models. Thus, Waldian t tests are fully justified from both a Bayesian and a frequentist point of view. We highlight the relationship between Bayesian and frequentist error probabilities and critically discuss the implications of conventional stopping criteria for sequential Bayesian t tests. Finally, we provide a user-friendly web application that implements the proposed procedure for interested researchers. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
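    The general recipe, computing a Bayes factor sequentially and stopping once it crosses Wald's boundaries A = (1 - beta)/alpha and B = beta/(1 - alpha), can be sketched as follows. The default (JZS) Bayes factor, the Cauchy prior scale r, and the minimum sample size are illustrative assumptions; the authors' web application implements their exact specifications.

```python
import numpy as np
from scipy import integrate, stats

def jzs_bf10(x, r=np.sqrt(2) / 2):
    """Default (JZS) Bayes factor BF10 for a one-sample t test, with a
    Cauchy(0, r) prior on the standardized effect size (Rouder et al., 2009)."""
    n = len(x)
    t = stats.ttest_1samp(x, 0.0).statistic
    nu = n - 1
    null_like = (1 + t ** 2 / nu) ** (-(nu + 1) / 2)
    def integrand(g):
        return ((1 + n * g * r ** 2) ** -0.5
                * (1 + t ** 2 / ((1 + n * g * r ** 2) * nu)) ** (-(nu + 1) / 2)
                * (2 * np.pi) ** -0.5 * g ** -1.5 * np.exp(-1 / (2 * g)))
    alt_like, _ = integrate.quad(integrand, 0, np.inf)
    return alt_like / null_like

def waldian_t_test(data_stream, alpha=0.05, beta=0.05, n_min=10):
    """Sequential test: stop once the Bayes factor crosses Wald's thresholds."""
    A, B = (1 - beta) / alpha, beta / (1 - alpha)   # upper / lower boundaries
    x, bf = [], None
    for obs in data_stream:
        x.append(obs)
        if len(x) < n_min:
            continue
        bf = jzs_bf10(np.array(x))
        if bf >= A:
            return "accept H1", len(x), bf
        if bf <= B:
            return "accept H0", len(x), bf
    return "undecided", len(x), bf

# hypothetical usage: observations arriving one at a time from a study with a true effect
stream = iter(np.random.default_rng(7).normal(0.4, 1.0, size=500))
print(waldian_t_test(stream, alpha=0.05, beta=0.05))
```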

  • Who is and is not “average”? Random effects selection with spike-and-slab priors.
    Mixed-effects models are often employed to study individual differences in psychological science. Such analyses commonly entail testing whether between-subjects variability exists and including covariates to explain that variability. We argue that researchers have much to gain by explicitly focusing on the individual in individual differences research. To this end, we propose the spike-and-slab prior distribution for random effect selection in (generalized) mixed-effects models as a means to gain a more nuanced perspective of individual differences. The prior for each random effect is a two-component mixture consisting of a point-mass “spike” centered at zero and a diffuse “slab” capturing nonzero values. Effectively, such an approach allows researchers to answer questions about particular individuals; specifically, “Who is average?”, in the sense of deviating from an average effect, such as the population-averaged slope. We begin with an illustrative example, where the spike-and-slab formulation is used to select random intercepts in logistic regression. This demonstrates the utility of the proposed methodology in a simple setting while also highlighting its flexibility in fitting different kinds of models. We then extend the approach to random slopes that capture experimental effects. In two cognitive tasks, we show that despite there being little variability in the slopes, there were many individual differences in performance. In two simulation studies, we assess the ability of the proposed method to correctly identify (non)average individuals without compromising the mixed-effects estimates. We conclude with future directions for the presented methodology. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
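    The "who is (not) average" question reduces, for a single random effect with an approximately normal sampling distribution, to comparing the marginal likelihood of the spike (a point mass at zero) with that of the slab (a diffuse normal). The sketch below illustrates only that mixture calculation; it is a simplified stand-in for the full (generalized) mixed-model machinery, and all numbers are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def inclusion_probability(dev, se, slab_sd, prior_inclusion=0.5):
    """Posterior probability that an individual's random-effect deviation is
    nonzero ('slab') rather than exactly zero ('spike'), given a normal
    sampling distribution for the observed deviation."""
    m_spike = norm.pdf(dev, loc=0.0, scale=se)                          # point mass at 0
    m_slab = norm.pdf(dev, loc=0.0, scale=np.sqrt(se ** 2 + slab_sd ** 2))  # diffuse slab
    post_odds = (prior_inclusion / (1 - prior_inclusion)) * (m_slab / m_spike)
    return post_odds / (1 + post_odds)

# hypothetical individual whose slope deviates 0.30 from the population-averaged slope,
# with standard error 0.10 and an assumed slab standard deviation of 0.25
print(inclusion_probability(dev=0.30, se=0.10, slab_sd=0.25))
```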

  • Mixture multilevel vector-autoregressive modeling.
    With the rising popularity of intensive longitudinal research, the modeling techniques for such data are increasingly focused on individual differences. Here we present mixture multilevel vector-autoregressive modeling, which extends multilevel vector-autoregressive modeling by including a mixture to identify individuals with similar traits and dynamic processes. This exploratory model identifies mixture components, where each component refers to individuals with similarities in means (expressing traits), autoregressions, and cross-regressions (expressing dynamics), while allowing for some interindividual differences in these attributes. Key issues in modeling are discussed, and the issue of centering predictors is examined in a small simulation study. The proposed model is validated in a simulation study and used to analyze the affective data from the COGITO study. These data consist of samples from two different age groups of over 100 individuals each who were measured for about 100 days. We demonstrate the advantage of identifying mixture components in an exploratory manner by analyzing these heterogeneous samples jointly. The model identifies three distinct components, and we provide an interpretation of each component motivated by developmental psychology. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
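    To see what the model assumes, it helps to simulate from it: each mixture component has its own means (traits) and VAR(1) coefficients (dynamics), and individuals deviate mildly from their component's values. The component values, deviation SD, and innovation SD below are illustrative assumptions, not estimates from the COGITO data.

```python
import numpy as np

def simulate_mixture_mlvar(n_per_class, t_len, class_params, ind_sd=0.05, seed=0):
    """Simulate individuals from a mixture of multilevel VAR(1) processes.
    Each component has its own means (traits) and VAR(1) matrix (dynamics);
    individuals deviate slightly from their component's values."""
    rng = np.random.default_rng(seed)
    data, labels = [], []
    for c, (mu_c, A_c) in enumerate(class_params):
        for _ in range(n_per_class):
            mu = np.asarray(mu_c) + rng.normal(0, ind_sd, size=len(mu_c))
            A = np.asarray(A_c) + rng.normal(0, ind_sd, size=np.shape(A_c))
            y = np.zeros((t_len, len(mu)))
            y[0] = mu
            for t in range(1, t_len):
                y[t] = mu + A @ (y[t - 1] - mu) + rng.normal(0, 0.5, size=len(mu))
            data.append(y)
            labels.append(c)
    return data, np.array(labels)

# two hypothetical components: one with a higher positive-affect mean and stronger inertia,
# one with a lower mean and weaker inertia
params = [([4.0, 2.0], [[0.5, 0.1], [0.0, 0.4]]),
          ([2.5, 3.0], [[0.2, 0.0], [0.1, 0.2]])]
series, labels = simulate_mixture_mlvar(n_per_class=100, t_len=100, class_params=params)
```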

  • Comparing revised latent state–trait models including autoregressive effects.
    Understanding the longitudinal dynamics of behavior, and their stability and change over time, is of great interest in the social and behavioral sciences. Researchers investigate the degree to which an observed measure reflects stable components of the construct, situational fluctuations, method effects, or just random measurement error. An important question in such models is whether autoregressive effects occur between the residuals, as in the trait–state occasion model (TSO model), or between the state variables, as in the latent state–trait model with autoregression (LST-AR model). In this article, we compare the two approaches by applying revised latent state–trait theory (LST-R theory). Similarly to Eid et al. (2017) regarding the TSO model, we show how to formulate the LST-AR model using definitions from LST-R theory, and we discuss the practical implications. We demonstrate that the two models are equivalent when the trait loadings are allowed to vary over time. This is also true for bivariate model versions. These distinct but ultimately equivalent approaches to modeling latent states and traits with autoregressive effects are illustrated with a longitudinal study of cancer-related fatigue in Hodgkin lymphoma patients. (PsycInfo Database Record (c) 2024 APA, all rights reserved)

  • Refining the causal loop diagram: A tutorial for maximizing the contribution of domain expertise in computational system dynamics modeling.
    Complexity science and systems thinking are increasingly recognized as relevant paradigms for studying systems where biology, psychology, and socioenvironmental factors interact. The application of systems thinking, however, often stops at developing a conceptual model that visualizes the mapping of causal links within a system, e.g., a causal loop diagram (CLD). While this is an important contribution in itself, it is imperative to subsequently formulate a computable version of a CLD in order to interpret the dynamics of the modeled system and simulate “what if” scenarios. We propose to realize this by deriving knowledge from experts’ mental models in biopsychosocial domains. This article first describes the steps required for capturing expert knowledge in a CLD such that it may result in a computational system dynamics model (SDM). For this purpose, we introduce several annotations to the CLD that facilitate this intended conversion. This annotated CLD (aCLD) includes sources of evidence, intermediary variables, functional forms of causal links, and the distinction between uncertain and known-to-be-absent causal links. We propose an algorithm for developing an aCLD that includes these annotations. We then describe how to formulate an SDM based on the aCLD. The described steps for this conversion help identify, quantify, and potentially reduce sources of uncertainty and obtain confidence in the results of the SDM’s simulations. We utilize a running example that illustrates each step of this conversion process. The systematic approach described in this article facilitates and advances the application of computational science methods to biopsychosocial systems. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
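    Once the functional forms of the causal links are annotated, a computable system dynamics model is essentially a set of differential equations that can be integrated numerically. The toy two-variable loop below (hypothetical "stress" and "sleep quality" stocks with assumed link functions and parameters) only illustrates that conversion and the resulting "what if" simulation; it is not taken from the article's running example.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sdm(t, state, a=0.8, b=0.6, c=0.3):
    """Toy system dynamics model derived from a small, hypothetical annotated CLD."""
    stress, sleep = state
    d_stress = a * (1 - sleep) - c * stress          # poor sleep raises stress; stress decays
    d_sleep = -b * stress * sleep + c * (1 - sleep)  # stress erodes sleep; sleep recovers
    return [d_stress, d_sleep]

sol = solve_ivp(sdm, t_span=(0, 30), y0=[0.2, 0.9], t_eval=np.linspace(0, 30, 121))
# sol.y[0] is the simulated stress trajectory and sol.y[1] the sleep-quality trajectory;
# "what if" scenarios correspond to rerunning the simulation with altered parameters.
```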

  • Tutorial: Artificial neural networks to analyze single-case experimental designs.
    Since the start of the 21st century, few advances have had as far-reaching impact in science as the widespread adoption of artificial neural networks in fields as diverse as fundamental physics, clinical medicine, and psychology. In research methods, one promising area for the adoption of artificial neural networks involves the analysis of single-case experimental designs. Given that these types of networks are not generally part of training in the psychological sciences, the purpose of our article is to provide a step-by-step introduction to using artificial neural networks to analyze single-case designs. To this end, we trained a new model using data from a Monte Carlo simulation to analyze multiple baseline graphs and compared its outcomes with traditional methods of analysis. In addition to showing that artificial neural networks may produce less error than other methods, this tutorial provides information to facilitate the replication and extension of this line of work to other designs and datasets. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
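    A compressed version of that workflow, with all simulation settings and the network architecture as illustrative assumptions: generate AB (baseline/intervention) series by Monte Carlo, half of them containing a true level shift, and train a small feedforward network to classify whether an effect is present.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def simulate_ab_series(n_graphs, n_points=20, n_baseline=8, seed=0):
    """Monte Carlo AB (baseline/intervention) series; half contain a level shift."""
    rng = np.random.default_rng(seed)
    X, y = [], []
    for _ in range(n_graphs):
        effect = rng.integers(0, 2)                        # 1 = real intervention effect
        series = rng.normal(0, 1, n_points)
        if effect:
            series[n_baseline:] += rng.uniform(1.0, 2.0)   # shift after the phase change
        X.append(series)
        y.append(effect)
    return np.array(X), np.array(y)

X, y = simulate_ab_series(5000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```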

  • Efficient selection between hierarchical cognitive models: Cross-validation with variational Bayes.
    Model comparison is the cornerstone of theoretical progress in psychological research. Common practice overwhelmingly relies on tools that evaluate competing models by balancing in-sample descriptive adequacy against model flexibility, with modern approaches advocating the use of marginal likelihood for hierarchical cognitive models. Cross-validation is another popular approach but its implementation remains out of reach for cognitive models evaluated in a Bayesian hierarchical framework, with the major hurdle being its prohibitive computational cost. To address this issue, we develop novel algorithms that make variational Bayes (VB) inference for hierarchical models feasible and computationally efficient for complex cognitive models of substantive theoretical interest. It is well known that VB produces good estimates of the first moments of the parameters, which in turn yields good estimates of predictive densities. We thus develop a novel VB algorithm with Bayesian prediction as a tool to perform model comparison by cross-validation, which we refer to as CVVB. In particular, CVVB can be used as a model screening device that quickly identifies bad models. We demonstrate the utility of CVVB by revisiting a classic question in decision making research: what latent components of processing drive the ubiquitous speed-accuracy tradeoff? We demonstrate that CVVB strongly agrees with model comparison via marginal likelihood, yet achieves the outcome in much less time. Our approach brings cross-validation within reach of theoretically important psychological models, making it feasible to compare much larger families of hierarchically specified cognitive models than has previously been possible. To enhance the applicability of the algorithm, we provide Matlab code together with a user manual so users can easily implement VB and/or CVVB for the models considered in this article and their variants. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
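    The CVVB recipe itself (fit each candidate model by variational Bayes on the training folds, then score the summed log predictive density of the held-out folds using the VB first moments for prediction) is illustrated below on a deliberately trivial Gaussian model. The hierarchical cognitive models in the article are far more complex; the priors, fold count, and plug-in predictive here are illustrative assumptions only.

```python
import numpy as np

def vb_normal(x, mu0=0.0, lam0=0.01, a0=1.0, b0=1.0, n_iter=50):
    """Mean-field VB for x_i ~ N(mu, 1/tau) with a normal-gamma prior
    (standard textbook coordinate-ascent updates)."""
    n, xbar = len(x), np.mean(x)
    e_tau = a0 / b0
    for _ in range(n_iter):
        mu_n = (lam0 * mu0 + n * xbar) / (lam0 + n)        # q(mu) = N(mu_n, 1/lam_n)
        lam_n = (lam0 + n) * e_tau
        e_mu, e_mu2 = mu_n, mu_n ** 2 + 1.0 / lam_n
        a_n = a0 + (n + 1) / 2                             # q(tau) = Gamma(a_n, b_n)
        b_n = b0 + 0.5 * (np.sum(x ** 2) - 2 * e_mu * np.sum(x) + n * e_mu2
                          + lam0 * (e_mu2 - 2 * mu0 * e_mu + mu0 ** 2))
        e_tau = a_n / b_n
    return mu_n, e_tau   # VB first moments: posterior mean and expected precision

def cv_log_predictive(x, n_folds=5, **prior):
    """CVVB-style score: fit by VB on training folds, sum the log predictive
    density of held-out points (plug-in normal using the VB first moments)."""
    idx = np.array_split(np.random.default_rng(0).permutation(len(x)), n_folds)
    score = 0.0
    for test in idx:
        train = np.setdiff1d(np.arange(len(x)), test)
        mu, tau = vb_normal(x[train], **prior)
        score += np.sum(-0.5 * np.log(2 * np.pi / tau) - 0.5 * tau * (x[test] - mu) ** 2)
    return score

# hypothetical comparison: a free-mean model versus a mean-fixed-at-zero model,
# ranked by held-out log predictive density (higher is better)
x = np.random.default_rng(1).normal(0.3, 1.0, size=200)
print("free mean :", cv_log_predictive(x, lam0=0.01))
print("zero mean :", cv_log_predictive(x, mu0=0.0, lam0=1e6))
```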


