Psychological Methods - Vol 22, Iss 4

Psychological Methods is devoted to the development and dissemination of methods for collecting, analyzing, understanding, and interpreting psychological data. Its purpose is the dissemination of innovations in research design, measurement, methodology, and quantitative and qualitative analysis to the psychological community; its further purpose is to promote effective communication about related substantive and methodological issues.
Copyright 2018 American Psychological Association
  • Bayesian estimation and modeling: Editorial to the second special issue on Bayesian data analysis.
    This editorial accompanies the second special issue on Bayesian data analysis published in this journal. The emphases of this issue are Bayesian estimation and modeling. In this editorial, we outline the basics of current Bayesian estimation techniques and some notable developments in the statistical literature, as well as adaptations and extensions by psychological researchers to better tailor these techniques to modeling applications in psychology. We end with a discussion of the future outlook for Bayesian data analysis in psychology. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
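As a generic, minimal illustration of the kind of estimation machinery the editorial surveys, the sketch below runs a random-walk Metropolis sampler for the mean of a normal sample; all data and tuning values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 50 draws from a normal with unknown mean (true mean 2, sd 1).
y = rng.normal(loc=2.0, scale=1.0, size=50)

# Log posterior for the mean, assuming known sd = 1 and a N(0, 10^2) prior.
def log_post(mu):
    log_prior = -0.5 * (mu / 10.0) ** 2
    log_lik = -0.5 * np.sum((y - mu) ** 2)
    return log_prior + log_lik

# Random-walk Metropolis: propose a move, accept with prob min(1, ratio).
mu, samples = 0.0, []
for _ in range(5000):
    prop = mu + rng.normal(scale=0.5)
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop
    samples.append(mu)

posterior = np.array(samples[1000:])  # discard burn-in
```

With a nearly flat prior, the posterior mean tracks the sample mean and the posterior sd approximates the usual standard error; more elaborate samplers in modern software follow the same accept/reject logic.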

  • Using phantom variables in structural equation modeling to assess model sensitivity to external misspecification.
    External misspecification, the omission of key variables from a structural model, can fundamentally alter the inferences one makes without those variables present. This article presents two strategies for dealing with omitted variables: a fixed-parameter approach, which incorporates the omitted variable into the model as a phantom variable whose associated parameter values are all fixed, and a random-parameter approach, which specifies prior distributions for all of the phantom variable's associated parameter values under a Bayesian framework. The logic and implementation of these methods are discussed and demonstrated on an applied example from the educational psychology literature. We argue that such external misspecification sensitivity analyses should become a routine part of measured- and latent-variable modeling whenever the inclusion of all salient variables is in question. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
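The fixed-parameter idea can be sketched outside SEM software with standardized regression algebra: fix assumed values for the phantom variable's effect and its correlation with the predictor, then see how the focal slope shifts. Everything below (effect sizes, grid values) is hypothetical, and a plain regression stands in for a full structural model.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20000

# Generate a standardized predictor x and an omitted variable z, corr 0.4.
z = rng.normal(size=n)
x = 0.4 * z + np.sqrt(1 - 0.4**2) * rng.normal(size=n)
y = 0.5 * x + 0.3 * z + rng.normal(size=n)

# Observed slope from the misspecified model that omits z.
b_obs = np.dot(x, y) / np.dot(x, x)

# Fixed-parameter ("phantom variable") sensitivity analysis: for each assumed
# effect g of z on y and assumed correlation rho between x and z, the
# adjusted slope is b_obs - g * rho (standardized omitted-variable algebra).
for g in (0.0, 0.3, 0.6):
    for rho in (0.2, 0.4):
        adjusted = b_obs - g * rho
        print(f"g={g:.1f} rho={rho:.1f} adjusted slope={adjusted:.3f}")
```

Sweeping the grid shows how sensitive the focal inference is to plausible omitted-variable scenarios; at the generating values (g = 0.3, rho = 0.4) the adjusted slope recovers the true effect.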

  • Distinguishing outcomes from indicators via Bayesian modeling.
    A conceptual distinction is drawn between indicators, which serve to define latent variables, and outcomes, which do not. However, commonly used frequentist and Bayesian estimation procedures do not honor this distinction. They allow the outcomes to influence the latent variables and the measurement model parameters for the indicators, rendering the latent variables subject to interpretational confounding. Modified Bayesian procedures that preclude this are advanced, along with procedures for conducting diagnostic model-data fit analyses. These are studied in a simulation, where they outperform existing strategies, and illustrated with an example. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

  • Moderation analysis with missing data in the predictors.
    The most widely used statistical model for conducting moderation analysis is the moderated multiple regression (MMR) model. In MMR modeling, missing data pose a particular challenge because the interaction term is a product of two or more variables and is thus a nonlinear function of the involved variables. In this study, we consider a simple MMR model, where the effect of the focal predictor X on the outcome Y is moderated by a moderator U. The primary interest is in ways of estimating and testing the moderation effect in the presence of missing data in X. We focus mainly on cases in which X is missing completely at random (MCAR) or missing at random (MAR). Three methods are compared: (a) normal-distribution-based maximum likelihood estimation (NML); (b) normal-distribution-based multiple imputation (NMI); and (c) Bayesian estimation (BE). Via simulations, we found that NML and NMI can lead to biased estimates of moderation effects under the MAR missingness mechanism. The BE method outperformed NMI and NML for MMR modeling with missing data in the focal predictor when missingness depended on the moderator and/or auxiliary variables and the distribution of the focal predictor was correctly specified. More robust BE methods that address misspecification of the focal predictor's distribution are still needed. An empirical example illustrates the application of the methods together with a simple sensitivity analysis. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
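The MMR model the abstract starts from is just a regression with a product term; a complete-data sketch (all generating coefficients hypothetical) looks like:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000

# Simulate the simple MMR setup: outcome Y, focal predictor X, moderator U.
u = rng.normal(size=n)
x = rng.normal(size=n)
y = 1.0 + 0.4 * x + 0.2 * u + 0.5 * x * u + rng.normal(size=n)

# Design matrix with intercept, main effects, and the product term.
design = np.column_stack([np.ones(n), x, u, x * u])
coefs, *_ = np.linalg.lstsq(design, y, rcond=None)
b0, b1, b2, b3 = coefs  # b3 estimates the moderation effect
```

The missing-data problem the article studies arises when entries of `x` (and hence of the product `x * u`) are unobserved, so the product column can no longer be formed directly; the three compared methods differ in how they handle that nonlinearity.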

  • Bayesian dynamic mediation analysis.
    Most existing methods for mediation analysis assume that mediation is a stationary, time-invariant process, which overlooks the inherently dynamic nature of many human psychological processes and behavioral activities. In this article, we consider mediation as a dynamic process that continuously changes over time. We propose Bayesian multilevel time-varying coefficient models to describe and estimate such dynamic mediation effects. By taking a nonparametric penalized spline approach, the proposed method is flexible and able to accommodate any shape of the relationship between time and mediation effects. Simulation studies show that the proposed method works well and faithfully reflects the true nature of the mediation process. By modeling the mediation effect nonparametrically as a continuous function of time, our method provides a valuable tool to help researchers obtain a more complete understanding of the dynamic nature of the mediation process underlying psychological and behavioral phenomena. We also briefly discuss an alternative approach that uses a dynamic autoregressive mediation model to estimate the dynamic mediation effect. Computer code is provided to implement the proposed Bayesian dynamic mediation analysis. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
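As a crude stand-in for the penalized-spline estimator, one can estimate the a and b paths separately at each time point and take their product, giving a pointwise picture of a time-varying indirect effect. The generating values below are hypothetical, with many subjects observed per time point.

```python
import numpy as np

rng = np.random.default_rng(5)
T, n = 20, 500  # time points, subjects per time point

indirect = []
for t in range(T):
    a_t = 0.2 + 0.6 * t / (T - 1)  # true X -> M path grows over time
    b_t = 0.5                      # true M -> Y path is constant
    x = rng.normal(size=n)
    m = a_t * x + rng.normal(size=n)
    y = b_t * m + 0.3 * x + rng.normal(size=n)

    # a path: regress M on X; b path: regress Y on M controlling for X.
    a_hat = np.dot(x, m) / np.dot(x, x)
    design = np.column_stack([np.ones(n), m, x])
    b_hat = np.linalg.lstsq(design, y, rcond=None)[0][1]
    indirect.append(a_hat * b_hat)

indirect = np.array(indirect)  # pointwise mediation effect over time
```

The spline approach the authors propose smooths this kind of pointwise estimate into a continuous function of time and quantifies its uncertainty within one Bayesian model.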

  • An alternative to post hoc model modification in confirmatory factor analysis: The Bayesian lasso.
    As a commonly used tool for operationalizing measurement models, confirmatory factor analysis (CFA) requires strong assumptions that can lead to a poor fit of the model to real data. The post hoc modification model approach attempts to improve CFA fit through the use of modification indexes for identifying significant correlated residual error terms. We analyzed a 28-item emotion measure collected for n = 175 participants. The post hoc modification approach indicated that 90 item-pair errors were significantly correlated, which demonstrated the challenge in using a modification index, as the error terms must be individually modified as a sequence. Additionally, the post hoc modification approach cannot guarantee a positive definite covariance matrix for the error terms. We propose a method that enables the entire inverse residual covariance matrix to be modeled as a sparse positive definite matrix that contains only a few off-diagonal elements bounded away from zero. This method circumvents the problem of having to handle correlated residual terms sequentially. By assigning a Lasso prior to the inverse covariance matrix, this Bayesian method achieves model parsimony as well as an identifiable model. Both simulated and real data sets were analyzed to evaluate the validity, robustness, and practical usefulness of the proposed procedure. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

  • Using expert knowledge for test linking.
    Linking and equating procedures are used to make the results of different test forms comparable. In cases where no assumption of randomly equivalent groups can be made, some form of linking design is used. In practice, the amount of data available to link the two tests is often very limited for logistic and security reasons, which reduces the precision of linking procedures. This study proposes to enhance the quality of linking based on sparse data by using Bayesian methods that combine the information in the linking data with background information captured in informative prior distributions. We propose two methods for eliciting prior knowledge about the difference in difficulty of two tests from subject-matter experts and explain how the results can be used to specify priors. To illustrate the proposed methods and evaluate the quality of linking with and without informative priors, an empirical example of linking primary school mathematics tests is presented. The results suggest that informative priors can increase the precision of linking without decreasing accuracy. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
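When the elicited prior and the sparse linking data are both summarized as normals with known sampling sd, the gain in precision comes from a conjugate update; all numbers below are hypothetical.

```python
import numpy as np

# Expert-elicited prior for the difficulty difference between two test forms,
# summarized as a normal distribution (hypothetical values).
prior_mean, prior_sd = 0.5, 0.4

# Sparse linking data: observed score differences for a few examinees who
# took both forms (hypothetical), assumed N(delta, sigma) with sigma known.
diffs = np.array([0.9, 0.2, 1.1, 0.4, 0.7])
sigma = 1.0

# Conjugate normal update: precisions add, means are precision-weighted.
n = len(diffs)
post_prec = 1 / prior_sd**2 + n / sigma**2
post_mean = (prior_mean / prior_sd**2 + diffs.sum() / sigma**2) / post_prec
post_sd = post_prec ** -0.5

# With a flat prior the sd would be sigma / sqrt(n); the informative prior
# yields a smaller posterior sd, i.e., more precise linking.
flat_sd = sigma / np.sqrt(n)
```

Whether the extra precision comes without a loss of accuracy depends on the prior being roughly right, which is exactly what the elicitation methods in the article aim to ensure.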

  • Bayesian models for semicontinuous outcomes in rolling admission therapy groups.
    Alcohol and other drug abuse are frequently treated in a group therapy setting. If two clients attend some of the same sessions, we might expect that, on average, their posttreatment outcomes would be more similar than if they had not attended any sessions together. Hence, if participants are allowed to enroll in therapy on a rolling basis, irregular patterns of session attendance can induce complex correlations among participant outcomes. Previous work has accounted for common session attendance by modeling random effects for each therapy session, which map to participant outcomes via a multiple membership construction when the outcome measures are normally distributed. In the case of alcohol and other drug use interventions, however, a substantial fraction of participants often report zero use after treatment. We therefore extend the earlier models to semicontinuous outcomes, that is, outcomes that are a mixture of continuous and discrete distributions. This results in multivariate session effects, for which we allow temporal dependencies of various orders. We find that modern Bayesian statistical methods and software allow users to efficiently estimate nonstandard models such as these. We illustrate our methods using data from a group-based intervention to treat substance abuse and depression, focusing on the outcome of average number of drinks per day. We find that the intervention is associated with a drop in the probability of any drinking, but find no evidence of a change in the amount of drinking, conditional on some drinking. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
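A two-part summary is the simplest version of the semicontinuous idea: model whether any drinking occurred separately from how much, given some. The simulation below uses hypothetical generating values, with plain group proportions and means standing in for the full Bayesian multiple membership model.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 4000

# Simulate drinks-per-day for control (0) and treatment (1) groups:
# treatment lowers the probability of any drinking but, among drinkers,
# the amount is unchanged (hypothetical generating values).
group = rng.integers(0, 2, size=n)
p_any = np.where(group == 1, 0.5, 0.7)
any_drink = rng.uniform(size=n) < p_any
amount = np.exp(rng.normal(loc=1.0, scale=0.5, size=n))  # lognormal amount
drinks = np.where(any_drink, amount, 0.0)

# Two-part summary of the semicontinuous outcome, per group:
# part 1: probability of any drinking; part 2: mean amount given drinking.
summary = {}
for g in (0, 1):
    mask = group == g
    pos = drinks[mask][drinks[mask] > 0]
    summary[g] = ((drinks[mask] > 0).mean(), pos.mean())
```

The pattern the paper reports, an effect on the zero part but not on the positive part, shows up here as a gap in `P(any)` between groups with near-identical conditional means.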

  • Bayesian unknown change-point models to investigate immediacy in single case designs.
    Although immediacy is one of the necessary criteria to show strong evidence of a causal relation in single case designs (SCDs), no inferential statistical tool is currently used to demonstrate it. We propose a Bayesian unknown change-point model to investigate and quantify immediacy in SCD analysis. Unlike visual analysis that considers only 3–5 observations in consecutive phases to investigate immediacy, this model considers all data points. Immediacy is indicated when the posterior distribution of the unknown change-point is narrow around the true value of the change-point. This model can accommodate delayed effects. Monte Carlo simulation for a 2-phase design shows that the posterior standard deviations of the change-points decrease with increase in standardized mean difference between phases and decrease in test length. This method is illustrated with real data. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
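A bare-bones version of the unknown change-point posterior can be computed on a grid, here with the phase means profiled out at their least-squares values rather than integrated over priors (a simplification of the full Bayesian model); the two-phase series is simulated with hypothetical values.

```python
import numpy as np

rng = np.random.default_rng(2)

# A two-phase single-case series: baseline mean 0, treatment mean 3,
# true change-point after observation 10 (hypothetical data).
y = np.concatenate([rng.normal(0, 1, 10), rng.normal(3, 1, 10)])
T = len(y)

# Discrete-uniform prior over change-points; normal likelihood with each
# phase mean replaced by its within-phase sample mean.
log_lik = np.empty(T - 1)
for c in range(1, T):  # c = index where phase 2 starts
    rss = ((y[:c] - y[:c].mean()) ** 2).sum() + ((y[c:] - y[c:].mean()) ** 2).sum()
    log_lik[c - 1] = -0.5 * rss

post = np.exp(log_lik - log_lik.max())
post /= post.sum()  # posterior over change-point locations 1..T-1

# A posterior concentrated narrowly at the true location indicates immediacy.
map_cp = np.argmax(post) + 1
```

With a large mean shift the posterior piles up at the true change-point, mirroring the article's criterion that a narrow posterior around the change-point signals an immediate effect.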

  • Multilevel modeling of single-case data: A comparison of maximum likelihood and Bayesian estimation.
    The focus of this article is to describe Bayesian estimation, including the construction of prior distributions, and to compare parameter recovery under the Bayesian framework (using weakly informative priors) and the maximum likelihood (ML) framework in the context of multilevel modeling of single-case experimental data. Bayesian estimation results were found to be similar to ML estimation results in terms of the treatment effect estimates, regardless of the functional form and degree of information included in the prior specification. In terms of the variance component estimates, both the ML and Bayesian estimation procedures result in biased and less precise variance estimates when the number of participants is small (i.e., 3). When the number of participants is increased to 5 or 7, the relative bias is close to 5% and more precise estimates are obtained for all approaches, except for the inverse-Wishart prior using the identity matrix. When a more informative prior was added, more precise estimates of the fixed and random effects were obtained, even with only 3 participants. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

  • Developing constraint in Bayesian mixed models.
    Model comparison in Bayesian mixed models is becoming popular in psychological science. Here we develop a set of nested models that account for order restrictions across individuals in psychological tasks. An order-restricted model addresses "does everybody" questions, such as "Does everybody show the usual Stroop effect?" or "Does everybody respond more quickly to intense noises than to subtle ones?" The crux of the modeling is the instantiation of tens or hundreds of order restrictions simultaneously, one for each participant. To our knowledge, the problem is intractable in frequentist contexts but relatively straightforward in Bayesian ones. We develop a Bayes factor model-comparison strategy using Zellner and Siow's default g-priors appropriate for assessing whether effects obey equality and order restrictions. We apply the methodology to seven data sets from Stroop, Simon, and Eriksen interference tasks. Not too surprisingly, we find that everybody Stroops: for all people, congruent colors are truly named more quickly than incongruent ones. But, perhaps surprisingly, these order constraints are violated for some people in the Simon task; that is, for these people spatially incongruent responses truly occur more quickly than congruent ones. Implications of the modeling and conjectures about the task-related differences are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
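A common Bayesian shortcut for order restrictions (the encompassing-prior approach, not necessarily the authors' exact g-prior machinery) divides the posterior probability of the restriction by its prior probability; the posterior draws below are simulated stand-ins for output from a hierarchical model.

```python
import numpy as np

rng = np.random.default_rng(4)
n_people, n_samples = 8, 20000

# Hypothetical posterior draws of each person's Stroop effect, e.g. from a
# hierarchical model; simulated here as independent normals for the sketch.
post_means = rng.uniform(0.3, 0.9, size=n_people)
draws = rng.normal(post_means, 0.2, size=(n_samples, n_people))

# Encompassing-prior logic: the Bayes factor for "everybody's effect is
# positive" vs. the unconstrained model is the posterior probability of the
# restriction divided by its prior probability (0.5 ** n under a prior that
# is symmetric around zero for each person).
post_prob = np.mean(np.all(draws > 0, axis=1))
prior_prob = 0.5 ** n_people
bf_restricted = post_prob / prior_prob
```

Because the prior probability of all effects being positive shrinks geometrically in the number of participants, even a modest posterior probability of the joint restriction can translate into strong evidence that "everybody" shows the effect.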

  • A Bayesian “fill-in” method for correcting for publication bias in meta-analysis.
    Publication bias occurs when the statistical significance or direction of results differs between published and unpublished studies after controlling for study quality; it threatens the validity of systematic reviews and summaries of results on a research topic. Conclusions based on a meta-analysis of published studies without correcting for publication bias are often optimistic and biased toward significance or positivity. We propose a Bayesian fill-in meta-analysis (BALM) method that adjusts for publication bias and estimates the population effect size while accommodating different assumptions about the bias mechanism. Simulation studies were conducted to examine the performance of BALM and compare it with several commonly used and recently proposed publication bias correction methods. The results suggest that BALM yields small biases, small RMSE values, and close-to-nominal-level coverage rates in inferring the population effect size and the between-study variance, and outperforms the other examined correction methods across a wide range of simulation scenarios when the publication bias mechanism is correctly specified. The performance of BALM is relatively sensitive to the assumed publication bias mechanism; even with a misspecified mechanism, however, BALM still outperforms naive methods that do not correct for publication bias in inferring the overall population effect size. BALM is applied to two meta-analysis case studies to illustrate its use in real-life situations. R functions are provided to facilitate implementation, along with guidelines on how to specify publication bias mechanisms in BALM and how to report overall effect size estimates. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
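BALM itself is beyond a short sketch, but the selection mechanism it must model and correct for is easy to simulate: when only significant studies are published, the naive pooled estimate becomes optimistic. All numbers below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(9)
true_effect, n_studies = 0.2, 2000

# Each study estimates the effect with its own sampling error.
se = rng.uniform(0.1, 0.3, size=n_studies)
estimates = rng.normal(true_effect, se)

# Selection mechanism: only studies significant at z > 1.96 are published
# (one simple mechanism of the kind a correction method must assume).
published = estimates[estimates / se > 1.96]

naive = published.mean()     # optimistic, biased toward significance
all_mean = estimates.mean()  # what an unbiased literature would show
```

The gap between `naive` and `all_mean` is the bias a fill-in method tries to undo, and its size depends on the selection mechanism, which is why BALM's performance hinges on that mechanism being specified roughly correctly.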
