ψ Psychology Research on the Web



Psychological Methods - Vol 22, Iss 3

Psychological Methods is devoted to the development and dissemination of methods for collecting, analyzing, understanding, and interpreting psychological data. Its purpose is the dissemination of innovations in research design, measurement, methodology, and quantitative and qualitative analysis to the psychological community; its further purpose is to promote effective communication about related substantive and methodological issues.
Copyright 2017 American Psychological Association
  • Changing dynamics: Time-varying autoregressive models using generalized additive modeling.
    In psychology, the use of intensive longitudinal data has steeply increased during the past decade. As a result, studying temporal dependencies in such data with autoregressive modeling is becoming common practice. However, standard autoregressive models are often suboptimal as they assume that parameters are time-invariant. This is problematic if changing dynamics (e.g., changes in the temporal dependency of a process) govern the time series. Often a change in the process, such as emotional well-being during therapy, is the very reason why it is interesting and important to study psychological dynamics. As a result, there is a need for an easily applicable method for studying such nonstationary processes that result from changing dynamics. In this article we present such a tool: the semiparametric TV-AR model. We show with a simulation study and an empirical application that the TV-AR model can approximate nonstationary processes well if there are at least 100 time points available and no unknown abrupt changes in the data. Notably, no prior knowledge of the processes that drive change in the dynamic structure is necessary. We conclude that the TV-AR model has significant potential for studying changing dynamics in psychology. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
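
The core idea of the TV-AR approach — an autoregressive parameter that varies smoothly over time — can be illustrated with a crude rolling-window estimator. This is a stand-in sketch, not the authors' spline-based implementation; all values are illustrative.

```python
import random

random.seed(1)

# Simulate an AR(1) process whose autoregressive parameter drifts
# linearly from 0.1 to 0.8 -- a simple "changing dynamics" scenario.
T = 600
y = [0.0]
for t in range(1, T):
    phi_t = 0.1 + 0.7 * t / (T - 1)
    y.append(phi_t * y[-1] + random.gauss(0.0, 1.0))

def local_ar1(series, center, half_width):
    """Estimate the AR(1) coefficient in a window around `center`
    by OLS regression of y[t] on y[t-1]."""
    lo = max(1, center - half_width)
    hi = min(len(series), center + half_width)
    x = series[lo - 1:hi - 1]   # lagged values
    z = series[lo:hi]           # current values
    mx = sum(x) / len(x)
    mz = sum(z) / len(z)
    num = sum((a - mx) * (b - mz) for a, b in zip(x, z))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

phi_early = local_ar1(y, 100, 75)   # near the low early parameter value
phi_late = local_ar1(y, 500, 75)    # near the high late parameter value
```

A semiparametric TV-AR model replaces the hard window with smooth basis functions, which is why it needs on the order of 100 time points and no abrupt unknown changes.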

  • Maximum likelihood versus multiple imputation for missing data in small longitudinal samples with nonnormality.
    The study examined the performance of maximum likelihood (ML) and multiple imputation (MI) procedures for missing data in longitudinal research when fitting latent growth models. A Monte Carlo simulation study was conducted with conditions of small sample size, intermittent missing data, and nonnormality. The results indicated that ML tended to display slightly smaller degrees of bias than MI across missing completely at random (MCAR) and missing at random (MAR) conditions. Although specification of prior information in the MI imputation-posterior (I-P) phase influenced the performance of MI, especially with nonnormal small samples and missing not at random (MNAR), the impact of this tight specification was not dramatic. Several corrected ML test statistics showed proper rejection rates across research designs, whereas posterior predictive p values for MI methods were more likely to be influenced by distribution shape and yielded higher rejection rates in MCAR and MAR than in MNAR. In conclusion, ML appears to be preferable to MI in research conditions with small samples, missing data, and multivariate nonnormality, whether or not strong prior information for the I-P phase of MI analysis is available. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
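
The pooling step that MI relies on (Rubin's rules) can be sketched in a few lines, using a deliberately simple normal imputation model under MCAR; everything below is illustrative, not the study's simulation design.

```python
import random
import statistics

random.seed(7)

# Complete data drawn from N(5, 1); then make ~30% of the values missing
# completely at random (MCAR).
full = [random.gauss(5.0, 1.0) for _ in range(400)]
observed = [v for v in full if random.random() > 0.30]
n_missing = len(full) - len(observed)

def impute_once(obs, n_miss):
    """One stochastic imputation: draw each missing value from the normal
    approximation to the observed data (a deliberately simple model)."""
    mu, sd = statistics.mean(obs), statistics.stdev(obs)
    return obs + [random.gauss(mu, sd) for _ in range(n_miss)]

m = 20
estimates, within_vars = [], []
for _ in range(m):
    completed = impute_once(observed, n_missing)
    estimates.append(statistics.mean(completed))
    within_vars.append(statistics.variance(completed) / len(completed))

# Rubin's rules: pool the m point estimates and their sampling variances.
q_bar = statistics.mean(estimates)            # pooled point estimate
u_bar = statistics.mean(within_vars)          # within-imputation variance
b = statistics.variance(estimates)            # between-imputation variance
total_var = u_bar + (1 + 1 / m) * b           # total pooled variance
```

The between-imputation term is what distinguishes MI from single imputation: total variance is always at least the average within-imputation variance.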

  • An empirical Kaiser criterion.
    In exploratory factor analysis (EFA), most popular methods for dimensionality assessment such as the scree plot, the Kaiser criterion, or—the current gold standard—parallel analysis, are based on eigenvalues of the correlation matrix. To further understanding and development of factor retention methods, results on population and sample eigenvalue distributions are introduced based on random matrix theory and Monte Carlo simulations. These results are used to develop a new factor retention method, the Empirical Kaiser Criterion. The performance of the Empirical Kaiser Criterion and parallel analysis is examined in typical research settings, with multiple scales that are intended to be relatively short but still reliable. Theoretical and simulation results illustrate that the new Empirical Kaiser Criterion performs as well as parallel analysis in typical research settings with uncorrelated scales, but much better when scales are both correlated and short. We conclude that the Empirical Kaiser Criterion is a powerful and promising factor retention method, because it is based on distribution theory of eigenvalues, shows good performance, is easily visualized and computed, and is useful for power analysis and sample size planning for EFA. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
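
The reference-eigenvalue logic of the Empirical Kaiser Criterion can be sketched as follows, assuming the commonly cited form of the rule (each reference value rescales the Marchenko–Pastur upper bound by the variance not yet extracted, floored at the classical Kaiser value of 1); details may differ from the authors' exact implementation, and the simulated two-factor data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate n responses to p items loading on two correlated factors
# (3 items per factor, loadings 0.8, factor correlation 0.3).
n, p = 500, 6
loadings = np.zeros((p, 2))
loadings[:3, 0] = 0.8
loadings[3:, 1] = 0.8
phi = np.array([[1.0, 0.3], [0.3, 1.0]])
eta = rng.multivariate_normal(np.zeros(2), phi, size=n)
x = eta @ loadings.T + rng.normal(0.0, 0.6, size=(n, p))

# Sample eigenvalues of the item correlation matrix, largest first.
lam = np.sort(np.linalg.eigvalsh(np.corrcoef(x, rowvar=False)))[::-1]

def ekc_retain(eigvals, n, p):
    """Number of factors retained: keep eigenvalue j while it exceeds a
    reference value derived from the Marchenko-Pastur upper bound,
    adjusted for variance already extracted and floored at 1."""
    retained, extracted = 0, 0.0
    for j, val in enumerate(eigvals):
        ref = max((p - extracted) / (p - j) * (1 + np.sqrt(p / n)) ** 2, 1.0)
        if val > ref:
            retained += 1
            extracted += val
        else:
            break
    return retained

n_factors = ekc_retain(lam, n, p)
```

Unlike parallel analysis, no simulated reference datasets are needed: the reference values come from distribution theory, which is what makes the criterion easy to compute and to use for planning.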

  • Type I error rates and power of several versions of scaled chi-square difference tests in investigations of measurement invariance.
    A Monte Carlo simulation study was conducted to investigate Type I error rates and power of several corrections for nonnormality to the normal theory chi-square difference test in the context of evaluating measurement invariance via structural equation modeling. Studied statistics include the uncorrected difference test, DML, Satorra and Bentler’s (2001) original correction, DSB1, Satorra and Bentler’s (2010) strictly positive correction, DSB10, and a hybrid procedure, DSBH (Asparouhov & Muthén, 2013). Multiple-group data were generated from confirmatory factor analytic population models invariant on all parameters, or lacking invariance on residual variances, indicator intercepts, or factor loadings. Conditions varied in terms of the number of indicators associated with each factor in the population model, the location of noninvariance (if any), sample size, sample size ratio in the 2 groups, and nature of nonnormality. Type I error rates and power of corrected statistics were evaluated for a series of 4 nested invariance models. Overall, the strictly positive correction, DSB10, is the best and most consistently performing statistic, as it was found to be much less sensitive than the original correction, DSB1, to model size and sample evenness. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
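
The original correction DSB1 discussed above has a simple closed form; a sketch, assuming the commonly cited version of the Satorra–Bentler (2001) formula, with made-up fit values:

```python
def scaled_chi_square_diff(t0, c0, d0, t1, c1, d1):
    """Satorra-Bentler (2001) scaled chi-square difference test (DSB1).

    t0, c0, d0: uncorrected ML chi-square, scaling correction factor,
                and df of the more restricted model;
    t1, c1, d1: the same quantities for the less restricted model.
    """
    cd = (d0 * c0 - d1 * c1) / (d0 - d1)   # difference-test scaling factor
    return (t0 - t1) / cd, d0 - d1

# Illustrative (made-up) fit results; the statistic is referred to a
# chi-square distribution with d0 - d1 degrees of freedom.
stat, df = scaled_chi_square_diff(120.0, 1.2, 50, 90.0, 1.1, 45)

# Note: cd can turn out negative in small samples, leaving the statistic
# undefined -- the problem the strictly positive DSB10 correction addresses.
```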

  • Testing measurement invariance in longitudinal data with ordered-categorical measures.
    A goal of developmental research is to examine individual changes in constructs over time. The accuracy of the models answering such research questions hinges on the assumption of longitudinal measurement invariance: The repeatedly measured variables need to represent the same construct in the same metric over time. Measurement invariance can be studied through factor models examining the relations between the observed indicators and the latent constructs. In longitudinal research, ordered-categorical indicators such as self- or observer-report Likert scales are commonly used, and these measures often do not approximate continuous normal distributions. The present didactic article extends previous work on measurement invariance to the longitudinal case for ordered-categorical indicators. We address a number of problems that commonly arise in testing measurement invariance with longitudinal data, including model identification and interpretation, sparse data, missing data, and estimation issues. We also develop a procedure and associated R program for gauging the practical significance of the violations of invariance. We illustrate these issues with an empirical example using a subscale from the Mexican American Cultural Values scale. Finally, we provide comparisons of the current capabilities of 3 major latent variable programs (lavaan, Mplus, OpenMx) and computer scripts for addressing longitudinal measurement invariance. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
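
The measurement model for ordered-categorical indicators treats each observed category as a cut of a continuous latent response at fixed thresholds; longitudinal invariance then requires the same loadings and thresholds at every wave. A minimal simulation sketch (thresholds and loadings are illustrative):

```python
import bisect
import random

random.seed(3)

# Illustrative thresholds cutting a continuous latent response y* into a
# 4-point ordered-categorical (Likert-type) scale.
thresholds = [-1.0, 0.0, 1.2]

def categorize(y_star, cuts):
    """Ordinal category (0 .. len(cuts)) for a latent response value."""
    return bisect.bisect_right(cuts, y_star)

# One indicator measured at two waves with the same loading and the same
# thresholds -- the invariant pattern that longitudinal testing verifies.
eta = [random.gauss(0.0, 1.0) for _ in range(1000)]      # stable trait
wave1 = [categorize(0.7 * e + random.gauss(0.0, 0.7), thresholds) for e in eta]
wave2 = [categorize(0.7 * e + random.gauss(0.0, 0.7), thresholds) for e in eta]
```

The resulting 4-point responses are far from continuous normal, which is why the categorical machinery (and software such as lavaan, Mplus, or OpenMx) is needed rather than ordinary linear factor models.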

  • A more general model for testing measurement invariance and differential item functioning.
    The evaluation of measurement invariance is an important step in establishing the validity and comparability of measurements across individuals. Most commonly, measurement invariance has been examined using 1 of 2 primary latent variable modeling approaches: the multiple groups model or the multiple-indicator multiple-cause (MIMIC) model. Both approaches offer opportunities to detect differential item functioning within multi-item scales, and thereby to test measurement invariance, but both approaches also have significant limitations. The multiple groups model allows one to examine the invariance of all model parameters but only across levels of a single categorical individual difference variable (e.g., ethnicity). In contrast, the MIMIC model permits both categorical and continuous individual difference variables (e.g., sex and age) but permits only a subset of the model parameters to vary as a function of these characteristics. The current article argues that moderated nonlinear factor analysis (MNLFA) constitutes an alternative, more flexible model for evaluating measurement invariance and differential item functioning. We show that the MNLFA subsumes and combines the strengths of the multiple group and MIMIC models, allowing for a full and simultaneous assessment of measurement invariance and differential item functioning across multiple categorical and/or continuous individual difference variables. The relationships between the MNLFA model and the multiple groups and MIMIC models are shown mathematically and via an empirical demonstration. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
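
The defining move of MNLFA — letting intercepts and loadings be functions of person covariates — can be written down directly. All coefficient values below are hypothetical and purely illustrative:

```python
# MNLFA sketch: measurement parameters of one item vary as linear
# functions of two covariates (hypothetical coefficients throughout).

def item_intercept(age, sex, nu0=2.0, nu_age=0.05, nu_sex=-0.3):
    """Item intercept moderated by age (centered years) and sex (0/1)."""
    return nu0 + nu_age * age + nu_sex * sex

def item_loading(age, sex, lam0=0.8, lam_age=-0.01, lam_sex=0.1):
    """Factor loading moderated by the same covariates; a nonzero lam_age
    or lam_sex signals differential item functioning on the loading."""
    return lam0 + lam_age * age + lam_sex * sex

def expected_item_score(eta, age, sex):
    """Model-implied item mean for a person with factor score eta."""
    return item_intercept(age, sex) + item_loading(age, sex) * eta
```

Setting all moderation coefficients to zero recovers an ordinary invariant factor model; constraining only some of them reproduces the MIMIC model's restriction of which parameters may vary.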

  • Specificity-enhanced reliability coefficients.
    Internal consistency reliability coefficients based on classical test theory, such as α, ω, λ₄, model-based ρxx, and the greatest lower bound ρglb, are computed as ratios of estimated common variance to total variance. Because they omit specific variance, they are downward biased and may fail to predict external criteria (McCrae et al., 2011). Some approaches for incorporating specific variance into reliability estimates are proposed and illustrated. The resulting specificity-enhanced coefficients α+, ω+, λ₄+, ρxx+ and ρglb+ provide improved estimates of reliability and thus may be worth reporting in addition to their classical counterparts. The correction for attenuation, Spearman–Brown, and maximal reliability formulas also are extended to allow specificity. Limitations, future work, and implications are discussed, including the role of specificity to quantify the extent to which items represent important facets or nuances (McCrae, 2015) of content. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
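
For contrast with the specificity-enhanced coefficients proposed above, here is the classical coefficient α they extend, computed by the standard formula (the α+ correction itself is not reproduced here; the tiny data set is illustrative):

```python
import statistics

def cronbach_alpha(items):
    """Coefficient alpha: `items` is a list of per-item score lists of
    equal length (one score per person per item)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var_sum = sum(statistics.variance(item) for item in items)
    return k / (k - 1) * (1 - item_var_sum / statistics.variance(totals))

# Tiny illustrative data set: 3 items scored by 4 people.
items = [[1, 2, 3, 4], [2, 3, 4, 5], [1, 3, 3, 5]]
alpha = cronbach_alpha(items)
```

The ratio structure — common variance over total variance — is exactly where specific variance drops out, which is the bias the specificity-enhanced α+ is designed to repair.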

  • Anomalous results in G-factor models: Explanations and alternatives.
    G-factor models such as the bifactor model and the hierarchical G-factor model are increasingly applied in psychology. Many applications of these models have produced anomalous and unexpected results that are often not in line with the theoretical assumptions on which these applications are based. Examples of such anomalous results are vanishing specific factors and irregular loading patterns. In this article, the authors show that from the perspective of stochastic measurement theory anomalous results are to be expected when G-factor models are applied to a single-level (rather than a 2-level) sampling process. The authors argue that the application of the bifactor model and related models requires a 2-level sampling process that is usually not present in empirical studies. They demonstrate how alternative models with a G-factor and specific factors can be derived that are better suited to the actual single-level sampling design that underlies most empirical studies. It is shown in detail how 2 alternative models, the bifactor-(S − 1) model and the bifactor-(S·I − 1) model, can be defined. The properties of these models are described and illustrated with an empirical example. Finally, further alternatives for analyzing multidimensional models are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
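
The structural difference between the standard bifactor model and the bifactor-(S − 1) model is visible in the loading pattern alone. A sketch for 9 items in 3 facets (1 = freely estimated loading, 0 = loading fixed to zero; the choice of facet 0 as the reference facet is illustrative):

```python
# Loading patterns: the standard bifactor model keeps one specific factor
# per facet; the bifactor-(S - 1) model drops the specific factor of a
# chosen reference facet, whose items then load on the G factor only.

ITEMS_PER_FACET, N_FACETS = 3, 3

def bifactor_pattern(drop_reference=False):
    n_specific = N_FACETS - 1 if drop_reference else N_FACETS
    pattern = []
    for i in range(ITEMS_PER_FACET * N_FACETS):
        facet = i // ITEMS_PER_FACET
        row = [1]   # every item loads on the general factor
        for s in range(n_specific):
            # In the (S - 1) variant the remaining specific factors belong
            # to facets 1..S-1; reference-facet items keep only G.
            target = s + 1 if drop_reference else s
            row.append(1 if facet == target else 0)
        pattern.append(row)
    return pattern
```

Anchoring the G factor in a reference facet this way gives it a well-defined meaning under a single-level sampling design, which is the article's remedy for the anomalies described above.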

  • A comparison of latent class, K-means, and K-median methods for clustering dichotomous data.
    The problem of partitioning a collection of objects based on their measurements on a set of dichotomous variables is a well-established problem in psychological research, with applications including clinical diagnosis, educational testing, cognitive categorization, and choice analysis. Latent class analysis and K-means clustering are popular methods for partitioning objects based on dichotomous measures in the psychological literature. The K-median clustering method has recently been touted as a potentially useful tool for psychological data and might be preferable to its close neighbor, K-means, when the variable measures are dichotomous. We conducted simulation-based comparisons of the latent class, K-means, and K-median approaches for partitioning dichotomous data. Although all 3 methods proved capable of recovering cluster structure, K-median clustering yielded the best average performance, followed closely by latent class analysis. We also report results for the 3 methods within the context of an application to transitive reasoning data, in which it was found that the 3 approaches can exhibit profound differences when applied to real data. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
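
For dichotomous data the K-median update has a convenient closed form: under the L1 (city-block) distance, the coordinate-wise median of 0/1 values is a majority vote, so cluster centers stay binary. A minimal sketch (naive deterministic initialization; real applications would use multiple starts, and the simulated profiles are illustrative):

```python
import random

random.seed(5)

def k_median_binary(data, k, n_iter=20):
    """K-median clustering of 0/1 tuples under the L1 distance; the
    median update reduces to a per-coordinate majority vote."""
    # Naive deterministic initialization: evenly spaced data points.
    step = max(1, len(data) // k)
    centers = [data[i * step] for i in range(k)]
    clusters = [[] for _ in range(k)]
    for _ in range(n_iter):
        clusters = [[] for _ in range(k)]
        for x in data:
            dists = [sum(abs(a - b) for a, b in zip(x, c)) for c in centers]
            clusters[dists.index(min(dists))].append(x)
        for j, members in enumerate(clusters):
            if members:   # majority vote per coordinate (ties go to 1)
                centers[j] = tuple(
                    1 if 2 * sum(col) >= len(members) else 0
                    for col in zip(*members))
    return centers, clusters

# Two well-separated binary response profiles with 10% response noise.
prototypes = [(1, 1, 1, 0, 0, 0), (0, 0, 0, 1, 1, 1)]
data = [
    tuple(b if random.random() > 0.1 else 1 - b for b in p)
    for p in prototypes
    for _ in range(30)
]
centers, clusters = k_median_binary(data, 2)
```

With well-separated profiles the recovered centers match the generating prototypes; the interesting comparisons in the article arise when the clusters overlap and the three methods start to diverge.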

  • In defense of causal-formative indicators: A minority report.
    Causal-formative indicators directly affect their corresponding latent variable. They run counter to the predominant view that indicators depend on latent variables and are thus often controversial. If present, such indicators have serious implications for factor analysis, reliability theory, item response theory, structural equation models, and most measurement approaches that are based on reflective or effect indicators. Psychological Methods has published a number of influential articles on causal and formative indicators and has also launched the first major backlash against them. This article examines 7 common criticisms of these indicators distilled from the literature: (a) A construct measured with “formative” indicators does not exist independently of its indicators; (b) Such indicators are causes rather than measures; (c) They imply multiple dimensions to a construct and this is a liability; (d) They are assumed to be error-free, which is unrealistic; (e) They are inherently subject to interpretational confounding; (f) They fail proportionality constraints; and (g) Their coefficients should be set in advance and not estimated. We summarize each of these criticisms and point out the flaws in the logic and evidence marshaled in their support. The most common problems are a failure to distinguish between what we call causal-formative and composite-formative indicators, tautological fallacies, and the presentation of issues common to all indicators as special problems of causal-formative indicators. We conclude that measurement theory needs (a) to incorporate these types of indicators, and (b) to better understand their similarities to and differences from traditional indicators. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

  • A call for theory to support the use of causal-formative indicators: A commentary on Bollen and Diamantopoulos (2017).
    In this issue, Bollen and Diamantopoulos (2017) defend causal-formative indicators against several common criticisms leveled by scholars who oppose their use. In doing so, the authors make several convincing assertions: Constructs exist independently from their measures; theory determines whether indicators cause or measure latent variables; and reflective and causal-formative indicators are both subject to interpretational confounding. However, despite being a well-reasoned, comprehensive defense of causal-formative indicators, no single article can address all of the issues associated with this debate. Thus, Bollen and Diamantopoulos leave a few fundamental issues unresolved. For example, how can researchers establish the reliability of indicators that may include measurement error? Moreover, how should researchers interpret disturbance terms that capture sources of influence related to both the empirical definition of the latent variable and to the theoretical definition of the construct? Relatedly, how should researchers reconcile the requirement for a census of causal-formative indicators with the knowledge that indicators are likely missing from the empirically estimated latent variable? This commentary develops 6 related research questions to draw attention to these fundamental issues, and to call for future research that can lead to the development of theory to guide the use of causal-formative indicators. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

  • Notes on measurement theory for causal-formative indicators: A reply to Hardin.
    The current article is a rejoinder to “A Call for Theory to Support the Use of Causal-Formative Indicators: A Commentary on Bollen and Diamantopoulos.” Our article comments on the 6 research questions raised by Hardin (2017) in his constructive commentary on our original article (i.e., “In Defense of Causal-Formative Indicators: A Minority Report”). (PsycINFO Database Record (c) 2017 APA, all rights reserved)
