PsyResearch
ψ Psychology Research on the Web



Psychological Assessment - Vol 29, Iss 12

Psychological Assessment publishes mainly empirical articles concerning clinical assessment. Papers that fall within the domain of the journal include research on the development, validation, application, and evaluation of psychological assessment instruments. Diverse modalities (e.g., cognitive, physiologic, and motoric) and methods of assessment (e.g., questionnaires, interviews, natural environment and analog environment observation, self-monitoring, participant observation, physiological measurement, instrument-assisted and computer-assisted assessment) are within the domain of the journal, especially as they relate to clinical assessment. Also included are topics on clinical judgment and decision making (including diagnostic assessment), methods of measurement of treatment process and outcome, and dimensions of individual differences (e.g., race, ethnicity, age, gender, sexual orientation, economic status) as they relate to clinical assessment.
Copyright 2017 American Psychological Association
  • The importance of assessing for validity of symptom report and performance in attention deficit/hyperactivity disorder (ADHD): Introduction to the special section on noncredible presentation in ADHD.
    Invalid self-report and invalid performance occur with high base rates in attention deficit/hyperactivity disorder (ADHD; Harrison, 2006; Musso & Gouvier, 2014). Although much research has focused on the development and validation of symptom validity tests (SVTs) and performance validity tests (PVTs) for psychiatric and neurological presentations, less attention has been given to the use of SVTs and PVTs in ADHD evaluation. This introduction to the special section describes a series of studies examining the use of SVTs and PVTs in adult ADHD evaluation. We present the series of studies in the context of prior research on noncredible presentation and call for future research using improved research methods and with a focus on assessment issues specific to ADHD evaluation. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

  • The effects of symptom information coaching on the feigning of adult ADHD.
    College students without ADHD may feign symptoms of ADHD to gain access to stimulant medications and academic accommodations. Unfortunately, research has shown that it can be difficult to discriminate malingered from genuine ADHD symptomatology, especially when evaluations are based only on self-report questionnaires. The present study investigated whether nonclinical college students given no additional information could feign ADHD as successfully as those who were coached on symptoms of the disorder. Similar to Jasinski et al. (2011) and other research on feigned ADHD, a battery of neuropsychological, performance validity, and self-report tests was administered. Undergraduates with no history of ADHD or other psychiatric disorders were randomly assigned to 1 of 2 simulator groups: a coached group that was given information about ADHD symptoms, or a noncoached group that was given no such information. Both simulator groups were asked to feign ADHD. Their performance was compared to a genuine ADHD group and a nonclinical group asked to respond honestly. Self-report, neuropsychological, and performance validity test data are discussed in the context of the effect of coaching and its implications for ADHD evaluations. Symptom coaching did not have a significant effect on feigning success. Performance validity tests were moderately effective at detecting feigned ADHD. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

  • Utility of the Conners’ Adult ADHD Rating Scale validity scales in identifying simulated attention-deficit hyperactivity disorder and random responding.
    Recent concern about malingered self-report of symptoms of attention-deficit hyperactivity disorder (ADHD) in college students has resulted in an urgent need for scales that can detect feigning of this disorder. The present study provided further validation data for a recently developed validity scale for the Conners’ Adult ADHD Rating Scale (CAARS), the CAARS Infrequency Index (CII), as well as for the Inconsistency Index (INC). The sample included 139 undergraduate students: 21 individuals with diagnoses of ADHD, 29 individuals responding honestly, 54 individuals responding randomly (full or half), and 35 individuals instructed to feign. Overall, the INC showed moderate sensitivity to random responding (.44–.63) and fairly high specificity to ADHD (.86–.91). The CII demonstrated modest sensitivity to feigning (.31–.46) and excellent specificity to ADHD (.91–.95). Sequential application of validity scales had correct classification rates of honest (93.1%), ADHD (81.0%), feigning (57.1%), half random (42.3%), and full random (92.9%). The present study suggests that the CII is modestly sensitive (true positive rate) to feigned ADHD symptoms, and highly specific (true negative rate) to ADHD. Additionally, this study highlights the utility of applying the CAARS validity scales in a sequential manner for identifying feigning. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
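
    The sensitivity (true positive rate) and specificity (true negative rate) figures quoted above come from cross-tabulating scale scores against known group membership at a chosen cut score. The sketch below is purely illustrative and is not the study's analysis; the scores, group labels, and cutoff are hypothetical.

        import numpy as np

        def sensitivity_specificity(scores, is_feigning, cutoff):
            """Flag scores at or above `cutoff` and compare the flags against
            true group membership (True = feigning, False = genuine ADHD)."""
            flagged = scores >= cutoff
            true_pos = np.sum(flagged & is_feigning)    # feigners correctly flagged
            false_neg = np.sum(~flagged & is_feigning)  # feigners missed
            true_neg = np.sum(~flagged & ~is_feigning)  # genuine cases correctly passed
            false_pos = np.sum(flagged & ~is_feigning)  # genuine cases wrongly flagged
            sensitivity = true_pos / (true_pos + false_neg)
            specificity = true_neg / (true_neg + false_pos)
            return sensitivity, specificity

        # Hypothetical data: 6 instructed feigners and 6 genuine-ADHD respondents.
        scores = np.array([4, 5, 2, 6, 3, 5, 1, 0, 2, 1, 3, 0])
        is_feigning = np.array([True] * 6 + [False] * 6)
        print(sensitivity_specificity(scores, is_feigning, cutoff=3))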

  • Intentional inattention: Detecting feigned attention-deficit/hyperactivity disorder on the Personality Assessment Inventory.
    Given the increasing number of college students seeking Attention-Deficit/Hyperactivity Disorder (ADHD) diagnoses as well as the potential secondary gains associated with this disorder (e.g., access to stimulant medication, academic accommodations), the detection of malingered symptom presentations in this population is a major concern. The present study examined the ability of validity indicators on the widely used Personality Assessment Inventory (PAI; Morey, 1991) to distinguish between individuals experiencing genuine ADHD symptoms and individuals instructed to present with ADHD symptomatology for secondary gain. Sixty-six participants who successfully simulated ADHD (based on elevations on the Conners’ Adult ADHD Rating Scale; Conners, Erhardt, & Sparrow, 1998) were compared with a sample of undergraduate students meeting diagnostic criteria for ADHD (N = 22) and an archival sample of adults who received an ADHD diagnosis at a university psychology clinic following a comprehensive psychological evaluation (N = 41). Successful simulators obtained significantly higher scores on all relevant PAI validity indicators compared with the clinical and archival comparison samples, with the Rogers Discriminant Function demonstrating the highest predictive accuracy (AUC = .86). Traditional cut scores on the Negative Impression (NIM) validity scale used to designate probable malingering, however, were not sensitive to simulated ADHD symptoms, although they did demonstrate excellent specificity. The PAI may be informative as an indicator of potentially exaggerated or malingered symptom presentation, but alternative cut scores for symptom validity indicators may be necessary to maximize its utility in these particular types of psychological evaluations. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

  • Symptom and performance validity with veterans assessed for attention-deficit/hyperactivity disorder (ADHD).
    Little is known about attention-deficit/hyperactivity disorder (ADHD) in veterans. Practice standards recommend the use of both symptom and performance validity measures in any assessment, and there are salient external incentives associated with ADHD evaluation (stimulant medication access and academic accommodations). The purpose of this study was to evaluate symptom and performance validity measures in a clinical sample of veterans presenting for specialty ADHD evaluation. Patients without a history of a neurocognitive disorder and for whom data were available on all measures (n = 114) completed a clinical interview structured on DSM–5 ADHD symptoms, the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF), and the Test of Memory Malingering Trial 1 (TOMM1) as part of a standardized ADHD diagnostic evaluation. Veterans meeting criteria for ADHD were no more likely to overreport symptoms on the MMPI-2-RF or to fail TOMM1 (score ≤ 41) than veterans who did not meet criteria. Those who overreported symptoms did not endorse significantly more ADHD symptoms; however, those who failed TOMM1 did report significantly more ADHD symptoms (g = 0.90). In the total sample, 19.3% failed TOMM1, 44.7% overreported on the MMPI-2-RF, and 8.8% produced both an overreported MMPI-2-RF and an invalid TOMM1. F-r had the highest correlation with TOMM1 scores (r = −.30). These results underscore the importance of assessing both symptom and performance validity in a clinical ADHD evaluation with veterans. In contrast to certain other conditions (e.g., mild traumatic brain injury), ADHD as a diagnosis is not related to higher rates of invalid report/performance in veterans. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

  • Noncredible cognitive performance at clinical evaluation of adult ADHD: An embedded validity indicator in a visuospatial working memory test.
    The assessment of performance validity is an essential part of the neuropsychological evaluation of adults with attention-deficit/hyperactivity disorder (ADHD). Most available tools, however, are inaccurate regarding the identification of noncredible performance. This study describes the development of a visuospatial working memory test, including a validity indicator for noncredible cognitive performance of adults with ADHD. Visuospatial working memory of adults with ADHD (n = 48) was first compared to the test performance of healthy individuals (n = 48). Furthermore, a simulation study was conducted with 252 individuals who were randomly assigned either to a control group (n = 48) or to 1 of 3 simulation groups asked to feign ADHD (n = 204). Additional samples of 27 adults with ADHD and 69 instructed simulators were included to cross-validate findings from the first samples. Adults with ADHD showed a medium-sized impairment in visuospatial working memory performance compared to healthy individuals. Simulation groups committed significantly more errors and had shorter response times than patients with ADHD. Moreover, binary logistic regression analysis was carried out to derive a validity index that optimally differentiates between true and feigned ADHD. ROC analysis demonstrated high classification rates of the validity index, as shown in excellent specificity (95.8%) and adequate sensitivity (60.3%). The visuospatial working memory test as presented in this study therefore appears sensitive in indicating cognitive impairment of adults with ADHD. Furthermore, the embedded validity index revealed promising results concerning the detection of noncredible cognitive performance of adults with ADHD. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
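
    The validity index described above is derived with binary logistic regression and evaluated with ROC analysis. The sketch below shows that general workflow only; the features, simulated data, and specificity target are assumptions for illustration, not the study's actual variables or model.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score, roc_curve

        rng = np.random.default_rng(0)

        # Hypothetical features: error counts and mean response times (ms) for
        # genuine-ADHD patients (label 0) and instructed simulators (label 1).
        n_patients, n_simulators = 48, 204
        errors = np.concatenate([rng.normal(8, 3, n_patients),
                                 rng.normal(15, 4, n_simulators)])
        rt_ms = np.concatenate([rng.normal(900, 150, n_patients),
                                rng.normal(700, 150, n_simulators)])
        X = np.column_stack([errors, rt_ms])
        y = np.concatenate([np.zeros(n_patients), np.ones(n_simulators)])

        # The fitted model's predicted probability of feigning serves as a
        # continuous validity index; ROC analysis summarizes its discrimination.
        model = LogisticRegression(max_iter=1000).fit(X, y)
        index = model.predict_proba(X)[:, 1]
        print("AUC:", roc_auc_score(y, index))

        # A cut score trades sensitivity against specificity, e.g. the highest
        # sensitivity attainable while keeping specificity at or above .95.
        fpr, tpr, thresholds = roc_curve(y, index)
        print("sensitivity at specificity >= .95:", tpr[fpr <= 0.05].max())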

  • Assessing future care preparation in late life: Two short measures.
    The purpose of this article is to introduce 2 short forms of the previously published measure of preparation for future care (PFC). Community-dwelling older adults ages 65–94 who had completed the 29-item Preparation for Future Care Needs scale were randomly divided into scale development (n = 697) and scale validation (n = 690) samples. Fifteen items were selected using exploratory and confirmatory factor analyses on the scale development and scale validation samples, respectively. Consistent with PFC theory, the 5 subscales of the original long-form measure (Awareness, Gathering Information, Decision Making, Concrete Planning, Avoidance of Care Planning) were maintained. A 5-item scale with acceptable score reliability and validity was also developed. Compared to the long form, these short forms are more easily incorporated into epidemiologic studies and can be used in medical, psychology, and social work practice to initiate discussions about long-term care planning. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

  • Initial validation of the Self-Directed Violence Picture System (SDVPS).
    A better understanding of factors that differentiate those who only experience suicidal ideation from those who engage in self-directed violence (SDV) is critical for suicide prevention efforts (Klonsky & May, 2014; May & Klonsky, 2016). To identify who is at greatest risk for death by suicide, it is imperative that new innovative assessment tools be created to facilitate behavioral measurement of key constructs associated with increased risk for SDV. The aim of the current study was to develop and validate a set of suicide-specific images, called the Self-Directed Violence Picture System (SDVPS), to help meet this need. A sample of 119 U.S. military veterans provided valence, arousal, and dominance ratings on the SDVPS. These ratings were compared to International Affective Picture System (IAPS) negative, neutral, and positive images. SDVPS images were rated with significantly greater negative valence and elicited lower feelings of being in control than did IAPS positive (p < .001, p < .001), IAPS negative (p = .03, p = .001), and IAPS neutral (p < .001, p < .001) images. SDVPS images were also rated with significantly greater arousal than were IAPS neutral images (p < .001). Initial validation data indicate that the SDVPS images functioned as intended. Although continued validation of the SDVPS in other populations is necessary, the SDVPS may become a new tool by which researchers can begin to systematically and reliably examine reactions to suicide-related content using behavioral and/or experimental paradigms. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

  • Psychometric validation of the Anticipated Effects of Alcohol Mixed with Energy Drinks Scale.
    Young people are increasingly consuming alcohol mixed with energy drinks (AmEDs). As coingestion of these beverages results in greater adverse consequences than drinking alcohol alone, we need to understand what factors contribute to and deter coingestion. Existing studies in this area have not utilized a theoretically based or empirically validated measure of outcome expectancies for drinking AmEDs. Our study modified Morean, Corbin, and Treat’s (2012) Anticipated Effects of Alcohol Scale to assess the expected effects of drinking AmEDs. We evaluated the factor structure and concurrent validity of the Anticipated Effects of Alcohol Mixed with Energy Drinks Scale (AEAMEDS) among 549 university students, aged 18–25, who had a lifetime history of consuming alcohol (231 had consumed AmEDs in the past 90 days). Exploratory and confirmatory factor analyses supported a 4-factor structure. Consistent with hypotheses, stronger high arousal/positive expectancies and weaker low arousal/negative expectancies were associated with greater AmED use. At the bivariate level, stronger low arousal/positive expectancies were associated with greater quantities of AmED use, but this relationship disappeared when taking into account other outcome expectancies. Moreover, students expected low arousal/positive expectancies to be less intense when consuming AmEDs than alcohol alone, but ratings for all other AmED expectancies were equivalent to consuming alcohol alone. These findings contribute to our knowledge of risk and protective factors for AmED use. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

  • Explaining discrepancies in assessment protocols: Trait relevance and functional equivalence.
    Inconsistencies among independent sources of information about psychological constructs are widely documented, but not adequately explained. Measurement error as the primary explanation, though historically popular, is no longer tenable. Yet, even as assessors acknowledge that various measures of the same construct are not necessarily interchangeable, there are no agreed upon frameworks to discern the unique contribution of each measure in multiinformant and multimethod assessment protocols. In this study, we focus on the relevance of the target trait in its measured contexts and on the functional equivalence of the trait across its measures (similar self-regulatory requirements for trait expression) as driving relations between scores. These 2 considerations enabled prediction of informant differences in mean ratings and of patterns of divergences and convergences between parent and teacher ratings of kindergarteners’ social competence (SC) and executive functioning (EF) and between informant-based and performance-based measures of executive functioning (N = 73). (PsycINFO Database Record (c) 2017 APA, all rights reserved)

  • The Multidimensional Personality Questionnaire’s inconsistency scales identify invalid profiles through internal statistics and external correlates.
    Inconsistency scales represent a promising method for separating valid and invalid personality profiles. In a sample of 1,258 participants in the waiting room of the emergency department of an urban university hospital, we examined whether data from participants with profiles flagged as invalid (n = 132) using the Variable Response Inconsistency (VRIN) or True Response Inconsistency (TRIN) scales of the Multidimensional Personality Questionnaire’s brief form (MPQ-BF) differed from those that did not exceed any validity cutoffs (n = 1,026). Invalid profiles’ scores on many scales were less internally consistent and had less variability than those from valid profiles, especially for random and acquiescent response styles. Scores on MPQ-BF primary trait scales from profiles featuring random responses appeared more psychologically maladjusted than those on valid profiles. Compared to primary trait scores on valid profiles, acquiescent profiles generally had higher scores, and counteracquiescent profiles had lower scores. The higher order component structure of invalid profiles was less consistent with published MPQ-BF component structures than that of valid profiles, though negative emotionality was generally reasonably well-preserved. Scores on primary traits associated with negative emotionality generally had larger correlations with demographic criteria for valid profiles than invalid profiles. These results argue that inconsistency scales meaningfully identify invalid profiles in normal-range personality assessment. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

  • Test–retest reliability of the facial expression labeling task.
    Recognizing others’ emotional expressions is vital for socioemotional development; impairments in this ability occur in several psychiatric disorders. Further study is needed to map the development of this ability and to evaluate its components as potential transdiagnostic endophenotypes. Before doing so, however, research is required to substantiate the test–retest reliability of scores on the face emotion identification tasks linked to developmental psychopathology. The current study estimated the test–retest reliability of scores on one such task, the facial expression labeling task (FELT), among a sample of twin children (N = 157; ages 9–14). Participants completed the FELT at two visits, two to five weeks apart. Participants identified the emotion presented in faces depicting six emotions (i.e., happiness, anger, sadness, fear, surprise, and disgust) morphed with a neutral face to provide 10 levels of increasing emotional expressivity. The present study found strong test–retest reliability (Pearson r) of the FELT scores across all emotions. Results suggested that data from this task may be effectively analyzed using a latent growth curve model to estimate overall ability (i.e., intercept; r’s = 0.76–0.85) and improvement as emotions become clearer (i.e., linear slope; r’s = 0.69–0.83). Evidence of high test–retest reliability of this task’s scores informs future developmental research and the potential identification of transdiagnostic endophenotypes for child psychopathology. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
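
    Test–retest reliability of the kind reported above is the Pearson correlation between the same participants' scores at the two visits. A minimal sketch follows, using simulated scores rather than the FELT data.

        import numpy as np
        from scipy.stats import pearsonr

        rng = np.random.default_rng(1)

        # Hypothetical labeling-accuracy scores for 157 children at two visits,
        # simulated so that visit-2 scores track visit-1 scores plus noise.
        visit1 = rng.normal(0.70, 0.10, 157)
        visit2 = 0.8 * visit1 + rng.normal(0.14, 0.05, 157)

        r, p = pearsonr(visit1, visit2)
        print(f"test-retest r = {r:.2f} (p = {p:.3g})")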

  • Factor analytic replication and model comparison of the BASC-2 Behavioral and Emotional Screening System.
    We conducted this study to add to the literature on previous conflicting factorial examinations of the BASC-2 Behavioral and Emotional Screening System (BESS), Teacher Form—Child/Adolescent. Data were collected by an urban school district in the southeastern United States and included 2,228 students rated by 120 teachers in Fall 2014 and 1,955 students rated by 104 teachers in Spring 2015. In both samples, we replicated and then conceptually and statistically compared factor models to examine (a) the 4-factor structure from which the BESS Teacher Form was developed, and (b) the existence of a general factor currently in use. Previous studies examined the 4-factor and bifactor structure of the BESS Teacher Form on separate samples. Our model comparison results support a multidimensional interpretation. We recovered similar fit statistics and standardized factor loadings as previous factor analyses. However, measures of variance accounted for by the general factor were below recommended thresholds for a unidimensional construct. We recommend advancing a factorial model that represents a weighted combination of general and specific factors, but do not support continued use of a unidimensional total T score. Limitations and implications of the study are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

  • Observer, youth, and therapist perspectives on the alliance in cognitive behavioral treatment for youth anxiety.
    This study examined the score reliability and validity of observer- (Therapy Process Observational Coding System for Child Psychotherapy—Alliance scale [TPOCS-A]; Vanderbilt Therapeutic Alliance Scale Revised, Short Form [VTAS-R-SF]), therapist- (Therapeutic Alliance Scale for Children Therapist Version [TASC-T]), and youth-rated (Therapeutic Alliance Scale for Children Child Version [TASC-C]) alliance instruments. Youths (N = 50) aged 7–15 (Mage = 10.28 years, SD = 1.84; 88.0% Caucasian; 60.0% male) diagnosed with a principal anxiety disorder received manual-based cognitive–behavioral treatment. Four independent coders, 2 using the TPOCS-A and 2 using the VTAS-R-SF, rated 2 sessions per case from early (Session 3) and late (Session 12) treatment. Youths and therapists completed the TASC-C and TASC-T at the end of Sessions 3 and 12. Internal consistency of the alliance instruments was α > .80 and interrater reliability of the observer-rated instruments was ICC(2,2) > .75. The TPOCS-A, VTAS-R-SF, and TASC-T scores showed evidence of convergent validity. Conversely, the TASC-C scores failed to converge with the other instruments in a sample of children (age
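
    The internal consistency statistic cited in the last abstract (coefficient alpha) can be computed directly from a respondents-by-items score matrix. The sketch below uses made-up ratings and a generic item count, not the study's data or instruments.

        import numpy as np

        def cronbach_alpha(item_scores):
            """Coefficient alpha for a (respondents x items) matrix:
            alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
            item_scores = np.asarray(item_scores, dtype=float)
            k = item_scores.shape[1]
            item_vars = item_scores.var(axis=0, ddof=1).sum()
            total_var = item_scores.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1 - item_vars / total_var)

        # Hypothetical 6-item alliance ratings (1-5 scale) from 5 respondents.
        ratings = [[4, 5, 4, 4, 5, 4],
                   [3, 3, 4, 3, 3, 3],
                   [5, 5, 5, 4, 5, 5],
                   [2, 3, 2, 2, 3, 2],
                   [4, 4, 4, 5, 4, 4]]
        print(f"alpha = {cronbach_alpha(ratings):.2f}")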


