Training and Education in Professional Psychology - Vol 11, Iss 3

Training and Education in Professional Psychology is dedicated to enhancing supervision and training provided by psychologists.
Copyright 2017 American Psychological Association
  • The Competency Assessment Project of the Association of Psychology Postdoctoral and Internship Centers: Overview to the initiative to advance the competency movement in psychology training.
    This overview commentary provides background and context on the Competency Assessment Project of the Association of Psychology Postdoctoral and Internship Centers. Articles published in this special section and future issues of this journal highlight important initiatives to measure competencies in professional/health service psychology. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

  • The Supervisor Trainee Quarterly Evaluation (STQE): Psychometric support for use as a measure of competency.
    The Supervisor Trainee Quarterly Evaluation (STQE) has been available to preinternship doctoral training clinics nationwide for more than a decade via the Association for Psychology Training Clinics (APTC). However, no psychometric examinations of the measure have been published. The current study sought to explore the factor structure of this measure and assess its basic psychometric properties (i.e., descriptive statistics, scale internal consistency, interitem correlations, and predictive validity). Data came from 217 STQE forms based on supervisor ratings of 38 trainees (6 consecutive and complete training cohorts). A 1-factor model had acceptable fit, but a bifactor model showed the best fit and suggested that the STQE measures an overarching general competency as well as specific foundational and functional competencies. Internal consistency for the general scale was high, and the total scale score demonstrated predictive validity by being significantly correlated with (a) formal remediation or termination from doctoral training, (b) program-vetted total number of intervention hours, and (c) program-vetted total number of supervision hours. The STQE provides a psychometrically sound measure of overall competency that predicts important preinternship benchmarks. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

  • Psychometric investigation of competency benchmarks.
    The 2009 special issue of Training and Education in Professional Psychology focused on competency areas and assessment-related issues, presenting 2 seminal articles: “Competency Benchmarks: A Model for Understanding and Measuring Competence in Professional Psychology Across Training Levels” (“Benchmarks”; Fouad et al., 2009) and “Competency Assessment Toolkit for Professional Psychology” (“Toolkit”; Kaslow et al., 2009). The Benchmarks outlined and described competency areas encompassing both foundational and functional competencies, while the Toolkit (a partner to the Benchmarks) reviewed a variety of methods and considerations for competency assessment. However, as noted at the time (i.e., DeMers, 2009; Donovan & Ponce, 2009; McCutcheon, 2009; Schulte & Daly, 2009), these efforts, while thoughtful and well reasoned, lacked empirical validation and needed field-testing. The current study responds to that call and represents the first psychometric study of the Benchmarks. Data were drawn from supervisors’ ratings of competency pertaining to 94 preinternship doctoral trainees enrolled in prepracticum, internal practicum, or external practicum within any of 3 accredited doctoral (PhD) programs in the department of psychology of a large, public university. Many-facet Rasch measurement, an item response theory (IRT) approach used when there are multiple raters of repeated individuals over time, was used to analyze 270 competency evaluations. Supervisor ratings of competency were found to fit the Rasch model specified. Foundational competencies were rated significantly higher than functional competencies, and competencies improved significantly across training levels (prepracticum, internal practicum, external practicum). Item difficulties are provided and implications are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

  • Development and validation of a Self-Care Behavior Inventory.
    Self-care is a critical component of professional functioning and is considered a core foundational competency for doctoral students in the field of psychology. Maintaining adequate self-care is an ethical imperative to prevent burnout and negative outcomes for those receiving health-care services. Self-care is also related to the professional values of psychology, specifically beneficence and nonmaleficence. However, little research has explored the topic; one reason for the scant research may be the lack of a valid tool to measure self-care behaviors. In the first study, we developed and validated a self-care instrument with 232 doctoral students, at different stages of the developmental trajectory, in programs accredited by the American Psychological Association. A pilot study (n = 28) provided feedback on item content and suggestions for improvement. In the second study, the refined self-care instrument, the Self-Care Behavior Inventory, was used along with the Maslach Burnout Inventory–Human Services Survey (Schaufeli, Leiter, & Maslach, 2009), the Perception of Competence Scale (Williams & Deci, 1996), the Flourishing measure (Diener et al., 2010), and the Contributors to Distress measure (Carter & Barnett, 2014). Exploratory factor analyses revealed a 3-factor model with Cognitive–Emotional–Relational, Physical, and Spiritual components underlying the construct of self-care. Limitations of the present study and implications for training programs and trainees are discussed, as are suggestions for future research. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

  • Overview commentary on the initial evaluation of training and implementation of a new technology for psychological assessment.
    This commentary describes the context for publication of 2 articles on Q-interactive, a commercially available product providing a digital tablet platform for administering, scoring, and interpreting psychological instruments, including intelligence tests. Although this journal does not publish articles that serve mainly to advertise commercial products, the education and training community needs information to evaluate the utility and impact of the products through an evidence-based examination of implementation and outcomes. Two peer-reviewed articles by Clark, Gulin, Heller, and Vrana (2017) and Noland (2017) serve these purposes well. A closing commentary includes responses from the product developers. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

  • Graduate training implications of the Q-interactive platform for administering Wechsler intelligence tests.
    With the introduction of the iPad-based Q-interactive platform for cognitive ability and achievement test administration, psychology training programs need to adapt to effectively train doctoral-level psychologists to be competent in administering, scoring, and interpreting assessment instruments. This article describes the implications for graduate training of moving to iPad-mediated administration of the Wechsler intelligence tests using the Q-interactive program by Pearson. We enumerate differences between Q-interactive and traditional assessment administration, including cost structure, technological requirements, and approach to administration. Changes to coursework, practicum, and supervision and evaluation of assessment competencies are discussed. The benefits of Q-interactive include reduced testing and training time and the decrease or elimination of many types of administration and scoring errors. However, new training challenges are introduced, including the need to be proficient at troubleshooting technology, changes in rapport-building with clients, and the need to assess and facilitate clients’ comfort with the platform. Challenges for course instructors and practicum supervisors include deciding which testing modality to use, increased difficulty evaluating some aspects of administration and scoring competency, and the potential for more frequent updates requiring additional training and updating of skills. We discuss the training implications of this new platform and make specific suggestions for how training programs may respond to these changes and integrate iPad administration into their courses and practicum. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

  • Intelligence testing using a tablet computer: Experiences with using Q-interactive.
    Q-interactive is a relatively new technology-based individualized testing platform developed by Pearson, Inc. for use by practitioners as an alternative to the traditional paper-and-pencil method of individualized testing. The potential utility of this type of assessment format for both practicing psychologists and trainers of psychologists is explored, including positive and negative initial reactions to the use of the program and first impressions from a number of first-time users. The implementation of new technology as part of a testing course for graduate students from 3 different graduate programs was initiated, and data were collected over 1.5 years in order to investigate the utility of Q-interactive as a test administration method, determine any potential problems for the use of this testing format, and explore graduate student user impressions. No differences were noted among graduate student ratings of test administration experiences, regardless of the administration method learned initially. Significant differences were found, however, with regard to students’ impressions of volunteer client engagement, eagerness to participate, and client enjoyment of testing, with volunteer clients rated as more engaged, more eager, and having more fun when presented with technology-based materials. Interestingly, although the majority of students indicated a strong preference for one administration format over another, the number preferring a technology-enhanced administration was only slightly higher, with most preferring to learn using the paper-and-pencil administration format initially. Implications for practitioners, supervisors, and instructors are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

  • Concluding commentary on the initial evaluation of training and implementation of a new technology for psychological assessment.
    This concluding commentary follows the articles by Noland (2017) and Clark, Gulin, Heller, and Vrana (2017), which offer perspectives on the use of a new technology, the Q-interactive platform for intellectual assessment, in professional psychology training. The commentary includes responses from Pearson, Inc., the commercial company offering Q-interactive (D. Wahlstrom, personal communication, April 6, 2017). These articles and the responses further an evidence-based approach to training in psychological assessment within the profession. They also demonstrate a constructive dialogue that advances the interaction and collaboration of product developers with the training community and the profession. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

  • For whom the bills pile: An equity frame for an equity problem.
    Student loan debt associated with doctoral psychology education is a serious problem, and the recent study by Doran, Kraha, Marks, Ameen, and El-Ghoroury (2016) represents an important step in recognizing the significant student loan burden that professional psychology students and early career psychologists (ECPs) are facing. Although there were some differences by degree program and type, students across subfields and degrees were burdened with high levels of debt (Doran et al., 2016). Notably, significant dispersion was evidenced across subfields and degree types, making it difficult to generalize from the reported averages. Additionally, Doran et al. did not collect data on socioeconomic status (SES), and although they reported observing no significant between-group differences by race and gender, we demonstrate how such between-group differences may have been obscured. Available data suggest that between-group differences do exist by race, gender, SES, and further by intersectional marginalized identities (e.g., higher borrowing by women of color than White women). We suggest that taking an equity, rather than equality, approach to student funding may ameliorate disparities in student loan borrowing. Through this commentary, we aim to (a) further clarify what student debt loads mean for individuals on a monthly basis; (b) use existing data to highlight disparities in student loan distribution, that is, that women, students of color, and students from lower SES backgrounds generally borrow more to finance their education; (c) address the implications of student loan disparities for the field of psychology; and (d) describe and recommend aspiring to an equity, rather than an equality, approach to distribution of resources. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

  • An investigation of training, schemas, and false recall of diagnostic features for mental disorders.
    This study examined whether schemas formed during training (graduate coursework, clinical supervision, etc.) are responsible for the tendency of clinicians to experience higher rates of false recall for clinical case details when compared with novices. Participants in this study were recruited from a general psychology class to limit preexisting knowledge of psychological disorders. Half of the participants were trained to recognize features of generalized anxiety disorder (GAD) with the purpose of forming a schema for that disorder, whereas the other half were not. Participants’ memories for diagnostic and nondiagnostic details within a hypothetical case vignette were tested using a free recall prompt followed by a yes–no recognition test. Trained participants falsely recognized the diagnostic detail “restlessness” and falsely recalled the diagnostic detail “uncontrollable worry” at a significantly higher rate than controls, suggesting that the training successfully formed a schema for GAD symptoms. Graduate training programs should consider incorporating training about false memories in students’ coursework as one mechanism for mitigating these errors. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

  • Training the next generation in routine outcome monitoring: Current practices in psychology training clinics.
    Monitoring client progress throughout psychotherapy (i.e., routine outcome monitoring/ROM) is increasingly recognized as an effective method to improve treatment outcomes. While the use of ROM has been explored across multiple settings, little is known about its use in psychology training clinics and the methods employed to train students to use ROM effectively. The present study surveyed psychology training clinic directors about the use of ROM in their training clinics, their reasons for or against ROM implementation, and their current ROM training practices (or lack thereof) for both trainees and supervisors. Of the 92 respondents, 67% indicated they were currently using some form of ROM in their clinics. The highest-rated reason for choosing to implement ROM was an increased ability to help students determine when they need to alter treatment, whereas supervisor barriers and lack of resources needed for successful implementation were the primary reasons for not using ROM. Most of the clinics using ROM provided some form of targeted training in the practice, with students as the primary recipients. For both students and supervisors, the two training topics addressed most frequently were providing a rationale for ROM and discussing ROM data with clients. The most commonly used instructional methods were presentations and literature; use of role plays and instructional videos occurred less frequently. The discussion emphasizes the implications of these results for ROM implementation and training for supervisors as well as trainees. Training suggestions drawn from the literature on evidence-based practices are provided. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

  • “Factors associated with multicultural teaching competence: Social justice orientation and multicultural environment”: Correction to Mena and Rogers (2017).
    Reports an error in "Factors associated with multicultural teaching competence: Social justice orientation and multicultural environment" by Jasmine A. Mena and Margaret R. Rogers (Training and Education in Professional Psychology, 2017[May], Vol 11[2], 61-68). In the article, there were various errors in the author note due to production errors. The first paragraph of the author note should read as follows: JASMINE A. MENA earned her doctorate in clinical psychology at the University of Rhode Island. She is currently an assistant professor of psychology at Bucknell University. Her research interests focus on cultural competence and health disparities. The second paragraph of the author note should read as follows: MARGARET R. ROGERS earned her doctorate in school psychology at the University of Nebraska-Lincoln. She is currently a full professor of psychology at the University of Rhode Island. Her research interests focus on multicultural issues, equity, advocacy, and cultural competencies of school psychologists. The online version of this article has been corrected. (The following abstract of the original article appeared in record 2017-09883-001.) Multicultural psychology courses are integral to the cultural competence training of future psychologists, yet little is known about the factors that influence the multicultural teaching competencies of the educators of such courses. Faculty (N = 78) who teach graduate multicultural psychology courses responded to an online survey that included questions about their demographics, professional background, engagement, and four measures that assessed their (a) multicultural teaching competency, (b) attitudes toward social justice, (c) perceptions of multicultural environment, and (d) social desirability. Hierarchical multiple regression analyses revealed that behavioral intentions to engage in socially just action, honesty in recruitment about the multicultural environment, and motivation to learn, grow, and improve had statistically significant associations with multicultural teaching competence. Analyses were conducted while controlling for socially desirable responding. To reinforce multicultural teaching competence, we suggest that multicultural psychology educators assess their social justice orientation and strive to take action to address noted injustices; monitor levels of motivation to learn and improve their multicultural psychology instruction; and critically examine and speak honestly about the multicultural climate to accurately represent their programs when recruiting new students and faculty and to address needed improvements. Implications and future directions are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

  • Advocating for advocacy: An exploratory survey on student advocacy skills and training in counseling psychology.
    Advocacy is considered a core competency within the field of counseling psychology; however, more attention is needed to the training and assessment of advocacy competence for counselors-in-training. This study used Ratts and Ford’s (2010) Advocacy Competencies Self-Assessment survey to measure the self-perceived advocacy competence of master’s and doctoral students within counseling (Council for Accreditation of Counseling and Related Educational Programs–accredited) and counseling psychology (American Psychological Association–accredited) programs. An exploratory factor analysis suggested 3 underlying factors in self-reported advocacy competence: Alliance Building and Systems Collaboration, Action and Assessment, and Awareness Building. Master’s and doctoral students displayed marginal differences in Advocacy Competencies Self-Assessment scores, with doctoral students scoring slightly higher on the Awareness Building factor. Respondents’ perceived level of advocacy importance was a significant predictor of advocacy competence. Program characteristics (advocacy-related resources and opportunities to engage in advocacy activities) were also significant predictors of perceived competence. We propose a developmental model of advocacy competency acquisition as a basis for future research on the assessment and training of advocacy skills. (PsycINFO Database Record (c) 2017 APA, all rights reserved)

  • Group telephone consultation after online self-administered training: Acceptability and feasibility.
    Although there is growing support for the integration of Internet-based and in-person modalities for teaching clinical skills in psychology, few studies have examined the benefits of widely scalable complementary telephone-based strategies. This study was an investigation of the feasibility and acceptability of a group telephone-consultation protocol used within a large, randomized trial of online cognitive–behavioral therapy (CBT) training. Posttraining feedback questionnaires collected from consultees indicated that they found telephone consultation both feasible and acceptable for acquiring clinical skills. A mixed-methods review of the feedback-questionnaire data, along with qualitative analysis of posttraining consultant interviews, revealed key findings about the structure, format, and function of consultation, as well as barriers to successful implementation. These findings suggest that telephone consultation is a promising methodology for enhancing large-scale, online clinical training in psychology. (PsycINFO Database Record (c) 2017 APA, all rights reserved)


