This project will evaluate 3 approaches to implementation support for university peer leaders who will deliver a prevention program with a particularly strong evidence base, comparing outcomes against changes observed under usual care at the universities before implementation. This study aims to advance knowledge of this innovative and economical way to provide college prevention programs and thereby reduce the burden of mental illness in the college student population.
Several interventions for mental health problems are efficacious and effective, but few are routinely offered to college students, who represent 59% of young adults. This is regrettable because college students are at high risk for mental health problems (e.g., depression, substance abuse, eating disorders), and college counseling centers lack sufficient clinicians to offer individual therapy to all afflicted students and are not well positioned to deliver prevention programs. One solution to this service shortfall is to have peer educators deliver scripted group-based prevention programs, which can reduce the burden of mental illness more efficiently than individual therapy. Targeting college students is a cost-effective tactic for delivering prevention programs and has vast potential reach because 85% of colleges have peer educator programs. Peer educators have effectively delivered several prevention programs, sometimes producing larger effects than clinicians. This study will investigate 3 levels of implementation support (training; training with technical assistance; and training with technical assistance and quality assurance) and their impacts on program outcomes across 57 college campuses nationwide. Specifically, we have five aims for this study: Aim 1: Test whether greater implementation support is associated with graded increases in fidelity and competence in delivering the scripted prevention program. This will be assessed by an established procedure for reliably rating fidelity and competence from audio-recorded intervention sessions.
Aim 2: Test whether greater implementation support, which should increase fidelity and competence of intervention delivery, is associated with graded increases in student attendance at intervention sessions (recorded by the peer educators) and in effectiveness of the prevention program (measured by pre-to-post changes in core outcomes assessed with anonymous surveys completed by group participants), compared to parallel pre-to-post change data collected from students at the colleges before implementation is initiated. Aim 3: Test whether greater implementation support is associated with graded increases in program reach (% of female students at each college who complete the prevention program during the 1-year implementation period) and sustainability (% of female students at each college who complete the prevention program during the subsequent 2-year sustainability monitoring period). Aim 4: Test whether Consolidated Framework for Implementation Research (CFIR) indices of perceived intervention factors, outer and inner setting factors, peer educator attributes, and process factors after the initial training correlate with fidelity, competence, attendance, effectiveness, and reach over the 1-year implementation period and with sustainability. We will test whether at the end of the initial implementation period the 3 conditions differ on relevant CFIR indices and on the progress and speed of implementation. Aim 5: Compare the prevention program delivery cost in the 3 implementation conditions and the relative cost-effectiveness of each condition in terms of attaining fidelity, competence, attendance, effectiveness, reach, and sustainability, plus cost savings from reductions in waitlists and in eating disorder prevalence at clinics. Starting in March 2020, virtually hosted Body Project groups are being recommended to participating Peer Education teams in contexts where in-person groups cannot be offered due to COVID-19 social distancing guidelines.
Existing research indicates that virtual Body Project groups are a viable and effective alternative to in-person groups.
Study Type
INTERVENTIONAL
Allocation
RANDOMIZED
Purpose
PREVENTION
Masking
DOUBLE
Enrollment
2,261
Level of Support: Intensive 2-day train-the-trainer workshop, facilitator guide and facilitator support website.
Level of Support: Intensive 2-day train-the-trainer workshop, facilitator guide, and facilitator support website, plus half-day implementation training to further define goals, needs, leadership structure and strategy for adoption and recruitment.
Level of Support: As for "Training and Technical Assistance" arm, plus 1 year of technical assistance, coaching, and quality assurance to enhance skills for implementation and sustainability.
Stanford University
Stanford, California, United States
Oregon Research Institute
Eugene, Oregon, United States
University of Texas at Austin
Austin, Texas, United States
Trinity University
San Antonio, Texas, United States
Fidelity of Program Implementation as assessed by 25-item Session Adherence Scale (Stice et al., 2013a)
Peer educators' adherence to the scripted intervention manual and accuracy of script delivery as coded through evaluation of audio-recorded sessions by two clinicians independently coding a random selection of sessions using the Session Adherence Scale. Coders will indicate the extent to which peer leaders adhere to the 25 necessary components of the 4-session intervention script, using a scoring guide ranging from 10 (indicating no adherence) to 100 (indicating perfect adherence) per item, with a possible total score range of 250 to 2500. Inter-rater agreement for the Session Adherence Scale has been found to be .92 (Stice et al., 2013a).
Time frame: 12 months
Competence of Program Implementation as assessed by 12-item Group Leader Competence Scale (Stice et al., 2013a)
Peer educators' competence with intervention delivery as assessed by the 12-item Group Leader Competence Scale, which measures various indicators of a competent group facilitator (e.g., leaders allot equal speaking time to all members). Coders will indicate the extent to which peer leaders show competence in their delivery of the scripted intervention across 12 items, using a scoring guide ranging from 10 (indicating poor competence) to 100 (indicating superior competence) per item, with a possible total score range of 120 to 1200. Inter-rater agreement for the Group Leader Competence Scale has been found to be .96 (Stice et al., 2013a).
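The total-score arithmetic for the two coding scales above can be sketched as follows. This is an illustrative sketch only, not part of the study instruments; the function name and example ratings are hypothetical, and only the item counts and 10-100 per-item range come from the descriptions above.

```python
# Hypothetical sketch of the total-score arithmetic for the
# Session Adherence Scale (25 items) and Group Leader Competence
# Scale (12 items). Each item is rated 10-100; totals are sums.

def total_score(item_ratings, n_items, lo=10, hi=100):
    """Sum per-item ratings (each on a 10-100 scale) into a total score."""
    if len(item_ratings) != n_items:
        raise ValueError(f"expected {n_items} ratings, got {len(item_ratings)}")
    for r in item_ratings:
        if not lo <= r <= hi:
            raise ValueError(f"rating {r} outside {lo}-{hi}")
    return sum(item_ratings)

# 25-item adherence scale: totals range from 250 (all 10s) to 2500 (all 100s)
adherence = total_score([100] * 25, n_items=25)   # perfect adherence -> 2500
# 12-item competence scale: totals range from 120 to 1200
competence = total_score([10] * 12, n_items=12)   # poorest competence -> 120
```

The bounds check makes the stated score ranges (250-2500 and 120-1200) fall out directly from the item count times the per-item minimum and maximum.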
Time frame: 12 months
Attendance
Attendance levels of participants as recorded by peer educators
Time frame: 12 months
Reach
Percentage of students who complete the prevention program at 1 year post-educator training
Time frame: 12 months
Sustainability
Percentage of students who complete the prevention program during the subsequent 2-year sustainability monitoring period
Time frame: 24 months
Delivery Cost
Dollar amount delivery cost for each arm of the implementation support model
Time frame: 12 months
Relative Cost-Effectiveness
Relative cost-effectiveness of each arm of the implementation support model
Time frame: 12 months, 24 months
Perceived Characteristics of the Intervention as measured by the 28-item Provider Intervention Adoption Scale
Perceived Characteristics of the Intervention as measured by the 28-item Provider Intervention Adoption Scale. Respondents will indicate their level of agreement with the 27 items using a 5-point Likert-type scale ranging from 1 ("Strongly Disagree") to 5 ("Strongly Agree"), with a possible score ranging from 27 to 135.
Time frame: 1 Week or less post training
Project Knowledge
Peer educators' declarative project knowledge as measured by the 20-item Body Project Knowledge Scale. Peer educators will indicate whether the statements in this scale are "true" or "false," with a possible score ranging from 0 for no correct replies to 20 for every answer correct.
Time frame: Baseline, 1 week or less post-training, 12 months
Provider Attitudes Towards Evidence-Based Interventions
Provider Attitudes Towards Evidence-Based Interventions as measured by the 50-item Evidence-Based Practice Attitude Scale (EBPAS-50), which will assess provider attitudes toward adopting evidence-based interventions (Aarons, 2004). It has four subscales: Appeal (intuitive appeal of evidence-based interventions), Requirements (likelihood of adopting evidence-based interventions given supervisor, organizational, or system requirements), Openness (general openness to new practices), and Divergence (perceived divergence between research-developed interventions and current practice), which sum to a total score representing respondents' global attitude toward adopting and using evidence-based practice. Respondents will indicate the extent to which they agree with each item on a 5-point Likert scale ranging from 0 (Not at all) to 4 (Very great extent). A higher total score indicates a more positive attitude toward adopting and using evidence-based practice. 23 items are reverse-scored.
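Reverse-scoring on a 0-4 Likert scale, as mentioned above, maps each flagged response x to 4 - x before summing. The sketch below illustrates only that arithmetic; the actual set of reverse-scored EBPAS-50 items is defined by the instrument's scoring manual, so the item indices and responses here are placeholders, and the function name is hypothetical.

```python
# Illustrative reverse-scoring arithmetic for a 0-4 Likert scale.
# Which items are reverse-scored is instrument-specific; the
# indices used below are placeholders, not the real EBPAS-50 set.

def score_likert(responses, reverse_items):
    """Reverse-score flagged items (x -> 4 - x) and return the total."""
    scored = []
    for i, r in enumerate(responses):
        if not 0 <= r <= 4:
            raise ValueError(f"response {r} outside 0-4")
        scored.append(4 - r if i in reverse_items else r)
    return sum(scored)

# Example: 5 responses with items 1 and 3 (0-indexed) reverse-scored.
# [4, 4->0, 2, 0->4, 3] sums to 13.
total = score_likert([4, 4, 2, 0, 3], reverse_items={1, 3})
```

Reverse-scoring before summation ensures that a higher total consistently indicates a more positive attitude, even for items worded in the negative direction.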
Time frame: Baseline, 1 week or less post-training, 12 months
Implementation Progress
Implementation Progress as measured by the Prevention Implementation Progress Scale
Time frame: Baseline, 12 months
Peer Educator Self-Efficacy
Peer Educator Self-Efficacy as measured by the 14-item Peer Educator Self-Efficacy Questionnaire. Respondents will indicate their degree of confidence on a 6-point scale ranging from 1 (No Confidence) to 6 (Complete Confidence). Scores will range from 14 (no demonstrated peer educator self-efficacy) to 84 (high peer educator self-efficacy).
Time frame: Baseline, 1 week or less post-training, 12 months
Inner Setting
Inner Setting subdomains assessed using the Team Climate Inventory
Time frame: Baseline, 12 months
Outer Setting
Evaluate the presence or absence of formal policies related to evidence-based programs, and of fiscal and other organizational resources for peer educators, based on two coded interviews
Time frame: Baseline