Sexual Assault Prevention Evaluation Checklist
This Sexual Assault Prevention Evaluation Checklist provides a summary of prevention
evaluation, the importance of evaluating prevention programs, and an overview of the evaluation
process. This user-friendly guide explains what prevention evaluation can look like and is
designed as technical assistance for programs newly asked to include evaluation strategies in
their prevention work. It focuses on how to incorporate evaluation with minimal cost when
additional funding is not available. For program-specific technical assistance about evaluation
strategies, please call MCASA at 301-328-7023 or email info@mcasa.org.
What is program evaluation?
Program evaluation is a set of practices and approaches that help us gauge the efficacy of
our prevention programs and report results to others. Evaluation requires prevention educators
to identify specific outcomes tied to the program's goals and objectives and to decide how they
will determine whether those outcomes were achieved. Below are some questions to consider and a
table with examples of outcomes you can think about relating to sexual assault prevention
programs:
What skills, situations, factors, knowledge, attitudes, behaviors, environments, or policies
does this program want to change?
How are we measuring that change?
How will we know that change has taken place?
Knowledge Change and Awareness-raising
o Increased knowledge about sexual assault
o Increased awareness about the problem and prevalence of sexual assault in our society

Attitude Change
o Decreased acceptance of rape-supportive attitudes and rape myths
o Decreased victim-blaming attitudes

Skill-building
o Increased self-efficacy of bystander intervention skills
o Increased consent communication skills

Behavior Change
o Increased use of engaged bystander behaviors to prevent sexual assault

Policy Change
o Improved anti-harassment policies that increase support for survivors

Environmental Change
o Improved community norms that do not tolerate sexual violence
o Redesigned physical spaces that support healthy social interactions
Process vs. Outcome Evaluation
Process evaluation addresses whether your program and/or strategy is being implemented
as intended.[1] This type of evaluation will assess the components of your program and
ensure that you are reaching your target audience, completing activities as planned, and
following through on logistical goals and objectives. For example, suppose you are running a
ten-week bystander intervention program for high school students. To assess the process, you
could collect data from your facilitators about the number of attendees, the running time of
the program each week, and personalized questions about facilitation styles and overall
engagement of participants. Process evaluation encourages you to evaluate the program
implementation and track program information.
Outcome evaluation addresses the progress in the outcomes or outcome objectives that the
program is to achieve.[1] Outcome evaluation utilizes feedback from participants to measure
both short-term and long-term impacts of the program. To measure the short-term impact of
a bystander intervention program, you could ask students "What bystander intervention skills
did you learn throughout the program?" and "Do you feel confident you could utilize one or
more of the skills you learned to intervene in a situation?" and measure the responses. In the
long term, you could follow up with participants and ask "Have you intervened in a situation
utilizing bystander intervention skills since the program?" You could also look at wider
campus culture by researching whether health class surveys show higher rates of students
saying they have intervened in a high-risk situation. In this document, we will focus on
outcome evaluation.
Why evaluate?
Evaluation lets us tell the story of our work to the public, to practitioners, and to funders. It
gives us a language to discuss change in more concrete terms. We can see the change
taking place in our classrooms, in our communities, and in our rape crisis centers.
Evaluation is a way that we can help others who cannot see this change first-hand to
witness it too.
It helps us figure out how to focus our efforts. When we see that one program has been
particularly effective, we can focus on expanding it, applying its approaches to other
programs, and ensuring it continues into the future. Evaluation also prevents us from
spending lots of time, energy, and other resources working on programs that are not
showing strong results.
Evaluation helps us to see where progress is being made and where we can continue to
scale up sexual assault prevention programs.
Evaluation prevents us from making assumptions about our communities’ readiness,
receptiveness, and response to sexual assault prevention programs.
[1] Centers for Disease Control and Prevention, National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention. "Types of Evaluation."
Evaluation also helps us measure program effectiveness for the various communities and
populations we serve. Evaluation can give us the tools to determine who our program is
working for and to make adaptations and adjustments to be culturally specific and meet the
needs of diverse communities and historically underserved populations.
How do you evaluate?
Sometimes it can be hard to translate the abstract goals of evaluation into concrete strategies
for incorporating evaluations into sexual assault prevention programming. Here, we define key
components of the evaluation process and offer three methods for administering prevention
program evaluations. These are not the only methods for evaluation, and we encourage you to
use any methods that allow you to accurately assess whether a program is meeting its goals.
Types of Data
It is important to know what types of data can be collected in an evaluation.
Quantitative data consists of numbers and numerical data that can be tallied and totaled.
Quantitative data is often easier to measure and interpret, since the data is calculated and
quantified into digestible figures like rates, fractions, and percentages. For example, 35% of
students were able to provide a definition of consent after attending the program.
Quantitative data is typically collected using surveys.
Qualitative data consists of other information, usually words or ideas, that is descriptive and
can be summarized and categorized by themes. For example, some participants expressed
feelings of confusion around setting boundaries with their intimate partner and difficulty
communicating their emotions. Qualitative data is typically collected using interviews and
focus groups.
Data Collection Best Practices
Before conducting any evaluation, make sure that you outline specific evaluation questions
that you want to ask to guide the evaluation process. Make sure you'll be able to act on what
you learn, whether through modifications to existing programming or by exploring new
programming avenues.
It is essential to address human subjects considerations when people are participating in
an evaluation. This means having participants consent to being included in the evaluation
through a written consent form. It is also important to de-identify the data (meaning, taking
out personal, identifying information, such as names); a brief sketch of this step appears at
the end of this list. Keeping the collected data in a safe and secure location is critical
(for example, a password-protected file is ideal).
It is also ethical to inform respondents how you're using their responses. This could be a
disclosure statement in a survey or focus group, or contacting a focus group after the fact
to let them know how you plan to incorporate specific guidance that came up.
As a general rule, it is important that any data you collect is useful data. It is respectful of
people’s time to collect data that will serve a purpose and contribute to your research in a
meaningful way. A good rule of thumb: if you’re asking a question “just because” or “out of
curiosity,” without a particular reason or plan for using the data, it is not worth asking that
question.
While implementing programs related to sexual violence prevention, it is important to be
aware of potentially triggering questions and include relevant support resources. Discussing
sexual violence in your program and evaluation can trigger trauma responses in survivors
and those connected to survivors. It is important to recognize this throughout the entire
process, and ensure participants are provided with resources they can utilize at any time
before, during, and after the program.
Finally, educate yourselves on any Institutional Review Boards that may oversee research in
your population (particularly relevant on college and university campuses) to be clear on any
needed approval processes before collecting data.
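As referenced in the de-identification note above, removing identifying columns from a data file can be done with a short script. Below is a minimal sketch in Python; the file and column names ("raw_responses.csv", "name", "email", "phone") are hypothetical placeholders for illustration, not part of any prescribed toolkit.

# Sketch: de-identifying survey data by dropping identifying columns.
import csv

IDENTIFYING = {"name", "email", "phone"}  # hypothetical column names to remove

with open("raw_responses.csv", newline="") as src, \
     open("deidentified_responses.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    kept = [col for col in reader.fieldnames if col.lower() not in IDENTIFYING]
    writer = csv.DictWriter(dst, fieldnames=kept)
    writer.writeheader()
    for row in reader:
        writer.writerow({col: row[col] for col in kept})

The de-identified file can then be stored in a password-protected location and used for analysis.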
Collection Methods
1) Pre- and Post-Tests
Pre- and post-testing is an evaluation method that measures participants' knowledge, attitudes,
behaviors, and beliefs both before beginning and after completing sexual assault prevention
activities. By measuring the difference between the "starting" and "ending" scores, we can see
where we have helped our participants to grow and what areas of our work might need some
adjustments.
There are benefits and challenges to using pre- and post-tests for evaluation. Some benefits
include:
Allowing you to learn more about participants’ baseline knowledge coming into the
training and to determine how much the program changed their knowledge, attitudes,
etc.
Helping you avoid making assumptions about students’ backgrounds, progress, or
engagement with prevention activities.
Providing information to help tailor future programs.
Being well-suited to programs and activities focused on attitudes and information.
Some challenges include:
Being less useful for evaluating programs related to particular skills (for example,
healthy communication in relationships, communicating consent and non-consent, or
bystander intervention behavior).
Difficulty guaranteeing future participation to conduct long-term follow-up assessments
with participants after a program to measure lasting impacts.
Difficulty accurately gauging participants' learning and growth, because participants often
over-estimate their knowledge, skills, and beliefs in pre-tests.
Additional post-test follow-up can be helpful if this method is used to measure skills and
behavior. For example, giving a follow-up post-test to program participants 6 months after the
conclusion of the program can help you measure the long-term impacts of a program. In
addition, some evaluators recommend administering both the pre- and post-test questions after
the training. For example, in the same survey, ask, "How much did you know about how to
communicate about consent before this training?" and "How much has your knowledge increased,
decreased, or stayed the same after this training?" This shift is because in pre- and
post-tests, it's very common for participants to overestimate their knowledge during a
pre-test, and then not have any room to report growth in the post-test, even if they realized
during the training that they did learn a lot.
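If pre- and post-test responses are recorded electronically, comparing average scores is straightforward. The sketch below shows one possible approach in Python; the file name, column names, and 1-5 scale are hypothetical assumptions for illustration.

# Sketch: comparing matched pre- and post-test scores.
# Assumes each row of the file is one participant with scores on a 1-5 scale.
import csv

def average(scores):
    return sum(scores) / len(scores) if scores else 0.0

pre_scores, post_scores = [], []
with open("prepost_responses.csv", newline="") as f:  # hypothetical file name
    for row in csv.DictReader(f):
        pre_scores.append(int(row["pre_score"]))    # hypothetical column names
        post_scores.append(int(row["post_score"]))

print(f"Average pre-test score:  {average(pre_scores):.2f}")
print(f"Average post-test score: {average(post_scores):.2f}")
print(f"Average change:          {average(post_scores) - average(pre_scores):+.2f}")

A positive average change suggests growth, though the overestimation caveat above still applies to the raw numbers.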
2) Activity-based Assessment Methods
Voting and rubrics are two examples of activity-based assessment methods that can help to
integrate evaluation into regular sexual assault prevention program activities.
Voting
There are many ways to make evaluations of prevention programs more interactive for
participants. One way of doing this is to use "voting."
For example, you can present participants with a scenario. Ask them to write responses
on post-it notes, and then place the post-it notes on a poster (or in another location)
based on categories.
o Example: When teaching a bystander intervention workshop, ask participants to write
down a strategy for intervening, and then put that sticky note in a location that
corresponds to the type of strategy it is (such as the direct, delegate, or distract
categories). Instructors can then collect and record responses for quantitative data
and state that participants were able to produce x number of ways to intervene in
situations.
o This can also be done virtually, as so many trainings, programs, and events are
happening in online spaces. You could also poll participants in a Zoom session and
have recorded data available to review after a webinar or training session. You can
also utilize other anonymous polling and participation platforms, such as Mentimeter
or Poll Everywhere, to ask attendees questions throughout a session and measure
those responses.
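However responses are captured (sticky notes or a poll export), a simple tally by category turns them into quantitative data. A minimal sketch, using the direct/delegate/distract categories from the example above with made-up responses:

# Sketch: tallying "voting" responses by intervention strategy category.
from collections import Counter

# Hypothetical responses recorded by a facilitator, one per sticky note or poll answer.
responses = ["direct", "delegate", "distract", "direct", "distract", "direct"]

tally = Counter(responses)
total = sum(tally.values())
for category, count in tally.most_common():
    print(f"{category}: {count} responses ({count / total:.0%})")
print(f"Participants produced {total} ways to intervene.")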
Rubrics
Sexual assault prevention activities and projects can also become a valuable tool for
evaluation when they are paired with a rubric. Simply put, a rubric is an outline for how to
determine if participants’ work (whether in the form of a skit, a poster campaign, a role-play,
a writing assignment, or another creative project) reflects the key messages and goals of
the prevention program.
Figure out what components are important, as either things to include (e.g. positive
bystander participation, healthy masculinity) or things to avoid (e.g. actions that condone
victim-blaming, toxic masculinity). Then, assign point values to these in a checklist that can
be used to score the activity.
o Example: Team activities can be utilized as part of a program to evaluate change.
One goal of a program might be to decrease tolerance of sexual violence within a
community. For example, you can instruct groups of participants to brainstorm a
slogan and social media posts for a public awareness campaign with the goals of
increasing community discussions around sexual violence and reducing acceptance
of sexual violence in their community. Utilizing a rubric with a scoring system will
help with the evaluation process. The table below shows an example of a rubric that
could be used to assess if this particular goal has been achieved:
Sample Evaluation Rubric:

"2" (Goal Met): The public awareness campaign must include 3 or more of the following
messaging elements to receive a score of "2":
o Develops messaging to engage and appeal to various groups and identities in the community
o Highlights community responsibility to create safe physical and social environments
o Rejects community tolerance and acceptance of sexual violence
o Provides action steps for community members to get involved in prevention work

"1" (Goal in Development): The public awareness campaign must include 1-2 of the
messaging elements listed above to receive a score of "1".

"0" (Goal Not Met): If the public awareness campaign did not include items from
the "Goal Met" list, it would receive a score of "0".
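If facilitators record which messaging elements each campaign includes, the rubric above can also be scored with a few lines of code. A hedged sketch in Python; the shorthand element names are invented for illustration:

# Sketch: scoring campaigns against the sample rubric above.
# Each campaign is recorded as the set of messaging elements it includes.
ELEMENTS = {"varied_audiences", "community_responsibility",
            "rejects_tolerance", "action_steps"}  # shorthand for the four elements

def rubric_score(included):
    """Return 2 (Goal Met), 1 (Goal in Development), or 0 (Goal Not Met)."""
    n = len(included & ELEMENTS)
    if n >= 3:
        return 2
    return 1 if n >= 1 else 0

# Hypothetical scoring of two team campaigns:
print(rubric_score({"varied_audiences", "rejects_tolerance", "action_steps"}))  # 2
print(rubric_score({"community_responsibility"}))                               # 1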
You can collect the data from the scoring rubric to assess if the results of the program activity
reflect the stated goals. There are benefits and challenges to using activity-based assessments
for evaluation.
Some benefits include:
Actively engaging participants in learning opportunities and evaluation activities.
Gathering immediate results on the impact of a program in real time.
Being able to evaluate the effectiveness of your program activities and adjust based on
participant successes and challenges.
Some challenges include:
Evaluating this type of activity is more subjective, particularly when using a rubric, so
evaluators must be properly trained to complete the evaluation.
Requiring more time to develop activities and measurements and to complete the
assessment.
3) Interviews and Focus Groups
Interviews and focus groups are additional collection methods that can be used for evaluation.
Interviews are typically conducted one on one with an interviewer and participant, while focus
groups are made up of a small group of participants who engage in a discussion led by a
moderator.
There are benefits and challenges to using interviews and focus groups for evaluation. Some
benefits include:
Gathering in-depth information and feedback from people about your prevention
activities.
Providing a space for you to both ask prepared questions and gain additional
perspectives and insights on topics that arise through discussions.
Being particularly useful for understanding the impacts of a program on a particular
population, community, or setting.
Some challenges include:
Typically used in small-group or individual settings, which can make it difficult to
generalize results to a larger population or community.
Conducting interviews and focus groups is very time consuming and requires some
expertise in quantifying the data to share with others. It is important to conduct
interviews with multiple participants to gather input from various perspectives, which
will require a significant time commitment.
What do you do with the data?
Data Analysis
Once you have collected your evaluation data, you must analyze it. Data analysis is the
process of organizing and classifying the information you have collected, tabulating it,
summarizing it, comparing the results with other appropriate information, and presenting the
results in an easily understandable manner.[2] Quantitative and qualitative data have different
analysis methods, described in a brief overview below. In the analysis, you will synthesize and
interpret the data to draw conclusions and determine next steps for your program. To learn
more about how to do this, check out the CDC's Veto Violence EvaluACTION Framework for
Evaluation.
Quantitative Data
Quantitative data analysis translates the numbers, data points, and additional numerical data
into a digestible and sharable format. Some types of quantitative data analysis include
frequencies or simple counts, statistical tests for differences, and trends over time.[3] This
might be represented as percentages, medians, or a range for various data points. This type of
analysis also typically utilizes a program to do calculations or transform data, such as Microsoft
Excel or Google Sheets. These programs can format data and use formulas to make
calculations.
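For those comfortable with a script instead of a spreadsheet, the same frequency-and-percentage calculation can be done in a few lines. A minimal sketch; the file name and the yes/no column are hypothetical:

# Sketch: computing a simple count and percentage from survey responses.
import csv

answers = []
with open("post_survey.csv", newline="") as f:  # hypothetical file name
    for row in csv.DictReader(f):
        answers.append(row["defined_consent"])  # hypothetical yes/no column

yes_count = sum(1 for a in answers if a.strip().lower() == "yes")
print(f"{yes_count} of {len(answers)} participants "
      f"({100 * yes_count / len(answers):.0f}%) defined consent correctly.")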
[2] Centers for Disease Control and Prevention. Veto Violence: EvaluACTION - Putting Evaluation to Work. 14 April 2021.
[3] Centers for Disease Control and Prevention. Veto Violence: EvaluACTION - Putting Evaluation to Work. "Data Analysis, Synthesis, and Interpretation."
Qualitative Data
In order to measure whether a program has met its goals, qualitative data usually needs to be
converted into numbers in some way. It can be coded using qualitative analysis software: by
analyzing the codes you assign to quotes, you can uncover overarching themes in the data.
Even without software, you should assess the information for overarching themes and code it
appropriately. Sometimes, the best way to do this is by "scoring" the data to turn it into
quantitative data. "Positive" results or "appropriate" responses can be marked as a "1," and
"negative" results or "inappropriate" responses can be marked as a "0." These numbers can
then be used to generate quantitative information: for example, what proportion of students
shared an "appropriate" response to the exercise in question. In other situations, however,
quantification loses important nuance. Use narrative descriptions when needed.
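For instance, the 1/0 scoring approach just described might look like this in code, using hypothetical scores assigned by a reviewer:

# Sketch: reporting the proportion of "appropriate" (1) vs. "inappropriate" (0)
# responses, per the scoring approach described above.
scored_responses = [1, 1, 0, 1, 0, 1, 1]  # hypothetical reviewer-assigned codes

proportion = sum(scored_responses) / len(scored_responses)
print(f"{proportion:.0%} of participants gave an appropriate response.")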
For more open-ended questions, it's helpful to read through the full text to highlight themes;
for example, the theme of "being afraid to call out friends on sexist jokes" or "lack of
institutional support for survivors" may appear in multiple answers. You could code calling out
friends as green and lack of institutional support as yellow, and then go through the rest of
the text highlighting any other content that alludes to those themes in the appropriate color.
Utilize the Data
Once the data has been analyzed, synthesized, and interpreted, it is important to build a plan to
effectively share your findings and improve your programs and approaches. In particular, you
should translate the findings for your various audiences. You can follow these steps to share
the information with your audience:
Identify the key messages that will help you achieve your goal for use and action.
Tailor the language and format considering the key audience’s expertise and
preferences, ensuring that reports are culturally appropriate.
Plan ahead for how and when you will be developing and disseminating the product.[4]
Conclusion
It is important to remember that evaluation is an ongoing process, not a once-and-done task.
As we revise our sexual assault prevention programs, we must continue to evaluate them and
continue to improve them. Check out the step-by-step checklist below to help keep you on the
right track with your evaluation efforts, along with the additional resources listed below. For
program-specific technical assistance about evaluation strategies, please reach out to MCASA's
team at 301-328-7023 or info@mcasa.org.
[4] Centers for Disease Control and Prevention. Veto Violence: EvaluACTION - Putting Evaluation to Work. "Effectively Sharing Evaluation Findings."
Evaluation Checklist:
Step One: Determine the goals and objectives of your prevention program.
o Outline the goals and objectives of your sexual assault prevention program. This
should include:
Target audience
Risk and protective factors to be addressed
Timeline and length of the program, and evaluation opportunities
Step Two: Determine an evaluation plan to measure the achievement of the program goals.
o Determine evaluation measures:
What attitudes, behaviors, beliefs, or knowledge are you trying to change and
what is possible to measure?
Are you interested in finding out if participants have demonstrated a new
skill or behavior as a result of the program?
Are you interested in measuring participants’ knowledge and understanding
of a new concept?
Are you interested in changing policy or changing an element of the physical
or social environment?
o Create a list of evaluation questions you want to answer that align with the
program’s goals and objectives
o Select the evaluation method that is most appropriate for measuring outcomes with
your available resources
Step Three: Collect the data.
o Develop a plan for data collection. Identify your method for collecting data. This may
include pre- and post-test measures, activity-based assessments, surveys,
interviews, focus groups, etc.
o Is the data quantitative or qualitative? Did you use a mixed methods approach?
(Meaning, collecting and using both quantitative and qualitative data).
Step Four: Quantify the knowledge.
o Create a plan to analyze the data.
Can you create a scoring system for whether participants’ contributions do or
do not meet your expectations?
Should you also include narrative assessments to most accurately describe
outcomes?
Step Five: Utilize the data.
o Create a plan outlining how you will use your data, who will have access to the data,
and how results will be shared.
o Determine how you will share the evaluation results with interested stakeholders
(such as funders, practitioners, community members, etc.).
o Determine how you and your evaluation team will use the data to improve program
processes and outcomes. It is important to use data from the evaluation to make
improvements to sexual assault prevention activities.
Program Evaluation Resources
If you would like to learn more about program evaluation for sexual violence
prevention programs, check out these resources listed below:
1. CDC Veto Violence: EvaluACTION - Putting Evaluation to Work: This interactive
guide to evaluation walks users through the evaluation process and helps them
to build their customized evaluation plan.
https://vetoviolence.cdc.gov/apps/evaluaction/home
2. CDC Sexual Violence on Campus: Strategies for Prevention: This technical
assistance document acts as a starting point for prevention practitioners and
campus partners in planning, implementing, and evaluating sexual violence
prevention programming on campus.
https://www.cdc.gov/violenceprevention/pdf/campussvprevention.pdf
3. National Sexual Violence Resource Center (NSVRC) Technical Assistance Guide
and Resource Kit for Primary Prevention and Evaluation.
https://www.nsvrc.org/sites/default/files/Projects_RPE_PCAR_TA_Guide_for_Evaluation_2009.pdf
4. National Sexual Violence Resource Center (NSVRC) Evaluating Sexual Violence
Prevention Programs: Steps and Strategies for Preventionists. This 60-minute
online course provides a basic overview of program evaluation. It includes
content from the Technical Assistance Guide and Resource Kit for Primary
Prevention and Evaluation. http://www.nsvrc.org/elearning/20026
5. PreventConnect Wiki Evaluation Page. This page provides an overview of, and
links to resources for, program evaluation.
http://wiki.preventconnect.org/Evaluation
6. Pennsylvania Coalition Against Rape (PCAR) Technical Assistance Guide and
Resource Kit for Primary Prevention and Evaluation. This technical assistance
guide provides an in-depth look at program evaluation for primary prevention
programs.
Volume 1: Choosing Prevention Strategies
Volume 2: Evaluating Prevention Strategies
Volume 3: Analyzing Evaluation Data
Volume 4: Analyzing Qualitative Data
Updates to this checklist were supported by the Maryland Department of Health RSAPP
Rape Prevention and Education: Using the Best Available Evidence for Sexual Violence
Prevention (CE #19-1902) Grant. The opinions, findings, and conclusions in this
document are those of the author(s) and do not necessarily represent the official
position or policies of the Maryland Department of Health.
Last Updated: November 2022