AmeriCorps State and National Evaluation Plan Template
Organization Name: Click or tap here to enter text.
Program Name: Click or tap here to enter text.
Application ID: Click or tap here to enter text.
Instructions: Fill in the relevant sections of your evaluation plan under the headings below (you may delete the italicized text). Details about the evaluation requirements for AmeriCorps State and National grantees and subgrantees are here. If you wish to request an Alternative Evaluation Approach (AEA), an AEA Request Form must be submitted in addition to this document.
1. Introductory Sections and Program Description
1.1 Theory of Change
Describe the nature of your service activities (interventions) and why they are expected to produce the desired outcomes. This section should be short but include enough detail to assess how well the proposed evaluation aligns with your program model. The content of this section can be adapted from the Theory of Change section of your grant application.
1.2 Scope of the Evaluation
State concisely the goal(s) of the evaluation and specify which service activity or activities will be assessed. AmeriCorps does not require you to evaluate all components of your theory of change; your evaluation may focus on a subset of program activities.
2. Evaluation Outcome(s) of Interest
List the outcome(s) your evaluation will measure. Outcomes must align with the theory of change and scope in #1 and must be feasible to measure, based on the source(s) of data needed and level of effort required. Given the length of a single grant cycle, it can be challenging to evaluate long-term program outcomes; evaluating short- and medium-term program outcomes may be more practical.
For impact evaluations and non-experimental outcome evaluations, outcomes must involve a change in knowledge, attitude, behavior, or condition. Metrics that measure the amount of service provided (e.g., number of students tutored, volunteers recruited, or organizations served) should not be listed as outcomes for these types of evaluation.
3. Research Question(s)
List the research question(s) that will guide your evaluation. Research questions must be clearly connected to the outcomes in #2 and aligned with the theory of change and scope in #1.
For impact evaluations, research questions must:
- involve a comparison between beneficiaries who receive the intervention or the aspect of the intervention being studied (program/treatment group) and those who do not (comparison/control group)
- compare data on the outcome(s) of interest for both groups (i.e., program/treatment and comparison/control) at two different time points, preferably at baseline (pre-intervention) and follow-up (post-intervention)
4. Evaluation Design
4.1 Evaluation type
State the type(s) of evaluation design that will be used, and explain why this is the most appropriate design to achieve the evaluation goal(s) in #1 and answer the research question(s) in #3. Possible evaluation designs include but are not limited to:
- Process or implementation evaluation
- Non-experimental outcome evaluation
- Quasi-experimental design (QED) evaluation
- Randomized controlled trial (RCT) evaluation
For impact evaluations, the evaluation must include a QED or RCT design.
4.2 Control or Comparison Group Formation (if applicable)
For a non-experimental evaluation that will use a comparison group, describe the group that will be used and explain why this selection is appropriate.
For a QED study, describe the approach for identifying a matched comparison group (e.g., propensity score matching, nearest-neighbor matching). Include the procedures for identifying a pool of similar individuals, organizations, or locations from which to draw the comparison group. Also describe the matching procedures and list the variables (covariates) you will use to match treatment and comparison groups, including either a baseline measure of the outcome (e.g., a pre-intervention outcome score) or a proxy measure for the outcome of interest (e.g., grade point average as a proxy for future high school graduation).
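As an illustration of the matching step described above, the sketch below performs 1:1 nearest-neighbor matching on propensity scores. All IDs, scores, and the caliper are invented for illustration; in a real QED the scores would come from a model (e.g., a logistic regression of treatment status on the matching covariates), typically fit with a statistical package.

```python
# Hypothetical sketch: 1:1 nearest-neighbor matching on the propensity
# score, without replacement. Scores and IDs are made-up numbers.

treatment = {"T1": 0.61, "T2": 0.35, "T3": 0.80}        # id -> propensity score
comparison_pool = {"C1": 0.33, "C2": 0.58, "C3": 0.79, "C4": 0.45}

def nearest_neighbor_match(treated, pool, caliper=0.1):
    """Greedily pair each treated unit with the closest unmatched
    comparison unit whose score lies within the caliper."""
    matches = {}
    available = dict(pool)
    # Match the hardest-to-match (highest-score) treated units first.
    for t_id, t_score in sorted(treated.items(), key=lambda kv: -kv[1]):
        if not available:
            break
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        if abs(available[c_id] - t_score) <= caliper:
            matches[t_id] = c_id
            del available[c_id]
    return matches

print(nearest_neighbor_match(treatment, comparison_pool))
# pairs T3 with C3 (0.80 vs. 0.79), T1 with C2, T2 with C1
```

The caliper drops treated units with no sufficiently close match, which protects comparability at the cost of sample size; an evaluation plan should state how such unmatched units will be handled.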
For an RCT study, describe the eligibility criteria for inclusion in the study and the randomization process, which will result in two or more study groups (i.e., treatment and control).
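The randomization process described above might look like the following sketch. The participant IDs, the 50/50 split, and the fixed seed are illustrative assumptions; many studies would also block or stratify on key covariates such as site or grade level before assigning.

```python
# Minimal sketch of simple random assignment for an RCT, using only the
# Python standard library. Eligible IDs are hypothetical placeholders.
import random

eligible_ids = [f"P{i:02d}" for i in range(1, 11)]  # 10 hypothetical participants

def randomize(ids, seed=2024):
    """Shuffle eligible units and split them evenly into treatment/control."""
    rng = random.Random(seed)          # fixed seed keeps the assignment auditable
    shuffled = ids[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"treatment": sorted(shuffled[:half]),
            "control": sorted(shuffled[half:])}

groups = randomize(eligible_ids)
print(groups)
```

Recording the seed and the eligible roster makes the assignment reproducible, which is useful documentation for the final evaluation report.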
5. Sampling Methods
5.1 Sample Selection
Describe the population from which the sample will be drawn, the estimated sample sizes for treatment and (if applicable) comparison groups, and how the sample will be selected (i.e., sampling procedures and/or eligibility criteria). Specify any consent procedures (e.g., parental/guardian consent, opt-in/opt-out) or data use agreements that will be necessary to gather or obtain data.
5.2 Sample Size Justification
For non-experimental evaluations, explain the basis for selecting the sample sizes in #5.1 and how the size will be adequate to answer the research questions.
For impact evaluations, describe how a power analysis was used to determine (1) how large a sample is needed for statistical tests that are accurate and reliable (i.e., the required minimum sample size) and (2) the likelihood that the statistical tests used in the analysis will detect effects of a given size in a particular situation. Detail the assumptions used to conduct the power analysis, including how the minimum detectable effect size (MDES) was identified, and specify the results of the power analysis. If subgroup analyses are anticipated, ensure that the sample size is sufficient to support them.
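To make the power-analysis step concrete, the sketch below computes the minimum sample size per group for a two-group comparison of means under the standard normal approximation. The MDES of 0.25 standard deviations and the conventional alpha = 0.05 / power = 0.80 values are illustrative assumptions; grantees should justify their own MDES from prior research on the intervention.

```python
# Back-of-envelope power calculation for a two-group comparison of means,
# using the normal-approximation formula n = 2 * (z_alpha + z_beta)^2 / MDES^2.
from statistics import NormalDist
import math

def n_per_group(mdes, alpha=0.05, power=0.80):
    """Minimum sample size per group to detect a standardized effect size
    `mdes` with a two-sided test at significance level `alpha`."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # ~0.84 for power = 0.80
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / mdes ** 2)

print(n_per_group(0.25))   # ~252 per group for a 0.25 SD effect
```

Note that clustered designs (e.g., students nested in classrooms) require larger samples than this simple formula suggests; a full power analysis should account for the design's structure.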
6. Data Collection Procedures, Data Sources, and Measurement Tools
Describe each data source and measurement tool and the procedures that will be used to collect or extract data, including when, how often, and by what mode (e.g., paper/pencil, phone, or web survey; administrative data extract). Explain how the proposed data sources and tools are adequate for addressing all of the research question(s) and how the data align with the evaluation’s outcome(s) of interest.
7. Analysis Plan
Describe an analysis that is appropriate for the evaluation’s design and data sources (e.g., statistical testing for quantitative data or descriptive analysis methods for qualitative data). Explain how the analysis will address all of the evaluation’s research questions.
For impact evaluations:
- Describe how you will use multivariate analysis techniques (e.g., ANOVA/MANOVA, ANCOVA, or regression models) to analyze the pre-post data and answer the research questions; t-tests of statistical significance alone are not sufficient because covariates cannot be incorporated into these tests.
- Describe how baseline equivalency test(s) will be conducted to demonstrate that the sample groups (i.e., treatment and comparison/control) do not differ significantly at baseline or, if differences exist, how the necessary statistical adjustment will be made to address any group differences identified.
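The two impact-analysis requirements above can be illustrated with a short sketch: a baseline-equivalence check via the standardized mean difference on the pre-test, followed by an ANCOVA-style regression of the post-test on treatment status controlling for the pre-test. All data here are simulated with a built-in true effect of +5 points; a real analysis would use study data and a statistical package rather than this hand-rolled least-squares solver.

```python
# Illustrative sketch (synthetic data) of a baseline-equivalence check and
# an ANCOVA-style adjusted treatment-effect estimate, stdlib only.
import random
import statistics

rng = random.Random(0)
n = 200
treat = [1] * (n // 2) + [0] * (n // 2)                  # 100 treatment, 100 comparison
pre = [rng.gauss(50, 10) for _ in range(n)]              # baseline outcome scores
post = [p + 5 * t + rng.gauss(0, 5) for p, t in zip(pre, treat)]  # true effect = +5

# 1) Baseline equivalence: standardized mean difference on the pre-test.
pre_t = [p for p, t in zip(pre, treat) if t]
pre_c = [p for p, t in zip(pre, treat) if not t]
smd = (statistics.mean(pre_t) - statistics.mean(pre_c)) / statistics.stdev(pre)
print(f"baseline SMD: {smd:.3f}")   # small |SMD| suggests groups are comparable

# 2) ANCOVA-style model post = b0 + b1*treat + b2*pre, fit by ordinary
#    least squares (normal equations solved by Gauss-Jordan elimination).
def ols(rows, y):
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for i in range(k):
        piv = A[i][i]
        A[i] = [a / piv for a in A[i]]
        b[i] /= piv
        for j in range(k):
            if j != i:
                f = A[j][i]
                A[j] = [a - f * ai for a, ai in zip(A[j], A[i])]
                b[j] -= f * b[i]
    return b

X = [[1.0, t, p] for t, p in zip(treat, pre)]
b0, b1, b2 = ols(X, post)
print(f"adjusted treatment effect: {b1:.2f}")   # recovers roughly +5 points
```

Because the pre-test enters the model as a covariate, the treatment coefficient is adjusted for any residual baseline differences, which is exactly why the plan should name a regression-type analysis rather than a plain t-test.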
8. Evaluator Qualifications
Describe how the person(s) who will conduct the evaluation are sufficiently qualified to conduct the proposed evaluation (e.g., have experience and technical qualifications that align with the planned evaluation design). If the evaluator is not yet identified or hired, describe the required and/or preferred qualifications for an evaluator.
For impact evaluations, an external evaluator is strongly recommended and may be required by the terms of your grant award.
9. Evaluation Timeline
Provide a timeline for all of the major evaluation activities (e.g., finalizing the evaluation design, hiring an evaluator, developing data collection instruments, collecting pre-intervention data, collecting post-intervention data, analyzing data, writing the report). Delineate the timeline by month and year or on a quarterly basis (e.g., fall 2020, spring 2021). The timeline must show how all evaluation activities and a final report will be completed before your next recompete application.
AmeriCorps recommends using the first program year for evaluation planning (including gaining final approval of the plan) and data collection instrument development; the second program year for data collection; and the remaining time in the third program year to analyze data and complete the evaluation report. Because grantees have unique programs and recompete application deadlines may vary by state, exact evaluation timelines will differ.
10. Evaluation Budget
Specify the overall budget allotted for the evaluation, including the cost of engaging an external evaluator, if applicable. If you will use staff time to conduct an internal evaluation, describe those in-kind resources.