Meaning and Measurement: Studying Implementation to Understand Impact for State-Sponsored Programs

Sarah Crittenden Fuller, Ph.D., Erin Manuel, Ph.D., and Rosie Miesner, Ph.D.
[Image: faculty meeting]

In education, nearly all the action in policy and program design happens at the state and local levels. Compared to local-level studies, state-level studies are attractive because they offer large samples of students, teachers, and schools, along with opportunities to look at a policy or program across different types of students and schools. However, many “statewide” policies are actually implemented by individual school districts, and districts adjust them for their own contexts.

Understanding the impact of a policy or program on outcomes is only useful if we can accurately describe that program or policy to those trying to use the findings. Unfortunately, the variation created by local adjustments presents a challenge to evaluators. So how can we study a statewide summer program when differences across districts could have such a significant effect on the results? Our research-practice partnership has developed strategies to overcome some of the common challenges.

Partnering to bring research and implementation together.

Our research-practice partnership is between the North Carolina Department of Public Instruction’s Office of Learning Recovery and Acceleration (OLR) and the Education Policy Initiative at Carolina (EPIC) at the University of North Carolina at Chapel Hill. Together, we set out to study a set of state-sponsored summer programs in the summers following the COVID-19 pandemic. We aim to examine how well these programs meet their goal of reducing the educational impacts of the pandemic on at-risk students. Although the programs are state-sponsored, school districts designed and ran them based on a set of guidelines from the state. This means there are similarities across districts, but there are also many differences. Therefore, to understand the effect of the programs, we must not only look at outcomes but also examine the barriers, priorities, and available resources that shaped the programs in specific districts.

Our partnership between the state education agency and university researchers has unique strengths that help us collect the data we need to understand program implementation across the entire state. Because the state education agency sponsors the summer programs, we were able to build data collection directly into program requirements and the implementation process. Our university research partners, on the other hand, provide a neutral outside perspective that allows district and school staff to share thoughts and concerns they might be more reluctant to share directly with the state education agency.

Collecting data from different sources.

To develop a thorough portrait of summer programs in different districts, we have collected implementation data in three different ways:

  1. District Plans. We used information from the plans that districts submitted to the state for approval of their summer programs. These plans describe the design of the program in each district, including program goals, students targeted for enrollment, planned activities, and partners.
  2. District Surveys. We asked each district to complete a survey at the end of its summer program. These surveys asked about any changes districts made to the program design and for more detail about the summer program activities in each district. They also asked about barriers programs faced around staffing and student participation. The research team worked together to develop the survey to ensure that it covered the most important aspects of the programs. We also used regular webinars hosted by the OLR to share details about the survey with the districts.
  3. Interviews and Focus Groups. We conducted interviews and focus groups with district and school personnel. These qualitative data provide deeper insights into program goals, implementation barriers, and perceived impacts of the program.

Using implementation data to understand program impacts.

We will use the implementation data we collected to understand more about how the summer programs operated in different districts. The implementation data also play a key role in how we study program impacts. Program goals described in district program plans and in interviews and focus groups help us decide which student outcomes to include in our study. For example, many educators in focus groups and interviews described focusing on engagement with school during their summer programs. To make sure our research matches the goals of the summer programs, we are including several measures of engagement, such as student attendance and grades, in our analyses. From implementation data, we also learn about the key criteria districts use when deciding which students to invite to summer programs. We use this information to compare students who attend summer programs to the non-attending students who are most like them, which helps us better estimate how the summer programs affect student outcomes. Finally, information from program plans and surveys about the types of activities and resources included in summer programs across districts helps us understand not just whether summer programs are successful but how districts can design their programs to make the biggest difference for student outcomes.
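To make the comparison idea concrete, the sketch below shows one simple way a comparison group could be built: pairing each program attendee with the most similar non-attendee on characteristics districts reported using for invitations. This is only an illustrative sketch, not the partnership’s actual analysis code; the column names (prior_score, prior_attendance, attended_summer) and the nearest-neighbor matching approach are assumptions for the example.

```python
# Illustrative sketch: match summer-program attendees to similar non-attendees
# on hypothetical eligibility-related measures. Not the study's actual method.
import pandas as pd
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

# Hypothetical student-level data; in practice these would come from
# administrative records and the criteria described in district plans.
students = pd.DataFrame({
    "student_id":       [1, 2, 3, 4, 5, 6],
    "prior_score":      [0.42, 0.35, 0.80, 0.38, 0.77, 0.41],
    "prior_attendance": [0.88, 0.85, 0.97, 0.86, 0.95, 0.84],
    "attended_summer":  [1, 1, 0, 0, 0, 0],
})

features = ["prior_score", "prior_attendance"]
scaler = StandardScaler().fit(students[features])

attendees = students[students["attended_summer"] == 1]
comparison_pool = students[students["attended_summer"] == 0]

# For each attendee, find the most similar non-attendee on the
# standardized eligibility-related characteristics.
nn = NearestNeighbors(n_neighbors=1).fit(scaler.transform(comparison_pool[features]))
_, idx = nn.kneighbors(scaler.transform(attendees[features]))
matched_ids = comparison_pool.iloc[idx.ravel()]["student_id"].to_numpy()

for attendee_id, match_id in zip(attendees["student_id"], matched_ids):
    print(f"attendee {attendee_id} matched to comparison student {match_id}")
```

Outcomes for the matched pairs could then be compared to gauge how attendees fared relative to similar students who did not attend.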

Making an impact by working together.

Most researchers recognize the value of understanding exactly what programs and policies look like on the ground. However, it is not always easy to gather data from many diverse districts. Partnering with a state agency or another organization that works across districts can provide both access and a process for collecting those data. In addition, partnering early allows researchers and practitioners to work together to collect data in ways that provide high-quality information to inform both research and practice and make the greatest impact for students.


Author

Sarah Crittenden Fuller holds a Ph.D. in public policy from the Sanford School of Public Policy at Duke University. She leads a grant funded by the Institute of Education Sciences in partnership with the Office of Learning Recovery and Acceleration (OLR) at the North Carolina Department of Public Instruction to examine the impact of OLR-sponsored programs on COVID-19 recovery for students. Fuller’s work leverages large administrative datasets and quasi-experimental designs to estimate the causal effects of particular policies or events on student outcomes. She uses primarily quantitative methods to address research questions about schools and students, focusing on questions that have the potential to inform policy decisions. She has recent publications in Natural Hazards Review, Education Finance and Policy, Urban Education, Social Science Research, and the Journal of Policy Analysis and Management.

Photo by Allison Shelley for EDUimages