
Contrary to news headlines, the U.S. Department of Education hasn’t approved local assessments to take the place of the statewide tests in New Hampshire. Instead, it approved piloting of a new, statewide assessment model — and that’s a critical distinction, especially as a few members of Congress suggest that districts should choose their own tests under a reauthorized Elementary and Secondary Education Act.

Indeed, it wasn’t so long ago that districts were left entirely on their own to choose what assessments they would administer. In place of the common measuring stick we have today, districts used to choose tests that put their results in the best light (and, often, the cheapest ones they could find). The testing companies even sold different versions of the same test, some with so-called “urban norms.” Needless to say, this undermined parent — and educator — confidence in the results, because virtually everybody was “above average.”

Statewide assessments have since been developed to combat this problem. They serve as a check to ensure the students who are the focus of federal law — low-income students, students of color, students with disabilities, and English learners — are not subject to lower expectations than their peers. They provide parents with critical information for making school choices across district boundaries. They allow educators to benchmark their students’ performance not just against other students in the school or the district, but across the state. And they serve as a cornerstone for statewide accountability systems that expect and support all students to make progress toward college and career readiness.

That’s why a wide range of stakeholders, including chief state school officers, district leaders, and teachers, support statewide assessments — and why civil rights, business, and disability groups have pushed back hard against proposals that would undermine them.

As the assessment field evolves and improves over time, there will rightly be calls to try out new models to see if they’re ready to be rolled out statewide. That’s exactly what New Hampshire is doing, and the contours of its agreement with the U.S. Department of Education offer some guidance on how lawmakers in the ESEA debate should think about allowing innovation while still holding fast to the imperative of statewide assessment.

First and foremost, this is a true pilot, with an annual, data-driven evaluation of the system. There’s a clear expectation that if the new assessment model does meet rigorous criteria, all districts statewide will adopt it, and if not, pilot districts will go back to the statewide test. And it’s led by the state, not by individual districts. Other key conditions of the pilot include that New Hampshire will:

  • Ensure participating districts provide parents with information about the pilot; assess all students, including students with disabilities and English learners, with appropriate accommodations; and have some students participate in the statewide assessment each year as a benchmark. If the pool of pilot districts grows, the student body in that pool must be demographically similar to the rest of the state to ensure that pilot results reflect the range of students in the state.
  • Publicly report results for all students, and use those results in statewide accountability and educator evaluation systems.
  • Provide data to the U.S. Department of Education on the number and demographics of participating students, performance for all groups of students, the comparability of results between pilot districts and the statewide assessment, and feedback from students, parents, and administrators.
  • Align the pilot assessment with state standards and go through the U.S. Department of Education’s peer review. New Hampshire must also ensure that there’s an external evaluation of the pilot.

To be sure, there’s real risk in this approach. Parents of academically struggling students in pilot districts may not get the kind of information they need to see if interventions are working. Educators might fail to get a common cross-district understanding of what “good” looks like and thus lose the ability to benchmark not just within their district but across the state.

That’s why this is a pilot. It’s why the criteria for participation and continuation are as high as they are. And it’s why the U.S. Department of Education is retaining the ability to review the pilot and rescind it if need be.

These are the right kinds of safeguards to have in place. And they — together with the fact that this is a state (rather than district) initiative — are what unequivocally distinguish this from just plain local assessment.