
May 1, 2023
Patrick Rooney
Office of Elementary and Secondary Education
U.S. Department of Education
400 Maryland Avenue, S.W., Room 3W202
Washington, D.C. 20202

Dear Mr. Rooney:

On behalf of the undersigned civil rights, disability rights, and education advocacy organizations, we appreciate the opportunity to respond to the request for information (RFI) regarding the Innovative Assessment Demonstration Authority (IADA) under the Elementary and Secondary Education Act (ESEA). Valid, reliable, and comparable information on student achievement produced each year by statewide assessments is an essential tool for addressing inequities in education, particularly for students of color, students from low-income backgrounds, students learning English, students with disabilities, students experiencing homelessness, and other historically underserved groups. This data is not only useful for families and educators helping students as they recover from disruptions to their learning experience during the pandemic, but also necessary for education leaders charged with targeting state and local resources to the students and schools that need them most.

However, several recent reports have found students, families, educators, and other stakeholders questioning whether current assessments are fully meeting their intended goals and being used effectively to improve educational outcomes and reduce educational inequities. Given these findings, we welcome states’ efforts to innovate and improve their assessment systems – and recognize the important roles IADA and the U.S. Department of Education’s (ED’s) other assessment programs could play. States should be encouraged to develop and use more sophisticated, innovative test designs and items that can measure higher-order thinking skills aligned with the state’s academic standards; to make assessments more instructionally, culturally, and linguistically relevant and ensure they are free of bias; to incorporate Universal Design for Learning principles that make assessments more accessible and inclusive; to develop assessments in students’ native languages; and to provide more timely, relevant results to families and educators.

Thus, our comments address four key ways that IADA could be updated to support state educational agencies (SEAs) in developing and adopting more innovative statewide assessments, as well as how ED could leverage its other assessment activities to enhance these efforts:

1. States need dedicated time for planning their innovative assessment system prior to receiving flexibility for field testing.

Ever since SEAs were invited to apply for IADA, states have been expected to be relatively far along in the test development process to take advantage of its flexibility. Specifically, because of the statutory and regulatory requirements, particularly the fact that an SEA receives flexibility from the requirement to give the same assessment to all students as soon as it receives IADA, the expectation has been that the first year of the demonstration period coincides with field testing of a state’s new assessment. However, that means there is no dedicated program to support the critical phases of work prior to field testing, including:

  • Collaborating with diverse stakeholders to determine a new assessment approach and its goals;
  • Soliciting proposals and selecting vendors and other technical experts to facilitate the project;
  • Developing assessment blueprints and items;
  • Testing new assessment platforms, formats, and questions with students and educators;
  • Screening for and addressing racial, cultural, and linguistic bias and accessibility issues with new assessment items;
  • Providing professional development for educators and other staff; and
  • Collecting feedback and making adjustments prior to field testing.

Given the importance of these activities, we recommend ED create a planning phase for SEAs seeking to develop innovative assessments. This could be accomplished in multiple ways:

  • Using the Competitive Grants for State Assessments (CGSA) program to provide resources, technical assistance, and guidance to SEAs as they plan and develop a statewide innovative assessment before they apply for flexibility through IADA or a one-year field test flexibility waiver. This approach would have the added benefit of providing SEAs with the financial resources needed for this work (see more below).
  • Creating a planning phase within the five-year IADA demonstration period. Under this approach, ED could add a “planning” application for IADA, which would enable selected states to enter a community of practice with other SEAs. Because SEAs in the planning phase would not yet be ready to field test their innovative assessment, ED would need to condition approval of the planning application on continuing to give the same assessment statewide. In other words, the flexibility in ESEA section 1204(e)(2)(A)(i) would not apply during planning. Once an SEA in the planning cohort was ready to field test its new assessment, it could submit to ED any remaining IADA application items that were not addressed in its planning application; if all requirements were met, the SEA would then receive flexibility to begin field testing the innovative assessment with a subset of schools. With some of the demonstration period devoted to planning, SEAs would need to anticipate a shorter time for scaling the innovative assessment statewide. However, ED would retain the ability to offer states a one- to two-year IADA extension if they needed additional time for field testing.

2. States need funding, as well as flexibility, to help them develop and implement innovative statewide assessment systems that will meet ESEA requirements.

Many SEAs lack the resources and capacity to successfully complete the planning activities noted earlier while continuing to maintain and administer their existing state assessments. States that receive IADA have the benefit of some statutory flexibility, but no additional funding – a significant barrier inhibiting them from developing and using more innovative assessment approaches.

Thus, we encourage ED to work with Congress to provide funds to support the development of innovative statewide assessments by:

  • Continuing to request increased funding for future CGSA competitions and focusing them on innovative statewide assessment systems, rather than other priorities, by creating an absolute competitive priority for SEAs seeking to develop innovative assessments that meet ESEA requirements in section 1111(b)(2).
  • Also including in the President’s budget request new, dedicated funding for states to plan, develop, pilot, and adopt innovative statewide assessments under IADA.

We believe all states should be eligible to compete for innovative assessment funds, not just those participating in IADA, since an SEA could adopt an innovative assessment funded through CGSA statewide without taking advantage of the flexibility in IADA (e.g., by requesting a one-year field test flexibility waiver instead). This would also be an effective and logical use of CGSA funding, especially as the statutory priorities already align with innovations in state assessments. Moreover, with increased funding, CGSA could support larger, more transformative changes to state assessment systems than the smaller, more limited grants of the past. To make participation in IADA more robust and effective, it would also be helpful to have dedicated funding through a mechanism that would enable states to invest in developing, implementing, and scaling their innovative assessments.

3. States need clarity on how to meet IADA’s requirements for “comparability,” including whether the innovative assessment must yield the same results as the existing assessment and/or be aligned to the same achievement standards as the existing state assessment.

To date, and as ED’s RFI recognizes, many states have struggled to meet IADA’s “comparability” requirements in ESEA section 1204(e)(2)(A)(iv) and (x): the innovative assessment system must “generate results that are valid and reliable, and comparable… as compared to the results” on the existing state assessment and must “generate an annual, summative achievement determination, based on the aligned State academic achievement standards” for each participating student.

We believe that much of this challenge may stem from a misinterpretation of the legislative language that should be corrected. These provisions do not require SEAs to produce individual student results on new innovative assessments that are identical to results on current statewide assessments. Rather, “comparable” should mean the results are “able to be compared.” Specifically, states need to be able to compare student results from both assessments in order to use that data during the demonstration period for several purposes required by Title I, Part A: reporting to parents and educators on individual student progress, reporting to the public on state and local report cards, and identifying and supporting school improvement through the state’s accountability system. The ability to compare results in order to use them for these purposes goes beyond simply comparing the quality of the two assessment systems, and ED could issue guidance or a Dear Colleague Letter to clarify this interpretation of “comparable” results.

Indeed, part of the impetus for developing a new assessment may be to more accurately measure deeper learning skills and knowledge that are not well represented on current tests, and when states have adopted new assessment systems in the past, student results have naturally differed from results on prior tests. But because SEAs may receive flexibility through IADA for five to seven years, the “comparability” provisions are intended to provide an equity guardrail while multiple assessments are in use – helping to maintain clear reporting for parents, educators, and the public; fairness for schools in the accountability system; and high expectations for all students’ learning during extended field testing. The goal for “comparability” within IADA should be to ensure transparency, alignment to state academic content standards, and some method for families, educators, and state and local leaders to interpret results between the old and new tests – not to ensure that student results would be the same on both assessments.

Given that context, ED cannot, and should not, ignore these statutory provisions. However, the “comparability” requirements could be clarified, including in the following ways:

  • Based on the clarified definition of “comparability” above, IADA implementation to date, and input from psychometric and technical experts, ED could provide examples of how states could compare results from the existing state assessment and the more innovative assessment in ways that would meet the regulatory requirements in 34 CFR §200.105(b)(4)(E): “An alternative method for demonstrating comparability that an SEA can demonstrate will provide for an equally rigorous and statistically valid comparison between student performance on the innovative assessment and the statewide assessment, including for each subgroup of students.”
  • ED could clarify whether the academic achievement standards used for the innovative assessment must be the same as those for the existing statewide assessment, or whether states could demonstrate they are setting more rigorous achievement standards on the new test. If it is permissible under IADA to set different academic achievement standards, guidance could clarify how an SEA could do so while still meeting the requirement to issue annual summative determinations for students participating in the innovative assessment “based on the aligned State academic achievement standards” used for the current state test (ESEA section 1204(e)(2)(A)(x)).

4. States need knowledge of approaches they can take to develop innovative assessment systems and how they can demonstrate these systems meet federal requirements.

As noted above, some of IADA’s “comparability” provisions have proven to be a barrier for states, even as the ability to compare student assessment results statewide during IADA remains a bedrock principle of equity. However, SEAs may be unaware of other approaches to developing innovative state assessments that better suit their needs. Specifically, ED could issue guidance on the multiple pathways states can use to plan, build, pilot, and operationalize a more innovative assessment system. For example:

  • ED could highlight how states may use CGSA to support planning and development phases of the project or submit a planning application for IADA (if such an approach were adopted). Likewise, guidance could explain how states could prepare for and pursue a one-year field test flexibility waiver (which, per ED’s practice, has no “comparability” expectations) or submit an application for IADA to support a longer field test while demonstrating “comparability.”
  • Guidance could also address questions state leaders have about planning time; field-test flexibility waivers; and what pathways to innovation exist for alternate assessments aligned with alternate academic achievement standards, English language proficiency assessments, and/or native language assessments.
  • At the same time, to facilitate states exploring all potential pathways to innovation, ED should use the progress report described in ESEA section 1204(c) to inform the expansion of IADA and permit more than seven states to participate.
  • Further, while the Every Student Succeeds Act (ESSA) maintained most of ESEA’s prior assessment rules, it recognized several innovations states could consider both in IADA and in the assessment requirements in Title I, Part A. Yet, despite these changes, states remain unsure of how to develop an assessment system with these features that would satisfy federal assessment peer review requirements. Specifically:
      • ED should update its peer review guidance with examples of how SEAs could submit satisfactory evidence in cases where they are using a computer adaptive assessment, multiple assessments during the year (i.e., “through-year” assessment), performance tasks, and other innovative approaches highlighted in ESSA.
      • Accompanying guidance to a peer review update could address myths related to innovative assessment designs and explain how these designs are consistent with ESEA requirements. It would be especially useful to include examples of states that have adopted innovative approaches for required federal assessments and submitted them successfully for peer review.
  • Whenever ED publishes new guidance, we also encourage you to offer technical assistance to SEAs and communicate with state assessment directors, Title I directors, assessment technical advisory committees, assessment vendors, psychometricians and professional organizations, and other stakeholders to explain the guidance and highlight how new assessment approaches align with federal requirements and programs.

Thank you again for the opportunity to provide feedback as you consider how to best support innovative approaches to statewide assessments. We appreciate your leadership in calling attention to the value of continuously improving our assessment approaches so that they can be better tools for promoting educational equity and student learning.

Sincerely,

All4Ed
Center for American Progress
EducationCounsel
Education Reform Now
National Center for Learning Disabilities
SchoolHouse Connection
Teach Plus
The Education Trust
UnidosUS