
Over the past six months, we have worked with a large coalition of national civil rights, disabilities, and business organizations to convince Congress to strengthen the accountability provisions in the renewal of the Elementary and Secondary Education Act. This diverse coalition shares the conviction that if the new law is to be a step forward, rather than back, for children, it is critical that schools and districts receiving federal dollars have an obligation to act whenever any group of children is not progressing.

Why? Because plenty of evidence shows that averages can hide large gaps, and schools that appear to be doing well on average sometimes demonstrate far worse results for some groups of students. We’ve long found big within-school differences in measures that matter, from proficiency to AP/IB access and success.

Recently, the Thomas B. Fordham Institute published an analysis that claims that the problem we worry about — schools that are serving some groups of students well, while underserving others — is “virtually nonexistent.” They rely, in part, on data from Colorado showing that there are few schools that receive a high rating for overall student growth and the lowest rating for growth of their low-income students and students of color. Based on these findings, Fordham’s analysts would have you believe that good schools are just plain good for all of the kids they serve.

When we look at Colorado’s growth ratings, however, we see a very different picture.

Student growth data tells us how the progress students are making compares with the progress of all other students with similar past performance histories. As many parents — especially parents of color — will tell you, looking at growth alone is not enough: They want to know whether students are meeting standards, not just how much they’re progressing. Many educators, however, have rightly pointed out that proficiency rates often don’t tell the whole story, and that schools educating large numbers of students who enter behind may be helping to close gaps by substantially accelerating these students’ learning, even if they don’t hit the proficiency bar.

But if we are ever to close America’s longstanding achievement gaps, more of our schools need to do what the best among them already do: make more progress for their low-income students and students of color, helping to catch them up to their more privileged peers. What we absolutely can’t afford is schools that demonstrate lower growth — even a little bit lower — for these groups, because gaps already exist, and they’ll only widen year after year. This means that any school that receives a lower rating — not just the lowest — for the progress of its low-income students and students of color than for all students may be leaving its highest-need students behind.

As Fordham’s own numbers show, this is the reality in a lot of Colorado schools:

  • Nearly 40 percent of schools that got an “Exceeds” rating for overall (all student) growth got a lower rating for the growth of their low-income students in reading, and nearly a quarter got a lower rating for the growth of their minority student group.
  • Almost 25 percent of schools that got a “Meets” rating based on overall growth got lower ratings for the growth of their low-income students in reading, and about 18 percent got lower ratings for the growth of their minority student group.

That’s not what we’d call “virtually nonexistent.”

According to Fordham, it’s just fine if schools show less progress for their low-income students and students of color than for other students — as long as it’s not way less progress. We couldn’t disagree more. To us, not taking action when schools and districts fail to accelerate the very students for whom a quality education is the only path to the middle class is akin to watching a child tread water in the open sea and not throwing a life preserver. Certainly, No Child Left Behind had problems that must be fixed, but this approach would be a dramatic step backward.

A Final Point on Measuring Student Growth

The growth data Fordham uses is a lot more complicated than their analysts acknowledge. For simplicity’s sake, throughout this post, we assume (as Fordham does) that getting the same growth rating for all students and for, say, low-income students means that a school is making similar progress for both of these groups. In reality, though, that may not be the case. Here’s why:

Student growth percentiles measure how much progress a student is making compared with all other students with similar performance histories. While this indicator is important and should certainly factor into a school accountability system, it tells us little about the amount of progress a student has made. For example, a student growth percentile of 60 means that a student made more progress than 60 percent of students with similar past performance. But depending on how much growth all students with similar performance histories made, a 60 could mean three months of growth in an academic year or 15 months of growth. Moreover, a 60 for one group of students — e.g., low-income students — may mean a very different amount of progress than a 60 for another group.
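To make this concrete, here is a deliberately simplified sketch of the idea. (Colorado’s actual student growth percentile model is more sophisticated — it uses quantile regression on students’ full score histories — so the function and data below are illustrative assumptions, not the real methodology.) The point it demonstrates is the one above: the same percentile can correspond to very different amounts of absolute progress, depending on how much the peer group grew.

```python
# Simplified illustration (NOT Colorado's actual model): treat a student's
# growth percentile as their percentile rank, by score gain, among peers
# with similar prior performance.
def growth_percentile(student_gain, peer_gains):
    """Percent of peers whose gain was smaller than this student's gain."""
    below = sum(1 for g in peer_gains if g < student_gain)
    return round(100 * below / len(peer_gains))

# Two hypothetical peer groups with very different typical growth
# (numbers are months of academic growth in a year, invented for this sketch).
low_growth_peers = [1, 2, 2, 3, 3, 3, 4, 4, 5, 6]
high_growth_peers = [8, 9, 10, 11, 12, 12, 13, 14, 15, 16]

# Both students land at the 60th percentile, yet one gained 4 months
# of learning and the other gained 13.
print(growth_percentile(4, low_growth_peers))     # → 60
print(growth_percentile(13, high_growth_peers))   # → 60
```

A 60th-percentile rating looks identical in both cases, which is exactly why identical growth ratings for two student groups need not mean identical progress.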