What Story Do 2013 State Accountability Ratings Tell?

San Antonio Alliance

The state’s school accountability system has been tweaked but remains excessively focused on standardized testing rather than more comprehensive measures of learning, and the legislature just ordered a new accountability overhaul that has yet to be implemented. Hence it is best to take today’s test-driven accountability ratings of school districts, charter networks, and individual campuses with due skepticism.

With that caveat, what do these 2013 ratings purport to tell us?

Headline writers may focus on the finding that 92.5 percent of districts and charters and 84.2 percent of campuses have met state standards. It may also be noted that 6.5 percent of districts and charters, along with 9.1 percent of campuses, failed to meet state standards, receiving an “improvement required” rating. Some will spin the results as a success story; others will dwell on the rise in the share of campuses rated substandard, from somewhat more than 6 percent in the 2011 accountability ratings to more than 9 percent today. (No ratings were issued in 2012, deemed a transition year as the new State of Texas Assessments of Academic Readiness, or STAAR, exams were introduced.) Of course, upon further reflection some might consider it relevant that the period from 2011 to 2013 coincides with massive state cuts in education funding.

This year also marks the first use of Commissioner of Education Michael Williams’ much-ballyhooed rating system involving four “indices” of performance. Williams has touted it as a move away from over-reliance on test scores. But in fact three of the four indices are built entirely on test scores, and the fourth relies on them in part. One measures “student achievement” in terms of STAAR test results. Another gauges “student progress” in terms of test-score gains. A third rates districts and schools on their success in “closing performance gaps” between historically higher-scoring and historically lower-scoring subgroups, again in terms of test scores. The fourth rates post-secondary readiness in terms of graduation rates and, yet again, test scores.

The public has grown increasingly aware of the broad discretion that state education officials exercise when they set “passing” standards for districts and schools subject to state ratings. One of the commissioner’s four indices offers a clear example: the “student progress” index, which arbitrarily assigns the substandard rating of “improvement required” to the 5 percent of districts and schools with the lowest test-score gains, and only to those 5 percent. In other words, as the Texas Education Agency’s director of performance reporting told the Associated Press, this amounts to announcing in advance that, by definition, 5 percent of schools will not meet the standard.

As noted above, the rating scheme put in place by Commissioner Williams for this year is already due to be superseded in the near future. By school year 2016, according to HB 5 as passed last May, districts will be rated according to a new A-to-F grading system, while campus ratings will revert to the familiar exemplary/recognized/academically acceptable/academically unacceptable categories used in earlier years. Under HB 5, the commissioner is exhorted to reduce the reliance on standardized testing for school ratings, but it is largely up to the commissioner to decide how, and how much. The current commissioner seems to believe his four performance indices already take care of the problem, even though they rely heavily on STAAR scores.

Today’s state accountability ratings reinforce at least one other familiar story line. Once again regular school districts outperformed their comparatively deregulated charter counterparts. By our calculation, some 15.7 percent of charters that received a rating were rated substandard (i.e., “improvement required”). The comparable figure for regular school districts: 4.9 percent. Charters thus were more than three times as likely as regular school districts to be rated substandard. In fact, this comparison probably understates the lower results for charters, because more than one out of six had the benefit of an alternative, more lenient rating method unavailable to regular school districts.