School’s out for summer, but the report cards are still coming in. And this time the grades aren’t for the students; they’re for the schools.
The Rhode Island Department of Elementary and Secondary Education (RIDE) released the 2012 School Classification data Friday, ranking schools on several criteria and giving each a rating of “commended,” “leading,” “typical,” “warning,” “focus” or “priority.”
Twenty-six schools achieved the top honor of “commended” and 29 fell into the lowest ranks of “focus” and “priority.”
The whole classification process was part of the new Rhode Island Accountability System, which was approved by the U.S. Department of Education on May 29.
The schools were ranked on a scale of 100 possible points, with most falling into the “typical” category. Newport’s Dr. M.H. Sullivan School scored the lowest with 25 points, while Tiverton’s Fort Barton School scored well above the others with a near-perfect 98.5.
Parts of the scoring rubric relied on NECAP data, a statistic RIDE often turns to in determining the proficiency of students statewide. Opponents of heavy reliance on standardized testing say the NECAPs aren’t the best way to show student growth and teacher efficacy, and that RIDE should take other factors into consideration. Because the test scores carried so much weight, some high-achieving schools received a lower classification than their numerical scores would suggest.
So does the data really paint an accurate picture of local schools’ performances? There will be plenty of people crunching numbers and examining trends to determine that on their own time.
The administrators who will pore over the data in the coming months have a lot of work to do. Not only will they have to determine what led to the final scores and classifications, they’ll have to decide what to do from here.
Will the schools that were “commended” sit back and relax? Doubtful. What about those that fell into the lowest-ranking categories? Those schools will be required by RIDE to implement multiple-year reform programs to ensure they get themselves out from the bottom of the barrel.
What about the bulk of the schools that fell somewhere in the middle? Well, no one celebrates mediocrity, so it’s likely they’ll be striving to make improvements, too. Each school’s individual faults will require specific attention, and district-wide problems will be pinpointed. It’s hard to say just yet whether this data will be helpful in changing the performance of our students or the proficiency of our schools, but at least it’s sparking a conversation.
And really, if these classifications succeed in motivating people to improve (or at least, attempt to improve) our school systems, the accuracy of the data is a moot point.