I’m a passionate person. If you’ve attended any of our committee meetings when I have poured forth, you know. I care deeply and sometimes I can overdo it. It’s who I am.
I want to expand on comments I made at our last Educational Affairs meeting, where I was pretty adamant that we needed to study the data from our recent PSSA testing reports carefully, including a significant amount of correlative analysis that could bring more insight into what the data is telling us. My comments ran long, so I cut them short.
There are loads of news reports and commentaries about testing these days, sparked by the LA Times and their efforts to publicize specific teacher performance. I see that article as mostly a waste of time, almost completely missing the point of testing and failing to focus on the right use of test data. Its only benefit has been to spark separate, productive discussion about testing. For one such rational discussion of this story, see this post by John Merrow, Education Correspondent for the PBS NewsHour.
I want to expand on my comments, first, by saying that I do not see test data as a way to punish or evaluate any person or group based on a single year's results. These tests are just not that good. Too many factors can come into play in any given year to justify such short-term consequences. This lesson I've learned both from extensive mathematical training and good old hard knocks. So I'll set aside any interpretations of the data that might drive a short-term conclusion or action. Instead, let's focus on all the good this testing can do.
In my comments, I indicated that the data can be correlated with a number of factors such as school conditions, teaching team factors, and even teachers, so I want to also note my intent with such correlations.
First, these correlations should only be done across multiple years, perhaps 5 or more, in order to smooth out year-to-year statistical noise from shifting student populations and to focus on the important long-term trends. Even this may be too short a timeframe, given that various conditions can change in ways that influence the analysis. For most things, though, it seems a reasonable period to consider.
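To make the multi-year idea concrete, here is a minimal sketch of the kind of trend analysis I have in mind. The years and proficiency rates below are entirely invented for illustration; the point is only that a least-squares slope over 5 years of results says far more than any single year's number.

```python
# Hypothetical district-wide PSSA proficiency rates (percent) over 5 years.
# All numbers are made up for illustration.
years = [2006, 2007, 2008, 2009, 2010]
rates = [68.0, 70.5, 69.8, 72.1, 73.4]

def trend_slope(xs, ys):
    """Least-squares slope: the average change in y per unit of x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

slope = trend_slope(years, rates)
print(f"Average change: {slope:+.2f} points per year")  # prints "+1.24 points per year"
```

A single down year (2008 above) barely moves the 5-year slope, which is exactly why the long view is the right one.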
Next, these analyses should be used to identify areas for improvement. There are plenty of things that can be done to improve and, if analyzing data helps us to focus and prioritize our efforts and resources, then doing some homework is worth the effort.
Also, my list of correlations can be expanded quite extensively to include multiple subgroups within the data that are related to non-prejudicial characteristics of various populations. Subgroups can be based on being new to our district, new to a school, new to a curriculum, etc., as well as the obvious subgroups of ethnicity and socioeconomics that are already included in most analyses. Again, the focus should stay only on what can be learned and improved.
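A small sketch of what such a subgroup breakdown might look like, using a "new to district" flag as one example subgroup. The subgroup names and all numbers are invented for illustration:

```python
from collections import defaultdict

# Hypothetical per-year proficiency rates, tagged by subgroup.
# (year, subgroup, percent proficient) -- all values are invented.
records = [
    (2008, "new_to_district", 61.0), (2008, "returning", 72.0),
    (2009, "new_to_district", 63.5), (2009, "returning", 73.0),
    (2010, "new_to_district", 66.0), (2010, "returning", 73.5),
]

def subgroup_averages(rows):
    """Average each subgroup's rate across all available years."""
    by_group = defaultdict(list)
    for _, subgroup, rate in rows:
        by_group[subgroup].append(rate)
    return {g: sum(v) / len(v) for g, v in by_group.items()}

for group, avg in sorted(subgroup_averages(records).items()):
    print(f"{group}: {avg:.1f}% average across years")
```

If one subgroup's multi-year trend lags another's, that gap points us to where effort and resources might do the most good, which is the whole purpose of the exercise.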
Finally, once we have the data, we should spend very little time praising or lamenting any immediate annual results and focus on the long-term trend analyses they support. On this point, I hope to see much more work done and will support further analysis so that we can be data-driven toward a productive end…making our schools better and better every year. We have the data.