The other day, David Brooks of the New York Times published a short piece, The Philosophy of Data, on the current prevailing mania for “data-ism”. This op-ed strikes a chord with me, as “data-ism” is something I have seen in public education for as long as I’ve been teaching (which is just a paltry four years). This data-driven mania is prevalent on both coasts – in the San Francisco Unified School District and in the New York Department of Education. I’m willing to bet that it’s the same in most other large school districts, and probably trickling down into the smaller ones too. When I started teaching, one of the first things I was told (or had implied to me) was that data was the be-all, end-all, and a measure of how good a teacher you are. I mulled that over my first year and initially agreed – we enjoyed a double-digit percentage-point increase on the STAR test that year – but I was always skeptical of the implied causation. I don’t think I was a very good teacher at all my first year. In fact, I’d say I was horrible. David Brooks’s article touches on this skepticism about data-driven strategies. He says there is no evidence that teaching to students’ learning styles gets results. Does this mean it’s OK not to break my neck tailoring everything I do to all the different student learning styles? Blasphemy!
I confess I enter this in a skeptical frame of mind, believing that we tend to get carried away in our desire to reduce everything to the quantifiable.
…many teachers have an intuitive sense that different students have different learning styles: some are verbal and some are visual; some are linear, some are holistic. Teachers imagine they will improve outcomes if they tailor their presentations to each student. But there’s no evidence to support this either.
This philosophy of using data as absolute proof has so many implications for what I do every day in the classroom. For example, the big thing in SFUSD is using data to inform your instruction (if I had a dime for every time I heard those words…). In an effort to help us teachers inform our instruction, and to hold us accountable, our school implemented mandatory “accountability and assessment meetings”, where we had to show data from an assessment and talk about how we planned to act on the information we got from it. The district licensed a handy program called Data Director, which took our (mostly multiple choice) test questions, aligned them to the state standards, and, once we scanned in the answer sheets, spat out a statistical analysis of how our students performed. Sadly, this sort of cool, efficient technology is not available to me here at the NYDOE – I have to spend hours entering answers into an Excel spreadsheet to run my own analysis. What a time-suck.
I had a love/hate relationship with Data Director. It made test grading super fast and easy, allowed me to design assessments with a variety of question types, and did all the analysis for me. I could look at performance across a class, across a grade, among special population students, every which way. The data was super informative and clued me in to things I probably would have missed otherwise. I could see which questions students struggled with the most and which answers were most popular, allowing me to clear up misconceptions right away and re-teach only the most important or commonly missed topics. I was also able to be transparent with this data and show the students their own numbers, their class data, and how they compared to the other Biology sections. This transparency was a huge boon to my instruction. Kids who thrive on competition could see where they stood compared to others. Kids who are self-motivated and benefit from quiet reflection could see which topics they needed to study more or get tutoring on. Students who just didn’t give a shit could see that many of their peers did in fact give a shit (thus motivating them to actually give a shit – in theory).
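For the curious, the item analysis I now rebuild by hand in a spreadsheet boils down to something like the sketch below. This is a rough illustration, not Data Director’s actual internals (which are surely fancier) – the file name, column layout, and answer key here are all hypothetical:

```python
import pandas as pd

# Hypothetical layout: one row per student, one column per question,
# each cell holding the letter that student bubbled in (A-D).
responses = pd.read_csv("period3_unit_test.csv", index_col="student")

# Hypothetical answer key: the correct letter for each question.
key = pd.Series({"Q1": "B", "Q2": "D", "Q3": "A", "Q4": "C"})

# True wherever a student's answer matches the key (aligned by column).
correct = responses.eq(key)

# Per-student percent scores and per-question percent correct.
student_scores = (correct.mean(axis=1) * 100).round(1)
item_difficulty = (correct.mean(axis=0) * 100).round(1)

# Most popular choice per question; when it isn't the keyed answer,
# that's a shared misconception worth re-teaching.
popular = responses.mode().iloc[0]

print(student_scores.sort_values())   # who needs tutoring
print(item_difficulty.sort_values())  # which topics to re-teach
print(popular[popular != key])        # questions with a popular wrong answer
```

That last line is the one I leaned on most: a question where the most popular answer is a specific wrong choice points to a misconception the whole class shares, not just a hard question.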
While I love seeing data, it continuously serves as a slap in the face. It crushes my confidence, it depresses me, it pisses me off and makes me disappointed in my students. It also tells me that I’m a crappy teacher who shouldn’t even breathe the same air as my administrators, because we’re all made to feel (or maybe I just feel it on my own) that they could have gotten better results. I get anxiety when I analyze my data. The take-home message that is continuously driven into our psyches is that if our students aren’t performing, it’s because we’re doing something wrong. Plain and simple: if your students are not acing your tests, it’s because you’re a bad teacher. This alone is enough to drive anyone into a stress- and anxiety-induced breakdown. And what else does the data show, something that for some reason is not talked about as often? That many teachers do not last past five years. Who would, when the measure of your success is wholly dependent on the performance of your students, regardless of all the other variables that come into play when educating kids?
These variables, most of the time completely out of a teacher’s control, include but are not at all limited to: the amount of time students spend studying, whether or not homework was completed (I have an abysmal HW return rate, BTW), and how motivated students are by test taking (and grades). And these are just the tip of the iceberg, not even touching the plethora of emotional, social, and economic issues our students face. This lack of control renders tests (especially standardized tests like the STAR test in CA and the Regents exams in NY) completely unreliable and invalid as measures of teaching. For more on this issue, check out the blog post “Don’t Buy the Snake Oil”, written by Lisa Myers, the same teacher who also inspired this post.
Our educational system is data driven – I know that and I accept it, even if I don’t like it. Kids in every state have to take standardized tests, whose data then gets used to label schools as good or bad and teachers as effective or ineffective. Thus, I find that I am forced to play by those rules. This means preparing my students for those tests and using data. If my data tells me that my class average on a practice Regents exam is 50%, I freak out for a couple of days, then I get rational and relax. After all, kids only have to score 45% on the exam to be considered “passing”.
I’m all for data in terms of its informational purposes. I’m completely against using data as a metric for the worth of a teacher. I can and will use data to see where I need to go back and teach differently. But if that data is going to be used to compare me against another teacher who does not have students who show up sporadically, with a 2-year-old waiting at home, and then sleep through entire lessons, then I call bullshit. And don’t you dare tell me that kid is sleeping because my lessons aren’t exciting or engaging enough. I put on a song and dance for every lesson.
Data is not everything – something teachers have always known. For once, someone else is talking about it too.
Update 2/25/13: David Brooks wrote a follow-up to the discussion on data, titled What Data Can’t Do.