Data-Driven Pedagogy

In a belated quest to find a single document from a few years ago, I started looking through my old papers and uncovered some notes I had made while teaching high school. I realized that I had been a very data-driven instructor. Looking back, I am convinced that a large share of the success in my classroom came from tracking micro-level student performance data. This discovery seems antithetical to the messages I hear from teachers' groups, ranging from unions to offhand remarks by current and future teachers to widely cited rants. As the world careens toward "big data," it is a shame that those in the trenches have not adopted a more data-friendly approach to their own pedagogy. If there is one message I could give my friends in the public schools, it is that data is your friend, not your enemy.

Fear of Failure

When I started at Edmond North, I was a reasonably good hand in the classroom, but my skill set was easily outclassed by my peers. I had gained limited experience working as a substitute teacher in Putnam City Schools and as an instructor with Upward Bound. Those two experiences gave me the ability to defuse tense situations and connect with students. On my best days, I was an average teacher at a good school.

Edmond Public Schools had district-wide "benchmark" exams, given to all students in a given course. Afterward, teachers would compare notes on how their students did on the assessments. Administrators constantly reassured us that scores would not impact our job evaluations, but they could not quell the fear of humiliation and underlying paranoia teachers have developed in the face of standardized exams. (In reality, I imagine that a teacher who presides over multiple classes with wide-scale benchmark failures would be future-endeavored.)

In order to compensate for my clear shortfalls, I pored over data. At first, it was simple information: whether the answer key was right, which questions a lot of people missed, and how many people passed or failed. After failing to appropriately prepare students for a benchmark twelve weeks into the year, I was heartbroken.

I tried to sort out what I had done wrong before the post-mortem. I went through each question on the exam and found a trend. The mistake I had made was somewhat simple: I had misread the timeline of what was supposed to be covered and inadvertently shuffled two topics. One topic my students had prepped was not on the exam, and they were completely blindsided by questions on the topic we had skipped.

Admitting in the meeting that I had fumbled the schedule was embarrassing, but it gave me the opportunity to pick the brains of my coworkers whose students had already taken the exam. (From their experiences, I was able to carve out an effective lesson plan.) In return, I shared a unique lesson plan I had attempted, to which students had responded fairly well. The exam provided a standard across classes that enabled the sharing of information and pedagogical tactics. It was in this meeting that I realized what made North a great school: instead of teachers saying "my kids," we were using the phrase "our kids." To this day, I use this linguistic nuance as a barometer of whether an instructor is student-first or in business for themselves.

Surviving as a Small Fish in a Big Ocean

At the end of my first fall as a full-time teacher, Edmond North assigned me an Advanced Placement course for the following semester. Instead of teaching to the test, I merged the syllabi used at the colleges most of the school's students attended. I feared that every student would fail; at the least, this approach would give the students a head start in their classes that fall. What resulted was a simple philosophy about standardized exams: (1) if the test covers the curriculum and (2) the teacher teaches the curriculum, then (3) the students should do well on the test.

Over the summer, I obsessed over teaching both classes again. I went through each exam and my notes from writing assignments, trying to find trends in my teaching. I found that certain topics (cattle trails, elasticity) yielded particularly weak understanding from students. I systematically rewrote every exam so I could quickly see how well students understood the information, topic by topic. Lessons that had demonstrable evidence of student learning remained; lessons that had failed to give students a better understanding of the material landed in the trash.

After one exam, I realized that the vast majority of missed questions covered just a couple of topics. As a result, I rebooted the lesson plan and worked on the diagnosed problem areas. Data drove my lesson plan revisions, and I became a more effective instructor. Most importantly, I was able to address a student's misunderstanding of a key concept before it limited learning throughout the entire year.

The next year, I had the distinct advantage of near-immediate feedback. Halfway through the year, the district received a clicker system. I was in heaven. Instead of having to wait for the class to leave so I could scan their responses into a database, I had immediate access to the data. Quizzes transformed from a give-wait-and-take procedure into a conversation. Better still, I was able to consolidate test and quiz data into even more detailed information.

Using data allowed me to constantly rework my lesson plans. By my sixth lap through a subject, my lesson plan was a calibrated machine. With input from my peers at benchmark meetings and practically real-time data of my own, I put myself in position to be an above-average teacher at a great school.

Data-Driven School

In Oklahoma, students must pass four state exams in order to graduate.

In one Edmond winter, the principal summoned me to predict state test scores based on the district benchmark exams. The goal was to uncover students who were at risk of failing a state exam and provide them additional resources. I had developed a technique for quizzes and my own exams, and it easily transferred to the state exams. We used ANOVA and OLS to estimate the relationship between district and state test scores; any student whose predicted score fell within two standard deviations of the "pass score" would receive extra tutoring. Although this program inconvenienced several students by requiring tutoring, my understanding is that they all graduated.
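For the curious, the mechanics are simpler than the acronyms suggest. Below is a minimal sketch of that kind of screen in Python using statsmodels; the scores, the 70-point pass cutoff, and the variable names are all hypothetical stand-ins, not the district's actual data or thresholds.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical history: benchmark and state scores for students who
# have already taken both exams.
benchmark = np.array([55.0, 62, 68, 74, 81, 88, 93])
state = np.array([58.0, 61, 70, 75, 79, 90, 95])

# OLS fit: state = b0 + b1 * benchmark
model = sm.OLS(state, sm.add_constant(benchmark)).fit()

# Predict state scores for this year's students from their benchmarks.
this_year = np.array([65.0, 72, 85])
predicted = model.predict(sm.add_constant(this_year))

# Flag anyone whose predicted score lands within two residual standard
# deviations of the passing score.
PASS_SCORE = 70.0                     # assumed cutoff, not the real one
sigma = np.sqrt(model.scale)          # residual standard deviation
flagged = this_year[np.abs(predicted - PASS_SCORE) < 2 * sigma]
print("Benchmark scores of students flagged for tutoring:", flagged)
```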

Wayfaring, Data-Driven Stranger in an Anti-Data World

When I arrived in Chicago, I landed in a dysfunctional school. On a near-daily basis, I was reminded that it was "tough" to be a teacher; on a near-weekly basis, I was reminded that we were the "city's cheap babysitting service." With old-school technology (machine-gun scantrons), I retooled my lesson plans.

After discovering that several eleventh-grade students had the effective reading level of elementary students, I reworked my lesson plan to center on reading skills. The goal was to expose the at-level students to the material while building skills through exposure for the less equipped. Data had uncovered the problem, and I decided to use data to solve it. Every other day, I gave a reading quiz targeted at building skills. Every two weeks, I gave an exam that included reading skills.

Halfway through the semester, I realized that I shared many of these students with an English instructor. I (ignorantly) went to him hoping to build a team to work on improving our students' skill set. My goal was to build synergy between our classes (which was fairly common practice in Edmond). When I approached him, it was as if I had urinated on his shoe. Looking back, my error was using data from a test I had given as evidence that we should intervene. The man literally asked if I had lost my mind, suggested I was accusing him of not teaching his kids (not our kids), and told me to go to hell. I was trying to help, but by referencing data I had inadvertently created an enemy. It was in this exchange that I realized the problem some teachers have with data: they feel like it is an indictment of the job they do.

If I had a second message for teachers, it would be that data is not an indictment of failure; it is an archive of success.

Upon discovering the literacy shortfall, I learned that many of the students did not read unless it was a text message or Twitter. They even ignored the newspapers aimed at younger Chicagoans because they were "too hard to understand." One of my proudest moments as a teacher was having one of the less-skilled students tell me that he actually enjoyed reading the newspaper. Two of his peers nodded in agreement. I nearly cried when they thanked me for teaching them to enjoy reading.

Advanced statistics are hard, but basic statistics are not

The biggest misconception about statistical analysis is that it is too difficult to understand. This is simply false. The only statistical expertise a teacher needs is collecting data points and looking for patterns. Most teachers already use the most common statistic: the average. The arithmetic average is the industry-standard computation for determining a student's final grade.

If teachers can calculate this score for each student, then why not give it a try for each question? Many teachers receive printouts from scantron machines or other multiple-choice assessment graders. These range from the percentage of students missing each question to a detailed analysis of how each student answered each question. Teachers can take a quick pulse by reviewing the most frequently missed and most frequently correct questions. If the missed items cluster around the same topic, then the teacher may need to review the lesson plans covering what their students missed, and pat themselves on the back for the lesson plans that enabled student success. This simple, pattern-seeking process will likely take a planning period, but it can illuminate trouble areas before they get out of control. I have used this approach for both history and economics (effectively applied math) courses with reasonable success; I assume it would work elsewhere.
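If a spreadsheet feels clunky, a few lines of code will do the same pulse check. Here is a minimal sketch, assuming the grader's printout has been typed into a 0/1 grid (rows are students, columns are questions); the grid, the topic labels, and the 60% threshold are all made up for illustration.

```python
import numpy as np

# Hypothetical results grid: rows are students, columns are questions,
# 1 = correct, 0 = incorrect.
results = np.array([
    [1, 0, 1, 1, 0],
    [1, 0, 1, 0, 0],
    [0, 0, 1, 1, 1],
    [1, 1, 1, 1, 0],
])
topics = ["trails", "trails", "railroads", "railroads", "elasticity"]

# Percentage of students answering each question correctly.
pct_correct = results.mean(axis=0) * 100

for q, (topic, pct) in enumerate(zip(topics, pct_correct), start=1):
    flag = "  <-- review this lesson" if pct < 60 else ""
    print(f"Q{q} ({topic}): {pct:.0f}% correct{flag}")
```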

Beyond pattern seeking, teachers may consider looking at the responses of students who struggle. Students will likely perform better on certain topics than on others. This may be the result of several different things, ranging from personal interest to lesson structure. The data points will hopefully reveal a pattern you can use to develop a lesson plan that helps struggling students succeed.
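Continuing the sketch above, one way to isolate the strugglers is to slice the same grid by overall score and compare topic-level performance; the bottom-quartile cutoff here is my own arbitrary choice.

```python
# Continuing from the grid above: compare topic performance for the
# lowest-scoring quarter of the class against the class as a whole.
overall = results.mean(axis=1)               # each student's overall score
cutoff = np.percentile(overall, 25)          # assumed: bottom quartile
struggling = results[overall <= cutoff]

for topic in sorted(set(topics)):
    cols = [i for i, t in enumerate(topics) if t == topic]
    print(f"{topic}: strugglers {struggling[:, cols].mean() * 100:.0f}% "
          f"vs class {results[:, cols].mean() * 100:.0f}%")
```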

Tough tests are best. Ideally, a test will create a wide distribution of student performance, meaning that the gap between the highest- and lowest-scoring students will be rather large. This principle is similar to taking high-quality pictures and shrinking them later: resizing an image large-to-small maintains quality, while small-to-large results in blurry, pixelated distortions. Remember, you can always curve later. A quick method to revalue a very tough exam is the "square-root curve," where you take the square root of each student's percentage. (The square root of 81% is 90%, 64% is 80%, 49% is 70%, and so on.) Although a tough test may be immediately disconcerting to students who score low, with the right redirection it can be valuable. In the end, a raw test score is just a number; the letter and value judgment attached to the number are what matter. After all, a 1200 on the SAT is far worse than a 30 on the ACT, despite being 40 times larger.
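The square-root curve itself is one line of arithmetic; here is a quick sketch with made-up raw scores:

```python
import math

# Square-root curve: curved percent = sqrt(raw percent / 100) * 100.
for raw in [81, 64, 49, 36]:          # hypothetical raw percentages
    print(f"{raw}% -> {math.sqrt(raw / 100) * 100:.0f}%")
```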

Data is not the problem; it is quickly becoming best practice in the industry. The ham-and-egg teacher's misconception that data is a condemnation prevents many well-intentioned folks from achieving their full potential.
