“My Research is Better than Your Research” Wars

When I retired from teaching (after 32+ years), I enrolled in a doctoral program in Education Policy. (Spoiler: I didn’t finish, although I completed the coursework.) In the first year, I took a required, doctoral-level course in Educational Research.

In every class, we read one to three pieces of research, then discussed the work’s validity and utility, usually in small, mixed groups. It was a big class, with two professors and candidates from all the doctoral programs in education—ed leadership, teacher education, administration, quantitative measurement and ed policy. Once people got over being intimidated, there was a lot of lively disagreement.

There were two HS math teachers in the class; both were enrolled in the graduate program in Administration—wannabe principals or superintendents. They brought in a paper they wrote for an earlier, master's-level class, summarizing some action research they'd done in their school, using their own students, comparing two methods of teaching a particular concept.

The design was simple. They planned a unit, using two very different sets of learning activities and strategies (A and B) to be taught over the same amount of time. Each of them taught the A method to one class and the B method to another—four classes in all, two taught the A way and two the B way. All four classes were the same course (Geometry I) and the same general grade level. They gave the students identical pre- and post-tests, and recorded a lot of observed data.

There was a great deal of “teacher talk” in the summary of their results (i.e., factors that couldn’t be controlled—an often-disrupted last-hour class, or a particularly talkative group—but also important variables like the kinds of questions students asked and misconceptions revealed in homework). Both teachers admitted that the research results surprised them—one method got significantly better post-test results and would be used in reorganizing the class for the next year. They encouraged other teachers to do similar experiments.
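Their design, in fact, lends itself to a straightforward check. Here's a minimal sketch of how such an A/B comparison might be run on gain scores (post-test minus pre-test), assuming Python with SciPy available; the score lists are hypothetical stand-ins, not the teachers' actual data:

```python
# A minimal sketch of an A/B comparison on gain scores (post-test minus pre-test).
# The numbers below are hypothetical stand-ins, not the teachers' actual data.
from scipy import stats

gains_a = [12, 8, 15, 10, 9, 14, 11, 7, 13, 10]     # students taught the A way
gains_b = [18, 16, 20, 14, 17, 19, 15, 21, 16, 18]  # students taught the B way

# Welch's t-test compares the two group means without assuming equal variances
t_stat, p_value = stats.ttest_ind(gains_a, gains_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # a small p suggests a real difference
```

Small samples like these won't generalize beyond four Geometry classes (exactly the critique that was coming), but they can still tell two teachers which method worked better for their own students.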

These were experienced teachers, presenting what they found useful in a low-key research design. And the comments from their fellow students were brutal. For starters, the teachers used the term ‘action research,’ which set off the quantitative measurement folks, who called such work unsupportable, unreliable and worse.

There were questions about their sample pool, their “fidelity” in teaching methods, the fact that their numbers were small, and the results were not generalizable. Several people said that their findings were useless, and the work they did was not research. I was embarrassed for the teachers—many of the students in the course had never been teachers, and their criticisms were harsh and even arrogant.

At that point, I had read dozens of research reports, hundreds of pages filled with incomprehensible (to me) equations and complex theoretical frameworks. I had served as a research assistant doing data analysis on a multi-year grant designed to figure out which pre-packaged curriculum model yielded the best test results. I sat in endless policy seminars where researchers explicated wide-scale “gold standard” studies, wherein the only thing people found convincing was standardized test scores. Bringing up Daniel Koretz or Alfie Kohn or any of the other credible voices who found standardized testing data at least questionable would draw a sneer.

In our small groups, the prevailing opinion was that action research wasn’t really research, and the two teachers’ work was biased garbage. It was the first time I ever argued in my small group that a research study had validity and utility, at least to the researchers, and ought to be given consideration.

In the end, it came down to the fact that small, highly targeted research studies seldom got grants. And grants were the lifeblood of research (and notoriety of the good kind for universities and organizations that depend on grant funding). And we were there to learn how to do the kind of research that generated grants and recognition.

(For an excellent, easy-reading synopsis of “evidence-based” research, see this new piece from Peter Greene.)

I’ve never been a fan of Rick Hess’s RHSU Edu-Scholar Public Influence Rankings, speaking of long, convoluted equations. It’s because of these mashed-up “influence” rankings that people who aren’t educators get spotlights (and money).

So I was surprised to see Hess proclaim that scholars aren’t studying the right research questions:

There are heated debates around gender, race, and politicized curricula. These tend to turn on a crucial empirical claim: Right-wingers insist that classrooms are rife with progressive politicking and left-wingers that such claims are nonsense. Who’s correct? We don’t know, and there’s no research to help sort fact from fiction. Again, I get the challenges. Obtaining access to schools for this kind of research is really difficult, and actually conducting it is even more daunting. Absent such information, though, the debate roars dumbly on, with all parties sure they’re right.

I could tell similar tales about reading instruction, school discipline, chronic absenteeism, and much more. In each case, policymakers or district leaders have repeatedly told me that researchers just aren’t providing them with much that’s helpful. Many in the research community are prone to lament that policymakers and practitioners don’t heed their expertise. But I’ve found that those in and around K–12 schools are hungry for practical insight into what’s actually happening and what to do about it. In other words, there’s a hearty appetite for wisdom, descriptive data, and applied knowledge.

The problem? That’s not the path to success in education research today. The academy tends to reward esoteric econometrics and critical-theory jeremiads. 

Bingo. Esoteric econometrics get grants.

Simple research questions—like “which method produces greater student understanding of decomposing geometric shapes?”—are seen as having limited utility. They’re not sexy, and they don’t get funding. Maybe what we need to do is stop ranking the most influential researchers in the country, and instead teach educators how to run small, valid and reliable studies addressing important questions in their own practice, and to think more about the theoretical frameworks underlying their work in the classroom.

As Jose Vilson recently wrote:

Teachers ought to name what theories mobilize their work into practice, because more of the world needs to hear what goes into teaching. Treating teachers as automatons easily replaced by artificial intelligence belies the heart of the work. The best teachers I know may not have the words right now to explain why they do what they do, but they most certainly have more clarity about their actions and how they move about the classroom.

In case you were wondering why I became a PhD dropout, it had to do with my dissertation proposal. I had theories and questions around teachers who wanted to lead but didn’t want to leave the classroom. I was in possession of a large survey database from over 2000 self-identified teacher leaders (and permission to use the data).

None of the professors in Ed Policy thought this dissertation was a useful idea, however. The data was qualitative, and as one well-respected professor said, “Ya gotta have numbers!” There were no esoteric econometrics involved—only what teachers said about their efforts to lead (say, doing some action research to inform their own instruction) being shut down.

And so it goes.

Star Tech: The Next Generation of Record-Keeping

In her last year of a degree program in Justice Studies, my daughter took a course called “Surveillance in Society.” The readings and discussion were around intrusions into personal privacy and data made possible by technology. Dear Daughter and I had many amusing conversations about some of her assignments—“Are Bar Codes the Mark of the Beast? Discuss.”—which struck me as paranoid in the extreme. Her professor was obsessed with our imminent loss of civil liberty, always urging his undergrads to be suspicious of anyone asking for personal information, and, presumably, scanning the sky for black helicopters.

However—I have been thinking a lot about the use of technology to gather data and “streamline” normal school processes, like testing, attendance and grading, to present an image of a “21st century school.”  Here is a simple story about data collection and our belief that All Technology is Good.

In 1998, my district opened a new middle school, full of state-of-the-art technological systems. We were the envy of the other buildings, with fully networked software to handle all our data needs. We got some training and the big pitch—our new procedures would save time, paper and man-hours, give us more accurate data, impress parents with e-communications, yada yada.

Under Old Attendance procedures, every teacher took attendance once, at the same time every morning, recorded it in their grade/attendance book, and sent a student to the office with an attendance form printed on scrap paper from recycle bins. Secretaries recorded these on a master list, and handled absence data for students who came or left during the day. Teachers got a copy of the master list, to help confirm absences when students needed to make up work.

Under New, Improved Attendance procedures, every teacher had a computer, with separate attendance book and gradebook functions. Teachers were now required to take attendance every hour and enter absences and tardies on the computer within a five-minute window. We were not allowed to keep the attendance program open on our computer desktops (because our gradebooks, protected by the same password, might be accessed by devious students)—so we had to log in every hour.

Because this was 1998, the server’s horsepower was severely strained by 40 teachers logging in simultaneously, and it would take 30-60 seconds for the program to load. Teachers who forgot to take attendance within 5 minutes would be called by the office (where a secretary now sat, monitoring the data coming in every hour), disrupting teachers’ lessons. If someone had a missing assignment, you had to toggle between attendance and grade programs to discover whether the child had actually been absent.

A process that had taken two minutes of teacher-time daily suddenly began to take two minutes every hour. Best-case scenario, teachers would lose ten extra minutes of instructional time each day: 50 minutes/week, four class periods per month, 36 class periods per school year, or six full days of instructional time. Taking attendance.
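For the skeptical, here is that back-of-the-envelope arithmetic spelled out, assuming a six-period day, 50-minute class periods, a five-hour instructional day, and a 36-week school year (illustrative figures, not district policy):

```python
# Back-of-the-envelope: instructional time lost to hourly attendance-taking.
# Assumptions: 6 periods/day, 50-minute periods, 5-hour instructional day,
# 36-week school year -- illustrative figures, not district policy.

minutes_per_entry = 2                 # log in, wait for the program, enter data
old_daily = minutes_per_entry * 1     # attendance once a day, old system
new_daily = minutes_per_entry * 6     # attendance every hour, new system

extra_daily = new_daily - old_daily   # 10 extra minutes per day
extra_weekly = extra_daily * 5        # 50 minutes per week
extra_yearly = extra_weekly * 36      # 1,800 minutes per school year

print(extra_yearly / 50)              # 36.0 class periods lost per year
print(extra_yearly / (5 * 60))        # 6.0 full instructional days
```

The monthly figure works out the same way: 50 minutes a week over four weeks is 200 minutes, or four 50-minute class periods.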

Lest you think I’m being overdramatic (or you’re dying to tell me that faster computing and better software have eliminated the problems and made attendance-taking an absolute joy)—I tell this story not to whine about record-keeping, but to question our automatic goal of “efficiency” and the uses and purposes of all K-12 tech-enhanced data collection.

The state requires daily absent/present data, in part to ferret out kids who aren’t actually attending school but are being counted for funding purposes. A student who went AWOL would not necessarily be picked up any quicker under the new system, and most of our mid-day leavers were signed out to go to the orthodontist with their mom, anyway.

The new system made data-entry mistakes six times more likely (six entries a day instead of one) and kept a secretary busy checking on students who were marked present one hour, but absent the other five, due to teacher error. I had great sympathy for “careless” teachers who rushed through the attendance procedure to get started on, you know, teaching—only to be monitored and chastised later. I was one of them.

Nobody in the office could explain why or how, precisely, the new system was helping us do a better job of serving kids. The on-line gradebooks also came with unanticipated problems—teachers who didn’t post enough grades (remember when formative data included things that weren’t numbers?), the amount of time now required to deal with anxious parents, and so on.

The most obvious reason to question always-available online gradebooks is that responsibility for turning in work and monitoring a running performance record should belong to students, especially in secondary settings. We have always had periodic reporting to parents—four or six times a year, or in some cases, weekly progress reports. Any more than that elevates grades over actual learning and encourages students to let mom be in charge of their education.

Tech-based surveillance of students is now on steroids. In a thoughtful post entitled How Much Should I Track My Kid?, Anne Helen Petersen says this:

My parents trusted me because I had earned their trust. Sometimes I stretched that trust, but I was constantly figuring out what felt too risky, what felt right or wrong, who I didn’t want to get in a car with. Maybe that sounds like a lot of discernment for a teen. But how else do we figure out who we are? My parents could’ve lectured me about “making good decisions” all they wanted; I only knew how to make them by finding myself in situations far from them where I had to.

The same principle applied to my grades, to my online use, to how I talked to boys and figured out friendships. In high school, I would see my exact grade around twice during the quarter, when a teacher would distribute printouts that included all graded assignments and your current percentage.

Schools pay attention to what they value. We collect data first, and decide how to manage it later, a pattern repeated endlessly in thousands of schools. We assume that everything can be done faster, cheaper and better through technology. Sometimes, the rationale runs backwards—we adopt the technology, and then invent reasons for why we need it.

The Problem with Jingle Bells

If you follow various chat groups and Facebook pages of music educators, this time of year is rife with the Great Christmas Literature Discussion, centered around whether to schedule a concert in December and, if so, what songs to play, while avoiding stepping on anyone’s cultural traditions.

I have written, often, about this conundrum—honoring the festive spirit of seasonal holidays (which is evident absolutely everywhere in December, from the grocery store to TV ads) vs. avoiding any mention of Christmas at school, because it’s inappropriate to privilege one religious celebration over others in a public institution filled with diverse children.

From a professional education perspective, it’s thorny. You can play a Christmas-heavy concert, sending parents home in a rosy glow—some parents, anyway. You can try to recognize every winter/light holiday with a tune—or rely on “classical” pieces like Messiah transcriptions. You can try to take Jesus out of the equation, and end up with a lot of junk literature. Or you can avoid the whole thing and schedule your concert in January.

Increasingly, I’ve seen elementary music teachers bowing out of anything directly related to Christmas. They can articulate good reasons for this, distinguishing between music that students are fortunate enough to experience at home and with their families, and what belongs in a solid music education curriculum. For teachers who are under pressure from administrators or parents to put on a holiday show, there are winter weather songs. Enter Jingle Bells.

A couple of weeks ago, Peter Greene reprinted his blog post entitled The Jingle Bells Effect and the Canon. It’s a bit of brilliance comparing 30 different versions of Jingle Bells, 30 ways of taking a small collection of notes and rhythms and turning them into something unique.

It’s like literature, Greene says—there are multiple ways to teach a concept, theme or historical era through the same medium: the printed word. He makes the point that teachers should always be able to offer a cogent answer to the question: Why are we learning this? I agree.

And for many years, I found Jingle Bells a handy instructional tool. The chorus uses only five notes, so the tune appears in virtually every beginning band method book, just about the time kids are eager to play real songs. The lyrics are thoroughly secular—no mention of Christmas—so when kids are singing about a one-horse open sleigh, it’s kind of like the Little Deuce Coupe of its day.

It’s also one of those three-chord songs, simple to harmonize. Add some sleighbells and voila! First concert magic. For years, my middle school band (some 200 7th and 8th graders) played Jingle Bells in a local Fantasy of Lights parade. Because when you’re trying to get 200 young musicians to march and play at the same time, you need something easy.

As awareness of the racist roots and language in some of our most beloved folk and composed songs has grown in recent decades, elementary and secondary music teachers have rightfully started pulling certain songs out of their teaching repertoire. Scarcely a week goes by without an argument about this trend on music-ed social media sites. Do songs that sprang from minstrelsy, performed in a different era, have a racially negative impact today? Or are they just tunes? A valid and important question.

I find these skirmishes encouraging, an example of teachers discussing—with some conviction—the beliefs that shape their own professional work. And sometimes, seeing things in a new light. As Maya Angelou said: ‘Do the best you can until you know better. Then when you know better, do better.’

I’ve read dozens of these “is this racist?” discussions on-line. And music teachers, given the chance to re-think the cultural value—or lack thereof—in certain pieces of music, are often willing to choose something else, or to share the origins of the work, and the outmoded and biased thinking reflected in the lyrics, as an opportunity to teach the cultural history associated with music. People will adapt.

Except when it comes to Jingle Bells.

Back in 2017, a professor at Boston University, Kyna Hamill, published a research paper suggesting that Jingle Bells was first sung in minstrel shows. Research papers are not generally the subject of teachers’ lounge chat, but this one caught fire, and pretty soon there were teachers arguing that the composer of the piece, James Lord Pierpont, was a fervent Confederate, and therefore a supporter of slavery. Out with Jingle Bells!

Pierpont was not a household name in his own time. He was a struggling composer, organist and teacher. His father was an ardent abolitionist and Unitarian minister, as were his two brothers, all in Massachusetts. But Pierpont took a position as organist in a Unitarian church in Georgia and was there when the Civil War broke out. He wrote music and sold it to support his family—including songs that supported the Southern war effort.

He also enlisted in the Confederate Army and served as a clerk. His father, the Reverend John Pierpont, was a Chaplain in the Union Army—one of those families split by a tragic war. There are plenty of families in the same situation right now, in this country—split by politics, influenced by cultural context. Something to think about, as we evaluate and banish Pierpont, 150 years after he wrote his most famous sleighing ditty.

Even Kyna Hamill, arguably the genesis of the anti-Jingle Bells movement, now says this:

My article tried to tell the story of the first performance of the song. I do not connect this to the popular Christmas tradition of singing the song now. “The very fact of ‘Jingle Bells’’ popularity has to do with the very catchy melody of the song, and not to be only understood in terms of its origins in the minstrel tradition. … I would say it should very much be sung and enjoyed, and perhaps discussed.”

There are teachers and schools that have taken Jingle Bells out of the curricular mix—and good on them for having that thoughtful discussion in the first place. And there are teachers who have decided they have bigger curricular fish to fry than banishing the bells on bobtails—they’ll save their firepower for songs with overtly racist lyrics and intentions.

Again, these are valid and important questions. The trick is to keep the conversation going, and refrain from condemnation of well-meaning peers.

Are those sleighbells I hear?