Teachers are Patriots! Who Knew?

The world of education research is replete with studies that feel… unnecessary to folks who have spent their careers in front of K-12 classrooms. Primarily because so much good research is ignored in practice, and so much bad research becomes conventional wisdom.

For example—scientific inquiries affirm that the best age to begin formal reading instruction is about seven years old. Ask any early childhood teacher what our literacy success rate would be were we to initiate two-hour formal reading blocks in second grade, leaving room in kindergarten and first grade for lots of read-alouds, rhyming, picture books, letter-sound correspondence activities and development of oral language.

It’s more than that. Teachers with a few years under their belt have well-developed opinions about the best special education placements, the benefits of free play, and why taking your students outdoors yields physical and intellectual growth. Teachers can speak articulately about the post-pandemic changes in student behaviors.

How do they know about these things? Experience. Paying attention to what happens in their daily practice.

You might say that teachers’ observations and informal experiments—Teach it this way? Or that?—are the most valuable action research data to build a successful practice. But don’t say it too loud, because research is tied tightly to the source of the money that funds it—and the commercial products and politics that drive educational change.

There are lots of reasons why education research is suspect—why products are published to great fanfare, then sink like an oversized silver bullet. An analysis of 30 years of educational research by scholars at Johns Hopkins University found that when the maker of an educational intervention conducted its own research, or paid someone to do the research, the results commonly showed greater benefits for students than when the research was independent. On average, the developer research showed benefits—usually improvements in test scores—that were 70 percent greater than what independent studies found.

Hmmm. I’d put my money on teacher perspectives about instructional strategies and materials, especially if teachers were in charge of their own professional work and offered ongoing opportunities to assess worthy curricula and teaching techniques.

Keeping the problems with ed research in mind (a book-length topic), I was amused to see this headline in Education Week this morning: Teachers Value ‘Patriotic’ Education More than Most Americans.

(Dangerous but brave) subheading: The findings stand in contrast to conservative rhetoric about ‘indoctrination’ in the social studies. And surprise! The polls themselves were conducted by EdChoice, a nonprofit advocacy organization supporting school choice, and the Morning Consult Public Opinion Tracker.

‘More than 80% of K-12 teachers thought it was “very” or “extremely” important to teach students about the Constitution’s core values, and 62% found it similarly crucial to teach that America is a fundamentally good country. In both cases, teachers were more likely than parents or the public at large to favor teaching these concepts.’

Least surprising education research result, ever. Kind of shoots holes in the ‘teachers practice leftist brainwashing’ theory that the privatizers and public school vandals keep advancing. The study showed that 57 percent of parents thought schools should overtly teach patriotism and loyalty to the United States—the exact same percentage as teachers.

There’s more: ‘Both Democratic and Republican teachers were less likely than similarly affiliated members of the public to think it important to teach students to question government actions and policies.’

Again—totally predictable. Schools are inherently moderate and cautious, politically. Your teenager is far more likely to become radicalized—in either direction—via their online presence. Online—where there isn’t a caring and educated adult moderating the decontextualized content they are reading.

I am a retired teacher, but I spent more than three decades explicitly teaching my students about being an engaged citizen, and appreciating the cultural mix that is America. I am a patriot.

Like most teachers, I am grateful to be an American. I didn’t need research to tell me so.

“My Research is Better than Your Research” Wars

When I retired from teaching (after 32+ years), I enrolled in a doctoral program in Education Policy. (Spoiler: I didn’t finish, although I completed the coursework.) In the first year, I took a required, doctoral-level course in Educational Research.

In every class, we read one to three pieces of research, then discussed the work’s validity and utility, usually in small, mixed groups. It was a big class, with two professors and candidates from all the doctoral programs in education—ed leadership, teacher education, administration, quantitative measurement and ed policy. Once people got over being intimidated, there was a lot of lively disagreement.

There were two HS math teachers in the class; both were enrolled in the graduate program in Administration—wannabe principals or superintendents. They brought in a paper they had written for an earlier, master’s-level class summarizing some action research they’d done in their school, using their own students, comparing two methods of teaching a particular concept.

The design was simple. They planned a unit, using two very different sets of learning activities and strategies (A and B) to be taught over the same amount of time. Each of them taught the A method to one class and the B method to another—four classes in all, two taught the A way and two the B way. All four classes were the same course (Geometry I) and the same general grade level. They gave the students identical pre- and post-tests, and recorded a lot of observed data.

There was a great deal of “teacher talk” in the summary of their results (i.e., factors that couldn’t be controlled—an often-disrupted last hour class, or a particularly talkative group—but also important variables like the kinds of questions students asked and misconceptions revealed in homework). Both teachers admitted that the research results surprised them—one method got significantly better post-test results and would be utilized in re-organizing the class for next year. They encouraged other teachers to do similar experiments.
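Their setup is essentially a small two-condition comparison on gain scores. As a minimal sketch (with hypothetical scores; their actual data isn't reproduced here), here is one way such pre/post results could be summarized: mean gain per method, plus a standardized effect size.

```python
# A sketch of quantifying a two-method comparison like the teachers ran.
# All scores below are hypothetical; the function names are illustrative.
from statistics import mean, stdev
from math import sqrt

def mean_gain(pairs):
    """Average per-student improvement from pre-test to post-test."""
    return mean(post - pre for pre, post in pairs)

def cohens_d(gains_a, gains_b):
    """Standardized difference between two sets of gain scores,
    using the pooled standard deviation of the two groups."""
    pooled = sqrt((stdev(gains_a) ** 2 + stdev(gains_b) ** 2) / 2)
    return (mean(gains_a) - mean(gains_b)) / pooled

# Hypothetical (pre, post) scores for students taught each way
a_pairs = [(52, 70), (48, 66), (60, 74), (55, 72)]  # method A
b_pairs = [(50, 61), (47, 57), (58, 66), (54, 65)]  # method B

a_gains = [post - pre for pre, post in a_pairs]
b_gains = [post - pre for pre, post in b_pairs]

print(mean(a_gains), mean(b_gains), round(cohens_d(a_gains, b_gains), 2))
```

With classes this small, the effect size is only suggestive rather than generalizable—which, as it happens, is roughly the objection the quantitative folks would raise later.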

These were experienced teachers, presenting what they found useful in a low-key research design. And the comments from their fellow students were brutal. For starters, the teachers used the term ‘action research,’ which set off the quantitative measurement folks, who called such work unsupportable, unreliable and worse.

There were questions about their sample pool, their “fidelity” to the teaching methods, the small size of their sample, and the generalizability of their results. Several people said that their findings were useless, and that the work they did was not research. I was embarrassed for the teachers—many of the students in the course had never been teachers, and their criticisms were harsh and even arrogant.

At that point, I had read dozens of research reports, hundreds of pages filled with incomprehensible (to me) equations and complex theoretical frameworks. I had served as a research assistant doing data analysis on a multi-year grant designed to figure out which pre-packaged curriculum model yielded the best test results. I sat in endless policy seminars where researchers explicated wide-scale “gold standard” studies, wherein the only thing people found convincing were standardized test scores. Bringing up Daniel Koretz or Alfie Kohn or any of the other credible voices who found standardized testing data at least questionable would draw a sneer.

In our small groups, the prevailing opinion was that action research wasn’t really research, and the two teachers’ work was biased garbage. It was the first time I ever argued in my small group that a research study had validity and utility, at least to the researchers, and ought to be given consideration.

In the end, it came down to the fact that small, highly targeted research studies seldom got grants. And grants were the lifeblood of research (and notoriety of the good kind for universities and organizations that depend on grant funding). And we were there to learn how to do the kind of research that generated grants and recognition.

(For an excellent, easy-reading synopsis of “evidence-based” research, see this new piece from Peter Greene.)

I’ve never been a fan of Rick Hess’s RHSU Edu-Scholar Public Influence Rankings, speaking of long, convoluted equations. It’s because of these mashed-up “influence” rankings that people who aren’t educators get spotlights (and money).

So I was surprised to see Hess proclaim that scholars aren’t studying the right research questions:

‘There are heated debates around gender, race, and politicized curricula. These tend to turn on a crucial empirical claim: Right-wingers insist that classrooms are rife with progressive politicking and left-wingers that such claims are nonsense. Who’s correct? We don’t know, and there’s no research to help sort fact from fiction. Again, I get the challenges. Obtaining access to schools for this kind of research is really difficult, and actually conducting it is even more daunting. Absent such information, though, the debate roars dumbly on, with all parties sure they’re right.

‘I could tell similar tales about reading instruction, school discipline, chronic absenteeism, and much more. In each case, policymakers or district leaders have repeatedly told me that researchers just aren’t providing them with much that’s helpful. Many in the research community are prone to lament that policymakers and practitioners don’t heed their expertise. But I’ve found that those in and around K–12 schools are hungry for practical insight into what’s actually happening and what to do about it. In other words, there’s a hearty appetite for wisdom, descriptive data, and applied knowledge.

‘The problem? That’s not the path to success in education research today. The academy tends to reward esoteric econometrics and critical-theory jeremiads.’

Bingo. Esoteric econometrics get grants.

Simple practical questions—like “which method produces greater student understanding of decomposing geometric shapes?”—have limited utility in the grant economy. They’re not sexy, and they don’t get funding. Maybe what we need to do is stop ranking the most influential researchers in the country, and instead teach educators how to run small, valid and reliable studies to address important questions in their own practice, and to think more about the theoretical frameworks underlying their work in the classroom.

As Jose Vilson recently wrote:

Teachers ought to name what theories mobilize their work into practice, because more of the world needs to hear what goes into teaching. Treating teachers as automatons easily replaced by artificial intelligence belies the heart of the work. The best teachers I know may not have the words right now to explain why they do what they do, but they most certainly have more clarity about their actions and how they move about the classroom.

In case you were wondering why I became a PhD dropout, it had to do with my dissertation proposal. I had theories and questions around teachers who wanted to lead but didn’t want to leave the classroom. I was in possession of a large survey database from over 2000 self-identified teacher leaders (and permission to use the data).

None of the professors in Ed Policy thought this dissertation was a useful idea, however. The data was qualitative, and as one well-respected professor said, “Ya gotta have numbers!” There were no esoteric econometrics involved—only what teachers said about their efforts to lead (say, doing some action research to inform their own instruction) being shut down.

And so it goes.