The Mental Energy of Teaching

Interesting tweet from @EdFuller_PSU:

The one thing non-teachers simply do not and cannot grasp is how MENTALLY EXHAUSTING IT IS TO TEACH ALL DAY. There are very, very few jobs that require the constant mental attention that teaching does. I’d love to see all the people criticizing teachers to teach for a week. (Caps are Fuller’s.)

There are over 750 responses, running about 30 to one in favor of some form of confirmation, most of them from teachers or parents. The odd pushback (e.g., @Angrydocsx: Surgery, nursing, working on an oil rig, construction, being a lineman, etc… Teachers are great but get over yourself.) comes either from people who feel their jobs are equally taxing, or from your garden-variety anti-teacher/anti-union/you-suck-so-shut-up tweets.

Side note: I think surgery and nursing are also incredibly demanding, and I find @Angrydocsx’s immediate shift to oil rigs and linemen in cherry pickers—dangerous, outdoor, male-dominated jobs—telling.

Fuller (who, not coincidentally, was a HS teacher before moving into higher education) puts his finger on the thing that makes teaching exhausting—you’re on all the time, making decisions on the fly and—if you’re doing the job right—taking sincere responsibility for teaching…something, to students who may not particularly want to be taught.

He did not say teaching was the most mentally exhausting job in the world—there are others where you can’t take a break or turn your back—only that the need to constantly pay attention and adapt is a factor many folks don’t perceive when they think about teaching. A number of the tweeted responses, in fact, were from people who gave teaching a try and concluded that it wasn’t the job they thought it would be.

Larry Cuban recently compared teachers’ decision-making to playing jazz and rebounding in basketball—two complex skills that depend on prior learning and practice for automaticity. He includes two footnotes about the number of decisions teachers typically make:

*Researchers Hilda Borko and Richard Shavelson summarized studies that reported 0.7 decisions per minute during interactive teaching.

*Researcher Philip Jackson said that elementary teachers have 200 to 300 exchanges with students every hour (between 1200-1500 a day), most of which are unplanned and unpredictable, calling for teacher decisions, if not judgments.

Cuban notes that those studies are older, and invites readers to share any newer research—but those figures ring absolutely true to me. Interactions, decisions, re-direction, pop-up questions, wait time, modeling, judgments. On and on and on. Teaching is all about an on-your-feet response to whatever crops up. It’s the essence of unpredictability, and every day is exhausting.

What Fuller’s tweet and the plethora of responses clearly illustrate: There is no such thing as successful scripted teaching or “effective” fidelity to pre-constructed lessons. Also: the more you teach, if you’re paying attention, the more fluid the decision-making becomes, and the more tools you have in your mental (and emotional) tool bag. Experience matters. Perception matters. Judgment matters.

When I had been teaching for more than 25 years, I took a two-year sabbatical to work at a national education non-profit. There was an opportunity to pursue an alternate career in our contract language, but even though I knew I could return to teaching, I was certain that this new job was my off-ramp.

At first, it was great. I had my own cubicle, with a computer and a phone and–get this–a secretary. We took an hour for lunch, occasionally going out to a restaurant (and, also occasionally, having a glass of wine). We could use the bathroom as often as we liked. I could pop into someone else’s office and have a long chat about some issue that had arisen. I could leave early to go to the dentist. We were doing a lot of conferences and workshops—on weekends, because our clients were educators—and if we were in another city for the weekend, we didn’t return to work until Wednesday or Thursday: comp time!

I found the workload easy and the pace relaxed. I liked the people I worked with. But after the first year, I started thinking about going back to teaching. It took a long time to work through the reasons. Teaching offered less money, less prestige and way more of what might be called mandatory time on task.

What I finally concluded was this: When I left the school building at night, and walked across the parking lot, I could describe the good I had done that day, things students had learned, progress made. I didn’t get that daily confirmation at the non-profit (which was much-admired). Lots of days were focused on strengthening the business end of the non-profit’s work. I didn’t get to hang out with kids, either.

I taught a lot of subjects and varied grade levels during my career, speaking of mental exhaustion. I taught large middle school and high school band classes (65+ students), and 7th grade math in the first year of a new, “connected” curriculum that the old math teachers loathed. I taught vocal and instrumental music in every grade from pre-K to 12. By far the most mentally challenging class I ever taught was general music to a group of 12 Pre-K children, mostly four years old, in my last year in the classroom.

These kiddos were all over the place, maturity-wise. My biggest challenge at first was getting them all to sit, not sprawl or run around, on the circular rug in my classroom. I had them for 50 minutes, twice a week (yup—too long, I know, but that’s the way the schedule was set up), so the first time they came to my room, I prepared a lesson plan with seven different activities, from listening to marching. Seven!

They ran through that plan in about 20 minutes. I remember thinking: I’m supposed to be good at this! I hope nobody makes an unscheduled visit to my room.

Although I got much better at teaching very young children, thanks to the generous suggestions of my colleagues, it was a mental attention marathon, day in and day out. Did they understand that word? Why aren’t his hands coming together when he claps? How much time is left? Wait—is she actually spitting?

When we speak of teacher professionalism, we think of content knowledge, instructional expertise, being a respected contributor to a school learning community. But a big part of professionalism is accepting responsibility for what happens in your worksite, for expending the continuous mental energy to create a successful and skilled practice.

The last word about the way the public sees teacher professionalism, from Jose Vilson:

Over the last few decades, pundits and policymakers have derided the professionalism of teachers because “accountability” or whatever. No matter how many degrees and certificates they get, how many years of experience they accumulate, or student commendations they collect, American society looks at teachers and says “Oh, that’s nice!” but also, “How do you do it? Couldn’t be me!” “You and your union make the job easy, right?” and my personal favorite, “I couldn’t stand me when I was a child. How does that work out with 30 of them?!” In other words, even though many people think only a special set of people can do the job, they also think anyone can do it.

“My Research is Better than Your Research” Wars

When I retired from teaching (after 32+ years), I enrolled in a doctoral program in Education Policy. (Spoiler: I didn’t finish, although I completed the coursework.) In the first year, I took a required, doctoral-level course in Educational Research.

In every class, we read one to three pieces of research, then discussed the work’s validity and utility, usually in small, mixed groups. It was a big class, with two professors and candidates from all the doctoral programs in education—ed leadership, teacher education, administration, quantitative measurement and ed policy. Once people got over being intimidated, there was a lot of lively disagreement.

There were two HS math teachers in the class; both were enrolled in the graduate program in Administration—wannabe principals or superintendents. They brought in a paper they wrote for an earlier, masters-level class summarizing some action research they’d done in their school, using their own students, comparing two methods of teaching a particular concept.

The design was simple. They planned a unit, using two very different sets of learning activities and strategies (A and B) to be taught over the same amount of time. Each of them taught the A method to one class and the B method to another—four classes in all, two taught the A way and two the B way. All four classes were the same course (Geometry I) and the same general grade level. They gave the students identical pre- and post-tests, and recorded a lot of observed data.

There was a great deal of “teacher talk” in the summary of their results (i.e., factors that couldn’t be controlled—an often-disrupted last hour class, or a particularly talkative group—but also important variables like the kinds of questions students asked and misconceptions revealed in homework). Both teachers admitted that the research results surprised them—one method got significantly better post-test results and would be utilized in re-organizing the class for next year. They encouraged other teachers to do similar experiments.
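The comparison the teachers describe can be sketched as a simple gain-score analysis. Everything below is illustrative: the method labels, scores, and class sizes are invented, not their actual data.

```python
from statistics import mean

# Hypothetical pre/post test scores (percent correct) for a two-method
# comparison like the one the teachers ran; numbers are invented.
method_a = {"pre": [55, 60, 48, 62, 58], "post": [70, 74, 63, 78, 71]}
method_b = {"pre": [57, 59, 50, 61, 56], "post": [66, 68, 58, 69, 64]}

def mean_gain(group):
    """Average per-student improvement from pre-test to post-test."""
    return mean(post - pre for pre, post in zip(group["pre"], group["post"]))

gain_a = mean_gain(method_a)
gain_b = mean_gain(method_b)
print(f"Method A mean gain: {gain_a:.1f} points")
print(f"Method B mean gain: {gain_b:.1f} points")
print(f"Difference favoring A: {gain_a - gain_b:.1f} points")
```

With a handful of students per class, a difference like this wouldn’t survive the generalizability objections raised in the course, but it is exactly the kind of within-practice evidence the teachers found useful for reorganizing next year’s classes.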

These were experienced teachers, presenting what they found useful in a low-key research design. And the comments from their fellow students were brutal. For starters, the teachers used the term ‘action research,’ which set off the quantitative measurement folks, who called such work unsupportable, unreliable and worse.

There were questions about their sample pool, their “fidelity” in teaching methods, the fact that their numbers were small, and the results were not generalizable. Several people said that their findings were useless, and the work they did was not research. I was embarrassed for the teachers—many of the students in the course had never been teachers, and their criticisms were harsh and even arrogant.

At that point, I had read dozens of research reports, hundreds of pages filled with incomprehensible (to me) equations and complex theoretical frameworks. I had served as a research assistant doing data analysis on a multi-year grant designed to figure out which pre-packaged curriculum model yielded the best test results. I sat in endless policy seminars where researchers explicated wide-scale “gold standard” studies, wherein the only thing people found convincing was standardized test scores. Bringing up Daniel Koretz or Alfie Kohn or any of the other credible voices who found standardized testing data at least questionable would draw a sneer.

In our small groups, the prevailing opinion was that action research wasn’t really research, and the two teachers’ work was biased garbage. It was the first time I ever argued in my small group that a research study had validity and utility, at least to the researchers, and ought to be given consideration.

In the end, it came down to the fact that small, highly targeted research studies seldom got grants. And grants were the lifeblood of research (and notoriety of the good kind for universities and organizations that depend on grant funding). And we were there to learn how to do the kind of research that generated grants and recognition.

(For an excellent, easy-reading synopsis of “evidence-based” research, see this new piece from Peter Greene.)

I’ve never been a fan of Rick Hess’s RHSU Edu-Scholar Public Influence Rankings, speaking of long, convoluted equations. It’s because of these mashed-up “influence” rankings that people who aren’t educators get spotlights (and money).

So I was surprised to see Hess proclaim that scholars aren’t studying the right research questions:

There are heated debates around gender, race, and politicized curricula. These tend to turn on a crucial empirical claim: Right-wingers insist that classrooms are rife with progressive politicking and left-wingers that such claims are nonsense. Who’s correct? We don’t know, and there’s no research to help sort fact from fiction. Again, I get the challenges. Obtaining access to schools for this kind of research is really difficult, and actually conducting it is even more daunting. Absent such information, though, the debate roars dumbly on, with all parties sure they’re right.

I could tell similar tales about reading instruction, school discipline, chronic absenteeism, and much more. In each case, policymakers or district leaders have repeatedly told me that researchers just aren’t providing them with much that’s helpful. Many in the research community are prone to lament that policymakers and practitioners don’t heed their expertise. But I’ve found that those in and around K–12 schools are hungry for practical insight into what’s actually happening and what to do about it. In other words, there’s a hearty appetite for wisdom, descriptive data, and applied knowledge.

The problem? That’s not the path to success in education research today. The academy tends to reward esoteric econometrics and critical-theory jeremiads. 

Bingo. Esoteric econometrics get grants.

Simple practical questions—like “which method produces greater student understanding of decomposing geometric shapes?”—have limited utility in the research marketplace. They’re not sexy, and they don’t get funding. Maybe what we need to do is stop ranking the most influential researchers in the country, and instead teach educators how to run small, valid and reliable studies addressing important questions in their own practice, and to think more about the theoretical frameworks underlying their work in the classroom.

As Jose Vilson recently wrote:

Teachers ought to name what theories mobilize their work into practice, because more of the world needs to hear what goes into teaching. Treating teachers as automatons easily replaced by artificial intelligence belies the heart of the work. The best teachers I know may not have the words right now to explain why they do what they do, but they most certainly have more clarity about their actions and how they move about the classroom.

In case you were wondering why I became a PhD dropout, it had to do with my dissertation proposal. I had theories and questions around teachers who wanted to lead but didn’t want to leave the classroom. I was in possession of a large survey database from over 2000 self-identified teacher leaders (and permission to use the data).

None of the professors in Ed Policy thought this dissertation was a useful idea, however. The data was qualitative, and as one well-respected professor said, “Ya gotta have numbers!” There were no esoteric econometrics involved—only what teachers said about having their efforts to lead—say, doing some action research to inform their own instruction—shut down.

And so it goes.

Star Tech: The Next Generation of Record-Keeping

In her last year of a degree program in Justice Studies, my daughter took a course called “Surveillance in Society.” The readings and discussion were around intrusions into personal privacy and data made possible by technology. Dear Daughter and I had many amusing conversations about some of her assignments—“Are Bar Codes the Mark of the Beast? Discuss.”—which struck me as paranoid in the extreme. Her professor was obsessed with our imminent loss of civil liberty, always urging his undergrads to be suspicious of anyone asking for personal information, and, presumably, scanning the sky for black helicopters.

However—I have been thinking a lot about the use of technology to gather data and “streamline” normal school processes, like testing, attendance and grading, to present an image of a “21st century school.”  Here is a simple story about data collection and our belief that All Technology is Good.

In 1998, my district opened a new middle school, full of state-of-the-art technological systems. We were the envy of the other buildings, with fully networked software to handle all our data needs. We got some training and the big pitch: our new procedures would save time, paper and man-hours, give us more accurate data, impress parents with e-communications, yada yada.

Under Old Attendance procedures, every teacher took attendance once, at the same time every morning, recorded it in their grade/attendance book, and sent a student to the office, with an attendance form, printed on scrap paper from recycle bins. Secretaries recorded these on a master list, and handled absence data for students who came/left during the day. Teachers got a copy of the master list, to help confirm absences when students needed to make up work.

Under New, Improved Attendance procedures, every teacher had a computer, with separate attendance book and gradebook functions. Teachers were now required to take attendance every hour and enter absences and tardies on the computer within a five-minute window. We were not allowed to keep the attendance program open on our computer desktops (because our gradebooks, protected by the same password, might be accessed by devious students)—so we had to log in every hour.

Because this was 1998, the server’s horsepower was severely strained by 40 teachers logging in simultaneously, and it would take 30-60 seconds for the program to load. Teachers who forgot to take attendance within five minutes would be called by the office (where a secretary now sat, monitoring the data coming in every hour), disrupting teachers’ lessons. If someone had a missing assignment, you had to toggle between attendance and grade programs to discover whether the child had actually been absent.

A process that had taken two minutes of teacher-time daily suddenly began to take two minutes every hour. Best-case scenario, teachers would lose ten extra minutes of instructional time each day: 50 minutes/week, four class periods per month, 36 class periods per school year, or six full days of instructional time. Taking attendance.
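The arithmetic behind those figures checks out. The calendar assumptions below (a five-day week, a 36-week year, six 50-minute periods a day) are mine, inferred from the numbers in the text:

```python
# Back-of-the-envelope check of instructional time lost to hourly
# attendance-taking. Calendar assumptions (5-day week, 36-week year,
# six 50-minute periods a day) are inferred from the figures above.
extra_minutes_per_day = 10    # 2 min x 6 hourly logins, minus the old 2 min once a day
school_days_per_week = 5
weeks_per_year = 36
period_length_minutes = 50
periods_per_day = 6

minutes_per_week = extra_minutes_per_day * school_days_per_week    # 50
minutes_per_year = minutes_per_week * weeks_per_year               # 1800
periods_lost_per_year = minutes_per_year / period_length_minutes   # 36
days_lost_per_year = periods_lost_per_year / periods_per_day       # 6

print(f"{minutes_per_week} minutes a week, {periods_lost_per_year:.0f} periods "
      f"a year, {days_lost_per_year:.0f} full school days")
```

The “four class periods per month” figure follows from the same numbers: 50 minutes a week times four weeks is 200 minutes, or four 50-minute periods.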

Lest you think I’m being overdramatic (or you’re dying to tell me that faster computing and better software have eliminated these problems and made attendance-taking an absolute joy)—I tell this story not to whine about record-keeping, but to question our automatic goal of “efficiency” and the uses and purposes of all K-12 tech-enhanced data collection.

The state requires daily absent/present data, and uses it to ferret out kids who aren’t actually attending school but are counted for funding purposes. A student who went AWOL would not necessarily be picked up any quicker under the new system, and most of our mid-day leavers were signed out to go to the orthodontist with their mom, anyway.

The new system made data-entry mistakes six times more likely and kept a secretary busy checking on students who were marked present one hour, but absent the other five due to teacher error. I had great sympathy for “careless” teachers who rushed through the attendance procedure to get started on, you know, teaching—only to be monitored and chastised later. I was one of them.

Nobody in the office could explain why or how, precisely, the new system was helping us do a better job of serving kids. The on-line gradebooks also came with unanticipated problems—teachers who didn’t post enough grades (remember when formative data included things that weren’t numbers?), the amount of time now required to deal with anxious parents, and so on.

The most obvious reason to question always-available online gradebooks is that responsibility for turning in work and monitoring a running performance record should belong to students, especially in secondary settings. We have always had periodic reporting to parents—four or six times a year, or in some cases, weekly progress reports. Any more than that elevates grades over actual learning and encourages students to let mom be in charge of their education.

Tech-based surveillance of students is now on steroids. In a thoughtful post entitled How Much Should I Track My Kid?, Anne Helen Petersen says this:

My parents trusted me because I had earned their trust. Sometimes I stretched that trust, but I was constantly figuring out what felt too risky, what felt right or wrong, who I didn’t want to get in a car with. Maybe that sounds like a lot of discernment for a teen. But how else do we figure out who we are? My parents could’ve lectured me about “making good decisions” all they wanted; I only knew how to make them by finding myself in situations far from them where I had to.

The same principle applied to my grades, to my online use, to how I talked to boys and figured out friendships. In high school, I would see my exact grade around twice during the quarter, when a teacher would distribute printouts that included all graded assignments and your current percentage.

Schools pay attention to what they value. We collect data first, and decide how to manage it later—a pattern repeated endlessly in thousands of schools. We assume that everything can be done faster, cheaper and better through technology. Sometimes, the rationale runs backwards—we adopt the technology, and then invent reasons for why we need it.