“Teaching effectiveness” as another dimension in cognitive ability

I’m not a great teacher. I can get by because I work hard and I know a lot, and for some students my classes are just great, but it’s not a natural talent of mine. I know people who are amazing teachers, and they have something that I just don’t have. I wrote that book, Teaching Statistics: A Bag of Tricks (with Deb Nolan), because I’m not a good teacher and hence need to develop all sorts of techniques to be able to do what good teachers can do without even trying.

I’m not proud of being mediocre at teaching. I don’t think that low teaching skill is some sort of indicator that I’m a great researcher. The other thing about teaching ability is that I think it’s hard to detect without actually seeing someone teach a class. If you see me give a seminar presentation or even a guest lecture, you’d think I’m an awesome teacher. But, actually, no. I’m an excellent speaker, not such a great teacher.

This all came to mind when I received the following email from anthropologist Henry Harpending:

I [Harpending] am writing to ask about value-added studies of primary school teachers like the one commissioned and published by the LA Times. It looks quite solid to me but I am no statistician.

My take is that it may be a useful tool for objective personnel evaluation that is not simply another IQ test. Everyone agrees that there is “more than IQ” but no one can measure it. The value-added stuff makes me optimistic.

I have two reasons for my interest in this issue. First, I teach a course on biology and social issues. Most of the course ends up being about IQ tests or ability tests as we called them for a few decades. Everyone knows and acknowledges that lots of other perhaps uncorrelated traits predict success, like time preference and charisma and attention, and so on. My focus on IQ is like the focus of the drunk looking for his lost keys under the streetlight.

Seems to me that the VAT approach holds out the hope of pushing personnel selection beyond IQ by identifying other important characteristics of individuals. If so, perhaps before I retire I can talk about something other than IQ for most of the semester.

I suppose literature will soon appear about how to predict who will be a high VAT teacher. After that perhaps literature about the difference between VAT teachers for the top students and VAT teachers for the bottom: certainly these will be different.

I have a high school junior who attends an urban high school. They manage to track students very well with AP courses and with the IB program, whatever that is. At any rate he and his nerd friends all agree on who the good and the bad teachers are. They approve of one of the AP calculus teachers because “most of his students get 5s on the AP exam”, which they all did last year. . . . Interestingly they are completely inarticulate when asked to describe what makes a “good teacher”: their opinions are apparently purely data based.

My earlier comments on value-added teacher assessment are here and here.

Harpending’s point, about teaching ability being a different dimension than what is usually measured, is interesting. I have never heard it put this way but that sounds right to me, partly from my own experience as a teacher and observer of teachers, and partly because I recall Jonah Rockoff telling us that nothing much predicts teacher performance except for the teacher’s performance last year. I wonder how this has been studied by the “multiple intelligences” researchers.

13 thoughts on ““Teaching effectiveness” as another dimension in cognitive ability”

  1. You are NOT mediocre at teaching! Also, all the amazing teachers I know (I’m including you in this list if you don’t mind) work very hard to prepare examples and stories for lecture, so they definitely aren’t doing it “without even trying.”

  2. As another person who lacks aptitude for teaching and has had to struggle to be reasonably good at it, I have a theory. The “natural” teacher is someone who intuits what others are thinking — and not only what but how. Such a person knows what the next step is for the student, or what is causing the logjam, and can take the most effective measures. One reason you see a lot of poor teachers in technical subject areas is that those fields select for people who are comfortable, or even happy, spending hours and years in solitary pursuits. A lot of math, stats, and econ types are even a bit on the autism spectrum. They may be able to speak brilliantly, but they are not attuned to what the students in front of them are thinking and needing.

    To the extent I’m right (and of course there are multiple nonexclusive factors at work), the antidote is anything that informs the teacher in real time about student knowledge/thinking patterns/affective responses/etc. For those of us less interpersonally gifted, those methods may be somewhat mechanical, but that’s OK.

    • I couldn’t agree more — empathy with the student is a key factor in teaching ability.

      I’ve always had empathy with my students because I’ve never been the best person at math in the room (at least not since junior high school, except for a while when I was working as a linguist).

      I’m also a neat freak, so some of my learning always involves organizing the material in a way that seems logical and step-by-step to me (I think it’s also what drew me to formal logic and knowledge representation back in the “symbolic” AI days). I can usually remember that organization and what I found hard about each concept when I’m teaching or writing a book.

      I had immense problems talking to Andrew about statistics before I became more fluent in both math stats and applied stats. I think he had a hard time understanding what I didn’t understand. It’s also confusing when you cross fields (in my case from natural language processing to stats), because the concepts are similar but the language for talking about them is so different (I’m talking 10+ years ago — the fields are converging at least on terminology if not in focus).

    • Agreeing with Bob: poor teaching comes from those who don’t know what the students don’t know, or who want to tell stories to others rather than induce others to tell themselves the story (recreate something equivalent). Maybe that’s why creative people with different backgrounds, who may actually have more to offer, tend to teach _more wrongly_.

      But maybe what’s more interesting here is that the research is not being designed and carried out to better clarify this.

  3. The problem with VAM scores is that they too are like looking for the lost keys under the streetlight. For the most part, only Math and Reading are tested every year, so we are only checking under the Math and Reading streetlights, and only in ways that can be tested within a 45-minute class period or two. There are many things I want a good teacher to do other than just have the students score well on standardized Math and Reading tests. I also want them to inspire students to go beyond the material taught in class. I want them to help students develop good study habits, and to be able to work together in teams.

    A second problem is that the VAMs are often effectively equivalent to fitting a multilevel model to the data and assigning the classroom-level error term as a causal effect of the teacher (see the sketch after this comment). Naturally, these estimates are not very stable, and by analyzing them we may just be trying to explain the noise in the series.

    While I agree with Harpending that they are an interesting contribution to the discussion of what effective teaching is and how we measure it, I also think it is premature to be firing teachers (and replacing them with untrained teachers who may or may not eventually become as good as the fired teacher) on the basis of an error term of a model that restricts its measurement to the area illuminated by two streetlights.
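
    As a rough illustration of the “classroom-level error term” point above, here is a minimal sketch on simulated data; it is not the LA Times analysis, and the simulated data, the variable names, and the use of statsmodels’ MixedLM are assumptions of mine. The random intercepts the model estimates for each classroom are what a VAM would report as teacher effects.

    ```python
    # Minimal sketch (simulated data, hypothetical variable names): a VAM-style
    # multilevel fit in which the classroom-level random intercept is read as
    # the "teacher effect."
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n_classrooms, n_students = 50, 25
    classroom = np.repeat(np.arange(n_classrooms), n_students)
    prior = rng.normal(size=classroom.size)                   # last year's test score
    true_teacher = rng.normal(scale=0.2, size=n_classrooms)   # unobserved teacher effects
    score = 0.7 * prior + true_teacher[classroom] + rng.normal(scale=0.6, size=classroom.size)
    df = pd.DataFrame({"score": score, "prior": prior, "classroom": classroom})

    # Regress this year's score on last year's, with a random intercept per classroom.
    fit = smf.mixedlm("score ~ prior", df, groups=df["classroom"]).fit()

    # The estimated random intercepts are the "value-added" scores.
    vam = pd.Series({g: re["Group"] for g, re in fit.random_effects.items()})
    print(vam.corr(pd.Series(true_teacher)))  # how well the estimates track the truth
    ```

    The shrinkage built into those random-intercept estimates is the model’s way of acknowledging how noisy a single year of classroom data is, which is the instability the comment points to.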

  4. Measurement and evaluation issues with VAM are gigantic and have not been figured out yet (if they can or should be at all), especially given the huge confidence intervals on “effectiveness” ratings. Not to mention that 65%-75% of the variation in state test scores measures how rich your parents are, so what’s left is a lot of noise to model.

    Aaron Pallas at Teachers College, Columbia had a nice piece on this recently, titled “The Fuzzy Scarlet Letter”. The last few paragraphs are key and relate directly to New York’s release of VAM scores:

    http://www.ascd.org/publications/educational-leadership/nov12/vol70/num03/The-Fuzzy-Scarlet-Letter.aspx

  5. My highly subjective viewpoint on this is that great teachers tend to be the people who are actually interested in what they are supposed to be teaching (and not merely filling time to get back to research) and are also desperate to impart what they know. I’ve only known one thermodynamics lecturer who was able to bring the subject to life and he genuinely wanted people to understand what he was saying (to the point of setting off fire extinguishers to demonstrate expansion of gases into a vacuum).

    The biggest conundrum I’ve known was a lecturer who would talk for the full time of the lecture and write everything he said. We all thought we had no hope of understanding the subject, but then found that if we wrote everything he said it all made perfect sense afterwards. I guess this is about the student wanting to learn – if that wasn’t the case, then he did nothing to bring the subject to life.

    The worst kind of teacher is the type who teaches in order to show how clever they are – you learn nothing other than how to spot self-importance at a distance.

  6. This is from my (bad) memory — I remember reading that someone researching schools (this was in the US) said that when he got to a new school, to find out who the bad teachers were, he just went to the parking lot and found out who had the most expensive cars…

  7. Back in the 1990s, psychologist Chris Brand got into all sorts of trouble for pointing out that in his youthful experience as a student at famous English boarding schools, the very best teachers — the thoughtful ones who really cared about the boys as individuals, the ones who put their hearts and souls into learning about each youth and how best to teach him — were the pederasts.

  8. The Coleman Report of 1966 notoriously found that schools didn’t have terribly much to do with students’ achievement compared with what students brought to school with them — but one factor James Coleman did find to influence test scores was teacher performance on a test of vocabulary and reading comprehension, which had an independent effect. Smarter teachers appeared to be better teachers.

    It would be interesting to find out whether anything besides verbal intelligence is predictive of becoming an effective teacher. Perhaps oral fluency/lucidity? That doesn’t sound impossible to test objectively.
