Bird fight! (Kroodsma vs. Podos)

Donald Kroodsma writes:

Birdsong biologists interested in sexual selection and honest signalling have repeatedly reported confirmation, over more than a decade, of the biological significance of a scatterplot between trill rate and frequency bandwidth. This ‘performance hypothesis’ proposes that the closer a song plots to an upper bound on the graph, the more difficult the song is to sing, and the more difficult the song the higher quality the singer, so that song quality honestly reveals male quality. In reviewing the confirming literature, however, I can find no support for this performance hypothesis.

OK, that sounds jargony, so let me make it clear: when Kroodsma says he “can find no support for this performance hypothesis,” what he’s really saying is that a sub-literature in the animal behavior literature is in error. Rip it up and start over.

How did everyone get it wrong? Kroodsma continues:

I will argue here that the scatter in the graph for songbirds is better explained by social factors and song learning. When songbirds learn their songs from each other, multiple males in a neighbourhood will sing the same song type. The need to conform to the local dialect of song types guides a male to learn a typical example of each song type for that population, not to take a memorized song and diminish or exaggerate it in trill rate or frequency bandwidth to honestly demonstrate his relative prowess. . . . There is no consistent, reliable information in the song performance measures that can be used to evaluate a singing male.

Damn.

But the other side is not going down without a fight. Jeffrey Podos responds:

Kroodsma [in the above-linked article] has critiqued ‘the performance hypothesis’, which posits that two song attributes, trill rate and frequency bandwidth, provide reliable indicators of singer quality and are used as such in mate or rival assessment. . . .

I address these critiques in turn, offering the following counterpoints: (1) the reviewed literature actually reveals substantial plasticity in song learning, leaving room for birds to tailor songs to their own performance capacities; (2) reasonable scenarios, largely untested, remain to explain how songs of repertoire species could convey information about singer quality; and (3) the playback studies critiqued actually enable direct, reasonable inferences about the function of vocal performance variations, because they directly contrast birds’ responses to low- versus high-performance stimuli.

Where did the critics go wrong? Podos continues:

My analyses also reveal numerous shortcomings with Kroodsma’s arguments, including an inaccurate portrayal throughout of publications under review, logic that is thus rendered questionable and reliance on original data sets that are incomplete and thus inconclusive.

I have not read either paper because it just all seems so technical. I suppose with some effort I could untangle this one, but I don’t feel like putting in the effort right now.

Any ornithologists in the house?

Fun fact: Both authors in this discussion share the same academic affiliation: the Department of Biology at the University of Massachusetts, Amherst. Podos is a professor there, and Kroodsma is a retired professor. Either way, the story is compelling: the youngster does shoddy research and the retired prof blows the whistle, or the cranky old man can’t handle new methods. In some general sense, I’ve been on both sides of this debate: sometimes I criticize what I see as flashy research with empty claims; other times I’m frustrated that traditionalists seem to find any excuse not to take a new method seriously.

P.S. A google search turned up this review from 2005 of a book on the science of birdsong. In the review, Bernard Lohr writes:

The contributors do not shy away from controversy. Donald Kroodsma, for example, issues a challenge to those who suggest large song repertoires are a consequence of sexual selection. Kroodsma remains unconvinced that existing direct experimental data demonstrate female choice for larger repertoires in a natural context. Although his criticism is general, he selects—as he did in an earlier critique of song playback designs—studies of other eminent birdsong biologists as specific examples. Because those researchers are more than capable of defending their conclusions and viewpoints, an interesting and vigorous debate is sure to ensue.

That was 12 years ago, and the issue doesn’t yet seem to have been resolved. Strike one against the story that science is self-correcting.

P.P.S. Also relevant is this article, Response to Kroodsma’s critique of banded wren song performance research, by S. L. Vehrencamp, S. R. de Kort, and A. E. Illes.

24 Comments

  1. Anoneuoid says:

    Something is off about how people are writing about this topic. What does it mean to “repeatedly report confirmation of the biological significance of a scatterplot”?

    • jrc says:

      I thought we covered that. Something about elephant parameters.

      http://andrewgelman.com/2017/07/22/a-stunned-dyson/

    • Don Kroodsma says:

      Please allow me to try to clarify (you’d need more information to make sense of the passage). What I’m trying to say is that dozens of research papers have confirmed the biological significance of something (revealed in the scatterplot graph) that is entirely false. It’s the garden of forking paths, the p-hacking, the do-whatever-it-takes-to-get-a-p-of-0.05 approach that makes a good story publishable. Then the next paper confirms what is now known to be true, and the next, and so on, with no one questioning the process or the results (until now).

      Hundreds of laudatory citations for these stories have solidified the place of these mistruths in birdsong biology. The authors have achieved “fame, fortune, and acclaim” (e.g., Podos is President of the Animal Behavior Society, as were two of his mentors before him; quote from Andrew’s “Winds”). Trouble is, none of the publications are true. It’s all rather inconvenient, and embarrassing, to say the least.

      • Anoneuoid says:

        Thanks.
        So it sounds like you believe there are some reproducible results (the relationship shown in this scatterplot), but they have been interpreted incorrectly. Same as if you test how quickly mice get a food reward in a maze and believe you are testing memory rather than motivation (hunger, etc.).

        • Don Kroodsma says:

          You’re welcome. Thanks for asking.

          I have to confess not knowing much about mice (except I’ve read that they sing, and they’re a bit of a nuisance when they invade the house in the fall), but let me respond about birdsong, trying to make the comments sufficiently general that they would apply to any knowledge base. (It’s probably far more detail than you wanted, but I think it’s instructive in how things can go bad.)

          Yes, the scatterplot is reproducible and real (more on why the standard interpretation is wrong later). If you take lots of “trilled songs,” such as those illustrated in the top figure of the blog, and then measure the “trill rate” (number of repeated units per second—time is the x-axis there for the song graphs, or sonagrams) and the frequency bandwidth of the trill (high frequency minus low frequency—the vertical axis is Hertz, or frequency), and plot lots of songs in the scatterplot such as Figure 4, for many species you get a graph of this sort. The plot often shows an oval-shaped cloud of points, tilted down and to the right, with each point representing a “normal song” for that particular population or species.

          It’s as if the voice box (syrinx) of the bird is limited in what it can do. If the bird is to sing a very fast trill rate (e.g., rightmost song in the top figure), then maybe it doesn’t have the time or ability to extend each repeated unit into a broad range of frequencies; those tiny muscles can do just so much. If the song has a slow trill rate (e.g., leftmost song in the top figure), the frequency range of each repeated unit is often much broader. You can see that trend in the four songs illustrated in the top figure.

          If you’re game, let’s explore that Figure 4 that Andrew reprinted at the top of the blog, because in the process I can illustrate how something so simple has gone so wrong. First, remove everything from the inside of the graph except the data points. You now see data points for “normal songs.” Outside of that cloud of data points, you see space where songs do not occur, i.e., any songs there would be “abnormal songs.” I repeat: Any songs that plot to the left, bottom, or top right of the graph would be considered abnormal. Birds probably don’t “want to” produce a song out there, because over evolutionary time they have somehow “decided” not to sing out there, and anything out there wouldn’t be recognized as normal by other members of the same species, either male or female.

          Next try this: Move the x-axis up so that it intersects the y-axis at 1 kHz. Slide the y-axis to the right so that it intersects the x-axis at 5 (that’s the way most of these graphs are drawn). The “abnormal spaces” to left and bottom of the figure disappear, to be largely forgotten. Now draw a regression line over the top of the figure, and your eyes become riveted to that line. What does that line mean? What if those songs up there are not abnormal, but instead supernormal? Maybe it’s not that a male “doesn’t want” to produce a song up there; it’s that he can’t, unless he’s a real stud. That would be exciting. Every male could dream of producing a song as close to that regression line as possible, or beyond it, because songs up there are difficult to produce (we’ve now come to assume). Maybe all birds, males and females alike, can assess a singing male based on where his song plots relative to that upper bound.

          At this point, we have only one explanation for the graph, so there’s only one explanation to prove (forgive a bit of irreverence or sarcasm on occasion). We can do that (using all of the tools that Andrew has described), and it turns out that the collected data are stunningly consistent with this exciting interpretation of the graph.

          Here’s one clever “proof” of the performance hypothesis (not far off from several actual examples). Take any song from the middle of the pack and show how, when played back to a male songbird on his territory, he attacks the loudspeaker as if an intruder were singing there. Now manipulate that song so that the trill rate is faster, or the frequency bandwidth wider. The song now plots near the upper bound of the graph, or beyond it—you’ve transformed an ordinary song into a supernormal song. When you play that song to the territorial male, he does not attack. It’s as if he’s scared to death of this highly intimidating song and doesn’t dare to approach; maybe he even flees the song. Lots of birds “flee” these songs, it turns out.

          Never mind that in your manipulation of the song you’ve created a song with a trill rate and frequency bandwidth that the birds have never heard before. Never mind that one alternative explanation for your data is that the song you created is so abnormal that the birds don’t even recognize it as one of their own species. Never mind that you have no way of knowing that a bird is fleeing that odd song or ignoring it (all you can know is that the bird is not attacking the manipulated song). None of that matters. What you have are data consistent with your performance hypothesis, i.e., a song that plots closer to the upper bound is more intimidating and reflects the prowess of the male who delivered it, because males don’t attack it the way they would a normal song.

          Proof of the explanation for the graph is that simple.

          The first paper to prove this idea was a resounding success, becoming one of the 10 most-cited articles from the journal Behavioral Ecology. Other proofs began piling up in other papers. Now there’s nothing left to do but to pile up more proofs of what is widely known to be true.

          In my very long article for Animal Behaviour (Kroodsma 2017), the editors (kudos to them, with Susan Foster at the helm) allow me to address one published paper after another, one by one by one, showing how this kind of silliness permeates them all. Elsewhere in this blog, under the title MORE WIND, I spell out more details on how all this happened.

          Sorry you asked? I almost am, as it’s embarrassing to reveal the state of birdsong research and sexual selection (more specifically, the type of performance explanations advocated by Podos et al.). There are also, I would like to point out, a few bright researchers in this field who are doing superb work; I could name names.
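To make the geometry of the argument above concrete, here is a minimal, purely illustrative simulation: synthetic trill rates and bandwidths that form the tilted cloud described in the comment, plus one simple way (binned maxima with a least-squares fit) to estimate an upper-bound line and each song’s distance below it. Every number and method detail here is an assumption for illustration; none of it comes from the papers under dispute.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic songs (illustrative only): trill rate and frequency bandwidth
# trade off, so the points form the tilted, oval cloud described above.
n = 500
trill_rate = rng.uniform(5, 30, n)          # repeated units per second
ceiling = 8.0 - 0.2 * trill_rate            # assumed performance limit (kHz)
bandwidth = ceiling * rng.beta(2, 2, n)     # most songs fall below the limit

# One simple upper-bound fit: bin songs by trill rate, take the maximum
# bandwidth in each bin, then run ordinary least squares through the maxima.
edges = np.linspace(trill_rate.min(), trill_rate.max(), 11)   # 10 bins
which = np.digitize(trill_rate, edges[1:-1])                  # bin index 0..9
bin_x = np.array([trill_rate[which == k].mean() for k in range(10)])
bin_y = np.array([bandwidth[which == k].max() for k in range(10)])
slope, intercept = np.polyfit(bin_x, bin_y, 1)

# Each song's vertical distance below the fitted bound. On the contested
# performance hypothesis, a smaller distance means a harder-to-sing song.
distance = (intercept + slope * trill_rate) - bandwidth
print(f"fitted upper bound: bandwidth = {intercept:.1f} {slope:+.2f} * trill rate")
```

The point of the sketch is only to show what the upper-bound line and the distance-from-the-bound measure are, since the whole dispute is over how to interpret them, not over whether the cloud of points and its bound exist.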

  2. I think these discussions can be helpful, even if they last 12 or more years and sometimes even if they are acrimonious. Thus I disagree with the last sentence of the post: that this situation is strike one for the self-correcting nature of science. The interchange being discussed here is precisely what self-correction looks like, up close. Now all readers of Animal Behaviour and similar venues will be able to make up their own minds on the relevant issues. Resolution of a scientific dispute does not mean that one of the two (or more) disputants has changed the other’s mind. We fallible scientists are not always able to accomplish that, unfortunately. Sometimes, as the post mentions, there are psychological or other personal things going on that only partly relate to the science involved, and don’t we all tend to hold too tightly to our views, and sometimes get angry when we should just get serious? So those of us in a dispute are sometimes hobbled by our own humanness. It’s the rest of the field, and the future of the field, where resolution is accomplished.

  3. mightysparrow says:

    I’m an ornithologist with an interest in birdsong. I read Kroodsma’s critique and Podos’s response, and found the critique very interesting and persuasive, the response much less so (it seemed to focus on peripheral issues without really engaging Kroodsma’s main points). I’m also familiar with a couple of the papers that Kroodsma criticizes, and found them to be good examples of authors walking through the garden of forking paths to reach a desired conclusion. When it comes to the particular “performance hypothesis” that Kroodsma targets, I find myself leaning toward the “rip it up and start over” camp.

  4. Don Kroodsma says:

    MORE WIND

    [yes, no, yes, no . . . do I press “Submit Comment,” or not . . . yes, no . . . yes (gulp) wins]

    It’s me. The cranky old fart. Set in his ways. Can’t handle new methods . . . OR, the wise old whistle blower exposing shoddy research. But you’ll never find out which from the ornithologists in the house, at least not from those who sign their comments, as illustrated by the Lahti post and as predicted by me in a private email to Andrew. Lahti essentially says “I know birdsong, but you have to read all the material and decide for yourself; I’m not going to help you by offering my opinion.” Unstated is “I don’t dare,” and I completely understand.

    I didn’t become cranky as I aged; I was already cranky as a youngster, and braced for the consequences. Almost 30 years ago I challenged birdsong colleagues on pseudoreplication, what seems like a tame issue now; I was immediately challenged in print by Searcy, who would later become a semi-mentor to Podos. I addressed other malpractices in our field of avian bioacoustics as well, pushing for better science.

    And there were CONSEQUENCES. But after rediscovering the joy of life on a bicycle journey from Virginia to Oregon during the summer of 2003 (Listening to a Continent Sing, Princeton Univ. Press), I left academics at age 57, to begin a new life writing about the magic of birdsong for real people (see http://donaldkroodsma.com/). I’m 71 now. It’s been a wonderful, refreshing ride through life since then, but after a 10-year blissful hiatus, three years ago I got sucked into the old battles once again, by now with three generations of the same crowd; the battle is even more intense this time around, because I continue to care deeply about what birds actually do, and I resent deeply all the shoddy science and falsehoods permeating the literature. Maybe, I thought (naively, of course), if I’m even more blunt (and cranky) this time around I could stamp out some of this nonsense.

    In a private email to Andrew, I had predicted that few, if any, experts on birdsong would openly step forward with an opinion, simply because they couldn’t afford to. Lahti, early in his career, depends on his lifeline to his postdoctoral advisor Podos, a member of a fecund, prolific, influential group from Duke (mentor Nowicki, with close colleague Searcy). It would be professional suicide for Lahti to destroy his lifeline and say Kroodsma is right, and he knows better than to say Podos is right, because he knows that “the winds have changed,” and they’re getting stronger (driving rain too, of course, rivers rising, you name it). No one who is still in the business of studying birdsong (i.e., relying on grant money, jobs, publications, promotions, etc.) can afford to take sides openly.

    In an email to me, Andrew said it was ok if ornithologists wanted to post comments anonymously. That seems gutless, but it’s the only way to move comments beyond Lahti’s vague response. The “mightysparrow” has now weighed in, anonymously. Good. But who is this person who smartly chimes in anonymously, you have to wonder? Must be a pawn of mine, one would suspect. One thing is clear: The sparrow appreciates Andrew’s blogs.

    Perhaps I am out of line by offering any opinions on this topic, as I should let others sort it out independently. But, I go to Andrew’s “Winds” again, to the last paragraph, and see how Fiske was encouraged to respond. This is an open dialogue, not terrorism, and Lord knows how many open dialogues I have attempted to have with Podos over the last three years and been denied (n = 8.0, to be exact, all documented in the criminal harassment item below). Maybe this is the open forum where he and I can exchange ideas about what constitutes appropriate scientific and ethical conduct, a conversation to be held at everyone else’s expense, or amusement, or edification. Feathers might fly. Let’s see.

    * * * * * * * *

    THE PROBLEMS WITH PODOS ET AL. AND THEIR RESEARCH ARE LEGION

    Let me list a few of the problems (quotes from Andrew’s “Winds are blowing” blog), in more candid terms than were allowed by editors of the journal Animal Behaviour (where it was important that no one’s feelings were hurt). In my opinion . . .

    1. Podos et al. wholeheartedly adopt the “ . . . find-statistical-significance-any-way-you-can-and-declare-victory paradigm.”

    2. Podos et al. follow “ . . . what I’ve sometimes called the research incumbency rule: that, once an article is published in some approved venue, it should be taken as truth . . .”

    3. Podos et al. exemplify “ . . . the deadly combination of weak theory being supported almost entirely by statistically significant results which themselves are the product of uncontrolled researcher degrees of freedom.”

    4. Podos et al. have “ . . . huge, obvious multiple comparisons problems. . .”

    5. Podos et al. reveal the “ . . . connection between scientific fraud, sloppiness, and plain old incompetence . . . ” Indeed, here’s an email I wrote to Podos on 8 October 2014: “I have no idea what is in your head . . . Only two possibilities come to mind: 1) You truly believe you are doing fine research and learning about bird song. . . . 2) Research and publishing are a game not to be taken too seriously, and it’s no big deal if what you write has no semblance of truth, no big deal that you dupe the vast majority of readers into believing things you know not to be true.” The first is incompetence, the second fraud, the results of either process indistinguishable from the other in the literature. I received no answer.

    6. Podos et al. illustrate the following, that “the real ‘conclusion of the paper’ doesn’t depend on any of its details—all that matters is that there’s something, somewhere, that has p less than .05, because that’s enough to make publishable, promotable claims about ‘the pervasiveness and persistence of . . .’ whatever . . . they want to publish that day. When the authors protest that none of the errors really matter, it makes you realize that, in these projects, the data hardly matter at all.”

    7. Podos et al. use “ . . . the paradigm of the open-ended theory, of publication in top journals and promotion in the popular and business press, based on ‘p less than .05’ results obtained using abundant researcher degrees of freedom. It’s the paradigm of the theory that . . . is ‘more vampirical than empirical—unable to be killed by mere data’.”

    8. Podos et al. reveal a cultural transmission of research and publication techniques, across three generations, with “collaborators and former students . . . [and mentors all using] . . . similar research styles, favoring flexible hypotheses, proof-by-statistical-significance, and an unserious attitude toward criticism.”

    9. Podos and his colleagues “ . . . followed a certain path which has given them fame, fortune, and acclaim. Question the path, and you question the legitimacy of all that came from it. And that can’t be pleasant . . . [We all] . . . spend our professional lives building up a small fortune of coin in the form of publications and citations, and it’s painful to see that devalued.”

    10. Podos et al. do not acknowledge the simplest of alternative explanations for their data, and instead point out how consistent their data are with their chosen explanation, which not coincidentally happens to be their performance paradigm. “ . . . the data hardly matter . . .”

    11. Podos operates in extreme secrecy, refusing to communicate, hiding . . . from what? (See items 1 and 2 below for examples.) This secretive behavior is forbidden by NSF for Podos’ federally funded research, and just downright unethical according to the publishing guidelines of the Animal Behavior Society, of which Podos himself is President.

    12. And more . . . but I rest, lest someone might think I protest too much and am too WINDY!

    * * * * * * * *

    CRANKINESS DEFINED

    I’d hate to have anyone underappreciate how cranky I am (but perhaps keep open the possibility that I could be both cranky and wise). You can read all about the details at http://donaldkroodsma.com/?page_id=1596. Here’s some evidence:

    1. I’m so cranky that I’ve dared to ask questions about the research of Podos and his students, only to be stonewalled and then threatened with criminal harassment charges by the University of Massachusetts police if I ask one more question of this other group of birdsong biologists in my own Biology Department. Furthermore, the police have informed me that I must tell correspondents worldwide that none of them are allowed to talk to Podos either. (This is a novel approach to trying to silence critics. I suggest rereading the section about “collaborators” and “adversaries” in the Winds blog.)

    2. I’m cranky enough that I asked Biology Letters to retract Goodwin and Podos (2014), and an open exchange was about to ensue; then, however, a confidential communication arrived from the Dean of the UMass graduate school, submitted by Podos. Top-secret. I am not allowed to know the contents. End of discussion. Biology Letters says, sorry, but “per university rules” the contents of the letter cannot be revealed. Good-bye. But there’s one small problem: the dean (John McCarthy, Vice Provost for Graduate Education and Dean of Graduate School, and Distinguished Professor of Linguistics—I love titles) has no idea who wrote this letter that supposedly had come from him. End of story, as he was not interested in finding out the source. . . . “Journals and authors [and university administrators] often apply massive resistance to bury criticisms.”—quote from Gelman’s Winds.

    3. Do you know anyone who’s cranky enough to ask a scientific organization that they retract a best student paper award, on the grounds that it was entirely false, because slick marketing and glitz had won the day? . . . My argument was the same as that advanced by Gelman in Winds: “Fiske expresses concerns for the careers of her friends, careers that may have been damaged by public airing of their research mistakes. Just remember that, for each of these people, there may well be three other young researchers who were doing careful, serious work but then didn’t get picked for a plum job or promotion because it was too hard to compete with other candidates who did sloppy but flashy work . . .”

    4. Any of you cranky (and tireless) enough to take what you perceive as misconduct to University Administrators, those who are charged with maintaining the quality of science and integrity of investigators at the university? To departmental chairs, multiple deans, several vice provosts, the chancellor and deputy chancellor and vice chancellors for this and that, and to the very top of the campus hierarchy, to the Provost & Sr. Vice Chancellor for Academic Affairs? Nothing. All this is perfectly acceptable behavior for scientists at our institution. Go UMass! “No surprise,” says NSF to me: “Universities are big business, and their number one priority is to protect their own. Get used to it.”

    5. I’m sufficiently cranky that, over ethical considerations, I resigned as a Fellow from my favorite scientific society, and then suggested to the Officers of the Society that they might want to ask President Podos himself to resign his position, thus sending a strong message of zero tolerance for scientific and ethical misconduct to everyone, especially those beginning their careers and wondering how to “get ahead.”

    * * * * * * * *

    All the gory details of these shenanigans and more are documented on my website, http://donaldkroodsma.com/?page_id=1596, documented there in part because I needed to have a full record of everything should I be taken to court for criminal harassment, but documented there mostly in disbelief—no one could make this stuff up. In the end, and I do hope I am near the end of my efforts, I inevitably ask myself how close I’ve come to stamping out these kinds of behaviors among birdsong biologists. Was it worth the effort and agony over the last three years? I have my doubts.

    One clarification, written to me by a referee of my published account in Animal Behaviour after seeing the Bird Fight! post: “I think he [Gelman] may misunderstand some of the sociology of the Kroodsma-Podos battle, though, as a full professor and president of the Animal Behavior Society does not quite qualify as an upstart ‘youngster’. Podos seems more like another stock character in Gelman’s world, the established scientist who can’t handle the possibility that he might have made a mistake. See http://andrewgelman.com/2016/09/21/what-has-happened-down-here-is-the-winds-have-changed/, . . . as much of what he describes seems to match up with your experience.”

    Back in 2004, Podos might have been called an “upstart youngster” in my Biology Department when I scolded him with the following message in a review of his manuscript: “Science is the search for truth regardless of how good the story is, whereas ‘marketing or advertising’ is the search for a good story regardless of the truth.” After my 10-year hiatus from academics, I returned in shock to see how successful this marketing had been. Since that review of his manuscript during late 2004, Podos has not spoken to me or communicated with me in any way, except via the UMass Police, even though we have ties to the same academic department and both live in the same small New England town.

    A blunt summary: Podos et al. should “ . . . step back and think that maybe almost everything they’ve been doing for years is all a mistake . . . that’s a big jump to take. Indeed, they’ll probably never take it. All the incentives fall in the other direction” (from Winds, of course).

    The movie rights to this sorry saga? They’re all mine. I’m hoping Redford will play me.

    Enough said on my part. I welcome an open dialogue from anyone, especially Podos.

    –Don Kroodsma

    • Kyle C says:

      I just bought your book (new hardcover).

    • Don, you know, if I didn’t already know of several similar stories in other fields, it’d be hard for me to believe all that stuff… It’s only because birdsong is sooo far out of my normal field that I think I get some insight into why, when people complain about these things, the public isn’t utterly outraged. There’s a tendency to think “It couldn’t be all THAT bad”

      But it can.

      Definitively.

  5. I am an avian ecologist but not familiar with this area of research. Gelman in the body of the post and Kroodsma in his comment have both made reference to “new methods” that Podos et al. use – are these literally new analytical methods (i.e., hierarchical models) or field techniques of some kind? Or is it just the scatterplot between trill rate and frequency? Glancing at one of the papers under critique, Goodwin & Podos 2014, it looks like they didn’t use anything fancier than PCA.

    • Don Kroodsma says:

      I think that the “new method” that has been referred to is best characterized as simply a new (and sexy) interpretation of an old graph that has been around for a while (the scatterplot of trill rate vs. frequency bandwidth). Because the interpretation is so appealing (“we finally have an explanation for how birds listen to each other and assess one another based on song”), it gained traction without critical thought, I think, and was “confirmed” over and over by a prolific and influential group of authors; hundreds of citations, many of them self-citations, then helped solidify these ideas of “song performance” in the literature. (It doesn’t help that observers judging how birds react during these playback experiments are almost never “blind,” i.e., the observers know the treatment and know what results are expected/desired if they are to be publishable and support what is widely known to be true.)

      What I find so astonishing is that it took someone (me) coming out of retirement to take this burgeoning field to task. At the same time, I’m embarrassed that I or anyone should really have to address these basic issues of how to do science, and especially embarrassed that I’ve felt I had to use a sledgehammer when a wet noodle should have sufficed. I fear all this does not speak well for the doers or the gatekeepers of this particular subfield of animal behavior (trying to put it mildly).

  6. Jeff Podos says:

    Dear blog readers,

    I’ve been following this discussion thread with interest, and at this time would like to offer three points:

    (1) I concur with David Lahti’s recommendation, that readers interested in this topic consult the original published record and draw their own conclusions. Call me old-fashioned, but I think academic scholarship and discourse work best when they focus on the peer-reviewed, published record. The flip side of this coin is that one should treat extraneous material, including comments on blogs or personal websites, with a grain of salt. This is especially true regarding commentary from invested parties — myself included, of course. I presumed this distinction would be evident to readers of this blog, but apparently, based on what I’ve read so far in this thread, a reminder is in order.

    Anyhow, I’m opting to contribute now mainly because it occurs to me that young scientists down the road, just getting started on research in the field of animal communication, might one day stumble upon this thread and be unduly swayed by Kroodsma’s posts — which I admit are impressive, at least in their volume and earnestness. After reading the suite of comments so far, what budding scientist would possibly want to participate in a research field that is, as Kroodsma presents it, so wrong-headed as to be embarrassing?

    (2) Yet, let’s not forget that there are published, peer-reviewed responses that challenge Kroodsma’s original critique — see the two additional postings in the original blog entry. If Kroodsma is interested in dialogue, as he states, then he ought to consider those published challenges with some minimal level of attention, and prepare a counter-response. What Kroodsma instead offers us on the blog, at least so far, are simple restatements of his original critiques, peppered with superfluous and distracting passages that illustrate little more than a well-honed facility for ad-hominem attacks. He sounds impressive and authoritative, but this only serves to obscure the fact that he has so far ignored, in full measure, the content of our rebuttals. I wonder if he’s even read them.

    I’ll illustrate this with reference to Kroodsma’s comment from Aug 16, 4:34 PM. That comment rehashes one of his paper’s main critiques: that scientists cannot fairly compare birds’ responses to playback of normal songs (“from the middle of the pack”) versus playback of manipulated songs (high performance). This is because those song categories differ not just in performance but also along other axes, including familiar vs. novel or normal structure vs. abnormal structure. Thus, papers drawing the stated comparison would be fundamentally compromised. In his posting, Kroodsma restates his critique and chides the scientists who conducted the work. What he might have done instead is grapple with my published response, in which I point out the following:

    The paper that Kroodsma recognizes as a resounding success (Ballentine et al. 2004, Behavioral Ecology) actually did not commit the stated fallacy. Rather, Ballentine and co-authors compared responses to variants of natural songs only, i.e. songs that were un-manipulated. Hence there was no confound in design. I could shout what I just said, or frame it with an insult, but instead I’ll just say it again: In Ballentine et al. (2004), there was no confound in design. So clearly Kroodsma and I differ in our retelling of the design and subsequent validity of Ballentine et al. (2004). So where did we get our information? For my part, I reached my conclusion by digging out Ballentine et al.’s paper and reading about the actual study design — a step Kroodsma appears to have forgone: “We compared female response to different versions of the same song type as produced by different males, with one version being high performance and the other version being low performance” (Ballentine et al. 2004, pg. 165).

    Additionally on this same critique, none of the other papers Kroodsma (2017) dismissed for committing the fallacy actually committed the fallacy. This is true for the various playback studies that used manipulated rather than natural songs; in these papers, the key inferences were always drawn from responses to songs manipulated to be either high or low performance. Also, as I noted in my rebuttal, a parallel fallacy was circumvented in the song learning study by Lahti et al. (2011), in spite of Kroodsma’s critique of the same point in this paper. In summary, the stated fallacy has been recognized and avoided over and over again, since the start of this work some 15 years ago. This was stated with clarity in my rebuttal — yet Kroodsma here ignores my counterpoint, and instead doubles down on his critique. What gives? Does anybody care if Kroodsma feeds this blog with fake news? Does a red herring become less false with repetition?

    (3) As the rebuttals lay bare, Kroodsma (2017) contains not just this one red herring, but a moderate-sized school thereof. Over and over again, Kroodsma’s critiques simply fail to withstand close scrutiny. The common thread running through all of these issues, as with the above example, is a stunning disregard for the actual published literature. His critiques are burdened by other major flaws too, some of which are outlined in the rebuttals.

    Stepping back for a moment, and as a bit of a public service announcement, Kroodsma’s stubborn adherence to contrarian viewpoints here comes as no surprise to veterans of our field. As Vehrencamp et al. (2017) observed, Kroodsma has previously applied the same modus operandi in attempts to discredit other major topics in the field of avian vocal communication. Of course there’s nothing wrong with reasonable, fair criticism, as noted eloquently by Vehrencamp et al. in their opening paragraph. Yet Kroodsma’s critiques have moved well beyond reasonable and fair. So I offer a message to future students stumbling on this thread: take anything Kroodsma has published, critiques and science alike, with a heavy dose of skepticism. But then again, you should also be highly critical of anything anyone writes, including what I am writing now on this non-peer-reviewed blog. Just take a look at the published exchange and relevant papers for yourselves, and take from them what you can — regardless of all the heat and noise.

    What next? I predict my post will be followed soon by the inevitable long-winded Kroodsma barrage(s). When that happens, ask yourself: has Kroodsma responded to any of the counterpoints raised both by me and Vehrencamp et al.? If so, has he been able to reconcile his positions with the actual published literature? And, to boot, will Kroodsma be able to express himself in a tone that is respectful and professional, or barring that at least mildly civil? I for one am not holding my breath.

    Sincerely,

    Jeff Podos

    • Aunt Sally says:

      Though I’m not knowledgeable enough to comment on the merits of this dispute, I don’t buy the idea that we should discount “extraneous” commentary that occurs outside of the “peer-reviewed, published record.” You are posting to a site that is proof positive that sophisticated, high-quality criticism of research is as likely to be found on a blog as in the pages of refereed journals. In practice, those journals and their editors often act as barriers to (rather than facilitators of) legitimate criticism.

      • Martha (Smith) says:

        “You are posting to a site that is proof positive that sophisticated, high-quality criticism of research is as likely to be found on a blog as in the pages of refereed journals. In practice, those journals and their editors often act as barriers to (rather than facilitators of) legitimate criticism.”

        +1

        In fact, it may (sadly) be that high-quality criticism is more likely to be found in a blog than in refereed journals.

    • Don Kroodsma says:

      Response noted. I think it appropriate that I delay my response, perhaps for a week or so, to give others a chance to find their voices. I have said a lot already, up front and candid and honest, and perhaps my voice needs a little rest. It would be nice to hear from someone who studies birdsong and knows the particulars of the papers that I critique.

    • Aunt Sally says:

      I took a look at the Ballentine et al. paper that is apparently one of the things at issue in this dispute. The meat of the paper is an experiment in which the response variable was a behavioral measure that varied considerably among tested individuals and that required an observer judgment to detect. In other words, typically noisy behavioral data. Sample size for the experiment was N = 10. So . . . with noisy data and a very small sample, the finding of statistical significance (p = 0.028) is all but meaningless. Whether or not the experimental method was “correct” seems almost beside the point. Even if the experiment was perfectly designed in every respect, the resulting evidence is far too thin for meaningful inference.
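      To make the “noisy data, small sample” worry concrete, here is a minimal simulation sketch. All the numbers are assumptions for illustration (a hypothetical true effect of 0.3 standard deviations; they are not taken from Ballentine et al.): with N = 10 per-bird difference scores, a one-sample t-test rarely reaches p &lt; 0.05, and when it does, the estimated effect is conditional on having cleared the significance bar and therefore substantially exaggerates the true effect — the type M error problem.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

true_effect = 0.3   # hypothetical true mean difference (assumption)
sd = 1.0            # noisy behavioral measure (assumption)
n = 10              # sample size, as in the experiment discussed
n_sims = 20_000     # number of simulated replications

sig_estimates = []
for _ in range(n_sims):
    # simulated per-bird difference scores (e.g., response to
    # high- minus low-performance playback)
    diffs = rng.normal(true_effect, sd, size=n)
    t, p = stats.ttest_1samp(diffs, 0.0)
    if p < 0.05 and t > 0:
        sig_estimates.append(diffs.mean())

power = len(sig_estimates) / n_sims
exaggeration = np.mean(sig_estimates) / true_effect
print(f"power is roughly {power:.2f}")
print(f"significant estimates overstate the true effect "
      f"by roughly a factor of {exaggeration:.1f}")
```

      Under these assumed numbers the test is badly underpowered, and the runs that do reach significance report effects a couple of times larger than the truth — which is why a lone p = 0.028 from a small, noisy experiment carries little evidential weight on its own.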
