There’s nothing embarrassing about self-citation

Someone sent me an email saying that one of my papers “has an embarrassing amount of self-citation.”

I’m sorry that this person is embarrassed on my behalf. I’m not embarrassed at all. If I wrote something in the past that’s relevant, it makes sense to cite it rather than repeating myself, no? A citation is not a reward to the person cited, and I don’t have a burning desire for my citation count to go up from 71,347 to 71,354 or whatever. The purpose of a citation is to help the reader.

66 thoughts on “There’s nothing embarrassing about self-citation”

    • That level of self-citation seems obnoxious. But, at the end of the day, not much worse than obnoxious unless there’s more to the story.

      I recently reviewed a resume that, on the top of the first page, plotted the applicant’s h-index by year. I think that’s about the same level of obnoxious.

      • A:

        Regarding Robert Sternberg, there is more to the story. The “more to the story” is that he abuses his position as journal editor in other ways and has shown a disregard for the review process as an editor.

      • Another:

        Wow, how rude of those twitter commenters? I read a lot of the report in question and I gave the authors feedback on it, so I was very well qualified to discuss it. It just happened that I hadn’t read all of the report—I’d only read the parts that were directly related to the questions that the authors had asked of me. But, hey, I guess that doesn’t stop people on twitter from telling me what I should be doing!

        But I’ll give Sanjay a break because he’s contributed lots of good comments to this blog over the years. If he occasionally gives me a hard time, that’s a price worth paying for his contributions to our discussions.

        • Yikes – this one is particularly rude: “Andrew Gelman is a bit more like having Wittgenstein in your seminar.”

        • Mostly I was joking, but I did have this in the back of my mind from Ramsey’s archives: “rather occupied by Wittgenstein, who has arrived to finish the PhD he was at before the war. I am his supervisor! He is in much better spirits, and very nice, but rather dogmatic and inclined to repeat explanations of simple things. Because if you doubt the truth of what he says he always thinks you can’t have understood it. This makes him rather tiring to talk to”

          but it then continues, “but if I had more time I think I should learn a lot from discussing with him.”

          from Cambridge Pragmatism: From Peirce and James to Ramsey and Wittgenstein, by Cheryl Misak.

        • Keith: Oh no. I can entertain Frank Ramsey saying something like that. Ramsey tended to be a good listener, from what I heard and read. The only Cambridge Apostle I actually met in person was Bertrand Russell. I was too young to appreciate his influence. Lettice Ramsey I wouldn’t remember, although I heard that we would see her often in Girton village, Cambridgeshire.

          I got back late last week from Boston. But this week I should have a few free nights to read through Cheryl Misak’s work. Thank you.

    • I have had considerable admiration for Robert Sternberg’s work since the ’80s, back when he was at Yale and I came across his triarchic theory of intelligence. During his years at Yale I believe he did his finest analytical work. Since then he has published over 1,400 articles, with unique perspectives on cognition, metrics, and education more generally. I was a bit surprised to read several dismissive criticisms he made of David Perkins’s and Howard Gardner’s theories. He has had some stark disagreements with several university administrations. I don’t know the details.

      I think his two books on Creativity are outstanding. And among psychologists, he deserves praise. I think it is more productive to discuss his work rather than complain about how many self-citations he uses.

      His consistent admonition is that creatives, particularly exceptional ones, are given a very rough time by those who have to rely on them for ideation.

        • I can see how that can sour you on APS. I gather Robert Sternberg was the editor. I do not know Sternberg at all; I’ve just read several of his books.

          I am, however, very surprised by your experience because Sternberg impressed me as a sensitive and cordial person. I would love to know what caused him to resign from several of his appointments.

          In reading through the entire thread you linked, I see that RJB made some comments that may have been factors. You may have very strong opinions that were overinterpreted or misinterpreted. That doesn’t excuse such dynamics. It seems to me that academia was never the friendliest environment.

        • Again, I’m very surprised by the actions taken by APS. I plan to investigate these allegations further.

          Sternberg and Gardner have endorsed an ethics-based pedagogy that, while deemed implicit in university education and administration, has lagged in acceptance due to conflicts of interest and the incentive structure in place.

          My guess is that Sternberg, specifically, thinks that university criteria for these incentives do not reward the ones who should be rewarded: the creative eclectics with whom they mill around and whom they include in social events.

          I draw this hypothesis from his book Wisdom, Intelligence, and Creativity Synthesized.

          That was John Rawls’s position also, although Rawls’s conversations with his colleagues about this subject went better, according to one of his colleagues, the late Samuel Huntington.

      • I don’t think anyone doing research on intelligence other than Sternberg himself and maybe some of his students thinks much of his triarchic theory. Same is true of Gardner.

        • Triarchic theory was considered novel and cutting edge back in the ’80s. I personally did not adhere to it then because I categorize intelligences neither as Sternberg nor as Gardner does. Nevertheless, in recent years Sternberg in particular has conducted quite interesting inquiries into creativity. I like those insights much more.

          So what is it, specifically, about the triarchic intelligence theory that you and your [friends?] object to?

          I am not sure what occurred during his presidency at the University of Wyoming or his deanship at Tufts. I haven’t kept track of his career.

          That said, Sternberg’s reviews of some other psychologists did surprise me. I do think he admires non-conformist creative individuals; that’s what I’ve heard. He will go to bat for them, which is admirable.

        • Sternberg maintains that there are far too many analytically intelligent experts and not enough creative expertise. Some prominent scientists in institutions like the National Academy of Sciences, MIT, and Harvard, and even a smaller circle in AAAS, also suggest that creativity is ebbing in academia.

        • I read through the first few commentaries. Frankly, Sternberg’s response to Brody had more explanatory force, granting that Sternberg’s response is quite technical.

          Sternberg mentions that his theory is an ongoing project, adapted in response to critiques he has received, including Brody’s. Brody was responding to Sternberg’s 1993 STATs, whereas Sternberg claimed that he has revised them since then. If indeed he did incorporate the critiques, that is commendable.

          Sternberg, as I recall, had some of the widest field experience in his field during the ’80s and ’90s compared with others, traversing different disciplines and professions and doing considerable field work in South America. That is impressive.

          In short, he is a pioneer in his field, as is Gardner.

          Rather, I’m interested in what YOU find wrong and not original with Sternberg’s triarchic theory, and why the claim you posit that ‘what is right in it is old’ even matters. So please explain if feasible.

          I’m a proponent of multiple intelligences. But I do not categorize intelligences as do Sternberg and Gardner. Admittedly, each has written more since 2004 or so. I haven’t had a chance to read their recent work.

          I agree with Sternberg that exceptionally creative [sensitive] individuals can be put through the wringer and taken advantage of. Gordon Allport held to that too.

        • Intelligence research, unlike most other areas of psychology and social science, is highly practical. Intelligence, aptitude etc. tests are widely used in educational, clinical and employment settings and judged based on their practical validity which does not at all conform to Sternberg’s or Gardner’s ideas. Their theories lack this practical dimension and are therefore regarded as speculation that is not just evidence-free but actually directly contradicted by evidence. Gardner at least admits the essentially pseudo-scientific nature of his program:

          [E]ven if at the end of the day, the bad guys [who emphasize the importance of general intelligence] turn out to be more correct scientifically than I am, life is short, and we have to make choices about how we spend our time. And that’s where I think the multiple intelligences way of thinking about things will continue to be useful even if the scientific evidence doesn’t support it. (Gardner, 2009)

        • M
          Sternberg distinguishes practical vs. analytical intelligence because he thinks that mainstream tests do not necessarily evaluate practical intelligence. By that Sternberg means, for example, that mathematical operations that may befuddle children during testing do not befuddle them in practical application. Therefore, he has been developing tests that are easier to grasp. I don’t have any problem with such a pursuit.

          One can question whether these revisions to the tests are meritorious. I gather he targets populations that don’t have access to good schools and may be in lower-income communities.

          I think we have to recognize that education is an experiment. Some approaches work well maybe; others may not. Trial and error is the name of the game.

          From that quote, I don’t think Gardner admitted that his program is of a ‘pseudo-scientific nature’. That’s your characterization/interpretation.

          Finally, whose theory do you countenance? Jensen’s? Spearman’s?

        • M,

          I do not support Gardner and Sternberg’s theories, and I don’t claim to be an expert in research on intelligence, but have some concerns related to your comment “Intelligence research, unlike most other areas of psychology and social science, is highly practical. Intelligence, aptitude etc. tests are widely used in educational, clinical and employment settings and judged based on their practical validity.”

          I agree that intelligence research is motivated by practical concerns, and that intelligence and aptitude tests are widely used, but just because something is motivated by practical concerns or is widely used does not mean that the results necessarily address the intended use well. My understanding is that the validity of the “instruments” and theories depends on the nature of the measuring instrument and the choices made in constructing it and analyzing the results; in particular, “forking paths”, noisy measures, interpretations, etc. are all serious problems in this area, as in other areas using statistics.

        • M:

          Yes. I do say so. But, don’t believe me. Think through the logic yourself.

          To reference today’s blog post, “g” theory is the archetypal cargo cult science: the reification of an eigenvalue, to which crude methods are applied and from which strong inferences are drawn about its genetic underpinnings despite weak evidence to support those claims. We then use an extremely crude theory-building method called a nomological network, in which it is argued that the original crude theory is confirmed via correlation with a theoretically implied hypothesis. Yet we seem to report only the implied hypotheses that are supportive, not those that are not, and even though we have not yet validated the causal underpinnings of the original measure, we conclude that it is causing whatever outcome we’ve predicted.

          In my estimation, the widespread belief in “g” theory as a true measure of human variability in intelligence, given the crude methods used to develop the theory, is no less absurd than the Daryl Bem research methods used to prove pre-cognition. The difference is that there is a consistent correlation among the items used in an IQ test, which has been followed by the hope that biological and genetic theory would come along to prove it true; but both err in the belief that crude methods can reveal precise truth.

          If the correlations can be consistently produced, does that mean it is something real even if we do not fully understand the causal underpinnings? Does it mean we understand it? Does it mean the methods used have been validated by the results? Rather than the other way around?

          The problem is that “g” theory was developed with a method, principal component analysis, that has been widely dismissed as lacking the methodological rigor necessary to draw logically defensible conclusions about what is being measured. Positing that it could be measuring something, which is a claim the evidence can support, is nowhere close to the claims made that it is a measure of the true individual variation in human intelligence.

          What can be claimed from high scores on IQ tests? That an individual who scores highly is very likely very smart, very likely well educated, and very likely from a relatively higher socio-economic category. What cannot be claimed from lower scores on IQ tests? That an individual who scores lower is very likely not smart, as there are a variety of reasons for lower scores that may not be adequately modeled with a method that does not include measurement error at the item level and does not model potential causal confounds but instead assumes them away. A distribution of a scale is NOT the same as a distribution of a real unidimensional aspect of reality about which we understand 95% of the cause of individual variation. Given the complexity of items on these tests, the notions that they are unidimensional, that the error is randomly and normally distributed, or that confounds are somehow controlled via item development are simply far-fetched assumptions.

          “If we build the IQ test, the causal evidence will come.” This is the problematic position of one of the strongest findings ever in the field of psychology.

        • I find that imagining that a value similar to g measured something called “athleticism,” rather than something called “intelligence,” makes the shortcomings of g more intuitively obvious. Could we develop metrics on which top athletes consistently performed well, and merge those metrics into an eigenvalue, “a”? Surely yes. Would this eigenvalue actually “be” the quality of athleticism? Would it capture mainly “genetic” variation, rather than training variation, and would having low “a” prove that someone was predestined not to succeed in athletics? No, no, no. I suspect that people can recognize this more readily in the realm of sports than in emotionally charged discussions about something that sounds as fundamental as “intelligence.”
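A small simulation makes the “a” thought experiment above concrete. Everything in the sketch below is invented for illustration (the six metrics, the 0.7 loadings, the single “training” cause); it is not drawn from any real athletic or IQ data. It only shows how cheaply a positive manifold and a dominant first eigenvalue can be produced when a set of measures shares one common influence:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Invented data-generating process: each athlete has one "training" level
# (an environmental cause), and each of six metrics is that shared
# component plus independent noise.
training = rng.normal(size=n)
metrics = np.column_stack(
    [0.7 * training + 0.7 * rng.normal(size=n) for _ in range(6)]
)

# PCA on the correlation matrix: the shared component shows up as a single
# dominant eigenvalue, i.e. a positive manifold and an "a" factor.
corr = np.corrcoef(metrics, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)      # eigenvalues in ascending order
print(np.round(eigvals[::-1], 2))            # first eigenvalue ~3.5, the rest ~0.5

# The first principal component tracks the training signal very closely,
# even though nothing innate about the athletes was ever measured.
a_scores = metrics @ eigvecs[:, -1]
print(round(abs(np.corrcoef(a_scores, training)[0, 1]), 2))   # ~0.9
```

The dominant eigenvalue and the strong “a” factor are baked in by construction here, which is the point: extracting them tells you that the metrics share something, not what that something is or whether it is heritable.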

        • Genetic evidence has already come along by showing that individual differences in different mental abilities are largely caused by the same genetic effects. The majority of population variation in IQ exists within families (e.g., sibling correlations for IQ are <.5), so any theory that attributes a large causal role in IQ differences to socioeconomic status is stillborn and can be disregarded. Factor analysis (not PCA) is just one method, and evidence for g comes from abductive reasoning involving many different bodies of evidence.

          The g model is the only game in town. Stuff like Sternberg's and Gardner's is not intellectually serious. van der Maas's model is perhaps more serious, but it suffers from a lack of parsimony and fails to account for the fact that the positive manifold exists from the earliest age. If you want to challenge the g model, stop ranting ignorantly, familiarize yourself with the evidence and develop a model that fits the evidence better.
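As a side note on the sibling-correlation step in the comment above: the arithmetic identity being invoked is that the sibling (intraclass) correlation equals the between-family share of variance, so a correlation below .5 implies that more than half the variance lies within families. Here is a minimal sketch of that identity, with an invented 0.45/0.55 split used purely for illustration; whether the identity licenses the further inference about socioeconomic status is exactly what the replies below dispute.

```python
import numpy as np

rng = np.random.default_rng(1)
n_families = 50_000

# Invented variance split: 45% of variance shared within a family,
# 55% unique to each sibling. Illustrative numbers only.
family = rng.normal(scale=np.sqrt(0.45), size=n_families)
sib1 = family + rng.normal(scale=np.sqrt(0.55), size=n_families)
sib2 = family + rng.normal(scale=np.sqrt(0.55), size=n_families)

# The sibling correlation recovers the between-family variance share,
# so r < .5 means the within-family share (1 - r) is the larger piece.
r = np.corrcoef(sib1, sib2)[0, 1]
print(round(r, 2), round(1 - r, 2))   # ~0.45 and ~0.55
```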

        • The genetic evidence is almost none:

          https://www.nytimes.com/2017/05/22/science/52-genes-human-intelligence.html

          “In a significant advance in the study of mental ability, a team of European and American scientists announced on Monday that they had identified 52 genes linked to intelligence in nearly 80,000 people.

          These genes do not determine intelligence, however. Their combined influence is minuscule, the researchers said, suggesting that thousands more are likely to be involved and still await discovery. Just as important, intelligence is profoundly shaped by the environment.”

          As I said above, this quote demonstrates that “g” enthusiasts are still hoping for the evidence to show up on the runway.

          Correlational evidence from a reified eigenvalue is simply not compelling to anyone who takes causality seriously. There is a plethora of possible alternative explanations that cannot be ruled out with current research and that are summarily dismissed with weak argumentation, such as the inference you draw from the sibling correlation. This entire line of reasoning is based on weak assumptions that are treated as strong, is typically conducted on exceptionally small samples, and rarely treats obvious possible confounds appropriately.

          And just because something is the only game in town, doesn’t mean it’s worth playing.

        • The goal should be to create a better model and collect far more precise evidence, not to fit the crude evidence that exists.

        • And by the way, I don’t think Sternberg’s and Gardner’s models are much better, except for the fact that they appropriately consider the multidimensional aspects of reality.

        • The argument that the “positive manifold” must be accounted for is simply absurd. It is the reification of parallel-form reliability as if it were a supernatural thing that must be explained.

        • Stop relying on the New York Times and read the actual scholarly literature. I see no point in wasting my time correcting your misconceptions. I mean, if you seriously think that g theorists believe intelligence to be unidimensional or that IQ research draws on “exceptionally small samples”, what’s the point in arguing with you? You might as well believe that the moon is made of cheddar.

        • Nope. There’s one dominant dimension but in addition to that there are many others. Spearman himself thought that there are an infinite number of dimensions of intelligence.

        • M:

          I will leave you to your imagination as this does not appear to be a productive discussion, but it is utterly illogical to argue both for the “g” theory of intelligence and also for multidimensional theories of intelligence.

          Be well and may the positive manifold be with you.

        • If you’ve ever factor analyzed a battery of mental tests, the compatibility of g with the existence of other factors is blindingly obvious. But yes, this is not productive.

        • Curious said: “it is utterly illogical to argue both for the “g” theory of intelligence and also for multidimensional theories of intelligence.”

          This may depend on just what you consider “the “g” theory of intelligence” and “multidimensional theories of intelligence.”

          Here is how a statistician explained the situation when I was first learning statistics: When studying intelligence, researchers ask their questions and do a factor analysis on the results. The result is typically one strong (high-weight) “first factor” and several additional factors that appear non-trivial. The strong first factor is what is typically called “g”. For some purposes it seems to be the best single predictor when people want a single predictor of what they call “intelligence”. But if you restrict to, say, graduate students at one university, the first factor would be pretty useless for predicting much of anything, because graduate students tend to be pretty high on that factor. Within the general group of grad students, though, the other factors might be relevant to distinguishing between, say, grad students in art, engineering, and psychology.
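The restriction-of-range point in that explanation is easy to see numerically. In the sketch below, the 0.5 population correlation and the top-5% cutoff are invented purely for illustration (the cutoff stands in for “graduate students at that university”):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Invented setup: a general score "g" and an outcome, both standardized,
# with a population correlation of about 0.5.
g = rng.normal(size=n)
outcome = 0.5 * g + np.sqrt(1 - 0.5**2) * rng.normal(size=n)

# Correlation in the full population vs. in a range-restricted subgroup:
# only people above the 95th percentile of g.
full_r = np.corrcoef(g, outcome)[0, 1]
top = g > np.quantile(g, 0.95)
restricted_r = np.corrcoef(g[top], outcome[top])[0, 1]

print(round(full_r, 2), round(restricted_r, 2))   # roughly 0.5 vs. about 0.2
```

Nothing about the underlying relationship changes in the subgroup; only the variance of the predictor shrinks, and that alone is enough to gut its predictive value there, which is why the other factors start to matter for distinguishing among grad students.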

        • Curious

          RE: Unidimensional

          Nearly all the individuals that keep calling attention to their IQ are pretty boring.

          I find Sternberg’s theorization about creativity intriguing, but I don’t know why devising tests for it is so necessary.

          I do agree with the claim that we have a plethora of analytically capable people but not necessarily outstandingly creative ones. Creativity can’t be scheduled in a university environment. It is a spontaneous and serendipitous process for me.

  1. I don’t have a problem with the number of self-citations. If you have a unique perspective, then why shouldn’t you self-cite? God knows that some experts display self-importance without resorting to much self-citation. For me, a demonstration of your ability to refute, admit error, and reformulate your argument toward coherence and calibration, if so required, would be impressive.

  2. Ha! I am curious when you got this mail because there was a recent “Thing” about self-citations on facebook psychology pages and twitter, if i am not mistaken. E.g. see here https://www.facebook.com/groups/psychmap/permalink/568858320157761/

    I didn’t see the problem of self-citations per se, but i thought i was the only one.

    I tried to explain in words why here: http://eiko-fried.com/sternberg-selfcitations/

    “Hmm, i wonder what the real issue/cause of possible outrage is here. Is it 1) self-citations, or 2) is it that citations can be/are used as some sort of measure for evaluating researchers?

    I reason that 1) is not necessarily a problem. In fact, i reason that the possible “outrage” about this may in large part reflect current (possibly severely flawed) research and publication practices, where it seems to me that researchers jump from topic to topic, from paper to paper, not interpreting non-published (non-significant?) papers, thereby altogether not really systematically investigating anything.

    What if i were to have a “real research program” and systematically investigated a theory or phenomenon? (possibly see here for my best attempt at describing what i view as a “real research program”: http://statmodeling.stat.columbia.edu/2017/12/17/stranger-than-fiction/#comment-628652). Would it not make sense to cite this work in every paper following the prior investigations?

    Now if self-citations are a possible problem because of 2), i reason that the possible “outrage” should be about using citation numbers as evaluation metrics, not necessarily self-citations.

    Just my 2 cents.”

    • Martha

      I’m quite sure that neither Gardner nor Sternberg would disagree with your opinion. They have posited similar views. In fact, I’m quite sure that Sternberg has explicitly made the same point. I, though, am not particularly interested in the excessive testing of creativity. I think creativity can dry up quickly in academic institutions as it is. Social science is pretty snooze-producing. So too is some statistics, as Andrew may have implied to T earlier. Boring sometimes.

      Nevertheless, I doubt that any researcher can overcome the qualitative and quantitative conundrums presented in research efforts. It doesn’t mean that we should then dismiss theoretical exploration as fruitless. Sternberg and Gardner have been pivotal in global educational efforts.

      Lastly, I think it is commendable that both acknowledge how difficult it is for very creative people to find an institution or venue that doesn’t try to take advantage of and bully them. Jealousy and ego are rampant, so it’s important to take into account the sociology of expertise and prestige-seeking.

    • Eiko,

      The Sternberg symposium article you cite on your blog is behind a paywall? Do you have a PDF of it? If so, I’d like to review it. Thanks in advance. [email protected] Otherwise I’ll find some other source for it.

      You have presented a relatively balanced perspective on self-citation. Kudos. My guess is that Sternberg does put more pressure on researchers to engage in more cutting-edge theorizing, which is not easy to do in this environment. I lean toward Paul Rozin’s view on the state of psychology: not theoretically mature enough to be assessed statistically. I leaned toward this view even before reading Rozin’s perspective.

  3. Extra, extra, read all about it: “g” theory adherents believe their positive-manifold correlational construct is better than Sternberg’s multiple factor-analyzed correlational constructs.

    To rotate or not to rotate, that is the question: ’tis nobler in the divination of truth from propinquity to suffer the slings and arrows of outrageous correlation or to take arms against a sea of factors.

  4. “The purpose of a citation is to help the reader.”

    Indeed! But regrettably, many authors and journals don’t comprehend this. Hence the prevalence (at least in some fields) of citations that don’t include the article title, and that abbreviate the name of the journal to the point where it is indecipherable (and the reference therefore unfindable). Some journals seem to require such abbreviation, presumably so that they save minuscule quantities of ink (or used to, before the journals went online).

    I think some authors/journals see citations as some sort of game, or perhaps a ritual. That may be why there are some authors who self-cite in ways that aren’t really helpful to the reader.

    • “Hence the prevalence (at least in some fields) of citations that don’t include the article title, and that abbreviate the name of the journal to the point where it is indecipherable (and the reference therefore unfindable).”

      And don’t mention what part of the article is relevant to the citation.

      (This is even more egregious when citing books; I recall an article saying something like “we obtained the result by linear regression analysis”, with a reference to Draper and Smith’s regression textbook: just the book, no details on how the result was obtained.)

    • “Hence the prevalence (at least in some fields) of citations that don’t include the article title, and that abbreviate the name of the journal to the point where it is indecipherable (and the reference therefore unfindable). Some journals seem to require such abbreviation, presumably so that they save minuscule quantities of ink (or used to, before the journals went online).”

      Yes! This annoys me, and i never understood it (and i found it very annoying to have to use when writing a paper as well).

      I’m not sure, but i think this practice often goes along with using numbers as the in-text references. I much prefer Name + Year-style references, which give some extra information when reading the text. For instance, when reading some text in a paper with in-text Name + Year references, i can much more easily determine whether i probably already know that paper, which i find useful when interpreting the usage of the reference.

      Or, if i don’t recognize the in-text Name + Year reference, it makes me want to look at the reference list to check the full reference information, and possibly look up the paper, much more frequently compared to number references. In the number-references situation, i find it very hard to keep switching from the text to the reference list, so i do that less. And i also just like the reference list to be in alphabetical order, not based on the place of first mention in the paper.

  5. > “The purpose of a citation is to help the reader.”

    Well, I guess if you are a professor… If you are a no-name student, the role of a citation is to prove you know the context of the question you are writing about.
