Archive of posts filed under the Sociology category.

A political sociological course on statistics for high school students

Ben Frisch writes: I am designing a semester-long non-AP Statistics course for high school juniors and seniors. I am wondering if you had some advice for the design of my class. My current thinking for the design of the class includes: 0) Brief introduction to R/RStudio and descriptive statistics and data sheet structure. […]
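As a rough illustration of what item 0 could look like in practice, here is a minimal R sketch of a toy data sheet and a few descriptive statistics; the variable names and values are invented for the example:

# Minimal R sketch of item 0: a small data sheet and some descriptive statistics.
# Variable names and values are made up for illustration.
grades <- data.frame(
  student = c("A", "B", "C", "D", "E"),
  hours   = c(2, 5, 3, 8, 4),        # hours studied per week
  score   = c(71, 88, 75, 93, 80)    # exam score
)
str(grades)                           # the structure of the data sheet
summary(grades)                       # quick descriptive summaries of each column
mean(grades$score); sd(grades$score)  # individual descriptive statistics
plot(grades$hours, grades$score, xlab = "Hours studied", ylab = "Exam score")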

Rockin the tabloids

Rick Gerkin points me to this opinion piece from a couple years ago by biologist Randy Schekman, titled “How journals like Nature, Cell and Science are damaging science” and subtitled “The incentives offered by top journals distort science, just as big bonuses distort banking.” Here’s Schekman: The prevailing structures of personal reputation and career advancement […]

Hey—Don’t trust anything coming from the Tri-Valley Center for Human Potential!

Shravan sends along this article by Douglas Peters and Stephen Ceci, who report: We selected 12 already published research articles by investigators from prestigious and highly productive American psychology departments, one article from each of 12 highly regarded and widely read American psychology journals with high rejection rates (80%) and nonblind refereeing practices. With fictitious […]

It’s hard to replicate (that is, duplicate) analyses in sociology

Cristobal Young points us to this post on replication packages; he writes, “we found that only 28% of sociologists would/could provide a replication package.” I read the comments. The topic arouses a lot of passion. Some of the commenters are pretty rude! And, yes, I’m glad to see this post, given my own frustrating experience […]

Monte Carlo and the Holy Grail

On 31 Dec 2010, someone wrote in: A British Bayesian curiosity: Adrian Smith has just been knighted, and so becomes Sir Adrian. He can’t be the first Bayesian knight, as Harold Jeffreys was Sir Harold. I replied by pointing to this discussion from 2008, and adding: Perhaps Spiegelhalter can be knighted next. Or maybe Ripley! […]

“We can keep debating this after 11 years, but I’m sure we all have much more pressing things to do (grants? papers? family time? attacking 11-year-old papers by former classmates? guitar practice?)”

Someone pointed me to this discussion by Lior Pachter of a controversial claim in biology. The statistical content has to do with a biology paper by M. Kellis, B. W. Birren, and E. S. Lander from 2004 that contains the following passage: Strikingly, 95% of cases of accelerated evolution involve only one member of […]

“17 Baby Names You Didn’t Know Were Totally Made Up”

From Laura Wattenberg: Want to drive the baby-naming public up the wall? Tell them you’re naming your daughter Renesmee. Author Stephenie Meyer invented the name for the half-vampire child in her wildly popular Twilight series. In the story it’s simply an homage to the child’s two grandmothers, Renee and Esmé. To the traditional-minded, though, Renesmee […]

Survey weighting and regression modeling

Yphtach Lelkes points us to a recent article on survey weighting by three economists, Gary Solon, Steven Haider, and Jeffrey Wooldridge, who write: We start by distinguishing two purposes of estimation: to estimate population descriptive statistics and to estimate causal effects. In the former type of research, weighting is called for when it is needed […]
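To make the descriptive-statistics case concrete, here is a minimal R sketch of a design-weighted mean alongside the unweighted one; the data frame and weights are invented for illustration, not taken from the Solon, Haider, and Wooldridge paper:

# Sketch: survey weights for a population descriptive statistic.
# Data and weights are hypothetical; w is an inverse-probability-of-selection weight.
svy <- data.frame(
  income = c(30, 45, 60, 80, 120),     # thousands of dollars
  w      = c(2.0, 1.5, 1.0, 0.8, 0.5)  # design weights
)
unweighted_mean <- mean(svy$income)
weighted_mean   <- sum(svy$w * svy$income) / sum(svy$w)  # same as weighted.mean(svy$income, svy$w)
c(unweighted = unweighted_mean, weighted = weighted_mean)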

Inauthentic leadership? Development and validation of methods-based criticism

Thomas Basbøll writes: I need some help with a critique of a paper that is part of the apparently growing retraction scandal in leadership studies. Here’s Retraction Watch. The paper I want to look at is here: “Authentic Leadership: Development and Validation of a Theory-Based Measure” By F. O. Walumbwa, B. J. Avolio, W. L. […]

Hey, what’s up with that x-axis??

CDC should know better. P.S. In comments, Zachary David supplies this correctly-scaled version: It would be better to label the lines directly than to use a legend, and the y-axis is off by a factor of 100, but I can hardly complain given that he just whipped this graph up for us. The real point […]
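For what it is worth, direct labeling is easy to do in base R; here is a small sketch with made-up series (not the CDC data):

# Sketch of direct line labeling instead of a legend (made-up data, not the CDC series).
set.seed(1)
years <- 2000:2010
a <- cumsum(rnorm(11, 1.0, 0.3))
b <- cumsum(rnorm(11, 0.5, 0.3))
plot(years, a, type = "l", ylim = range(c(a, b)), xlim = c(2000, 2012),
     xlab = "Year", ylab = "Rate")      # leave room on the right for the labels
lines(years, b, lty = 2)
text(2010.2, a[11], "Series A", adj = 0)  # label each line at its endpoint
text(2010.2, b[11], "Series B", adj = 0)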

Born-open data

Jeff Rouder writes: Although many researchers agree that scientific data should be open to scrutiny to ferret out poor analyses and outright fraud, most raw data sets are not available on demand. There are many reasons researchers do not open their data, and one is technical. It is often time consuming to prepare and archive […]

The language of insignificance

Jonathan Falk points me to an amusing post by Matthew Hankins giving synonyms for “not statistically significant.” Hankins writes: The following list is culled from peer-reviewed journal articles in which (a) the authors set themselves the threshold of 0.05 for significance, (b) failed to achieve that threshold value for p and (c) described it in […]

Of buggy whips and moral hazards; or, Sympathy for the Aapor

We’ve talked before about those dark-ages classical survey sampling types who say you can’t do poop with opt-in samples. The funny thing is, these people do all sorts of adjustment themselves, in the sampling or in post-data weighting or both, to deal with the inevitable fact that the people you can actually reach when you […]

“With that assurance, a scientist can report his or her work to the public, and the public can trust the work.”

Dan Wright writes: Given your healthy skepticism of findings/conclusions from post-peer-reviewed papers, I thought I would forward the following from the Institute of Education Sciences. Here is a sample quote: Simply put, peer review is a method by which scientists who are experts in a particular field examine another scientist’s work to verify that it makes […]

An inundation of significance tests

Jan Vanhove writes: The last three research papers I’ve read contained 51, 49 and 70 significance tests (counting conservatively), and to the extent that I’m able to see the forest for the trees, mostly poorly motivated ones. I wonder what the motivation behind this deluge of tests is. Is it wanton obfuscation (seems unlikely), a […]

Chess + statistics + plagiarism, again!

In response to this post (in which I noted that the Elo chess rating system is a static model which, paradoxically, is used for the purpose of studying changes), Keith Knight writes: It’s notable that Glickman’s work is related to some research by Harry Joe at UBC, which in turn was inspired by data […]
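For readers who want the formula, here is the standard Elo update written out in R; it makes the “static model” point concrete, since each game nudges the ratings by a fixed step K with no explicit model of how playing strength drifts over time (K = 24 here is just an illustrative choice):

# Standard Elo update: ratings move a fixed step K per game,
# with no explicit model of change in underlying playing strength.
elo_update <- function(r_a, r_b, score_a, K = 24) {
  expected_a <- 1 / (1 + 10^((r_b - r_a) / 400))   # expected score for player A
  r_a_new <- r_a + K * (score_a - expected_a)
  r_b_new <- r_b + K * ((1 - score_a) - (1 - expected_a))
  c(a = r_a_new, b = r_b_new)
}
elo_update(1600, 1500, score_a = 1)  # A beats a lower-rated B: small rating gain for A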

Creativity is the ability to see relationships where none exist

Brent Goldfarb and Andrew King, in a paper to appear in the Strategic Management Journal, write: In a recent issue of this journal, Bettis (2012) reports a conversation with a graduate student who forthrightly announced that he had been trained by faculty to “search for asterisks”. The student explained that he sifted through large databases […]

I actually think this infographic is ok

Under the heading, “bad charts,” Mark Duckenfield links to this display by Quoctrung Bui and writes: So much to go with here, but I [Duckenfield] would just highlight the bars as the most egregious problem as it is implied that the same number of people are in each category. Obviously that is not the case […]

Collaborative filtering, hierarchical modeling, and . . . speed dating

Jonah Sinick posted a few things on the famous speed-dating dataset and writes: The main element that I seem to have been missing is principal component analysis of the different rating types. The basic situation is that the first PC is something that people are roughly equally responsive to, while people vary a lot with […]
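Here is a minimal R sketch of that kind of analysis, using prcomp on a toy set of rating columns; the column names echo the speed-dating attributes but the numbers are invented:

# Sketch: principal components of the rating types (data are illustrative, not the real dataset).
ratings <- data.frame(
  attractive  = c(7, 5, 8, 4, 6),
  sincere     = c(6, 7, 5, 6, 7),
  intelligent = c(8, 6, 7, 5, 7),
  fun         = c(7, 4, 8, 5, 6),
  ambitious   = c(6, 6, 7, 5, 6)
)
pca <- prcomp(ratings, center = TRUE, scale. = TRUE)
summary(pca)        # share of variance explained by each component
pca$rotation[, 1]   # loadings on PC1: roughly equal weights would match the description above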

Social networks spread disease—but they also spread practices that reduce disease

I recently posted on the sister blog regarding a paper by Jon Zelner, James Trostle, Jason Goldstick, William Cevallos, James House, and Joseph Eisenberg, “Social Connectedness and Disease Transmission: Social Organization, Cohesion, Village Context, and Infection Risk in Rural Ecuador.” Zelner follows up: This made me think of my favorite figure from this paper, which […]