The “Psychological Science Accelerator”: it’s probably a good idea but I’m still skeptical

Asher Meir points us to this post by Christie Aschwanden entitled, “Can Teamwork Solve One Of Psychology’s Biggest Problems?”, which begins:

Psychologist Christopher Chartier admits to a case of “physics envy.” That field boasts numerous projects on which international research teams come together to tackle big questions. Just think of CERN’s Large Hadron Collider or LIGO, which recently detected gravitational waves for the first time. Both are huge collaborations that study problems too big for one group to solve alone. Chartier, a researcher at Ashland University, doesn’t think massively scaled group projects should only be the domain of physicists. So he’s starting the “Psychological Science Accelerator,” which has a simple idea behind it: Psychological studies will take place simultaneously at multiple labs around the globe. Through these collaborations, the research will produce much bigger data sets with a far more diverse pool of study subjects than if it were done in just one place.

Aschwanden continues:

The accelerator approach eliminates two problems that can contribute to psychology’s much-discussed reproducibility problem, the finding that some studies aren’t replicated in subsequent studies. It removes both small sample sizes and the so-called weird samples problem . . .

So far, the project has enlisted 183 labs on six continents. The idea is to create a standing network of researchers who are available to consider and potentially take part in study proposals . . .

Studies that are selected go through a collaborative process in which researchers hammer out protocols and commit to publish a research plan in advance — a process known as preregistration . . .

The idea isn’t to pump out a bunch of studies, but to produce a lot of data, Chartier said: “Everything is open and transparent from the start, so what we’re going to end up with is a really solid data set on a specific hypothesis.”

This all sounds fine. But . . . what concerns me is the weakness of the underlying theory. This is all a step forward and a great idea, but the big difference between the physics particle accelerator and the psychological science accelerator is that in physics there are strong theories that make precise predictions, and in social science we’re mostly stumbling in the dark. So, yeah, go for it, but who knows if much of anything useful will come from all this.

I say this not with any intention to criticize the particular research projects mentioned in the linked article; I just want to say Whoa on the comparison to physics here: if it works, great, but let’s not be surprised if the data come out pretty damn noisy.

P.S. I wrote this post around 6 months ago. The topic just happened to come up recently in comments, where someone pointed to an article by James Coyne from 2016, Replication initiatives will not salvage the trustworthiness of psychology, that made similar points. Coyne’s article is stunning: it’s nothing that regular readers of this blog haven’t seen before, but it’s kind of amazing to see it all in one place. We discussed the article when it came out, but I guess that back then I thought that major progress was around the corner; I’d underestimated how slow the change would continue to be.

P.P.S. Alexander Aarts saw this post on the schedule and sends along these thoughts:

The people behind the initiative have recently posted a pre-print about it.

I (Aarts) am also skeptical, because I worry it will (1) not really (optimally) accelerate anything, and (2) make the same sort of mistakes that have probably been made in recent decades.

Concerning (1), I think they will not really be optimally accelerating anything because:

# It seems to me that the studies they will be performing are very different in their topics.
# If I understood things correctly, there is no clear follow-up planned concerning the studies and the relevant theories.
# If I understood things correctly, lots of participants will be used in these studies (possibly too many?)

Taken together, these things make me think that this might not be the optimal manner in which to (a) perform psychological research, and (b) accelerate psychological research. I reason this might be much better accomplished by research programs that use an “optimal” number of participants, and that directly follow up previous studies in order to (re-)formulate and test theories, design better measurements, etc. I reason smaller groups of researchers might be far more efficient at optimally performing research and accelerating psychological science (e.g. here).

Concerning (2), I think they will possibly make the same sort of mistakes that have probably been made in recent decades:

# It seems to me that they favor a hierarchical approach where there is talk of a “study selection committee”, “experts”, etc. This worries me in the sense that a lot of power will be handed to relatively few people. History, and science, have shown this might not be a good idea. To me, they are essentially building a network of “research assistant” labs that will execute what the “principal investigator/professor” lab has come up with. To me, that is not what a true collaboration is about.

# Their idea of “collaboration” is somewhat different from mine. I would view, and describe, their collaboration style via the Psychological Science Accelerator as more “dictatorial”, in the sense that one idea/researcher/study determines what the rest of the labs will do. My idea of collaboration (as I attempted to explain in the link to your blog post called “stranger than fiction”) is more “democratic”, in the sense that more ideas/studies by more different researchers will be performed.

# I am a fan of a “democratic” approach to science in the sense that I reason everyone should be able to try to contribute to science, but not in the sense that everyone’s contribution is equal, or that scientists should vote on which study to perform. The science should be most important, and the science should lead the scientists, not the other way around. I think the Psychological Science Accelerator will make the same mistakes that were probably made in past decades: many factors concerning the scientists that should have nothing to do with the actual science will play a role in the Psychological Science Accelerator.

Concerning the possible use of way too many participants, I posted a comment on your blog which might offer a way to try to determine what the “optimal” no. of labs/participants could be for a collaboration project: I don’t know much about computers and statistics, but I reason the information from all the large-scale collaborations in all the Registered Replication Reports could be used for the idea presented in the link, and could be an argument for/against the use of that many labs/participants.

David A. Kenny & Charles M. Judd wrote:

All of this leads to very different ideas about the conduct of research and the quest to establish the true effect in the presence of random variation. Replication research, it seems to us, should search to do more than simply confirm or disconfirm earlier results in the literature. Replication researchers should not strive to conduct the definitive large N study in an effort to establish whether a given effect exists or not. The goal of replication research should instead be to establish both typical effects in a domain and the range of possible effects, given all of what Campbell called the “heterogeneity of irrelevancies” (Cook, 1990) that affect studies and their results. Many smaller studies that vary those irrelevancies likely serve us better than one single large study. Moreover, in this era of increasing preregistration and collaborative research efforts, multiple studies by different groups of researchers are increasingly feasible.

Now if I understand this correctly, it fits nicely with the Psychological Science Accelerator (PSA), and with my smaller-groups collaboration format, in the sense that multiple labs will perform the same (replication) research. However, it is not clear to me how many labs the PSA will use per (replication) study, and whether “enough is enough” at some point. For instance, I read that around 200 labs have now signed up for the PSA. I think it might be a giant waste of resources if all of these labs will be executing the same study (perhaps even more so if it’s not a replication of a well-known and influential effect, as has been the case with the “Registered Replication Reports”, but some sort of relatively new study).

I think the PSA does some things possibly right (I like collaboration, only a different version of it), but it also might be doing some things wrong (possibly wasting resources, not following up research in a way that in my view optimally accelerates psychological science via theory (re-)formulation and testing, etc.). I reason the part of my critique that may be most useful, and worth listening to, concerns the possible “optimal” no. of labs/participants per study.

I thought the idea of using the data from all the RRR’s performed thus far could provide a possibly useful argument for, or against, the smaller-groups format I described. Should the data from the RRR’s provide an argument against super-large-scale collaboration projects and for smaller collaborations, I reason it could form the basis of a possibly pretty strong case as to why smaller collaboration groups might be much better at optimally accelerating psychological science. For instance, to me it makes much more sense to (1) have immediate follow-up research concerning a certain theory/phenomenon/effect (e.g. see my comment here), and (2) have researchers involved with this whole process who have been working on the specific theory/phenomenon/effect under investigation. I reason both will increase the chances of optimally (re-)formulating theories, coming up with useful experiments and measurement tools, etc., and if I understood things correctly the PSA doesn’t have any of this.

Why are we talking about this in the first place??

When considering these arguments, I think at some point we need to step back and consider why we are studying psychology at all. Here are some motivations for psychology research:
– Understanding and treatment of severe mental illness;
– Improvement of the lives of people who do not have debilitating psychological problems but still have difficulties in their lives;
– Enabling the smoother functioning of modern life (this would include a lot of things such as psychometrics, employee evaluation, nudges, etc.);
– Understanding problems of modern life (for example, studies of bias and stereotypes);
– Pure science (with most of the examples relating to the applications above).

Lots of the famous examples of failed replications are on topics that are uninteresting and unimportant. Or, we could say, interesting if true but uninteresting if not true. For example, if Cornell students had ESP, that would be kind of amazing as it would overturn much of what we knew about science. But if we learn that there’s no evidence that Cornell students have ESP, that’s pretty boring. For another example, in the above-linked paper, James Coyne writes:

These problems are compounded by the publicity machines of professional organizations and journals screaming “Listen up consumers, here are scientific results that you must accommodate in your life.” . . . For instance, consider a 2011 press release from the Association for Psychological Science, “Life is one big priming experiment”:

Scientists have shown again and again that they can very subtly cue people’s unconscious minds to think and act certain ways. These cues might be concepts—like cold or fast or elderly—or they might be goals like professional success; either way, these signals shape our behavior, often without any awareness that we are being manipulated. This is humbling, especially when you think about what it means for our everyday beliefs and actions. The priming experiments take place in laboratories, using deliberately contrived signals, but in fact our world is full of cues that act on our minds all the time, for better or for worse. Indeed, many of our actions are reactions to random stimuli outside our consciousness, meaning that the lives we lead are much more automated than we like to acknowledge.

Interesting, maybe important—if true. If not true, though, this claim is about as interesting as flat-earth beliefs in physics, creationism in biology, or Obama-birther conspiracy theories in political science: that is, what’s interesting is not the theories themselves but rather the fact that influential people believe in them. (OK, I guess nobody influential believes in the flat earth, but that’s kind of interesting too, that this particular theory does not happen to have any powerful adherents.) The interesting thing about that APS statement, or about later claims such as that notorious Harvard statement that “The replication rate in psychology is quite high—indeed, it is statistically indistinguishable from 100%,” is that influential people in the field of psychology were saying it.

Anyway, my point in bringing up that foolish APS press release is that much of the discussion of replication has focused on silly topics such as social priming, a subject which for historical reasons has been important in psychology’s replication crisis but which is ultimately uninteresting and unimportant in itself. It could be helpful to step back and remember why we care about psychology research in the first place.

67 thoughts on “The ‘Psychological Science Accelerator’: it’s probably a good idea but I’m still skeptical”

  1. Imagine a collaborative physics research effort conducted among hundreds of labs, but where several of the measurement instruments are not standardized (e.g., poor quality control results in varying, inaccurate measurements of the supposedly same thing). I think the problems quickly approach what is likely to happen with psychological experiments. Of course, we will try to analyze the variation, including the variation due to instrument mis-measurement, but at some point the noise will overwhelm the signal. Unless we have a very focused study – but then we run the risks discussed above of potentially wasting a lot of resources examining a few questions that may not even be the most important.

    Then, we add the layers of academic and research politics and we get all the problems alluded to above. So, it is hard to be optimistic about this effort, although there is at least some potential for things to improve. For example, the screening of research ideas might eliminate many of the silly but headline-attracting studies done today. On the other hand, we may want many silly studies since the underlying theory is not robust enough to warrant intense focus to begin with. I am agreeing with the above thoughts that there may well be better ways to improve things than this particular imitation of physics.

  2. (I usually comment on this blog using the name “Anonymous” but in this case I think it could be useful to use my name, which is “Alexander A. Aarts”, as I am mentioned in this blog)

    Thank you for giving this some attention, Professor Gelman.

    I really think this whole Psychological Science Accelerator should be thought about, and discussed, much more than thus far seems to have been the case, before possibly jumping aboard this whole thing and/or promoting it like it’s the greatest thing since sliced bread.

    Here are some additional thoughts I wrote up in a different post (http://statmodeling.stat.columbia.edu/2018/05/08/what-killed-alchemy/#comment-772767). I will summarize them here because I think they might be relevant and/or useful for the possible discussion on this post. Please correct me if I understood any of this wrong, and/or am making other kinds of mistakes!

    Possible problematic issues of the Psychological Science Accelerator:

    – outsources/gets rid of researchers’ individual thinking, activities, and roles (?)

    – (semi-)outsources data collection and analyses to one computer and/or a few people on behalf of dozens of labs (?)

    – there are already three “assistant directors” and a “committee”, as if it’s a giant firm, before the first paper is even published (?)

    – possibly creates an entire new unnecessary “level” that could have unnecessary influence, and unnecessarily take away research funds (just like journals and/or university administrators) (?)

    – possibly wastes many unnecessary resources (?)

    – a small group of people decides what the other labs will do (?)

    – will not make all the data available so others can check them at the time the results get published (and widely talked/written about) (?)

    – possibly “nudges” folks into follow-up research on that specific study/topic (chosen by a small group of people?) by being all mysterious and keeping the second part of the data a “secret” (?)

    Now, please correct me if I am wrong about any of these things. If I am correct about (some of) them, all these things seem to me highly unwanted and perhaps even unscientific. I sincerely hope I am seeing things wrong here, but I am worried that this whole thing may simply not be a very good idea. I am all for collaboration, but done in a different manner which I reason does not have any of the possible problematic issues written above and in this blog post.

  3. To my eye, your list of reasons we study psychology leaves out the entirety of cognitive psychology and much of cognitive neuroscience too, which I think are larger than the small sliver you allot to “pure science” unrelated to illness or other social problems. (You are of course free to believe that all of this work is uninteresting and unimportant!)

    This is all a step forward and a great idea, but the big difference between the physics particle accelerator and the psychological science accelerator is that in physics there are strong theories that make precise predictions, and in social science we’re mostly stumbling in the dark. So, yeah, go for it, but who knows if much of anything useful will come from all this.

    Out of curiosity, what was your own motivation for moving from physics to statistics, and for specializing in an applied social science?

    • Erin:

      Yes, I was putting lots of the work in cognitive psychology in the “pure science” category, but I agree with you that it has applications too. For example, studying how children learn language can give us general insight into political attitudes and also can have some applications in education. Still, I think these fit into the categories of “enabling the smoother functioning of modern life” and “understanding problems of modern life.”

      Regarding your last paragraph, see this memoir from a few years ago. Short answer: (a) I felt I could make more of a contribution in statistics than physics because I didn’t feel that I had a deep understanding of physics, (b) I study politics because I think it’s interesting and important.

      • Impressive background.

        Orwell is good to keep in mind, but that can become an exercise in cynicism. That is why blogs at least hold out platforms for open-mindedness and intellectual freedom, the bases for the potential for health, happiness, integrity, justice, and, more broadly, a sustainable planet & species.

  4. I see both advantages and disadvantages to the accelerator. One strength is that it may reintroduce a much-needed degree of “disinterestedness” into psychological research. Perhaps I am being overly cynical, but it seems that too many researchers are personally invested in confirming their ideas and hunches. Add in journal preferences for sexy findings, researcher degrees of freedom, and garden-of-forking-paths effects, and you have a mess on your hands. Agreeing to test an idea that was not yours in the making may result in more disinterestedness and hence in more trustworthy findings.

    The risk of course is that weaknesses in design and measurement are also replicated across many labs if the common protocol is not well developed. Unfortunately, weak design and measurement are fairly common in many areas of psychology. For example, the infamous Cuddy et al. p-curve analysis of the power-pose literature that Psych Science just published analyzes data from 55 studies on power poses. Curiously, almost all of these 55 studies – designed by many different smart people with PhDs – failed to include a control group in their design. That is, they contrasted a group holding an expansive power pose with a group holding a contractive (slouching) pose and then inferred a positive effect of an expansive pose. Obviously such an inference is inappropriate if you cannot rule out a negative effect of the contractive pose.

  5. I think many faculty in many disciplines would describe other disciplines as not interesting; hence we did not get PhDs in those disciplines. (That is also why people can make fun of NSF or NIH grants based on their titles.) Most faculty might well be surprised at what motivates people in other disciplines, or to learn that those people find their own research fascinating. Even with neighboring disciplines, I’m often bored with talks or topics that others find so fascinating, even if I find the methods great. I often feel “why would someone use such a great analytical approach on such a topic?”, though I do keep trying.

    In all the social and behavioral sciences I think there is a tendency for people to say: but isn’t your goal to “help people” or “solve social problems” or “spread democracy” ? And the person being asked will sigh. Is the purpose of biology to “feed the world”? Is the purpose of physics “to create weapons”? Of computer science to “figure out what generates clicks”? Also, again, and we’ve discussed this before, consider whether your assumption that scholarship in academic non-clinically oriented psychology is supposed to be about helping people possibly reflects some stereotypes.

    • Elin:

      I guess the point of most inquiries is to help people, or to be interesting, or to be fun, or to make money. As a political scientist, I’m not at all bothered by the view that the fundamental purpose of political science is to help people understand the political world; to help leaders make better decisions; to improve our political institutions so that people will be safer, happier, more prosperous; etc.

      I’m not a biologist, but it would certainly seem reasonable to say that the main motivations for biological research are to reduce human suffering, to feed the world, to prevent and cure disease, etc.

      In contrast, I don’t think anyone would say that the purpose of physics is to create weapons, or that the purpose of computer science is to figure out what generates clicks. These are highly cynical statements which might describe certain aspects of how these fields are currently funded, but of course these are not fundamental purposes, in the sense that the fundamental purpose of political science is to understand and improve our societies.

      • “I’m not a biologist, but it would certainly seem reasonable to say that the main motivations for biological research are to reduce human suffering, to feed the world, to prevent and cure disease, etc.”

        There are undoubtedly lots of biology researchers who went into biological research with these motivations. But, based on biologists I have known, there are an awful lot who went into biology because they were fascinated by frogs, lizards, monkeys, insects, slime molds, etc., etc. Similarly, many (most?) astronomers went into that field because they were fascinated by stars, comets, planets, …; computer researchers got into the field because they were fascinated by computers; physicists entered the field because they were fascinated with (at least some aspects of) physics; etc.

      • As a political scientist, I’m not at all bothered by the view that the fundamental purpose of political science is to help people understand the political world; to help leaders make better decisions; to improve our political institutions so that people will be safer, happier, more prosperous; etc.

        It seems to me that the first of your reasons — to understand the political world — is a kind of attribution that is largely missing in your explanations for why other fields exist. What I was trying to get at above is that cognitive psychology isn’t mostly aimed at improving the world at all, but understanding it. Maybe not in a very useful way, some of the time. But I spent a few years there, and I think it lives in some space between philosophy of mind and biology (and the bits I was in, close to linguistics and music theory as well). I think, for good or ill, the disciplinary culture is quite far from the helping professions. There is much more hands-on humanity in my applied stats work, honestly.

        • Conjectures:

          1. Many cognitive psychologists go into the field because they are fascinated by some aspect of the human mind.

          2. Many cognitive psychologists get stuck in their intuitive ideas of what “understanding” is, and thus are prone to discounting ideas and evidence that are contradictory to their intuitive “understandings”.

  6. Several of the complaints raised about the replication project seem to miss an essential point. The goal of replication projects as I see it is not to directly do worthwhile science, but to establish practices under which bad science is difficult and unrewarding.
    If your goal is to improve scientific practices within various subfields of psychology, you will not get very far by unilaterally writing off large portions of those subfields as a priori irrelevant. Those fields will just ignore you and go on as they have for the last 50-odd years. You need some way of cajoling members of those fields into buying in to your standards, and this means taking dumb priming stuff seriously.

    Maybe you do not want to reform those areas. Maybe instead you want to isolate them and lower their status, with the hope that they shrink and eventually wither away in the manner of phrenology and Freudianism. This may in fact be the better path, but it is not the one replication projects are pursuing. And in terms of wasted resources, well, it is not obvious which path is better, as isolated fields take a very long time to die and suck up considerable resources while they are doing so.

    • > establish practices under which bad science is difficult and unrewarding.
      Certainly a worthwhile goal, but will formalized collaboration groups actually achieve it?

      My guess, based on some experience (e.g. the Cochrane Collaboration), is that really poor methods will be replaced by OK methods while better methods and innovation will be blocked. It just seems to be what groups of colleagues always gravitate to (e.g. use _our_ methods; trust us, we are the experts).

        “My guess, based on some experience (e.g. the Cochrane Collaboration), is that really poor methods will be replaced by OK methods while better methods and innovation will be blocked. It just seems to be what groups of colleagues always gravitate to (e.g. use _our_ methods; trust us, we are the experts).”

        Yes! Thank you.

        This is one of the things I don’t like about the “collaborative” take of the Psychological Science Accelerator. The research and publication format I linked to also includes collaboration, but done in a different way, where every single individual researcher comes up with their own ideas/studies. In my reasoning this will enhance the chances of truly useful/innovative work.

        My view of collaboration is that it should make use of the strengths of the individual researchers in such a way that they will have a chance to “put forward” their best work. The collaborative part then kicks in where small groups of researchers replicate each other’s work.

        Overall, I think this could provide a possibly “optimal” manner of making use of both the individual minds of researchers and what they (potentially) have to contribute, and the collaborative efforts of replicating using small groups.

        Overall, I reason the science is what will lead researchers in the research and publication format I linked to, with the possibility of individual researchers (and/or certain specific small groups of collaborators) pointing the rest in the right direction (possibly in an alternating manner).

      • Keith, I couldn’t agree more. To change this culture requires a change in incentives. How to draw up these incentives is the question. I do hold out hope for the Open Science Framework. And for blogs like Andrew’s, Daniel Lakens’s, the Facebook Psych Methods Discussions, Reddit, etc.

    • “Specifically, in the article Gilbert says he welcomes the accelerator because it will help psychologists identify moderators”

      I am skeptical about this possibility, and/or I think this might be achieved much more optimally by small groups of collaborators using relatively few participants.

      I reason that with that many possible variables that can differ across that many labs, it seems hard to pinpoint what exactly could have caused possible differences. Perhaps there’s also a difference between (1) identifying moderators based on actual data/variables present/measured, and (2) merely speculating.

      The first way to me seems closely related to theory building and testing, and I reason that could possibly be much better achieved using a collaboration of relatively small groups of researchers and participants.

      The second way to me seems to be what has happened a lot in recent discussions about “failed” replications, and could perhaps also follow from large-scale collaboration efforts. These speculations are likely to be followed up by sub-optimal research that aims to show evidence for these proposed moderators, thereby keeping alive the possibly problematic research and publication process which I hope we could stop from happening (possibly see: http://statmodeling.stat.columbia.edu/2018/04/20/carol-nickerson-investigates-unfounded-claim-17-replications/#comment-711458).

      Also see: http://datacolada.org/63

      “I shuffled the dataset, conducted a meta-analysis on each of the four anchoring questions, computed I², and repeated this 1000 times (R Code). The first thing we can ask is how many of those meta-analyses led to a significant, false-positive, I². How often do they wrongly conclude there is a hidden moderator?

      The answer should be, for each of the 4 anchoring questions: “5%”.
      But the answers are: 20%, 49%, 47% and 46%.”
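
      To make concrete what that check computes, here is a minimal sketch in Python rather than the post’s R (the lab count, per-cell sample size, and effect size are hypothetical). With idealized normal data the heterogeneity test should hold close to its nominal 5% level; the quoted 20%, 49%, 47% and 46% come from running the same procedure on the real anchoring data:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      n_labs, n, n_sims = 20, 30, 1000                 # hypothetical sizes
      false_positives = 0

      for _ in range(n_sims):
          # every lab runs the identical study, so true heterogeneity is zero
          a = rng.normal(0.5, 1.0, (n_labs, n))        # condition 1, true d = 0.5
          b = rng.normal(0.0, 1.0, (n_labs, n))        # condition 2
          d = a.mean(axis=1) - b.mean(axis=1)          # per-lab effect estimate
          v = a.var(axis=1, ddof=1) / n + b.var(axis=1, ddof=1) / n
          w = 1.0 / v                                  # fixed-effect weights
          mu = np.sum(w * d) / np.sum(w)               # pooled effect
          Q = np.sum(w * (d - mu) ** 2)                # Cochran's Q
          I2 = max(0.0, (Q - (n_labs - 1)) / Q)        # the I^2 the post reports
          if stats.chi2.sf(Q, df=n_labs - 1) < 0.05:   # heterogeneity test
              false_positives += 1

      print(f"significant heterogeneity in {false_positives / n_sims:.1%} of runs")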

  7. A very interesting idea.

    It would save a lot of time if we could do the replication at the same time we do the initial work. If successful, this would eliminate wasting time looking at studies that don’t replicate.

    Could the replicators be corrupted? Probably. So maybe you need a cadre of researchers who specialize in replication and whose reputations depend on not being corrupted.

    One likely effect is that corrupt researchers would move more towards studies that are engineered to give the “right” results by stacking the deck beforehand. There is a lot of this already. Rote replication would just deal twice from the same stacked deck.

  8. I am rather weirded out by the idea of the sample size being too large. Isn’t the usual complaint that psych studies are underpowered? Given the whole positive-predictive-value thing, power (or precision, if you so please) would seem like a pretty important thing to have. I get the idea that smaller samples let you iterate faster, but it’s not like small-sample research is about to go away.

    I can see the appeal of “strong” theory in psychology, but I haven’t yet seen anyone give an example of how to get it, given the complexity of the subject matter (I would rather appreciate an example if someone had one though). Weak measurements seem much easier to deal with (notably via behavioral measurements that closely match the phenomenon of interest, and perhaps more consistent use of validation techniques). Still, I don’t think research based on “weak” theory is necessarily pointless. If one variable increases monotonically with another, it’s great to know that even if you can’t find the precise function. Evidence can still be found for or against pretty vague theories, and evidence can in at least some cases (perhaps most) be produced to distinguish between two competing “weak” theories.

    • As a further comment on the sample size angle, I would especially question the notion that the search for moderators should be done with “relatively smaller” sample sizes, because that type of thing is murder for statistical power and you already need lots of people to reasonably measure effect size. As discussed here (http://datacolada.org/17), even checking for one moderator requires a total of four times the number of participants to retain the same statistical power, assuming that moderator entirely eliminates the effect, which may not be the most likely case. (I’m not sure if there’s a Bayesian equivalent to the power issue – it seems like there would have to be if testing were being used instead of estimation – but since these tests will almost certainly be frequentist, that’s not relevant in this case.)
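
      To spell out that four-times arithmetic with a toy normal-approximation power calculation (a sketch, not the linked post’s code; the effect size and cell size here are hypothetical, and the moderator is assumed to wipe out the effect entirely):

      from scipy import stats

      def power(effect, se, alpha=0.05):
          # approximate power of a two-sided z-test (negligible lower tail ignored)
          z = stats.norm.ppf(1 - alpha / 2)
          return stats.norm.sf(z - effect / se)

      d, sigma, n = 0.5, 1.0, 64                       # n subjects per cell
      print(power(d, sigma * (2 / n) ** 0.5))          # simple effect, N = 128: ~0.80

      # moderator that fully wipes out the effect, tested as a 2x2 interaction:
      # the interaction contrast is the same size (d - 0) but its SE is larger
      print(power(d, sigma * (4 / (n / 2)) ** 0.5))    # same total N = 128: ~0.29
      print(power(d, sigma * (4 / (2 * n)) ** 0.5))    # 4x total, N = 512:  ~0.80

      With n = 64 per cell, the simple effect is detected with roughly 80% power; the interaction test at the same total N drops to roughly 30%, and restoring 80% takes four times the participants.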

      • “As a further comment on the sample size angle, I would especially question the notion that the search for moderators should be done with “relatively smaller” sample sizes, because that type of thing is murder for statistical power and you already need lots of people to reasonably measure effect size”

        I am not very good at statistics, but perhaps you could design more appropriate studies to test a certain hypothesis/theory, and/or decide how to optimally test hypotheses and (re-)formulate theories.

        Please correct me if I’m wrong about the following. To use the example you link to (http://datacolada.org/17):

        Perhaps the hypothesis that people rate cartoons as funnier with a pen held in their teeth vs. their lips, *but less so* if they hold the pen after seeing the cartoons (which has been tested via a two-way interaction that possibly requires larger sample sizes), can also be tested by a new experiment that simply looks at teeth-vs.-lips ratings *after* seeing the cartoons.

        Although I reason it technically differs slightly, it may not matter much for theory building and testing (at least at this stage of a research program), and/or it could be a much more economical way to test and build theories and gather relatively optimally useful data (400 pp in one two-way-interaction test/experiment vs. 4 × 100 in four simple-effects tests/experiments).

        Also see: http://daniellakens.blogspot.com/2014/12/more-data-is-always-better-but-enough.html

        • Okay, that split experiment might be a good idea. But I guess I was working on the assumption that the effect size is small (as is often said on this blog) or that we’re trying to estimate the effect size (and thus be able to note differences in it) (as is recommended on this blog, to my memory). If it’s the latter case especially, I would like to link another analysis from the same site:

          http://datacolada.org/20

          But yes, I acknowledge the expense difference would be pretty big, and that could be important. I guess it depends on how many people these labs are recruiting anyway, what effect size we’re looking for, and whether we want to measure effect size. Plus probably other stuff that’s not occurring to me. But anyhow, the Accelerator can serve as a complement to other labs. It preregisters its protocols if I recall, so p-hacking and the garden of forking paths shouldn’t be an issue. Meanwhile, positive predictive value should be fairly good and estimates of importance would be more feasible. And other labs can rapidly iterate and create potentially important findings for the Accelerator to verify.

      • > if there’s a Bayesian equivalent to the power issue
        Definitely – it is perhaps made most clear through Mike Evans’ concept of prior bias: http://www.mdpi.com/1099-4300/20/4/289

        Essentially, you simulate the performance of the Bayesian analysis you plan to do on the study as designed (sample size being an important feature of that) under, say, the effect parameter set to zero. You track how often the prior for that parameter being zero is higher than in the posterior – how often such a study will move you away from the truth. And then you do the same for an effect parameter set to a non-zero boundary. This may scare researchers into realizing the real uncertainty.

        Noisy data is noisy: a Bayesian analysis can at best cope with it, but cannot decrease it – as it’s real. To get less noise you need a better design (e.g. a larger sample size). Informative priors simply shift the biases for different parameter values, which may or may not be appropriate/defensible given the background knowledge.
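
        As a loose conjugate-normal sketch of that simulation idea (a toy setup, not Evans’ actual procedure; the prior scale, noise scale, and candidate sample sizes are all hypothetical), one can count how often the posterior puts less density on the true parameter value than the prior did:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)

        def prob_moved_away(theta_true, n, sigma=1.0, tau=1.0, n_sims=5000):
            # prior theta ~ N(0, tau^2); observed mean ~ N(theta_true, sigma^2/n)
            ybar = rng.normal(theta_true, sigma / np.sqrt(n), n_sims)
            post_var = 1.0 / (1.0 / tau**2 + n / sigma**2)   # conjugate update
            post_mean = post_var * (n / sigma**2) * ybar
            prior_at_truth = stats.norm.pdf(theta_true, 0.0, tau)
            post_at_truth = stats.norm.pdf(theta_true, post_mean, np.sqrt(post_var))
            return np.mean(post_at_truth < prior_at_truth)   # moved away from truth

        for n in (10, 100, 1000):                            # candidate designs
            print(n, prob_moved_away(0.0, n), prob_moved_away(0.5, n))

        As n grows, the “moved away from the truth” rate shrinks in both the zero and non-zero cases, which is the design-quality signal being described.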

  9. The Psychological Science Accelerator is in 10 years going to be the single best thing that ever happened to psychology. The people involved (I’m not one of them) know their stuff – the silly argument in this blog about weak theories is nothing to worry about – the people at PSA have a good understanding of measurement, a good understanding of theory. It’s good to be critical – but this blog is weak critical. It is really not saying anything substantial about what PSA is actually doing. Indeed, it seems no one who has commented here, nor the blog writer, actually knows anything the PSA is doing. Kind of silly to comment on something you don’t know enough about to comment on it, I’d say.

    • 1) “It’s good to be critical – but this blog is weak critical. It is really not saying anything substantial about what PSA is actually doing”

      Ehm, I (think) I listed several possible problematic issues with the PSA in the several comments listed and/or linked to above?

      2) “Indeed, it seems no one who has commented here, nor the blog writer, actually knows anything the PSA is doing. Kind of silly to comment on something you don’t know enough about to comment on it, I’d say.”

      Please forgive me if I am trying to understand this whole thing before saying things like that the PSA is “in 10 years going to be the single best thing that ever happened to psychology”. In doing so, I am trying to be careful not to jump to conclusions or make mistakes. I tried to do that by being careful in my comments, and by providing links to several sources so readers can check things themselves. Unless I am not supposed to talk critically about the PSA, please help me understand and/or explain what I possibly don’t know about this PSA.

      Side notes:

      1) In my reasoning you can’t just use words like “silly” and say stuff like “this blog is weak critical” without at least trying to connect such a comment to things written in the blog/comments that make such a statement relevant/appropriate.

      2) In my reasoning you can’t just say stuff like “the people involved know their stuff” without at least trying to connect such a comment to evidence for this. And even then, it seems like an appeal to authority and not really useful in these types of discussions.

      • It’s really difficult to have a discussion about this, Alexander, since most of your assumptions about what the project is doing are incorrect. So it feels like a waste of time to correct your misconceptions – it seems to make much more sense for you just to join the project if you care about preventing mistakes. That’s my issue here – you lack a good understanding of what people are actually doing, and are raising weak criticisms.
        Let me just take one: You say they might use too many participants. The PSA has spent a lot of time thinking about this, and is fully aware of the need to be efficient. There is no concern, unless you don’t like the thoughts they have about this – but you don’t know the thoughts they have about this. Instead, you seem to suggest that the PSA might not think about this – but the team has so many smart people that thinking you would come up with an issue they didn’t already come up with themselves should have a really low prior. I would personally, given such a low prior, first learn more about the project instead of writing criticism. Now it’s fine if you want to take another approach, where you write weak criticism – everyone is free to choose what they think is a good way to spend time in their lives.

          1) “It’s really difficult to have a discussion about this, Alexander, since most of your assumptions about what the project is doing are incorrect. So it feels like a waste of time to correct your misconceptions”

          Ehm, I have read their site, several tweets, and the pre-print about the project. That’s all the information I can go on. If the project is doing other things than they stated via those sources, they are not being “transparent” like they advertise to value, and I of course can’t comment on that.

          If I understood things about the project incorrectly, perhaps you could enlighten me and the other readers. It’s not really fruitful in a discussion to just say that most of my assumptions about what the project is doing are incorrect and leave it at that.

          2) “The PSA has spent a lot of time thinking about this, and is fully aware of the need to be efficient. There is no concern, unless you don’t like the thoughts they have about this – but you don’t know the thoughts they have about this”

          Ehm, don’t you think such a project, which promotes the h@ck out of itself, asks for money, and asks people to join, has some sort of scientific (and perhaps even ethical) responsibility to better inform and/or explain their project to others (including the no. of participants they plan to use)?

          This is the only thing I could find in their pre-print about use of resources: “First, the ability to pool resources from many institutions is a strength of the PSA, but one that comes with a great deal of responsibility. The PSA will draw on resources for each of its projects that could have been spent investigating other ideas. Our study selection process is meant to mitigate the risks of wasting valuable research resources and appropriately calibrate.”

          3) “it seems to make much more sense for you just to join the project if you care about preventing mistakes”

          Ehm, given my criticism written above about the PSA, including that (1) I reason they are unnecessarily creating an entire new “level” in the research and publication world that can potentially be abused and manipulated, and (2) I reason they are nudging and/or telling other researchers what topics/studies to contribute to because they “are the experts”, how can I in good (scientific) conscience join (or promote) such a project?

          There is no way to improve the Psychological Science Accelerator in my reasoning, because it fundamentally does things that I view as unscientific, and perhaps even unethical.

        • You can join PSA if you want to learn more, and care about things enough to make a real contribution. I don’t see why they would inform strangers about how they are working, but if you’d join, you’d be able to spend your time thinking about things that have a connection with reality.

    • +1 to Alexander’s comment.

      Daniel:

      1. If you’re going around describing something as “going to be the single best thing that ever happened to psychology,” you’re gonna have to expect some scrutiny. You write that the people running the project “have a good understanding of measurement, a good understanding of theory,” and that’s fine, but that does not contradict the point that the underlying theories being tested are weak. It’s completely possible to have a good understanding of what theory is available, while still recognizing the problems with these theories. In any case, I disagree with your implication that I don’t know enough about this to comment on it. Comments can come at all levels.

      2. I, like the researchers in the Psychological Science Accelerator, have done some work intended for public consumption, and I recognize, expect, and welcome comments from all perspectives. I wouldn’t like a comment that is misleading, for example someone saying they are an expert and know all about X when they don’t—but I think it’s perfectly fine for someone to make it clear where they are coming from, what they know, and what they’re asking. And that’s what my correspondent and I are doing above.

      I just went to the Psychological Science Accelerator, where I see the following comment from the organization: “Thank you for all of the interesting and well articulated comments and critiques @StatModeling !” I’m glad they, unlike you, welcome comments from non-experts. Over the years, my colleagues and I have found we can learn a lot from skeptical outsiders, and we consider their comments carefully. Going around and taking open, careful, and sincere questioning and calling it “silly” and “weak” while hyping something as “the single best thing that ever happened to psychology” . . . that’s no way to learn. Don’t you think academic psychology has had enough of defensiveness and hype already? I’m happy to see that the leaders of the Psychological Science Accelerator seem to be taking a much more open and healthy attitude toward criticism than you are in your comment here.

      • It’s quite a challenge to address the conceptual/theoretical underpinnings without relying on the extent of jargon/terminology, which sounds arid and, as Paul Robin so aptly describes, very narrowly conceived through labs and surveys. And these conferences become showcases & networks exclusively of expertise. The empathetic element is subdued.

      • Hi Andrew,

        sorry if I misread your blog, but it seems to boil down to: “I say this not with any intention to criticize the particular research projects mentioned in the linked article; I just want to say Whoa on the comparison to physics here: if it works, great, but let’s not be surprised if the data come out pretty damn noisy.”

        I think if you don’t have anything else to base your skepticism on, my assessment of weak and silly is a very accurate summary.

        I personally think psychology will progress, more than anything, by changing the field into a more collaborative science. PSA has started this, and shown it is possible. The comparison with physics is perfectly accurate here – CERN is not heralded because of its discoveries, but because of how it, after WWII, got all these countries together in a non-military research project.

        You can be skeptical all you want – but it’s a bit silly to be skeptical based on the concerns you raised (which are vague and underspecified) when simply the contribution of having a collaborative structure in science will very likely have an impact on psychology similar to the one CERN had. We can’t predict the future, but I think some optimism is more warranted than skepticism. We will see over the next 50 years what comes from this.

        • Daniel:

          The leaders of the Psychological Science Accelerator thanked us for our “interesting and well articulated comments and critiques.” So you might want to talk to them before being so sure that the comments here are “weak” and “silly.” I also remain baffled by your claim that the Psychological Science Accelerator is “going to be the single best thing that ever happened to psychology.” Hype + dismissal of open criticism, that’s not a good combination.

          That said, I very much appreciate your engagement directly on this blog, where there is an opportunity for open discussion.

        • Yes, I hope Daniel, and others, engage on this or other blogs. The sociology of expertise is an integral part of improvement, or lack of improvement, of the knowledge base. Sometimes the back-door opinions remain in echo chambers and cliques. Posting transparently is a way out of these habits of mind and practices.

        • Thanks Andrew – I’m typically very appreciative of criticism. And I don’t intend to hype – I’m giving you my personal belief that the PSA is the biggest development in psychology in years. Maybe I’m ignoring the difference between skepticism and criticism – I’m seeing the first, but not the second. I guess we all have our priors.
          Best,
          Daniel

        • Daniel:

          We may be closer to agreement than you think! You write that you’re not seeing criticism. I’m not trying to criticize! I’m raising concerns, not criticisms. I’m not trying to shoot down the Psychological Science Accelerator, I’m asking some questions that I don’t see answered anywhere. And I think Aarts and the others in this thread are coming from a similar perspective. See this comment for more on this point.

        • “CERN is not heralded because of its discoveries, but because of how it, after WWII, got all these countries together in a non-military research project.” Really? I have *never* heard anyone claim this. First, “is heralded” implies the present, and I’m sure the particle physicists I know would be surprised to hear that what’s important isn’t their discoveries, but their ability to work nicely together. Second, if “is heralded” refers to the actual heralding when CERN was created in the early 1950s, my understanding was that the explicit aim was to help European nuclear science catch up with the US and USSR, something that could hardly be considered intrinsically “non-military” at the time, and that, again, the concern was science and not just having a bunch of people work together. [https://timeline.web.cern.ch/timelines/The-history-of-CERN]

          As a side note, I feel compelled to point out, given a lot of comments on this thread, that (i) large particle-physics experiments are a small subset of physics, and so “physics” should not be used as a synonym for large projects; and (ii) there are a lot of drawbacks to the giant-collaboration approach to science, which the particle physicists themselves are aware of – be careful what you wish for.

        • Last time I was at CERN (January this year) I talked with researchers there about how they could not get a Nobel Prize in physics (there are more than three of them) but that there was talk they might get the Nobel Peace Prize. Regardless of whether or not they get it, the role of CERN in promoting international collaboration seems pretty huge.

          The last paragraph is really something that psychologists are more aware of than anyone, given that it is their discipline, so I’m pretty aware of what we are wishing for. Large collaborations are often not a choice to begin with – and psychology is moving in that direction.

        • There is the possibility that the field of ‘psychology’, or any social science, will evolve epistemically and epistemologically through several of these cross-disciplinary collaborative appeals & efforts. That is being demonstrated by the initiatives that METRICS at Stanford may be undertaking. John Ioannidis has been forging such collaborative appeals for several decades, and he is well positioned to spearhead them, as are some bloggers here. I tend to agree with Keith O’Rourke in that we have to be pragmatic as to what quality will be adopted. But I think we also need to widen the types of thinkers in these efforts.

    • Daniel:

      Can you help me understand how your comment is not an instance of this – “trust them, they are the experts” http://statmodeling.stat.columbia.edu/2018/06/26/psychological-science-accelerator-probably-good-idea-im-still-skeptical/#comment-773983

      Now, in my linked comment, the likely common behavior I was postulating is not in any way meant to be obstructive but is unfortunately rather usual group dynamics, for which countervailing measures need to be taken (e.g. invite outside, not necessarily polite-seeming, criticism and make people take it seriously, at least as a mental exercise: let’s at least pretend to ask how what’s being said can be seen as helpful to _us_).

      • I’m not saying trust them – I’m saying join them. Or if you don’t want to, first figure out what they are actually doing. I’m seeing some of their work (my PhD student is involved) and there is nothing mentioned on this blog they didn’t discuss extensively themselves. Kind of obvious, no? There’s 300 of them, and there are just a bunch of people who spend less than a fraction of their time thinking about the PSA on this blog. Would really be surprising if anyone here came up with good criticism – they could, but in my experience, good criticism typically takes weeks of investigation.

        • Daniel:

          I think you are making a mistake by implicitly framing this as some sort of debate where the goal is to present strong criticisms. I’d say that the material in this thread (the initial statements from Aarts and me, and then the many comments) is raising concerns rather than expressing criticism. To the extent there is criticism, much of this criticism is about the Psychological Science Accelerator team not being clear about addressing certain potential problems. To the extent that you and the 299 others in this project have addressed these issues, that’s great, and I hope you can take the above comment thread as a motivation to address these publicly, rather than merely reassuring us that we should trust you.

          When I go to the Psychological Science Accelerator website, I see a lot about openness and a lot about procedural issues regarding the distributed laboratory network, and core principles of diversity and inclusion, decentralized authority, transparency, rigor, and humility. That’s great (and I think that the goal of “humility” somewhat contradicts your confident claim that this project is “going to be the single best thing that ever happened to psychology”), and I think it’s part of the story. As I wrote last year, honesty and transparency are not enough. Theory and measurement are important too. It may well be that your team of 300 has thought very seriously about theory and measurement—but that hasn’t made it onto the webpage. On that webpage, I see lists of committees, I see descriptions of some projects, I see lots of procedural material, but nothing addressing concerns regarding theory and measurement in psychology. Again, I believe you when you say that the team has thought hard about these issues—but perhaps you can understand that, given there is nothing about this in the project’s easily accessible public materials, outsiders such as myself can be concerned. You write, “first figure out what they are actually doing.” Going to their website would be a good start, no?

          I was heartened to see the project’s leaders (or whoever maintains their website) thank us all for our “interesting and well articulated comments and critiques,” and I hope they will publicly share the fruits of their extensive discussions, thus addressing some of the concerns raised in the above thread.

        • Daniel & Andrew,

          There is a strong tendency to pat the backs of those in one’s own circles. That is how ‘groupthink’, over time, forms. The ‘like’ function on Twitter is a bit of a hindrance to critiquing members of one’s own group/clique/circle.

          I heard the podcast where Daniel offered his opinion of the authors’ collaboration process in ‘Justify Your Alpha’. It would have been more educative if other observations of the process had been included. It takes managerial gifts to negotiate a collaboration – specifically, not to play politics & favoritism behind the scenes, and not to continually defer to one or two more prominent experts. In that sense, we all have a lot to learn.

          We should all guard against being too cliquish and deferential toward some thought leaders. That is a hazard of many of the circles I’ve come across. It is, I guess, the means by which careers are made. But it is also a risk in dragging mediocre arguments out for far longer than needed.

        • You wrote: “(…) there is nothing mentioned on this blog they didn’t discuss extensively themselves. Kind of obvious, no? There’s 300 of them, and there are just a bunch of people who spend less than a fraction of their time thinking about the PSA on this blog. Would really be surprising if anyone here came up with good criticism – they could, but in my experience, good criticism typically takes weeks of investigation.”

          I like this reasoning a lot! Does this also mean we can all stop doing science? Because surely, if 300 folks thinking about something for a few months covers everything, then millions of scientists over hundreds of years must really have thought of everything already!! I mean, what are the priors there, am I right?!

          Regardless, I just spent an hour or so browsing their site on June 30th and July 1st, 2018 (https://psysciacc.org/people/), especially all the “committees”. Please correct me if I am misreading or misunderstanding anything here. I am sure the site’s text will keep changing, so I am not sure how useful this will be for future readers, but alas.

          For now, if you are correct, and in line with the reasoning described in the PSA pre-print (“Altogether, the PSA depends on [a] distributed expertise model likely to reduce many common mistakes each psychologist makes during the course of independent projects”, p. 17), the PSA will of course already have collectively thought about every single one of the following things (and of course taken appropriate action), but perhaps it could still be useful in some form to contemplate the following questions:

          1) It seems to me that there are lots of graduate students, PhD students, post-docs, and other “early career” folks on all of the committees. For them, and for most of the other committee members, I can find no clear evidence of “expertise”, or even of relevant experience, with regard to their committee membership and tasks. What does the PSA mean when it uses words like “experts” in its pre-print and when talking to the media, and do you think it is appropriate to call these folks “experts” or put them in that position?

          (as a side note: the committee that chooses members for all the other committees seems to have these characteristics to an even larger extent. Isn’t this even more remarkable?)

          2) The pre-print talks about “expert review of the theoretical rationale”. How does this work? I reason there can be no permanent “theoretical experts” in some sort of standing “theoretical expert” committee, because such experts would need mental superpowers: to judge the “theoretical worth” of studies on topics that will necessarily differ completely (given how the PSA has been set up), they would have to command knowledge of everything in psychological science, including topics they have (almost certainly) never worked on themselves and in which they therefore could never have developed the relevant “theoretical expertise”.

          If the PSA is instead using “theoretical experts” on a study-by-study basis, drawn from those who have joined the PSA or whom you could contact yourself, doesn’t this necessarily imply you are making use of a very tiny subset of “theoretical experts”, and do you think this could matter for how the PSA will function?

          (possibly compare this to simply having “theoretical experts” decide for themselves what they view as theoretically important/useful/whatever and execute their own studies via small-group collaboration, for instance through StudySwap, without any interference from a third party like the PSA)

          3) Is the PSA aware of the several “Registered Replication Reports” in which so-called “experts” were likewise used to design the “best” possible studies with thousands of participants, as the PSA apparently intends to do, and of the experts’ subsequent post-study reactions claiming that things really should have been done differently? (e.g., see Dijksterhuis’s reaction to the “professor priming” RRR; Strack’s reaction to the “pen in mouth” RRR). What does that tell you about the scientific value of all the “experts”, and do you think involving (certain) “experts” can therefore actually be a bad idea?

          4) I’ve read on the PSA site and in the pre-print that they are following and/or advising the “Registered Report” process, in which the stage 1 submission is peer-reviewed. Doesn’t this mean that the work of the “experts” can all be undone because some annoying “reviewer 2” and/or journal editor suggests something else?

          And how can you let possible “non-experts”, or at least “experts” other than those the PSA has chosen, such as these journal reviewers, be involved when you seem to especially value the “experts” you yourself chose for the PSA? Or do the peer reviewers the journal picks automatically count as “experts” as well? Or will there be no “normal” independent stage 1 journal peer review, and if so, could you then still call it a “Registered Report” that has gone through “rigorous” independent peer review?

          (as a side note: are you aware that the “Registered Report” format does not seem to mandate a publicly accessible registration, so that readers can actually check things, and that journals can decide for themselves whether they even consider that important in a “registration”? (e.g., see https://osf.io/preprints/bitss/fzpcy/) Since you are all about “transparency”, if I understood things correctly, how are you going to make sure the reader can access the pre-registration?)

          5) Am I correct in noticing that several “experts” on the committees have their own projects tied to the PSA (Lebel with CurateScience, Chambers with Registered Reports, Ijzerman with CREP)? Are these conflicts of interest?

          6) Are, or will, any members, directors, or others be paid using research funds that could otherwise go to actual research instead of to “managing” (or whatever the appropriate term is) the PSA? Are these (possible future) conflicts of interest that need to be made clear before promoting the PSA and asking funders and researchers around the world to contribute?

          7) The pre-print paper, accessed July 1st, 2018 (https://psyarxiv.com/785qu/), states: “Author’s Note: The authors declare no conflict of interest with the research”. I think several of the authors are members of PSA committees, directors, or people who have submitted (approved) studies. Are these conflicts of interest?

          8) Isn’t one of the committee members (DeBruine) the same person who (co-)submitted the first study that is being, or will be, performed? Is this a conflict of interest? It also seems that some folks sit on several different “committees” at the same time. Isn’t one of the “study selection” committee members (Kline) also on the “publication and dissemination” committee and the “leadership committee”? Isn’t one member of the “authorship criteria” committee (Janssen) also on the “leadership team”? And if so, are these conflicts of interest?

          9) Do you think researchers who want to be in one of your committees, or want to function as PSA “experts”, could be a certain specific type of researcher? If yes, do you think that could be important with regard to how the PSA will function from a scientific perspective?

          10) Do you think researchers who submit studies to be performed via the PSA by dozens of labs, and thousands of participants (?) could be a certain specific type of researcher? And if yes, do you think that could be important with regard to how the PSA will function from a scientific perspective?

          Regardless, these details might not even matter anyway; they might all just be minor things. Perhaps it is more about getting things done as quickly as possible and getting as many people on board as you can, all in the name of “open science”, “collaboration”, and “changing incentives”! You perhaps named it the “Accelerator” for a reason! You can always think about things later and correct them much further down the road based on “meta-scientific research”. That seems to count as scientifically responsible (behavior) to some, I have come to learn (e.g., see https://www.psychologicalscience.org/observer/preregistration-becoming-the-norm-in-psychological-science#comment-8353307). It’s also a great way to simply do what you think is best without really engaging with any criticism, or trying to prevent big mistakes.

          To me, what the Psychological Science Accelerator is doing is taking a park where lots of different folks are doing all kinds of things and installing gates, fences, and locks around it. They then form several “gate-committees”, “fence-committees”, and “lock-committees” that are supposed to keep an eye on who gets to enter the park and be given a key, and after doing this they proudly claim to be “inclusive” and to be making sure that “everyone can enter the park and do what they want there”. That doesn’t make any sense to me. You should not (want to) build the fence in the first place.

          Collaboration is fine, and bigger sample sizes are fine, but not the way the PSA is going about things. I repeat my reasoning: 1) they are unnecessarily creating an entirely new “level” in the research and publication world that can potentially be abused and manipulated, and 2) they are nudging and/or telling other researchers what topics/studies to contribute to because they “are the experts”. How can I in good (scientific) conscience join (or promote) such a project? The Psychological Science Accelerator fundamentally does things that I view as unscientific, and perhaps even unethical. That certain “open science” folks seem to be totally okay with this is incomprehensible to me, but perhaps it also makes clear that I don’t want to be involved with any of this “open science” stuff anymore.

          Should facts, logic, reasoning, evidence, careful thought, etc. (still) matter in any of this, I have tried to provide some of them in my comments to the best of my abilities, because I reason that’s what science should be about, and that’s what matters most in trying to improve Psychological Science. I have tried to help improve Psychological Science for several years now, from a tiny room on an old computer, and at a certain point without even wanting to be involved in science in any way whatsoever anymore. I can, have, and will only work, reason, and reply with that intention. If some of you want to go the PSA route, I will not try to stop you further. That’s your choice. I made mine.

          I would like to thank Professor Gelman for giving me the opportunity to speak my mind about these matters and to offer an alternative (the small-group collaboration research and publication format linked in several places on this blog). I would also like to thank all the other commenters for their contributions to the discussion. I would now like to stop commenting, because I reason it would not be very useful anymore.

          When given the stage, I tried to play my best. Just like “KISS guy”. Always be like “KISS guy”. I am handing the guitar back to Professor Gelman and the others in the band. “Thank you and goodnight”:

          https://www.youtube.com/watch?v=Z4b6BPaO944

        • Re:

          ‘To me, what the Psychological Science Accelerator is doing is taking a park where lots of different folks are doing all kinds of things and installing gates, fences, and locks around it. They then form several “gate-committees”, “fence-committees”, and “lock-committees” that are supposed to keep an eye on who gets to enter the park and be given a key, and after doing this they proudly claim to be “inclusive” and to be making sure that “everyone can enter the park and do what they want there”. That doesn’t make any sense to me. You should not (want to) build the fence in the first place.’
          ————-

          This has been my impression of some specific expert circles and individuals. The question is how to get around such strictures. In this sense, I think METRICS at Stanford has been more inclusive, because Drs. John Ioannidis and Steven Goodman have been evaluating selection criteria and incentives for entry into the biomedical enterprise. I speculate that this will continue to be an ongoing evaluation.

          Also, I don’t really have a problem with the less expert being included because, to be perfectly frank, I think novices and outsiders have sometimes made valuable contributions to an enterprise as well.

          What I sometimes see lacking is team leadership skill, across fields, because of how people are conditioned for career success. Having said that, I am always grateful to have come across stellar team builders.

        • Re:
          Should facts, logic, reasoning, evidence, careful thought, etc. (still) matter in any of this,
          —–

          These have always been queried. I think, though, that some will be better diagnosticians than others; a small minority, typically. We have ample numbers of analytically superior thinkers, but it is in informal discussions that one can most starkly discern biases and logical fallacies. Rex Kline made a similar observation. It is then, in re-reading their articles, that I have found those same hidden biases operating; I might not have caught them so easily otherwise.

          I had this same conversation with a few mathematicians in Boston too, a fair number of whom I would socialize with through my family associations. One used to drill me on the logical fallacies I discerned in an argument. Believe me, it took 15 years of hard work to review the critical-thinking curricula, and that was just a hobby.

  10. John Ioannidis, in an interview with Eric Topol, mentioned that there are over 2 million articles in the biomedical literature. Estimate that in total, across all disciplines, there are over 4 or 5 million. We access only a fraction of them, and of what quality they are is the question.

  11. I think the major weakness of the objection to the PSA is its premise that psychological measures are unreliable. Personality, happiness, IQ, person perception, willingness to pay, and a ton of other measures in psychology are highly internally and externally reliable. Psych typically has a noise problem because of small samples, which the PSA helps with, not really a measurement problem.

    • I think the mistake your argument makes is assuming that group-level reliability and validity, as assessed via psychometrics, are equivalent to individual-level reliability and validity: it over-interprets large-sample group-level results as if they apply to small-sample within-person results (a simulation sketch illustrating the distinction appears after this exchange). They are very different things. Statistical estimates of reliability do not solve the major problem of psychology, which is the causal mapping from reality to score to realistic interpretation.

      • Huh? Test-retest reliability for personality and IQ is super high, and both predict real-life consequences very well. Your comment does not really make sense, nor does it make an argument against mine.

        • 1. Care to share what those values are within person?
          2. Can you causally explain what produces a low, mean, and high IQ score?
          3. Would those causal explanations be consistent across those levels?

        • You keep being off the mark. Reliability ≠ causality. Also, I am not Google Scholar. I suggest reading up on this material before you write such nonsense. Bye.
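
    As an aside, the group-level vs. within-person point in this exchange can be made concrete with a small simulation. The sketch below, in Python/NumPy, uses made-up numbers (a between-person trait SD of 15 and a within-person error SD of 5 are assumptions for illustration, not figures from the thread): a test can show excellent test-retest correlation across people while any single person’s score still swings noticeably between sessions.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 10_000

        # Stable trait, widely spread across people (assumed SDs, for illustration).
        true_score = rng.normal(100, 15, n)
        error_sd = 5  # within-person measurement noise

        test = true_score + rng.normal(0, error_sd, n)
        retest = true_score + rng.normal(0, error_sd, n)

        # Group level: test-retest correlation is high because between-person
        # trait variance dwarfs within-person noise (225 vs. 25 here).
        print(np.corrcoef(test, retest)[0, 1])  # ~0.90

        # Individual level: one person's two scores still differ by
        # sqrt(2) * error_sd ~ 7 points on average, so swings of 10+ points
        # between sessions are routine despite the excellent correlation.
        print(np.std(test - retest))  # ~7.1

    Whether this bears on the causal-mapping point is a separate question; the simulation only separates the two senses of “reliable” that the exchange above runs together.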

  12. “Give a man a fish and he will eat for a day; teach him how to fish and he will eat forever.”

    I reason the Psychological Science Accelerator is not teaching people how (and why) to fish; instead it is nudging and/or telling them where to fish (i.e., the specific topic), while holding their fishing rod (i.e., the specific hypothesis), choosing their tackle and bait (i.e., the specific study design), and taking away the fish as soon as it is caught (i.e., outsourcing data collection and analyses).

    (possibly compare this to the small-group collaboration research and publication format, http://statmodeling.stat.columbia.edu/2017/12/17/stranger-than-fiction/#comment-628652, where there is (some) explanation of why and how to perform and publish research, and where researchers decide for themselves what, why, and how to study, and with whom to collaborate.

    Also note that with relatively small groups of collaborating researchers, individual researchers will still have the opportunity to contribute to psychological science with research similarly powered to that of their small-group collaborating peers. This may not be the case once research using thousands of participants becomes the norm.)
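
    To put rough numbers on the “similarly powered” point just above: a quick power calculation, sketched here with Python’s statsmodels and an assumed illustrative effect size of d = 0.4, suggests that about 100 participants per group, well within reach of a few collaborating labs, already give 80% power for a two-group comparison; thousands of participants buy extra power mainly for much smaller effects.

        from statsmodels.stats.power import TTestIndPower

        # Sample size per group for 80% power at alpha = .05 (two-sided),
        # assuming a standardized effect of d = 0.4 (illustrative value).
        n_per_group = TTestIndPower().solve_power(
            effect_size=0.4, power=0.8, alpha=0.05
        )
        print(round(n_per_group))  # ~100 per group, ~200 in total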
