27 thoughts on “Albedo-boy is back!”

      • disclaimer: my prior that people studying asteroids are under-estimating the uncertainty in their measurements is in fact pretty high. I base this on the fact that, in general, people under-estimate uncertainty across almost every area of study.

      • Daniel:

        Given what we know about his training, I can only assume that Myhrvold knows a lot more physics than I do. Unfortunately, he seems to suffer from a serious case of overconfidence and a bit of an obsession with albedo. We discussed some of this here.

        That said, it’s interesting that the news article starts with a statement that the astronomers in 2011 “claim to determine the diameter of asteroids with an accuracy of better than 10%” but then later on there is a quote that the “size errors end up at roughly 15%.” Maybe Myhrvold will get them to up that number to 20%?

        P.S. I was also amused that the news article refers to Myhrvold as a “patent accumulator.” More polite than “troll,” I guess!

        • Oh sure, he’s self-indulgent, self-congratulatory, and overconfident, but not a dilettante.

          We’ve talked so much about the hype machine; here are astronomers putting out “better than 10%” hype that they already acknowledge is more like “15%”. Eventually perhaps they’ll land on the right number… maybe 75% or whatever. I mean, measuring stuff is hard, there are a lot of sources of error that we often don’t acknowledge, and after all, no one is going to go out and actually land some laser EDM (electronic distance measurement machine) on a significant fraction of these asteroids and get high-quality, authoritative data…

          So, basically, Myhrvold is probably right because he chose a problem where we already pretty much know the academic funding hype machine forces people to pretend they’re doing better than they really are.

      • So does Freeman Dyson, but he’s a buffoon when it comes to climate science. Some people are able to apply their problem-solving skills effectively outside their domain of expertise; others, not so much.

        I’m reading Myhrvold’s paper now. (Unfortunately, at 111 pgs I don’t expect to get too far through it.) Up until a few years ago I used to do the sorts of calculations in his Section 2 on a regular basis. I’m intrigued. If his Section 2 looks reasonable I’ll be interested to see what the NASA team’s criticisms are.

        • His points re Kirchhoff’s Law at the start of Section 2 are legit.

          [Multiple comments deleted because I misinterpreted some variable definitions and didn’t realize it until an hour later.]

          Eq.(11) seems fine but I don’t know enough astronomy to know its limitations. His objection seems sketchy. Provided the phase function is about the same for the Johnson-V band as it is for the Bond albedo weight function then Eq.(11) should be a good approximation.
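
          (For readers without the paper handy: the standard relation connecting the Bond albedo to the V-band geometric albedo is

          A_B ≈ q · p_V,    with q ≈ 0.290 + 0.684·G in the H–G system,

          where q is the phase integral. Whether that is exactly Eq.(11) I’ll leave to the paper, but it’s the relation the phase-function caveat above applies to: converting p_V to a Bond albedo this way assumes the V-band phase behavior stands in for the bolometric one.)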

          The rightmost expression in Eq.(15) seems fine to me.

          I’m impressed that asteroid surface temperatures can be spatially resolved.

          Now at the end of p.15, I’m taking a break. The ratio of equations to charts comparing model w/ data is too high. My eyes are glazing over.

  1. From one of the NEOWISE investigators in the linked story: “We have strongly encouraged that the paper be submitted to a journal and peer reviewed. Instead, he released it without peer review.”

    I was under the impression that posting preprints on arxiv prior to (or simultaneous with) submission for peer-reviewed publication was the norm in physics, so this seems like an odd criticism.

    • In astronomy, it’s probably a bit more like 1/3 posted simultaneous with submission, 2/3 posted upon acceptance. (There’s a small number posted without any indication as to whether they’ve been submitted or accepted anywhere; these tend to be from theoretical physicists doing cosmology, or else borderline crackpottery.)

      • Thanks for clarifying. Is that about the same across physics, would you say, or do physicists in other sub-disciplines tend to post pre-prints earlier (or later) in the process? I’m not a physicist of any flavor, but it’s always interesting to hear how publication norms vary across fields.

        • My impression is that it varies a lot. About ten years ago, I went through a month’s worth of postings in different sub-disciplines on the arXiv, and the differences were sometimes rather dramatic, although at the time what I was looking at was whether there was any reference to a journal (either submitted or accepted) at all. So astro-ph (astronomy/astrophysics) was about 67% journal articles, 19% conference proceedings, and 15% unspecified. On the other hand, hep-th (theoretical high-energy physics) was 9% journal articles, 7% conference proceedings, and 84% unspecified. (And at the time I was reading certain theoretical physicists claiming that “nobody bothers with journals anymore”…)

          Of course, some of the difference might have been a matter of whether or not people thought it was important to mention whether a preprint was something that they’d submitted somewhere or not. Clearly, astronomers thought so (and still do).

    • Yes, things like ‘But Wright archly noted that Myhrvold once worked at Microsoft, so “is responsible in part for a lot of bad software.”’

      and ‘We have strongly encouraged that the paper be submitted to a journal and peer reviewed. Instead, he released it without peer review’

      and ‘For every mistake I found in his paper, if I got a bounty, I would be rich’

      which are all content-free as far as I’m concerned. He may well be wrong, but he’s clearly touched a nerve and now it’s Never Back Down all the way.

      • I thought the Microsoft remark was quite funny. With a bit more context it was:

        ‘Wright says his team doesn’t have Myhrvold’s computer codes, “so we don’t know why he’s screwing up.” But Wright archly noted that Myhrvold once worked at Microsoft, so “is responsible in part for a lot of bad software.”’

  2. At this stage of the discussion (before independent authorities have had a chance to evaluate the claims and counter-claims), those of us who aren’t experts have to weigh the credibility of whoever is making the arguments. As far as I can tell, every time Myhrvold has figured prominently in a discussion, it has ended badly.

    He has shamelessly misrepresented the business model of Intellectual Ventures (which basically consists of lurking under bridges and waiting for billy goats and tech start-ups).

    His theories on climate change are filled with dangerous errors.
    http://thinkprogress.org/climate/2009/10/14/204805/superfreakonomics-errors-nathan-myhrvold-intellectual-ventures-bill-gates-warren-buffet/

    Even his Modernist Cuisine claims (which are generally accepted at face value) don’t stand up to independent tests.
    http://www.wine-searcher.com/m/2013/02/myhrvolds-theory-blender-wine-is-best

    Obviously, his criticisms should be looked at, but with Myhrvold, it is always a good idea to wait for independent confirmation.

    • Mark:

      Yes, especially when albedo is involved, I’m suspicious of anything Myhrvold says. In any case, I’m sure the astronomers will work this one out. If Myhrvold’s criticisms have value, great. If not, he probably won’t have wasted too much of their time.

  3. I for one do not mind having the paper on the arxiv. This made it easy to find errors such as mistaking diameters for radii in the equations that gave predicted fluxes, and giving a “solar flux” of a few nanoJanskies. There is a new version on the arxiv with these errors corrected but no change to the figures, so either the paper was not a true description of the calculation, or there are still errors to correct.
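
    As an aside, for anyone checking this sort of thing themselves: the diameter-for-radius slip is easy to catch numerically, since the predicted thermal flux scales with the projected area. A minimal sketch with made-up numbers (not the actual NEOWISE or Myhrvold calculation):

    import math

    D = 1.0e3                              # hypothetical asteroid diameter: 1 km, in metres
    area_correct = math.pi * (D / 2.0)**2  # projected area should use the radius
    area_mixed_up = math.pi * D**2         # same formula with the diameter slipped in
    print(area_mixed_up / area_correct)    # -> 4.0, so predicted fluxes come out 4x too high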

    I do have a problem with hiring a PR firm to issue a press blast before getting an independent assessment of the paper. Apparently Myhrvold does not feel the need for outside advice, but this paper clearly needed a careful scientific proofreading, which would have saved a bunch of trees. However, I am not sure that an anonymous referee would be better than the arxiv readership for this review.

    But overall, this is a nothingburger of a paper, even at 110 pages. He objects to the NEATM because it doesn’t satisfy physical laws, even though the NEATM gives good diameters when used by people who want it to work. He then changes the NEATM, and his modified NEATM does not work as well as the original. I would say he has to get improved results before anyone should pay attention.
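
    For readers who haven’t met the NEATM (Harris 1998): it treats the dayside as being in instantaneous equilibrium, with temperature falling off as cos^(1/4) of the angle from the subsolar point, T = 0 on the nightside, and a fitted “beaming parameter” eta soaking up what the idealization misses. A minimal sketch of the implied temperatures, with purely illustrative inputs (not tied to any real asteroid):

    import math

    SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
    S0 = 1361.0              # solar constant at 1 AU, W m^-2

    def neatm_subsolar_temp(bond_albedo, r_au, eta, emissivity=0.9):
        """Subsolar temperature: T_ss = [(1 - A) * S / (eta * eps * sigma)]^(1/4)."""
        s = S0 / r_au**2
        return ((1.0 - bond_albedo) * s / (eta * emissivity * SIGMA)) ** 0.25

    def neatm_temp(bond_albedo, r_au, eta, angle_from_subsolar_rad, emissivity=0.9):
        """Dayside temperature falls off as cos^(1/4); the nightside is pinned to zero."""
        mu = math.cos(angle_from_subsolar_rad)
        if mu <= 0.0:
            return 0.0   # the T = 0 nightside that the NESTM comment below is about
        return neatm_subsolar_temp(bond_albedo, r_au, eta, emissivity) * mu**0.25

    # Illustrative values only: Bond albedo 0.1, heliocentric distance 1.5 AU, eta = 1.2
    print(neatm_subsolar_temp(0.1, 1.5, 1.2))            # roughly 300 K for these inputs
    print(neatm_temp(0.1, 1.5, 1.2, math.radians(60)))   # cooler toward the terminator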

    • I think it’s fair to point out that a model violates physical laws, and to say that we should base our models on ones that satisfy physical laws. If that gives ‘worse’ results, then there’s some additional physics to consider and we make progress; if we just stick with some fit-to-data result that doesn’t satisfy physical laws, we may get “good” diameters, but we may also be fooling ourselves. As I said, we haven’t exactly sent out probes with EDMs to get authoritative, down-to-the-millimeter results, so perhaps the ‘goodness’ of the other results is an artifact of matching some other biased procedure or whatever. (I know nothing about the specifics of asteroids.)

      All that said, yes, hiring a PR firm to send a press release is fairly obnoxious. Does he care about the science, or does he care about recognition and notoriety?

      These guys had something to say about the whole thing back in 2008 and it seems like they have similar complaints (important physical effects ignored) but they obviously were not Jonesing for a PR fix of mega-proportions.

      https://www-n.oca.eu/thermops/abstract/wolters.pdf

      • The Wolters model (NESTM) is trying to address a problem with the NEATM where it predicts T=0 on the nightside. But you need to know the thermal inertia and the orientation of the rotation pole. At that point you might as well go to a thermophysical model.

        Muller et al. (2014, http://pasj.oxfordjournals.org/content/early/2014/06/19/pasj.psu034.abstract) showed that the full thermophysical model can be more accurate than radar data. Very impressive work, and it satisfies the laws of physics. But one generally does not have enough data to fit the full thermophysical model.
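
        For context on why the thermal inertia and spin state matter so much: the usual way to summarize them is the dimensionless thermal parameter of Spencer et al. (1989), Theta = Gamma * sqrt(omega) / (eps * sigma * T_ss^3); small Theta means the surface is close to instantaneous equilibrium (NEATM-like), large Theta means the nightside stays warm. A minimal sketch with illustrative numbers, not taken from any real fit:

        import math

        SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

        def thermal_parameter(thermal_inertia, rotation_period_s, t_subsolar, emissivity=0.9):
            """Spencer et al. (1989): Theta = Gamma * sqrt(omega) / (eps * sigma * T_ss^3)."""
            omega = 2.0 * math.pi / rotation_period_s   # spin angular frequency, rad/s
            return thermal_inertia * math.sqrt(omega) / (emissivity * SIGMA * t_subsolar**3)

        # Illustrative: Gamma = 200 J m^-2 K^-1 s^-1/2, 6-hour rotation, T_ss = 300 K
        print(thermal_parameter(200.0, 6 * 3600.0, 300.0))   # ~2.5 for these inputs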

        • You should be able to do a Bayesian fit on a full thermophysical model regardless of how much data you have; of course, your uncertainty intervals will be extremely large when you have more limited data, and that’s maybe the point!

          Also, of course, doing this could be hugely computationally challenging, and the computational challenge may really not pay off. I mean, how much do we really care about this particular question? Enough to use $10k worth of computing, probably, but probably not enough to use $50M (better to build a new telescope, for example). So there are real-world tradeoffs between complexity of models and the accuracy needed vs. resources required.
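
          A toy version of that point, with a deliberately dumb stand-in model (flux just proportional to D^2, flat prior on a diameter grid) rather than anything thermophysical; all it’s meant to show is the credible interval shrinking as observations accumulate:

          import numpy as np

          rng = np.random.default_rng(0)

          def predicted_flux(diameter_km):
              # Stand-in model: flux proportional to projected area (arbitrary units).
              return 0.05 * diameter_km**2

          true_diameter = 10.0
          noise_sigma = 0.5

          def credible_interval_width(n_obs):
              obs = predicted_flux(true_diameter) + rng.normal(0.0, noise_sigma, size=n_obs)
              d_grid = np.linspace(1.0, 30.0, 2000)   # flat prior over this grid
              loglike = np.array([-0.5 * np.sum((obs - predicted_flux(d))**2) / noise_sigma**2
                                  for d in d_grid])
              post = np.exp(loglike - loglike.max())
              post /= post.sum()
              cdf = np.cumsum(post)
              lo, hi = np.interp([0.05, 0.95], cdf, d_grid)   # 90% credible interval
              return hi - lo

          for n in (1, 3, 30):
              print(n, credible_interval_width(n))   # interval narrows as n grows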

    • >”He objects to the NEATM because it doesn’t satisfy physical laws”

      I just read about it here the other day and then skimmed the article below, so I am not at all up to date on this issue:
      https://medium.com/@nathanmyhrvold/a-simple-guide-to-neowise-data-problems-a93f41e3bdb4#.6g7l7lak6

      However, he seems to be objecting to calibrating/training a model on the same data used to validate it. If that is what has been going on, the accuracy of this model is surely being overstated, possibly to the point that the uncertainties are useless.
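
      A generic illustration of why that’s a problem (nothing to do with the NEOWISE pipeline specifically): a flexible model scored on the data it was calibrated to will always look more accurate than it does on data it hasn’t seen.

      import numpy as np

      rng = np.random.default_rng(1)
      x = np.linspace(0.0, 1.0, 40)
      y = 2.0 * x + rng.normal(0.0, 0.3, x.size)   # truth is linear plus noise

      train_x, train_y = x[::2], y[::2]            # calibrate on half the data
      test_x, test_y = x[1::2], y[1::2]            # hold out the other half

      coeffs = np.polyfit(train_x, train_y, 9)     # deliberately over-flexible model

      def rmse(xs, ys):
          return np.sqrt(np.mean((np.polyval(coeffs, xs) - ys)**2))

      print("in-sample RMSE:", rmse(train_x, train_y))   # flattering
      print("held-out RMSE:", rmse(test_x, test_y))      # the honest number is larger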
