“Each computer run would last 1,000-2,000 hours, and, because we didn’t really trust a program that ran so long, we ran it twice, and it verified that the results matched. I’m not sure I ever was present when a run finished.”

Bill Harris writes:

Skimming Michael Betancourt’s history of MCMC [discussed yesterday in this space] made me think: my first computer job was as a nighttime computer operator on the old Rice (R1) Computer, where I was one of several students who ran Monte Carlo programs written by (the very good) chemistry prof Dr. Zevi Salsburg and his grad students.  As I recall, each computer run would last 1,000-2,000 hours, and, because we didn’t really trust a program that ran so long, we ran it twice, and it verified that the results matched.  I’m not sure I ever was present when a run finished.

I did a quick search and turned up “Monte Carlo Procedure for Statistical Mechanical Calculations in a Grand Canonical Ensemble of Lattice Systems,” which has an abstract that ends, “A comparison with the exact analytical results (B = ∞, Δ = 0) indicates that the accuracy of the Monte Carlo procedure for the grand ensemble can be reliably estimated by a statistical analysis of partial averages over the Markov chain.” That sounds a bit like MCMC!  If so, what’s up with worries about a few days of HMC sampling?
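That “statistical analysis of partial averages over the Markov chain” is essentially the batch-means standard error still used for MCMC output today. Here is a minimal sketch of the idea in Python (a reconstruction of the general technique, not Salsburg’s actual code; the AR(1) series is just a stand-in for correlated draws):

```python
import numpy as np

def batch_means_se(chain, n_batches=20):
    """Monte Carlo standard error of the chain's mean, estimated from
    the spread of partial (batch) averages over the Markov chain."""
    n = len(chain) // n_batches                 # draws per batch
    partial = chain[:n * n_batches].reshape(n_batches, n).mean(axis=1)
    return partial.std(ddof=1) / np.sqrt(n_batches)

# Toy chain: an autocorrelated AR(1) series standing in for MCMC draws
rng = np.random.default_rng(1)
x = np.empty(100_000)
x[0] = 0.0
for t in range(1, x.size):
    x[t] = 0.9 * x[t - 1] + rng.normal()
print(f"mean = {x.mean():.3f} +/- {batch_means_se(x):.3f}")
```

The point of batching is that the batch averages are nearly independent even when successive draws are not, so their spread gives an honest error bar.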

Here are a few pictures of the Rice Computer, along with the USAEC Bessel Function Generator.  Wikipedia has more, as does Google.

Thinking a bit more, I was told we were running it twice ’cause the hardware might make an error (or so I recall), but perhaps we were simply running two chains on a room-sized single processor with 32K words.
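In modern terms, that two-run check is a poor man’s multiple-chain diagnostic: run independent chains and ask whether their answers agree to within Monte Carlo error. A minimal sketch of such a check (again an illustration, not anything that ran on the R1; it assumes the two runs are stored as numpy arrays of draws):

```python
import numpy as np

def two_run_check(chain_a, chain_b, n_batches=20, tol_sds=3.0):
    """Crude two-run verification: do the means of two independent runs
    agree to within their combined batch-means standard errors?"""
    def se(chain):
        n = len(chain) // n_batches
        partial = chain[:n * n_batches].reshape(n_batches, n).mean(axis=1)
        return partial.std(ddof=1) / np.sqrt(n_batches)
    diff = abs(chain_a.mean() - chain_b.mean())
    return diff < tol_sds * np.hypot(se(chain_a), se(chain_b))

# e.g. two_run_check(np.asarray(run1_draws), np.asarray(run2_draws))
```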

If you want more on Salsburg or on the R1, just ask.

Except that I obviously couldn’t remember how to spell his name right in 2011, here’s a short anecdote about him: http://makingsense.facilitatedsystems.com/2011/12/thinking-for-yourself.html.  https://ricehistorycorner.com/2010/11/18/zevi-salsburg/ has a bit more about him (and, looking at the picture, he’s third from the left, not the right).  “Limiting Polytope Geometry for Rigid Rods, Disks, and Spheres” appears to describe some of his research, although it’s too late for me to even pretend to skim it and make much sense of it tonight (the paper behind the abstract I sent previously appears to be paywalled).

https://ricehistorycorner.com/2012/01/31/new-info-on-the-rice-computers/ has a bit more on the Rice Computer and Salsburg, and https://archive.li/opI1Y is perhaps the definitive online documentation about the machine.  It indicates that some or much of Salsburg’s work on the Rice Computer was apparently done on the bare machine, which means no Genie programming language; I don’t know whether that meant no assembler, either.  The computer’s ability to do dynamic memory allocation using tagged memory and codewords is the reason I always heard Salsburg wanted this machine; the IBM machines of the time ran out of memory and apparently couldn’t reclaim unused memory.

And it’s still my favorite computer!  Real superscripts and subscripts, thanks to the Friden Flexowriter and the Genie language, and flashing neon lamps everywhere, which made an impressive sight at night, especially if you turned the room lights off.

Oooh, I love this sort of thing. I guess that’s a sign that I’m getting old. 2018 is in the future, after all.

P.S. After doing some more digging, Harris adds:

I found reference 69 in chapter 4 of Heermann’s Computer Simulation Methods in Theoretical Physics (printed page 83), which refers to some of Salsburg’s research.  Maybe that makes it clearer whether he was doing what you’d call MCMC today.  (I’m not sure that book should be online, but it is.)  He does have works listed in a list of LASL research.

Computer-Simulationen zu Strukturen und Phasenumwandlungen in Modell-Kolloiden (“Computer simulations of structures and phase transitions in model colloids”; the text itself is in English) mentions his research in several places.

I also found a brief obituary at the bottom of the second page of http://physicstoday.scitation.org/doi/pdf/10.1063/1.3021804.  It appears that he was active in statistical mechanics and related fields, but I haven’t found anything I recognize as MCMC integration.  The best I’ve seen is material possibly related to the non-statistical work Michael described.

If you see a connection, great.  Otherwise, perhaps it’s a false alarm.  I may ask Melissa Kean if she’s got contacts at Rice who would know.

Ooh—bingo!?!  Scroll down a bit on http://ethw.org/Oral-History:Martin_Graham, and you’ll find Metropolis and Salsburg mentioned in the same paragraph.  The Rice Computer was a descendant of the MANIAC.  At any rate, it sounds as if Salsburg was working for Metropolis at the time (at least during the summers).  https://mobile-hi-mobiles.blogspot.com/2009/04/pressures-and-goals.html makes it clear that the R1 was not the MANIAC II.

6 thoughts on “Each computer run would last 1,000-2,000 hours, and, because we didn’t really trust a program that ran so long, we ran it twice, and it verified that the results matched. I’m not sure I ever was present when a run finished.”

  1. There is no doubt that he was doing MCMC. In the context of statistical mechanics simulations, “Monte Carlo” has designated since the 1950s what other fields later came to call Markov chain Monte Carlo. (For a concrete illustration of the scheme, see the sketch after these comments.)

    For example, in the book “Monte Carlo Simulations in Statistical Physics” by Landau and Binder, the names ‘MCMC’ and ‘Markov chain Monte Carlo’ appear only in a chapter titled “Monte Carlo simulations at the periphery of physics and beyond,” in reference to Hastings’ application of these sampling methods to numerical problems in mathematics and statistics.

  2. Interesting. I honestly thought MCMC had been invented by bioinformaticians such as Hobolth, based in Denmark. I’m glad in a way it wasn’t. It bloody works, though. Lots of complex software for analysing DNA is in use now, but sometimes you still want a little accuracy. And the average results MCMC gives for split dates between the human and chimp lineages are staggeringly good – or were, before the results were misinterpreted.

  3. Well, this completes the tie between Salsburg’s work and that of Metropolis, I think. See “‘Need for a High Speed Machine’: The Genesis of the R1, 1955” (https://ricehistorycorner.com/2018/08/09/need-for-a-high-speed-machine-the-genesis-of-the-r1-1955/), especially the section “Need for a High Speed Machine” at the bottom of p. 2 and “A Possible Proposal” on p. 3: the R1 was planned to be built (and presumably was so built) using plans and specifications Metropolis had prepared for the Los Alamos machine (the MANIAC, I presume). Note, too, Salsburg’s statement that he worked on a problem using a Los Alamos computer that took ca. 800 hours, which he estimated would take 40-100 times as long on a “medium speed machine” (IBM 650 class?).
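To make the first commenter’s point concrete, here is a minimal sketch of the classic Metropolis scheme for a 2D Ising lattice (a toy illustration, not Salsburg’s program): the sequence of spin configurations is exactly a Markov chain, and time averages over it estimate ensemble averages, which is all “MCMC” means.

```python
import numpy as np

def metropolis_ising(L=16, beta=0.4, n_sweeps=500, seed=0):
    """Classic Metropolis Monte Carlo for a 2D Ising lattice with
    periodic boundaries: propose a single spin flip, accept with
    probability min(1, exp(-beta * dE)).  The resulting sequence of
    configurations is a Markov chain -- i.e., MCMC."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(L, L))
    mags = []
    for _ in range(n_sweeps):
        for _ in range(L * L):  # one sweep = L*L attempted flips
            i, j = rng.integers(L, size=2)
            nbrs = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                    + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2.0 * spins[i, j] * nbrs  # energy change if we flip (i, j)
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] = -spins[i, j]
        mags.append(spins.mean())  # record magnetization once per sweep
    return np.array(mags)

# Averages over the chain estimate ensemble averages; batching them
# gives the "partial averages" error estimate from the abstract above.
chain = metropolis_ising()
print(f"mean |magnetization| = {np.abs(chain).mean():.3f}")
```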
