In his 1938 review of *Historical Development of the Graphical Representation of Statistical Data*, by H. Gray Funkhouser, for *The Economic Journal*, the great economist writes:

Perhaps the most striking outcome of Mr. Funkhouser’s researches is the fact of the very slow progress which graphical methods made until quite recently. . . . In the first fifty volumes of the Statistical Journal, 1837-87, only fourteen graphs are printed altogether. It is surprising to be told that Laplace never drew a graph of the normal law of error . . . Edgeworth made no use of statistical charts as distinct from mathematical diagrams.

Apart from Quetelet and Jevons, the most important influences were probably those of Galton and of Mulhall’s Dictionary, first published in 1884. Galton was indeed following his father and grandfather in this field, but his pioneer work was mainly restricted to meteorological maps, and he did not contribute to the development of the graphical representation of economic statistics.

So far so good. But then comes the kicker:

Mr. Funkhouser has made an extremely interesting and valuable contribution to the history of statistical method. I wish, however, that he could have added a warning, supported by horrid examples, of the evils of the graphical method unsupported by tables of figures. Both for accurate understanding, and particularly to facilitate the use of the same material by other people, it is essential that graphs should not be published by themselves, but only when supported by the tables which lead up to them. It would be an exceedingly good rule to forbid in any scientific periodical the publication of graphs unsupported by tables.

I’m ok with that—if they also forbid the publication of all tables unsupported by graphs. Also if they allow graphs by themselves. Then I’m totally on board.

Just to give Keynes the benefit of the doubt, he says a big reason for tables is “to facilitate the use of the same material by other people”, which sounds a lot like “publish your data” to me. Back then it was difficult to access someone else’s data. So by putting it in tables you immediately made it available to readers.

exactly: table of data in 1930 = replication file on website in 2013

Agree with kerokan especially given how rough graphs were in 1930.

(Today one can often extract data accurately from PDF and PS files – this is how the PSA screening study disaster was uncovered: http://andrewgelman.com/2010/04/15/when_engineers/)

I believe Keynes had a deep enough understanding of statistics, and of its severe limitations in the absence of randomization, to choose to switch from stats to economics.

Even though one could extract data from PDFs, I think it is bad form to force your readers to take that path!


Do I really have to explain to you the technology used in graphs in the 1930s?

Have you ever actually made a graph with, say, rapidographs and a pantograph?

You have, literally, no idea what you are talking about. People in the late 1930s were using technology we have no way of grasping; it is like the urban legend that no civilized person can actually start a fire by rubbing two sticks together, because no civilized person is patient or hungry enough.

For instance, Tukey, in one of his books, warns against graphs whose y axis sits at the 0 point of the x axis, as data points with x = 0 are hard to see.

If you stop and *think* about that, you will see that people in that era were beset by difficulties of which we have no ken, like children dying regularly from whooping cough or measles. That just doesn't happen today.

Ezra:

I used to make all my graphs by hand.

Andrew, perhaps Ezra can’t believe how old we are. I will be 50 in a few years! So I still remember the days when researchers had books of different kinds of graph paper — 3-, 5-, or 7-decade log-linear or log-log, q-normal, etc. In 1979 or 1980, when I was in high school, I did an internship at NASA and part of my job was to make plots by hand. There was a computer terminal connected to the mainframe, plus a line printer, a couple of offices down, so it was possible to make ASCII plots — just text lines with X printed at the right places — but for larger datasets, or log-log plots, or many other purposes, doing it by hand was the only way to go. That really was the end of the era, though: already there were personal computers (although I don’t recall seeing any at NASA in 1980), and a year or two later the IBM PC came out and it and its brethren were widely adopted.
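For readers who never saw one, a line-printer plot of that sort — text lines with an X printed at the right places — can be reconstructed in a few lines of modern code. This is a hypothetical Python sketch, obviously not the mainframe original:

```python
# A rough modern reconstruction of a line-printer ASCII plot:
# each point (x, y) becomes an "X" in a grid of text characters.

def ascii_plot(xs, ys, width=40, height=10):
    """Render (x, y) points as rows of text, top row = largest y."""
    xmin, xmax = min(xs), max(xs)
    ymin, ymax = min(ys), max(ys)
    grid = [[" "] * width for _ in range(height)]
    for x, y in zip(xs, ys):
        col = round((x - xmin) / (xmax - xmin) * (width - 1))
        row = round((ymax - y) / (ymax - ymin) * (height - 1))
        grid[row][col] = "X"
    return "\n".join("".join(r) for r in grid)

xs = list(range(20))
ys = [x * x for x in xs]
print(ascii_plot(xs, ys))
```

Crude, but for a quick look at a dataset it beat an hour with graph paper.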

Now evidently people already think that making graphs by hand was something from way back in the pre-WWII era, rather than something we were still doing when E.T. was in theaters and Joan Jett ruled the radio. How quickly we forget.

I have to admit, though, I’ve never used a rapidograph or a pantograph (whatever they are).

A rapidograph is a fancy pen, still in use today, primarily by artists, I assume. A pantograph is a mechanical linkage: when you trace something with a pen attached to one part of the device, a pen attached to another part moves and traces out the same curve. I guess you can still get these too.

You can still get both of these devices, yet I can’t buy a brand-new slide rule for retro hipster effect.

When I was looking to buy a car in 1979, my father drew a scatterplot by hand of MPG versus 0-to-60 times. It quickly revealed something we hadn’t really noticed before: front-wheel-drive cars had a systematic advantage in this tradeoff.

Keynes was some kind of distant cousin-in-law of Galton via Darwin, so he’s a supersmart guy plugged into the most sophisticated human sciences milieu in the world, and yet this example reminds us that the tools of statistical thinking weren’t very sophisticated only seven years before the atom bomb.

A general question inspired by this example is: Am I right that the development of statistics lagged a century or two behind other mathematics-related fields? If so, why?

For example, compare what Newton accomplished in the later 17th Century to what Galton accomplished in the later 19th Century. From my perspective, what Newton did seems harder than what Galton did, yet it took the world a couple of centuries longer to come up with concepts like regression to the mean. Presumably, part of the difference was that Newton was a genius among geniuses and personally accelerated the history of science by some number of decades. Still, I’m rather stumped by why the questions that Galton found intriguing didn’t come up earlier. Is there something about human nature that makes statistical reasoning unappealing to most smart people, such that they’d rather reason about the solar system than about society?

Steve:

You ask why the laws of mechanics were discovered centuries before the laws of statistics. I think one reason is that mechanics helped solve or understand some important applied problems, such as the design of buildings, ships, cannons, etc. In contrast I don’t see much of a direct motivation at that time for people to understand randomness and variation. The key application areas were . . . what, exactly? Gambling, demography, maybe a bit of epidemiology. Nothing so urgent. In theory, probability could’ve been applied to public finance (for example, predicting next year’s tax revenue) and maybe it was, but that’s still not much compared to the many many immediate applications of physics.

Newton’s Principia is not a practical work. I don’t think any even remotely practical problems are solved in it (at least not successfully), and there don’t seem to be any examples of practical/engineering types ever picking up the work and using it for anything. There weren’t many people at all who really read it. It’s hard to imagine anyone being able to get something practical out of it. Even its structure, which was based on Euclid’s Elements, was theoretical axiom/theorem-type stuff.

The differential equations we associate with Newton were actually published by Euler roughly a century later, and those were probably required before the subject could be reduced to the point where engineers could even think about using it. The subject of “classical mechanics” and the practical subject of engineering mechanics were two different areas that had remarkably little connection with each other, especially early on.

Calculating the odds in gambling, however, which surely predates Newton, would have been rubber-meets-the-road practical, and is pretty much the extreme opposite of Newton’s work in terms of practicality.

One factor might be that statistics had to wait for developments in linear algebra which occurred in the 19th century.

Not really. For example, matrix algebra didn’t become standard in statistics until well into the 20th century. Also, knowing how to solve simultaneous linear equations goes back much earlier (including Chinese roots).

Good points. It does seem like the great mathematicians before the 19th century would have had enough tools to tackle statistics if they’d wanted to. So this would shift the explanation back to conceptual problems rather than mathematical problems.

The essence of statistics is that if my partial knowledge implies that for every way in which “A” can occur there are lots of ways in which “B” can occur, then our best guess is “B”. Euler seems not to have seen this point in his astronomical investigations: he didn’t seem to realize that there are far more ways for measurement errors to partially cancel than for them to add, and he was famously unable to reconcile measurements which were incompatible due to measurement error. If Euler was having problems with this, it suggests, to me at least, that it’s conceptually difficult for people in general.
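The combinatorial point — many more ways to cancel than to add — can be checked directly by brute force. A small illustrative sketch (mine, not from the original comment), using symmetric ±1 errors as the simplest case:

```python
# Count the ways n symmetric ±1 measurement errors can combine.
# Far more of the 2**n equally likely outcomes sum to (near) zero
# than to the extremes where every error pushes the same way.
from itertools import product

n = 10
counts = {}
for errors in product([-1, +1], repeat=n):
    s = sum(errors)
    counts[s] = counts.get(s, 0) + 1

print(counts[0], "ways to cancel exactly;", counts[n], "way for all errors to add")
# → 252 ways to cancel exactly; 1 way for all errors to add
```

With ten ±1 errors there are 252 ways (C(10, 5)) to cancel exactly and only one way to get +10 — exactly the “best guess is B” logic above.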

Andrew:

Actually, it was extremely important to predict such things as a full moon so that you could plan a nighttime attack; i.e., astronomy (what Laplace, Gauss, Legendre, and others were working on).

Anders Hald made this all accessible to anyone who will read his book.

As Stephen Stigler has recently argued, ideas like “regression to the mean” contradict mathematical reasoning as set out by Euclid – http://biomet.oxfordjournals.org/content/99/1/1.abstract

I really wonder about Stigler’s point. A lot of the problems solved with statistics are over- or under-determined from a mathematical point of view, so they’d require new extra-mathematical principles to solve. Maybe that’s what slowed progress down?

I’d say, agriculture and medicine — just like Fisher worked on.

Darwin was inspired by advances in animal breeding by scientific farmers. Galton was inspired by Darwin. Fisher was inspired by Galton. Statistics turned out to be, unsurprisingly, very useful in improving farm output. The world could have used Fisher-level statistics much earlier.

I think Steve is essentially right to wonder why calculus and Newtonian mechanics came earlier than the main statistical breakthroughs. Perhaps it is mostly that thinking about uncertainty is difficult!

A qualification is that “regression to the mean” is a sidebar to regression in general, although one still widely misunderstood. Regression in the modern sense did not emerge all at once, but fitting linear models to data, least squares, etc. were all evident long before Galton in the 18th and earlier 19th century in the work of Boscovich, Laplace, Gauss, Legendre, etc. Yet it took several pushes from Galton, Pearson, Yule, Edgeworth, etc. before something recognisably like modern linear regression had emerged by the earlier 20th century.

Some attribute the scatter plot to John Herschel, although as always one can argue about definitions (e.g. any map of places is a scatter plot of a kind). But that attribution would make the scatter plot less than two centuries old.

That’s an interesting fact about scatter plots. I believe the first x vs. t plots in mechanics were done in the 1200s at the University of Oxford. (The solution for uniformly accelerated motion, usually associated with Galileo, was already known in the late Middle Ages.) If true, that means simple plots in physics predate scatter plots by ~500 years or so. Most people would consider the former more difficult than the latter, I think.

It all comes down to beer, as Gosset / Student will tell you.

In order to survive in an environment without high-quality drinking water, you need some way to sanitize your water. This method was beer. In order to brew beer, you need to grow grain, and in order to grow grain, you need to have some idea of when to plant it and when to harvest it. This requires that we be able to predict reliably when the last frost of the winter season will be, and the most reliable method turned out not to be haruspicy, the initial method, but astronomy. And so for millennia prior to Newton, detailed records of the motion of the planets and stars were kept by cultures throughout the world. This gave Newton a huge and reliable dataset from which people had already deduced several important laws (see Kepler). Plus he was a genius.

Gauss created one of the main statistical tools used in practice, the method of least squares, particularly to deal with errors in relatively precise astronomical records, if I’m not mistaken.
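As a reminder of how simple the core of the method is, here is a minimal least-squares line fit in closed form. An illustrative sketch only — Gauss of course fitted orbital models to astronomical observations, not straight lines to toy data:

```python
# Ordinary least squares for a straight line y = a + b*x:
# choose a and b to minimize the sum of squared residuals.

def least_squares_line(xs, ys):
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = ybar - b * xbar
    return a, b

# With noise-free observations on y = 2 + 3x, the line is recovered exactly.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 + 3.0 * x for x in xs]
a, b = least_squares_line(xs, ys)
print(a, b)  # → 2.0 3.0
```

The point of the method, for Gauss’s purposes, is precisely the error-cancellation idea discussed above: averaging over many imperfect observations beats trusting any one of them.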

Hmmm… difficult to know whether this story about beer, grain, frost, and so forth is serious; it is certainly suppositious.

Predicting times of frost depended largely on cultural memory before modern meteorology and climatology developed. Also, knowing about frost doesn’t presuppose temperature measurement. I don’t see that knowing the mechanics of planetary motion helped one bit, even in principle.

Also, in 17th century lowland Britain and similar climates — the context for Newton and his kind — the summer harvest was usually long before the first frost of autumn [fall]. The whole point was to gather in the grain before you most needed it.

(Gauss did indeed work on astronomical applications.)


It was definitely tongue-in-cheek, at least a little. The main point, though, is the calendar. It wasn’t until the Gregorian calendar in the late 1500s that the calendar had the right number of days in it. Figuring out what day of the year it was wasn’t a trivial matter, and hence it was hard to know whether there were three more weeks or five more weeks of winter, for example. Astronomy was definitely important for agriculture; hence the fact that everyone from the Babylonians to the Mayans to the Incas to the Druids had some kind of astronomical observations.

Also, navigation relied on stars. There were a lot of parallel astronomical developments around the globe.

Okay, looking it up: technically the Gregorian calendar just made modifications to the leap year, to get about 7 significant digits of accuracy instead of 5. But whatever; the point is that after hundreds of years things were getting out of alignment and people didn’t like it. Calendars are important to people, and astronomy gave them calendars.
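The two leap-year rules, and the three-days-per-400-years correction, can be checked directly. A small illustrative sketch:

```python
# Julian rule: every 4th year is a leap year.
def is_leap_julian(year):
    return year % 4 == 0

# Gregorian refinement: century years are leap years only if
# divisible by 400 (so 1900 is not a leap year, but 2000 is).
def is_leap_gregorian(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Over a full 400-year cycle the Gregorian rule drops 3 leap days,
# giving a mean year of 365.2425 days versus the Julian 365.25 —
# much closer to the tropical year of roughly 365.2422 days.
julian = sum(is_leap_julian(y) for y in range(1, 401))
gregorian = sum(is_leap_gregorian(y) for y in range(1, 401))
print(julian, gregorian)  # → 100 97
```

That ~0.0078-day annual Julian error is what had drifted the calendar about ten days out of alignment with the equinoxes by the 1580s.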

For frost, substitute flood? See Egypt.

I think all blogs should start out with “In which I…” so the internet slowly turns into 19th century literature. “In which Phileas Fogg conducts a regression analysis and is astounded…”

As long as we’re speculating…. Physics and mechanics were studies of how God made things work. Randomness is incompatible with God’s will. Newton, Leibniz, even Laplace (“I have no need for that hypothesis”) were quite religious men trying to uncover the fundamental rules of the universe. Statistics is sort of an anti-rules discipline, since randomness (pre-quantum physics, say before 1930) is not part of the structure of the universe; it’s part of our fallible powers of observation and measurement. Gauss realized this and developed least-squares techniques, but unless you think “God plays dice,” the theory of statistics was always going to lag physics and mechanics. And if you can work on problems in which the signal-to-noise ratio is high, that’s where you start. Statistics takes over as the signal-to-noise ratio shrinks.

“Randomness is incompatible with God’s will.”

Thanks.

I have heard arguments that the Ancient Greeks’ concept of fate precluded ideas about randomness…

Yeah, I wouldn’t be surprised. In the Muslim world (or at least the part I experienced) there was a very noticeable bias against protecting oneself from random events, and this clearly had a religious origin. It had a big impact on both the insurance and finance industries, since blatant insurance/hedging operations tended to be frowned upon. It’s not hard to imagine that some cultures (or subcultures such as mathematics) may have had quite strong cultural views which slowed the progress of statistics.

There’s some history of the perceived conflict between chance and reason in Gigerenzer et al’s “The Empire of Chance”. The book begins with this issue, if I remember right (don’t have my copy handy).

Quick version of the argument: The Greeks and Romans had the dueling personifications of Athena/Minerva (Wisdom) and Tyche/Fortuna (Chance). Minerva was the patron of reason and science, while Fortuna was capricious and even malicious. Getting scholars to see Fortune as a route to Wisdom was perhaps very hard, due to these personifications.

This theme is echoed in the first chapter of Rao’s (of Cramér-Rao) old book “Statistics and Truth”. He discusses the amazing advent of books of random numbers. It’s not a deep analysis, but he quotes a bunch of scholars from antiquity.

Thanks. Fascinating.

I often feel like there is some obstacle between how I view how the world works and how 95+% of the world assumes how it is supposed to work. Athena v. Fortuna sounds like a good embodiment of this divide.

As Ignatius J. Reilly says in A Confederacy of Dunces:

“Oh Fortuna, blind, heedless goddess, I am strapped to your wheel. Do not crush me beneath your spokes. Raise me on high, divinity.”

Dubious. Consider Epicureanism, a secular movement espoused by a large fraction of Greco-Roman elites; one of the components of its worldview was the ‘swerve’, which introduces randomness into the deterministic Democritean atom-universe.

Okay, but did they think swerve was something to be studied or to be endured?

Random’s just another word for nothing left to predict.

How about: Random’s just another way of saying “I can’t learn anything more from this data”? There’s always stuff left to predict.

Your formulation may be more accurate, but Andrew was (presumably) punning on the lyrics of “Me and Bobby McGee”: “Freedom’s just another word for nothing left to lose” (maybe this was already obvious to you — if so, sorry). Is there a better single word than “predict” to substitute?

[…] In which I disagree with John Maynard Keynes « Statistical Modeling, Causal Inference, and Soc… March 23, 2013 […]