Cool-ass signal processing using Gaussian processes (birthdays again)

Aki writes:

Here’s my version of the birthday frequency graph. I used a Gaussian process with two slowly varying components and a periodic component with decay, so that the periodic form can change over time. I used Student’s t distribution as the observation model, to allow exceptional dates to be outliers. I guess the periodic component due to the week effect is still in the data because there are only twenty years of data. Naturally it would be better to model the whole time series, but it was easier to just use the csv by Mulligan.

All I can say is . . . wow. Bayes wins again. Maybe Aki can supply the R or Matlab code?
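As a rough illustration (not Aki's actual GPstuff code, and in Python rather than R or Matlab), the covariance structure he describes — two slowly varying components plus a periodic component that decays so the weekly pattern can drift — might be sketched like this; all hyperparameter values here are arbitrary placeholders, not fitted values:

```python
import numpy as np

def rbf(x1, x2, lengthscale, variance):
    """Squared-exponential kernel for a slowly varying component."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def quasi_periodic(x1, x2, period, per_ls, decay_ls, variance):
    """Periodic kernel multiplied by an RBF 'decay' term, so the
    periodic form can change slowly over time."""
    d = x1[:, None] - x2[None, :]
    per = np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / per_ls ** 2)
    decay = np.exp(-0.5 * (d / decay_ls) ** 2)
    return variance * per * decay

def birthday_kernel(x1, x2):
    """Sum of two slowly varying components and a decaying weekly
    component. Hyperparameters are made-up placeholders."""
    return (rbf(x1, x2, lengthscale=1000.0, variance=1.0)      # long-term trend
            + rbf(x1, x2, lengthscale=60.0, variance=0.5)      # seasonal scale
            + quasi_periodic(x1, x2, period=7.0, per_ls=1.0,
                             decay_ls=2000.0, variance=0.3))   # week effect

x = np.arange(10, dtype=float)
K = birthday_kernel(x, x)
print(K.shape)  # (10, 10)
```

In a full model this covariance matrix would feed into GP inference with the Student-t likelihood Aki mentions; the sketch only shows how the components combine by addition.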

P.S. And let’s not forget how great the simple and clear time series plots are, compared to various fancy visualizations that people might try.

P.P.S. More here.


  1. please do share code; my students are trying to do similar things these days with variable stars…

  2. ps Can we all please start using ISO 8601 for dates? Standards are good.

  3. Aki Vehtari says:

    Thanks for the compliments!

    Our Gaussian process code for Matlab and R-interface is available at

    Since you are interested, in the next few days we will also add to that web page links to the specific code to produce these estimates and figures with Matlab, and just the estimates with R (because we are not good with R graphics).

    I also prefer ISO 8601 dates and will use an ISO 8601-like MM-DD format in the next version (MM-DD does not seem to be part of the standard).

  4. Christian Hennig says:

    It’s nice but… errr… how exactly does “Bayes win” by this? A frequentist can do pretty much the same thing, can’t he?
    By the way, for lovers of the t-distribution: What are the degrees of freedom and how chosen? (Bayes may be somewhere in here but one could do without, I guess…)

    • Andrew says:


      Just about anything that can be done using statistical method X can be done, with enough effort, using statistical method Y. Nonetheless, the actual computation was performed using a particular method, and I think that’s where the credit should go.

      • Christian Hennig says:

        I don’t see where there is any “value added” by Bayes here at all, though. Isn’t this just plain good probability modelling with no use for priors and posteriors (OK, I could imagine where they are but what is lost if they weren’t there)? Am I missing something?

        • Andrew says:


          The Gaussian process model is a prior distribution for the underlying series. In strict classical inference, you’re not allowed to assign a probability distribution for the parameters of interest. Again, you can do whatever you want and you don’t have to call it Bayesian—you can call it “regularization” or whatever—but this particular calculation happens to have been done using Bayesian methods.

  5. Robert Kern says:

    I followed Chris Mulligan’s example and extracted the full time series from BigQuery.

    The day_of_year column is coded 1-366 as if there were a leap day in every year: Feb 28th is always 59, Mar 1st is always 61, Dec 31st is always 366. The day_of_week column is coded 1-7, Mon-Sun.
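That coding can be reproduced by mapping every date onto an actual leap year's calendar — a small Python sketch; the helper name is mine, not from Robert's extraction:

```python
import datetime

def day_of_year_366(month, day):
    """Day-of-year coding described above: every year is treated as a
    leap year, so Feb 28 -> 59, Mar 1 -> 61, Dec 31 -> 366.
    Uses 2000 (a real leap year) as the reference calendar."""
    return datetime.date(2000, month, day).timetuple().tm_yday

print(day_of_year_366(2, 28),
      day_of_year_366(3, 1),
      day_of_year_366(12, 31))  # 59 61 366
```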

    Have fun!

  6. This is fantastic, Aki. Most interesting is that the residual suggests the ~12/30 spike isn’t as severe as I thought.

  7. Carlitos says:

    Robert: thanks a lot for publishing the data… it had me entertained for a couple of hours!

    In case someone wants to play with R/ggplot2, i found the following charts interesting:

    # The loop bodies were missing; presumably these are centered
    # moving averages, so the bodies below are a guessed reconstruction
    # (the births column name is also a guess):
    for (i in (1+3):(nrow(data)-3))        # 7-day window
      data$ma7[i] <- mean(data$births[(i-3):(i+3)])
    for (i in (1+182):(nrow(data)-182))    # 365-day window
      data$ma365[i] <- mean(data$births[(i-182):(i+182)])
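Those loop ranges read like centered moving averages with 7-day and 365-day windows. The same smoothing can be sketched in Python with a convolution (a stand-in for the R loops, not Carlitos's actual code):

```python
import numpy as np

def centered_moving_average(values, window):
    """Centered moving average; returns NaN where the window
    would run off either end of the series."""
    values = np.asarray(values, dtype=float)
    kernel = np.ones(window) / window
    out = np.full(values.shape, np.nan)
    half = window // 2
    smoothed = np.convolve(values, kernel, mode="valid")
    out[half:half + smoothed.size] = smoothed
    return out

# Simulated daily counts, just to exercise the function.
births = np.random.default_rng(0).normal(10000, 500, size=730)
weekly = centered_moving_average(births, 7)     # ~ the (1+3):(n-3) loop
annual = centered_moving_average(births, 365)   # ~ the (1+182):(n-182) loop
```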




  8. g says:

    Shouldn’t the smoothed graph be constrained to be smooth across the year boundary? It seems like that would make the residual at the very end of the year bigger, which might gratify Chris :-).
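One way to get the wrap-around smoothing g suggests is to pad the series circularly before averaging, so Dec 31 is smoothed together with Jan 1 (a sketch under that assumption, not anyone's actual code — and note Aki's reply below gives a reason not to do this when there is a trend):

```python
import numpy as np

def circular_moving_average(values, window):
    """Centered moving average treating the series as periodic,
    so the window wraps across the year boundary."""
    values = np.asarray(values, dtype=float)
    half = window // 2
    padded = np.concatenate([values[-half:], values, values[:half]])
    kernel = np.ones(window) / window
    return np.convolve(padded, kernel, mode="valid")

daily = np.sin(np.linspace(0, 2 * np.pi, 366, endpoint=False))
smooth = circular_moving_average(daily, 7)  # same length, no edge NaNs
```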

  9. Aki Vehtari says:

    Integration over the latent values was done using robust expectation propagation, as described in the linked reference. Covariance function and likelihood parameters were estimated by optimizing the marginal posterior. The figure shown here was made using degrees of freedom nu=2, while the optimized nu was about 1.6, which produces a similar figure.

    Smoothing should not be done across the year boundary, as there seems to be an increasing trend in the number of births. This is more obvious in the full time series. Using the full time series will help get better estimates for the dates near the year boundary.

    I’ll check the full time series data next week and will then provide code for both the above figure and the full time series.

  10. Sam Clifford says:

    I’ve just spent today revising a paper I’m writing where we look at a joint annual and daily trend for some air quality data. I reckon you could use a tensor product of a cyclic B-spline for the annual trend and a cyclic B-spline or cyclic random walk model for the weekly trend. I’m not totally comfortable with the periodic term presented above. Any excess temporal variation could be captured with an AR(1) error model. If I remember I’ll try to have a look at this come next week.
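A cyclic basis of the kind Sam describes can be approximated with Fourier terms, which automatically match value and derivative at the period boundary, much as a cyclic B-spline would (a minimal Python sketch; a real cyclic B-spline or the tensor-product version would need a spline library such as what mgcv provides in R):

```python
import numpy as np

def fourier_basis(t, period, n_harmonics):
    """Cyclic basis: sin/cos pairs that are continuous and smooth
    across the period boundary."""
    cols = []
    for k in range(1, n_harmonics + 1):
        cols.append(np.sin(2 * np.pi * k * t / period))
        cols.append(np.cos(2 * np.pi * k * t / period))
    return np.column_stack(cols)

t = np.arange(366, dtype=float)
X = fourier_basis(t, period=366.0, n_harmonics=3)   # annual-trend basis
y = np.sin(2 * np.pi * t / 366)                     # toy seasonal signal
beta, *_ = np.linalg.lstsq(X, y, rcond=None)        # least-squares fit
```

An AR(1) error model for the excess temporal variation would then be fitted to the residuals of a regression on a basis like this.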

  11. DCA says:

    I would imagine that nearly the same decomposition (which is very informative) could be gotten from the stl code (seasonal-trend-loess) of Cleveland et al; I’ve only used the Fortran version of this but it is in R:

    • StefanP says:

      Yes absolutely, it is very informative, especially considering the minute effort involved. Using Mulligan’s code it is simply:
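The one-liner itself isn't reproduced above. As a rough stand-in for readers without R, the same trend/seasonal/remainder split that stl produces can be sketched in plain Python — this is a naive moving-average decomposition, not Cleveland et al.'s loess-based algorithm, and the function name is mine:

```python
import numpy as np

def naive_decompose(y, period):
    """Split a series into trend + seasonal + remainder, stl-style,
    using moving averages instead of loess."""
    y = np.asarray(y, dtype=float)
    # Trend: centered moving average over one full period.
    kernel = np.ones(period) / period
    trend = np.convolve(y, kernel, mode="same")
    detrended = y - trend
    # Seasonal: average of each position within the period.
    seasonal = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(seasonal, len(y) // period + 1)[:len(y)]
    remainder = y - trend - seasonal
    return trend, seasonal, remainder

t = np.arange(140, dtype=float)
y = 0.1 * t + np.sin(2 * np.pi * t / 7)      # toy trend + weekly cycle
trend, seasonal, remainder = naive_decompose(y, 7)
```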


  12. […] updates: Here is my plot using the full time series data to make the model. Data analysis could be made in […]