Archive of posts filed under the Statistical computing category.

Stan Project: Continuous Relaxations for Discrete MRFs

Hamiltonian Monte Carlo (HMC), as used by Stan, is only defined for continuous parameters. We’d love to be able to do discrete sampling. So I was excited when I saw this: Yichuan Zhang, Charles Sutton, Amos J Storkey, and Zoubin Ghahramani. 2012. Continuous Relaxations for Discrete Hamiltonian Monte Carlo. NIPS 25. Abstract: Continuous relaxations play […]
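The standard workaround in Stan itself is to sum the discrete variable out of the joint density so that only continuous parameters are left for HMC; the paper’s continuous relaxation is a different route. Here is a minimal R sketch of that marginalization idea (the two-component mixture and all names are illustrative, not taken from the post or the paper):

## Marginalize a discrete mixture indicator z out of a two-component normal
## mixture, leaving a log likelihood in continuous parameters only.
## (Illustrative sketch; not the continuous-relaxation method of the paper.)
log_sum_exp <- function(x) {
  m <- max(x)
  m + log(sum(exp(x - m)))
}

log_lik_marginal <- function(y, theta, mu, sigma) {
  # theta = Pr(z = 1); mu, sigma = component means and sds
  sum(vapply(y, function(yi) {
    log_sum_exp(c(log(theta)     + dnorm(yi, mu[1], sigma[1], log = TRUE),
                  log(1 - theta) + dnorm(yi, mu[2], sigma[2], log = TRUE)))
  }, numeric(1)))
}

## Example: evaluate the marginal log likelihood at one trial parameter setting
y <- c(-1.2, 0.3, 2.5, 3.1)
log_lik_marginal(y, theta = 0.4, mu = c(0, 3), sigma = c(1, 1))

Summing out works fine for small discrete spaces like this one; the appeal of the relaxation approach is handling discrete MRFs whose state spaces are too large to sum over directly.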

Workshop for Women in Machine Learning

This might interest some of you:

CALL FOR ABSTRACTS
Workshop for Women in Machine Learning
Co-located with NIPS 2013, Lake Tahoe, Nevada, USA
December 5, 2013
http://www.wimlworkshop.org
Deadline for abstract submissions: September 16, 2013

Postdocs in probabilistic modeling! With David Blei! And Stan!

David Blei writes: I have two postdoc openings for basic research in probabilistic modeling. The thrusts are (a) scalable inference and (b) model checking. We will be developing new methods and implementing them in probabilistic programming systems. I am open to applicants interested in many kinds of applications and from any field. “Scalable inference” means […]

More on that machine learning course

Following up on our discussion the other day, Andrew Ng writes:

What should be in a machine learning course?

Nando de Freitas writes: We’re designing two machine learning (ML) courses at Oxford (introductory and advanced ML). In doing this, we have many questions and wonder what your thoughts are on the following:
– Which do you think are the key optimization papers/ideas that should be covered?
– Which topics do you think are coolest […]

Bayes related

Dave Decker writes: I’ve seen some Bayes-related things recently that might make for interesting fodder on your blog. There are two books teaching Bayesian analysis from a programming perspective, and also a “web application for data analysis using powerful Bayesian statistical methods.”

Please send all comments to /dev/ripley

Trey Causey asks, Has R-help gotten meaner over time?: I began by using Scrapy to download all the e-mails sent to R-help between April 1997 (the earliest available archive) and December 2012. . . . We each read 500 messages and coded them in the following categories:
-2 Negative and unhelpful
-1 Negative but helpful […]

“Non-statistical” statistics tools

Ulrich Atz writes: I regard myself as fairly familiar with modern “big data” tools and models such as random forests, SVMs, etc. However, HyperCube is something I haven’t come across yet (I met the marketing guy last week), and they advertise it as “disruptive”, “unique”, and the “best performing data analysis tool available”. Have you seen it in action? […]

R sucks

I was trying to make some new graphs using 5-year-old R code and I got all these problems because I was reading in files with variable names such as “co.fipsid” and now R is automatically changing them to “co_fipsid”. Or maybe the names had underbars all along, and the old R had changed them into […]
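For what it’s worth, with read.csv() or read.table() the check.names argument controls whether header names get run through make.names(); which function was actually involved here isn’t clear, so the sketch below is purely illustrative:

## Illustrative only: make.names() turns characters that are invalid in R
## names into dots; check.names = FALSE keeps the file's header verbatim.
tmp <- tempfile(fileext = ".csv")
writeLines(c("co-fipsid,value", "36061,1.2", "6037,3.4"), tmp)
names(read.csv(tmp))                       # "co.fipsid" -- mangled by make.names()
names(read.csv(tmp, check.names = FALSE))  # "co-fipsid" -- kept as in the file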

AI Stats conference on Stan etc.

Jaakko Peltonen writes: The Seventeenth International Conference on Artificial Intelligence and Statistics (http://www.aistats.org) will be held next April in Reykjavik, Iceland. AISTATS is an interdisciplinary conference at the intersection of computer science, artificial intelligence, machine learning, statistics, and related areas.