Sorry, I’m just not familiar with this particular acronym.

LOO error will be small when the model's predictions remain relatively accurate even without the one (or N) data points left out, usually because the model's structure is both rigid enough that localized misfit cannot propagate and accurate overall. Conversely, when the model is very flexible and can represent many behaviors, the constraints on behavior imposed by each data point are important for predicting that point. Consider the difference between fitting a specific two-term function y = a*f(x) + b*g(x), where f and g are chosen for their theoretical properties, versus fitting a 32-term Fourier series or the like.
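To make that contrast concrete, here is a minimal sketch (my own illustration, not from the original comment) comparing exact LOO error for the two kinds of model. For any linear-in-parameters least-squares fit, the LOO residual at point i can be computed without refitting, as e_i / (1 - h_ii), where h_ii is the diagonal of the hat matrix. The synthetic data, basis choices, and `loo_rss` helper are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
x = np.linspace(0, 1, n, endpoint=False)
# True signal is a single sine plus noise.
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(n)

def loo_rss(X, y):
    """Mean squared leave-one-out residual for a linear least-squares fit,
    using the hat-matrix shortcut e_i / (1 - h_ii) instead of n refits."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    h = np.diag(X @ np.linalg.pinv(X))  # leverages h_ii
    return np.mean((resid / (1 - h)) ** 2)

# Rigid two-term model: y = a*sin(2*pi*x) + b*cos(2*pi*x),
# with the basis chosen to match the structure of the signal.
X_rigid = np.column_stack([np.sin(2 * np.pi * x), np.cos(2 * np.pi * x)])

# Flexible 32-term Fourier series (harmonics k = 1..16).
terms = []
for k in range(1, 17):
    terms += [np.sin(2 * np.pi * k * x), np.cos(2 * np.pi * k * x)]
X_flex = np.column_stack(terms)

loo_rigid = loo_rss(X_rigid, y)
loo_flex = loo_rss(X_flex, y)
# The rigid model's LOO error sits near the noise variance; the flexible
# model fits the noise in-sample, so its LOO error is typically much larger.
print(loo_rigid, loo_flex)
```

The rigid model barely changes when a point is deleted, so its LOO error tracks the irreducible noise; the flexible model leans on each point to pin down its many coefficients, which is exactly the sensitivity the paragraph above describes.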

From this perspective, I think we should interpret LOO and similar measures as telling us how sensitive our model is to the specific information in the dataset. With that more general framing in mind, we could pick alternative cross-validation schemes tailored to the questions our particular scientific application needs to ask.
