Comparing models

Jonathan Zaozao Zhang writes,

For the dataset in my research, I am currently trying to compare the fit between a linear model, y = a + b*x, and a nonlinear model, y = (a0 + a1*x)/(1 - a2*x).

The question is: for goodness of fit, can I compare R-squared values? (I doubt it… Also, the nls command in R does not give an R-squared value for the nonlinear regression.) If not, why not, and what would be a common goodness-of-fit measure that can be used for such a comparison?

My response: first off, you can compare the models using their residual standard deviations. R^2 is OK too, since that’s just based on the residual sd divided by the data sd. The data sd is the same in both models (since you’re using the same dataset), so comparing R^2 is no different from comparing residual sds.
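For instance, both quantities can be read off the two fits. Here is a minimal sketch in Python using scipy's `curve_fit`, with simulated data standing in for the real dataset (all values are made up for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

# Simulated stand-in for the real dataset (values made up for illustration).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0, 50)
y = (1.0 + 0.5 * x) / (1 - 0.2 * x) + rng.normal(0.0, 0.1, x.size)

def linear(x, a, b):
    return a + b * x

def rational(x, a0, a1, a2):
    return (a0 + a1 * x) / (1 - a2 * x)

p1, _ = curve_fit(linear, x, y)
p2, _ = curve_fit(rational, x, y, p0=[1.0, 0.5, 0.1])

r1 = y - linear(x, *p1)
r2 = y - rational(x, *p2)

# Residual sd with a degrees-of-freedom correction (2 and 3 parameters).
sd1 = np.sqrt(np.sum(r1**2) / (len(y) - 2))
sd2 = np.sqrt(np.sum(r2**2) / (len(y) - 3))

# R^2 shares the same denominator (total sum of squares) in both models,
# so it orders the models the same way the residual sums of squares do.
sst = np.sum((y - y.mean()) ** 2)
rsq1 = 1 - np.sum(r1**2) / sst
rsq2 = 1 - np.sum(r2**2) / sst
```

Because the denominator of R^2 is the same for both fits, ranking the models by R^2 and by residual sum of squares gives the same answer.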

Even simpler, I think, is to note that model 2 includes model 1 as a special case: if a2 = 0 in model 2, you get model 1. So you can just fit model 2 and look at the confidence interval for a2 to get a sense of how close you are to model 1.
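A sketch of that check, assuming scipy's `curve_fit` and a normal-theory approximate interval; the data here are simulated from the linear special case (a2 = 0) purely for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# Simulated data from the linear special case (a2 = 0), for illustration.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 2.0, 50)
y = 1.0 + 0.5 * x + rng.normal(0.0, 0.1, x.size)

def rational(x, a0, a1, a2):
    return (a0 + a1 * x) / (1 - a2 * x)

popt, pcov = curve_fit(rational, x, y, p0=[1.0, 0.5, 0.0])
se = np.sqrt(np.diag(pcov))

# Approximate 95% interval for a2; if it covers 0, the data are
# consistent with the linear model.
lo, hi = popt[2] - 1.96 * se[2], popt[2] + 1.96 * se[2]
```

If the interval sits far from 0, the linear model is missing real curvature; if it straddles 0 comfortably, the extra parameter isn't buying much.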

Continuing on this theme, I’d graph the fitted model 2 as a curve of E(y) vs. x, showing a bunch of lines indicating inferential uncertainty in the fitted regression curve. Then you can see the fitted model and related possibilities, and see how close it is to linear.
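One way to produce those uncertainty lines is to draw parameter vectors from the approximate sampling distribution implied by the fit's covariance matrix and evaluate E(y) for each draw; each row of the resulting array can then be plotted as a thin line over the data. A sketch (simulated data, illustrative only):

```python
import numpy as np
from scipy.optimize import curve_fit

# Simulated stand-in data, for illustration.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 2.0, 50)
y = (1.0 + 0.5 * x) / (1 - 0.2 * x) + rng.normal(0.0, 0.1, x.size)

def rational(x, a0, a1, a2):
    return (a0 + a1 * x) / (1 - a2 * x)

popt, pcov = curve_fit(rational, x, y, p0=[1.0, 0.5, 0.1])

# Draw parameter vectors from the approximate (normal) sampling
# distribution of the estimates, and evaluate E(y) for each draw.
draws = rng.multivariate_normal(popt, pcov, size=200)
grid = np.linspace(0.0, 2.0, 100)
curves = np.array([rational(grid, *d) for d in draws])

# Pointwise 95% band; plotting every row of `curves` as a faint line
# shows the spread of plausible regression curves around the fit.
band_lo, band_hi = np.percentile(curves, [2.5, 97.5], axis=0)
```

If most of the plausible curves look nearly straight over the observed range of x, that is a direct visual answer to the original question.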

1 thought on “Comparing models”

  1. You want to find out whether your data need a more complex model or not.

    What I'd try is Bayesian model comparison:

    i.e., computing P(Model1 | Data) / P(Model2 | Data),

    assuming a flat prior P(M1) = P(M2) and, say, Gaussian noise: P(data_i | M1) = gauss(yi; a*xi + b; sigma) and P(data_i | M2) = gauss(yi; (a0 + a1*xi)/(1 - a2*xi); sigma).

    This will require marginalising over parameter space, which is 2D for M1 and 3D for M2.
    This will favor the simpler model in the case of an equal fit.
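One cheap stand-in for that marginalization is the BIC approximation to the log marginal likelihood. A rough Python sketch on simulated data (everything here is illustrative, and BIC is only an asymptotic approximation to the integral described above):

```python
import numpy as np
from scipy.optimize import curve_fit

# Simulated stand-in data, for illustration.
rng = np.random.default_rng(3)
x = np.linspace(0.0, 2.0, 50)
y = (1.0 + 0.5 * x) / (1 - 0.2 * x) + rng.normal(0.0, 0.1, x.size)

def linear(x, a, b):
    return a + b * x

def rational(x, a0, a1, a2):
    return (a0 + a1 * x) / (1 - a2 * x)

def bic(model, p0, k):
    # BIC for Gaussian errors with unknown sigma; k counts all free
    # parameters, including sigma.
    popt, _ = curve_fit(model, x, y, p0=p0)
    sse = np.sum((y - model(x, *popt)) ** 2)
    n = len(y)
    return n * np.log(sse / n) + k * np.log(n)

bic1 = bic(linear, [1.0, 1.0], 3)         # a, b, sigma
bic2 = bic(rational, [1.0, 0.5, 0.1], 4)  # a0, a1, a2, sigma

# -(BIC2 - BIC1)/2 roughly approximates log P(Data|M2) - log P(Data|M1);
# positive values favor the more complex model.
approx_log_bf = -(bic2 - bic1) / 2
```

The k*log(n) penalty is what encodes the automatic preference for the simpler model when the fits are comparable.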
