Some discussion of model selection
For models with many parameters, goodness-of-fit is likely to be very high. However, such models are also likely to generalize poorly, so we need a measure of generalizability. Here parsimony gives us a few advantages: a parsimonious model is easier to understand, and it tends to generalize better.
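The trade-off above can be made concrete with a small sketch. Assuming an invented toy data set (noisy samples of a sine curve, not from the notes), fitting polynomials of increasing degree shows that more parameters always improve fit on the observed data, while performance on held-out data need not improve:

```python
# Illustrative sketch (hypothetical data): more parameters raise
# goodness-of-fit on the fitted data, but not necessarily on
# held-out data, which is what generalizability measures capture.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)  # noisy "data"

# Split into a fitting set and a held-out generalization set.
fit_idx, test_idx = np.arange(0, 40, 2), np.arange(1, 40, 2)

errors = {}
for degree in (1, 3, 9):
    coefs = np.polyfit(x[fit_idx], y[fit_idx], degree)
    fit_err = np.mean((np.polyval(coefs, x[fit_idx]) - y[fit_idx]) ** 2)
    test_err = np.mean((np.polyval(coefs, x[test_idx]) - y[test_idx]) ** 2)
    errors[degree] = (fit_err, test_err)
    print(f"degree {degree}: fit MSE {fit_err:.3f}, held-out MSE {test_err:.3f}")
```

The fit error shrinks monotonically with degree; only the held-out error tells us whether the extra parameters are capturing signal or noise.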
The preference for parsimony comes from the idea of Occam's razor: among models with comparable explanatory power, we choose the simpler one. Instance theory is a good model of the lexical decision task, but it is not the only one; its simplicity is part of what makes it popular.

What is a Good Model?
A good model should have:
- plausibility
- a balance of parsimony and goodness-of-fit
- coherence of its underlying assumptions
- behavior that is easy to understand when it breaks down
- consistency with known results, especially simple and basic phenomena
- the ability to explain rather than merely describe data
- predictions that can be falsified through experiments
Do the data agree with the model? Goodness-of-fit can be measured by:
- the distance between the data and the model's predictions
- the likelihood function: the likelihood of observing the data if we assume the model; fitting it yields a set of parameter estimates

Why don't we always use goodness-of-fit as the measure of a model's quality?
- it invites overfitting
- it is not intuitive

This is why we want to balance it against parsimony using some measure of generalizability.
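As a minimal sketch of this balance, assume some invented data and two nested normal models: one with only a free mean, and one with a free mean and free variance. The likelihood rewards the richer model; a penalized criterion such as AIC (one common generalizability measure, 2k − 2 log L) charges it for the extra parameter:

```python
# Hypothetical example: fit two models by maximum likelihood, then
# balance fit against parsimony with AIC. Data are invented.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=0.5, scale=1.0, size=100)  # pretend observations

def normal_loglik(x, mu, sigma):
    # log-likelihood of the data under a normal model
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (x - mu) ** 2 / (2 * sigma**2))

# Model 1: free mean, variance fixed at 1 (k = 1 parameter).
mu1 = data.mean()
ll1 = normal_loglik(data, mu1, 1.0)

# Model 2: free mean and free variance (k = 2 parameters).
mu2, sigma2 = data.mean(), data.std()  # closed-form MLEs
ll2 = normal_loglik(data, mu2, sigma2)

# AIC = 2k - 2 log L; lower is better. The extra parameter must
# buy enough extra likelihood to justify itself.
aic1 = 2 * 1 - 2 * ll1
aic2 = 2 * 2 - 2 * ll2
print(f"AIC (fixed variance): {aic1:.1f}")
print(f"AIC (free variance):  {aic2:.1f}")
```

Because the models are nested, the richer model never has a lower likelihood; AIC can still prefer the simpler one when the likelihood gain is small.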