Linear models are a class of models in which a response variable is linearly related to one or more predictors. In equation form, each term on the right-hand side consists of a coefficient multiplied by its associated predictor, and the terms are summed. In other words, a coefficient cannot appear in the denominator of a fraction or in the exponent of a predictor.

Examples of linear models include simple linear regression (Y = B_{0} + B_{1}X) and polynomial regression (Y = B_{0} + B_{1}X + B_{2}X^{2}); despite the squared term, the polynomial model is still linear in its coefficients. Generalized linear models, such as logistic regression, also belong to this class: they relate a transformation of the response, through a link function, linearly to the predictors.
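Because both models above are linear in their coefficients, the same least-squares machinery fits either one. Here is a minimal sketch using NumPy on synthetic data (the data, seed, and true coefficient values B_{0} = 2 and B_{1} = 3 are assumptions chosen for illustration):

```python
import numpy as np

# Synthetic data generated from Y = 2 + 3X plus small noise
# (an assumption for demonstration purposes).
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 + 3.0 * x + rng.normal(scale=0.1, size=x.size)

# Simple linear regression by ordinary least squares.
# np.polyfit returns coefficients from highest degree down: [B_1, B_0].
b1, b0 = np.polyfit(x, y, deg=1)

# Polynomial regression Y = B_0 + B_1*X + B_2*X^2 is still linear in
# its coefficients, so the same call applies with deg=2.
coeffs = np.polyfit(x, y, deg=2)  # [B_2, B_1, B_0]
```

With this setup, the fitted b0 and b1 land close to the generating values of 2 and 3, which is the point: the squared term changes the shape of the curve, not the linearity of the estimation problem.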

Nonlinear models impose no such constraint on the functional form and can therefore be specified in many ways. This flexibility comes at a cost: nonlinear models are more computationally intensive to fit and lack some of the interpretability linear regression provides, such as p-values and R^{2}. Because there is practically an infinite number of ways to specify a nonlinear model, there should usually be some justification in the data or study design to support the need for such a model.

An example of a nonlinear model is the exponential population growth model P_{t} = P_{0} * e^{rt}, which gives the population at time t as the product of the initial population size P_{0} and an exponential growth factor, where r is the rate of growth.
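Fitting this model requires iterative nonlinear least squares rather than the closed-form solution available for linear models. A minimal sketch using SciPy's `curve_fit` on synthetic data (the data, seed, starting guess, and true values P_{0} = 100 and r = 0.3 are assumptions for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

# The exponential growth model P_t = P_0 * e^(r*t).
def growth(t, p0, r):
    return p0 * np.exp(r * t)

# Synthetic observations generated with P_0 = 100 and r = 0.3,
# perturbed by ~1% multiplicative noise (an assumption).
rng = np.random.default_rng(1)
t = np.linspace(0, 5, 30)
p = growth(t, 100.0, 0.3) * rng.normal(loc=1.0, scale=0.01, size=t.size)

# Nonlinear least squares is iterative, so a starting guess is needed
# (here [50.0, 0.5]); a poor guess can prevent convergence entirely,
# which is part of the computational cost noted above.
params, _ = curve_fit(growth, t, p, p0=[50.0, 0.5])
p0_hat, r_hat = params
```

Unlike the linear case, the quality of the fit here depends on the optimizer reaching the right region of parameter space, which is one reason nonlinear models demand more care in specification.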