Section 26 Multiple Linear Regression (MLR): Prediction & Residuals


26.1 Regression Model

\[ \large y_{i} = \beta_0 + \beta_1 x_{1i} + \beta_2 x_{2i} + \epsilon_{i} \]
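The model above can be simulated directly. The sketch below (not from the source) draws data from a two-predictor linear model with hypothetical coefficient values \(\beta_0=1,\ \beta_1=2,\ \beta_2=-0.5\) and error standard deviation \(\sigma=0.3\), chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical true parameters (illustration only, not from the notes)
beta0, beta1, beta2, sigma = 1.0, 2.0, -0.5, 0.3

x1 = rng.uniform(0, 10, n)
x2 = rng.uniform(0, 10, n)
eps = rng.normal(0, sigma, n)                 # errors: iid N(0, sigma^2)
y = beta0 + beta1 * x1 + beta2 * x2 + eps     # y_i = beta0 + beta1*x1i + beta2*x2i + eps_i
```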


26.2 Prediction

\[ \large \hat y_{i} = \hat\beta_0 + \hat\beta_1x_{1i} + \hat\beta_2x_{2i} \]
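A minimal sketch of computing the fitted values \(\hat y_i\): estimate \(\hat\beta_0, \hat\beta_1, \hat\beta_2\) by least squares on simulated data (coefficient values are hypothetical), then apply the prediction equation above.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x1 = rng.uniform(0, 10, n)
x2 = rng.uniform(0, 10, n)
# Hypothetical true model for illustration: beta = (1.0, 2.0, -0.5), sigma = 0.3
y = 1.0 + 2.0 * x1 - 0.5 * x2 + rng.normal(0, 0.3, n)

# Design matrix with a leading column of ones for the intercept
X = np.column_stack([np.ones(n), x1, x2])

# Least-squares estimates (beta0_hat, beta1_hat, beta2_hat)
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Fitted values: y_hat_i = beta0_hat + beta1_hat*x1i + beta2_hat*x2i
y_hat = X @ beta_hat
```

With a moderate sample size the estimates should land close to the values used to generate the data.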


26.3 Residual

\[ \large \hat\epsilon_{i} = y_i - \hat y_{i} \]

\[ \large \hat\epsilon_{i} = y_i - (\hat\beta_0 + \hat\beta_1x_{1i} + \hat\beta_2x_{2i}) \]
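The residual definition can be checked numerically. This sketch (assumed setup, hypothetical coefficients) fits the model and computes \(\hat\epsilon_i = y_i - \hat y_i\); a useful property to observe is that, when the model includes an intercept, the least-squares residuals sum to zero:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
x1 = rng.uniform(0, 5, n)
x2 = rng.uniform(0, 5, n)
# Hypothetical true model for illustration
y = 0.5 + 1.5 * x1 + 2.0 * x2 + rng.normal(0, 0.4, n)

X = np.column_stack([np.ones(n), x1, x2])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Residuals: observed minus fitted
resid = y - X @ beta_hat

# With an intercept, OLS residuals sum to zero (up to floating-point error)
# and are orthogonal to each predictor column.
```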


26.4 Assumptions

  • \(y\) is related to \(x_1\) and \(x_2\) by the multiple linear regression model:

\[ \large y_{i} = \beta_0 + \beta_1 x_{1i} + \beta_2 x_{2i} + \epsilon_{i}, \quad i=1,...,n\] \[ \large E(y \mid X_1=x_{1i}, X_2=x_{2i}) = \beta_0 + \beta_1x_{1i} + \beta_2x_{2i} \]

  • The errors \(\epsilon_1, \epsilon_2, ..., \epsilon_n\) are independent of each other.

  • The errors \(\epsilon_1, \epsilon_2, ..., \epsilon_n\) have a common variance \(\sigma^2\).

  • The errors are normally distributed with a mean of 0 and variance \(\sigma^2\), that is:

\[ \large \epsilon \sim N(0,\sigma^2) \]
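One basic diagnostic tied to these assumptions is estimating the common error variance \(\sigma^2\) from the residuals. A sketch under assumed simulated data (hypothetical coefficients, \(\sigma = 0.5\)): the usual unbiased estimator divides the residual sum of squares by \(n - p\), where \(p\) is the number of estimated coefficients (here \(p = 3\)):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
sigma = 0.5   # hypothetical true error standard deviation
x1 = rng.uniform(0, 10, n)
x2 = rng.uniform(0, 10, n)
y = 1.0 + 0.8 * x1 - 0.3 * x2 + rng.normal(0, sigma, n)

X = np.column_stack([np.ones(n), x1, x2])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat

# Unbiased estimate of sigma^2: RSS / (n - p), with p = 3 coefficients
sigma2_hat = resid @ resid / (n - 3)
```

With this sample size, \(\sqrt{\hat\sigma^2}\) should fall close to the \(\sigma\) used in the simulation; a normal quantile plot of the residuals is the usual visual check of the normality assumption.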