SLR: Matrix representation
Announcements
Lab 01 due TODAY at 11:59pm
Push work to GitHub repo
Submit final PDF on Gradescope + mark pages for each question
HW 01 will be assigned on Thursday
Topics
- Application exercise on model assessment
- Matrix representation of simple linear regression
- Model form
- Least squares estimate
- Predicted (fitted) values
- Residuals
Model assessment
Two statistics
Root mean square error, RMSE: A measure of the average error (average difference between observed and predicted values of the outcome)
R-squared, $R^2$: Percentage of variability in the outcome explained by the regression model (in the context of SLR, the predictor)
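As a quick illustration of the two statistics, here is a minimal sketch in Python using made-up observed and fitted values (the data are hypothetical, chosen only to show the computations):

```python
import numpy as np

# Hypothetical observed and predicted values (made-up for illustration)
y = np.array([3.1, 4.0, 5.2, 6.1, 7.3])
y_hat = np.array([3.0, 4.2, 5.0, 6.3, 7.1])

# RMSE: square root of the average squared difference
# between observed and predicted values
rmse = np.sqrt(np.mean((y - y_hat) ** 2))

# R^2: 1 - (residual sum of squares) / (total sum of squares),
# the proportion of variability in y explained by the model
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - np.mean(y)) ** 2)
r_squared = 1 - ss_res / ss_tot

print(rmse, r_squared)
```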
Application exercise
📋 sta221-sp25.netlify.app/ae/ae-01-model-assessment.html
Open ae-01
from last class. Complete Part 2.
Matrix representation of simple linear regression
SLR: Statistical model (population)
When we have a quantitative response, $y$, and a single quantitative predictor, $x$, the statistical model is
$$y = \beta_0 + \beta_1 x + \epsilon$$
- $\beta_1$: Population (true) slope of the relationship between $x$ and $y$
- $\beta_0$: Population (true) intercept of the relationship between $x$ and $y$
- $\epsilon$: Error terms centered at 0 with variance $\sigma^2_\epsilon$
SLR in matrix form
The simple linear regression model can be represented using vectors and matrices as
$$\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\epsilon}$$
- $\mathbf{y}$: Vector of responses
- $\mathbf{X}$: Design matrix (columns for predictors + intercept)
- $\boldsymbol{\beta}$: Vector of model coefficients
- $\boldsymbol{\epsilon}$: Vector of error terms centered at $\mathbf{0}$ with variance $\sigma^2_\epsilon \mathbf{I}$
SLR in matrix form
What are the dimensions of $\mathbf{y}$, $\mathbf{X}$, $\boldsymbol{\beta}$, and $\boldsymbol{\epsilon}$?
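A minimal sketch of building the SLR design matrix on a made-up predictor vector, so the dimensions can be checked directly (the data values are invented for illustration):

```python
import numpy as np

# Made-up predictor with n = 4 observations
x = np.array([1.0, 2.0, 3.0, 4.0])

# Design matrix: a column of ones (intercept) plus the predictor column
X = np.column_stack([np.ones_like(x), x])

n = len(x)
print(X.shape)  # for SLR, X is n x 2; y and the error vector are n x 1
```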
Derive least squares estimator for $\boldsymbol{\beta}$
Goal: Find the estimator $\hat{\boldsymbol{\beta}}$ that minimizes the sum of squared residuals $\mathbf{e}^T\mathbf{e} = (\mathbf{y} - \mathbf{X}\boldsymbol{\beta})^T(\mathbf{y} - \mathbf{X}\boldsymbol{\beta})$
Gradient
Let $\mathbf{x} = [x_1, \ldots, x_k]^T$ be a $k \times 1$ vector and $f(\mathbf{x})$ a scalar-valued function of $\mathbf{x}$.
Then the gradient of $f$ with respect to $\mathbf{x}$ is
$$\nabla_{\mathbf{x}} f = \begin{bmatrix} \dfrac{\partial f}{\partial x_1} \\ \vdots \\ \dfrac{\partial f}{\partial x_k} \end{bmatrix}$$
Property 1
Let $\mathbf{x}$ be a $k \times 1$ vector and $\mathbf{a}$ a $k \times 1$ vector of constants.
The gradient of $f(\mathbf{x}) = \mathbf{a}^T\mathbf{x}$ is $\nabla_{\mathbf{x}} f = \mathbf{a}$.
Side note: Property 1
Writing $f(\mathbf{x}) = \mathbf{a}^T\mathbf{x} = \sum_{i=1}^{k} a_i x_i$, each partial derivative is $\partial f / \partial x_i = a_i$, so $\nabla_{\mathbf{x}} f = \mathbf{a}$.
Property 2
Let $\mathbf{x}$ be a $k \times 1$ vector and $\mathbf{A}$ a $k \times k$ matrix of constants.
Then the gradient of $f(\mathbf{x}) = \mathbf{x}^T\mathbf{A}\mathbf{x}$ is
$$\nabla_{\mathbf{x}} f = (\mathbf{A} + \mathbf{A}^T)\mathbf{x}$$
If $\mathbf{A}$ is symmetric, this simplifies to $\nabla_{\mathbf{x}} f = 2\mathbf{A}\mathbf{x}$.
Proof in HW 01.
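Property 2 can be checked numerically (this is a sanity check, not a proof): compare the analytic gradient $(\mathbf{A} + \mathbf{A}^T)\mathbf{x}$ against central finite differences on a made-up $\mathbf{A}$ and $\mathbf{x}$.

```python
import numpy as np

# Made-up matrix A (not symmetric) and vector x for illustration
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
x = rng.normal(size=3)

# f(x) = x^T A x; Property 2 says the gradient is (A + A^T) x
f = lambda v: v @ A @ v
analytic = (A + A.T) @ x

# Central finite differences, one coordinate at a time
h = 1e-6
numeric = np.array([
    (f(x + h * e) - f(x - h * e)) / (2 * h)
    for e in np.eye(3)
])
print(np.allclose(analytic, numeric, atol=1e-4))
```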
Derive least squares estimator
Find $\hat{\boldsymbol{\beta}}$ that minimizes
$$SSR(\boldsymbol{\beta}) = (\mathbf{y} - \mathbf{X}\boldsymbol{\beta})^T(\mathbf{y} - \mathbf{X}\boldsymbol{\beta}) = \mathbf{y}^T\mathbf{y} - 2\boldsymbol{\beta}^T\mathbf{X}^T\mathbf{y} + \boldsymbol{\beta}^T\mathbf{X}^T\mathbf{X}\boldsymbol{\beta}$$
Derive least squares estimator
Using Properties 1 and 2, take the gradient with respect to $\boldsymbol{\beta}$ and set it equal to $\mathbf{0}$:
$$\nabla_{\boldsymbol{\beta}} SSR = -2\mathbf{X}^T\mathbf{y} + 2\mathbf{X}^T\mathbf{X}\boldsymbol{\beta} = \mathbf{0} \quad\implies\quad \hat{\boldsymbol{\beta}} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}$$
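A minimal sketch of the closed-form estimator $\hat{\boldsymbol{\beta}} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}$ on simulated data from a known line (the line $y = 2 + 3x$ and the noise level are made up for illustration), cross-checked against NumPy's least-squares routine:

```python
import numpy as np

# Simulate data from a known line y = 2 + 3x plus noise
rng = np.random.default_rng(42)
x = np.linspace(0, 10, 50)
y = 2 + 3 * x + rng.normal(scale=0.5, size=x.size)

# Design matrix: intercept column + predictor column
X = np.column_stack([np.ones_like(x), x])

# Solve the normal equations (X^T X) beta = X^T y
# (numerically preferable to forming the inverse explicitly)
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Cross-check against NumPy's built-in least-squares solver
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(beta_hat, beta_lstsq))
```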
Did we find a minimum?
Hessian matrix
The Hessian matrix, $\nabla^2 f$, is the $k \times k$ matrix of second-order partial derivatives, with $(i, j)$ entry $\dfrac{\partial^2 f}{\partial x_i \partial x_j}$.
Using the Hessian matrix
If the Hessian matrix is…
positive-definite, then we have found a minimum.
negative-definite, then we have found a maximum.
neither positive- nor negative-definite, then we have found a saddle point.
Did we find a minimum?
Show that the Hessian of $SSR$,
$$\nabla^2_{\boldsymbol{\beta}} SSR = 2\mathbf{X}^T\mathbf{X},$$
is positive-definite, so $\hat{\boldsymbol{\beta}}$ is indeed a minimum.
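A numerical illustration (not a proof): on a made-up design matrix with full column rank, all eigenvalues of $2\mathbf{X}^T\mathbf{X}$ are positive, so the Hessian is positive-definite.

```python
import numpy as np

# Made-up predictor; as long as x is not constant, X has full column rank
x = np.array([1.0, 2.0, 3.0, 5.0])
X = np.column_stack([np.ones_like(x), x])

# Hessian of SSR: 2 X^T X (a symmetric 2 x 2 matrix here)
hessian = 2 * X.T @ X

# A symmetric matrix is positive-definite iff all eigenvalues are positive
eigenvalues = np.linalg.eigvalsh(hessian)
print((eigenvalues > 0).all())
```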
Predicted values and residuals
Predicted (fitted) values
Now that we have $\hat{\boldsymbol{\beta}}$, we can obtain the predicted (fitted) values
$$\hat{\mathbf{y}} = \mathbf{X}\hat{\boldsymbol{\beta}} = \mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}$$
Hat matrix: $\mathbf{H} = \mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T$, so that $\hat{\mathbf{y}} = \mathbf{H}\mathbf{y}$
- $\mathbf{H}$ is an $n \times n$ matrix
- Maps the vector of observed values $\mathbf{y}$ to the vector of fitted values $\hat{\mathbf{y}}$
- It is only a function of $\mathbf{X}$, not $\mathbf{y}$
Residuals
Recall that the residuals are the difference between the observed and predicted values:
$$\mathbf{e} = \mathbf{y} - \hat{\mathbf{y}} = \mathbf{y} - \mathbf{H}\mathbf{y} = (\mathbf{I} - \mathbf{H})\mathbf{y}$$
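A minimal sketch on made-up data that computes the hat matrix and residuals and verifies three textbook properties: $\mathbf{H}$ is symmetric, $\mathbf{H}$ is idempotent ($\mathbf{H}\mathbf{H} = \mathbf{H}$), and the residuals are orthogonal to the columns of $\mathbf{X}$.

```python
import numpy as np

# Made-up data for illustration: y generated from a line plus noise
rng = np.random.default_rng(1)
x = np.array([1.0, 2.0, 4.0, 7.0, 9.0])
y = 1 + 0.5 * x + rng.normal(scale=0.3, size=x.size)

# Design matrix and hat matrix H = X (X^T X)^{-1} X^T (depends on X only)
X = np.column_stack([np.ones_like(x), x])
H = X @ np.linalg.inv(X.T @ X) @ X.T

# Fitted values and residuals via H
y_hat = H @ y
e = (np.eye(len(y)) - H) @ y

# Symmetric, idempotent, and residuals orthogonal to the columns of X
print(np.allclose(H, H.T), np.allclose(H @ H, H), np.allclose(X.T @ e, 0))
```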
Recap
Introduced matrix representation for simple linear regression
- Model form
- Least squares estimate
- Predicted (fitted) values
- Residuals
For next class
- Complete Prepare for Lecture 05 - SLR: matrix representation cont’d