Part 2 - Linear Regression Model
Welcome to part 2 of this tutorial series where we will be creating a Regression Analysis library in Java. In the last tutorial we covered a lot of theory about the foundations and applications of regression analysis. We finished off by coding up the RegressionModel abstract class, which will become the base of all our models in this library.
Prerequisites -
Make sure you have read and understood Part 1 of this tutorial series, where I explained a lot of the theory behind regression analysis and regression models. I won't be repeating much of that content, so it's a good idea to have a solid understanding of it before you read on with this tutorial.
Regression Library - Regression Models
In this tutorial we will be covering and implementing our first regression model - the simple linear regression model.
The Linear Regression Model
To start off with, let's consider the Wikipedia article's definition of the Simple Linear Regression Model:
Simple linear regression is the least squares estimator of a linear regression model with a single explanatory variable. In other words, simple linear regression fits a straight line through the set of n points in such a way that makes the sum of squared residuals of the model (that is, vertical distances between the points of the data set and the fitted line) as small as possible.
This is perhaps one of the easier definitions to understand. So in this model we have a single explanatory variable (X in our case) and we want to fit a straight line through the points in our data set that somehow 'best fits' them all.
This model uses a least squares estimator to find the straight line that best fits our data. So what does this mean? The least squares approach aims to find the line that makes the sum of the squared residuals as small as possible. So what are these residuals? The residuals are the vertical distances between our data points and our fitted line. If the best fit line passed through every one of our data points, the sum of the squared residuals would be zero - meaning we had found an exact fit for our data.
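To make that idea concrete in code, here is a minimal sketch of a helper that measures how well a candidate line y = a + ßx fits a set of points. The method and variable names here are purely illustrative and not part of the library:

```java
public class ResidualsExample {

    /**
     * Sum of squared residuals for the candidate line y = a + b * x.
     * A residual is the vertical distance between an observed y value
     * and the value the line predicts for the same x.
     */
    static double sumSquaredResiduals(double[] x, double[] y, double a, double b) {
        double sum = 0.0;
        for (int i = 0; i < x.length; i++) {
            double predicted = a + b * x[i];    // value on the candidate line
            double residual = y[i] - predicted; // vertical distance to the data point
            sum += residual * residual;
        }
        return sum;
    }
}
```

The smaller this value, the better the candidate line fits the data; the least squares estimator is the line that makes it as small as possible.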
Consider this numerical example:
We have the data:
| X  | Y     |
|----|-------|
| 2  | 21.05 |
| 3  | 23.51 |
| 4  | 24.23 |
| 5  | 27.71 |
| 6  | 30.86 |
| 8  | 45.85 |
| 10 | 52.12 |
| 11 | 55.98 |
We want to find the straight line that makes the sum of the squared residuals as small as possible. As it turns out, the least squares estimator for this data set produces the straight line:
y = 4.1939x + 9.4763
as the line of best fit - that is, there exists no other straight line for which the sum of the squared residuals (the squared differences between the actual data and the modelled line) is smaller. This makes sense as a line of best fit: any straight line that followed our data more closely would have to produce a smaller sum of squared residuals, and none does. So remember:
Residuals = the differences along the Y axis between the points in our data set and the fitted line from our model.
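As a quick sanity check of the numerical example, a throwaway main like the one below (again, purely illustrative) plugs the data into the quoted line y = 4.1939x + 9.4763 and prints the sum of squared residuals. It won't be exactly zero, but no other straight line will give a smaller value:

```java
public class BestFitCheck {
    public static void main(String[] args) {
        double[] x = {2, 3, 4, 5, 6, 8, 10, 11};
        double[] y = {21.05, 23.51, 24.23, 27.71, 30.86, 45.85, 52.12, 55.98};

        // The line of best fit from the example: y = 4.1939x + 9.4763
        double a = 9.4763; // intercept
        double b = 4.1939; // slope

        double sum = 0.0;
        for (int i = 0; i < x.length; i++) {
            double residual = y[i] - (a + b * x[i]); // vertical distance to the line
            sum += residual * residual;
        }
        System.out.println("Sum of squared residuals: " + sum);
    }
}
```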

So in the linear regression model we want a least squares estimator that finds the straight line minimising the sum of the squared residuals. The obvious question now is: how do we find this straight line?
The Math
So we want to find a straight line that best fits our data set. To find this line we have to find the best values of our unknown parameters so that the resulting straight line becomes our best fit. Our basic equation for a straight line is:

y = a + ßx

We want to find the values of a and ß that produce the straight line that best fits our data. So how do we find these values?
Handily there is a nice formula for it (for those of us who don't want to derive it):

ß = Σᵢ (xᵢ - x̄)(yᵢ - ȳ) / Σᵢ (xᵢ - x̄)² = Cov[x, y] / Var[x]
Ok, perhaps that formula isn't so nice after all at first glance. The good thing is it really isn't all that bad once you get to know the symbols:
- x̄ (x with a line over the top, read "x bar") = the mean of the X values in our data set
- ȳ (y with a line over the top, read "y bar") = the mean of the Y values in our data set
The sigma Σ (the symbol that looks like a capital E) is the summation operator. The good news for us programmers is that this symbol converts nicely into something we can all understand - the for loop - as the sketch below shows!
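For example, a term like Σᵢ (xᵢ - x̄) just means "loop over every x value and add up (xᵢ - x̄)". A minimal sketch, with illustrative names only:

```java
public class SigmaAsForLoop {
    public static void main(String[] args) {
        double[] x = {2, 3, 4, 5, 6, 8, 10, 11};

        // First compute the mean, x̄ = (Σ xᵢ) / n
        double sum = 0.0;
        for (double value : x) {
            sum += value; // Σ xᵢ as a for loop
        }
        double xBar = sum / x.length;

        // Then Σ (xᵢ - x̄) is just another loop over the same data
        double total = 0.0;
        for (double value : x) {
            total += value - xBar;
        }
        System.out.println("x bar = " + xBar + ", sum of deviations = " + total);
    }
}
```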
Now we could go off and start coding up a solution to find ß, but it would be a lot easier if we could modularise the formula a little to make it easier to understand. Handily, the formula already does that for us: the last part of it is the form we actually want:
ß = Cov[x, y] / Var[x]

which stands for the covariance of x and y divided by the variance of x. Now we only have to work out those two calculations, divide one by the other, and we have our result for ß.
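Here is a sketch of how those two pieces might look in Java. The method names are just illustrative, not the library's final API:

```java
public class CovarianceVarianceExample {

    static double mean(double[] values) {
        double sum = 0.0;
        for (double v : values) {
            sum += v;
        }
        return sum / values.length;
    }

    /** Covariance of x and y: Σ (xᵢ - x̄)(yᵢ - ȳ) / n */
    static double covariance(double[] x, double[] y) {
        double xBar = mean(x);
        double yBar = mean(y);
        double sum = 0.0;
        for (int i = 0; i < x.length; i++) {
            sum += (x[i] - xBar) * (y[i] - yBar);
        }
        return sum / x.length;
    }

    /** Variance of x: Σ (xᵢ - x̄)² / n */
    static double variance(double[] x) {
        double xBar = mean(x);
        double sum = 0.0;
        for (double v : x) {
            sum += (v - xBar) * (v - xBar);
        }
        return sum / x.length;
    }

    public static void main(String[] args) {
        double[] x = {2, 3, 4, 5, 6, 8, 10, 11};
        double[] y = {21.05, 23.51, 24.23, 27.71, 30.86, 45.85, 52.12, 55.98};

        double beta = covariance(x, y) / variance(x); // slope ß
        System.out.println("beta = " + beta);
    }
}
```

Note that the 1/n factors cancel when you divide, so it makes no difference to ß whether you use the raw sums or divide each by n.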
We are only really worried about ß, as finding a is easy once we have a value for ß. It's just the mean of the y values minus the found value of ß multiplied by the mean of the x values. Great!
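Putting the pieces together, here is a rough standalone sketch of the whole calculation on the example data. The class and variable names are placeholders, not the library's final code:

```java
public class SimpleLinearRegressionSketch {
    public static void main(String[] args) {
        double[] x = {2, 3, 4, 5, 6, 8, 10, 11};
        double[] y = {21.05, 23.51, 24.23, 27.71, 30.86, 45.85, 52.12, 55.98};

        int n = x.length;

        // Means of x and y (x̄ and ȳ)
        double xBar = 0.0, yBar = 0.0;
        for (int i = 0; i < n; i++) {
            xBar += x[i];
            yBar += y[i];
        }
        xBar /= n;
        yBar /= n;

        // ß = Σ (xᵢ - x̄)(yᵢ - ȳ) / Σ (xᵢ - x̄)²
        double numerator = 0.0, denominator = 0.0;
        for (int i = 0; i < n; i++) {
            numerator += (x[i] - xBar) * (y[i] - yBar);
            denominator += (x[i] - xBar) * (x[i] - xBar);
        }
        double beta = numerator / denominator;

        // a = ȳ - ß·x̄
        double alpha = yBar - beta * xBar;

        // For this data the output should be close to the line quoted earlier,
        // y = 4.1939x + 9.4763
        System.out.println("y = " + beta + "x + " + alpha);
    }
}
```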