
Java Regression Library - Linear Regression Model

Part 2 - Linear Regression Model

Welcome to part 2 of this tutorial series where we will be creating a Regression Analysis library in Java. In the last tutorial we covered a lot of theory about the foundations and applications of regression analysis. We finished off by coding up the RegressionModel abstract class, which will become the base of all our models in this library.

Prerequisites -

Make sure you have read and understood Part 1 of this tutorial series, where I explained a lot of the theory behind regression analysis and regression models. I won't be repeating much of that content, so it's a good idea to have a solid understanding of it all before you read on with this tutorial.


In this tutorial we will be covering and implementing our first regression model - the simple linear regression model.

The Linear Regression Model

To start off with, let's consider the Wikipedia article's definition of the Simple Linear Regression Model:

Simple linear regression is the least squares estimator of a linear regression model with a single explanatory variable. In other words, simple linear regression fits a straight line through the set of n points in such a way that makes the sum of squared residuals of the model (that is, vertical distances between the points of the data set and the fitted line) as small as possible.

This is perhaps one of the easier definitions to understand. So in this model we have a single explanatory variable (X in our case) and we want to fit a straight line that somehow 'best fits' all the points in our data set.

This model uses a least squares estimator to find the straight line that best fits our data. So what does this mean? The least squares approach aims to find the line that makes the sum of the squared residuals as small as possible. And what are these residuals? The residuals are the vertical distances between our data points and the fitted line. If the best fit line passed through every one of our data points, the sum of the squared residuals would be zero - meaning we would have an exact fit for our data.

Consider this numerical example:

We have the data:

X         Y
2         21.05
3         23.51
4         24.23
5         27.71
6         30.86
8         45.85
10        52.12
11        55.98

We want to find a straight line that makes the sum of the squared residuals as small as possible. As it turns out, the least squares estimator for this data set produces the straight line:

y = 4.1939x + 9.4763

as the line of best fit - that is, there exists no other straight line for which the sum of the squared residuals (the squared differences between the actual data and the modelled line) is smaller.

This makes a lot of sense as a line of best fit: any straight line that followed our data more closely would have a smaller sum of squared residuals, and no such line exists. So remember:

Residuals = the differences in the Y axis between the points in our data set and the fitted line from our model.

[Figure: Good Model - the fitted line follows the data points closely]
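To make this concrete, here is a minimal sketch (the method name is my own, not part of the library yet) that computes the sum of squared residuals of a fitted line y = βx + α against a data set. The least squares line is precisely the one for which this value is smallest:

public static double sumSquaredResiduals(double[] x, double[] y, double alpha, double beta)
{
	double sum = 0.0;

	for (int i = 0; i < x.length; i++) {
		// the residual is the vertical distance between the point and the line
		double residual = y[i] - (beta * x[i] + alpha);
		sum += residual * residual;
	}
	return sum;
}

Plugging in the example data above with beta = 4.1939 and alpha = 9.4763 gives a smaller result than any other choice of straight line.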

So in the linear regression model we want to use a least squares estimator that somehow finds the straight line minimising the sum of the squared residuals. The obvious question now is: how can we find this straight line?

The Math

So we want to find the straight line that best fits our data set. To do this we have to somehow find the best values of our unknown parameters so that the resulting straight line becomes our best fit. The basic equation for a straight line is:

y = α + βx

We want to find the values of α and β that produce the final straight line that best fits our data. So how do we find these parameters?

Handily there is a nice formula for it (for those of us who don’t want to derive it):

β = Σ(xᵢ - x̄)(yᵢ - ȳ) / Σ(xᵢ - x̄)² = Cov[x, y] / Var[x]

α = ȳ - βx̄

OK, perhaps at first glance that formula isn't so nice after all. The good thing is it really isn't all that bad once you get to know the symbols:

  • x̄ (x with a line over the top) is called 'x bar' = the mean of the X values in our data set
  • ȳ (y with a line over the top) is called 'y bar' = the mean of the Y values in our data set

The sigma Σ (the symbol that looks like a capital E) is the summation operator. The good news for us programmers is that this symbol can be nicely converted into something we can all understand - the for loop - as we will see in a minute!

Now we could go off and start trying to code up a solution to find β, but it would be a lot easier if we could somehow modularise the formula a little more to make it easier to understand. Again, handily, the formula does that for us. The part we actually want is:

Cov[x, y] / Var[x]

This stands for the covariance of x and y divided by the variance of x. Now we only have to compute those two quantities, divide one by the other, and we have our result for β.

We are only really worried about β, as finding α is easy once we have a value for β: it's just the mean of the Y values minus the found value of β multiplied by the mean of the X values. Great!
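As promised, the summations map directly onto for loops. Here is a minimal sketch of the whole estimator in Java (the method name and array-based return are illustrative, not the final model class we will build for the library):

public static double[] leastSquares(double[] x, double[] y)
{
	int n = x.length;

	// compute x bar and y bar - the means of the X and Y values
	double xbar = 0.0, ybar = 0.0;
	for (int i = 0; i < n; i++) {
		xbar += x[i];
		ybar += y[i];
	}
	xbar /= n;
	ybar /= n;

	// the two summations from the formula; the 1/n normalising
	// factors cancel when we divide, so we can leave them out
	double covariance = 0.0, variance = 0.0;
	for (int i = 0; i < n; i++) {
		covariance += (x[i] - xbar) * (y[i] - ybar); // Cov[x, y]
		variance += (x[i] - xbar) * (x[i] - xbar);   // Var[x]
	}

	double beta = covariance / variance; // gradient
	double alpha = ybar - beta * xbar;   // intercept
	return new double[] { alpha, beta };
}

Running this on the example data set above returns approximately α = 9.4763 and β = 4.1939 - exactly the line of best fit quoted earlier.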


Java - Calculate the Harmonic Mean

Here is a small snippet to calculate the harmonic mean of a data set.

The harmonic mean is defined as:

H = n / (1/x₁ + 1/x₂ + ... + 1/xₙ)

public static double harmonicMean(double[] data)
{
	double sum = 0.0;

	// sum the reciprocals of each value in the data set
	for (int i = 0; i < data.length; i++) {
		sum += 1.0 / data[i];
	}
	// divide the number of values by the sum of the reciprocals
	return data.length / sum;
}
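As a quick usage example, the harmonic mean is the natural average for rates: travelling two equal distances at speeds of 60 and then 40 gives an average speed of 48, not 50.

double[] speeds = { 60.0, 40.0 };
System.out.println(harmonicMean(speeds)); // prints 48.0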

Java - Calculate the Geometric Mean

Here is a small snippet to calculate the geometric mean of a data set.

The geometric mean is defined as:

G = (x₁ × x₂ × ... × xₙ)^(1/n)


 
public static double geometricMean(double[] data)
{
	double product = data[0];

	// multiply all of the values in the data set together
	for (int i = 1; i < data.length; i++) {
		product *= data[i];
	}
	// take the nth root of the resulting product
	return Math.pow(product, 1.0 / data.length);
}
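One caveat worth knowing about: for long data sets the running product can overflow to infinity (or underflow to zero). A common equivalent formulation, sketched below, sums the logarithms instead and exponentiates the average:

public static double geometricMeanLog(double[] data)
{
	double logSum = 0.0;

	// the log of a product is the sum of the logs
	for (int i = 0; i < data.length; i++) {
		logSum += Math.log(data[i]);
	}
	// exp of the mean of the logs equals the nth root of the product
	return Math.exp(logSum / data.length);
}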

Java - Serialization Constructors

It is a common misconception that classes which implement the Serializable interface must also declare a constructor which takes no arguments.

When deserialization is taking place, the process does not actually use the object’s constructor itself. The object is instantiated without a constructor and is then initialised using the serialized instance data.

The only requirement on the constructor for a class that implements Serializable is that the first non-serializable superclass in its inheritance hierarchy must have a no-argument constructor. This is because when you serialize an object, the serialization process chains its way up the inheritance hierarchy of the class - saving the instance data of each Serializable type it finds along the way. When a class is found that does not implement Serializable, the serialization process halts.

Then when deserialization is taking place, the state of this first non-serializable superclass cannot be restored from the data stream, but is instead initialised by invoking that class’ no-argument constructor. The rest of the instance data of all the Serializable subclasses can then be restored from the stream.

For example, take this class, which does not provide a no-arguments constructor:

public class Foo implements Serializable {  
	public Foo(Bar bar) {  
		...  
	}  
	...
	...  
}  

Although the class does not itself declare a no-arguments constructor, it can still be serialized. This is because its first non-serializable superclass, in this case Object, provides a no-arguments constructor which can be used to initialise the superclass state during deserialization.

If, however, Foo extended a Baz class which did not implement Serializable and did not declare a no-arguments constructor:

public class Baz {  
	public Baz(Bar bar) {  
	   ...
	}  
	...
}
public class Foo extends Baz implements Serializable {  
	...
	...
}  

In this case an InvalidClassException ("no valid constructor") would be thrown during the deserialization process, as the state of the Baz superclass cannot be restored by invoking a no-arguments constructor. Because the instance data of the superclass Baz cannot be restored, the subclass also cannot be properly initialised - so the deserialization process cannot complete.
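A quick way to see this behaviour is a round-trip in memory. In this sketch (the class names are my own, purely for illustration) the write completes without error, but the read fails because Base has no no-arguments constructor:

import java.io.*;

class Base {
	private int value;
	public Base(int value) { this.value = value; } // no no-arguments constructor
}

class Child extends Base implements Serializable {
	public Child() { super(42); }
}

public class SerializationDemo {
	public static void main(String[] args) throws Exception {
		ByteArrayOutputStream buffer = new ByteArrayOutputStream();
		try (ObjectOutputStream out = new ObjectOutputStream(buffer)) {
			out.writeObject(new Child()); // serialization itself succeeds
		}
		try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(buffer.toByteArray()))) {
			in.readObject(); // throws InvalidClassException: no valid constructor
		}
	}
}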


Java Regression Library - Regression Models

Part 1 - Regression Models

In this tutorial series we'll be going over how to create a simple Regression Analysis library in Java. If you have any prior knowledge of regression analysis you will probably know that this is a very large field with a great many applications. In this series we won't be covering any massively advanced techniques. Our final library will be able to produce the same results as you would find in Microsoft Excel (excluding the graph plotting), which in most basic circumstances will be more than enough to get you some good results.

Prerequisites -

It’s best if you start this series with a sound knowledge of OOP (object-oriented programming) practices in Java as this series will include the use of abstract classes and polymorphism. You will also need a good knowledge of some of the more basic concepts in Java such as looping, methods and variables. I will do my best to explain the code as much as I can but it is advisable that you have some prior knowledge.

As regression analysis is a mathematical technique, this tutorial series will of course focus on mathematical concepts, so you will need a sound knowledge of algebra and graphs. I will again do my best to explain all of the concepts as much as possible to cater for beginners, but people who have a basic algebra or statistics course under their belts will find things a lot easier.

What is Regression Analysis?

So enough of all the introductions - let's get straight in! If you haven't heard of regression analysis before, you are probably already asking: what is it and why is it useful? From the Wikipedia article on regression analysis:

“a statistical process for estimating the relationships among variables. It includes many techniques for modelling and analysing several variables, when the focus is on the relationship between a dependent variable and one or more independent variables. More specifically, regression analysis helps one understand how the typical value of the dependent variable changes when any one of the independent variables is varied, while the other independent variables are held fixed.”

Well that hasn’t really helped much now has it? It is much simpler to understand if you think about the variables as the X and Y coordinates on a graph.

Consider the case where you have a simple scatter plot. You have a set of X and Y coordinates that are plotted on a graph with two axes - x and y. Take, for example, the graph below, where the data runs up until an X value of 11. Say these values are from a particular stock on the stock exchange (regression analysis has a lot of applications in stocks and shares). The X values each represent a month of the year, and the respective Y coordinates are the average price of the stock in that particular month. From the plot we can see that the price of the stock is steadily increasing, but we don't possess any data for the 12th month. Is the price going to increase or decrease in December? How can we find out? For market traders this is very important information that can make or lose them millions. The answer - regression analysis!

[Figure: Scatter Plot - the stock data up to an X value of 11]

So we have data up to November and we want to find out what the Y value is when X is 12. The trouble is it's not December yet, so we don't know what it is. We need a forecast model. Let's revisit the situation. We have an X value and we need the Y value. Hopefully this is ringing some bells - it sounds an awful lot like a good use for a function such as Y = aX + b (or it could be any other function). We can insert an X value of 12 and get back the corresponding Y value, which is the average stock price for December. Sounds great, but we have a problem: we don't know the parameters a and b! The function could have any intercept and gradient; we currently don't have a clue. We could make values up, but someone like a market trader doesn't want to risk their money on made up values. We need a way to find the values of a and b which, when put into the function, will give us back an accurate value for the price in December.

Armed with that knowledge, let's go back to the Wikipedia definition. 'estimating the relationships among variables' - this makes more sense now. As X increases, what does Y do? This is called the relationship between the two variables. If the Y values are increasing a lot as X increases, our forecast should reflect this relationship. We now need to label X and Y in more formal terms.

Y is the dependent variable - its value depends on the independent variable X and the parameters a and b. X is the independent variable, whose value we are free to choose (in our example, the month).

We can now again go back to the Wikipedia definition. 'helps one understand how the typical value of the dependent variable changes when any one of the independent variables is varied, while the other independent variables or parameters are held fixed.' Again, this makes more sense now. We want to analyse how the dependent variable Y changes as the independent variable X is varied, while the parameters a and b are kept fixed. This is most often done through a function such as Y = aX + b.

So essentially we want to find some function that best fits the data points that we have for the other months. The function models the relationship between X and Y. Once we have this function we can plug in X values and get the Y values that follow the relationship. This has many uses!

Let's go back to our example. We want to forecast the stock price in December. We therefore need to find some function that relates the month to the price. This is regression analysis in its simplest form. Things get harder when we have to figure out which function is best to use to model the relationship (is it a straight line, an exponential curve, etc.) and how we can find out how good our model is at describing the relationship, but we will move onto that in later parts of this series.

The most basic form of regression analysis is linear regression - that is, finding a linear function that best models the relationship between the two variables. The basic linear function is Y = aX + b from earlier. We want to find the price Y, and X is the month. We need to find the best values for a and b that produce a line that follows our current data as closely as possible. If the line is accurate, we can use it to forecast other months. Our function becomes PRICE = a * MONTH + b. A huge part of regression analysis is finding the best values of a and b that produce a line that closely models our current data set.
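To make that last step concrete, here is a tiny sketch of the substitution, using the line of best fit y = 4.1939x + 9.4763 found for this same data set in part 2 of this series (so here a = 4.1939 and b = 9.4763):

public class ForecastExample {
	public static void main(String[] args) {
		double a = 4.1939; // gradient, found by the regression in part 2
		double b = 9.4763; // intercept, found by the regression in part 2

		// Y = aX + b with X = 12 gives the December forecast
		double december = a * 12 + b;
		System.out.println("Forecast for month 12: " + december); // ~59.80
	}
}

Part 2 of this series covers exactly how those two parameter values are found.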
