It is a common misconception that classes which implement the
Serializable interface must also declare a constructor which takes no arguments.
When deserialization is taking place, the process does not actually use the object’s own constructor. The object is instantiated without invoking any of its declared constructors and is then initialised using the serialized instance data.
The only requirement on the constructor for a class that implements
Serializable is that the first non-serializable superclass in its inheritance hierarchy must have a no-argument constructor. This is because when you serialize an object, the serialization process chains its way up the inheritance hierarchy of the class - saving the instance data of each Serializable type it finds along the way. When a class is found that does not implement
Serializable, the serialization process halts.
Then when deserialization is taking place, the state of this first non-serializable superclass cannot be restored from the data stream, but is instead initialised by invoking that class’ no-argument constructor. The rest of the instance data of all the
Serializable subclasses can then be restored from the stream.
For example, consider this class, which does not provide a no-arguments constructor:
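A minimal sketch of such a class and a serialize/deserialize round trip (the Person name and its field are illustrative placeholders, not from the original snippet):

```java
import java.io.*;

// A Serializable class with no no-argument constructor. It still serializes
// fine, because its first non-serializable superclass (Object) does have one.
class Person implements Serializable {
    private static final long serialVersionUID = 1L;
    final String name;

    Person(String name) {           // the only constructor takes an argument
        this.name = name;
    }
}

public class SerializationDemo {
    // Serialize a Person to bytes, then deserialize it back.
    static Person roundTrip(Person p) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(p);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return (Person) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Person copy = roundTrip(new Person("Alice"));
        System.out.println(copy.name);   // the field survives the round trip
    }
}
```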
Although the class does not itself declare a no-arguments constructor, it can still be serialized. This is because the first non-serializable superclass of this class, which in this case is
Object, provides a no-arguments constructor which can be used to initialize the subclass during deserialization.
Things would be different, however, if Foo extended from a
Baz class which did not implement
Serializable and did not declare a no-arguments constructor:
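A minimal sketch of that situation (the field names are illustrative). Serializing the object succeeds, but reading it back fails because no no-argument constructor is available to initialise the Baz portion:

```java
import java.io.*;

// Baz is NOT Serializable and declares no no-argument constructor.
class Baz {
    final int id;
    Baz(int id) { this.id = id; }
}

// Foo is Serializable, but its first non-serializable superclass (Baz)
// has no no-argument constructor, so deserialization cannot complete.
class Foo extends Baz implements Serializable {
    private static final long serialVersionUID = 1L;
    Foo(int id) { super(id); }
}

public class BrokenDeserialization {
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(new Foo(42));   // serializing succeeds...
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            in.readObject();                // ...but deserializing throws
            System.out.println("unexpected: no exception");
        } catch (InvalidClassException e) {
            System.out.println("InvalidClassException: " + e.getMessage());
        }
    }
}
```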
In this case an InvalidClassException would be thrown during the deserialization process, as the state of the
Baz class cannot be restored through the use of a no-arguments constructor. Because the instance data of the superclass
Baz could not be restored, the subclass also cannot be properly initialised - so the deserialization process cannot complete.
In this tutorial series we’ll be going over how to create a simple Regression Analysis library in Java. If you have any prior knowledge of regression analysis you will probably know that it is a very large field with a great many applications. We won’t be covering any massively advanced techniques in this series. Our final library will be able to produce the same results as you would find in Microsoft Excel (excluding the graph plotting), which in most basic circumstances will be more than enough to get you good results.
It’s best if you start this series with a sound knowledge of OOP (object-oriented programming) practices in Java as this series will include the use of abstract classes and polymorphism. You will also need a good knowledge of some of the more basic concepts in Java such as looping, methods and variables. I will do my best to explain the code as much as I can but it is advisable that you have some prior knowledge.
Since regression analysis is a mathematical technique, this series will naturally focus on mathematical concepts, so you will need a sound knowledge of algebra and graphs. I will again do my best to explain all of the concepts as much as possible to cater for beginners, but people who have a basic algebra or statistics course under their belts will find things a lot easier.
So enough of all the introductions - let’s get straight in! If you haven’t heard of regression analysis before, you are probably already asking: what is it and why is it useful? From the Wikipedia article on regression analysis:
“a statistical process for estimating the relationships among variables. It includes many techniques for modelling and analysing several variables, when the focus is on the relationship between a dependent variable and one or more independent variables. More specifically, regression analysis helps one understand how the typical value of the dependent variable changes when any one of the independent variables is varied, while the other independent variables are held fixed.”
Well, that hasn’t really helped much, has it? It is much simpler to understand if you think about the variables as the X and Y coordinates on a graph.
Consider the case where you have a simple scatter plot diagram. You have a set of X and Y coordinates that are plotted on a graph with two axes - the x and the y. For example, this graph where the data runs up until an
X value of 11. Say these values are from a particular stock on the stock exchange (regression analysis has a lot of applications in stocks and shares). The
X values each represent a month of the year, and the respective
Y coordinates are the average price of the stock in that particular month. From the graph plot we can see that the price of shares is steadily increasing but we don’t possess any data for the 12th month. Is the price going to increase or decrease in December? How can we find out? For market traders this is very important information that can make them or lose them millions. The answer - regression analysis!
So we have data up to November and we want to find out what the
Y value is when
X is 12. The trouble is it’s not December yet, so we don’t know what it is. We need a forecast model. Let’s revisit the situation. We have an
X value and we need the
Y value. Hopefully this is ringing some bells. It sounds an awful lot like a job for a function such as
Y = aX + b (or it could be any other function). We can insert an
X value of 12 and we get back the corresponding
Y value which is the average stock price for December. Sounds great but we have a problem. We don’t know the variables
a and b! The function could have any intercept and gradient. We currently don’t have a clue. We could make one up, but someone like a market trader doesn’t want to risk their money on a made-up value. We need a way to find the values of a and b which, when put into the function, will give us back an accurate value for the price in December.
Armed with that knowledge, let’s go back to the Wikipedia definition. ‘estimating the relationships among variables’ - this kind of makes more sense now. As
X increases what does
Y do? This is called the relationship between the two variables. If the
Y values are increasing a lot as
X increases, our forecast should reflect this relationship. We now need to label X and Y in more formal terms. Y is the dependent variable: it depends on the independent variable X and the parameters a and b to give it a value.
We can now again go back to the Wikipedia definition. ‘helps one understand how the typical value of the dependent variable changes when any one of the independent variables is varied, while the other independent variables or parameters are held fixed.’ Again this makes more sense now. We want to analyse how the dependent variable
Y changes as the independent
X value is varied and the other parameters
a and b are kept fixed. This is most often done through a function such as
Y = aX + b.
So essentially we want to find some function that best fits the data points that we have for the other months. The function models the relationship between
X and Y. Once we have this function, we can plug in
X values and get the
Y values that follow the relationship. This has many uses!
Let’s go back to our example. We want to find a forecast of the stock price in December. We therefore need to find some function that relates the month to the price. This is regression analysis in its simplest form. Things get harder when we have to figure out which function best models the relationship (is it a straight line, an exponential curve, etc.) and how we can measure how good our model is at describing the relationship, but we will move onto that in later parts of this series.
The most basic form of regression analysis is linear regression - that is, finding a linear function that best models the relationship between the two variables. The basic linear function is
Y = aX + b from earlier. We want to find the price
Y, where X is the month. We need to find the best values for a and
b that produce a line that follows our current data as much as possible. If the line is accurate, we can use it to forecast other months. Our function becomes
PRICE = a * MONTH + b. A huge part of regression analysis is finding the best values of
a and b that produce a line that closely models our current data set.
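As a taste of where the series is heading, here is a minimal sketch of fitting a and b by ordinary least squares, the standard technique for this. The class and method names are placeholders, not the library we will build:

```java
// A minimal sketch of simple linear regression (ordinary least squares)
// for the model Y = aX + b.
public class SimpleLinearRegression {

    /** Returns {a, b} for the best-fit line Y = aX + b. */
    static double[] fit(double[] x, double[] y) {
        int n = x.length;
        double sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
        for (int i = 0; i < n; i++) {
            sumX += x[i];
            sumY += y[i];
            sumXY += x[i] * y[i];
            sumXX += x[i] * x[i];
        }
        // Standard closed-form least-squares estimates for slope and intercept.
        double a = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
        double b = (sumY - a * sumX) / n;
        return new double[] { a, b };
    }

    public static void main(String[] args) {
        // Months 1..11 with made-up prices that lie on PRICE = 2 * MONTH + 5.
        double[] month = new double[11];
        double[] price = new double[11];
        for (int i = 0; i < 11; i++) {
            month[i] = i + 1;
            price[i] = 2 * month[i] + 5;
        }
        double[] ab = fit(month, price);
        double december = ab[0] * 12 + ab[1];   // forecast for month 12
        System.out.println("a=" + ab[0] + " b=" + ab[1] + " forecast=" + december);
    }
}
```

With real, noisy stock data the fitted line will not pass through every point; it is the line that minimises the overall error, which is exactly what the later parts of this series will explore.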
In C# it is common for exceptions to be re-thrown after some logging has taken place, or perhaps even to alter the exception information to be more user friendly. However, there are two different ways of re-throwing exceptions in C# and care needs to be taken when doing so, as one method will lose the stack trace - making things a lot harder to debug.
Consider the following code:
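The original listing is not shown, so this is a sketch of the kind of code being described - Run(), DoSomething() and Main() are the names used in the text, while the exception message is invented for illustration:

```csharp
using System;

class Program
{
    static void Main()
    {
        try
        {
            Run();
        }
        catch (Exception e)
        {
            // The printed trace only shows Run() - the DoSomething() frame is gone.
            Console.WriteLine(e.StackTrace);
        }
    }

    static void Run()
    {
        try
        {
            DoSomething();
        }
        catch (Exception e)
        {
            // ...examine and log the exception here...
            throw e;   // re-throwing this way resets the stack trace
        }
    }

    static void DoSomething()
    {
        throw new Exception("Something went wrong");   // thrown every time
    }
}
```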
This is pretty self-explanatory. We have a method that runs another method and catches any exceptions it may throw (in this case one will be thrown every time). In the catch block we examine the exception, perhaps do some logging and re-throw the exception for the caller to handle. Finally in Main the re-thrown exception is caught again and the stack trace is examined.
At first look there is nothing wrong with this code - it’s all pretty commonplace, nothing much to see here. However, the stack trace that gets printed is not what you might expect.
It is not the full stack trace: we can see that the exception came from Run(), but we can’t tell that the exception actually originated from DoSomething() at all. This causes problems when debugging, as instead of going straight to the root cause you first have to work back through Run().
We lose the top of the stack trace because we used throw e;, which essentially resets the stack trace to start in that method. This makes sense, as it is really the same as constructing a new exception - something like throw new Exception(e.Message); - and throwing it from inside the catch block.
But what if we wanted to see the whole stack trace? Well, instead of using throw e; we just use the bare statement throw;
With the updated catch block:
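A sketch of the updated Run() method, keeping the logging but re-throwing with the bare statement:

```csharp
static void Run()
{
    try
    {
        DoSomething();
    }
    catch (Exception e)
    {
        // ...examine and log the exception here using e...
        throw;   // bare throw re-throws and preserves the original stack trace
    }
}
```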
We now have the full story in the stack trace. We can see that the exception originated from the DoSomething() method and passed through the Run() method into Main() - much more helpful when debugging.
I don’t see any situation in which using throw e; would be of any use at all. If you wanted to hide the stack trace then you would typically be throwing a completely new exception anyway - with a new message and perhaps other information to pass to the caller. If you didn’t want to hide the stack trace then throw; is the statement to use. ReSharper even flags throw e; as a problem and offers to replace it with the simple throw;.
Even so, I bet this mistake has been made many times by many people. So remember: if you want to re-throw an exception, never use throw e; as it will lose your stack trace. Instead, always use throw;.
In C# when you concatenate strings together you implicitly create more strings in memory than you might expect. For example, consider the code:
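The original snippet is not shown; step-by-step concatenation along these lines matches the list of intermediate strings described below:

```csharp
string s = "foo";
s = s + " bar";   // a brand-new "foo bar" string is allocated
s = s + " baz";   // yet another new "foo bar baz" string is allocated
```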
Behind the scenes new strings are created for each portion of the resulting string in completely different memory locations through inefficient copy operations. So in total in this one line we have created:
1. "foo" 2. "bar" 3 "baz" 4 "foo bar" 5. "foo bar baz" In just one seemingly simple concatenation loop 5 strings have been created which of course is wildly inefficient. The problem gets a lot worse when you end up concatenating hundreds of strings together in a loop like this. The solution is to use
StringBuilders. The above code is converted into:
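A sketch of the StringBuilder version of the same concatenation:

```csharp
using System.Text;

var sb = new StringBuilder();
sb.Append("foo");
sb.Append(" bar");   // appends into the builder's internal buffer
sb.Append(" baz");   // no intermediate "foo bar" string is created
string s = sb.ToString();   // the single final string is created here
```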
Using this method is a lot more efficient thanks to the fact that a StringBuilder appends into an internal character buffer rather than creating a new string in a separate memory location for each intermediate result (for example, number 4 from above would not be created at all). This makes
StringBuilders very useful when concatenating many strings at once. But that doesn’t mean go replace all of your string concatenation code with StringBuilders right away. There are some situations where explicitly using a StringBuilder can make the situation worse. For example:
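The example snippet is absent; something like the following fits the explanation that follows (the variable names are placeholders):

```csharp
string a = "foo", b = "bar", c = "baz";
// A single-expression concatenation: the compiler emits one
// String.Concat(a, " ", b, " ", c)-style call, with no intermediate strings.
string s = a + " " + b + " " + c;

// Concatenating only literals is cheaper still: the compiler folds this
// into a single constant string at compile time.
string t = "foo" + " bar" + " baz";
```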
You might think that this suffers from the same inefficiencies as in the first example, but in fact it doesn’t at all. The difference is that a concatenation written as a single expression (which is what’s happening here) is translated by the compiler into a single call to String.Concat() - and concatenations of string literals are folded into one constant at compile time. Adding a
StringBuilder would essentially be ruining the optimisations made by the compiler. The use of
StringBuilder should be reserved for building complex strings at runtime - not for replacing compile-time concatenations.