Ryan Harrison My blog, portfolio and technology related ramblings

SSH Tunneling

SSH Tunneling is the ability to use ssh to create bi-directional encrypted network connections between machines, over which data (typically TCP/IP) can be exchanged. This allows us to easily & securely make services available between machines with minimal effort, while at the same time leveraging ssh for user authentication (public-key) and encryption with little overhead.

Local Port Forwarding

$ ssh -nNT -L 8000:localhost:3306 user@server.com

The above command sets up an ssh tunnel between your machine and the server. It listens on port 8000 of your local machine and forwards all connections through the tunnel to localhost:3306 from the perspective of the remote server.

Since port 3306 is the default for MySQL, you could now access a database running on your remote machine through localhost:8000 (as if it were set up and running locally). This is useful as you don't have to open extra ports in the remote server's firewall or deal with the security implications of exposing a dev database to the world. In this case, the MySQL instance is still not visible to the outside world (just how we like it).
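
For example, with the tunnel above running, a local MySQL client can point at the forwarded port (a sketch - dbuser and mydb are placeholders for your own credentials and schema). Note the use of 127.0.0.1 rather than localhost, since the MySQL client treats localhost specially and tries a Unix socket instead of TCP:

```shell
# connect to the remote MySQL instance via the local end of the tunnel
# (dbuser/mydb are placeholders)
$ mysql -h 127.0.0.1 -P 8000 -u dbuser -p mydb
```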

In the above command, the -nNT options prevent a shell from being created, so we just get the port forwarding behaviour (-N runs no remote command, -T disables pseudo-terminal allocation and -n redirects stdin from /dev/null - not strictly needed, but you probably don't also want a new tty session).

Remote Port Forwarding

$ ssh -nNT -R 4000:localhost:3000 user@server.com

The above command sets up an ssh tunnel between your machine and the server. The server listens on port 4000 and forwards all connections through the tunnel to localhost:3000 on your local machine.

You could then access a service running locally on port 3000 through port 4000 on the remote server (again as if it was running locally on that server). This is useful because it allows you to expose a locally running service to others on the internet through your server, without having to deploy it there. Note: to make the forwarded port visible to the outside world, you also need to set GatewayPorts yes in the /etc/ssh/sshd_config file, as sshd binds remote-forwarded ports to the loopback interface only by default.
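
As a sketch, the change on the remote server looks like this (sshd must be restarted for it to take effect):

```shell
# /etc/ssh/sshd_config on the remote server
GatewayPorts yes

# then restart sshd to pick up the change
$ sudo service ssh restart
```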

Dynamic Port Forwarding

$ ssh -D 5000 -nNT user@server.com

The above command sets up a SOCKS proxy server (both the SOCKS4 and SOCKS5 protocols are supported), leveraging dynamic application-level port forwarding through the ssh tunnel. You can now configure your network proxy (within the browser or the OS) to use localhost:5000 as the SOCKS proxy, and when you browse, all the traffic is proxied through the ssh tunnel via your remote server. This has a number of benefits:

  • It protects against eavesdropping (perhaps in an airport or coffee shop) since all the traffic is encrypted (even if you are accessing HTTP pages).
  • As all web traffic goes through the SOCKS proxy, you will be able to access web sites that your ISP/firewall may have blocked.
  • Potentially helps protect your privacy since the web services you access will see requests coming from the remote server and not from your local machine. This could prevent some (IP based) identity/location tracking for example.
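
To quickly check that traffic really is flowing through the tunnel, you can point curl at the proxy (assuming the -D command above is running; ifconfig.me simply echoes back the caller's public IP, which should now be your server's address):

```shell
# --socks5-hostname also sends DNS lookups through the proxy
$ curl --socks5-hostname localhost:5000 https://ifconfig.me
```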

Advanced Use Cases

The above-mentioned use cases are the most common, however they can be modified slightly and used in interesting ways. The ssh tunnel can be established not just between your local machine and your server, but can also involve additional machines - either internal to your network or internal to your server's network:

$ ssh -nNT -R 0.0.0.0:4000:192.168.1.100:631 user@server.com
  • Instead of using the default bind address, 0.0.0.0 is given explicitly. This means that the service available on the remote server on port 4000 (forwarded from port 631) will be accessible across all of the remote server's network interfaces, including bridge networks & virtual networks such as those used by container environments like Docker.
  • Instead of using localhost as the source address on the local side, the IP address of another machine inside your internal network is used (192.168.1.100 is a placeholder here) - for example a network printer listening on port 631. This allows you to expose and use the internal network printer directly from the remote server, without any additional changes within your internal network, either on the router or on the printer itself.

This technique can also be used with local port forwarding, or when setting up the SOCKS proxy server, in a similar manner.
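
For example, a local forward can be given explicit bind and destination addresses in exactly the same way (a sketch - 10.0.0.5 is a placeholder for a machine inside the server's network):

```shell
# listen on all local interfaces, and forward to a database on a machine
# inside the remote server's network rather than the server itself
$ ssh -nNT -L 0.0.0.0:8000:10.0.0.5:3306 user@server.com
```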


Testing RESTful Services in Kotlin with Rest Assured

If you’re not writing a Spring application, creating good integration tests for RESTful endpoints (or any other web service) isn’t always the easiest - especially when you aren’t working in a dynamically typed language. Rest Assured is a great library which makes the process a lot easier - it’s designed around use in Java, but of course we can use it just fine in Kotlin as well.

In the following examples, a simple Kotlin web service written with Ktor and Exposed is tested using Rest Assured and JUnit. Note that this isn't a simple unit test of the endpoint - an actual instance of the server is started up and tested via requests to localhost.

Add Rest Assured as a dependency

The first step is to add Rest Assured as a test dependency in your project, just open up your build.gradle file and add the following to the dependencies section (3.3.0 is the latest version as of writing):

testCompile "io.rest-assured:rest-assured:3.3.0"

Create Kotlin aliases

Before we start using Rest Assured, a couple of function aliases need to be created, because some of its methods overlap with Kotlin keywords. In this case when (which is pretty vital in Rest Assured), plus a helper function taking advantage of reified generics in Kotlin to convert a response object to the type we expect for further assertions.

protected fun RequestSpecification.When(): RequestSpecification {
    return this.`when`()
}

// allows response.to<Widget>() -> Widget instance
protected inline fun <reified T> ResponseBodyExtractionOptions.to(): T {
    return this.`as`(T::class.java)
}

Define a base Integration Test

In this example, all the concrete test cases which test our server endpoints will inherit from this base class. Because Ktor is being used, it’s very straightforward to start the server up at the start of the test run and close it down at the end.

At this point we can also pass configuration options to Rest Assured (there are plenty to check out). In this case we just set the base url and port so that in our test cases we can use relative URLs which are easier to read - /widget instead of http://localhost:8080/widget.

Because we also have access to any other source files in this base class, you can also define logic to setup the database as you would like in between tests - in this case, before each test we wipe the main Widget table in H2 to make sure every test starts from a blank slate.

open class ServerTest {

    companion object {

        private var serverStarted = false

        private lateinit var server: ApplicationEngine

        @BeforeAll
        @JvmStatic
        fun startServer() {
            if (!serverStarted) {
                server = embeddedServer(Netty, 8080, module = Application::module)
                server.start()
                serverStarted = true

                RestAssured.baseURI = "http://localhost"
                RestAssured.port = 8080
                Runtime.getRuntime().addShutdownHook(Thread { server.stop(0, 0, TimeUnit.SECONDS) })
            }
        }
    }

    @BeforeEach
    fun before() = transaction {
        Widgets.deleteAll() // refresh data before each test
    }
}


Create tests using Rest Assured

Now you can start using Rest Assured to test your RESTful endpoints (or any other web service really). Each test case is just a simple JUnit test, so you get all the integration you would expect from using any other library. The base format is a given --> when --> then flow, whereby first you define any entity you wish to use (in a POST for example), then define your actual request with its URL, followed finally by assertions on the response object.

Rest Assured includes a lot of support for making assertions on the output JSON using JSON paths etc. However I much prefer using the to helper we defined above to marshal the response back to our DTO objects. Some might frown at this approach as you shouldn’t be reusing your domain classes in tests - but the response objects should take the same format anyway and I think we can agree that the test cases look a lot more readable this way. Plus as an added benefit, you get to use your good and faithful assertion libraries - my favourite being AssertJ.

GET Requests

The below example shows testing out a GET request to our RESTful resource. The syntax is easy to follow, just create a GET request to the URL in question, make an assertion on the output status code and then extract the response body, converting it to a List of our model Widget class. Finally, just run assertions on the list to make sure it contains only the data you expect.

@Test
fun testGetWidgets() {
    // expected
    val widget1 = NewWidget(null, "widget1", 10)
    val widget2 = NewWidget(null, "widget2", 5)

    val widgets = get("/widget")
            .then()
            .statusCode(200)
            .extract().to<List<NewWidget>>()

    assertThat(widgets).containsOnly(widget1, widget2)
}


POST Requests

Testing out POST requests mainly follows the same format, however in this case you start off with a given expression where the body entity is defined, alongside the content type (JSON in this case). After that, the only difference is the request method. In the exact same manner as before, the output is extracted and similar assertions are run.

@Test
fun testUpdateWidget() {
    val update = NewWidget("id1", "updated", 46) // already exists
    val updated = given()
            .contentType(ContentType.JSON)
            .body(update)
            .When()
            .put("/widget")
            .then()
            .statusCode(200)
            .extract().to<NewWidget>()

    assertThat(updated).isEqualTo(update)
}


Error Cases

As good testers we of course want to test the negative cases as well, for typical RESTful services this will involve looking at the response status code and maybe checking that the response contains the correct error message etc:

@Test
fun testDeleteInvalidWidget() {
    delete("/widget/{id}", "-1")
            .then()
            .statusCode(404) // not found for an id that doesn't exist
}


The Rest Assured usage guide is very comprehensive and gives a good overview of what Rest Assured can accomplish. In the examples above I have shown only the basic functionality - but to be honest, for a lot of cases this is all you really need.

The main differences you will see in other examples is that in a typical Rest Assured test, the body method is used to run Hamcrest matchers against certain JSON elements. You can also test forms, run JSON schema validations, test against XML and use JSONPath to access specific nodes.

Find a lot more real-world use cases in the following two projects:




Typora - A Better Markdown Editor

VS Code Doesn’t Cut It

Since VS Code came into popularity, I had always used it to write blog posts like this one in markdown. It worked just fine - it's really just a text editor, but there are autocomplete templates, syntax highlighting etc. for markdown files.

For small dev markdown files and documentation this is all that’s really needed, but for longer pieces of text, I found myself wanting something a little better (more like Word I will admit). Even with all the extensions available for markdown and the live preview version you can get in another pane, there are some big holes in the overall experience when you want to get away from the markup. I can’t imagine writing a book etc in VS Code for example, even if markdown is a pretty good option for it.

VS Code markdown editor

The live preview is good, but I never really found myself using it apart from quick checks to see that I hadn't screwed up the markup and that everything looked the way I intended. For the actual writing part however, my attention was forced on the other pane - the markdown itself which, apart from the syntax highlighting, could really be any generic text editor. You have the nice shiny live version sitting right next to you, with all the nice CSS applied, but you can't really use it much because the actual editing happens elsewhere - shame really.

The other really big hole in VS Code is the lack of good spell/grammar checking. Yes, there are a few extensions available for this, but they don’t hit the mark. One even relies on sending your text to an external web service to report back on spelling errors, seriously. There is one I used that held a local database, but it was far from extensive and you can’t right click on a word to change it. In VS Code all ‘quick fixes’ like this have to go through the lightbulb menu near the gutter - very annoying. I really hope VS Code gets updated to include a good built-in spell/grammar checker.

Enter Typora

In Typora, you basically get to edit directly in the VS Code live preview equivalent. The actual underlying markdown is still there, but is kept behind the scenes and is not a distraction in the main editing experience. Things work pretty much how you would expect in most rich text editors, the significant difference being that Typora is converting everything to markdown for you.

Typora markdown editor

The overall user experience in Typora takes on a minimal and distraction-free form. In the left panel you have your project markdown files, and then you have the great looking preview straight in front of your eyes - I think it looks great. The main preview is GitHub-like by default, although there are other themes available as well.

You write the actual text the same way as you would in VS Code, but elements are converted into the final result live as you type. For example, starting off a paragraph with a hash will look just like you would expect; give it some content and hit return, however, and you have the generated header right there. The same goes for bullet points and images - type the markup and see the results live. It also makes error handling in the markup a lot easier - if the element doesn't show up, you must have done something wrong.

Typora is built as an Electron app, which is a bit of a shame as you'll be running yet more Chrome instances, and it is far from conservative in terms of download size and memory usage. But to be honest, for desktop development that seems to be the only real option nowadays, and it is far from the worst example of an Electron app I have seen.

Oh, and did I mention that it has built in spell check which works the way you would expect!?

Typora spell check

Typora is still in beta and receives constant updates. Plus, it's still free until it has a full stable release. I would definitely recommend it for your markdown needs.


Ubuntu Server Setup Part 8 - Sending Email Through Gmail

In the previous part we covered how to set up Postfix to receive emails for our custom domain name and forward them on to a personal Gmail account. With that solution you get access to all incoming mail via the forwarding, but you have no way of sending mail as the owner of your domain.

You could still add the address as a Send Mail As option within Gmail, but your underlying address would still be visible to the receiver. This is also how you see the Sent on Behalf Of message in Outlook etc. Ideally, we want to be able to send email in Gmail, but use our server as an intermediate. This is great because we can still use the Gmail interface and tooling without having to setup a real mailbox (Roundcube etc) on our server.

Securing a Relay

To get the functionality mentioned above, we have to set up Postfix as a relay server (a server that sends emails on to their destination on behalf of another machine). You might have heard that relay servers are a really bad idea, and they are - but only if they are open (a.k.a. unsecured). In this case we will be making a relay, but securing it with TLS and a username/password to make sure that all communication between it and Gmail is secured. This will also prevent bad actors from being able to send email on your behalf via your server.

Install Cyrus SASL

We will be using Cyrus SASL as the authentication mechanism for Postfix. In this case we will be storing the credentials in a simple (properly permissioned) database file, however other more sophisticated storage backends are available, such as MySQL and PAM. Install the packages using the following command:

$ sudo apt-get install sasl2-bin libsasl2-modules

Create a username and password

Once installed, we can create a username and password combination:

$ sudo saslpasswd2 -c -u yourdomain.com smtp

This will create a database file in the default location /etc/sasldb2 with a single user called smtp (you can use whatever username). You can verify that the user is created properly by running:

$ sudo sasldblistusers2

Make sure that the newly created database file is properly permissioned - in this case only readable by the Postfix user:

$ sudo chmod 400 /etc/sasldb2
$ sudo chown postfix /etc/sasldb2

Create an SSL Certificate

Because all traffic between Gmail and our server will be sent under TLS, we need an SSL certificate. If you already have a certificate (e.g. from Let's Encrypt) you can use that, but otherwise a simple self-signed cert works just as well:

$ openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -nodes -days 3650

When prompted, enter your domain name yourdomain.com as the Common Name. The cert.pem file is what we are interested in. Make sure to protect the key file! Now move the generated pem file so Postfix can read it:
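
If you would rather script this step than answer the interactive prompts, the Common Name can be passed on the command line with -subj and the result inspected afterwards (a sketch using the same placeholder domain):

```shell
# generate the self-signed cert non-interactively, setting the CN directly
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem \
    -nodes -days 3650 -subj "/CN=yourdomain.com"

# confirm the subject and expiry date of what was generated
openssl x509 -in cert.pem -noout -subject -enddate
```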

$ sudo mv cert.pem /etc/postfix/cert.pem
$ sudo chmod 400 /etc/postfix/cert.pem
$ sudo chown postfix /etc/postfix/cert.pem

Setup Postfix as a Relay Server


Now we need to change some configuration to set up Postfix as a relay server which can send mail on behalf of another server. Open up the service definition file /etc/postfix/master.cf. Uncomment the lines starting with submission and edit them to match the following:

submission inet n       -       n       -       -       smtpd
  -o syslog_name=postfix/submission
  -o smtpd_tls_security_level=encrypt
  -o smtpd_tls_cert_file=/etc/postfix/cert.pem
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_reject_unlisted_recipient=no
  -o smtpd_relay_restrictions=permit_sasl_authenticated,reject
  -o milter_macro_daemon_name=ORIGINATING

Here we are enabling authentication using SASL and setting up TLS pointing to the new certificate. All traffic to the server must be sent under TLS in order to be accepted by the relay. We also specify that we only accept relay traffic which is authenticated under SASL, and reject anything else (no open relay here).


We also need to tell Cyrus SASL to use the database file we created for authentication. Create the file /etc/postfix/sasl/smtpd.conf and enter the following:

pwcheck_method: auxprop
auxprop_plugin: sasldb
log_level: 7

Once you have made these changes, restart Postfix:

$ sudo service postfix restart

Add Firewall Rule

If everything went well and Postfix started correctly, it should now be listening on port 587 for secured SMTP traffic. If you have any problems, check the Postfix logs at /var/log/mail.log (or with journalctl -u postfix if using systemd). Add a firewall rule to allow traffic through the port:

$ sudo ufw allow 587/tcp
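
At this point you can sanity check the relay from another machine - openssl can speak STARTTLS on the submission port and show whether authentication is on offer (yourdomain.com is again a placeholder):

```shell
# open a TLS-upgraded SMTP session against the submission port
$ openssl s_client -starttls smtp -connect yourdomain.com:587 -quiet
# then type: EHLO test
# the response should include a 250-AUTH line listing the supported mechanisms
```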

Configure Gmail

Now all the server side configuration is done, time to setup your personal Gmail account to be able to send mail as your domain, using your server as a relay.

Open up Gmail and go to Settings -> Accounts and Import -> Send mail as. Click on the button to Add another email address:

Gmail Send Mail As

In the dialog box that pops up, enter your name and the full email address you wish to assign e.g me@yourdomain.com. Make sure the option to Treat as an alias is checked:

Gmail Add Address

In the next dialog, specify the address of your server and the username and password that was setup with saslpasswd2:

SMTP Server = yourdomain.com
Username = smtp@yourdomain.com (the username you created with saslpasswd2, followed by your domain)
Password = the password you chose when setting up Cyrus SASL

Make sure that port 587 is selected and the connection is secured under TLS:

Gmail Configure Relay Server

If all went well, Gmail should be able to connect to your server and will send a confirmation email to your new address me@yourdomain.com. Because we setup forwarding in the previous section, this email should appear in your Gmail inbox as well. Open the mail and copy/paste the confirmation code.

Send mail as

Finally, start composing a new email or reply to an existing one and you should be able to select the new mail address me@yourdomain.com in the From dropdown. All done!

Wrap Up

In the last two sections we set up a Postfix email server for our own domain name yourdomain.com:

  • All emails sent to me@yourdomain.com (or any listed in the virtual file) on port 25, will be forwarded on to you@gmail.com and be visible in your standard Gmail inbox.
  • Gmail will let you select me@yourdomain.com as the From address when sending or replying to any mail. The message will be relayed onto our Postfix server with TLS on port 587 and then passed on to the destination. Any message sent in this fashion will look to the receiver as though it was sent directly by your domain and your underlying Gmail address will not be visible.

Kotlin - Things to Improve

This list isn’t very long and doesn’t exactly include any game breaking lack of functionality. A good testament to how Kotlin is a solid language these days.

Try/Multi Catch

A somewhat simple language feature that according to the designers is still on the cards. Not too much of a problem to live without, but Java has had it for years and it should be a staple of any modern language:

try {
    // ...
} catch (SomeException | OtherException e) {
    // handle both exception types in one place
}

The Kotlin version relies on the when construct, which adds more levels of nesting and is generally just uglier to read and write:

try {
    // ...
} catch (e: Exception) {
    when (e) {
        is SomeException, is OtherException -> { /* ... */ }
        else -> throw e
    }
}

Ternary Operator and Collection Literals

These are both hotly contested, but I personally believe they should be part of Kotlin. All the arguments against the ternary operator seem to revolve around being ‘too easy to abuse’. Yeah right. Kotlin has so many other language features which can be abused already (operator overloading anyone? extension functions anyone?), I just don’t see why it’s such a big deal.

Like it or not, Kotlin is competing against a myriad of other C based languages - all of which have the ternary operator already. Pretty much every developer knows it these days, it should be made available.

Because if in Kotlin is an expression, it is deemed the 'acceptable alternative':

val something = if(a < 4) "it's valid" else "not valid"

v.s. the syntax everyone and their mum is familiar with:

val something = a < 4 ? "it's valid" : "not valid"

Not many characters saved, true. However, when the time comes that I need one (yes, because it has its place), I get angry at having to write the if statement - which is just clunkier to write.

Similarly, another highly wanted feature is collection literal syntax. I realise that this might get a bit complicated due to the whole mutable/immutable lists thing in Kotlin, but there are enough people who want it for a reason. Kotlin is trying to attract developers from the Python/Javascript/Swift worlds, this kind of thing is what will annoy those trying to make the transition.

People are getting on their high horses and spouting the importance of 'language principles' and not cluttering the language. Your language principles can be as solid as you want, but if nobody uses the language, then what's the point? Developers obviously expect these things - there is data backing it up. Is how I create my lists, or perform inline conditionals, really that big of a deal to not have both ways of doing it?

Statics

I think the whole static and companion deal is a bit of a mess in Kotlin. When writing Kotlin on the JVM, the concept of static is still very much a thing, and in certain places still a necessity. Want to create a JUnit method marked as BeforeAll/BeforeClass? Yeah, it’s just straight up annoying in Kotlin.

To create a simple static method in Kotlin, you have to go through the hassle of creating a companion object, plus specially annotate the function to tell the compiler to actually make it static. In this case, Java is actually less verbose than Kotlin (and by no small margin) and for what gain? They can do better here to appease those on the JVM.

companion object {
    @JvmStatic
    fun actuallyStatic() = println("That was a chore")
}

Now, I don't need static very often - really only for loggers/testing most of the time - but it's such a pain to do the above that I prefer the other (less efficient) routes of a logger per instance or a top-level variable. It seems like a bit of a tacked-on feature to solve these kinds of problems, and it ties into some of the efficiency problems discussed below. Static variables/methods are about as fast as it gets; it turns out companion objects are the complete opposite.

Efficiency

I don’t actually have any real data to back this up, but I think it’s common knowledge that the Kotlin compiler isn’t the fastest thing ever. To be honest I’m not too surprised, the amount of work it has to do is impressive. Nonetheless, when working on a Kotlin project and going back to Java, the compilation differences are noticeable to say the least. The Kotlin team have, and continue to do a lot of work to improve it though, so I hope it continues to get faster in the future.

Aside from the compiler, I feel the need to rant about general inefficiencies though. Maybe this is just a reason to moan about how we seem to have accepted that it's a good idea to use lambdas and streams literally everywhere. What happened to the good old for-loop? Things like streams aren't free - depending on what you're doing, there can be significant allocation and other overhead going on. Kotlin inline functions do a good job of resolving this, but they aren't usable everywhere.

There is a great series here which covers some of the hidden costs in using some of Kotlin's fancy language features. As discussed above, when you define a simple companion object, the compiler generates a bunch of boilerplate and indirection. I just wanted a simple static variable/function - why do I have to have all this extra stuff (yes, really, multiple new classes get generated for this)?

There is a widespread movement towards immutability, which don’t get me wrong, has many benefits. But it also encourages so much inefficiency. Want to increment that one integer inside this object? Let’s copy the whole thing. Hardware continues to improve, yet developers and language designers seem to find a way to add another layer of abstraction to render it moot.

Too many imports

I’ve been doing a fair amount of work with Ktor and Exposed recently, and can’t help but think that extension functions are getting massively abused at this point (already). Don’t get me wrong, they are great and the syntax they allow for is a big selling point, but when you start using some of these libraries, you realise that everything and their dog is a top level extension function. Literally everything.

Take the following snippet which defines a simple Ktor web service for example. The actual logic portion is nice and neat, but at the top we have 22 (!) imports.

import com.fasterxml.jackson.annotation.JsonInclude
import com.fasterxml.jackson.module.kotlin.jacksonObjectMapper
import io.ktor.application.call
import io.ktor.http.HttpStatusCode
import io.ktor.http.cio.websocket.Frame
import io.ktor.request.receive
import io.ktor.request.authorization
import io.ktor.request.receiveMultipart
import io.ktor.response.respond
import io.ktor.response.etag
import io.ktor.response.header
import io.ktor.response.respondFile
import io.ktor.response.respondText
import io.ktor.routing.Route
import io.ktor.routing.delete
import io.ktor.routing.param
import io.ktor.routing.get
import io.ktor.routing.post
import io.ktor.routing.put
import io.ktor.routing.route
import io.ktor.websocket.webSocket
// before any of our actual app imports

// implementation at https://github.com/raharrison/kotlin-ktor-exposed-starter/blob/master/src/main/kotlin/web/WidgetResource.kt

Kotlin has import * syntax, which the library makers tell everyone to use, but didn't we previously agree that this was a bad idea? Something about having to 'explicitly define your dependencies' or something? But hey, as long as we get our nice builder syntactic sugar, let's just make IntelliJ fold it up and forget it ever happened.
