
Ryan Harrison My blog, portfolio and technology related ramblings

RESTful Kotlin with Ktor and Exposed

Updated for Ktor 1.0 and stable coroutines in Kotlin 1.3+

I’ve been writing a lot more Kotlin recently and have been really liking the language so far. I’ll probably write another post pointing out some of my favourite features, but in short it’s basically Java without all the annoying stuff. In terms of adoption it’s still very early days for Kotlin, but given the great interop with Java and its status as an official language for Android development, I wouldn’t be surprised if it becomes extremely popular over the next few years.

Kotlin is pretty versatile, even though most people no doubt focus on the Android side of things. That doesn’t mean however that server side development isn’t also supported - in fact, quite the opposite. The Spring framework already has built-in support for Kotlin and many other libraries are also focusing attention on it. You could use these Java-focused libraries, or instead reach for dedicated Kotlin libraries - some of which are supported by JetBrains themselves.

All the code for the following example is available in the GitHub project kotlin-ktor-exposed-starter.

Create a Kotlin Project

First things first on the way to a barebones REST server in Kotlin: open up IntelliJ and create a new Kotlin project. This will create the basic file structure and a Gradle build file. To make sure everything is working, you can run the basic Hello World:

fun main(args: Array<String>) {
    println("Hello World!")
}

Setting up Ktor Async Web Framework

Ktor is a great library for creating simple and lightweight web services in Kotlin. It’s completely asynchronous through the use of coroutines and as such should scale very well under load. It’s still under active development so might have some rough edges, but on the whole it’s solid and the documentation has improved significantly. Now that it has reached 1.0, the API should also be much more stable going forward.

Add the following to build.gradle to add Ktor as a dependency and allow the use of Kotlin coroutines (stable as of Kotlin 1.3). We are also using Jackson as our library of choice for JSON conversion (GSON support is also available).

repositories {
    maven { url "" }
    maven { url "" }
}

dependencies {
    compile "io.ktor:ktor-server-netty:$ktor_version"
    compile "io.ktor:ktor-jackson:$ktor_version"
}

In this case we’re making use of the Netty application server, although servlet-based options are also available (though it’s unclear why you would want to sacrifice the async benefits).

Create a Ktor application

The main configuration for a ktor app (module) is very straightforward:

fun Application.module() {
    install(DefaultHeaders)
    install(CallLogging)
    install(ContentNegotiation) {
        jackson {
            configure(SerializationFeature.INDENT_OUTPUT, true)
        }
    }
    install(Routing) {
        widget(WidgetService())
    }
}

fun main(args: Array<String>) {
    embeddedServer(Netty, 8080, module = Application::module).start()
}

Here a new Ktor module is created. Ktor is configured around the concept of features which can be installed into the main request pipeline. In this example we install features that add default headers to all our responses, log calls for debugging, and handle the conversion of JSON requests and responses. Finally, the main Routing feature designates which paths our app handles. We defer to an extension method defined elsewhere to define the routes for a widget RESTful service. To run the application, the main method starts a Netty server pointing to the module we just created.

Of course for anything to actually happen, we need to define the widget routes and service.

Defining Routes

Here is the definition of the widget extension method which defines the interface for our service:

fun Route.widget(widgetService: WidgetService) {

    route("/widget") {

        get("/") {
            call.respond(widgetService.getAllWidgets())
        }

        get("/{id}") {
            val widget = widgetService.getWidget(call.parameters["id"]?.toInt()!!)
            if (widget == null) call.respond(HttpStatusCode.NotFound)
            else call.respond(widget)
        }

        post("/") {
            val widget = call.receive<NewWidget>()
            call.respond(widgetService.addWidget(widget))
        }

        put("/") {
            val widget = call.receive<NewWidget>()
            call.respond(widgetService.updateWidget(widget))
        }

        delete("/{id}") {
            val removed = widgetService.deleteWidget(call.parameters["id"]?.toInt()!!)
            if (removed) call.respond(HttpStatusCode.OK)
            else call.respond(HttpStatusCode.NotFound)
        }
    }
}

As you can see, the Ktor DSL is very intuitive thanks mainly to extension methods and lambda parameter syntax in Kotlin. The basic HTTP methods are defined for dealing with widgets - each of which defers to our service, which can do all the database access etc.

Note that the post and put methods expect an instance of the NewWidget class (as converted via JSON). This is defined as a Kotlin data class for a widget instance with an optional id:

data class NewWidget(
    val id: Int?,
    val name: String,
    val quantity: Int,
    val dateCreated: Long
)

As we have set up Jackson before, we can just return a basic Kotlin data object and it will be converted to JSON without any additional work from us. Finally, we need to create the widget service to handle the logic of saving and retrieving our model.
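
The Widget response model itself isn’t shown in this post - a minimal sketch might look like the following, with the field names and types inferred from the toWidget mapping shown later (an assumption, not code from the starter project):

```kotlin
// Sketch of the Widget response model; fields inferred from the
// toWidget mapping later in this post
data class Widget(
    val id: Int,
    val name: String,
    val quantity: Int,
    val dateCreated: Long
)
```

Being a data class, equality, toString and copy come for free, and Jackson can serialise it to JSON without any extra annotations.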

Setting up Exposed

Exposed is another JetBrains sponsored library for database interactions in Kotlin. It is a kind of ORM, but unlike Hibernate it’s very simple and lightweight. In this post we’re going to use H2 as a simple in-memory database and HikariCP for connection pooling. Add the following dependencies:

compile "com.h2database:h2:$h2_version"
compile "org.jetbrains.exposed:exposed:$exposed_version"
compile 'com.zaxxer:HikariCP:2.7.8'

Exposed has two ways of interacting with databases - the DSL and the DAO. In this post I focus only on the DSL (SQL builder) as I think that’s where the library excels. The DAO syntax is nice, but introduces complexity when dealing with web frameworks as you have to convert to your own model classes manually. The following defines a Table for widgets:

object Widgets : Table() {
    val id = integer("id").primaryKey().autoIncrement()
    val name = varchar("name", 255)
    val quantity = integer("quantity")
    val dateCreated = long("dateCreated")
}

It’s quite straightforward: we define our columns as fields and use the fluent column builder to define attributes. We can then make use of the Widgets object application-wide to query the table.

Connection Pooling and Database Threads

Now we can set up a connection pool for database interaction. This example uses HikariCP as it’s the most widely used library for this at the moment:

private fun hikari(): HikariDataSource {
    val config = HikariConfig()
    config.driverClassName = "org.h2.Driver"
    config.jdbcUrl = "jdbc:h2:mem:test"
    config.maximumPoolSize = 3
    config.isAutoCommit = false
    config.transactionIsolation = "TRANSACTION_REPEATABLE_READ"
    return HikariDataSource(config)
}

Now we can tell Exposed to connect to our H2 db and create the widgets table:

Database.connect(hikari())

transaction {
    SchemaUtils.create(Widgets)
}

A key thing to note when dealing with the async world is that you really don’t want to block any of the threads that are handling web requests. Unlike the standard servlet model where each request is tied to a thread, when you block in an async app you are essentially also blocking any other work from being done. If you do this a lot or have a spike in load, your app will grind to a halt.

This presents a problem when using standard JDBC to query our database, because the API is inherently blocking and so our threads will stall while waiting on result sets. To get around this, we must run our database queries on a dedicated thread pool. This is only really practical with coroutines, which can suspend and resume as needed. The flow will be:

  1. Coroutine A starts to handle main web request from user
  2. A database query is needed, so another coroutine B is started on a separate thread pool to perform this blocking operation
  3. A suspends execution until B is finished. Due to the nature of coroutines, the underlying thread is then free to perform other work (handling other requests)
  4. Background coroutine B finishes after database query. Thread is returned to the thread pool for other queries etc
  5. A resumes execution by restoring the previous state it had before suspension. It now has access to the query results which can be passed back as the response. Note that the coroutine A may now be executing on a different thread than in step 1 (pretty cool right?)
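
The flow above is hard to demonstrate without a full Ktor app, but the underlying idea - hand blocking work to a dedicated pool so request-handling threads stay free - can be sketched with a plain JDK executor (the pool size here is illustrative; coroutines additionally let the caller suspend rather than block on `get()`):

```kotlin
import java.util.concurrent.CompletableFuture
import java.util.concurrent.Executors

// Dedicated pool for blocking work, playing the role of Dispatchers.IO
val ioPool = Executors.newFixedThreadPool(3)

// Run a blocking block (e.g. a JDBC query) off the caller's thread
fun <T> blockingQuery(block: () -> T): CompletableFuture<T> =
    CompletableFuture.supplyAsync(block, ioPool)

fun main() {
    val result = blockingQuery {
        Thread.sleep(50) // stand-in for waiting on a database result set
        "widget-1"
    }
    // A coroutine would suspend here; a plain future forces us to block on get()
    println(result.get())
    ioPool.shutdown()
}
```

The key difference is step 3: with coroutines the caller’s thread is released during the wait, whereas `get()` above pins it until the result arrives.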

This might sound like a lot of work (and it is), but thanks to the coroutines library in Kotlin, this is thankfully very easy to accomplish. The following helper method, which is used across all database interaction in our service class, runs a block of code inside a transaction in this new coroutine. Dispatchers.IO references a thread pool managed by Kotlin coroutines that is meant for blocking IO operations like these. Once called, this function will suspend the current coroutine and launch a new one on the special IO thread pool - which will then block whilst the database transaction is performed. When the result is ready, the coroutine is resumed and returned to the initial caller.

suspend fun <T> dbQuery(block: () -> T): T =
    withContext(Dispatchers.IO) {
        transaction { block() }
    }

The method is marked as suspend, which allows the suspension of coroutine A described in step 3.

Database Queries with Exposed

Finally, we need to define the WidgetService which will be making use of the database we just set up. The whole code is available in the GitHub project, but here is the method to retrieve a specific widget:

suspend fun getWidget(id: Int): Widget? = dbQuery {
    { eq id }
        .mapNotNull { toWidget(it) }
        .singleOrNull()
}

As you can see we make use of the dbQuery helper to perform our query. The Exposed DSL for queries is nice and easy to read. The result of the select is a ResultRow, so I define a helper to perform the mapping to our model class:

private fun toWidget(row: ResultRow): Widget =
    Widget(
        id = row[],
        name = row[],
        quantity = row[Widgets.quantity],
        dateCreated = row[Widgets.dateCreated]
    )

Something like Hibernate (or the DAO in Exposed) would do this automatically, but Exposed is just a lightweight wrapper around the SQL so we have full control of what’s happening. Here are the methods to add and delete a widget - again fairly intuitive to read:

suspend fun addWidget(widget: NewWidget): Widget {
    var key: Int? = 0
    dbQuery {
        key = Widgets.insert {
            it[name] =
            it[quantity] = widget.quantity
            it[dateCreated] = System.currentTimeMillis()
        } get
    }
    return getWidget(key!!)!!
}

suspend fun deleteWidget(id: Int): Boolean = dbQuery {
    Widgets.deleteWhere { eq id } > 0
}

And that’s it! Pretty straightforward in terms of lines of code to create a REST server with database interaction. Start the app as you would any other program (no need to deploy to any app server) and test out the widget routes.

The full example is available in the GitHub project kotlin-ktor-exposed-starter.


Programs to install on a New Build

Below is a list of all the software I tend to install straight away on a new build (or simply when reinstalling Windows from time to time). I’m big on keeping installed software to an absolute minimum - mainly to prevent general slowdown over time - so this list isn’t that long. These programs can however handle pretty much anything I need, even if extra utilities are needed later on.

Browsers: the core of your computer these days

Chrome - my main browser and has been for quite a while now. Sure it’s a massive resource hog, but what’s the point in having RAM if it’s sitting idle? Still probably the fastest browser around and the most popular.

Firefox - mainly installed as a backup which gets used every so often. The new Firefox Quantum update has improved the situation dramatically and maybe I’ll try it as my main driver if Google screws things up.

Browser Extensions: pretty much mandatory if you want any kind of sane browsing experience

uBlock Origin - ad/tracker blocker. A must have (or alternative). The web sucks these days without it. My soul dies a little inside every time I have to use a browser without some kind of adblocking - we’ve really screwed up the internet with the mountain of Javascript, popups and auto-playing videos plaguing every site.

LastPass - if you aren’t using a password manager of some kind, I recommend you revisit that decision. LastPass and their extension have been working great for me.

Google Mail Checker - displays an icon in the toolbar linking directly to your GMail account, also shows the number of unread messages.

JSONView - if you ever look at JSON in Chrome, this is a must to get some nice formatting.

Again, I like to keep this list to a minimum as Chrome starts to slow down and consume even more resources the more you have. If you do need loads, I recommend disabling them until you actually need to use them.

Text Editors: for when you want to edit some text

Notepad++ - small, fast and feature rich replacement to the standard Windows Notepad. Great for any light text editing that doesn’t require a full-blown editor/IDE.

Visual Studio Code - probably the best editor around now after pretty much wiping the floor with Atom and Sublime. The amount of updates each month is insane and the extensions are very mature at this point. See here for the extensions I use.

Dev: tools and IDE of choice

Git - because you wouldn’t version control any differently these days now would you?

JDK - I mainly develop on the JVM (which whatever you think of Java is a great piece of tech). P.S - Kotlin is awesome.

Intellij - one IDE to rule them all. Does everything in every language, what can I say?

WSL (Windows Subsystem for Linux) - Ubuntu install for Windows for various utils.

Node - because apparently I need some way to install 3 thousand packages for a ‘Hello World’ webapp.

Media: for when you want to not do anything productive

K-Lite Codec Pack (MPC-HC) - can play pretty much anything you can ever come across and bundles in Media Player Classic which is my favourite media player.

Spotify - not much to say, does the job and I haven’t seen any need to try out any other service.

IrfanView - the built-in Windows 10 Photos app is absolutely terrible in every way imaginable.

Networking: connecting to other machines

FileZilla - SFTP client although not really needed anymore as WSL and rsync are a thing. Still small/lightweight enough to keep around.

Postman - great program to create and send HTTP requests. The de facto choice at this point for testing web services.

Private Internet Access - current VPN provider. Never had any problems with it, speeds are good and the client is solid.

PuTTY - still solid as ever even if you can use WSL for ssh now.

Games: launchers for the actual games

Steam - not much more to say about this. If it’s not on Steam I probably don’t want to play it.

Origin - because Battlefield is sadly not on Steam.

Monitoring: because you need to keep an eye on those temps after you overclock

HWMonitor (portable) - simple, lightweight and easy to read measurements across your system.

HwInfo64 - a more heavyweight alternative to HWMonitor, the number of readings it gives is comprehensive to say the least.

Misc: random tools and utilities

F.lux - remove blue light from your life.

CCleaner - still hanging around, runs every so often to delete temp files.

WinRar - yes the interface is outdated, but I only ever use the explorer context menu items. For me a staple for many years.

Again, this is just the barebones list that I tend to immediately install on a fresh copy of Windows. Things tend to accumulate over time, but I still try to keep it to a minimum.


Ubuntu Server Setup Part 2 - Secure Login

Before reading this, make sure to go over part 1 which covers initial login and setting up a new user.

In the previous section we covered logging into the server with the root user. At that point we were using a simple password, which is less than ideal. In this part we will be setting up public key authentication for the new user in order to better secure our logins. Login to the root user will be disabled via ssh as well, forcing you to go through your newly created user and use sudo commands to get root access.

Generating an RSA public/private keypair

On Windows you can use the free PuTTYgen utility which is bundled with PuTTY.

Open the app and select SSH-2 RSA under the Key menu. Then hit Generate and provide some mouse movements to generate some randomness.


The top textbox will contain the newly generated public key, which will be deployed onto the remote server. Save both the public and private keys in a safe place. Remember: you never want to give anyone/anything your private key.

The utility will save the private key in the .ppk format which PuTTY can understand. You can choose to export into the more generic OpenSSH format if needed (e.g. to use with the ssh command under WSL).

Copy the contents of the top textbox into the clipboard as this will be what will be saved into the remote server in order to authorise you.

If you are using Linux you can use the ssh-keygen command instead to generate the keys.

Installing the public key

Login to the remote server under the new user you wish to secure (currently using a password although we will now change that).

If you are still the root user run su - <user>

In the home directory create a new .ssh directory which will house the public key.

$ mkdir ~/.ssh

Change the permissions to ensure that only the user can read or write to the directory.

$ chmod 700 ~/.ssh

Create a new file called authorized_keys and open it using the nano editor:

$ nano .ssh/authorized_keys

Paste your public key into this file, then Ctrl+X followed by Y to save and exit.

Change the permissions on the new key file so again only the current user can read or write to it.

$ chmod 600 ~/.ssh/authorized_keys

Login using public key authentication

Now the public key is installed onto the server and you have the corresponding private key on your local machine, it’s time to login using them. In PuTTY, go to the Connection -> Data -> Auth tab and navigate to the .ppk private key in the bottom field:


If you’re using the ssh command, place the file under ~/.ssh/id_rsa and it will use it automatically. Otherwise you can pass in the path to the private key as you login:

$ ssh -i ~/.ssh/private_key [email protected]

Disable root login

In order to further secure the server, it’s best to prevent direct login to the root user. I have also changed the port for ssh to something other than 22 to prevent a lot of automated attacks and disabled password authentication (forcing you to use public keys).

$ nano /etc/ssh/sshd_config

PermitRootLogin no
Port 23401
PasswordAuthentication no
AllowUsers Fred Wilma

Reload the ssh daemon to reflect the changes:

$ sudo systemctl reload sshd

With these settings active, you will be forced into logging in via the Fred or Wilma users (root being disabled) by public key authentication on port 23401.


Helpful Extensions for Visual Studio Code


Icons or Material Icons

Much needed icons for pretty much every common folder/file combination you can imagine.

File Utils

A convenient way of creating, duplicating, moving, renaming and deleting files and directories. Similar to the Sidebar Enhancement extension in Sublime Text. Again, this is something I see no reason couldn’t be integrated directly into VSCode.

Code Runner

Run code snippets or code file for many languages directly from the editor. Run the selected code snippet/file or provide a custom command as needed. Kind of surprised that VSCode doesn’t have this built in.



Python

Rich support for the Python language (including Python 3.6), with features such as linting, debugging, IntelliSense, code navigation, code formatting, refactoring, unit tests and snippets. Definitely a must have if you do any Python development at all in VSCode.

React Code Snippets

This extension contains code snippets for Reactjs and is based on the babel-sublime-snippets package. Pretty much a must have if you do any React development and use snippets.

ES6 Code Snippets

This extension contains code snippets for JavaScript in ES6 syntax for VS Code editor (supports both JavaScript and TypeScript). Very useful for class definitions, import, exports etc.



ESLint

Integrates ESLint into VS Code. It can be very picky at times and suggests issues that I sometimes don’t care about, but you can get it to a decent place after some customisation.

The extension uses the ESLint library installed in the opened workspace folder. If the folder doesn’t provide one the extension looks for a global install version (npm install -g eslint for a global install).

Markdown Lint

Provides linting for the Markdown language. Includes a library of rules to encourage standards and consistency for Markdown files. It is powered by markdownlint for Node.js which is based on markdownlint for Ruby.

Code Spell Checker

A basic spell checker that works well with camelCase code. The goal of this spell checker is to help with catching common spelling errors while keeping the number of false positives low.

I only use this for Markdown files as a spell checker and it does an ok job. It’s probably the best extension that provides this functionality, but it’s still fairly limited. I wish the dev team would integrate this feature natively. You have to click on the quick fix menu (lightbulb icon) to see spelling suggestions as opposed to right clicking on the word as you would think. I guess this is a limitation of the extension framework so there’s definitely some room for improvements.


Auto Close Tag

Automatically add HTML/XML close tags. Same as how Visual Studio or Sublime Text do it so very useful if you’re used to that behaviour already.

Path Intellisense

Extension that autocompletes filenames from the local workspace. E.g. typing ./ will suggest all files in the current folder. Very handy.

I’m no doubt missing a bunch of other great extensions, but I try to limit the number to keep things as responsive as possible. Visual Studio Code is already a resource hog (pointing at you, Electron) without a bunch of background addons making the problem worse.


A Better Alternative to Google Authenticator

2-factor authentication is, for very good reasons, becoming increasingly popular as a way to further protect yourself online. The sole use of passwords has long been inadequate for secure authentication and so has been augmented by additional systems. A lot of online services provide SMS messages as a main method for 2-factor authentication, whereby a code will be sent to your phone. This solves part of the problem, but is still susceptible to the inherent insecurity of SMS as a whole, let alone SIM cloning and number spoofing issues.

As a better alternative, many providers have been offering the use of TOTP (Time-based One Time Passwords) to generate such codes. The protocol behind this is open, however the most popular implementation is by far the Google Authenticator app, which allows you to scan QR codes to add accounts and will constantly generate one-time-use codes as needed. Its popularity has also meant that most online services directly link to the app and include it in their usage instructions for 2FA auth.
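
The protocol itself (TOTP, specified in RFC 6238 on top of HMAC-based HOTP) is simple enough to sketch in pure Kotlin using only the JDK’s crypto classes - this is an illustration of the open standard, not Google’s or Authy’s actual code:

```kotlin
import java.nio.ByteBuffer
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

// Minimal TOTP (RFC 6238) sketch: HMAC-SHA1 over the current 30-second time step
fun totp(secret: ByteArray, epochSeconds: Long, digits: Int = 6, stepSeconds: Long = 30): String {
    // The moving factor is the number of time steps since the Unix epoch, big-endian
    val counter = ByteBuffer.allocate(8).putLong(epochSeconds / stepSeconds).array()
    val mac = Mac.getInstance("HmacSHA1")
    mac.init(SecretKeySpec(secret, "HmacSHA1"))
    val hash = mac.doFinal(counter)
    // Dynamic truncation: 4 bytes starting at the offset given by the low nibble of the last byte
    val offset = hash.last().toInt() and 0x0f
    val binary = ((hash[offset].toInt() and 0x7f) shl 24) or
            ((hash[offset + 1].toInt() and 0xff) shl 16) or
            ((hash[offset + 2].toInt() and 0xff) shl 8) or
            (hash[offset + 3].toInt() and 0xff)
    var mod = 1
    repeat(digits) { mod *= 10 }
    return (binary % mod).toString().padStart(digits, '0')
}

fun main() {
    // RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59 seconds
    println(totp("12345678901234567890".toByteArray(), 59, 8)) // prints 94287082
}
```

Any app implementing this maths against the same shared secret will produce the same codes, which is exactly why alternatives to Google Authenticator are possible.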

Google Authenticator app

The Problem

The Google Authenticator app is all well and good, works well and is very easy to use. It does however open up another problem - what do you do when you lose your phone? It’s pretty plausible that for a significant number of users, their phone will either be lost, broken or stolen whilst they are using it to generate 2FA codes. What can you do when you can no longer login to many of your accounts because you aren’t able to generate the TOTP?

Many websites will also give you another security code when you enable 2-factor authentication, that you can use in this exact case. But isn’t that kind of defeating the whole point? Where are people going to store this code? You’re pretty screwed if you lose this recovery code, so you might end up writing it down somewhere insecure or store it online somewhere equally insecure. In my opinion, this is solving a problem by creating a new one.

And that’s only taking into account those sites which do offer you a recovery code. For the no doubt significant number which do not, you are locked out of your account if you lose your phone. It’s going to be on a case-by-case basis whether a provider will let you back in if you contact them, but I’m not sure how they are going to know it’s you. For any site that stores sensitive data, I don’t see this as an option.

A Solution - Authy

Maybe a lot of users will be put off enabling 2FA for this reason, or more likely a lot of people have never really thought about the potential consequences. Either way, just like your main data, you need to also have a solid backup solution for your 2FA codes.

I mentioned before that the TOTP protocol is not proprietary - so it can be implemented by anyone. I think many assume this technology is something Google have magicked up, but in reality there are a number of alternative apps out there.

One such app is called Authy, which aims to solve the problem mentioned above. In the basic sense, it is very similar to Google Authenticator, whereby you scan the same QR codes and it generates TOTP codes for you. The difference however, is that it provides a method of automatic backup of your accounts. In a similar manner to conventional password managers, such as LastPass which you should definitely be using, Authy will encrypt and upload your account strings up to their servers when you add them to the app. This is tied to a password you specify, which they don’t ever know - so if you trust password managers then this should be no different.

Your account itself is tied to your phone number, so when you lose your physical device, you can recover all your accounts as long as you keep your number. There are also features which allow sharing of your accounts with your other devices in a similar manner.

Authy app

Yes, I know you can just screenshot the QR codes which are generated, or add them to your other devices at the same time, but this is putting all the pressure of the backup on the user. Where are you meant to store the QR codes (how do you backup the backup?), will you encrypt them, how are you going to keep them in sync etc? Again, in this case you are solving a problem by generating another problem - for yourself.

It’s not perfect

The app isn’t perfect. For such a simple set of use cases, I have no idea why the app misses on some key features to make it more user friendly (and more approachable over the Google offering).

  • You can tie your accounts to a predefined set of providers that the Authy developers maintain (e.g. Facebook, Google, Amazon etc.). By doing so you get a nice looking logo and some customised colours for your troubles. This does make the app look a lot nicer, but you rely on the site being in the set that the developers give you. Why the hell can I not provide my own logo? Why the hell can other users not upload their own customisations? Why the hell isn’t the existing set bigger? I mean seriously, the look and feel of the app is one of the main selling points given by the devs themselves; this should be so easy to add and it contributes to one of your main features. The Google Authenticator app does look bland in comparison - but only when I don’t have to use the crappy ‘other account’ template.
  • You can rename your accounts to what you like, but this name doesn’t seem to be used when you choose the grid view. Why? Do you think I changed the name just for fun? If I changed it then it’s because I want to see it. The changed names are even used in the list view!
  • The QR scanner isn’t great. I mean, it’s definitely functional for sure, but it’s nowhere near as good as the one used in the Google Authenticator app. You have to really line up the code in the camera and get it into focus for it to work. In the Google app I can just point it somewhere close and it picks it up immediately.

For sure I am nitpicking with these annoyances, but if you want to draw people away from an app provided by Google, then you’re going to have to get it completely right. Hopefully the devs can get on top of this, because for me the main selling point - automated backups - works very well. For most users I would still definitely recommend the Authy app (or others which offer similar features) over the Google Authenticator app.
