Ryan Harrison - My blog, portfolio and technology related ramblings

Using Ktor with Jackson Serialization

Although the JSON serialization library used in a lot of the Ktor examples is GSON, which makes sense due to its simplicity and ease of use, in real-world use Jackson is probably the preferred option. It’s faster (especially when combined with the Afterburner module) and generally more flexible. Ktor comes with a built-in feature that makes using Jackson for JSON conversion very simple.
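
Registering the Afterburner module on an ObjectMapper is a one-liner. A rough sketch (assuming the separate jackson-module-afterburner artifact is on the classpath):

import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.module.afterburner.AfterburnerModule

// Afterburner speeds up serialization by generating bytecode at runtime
val mapper: ObjectMapper = ObjectMapper().registerModule(AfterburnerModule())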

Add Jackson dependency

In your build.gradle file add a dependency to the Ktor Jackson artifact:

dependencies {
    compile "io.ktor:ktor-jackson:$ktor_version"
}

This will add the Ktor JacksonConverter class which can then be used within the standard ContentNegotiation feature.

It also includes an implicit dependency on the Jackson Kotlin Module which must be installed in order for Jackson to handle data classes (which do not have an empty default constructor as Jackson expects).
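
As a minimal sketch of why this matters (the Gadget data class is purely hypothetical), registering the Kotlin module allows Jackson to deserialize into a data class even though it has no default constructor:

import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.module.kotlin.readValue
import com.fasterxml.jackson.module.kotlin.registerKotlinModule

// hypothetical data class - note there is no empty default constructor
data class Gadget(val id: Int, val name: String)

fun main() {
    val mapper = ObjectMapper().registerKotlinModule()
    // without the Kotlin module registered, this call would fail as Jackson
    // cannot instantiate Gadget through a no-arg constructor
    val gadget: Gadget = mapper.readValue("""{"id": 1, "name": "sprocket"}""")
    println(gadget)
}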

Install as a Converter

Then tell Ktor to use Jackson for serialization/deserialization for JSON content:

install(ContentNegotiation) {
    jackson {
        // extension method of ObjectMapper to allow config etc
        enable(SerializationFeature.INDENT_OUTPUT)
    }
}

which is the same as doing:

install(ContentNegotiation) {
    register(ContentType.Application.Json, JacksonConverter())
}

With the converter installed, any request to your Ktor server will be served with a JSON response as long as the Accept header of the request allows it.
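
For example, a minimal sketch of a Ktor module that responds with JSON (the Customer class and /customer route are purely illustrative):

import com.fasterxml.jackson.databind.SerializationFeature
import io.ktor.application.Application
import io.ktor.application.call
import io.ktor.application.install
import io.ktor.features.ContentNegotiation
import io.ktor.jackson.jackson
import io.ktor.response.respond
import io.ktor.routing.get
import io.ktor.routing.routing

// hypothetical response type, purely for illustration
data class Customer(val id: Int, val name: String)

fun Application.module() {
    install(ContentNegotiation) {
        jackson {
            enable(SerializationFeature.INDENT_OUTPUT)
        }
    }
    routing {
        // the Customer instance is serialized to JSON by the JacksonConverter
        get("/customer") {
            call.respond(Customer(1, "Joe"))
        }
    }
}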

Reuse an Existing Mapper

The above configuration is quick and easy; however, ObjectMapper instances are heavyweight objects and their configuration is generally shared across various areas of your app. Therefore, instead of creating a new ObjectMapper within the Ktor feature itself, initialise one for your application and point Ktor to it. You can then reuse the same mapper wherever it’s needed without re-initialising it every time:

object JsonMapper {
    // automatically installs the Kotlin module
    val defaultMapper: ObjectMapper = jacksonObjectMapper()

    init {
        defaultMapper.configure(SerializationFeature.INDENT_OUTPUT, true)
        defaultMapper.registerModule(JavaTimeModule())
    }
}

then use the alternate syntax to install the converter, passing in our pre-made ObjectMapper instance:

install(ContentNegotiation) {
    register(ContentType.Application.Json, JacksonConverter(defaultMapper))
}

You are then free to reuse the same JsonMapper.defaultMapper object across the rest of your app.
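
For instance, the shared mapper can be used for ad-hoc serialization anywhere else in the codebase (the Person class here is just a hypothetical DTO):

import com.fasterxml.jackson.module.kotlin.readValue

// hypothetical DTO, purely for illustration
data class Person(val id: Int, val name: String)

fun roundTrip(): Person {
    val json = JsonMapper.defaultMapper.writeValueAsString(Person(1, "Joe"))
    // readValue is a reified extension function from the Jackson Kotlin module
    return JsonMapper.defaultMapper.readValue(json)
}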


Ubuntu Server Setup Part 9 - Setup a Reverse Proxy with Nginx

In the previous part we covered how to set up Nginx as a web server to serve static content. In this part, we will configure Nginx as a reverse proxy (one of its other main use cases) to be able to access other services running locally on your server without opening up a dedicated port.

What is a Reverse Proxy?

A reverse proxy can be thought of as a simple ‘passthrough’, whereby specific requests made to your web server get routed to other applications running locally, and their responses are returned as though they were all handled by the one server. For example, say you wanted to give public access to a Python server you have running on port 8080. Instead of directly opening up the port and thus increasing the overall attack surface, Nginx can be configured to proxy certain requests to that server instead. This also has the advantage of easily enabling HTTPS for all services without having to configure each application separately, and you get all the other advantages of a high performance web server such as load balancing.

Follow the steps in the previous tutorial to set up Nginx and optionally enable HTTPS. The rest of this part assumes you have another server running on your machine, listening on localhost on port 8080.

Configure Nginx

Open up the main configuration file for your site:

$ sudo nano /etc/nginx/sites-available/yourdomain.com

server {
  listen 80;
  listen [::]:80;

  server_name yourdomain.com;

  location /otherapp {
      proxy_pass http://localhost:8080/;
  }
}

The proxy_pass directive is what makes this configuration a reverse proxy. It specifies that all requests which match the location block (in this case the /otherapp path) should be forwarded to port 8080 on localhost, where our other app is running.

Test the new configuration to see if there are any errors:

$ sudo nginx -t

If there are no errors present, reload the Nginx config:

$ sudo nginx -s reload

In a browser, navigate to your main public domain and append /otherapp to the end e.g. http://yourdomain.com/otherapp. Because the URL matches the location element in the config above, Nginx will forward the request to our other server running on port 8080.

Additional Options

For basic applications, the main proxy_pass directive should work just fine. However, as you would expect, Nginx offers a number of other options to further configure the behaviour of the reverse proxy.

In the below configuration, proxy buffering is switched off - this means that responses from the proxied server are passed back to the client synchronously, as they are received, rather than being buffered by Nginx first, which can be useful for some real-time apps. A custom header X-Original-IP is also set on the forwarded request, containing the IP from the original request (which can then be picked up by the other server as needed).

location /otherapp {
    proxy_pass http://localhost:8080/;
    proxy_buffering off;
    proxy_set_header X-Original-IP $remote_addr;
}
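
The forwarded header can then be read by the proxied application like any other request header. As a minimal sketch, assuming the app behind the proxy happened to be a Ktor server (any framework would work the same way):

import io.ktor.application.call
import io.ktor.response.respondText
import io.ktor.routing.get
import io.ktor.routing.routing
import io.ktor.server.engine.embeddedServer
import io.ktor.server.netty.Netty

fun main() {
    // the proxied app listening on localhost:8080
    embeddedServer(Netty, port = 8080) {
        routing {
            get("/") {
                // header name must match the one set in the Nginx config above
                val originalIp = call.request.headers["X-Original-IP"]
                call.respondText("Original client IP: $originalIp")
            }
        }
    }.start(wait = true)
}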

Setting up a Python Virtual Environment

You should set up a Python virtual environment to ensure that your library dependencies are consistent and segregated from your global packages. This can help to prevent potential versioning conflicts and makes it easier to package your app for use by others (or you, but on a different machine).

Previously, you had to install dedicated packages to create and manage Python virtual environments - such as pipenv or virtualenv. Python does however now come with its own solution, venv, which accomplishes much the same and can be run directly as a Python module without any installation steps.

Create a Virtual Environment

As the venv module comes preinstalled, you can create a new virtual environment by running:

python -m venv virtenv

This will create a new directory called virtenv in your current directory (you can call it whatever you want - the general naming convention is venv) which will include its own Python interpreter, pip installation and any packages you subsequently install once the environment is activated.

If you look inside the new directory, you will find it has its own Lib/site-packages structure (where any new packages will be installed), alongside its own Python / pip executables. The version of Python within your new virtual environment will be the same as the one you used to run the venv command above.

Activate the Environment

To ‘activate’ the virtual environment, you need to call the activate script that was created by the previous command. This sets a number of environment variables to point the python / pip commands to your newly created venv instead of the globally installed version - in effect creating a completely separate Python installation.

If on Windows - virtenv\Scripts\activate.bat

If on Linux/Mac - source virtenv/bin/activate

You should notice that your shell prompt now includes the name of the venv at the start. If you run the where / which commands to show the location of the python and pip executables, they should point to those inside the virtenv directory.

where python ==> \virtenv\Scripts\python.exe

where pip ==> \virtenv\Scripts\pip.exe

Install Packages

Running the pip list command shows that we don’t currently have anything installed (even if you had installed something globally).

pip (19.1.1)
setuptools (28.8.0)

You can now use the pip command to install packages as you would normally, e.g.

pip install requests

If we check the list of installed packages again, you can see that requests has been added. Note that this could be a different version to the one installed in the global site-packages.

certifi (2019.3.9)
chardet (3.0.4)
idna (2.8)
pip (19.1.1)
requests (2.22.0)
setuptools (28.8.0)
urllib3 (1.25.3)

If you check the virtenv\Lib\site-packages directory, you should find that requests has been installed there.

Generate requirements.txt

You can run the pip freeze command to generate a requirements file containing all the currently installed packages - helpful if you want to recreate the exact same environment on a different machine.

pip freeze > requirements.txt

The contents of which will be something like:

certifi==2019.3.9
chardet==3.0.4
idna==2.8
requests==2.22.0
urllib3==1.25.3

Installing Packages from requirements.txt

When on a new machine with another blank virtual environment, you can use the requirements.txt file generated by pip freeze to install all the packages required for your project at once:

pip install -r requirements.txt

Pip will run through each entry in the file and install the exact version number specified. This makes it easy to create consistent virtual environments - in which you know the exact version of every package installed without the hassle of installing each one manually.

Deactivate the Environment

To deactivate the virtual environment and return all the environment variables to their previous values (pointing instead to your global Python installation), simply run:

deactivate

The virtual environment name should be removed from your shell prompt to denote that no environment is currently active.


SSH Tunneling

SSH tunneling is the ability to use ssh to create bi-directional, encrypted network connections between machines, over which data can be exchanged (typically TCP/IP). This allows us to easily and securely make services available between machines with minimal effort, while at the same time leveraging ssh for user authentication (public-key) and encryption with little overhead.

Local Port Forwarding

$ ssh -nNT -L 8000:localhost:3306 user@server.com

The above command sets up an ssh tunnel between your machine and the server, and forwards all traffic sent to localhost:8000 (on your local machine) through to localhost:3306 (on the remote server).

Since port 3306 is the default for MySQL, you could now access a database running on your remote machine through localhost:8000 (as if it was set up and running locally). This is useful as you don’t have to open extra ports through the remote server’s firewall and handle the security implications, just to access a dev database for example. In this case, the MySQL instance is still not visible to the outside world (just how we like it).

In the above command, the -nNT options prevent a shell from being created, so we just get the port forwarding behaviour (not strictly needed but you probably don’t also want a new tty session).
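
To show what this looks like in practice, here is a rough sketch of connecting to the tunnelled MySQL instance from code - the database name and credentials are hypothetical and the MySQL JDBC driver is assumed to be on the classpath:

import java.sql.DriverManager

fun main() {
    // localhost:8000 is the local end of the ssh tunnel from the command above
    val url = "jdbc:mysql://localhost:8000/mydb"
    DriverManager.getConnection(url, "user", "password").use { conn ->
        conn.createStatement().executeQuery("SELECT 1").use { rs ->
            rs.next()
            println("Query over the tunnel returned: ${rs.getInt(1)}")
        }
    }
}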

Remote Port Forwarding

$ ssh -nNT -R 4000:localhost:3000 user@server.com

The above command sets up an ssh tunnel between your machine and the server, and forwards all traffic sent to localhost:4000 (on the remote server) through to localhost:3000 (on your local machine).

You could then access a service running locally (on your machine) on port 3000 via port 4000 on the remote server (again, as if it was running locally on the remote server). This is useful because it allows you to expose a locally running service through your server to others on the internet without having to deploy it or set it up on the server. Note: to get this working you also need to set GatewayPorts yes in the /etc/ssh/sshd_config file, as ssh doesn’t allow remote hosts to forward ports by default.

SOCKS Proxy

$ ssh -D 5000 -nNT user@server.com

The above command sets up a SOCKS proxy server (supporting the SOCKS4 and SOCKS5 protocols) using dynamic application-level port forwarding through the ssh tunnel. You can then configure your network proxy (within the browser or the OS) to use localhost:5000 as the SOCKS proxy, and when you browse, all the traffic is proxied through the ssh tunnel via your remote server.

  • It protects against eavesdropping (perhaps in an airport or coffee shop) since all the traffic is encrypted (even if you are accessing HTTP pages).
  • As all web traffic goes through the SOCKS proxy, you will be able to access web sites that your ISP/firewall may have blocked.
  • Potentially helps protect your privacy since the web services you access will see requests coming from the remote server and not from your local machine. This could prevent some (IP based) identity/location tracking for example.
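
Applications can also be pointed at the proxy programmatically. A minimal sketch using the standard java.net API (example.com is just a placeholder, and localhost:5000 matches the -D port from the command above):

import java.net.InetSocketAddress
import java.net.Proxy
import java.net.URL

fun main() {
    // route a single HTTP request through the SOCKS proxy created by ssh -D 5000
    val proxy = Proxy(Proxy.Type.SOCKS, InetSocketAddress("localhost", 5000))
    val connection = URL("https://example.com").openConnection(proxy)
    connection.getInputStream().bufferedReader().use { reader ->
        println(reader.readText().take(200))
    }
}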

Advanced Use Cases

The above use cases are the most common; however, they can be modified slightly and used in interesting ways to establish the ssh tunnel not only between your local machine and your server, but also with additional machines, either internal to your network or internal to your server’s network:

$ ssh -nNT -R 0.0.0.0:4000:192.168.1.101:631 user@server.com

  • Instead of using the default bind address, we explicitly use 0.0.0.0. This means that the service made available on the remote server on port 4000 (which forwards through to port 631 on the internal machine) will be accessible across all of the remote server’s network interfaces, including bridge networks and virtual networks such as those used by container environments like Docker.
  • Instead of using localhost as the forwarding destination, we have explicitly used the 192.168.1.101 IP address, which can be the address of another machine on your internal network (other than the machine you’re running the command on), such as a network printer. This lets you expose and use the internal network printer directly from the remote server, without any additional changes within your internal network, either on the router or on the printer itself.

This technique can also be used when doing local port forwarding or when setting up the SOCKS proxy server in a similar manner.


Testing RESTful Services in Kotlin with Rest Assured

If you’re not writing a Spring application, creating good integration tests for RESTful endpoints (or any other web service) isn’t always easy - especially when you aren’t working in a dynamically typed language. Rest Assured is a great library which makes the process a lot easier - it’s designed around use in Java, but of course we can use it just fine in Kotlin as well.

In the following examples, a simple Kotlin web service written with Ktor and Exposed is tested using Rest Assured and JUnit. Note that this isn’t a simple unit test of the endpoint - an actual instance of the server is started up and tested via requests to localhost.

Add Rest Assured as a dependency

The first step is to add Rest Assured as a test dependency in your project, just open up your build.gradle file and add the following to the dependencies section (3.3.0 is the latest version as of writing):

testCompile "io.rest-assured:rest-assured:3.3.0"

Create Kotlin aliases

Before we start using Rest Assured, because Kotlin is being used, a couple of function aliases need to be created, as some method names overlap with Kotlin keywords. In this case that’s when (which is pretty vital in Rest Assured), plus a helper function taking advantage of reified generics in Kotlin to convert a response object to the type we expect for further assertions.

protected fun RequestSpecification.When(): RequestSpecification {
    return this.`when`()
}

// allows response.to<Widget>() -> Widget instance
protected inline fun <reified T> ResponseBodyExtractionOptions.to(): T {
    return this.`as`(T::class.java)
}

Define a base Integration Test

In this example, all the concrete test cases which test our server endpoints will inherit from this base class. Because Ktor is being used, it’s very straightforward to start the server up at the start of the test run and close it down at the end.

At this point we can also pass configuration options to Rest Assured (there are plenty to check out). In this case we just set the base url and port so that in our test cases we can use relative URLs which are easier to read - /widget instead of http://localhost:8080/widget.

Because we also have access to any other source files in this base class, you can also define logic to set up the database as you would like in between tests - in this case, before each test we wipe the main Widget table in H2 to make sure every test starts from a blank slate.

open class ServerTest {

    companion object {

        private var serverStarted = false

        private lateinit var server: ApplicationEngine

        @BeforeAll
        @JvmStatic
        fun startServer() {
            if(!serverStarted) {
                server = embeddedServer(Netty, 8080, Application::module)
                server.start()
                serverStarted = true

                RestAssured.baseURI = "http://localhost"
                RestAssured.port = 8080
                Runtime.getRuntime().addShutdownHook(Thread { server.stop(0, 0, TimeUnit.SECONDS) })
            }
        }
    }

    @BeforeEach
    fun before() = transaction {
        Widgets.deleteAll() // refresh data before each test
        Unit
    }

}

Create tests using Rest Assured

Now you can start using Rest Assured to test your RESTful endpoints (or any other web service really). Each test case is just a simple JUnit test, so you get all the integration you would expect from using any other library. The base format is a given --> when --> then flow, whereby you first define any entity you wish to use (in a POST for example), then define your actual request with its URL, followed finally by assertions on the response object.

Rest Assured includes a lot of support for making assertions on the output JSON using JSON paths etc. However, I much prefer using the to helper we defined above to marshal the response back to our DTO objects. Some might frown at this approach as you shouldn’t be reusing your domain classes in tests - but the response objects should take the same format anyway, and I think we can agree that the test cases look a lot more readable this way. Plus, as an added benefit, you get to use your good and faithful assertion libraries - my favourite being AssertJ.

GET Requests

The below example shows testing out a GET request to our RESTful resource. The syntax is easy to follow, just create a GET request to the URL in question, make an assertion on the output status code and then extract the response body, converting it to a List of our model Widget class. Finally, just run assertions on the list to make sure it contains only the data you expect.

@Test
fun testGetWidgets() {
    // expected - assumes these widgets have already been inserted into the
    // database (e.g. via POST requests or a test data helper)
    val widget1 = NewWidget(null, "widget1", 10)
    val widget2 = NewWidget(null, "widget2", 5)

    val widgets = get("/widget")
        .then()
        .statusCode(200)
        .extract().to<List<Widget>>()

    assertThat(widgets).containsOnly(widget1, widget2)
}

POST Requests

Testing POST requests mostly follows the same format; however, in this case you start off with a given expression in which the body entity is defined, alongside the content type (JSON in this case). After that, the only difference is the request method. In the exact same manner as before, the output is extracted and similar assertions are run.

@Test
fun testUpdateWidget() {
    val update = NewWidget("id1", "updated", 46) // already exists
    val updated = given()
        .contentType(ContentType.JSON)
        .body(update)
        .When()
        .post("/widget")
        .then()
        .statusCode(200)
        .extract().to<Widget>()

    assertThat(updated).isNotNull
    assertThat(updated.id).isEqualTo(update.id)
    assertThat(updated.name).isEqualTo(update.name)
    assertThat(updated.quantity).isEqualTo(update.quantity)
}

Error Cases

As good testers, we of course want to test the negative cases as well. For typical RESTful services this will involve looking at the response status code and maybe checking that the response contains the correct error message:

@Test
fun testDeleteInvalidWidget() {
    delete("/widget/{id}", "-1")
        .then()
        .statusCode(404)
}

Docs

The Rest Assured usage guide is very comprehensive and gives a good overview of what Rest Assured can accomplish. In the examples above I have shown only the basic functionality - but to be honest, for a lot of cases this is all you really need.

The main differences you will see in other examples is that in a typical Rest Assured test, the body method is used to run Hamcrest matchers against certain JSON elements. You can also test forms, run JSON schema validations, test against XML and use JSONPath to access specific nodes.
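
As a quick sketch of that style (assuming static imports of io.restassured.RestAssured.get and org.hamcrest.Matchers.equalTo; the widget id and field value are hypothetical):

@Test
fun testGetWidgetByIdJson() {
    // assert directly against the JSON body using a Hamcrest matcher
    get("/widget/{id}", "id1")
        .then()
        .statusCode(200)
        .body("name", equalTo("widget1"))
}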

Find a lot more real-world use cases in the following two projects:

https://github.com/raharrison/kotlin-ktor-exposed-starter/tree/master/src/test/kotlin

https://github.com/raharrison/lynks-server/tree/master/src/test-integration/kotlin
