Ryan Harrison My blog, portfolio and technology related ramblings

Ubuntu Server Setup Part 9 - Setup a Reverse Proxy with Nginx

In the previous part we covered how to set up Nginx as a web server to serve static content. In this part, we will configure Nginx as a reverse proxy (another of its main use cases) so you can access other services running locally on your server without opening up a dedicated port.

What is a Reverse Proxy?

A reverse proxy can be thought of as a simple ‘passthrough’, whereby specific requests made to your web server get routed to other applications running locally, with their responses returned as though they were all handled by the one server. For example, say you wanted to give public access to a Python server you have running on port 8080. Instead of directly opening up the port, and thus increasing the overall attack surface, Nginx can be configured to proxy certain requests to that server instead. This also has the advantage of easily enabling HTTPS for all services without having to configure each application separately, and you get all the other advantages of a high-performance web server, such as load balancing.

Follow the steps in the previous tutorial to set up Nginx and, optionally, enable HTTPS. The rest of this part assumes you have another server running on your machine, listening on localhost port 8080.

Configure NGINX

Open up the main configuration file for your site:

$ sudo nano /etc/nginx/sites-available/yourdomain.com

server {
  listen 80;
  listen [::]:80;

  server_name yourdomain.com;

  location /otherapp {
      proxy_pass http://localhost:8080/;
  }
}
The proxy_pass directive is what makes this configuration a reverse proxy. It specifies that all requests matching the location block (in this case the /otherapp path) should be forwarded to port 8080 on localhost, where our other app is running.
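Worth knowing is that the trailing slash on the proxy_pass URL changes how the path is rewritten. A short sketch of the difference (the /otherapp/status path is purely illustrative):

```nginx
# With a URI part ("/") on proxy_pass, the matched location prefix is replaced:
# /otherapp/status  ->  http://localhost:8080/status
location /otherapp/ {
    proxy_pass http://localhost:8080/;
}

# Without a URI part, the full original path is forwarded unchanged:
# /otherapp/status  ->  http://localhost:8080/otherapp/status
location /otherapp/ {
    proxy_pass http://localhost:8080;
}
```

Pick whichever behaviour your upstream application expects - apps that are unaware they sit behind a proxy usually want the prefix stripped.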

Test the new configuration to see if there are any errors:

$ sudo nginx -t

If there are no errors present, reload the Nginx config:

$ sudo nginx -s reload

In a browser, navigate to your main public domain and append /otherapp to the end e.g. http://yourdomain.com/otherapp. Because the URL matches the location block in the config above, Nginx will forward the request to our other server running on port 8080.

Additional Options

For basic applications, the main proxy_pass directive should work just fine. However, as you would expect, Nginx offers a number of other options to further configure the behaviour of the reverse proxy.

In the below configuration, proxy buffering is switched off - this means that responses from the proxied server are passed to the client synchronously, as they are received, instead of being buffered by Nginx first, which can be useful for some real-time apps. A custom header X-Original-IP is also set on the forwarded request, containing the IP from the original request (which can then be picked up by the other server as needed).

location /otherapp {
    proxy_pass http://localhost:8080/;
    proxy_buffering off;
    proxy_set_header X-Original-IP $remote_addr;
}
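In the same vein, a few conventional headers are commonly set so the upstream app can reconstruct details of the original request. A sketch using the standard de-facto header names (adjust to whatever your app actually reads):

```nginx
location /otherapp {
    proxy_pass http://localhost:8080/;
    # Pass the original host and client details through to the upstream app
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```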

Setting up a Python Virtual Environment

You should set up a Python virtual environment to ensure that your library dependencies are consistent and segregated from your global packages. This helps prevent versioning conflicts and makes it easier to package your app for use by others (or you, but on a different machine).

Previously, you had to install dedicated packages - such as pipenv or virtualenv - to create and manage Python virtual environments. Python now comes with its own solution, venv, which accomplishes much the same and can be run directly as a Python module without any installation steps.

Create a Virtual Environment

As the venv module comes preinstalled, you can create a new virtual environment by running:

python -m venv virtenv

This will create a new directory called virtenv in your current directory (you can call it whatever you want - the common convention is venv), which will include its own Python interpreter, pip installation and any packages you subsequently install once the environment is activated.

If you look inside the new directory, you will find it has its own Lib/site-packages structure (where any new packages will be installed), alongside its own Python / pip executables. The version of Python within your new virtual environment will be the same as the one you used to run the venv command above.
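As a quick sketch of that layout (shown here for Linux/macOS, where the directory is bin rather than Scripts; assumes python3 is on your PATH):

```shell
# Create a throwaway environment and inspect its contents
python3 -m venv demo-venv
ls demo-venv/bin                 # activate, pip, python, ...
demo-venv/bin/python --version   # same version as the python3 used to create it
```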

Activate the Environment

To ‘activate’ the virtual environment, you need to call the activate script created by the previous command. This sets a number of environment variables to point the python / pip commands to your newly created venv instead of the globally installed versions - in effect giving you a completely separate Python installation.

If on Windows - virtenv\Scripts\activate.bat

If on Linux/Mac - source virtenv/bin/activate (note the bin directory in place of Scripts)

You should notice that your shell prompt changes to include the name of the venv at the start. If you now run the where / which commands to show the location of the executables, they should point to those located in the virtenv directory.

where python ==> \virtenv\Scripts\python.exe

where pip ==> \virtenv\Scripts\pip.exe

Install Packages

Running the pip list command shows that we don’t currently have anything installed (even if you had installed something globally).

pip (19.1.1)
setuptools (28.8.0)

You can use the pip command now to install packages as you would normally e.g.

pip install requests

If we check the list of installed packages again, you can see that requests has been added. Note that this could be a different version to the one installed in the global site-packages.

certifi (2019.3.9)
chardet (3.0.4)
idna (2.8)
pip (19.1.1)
requests (2.22.0)
setuptools (28.8.0)
urllib3 (1.25.3)

If you check the virtenv\Lib\site-packages directory, you should find that requests has been installed there.

Generate requirements.txt

You can run the pip freeze command to generate a requirements file containing all the currently installed packages - helpful if you want to recreate the exact same environment on a different machine.

pip freeze > requirements.txt

The contents of which will be something like:

certifi==2019.3.9
chardet==3.0.4
idna==2.8
requests==2.22.0
urllib3==1.25.3
Installing Packages from requirements.txt

When on a new machine with another blank virtual environment, you can use the requirements.txt file generated by pip freeze to install all the packages required for your project at once:

pip install -r requirements.txt

Pip will run through each entry in the file and install the exact version number specified. This makes it easy to create consistent virtual environments - in which you know the exact version of every package installed without the hassle of installing each one manually.

Deactivate the Environment

To deactivate the virtual environment and return all the environment variables to their previous values (pointing instead to your global Python installation), simply run:

deactivate
The virtual environment name should be removed from your shell prompt to denote that no environment is currently active.
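Putting the steps above together, the whole lifecycle looks like this (Linux/macOS shown; assumes python3 is installed):

```shell
python3 -m venv virtenv                      # create the environment
. virtenv/bin/activate                       # activate it (updates PATH)
python -c "import sys; print(sys.prefix)"    # prints a path inside virtenv
pip freeze > requirements.txt                # snapshot installed packages
deactivate                                   # restore the original environment
```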


SSH Tunneling

SSH tunneling is the ability to use ssh to create bi-directional encrypted network connections between machines, over which data (typically TCP/IP) can be exchanged. This allows us to easily & securely make services available between machines with minimal effort, while at the same time leveraging ssh for user authentication (public-key) and encryption with little overhead.

Local Port Forwarding

$ ssh -nNT -L 8000:localhost:3306 user@server.com

The above command sets up an ssh tunnel between your machine and the server, and forwards any traffic sent to localhost:8000 (on your local machine) on to localhost:3306 (on the remote server).

Since port 3306 is the default for MySQL, you could now access a database running on your remote machine through localhost:8000 (as if it was setup and running locally). This is useful as you don’t have to configure the remote server to allow extra ports through the firewall and handle the security implications of locking all your services down just to access a dev database for example. In this case, the MySQL instance is still not visible to the outside world (just how we like it).
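For example, with the tunnel above in place, the remote database could be reached with a standard client. A sketch - assumes the MySQL client is installed locally and that valid credentials (dbuser here is a placeholder) exist on the remote instance:

```shell
# Connect to the remote MySQL instance via the local end of the tunnel
mysql -h 127.0.0.1 -P 8000 -u dbuser -p
```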

In the above command, the -nNT options prevent a shell from being created, so we just get the port forwarding behaviour (not strictly needed but you probably don’t also want a new tty session).

Remote Port Forwarding

$ ssh -nNT -R 4000:localhost:3000 user@server.com

The above command sets up an ssh tunnel between your machine and the server, and forwards any traffic sent to localhost:4000 (on the remote server) back to localhost:3000 (on your local machine).

You could then access a service running locally on port 3000 through port 4000 on the remote server (again, as if it was running locally on the remote server). This is useful because it allows you to expose a locally running service through your server to others on the internet, without having to deploy it to / set it up on the server. Note: to get this working you also need to set GatewayPorts yes in the /etc/ssh/sshd_config file, as ssh doesn’t allow remote hosts to forward ports by default.


Dynamic Port Forwarding (SOCKS Proxy)

$ ssh -D 5000 -nNT user@server.com

The above command sets up a SOCKS proxy server (supporting the SOCKS4 and SOCKS5 protocols) using dynamic application-level port forwarding through the ssh tunnel. You can then configure your network proxy (within the browser or the OS) to use localhost:5000 as the SOCKS proxy, and when you browse, all the traffic is proxied through the ssh tunnel via your remote server. This has a few benefits:

  • It protects against eavesdropping (perhaps in an airport or coffee shop) since all the traffic is encrypted (even if you are accessing HTTP pages).
  • As all web traffic goes through the SOCKS proxy, you will be able to access web sites that your ISP/firewall may have blocked.
  • Potentially helps protect your privacy since the web services you access will see requests coming from the remote server and not from your local machine. This could prevent some (IP based) identity/location tracking for example.
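As a quick check that the proxy is working - a sketch assuming the tunnel above is running and curl is installed:

```shell
# Route a single request through the SOCKS proxy;
# --socks5-hostname also resolves DNS on the remote side
curl --socks5-hostname localhost:5000 https://example.com
```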

Advanced Use Cases

The above-mentioned use cases are the most common; however, they can be modified slightly and used in interesting ways to establish the ssh tunnel not only between your local machine and your server, but also additional machines, either internal to your network or internal to your server’s network:

$ ssh -nNT -R 0.0.0.0:4000:<printer-ip>:631 user@server.com

  • Instead of using the default bind address, 0.0.0.0 is used explicitly. This implies that the service made available on the remote server on port 4000 (forwarded from local port 631) will be accessible internally to the remote server’s network across all network interfaces, including bridge networks & virtual networks such as those used by container environments like Docker.
  • Instead of using localhost as the bind address on the local side, the IP address of another machine internal to your network is used (<printer-ip> above - a network printer, say, rather than the machine you run the command on). This allows you to expose and use the internal network printer directly from the remote server, without any additional changes within your internal network, either on the router or on the printer itself.

This technique can also be used while doing a local port forwarding or for setting up the socks proxy server in a similar manner.


Testing RESTful Services in Kotlin with Rest Assured

If you’re not writing a Spring application, creating good integration tests for RESTful endpoints (or any other web service) isn’t always the easiest - especially when you aren’t working in a dynamically typed language. Rest Assured is a great library which makes the process a lot easier - it’s designed around use in Java, but of course we can use it just fine in Kotlin as well.

In the following examples, a simple Kotlin web service written with Ktor and Exposed is tested using Rest Assured and JUnit. Note that this isn’t a simple unit test of the endpoint, an actual instance of the server is started up and tested via requests to localhost.

Add Rest Assured as a dependency

The first step is to add Rest Assured as a test dependency in your project, just open up your build.gradle file and add the following to the dependencies section (3.3.0 is the latest version as of writing):

testCompile "io.rest-assured:rest-assured:3.3.0"

Create Kotlin aliases

Before we start getting into using Rest Assured, because Kotlin is being used, a couple of function aliases need to be created, as some methods overlap with Kotlin keywords. In this case when (which is pretty vital in Rest Assured), plus a helper function taking advantage of reified generics in Kotlin to convert a response object to the type we expect for further assertions.

protected fun RequestSpecification.When(): RequestSpecification {
    return this.`when`()
}

// allows response.to<Widget>() -> Widget instance
protected inline fun <reified T> ResponseBodyExtractionOptions.to(): T {
    return this.`as`(T::class.java)
}

Define a base Integration Test

In this example, all the concrete test cases which test our server endpoints will inherit from this base class. Because Ktor is being used, it’s very straightforward to start the server up at the start of the test run and close it down at the end.

At this point we can also pass configuration options to Rest Assured (there are plenty to check out). In this case we just set the base url and port so that in our test cases we can use relative URLs which are easier to read - /widget instead of http://localhost:8080/widget.

Because we also have access to any other source files in this base class, you can also define logic to setup the database as you would like in between tests - in this case, before each test we wipe the main Widget table in H2 to make sure every test starts from a blank slate.

open class ServerTest {

    companion object {

        private var serverStarted = false

        private lateinit var server: ApplicationEngine

        @BeforeAll // JUnit 5 lifecycle annotations assumed
        @JvmStatic
        fun startServer() {
            if (!serverStarted) {
                server = embeddedServer(Netty, 8080, Application::module)
                server.start() // boot the Ktor server once for the whole test run
                serverStarted = true

                RestAssured.baseURI = "http://localhost"
                RestAssured.port = 8080
                Runtime.getRuntime().addShutdownHook(Thread { server.stop(0, 0, TimeUnit.SECONDS) })
            }
        }
    }

    @BeforeEach
    fun before() = transaction {
        Widgets.deleteAll() // refresh data before each test
    }
}

Create tests using Rest Assured

Now you can start using Rest Assured to test your RESTful endpoints (or any other web service really). Each test case is just a simple JUnit test, so you get all the integration you would expect from using any other library. The base format is a given --> when --> then flow, whereby first you define any entity you wish to use (in a POST for example), then define your actual request with its URL, followed finally by assertions on the response object.

Rest Assured includes a lot of support for making assertions on the output JSON using JSON paths etc. However I much prefer using the to helper we defined above to marshal the response back to our DTO objects. Some might frown at this approach as you shouldn’t be reusing your domain classes in tests - but the response objects should take the same format anyway and I think we can agree that the test cases look a lot more readable this way. Plus as an added benefit, you get to use your good and faithful assertion libraries - my favourite being AssertJ.

GET Requests

The below example shows testing out a GET request to our RESTful resource. The syntax is easy to follow, just create a GET request to the URL in question, make an assertion on the output status code and then extract the response body, converting it to a List of our model Widget class. Finally, just run assertions on the list to make sure it contains only the data you expect.

@Test
fun testGetWidgets() {
    // expected
    val widget1 = NewWidget(null, "widget1", 10)
    val widget2 = NewWidget(null, "widget2", 5)

    val widgets = get("/widget")
            .then()
            .statusCode(200)
            .extract().to<List<NewWidget>>()

    assertThat(widgets).containsOnly(widget1, widget2)
}


POST Requests

Testing out POST requests mainly follows the same format, however in this case you start off with a given expression where the body entity is defined, alongside the content type (JSON in this case). After that, the only difference is the request method. In the exact same manner as before, the output is extracted and similar assertions are run.

@Test
fun testUpdateWidget() {
    val update = NewWidget("id1", "updated", 46) // already exists
    val updated = given().contentType(ContentType.JSON).body(update)
            .When().put("/widget")
            .then().statusCode(200).extract().to<NewWidget>()
    assertThat(updated).isEqualTo(update)
}


Error Cases

As good testers we of course want to test the negative cases as well, for typical RESTful services this will involve looking at the response status code and maybe checking that the response contains the correct error message etc:

@Test
fun testDeleteInvalidWidget() {
    delete("/widget/{id}", "-1")
            .then().statusCode(404)
}


The Rest Assured usage guide is very comprehensive and gives a good overview of what Rest Assured can accomplish. In the examples above I have shown only the basic functionality - but to be honest, for a lot of cases this is all you really need.

The main differences you will see in other examples is that in a typical Rest Assured test, the body method is used to run Hamcrest matchers against certain JSON elements. You can also test forms, run JSON schema validations, test against XML and use JSONPath to access specific nodes.

Find a lot more real-world use cases in the following two projects:




Typora - A Better Markdown Editor

VS Code Doesn’t Cut It

Since VS Code came into popularity, I had always used it to write blog posts like this one in markdown. It worked just fine - it’s really just a text editor, but there are autocomplete templates and syntax highlighting etc. for markdown files.

For small dev markdown files and documentation this is all that’s really needed, but for longer pieces of text, I found myself wanting something a little better (more like Word I will admit). Even with all the extensions available for markdown and the live preview version you can get in another pane, there are some big holes in the overall experience when you want to get away from the markup. I can’t imagine writing a book etc in VS Code for example, even if markdown is a pretty good option for it.

VS Code markdown editor

The live preview is good, but I never really found myself using it apart from quick checks to see that I hadn’t screwed up the markup and that everything looked the way I intended. For the actual writing, however, my attention was forced onto the other pane - the markdown itself, which, apart from the syntax highlighting, could really be any generic text editor. You have the nice shiny live version sitting right next to you, with all the nice CSS applied, but you can’t really use it much because the actual editing happens elsewhere - a shame really.

The other really big hole in VS Code is the lack of good spell/grammar checking. Yes, there are a few extensions available for this, but they don’t hit the mark. One even relies on sending your text to an external web service to report back on spelling errors, seriously. There is one I used that held a local database, but it was far from extensive and you can’t right click on a word to change it. In VS Code all ‘quick fixes’ like this have to go through the lightbulb menu near the gutter - very annoying. I really hope VS Code gets updated to include a good built-in spell/grammar checker.


In Typora, you basically get to edit directly in the VS Code live preview equivalent. The actual underlying markdown is still there, but is kept behind the scenes and is not a distraction in the main editing experience. Things work pretty much how you would expect in most rich text editors, the significant difference being that Typora is converting everything to markdown for you.

Typora markdown editor

The overall user experience in Typora takes on a minimal and distraction free form. In the left panel, you have your project markdown files, and then you have the great looking preview straight in front of your eye - I think it looks great. The main preview is GitHub like by default, although there are other themes available as well.

You write the actual text the same way as you would do in VS Code, but elements are converted into the final result live as you type. For example, starting off a paragraph with a hash will look just like you would expect. Give it some content and hit return, however, and you have the generated header right there. The same goes for bullet points and images - type the markup and see the results live. It also makes error handling in the markup a lot easier: if the element doesn’t show up, you must have done something wrong.

Typora is built as an Electron app, which is a bit of a shame as you’ll be running yet more Chrome instances, and it is far from conservative in terms of download size and memory usage. But to be honest, for desktop development that seems to be the only real option nowadays, and it is far from the worst example of an Electron app I have seen.

Oh, and did I mention that it has built in spell check which works the way you would expect!?

Typora spell check

Typora is still in beta and receives constant updates. Plus, it’s also still free until it has a full stable release. I would definitely recommend it for your markdown needs.
