
Ubuntu Server Setup Part 5 - Install Git, Ruby and Jekyll

This part will take care of installing everything necessary to allow the new server to host your personal blog (or other Jekyll site). As a prerequisite, you will also need some kind of web server installed (such as Nginx or Apache) to take care of serving your HTML files over the web. Part 4 covers the steps for my favourite - Nginx.

Install Git

As I store my blog as a public repo on GitHub, Git first needs to be installed to allow the repo to be cloned and new changes to be pulled. Git is available in the Ubuntu repositories so can be installed simply via apt:

sudo apt install git

You might also want to modify some Git config values. This is only really necessary if you plan on committing changes from your server (so that your commit is linked to your account). As I only tend to pull changes, this isn’t strictly required.

git config --global color.ui true
git config --global user.name "me"
git config --global user.email "email"

Helpful Git Aliases

Here are a few useful Git aliases from my .bashrc. You can also add aliases through Git itself via git config, as shown below.

alias gs='git status'
alias ga='git add'
alias gaa='git add .'
alias gp='git push'
alias gpom='git push origin master'
alias gpu='git pull'
alias gcm='git commit -m'
alias gcam='git commit -am'
alias gl='git log'
alias gd='git diff'
alias gdc='git diff --cached'
alias gb='git branch'
alias gc='git checkout'
alias gra='git remote add'
alias grr='git remote rm'
alias gcl='git clone'
alias glo='git log --pretty=format:"%C(yellow)%h\\ %ad%Cred%d\\ %Creset%s%Cblue\\ [%cn]" --decorate --date=short'

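If you prefer to keep them within Git itself rather than your shell config, the same kind of shortcuts can be defined through git config (the alias names below are just examples):

git config --global alias.st status
git config --global alias.co checkout
git config --global alias.lg "log --oneline --decorate --graph"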

Install Ruby

Ruby is also available in the Ubuntu repositories. You will need build-essential as well, so that gems with native extensions can be compiled.

sudo apt install ruby ruby-dev build-essential

It’s a good idea to also tell Ruby where to install gems - in this case your home directory, via the GEM_HOME environment variable. The following lines are added to .bashrc to ensure this change is kept for new shell sessions:

echo '# Install Ruby Gems to ~/gems' >> ~/.bashrc
echo 'export GEM_HOME=$HOME/gems' >> ~/.bashrc
echo 'export PATH=$HOME/gems/bin:$PATH' >> ~/.bashrc
source ~/.bashrc

You should now be able to run ruby -v to ensure everything is working.

To get more control over the Ruby installation (install new versions or change versions on the fly), check out rbenv or rvm.

Install Jekyll

Once Ruby is installed, the Jekyll gem can be installed via gem:

gem install jekyll bundler

I also use some extra Jekyll plugins, which can be installed as gems:

gem install jekyll-paginate
gem install jekyll-sitemap

As the path to the Ruby gems directory has been added to the PATH (in the previous section), the jekyll command should now be available:

jekyll -v
jekyll build

Automated Build

Here is a simple bash script which pulls the latest changes from Git, builds the Jekyll site and copies the output to a directory served by your web server (the default location is /var/www/html).

#!/bin/bash

echo "Pulling latest from Git";
cd ~/blog/ && git pull origin master;

echo "Building Jekyll Site";
jekyll build --source ~/blog/ --destination ~/blog/_site/;
echo "Jekyll Site Built";

echo "Copying Site to /var/www/html/";
cp -rf ~/blog/_site/* /var/www/html/;
echo "Site Copied Successfully";

Testing WebSockets

Like any other web service, WebSockets also need to be tried out and tested. The only problem is that they aren’t quite as easy to deal with as standard REST endpoints, as you can’t just point at a URL and inspect whatever output is sent back. WebSocket connections are persistent, so instead you need some way of holding on to the connection in order to see messages as they arrive at various intervals, as well as to send ad-hoc messages down the wire.

Postman is generally my go-to choice for testing any web related service, and for pretty much any standard web service it works great. Unfortunately though, Postman doesn’t have the capability of handling WebSockets (yet, I hope), meaning that other tools must be used if you want a quick and dirty way of displaying/sending messages.

In the Browser

The simplest way to see what’s happening is to use the browser itself - just as the connection would probably be used on the real site later on. Using the built-in developer tools, you can open up an ad-hoc WebSocket connection and interact with it as required.

Open Console

Open up the Developer Tools (F12) and go to the Console tab (Firefox works similarly). Here you can enter WebSocket related commands as necessary without having to run a dedicated site/server.

Note: If you are not running a secured WebSocket (i.e. not using the wss: protocol), you will have to visit an HTTP site before you open the console. This is because the browser will not allow unsecured WebSocket connections to be opened from what should otherwise be a secured HTTPS page.

The below example runs through the code needed to open a WebSocket connection, send content to the server and log the output as it is received:

Open Connection

ws = new WebSocket("ws://localhost:8080/ws"); // create new connection

Listen to events

// When the connection is open
ws.onopen = function () {
  ws.send('Ping');
};

// Log errors
ws.onerror = function (error) {
  console.log('WebSocket Error ' + error);
};

// Log messages from the server
ws.onmessage = function (e) {
  console.log('From Server: ' + e.data);
};

Send Messages

// Sending a String
ws.send('your message');

Close Connection

ws.close() // not necessarily required

WsCat

The web browser approach works well enough, but it is a bit cumbersome to have to paste in the code each time. There are however many tools which abstract this away into helpful command line interfaces. wscat is one such terminal-based tool which makes testing WebSockets just about as easy as it gets.

There isn’t much to wscat: just point it at your server URL and it will log out any messages received or send any as you type them. It’s based on Node (see below for similar alternatives in other environments), so just install it through npm and run it directly within the console.

https://github.com/websockets/wscat

$ npm install -g wscat
$ wscat -c ws://localhost:8080/ws
connected (press CTRL+C to quit)
> pong
< ping
> ping
< pong

Other Tools

Here are some other related tools (most just like wscat). This GitHub repo guide also has plenty of other WebSocket related tools you might want to check out.

https://github.com/thehowl/claws

  • Go based
  • JSON formatting and pipes

https://github.com/esphen/wsta

  • Rust based
  • most advanced
  • very pipe friendly
  • configuration profiles

https://github.com/progrium/wssh

  • Python based
  • equivalent of wscat if Node is not your thing

Ubuntu Server Setup Part 4 - Setup Nginx Web Server

Serving web pages is one of the most common and useful use cases for a cloud server. Nginx is popular and handles some of the largest sites on the web. Its configuration is simple but very powerful, and Nginx often uses fewer resources than an equivalent Apache server.

Install Nginx

Nginx is available in the default Ubuntu repositories, so installation is simple through apt:

$ sudo apt update
$ sudo apt install nginx

That’s all you need to do for the base install of Nginx. By default, the service is started and Nginx includes a simple default landing page (located in /var/www/html) which you should now be able to access via the web.
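
If you are logged into the server itself, you can also quickly check that the default page is being served locally (assuming curl is installed):

$ curl -I http://localhost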

Access through the Web

First, make sure that Nginx is running on your system. If using a modern Ubuntu server installation, you can do this via systemd:

$ sudo systemctl status nginx
● nginx.service - A high performance web server and a reverse proxy server
   Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
...

If Nginx is not already running, use the following to start the service:

$ sudo systemctl start nginx

# other useful commands
$ sudo systemctl stop nginx
$ sudo systemctl restart nginx
$ sudo systemctl reload nginx # reload config without dropping connections
$ sudo systemctl disable nginx # don't start nginx on boot
$ sudo systemctl enable nginx # do start nginx on boot

Also check that your firewall (if any) is set up to allow connections on port 80 (for HTTP). Refer to the previous part in this series for instructions using ufw.
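
For convenience, if you are using ufw this usually just means allowing the built-in Nginx application profiles:

$ sudo ufw allow 'Nginx HTTP'
$ sudo ufw allow 'Nginx HTTPS'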

Now you can check that everything is working correctly by accessing your web server through the internet. If you don’t already know the external IP for your server, run the following command:

$ dig +short myip.opendns.com @resolver1.opendns.com

When you have your server’s IP address, enter it into your browser’s address bar. You should see the default Nginx landing page.

http://your_server_ip

Customise Nginx Config

All of the Nginx configuration files are stored within /etc/nginx/, and the directory is laid out similarly to an Apache installation.

To create a new configuration - server block in Nginx, virtual host in Apache - first create a file within /etc/nginx/sites-available. It is good convention to use the domain name as the filename:

$ sudo nano /etc/nginx/sites-available/yourdomain.com

Within this file, create a new server block structure:

server {
        listen 80;
        listen [::]:80;

        root /var/www/html;
        index index.html index.htm index.nginx-debian.html;

        server_name yourdomain.com www.yourdomain.com;

        location / {
                try_files $uri $uri/ =404;
        }
}

This server block will listen to requests on port 80 (HTTP requests) and will serve resources from the default /var/www/html directory. This can be changed as necessary - ideally a dedicated root directory per server block. The server_name is set to the domain name(s) you wish to serve. This is useful if you want to add HTTPS via Let’s Encrypt later on.

Next, this server block needs to be enabled by creating a symlink within the /etc/nginx/sites-enabled directory:

$ sudo ln -s /etc/nginx/sites-available/yourdomain.com /etc/nginx/sites-enabled/

You may also wish to delete the default configuration file unless you want to fall back to the defaults:

$ sudo rm /etc/nginx/sites-enabled/default

As we have added additional server names (our domains), it is a good idea to adjust the server names hash bucket size to avoid potential conflicts later on:

$ sudo nano /etc/nginx/nginx.conf

Find the server_names_hash_bucket_size directive and remove the # symbol to uncomment the line:

...
http {
    ...
    server_names_hash_bucket_size 64;
    ...
}
...

Finally, it’s time to restart Nginx in order to reload our config. But first, you can see if there are any syntax errors in your files:

$ sudo nginx -t

If there aren’t any problems, restart Nginx to enable the changes:

$ sudo systemctl restart nginx

Nginx will now serve requests for yourdomain.com (assuming you have set up an A DNS record pointing to your server). Navigate to http://yourdomain.com and you should see the same landing page as before. Any new files added to /var/www/html will also be served by Nginx under your domain.
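
To confirm that the DNS record is resolving correctly, you can compare the output of dig for your domain (a placeholder here) against your server’s IP address:

$ dig +short yourdomain.com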

Enable HTTPS

If you already have SSL certificates for your domain names, you can easily set up Nginx to handle HTTPS requests. Make sure that your firewall is set up to allow connections on port 443 first:

server {
        listen 443 ssl;
        listen [::]:443 ssl;

        root /var/www/html;
        index index.html index.htm index.nginx-debian.html;

        server_name yourdomain.com www.yourdomain.com;

        location / {
                try_files $uri $uri/ =404;
        }

        ssl_certificate /etc/ssl/certs/example-cert.pem;
        ssl_certificate_key /etc/ssl/private/example.key;

        ssl_session_cache shared:le_nginx_SSL:1m;
        ssl_session_timeout 1440m;

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers "ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS";
}

The above uses the same configuration as Let’s Encrypt to set strong ciphers and disable old versions of SSL. This should get you an A rating in SSL Labs’ SSL Test. I will also add a post on setting up Let’s Encrypt with Nginx to automate the process of using free SSL certificates for your site.
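
You will most likely also want to redirect plain HTTP traffic over to the new HTTPS server block. A minimal extra server block along the following lines (adjust the server_name values to your own domains) takes care of this:

server {
        listen 80;
        listen [::]:80;

        server_name yourdomain.com www.yourdomain.com;

        # permanently redirect to the HTTPS version of the same URL
        return 301 https://$host$request_uri;
}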

Custom Error Pages

By default, Nginx will display its own error pages in the event of a 404/50x error etc. If you have your own versions, you can use the error_page directive to specify a new path. Open up your server block config and add the following:

server {
    ...
    error_page 404 /custom_404.html;
    error_page 500 502 503 504 /custom_50x.html;
    ...
}

If required, you can also specify a completely new location (not in the main root directory of the server block) for your error pages by providing a location block which resolves the specified error page path:

server {
    ...
    error_page 404 /custom_404.html;
    location = /custom_404.html {
        root /var/html/custom;
        internal;
    }
    ...
}

Log File Locations

  • /var/log/nginx/access.log: Every request to your web server is recorded in this log file unless Nginx is configured to do otherwise.
  • /var/log/nginx/error.log: Any Nginx errors will be recorded in this log file.
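
To watch requests hitting the server in real time, you can follow the access log (you will likely need sudo to read it):

$ sudo tail -f /var/log/nginx/access.log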

Ubuntu Server Setup Part 3 - Installing a Firewall

By default, your server may not come with a firewall enabled - meaning that external users will have direct access to any applications listening on any open port. This is of course a massive security risk and you should generally seek to minimise the surface area exposed to the public internet. This can be done using some kind of firewall - which will deny any traffic to ports that you haven’t explicitly allowed.

I personally only allow a few ports through the firewall and make use of reverse proxies through Nginx to route traffic to internal apps. That way you can have many applications running on your server, but all traffic is run through port 443 (with HTTPS for free) first.
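
As a rough sketch (the path and port here are just placeholders for one of your internal apps), such a reverse proxy is simply another location block within your Nginx server block:

location /myapp/ {
        proxy_pass http://127.0.0.1:3000/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
}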

UFW Installation

The simplest firewall is ufw (Uncomplicated Firewall), which may already come pre-installed on your server. If it doesn’t, you can install it by running:

$ sudo apt install ufw

Once installed, check that the ufw service is running:

$ sudo service ufw status

Configure Firewall Rules

The first thing you want to do is ensure that the port SSH is running on is allowed through the firewall (port 22 by default). If you don’t, you won’t be able to log in to your server once the firewall is enabled!

$ sudo ufw allow 22
or
$ sudo ufw allow ssh

Then start the firewall by running:

$ sudo ufw enable

Command may disrupt existing ssh connections. Proceed with operation (y|n)? y
Firewall is active and enabled on system startup

If you have a web server running, you will notice that any HTTP or HTTPS requests no longer work. That’s because we need to allow ports 80 and 443 through the firewall:

$ sudo ufw allow http
$ sudo ufw allow https

Your web server will now be properly accessible again. You can list the currently enabled rules in ufw by running:

$ sudo ufw status

Status: active

To                         Action      From
--                         ------      ----
22                         ALLOW       Anywhere
80/tcp                     ALLOW       Anywhere
443/tcp                    ALLOW       Anywhere
22 (v6)                    ALLOW       Anywhere (v6)
80/tcp (v6)                ALLOW       Anywhere (v6)
443/tcp (v6)               ALLOW       Anywhere (v6)

ufw also comes with some default app profiles:

$ sudo ufw app list

Available applications:
  Nginx Full
  Nginx HTTP
  Nginx HTTPS
  OpenSSH
  Postfix
  Postfix SMTPS
  Postfix Submission

You can then pass in the app name to the allow/deny commands:

$ sudo ufw allow OpenSSH

Refer to my post on Common Port Mappings to find out which ports you might need to allow through your firewall.

List and remove rules

To delete a rule, you first need to get the index:

$ sudo ufw status numbered

[ 1] 22                         ALLOW IN    Anywhere
[ 2] 80/tcp                     ALLOW IN    Anywhere
[ 3] 443/tcp                    ALLOW IN    Anywhere
...

If you wanted to delete the 443 (https) rule, pass the index 3 into the delete command:

$ sudo ufw delete 3

Deleting:
 allow 443/tcp
Rule deleted

Finally you can disable the firewall by running:

$ sudo ufw disable

Allow or Deny Specific IPs

You can also allow or deny access from specific IP addresses. For example, to allow connections only from 151.80.44.180:

$ sudo ufw allow from 151.80.44.180

Or to allow access to only port 22 from that specific IP:

$ sudo ufw allow from 151.80.44.180 to any port 22

Similarly, if you want to deny all connections from a specific IP, use:

$ sudo ufw deny from 151.80.44.180

Kotlin - Add Integration Test Module

The default package structure for a new Kotlin project generated through IntelliJ looks like the following: a main source folder containing source sets (modules) for your main source files and your test source files.

Kotlin Default Project

Typically, you would place your unit tests within the auto-generated test module, and then run them all at once (within one JVM). IntelliJ is generally set up to support this use case, and if that’s all you need, it requires minimal setup and effort.

However, if you also need to add integration tests (or end-to-end etc), then this project structure can start to cause issues. For example, consider a typical project setup for a server-side app:

  • main - business logic and main app files
  • test - unit tests,
    • typically with JUnit or similar
    • spin up in-memory H2 database for easy DAO testing
  • test-integration - integration/e2e tests
    • typically testing API endpoints with Rest Assured or similar
    • start up full version of the server and any dependencies

You can’t merge all the tests into one module and run them all at once, because you would need to start up multiple database instances etc. Conflicts arise, and it becomes apparent that you need to run them separately, each in their own dedicated JVM.

To add the above mentioned test-integration module, you can make some edits to your build.gradle file to define a new source set (IntelliJ module):

sourceSets {
    testIntegration {
        java.srcDir 'src/testIntegration/java'
        kotlin.srcDir 'src/testIntegration/kotlin'
        resources.srcDir 'src/testIntegration/resources'
        compileClasspath += main.output
        runtimeClasspath += main.output
    }
}

Here, a new source set for integration tests is created. Gradle is told where the Java and Kotlin source files live and we specify that the classpath inherits from the main source set. This allows you to reference classes of your main module within the integration tests (you might not need this).

Then, we provide a configuration and task for the new source set to ensure that the new module contains the same dependencies as within the main test module (defined using testCompile in your dependencies). Finally, define a new Task to run the integration tests, pointing it to the classes and classpath of the testIntegration source set instead of the inherited defaults from test:

configurations {
    testIntegrationCompile.extendsFrom testCompile
    testIntegrationRuntime.extendsFrom testRuntime
}

task testIntegration(type: Test) {
    testClassesDirs = sourceSets.testIntegration.output.classesDirs
    classpath = sourceSets.testIntegration.runtimeClasspath
}
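
With this in place, the integration tests can be run in their own JVM, separately from the standard unit tests (assuming you use the Gradle wrapper):

./gradlew testIntegration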

Similarly to how you might have previously set the target bytecode version for the main and test modules, you need to do the same for the new module:

compileTestIntegrationKotlin {
    kotlinOptions.jvmTarget = "1.8"
}

If you run Gradle with the option to ‘Create directories for empty content roots automatically’, you should see a new module get created. You might notice one issue though: the new module is not marked as a test module within IntelliJ. You could do this manually, but it would get reset every time Gradle runs. To override this, you can apply the idea plugin and add the source directories of the new source set:

idea {
    module {
        testSourceDirs += project.sourceSets.testIntegration.java.srcDirs
        testSourceDirs += project.sourceSets.testIntegration.kotlin.srcDirs
        testSourceDirs += project.sourceSets.testIntegration.resources.srcDirs
    }
}

Now you will see the desired output after Gradle runs:

With Integration tests module

WARNING - This approach is not without problems. If you look at the Test Output Path of the new module, it is defined as \kotlin-scratchpad\out\test\classes which is the same as the main test module. Therefore, all the compiled test classes will end up in the same directory - which causes issues if you try to Run All for example. To fix this, you have to manually update the path to \kotlin-scratchpad\out\testIntegration\classes. Alternatively, you might not apply the idea plugin and just mark the module for tests each time Gradle runs. Hopefully I will find a fix for this at some point.
