
Ryan Harrison My blog, portfolio and technology related ramblings

How to use Google DNS Servers

If you are frequently running into the Resolving Host status message in Chrome and/or are generally having slow page loads, it could be because your DNS lookups are taking longer than they should. Unsurprisingly, the DNS servers provided by your ISP can be pretty bad, but you are free to use other open alternatives (the two most common being Google and OpenDNS) which could give you faster responses.

Follow these steps to use the open Google DNS servers within Windows 10 (there are plenty of alternative guides online for other OS’s):

Start -> Settings -> Network & Internet

Click on Change adapter options

Select which network adapter you are using (WiFi/Ethernet depending on your setup). Right click and choose Properties.

In the list of configuration options select Internet Protocol Version 4 (TCP/IPv4). Then click Properties.

Adapter Properties

In the bottom section, select Use the following DNS server addresses. Fill the boxes with the following depending on which provider you wish to use:

Google DNS

  • Preferred DNS server:
  • Alternate DNS server:


  • Preferred DNS server:
  • Alternate DNS server:
For Google DNS, it should look like the following:

Configure DNS Servers

Hit OK and you should be good to go. There are also equivalent IP addresses for IPv6 if you need them. Hopefully your DNS lookups will now be a little more performant. You might have the potential downside of Google knowing even more about your browsing habits, but if you use Chrome then they probably know all that already - so you might as well enjoy a faster experience!


Python - RESTful server with Flask

The Flask library for Python is a great microframework for setting up simple web servers. Larger sites or REST interfaces might tend towards the Django framework instead, but I’ve found Flask excellent for putting together small sites or a couple of endpoints with next to no effort. The API is very Pythonic, so of course you can get up and running with very few lines of code. I currently use Flask for the backend API services for this site - which power the search page, contact page and automated Jekyll builds using GitHub hooks.


To install and start using Flask, just use pip:

$ pip install Flask

Simple Example

The most basic endpoint looks like:

from flask import Flask
app = Flask(__name__)

def hello_world():
    return 'Hello, World!'

We imported the main Flask class and created a new instance passing in the name of the current module as an identifier (so Flask knows where to look for static files and templates). A simple function, which in this case just returns a String, can be decorated with route to define the URL which will trigger the function.

Running on a development server

There are a couple ways to run the above example. The first is the way recommended by the Flask team:

$ export
$ flask run
 * Running on

This is fine on Linux boxes (you can also use set instead of export on Windows), but setting an environment variable on Windows is a bit of a pain, so instead you can start the server via code. Apparently this might cause issues with live reload, but Flask starts up so quickly it’s not too much of an issue:

if __name__ == '__main__':'')

You can then navigate to http://localhost:5000 and you will see the return value of the hello_world function. You can easily return HTML or JSON objects as needed depending on what services you wish to build.

Handling GET Requests

I have focused mainly on using Flask to create basic RESTful web endpoints instead of serving HTML - which Flask can do very well using the Jinja2 templating engine. The below snippet shows how to create a simple endpoint to handle GET requests to retrieve a user by their unique id. The returned object from our dummy service is converted into a JSON response via the built in jsonify function:

from flask import jsonify

# here the user_id parameter is restricted to an int type

@app.route('/user/<int:user_id>', methods=['GET'])
def get_user(user_id):
    # get the user from some service etc
    user = user_service.find_user(user_id)
    # return the user as a JSON object
    return jsonify(user)

Handling POST Requests

The below snippet shows how we can handle a POST request, taking in a JSON object and returning a response from our service:

from flask import request

@app.route('/user/', methods=['POST'])
def save_user():
    # retrieve the json from the request
    new_user = request.get_json(silent=True)
    created_user = user_service.create_user(new_user)
    # return the newly created user as a json object
    return jsonify(created_user)
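Both handlers reference a user_service that isn't defined in the snippets. A minimal stand-in (purely illustrative - an in-memory dict rather than a real data store, and all names here are assumptions) might look like:

```python
class UserService:
    """Hypothetical in-memory user store, for illustration only."""

    def __init__(self):
        self._users = {}
        self._next_id = 1

    def create_user(self, data):
        # assign a new id and remember the user
        user = dict(data, id=self._next_id)
        self._users[self._next_id] = user
        self._next_id += 1
        return user

    def find_user(self, user_id):
        # returns None for unknown ids
        return self._users.get(user_id)

user_service = UserService()
created = user_service.create_user({"name": "alice"})
print(user_service.find_user(created["id"]))  # {'name': 'alice', 'id': 1}
```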

As you can see, setting up simple endpoints is very quick and easy using Flask. The framework also offers a ton of other useful features including:

  • built-in development server and debugger
  • integrated unit testing support
  • RESTful request dispatching
  • Jinja2 templating
  • support for secure cookies (client side sessions)
  • great documentation

Flask website

Quickstart guide


ElasticSearch for your Jekyll Blog

Search functionality is very helpful to have in pretty much any website, but something that’s not particularly easy to achieve in a static Jekyll site. Fully fledged blog solutions such as Wordpress give you a partial solution (no full text search) for free, however you also have to deal with all the associated bloat and the need for a database running in the background. On statically generated sites, you have to roll your own. Most of the solutions on the internet seem to lean towards doing full text search completely on the client side using a library such as LunrJs. This works well, but you end up having to ship your whole site to the client as a JSON blob before you can perform the search. For smaller sites this might be fine, but otherwise that file can get quite large when it has to include all the content across your entire site - no thanks.

My, perhaps heavy handed, solution (which won’t work for GitHub Pages) is to use a small ElasticSearch instance on the server side to provide great full text search across your site. It takes a little more work to set up, but once you have it all automated you can just leave it and still take advantage of all the capabilities of ElasticSearch.

I put together elastic-jekyll which is a small Python library that you can use to automatically index and search across your entire Jekyll blog. I’ll cover below how it all fits together and how to use it.

Parsing your Posts

The first step in the process is to find all of your posts within your site and create an in-memory representation of them with all the attributes we require. In this case the library will try to go through ~/blog/_posts unless you pass in another path. Once all of the markdown files are found, each one is parsed using BeautifulSoup to extract the title and text content:

def parse_post(path):
    with open(path, encoding="utf8") as f:
        contents =

        soup = BeautifulSoup(contents, 'html.parser')
        title = soup.find('h1', { "class" : "post-title" }).text.strip()
        post_elem = soup.find("div", {"class": "post"})
        post_elem.find(attrs={"class": "post-title"}).decompose()
        post_elem.find(attrs={"class": "post-date"}).decompose()

        paras = post_elem.find_all(text=True)

        body = " ".join(p.strip() for p in paras).replace("  ", " ").strip()
        return (title, body)

    raise IOError("Could not read file: " + path)

The output is passed into create_posts which creates a generator of Post instances. Each contains:

  • Id - A unique identifier to let ElasticSearch keep track of this document (modified version of the post filename)
  • Url - The relative url of this post so we can create links in the search results (again uses the filename and site base directory)
  • Title - The title of the post extracted from the frontmatter of the markdown file
  • Text - The text content of the post. Note that this is still in markdown format so contains all of the associated special characters. A future extension might be to do some sort of sanitization on this text

Indexing your Posts

Once we have all of the current posts properly parsed, we’re ready to dump them into ElasticSearch so it can perform its indexing magic on them and let us search through it. In Python this is very straightforward to do using the Python ElasticSearch client library.

First we establish a connection to the ElasticSearch server you should already have running on your system. It defaults to port 9200 although you can override it if you want.

from elasticsearch import Elasticsearch

def connect_elastic(host="localhost", port=9200):
    return Elasticsearch([{'host': host, 'port': port}])

For simplicity, the library will currently blow away any blog index that may already exist on the Elastic instance and recreate a new one from scratch. You could of course figure out deltas from the version control history etc, but for a small set of data it’s much easier just to re-index everything each time:

# remove existing blog index and create a new blank one
def refresh_index(es):
    if es.indices.exists(index=index_name):
Then we just loop through each of the posts we got from the previous step and push them into the index:

for post in posts:
    doc = {
        "title": post.title,
        "url": post.url,
        "body": post.body

    es.index(index=index_name, doc_type=doc_type,, body=doc)

At this point we now have an index sitting in ElasticSearch that is ready to receive search queries from your users and turn them into a set of search results for relevant posts.

Searching for your Posts

To actually provide users with the ability to search through your index, you will need some kind of web service ready to receive Ajax calls. In my case I have a lightweight Flask server running which has an endpoint for searching. It simply passes the query string into ElasticSearch and returns the response as a JSON object. It is of course up to you how you want to do this, so I’ve just provided a generic way of querying your index:

from elasticsearch import Elasticsearch

es = Elasticsearch([{'host': 'localhost', 'port': 9200}])

user_query = "python"

query = {
    "query": {
        "multi_match": {
            "query": user_query,
            "type": "best_fields",
            "fuzziness": "AUTO",
            "tie_breaker": 0.3,
            "fields": ["title^3", "body"]
    "highlight": {
        "fields": {
            "body": {}
    "_source": ["title", "url"]

res ="blog", body=query)
print("Found %d Hits:" % res['hits']['total'])

for hit in res['hits']['hits']:

This snippet will connect to your ElasticSearch instance running under localhost and query the blog index with a search term of python. The query object is an Elastic-specific search DSL which you can read more about in their documentation. ElasticSearch is a complicated and powerful beast with a ton of options at your disposal. In this case we are doing a simple multi_match query on the title and body fields (giving more weight to the title field). We also use fuzziness to resolve any potential spelling mistakes in the user input. ElasticSearch will return us a set of hits which consist of objects containing just the title and url fields as specified in the _source field. We have no use for the others, so no point in bloating the response. One cool feature is the use of highlighting, which will add <em> tags around the matched terms in the body field within the response. This can then be used to apply styling on the client side to show which sections of text the engine has matched on.
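To give an idea of how those highlights come back, here is a sketch that pulls the highlighted fragments out of each hit. The response dict below is hand-built to mirror the shape ElasticSearch returns, not real output:

```python
def extract_highlights(res):
    """Collect highlighted body fragments for each hit, keyed by title."""
    results = {}
    for hit in res["hits"]["hits"]:
        title = hit["_source"]["title"]
        # hits with no highlightable match simply lack the key
        fragments = hit.get("highlight", {}).get("body", [])
        results[title] = fragments
    return results

# Hand-built response mimicking ElasticSearch's structure
fake_res = {
    "hits": {"hits": [{
        "_source": {"title": "Python - RESTful server with Flask",
                    "url": "/blog/flask"},
        "highlight": {"body": ["The <em>Flask</em> library for <em>Python</em>"]},
    }]}
}
print(extract_highlights(fake_res))
```

On the client side you can then drop the fragments straight into the results list and style the em tags.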

This search query seems to work well for my use cases and I’ve literally just copied the above into the corresponding Flask endpoint. On the client side, in my Jekyll search page, I’ve just used a bit of good old jQuery to perform the Ajax call and fill in a list with the search results. Keep it simple. You can find the JS I use in the search page source.

As far as automating the process goes, I have a script which rebuilds my Jekyll blog after a Git push to GitHub (via hooks). After the main site is rebuilt, I just run the Python indexing script again and everything is kept up to date. As I said before, it takes a bit of work to set things up, but once you have, it will sync itself every time you make an update.

Full source code can be found in the GitHub repository


PNG Image Optimisation

Some tools that can be used to reduce PNG file sizes whilst maintaining good quality images. All those below can be installed and used within the WSL (Windows Subsystem for Linux).

Sample Image:

A graphic with transparency is probably better suited for a PNG, but who doesn’t love a bit of tilt shift?

Original Size: 711KB

Original Image

PNG Crush (lossless)

Probably the most popular, but has a lot of options and you may need to know some compression details to get the best results out of the tool.

> sudo apt-get install pngcrush

(also works on WSL)

> pngcrush input.png output.png

> pngcrush -brute input.png output.png

The -brute option will try 148 different reduction algorithms and choose the best result.

> pngcrush -brute -reduce -rem allb input.png output.png

The -reduce option counts the number of distinct colours and reduces the pixel depth to the smallest size that can contain the palette.

Compressed size: 539KB (24% reduction)

Optipng (lossless)

Based on pngcrush, but tries to figure out the best config options for you. In this case it’s no surprise that we get the same result.

> sudo apt-get install optipng

> optipng -o7 -out outfile.png input.png

The -o7 option specifies maximum optimisation but will take the longest to process.

Compressed size: 539KB (24% reduction)

PNGQuant (lossy)

The conversion reduces file sizes significantly (often as much as 70%) and preserves full alpha transparency. It turns 24-bit RGB files into palettized 8-bit ones. You lose some color depth, but for small images it’s often imperceptible.

> sudo apt-get install pngquant

> pngquant input.png

Compressed size: 193KB (73% reduction)

Compressed Image

If you look closely you can see some minor visual differences between this and the original image. However, the file size reduction is huge and the image quality remains very good. Definitely a great tool for the vast majority of images you find on the web.
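The reduction percentages quoted above are just 1 - (compressed size / original size); a quick sanity check:

```python
def reduction(original_kb, compressed_kb):
    """Percentage saved by compression, rounded to the nearest whole percent."""
    return round((1 - compressed_kb / original_kb) * 100)

print(reduction(711, 539))  # 24 (pngcrush / optipng)
print(reduction(711, 193))  # 73 (pngquant)
```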


Firefox Quantum - It's Fast Again

Firefox has always been installed on my system and it used to be my browser of choice. For the last few years or so however, it has been lagging behind Chrome in speed and general responsiveness. I have always hated the terrible startup times of Firefox compared to the relative instantaneousness of Chrome. General browsing and usability has also been more snappy in Chrome - which for most people is the single most important factor when choosing a browser.

Firefox Quantum Beta

This story seems to have changed quite a bit with the latest pre-release of Firefox, however. Version 57, dubbed Quantum, uses a completely new CSS engine, and various components have been rewritten in Rust to make much better use of multi-core processors. Mozilla says that these improvements give Quantum a 2x speed improvement over v52, along with using up to 30% less RAM than Chrome.

This all sounds great, but does it actually make any notable difference? I have been using the beta release alongside Chrome for a couple of weeks now (both with the same extensions installed), and I must say that the performance improvement is quite significant. I generally don’t care much about RAM usage and have no problem with Chrome eating loads of it as long as it’s put to good use making things faster (if it’s there, why not use it?), so I won’t comment on that, but you can definitely notice the difference. Firefox feels a lot more snappy now and page loads are just generally much faster. Can’t really argue with that. I wouldn’t say that it feels faster than Chrome now, but it’s probably just as good, which is quite impressive. Always good to have some competition back in the marketplace. Startup times are also much better now!

Other notable differences in Firefox Quantum include the new Photon UI, which I must say I think looks pretty good. Things seem a lot simpler now and they’ve thankfully done away with the old huge hamburger menu which was terrible. Transitions seem smooth and everything is where it should be. One thing to note is that the newer version forces the use of the new web extension framework, so if/when you do update, it’s certainly possible that not all of your extensions will work. One big example is the LastPass extension which has yet to be updated. It’s still a beta though so this is still acceptable. Most of the popular extensions have already been updated to work in Quantum and hopefully more will follow after general release.

Firefox Quantum (v57) is due for release on November 14th. In the meantime, you can still try it out by installing the Beta (or Nightly releases).
