Flask is a great Python microframework for setting up simple web servers. Larger sites or REST interfaces might want to tend towards Django instead, but I’ve found Flask excellent for putting together small sites or a couple of endpoints with next to no effort. The API is very Pythonic, so you can get up and running with very few lines of code. I currently use Flask for the backend API services for this site - which power the search page, contact page and automated Jekyll builds using GitHub hooks.
To install and start using Flask, just use pip:
$ pip install Flask
The most basic endpoint looks like:
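A minimal sketch of such an app - saved as hello.py to match the run commands further down:

```python
# hello.py - the smallest possible Flask application
from flask import Flask

# Pass the module name so Flask knows where to find static files and templates
app = Flask(__name__)

@app.route("/")
def hello_world():
    # The return value becomes the HTTP response body
    return "Hello, World!"
```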
We imported the main Flask class and created a new instance, passing in the name of the current module as an identifier (so Flask knows where to look for static files and templates). A simple function, which in this case just returns a string, is decorated with @app.route to define the URL that will trigger it.
Running on a development server
There are a couple ways to run the above example. The first is the way recommended by the Flask team:
$ export FLASK_APP=hello.py
$ flask run
* Running on http://127.0.0.1:5000/
This is fine on Linux boxes (you can use set instead of export on Windows), but setting an environment variable on Windows is a bit of a pain, so you can instead start the server from code. Apparently this can cause issues with live reload, but Flask starts up so quickly that it’s not too much of an issue:
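A sketch of starting the development server from code - just add a standard main guard that calls app.run():

```python
# hello.py - start the development server directly, no FLASK_APP needed
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello_world():
    return "Hello, World!"

if __name__ == "__main__":
    # debug=True enables the interactive debugger and (usually) live reload
    app.run(host="127.0.0.1", port=5000, debug=True)
```

Running `python hello.py` now serves the app on port 5000 with no environment variables required.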
You can then navigate to http://localhost:5000 and you will see the return value of the hello_world function. You can easily return HTML or JSON objects as needed, depending on what services you wish to build.
Handling GET Requests
I have focused mainly on using Flask to create basic RESTful web endpoints rather than serving HTML - which Flask can also do very well using the Jinja2 templating engine. The below snippet shows how to create a simple endpoint that handles GET requests to retrieve a user by their unique id. The object returned from our dummy service is converted into a JSON response via the built-in jsonify function:
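A sketch of what such an endpoint could look like - the route, the user data and the in-memory "service" are illustrative stand-ins:

```python
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Hypothetical dummy service: a dict standing in for a real data store
USERS = {
    1: {"id": 1, "name": "Alice"},
    2: {"id": 2, "name": "Bob"},
}

@app.route("/api/users/<int:user_id>", methods=["GET"])
def get_user(user_id):
    # <int:user_id> converts the URL segment to an int for us
    user = USERS.get(user_id)
    if user is None:
        abort(404)  # unknown id -> 404 Not Found
    return jsonify(user)  # dict -> JSON response with the right Content-Type
```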
Handling POST Requests
The below snippet shows how we can handle a POST request, taking in a JSON object and returning a response from our service:
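A sketch along those lines - the route and field names are assumptions for illustration:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical dummy service backed by a dict
USERS = {}

@app.route("/api/users", methods=["POST"])
def create_user():
    data = request.get_json()          # parse the incoming JSON body
    user_id = len(USERS) + 1
    user = {"id": user_id, "name": data["name"]}
    USERS[user_id] = user
    return jsonify(user), 201          # 201 Created plus the new resource
```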
As you can see, setting up simple endpoints is very quick and easy using Flask. The framework also offers a ton of other useful features, including Jinja2 templating, blueprints for structuring larger applications, cookie and session handling, and custom error pages.
Search functionality is very helpful to have in pretty much any website, but it’s not particularly easy to do in a static Jekyll site. Fully fledged blog solutions such as WordPress give you a partial solution (no full text search) for free, however you also have to deal with all the associated bloat and the need for a database running in the background. On statically generated sites, you have to roll your own. Most of the solutions on the internet seem to lean towards doing full text search completely on the client side using a library such as Lunr.js. This will work well, but you end up having to ship your whole site to the client as a JSON blob before you perform the search. For smaller sites this might be fine, but otherwise that file can get quite large when you have to include all content across your entire site - no thanks.
My, perhaps heavy handed, solution (which won’t work for GitHub Pages) is to use a small ElasticSearch instance on the server side to provide great full text search across your site. It takes a little more work to set up, but once you have it all automated you can just leave it and still take advantage of all the capabilities of ElasticSearch.
I put together elastic-jekyll which is a small Python library that you can use to automatically index and search across your entire Jekyll blog. I’ll cover below how it all fits together and how to use it.
Parsing your Posts
The first step in the process is to find all of your posts within your site and create an in-memory representation of them with all the attributes we require. In this case the library will try to go through ~/blog/_posts unless you pass in another path to main.py. Once all of the markdown files are found, each one is parsed using BeautifulSoup to extract the title and text content (find_posts.py):
The output is passed into create_posts which creates a generator of Post instances. Each contains:
Id - A unique identifier to let ElasticSearch keep track of this document (modified version of the post filename)
Url - The relative url of this post so we can create links in the search results (again uses the filename and site base directory)
Title - The title of the post extracted from the frontmatter of the markdown file
Text - The text content of the post. Note that this is still in markdown format so contains all of the associated special characters. A future extension might be to do some sort of sanitization on this text
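Put together, the representation could be sketched like this (the exact names and url scheme are assumptions):

```python
# Sketch of the Post representation - field names follow the list above
from collections import namedtuple

Post = namedtuple("Post", ["id", "url", "title", "text"])

def create_posts(parsed, base_url="/blog"):
    """Turn (filename, title, text) tuples into a generator of Posts."""
    for filename, title, text in parsed:
        slug = filename.rsplit(".", 1)[0]         # drop the file extension
        yield Post(
            id=slug.lower(),                      # modified filename as unique id
            url="{}/{}/".format(base_url, slug),  # relative link for results
            title=title,
            text=text,                            # still raw markdown here
        )
```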
Indexing your Posts
Once we have all of the current posts properly parsed, we’re ready to dump them into ElasticSearch so it can perform its indexing magic on them and let us search through it. In Python this is very straightforward to do using the Python ElasticSearch client library.
First we establish a connection to the ElasticSearch server you should already have running on your system. It defaults to port 9200 although you can override it if you want.
For simplicity, the library will currently blow away any existing blog index on the Elastic instance and recreate a new one from scratch. You could of course figure out deltas from the version control history etc, but for a small set of data it’s way easier just to re-index everything each time:
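Sketched out, assuming the elasticsearch client package and an index simply named blog:

```python
INDEX = "blog"

def connect(host="localhost", port=9200):
    # Deferred import so the rest of the module works without the package
    from elasticsearch import Elasticsearch
    return Elasticsearch([{"host": host, "port": port}])

def rebuild_index(es, index=INDEX):
    """Delete any existing index and create a fresh, empty one."""
    if es.indices.exists(index=index):
        es.indices.delete(index=index)
    es.indices.create(index=index)
```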
Then we just loop through each of the posts we got from the previous step and push them into the index:
At this point we now have an index sitting in ElasticSearch that is ready to receive search queries from your users and turn them into a set of search results for relevant posts.
Searching for your Posts
To actually provide users the ability to search through your index you will need some kind of web service ready to receive the Ajax calls. In my case I have a lightweight Flask server running which has an endpoint for searching. It simply passes the query string into ElasticSearch and returns the response as a JSON object. It is of course up to you how you want to do this, so I’ve just provided a generic way of querying your index within searcher.py:
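A generic sketch of that query - the title weighting and the <i> highlight tags are choices here rather than ElasticSearch defaults:

```python
# searcher.py (sketch) - query the blog index for a search term
def build_query(term):
    """Build the ElasticSearch query body for a user's search term."""
    return {
        "_source": ["title", "url"],            # only return what the UI needs
        "query": {
            "multi_match": {
                "query": term,
                "fields": ["title^3", "body"],  # weight title matches higher
                "fuzziness": "AUTO",            # tolerate small typos
            }
        },
        "highlight": {
            "pre_tags": ["<i>"],                # wrap matched text in <i> tags
            "post_tags": ["</i>"],
            "fields": {"body": {}},
        },
    }

def search(es, term, index="blog"):
    response = es.search(index=index, body=build_query(term))
    return response["hits"]["hits"]
```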
This snippet will connect to your ElasticSearch instance running under localhost and query the blog index with a search term of python. The query object uses an Elastic-specific search DSL which you can read more about in their documentation. ElasticSearch is a complicated and powerful beast with a ton of options at your disposal. In this case we are doing a simple multi_match query on the title and body fields (giving more weight to the title field). We also use fuzziness to resolve any potential spelling mistakes in the user input. ElasticSearch will return a set of hits consisting of objects containing just the title and url fields, as specified in the _source field. We have no use for the others, so there’s no point in bloating the response. One cool feature is the use of highlighting, which will add <i> tags into the body field within the response. This can then be used to apply styling on the client side to show which sections of text the engine has matched on.
This search query seems to work well for my use cases and I’ve literally just copied the above into the corresponding Flask endpoint. On the client side in your Jekyll search page, I’ve just used a bit of good old jQuery to perform the Ajax call and fill in a list with the search results. Keep it simple. You can find the JS I use in the search page source.
As far as automating the process goes, I have a script which rebuilds my Jekyll blog after a Git push into GitHub (via hooks). After the main site is rebuilt I just call python main.py and everything is kept up to date. As I said before, it takes a bit of work to set things up, but once you have, it will sync itself every time you make an update.
The -o7 option specifies maximum optimisation but will take the longest to process.
Compressed size: 539KB (24% reduction)
The conversion reduces file sizes significantly (often as much as 70%) and preserves full alpha transparency. It turns 24-bit RGB files into palettized 8-bit ones. You lose some color depth, but for small images it’s often imperceptible.
If you look closely you can see some minor visual differences between this and the original image. However, the file size reduction is huge and the image quality remains very good. Definitely a great tool for the vast majority of images you find on the web.
Firefox has always been installed on my system and it used to be my browser of choice. For the last few years or so however, it has been lagging behind Chrome in speed and general responsiveness. I have always hated the terrible startup times of Firefox compared to the relative instantaneousness of Chrome. General browsing and usability has also been more snappy in Chrome - which for most people is the single most important factor when choosing a browser.
This story seems to have changed quite a bit in the latest pre-release of Firefox however. Version 57, which is dubbed Quantum, uses a completely new CSS engine, and various components have been recreated in Rust to make much better use of multi-core processors. Mozilla says that these improvements give Quantum a 2x speed improvement over v52, along with using up to 30% less RAM than Chrome.
This all sounds great, but does it make any notable difference? I have been using the beta release alongside Chrome for a couple of weeks now (both with the same extensions installed), and I must say that the performance improvement is quite significant. I generally don’t care much about RAM usage and have no problem with Chrome eating loads of it as long as it’s well used to make things faster (if it’s there, why not use it?), so I won’t comment on that, but you can definitely notice the difference. Firefox feels a lot more snappy now and page loads are generally much faster. Can’t really argue with that. I wouldn’t say that it feels faster than Chrome, but it’s probably just as good, which is quite impressive. Always good to have some competition back in the marketplace. Startup times are also much better now!
Other notable differences in Firefox Quantum include the new Photon UI, which I must say looks pretty good. Things seem a lot simpler now and they’ve thankfully done away with the old huge hamburger menu, which was terrible. Transitions seem smooth and everything is where it should be. One thing to note is that the newer version forces the use of the new web extension framework, so if/when you do update, it’s possible that not all of your extensions will work. One big example is the LastPass extension, which has yet to be updated. It’s still a beta though, so this is acceptable. Most of the popular extensions have already been updated to work in Quantum and hopefully more will follow after general release.
Firefox Quantum (v57) is due for release on November 14th. In the meantime, you can still try it out by installing the Beta (or Nightly releases).
Microsoft Rewards has been around for ages now in the USA, but it’s now made its way over to the UK. The general idea is that you get awarded points by using Microsoft services (predominantly Edge and Bing) which you can then redeem for a range of rewards. You can also get points by purchasing products such as Xbox Live etc.
Currently, I fire up Edge every so often and look at the front page news on Bing to rack up points for the day. You can get a maximum of 90 points per day for using Bing to search (although there have been offers to get more if you also use Edge to perform your searches). Unfortunately, you can’t get points by visiting the same page over and over again, but it still doesn’t take too long to fill the daily quota.
There are also daily challenges and quizzes on the main Microsoft Rewards portal which give you one-off boosts to your points. Again, they don’t take too long and you can pretty much get through them by button mashing.
Above shows the main rewards portal page where you can see your total and redeem your points for prizes. I’ve managed to accumulate over 15,000 points in a few months with fairly little effort. In some ways it’s similar to Google Opinion Rewards - but instead of giving Google all your personal information you just have to use Bing for a bit.
Some of the prizes you can get include:
Skype credit (£2 = 900 points)
Skype Unlimited (3 months for 8000 points)
Xbox Live Gold (3 months for 15000 points or 12 months for 29000 points)
Xbox gift card (£10 for 12000 points)
As you can see, my 15,000 points already equate to around £13, which is pretty good. Annoyingly the UK version doesn’t include Amazon gift cards like the US one seems to - which is frustrating as that is pretty much the only thing I would redeem for. They have also removed the Groove music passes, which seems strange to me. Maybe they will bring that back at some point. Edit - Microsoft is now apparently killing off Groove Music, which explains why it suddenly disappeared from the rewards.