Ryan Harrison - My blog, portfolio and technology-related ramblings

How to capture full page screenshots in Chrome

Capturing full page screenshots of a webpage within Chrome can be useful, but most solutions involve installing obnoxious extensions. It turns out, however, that this can easily be done within the base Chrome install itself - no extensions or extra programs needed. There are two methods, depending on whether you want a screenshot capturing exactly what you see on screen, or want to emulate the view from a different device/screen resolution.

1. Command Menu - Capture full screenshot

The first and easiest method will capture a PNG screenshot of the full page as you see it within your browser.

  • Open up the Chrome Devtools by pressing F12, CTRL + SHIFT + I or Right-Click anywhere -> Inspect
  • Open up the devtools command menu panel by pressing CTRL + SHIFT + P (this is a commonly missed feature, similar to the VS Code Command Palette, that gives you quick access to pretty much all devtools features)
  • Start typing capture in the menu - you will see options to capture a full size screenshot or even just a defined area of the page if needed.

Chrome capture full size screenshot

2. Device Mode

The second, slightly more involved, option lets you capture screenshots through the built-in Chrome device mode, which allows you to view webpages as though you were using other devices such as phones or tablets.

  • Open up the Chrome Devtools by pressing F12, CTRL + SHIFT + I or Right-Click anywhere -> Inspect
  • Enable the Device Mode by pressing the button directly to the left of the Elements tab or keyboard shortcut CTRL + SHIFT + M
  • After selecting your preferred device options, resolution etc, press the hamburger menu on the top right of the page and select Capture full size screenshot.

Chrome capture full size screenshot


How to backup and restore SMS Messages in Android

Moving all of your apps and data over to a new device has thankfully become a lot easier these days, as everything is now stored in the cloud. But there is a glaring omission in the automated process Google provides on Android devices - restoring your SMS/MMS messages and call logs. It's not clear why this is still left out, considering pretty much all other messaging apps will automatically migrate your data and preferences seamlessly.

The good news is there is of course “an app for that”. A lot of the guides on the web direct you to paid apps, but there are numerous free alternatives on the Play Store.

SMS Backup & Restore

Probably the most popular currently on the store and still actively developed - it’s a simple app that backs up and restores your phone’s SMS and MMS messages and call logs. I just went through the process of migrating everything over to a new Android phone and this app did the job just fine.

Basically, it will back up all of your messages and call logs into two separate XML files which can later be restored by using the same app on the new device. There are a bunch of options to set up automated backups etc. if that’s useful for you.

Android SMS Backup and Restore App

Within the app, just select where you want the backups to be stored - Google Drive probably being the best choice - and hit the ‘Back Up’ button - that’s it for your old device. On your new device, I found it easiest to download the two XML files onto local storage and then point the app to them within the restore tab. After a little processing, everything should look identical between the two devices. The Messages app got a little confused at first trying to process all the new threads, but if you just leave it open for a while it will eventually sort itself out. Job done with little hassle, but Google, please add this in!


Using Ktor with Jackson Serialization

Although the preferred JSON serialization library used in a lot of the Ktor examples is GSON, which makes sense due to its simplicity and ease of use, in real-world use Jackson is probably the preferred option. It’s faster (especially when combined with the Afterburner module) and generally more flexible. Ktor comes with a built-in feature that makes using Jackson for JSON conversion very simple.

Add Jackson dependency

In your build.gradle file add a dependency to the Ktor Jackson artifact:

dependencies {
    compile "io.ktor:ktor-jackson:$ktor_version"
}

This will add the Ktor JacksonConverter class which can then be used within the standard ContentNegotiation feature.

It also includes an implicit dependency on the Jackson Kotlin Module which must be installed in order for Jackson to handle data classes (which do not have an empty default constructor as Jackson expects).

Install as a Converter

Then tell Ktor to use Jackson for serialization/deserialization for JSON content:

install(ContentNegotiation) {
    jackson {
        // extension method of ObjectMapper to allow config etc.
    }
}

which is the same as doing:

install(ContentNegotiation) {
    register(ContentType.Application.Json, JacksonConverter())
}

With the converter installed, any request to your Ktor server will be served with a JSON response as long as the client’s Accept header permits it.

Reuse an Existing Mapper

The above configuration is quick and easy, however ObjectMapper instances are heavy objects and their configuration is generally shared across various areas of your app. Therefore, instead of creating a new ObjectMapper within the Ktor feature itself, initialise one for your application and point Ktor to it. You can then reuse the same mapper when needed without re-initialising it every time:

object JsonMapper {
    // jacksonObjectMapper() automatically installs the Kotlin module
    val defaultMapper: ObjectMapper = jacksonObjectMapper()

    init {
        defaultMapper.configure(SerializationFeature.INDENT_OUTPUT, true)
    }
}

then use the alternate syntax to install the converter, passing in our pre-made ObjectMapper instance:

install(ContentNegotiation) {
    register(ContentType.Application.Json, JacksonConverter(defaultMapper))
}

You are then free to reuse the same JsonMapper.defaultMapper object across the rest of your app.


Ubuntu Server Setup Part 9 - Setup a Reverse Proxy with Nginx

In the previous part we covered how to set-up Nginx as a web server to serve static content. In this part, we will configure Nginx as a reverse proxy (one of the main other use cases) to be able to access other services running locally on your server without opening up a dedicated port.

What is a Reverse Proxy?

A reverse proxy can be thought of as a simple ‘passthrough’, whereby specific requests made to your web server get routed to other applications running locally, and their responses returned as though they were all handled by the one server. For example, suppose you wanted to give public access to a Python server you have running on port 8080. Instead of directly opening up the port, and thus increasing the overall attack surface, Nginx can be configured to proxy certain requests to that server instead. This also has the advantage of easily enabling HTTPS for all services without having to configure each application separately, and you get all the other advantages of a high-performance web server, such as load balancing.

Follow the steps in the previous tutorial to setup Nginx and also optionally enable HTTPS. The rest of this part assumes you have another server running on your machine listening on localhost under port 8080.

Configure NGINX

Open up the main configuration file for your site:

$ sudo nano /etc/nginx/sites-available/yourdomain.com
server {
  listen 80;
  listen [::]:80;

  server_name yourdomain.com;

  location /otherapp {
      proxy_pass http://localhost:8080/;
  }
}
The proxy_pass directive is what makes this configuration a reverse proxy. It specifies that all requests which match the location block (in this case the /otherapp path) should be forwarded to port 8080 on localhost, where our other app is running.

Test the new configuration to see if there are any errors

$ sudo nginx -t

If there are no errors present, reload the Nginx config

$ sudo nginx -s reload

In a browser, navigate to your main public domain and append /otherapp to the end e.g. http://yourdomain.com/otherapp. Because the URL matches the location element in the config above, Nginx will forward the request to our other server running on port 8080.

Additional Options

For basic applications, the main proxy_pass directive should work just fine. However, as you would expect, Nginx offers a number of other options to further configure the behaviour of the reverse proxy.

In the below configuration, proxy buffering is switched off - this means that the request body will be forwarded to the proxied server immediately as it is received, which can be useful for some real-time apps. A custom header X-Original-IP is also set on the forwarded request, containing the IP from the original request (which can then be picked up by the other server as needed).

location /otherapp {
    proxy_pass http://localhost:8080/;
    proxy_buffering off;
    proxy_set_header X-Original-IP $remote_addr;
}
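Beyond a custom header like X-Original-IP, a conventional set of forwarding headers is often added so the backend can see the original host, the full client IP chain and the scheme of the request. An illustrative example (the location path and backend port are the same assumptions as above):

```nginx
location /otherapp {
    proxy_pass http://localhost:8080/;
    # Standard headers so the proxied app sees the original request details
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```

Many frameworks look for the X-Forwarded-* headers by convention when reconstructing the original request URL behind a proxy.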

Setting up a Python Virtual Environment

You should set up a Python virtual environment to ensure that your library dependencies are consistent and segregated from your global packages. This can help to prevent potential versioning conflicts and makes it easier to package your app for use by others (or you, but on a different machine).

Previously, you had to install dedicated packages to create and manage Python virtual environments - such as pipenv or virtualenv. Python does however now come with its own solution, venv, which accomplishes much the same and can be run directly as a Python module without any installation steps.

Create a Virtual Environment

As the venv module comes preinstalled, you can create a new virtual environment by running:

python -m venv virtenv

This will create a new directory called virtenv in your current directory (you can call it whatever you want - the general naming scheme is venv), which will include its own Python interpreter, pip installation and any packages you subsequently install once the environment is activated.

If you look inside the new directory, you will find it has its own Lib/site-packages structure (where any new packages will be installed), alongside its own Python / pip executables. The version of Python within your new virtual environment will be the same as the one you used to run the venv command above.

Activate the Environment

To ‘activate’ the virtual environment, you need to call the activate shell script which got created by the previous command. This sets a bunch of environment variables to point the python / pip commands to your newly created venv instead of the globally installed version - in effect creating a completely separate Python installation.

If on Windows - virtenv\Scripts\activate.bat

If on Linux/Mac - source virtenv/bin/activate

You should notice that your shell prompt changes to include the name of the venv at the start. If you now run the where / which commands to show the location of the executables, they should point to those located in the virtenv directory.

where python ==> \virtenv\Scripts\python.exe

where pip ==> \virtenv\Scripts\pip.exe
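You can also verify the active interpreter programmatically. A quick sketch (works on Python 3.3+): inside a venv, sys.prefix points at the environment directory while sys.base_prefix still points at the base installation, so comparing the two tells you whether a virtual environment is active:

```python
import sys

def in_virtualenv() -> bool:
    # Inside a venv, sys.prefix points at the environment directory,
    # while sys.base_prefix still points at the base Python installation.
    return sys.prefix != sys.base_prefix

print("active venv" if in_virtualenv() else "global interpreter")
```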

Install Packages

Running the pip list command shows that we don’t currently have anything installed (even if you had installed something globally).

pip (19.1.1)
setuptools (28.8.0)

You can use the pip command now to install packages as you would normally e.g.

pip install requests

If we check the list of installed packages again, you can see that requests has been added. Note that this could be a different version to the one installed in the global site-packages.

certifi (2019.3.9)
chardet (3.0.4)
idna (2.8)
pip (19.1.1)
requests (2.22.0)
setuptools (28.8.0)
urllib3 (1.25.3)

If you check the virtenv\Lib\site-packages directory, you should find that requests has been installed there.

Generate requirements.txt

You can run the pip freeze command to generate a requirements file containing all the currently installed packages - helpful if you want to recreate the exact same environment on a different machine.

pip freeze > requirements.txt

The contents of which will be something like:

certifi==2019.3.9
chardet==3.0.4
idna==2.8
requests==2.22.0
urllib3==1.25.3

Installing Packages from requirements.txt

When on a new machine with another blank virtual environment, you can use the requirements.txt file generated by pip freeze to install all the packages required for your project at once:

pip install -r requirements.txt

Pip will run through each entry in the file and install the exact version number specified. This makes it easy to create consistent virtual environments - in which you know the exact version of every package installed without the hassle of installing each one manually.
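The format is simple enough to inspect programmatically if you ever need to. A minimal sketch (the helper name and sample pins are illustrative) of parsing `name==version` entries, skipping blanks and comments:

```python
def parse_requirements(text: str) -> dict:
    """Parse simple 'name==version' pins, skipping blank lines and comments."""
    pins = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, version = line.partition("==")
        pins[name] = version
    return pins

sample = """\
requests==2.22.0
urllib3==1.25.3
"""
print(parse_requirements(sample))
# {'requests': '2.22.0', 'urllib3': '1.25.3'}
```

Note this only handles exact `==` pins - real requirements files can also contain ranges, extras and editable installs, which pip itself understands.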

Deactivate the Environment

To deactivate the virtual environment and return all the environment variables to their previous values (pointing instead to your global Python installation), simply run:

deactivate

The virtual environment name should be removed from your shell prompt to denote that no environment is currently active.
