
How to backup and restore SMS Messages in Android

Moving all of your apps and data over to a new device has thankfully got a lot easier these days, as everything is now stored in the cloud. But there is a glaring omission in the automated process Google provides on Android devices - restoring your SMS/MMS messages and call logs. It's not clear why this is still left out, considering pretty much every other messaging app will migrate your data and preferences seamlessly.

The good news is there is of course “an app for that”. A lot of the guides on the web direct you to paid apps, but there are numerous free alternatives on the Play Store.

SMS Backup & Restore

Probably the most popular currently on the store and still actively developed - it’s a simple app that backs up and restores your phone’s SMS and MMS messages and call logs. I just went through the process of migrating everything over to a new Android phone and this app did the job just fine.

Basically, it will back up all of your messages and call logs into two separate XML files, which can later be restored using the same app on the new device. There are a bunch of options to set up automated backups etc. if that's useful for you.

Android SMS Backup and Restore App

Within the app, just select where you want the backups to be stored - Google Drive probably being the best choice - and hit the ‘Back Up’ button. That’s it for your old device. On your new device, I found it easiest to download the two XML files onto local storage and then point the app to them within the restore tab. After a little processing, everything should look identical between the two devices. The Messages app got a little confused at first trying to process all the new threads, but if you just leave it open for a while it will eventually sort itself out. Job done with little hassle, but Google, please add this in!


Using Ktor with Jackson Serialization

Although the preferred JSON serialization library used in a lot of the Ktor examples is GSON, which makes sense due to its simplicity and ease of use, in real-world use Jackson is probably the better option. It's faster (especially when combined with the Afterburner module) and generally more flexible. Ktor comes with a built-in feature that makes using Jackson for JSON conversion very simple.

Add Jackson dependency

In your build.gradle file add a dependency to the Ktor Jackson artifact:

dependencies {
    compile "io.ktor:ktor-jackson:$ktor_version"
}

This will add the Ktor JacksonConverter class which can then be used within the standard ContentNegotiation feature.

It also includes an implicit dependency on the Jackson Kotlin Module which must be installed in order for Jackson to handle data classes (which do not have an empty default constructor as Jackson expects).
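For reference, if you ever construct an ObjectMapper by hand rather than through Ktor's jackson { } block, the Kotlin module can be registered explicitly via the registerKotlinModule extension from jackson-module-kotlin:

import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.module.kotlin.registerKotlinModule

// adds support for Kotlin data classes to a plain mapper
val mapper: ObjectMapper = ObjectMapper().registerKotlinModule()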

Install as a Converter

Then tell Ktor to use Jackson for serialization/deserialization for JSON content:

install(ContentNegotiation) {
    jackson {
        // extension method of ObjectMapper to allow config etc
        enable(SerializationFeature.INDENT_OUTPUT)
    }
}

which is the same as doing:

install(ContentNegotiation) {
    register(ContentType.Application.Json, JacksonConverter())
}

With the converter installed, any request to your Ktor server will be served with a JSON response as long as the client's Accept header allows it.
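For example, returning a Kotlin data class from a route is now enough to produce a JSON response. A minimal sketch, where the Customer class and the /customer route are purely illustrative:

import io.ktor.application.*
import io.ktor.response.*
import io.ktor.routing.*

data class Customer(val id: Int, val name: String)

fun Application.module() {
    routing {
        get("/customer") {
            // serialized to JSON by the JacksonConverter
            call.respond(Customer(1, "Jane Smith"))
        }
    }
}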

Reuse an Existing Mapper

The above configuration is quick and easy; however, ObjectMapper instances are heavy objects and their configuration is generally shared across various areas of your app. Therefore, instead of creating a new ObjectMapper within the Ktor feature itself, initialise one for your application and point Ktor to it. You can then reuse the same mapper when needed without re-initialising it every time:

object JsonMapper {
    // automatically installs the Kotlin module
    val defaultMapper: ObjectMapper = jacksonObjectMapper()

    init {
        defaultMapper.configure(SerializationFeature.INDENT_OUTPUT, true)
        defaultMapper.registerModule(JavaTimeModule())
    }
}

then use the alternate syntax to install the converter, passing in our pre-made ObjectMapper instance:

install(ContentNegotiation) {
    register(ContentType.Application.Json, JacksonConverter(defaultMapper))
}

You are then free to reuse the same JsonMapper.defaultMapper object across the rest of your app.
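For example, serializing and deserializing directly with the shared mapper - the readValue extension comes from the Jackson Kotlin module:

import com.fasterxml.jackson.module.kotlin.readValue

val json = JsonMapper.defaultMapper.writeValueAsString(mapOf("status" to "ok"))
val parsed: Map<String, String> = JsonMapper.defaultMapper.readValue(json)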


Ubuntu Server Setup Part 9 - Setup a Reverse Proxy with Nginx

In the previous part we covered how to set up Nginx as a web server to serve static content. In this part, we will configure Nginx as a reverse proxy (one of its other main use cases) so you can access other services running locally on your server without opening up a dedicated port.

What is a Reverse Proxy?

A reverse proxy can be thought of as a simple ‘passthrough’, whereby specific requests made to your web server get routed to other applications running locally, with their responses returned as though they were all handled by the one server. For example, say you wanted to give public access to a Python server you have running on port 8080. Instead of directly opening up the port and thus increasing the overall attack surface, Nginx can be configured to proxy certain requests to that server instead. This also has the advantage of easily enabling HTTPS for all services without having to configure each application separately, and you get all the other advantages of a high-performance web server, such as load balancing.

Follow the steps in the previous tutorial to set up Nginx and, optionally, enable HTTPS. The rest of this part assumes you have another server running on your machine, listening on localhost on port 8080.

Configure Nginx

Open up the main configuration file for your site:

$ sudo nano /etc/nginx/sites-available/yourdomain.com

server {
  listen 80;
  listen [::]:80;

  server_name yourdomain.com;

  location /otherapp {
      proxy_pass http://localhost:8080/;
  }
}

The proxy_pass directive is what makes this configuration a reverse proxy. It specifies that all requests matching the location block (in this case the /otherapp path) should be forwarded to port 8080 on localhost, where our other app is running.

Test the new configuration to see if there are any errors:

$ sudo nginx -t

If there are no errors present, reload the Nginx config:

$ sudo nginx -s reload

In a browser, navigate to your main public domain with /otherapp appended, e.g. http://yourdomain.com/otherapp. Because the URL matches the location block in the config above, Nginx will forward the request to our other server running on port 8080.
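You can also check from the command line (assuming curl is installed):

$ curl -i http://yourdomain.com/otherapp

The response should come from the app listening on port 8080, even though the request was made to your main domain.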

Additional Options

For basic applications, the main proxy_pass directive should work just fine. However, as you would expect, Nginx offers a number of other options to further configure the behaviour of the reverse proxy.

In the below configuration, proxy buffering is switched off - this means the response from the proxied server is passed back to the client synchronously as it is received, rather than being buffered by Nginx first, which can be useful for some real-time apps. A custom header X-Original-IP is also set on the forwarded request, containing the IP from the original request (which can then be picked up by the other server as needed).

location /otherapp {
    proxy_pass http://localhost:8080/;
    proxy_buffering off;
    proxy_set_header X-Original-IP $remote_addr;
}
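If the proxied application needs to know the original Host header or the full client IP chain (for logging or for building absolute URLs), it is common to also pass through the standard forwarded headers. A typical pattern - adjust to whatever your upstream app actually reads:

location /otherapp {
    proxy_pass http://localhost:8080/;
    # preserve the original Host header and record the client IP chain
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}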

Setting up a Python Virtual Environment

You should set up a Python virtual environment to ensure that your library dependencies are consistent and segregated from your global packages. This helps prevent potential versioning conflicts and makes it easier to package your app for use by others (or by you, on a different machine).

Previously, you had to install dedicated packages to create and manage Python virtual environments - such as pipenv or virtualenv. Python does however now come with its own solution, venv, which accomplishes much the same and can be run directly as a Python module without any installation steps.

Create a Virtual Environment

As the venv module comes preinstalled, you can create a new virtual environment by running:

python -m venv virtenv

This will create a new directory called virtenv in your current directory (you can call it whatever you want - the general naming convention is venv) which will include its own Python interpreter, pip installation and any packages you subsequently install once the environment is activated.

If you look inside the new directory, you will find it has its own site-packages structure where any new packages will be installed (under Lib\site-packages on Windows, or lib/pythonX.Y/site-packages on Linux/Mac), alongside its own Python / pip executables. The version of Python within your new virtual environment will be the same as the one you used to run the venv command above.
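Since the environment inherits whichever interpreter created it, you can target a specific version by running the venv module with that interpreter - assuming, for example, that python3.8 is available on your PATH:

python3.8 -m venv virtenv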

Activate the Environment

To ‘activate’ the virtual environment, you need to call the activate shell script which got created by the previous command. This sets a bunch of environment variables to point the python / pip commands to your newly created venv instead of the globally installed version - in effect creating a completely separate Python installation.

If on Windows - virtenv\Scripts\activate.bat

If on Linux/Mac - source virtenv/bin/activate

You should notice that your shell prompt changes to include the name of the venv at the start. If you now run the where / which commands to show the location of the executables, they should point to those within the virtenv directory.

where python ==> \virtenv\Scripts\python.exe

where pip ==> \virtenv\Scripts\pip.exe

Install Packages

Running the pip list command shows that we don’t currently have anything installed (even if you had installed something globally).

pip (19.1.1)
setuptools (28.8.0)

You can now use the pip command to install packages as you would normally, e.g.

pip install requests

If you check the list of installed packages again, you can see that requests has been added (along with its dependencies). Note that this could be a different version to the one installed in the global site-packages.

certifi (2019.3.9)
chardet (3.0.4)
idna (2.8)
pip (19.1.1)
requests (2.22.0)
setuptools (28.8.0)
urllib3 (1.25.3)

If you check the virtenv\Lib\site-packages directory, you should find that requests has been installed there.

Generate requirements.txt

You can run the pip freeze command to generate a requirements file containing all the currently installed packages - helpful if you want to recreate the exact same environment on a different machine.

pip freeze > requirements.txt

The contents of which will be something like:

certifi==2019.3.9
chardet==3.0.4
idna==2.8
requests==2.22.0
urllib3==1.25.3

Installing Packages from requirements.txt

When on a new machine with another blank virtual environment, you can use the requirements.txt file generated by pip freeze to install all the packages required for your project at once:

pip install -r requirements.txt

Pip will run through each entry in the file and install the exact version number specified. This makes it easy to create consistent virtual environments - in which you know the exact version of every package installed without the hassle of installing each one manually.

Deactivate the Environment

To deactivate the virtual environment and return all the environment variables to their previous values (pointing instead to your global Python installation), simply run:

deactivate

The virtual environment name should be removed from your shell prompt to denote that no environment is currently active.


SSH Tunneling

SSH tunneling is the ability to use ssh to create bi-directional encrypted network connections between machines, over which data (typically TCP/IP traffic) can be exchanged. This allows us to easily & securely make services available between machines with minimal effort, while at the same time leveraging ssh for user authentication (public-key) and encryption with little overhead.

Local Port Forwarding

$ ssh -nNT -L 8000:localhost:3306 user@server.com

The above command sets up an ssh tunnel between your machine and the server, and forwards all connections made to localhost:8000 (on your local machine) through to localhost:3306 (on the remote server).

Since port 3306 is the default for MySQL, you could now access a database running on the remote machine through localhost:8000 (as if it was set up and running locally). This is useful as you don't have to configure the remote server to allow extra ports through the firewall, or handle the security implications of exposing a service just to access a dev database, for example. In this case, the MySQL instance is still not visible to the outside world (just how we like it).
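For example, with the tunnel open you could point a local MySQL client at the forwarded port (dbuser here is a placeholder for your actual database user):

$ mysql -h 127.0.0.1 -P 8000 -u dbuser -p

From the server's point of view, the connection arrives on localhost:3306 as normal.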

In the above command, the -nNT options prevent a shell from being created (-n redirects stdin from /dev/null, -N skips running a remote command and -T disables pseudo-terminal allocation), so we just get the port forwarding behaviour - not strictly needed, but you probably don't want a new tty session as well.

Remote Port Forwarding

$ ssh -nNT -R 4000:localhost:3000 user@server.com

The above command sets up an ssh tunnel between your machine and the server, and forwards all connections made to localhost:4000 (on the remote server) back through the tunnel to localhost:3000 (on your local machine).

You could then access a service running locally on port 3000 through port 4000 on the remote server (again, as if it was running locally on the remote server). This is useful because it allows you to expose a locally running service to others on the internet through your server, without having to deploy or set it up on the server itself. Note: to get this working you also need to set GatewayPorts yes in the /etc/ssh/sshd_config file, as ssh doesn't allow remote hosts to bind forwarded ports by default.
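A minimal sketch of that change (the ssh daemon service name can vary by distro):

# /etc/ssh/sshd_config
GatewayPorts yes

$ sudo systemctl restart sshd

After the restart, remote forwarded ports can bind to non-loopback addresses and become reachable by other hosts.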

SOCKS Proxy

$ ssh -D 5000 -nNT user@server.com

The above command sets up a SOCKS proxy server (supporting the SOCKS4 and SOCKS5 protocols) using dynamic application-level port forwarding through the ssh tunnel. You can now configure your network proxy (within the browser or the OS) to use localhost:5000 as the SOCKS proxy, and when you browse, all the traffic is proxied through the ssh tunnel via your remote server.

  • It protects against eavesdropping (perhaps in an airport or coffee shop) since all the traffic is encrypted (even if you are accessing HTTP pages).
  • As all web traffic goes through the SOCKS proxy, you will be able to access web sites that your ISP/firewall may have blocked.
  • Potentially helps protect your privacy since the web services you access will see requests coming from the remote server and not from your local machine. This could prevent some (IP based) identity/location tracking for example.
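To quickly verify the proxy is working, you can route a request through it with curl (the --socks5-hostname flag also performs DNS resolution through the tunnel; ifconfig.me is just one example of a public IP echo service):

$ curl --socks5-hostname localhost:5000 https://ifconfig.me

The returned address should be your remote server's public IP rather than your own.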

Advanced Use Cases

The above use cases are the most common; however, the commands can be modified slightly and used in interesting ways to establish the ssh tunnel not only between your local machine and your server, but also with additional machines, either internal to your network or internal to your server's network:

$ ssh -nNT -R 0.0.0.0:4000:192.168.1.101:631 user@server.com

  • Instead of using the default bind address, we explicitly use 0.0.0.0. This means the forwarded service on port 4000 of the remote server will be accessible across all of that server's network interfaces, including bridge networks & virtual networks such as those used by container environments like Docker.
  • Instead of using localhost as the forwarding destination, we have explicitly used the 192.168.1.101 IP address, which can be the address of another machine on your internal network (other than the one you're running the command on), such as a network printer. This allows you to expose and use the internal network printer directly from the remote server, without any additional changes within your internal network, either on the router or on the printer itself.

This technique can also be used when doing local port forwarding, or when setting up the SOCKS proxy server, in a similar manner.
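For example, a local forward can be bound to all of your machine's interfaces, letting other devices on your network use the tunnel too (the addresses here are purely illustrative):

$ ssh -nNT -L 0.0.0.0:8000:10.0.0.5:80 user@server.com

Any machine on your local network can now connect to port 8000 on your machine and reach 10.0.0.5:80 as seen from the remote server.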
