I manage a number of devices and servers, which are monitored by various utilities, including Nagios. I also have clients who do the same, as well as using other tools that produce notifications – build systems and the like.

Nagios is the thing that tells me when my web server is unavailable, or the database has fallen over, or, more often, when my internet connection dies. I have similar setups in various client networks that I maintain.

It logs to the system log, sends me emails, and in some urgent cases, pings my phone. All very handy for me, but not so handy for casual users who may just want to see whether things are running properly. For those users, who are somewhat non-technical, it’s a bit much to ask them to read logs, and emails often get lost.

For one of my clients we needed to collect these status updates from different sources, make them more persistent, and make them visible in a much more accessible way than log messages (which have a very poor signal-to-noise ratio) or email alerts (which only go to specific people).

“Known” issues

The solution I came up with was to create a Known site for the network, which can be used to log these notifications in a user-friendly, chronological and searchable form.

I created an account for my Nagios process, and then, using my Known command line tools, I extended the Nagios configuration to use my Known site as a notification mechanism.

In commands.cfg:

define command {
        command_name host-notify-by-known
        command_line echo "$HOSTNAME$: $HOSTSTATE$" | /etc/nagios/known_nagios_notify.sh
}
define command {
        command_name service-notify-by-known
        command_line echo "$HOSTNAME$ – $SERVICEDESC$ : $SERVICESTATE$. Additional info: '$SERVICEOUTPUT$'" | /etc/nagios/known_nagios_notify.sh
}
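
To sanity-check the message format before wiring it into Nagios, you can substitute sample values for the macros by hand – the host and service names here are made up, not real hosts:

```shell
# Simulate the line Nagios builds from its macros for a service alert.
# "web01", "HTTP" etc. are sample values standing in for $HOSTNAME$,
# $SERVICEDESC$, $SERVICESTATE$ and $SERVICEOUTPUT$.
HOSTNAME="web01"
SERVICEDESC="HTTP"
SERVICESTATE="CRITICAL"
SERVICEOUTPUT="Connection refused"

MSG="${HOSTNAME} – ${SERVICEDESC} : ${SERVICESTATE}. Additional info: '${SERVICEOUTPUT}'"
echo "${MSG}"
# This line is exactly what gets piped into known_nagios_notify.sh
```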

Then in conf.d/contacts.cfg I extended my “Root” contact:

define contact{
        contact_name                    root
        alias                           Root
        service_notification_period     24x7
        host_notification_period        24x7
        service_notification_options    w,u,c,r
        host_notification_options       d,r
        service_notification_commands   notify-service-by-email, service-notify-by-known
        host_notification_commands      notify-host-by-email, host-notify-by-known
        email                           root@localhost
        }

Finally, the script itself, which serves as a wrapper around the api tools and sets the appropriate path etc:

#!/bin/bash

# Wrapper around the BashKnown API tools: make sure status.sh is on the PATH
PATH=/path/to/BashKnown:"${PATH}"

# Forward the message Nagios pipes in on stdin to the Known site's nagios account
status.sh https://my.status.server nagios *YOURAPICODE* >/dev/null

exit 0

Consolidating rich logs

Of course, this is only just the beginning of what’s possible.

For my client, I’ve already modified their build system to post on successful builds, or build errors, with a link to the appropriate logs. This particular client was already using Known for internal communication, so this improvement was a logical next step.
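
A post-build hook along these lines is simple to sketch. This is only an illustration: the build command, log URL, and the status.sh invocation (borrowed from my Nagios wrapper above) are placeholders, not their actual setup:

```shell
# Run a build step and produce a one-line status message with a link to the logs.
# The log URL is a made-up internal address for illustration.
build_status() {
    if "$@" > build.log 2>&1; then
        echo "Build OK – logs: http://builds.internal/latest"
    else
        echo "Build FAILED – logs: http://builds.internal/latest"
    fi
}

# Real usage would pipe the message to the Known site, e.g.:
#   build_status make all | status.sh https://my.status.server builds *APICODE*
MSG=$(build_status true)   # "true" stands in for the real build command
echo "$MSG"
```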

The rich content types that Known supports also raise the possibility of richer logging from a number of devices. Here are a few things on my list to play with:

  • Post an image to the channel when motion is detected by a webcam pointed at the bird feeders (again, trivial to hook up – the software triggers a script when motion is detected, and all I have to do is take the resultant image and curl it to the API)
  • Post an audio message when a voicemail is left (although that’d require me to actually set up Asterisk, which has been on my list for a while now)
  • Attach debugging info & a core dump to automated test results
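
The webcam idea, for instance, would only need a couple of lines in the motion-detection hook. This is a sketch only – the endpoint path and form field names are guesses for illustration, not the documented Known API, so check the API tools before using anything like it:

```shell
# Build the curl command the motion hook would run. Echoing it first lets you
# inspect the request before firing it at the real server.
IMAGE="/tmp/motion-capture.jpg"
CMD=(curl --fail \
     -F "body=Motion detected at the bird feeder" \
     -F "photo=@${IMAGE}" \
     "https://my.status.server/api/photo/new")

echo "${CMD[@]}"    # dry run: print the command instead of executing it
# "${CMD[@]}"       # uncomment to actually post the image
```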

I might get to those at some point, but I guess my point is that APIs are cool.

So, let us talk plainly. You absolutely, definitely, positively should be using TLS / HTTPS encryption on the sites that you run.

HTTPS provides encryption, meaning that anyone watching the connection (and yes, people do care, and are absolutely watching) will have a harder time extracting information about the content of the connection. This is important, because it stops your credit card being read as it makes its way to Amazon’s servers, or your password being read when you log into your email.

When I advise my clients on infrastructure these days, I recommend that all pages on a website, regardless of each page’s contents, be served over HTTPS. The primary reason is a feature of an encrypted connection which I don’t think gets underlined enough.

Tamper resistant web

When you serve content over HTTPS, it is significantly harder to modify. Or, to put it another way, when you serve pages unencrypted, you have absolutely no guarantee that the page your server sends is going to be the page that your visitor receives!

If an attacker controls a link in the chain of computers between a visitor and the site they’re visiting, it is trivial to modify requests and responses passing between them. A slightly more sophisticated attacker can perform these attacks without controlling a link in the chain – a so-called “man on the side” attack – more technically complex, but still relatively trivial with a sufficient budget, and widely automated by state actors and criminal gangs.

The capabilities these sorts of attacks give an attacker are limited only by budget and imagination. On a sliding scale of evil, the least evil use we’ve seen in the wild is probably the advertising-injection attack used by certain ISPs and airplane/hotel wifi providers, but they could easily extend to attacks designed to actively compromise your security.

Consider this example of an attack exploiting a common configuration:

  • A web application is installed on a server, and the site is available by visiting both HTTP and HTTPS endpoints. That is to say, if you visited both http://foo.com and https://foo.com, you’d get the same page being served.
  • Login details are sent using a POST form, but because the developers aren’t complete idiots they send these over HTTPS.

Seems reasonable, and I used to do this myself without seeing anything wrong with it.

However, consider what an attacker could do in this situation if the page serving the form is unencrypted. It would, for example, be relatively trivial, once the infrastructure is in place, to simply rewrite “https://” to “http://” in the page, meaning your login details would be sent unencrypted. Even if the server were configured to only accept login details over a secure connection (another fairly common practice), this attack would still work, since the POST would still go ahead. A really sophisticated attacker could intercept the resulting error page and replace it with something innocuous, leaving your visitor none the wiser.
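
The rewrite itself requires nothing clever. As a toy illustration (the form markup here is invented), this is all the man in the middle has to do to the page in transit:

```shell
# What the browser should have received from the server:
PAGE='<form action="https://foo.com/login" method="post">'

# What the attacker forwards instead – a single substitution downgrades the
# form action, so the credentials go out over plain HTTP:
TAMPERED=$(printf '%s' "$PAGE" | sed 's|https://|http://|')
echo "$TAMPERED"
```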

It gets worse of course, since as we have learnt from the Snowden disclosures, security agencies around the world will often illegally conscript unencrypted web pages to perform automated attacks on anyone they view as a target (which, as we’ve also learnt from the Snowden disclosures, includes just about everybody, including system administrators, software developers and even people who have visited CNN.com).

Let’s Encrypt!

Encrypting your website is fairly straightforward, certainly when compared to the amount of work it took to deploy your web app in the first place. Plus, with the new Let’s Encrypt project launching soon, it’s about to get a whole lot easier.

You’ll need to test your configuration regularly, since configuration recommendations change from time to time, and most shared hosts and default server configurations leave a lot to be desired.

However, I assert that it is worth the extra effort.

By enabling HTTPS on the entire site, you’ll make it much, much harder for an attacker to compromise your visitors’ privacy and security (harder, not impossible – there are still attacks that can be performed, especially if the attacker controls a root certificate trusted by the computer you’re using… so be careful doing internet banking on a corporate network, or in China).

You also add to the herd immunity of your fellow internet users, normalising encrypted traffic and shrinking the attack surface for mass surveillance and mass packet injection.

Finally, you’ll get an SEO advantage, since Google is now giving a ranking boost to secure pages.

So, why wait?

So yesterday, we were greeted with another bombshell from the Snowden archives.

After finding out the day before that GCHQ had spied on lawyers, we now find out that GCHQ and the NSA conspired to steal the encryption keys to pretty much every SIM card in the world, meaning that they can easily break the (admittedly weak) encryption used to protect your phone calls and text messages.

Personally, I’m not terribly concerned about this, because the idea that your mobile phone is insecure is hardly news. What is of concern to me is how they went about getting those keys.

It seems that in order to get these keys, the intelligence agencies hunted down and placed under invasive surveillance ordinary innocent people, who just happened to be employed by or have dealings with the companies they were interested in.

The full capabilities of the global surveillance architecture they command were deployed against entirely unremarkable and innocent individuals. People like you and me, whose entire private lives were sifted through, just in case they exposed some information that could be used against the companies for which they worked.

Nothing to hide, nothing to fear

If there is a silver lining in all this, with any luck it will go some way towards shattering the idea that because you have nothing to hide, you have nothing to fear.

This is, primarily, a coping strategy. It’s a lie people tell themselves so they can avoid confronting an awkward or terrifying fact, a bit like saying climate change isn’t real, or that smoking won’t kill me.

Generally, it is taken to mean that you’ve done nothing wrong – i.e. nothing illegal (and of course, that’s not what privacy is about, and what you consider “wrong” is typically not the same as what those in power consider “wrong”).

Fundamentally, it misses the point that you don’t get to decide what others are going to find interesting, or suspect you of knowing. In this instance, innocent people had their privacy invaded purely because they had suspected access to information that the intelligence agencies found interesting. This is something that, were I to do something similar, I’d go to jail for a very long time.

Now consider that one of the NSA’s core missions is to advance US economic interests, spying on Brazilian oil companies and European trade negotiations, etc. If I worked at a competitor of a US company, I’d be very careful what I said in any insecure form of communication.

You do have something to hide.