Edward Snowden’s exposure of the illegal mass surveillance of basically everybody, conducted by the NSA and GCHQ, has caused, and is still causing, international political fallout. Hijacking diplomatic flights and using anti-terror legislation to intimidate journalists aren’t doing much to help matters.

Glyn Moody suggests that, given the widespread abuse of communication technology by the security services, campaigning to get everyone online may not be such a good idea.

Here’s my response:

People shouldn’t necessarily throw away an entire technology just because a few (thousand) bad apples abuse it. What this means for us as technologists is that we need to build in safeguards (encryption, obfuscation, anonymous routing, etc.) which make such abuses impossible in the future.

This is already starting to happen (almost every other post on Hacker News these days is some new product that solves one part of the puzzle).

Everyone can do something:

Joe User can do some simple things – install the EFF’s HTTPS Everywhere plugin, and use email encryption (if we can make encryption ubiquitous, we make the PRISM/Tempora kind of abuse much, much harder).

Network admins can do things like move their DNS over to OpenNIC (a drop-in replacement domain name system run by volunteers outside of government control, often without any logging of queries) and use DNSCrypt to encrypt lookups.

Coders can look at throwing their weight behind an open source project – perhaps add encryption support to their favourite mail client (or make the UX easier), or take a look around at some of the decentralisation projects going on (the #indiewebcamp community is particularly worth a look).

Basically, we need more engagement, not less. Decisions are made by those who show up, and as Tesco put it, “Every little helps” :)

What are your thoughts?

The fallout from the Snowden affair seems to keep coming, with the shuttering of not one but two secure email services.

For those who have been living under a rock for the past month or so, Edward Snowden is the whistleblower and political dissident who leaked evidence of vast illegal US and UK internet surveillance projects, and who has, for now, been granted asylum in Russia. Given the American government’s shockingly poor record on the treatment of its political prisoners, as well as its clear desire to make an example of him, I for one am relieved Russia stepped up to its obligations under international law. Granting Mr Snowden some respite from persecution, however temporary that may be, was both legally and morally the right thing to do, even if the cognitive dissonance that I feel from the reversal of the traditional narrative is giving me a migraine.

Known in cryptanalysis circles as “the rubber hose technique”.

Lavabit, a Texas-based provider of encrypted email apparently used by Snowden, shut down to avoid becoming “complicit in crimes against the American people”. Later, Silent Circle, based in Maryland, did the same, taking the view that it was better to close down and destroy its servers than to deal with the inevitable bullying.

The message seems simple: you can’t rely on the security of services where the data is out of your control, especially if the machines or companies involved have ties to the USA. But to say you’re safe from this sort of thing because you use a non-US provider (as many seem to be saying) is frankly delusional.

For those who are looking for alternatives to giving all your data to a third party, I do suggest you check out the #indieweb community, especially if you’re a builder. #indiewebcamp-uk is happening in September in Brighton; RSVP here.

It is fast becoming a dangerous time to be a software creator, and no matter how secure your platform, you always run the risk of the rubber hose technique. As an industry, we are living in “interesting times”; it will be interesting to see where we go from here.

Update: Graham Klyne points out that Silent Circle haven’t shuttered their end-to-end encryption offerings.

Image “Security” by XKCD.

Webmention, as well as the legacy Pingback, provides a way of notifying a third-party site that you have made some reference to something on their site. So, for example, if I reference somebody else’s blog post in mine, their blog will be notified and my reference may appear as a comment on the post (unless they’re blogging on a passive-aggressive silo like Google+ or Facebook, or their blog uses Disqus, which is, in many ways I won’t go into now, the suck).
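Under the hood there isn’t much to a webmention: it’s a form-encoded POST containing the linking page (source) and the page being linked to (target), sent to whatever endpoint the receiving site advertises. Here’s a minimal sketch in PHP – all of the URLs are hypothetical, for illustration only:

    <?php
    // Send a webmention: POST source and target to the receiver's
    // advertised endpoint. The URLs below are placeholders.
    $endpoint = 'http://theirblog.example/webmention';

    $payload = http_build_query(array(
        'source' => 'http://myblog.example/my-post',       // my page, which links to theirs
        'target' => 'http://theirblog.example/their-post', // the page I referenced
    ));

    $context = stream_context_create(array('http' => array(
        'method'  => 'POST',
        'header'  => "Content-Type: application/x-www-form-urlencoded\r\n",
        'content' => $payload,
    )));

    file_get_contents($endpoint, false, $context);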

Pingbacks, and increasingly Webmentions, are supported by most major web frameworks and blogging platforms, but adding them to a home-rolled platform can be a little bit of a faff, especially if that platform is built on static HTML.

So, inspired by Pingback.me written by Aaron Parecki, I wrote a bit of middleware that can be installed on a web server with minimal fuss, and can provide pingback/webmention services to other sites.

The reason I didn’t use Pingback.me was threefold: I needed something quickly, I didn’t want to install an entire Ruby stack, and I needed the webhook functionality for a couple of projects (including some funky node.js dataset regeneration to tackle some scalability challenges, but that’s a different story).

Setup

One of the main requirements for pingback2hook was ease of setup, which I hope I’ve achieved.

  1. Start by checking out the source code from the GitHub repo here, and place it on your web server.
  2. You’ll need PHP 5.3 running with mod_rewrite, and also CouchDB (to store pings), which on Debian systems should be a fairly straightforward setup.

    apt-get install apache2 libapache2-mod-php5 couchdb; a2enmod rewrite

  3. If you’re running pingback2hook on its own subdomain, you should now be up and running, but if you’re installing to a subdirectory of your site you’ll need to modify the RewriteBase in .htaccess (see the snippet below).
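For example – assuming a hypothetical install under a pingback2hook/ subdirectory – the relevant line would become:

    # .htaccess – match this to wherever pingback2hook actually lives
    RewriteBase /pingback2hook/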

Enabling pingback and webmentions on your site

Once you’ve got the software up and running somewhere, you can then start enabling pingback support in your sites. This is a two-step process:

  1. On the pingback2hook server, define an endpoint for your site in a .ini file. This file will define the endpoint label, a secret key for communicating with the API, and zero or more webhook endpoints to ping. E.g.

    [mysite]
    secret = "flkjlskjefsliduji4es4iutsiud"

    ; Zero or more webhook endpoints
    webhooks[] = "http://updates.mysite.com"
    webhooks[] = "http://data.mysite.com"

  2. On your website, declare the necessary hooks in your metadata, either via HTTP headers:

    // Webmention
    header('Link: <http://pingback2hook.myserver.com/webmention/myendpoint/>; rel="http://webmention.org/"');

    // Pingback
    header('X-Pingback: http://pingback2hook.myserver.com/pingback/myendpoint/xmlrpc');

    And/or in the HTML <head> metadata…

    <html>
        <head>
            <link href="http://pingback2hook.myserver.com/webmention/myendpoint/" rel="http://webmention.org/" />
            <link rel="pingback" href="http://pingback2hook.myserver.com/pingback/myendpoint/xmlrpc" />
    
            ...
    
        </head>
    
        ...
    
    </html>

Once set up, you will get an entry in the database for every pingback or webmention a given permalink receives. If you’ve defined some webhooks, these will be pinged in turn with the JSON content of the ping.
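If you’re wondering what to put at the other end of a webhook, a minimal receiver is just a script that reads the JSON body of the POST. The sketch below assumes nothing about the payload’s fields (they aren’t documented in this post), so it simply validates and logs it:

    <?php
    // webhook.php – a hypothetical handler sitting at one of the
    // webhooks[] URLs defined in the .ini file above.
    $ping = json_decode(file_get_contents('php://input'), true);

    if ($ping === null) {
        header('HTTP/1.1 400 Bad Request');
        exit;
    }

    // React to the ping – regenerate a static page, say – or just log it.
    error_log('Ping received: ' . print_r($ping, true));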

Viewing your pingbacks

To view your pingbacks you can either query the CouchDB database directly (default: pingback2hook), or make use of the API.

Currently, you can query the API via its endpoint at https://pingback2hook.myserver.com/api/myendpoint/command.format, passing any required parameters in the query string.

At the time of writing only one command is supported, but I’ll add more as the need arises:

Command: latest.json or latest.jsonp
Parameters: target_url, limit (optional), offset (optional)
Details: Retrieve the latest pings or webmentions for a given permalink URL.

To make the query, you need to authenticate yourself. This is done very simply by passing the ‘secret’ for the endpoint in your request header as X-PINGBACK2HOOK-SECRET: mysecret. This is very basic security, and your secret is sent in the clear, so it goes without saying that you should ONLY make queries to the API via HTTPS!
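Putting that together, a query from PHP might look something like this – the hostname, endpoint, and secret reuse the illustrative values from earlier in this post, and the target permalink is made up too:

    <?php
    // Fetch the latest pings/webmentions for a permalink via the API.
    $url = 'https://pingback2hook.myserver.com/api/myendpoint/latest.json?'
         . http_build_query(array(
               'target_url' => 'http://mysite.com/some-post/',
               'limit'      => 10,
           ));

    // The secret travels in a request header, hence HTTPS only.
    $context = stream_context_create(array('http' => array(
        'header' => "X-PINGBACK2HOOK-SECRET: flkjlskjefsliduji4es4iutsiud\r\n",
    )));

    $pings = json_decode(file_get_contents($url, false, $context), true);
    print_r($pings);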

Have a play, and let me know what you think!

» Visit the project on Github…