Yesterday, I wrote a post outlining a draft specification for a possible way to handle login on a distributed social network, together with a reference implementation for Known.

I got some really positive feedback, including someone pointing out a potential replay vulnerability with the protocol as it stands.

I admit I had overlooked replay as an attack vector (oops!), but peer review is exactly why open standards are more secure than proprietary ones, so I thought I’d kick off the discussion now!

The replay problem

Alice wants to see something that Bob has written, so she logs in according to the protocol. However, Eve is listening to the exchange and records the login. Later, she sends the same data back to Bob. Bob sees the signature, sees that it is valid, and logs Eve in as Alice.

Worse, Eve could send the same packet of data to Clare’s and David’s sites as well, all without needing access to Alice’s key.

Eve does need to be able to intercept Alice’s login session, which is largely impractical if HTTPS has been deployed, but since that can’t always be counted on, I’d like to improve the protocol.

Countermeasures

Largely, countermeasures to a replay attack take the form of creating the signature over something non-repeatable and algorithmically verifiable that Alice can generate and Bob can check.

This might be an algorithmically generated hash, a timestamp, or even just a random number; alternatively, the server could record whether it has seen a specific signature before.

My specific implementation has an additional wrinkle in that it has to function over a distributed network in which nodes don’t necessarily talk to each other, so we can’t simply check whether we’ve seen a signature or random number before: Bob might have seen it, but Clare and David won’t have.

I also want to avoid adding too much complexity, so I’d like to avoid, if I can, any sort of multi-stage handshake; for example, hitting an endpoint on the server to obtain a random session ID, then signing that and sending it back. Basically, I’d still like to be able to talk to a server using Unix command-line tools (gpg) and curl if I can!

Proposed revision

Currently, when Alice logs in to Bob’s site, Alice signs her profile URL using her key and sends it to Bob. Bob then uses this profile URL to verify that Alice is someone with access to Bob’s site/post, and then uses the signature to verify that it is indeed Alice who’s attempting to log in.

What I propose is that, in addition to forming the signature over Alice’s profile URL, she also forms it over the URL of the page she is trying to see and the current time in GMT.

Including the requested URL in the signature allows Bob to verify that the request is for access to his site. If Eve sent this packet to Clare or David, it could easily be discarded as invalid.

Adding the timestamp allows Bob to check that this isn’t an old packet being replayed. Since any implementation should allow a small tolerance (perhaps a few minutes either side) for clock drift, using a timestamp leaves a small window of attack in which Eve could replay the login. To counter this, Bob’s implementation should remember, for a short while, the timestamps received for Alice, and if the same one is seen twice, invalidate all of Alice’s sessions. A rough sketch of these checks, in code, follows the questions below.

  • Why invalidate all of Alice’s sessions when we see the same timestamp twice? Can’t we just assume that the second packet is Eve?

    Sadly not – sophisticated attackers are able to attack from a position physically close to you, so Eve’s login may be received first. In the situation where two identical login requests are received, it is probably safer to treat both as invalid.

    Perhaps a sophisticated implementation could delay Alice’s first login for a few seconds (after verifying) to see if any duplicates are received, and only proceed if there are none. This would limit the need to permanently store timestamps against a user’s account, but may be more complex from an implementation point of view.

  • Why use a timestamp rather than a random number?

    I went back and forth on this… a random number (nonce) would remove the vulnerability window, but it would require Bob’s site to store every nonce it had ever seen, so I finally opted not to take this approach.
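
Pulling the above together, here is a minimal sketch of the checks Bob might perform. The payload layout, the verify_signature() helper and the seen_timestamps store are my own illustrative assumptions, not something the draft spec fixes:

```python
# A minimal sketch of the revised check on Bob's side. The payload layout,
# the verify_signature() helper and the seen_timestamps store are all
# hypothetical -- the draft spec doesn't pin these details down yet.
from datetime import datetime, timedelta, timezone

CLOCK_DRIFT = timedelta(minutes=5)   # tolerance either side for clock drift
seen_timestamps = {}                 # profile_url -> timestamps already used

def build_payload(profile_url, requested_url, now=None):
    """What Alice signs: her profile URL, the page she wants, and the time in GMT."""
    now = now or datetime.now(timezone.utc)
    return "\n".join([profile_url, requested_url, now.strftime("%Y-%m-%dT%H:%M:%SZ")])

def check_login(payload, signature, my_base_url, verify_signature):
    """Return True only if the signed payload is fresh, for this site, and unseen."""
    profile_url, requested_url, ts_raw = payload.split("\n")

    # 1. The request must be for a page on *this* site, so Clare or David
    #    would reject a packet replayed at them.
    if not requested_url.startswith(my_base_url):
        return False

    # 2. The timestamp must fall inside the clock-drift window.
    ts = datetime.strptime(ts_raw, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
    if abs(datetime.now(timezone.utc) - ts) > CLOCK_DRIFT:
        return False

    # 3. The signature must verify against the key published at Alice's profile URL.
    if not verify_signature(profile_url, payload, signature):
        return False

    # 4. A repeated timestamp means a possible replay: refuse the login
    #    (a real implementation would also invalidate Alice's sessions).
    used = seen_timestamps.setdefault(profile_url, set())
    if ts_raw in used:
        return False
    used.add(ts_raw)
    return True
```

Note that the seen-timestamp store only needs to cover the drift window, so entries older than a few minutes can be discarded, which is exactly why I prefer this over storing every nonce forever.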

I’d be interested in your thoughts, so please, leave a comment!

GCHQoogle: so much for "Don't be evil"

So, unless you’ve been living under a rock for the past few months, you will be aware of the disclosure, by one Edward Snowden, of a massive multinational criminal conspiracy to subvert the security of internet communications and place us all under surveillance.

Even if you believe the security services won’t misuse this, what GCHQ can figure out, other criminal organisations can as well, so like many others, I’ve decided this’d be a good time to tighten the security of some of the sites I am responsible for.

Enabling HTTP Strict Transport Security

So, let’s start off by making sure that once you end up on the secure version of a site, you always get sent there. For most sites I already had a redirect in place, but that doesn’t help against a number of threats (such as an attacker stripping the redirect from the first, insecure request). Thankfully, modern browsers support a header which, when received, causes the browser to rewrite all connections to that site to the secure endpoint before they are sent.

A quick and easy win, so in my apache conf I placed:

Header add Strict-Transport-Security "max-age=15768000; includeSubDomains"

Auditing my SSL configuration, enabling forward secrecy

The next step was to examine the actual SSL/TLS configuration used by the various servers.

During the initialisation of a secure connection, a handshake takes place which establishes the protocol used (SSL/TLS), the version, and which algorithms are available to be used. The choices presented have an effect on exactly how secure the connection will be and, obviously, older protocols and weaker algorithms present security risks.

I have been, I admit, somewhat remiss in the past, and have largely used Apache’s defaults. That was OK for the most part but, as SSL Labs’ handy audit tool revealed, it left a number of weak algorithms enabled and didn’t take advantage of some newer security techniques.

I quickly disabled the weaker algorithms and SSL protocols, and also took the opportunity to specify that the server prefers algorithms which support Forward Secrecy.
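
For reference, the relevant mod_ssl directives end up looking something like the below. The exact cipher string is only a sketch of my own; the right list depends on your Apache and OpenSSL versions, so check the result against the SSL Labs test rather than copying it blindly:

# allow only the newer protocols
SSLProtocol all -SSLv2 -SSLv3
# prefer the server's ordering, with forward-secrecy (ECDHE) suites first
SSLHonorCipherOrder on
SSLCipherSuite ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA:AES128-SHA:AES256-SHA:!aNULL:!eNULL:!MD5:!RC4:!DES:!3DES

Putting the ECDHE suites first is what gives modern browsers forward secrecy; the plain AES suites at the end are the fallback for older clients that can’t negotiate ECDHE.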

Forward secrecy is a property of newer cipher suites (supported, sadly, only by newer browsers) in which each session is protected by its own ephemeral keys. Even if the server’s long-term private key, or the key for one session, is later compromised, it cannot be used to decrypt other sessions. This means that even if an attacker compromised one connection, they would not be able to read any past or future connections they had recorded.

Here is a handy guide for implementing this on your own server.

The downside of these changes of course is that older browsers (IE6, I’m looking at you) are left out in the cold, but these browsers (IE6, I’m still looking at you) are using such old implementations with weak algorithms, they would likely be in trouble anyway.

Security is a moving target, so it’s important to keep up to date!

Update: If you’re running Debian stable (as I am), you will have some issues with ECDHE suites until it updates to Apache 2.4. Until then I’ve tacked AES suites on at the end of the cipher list, to give IE something reasonable while still offering forward secrecy to more modern browsers.

Update 27/6/14: Debian recently backported a few of the 2.4 cipher suites into the 2.2 branch in stable. This means that Perfect Forward Secrecy is now supported for Internet Explorer!