Those of us in userland may rarely have to worry about the nuts and bolts of encryption, but they exist nevertheless and must be replaced or updated now and then. Google is taking aim at one particularly stubborn nut, public key verification, with a new open source project called Key Transparency.
It’s a sequel of sorts to Certificate Transparency, a project with similar aims but, necessarily, a very different implementation. Both address a common problem on the internet: the need to verify that the person or server you’re connecting to is the one you think you’re connecting to.
One solution is something like Keybase, a collection of verified users and their various cryptographic credentials (PGP and all that). What Google wants to do, however, is to make sure that these contact details are verified at a systematic level that protects privacy as well.
In other words, you shouldn’t just have to trust a verified address you find online; that address should also double-check itself as part of the process of establishing a connection, in order to prevent things like man-in-the-middle attacks.
Think of it like this: you could look up someone’s address online in a verified source like the voter rolls, but by the time you visit their house, there’s no guarantee the person you want isn’t tied up in the back while someone else impersonates them. The addition of Key Transparency would be like asking whoever answers the door to show you their ID before you talk.
Okay, so it’s not a perfect metaphor, but you get the idea.
Essentially, Key Transparency uses a large-scale database of accounts and their public keys, encoded in such a way that they are obscure to an attacker but verifiable by users. The specifics, to be honest, are above my level of cryptographic literacy, but the actual info forms the lowest level of a “Merkle tree” that can be assembled and verified from the bottom up with information traded among its users; only hashed data changes hands, not actual user data like emails and PGP keys.
You start with information you know, or think you know, and its hash is combined with other users’ hashes and rehashed. If this second (or third or fourth or 256th) hash matches, you’re good. If it doesn’t, one of the pieces of information is wrong and needs to be rechecked.
That’s about as specific as I can get; you can read the overview of the technical method on GitHub.
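Still, if you want a rough feel for that bottom-up check, here’s a toy sketch in Python. This is my own illustration, not Key Transparency’s actual code, and the records and hashing scheme are made up for the example: leaf data gets hashed, pairs of hashes get combined and rehashed upward, and any tampering anywhere changes the final root.

```python
# Toy sketch of bottom-up Merkle verification (illustrative only, not
# Key Transparency's implementation).
import hashlib


def h(data: bytes) -> bytes:
    """SHA-256 hash of the input bytes."""
    return hashlib.sha256(data).digest()


def merkle_root(leaves: list[bytes]) -> bytes:
    """Build a Merkle root by hashing the leaves, then hashing pairs upward."""
    level = [h(leaf) for leaf in leaves]        # hash each leaf first
    while len(level) > 1:
        if len(level) % 2 == 1:                 # duplicate the last node if the level is odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])     # combine each adjacent pair
                 for i in range(0, len(level), 2)]
    return level[0]


# Hypothetical (user, key) records; a real log would store hashed
# commitments, not raw emails or PGP keys.
records = [b"alice:pubkey1", b"bob:pubkey2", b"carol:pubkey3", b"dave:pubkey4"]
trusted_root = merkle_root(records)

# Swap in a bogus key for one user and the recomputed root no longer matches.
tampered = list(records)
tampered[1] = b"bob:attacker-key"
print(merkle_root(records) == trusted_root)    # True
print(merkle_root(tampered) == trusted_root)   # False
```

The point is just that the whole tree stands or falls on its root: change one entry anywhere and the mismatch shows up at the top.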
The end result is that you can verify your information only if you already have it, and without disclosing or learning any information from other users of the database. It’s efficient, auditable, highly scalable (the lowest level supports 2^256-1 “leaves” but verification does not require more than a few at a time), and potentially easily integrated into credential-tracking services like Keybase or into secure communications.
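To picture why checking your own entry stays cheap even in a gigantic tree, here’s another hypothetical sketch, again my own and not the project’s API: an inclusion proof hands you only the sibling hashes along one path, so you can recompute the root for your record without ever seeing anyone else’s data.

```python
# Toy sketch of a Merkle inclusion proof (illustrative only): verifying one
# leaf needs just the sibling hash at each level, never the other records.
import hashlib


def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def build_levels(leaves: list[bytes]) -> list[list[bytes]]:
    """Return every level of the tree, from hashed leaves up to the root."""
    levels = [[h(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        if len(prev) % 2 == 1:                   # pad odd levels by duplicating the last node
            prev = prev + [prev[-1]]
        levels.append([h(prev[i] + prev[i + 1])
                       for i in range(0, len(prev), 2)])
    return levels


def prove(levels: list[list[bytes]], index: int) -> list[bytes]:
    """Collect the sibling hash at each level for the leaf at `index`."""
    proof = []
    for level in levels[:-1]:
        sibling = index ^ 1                      # the other node in the pair
        proof.append(level[sibling] if sibling < len(level) else level[index])
        index //= 2                              # move up one level
    return proof


def verify(leaf: bytes, index: int, proof: list[bytes], root: bytes) -> bool:
    """Recompute the root from one leaf plus its proof and compare."""
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root


records = [b"alice:pk1", b"bob:pk2", b"carol:pk3", b"dave:pk4"]
levels = build_levels(records)
root = levels[-1][0]

# Bob checks his own entry with just two sibling hashes; no other user's
# data is revealed to him in the process.
print(verify(b"bob:pk2", 1, prove(levels, 1), root))            # True
print(verify(b"bob:attacker-key", 1, prove(levels, 1), root))   # False
```

In a tree that size, the proof only grows with the depth of the tree, not the number of users, which is what keeps verification practical.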
Google collaborated with the CONIKS team, Open Whisper Systems, and the security team at Yahoo (soon to be Altaba) on Key Transparency. It’s “very early days,” the blog post announcing it reads, so it’ll be a while before it’s added to any products. In the meantime, feel free to peruse or contribute to the codebase.
Featured Image: Bryce Durbin