Phil Zimmermann launched Pretty Good Privacy (PGP) in 1991. Photograph: Frantzesco Kangaris for The Guardian

Humans are the weakest link when it comes to encryption

John Naughton
The theory behind encryption is flawless. It’s just that too few people properly understand it

“Encryption works,” said Edward Snowden in June 2013, in reply to a question from a Guardian reader about how he could protect his communications from NSA/GCHQ surveillance. “Properly implemented strong crypto systems are one of the few things that you can rely on.” Mr Snowden is a smart and thoughtful guy and he chooses his words with care. So note the qualifications in that sentence: “strong crypto” and “properly implemented”.

By strong crypto, he meant public-key cryptography, which works by using two separate keys, one of which is private and one of which is public. Although different, the two parts of the key pair are mathematically linked. The concept originated, ironically, in GCHQ in 1973, but only reached the public domain four years later after three MIT researchers, Ron Rivest, Adi Shamir and Leonard Adleman, independently invented a way to implement it. Their algorithm was christened RSA, based on the first letters of their surnames.
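For readers who want to see the two-key idea in action, here is a minimal sketch in Python using the open-source cryptography package – none of it drawn from the article, and the key size, padding choices and message are purely illustrative:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Generate an RSA key pair; 65537 and 2048 bits are conventional choices.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # Anyone holding the public key can encrypt a message to its owner...
    ciphertext = public_key.encrypt(
        b"Meet me at the library",
        padding.OAEP(
            mgf=padding.MGF1(algorithm=hashes.SHA256()),
            algorithm=hashes.SHA256(),
            label=None,
        ),
    )

    # ...but only the holder of the matching private key can decrypt it.
    plaintext = private_key.decrypt(
        ciphertext,
        padding.OAEP(
            mgf=padding.MGF1(algorithm=hashes.SHA256()),
            algorithm=hashes.SHA256(),
            label=None,
        ),
    )
    assert plaintext == b"Meet me at the library"

The asymmetry is the whole point: anyone can run the encryption step, but reversing it requires the private key, which is the property PGP builds on.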

In 1991, an American geek and activist named Phil Zimmermann released an open-source implementation of RSA and called it “Pretty Good Privacy”, or PGP. Its significance was that, for the first time in history, it enabled citizens to protect the privacy of their communications with military-grade cryptography. The US government was not amused: it classified PGP as a “munition” and opened a criminal investigation into Zimmermann for exporting munitions without a licence.

With characteristic chutzpah, Zimmermann found a way round the export restrictions. Harnessing the power of the First Amendment, he published the entire source code of PGP in a hardback book distributed by MIT Press. Anyone purchasing the book could rip off the covers, scan the pages and compile their own working copy of PGP.

Illustration by Matt Murphy.

This is how I first encountered the program. I was looking for a book in the open shelves of Cambridge University Library and suddenly came upon a thick blue hardback with the title PGP Source Code and Internals. As I opened it and saw the thousands of lines of computer code, I experienced what James Joyce would have called an epiphany. The volume I held in my hands provided, I thought, conclusive proof of the revolutionary potential of digital technology. For not only had Zimmermann effortlessly vaulted the surly bonds of old-style national security, but he had given the people of the world a powerful tool to safeguard their privacy from the intrusions of the state (or of anyone else).

Both of those propositions were – and remain – true. PGP (now in its fifth incarnation) does indeed enable one to protect one’s communications from prying eyes. It meets Snowden’s requirement for “strong crypto”. But it hasn’t realised its revolutionary potential, because powerful software turns out to be a necessary but not sufficient condition for effective security. The reason is that, to be effective, PGP has to be implemented by humans, and they are the weak link in the chain.

This was brought forcibly home to me last week at a symposium on encryption, anonymity and human rights jointly organised by Amnesty International and academics from Cambridge University. It opened with a masterclass in encryption run by my colleague Frank Stajano, who is reader in security and privacy in the Computer Laboratory and for whom the term “security geek” might have been invented. Stajano describes his mission as “to make the digital society fair for non-techies” and, to that end, he is running a fascinating project aimed at finding a more usable and more secure replacement for passwords, one that does not require people to remember incomprehensible sequences of letters and numbers.

Since the attendees at the event included geeks of various degrees, as well as activists and academics, Stajano opened with some audience research to sort the sheep from the goats. “I publish my PGP public key on my website,” he said. “Anyone who could send me an encrypted message using that key – and get a reply – go to that side of the room. Everyone else go to the other side.”

You can imagine what happened. The vast majority of attendees failed the Stajano test. And that was not because they did not know of PGP’s existence, or because they didn’t passionately wish to deploy it. It was simply that, for them, PGP was too arcane.
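What does passing the test actually involve? Here is a rough sketch – not Stajano’s recipe, just an illustration assuming the standard GnuPG command-line tool driven from Python, with placeholder file names and a placeholder address:

    import subprocess

    # Step 1: fetch the recipient's public key from their website, then import it.
    subprocess.run(["gpg", "--import", "recipient_public_key.asc"], check=True)

    # Step 2: encrypt the message to that key, ASCII-armoured so it can be emailed.
    subprocess.run(
        ["gpg", "--encrypt", "--armor",
         "--recipient", "recipient@example.org",
         "--output", "message.txt.asc",
         "message.txt"],
        check=True,
    )

    # Step 3: to read the reply, decrypt it with your own private key.
    subprocess.run(["gpg", "--decrypt", "reply.txt.asc"], check=True)

Add to that generating and publishing a key pair of one’s own, and checking that the downloaded key really belongs to the intended recipient, and it is easy to see why so few people crossed the room.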

And as the masterclass proceeded, it was easy to understand why. Public-key cryptography is intrinsically complex, and non-techies find it incomprehensible because they can’t assemble a mental model of how it works, with the result that they struggle to set it up for themselves. There’s nothing wrong with Zimmermann’s technology; it’s just that humans can’t implement it properly. Which is why Snowden’s second qualification – “properly implemented” – is the one we have to heed.
