'Responsible encryption': Why is it dangerous?



US Deputy Attorney General Rod Rosenstein and Prime Minister Theresa May keep pushing for encryption with government-placed backdoors - and they are marketing it as 'responsible encryption'.

For those who have never heard the term, it is the new phrase being pushed by various governments to 'provide security for consumers, and allow law enforcement agencies full access, when required'.

Amber Rudd, the UK Home Secretary, recently stated on The Andrew Marr Show that WhatsApp's end-to-end encryption is "absolutely unacceptable...there should be no place for terrorists to hide".

At first glance, this might seem to make sense. After all, if there is a murder investigation, we would expect mobile telephone companies to hand over details of telephone calls and SMS messages, along with any handshake information the handset exchanged with mobile phone transceivers. The problem arises with the concept of a 'backdoor'.

When you intentionally introduce a backdoor into a system, what you are really doing is introducing a strategic weakness. It will be exploited - not if, but when. This is not the technology industry being awkward; it is simply a matter of fact.
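To see why, consider a deliberately simplified sketch. The snippet below is a toy model only - a one-time-pad XOR cipher, not a real protocol, and the 'escrow database' is hypothetical - but it illustrates the structural point: a backdoor is a second decryption path, so anyone who breaches the escrow store can read the traffic without ever touching sender or recipient.

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Toy one-time-pad encrypt/decrypt (XOR is its own inverse)."""
    return bytes(x ^ y for x, y in zip(a, b))

# End-to-end: one key, known only to sender and recipient.
message = b"meet at noon"
session_key = secrets.token_bytes(len(message))
ciphertext = xor_bytes(message, session_key)

# A 'responsible encryption' scheme adds an escrowed copy of the key
# that a third party can use on demand (hypothetical store).
escrow_db = {"msg-001": session_key}

# The intended recipient decrypts as normal...
assert xor_bytes(ciphertext, session_key) == message

# ...but so does anyone who compromises the escrow database.
stolen_key = escrow_db["msg-001"]
assert xor_bytes(ciphertext, stolen_key) == message
```

The weakness is not in the mathematics of the cipher but in the architecture: the escrow store becomes a single high-value target whose compromise breaks every conversation at once.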

Governments and industry partners do not have an amazing track record of keeping confidential data secret. In September, Adobe published its private PGP key in the public domain, Deloitte had 350 clients' details hacked, and Equifax put into doubt the personal details of 700,000+ UK subjects (the list goes on and on). Then we have examples of government tooling that has been released into the wild: WannaCry and NotPetya are incredibly effective malware strains used by hackers, and both were built on the ETERNALBLUE exploit developed by the U.S. National Security Agency. Now it is in the hands of criminal hackers.

There will also always be unintentional backdoors: flaws in the code that allow possible compromise. These are rarely intentional and are simply a fact of life; they are generally patched when discovered. A good code-review structure and a regular update routine, combined with a good bug-bounty scheme and regular penetration testing, will fix most of them as they develop.

However, when you actively introduce backdoors, the system is weak by design. It will get exploited - and not by "the goodies".

The other side of this, of course, is from the 'criminal/terrorist' point of view. As an example, imagine it is publicly announced today that WhatsApp, Signal, and Viber no longer have end-to-end encryption. The government declares it can now access all of these services as required, but only when absolutely necessary.


Problem solved? Except it isn't. Just like Myspace, people will happily migrate to another service if they see a need. In this case, criminals will most likely migrate to an XMPP service over Tor and use either PGP or OTR (Off-the-Record) encryption, or another decentralised encrypted communications tool. These could be run from anywhere - with no possible access to the keys. These tools already exist on the Android and iOS stores, and even if you removed them from their respective stores, they could be sideloaded. All of them require very little technical skill to use and will provide everything the criminals need.

In the end, the only thing the government would have achieved is pushing criminals from services it knows about into services it can barely locate - all whilst actively making the software and platforms used by the rest of us inherently insecure by design.

All in all, 'responsible encryption' is anything but responsible.  

