Artificial intelligence creates its own encryption

Researchers from Google's Brain division have released an academic paper detailing how they got neural networks to create their own encryption scheme and communicate securely with each other.

Given that there is no human involvement in this encryption, does that make it more secure?

Google's Brain researchers have released a paper named “Learning to protect communications with adversarial neural cryptography”, which details how, when tasked, their AI systems were able to create their own form of encryption.

The experiment itself consisted of two neural networks, Bob and Alice, which shared a secret key. Another neural network, Eve, was set to intercept the communication between Bob and Alice, with the goal of reconstructing the intercepted text as closely as possible to the original. Alice and Bob, in turn, incurred a loss whenever Eve's reconstructions were better than random guesses. This created a generative adversarial set-up between the networks.
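As a rough illustration of that three-network set-up, a minimal PyTorch sketch might look like the following. The paper's actual implementation used TensorFlow with convolutional “mix and transform” layers, so the helper make_net, the layer sizes and the 16-bit message length here are simplifying assumptions, not the authors' architecture.

import torch
import torch.nn as nn

N = 16  # bits per plaintext and per key (an assumed, paper-scale size)

def make_net(in_bits, out_bits):
    # Hypothetical helper: a small fully connected network whose tanh
    # output approximates bits in the range [-1, 1].
    return nn.Sequential(
        nn.Linear(in_bits, 4 * out_bits), nn.ReLU(),
        nn.Linear(4 * out_bits, out_bits), nn.Tanh(),
    )

alice = make_net(2 * N, N)  # encrypts: (plaintext P, key K) -> ciphertext C
bob   = make_net(2 * N, N)  # decrypts: (ciphertext C, key K) -> P_Bob
eve   = make_net(N, N)      # eavesdrops on ciphertext C alone -> P_Eve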

Researchers Martín Abadi and David G Andersen wrote: “Informally, the objectives of the participants are as follows. Eve's goal is simple: to reconstruct P accurately (in other words, to minimise the error between P and P_Eve). Alice and Bob want to communicate clearly (to minimise the error between P and P_Bob), but also to hide their communication from Eve. Note that, in line with modern cryptographic definitions (eg, (Goldwasser & Micali, 1984)), we do not require that the ciphertext C “look random” to Eve. A ciphertext may even contain obvious metadata that identifies it as such.”

The researchers added: “Therefore, it is not a goal for Eve to distinguish C from a random value drawn from some distribution. In this respect, Eve's objectives contrast with common ones for the adversaries of GANs. On the other hand, one could try to reformulate Eve's goal in terms of distinguishing the ciphertexts constructed from two different plaintexts.”
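Those objectives translate into straightforward loss terms. The sketch below, continuing the PyTorch example above, follows the loss shapes described in the paper; encoding bits as ±1 and counting a fully wrong bit as an error of 1 means an error of roughly N/2 corresponds to Eve guessing at random.

def bit_error(p, p_guess):
    # L1 distance scaled so each fully wrong ±1 bit counts as 1;
    # random guessing then yields an expected error of about N/2.
    return (torch.abs(p - p_guess) / 2).sum(dim=1).mean()

def eve_loss(p, p_eve):
    # Eve simply minimises her reconstruction error.
    return bit_error(p, p_eve)

def alice_bob_loss(p, p_bob, p_eve):
    # Bob must reconstruct P, while Eve should do no better than random
    # guessing; deviations of her error from N/2 are penalised.
    eve_err = bit_error(p, p_eve)
    return bit_error(p, p_bob) + ((N / 2 - eve_err) ** 2) / ((N / 2) ** 2)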

Jacob Ginsberg, senior director at Echoworx told SCMagazineUK.com: “When you consider that humans are consistently the weakest point in a security chain, there's both financial and operational value in automating encryption between systems. It has the potential to dramatically increase security. It'll be interesting to see how this technology develops over the next few years and what the adoption levels among businesses will be.”

The paper details how, over time, the networks evolved their methods of communication, and Alice and Bob were eventually able to communicate clearly using their shared key. Eve did begin to make headway in decrypting messages, but once Alice and Bob caught on to her, they changed their methods and Eve could no longer crack them.
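That evolution comes from GAN-style alternating training: Alice and Bob update against the current Eve, then Eve retrains against the new ciphertexts. A sketch continuing the example above follows; the batch size, learning rate and extra Eve steps are loosely modelled on the paper's set-up, but treat them as assumptions.

import torch.optim as optim

opt_ab  = optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=0.0008)
opt_eve = optim.Adam(eve.parameters(), lr=0.0008)

def random_bits(batch, n):
    # Fresh random ±1 plaintexts or keys.
    return torch.randint(0, 2, (batch, n)).float() * 2 - 1

for step in range(20000):
    # Alice/Bob step: reconstruct P while keeping Eve near random guessing.
    p, k = random_bits(512, N), random_bits(512, N)
    c = alice(torch.cat([p, k], dim=1))
    loss_ab = alice_bob_loss(p, bob(torch.cat([c, k], dim=1)), eve(c))
    opt_ab.zero_grad(); loss_ab.backward(); opt_ab.step()

    # Eve steps: the adversary gets extra minibatches per Alice/Bob update.
    for _ in range(2):
        p, k = random_bits(512, N), random_bits(512, N)
        c = alice(torch.cat([p, k], dim=1)).detach()
        loss_e = eve_loss(p, eve(c))
        opt_eve.zero_grad(); loss_e.backward(); opt_eve.step()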

Bob and Alice became rather good at encryption; the researchers described their methods as “odd and unexpected”, as they depended on calculations not typically found in human-designed encryption.

Concluding, the researchers wrote: “In this paper, we demonstrate that neural networks can learn to protect communications. The learning does not require prescribing a particular set of cryptographic algorithms, nor indicating ways of applying these algorithms: it is based only on a secrecy specification represented by the training objectives. In this setting, we model attackers by neural networks; alternative models may perhaps be enabled by reinforcement learning.”

Michael Scott, head of cryptography at MIRACL, told SCMagazineUK.com: “Although it's being played up, I'm highly underwhelmed by this experiment. AI makes big promises and fails to deliver, and this is one of those times. Starting the experiment by giving Alice and Bob the key cuts out one of the biggest issues in cryptography: key distribution. The experiment is trivial at best.”

