Artificial intelligence creates its own encryption
Researchers from Google's Brain division have released an academic paper detailing how they trained neural networks to devise their own encryption method and use it to communicate with each other.
Given that no human designed this encryption, does that make it more secure?
The paper, titled “Learning to protect communications with adversarial neural cryptography”, details how neural networks, when tasked with keeping messages secret from an eavesdropper, learned to encrypt their communications without being taught any cryptographic algorithm.
The setup involves three networks: Alice, who encrypts a plaintext P; Bob, who tries to decrypt it; and Eve, who eavesdrops. Researchers Martín Abadi and David G Andersen wrote: “Informally, the objectives of the participants are as follows. Eve's goal is simple: to reconstruct P accurately (in other words, to minimise the error between P and P_Eve). Alice and Bob want to communicate clearly (to minimise the error between P and P_Bob), but also to hide their communication from Eve. Note that, in line with modern cryptographic definitions (eg, Goldwasser & Micali, 1984), we do not require that the ciphertext C ‘look random' to Eve. A ciphertext may even contain obvious metadata that identifies it as such.”
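The quoted objectives can be sketched as loss functions. The function names and the exact shape of the penalty below are illustrative assumptions rather than the paper's formulation (which is defined over network outputs during training), but the structure follows the quoted goals: Eve minimises her reconstruction error, while Alice and Bob minimise Bob's error and additionally push Eve's error towards that of random guessing.

```python
import numpy as np

# Illustrative sketch of the adversarial objectives (assumed names/shapes).
# Plaintext bits are encoded as -1/+1 floats, as in the paper's setup.

def reconstruction_error(p, p_hat):
    # Number of bits recovered incorrectly: each wrong +/-1 bit
    # contributes |p - p_hat| = 2, so divide the L1 distance by 2.
    return np.sum(np.abs(p - p_hat)) / 2.0

def eve_loss(p, p_eve):
    # Eve's goal is simple: reconstruct P as accurately as possible.
    return reconstruction_error(p, p_eve)

def alice_bob_loss(p, p_bob, p_eve, n_bits):
    # Bob should reconstruct P accurately...
    bob_err = reconstruction_error(p, p_bob)
    # ...while Eve should do no better than random guessing, i.e. get
    # about half the bits wrong; deviation either way is penalised.
    eve_err = reconstruction_error(p, p_eve)
    eve_penalty = ((n_bits / 2 - eve_err) ** 2) / (n_bits / 2) ** 2
    return bob_err + eve_penalty
```

With a 4-bit plaintext, a perfect Bob and an Eve who gets exactly half the bits wrong yield a loss of zero for Alice and Bob, whereas an Eve who reads the message perfectly drives their loss up even if Bob is also perfect.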
Jacob Ginsberg, senior director at Echoworx, told SCMagazineUK.com: “When you consider that humans are consistently the weakest point in a security chain, there's both financial and operational value in automating encryption between systems. It has the potential to dramatically increase security. It'll be interesting to see how this technology develops over the next few years and what the adoption levels among businesses will be.”
Alice and Bob got rather good at encryption; the researchers described their methods as “odd and unexpected”, as the networks relied on calculations not found in human-designed encryption.
Concluding, the researchers wrote: “In this paper, we demonstrate that neural networks can learn to protect communications. The learning does not require prescribing a particular set of cryptographic algorithms, nor indicating ways of applying these algorithms: it is based only on a secrecy specification represented by the training objectives. In this setting, we model attackers by neural networks; alternative models may perhaps be enabled by reinforcement learning.”
Michael Scott, head of cryptography at MIRACL, told SCMagazineUK.com: “Although it's being played up, I'm highly underwhelmed by this experiment. AI makes big promises and fails to deliver, and this is one of those times. Starting the experiment by giving Alice and Bob the key cuts out one of the biggest issues in cryptography: key distribution. The experiment is trivial at best.”