Developments in machine learning: we've come a long way, but have far to go
We are still some way from computers passing the Turing Test but developments in machine learning are making real advances in cyber-security and preventative medicine.


Artificial intelligence is often viewed as a recent development in computer science, but its first stirrings go back to the 19th century. Countess Ada Lovelace, daughter of the English poet Lord Byron, collaborated with the mathematician Charles Babbage on the design of the first steam-driven mechanical computer, known as the Analytical Engine.


The Countess developed ideas about how such a machine could be programmed to display intelligence. In a published letter to Michael Faraday in 1844, she discussed the possibility of it simulating the workings of the human brain.


“I expect to bring the actions of the nervous and vital system within the domain of mathematical science, and possibly to discover some great vital law of molecular action, similar for the universe of life, to gravitation for the sidereal universe,” she wrote.


Fast-forward to the 1950s and her ideas on machine intelligence were the inspiration for mathematical genius Alan Turing, one of the cryptographers who cracked the German Enigma code at Bletchley Park.


He created his “Turing Test”, which remains the benchmark for measuring the intelligence of a machine: if a human judge cannot distinguish the output of a computer from that of a person, the machine can be deemed intelligent.


Today, we are getting closer to passing that test. IBM's supercomputers have already beaten humans at chess, and in 2014 the University of Reading claimed a computer called Eugene had fooled enough judges into believing it was human.


While such academic exercises are fun, it's the impact of AI on commercial computing that is getting people excited – particularly in the areas of data analysis and security, both physical and cyber. IBM in particular is developing its Watson “engine” to provide intelligent tools for business analytics.


Cyber-security applications are emerging in the market that can learn from the behaviour of malware inside a network in order to predict and prevent future attacks. Others can recognise anomalies in human behaviour to stop insider threats, or alert a manager when sensitive data is being accessed. It's a growing area, and cyber-researchers are increasingly of the opinion that AI techniques are the way forward if we are to reduce the level of cyber-attacks.
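

As a rough illustration of the kind of anomaly detection these tools rely on, the sketch below flags an unusual user session with an Isolation Forest from scikit-learn. The features (login hour, data downloaded, files touched) and the figures are illustrative assumptions, not a description of any particular product.

```python
# Minimal sketch: flag anomalous user behaviour with an Isolation Forest.
# The features and figures are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" behaviour: office-hours logins, modest activity.
normal = np.column_stack([
    rng.normal(10, 2, 500),   # login hour
    rng.normal(50, 15, 500),  # MB downloaded per day
    rng.normal(20, 5, 500),   # distinct files accessed
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A suspicious session: 3 a.m. login, large download, many files touched.
suspect = np.array([[3, 900, 400]])
print(model.predict(suspect))  # -1 marks the session as anomalous
```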


It's part of a step change in how we deal with cyber-attacks and crime. Organisations need to be much more intelligent and proactive about how they handle the onslaught of daily attacks. Analytics and machine learning will give us the tools to learn the tactics of our cyber-adversaries and to anticipate attacks rather than simply respond to them. Organisations also need to understand better where their data is and how it is accessed and processed. The onset of GDPR this year, which makes this a legal requirement across Europe, increases the need for systems that can automate data-auditing tasks.
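

As a minimal illustration of what automating one data-audit task might look like, the sketch below scans a toy access log and flags users who touch sensitive datasets without approval. The log format, dataset names and approval table are hypothetical.

```python
# Minimal sketch of an automated data-audit check: flag accesses to
# sensitive datasets by users who are not on the approved list.
# The log format and approval table are illustrative assumptions.
import csv
from io import StringIO

APPROVED = {"payroll": {"alice", "bob"}, "patient_records": {"carol"}}

access_log = StringIO(
    "user,dataset\n"
    "alice,payroll\n"
    "dave,patient_records\n"  # dave is not approved for this dataset
)

for row in csv.DictReader(access_log):
    if row["user"] not in APPROVED.get(row["dataset"], set()):
        print(f"ALERT: {row['user']} accessed {row['dataset']} without approval")
```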


So far, this first wave of AI-driven applications relies on machine learning to make decisions for us; the systems cannot yet think for themselves, but that is coming. The Oxford Foundation for Theoretical Neuroscience and Artificial Intelligence, headed by Dr Simon Stringer, is developing realistic computer simulations of the brain.


The team recently made a breakthrough in understanding how the brain may solve the classic feature-binding problem in visual neuroscience. This problem refers to the ability of the human visual system to represent the relationships between all of the features within a visual scene at every spatial scale.


Solving the binding problem is a necessary step towards the development of machines that are able to make sense of their sensory world and operate intelligently within it. Replicating the workings of the brain in software will produce far more powerful AI algorithms with a broader range of applications - not just in security.


For example, engineers have not yet developed a machine vision system that can spot when an elderly person has fallen down in their home. The challenge here is for the machine to reliably differentiate between falling down and other benign motions such as sitting down. To understand this challenge, consider how naturally a human can tell the difference.


Could a smart monitoring system be developed, perhaps utilising neural networks, that can spot such an occurrence and alert relatives or the local council? Work is underway in a number of London borough councils to build such systems.
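

As a toy illustration of the falling-versus-sitting problem, the sketch below classifies a motion from the downward speed of a tracked hip keypoint. A real system would run a pose-estimation network over camera frames; the keypoint traces and the speed threshold here are illustrative assumptions.

```python
# Toy sketch: distinguish a fall from sitting down using the vertical speed
# of a tracked hip keypoint. The traces and threshold are illustrative.
import numpy as np

def looks_like_fall(hip_heights_m, fps=25, speed_threshold=1.5):
    """Return True if the hip drops faster than the threshold (in m/s)."""
    speeds = np.diff(hip_heights_m) * fps           # per-frame change -> m/s
    return bool((-speeds).max() > speed_threshold)  # large downward speed

sitting = np.linspace(0.9, 0.45, 50)  # slow, controlled descent over two seconds
falling = np.linspace(0.9, 0.1, 10)   # rapid drop in under half a second

print(looks_like_fall(sitting))  # False
print(looks_like_fall(falling))  # True
```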


Elsewhere, deep learning neural networks are also able to discover and learn useful statistical trends embodied within masses of recorded data that are not obvious to human observers. 


In real terms this could have huge benefits in disease and illness prevention, as data reveals hidden clusters of heart disease or cancer, for example. This ability to learn underlying trends that are imperceptible to humans makes deep learning networks extremely potent. 
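

As a simplified illustration of finding hidden structure in health records, the sketch below uses a basic clustering algorithm as a stand-in for the far richer deep-learning models described above. The synthetic blood-pressure, cholesterol and age figures are invented for the example.

```python
# Minimal sketch of uncovering hidden sub-populations in patient records.
# KMeans stands in for deeper models; the synthetic features are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Two hidden sub-populations mixed together in one set of records:
# columns are blood pressure (mmHg), cholesterol (mmol/L) and age (years).
low_risk = rng.normal([120, 4.5, 45], [8, 0.4, 6], size=(300, 3))
high_risk = rng.normal([155, 6.5, 63], [8, 0.4, 6], size=(60, 3))
records = np.vstack([low_risk, high_risk])

X = StandardScaler().fit_transform(records)
labels = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(X)

# With well-separated groups, the smaller cluster recovers the high-risk patients.
print(np.bincount(labels))
```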


In medicine, neural networks are able to help with the interpretation of medical images, for example, spotting tumours in mammograms. Work is underway on developing such neural network systems to process data recorded from remote sensors such as CCTV or microphones.
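

The sketch below shows the general shape of the convolutional networks typically used for this kind of image interpretation, run here on random tensors standing in for greyscale scans. The architecture and sizes are illustrative assumptions, not a clinical model.

```python
# Minimal sketch of a convolutional classifier for greyscale scans.
# Architecture and sizes are illustrative; the inputs are random tensors.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # greyscale input -> 16 feature maps
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),                     # average each feature map to one value
    nn.Flatten(),
    nn.Linear(32, 2),                            # two classes: benign / suspicious
)

scans = torch.randn(8, 1, 128, 128)              # a batch of eight fake 128x128 scans
logits = model(scans)
print(logits.shape)                              # torch.Size([8, 2])
```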


Although AI researchers claim that their neural networks, and even expert systems, are inspired by the human brain, there is in fact little resemblance. In particular, the deep learning neural networks currently used by engineers bear no relation to the function of the cerebral cortex.


For example, real neurons in the brain communicate by exchanging electrical pulses called ‘spikes’, and learning has been found to depend on the timings of these spikes. These biological details give rise to a very different kind of neuronal dynamics in the brain, which have not yet been replicated by current AI techniques.
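

As a minimal illustration of that spiking behaviour, the sketch below simulates a single leaky integrate-and-fire neuron, one of the simplest spiking models. The parameter values are illustrative, and the sketch does not attempt the spike-timing-dependent learning mentioned above.

```python
# Minimal sketch of a leaky integrate-and-fire neuron: the membrane potential
# leaks toward rest, is pushed up by input, and emits a spike at threshold.
# Parameter values are illustrative assumptions.
import numpy as np

dt, tau = 1.0, 20.0                               # time step and membrane time constant (ms)
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0   # potentials (mV)

rng = np.random.default_rng(2)
input_current = rng.uniform(0.5, 1.5, size=200)   # noisy input drive (arbitrary units)

v = v_rest
spike_times = []
for t, i_in in enumerate(input_current):
    v += dt / tau * (v_rest - v) + i_in  # leak toward rest plus input drive
    if v >= v_thresh:                    # threshold crossed: spike and reset
        spike_times.append(t)
        v = v_reset

print(spike_times)                       # the neuron "communicates" via these spike times
```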


All of which brings us full circle to the prescient writings of Countess Ada Lovelace over 170 years ago, who was the first to propose using computers to simulate the “actions of the nervous and vital system” as a route to artificial intelligence. 


Contributed by Brian Kingham, chairman of Reliance acsn and director of the UCL Institute of Security and Crime Science.


*Note: The views expressed in this blog are those of the author and do not necessarily reflect the views of SC Media UK or Haymarket Media.