In the opening presentation of Reset 2018, Mary Haigh, product director at BAE Systems, dissected the analogy between cyber-immune systems and biological immune systems, concluding that there are indeed parallels, but that it is not an exact fit.
AI has contributed to the increase in cyber-attacks, but in this article Rob Holloway explains how AI could improve the accuracy of predicting, preventing and detecting cyber-attacks.
"You won't become a great defender without attack capability. As a goalkeeper, you need to play against the best to improve." Red-teaming simulations form part of the AI tool's learning process, helping it identify truly malicious events.
Needing constant human input for AI training in cyber-security defeats the purpose of reducing the required human labour, while fully unsupervised learning is a whole other challenge. But AI need not be 100 per cent supervised or unsupervised: semi-supervised approaches combine a small set of labelled examples with large volumes of unlabelled data.
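One common middle ground is self-training: label a handful of events by hand, then let the model pseudo-label only the unlabelled events it is confident about. A minimal sketch, assuming toy two-feature event vectors and a hand-picked confidence margin (all names and values are illustrative, not a real detector):

```python
# Minimal self-training sketch: a nearest-centroid classifier that
# pseudo-labels unlabelled events it is confident about, then retrains.

def centroid(points):
    """Mean of a list of 2-D feature vectors."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def self_train(labelled, unlabelled, margin=0.5):
    """labelled: dict class -> list of points, e.g. 'benign'/'malicious'.
    Pseudo-label unlabelled points whose distance gap exceeds `margin`."""
    classes = {k: list(v) for k, v in labelled.items()}
    for p in unlabelled:
        d = sorted((dist(p, centroid(v)), k) for k, v in classes.items())
        # Only trust the pseudo-label when one centroid is clearly closer;
        # ambiguous points are left for a human analyst.
        if d[1][0] - d[0][0] > margin:
            classes[d[0][1]].append(p)
    return {k: centroid(v) for k, v in classes.items()}

def classify(p, centroids):
    return min(centroids, key=lambda k: dist(p, centroids[k]))
```

The margin is the key design choice: it decides how much of the unlabelled data the machine absorbs on its own and how much still needs a human label.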
Data needs context, meaning and insight to move it up to the level of wisdom or understanding. AI should always be seen as decision support rather than decision-maker. The latter role is always best left to the human mind.
As AI technology becomes standard in defence, it is likely to become equally standard in attack - with one side using AI to spot patterns of misuse and malicious activity, while the other uses it to find vulnerabilities and evade detection.
AI needs to be representative of the community it serves. It should draw on established concepts such as open data, ethics advisory boards and data protection legislation, as well as new frameworks and mechanisms such as data portability and data trusts.
While we wait for technology to mature, businesses need to view AI as a tool to be harnessed by skilled cyber-security professionals, not used in place of them.
The government is making £50,000 cyber-security training grants available, and separately it has invested £1.8 million in 'innovative' machine learning technologies that will help improve threat detection capabilities at airports.
With enterprises struggling with a massive shortage of experienced cyber-security professionals, today's CISOs are placing more faith in machine learning, which they believe will be important to their IT security functions.
Most ransomware victims are hit more than once, and many lack adequate defences. The industry is adopting AI that deploys deep-learning neural networks; this machine learning is predictive, looking for and identifying the techniques scammers use.
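The idea of scoring by technique rather than by known signature can be sketched with a single logistic unit over binary technique indicators. A real product would train a deep network on telemetry; the features and weights below are illustrative assumptions only:

```python
import math

# Toy technique-based scorer: one logistic unit over binary indicators of
# common scam techniques (hypothetical feature names and hand-set weights).
WEIGHTS = {
    'macro_in_attachment': 2.0,
    'link_to_new_domain': 1.5,
    'urgent_payment_language': 1.2,
    'known_sender': -2.5,   # familiarity pushes the score down
}
BIAS = -1.0

def scam_score(features):
    """features: dict of indicator -> 0/1. Returns a 0..1 malice score."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))
```

Because the score depends on behaviours (macros, fresh domains, urgency) rather than file hashes, a never-before-seen sample reusing old techniques still scores high, which is what makes the approach predictive.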
AI-driven applications rely on machine learning to make decisions, but they cannot yet think for themselves, though that is coming. Neural networks and expert systems may be inspired by the human brain, but there is little real comparison.
From reactive network-security capabilities we moved to developing predictive capabilities, and we are now able to achieve prescriptive security capability: intervening autonomously or flagging up issues to assist human decisions.
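The prescriptive split between autonomous intervention and human-assisted decisions usually comes down to confidence thresholds. A minimal sketch, with hypothetical threshold values and action names:

```python
# Sketch of a prescriptive-security decision rule: act autonomously only
# when the model is very confident, otherwise escalate to an analyst.
AUTO_BLOCK = 0.95   # confidence above which the system intervenes itself
FLAG = 0.60         # confidence above which a human analyst is alerted

def prescribe(event_id, malicious_score):
    """Map a model's 0..1 malice score for an event to a prescribed action."""
    if malicious_score >= AUTO_BLOCK:
        return (event_id, 'block')         # autonomous intervention
    if malicious_score >= FLAG:
        return (event_id, 'flag-analyst')  # decision support for a human
    return (event_id, 'log-only')          # retain for later pattern analysis
```

Keeping the middle band routed to an analyst preserves the decision-support role: the machine only acts alone where the cost of a false positive is lowest.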
Humans and machine learning will have to come together to test autonomous vehicles, and the idea of a crash-test dummy with an AI brain may soon become a very necessary reality.
According to research announced during the recent Black Hat conference in Vegas, some 62 per cent of infosec pros reckon weaponised AI will be in use by threat actors within 12 months.
Sándor Bálint explores the need for cohesion between humans and machines in the cyber-security sector.
Laurent Bride explores the factors constraining the future development of AI while outlining practical opportunities where AI might enhance our lives, a precursor to exploring infosec concerns and usage.
Bogdan Botezatu discusses what organisations can do to give themselves the best possible chance of protecting against APTs, and how the next wave of cyber-security solutions use machine-learning algorithms to beat malware and stay one step ahead of the hackers.
Intelligent and automated systems are currently being touted as the next step in cyber-security to help combat the 'always-on' cyber-criminal, but are they right for us? And are we prepared for them?