Organisations need to equip themselves with a unified view of their entire network, extending from the endpoint through to the cloud - including visibility into encrypted traffic that could be hiding malicious activity.
More data and hack case studies should allow professionals to take a step back and place attacks into a broader context, harnessing the power of AI to learn from these breaches.
Most ransomware victims are hit more than once, and many lack adequate defences. The industry is adopting AI built on deep-learning neural networks, which is predictive, seeking out and identifying the techniques scammers use.
AI-driven applications rely on machine learning to make decisions, but they cannot yet think for themselves, though that is coming. Neural networks and expert systems may be inspired by the human brain, but the comparison is a loose one.
Happy New Year! SC Media UK resumes news reporting on 2 Jan 2018. During the break, catch up on our experts' predictions for a range of positive and negative futures, from the impacts of AI to likely new zero-days.
Driving the Autumn Budget - speed of introduction causes concerns over the safety of autonomous vehicles, including terrorist take-over, plus where responsibility lies - user, manufacturer (hardware/software) and regulator.
From reactive network security capabilities we moved to developing predictive capabilities, and now we are able to achieve prescriptive security capability, intervening autonomously or flagging up issues to assist human decisions.
Deloitte fell victim to a data breach that could have been prevented by simple measures that are standard security protocols; businesses must not only focus on the basics but also incorporate an innovative approach.
Kasparov disagrees with Turing that machines winning at chess was such a watershed. He contends that the machine which beat him was not intelligent, merely very fast; it was only a matter of time before machines became the winners.
Brian Cox explained that quantum computing is so effective at factoring the large numbers underpinning cryptography that it would render classical public-key cryptography redundant.
We can expect to see a cyber-security incident at a category one level within the next few years. The government specifying what it will buy is an effective way of changing the market - Ian Levy, technical director, NCSC
More talent, less technology is the best strategy for keeping networks safe. Rather than use AI, companies are better off paying a bunch of junior engineers to patch vulnerabilities all day, says Heather Adkins.
Humans and machine learning will have to come together to test autonomous vehicles, and the idea of a crash-test dummy with an AI brain may soon become a very necessary reality.
CISOs do indeed need to articulate cyber risk to the board in a business context, but equally, the board needs to get a better grasp of cyber and weigh the criticality of security integrity against continuity of service and profitability.
Given shortages of skilled staff, Ryan Benson says we need to change processes or adopt new technologies, then get better at managing data at scale and at automating the tasks that slow down analysts.
In Case You Missed It: UK data protection; Is AI weaponised?; Is MalwareTech innocent?; Mandiant leak; WiFi vulnerabilities
Staying safe and protecting data is increasingly going to be in the hands of artificial intelligence, says Peter Boyle, who adds that we need to get this right, spot attacks and breaches earlier, and cut security costs.
The business world has been battered by successive waves of new technologies, but Sean Harrison-Smith says businesses need to take the risk now and deploy AI and big data for cyber-security, as it may lead to fewer risks in the future.
In Case You Missed It: Rudd crypto-crash; privacy shield invalid; AI weaponised?; Alexa pwned; Swedish breach fallout
According to research announced during the recent Black Hat conference in Vegas, some 62 per cent of infosec pros reckon weaponised AI will be in use by threat actors within 12 months.
Sándor Bálint explores the need for cohesion between humans and machines in the cyber-security sector.
Despite ruling that the Royal Free NHS Trust failed to comply with data protection laws in its experiment with Google DeepMind, the ICO has not slapped the trust with a fine, saying, "The Data Protection Act is not a barrier to innovation."
Laurent Bride explores factors constraining future development of AI while outlining the potential practical opportunities where AI might be used to enhance our lives - a precursor to exploring infosec concerns and usage.
Carbon Black hosted a press briefing in London this morning to discuss research it has conducted to gauge how security researchers perceive non-malware attacks, and how good Artificial Intelligence (AI) and Machine Learning (ML) are at stopping them.
Moreno Carullo examines how machine learning and AI can be deployed to protect physical infrastructures from cyber-attack.
In Case You Missed It: Uni self-DDoSed; AI beats signature detection; Health sector targeted?; New UK nuclear cyber-strategy; NCSC officially opened.
Kevin Davis discusses how vital it is that organisations look to agility as a way of providing speed of change and embracing new technologies to facilitate customer needs.
Researchers from Google's Brain division have released an academic paper which details how they were able to get neural networks to create their own encryption standard, and communicate between each other.
Nearly half of consumers see Artificial Intelligence as having a positive impact on society, compared to seven per cent who do not.