Some people have begun trusting machines more than humans when it comes to cyber-security, or so a global study suggests, but most still have doubts.
More than a quarter of the 10,000 respondents to a global survey said they would rather have their cyber-security managed by artificial intelligence than by human operatives. This raises the question of whether they actually understand the role each plays in protecting their data.
The study, commissioned by Palo Alto Networks and conducted by YouGov with input from Dr Jessica Barker, found that confidence in the AI approach varied across geographies. Italy was the most enthusiastic, with over 38 percent of respondents choosing AI, while the UK was rather more reserved, with just 21 percent in favour.
Nearly a third (29 percent) of those who preferred the idea of AI security management said such methods had a "very positive" impact on their online experiences and expectations. By comparison, just 20 percent of those against AI's role expressed the same level of confidence in human-managed security systems.
Far too many people appear to have misplaced trust in device security, the study suggested. A total of 38 percent thought smart home devices and wearables were secure. In the UK, confidence in IoT security rose to 46 percent, and in the UAE to a remarkably worrying 71 percent.
In a reassuring trend, 54 percent of respondents said they took responsibility for the security of their own personal data online. However, the younger demographic (18-24) were far less willing to do so (43 percent) than the 55+ age group (58 percent).
"The older generation are more likely to have been exposed to cyber-security training and practices in the work environment, and this could have influenced their mindset to be more security conscious," said research lead Dr Jessica Barker.
SC Media UK was more interested in why a quarter of those asked trusted AI to protect their data better than human infosecurity professionals would. The trend is not at all surprising, said Greg Day, VP and CSO EMEA at Palo Alto Networks.
"I wonder today how many put their symptoms into Google before going to see their GP?" Day said. "It doesn’t mean they don’t trust their GP, but it shows they want to validate against much larger data sets that no one human mind can process in the time scales required."
The raw data from the research revealed that 37 percent preferred a human managing their cyber-security and 36 percent didn't know one way or the other, which suggests there is clearly more work to be done in explaining how data is secured in the real world.
Ethical hacker John Opdenakker agrees. In conversation with SC Media UK, he said the numbers suggest only a fraction of people really understand what AI/ML means. "They seem to think it’s the magic that’s going to stop phishing or malware attacks," Opdenakker argued.
"There are certainly use cases for machine-learning solutions and we see companies applying it with a certain degree of success in malware detection software," he said. However, preventing users from those with malicious intent will always be a best effort exercise, he added.
Application security professional Mike Thompson shares this view. The vast majority of those putting their trust in machines over man don't realise that "every computer system takes its initial instruction from a human operator," he said. Cyber-security, he asserted, remains a very real, very human problem and is likely to remain so for some time yet.
"Simply put, AI will not really help with technological debt and asset management. All the AI/ML in the world is not going to make your users not click links in phishing emails," explained Ian Thornton-Trump, head of cyber-security at AmTrust International. "We don’t have true AI, we have advanced pattern matching: I can’t just yell at Alexa ‘secure all my things!’."