In computer systems and network design, human factors impinge on your system in numerous ways: the positive interaction of a person doing the job they are supposed to do, the negative interaction of a person making mistakes or misusing the system, and the malicious interaction of the attacker who wants to subvert your system.
To make the situation even more complex, both good actors and bad actors can be either insiders or outsiders.
Getting a handle on all of these factors is a vital but perennially thorny issue for system designers. Failure to take into account human interaction with your system leads to loss of productivity, user failure, system failure and system compromise.
Much has been written about the interface between humans and systems, dating back to the time-and-motion work-efficiency studies of the early 1900s, but human-machine interfaces have reached a new level with the development of increasingly complex information systems.
To put this problem into perspective, IBM's 2015 Cyber Security Intelligence Index found that 95 percent of cyber-security breaches in its client organisations started with human error. The 2016 report found that 60 percent of attacks were carried out by insiders, people who either acted deliberately or inadvertently helped an attacker from outside the organisation.
The good news, it says, is that the proportion of inadvertent actors had fallen from roughly one-half to one-third. “A reduction in the number of attacks attributed to inadvertent actors could mean that more organisations are implementing security policies and employee education – and that they are doing a better job of communicating what's expected and why it's important,” the report says.
The bad news is that the average number of serious attacks against its client companies rose between 2014 and 2015 by 66 percent, from 109 to 178 security incidents. That works out at about 3.4 serious cyber-crime attempts per week, up from two incidents per week the previous year.
Many of these incidents are targeted phishing attacks, or spear-phishing, a growing problem for organisations.
According to Mimecast, 55 percent of organisations surveyed reported they were the recipients of high-level phishing attacks within a three-month period. In so-called whaling attacks, the attacker attempts to convince a member of staff to make electronic wire transfers by targeting or impersonating the CEO or CFO. Earlier this year, both the CEO and the CFO of an Austrian aircraft parts manufacturer were dismissed following a whaling attack that resulted in a net loss of €40.9 million (£31 million).
And there are numerous other examples of companies incurring big losses due to phishing, just one of many attacks that rely on human error to succeed.
Symantec reported in 2015 that it detected an average of 73 spear-phishing emails every day and the bad news is that the number is growing. In addition to whaling attacks, a spear-phishing attack can be used to infiltrate malware onto a system, which can lead to further financial losses.
In one notorious instance, bitcoin trader Bitstamp lost 18,866 bitcoins (worth £3.6 million at the time) in an attack in December 2014. A leaked incident report revealed that senior members of the management team had been socially engineered over months until the attacker managed to compromise a laptop, which had privileged access to the servers containing the wallet data.
However, the biggest problem looking forward is perhaps going to be ransomware.
According to Montana Williams, the cyber-evangelist at ISACA who is currently working on a five-year outlook for cyber-crime, ransomware will soon become the biggest money spinner for cyber-criminals.
A ping's ransom
The current model for making money from crime online is credit card fraud and identity theft, he says. But the ransomware creators are becoming more sophisticated. “It used to be on the first attack they [the attackers] would send you an email saying they would unlock your computer for $1 million,” he laughed. Of course, no one paid it.
However, over the years the number of attacks has grown exponentially and the attackers have scaled down their demands to the point where individuals and organisations find it easier to ping them the ransom money than deal with the consequences of their poor online security hygiene.
“Recently a hospital got a ransom note for US$ 5,000 [£3,000] and they paid them because it was cheaper than the alternative. And then a few weeks later they had to pay another US$ 5,000 because they hadn't cleaned their system – they left a backdoor,” Williams said.
Meanwhile on the consumer front, attackers are targeting thousands of victims per day and pocketing a more modest US$ 350 (£200) per successful attack – but it's a volume game and the profits quickly add up.
Again, human factors come into play. Not only did the attackers trick a user into downloading the ransomware infection, they also rely on the fact that a significant proportion of victims will pay up, even though it would be in the best interests of the population as a whole if everyone refused to pay the ransomers.
In the classic prisoner's dilemma, two suspects have been arrested by the police and taken to separate interrogation rooms. Each is given two options: betray their partner or remain silent. If both remain silent, each serves one year on a lesser charge; if both betray, each serves two years; and if one betrays while the other stays silent, the betrayer goes free while the silent partner serves five years. Neither knows what choice the other will make.
In that situation (and presupposing that there is truly no honour among thieves), the rational choice is to betray: whatever your partner does, you come out better by talking. So you both end up serving two years, when mutual silence would have cost you only one.
If no one ever paid the ransomers, they would stop extorting people, but as long as some victims continue to pay (individually rational, though collectively harmful), the attacks remain profitable.
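The logic of the classic dilemma can be sketched in a few lines of code. The payoff table below uses the standard sentences (one year for mutual silence, two for mutual betrayal, zero and five for the mixed outcomes); it shows why betrayal dominates even though both players would prefer mutual silence.

```python
# Classic prisoner's dilemma payoffs: years served, so lower is better.
# Key: (my_choice, partner_choice) -> my sentence in years.
PAYOFF = {
    ("silent", "silent"): 1,   # both cooperate: light sentence each
    ("silent", "betray"): 5,   # I stay quiet, partner talks: worst case for me
    ("betray", "silent"): 0,   # I talk, partner stays quiet: I go free
    ("betray", "betray"): 2,   # both talk: medium sentence each
}

def best_response(partner_choice):
    """Return the choice that minimises my sentence, given the partner's move."""
    return min(("silent", "betray"), key=lambda me: PAYOFF[(me, partner_choice)])

# Betrayal is a dominant strategy: it is the best response to either move...
assert best_response("silent") == "betray"
assert best_response("betray") == "betray"
# ...yet mutual betrayal (2 years each) is worse than mutual silence (1 year each).
assert PAYOFF[("betray", "betray")] > PAYOFF[("silent", "silent")]
```

The ransomware situation is analogous: each victim who pays is making the individually rational move, and the collectively better outcome — nobody pays, the business model dies — never materialises.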
If you took humans out of the equation altogether, could you build a secure computer system? Your only chance would be to strictly limit what transactions could be conducted, but even then the system would still be vulnerable, because there would be flaws in it that humans could exploit. Ergo, you can't take humans out of the system.
If humans are the weakest link, they are also the most flexible solution. Humans will always make mistakes. However, systems will not always be able to identify or correct those errors.
And the real world will always generate exceptions. Systems are inherently rigid and exception-handling must be built into them, but the most flexible exception handler is the human who, with proper training and motivation, is very good at the job.
However, many staff within organisations have neither the training nor, frankly, the motivation to seek out and deal with systemic weaknesses in information security. “This goes to these surveys in which 70 percent of people don't even know what ransomware is. And yet 93 percent of all phishing attacks have ransomware in them,” Williams says.
Part of the explanation for the lack of motivation lies in lack of understanding of both the attack vectors and the consequences. As Williams adds, it took the firing of Target's CEO to send a wakeup call to boardrooms globally that information security mattered – as in, really mattered.
And when you talk about the human factor, you have to think about who is fighting these battles.
Fight the good fight
According to representatives from the National Crime Agency's National Cyber Crime Unit, speaking at the launch of a joint report with CREST, teenage computer gaming enthusiasts are being groomed by criminal gangs to write malware for them. One teenager arrested for computer crimes had made millions of dollars for a gang yet was being paid a mere hundred pounds a month – he was doing it for the LULZ.
Meanwhile, the government and private sector are desperate to direct these same kids into a life of socially useful cyber-activity.
It is a war of geek vs geek.
We live in a world that is increasingly run by machines, which are in turn guided by computer code that is so complex that no single human being can understand how it works.
So can humans remain relevant in the age of machines?
Fortunately, there are some people who thrive in this world, but for most mere mortals computers remain inscrutable, mysterious and, to some extent, dangerous. Those who design the cyber-systems – including the cyber-security systems – on which we all depend must take care to ensure that the geeks don't run too far ahead of everyone else.
That's the view of Google Chrome software engineer Adrienne Porter Felt, who gave a presentation at USENIX Enigma 2016 about the challenge of making security features simple.
She says that security features should be invisible to non-experts when they are not needed and helpful when they are. “This is really difficult, really hard,” she said.
Felt says that usable security should be treated as a science, but not enough people approach it that way. That means looking for scientific research on the security usability problems you face and, if you cannot find the research you need, running your own studies with testable hypotheses and control groups.
“There are some things you should not do: you should not trust your gut,” she says. “Your gut, my gut – we are not representative of regular users. Things that seem understandable to us or convincing to us will not generalise.”
And she warns, don't trust “common wisdom” either, unless you can back it up with evidence.
Marco Cova, senior security researcher at Lastline, says he would like to see more technology that is “secured by design”.
He commends the work that Google Chrome has done to improve warnings for users who are about to visit a dangerous page or download potential malware.
“I think the human gets blamed too much for cyber-security,” he tells SC Magazine UK. “We should have systems where you don't have to have a PhD in computer security to recognise that an email attachment, purportedly from your colleague, is actually fake, coming from someone else.”
The quality of cyber-security is incredibly variable, he says – to talk about the “average” is almost meaningless. Some organisations have been hacked out of business while other organisations fare much better because of their security culture.
“There are areas that have traditionally been more secure – places where you can enforce very tightly the environment that they [the users] can use,” he says, “while other places, it's more difficult if everyone can bring their own stuff and it's a struggle for security to keep up.”
So the evidence is in: humans are the weakest link… and the strongest link. Treat them with respect, especially when designing security features.