At the InfoSecurity 2017 keynote session on ‘Securing the user', delegates were informed that it is now official government policy that you can't describe people as the weakest link in cyber-security. Professor Angela Sasse, director of the UK Research Institute in Science of Cyber Security, UCL, cited the latest NCSC mantra, adding that the real culprits undermining our security are bad tech and unworkable policies, not just humans.
Fellow panellist, cyber-security consultant Dr Jessica Barker, concurred, saying “We (cyber-security professionals) have a view of ‘this is how it should be' or ‘we must have this outcome' and our approach has not been people-centric in considering how they'll respond. And if they don't respond how we expect and achieve the outcomes we want, it's their fault. A people-centric approach would realise people may not react how we would like them to.”
Jonathan Kidd, CISO at Hargreaves Lansdown, agreed, saying that we have to move on from the earlier approach of ‘we'll just shout louder and they'll do what we want.'
Prior to the panel discussion, Sasse noted that while security experts bond over bashing their common ‘enemy', staff, it's a counter-productive approach. Instead she presented her own three-step primer on ‘How to make security work for people.'
1) Security is security experts' main job; for anyone else it's a productivity drain. Your responsibility is to design security that fits with individuals' tasks and the organisation's business processes. Time and effort spent on security must be a worthwhile investment, and you'll need to get leaders in the business to realise they have to help you. Pop-up warnings that don't tell users what to do next are a wasted investment, and high false-positive rates lead people to ignore all warnings. For example, HTTPS warnings can produce 15,000 false positives for every true positive.
So, keep warnings brief, use simple language to describe specific risks, and illustrate the potential consequences of going ahead. This clarity matters more than framing the warning with rewards or punishments to manipulate behaviour.
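The base-rate arithmetic behind that statistic is worth spelling out. A minimal sketch (the 15,000:1 ratio comes from Sasse's talk; the calculation itself is ours):

```python
# Base-rate illustration of warning fatigue: with 15,000 false
# positives per true positive (the HTTPS-warning ratio cited in
# the talk), how often is a warning the user sees actually real?
false_positives = 15_000
true_positives = 1

# Precision: probability that any single warning is genuine.
precision = true_positives / (true_positives + false_positives)

print(f"Chance a warning is real: {precision:.4%}")
```

With odds of roughly 1 in 15,001 (well under 0.01 percent), ignoring every warning is, statistically, an almost perfectly rational habit, which is why the quality of the warning matters more than its frequency.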
2) Complexity and vagueness are the enemy.
Communications must be NEAT – necessary, explained, actionable, tested. One example cited was a message from a university IT department that, if followed, would bring activity to a halt: “We urge you to be vigilant and not to open emails that are unexpected, unusual or suspicious in any way. If you experience any unusual computer behaviour, especially any warning messages, please contact your IT support immediately and do not use your computer further until advised to do so.”
Therefore we need to standardise how we legitimise advice.
3) Security awareness and education are not the answer. They are not a cure for lack of security hygiene, unworkable policies, or useless tools. If you want to change undesirable behaviours, there is no quick or cheap option, and it's not a job for amateurs.
It won't be achieved by sending out a few emails; it will take time. If you have had no training in changing behaviour, don't think you can do it.
Instead, approaches such as gamification were cited: d0x3d for students, socialising security in organisations, area games, self-study games, cartooning techniques, and ‘serious Lego' were all advised as replacements for web-based training, along with identifying blockers and enablers to change, where change-management experts can help. But Sasse added that there is no single easy option: choices made by the user have to be genuine and desirable.
She concluded that, for security staff, it is time for old dogs to learn new tricks – primarily, to engage with their users.
Barker noted that organisations need to provide the time, resources and solutions, not just identify the problem. And ‘scary things' need to be communicated in a positive way, not just as dire warnings of consequences, or people won't want to engage.
Kidd added that things need to be done in the right way, so that users actively choose to do the right thing, not just because they face disciplinary action if they don't.
Barker said: “Often we punished bad behaviour but did not reward good behaviour. Phishing exercises, for example, should also praise success, and those who were successful should spread the message of what they spotted.”
Sasse came down strongly against the use of phishing emails for staff training, saying: “Totally and unequivocally, because it's not engaging users in a collaborative way. We are working together to keep the organisation secure; if you start attacking your employees, it's not very constructive. From many surveys [on why some staff don't comply with instructions], 70 to 80 percent have a very good reason for why they do what they do. For example, when business leaders prioritise productivity over security, people are often more likely to get sacked for not producing than for lax security, which is why staff don't take it seriously.”
The analogy was drawn with shadow IT, where staff manage risk in the best way they know: they'll bypass password protection and the like, but they don't necessarily know all the risks. Upgrading IT security was described as the best way to deal with the problem, with Sasse commenting, “Half of security problems are down to crap IT,” which raised a cheer from the back of the room.
Instead, companies were advised to invest a bit more, a bit earlier, rather than applying more sticking plasters. “The users can tell you about the problems they face, then you can look at how to fix it,” said Sasse.
Kidd added: “People have a primary task that they want to do well, and we have to not say, ‘no you can't do that', but ‘what are you trying to achieve and how can we help?' We need to encourage people in our security teams to get out there and see what people [users] are doing. [We] put people in the user groups to work with them and see what they are trying to achieve. It helps a lot.”