Brian Chappell, senior director, enterprise & solutions architecture, BeyondTrust

If there were a zeitgeist phrase of the moment, it would be “fake news”, but the idea is sometimes hard to describe accurately. At one end of the scale, it could be called blatantly false information placed in public view, portrayed as accurate journalism but actually designed to promote a particular political, social or economic agenda. The concept of propaganda is not new, yet its relevance is growing in an era where the internet has dramatically altered how people distinguish fact from fiction. As IT professionals, it may seem that fake news is hardly relevant to us, but the concept is actually creeping into the world of information security in a much subtler format.

The most common issue is the over-sensationalised story, often written by non-specialist reporters. A recent story from The Washington Post, a respected publication, entitled “Russian hackers penetrated US electric grid through a utility in Vermont” led to outraged reactions and commentary on TV networks and from politicians. However, following clarification from the local utility, it emerged that only a single computer had been infected with malware, and it was not even connected to any part of the grid.

Another issue is a lack of explicit detail, or misinterpretation. Take, for example, the US-CERT announcement of 16 January 2017 recommending that SMB1 (version 1 of the Server Message Block protocol) be disabled. The recommendation cites security flaws in SMB1 and also recommends “blocking all versions of SMB at the network boundary by blocking TCP port 445 with related protocols on UDP ports 137-138 and TCP port 139, for all boundary devices.” An experienced InfoSec professional would take this to mean internet firewalls and edge devices, including servers, but some follow-up stories, later corrected, suggested disabling the ports on all devices, regardless of location.
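The port list in the guidance lends itself to a quick sanity check. The sketch below, a minimal illustration rather than a full scanner, probes the two SMB TCP ports from outside the boundary; the target address is hypothetical, and the UDP NetBIOS ports (137-138) are noted but not probed, since a bare connect test only works for TCP.

```python
import socket

# Ports named in the US-CERT guidance: SMB over TCP (445, 139).
# The related NetBIOS ports, UDP 137-138, cannot be checked with a
# simple connect() test, so they are omitted here.
SMB_TCP_PORTS = [139, 445]

def port_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def check_boundary(host: str) -> dict:
    """Map each SMB TCP port to whether it is reachable from here."""
    return {port: port_reachable(host, port) for port in SMB_TCP_PORTS}

# Hypothetical example: run from *outside* the network boundary;
# every port should come back False if the block is in place.
# print(check_boundary("203.0.113.10"))
```

Run against an external-facing address from outside the perimeter, a result of all-False is what the advisory's boundary blocking should produce.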

When asked about avoiding fake news, some people respond that they only get important news from people or organisations that they trust. Yet trust is not always beyond doubt. The most common type of fake, if you can even call it “news”, is the 'pop-up alert from Microsoft' that certain malware families generate to get more gullible users to phone a “hotline” or download some even more potent malware via a link. Some InfoSec professionals would snigger at the idea that anybody would fall for these ruses, but the truth is that people trust the Microsoft name on the fake logo as an indication that the alert is genuine.

Even more trusted communication methods are not immune to fakery. In Michigan earlier this month, emails containing racist and anti-Semitic messages, along with threats of violence, were reportedly sent to computer science and engineering students who had raised concerns about the integrity of the voting system in some states. The messages purported to come from J. Alex Halderman, a professor. Some reports talked about email being hacked, but security experts later discovered that the messages were in fact "spoofed". In this instance, the ability to quickly detect an IT security issue stopped a potentially fake news story from spreading. Yet there are more common instances where seemingly validated messages lead to issues.

Take, for example, emails purporting to come from an internal IT department asking employees to carry out a specific action such as a password reset. Penetration testers who regularly use this technique, which involves manipulating email header information, openly report that, provided the company is of a large enough size, it almost never fails to find at least one victim who will do as the email commands.
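To see why this works, it helps to remember that the visible From line is just text supplied by the sender. A minimal sketch using Python's standard email library, with entirely hypothetical addresses, shows how a plausible-looking internal message can be assembled with any sender the forger chooses:

```python
from email.message import EmailMessage

# Nothing in the message format itself stops a sender from writing
# any From line they like; it is just a header supplied by the client.
msg = EmailMessage()
msg["From"] = "IT Support <helpdesk@example.com>"  # forged, hypothetical address
msg["To"] = "employee@example.com"
msg["Subject"] = "Action required: password reset"
msg.set_content(
    "Your password expires today. Reset it at the link provided."
)

# The message renders with a perfectly plausible sender; only checks
# on the receiving side (e.g. DKIM, discussed below) can flag forgery.
print(msg["From"])  # IT Support <helpdesk@example.com>
```

The point is not that this snippet is an attack, it is three lines of standard library code, but that the From header alone carries no proof of origin at all.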

Is there an answer to these issues? Maybe a magic piece of software or a technique that will filter out forgeries? In an IT security context, the answer is a resounding no, although there are practical controls that can help. To make email spoofing harder, organisations should consider setting up DomainKeys Identified Mail (DKIM), which lets a domain associate its name with an email message by affixing a digital signature to it, verified using the signer's public key published in the DNS. This is not entirely foolproof but will stop the less capable hackers and many spam artists.
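The "public key published in the DNS" part works through two tags in the DKIM-Signature header: the signing domain (d=) and a selector (s=), which together name the DNS TXT record holding the key. A small sketch, using a hypothetical header with the signature values elided, shows how a verifier derives that DNS name:

```python
# Hypothetical DKIM-Signature header value; the bh= and b= fields
# (body hash and signature) are elided, as this is not a real signature.
raw_header = (
    "v=1; a=rsa-sha256; c=relaxed/relaxed; d=example.com; "
    "s=mail2017; h=from:to:subject; bh=...; b=..."
)

def parse_dkim_tags(header: str) -> dict:
    """Split a DKIM-Signature value into its tag=value pairs."""
    tags = {}
    for part in header.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

tags = parse_dkim_tags(raw_header)

# A verifier queries the TXT record at <selector>._domainkey.<domain>
# to obtain the signer's public key, then checks the signature with it.
dns_name = f"{tags['s']}._domainkey.{tags['d']}"
print(dns_name)  # mail2017._domainkey.example.com
```

In practice a mail server's DKIM module does all of this automatically; the sketch only illustrates where the trust anchor lives, which is why DKIM raises the bar for spoofers but, as noted, is not foolproof.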

Dealing with confusing security alerts is more a case of painful experience and of not assuming that a little knowledge is enough; regular investment in security training can really help. In the case of something like the SMB1 example, if in doubt, consult a subject matter expert directly. For on-screen pop-ups and fake calls purporting to be from Microsoft or some other large technology vendor, InfoSec professionals, in concert with human resources departments, need to help draw up security policies that are actively and continually used to educate staff. People and scams change, so provide assurance that it is OK to pick up the phone, yes, a real phone call, and that asking the IT department for guidance is not a sin.

Fake news, whether it's about politicians or security alerts, will always be around. But like most things, it will take time, experience and a bit of common sense to help us all differentiate the real from the fantasy.

Contributed by Brian Chappell, senior director, enterprise & solutions architecture, BeyondTrust

*Note: The views expressed in this blog are those of the author and do not necessarily reflect the views of SC Media or Haymarket Media.

This blog was written prior to GCHQ being accused of hacking Donald Trump.