Shai Morag, CEO and Co-founder, SECDO

With all the sophisticated tools at their disposal, the first reaction of IT remediation teams charged with fixing the damage caused by hackers and rooting them out of a network is usually the simplest - and often the only - response available. According to a recent survey from the SANS Institute, these IT remediation teams "manually isolate infected machines from the network while remediation is performed," or "shut down the system and take it offline" - basically pulling the plug on the network, or on the computer or server.

As in "real" life, a withdrawal response indicates that a person - or an IT team - is overwhelmed and forced to make do with the limited means at their disposal. That does indeed seem to be the case when it comes to dealing with sophisticated APTs and the other threats IT teams face today. Lack of skills, lack of funds, and lack of data about what is happening in their networks are, according to the respondents to the SANS survey on which the study is based (they included IT teams at some of the country's largest enterprises), the main impediments to responding effectively to threats and attacks.

Lack of data is perhaps the most interesting factor cited. Nearly half of respondents said they "did not have enough visibility into events happening across different systems or domains," despite massive investment in security technology. Not knowing where - or perhaps how - to look for problems is also a likely cause of the relatively long dwell times hackers enjoyed before being detected: two to seven days on average, with some IT teams reporting dwell times of as long as three months before an attack was detected.

If security tools aren't doing the job, IT teams might have more luck with tools that provide a deeper, more analytical look at endpoint data. That makes sense; after all, endpoints are generally where hackers enter a network, often through phishing or some other social engineering scheme. And endpoints are more vulnerable - and more of a target for hackers - than ever. The web is rife with stories of how hackers slipped ransomware - the new big moneymaker for hackers - onto networks, and you can be sure that for every story told, a dozen others are kept under wraps. With so many security systems on the market, systems should be far more immune to hacks of all kinds than they seem to be; clearly, something is wrong with this picture.

So there is definitely a need for effective endpoint visibility technology. IT teams realise this, too: lack of endpoint visibility ranked as the second biggest impediment to effective incident response. Yet only 16 percent of those surveyed by SANS considered their endpoint visibility infrastructure mature - perhaps because they are unable to implement sophisticated endpoint data collection and analytics systems, or are simply unaware that such systems exist. When a breach means an IT team may have to check 5,000 endpoints manually, you can understand their cynicism.

But with systems that can automatically collect, examine, analyse, store, and report on endpoint activity, endpoint detection and investigation becomes a far more manageable task. Examining endpoints manually - analysing log files, anti-virus messages, and other data from traditional endpoint security monitoring systems - is next to impossible. An automated system ensures that nothing is overlooked: every anomaly, problem, breach, and anything else that could indicate how a breach occurred is examined, with the system establishing the source, activity, and effect of anything that shouldn't be there. Administrators can then focus on stopping the problem fast by going to its source and cutting it off.
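The collect-then-filter shape of such automation can be sketched in a few lines. This is a minimal illustration only - the event fields, the allow-list, and the flagging rule are all hypothetical, and a real endpoint analytics system would score behaviour far more richly than a static parent/child allow-list:

```python
from dataclasses import dataclass

# Hypothetical endpoint event record; field names are illustrative,
# not taken from any particular endpoint security product.
@dataclass
class EndpointEvent:
    host: str
    process: str
    parent: str
    action: str  # e.g. "exec", "net_connect", "file_write"

# Tiny illustrative allow-list of parent/child process pairs considered normal.
KNOWN_GOOD = {("explorer.exe", "winword.exe"), ("services.exe", "svchost.exe")}

def flag_anomalies(events):
    """Return events whose parent/child process pair is not on the allow-list.

    This stands in for the automated examination step: every collected
    event is checked, so nothing is overlooked the way it can be in a
    manual log review.
    """
    return [e for e in events if (e.parent, e.process) not in KNOWN_GOOD]

events = [
    EndpointEvent("pc-17", "winword.exe", "explorer.exe", "exec"),
    # A Word macro spawning a shell - the kind of anomaly worth surfacing.
    EndpointEvent("pc-17", "powershell.exe", "winword.exe", "exec"),
]
suspicious = flag_anomalies(events)
```

Here only the macro-spawned shell is flagged; the point is that the filter runs over every collected event automatically, leaving administrators to act on the short list it produces.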

While there are endpoint forensics tools that can perform such analysis, some are better than others. Most, in fact, are limited: able to take a snapshot of a system's current state, but unable to go back in time to understand the root cause. Only a few such systems can store data and retroactively analyse it - showing when and how an attack actually began (such as when the original Trojan that eventually downloaded the malware was installed, and where it came from) and allowing historical comparisons on breaches as a whole, or on specific points, such as whether there is a pattern of breaches involving particular user accounts, computers, devices, ports, or processes. Armed with that information, IT teams can decide on a course of remediation, or even real-time defence, much more efficiently.
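The two capabilities described above - walking stored data back to the root cause, and grouping historical breaches to spot patterns - can be sketched as queries over a retained event log. The log format, process names, and helper functions below are all hypothetical, chosen purely to show the shape of retroactive analysis:

```python
from collections import defaultdict

# Hypothetical stored event log: (timestamp, host, account, process, parent).
# All names and timestamps are illustrative only.
HISTORY = [
    (100, "pc-3", "alice", "outlook.exe", "explorer.exe"),
    (110, "pc-3", "alice", "dropper.exe", "outlook.exe"),   # original Trojan
    (500, "pc-3", "alice", "ransom.exe",  "dropper.exe"),   # later payload
]

def trace_root(history, process):
    """Walk the stored parent chain backwards to find where an attack began.

    Returns the chain newest-first, with the root cause last - the kind of
    'go back in time' query a snapshot-only tool cannot answer.
    """
    by_process = {p: parent for _ts, _host, _acct, p, parent in history}
    chain = [process]
    while process in by_process:
        process = by_process[process]
        chain.append(process)
    return chain

def breaches_by_account(history, flagged):
    """Count flagged processes per account, to spot repeat-breach patterns."""
    counts = defaultdict(int)
    for _ts, _host, account, process, _parent in history:
        if process in flagged:
            counts[account] += 1
    return dict(counts)

# Trace the ransomware payload back to the mail client that delivered it.
chain = trace_root(HISTORY, "ransom.exe")
```

Tracing `ransom.exe` walks back through `dropper.exe` to `outlook.exe` - i.e. the Trojan arrived by email long before the payload ran - and the per-account counts show whether the same user keeps turning up in incidents.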

In addition, many forensic systems require special training to use, because the information they provide is difficult to interpret; indeed, many vendors bundle into the package access to (high-priced) consultants who work with the IT team to make sense of the collected data.

Clearly, though, IT security teams need to automate endpoint data collection and analysis instead of relying on the traditional manual approach; that would go a long way towards changing the minds of CTOs, CFOs, and anyone else involved in deploying or paying for cyber-defence systems. "Organisations have shown improvements in technology integrations; however, they still struggle with successfully analysing the amount of data collected and detecting anomalies in their environments," according to SANS. If the reason is indeed their reliance on manual investigation and remediation methods, that may have to change; the ransomware pirates and the myriad other hackers looking to beat security and invade a network know an opportunity when they see one. The lack of automated endpoint visibility tools is certainly a golden opportunity.

Contributed by Shai Morag, CEO and co-founder, SECDO