How DLP must evolve to deal with dynamic new threats
For many years, data loss prevention (DLP) served as the de facto strategy for ensuring that sensitive or critical data did not leave the confines of the corporate network. DLP works by classifying all data types on a corporate network and assigning rules that limit how they can be shared.
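To make the classify-and-restrict idea concrete, here is a minimal sketch of a DLP-style content rule in Python. The patterns and policy logic are purely illustrative assumptions, not taken from any real product, and real DLP engines use far more sophisticated classifiers than a pair of regexes.

```python
import re

# Hypothetical, highly simplified DLP-style rule set: classify text by
# pattern-matching for sensitive data types, then apply a sharing policy.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "uk_nino": re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"),  # simplified NI number
}

def classify(text: str) -> set[str]:
    """Return the set of sensitive data types detected in the text."""
    return {label for label, rx in PATTERNS.items() if rx.search(text)}

def allow_share(text: str, destination: str) -> bool:
    """Block any outbound share that contains classified data."""
    return destination == "internal" or not classify(text)
```

Note how brittle this is: any match fires regardless of context, which is exactly the false-positive problem described below, while anything the patterns miss (or cannot see, such as encrypted traffic) passes silently.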

While still effective in the right circumstances, DLP has fallen out of favour in recent years because the approach has a number of shortcomings. DLP is prone to false positives, which can create a large number of alerts, quickly leading to event overload for security teams and frustration for end users. Worse still are false negatives, which can allow major data breaches to occur.

DLP also struggles with unstructured data and has little or no visibility of encrypted data – which accounts for more than half of traffic today. Crucially, the widespread use of cloud applications means that employees are constantly working beyond the network perimeter – and indeed the very concept of a perimeter has begun to dissolve.

Moving beyond the perimeter 

Many companies have attempted to address this threat by replacing network DLP with a cloud access security broker (CASB), a security solution that sits between an organisation's on-premise infrastructure and that of a cloud provider. This enables the company to extend its policies beyond its own infrastructure.

A CASB needs to be used alongside traditional DLP, as it has no visibility of local activity. It would not, therefore, see an Excel document full of sensitive data already sitting on an endpoint, or detect it being copied to a USB drive.

This blind spot should be covered by a host-based DLP, so, on paper at least, a combination of DLP and CASB should cover all avenues for sensitive data leaving the network. In practice, however, this setup is far from ideal. Running two distinct solutions to cover what is essentially the same risk is an unnecessary drain on resources and budget.

A bigger issue is that the two solutions are not designed to work together – a scenario that inevitably leads to gaps in security that attackers can exploit. There are several circumstances in today's perimeter-less world of work where the setup will fail to detect a threat.

For example, let's take a salesperson attending a conference, who decides to get some work done from their hotel. They are not inside the company network or on the VPN, but they can still log on to cloud applications like Salesforce and Dropbox. No matter what defences the company has at its perimeter, it will not be able to see what the employee is doing – including whether or not they are inadvertently installing malware from a phishing email.

If the malware executes and exfiltrates information while they are offsite and outside the network perimeter, no one would be any the wiser. Now, when the salesperson next connects to the corporate network, they could be enabling the spread of dangerous malware and providing a way in for an advanced persistent threat. 

The future of data loss prevention 

To address this, organisations need to equip themselves with a united view of their entire network, extending from the endpoint through to the cloud – including penetrating through encrypted traffic that could be hiding malicious activity. 

Alongside deeper visibility, monitoring needs to move beyond a rigid set of policies about data sets, and focus instead on the behavioural activity of users. By monitoring for unusual activity on an endpoint device in real time, it is possible to identify a potential malicious attack and raise an alert before it can be executed.

Say, for example, we have an accounting employee who, for the first time ever, suddenly starts running PowerShell scripts. This is far outside their usual work activity and a favourite method for threat actors to execute a fileless malware attack – a clear sign that the early stages of an attack are in progress. This in-depth search for indicators of compromise can be conducted in real time across all endpoints – even those that go offline.
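The "accountant suddenly runs PowerShell" scenario can be sketched as a simple first-seen check against a per-user baseline. This is an illustrative toy, not any vendor's implementation; the process names and the notion of a high-risk list are assumptions for the example.

```python
from collections import defaultdict

# Processes commonly abused in fileless attacks (illustrative list).
HIGH_RISK = {"powershell.exe", "wscript.exe", "mshta.exe"}

class BehaviourBaseline:
    """Track which processes each user has run before, and flag the
    first time a user launches a process that is both new for them
    and on the high-risk list."""

    def __init__(self):
        self.seen = defaultdict(set)  # user -> processes observed so far

    def observe(self, user: str, process: str) -> bool:
        """Return True if this event should raise an alert."""
        is_new = process not in self.seen[user]
        self.seen[user].add(process)
        return is_new and process.lower() in HIGH_RISK
```

The key design point is that the rule is about the *user's* behaviour, not the data: the same PowerShell launch is routine for an administrator but anomalous for an accountant.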

Machine learning models can be created to understand any user's behaviour and spot deviations from the norm. This lets products alert and block automatically in real time, sparing the security team from wasting resources on a barrage of alerts and greatly reducing the probability of both frustrating false positives and potentially devastating false negatives.
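At its simplest, "learning the norm and blocking deviations" can be illustrated with a statistical baseline: score how far an observed value sits from a user's history and block past a threshold. This is a toy sketch under assumed numbers; production systems use far richer features and trained models rather than a single z-score.

```python
import statistics

def anomaly_score(history: list, observed: float) -> float:
    """How many standard deviations the observed value sits from
    this user's historical norm."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
    return abs(observed - mean) / stdev

def should_block(history: list, observed: float, threshold: float = 3.0) -> bool:
    """Automatically block when the deviation exceeds the threshold;
    small fluctuations stay below it, keeping false positives down."""
    return anomaly_score(history, observed) >= threshold
```

The threshold is the lever the article alludes to: set it well and ordinary variation passes quietly (few false positives) while genuine outliers are caught (few false negatives).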

By increasing both the depth and width of their security monitoring and changing how they think about threats to their data, organisations can leave the old, static approach to DLP behind and evolve their defences to match the dynamic new world of work and the attackers exploiting it. 

Contributed by Raj Rajamani, VP Product Management at SentinelOne

*Note: The views expressed in this blog are those of the author and do not necessarily reflect the views of SC Media UK or Haymarket Media.