Threat intelligence: what to share?


Year after year, I hear the same refrain in information security: "We need to share more data about security threats." I know I, too, have been singing the same song for at least a decade.

IT is a fast-moving field. Ideas arise, reach prototype and go to market in less time than it takes the average clinical trial to be cleared; yet this one concept within information security – that defence requires greater visibility than can be obtained from any single network, and that to have a fighting chance we should reciprocally distribute data on the attacks and attackers we identify – remains an unresolved debate.

Those in favour of sharing information point to limited successes, but note that the process has been difficult to build out and integrate, and that the results are mixed due to insufficient data. More data sharing seems like an excellent idea, but they can only conjecture what the return on investment would be at higher volumes.

Those against sharing point to early experiments in which they collaborated publicly on data sharing, were burned when the public data was used as counter-intelligence, and promptly returned to either not sharing at all, or sharing within a very limited group.

There is some sharing of security data happening out there right now, in varying degrees of scope and success. Public and semi-public clearinghouses such as MalwareDomainList.com provide an excellent free source of single-scope threat intelligence. But the data is limited and organisations must construct their own processes and technology to consume it effectively.

Private data-sharing arrangements exist between some large organisations, and some government bodies mandate information sharing, but we live in an age where the tide is turning against publicly available information. Information is valuable beyond measure and things of value are instinctively hoarded away in private.

So after a decade of discussion and attempts to reach critical mass in the move to a sufficiently effective level of data sharing, we still find ourselves at this impasse.

By all means, let's all share YOUR data with one another and we'll all be better off; but let's not share MY data – that would be bad.

I think this hits the nail on the head: stalled progress is not down to limitations in technology or legal implications (both of which can be overcome with little effort); people don't want to share because of those old faithful standbys still gnawing at the human mind: fear and greed.

Fear of how whatever we share may be used against us; greed for anything we can get for free, or, better yet, make money from. We're not going to make this go away overnight. If we're going to find an effective middle ground towards a security data sharing network that is effective for all, we are going to have to find ways to route around these two mindsets.

So how do we move forward? Stressing the importance of 'enlightened self-interest' may be the only winnable argument in this debate. The idea that if I help others, it furthers my own goals, seems to be a perfectly reasonable compromise.

Data sharing doesn't mean giving things away
Any good data-sharing solution is going to result in you receiving more than you give. A data-sharing solution that allows one participant to gain a fundamental business advantage over other parties is likely broken.

Data sharing is not an all-or-nothing arrangement
Within the security realm there are many layers of data. Being selective about what is shared, and to what level of detail, is perfectly reasonable.

Paranoia leads us to start from a default-deny position, and to justify what can be opened up after the fact
For anyone who has ever filed a Freedom of Information request, it is easy to see how this method gains few results for a great deal of work. Instead we should consider the alternative of starting from the viewpoint of 'everything is good to share' and then selectively removing the things identified as not OK to share.

Beyond FUD, the real problems with sharing
It would be remiss of me to claim that universal public security data sharing will fix all our woes overnight, though I'm certainly saying it would give us more of a fighting chance. There are significant hurdles to encounter and overcome when dealing with data and intelligence sharing.

All intelligence is counter-intelligence; the most valuable intelligence becomes less valuable the more widely it is distributed
Open information-sharing networks will be infiltrated by attackers, without a doubt. So long as the system does not enable the attacker to infer detailed information about what a particular target knows, you can stay a step ahead of them.

Intelligence data that cannot be acted on is worthless
The more open an intelligence source, the more generic the format in which it must be communicated. Public sources of threat intelligence are published in lowest-common-denominator formats (text files of IP addresses, CSV files, etc). Many security organisations using these feeds process the information manually, with analysts performing searches across logs.
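To illustrate what consuming such a lowest-common-denominator feed looks like, here is a minimal sketch; the one-IP-per-line feed file and plain-text log format are hypothetical stand-ins for whatever a real feed and SIEM export provide:

```python
import re

def load_feed(path):
    """Load a lowest-common-denominator feed: one IP per line, '#' comments."""
    iocs = set()
    with open(path) as fh:
        for line in fh:
            line = line.split("#", 1)[0].strip()  # strip comments and whitespace
            if line:
                iocs.add(line)
    return iocs

def match_logs(log_path, iocs):
    """Scan a plain-text log for any line containing an IP from the feed."""
    ip_re = re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b")
    hits = []
    with open(log_path) as fh:
        for lineno, line in enumerate(fh, 1):
            for ip in ip_re.findall(line):
                if ip in iocs:
                    hits.append((lineno, ip, line.rstrip()))
    return hits
```

Even this trivial matcher hints at the integration cost the article describes: every consumer must rebuild parsing, normalisation and log correlation for each feed format it ingests.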

The path forward
It is essential to the success of data sharing that the content is detailed and consumable. Getting organisations to overcome the reluctance to share detailed information outside their borders will require incremental programs of information sharing: ones that start out with simple statistical sharing (like the Verizon VERIS framework) and then ramp up through programs of threat agent information.

Adoption of tokenisation and anonymisation techniques and standards that can be implemented without significant effort will be an important factor in allowing organisations to collaborate without undue legal or operational liability.
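As a sketch of what such tokenisation might look like in practice – the key, field names and record shape here are all hypothetical – a keyed hash lets an organisation share correlatable incident records without exposing internal identifiers:

```python
import hashlib
import hmac

# Hypothetical per-organisation secret; held locally, never shared.
SECRET_KEY = b"org-local-secret"

def tokenise(value: str, key: bytes = SECRET_KEY) -> str:
    """Replace a sensitive field with a keyed-hash token.

    The same input always yields the same token, so the contributing
    organisation can correlate reports it receives back; outsiders cannot
    reverse the token without the key (unlike a plain unkeyed hash of a
    small value space, which is trivially brute-forced).
    """
    return hmac.new(key, value.encode(), hashlib.sha256).hexdigest()[:16]

def anonymise_record(record: dict,
                     sensitive_fields=("victim_host", "username")) -> dict:
    """Return a copy of an incident record with sensitive fields tokenised."""
    out = dict(record)
    for field in sensitive_fields:
        if field in out:
            out[field] = tokenise(str(out[field]))
    return out
```

The design choice matters: because the tokens are deterministic per organisation, shared records remain useful for counting and correlation, while the raw hostnames and usernames never leave the contributor's network.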

Finally, some level of assurance that the information shared will not (or cannot) be used against the contributing organisation directly is a requirement only the most reckless would ignore.

Conrad Constantine is research team engineer at AlienVault
