What goes into SIEM these days is not as well defined as it once was, but basically it aggregates network activity into a single addressable dataset, says SC's technology editor Peter Stephenson.
Since Gartner coined the term security information and event management (SIEM) in 2005, there have been a lot of changes in what constitutes such a product. Originally, the abbreviation was a combination of security information management (SIM) and security event management (SEM). This was straightforward, but today, what goes into SIEM is not quite so well defined.
According to Gartner, a SIEM should have the abilities of “gathering, analysing and presenting information from network and security devices; identity and access management applications; vulnerability management and policy compliance tools; operating system, database and application logs; and external threat data”. That seems pretty broad, but actually it comes down to some specific requirements.
In order for SIEM to work, it needs data. It gets its data from a variety of sources that we can think of as sensors. However, all of this data needs to be aggregated into a single addressable dataset, and SIEMs do that. Then, they correlate it to make sense of it. That includes normalising disparate data formats into a single form that can be consumed by the analysis engine of the SIEM.
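Normalisation of this kind can be pictured as a simple mapping step. The sketch below is illustrative only: the two sensor formats, their field names and the common schema are hypothetical assumptions, not a real product's format.

```python
# A minimal sketch of log normalisation: map two hypothetical sensor
# formats (a firewall and an IDS) onto one common event schema so the
# analysis engine consumes a single, addressable form of the data.
import json

def normalise(raw: dict, source: str) -> dict:
    """Translate one sensor-specific record into the common schema."""
    if source == "firewall":
        return {"time": raw["ts"], "src_ip": raw["src"],
                "dst_ip": raw["dst"], "action": raw["verdict"]}
    if source == "ids":
        return {"time": raw["timestamp"], "src_ip": raw["attacker"],
                "dst_ip": raw["victim"], "action": raw["alert"]}
    raise ValueError(f"unknown source: {source}")

# Aggregate records from both sensors into one dataset.
events = [
    normalise({"ts": 1700000000, "src": "10.0.0.5",
               "dst": "10.0.0.9", "verdict": "deny"}, "firewall"),
    normalise({"timestamp": 1700000001, "attacker": "10.0.0.5",
               "victim": "10.0.0.9", "alert": "port-scan"}, "ids"),
]
print(json.dumps(events, indent=2))
```

Once every record shares the same fields, correlation across sources becomes a matter of grouping and comparing values rather than parsing each format separately.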
Once the data is correlated, there is a lot that can be done with it. It can alert to security conditions that need addressing immediately – a sort of intrusion detection system on steroids. It is receiving data from lots of sources, and each of those sources contributes to the picture the SIEM sees. How that picture is interpreted should be, in large measure, configurable. Most capable SIEMs have robust policy engines that allow customisation, but also ship with many commonly used policies available out of the box.
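A correlation rule of this sort can be sketched in a few lines. The rule below – flag any source address that triggers more than one sensor type within a short window – is a hypothetical example of a policy, and the field names and threshold are assumptions for illustration.

```python
# A minimal sketch of a correlation rule: alert when one source IP
# appears in events from multiple sensor types within a time window.
from collections import defaultdict

WINDOW = 60  # seconds; an illustrative threshold a policy might set

def correlate(events):
    """Group events by source IP and alert on multi-sensor activity."""
    by_src = defaultdict(list)
    for e in sorted(events, key=lambda e: e["time"]):
        by_src[e["src_ip"]].append(e)
    alerts = []
    for src, evts in by_src.items():
        sensors = {e["sensor"] for e in evts}
        span = evts[-1]["time"] - evts[0]["time"]
        if len(sensors) > 1 and span <= WINDOW:
            alerts.append({"src_ip": src, "sensors": sorted(sensors)})
    return alerts

events = [
    {"time": 100, "src_ip": "10.0.0.5", "sensor": "firewall"},
    {"time": 130, "src_ip": "10.0.0.5", "sensor": "ids"},
    {"time": 200, "src_ip": "10.0.0.7", "sensor": "firewall"},
]
print(correlate(events))  # only the repeated source trips the rule
```

In a real SIEM this logic would live in the policy engine, where the window, the sensor combinations and the severity of the resulting alert are all configurable.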
Second, the data can be used for reporting, which is a critical aspect of compliance. It also allows administrators to see meaningful charts and graphs. Reporting can be delivered as files, printed reports or real-time displays. Analysis is another important aspect. In the early days, SIEMs were much better at analysis than they were at compliance reporting. Today, they should be able to create compliance-specific reports.
Threat data is the meat and potatoes of the classic SIEM. However, risk is a combination of threats and vulnerabilities, so when a SIEM also takes vulnerability data from tools such as Nessus, there is the potential for risk measurement.
Developing a risk picture, though, is not quite that simple. If we look at the enterprise on an asset-by-asset basis, we find that some assets are more critical or sensitive than others. So, for a credible risk picture, the SIEM must not only be able to take threat and vulnerability data, but also be able to drill down to the asset level. From there it must be able to weight assets based on sensitivity, criticality, or both.
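The asset-weighting idea can be illustrated with a toy calculation. The formula below (threat × vulnerability × criticality) and the sample scores are illustrative assumptions, not a method the article or any vendor prescribes.

```python
# A minimal sketch of asset-weighted risk scoring. Each factor is a
# normalised value between 0 and 1; the product gives a per-asset score.
def risk_score(threat: float, vulnerability: float,
               criticality: float) -> float:
    """Combine threat, vulnerability and asset criticality into one score."""
    return threat * vulnerability * criticality

# Hypothetical assets: the print server is more vulnerable, but the
# payroll database is far more critical to the business.
assets = {
    "payroll-db":   {"threat": 0.8, "vulnerability": 0.6, "criticality": 1.0},
    "print-server": {"threat": 0.8, "vulnerability": 0.9, "criticality": 0.2},
}

for name, a in sorted(assets.items(),
                      key=lambda kv: -risk_score(**kv[1])):
    print(f"{name}: {risk_score(**a):.2f}")
```

Note that the more vulnerable print server still ranks below the payroll database once criticality is weighted in – which is exactly why a credible risk picture needs asset-level weighting, not just raw threat and vulnerability counts.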
SIEMs retain data in a variety of ways. Some keep entire logs, and their drill-down capabilities let administrators go all the way to the source files. Some retain metadata parsed from the logs. In that case, drill-down usually gets header information, and that is all. The trade-off is the space required for archiving full logs.
While SIEMs are not inexpensive, prices have come down over the past few years. When selecting a SIEM, do not judge cost of ownership based solely on price. The most important metric is value in your environment.
The number and types of sensors are not the only criteria to consider. Where on the enterprise the data is being collected is critically important. It is also useful to be able to feed flow data into the SIEM; this provides data-flow vectors that help identify the paths attackers and malware take.
The following reviews are the products that scored most highly. For the full range of reviews from the SC group test, go to: www.scmagazineuk.com/group-test/section/332