Data modelling has been described as an efficient control method for preventing breaches and for enabling secure storage and access.
Kevin Dowd, director of security assessment at CNS, said that there are often 30 to 40 retained copies of a document after it has been circulated, and that as soon as information begins ‘spraying around’, some sort of information security control needs to be implemented.
Speaking at a recent industry event, Dowd said: “What we are finding is with risk assessment, you are not a drop in the ocean. If you begin classifying all data now it will take you five years, so you start with the important stuff and move on.
“There has to be a better way. Anyone with a business can follow through on processes and doesn't want to do a search on shared drives and examine file stores. We are moving away from controls and towards data modelling, and that is often poorly implemented in the first place.”
Speaking to SC Magazine, Dowd explained that data modelling is about knowing where sensitive data is: establishing where data actually exists, rather than letting it be copied and stored everywhere.
He said: “You can stop sensitive data getting out but put it in data stores and put controls around that data store rather than around everything. You can have a different view of data, as you cannot solve the problem easily and you have to centralise it.
“Data modelling is about the location of the data to be able to make sensible decisions on it. You can do encryption and secure access to it and be more sensible about controls as it allows the rest of the data to be used more sensibly too.”
He went on to say that most security products focus on applying controls and this is needed, but concentrating completely on all data will never work, as it is too big a problem and it is expensive. “It is a question of enabling and looking after secure stuff, then you are confident that no one can leak it,” he said.
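The idea Dowd outlines can be sketched in code: sensitive records live in one central store, callers pass around opaque references rather than copies, and the access check sits at the store rather than around every file share. This is a minimal illustrative sketch, not any product CNS or Dowd describes; all class and role names here are hypothetical.

```python
# Sketch of "centralise and control": one store holds the sensitive
# values, documents circulate reference tokens, and access control is
# enforced at a single point instead of across every copy.
# All names are illustrative assumptions, not a real product's API.

class SensitiveStore:
    def __init__(self):
        self._records = {}   # token -> sensitive value
        self._next_id = 0

    def put(self, value):
        """Store a sensitive value; hand back an opaque reference token."""
        token = f"ref-{self._next_id}"
        self._next_id += 1
        self._records[token] = value
        return token

    def get(self, token, role):
        """Resolve a reference, enforcing one central access check."""
        if role != "authorised":
            raise PermissionError("access denied at the data store")
        return self._records[token]

store = SensitiveStore()
ref = store.put("0044 7700 900123")      # e.g. a phone number
# Documents that get emailed around contain only the token:
print(ref)                                # → ref-0
print(store.get(ref, role="authorised"))  # → 0044 7700 900123
```

Because only the token is copied around, the 30-40 retained copies of a document no longer each contain the sensitive value, so controls need to be applied in one place only.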
Asked what sort of businesses are doing data modelling, Dowd said that it is mostly small businesses, highlighting identifiable information where it is easy to apply controls. He said: “It is about understanding the scale of the problem and minimising it. With PCI DSS, you enforce it with credit card data, but this is not always achievable, so you have to do this.”
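In the PCI DSS case Dowd mentions, locating card data is the precondition for applying controls around it. A common discovery technique (an assumption here, not something Dowd specifies) is to scan text for digit runs of card-number length and validate them with the Luhn checksum:

```python
import re

def luhn_valid(number: str) -> bool:
    """Luhn checksum: true for plausible payment card numbers."""
    checksum = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:       # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Flag 13-16 digit runs in free text that pass the Luhn check."""
    candidates = re.findall(r"\b\d{13,16}\b", text)
    return [c for c in candidates if luhn_valid(c)]

print(find_card_numbers("order ref 1234567890, card 4111111111111111"))
# → ['4111111111111111']  (a standard Visa test number)
```

Running a scan like this across file stores shows where card data has been retained, which is exactly the scoping problem PCI DSS forces businesses to confront.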
Graham Taylor, head of IT security (UK and Asia Pacific) at Michael Page International, said: “We do it because we do lexical scanning on the gateway to identify data, sorting out what shouldn't be sent out, but it is difficult to do as we normally look at names. This is candidate data, and as we are only regulated by the Data Protection Act, it is more personal information that is a concern for us, such as names, email addresses and telephone numbers.
“Data modelling is definitely a concept: identifying data that we are trying to protect in the business and identifying what is being sent out by the gateway. Other businesses do different things with records; it is about how data is stored, and you are monitoring that.”
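The lexical scanning Taylor describes can be sketched as pattern matching over outbound messages. The patterns and the candidate-name list below are illustrative stand-ins, assuming simple regexes for email addresses and UK-style phone numbers; a production gateway would use much richer rules.

```python
import re

# Illustrative patterns only; real gateway scanners are far richer.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
UK_PHONE = re.compile(r"\b(?:\+44\s?\d{4}|0\d{4})\s?\d{6}\b")
KNOWN_NAMES = {"jane doe", "john smith"}   # hypothetical candidate list

def scan_outbound(message: str) -> list[str]:
    """Flag personal data in an outbound message before it leaves."""
    findings = []
    findings += EMAIL.findall(message)
    findings += UK_PHONE.findall(message)
    lowered = message.lower()
    findings += [name for name in KNOWN_NAMES if name in lowered]
    return findings

msg = "CV for Jane Doe, contact jane@example.com or 07700 900123"
print(scan_outbound(msg))
```

Anything flagged can then be blocked or routed for review at the gateway, which is the enforcement point Taylor describes, while the modelling side decides which patterns and names are worth protecting in the first place.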