
The information security implications of change

Microsoft has recently warned businesses that they should be well on the way to upgrading their legacy desktop environments.

It announced some time ago that it will end support for Windows XP and Office 2003 on 8th April 2014, with Windows Server 2003 following suit just over a year later, in July 2015.

For many, the passing of Windows XP and Windows Server 2003 does not sound like a significant landmark. The architecture on which the systems are based is over a decade old, and there have since been several major advances within the software market from all the major players.

However, for many industries, the end of support for what have become corporate IT cornerstones over the past decade poses significant risks to the enterprise environment. Gartner has predicted that more than 15 per cent of medium and large enterprises will still be running Windows XP come April 2014.

While large, cash-rich businesses have the buying power and support teams to replace their entire infrastructure in a structured and resourceful manner, or the investment in firewalling needed to isolate critical systems from the outside world, the majority of businesses do not have the same luxury.

The migration of corporate systems from a 2003-era platform to an efficient, supported environment is a significant IT headache. Rollouts of updated operating system deployments bring their own flavour of issues within a number of critical areas: functionality, security, business continuity and a timely, structured response must all be considered.

However, at a time when budgets are being squeezed, many company and policy decisions are being made with an eye on spend and without a structured view of the wider implications.

While the rollout of updated server hardware poses a significant headache within corporate infrastructure, the majority of IT technicians are used to overcoming upgrade issues and have disaster recovery plans in place.

However, the elephant in the room remains the vast quantities of PCs, laptops and workstation devices that form the backbone of modern business life. If systems have been updated over time, the risk is much lower.

However, for many companies the vast swathes of XP devices deployed nationally, and often internationally, all require a structured upgrade path and often hardware replacement, and they continue to pose significant headaches for IT directors worldwide.

Even within medium-sized enterprises, the upgrade path is significant and requires the back-end systems to be appropriately configured to support the new estate. This must be achieved without compromising existing system security or functionality, and it must incorporate procedures to close the weaknesses inherent in the legacy environment once all systems have been migrated.

There are significant logistical issues to be incorporated in such deployments, and with the time taken to process change requests and gain approval for widespread change, companies that do not seriously consider the implications of a laissez-faire attitude towards system change may face significant difficulties and outlay in the near future.

By considering the long-term corporate position and planning ahead, the transition to a new generation of IT systems can be achieved with minimal disruption. Furthermore, the significant upheaval caused by upgrading core infrastructure gives every business an opportunity to review its position and to place good corporate information security practice at the core of its deployed infrastructure.

It is possible to iron out the inherent weaknesses within many older systems without disrupting the service or functionality of the corporate environment.

Sam Raynor is a consultant at Information Risk Management

The beginning of the authentication ice age

This week I was invited to sign the new online Petition Against Passwords, which I was delighted to do, and I urge you all to do the same.

We at Winfrasoft have been banging the drum to make this dinosaur of the security world extinct for several years now, and it seems that real momentum is at last being gained. However, there is a long way to go before we can condemn passwords to an authentication ice age.

Visit Google and it will give you advice on passwords that says: “Passwords are the first line of defence against cyber criminals. It's crucial to pick strong passwords that are different for each of your important accounts, and it is good practice to update your passwords regularly. Follow these tips to create strong passwords and keep them secure.”

This advice is as recycled as many of my passwords! And, to be honest, how many of us (even those in the IT security industry who should know better) adhere to this process rigidly? Today we need passwords for so many online services that we either use the same one repeatedly, which is not ideal from a security standpoint, or we simply forget them and waste valuable time getting them reset time and time again.

Crucially, every time we enter a password we are giving it away and are at risk from malware, cyber crime attacks and data leakage.

In fact, when John Shepherd-Barron, the chief inventor at De La Rue, invented the first ATM in the 1960s, he proposed a six-digit PIN, but his wife suggested four as it was easier to remember. Imagine if he had proposed a combination of eight case-sensitive alphanumeric characters! The thing is, as humans we are programmed to memorise patterns far more easily than a sequence of letters and numbers.
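To put that memorability trade-off into numbers, here is a minimal sketch (plain combinatorics, with the helper name keyspace being my own invention) comparing how many possible secrets each of the schemes mentioned above allows:

```python
# Rough keyspace comparison for the schemes discussed above.
# Pure combinatorics for illustration; real strength also depends on how
# predictably humans actually choose their secrets.

def keyspace(alphabet_size: int, length: int) -> int:
    """Number of possible secrets of the given length."""
    return alphabet_size ** length

schemes = {
    "4-digit PIN": keyspace(10, 4),                                   # 10,000
    "6-digit PIN": keyspace(10, 6),                                   # 1,000,000
    "8-char case-sensitive alphanumeric": keyspace(26 + 26 + 10, 8),  # 62^8
}

for name, size in schemes.items():
    print(f"{name}: {size:,} possible combinations")
```

The eight-character scheme offers roughly 218 trillion combinations against the four-digit PIN's 10,000, which is exactly why it is so much harder for a human to hold in memory.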

Of course, some sites will invite you to remain logged in, but that defeats the purpose of the password in the first place. Also these days, we are likely to be accessing the site from multiple devices (tablet, laptop, desktop PC, smartphone), so being logged in to all of them all the time increases the risk of identity theft and fraud.

The burden of forgotten passwords is not only an inconvenience for the user; it is also a burden for the organisations that implement them. A financial services house in South Africa recently calculated the cost of the password-reset calls its IT helpdesk was receiving. Of the 3,000 calls handled each month, 40 per cent were related to passwords; at an average cost of £23 per reset, that was costing the company £27,600 per month and £331,200 annually.

Those in favour of password protection (although it must be said these people are becoming as rare as the Tyrannosaurus, and they often have a vested interest) argue that there is no viable alternative, but this is simply not the case.

If these people had wandered around the aisles, or visited the seminar theatres at Infosecurity Europe several months ago, they would have seen and heard from vendors about a plethora of solutions, such as pattern-based authentication, that are more secure, easier-to-use and far cheaper to administer in the multi-device, multi-platform, multi-service environment in which we work, shop and play.

The password is one of many authentication dinosaurs that we need to confine to the history books, along with key ring generators, calculator tokens and card readers, all of which continue to roam the wild.

Steven Hope is CEO of Winfrasoft

The chilling effects of the Volkswagen injunction on British research

At this week's Black Hat conference in Las Vegas, Charlie Miller and Chris Valasek will present their research on on-board car computer insecurities to thousands of attendees.

 

The DARPA-funded research will show how they can control a Toyota Prius and a Ford Escape from the vehicles' ECUs (electronic control units), including moving the car, stopping it and sounding the horn.

In contrast, a High Court injunction obtained by Volkswagen has gagged University of Birmingham lecturer Flavio Garcia from presenting his research into the Megamos Crypto RFID chips. These chips are used by the immobiliser in Volkswagen's cars.

 

Volkswagen claims that publication could lead to the theft of millions of Porsches, Audis, Bentleys and other high-end cars. According to a recent Telegraph article, Mr Garcia and his colleagues asserted that they are "responsible, legitimate academics doing responsible, legitimate academic work". Despite this, Mr Justice Birss ruled against the academics on the grounds that publication would mean "that car crime will be facilitated".

So far, so reasonable, or so you might think. After all, such flaws may be difficult to fix and could lead to £250,000 cars being stolen by criminal gangs. Sadly, this kind of thinking is all too common. The fact remains that the vulnerabilities in the RFID chips used to protect the cars still exist, and they are now no more likely, and perhaps less likely, to be fixed.

As a co-organiser of the UK's biggest technical information security conference, 44Con, I often see research that's controversial. It's often controversial research that spurs change. Barnaby Jack, the sadly recently deceased vulnerability researcher who famously 'jackpotted' ATMs at Black Hat, went on to do considerable research into medical hardware, research that was improving medical security just as his earlier work on ATMs led to improvements in ATM security.

 

44Con hasn't shied away from controversy either. At our first event, one speaker presented ground-breaking research on weaknesses affecting the keylogging protections in Trusteer's Rapport, an anti-phishing security product used in homes across the world to access bank accounts securely.

This injunction is a severe impediment to the open discussion of legitimate academic research. It sets the precedent that if a vendor doesn't like what you're doing, it can gag researchers rather than address vulnerabilities, provided it can convince a judge that there is a risk of criminality, which is no doubt easier than fixing the problem.

This will only drive such research underground, which is no good for anyone. When there is a market for vulnerabilities and exploits and legitimate routes for research are closed, then for many, selling to the bad guys will be the only option left.

 

 

Steve Lord co-organises 44Con and is technical director of Mandalorian 

Information risk: what can businesses learn from each other?

Well-managed information has become a precious business asset.

 

Inevitably, as it becomes more valuable, information becomes more vulnerable. Data breaches, cyber threats and fraud are all on the rise. Such malicious threats combined with human error are exposing points of weakness in a fast-changing, complex information landscape and are putting brand reputation on the line.

 

Against a regulatory backdrop that is not always clear, companies are struggling to cope with the need to manage legacy archives along with the exploding volume of data generated by new technologies. As a consequence, businesses are facing unprecedented levels of information risk.

 

A recent report by Iron Mountain and PwC revealed some significant differences in the way that younger and older firms perceive and address information risk. Each side has important insight to offer the other. Things that older firms can teach younger firms:

 

1. Having a plan is as important as ‘getting the job done'.

Just under half (49 per cent) of younger firms – those that have operated for between two and five years – freely admit that they are much better at doing things than they are at strategic planning. Older firms on the other hand – those that have been in business for a decade or more – appear to have learned that knowing why you do something is just as important as what you do, with over half (56 per cent) having a monitored information risk strategy in place, compared to just 14 per cent of younger firms.

 

2. It is alright to be cautious about trusting employees with information.

Younger firms are far more trusting when it comes to their employees and their data. Just 18 per cent believe employees are a threat to information security, and only half have an employee code of conduct; while a more significant 42 per cent of older firms see employees as a threat and two thirds have an employee code of conduct in place. If caution leads to codes, guidelines and training to help employees better understand the risks and protect information then caution should be encouraged and applauded.

 

3. Information risk should be a boardroom issue.

Half of younger firms say the board does not see information security as a big issue, whereas the boards of mature businesses are far more likely to see information risk as worthy of their attention. Senior-level support is critical if information risk is to be taken seriously.

 

Some interesting points that both young and old firms should pay attention to:

 

4. Today's complex world of hybrid information is here to stay.

Younger firms are more likely to feel comfortable managing structured and unstructured information in digital and physical formats across multiple locations (55 per cent compared to 38 per cent for older firms). This multi-format, multi-channel data world is the new reality; there is no turning back, so you might as well embrace it.

 

5. Money isn't everything: the greatest victim of a data breach could be your reputation.

All firms agree that the impact of a data breach will touch customer loyalty (58 per cent for both) and brand reputation (52 per cent for both), but older firms are nearly twice as likely to be concerned about financial and legal consequences.

 

Information risk touches us all. Just as firms hold their employees' and suppliers' data, not to mention their own precious knowledge and intellectual property, many also hold personal information about us as the consumers of their products and services. This information needs and deserves to be protected.

 

 

Marc Duale is president international at Iron Mountain

 

 

Beyond awareness: the growing urgency for data management in the European mid-market, PwC for Iron Mountain. PwC surveyed senior managers at 600 leading European businesses with 250 to 2,500 employees in the legal, financial services, pharmaceutical, insurance, manufacturing and engineering sectors. The results were assessed for France, Germany, Hungary, the Netherlands and Spain.

Do you know who is looking at your data?

Given the current anti-EU sentiment gripping certain shires of England, it might not be fashionable to highlight the positive role that the EU plays in setting the regulatory framework for certain aspects of business behaviour and personal rights.

Nevertheless, there's no doubting the valuable service provided by a recent report from the Directorate General for Internal Policies (entitled Fighting cyber crime and protecting privacy in the cloud), which highlights serious concerns over the safeguarding of cloud-based data from European companies and citizens in a multi-jurisdictional framework.

The report accepts that cloud computing is making data processing global but warns that “jurisdiction still matters”. It said: “Where the infrastructure underpinning cloud computing (i.e. data centres) is located, and the legal framework that cloud service providers are subject to are key issues.”

This is particularly so with regard to the US, home of many large technology companies and cloud computing providers, and two specific pieces of legislation, the US Patriot Act and the US Foreign Intelligence Surveillance Amendment Act (FISAA) of 2008.

The report believes both acts give rise to conflicts in the relationships between states and companies.

“Major cloud providers are transnational companies subject to conflicts of international public law,” the report states.

“Which law they choose to obey will be governed by the penalties applicable and exigencies of the situation, and in practice the predominant allegiances of the company management.”

Those allegiances are likely to be sorely tested by the scope of FISAA, which essentially authorises the mass-surveillance of foreigners outside US territory whose data is within range of US jurisdiction, including data accessible in US clouds. The question that needs to be addressed is whether EU-based businesses and citizens should be prepared to gamble the integrity, security and privacy of their data against the loyalties of managers of US-based companies.

The report warns that cloud computing breaks the 40-year-old model for international data transfers because once data is transferred into a cloud “sovereignty is surrendered” and it advocates the use of prominent warnings concerning the dangers of cloud data being exported to US jurisdiction.

It's a concern UK businesses should heed very carefully if they don't want to put their data at risk of being spied on by US authorities. For those already ‘in the cloud', the report represents an opportune moment to ask what country their cloud provider is storing their data in.

Many cloud providers are global operations, which leaves them (and their customers' data) vulnerable to surveillance from the authorities in the US and other jurisdictions.

One way for UK businesses to ensure their data is safe and not being snooped on by the US or any other country's authorities is to choose a cloud provider with a geographically diverse cloud platform spread across the UK. A UK company gives them the comfort of being able to visit the data centre and an understanding of where their data lives.

Until the US authorities change or amend the Patriot Act and FISAA, that's the only way businesses in the UK can guarantee their most critical asset stays outside the jurisdiction of the US authorities (or those of any other country).

Campbell Williams is group strategy and marketing director at Six Degrees Group

Would you like an eat to bite?

At the time of writing I'm not sure if Edward Snowden is still sitting in a Moscow transfer lounge or settling in to his 'luxury apartment' in a barrio in Venezuela.

Regardless of where he is, I've become relatively blasé when it comes to hearing about yet another security breach, or of stories that Big Brother is watching us. It's almost like a traffic policeman going to the press and saying that speeding fines are a money-making racket; as if the average person in the street is going to be surprised.

Of course, the rather predictable shock and protests from certain EU governments that the US government was eavesdropping are really a case of the pot calling the kettle black. For those old enough to remember last century, the French government admitted to being actively involved in extensive international spying to try to give French companies an advantage in the international market.

So when the French president Francois Hollande said allegations that the US bugged European embassies could threaten a huge planned EU-US trade deal, and that there could be no negotiations without guarantees that spying would stop immediately, he seemed to conveniently forget that the French government has been doing this for years. Maybe he just didn't like the idea of a level playing field.

In fact one of the earliest examples of industrial espionage goes back to the beginning of the 18th century with the French stealing porcelain manufacturing methods from the Chinese. What goes around comes around as they say. During the early 1990s, France was described as one of the most aggressive perpetrators of industrial espionage, and it seems like the Americans and the French have been having a ding dong battle for years.

It's not just these two countries that have either been suspected or even caught red-handed – they're all pretty much at it. In fact the Chinese government must be enjoying this period of relative tranquillity, since they're usually blamed for everything.

So spying is not really news, and neither is yet another 'insider' abusing privileged access to steal confidential data from IT systems. According to NSA director Keith Alexander, Snowden reportedly “fabricated digital keys that gave him access to areas way above his clearance as a low-level contractor and systems administrator”. 

Now I'm sorry, but anyone stupid enough to decide that an airport was the place to settle down cannot be that clever. Or maybe, having seen the Tom Hanks movie 'The Terminal', he thought he'd have a Catherine Zeta-Jones moment and try the chat-up line "Would you like an eat to bite?" Who knows, but anyone who has the slightest understanding of digital keys will know that you don't simply fabricate them.

By now you would think that every organisation, whether governmental or private sector would have realised that protecting passwords and keys is an absolute essential. Additionally, technology that monitors the activity of systems administrators has been around for years. 

The problem frequently starts with organisations failing to know where these accounts are throughout the infrastructure. For example, all of your Windows systems have service accounts, scheduler task accounts, COM+ accounts, IIS6 Metabase accounts, IIS7 accounts and so on; it is not just the administrator accounts. A typical example of how easy it can be to circumvent policies is what happens when IT support departments are pressed to solve a problem.
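As a hedged illustration of that discovery problem, the sketch below (my own, assuming a Windows host where the legacy wmic utility is still present) simply lists services that run under something other than the built-in system accounts, which is one of the places forgotten privileged credentials tend to hide:

```python
# Minimal sketch: list Windows services running under non-built-in accounts.
# Assumes a Windows host with the legacy "wmic" command-line tool available.
import csv
import io
import subprocess

BUILT_IN = {"localsystem", "nt authority\\localservice", "nt authority\\networkservice"}

output = subprocess.run(
    ["wmic", "service", "get", "Name,StartName", "/format:csv"],
    capture_output=True, text=True, check=True,
).stdout

for row in csv.DictReader(io.StringIO(output.strip())):
    account = (row.get("StartName") or "").strip()
    if account and account.lower() not in BUILT_IN:
        print(f"{row['Name']}: runs as {account}")
```

A real audit would go further, covering scheduled tasks, COM+ applications and IIS application pools in the same way.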

Take, for example, a situation where a user is unable to gain administrative access to their system. The workaround is to call the IT helpdesk, who will have a solution. Very often IT will have set up an account that allows administrator access to every machine, and once this is given to the user, unless it is immediately changed, the user has unlimited access.

More disturbing is the question of who the IT admin actually is. Yet the same organisation will most likely have spent a fortune on perimeter security, block loads of malicious websites and constantly remind its staff of the dangers of malware.

What this shows is the massive risk that organisations face if they do not control access to privileged accounts. In the case in point, not only should the IT helpdesk have required an audited approval process to gain access to the backdoor password, but once accessed the password should have been changed immediately.
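A minimal sketch of that check-out, audit and rotate discipline might look like the following. This is purely illustrative logic of my own, using an in-memory store and invented names, not any particular product's API:

```python
# Illustrative check-out/rotate flow for a shared privileged password.
# In-memory only; a real system would use a vault, an approval workflow
# and integration with the target systems.
import secrets
import string
from datetime import datetime, timezone

ALPHABET = string.ascii_letters + string.digits

vault = {"backdoor-admin": secrets.token_urlsafe(16)}
audit_log = []

def check_out(account: str, requester: str, reason: str) -> str:
    """Release the current credential and record who took it and why."""
    audit_log.append((datetime.now(timezone.utc).isoformat(), requester, account, reason))
    return vault[account]

def rotate(account: str) -> None:
    """Replace the credential as soon as it has been used."""
    vault[account] = "".join(secrets.choice(ALPHABET) for _ in range(20))

password = check_out("backdoor-admin", "helpdesk-analyst", "restore local admin access")
# ... the credential is used to fix the user's machine ...
rotate("backdoor-admin")  # the password just handed out is now worthless
```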

Without properly managed and secure control of the credential that gives privileged access, everything underneath becomes vulnerable. As in the example of the NSA, it would appear that badly managed passwords and keys gave Snowden the access he needed to discover SSL keys, SSH keys, symmetric keys and other passwords.

Having good processes for your SSL, SSH and symmetric keys is all well and good, but ultimately flawed if you don't control your privileged accounts.

Finally, my advice to Snowden would be to watch another Tom Hanks movie, ‘Cast Away', since that may be his safest bet as far as a good location goes; and President Hollande may want to check the origins of the word espionage.

Calum MacLeod is vice president EMEA at Lieberman Software

The role of the individual in the great game of cyber intelligence

In the current debate over Edward Snowden, there are two opposing attitudes to consider: the ideology of individualism and the interest in Edward Snowden as an individual.

Snowden, the individual, prioritised himself and his values and interests over those of his country. His conscience allowed him to believe that disclosing classified information was acceptable and technology was able to provide him with the means to conceal the documents.

However, the increasing interest in the surveillance of individuals from every government also suggests individualism. Individuals have become an important source of intelligence, with every individual capable of causing substantial damage and being potentially dangerous, regardless of social class. With the help of technology, potential danger can be avoided.

In this on-going debate, the solution is to identify who is on whose side in the great game of cyber intelligence. Throughout history, intelligence has had three primary motivators: political, military and economic gain. Military strategists and political theorists praised intelligence as a game changer, as it has also been used to gain a commercial or technological advantage – or to fill in a competitive gap.

The relationships between these motivators have varied throughout the course of history. Intelligence has proven to be a game changer, with a purpose to construct a comprehensive view of a certain situation and to understand the forces that influence it. It requires the analysis of several factors and the intentions of multiple individuals. Even if the basic principles of intelligence remain constant, the nuances of intelligence evolve alongside social and material contexts.

What is considered acceptable or unacceptable, and what is profitable, is essential in this constantly evolving culture. Because of this, the methods of information collection that are deemed suitable, and the individuals considered worth finding information about, change over time and vary between cultures.

Unthinkable methods often deliver the advantage a business is looking for. The role of insiders who trick and deceive cannot be disregarded, as individuals can act as whistle-blowers.

In a modern society, information is the most valuable asset as it provides both a competitive edge and strategic advantage. Because of this, the efforts to collect and filter valuable or controversial information and to hide it have intensified.

The evolution of technology has always created opportunities for both intelligence and counter-intelligence. For example, the evolution of telegraphy, photography and aviation, or more recently computers and the numerous networks enabled by technology, have all had an impact on intelligence.

Regardless of the efforts to conceal important information, information is widely available due to the interconnectivity of technology and the ubiquity of cyber space.

Cyber intelligence has emerged from the evolution of signals intelligence, with new opportunities emerging as technology and society develop. The electronic traces that individuals leave behind have multiplied over the past few decades, and analysing them has proven to be an efficient intelligence approach.

Additionally, technical skills, such as big data analysis, are now available to analyse these traces. However, intelligence remains embedded in the physical world, as cyber efforts often need to be coupled with more traditional intelligence methods, such as wiretapping or honey trapping.

The concept of cyber intelligence remains blurred. It has been used in reference to technical and automated information collection, frequently referred to as 'cyber threat intelligence', often provided by private firms on commercial grounds. It has also been used in a wider context, called cyber intelligence or 'cyber collection', as part of the overall intelligence efforts conducted by governments.

In the cyber era, the question to be asked is whether 'cyber threat intelligence' and 'cyber collection' can be distinguished. This is important for individuals who are looking for the best solutions to address their own cyber challenges.

In attempts to enhance government capabilities, cyber intelligence has had a central role, providing many opportunities for defence and offence. For this reason, public and private businesses are allocating resources for the development of cyber capabilities, which has increased the relative power of the cyber-industrial complex.

This on-going debate over right and wrong in cyber intelligence is due to the colliding developments of individual tendencies in social situations and technological development.

The modern world is infiltrated by technology and the virtual space that it provides us, which enables vast amounts of data on individuals to be collected. At the same time, however, democracy and individualism demand respect for an individual's rights and freedom.

Information collection is only part of the problem. Bigger questions, such as 'how is information categorised, stored, handled, disseminated and protected?', 'what happens to the information?' and 'who will eventually own the data?' should be asked by individuals about their own personal life.

Jarno Limnéll is director of cyber security at Stonesoft

ISSA chapter meeting looks at regulation and penetration testing

The recent ISSA UK event was held aboard the HMS President in London once again, and Fujitsu's James Gosnold reported for SC Magazine on the day.

Opening the event on 11th July was Lord Erroll, who spent some time discussing small-to-medium enterprises (SMEs) and their need for cost-effective and trustworthy security advice. He said that over half of the UK workforce is employed by SMEs and that many of these will be in the supply chains of larger companies.

Lord Erroll also pointed out that officious regulatory authorities in the UK need to understand that rules sometimes need to be broken by an SME to get the job done – this is understood on the continent. He said that he is a strong advocate of 100 per cent UK-wide broadband coverage and feels public money would be far better spent addressing this than expanding rail/road infrastructure.

He also made a bold prediction that the government's ‘Digital by Default' policy will be tested in the next few years when someone in the UK population, who does not have broadband, will die because they cannot access public services.

Following on from this was the first ‘Dragon's Den' session – the format of which is that four vendors get 12 minutes to pitch their wares to the audience. At the end of the day the best presenter and product are voted for.

Up next was SC columnist and penetration tester Ken Munro, who is always an entertaining presenter and gets the right blend of geeky and practical advice. Ken asked how we decide what to buy: F1 hospitality? The biggest ad in SC Magazine? Nice coloured lights on the appliance? (I imagine it would be quite depressing to learn how often those criteria are actually the deciding factor.)

Munro ran a live session to demonstrate how easy it is to pack/encrypt malware code to evade traditional anti-virus scanners using a £150 tool commonly used by games developers. Some interesting advice was also given, specifically that organisations should remove the anti-virus information from email signatures, as it is advertising externally what anti-virus you are choosing to defend yourself with and therefore makes it easier for an attacker to tailor their code to avoid it.

Next was Tom Davison from Check Point who talked about how easy it is to evade traditional signature-based tools, and the launch (in Q3 2013) of a new tool forming part of the Check Point threat cloud. This tool is called ‘Threat Emulation' and inspects files going in/out of the system; Davison gave examples of file types often causing the most concern such as .PDF and .exe.

At this point I was thinking I could name several products that already do this, but where this tool brings something apparently unique to the party is that it opens a suspicious file in a virtualised sandbox environment (currently XP and Windows 7) and then monitors the behaviour of the system looking for characteristics such as changes to the registry or new network connections being invoked.

Speaking next was Michael Whitlock from MPWA, who announced the forthcoming Nice (Network & Internet Content Exception) solution, which has been in development since 2006. The solution uses a USB key and gateway, and its primary aim is to protect the end-user.

In summary, the user connects the USB key to their system, uses a fingerprint to establish encryption between the key and the gateway (at corporate HQ), then creates a secure link and uses the "H-Browser" (which is apparently un-hackable). The presentation did give an interesting insight into the launch of such a product: Whitlock was seeking investment.

The next speaker was Peter Wood of First Base Technologies whose credentials include chairing White Hat and generally doing pen testing for longer than I've been in long trousers. The presentation was quite basic given the audience – covering off how vulnerable SNMPv1 is and that systems with default credentials are a big threat.

The most interesting sound bite I took was that Wood felt “it is impossible to both design something and think of how to break it” when discussing the gap between security and developers/designers.

After a second Dragon's Den presentation, the final talk of the day was given by Chris Phillips from IPPSO on terrorism trends. He explained that the insider threat is the biggest to organisations, accounting for a significant portion of the estimated £27 billion hit to the UK economy from cyber crime in 2012.

Three crucial steps to improving personal security defences against the insider threat are recruitment processes (pre-employment checks, etc.), an on-going security regime and leavers' processes (termination of all access).

In respect of staff travelling abroad, Phillips stressed precautions should be taken as 600 British citizens were kidnapped in 2012 – to this end the CPNI site is an excellent resource for physical and personal security guidance.

At the end of the day, the Dragon's Den award for best presenter was given to Ken Munro, and the best product award to Adrian Wright from Secoda for his Rule Safe GRC tool.

The lazy attacker

Most cyber attackers are likely to use the easiest route in. They're lazy and no different from your run-of-the-mill hijacker who will gladly steal the car of someone who leaves the keys in it.

In the case of the cyber criminal, he will of course test the 'lowest common denominator' method against the widest range of IP addresses from the same source set of IP addresses.

Even advanced attackers will use a 'recycled attack platform' upon identifying and inspecting a target. Inevitably, this results in attackers using the same type of attack against a wide surface area, the same toolsets and the same set of source IPs or command-and-control (CnC) servers.

This is a cyber criminal's 'bread and butter' and as long as it remains effective and lucrative, attackers will continue this languid approach. The sad truth is that the cost to attack and exploit a system is dramatically less than the cost to defend it.

Take, for instance, advanced persistent threats (APTs). These get most of the attention from the cyber security community because, as defenders, we want to be vigilant against the most sinister techniques. Also, it is far more interesting to analyse and discuss sophisticated attack tools, techniques and profiles, but this one-sided mindset ignores a much wider reality: cyber criminals are just as lazy as criminals in the real world.

Let's first consider the costs of broad-based (lazy) attacks: Setting aside the incremental costs of exploit kits and the potential legal risk, there are no significant costs to launching an attack.

With easy-to-use and readily available exploit kits, an attacker can use a single machine to attack thousands of targets searching for one with susceptible defences. The cost of acquiring a new target is merely the cost of generating a new random number.
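To ground that claim, the target-selection step really is no more than drawing a random 32-bit number. The sketch below (illustrative only, with a helper name of my own) is essentially the entire "cost" of picking a fresh address to try:

```python
# Selecting a new target address costs exactly one random draw.
import ipaddress
import random

def random_ipv4() -> str:
    """Return a random IPv4 address drawn from the full 32-bit space."""
    return str(ipaddress.IPv4Address(random.getrandbits(32)))

print(random_ipv4())  # effectively free to generate, over and over
```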

On the reverse side, each new attack vector requires additional effort on the part of the defender. They must deploy and maintain numerous security controls while also keeping all of their systems updated with the latest security patches. This is a substantial cost that is all too familiar to anyone in the industry.

So, the advantage is completely with the attacker. While each defender must incur substantial cost to protect their organisations, the attackers can easily find targets that have not paid that price.

The question becomes: How can we increase the cost that an attacker must pay for each target? Clearly, the risk of criminal prosecution is a cost risk the attacker incurs. However, the technical difficulty of attributing attacks and the ease of crossing geo-political boundaries complicate prosecution efforts; and as a result, this risk is negligible. 

Even those attackers who are deploying more targeted, advanced attacks against a specific industry or organisation will reuse the same techniques and exploit code in targeted attacks against similar organisations in the same industry. Good examples include ‘Sykipot' and ‘Red October' – both of which primarily targeted defence agencies and governmental organisations. In each of these cases, the original exploit code was developed years ago and over the years, the code has ‘evolved' as it has been reused and repurposed against new victims.

They manage this because cyber criminals are highly adept at sharing information with each other. On hacker forums and in other underground communities, attack tools and techniques are widely shared, discussed, vetted and promoted. This sharing gives attackers additional resources to be more effective in their efforts and adds plenty of weaponry to their arsenals.

Clearly, the same collaborative approach is needed for defenders. Remember the recycled attack platform used by attackers? Why wouldn't defenders likewise collaborate on the sources, tools and techniques used in these attacks and reap the tremendous benefits of threat sharing? Not to mention that such collaboration among defenders can also increase the costs associated with executing these attacks.

Once an attacker has targeted any member of a collaborative platform, such as AlienVault's Open Threat Exchange, command-and-control servers are easily identified by their IP addresses throughout the network. This means that attackers can no longer benefit from the isolation of their targets; they must use a new IP for each attack that they launch.

Instead of being able to launch thousands of attacks from a single IP, they have to pay the cost of acquiring a number of IPs that is proportional to the number of attacks they wish to mount. Additionally, an attacker's tools and tactics become much less effective when defenders collaborate to protect themselves from the attacker. A ‘neighbourhood watch' for the internet makes sense from an economic perspective as well as from an operational one.
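Stripped to its essentials, that neighbourhood watch is defenders checking their own traffic against indicators other members have already reported. Here is a minimal sketch of the idea, assuming nothing more than a shared list of known command-and-control addresses (the feed contents below are invented, documentation-range addresses):

```python
# Minimal sketch: flag outbound connections to IP addresses that a shared
# community feed has already identified as command-and-control servers.

shared_cnc_feed = """
198.51.100.23
203.0.113.89
192.0.2.171
"""

known_cnc = {line.strip() for line in shared_cnc_feed.splitlines() if line.strip()}

observed_connections = [
    ("10.0.4.12", "203.0.113.89"),   # internal host -> reported CnC address
    ("10.0.4.15", "93.184.216.34"),  # internal host -> unremarkable address
]

for src, dst in observed_connections:
    if dst in known_cnc:
        print(f"ALERT: {src} contacted known CnC server {dst}")
```

Real exchanges such as the Open Threat Exchange distribute far richer indicators than a bare IP list, but the economic effect is the same: one member's detection raises the cost of every subsequent attack launched from that infrastructure.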

So, next time you get focused on the ‘shiny object' of APTs, remember there are cyber criminals out there still using easily defendable broad-based threats to compromise your systems. Sharing information about attack methods with others, especially those in the information security industry, is an essential first step to combating the widest range of threats.

Jaime Blasco is director of research at AlienVault Labs

(ISC)2: 'We constantly look at ways to make our members stronger'

Revamping credentials is key to ensuring that they remain inclusive and represent the best people.

Speaking to SC Magazine, (ISC)2 executive director W. Hord Tipton said that revamping and improving credentials and certifications was a part of what (ISC)2 was trying to do. After some criticisms that CISSP is not representative of modern, changing skills, and that some professionals have chosen not to keep their CISSP certification, I asked Tipton where he sees the future of the standard.

He said: “We've been looked at as the gold standard, particularly with the CISSP for several years now, and the reason for that is that we constantly revamp that credential, as we do all of our credentials. Having more credential holders in the CISSP with the demand to keep it updated and to keep the credential inclusive of all of the current technology as quickly as it changes [is difficult].”

Asked if there is a need to keep updating things so they are very current, Tipton said that it was a very delicate balance as "no one credential does it all".

“We do have to constantly explain and communicate that as broad as the CISSP is, for example, and as well adapted it is to all of the things that security professionals have to touch in one way or the other, in some ways deeper in certain areas than others, it found the right fit from the very beginning of its development in the beginning and that's why I think it rose to the top,” he said.

“The jobs that security people have to do constantly change; you may be doing access controls one day, configuring firewalls the next and you may be setting up a telecommunications network the following day. There are not enough people to go around, to begin with.

“Our research estimates we need about 300,000 additional people around the world for 2013, so you need people that are very versatile and you need people who can demonstrate that they're capable of being trained. Training is expensive. You want to be sure you invest in the right people, because, as I say, no one certification has all the answers.”

Tipton said that as a global standard, in some developing countries the CISSP is simply too difficult, and the better fit is the Systems Security Certified Practitioner (SSCP) credential, which requires less experience and is more hands-on.

Talking about the skills gap, I asked Tipton if CISSP can be seen as an entry standard to get a job – a way of proving your capabilities. He said the problem is that computer science graduates are still missing the security piece.

He said: “We will feel successful when we start getting some kids say ‘I want to be a digital forensics expert'. What we don't want to hear is ‘I want to be a hacker'. So part of that programme of getting a foot into the door is to make kids aware that you can get in trouble very, very quickly on the net. We just think education is the key to solving a very, very serious problem.”

Tipton said that when it comes to attracting the right people, this industry does attract people who are well-rounded, but they have to be better prepared at a much earlier age, so that they go into the industry with a full set of skills and not just a partial set.

 

I asked Tipton if the problem with CISSP was that everyone who has it is not at the same level of capability. He said that it is not a matter of just getting it, but that CISSP establishes and prescribes a minimum level of experience and knowledge that you have to have, not the maximum.

 

“We don't give the actual passing scores or the results of people who pass. It's like the law, the Bar exam, in the US at least, whereas if you pass your law or if you don't pass, then you have to take the test again,” he said.

 

“We only provide the results of the exam to people who have not passed it. We tell them areas where they need more work and where they need to study. We do the same thing with people who do pass, in terms of getting their continuing professional education (CPE) credits, which is another vitally important part of this, to make sure that the 2004 CISSPs stay up to speed with the change in the technology.

 

“They need 120 CPDs every three years, and I consider this part of the revamping; we constantly look at ways to make our members stronger, and those members that we have, to keep them stronger, and to lead them, prepare them for the next generation of technologies.”

 

Tipton said that as well as re-evaluating CISSP holders every three years, it was considering additional things to give its members help along that way and get the exact right type of CPDs that they need. "So we constantly have a number of areas of potential improvement, but then within a now already rigorous system [the challenge is] to try to stay current and relevant," he added.

 

I asked if people are getting the right amount of professional development to make sure they're keeping up, because otherwise the ratio fails to balance: not enough of the right people come in, and those already there are not at the right level of experience.

 

Tipton said that (ISC)2 is ‘probably' the only security certification organisation that had a pretty rigorous scheme for demanding CPEs in the first place, and then, secondly, for sorting out the types of CPEs that would be acceptable for its respective certifications.

 

He commented that (ISC)2 was seeing other certifying bodies now emulate its CPE process, and other international accreditations have been more insistent that all certifications have a CPE requirement tied to them.

 

“There have been a number of certifications that once you pass the exam, it's like a college degree, you have that credential for life, and that's not the case in our world,” he said.

 

Does CISSP have a future? Of course it does: it is the recognised standard for information and IT security professionals, and if you hire someone with a CISSP you know that at some point they passed the exam and that, to keep the certification, they have met the CPD requirement.

 

The dilemma (ISC)2 and other accreditation bodies face is making their certifications appear worth having. If a certification is seen as too expensive or too cumbersome to keep up with, or if employers do not see it as a crucial CV entry, the number of more experienced CISSP members may begin to decline.

EC releases guidelines on locking up cyber criminals

Upon returning to the office after a couple of days off, I found my inbox bulging at the seams with perspectives on the change in the punishment for cyber crimes across Europe.

According to the BBC report, European politicians agreed a draft directive outlining minimum jail terms for some crimes. These include: three years in jail for those found guilty of running a botnet; five years for those who do serious damage to systems; and five years to be served by those who attack computers controlling a nation's critical infrastructure.

Following the sentencing of three UK-based members of the LulzSec hacking group in May of this year, I guess that there needs to be some policy on how long should be served for hacking and so-called ‘cyber' crimes.

The European Commission's directive states that attacks against information systems, and the illegal entering of or tampering with such systems, have risen steadily in Europe.

The changes to the directive, which was originally created in 2005, mandate the penalisation of illegal access, illegal system interference and illegal data interference; seek to specify what malware and botnets are; and call for better intelligence sharing, with nations obliged to collect basic statistical data on cyber crimes.

Cecilia Malmström, EU Commissioner for Home Affairs, said: “The perpetrators of increasingly sophisticated attacks and the producers of related and malicious software can now be prosecuted, and will face heavier criminal sanctions. Member states will also have to quickly respond to urgent requests for help in the case of cyber attacks, hence improving European justice and police cooperation.

“Together with the launch of the European Cybercrime Centre and the adoption of the EU cyber security strategy, the new directive will strengthen our overall response to cyber crime and contribute to improve cyber security for all our citizens.”

The worst thing for the EC now would be divided laws among its 28 member states; the aim instead is one set of rules on how to react to, and penalise, those found guilty of internet crimes. Perhaps the next stage would be to determine what the word cyber actually means?

Etay Maor, fraud prevention manager at Trusteer, said the concern is that in many cases the people running the botnets and hijacked computers do not reside where the crime takes place, and those caught are mostly money mules rather than the bot masters or ring leaders.

“Until the day that we see tight cooperation between national law enforcement and criminals brought to justice, it is up to organisations and users to prevent fraud,” he said.

John Yeo, EMEA director of SpiderLabs at Trustwave, called the move "another example of adding to the ever growing patchwork of cyber risk laws", and was especially critical of the descriptions of terms.

Amichai Shulman, co-founder and CTO of Imperva, said: “I think that standardising penal law with respect to cyber crime is an important cornerstone in the true battle against criminal attacks.

“I don't think that setting up minimal jail time is by itself going to be a deterrent or going to affect whether perpetrators are actually getting caught. However, explicitly referring to botnet operators as criminals does make prosecution easier and hence is a deterrent (to the point where we believe that criminal prosecution is a deterrent for any criminal activity). It also encourages law enforcement agencies to actually catch perpetrators as they have higher confidence that prosecution will lead to conviction.”

To echo what Maor said, the problem here is that cyber crime is a global one, and while European cohesion will aid the fight, a total collaborative effort seems unlikely.

Going virtual? Then get the right security tools for the job

In the last few years, virtualisation and cloud computing have transformed the way organisations do their information processing.

However, the rapid adoption of these technologies, often driven by an urgent need to save money, has created a new set of headaches for security professionals.

A recent study that we conducted with research firm Vanson Bourne confirms that in the rush to adopt these new technologies, security has often been treated as an afterthought, and has left firms struggling to keep themselves safe. Of the 100 IT decision makers interviewed for the study, more than a quarter said that security had been given too little attention during their virtualisation process, and only 16 per cent judged that security had been integral to the project.

Just as worrying, interviewees felt that the virtualised environment was more complex and that this had made systems harder to manage. The vast majority (96 per cent) said they were struggling to manage these more complex IT infrastructures, and one in five said that patch management was now more difficult in the virtualised environment.

A similar story emerged in the area of cloud computing, with 39 per cent saying that using Infrastructure-as-a-Service had made it more difficult to manage security.

Despite these findings, it's important to establish that security need not suffer in a virtualised or cloud computing environment. On the contrary, if organisations take the right steps and adopt the right tools to manage security in this more dynamic IT model, then they can reap all the benefits without laying themselves open to more security breaches. Virtualised environments present organisations with new security risks and demand a new security mindset to tackle them accordingly.

Using the right tools for the job is essential. The great majority of respondents (85 per cent) said that they were using the same security tools in the virtualised environment as they had when they had been running traditional physical servers.

Despite the need for greater security, they had stuck with their old anti-virus, intrusion detection and firewalls, even though the environment in which they were operating had been completely overhauled. No wonder then that they were struggling to manage their security with inappropriate tools, and that they felt they were more vulnerable to a future breach.

With the right tools in place, it is possible to create a single security model across the whole of the IT infrastructure: physical, virtual and cloud. One security model can be managed from one console, making the task easier and the security tighter.

This single model, managed from a single console, can ensure that security follows the workload as it shifts around in the dynamic environment. It means that when machines are relocated in the virtual environment or cross the border from on-premises into the cloud, security controls can move with those machines and maintain their integrity.

Finally, it's essential for the security team to play an integral role in any virtualisation or cloud project, and to be involved right from the start. Both the information security and data centre management teams need to collaborate closely to achieve a high performing and secure virtual environment.

In that way, security questions can be tackled at an early stage, and the complexities of these new more dynamic environments can be managed without any unwelcome surprises.

Michael Darlington is technical director of Trend Micro

Exploit kits for sale on a website near you

Exploit kits are now responsible for the majority of malware infections across the world, representing a serious threat to computing systems and data.

An exploit kit is simply an off-the-shelf cyber crime bundle that can be used by people without expert technical hacking skills to identify software vulnerabilities and mount attacks. This typically involves executing malicious code on the target system with various objectives in mind, but revenue generation and information theft are the principal motivators.

In short, exploit kits make hacking easy. It's the difference between having to understand internet protocol and code used to put up a website back in the early 1990s compared with pointing and clicking to post to Facebook today.

The most recent infamous example of an exploit kit that was put to widespread and damaging use was Blackhole, which heads up a list of others such as Crimepack, Elenore, Neosploit and Phoenix. An advertisement for the latest version of Blackhole was posted on an underground forum and some kits even offer technical support.

Exploit kits represent the commoditisation of hacking for malicious intent with minimal visibility and traceability. The posting of exploit kits on the internet is like handing out grenade launchers to criminals with minimal technical skills.

Typically, exploit kits rely on web browsers as their principal vector of attack. The way in which a browser may be directed to a web server hosting an exploit kit may be via spammed emails containing links, or by hijacking other web servers to direct the browser on to the server hosting the exploit kit.

Once the browser reaches the malicious server and requests a page – which may be via a chain of further intermediary servers to obscure the final location – the process of target identification is performed by the exploit kit.

This is a crucial step, as software components and their versions running on the target system can be identified for cross-referencing against packaged exploits held by the server. The operating system and browser, along with the presence of installed plug-ins such as Java, Adobe Reader or Adobe Flash, are identified and this information is used to determine which exploit has the highest chance of success.
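The matching step itself is mundane. The sketch below is an illustrative reconstruction of that selection logic only, with invented component names and version thresholds, and it shows why an unpatched plug-in is precisely what the kit is looking for:

```python
# Illustrative selection logic: given component versions reported by a
# visiting browser, decide which packaged exploit (if any) applies.
# Component names and version thresholds are invented for illustration.

packaged_exploits = {
    # component: highest version still affected by the packaged exploit
    "java_plugin": (7, 10),
    "adobe_reader": (10, 1),
    "flash_player": (11, 5),
}

def select_exploit(reported):
    """Return the first component whose reported version is old enough to hit."""
    for component, version in reported.items():
        vulnerable_up_to = packaged_exploits.get(component)
        if vulnerable_up_to and version <= vulnerable_up_to:
            return component
    return None

visitor = {"java_plugin": (7, 6), "flash_player": (11, 8)}
print(select_exploit(visitor))  # -> 'java_plugin': the out-of-date plug-in wins
```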

If the exploit is successful, the targeted software component will then proceed to download the malicious executable and launch it, infecting the system with the malware of choice. The technical mechanism used by the exploit depends on the software component under attack and the nature of the vulnerability.

It is of note that the majority of recent successful exploitations have been through Java browser plug-ins; but exploit vectors are in continuous flux and depend on the speed with which discovered vulnerabilities are fixed by the vendor and patched on end-user systems.

The growth in the use of exploit kits is largely down to their ability to target software vulnerabilities on a large scale and their ease of use among a non-technical audience. It is also due in part to their widespread availability on the black market for a small cost, or even for free on file sharing networks.

As long as vendors continue to ship vulnerable software and are slow to patch and update systems, the use of exploit kits will remain a fundamental problem and will continue to evolve.

Thus, mitigation advice is the same as ever. End-users or system administrators should update vulnerable software components as soon as patches are made available by vendors. A system should be in place to automate this and an approach of defence in-depth should be adopted.
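As one small, hedged example of automating that first step, the sketch below reports packages with pending upgrades on a Debian or Ubuntu host using apt; in practice this would feed a patch-management system rather than print to the console:

```python
# Minimal sketch: report packages with pending upgrades on a Debian/Ubuntu
# host, as one input to an automated patching process.
import subprocess

result = subprocess.run(
    ["apt", "list", "--upgradable"],
    capture_output=True, text=True, check=True,
)

upgradable = [
    line.split("/")[0]
    for line in result.stdout.splitlines()
    if "/" in line and "upgradable" in line
]

print(f"{len(upgradable)} packages have pending updates")
for package in upgradable:
    print(f"  - {package}")
```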

Yet the source of the problem is ultimately in the insecure coding practices and insufficient testing procedures of software vendors in the first place. So, perhaps the key to cracking this problem lies more in incentivising them to change and improve their practices.

Kevin O'Reilly is lead consultant at Context Information Security

Cyber security: Why it needs to be a board-level issue and not just left to the IT department

Cyber security: Why it needs to be a board-level issue and not just left to the IT department

It's May 2012, and it's a rainy day in South Wales.

A large multi-national, multi-disciplinary company with ambitions to break into the FTSE 100 top companies is holding its annual conference at a local country club. The CEO and finance director have just driven down after delivering the company's annual results.

It has been a great year and the air is full of self-congratulation. The CEO stands up to give his keynote address; however, despite the good share-price news and profit margins, his expectant audience is surprised to find his demeanour unusually serious.

He explains that moving the company into the FTSE 100 will expose it to greater worldwide scrutiny from a range of industry competitors and from those interested in the work it does on behalf of government departments. Indeed, he then reports that he thinks this is already happening, as the company had recently ‘lost' some of its critical intellectual property and missed out on a few large contracts.

The reason: poor cyber security, which is now the company's top priority. All eyes turned to look at the CIO, who had gone as scarlet as a guard's uniform and was visibly squirming in his seat! Were they right to single him out?

Actually, they weren't! When the facts were examined, the incidents comprised the following: a mis-sent email (a strategy document sent to a competitor); commercial papers lost on a train; a former employee who was not legally prevented from taking bid information to a competitor; a laptop left on a plane with passwords attached; and careless use of social media giving away IPR.

In this particular case all of the breaches were down to human error, none of them the direct fault of the ICT department, or indeed of any single department; they reflected a collective failure of the company to invest in its people. There was a distinct lack of collective education and training, of focus on the critical information that supports the company's business objectives, of suitable ICT products and business processes, and of the pervasive culture of information risk management needed to bind them together.

Does this sound like an IT department issue to you? No, and this particular case isn't unusual. Indeed, it was seen that human errors (and systems glitches) caused nearly two-thirds of data breaches globally in 2012, all of which could have been prevented with a holistic approach to cyber security within the organisation.

Of course, many of the most successful companies are organised along separate lines of business, each of which is often quite independent, with a light corporate centre managing a small number of corporate functions. The problem with cyber security is that it is only as effective as its weakest link. Therefore, to ensure cyber resilience, the whole company must be marching to the same beat, played to the same standard from the board to the factory floor.

Good cyber security comes from a holistic strategy set by the board; it is a matter of leadership and proactive information governance. All elements of a company must know with whom, what, why and when they are to share company information. This requires a shared corporate understanding of the threats and risks to different types of information, and shared processes for handling it safely while extracting as much ‘bang for the buck' from it as possible.

All this will take top-down leadership and board-level commitment if it is to pervade an organisation. It is no good if board members are recklessly using social media, emailing sensitive work to their home accounts, or viewing board papers on the latest insecure ICT, just to look good at the next conference they turn up at.

This poor leadership will not inspire cultural change, no matter how hard internal communications try to advertise best practice.

Andrew Fitzmaurice is CEO of Templar Executives

Women in Security mentoring scheme launched

Women in Security mentoring scheme launched

This week I attended the launch of the (ISC)2 Women in Security mentoring scheme, which was previewed here.

The event was held at the offices of EY, formerly Ernst & Young, and attended by a number of the company's security specialists, including Mark Brown, a former SC Magazine Information Security Person of the Year. Its purpose was to bring in mentors and mentees, and to promote a speed dating event (to be held later this year).

The one thing that the chapter was keen to push was that this is not an exclusively female scheme. What the event did show is that the male/female balance in the industry is not as one-sided as we are led to believe, but that everyone needs a helping hand.

Liz Bingham, managing partner for people at EY, said that this had been a particular passion of hers; in her experience it had been hard earlier in her career to find a mentor, and the career ladder had been more of a 'climbing wall'.

“Mentoring is so important as back in the days I did think of it, but it was so hard to find the next handhold on the climbing wall. Access to mentors to pass the pitfalls is incredibly important and careers could have been accelerated with a mentor as so much is about confidence and self belief,” Bingham said.

My concern about this process is that mentees may be attracted, but will mentors be attracted to take part in such a process? From looking at the mentors already with the scheme and those who signed up on the night, it was actually rather reassuring to see that there was a passion to participate in this concept.

Speaking on the night was Vicki Gavin, compliance officer at the Economist Group, who said that she didn't think of herself as a mentor, and that she had never been involved in a mentoring scheme and had always been informed within the industry.

She said: “There is a lot of advice out there if you are prepared to listen for it. I joined 'working skills for women', which was about women for women and it was about achieving goals and it made me see how important it was that women help other women as they were turned off learning maths and science. However this helped people learn in a new way from women and it really gave me the bug.

“We are now setting up a mentoring scheme and internships, as we need to set up the leaders of tomorrow to help them get their first job. We get so much from these people and if you don't have a scheme, I strongly advise you try and start one as it is the best type of resource.”

The Women in Security mentoring scheme will have three objectives: to enhance technical skills; to help expand professional networks; and to ensure newcomers are not put off by jargon and can feel part of a group.

Mentors will be asked to allocate a minimum of three to four hours a month to prepare a structured meeting and to commit to the programme for 12 months, while mentees will be asked to invest a minimum of eight hours a month and act on the areas covered.

Again I was concerned that busy professionals would not be able to commit to the time requirements and that there would not be the feeling of 'I can lead' in a generally humble industry. However I caught up with Soraya Iggy, Women in Security volunteer and B-Sides London organiser, who oversaw the conference's rookie track programme this year. She said that there were more mentors offering time and advice than there were proposed speakers, and that was because of the "fantastic generous community who want to help someone else".

“The mentors said give me one mentee and they focused on forming a partnership and we have learned a lot for next year,” she said.

“We had to turn people away who wanted to be mentors as we had 24 slots and we had people on a waiting list.”

With constant talk of a skills shortage and no real clarity on how things can be solved, perhaps the best solution is to offer advice to those either already in the industry or considering it. I've always been sceptical of the imbalance concept as it may be the case that there are not enough women in senior security positions, but if this scheme helps build networks and enhance communications, it can only be positive.

A smarter approach to defend against advanced persistent threats

A smarter approach to defend against advanced persistent threats

Most cyber threats originate from outside networks and exploit known vulnerabilities.

Organisations have responded to these attacks with conventional security methods, such as anti-virus, firewall and IPS solutions. However, more recent and sophisticated cyber attacks have targeted organisations by injecting malware or files into the web applications or email used by employees.

This final blog post from our three-part series discusses a smarter approach to defending against these threats.

One thing is common to all advanced persistent threat (APT) attack scenarios: although the methods are diverse, all are triggered by malware. The attack initiates with the distribution of malware past conventional security solutions that were unable to identify the unknown or variant codes.

Because of this type of sophisticated attack, many organisations remain vulnerable to APTs. There is no magic bullet to protect against APTs. Protection requires diligence, intelligence and a constant proactive effort.

Understanding that protection

Some APT response solutions emphasise signature-less analysis. This approach is based on the idea that APT attacks use unknown codes that render signature-based solutions useless.

However, most of the files flowing into the enterprise network are either normal files or malicious files that use known codes. Furthermore, up to half of APTs utilise known malware, which means that signature-based malware detection technologies are useful for detecting a large volume of malicious activity, without needing to use more sophisticated analytical techniques. Stopping the remainder requires a sophisticated solution.

There is a logical order that makes best use of resources to identify and stop threats, both known and unknown. Signature-based analysis should be done first, followed by other technologies such as behaviour-based analysis. Additional static and dynamic analysis is best performed in the cloud.

Behaviour-based analysis in a virtual machine (VM) environment has some limitations. These limitations include the CPU and memory load required to analyse a large number of files. By identifying known malware with a blacklist and normal files with a whitelist, you only need to use the VM environment to detect unknown or variant malware.

This technique minimises unnecessary analysis and helps maximise appliance performance.
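
A minimal sketch of that triage order is shown below; the hash sets are placeholders to be populated from threat-intelligence feeds and software inventories, not real data:

    # Route files through cheap signature checks first and reserve the VM sandbox
    # for unknowns. The blacklist and whitelist sets are empty placeholders here.
    import hashlib

    KNOWN_BAD = set()   # SHA-256 hashes of known malware (blacklist)
    KNOWN_GOOD = set()  # SHA-256 hashes of trusted files (whitelist)

    def triage(file_bytes):
        digest = hashlib.sha256(file_bytes).hexdigest()
        if digest in KNOWN_BAD:
            return "block"               # signature hit: no sandbox run needed
        if digest in KNOWN_GOOD:
            return "allow"               # trusted file: no sandbox run needed
        return "detonate in sandbox"     # unknown or variant: behaviour analysis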

Performing multi-dimensional behaviour analysis  

An APT begins with cyber criminals gaining access through a single endpoint. Even with fully patched machines, attackers are getting in by using zero-day attacks. Layered security is still needed, but now you must look to new forms of malware analysis. Malware analysis involves both static and dynamic techniques.

Static analysis requires malware experts to spend a fair amount of time evaluating files. Dynamic analysis, on the other hand, requires very little time to uncover changes in the operating system, such as network behaviour, registry alteration or file system alteration.

False positives or false negatives can occur with dynamic analysis, but most APT response solutions employ dynamic analysis because it provides quick results. Furthermore, dynamic analysis offers a means of identifying malicious characteristics of unknown or variant codes in a VM environment that is not available with signature-based solutions.

Correlation and reputation

All aspects related to file execution must be considered when analysing behaviour. The results of the behaviour analysis have to be used in combination with signature-based analysis. Additional information about the associated files is reviewed, including malicious characteristics, the risk level of the URLs or IP addresses that the file connects to, reputation information, and comprehensive behaviour patterns.

The reputation-analysis method uses contextual information, such as source and collection time and the number of file users, to analyse both the sample file and associated files.

This analysis technique has an important role in detecting targeted attacks that use new or unknown codes because it allows for a more fundamental response. An effective solution should not automatically flag a file as malicious just because suspicious behaviour appears in the behaviour-analysis results. Instead, it should weigh those results against the reputation analysis, minimising both false positives and false negatives.
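
One way to picture that weighting, purely as an illustration (the weights and thresholds below are assumptions, not vendor parameters):

    # Combine behaviour and reputation scores (0.0 benign .. 1.0 malicious)
    # into a single verdict; the weights and cut-offs are illustrative only.
    def verdict(behaviour_score, reputation_score,
                w_behaviour=0.6, w_reputation=0.4):
        combined = w_behaviour * behaviour_score + w_reputation * reputation_score
        if combined >= 0.8:
            return "malicious"
        if combined >= 0.5:
            return "suspicious"   # queue for analyst review rather than auto-block
        return "benign"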

Dynamic intelligent content analysis

The most important feature of a targeted attack is that a web browser, plug-in, or application such as a text editor is used to enable the attack. Not long ago, an attacker could damage a victim easily by attaching malware directly to an email or redirecting the victim to a malicious URL.

However, this type of attack is no longer as effective because of built-in security functions in web browsers and client email programs. As a result, attackers have focused their attention on non-executable files such as documents. Enticing a victim to open a .pdf or .doc file that contains a shell code has a higher probability of success.

Looking at the overall picture

The frequency of APT attacks has been increasing sharply over the last few years. The techniques have evolved, and the targets have become wider in scope. Previously, APT attacks mainly aimed to steal confidential information. However, some of the more recent attacks have attempted to inflict serious damage on governmental agencies and critical infrastructures.

Despite these escalating threats, most organisations continue to respond with conventional security solutions, such as anti-virus solutions, intrusion detection/prevention systems, firewalls, next-generation firewalls, and web application firewalls.

These organisations are limited by the time required to perform multidimensional threat analysis, the inability of these devices to perform this analysis, and the lack of an automated response to identified threats.

Tom Hance is vice president of operations at AhnLab

The battle of blacklisted apps: can IT managers balance security with productivity?

The battle of blacklisted apps: can IT managers balance security with productivity?

Mobility, particularly as bring your own device (BYOD) continues to gather momentum, can keep IT managers up at night.

One of the main concerns is that today's vast array of mobile devices is bustling with new apps. Whether recreational such as Angry Birds, productive such as a CRM app, or mixed use such as Dropbox, all of these apps, regardless of the device they sit on, put IT managers under increasing pressure to meet enterprise security requirements while making certain that performance is not compromised.

We recently carried out research into the leading apps that IT managers have blacklisted on iOS and Android devices, drawing on 4,500 global organisations in Fiberlink's customer database.

What was clear from the findings was that the pursuit of protecting corporate data and ensuring employee productivity is driving the decisions behind which apps get blacklisted. While the apps included in both lists will not come as an earth-shattering surprise to IT managers, it is interesting to see the two very distinct flavours of recreational and corporate apps being blacklisted.

The very nature of the more productive file sharing apps such as Dropbox, used by employees to collaborate and access important documents, puts corporate information at risk if it falls into the wrong hands. Meanwhile, the more recreational apps such as Google Play, often used by employees to watch movies, consume bandwidth and slow the performance of business-critical apps as a result.

As a result, IT managers are truly wrestling with the challenge of how to unlock maximum productivity for employees through these apps, while ensuring that first and foremost security is not impaired. This is why more and more IT managers are exploring more effective ways in which to manage apps, as unmonitored apps can be more disruptive than productive.

As the findings highlight, any employee can seek out their own tools for collaboration and could very well use popular apps that are designed for the mass market. The trouble is that these apps are often not designed for enterprise use and, more critically, they don't have enterprise-level security.

Even when employees have the intention of using apps such as Facebook for business purposes, the security credentials are often an afterthought. IT managers cannot afford to assume that any employee will think about the level of security required.

Moving forward, it is highly unlikely that the number of apps, whether recreational or business, entering the enterprise will decrease anytime soon. If anything, BYOD rollouts will accelerate the adoption of even more apps.

In order to be best prepared, many IT managers are exploring the use of enterprise application management in order to deliver an easy-to-use enterprise app catalogue, with full security and operational lifecycle management across mobile device platforms.

Organisations taking this proactive approach will continue to have access to the best technology options to win the long-term battle of reducing data and app risks without restricting employee productivity.

David Lingenfelter is information security officer at Fiberlink

Mobile Helix secure data with HTML5 offering

Mobile Helix secure data with HTML5 offering

This week saw a new company launch in the mobile space to help with the continual problem of ‘bring your own device'.

This week I spoke with New York-based Mobile Helix and its co-founder, chief operating officer and president Matt Bancroft, who explained that the issue as he sees it is about data security, as well as about offering an HTML5 platform.

The company offers the Link HTML5 platform, which Bancroft said allows any existing or new browser-based enterprise application to be deployed across multiple device types; this is where he saw a problem to be solved.

He said: “We really believe that the idea of managing the device is the wrong way to go about security; we believe that all that matters is the data. It really doesn't matter what device you are using or what the user can do, my experience is that this is unpopular as it discourages the user from using the solution and it is easy to hack a device and access secure information.”

Bancroft explained that the platform will allow applications written in different code to run without being rewritten, and that HTML5 can deliver a better user experience to give more access to the device to support offline access with greater security.

Link deploys applications through internet browsers to enable a simplified ‘download free' application delivery and support model, removing the need for a separate enterprise app store, app catalogue, and dedicated distribution and update processes for every different application and mobile operating system, according to Bancroft.

“The data is secure irrespective of what happens to the device as it leverages HTML5 and it needs to work right away regardless of where the user is,” he said.

“HTML5 adds an important capability for mobile as it allows you to run JavaScript where you had to run a plug-in previously, so there is a much better user experience.”

He added that accessing the corporate data through the device allows a corporate environment to be launched where applications are available. “This is not using desktop virtualisation, it is running inside a container inside the device and the container connects to the gateway and then on to the enterprise network and makes a connection between the device and gateway using HTTPS.”

Bancroft said that encryption keys are not stored on the device; the platform simply provides control and management between the data and the business, and its offering does not look to manage the device at all, only the container.

Seth Hallem, co-founder and CEO of Mobile Helix, said: “Link is designed with unrestricted enterprise productivity at its core. It is the first product on the market to combine the unmistakable benefits of device-independent applications built using HTML5, with a disruptive data security platform that ensures sensitive corporate data is safe on any device.”

I concluded by asking Bancroft why he felt that this was the right time to launch, after two years in development. He said that this is an exciting time as IT security has shifted with the introduction of mobile devices and that users have become more sophisticated over the past few years on how they use data. 

“IT cannot dictate as the power has shifted and the freedom of the user to choose the tools and software means that they need to work in the world of the user than with secure devices,” he said.

“The next 20 years of investment in the web will be all about existing enterprise the way they want to work. It will be about keeping users happy, but you don't have to reinvent security and application development systems, as the web was created in a way that enterprises embraced.” 

Encryption is everywhere: A practical look at steps to consider

Encryption is everywhere: A practical look at steps to consider

Given the dual forces at work of easy-to-obtain encryption software and ever-increasing amounts of data associated with investigations, it is very likely that IT departments and third party investigators will encounter encrypted data in some manner.

Consequently during the course of an investigation or through the eDisclosure (or eDiscovery) process in either litigation or regulatory matters, it is important to be familiar with encryption techniques and how to get around such issues.

Encryption can range from enterprise-level software suites that deploy full-drive encryption on all computers attached to their network, down to specific, file-level encryption meant to prevent any access to sensitive information contained within the file.

This concept of ‘whole device' encryption is certainly not exclusive to laptop or desktop computers, as modern smartphones and tablets usually offer the user the ability to fully encrypt the data present on the device, as well as on any storage or memory cards (i.e. MicroSD memory cards) attached to it.

While there are some commercial digital forensic tools that support decryption of certain encryption software suites (when provided with the appropriate credentials), the traditional digital forensic standard of creating an image of an entire hard drive from a desktop or laptop computer needs to be re-evaluated should full-drive encryption be in place.

Instead, forensic experts may need to work with their clients to determine exactly what is needed from the drive, and then conduct a more focused forensic data collection from a ‘live' system, which means that the data can be accessed in an unencrypted state. 

IT departments or third party investigators should also be on the lookout for other, more esoteric forms of encryption.

In one of our recent engagements, we encountered an organisation that had developed its own proprietary encryption technology; its design was such that it operated in a manner completely unnoticeable to the organisation's employees, and most users were not even aware that their data was encrypted.

Data was decrypted in real-time as it was accessed on the computers, but only if the computer was connected to the corporate network. Once disconnected from the corporate network, access to the data contained on the computer would not be possible.

The identification and understanding of these encryption technologies as early as possible in the investigative process is vital to save both time and money as the investigation unfolds. From a legal standpoint, it is also essential to check whether you have the legal authority to attempt to open encrypted data. Usually the answer is yes, but within the European Union and the Asia-Pacific region, with their many jurisdictions and governments that entitle custodians to varying levels of data privacy, it is always worth checking with local counsel and the client's human resources department.

Where a client or legal statute requires that data must stay within a jurisdiction, care must be taken to ensure distribution of a document is not causing a breach of data privacy requirements.

Unfortunately, because of the myriad encryption technologies available in the marketplace today, there is no simple answer to what to do when you've found encrypted data. However, there are some common methods of dealing with encrypted data that can be employed in most investigations:

Brute force attacks: Brute force attacks are one of the most common techniques used in an attempt to access encrypted data. As its name implies, the attack relies on throwing millions, if not billions, of possible passwords at the encrypted data with the hope that one of the passwords will work.

Typically, a language-specific or industry-specific dictionary is used (many of these can be found on the internet), and each word in the dictionary is tried successively. If all words in the original dictionary are unsuccessful in accessing the encrypted data, then often other permutations are tried such as a ‘reverse' dictionary, where the words are spelled backwards; or a full dictionary, where all words from a language may be tried. The number of dictionary permutations and permutations of permutations can allow for multiple trillions of possible passwords to be tried in a brute force attack.
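
By way of illustration, the permutation step might look something like the sketch below, where the wordlist path and the try_password() callback are hypothetical placeholders for whichever decryption tool is actually in use:

    # A minimal sketch of dictionary-permutation generation for a password audit.
    # The wordlist path and try_password() are hypothetical placeholders.
    def variants(word):
        yield word                 # plain dictionary word
        yield word[::-1]           # 'reverse' dictionary entry
        yield word.capitalize()    # common capitalisation tweak
        yield word + "123"         # common numeric-suffix tweak

    def dictionary_attack(wordlist_path, try_password):
        with open(wordlist_path) as f:
            for line in f:
                for candidate in variants(line.strip()):
                    if try_password(candidate):   # True when decryption succeeds
                        return candidate
        return None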

Golden dictionaries: Something of an offshoot from brute force attacks – a ‘golden' dictionary is a record of all previously-recovered passwords identified in past investigations. Often this is tried first in decryption efforts because studies have shown that certain passwords have been used by large numbers of users.

Most decryption software suites will keep a record of these previously-recovered passwords and create a custom golden dictionary based on past investigations. This type of attack can yield good results when dealing with encrypted data from multiple devices belonging to the same person, as most people do not create a different password for each device they use.

Observational dictionaries: People will often use passwords that are personal in nature, such as a spouse or a child's name, anniversary date, etc.  The rampant rise of social media has now placed people's once personal details in an easily-accessible, searchable format in Twitter feeds, Facebook pages and LinkedIn profiles.

A cursory examination of an individual's social media profile may allow an investigator to create his own custom dictionary of names, places and events that hold some importance to a person, and as such, may be used as passwords for encrypted data.

The ultimate success of your outcome lies in the practical and technical prowess of your technical team, who should be able to utilise appropriate software and procedures designed to properly retrieve and decode the electronic data.

After that has been accomplished, the formerly-encrypted data can then be processed and analysed along with the remainder of the data for the matter.

Jonathan Fowler is vice president of digital forensics at Consilio

DDoS evolution and the importance of preparation

DDoS evolution and the importance of preparation

When distributed denial-of-service (DDoS) attacks first started appearing in the late 1990s, the response from businesses was broadly similar to that of most new cyber threats: A shrug of the shoulders and an ‘it won't happen to me' attitude.

Then, as they became more prevalent, companies began to take notice. Yet until relatively recently, products that could successfully defend against a DDoS attack weren't available to many businesses. Businesses that did get hit had no option but to grin and bear it.

Vendors now offer a wide range of mitigation solutions that provide protection to companies that find themselves under siege. While their effectiveness can't be guaranteed, they allow firms to be proactive and put together defence strategies, instead of simply waiting to be targeted.

The frequency of DDoS attacks is growing at a frightening rate, with one report claiming a 200 per cent annual increase.

A week rarely goes by without the media running a story about a high-profile victim of a successful DDoS attack. With our always-online culture coupled with businesses migrating more of their services onto the internet, the threat has become more acute.

This increase in attacks and greater public awareness has moved DDoS onto all businesses' risk dashboards, from start-ups to multi-national corporations. But simply putting mitigation measures in place and hoping for the best isn't enough.

It's been suggested that defending against a DDoS attack can cost as much as £2.5 million. Although this may be an overestimation, businesses do need to be certain that their mitigation investment will pay dividends.

In other areas of cyber security, the cost effectiveness of this type of investment can be assessed. For instance, a penetration test can measure how effective a network's defences are and pinpoint vulnerabilities. But with a DDoS attack, how do you know that your investment is worthwhile, until it's too late?

There's also practical preparation to think about too. Do IT employees and service providers know what a DDoS attack will look like? Do they know the signs to look out for, and do they know their role during an attack scenario?

In the workplace, we all know what to do if there was ever a fire because of fire drills; we run over the steps we'd need to take so that, should the real thing happen, we are prepared.

That is exactly the mind-set that businesses should have when it comes to DDoS attacks, and why we've created a DDoS fire drill service. Building on our DDoS assured simulation service - which emulates a real attack through our own botnet in a secure, controlled manner - we can test businesses with a controlled, low level DDoS attack and allow them to test their response processes.

While we control the attack, companies can examine staff and supplier reaction and ensure realistic procedures are in place to manage not only the attack itself, but also discourse with the supply chain without having to wait until a real attack occurs.

For instance, working out whose responsibility it is to phone the necessary third parties might seem like an inconsequential issue, but if employees don't know their roles or have never had a chance to practise them, it shouldn't be taken for granted.

What about the mitigation solutions that aren't fully automated? Whose role is it to man them, and do they know how? With the DDoS fire drill, everyone can learn exactly what part they're expected to play. When the fire alarm goes off, employees know exactly where to go – it should be the same once the tell-tale DDoS signs appear.

Being prepared and ready is paramount when it comes to any emergency, and cyber security is no different. Too many businesses are like rabbits in the headlights once a DDoS attack starts. But prepare and practise accordingly, and it is possible to minimise the damage.

Paul Vlissidis is technical director at NCC Group

Mitigating and protecting against APTs

Advanced persistent threats (APTs) are serious perils.

They are campaigns; long-term attacks on your network to extract data, to sabotage, or to purposefully intrude. If at first the APTs are unsuccessful, attackers will try other combinations of exploits or target other doors and windows.

Threats themselves can morph to evade detection, as well. However, in order to protect your organisation from APTs you need to understand the doors into your network.

APTs can enter your network in many ways, but a good place to begin your search for vulnerable openings is with the computer you sit in front of every day.

Clients - The client is the part of your network that connects to more parts of the internet than any other. Each APT can use different combinations of modules, exponentially increasing the damage potential, and each module gives the APT different capabilities.

The APT also collects information such as trust relationships and passwords, which can be used to connect to other targeted clients and servers. Clients need to make sure that any endpoint software is current (and installed). Signature/pattern files need to be current. Patches on other devices need to be current. Browsers need to be current.

Servers - Servers can also be directly targeted, but it is less frequent to use an APT for this purpose. Servers are usually targeted via application-based vulnerabilities, such as an unpatched Apache server.  

Portable media - USB drives are often today's favourite intrusion point. The first thing you can do to protect your network from a threat is to either never install anything from portable media unless it is from a trusted source, or scan the portable media first.

Social networks and social engineering - Many people use the same username and password combinations on multiple systems and for both personal and business use. This information gives an attacker access to multiple systems, so the information can be more valuable on the black market than a credit card number.

Network administrators should require individuals to change passwords at a defined interval. They should also use tools that check how strong new passwords are, requiring a minimum length and a combination of uppercase and lowercase letters, numbers and other characters.
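
A minimal sketch of such a check is shown below; the specific length and character-class rules are assumptions rather than a recommended standard:

    # Check a candidate password against a simple policy: minimum length plus
    # lowercase, uppercase, digit and symbol classes. Thresholds are illustrative.
    import re

    def meets_policy(password, min_length=12):
        checks = [
            len(password) >= min_length,
            re.search(r"[a-z]", password),         # lowercase letter
            re.search(r"[A-Z]", password),         # uppercase letter
            re.search(r"[0-9]", password),         # digit
            re.search(r"[^A-Za-z0-9]", password),  # symbol or punctuation
        ]
        return all(checks)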

Wireless networks - Many people don't realise that by simply sitting in the company parking lot, an attacker may be able to access a wireless connection. Moreover, a simple antenna can allow attackers to receive a signal from quite a distance.

Add on top of this the many known issues with common wireless security measures, and you have an easy remote way for attackers to gain their initial infection point. The first thing you can do to protect your wireless network is limit access to it. A wireless network should be treated with the same rigour as a wired one. Passwords should be required to access the wireless network. The network should also be hidden from individuals lurking in the parking lot.

Now that we understand all the entry points to the network, it is worthwhile looking at what security technology has proven to be ineffective when faced with APTs.

Traditional security solutions such as firewalls, anti-virus, intrusion prevention systems and web filters are extremely useful and valuable for their intended purposes, but are all blind to these new types of advanced and sophisticated threats. It takes a different approach to identify targeted attacks inside an organisation.  

Next-generation firewalls - The problem with NGFWs is that the list of signatures is relatively static, and even frequent updates can't keep pace with the dynamic nature of advanced malware. Over 300 million new variants of malware were found last year alone.

The same inability to keep up with changing malware threats also applies to both anti-virus and intrusion prevention systems that rely heavily on signature-based lists as their primary detection methods.

Reputation-based solutions - Content that has not yet been analysed simply ends up uncategorised, and this is the weakness in the reputation method. It isn't possible to manually analyse content fast enough to provide protection.

In 2011, over 150,000 new URLs were created daily. Most of these URLs were tagged as uncategorised content. Not surprisingly, most of the malware exists in uncategorised content.

Heuristic security methods - The weakness in the heuristic type of solution is that you must first define what is considered normal behaviour, and this definition can vary substantially from organisation to organisation. The definition can even vary within different parts of the same network. To avoid detection, all an attacker needs to do is simulate normal behaviour and not engage in any activities that draw attention.

Challenges at the network perimeter - The only way to ensure that targeted malware can't enter across the network perimeter is to ensure that every potential connection point to the network is policed 100 per cent of the time.

However, that complete and constant level of coverage is impossible to achieve. Laptops, smartphones and tablets routinely connect to external networks and return to the parent network. USB flash drives attach to everything, contractors and partners are only as secure as their own networks, and cloud services represent yet another difficult-to-secure point of access. These tools are a fundamental part of modern business, yet their communications are open to eavesdropping and exploit.

Large enterprise organisations also have overlapping technologies and competing areas of responsibility. For most organisations, the reduction in the risk of compromise offered by a heavyweight security solution is not worth the interruptions to services and the poor systems performance it can bring.

While the seriousness of the threat is understood, organisations need a solution that actually solves the problem and is transparent to users.

APTs are an imposing threat, which can have a severe impact on an organisation's network. It is therefore imperative that IT managers understand the most vulnerable areas of the network and the best way to protect against them.

Brian Laing is a vice president at AhnLab

Mind the TMG gap

Mind the TMG gap

In September last year, Microsoft announced that it was discontinuing its Forefront Threat Management Gateway (TMG) as part of a number of major changes to its Forefront product line.

This was in an "effort to better align security and protection solutions with the workloads and applications they protect". While Microsoft has pledged to provide current Forefront TMG customers with mainstream support up until the end of 2015 and extended support until 2020, the move – that surprised many customers – does present some challenges and raises the question about what will replace it.

Microsoft's Forefront TMG, formerly known as Microsoft Internet Security and Acceleration Server (ISA Server), has been a key component of the solution for organisations deploying Microsoft Exchange, Lync or SharePoint.

One of the key features of TMG is that it offers customers a way to publish and protect workload servers such as Exchange Client Access Servers; especially in internet facing deployments where a clean and secure separation between the backend critical infrastructure and the public internet is essential.

TMG has proved particularly popular for use with Exchange infrastructures because of its relatively easy-to-deploy, reverse-proxy functionality. This is essential when you have a demilitarised zone (DMZ) to ‘sanitise' incoming connections from the internet before passing traffic onto servers hidden by an internal network.

Microsoft's decision to end TMG is part of a bigger picture. The company plans to integrate more security controls into the cloud with its Microsoft Office 365 solution and also replace TMG with its Unified Access Gateway (UAG).

However, it's not quite that simple. For a start, UAG can be up to twice as expensive and depending on what part of the world you are based, the cost of transition could be painful.

Secondly, for applications such as Exchange, there are some functionality gaps that UAG currently does not cover, such as two-factor authentication for ActiveSync devices or certificate-based authentication for OWA. Also, it is not just Exchange; while UAG has more features than TMG it also does not, as yet, fully support some Lync functionality and is overkill if used for only this purpose.

So for companies that do not want to migrate to Office 365 or adopt UAG, what are the options?

Many companies already deploy hardware load balancing appliances from companies such as Kemp in conjunction with TMG in order to publish Microsoft workload servers for internet facing applications. As well as separating the critical infrastructure from the external internet, load balancers stop traffic ‘at the gate' and make sure that users are automatically connected to the best performing server.

If one becomes inaccessible, the load balancer will automatically re-route traffic to other functioning servers so that users always experience optimum performance. The load balancer may also offload processor intensive SSL encryption to speed up the throughput. 
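
To picture the health-check and re-route behaviour in the abstract (an illustrative sketch only, not Kemp's implementation; the backend addresses and timeout are invented):

    # Pick the first backend workload server that passes a basic TCP health check;
    # traffic is steered away from any server that has become unreachable.
    import socket

    SERVERS = [("10.0.0.11", 443), ("10.0.0.12", 443)]  # placeholder backends

    def is_healthy(host, port, timeout=2.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def pick_backend():
        for host, port in SERVERS:
            if is_healthy(host, port):
                return (host, port)
        return None  # no healthy backend available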

So, now that ‘end of life' time has arrived for TMG, other companies, such as Kemp, will be looking to build on existing core technologies such as the reverse proxy function to fill the gap left by TMG.

For organisations and businesses facing life without TMG, the addition of security features into their load balancers will continue to deliver protection along with scalability and high reliability.

Leigh Bradford is UK sales manager at Kemp Technologies

IT security: M&A transactions are a different matter

IT security: M&A transactions are a different matter

A rather astonishing example of carelessness over data privacy was displayed recently at Bloomberg, escalating the discussion around data security.

Bloomberg customers were shocked to find that employees were able to see what terminal users were viewing, and the actions that they were taking.

With the combination of a confidential service, such as Bloomberg's financial information service, and a publishing service, one would assume strict ‘Chinese walls' were in place. However, a lack of Chinese walls has been a sensitive issue in the past.

A few years ago, a similar issue around investment and research also caused contention. The issue is not one that lies solely with the corporation, but instead is representative of a broader issue around the cultural differences that the US and the EU have with respect to data protection and privacy.

Recent events such as this highlight a significant difference in national data protection laws and in the perception of what data ‘protection' should be. This is becoming a prominent concern amongst businesses, particularly those engaged in a financial transaction or merger.

In Europe, regulations are harmonised by the European Union, and information can be securely stored on servers within the EU without risk of access by third parties. Yet when using US-based servers to store information, the legalities become more complex.

The EU criticism is that the US government has, through the Patriot Act, low-barrier access rights to digital information stored by US companies. Such rules are unmatched in the EU. Storing data with a US company exposes every European firm to the risk of sharing this information with US authorities.

This criticism was raised by German officials and in the Fraunhofer Report, which stated that such arrangements were not adequate for the protection of European companies' data. As corporations – especially in relation to M&A transactions – continue to become increasingly concerned about the location and security of their sensitive data, we are seeing a significant upsurge in enquiries from firms about how best to protect their information.

Our advice is to take the following into consideration when selecting a secure server:

  • Does your provider utilise the cloud to store your data? If so, then what guarantees of security are provided?
  • Is your provider a US-based company or the subsidiary of one? If yes, then be aware that any data stored is potentially accessible to US governmental agencies through the Patriot Act.
  • Does your provider rely on security-challenged third party applications such as Flash and Java? Be aware that these are currently the subject of a number of security concerns.
  • Are you fully cognisant of EU data protection laws and how they protect you? Any breach is potentially a breach of EU regulations punishable by heavy fines. (New EU rules could soon empower authorities to impose fines of up to two per cent of global turnover).
  • In the context of financial transactions, the risk of a privacy breach can be severe. Damages or a failure of the transaction can easily cost the participating parties tens of millions of Euros.

    The IT security considerations on these transactions are structured around potential attacks by strangers and, more significantly, by people who are entitled to receive the information. Considering the following should reduce the risks of a security breach:

  • To avoid risks caused by third party products (e.g. browser plug-ins, Java, PDF-Viewers), a standalone data room has obvious advantages.
  • Remote communication via the internet must be secured by an https connection.
  • Files on storage must be encrypted with algorithms such as the Advanced Encryption Standard (AES); a minimal sketch of file encryption follows this list.
  • Activate full logging of activities, including a complete audit trail of all changes and viewings.
  • A dynamic watermark on all viewed pages and print outs with username and timestamp makes it possible to track activity to specific individuals.
  • Enable a view only solution to avoid users being able to print/copy/save files.
  • Server infrastructure should be hosted in certified data centres (ISO 27001).
  • Access to any documents must be granted by a granular setting of permissions.
  • Depending on the requirements of the sellers, the level of security should be increased, e.g. by IP filter, two-factor authentication or customised password policies.
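
    By way of illustration, the AES point above might look like the following minimal sketch, which assumes the open-source Python 'cryptography' library; the key handling is deliberately simplified and the file names are placeholders.

        # Encrypt a document at rest with AES-256-GCM via the 'cryptography' library.
        # In practice the key would live in a key-management system, not beside the data.
        import os
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        key = AESGCM.generate_key(bit_length=256)
        aesgcm = AESGCM(key)

        with open("bid-document.pdf", "rb") as f:           # placeholder file name
            plaintext = f.read()

        nonce = os.urandom(12)                               # unique nonce per encryption
        ciphertext = aesgcm.encrypt(nonce, plaintext, None)

        with open("bid-document.pdf.enc", "wb") as f:
            f.write(nonce + ciphertext)                      # keep the nonce with the ciphertext
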
    Data security and privacy will continue to remain a contentious issue. Currently there is a strong American lobby in Brussels to prevent EU data protection laws from increasing in strength, which would have a significant impact on corporations with limited data protection, such as Google and Facebook.

    Data privacy stories such as Bloomberg's are unlikely to be the last, so while the debate continues it is down to individual businesses to educate themselves on data protection and to ensure that they are fully aware of where their data is stored and the protection laws it falls under.

    Jan Hoffmeister is co-founder of Drooms

    The shifting sands of data regulations - threat or opportunity?

    The shifting sands of data regulations - threat or opportunity?

    We've all seen enough news stories to know what can happen when a business doesn't get compliance right or falls foul of data protection legislation.

    No organisation wants the negative exposure that results – exposure that reduces public trust, puts brand and reputation at risk, incurs financial penalties and invites customer churn. However, it's not just the fear of negative exposure and financial loss that is putting organisations under pressure – it is the changing nature of the laws and regulations surrounding data protection.

    Critical changes are in the works to certification requirements for the Payment Card Industry Data Security Standard (PCI DSS), to legal compliance with the European Data Protection Regulation and to enforcement of data protection requirements from the UK Information Commissioner's Office (ICO).

    While the story of compliance is inextricably woven into the story of data security, many businesses have, fortunately, come to recognise that data security does not equal compliance. That said, there are three important updates planned to industry compliance standards and legislation that will have a direct impact on organisations' security directions and buying decisions this year.

    Firstly, the update to PCI DSS is expected in October 2013. Ahead of the announcement of the update, the PCI Security Standards Council (PCI SSC) has released a new guidelines supplement to advise organisations on how best to meet the updated compliance mandate.

    Updates to the standard – designed to enhance security – are extensive, and in all areas. Again, the reality of the porous nature of today's networks and systems to advanced attacks means that organisations would be well advised to concentrate on enhancing the protections that surround their critical data.

    Second of the pertinent regulations, and likely to affect any company doing business in Europe in the future, is the proposal for the European Data Protection Regulation and the ongoing discussion regarding its contents. While there have been loud calls for the legislation to reflect the business world of the 21st century, the detail of the fine print is beginning to cause some discomfort.

    While the exact details of the pending regulation are not yet final, the legislation is widely expected to harmonise European legislation so that the same rules apply to all businesses providing services to EU residents and to include data breach exclusions if data has been rendered unintelligible (in other words, encrypted).

    Finally, the UK Information Commissioner's Office (ICO) has also been vocal on the thorny issue of cloud security and the related data protection responsibility. Despite issuing guidelines on the subject in September 2012, it appears the message is still falling on deaf ears – even if your business data resides on a shared infrastructure or has been outsourced, the cloud does not absolve you of your data protection responsibilities.

    Here compliance starts with the basics; any organisation deploying resources to the cloud needs to scrutinise the security assurances given by their cloud service providers and consider whether these are sufficient for their data security needs. Traditionally, the expectation has been that the cloud provider would keep data safe but, as these salient guidelines from the ICO lay bare, the onus for effective data protection rests unequivocally with the ‘data controller' (i.e. the organisation that gathers the information in the first place for operational use).

    Pertinently, points 63 and 64 of the guidelines specifically recommend encryption and key management as a means of mitigating the twofold concerns of data security and data governance in multi-tenant environments. Moreover, these points are reflective of principle seven of the present day Data Protection Act, which asserts that: 'Appropriate technical and organisational measures shall be taken against unauthorised or unlawful processing of personal data and against accidental loss or destruction of, or damage to, personal data'.

    With the anticipated changes to the EU data protection directive also promising to become yet more prescriptive, savvy European businesses – or those doing business that falls within European mandates – would do well to put in place security solutions that enable them to effectively maintain control and manage data in shared environments.

    That's the best way to achieve both compliance and operational peace of mind in the face of the rapidly changing laws and regulations.

    Paul Ayers is EMEA vice president at Vormetric

    Give up confidentiality, integrity and availability

    Give up confidentiality, integrity and availability

    Confidentiality, integrity and availability (CIA) are invariably mentioned as cornerstones of information security.

    Realising the limitations of these three concepts, many professionals and standards add to these with some of the following: non-repudiation, authentication, audit, authorisation, privacy, possession, utility, risk, accountability and identity. Donn B. Parker suggested the Parkerian Hexad as an improvement over CIA, to little avail.

    CIA suffers from the following ailments. Firstly, there is no wide agreement on what the terms mean; different people have a different understanding of them, and different standards define them in slightly different ways.

    Secondly, they are incomplete – how else do you explain the companion constellation of concepts? Worst of all, they are a barrier to communication with the business and add no value to actual information security work.

    Picture a simple situation when a new database needs to be protected. You try to find out the integrity of the database. You interview the development team or the business owner of the database and ask ‘what is the integrity of the database'?

    You will find yourself explaining what you mean, providing examples to extract an answer. You get the answer (medium, type 2) and you feel you did your job.

    When it comes to actually protecting the database however, do you know if it is acceptable to lose days, hours or minutes of work? Do you know if the information in the database is legally required to be readable ten years down the line? Do you know if the transactions are extremely time sensitive, like in a fast trading system?

    This is the part of the information you actually need in business terms to design and protect the database, making it more likely that the business needs will be met. If you don't know, then after finding the integrity, someone goes back and asks questions. Or worse still, asks nothing because they already know the integrity.

    So why do professionals still use CIA? Tradition, for one; the fact that you study these concepts and go through exams in order to achieve a certification is another.

    There is also resistance to change, similar to the resistance in the medical profession when the germ theory arose. More importantly, changing and giving up CIA would imply acknowledgement that we had wasted some of our own time, and the time and money of many people, for a very long time.

    I am not saying that confidentiality, integrity and availability are incorrect. They are not. What I am saying is that confidentiality, integrity and availability are not useful.

    You may ask what the alternative is. Simple - realise once and for all that information security is about guaranteeing that the security objectives of the business are met.

    Forget confidentiality. Who should use the system? Who should not? Is there private information? Are there any secrets, intellectual property?

    Forget integrity. Is there a need for attributing every change to a specific user?

    Forget availability. What are the business hours for this system? What is the longest acceptable interruption? Wake up and move on. It is high time.

    Vicente Aceituno is the leader of the ISMS standard O-ISM3, a member of The Open Group's Security Forum and president of the ISSA Spanish Chapter

    Defining an advanced persistent threat

    Defining an advanced persistent threat

    Advanced persistent threats (APT) are without a doubt one of the biggest IT security buzzwords.

    APTs don't create huge disruptions; they quietly do their evil over time. It seems hardly a day goes by without a story in the press about a company discovering that they have been hit by an APT.

    However, understanding APTs and how to protect against them can be a daunting task for any IT manager. In a series of blog posts we will explain exactly what APTs are, how they affect systems, what types of protection are effective and ineffective and the best approach to defend against them.

    So, first things first – to help better understand APTs, let's dig into the meaning of each part of the acronym:

    • Advanced: The attacker has significant technical capabilities to exploit vulnerabilities in the target. These capabilities may include access to large vulnerability and exploit databases, coding skills, and the ability to uncover and take advantage of previously unknown vulnerabilities. The bad guys may purchase zero-day attacks to help them. They may even rent access to a bot network.
    • Persistent: APTs often occur over an extended period. Unlike short-term attacks that take advantage of temporary opportunities, APTs may take place over the course of years. Multiple attack vectors can be used, from web-based attacks to social engineering. Minor security breaches may be combined over time to gain access to more significant data.
    • Threat: In order for there to be a threat, there must be an attacker with both the motivation and ability to perform a successful attack.

    Looking at the stages of an APT

    APTs typically progress through a series of stages as they develop and spread. It's useful to understand these stages in order to see how the threats come about. For example, an APT might follow these stages:

    • Reconnaissance: Attackers research and identify their targets.
    • Intrusion: Spear phishing emails target specific users within the target company with spoofed messages that include malicious links or malicious PDF or Microsoft Office document attachments.
    • Establishing a backdoor: Attackers try to get domain administrative credentials and extract them from the network.
    • Obtaining user credentials: Attackers gain access using stolen, valid user credentials.
    • Installing utilities: Programs installed on the target network install backdoors, grab passwords and steal email, among other tasks.
    • Privilege escalation, lateral movement and data exfiltration: Attackers grab emails, attachments and files from servers.
    • Maintaining persistence: If the attackers find they are being detected or remediated, they use other methods, including revamping their malware, to ensure they don't lose their presence in the victim's network. Attackers don't break a window, steal some things and leave. They harvest initial data and wait patiently for more information to become available. An APT tends to stay for an extended period, potentially years, and attempts to remain undetected.

    Targeted attacks represent a very special type of threat: one that is silent, very difficult to trace and potentially devastating in the damage it can do, which ranges from stealing an organisation's intellectual property to stealing passwords from systems so that attackers have unlimited network access.

    It's essential that enterprise organisations protect themselves against these threats, and do so cost effectively, without placing an inappropriate burden on end-users or interrupting daily operations.

    Brian Laing is a vice president at AhnLab

    Butch Cassidy and the hacking kids

    Butch Cassidy and the hacking kids

    The recent media interest surrounding the heist of several million pounds' worth of cash from cashpoints across the globe highlights the fact that, with the connectivity introduced by the internet age, the definitions of national boundaries have changed beyond recognition.

    Information security has often been considered an afterthought in many organisations. The primary concerns of cost-efficient systems that suit the functional requirements of the end-user are all too often prioritised, while technical hardening and resilience to potential threat vectors are passed down the line and treated as the final piece of the jigsaw, paying little more than lip service to the notion of security because of tight budgets.

    A number of recent high profile breaches highlight various facets of this issue, although in reality the majority of these weaknesses stem from an underlying human predisposition towards minimal effort. The headlines provide striking news stories; however, the underlying weaknesses are default and weak user credentials.

    The string of cash withdrawals across the globe has taken a fundamentally different approach and casts suspicion against the security model of offshoring potentially sensitive data, while raising the political spectre of responsibility in the global economy.

    While credit card security has improved significantly in recent years, the security of debit cards has lagged behind. Media interest focuses on credit, and debt, rather than on accounts that are tied to physical account balances.

    While investigations are ongoing, the true detail of the break-ins will remain unclear and subject to speculation. However, it is clear that sensitive account information for a number of accounts was held offshore. While this detail may not have included the entire card details, a number of critical components were exposed, including account balance information for pre-paid debit cards.

    In reality, these weaknesses are liable to be similar in nature to their more publicised neighbours, where human weakness allows cracks to appear in the outer security layers. The introduction of multi-national boundaries brings issues such as language and process priorities that can give skilled individuals the opportunity to social engineer themselves into privileged positions.

    These cracks can then be utilised in order to gain access to underlying infrastructure. With even the tightest of security hardening, access by ‘legitimate' users into an environment will be allowed.

    However, this heist has hit the headlines because the unnamed perpetrators took the process one step further: by enrolling a number of operatives across the globe, they were able to change the ‘back room hacker' stereotypical attack into a physical theft of millions of dollars, and bypassed many of the underlying bank security measures that were put in place.

    The use of technological warfare, combined with the age-old art of card cloning, provided the means for a significant number of withdrawals in a targeted and coordinated fashion. Significant and inherent weaknesses in the ATM processes and account security measures were unearthed, which will have caused the many card companies sleepless nights as they rush to react to the media spotlight.

    It must be noted that the majority of countries where the fraud was targeted continued to utilise the card magnetic stripe as the primary means of card security, while nations such as the UK, where chip and pin technology is used extensively, have been targeted less.

    Technological advances such as this will have removed some of the lowest-hanging fruit for such an operation. However, the underlying weaknesses still exist, and should not be overlooked.

    For many, such a news article provides a wake-up call. Where once the world of the ‘hacker' was considered a minor concern for many in the private sector - either the domain of the bedroom teenager or the James Bond-style spy - the real implications will now have been realised. This is physical money, and it is a global issue.

    Many current initiatives, such as the PCI security standard and Barclays Risk Reduction Programme, aim to raise awareness of the hardening of systems involved in the processing of card data and provide reassurances against the underlying processing methods.

    Such programmes provide markers that can be used as part of a broader education and awareness effort, which companies can adopt in order to embed a solid understanding of security, both from the board level down and from the ground level up.

    Sam Raynor is a consultant at Information Risk Management

    CISSP - more than words?

    CISSP - more than words?

    I recently came across a very interesting blog by Wendy Nather on her not renewing her CISSP certification.

    Nather, who is a well-respected analyst at the 451 Group, was IT security director at several firms in previous years and probably needed to keep her CISSP accreditation. However, in her blog, Nather said that she had decided to let her CISSP certification lapse because, since getting the accreditation, "having that certification has done nothing for me, except to make me have to look up my number every so often when registering for a conference".

    She said: “I never actually planned to get it to begin with; I only signed up for the exam because there was a job I thought I might apply for, and the CISSP was required.

    “By the time I decided to go in a different career direction, it was too late for me to get my exam fees back (and for that amount of money, I could have bought a laptop or some designer shoes). So I crammed for about a day and a half, went to the exam, came out two hours later, and was done. Relatively painless, except for the extortion I had to do of certain former colleagues to get the recommendation forms filled out.”

    Back at the RSA Conference in February, I attended a panel session on the value of certifications, where comments were made on the need for these accreditations and whether people are hired for competency or because of certifications such as the CISSP.

    In that session, Andrew Ellis, chief security officer at Akamai Technologies, said: “We look at certificates, if they have them they say 'with this, this person is qualified to practise with quality', but then if a practitioner has a certificate such as CPA, that is the most common reputational certificate.

    “The challenge is as those who have them grows, so it becomes the bottom bar and it carries the reputation of the lowest person who owns that certification.”

    Nather also said in her blog that she was not happy with paying every year to have letters after her name, and that CISSPs were so common, they would be of use for people starting out in security and it was a handy first sorting mechanism when you're looking to fill certain levels of positions. “But by the time you're directly recruiting people, you should know why you want them other than the fact that they're certified. And then the letters aren't important,” she said.

    The blog naturally stirred quite a reaction, with Nather posting an update saying she really respects and admires what members of the (ISC)2 board are trying to do, and while the CISSP is not completely useless, it's just not something she personally wants to put time and money into maintaining.

    Wim Remes, (ISC)2 board member, commented that he disagreed on CISSP being an entry level cert and admitted that the organisation needed to work on communication. He said: “In my opinion the cert, first and foremost, establishes a common vocabulary among professionals that allows us - even though from different backgrounds and with different focus areas - to talk the same language and understand each other.”

    Among the many responses to Nather's blog was one I spotted from Gal Schpantzer, a contributing analyst at Securosis. He said in a Securosis blog that after years working in IT, “I no longer want to bother proving how much I know”. He admitted that while the CISSP has a powerful sway over the infosec industry's hiring practices, the HR process is what it is, and many HR shops bounce you in the first round if you don't have those five magic letters, so the CISSP has on-going value to anyone going through open application processes.

    In my chosen career there isn't such an all-encompassing industry certification that you are required to have. Part of me thinks it is good that journalism is open to all, but at the same time, is that a double-edged sword? If there is a filter, surely that makes it easier to sort the qualified from the chancers?

    There are many who will disagree with Nather's decision and many who will feel she is correct. At that RSA session, I spoke with a major researcher from a vendor whom I spotted in the room, and he told me that while he saw the CISSP as important, he questioned what value it holds for senior professionals in IT, given that you only have to sit the exam once and are not re-examined continually.

    Establishing new norms for data privacy

    Establishing new norms for data privacy

    In the modern world, data is collected on who we are, who we know, where we are, where we have been and where we plan to go.

    This trend is increasing and there is no end in sight. Analysing this data gives enterprises the ability to understand and predict where humans focus their attention and activity at the individual, group and global levels.

    As some will say, personal data is the new 'oil', a valuable resource of the 21st century. It will emerge as a new asset class touching all aspects of society.

    High-profile data breaches and mis-steps involving personal data seem to be reported by the media each day. Tension has arisen between individuals (who feel powerless over this data-grab) and businesses (which rely on our data to market to us).

    A Hogan Lovells whitepaper said: “Every single country that we examined vests authority in the government to require a cloud service provider to disclose customer data in certain situations, and in most instances this authority enables the government to access data physically stored outside the country's borders, provided there is some jurisdictional hook.”

    Another factor in this debate is what some consider 'cyber war' between countries. When introducing the Cyber Intelligence and Sharing Protection Act (CISPA) in February, US House Intelligence Committee chairman Mike Rogers declared: “American businesses are under siege. We need to provide American companies the information they need to better protect their networks from these dangerous cyber threats.”

    Of course, the more individuals believe we are ‘at war' and buy into nationalistic rhetoric, the more willing we are to give up privacy, freedoms and control over how the internet is run.

    I recognise an increasing momentum to establish new norms to guide how personal data can be used to create value. For example, the Organisation for Economic Cooperation and Development (OECD) and its member governments have been discussing how to refresh their principles for our hyper-connected world.

    Other groups, such as the Centre for Information Policy Leadership, have been focusing on accountability: ‘Who has data about you? Where is the data about you located?' In addition, different business sector associations/consortia and regional authorities have been considering how these principles apply to their particular applications.

    The Global System for Mobile Communications Association (GSMA) has developed principles for mobile privacy, and the Digital Advertising Alliance has developed principles for the use of data in online behavioural advertising.

    The proposed European Commission Data Protection Regulation, which is currently under discussion by the European Council and Parliament, is the most comprehensive attempt to establish new norms for the flow of personal data.

    The Asia-Pacific Economic Cooperation forum is establishing a cross-border privacy-rules system to harmonise approaches throughout the region.

    In short, there are bodies exploring these issues.

    That is a good thing, considering that we are at an important juncture regarding this topic and the decisions we make today will have serious implications long into the future.

    Yves Le Roux is a member of the ISACA Guidance and Practices Committee and the ISACA Privacy Task Force and principal consultant at CA Technologies - France

    SSO and beyond - giving CIOs control in the cloud

    SSO and beyond - giving CIOs control in the cloud

    On 17th May 2013, Yahoo advised users in Japan to change their passwords, as a precautionary measure, following the potential theft of a file containing 22 million user names.

    A week earlier, Google announced that it plans to make two-factor authentication compulsory. Online authentication is becoming an increasing burden on users and administrators.

    In February, the Fast Identity Online (Fido) Alliance published a new set of authentication standards that aim to end the reliance on passwords. By creating open and interoperable standards based on the Online Security Transaction Protocol (OSTP), the Fido Alliance hopes to authenticate users to all online applications using the Trusted Platform Module chip on the device, or biometric information supplied from the computing device.

    Are passwords passé?

    It has been well documented that users struggle to remember passwords. As a result, people often create weak passwords and reuse the same one to access multiple online applications, putting all services at risk in the event of a breach. Passwords also lend themselves to licensing problems, by allowing login details to be shared between authorised and unauthorised users.

    Play it again SAML?

    It was these very same authentication issues that led to the development of the Security Assertion Markup Language (SAML) standard ten years ago by the Organisation for the Advancement of Structured Information Standards (Oasis).

    The principal goal of Oasis was essentially the same as that of the Fido Alliance: to create a new standard for confirming identity and authorising access to web services.

    A decade later, one of the most important uses of SAML is to enable single sign-on (SSO) to web applications. SSO, like OSTP, removes the reliance on passwords by creating an authentication system that uses XML-based assertions between the identity provider and service provider (web application).
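
    To make this concrete, below is a minimal, illustrative sketch in Python of the kind of XML-based assertion involved. The assertion shown is heavily simplified and uses placeholder names (idp.example.com, alice@example.com and so on); real SAML assertions are digitally signed and carry conditions, timestamps and attribute statements, which a service provider must verify before trusting the subject.

        import xml.etree.ElementTree as ET

        # A heavily simplified, illustrative SAML-style assertion. Real assertions
        # are digitally signed and include validity windows and attributes.
        ASSERTION = """
        <saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
          <saml:Issuer>https://idp.example.com</saml:Issuer>
          <saml:Subject>
            <saml:NameID>alice@example.com</saml:NameID>
          </saml:Subject>
          <saml:Conditions>
            <saml:AudienceRestriction>
              <saml:Audience>https://app.example.com</saml:Audience>
            </saml:AudienceRestriction>
          </saml:Conditions>
        </saml:Assertion>
        """

        NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

        def extract_identity(assertion_xml: str) -> dict:
            """Pull the issuer, subject and intended audience out of an assertion."""
            root = ET.fromstring(assertion_xml)
            return {
                "issuer": root.findtext("saml:Issuer", namespaces=NS),
                "subject": root.findtext("saml:Subject/saml:NameID", namespaces=NS),
                "audience": root.findtext(
                    "saml:Conditions/saml:AudienceRestriction/saml:Audience",
                    namespaces=NS,
                ),
            }

        print(extract_identity(ASSERTION))

    The point of the format is simply that the identity provider, rather than the application, vouches for who the user is, so the application never needs to handle a password at all.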

    SaaSID supports a range of SSO methods within its solutions, but it recognises that authenticating users to online applications is just the first step.

    As more corporate applications are delivered online, CIOs need more than just a record of who logged in and logged out. They need to be able to manage and record what happens between those two events.

    CIOs operating in regulated industries, such as financial services, healthcare and pharmaceuticals, need to go beyond authentication and SSO and provide an audit of employees' interactions with web applications; visibility that SSO cannot provide.

    The benefits of going beyond SSO

    Data protection regulations require CIOs to prove that they restricted access to personal data and that they prevented unauthorised processing, changes or breaches of that data. Without being able to control and audit interactions with web applications, CIOs cannot show how risks to data have been effectively managed and this affects their ability to comply with a range of information security-related standards, regulations and legislation.

    For example, if you can only see login and logout information, how do you prove to an auditor that you prevented customer lists from being exported from your organisation's CRM application?

    This lack of visibility in the cloud is preventing some organisations from achieving the scalability and productivity benefits of web applications.

    A cunning device?

    The Fido Alliance's approach requires software to be downloaded to devices to enable authentication to online services. However, this may not be acceptable to employees working on personally owned devices.

    One of the concerns voiced by CIOs is how they can ensure that new web applications are quickly rolled out to all devices and that access to corporate data is just as quickly revoked when employees leave the organisation. This is critical for combating data loss and remaining compliant.

    However, because each employee tends to use multiple computing devices, a device-centric approach such as the Fido authentication system may delay the roll-out of, and revocation of access to, web applications and corporate data.

    When using SSO, CIOs still need to tackle the compliance blind spot created when employees use web applications to process corporate data. They also need to drive out the complexity caused by employees using multiple computing devices in the corporate environment.

    Identity in the cloud

    Computing and mobile form factors change on an almost weekly basis, so the Fido Alliance's device-centric approach is in some ways surprising.

    We realised three years ago that when users access web applications, the only point of commonality is the browser, so that's where we put our SSO - in web application management and auditing software.

    By using browser-based security, CIOs can go beyond SSO to enable web application features to be controlled, while creating a detailed audit trail of user activity, regardless of the device used.

    This browser-based approach hands back control to CIOs, so that employees can benefit from using web applications, regardless of the device, without CIOs losing visibility of interactions with corporate data and without IT teams wasting time on multiple password resets.

    Richard Walters is chief technology officer of SaaSID

    Tale of a risk assessor delights Eurovision

    Tale of a risk assessor delights Eurovision

    Along with the usual power ballads, nonsense songs and dance spectaculars, Saturday's Eurovision Song contest also brought a bit of IT to the mix.

    Performed by Gianluca Bezzina as the Maltese entry, 'Tomorrow' told the story of Jeremy, working in IT, whose occupation was risk assessment.

    No chance of 'nul points' here: the song scored a total of 120 points, placing him eighth overall.

    Sharing outside the Box

    Sharing outside the Box

    The concept of cloud-based file sharing is one that plagues security managers, as it often puts data outside their control and raises the fear of falling out of compliance.


    Without mentioning any names, it seems that this trend has led users not only to bring their own devices into the workplace, but also to take data outside the perimeter and into an unmanaged cloud. There are several solutions to this, with one of the leading consumer players now offering a business solution, but among the other more business-ready solutions is Box.


    Fresh from partnering with CipherCloud to offer encryption of data inside an application, Box is now offering a similar service with business use and control at its heart. This week I met with Whitney Bouck, general manager of Box, who was announcing the company's accreditation to the ISO 27001 standard.


    Bouck said: “This certification demonstrates our commitment not only to the security and control of our customers' data, but also our commitment to our global customer base. We started down this path last year and our compliance efforts are gaining steam.


    “While this is an important certification for Box, it's just one more step along our long-term roadmap and commitment to providing the highest level of transparency and assurance to our customers about the quality and security of our platform, top to bottom.”


    This achievement aside, using the cloud still fills security types with fear. Speaking at SC Magazine's Data Protection conference in March, G4S technology director Glyn Hughes said that "internal due diligence and continual assessment needs to be done when it comes to the cloud, as a move to the cloud cannot result in a loss of control of data".


    Bouck said that when it comes to data protection fears of storing data in the cloud, this is a conversation that she is having frequently with CIOs and CISOs and, as technology has become more sophisticated, this is pushing and pulling users to and from the cloud. She said that while challenges such as cost, availability and agility are a concern, "there are lots more to the cloud".


    Bouck went on to say that where there is fear of using the cloud, there is also a change, as trust has been added as well as availability. She said: “Where we shine is we allow data to be put on any device so you can share it with anyone you want to so you can sync and share.


    “The other area is content and collaboration. Where we focus on business content and a lot of it is back and forth; often it is too large [and] goes into an FTP server, so we try to thread that together and put it into Box where you can track it rather than a disconnected model. You do stuff with executives and third parties, so storing and sharing content is at the core.”


    Talking about the type of users that Box has, Bouck mentioned enterprises, airlines, electrical firms and telcos. She said: “Look at banking, a heavily regulated sector. What are they in business for? To provide financial services to their users; their core business is not managing data centres, it is about managing wealth and money and that is why we are in this business, we offer services for data management. It all matters for the cloud: how safe is it, can service providers offer security?”


    Talking about the recent launch by Dropbox of 'Dropbox for Business', Bouck said that while the initial technology is similar to what it offers, this solution "added very few controls for adding and deleting users". Bouck said that Box's management adds the ability to allow use on a certain device, password security, limits on sharing content and permissions to limit the control of information so it is all logged and audited.


    “In Box it is all tracked so you can see what is against policy and alleviate problems,” she said. “The administrator control fits within a user's ecosystem. We integrate with 240 business applications and we have achieved compliance with HIPAA, Fedramp and now ISO 27001.”


    Bouck went on to say that consumerisation of IT has changed the way people share data, as it is so accessible in consumer models. She said it is becoming known as a 'Dropbox problem', so Box saw the opportunity to give users a tool to be secure, to scale and which offers visibility too.


    She said: “We focus on scale and security and it all makes IT happy, as nothing makes users and security people happy! Our secret sauce is how this affects users without inhibiting users. Look at the work from home model, how is that done securely? If you use Box you can do it securely, but if you bring in a device, security has to okay it first, or you have to use a VPN to get in.”

    Resilience ‒ the way to survive a cyber attack

    Resilience ‒ the way to survive a cyber attack

    The claim that any Western information technology dependent society could be brought down by a 15-minute cyber attack has recently provoked intense discussion.

    In reality, a well-prepared cyber attack does not need to last for 15 minutes to succeed. After preparations it takes only seconds to conduct the attack, which may hit targets next door as well as those on the other side of the world.

    It is a society's capability to withstand the attack that determines whether or not it will lead to all-round chaos, and how quickly. As a general rule, it takes a lot longer than 15 minutes for all the consequences to manifest themselves and for the society to absorb and react to them. Re-establishing the equilibrium that existed before the attack may take years.

    There is no such thing as absolute security, in either the physical or the virtual world. While technology promises to eliminate human error from the threat catalogue through automation, it also brings novel and constantly evolving threats.

    Information technology promises to enhance the situational awareness necessary for the production of security, yet it carries with it vulnerabilities that may still be unknown. Incomplete security is nothing new in itself, but the enmeshment of the physical and virtual worlds creates new kinds of security opportunities and needs that societies have to address.

    Today's overall threat catalogue is versatile and in constant change. As it includes threats that have yet to emerge as well as those only gradually appearing, it forces societies to plan and prepare for the unknown too. Preparing for the unknown can only take place through strengthening the society's resilience.

    Resilience stands for the continuation of operations even when the society faces a severe disturbance in its security environment, the capability to recover from the shock quickly, and the ability to either remount the temporarily halted functions or re-engineer them.

    Resilience is a multidimensional phenomenon. It affects societies at present, yet even more their futures. It is required from both physical and virtual systems, and from their intermingled reality. Resilience is not only a headache of the decision-makers trying to secure the functions vital to society at any time, but also a feature of states, organisations and corporations, as well as that of individuals.

    A society's overall resilience builds upon the capability of its constituent parts to prevent and resist departures from ‘business as usual' ‒ as well as to adapt to them rapidly and flexibly.

    The Cabinet Office in the UK categorises resilience into ‘infrastructure resilience', ‘community resilience', ‘business continuity' and ‘corporate resilience'. All of these are deemed important for the survival of society in the contemporary security environment. Resilience is not only a physical but, to a large extent, a mental feature.

    Hence it also entails, for instance, the capability to make justifiable decisions and act upon them under distress. Tolerance for crisis should be seen as a function vital to society.

    Western societies are used to the prevailing state of peace and have managed to construct well-functioning societal operations based on the utilisation of technology.

    As a drawback to this state, which in itself is worth pursuing, they have lost some of their capability to survive. In particular, their mental ability to deal with distress is declining because of the lulling belief that nothing major can go wrong. This can lead to a situation in which the physical fabric of society recovers from an attack relatively quickly, but poor mental tolerance keeps the society from re-balancing itself for years or decades.

    Developing and maintaining resilience is a central demand of contemporary security thinking. Its importance will only grow in the future as the world becomes ever more interconnected, threats become more complex and addressing complicated security questions requires cooperation.

    Resilience enables both efficient operation in times of distress or conflict and the smooth functioning of society, or any of its constituent parts, at any time ‒ as well as people's trust in the aforementioned.

    The intertwined nature of physical and virtual worlds requires that preparation, action and education take place in the intermingled reality. This enables the utilisation of opportunities that information technology and cyber space create without exposing oneself to unnecessary risk.

    Even the virtual world, which relies heavily on automation, does not always function. Minor disturbances in it, such as temporary interruptions in communications networks or defunct ATMs, can even be beneficial, because we tend to place too much trust in the operability of bytes. If the bytes do not function, we become helpless.

    Temporary cyber disturbances and shocks will always happen. This is important, because they keep societies alert and able both to react and to pro-act. As a result, building resilient societies is vital to surviving the future ‒ that is a fact.

    Whether or not cyber attacks can bring societies to their knees, and for how long, depends on the success of this building project.

    Jarno Limnéll is director of cyber security at Stonesoft

    HP seeks secret sauce to fill the gaps

    HP seeks secret sauce to fill the gaps

    Attending a recent social event, I was able to get together with some major names from IT giant HP.

    The four executives at the event represented some of the technology acquisitions that the company had made over the past few years, including Fortify, ArcSight, Vistorm and TippingPoint (the last via the acquisition of 3Com).

    Speaking with Andrzej Kawalec, CTO of HP enterprise security services UK; Jason Schmitt, director of product management for HP Fortify; Frank Mong, VP and general manager of enterprise security product solutions; and Rob Greer, VP and general manager of HP Software, network security; I firstly asked the group if they felt that managing a collection of technologies would work better if they could ‘cooperate' and share information among each other.

    Kawalec said that HP "looks after the biggest companies in the world and we share that intelligence deep into our capability". Greer commented that what made ArcSight great was its capability of looking into all sources, and that HP has the technologies to deliver to it.

    He said: “With TippingPoint, we integrate with ArcSight to find that information to integrate to get better intelligence. It is my belief that people get in externally and each phase of technology and services gives intelligence on this. If you integrate the environment, then the game is not over.

    “Do you know where the assets are? That is what ArcSight correlates and shortens the timeframe to identify and do something about it. The integration with Fortify means that security challenges can be addressed initially, so you can create a kind of ‘digital vaccine' to make changes.”

    Kawalec said that the eight security offices that HP has around the world send information into ArcSight to collect and correlate information in order to get context. “Tying it all together, it can be really amazing,” he said.

    Mong said that HP is focused on what the customer wants, and its ecosystem of servers means it takes the knowledge and loads it into the technology. He said: “If you look at a data breach, you don't just look at the network or the firewall or intrusion protection system, as 84 per cent of vulnerabilities are in the applications, so that is where Fortify comes in.”

    Schmitt commented that HP's view on application security is not about identifying flaws, but simulating attacks, and its WebInspect technology is focused on this. Kawalec said that this offers a hosted model to allow the user to use the tool and move on.

    Greer said: “We know you cannot stop and you cannot be 100 per cent secure, so we make it secure so that those who attack you give up and go somewhere else.”

    HP talked up a concept called the ‘five step kill chain', which they said was the following:

    • Research
    • Infiltration
    • Discovery
    • Capture
    • Exfiltration

    Mong said that it is all about countering a threat and knowledge, and that it makes sense to protect the user and make it harder for the attacker to get in.

    He said: “We put together the complete package, as it is not a case of ‘if', it is ‘when'. It is not layered defence; it is the process and what layers there are. We're putting in technology that slows the attacker down.

    “We are still talking layers, but it is just tools and users need a process to understand and counteract the threat.”

    Getting back to the point of technologies working together, Kawalec said that there are often "fracture points between products and services", while Greer said that a lot of technologies do not standardise and there is too much of a trade-off between risk and accessibility.

    “People don't want to pay for security, they will not compromise on performance. Security should be an enabler.”

    I asked the group where they felt HP was in the security space, following on from the same question I had asked the security brands of Dell last year. Mong said that "HP has to be in security": whether it is for PCs, laptops or servers, all environments have security at the core.

    Kawalec said: “No one is doing consumer, hardware, software and servers at such a massive scale. From tablet to the printer to the network, if it is running for enterprise, no one is doing it. Security is a massive market.”

    Greer concluded by admitting that there are some gaps between its technologies and that the rules of the game have to change, but that "HP has the best way of addressing that".

    So it is less a case of 'mind the gap' in the long term - but could the company be on the lookout for technologies to fill the areas where it feels it is not yet delivering a full package to users? Its acquisition strategy has been pretty quiet for a couple of years, but could things be changing?

    AhnLab announces entry into the UK

    AhnLab announces entry into the UK

    A few weeks ago I had the pleasure of meeting ‘advanced internet security protection' vendor AhnLab as it made the first stage of its move into the EMEA market.

    I first became aware of the company at this year's RSA Conference through its prominent advertising. At Infosecurity Europe, the company made its first step into EMEA, which it has now followed with the opening of a UK-based office.

    The company specialises in integrated internet security solutions for small-to-medium businesses (SMBs) to enterprise organisations with a firm eye on advanced persistent threat (APT) protection.

    Speaking to SC Magazine, Brian Laing, director of marketing and products at AhnLab, said that it provides maximum defence by offering protection of email and web attacks ‘all within one box'. “We offer our own anti-virus with one licence, we have two sandboxes and offer email and web protection,” he said.

    “We don't do a one-size-fits-all solution, but we look at ‘beyond application detection and behaviour', as well as the executable to give you a full detail of what is warranted to be malicious. We then compare signatures and anomalies to that list of behaviours to what is in common with malware.”

    Laing explained that there are no binaries sent to the cloud, and instead there is a list of behaviours to get the DNA of an attack into a database. He said: “We put the data in their cloud, and there is a copy of our cloud in your network, you can also get updates for queries with signatures and behaviour patterns.”

    AhnLab has been in the anti-virus business since its foundation in Seoul, South Korea, in the mid-1990s. It offers a range of products including a ‘Total Internet Security' package, as well as specific layers of security.

    It is tricky to talk to AhnLab without the name of its main competitor coming up. In fact its PR claims that its products "have been identified as being faster, producing fewer false positives and having a lower TCO than FireEye in a number of third party tests".

    FireEye has made an incredible mark in the information security space since its own entry into the UK almost two years ago, so it is not hard to see why companies are aspiring to its level of attention.

    Asked if he felt that there was a market for APT protection in the UK, Laing said: “There is some, but we look at FireEye's revenue and success and also the amount of press coverage that they have got.”

    Laing confirmed that the company sees itself in the same space as FireEye and I am sure that they will not complain about the competition or being held in such esteem.

    AhnLab EMEA territory manager Simon Edwards said: “By establishing EMEA headquarters, we are going to be able to provide a better service to our customers within the region. We see Europe as one of our biggest areas for growth over the next few years and we have set ourselves ambitious business targets.

    “The company offers customers unrivalled products, which have already attracted the attention of some of the major industry players. As today's cyber criminals continue to develop highly sophisticated pieces of advanced malware, it is imperative that organisations deploy a suitable security solution, which can cope with these threats.”

    Information security - within budget

    Information security - within budget

    Have you ever wondered why the chief information security officer or head of information security is often ostracised by the CFO/CIO/CEO during the budget allocation period?

    At best, the CISO often walks away with the crumbs or leftover budget. Why? Is it because information security has a budget-busting reputation? Does information security fail to do a good job in demonstrating value or return on investment? Is the typical head of information security lacking an understanding of technology and management's expectations?

    Regardless of the reasons, put bluntly, the information security department is frequently seen as a ‘money bleeder' and thus, is a frustration for the CFO/CIO/CEO.

    Let's face it, the bottom line matters and in today's ‘cliff and ceilings' environment, every company, big or small, is looking to cut its budgets. The information security budget, if it exists, is often among the first at the altar of fiscal sacrifice.

    Fortunately, the CISO can still do a lot with little funding. The six points below are ways CISOs can successfully work within a limited budget:

    1. Implement a management framework. If budget is a concern, begin by aligning all existing projects, tools and controls to those set out by the respective frameworks. Among many (including ISO 27001) two frameworks that can be easier to adopt and align with include:

    (a) Either a risk management framework (RMF) such as Management of Risk (M_o_R) or ISACA's Risk IT (this is available as a standalone framework based on Cobit 4.1 and is also integrated into Cobit 5). An RMF is critical to an organisation's bottom line, and creating a framework to manage risk will allow the security department to manage and highlight exposed and unmitigated risks. This approach often ends up influencing top management to take notice and, hence, take action.

    (b) Cobit 5 is a framework created by the non-profit, independent ISACA to improve the governance and management of enterprise IT (GEIT). Its implementation allows managers to bridge the gap between business and IT.

    2. Use what you have. Often over-zealous or simply ignorant predecessors have spent large sums on hardware, software and systems that never get used (often called vapourware or tins). Despite all urges to throw the useless ware away, attempt to negotiate with the vendor to come in and demonstrate value, and drop its support charges. Give your requirements to the vendor and request that the vendor meet as many of them as possible. No vendor wants to lose an existing client.

    3. Market information security. Avoid spending thousands of pounds on a limited number of training days. Instead, arrange for regular awareness sessions and/or open information security days where you can provide brief talks and presentations with a prize draw to attract the maximum audience.

    4. Invite vendors to give demos of their products to the masses. This is even easier if the vendor has not yet got a foothold in your company, and it is very useful for you, as you get to do a 'requirements analysis' (albeit high level and non-scientific) cheaply.

    5. Detail both technical and business requirements around a significant information security risk before attempting any project request or initiative. That way you know that your project will help fulfil a requirement.

    6. Never say no. Do not deny a request to increase security just because there is no more money in the pot.  Review the above five points.

    Yes, budgets are necessary for tools, systems and, most importantly, hiring good people. However, if you are aware of one or more business-impacting and as yet unmitigated risks, carry out a formal risk assessment, using ISACA's Risk IT framework (covered in Cobit 5) to build your case and increase your budget.

    The more CISOs can prove a business need for information security, the higher the budgets are likely to be.

    Amar Singh is a member of the ISACA London Chapter Security Advisory Group and CISO of News International

    Businesses need to recognise BYOD and DDoS threats

    Businesses need to recognise BYOD and DDoS threats

    The recent Infosecurity Europe exhibition was a great opportunity to talk to fellow security experts and businesses, and it's unsurprising that both bring your own device (BYOD) and distributed denial-of-service (DDoS) attacks remain high on most business agendas.

    A high percentage of the visitors to our stand wanted to talk specifically about BYOD and DDoS solutions – it seems that many have reached a tipping point where the threats can no longer be ignored.

    In fact, in the survey of 120 attendees we ran at the event, we found that BYOD tops the challenges that IT leaders are facing when trying to secure their networks and devices. We found that 87 per cent said that it is more difficult than ever to secure businesses from the threat of cyber attacks, with almost one in four citing BYOD as the largest contributing factor to increased vulnerability in their organisations.

    This may be surprising to many. The talk surrounding BYOD certainly seems to have been going on for years. However, businesses of all sizes are continuing to discover that they must navigate the murky waters of managing new devices on their networks and putting in place the right levels of authentication to enable entire workforces, without putting too many restrictions on access.

    It's certainly true that the introduction of smartphones, laptops and tablets to the workplace has been a huge element in enabling mobile working – but it has also come with its fair share of threats to business.

    The focus for anyone looking to implement a BYOD solution should be to first understand the user base and their needs: the types of device they are using, where they are accessing information from, what type of data they are accessing remotely, and so on. Once you understand the workforce, it is possible to map a solution to ensure the right levels of authentication to protect the network and ensure the best possible end-user experience.

    Alongside BYOD concerns, an alarming number of our survey's respondents admitted to a worrying lack of knowledge about the latest DDoS threats. Only 10 per cent of the security professionals we surveyed could describe accurately how DNS reflection attacks work (despite the coverage of this type of attack following the now infamous Spamhaus attack) and just 11 per cent would be completely confident that the day-to-day operations of their business would not be disrupted, should they be hit by such an attack.
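
    For readers hazy on the mechanics themselves, the rough arithmetic below (a Python back-of-the-envelope calculation using assumed packet sizes, since real figures vary by query type and resolver) shows why reflection attacks are so potent: a small query sent with a spoofed source address to an open DNS resolver triggers a much larger response, all of which lands on the victim.

        # Illustrative only - the packet sizes and bandwidth are assumptions.
        QUERY_BYTES = 64               # small spoofed DNS query
        RESPONSE_BYTES = 3000          # large response from an open resolver
        ATTACKER_BANDWIDTH_MBPS = 100  # bandwidth used to send spoofed queries

        amplification = RESPONSE_BYTES / QUERY_BYTES
        victim_traffic_gbps = ATTACKER_BANDWIDTH_MBPS * amplification / 1000

        print(f"Amplification factor: ~{amplification:.0f}x")
        print(f"{ATTACKER_BANDWIDTH_MBPS} Mbps of spoofed queries becomes "
              f"~{victim_traffic_gbps:.1f} Gbps of traffic at the victim")

    Defences such as response rate limiting on resolvers and the filtering of spoofed source addresses work precisely by breaking that arithmetic.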

    These are strikingly low numbers given the amount of attention that Spamhaus and DDoS have received over recent months. However, the message about the risks does seem to be getting through; 22 per cent of respondents highlighted reputational damage as their main concern about potential DDoS attacks, 20 per cent worried about the impact on customers and 16 per cent on data loss, while more than one in 10 respondents picked out revenue loss as one of their top three DDoS fears. So what can businesses do to protect themselves?

    It's crucial that we get on the front foot when it comes to tackling cyber crime and consumer devices in the workplace to try to limit the damage. The results speak for themselves. Businesses need to take note and prioritise security or run the risk of allowing cyber criminals to access data through a BYOD backdoor or hacktivists to knock them offline with DDoS attacks.

    Success is in the detail, though – it's not a case of buying DDoS or BYOD solutions just to tick a box, it's about establishing what your organisation needs and how you can better support your employees. If you keep that focus in mind, you won't go far wrong.

    Joakim Sundberg, worldwide security solution architect at F5 Networks

    Cyber slings and arrows

    Cyber slings and arrows

    The concept of cyber war is a 'to and fro' subject for us in the media.

    On the one hand, we hear stories of espionage, state-sponsored hacking and Stuxnet-style malware that allow us to write stories about ‘a new cold war'. On the other hand, there is research that debunks the concept of cyber war and points out a lot of the flaws in the supporting claims. A good point of reference for this is the talk at last year's Brucon conference.

    A new angle on this came about last month, when the US air force announced that it was designating some cyber tools as ‘weapons'. This gives us evidence of government awareness of the concept of cyber war and is further proof that cyber attack and defence is a serious business for governments.

    Considering this new angle, I turned to some industry spokespeople to see what the view of this decision was. Bob Ayers, a former cyber intelligence officer for the US Army and the Defense Intelligence Agency (DIA), who recently revealed the truth about the state of defence by the UK government to SC, referred me to a Washington Post article from 2003. This stated that President George W. Bush had "signed a secret directive ordering the government to develop, for the first time, national-level guidance for determining when and how the United States would launch cyber attacks against enemy computer networks, according to administration officials".

    Ayers said: “What makes this so interesting is not that it said ‘go away and build cyber weapons', it said ‘come up with the rules on how we'll use them'. You don't worry about rules of engagement when you don't have any weapons to engage with.”

    I also asked US-based security blogger Jeffrey Carr what he thought about the news. He told SC Magazine that you can take Lieutenant General John Hyten ‘at his word'.

    Asked if he felt that this was a method to gain a slice of the government defence budget, Carr said: “I've heard this - that it's difficult to justify expenditures and this sounds like a reasonable approach by the air force.

    “The armed forces of dozens of countries have been adding cyber as part of their war fighting since before 2010, and that is continuing. The reports from security companies marginally affect that, but aren't the drivers in my opinion.”

    From a vendor perspective, Jason Mical, vice president of cyber security at AccessData, said that as cyber crime has overtaken terrorism as the top threat in the government's eyes, reclassifying cyber tools as weapons "was a necessary approach".

    He said: “Cyber criminals will not wait for the government agencies to catch up. Unless budgets are allocated to arm the government with the technologies to detect the new threats, they will continue to fall victim to these attacks.

    “But just detection of these new threats is not enough; agencies have to have solutions in place to respond efficiently. Identifying a breach is important but quickly determining breach scope and deploying remediation tactics is critical. It is imperative that the agencies spend the necessary dollars to implement true cyber intelligence and response technologies.”

    So if this is a case of clamouring for budget, then perhaps it is a situation that CISOs will be all too familiar with. The circumstances may be different, but could you justify investment in targeted attack defence, DDoS protection, and forensics and remediation as realistic in light of current threats?

    If so, you may not be far off from what the US air force has done in this instance. Commenting, Ed Skoudis, founder of Counter Hack Challenges and a SANS Instructor, said that concept "makes a lot of sense", especially "given that computer action can have a significant kinetic impact on the real world".

    He said: “Manipulating computers, electrical distribution, water supplies, manufacturing equipment and more can be impacted just as significantly as if someone attacks them using a traditional weapons system.

    “These kind of weapons and their effects are still being explored and understood, but they are definitely real, as indicated by this move.”

    Does this make cyber war more real? Of course not, but it is a move by government to acknowledge that this is a challenge that needs to be met and the US air force has stepped into this with vigour. Whether it gets that budget allocation and this effort works remains to be seen.

    The circle of vulnerability management

    The circle of vulnerability management

    Following the recent acquisition of nCircle, I got a chance to talk with Tripwire's chief technology officer and senior manager of corporate communications about the company's new addition.

    I dealt with the company fairly regularly for the first few years of my time here at SC, in which time Tripwire was a major player in the compliance market. It was acquired by the Thoma Bravo organisation in 2011 and until its purchase of the vulnerability management company nCircle, it had remained fairly silent.

    Dwayne Melancon, who was promoted to CTO after founder Gene Kim departed the role, explained that the deal was intended to better cover security and to "add a vision of how to be secure and add security controls".

    He said that it was important for the company to produce a solution that would address how vulnerable something is to attack. "If you can assess vulnerabilities and work with the Tripwire engine, you can tell a user how vulnerable they are," he explained.

    Melancon said that the process since the purchase was to work with nCircle's team on how to build the patching systems into its own products to connect the infrastructure. At the time of meeting, the acquisition was only a few weeks old so the process of baking the vulnerability management software into a compliance engine was challenging.

    “All feeds lead to intelligence to see what you are looking for, and knowing what is wrong and what the state of the systems are, and feed this into the log management and security incident and event management (SIEM) technologies,” Melancon said.

    Melancon said that the company was about file changes, but as more and more ports were opened its technology needed to tell the user more about additional users, administrator privileges and who opened files and where they were sent.

    He said: “Security from a network activity and traffic perspective is necessary but not sufficient, but part of what is happening. The Verizon Data Breach Investigations Report shows that there is too much time between discovery and detecting early enough to do the detection.

    “With vulnerability and application management, it is about basic hygiene. You take an application and you want to know if it is good or bad. However there is so much to do that you want to automate the discovery and inform Tripwire Enterprise so you have got a way to tie it to the business. It is more about context and helping to make an informed choice.”

    Offering a total security model is what every vendor wants to do, and having branched out sensibly a few years ago, Tripwire has added to its offerings to provide the solutions that users want.

    Five reasons not to worry about lost laptops

    Five reasons not to worry about lost laptops

    Laptops, Ultrabooks and Surface Pros are very easy to lose and a target for thieves.

    The FBI has estimated that the cost to a company of losing a single laptop is just under $50,000. Despite this, you need not worry if you take the right approach to data security: keeping your data safe and protecting your company from possible bad publicity, and fines, is something that every organisation can do.

    With the correct policies in place, worrying about lost laptops and smartphones will soon be a thing of the past. Here are five reasons why I don't care if things get lost or stolen.

    Encrypt

    One of the best ways to keep your data secure is to encrypt it. In today's mobile world the majority of data is held on some form of mobile device; if that gets lost that could be a massive headache for the company, not just through potential fines from the likes of the Information Commissioner's Office (ICO), but also from customers losing faith in the company and voting with their feet.

    However, if all the data on the device is securely encrypted then it doesn't matter who gets their hands on the phone, tablet or laptop because they have no way of accessing the information.

    The trouble is only 25 per cent of companies actually use encryption, according to a recent study by the Ponemon Institute. This leaves them wide open to massive data losses especially when you consider that a laptop has a 10 per cent chance of being stolen during a standard three-year lifecycle.

    When implementing encryption it needs to be fast and non-intrusive. If just one employee thinks that the encryption process is slowing down their job and ability to work productively, they will turn it off.
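
    As an illustration of how little code basic file encryption requires, here is a minimal sketch using the Fernet recipe from the widely used Python cryptography library. The file name is a placeholder and, crucially, a real deployment would keep the key in a TPM, key-management service or OS keystore rather than generating it alongside the data, and would encrypt transparently rather than asking users to run a script.

        from pathlib import Path
        from cryptography.fernet import Fernet  # pip install cryptography

        # Generated here purely for illustration; in practice the key lives in
        # a TPM, key-management service or OS keystore - never next to the data.
        key = Fernet.generate_key()
        cipher = Fernet(key)

        def encrypt_file(path: Path) -> Path:
            """Write an encrypted copy of the file next to the original."""
            out = path.with_suffix(path.suffix + ".enc")
            out.write_bytes(cipher.encrypt(path.read_bytes()))
            return out

        def decrypt_file(path: Path) -> bytes:
            """Recover the plaintext from an encrypted copy."""
            return cipher.decrypt(path.read_bytes())

        report = Path("quarterly_forecast.xlsx")   # placeholder file name
        report.write_bytes(b"commercially sensitive numbers")
        protected = encrypt_file(report)
        assert decrypt_file(protected) == report.read_bytes()
        print(f"Encrypted copy written to {protected}")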

    Remotely delete

    Even when you know that everything on a laptop, phone or tablet is encrypted, sometimes it's just even better to get rid of the data once and for all. It's much easier to sleep well at night knowing that any stolen or lost laptops don't have any information on them, encrypted or otherwise, because the data has been remotely deleted.

    Up to the minute, continuous backup

    In the same way that encryption should be almost invisible to the users, so should the backup process. Researchers at the Ponemon Institute found that only eight per cent of corporate laptops are backed up to the company's servers, and when it comes to actually doing the backup it must be continuous, not just done at the end of the day, otherwise when the data is restored it isn't up-to-date.

    Use the cloud to help you store your backups in a way that they can be easily recalled should the need arise. Backups are also best done locally during peak times to minimise the strain on each office's bandwidth; when the network isn't being used, such as during non-office hours, back up the data to the central secure store.

    Doing your backup this way means that if you have teams travelling the world and they lose their devices, they can get new ones and back them up quickly and easily.
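
    As an illustration of the 'copy changes as they happen' idea, the sketch below uses Python's watchdog library to mirror saved files into a staging folder the moment they change; the paths are hypothetical, and a real backup agent would add encryption in transit, deduplication and retry handling.

        import shutil
        import time
        from pathlib import Path

        from watchdog.events import FileSystemEventHandler
        from watchdog.observers import Observer

        WATCHED = Path("/home/alice/documents")   # hypothetical local working folder
        BACKUP = Path("/mnt/backup-staging")      # hypothetical staging area for the central store

        class CopyOnChange(FileSystemEventHandler):
            def on_modified(self, event):
                if event.is_directory:
                    return
                src = Path(event.src_path)
                dest = BACKUP / src.relative_to(WATCHED)
                dest.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dest)           # copy the changed file as soon as it is saved

        observer = Observer()
        observer.schedule(CopyOnChange(), str(WATCHED), recursive=True)
        observer.start()
        try:
            while True:
                time.sleep(1)
        except KeyboardInterrupt:
            observer.stop()
        observer.join()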

    Audited

    Another issue that can cause a great deal of trouble when mobile devices get lost is that, a lot of the time, the company and the individuals involved don't know what was on the device in the first place. Often this leads the company to admit that it has lost far more data than it actually has; not that it knows for sure, of course.

    For this not to happen, all data stored on any mobile device needs to be audited, so that you know not only who has access to what, but also what they are doing with it and whether they are taking it off-site. A lot of the time people think they need more information than they actually do, and so open the company up to unnecessary risk.

    This practice should also be applied to USB memory devices too. It used to be that a company's databases were stored in big, metal filing cabinets that couldn't be removed. Now a company's entire filing system can be held on one USB stick.
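
    A very small sketch of what such an audit record might look like is shown below; it simply walks a mounted device and writes a manifest of every file with its size and hash (the mount point and output file name are placeholders), so that if the device is later lost you know exactly what was on it.

        import csv
        import hashlib
        from datetime import datetime, timezone
        from pathlib import Path

        DEVICE_ROOT = Path("/media/usb0")   # hypothetical mount point of the device being audited

        def manifest(root: Path, out_file: str = "device_manifest.csv") -> None:
            """Record every file on the device with its size and SHA-256 hash."""
            with open(out_file, "w", newline="") as out:
                writer = csv.writer(out)
                writer.writerow(["path", "bytes", "sha256", "audited_at"])
                for path in root.rglob("*"):
                    if path.is_file():
                        digest = hashlib.sha256(path.read_bytes()).hexdigest()
                        writer.writerow([str(path), path.stat().st_size, digest,
                                         datetime.now(timezone.utc).isoformat()])

        manifest(DEVICE_ROOT)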

    Staying productive while waiting for new hardware

    For some people, those who don't have the right procedures in place, losing their laptop means that they can't do any work, on top of all the other problems they may well have caused the company. However, with the right policies in place you need not be dead in the water or have to miss a step with your work. Companies need to embrace the reality of mobile access from whatever devices people are carrying.

    As many business people travel with smartphones and tablets as well as laptops, you should be able to access your backed-up data from any of these, or simply through a web browser over a secure connection. Then, when you do get your device back, all your data can be quickly restored and will be up to date.

    The sheer number of mobile devices in use in the workplace, and the number that go missing every day, means that every company needs to plan for the worst. You can't just sit there and say ‘it won't happen to me', because the statistics show that it will happen to someone in your organisation.

    This leaves every IT manager with two options. The first is to lose sleep at night worrying that one of the sales staff will leave their laptop on a train or in a taxi. The second is to implement the right systems and sleep soundly, knowing that if a marketing executive did lose a laptop with the company's predictions for next year on it at an airport somewhere in the world, it wouldn't matter because the data would be safe from prying eyes.

    Phil Evans is vice president of business development and sales at Datacastle

    In Jack Daniel's infosec fantasy world

    Last week I had the pleasure of meeting self-declared 'infosec curmudgeon' Jack Daniel.

    Known for his influential views on threat research and of course, his co-founding of the B-Sides conference circuit, Daniel also works as a product evangelist at Tenable Network Security, having left his previous position at Astaro following its acquisition by Sophos in 2011.

    I began by asking Daniel how he had come to work for a vendor in the vulnerability management space. He said that having talked with CEO Ron Gula about a role at the company, he believed that the opportunity to work with him allowed him "to fill a nebulous role in the middle". He said: “A lot of what product management is, is reactive, so I talk to the users and with the marketing team to generate content and keep them up to speed on problems.

    “I do more with the channel and my stuff with B-Sides and they give me the flexibility to do that, so I am having fun.”

    I asked Daniel about vulnerability management and, following a start to the year that saw zero-days and software vulnerabilities hit the headlines over and over again, whether he thought the sector was enjoying a new wave of interest. He said that while vulnerability management is nothing new, it has got more attention, and that the company was expecting vulnerability management to be "all the different things that we expected SIEM (security incident and event management) to be".

    Daniel continued by explaining that it is about a company's vulnerability and security posture, and Tripwire's acquisition of nCircle showed the interest in vulnerability management and he hoped that there would be more data from companies such as Microsoft, Verizon and others to feed the market.

    “You can just patch stuff, but in a 150-node network you have to wonder if you are tracking enough or patching to scale, as just patching doesn't work, so you need to prioritise,” he said.

    “This is a big part of what is driving vulnerability management. Look at the mobile impact; investigation is not sufficient, but consider how vulnerable these technologies are. Personal devices need a better view. With Android you are at the mercy of the carrier, while with Apple you can update over the air. But how often is a vulnerability turned into a compromise, and what are the implications for business?”

    Daniel bemoaned the concept of 'bring your own device' (BYOD) in this case, saying that businesses need to invest in high performance in order to counter the lack of management. He said that the 'we don't allow it' attitude was naïve, especially as policies are not, 'thankfully', enforced by martial law.

    So what is his solution? Daniel said that in 'his fantasy world' we would use what would be appropriate for us, restricting applications and enforcing encryption.

    Talking about the recent story regarding the Microsoft patch that left many users with a 'blue screen of death', Daniel said that this was not that bad but, again 'in his fantasy world', administrators would be able to re-image systems so they were finely tuned and, depending on your settings, you could push patches out within 72 hours and everything would be secure.

    He said: “You will be able to log in to use whatever tools you have, be able to re-image quickly and disaster recovery concepts work and put a new image on. These are all things to imagine in my fantasy world!”

    We concluded by talking about third party software, with Daniel saying that the significance of third party software has been demonstrated in compromises, particularly with the Verizon Data Breach Investigations Report showing the influence of captured usernames and passwords being used.

    He said that in his fantasy world, companies would watch their networks and third parties to look for signs of compromise, so they would be told about problems and could mitigate them.

    “It will look for signs of compromise, tell you about attacks and tell you what to worry about as machines talk to each other,” he said.

    SC Magazine Awards Europe 2013 - highlights

    This week we announced the winners of the SC Magazine Awards Europe.

    Video highlights of the evening are available below. We are also able to offer a digital version of our awards Book of the Night, which is available here: http://fmgstatic.ceros.com/sc-magazine/sc-awards-book/page/1.

    The full results of the night are available in our story here: www.scmagazineuk.com/sc-magazine-awards-europe-2013--winners-announced

    Also, to see the photos from the evening, view them here:


    Video interview: Palo Alto Networks at Infosec 2013

    SC Magazine was given the opportunity to ask Palo Alto Networks questions about modern topics, with the answers delivered by a video.

    We asked 'can reports on espionage and cyber war be taken seriously' and 'should businesses in the UK be concerned about APTs that impact businesses in the Middle East and how can they protect themselves'.

    Responding at the Infosecurity Europe 2013 conference in London, Alex Raistrick, director for Western Europe at Palo Alto Networks, gave us the following answers:


    Training up the infosec troops

    The UK has a desperate need for more infosec professionals, according to the National Audit Office (NAO).

    In its February 2013 landscape review of the UK's cyber security, which assessed the UK government's progress in implementing its cyber security strategy, the NAO seized the opportunity to flag the ever-growing information communications technology (ICT) and cyber security skills gap.

    Backed by a sizeable injection of £210 million over 2014-5, and the rallied forces of 15 government organisations, government strategies to tackle the infosec talent shortfall in the coming years include the development of cross-cutting knowledge, skills and capability via the National Cyber Security Programme.

    However, before being seduced by the boom in job openings, training opportunities and government support, infosec professionals are wise to choose carefully when boosting their skill-set.

    A widening gap

    As with the rest of Europe, which despite high unemployment is experiencing a severe shortfall of talent in the face of surging infosec job openings, the skills gap in the UK is being widened by growing demand for professionals across the board.

    Although state sector employment is no longer a safe option for many white collar workers, and continues to be hit by rounds of downsizing in many areas, information security is one area that is bucking the trend. Today, infosec professionals are keenly sought by both the public and private sectors.

    Of these, highly skilled and qualified players with a broad range of skills and experience are at a premium. However, according to a 2012 market report by corporate governance recruiter Barclay Simpson, individuals with a highly specialised range of security expertise are also in hot demand, thanks to the complex challenge presented by cyber threats.

    Compliance and implementation

    Such demand for infosec professionals is impacted not just by the surge in cyber attacks and organisations' better understanding of the threats posed by cyber crime, but also by the rising importance of rigorous compliance and implementation.

    In the UK, effective compliance and implementation depends on employing people who are certified by the relevant professional certification bodies. Whereas examination bodies presuppose a level of experience but do not generally impose formal requirements, certification bodies for information security generally have a great deal more clout. Usually membership based, they expect set prior experience, require continuing professional development (CPD) to maintain the qualification and invariably follow a code of ethics. As a result, certification provides greater credibility, helping recipients not only retain current employment, but also win future roles and climb the career ladder.

    Professional certification bodies that generally win plaudits from employers include, among others, IBITGQ, ISACA and (ISC)², which offer a range of  core qualifications.

    British infosec professionals are wise to consider the collection of competences surrounding ISO 27001 implementation and auditing, given the growing local and global importance of this international information security standard.

    At present, official ISO 27001 certifications are awarded by IBITGQ. They include ISO 27001 Certified ISMS Foundation (CIS F), ISO 27001 Lead Implementer (CIS LI), ISO 27001 Lead Auditor (CIS LA) and Certified ISMS Risk Management (CIS RM).

    Meanwhile, the Certified Information Systems Auditor (CISA), Certified Information Security Manager (CISM) and Certified in the Governance of Enterprise IT (CGEIT) qualifications – all awarded by ISACA - are globally accepted standards of achievement among information systems audit, control, security and IT governance professionals.  

    Finally, the CISSP (Certified Information Systems Security Professional) skills framework has a key part to play in terms of information security skills and disciplines. Developed and maintained by (ISC)², this qualification is particularly challenging to achieve.

    That said, it provides information security professionals with an objective measure of competence and a globally recognised standard of achievement. Those seeking to travel along the technical track, however, should consider Exin-Cloud and EC-Council qualifications, underpinned by vendor certifications.

    The management track

    For individuals tempted by a management career path and roles such as chief information security officer (CISO), chief information officer (CIO), certified information security manager or lead implementer, relevant qualifications are likely to relate to the development of their skills and competences. They should emphasise the creation and management of information security and its components inside the organisation.

    In addition to rigorous certification, those with an eye to operating effectively at senior management level should not expect their technical or engineering skills to be sufficient. Many would-be managers, CISOs and CIOs do not understand the nuts and bolts of management, or of running a business, that would be bread and butter to an individual with a business administration background. Specialist skills are, of course, essential, but on their own they are not enough.

    The vital next step for any ambitious infosec professional is therefore to broaden their business knowledge through an MBA or similar qualification. Only then can tomorrow's boardrooms receive the input they so badly need. 

    Alan Calder is chief executive of IT Governance

    IT Governance is exhibiting at Infosecurity Europe 2013, held on 23rd – 25th April 2013 at Earl's Court, London. The event provides an unrivalled free education programme, exhibitors showcasing new and emerging technologies and offering practical and professional expertise. For further information please visit www.infosec.co.uk

    Adult sites, bad on the inside?

    I had better word this correctly, but I was very interested in some research around adult websites that recently appeared on the BBC news website.

    The article was based on research by Conrad Longmore, which evaluated several adult websites and the number of infected pages within each domain. It deemed the largest infection rate to be within the website Pornhub, which according to Google was ‘not currently listed as suspicious'. However, the research by Longmore found that of the 13,955 pages tested over a period of 90 days, 1,777 pages resulted in malicious software being downloaded and installed without user consent.

    Longmore's advice was pretty standard security advice, and the reality is that this is another malvertising story, with third-party adverts and frames being infected with malware that compromises the user. The follow-up story by the BBC found that those affected were clearing the problems up, and confirmed that the number of infected pages in comparison with the total number of pages on the website was ‘minute'.

    Just over a year ago I looked at whether or not adult websites were the most secure or most targeted following some high-profile data breaches, while research by Bitdefender discovered that 63 per cent of users attempting to find adult content on their computers compromised their security on multiple occasions.

    Research from Blue Coat from this year found that when mobile users go to pornography sites, they have a high risk of finding a threat.

    While writing that article I looked to talk to IT and security teams from major adult websites and organisations and after a lot of effort, did not succeed, but I did reach writer Patchen Barss, author of The Erotic Engine, a book about the adult industry and technology.

    When asked if he felt that adult websites were leading the way in security technology, he said that it makes sense that they were as "they were pioneers in selling content online, which meant they were the first to learn how to protect their product".

    Malvertising is a challenge for all different types of businesses, not just those in the darker corners of the internet, as we have seen the websites of both Spotify and Major League Baseball affected by this problem.

    Longmore said that "there was clearly a problem just one week ago, there may not be a problem today, there might be a problem tomorrow, of course", suggesting that even though the adult websites have cleaned their houses up now, it may only be a matter of time before they are vulnerable to third party code.

    So is the answer to not have any advertising or content that you have not created? Of course not, that is impossible, but what you can do is assess what is on your site more frequently and scan your domain when new content is added, otherwise you may be someone else's bad news statistic.

    Head in the clouds

    There are three key action points to consider if you are a business that uses cloud services.

    The primary driver is that your data (or data that you collect and use on behalf of your customers) will no longer be under your direct control. However it is still your data, both practically (i.e. you need it to run your business, while your cloud provider views it simply as a source of revenue) and legally (the UK Data Protection Act is clear on the subject of data ownership, data control etc.). The challenge is that you can no longer build an infosec ‘moat' around your data.

    Third-party provider trust

    There are two angles to consider here: how much do you trust providers delivering services in the cloud? And do you even know what your 'traditional' non-cloud providers are doing; are they using cloud services themselves?

    Cloud adds a whole new raft of supply chain risks, from the failure of a cloud provider (how do you get your data back), to technical problems (loss of internet connectivity) that prevent your business from reaching your cloud providers - and importantly your data.

    Contractual controls must be applied because technical measures, such as a robust infrastructure, can only partially mitigate risks. You need to ensure that your cloud service providers are securing your data in line with your security posture (not theirs), and your right to recover your data (in a usable form) from them is essential to your business's stability. Often, the use of a cloud provider may weaken your security posture: is this a risk your business can take?

    Data location - compliance

    Data location challenges have been discussed at length. However, no adequate solution has ever been found. Arguably, an inherent characteristic of 'the cloud' is that you don't necessarily know where your data is stored; you need to assess the scale of legal and compliance challenges.

    There may be data that you cannot store in the cloud, but how will this affect your operations? Will this prevent your business from realising any of the efficiency gains and cost savings promised by cloud service providers?

    You may even need to decide what can go in the cloud and what can't.

    Clearly, having large quantities of data in the cloud means that you are entrusting your critical assets to a third party, or third parties, and allowing them to effectively manage risk on your behalf.

    However, their risk appetite may be different from yours. It is crucial that you understand your cloud suppliers' attitude to risk. The shared nature of cloud infrastructure brings specific technology challenges with respect to data privacy.

    Shockingly, there have been reported incidents of deleted data being recovered by different cloud clients. Once again, this illustrates the need for appropriate contractual controls, or if this is not possible you may need to adjust your attitude to risk, and very probably modify your business continuity and disaster recovery practices.

    Cloud security vs. traditional IT security

    The cloud security model is fundamentally different; adopting a cloud service will mean your information security management system undergoes one of the most significant changes it will ever face.

    Cloud forces a totally data-centric view of information security: gone are traditional network perimeters, segmentation etc. You need to know what data you are processing, where it is, why you are using it and when you might need to get rid of it.

    Be ‘infosec smart' about the cloud, and you have a disruptive enough opportunity to align previously disparate departments (for example legal, internal audit, IT, compliance) behind your information risk management initiatives.

    Paul Midian is consultancy director at Information Risk Management

    The danger of losing your memory

    The recent rise in targeted attacks has led many IT security chiefs to cite Advanced Persistent Threats (APT) as their biggest headache in 2013.

    With attacks directed at energy companies, government agencies and the likes of Google and Adobe, APTs are causing a lot of damage: valuable commercial secrets are heisted, sensitive government information is leaked, and sometimes industrial-scale havoc is wreaked. These attacks are shielded by the taboo subject of hacking, yet motivated by political control or monetary gain. And the opportunity for success is high: advanced persistent attacks routinely bypass traditional defences.

    What is particularly noteworthy about APTs is the large number based on memory exploitation. In fact, over the past 25 years, of the 54,000 software vulnerabilities given a CVSS rating, about 14 per cent were memory-based attacks. Memory injections are popular amongst hackers as they are particularly sneaky. Take ‘Skape/JT', discovered in 2004: it used an injection technique to copy code straight into memory, and it was able to execute because the dynamic-link library (DLL) wasn't written to disk. Of course, attacks increased in sophistication over the years and along came Reflective Memory Injection. This technique copies a DLL straight into memory and executes it without requiring any OS functions. Since the DLL is copied and executed straight out of memory without relying on any local functions (as in the case of Skape/JT), it is called ‘reflective' and is not stopped by traditional anti-virus and application control products.

    Unfortunately, Reflective Memory Injections are gaining ground among hackers, as the popular Russian RIA Novosti news agency (www.gazeta.ru) found out to its readers' detriment. In that particular attack the malware wasn't hosted on the website; it was served to visitors through banners displayed by a third-party advertising service. It lived in the computer's memory and didn't create any files on the computer's drive. In some cases, the instructions given out by the code were to install an online banking Trojan horse onto the compromised computers. The danger with memory injections is that once the malware gets loaded into memory, the system generally considers it to be a trustworthy action. As such it's much harder to detect and can be used to do pretty much anything.

    While sophisticated memory injections have been notoriously difficult to detect, they can be halted by patented technology that monitors an endpoint's memory address space and associated processes for distinct evidence of exploitation. If an executable library is found, an event is generated and the injected process is terminated. But that is just one opportunity to halt malware from executing. Memory injections are often used to gain a foothold in a system once a buffer overflow has taken place. The main attack can be thwarted at four different stages. The first is to eliminate the vulnerability; if this opportunity is missed, the organisation has the opportunity to defend the buffer, stop the injection or stop the payload executing on disk. To have any real chance of stopping the attack in its tracks, a combination of technologies needs to work in tandem, spanning patch and remediation management, application control and anti-virus. It is simply not enough to rely on catching the really tricky malware at one particular point in time.

    Alan Bentley is SVP worldwide at Lumension

    Spamhaus - a sign of things to come?

    Last week news broke that the internet around the world had been slowed down in what some described as the biggest cyber attack of its kind in history. It's also something that I predicted around a month ago.

    The Spamhaus attack is a demonstration of the kind of distributed denial-of-service (DDoS) attack I have been expecting for some time: DNS reflection. The major driver for this kind of attack is the decreasing number of bots available for rent, with the authorities more effectively cracking down on major botnets. With a lower number of bots now available, hacktivists and other cyber criminals are finding new ways in which to amplify their attacks.

    This shows that open DNS servers can act as a springboard for huge DDoS attacks, and this is just one among many that we will see throughout 2013. It might be the largest amplification attack to date, but I would predict that it will be seen as relatively small when we look back at the end of 2013. One thing to remember, however, is that very often a DDoS attack is just a smokescreen for a more sophisticated attack that can potentially cost the company even more money, meaning that IT professionals need to be prepared to respond to other threats while a DDoS attack is underway.

    For businesses, it's important to know that there are things we can do to protect the internet infrastructure and our services. People running open DNS resolvers will need to start filtering requests, and companies under attack should filter DNS responses in a way that allows legitimate responses to be delivered while stopping DNS reflection attack responses in their tracks.
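
    As a rough illustration of how reflection traffic might be spotted (this is not F5's method, just a sketch assuming Python with the scapy library, and the size and rate thresholds are arbitrary), the script below watches inbound DNS responses and flags resolvers sending unusually large answers, or answers to 'ANY' queries, at volume.

        from collections import defaultdict
        from scapy.all import DNS, DNSQR, IP, UDP, sniff

        counts = defaultdict(int)

        def inspect(pkt):
            # Only look at DNS responses (qr == 1) arriving from port 53.
            if pkt.haslayer(DNS) and pkt.haslayer(IP) and pkt[DNS].qr == 1:
                size = len(pkt[UDP].payload)
                qtype = pkt[DNSQR].qtype if pkt.haslayer(DNSQR) else None
                # Large answers and 'ANY' (type 255) answers are typical of reflection floods.
                if size > 512 or qtype == 255:
                    src = pkt[IP].src
                    counts[src] += 1
                    if counts[src] == 100:      # arbitrary alert threshold
                        print(f"Possible DNS reflection flood via resolver {src}")

        sniff(filter="udp and src port 53", prn=inspect, store=False)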

    The time to build business and government defences against this form of attack is now. As cyber criminals increasingly push the boundaries of what businesses think is possible, we need to do everything we can to stop them in their tracks and protect our most valuable data.

    Joakim Sundberg is security solution architect at F5

    'Teching the tech' - why language matters

    Cyber security as a profession seems uniquely prone to acronyms and obscure jargon.

    Even the name 'cyber security' has replaced what we used to more prosaically call information or IT security. This obsession with jargon leads to poor levels of reporting in the non-specialist press, hype in marketing campaigns and, at its worst, misplaced strategies from governments.

    In a keynote speech at the 2009 New York Television Film Festival, Ron Moore, writer for Star Trek and creator of the re-imagined Battle Star Galactica, described the secret formula for writing Star Trek: The Next Generation scripts: writers just inserted the word ‘tech' into the scripts and let others fill in the blanks with scientific sounding words later. A typical script would read thus:

    La Forge: "Captain, the tech is overteching."

    Picard: "Well, route the auxiliary tech to the tech, Mr La Forge."

    La Forge: "No, Captain. Captain, I've tried to tech the tech, and it won't work."

    Picard: "Well, then we're doomed."

    At which point Data would suggest they reverse the flow of the other tech through the deflector dish, and save the day. I am a big fan of ST:TNG.  I am less of a fan of the security industry's habit of 'teching the tech'.

    How often have you read a sentence such as this: “A sophisticated APT using customised HTTP and weaponised malware pose a significant threat to the IP of the targeted organisations”?

    That sentence is as meaningless as the Star Trek script, and yet those terms can all be found frequently on security websites, press releases and subsequently in the mainstream press. Even expanding the acronyms doesn't help with clarity.

    What is an advanced persistent threat, and what is meant by tactics, techniques and procedures? Is there non-weaponised malware? Even ‘malware' is meaningless to the average reader.

    Clear language would help everyone immensely: “Clever, careful persistent hackers can, using a variety of tools and techniques, pose a significant threat to the corporate information of targeted organisations”. Using plain English is not hard, and moves cyber security from the realm of mysticism to the real.

    Language also matters; we security professionals strive to get people to understand and do security better. We throw our hands up in collective despair when we read of the latest security fail resulting from common errors. We lament when management boards don't own cyber risk and allocate appropriate resources to solutions. Yet part of this problem stems from our collective use of jargon.

    An APT is a vague and elusive problem, but hackers are real. Espionage is an understandable motive. Targeted emails and malicious software are concepts that need no specialist knowledge to understand, and about which countermeasures can be discussed and evaluated.

    Proper understanding of cyber issues suffers from this flawed narrative. Too much is made in the media of nebulous concepts of cyber attack and cyber war, while politicians, who do not understand the issues but have to discuss them publicly, fall back on the same language. It's a vicious circle.

    Even technical people struggle to translate security speak into English. How is an IT manager supposed to know what the threat of cyber attack means to their organisation, never mind justify spending money on the problem?

    I'm not claiming that effective cyber security is an easy task. The modern business operates in a complex environment and needs to be able to make risk-based decisions. However, cyber security is underpinned by concepts that are well understood.

    Computing is a scientific discipline. The underpinnings of the modern internet, and of the hardware and software that contribute to it, are principles and standards that are often decades old.

    Even very technical security topics (encryption, forensic analysis, reverse engineering, obscure vulnerabilities) are concepts that anyone with an IT background will understand. Threat actors and consequent risks need no specialised knowledge.

    Cyber security is neither magic nor theoretical physics. We do not need to deliberately obscure our methods, and should not assume that our readers are specialists, indoctrinated in the language and acronyms of the genre.

    Clear language would not only help those trying to defend networks and systems, it would enhance the quality of public debate in a topic that is only going to grow in importance. It is time for the cyber security industry to embrace clarity, and say what it means.

    Rob Pritchard is the director of Abstract Blue Consulting

    Proving the point of targeted attacks

    Last week I had the opportunity to meet with Proofpoint CEO Gary Steele for the first time in a few years.

    Still a well-known name in the web gateway business, Steele talked about recent technology additions and its own work in detecting threats.

    Recently Proofpoint highlighted a threat called ‘Long Lining', which Steele called "mass customisation and sending volumes of messages". The concept is industrial-scale phishing, which is not as targeted as advanced persistent threats (APTs) or aimed at any particular entity, but with tens of thousands of IP addresses used by the attacker.

    Steele said: “This creates problems for signature-based systems as no two pieces of malware look the same. We ran 900,000 messages through 46 anti-virus engines and only four were able to detect it. Anti-virus scans many times but this is customised.

    “There is no doubt that anti-virus is still important and you need defence in-depth, but the struggle is not going away.”

    Proofpoint launched Targeted Attack Protection technology (TAP) last year, and Steele said this solution can trap a threat in a cloud-based sandbox, rewrite the URL in the cloud and determine its authenticity and behaviour.

    He said: “We see that APT is what users are concerned about, as we are talking about something topical and we are seeing market acceptance.”

    Steele cited a significant rise in Q4 for the company after its initial public offering (IPO) last year. “We are seeing rapid acceptance and adoption of our technology. Malware is not about scale, it is about the user and quickly you realise it and take it down,” he said.

    I asked Steele how this concept differs from standard phishing; he said that those messages are not very good, whereas a spear phishing message is harder to discern, as a malicious URL can look like a legitimate one.

    “The frequency is a ten per cent success rate, which may not sound like much but when 100 are sent out, then ten have clicked on it, so this shows the effect. I think that this is a grey area that is getting interest,” he said.

    The company also announced a partnership with Box to offer internal security and provide data loss prevention to manage security within the online storage system.

    I asked Steele if he felt that cloud and data protection could ever co-exist, as highlighted at the recent SC Magazine conference on data protection. He said that he agreed that not enough people ask the right questions, and that you should get the cloud provider to sign up to a deal on terms that the user can negotiate.

    Taking a different angle, I asked Steele how he would have dealt with the Bit9 attack and how, as a CEO, he would have responded. He said: “We would let our customers and partners know as we share with them in real-time, using Big Data and sandboxing to warn people about not going to malicious sites. As we sandbox everything, we can say ‘we see this and give the security team visibility and the nature of it'.

    “We have an understanding of it and we give knowledge to the IT team and share that out. A good example is we see something and block it and inform the IT team.”

    Following the lead of other vendors, Proofpoint has seized upon the targeted attack threat as something to market and wrestle with, and with public ownership and a strong lead in the gateway market, perhaps it could be a good year for the company.

    Ten secrets to successful patch management

    The recent spate of Java vulnerabilities has required a number of large vendors to react almost instantly to ensure security levels are kept to an optimum.

    As good as these reactions are, organisations urgently need to apply more insightful strategic thinking to ensure that security updates are reaching the entire organisation's IT estate.

    From our analysis of anonymous hardware and software data (including thousands of PCs and servers in 6,000 organisations across public sector establishments currently running our solution), we've found that 40 per cent of servers and workstations are missing security patches.

    In addition, six vendors (Microsoft, Adobe, Mozilla, Apple, Oracle and Google) together released 257 security bulletins/advisories fixing 1,521 vulnerabilities in 2011. In 2010, these vendors fixed 1,458 vulnerabilities, demonstrating the extent of the issue, as well as the number of bulletins we face each year.

    With more and more bodies utilising remote working, the challenge isn't just to implement patches as they are released, but also to be fully confident that devices have been updated and are thus continuously safeguarded. So what are the ten key areas IT experts should tick off the list for a successful patch management implementation?

    Transparency is key

    At the heart, asset discovery is essential. If you don't know what you've got, you don't know the extent of the problem you may have. If you do nothing else, make sure you know where your IT assets are; this is a quick gain that will put your house in order.

    Once the estate is established, it's key to have real-time visibility of the assets you support. Given the urgency with which we need to manage patches, the first secret is to not only have full awareness of the estate but to instantly know the health of it too.
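
    As a purely illustrative starting point (a real deployment would use proper discovery and inventory tooling, and the subnet below is a placeholder), even a crude ping sweep shows the shape of the problem: which addresses on a network segment actually answer.

        import ipaddress
        import subprocess

        def ping_sweep(cidr: str) -> list[str]:
            """Ping every host in a subnet and return the addresses that respond."""
            live = []
            for host in ipaddress.ip_network(cidr).hosts():
                # '-c 1' = one probe, '-W 1' = one-second timeout (Linux ping flags)
                result = subprocess.run(["ping", "-c", "1", "-W", "1", str(host)],
                                        capture_output=True)
                if result.returncode == 0:
                    live.append(str(host))
            return live

        if __name__ == "__main__":
            print(ping_sweep("192.168.1.0/28"))

    Anything that answers here but is missing from the asset register is exactly the kind of unknown device that never receives a patch.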

    Don't just look at the security

    Knowing the whereabouts and health of the IT estate is paramount, as it provides the intelligence for ensuring it is secure. A study of public sector chief information officers in December 2012 found that 87 per cent of respondents were either concerned or very concerned about the risks associated with IT security breaches.

    This is a clear indication of the priority this holds for many technology experts. In addition to security, also keep an eye on IPsec, which can be used to protect data flows between a pair of hosts (host-to-host), between a pair of security gateways (network-to-network), or between a security gateway and a host.

    Define your patch nirvana

    While the audit and assessment element of patch management will help identify systems that are out of compliance with your guidelines, there needs to be additional work to reduce non-compliance.

    Start by creating a baseline: a standard with which you want the entire estate to comply. Once that is complete, it's easier to bring controls in line to ensure that newly deployed and rebuilt systems are up to spec with regard to patch levels.
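
    A toy sketch of the comparison step is shown below; the package names, versions and the naive string comparison are all stand-ins, and a real tool would pull the installed list from each host and parse versions properly.

        # Compare what is actually installed on a host against the agreed baseline.
        installed = {
            "openssl": "1.0.1e",
            "java-runtime": "7u15",
            "acrobat-reader": "11.0.02",
        }

        baseline = {
            "openssl": "1.0.1e",
            "java-runtime": "7u21",      # minimum version the policy allows
            "acrobat-reader": "11.0.02",
        }

        def non_compliant(installed: dict, baseline: dict) -> dict:
            """Return packages that are missing or below the baseline version."""
            issues = {}
            for package, required in baseline.items():
                current = installed.get(package)
                if current is None or current < required:   # naive string compare, illustration only
                    issues[package] = {"installed": current, "required": required}
            return issues

        print(non_compliant(installed, baseline))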

    Face the facts

    You must know which security issues and software updates are relevant to your environment. Further analysis of our data showed that 50 per cent of PCs and laptops are still running Windows XP, and 32 per cent of devices are over four years old.

    Beyond patch management and the protection against vulnerabilities and exploits, which by now must have caught the attention of heads of IT globally, is the preparation and planning ahead of the end of Windows XP support. If you do not replace these systems, there is no way to safeguard them.

    If you do replace, this has implications on expenditure. So ensure you not only have a realistic view of patch management and its limitations, but also ask whether the discipline of patch management indirectly ensures the infrastructure and IT estate is a viable one from support and budget perspectives.

    Do it your way with software policies

    You can customise policies targeted at filters or groups at the account or profile level. The filter targets can be either the default filters provided within your account or any custom filters that you have previously defined.

    The secret here is to define custom filters or groups to identify devices with specific criteria, and one or more of these filters can be associated with a policy so as to target those devices. Of course, this goes back to your baseline creation: set the policies from the outset and customisation will be a simple step forward.

    Is the time right?

    Why wouldn't you implement a patch management update as soon as you can? With baseline mechanisms in place, there's no need to delay. If you have a solution that automates the process, then you would have given some consideration as to your ideal timing.

    Consider the time of day for updates by policy – what time will have the least impact on day-to-day business? The ideal timing for updating patches should follow any rollout best practice.

    Consider the day of the week, the impact on the business if something doesn't go smoothly, and whether there is sufficient resource and time to rectify problems if necessary. Furthermore, if your IT management solution is on-premise rather than cloud-based, you might even have to take responsibility for the scale and load of the update.

    Audit first – is it too broken to be fixed?

    Gaining visibility of devices that are vulnerable is crucial, but so is analysing the overall health of each device. Ensure all devices are audited prior to rolling out patches or patch policies. There could be a more urgent matter requiring attention before the device can be brought in line.

    Big estates, big patches, big problems?

    We are led to believe that the bigger the enterprise estate, the more complex the management, but in most cases solutions are easily scalable. The issue comes with usability: as complexity increases (and, in some cases, the number of solutions and providers also grows), the technology team is relied upon more and more to ensure the estate is kept up to date.

    Keep usability as simple as possible, as there are solutions that do not require a technically skilled person to ensure the estate is kept up to date quickly and easily. In fact, it's much easier than you think.

    Tiny budgets, smart people

    So often patch management is a distress purchase, because vulnerabilities such as the ones we've seen recently place patch management in a crisis management budget and not an ongoing IT budget. This has financial implications, of course.

    Visualise your patch management

    Make sure you can see a graphic representation of your patch management, tailored by severity and whether the patch requires a reboot or user interaction.

    This also fundamentally supports measurement, allowing you to report against service level agreements in a way that's visual. Not only will this help with compliance, but it will demonstrate that you, as the technology expert, are making a difference to the business. This makes for better relationships throughout an organisation, whether internal or external.
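
    If no such dashboard is to hand, even a few lines of scripting can produce a usable picture; the sketch below (with made-up numbers) charts outstanding patches by severity and how many of them require a reboot.

        import matplotlib.pyplot as plt

        # Made-up counts of outstanding patches per severity band (for illustration only).
        severities = ["Critical", "Important", "Moderate", "Low"]
        outstanding = [12, 34, 57, 21]
        needs_reboot = [9, 20, 15, 4]

        fig, ax = plt.subplots()
        ax.bar(severities, outstanding, label="Outstanding patches")
        ax.bar(severities, needs_reboot, label="...of which need a reboot")
        ax.set_ylabel("Number of patches")
        ax.set_title("Outstanding patches by severity")
        ax.legend()
        fig.savefig("patch_status.png")   # drop the chart into the monthly service report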

    Ian van Reenen is CTO of CentraStage

    Getting the knack of NAC

    De-perimeterisation, wireless, mobility and sophisticated threats have rejuvenated the adoption of network access control (NAC) technologies.

    NAC is employed to ensure that only acceptable and trusted devices can appropriately access network resources as per policy.

    Even though NAC tools have evolved and more than 25 per cent of global enterprises are using it, there still exists modest trepidation towards deployment. After going through the entire process of researching, evaluating and selecting a NAC product, you're now faced with actually deploying the solution in-house.

    If you're in charge of the deployment there are steps you can take to make the entire process flow easier and be less stressful for everyone involved:

    Identify the project owner and success criteria: Most successful IT projects require a champion; someone to remove internal roadblocks that emerge as an organisation works through the deployment process. In order to report progress, some success criteria need to be assembled for each phase of the implementation.

    Create a cross functional NAC deployment team: NAC, like any security tool, can and will affect employees, guests and other departments in an organisation; working with the project owner, select key individuals from each department to participate in a deployment team.

    Develop NAC use cases: The deployment team should be leveraged to create use cases for NAC, spanning topics including: employee device and security configuration requirements, how guests and their devices will be registered and segregated, how contractor devices will be managed and how to manage the use of corporate-provisioned and personal mobile devices. The use cases should be ranked according to need/risk and could be put into main issue categories.

    Agree upon key security issues to be addressed by NAC: These items should be formally documented now that use cases have been identified and prioritised. Key security issues can include endpoint compliance, guest management, wireless access, mobile security, inventory, internal and governmental compliance requirements and bring your own device (BYOD).

    Develop policies to address these security concerns: Review how these security issues are currently being addressed (or not). For example, what anti-virus product is required on all of the endpoints and how current do signature files need to be? How will non-compliant endpoints be handled? Should the incidents be logged and should an internal department be notified? During this stage internal deployment obstacles will be identified, and can include: poor processes, lack of cooperation between internal departments, refinement of corporate politics and, possibly, additional network equipment or changes.

    Determine the deployment timeline and milestones: Identify how the NAC solution will be rolled out internally. This includes planning installation and activation by location, ownership or segment, and allowing time to assess the success of the rollout before moving to the next location. To avoid impact on user experience and to identify exceptions and security gaps, NAC should initially be deployed in an audit-only mode, with enforcement and remediation actions disabled.

    Inform IT staff and end-users of NAC deployment and policy: NAC adds technical controls to discover and classify endpoints, identify violations and enforce policy, so it is imperative to notify all departments of any new or changed policies and any changes to operations. Also, interface with HR and legal to relay the acceptable use policy that may be enforced using NAC. For different IT departments, relaying the potential access to NAC capabilities may also expand tool use and value. For example, help desk staff may consider using NAC to facilitate incident response and to resolve the IP, MAC address and location of a given endpoint under investigation.

    Audit the internal network with the NAC solution using the agreed-upon policies to assess compliance: NAC is able to provide real-time security posture assessment of all endpoints. By providing audit data, all members of the deployment team can gain a clear view of what needs to be done to meet agreed-upon compliance levels. Once endpoint exceptions have been handled and reasonable compliance has been reached, more advanced enforcement and automated remediation options are available through the NAC solution.

    Refine NAC policies, procedures and operations: NAC offers very powerful and useful features to identify all devices – managed and unmanaged, wired and wireless, PC and mobile – attempting to access network resources. Once NAC has been deployed and activated, most organisations will identify additional policy and procedures that need to be refined, including: infrastructure integration, upgrading, exceptions, remediation and reporting. These can be documented, reviewed, phased-in and adjusted on a frequent basis.

    Monitor and report on the deployment results: The NAC deployment team should meet regularly to assess the success of the deployment and evaluate how the security concerns and compliance requirements are being addressed. By reviewing the success criteria, performance metrics can be more easily shared and new policies, exceptions and initiatives can be discussed and agreed upon.

    Toni Buhrke is a systems engineer at ForeScout


    Forescout is exhibiting at Infosecurity Europe 2013, the No. 1 industry event in Europe held on 23rd – 25th April 2013 at the prestigious venue of Earl's Court, London. The event provides an unrivalled free education programme, exhibitors showcasing new and emerging technologies and offering practical and professional expertise. For further information please visit www.infosec.co.uk


    Why the ICO's BYOD guidance may translate to 'bring your own data breach'

    When the UK's Data Protection Act (DPA) was introduced in 1998, almost all data was generated, consumed and stored on company owned and managed equipment.

    This was usually desktop PCs that had a typical lifecycle of five to seven years and used an ‘industry standard' operating system. The situation is very different in the contemporary business landscape; while data is still largely produced on PCs, it is increasingly accessed on user-owned, and often hand-held, devices such as laptops, tablets and smartphones.

    This brings a range of complications to businesses, notably the increased risk of data loss through theft or misadventure, the requirement to secure a bewildering array of mobile phone operating systems (including multiple versions of those operating systems), and a higher disposal/recycling frequency.

    It is encouraging to see that the Information Commissioner's Office (ICO) has recently issued updated guidance notes that re-interpret the 1998 Data Protection Act to reflect how mobile technologies are changing the workplace.

    One key issue addressed in the document is ‘bring your own device' (BYOD). This states that the data controller (i.e. the business) must have security in place for BYOD to prevent personal data from being accidentally or deliberately compromised. This means that although corporate IT has less control over the configuration and specification of devices used by their information workers, any data breach reported to the ICO and found to originate from a device owned by an employee/contractor is still the legal responsibility of the data controller.

    It is also worth noting that the Data Protection Act (DPA) is applicable to any company operating in the UK, regardless of whether it is registered in this country or overseas.

    While the ICO has successfully addressed a number of core issues to bring the DPA in line with the times, it does not cover the full lifespan of users' mobile devices. Even if a business has a functioning BYOD policy to safeguard sensitive corporate and personally identifiable data while a device is in use, these efforts can be futile if that data is not systematically wiped when the handset is sent for disposal or recycling.

    This issue is exacerbated by the shorter upgrade cycle for consumer mobile phone contracts, which are typically 12 to 24 months.  

    At the end of last year, the ICO went some way to tackle the issue of how to deal with obsolete or surplus devices by issuing its IT Asset Disposal Guidance Notes. While this acknowledged the importance of deleting personal data, it did not specifically address one key problem facing businesses: standard data wiping techniques simply will not work for devices using solid-state drives (SSDs).

    This is becoming significant given that SSDs are used in two devices that are becoming ubiquitous in the corporate world: smartphones and tablets.

    Data security legislation is in its infancy and cyber crime is endemic in these markets, so any inadequately wiped mobile device ending up in the wrong hands has the potential to wreak havoc. This means data processors must use data wiping solutions that are auditable and offer a certificate of data sanitisation in order to ensure BYOD schemes will benefit, not harm, their business – even after a device has been decommissioned.

    Ken Garner is business development manager at BlackBelt

    "We can offer 100 per cent protection rather than detection"

    "We can offer 100 per cent protection rather than detection"

    Last week I met with a company new to the UK, which offered the above guarantee, one that I suspect may raise more than a few eyebrows.

    Founded by the team behind the open source Xen Hypervisor, Bromium is a company bringing its virtualisation expertise to the desktop with vSentry, a 'micro virtual machine' (MVM) which, it claims, can eliminate infections from web surfing and malicious documents by opening and viewing them inside the virtual environment. Now with research and development, sales and support established in Cambridge, the company is ready to take on the UK market.

    Meeting with co-founder and senior vice president of products Ian Pratt and vice president of marketing Franklyn Jones, they told me that users get zero value and worth from endpoint security 'and the area needs innovation and reinvention'.

    Jones, who previously worked at Palo Alto Networks on its launch into the UK five years ago, said that the reality of the endpoint is that users have given up on trying to secure it, and a renewed vision is needed.

    He said: “There are options out there: for anti-virus, signatures are broken; for whitelisting, you need to decide what is allowed and what is not; sandboxing is good at detecting, but there are vulnerabilities that let you find your way out; and threat forensics is false positive hell, so there is not a solution.

    “What we offer is not detection, we focus on protection and do it by isolating the threat and activity and treat it like an isolated task. We leverage the Intel VT to offer a layer of isolation, so it is 100 per cent protection.”

    Pratt told SC Magazine that what he and fellow Xen Hypervisor co-founder Simon Crosby wanted was to transform the client's impact on security. “A user can go about their business, opening emails, and they do not compromise the network,” he said.

    “Existing technology is useless at this; it requires the system to detect and incidents show that it is impossible to do detection. There are many different ways to get on to an endpoint: email; USB; vulnerabilities, this is what we deal with. We isolate the threat using virtualisation, we believe we can transform the client.”

    The concept involves putting the MVM on to the device, so that everything you open runs in a virtual environment built into the operating system. “We created the MVM to make endpoints immune to malware as it throws it away,” Pratt said.

    I asked them: if the session secures the activity, what happens if the session itself is compromised, and how long does it last? They said that it exists for the life of the activity; so, for example, if (like me) you start your computer and open browser tabs that stay open for the entire working day, that session remains open as long as you keep it open, and the same goes for any Office document or PDF file.

    Pratt said: “It is a different approach from what everyone else is doing. We are working with Intel and Arm and every device that has an Intel chip has this built in. Also ARM devices such as the Nexus 10 and Samsung Galaxy S4 have virtualisation capabilities built-in too. This is the same hardware capability built-in and this will revolutionise endpoint security as it leverages protection through hardware.”

    I asked them how this can be managed from a business perspective, as it is one thing to say you can stop all infections and be immune to attacks, but what visibility are IT managers given? Pratt said that a management console shows the footprint trail of an attack and how it would have infiltrated the enterprise.

    So will this remove the need for anti-virus too? “This is running in an isolated environment. You can install anti-virus alongside it, but those who see our vision will realise security tools are no longer required,” Pratt said.

    Finally, what about that '100 per cent protection' claim? Pratt said that he appreciated that some would sneer at this, but what Bromium is doing is 'making it methodically better', while Jones said that this will also remove the rush to patch, as the 'security body is the Hypervisor and outside of that, you don't care'.

    Some will not have read all of this article, while others will have seen the headline and read on in disbelief at the claims. Either way, some will read this and realise that Bromium is doing something different, which is what security has cried out for: rather than reinventing, it has taken its original concept of virtual containers and applied it to create a secure environment.

    As for that 100 per cent protection claim, we'll see how that holds up if the worst should happen.

    Mobile security - the final frontier

    Awareness of mobile security is ‘there' but this is just one part of it.

    In recent conversation with AVG chief technology officer Yuval Ben-Itzhak and senior security evangelist Tony Anscombe, they told SC Magazine that awareness of security on PCs had been achieved, but there was some way to go on mobile.

    Ben-Itzhak said: “We can put the spotlight on Facebook, but also in LinkedIn, people spend time on their phones so we will see more spending on security for phones this year. Once a product receives five per cent market share, then malware appears, and we are starting to get mass attention.”

    A recent survey of 5,107 smartphone users from the UK, US, France, Germany and Brazil by AVG found that 70 per cent of consumers are unaware of security features on their device that allow data to be deleted remotely. This was after it found that one in four stored intimate photos or videos on a smartphone or tablet.

    Anscombe said that this showed that there was uncertainty around control over mobile devices, particularly if a device is lost, where a user could then be socially engineered ‘as they already have half of your credentials'.

    He said: “We asked if they do their banking on their phone and 36 per cent said yes while 78 per cent do it on their PC. 80 per cent said that they were aware of the threat, so this showed that there is a perception of insecurity. When it comes to banking, people are aware that they need something, they buy a PC and it comes with anti-virus, but when you buy a phone it doesn't come with anything and you begin downloading apps.”

    Research released by Appthority found that of 50 Apple and Android applications assessed, 100 per cent sent and received unencrypted data on iOS, compared to 92 per cent on Android. It also found that 60 per cent of apps tracked user location on iOS, compared to 42 per cent on Android. Finally, 60 per cent of the iOS apps shared user data with third parties, as opposed to 50 per cent running on the Android platform.

    Ben-Itzhak said that the lifetime of applications can be very short and, as more and more people demand content from outside their official application stores, if an attacker were able to infect a phone it would more likely be used as part of a botnet than to steal something, because people are not banking on their phones.

    “So it becomes part of the ecosystem as you can have a premium number in a country and this makes the life of the hacker ten times easier,” he said.

    “We see an increase here; it is not quite at the stage of PC malware yet though. BlackBerrys now run Android apps too, so this can open a backdoor and allow an attacker to exploit two platforms and once that is connected to a PC, it can be used as a way to transfer malware.”

    The rise of mobile malware has been well covered and researched, and each year I receive many predictions that this will be the year that there is an explosion. Mobile malware is unlikely to reach the level of PC malware for some time, due to the number of platforms and the inherent security of the devices.

    But if attackers knew these sort of statistics and could put the time and effort into targeting specific users, then we would be in all sorts of trouble.

    Twenty-five years of vulnerabilities - don't believe the modern hype

    Twenty-five years of vulnerabilities - don't believe the modern hype

    Vulnerabilities and flaws are a part of everyday security it seems, especially with the same software constantly affected by zero-days.

    Is this a historical problem, and when did the issues begin? I recently met with Sourcefire, whose senior research engineer Yves Younan had compiled a report on '25 years of vulnerabilities' using information from the Common Vulnerabilities and Exposures (CVE) database and the National Vulnerability Database.

    Younan said that assessing 54,000 vulnerabilities allowed him to see an overall trend: the total number of reported vulnerabilities peaked in 2006 with 6,612, while the number of vulnerabilities with a ‘high severity rating' peaked a year later, with 3,159 reported in 2007. Since then the number has slowly declined, with 1,760 reported in 2012.

    However last year saw the largest number of vulnerabilities with the highest CVSS score of ten reported.
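    For readers curious how such trends are tallied, the arithmetic is straightforward: each CVE entry carries a publication year and a CVSS base score, and counting entries per year within each severity band gives the sort of picture Younan describes. A minimal Python sketch, using made-up records rather than the real CVE/NVD data feeds, might look like this:

        from collections import Counter

        # Hypothetical (year, CVSS base score) records; a real analysis would pull
        # these from the CVE/NVD data feeds rather than hard-coding them.
        records = [(2006, 7.5), (2007, 9.3), (2007, 4.3), (2012, 10.0), (2012, 5.0)]

        def severity_band(score):
            # CVSS v2 convention: 0-3.9 low, 4.0-6.9 medium, 7.0-10.0 high.
            if score >= 7.0:
                return "high"
            if score >= 4.0:
                return "medium"
            return "low"

        by_year = Counter((year, severity_band(score)) for year, score in records)
        for (year, band), count in sorted(by_year.items()):
            print(year, band, count)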

    I asked Younan what the typical vulnerability was, and he told me it was mostly buffer overflows, which had 7,809 reports over the 25-year span of the research, while widely reported application flaws such as SQL injection and cross-site scripting (XSS) "are not that severe".

    The research found that the vendor with the most reported flaws was Microsoft with 2,934 (1,696 high severity), while the most reported product was the Linux kernel. Younan said that this was likely down to it being a ‘coded open project'. I asked him if he felt that this was also down to Linux being used by more tech-savvy users who would find and report such flaws, and he agreed.

    For overall severity, Windows XP takes the top spot with 453 reports, while Firefox was the highest for critical severity reports with 174.

    I asked Younan why he felt there was such a rise in 2007 and a steady decline since. He said: “There is a relatively small drop in severity until 2010. I am surprised at how many total vulnerabilities Linux had, but they were all relatively not severe.”

    The report covers the 25 years from 1988 to 2012, and Younan told me that he selected this time period for the round number, although CVE did not begin compiling reports until 1999 so there was not much research data until then and most of what was there was added later.

    The report makes for interesting reading and shows that while news about flaws and vulnerabilities is well covered, perhaps we are in a better state than we realise.

    Evernote - a story that has combined all security trends?

    Evernote - a story that has combined all security trends?

    The attack on Evernote that was reported last weekend could be deemed to be a new stage in the battle of man v password.

    According to the blog post issued by the cloud-based data storage application, it suffered a "coordinated attempt to access secure areas of the Evernote Service", forcing it to reset 50 million passwords after suspicious activity was detected and blocked on its network.

    While it said that no information or payment data was accessed, it is a knock for cloud-based applications, less than a year after Dropbox suffered similar problems.

    According to a post by security blogger Brian Krebs, Evernote didn't say which scheme it was using to hash passwords "but the industry standard is a fairly weak approach in which a majority of passwords can be cracked in the blink of an eye with today's off-the-shelf hardware".
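    Evernote has not confirmed which scheme it used, but the distinction Krebs draws can be illustrated with a short Python sketch: a fast, unsalted hash against a salted, deliberately slow one. The function names and iteration count below are purely illustrative and are not anyone's actual implementation:

        import hashlib
        import os

        def weak_hash(password):
            # Fast, unsalted hash: identical passwords give identical digests, and
            # off-the-shelf hardware can try enormous numbers of guesses per second.
            return hashlib.md5(password.encode()).hexdigest()

        def stronger_hash(password, salt=None):
            # A per-user salt plus a deliberately slow, iterated hash (PBKDF2 here)
            # defeats precomputed tables and makes brute force far more expensive.
            salt = salt or os.urandom(16)
            digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200000)
            return salt, digest

        print(weak_hash("correct horse"))            # same input, same output, every time
        salt, digest = stronger_hash("correct horse")
        print(salt.hex(), digest.hex())              # a fresh salt gives a fresh digest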

    Phil Lieberman, president of Lieberman Software, said: “The reality of the situation is that the loss of the encrypted password file is probably a non-event since the ability to figure out the actual passwords is pretty much impractical. I believe that the company figured that the decision to ask users to change their passwords was in actuality a real abundance of caution and a potential protection against a lawsuit.”

    Password security is as old as the internet, or at least it feels that way, as the same situations come around again and again and we end up at the same old problem: how to persuade users not to re-use passwords and to choose something more secure yet memorable.

    Intego's Lysa Myers said that password reset notices from breached vendors are becoming a weekly occurrence. “The attackers made off with only email, usernames, and password information, but the passwords were salted and hashed and thus not as useful for malicious purposes,” she said.

    Jody Allen, consultant at Information Risk Management, said: “The Evernote security breach is the latest in a recent succession of attempts by hackers to circumvent corporate security procedures to gain access to their data as well as our own.

    “For the common user, the releases that followed would do nothing to reassure them that they have little to worry about. Talk of hashes and salts being compromised rather than actual passwords will frankly be overlooked - the overarching point being that 'my details were compromised'.

    “As we move into the future of cloud computing, with users putting more of their everyday life into unknown online storage, maybe others should be heeding these examples as Evernote moves towards new ideas. That is not to say the process of two-factor authentication is new, just a new ‘concept' for end-user products. After all, it has been well implemented for quite some time within corporate work places.”

    Aside from the angle of the password issue, is this more a case of the security of cloud-based applications such as Evernote and Dropbox? I spoke to Luis Corrons, technical director of PandaLabs, who said that no one can raise their hand and say they can prevent something like this happening.

    He said: “Even more after the last cases we have seen (Twitter, Facebook, Apple, Microsoft) there is not a 100 per cent safe place. However that cannot be used as an excuse to not be aware of what is happening in your internal network.

    “From a user perspective, it is true this kind of cases can cause them fear. Which is not a bad thing for a number of reasons: danger is out there, attacks are happening all the time and it is very important to be aware of that; and users will demand their providers to ensure they are taking serious security measures.

    “We thought 2012 was a bad year, with lots of attacks happening everywhere. Well, it looks like 2013 is going to be even more interesting in this field, in the first two months of 2013 we are learning how any company, no matter which size, can be a victim of these attacks. And we still have ten months left of this year to enjoy.”

    Lieberman said that as systems become bigger and populated with ever more valuable resources, these type of flaws become harder and harder to find and the value of discovering a flaw becomes higher.

    Mark Bower, vice president of product management at Voltage, said that in the cloud, an attack can topple many systems like dominoes.

    “So, if Evernote was following best practices as it seems, how did the attackers get in? Very likely there was a Java or zero-day exploit leading to system penetration. Maybe an insider opened a malicious email from spear phishing. We may never know, but once again it shows that what was once considered the impenetrable barrier, the enterprise perimeter, we really now have just a semi-permeable membrane only as good as the weakest link,” he said.

    Bower, like Corrons, predicted that we will see more breaches of this type in 2013, saying that cloud application adopters who have assumed that the cloud infrastructure or firewall is sufficient to protect data are likely in for a few surprises and may need to rethink their data security strategy very quickly.

    The key themes of this attack – application security, cloud security, passwords, zero-day vulnerabilities – have been covered over and over, but this does not make it any less important, especially in a case when it combines all of these factors.

    There remains a shortage of programme testing and recovery plans to protect data in virtualised cloud environments

    There remains a shortage of programme testing and recovery plans to protect data in virtualised cloud environments

    Virtualisation and the cloud are bringing greater flexibility, agility and capabilities to users - but very little has been done to test data recovery plans.

    Yet this lack of preparation can have serious consequences if a data disaster strikes. Adoption might be inevitable but it takes time and investment to create a data recovery plan that can protect businesses. It might require some cost upfront but safeguarding data can provide long-term savings that are too big to ignore.

    Over the past few months Kroll Ontrack has conducted research with VMware to gauge perceptions about virtualisation and the issue of data recovery. Our findings reveal that while both trends have gained a lot of ground in terms of adoption, most organisations fail to test and implement data recovery plans.

    This is a serious oversight when one considers the surge of information being transferred into virtual environments – and the impact that losing it can have on the reputation and financial performance of a company.

    Data loss and virtualisation - reality check

    Data loss isn't the first thing that comes to mind when businesses adopt a virtualised environment. Organisations are often too caught up with the benefits that these trends bring – namely the cost savings associated with maximising the use of computing resources and streamlining processes.

    However, businesses that buy too much into the cost saving benefits of trends such as virtualisation don't take the necessary measures to protect data and end up having major data losses. Users will only make cost savings with virtualisation if the implementation is solid and data is secure.

    In a virtualised environment, the most important component is the data and this is the only thing that does not get virtualised. Users can reconstruct and recreate any other component in a virtual environment within seconds and with just a few clicks, but this cannot be done with the data that is created in one's virtual environment.

    Therefore, while businesses can make savings everywhere else in a virtual environment they should be spending more money protecting the data when they move to a virtual environment. Of course, this seems to go against what virtualisation is about, which is saving money. But up-front investment to protect data is more important in order to avoid even costlier data losses down the line.

    Increased chance of losing data

    According to a Kroll Ontrack and VMware survey completed by 338 IT professionals at a recent VMworld conference, 37 per cent of respondents believed that virtualisation significantly decreases the chances of data loss.

    In reality, the chance of minimising data loss is only possible if data backups are performed correctly and tested carefully. Otherwise, the impact of data loss is greater than before, since a data disaster in a virtualised world can bring down many servers that share the same storage.

    Surprisingly, 20 per cent of respondents believed that virtualisation doesn't affect the chance of data loss at all. Are they not responsible for backups/data recovery? Perhaps they do not understand the complexity of virtualised systems. Data loss is always an issue, regardless of what IT infrastructure is used.

    Rebuilding data creates more risk

    Another important finding of the survey was the respondents' answers to the question of what to do to recover lost data. Just over a third (36 per cent) of respondents said that if virtualised data is lost, they would try to rebuild the data themselves instead of calling a data recovery company; 22 per cent of respondents said they would take this decision.

    Doing it yourself often makes data recovery much harder, and in some cases makes it impossible to retrieve anything. A lot of the complexity is hidden from users and administrators when systems are virtualised. Without a solid data recovery programme, it is very easy to lose data. There are too many risks involved in rebuilding data, and using a reputable recovery company is the best option to avoid any problems.

    Lack of data recovery plans

    In another survey conducted by Kroll Ontrack and VMware, respondents were asked whether they tested data recovery plans regularly to ensure proper protocols are in place to protect data on virtualisation and the cloud.

    This survey, carried out at VMware Forums globally among 367 IT professionals, found that while 62 per cent of survey respondents admitted to leveraging the cloud or virtualisation, only 33 per cent of these organisations tested data recovery.

    This is an important finding - and a remarkable one - considering that 49 per cent of organisations also reported experiencing some type of data loss in the last year. A quarter (26 per cent) of the respondents reported a data loss from a virtual environment, while three per cent reported a loss from the cloud.

    Minimising data loss

    Kroll Ontrack's research shows how quickly cloud and virtualisation are gaining ground among organisations. However, history has taught us that data loss can occur in any environment - regardless of the specific technology.

    The way to reduce data loss risk and successfully recover from a loss is to ask the right questions prior to adopting a new storage medium and to amend your policies and procedures accordingly. Important questions to consider before adopting cloud or virtualisation include:

    • Are backup systems and protocols in place? Do these systems and protocols meet your own in-house backup standards?
    • Does your cloud vendor have a data recovery provider identified in its business continuity/disaster recovery plan?
    • What are the service level agreements with regard to data recovery, liability for loss, remediation and business outcomes?
    • Can you share data between cloud services? If you terminate a cloud relationship can you get your data back? If so, what format will it be in? How can you be sure all other copies are destroyed?

    Data loss incidents continue to grow in size and complexity as more organisations move into virtual environments. There has been a 140 per cent increase in virtual data loss when compared with the year before, and this number will undoubtedly increase as more companies embrace new trends such as desktop virtualisation and BYOD.

    The only way to minimise these risks is to know which problems to watch out for and to establish a formal incident management plan that includes a data backup strategy.

    Robert Winter is a chief engineer at Kroll Ontrack

    Proactive security: is one technology enough?

    Proactive security: is one technology enough?

    It's becoming a daily occurrence – another day, another article about how anti-virus is unfit for the task of protecting customers against different types of attacks.

    Instead, you should be applying X, Y, or Z brand-new bleeding-edge malware detection technology. Yes, there is an ever-growing tidal wave of malware, which is not going to slow down any time soon (if ever). No, anti-virus on its own is not enough (nor was it ever intended to be) for the sole protection of any network.

    For that matter, neither is any other technology a panacea of protection. But does that mean we need to invest in a totally new technology at the expense of more traditional measures?

    It depends, naturally, on the needs of your network or your company policy. But, barring some very uncommon set of circumstances, anti-virus is still a very useful tool that many organisations are not using to full advantage.

    Traditional anti-virus is intended to do two things: identify and remove known malware. I would be hard-pressed to imagine a situation where a network has no need of identifying machines that are harbouring known malicious files – even if they have a forensics team to find out precisely what malware has done or is capable of doing, or even if they also format and re-image all machines that have been compromised.
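    At its simplest, that 'identify known malware' job is a lookup of file fingerprints against a list of known-bad signatures. The sketch below is a deliberately crude illustration of the idea; real products use far richer signatures and heuristics, and the digest set here is a placeholder rather than any vendor's actual data:

        import hashlib
        from pathlib import Path

        # Placeholder set of SHA-256 digests of known-bad files; a real deployment
        # would load vendor-supplied signature data instead.
        KNOWN_BAD_SHA256 = {
            "0" * 64,
        }

        def sha256_of(path):
            h = hashlib.sha256()
            with path.open("rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)
            return h.hexdigest()

        def scan(directory):
            # Walk the directory and flag any file whose digest matches the known-bad set.
            for path in Path(directory).rglob("*"):
                if path.is_file() and sha256_of(path) in KNOWN_BAD_SHA256:
                    print("Known-bad file found:", path)

        scan(".")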

    Likewise, anti-virus is helpful for filtering known-bad content. Whether it's only filtered at the gateway or also at the desktop, it can be a way of easing network congestion.

    Yet security should not stop at anti-virus, as this only attacks one aspect of information security. Proactive security requires both intelligence and containment. If anti-virus only detects known malware, what do you do about unknown malware? Many traditional AV vendors now offer fully-fledged security suites, with firewalls and various other behavioural or reputational scanning components.

    If you're buying an enterprise-level anti-virus product, you are almost certainly getting some of these features, and plenty of other security companies offer standalone versions of these tools, which means there are many excellent options to choose from.

    If you're not familiarising yourself with all the features in your existing product or the other well-established products on the market, you may not know what you already have. Those 'bleeding-edge technology' purchases may well be fruitless or redundant, trading well-tested technology for something whose weaknesses are not yet known.

    If you're using all of the technology you have and you're still having more security incidents than you would like, you need to get more intelligence about what's happening in your network. Do you know how these incidents are happening? Is it a particular group or individual, or a particular type of network traffic? Can (or should) the problem be solved without resorting to additional purchases? Once you have gathered some data, you can make meaningful decisions and purchases.

    Rage-quitting poorly applied protective technologies is a great way to shoot yourself in the foot. Most companies fail both to apply traditional tools and to gather security incident information adequately, and criminals are easily able to slip through the cracks. Once you've covered both bases, you can make useful and informed decisions about seeking out hot, new tools.

    Lysa Myers is a virus hunter for Intego

    Headspace - the security mindfield

    Headspace - the security mindfield

    Information security, like a post-pubescent teenager, is struggling to shrug off the mistakes of the past.

    Many of us remember our path to maturity as one of awkwardness: your adult body setting the wrong expectations for your childlike mind, and your responses to new and novel situations often involving lashing out. Of course, at the time you didn't know that you were doing this; indeed, any probe by an adult into your irrational behaviour would have been met defensively and loudly.

    The information security world has gone through this phase, and now finds itself with both an adult body and an adult mind, looking back at itself and wondering 'what was I thinking?'. We've all come across the old, irrational way of doing security: a business change request to leap onto the latest innovation and get ahead of the curve falls flat at the last hurdle, security.

    When probed as to why it couldn't be done, security has locked up tight, reacted defensively and slowed the process down while a ‘review' gets underway. More often than not, a self-styled expert proverbially kicks the tyres before loudly proclaiming that the solution is safe. Press them for details as to why it is safe, and they will baffle you with technospeak.

    The new way of running a secure environment is one based on risk: an approach that weighs the business impact of putting security controls in place against not putting them in place, and then wraps this up in a statement that business people can understand. This enables the business to understand the problem before it makes a decision.

    Let us not forget that while security may hold the keys to the gate, it does not own the kingdom it guards. Perhaps ironically, security should be transparent: it should be simple to see why we are doing things the way we do, and simple to quantify the business benefits against the cost of implementation and maintenance.

    Of course, the adoption of this new way of thinking is slow and, as the post-pubescent teenager finds, it can take a while to heal the wounds you inflicted during your ‘growing phase'. It doesn't have to be this way.

    In the modern world, businesses that are not agile will surely fail or at best fail to make as much money as they could. Business leaders need to call in their security teams and get them to explain to them in simple terms what they are doing and, most importantly, why they are doing it. Business leaders need to understand that the sky isn't falling and that the decisions made in the past may have been more about a young industry making its mark rather than best business practice.

    Leaders, do not let security run rings around you; learn to trust those who can explain why you need to spend your money, and question them. If you find that someone struggles or cannot fairly and realistically justify a security control, then they need to do something (anything) else. Watch out for the Judge Dredds of the security world; they are a throwback to yesteryear, when we needed someone to make a quick call based on instinct.

    Keep an eye out for the gems in your business and nurture people from customer-facing roles; people who strive to help are the people you need.

    Security people out there need to watch out for the word 'no' and replace it with the word 'sure', followed by 'let me work out how to make that safe'.

    Lee Barney CISSP-ISSMP is an information security risk management consultant

    Appthority on mobile risk management

    Appthority on mobile risk management

    On the first morning of the annual RSA Conference in San Francisco, I met with a company whose story began almost exactly a year ago.

    The first day of the RSA Conference traditionally sees the announcement of the winner of 'Innovation Sandbox', effectively a 'best newcomer' award for security vendors, and this year I met with Domingo Guerra, the president and co-founder of the company who won it last year.

    Appthority formed in 2011 to offer a reputation-based scanning technology for mobile apps, but was relatively unknown until its win at RSA, which Guerra called "a great catalyst" for the company, giving it great visibility and exposure in the market. He said that after winning the award, Appthority went from 10,000 scanned apps to one million, five employees to 15, zero funding to a good series of investment and five partners to 25.

    Guerra previously worked at Brocade, while his colleagues include McAfee alumni. He said he wanted to set up a company in the mobile device management (MDM) space, but that he didn't want to become the "158th company in the space", so chose instead to focus on applications and what they do.

    He said: “We saw that bring your own device (BYOD) was overhyped and IT was bowled over by what employees want, so now we see mobile application management (MAM) and containerisation. We wanted to get away from enforcement and management and look at intelligence and overall risk. What do IT want to block?”

    Appthority offers application reputation technology, which Guerra said is like URL reputation scanning and is ultimately based on the behaviour of the application. With millions of applications available for the iOS and Android platforms, he said that it is almost impossible to know what to whitelist or blacklist, so Appthority provides a service to identify whether an application shares credentials with a third party or gives away geolocation data, for example.

    “Mobility is so broad; you can do MDM, you can use our software-as-a-service portal where you send the apps to us, or you can build our API into your firewall to do more granular control, as we know the application's URL so we are on top of the data and can raise security and privacy concerns,” he said.

    “We call it mobile application risk management: we scan the application, take the information and decide what to do with it.”

    Guerra explained that the company has the ability to scan 10,000 applications a day, and it actively goes to application stores and looks at what is sent by users, expanding from security threats to more privacy-focused concerns. This has led to a partnership, announced this week, with Arxan, which undertakes research on 'Trojanised' applications.

    I asked Guerra what he thought of the BYOD space, particularly after BlackBerry launched devices to try and solve the problem with two distinct hubs. He said that this was an interesting approach, however it is flawed by the fact that some applications bridge the work and play gap, such as the camera and social networking. “It is better to say that the application can steal your information without your permission and realise that it is difficult to protect your intellectual property,” he said.

    “Instead educate on risk of applications and make developments to improve on the security of applications.”

    Asked about the future of the company, Guerra said that it will be in working more with firewall providers and collaborating with handset manufacturers. “The enterprise used to use software from a handful of vendors, now it is tens of thousands and it is not clear what the applications do, as they do not see the code.”

    The recent McAfee threats report for the fourth quarter of 2012 found that the number of mobile malware samples it discovered had increased by 95 per cent, and it mainly saw malware that searches for user details and Trojans that send SMS messages to premium services, then charge the user for each message sent. Considering this, it seems the reputation of apps is truly a key area for mobile security.

    Guerra said: “Malware accounts for a half of a per cent of what we see, while 80 per cent will try to access corporate data. Applications are not built with security in mind, as often even the third-party code will be built in afterwards.”

    The next step for Appthority is to launch into the UK and European markets. With a message as strong as this, it may well be heard loud and clear.

    The Hunt for Red October version 2

    The Hunt for Red October version 2

    The recent Red October wave of concerted cyber assaults demonstrates that social engineering is by far the most potent tool in the hacker's arsenal.

    As the Red October malware infected its victims via a targeted spear-phishing email and by employees downloading the customised Trojan dropper, the attackers managed to infiltrate organisations across the world.

    As discovered by the Kaspersky researchers, the malicious code was delivered via email in the form of Microsoft Excel, Word or PDF documents. The attachments contained the exploit code for known security vulnerabilities in these applications. In addition to the Office files, the hackers also used Java exploitation, which maximised the impact of the assault.

    In dealing with a potential sequel to Red October, business should adopt a more comprehensive approach. The fundamental change is recognising that our IT systems make up only ten per cent of data security; the other ninety per cent is our own behaviour and the physical security of our buildings.

    IT security can't really deal with this kind of socially engineered danger. However, a planned, socially led, security programme can help combat the problems an attack could create. Educating staff members needn't be a complicated and costly affair.

    The initial step should be an evaluation of current systems and processes, after which a plan of action for countering IT security risks could be produced. Penetration testing is one of the key ways in which a company can stay safe and protect their data. 

    Business owners should look for comprehensive penetration testing services that are fully integrated into ISO 27001 and ISO 9001 security and quality management systems. This provides an extra layer of confidence when it comes to the quality and confidentiality of the process.

    Although penetration testing is the most common method of managing data security risks, it isn't the only way. Large businesses will often appoint a Chief Information Security Officer who will provide the knowledge and experience needed to manage the threat in an organised and effective manner.

    The catch is that the typical price tag that comes with this kind of appointment is in excess of £120,000 per year. However, more affordable Virtual CISO, or vCISO, programmes, managed by senior-level people experienced in the CISO role, are also available.

    If your organisation is large enough to require a security leadership role, but not quite ready to dedicate an internal resource to the task, these tailored CISO programmes can help achieve your objective by working as a member of your senior management team, leading security programmes and initiatives.

    Fully managing vulnerabilities, with controls such as egress filtering around communication systems, will significantly reduce exposure to cyber threats. However, keeping your data secure calls for more than IT; it requires individuals to reach a certain level of vigilance and act as key holders to the company's assets and information.

     

    Peter Bassill is managing director at Hedgehog Security

    Hackers take control of Burger King Twitter

    Hackers take control of Burger King Twitter

    Hackers took control of the official Burger King Twitter account last night, claiming that the fast food chain had been sold to rivals McDonalds.

    Around late afternoon, the icon was changed to a McDonalds logo and the text said that the chain had been sold 'because the Whopper flopped'. As captured by Buzzfeed, the takeover lasted around an hour with some tweets containing racial slurs, obscenities and references to drugs.

    The account was eventually suspended and put back into the hands of the owners, who said in a statement: “It has come to our attention that the Twitter account of the Burger King brand has been hacked. We have worked directly with administrators to suspend the account until we are able to re-establish our legitimate site and authentic postings.

    “We apologise to our fans and followers who have been receiving erroneous tweets about other members of our industry and additional inappropriate topics.”

    Twitter said that it does not comment on individual accounts for privacy and security reasons.

    McDonalds absolved itself of any responsibility, tweeting that it empathised with its counterparts at Burger King. “Rest assured, we had nothing to do with the hacking,” it said.

    As for who was responsible, hacktivists Anonymous left a teaser tweet of ‘Who DID hack BK? Well… that is still anonymous', while Gizmodo pointed to a collective called the Defonic Team Screen Name Club.

    Burger King itself picked up tens of thousands of new Twitter followers and tweeted after an ‘interesting day', that it was back and hoped the new followers ‘all stick around'.

    This once again shows the weakness of a service that is protected only by a single password; if that password is not particularly strong, it can lead to incidents such as this.

    Proactive vs Reactive approaches

    Proactive vs Reactive approaches

    The concept of being prepared for the worst crosses over all types of incidents.

    In security, this means being proactive rather than reactive. Is this a pipe dream? Well, arguably not, according to one company I recently talked to. Jay O'Donnell, CEO of identity management firm N8 Identity, said that the challenge with IAM is that it simply allows you in or out, while his company's idea of ‘continuous compliance' is about knowing who has access to what, how they log in and what access and privileges they have.

    O'Donnell said that the 13-year-old company, which began life as a consultancy, focuses on trying to accomplish things rather than sell technology. “We found quite a gap between what companies were using, what they wanted to accomplish and what the products could do – it was fairly significant,” he said.

    “We help operationalise and give an opportunity for companies to layer in their own technology. The challenge is products don't have the scale and flexibility to solve problems as they are made up of components: directory, connectors, user stores and role management, and they are built to have ‘baskets' of access.

    “Systems do not work when they are scaled with a small rollout as they are not capable of role management and require coding and ‘on data' store level, and it is not technology for storing who has access to what. Most companies have serious gaps.”

    This, O'Donnell said, is down to products being reactive in nature, and as an organisation scales up, the issues multiply and become more complex. “Business managers are not in a position to certify users and what happens is the business doesn't understand what is pushed down, but it is not solving problems from a business perspective as they are not fixing what they are intending to fix,” he said.

    “We say identity management is a process with business involvement and they all need to participate in it. You can do it in a concerted fashion. You cannot be reactive; you want to prevent access from happening in the first place, instead you are putting out fires.

    “We say that proactive compliance is preventing access in the first place, we say be more reactive as people are not fixing the problem before it happens.”   

    O'Donnell said that in an organisation of 20 people this is not a problem, but if there are 10,000 then it becomes a huge problem. “We help identify who has that access to see who has access to what,” he said.

    Companies, he said, have spent upwards of £50 million on trying to solve this problem and ‘have still not scratched the surface of compliance', and that people are still trying to solve the problem after the event. “The reactive model is much more complex.”

    Listening to these comments, I did feel that O'Donnell had a point about the problem, but the question has to be asked: how many companies are able to be proactive against a threat that is unseen and mostly unknown? Compliance and risk management are often the key factors in helping with this.

    Nok Nok - we're the solution to your authentication problems!

    Nok Nok - we're the solution to your authentication problems!

    Often product announcements come from users, industry or simple trend demands.

    In the case of a company launching today, the driving force was an industry think tank that was looking not only to enhance the username and password, but to offer a solution that improves on a concept ‘that is based on 50 years old technology'.

    This has led to the launch of Nok Nok Labs, which is led by former PGP managing director Phil Dunkelberger, who told me that the industry needs an authentication protocol to have the plumbing work together.

    He said: “The working group started with a vision about not just enhancing username and passwords, but how to make them more resilient to everything we use.”

    Started in late 2009 with the working group, the Nok Nok Labs project really began in 2010/2011, when prototypes were built to present a trusted software model. “It was designed to be used on any device and on any operating system as a protocol,” he said.

    “This doesn't involve the public key infrastructure (PKI), certificates or anything, the group's vision is of strong authentication. I put a team together and the idea was to make technology more robust and easy to use.”

    CEO Dunkelberger said that the idea is to have an authentication system that binds you to a device, something that is not present at the moment. “You get in through weak authentication at the moment, we need a better way to get past the dissatisfaction and big implications of username and password,” he said.

    Backed by the working group and a management team of security industry veterans including PayPal CISO Michael Barrett and ‘Father of SSL' Taher Elgamal, Nok Nok Labs said that while there are many technologies that offer additional security, none are easy-to-use or scalable to internet-size populations.  

    Michael Barrett, chief information security officer at PayPal, said: “By creating an authentication infrastructure that leverages existing technologies such as fingerprint scanning and webcams, Nok Nok Labs is giving businesses the opportunity to authenticate anyone, anywhere and on any device. Given the billions of connected Internet devices and future growth of online commerce, PayPal sees a critical need to implement strong yet flexible authentication solutions.”

    Dunkelberger explained that the technology sits and waits for the user and asks the backend ‘what do you want to use as the second factor?' He said that there is no connection, so it mitigates man-in-the-middle attacks, and authentication is based on risk and profiles. He said: “How can you take the stuff out there and make it usable every day? If you want to put in a four digit PIN number, voice biometric or swipe a fingerprint you can.”

    This seems like a good idea; those of us who carry a two-factor token would probably see the benefit in using it for more, if not all, services. But if a user is bound to a device and authenticated by it, what happens if the user loses that device?

    Dunkelberger said: “If you lose your phone, you go to your PC and de-provision the device as you don't want it to identify you. You are using a custom piece of code, you use this for authentication and for multi-factor capabilities. We are not about selling authentication tokens; we enforce better use of strong authentication.”

    We've seen launches in the authentication technology space for many years now, all trying to cover the same ground in getting all users to adopt their technology to solve the problem. What Nok Nok Labs does differently is to let users keep using the same tokens and devices, but build a better backend that may iron out some of the password storage and breach issues that give IT managers and administrators nightmares.

    Will it succeed? With the right people and concept behind it then it may. Also in case you were wondering, the name, Dunkelberger explained, came from ‘knock knock - who's there'.

    Even Presidents get the blues

    Even Presidents get the blues

    It is 23 years since President Bush Senior talked of a ‘new world order' of a trusted and peaceful world.

    Now in 2013, he and his family have been the victims of a modern threat – cyber crime.

    The Bush family, which spawned two US presidents, has been the victim of an email hacker who accessed personal photos and sensitive correspondence.

    According to the Smoking Gun website, the details related to both George H.W. Bush (senior), who served as president from 1989 to 1993, and George W. Bush, who served two terms from 2001 to 2009.

    The hacker told the website that they had got ‘a lot of stuff', including ‘interesting mails' about Bush senior's recent hospitalisation, ‘Bush 43' and other Bush family members, after they accessed at least six separate email accounts.

    Among the details were a confidential October 2012 list of home addresses, Bush junior's home address, mobile phone numbers and emails for dozens of Bush family members. The posted photos and emails contain a watermark with the hacker's online alias ‘Guccifer'.

    Jim McGrath, a spokesman for George H.W. Bush, told the Los Angeles Times that the situation is being investigated by the authorities, while the Sydney Morning Herald reported that the US Secret Service is investigating how Guccifer gained access to material including pictures of Bush senior in a hospital bed, and the security code for a gate at one of his son's homes. CNBC reported that the FBI was keeping quiet, with Houston FBI spokeswoman Shauna Dunlap saying: “We do not confirm or deny the existence of any investigation.”

    There was little detail of how the email accounts were breached, although as email accounts were accessed it can be assumed that this was done by guessing the passwords to the accounts. A similar incident occurred in 2008 when the Yahoo account of Sarah Palin was attacked.

    Michael Sutton, vice president of security research at Zscaler ThreatLabZ, said: “It is an unfortunate violation of personal privacy when anyone's email is publicly shared, but when it relates to former US presidents, national security becomes a concern as well. The fact that the attacker was all too willing to make the email contents public suggests that the attack was done for the challenge, as opposed to something more nefarious.

    “The attacker did however clearly target the Bush family directly, having compromised accounts from multiple family members and friends. This should serve as a reminder to everyone that public email systems are accessible to the world and protected solely by a password. If that password is easily guessable, compromise is trivial.”

    While he was in power some ten years before malware became prominent, and the technology and cyber rules were detailed by his successor Bill Clinton at December's Dell World conference, it could be argued that Bush senior had visions of the world of cyber crime in his comments from the early 1990s.

    Bush senior spoke in 1991 about creating a new world order, where ‘the rule of law, not the law of the jungle, governs the conduct of nations'. He said: “When we are successful, and we will be, we have a real chance at this new world order. An order in which a credible United Nations can use its peace-keeping role to fulfil the promise and vision of the UN's founders.”

    Of course this was delivered around the time of the first Gulf War, but bearing in mind the recent moves to encourage collaboration, this stance by Bush shows that the need to share and work together is crucial. Would that have stopped this hacking from happening? Arguably not, as Sutton's point about this being a glory hack ‘for the challenge' shows that anyone can be attacked if the hacker works hard enough.

    For the new world order, maybe the idea is still yet to be achieved.

    The truth about the UK's cyber security capabilities?

    The truth about the UK's cyber security capabilities?

    At the end of last year, I highlighted a news story that described the UK's response to a cyber incident as ‘fragmented and failing'.

    The original story had come from Computing, which also featured a former cyber intelligence officer for the US Army and the Defense Intelligence Agency (DIA) named Bob Ayers. He told Computing that he felt that Britain's cyber security program was "a collective of independent entities" rather than a streamlined unit.

    In SC's story, we assessed these comments with some industry responses, mainly because of the claim that the UK is 15 years behind the US. This week I met with Ayers for the first time to further gauge his comments on this subject; Ayers set up the first US Department of Defense (DoD) response code lab in the early 1990s.

    He said: “I started from scratch and in the second year, I had 155 staff and a $100 million budget.” Ayers, who began in 1969 as a counter intelligence analyst using the Arpanet system in the Pentagon and working on a Tektronix 4051, said that he was approached by defence agents to look at defending against adversaries and putting together programs with students from Stanford University "to go through environments we were not familiar with".

    “In 2010-11 I was the senior cyber security adviser for Britain's £650 million cyber security defence programme and the bottom line is that in 2010, the Ministry of Defence (MoD) was roughly where the DoD was in 1992, roughly 20 years behind,” he said.

    Asked why he felt this was, he said that it was a combination of cultural features, but especially that when something new is suggested there are two answers: ‘it's not my job' and ‘we don't have the budget'. He said that often people did not want to be dealing with anything that they did not understand.

    “If it is not your job, then it is not your responsibility. Issues are about protecting the £650 million budget and this was approved while the Conservative party were in opposition and I wrote the security policy for them with a heavy emphasis on cyber,” he said.

    “When this was put in place, everyone put in their nomination for some of the budget without knowing what a mature cyber program would look like and including specifics on technology, training, processes and facilities.

    “But this didn't exist; all we had was a target figure and while all users of the money were legitimate, there was no consolidation within departments. There was no orchestrated master plan to drive the budget. They all went in and competed for budget and got it, but there was no one in charge and no one made a decision. It was all made by a committee to make a judgement.”

    Ayers said that the challenge for the UK is realising that cyber doesn't fit into one department, it cuts horizontally through everything and everyone has a responsibility. “If you don't have that, you don't have a heterogeneous cyber programme,” he said.

    “More is said than done; if you don't have a documented programme in advance with milestones you have no way to measure whether you are successful.”

    He also said that the departments pitched for funds that would be distributed in four years, and that no one knows what the challenges or needs will be in four years. “If you justified the money for Wordstar, you cannot use it for Windows 7, but the guy who asked for it has left now and now another guy has to finish it,” he said.

    I put to Ayers the recent proposals in the Cyber Security Strategy to create a volunteer force of cyber experts, and asked him to consider the Pentagon's plans to increase its cyber command from around 900 to 4,000 military and civilian personnel over the next few years.

    Echoing comments made by former British army intelligence analyst and now director of information security at Ernst & Young Mark Brown, Ayers called this "security on the cheap", saying that IT security is a full-time job where people have to be trained, current and "there".

    He criticised pay scales at the MoD, saying that £15,000 analysts will be offered much more to work in the private sector once they have received training, leaving the department with "second rate people".

    He said: “There is not much hope for the MoD and it has Tsars and created positions to create the programme but they have had no authority, power or money, it is just a name.”

    Ayers now works in the private sector as commercial director of Glasswall Solutions, whose technology looks for anomalies in documents and reproduces the document with suspicious code and links removed. While his comments may seem harsh, it does feel that the UK is treading carefully when it comes to cyber security, and it may be the case that the expertise is just under the radar.

    Mitigating data compromise at the server level

    Mitigating data compromise at the server level

    Until fairly recently, headlines regarding data loss more often than not concerned lost laptops, USB drives and even CD-ROMs.

    The last 12 months, however, have seen cyber attacks become increasingly focused on getting to the heart of an organisation's crown jewels – server data. While events such as those affecting the New York Times, IEEE, Yahoo and LinkedIn have motivated a number of organisations to re-evaluate their security measures, a troubling proportion still have some way to go in locking down defences.

    As businesses grow, more and more data is becoming dispersed across corporate networks, putting all sorts of sensitive data – from human resources records, credit card and payment information, customer details, even transactional and warehouse data – at unnecessary risk.

    Over the years, many enterprises have invested in strong perimeter defences creating a customary checklist of firewalls, network IDS/IPS and gateway anti-virus. The traditional security model of a hardened perimeter around the data centre protecting everything inside has eroded with the advent of virtualisation and cloud computing. Moreover, as the destructive capabilities of cyber crime continue to grow in sophistication, such conventional network layer controls are doing less to protect against breaches (see the New York Times and Wall Street Journal examples).

    Data is still the lifeblood of an organisation, and so any threat to sensitive data constitutes a threat to the overall well-being of the organisation. However, as the enterprise becomes increasingly distributed, dispersing more and more information to various locations in the network, it is becoming difficult to understand exactly where data resides at any one time, making securing it a growing challenge.

    Though the threefold process of data discovery, classification and segmentation can be arduous and is typically manual, it is essential in order to figure out what to protect and, more importantly, how.

    As a starting point, consider the type of data being processed and logged. Unfortunately, it is not uncommon to find that companies do not have a full inventory of the type of data that they accumulate, or even where that data is being stored (an issue being further compounded with the abundance of new storage technologies). This can be an invitation to an ICO fine.

    When classifying data, it is important to segment information according to the level of risk associated with a compromise of that data. For example, information that will not do any harm to a company if it is exposed can be classified as ‘public', while on the other hand financial, regulated or personally identifiable information, may cause significant harm to a company in the event of it being leaked by inadvertent or malicious means and should therefore be classified ‘sensitive'.

    Give particular attention to privileged users and their management. Privileged users frequently have blanket access to an organisation's networks and all data held within. Unnecessary authority in the hands of one party can risk a careless or rogue employee taking actions that result in compromised data. Holding a review of the data and segmenting it accordingly can therefore serve to crucially reveal the access control flaws present and, by consequence, indicate where to implement restrictions over who can – and should – access what data. For example, the office IT administrator should only have the authorisation to backup/restore files, while the application developer or data owner can be given the privilege to manipulate data.
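    To make that separation of duties concrete, here is a minimal sketch of how such a mapping might be expressed; the role names, classifications and permitted operations are hypothetical, echoing the administrator versus data-owner example above rather than any particular product:

        # Hypothetical mapping of job function to the operations permitted on data
        # of each classification, mirroring the example in the text: the IT
        # administrator may only back up or restore, while the application developer
        # or data owner may manipulate the data itself.
        PERMISSIONS = {
            "it_admin":      {"public": {"backup", "restore"}, "sensitive": {"backup", "restore"}},
            "app_developer": {"public": {"read", "write"},     "sensitive": {"read", "write"}},
            "everyone":      {"public": {"read"},              "sensitive": set()},
        }

        def is_allowed(role, classification, operation):
            return operation in PERMISSIONS.get(role, {}).get(classification, set())

        print(is_allowed("it_admin", "sensitive", "read"))    # False: admins manage, not read
        print(is_allowed("it_admin", "sensitive", "backup"))  # True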

    Evolving business requirements are driving a need for data centric controls that can travel with data. Placing controls right down on the data itself rather than at the storage or volume level provides a separation of duties based on employee function, reducing risk and mitigating against both internal and external threats.

    The ramifications of any data breach are extremely negative in terms of public perception. While a compromise of data may not always result in regulatory fines or legal sanctions, it can result in lost revenues, diminished trust, reduced competitive advantage, damaged reputation and other negative consequences.

    Understanding from the outset what information needs to be protected is paramount: not only does this ensure that the most appropriate controls are installed in the right place, but also that tight security budgets are strategically invested to maximum effect.

    Paul Ayers is vice president of EMEA at Vormetric

    Did the 'internet of things' go UP(nP) the junction?

    Did the 'internet of things' go UP(nP) the junction?

    One of the most talked about stories last week was the vulnerability affecting around 40 million devices via the universal plug and play (UPnP) protocol.

    A detailed report authored by HD Moore, chief security officer of Rapid7 and inventor of the Metasploit project, claimed that between 40 and 50 million networked devices were vulnerable to attack due to the protocol being set to open by default and being present in printers, routers, media players and smart TVs, among many others.

    He also discovered that over 81 million devices on the internet used the UPnP protocol, 17 million of which appeared to be remotely configurable, while his scans showed over 23 million devices were vulnerable to a remote code execution flaw.

    To determine the scope of the threat, Rapid7 researchers scanned the IPv4 address space looking for devices that responded to UPnP queries (UDP port 1900) and found that over 81 million devices responded to their queries. They also learned that the majority of these devices use four common UPnP development kits, and that many of these development kits suffer from a variety of critical software vulnerabilities.
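    To give a flavour of what such a scan involves, an SSDP discovery probe is just a small UDP datagram sent to port 1900, followed by a wait for replies. The Python sketch below runs only against the local network and is nothing like Rapid7's internet-wide methodology, but it shows the protocol exchange being discussed:

        import socket

        # Standard SSDP M-SEARCH request, sent to the UPnP multicast address.
        MSEARCH = (
            "M-SEARCH * HTTP/1.1\r\n"
            "HOST: 239.255.255.250:1900\r\n"
            'MAN: "ssdp:discover"\r\n'
            "MX: 2\r\n"
            "ST: ssdp:all\r\n"
            "\r\n"
        ).encode()

        def discover(timeout=3.0):
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.settimeout(timeout)
            sock.sendto(MSEARCH, ("239.255.255.250", 1900))
            try:
                while True:
                    data, addr = sock.recvfrom(65507)
                    # Each responder replies with an HTTP-style status line and headers.
                    print(addr[0], data.split(b"\r\n", 1)[0].decode(errors="replace"))
            except socket.timeout:
                pass
            finally:
                sock.close()

        discover()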

    Causing concern to many researchers, this even led the US Computer Emergency Readiness Team (US-CERT) to issue an advisory about the ‘multiple vulnerabilities' in the open source portable SDK for UPnP devices, libupnp. “US-Cert recommends that affected UPnP device vendors and developers obtain and employ libupnp version 1.6.18, which addresses these vulnerabilities,” it said.

    We have seen research papers go viral in the past and with good reason, but with the scope of this affecting so many and tapping into the ‘internet of things' concept of multiple-connected devices, this could prove to be a watershed moment.

    When I spoke to Moore, he said that the problem was that there were devices connected to the internet that should not have been. There had been some research in this area before the Rapid7 report, with advisories issued in 2001 and some attention paid to the exposures in 2011, but he said that none of it really went as deep or as wide as this recent work.

    “There's been partial coverage of this, there were issues before in the past but no one had really gone and done a full assessment of the entire internet and then going deep on the specific software libraries that were most commonly used,” he said.

    “I've been running a larger, more comprehensive research project in the background that this is just part of, and one thing that stood out from that was there were almost as many UPnP exposed devices in that data set as the web servers I was finding. So this project is scanning a lot of different services and finding the information about what's exposed to the internet.”

    While Moore confirmed that the flaws were pretty tricky to exploit, he said it was really easy for an attacker to identify all the systems that are vulnerable, and fairly straightforward from there to work out what they have found and where to target next.

    The issue falls into three vulnerability categories: first, a flaw in the discovery protocol itself that makes devices exploitable; second, that a lot of these devices also expose their user interface to the world; and third, vulnerabilities in the underlying software.

    Asked if he was aware of any attacks or exploits in the wild, Moore said he had not seen anything but he suspected that in the coming weeks there would be more activity.

    He said: “There was a researcher back in 2006/7 who I believe worked on an exploit for the interface. He was able to get remote access but the exploit itself was never made public.”

    As this affects so many devices, I asked Moore what type of users it affects. He said it was mostly small businesses and consumers, while enterprises use more internally-developed protocols. “The main advice we give right now is make sure that your network is not vulnerable. If that's sorted out then understand what you have and figure out how critical they are,” he said.

    He also confirmed that there was "a flood of responses coming from all the different hardware vendors" to the research, with Cisco's advisory stating that it is looking at the issue and that none of the devices it's looked at were vulnerable. “We're seeing still a fairly small response from all the vendors that are affected, most of the vendors have at least two months of work on it,” Moore said.

    “For vendors who make a lot of consumer electronics, there's very little chance they're going to fix their devices they shipped two or three years ago. They'll only really focus on what they're selling today basically.”

    Rapid7 did release a scanning tool, which it said had been downloaded 13,000 times as of a week ago, but the biggest story here is a flaw that affects so many people across so many connected devices. Rather than the Java and Internet Explorer zero-days that have caused so many headaches and headlines in 2013, this may have much wider implications for the future of threats.

    Calculating the cost of privacy: lessons from Hurricane Sandy

    It was estimated by the US Department of Energy, that 8.2 million homes were left without power as a result of Hurricane Sandy.

    Yet despite significant efforts to restore power to affected homes, there are reports that almost 150,000 homes were still without power two weeks after the hurricane. The impact of losing power in the words of those affected is that 'it's dark, it's frightening and it's freezing'.

    Getting critical services restored is imperative, and yet any perceived delays resulted in a fierce response from citizens. In Connecticut for example, it was reported that utility workers were pelted with eggs and other objects because citizens felt the utility company was taking care of wealthier residents first.

    What is very clear, in the words of New York Governor Andrew Cuomo, is that the city needs to "not only rebuild, but rebuild stronger and smart". Such words clearly apply to the power grid not only in the United States, but are also echoed by approximately 700 million Indian citizens who at the end of July were left without power after the failure of three of the country's five electricity grids.

    While there are significant technical and economic challenges with adopting some of the recommendations following Hurricane Sandy – burying power lines, for example, is estimated to cost $5.8 billion in Washington DC alone – the challenge is to effectively address natural events that occur with real irregularity.

    However implementing a smart grid could reap benefits that could be realised even without a natural disaster, such as the management of peak demand for power. Following Hurricane Sandy, the utility Pepco, which serves Maryland and Washington DC, was able to use its smart meter deployment to improve resilience in the grid.

    With 425,000 smart meters deployed, the company was able to use smart meters to improve the restoration of services by quickly being able to isolate impacted homes, and certainly quicker and more efficiently than having to send an engineer on site.

    One customer said: “Sometimes a mere wind or rainstorm, or even on sunny days, we routinely lost power for a few hours. Finally, last year, Pepco installed some smart meters here and things have improved. Much less blackouts and, during Hurricane Sandy, we briefly lost power for a few minutes [about] six times but never a prolonged loss.”

    So what is holding back the wide-scale deployment of such meters? There are many reasons: cost is often seen as a significant barrier. Yet such costs could potentially be offset by the ability to schedule the use of power during off-peak hours that is financially beneficial to not only the end-user, but also the utility company.

    One significant concern that has generated real vitriol amongst smart meter critics is the issue of privacy, with reported cases of homeowners preventing smart meter installation engineers from entering their homes at gunpoint. Such meters do indeed have the potential to report on energy usage, and the specific appliances that are being run in the home.

    This of course allows the operator to deduce the number of people in the home, when residents are at home, and other information that would be of interest not only to the operator but also to other third parties. What's worse is that in some cases the polling interval of the meter (i.e. how often the meter is asked for information) is so short that researchers were recently able to deduce what film the homeowner was watching, simply based on the brightness levels of particular scenes in the film!
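
    One hedged illustration of why the polling interval matters: coarsening readings before they leave the meter keeps what is needed for billing while discarding most of the behavioural detail. The figures and interval in the sketch below are invented for the example.

        # Illustrative sketch: coarsening meter readings before transmission.
        # A meter polled every few minutes can leak appliance-level detail;
        # aggregating to half-hourly totals keeps what billing needs while
        # discarding most of it. The readings below are invented values.
        def aggregate(readings, samples_per_interval):
            """Sum consecutive readings into coarser intervals."""
            return [
                sum(readings[i:i + samples_per_interval])
                for i in range(0, len(readings), samples_per_interval)
            ]

        five_minute_readings = [12, 11, 40, 42, 41, 13, 12, 90, 91, 12, 11, 10]
        half_hourly = aggregate(five_minute_readings, samples_per_interval=6)
        print(half_hourly)  # [159, 226] - far less revealing than the raw series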

    Privacy is quite rightly a concern. It is therefore a critical requirement that the collation of any personal information is done in a transparent fashion. In other words, data about the homeowner is collected only with their explicit consent, particularly if it is not required for critical operations (e.g. billing purposes).

    This degree of transparency adheres to the White House Consumer Privacy Bill of Rights, which sets out a number of Fair Information Practice Principles, including:

    • Individual Control: Consumers have a right to exercise control over what personal data companies collect from them and how they use it.
    • Transparency: Consumers have a right to easily understand and access information about privacy and security practices.

    According to the European Commission's task force for smart grids (Expert Group 2: Regulatory Recommendations for data safety, data handling and data protection), "in Europe energy theft and privacy are the most important concerns related to smart grid implementation, in other parts of the world (e.g. in the US) it is energy theft and malevolent attacks that are the main concerns".

    Whether you agree or disagree with that statement, one thing is clear – privacy concerns associated with the grid are significant, and research is uncovering more with regularity. Preserving privacy and implementing privacy controls should therefore not depend on where you live, but should be an imperative for all customers the world over.

    Only when this is done will we dramatically improve the likelihood of broader acceptance of the smart grid, and also dramatically improve our capability to respond and react to natural disasters the world over.

    Raj Samani is EMEA chief technology officer at McAfee and EMEA strategy advisor of the Cloud Security Alliance

    Data Privacy Day - for who exactly?

    Today marks the 32nd European Privacy and Data Protection Day, with an effort to ‘recognise the importance of privacy for our human values and fundamental freedoms'.

    According to the website, today is "a platform that gives visibility to events, organised by governmental and other institutions and civil society that draw the attention to the value privacy and data protection we have in our societies or engage the citizen in privacy relevant activities".

    According to statistics from Iron Mountain, more than half of UK businesses think that data loss is inevitable and 66 per cent of the 1,250 European business decision makers said that the threat of fines was having little impact on their company's data protection policies to protect sensitive information.

    Christian Toon, head of information risk at Iron Mountain Europe, said: “The fact that more than half of European organisations see data loss as an inevitability is worrying and it illustrates that businesses of all sizes are failing to take appropriate steps to protect information.”

    Alan Woodward, from the department of computing at the University of Surrey, said that he felt that privacy and security were two sides of the same coin, and that the point at which security and privacy most definitely collide is not just what the data holder does with your data (they might not sell it on), but whether they are protecting your data so that others whom you have not authorised cannot access it, intentionally or unintentionally.

    He said: “Most smaller companies these days hold some form of ‘sensitive' data but they tend not to understand that point. For example, a database (usually a customer relationship management (CRM) system) with customer data might not necessarily be thought of as ‘sensitive'.

    “Ask a small business if they encrypt such data or make any extra efforts over the standard CRM system, and you'll find few do. Businesses tend to rely upon the security of the software being provided.  However, often those developing the software come from jurisdictions that have a very different approach to personal data, and their software does not necessarily even pay lip service to the Data Protection Act.

    “The whole situation is being complicated by the emergence of cloud and software-as-a-service. Businesses see that managed services offer significant cost savings, but very few stop to look at the small print. Fewer still stop to ask where the data will be physically stored. When I have talked to people in this position they assume, for example, that if data is sent offshore to the US then there is some equivalent law there. Hardly anyone understands that companies in the US must sign up to be ‘safe harbours'.”
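
    The encryption Woodward mentions need not be onerous. As a minimal sketch – assuming the widely used third-party 'cryptography' package and invented field names, and leaving key management deliberately out of scope – a sensitive CRM field can be encrypted before it ever touches the database:

        # Minimal sketch of encrypting a sensitive CRM field at rest, using the
        # Fernet recipe from the third-party 'cryptography' package
        # (pip install cryptography). Field names are invented, and a real
        # deployment needs proper key management, not a key generated in code.
        from cryptography.fernet import Fernet

        key = Fernet.generate_key()  # in practice, load this from a key store
        fernet = Fernet(key)

        customer_record = {"name": "A Customer", "email": "customer@example.com"}

        # Encrypt the field before it is written to the CRM database.
        stored_email = fernet.encrypt(customer_record["email"].encode("utf-8"))

        # Decrypt only when an authorised process actually needs the value.
        recovered = fernet.decrypt(stored_email).decode("utf-8")
        assert recovered == customer_record["email"]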

    The role of enforcing the Data Protection Act in the UK falls on the shoulders of the Information Commissioner's Office (ICO), which must educate and ensure that the laws are being followed. Aside from the proposed changes to the European data protection directive, from which the act derives, the ICO is also responsible for rolling out awareness campaigns, and has issued almost £2 million in monetary penalties to those who fail to protect data.

    Asked if he felt that the ICO was doing a good job in broadcasting the regulations of the Data Protection Act, Woodward said that while there is a great deal of support material available, he was not sure how many smaller or medium-sized business really know what the ICO does or that there is practical help available.

    “I think there is a degree of increasing awareness but it tends only to be when large fines are issued that the ICO comes to the fore. Ideally, we'd have awareness being raised before it gets to the point of fines being issued,” he said.

    “With scandals having happened in everything from health data to financial records when it is entrusted to those in jurisdictions outside the EU, it is something that the ICO could be doing a great deal more to help raise awareness of. I get the impression that the ICO is focussed very much on the UK/EU. What is happening with globalisation of services, and the increasing role of outsourcing/offshoring of the back office, means that people need truly global advice.”

    John Thielens, chief security officer at Axway, said that failing to ensure utmost data security within a business is as risky as walking a tightrope with no harness, especially as so many businesses have an increasingly mobile workforce.

    “Many businesses are now operating in an open network that can be more vulnerable to threats if the right precautions are not taken,” he said.

    “Businesses must ensure they know exactly where their corporate data and the data their customers have entrusted to them is, who is accessing it, how, and for what purpose. Consumers have obligations to behave safely online, but businesses are ultimately the custodians of their private data, and have more complex duties to safeguard it.

    “There's no doubt that the likes of cloud technology and BYOD are creating a world of opportunity for businesses, but it's crucial that businesses understand they come with a new set of rules. Arming employees with the right balance of knowledge and sound security tools is key to ensuring business security remains airtight.”

    Data protection is a challenge, there is no doubt about that, and that is why it falls under the umbrella of business compliance. However, warning businesses that they need to protect data is like telling them that they need to breathe. So does a day like this exist to raise awareness, slap wrists or gauge the public interest? In my view it is all three.

    Asked if a day like this will have any impact, Woodward said: “I suppose it can't hurt, but there does seem to be a lot of awareness ‘days' and I think their volume leads to a degree of apathy. It's rather like people getting compassion fatigue with so many charity appeals. Security and privacy are assumed to be someone else's issue, so it's good if campaigns can raise awareness of personal responsibility.”

    SC Magazine will present two events on data protection in the coming months. The Data Protection Summit will be held on the 21st March 2013 at the ILEC Conference Centre in London, while a webcast will take place on ‘Data Protection in 2013 - Regulation Versus Reality?' on Thursday 9th May.

    Deperimeterisation - nine years on

    January 2014 will mark ten years since the Jericho Forum announced its concept of 'deperimeterisation', with regards to network IT.

    This is a topic that we will revisit as the anniversary approaches, but this week I spoke to Intralinks EMEA CTO Richard Anstey who described the advent of consumer-based cloud storage as a key factor in this move – not only with Dropbox, but also with the introduction of Mega.

    Anstey said: “You could say that 4G is making deperimeterisation more real, as why would you bother with connecting to a corporate local area network (LAN)? If you have a device with faster bandwidth, would you ever connect to a corporate LAN, would you even connect to a wireless network? If we can get to it via 3G, then email, Office 365 and SharePoint are all outside the perimeter.”

    He said that while the firewall serves a purpose, as it was used to 'protect wires', the next move should be for the firewall to protect data. “The idea was that if it is inside the perimeter, it is safe, but if data is outside the perimeter, what is the firewall for? Why protect what is within the perimeter when everything is outside?” he said.

    Anstey said that deperimeterisation has become much more real, as there is pressure for IT to do something and change with users' demands.

    Likewise, Fortinet's Darren Turnbull said that as users have become more computer literate through using technology frequently, they will figure out a way to access services even when the company policy says 'you cannot do that'. Turnbull said: “The user doesn't want to waste time trouble-shooting despite what the business says.

    “There is an expectation as well. Now everyone wants to connect but without being told how to do things.”

    Anstey said that he felt that deperimeterisation really happened when the Apple iPad was launched, as it allowed the user to do things on a device that was far more powerful and portable than anything the business could offer. “Why is the CISO so concerned about protecting wires and not data?” he said.

    “The challenge in the perimeter should be data and not wires. Secure the data and focus on content, don't insulate the device, you don't need to go down the expensive route.”

    I asked Anstey if he meant that deperimeterisation was the end of the firewall. He said no, but said that the firewall should be closer to the data and inside the data centre, rather than being seen as similar to the physical wall of the business.

    He said: “The employee is growing up; IT used to treat them like a toddler by putting them in a playpen and throwing them the toys that they can use. Now they are like the teenagers; finding their own way and discovering things, and they need help.

    “Likewise, the CISO needs a safe way to do things and tries to guide people with a mature approach with the right direction, as opposed to pushing them. Deperimeterisation is because of users, and IT is the super nanny.”

    Regardless of who is to blame, deperimeterisation has happened for many reasons – devices, users and data – and it has gone past the point of recall and control. Dealing with it is now part of the everyday business, and taking the power back may be yesterday's problem.

    The tactics behind a spear phishing attack

    Marketing tactics have changed. Marketers now target each individual customer, just like Amazon's recommendations page.

    Criminals have learnt the same lesson: phishing emails are no longer sent to thousands of people. Instead, criminals now target individuals with well-crafted messages that are designed to appeal to them, a practice known as spear phishing.

    Spear phishers start by identifying their target. Perhaps they want to get into one company to access its research and development records or to install malware on the network. The first step is to spend some time online researching that company and deciding which employee (or group of employees) they should attack. For instance, a company's LinkedIn page will reveal the names of individuals who work there.

    Criminals will look into those individuals and find out as much as they can about them. An individual's public LinkedIn profile could reveal his corporate email address (as well as the naming structure used for email addresses at that company) and the names of his supervisor and co-workers. His public Facebook page could reveal personal information, such as how many kids he has, the names of his boss or a co-worker, or a recent conference he attended. A few simple web searches of information freely available to the public can provide enough information to develop a well-disguised, credible spear phishing email that is of interest to the recipient.

    Once they have done the research, phishers will build their emails. This will include a spoofed email address and will be mocked up to look genuine. For example, they might send an email that looks like it came from the organisers of the recent conference, telling the recipient that she won the draw for a new Kindle Fire and to click a link for more details.

    At first glance that link will look genuine (though the underlying URL will in fact take the user to a different site). Or perhaps it will look like it comes from her boss's work email account with an attachment to a document marked '2013 Budget – FINAL'. Whatever they do, the criminals will make that email appear genuine and deal with a topic of interest to the recipient.

    Inside that email will be the trap they want the recipient to fall for: opening an attachment (which will install malware on the network), clicking a link on a URL, or entering specific information (such as a username and password).

    The clever phishers won't stop there though – they will make the rest of the process look as innocuous as possible so they don't arouse suspicion. Perhaps the link will take the recipient to a genuine-looking site that says someone will contact them in a month, or the budget spreadsheet might just be empty and look like a mistake.

    But the spear phisher has succeeded and managed to get inside the company's network. Now they can take over your email account and start sending more malicious emails internally; or siphon data from your customer database and perhaps access financial information. For the determined spear phisher, the possibilities are seemingly endless.

    Technical controls are of limited use against individual targeted attacks, as well-crafted spear phishes can often slide through an organisation's filters without being detected, so user education is essential. Users must be trained to look at every email they receive and try to spot the red flags indicating that everything isn't as it seems – such as spoofed URLs – and try to ensure they don't fall for them.
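
    One such red flag can even be checked automatically: whether a link's visible text names one domain while its underlying URL points to another. The sketch below is a minimal illustration, assuming the email body is available as HTML; real mail filters do considerably more than this.

        # Minimal sketch of one automated 'red flag' check: does a link's
        # visible text name one domain while its href points somewhere else?
        from html.parser import HTMLParser
        from urllib.parse import urlparse

        class LinkChecker(HTMLParser):
            def __init__(self):
                super().__init__()
                self._href = None
                self._text = []
                self.suspicious = []

            def handle_starttag(self, tag, attrs):
                if tag == "a":
                    self._href = dict(attrs).get("href", "")
                    self._text = []

            def handle_data(self, data):
                if self._href is not None:
                    self._text.append(data)

            def handle_endtag(self, tag):
                if tag == "a" and self._href is not None:
                    shown = "".join(self._text).strip()
                    real_host = urlparse(self._href).netloc
                    # Flag links whose visible text looks like a domain that
                    # does not match the host the link actually points to.
                    if "." in shown and real_host and real_host not in shown:
                        self.suspicious.append((shown, self._href))
                    self._href = None

        checker = LinkChecker()
        checker.feed('<a href="http://attacker.example/prize">www.conference-site.com</a>')
        print(checker.suspicious)  # [('www.conference-site.com', 'http://attacker.example/prize')]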

    Aaron Higbee is CTO of PhishMe

    Bored by BYOD?

    In my inbox it is often a case of another day, another bring your own device (BYOD) survey.

    One day it is ‘security people embrace BYOD'; the next it is ‘BYOD causes nightmares/headaches for security people'. Now rather than ‘flaming' PR companies and vendors for issuing such reports that often get ignored, I figured it would be worth contrasting some of the most recent reports that I have seen to get an overview of the findings.

    The (ISC)2 2013 Global Information Security Workforce Study was a short preview of the full report to be released next month. Of its 12,000+ respondents, it found that "company policies supporting BYOD are being widely embraced as a win-win initiative", with 53 per cent saying their companies actively allow users, either employees, business partners or both, to connect their devices onto their networks.

    It also said that 54 per cent identified BYOD as a growth area for training and education within the information security profession. As for the risks, 78 per cent consider BYOD to present a somewhat or very significant risk.

    Another survey came from a company that has really profited from the boom in BYOD: Good Technology. The findings from its 100 customers that took part revealed that 76 per cent had adopted BYOD policies, while five per cent of companies had no plans to support BYOD.

    A final survey that dropped into my inbox was from Dell Software. Its survey of 1,485 senior IT decision makers from around the globe found that as a result of implementing BYOD, 74 per cent of companies experienced improved employee productivity and 70 per cent saw better customer response times. It also found that 59 per cent of respondents believed that they would be at a competitive disadvantage without BYOD.

    Finally, its survey respondents identified four personal gains for their employees: more flexible working hours; the ability to foster creativity; faster innovation; and better teamwork and collaboration. This led to 56 per cent of respondents saying that BYOD had completely changed their IT culture.

    Chris Hazelton, research director for mobile and wireless at 451 Research, said: “It is clear that companies are supporting BYOD in large numbers as it gives employees the choice to use the devices that make them most productive.

    “While there is a lot of focus on supporting and controlling the device, the next challenge for IT will be provisioning and securing large volumes of enterprise apps and data in BYOD deployments.”

    As well as the statistics mentioned above, there were some more quirky findings. Dell Software found that 60 per cent of organisations wanted employees to sign an agreement adhering to country-specific data regulations, while Good Technology found that 50 per cent of companies supporting BYOD require that all costs be covered by employees, who are more than willing to take their employers up on the offer!

    The (ISC)2 report found that business drivers for adopting BYOD put the user at the centre of IT strategy, as the desire to improve end-user experience at 60 per cent was almost equal to the business requirement of supporting a mobile workforce (64 per cent).

    Is this similarity uncommon between survey topics? Of course not – sometimes there can be even more surveys on a similar topic in the same period of time – but as was said in our 2013 predictions, this is not a trend that is going away. In fact, if you read the reports from the last two years, it is only becoming more and more real.

    How public distrust is affecting cyber security strategies

    Consumer confidence in cyber security has clearly eroded over the past couple of years, and there is an urgent need for organisations of all industries, whether public or private, to reassure consumers they are capable of safeguarding networks.

    Recent headlines have increasingly been dominated by cyber attacks on public sector organisations – especially worrying given that cyber security breaches at government organisations do not merely result in loss of sensitive information or financial repercussions. With cyber criminals deploying ever more sophisticated tools, an attack of this nature can also cause damage to physical assets and, in certain scenarios, the loss of life.

    This all came to a head in January this year, with MPs on the Defence Select Committee producing a report stating that the UK's armed forces are now so dependent on IT that they could be ‘fatally compromised' by cyber attacks. Indeed, the threat may be particularly high for the UK's armed forces, which is becoming an increasingly popular target for both independent cyber criminals and those controlled by other governments as its dependence on IT increases.

    Furthermore, in December of last year, cabinet minister Francis Maude warned that Britain's national power and water infrastructure is increasingly a target of foreign cyber attacks, so it's no surprise that calls for urgent government action to improve cyber security are growing. LogRhythm research has shown that two-thirds of the UK public now back pre-emptive cyber strikes on enemy states, while 45 per cent believe that the UK government needs to step up its protection of national assets and information against cyber security threats.

    However, a knee-jerk reaction of pre-emptively attacking the networks of potential perpetrators could incite disturbing consequences, such as escalation of even more sophisticated attacks on the UK's critical infrastructure. Rather than attacking ‘enemy' networks, the scale and nature of today's cyber threat calls for proactive, continuous monitoring of IT networks to ensure that even the smallest intrusion or anomaly can be detected before it becomes a bigger problem for all – after all, you can only defend against that which you can see.

    It is therefore unfortunate that most government-led cyber security policies focus on catching and punishing criminals as opposed to preventing computer crime. The other serious issue when it comes to cyber attacks on government organisations is that even once a breach has been remediated, there often remains an enormous amount of uncertainty surrounding the origins of the attack.

    Without confirmation of the source of attacks, inaccurate finger pointing often occurs – and when this happens between nation states, diplomatic tensions can arise.

    This means that further forensic analysis of the breach is often required, which traditional point security solutions, such as anti-virus or firewall tools, just don't provide. With IT security data volumes increasing at unprecedented rates, many organisations are neglecting the fact that Big Data analytics can offer invaluable intelligence, and will actually help them improve their IT security and overall network efficiency.

    A cyber security strategy focusing on the continuous monitoring of IT networks provides the network visibility and intelligent insight needed for deep forensic analysis of growing amounts of data. Only with this deep level of network visibility can cyber attacks be effectively mitigated and accurately attributed, giving the public more faith in the government's cyber security policies.

    Public opinion also plays a significant role in data breach disclosure strategies – in his December address, Maude further urged organisations to declare publicly when they have suffered a serious cyber attack, as too many fear the loss of competitiveness. Interestingly, the LogRhythm research revealed that 80 per cent of the UK public do not trust organisations to keep their data safe, with 41 per cent feeling that it has become inevitable their data will be compromised by hackers.

    However, the research also shows that since 2011, the same percentage of respondents had concerns over the ability of organisations to safeguard their data – perhaps showing that the nation has already reached a plateau of distrust. This growing frustration over inadequate cyber security measures isn't helped by an over-reliance on perimeter defences despite the fact that they have repeatedly proven inadequate in securing IT systems. Instead, only by baselining normal, day-to-day activity across all dimensions of IT infrastructures can organisations proactively secure both data and infrastructure – and hopefully, rebuild public trust.
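
    As a minimal sketch of what 'baselining' means in practice, the example below learns the typical daily volume of one event type from historical logs and flags days that sit far outside it. The counts are invented, and real continuous-monitoring platforms baseline many dimensions of the infrastructure at once.

        # Minimal sketch of baselining: learn the typical daily volume of one
        # event type from historical logs, then flag days far outside it.
        import statistics

        baseline_days = [1040, 980, 1005, 1120, 990, 1010, 1075]  # e.g. daily failed logins
        mean = statistics.mean(baseline_days)
        stdev = statistics.stdev(baseline_days)

        def is_anomalous(todays_count, threshold=3.0):
            """Flag counts more than `threshold` standard deviations above normal."""
            return todays_count > mean + threshold * stdev

        print(is_anomalous(1050))   # False - within the normal day-to-day range
        print(is_anomalous(4800))   # True  - warrants forensic investigation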

    Ross Brewer, vice president and managing director of international markets, LogRhythm

    Five billion strikes in five years for Malwarebytes

    Today marks the fifth anniversary of the first public release of Malwarebytes Anti-Malware and according to the company, it has removed five billion threats in that time.

    Its founder Marcin Kleczynski, who built the anti-malware program from scratch in his college dorm room "to accomplish what his anti-virus could not", said: “Traditional anti-virus does a good job with the known threats but typically isn't as effective against the growing threat of new and emerging malware. That's where we come in.

    “With our unique blend of protection and detection technologies and agility as an organisation, we adapt to new threats faster than traditional anti-virus.”

    Malwarebytes' Anti-Malware product is also marking 200 million downloads since it was first launched.

    To celebrate these milestones, the company is making a donation to the Electronic Frontier Foundation (EFF).

    We do hear of plenty of anniversaries – another security vendor has already said that this year marks its 20th – so to make a mark in such a crowded space is commendable. Happy fifth birthday to Malwarebytes and here is to a long and successful future.

    Will your business be a board walking empire?

    There should be a better connection between the board, security team and employees – sound familiar?

    This is a recurring theme in the news when it comes to budgeting, appreciation of IT and security by businesses' board of directors. Statistics released this week by Swivel Secure found that 51 per cent of business owners are ‘unconcerned' with the security of their corporate systems, so are they as disconnected as ever?

    At this week's Infosecurity Europe 2013 press conference in London, Sue Milton, president of the ISACA London Chapter, said that board members should walk with the security team around businesses in order for the board to realise that security can be an enabler as opposed to a cost saver.

    In another session at the conference, the concept of board acceptance came up again. Thom Langford, director at the global security office at Sapient, said that "getting people down is challenging but vital" and that a group of professionals bringing their concept of risk from across the business who can help filter the conversation is also important.

    “The moment the CISO reports into the CIO, nothing will happen - the more independent you get, the better,” he said.

    He later agreed with the concept of the board walking around the business, saying: “In any good company you need the board walking around and in my experience it is done and it is very effective. Everyone has their own problems, not just us, but any good company will engage the board as they have challenges and motivations. It is important to address challenges, just don't think you are going to get all of their time.”

    A part of the UK Cyber Security Strategy was an executive briefing on cyber security to UK businesses, with the aim of putting cyber security on the agenda. An analysis by Trustwave of the UK FTSE 100 companies examined the most recent annual reports and whether the board had explicitly itemised cyber security as a material risk to their business.

    It found that 49 per cent highlighted cyber risk in their annual reports, with healthcare and basic materials companies giving little or no attention to cyber risk. It did find good take-up in the consumer services sector, and 100 per cent appreciation in technology and telecommunications, but perhaps that should be expected in these more ‘connected' industries.

    As for Langford's point that independence is key within the board and getting security woven into the fabric of the business, well surely this is the whole point of an awareness campaign – to get people thinking security?

    In the meantime, watch out for the walking board members – they're there to learn, you know.

    Security of Scada systems scrutinised

    A survey of connected Scada computers identified that 500,000 machines could potentially be targeted.

    The survey, carried out by Bob Radvanovsky and Jacob Brodsky of security consultancy InfraCritical and featured on BBC News, saw the pair write a series of scripts that interrogated the Shodan search engine using 600 terms compiled from lists of Scada manufacturers and the names and product numbers of the control systems they sell.

    From this, they identified 500,000 potential targets and, after working with the US Department of Homeland Security, narrowed the list to the 7,200 most important targets, which are now being contacted.
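
    The general approach is straightforward to reproduce on a small scale. The sketch below uses Shodan's official Python library with a single illustrative search term and a placeholder API key; the researchers' actual scripts iterated over roughly 600 vendor and product terms.

        # Simplified sketch of querying Shodan for internet-facing control
        # systems, using the official 'shodan' Python library
        # (pip install shodan). 'YOUR_API_KEY' and the single search term are
        # placeholders; the original research iterated over about 600 terms.
        import shodan

        api = shodan.Shodan("YOUR_API_KEY")

        try:
            results = api.search("Niagara Web Server")  # one example banner term
            print("Devices matching this one term:", results["total"])
            for match in results["matches"][:5]:
                print(match["ip_str"], match.get("port"), match.get("org") or "unknown org")
        except shodan.APIError as exc:
            print("Shodan query failed:", exc)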

    According to details originally featured by Threatpost, Radvanovsky and Brodsky found not only devices used for critical infrastructure such as energy, water and other utilities, but also Scada devices for HVAC systems, building automation control systems, large mining trucks, traffic control systems, red-light cameras and even crematoriums.

    This research fed into a report by the US Department of Homeland Security's industrial control systems cyber emergency response team (ICS-CERT), which highlighted the control systems used by critical infrastructure in the US that are susceptible to attack from viruses and other malware.

    The report claimed that "internet facing control systems devices were also an area of concern in 2012", as the ICS-CERT said that it worked with tools such as Shodan and ERIPP to identify and locate internet-facing control system devices that may be susceptible to compromise.

    In another experiment it credited researcher Eireann Leverett, who used Shodan to identify over 20,000 ICS-related devices that were directly IP addressable and vulnerable to exploitation through weak or default authentication. This research found that a large portion of the internet-facing devices belonged to state and local government organisations, while others were based in foreign countries.

    Speaking to SC Magazine in 2011, Dominic Storey, EMEA technical director at Sourcefire, said that there is no best practice for connecting network security layers for Scada-based systems, and no way of looking for connected sensors or what came from a sensor.

    “Also, think of Scada as a hardware system, nine times out of ten it is an old Windows system, so often there are vulnerabilities. Technology needs to be proactive and able to take action,” he said.

    Chris McIntosh, CEO of ViaSat UK, said: “This highlights a great weakness in critical infrastructure both in the US and beyond: security is still firmly rooted in the 20th century. While this is fine for physical security, the interconnectivity of the grid and the trend toward distribution automation, [it has] granted malicious attackers a multitude of ways to cause major disruptions.

    “With an interconnected grid, a single vulnerable utility becomes a weakness for every single part. As mentioned previously by the US Department of Homeland Security, malware can lurk for months before detection: companies should be working on the assumption that their systems have already been compromised and plan accordingly.

    “Protection of the network must go beyond typical IT solutions, and address the unique nature of these interconnected systems. Encryption of data in transit and rigorous authentication protocols, for example, should become de rigueur. The genie of cyber warfare is out of the bottle: organisations now need to get their heads out of the sand.”

    The security of critical infrastructure may be a key theme for 2013 after a year when major espionage tools were detected. The impact upon systems such as Scada, which was such a key part of the Stuxnet infections, may prove to be a telling point when it comes to critical national infrastructure security.

    Shocking and scaring into awareness?

    A report appeared in the Telegraph this week that said that security awareness campaigns should be as striking as the AIDS campaigns of the early 1980s.

    Speaking on BBC Radio Four's Today programme, Major General Jonathan Shaw, formerly head of cyber security at the Ministry of Defence, called for a widespread cyber hygiene campaign in response to the UK being ‘extremely vulnerable' to cyber attacks.

    Shaw said the government must "launch a cyber hygiene campaign like they did with the AIDS epidemic in the 1980s" and said that individuals are "on the front line" and must be warned their computers are at risk, as the government is "not in charge of cyber space".

    Those who remember the early 1980s (whether you were there or not) will recall the impact of the AIDS awareness adverts, with icebergs and the dramatic John Hurt voiceover, and how they scared the general public into reading the leaflet that dropped through their letterbox.

    Is this the sort of impact we really want to have upon the general public? With the AIDS awareness campaign, the guidance was pretty straightforward and while it required lifestyle changes for some, I suspect that for the majority the fear turned into confusion.

    That could be the case here: if the campaign says ‘there is a new threat that is not a physical one' or ‘you must change your password to a multiple character one that cannot be guessed by anyone', some people may ignore it and consider it hot air, while others may take it on board but without any lasting effect.

    Brian Honan, founder of BH Consulting, who has recently been appointed as a partner to Securing the Human for security awareness training programs, said that the trick is to get appropriate messages that will resonate with people.

    He said: “Ongoing security awareness campaigns are crucial to ensuring people are aware of the security threats they may face at home or at work.

    “Scaremongering only has a limited value in that it focuses on one particular threat or issue. As a strategy this does not work on its own but can work as an element of an overall campaign. So instead of the AIDS campaign, I would suggest road safety campaigns as being a better model. With road safety campaigns, messages are targeted to different audiences in different ways.

    “A good example is how many people of a certain age still remember the green cross code? And today you see TV advertisements after the watershed showing the effects of graphic car accidents. So tailoring a consistent message and delivering it in the most appropriate format for the audience is better long term than shock tactics.”

    The example of ‘clunk click every trip' is often cited to me as a good example of how a campaign can be straightforward, effective and memorable.

    Last week plans were announced to get better messages about online security to school children and to men who use the internet but are not aware of the risks, but how would such a campaign work to both demographics?

    Maybe the issue is one that should ensure that the power of the internet is not lost upon the beholder. Ronnie Khan, managing director EMEA North at Qualys, said: “As more and more computing power makes its way into the homes and pockets of the general public, Major General Shaw is right to raise the point that the public will need to be taught the dangers, as well as opportunities this presents on a personal, professional and national level.”

    Likewise, Yogi Chandiramani, senior manager of systems engineering, Europe at FireEye, said: “We now rely on internet connectivity to support so much of our daily lives that Shaw's call for an aggressive public awareness campaign can only be welcomed.  Human error still accounts for too many cyber incidents, and a widespread lack of understanding – coupled with the increasing sophistication of cyber criminals – has led to a significantly raised threat level.

    “Today's hackers are moving beyond the typical phishing attempts of previous years to more targeted, intricate and complex attacks. With this in mind, continually educating and re-educating the public on the growing security risks would be a positive step for the government in controlling the threat.”

    While any campaign would be welcomed, and at the same time critiqued and analysed for its effectiveness, I can see the point that Shaw is trying to make. He wants to strike at the heart of awareness to make sure that people take note, and remember the rules for ever.

    However in the last 30 years the world has changed a lot and the public are arguably much more cynical than in the early 1980s, and such a tactic may be lambasted rather than appreciated.

    Will 2013 be bigger and badder than before?

    Over the last couple of weeks my inbox has been bulging with predictions for 2013's security trends.

    This is not something that is particularly uncommon, as journalists, researchers and analysts are used to vendor predictions of doom and gloom in the year ahead, and how what we have seen in the past year will return bigger and worse than before.

    Alongside the 2013 predictions article that ran in our recent January/February 2013 issue, I thought it would be worthwhile to identify some of the common trends. Certainly the most frequent were advanced persistent threats (APTs), state sponsored espionage - specifically in regard to hacking - mobile malware and a continuation from 2012: Big Data.

    So from the dozens of predictions I received, which when compiled resulted in a 50+ page Word document, those were the key themes. To give you some idea of how many were talking about those subjects specifically, here is a breakdown:

    • APT – 4
    • State-sponsored – 4
    • Mobile – 10
    • Big Data – 5

    You may be surprised that the numbers were relatively low, and so am I, given that I collected 32 perspectives. Looking deeper into these topics, APT was predicted to hit smaller businesses (by Imperva) and people (by Fortinet), while Stonesoft predicted more targeted attacks, nation-state sponsored espionage and more aggressive hacktivism than seen before.

    In terms of Big Data, Acronis claimed that 2013 would be the year that it would become highly available, while Six Degrees Group claimed that Big Data would cause an evaluation of cloud and hosting providers, specifically as users need to find enterprise-grade cloud drive technologies to safely and securely meet demand for online storage and access.

    The mobile area is always a common theme, and it is no surprise that so many predict stronger and worse things for the device this year. There were predictions of more targeting of Android (Stonesoft and Eset), cross-platform attacks that will impact PCs, Macs and mobile devices (Bit9 and Websense), malware in app stores (Websense) and a general ‘commoditisation' of mobile malware (F-Secure).

    Also related to mobile, the escalation of mobile payments was predicted by Validsoft and Selective Media, while something that caught my eye was the prediction that the consumerisation of IT and the problems around bring your own device (BYOD) could be solved.

    For example, Lookout said that "businesses would strike a balance between control and employee empowerment", and that finding the right balance between protection and empowerment would be the greatest challenge of 2013. Qualys predicted that organisations would develop strong asset management programs to deal with these issues that will not go away.

    Also, Wick Hill Group predicted that 2013 will see companies trying to integrate BYOD into their networks, as "strategic requirements will become increasingly important". It also claimed that mobile device management (MDM) solutions will need to address the problem of managing both employer-owned and employee-owned devices, and differentiating between business use and personal use, with clear separation between the management of business and personal data on devices.

    Looking into other areas, the concept of malicious software has now become so widespread that to pigeon-hole it is tricky – can you put something such as Flame and a low-detected sample into the same space?

    However, the impact of major worms that were detected in 2012 hang heavy over 2013. Both Venafi and Websense predicted that there would be more Flame-style attacks, with the latter saying that access to that quality of programming would be easier.

    Elsewhere, nCircle predicted that there would be more attacks courtesy of SQL injection flaws; F-Secure said that after Macs were hit by their first botnet there would be another ‘Flashback' for Mac; and Symantec predicted that ransomware will surpass fake anti-virus as the premier cyber crime strategy, although BitDefender said that banking Trojans would dominate the fake anti-virus space.

    To pick another over-arching trend, there were some predictions around the future of the cloud. To summarise, Imperva said that Identity-as-a-Service would be used by attackers for different activities, Canon predicted a rise in ‘bring your own application' for online storage, while Verizon said that the concept of hybrid clouds would become more prevalent in 2013.

    On the other side, BitDefender predicted that while denial-of-service attacks would get worse, attacks against virtualised environments would become more realistic.

    Acronis claimed that making data in the cloud accessible in real-time will mean 2013 is the year that cloud storage becomes a reality, while Venafi said that 2013 would see the first fine (likely from the Information Commissioner's Office) against a cloud provider for data loss, and Eset also predicted the first data leak from the cloud in 2013.

    Overall this is a lengthy summary, but these are the companies that have put their perspectives on record, and it is only right to wait and see whether they are proved right or wrong – or to disagree with them as you see fit. If 2013 is going to see all of these come true, I hope we are ready.

    How electronic signatures improve security and business process efficiency

    When introducing any layer of information and document security you need to mitigate the risk of negatively impacting the day-to-day running of the organisation.

    However, electronic signatures (e-signatures), when integrated with content and process systems, provide an opportunity to not only enhance document security, but also speed up many business processes that can be delayed by the need to obtain one or more signatures.

    Of course, the use of e-signatures (the act of signing a document electronically whether using keyboard, mouse, signature pad or mobile device) is not new. In fact, the regulation that governs them is more than a decade old, such as the ESIGN Act 2000 and UETA (Uniform Electronic Transactions Act) in the US and an EU directive.

    Until now, e-signatures have been slow to fulfil their real potential. Firstly, there is still a lot of education that needs to be done to demonstrate that e-signatures are far more secure than their wet ink and paper-based counterparts. After all, we have been physically signing documents (whether via a wax stamp, or ink) for hundreds of years and we are creatures of habit.

    However, this anxiety does not seem to be shared by generations X and Y, which have grown up online. Geography has also had a bearing on the growth of e-signatures, with countries such as Estonia and Latvia leading the way as part of national e-identity initiatives.

    Another factor has been technology. An organisation may have the slickest processes and workflows thanks to their core systems, but at some point it is likely that a signature from an external third-party will be needed for an order, contract or claim to be completed.

    Internally they may have been using e-signatures for years, but the system has prohibited them from dealing with third parties in the same way, because access to the application is necessary.

    As a result, a very familiar, frustrating and cumbersome process begins whereby the document is attached to an email and sent. The recipient then opens the attachment, prints it out, signs and then returns the document either via post, fax, or scan and email.

    From a security perspective, the problem with this approach is a loss of control over the whereabouts of the document, as it could be printed multiple times, mislaid or lost in transit or stolen. Furthermore, there is no way to validate whether the person who was required to sign the document was indeed the person that took the action.

    Imagine if this is a consent form for a medical procedure, a high value order, transaction or claim - the subsequent ramifications could be severe. Also from a business perspective, this process is slow and a delay in a business process typically means an increase in cost, especially if multiple signatures are required.

    Today, the latest systems are able to seamlessly integrate e-signature technology (such as AssureSign) wherever they are required within the process, allowing protected documents to be securely signed by the authorised person/s, using a secure HTTP connection and a valid email address.

    Now when a document needs signing, an email is securely routed to the intended recipient, but rather than an attachment, a link is embedded within the message that once clicked opens the document, which resides securely on the organisation's controlled and contained portal. Once the recipient signs the document (either dynamically, typed, or in some instances using biometrics) an automatic notification is sent to the person/team overseeing the process.

    Crucially, at no stage does the document leave the control of the organisation holding the information and it provides a clear audit trail history, as you can see at what date and time the document was accessed and signed and from which IP address. If the document is challenged you can quickly trace and reconstruct events.
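
    As a rough sketch of the pattern, and emphatically not AssureSign's or Perceptive Software's actual implementation, a time-limited, tamper-evident signing link and an audit entry might be produced along these lines; the key, portal URL and document names below are placeholders.

        # Minimal sketch of the pattern described above: the document stays on
        # the organisation's portal, the signer receives a time-limited,
        # tamper-evident link, and every access is written to an audit trail.
        import hashlib
        import hmac
        import time

        SECRET_KEY = b"server-side-secret"          # placeholder; keep out of source code
        PORTAL = "https://portal.example.org/sign"  # placeholder portal URL

        def make_signing_link(document_id, signer_email, valid_for=72 * 3600):
            expires = int(time.time()) + valid_for
            payload = "{}|{}|{}".format(document_id, signer_email, expires).encode()
            token = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
            return "{}?doc={}&exp={}&sig={}".format(PORTAL, document_id, expires, token)

        def record_audit_event(document_id, action, ip_address):
            """Append-only audit entry: who did what, to which document, when, from where."""
            return {
                "document": document_id,
                "action": action,  # e.g. 'viewed', 'signed'
                "ip": ip_address,
                "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
            }

        link = make_signing_link("consent-form-1234", "patient@example.com")
        print(link)
        print(record_audit_event("consent-form-1234", "signed", "203.0.113.7"))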

    Whenever we think of security we typically think of how we can improve the integrity of our systems and minimise user inconvenience. When it comes to e-signatures heightened security sits hand in glove with improved performance from your core business systems.

    Shawn Hickey is head of product management at Perceptive Software

    Security through the browser

    The concept of securing data is one that has been present this year with a number of new names in encryption, and one that will continue to be a key area for business.

    Talking this week with Ed Macnair, CEO of SaaSID, he said that the place to add security was in the browser, as ‘everyone has a browser'. He said that as everything is now done in the cloud, be it application or even platform, cloud-based security was ‘not a pipe dream, it is happening'.

    He said: “It is not possible to secure devices any more because the form factor is changing so rapidly. The only area of commonality you have is the fact that users access applications via a browser. Securing email and web browsing is easy to do using proxy servers.

    “However, if you're looking to secure web applications, then the proxy server approach doesn't work as all the manager sees is a URL, not what has actually been accessed by the user on the application. This is especially the case with modern single page interface applications.

    “The point of control needs to be next to the user, not at a proxy level.”

    Macnair, who was previously CEO of Overtis, said that when a user uses a proxy, the manager does not see their web activity, just the URL they visit, and that could be any application. “If you are not monitoring or controlling the user, you are not seeing the whole picture,” he said.

    “We're different; we are in the browser so we see what the user does.” SaaSID provides a solution that helps organisations use public, private and hybrid cloud-based models, and combines authentication, management and auditing to help organisations address the productivity, security and compliance issues associated with the growing use of web applications.

    The company won the cloud category at the inaugural Tech Trailblazers awards this week, and has also landed contracts under the government's G-Cloud initiative and with customers such as Groupon. Macnair said: “How do you secure customer data within applications? Groupon are using our technology for business intelligence on who can use what and with which applications.

    “There is no solution to manage multiple identities. You have got to provision access across the system to de-provision access from it.”

    He said that the three benefits of its browser-based technology, Cloud Application Manager, are: single sign-on for any application; granular access control; and monitoring of what a user does within an application.

    Macnair said that on average its customers use seven different types of application, from the business-focused to social networks, and asked: do all users need access to them all of the time? Arguably not, but how do you prevent that happening? One way is with privileged user access management, but if these applications are in the cloud, how do you control it?

    This year we have looked at the concept of encrypting data that is stored externally, which has brought the likes of CipherCloud, CertiVox and Vormetric to attention.

    However, if the issue is one of control at all, then the answer may lie in granularity. Macnair said: “If you control browsers, you can control everything. You are getting control of what is important and people are only waking up to it now.”

    At the end of 2012, we have learned that the future of data management is less about the device and more about the application and what it transmits; that future lies within the browser. Just remember when and where you heard this first.

    2012 in review: September to December

    So to summarise so far, I started my look back at 2012 thinking ‘that was a quiet year'. Almost 2,500 words and numerous hyperlinks later, perhaps I was wrong.

    In this final part, I look at the latter part of 2012 starting from September. The big news event for this time was of course the London Olympics and Paralympics and aside from our 100-day countdown, the only coverage we were able to achieve was on the SEO poisoning, while BT later said that attacks had been thwarted.

    This period saw some government intervention into cyber security, with the White House preparing an executive order for cyber attack readiness – although surely that is what US-CERT is for. It may have been a worrying time when it was announced that Chinese hackers had accessed the White House Military Office, but Huawei was eventually cleared to sell into the US after being suspected of posing a spy threat.

    Here in the UK it was announced that the private sector would work with GCHQ to learn how to thwart cyber attacks and how to create a more security-conscious culture.

    The UK government also announced plans for a £3.8 million cyber research institute, while Foreign Secretary William Hague revealed plans to open a European cyber crime centre in acknowledgment of the on-going challenges that the internet faces.

    December marked the one-year anniversary of the Cyber Security Strategy, with a focus on work already completed and that which is yet to be done, particularly within GCHQ. One aim was to introduce a cyber reserve army of volunteers, a concept slammed by a former military and security man.

    In attack news we looked at how the Mexican government had prepared in the event of a DDoS attack, something European governments could be better prepared for according to an Enisa-backed stress test that was carried out in October. Enisa later called for greater cooperation in such tests.

    Following on from the Home Office being hit by a cyber attack over the Easter weekend, a man was arrested in connection with the incident. In December, a student was charged with attacks on PayPal; while the co-founder of the Pirate Bay Gottfrid Svartholm Warg was also arrested.

    Data breaches continued in relentless fashion but some of the details appeared a little sketchy. Firstly, Go Daddy suffered a four-hour outage that Anonymous claimed responsibility for, although it later said that it was down to "a series of internal network events that corrupted its router data tables".

    In the other incident, a million Apple unique device identifiers were leaked after hackers claimed to have obtained them from an FBI breach, but a Florida publishing company named Blue Toad said that the database was stolen from its servers. It is unclear who was telling the truth in these instances. Is it a case of hackers jumping on a problem and claiming responsibility before any analysis, in the hope of press coverage?

    One company telling the truth was HSBC, who admitted to being hit by a denial-of-service attack that did affect the availability of online services but not customer data.

    Also having problems in this period was Sophos, which flagged its own update as malicious; at the same time it had to deal with a researcher's criticism of flaws in its products.

    The discovery of a zero-day bug in Internet Explorer caused the German government's Federal Office for Information Security to instruct citizens not to use the browser.

    After the problems that VeriSign and Symantec had at the start of the year, it was the turn of software giant Adobe to admit that it had suffered a targeted attack on its digital certificate code signing infrastructure, although its Flash, Reader, Shockwave and Air products were not impacted. It later revoked all code signed since 10th July.

    Following the loss of a laptop with unencrypted data on it, Nasa was forced into an ultra secure mode where it locked down all devices.

    Finally, another company forced to look inwards was VMware, which, after probing claims of a source code leak in April, had source code dumped online to mark Guy Fawkes Day.

    To prove that hacktivism did live on, a hacker named ‘NullCrew' hit Sony and planned to sell his haul. Yet he told SC Magazine that he was no longer selling the data.

    Anonymous continued action, this time against the controversial Westboro Baptist Church after it planned a protest at the site of a school shooting in Connecticut.

    After accusing Russian programmer Andrey Sabelnikov of being behind the Kelihos botnet, Microsoft ‘reached a confidential settlement' with Sabelnikov to close the case. Proving that major threat tools never die, Kaspersky Lab revealed a smaller but still effective ‘mini' version of Flame.

    Picking up a previous thread, research by IBM X-Force found the Flashback botnet was the most widespread and sophisticated Mac malware to date.

    In other news, Sophos' James Lyne got on his bike to reveal the lack of security around London's WiFi networks.

    Throughout the latter part of 2012, I attended a number of conferences, and the first was the Gartner security summit in London. Among the presentations, a claim that there are too many industry guidelines was interesting, as was the Information Commissioner's Office saying that it was "pressing for custodial sentences".

    At the annual European leg of the RSA Conference, a lot of the talk was on attack and defence, with executive chairman Art Coviello talking about shrinking budgets, guest speaker Alec Empire highlighting the threat of hacktivists, and Wikipedia founder Jimmy Wales calling for HTTPS to be "used everywhere".

    On to another show, and at the ISSE conference in Brussels the talk was of major trends and some instruction on how to create the ideal solution for ‘bring your own device' (BYOD), while a BlackBerry spokesperson was forced to backtrack after saying that BYOD was a ‘nightmare'.

    Next I visited the Irish conference Irisscon, hosted by the Irish Cert, which detailed the level of threat faced by the emerald isle, and presentations included work on preventing child abuse and the problem of annual penetration tests.

    The theme of SC's final conference of the year was governance, risk and compliance, and views there detailed how to achieve this and the correct road to take with management. Finally, I attended Dell World, where a company that acquired three security vendors in a year talked up security and software. Giving the opening keynote was former US president Bill Clinton, who talked philanthropy and made some interesting references to technology and collaboration.

    On to those acquisitions once again then: Dell acquired both Quest Software and Credant; Google bought inspection service Virus Total; Veracode acquired mobile scanning technology vendor Marvin Mobile Security; SecureData purchased Quadrant Networks; Axway bought API provider Vordel; and rubber-stamping the mobile market, Citrix completed its acquisition of Zenprise.

    Finally, to once again finish on some good news – Facebook followed Jimmy Wales' advice and announced that it was rolling out HTTPS for all users, and after learning that he would not face extradition, Gary McKinnon learned that the Crown Prosecution Service would not be bringing charges against him.

    So that was 2012 in around 3,500 words and a lot of work for me and hopefully happy reading for you.

    2012 in review: May to August

    As the first part of my look back at the last 12 months demonstrated, 2012 was a lot busier than I remember.

    For a start, I suspected that there had been no major hacktivism, acquisitions or data breaches; I was wrong on all three counts.

    Traditionally, the summer is when things go pretty quiet, but not in 2012. At the beginning of May, payment provider Global Payments was forced to revalidate its PCI DSS status, which was followed by claims of debit card fraud.

    The concept of hacktivism did not die in the middle period of 2012, as the websites of the ICO and SOCA were attacked, although suspects for the latter attack were later arrested.

    Also attacked in this period were Wikileaks, Russia Today and Reuters, the latter being plagued with postings of false stories. Former LulzSec leader Sabu was given a six-month reprieve from sentencing, meaning he should face trial sometime in early 2013.

    In the last blog we looked at some 2011 stories that continued into this year; another that persisted was the case of Google's Street View cars collecting data from unsecured WiFi networks.

    After regulatory investigations and slapped wrists, it emerged that the Google engineer who wrote the software told two colleagues and a senior manager about the flaw.

    In a separate incident, Google was fined $22.5 million (£14.4 million) by the Federal Trade Commission (FTC) over charges that it placed cookies on users' computers via Safari. The laws on cookie compliance also came into effect in this period.

    The ICO later announced that it was re-opening the case, causing Google to strongly deny that the payload data was 'pre-prepared'.

    Picking up another story from April, the Queen gave her annual speech in the House of Lords, which detailed the surveillance law plans. Home secretary Theresa May later announced that the data will only be accessed by senior police and not held by government.

    In false positive news, Avira flagged a Microsoft update as malicious and Yahoo was forced to fix its Axis browser after a security flaw was detected in the Chrome extension. Later, Symantec was forced to fix a blue screen of death issue after an update caused some PCs running Microsoft Windows XP software to crash repeatedly.

    In people news, Howard Schmidt, appointed as cyber security coordinator by President Obama following his election in 2008, announced his retirement from that position. He later joined the board at Qualys.

    One of the biggest stories at this time, and arguably of the year, was the detection of the Flame surveillance worm, which can sniff network traffic, take screenshots, record audio conversations, intercept the keyboard and pass details on to its operators via its command and control (C&C) servers.

    It later emerged that it had the capability to sign its own certificates, ensuring successful infections, and this led to major discussions with the United Nations issuing a warning on it. As well as analysis of its capabilities and strengths, there were genuine concerns about the level of technical capability required to develop such a tool.

    This led to claims that the failure to detect Flame marked 'the end of signature-based anti-virus', while there were serious concerns about the time taken to detect it, especially as it was rumoured to have first been deployed three years earlier. Microsoft later announced it would revoke certificates with keys of fewer than 2,048 bits; this was later reduced to 1,024 bits.

    Up against this threat, and others that followed, was the detection of the smallest banking Trojan, at only 20kb.

    In other news the ICO issued its largest monetary penalty of £325,000 to Brighton and Sussex University Hospitals NHS Trust; and Twitter blamed a cascading bug rather than an attack on an outage.

    I always thought that if a social networking site were to suffer a security issue, it would never survive it. Well, I was wrong, as LinkedIn still seems to be going strong after 6.5 million user password hashes were posted online; it has since added salting.

    Another social network to have security issues was Menshn, backed by former MP Louise Mensch. The claims were dismissed by co-founder Luke Bozier, who called it "a safe, clean and secure environment" and said "reported security issues around Menshn are unfounded". We didn't hear any more about security flaws, or that website, to be honest.

    In threats, McAfee warned of attacks on high value targets from ‘Operation High Roller', while MI5 director general Jonathan Evans said that a London business had lost £800 million due to a cyber attack, although no one owned up.

    At SC Magazine's summer Total Security Conference, the talk was of working better with MSSPs, the insider threat and using encryption, while a report that the USA and Israel were behind Stuxnet led one commentator to tell SC that Obama had approved state-sponsored hacking.

    A war of words broke out between RSA and a research group called Team Prosecco, with the latter claiming that the former's tokens could be broken in under 15 minutes; the vendor called it ‘an alarming claim' that was not true, Team Prosecco defended its research and RSA criticised it again.

    After the issues of Flashback, Apple seemed to change its tune on security, as it discreetly updated the text on its website to gently admit to a certain fallibility to viruses.

    Kaspersky Lab detected the Gauss worm that is designed to steal credentials, cookies and configurations of infected machines.

    An interesting situation arose over the summer around the DNSChanger botnet, whose servers were switched off amid fears that this would cut internet access for many users. As it turned out, there was no such crisis, and a lot more people learned a bit more about domain name settings and web security.

    Yahoo had another issue when its Voices service was breached by a union-based SQL injection vulnerability in the application, leading to 400,000 usernames and passwords being stolen and published online. The credentials were reportedly stored in clear text and were taken from the Yahoo.com subdomain dbb1.ac.bf1.yahoo.com. It later fixed the flaw and apologised to users.

    Despite web-based malware being the best known vector, email attracted headlines in this period too, with a suspicious email received by delegates at the Black Hat conference. The Anti-Phishing Working Group (APWG) also reported that February 2012 saw the largest number of phishing attacks ever recorded, while Dropbox called in external investigators after a spam outbreak on dormant user accounts.

    After claiming that ‘every little helps' for so many years, Tesco got caught up in a security headache when it was criticised for browser and password security failings, which caused debate on password security at the high street giant. Organisers of 44Con cheekily offered Tesco staff a complimentary ticket. The ICO also announced plans to investigate Tesco over the claims.

    It was a quiet time in acquisitions, with only eEye being snapped up by BeyondTrust and Dell's purchase of Quest Software of note. In other corporate news, former McAfee president Dave DeWalt joined the board of FireEye and Symantec parted company with CEO Enrique Salem.

    To finish, let's look at some good news again. Microsoft announced the winners of its first Blue Hat prize for the development of a new, innovative computer security defence technology. In a big scoop for SC, we revealed how 200 e-commerce websites were vulnerable to a shopping cart flaw – the good news? Well I hope those websites read it and updated the software.

    2012 in review: January to April

    Looking back over the last 12 months it could be said that nothing really changed within infosec: people still lost data, malware was successful and threats continued. So will 2012 be remembered as the year that things stagnated in security?

    Over the next few blogs I am going to look back over the information security headlines of 2012, as reported by SC Magazine, to see if that was the case.

    Going right back to the start of the year, over the Christmas period hacktivist group Anonymous hit the US security think tank Stratfor, posting 200GB of data online. This included the addresses and passwords of every customer that had ever paid Stratfor for services, and the personal information of 860,000 people who had registered with the company.

    It later emerged that British military and political user passwords were among those leaked, while CEO George Friedman said that credit card files had not been encrypted and admitted that this was "a failure on our part".

    In other Anonymous activity, its 2011 offensives against Sony got a further push, as the hacktivists announced plans to ‘dox' (release the personal information of) executives from the electronics giant, while the takedown of Megaupload also saw Anonymous target the FBI and US Department of Justice websites.

    Also in January, hackers Team Poison published the names and passwords of T-Mobile staff; while Symantec began looking into reports about source code for its consumer Norton brand being leaked, and it later admitted to the theft of code and instructed users to avoid using its pcAnywhere software.

    It was mid-March when Symantec confirmed that source code for Norton anti-virus had been leaked, admitting that it had got into a blackmail battle of words with a hacker.

    To the best of my memory, this is where major attacks on Sony ended, as after a year of huge campaigns the hacktivists went a bit quiet. More on that later.

    In some better news, January marked the first decade of Microsoft's Trustworthy Computing division, while Twitter took a major step into security with the acquisition of anti-malware firm Dasient.

    One of the most referenced stories for me in 2012 was the announcement by Viviane Reding of proposed changes to the Data Protection Directive for the European Union, some of which were exclusively revealed in 2011 by SC, including the 24-hour notification law, the appointment of data protection officers and rulings on the ‘right to be forgotten'.

    The changes were not met with complete acclaim; while there is some benefit to consumers if these changes are approved, it will hit businesses hard. One of the critics of the changes was the Information Commissioner's Office (ICO), who called for a rethink, saying that there were "challenges for its practical application and risks developing a ‘tick-box' approach to data protection compliance".

    Into February, and Russian programmer Andrey Sabelnikov protested his innocence after he was accused by Microsoft of being the brains behind the Kelihos botnet, which it took down in September.

    Also in February, research by Context Information Security found that web applications developed for the government, financial services, law and insurance sectors had the greatest increase in vulnerabilities. To combat any problems, regional cyber crime units were created in Yorkshire and the Humber, the north west and the East Midlands.

    Also, a division of VeriSign reported facing "several successful attacks against its corporate network in which access was gained to information on a small portion of our computers and servers" in 2010. It later said that its domain name system functions were unaffected.

    In other threat news, ticketing giant Ticketmaster admitted that its direct mailing system had been compromised, with spam emails sent from its official accounts; while in much more secure terms, Barclays announced the launch of the ‘Ping' payment application and Twitter detailed its plans to set all users to HTTPS by default.

    Just when you thought data loss couldn't get any worse, details of a stress test from Hartlepool's nuclear power station were lost on a USB stick, and ahead of London's Olympics it was predicted that more than 3,000 smartphones could be lost in the capital.

    In mid-February, there were serious red faces at Microsoft as it flagged a Google update as malicious, with Internet Explorer claiming that Google.com was serving up a severe threat and that Google's home page was infected with the Blackhole exploit kit.

    At the end of February, the annual RSA Conference arrived and the theme this year was very much about collaboration, with RSA executive chairman Art Coviello calling for this and the launch of the Trustworthy Internet Movement to achieve just that.

    The month of March saw the back catalogue of late pop superstar Michael Jackson stolen from Sony, while research into every security-conscious person's least favourite operating system, Android, revealed that it was sending personal data from devices to advertising companies without user knowledge.

    In acquisition news, M86 Security was bought by Trustwave, Cryptocard became part of SafeNet's offering, Sophos acquired German mobile device management vendor Dialogs and Dell announced plans to purchase SonicWall.

    Rounding off news from 2011, when hacktivist group LulzSec scared the life out of anyone willing to cross its path, it was revealed that leader ‘Sabu' had worked for the FBI as an informant. He was named as Hector Xavier Monsegur from New York and, upon naming his former colleagues, some of whom had been arrested already and some of whom faced that fate later in 2012, was given FBI protection.

    The hacktivists naturally were outraged, and since then Sabu's once noisy Twitter feed has fallen silent. However LulzSec did not stay quiet for long, as it hit a number of UK government sites over the Easter weekend, including the Home Office and Ministry of Justice.

    After successes in taking down botnets in 2011, Microsoft continued the trend with a takedown of command and control (C&C) servers to disrupt the Zeus banking Trojan. In another botnet story, which we will pick up in the next blog, Apple users were affected by the biggest confirmed malware outbreak to date with the Flashback botnet.

    The ICO was in its usual regulatory swing with numerous fines issued, leading me to ask whether it had anything against local councils. Some time later, I got to ask this directly to Information Commissioner Christopher Graham who said there were issues with awareness, training and mainly a realisation of what staff were dealing with. In short then, no real dilemma, but there is an underlying problem within this sector.

    The proposed surveillance bill hit the headlines and was criticised by privacy campaigners and internet founder Sir Tim Berners-Lee, while Christopher Graham told SC that the Queen's Speech would flesh out the details of the bill.

    Ending this period with some good news: Jonathan Millican, a 19-year-old student of computer science at Jesus College, Cambridge, was announced as the second winner of the Cyber Security Challenge, and SC began its countdown to the London Olympics with 100 days until the opening ceremony with advice on how to prepare.

    Last and by no means least, SC announced the winners of its 2012 awards. A year when nothing happened? This was only the first four months!

    ISSA chapter meeting on government infosec initiatives and threats

    An ISSA UK chapter meeting was held in early December in London and in attendance was Fujitsu's James Gosnold.

    Opening the event, Lord Toby Harris discussed 'How insecure is the UK?' and articulated a message about the ever increasing cyber threat and the context in which it is evolving. Aspects such as political extremism and environmental radicalism will apparently lead to an all round 'riskier' landscape, and our propensity for looking at what has gone before rather than at what may come will not help society's fight against cyber crime.

    Mike St John-Green, a director at the UK Cabinet Office and formerly at the OCSIA (the Office of Cyber Security and Information Assurance), who clarified that he "was not speaking on behalf of the government", spoke about Neelie Kroes' EU Digital Agenda team. He said that it had 'more or less' decided on intervention in cyber space, something that didn't seem likely at the Information Security Forum conference earlier in November. St John-Green also mentioned Project Auburn and the high hopes for that area of work.

    The audience were also told to expect a UK National Cert next year – at the behest of the EU.

    Jason Steer, solution architect for EMEA at Silver Tail Systems, gave an interesting talk on real world attacks on websites. After explaining to gathered members that the UK has the largest online economy in the world (eight per cent of trade is apparently online) and that companies spend more money on coffee than on application security, Steer gave some examples of business logic abuse. One involved a company issuing online 'mystery discount' vouchers giving anything from ten to 50 per cent off. However, there were only four unique codes, people quickly worked out which of them gave the 50 per cent discount, and the company ended up giving away much more than it ever intended.
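
    For what it is worth, even a simple server-side check can surface this kind of abuse. The sketch below is purely illustrative (the threshold and in-memory store are my own assumptions, not anything Silver Tail described): it counts how many distinct voucher codes a session tries and flags the enumeration behaviour Steer outlined.

```typescript
// Illustrative sketch: flag sessions that try several different voucher codes,
// the tell-tale sign of 'mystery discount' enumeration. The threshold and the
// in-memory store are assumptions for the example, not a product's behaviour.

const attemptsBySession = new Map<string, Set<string>>();
const MAX_DISTINCT_CODES = 3;

function recordVoucherAttempt(sessionId: string, code: string): "ok" | "suspicious" {
  const tried = attemptsBySession.get(sessionId) ?? new Set<string>();
  tried.add(code.toUpperCase());
  attemptsBySession.set(sessionId, tried);
  // A genuine customer types one code; trying many different ones looks like probing.
  return tried.size > MAX_DISTINCT_CODES ? "suspicious" : "ok";
}

console.log(recordVoucherAttempt("s1", "SAVE10")); // ok
console.log(recordVoucherAttempt("s1", "SAVE20")); // ok
console.log(recordVoucherAttempt("s1", "SAVE35")); // ok
console.log(recordVoucherAttempt("s1", "SAVE50")); // suspicious
```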

    Another example was given of a PC manufacturer whose website had a 90-day basket expiry. Discount and promotional codes could be applied to the items in the basket throughout the 90 days, continuing to reduce the item value, and nobody at the company noticed until a 60,000 order was placed on which they were losing money.

    The final speaker of the evening was comedian Bennett Aaron. Seemingly an odd choice to close out an information security event, Aaron's account of how he had his identity stolen and the impact it had on his life was both entertaining and touching. He made a documentary on the subject for Channel 4, which can be watched via his website.

    Trust and security of remote workers

    Significant numbers of people admit to regularly taking risks with potentially sensitive data at work that could lead to data breaches.

    A survey of 2,000 people conducted by Check Point in November found that of those who sometimes or frequently work away from the office, 34 per cent regularly forward material to personal email accounts so they can continue working elsewhere; 40 per cent check work email regularly on personal phones or tablets; 33 per cent carry work-related data on unencrypted USB sticks; and 17 per cent use cloud storage services such as Dropbox.

    This is despite the fact that 25 per cent of workers say their company's IT policy specifically forbids such actions, while a further 23 per cent either do not know if their company has an IT security policy, or are not aware of what their company's IT policy states.

    As a result, 50 per cent of British people say their trust in government and public sector bodies has been diminished, while 44 per cent say their trust in private sector companies has been reduced, as the result of breaches and losses of personal data over the past five years; 77 per cent of people would prefer to buy goods or services from a company that had not suffered a data breach, with only 12 per cent saying that it was not important to them whether a company had suffered a breach.

    At a roundtable held by Check Point to discuss these issues, contributors suggested a number of ways to deal with these problems. Kevin Bailey, research director for European security software at IDC, argued that while an organisation has to trust its employees to some extent, alongside those intent on being malicious there are those who might be socially engineered and those acting incorrectly but innocently.

    These people have to be protected and the organisation has to protect itself from them, too. “In God we trust – for everyone else, there's the end point.”

    Martin Pickford, head of technology security solutions at EE, added that education in combination with contracts is important. “People need to be aware of the rules and they need to be reminded. But if they go bad, they go bad and you can't stop that.”

    Andy Lucas, a partner at law firm SNR Denton, agreed, arguing that organisations should “trust no one but trust in the contract”. While security can be enforced technologically and physically, ultimately, it's only if there's a legal way to enforce security policies internally that security can hope to succeed, since at least some people will always be willing to try to circumvent security for both good and bad reasons.

    It's a suggestion picked up by Pickford: “People have to understand that they'll lose their job.”

    However, Bring Your Own Device (BYOD) and mobile working are blurring the boundaries between employees' work and personal lives. These two trends have their benefits, for both employers and employees: employers get more flexible working patterns, can spend less on hardware and support, and can potentially access more powerful technology than they would otherwise have been able to afford; employees can work the way they like when they like on devices that they're familiar with and they don't have to have two of everything.

    Peter Warren, chairman of the Cyber Security Research Institute, suggested that BYOD can actually help with security. “You will only get people to buy in to security if it's their responsibility to look after a device.” An employee is far less likely to lose their own computer or smartphone than they are a company-provided one, particularly a ‘CrippleBerry' that is more of an inhibitor to flexible working than an enabler.

    Nevertheless, both trends still have problems, particularly for security. As the survey showed, employees are likely to want to use insecure services such as Dropbox for accessing data at home or on their phone, which could potentially lead to data losses.

    They aren't going to want to use their smartphone for work if they can't use their own apps, such as Facebook, because the company doesn't like it or regards it as a security risk. They're even less likely to want to have their phone or home computer wiped completely when they leave the company.

    Andy Lucas pointed out that in the US, contracts requiring employees to submit to such wiping are being challenged in the courts: “Are employees really positioned to give consent to such contracts?” While there hasn't been a challenge in the UK, relying simply on employment contracts may not be enough.

    At the very least, says Lucas, in combination with contracts, there needs to be training for employees in how to be secure. But companies also need to consider whether they're applying new standards to an old phenomenon. “Employees have been taking customer lists since the industry began. The key issue is enforcement. Cast iron contracts help, but it's also partially behaviour. People going on gardening leave for six months after they leave a company is partly about getting them to forget things.”

    But panellists were agreed that largely the solution to the security risks presented by BYOD was to focus on the data and securing that, rather than the devices or endpoints. Encryption in particular was seen as the best way to safeguard against data loss, since even if data is transferred insecurely by email or Dropbox, if it's encrypted, no one else can use it.

    “Companies want to take over security for devices but that can cause issues,” said Check Point major accounts director Caroline Ikomi. “But it's easy to take control of data.”

    However, while Peter Warren wanted to know why encryption wasn't legally mandated for all devices – although he suggested that at most security conferences, the only people attending who were against legally mandated encryption were from governments – both Martin Pickford and Caroline Ikomi pointed out the problem with encryption is key management.

    “It's a pain and an overhead,” said Pickford. Advances in key management usability might well be the solution to broader adoption of encryption within organisations. Indeed, if there was one thing the panellists could agree on, it was that there are no easy answers to the issue of trust, at least not yet. “Trust is probably going to be the big fundamental argument we get for the first 20 years of this century,” suggested Peter Warren.
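
    To illustrate where that overhead comes from, here is a minimal envelope-encryption sketch using Node's crypto module. The structure and names are assumptions for the example: each document gets its own data key, and that key is wrapped under a master key which someone then has to store, rotate and back up, which is exactly the pain Pickford describes.

```typescript
// Minimal envelope-encryption sketch (assumed structure, not a product's scheme):
// every item is encrypted with its own data key, and the data key is wrapped with a
// master key that must be stored, rotated and backed up - the key-management burden.
import { createCipheriv, randomBytes } from "crypto";

const masterKey = randomBytes(32); // in practice this lives in an HSM or key service

function encryptAesGcm(key: Buffer, plaintext: Buffer): Buffer {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const body = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), body]); // pack iv + tag + ciphertext
}

function protectDocument(contents: string): { wrappedKey: Buffer; ciphertext: Buffer } {
  const dataKey = randomBytes(32);
  return {
    ciphertext: encryptAesGcm(dataKey, Buffer.from(contents, "utf8")),
    wrappedKey: encryptAesGcm(masterKey, dataKey), // the part key management must track
  };
}

console.log(protectDocument("quarterly forecast").wrappedKey.length, "bytes of wrapped key");
```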

    Dell World: Former US President Bill Clinton echoes security trends

    The last time former US President Bill Clinton spoke at an event I was attending, it was RSA 2011 in San Francisco and press were not permitted to enter the theatre.


    This week I was in attendance at the Dell World conference in Austin, Texas, where Clinton joined founder and CEO Michael Dell in presenting the opening keynote, and I am glad to say that this time I was allowed in to hear the two-term president.


    Anyone who has attended one of Clinton's keynotes will be familiar with his talks on the work of the Clinton Global Foundation and his work around global poverty and the environment. I had also heard that he tends to avoid the subject matter of the conference to focus on that work, so I was a little impressed to hear him talk about technology, if only briefly, and in a context around the key theme of collaboration.


    Michael Dell opened the keynote with customer deployment highlights and thoughts on the company's recent capabilities. He concluded by saying that he believes technology 'can create great opportunities' and referenced Dell's own Centre for Entrepreneurs and the opportunities it offers to help with access to capital and expertise, and with 'solutions for business of all sizes to grow and help their customers'.


    Passing the microphone to Clinton, the former President said that the Clinton Global Initiative had inspired nearly 1,200 young leaders who were taking steps to meet challenges, create jobs and power the economy forward.


    He said: “Technology works as a model for the challenges we face in the 21st century. It is hard to believe that we are coming up to the 20th anniversary of my inauguration. When I became President the average cellphone weighed five pounds and there were 50 websites on the entire internet; that many have been added since I began talking today.


    “I sent two emails when I was president: one to the troops in the Balkans; and one to John Glenn who was in space. It was also noticed that almost all email was in-office traffic and the young people would often type before they thought and we had a hostile Congress who thought that their number one job was to subpoena every email sent, I did too and we read them. It is all different now, but it all fits into how the 21st century works and what I do now.”


    Clinton also said that he 'loves open source and the internet' and its ability to make partnerships, and that users are not afraid to fail and try something else.


    Elsewhere, Clinton said a few things that grabbed my attention and may (or may not) have been indirect references to the issue of collaboration and perimeters, but his comment that 'all barriers end up looking like nets rather than walls' certainly reflected the issue of penetrated networks and de-perimeterisation.


    He also made comments that 'we cannot shut each other out' and that 'everyone has an obligation to try to build a positive and reduce the forces of global dependence'.


    The work Clinton is doing both in and out of the USA is really making a difference, and it is great that he is given the opportunity to speak at conferences such as this one. What was interesting to me was the change in technology since his inauguration in 1993. I am not sure if the checking of every email was actually done by him personally, but it does seem to be an early form of employee monitoring technology!


    He later said that there is a 'need to build ever inclusive communities' to harness the 'power of creative cooperation' and concluded by saying that the USA was in danger of falling behind other nations when it came to broadband speed.


    You could argue that Clinton came from a different age to what we are in now, and his time in the White House was ahead of the new millennium and all that it has brought in technology. However he does make the right noises about collective thinking and working and as he was in the White House as technology boomed in the 1990s, I doubt much of what he said was accidental.

    Losing it: how to protect data on USB devices

    In October, the Greater Manchester Police force was fined by the Information Commissioner's Office (ICO) for losing a USB flash drive.

    This apparently contained data on more than a thousand people who had 'links to serious crime investigations' and the USB stick was, crucially, unencrypted and belonged to an officer in the Serious Crime Division. Rather than using the corporate, encrypted memory stick provided by the force, the officer was using his own, higher capacity device.

    This was a costly and embarrassing episode for the force, but Greater Manchester Police is far from alone. Some organisations address the issue by simply blocking the use of USB devices altogether, but this restriction can have a negative impact on productivity. The real key to solving these types of data breach lies in answering a fundamental question: how can organisations ensure that data is secure without preventing staff from doing their jobs?

    Remote wipe and remote kill features are growing in prominence as data breach headlines mount up. In these scenarios, as soon as the device is plugged into an internet-connected endpoint, a command is sent from a central console in the IT department to either erase (wipe) all data or completely disable (kill) the device's ability to function.

    For highly sensitive data, an organisation might want the security of the remote kill function so that they know the device is no longer usable. An IT department can disable the device and prevent all access, even by the normally authorised user. This option can help ensure that employees don't take data with them on the device when they leave a company.

    Equally, if inconsistencies are found in authentication policies or device security is not fully implemented in hardware, then a remote kill feature forms an extra layer of security.

    Making remote kill 100 per cent effective over the internet requires a type of policy enforcement server to be involved in every attempt to access the device. The policy server would ideally take part in the authentication process and a user would not be able to access the device without the server also permitting it.

    At this level, the remote kill function can be a message from the policy server to execute a data destruct or block command on the device, instead of the usual authentication. Remote kill is a good solution to protect against the carelessness of current or former employees, but a 'rogue employee' situation is more complex.
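
    As a rough illustration of how a policy server can sit in the authentication path, the sketch below shows a device agent asking for a verdict on every unlock attempt. The endpoint, verdict names and key handling are assumptions for the example, not any vendor's actual protocol.

```typescript
// Illustrative sketch only: a device agent that consults a policy server on every
// unlock attempt. The endpoint, verdicts and key handling are invented for the example.

type Verdict = "allow" | "wipe" | "kill";

interface PolicyResponse {
  verdict: Verdict;
  unlockKey?: string; // only present when the verdict is "allow"
}

async function requestVerdict(deviceId: string, userId: string): Promise<PolicyResponse> {
  const res = await fetch("https://policy.example.com/v1/unlock", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ deviceId, userId }),
  });
  if (!res.ok) throw new Error(`policy server refused: ${res.status}`);
  return (await res.json()) as PolicyResponse;
}

async function unlockDevice(deviceId: string, userId: string): Promise<void> {
  // Because the server takes part in every authentication, an administrator can
  // change the verdict centrally and the next access attempt enforces it.
  const { verdict, unlockKey } = await requestVerdict(deviceId, userId);
  switch (verdict) {
    case "allow":
      console.log("unlocking with key material from the policy server:", unlockKey !== undefined);
      break;
    case "wipe":
      console.log("erasing all data on the device");
      break;
    case "kill":
      console.log("disabling the device permanently");
      break;
  }
}

unlockDevice("usb-0042", "jdoe").catch(console.error);
```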

    The rogue employee scenario makes the policy decision rather difficult. Say you want to terminate the employee's contract - you'd like to be able to disable all access to sensitive information immediately. The employee, knowing that he is about to be terminated and knowing that there are grace periods, simply needs to disconnect from the network and copy all the data from his USB device in one session. Unfortunately, in this situation, the grace period allows remote kill to be easily defeated just when you need it the most.

    Remote wipe and remote kill features improve security, but they also add an extra layer of complexity. As such they are most suited to high security environments involving sensitive data. For these types of organisations, remote kill and remote wipe are a safety net – a way of rescuing potentially catastrophic situations. This technology should not be used in isolation, and there should be many measures ahead of this final line of defence against data breaches – beginning with the use of encryption and proper staff training.

    This requires pre-planning and investment – but given the cost of data breaches, having the right controls and technology in place is a small price to pay.

    Nick Banks is head of EMEA and APAC for mobile security at Imation

    ForgeRock aim to bring open source stack concept to IAM

    With 'a new approach to identity management', the UK recently saw the launch of ForgeRock.


    Vice president of marketing Daniel Raskin told me that its product is 'the only fully supported open source identity management solution for supporting legacy, enterprise and next generation mobile and social application development'.


    The company was created by former employees of Sun Microsystems' identity management division, who chose to leave after the acquisition by Oracle in 2010.


    The 'mission' of ForgeRock is to offer an open source identity management product, open to all identity services, dedicated to democratising identity management across the enterprise, social, mobile and cloud.


    Raskin told SC Magazine that the open source-developed product, which utilises open development of single sign-on (SSO) and identity and access management (IAM), offers 'a highly developed technology'.


    The concept is around an open identity stack API that allows interaction with all of the ForgeRock open stack identity services, including access controls, provisioning, directory services and more.
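
    To make the idea concrete, a unified stack of this kind might be consumed along the lines of the sketch below. The endpoints, payloads and field names are my own assumptions for illustration and are not ForgeRock's actual API; the point is simply that one authentication step drives SSO, provisioning and audit calls.

```typescript
// Hypothetical illustration of the "single stack" idea: one client, one token, used
// across SSO, provisioning and audit endpoints. URLs and payloads are invented and
// are not ForgeRock's actual API.

interface Session { token: string; }

async function post(url: string, body: unknown, token?: string): Promise<any> {
  const res = await fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      ...(token ? { Authorization: `Bearer ${token}` } : {}),
    },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`request failed: ${res.status}`);
  return res.json();
}

async function demo(): Promise<void> {
  // 1. Authenticate once against the shared identity service.
  const session: Session = await post("https://idstack.example.com/authenticate",
    { username: "jdoe", password: "secret" });

  // 2. The same token drives provisioning...
  await post("https://idstack.example.com/provision",
    { user: "jdoe", application: "crm" }, session.token);

  // 3. ...and the audit trail, rather than juggling several separate products.
  await post("https://idstack.example.com/audit/query",
    { user: "jdoe", since: "2012-01-01" }, session.token);
}

demo().catch(console.error);
```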


    He said: “Our open stack is not only for identity and access management, but it is one piece, as we feel we are different from other vendors in what we are doing. We are looking at enabling identity for users; we looked at scale and there is nothing there.


    “With the open identity stack we are looking to simplify identity and we found that it is overly complex; look at other vendors and they have added complexity through acquisitions and that is passed on to the user who has to integrate. With the unified stack, we do it at scale.”


    Raskin said that this was a big passion for him, and that work with its user base determined that this was what the audience wanted too. “We don't want to have six or seven technologies to manage as it doesn't make sense for us; we would struggle to do that for the customer,” he said.


    “Other vendors are making mistakes and that is why users are not going for legacy vendor technologies.”


    Earlier this year I attended the Gartner identity and access management summit in London, and at the start of the event, a show of hands was asked for from those who were not happy with their IAM deployment – the response was large.


    I asked Raskin whether responses such as this had a bearing on its development; he said that while he was not at that event, he could understand frustrations over heavyweight technologies that did not perform as expected.


    What we are seeing in the IAM sector is more connection with Microsoft Active Directory and cloud-based identity, particularly in Software-as-a-Service environments. This approach towards open source and a single stack may be welcomed by those dissatisfied users.

    Gone phishing?

    Earlier this year, it was announced by the Anti-Phishing Working Group (APWG) that February 2012 had seen record numbers of phishing emails detected.

    Add to this recent research from Trend Micro, which determined that 91 per cent of targeted attacks begin with a spear phishing message, and the picture becomes complicated. You could argue that the era of mass phishing campaigns is over and that spear phishing is the successful trend for the attacker, but the APWG data contradicts that.

    What is more likely is that cyber criminals have not written off large-scale attacks in favour of purely targeted ones, but are trying everything in the hope that something works.

    Rohyt Belani, CEO at PhishMe, predicted that phishers will be changing their tactics in 2013 and resorting to targeted spear phishing emails rather than the mass mails of the past.

    He said: “Currently a phisher might send an email to John saying ‘It was great to meet you at XYZ event last week, here's a link to some of the research we covered on the day which might be interesting to you' (because the criminal has seen from his Twitter feed that John was at an event last week). But John might not remember meeting that person and might feel a bit suspicious and not click on the link.

    “However, criminals are starting to build up trust by using a two-pronged approach to spear phishing to try to make the automated emails seem more human. So the criminal might initially send an email to John saying ‘It was great to see you at XYZ event last week, I'm just working on a report that I think you might find interesting – I'll send it over to you tomorrow', and lo and behold, tomorrow comes, John receives the email he has been told to expect, and his defences are down – so he is much more likely to click the link and the criminal has his way in to the network.

    “The best technological defences are unlikely to stop an email like this, so you have to train your users what to look out for.”

    Belani said that as spear phishing attacks are performed by humans against humans, relying on technology alone is not enough even though software solutions exist, and companies need to employ a holistic approach, with anti-virus and filters removing the more basic, generic attacks.

    A recent comment on the SC Magazine website on the Trend Micro research suggested that any decent mail filter should drop all attachments which are password protected or executable, and scan those remaining files it lets through for malware.
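
    That kind of rule is straightforward to express. The sketch below is a minimal illustration of the commenter's suggestion, with an invented attachment structure, a placeholder malware-scanning function and an assumed extension list; it is not any particular mail gateway's behaviour.

```typescript
// Minimal sketch of the commenter's rule: drop password-protected or executable
// attachments and scan everything else for malware. The attachment shape, extension
// list and scanner are assumptions for the example.

interface Attachment {
  filename: string;
  passwordProtected: boolean; // e.g. an encrypted zip the gateway cannot open
  bytes: Uint8Array;
}

const EXECUTABLE_EXTENSIONS = [".exe", ".scr", ".js", ".vbs", ".bat", ".jar"];

function isExecutable(filename: string): boolean {
  const lower = filename.toLowerCase();
  return EXECUTABLE_EXTENSIONS.some((ext) => lower.endsWith(ext));
}

async function scanForMalware(bytes: Uint8Array): Promise<boolean> {
  // Stand-in for a real anti-malware engine; always reports "clean" here.
  return false;
}

async function filterAttachment(att: Attachment): Promise<"drop" | "deliver"> {
  if (att.passwordProtected || isExecutable(att.filename)) {
    return "drop"; // cannot be inspected, or too risky to deliver
  }
  const malicious = await scanForMalware(att.bytes);
  return malicious ? "drop" : "deliver";
}

filterAttachment({ filename: "invoice.exe", passwordProtected: false, bytes: new Uint8Array() })
  .then((verdict) => console.log(verdict)); // "drop"
```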

    Speaking to SC Magazine, Daniel Axsater, CEO of anti-spam technology vendor CronLab, said that email security was not an old concept and was just as necessary now as ever, 'or you will get spam and phishing'.

    Commenting on the recent research, Axsater said: “There is still a lot of phishing. We are seeing an increase but we are seeing a decrease in the volume of spam but an increase in its severity, there is still spam related to Viagra and Canadian Pharmacy, but also an increase in viruses in phishing attacks. That is why email security is still so apparent.”

    Any integrator or analyst will tell you that a layered approach to security is required for best practice protection, and email security should be part of that. Yes, web-borne malware may still outrank the email-borne variety, but that is not an excuse to take a relaxed approach to email-based malware.

    As 2012 comes to an end and 2013 sees many predictions on future attack vectors, it is unlikely that a rise in phishing or spam will be among them, but targeted attacks will likely feature heavily. Before dismissing email, consider its reality – it's here to stay after all.

    The demand for data forensics and emergence of Triage

    In today's world digital content is everywhere: from corporate environments and home computers to the latest smartphones, games consoles and media centres, it can be created and stored on an ever increasing number of devices.

    Many modern crimes include multiple forms of digital media/content and the proliferation of this creates a lot of complexity for forensic scientists when tasked with finding and analysing digital evidence in an investigation.

    Traditional investigative methods have approached digital forensics with either a ‘seize all' or ‘image on-site' strategy, which involves forensic scientists analysing all digital devices found at a given crime scene. While this method has historically worked well, the rising number of devices seized per crime, coupled with an increase in the amount of data stored on each device, means that forensic examiners are starting to struggle.

    Several hours are wasted on imaging and analysing vast amounts of data and devices that are irrelevant to the overall investigation, which significantly impacts staff resources and costs. Managing the workload and associated storage required for digital evidence is becoming an increasing issue, and the size of the problem can be clearly seen when comparing technology and its use today with five years ago.

    Moreover, one report noted that during the London Olympics around 306 billion files were shared on the web, almost 5 billion tweets were posted and in excess of 100 billion files were shared via social media outlets, showing the scale of data in today's world. This has led to an increased demand for forensic examiners to handle the growing workload.

    To help front line officers tackle these challenges, police forces have started to adopt an approach called ‘Triage', or ‘Targeted Data Collection'. Triage helps filter out devices that do not contain information or items of interest and instead prioritises analysis of items likely to be of evidential value.

    By only giving investigators devices that are known to contain evidence, their time can be focussed on the real items of interest. In addition, users of Triage technology do not necessarily need to be highly skilled examiners as the technology assists in much of the forensic process.

    This means that the forensic examiners themselves can focus on critical evidence once it has been identified, rather than having to be involved in the process of identifying evidence.
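
    In software terms, the core of the triage step is little more than scoring and ranking. The sketch below is illustrative only: the fields, weightings and threshold are my own assumptions rather than how SPEKTOR or any specific product works, but it shows how devices with nothing of interest drop out of the examination queue.

```typescript
// Sketch of the triage idea: score each seized device against watchlists so examiners
// see the most promising items first. All fields, weightings and thresholds are
// invented for illustration; real tools work against full forensic data.

interface DeviceSummary {
  label: string;               // e.g. "laptop-01"
  knownBadHashMatches: number; // files matching a hash set of known material
  keywordHits: number;         // hits against investigation keywords
  recentlyModifiedFiles: number;
}

function triageScore(d: DeviceSummary): number {
  // Weightings are arbitrary; the point is to rank devices, not to prove anything.
  return d.knownBadHashMatches * 10 + d.keywordHits * 3 + d.recentlyModifiedFiles;
}

function prioritise(devices: DeviceSummary[], threshold = 5): DeviceSummary[] {
  return devices
    .filter((d) => triageScore(d) >= threshold) // filter out devices with nothing of interest
    .sort((a, b) => triageScore(b) - triageScore(a));
}

const queue = prioritise([
  { label: "laptop-01", knownBadHashMatches: 2, keywordHits: 14, recentlyModifiedFiles: 40 },
  { label: "satnav-02", knownBadHashMatches: 0, keywordHits: 0, recentlyModifiedFiles: 1 },
]);
console.log(queue.map((d) => d.label)); // ["laptop-01"] - the satnav drops out of the queue
```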

    A recent Government study showed that forces that adopted a strategic Triage approach reduced their backlogs by 60 per cent and increased the productivity of their investigators by 90 per cent. An example of the benefits of Triage technology can be seen at the Lancashire Constabulary Hi-Tech Crime Unit (HTCU).

    In July 2011, Dell's Triage solution, SPEKTOR, was chosen as an effective tool to assist in processing the ever increasing range and volume of digital evidence presented for examination. This is operated and managed by Nigel Hardacre and Mick Ellwood within the Hi-Tech Crime Unit.

    The team has found it to be a very useful tool, as it has been able to overcome many of the difficulties associated with processing complex devices such as those containing multiple disks or solid state storage. It has assisted them in acquiring forensic images from these types of devices using the specially developed SPEKTOR collection technology, which has proved to be a real time saver and avoids the often onerous task of disassembling devices to remove hard disks.

    In addition to its regular use within the lab, SPEKTOR can also assist on scene, helping examiners with the challenge of potentially complex crime scenes, involving multiple and varied devices.

    Triage solutions are helping forensic examiners reduce the backlogs of data analysis, providing the ability to handle the wide variety of PCs, servers, phones, removable media and satellite navigation units, seized on a daily basis. Simple-to-use triage tools allow an investigator with minimal training to process, review and make an informed, forensically sound decision as to whether a suspect device requires deeper forensic analysis.

    While the technology is not currently mandated, the benefits of Triage are clear and proven in operational environments, helping digital investigators analyse data more effectively and helping to address the backlog of devices and the huge costs associated with traditional approaches to digital investigations.

    James Buckland is EMEA business developer of digital forensic solutions at Dell

    CipherCloud mark a successful year with strong future

    This year has seen a boost into the encryption sector by a number of new start-ups and one of them, led by the former founder of ArcSight, arrived in London this week.

    We focused on the technology offered by CipherCloud earlier this year, and this week it confirmed $30 million in funding and an expansion into the European market. When I met founder and CEO Pravin Kothari, who was one of the founders of the SIEM giant, he showed me the news about the funding appearing on financial websites; I responded by telling him what I usually find in my inbox after lunch.

    Kothari said: “We have seen demand from the user to use the cloud but to do it securely as well, as data is going out and that is the number one problem, how do you control and protect it, so it does not go to the cyber criminal and other countries so that other governments cannot get to your data.

    “We do client-side encryption. There are too many keys and a bigger nightmare, so we do a gateway to do all of your encryption. With this the user doesn't know what data is encrypted and there is no impact on their usability, and there is no change in the cloud application as you use it; you own the data and you select what to encrypt.”
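
    The gateway model Kothari describes can be sketched in a few lines. The example below is illustrative only, using Node's crypto module and invented field names; it is not CipherCloud's actual scheme, but it shows the idea of encrypting selected fields with customer-held keys before a record ever reaches the cloud application.

```typescript
// Sketch of the gateway idea described above: selected fields are encrypted with keys
// the organisation holds before the record reaches the cloud application. Illustrative
// AES-256-GCM via Node's crypto module, not any vendor's actual scheme.
import { createCipheriv, randomBytes } from "crypto";

const key = randomBytes(32); // in practice this would come from the customer's key store

function encryptField(plaintext: string): string {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag();
  // Pack iv + tag + ciphertext so the gateway can decrypt on the way back.
  return Buffer.concat([iv, tag, ciphertext]).toString("base64");
}

// Only the fields the customer selects are encrypted; the cloud application still
// receives a well-formed record and never sees the keys.
function protectRecord(record: Record<string, string>, sensitive: string[]): Record<string, string> {
  const out: Record<string, string> = { ...record };
  for (const field of sensitive) {
    if (field in out) out[field] = encryptField(out[field]);
  }
  return out;
}

console.log(protectRecord({ name: "Alice", nationalId: "AB123456C" }, ["nationalId"]));
```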

    Now entering its third year of operation, CipherCloud has experienced 500 per cent growth and has now expanded into the UK and Europe. CipherCloud regional director Richard Olver said: “CipherCloud has come a long way in just two years.

    “The establishment of a dedicated European headquarters is recognition of the phenomenal growth we've achieved in a very short time. We look forward to helping organisations in Europe adopt cloud applications, while ensuring that their sensitive information is fully protected and all government guidelines from the EU Data Protection Reform and the UK Information Commissioner's Office (ICO) are complied with.”

    Olver said that the company sees Europe as being 'two years behind' when it comes to cloud computing, yet it is already seeing ten to 15 per cent of its revenue come from European businesses. He referenced analyst statistics on the concerns around cloud computing, and he said Europe is an 'important frontier'.

    “We found that 60 per cent of businesses are moving to the cloud so the demand is there, but there are concerns on the control of data,” he said.

    Research conducted this week by CipherCloud of 300 senior IT professionals found that 41 per cent were unaware of the recently announced guidelines from the ICO on cloud computing. More than a quarter (27 per cent) said that they were aware and compliant, yet when it comes to protecting data in the cloud, 29 per cent rely on their cloud application provider, while 28 per cent implement their own internal controls.

    Olver said: “UK IT professionals need to be aware of the fact that regulatory non-compliance penalties could be as much as half a million pounds. It's clear that businesses are confused or even complacent about regulation, legislation and compliance when storing data in the cloud and are largely unaware of their responsibilities.”

    He said that as ICO guidelines are filtering down from the European Commission, it does suggest that there is a strong European stance on data security in the cloud. “How do we react to this? What people want to know is where are the solutions and the ICO does talk about using solutions, and it is explicit with access to keys,” he said.

    “Europe is two years behind and there is pressure to reduce costs and we are engaging with local and national government and we feel that there is a big opportunity.”

    Kothari said that CipherCloud had 'come along at the right time', as people want to do more and more with the cloud while many want to get rid of software because 'innovation is in the cloud'. I once asked whether encryption ever changes; the truth is that the concept does not really, if you excuse the rise in key lengths, but the ways of doing encryption are what is interesting.

    From the encrypted USB key vendors to those with a view on how to encrypt emails, this area is one that regulators love to reference but with little guidance on how to do it – sometimes the most straightforward answer is the best.

    'Miss, a virus stopped me from doing my homework!'

    In terms of a good excuse for not doing your homework, the one about a virus eating pupils' work is actually true.

    According to Slashdot, pupils at the Lake Washington School District had been issued laptops in order to improve remote working; however, a computer virus caused havoc for the district as it worked its way through the laptops.

    Geekwire reported that the virus disrupted classes and cost the district money, as the district spent more than a month fighting off the Goblin virus. According to Eset, this polymorphic virus spreads via removable media and file shares, using techniques aimed at outwitting anti-virus scanning engines.

    Lake Washington District director of communications Kathryn Reith said that the virus was "an extremely sophisticated one". Despite all of the laptops running Windows 7, Goblin spreads through executable files and networks, and the district had to hire five temporary IT staff members to suppress the virus.

    A computer virus outbreak is certainly a modern reason for not being able to work. Perhaps this is an example of how a lack of preparation when deploying mobile devices for use outside the perimeter can come back to bite the IT department, but this is a school, which is unlikely to have staff dedicated to that task.

    Geekwire reported that the proximity of the school to Redmond was Microsoft's reason for involvement in supporting local and national education, and that Lake Washington School District has previously served as a poster child of sorts for Microsoft's Trustworthy Computing Group.

    Even so, the fallibility of devices in the hands of the unprepared is one of the key worries of bring your own device (BYOD) and device deployment policies, and this is a prime example of how such a challenge can cause havoc. Plus, it is one step up from ‘my dog ate my homework'.

    Malware and the threat of the Mini-Me

    Today's cyber criminals have shifted their efforts from somewhat low-level, financially motivated attacks on unwitting individuals, to all out enterprise-class assaults on large corporations and even nations in the hope of stealing precious intellectual property and other high value confidential data.

    The prevalence of cyber war and state-sponsored espionage campaigns – which was first pushed to the forefront by a spate of high-profile malware discoveries - also appears to be escalating as evidence of equally advanced, increasingly targeted off-shoots of these ‘parent' viruses is coming to light.

    While much media attention surrounded the unearthing of sophisticated malware such as Flame, Stuxnet and Gauss, it now appears that there were several ‘mini' versions of these viruses released into the wild from the same factory of hackers – most likely around the same time that their famous parents were unleashed.

    Junior variants such as the recently reported ‘miniFlame', a smaller-scale, highly targeted version of the Flame virus, have serious implications when you consider the growing threat of cyber war and increased sensitivity around intellectual property theft worldwide.

    MiniFlame has been said to focus its attacks almost exclusively on IT systems in Western Asia, signalling a second wave of targeted international cyber espionage campaigns, as nations continue to tap into the growing use of sophisticated malware to indirectly attack one another.

    The dynamic between generations of malware is certainly an interesting one. While variant strains of headline viruses are arguably just as dangerous as their parents in terms of genetic complexity, under-the-radar viruses such as miniFlame are more adept at moving through signature-based defences undetected. In other words, they are designed to better home in on target systems and to wreak maximum havoc once within the confines of the network.

    This presents a two-stage attack scenario – large viruses such as Flame cast a comparatively wide net and identify the potentially lucrative targets, before their offspring set to work drilling further into the target system.

    The probable scope of these mini variants is also an alarming prospect. Based on reports of the nature of the command/control infrastructure of miniFlame, it is safe to assume that other variants of Flame – and indeed other well-documented viruses – are in existence. 

    The potential for this combination of malware to wreak havoc on a target system is also dictated by the order in which the components are discovered. For instance, if the parent malware is discovered before any of the junior variants have been used, the parent can prove more damaging than the off-shoots, mainly because its discovery provides clues that help security researchers identify the other variants inside victim organisations before they can do too much damage.

    Conversely, if the junior variants are discovered first, it can be more difficult for experts to expand their scope of analysis to look for indicators of parent genesis malware – particularly if the parent virus has not been widely used in one or more attacks and remains relatively unknown.

    With the original development of miniFlame thought to date back as far as 2007, it is perhaps then just a neat coincidence that the variant was uncovered after a period during which Flame had been credited as the ‘most sophisticated computer virus in the world'.

    It is more apparent than ever that we are now in an age of heightened cyber security threats, where different generations of malware are in simultaneous play, and attackers are equipped with the necessary tools to launch successful advanced, persistent threats on enterprises and government organisations alike.

    Under-the-radar viruses such as miniFlame are a worrying indication of the expertise and determination of today's threat actors. Indeed, the highly complex nature of malware being discovered today is unfortunately the final nail in the coffin of traditional perimeter-based defences and anti-virus as standalone measures of defence – and urgent, proactive measures must be taken by organisations, governments and nations to ensure networks are defended as robustly as possible from these next-generation threats.

    After all, it seems that further discoveries of a similar nature have become very much inevitable.

    Darien Kindlund is senior staff scientist at FireEye

    Users, senators and privacy advocates criticise Facebook over proposed changes

    Facebook users are known for their willingness to fall for scams, which is why the scams are successful.

    The latest involves users posting a statement that they believe will indemnify them against proposed changes by declaring their copyright over "all of my personal details, illustrations, graphics, comics, paintings, photos and videos".

    Unsurprisingly, the status is being replicated across the social network, and it has led to Facebook issuing a statement acknowledging the rumour that it is making a change related to ownership of users' information or the content they post to the site.

    “This is false. Anyone who uses Facebook owns and controls the content and information they post, as stated in our terms. They control how that content and information is shared. That is our policy, and it always has been,” it said.

    However, the basis of this chain letter is the news that the Electronic Privacy Information Center and the Center for Digital Democracy have asked Facebook to reconsider proposed changes to its terms of service, on the grounds that they violate the company's commitments to protect users.

    The ‘proposed updates to our governing documents' posted by Elliot Schrage, vice president of communications, public policy and marketing at Facebook, state the legalities on the collection and use of data for Facebook users.

    It said: “Our goal has always been to find ways to effectively engage your views when we propose changes to our governing policies. As a result of this review, we are proposing to restructure our site governance process. We deeply value the feedback we receive from you during our comment period.”

    Among the changes is a new feature allowing users to submit questions about privacy to chief privacy officer for policy, Erin Egan, who will also host webcasts on privacy, safety and security.

    The changes state that users own all of the content and information that they post. For content that is covered by intellectual property rights, such as photos and videos, users specifically grant Facebook "a non-exclusive, transferable, sub-licensable, royalty-free, worldwide licence to use any IP content that you post on or in connection with Facebook".

    This IP licence ends when a user deletes their IP content or account unless it has been shared with others, and they have not deleted it.

    In terms of third party applications, this comes down to the agreement with that application on how it "will control how the application can use, store and transfer that content and information".

    These and other changes have caused the Electronic Privacy Information Center and the Center for Digital Democracy to write a letter to CEO and founder Mark Zuckerberg, directly criticising a proposal to "end the voting component" of the site governance process, as well as the replacement of the 'who can send you Facebook messages' setting with new filters for managing incoming messages, and the integration of users' Instagram information into their Facebook profiles.

    The letter said: “Facebook has been receptive to its users in the past. In 2010, you unveiled a set of simplified privacy controls in response to public criticism. And in 2009, you agreed to back off proposed changes to the Terms of Service and establish the procedures for user input.

    “Now, we ask that Facebook be similarly responsive to the rights of Facebook users to control their personal information and to participate in the governance of Facebook. We ask that you withdraw the proposed changes to the Data Use Policy and the Statement of Rights and Responsibilities.”

    The letter is counter-signed by a number of senators and members of the US Federal Trade Commission, while thousands of users have criticised the proposed changes and called for a vote away from the social network.

    Jim Killock, director of the Open Rights Group, said that as the changes have not yet been enforced, they will be subject to the voting process, but that the vote is only binding if a third of users take part – something two previous votes failed to achieve.

    He said: “Facebook are lobbying the UK government to weaken new data protection laws and reduce our legal rights. They claim that the right to have our data back or to destroy it would be unworkable. But then Facebook go and show exactly why UK citizens need new, stronger personal data laws.”

    Facebook has ridden out these challenges before and managed to survive. Yet each criticism from governments must strike a blow against Facebook's privacy policy consultation, so maybe this could be the first question a user poses to Egan?

    Personal documents used as confetti in New York

    This week's news saw one of the world's most notable stores and the police department of Nassau County in New York in a rather unusual and hopefully unprecedented situation.

    According to WPIX, department store Macy's annual Thanksgiving parade, notable for marching bands, balloons, cheerleaders and clowns, also included mounds of confetti that would have been useful for anyone looking to harvest personal information. The New York streets were littered with confidential personal information, including social security numbers and banking information for police employees, some of whom are undercover officers.

    One attendee noticed that a strip of confetti had ‘SSN' and a number on it, while others had phone numbers, addresses, car registration numbers and police incident reports. Also included was information about Republican presidential candidate Mitt Romney's motorcade, supposedly from the final presidential debate that took place at Hofstra University in Nassau County last month.

    In a statement to PIX11, the Nassau County police department's commanding officer for public information, Inspector Kenneth Lack, said: "The Nassau County police department is very concerned about this situation. We will be conducting an investigation into this matter as well as reviewing our procedures for the disposing of sensitive documents."

    Macy's said that it uses "commercially manufactured, multi-colour confetti, not shredded paper", and it is a mystery how this information came to be where it was. The challenge for the forensic teams is not only to figure that out, but also how to stop it happening again.

    Those with memories of the ticker tape of the 1978 World Cup may not have thought about where the paper actually came from, and in the days before the recycling bug caught us, I doubt anyone was really concerned about using recycled paper.

    So perhaps that is the issue: mass-produced paper cannot be bought, shredded and distributed in an environmentally friendly way, so something else needs to be used. In this case, this was a bad move, and I hope that the investigation reveals the real reason why confidential documents became parade confetti and prevents such a thing happening again.

    ISF World Congress features major speakers on security areas

    This year's Information Security Forum (ISF) World Congress was held in Chicago, Illinois, and Fujitsu's James Gosnold was in attendance for SC Magazine.

    The first day of the three-day event began with BBC royal and diplomatic correspondent Nicholas Witchell introducing former Nasa flight director Gene Kranz, who gave a powerful presentation entitled 'Failure is not an option' (the same name as his book) based on his experiences leading up to and then directing the Apollo 13 mission to the Moon.

    Several parallels could be drawn between managing a security incident or crisis and how Kranz worked through the issues to successfully bring the crew of the Apollo 13 spacecraft back to Earth, namely leadership, making difficult decisions based on solid data, and having a strong team in place underpinned by a strong trust ethic.

    Kranz said that he believed the modern world is too risk averse, although he went on to say that risk should be controlled and well tested. He also has a strong grasp of modern technologies, having gone on to become a Nasa director responsible for the largest software inventory in the US outside of the federal government.

    Running one of the breakout sessions was Dr Geraint Price of Royal Holloway, University of London, on the latest developments in cyber security. Royal Holloway has been engaged in a 'Cyber Security Club' since 2011, which includes well-known members of government, academia and industry; the club predicted that a whitepaper on cyber security would be released within six to eight months. Price also introduced the VOME project, which is bringing together academics and practitioners across many disciplines and aims to raise privacy awareness amongst the general public.

    One point Price made was about the 'decline of reputational impact': as more organisations ride out well-publicised data breaches with only a short-term impact, reputational damage is becoming less of a justification for an increased security budget.

    After a panel discussion on 'The role of government in securing cyber space', held under the Chatham House rule, Bobby Singh from the TD Bank Financial Group ran a breakout session on developing a security operations centre.

    While the content was very detailed and comprehensive, I did wonder how many organisations actually have the resources to build a security operations centre to the specifications described by Singh. For those that do, there were some good pointers here and aspects to consider, such as: should the SOC and NOC be separate? Is the scope nailed down? Will it be a governance or an operational function?

    The final session of the day that I attended was facilitated by consultants from PwC and presented a case study of how they used the ISF's ‘Standard of Good Practice' to first drive out requirements and then design a security architecture for one of their customers.

    The second day of the Congress opened with a presentation from Derek O'Halloran of the World Economic Forum (WEF) on 'The view from the C-suite'. This included an awareness video (called 'companies like yours') and some headlines from its research on the increasingly 'HyperConnected World', where it was explained that with two billion connected people, more data will be produced in the next 12 months than in all of history. It was also estimated that there will be 50 billion connected devices by 2020.

    O'Halloran put forward the view that human vulnerability is very much the main issue, but that because the boardroom "doesn't get it", security is still often under-funded. WEF survey results demonstrated that cyber security risk is the third most underestimated risk across all industries.

    The WEF also saw very inconsistent survey results with many of the same boards ranking technology as a high risk and cyber security as a low risk.

    Finally, the WEF will be releasing a document worth looking out for called 'Partnering for cyber resilience – Tools for the boardroom', which is intended to introduce some middle ground between C-level management and security staff.

    The first breakout session I attended that day was given by Thomas Bernard from e-health Ontario, with magic tricks and a look at the lighter side of security. It certainly delivered that, with rope tricks being used to analogise challenges faced by security professionals.

    Bernard is a big advocate of plain English in policies and risk documents, and he is also very experienced in incident management, stressing the value of getting all parties together once or twice a year and working through an incident, however painful. Bernard also added that "most significant incidents are caught by people, not by machines", which I thought a useful insight.

    I delivered the second breakout session on 'Security monitoring on a budget', discussing how a set of very simple security information and event management (SIEM) reports, reviewed on a daily or weekly basis, could significantly improve the security posture of many organisations, especially those that found delivering a fully-fledged monitoring and alerting service too onerous and effectively gave up, leaving the SIEM, to all intents and purposes, to gather dust.
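    As an illustration of the kind of 'budget' report I had in mind, the sketch below counts failed SSH logins per account from a standard Linux auth log; the log path and the review threshold are assumptions, and any log source with usernames and timestamps would serve equally well:

```python
import re
from collections import Counter

LOG_PATH = "/var/log/auth.log"   # assumed location; adjust for your systems
THRESHOLD = 20                   # arbitrary figure worth a human's attention

failed = Counter()
pattern = re.compile(r"Failed password for (?:invalid user )?(\S+)")

with open(LOG_PATH, errors="ignore") as log:
    for line in log:
        match = pattern.search(line)
        if match:
            failed[match.group(1)] += 1

# A one-page daily report: accounts with an unusual number of failures.
for user, count in failed.most_common():
    if count >= THRESHOLD:
        print(f"{user}: {count} failed logins - review")
```

    Reviewed each morning, even something this crude will surface brute-force attempts and misconfigured service accounts that an unattended SIEM would happily log and then forget.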

    Public key infrastructure inventor Dr Whitfield Diffie gave the second day's closing keynote on 'Possible futures in information security'. While very speculative, Diffie's first point was that the internet would not have been so successful had it been heavily secured at the outset in the 1960s.

    There were colourful soundbites aplenty, such as 'society needs crime (and has always needed crime), therefore the internet must also need crime' and 'the move of society online is comparable to the move of society into cities 5,000-7,000 years ago (i.e. the significance is comparable)', which made for an entertaining hour.

    Diffie's opinion is that "the (current) security status of everything but cryptography is rotten", and he cited key management, operating systems and protocols as examples. Other things we (the security industry) are not good at are specifying what we want, writing good code, and acknowledging that not all code can be the best code (and confining it accordingly); this has remained unsolved for 50 years. He summed this up by saying "we still don't know how to program".

    On the future, Diffie felt that the cloud uptake would increase the dependence of small players on large. Diffie is also looking to homomorphic encryption as the possible silver bullet for cloud computing, although he commented that it is slow and lends itself more to confidentiality than authenticity.
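    Diffie did not give a worked example, but the property he is describing is easy to demonstrate with textbook RSA, which is multiplicatively homomorphic: multiplying two ciphertexts yields a valid encryption of the product of the plaintexts, so a host can compute on data it cannot read. The toy sketch below uses deliberately tiny, insecure parameters purely to show the idea, and it also illustrates Diffie's caveat, since nothing here authenticates who supplied the values:

```python
# Toy textbook RSA, only to demonstrate the multiplicative homomorphism.
p, q = 61, 53
n = p * q                    # 3233
phi = (p - 1) * (q - 1)      # 3120
e = 17
d = pow(e, -1, phi)          # modular inverse (Python 3.8+)

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 9
product_cipher = (enc(a) * enc(b)) % n   # computed without ever seeing a or b
assert dec(product_cipher) == (a * b) % n
print(dec(product_cipher))               # 63
```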

    This led Diffie to discuss quantum computing, which would address the speed of homomorphic encryption and just as well, as it will also destroy modern cryptography. Diffie then reminded the audience that quantum computing has been promised by physicists for over 20 years.

    Into the third day, quantum computing continued with the first breakout session, presented on that subject by BT's Konstantinos Karagiannis, on how it will change security forever. This was essentially a 30-minute crash course on quantum physics, including the idea that particles "know when you are watching them and vanish", which particularly resonated with me.

    Karagiannis told us about 'particle entanglement' and how it troubled Einstein, before explaining that keeping particles in a state of superposition is the essence of, and therefore the challenge for, quantum computing. Unlike bits, qubits can be in a state of zero, one or a superposition of both, which, on chalkboards at least, will allow quantum computers to defeat even 2,048-bit encryption in minutes.
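    For reference, and as my own summary rather than anything from Karagiannis's slides, the textbook statement of a qubit in superposition is:

\[
\lvert\psi\rangle = \alpha\lvert 0\rangle + \beta\lvert 1\rangle,
\qquad \lvert\alpha\rvert^{2} + \lvert\beta\rvert^{2} = 1,
\]

    and a register of $n$ such qubits can occupy a superposition of all $2^{n}$ basis states at once, which is the scaling that, via Shor's algorithm, underpins the claimed threat to 2,048-bit RSA.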

    There is a big quantum computing race going on at the moment, as nobody wants to be last. Significant recent developments have included the award of the Nobel Prize to two scientists for proving a way to measure quantum particles without destroying them, the launch of 'The Bell Labs of tomorrow', and the University of Waterloo opening its Quantum-Nano Centre earlier in the year. A company called D-Wave is also doing a lot of good work in this space.

    The Congress closed with a keynote from Frank Abagnale, 'the original social engineer', whose life was portrayed in the film 'Catch Me If You Can'. He took the listeners through how he went from a 16-year-old lying about his age in order to command higher wages in manual jobs, to defrauding airlines and banks to live a life way beyond his means, before being caught.

    Don't believe in Zimmermann?

    This week I was privileged to interview privacy and encryption expert Phil Zimmermann.

    The founder of PGP has been off the scene since his company was acquired by Symantec in 2010 and he completed his duties to the security giant during that year. His new venture, Silent Circle, sees him in the zone of secure communications, along with partners Mike Janke and Jon Callas. Janke is a special operations communications expert and fellow privacy advocate and Callas was formerly a major player at Apple.

    The concept of the company's technology is fast and secure communications over voice-over IP (VoIP), SMS and voice calls. Building on Zimmermann's PGP work, its functionality centres on peer-to-peer communications with minimal need for keys or the public key infrastructure (PKI).

    Janke said that rather than being built for business, this is "built for the individual". He said that a company can buy as many licences as it needs, but it is down to the employee to download the tool to their device; the secure communications are therefore between the sender and recipient, and no data is seen by the employer or Silent Circle.

    Janke said: “It is important as they want a service they can trust. We open source our client so anyone can look at it and we were very careful about the basics. There is a hot topic around privacy groups and other dissidents, so if we don't get this right, it is more than a bad review. It is important to be trustworthy; the trust model exists around the world.”

    Zimmermann, speaking to SC via the secure VoIP channel over an iPad, said that he has been working on secure communications since 2004 and that he had been interested in secure voice channels since before secure email, but "the technology wasn't ready yet".

    He said: “Back in the early days of PGP I had some legal issues; lawyers would ask me why I was interested in this and after I explained it, they went with it. I said it was like being able to stand 1,000 miles away and whisper in your ear. Security is now restored with face-to-face communications.”

    One of the key parts of Silent Circle is the minimum reliance on keys, in keeping with the founders' privacy leanings. Janke said that the platform had been designed to retain the least amount of data, only retaining the username, password and ten-digit phone number in the case of a voice call.

    Zimmermann said: “With the right protection we can minimise exposure of keys. They are not shared with the server and for calls, there is nothing that we have that can compromise the call.

    “You can put this on your mail client so you can manage the keys on it; sometimes you need to keep keys for email decryption but with a call, when you are done you don't need to keep the keys.”

    He also went on to say that rather than a server or depository holding the keys, the device holds the key, further reinforcing the privacy ethos. The point of user privacy is understandable, especially when this is used for sensitive or confidential communications, but I asked Janke and Zimmermann how this can be managed by businesses that want to deploy it to their users.

    Janke said that this was considered, so a web interface was developed for enterprise, while desktop users are given a control panel where they can build a phone book with other user credentials.

    “If two people are end-to-end secure, the enterprise cannot decrypt it unless they have the phone book and personal device,” Zimmermann said. “This doesn't need PKI, for example, and it doesn't require certificates from certificate authorities (CAs), as there have been some spectacular failures.”
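    Silent Circle has not published the detail of its call set-up here, but the pattern Zimmermann describes, agreeing a fresh key on the devices themselves for each call and discarding it afterwards with no certificates or central key store involved, resembles an ephemeral Diffie-Hellman exchange. The sketch below is my own illustration of that general idea using X25519, not Silent Circle's code:

```python
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Each party generates a fresh (ephemeral) key pair for this call only.
caller_priv = X25519PrivateKey.generate()
callee_priv = X25519PrivateKey.generate()

# Only the public halves cross the network; no server or CA is involved.
caller_pub = caller_priv.public_key()
callee_pub = callee_priv.public_key()

# Both sides derive the same shared secret independently.
caller_secret = caller_priv.exchange(callee_pub)
callee_secret = callee_priv.exchange(caller_pub)
assert caller_secret == callee_secret

# Stretch the secret into a session key used only for this one call.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"call-session").derive(caller_secret)

# When the call ends, the private keys and session key are simply discarded;
# there is nothing long-lived for a server, or anyone else, to hand over.
del caller_priv, callee_priv, session_key
```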

    Janke said: “When we started this, we said 'let's build for the public, the home office workers, and it has to be secure and familiar and the call has to be clear and crisp'.”

    I concluded the meeting with Janke (who was in London) and Zimmermann (who was in Washington DC) by asking them what they thought about the state of the take-up of encryption, particularly as the likes of the Information Commissioner's Office (ICO) have called for greater take-up of encryption.

    Janke said: “Ten years ago no one knew about encryption, now the everyday user understands it, so it has to be user friendly and about true peer-to-peer connectivity.”

    Zimmermann said: “Everything comes down to having something everybody can use. We see making it easy with PKI, but the technology to deliver it comes at a price and you are at risk like DigiNotar. What is a more spectacular failure for PKI? It is difficult to get it right, it is easy to use and we do not depend on PKI and we have got something your mum can use without being dependent on PKI.”

    Janke said: “We also understand the huge requirement for people to use secure communications. I tried everywhere and care about security, so we created a secure coding platform that encrypts communications to the network and then connects to the recipient.”

    The issue of security is key to so many communications, be it soldiers on active duty talking to families back home, politicians speaking, or the transmission of data. Zimmermann's claim that his interest in this area has been as persistent as his interest in secure email, and that he was just waiting for the technology to catch up, suggests that this is the right time for such a launch.

    There are other features, such as distinct colouring of the messages and an ability to 'burn' messages after a fixed time, similar to the 'Kill Pill' concept so needed within email clients; many will find this useful.

    Many may be suspicious of the concept of total user control of the encrypted communications without total knowledge and visibility of the corporate IT department, but with more tweaking expected from Silent Circle, Zimmermann's new venture may have arrived at the right time for some.

    To CC or not to CC?

    The Taliban has managed to reveal the email addresses of some members after it CC'd them into an email, according to a report by ABC News.

    It said that official Taliban spokesperson Qari Yousuf Ahmedi sent out a routine email last week in which he publicly CC'd the names of everyone on his mailing list. This list of 400 included journalists, an address appearing to belong to a provincial Afghan governor, an Afghan legislator, an Afghan consultative committee and a representative of Gulbuddin Hekmatyar, an Afghan warlord whose outlawed group Hezb-i-Islami is believed to be behind several attacks against coalition troops.

    As SC demonstrated in its research with Egress earlier this year, sometimes it pays to have a double check of your email before you hit send, as that Kill Pill would have come in very useful!

    Other research issued last week by Varonis also found that 62 per cent of respondents reported a mishap — often with serious consequences — as a result of sending an email to the wrong person or with improper or unauthorised content.

    This sort of ‘accident' is very easy to make. Personally, I find it easier to CC than BCC, so I do what hopefully keeps me out of trouble: I don't CC anyone without checking first. Although I receive far fewer emails than the average of 100 every day (according to Varonis' research, 78 per cent receive that amount), it is hard to guarantee I will never make this mistake.
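    For anyone mailing a large list programmatically, the safer pattern is to keep the list out of the visible headers altogether and address the message to yourself. Below is a minimal sketch using Python's standard smtplib; the addresses and SMTP server are placeholders:

```python
import smtplib
from email.message import EmailMessage

recipients = ["alice@example.com", "bob@example.com"]   # the real mailing list

msg = EmailMessage()
msg["From"] = "press.office@example.org"
msg["To"] = "press.office@example.org"    # visible header points back at the sender
msg["Subject"] = "Weekly statement"
msg.set_content("Routine update text goes here.")

# No "Cc" or "Bcc" header is set, so recipient addresses never appear in the
# message itself; they are only supplied to the SMTP envelope below.
with smtplib.SMTP("smtp.example.org") as server:
    server.send_message(msg, to_addrs=recipients)
```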

    Email is here to stay, unless someone comes up with a viable and practical alternative. However, the issues that plague all of us also affect major terror organisations, so when you feel like sticking that poster up in the office to remind your employees of your security posture, this might spring to mind.
