Virtualisation seems like the solution to managing IT systems, but what are its faults?

Feature by Rob Buckley

In a complex security world, virtualisation seems to be a brilliant solution. But the VM path is strewn with pitfalls, says Rob Buckley

Managing systems easily and getting the most out of them is the holy grail of an IT department. “There has to be a better way” is a thought that has crossed the minds of most IT managers at some point – whether it's having to apply patches to 10,000 desktops, or working out an easy way for users to access their files on a dozen different storage systems.

Virtualisation is a technology that seems to offer a way to attain this goal. It adds an extra software layer, a ‘hypervisor', which mimics hardware. The hypervisor runs on the ‘host' machine and creates a virtual environment, typically called a ‘guest'. Virtualised environments offer many management advantages, which differ according to what is being virtualised.

Although almost anything can be virtualised, there are four main types: server, desktop, storage and network.

Server virtualisation enables one server to pretend to be many. Most servers use only five per cent of their resources at any one time, so it's possible to get better usage out of a machine by running several servers on it within guest environments. It can enable applications or configurations to be moved from machine to machine according to the resources available, changes in demand or for disaster recovery, without having to worry about driver compatibility and the like.

Desktop virtualisation can enable servers to pretend to be desktops, so they can be accessed from anywhere, on any low-spec device. The data, apps and OS remain on the server, where they are manageable and safe, since only screen updates need be sent to the devices.

Virtualised storage enables disparate storage units to appear to be single units or a single unit to appear to be many. This can lead to better use of resources by filling up storage systems, or splitting up data if necessary. It can also be used in information lifecycle management, moving rarely accessed data to different media or slower networks – although to end-users it appears the same as before.

Lastly, network virtualisation can combine discrete networks into a single network, or divide one network into several virtual networks. This enables easy reconfiguration and reduces the number of devices and the amount of infrastructure needed.

But there's no such thing as a free lunch, and anything that offers such capabilities is going to need managing itself. The introduction of another layer of software, the hypervisor, also provides a possible source of instability. There's the vexing question of security – is this going to lead to more, fewer or simply different security problems?

“Overall, virtualisation increases some risks and decreases others, but I'm not sure the balance changes much,” says James Rendell, a technical manager at IBM. “Much of good security policy and practice translates very well.”

On the surface, virtualisation can appear to offer the solution to some security problems. Abstracting the software being run into a new environment makes it hard for an attacker to access the host environment directly, for example. If applications that once shared a single server are instead run in separate virtual servers, it's harder for a compromise in one application to allow access to another. It's also easier to set up a defence perimeter around one server running ten virtual servers than to set up a perimeter around ten real servers – or one running 1,000 virtualised desktop PCs. Traffic between virtual servers on the same physical server can't be eavesdropped from the physical network. Also, patch management is easier if you only have to patch one master virtual machine (VM).
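
To make that last point concrete, here is a minimal Python sketch of the patch-one-master-image pattern, in which a single golden image is patched and the guests are simply redeployed from it. The MasterImage class, the patch identifier and the guest names are hypothetical stand-ins, not any vendor's actual tooling.

# Illustrative only: patch a single 'golden' master image, then redeploy
# every guest from it, instead of patching each guest individually.
from dataclasses import dataclass, field

@dataclass
class MasterImage:
    name: str
    patches: set = field(default_factory=set)

    def apply_patch(self, patch_id: str) -> None:
        # Real tooling would mount the image and install the update;
        # here we only record that the patch has been applied.
        self.patches.add(patch_id)

def redeploy_guests(master: MasterImage, guest_names: list) -> dict:
    """Every guest cloned from the master inherits its patch level."""
    return {name: set(master.patches) for name in guest_names}

if __name__ == "__main__":
    golden = MasterImage("win-desktop-golden")
    golden.apply_patch("KB-EXAMPLE-001")   # patch once...
    print(redeploy_guests(golden, [f"desktop-{i:03d}" for i in range(3)]))
    # ...and every redeployed clone is at the same patch level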

Virtual networking also offers simpler security, says Trevor Dearing, head of enterprise marketing at Juniper, one of the leaders in virtualised networking: “Traditionally, you build barriers such as firewalls and IPS devices. Now you don't have to build multiple barriers and multiple policies, since we've built a more powerful device that can collapse multiple virtual networks into that device, which contains a firewall and IPS. You can then create security zones and define policy from the bottom of the stack up.”

But virtualisation also brings fresh security worries. If server consolidation turns ten real servers into one real server hosting ten virtual servers, ten points of failure are concentrated into just one: a single hardware fault or compromise now affects every workload.

The hypervisor itself also becomes a possible avenue of attack, and flaws do occur in the software. Market leaders VMware and Microsoft have both had their hypervisor security circumvented; in Microsoft's case, exploits allowed an attacker to escape from the virtualised environment and run code directly on the host.

However, Fredrik Sjöstedt, director, EMEA product marketing at VMware, says the hypervisor presents a very small target for hackers and that there are more obvious areas to consider. “If you look at the size of the code base, the attack angle on the code presents minimal surfaces. We have QA systems and nothing goes out without passing through private and public beta programs.”

Less obvious sources of risk are the admin tools, typically web browser-based and, says Rendell, vulnerable to attacks such as cross-site scripting. In theory, there's the possibility a malicious virtual machine could be hooked up to the stack and used in place of the real one. And there's an equally theoretical exploit in which a rootkit is installed on the boot sector and the hypervisor itself is loaded as a guest environment.

These vulnerabilities, when they are real rather than theoretical, are relatively rare, and hard to exploit. Direct access to the server or storage system is, however, something that affords an attacker greater opportunities. If apps were once running on 50 separate servers and are now running in 50 virtual servers on a single physical server, it's far easier to steal all the data than it once was. In virtualised environments where a single ‘image' file corresponds to an entire guest operating system's hard drive, it is easier to make a copy of data and take it away for later use.

Floris van den Dool, head of security for EMEA and Latin America at Accenture, says the ease of deployment of VMs means it is simpler to make mistakes and overlook the requirements of security. “If you do it wrong, there's a big impact. It's easier to make a mistake and overlook vulnerabilities.”

He recommends the use of ‘secure templates' to reduce the risk of user error. “These are almost like scripts that you run,” he says. The templates provide the same security configurations to each of the machines and reduce the amount of configuration needed. “To really get the benefits of virtualisation, you need to automate it as much as possible.” Van den Dool also recommends reducing access to the management console to the minimum. “VMs go back to the old mainframe days and the controls we used to have. Access to the console needs to be restricted and access logged in files.”
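
As an illustration of what such a template might look like, here is a minimal Python sketch; the baseline settings and the apply_secure_template helper are hypothetical, standing in for whatever options your hypervisor's provisioning tools actually expose.

# Illustrative sketch of a 'secure template' applied at provisioning time.
# None of these setting names come from a specific vendor's product.
BASELINE = {
    "firewall_enabled": True,
    "remote_root_login": False,
    "auto_patching": True,
    "audit_logging": True,
    "console_access_roles": ["vm-admins"],   # restrict management access
}

def apply_secure_template(vm_request: dict, baseline: dict = BASELINE) -> dict:
    """Return a VM definition with the baseline security settings enforced.

    Anything the requester set that conflicts with the baseline is
    overridden, so every guest starts from the same hardened state.
    """
    hardened = dict(vm_request)
    hardened.update(baseline)
    return hardened

if __name__ == "__main__":
    request = {"name": "web-frontend-01", "cpus": 2, "remote_root_login": True}
    print(apply_secure_template(request))   # remote_root_login is forced back to False

The point of the pattern, as van den Dool suggests, is that the hardening happens automatically for every machine, rather than relying on an administrator remembering to apply it each time.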

Security templates that encapsulate security-team processes also help overcome that dread problem: too many requests, not enough time. Peter Wilkins, technical director at desktop virtualisation consultancy Centralis, says: “When you're under pressure, it's easy for corners to be cut.” By sticking to proper processes and going through proper testing, it's easier to avoid problems.

Most hypervisor vendors have guides to best practice, but standard ITIL, PCI DSS and ISO/IEC 27000 guidelines offer similar protection, says Simon Godfrey of CA, particularly when it comes to access management: “Managing privilege, the principle is that everyone should have the least privileges appropriate. ITIL's focus is around access management.” Depending on the size and scale of the organisation, there is a variety of duties that could and should be divided among separate teams. Access to the hypervisor itself should be “super, super, super user-restricted, with a sole or dual-person access”, says Godfrey.
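
A rough Python sketch of how least-privilege and separation-of-duties rules of that kind might be checked automatically is below. The role names, the list of conflicting duties and the two-person limit on hypervisor access are illustrative assumptions, not anything prescribed by ITIL, PCI DSS or ISO/IEC 27000.

# Illustrative audit of role assignments against a least-privilege policy.
from collections import defaultdict

# Which roles are allowed to perform which duties (assumed names).
ALLOWED = {
    "vm-operator": {"start_vm", "stop_vm"},
    "template-author": {"edit_template"},
    "hypervisor-admin": {"patch_hypervisor", "configure_host"},
}

# Duties that should never be held by the same person.
CONFLICTING = [("edit_template", "patch_hypervisor")]

MAX_HYPERVISOR_ADMINS = 2   # "sole or dual-person access"

def audit(assignments: dict) -> list:
    """Return policy violations for a {user: set of roles} mapping."""
    findings = []
    duties = defaultdict(set)
    for user, roles in assignments.items():
        for role in roles:
            duties[user] |= ALLOWED.get(role, set())
    for user, held in duties.items():
        for a, b in CONFLICTING:
            if a in held and b in held:
                findings.append(f"{user}: conflicting duties {a} / {b}")
    admins = [u for u, r in assignments.items() if "hypervisor-admin" in r]
    if len(admins) > MAX_HYPERVISOR_ADMINS:
        findings.append(f"too many hypervisor admins: {admins}")
    return findings

if __name__ == "__main__":
    print(audit({"alice": {"template-author", "hypervisor-admin"},
                 "bob": {"vm-operator"}}))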

“The type of person whom you make responsible is critical,” says Chris Mayers, chief security strategist EMEA for Citrix, desktop virtualisation market leader, which also has a server virtualisation product called XenServer. “Desktop support isn't typically accustomed to supporting servers, but the people who support servers aren't accustomed to desktop support. You have to bring the different groups together and cross-train. Server people can learn about giving desktop users a good experience, desktop support people can learn about uptime.”

Similar problems can occur, says Dearing, when dealing with virtualised networking, where firewall teams may find themselves having to negotiate with other teams in a larger organisation, once firewalls and other infrastructure are subsumed into the virtual network. “It depends on the quality of people, but it can be difficult. A large organisation can be very siloed. The biggest issue is how to get over the structure and policy management of enterprise. It's like what happened with IP telephony.” Separation of duties becomes more important: “In a virtualised environment, where everything is more consolidated, there's an erosion of separation of duty,” he says.

Desktop virtualisation presents similar security risks to VPN access to a corporate network, although with no data being transferred there are fewer risks posed by client machines being stolen – provided no passwords have been compromised. However, as Imprivata CTO David Ting points out, password compromise becomes a greater problem, since it allows attackers access to a user's desktop, not just to data that might be stored on a server. “The session roams with the user, so someone can break in, making user sessions vulnerable to password sniffing and shoulder surfing.”

Regulation and governance also have important bearings on virtualisation. PCI DSS requirements for separation of functions are more difficult to prove in a virtualised system, but Citrix's Mayers says the difficulties aren't great and further guidelines will be emerging from the PCI DSS virtualisation committee.

For virtualised storage, separation of data can be a particular issue. “You need to prove to auditors and regulators that there's segmentation of storage, even though it's virtualised – regulations and compliance requirements don't go away, but now you have to rely on logs and IT management,” says Andrew Maloney, marketing director, EMEA, at RSA.

Defending virtualised environments requires technology as well as process. Most security products work normally within virtualised environments. One exception to this rule of tool portability is authentication technology. For companies trying to use more than password-based authentication for accessing virtualised desktops, it's often impossible to add in a second factor for authentication using hardware, because of the thin nature of the client software. “It's much harder with connection brokers to support additional hardware,” says Ting.

While security tools may well work in virtualised environments, they can also throw up unexpected side effects. As anyone who's ever tried working while an anti-virus program thrashes the hard drive knows, security products can affect performance. In a virtualised desktop environment, the thought of hundreds of desktops being scanned at the same time will give nightmares to any server admin.

As a result, some hypervisor vendors have developed APIs that allow security software on the host operating system to scan into guest environments, even if they're not currently running – VMware, for example, has an API called VMsafe.

Trend Micro's Core Protection product for VMware is based on Trend's anti-malware, but integrates with VMware's management console, vCenter. “It's specifically designed to scan each virtual instance, but to have an agent with a small footprint in each VM,” says Trend Micro's senior security expert, Rik Ferguson. “Outside, there's a scanning VM interface with VMsafe that examines the active and dormant machines with the latest malware definitions.” Demand on resources is lower, the individual VMs need fewer software updates, and only one set of malware definitions needs to be updated: that used by the external scanner. The integration with vCenter allows the security of individual virtual machines to be monitored more easily.
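
The offloaded-scanning idea can be sketched roughly as follows. This is not VMware's VMsafe API or Trend Micro's product, just a hypothetical Python illustration of one scanner on the host checking every guest image, running or dormant, against a single shared set of signatures; the directory path and the placeholder digest are invented for the example.

# Illustrative only: one host-side scanner, one signature set, many guests.
import hashlib
from pathlib import Path

# One signature set shared by all guests; updating it once covers everything.
KNOWN_BAD_SHA256 = {
    "0" * 64,   # placeholder digest; real sets come from the vendor's feed
}

def scan_guest_image(image_dir: Path) -> list:
    """Hash every file extracted from a guest disk image and flag matches."""
    hits = []
    for path in image_dir.rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in KNOWN_BAD_SHA256:
                hits.append(path)
    return hits

if __name__ == "__main__":
    root = Path("/var/lib/guest-images")   # hypothetical location of extracted images
    if root.exists():
        for guest in sorted(root.iterdir()):
            if guest.is_dir():
                print(guest.name, scan_guest_image(guest))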

Virtualisation may appear to be a technological issue, and one that can save on management costs and time. With proper processes in place, this can certainly be true, even if security professionals will still have much of their old work to do.

Five questions to ask before you embark on the virtual path

1. What are you going to virtualise?
If your aim is consolidation, you'll need to make sure your remaining servers will be able to cope with peak demand and that storage access will be fast enough. In some cases, it may be easier to virtualise legacy applications first, so you simply don't have to run older hardware any more.

2. Have you got the necessary expertise?
While consultants may be needed to help set up more complex infrastructures, ongoing management also needs to be considered. The aim is for as much as possible to be automated, something consultants can certainly help with. However, trained staff will still be needed for maintenance and for handling incidents such as hardware failures.

3. How will duties be separated?
With so much more riding on virtualisation, you need to have rock solid stability, with security as tight as possible. While allowing the same people who managed the servers before to manage virtual machines (VMs) may seem logical, it can lead to too much power being vested in one group. Consider how you will separate duties.

4. Will you need extra technology to secure it?
In most virtualised environments, the security concerns will be the same as before. However, security tools that are unaware of the nuances of virtualisation may use up too many resources; and managing them may be harder if they don't tie into VM management tools.

5. Where are your points of failure going to be?
Almost all types of virtualisation consolidate systems, reducing the number of points of failure – and making each remaining one more critical. If disaster strikes – especially relevant if virtualisation is itself part of your disaster recovery strategy – make sure you still have enough redundancy and a good strategy for redeploying systems.

The statistics: how widespread is virtualisation?

Figures for adoption of virtualisation in the UK are hard to come by. Analyst firms don't record them, because of the difficulty in defining virtualisation and of getting organisations that think they have a commercial advantage to admit they use it.

Easiest to obtain are server virtualisation figures. Roy Illsley, a senior analyst at Ovum, suggests that 10-15 per cent of UK companies, predominantly in the finance sector, have virtualised servers operating in a live environment, with 20-25 per cent running them in testbeds and non-mission-critical live applications. However, he says, “If you do a survey at any event, 80 per cent of people say they're using virtualisation in some capacity”.

Ovum's figures tally with Gartner's, estimating virtualisation adoption at 12 per cent of x86 servers in 2008, growing to 19 per cent in 2009. According to Errol Rasit, senior analyst at Gartner, the main reason for UK adoption of server virtualisation is cost savings, but as virtualisation matures in larger companies, users tend to be more interested in agility of redeployment of resources. In mid-size firms, there are fewer implementations, but Microsoft's technology is the most likely to be implemented.

Regarding desktop virtualisation, Ovum suggests about one per cent of the global market has it, predicting that in five years' time 20-40 per cent of companies will be using it; but Illsley favours a more conservative 20-25 per cent. With Windows Terminal Services almost omnipresent, it's hard to know how many are actually using them. A recent Imprivata survey of US firms suggested adoption might be as high as 17 per cent.

Figures for network virtualisation are difficult to obtain, since most professionals include VPNs in their survey responses, invalidating them. As for application virtualisation, 15-20 per cent of organisations could be using it. Storage virtualisation is not being tracked by the analysts at all.

From PCs to people: how much can you virtualise?

Why stop at virtualising your hardware and software when you can virtualise your employees as well? The savings can be considerable: why send them all around the world to meetings and site inspections when you can send their virtual counterparts instead? You save on plane fares and accommodation, and reduce the company's carbon footprint, all in one go.

In 2006, IBM took the bold step of using virtual world Second Life for some of its meetings, something that companies such as Dell, Cisco, Xerox, Intel, Unilever and BT have also done. Second Life goes one step further than instant messaging's collaboration tools by allowing its users to create virtual versions of themselves that can interact, communicate and manipulate objects in a virtual environment. Since then, IBM has begun using the environment for conferences.

According to Karen Keeter, a marketing executive at IBM Research, IBM found that Second Life's recreation of the real world made it easier to use as an alternative to meetings. “A lot of vendors are building virtual spaces where you all sit down and watch presentations. Why bother being there with an avatar? Why not just share the presentation? The value of [Second Life] is interactivity and concurrent content creation – you can have 20 people in a room at the same time, you can put sticky notes on a wall at the same time, and you can create lists of the most important ideas.”

This interactivity allows organisations such as Accenture to use Second Life for recruitment fairs – and even job interviews. The University of Texas and the Open University have both used it to create virtual campuses.

IBM employees can still use Second Life for meetings, and Second Life has its own meeting rooms as well. However, IBM has drawn up behavioural guidelines (http://domino.research.ibm.com/comm/research_projects.nsf/pages/virtualworlds.IBMVirtualWorldGuidelines.html) for its employees that they must abide by. These guidelines include security rules, and broadly state that Second Life is to be treated as a public place and so nothing must be discussed in it that could not be discussed in public.

The company has, however, been working with Linden Lab and other companies to create its own version of Second Life that lives behind its firewalls – a version that Linden Lab and IBM plan eventually to sell to other organisations.

Within that version, it is possible to control levels of access for each employee by tying in access to IBM's password-authentication system. Meetings in private rooms are not visible to those who have not been invited, and are held in separate regions, so it's not possible to ‘run between meetings'.
