You can work around it or prioritise by criticality, but patching is too important a topic to neglect
Patching software is a bit like going to the dentist: you know you should do it, but it is often unpleasant. This is an industry-wide complaint rather than a vendor-specific one, as a glance at any vulnerability list will show.
Part of the problem is that, thanks to the complexity of modern software, fixing one problem may well introduce another. Vendors do their best to avoid this, but the sheer variety of in-service configurations means some problems will always slip through, so it is often wise to test a patch in your own environment before rolling it out to a wider audience.
There are other challenges. Some patches can't easily be uninstalled, making them riskier to push out. In certain cases, particularly in the Windows world, a reboot is needed to apply the patch, which means downtime. Other patches take a lot of effort to deploy, so system administrators are cautious about installing them.
The security world has an incentive to push patches out: the longer a system goes unpatched, the longer it is vulnerable, even though a risk can sometimes be mitigated without a patch – by anti-virus software or additional firewall filtering, for example.
Steve Beattie et al looked at this situation pragmatically in a 2002 conference paper (http://xrl.us/bee7h9).
After examining the “bad patch” rate – how long it typically took to weed out a faulty patch – and balancing this against how long it was safe to leave a vulnerable system online, they arrived at an optimum waiting time of between 10 and 30 days (I'm oversimplifying hugely, of course; please see the Beattie paper for the full details).
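The shape of that trade-off is easy to illustrate. The sketch below is not the Beattie paper's actual model – all the probabilities and costs are invented for illustration – but it captures the idea: the chance that a patch turns out to be bad falls as field reports come in, while the chance of compromise on an unpatched system rises, and somewhere in between lies a cheapest day to patch.

```python
import math

# Hypothetical costs, purely for illustration.
COST_BAD_PATCH = 100_000   # cleaning up after deploying a faulty patch
COST_COMPROMISE = 150_000  # recovering from a successful exploit

def p_patch_still_bad(day, half_life=7.0):
    """Chance the patch is faulty and has not yet been recalled by `day`.
    Assumes a 10% initial fault rate that decays as problems are reported."""
    return 0.10 * math.exp(-day / half_life)

def p_compromised_by(day, daily_exploit_rate=0.005):
    """Chance an unpatched system has been exploited by `day`."""
    return 1.0 - math.exp(-daily_exploit_rate * day)

def expected_cost(day):
    # Total expected loss if we wait `day` days before patching.
    return (p_patch_still_bad(day) * COST_BAD_PATCH
            + p_compromised_by(day) * COST_COMPROMISE)

# The cheapest day to patch, searching the first two months.
best_day = min(range(0, 61), key=expected_cost)
```

With these made-up numbers the minimum falls a few days in – patching instantly risks a bad patch, waiting months risks an exploit. Crank up the daily exploit rate, as the modern “patch to exploit” window demands, and the optimum rushes towards day zero.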
That was then, however. These days, the “patch to exploit” loop is measured in hours, and many system administrators now take the risk and push patches out immediately. Fortunately, there's patch-management software to make this easier.
So it may be a surprise to hear about the number of people hit by the recent Conficker worm, which exploited a bug that Microsoft patched way back in October 2008, in a rare “out of band” patch in advance of the normal “second Tuesday” cycle.
Hospitals in Sheffield were hit by Conficker, because automatic installation of Microsoft patches had been disabled across the network. Cue cries of stupidity and demands for resignations.
It turns out that the automatic installation of patches was turned off after incidents involving Windows machines in operating theatres rebooting in the middle of operations. Although it isn't clear whether the systems involved directly affected patient care, an unexpected reboot while an operation is in progress is too close for comfort. Not surprisingly, the Sheffield health authorities are remaining quiet about the details, so it's difficult to say whether their decision not to patch theatre systems was reasonable or not (that hasn't stopped most commentators from condemning them, of course).
It does seem heavy-handed, though, to apply that policy across the board to every IT system, and that part of the decision looks clearly in error. There are also questions about other security layers – content filtering and anti-virus precautions, for instance – that need answering.
A more sensible approach would be to group systems by criticality and manage patch installation accordingly – something supported by every patch-management product, including Microsoft's free Software Update Services (SUS).
This same challenge affects the wider business world. In any office there are some machines whose failure will cause wails of despair, and many others whose absence will go largely unnoticed.
Patching of systems, and its associated risks, needs to be managed in line with an accurate assessment of each system's importance. The risks of patching and of not patching must be carefully balanced, particularly where lives, rather than simply money, are at stake.
In patching, as in comedy, timing is everything.