Month after month, the frequency, size and complexity of attacks against businesses online are increasing.
Rather than becoming more civilised, the internet is becoming less so, even as businesses move ever greater portions of their revenue streams to online channels.
Attacks near the end of 2010 were reaching 10,000 times the normal traffic seen by e-commerce sites, with thousand-fold increases in other sectors – and these attacks were targeting more businesses than ever before. If this trend continues, how can businesses protect themselves?
In the last quarter of 2010, we saw more attacks against our retail and financial services customers than we'd seen against our entire customer base in the previous three quarters. That growth has increased into 2011, with attacks to deny service – or compromise the servers behind the service – increasing each month.
This ‘de-civilisation’ is driven by increasing anonymity: as more systems come online – many of them insecure – they allow adversaries to hide in the internet’s ever-spreading shadows.
Adversaries attack for many different reasons. Profit-motivated attackers are after either extortion (using Distributed Denial-of-Service attacks) or black-market gains (through the theft of marketable valuables, such as credit card data).
Politically motivated attackers might target national entities (such as the 2009 attacks on South Korean and US government and financial services sites), or companies whose activities they disagree with (as in the Anonymous Operation Payback attacks of 2010). Or they might simply want to further an agenda (as with many anti-globalisation and environmental organisations).
Whatever their motivation, adversaries can easily and cheaply amass significant assets to conduct their attacks. Botnets have become a commodity. The rise of broadband around the world gives attackers new pools of machines to compromise, with increasing amounts of bandwidth at their disposal.
Even as online assets become more critical, the environment in which they exist is becoming more dangerous, and our systems are often not robust enough to scale and survive in such a hostile setting.
The real problem isn't that our systems can't be made robust; it is that when we build them, all too often we assume reliability rather than failure. On that foundation, adding reliability requires complex and fragile overlays to provide a semblance of robustness (consider the complexity of synchronous multi-geography database replication, the bugaboo of many disaster-resilience projects).
If instead we begin by assuming that everything will fail, we can build robustness into our designs from the start. Consider the Domain Name System (DNS): built atop one of the most fragile of transports (UDP), it adds robustness at each layer – retries, redundant servers, caching – until DNS failures are the exception rather than the rule.
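The DNS approach – short timeouts, retries, and redundant servers, each assumed fallible – can be sketched in a few lines. This is an illustrative model rather than real DNS code: the ‘servers’ here are stand-in functions, and the names (`query_with_failover` and so on) are my own invention.

```python
def query_with_failover(servers, name, attempts=3):
    """Try each server in turn, assuming any of them may fail.

    'servers' is a list of callables that either return an answer or
    raise an exception (standing in for a timeout or a dropped UDP
    packet). The caller gets an answer as long as at least one server
    responds within the retry budget.
    """
    last_error = None
    for _ in range(attempts):
        for server in servers:
            try:
                return server(name)
            except Exception as exc:  # a lost packet, a dead server...
                last_error = exc      # ...is noted, and we move on
    raise RuntimeError(f"all servers failed for {name!r}") from last_error

# Stand-in servers: one that never answers, one that does.
def dead_server(name):
    raise TimeoutError("no response")

def healthy_server(name):
    return "192.0.2.1"  # documentation-only address (RFC 5737)

# The lookup succeeds despite the first server being down.
print(query_with_failover([dead_server, healthy_server], "example.com"))
```

Because failure is the expected case, no single broken server breaks the lookup; the same assumption, applied at every layer, is what makes DNS as a whole so hard to kill.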
Perhaps we can learn from systems like DNS: by designing for failure as well as for success, we build systems that will prove robust into the future.
Andy Ellis is chief security officer at Akamai