A condensed history of the botnet
Over recent weeks I have been following Trend Micro's senior security advisor Rik Ferguson as he investigates and tells the story of the evolution of the robot network (botnet).
Now I could re-tell the story in my own words or cut and paste Rik's words onto this page, but instead I will offer a summary and give you the opportunity to read the complete detail via the links below.
Part 1 details the beginnings of the botnet, from the Melissa and ILOVEYOU worms to the Sub7 Trojan and Pretty Park worm, the latter two introducing the concept of the victim machine connecting to an Internet Relay Chat (IRC) channel to listen for malicious commands. Later, the mIRC client influenced GT Bot, which could run custom scripts in response to IRC events and had access to raw TCP and UDP sockets, making it perfect for rudimentary denial-of-service attacks, a major duty of the modern botnet.
Part 2 moves to 2003, when criminal interest in the possibilities afforded by botnets became apparent with the development of the first spamming bots, Bagle and Bobax, and the malware dropper Mytob. Ferguson said that this enabled criminals to build large botnets and distribute their spamming activities across all of their victim PCs, giving them agility and flexibility and helping them to avoid the legal enforcement activity that was starting to be aggressively pursued.
The period leading up to 2007 saw the development of the likes of Zeus, Rustock, Storm and Cutwail, which still rank among the most prevalent botnets, with the creator of Zeus regularly updating, beta testing and releasing new versions of the toolkit, all the while adding or improving functionality.
Ferguson said: “Right now, the Shadowserver Foundation is tracking almost 6,000 unique command and control servers and even that figure does not represent all the botnets out there. At any one time Trend Micro is tracking tens of millions of infected PCs that are being used to send spam and that figure does not include all the other bot infected PCs that are being used for the purposes of information theft, distributed denial-of-service or any of the other myriad of crimes.”
In Part 3, Ferguson looked at how, since the second half of 2007, criminals have been abusing the user-generated content aspect of Web 2.0, with blogs and RSS feeds identified as the first alternative command and control channels. With open websites such as Twitter, Facebook, Pastebin, Google Groups and Google App Engine all being used as surrogate command and control infrastructures, Ferguson said that these 'public forums' have been configured to issue obfuscated commands to globally distributed botnets; the commands contain further URLs, which the bot then accesses to download commands or components.
Ferguson said: “Of course we can fully expect criminals to continue this unceasing innovation as we move forward, more botnets will take advantage of more effective peer-to-peer communication, update and management channels. Communications between bots or between bot and controller will become more effectively encrypted perhaps through the adoption of PKI. Command and control functionality will be more effectively dissipated, using cloud services, peer-to-peer and covert channels through compromised legitimate services.”
In terms of fighting back, in Part 2 Ferguson acknowledged the action being taken against botnet owners through takedowns, but noted that 'the concerted action that both public and private organisations are taking against botnets means that the criminal innovation never stops'.
At the end of Part 3, Ferguson asks 'where do we go from here?' He said: “So what can we do, is all hope lost? Not entirely I would argue. The battles continue in a war that must be waged on several fronts; governments and international organisations such as the EU, OECD and UN need to provide a strong focus on the harmonisation of criminal law globally in the area of cyber crime, enabling more effective prosecution.
“Law enforcement agencies need to formalise multi-lateral agreements to tackle a crime that is truly transnational. Internet Service Providers and domain registrars also have a key role to play. ISPs should be informing and assisting customers that they believe to be compromised (a trend which happily appears to be on the increase). They should also be terminating service to customers they believe to be acting maliciously. Domain Registrars should be demanding more effective forms of traceable identification at time of registration and bad actors should have their service suspended as soon as credible suspicion is raised.”
Finally, he calls on the security industry to continue with the levels of cooperation achieved among rivals during the fight against Conficker and for these to deepen, while initiatives 'must be financed on a national level to more effectively educate and inform citizens of the dangers posed by cyber crime and to encourage safer computing practices'.
It would be easy to believe that the fight against cyber crime is an uphill struggle against innovation, but at the same time there have been major successes in recent years. Rik's three-part history is also available as a PDF whitepaper here.