Public key infrastructure (PKI): Pioneers, processes and events

As part of an ongoing celebration of the 20th anniversary of Entrust's Public Key Infrastructure (PKI), this four-part series looks at the pioneers, processes and events that have shaped and continue to shape this ever-evolving technology.

Since the very earliest days of the information age, secure data networks – including those carrying highly sensitive military and diplomatic cables – were based on a hub-and-spoke topology. Encryption was used on each of the links in the network, and the switching nodes were RED, meaning that messages were “in the clear” in the switching centres where they were transferred from one link to another.

Back then, the only encryption algorithms available were “symmetric”, meaning that the keys used to encrypt and decrypt a message were the same. Each link had a different key, and keys were commonly changed daily, with the change-over having to be carefully coordinated at each end of each link.
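The defining property of a symmetric cipher – one key both locks and unlocks – can be shown with a toy sketch. The XOR “cipher” and the key value below are purely illustrative stand-ins, not anything resembling the link ciphers of the era:

```python
# Toy illustration of symmetric encryption: the SAME key encrypts and
# decrypts. XOR is used only to make the property visible; real link
# ciphers were far more elaborate.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Apply a repeating-key XOR; applying it twice restores the input."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"FLASH TRAFFIC"
key = b"DAY-KEY-0417"  # hypothetical daily link key

ciphertext = xor_cipher(message, key)
recovered = xor_cipher(ciphertext, key)  # same key reverses the operation
assert recovered == message
```

Because both ends must hold an identical secret, every party that needs to communicate securely must first receive the key by some trusted out-of-band channel – which is exactly the key-management burden described next.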

A large organisation of highly trusted, trained staff was necessary to generate, distribute, install, retrieve and destroy key books for every hub and terminal in the network. Secure couriers and diplomatic bags were the most common means of transporting keys. Partly because of the cost of key management, and partly because sales of encryption products were strictly controlled, encryption was little used outside of government.

Starting in the 1970s, computer technology underwent significant changes. Machines were becoming computationally more powerful, consuming less electricity, getting smaller and plummeting in price. It became possible to interconnect machines within a computer room or across a small campus. Networking approaches based on token-ring and collision-based bus architectures were fighting for supremacy. Finally, the bus approach came to dominate – first with proprietary protocols, then with the multi-vendor standard, Ethernet.

In the wide-area, national and international research networks were being connected by circuit-switched leased lines. But in the telephony world, advances were being made in automated switching systems, quickly followed by reports of successful hacking incidents. For the time being though, data networks remained relatively immune from attack, as they were protected by physical security measures and trusted telecom providers.

Packet-Switching and the Key Distribution Problem

The telecom industry developed a wide-area packet-switching standard in the CCITT's X.25. This, coupled with deregulation in the telecom industry, opened the door to cost-effective data networking based on packet-switching technology.

Meanwhile, in the financial services sector, cash-machine networks were appearing and banking mainframes were being connected across countries and continents. The value of assets being entrusted to commercial data networks made them attractive to criminal organisations, so commercial grade cryptography was needed to protect data integrity.

But governments treated encryption technology as a munition for purposes of export licensing and export licenses were generally granted only for financial applications using approved algorithms. IBM, with assistance from the US National Security Agency, developed the DES algorithm, with a cryptographic strength of 56 bits. The US Government National Bureau of Standards (later the National Institute of Standards and Technology) published it as a standard, and it was adopted around the world for use in financial applications.

Measuring Cryptographic Strength

Cryptographic strength measured in bits is the logarithm (base 2) of the number of operations required to defeat the algorithm. Logarithmic measures are familiar from the Richter scale for earthquake magnitude, where a one-step increase on the scale represents a ten-fold increase in measured amplitude. For scales measured in bits, a one-bit increase represents a doubling. Therefore, a cryptographic strength of 56 bits represents roughly seventy thousand million million (2^56, about 7.2 × 10^16) operations.
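The doubling rule and the size of a 56-bit work factor can be checked directly:

```python
# Strength in bits is log2 of the attacker's work factor, so each
# extra bit doubles the number of operations required.
import math

operations_56_bit = 2 ** 56
operations_57_bit = 2 ** 57

print(f"{operations_56_bit:,}")  # prints 72,057,594,037,927,936 (~7.2e16)

# One extra bit doubles the work factor...
assert operations_57_bit == 2 * operations_56_bit
# ...and the strength in bits is recovered by taking log base 2.
assert math.log2(operations_56_bit) == 56
```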

Interestingly, the details of the operation itself don't come into the calculation – only the number of them. For a well-designed symmetric algorithm, the cryptographic strength is the same as the key size, meaning that the best-known method of attacking the algorithm is an exhaustive search of the key space. The same is not true of asymmetric algorithms, where better attacks than exhaustive search exist. So in these cases, the key size is always greater than the design cryptographic strength.
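An exhaustive search simply tries every possible key until one yields the expected plaintext. The sketch below uses a deliberately tiny 8-bit key space and a toy single-byte XOR “cipher” (both illustrative assumptions) so the search completes instantly; with 56 bits the loop would run 2^56 times:

```python
# Sketch of an exhaustive key search over a deliberately tiny key space.
# For a well-designed symmetric cipher no attack beats trying every key,
# so the work factor is 2**key_bits. A toy 8-bit XOR "cipher" stands in
# for a real algorithm here.

def toy_encrypt(plaintext: bytes, key: int) -> bytes:
    """Toy cipher: XOR every byte with a single 8-bit key."""
    return bytes(b ^ key for b in plaintext)

KEY_BITS = 8
secret_key = 0xA7
ciphertext = toy_encrypt(b"ATTACK AT DAWN", secret_key)

# Brute force: try all 2**KEY_BITS candidate keys.
for candidate in range(2 ** KEY_BITS):
    if toy_encrypt(ciphertext, candidate) == b"ATTACK AT DAWN":
        print(f"key recovered: {candidate:#x}")  # prints: key recovered: 0xa7
        break
```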

The cost of managing symmetric keys on a link-by-link basis, together with the cost of providing trusted RED switching nodes for packet-switched data networks, would have been unacceptable for commercial applications. As a result, protection would have to be provided at the transport layer, resulting in end-to-end security and permitting BLACK “untrusted” switching nodes.

This is where symmetric cryptography runs into a problem not solved until the development of the Needham-Schroeder protocol. In 1978, Roger Needham and Michael Schroeder designed a solution using symmetric techniques that could scale up to large networks. But even this new approach had limitations: Since confidential key material had to be communicated between each node and the central key distribution centre, the set-up procedure was onerous and expensive.
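A schematic sketch of the Needham-Schroeder exchange may help: the key distribution centre (KDC) shares a long-term key with each principal, mints a fresh session key on request, and wraps a copy of it in a “ticket” only the responder can open. The names, key labels and tuple-based “encryption” below are illustrative assumptions that just make the message flow visible, not a real implementation (the closing nonce handshake between the two parties is also omitted):

```python
# Schematic sketch of the Needham-Schroeder symmetric-key protocol.
# "Encryption" is modelled as a tuple tagged with the key's name, purely
# to show who can read what; a real system would use an actual cipher.
import secrets

# The KDC shares a long-term key with every principal -- the expensive
# set-up step described in the article.
long_term_keys = {"alice": "K_as", "bob": "K_bs"}

def kdc_respond(initiator: str, responder: str, nonce: str):
    """Step 2: the KDC mints a session key and wraps it twice --
    once for the initiator, and once (the ticket) for the responder."""
    session_key = f"K_ab-{secrets.token_hex(4)}"
    ticket = ("enc", long_term_keys[responder], (session_key, initiator))
    return ("enc", long_term_keys[initiator],
            (nonce, responder, session_key, ticket))

# Step 1: Alice asks the KDC for a key to talk to Bob, with a fresh nonce.
n_a = secrets.token_hex(4)
reply = kdc_respond("alice", "bob", n_a)

# Alice opens the outer layer with K_as and checks her nonce came back.
_, key_used, (nonce_back, peer, session_key, ticket) = reply
assert key_used == "K_as" and nonce_back == n_a and peer == "bob"

# Step 3: Alice forwards the ticket; Bob opens it with K_bs and learns
# the session key and who initiated the exchange.
_, ticket_key, (bob_session_key, claimed_sender) = ticket
assert ticket_key == "K_bs" and bob_session_key == session_key
assert claimed_sender == "alice"
```

The sketch makes the set-up burden concrete: every principal must appear in the KDC's key table before any session can be established, which is why the approach scaled poorly across administrative domains.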

And while their design eventually achieved widespread adoption for securing individual administrative domains as the Kerberos protocol, it failed to find acceptance for interconnected domains. That meant a newer, better solution was still needed. Luckily, there was someone already working towards it.

Tim Moses is the senior director of Entrust Datacard's Security Technology Group. He holds BSc and PhD degrees in electronic engineering and has over 35 years' experience in industry. He has worked in the field of information security, both in product development and consultancy, for the past 25 years. His current research interests include enhancing the trustworthiness of the Web, the security of Identity 2.0 frameworks and risk-based authentication. Tim is the past chair of the CA/Browser Forum and editor of the OATH reference architecture.