Cryptography Versus Quantum Computing: A Race with Big Consequences

For almost four decades now, we’ve relied on public-key encryption to secure the keys we use to encrypt our data. The most popular form of public-key encryption, RSA, depends entirely on how difficult it is to factor the product of two very large prime numbers. The product is easy to compute, and it becomes part of the public key. But the private key can’t be determined without knowing the two prime numbers that were randomly chosen and multiplied together to yield that public value. So, anyone can encrypt data using the recipient’s public key, but only the recipient, the sole holder of the private key, can decrypt it. Remarkably, after almost 40 years of using this scheme, it is still very difficult for traditional computers to factor the product, provided the primes used to produce it are large. As computing horsepower has grown, the primes have had to grow with it, but simply increasing key sizes has kept the publicly known product practically unfactorable. Thus, data encrypted using the public-key approach has remained secure.
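
To make that asymmetry concrete, here is a minimal, purely illustrative sketch of RSA-style encryption in Python. The primes are textbook-tiny, the variable names are just for exposition, and there is no padding, so this is nothing like production RSA (which uses primes hundreds of digits long and vetted cryptographic libraries); it only shows how the public product and the private factors relate.

```python
# Toy RSA sketch (illustration only -- real RSA uses primes hundreds of
# digits long, padding schemes, and vetted libraries).
from math import gcd

p, q = 61, 53                      # two "secret" primes, absurdly small here
n = p * q                          # 3233 -- the public modulus, easy to compute
phi = (p - 1) * (q - 1)            # requires knowing p and q

e = 17                             # public exponent, coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)                # private exponent (Python 3.8+ modular inverse)

message = 123                      # any number smaller than n
ciphertext = pow(message, e, n)    # anyone can do this with the public key (e, n)
recovered = pow(ciphertext, d, n)  # only the holder of d can undo it
assert recovered == message
```

The thing to notice is that encrypting needs only n and e, while computing d needs phi, which in turn needs the factors p and q – exactly the factoring problem that keeps classical computers at bay.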

Unfortunately, the RSA approach doesn’t hold up against the power of quantum computing. Quantum computing has been an up-and-coming technology since the 1980s. The joke about technologies that are “up and coming” for that long is that they are really “down and leaving”. But many signs suggest that the era of quantum computing is about to begin, albeit on a still-nebulous timeframe of five to thirty years. Whereas traditional computers input, process, and output patterns of ones and zeros implemented by networks of transistors that store and transform those patterns, quantum computers use qubits that can be on and off simultaneously, letting them behave like an immense pool of virtual traditional computers working in parallel on the same set of instructions. That parallelism allows them to solve certain problems far more quickly than a single classical computer could, including – you guessed it – factoring the product of two very large prime numbers, which Peter Shor showed back in 1994 a sufficiently large quantum computer could do efficiently.

While we still can’t go to Best Buy to purchase a quantum computer, such machines are certainly capable, in theory, of breaking public-key encryption. Theoretical computer scientists have proven that much. And if something is theoretically possible in computing, the track record suggests it will become practically possible before long. This has caused great concern, because it means data encrypted with today’s public-key schemes won’t stay secret much longer.

So, cryptographers have been busy since the 1990s trying to find the next “impenetrable” encryption algorithm. The class of algorithms that has shown the most promise uses mathematical lattices. A lattice is the set of points you get by taking a set of basis vectors and adding together every possible whole-number combination of them, producing a regular grid of points that fills a multidimensional space. By multidimensional, we don’t mean 3D, which is great for movies, but completely inadequate for strong encryption. We’re talking more like 500D: a “shape” that, to explore every nook and cranny, requires moving along the x, y, and z axes, and 497 unnamed axes, too.
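
For readers who would rather see a lattice than imagine one, the short Python sketch below enumerates a small patch of a two-dimensional lattice built from two arbitrarily chosen (hypothetical) basis vectors. Real lattice-based schemes do the same thing in hundreds of dimensions with enormous coordinates, which is what makes searching them so hard.

```python
# A two-dimensional lattice: every whole-number combination of two basis vectors.
basis = [(2, 1), (1, 3)]                    # example basis vectors, chosen arbitrarily

lattice_points = set()
for a in range(-3, 4):                      # integer multiples of basis[0]
    for b in range(-3, 4):                  # integer multiples of basis[1]
        x = a * basis[0][0] + b * basis[1][0]
        y = a * basis[0][1] + b * basis[1][1]
        lattice_points.add((x, y))

print(sorted(lattice_points))               # a small patch of the infinite grid
```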

In lattice-based encryption, the public key is our starting point, and the private key is buried somewhere in this huge-dimensioned lattice. Recovering the private key from the public key means somehow navigating the immense and intricate lattice in search of it. Although no one has proven that quantum computers can’t find a path to the private key, they haven’t proven effective at it, either. In other words, finding the needle in the haystack – the one private-key point hidden in this immense and intricate lattice of points – is not something even quantum computers seem able to do. That is enough for the NSA and others to remain hopeful that computer scientists have found a form of encryption suited to the quantum computing era.
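
The most widely studied lattice-based constructions rest on the learning-with-errors (LWE) problem, in which small random noise is added to linear equations over the lattice so that the private key stays hidden unless you can solve a hard lattice search. As a rough illustration only, here is a toy, single-bit, Regev-style LWE sketch in Python; the parameters are deliberately tiny and insecure, and the whole thing simplifies what real schemes do.

```python
import random

# Toy single-bit LWE (Regev-style) encryption. Parameters are far too small
# to be secure; they exist only to make the mechanics visible.
n, m, q = 16, 64, 3329               # dimension, number of samples, modulus

# Private key: a random vector s.
s = [random.randrange(q) for _ in range(n)]

# Public key: a random matrix A and b = A*s + e (mod q), with small noise e.
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
e = [random.choice([-1, 0, 1]) for _ in range(m)]
b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]

def encrypt(bit):
    # Combine a random subset of the public samples, then hide the bit
    # in the "high" half of the value.
    subset = [i for i in range(m) if random.random() < 0.5]
    u = [sum(A[i][j] for i in subset) % q for j in range(n)]
    v = (sum(b[i] for i in subset) + bit * (q // 2)) % q
    return u, v

def decrypt(u, v):
    # With s, the structure cancels and only the small noise plus the bit remain.
    noisy = (v - sum(u[j] * s[j] for j in range(n))) % q
    return 0 if min(noisy, q - noisy) < q // 4 else 1

for bit in (0, 1):
    assert decrypt(*encrypt(bit)) == bit
```

Whoever holds s can subtract away nearly all of the structure, leaving just the small noise plus the encoded bit; whoever holds only A and b faces the needle-in-the-haystack search described above.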

An article in Wired does an excellent job of describing the challenges posed by quantum computing and how certain kinds of lattice-based encryption meet them. The article also highlights the constant tug-of-war between security and efficiency, and how we can sometimes be too greedy in optimizing an algorithm for speed. Basically, when we cut corners, we open up avenues for code-breakers to find the private key far too easily.

The interesting complementary nature of theoretical and applied computer science also becomes apparent as you read the article. Theoretical computer science helped us understand the threats posed by quantum computing well in advance of its adoption. And ideas from theoretical computer science, once implemented, will help us meet those threats, too.

 

About Ray Klump

Professor and Chair of Mathematics and Computer Science; Director, Master of Science in Information Security, Lewis University. http://online.lewisu.edu/ms-information-security.asp, http://online.lewisu.edu/resource/engineering-technology/articles.asp, http://cs.lewisu.edu. You can find him on Google+.
