Welcome to The Machine

In a project that demonstrates perfectly the interplay between Computer Science and Computer Engineering, HP announced that it is working on a new kind of computer architecture designed to meet the hefty demands of the Big Data era. Dubbed “The Machine”, HP’s new computing platform, which is currently in the research phase, will use new kinds of memory and data communication techniques to achieve unprecedented processing speeds. If successful, this project will completely transform our computing experience.

One of the most daunting impediments to computer performance today is the need to move data from slow mechanical hard drives to high-speed electronic memory. System memory is limited because memory is still relatively expensive, and it isn’t as compact as you would like it to be: it takes a tremendous number of memory cells to store even a small amount of data. Data values are represented using patterns of 1s and 0s, called bits, and it takes 8 bits to make a byte. A bargain laptop these days comes with 4 gigabytes of memory, which means it can store 4 * 1024 * 1024 * 1024 = 4.295 billion bytes of data, or 34.36 billion of these 1s and 0s. That sounds like a big number, and it is, but it is nothing compared to the 1 terabyte drives many laptops come with these days, which can store roughly 250 times as much data.

Unfortunately, the way these bits are stored doesn’t scale very well. The fundamental component of memory is a device called an SR latch. An SR latch uses four transistors to hold just one bit of information, which means it takes 4 * 34.36 billion = 137.4 billion transistors to implement the memory in your laptop. Transistors are ridiculously small, but they are still physical devices that take up space and impose unavoidable limits on the speed with which data can be moved in and out of this network of over one hundred billion interacting things. You simply can’t store an arbitrary amount of data in such a compact space and then move it in and out at will, however much we might want to, because considerable time is wasted shuttling data between this limited space, the snail-slow hard drive, and the lightning-fast processor that tries to make sense of it all. This exchange of data between memory, the hard drive, and the processor imposes the most serious constraint on computer speed, one we have thus far not been able to overcome.
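To make those numbers concrete, here is a quick back-of-the-envelope calculation in Python. It is only a sketch of the arithmetic described above, and it assumes, as the paragraph does, four transistors per stored bit and binary (1024-based) gigabytes and terabytes.

```python
# Back-of-the-envelope arithmetic for the figures quoted above.
# Assumes 4 transistors per stored bit, as described in the paragraph.

GIB = 1024 ** 3                       # bytes in one binary gigabyte
ram_bytes = 4 * GIB                   # a 4 GB laptop
ram_bits = ram_bytes * 8              # 8 bits per byte

print(f"RAM capacity: {ram_bytes / 1e9:.3f} billion bytes")    # ~4.295 billion
print(f"RAM capacity: {ram_bits / 1e9:.2f} billion bits")      # ~34.36 billion

transistors = ram_bits * 4            # 4 transistors per bit (SR latch assumption)
print(f"Transistors needed: {transistors / 1e9:.1f} billion")  # ~137.4 billion

disk_bytes = 1024 ** 4                # a 1 terabyte hard drive
print(f"Drive holds {disk_bytes / ram_bytes:.0f}x as much as RAM")  # 256x, roughly 250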

So, HP’s project focuses on fixing this issue by exploring new ways to store and communicate data. On the memory side, HP is working on a technology that is far more scalable than SR latches. That technology, the memristor, uses a grid of wires connecting cells of various molecular compounds. By applying electrical signals to control the resistance of the connections between those compounds, we can change the data stored in the grid, and the changes persist thanks to the memristor’s electrical characteristics. What makes this so beneficial is that memristors can hold tremendously more data than a similarly sized network of SR latches. This means that entire programs and very large databases can be moved into memory and processed there all at once, rather than forcing the computer to swap data repeatedly between the more capacious hard drive and the much smaller memory store as an application does its work. After all, avoiding repeated swaps between the hard drive and memory is the primary benefit of adding more memory to your machine, and memristors make it possible to add more memory on a grand scale. Combine memristors with HP’s efforts to communicate data through high-bandwidth light beams rather than electrical currents coursing through relatively low-bandwidth wires, and even the initial transfers of data between the hard drive, memristor-based memory, and processor can be accomplished in an incredibly short amount of time compared to what is possible today.
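The performance argument can be sketched in a few lines of Python. This is an illustration only, with made-up latency numbers standing in for real hard-drive delays: a workload whose data does not fit in memory pays the disk cost on every pass over the data, while one that can load everything into memory pays it just once.

```python
import time

DISK_LATENCY = 0.005     # seconds per chunk fetched from disk (assumed, illustrative)
CHUNKS = 50              # size of the dataset, in chunks
PASSES = 10              # how many times the application scans the data

def read_from_disk(chunk_id):
    time.sleep(DISK_LATENCY)   # simulated mechanical-drive delay
    return chunk_id            # placeholder for the actual data

# Strategy 1: memory too small, so every pass re-reads every chunk from disk.
start = time.time()
for _ in range(PASSES):
    for c in range(CHUNKS):
        read_from_disk(c)
swap_time = time.time() - start

# Strategy 2: memory large enough to hold everything; load once, then scan RAM.
start = time.time()
cache = [read_from_disk(c) for c in range(CHUNKS)]   # one-time load
for _ in range(PASSES):
    for chunk in cache:
        pass                                         # in-memory work on each chunk
in_memory_time = time.time() - start

print(f"Re-reading from disk on every pass: {swap_time:.2f} s")
print(f"Loading once into memory:           {in_memory_time:.2f} s")
```

The bigger the dataset and the more passes an application makes over it, the more the first strategy loses; memory large enough to hold everything turns that repeated cost into a one-time cost.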

These innovations require the expertise of Computer Engineers, who apply their understanding of semiconductor physics to solve problems related to computing. However, their work would be tantamount to keeping a 1959 Les Paul unplayed behind glass were it not for Computer Scientists who understand how to adjust operating systems and applications to take advantage of the incredible boosts in speed the hardware affords, so that it can sing like it wants to. It would be like putting a Corvette engine in a Saturn were it not for Computer Scientists who know how to put those extra processing cycles to real work rather than spinning wheels waiting for data to arrive. Computer Scientists at HP are working on a new open-source operating system that will take for granted that data and instructions are available immediately rather than in a virtual fortnight. This new operating reality opens the door to a great number of opportunities, provided Computer Scientists can solve the challenges associated with immediate, simultaneous access to libraries’ worth of data.

HP’s work is exciting, and it demonstrates how valuable and potentially transformative the efforts of Computer Engineers and Computer Scientists can be when they collaborate on projects like this. As with most R&D projects, HP’s “The Machine” isn’t following a firm schedule just yet. But expect great things.

 

Ray Klump

About Ray Klump

Professor and Chair of Mathematics and Computer Science; Director, Master of Science in Information Security, Lewis University. http://online.lewisu.edu/ms-information-security.asp, http://online.lewisu.edu/resource/engineering-technology/articles.asp, http://cs.lewisu.edu. You can find him on Google+.
