A colleague in the Humanities shared a provocative article in Wired magazine with me today about a new initiative called the Ethical OS. “OS” means “operating system”, which, in computerspeak, is the software that enables your computer or phone to run your word processor, email, music player, and web browser. The operating system basically governs how your applications interact with your device and how you’ll interact with your applications. The operating system is your cop, coach, and companion as you interact with your phone, computer, or tablet.
According to the article, the Ethical OS has been proposed through a partnership between the Institute for the Future and the Tech and Society Solutions Lab as a framework for guiding technology innovators seeking to bring their new creations to market. It establishes policies, procedures, and practices for considering purposefully the long-term implications of new technology on civilization. In other words, it’s an operating system for technology companies to use when deploying their applications on the hardware of society.
I applaud this effort. Anything that prods today’s inventors to consider the possible effects of their tech, that encourages them to ponder not just what to build and how to build it but, simultaneously, whether to build it at all given the risks of its misuse, can help minimize the unintended consequences of innovation. “Unintended” and “unforeseen” may not be exact synonyms, but one certainly must be able to see possibilities in order to intend them. One benefit of the Ethical OS is that it may remind technologists and their funders to look up from their computer screens, circuit boards, and rosy projected earnings sheets long enough to see what dangers may lurk hidden and to question whether those pests could do real damage if their well-intentioned creations get misused.
But the technology creators can only do so much. Every technology that has ever been created and brought to market can be misused. Every technology can hurt individuals, and every technology can hurt entire communities and societies of individuals. When Henry Ford started mass-producing Model T’s, was it his job to include technology on the assembly line to ensure that auto workers weren’t forced to work long hours in unhealthy conditions? When Bardeen, Brattain, and Shockley invented the transistor, should they have slowed their research until they could draft policies for counteracting screen addiction and cyberbullying, neither of which would be possible sixty years later without their creation?
These might be preposterous examples, but I mention them to draw your attention to where I think the responsibility really lies. It is our job, as human citizens living and working together in our respective shared enterprises, to use technology – any technology – responsibly. When we fail to do that, when our use of technology hurts others or threatens our rights or even our health, it is government’s job to step in and regulate the behaviors that imperil our collective future.
Unfortunately, our federal representation is hardly up to the task of intelligently dissecting the ever-emerging slate of issues that surround today’s technologies. Indeed, Congress currently has just one science PhD in its ranks, Congressman Bill Foster. Our representatives can’t legislate what they don’t understand, which doesn’t stop them from trying, of course, sometimes with terrible consequences. But we and the government we elect ultimately have the responsibility to regulate when technology inevitably gets used for purposes other than those for which it was designed.
I’m not saying that the inventors have no responsibility. Technology creators should be inspired by thought systems like the Ethical OS to think more deeply and creatively about their work’s impact on the future. Frameworks that challenge scientists and technologists to consider the human impact of their creations can sometimes help them proactively include controls for mitigating the consequences of their inevitable misuse. Furthermore, today’s high-flying technology entrepreneurs and future members of that club must do better than Mark Zuckerberg’s flat-footed and myopic initial response to allegations that his Facebook helps spread misinformation. If the Ethical OS reminds today’s tech leaders that their tools aren’t just geek idols to be adored and that they have a responsibility to acknowledge limitations and failures and to seek solutions with the same zeal they showed when they created those tools in the first place, then it and frameworks like it will have performed an essential service reining in Frankenstein’s future monsters.
But we and our government have far greater roles to play in keeping us safe from technology’s misuse.
Modern operating systems are designed to help software applications run as efficiently and as speedily as possible. Still, we install cybersecurity tools and controls on our systems to keep our data safe. These cybersecurity tools that police our systems almost always slow us down, but we live with that sacrifice because we need to keep ourselves safe. The best cybersecurity software uses advanced intelligence and analytics to detect threats before they do real damage; it has the knowledge and intelligence to protect us even from unforeseen threats.
We and our elected officials must serve that same regulatory role when it comes to policing technology’s impact on the shared hardware of society. And we have to get more intelligent – very quickly – to play that role even adequately. We can’t expect the technologists themselves to do that effectively for us.