Short Take: Beating the Autocracy of Autonomous Systems

In the previous post, I used a dystopian scenario to show how autonomous systems can reduce our lives to ones of hapless destiny. I believe that we as technologists can develop technologies following some guiding principles that, in most cases, help us avoid plunging into such dystopian scenarios. I also believe that, as enlightened technologists, we should try to influence the policy that governs the use of autonomous systems. In this article, I will hypothesize about these two aspects of our story, covering:

  1. What some feared future scenarios are

  2. What we can do technologically to prevent such scenarios

  3. What we can do policy-wise to prevent such scenarios

Some Technological Guiding Principles

I love thinking of algorithms that can solve a problem. (I also love creating practical instantiations of such algorithms, but that is a story for another day.) Increasingly these days, the problem I am trying to solve through my work is how to make our interactions more autonomous. The working definition of autonomous operation that I will use here is that it uses less of my attention, my cognition, my engagement (pick your level of cognitive functioning), and the job gets done through the mystical workings of some computational device. But sometimes I force myself to think about which guiding principles will increase the likelihood of the greatest good for the greatest number. These are a few high-level principles that keep bubbling up across multiple rumination sessions.

Build technology that is usable by many, not just the dominant class or majority class of users.

This means building in features that broaden the user base (of course there are commercial reasons for doing this too) and leaving out features that raise the barrier for some classes of users. Internationalization efforts for the most popular software packages are a success story from the earlier days of software. With autonomous software, there is the added angle that users of widely different skill levels, whose wellness and, more dramatically, whose lives may depend on the software, should all have the required level of understanding of the technology.
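As a small, concrete instance of this principle, internationalization costs little when built in from the start. Here is a minimal Python sketch using the standard library's gettext module; the catalog name, directory, and message are invented for illustration.

```python
import gettext

# Load the message catalog for the user's locale (names here are invented).
translation = gettext.translation(
    "myapp",             # hypothetical catalog name
    localedir="locale",  # hypothetical directory of compiled .mo files
    languages=["es"],    # e.g., a Spanish-speaking user
    fallback=True,       # degrade gracefully to the source-language strings
)
_ = translation.gettext

# Every user-facing string goes through _(), so translators can reach it.
print(_("Battery low. Returning to base."))
```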

Go as far as you can to guard your software against vulnerabilities.

Tautologically obvious, right? Use state-of-the-art static and dynamic verification techniques to reduce the number of vulnerabilities in the autonomous software that you ship. This will entail stumbling through various software packages, some of which will be only minimally documented. It will entail difficult conversations with your product manager, who has one thing to think about: the ship date of the product. But this is time and effort that we owe to the broader society. If our technology becomes popular, there will be incentives to attack it, and security built into the software is much more effective than security pixie dust sprinkled on as a patch later.
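To make this concrete, consider the two classic injection vulnerabilities. A minimal Python sketch (the function names and schema are my own) shows how parameterized queries and argument-list process invocation close them off by construction:

```python
import sqlite3
import subprocess

def fetch_user(conn: sqlite3.Connection, user_id: str):
    # Parameterized query: the driver handles escaping, so a malicious
    # user_id cannot rewrite the SQL (unlike f"...WHERE id = '{user_id}'").
    return conn.execute(
        "SELECT name FROM users WHERE id = ?", (user_id,)
    ).fetchone()

def ping(host: str):
    # Argument list with no shell: the host string is passed as a single
    # argument and cannot smuggle in extra commands (unlike shell=True).
    return subprocess.run(["ping", "-c", "1", host], capture_output=True)
```

Static analyzers catch many such patterns mechanically; for Python, for example, running bandit -r src/ flags shell=True invocations and string-built SQL, exactly the kind of check worth wiring into the build before the ship-date conversation happens.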

Use the highest-grade security techniques that you can afford, to store data.

This is a deployment practice rather than a design practice. If your technology becomes wildly successful, it will be used to collect lots and lots of consumer data. With the ubiquity of sensors and monitoring software, data is being collected about us at ever finer granularity. We are talking of large volumes of data, high rates of data, and data of widely varying formats. Still, do it: embrace the tedium of high-strength encryption and decryption.
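As a minimal sketch of what embracing that tedium can look like, here is authenticated symmetric encryption of a record at rest using the Fernet recipe from the Python cryptography package (the sensor record is invented; in practice the key would live in a secrets manager, never next to the data):

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Generate the key once, out of band, and store it in a secrets manager.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt each record before it ever touches disk or the wire.
record = b'{"heart_rate": 72, "ts": "2021-04-01T10:00:00Z"}'
token = fernet.encrypt(record)  # ciphertext plus integrity tag

# Decrypt only at the point of use; tampering raises InvalidToken.
assert fernet.decrypt(token) == record
```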

Have manual overrides for your autonomy algorithms.

These may be the lifesaver when the algorithm runs into choppy waters. This also entails providing some interpretability for your algorithms, so that the dark curtain can be peeled open when an end user wants to know why the algorithm told them something. Interpretability is a subject of intense research activity today. We as systems researchers should push the adoption of the best of it into our systems plumbing, and we as autonomous system developers should look to use such plumbing. Sure, this is not easy, and it does not add a flashy bell or whistle; but neither is cleaning up after a failure easy, legally or for your conscience. Just ask Boeing.
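What might such an override hook look like in code? A hypothetical sketch (the autopilot interface and confidence threshold are my own invention) is a thin wrapper that yields control whenever a human has flagged an override or the algorithm's own confidence drops, and that always surfaces a rationale alongside the action:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    confidence: float
    rationale: str  # human-readable trace: the peeled-open curtain

class OverridableController:
    """Hypothetical wrapper: autonomy acts only while no override is set
    and the algorithm is confident; otherwise control reverts to a human."""

    def __init__(self, autopilot, min_confidence: float = 0.9):
        self.autopilot = autopilot           # must expose decide(observation)
        self.min_confidence = min_confidence
        self.manual_override = False         # set by a physical switch or UI

    def step(self, observation):
        decision = self.autopilot.decide(observation)
        if self.manual_override or decision.confidence < self.min_confidence:
            return "DEFER_TO_HUMAN", decision.rationale
        return decision.action, decision.rationale
```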

Minimize the chances of wrong use.

Check that the implementation of your algorithm is not easy to use in scenarios that you do not want it used in. This is of course tricky, because much of the useful technology we create has dual use, for good and for bad: for recognizing the faces of intruders and for tracking citizens protesting for their rights. But we can and should build into our autonomous systems safeguards that make them harder to abuse, whether by lone miscreants or by state actors. Granted, guarding against the latter is beyond the capability of any single developer or even a small team, but it could be pursued as an organizational goal.
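None of this eliminates dual use, but even simple guards raise the cost of abuse. A hypothetical sketch (the allowlist, limits, and matcher interface are all invented) for a face-matching component might refuse undeclared purposes, throttle bulk scanning, and keep an audit trail:

```python
import logging
import time

ALLOWED_PURPOSES = {"intruder_alert"}  # declared at deployment (illustrative)
RATE_LIMIT_PER_MIN = 30                # bulk scanning looks like surveillance

_recent_calls: list = []
audit = logging.getLogger("face_match_audit")

def guarded_match(model, image, purpose: str):
    """Gate an arbitrary face matcher behind purpose, rate, and audit checks."""
    if purpose not in ALLOWED_PURPOSES:
        raise PermissionError(f"purpose {purpose!r} is not permitted")
    now = time.time()
    _recent_calls[:] = [t for t in _recent_calls if now - t < 60]
    _recent_calls.append(now)
    if len(_recent_calls) > RATE_LIMIT_PER_MIN:
        raise RuntimeError("rate limit exceeded; refusing bulk scanning")
    audit.info("face match requested, purpose=%s", purpose)
    return model(image)  # the matcher itself is supplied by the caller
```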

So, to sum up: many of us are researching and developing autonomous systems in their very many shapes and forms. As technologists, it behooves us to pay attention to some design and development practices that increase the chances that our creations will be used for good. Most of these are known to us from traditional software design and development, but in many cases the ill effects of not following them are magnified as software becomes more autonomous.
