
We Have Already Let The Genie Out of The Bottle

How will we make sure that Artificial Intelligence won’t run amok and will be a force for good?

There are many areas where governance frameworks and international agreements about the use of artificial intelligence (AI) are needed. For example, there is an urgent need for internationally shared rules governing autonomous weapons and the use of facial recognition to target minorities and suppress dissent. Eliminating bias in algorithms for criminal sentencing, credit allocation, social media curation and many other areas should be an essential focus for both research and the spread of best practices.

 

Unfortunately, when it comes to the broader issue of whether we will rule our artificial creations or whether they will rule us, we have already let the genie out of the bottle. In his book Superintelligence: Paths, Dangers, Strategies, Nick Bostrom used a simple thought experiment to argue that the future development of AI could be a source of existential risk to humanity. A self-improving AI, able to learn from its experience and automatically improve its results, has been given the task of running a paper clip factory. Its job is to make as many paper clips as possible. As it becomes superintelligent, it decides that humans are obstacles to its singular goal and destroys us all. Elon Musk offered a more poetic version of that narrative, in which a strawberry-picking robot decides humanity is in the way of “strawberry fields forever.”

What we fail to understand is that we have already created such systems. They are not yet superintelligent nor fully independent of their human creators, but they are already going wrong in just the way that Bostrom and Musk foretold. And our attempts to govern them are largely proving ineffective. To explain why that is, it is important to understand how such systems work. Let me start with a simple example. When I was a child, I had a coin-sorting piggy bank. I loved pouring in a fistful of small change and watching the coins slide down clear tubes, then arrange themselves in columns by size, as if by magic. When I was slightly older, I realized that vending machines worked much the same way and that it was possible to fool a vending machine by putting in a foreign coin of the right size or even the slug of metal punched out from an electrical junction box. The machine didn’t actually know anything about the value of money. It was just a mechanism constructed to let a disk of the right size and weight fall through a slot and trip a counter.

If you understand how that piggy bank or coin-operated vending machine works, you also understand quite a bit about systems such as Google search, social media newsfeed algorithms, email spam filtering, fraud detection, facial recognition and the latest advances in cybersecurity. Such systems are sorting machines. A mechanism is designed to recognize attributes of an input data set or stream and to sort it in some manner. (Coins come in different sizes and weights. Emails, tweets and news stories contain keywords and have sources, click frequencies and hundreds of other attributes. A photograph can be sorted into cat and not-cat, Tim O’Reilly and not-Tim O’Reilly.) People try to spoof these systems—just as my teenage peers and I did with vending machines—and the mechanism designers take more and more data attributes into account so as to eliminate errors.
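The coin-acceptance mechanism can be sketched in a few lines of code. This is a toy illustration, not a real vending-machine specification: the dimensions and tolerance below are invented, and the only point is that the machine sorts on measurable attributes, not on value.

```python
# Reference attributes for the coin we want to accept (illustrative values).
QUARTER = {"diameter_mm": 24.26, "weight_g": 5.67}

def accepts(disk, tolerance=0.05):
    """Accept any disk whose attributes fall within `tolerance`
    (as a fraction) of the reference coin's. The mechanism knows
    nothing about money, only sizes and weights."""
    return all(
        abs(disk[attr] - target) / target <= tolerance
        for attr, target in QUARTER.items()
    )

genuine = {"diameter_mm": 24.26, "weight_g": 5.67}
slug    = {"diameter_mm": 24.30, "weight_g": 5.60}  # punched-out metal slug
foreign = {"diameter_mm": 21.00, "weight_g": 4.50}  # smaller foreign coin
```

The slug is accepted along with the genuine coin, while the obviously wrong-sized foreign coin is rejected: sorting on attributes rather than value is exactly the spoofing opportunity described above.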

A vending machine is fairly simple. Currency changes only rarely, and there are only so many ways to spoof it. But content is endlessly variable, and so it is a Sisyphean task to develop new mechanisms to take account of every new topic, every new content source and every emergent attack. Enter machine learning. In a traditional approach to building an algorithmic system for recognizing and sorting data, the programmer identifies the attributes to be examined, the acceptable values and the action to be taken. (The combination of an attribute and its value is often called a feature of the data.) Using a machine-learning approach, a system is shown many, many examples of good and bad data in order to train a model of what good and bad look like. The programmer may not always know entirely what features of the data the machine-learning model is relying on; the programmer knows only that it serves up results that appear to match or exceed human judgment against a test data set. Then the system is turned loose on real-world data. After the initial training, the system can be designed to continue to learn.
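The contrast between the two approaches can be sketched as follows. Everything here is invented for illustration: the keyword list, the labeled examples, the threshold and the deliberately tiny word-odds model (a Laplace-smoothed cousin of naive Bayes), which stands in for far larger real systems.

```python
from collections import Counter

# Hand-coded approach: the programmer names the features explicitly.
def rule_based_is_spam(text):
    return any(w in text.lower() for w in ("lottery", "prize", "winner"))

# Learning approach: show the system labeled examples and let it
# derive which words matter.
def train(examples):
    """examples: iterable of (text, is_spam). Returns per-word spam odds."""
    spam, ham = Counter(), Counter()
    for text, is_spam in examples:
        (spam if is_spam else ham).update(text.lower().split())
    # Laplace smoothing: +1 so unseen counts never divide by zero.
    return {w: (spam[w] + 1) / (ham[w] + 1) for w in set(spam) | set(ham)}

def learned_is_spam(model, text, threshold=1.0):
    score = 1.0
    for w in text.lower().split():
        score *= model.get(w, 1.0)  # words never seen carry no evidence
    return score > threshold

examples = [
    ("win free money now", True),
    ("claim your free prize", True),
    ("meeting moved to noon", False),
    ("lunch at noon tomorrow", False),
]
model = train(examples)
```

Note that the learned model was never told which words are "spammy"; it inferred that from the examples, which is why the programmer may not be able to say exactly which features it relies on.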

If you’ve used the facial recognition features of Apple or Google’s photo applications to find pictures containing you, your friends or your family, you’ve participated in a version of that training process. You label a few faces with names and then are given a set of photos the algorithmic system is fairly certain are of the same face and some photos with a lower confidence level, which it asks you to confirm or deny. The more you correct the application’s guesses, the better it gets. I have helped my photo application get better at distinguishing between me and my brothers and even, from time to time, between me and my daughters, until now it is rarely wrong. It recognizes the same person from childhood through old age.
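The confirm-or-deny loop described above can be sketched with a toy model. Real photo applications use learned face embeddings and far more sophisticated clustering; the two-dimensional "embeddings," the nearest-centroid rule and the confidence threshold below are all hypothetical stand-ins.

```python
def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class FaceGrouper:
    """Groups face embeddings by nearest labeled centroid; guesses below
    the confidence threshold are routed back to the user to confirm."""

    def __init__(self, confident_within=0.5):
        self.sums = {}  # name -> (componentwise sum of embeddings, count)
        self.confident_within = confident_within

    def label(self, name, embedding):
        # A user-supplied label becomes training data: update the centroid.
        s, n = self.sums.get(name, ([0.0] * len(embedding), 0))
        self.sums[name] = ([a + b for a, b in zip(s, embedding)], n + 1)

    def guess(self, embedding):
        # Return (best_name, needs_confirmation): distant matches are
        # low-confidence, so the user is asked, and the answer feeds
        # back into label() to improve future guesses.
        best, best_d = None, float("inf")
        for name, (s, n) in self.sums.items():
            d = distance([x / n for x in s], embedding)
            if d < best_d:
                best, best_d = name, d
        return best, best_d > self.confident_within
```

Each correction tightens the centroids, which is why the application's guesses improve the more you confirm or deny them.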

A human-machine hybrid

Note that these systems are hybrids of human and machine—not truly autonomous. Humans construct the mechanism and create the training data set, and the software algorithms and machine-learning models are able to do the sorting at previously unthinkable speed and scale. And once they have been put into harness, the data-driven algorithms and models continue not only to take direction from new instructions given by the mechanism designers but also to learn from the actions of their users.

In practice, the vast algorithmic systems of Google, Facebook and other social media platforms contain a mix of sorting mechanisms designed explicitly by programmers and newer machine-learning models. Google search, for instance, takes hundreds of attributes into account, and only some of them are recognized by machine learning. These attributes are summed into a score that collectively determines the order of results. Google search is now also personalized, with results based not just on what the system expects all users to prefer but also on the preferences and interests of the specific user asking a question. Social media algorithms are even more complex, because there is no single right answer. “Right” depends on the interests of each end-user and, unlike with search, those interests are not stated explicitly but must be inferred by studying past history, the interests of an individual’s friends and so forth. Such systems are examples of what financier George Soros has called reflexive systems, wherein some results are neither objectively true nor false but are instead the sum of what all the system’s users (“the market”) believe.
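The summing of attributes into a ranking score can be sketched as a weighted sum. The signal names and weights below are hypothetical: real ranking systems combine hundreds of attributes, some hand-designed and some produced by learned models, into a single score that orders the results.

```python
# Hypothetical signal weights; each document attribute is assumed
# to be normalized into the range 0..1.
WEIGHTS = {"relevance": 3.0, "authority": 2.0, "freshness": 1.0,
           "personal_fit": 1.5}

def score(doc):
    """Weighted sum of a document's attribute values; missing
    attributes contribute nothing."""
    return sum(w * doc.get(attr, 0.0) for attr, w in WEIGHTS.items())

def rank(docs):
    return sorted(docs, key=score, reverse=True)

docs = [
    {"url": "a.example", "relevance": 0.9, "authority": 0.2,
     "freshness": 0.1},
    {"url": "b.example", "relevance": 0.6, "authority": 0.9,
     "freshness": 0.8, "personal_fit": 0.7},
]
```

Personalization enters through signals like the hypothetical `personal_fit`: two users issuing the same query can receive different orderings because their per-user attributes differ.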

The individual machine components cannot be thought of as intelligent, but these systems as a whole are able to learn from and respond to their environment, to take many factors into account in making decisions and to constantly improve their results based on new information.

That’s a pretty good definition of intelligence, even though it lacks other elements of human cognition such as self-awareness and volition. Just as with humans, the data used in training the model can introduce bias into the results. Nonetheless, these systems have delivered remarkable results—far exceeding human abilities in field after field.

In those hybrid systems, humans are still nominally in charge, but recognition of and response to new information often happens automatically. Old, hand-coded algorithms designed by human programmers are being replaced by machine-learning models that are able to respond to changes in vast amounts of data long before a human programmer might notice the difference. But sometimes the changes in the data are so significant—for example, makeup designed specifically to fool facial recognition systems, astroturfed content produced at scale by bots masquerading as humans or deepfake videos—that humans need to build and train new digital subsystems to recognize them. In addition, the human mechanism designers are always looking for ways to improve their creations.

Govern not by rules but by outcomes

The decades of successful updates to Google search in order to maintain search quality in the face of massive amounts of new information, adversarial attacks and changes in user behavior—as well as other success stories like antispam and credit-card-fraud-detection systems—provide some basis for understanding how to govern the AI of the future. Human control is expressed not through a fixed set of rules but through a set of desired outcomes. The rules are constantly updated in order to achieve those outcomes. Systems managed in this way represent a sharp break with previous, rules-based systems of governance.

Any governance system that tries to define, once and for all, a set of fixed rules is bound to fail. The key to governance is the choice of desired outcome, measurement of whether or not that outcome is being achieved and a constantly updated set of mechanisms for achieving it.

There are two levels of AI governance:

  1. The microgovernance of constant updates in response to new information, expressed by building better algorithms and models
  2. The macrogovernance of the choice of outcome for which algorithms and models are optimized
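The choose-an-outcome, measure, update-the-rules loop can be written out schematically. Every function here is a hypothetical placeholder; the point is only the shape of the loop, in which the target outcome is fixed while the rules that pursue it are constantly revised.

```python
def govern(rules, measure, update_rules, target, max_rounds=10):
    """Measure the outcome under the current rules; while it falls
    short of the target, revise the rules and measure again. The
    target is the constant; the rules are the moving part."""
    for _ in range(max_rounds):
        if measure(rules) >= target:
            break
        rules = update_rules(rules)
    return rules

# Toy example: each revision tightens the rules, which improves the
# (stand-in) measured outcome until the target is met.
measure = lambda rules: min(1.0, 0.25 * rules["strictness"])
tighten = lambda rules: {"strictness": rules["strictness"] + 1}
final = govern({"strictness": 1}, measure, tighten, target=0.9)
```

Level 1 governance lives inside `update_rules`; level 2 is the choice of `measure` and `target`, which the loop itself never questions.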

Today’s technology companies have gotten pretty good at level 1. Where they struggle is at level 2. The outcomes-based approach to governance does have an Achilles’ heel. Algorithmic systems are single-minded optimizers. Much like the genies of Arabian mythology, they do exactly what their masters ask of them regardless of the consequences, often leading to unanticipated and undesirable results.

Peter Norvig, Google’s director of research and co-author of the leading textbook on AI, notes that part of the problem is that it is hard to say what you want in a succinct statement—whether that statement is made in everyday language, legalese or a programming language. This is one advantage of machine learning over traditional systems. We show these systems examples of what we consider good and bad rather than try to summarize them once and for all in a single statement—much as human courts rely on case law.

Another part of the problem is the hubris of thinking it is possible to give the genie a coherent wish. Norvig points out that we should recognize there will be errors and that we should use principles of safety engineering. As he said to me, “King Midas would have been OK if only he had said, ‘I want everything I touch to turn to gold, but I want an undo button and a pause button’.”

I’m not sure that would be sufficient, but it’s a good start.

Be careful what you ask for

If the outcome is well chosen and directed by the interests not only of the mechanism designer and owner but also of the system’s users and society as a whole, the benefits can be enormous. For example, Google set as its corporate mission “to organize the world’s information and make it universally accessible and useful.” Few could deny that Google has made enormous progress toward that goal. But in a hybrid system, goals set in human terms must be translated into the mathematical language of machines. That is done through something referred to as an objective function, whose value is to be optimized (maximized or minimized). Google’s search algorithms are relentlessly optimized for producing answers—originally, a list of pointers to websites and now, for many searches, an actual answer—that satisfy users, as measured by the fact that they go away and don’t make the same search a second time.
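The translation into an objective function can be sketched as follows. The repeat-search proxy is suggested by the paragraph above, but the session data, policy names and the optimizer itself are invented for illustration.

```python
# The human goal ("satisfy the searcher") restated as a measurable
# proxy: a session where the user immediately reruns the same query
# counts as a failure, so the objective to minimize is the fraction
# of such sessions under a given ranking policy.

def repeat_rate(policy, sessions):
    mine = [s for s in sessions if s["policy"] == policy]
    if not mine:
        return 1.0  # no evidence: assume the worst
    return sum(s["retried"] for s in mine) / len(mine)

def best_policy(policies, sessions):
    # The optimizer is single-minded: it chooses whatever minimizes
    # the proxy, whether or not the proxy still tracks the real goal.
    return min(policies, key=lambda p: repeat_rate(p, sessions))

sessions = [
    {"policy": "A", "retried": False},
    {"policy": "A", "retried": False},
    {"policy": "A", "retried": True},
    {"policy": "B", "retried": True},
    {"policy": "B", "retried": True},
]
```

Everything downstream optimizes the proxy, not the goal; when the two diverge, as the following sections describe, the system keeps optimizing anyway.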

Facebook, too, lays claim to a noble mission. It aims “to give people the power to build community and bring the world closer together.” However, in fulfillment of that mission, Facebook tasked its systems with optimizing for what might broadly be called engagement, measuring such factors as how much time users spend on the site and how many posts they read, like and respond to. The system’s creators believed that this would bring their users closer relationships with their friends, but we now know that instead, it drove divisiveness, addictive behavior and a host of other ills. Not only that, but outsiders learned to game the system in order to manipulate Facebook’s users for their own ends.

Like other social media platforms, Facebook has made progress at the microlevel of governance by targeting hate speech, fake news and other defects in the newsfeed curation performed by its algorithmic systems in much the same way that the vending machines of my childhood added new tests to avoid dispensing candy bars in exchange for worthless metal slugs.

But the company is still struggling with the higher-level question of how to express the human wish to bring people together in a mathematical form that will cause its genies to produce the desired outcome.

Most troubling is the question: what are Facebook’s alternatives if greater engagement with its services is not actually good for its users? The value of the company depends on growth in users and usage. Its advertising premium is based on microtargeting, wherein data about users’ interests and activities can be used for their benefit but can also be used against them. In far too many cases, when the interests of its users and the interests of its advertisers diverge, Facebook seems to take the side of the advertisers—even to the point of knowingly accepting false advertising over the protests of its own employees.

The problem of mixed motives

Despite its history of success in constantly updating its search engine for the benefit of its users, Google fell prey to many of the same problems as Facebook at its YouTube unit. Unable to use “give them the right answer and send them away” for the majority of its searches, YouTube chose instead to optimize for time spent on the site and ended up with disinformation problems at least as bad as those that bedevil Facebook—and quite possibly worse.

Even at its search engine unit, Google seems to have turned away from the clarity of its original stance on the divergence of interest between its users and its advertisers, which Google cofounders Larry Page and Sergey Brin had identified in their original 1998 research paper on the Google search engine. In an appendix titled “Advertising and Mixed Motives,” they wrote, “We expect that advertising-funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers.”

At the time, Page and Brin were arguing for the existence of an academic search engine without commercial motives as a check on that problem. But with the adoption of pay-per-click advertising, whereby advertisers are charged only when a user clicks on an ad—presumably because the user found it useful—they believed they had found a way to align the interests of the company’s two prime constituencies. In the company’s first decade or so, Google also made a clear separation between the systems that served its end-users and the systems that served its advertisers. Ad results were calculated separately and shown completely separately from organic search results. But gradually, the boundaries began to blur. Ads, formerly in a secondary position on the page and highlighted in a different color, began to take on more and more prominent positions and to become less distinguishable from organic search results.

Google also seems to have re-evaluated the relationship between itself and the content suppliers of the World Wide Web. The company began as a way to match information seekers with information providers in this vast new marketplace for human collective intelligence. Its job was to be a neutral middleman, using its massive technological capabilities to search through what was to become trillions of Web pages in order to find the page with the best answer to trillions of searches a year. Google’s success was measured not only by the success of its users but also by the success of the other sites that the search engine sent the users off to.

In an interview that was attached to the Form S-1 filing for Google’s 2004 IPO, Page said: “We want you to come to Google and quickly find what you want. Then we’re happy to send you to the other sites. In fact, that’s the point. The portal strategy tries to own all of the information . . . Most portals show their own content above content elsewhere on the Web. We feel that’s a conflict of interest—analogous to taking money for search results. Their search engine doesn’t necessarily provide the best results; it provides the portal’s results. Google conscientiously tries to stay away from that. We want to get you out of Google and to the right place as fast as possible. It’s a very different model.”

Page and Brin seem to have understood at the time that success did not mean success for only themselves—or even for their customers and advertisers—but for the whole ecosystem of information providers whose content Google had been created to search. Google’s early genius was in balancing the competing interests of all those different constituencies. This is the positive future of AI. As Paul Cohen, a former DARPA program manager for AI who is now dean of the School of Computing and Information at the University of Pittsburgh, once said, “The opportunity of AI is to help humans model and manage complex, interacting systems.” Yet 15 years after Page said Google’s aim was to send users on their way, more than 50% of all searches on Google end on Google’s own information services, with no clickthrough to third-party sites; and for any search that supports advertising, paid advertising has driven organic search results far below the fold.

Our algorithmic master

It is not just companies like Google and Facebook that have moved from being traditional, human-directed organizations into a new kind of human–machine hybrid that ties their employees, their customers and their suppliers into a digital, data-driven, algorithmic system. It is all companies whose stock is traded on public markets. Science fiction writer Charlie Stross calls modern corporations “slow AIs.” And like Bostrom’s paper clip maximizer, these AIs are already executing an instruction set that tells them to optimize for the wrong goal and to treat human values as obstacles.

How else can you explain a system that treats employees as a cost to be eliminated and customers and communities as resources to be exploited? How else can you explain pharmaceutical companies that worked consciously to deceive regulators about the addictiveness of the opioids they were selling, thereby triggering a devastating health crisis? How else can you explain decades of climate denial by fossil fuel companies, decades of cancer denial by tobacco companies and the obscene accounting manipulations and government derelictions to avoid paying the taxes that keep their host countries functioning? It is the machine that is in charge, and like all such machines, it thinks only in mathematics, with an objective function whose value is to be maximized.

Those who gave our companies and our markets the objective function of increasing shareholder value above all else believed that doing so would lead to greater human prosperity. When, in 1970, Milton Friedman wrote that the only social responsibility of a corporation is to increase its profits, he believed that this would allow shareholders, as recipients of those profits, to make their own determinations about how best to use them. He didn’t imagine the race to the bottom of declining wages, environmental degradation and social blight that the single-minded pursuit of corporate profit would actually deliver. But after 1976, when Michael Jensen and William Meckling made the case that the best mechanism design for maximizing shareholder value was to pay executives in company stock, the human managers were made subservient to the objective of the machine.

We now know that Friedman, Jensen and Meckling were wrong about the results they expected, but the mechanism has been built and enshrined into law. Those who designed it have passed on, and those who are now nominally in charge (our policy-makers, our economic planners, our legislators and our government executives) no longer entirely understand what was built or can no longer agree on how to change it. Government, too, has become a slow AI. As E. M. Forster wrote in The Machine Stops, “We created the Machine to do our will, but we cannot make it do our will now . . . We only exist as the blood corpuscles that course through its arteries, and if it could work without us, it would let us die.” And so the paper clip maximizer continues its work, just as it has been commanded.

We humans do what we can to blunt this relentless command from our former algorithmic servant, which, through our ignorance and neglect, has become our algorithmic master. We adopt high-minded principles like those articulated by the Business Roundtable, promising to take into account not just corporate profit but also the needs of employees, customers, the environment and society as a whole. Attempts at governance of this kind are futile until we recognize that we have built a machine and set it on its course. Instead, we pretend that the market is a natural phenomenon best left alone, and we fail to hold its mechanism designers to account. We need to tear down and rebuild that machine, reprogramming it so that human flourishing, not corporate profit, becomes its goal. We need to understand that we can’t just state our values. We must implement them in a way that our machines can understand and execute.

And we must do so from a position of profound humility, acknowledging our ignorance and our likelihood of failure. We must build processes that not only constantly measure whether the mechanisms we have built are achieving their objective but also constantly question whether that objective is the correct expression of what we actually desire. But even that may not be enough. As Stuart Russell notes in Human Compatible, the machinery we create must operate on the principle that it does not know the right objective. If it has a single-minded objective, a truly self-aware AI might well work to prevent us from changing it—and from detecting the need to change it—and so our oversight is not enough.

The governance of AI is no simple task. It means rethinking deeply how we govern our companies, our markets and our society—not just managing a stand-alone new technology. It will be unbelievably hard—one of the greatest challenges of the twenty-first century—but it is also a tremendous opportunity.

 
