Taking Care of Business: The Private Sector’s Lens on Responsible AI

Hilary Mason — Founder & CEO, Fast Forward Labs
Jake Porway — Founder and Executive Director, DataKind
A Conversation

What does responsible AI mean to you?

+ Hilary:

Responsibility means thinking from the very beginning about potential impacts when building systems. This includes testing as thoroughly as possible, understanding that errors and biases often go undiscovered in the development and testing process, and having mechanisms to report and correct what may have been missed.

Responsible AI means understanding the use of predictive technology and its impact on people.

This has many layers. It may mean allowing a human to override the system when necessary.

Responsible AI is not a technology problem. There is no technical checkbox. You cannot run an ethicize-my-work program and be done.

Responsible AI has to be owned by the product leaders, the business strategists and the people making business-model decisions as much as it is owned by the technologists doing the technical work.

+ Jake:

Ultimately, the responsibility for any technology comes down to who has oversight of a system and who says yes or no, who can decide whether it goes forward or not. It’s funny that we automate these processes and tasks and just let AIs do their thing.

Do we ignore oversight in any other situation? Like not checking on whom a hiring manager is hiring, for example? We enforce ethical AI by looking at outcomes. We should trust but verify. As engineers, we should be thinking about responsibility for oversight of our systems in much the same way.

+ Hilary:

We need to think about how we evolve the practice from one that focuses on the math to optimize the objective function, without regard for impact or testing, to one in which testing is a required step in the development process. You cannot escape human ownership and credibility. Still, there’s no test that will solve for this without thinking about who might be impacted in positive and negative ways.

The broad sentiment in the community and at most companies—the people who hire and manage, not just the people with their hands on keyboards—is that there’s no one process for this.

If a company wants to commit to building responsible AI, it has to commit to building a responsible business. That means leadership has to believe that that’s the right path forward.

It doesn’t matter how many engineers or technologists ultimately leave their employers, because those employers can always hire people who share their values. The value system is a big piece.

There is a very broad conversation around excellence; a piece of it is also responsibility. A lot of people are drawn to this work because they care about that piece. So I’m very hopeful for the next ten years.

How are folks in the trenches grappling with these challenges?

+ Hilary:

The data science and AI community realizes it has the power to advocate for how it would like to do the work.

That comes from a strong hiring market. Employees can easily move if they don’t like what their company does. They can and do, as the No Tech for ICE movement shows. People do not want to support something they strongly feel is wrong.

+ Jake:

How can we make whistle-blowing something more than a high-risk situation? Because we do see situations where engineers are shuffled out the door for speaking up. How do we make it safe and actionable to push back? Without that, ethical codes or responsibility training won’t make a difference.

A bigger question is how much we want engineers to make ethical decisions. I hear a lot of conversations putting the onus on engineers not to do unethical things. But identifying or assessing what’s “ethical” isn’t always easy. For example, one patient diagnostic system over-recommended oxycodone in order to increase profits. It was a clear example of an algorithm doing harm so that its company could make more money.

People were up in arms. They were livid that engineers had coded this overprescription feature into the software and did not speak up.

But people assume that the distorted outcomes were obvious to the engineer creating this feature. Does that spec come down the line to them as “Let’s kill people”? Certainly not. It may be described as a “medication recommendation feature” that senior executives have requested to drive certain outcomes.

Do you want the engineer to be the one deciding whether the feature prescribes too much oxycodone? How would an engineer know without being a physician? If the engineer pushed back and the feature was removed, would you have the flipside headline, “AI denies pain medications to people in pain”? You may want an engineer to raise questions. But you probably don’t want the engineer making the ethical decisions. So, do we have agency to push back as engineers? Do we have context to know when to push back?

+ Hilary:

Engineers are building more leverage, but they do not have agency. And even when they do, they quit their jobs and go somewhere else. And the work continues.

What might illuminate a path forward toward responsible AI?

+ Jake:

To talk about responsible AI, we first have to address the goals of a system. AI is basically an accelerator of the values in its system. AI makes reaching the goals of that system faster and cheaper. So, it’s important to recognize that responsible AI is impossible without responsible systems. For example, most companies are designed to make money only from their products. So, one version of being responsible focuses on how companies can do less harm and how engineers can be more ethical.

Then there’s a version of responsible AI where the technology is applied to a human challenge. For example, what are the applications for getting clean water to people, and what might that look like, done responsibly? How can machine learning or AI help? This takes a closely related but slightly different lens, starting less from the AI solution and more from the human problem: how can we solve it through an AI intervention so that many more people do not die? In that version of responsible AI, we use AI to support systems that have human goals—goals for civil society.

So, when we think about a path toward responsible AI in that second context, we have to ask how we will build that tool and who will build it. There may not be a profit motive. How might we also use our skills to achieve goals like social prosperity? How do you get the technologists on payroll to do that? AI-for-good tools are a cost center. Unfortunately, a lot of social prosperity is a cost center!

However, companies are increasingly putting money and effort behind having their engineers be a part of social-good projects or finding ways to share their data safely for social good. Microsoft has established an AI for X program, where X can be Earth, oceans or other social causes. Companies are putting millions of dollars, engineering capabilities and technology into these programs and partnering with UN agencies or non-governmental organizations [NGOs] to see how a technology can be applied.

For example, Johnson & Johnson just put $250 million behind digital health workers with a view to exploring with UNICEF and USAID how machine learning can be used on the front lines of health. Accenture and NetHope are building capacity in the social sector; they have just created the Center for the Digital Nonprofit. And they’re investing long term in digitizing the civil society sector and creating responsible digital nonprofits.

Multistakeholder partnerships are clearly the way people and businesses can together make change.

We’re involved with a group called data.org, launched by The Rockefeller Foundation and the Mastercard Center for Inclusive Growth, which helps multistakeholder partnerships deepen their social impact through data science. Our work uses AI to boost the effectiveness of the health care that community health workers provide. Teams are building algorithms that identify which households need care most urgently, and they’re using computer vision to digitize handwritten forms to modernize data systems. This kind of innovation is unlikely to come from the private sector alone—or just NGOs. So, we’re bringing together folks from different sides of the table to build the AI they want to see in the world. It’s a winning approach involving businesses, foundations, NGOs and other actors.

Another strategy is establishing a consortium of companies and agencies thinking about AI safety and responsibility, such as the ABOUT ML group of folks from Microsoft and Google, which wants to create a system to improve the explainability and understanding of the algorithms their companies are building. In both of these examples, companies are devoting their resources to partnerships that allow us to build AI for human prosperity.

+ Hilary:

We need to shift the system and get more granular. For example, build a malware-detection model in a way that exposes how testing is done. I’m interested in examples of data scientists sitting at their computers, doing their jobs and looking at how someone else has done this right.
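As a rough, hypothetical sketch of the kind of testing step described above (the group names, labels and the 5% bound are illustrative assumptions, not details from this conversation), a team might compute a model’s false-positive rate for each affected group and hold the model back from deployment if any group exceeds an agreed threshold:

```python
from collections import defaultdict

def false_positive_rate_by_group(records, max_fpr=0.05):
    """records: iterable of (group, true_label, predicted_label) with 0/1 labels.
    Returns per-group false-positive rates and whether every group stays
    under max_fpr. Groups, labels and the 5% bound are illustrative only."""
    false_pos = defaultdict(int)   # false positives per group
    negatives = defaultdict(int)   # actual negatives per group
    for group, y_true, y_pred in records:
        if y_true == 0:
            negatives[group] += 1
            if y_pred == 1:
                false_pos[group] += 1
    rates = {g: false_pos[g] / n for g, n in negatives.items() if n}
    return rates, all(r <= max_fpr for r in rates.values())

# Hypothetical usage: (group, true label, model prediction)
records = [("group_a", 0, 0), ("group_a", 0, 1), ("group_b", 0, 0), ("group_b", 1, 1)]
rates, ok = false_positive_rate_by_group(records)
print(rates, "deploy" if ok else "hold for review")
```

The point is not the particular metric but that the check becomes an explicit, reviewable step in the development process rather than an afterthought.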

How and where are companies doing this right?

+ Hilary:

Some people are very thoughtful about bias in their image classifiers and test sets. They examine almost every real data set for extreme class bias and think about how their classifications accommodate it.
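A minimal sketch of that kind of data set inspection, assuming the labels are available as a simple list (the label names and the thresholds below are invented for illustration):

```python
from collections import Counter

def flag_class_imbalance(labels, min_share=0.01):
    """Count examples per class and flag any class that holds less than
    min_share of the data. The default threshold is illustrative only."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {cls: n / total for cls, n in counts.items() if n / total < min_share}

# Hypothetical usage with a toy label list
labels = ["benign"] * 990 + ["fraud"] * 10
print(flag_class_imbalance(labels, min_share=0.05))  # {'fraud': 0.01}
```

Flagged classes can then prompt a decision: collect more examples, reweight, or narrow what the model claims to do.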

There are also some things that shouldn’t exist. Like the facial-recognition start-up that’s selling police departments photos harvested from social media. This violates fair use, copyright and permission terms. Or AI video interviews that are replacing face-to-face ones.

+ Jake:

Companies implementing ethics codes in their engineering departments are at least starting the conversation. But we also have to consider how we think about success. Because if we’re not clear about what responsible AI looks like, no one company can get it right. In the AI interview case, how would you know if the AI interviewer was biased? This algorithm may be better than the status quo, but it’s also encoding a set of consistent biases at scale—which the status quo did not.

Unfortunately, running a rigorous experiment to determine whether your AI interviewer is better or worse than a human is really hard, because a huge set of complex economic and demographic factors makes such AI systems difficult to assess. This comes back to responsibility. That’s why we’re having conversations about optimizing for profits and about how these things correlate with numbers of sales. We can all agree on pretty straightforward profit metrics. But incredibly difficult philosophical questions are not easily quantified.

Success in the real world is hard to quantify because the world is too complex and because we all have independent sets of values for how we think the world should be. But AI systems work only when they have very specific objective functions. The greatest trick AI will pull off will not be taking over humanity; it will be forcing us to say explicitly what success looks like. Often, we’re not explicit enough about what success looks like in society.

Some big tech companies say they want to be regulated. What is your take on that?

+ Hilary:

I’m pretty cynical. When large companies ask to be regulated, they’re asking to entrench their advantage in an area that’s changing and developing very quickly.

Let’s say you put a review process around the deployment of any AI model, and that process costs around a million dollars. Only companies operating at global scale can afford it.

So small start-ups can’t compete, but the Googles of the world will invest and try these things. So, I worry deeply that the kinds of regulation these large companies push through will strongly advantage them, create an oligopoly with access to broad technology and destroy smaller organizations’ efforts to use the technology effectively.

On the other hand, regulation could encourage broader innovation through data ownership and data portability—meaning, being legally required to explain when and how you sell data from one organization to another.

+ Jake:

When I think about regulation innovation, I think about Kenya. Kenya has a long history of experimentation in the digital space, but the innovators are usually not Kenyan, and they collect data that doesn’t flow back to Kenyans. For example, people were building payment systems that collected Kenyan citizens’ data, and creditors then sold the data they’d just acquired. It was problematic.

So, the Kenyan government passed a modified version of the European Union’s General Data Protection Regulation to protect citizens’ data. The government also thought the new regulation would bring business in, because it required that data stay in-country. I heard that Amazon is setting up a data center in Kenya—perhaps because of this policy. So, some government regulation can help business and government.

+ Hilary:

I’m interested in the California Consumer Privacy Act, which was signed in June 2018 and took effect in January 2020. If you share data about California residents, you’re required to disclose it. We’ll see what happens.

What about incentives and voluntary or involuntary approaches? What should nonprofits be thinking about?

+ Hilary:

I would love to see more collaboration and understanding. People do mess up. Companies are not innately evil. If we gave more space to learn and improve over time, I think we’d get to better outcomes. That’s really tough right now, because even those who are talking are not doing so in public.

+ Jake:

It’s really difficult for individuals to change systems. They tend to adopt the values of the systems they’re in.

We need to think deeply about how complicated and challenging responsible AI is. Mitigation is a common theme. The question is not how we stop this but, rather, what responsible AI should look like. We need to be open and considerate, and to reflect on solutions.

There’s a nuanced spectrum of risk. Things feel fairly histrionic whenever AI is perceived to be unethical, but not everyone is the worst offender. We need to share stories about industries that have regulated well and understand why. Could we not follow the path we’re taking to address climate change?

We all know that climate change is a huge issue that crosses national boundaries, and yet we’re still driving cars. We’re not saying Toyota engineers should be rising up and protesting. BP still has plenty of engineers.

And we have other safeguards for the environment. The Environmental Protection Agency as a regulatory body is only as strong as the governments we elect to enforce our laws. We’re in a similar spot with AI.

Companies absolutely have a role to play, and they shouldn’t be forced to shoulder this burden alone. It comes back to us, as people. People with a vision of how AI should be used in their lives should rise up and vote for regulation, vote for tech-literate politicians and find ways to measure AI models used in the public interest so we can ensure we’re getting what we need. At the end of the day, AI is for us. So, we must define how this stuff works. And we must hold AI accountable. Its level of responsibility must reflect what we as a society need.
