
AI’s Invisible Hand: Why Democratic Institutions Need More Access to Information for Accountability

Marietje Schaake — Former European Parliament Member

Ethics and self-regulation are not enough

Across the world, artificial intelligence (AI) elicits both hope and fear. AI promises to help find missing children and cure cancer. But concerns over harmful AI-driven outcomes are equally significant. Lethal autonomous weapon systems raise serious questions about the application of armed-conflict rules. Meanwhile, anticipated job losses caused by automation top many governments’ agendas. In effect, AI models govern significant decisions that impact individuals and ripple through society at large.

Yet discussions of AI’s likely impact cannot remain binary, focused on gains and losses, costs and benefits. Getting beyond hope and fear will require a deeper understanding of the decisions and actions that AI applications trigger, along with their intended and unintended consequences. The troubling reality, however, is that the full impact of the massive use of tech platforms and AI is still largely unknown. But AI is too powerful to remain invisible.

Access to information forms the bedrock of many facets of democracy and the rule of law. Facts inform public debate and evidence-based policymaking. Scrutiny by journalists and parliamentarians and oversight by regulators and judges require transparency. But private companies keep crucial information about the inner workings of AI systems under wraps. The resulting information gap paralyzes lawmakers and other watchdogs, including academics and citizens, who are unable to detect or respond to AI’s impacts and missteps. And even with equal access to proprietary information, companies examine data through different lenses and with different objectives than those of democratic institutions, which serve and are accountable to the public.

The starting point for AI debates is equally flawed. Such conversations often focus on outcomes we can detect. Unintended consequences such as bias and discrimination creep into AI algorithms, reflecting our offline world or flowing from erroneous data sets and coding. Many organizations focus on correcting the damage caused by discriminatory algorithms. Yet we must also ask what to expect from AI when it works exactly as intended. Before addressing the sometimes discriminatory nature of facial recognition technologies, we need to know whether the technologies respect the right to privacy.

But AI and new technologies disrupt more than industries. They also systemically disrupt democratic actors’ and institutions’ ability to play their respective roles. We must devote more attention to those actors’ and institutions’ ability to access information about AI systems, a precondition for evidence-based regulation.

The key to the algorithmic hood

AI engineers admit that after endless iterations, no one can tell where an algorithm’s head ends and its tail begins. But we can know AI’s unintended outcomes only when we know what was intended in the first place. This requires transparency about training data, documentation of intended outcomes, and access to the various iterations of algorithms. Moreover, independent regulators, auditors, and other public officials need mandates and technical training for meaningful access to, and understanding of, algorithms and their implications.

Accountability is particularly urgent when governments rely on privately built AI systems for tasks or services in the public sphere. Such outsourced activities include the building and defense of critical infrastructure, the development and deployment of taxpayer databases, the monitoring of traffic, the disbursement of Social Security checks, and the provision of cybersecurity. Many companies that provide vital technologies for these services process large amounts of data impacting entire societies. Yet the level of transparency required of democratic governments is not equally applied to the companies behind such services.

Algorithms are not merely the secret sauces that enable technology companies to make profits. They form the bedrock of our entire information ecosystem. Algorithmic processing of data impacts economic and democratic processes, fundamental rights, safety, and security. To examine whether principles such as fair competition, non-discrimination, free speech, and access to information are upheld, the proper authorities must have the freedom to look under the algorithmic hood. Self-regulation and ethics frameworks do not enable independent checks and balances on powerful private systems.

This shift to private and opaque governance, in which company code sets standards and regulates essential services, is one of the most significant consequences of the increased use of AI systems. Election infrastructure, political debates, health information, traffic flows, and natural-disaster warnings are all shaped by the companies watching and steering our digital world.

Because digitization often equals privatization, the outsourcing of governance to technology companies allows them to benefit from access to data while the public bears the cost of failures such as breaches or misinformation campaigns.

Technologies and algorithms built for profit, efficiency, competitive advantage, or time spent online are not designed to safeguard or strengthen democracy. Their business models have massive privacy, democracy, and competition implications but lack matching levels of oversight. In fact, companies actively prevent insight and oversight by invoking trade-secret protections.

Transparency fosters accountability

Increasingly, trade-secret protections hide the world’s most powerful algorithms and business models. These protections also shield from public oversight the impacts companies have on the public good and the rule of law. To rebalance, we need new laws; and to pass evidence-based, democratic laws, we need meaningful access to information.

A middle way can and should be found between publishing the details of a business model for everyone to see and shielding algorithms from any outside scrutiny. Frank Pasquale, author of The Black Box Society, sensibly proposes the concept of qualified transparency: the level of scrutiny applied to an algorithm should be determined by the scale of the company processing the data and the extent of its impact on the public interest. Failure to address the misuse of trade-secret protections will leave more and more digitized and automated processes shaped inside black boxes.

The level of algorithmic scrutiny should match an algorithm’s risks to, and impacts on, individual and collective rights. An AI system used by schools that processes children’s data, for example, requires specific oversight. An AI element in an industrial process that examines variations in the color of paint is, by contrast, of a different sensitivity. But such assessments stretch beyond AI’s physical applications, into the inner workings of machine learning, neural networks, and algorithmic processing.

Some argue that it is too early to regulate artificial intelligence, or insist that law inevitably stifles innovation. But much of the groundwork already exists: by empowering existing institutions to exercise their oversight roles over increasingly AI-driven activities, we enable them to enforce antitrust, data-protection, net-neutrality, consumer-rights, safety, and technical standards, as well as other fundamental principles.

The question is not whether AI will be regulated but who sets the rules. Nondemocratic governments are moving quickly to fill legal voids in ways that fortify their national interests. In addition to democratic law-making, governments, as major procurers of new technological solutions, should be responsible buyers and write public accountability into tenders.

Many agree that lawmakers were too late to regulate online platforms, microtargeting, political ads, data protection, misinformation campaigns, and privacy violations. With AI, we have the opportunity to regulate in time. As we saw at Davos, even corporate leaders are calling for rules and guidance from lawmakers. They are coming to appreciate the power of technology governance, and how technologies embed values and set standards.

Reaching AI’s potential

While much remains to be learned and researched about AI’s impact on the world, a few patterns are clear. Digitization often means privatization, and AI will exacerbate that trend. With that comes a redistribution of power and the obscuring of information from the public eye. Already, trade secrets not only shield business secrets from competitors; they also blindside regulators, lawmakers, journalists, and law enforcement actors with the unexpected outcomes of algorithms whose instructions remain hidden. AI’s opaque nature and its many new applications create extraordinary urgency to understand how its invisible power impacts society.

Only with qualified access to algorithms can we develop proper AI governance policies. Only with meaningful access to AI information can democratic actors ensure that laws apply equally online as they do offline. Promises of better health care, or of the just use of AI in extreme circumstances such as war, will be realized only with access to algorithmic information. Without transparency, regulation and accountability are impossible.

Technology expresses our values. How will we be remembered?

We are at a critical juncture. Our values are coded and embedded into technology applications. Today, companies as well as authoritarian regimes direct the use of technology for good or evil. Will democratic representatives step up and ensure that AI’s development respects the rule of law? We can move beyond hope and fear only when independent researchers, regulators, and representatives can look under the algorithmic hood.
