
Making AI Work For Humans

Amir Baradaran — Creative Research Associate and Adjunct Faculty, Columbia University
Katarzyna Szymielewicz — Co-founder and President, Panoptykon Foundation
Richard Whitt — Founder, GLIAnet Project

The web is light-years away from its inventors’ goal of expanding individual freedom and power through new connections and information sources. Understandably so: platforms are reluctant to give up control over our data storage and stories—or over our marketing profiles. Users’ stickiness—their propensity to stay on a web page—brings big platforms big profits from advertisers. That concentration of data power also dissuades other developers from competing with large platforms. The result? Platforms have limited accountability, and users have limited alternatives.

It’s broken but can be fixed

How to tilt power and agency back to end-users? New regulations are an obvious and frequently made suggestion. But to date, they are neither fast nor forceful enough. Another common idea is to break up the platforms. But like a game of Whack-a-Mole, over time similar woes will re-emerge because, as Tim O’Reilly noted in his essay, the same perverse incentives remain. Also under discussion: a utility model much like that used for telecommunications companies, converting platforms to publicly owned companies with data portability—analogous to phone number and contacts portability. All of these approaches strive to increase accountability, which is sorely needed. But their sanctions and obligations primarily deter and punish bad actors.

We think there’s a better way. A more human way. A way that doesn’t require dismantling the current system and that expands the awesome reach and potential of AI for good—for all humans. As lawyers who’ve worked in government, at major platforms and at public interest groups—and a technologist/designer who has ripped apart AI’s guts and put it back together—we’ve been in war rooms, legislative chambers and garages. And we strongly believe that a new paradigm is possible. We believe we can restructure a web ecosystem to give more power and agency to end-users and that the web can better serve the interests of a range of stakeholders, not just those chasing profits. Our proposal is in its infancy, but it has many precursors, and we are starting to see its contours.

Humans at the core

Our new ecosystem puts humans in the driver’s seat and makes possible new business models from trusted platforms through smart interfaces that replace current patterns of data exploitation with real transparency and empowerment. We very intentionally use the term humans rather than users because humans are active rather than passive agents in these models, providing vital data fuel. In this new ecosystem:

+ Humans can access, control, verify and shape their personal data and narrative and its flow without losing contacts or prior posts, without asking platforms for permission to keep them, and without downloading additional applications. Today, humans theoretically have a right of data access. But in practice, they receive only a curated picture of their data and cannot do what we are proposing. In the new ecosystem, should humans object to aspects of a profile, those aspects can be deleted in one click. Result: advertisers and commercial developers can access only preapproved information. Humans can thus control their own narratives and their narratives’ journeys. They can also connect to other social networks and access other humans’ data—with permission. Finally, humans outside their social network can follow all public updates generated by humans on this platform in read-only format by using a simple RSS (really simple syndication) feed via a standardized API (application programming interface).
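As a rough sketch of that read-only idea, the snippet below renders a set of public updates as a minimal RSS 2.0 document using only the Python standard library. The update format and function name are our own illustrative assumptions, not part of any real platform API.

```python
# Hypothetical sketch: exposing a human's public updates as a
# read-only RSS 2.0 feed via a standardized API.
import xml.etree.ElementTree as ET

def updates_to_rss(author, updates):
    """Render a list of public updates as a minimal RSS 2.0 string."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = f"Public updates from {author}"
    for update in updates:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = update["title"]
        ET.SubElement(item, "pubDate").text = update["date"]
    return ET.tostring(rss, encoding="unicode")

feed = updates_to_rss(
    "alice",
    [{"title": "Hello, open web", "date": "Mon, 01 Jun 2020 12:00:00 GMT"}],
)
```

Because the feed is plain RSS, any standard reader outside the platform could follow it without an account.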

+ Other companies and noncommercial developers can train their algorithms or build their own models on existing platform data and stories to offer alternative solutions, functionalities, experiences or privacy and security standards. Consider a newsfeed that ceases at 10 p.m. or that excludes violent content. Or a newsfeed rigorously fact-checked by an independent news agency. Or a kill-the-newsfeed plug-in. Access to data made possible by a standardized API creates opportunity for such innovation. A maker can ask humans about their expressed preferences and the news they seek (which is impossible today) to propose a curated newsfeed. But the maker may also need limited (read-only) access to statistical models previously developed by Facebook for Facebook’s own purposes. For that, a regulator would need to approve their justification for competitive or public-interest purposes—in an approach akin to eminent-domain rulings.
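The filters just described—a 10 p.m. cutoff and a violent-content exclusion—could be expressed as a small client-side rule set sitting on top of the standardized API. The sketch below is purely illustrative; the post format and tag names are assumptions, not features of any existing platform.

```python
# Illustrative client-side newsfeed filter applying human-chosen rules.
from datetime import time

def filter_feed(posts, now, cutoff=time(22, 0), blocked_tags=("violence",)):
    """Return only the posts the human has chosen to see."""
    if now >= cutoff:  # the newsfeed "ceases at 10 p.m."
        return []
    # Exclude any post carrying a blocked tag.
    return [p for p in posts
            if not set(p.get("tags", ())) & set(blocked_tags)]

posts = [
    {"id": 1, "tags": ["news"]},
    {"id": 2, "tags": ["violence"]},
]
daytime_feed = filter_feed(posts, now=time(14, 0))
```

The point of the design is that the rules live with the human (or their chosen maker), not with the platform.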

+ Public institutions and researchers can use this same information and analysis to help solve hard problems. By taking a Creative Commons approach, they can aggregate humans’ output data (no longer personal) and statistical models developed by dominant platforms for purposes such as medical research, tracking and responding to pandemics or planning infrastructure upgrades. Possible applications might include matching soup kitchens with supermarkets for food nearing its sell-by date, tracking emissions in moving trucks or analyzing energy consumption patterns in personal devices to calculate their environmental cost. Some of this is already under way. For example, Uber has furnished driver data to the Organisation for Economic Co-operation and Development for tackling gig-economy issues. Google has done the same with US transit agencies for subway system upgrades.

Our new paradigm is a greenfield opportunity for platforms and those in their ecosystem, large and small. Benefits might include new revenue streams and broader visibility from new goods and services; reputational gains from fostering trust among existing customers and staff (Amazon employees, for example, protested the sale of the company’s facial recognition product for questionable purposes); new business models that may expand reach or reduce costs; better adtech and martech experiences for advertisers and humans; new job categories like ethical data brokers; more-satisfied humans through novel technology options such as personal AI assistants (which we’ll go into later); and more control over both humans’ and platforms’ futures. But time is running out. We must act swiftly in our shared interests.

Rethinking the web’s virtual and human infrastructures

A rough outline of how the new ecosystem might work follows. It is achievable with help from experts in multiple domains, including technology, law and business management. Best of all, our vision for a decentralized, open, human-centric web infrastructure builds on existing social networks and commercial databases.
Please be in touch with your thoughts and expertise.

Systems and design thinking guide the two types of infrastructure we propose: virtual and human. The new, virtual infrastructure consists of interfaces, portals, standards, protocols and other technical means of mediating between existing platforms and humans. The new, human infrastructure involves new kinds of organizations and agents that can help humans on the edge of the web access data for their own noncommercial purposes. Three steps make new virtual and human infrastructure creation possible:

+ Breaking up is hard to do: Separating data storage and analysis

To create an open and human-centric data infrastructure, the key is to separate the collection, transmission and storage of data from its computational analysis. Today, many of the statistically significant patterns that emerge from platforms’ deep and pervasive access to our data fall on the cutting room floor because they cannot be easily monetized. Humans thus lose some of the upside of their stories: patterns that could power tools and applications that support human flourishing.

In our proposed approach, humans take back control of their algorithmic destinies. Regulations or market forces compel platforms and data brokers to develop a standardized API that enables humans to easily see and verify their data and move it elsewhere—to, for example, a competing social network if desired. Alternatively, the data can be left on the platform, but humans exert far more control over their stories (or, for platforms, their marketing profiles) because data analysis sits elsewhere. An independent regulator such as a consumer protection body will need to define and control the structure of such an API.
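To make the idea concrete, here is a minimal sketch of what such a regulator-defined API surface might look like. Every class, method and field name below is our own illustrative assumption, not a proposed specification.

```python
# Hypothetical portability/access API shape. Names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class PersonalDataRecord:
    record_id: str
    category: str              # e.g. "posts", "contacts", "inferences"
    payload: dict
    human_approved: bool = False  # gate for advertiser/developer access

@dataclass
class DataPortabilityAPI:
    records: list = field(default_factory=list)

    def access(self):
        """Full, uncurated view of the human's data."""
        return list(self.records)

    def delete(self, record_id):
        """One-click deletion of an objected-to profile aspect."""
        self.records = [r for r in self.records if r.record_id != record_id]

    def export(self):
        """Portable bundle for moving to a competing network."""
        return [(r.record_id, r.category, r.payload) for r in self.records]

api = DataPortabilityAPI(records=[
    PersonalDataRecord("r1", "posts", {"text": "hello"}),
    PersonalDataRecord("r2", "inferences", {"interest": "autos"}),
])
api.delete("r2")  # the human objects to an inferred trait
```

The `delete` call models the one-click objection described earlier; `export` models portability to a competing network.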

In the new space between the storage of data and its interpretation, new roles and new ways of using data for noncommercial purposes can emerge.
Other stakeholders—new competitors, researchers or public service providers—can gain access to data inputs, data outputs and models used for algorithmic processing—depending on their needs.

An example of such a human-centric technology platform is Sir Timothy Berners-Lee’s Solid project, which shifts and corrals one’s data from servers across the world into a local pod (personal online data store). And Finland’s Ministry of Transport and Communications has its MyData project, with a framework, principles and a model whereby individuals can access their medical, transportation, traffic, financial and online data sets in one place, decoupling data storage from its analysis to enable consent-based release and sharing of data with groups, including public interest research data banks.

Another step in the right direction is the US ACCESS Act, a bipartisan bill introduced in October 2019. The act would grant humans new authority to connect directly with platform networks (interoperability) and to move their personal data to trusted entities (portability).

+ The human infrastructure: Trusted stewards who help manage our digital lives

Freeing personal data from its commercial moorings can bring enormous benefits, but also complexity, given the novel choices humans can make. Today’s ecosystem lacks both human and digital agents to manage those complexities. An independent, human third party could shift the current dynamic from one of passive platform-services users to empowered clients.

New and trusted fiduciaries can serve as the digital life support system for humans (or clients). That notion builds on the common law of fiduciary obligations, which includes the duties of care, of confidentiality and of loyalty. Humans can select their own digital fiduciary to act on their behalf. Typical functions might include:

+ Basic client protection such as managing passwords, updating software and establishing privacy settings
+ Filtering the client’s personal data-flows to reflect personal interests
+ Using advanced-technology tools, such as a personal AI, to promote the client’s agency and autonomy

To wrap your head around this intermediary approach, consider a residential real estate analogy. The seller’s agent ostensibly works for both parties to secure a deal. But in truth the agent is working only in the seller’s interests. The buyer’s agent, by contrast, serves solely the buyer.

In our ecosystem, the seller’s agent is a platform company; the buyer’s (here, the human’s) agent is a trusted intermediary. In other industries, similar separate agents that serve different interests within the same transaction include (1) a pharmaceuticals manufacturer (selling drugs) and a local pharmacist (tasked with giving sound advice) and (2) a bookseller (selling books) and a librarian (tasked with giving sound advice and protecting patron privacy).

On this point, the ACCESS Act legislation mentioned earlier would let humans delegate their interoperability and portability rights to a trusted third party—a custodian—operating under strong fiduciary duties.

Some digital third-party precursors already exist. One such service enables humans to download data from various sources to their phones in encrypted form, which other services can then process on-device.

+ The virtual infrastructure: An Alexa as your personal-data police force made possible by separating the computational layer

A longer-term but increasingly viable evolution of the API is the personal AI assistant. Think of a personal AI assistant as a virtual version of Alexa, one that is 100 percent on your side. This augmented reality avatar acts almost like a librarian—fetching data on demand instead of books—and protects your logs from tracing. Your personal AI assistant takes on those otherwise burdensome tasks, which helps separate personal data from the stories that result from its analysis. Building APIs into the platform companies’ computational systems lets humans and their digital agents control their data-flows and analyze data and patterns for their own needs.
Sometimes referred to as on-device, off-cloud AI, these applications hold enormous potential to represent humans in daily interactions with the web. Among the tasks a personal AI assistant can perform are protecting a human’s online security from rogue actors or hackers, projecting a human’s terms of service to websites (rather than the reverse) and bidirectionally filtering newsfeeds, social interactions and other content-flows on the web. A personal AI assistant can even challenge the efficacy of algorithmic systems—representing the human in, say, disputes with financial, healthcare and law enforcement entities—for bias, error and other flaws with potentially serious consequences. Finally, it can query, correct, negotiate and demand that the client be left alone.
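As a toy illustration, projecting a human’s terms of service to a website might reduce to a policy check like the one below. The request fields and policy keys are hypothetical assumptions, sketched only to show the direction of control reversing: the site’s request must satisfy the human’s terms, not the other way around.

```python
# Hypothetical sketch: a personal AI assistant evaluates a website's
# data request against the human's own terms of service.
HUMAN_TERMS = {
    "allowed_purposes": {"service_delivery"},
    "forbid_third_party_sharing": True,
}

def evaluate_request(request, terms=HUMAN_TERMS):
    """Return True only if the site's request honors the human's terms."""
    if request.get("purpose") not in terms["allowed_purposes"]:
        return False
    if terms["forbid_third_party_sharing"] and request.get(
            "shares_with_third_parties"):
        return False
    return True
```

A refusal here would be the assistant “demanding that the client be left alone”; an approval releases only what the human’s terms permit.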

This virtual zone of trust and accountability can stretch into the offline space. Today, companies and governments are embedding billions of sensors in smart speakers, microphones, cameras and wearables to extract and act on our personal data, including information on our location, facial expressions, or physical state. (In 2016, 325 million wearables were already in circulation.) A personal AI assistant can actively prevent these devices from unapproved surveillance and extraction of data. Instead, it blocks signals or negotiates with the sensor provider, fortifying agency that would otherwise not be possible.

This concept is gaining traction. Stanford University is working on a virtual assistant called Almond, which retains a human’s personal information, thus reducing dependence on big platforms for services. The US engineering professional association, the Institute of Electrical and Electronics Engineers (IEEE), recommends a proxy, or “trusted services that can . . . act on your behalf . . . a proactive algorithmic tool honoring their terms and conditions.”

In fact, work is already under way at the IEEE to develop standards for personal AI assistants through its P7006 working group. Precursors that can help build in interoperability and portability include US Federal Communications Commission concepts developed in the 1980s and 1990s to encourage telecommunications competition.

A standardized API, perhaps evolving into personal AI, is the first step toward an interconnected infrastructure controlled by citizen-humans, not advertisers.


Democratizing AI: Serving humans on the edge

The problems related to today’s commercialized data ecosystem are well-known and will evolve and grow. Clearly, an ecosystem based on the wrong premise is not sustainable, and a radical change is needed. Infrastructure, services and interfaces that are more responsible and human-centered are required. Our most urgent task is to translate these values into tangible, practical solutions.

Our proposed ecosystem shifts the power from platforms and advertisers at the core of the web to humans and other stakeholders at the edge of the network. Result: a more sustainable, human-centric ecosystem in which people can reclaim control over their data and their digital lives. And not just data stored or generated about them, but, more importantly, control over how interpretation and application of their data influence their and their peers’ life chances and life choices.

This paradigm shift also opens the online world to a new set of actors and roles, often pursuing loftier goals than transactions or profits. Stakeholders might include nonprofits, B corporations, public service providers and data trusts. Delivering value without bundling it with commercial influence, such as by targeting societal problems through a nonprofit or public model, is also possible.

By keeping and aggregating our personal data under our control, we can solve our own challenges and tell our own stories—on our own terms—and reach others with what we need. Our data can be used to deliver social value and ethical services that are free from commercial ends and hidden influence. Such options are difficult if not impossible in a wholly privatized, profit-driven ecosystem. But they can emerge in an ecosystem redesigned to serve our self-defined individual and societal purposes.

There are of course challenges. They include establishing communications protocols, API structures and new design (user-friendly interfaces; rules-based
settings) and governance structures such as data trusts. But those obstacles are surmountable. When complete, this virtual infrastructure will transition power over data from centralized platforms back to humans.

Such solutions are not yet mature enough to be implemented, or even prototyped. But we must begin. Please join us in this endeavor.
