How Do We Ensure “Data for Good” Means Data for All? Consider These Three Principles
Jake Porway

Jake Porway Founder and Executive Director, DataKind

March 05, 2019


We’re living in a time of massive potential to use data science and AI for the greater good, yet the world’s problems only seem to be mounting. Data and algorithms touch nearly every aspect of our daily lives, from the movies and music that we choose, to the news we receive, to how our cities run. Companies have leveraged data and algorithms to maximize their profits, but social organizations like nonprofits, NGOs, and governments still lack the resources to harness this same technology for social good. Moreover, questions are being raised about the responsible use of data and AI in society.

At DataKind, we envision living on a sustainable planet where all have access to their basic human needs and can live lives of equality and prosperity. Over the past eight years, DataKind has worked to create a world where social organizations like nonprofits, NGOs, and governments can boost their impact using the same data technologies that companies use to boost their profits. We’ve helped connect a global network of volunteers to deliver more than $25 million in pro bono data science services to social change organizations worldwide. Top data scientists, hailing from places like Netflix and MIT, have generously donated their time on over 250 DataKind projects, creating algorithms that have helped transport clean water more effectively, informed government policy that protects communities from corruption, and detected crop disease using satellite imagery.

Our vision of data and AI being used ethically and capably for all humanity means that it must be in the hands of all humanity.

With generous support from The Rockefeller Foundation and the Mastercard Center for Inclusive Growth, we’re taking our work one step further. We see a world where all those who fight on the frontlines of social change can get solutions to their data science problems. Right now, however, too many organizations still can’t access the funding, talent, skills, or training to make that happen easily. Nor can this gap be closed solely on the backs of thousands of volunteers; it can close only if we help bridge the divide between those who have data science resources to give and those who can use them.

Our vision of data and AI being used ethically and capably for all humanity means that it must be in the hands of all humanity. That means the only way to succeed in our mission is to see local leaders rise in their own communities. We can’t approach this work without an equity lens. No one group can solve for the varied needs of our global community, not without investing in and working alongside a diverse group of communities.

Want to embrace this vision of data and AI for good but don’t know how to start? Consider these three principles to help you on your journey:

  1. Finding problems is harder than finding solutions: Most organizations understandably can’t articulate their machine learning problems, nor do they know where the data that could drive them lives. Finding good data and machine learning solutions requires iterative conversations about theories of change, design decisions, organizational culture, and data audits to surface the right opportunities. Whether you’re a technologist or a social organization, do not skimp on this exploratory design process.
  2. Collaboration is key: AI is in the service of people, not the other way around. As such, people should be involved in determining what it does. We’ve found the best way to do this is by creating spaces for active collaboration and co-creation between those with the needs (social organizations) and those who know what technology can do (data scientists). In order for these collaborations to work, we focus on maximizing communication by stamping out jargon and making sure all folks approach the work with humility.
  3. Build with, not for: Building on the concept above, diverse viewpoints must be involved in the design, creation, and use of data and AI systems. If you let a single group with one cultural lens build a solution, you run the risk of building something ill-suited to a community’s needs or, worse, harmful to it. This concept is not new, and systems change designers and international development practitioners have long espoused the need for more diverse collaborations, but it bears repeating. We shouldn’t default to handing the design work to those most available, most schooled, or most “expert.” We must actively make space for the people who will interact with the data and technology to be co-creators in this process.
New York City DataDive, June 2018. Photo courtesy of DataKind.

We’ll be tackling some of the topics above during a session at SXSW this year on Friday, March 8, at 3:30 PM CST. If you’ll be in Austin, we’d love to have you join. We’ll explore these areas in an effort to continue to help rebalance the scales of who can use data and AI to create a more equitable world with technology. Our hope is that this is only the starting point of a week filled with thoughtful discussions and lively debates on topics like ethical uses of AI, data literacy, the threat of automation, and tech philanthropy.

We hope you’ll all join us because it’s only when we bring everyone around the table that we can move beyond using AI to drive shareholder value, and instead use AI to drive toward social value.


Check out the other pieces below on our data & tech series leading up to SXSW 2019.

How Much Longer Can We Continue to Overlook the “Power of Local”?
The Gender Gap in Innovation and How to Break the Digital Ceiling
Fusion: Innovation through Integration
