Bellagio Conversations in AI

Jim Guszcza Looks at AI Through the Lens of Human Behavior

Jim Guszcza – a research affiliate at the Center for Advanced Study in the Behavioral Sciences (CASBS) at Stanford University – sees a fundamental disconnect at the heart of AI development between engineers and experts in behavioral science.

Jim’s Bellagio Reflections: “A yearlong project I was leading at CASBS culminated at a 2022 Bellagio convening. This Rockefeller-sponsored project gathered a dream team of academics from computer science, political science, law, philosophy, and behavioral science to articulate the DNA of a new design-focused discipline – one that would enable better collaboration among people from different backgrounds working on AI development.

“A specific idea that came out of that meeting was the need to develop a shared design pattern language to enable these multidisciplinary, multi-stakeholder collaborations. We vetted these and other ideas from our white paper, and when I returned to Bellagio for my residency in March 2023, I drafted the essays to submit for publication. Bellagio offered a precious opportunity to get feedback from a diverse group of very accomplished people.”

Jim believes that treating AI development as an applied social science and ensuring that stakeholders are included, together with computer engineers, as decision-makers would make it easier to both mitigate AI’s harms and realize its societal benefits. He explains why, and how.

We talk about AI almost like a religion. Many people have faith that each new innovation is a step toward autonomous machines with human-level intelligence, but that’s an ideological spin on what’s happening. In reality, it’s about applying large-scale statistical analysis – machine learning – to big data. There is little doubt that this will be massively impactful – maybe comparable to previous general-purpose technologies like the internal combustion engine and the printing press – but “AI” has become a tagline with multiple meanings. For most people, it means something real and specific, like using an algorithm to help a doctor make a better diagnostic decision. Yet some people also use “AI” to connote the speculative extrapolation that recent developments put us on a path to creating a human-like artificial general intelligence – one which will either solve all of our problems or wipe us out. We need to shift our discourse to prioritize socio-technical facts over science fiction.

Prior to CASBS, I’d been a data scientist for 25 years, since before the data science field even had that name. My experience has taught me that practical, real-world AI is best understood as an applied social science, albeit one that uses a lot of computer science and machine learning. AI can’t solve all our problems. It can only generate potentially useful – or potentially harmful – outputs, and it’s still up to humans to use those in ways that create better outcomes. Human-machine collaboration is inherent to the whole process, but the brilliant engineers developing these new systems are mostly doing so in isolation.

“We need a new profession that sits alongside machine learning engineering, one that integrates ideas from the computational, statistical, social, and behavioral sciences.”

Jim Guszcza
Research affiliate at the Center for Advanced Study in the Behavioral Sciences, Stanford University

What’s missing is often discussed in terms of “AI ethics,” but a significant stumbling block in introducing ethics to AI is not just bad intent or a lack of regulation – it’s that the people who build the algorithms and the people who understand their impacts often don’t understand each other. We collectively don’t know how to bridge the gap between high-level ethical considerations and the tangible design specifications that are meaningful to engineers.

My experience as a data scientist has also taught me how difficult it can be to communicate with end-users and other impacted stakeholders, and to reflect their needs when designing algorithmic technologies. This can fail to happen even when everyone involved has good intentions. As a result, AI often reflects the narrow perspectives of its engineers. We’re seeing this with AI systems that deceive, amplify biases, or lend themselves to “off-label” uses, if not outright misuse. I especially worry about the potential of generative AI systems to accelerate the degradation of our knowledge ecosystem, analogous to the way CO2 pumped into the atmosphere degrades our natural ecosystem. Better regulatory guardrails and better ethics are certainly necessary, but they are not sufficient. We also need smarter social norms, better choice architecture, better incentives, and better human-algorithm collaboration designed into AI systems. In other words, harnessing the social and behavioral sciences is no less integral to responsible AI than harnessing machine learning and big data.

My data science experience has also taught me lessons about what I call machine learning’s “first-mile problem” and “last-mile problem.” The first-mile problem is that you can’t just grab whatever data is convenient – regardless of how “big” it is. Rather, you must design the right dataset, and doing so usually involves lots of ethical and domain-specific nuances. Typically it’s more of a social science challenge than a computer science challenge. The last-mile problem is that we ultimately don’t care about the algorithmic output; we care about achieving the right outcome. For example, our ultimate goal is not an accurate medical diagnostic algorithm; it is whether the patient gets better. In every real-world application of AI – from helping doctors make diagnoses, to helping managers make better hiring decisions, to helping social workers make decisions around child support – we need to start thinking through the human-interaction level of this technology more systematically.

So, on the front end, social scientists and other domain experts should typically be involved in helping decide what to optimize, how to design the needed data sets, how to validate and fine-tune the models, and so on. Once we build an algorithm, we often need insights from the social and behavioral sciences to figure out how to integrate its outputs with human decisions, behaviors, and workflows. For example, it’s great that ChatGPT can predict the next words in a sentence based on a prompt, but what we ultimately want is better communication. If we’re getting a lot of misinformation and bland pastiche instead, what we have is “artificial stupidity” – not artificial intelligence.

There’s a serious need to move beyond the status quo, in which small groups of elite engineers hold a huge amount of decision-making power. We need to distribute that decision-making power more broadly – not only to domain experts and social and behavioral scientists, but also to representative stakeholders who possess crucial local knowledge of, and appreciation for, community-specific values and perspectives. So much of the rhetoric around artificial general intelligence points us away from these crucial issues. I believe that the policy community needs to step up and use its muscle to ensure that more social scientists, end-users, and stakeholders are involved in the design process of these technologies.

Ideally, we want to create processes in which computers do what computers are good at and that better enable humans to do what humans are good at. A good human-computer partnership will be one where computers compensate for natural limitations of human cognition, while humans compensate for the limitations of algorithms. This would be a huge breakthrough – but right now, all the focus is just on improving the algorithms. If we get the collaboration processes right, we’ll have something better than either the machines or the humans could create alone.


Explore more

Prior to CASBS, Jim was a professor at the University of Wisconsin-Madison business school and Deloitte Consulting’s inaugural US Chief Data Scientist. He holds a Ph.D. in Philosophy from the University of Chicago and is a Fellow of the Casualty Actuarial Society. At CASBS, he led a Rockefeller Foundation-sponsored initiative titled “Towards a Theory of AI in Practice,” which culminated in a 2022 convening at Bellagio. He returned to Bellagio for a 2023 residency titled “Advancing a Multidisciplinary Field of AI Practice.”

For more information about Jim’s work, visit his Deloitte profile or his CASBS profile, which also features the CASBS program “Towards a Theory of AI Practice.”
