Event Recap / November 2023

Panel Recap: “Sustainable AI: Ethical Applications for Good”

On October 19, the Center for Long-Term Cybersecurity (CLTC) and the AI Security Initiative hosted an online panel exploring “Sustainable AI,” an innovative approach to unlocking the potential benefits of artificial intelligence. The panel featured Lydia Gaby, Principal at HR&A Advisors; Krishnaram Kenthapadi, Chief AI Officer for Fiddler AI; and Jared Lewis, Head of Policy for Dentsu Good. The event was presented in partnership with the UC Berkeley Algorithmic Fairness and Opacity Group (AFOG) and the CITRIS Policy Lab.

The panel was co-hosted by Jessica Newman, director of CLTC’s AI Security Initiative, and Brandie Nonnecke, director of the CITRIS Policy Lab, Associate Research Professor at the Goldman School of Public Policy, and faculty co-director of the Berkeley Center for Law & Technology at Berkeley Law. Newman and Nonnecke are also co-directors of the UC Berkeley AI Policy Hub.

The Need to Manage the Harms of AI

Jared Lewis kicked off the panel with a brief overview of the core challenge facing policymakers and other decision-makers in an era of rapidly advancing AI technologies. “The problem is that the AI rate of adoption is much steeper than other technologies,” Lewis said. “AI presents a range of risks and challenges. And so as we think about the best way to use this technology for society and for industry, we have to also weigh that against the range of risks that are coupled with the use of that technology.”

Lydia Gaby explained that she is a strategic planner and a consultant focused on economic development, so she is focused on “how we can improve quality of life from a socioeconomic perspective.” She noted that the rise of AI could lead to loss of jobs, particularly in the “knowledge economy.”

“Without really careful study and policy, the use of AI will only exacerbate the socioeconomic divide,” Gaby said. “What that means from an economic sense is that a small number of elite staff… who have the ability to use AI, will have control over those kinds of systems. A large portion of the population would be locked out of those jobs. And we’ve never really been good as a country, or as a world, at restructuring systemically our economies to support those kinds of transitions, to support those who are no longer able to serve those job functions and need to be involved elsewhere in the economy.”

Part of the challenge is that AI-based decisions cannot be trusted to adhere to human values, Gaby said. “There’s something that AI lacks, which is the power of nuanced judgment that’s based in the human experience,” Gaby said. “Even while AI can process data and information, and it can make recommendations, it doesn’t understand the essence of human behavior, culture, emotions, and all the variables that shape our world, and that shape our economies…. As organizations increasingly lean on AI for decision-making, we risk a world where our decisions lack that human judgment and human observability. The question is, can AI truly understand the impacts of its recommendations on our societal fabric — on health and well-being, on the subtleties of our cultural context? No, not really.”

AI-based systems also may be biased, Gaby explained, as they are trained on data with inherent biases. “Oftentimes, people perceive AI as something that is objective or neutral,” Gaby said. “But these kinds of systems are trained on the data that we give them. And the data that we give them is often riddled with historical bias and inequality in and of itself. When that’s the case, AI systems can, will, and do perpetuate and amplify systemic bias.”

Kenthapadi stressed that AI systems have significant potential benefits. “There are lots of positive aspects with AI tools,” he said. “The challenge is, how do we ensure that these kinds of tools are used to augment human ability, as opposed to displace humans?… How do we as a society create the appropriate incentives, appropriate regulations…? I think that’s going to be a big challenge as we start having more and more of these kinds of tools.”

He noted that governments may need to rethink their economies to reduce inequality that results from widespread adoption of AI-based technologies. Countries like the US and China “could afford to change the taxation policies to heavily tax even those companies which are benefiting disproportionately,” Kenthapadi said. “These are the kinds of challenges that we are going to grapple with — either at an entire-society level or even at a per-country level. How do we create incentives for more equitable development across all regions?”

A Values-Centered Approach: Sustainable AI

Lewis explained that, following conversations with a wide range of stakeholders, he and his colleagues concluded that “we should start evolving the conversation from risk to values.” “We came up with this proposition, which is sustainable AI,” he said.

“Sustainable AI is about merging social values with ethical AI design to accelerate practical applications,” Lewis said. “The purpose is to center the long-term prosperity of people and the planet in the evolving human-to-machine interaction. What this means is that, as we choose where to apply this technology, as we choose what problems to solve, we don’t just focus exclusively on those that bring economic value or efficiency to organizations, but those that bring other forms of social and economic values to society.” He noted that the UN Sustainable Development Goals help define the principles that can guide decision-making.

Gaby explained that there are “three simple questions” that should be asked about any AI technology. “Without a clear yes to all three of these questions, we really have to ask ourselves if this is the best and most responsible use of AI, or sustainable use of AI,” she said. The questions, she explained, are: does the application give users agency in choosing when, how, and where to engage with the technology? Does the application seek to solve a fundamental human or environmental challenge? And does the application avoid causing social or environmental risk or harm?

“Answering these questions is not a simple task,” Gaby said. “A lot of the work is actually designing new methods to see and feel the risk and harm associated with these kinds of applications and bringing that information closer to those who are designing and adopting the technology so that they can make more informed decisions.”

Gaby added that a frame to think about this value proposition already exists: the social value proposition, which has led companies to increasingly understand the broader social impacts of their products and services. “On one hand, we have global leadership that exists around where corporations and organizations should target investment in order to create opportunities that both are prosperous for organizations and for society at large,” she said. “On the other hand, the growth of investment in AI has ballooned. If we think about the way that we’re investing in AI through the lens of these goals, we might be able to achieve a balanced growth in the technology.”

Many companies and governments are already aligned around the need to address challenges like climate change, decent jobs, economic growth, and health and well-being, Gaby explained, so “we suggest that corporate social responsibility, environmental and social governance, and sustainability provide a frame to think about where to target AI development that both serves organizations in the private sector, as well as society at large.”

Moving toward that kind of frame requires a few key steps, Gaby said: uplifting applications that improve human prosperity, and identifying ways that these technologies can improve quality of life and not just the bottom line; creating a public understanding of AI mechanics to empower public choice in the appropriate uses of AI; and creating frameworks that prevent risky and harmful applications of AI.

Lewis said that he and his colleagues have identified five interventions that could be applied immediately to help build this sustainability frame. First, technology should provide transparency into what generative AI and other applications are doing. The second intervention is licensing and credentialing. “We could think of no other industry with such broad potentially harmful and positive externalities that doesn’t have some form of licensing and credentialing,” Lewis said.

Third is an AI use index that would allow easy search of existing AI technologies, and “understanding through this index what are the appropriate uses and what uses might pose a range of risks,” he explained, while the fourth intervention is a “layperson’s guide to AI” that would make understanding AI more accessible.

“What we found in having a range of discussions is that the average person cannot talk about AI even in an intuitive sense from a technical perspective,” Lewis explained. “That creates a challenge, because when you’re asking regulators, you’re asking policymakers, or you’re asking individuals and decision-makers within organizations to choose which uses are appropriate, or to choose where to invest or to apply, their lack of ability to explain the mechanics of the technology even in an intuitive sense is an inherent risk.”

The final intervention, Lewis said, is an “AI sustainability stamp” that would help consumers and users of AI technology to “very efficiently and effectively identify whether or not a particular application has reached a certain threshold of sustainability…. What are the risks? What are the harms? And have those concerns been adopted into the design and use of the technology?”

Lewis and Gaby both stressed that the media industry, including advertisers and film and video content producers, can have an outsized influence in raising awareness about the potential harms of AI. “Media creates and reinforces its own narratives about how the technology could be used, how it is being used, and where the dangers and risks are,” Gaby said.

For more information about the Sustainable AI initiative, contact Jared Lewis at JaredLewis@berkeley.edu.

Watch the full panel above or on YouTube.