A CLTC–UC Berkeley and India AI Impact Summit 2026 Pre-Summit Discussion
This recap was based on an event summary originally authored by Deepika Raman.
As part of UC Berkeley’s Tech Policy Week, the Center for Long-Term Cybersecurity (CLTC) hosted a panel on “Establishing AI Risk Thresholds and Red Lines: A Critical Global Policy Priority,” moderated by Jessica Newman, Director of CLTC’s AI Security Initiative.
Recognized as an official Pre-Summit event of the India AI Impact Summit 2026 (a global gathering to be held in New Delhi on 19-20 February 2026), the discussion convened experts from government, civil society, and research institutions to explore how the world is defining and enforcing limits on the development and use of high-risk AI systems.
The panel began with a presentation by Deepika Raman, a researcher with CLTC’s AI Security Initiative, who set the stage by defining AI risk thresholds and red lines and explaining why they have become so central to today’s AI governance debates. Raman was the lead researcher and co-author of the CLTC White Paper, “Intolerable Risk Threshold Recommendations for Artificial Intelligence: Key Principles, Considerations, and Case Studies to Inform Industry and Government for Frontier AI Safety Frameworks.”
The conversation is timely, Raman explained, because of the “AI arms race,” as “tech companies are investing billions and fighting to claim that they have the title of the world’s most advanced AI to keep investors happy.”
As a result, “model capabilities are being pushed to the frontier, and some model releases…have already triggered all the safeguards and thresholds in the industry frameworks,” Raman said. “But because these are defined in qualitative ways, they have been interpreted differently. With every model release, we’re seeing companies drop certain risk categories altogether from their frameworks. This demonstrates how unilateral these decisions are and why there is need for further advocacy and research around these governance mechanisms.”
Reframing Safety: From Systems to Society
Opening the discussion, Sarah Myers West, Co-Executive Director of the AI Now Institute, argued that governance should be judged by whether it protects people, not by whether it renders AI systems “safe” in the abstract. “The goal isn’t to make AI systems safe. It’s to make people safe,” she said.
Myers West emphasized that certain systems, especially those designed for surveillance or military targeting, have demonstrable failure modes, are inherently unsafe, and cannot be rendered benign through technical optimization. The focus, she noted, should be on institutional accountability, not only on engineering safeguards.
“Red lines are designed to constrain the behavior of developers,” she said. “We need mechanisms with real leverage in an environment dominated by powerful firms.”
Drawing lessons from Cold War–era risk management frameworks, Myers West warned against the current “arms race” dynamic that is diluting safety thresholds and pushing frontier capabilities without the public deliberation that once accompanied existential technologies. (To explore this topic in greater detail, Myers West’s recent paper, co-authored with Heidy Khlaaf, is available here.)
AI Governance in Practice: The European Model
Leonie Koessler, Policy Officer at the European AI Office, presented the emerging regulatory architecture under the EU AI Act, which operationalizes “systemic risk tiers” through enforceable safety and security frameworks.
Koessler explained that developers of general-purpose models must predefine “if–then” commitments: if a model reaches a specified capability tier, it must trigger the corresponding mitigations. She described how the EU AI Office intends to verify whether these frameworks are measurable, appropriate, and continuously updated to reflect evolving model capabilities. Such a process, she added, gives companies some flexibility in defining risk thresholds, but ultimately places them under regulatory supervision and enforcement, ensuring that high-risk development decisions are accompanied by accountability mechanisms.
Drawing Global AI Red Lines
Niki Iliadis, the Director of Global AI Governance at The Future Society, described the ongoing effort to build an international consensus around AI red lines. “Red lines are not supposed to be about panic, they’re more about prevention […] and they’re definitely not about slowing down innovation,” she said.
She highlighted the Global Call for AI Red Lines, which has been endorsed by over 90 organizations and 1,500 individuals, including Nobel laureates and former heads of state. These red lines, she explained, are intended to be enforceable thresholds for unacceptable AI use cases, ranging from electoral interference and surveillance to biological weapons design.
She further outlined three pillars of the coalition’s work: expanding global participation, developing diplomatic pathways toward international agreements, and supporting domain-specific research to define concrete prohibitions.
“Red lines are about protecting society,” Iliadis said. “The alternative would be to wait for more large-scale harm to actually manifest, and that’s not responsible. The world has drawn red lines before…. The call is to get the international community to do it again for AI. The window is narrowing, but this is an opportunity for governments to come together to find that lowest common denominator of what isn’t working, and then to understand that some risks from AI are just too great to take, and that we need some lines that must never be crossed.”
Accelerating Adoption of AI Red Lines
Marc Rotenberg, Executive Director and Founder of the Center for AI and Digital Policy, traced how the global AI governance ecosystem has evolved from broad ethical principles to legally binding prohibitions. “The right to prohibition is as important as the right to innovation,” Rotenberg said. “If we cannot maintain meaningful human control of an AI system, we have an affirmative obligation to terminate it.”
He linked this termination obligation to milestones such as the UNESCO AI Ethics Recommendation, the Council of Europe AI Treaty, and the Hiroshima AI Process (G7), all of which have articulated categories of prohibited AI practices, including social scoring and mass surveillance. He emphasized that meaningful oversight will depend on independent supervisory authorities with powers to inspect, subpoena, and assess models, without which enforcement would remain weak.
“I would not underestimate the amount of work ahead,” Rotenberg said. “It is very difficult to build broad political consensus, particularly for strong statements that will require implementation, and also these discussions are taking place in the context of multiple international organizations and governance forums. You may get a red line in one setting that looks very different from a red line in another setting, but that’s okay…. Keep in mind the various ways in which red lines can be adopted. We don’t need a harmonized approach. What we do need is an effective and meaningful approach.”
Transparency and Accountability in Model Thresholds
Finally, Nada Madkour, a Non-Resident Research Fellow for the AI Security Initiative at CLTC, examined how leading AI companies are internally managing, and sometimes crossing, dangerous capability thresholds.
She cited examples such as Gemini 2.5 Pro and Claude 4, whose model cards referenced elevated cyber and biosecurity capabilities, but with little public clarity on how those thresholds were defined or validated. Madkour called for quantitative, interpretable, and standardized risk levels, including “living” model cards that evolve as new risks are discovered post-deployment.
“Transparency must be meaningful for society but not dangerous for safety,” she said. “That’s the balance we need to draw.”
Risk Thresholds as Democratic Instruments
Across the discussion, panelists converged on a key insight: AI risk thresholds are not merely technical safeguards; they are democratic instruments. They embody society’s collective decision on what levels of risk are tolerable, who bears responsibility, and how accountability is enforced. “We’ve drawn red lines before, for chemical weapons, for nuclear testing. We can and must do it again for AI,” Iliadis said.
As governments prepare for the India AI Impact Summit 2026, the panel underscored an urgent priority: transforming normative commitments into enforceable, measurable, and participatory governance frameworks that ensure AI development remains both safe and democratic. Following the event, Raman summarized key recommendations in relation to the Summit’s “sutras” (foundational pillars defining how AI can be harnessed through multilateral cooperation for collective benefit) and “chakras” (areas of multilateral cooperation designed to channel collective energy toward holistic societal transformation).
Recommendations Provided to the India AI Impact Summit:
- Institutionalize red lines to advance the “Safe and Trusted AI” chakra: Implement and enforce internationally recognized AI red lines prohibiting unacceptable uses of AI systems, such as autonomous weaponization, mass surveillance, and electoral interference.
- Enforce thresholds through independent oversight to strengthen the “Resilience” chakra: Mandate predefined “if–then” mitigation protocols and empower independent evaluation and oversight mechanisms with authority to audit and investigate AI models. Widely disseminate standardized reporting frameworks for AI capability thresholds and post-deployment risks, ensuring comparability and transparency across jurisdictions.
- Recommend verifiable thresholds under the “Safe and Trusted AI” chakra: Develop outcome-oriented risk thresholds tied to human rights and public safety. Incorporate accountability benchmarks that move beyond capability-based metrics to ensure AI systems remain within tolerable risk limits.
Collectively, the event strengthened momentum toward translating global normative commitments (such as those expressed in the Global Call for AI Red Lines, the Frontier AI Safety Commitments, and the Seoul Ministerial Statement) into enforceable governance frameworks. It also reinforced the role of the India AI Impact Summit 2026 as a platform for operationalizing these priorities through inclusive, evidence-based, and globally coordinated policy design.
