From Politico and The Hill to APM’s Marketplace, the Center for Long-Term Cybersecurity has been in the news in recent weeks. Here’s a round-up of some of our recent media hits.
CLTC Grantees in Defense One, Politico’s Morning Cybersecurity
In a recent CLTC white paper, “Cyber Industrial Policy in an Era of Strategic Competition,” Vinod K. Aggarwal, Professor of Political Science and Affiliated Professor at the Haas School of Business, and Andrew W. Reddie, a Ph.D. candidate in the Department of Political Science, argued that cybersecurity is well-suited for government intervention: cyberattacks pose a significant security and economic problem for governments and firms, yet growth in the cybersecurity industry has been held back by labor shortages and other challenges. The report was mentioned in Politico’s May 21 Morning Cybersecurity brief, and the authors published a related op-ed, “The U.S. Needs an Industrial Policy for Cybersecurity,” in Defense One.
Cussins Newman Pens Op-Ed on OECD AI Principles in The Hill
On May 22, the Organisation for Economic Co-operation and Development (OECD) and its partner countries—42 nations in all—adopted the world’s first intergovernmental policy guidelines for artificial intelligence (AI), which seek to promote AI that is safe, fair, and trustworthy, and that respects human rights and democratic values. In a recent op-ed for The Hill, CLTC Research Fellow Jessica Cussins Newman argued that the OECD principles represent a new core, or “global reference point,” for trustworthy AI and AI governance.
In her piece, “3 reasons you should pay attention to the OECD AI principles,” Cussins Newman noted that the values laid out in the principles—including global cooperation and coordination, reinforcement of human rights, and a comprehensive and future-oriented vision—have the potential to stand the test of time in a rapidly accelerating field. “Even if only a few of the most powerful countries in the OECD were to uphold these principles with nation-state policies,” she wrote, “they may have enough influence on the multinational corporations leading the development of AI to set an effective ethical and safe standard around the world.”
The op-ed in The Hill has been shared more than 2,000 times, and it caught the attention of White House officials, including Ivanka Trump, an adviser to President Trump, who tweeted a link to the piece to her more than 6.5 million followers.
CLTC and CITRIS Policy Lab Team Up for Blog on California AI Policy
Jessica Cussins Newman also teamed up with Brandie Nonnecke, Founding Director of the CITRIS Policy Lab, to author a blog post arguing that, while federal and international standards around AI are important, “the vanguard of AI policy is taking place more locally.” Written following an AI policy briefing at the California State Capitol in Sacramento, the blog emphasized California’s unique potential to be a world leader in responsible AI development and discussed priorities for California policymakers to consider as they develop statewide AI strategies and legislation. “It seems increasingly clear that state governments will play an essential role in shaping the future of AI,” wrote Nonnecke and Cussins Newman. “Given its unique position as the home of so many leading AI companies and research labs, California has an opportunity and responsibility to lead the way in establishing effective standards and oversight that ensures AI systems are developed and deployed for the benefit of all.” Read the article.
Cybersecurity Futures 2025 in South China Morning Post
The South China Morning Post reported on CLTC’s “Cybersecurity Futures 2025” project after CLTC Faculty Director Steve Weber delivered a presentation at the Internet Economy Summit, held in Hong Kong in April. “Weber discussed the project’s findings, which he says will help inform decision-makers and encourage greater international cooperation,” wrote the article’s author, Carli Ratcliff, who also cited the Cybersecurity Futures 2025 report directly: “The first generations of digital technology came with (possibly outsized) idealism – for wealth creation, safety, efficiency, peace, happiness and more…. It was inevitable that those expectations would be adjusted over time.” The Cybersecurity Futures 2025 project was also covered in a blog post about the RSA Conference.
CLTC Research Fellow Publishes Op-Ed on AI Norms and Policy in The Hill
Cussins Newman also wrote an op-ed for The Hill arguing that it is important for the United States to maintain influence over the norms and values that shape the development and use of AI around the world. “The U.S. federal government has notably altered its AI strategy to include the importance of protecting values such as civil liberties, privacy, and technical standards to support safe AI development,” Cussins Newman wrote in “The New AI Competition Is Over Norms.” “This helps align the nation with its allies and is a step in the right direction. However, a more proactive stance is needed.” Read the op-ed.
Improving Cybersecurity Awareness in Underserved Populations
Politico, MeriTalk, StateScoop, NextGov, and HelpNet Security were among the media outlets that covered “Improving Cybersecurity Awareness in Underserved Populations,” a recently released CLTC white paper by Ahmad Sultan that highlights how underserved San Francisco residents—including low-income individuals, seniors, and foreign-language speakers—face higher-than-average risks of falling victim to cyberattacks. The report is intended to help officials in other cities better recognize how underserved populations may be at risk, and it provides recommendations for city-led training programs and other initiatives that could help mitigate these cybersecurity challenges.
Steve Weber Talks Quantum Computing on Marketplace’s “Make Me Smart” Podcast
On a recent episode of Make Me Smart, Marketplace’s Kai Ryssdal and Molly Wood invited CLTC Faculty Director Steve Weber to discuss the potential implications of quantum computing. “If you think about the jump of capabilities that went from the abacus to the iPhone, you’re talking about that order of magnitude of change when you move from conventional computing to quantum computing,” said Weber. He provided a brief overview of quantum theory and explained why building a working quantum computer is so difficult. He also touched on CLTC’s Cybersecurity Futures 2025 work, asking what could go wrong if quantum technology were to fall into the wrong hands. Listen to the podcast or watch the recording.