CLTC 2020 Research Exchange | Day 3: Long-Term Security Implications of AI/ML systems

December 10, 2020 10:00 AM - 12:00 PM

Please join us for the final event of our 2020 Research Exchange series!

When: Thursday, December 10, 10:00 AM – 12:00 PM PT (hosted virtually)

What: “Long-Term Security Implications of AI/ML systems”

From “deepfakes” to “smart cities,” machine learning (ML) and artificial intelligence (AI) technologies are rapidly reshaping modern society. The choices we make today about AI/ML will have an enduring impact for decades to come. How can leaders across technical, institutional, and policy domains support trustworthy development and deployment of AI systems today and into the future? What are the key decision points that will have the greatest impact on the trajectory of AI security? This virtual conference will feature research talks from a diverse group of UC Berkeley researchers who are studying AI and ML, and developing tools and methods to help keep society more secure as technology continues to advance.

The Research Exchange features the work of a diverse group of CLTC-affiliated researchers who are pushing the boundaries of technology, social science, and the humanities to positively influence how individuals, organizations, and governments think about and deal with cybersecurity issues. More information on the event will be shared with event registrants in the coming weeks.

“Long-Term Security Implications of AI/ML systems” Presentations:

  • Jeremy Gordon, Covert Embodied Choice: Decision-Making, VR, and the Limits of Privacy Under Biometric Surveillance
  • N. Benjamin Erichson, Novel Metrics for Robust Machine Learning
  • David Wagner, Secure Machine Learning
  • Alexei Efros, Detecting Images Generated by Neural Networks
  • Ruoxi Jia, What Is My Data Worth? Towards a Principled and Practical Approach for Data Valuation
  • Inderpal Kaur, Hands-on Teaching Tools for Identifying and Addressing Machine Learning Bias
  • Nicholas Carlini, (Non-) Private Machine Learning
  • Rachel Azafrani (Microsoft) and Priyanka Saxena (Deloitte Consulting), Implementing Trustworthy AI for the Long Term: A View from the Field

Accessibility accommodations

If you require an accommodation for effective communication (ASL interpreting, CART captioning, alternative media formats, etc.) or information about mobility access in order to fully participate in this event, please contact Rachel Wesen at cltcevents@berkeley.edu as soon as possible, and at least 7–10 days before the event.
