News / May 2022

Q&A with Rafi Lazerson, Alternative Digital Futures Researcher

In January 2022, CLTC put out a call for proposals for UC Berkeley graduate students working on or interested in exploring new imaginaries for the futures of cybersecurity and/or AI security. We sought students from fields not often represented in cybersecurity or AI who have ideas about how the future of digital security could or should be reimagined to be more inclusive of diverse perspectives.

Rafi Lazerson, Master of Public Affairs student, Goldman School of Public Policy

We recently sat down with the author of one of the successful proposals, Rafi Lazerson, a student in the Master of Public Affairs program at the UC Berkeley Goldman School of Public Policy (GSPP), to discuss his project, which focuses on inclusive cybersecurity and AI security futures in the Metaverse.

Rafi is an MPA candidate examining evidence-based approaches to mitigating harms in emergent technologies. He came to the GSPP after working at the Anti-Defamation League (ADL) on issues of online hate, and earlier as CMO of an online marketplace. At Berkeley, he takes an interdisciplinary lens on tech policy issues, dividing his coursework among the Goldman School of Public Policy, the School of Information, and the Haas School of Business. He is conducting his MPA capstone for the Global Engagement Center at the U.S. Department of State, on disinformation and extremism on encrypted messaging applications. Rafi is currently a Research Affiliate at the Center for Security in Politics at UC Berkeley.

(Responses have been lightly edited for length and clarity.)

Tell us about your Alternative Digital Future. How would you describe it at a high level, and how did you conceive of this idea?

My project explores a digital future where Social VR platforms proactively include comprehensive community standards that disincentivize harassment and foster inclusive user interactions.

Although some social media platforms have developed community standards comprehensive enough to be effective, those standards took many years to materialize and were largely reactive. With the rise of Social VR, a new form of online social interaction in the Metaverse, my project imagines a future that diverges from this past: one where platforms prioritize and proactively develop comprehensive community standards at the early stage of adoption.

I specifically focus on how community standards in Social VR need to account for user interactions that are primarily synchronous and conduct-based. For example, on Horizon Worlds, a Meta Social VR platform, users interact primarily via real-time speech and avatar movements. This is a stark contrast to the primarily asynchronous, content-based interactions on 2-D social media, like Facebook, where users interact through posts or comments that are often viewed long after the content was added.

My project asks: How have major Social VR platforms adapted and developed community standards to account for conduct-based interactions? Looking ahead, what steps can we take to develop comprehensive community standards in Social VR?

What drew you to this project?

The immersiveness of VR affords connection between users separated by physical distance, in experiences that feel real and present. A grandparent living in one country can meet their grandchild living in another at a VR theme park and feel present together. Attending an international conference in VR does not require a flight, but it can still foster the spontaneous interactions that in-person conferences offer.

At the same time, my previous career experiences have taught me that proactive, inclusive policies are needed to ensure that new technologies benefit everyone, rather than deepen existing inequalities against those already marginalized. As CMO at an e-commerce company, I saw first-hand how prevalent anti-Chinese comments were on our social media, and their prevalence only increased with the onset of the pandemic. In my work at ADL, I gained an in-depth understanding of how new technologies are exploited for hate and harassment. And in supporting individuals and organizations who experienced online hate and harassment, I saw how crucial community standards are in giving users the ability to report abuse and appeal to the platform for moderation.

More recently, in the fantastic Haas course Designing Tech for Good, I was part of a student consulting group that worked with Electronic Arts. In developing ideas to incentivize positive behavior in online gaming, I gained an appreciation for creative approaches that could carry over to Social VR.

How does your background in public policy offer a unique lens on cybersecurity or AI security?

My academic and professional experiences in technology policy lead me to examine the implicit values embedded in a policy or product feature when I analyze cybersecurity and AI security issues. At the Goldman School, we learned that even the absence of a policy is itself a policy, one that perpetuates the status quo. Similarly, in my coursework at the School of Information, we explored how design is never value-neutral: any particular design inevitably incorporates the values of the designers, the data, and the processes that went into constructing it.

These simple but discerning lessons shed light on many of the harms that emerged in digital pasts, as well as some of the ways to reimagine a more inclusive alternative digital future. What or whose values are in this policy or product? Who was included in the design process? Who are the stakeholders? Who will this policy or product impact, and how?

What is the intended result of your Alternative Digital Future project? What do you hope to achieve with it?

I hope that my project will shed light on the significant and time-sensitive need for proactively developing comprehensive community standards that address immersive, conduct-based interactions in Social VR. I also hope to provide a snapshot of whether and how some of the largest Social VR platforms are tackling the shift from content-based to conduct-based interactions, and to offer initial directions for how platforms, researchers, and policymakers can further approach this issue.

Did you encounter any challenges in developing your Alternative Digital Future, or were there elements that were particularly difficult to conceive?

In assessing the current state of community standards in Social VR, I found that some platforms disperse their community standards across several web pages, with different combinations of policies applying to different products. For example, the community standards that apply in Horizon Worlds include the Horizon Worlds Prohibited Content Policy, the Oculus Code of Conduct in VR Policy, and the Facebook Community Standards. This dispersion is challenging from a research perspective, but it may be even more challenging for users seeking moderation of hate and harassment that occurs in Social VR.

What would you say is the value of thinking about “alternative futures”?

Thinking of “alternative futures” has enabled me to become a more creative researcher, analyst, and designer. Brainstorming an alternative future requires me to step outside of the way things currently are, or the direction they seem to be going. By imagining an alternative future, I am prompted to identify current opportunities that can help actualize the alternative.