CLTC is pleased to announce that Richmond Wong will join the team as a Postdoctoral Researcher. Richmond recently completed his PhD at the UC Berkeley School of Information and has undergraduate degrees in Information Science and Science & Technology Studies from Cornell University. His research focuses on how technology professionals attend to and address ethical issues in their work. His research also aims to develop design-centered methods and approaches to proactively surface ethical issues related to technology (particularly those surrounding social values such as privacy and security). Richmond’s work utilizes qualitative and design-based methods, drawing from human-computer interaction, science & technology studies, and speculative and critical design. We asked Richmond a few questions about his past research and his goals in working with CLTC.
What will you be doing at CLTC?
Broadly, I’ll be working on projects related to privacy, security, and trust. More specifically, one part of my work aims to surface and explore the types of harms and threats that may emerge from new forms of data collection and use, such as data collected by ubiquitous, always-on sensors and “smart” devices. I study these harms and threats in a socially- and human-centered way, asking questions like: who gets disproportionately affected by these harms, and how might individuals, groups, and institutions usefully respond to these threats?
A second part of my research will be studying how the work of addressing privacy is conducted by technology professionals, particularly in light of new regulations such as the GDPR and CCPA. How is responsibility for privacy distributed within companies, and what types of new tools and methods might be useful in advancing this work? Do these laws and regulations provide new language or the social license for concerned technology workers to surface privacy concerns? What kinds of expertise and practices can front-line designers and engineers contribute to these initiatives?
What are you hoping to achieve with your research?
I want my research to help researchers and practitioners foreground interdisciplinary and socially oriented perspectives when addressing social values in design (like privacy, security, and trust). Addressing these values requires not only technical approaches and compliance with law and policy, but also thinking about potential harms and threats early in the design process, and understanding how differently people experience harms related to privacy, security, and trust. For instance, those who face everyday harms due to social inequalities may also be subject to higher risks when it comes to technology-related harms. My research aims to develop practices, methods, and tools that embody these perspectives and can be used by researchers, practitioners, and other stakeholders.
How does this tie into your previous work at UC Berkeley’s School of Information and the BioSENSE research group?
My research at CLTC will build on work that I did at the School of Information. My work with the BioSENSE research group and Center for Technology, Society & Policy involved creating speculative designs, or future scenarios, that emphasized potential privacy, security, and ethical harms that could arise as new sensing technologies are used and adopted in different ways. At CLTC I plan to use similar techniques, but also engage existing and potential users, experts, community groups, and other stakeholders to understand their perspectives on these emerging harms.
The research that I plan to do at CLTC that focuses on technology professionals and organizations builds on my PhD dissertation research, which studied how user experience (UX) professionals at technology companies address social values and ethical issues as part of their work, ranging from diversity and inclusion to accessibility, fairness, and privacy.
Why do you think CLTC’s mission is important?
Fully addressing cybersecurity requires interdisciplinary thinking and approaches, including thinking about security from the perspectives of design, engineering, law & policy, human behavior, and social inclusion and justice. Through its research, training, and outreach, CLTC plays an important role in supporting these multiple perspectives, which are sometimes overlooked, and in demonstrating their importance.
You were part of the team in 2015 that helped CLTC develop our cybersecurity futures scenarios for 2020. How has long-term scenario planning factored into your personal research?
The 2020 cybersecurity futures scenario “Sensorium” highlighted new forms of personal bodily data that could be collected and used. This shaped some of the research I did with the BioSENSE group and other collaborators, studying the potential privacy and ethical harms posed by the collection of heart rate, EEG, and menstrual tracking data in different social and political contexts.
Moreover, I have found scenario planning and similar techniques, such as speculative design, useful in my research as a way to proactively surface ethical questions related to technology development, deployment, and use. Part of my research involves developing new tools for technology practitioners that take this sensibility and allow practitioners to ask these ethical questions in the course of their work.