Event Recap / February 2022

Security & Privacy Risks of the Hybrid Work Environment


Since the start of the COVID-19 pandemic, firms have shifted significant proportions of their workforces to “hybrid” roles, with workers splitting their time between their offices and homes. This economy-wide “reset” of work location and practices offers a rare opportunity to break through chronic habits of personal and organizational behavior that negatively impact privacy and security. A CLTC white paper released in December 2021, “Security and Privacy Risks in an Era of Hybrid Work,” explored some of the key issues related to this shift.

To dig further into the implications of the transition to hybrid and remote work, CLTC convened a panel discussion on January 26, 2022, featuring Will Cooper, Vice President of Litigation and Compliance at Fortinet, an S&P 500 cybersecurity company, and Greg Silberman, Associate General Counsel at Zoom Video Communications, where he leads the Global Privacy Team. Moderated by Ann Cleaveland, CLTC’s Executive Director, the discussion focused on surfacing ideas for how firms and policymakers can work together to help hybrid work live up to its promise.

Below are select excerpts from the conversation (edited for length and content). View the full discussion above or on YouTube.

Cleaveland: What’s the number one aspect of privacy and security in the remote and hybrid workplace that you think people aren’t paying enough attention to?

Silberman: I can think of two. The first is the mixing of personal and professional. For most of us, we’re using a broader range of tools. Based on our internal statistics [at Zoom], the number of people who now know how to blur or upload their virtual background has spiked. People are in their homes, connected to their home network and home printer. You connect to all these technologies, and all of a sudden, that boundary starts to blur. The other is IT resources. For many organizations, such as government agencies or law firms, there is still a good deal of paper mail that comes through. How do you address that in the home environment? Or if you’re working with hardware, how do you deal with physical security? Depending upon your role, you may need a certain amount of isolation or physical security to meet regulatory or compliance guidelines. These are all concerns that we need to look at as we move away from ad hoc emergency procedures that were adopted at the beginning of the pandemic, and toward a more sustainable pattern of behavior for hybrid and remote work.

Cooper: From a more macro perspective, there’s an underestimation of how pervasive the challenges are, and of how much harder things have gotten. When you had 75 to 80 percent of work on a corporate network on premises, and then that shifts to five or 10 percent, it creates a much more complicated network, with a huge web of potential vulnerabilities. And all of that is layered on top of a world where the incentive to build out internet infrastructure is enormous and already outpaces investment in cybersecurity, which is typically looked at as an expense. The magnitude of the vulnerability from the cyber perspective is not appreciated. You see headlines, but the overall global pervasiveness of this challenge is something I don’t think most people fully appreciate.

Cleaveland: The optimistic view — the “amplify the upside” view, which is the stance CLTC likes to take — is that this shift to remote and hybrid work offers an opportunity to rethink longstanding habits and allow companies to do new creative things, in terms of employee training or accelerating a shift away from single-factor authentication. What’s the most creative thing you’ve seen in the field in response to hybrid work that has the potential to break some of these bad habits?

Cooper: The phrase “bad habits” is spot on, and is really important. So much of cybersecurity is human behavior. Obviously, a big part of it is plugging in firewalls or encrypting data. The technological component is important. But at the tip of the spear, a huge part of it is organizational culture — the human dimension and people’s habits, including how attuned and sensitized they are to not clicking on links in emails they don’t recognize, or to not forgetting that extra layer of authentication on their laptop. Focusing on the human part of it is really key. You’re seeing a lot of innovation and creativity on the training side. It’s not just “how is this box going to filter out bad traffic?” but “how can we get our organization to remember to have good cyber hygiene?” We need to change habits, not just innovate technologically.

Training in cybersecurity is going to be similar to training in other areas. It’s about what captures people’s attention. You don’t want it to be too long, because then people space out. You don’t want to require it when people are busy, like at the end of a quarter. The idea of using gaming is a good example of how you can excite people. You can also remind them of all the examples that continue to show up on the front page of the paper, or show what happens when you make mistakes. The SolarWinds hack, which led to a foreign adversary getting into US government agencies, resulted from somebody, somewhere, making a mistake. They didn’t pay attention to their training, and they clicked on that link. Trying to be creative, get people excited, and, frankly, scare people is a good combination. The best training programs do that. There’s some really good innovation coming out of UC Berkeley in that area.

Silberman: I’m a big fan of using humor, rather than fear, because people get beaten down by fear. Last year, we did a contest to see who could come up with the best label for a USB thumb drive that would entice somebody to plug it in. People came up with things like “‘Keeping up with the Kardashians’ Outtakes,” or they labeled it “Bitcoin” or “Confidential.” We’re fortunate to have some people with experience in intelligence services who have given presentations, with live demonstrations, explaining how various forms of social engineering work, and it’s amazing when you see it happening. Watching real demos, rather than a wall of text and a parade of how horrible the threats are, is important. Avoiding training fatigue is critical.

Cleaveland: Do you see an opportunity to level the playing field in terms of privacy across physical, hybrid, and remote locations?

Silberman: There’s an opportunity, but it’s a very complex problem, and it’s early days. I am greatly concerned that worries about productivity and security are going to drive an increase in employee monitoring. Companies that sell the tools and technologies for employee monitoring and “algorithmic management” have seen a tremendous spike in demand in the last 18 months. Use of productivity tools such as identity and access management, device management, performance monitoring, and straight-up surveillance technologies has the potential to intrude on employees at home in ways that were inconceivable five or 10 years ago. It has led to some truly hilarious countermeasures: DIY attempts to level the playing field, everything from stand-ins to video loops to a cottage industry of software that makes it appear as if you are online and doing your job.

We have to ask the question: in remote environments, whose privacy are we protecting? We talk a lot about the privacy of customer data, or the security of our environment or the office, but we need to balance that against the needs of employees, particularly in the United States, where there is not as broad a privacy regime as in other countries. We have to stop some of these bad habits before they expand. You can start by saying it’s about security, and then it takes on a certain amount of function creep, and all of a sudden, it’s used for performance monitoring or applying algorithms to manage people. You can imagine someone looking at an employee’s meeting durations and saying, “we don’t think he’s being as attentive,” or “his keystroke count is down, maybe he’s drinking.” It’s important to us that something that starts out as a security or regulatory compliance feature doesn’t creep into permissive surveillance of employees, or into the permissive application of artificial intelligence and machine learning to evaluate and manage employees.

Cooper: Many areas of technology are growing incredibly fast, while the regulatory environment, at the complete other end of the continuum, moves very, very slowly, and in some respects (including at the federal level) doesn’t move at all. If you have an area where there isn’t potential liability, it’s hard to clamp down on things that might pose risks. At this point, that divide between the regulatory regime and innovation is just growing larger. There are also differences based on socioeconomics. Some people are able to walk up three flights of stairs to their home office, where there are three computer screens, wi-fi pumping, and all the privacy and insularity they want. And then there are other people who don’t have the resources, or they have roommates or kids running around, or they’re working at their kitchen table. That can create a whole lot of friction and challenges for people. That’s something we need to focus on, trying to figure out more ways to level that playing field, because the current situation just exacerbates pre-existing disparities in ways that those of us who have easy access to these technologies might not appreciate.

Cleaveland: Assuming the hybrid work environment is here to stay, what would you like to see from policymakers in the next 18 months that would catalyze real progress on privacy and security?

Silberman: For me, it’s going to be regulation of employee monitoring and algorithmic management. There was a bill recently introduced in the Massachusetts State Legislature that aims to protect workers from non-consensual capture of information or communications within the individual’s home. However, it’s kind of weak, and it doesn’t exist within a larger legal framework for safeguarding the privacy of remote workers, particularly at the federal level. So I could capture your keystrokes, or I could capture what apps you are running, in some cases with security tools that are doing identity and access management. The Georgetown Center on Privacy & Technology released a draft Worker Privacy Act, which proposes a federal law establishing clear prohibitions on the use of data and requiring employers to explain what data they collect and why they’re handling it. But so far, no one’s really driving this. Even if I am able to submit a data subject access request to my employer, how much information am I going to get back? Will I be able to review whether or not I’m being subjected to algorithmic management, or to what degree I am being monitored? We’re going to see employers balancing the risk of losing the trust of their employees against the potentially significant regulatory burden of having to respond if they’re monitoring someone’s actions 24/7.

Cooper: Looking at it from the cybersecurity side, regulation is quite sparse compared to other areas. If you go to Wall Street, big banks will have entire floors of their headquarters filled with regulators. There’s over a century of practice and entire armies of regulators focused on how these companies are handling money and how they’re treating their customers. Then you go a few thousand miles away, to Silicon Valley on the West Coast, where there are these huge troves of data, and the regulatory attention and focus is an unbelievably small fraction of that. I think bringing cybersecurity and data regulations into some sort of parity with other regulated areas is going to be important to actually incentivizing the private sector to put more resources into cybersecurity.

Cleaveland: If you could wave a magic wand and request that academic researchers do one piece of research that you can’t get done in your environment and that would really help your work, what would it be?

Cooper: I would love a deep dive on the impact of this incredible transformation to the digital world on employee morale and employee psychology. It’s still early, and in 10 years the amount of data we will have will obviously be much larger, but I can’t tell where this is going to go. We’re in the middle of the “Great Resignation,” with lots of people quitting their jobs. Is that just because their savings have increased as asset prices have risen lately? Or is it because they’re absolutely miserable? Or because they’re looking at Zoom all day, instead of being in the office? What does all of this mean for the way people feel about being at work? I’m very interested to see how this plays out and how sustainable it is. Does it net out as a positive because people have so much more autonomy? Or does it net out as a negative because people feel like they’re being surveilled, or they feel more isolated? The overall impact on the human experience at work is going to be central to how we look at this transformation.

Silberman: I would love to see research on whether the vast array of performance monitoring and employee surveillance technologies is actually driving security and productivity. We’ve seen studies that show hybrid workers may actually be more productive, but the evidence is very mixed. Companies have adopted privacy-violating technologies and processes. Are those actually good for anything, or are they doing more harm than good? Is the erosion of trust more damaging to the economic and commercial success of an organization than whatever benefits the monitoring provides? Let’s look at this for people who are doing the kinds of work where we have a false quantification of human productivity, and see if we can figure out whether this helps — or hurts.

Watch the full conversation above or on YouTube.