Event Recap / May 2021

Facebook’s Director of Public Policy on “Privacy and the Future of the Ad-Supported Internet”


Apple and Facebook are at the center of a highly consequential debate over the future of the ad-supported Internet, backdropped by Apple’s announcement that the latest version of its mobile operating system will require app developers to get permission to use certain data for ads.

Much more than a normal spat over business models, competition, and privacy, this debate could be a major inflection point for Big Tech and the future of data products, with implications that extend far beyond the two companies at the center of the argument.

On April 28, 2021, CLTC Faculty Director Steven Weber interviewed Steve Satterfield, Facebook’s Director of Public Policy, for a virtual fireside chat to explore the issues at stake. Satterfield leads the team responsible for developing and advocating for Facebook’s perspectives on privacy and data-related regulation. Before joining Facebook in 2014, he was a privacy lawyer at the law firm Covington & Burling, LLP, in Washington, DC.

[Note the questions and answers have been edited for length and content. Also, as Steve Weber notes in his opening comments, CLTC has received financial support from Facebook, along with other private companies, foundations, and individuals. Learn more on our funding policy page.]

Question (Steve Weber): What is the problem Apple is trying to solve with this new policy? What do you think makes sense about their way of solving it, and what does not make sense?

Answer (Steve Satterfield): This starts with a policy that was announced almost a year ago that Apple calls App Tracking Transparency. The first part of this policy requires developers who make their apps available in the Apple App Store to get permission (through a prompt) to access the Identifier for Advertisers (IDFA), an identifier stored on the phone. The IDFA plays the same role that cookies play on the web, but it plays that role in apps. It's primarily there to help advertisers target ads, to select the audience and measure the effectiveness of those ads. Apple is now requiring developers to get permission to access that identifier.

There’s also a part of the policy that restricts an app developer’s ability to use data from any other source outside of the app to show people personalized ads within the app. If you’re an advertiser, and you want to send data to another app to show your ads in that app, you’ve got to get permission for that all over the place.

What does this actually look like? I had an experience where I downloaded an app for a wine store, and I saw a prompt that said, this wine store wants to track me across different apps and websites. Is this app going to be following me around to different apps? No, that’s not what’s going on. What’s going on there is that this store wants to send data about my purchases and my interests to an ad platform so it can show me ads somewhere else. Wherever they’re running their ads, which could be on Facebook or Google or some other ad platform, that platform has got to get permission to use that data. So there’s a kind of a double permission requirement happening here. I should add that it doesn’t matter if the wine store or the ad platform had previously received permission from me to do this. It’s all wiped out by the new policy.

This is going to introduce a ton of friction, and the way the choice is presented is not neutral. There is certainly a thumb on the scale, as Apple is pushing people to say no, which is going to make it harder for that advertiser to advertise. Personalized advertising is extremely valuable, especially to smaller firms. And it’s going to make it harder for the developer of the apps where the ads are showing up to make money through advertising. This is why there’s so much conversation about this. It’s going to make the business model of the internet a little harder to operate.

Q: From your perspective at Facebook, if there is a problem underlying this change — whether it be lack of consent, lack of understanding, or the relationship between those — how would you think about solving that problem more effectively?

What Apple says is that they're trying to bring greater transparency and control to the ways in which data is used for online advertising, which is a perfectly laudable goal. I don't think anybody has a quarrel with what they're trying to accomplish. I think that people do have an issue with how they're going about it, through the "thumb on the scale." There's been a bit of downplaying their own business interests, as well. A world in which it's harder to monetize through ads is a world in which developers have to use other means to monetize, like subscriptions and in-app payments. Apple takes a cut of those subscriptions and in-app payments, which is a separate controversy that you're hearing play out in the litigation with Epic Games. But they do have a business interest in a world in which personalized advertising becomes harder.

As was reported in the Wall Street Journal, Apple is also simultaneously trying to build its own [advertising] business. It happens that their own advertising products work a little better than third-party advertising products in this new world.

What's the alternative? What Facebook thinks is that we should be trying not to throw the baby out with the bathwater. The ad-supported internet is the internet as we know it, and there's value in trying to preserve the business model that gave us the internet. And there are real consequences to moving to a different business model. So how do we do that? One way is a tool that we released, called Off-Facebook Activity, that has some of the same goals as the Apple policy. This is a tool with which a user can see a summary of the data that advertisers are sending to Facebook, and you can disconnect it from your account and say, I don't want that information stored with my account. You can make those choices on a going-forward basis and say, I don't want any of these third parties that are using Facebook's business tools to have information connected to my account.

The point of doing this was to try to provide better transparency and control, but in a way that preserves the benefits. We can't use that information to target the person with advertising, but we do get some of that measurement, and that is an incredibly valuable piece of the service, because it's often hard to know whether your advertising is working or not, especially for businesses with small budgets. It was a solution that tried to achieve privacy protection while sustaining at least part of the benefits of the data flow that supports the advertising ecosystem.

Q: What is the “baby” that would be thrown out with the proverbial bath water in this case?

I think there are two "babies." I talked about preserving the internet as we know it. The internet is still characterized by access to an incredible amount of information for free. If you told me in the 1970s that there would one day be a service where I could enter in search terms and have access to basically all of the world's recorded knowledge in a matter of seconds, I would never have believed you. But even if you had told me that this thing would become a reality, I would have assumed it would only be available to the wealthiest members of society. And of course, that's not true. Search has been brought to us by advertising, and the same thing is true of the Facebook social network. We connect three billion people around the world, without asking people to pay for that. Advertising has brought us these services. It's kind of an astonishing thing to me how quickly we tend to discount that, and many in the news media are very critical of the ad-supported business model. But we should not undervalue subsidized access to content.

For the impact on small businesses, the bottom line is this: small businesses rely disproportionately on personalized advertising. And when you have a policy or a new product that is going to affect personalized advertising, it's going to have a disproportionate impact on them. And the reason for that is simple: small businesses have limited advertising budgets, and they can't afford to waste advertising impressions. If I'm a big business, I can slap an ad up on a billboard, and thousands of people could drive by and not even look at it. Small businesses can't afford that kind of waste. They have to start with a smaller group of people who are already likely to be interested in what they're offering. And that's what personalized advertising brings: it reaches that initially smaller group of people who may be interested in the product or service you're offering.

Q: Facebook is often portrayed as a villain in the debate around privacy. Since you’re at the center of the storm, what does the storm look like? How can you see the company taking a leadership role in moving the ball forward in a way that helps your users and customers?

You asked what it's like to be at the center of the storm, but I think the better metaphor is a fast-flowing river of criticism, and much of it is warranted. Of course there's a lot of emotion around these issues. I would be surprised if there weren't. But we've actually moved a lot in the last couple of years to where we're now trying to address these things largely through regulation. I'm trying to inject facts and evidence into the policy debate. I think that a lot of times the policy debate gets overheated, and if the facts are missing, that's really on us. It's our job to provide the evidence that's appropriate for consideration in policymaking processes.

What does this mean for consumers, and where could Facebook lead? It's a great question. There's plenty of skepticism about Facebook's leadership on privacy. But I'm very proud of the work that we've done on what privacy folks call user data rights, which are traditionally defined in terms of access, correction, deletion, and portability. It's very easy to understand the kinds of information that Facebook has about you, and we have a new tool called Manage Activity, which is a way to bulk-archive or even delete content that you previously shared on Facebook.

And then there's portability, which is near and dear to my heart. Through a partnership with the Data Transfer Project, a group of tech companies that are trying to build interoperable data transmission APIs, we have a new product called "Transfer Your Information," which lets you directly transfer a growing number of types of data to a growing number of partners. What's in it for consumers is innovation around privacy-protective technologies like those. I hope that's what's being incentivized by the regulation.

Q: What could regulators do, from your perspective, to create a more vibrant “race to the top” in this space?

We're going to see how this plays out, arising out of some of the new privacy products. There are now complaints in France and Germany against Apple arising out of the iOS 14 changes that we're talking about. We'll see if there's greater antitrust enforcement around purported privacy measures that may be carried out through anti-competitive means.

To the question of how we can incentivize innovation around privacy, market incentives do exist, and no one would agree with that more than Apple, which is already running ads focused on privacy. They’re the greatest marketers in the history of the world, so they’re probably on to something. How do we create the right incentives? Regulation is not helpful when it comes to innovating around privacy, as it is extremely prescriptive and becomes ossified pretty quickly. It discourages creative solutions to very legitimate concerns about privacy and data collection online.

Q: We were talking before about how valuable it is to understand whether an advertisement achieved what it was supposed to. Could there be a similar standard of proof that privacy actions and policies are achieving what they are supposed to achieve?

The idea of measuring the impact of privacy is fascinating, because the conventional wisdom embodied in the "privacy paradox" is that we build all these choice infrastructures, but many people don't actually care. Policymaking may be trending in a difficult direction when it comes to giving people a bigger stake in managing their information online. It is very much moving in the direction in which Apple is moving, which relies on consent. What could be wrong with asking people's permission for everything? The user-experience ramifications are pretty significant, to the point where we are flooded with requests for permission constantly. We tune out, stop paying attention, and stop managing our data. This is where the academic commentary is actually really strong: you're seeing prominent privacy scholars, many of whom are not lovers of Facebook, become increasingly skeptical about this way of protecting privacy, of just getting people's permission. And you're hearing policymakers say the opposite, which is that this should be the gold standard for how we protect privacy online.

Q: Facebook and its competitors need a higher level of trust from both customers and government, as the days of permissionless innovation are well past us. Everybody is trying to figure out a pathway toward responsible innovation when you need the social license to operate. If there's a deficit in trust, what can companies like Facebook and its competitors do to build up more "credit" in that account so they can do the kinds of experiments that, in the long run, are going to make this industry vibrant?

Those are huge questions. One part of it is unglamorous, and that’s governance. In the seven years I’ve been here, I’ve seen the implementation of governance structures, and this is not an exciting thing, but it is essential. We now have a privacy committee of the board that is driving real change in the way that we approach privacy. We have executive-level accountability for the decisions we make around privacy and the implementation of our privacy program. Part of how we recover from mistakes we’ve made in the past is by doing the unglamorous governance work of managing our decision-making and risk assessment around privacy better.

The other part of it is transparency. There is still a lot of misunderstanding, and we're trying to build more transparency around what people are actually experiencing on Facebook. During last year's election season, you might have assumed that users were just seeing a stream of political content. But it turns out political content was probably only about 6% of what people in the United States saw. I think we need more of that. And it can't all come from Facebook. We need to be opening up more data to researchers, to the academic community that's going to hold us to account, and we need to do that without compromising privacy. It's transparency that is going to help build the trust that you're talking about.


To hear more of the conversation — including responses to questions submitted by the audience — watch above or on YouTube.