Mark Zuckerberg Talks to WIRED About Facebook's Privacy Problem

For the past four days, Facebook has been taken to the woodshed by critics, the stock market, and regulators after it was reported that the data-science firm Cambridge Analytica acquired the data of 50 million Facebook users. Until Wednesday, Mark Zuckerberg had remained silent. On Wednesday afternoon, though, he addressed the problem in a personal Facebook post and laid out some of the solutions he will introduce.

He then gave an interview to WIRED in which he discussed the recent crisis, the mistakes Facebook made, and different models for how the company could be regulated. He also discussed the possibility that another–Russian–shoe could drop. Here is a transcript of that conversation:

Nicholas Thompson: You learned about the Cambridge Analytica breach in late 2015, and you got them to sign a legal document saying the Facebook data they had improperly obtained had been deleted. But in the two years since, there were all kinds of stories in the press that could have made one suspicious and distrustful of them. Why didn't you dig deeper to see if they had misused Facebook data?

Mark Zuckerberg: So in 2015, when we heard from journalists at The Guardian that Aleksandr Kogan seemed to have shared data with Cambridge Analytica and a few other parties, the immediate actions that we took were to ban Kogan's app and to require a legal certification from Kogan and all the other folks who he shared it with. We got those certifications, and Cambridge Analytica had actually told us that they actually hadn't received raw Facebook data at all. It was some kind of derivative data, but they had deleted it and weren't [making] any use of it.

In retrospect, though, I think that what you're pointing out here is one of the biggest mistakes that we made. And that's why the first action that we now need to take is to not only rely on certifications that we've gotten from developers, but [we] actually need to go and do a full investigation of every single app that was operating before we had the more restrictive platform policies–that had access to a lot of data–and for any app that has any suspicious activity, we're going to go in and do a full forensic audit. And any developer who won't sign on for that we're going to kick off the platform. So, yes, I think the short answer to this is that's the step that I think we should have done for Cambridge Analytica, and we're now going to go do it for every developer who is on the platform who had access to a large amount of data before we locked things down in 2014.

NT: OK, great. I did write a piece this week saying I thought that was the main mistake Facebook made.

MZ: The good news here is that the big actions that we needed to take to prevent this from happening today we took three or four years ago. But had we taken them five or six years ago, we wouldn't be right here right now. So I do think that early on in the platform we had this very idealistic vision around how data portability would allow all these different new experiences, and I think the feedback that we've gotten from our community and from the world is that privacy and having the data locked down is more important to people than maybe making it easier to bring more data and have different kinds of experiences. And I think if we'd internalized that sooner and had made these changes that we made in 2014 in, say, 2012 or 2010, then I also think we could have avoided a lot of harm.

NT: And that's a super interesting philosophical change, because what interests me the most about this story is that there are hard tradeoffs in everything. The criticism of Facebook two weeks ago was that you need to be more open with your data, and now it's that certain data needs to be closed off. You can encrypt data more, but if you encrypt data more it makes it less useful. So tell me the other philosophical changes that have been going through your mind during the past 72 hours as you've been digging into this.

MZ: Well that's the big one, but I think that that's been decided pretty clearly at this point. I think the feedback that we've gotten from people–not only in this episode but for years–is that people value having less access to their data above having the ability to more easily bring social experiences with their friends' data to other places. And I don't know, I mean, part of that might be philosophical, it may just be in practice what developers are able to build on the platform, and the practical value exchange, that's certainly been a big one. And I agree. I think at the heart of a lot of these issues we face are tradeoffs between real values that people care about. You know, when you think about issues like fake news or hate speech, right, it's a tradeoff between free speech and free expression and safety and having an informed community. Those are all the challenges that I think we are working to try to navigate as best we can.

NT: So is it safe to assume that, as you went through the process over the past few days, you've been talking about the tradeoffs, looking at a broader range of answers, and you picked four or five of them that are really good, that are solid, that few people are going to dispute? But that there's a whole other suite of changes that are more complicated that we may hear about from you in the next few weeks?

MZ: There are definitely other things that we're thinking about that are longer term. But there's also a lot of nuance on this, right? So there are probably 15 changes that we're making to the platform to further restrict data, and I didn't list them all, because a lot of them are kind of nuanced and hard to explain–so I kind of tried to paint in broad strokes what the issues are, which is, first, going forward, making sure developers can't get access to this kind of data. The good news there is that the most important changes had been made in 2014. But there were also several things that, upon examination, it made sense to do now. And then the other is just that we want to make sure that there aren't other Cambridge Analyticas out there. And if they were able to skate by giving us, say, a fraudulent legal certification, I just think our responsibility to our community is broader than to merely rely on that from a bunch of different actors who might have signals, as you say, of doing suspicious things. So I think our responsibility is to now go look at every single app and to, any time there's anything suspicious, get into more detail and do a full audit of them. Those, I think, are the biggest pieces.

NT: Got it. We're learning a lot every day about Cambridge Analytica, and we're learning what they did. How confident are you that Facebook data didn't get into the hands of Russian operatives–into the Internet Research Agency, or even into other groups that we may not have found yet?

MZ: I can't really say that. I hope that we will know that more certainly after we do an audit. You know, for what it's worth on this, the report in 2015 was that Kogan had shared data with Cambridge Analytica and others. When we demanded the certification from Cambridge Analytica, what they came back with was saying: Actually, we never actually received raw Facebook data. We got maybe some personality scores or some derivative data from Kogan, but actually that wasn't useful in any of the models, so we'd already deleted it and weren't using it in anything. So yes, we'll basically confirm that we'll fully expunge everything and be done with this.

So I'm not actually sure how this is going to go. I certainly think the New York Times and Guardian and Channel 4 reports that we received last week suggested that Cambridge Analytica still had access to the data. I mean, those seemed credible enough that we needed to take major action based on it. But, you know, I don't want to jump to conclusions about what is going to be turned up once we complete this audit. And the other thing I'd say is that we have temporarily paused the audit to cede to the UK regulator, the ICO [Information Commissioner's Office], so that they can do a government investigation–I think it might be a criminal investigation, but it's a government investigation at a minimum. So we'll let them go first. But we certainly want to make sure that we understand how all this data was used and fully confirm that no Facebook community data is out there.

NT: But presumably there's a second level of analysis you could do, which would be to look at the known material from the Internet Research Agency, to compare it with data signatures from files you know Kogan had, and to see through your own data, not through the audited data, whether there's a potential that that info was passed to the IRA. Is that investigation something that's ongoing?

MZ: You know, we've certainly looked into the IRA's ad spending and usage in a lot of detail. The data that Kogan's app got, it wasn't watermarked in any way. And if he passed along data to Cambridge Analytica that was some kind of derivative data based on personality scores or something, we wouldn't have known that, or ever seen that data. So it would be hard to do that analysis. But we're certainly looking into what the IRA did on an ongoing basis. The more important thing, though, that I think we're doing there is just trying to make sure the government has all the access to the content that it needs. So they've given us certain warrants, we're cooperating as much as we can with those investigations, and my view, at least, is that the US government and special counsel are going to have a far broader view of all the different signals in the system than we're going to–including, for example, money transfers and things like that that we just won't have access to be able to understand. So I think that that's probably the best bet for coming up with a connection like that. And nothing that we've done internally so far has found a link–that doesn't mean that there isn't one–but we haven't found any.

NT: Speaking of Congress, there are a lot of questions about whether you will go and testify voluntarily, or whether you'll be asked in a more formal sense than a tweet. Are you planning to go?

MZ: So, here's how we think about this. Facebook regularly testifies before Congress on a number of topics, most of which are not as high profile as the Russia investigation one recently. And our logic on this is: Our job is to get the government and Congress as much information as we can about anything that we know so they have a full picture; across companies, across the intelligence community, they can put that together and do what they need to do. So, if it is ever the case that I am the most informed person at Facebook in the best position to testify, I will happily do that. But the reason why we haven't done that so far is because there are people at the company whose full jobs are to deal with legal compliance or some of these different things, and they're just basically more in the details on those things. So as long as it's a substantive testimony where what folks are trying to get is as much content as possible, I'm not sure when I'll be the right person. But I would be happy to if I were.

NT: OK. When you think about regulatory models, there's a whole range. There are kind of simple, limited things, like the Honest Ads Act, which would require more openness on ads. There's the much more intense German model, or what France has certainly talked about. Or there's the ultimate extreme, like Sri Lanka, which simply shut social media down. So when you think about the different models for regulation, how do you think about what would be good for Facebook, for its users, and for civil society?

MZ: Well, I mean, I think you're framing this the right way, because the question isn't "Should there be regulation or shouldn't there be?" It's "How do you do it?" And some of the ones, I think, are more straightforward. So take the Honest Ads Act. Most of the stuff in there, from what I've seen, is good. We support it. We're building full ad transparency tools; even though it doesn't necessarily seem like that specific bill is going to pass, we're going to implement most of it anyway. And that's just because I think it will end up being good for our community and good for the internet if internet services live up to a lot of the same standards, and even go further than TV and traditional media have had to in advertising–that just seems logical.

There are some really nuanced questions, though, about how to regulate, which I think are extremely interesting intellectually. So the biggest one that I've been thinking about is this question of: To what extent should companies have a responsibility to use AI tools to kind of self-regulate content? Here, let me kind of take a step back on this. When we got started in 2004 in a dorm room, there was a big difference in how we governed content on the service. Basically, back then people shared stuff and then they flagged it and we tried to look at it. But no one was saying, "Hey, you should be able to proactively know each time someone posts something bad," because the AI tech was much less evolved, and we were a couple of people in a dorm room. So I think people understood that we didn't have a full operation that could go deal with this. But now you fast-forward almost 15 years and AI is not solved, but it is improving to the point that we are able to proactively identify a lot of content–not all of it, you know; some really nuanced hate speech and bullying, it's still going to be years before we can get at–but, you know, nudity, a lot of terrorist content, we can proactively identify a lot of the time. And at the same time we're a successful enough company that we can employ 15,000 people to work on security and all of the different forms of community [operations]. So I think there's this really interesting question of: Now that companies increasingly over the next five to 10 years, as AI tools get better and better, will be able to proactively determine what might be offensive content or violate some rules, what therefore is the responsibility and legal liability of companies to do that? That, I think, is probably one of the most interesting intellectual and social debates around how you govern this. I don't know that it's going to look like the US model with Honest Ads or any of the specific models that you brought up, but I think that getting that right is going to be one of the most important things for the internet and AI going forward.

NT: So how does government even get close to getting that right, given that it takes years to make laws and then they're in place for more years, and AI will be completely different in two years from what it is now? Do they just set you guidelines? Do they require a certain amount of transparency? What can be done, or what should the government do, to help guide you in this process?

MZ: I actually think it's both of the things that you just said. So I think what tends to work well is transparency, which I think is an area where we need to do a lot better and are working on that and are going to have a number of big announcements this year, over the course of the year, about transparency around content. And I think guidelines are much better than prescribing specific processes.

So my understanding with food safety is there's a certain amount of dust that can get into the chicken as it's going through the processing, and it's not a large amount–it needs to be a very small amount–and I think there's some recognition also that you're not going to be able to fully solve every single issue if you're trying to feed hundreds of millions of people–or, in our case, build a community of 2 billion people–but that it should be a very high standard, and people should expect that we're going to do a good job getting the hate speech out. And that, I think, is probably the right way to do it–to give companies the right flexibility in how to execute that. I think when you start to get into micromanagement, of "Oh, you need to have this specific queue or this," which I think is what you were saying is the German model–you have to handle hate speech in this way–in some ways that's actually backfired. Because now we are handling hate speech in Germany in a specific way, for Germany, and our processes for the rest of the world have far surpassed our ability to handle, to do that. But we're still doing it in Germany the way that it's mandated that we do it there. So I believe guidelines are probably going to be a lot better. But this, I guess, is going to be an interesting conversation to have over the course of the year, maybe, more than today. But it's going to be an interesting question.

NT: Last question. You've had a lot of big changes: The meaningful interactions update was a huge change; the changes in the ways that you've identified and stopped the spread of misinformation; the changes today, in the way you work with developers. Big changes, right. Lots of stuff happening. When you think back at how you set up Facebook, are there things, choices, directional decisions, you wish you had made a little differently that would have prevented us from being in this situation?

MZ: I don't know; that's tough. To some degree, if the community–if we hadn't served a lot of people, then I think that some of this stuff would be less relevant. But that's not a change I would want to go back and reverse. You know, I think the world is changing rapidly. And I think social norms are changing quickly, and people's definitions around exactly what is hate speech, what is false news–which is a concept people weren't as focused on before a couple of years ago–people's trust and fear of governments and different institutions is rapidly evolving, and I think when you're trying to build services for a community of 2 billion people all over the world, with different social norms, I think it's fairly unlikely that you're going to be able to navigate that in a way where you're not going to face some thorny tradeoffs between values, and need to change and adapt your systems, and do a better job on a lot of stuff. So I don't begrudge that. I think that we have a serious responsibility. I want to make sure that we take it as seriously as it should be taken. I'm grateful for the feedback that we get from journalists who criticize us and teach us important things about what we need to do, because we need to get this right. It's important. There's no way that sitting in a dorm in 2004 you're going to solve everything upfront. It's an inherently iterative process, so I don't tend to look at these things as: Oh, I wish we had not made that mistake. I mean, of course I wish we didn't make the mistake, but it wouldn't be possible to avoid mistakes entirely. It's just about, how do you learn from that and improve things and try to serve the community going forward?

Facing Controversy

After days of silence about the Cambridge Analytica controversy, Mark Zuckerberg wrote a Facebook post.

Facebook has struggled to respond to the revelations about Cambridge Analytica.

Read the WIRED story about the past two years of struggles inside Facebook.
