Although a handful of advertisers have suspended their Facebook campaigns in the wake of the platform’s ongoing data scandal, most are staying the course.
“Both advertisers and developers are very understanding about having some features removed that they were using,” said Mark Rabkin, Facebook’s VP of ads and business platform, who took the reins from longtime Facebook exec Andrew Bosworth in August.
The question is how understanding Facebook investors and analysts will be on Wednesday, when the company reports its first-quarter earnings. Bulls advise buying on the dip, while bears, like Pivotal analyst Brian Wieser, rate the stock a sell.
Rabkin sees Cambridge Analytica as a turning point rather than a disaster. In the more than 10 years since Rabkin joined Facebook in 2007 as a software engineer, the company has changed, and so has the industry.
“As that balance shifts, where digital media is the default and traditional media is additional, everyone knows we need to have new standards,” he said.
New standards – and new restrictions. Facebook recently limited the amount of data developers can get through its APIs and killed third-party partner targeting on the platform. The moves were necessary, Rabkin said.
“We have to recognize that as people’s expectations of the ad model increase, as people become more opinionated and more self-aware about data use and privacy, the whole industry has to do better,” he said. “And we’re trying to lead the way on that.”
AdExchanger spoke with Rabkin on the Facebook campus in Menlo Park.
AdExchanger: How is the near-constant drumbeat of tough headlines about Facebook impacting your relationship with advertisers?
MARK RABKIN: I haven’t heard a lot of complaints about the work being done. Rather, we’re actually getting a lot of positive reinforcement for making good changes. I think what the world really wants to see from us is actions to show that we’re really protecting people’s data, actions to show that we really don’t want Cambridge Analytica or any other data situation like that to happen again.
How is Facebook using machine learning? In the past, you’ve described it as being used to “maximize the positive and minimize the negative.”
Zooming into the negative, we’re able to find 99% of ISIS and Al Qaeda content before it’s reported to us. That involved a great deal of natural language processing, content understanding and deeper algorithms than just looking for a couple of words.
Advanced machine learning is also really helpful in detecting more sophisticated patterns of behavior, like “astroturfing” – fake grassroots – which is using bots to make a piece of content look like it has a lot of grassroots support.
What about maximizing the positive?
Most environments on Facebook are a collection of stories that a person consumes from different sources. It’s more of a discovery environment than a context environment. As we start putting ads in videos and near videos, context and the connection between ads and the content a person is viewing starts to become really important for everyone. It’s important for creators to have ads they feel good about and that reinforce their brand, and it’s important for advertisers to be in a safe context and a context that makes sense for them.
That involves really deep understanding. Even trying to figure out if a creator’s video is PG-13 or G is quite difficult.
Deeper machine learning has also helped us get into verticals that have longer consideration cycles, more complex purchasing behavior or longer-lived intent, like auto, travel and real estate. It’s helping Facebook serve a more complete set of verticals on the business side.
Will humans always be involved in that process? At least for now, it seems like the answer is ‘yes.’ For example, Facebook is planning to increase the safety and security headcount to 20,000 this year.
Machine learning is a kind of power tool. It doesn’t just do things on its own or make decisions on its own. Especially on the policy enforcement side, Facebook is trying to navigate very interesting trade-offs between free speech and censorship on one side and hate speech, promoting violence, terrorism, child grooming – all of these terrible things – on the other side.
Humans will be really necessary to help us identify new behaviors by bad guys, because the bad guys are very, very adaptive. Humans are also necessary to help us build the case law for a lot of this enforcement. You can say, ‘no hate speech,’ but you also have to figure out what the exact line is for hate speech in different countries.
Machines can’t find that line. But machines can really amplify the power of those humans, enforcing policy across millions or billions of pieces of content in real time.
What are brands asking you about?
The conversation with clients has really shifted from explaining what mobile is to how they can make the most of it operationally, in practice.
What have you noticed about how people are engaging with video on Facebook?
We have a lot of different behaviors that are growing. Watch is quite big, and live video is still growing really fast. But if I have to pick one next format I think is really important for marketers to master, it’s Stories.
Stories illustrates the operational challenges a lot of advertisers are having. This is a format that was quite tiny or basically didn’t exist two years ago, and it reached 100 million daily users in three months on Instagram with a ton of usage.
How can advertisers keep up?
Marketers don’t have a year, two years, three years to slowly dip their toes in and learn to use something, to adapt to it very gradually. To really compete, you need an organization that can learn and adapt to a new experience on a three-to-four-month time frame.
What this means for advertisers is figuring out which parts of their marketing are most suitable for testing, learning and rapid iteration – a more agile marketing approach – and then which part of their business they can start with.
This interview has been condensed and edited.