I Helped Create Facebook's Ad Machine. Here's How I'd Fix It

Facebook finally laid out some changes to its ad platform. But a former employee who helped build it shares his own ideas on how to fix the Russia problem.

This month, two magnificently embarrassing public-relations disasters rocked the Facebook money machine like nothing else in its history.

First, Facebook revealed that shady Russian operators purchased political ads via Facebook in the 2016 election. That’s right, Moscow decided to play a role in American democracy and targeted what are presumed to have been fake news, memes, and/or various bits of slander (Facebook refuses to disclose the ad creative, though it has shared it with special counsel Robert Mueller) at American voters in an attempt to influence the electoral course of our 241-year-old republic. And all that on what used to be a Harvard hook-up app.

Second, reporters at ProPublica discovered that, via Facebook’s publicly available advertising interface, users who had expressed interest in bigoted topics like "how to burn Jews" could be easily targeted. In the current political climate, the optics just couldn’t be worse.

For me, reading the coverage from the usual tech journalist peanut gallery was akin to a father watching his son get bullied in a playground for the first time: How can this perfect, innocent creature get assailed by such ugliness?

You’re likely thinking: How can the sterile machinery of the Facebook cash machine inspire such emotional protectiveness? Because I helped create it.


In 2011, I parlayed the sale of my failing startup to Twitter into a seat on Facebook’s nascent advertising team (for the longer version, read the first half of my Facebook memoir, Chaos Monkeys). Improbably, I was tasked with managing the ads targeting team, an important product that had until then dithered in the directionless spontaneity of smart engineers writing whatever code suited their fancy.

"Targeting" is polite ads-speak for the data levers that Facebook exposes to advertisers, allowing that predatory lot to dissect the user base—that would be you—like a biology lab frog, drawing and quartering it into various components, and seeing which clicked most on its ads.

My first real task as Facebook product manager was stewarding the launch of the very system that was the focus of the recent scandal: Code-named KITTEN, it ingested all manner of user data—Likes, posts, Newsfeed shares—and disgorged that meal as a large set of targetable "keywords" that advertisers would choose from, and which presumably marked some user affinity for that thing (e.g. "golf," "BMW," and definitely nothing about burning humans).

Later that year, in another improbable turn of events that was routine in those chaotic, pre-IPO days, I was tasked with managing the cryptically named Ads Quality team. In practice, we were the ads police, a hastily assembled crew of engineers, operations people, and one grudging product manager (me), charged with the thankless task of ads law enforcement. It was us defending the tiny, postage-stamp-sized ads (remember the days before Newsfeed ads?) from the depredations of Moldovan iPad offer scammers, Israeli beauty salons uploading images of shaved vulvas (really), and every manner of small-time fraudster looking to hoodwink Facebook’s 800 million users (now, it’s almost three times that number).

So now you’ll perhaps understand how the twin scandals—each in a product that I helped bring to fruition—evoked such parental alarm.

What can Facebook do about all this?

Let’s set aside the ProPublica report. Any system that programmatically parses the data effluvia from gajillions of users, and outputs them into targeting segments, will necessarily produce some embarrassing howlers. As Buzzfeed and others highlighted in their coverage of the scandal, Google allows the very same offensive targeting. The question is how quickly and well those terms can be deleted. It’s a whack-a-mole problem, one among many Facebook has.

Also, there’s zero evidence that any actual ads targeting was done on these segments (beyond the $30 that ProPublica spent). Actual ad spend on the million-plus keywords that Facebook offers follows what’s called a long-tail distribution: Obscure terms get near-zero spend, and Facebook’s own tools show the reach for the offensive terms was minimal. Keyword targeting itself isn’t very popular anymore. Its lack of efficacy is precisely why we shipped far scarier versions of targeting around the time of the IPO; for example, targeting that’s aware of what you’ve browsed for online—and purchased in physical stores—nowadays attracts more smart ad spend than any keywords.
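To see why obscure keywords attract almost no money, consider a quick sketch of a long-tail (Zipf-like) spend distribution. The budget figure and keyword counts below are hypothetical, purely for illustration, not real Facebook numbers:

```python
# Illustrative sketch of long-tail ad spend across keywords (hypothetical
# numbers). Under a Zipf-like distribution, spend on the k-th most popular
# keyword is proportional to 1/k, so a tiny head of popular terms captures
# most of the budget while obscure tail terms get next to nothing.

def zipf_spend(total_budget, n_keywords):
    weights = [1 / k for k in range(1, n_keywords + 1)]
    norm = sum(weights)
    return [total_budget * w / norm for w in weights]

spend = zipf_spend(total_budget=1_000_000, n_keywords=100_000)
head = sum(spend[:100])       # top 100 keywords
tail = sum(spend[-50_000:])   # bottom 50,000 keywords
print(f"top 100 keywords: ${head:,.0f}")
print(f"bottom 50,000 keywords: ${tail:,.0f}")
```

Under these assumptions, the top 100 keywords out of 100,000 soak up over 40 percent of the budget, while the entire bottom half shares under 6 percent — which is why a handful of offensive tail terms can exist in the system while attracting essentially zero real spend.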

No, the real Facebook story here is the Russia thing, which should be of concern to anyone worried about the fate of our republic. While the amount of Russian spend Facebook admitted to is peanuts ($100,000) and certainly didn’t influence the election’s outcome, this should be considered a harbinger of what’s to come. Even US politicians didn’t spend much on Facebook in 2008; now they certainly do, and you can be sure the Russians will grow their budgets in 2018 unless Facebook acts.


The good news for democracy (and Mark Zuckerberg) is that these problems, unlike the unscalable miracles required to address most complaints about Facebook, are eminently solvable. On Thursday, in fact, as this piece was being edited, Mark Zuckerberg livestreamed an address wherein he broadly elucidated the company’s next steps, which were remarkably in line with what I imagined—with one big exception.

Facebook already has a large political ad sales and operations team that manages ad accounts for large campaigns. Zuckerberg hinted that the company could follow the same "know your customer" guidelines Wall Street banks routinely employ to combat money laundering, logging each and every candidate and super PAC that advertises on Facebook. No initial vetting means no right to political advertising.

To prevent rogue advertisers, Facebook will monitor all ad creative for political content. That sounds harder than it is. Take alcohol advertising, for example, which nearly every country in the world regulates heavily. Right now, Facebook screens every piece of ad creative for anything alcohol-related. Once flagged, that content goes into a separate screening workflow with all the varied international rules that govern alcohol ads (e.g. nothing in Saudi Arabia, nothing targeted to minors in the US, etc.).

Political content would fall into a similar dragnet and be triaged accordingly. As it does now, Facebook would block violating ad accounts, and could use account meta-data like IP address or payment details to prevent that advertiser from merely creating another account. It would be a perpetual arms race, but one Facebook is well-equipped to win, or at least keep as a stalemate. Zuckerberg’s video shows commitment to waging that war.
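The enforcement loop described above can be sketched in a few lines. This is a hypothetical illustration, not Facebook's actual system: a banned account leaves behind hashed fingerprints of its IP address and payment details, and new signups are checked against that blocklist so the same advertiser can't simply open a fresh account:

```python
# Hypothetical sketch of blocking repeat offenders via account metadata,
# as the article describes: once an account is banned, hashed fingerprints
# of its IP and payment details go on a blocklist consulted at signup time.
import hashlib

banned_fingerprints = set()

def fingerprint(value):
    # Hash raw identifiers so the blocklist never stores them in the clear.
    return hashlib.sha256(value.encode()).hexdigest()

def ban_account(ip, payment_id):
    banned_fingerprints.add(fingerprint(ip))
    banned_fingerprints.add(fingerprint(payment_id))

def signup_allowed(ip, payment_id):
    # Reject a signup if either identifier matches a banned fingerprint.
    return not ({fingerprint(ip), fingerprint(payment_id)} & banned_fingerprints)

ban_account("203.0.113.7", "card-4242")
print(signup_allowed("203.0.113.7", "card-9999"))   # False: reused banned IP
print(signup_allowed("198.51.100.2", "card-1111"))  # True: clean identifiers
```

The arms race comes from the attacker's side of this loop: rotate IPs and payment instruments faster than the defender can fingerprint them, which is why real systems layer on many more signals than these two.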

Next, based on Zuckerberg’s somewhat vague wording, Facebook will likely now comply with the Federal Election Campaign Act, a piece of 1971 legislation that governs political advertising, and from which Facebook finagled a self-serving exemption in 2011. The argument then was that Facebook’s ads were physically too small (no longer true) to allow the usual disclaimer—“I’m Joe Politico, and I approve this message…”—required on every piece of non-Facebook media. Facebook also claimed at the time that burdensome regulation would have quashed innovation at the burgeoning startup.

With Facebook’s market value now hovering at half a trillion dollars, that’s a preposterous thought. The company needs to put on its big-boy pants and assume its place on the world stage. The FECA disclaimers could easily live inside the upper right-hand-side dropdown menu that currently carries some ads targeting information (check it yourself), and would seamlessly integrate with the current product. Reporting of malicious political content could act in a similar manner to the recently added buttons that allow the reporting of fake news.

Lastly, the step I didn't see coming, because of its inherent weirdness.

The biggest promise, at least at the product level, that came out of Zuckerberg’s video concerns the ominously named "dark posts." The confusion around these is vast, and worth clearing up.

The language is a pure artifact of the rudimentary nature of Facebook’s ads system in the bad old days. Before the Newsfeed ads we have today, there was no commercial content in Feed at all, beyond so-called "organic" (i.e. unpaid) posts that Pages would publish to whoever had liked their page. A Like was effectively license to spam your Feed, which is why companies spent millions to acquire them.

But modern digital advertisers constantly tweak and experiment with ads. When big brands requested the ability to post lots of different creative, it posed a real problem. Brands wanted to show a dozen different ad variations every day, but they didn’t want to pollute their page (where all posts necessarily appear). "Dark posts" were a way to shoehorn that advertiser requirement into the Pages system, allowing brands to create as many special, unseen posts as they’d like, which would be seen only by targeted audiences in their Feeds, and not by random passers-by on their page. The unfortunate term "dark post" took on a sinister air this past election, as it was assumed that shady foreign elements, or just certain presidential candidates, were showing very different messages to different people, engaging in a cynical and hypocritical politicking.

Zuckerberg proposes, shockingly, a solution that involves total transparency. Per his video, Facebook pages will now show each and every post, including dark ones (!), that they’ve published in whatever form, either organic or paid. It’s not entirely clear if Zuckerberg intends this for any type of ad or just those from political campaigns, but it’s mind-boggling either way. Given how Facebook currently works, it would mean that a visitor to a candidate’s page—the Trump campaign, for instance, once ran 175,000 variations on its ads in a single day—would see an almost endless series of similar content.

As big a step as the transparency feature sounds, I don’t see how Facebook can launch it until these Pages product concerns are worked out. The Pages team’s product managers must be sitting in a conference room right now, frantically scrawling new design ideas on a whiteboard. I’d bet anything that the Ads Quality and Pages teams are prioritizing that as you read this. This is one scandal Facebook isn’t going to weasel its way out of with generic appeals to "openness" and "community."


Despite Zuckerberg’s sudden receptiveness to user (and government) feedback, should Facebook be pilloried for these blatant shortfalls, or even sanctioned by Washington? You’ll accuse me of never having taken off my corporate-issue Facebook hoodie, but the answer is not really.

It would take the omniscience of a biblical deity to correctly predict just what Facebook’s two billion chatting, posting, scheming, and whining users are up to at any given moment. If you’d come to me in 2012, when the last presidential election was raging and we were cooking up ever more complicated ways to monetize Facebook data, and told me that Russian agents in the Kremlin’s employ would be buying Facebook ads to subvert American democracy, I’d have asked where your tin-foil hat was. And yet, now we live in that otherworldly political reality.

If democracy is to survive Facebook, that company must realize the outsized role it now plays as both the public forum where our strident democratic drama unfolds, and as the vehicle for those who aspire to control that drama’s course. Facebook, welcome to the big leagues.