Facebook Claims It Has a Better Way to Prove Ads Work on Facebook

The plan? Convince advertisers that metrics don't matter.

When it comes to online advertising, Facebook is crushing it—at least according to Facebook's balance sheet. Advertisers clearly believe they need to be on Facebook, and Facebook wants them to keep believing it. That case has been harder to make over the past several months following embarrassing and very public blunders that cast doubt on Facebook's honesty toward ad buyers. In September, the company admitted that for years it had overstated the average time viewers spent watching video ads on the site. Since then, Facebook has disclosed five more flaws in the metrics it shares with advertisers—the measurements by which those advertisers judge whether their ad campaigns on Facebook are working.

Facebook's reputation has yet to fully recover from those missteps, which cut straight to the core of its business. Today the company is trying to reverse that momentum with its first-ever State of the (Measurement) Union event in New York City, meant to mollify marketers still worried about transparency. The plan? Convince them that metrics don't matter at all.

Instead, Facebook says advertisers should care about Lift, a platform the company claims offers a more scientific way to gauge an ad's effectiveness. Metrics such as how long someone watches a video can indicate whether a particular ad campaign has succeeded on Facebook. What such metrics can't tell advertisers, Facebook says, is whether it was worth it to run an ad on Facebook at all. Lift, the company claims, can tell them exactly that via randomized controlled trials—an adaptation of the study design used for testing the effectiveness of medical treatments. “The reason why this is a compelling approach is it harkens back to what is the true gold standard of measurement,” says Jonathan Lewis, a Facebook product manager and keynote speaker at today's measurement event.

Marketing experts agree that randomized controlled trials are indeed the best approach for measuring an ad's effectiveness. But few advertisers use them, which would seem to make the case for Lift even more appealing. Still, only Facebook can run such experiments on Facebook, because only Facebook has full visibility into, and full control over, its own platform. Transparency is still a problem—the same problem that shook advertisers' confidence in the first place.

Lift Up

General Motors has run ads on Facebook for years. Last year, it decided to find out whether those ads were worth it, using the Lift platform. The campaign used randomized controlled tests to see whether Facebook ads were helping GM sell more OnStar 4G LTE Wi-Fi data plans to drivers. Facebook randomly assigned users to two groups: a test group that saw the ad and a control group that never did. After the test was complete, Facebook measured whether more people in the test group signed up for the data plan than in the control group.

The result?

Sales of OnStar data plans increased 2.3 percent because of an online video ad run on Facebook, according to a case study published by Facebook. Among people who had never used a data plan, sales rose 7.2 percent. Facebook argues that those are the numbers that matter, and that advertisers can use them to make more informed decisions. If an online video ad generated more sales "lift" than a text-based ad, for instance, an advertiser could decide to keep spending on the video ad and drop the text ad. "It’s rigor that helps businesses understand how they can actually accomplish their goals the best, and directly," Lewis says.
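The mechanics of a lift test like the one described above can be sketched in a few lines of code. This is a hypothetical illustration of the general methodology—random holdout, compare conversion rates, report the relative difference—not Facebook's actual system; the function names, holdout size, and conversion rates are all invented for the example.

```python
import random

random.seed(0)

def run_lift_test(users, convert, holdout=0.1):
    """Randomly hold out a control group that never sees the ad,
    'expose' everyone else, and return the relative lift in
    conversion rate between the two groups."""
    test, control = [], []
    for user in users:
        # Random assignment is what makes this a controlled trial:
        # the two groups differ only in ad exposure.
        (control if random.random() < holdout else test).append(user)
    test_rate = sum(convert(u, exposed=True) for u in test) / len(test)
    control_rate = sum(convert(u, exposed=False) for u in control) / len(control)
    return (test_rate - control_rate) / control_rate

# Toy conversion model: a 5 percent baseline conversion rate, nudged
# to 6 percent by ad exposure (values chosen purely for illustration).
def convert(user, exposed):
    return random.random() < (0.06 if exposed else 0.05)

lift = run_lift_test(range(100_000), convert)
print(f"relative lift: {lift:+.1%}")
```

Because assignment is random, any persistent gap between the groups can be attributed to the ad itself rather than to differences in who happened to see it—which is exactly the property the matched-audience approaches described below have to approximate.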

If advertisers choose not to use Lift, they have other ways of trying to gauge whether an ad on Facebook is working. They could contract with third-party companies that track people who saw an ad, match them with similar people who didn't, and compare the two groups. If a 30-something man from Boston saw an ad, for example, the third-party company would try to find a 30-something man from Boston who didn’t see the ad, gradually building its own test and control groups. "You could use the constructed control group of unexposed people to say, well, on average, how many of those people bought something from you?” says Brett Gordon, a marketing professor at Northwestern University’s Kellogg School of Management. "And on average for the people who saw an ad, how many people bought an ad from you? That distance would be your estimate of the effectiveness of the ad."
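The matching approach Gordon describes can also be sketched. This is a toy version—not any vendor's actual methodology—in which each exposed user is paired with an unexposed user sharing the same demographic profile, and purchase rates are then compared across the paired groups. The data layout and function names are invented for the example.

```python
def matched_lift(exposed, unexposed, key, bought):
    """Pair each exposed user with an unexposed user sharing the same
    demographic key; return the difference in purchase rates between
    the exposed group and the constructed control group."""
    # Bucket the unexposed pool by demographic profile.
    pool = {}
    for u in unexposed:
        pool.setdefault(key(u), []).append(u)
    # Greedily match each exposed user to an unexposed lookalike.
    pairs = []
    for u in exposed:
        bucket = pool.get(key(u))
        if bucket:
            pairs.append((u, bucket.pop()))
    if not pairs:
        return None
    exposed_rate = sum(bought(e) for e, _ in pairs) / len(pairs)
    control_rate = sum(bought(c) for _, c in pairs) / len(pairs)
    return exposed_rate - control_rate

# Example: users as (age_bracket, city, bought) tuples.
exposed = [("30s", "Boston", True), ("30s", "Boston", False), ("40s", "NYC", True)]
unexposed = [("30s", "Boston", False), ("30s", "Boston", False), ("40s", "NYC", False)]
diff = matched_lift(exposed, unexposed,
                    key=lambda u: (u[0], u[1]),
                    bought=lambda u: u[2])
print(f"estimated effect: {diff:+.2f}")
```

The weakness is visible in the code: matching only controls for the attributes in the key. Anything unobserved that differs between people who saw the ad and people who didn't—which a randomized trial eliminates by design—still biases the estimate.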

But according to a white paper jointly released by Gordon, another researcher at the Kellogg School, and Facebook, those third-party approaches didn't work as well as Lift because, Gordon says, Facebook's system can draw on truly randomized observations conducted at scale on, and by, Facebook itself.

Which gets back to the original problem—advertisers feeling stymied by Facebook are once again being asked to trust Facebook to check its own homework. The choice for advertisers would seem to be trusting Facebook to do the kind of truly randomized controlled trial of which it is capable, or relying on a third party to conduct an approximation without the access. Or maybe that's a false choice. "I suspect a determined capable advertiser can do a randomized controlled trial with third-party measurement,” says Benjamin Edelman, a professor at Harvard Business School and a vocal skeptic of the online ad industry.

Either way, advertisers for now seem likely to stick with Facebook. "It all depends on the objectives of the campaign," says Rick Martinek, senior manager for GM’s OnStar marketing, insisting that marketers have many online options. But when it comes down to it, Facebook is always part of the mix. “It gets a lot of consideration,” Martinek says. “They bring scale. We’re always looking at Facebook.”