Facebook has announced it rolled out a system of checks on political ads run on its platform in the UK which requires advertisers to verify their identity and location to try to make it harder for foreign actors to meddle in domestic elections and referenda.
This follows similar rollouts of political ad transparency tools in the U.S. and Brazil.
From today, Facebook said it will record and display information about who paid for political ads to run on its platform in the UK within an Ad library — including retaining the ad itself — for “up to seven years”.
It will also badge these ads with a “Paid for by” disclaimer.
So had the company had this system up and running during the UK’s 2016 Brexit referendum, the Canadian data firm AIQ would, presumably, have had to pass its political advertiser verification process and display “Paid for by” Vote Leave/BeLeave/Veterans for Britain badges on scores of pro-Brexit ads… That is, if it hadn’t simply been barred in the first place for not being based in the UK.
(How extensively Facebook will check up on political advertisers’ ‘paid for by’ claims is one pertinent question to ask, and we have asked; otherwise this looks mostly like a badging exercise, one that relies on others doing the work of checking and policing claims…)
Ditto during Ireland’s referendum earlier this year, on overturning a constitutional ban on abortion. In that instance Facebook decided to suspend all foreign-funded ads a few weeks before the vote because it did not yet have a political ad check system in place.
In the UK, the new requirement on political advertisers applies to “all advertisers wanting to run ads in the UK that reference political figures, political parties, elections, legislation before Parliament and past referenda that are the subject of national debate”, Facebook said.
“We see this as an important part of ensuring electoral integrity and helping people understand who they are engaging with,” said Richard Allan, VP of global public policy, and Rob Leathern, director of product management in a blog post announcing the launch. “We recognise that this is going to be a significant change for people who use our service to publish this type of ad. While the vast majority of ads on Facebook are run by legitimate organisations, we know that there are bad actors that try to misuse our platform. By having people verify who they are, we believe it will help prevent abuse.”
This summer the parliamentary committee that has been investigating online disinformation called for a levy on social media to ‘defend democracy’. And earlier this year Facebook told the same committee it would roll out an authentication process for political advertisers in time for the UK’s local elections, in May 2019 — with CTO Mike Schroepfer telling MPs the company believes “radical transparency” can fix concern about the societal and democratic impacts of divisive social media ads.
In response, MPs quizzed Schroepfer on whether Facebook’s political ad transparency tool would be so radical as to include “targeting data” in the disclosures, i.e. “will I understand not just who the advertiser was and what other adverts they’d run but why they’d chosen to advertise to me”.
The Facebook CTO’s response in April suggested the company did not plan to go that far. And, indeed, Facebook says now that the details it will disclose in the Ad library are only: “A range of the ad’s budget and number of people reached, and the other ads that Page is running.”
So not, seemingly, any actual targeting data, aka the specific reasons a particular user is seeing a particular political ad. That information could help Facebook users contextualize political ads and be wiser to attempts to manipulate their opinion, as well as generally better understand how their personal information is being used (and potentially misused).
It’s true that Facebook does already provide some data about broad-brush targeting, via a per-ad option users can click to ask ‘why am I seeing this?’. But the targeting categories the company serves via this feature are so broad, and so lacking in comprehensiveness, as to be selectively uninformative, and thus pretty useless at best.
Indeed, the results have even been accused of being misleading.
If Facebook were required by law to rip away its adtech modesty curtain entirely there’s a risk, for its business model, that users would get horribly creeped out by the full-bore view of the lidless eye in the digital wall spying on them to target ads.
So while Schroepfer teased UK MPs with “radical transparency” the reality, six months on, is something a whole lot more dilute and incremental.
Facebook itself appears to be conceding as much, and trying to manage expectations, when it writes: “We believe that increased transparency will lead to increased accountability and responsibility over time — not just for Facebook but for advertisers as well.”
So it remains to be seen whether UK lawmakers will be satisfied with this tidbit. Or call for blood, as they set themselves to the task of regulating social media.
The other issue is how comprehensively (or otherwise) Facebook will police its own political ad checks.
Its operational history is replete with content identification and moderation failures, which doesn’t exactly bode well for the company’s ability to robustly police malicious attempts to skew public opinion, especially when the advertisers in question are simultaneously trying to pour money into its coffers.
So it also remains to be seen how many divisive political ads will simply slip under its radar, i.e. run via the standard, non-verified route and get distributed anyway. Not least because identifying a political ad (vs a non-political ad) is itself tricky.
Malicious political ads paid for by Kremlin-backed entities didn’t always look like malicious political ads. Some of the propaganda Russia spread via Facebook to target US voters consisted of seemingly apolitical, benign messages aimed at boosting support among certain identity-based groups, for example. Those sorts of ads would not appear to fit Facebook’s definition of a ‘political ad’ here.
In general, the company also looks to be relying on everyone else to do the grunt-work policing for it — as per its usual playbook.
“If you see an ad which you believe has political content and isn’t labeled, please report it by tapping the three dots at the top right-hand corner of the ad,” it writes. “We will review the ad, and if it falls under our political advertising policy, we’ll take it down and add it to the Ad Library. The advertiser will then be prevented from running ads related to politics until they complete our authorisation process and we’ll follow up to let you know what happened to the ad you reported.”