
What is the Future of Brand Safety on Meta's Platforms?  

What could Meta’s decision to scrap fact-checking on its social media platforms mean for the ad industry? We look at the changes announced by Meta, examining what they could mean for brand safety on its platforms and how advertisers and publishers could be affected.

Taking a leaf out of X’s book  

Last week, Meta announced that it would be taking a leaf out of X’s book, loosening content moderation by scrapping fact-checking on its social media platforms – Facebook, Instagram, and Threads. It’s certainly an interesting choice of inspiration, considering the heavy criticism X has faced over the past few years for its much slacker approach to moderating content. Under the direction of Elon Musk, the platform we once knew as Twitter has become home to an increasing amount of hateful and non-brand-safe content. Unsurprisingly, many advertisers have distanced themselves from the platform due to the risk of their ads appearing alongside content that could damage their brand image.

As Meta puts an end to its third-party fact-checking programme, it will adopt a Community Notes model – in the style of X. These Community Notes will be written and rated by contributing users. In a Meta blog post, Joel Kaplan, Chief Global Affairs Officer, wrote: “We’ve seen this approach work on X – where they empower their community to decide when posts are potentially misleading and need more context, and people across a diverse range of perspectives decide what sort of context is helpful for other users to see.” 

A descent into chaos? 

Mark Zuckerberg sums up the changes in a video posted alongside the blog post. He says: “We’re going to dramatically reduce the amount of censorship on our platforms… We’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down.” In a peculiar attempt to reassure viewers, Zuckerberg informs us that Meta is working with President-elect Donald Trump. He reportedly decided to begin implementing these changes after visiting Trump at Mar-a-Lago over Thanksgiving.

The timing of the change is significant, with Trump’s inauguration fast approaching on 20th January. Alarmingly, the Washington Post calculated that Trump made over 30,000 false or misleading claims while last in office. Bearing this in mind, a considerable increase in misinformation circulating on Meta’s platforms during Trump’s upcoming presidential term looks very likely.

Alex Distadio, head of publisher acquisition, Americas at MGID, touches on the timing of the change, explaining how publishers could be impacted. “Meta’s move from third-party fact-checkers to Community Notes comes at a pivotal moment in US politics just a couple of weeks before Donald Trump’s inauguration, signalling a potential trend towards deregulation. Changes like this, made with little warning, can completely change the approach needed for publishers to survive,” he says. 

Distadio points out potential benefits for publishers, such as greater content diversity: “Reduced filtering may allow content that would previously have been removed, to be shared on social media, translating into higher engagement and wider audience reach for smaller publishers.” Despite these possible benefits, there are certainly dangers for publishers. Distadio explains that the likely rise of misinformation on Meta’s platforms could lead to audience distrust and brand safety concerns – consequently resulting in fewer people visiting publisher sites and advertisers pulling back from Meta’s platforms. “This erosion of trust would affect not only Meta but also the reputation of publishers, creating lasting challenges for everyone in the industry,” he warns.

Mattia Fosci, CEO of Anonymised, also expresses concern. “Negative emotions undermine brand trust, brand value and brand loyalty. As anxiety, anger and sadness occupy even more of our online experiences, boosted by AI-generated content and spread by bot armies, Meta’s social feeds will become a minefield for advertisers. A post-truth, angry, divided society is as dangerous for democracy as it is for business,” he comments. In response, Fosci advises advertisers to lean into curated, premium open web publishers who can provide safe environments and quality information.  

Others, however, believe the risks may not be so serious. Max Maharajh, managing director at Be A Bear, says that “Meta’s decision to scale back its fact-checking policy is not the disaster some are painting it to be.” He suggests it could be a calculated move to fuel engagement, expanding on the idea: “Conversations mean impressions, and impressions drive media value – something brands and advertisers constantly seek.”

Maharajh adds that, while misinformation remains a concern, “the pivot could solidify Meta’s role as a space where conversations – both constructive and chaotic – thrive, ultimately benefitting its ad-driven revenue model.” For marketers, he believes, this ultimately signals an opportunity to reach increasingly active and engaged audiences.

Who should be responsible? 

Questions arise as Meta attempts to absolve itself of responsibility for the content posted on its own platforms. Who should be responsible? Becky Owen, CMO of influencer agency Billion Dollar Boy and a former Meta employee, considers the issue: “The question of who should hold responsibility for policing information on social media platforms is an acute challenge of our generation; in particular as we enter a reality in which the majority of learning is now happening online, where anyone with access to a phone can break news and shape opinion.” Weighing up the options, she remarks that “it has never felt fully comfortable for a for-profit organisation to essentially be the arbiter of fact, but to totally absolve itself of any responsibility for information and misinformation, does not seem right either.”

“The perfect balance is yet to be found – but it is imperative that measures are put in place – specifically in the age of virtual influencers and artificial intelligence, when anyone can pretend to be anyone and use social platforms for harm,” Owen concludes.  

Will Meta become a target for EU regulators?  

Looking beyond the ad industry, potential regulatory problems take centre stage. Although the new model is rolling out to US users first, it is likely to encounter regulatory scrutiny as it spreads to other parts of the globe. The European Commission (EC) is currently investigating X over potential breaches of content moderation rules under the region’s Digital Services Act, and is pushing to wrap up the investigation “as early as legally possible”. Following Meta’s move, Ofcom – the UK’s communications regulator – said that social media platforms “will have to assess the risk of any changes” made to content moderation and fact-checking policies.

Relatedly, Meta has decided to terminate its diversity, equity and inclusion (DEI) programmes. Meta will no longer pursue diversity-focused hiring practices; it will also end its equity and inclusion training and disband a team dedicated to DEI. The move signals a retreat from the push towards a more diverse and inclusive society.

What is the future of brand safety on Meta's platforms? 

Circling back to the ad industry, advertisers could certainly reap some benefits from higher engagement – but do these outweigh the brand safety risks? With less content moderation comes an inevitable rise not only in misinformation, but also in harmful and offensive content. For advertisers, of course, this means a much higher risk of their ads appearing alongside non-brand-safe content. Being the social powerhouse that it is, Meta won’t lose popularity overnight; the more distant future, however, could see advertisers gradually move away from its platforms. For now, we can only wait and see.