
Meta’s content moderation changes highlight challenges for Canadian digital regulation

Regulatory | 02/03/2025 4:30 am EST
Meta Platforms, Inc. CEO Mark Zuckerberg (Graphic: Naomi Wildeboer/Hill Times Publishing)

Meta Platforms Inc.’s decision to end its fact-checking program and loosen the content moderation policies on its platforms is raising concern among Canadian digital policy experts.

As the major United States-based company scales back its oversight and strengthens its ties with that country’s new administration, Canadian officials are being urged by technology, cybersecurity, and social media researchers to work with other countries to adopt digital policies that incentivize large online platforms to be more transparent. 

Early last month, Meta announced several significant changes to the content moderation policies across its platforms: Facebook, Instagram, and Threads. “Starting in the U.S.,” the company will eventually phase out its third-party fact-checking program and adopt a community notes system, meaning it will crowd-source content moderation by letting volunteer users write explanatory notes on social media posts and vote on whether those notes should be displayed.

For example, such a note could contextualize a misleading or false claim by providing additional information, or clarify a satirical post that might be mistaken by some users as fact. 

The company did not confirm when, or if, the changes to fact-checking will be coming into effect in Canada.

Additionally, Meta will be reducing reliance on automated filters and lifting its restrictions on speech previously considered harmful. 

“We’re getting rid of a number of restrictions on topics like immigration, gender identity, and gender that are the subject of frequent political discourse and debate,” the company stated. 

According to reporting from Platformer, Meta’s trust and safety teams, who will still be responsible for moderating content in line with community standards, received guidelines with examples of speech that will no longer be restricted, including certain insults on the basis of gender or sexual orientation. Meta now permits accusations of mental illness and the denial of transgender people’s existence. According to the company’s updated hateful conduct policy, Meta also now permits content arguing for gender-based limitations on military, law enforcement, and teaching jobs.

Meta additionally plans to revise its approach to political content and treat it “like any other content in your feed,” the company said.

New policies have Canadian experts concerned

“The decision about topics that they were going to roll back trust and safety on was concerning,” says Emily Laidlaw, Canada research chair in cybersecurity law and professor at the University of Calgary. She highlights how the move could risk creating some “extreme vulnerabilities” for marginalized groups who are often the target of hate speech. 

Florian Martin-Bariteau, a University of Ottawa law professor and research chair in technology and society, agrees. 

“There is clearly an attack on queer people and other marginalized groups,” he says. “I’m not sure that removing fact-checkers and content moderation really helps freedom of expression and political conversation,” he adds, noting that by opening the floodgates to content that is harmful to these communities, Meta could paradoxically force them out of the conversations on its platforms. 

In a video announcing the changes, Meta CEO Mark Zuckerberg said that his company will be working with U.S. President Donald Trump to “push back on governments around the world that are going after American companies and pushing to censor more.” He accused Europe of “institutionalizing censorship,” asserted that “secret courts” in Latin America are quietly ordering content takedowns, and condemned China’s Facebook ban.

“The only way that we can push back on this global trend is with the support of the U.S. government. And that’s why it’s been so difficult over the past four years when even the U.S. government has pushed for censorship,” Zuckerberg said.

“By going after us and other American companies, it has emboldened other governments to go even further,” he said. “But now we have the opportunity to restore free expression and I am excited to take it.” 

 

Meta Platforms, Inc. CEO Mark Zuckerberg, in a video posted online in early January, announced changes to the company’s content oversight (Photo: Screenshot/Meta)

 

For Martin-Bariteau, a team-up with the Trump administration to “push back” on government regulations from around the world would be especially concerning.

“The power of big tech, especially American [companies], over politics and society has been increasing for years,” says the professor. For him and others in his field who have been trying to challenge notions that the tech industry is politically neutral, January 2025 has been a big “we told you so” moment. 

“You always, at least in the U.S., need to play a bit nice with the executive power,” but those ties have gone far beyond the norm, Martin-Bariteau says.

Meta has joined a number of companies seeking to curry favour with the newly elected U.S. administration. Zuckerberg’s announcement referred to the recent election as a “cultural tipping point” that motivated the company’s sweeping changes. Meta reportedly donated US$1 million to Trump’s record-breaking US$170 million inaugural fund, alongside donations from other major tech companies. Joel Kaplan, who served in the George W. Bush administration as White House Deputy Chief of Staff for Policy, has been on Meta’s public policy staff since 2011 and was recently promoted to the role of chief global affairs officer, while Ultimate Fighting Championship (UFC) CEO and Trump ally Dana White joined the company’s board of directors.

“I believe that they wanted to make [this] big announcement to please the Trump administration,” says Martin-Bariteau. “It’s an interesting direction for Meta that is clearly pro-Trump…

“As we know already, the U.S. has a lot of geopolitical power,” he continues. “If all of big tech comes together to attack different democracies around the world, what’s going to happen?”

Digital policy must be a priority for Canadians and the government, expert says

In the coming months, Canadian officials must prioritize digital policy, says Martin-Bariteau, emphasizing its widespread impacts.

“Digital policy should be at the top of their priorities, because everything is digital,” he says. “Your life is managed by big tech, by connectivity issues, by algorithms. So it concerns every Canadian.” 

“If most of the providers and the platforms are now basically playing by the rules of the Trump administration, it might be quite concerning for businesses and for democracy,” he states, noting that platforms have the ability to turn the algorithmic dials and prioritize any kind of content. 

Although Meta’s moves on fact-checking are currently confined to the U.S., Martin-Bariteau believes Canada will likely be next. The company’s loosened content moderation policies, meanwhile, already apply to everyone who uses its platforms, anywhere in the world, local laws permitting.

“It’s highly probable that Canada will be next,” he says, “because in some jurisdictions – such as the EU – Meta may have to keep its fact-checking systems to stay compliant with the region’s regulatory frameworks. But in Canada, we don’t have this.”

 

Donald Trump speaking with supporters at a campaign rally at the Phoenix Convention Center in Phoenix, Arizona, in October 2016. (Photo by: Gage Skidmore/Flickr/https://flic.kr/p/Nz4kGp)

 

Addressing these and other concerns, such as privacy and online harms reform, should be part of broader U.S.-Canada talks, Martin-Bariteau argues. While digital policy might not seem central to bilateral relations, there have been signs that the U.S. may have set its own sights on it. 

On Jan. 20, President Trump issued a memorandum ordering the U.S. Treasury to develop “protective measures or other actions” in response to any foreign country having “any tax rules in place, or are likely to put tax rules in place, that are extraterritorial or disproportionately affect American companies….” This raises flags regarding not only Canada’s Digital Services Tax, but also the Online Streaming Act and the Online News Act – all measures to rein in large foreign tech companies.

The directive also announced the country’s intention to pull out of the Organization for Economic Co-operation and Development (OECD) Global Tax Deal, which imposes a minimum income tax on large multinational companies in the countries where they operate. 

“It’s going to be interesting to see what Canada can move forward when the House resumes in March. Will they decide to be strong against the U.S., or will they decide to try and appease our neighbour to the south?” Martin-Bariteau wonders.

“We know that a lot of the big tech companies are not in favour of privacy reform,” he says, highlighting Meta’s ongoing court battle with the Office of the Privacy Commissioner. Bill C-27, which also died on the order paper when Parliament was prorogued, would have brought about some “long overdue” improvements to Canadian privacy law, says Martin-Bariteau. He warns that if it or a similar piece of legislation returns to the order paper, “we could see an influence from the U.S. saying, ‘we don’t want this to go forward.’”

“Or, we could be a strong, independent country, and not consider what the Trump administration’s new best friends – who were all at the inauguration – wish, and go forward with policy reform that has been asked for by Canadians for quite some time, that has been promised by political leaders across many elections,” he says. “Almost 10 years later, we’re still waiting for privacy reform, content moderation rules, platform liability, and more.”

‘Canada doesn’t have the upper hand’

The fallout from Canada’s Online News Act illustrates the challenges of the country’s recent attempts at regulating big tech, says Merlyna Lim, a Carleton University professor and Canada research chair in digital media and global network society. 

After the legislation – designed to address power imbalances between tech platforms and news publishers – took effect, Meta blocked Canadian news content on its platforms. This underscores the need for regulatory strategies that incentivize collaboration rather than provoke retaliation, says Lim. 

“We know from the way that Canada dealt with Meta, Canada doesn’t have the upper hand,” she says. This is largely due to Canada’s small market size, which makes the country less essential to the company.

“Canadian regulators really have to take a long look at the regulatory mismatch,” she says, adding that “the punitive approach didn’t work, so you need to just forget it.”




Lim criticized the government for pressing ahead with the Online News Act despite clear warnings from Meta that it would block news on its platforms, a standoff that ultimately harmed Canadian users and put them at even greater risk of falling for misinformation.

Instead, she calls for measures and dialogue that incentivize platforms “to value fact-checking, to value information, and to value transparency.”

Community notes unlikely to address misinformation, online harms alone, expert says

Meta’s plan to switch to community notes raises additional concerns. Although the company has not provided details on how the system will work on its platforms, Lim explains that without proper safeguards, its reliance on user contributions could fortify misinformation instead of combating it.

“On the surface, it might seem more democratic to let the people decide – but actually, there is no such thing as the wisdom of the crowd.”

This is particularly true when it comes to ideological and political issues, says Lim. “Popular opinion, in most cases, doesn’t reflect factual accuracy, especially on contentious topics – usually it is binary: ‘pro’ or ‘anti’. 

“With the nature of our social media landscape at this moment, there is a higher risk of amplifying misinformation with this move,” she adds.

Lim highlights how community notes could make it easier for certain groups to “game the system” through coordinated manipulation, allowing bad actors to flood the system and push false narratives.

“For instance, they might upvote misleading notes while suppressing corrections,” she says, adding that even in situations where corrections are issued, they rarely go as viral as the misinformation that spread before them. 

Because we live in a highly polarized environment, at least on social media, and community notes “mainly reflects the biases of a dominant user group,” there is a potential for the system to reinforce, rather than challenge, misinformation, says Lim.

Unlike paid fact-checkers, the majority of community notes contributors lack credibility and expertise. What they do have is a desire to intervene, she explains, usually driven by “a desire to shut down opposing views.”

Meta pointed to X – the social media platform formerly known as Twitter – as the inspiration for its adoption of community notes, stating that its version will require agreement between people with a range of perspectives, “just like they do on X.”

Previously known as Birdwatch, the crowd-sourced moderation system originally served as one part of Twitter’s broader approach to addressing misinformation. Independent of the company’s trust and safety teams, Birdwatch complemented Twitter’s moderation systems by helping tackle posts outside of “circumstances where something breaks our rules or receives widespread public attention,” the company said in 2021.

“The intention of Birdwatch was always to be a complement to, rather than a replacement for, Twitter’s other misinformation methods,” former head of trust and safety Yoel Roth said in a 2023 interview with WIRED.

According to June 2023 reporting from the Poynter Institute, since Tesla Inc. and SpaceX CEO Elon Musk acquired the platform, rebranded Birdwatch as Community Notes, and fired many of its trust and safety staff, the company has relied heavily on the crowd-sourced system to deal with misinformation, and it is failing. That failure is largely due to community notes requiring ideological consensus in order to be displayed, echoing Lim’s concerns about the system’s effectiveness in today’s highly polarized environment.
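That consensus requirement is easier to see in miniature. The Python sketch below is a simplified illustration of how such a display gate can behave; it is not Meta’s or X’s actual algorithm (X’s published scorer is considerably more complex, and Meta has not detailed its version), and the function names, viewpoint labels, and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Rating:
    viewpoint: str   # cluster label inferred from a rater's history (hypothetical)
    helpful: bool    # did this rater find the note helpful?

def note_is_displayed(ratings, min_ratings=5, min_helpful_share=0.8):
    """Show a note only when raters from differing viewpoint clusters
    agree it is helpful: a simplified 'ideological consensus' gate."""
    if len(ratings) < min_ratings:
        return False  # not enough signal yet
    helpful = [r for r in ratings if r.helpful]
    # Consensus gate: helpful ratings must span at least two clusters.
    if len({r.viewpoint for r in helpful}) < 2:
        return False
    return len(helpful) / len(ratings) >= min_helpful_share

# On a contested political topic, ratings tend to split along viewpoint
# lines, so the gate rarely opens and the note stays hidden:
polarized = [Rating("left", True)] * 5 + [Rating("right", False)] * 5
print(note_is_displayed(polarized))  # False: no cross-viewpoint agreement

Under a gate like this, exactly the posts Lim describes, the contested, binary “pro” or “anti” topics, are the ones least likely to ever carry a visible correction.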

Calls for transparency-focused regulations

“If you don’t have basic consumer protection rules set out in law, then the companies set them entirely themselves and they can reverse course at any point, and that’s what Meta did,” says Laidlaw, who was a member of the expert group that advised the federal government on Bill C-63, the proposed Online Harms Act that is now dead on the order paper.

“We’re seeing a giant rollback in trust and safety in general.”

Community notes can be a useful tool, “but it’s not a solution and it’s not a governance framework,” she says.

Laidlaw calls for a regulatory approach that holds platforms accountable by requiring them to acknowledge and minimize the risks of online harms.

“There’s a very specific role for government to tackle aspects of [misinformation], but they have to break it down to the specific elements that it would be constitutional for them to address,” she says, noting that concerns about state-backed foreign interference online are much different from someone sharing false COVID-19 treatments to their friends.

“We do need these companies to take on self-governance, though,” she says, pointing to the European Union’s approach in its 2022 Code of Practice on Disinformation as an example of an effective place for Canada to start. 

She referred to C-63 and the digital safety body that it would have established “that could have worked with industry to develop codes of practice and bring us in line globally with what we’re seeing in the EU and so on. That’s a place to start right now.”

Carleton University journalism professor Dwayne Winseck also calls for an approach that prioritizes transparency and self-governance. He echoes Laidlaw in looking to the EU.

Instead of focusing on the idea of fact-checking, which Winseck argues is based on a “faulty premise” of objectivity and universal truth, he urges regulators to adopt measures similar to those in the EU’s Digital Services Act and Media Freedom Act.

According to the European Commission, the Digital Services Act requires “very large” online platforms and search engines – those with more than 45 million users in the EU (10 per cent of the population) – to conduct regular risk assessments on issues like illegal content, disinformation, and harm to public health. These platforms must also implement transparent content moderation policies, give users the ability to appeal company decisions, and ban targeted advertising to minors or based on sensitive personal data. Additionally, it mandates that platforms disclose their algorithms and data to researchers to enhance accountability and better understand their societal impact. Failure to comply can result in fines of up to six per cent of a company’s global revenue.

“What this basically says is, ‘you built this sucker, you built this machine.’ And just like any high-performance, massive technical system – whether a highway, a bridge, an [artificial intelligence] system, or a digital platform – you’ve got to have a team that knows how this thing is going to perform in the real world, continuously monitor it, and report back…,” says Winseck.

“There is great scope” left to the platforms to develop “as they see fit,” he continues, but it is “encased” in a framework that recognizes the competing rights of businesses versus people’s freedom of expression. 

“When those two rights clash, the [Digital Services Act] says it’s the people’s rights to freedom of expression that triumph over the platform’s rights to exercise editorial discretion over the flow of content on their platforms.”

Similar transparency measures could also help address the issue of governments quietly pressing platforms to take down certain content.

“That pressure behind the scenes [from governments] has been an issue with the regulation of tech and tech companies for decades, and it’s a problem because it lacks transparency,” says Laidlaw, acknowledging Zuckerberg’s complaints of “censorship.” 

“It’s doing in the shadows what is not set out in law,” she says.

According to Lim, transparency measures could also offer a more feasible alternative to Canada’s current regulatory approach, which has relied on punitive actions that sometimes backfire.

Because of Canada’s lack of leverage, experts suggest work must be done through a global approach

Because of Canada’s relatively small market size, it is challenging for the country to impose unilateral regulations on large multinational tech companies like Meta, according to Martin-Bariteau and Lim. Such companies are not necessarily forced to follow the expectations set by regulators in smaller markets, they explain.

Meta’s response to Canada’s Online News Act illustrates the point: playing ball with Canadian regulations was not worth it to the company financially, and it was easier to simply block news and redefine where it stood under the act. “We cannot afford to repeat that, because that’s not good for all of us,” says Lim.

Instead, she and Martin-Bariteau advocate for a collaborative, global approach to digital policy. 

“That was one of the big debates about the Digital Services Tax: there was not a global conversation about taxation and fair taxation,” says Martin-Bariteau.

“If Canada acts alone, it’s easier to retaliate, or to leave Canada,” says Martin-Bariteau. “But if all of the countries in the world [or even] most of the major economies in the world, but the U.S., work together and have strict red lines, it’s more complex for Meta to do something — or Google, or TikTok, or Amazon, or Microsoft, you can name all of them.” 

“Governments try,” he continues, referring to the OECD and G7 tax deals – which the U.S. intends to pull out of. “But I think tomorrow, more than ever, we’re going to need global collaboration and coalitions to move the needle in the right direction.”

phalentm@thewirereport.ca

