Ottawa funds AI ‘misinformation’ detection tool to counter ‘fake news’ online

Funding artificial intelligence development to counter ‘misinformation’ ensures ‘we’re fighting for Canadians to get the facts,’ said Heritage Minister Pascale St-Onge.


Taxpayers are funding an artificial intelligence (AI) tool to detect ‘misinformation’, the Trudeau government announced, and it claims the project won’t come at the expense of free expression.

Heritage Minister Pascale St-Onge told reporters Wednesday the Université de Montréal (UdeM) is receiving nearly $300,000 to develop a web browser extension to detect ‘misinformation’.

“Polls confirm that most Canadians are very concerned about the rise of mis- and disinformation,” St-Onge wrote on social media. Funding the project ensures “we’re fighting for Canadians to get the facts,” she added.

Project lead and UdeM professor Jean-François Godbout told The Epoch Times the tool will rely on OpenAI’s ChatGPT.

The university claimed the January 6 Capitol breach, Brexit referendum, and COVID-19 pandemic have “demonstrated the limits of current methods to detect fake news which have trouble following the volume and rapid evolution of disinformation.”

The project, already under development prior to federal funding, is part of the Digital Citizen Initiative, a program bent on shaping online information. It supports research contributing to a “healthy information ecosystem,” according to Heritage Canada.

“The system uses mostly a large language model, such as ChatGPT, to verify the validity of a proposition or a statement by relying on its corpus [the data which served for its training],” Godbout wrote in an emailed statement.

After considering information from “reliable external sources,” the program will evaluate whether the content is true or false, the political science professor said, and will provide the reasoning behind its decision along with the references used.
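Based on Godbout’s description, a minimal sketch of what such a fact-checking call to a large language model might look like is below. The model name, prompt wording, and verdict format are illustrative assumptions, not details of the UdeM project’s actual implementation.

```python
# Illustrative sketch only: how a browser extension's backend might ask an LLM
# to assess a claim and return a verdict with reasoning and references.
# The model choice, prompt, and output format are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def assess_claim(claim: str) -> str:
    """Ask the model whether a claim is true or false, with reasoning and sources."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; not confirmed by the project
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a fact-checking assistant. Classify the claim as "
                    "TRUE, FALSE, or UNVERIFIABLE, explain your reasoning, and "
                    "list the sources you relied on."
                ),
            },
            {"role": "user", "content": claim},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(assess_claim("The Brexit referendum was held in 2016."))
```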

“This technology will implement effective behavioural nudges to mitigate the proliferation of ‘fake news’ stories online,” Heritage Canada told The Epoch Times.

While the federal government says the project specifically targets ‘misinformation’, the university says it’s aimed at ‘disinformation’. 

The Canadian Centre for Cyber Security defines ‘misinformation’ as “false information that is not intended to cause harm,” while ‘disinformation’ has intent.

Last September 20, Foreign Affairs Minister Mélanie Joly tabled a UN declaration that Canada spearheaded to counter online ‘misinformation.’ Twenty-eight delegates advocated for definitive action at the time.

Joly urged fellow UN delegates to follow suit and take “appropriate measures … to address information integrity and platform governance.” 

Measures, including legislation and AI, must refrain from “blocking or restricting access to the Internet, eroding privacy, intimidating, harassing or abusing journalists, researchers and human rights defenders … or criminalizing or otherwise punishing the exercise of the right to freedom of expression online,” it said.

The non-binding declaration, it reads, would not contravene freedom of expression and would maintain access to a variety of ideas, backed by “accurate information.”

However, concerns have emerged about political bias in large language models. Elon Musk, who owns X and its AI chatbot Grok, has echoed those concerns.

OpenAI says ChatGPT is “not free from biases and stereotypes, so users and educators should carefully review its content.”

Minister Joly admitted last September that AI “has great potential to harm the integrity of the online information environment” because it mass-produces ‘disinformation’.

Godbout contends his sources are “diversified” and “balanced … not significantly ideologically biased.”

“We realize that generative AI models have their limits, but we believe they can be used to help Canadians obtain better information,” he said.

The Trudeau government’s initiatives to tackle ‘misinformation’ and ‘disinformation’ have been multifaceted, having already passed major bills, such as C-11 and C-18, which impact the information environment.

C-11 revamped the Broadcasting Act, granting greater regulatory powers to the CRTC over online content. It creates rules for the production and discoverability of Canadian content online.

C-18 coerces social media platforms to share revenues with news organizations whose content is shared online. Then-Heritage Minister Pablo Rodriguez said the legislation strengthens the media in a “time of greater mistrust and disinformation.”

These two pieces of legislation were followed in February by Bill C-63, which would enact the Online Harms Act. That legislation would define “hate” in the Criminal Code and could lead to sentences of up to life in prison.

Changes to the Human Rights Act would allow complaints to be filed against individuals accused of posting ‘hate speech’ online and could result in the accused paying the victim up to $20,000.
