The United Nations has released a plan to regulate what it deems mis- and disinformation, hate speech and conspiracies posted on social media. Development of the plan began in September 2022, and the effort featured a prominent "Internet for Trust" conference hosted by the global organization in February 2023.
Following an “extensive and collaborative effort,” the UN released its “Guidelines for the Governance of Digital Platforms” earlier this month.
“Digital technology has enabled immense progress on freedom of speech. But social media platforms have also accelerated and amplified the spread of false information and hate speech, posing major risks to societal cohesion, peace and stability,” said Audrey Azoulay, director-general of the UN Educational, Scientific and Cultural Organization (UNESCO).
“To protect access to information, we must regulate these platforms without delay, while at the same time protecting freedom of expression and human rights.”
The guidelines released by the UN point to five key principles:
Platforms conduct due diligence on human rights.
Platforms adhere to international human rights standards, including in platform design, content moderation and content curation.
Platforms are transparent.
Platforms make information and tools available for users.
Platforms are accountable to relevant stakeholders.
The UN's partners at the World Economic Forum broke down the details of the five principles in an article on the forum's website.
The first, it noted, “says these assessments should take place ahead of elections, to ensure voting processes retain their integrity.” Surprisingly, the WEF linked to a Princeton study suggesting that Twitter's more liberal, left-leaning bias during the 2016 and 2020 U.S. elections may have actually taken votes away from President Donald Trump.
“Our work has been guided by one central requirement: the protection at all times of freedom of expression and all other human rights,” Azoulay said. “Restricting or limiting speech would be a terrible solution. Having media outlets and information tools that are independent, qualitative and free, is the best long-term response to disinformation.”
The second principle, covering content moderation, is a drive to make the internet a more “welcoming place for all,” the WEF notes. “Wherever algorithms are used for moderation rather than humans, they are free of the biases that can make them racist or sexist,” said the controversial organization.
UNESCO's website for the “Internet for Trust” points to “10 toxic spreaders of climate disinformation” who have “186 million followers on social media.”
Similar labels were applied to the so-called “Disinformation Dozen” during the COVID-19 pandemic — 12 social media users deemed to be responsible for spreading “65% of the shares of anti-vaccine misinformation on social media,” according to the Center for Countering Digital Hate, as reported by NPR.
A majority (60%) of 1 million internet users surveyed by UNESCO are apparently concerned about misinformation posted online.
The guidelines are simply a “set of concrete recommendations,” a Reuters fact-check on the story notes, adding that it “may serve as a resource” for international policymakers, regulators and other government bodies looking to enact new legislation.