Biden surgeon general releases report on combating “health misinformation” on social media


President Joe Biden’s Surgeon General released a report on Thursday titled “Confronting Health Misinformation,” recommending that Big Tech companies such as Facebook, Twitter, and YouTube impose consequences for accounts that violate the private companies’ own policies. 

In a section titled “What Technology Platforms Can Do,” Surgeon General Vivek Murthy outlined a list of suggested “product design and policy changes” that technology platforms should focus on to “slow the spread of misinformation.” According to the Office of the Surgeon General, health misinformation is defined as “information that is false, inaccurate, or misleading according to the best available evidence.”

“Health misinformation is an urgent threat to public health. It can cause confusion, sow mistrust, and undermine public health efforts, including our ongoing work to end the COVID-19 pandemic,” said Murthy in a statement Thursday. “As Surgeon General, my job is to help people stay safe and healthy, and without limiting the spread of health misinformation, American lives are at risk. From the tech and social media companies who must do more to address the spread on their platforms, to all of us identifying and avoiding sharing misinformation, tackling this challenge will require an all-of-society approach, but it is critical for the long-term health of our nation.”

One item in this list is to “Prioritize early detection of misinformation ‘super-spreaders’ and repeat offenders,” with Big Tech giants directed to “[i]mpose clear consequences for accounts that repeatedly violate platform policies.”

The U.S. Surgeon General also called for the platforms to “proactively” provide information to users from “trusted and credible sources” in order to counter supposed misinformation.

Other items include the redesign of algorithms to “avoid amplifying misinformation,” the inclusion of “suggestions and warnings,” and providing researchers with “access to useful data” so they can “analyze the spread and impact of misinformation.”

Murthy also called on Big Tech companies to increase moderation staffing, to “address misinformation in live streams,” and to “publish standardized measures” of misinformation data.

The full list of “suggestions” is as follows:

  • Assess the benefits and harms of products and platforms and take responsibility for addressing the harms. In particular, make meaningful long-term investments to address misinformation, including product changes. Redesign recommendation algorithms to avoid amplifying misinformation, build in “frictions”—such as suggestions and warnings—to reduce the sharing of misinformation, and make it easier for users to report misinformation.
  • Give researchers access to useful data to properly analyze the spread and impact of misinformation. Researchers need data on what people see and hear, not just what they engage with, and what content is moderated (e.g., labeled, removed, downranked), including data on automated accounts that spread misinformation. To protect user privacy, data can be anonymized and provided with user consent.
  • Strengthen the monitoring of misinformation. Platforms should increase staffing of multilingual content moderation teams and improve the effectiveness of machine learning algorithms in languages other than English since non-English-language misinformation continues to proliferate. Platforms should also address misinformation in live streams, which are more difficult to moderate due to their temporary nature and use of audio and video.
  • Prioritize early detection of misinformation “super-spreaders” and repeat offenders. Impose clear consequences for accounts that repeatedly violate platform policies.
  • Evaluate the effectiveness of internal policies and practices in addressing misinformation and be transparent with findings. Publish standardized measures of how often users are exposed to misinformation and through what channels, what kinds of misinformation are most prevalent, and what share of misinformation is addressed in a timely manner. Communicate why certain content is flagged, removed, downranked, or left alone. Work to understand potential unintended consequences of content moderation, such as migration of users to less-moderated platforms.
  • Proactively address information deficits. An information deficit occurs when there is high public interest in a topic but limited quality information available. Provide information from trusted and credible sources to prevent misconceptions from taking hold.
  • Amplify communications from trusted messengers and subject matter experts. For example, work with health and medical professionals to reach target audiences. Direct users to a broader range of credible sources, including community organizations. It can be particularly helpful to connect people to local trusted leaders who provide accurate information.
  • Prioritize protecting health professionals, journalists, and others from online harassment, including harassment resulting from people believing in misinformation.
