In the first quarter of 2021, the most shared piece of content on Facebook in the United States was an article by the Sun Sentinel newspaper, syndicated by the Chicago Tribune. The headline read: "A 'healthy' doctor died two weeks after getting a COVID-19 vaccine; CDC is investigating why."
While the full article involved good reporting, the headline was deeply irresponsible, and for many people it was all they saw. I would argue this is a type of misinformation. But if Facebook had removed it, many people would have been very angry about press censorship.
The United States has a problem with health misinformation, from the use of dangerous or untested treatments to the depressing decline in trust in public health institutions. The way people responded to false claims about HIV, Ebola, and measles gave us some early warnings, but the COVID-19 pandemic has underscored the very serious consequences of low-quality information on people's beliefs and behaviors, from mask-wearing to vaccine uptake.
The U.S. Surgeon General issued his Health Misinformation Advisory in July 2021, which said the United States needs a "whole-of-society" approach to mitigating the harmful effects of misinformation, from new education initiatives and more research to platform action and government oversight. The World Health Organization (WHO) continues to build an infrastructure to respond to the "infodemic," and it continues to convene discussions on the topic and publish reports, which highlight the need for new skills and competencies for people working in health departments globally. But is any of this actually moving the needle?
Why Regulation Isn't the Answer to Disinformation
Very often people discuss regulation as the only intervention that will have any significant impact on the proliferation of medical disinformation. In the United States, in particular, activists regularly call for updates to Section 230 of the Communications Decency Act. Passed in 1996, Section 230 states that "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." In practice, this means the technology companies are legally protected from taking responsibility for much of the content they host on their sites.
Yet, the world is swimming in content that is causing real harm. This language is frustrating. Why shouldn't platforms be held responsible for what they publish?
While I can understand calls for policy action, I have deep concerns that the unintended consequences of regulation that has not been fully thought through could create much more serious issues. This concern comes from a decade of studying the "bad stuff on the internet." Over the past few years, the type of misinformation circulating online has evolved. There isn't a significant amount of outright falsehood. Instead, there is a great deal of "gray speech": content based on a kernel of truth, but twisted in a way that makes things confusing and frustrating.
This change is partly in response to platform policies. It is important to acknowledge that the major technology companies have taken a number of concrete steps to limit these types of falsehoods on their sites, and as a result, the tactics and techniques of bad actors have evolved. Most noticeably, all the major platforms developed COVID-related misinformation policies—some better than others—in March and April 2020, and these have limited the number of outright falsehoods circulating on the internet. This has been accomplished through a mixture of partnerships with fact-checking organizations to help make decisions about which content to label, demote, or remove, and by establishing tougher internal moderation policies and more sophisticated detection systems.
The result has been an increase in the type of speech that goes right up to the line of platform content policies but doesn't cross it. For example, people will share first-person video accounts of vaccine side effects. It's often impossible to know whether these are real or staged, and therefore difficult to determine what actions platforms should take with these types of videos. Should platforms assume bad intent and remove them, or assume good intent and leave them up?
False information sharing also happens, for example, when people jump on social media in search of more information about unproven treatments such as ivermectin and hydroxychloroquine. Are these genuine questions, or hoaxsters trying to drive Google searches and purchases? What about rogue medical doctors, people wearing white coats advocating for alternative cures or supplements? It's not clear who bears responsibility for monitoring their content, or for making the call to remove it when it spurs risky or even deadly behaviors.
Those who call for increased regulation often assume that misinformation is obvious, like pornography: you know it when you see it. But medical disinformation isn't always easy to identify, and there isn't one definition that allows for easy detection.
There's also the question of what to do when science is unsettled. At the beginning of this pandemic, suggesting the virus was airborne was considered misinformation. Countries will never be able to have a shared definition of health misinformation that stays relevant during times of changing science and knowledge.
A Call for Information Transparency
What countries do need, however, is transparency. Researchers need to know what is circulating on social networks and in the media, how many people are seeing this content, and what they are doing with it. Right now, researchers have almost no understanding of the information different people consume. With the print media, I can do a quick database search and find all articles that reference ivermectin. It's very difficult for me to do that same search for cable news. It's impossible for me to do that on social media. And the few tools that do exist are being dismantled. For example, the content discovery and social monitoring platform CrowdTangle is on track to be discontinued by its owner, Facebook parent company Meta.
The lack of tools means researchers often have little to no idea which posts are being shared most frequently, and whether those posts are from sites known for conspiracy theories and disinformation, mainstream news outlets, or official government agencies. In the European Union, every quarter, all major platforms have to publish Transparency Reports thanks to a Code of Practice that they have all agreed to adhere to. Take a look: they read beautifully and appear to suggest that there aren't really any problems with speech online.
This is because U.S. platforms are writing their own transparency reports, and that's a problem. Instead, the United States needs independent third-party auditors to write those reports. As with a financial audit, independent bodies should investigate and assess how effectively the platforms manage information flows; remove, label, and demote low-quality information; and prioritize quality information. Instead of governments passing regulation based on hunches rather than data, countries need governments to insist on increased transparency paired with independent auditing mechanisms.
Only this type of oversight would allow us to really understand what people are actually seeing as part of their information diets. Then researchers could compare the actions taken by different platforms in order to decide what best practices should be implemented and scaled. With that level of understanding, and hopefully, a parallel set of discussions at the societal level about the type of speech internet users want to see, regulation has a role to play. But not yet.