
Going viral? How COVID-19 turbocharged snake oil and quackery

This article is from the first chapter of Silent Killers, our editorial series on chronic disease.

A rising wave of health misinformation is spreading across social media, with networks of doctors, influencers and patients touting fake cures for chronic diseases like diabetes and cancer.

The techniques used to disseminate misinformation during the pandemic have mobilized previously disparate groups behind unverified health claims. The World Health Organization calls it an “infodemic,” and social media giants have responded — with limited success — by removing content, banning users from posting false statements and linking readers to reputable scientific information.

Now, the misinformation playbook is being used to repurpose supposed alternative COVID-19 cures for other diseases, or to claim that approved vaccines and treatments for the viral disease are unsafe.

In one video shared by a Facebook user to her 44,000 followers, a U.S. woman tells viewers where they can buy the antiparasitic ivermectin without a prescription. At one point she whispers conspiratorially to the camera “ivermectin doesn’t just cure [COVID-19]… it kills cancer too.” In India, a prominent anti-vaccine doctor, who has been banned by Facebook, Twitter and YouTube, has turned to other channels to spread claims that his dietary plan will cure diabetes.

“Ivermectin and other antibacterials are being promoted, particularly on Telegram, as cures for a variety of health issues — particularly different types of cancers,” said Kyle Weiss, a senior analyst at Graphika, which models misinformation on social media. “We expect that to continue.”

Now, in Europe, the first tentative steps are being taken to address the problem through a coordinated forum on misinformation about health and non-communicable diseases, organized by the World Health Organization’s European Office for the Prevention and Control of Non-communicable Diseases. The forum brings together experts, civil society and governments with the goal of developing a toolkit to help address this content. The WHO plans to send the document to governments in January.

It’s not clear whether such efforts will succeed. The scale of the challenge is massive — researchers are unable to accurately quantify the extent of the problem in part because people involved in spreading health misinformation often shift from public platforms to private or encrypted channels of communication. Governments don’t know how to respond, and social media companies already have their hands full dealing with COVID-19 misinformation.

The pandemic has put addressing misinformation about non-communicable diseases, or NCDs, on the backburner, said Kremlin Wickramasinghe, from the team at the WHO tasked with setting up the new forum.

It hasn’t always been easy to get people to listen. Years of work in the field have taught Wickramasinghe that these conditions are often deprioritized. In the European region, major NCDs account for more than 80 percent of deaths but usually receive less than 10 percent of funding, he notes.

“We avoid it because it’s too big, too complex, multifaceted and long term. I think it’s the same for funding, the same for prevention and the same for misinformation,” he told POLITICO.

The scale of the problem 

Misinformation on social media about public health isn’t new. For years, a loose network has spread health misinformation about purported cures — ranging from vitamin intravenous therapy for cancer to “skinny” teas for unclogging arteries. While pandemic-related misinformation may have overshadowed some of these false health claims, experts see new levels of engagement.

Neville Calleja, head of the department of public health at the University of Malta, has focused his pandemic work on COVID-19 misinformation. Yet he also observes that misinformation about other health topics, such as inaccurate claims about cannabis use, is “flowing again.” “We’re zooming back to good old misinformation around other things,” he said.

The challenge, said Wickramasinghe, is putting hard figures on the problem — and the WHO doesn’t have the data. While some information is freely accessible, accessing detailed analysis on these social media trends is difficult and expensive. And that doesn’t even touch on the vast trove of mostly inaccessible content shared on private groups and platforms such as Telegram.

“The social media companies are very secretive,” said Judit Bayer, professor of media law at the Budapest Business School. Accessing data from them can be difficult, with many hoops to jump through. Even firms like Graphika, with sophisticated tools for charting networks of misinformation, face limits on what they can access.

For their part, social media firms defend their practices. “Our goal is to lead our industry in transparency, and we’re continuing to share data publicly through efforts like our global COVID-19 Trends and Impact Survey which we believe is the largest public health survey in history,” said a spokesperson for Meta, which owns Facebook and Instagram.

Twitter says it has expanded its work during the crisis with health officials and agencies around the world, and Meta is working “closely” with health organizations like the WHO and the U.S. Centers for Disease Control in building policies and staying up to date on emerging health threats.

While the true scale may be hard to quantify, there is broad agreement that misinformation can do serious harm. Research on cancer treatment information has found that inaccurate articles attract more engagement than factual ones. Scores of social media posts urge people not to take statins, drugs used to lower cholesterol, claiming that the drugs can kill you and are part of a Big Pharma agenda to maximize profits.

Anti-science agenda

The evidence is enough to convince researchers, governments, civil society and social media platforms that they need to tackle the problem.

One of the most obvious solutions is to fight fake information with the truth. 

It’s a tool that’s been used by many organizations during the pandemic: Instagram directs users to government websites when they post content related to COVID-19; Twitter points users to official public health information; and TikTok has a similar policy.

But in many cases that’s not going to work, say experts.

That’s because there is a “general crisis in trust,” said media law professor Bayer. “It’s a vicious circle. You cannot convince people to trust the WHO if they don’t trust the government,” she said. “Simply sharing the true information with someone who believes in the false information won’t change their opinion.”

This anti-science sentiment appears to be growing and is reinforced by the real-life networks of friends that people keep.

When correct information isn’t enough, another response is so-called de-platforming: banning authors who spread false information from social media or removing their posts. It can work, but it’s not a silver bullet, since blocked users can create new accounts under different identities or move to encrypted platforms.

Governments, for their part, are often at a loss for how to respond. As part of the WHO’s forum on misinformation, the international organization has been speaking to governments about how they regulate this content. Asked what governments wanted from the WHO, Wickramasinghe said they mostly want to know what other countries are doing. “They don’t have the specific questions,” he said. 

In some cases, governments are taking action. The U.K. is, for example, planning to ban all online paid advertising for foods that are high in fat, sugar and salt. While the efficacy of such bans is disputed, the U.K. law addresses the risk of misinformation about these foods being spread — by eliminating the content altogether. 

“The most sustainable way forward that I see is that hopefully, we get more investment in infodemic management,” said researcher Calleja. In practice, this would mean increasing the ability of governments and researchers to automate social media monitoring. 

This could involve creating roles for researchers to work full-time on tackling health misinformation, rather than going back to their day jobs after the pandemic. 

“Policymakers are slowly coming on board with this,” he said. “But they haven’t walked the talk everywhere.”

This article is produced with full editorial independence by POLITICO reporters and editors.

By Ashleigh Furlong
December 1, 2021
