This article is republished from The Conversation under a Creative Commons license. Read the original article here.

Audio streaming giant Spotify is getting a crash course in the tension between free speech and the need to protect the public from harmful misinformation.

The Swedish-founded platform, which has 400 million active users, has faced a hail of criticism over misinformation broadcast on its most popular podcast, the Joe Rogan Experience.

Rogan, a former television presenter and commentator for the Ultimate Fighting Championship, has argued that healthy young people should not get a COVID vaccination. This contradicts medical advice from governments all over the world, not to mention the World Health Organization.

A recent episode of his podcast, featuring virologist Robert Malone, drew criticism from public health experts over its various conspiracist claims about COVID vaccination programs.

There were widespread calls for Spotify to deplatform Rogan and his interviewees. Rock legend Neil Young issued an ultimatum that Spotify could broadcast Rogan or Young, but not both.

Spotify made its choice: the Joe Rogan Experience is still on the air, while Young’s music is gone, as is that of Joni Mitchell and Nils Lofgren, who removed their content in solidarity.


Spotify’s response

Spotify co-founder Daniel Ek has since promised to tag controversial COVID-related content with links to a “hub” containing trustworthy information. But he stopped short of pledging to remove misinformation outright.

In a statement, Ek said:

We know we have a critical role to play in supporting creator expression while balancing it with the safety of our users. In that role, it is important to me that we don’t take on the position of being content censor while also making sure that there are rules in place and consequences for those who violate them.

Does it go far enough?

Freedom of expression is important, but so is prevention of harm. When what is being advocated is likely to cause harm or loss of life, a line has been crossed. Spotify has a moral obligation to restrict speech that damages the public interest.

In response to the controversy, Spotify also publicly shared its rules of engagement. They are comprehensive and proactive in helping to make content creators aware of the lines that must not be crossed, while allowing for freedom of expression within these constraints.  

Has Spotify fulfilled its duty of care to customers? If it applies the rules as stated, provides listeners with links to trustworthy information, and refuses to let controversial yet profitable content creators off the hook, this is certainly a move in the right direction.

Platform or publisher?

At the crux of the problem is the question of whether social media providers are platforms or publishers.

Spotify and other Big Tech players claim they are simply providing a platform for people’s opinions. But regulators are beginning to say no, they are in fact publishers of information, and like any publisher must be accountable for their content.

[Image: logos of big tech platforms. Tech platforms like to claim they’re not publishers. Pixabay, CC BY]

Facebook, YouTube, Twitter and other platforms have significant power to promote particular views and limit others, thereby influencing millions or even billions of users.

In the United States, these platforms have immunity from civil and criminal liability under Section 230 of the Communications Decency Act of 1996, which shields sites that host user-generated content. As US corporations, their conduct is governed primarily by US law.

It is an ingenious business model that allows Facebook, for example, to turn a steady stream of free user-posted content into US$28 billion in quarterly advertising revenue.

Established newspapers and magazines also sell advertising, but they pay journalists to write content and are legally liable for what they publish. It’s little wonder they are struggling to survive, and little wonder the tech platforms are keen to avoid similar responsibilities.

But the fact is that social media companies do make editorial decisions about what appears on their platforms. So it is not morally defensible to hide behind the legal protections afforded to them as platforms, when they operate as publishers and reap considerable profits by doing so.

How best to combat misinformation?

Misinformation in the form of fake news, intentional disinformation and misinformed opinion has become a crucial issue for democratic systems around the world. How can this influence be combated without compromising democratic values and free speech?

One way is to cultivate “news literacy” – the ability to discern misinformation. This means making a practice of sampling news from across the political spectrum and weighing the different accounts against one another, rather than relying on a single outlet. Most of us confine ourselves to the echo chamber of our preferred source, avoiding contrary opinions as we go.

If you are not sampling at least three reputable sources, you’re not getting the full picture.


Social media companies, meanwhile, should invest in artificial intelligence (AI) tools to sift the deluge of real-time content and flag potential fake news. Some progress has been made in this area, but there is room for improvement.

The tide is turning for the big social media companies. Governments around the world are formulating laws that will oblige them to take more responsibility for the content they publish. They won’t have long to wait.
