How President Biden might change social media regulation

The landmark US law that makes social media networks and tech platforms immune from liability for what users post on them will change under President Joe Biden’s forthcoming administration, The Drum columnist Samuel Scott predicts. Here is some insight into what may occur.

If there was one thing on which Joe Biden and Donald Trump agreed, it was that Section 230 of the Communications Decency Act – part of the 1996 US Telecommunications Act – should not continue to exist in its current form.

As I previously discussed at length, the legislation protects any “interactive computer service” in the country from being legally liable for what users post on the platforms. One of the reasons for the law was to protect and grow the nascent internet and world wide web.

Say that I write something ludicrous on social media such as "Oasis was a better band than Blur". The network would not be responsible at all for people reading such a reprehensible idea. But in practice, Facebook users now spread political disinformation, and YouTube’s algorithm has popularised far-right conspiracy theories. And those are just two examples.

Two decades later, Section 230 has come under fire from both the left and the right. The left worries about extremism and disinformation. The right believes it is the victim of political bias. As a result, changing the law might be the last remaining bipartisan issue.

And the rest of the world will have to adapt because global social media and tech companies are based in the United States and must adhere to its laws.

Fact versus fiction on social media

In May, President Trump signed an executive order that directed the Federal Communications Commission (FCC) to clarify the scope of Section 230. The order also directs complaints about political bias to the Federal Trade Commission (FTC) and asks the agency to review whether such alleged actions are “unfair or deceptive business practices.”

“Companies that engage in censoring or any political conduct will not be able to keep their liability shield,” Trump said at the time.

Two editorial observations. First, there is no evidence to support Trump’s accusations of widespread, systemic bias. Repeating something time and time again does not make it true.

For example, look at the daily lists compiled by New York Times tech columnist Kevin Roose. The 10 top-performing posts by US Facebook pages are dominated by Fox News, Donald Trump, Dan Bongino, Franklin Graham and Ben Shapiro.

Second, social platforms take action against racism, bigotry, sexism, violent extremism and political disinformation. It just so happens that those things are usually on the right end of the spectrum these days. Any punitive measures are not the result of the pages being conservative – it is because they are spreading those horrible things.

But how can we save an objective, fact-based reality?

What President Biden might do

Of course, President-elect Biden’s opinion is the one that counts because he won the US presidential election.

In January, he told the editorial board of The New York Times that Section 230 “immediately should be revoked … for [Facebook cofounder Mark] Zuckerberg and other platforms.”

“[The Times] can’t write something you know to be false and be exempt from being sued,” Biden added. “But he can ... [Section 230] should be revoked because [Facebook] is not merely an internet company. It is propagating falsehoods they know to be false, and we should be setting standards not unlike the Europeans are doing relative to privacy.

“[Zuckerberg] should be submitted to civil liability and his company to civil liability, just like you would be here at The New York Times. Whether he engaged in something and amounted to collusion that in fact caused harm that would in fact be equal to a criminal offense, that’s a different issue. That’s possible.”

So, what will happen? Right now, we do not know. US presidents set general policy. But the details will likely depend on who Biden names to positions such as commerce secretary and agencies such as the FCC and FTC.

Tech companies and marketers should read the tea leaves based on whoever he appoints. (Note: Facebook declined to comment for this column. Amazon and Google did not respond to requests for comment.)

How Section 230 might change

Nothing drastic will change immediately when Biden takes the oath of office on January 20. Governments move slowly, and any potential policy shifts will surely face court challenges as well as intense lobbying from tech companies.

However, the signs point towards the internet no longer remaining a Wild West free-for-all without any government regulation or oversight. The first shot fired at Section 230 came when the US created an exception in a 2018 law (FOSTA-SESTA) aimed at fighting sex trafficking. Websites became liable in those specific instances.

Most people would agree that such horrible crimes are sensible exceptions to Section 230. But where will it end? Will there be exceptions for offenses such as libel, slander or suggesting Oasis in music video recommendations? Should platforms – like media companies – be held liable for damages caused by the misinformation they spread?

“Many of the 'social networks' that Section 230 protects are now essentially media companies because of how they edit or sometimes editorialise user-generated content,” Jimmy Hutcheson, chief executive of the alternative music and culture publication SPIN, told me.

“The law is outdated and needs to reflect the current realities of digital media based on all the new technology that exists. There’s likely some middle ground here to protect all consumers, treat everyone fairly and help these social networks maintain their business models.”

No one knows what will come next. At one end of the spectrum, social platforms could merely face small fines for not removing certain material within a specified amount of time. At the other extreme, companies or even individual executives could face civil or criminal penalties for what their networks disseminate.

Remember: algorithms are human editorial decisions written into code. In the end, the humans are responsible.

“Tech platforms want to be an 'open platform' but maintain the censorship rights of a publisher. Moving forward, it's going to be one or the other,” Matt Berman, president of the US digital marketing agency Emerald Digital, told me.

“Section 230 currently says that content could be removed if it is ‘otherwise objectionable’, but this term is broad. What is ‘objectionable’? Democrats and Republicans could choose to better define ‘objectionable’ and determine a clear and transparent view of what type of content could be removed or not.”

My first job in journalism – before I went into marketing here in Israel – was at a weekly newspaper that covered issues including politics, business and crime in Boston. As we put the paper to bed on Friday afternoons, everyone would go over every printed page. We would personally proofread and check every article and every advertisement for anything objectionable.

Of course, such human activity does not “scale.” And the huge profits of tech companies today – including social networks – depend on hiring as few people as possible and letting algorithms or third parties do the rest.

In July, Amazon chief executive Jeff Bezos told the US House’s antitrust subcommittee that third-party sales account for 60% of the company’s product sales. In August, a California court ruled that federal law does not protect Amazon from liability for defective products sold by third parties on the website. In the past, Amazon has won some cases on Section 230 grounds.

Take YouTube, which contains many videos disputing the idea that the Earth is round. A YouGov poll in 2018 found that 34% of Americans aged 18-24 now think that the Earth might be flat.

“For tech companies, Section 230 is likely to be one part of a larger effort to more effectively regulate the tech platforms that have had an adverse impact on the news business, media business, and democracy,” Wakeman Consulting Group principal Dave Wakeman, a marketing consultant in Washington, D.C., who has also worked on political campaigns and messaging, told me.

“Because the antitrust investigations around big tech have been largely bipartisan in the House, this improves the likelihood that some level of common ground can and will be found that will lead to meaningful regulation or actions taken to rein in some of the worst actions of these tech firms.”

What marketers can do

In addition to Section 230’s protections, social platforms have so much power because the marketing industry throws buckets of money at them. Without us, they would go out of business.

Just as Trump makes claims of online bias and election fraud without any evidence, so have marketers assumed for years – also without evidence – that social media is a critical collection of channels to reach customers and build brands.

“Social is where your Above the Line idea walks up to customers, shakes their hands and kisses their babies,” BBH Labs, the R&D arm of the usually sensible agency BBH, recently published in a blog post.

No. The idea that brands should act like human beings is patently false – but marketers keep insisting that it is true despite a complete lack of evidence. People follow friends, family members, politicians, celebrities, sports teams, influencers and entertainers on social media. Not mustard and dishwashing soap.

The Facebook boycott earlier this year during the Stop Hate for Profit campaign failed for one simple reason: money talks while brand purpose walks.

Take all of the disinformation relating to non-existent election fraud that is spreading in the US. Social networks will never be guilted into making significant changes. And as Techdirt always puts it: “Content moderation at scale is impossible to do well.”

Yes, marketers should want the platforms to “do better”. But the way to force a change is not to boycott them but to realize that we do not need to spend money there in the first place. After cutting spend on Facebook and Twitter, Unilever is not going back anytime soon. (My prior idea: social networks should pivot to subscription models with no ads or algorithms at all.)

At an event in England, I once gave a talk on how much of the internet is fake. It seems that even politicians are realising that fact. Just see how a Biden campaign aide described their winning strategy to CBS News reporter Ed O'Keefe.

“We turned off Twitter. We stayed away from it. We knew that the country was in a different headspace than social media would suggest.”

The Promotion Fix is an exclusive column for The Drum contributed by global keynote and virtual corporate speaker Samuel Scott, a former journalist, newspaper editor and director of marketing in the high-tech industry. Follow him on Twitter. Scott is based in Tel Aviv, Israel.
