by Ben Bailess, eMerit

Manipulating Public Opinion for Profit?

The default response from website owners when taken to court over user-submitted content is generally “bring your checkbook with you, because you’ll be paying our attorneys to defend us” – a line that actually appears on the FAQ page for RateMDs [1]. Whoa. Why so brazen? Because of a highly-contested, lesser-known piece of legislation commonly referred to as “Section 230”.

 

Section 230 is part of the Communications Decency Act (CDA) of 1996. It says that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” [2]. At the highest level, this means a website owner has legal immunity from its own users’ content – if one of its users submits illegal or damaging content, the website operator is not liable.

 

This legislation gave website operators a shield to operate behind. The Internet was the wild west again – start up a website and let your users go wild while you relax, comfortably out of the grasp of the angry mob.

 

But if I learned anything from Survivor, it’s that immunity comes at a price. In this case, the price is ending up as a legal guinea pig. Section 230 has come under considerable fire, and has become less of an impregnable shield with each case. Although Section 230 forbids revocation of immunity on the basis of self-regulation and the exercise of editorial functions, the line between unapologetic immunity and devastating liability has become increasingly blurry.

 

This is exemplified in part by a recent ruling against RipoffReport.com, where revocation of Section 230 immunity was upheld by use of a “neutral publisher” test. If that sounds like a novel legal test to you, you’d be right. The concept of a “neutral publisher” is, in the words of Eric Goldman, “an oxymoron and incoherent” [3]. The ruling cites FTC v. Accusearch:

 

“We therefore conclude that a service provider is ‘responsible’ for the development of offensive content only if it in some way specifically encourages development of what is offensive about the content.” 570 F.3d at 1199

 

Up until then, the concept of revoking Section 230 protections on the basis of merely encouraging offensive content would have seemed fantastical. But here we are. Two more nails in the coffin.

 

So Ripoff Report may be liable because it encouraged offensive content. (The case is still pending; the ruling merely denied Ripoff Report’s Motion to Dismiss.) I admit, it’s no surprise that Ripoff Report sold itself as a safe haven for negative commentary. But what precedent does this set for website operators in an indeterminate state of neutrality (read: the rest of the Internet)?

 

Now, I’m aware that Ripoff Report has few friends. OK. It has no friends. But the case is nevertheless important for one major reason. Encouraging an offensive type of content can land you in hot water. We know that now. But how about encouraging only positive content?

 

Again, Eric Goldman puts it well [3]:

 

In this case, the court misinterprets “neutrality” to mean that soliciting only negative reviews wasn’t “neutral.” By the same implication, then, a website that only permitted positive reviews wouldn’t be “neutral” either […]

 

Of course, people will line up to sue for defamation (the “mob” mentioned earlier), but who would sue for positive reviews? How about free speech advocacy groups? Consumer protection groups? The FTC?

 

Many review services that host their subscribers’ reviews on their own site(s) post only the positive stuff as a matter of course. They filter out all the negative reviews because they suppose those reviews might harm their subscribers’ reputations. As short-sighted as this policy might be, I seriously doubt that these services have considered that their non-neutral manipulation of perceived public opinion could border on false advertising.

 

Ouch. A company or service is at risk for false advertising if its practices could be considered deceptive toward its stakeholders. In this example, the stakeholders are the public – prospective patients who will likely make significant healthcare decisions based on the information they’re presented.

eMerit is not a fan of filtering review services, for a long list of reasons. Those sites make all review platforms look self-serving and manipulative. And insofar as these services actively and intentionally deceive the public and prospective patients, contaminating public opinion along the way, they harm not only the public and their industry, but their subscribers as well.

 

The saving grace of reviews, however, is that they rely on trust. And trust has two important characteristics – it has to be earned, and it’s extremely fragile. Break that trust just once, and you can spend your life and fortune trying to earn it back. But for those of us who take that responsibility seriously, lots of good can come from it – for the public, our industry, and for those who entrust their reputations to us.

 

After all, we all have a reputation to uphold.
