Dialogues on the Price of Free Speech: Asocial Media



By the Fabulous Dadbots: Mark M., Mark O., Dave S., Dennis C., Paul C., and Geoff Carter


The 1996 Communications Decency Act has at its heart Section 230:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

–The 1996 Communications Decency Act, Section 230

The law dates to a time before social media platforms and before the explosion of the internet into daily life via the smartphone.  At that time, primitive message boards, accessed via dial-up, were given a pass on what we now refer to as “content moderation”.

It’s a different world now, in multiple ways.  

Social media dominates the internet, and is swamping America’s timelines with both misinformation (unintentional) and disinformation (intentional). This is true of the “mainstream” giants like Facebook and Twitter, as well as the right-wing dark web of Truth Social, Parler, Gab, Gettr, and similar “alternative” sites.

Social media has vastly changed, and monetized, the concept of the message board. The platform companies maintain complex algorithms designed to maximize the users’ engagement with the platform, both in terms of time spent and psychic commitment. 

Social media is enabling, no, driving, a level of paranoia and conspiratorial belief unparalleled in America’s history. Conspiracy theories and alternative realities have always been an important element in US politics, throughout the 20th century and even earlier. Father Coughlin, the America First movement, Joe McCarthy and the Red Scare, and the John Birch Society were all expressions of what Richard Hofstadter referred to as “the paranoid style in American politics”. But the supercomputer sitting in every American’s pocket, and the beckoning infrastructure of social media platforms, are turning ordinary citizens of today into full-blown conspiracy nuts.

Those algorithms which drive user engagement are, not by coincidence, leading more and more Americans into right wing paranoia. Facebook has conducted experiments in which they set up a “vanilla” user profile, with no overt liberal or conservative leaning. They subject the profile to the algorithm, and within a short time, following Facebook’s automatically generated suggestions, the “user” falls into a radically conservative rabbit hole. Frances Haugen, the Facebook “whistleblower” who testified before Congress, described this on Meet the Press:

“And yet in March of 2021, they had already run the same study at least four times where they took a blank account. This is an account that doesn’t have any friends. It doesn’t have any interests. And they followed some center right, center left issues. And then all they did was click on the content Facebook gave or follow groups Facebook suggested. And in two weeks they went from, you know, center topics, like Fox News, to white genocide just by clicking on the content.”

Meet the Press, January 1, 2023

Facebook, meanwhile, rejects even simple system tweaks. For example, we’ve all seen crazy links which are passed from user to user. This is easy to do; you simply share the link to your timeline. Facebook’s content moderation team suggested that once a piece of content has been pushed from User A to User B, and then on to User C (who has no relation to User A), the third-hand poster should be forced to “copy and paste” rather than simply being allowed to forward the link. This would be quite effective in slowing the spread of mis- and disinformation. But Facebook refused to allow the change, because it would cut down on user engagement, time spent online, and ultimately, profits.

Another interesting twist is that an “angry face” emoji has FIVE TIMES the influence over the algorithm as a “like” or “heart” emoji. Anger drives engagement.

We’ve seen the results in real life. Most famously, January 6th. But don’t forget Pizzagate, where an armed individual, in thrall to the conspiracy theory, threatened a Washington, DC pizza joint because he believed that Hillary Clinton and other elites were abusing children in the basement.

We’ve moved far beyond the internet of 1996, when a friendly Congress gave the sponsors of message boards a break, and held them harmless from any damaging or untrue content posted on their platforms. This “hold harmless” exception has prevented social media platforms from facing any type of consequences in the legal system. Every other industry in the nation–including other purveyors of news or information–lives in an environment where a legal framework helps keep their actions in check. Produce a dangerous product? You will be sued. Publish lies? You may be subject to libel or defamation lawsuits. Even gun manufacturers are starting to feel the heat from the courts. But if your platform allows lies to flourish on the internet?  No legal consequences whatsoever. 

Where do we draw the line with the social media giants? Should they be held responsible for the dangerous content which they not only allow to be posted, but seemingly encourage? Or do we continue with a strict hands-off “freedom of speech” approach, and allow online disinformation to continue to poison our political discourse?

–Mark M.


Here’s a dumb question for y’all: why not go after the sources of dangerous disinformation as opposed to the social media outlets where that information appears? Kind of like the legal proceedings against Alex Jones. Obviously, you can’t prosecute everyone who creates disinformation content; there are just too many sources. But a handful of high-profile prosecutions, like the Alex Jones case, might do the trick of suppressing the massive wave of disinformation now prevalent.

I offer this up as an alternative to platform content moderation because I have no faith in who we might assign content moderation to. A government agency? No thank you. A corporation-created board of review? Nope. A non-profit moderation panel? Who’s funding that non-profit? Or perhaps an AI-powered moderation bot? Who would create that black box, and who would they be responsible to?

The long-term solution to the mis- and disinformation problem might be a massive education campaign for users (get ’em young, I tell ya). The goal being to teach skills on how to use the web safely. There are already programs available to teach seniors to recognize scams of all types that target them. The whole populace needs skills to recognize bullshit when they see it. Bullshit filters are more necessary now than ever. Critical thinking and skepticism are skills that can be sharpened by everyone. In the marketplace of ideas, let the user beware?

MarkO


Wacked social media content – MM presents another rock and hard place issue, this time between protecting 1st Amendment rights and limiting the publication of obviously false, dangerous, hateful content. I agree with MO —  I’m skeevy about whoever would get the role of “decider” about which content gets banned etc. At this rate I’m afraid it would be Rep. Jim Jordan, the Chair and Chief Ass Clown of the brand new “Select Sub-Committee on the Weaponization of the Federal Government” (wtf right?)— and Chair of the House Judiciary Committee. 

For sure it is a good idea to go after the actual producers of disinformation a la Alex Jones; but it’ll be awful dang tough to flush some of those ornery critters out of their hidey-holes. We still don’t have any clue about the identity of “Q”, the person or persons behind those QAnon conspiracy theories.

Those wacky posts, completely unfounded and unproven on any level, helped foment the Jan. 6 insurrection and, last weekend, inspired thousands of Brazilians to pull a copycat with an even bigger insurrection—another misguided adventure in stupidity based on completely unfounded rumors given credence through social media. 

Maybe some kind of campaign to educate the public would help? I knowwww, it’s kind of a cop-out solution, but, man, people reaallly need to learn how to fact check and look at actual evidence and engage their bullshit meters.

DC


If Steve Bannon ever dies – and let’s hope it’s in a prison jumpsuit – his epitaph will bear his most famous strategy: “Flood the zone with shit.” When it comes to sources of disinformation, we are up against motivated, resilient, and well-resourced bad actors. I agree that there’s little hope of choking off the flow at the source.

Dennis and Mark O argue reasonably for improving, or at least friggin’ activating, the bullshit filter of the general population. I see this as a policy that everyone can feel good about, but which has little chance of working. It reminds me of discussions that I’ve had with a former co-worker – a religious, conservative guy, a current church usher, and an all-around nice guy. Every time I complain to his NRA-lovin’ ass about the latest mass shooting, his response is that “poor parenting” and “bad media influences” are the real cause of the problem. The gun? It’s just the tool. That may be! But what is the actual policy solution that flows from that? How do we address “poor parenting” and “bad media”?

Similarly, we can all nod sagely in favor of improved propaganda-detection education. But I think we all realize that internet propaganda and lies that confirm our political biases are so seductive, no amount of bullshit-detection education is going to help. It’s like the DARE program for 5th graders—the schools and the cops feel great about it, but it has zero benefit to those kids 8 or 10 years down the road.

I’m in agreement with both Dennis and Mark on the impossibility of appointing an effective and neutral content moderator. That’s a unicorn, kind of like the “fair and balanced” reporting on Fox News.

However, I think there may be some help on the horizon via the lawsuit angle. They asked Willie Sutton why he robbed banks, and his succinct reply has been memorialized by every personal injury lawyer since David Gruber was in diapers: “Because that’s where the money is.” The social media giants do have the money, and that is one concept that the legal system has distilled to its essence. If Section 230 could be repealed, the lawyers and courts could start to hold the companies accountable for the damaging lies spewed on their platforms.

Of course, not every lie would be actionable. Take the everyday, normal, political, pants-on-fire lie. For example, Sarah Palin didn’t love Obamacare. So let me refresh your palate with her full statement:

The America I know and love is not one in which my parents or my baby with Down Syndrome will have to stand in front of Obama’s “death panel” so his bureaucrats can decide, based on a subjective judgment of their “level of productivity in society,” whether they are worthy of health care. Such a system is downright evil.

Washington Post, “Sarah Palin, Death Panels and Obamacare,” June 27, 2012

Admit it, you enjoyed that! Anyway, it’s a doozie of a lie, with a barely discernible grain of truth, but there’s nothing actionable. This is politics as usual–speech protected by the First Amendment. And, more to the point, there is no person or organization with legal “standing” who could sue over this. She’s inaccurately and unfairly ripping on a political proposal, and there’s simply no individual or entity that can claim actual damages.

How about something with a more specific target? Take the QAnon claims that Hillary Clinton and Joe Biden literally drink the blood of children. Here we have individual political figures who are being maliciously and viciously defamed. If American libel law would allow it – and since they’re public figures, I’m not sure it does – then HRC and Biden should be able to bring action against Facebook for allowing this lie to be propagated. I’d make a similar argument for the lies about the Sandy Hook families. We have identifiable victims who can demonstrate tangible damages from the spreading of the lies. The “public figure” libel exception would not hold.

The final type of lie I’ll discuss is violent, inflammatory rhetoric. Plenty of this was flying around prior to January 6th. The police officers who were injured should be able to go after the platforms for allowing these morons to jazz each other up with incendiary rhetoric.  

Finally (I can hear applause). Wouldn’t this have a chilling effect on the type of statements that are posted on social media? Um, yes, it most certainly would. And that is the intended effect. Remember, these are privately owned networks. There is no constitutional right to post anything on Facebook. If the platform decides that your statements are so inflammatory that they cause issues for the platform, they have every right to censor.  If the platform goes too far, you may be aggrieved, but your rights are not being violated.

–Mark M.


Keep going Mark!  Letter to the Journal-Sentinel?   

–Dave S.


Guess we’re at the point where you and I have to agree to disagree. I oppose setting up the social media platforms as the entities that should be moderating content. I don’t trust the Zuckerbergs and Musks of the world with that task. If they are going to be targeted with lawsuits, they will have a tremendous incentive to reject any opinions deemed scandalous by our litigious society. I stand by my previous post, where I proposed enhanced prosecution of sources of disinformation, a la Alex Jones.

By the way, I feel agreeing to disagree is a perfectly valid way to end a discourse or debate. There doesn’t have to be a winner and a loser. In fact, I’d say the ability to call an amiable truce in public discourse is absolutely critical to a functioning democracy.  So Cheers!

MO


My lawsuit reasoning is that the disinformation does not cause much damage if its distribution is limited. It only has the potential to damage society once Facebook et al spread it hither & yon. So it is the DISTRIBUTION agent that gets hit with the lawsuit.   

Plus, ka-ching, they have the money.

Agree to disagree? A foreign concept! Nay, all must bow before the bot collective intellect! Counter revolutionaries, internal or external, must be destroyed forthwith.

–Mark M.


Comrade Mamerow, I agree with you that if the goal is to maximize award proceeds to support the Cause, we must go after the bourgeois capitalists with the deep pockets. If however the goal is to reduce counterrevolutionary propaganda (disinformation), I feel we must strike fear into the misguided lumpen proletariat who is generating it. For while the unfortunate proles may have only modest net worths, those meager nest eggs are of equal importance to them as the misbegotten billions accumulated by the bloodsucking masters of our capitalist mode of information dissemination.  Solidarity!

MO  


Haha. Nice, MO. Well-said, fellow bots. Quite an entertaining thread. As but one beam of light in the bright shining collective consciousness of the Dadbots, I agree that agreeing to disagree  can elevate the level of discourse.  

That said, I wonder — could the threat of lawsuits actually incentivize many of the less legally savvy “lumpen misguided proletariats” to cease and desist disseminating their “truth” about weird shit — such as “…all them baby blood drinking pedophile liberal politicians…” (to quote a Jan. 6 insurrectionist)? I mean, they really are misguided — they actually think this stuff is true.

For sure, cynical operators like Steve Bannon and Alex Jones might change their tune, because they know they are lying their fat asses off, and they know they have a lot of dollars not yet in their offshore Cayman bank accounts to lose. But the bulk of the goofnuts filling up various “baskets of deplorables” are mostly true believers.

Can people be sued for libel if they are passing on information they believe to be true? IDK. And the threat of a lawsuit is not a huge blip on the radar of the typical American knucklehead.

Thus, I fall back onto the weasel option of the public education campaign. This campaign, funded through the K.N.A. (the Knucklehead Reduction Act), could raise awareness of the potential loss of one’s assets — such as double-wide trailers, or ’87 Dodge Ram pickups fitted out at some expense with grey primer, gun racks, and Confederate flag decals… um, yeah, and other stereotypical white trash possessions — while also teaching people basic bullshit-detecting skills.

Hey, such campaigns actually have worked to reduce smoking and drunk driving.

DC


The rednecks don’t know their ass from a libel case.  But the suits at Facebook do.  That’s why we sue Zuckerberg!  Not the patriots.  

–MM


‘tis an intriguing topic. Slowly sinking in here in 608 land (and lots of laughs and lines, btw… KNA! (think Dennis meant KRA, but let’s stick w/ KNA—as history unfolds it will be a ’bots insider thing)). Though I’m starting to come around with MM, I think a compromise may be in order here (part of agreeing to disagree)—namely, let’s proceed with KNA. It will reduce metal-in-microwave explosions, babies put to bed with bottles (massive tooth decay), toilet paper hoarding, etc.… but in parallel, we lawyer up.

I like to zoom way out—always longed to be part of a think tank—stroke chin and observe. Vis-à-vis the almighty IoT (Internet of Things), we (the world) are but a child in a forest—no clue which way is north, only watching, not understanding all the ramifications—figuring out why the moss grows on one side of trees, and how that is helpful when lost, is a long way off. But we have precedents—in this case, libel. Here is where I come around. GM made the Corvair—unsafe at any speed—it was inadvertently a vehicle of death and destruction, and Ralphie baby forced safety consciousness on an entire industry. The same is true for the algorithm at FB that lets these modern-day Idi Amins cavort far and wide, hacking mercilessly with their cyber machetes. Someone is going to tag that algorithm as unsafe. That someone, ladies and gentlemen (drumroll), is Mark Walter Mamerow.

(did I guess right at the middle name?)

-D.


Whoa, the bots are rolling. Sorry I’m so late to the game—I was researching AI sex dolls. I’m behind criminalizing the onslaught of malicious disinformation. The legal precedent is, of course (casting our frayed memories back to Citizenship class), a clear and present danger: namely, you can’t yell “fire” in a crowded theater. Civil actions will nail those gasbags—like Alex Jones—stupid enough to target specific people, but criminalization of spreading malicious discourse, especially a law which targets the source as well as the disseminator, would do wonders. And there is a precedent for going after the distributors.

Those of you who remember the Pentagon Papers controversy, dramatized in Spielberg’s The Post, will recall that the feds went to court to stop the Washington Post from publishing the Pentagon Papers, which they maintained jeopardized national security. Aren’t QAnon loons and Proud Boy neanderthals jeopardizing national security, too? The legal precedents are there, but the political will is not. I think this goes back to the unfortunate truth that money talks, and unfortunately, that’ll ensure this bullshit will walk. As MM said, anger and drama fuel clicks, clicks fuel revenue, and revenue fuels politics.

Geoff


It’s my understanding that the Governor Whitmer kidnapping caper was an FBI sting operation. The FBI supplied the “plan” and talked the stooges into pursuing it. These stooges may have been genuinely dangerous people who should be taken off the streets. I don’t know. Anyway, a number of them will be doing serious time in the pen, so I’d say the FBI was effective in that case. Trump and other would-be authoritarian leaders are indeed dangerous and should be watched carefully. I don’t know that gagging them is the best approach. Perhaps constantly dogging them with lawsuits and law enforcement action would be better. I defer to the collective bot wisdom on that.

MO


You’re probably right, Mark; it just gets frustrating watching these jerks slowly chipping away at our freedoms while using the Constitution as a shield–an irony Joseph Heller would be proud of. I guess the feds shouldn’t go ahead and do precisely what they’re accusing them of doing… but I do wish there were a way to shut these guys up.

Geoff