The Real Danger of Facebook

The real problem with Facebook is not that they fail to censor enough posts. The real danger is their practice of using analytics to segment users into identity groups.

As a result of a 60 Minutes report, the media is all over Facebook, decrying what former employee Frances Haugen called an emphasis on “profit over safety.”

No kidding!

Of course, Facebook chooses “profit over safety.”  They exist to sell advertising and to turn a profit.  They are not, and should not be, in the business of ensuring safety – whatever that really means.

Facebook has been under attack for virtually any kind of post made by its users.  If someone posts an opinion questioning the current orthodoxy, it’s labeled misinformation.  If a user believes in American exceptionalism and cultural superiority, he or she is branded as an unrepentant racist and white supremacist.  If an individual questions the “settled science” of climate change, that person is called a science denier.  People must even ensure they use the accepted nomenclature to refer to gender.  Women can no longer be called women – they are “menstruating persons” or “people who give birth.”

Community Standards and Censorship

In order to placate the opponents of free speech, Facebook has created so-called “community standards,” which are ever-changing and worded similarly to the Democratic Party platform.  Any deviation from these standards may subject a user to a permanent ban from Facebook, particularly if the user posts a conservative point of view.  Although private companies are not required to follow the Bill of Rights, Facebook selectively chooses which types of speech violate its “community standards” and often relies on technology rather than real people, rendering the context of posts irrelevant to its decisions to suspend or ban a person.  Indeed, Facebook often seems intent on appeasing the 40% of millennials who believe so-called hate speech should be outlawed.

In spite of Facebook’s efforts, critics demand even more censorship of user content by the social media giant.

The content of posts found on Facebook is not the problem, and censorship often results in valid content and opinions being suppressed.  For example, Facebook routinely removed posts claiming COVID-19 was man-made or had escaped from a Chinese laboratory, and users who continued to voice these opinions were permanently banned.  In another departure from constitutional norms, there is no appeal from Facebook’s decisions, even when those decisions rest on incorrect assumptions and information.  In May 2021, Facebook reversed course as evidence emerged that the virus may have escaped from a Wuhan laboratory through an accidental leak.  Once the general media entertained the idea that such a conclusion was credible, if not likely, Facebook stopped censoring posts asserting COVID-19 was created by humans.  Yet it did nothing to remedy the damage done to users it had disabled or suspended for previously expressing the view that COVID-19 may have originated in or escaped from a laboratory.

The Case Against Censorship

The content of posts is not the problem with Facebook.  There is a good reason why the United States guarantees free speech in the First Amendment of the Constitution.  Today’s “crackpot ideas” may be tomorrow’s revelations.  Posts that challenge the moral tenor of the times one year may reflect common sentiment another year.  Most importantly, any kind of censorship involves the suppression of ideas, even if many consider certain ideas reprehensible.  Ideas are far less likely to cause a problem than the selective censorship of certain ideas.  Currently, some are so invested in the dogma of so-called progressivism that they consider any opposing viewpoint not only wrong, but evil.

The remedy for misinformation, crackpot ideas, and reprehensible ideas has always been the truth and a free marketplace of ideas in which opposing viewpoints are not only permitted, but encouraged.  This is where Facebook fails.  Rather than presenting multiple viewpoints to its users, Facebook sorts users into groups based on what they have in common, so they can be more precisely targeted by Facebook’s advertisers.  Because Facebook has found it profitable to divide users into political, social, and racial identity groups, users are rarely confronted with perspectives that differ from their own.  Indeed, Facebook users often find themselves in echo chambers in which every user they see expresses similar opinions.

The Real Danger of Facebook

Why is the practice of segmenting users into identity groups dangerous?

Facebook employs algorithms that segment users by age, political views, socio-economic characteristics, education, and likely even race.  This is useful for advertisers seeking to craft messages that appeal to specific demographics.  Yet it is disastrous for both free speech and the truth.

In normal society, everyone has the freedom to voice their own opinions.  Yet voicing particular opinions is not without consequence.  If people spout misinformation, others can use facts to publicly correct them.  If someone uses socially unacceptable language or engages in discriminatory speech, that person may be shunned or embarrassed by others.  When speech is totally unfettered, so is the right and likelihood of rebuttal.  In a totally free marketplace of ideas, true, good, and acceptable ideas tend to rise to the top.  Discriminatory ideas, misinformation, and profanity are usually squelched or rendered impotent by those offering rebuttals.

Because of Facebook’s algorithms, however, users with similar ideas are placed together.  This includes users spouting misinformation, discriminatory ideas, crackpot ideas, conspiracy theories, and the like.  Rather than being challenged, as they would be in a free marketplace of ideas, bad ideas are amplified, and purveyors of stupid ideas believe they are in the right, because everyone with whom they interact agrees with their assessments.  In Facebook’s balkanized environment, users are presented only with ideas that mirror their own.  Contrary opinions and facts, which might change one’s perspective, are rare.  Bad and dangerous ideas are thus reinforced on Facebook rather than subjected to counter-arguments from others.
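
To make the mechanism concrete, here is a deliberately simplified sketch of similarity-based feed ranking, written in Python.  It is not Facebook’s actual algorithm, which is proprietary and vastly more complex; every name in it (the users, the interest sets, the similarity threshold) is invented purely for illustration.  The structural point is that any feed that ranks content solely by its resemblance to what a viewer already engages with will, by construction, screen out rebuttals.

    # Hypothetical illustration only: Facebook's real ranking systems are
    # proprietary and far more complex.  This toy model shows how ranking a
    # feed purely by similarity to the viewer's own interests filters out
    # opposing viewpoints.
    from dataclasses import dataclass

    @dataclass
    class User:
        name: str
        interests: set          # topics the user already engages with

    @dataclass
    class Post:
        author: str
        topics: set
        text: str

    def similarity(user, post):
        """Overlap (Jaccard) between the viewer's interests and the post's topics."""
        union = user.interests | post.topics
        return len(user.interests & post.topics) / len(union) if union else 0.0

    def build_feed(user, posts, min_sim=0.34):
        """Keep only posts that resemble what the viewer already likes,
        then rank the survivors by that same similarity."""
        matching = [p for p in posts if similarity(user, p) >= min_sim]
        return sorted(matching, key=lambda p: similarity(user, p), reverse=True)

    alice = User("alice", {"climate_skepticism", "small_government"})
    posts = [
        Post("bob", {"climate_skepticism", "small_government"}, "Agrees with Alice"),
        Post("carol", {"climate_science", "renewables"}, "Challenges Alice"),
    ]
    for post in build_feed(alice, posts):
        print(post.author, "->", post.text)   # only bob's agreeing post survives

In this toy model, the post that challenges the viewer never clears the similarity threshold and is never shown; the agreeing post is all the viewer sees.  Scale that behavior up to billions of posts, and the echo chamber described above is the predictable result.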

Although Facebook may mouth fidelity to whatever types of diversity are currently in vogue, they fail to ensure users are exposed to a diversity of ideas.  Instead, they deepen division and polarization by sorting users by certain characteristics and assigning them to electronic ghettos populated solely by like-minded individuals.  There is no debate, no give and take, and no free marketplace of ideas on Facebook.  There are instead hundreds of echo chambers, each populated by people of similar characteristics, similar interests, and similar opinions.  Not only are Facebook users not presented with differing perspectives; the Facebook algorithm prevents this from occurring.  There is no diversity of thought and ideas, and there is little opportunity for one’s views to be assessed and challenged.

As with any other social construct, organic freedom always results in greater progress and a better quality of life than dictatorship does.  When a person, company, or government seeks to censor and suppress certain ideas, they do so solely for their own perceived self-interest, not the interest of the citizenry as a whole.  When a free and diverse exchange of ideas exists, great ideas rise to the top and reprehensible ideas are consigned to the dustbin in the normal course of human interaction.  The danger of Facebook is not that it fails to censor enough ideas or remove enough posts.  The real danger is that Facebook actively works to prevent users from being exposed to different ideas.