To illustrate how useless the newly unveiled Facebook oversight board is likely to be, consider a few of the fake news stories that recently spread across the platform, as reported by Business Insider. None of these stories was even remotely true. Yet none of them would have been removed by the oversight board. You see, as Mathew Ingram pointed out, the board's purview is limited to hearing appeals over content that Facebook has already taken down; it has no authority over false content that the company chooses to leave up.
Now, it’s fair to acknowledge that Facebook CEO Mark Zuckerberg has an impossible task in bringing his Frankenstein’s monster under control. But that doesn’t mean any actual good is going to come of this exercise.
The board, which will eventually be expanded to 40 members, includes a number of distinguished people. Among them: Alan Rusbridger, the respected former editor of The Guardian, as well as international dignitaries and a Nobel Prize laureate. It has independent funding, Zuckerberg has agreed that its decisions will be binding, and eventually its purview may expand to removing false content.
But, fundamentally, this can’t work because Facebook was not designed to be controllable.
In The New York Times
It’s not really about the content. Stop me if you’ve heard this before, but what makes Facebook a threat to democracy is the way it serves up that content. Its algorithms — which are not well understood by anyone, even at Facebook — are aimed at keeping you engaged so that you stay on the site. And the most effective way to drive engagement is to show users content that makes them angry and upset.
Are you a hardcore supporter of President Donald Trump? If so, you are likely to see memes suggesting that COVID-19 is some sort of Democratic plot to defeat him for re-election — as was the case with a recent semi-fake-news story.
Now, keep in mind that all of this — even the fake stuff — is free speech that’s protected by the First Amendment. And all of this, plus much worse, is readily available on the open web. What makes Facebook so pernicious is that it amplifies the most divisive speech so that you’ll stay longer and be exposed to more advertising.
What is the oversight board going to do about this? Nothing.
“The new Facebook review board will have no influence over anything that really matters in the world,” wrote longtime Facebook critic Siva Vaidhyanathan.
In fact, Facebook’s algorithm has already been trained to ban or post warning labels on some speech. In practice, though, such mechanized censorship is aggravatingly inept. Recently the seal of disapproval was slapped on an ad called “Mourning in America,” produced by the anti-Trump Lincoln Project.
I recently received a warning for posting a photo of Benito Mussolini as a humorous response to a picture of Trump. No doubt the algorithm was too dumb to understand that I was making a political comment and was not expressing my admiration for Il Duce. Others have told me they’ve gotten warnings for referring to trolls as trolls, or for calling unmasked protesters against COVID-19 restrictions “dumber than dirt.”
So what is Facebook good for? I find it useful for staying in touch with family and friends, for promoting my work and for discussing legitimate news stories. Beyond that, much of it is a cesspool of hate speech, fake news and propaganda.
If it were up to me, I’d ban the algorithm. Let people post what they want, but don’t let Facebook robotically weaponize divisive content in order to drive up its profit margins. Zuckerberg himself has said that he expects the government will eventually impose some regulations. Well, this is one way to regulate it without actually making judgments about what speech will be allowed and what speech will be banned.
Meanwhile, I’ll watch with amusement as the oversight board attempts to wrestle this beast into submission. As Kara Swisher said, it “has all the hallmarks of the United Nations, except potentially much less effective.”
The real goal, I suspect, is to provide cover for Zuckerberg and make it appear that Facebook is doing something. In that respect, this initiative may seem harmless — unless it lulls us into complacency about more comprehensive steps that could be taken to reduce the harm that is being inflicted on all of us.
WGBH News contributor Dan Kennedy’s blog, Media Nation, is online at dankennedy.net.