Working for Facebook can be pretty lucrative. According to PayScale, the average salary of a Facebook employee is $123,000, with senior software engineers earning more than $200,000. Even better, the job is pandemic-proof. Traffic soared during the early months of COVID (though advertising was down), and the service attracted nearly 2.8 billion monthly active users worldwide during the fourth quarter of 2020.

So employees are understandably reluctant to demand change from their maximum leader, the now-36-year-old Mark Zuckerberg, the man-child who has led them to their promised land.

For instance, in advance of last fall’s election, Facebook tweaked its algorithm so that users were more likely to see reliable news rather than hyperpartisan propaganda — a very small step in the right direction. Afterwards, some employees thought Facebook ought to do the civic-minded thing and make the change permanent. Management’s answer: Well, no, the change cost us money, so it’s time to resume business as usual. And thus it was.

Joaquin Quiñonero Candela is what you might call an extreme example of this go-along mentality. Quiñonero is the principal subject of a remarkable 6,700-word story in the current issue of MIT Technology Review. As depicted by reporter Karen Hao, Quiñonero is extreme not in the sense that he’s a true believer or a bad actor or anything like that. Quite the contrary: he seems like a pretty nice guy, and the story is festooned with pictures of him outside his home in the San Francisco area, where he lives with his wife and three children, engaged in homey activities like feeding his chickens and, well, checking his phone. (It’s Zuck!)

What’s extreme, rather, is the amount of damage Quiñonero can do. He is the director of artificial intelligence for Facebook, a leading AI scientist who is universally respected for his brilliance, and the keeper of Facebook’s algorithm. He is also the head of an internal initiative called Responsible AI.

Now, you might think that the job of Responsible AI would be to find ways to make Facebook’s algorithm less harmful without chipping away too much at Zuckerberg’s net worth, estimated recently at $97 billion. But no. The way Hao tells it, Quiñonero’s shop was, almost from the beginning, diverted from its mission of tamping down extremist and false information and given a more politically important task: making sure that right-wing content kept popping up in users’ news feeds in order to placate Donald Trump, who falsely claimed that Facebook was biased against conservatives.

How pernicious was this? According to Hao, Facebook developed a tool called “Fairness Flow,” among whose guiding principles was that liberal and conservative content should not be treated equally if liberal content was more factual and conservative content promoted falsehoods — which is in fact the case much of the time. But Facebook executives were having none of it, deciding for purely political reasons that the algorithm should produce equal outcomes for liberal and conservative content regardless of truthfulness. Hao writes:

“They took ‘fairness’ to mean that these models should not affect conservatives more than liberals. When a model did so, they would stop its deployment and demand a change. Once, they blocked a medical-misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns, the former researcher told me. They told the researchers that the model could not be deployed until the team fixed this discrepancy. But that effectively made the model meaningless. ‘There’s no point, then,’ the researcher says. A model modified in that way ‘would have literally no impact on the actual problem’ of misinformation.”

Hao ranges across the hellscape of Facebook’s wreckage, from the Cambridge Analytica scandal to the amplification of a genocidal campaign against Muslims in Myanmar to the boosting of content that could worsen depression and thus lead to suicide. What she shows, over and over again, is not that Facebook is oblivious to these problems; indeed, it recently banned a number of QAnon, anti-vaccine and Holocaust-denial groups. But in every case it is slow to act, placing growth, engagement and, thus, revenue ahead of social responsibility.

It is fair to ask what Facebook’s role is in our current civic crisis, with a sizable minority of the public in thrall to Trump, disdaining vaccines and obsessing over trivia like Dr. Seuss and so-called cancel culture. Isn’t Fox News more to blame than Facebook? Aren’t the falsehoods spouted every night by Tucker Carlson, Sean Hannity and Laura Ingraham ultimately more dangerous than a social network that merely reflects what we’re already interested in?

The obvious answer, I think, is that there’s a synergistic effect between the two. The propaganda comes from Fox and its ilk and moves to Facebook, where it gets distributed and amplified. That, in turn, creates more demand for outrageous content from Fox and, occasionally, fuels the growth of even more extreme outlets like Newsmax and OAN. Dangerous as the Fox effect may be, Facebook makes it worse.

Hao’s final interview with Quiñonero came after the deadly insurrection of Jan. 6. I’m not going to spoil it for you, because it’s a really fine piece of writing, and quoting a few bits wouldn’t do it justice. But Quiñonero comes across as someone who knows, deep in his heart, that he could have played a role in preventing what happened but chose not to act.

It’s devastating — and something for him to think about as he ponders life in his nice home, with his family and his chickens, which are now coming home to roost.

GBH News contributor Dan Kennedy’s blog, Media Nation, is online at dankennedy.net.