A special set of editorials published in today’s issue of the journal Science argues that social media in its current form may well be fundamentally broken for the purposes of presenting and disseminating facts and reason. The algorithms are running the show now, the authors argue, and the system’s priorities are unfortunately backwards.
In an incisive (and free to read) opinion piece, Dominique Brossard and Dietram Scheufele of the University of Wisconsin-Madison convincingly lay out the basic disconnect between what scientists need and what social media platforms provide.
“Rules of scientific discourse and the systematic, objective, and transparent evaluation of evidence are fundamentally at odds with the realities of debates in most online spaces,” they write. “It is debatable whether social media platforms that are designed to monetize outrage and disagreement among users are the most productive channel for convincing skeptical publics that settled science about climate change or vaccines is not up for debate.”
The most elementary feature of social media that blunts communication by scientists is the pervasive use of sorting and recommendation engines. These produce what Brossard and Scheufele call “homophilic self-sorting”: the people who are shown scientific content are the ones already familiar with it. In other words, scientists are preaching to the choir.
“The same profit-driven algorithmic tools that bring science-friendly and curious followers to scientists’ Twitter feeds and YouTube channels will increasingly disconnect scientists from the audiences that they need to connect with most urgently,” they write. And there’s no obvious solution: “The cause is a tectonic shift in the balance of power in science information ecologies. Social media platforms and their underlying algorithms are designed to outperform the ability of science audiences to sift through rapidly growing information streams and to capitalize on their emotional and cognitive weaknesses in doing so. No one should be surprised when this happens.”
“But it’s a good way for Facebook to make money,” said H. Holden Thorp, editor-in-chief of the Science family of journals.
Thorp, who also wrote an editorial on the topic, told me that there are at least two distinct problems with the way scientists and social media interact these days.
“One is that, especially with Twitter, scientists like to use it to bat things around and openly air ideas, support them or shoot them down — the things they used to do standing around a blackboard, or at a conference,” he said. “It was going on before the pandemic, but now it’s become a major way that kind of interchange happens. The problem with that, of course, is that there is now an enduring permanent record of it. And some of the hypotheses that get made and turn out to be wrong, overturned in the ordinary course of science, get cherry picked by people who are trying to undermine what we’re doing.”
“The second is naivete about the algorithms, especially Facebook’s, which put a very high premium on disagreement and informal posts that spread disagreement. You know, ‘my uncle wore a mask to church and got COVID anyway’ — that’s going to beat out authoritative info every time,” he continued.
As Brossard and Scheufele point out, the combination of these things puts scientists “at a distinct disadvantage…as some of the very few participants in public debates whose professional norms and ethics dictate that they prioritize reliable, cumulative evidence over persuasive power.”
Sadly, there isn’t much anyone can do on the science side; arguably, the more scientists participate in the system, the more they reinforce the silos around themselves. No one is arguing that we should just give up, but we do need to acknowledge that the problem isn’t simply that the science community is a less effective communicator on social media than peddlers of disinformation.
Thorp also acknowledged that this is only the latest phase of growing anti-factual tendencies and a politicization of science that go back decades.
“I think people tend to get a little more emotional about this without recognizing it’s a very simple thing: The political parties aren’t going to take the same position — and when one of those positions is scientifically rigorous, the other is going to be against science,” he explained. That the Democratic party is more often on the side of science is true enough, but it has also been on the other side with GMOs and nuclear power, he pointed out. The important thing is not who is for what, but that the two parties define themselves by opposition.
“That’s a political party coming to the realization that it was more politically useful to be against science than to be for it,” he said. “So that’s another thing scientists are naive about, saying ‘we’re not getting our message across!’ But you’re up against this political machine that now has the power of Facebook behind it.”
Brossard and Scheufele draw a final parallel with the defeat of Garry Kasparov by Deep Blue: afterwards, no one called for special training to outplay supercomputers, and no one blamed Kasparov for not playing well enough. After the shock wore off, it was clear to everyone that we’d turned a corner, not just in chess but in the possibilities of computing and algorithms. (Kasparov’s own views have evolved as well, as he told me a while back.)
“The same understanding is now here for scientists,” they write. “It’s a new age for informing public debates with facts and evidence, and some realities have changed for good.”