
YouTube algorithm pushed election fraud claims to Trump supporters, report says


For years, researchers have suggested that online echo chambers aren't primarily the product of the algorithms that feed users content, but are more likely the result of users actively seeking out content that aligns with their beliefs. This week, researchers at New York University's Center for Social Media and Politics published results from a YouTube experiment that happened to be conducted just as election fraud claims were being raised in fall 2020. They say their results provide an important caveat to prior research by showing evidence that in 2020, YouTube's algorithm was responsible for "disproportionately" recommending election fraud content to users more "skeptical of the election's legitimacy to begin with."

A coauthor of the study, Vanderbilt University political scientist James Bisbee, told The Verge that even though participants were recommended only a small number of election denial videos (a maximum of 12 out of the hundreds of videos participants clicked on), the algorithm recommended three times as many such videos to people predisposed to buy into the conspiracy as it did to people who were not. "The more susceptible you are to these types of narratives about the election… the more you would be recommended content about that narrative," Bisbee said.

YouTube spokesperson Elena Hernandez told Ars that Bisbee’s team’s report “doesn’t accurately represent how our systems work.” Hernandez says, “YouTube doesn’t allow or recommend videos that advance false claims that widespread fraud, errors, or glitches occurred in the 2020 US presidential election” and YouTube’s “most viewed and recommended videos and channels related to elections are from authoritative sources, like news channels.”

Bisbee’s team states directly in their report that they did not attempt to crack the riddle of how YouTube’s recommendation system works:

“Without access to YouTube’s trade-secret algorithm, we can’t confidently claim that the recommendation system infers a user’s appetite for election fraud content using their past watch histories, their demographic data, or some combination of both. For the purposes of our contribution, we treat the algorithm as the black box that it is, and instead simply ask whether it will disproportionately recommend election fraud content to those users who are more skeptical of the election’s legitimacy.”

To conduct their experiment, Bisbee’s team recruited hundreds of YouTube users and re-created the recommendation experience by having each participant complete the study logged into their YouTube accounts. After participants clicked through various recommendations, researchers recorded any recommended content flagged as supporting, refuting, or neutrally reporting Trump’s election fraud claims. Once they finished watching videos, participants completed a long survey sharing their beliefs about the 2020 election.
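To make the kind of comparison the researchers describe concrete, here is a minimal sketch in Python, assuming hypothetical per-participant data: a count of recommendations flagged as supporting fraud claims and a post-survey skepticism score. None of these field names or numbers come from the study; they are placeholders for illustration only.

```python
# Hypothetical sketch: compare how many fraud-supporting recommendations were
# shown to skeptical vs. non-skeptical participants. Field names and values are
# illustrative placeholders, not the study's actual dataset or variables.
from statistics import mean

participants = [
    # skepticism: post-survey score (0-10); fraud_recs: flagged recommendations seen
    {"skepticism": 8, "fraud_recs": 6},
    {"skepticism": 2, "fraud_recs": 1},
    {"skepticism": 9, "fraud_recs": 9},
    {"skepticism": 1, "fraud_recs": 2},
]

skeptical = [p["fraud_recs"] for p in participants if p["skepticism"] >= 5]
not_skeptical = [p["fraud_recs"] for p in participants if p["skepticism"] < 5]

ratio = mean(skeptical) / mean(not_skeptical)
print(f"Skeptical participants saw {ratio:.1f}x as many fraud-supporting recommendations")
```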

Bisbee told Ars that “the purpose of our study was not to measure or describe or reverse-engineer the inner workings of the YouTube algorithm, but rather to describe a systematic difference in the content it recommended to users who were more or less concerned about election fraud.” The study’s only purpose was to analyze content fed to users to test whether online recommendation systems contributed to the “polarized information environment.”

“We can show this pattern without reverse-engineering the black box algorithm they use,” Bisbee told Ars. “We just looked at what real people were being shown.”

Testing YouTube’s recommendation system

Bisbee’s team noted that YouTube’s algorithm relies on watch histories and subscriptions, and in most cases, having recommended content align with user interests is a positive experience. But because of the extreme circumstances following the 2020 election, researchers hypothesized that the recommendation system would naturally feed more election fraud content to users who were already skeptical about Joe Biden’s win.

To test the hypothesis, researchers “carefully controlled the behavior of real YouTube users while they were on the platform.” Participants logged into their accounts and downloaded a browser extension to capture data on the recommended videos. Then they navigated through 20 recommendations, following a specified path laid out by researchers—such as only clicking the second recommended video from the top. Every participant started out watching a randomly assigned “seed” video (either political or non-political) to ensure that the initial video they watched didn’t influence subsequent recommendations based on prior user preferences that the algorithm would recognize.
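The traversal rule described above (start from a randomly assigned seed video, then repeatedly click a fixed position in the recommendation list) can be sketched roughly as follows. This is not the study's browser extension or a real YouTube API; get_recommendations is a hypothetical stand-in for whatever the extension captured from participants' watch pages.

```python
# Rough sketch of the fixed-path traversal described above. get_recommendations()
# is a hypothetical placeholder for the recommendation data the browser extension
# captured from the logged-in watch page; it is not a real API call.
import random

def get_recommendations(video_id):
    """Placeholder: return the ordered list of recommended video IDs shown
    alongside `video_id`, as scraped from the participant's page."""
    raise NotImplementedError("stand-in for scraped recommendation data")

def walk_recommendations(seed_videos, rank_to_click=1, depth=20):
    """Start from a random seed video, then repeatedly click the recommendation
    at a fixed rank (e.g. second from the top), logging everything shown."""
    current = random.choice(seed_videos)   # randomly assigned seed video
    log = []
    for _ in range(depth):                 # 20 recommendation steps
        recs = get_recommendations(current)
        log.append({"watched": current, "recommended": recs})
        current = recs[rank_to_click]      # follow the specified path
    return log
```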

There were many limitations to this study, which researchers outlined in detail. Perhaps foremost, participants were not representative of typical YouTube users. The majority of participants were young, college-educated Democrats watching YouTube on devices running Windows. Researchers suggest the recommended content might have differed if more participants were conservative or Republican-leaning, and thus presumably more likely to believe in election fraud.

There was also an issue where YouTube removed election fraud videos from the platform in December 2020, which meant researchers lost access to some videos that had been recommended to participants and could no longer be assessed, though they described that number as insignificant.

Bisbee’s team said the key takeaway was preliminary evidence of a pattern in the behavior of YouTube’s algorithm, not a true measure of how misinformation spread on YouTube in 2020.


