Social media algorithms, created for advertising gains, result in the magnification of biases, culminating in the spread of misinformation.
In prehistoric societies, people typically acquired information from members of their ingroup or from higher-status individuals, which made that information reliable and contributed to the success of the collective.
However, in today’s diverse and intricate modern communities, particularly within the realm of social media, the effectiveness of these biases diminishes. For instance, online connections may not necessarily be deemed trustworthy, and the perception of prestige can be easily manipulated on social platforms.
In a review published on August 3rd in the journal Trends in Cognitive Sciences, a group of social scientists discusses how social media algorithms are misaligned with the innate human social instincts that evolved to promote cooperation. This misalignment can, in turn, give rise to widespread polarization and the dissemination of misinformation on a grand scale.
According to first author William Brady, a social psychologist at Northwestern's Kellogg School of Management, user surveys on both Twitter and Facebook indicate that a significant portion of users experience exhaustion from the political content they encounter. That unhappiness, together with the reputational challenges Twitter and Facebook face in the context of elections and the dissemination of misinformation, is noteworthy.
The intention behind this systematic review is to offer insights into the intricate interplay between human psychology and algorithms, with a focus on the resulting consequences. As Brady puts it,
“One of the contributions of this review lies in its adoption of a social learning perspective. Social psychologists consistently explore how knowledge can be acquired from others, a framework that proves essential in comprehending the impact of algorithms on our social dynamics.”
Human learning is naturally biased toward sources that encourage cooperation and collaborative problem-solving, which is why people often prefer to learn from members of their own social group and from individuals regarded as prestigious.
These learning biases also evolved to prioritize morally and emotionally charged information, since such information was more likely to be relevant for enforcing group norms and ensuring collective survival.
In contrast, information selection by algorithms primarily focuses on enhancing user engagement to boost advertising revenue. This results in algorithms amplifying the very type of information that aligns with human biases, often saturating social media feeds with what the researchers refer to as Prestigious, Ingroup, Moral, and Emotional (PRIME) information, without considering the accuracy or the representativeness of a group’s viewpoints.
Consequently, content of an extreme political nature or topics that provoke controversy tends to receive greater amplification. If users remain shielded from diverse opinions, they may inadvertently develop a distorted understanding of the prevailing views held by various groups.
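The amplification dynamic described above can be illustrated with a toy simulation. Assuming, purely for illustration, that PRIME posts draw higher engagement on average (the post fields, rates, and numbers below are hypothetical, not drawn from the review), a ranker that sorts solely by engagement will over-represent PRIME posts relative to their share of the overall pool:

```python
import random

random.seed(1)

# Hypothetical post pool: 20% of posts are PRIME; assume, for
# illustration only, that PRIME posts draw higher engagement on average.
posts = []
for _ in range(1000):
    is_prime = random.random() < 0.2
    engagement = random.gauss(2.0 if is_prime else 1.0, 0.5)
    posts.append({"prime": is_prime, "engagement": engagement})

# A pure engagement-maximizing ranker simply sorts by engagement.
feed = sorted(posts, key=lambda p: p["engagement"], reverse=True)

base_rate = sum(p["prime"] for p in posts) / len(posts)
top_rate = sum(p["prime"] for p in feed[:100]) / 100
print(f"PRIME share in pool: {base_rate:.2f}; in top 100 of feed: {top_rate:.2f}")
```

Even with a modest engagement advantage, the top of the feed becomes dominated by PRIME content, while the overall pool remains mostly non-PRIME — the distortion the researchers describe.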
“It’s not that the algorithm is engineered to disrupt cooperation. It’s simply that its objectives differ. When these functions are combined in practice, some of these potentially adverse effects can emerge.” – Brady notes
In addressing this issue, the research group initially proposes that social media users should become more informed about how algorithms operate and the reasons behind the appearance of specific content on their feeds. Typically, social media companies do not fully disclose the intricacies of their content selection algorithms, but one potential step could involve offering explanations for why a user encounters a particular post.
For instance, users could be informed whether their friends are engaging with the content or if the content enjoys widespread popularity. Beyond the domain of social media companies, the research team is in the process of developing their own initiatives aimed at educating individuals on how to be more discerning consumers of social media.
The researchers also suggest that social media companies could modify their algorithms to better facilitate community-building. Rather than exclusively prioritizing PRIME information, algorithms could cap the extent to which PRIME information is amplified and emphasize serving users a varied range of content.
These adjustments may continue to enhance the visibility of engaging information while curbing the excessive representation of polarizing or politically extreme content within users’ feeds.
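One way to sketch that "bounded amplification" idea is a re-ranker that caps the share of PRIME-tagged posts at every point in the feed while still ordering each pool by engagement score. The field names, the cap value, and the PRIME tagging itself are assumptions made for illustration; this is not a description of any platform's actual system:

```python
def rerank_feed(posts, prime_cap=0.3):
    """Re-rank a feed so PRIME-tagged posts never exceed a fixed
    fraction of any prefix of the feed, while engaging content
    still surfaces first within each pool.
    (Illustrative sketch: field names and cap are hypothetical.)"""
    prime = sorted((p for p in posts if p["prime"]),
                   key=lambda p: p["score"], reverse=True)
    other = sorted((p for p in posts if not p["prime"]),
                   key=lambda p: p["score"], reverse=True)
    feed = []
    while prime or other:
        prime_so_far = sum(1 for p in feed if p["prime"])
        # Admit the next PRIME post only if the cap still holds
        # after it is added (or if no other content remains).
        under_cap = (prime_so_far + 1) / (len(feed) + 1) <= prime_cap
        if prime and (under_cap or not other):
            feed.append(prime.pop(0))
        else:
            feed.append(other.pop(0))
    return feed
```

With `prime_cap=0.3`, at most roughly 30% of any prefix of the feed is PRIME content, so engaging posts still appear early but no longer crowd out everything else.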
“As researchers, we recognize the challenges companies face when considering these changes and their impact on their financial bottom line. This is why we believe that these alterations could, in theory, maintain user engagement while simultaneously curbing the overabundance of PRIME information. It’s possible that the user experience might even see improvements through these adjustments.” – Brady emphasizes
- Social media algorithms, designed to maximize user engagement for advertising gains, amplify biases inherent in human social learning mechanisms, ultimately fueling the spread of misinformation and increased polarization.
- Because humans are naturally inclined to learn predominantly from their social circles and from individuals perceived as prestigious, algorithms capitalize on these tendencies by promoting information that aligns with these biases, without necessarily considering its accuracy. The study underscores the importance of users understanding how algorithms function and urges tech companies to consider adjusting their algorithms to cultivate healthier online communities.
- The researchers propose restricting the amplification of potentially polarizing content and diversifying the array of content presented to users.
Human social learning is increasingly occurring on online social platforms, such as Twitter, Facebook, and TikTok. On these platforms, algorithms exploit existing social-learning biases (i.e., towards prestigious, ingroup, moral, and emotional information, or ‘PRIME’ information) to sustain users’ attention and maximize engagement. Here, we synthesize emerging insights into ‘algorithm-mediated social learning’ and propose a framework that examines its consequences in terms of functional misalignment. We suggest that, when social-learning biases are exploited by algorithms, PRIME information becomes amplified via human–algorithm interactions in the digital social environment in ways that cause social misperceptions and conflict, and spread misinformation. We discuss solutions for reducing functional misalignment, including algorithms promoting bounded diversification and increasing transparency of algorithmic amplification.