“Twitter’s algorithm release may be ‘embarrassing,’ but the community will fix mistakes,” said CEO Elon Musk.
As repeatedly promised by new owner Elon Musk, Twitter has announced plans to offer users a detailed view of how the algorithm “weighs” their accounts, aiming to provide transparency and let users see whether their accounts are being unfairly weighted.
The move comes amid growing concerns over social media algorithms and their impact on society, particularly with regard to the spread of misinformation and toxic content.
In a blog post, Twitter said it had a responsibility to make its platform more transparent and was taking a first step by opening up much of its source code to the global community. The released code includes the recommendation algorithm that determines which tweets users see in their timelines.
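For readers wondering what “weighing” an account might look like in practice, here is a minimal, hypothetical sketch of a ranking step in Python. The signal names, weights, and the out-of-network discount are invented for illustration; they are not taken from the code Twitter released.

from dataclasses import dataclass

@dataclass
class Candidate:
    """A tweet being considered for a user's timeline."""
    author_followed: bool   # does the viewer follow the author?
    p_like: float           # predicted probability the viewer likes it
    p_reply: float          # predicted probability the viewer replies
    p_report: float         # predicted probability the viewer reports it

# Hypothetical weights: positive engagement is rewarded,
# a predicted report is penalized heavily.
W_LIKE, W_REPLY, W_REPORT = 1.0, 5.0, -50.0

def score(c: Candidate) -> float:
    """Collapse engagement predictions into a single ranking score."""
    s = W_LIKE * c.p_like + W_REPLY * c.p_reply + W_REPORT * c.p_report
    # Assume tweets from unfollowed ("out-of-network") authors are discounted.
    return s if c.author_followed else 0.75 * s

candidates = [
    Candidate(author_followed=True, p_like=0.10, p_reply=0.02, p_report=0.001),
    Candidate(author_followed=False, p_like=0.30, p_reply=0.05, p_report=0.002),
]
timeline = sorted(candidates, key=score, reverse=True)  # highest score first

In a real system the weights would be learned or tuned rather than hard-coded, but the basic shape, per-tweet engagement predictions collapsed into one sortable number, is what open-sourcing lets outsiders inspect.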
Avoiding dystopian outcomes is not easy.
The intent is not for the algorithm to be some judgy moral arbiter, but rather that it do its best to inform & entertain.
Trying to maximize unregretted user-minutes seems like the right objective.
— Elon Musk (@elonmusk) April 1, 2023
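One way to read “unregretted user-minutes” is as a simple session-level quantity: time spent on the platform, net of time the user later wishes they had not spent. The toy calculation below is an illustrative reading of the phrase, not a metric Twitter has publicly defined.

def unregretted_user_minutes(sessions):
    """Sum minutes spent, net of the minutes a user reports regretting.

    `sessions` is a list of (minutes_spent, minutes_regretted) pairs;
    the "regretted" measurement is hypothetical, e.g. from a survey.
    """
    return sum(spent - regretted for spent, regretted in sessions)

# Example: three sessions, with some regretted scrolling in the last one.
print(unregretted_user_minutes([(30, 5), (12, 0), (45, 20)]))  # prints 62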
However, the company was careful to exclude any code that could compromise user safety or privacy, or its ability to protect the platform from bad actors, including efforts to combat child sexual exploitation and manipulation. The training data and model weights associated with the algorithm were also withheld.
Twitter CEO Elon Musk has previously spoken about the need for more transparency on the platform. In the tweet quoted above, the billionaire said the algorithm is not meant to be a “judgy moral arbiter” but rather to inform and entertain users.
Speaking on Twitter Spaces, Musk admitted that the initial release of the algorithm was likely to be “quite embarrassing,” but said the community would identify and fix any exploits.
The move by Twitter comes at a time when social media platforms are facing increased scrutiny over their use of algorithms. Critics argue that these algorithms can amplify harmful content and create filter bubbles that limit users’ exposure to diverse viewpoints.
In response, some platforms have made efforts to increase transparency around their algorithms. YouTube, for example, recently launched a feature that lets users see why certain videos are being recommended to them. Facebook has also taken steps to provide more information about its News Feed algorithm.
However, others argue that more needs to be done to regulate the use of algorithms on social media. In a recent report, the UK government’s Centre for Data Ethics and Innovation called for greater transparency and accountability around algorithmic decision-making.