The Bubble of Personalised News Feeds and the Politicisation of Facts

By Elise Renkema

 

Twitter CEO Jack Dorsey recently announced that Twitter will be banning political ads on its platform. The ban applies not only to ads from individual candidates, but also to advertisements on politicised issues. In practice, this means that political candidates can still tweet, but that these tweets cannot be promoted through advertisements. This move away from personalised, algorithm-based advertisements puts pressure on other social media platforms as well. Political advertising is mainly Facebook's and Google's territory, and both companies have received increasing criticism for the way their advertisements function.

At the beginning of this fall, Facebook announced that the platform would not fact-check political speech, arguing:

‘This is grounded in Facebook’s fundamental belief in free expression and respect for the democratic process, as well as the fact that, in mature democracies with a free press, political speech is already arguably the most scrutinized speech there is.’   

With the 2020 US presidential election coming up, candidates from both sides of the political spectrum are testing the limits of this policy. The Trump campaign, for example, is running a 30-second video ad accusing former VP Joe Biden – now running for president in 2020 – of offering Ukraine money to fire the prosecutor investigating Biden's son, Hunter Biden. CNN refused to run the ad after fact-checking it, but Facebook greenlighted it. In response, 2020 candidate Sen. Elizabeth Warren (D-MA) launched a deliberately false ad claiming that Facebook CEO Mark Zuckerberg had endorsed Trump's re-election. Zuckerberg continues to defend his policy, even after Facebook employees posted an open letter arguing that allowing politicians to post false claims threatens the integrity of the company.

The criticism that Facebook has received opens up a broader discussion about the basic consequences of personalised news feeds and targeted advertisements. Personalised search was first introduced by Google in 2009. The basic idea is that a website registers what you search for and how you use the platform in order to customise the site and provide the information you are most likely looking for. There is no standard Google or Facebook page that looks the same for everyone. Though this might not sound very problematic – it can even seem very useful – the practice has significant consequences.
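To make this mechanism concrete, the minimal Python sketch below shows how a feed might rank content against a user's recorded click history. It is purely illustrative: the data structures, scoring rule, and function names are assumptions, not Facebook's or Google's actual systems, which rely on far richer signals.

```python
from collections import Counter

def personalised_ranking(posts, click_history):
    """Rank candidate posts by overlap with topics the user has clicked before.

    A toy model: real systems use far richer signals (dwell time, social
    graph, ad bids), but the principle is the same -- past behaviour
    determines what is shown next.
    """
    # Count how often each topic appears in the user's click history.
    topic_counts = Counter(topic for post in click_history for topic in post["topics"])

    def score(post):
        # A post scores higher the more it matches previously clicked topics.
        return sum(topic_counts[topic] for topic in post["topics"])

    # The most familiar content is shown first.
    return sorted(posts, key=score, reverse=True)


# Example: a user who has mostly clicked on one side of a debate.
history = [
    {"topics": ["gun rights"]},
    {"topics": ["gun rights", "elections"]},
]
candidates = [
    {"title": "Pro gun-rights op-ed", "topics": ["gun rights"]},
    {"title": "Gun-control fact check", "topics": ["gun control"]},
]
print([p["title"] for p in personalised_ranking(candidates, history)])
# The op-ed that matches past clicks ranks first; the fact check sinks.
```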

When the internet was first introduced, it was praised for enabling open dialogue and letting every user look up information for themselves. The internet is still a free service, but its cost is users' data. This information is bought by online advertising companies, which can then predict which links each individual user will click on. The current state of the internet has led to parallel but separate online universes of facts. These universes are called filter bubbles. They consist of like-minded individuals, grouped together by an algorithm, in which pre-existing views that are popular with those individuals are reinforced. Information or news can circulate within these communities without being contradicted by outsiders. This is how information that might be false can spread without resistance. In more popular terms: fake news.

News has historically always had a political undertone; see, for example, the Manchester Guardian or the Daily Telegraph. The problem with so-called echo chambers occurs when internet users are only presented with news that the algorithm has predicted they will like. News sources Facebook thinks you might dislike are filtered out of your news feed, so the information that reaches you comes from a politically biased selection. An echo chamber is formed when users only encounter like-minded users, and no one contradicts information that might be false as long as it fits their worldview. Filter bubbles on Facebook indoctrinate users with ideas that are constantly confirmed, leaving no basis for an open dialogue about political and social issues. In our age of online news and algorithms, facts are no longer objective, but are used as political tools.
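The feedback loop behind such an echo chamber can be sketched in a few lines as well. In the hypothetical model below, sources the algorithm predicts a user will like are shown and therefore reinforced, while everything else quietly decays out of reach; the names, numbers, and thresholds are invented for illustration only.

```python
def refresh_feed(affinity, threshold=0.5, boost=0.05, decay=0.05):
    """One feed refresh: show sources above the threshold, then update affinities.

    Sources that are shown (and presumably clicked) get a small boost;
    hidden sources decay, so they become ever less likely to return.
    All values are hypothetical -- the point is the feedback loop.
    """
    shown = [source for source, score in affinity.items() if score >= threshold]
    for source in affinity:
        if source in shown:
            affinity[source] = min(1.0, affinity[source] + boost)
        else:
            affinity[source] = max(0.0, affinity[source] - decay)
    return shown


affinity = {"Partisan Outlet A": 0.8, "Partisan Outlet B": 0.2, "Neutral Fact-Checker": 0.45}
for week in range(1, 4):
    print(f"week {week}:", refresh_feed(affinity))
# The fact-checker starts just below the cut-off, keeps decaying, and never
# reappears: over successive refreshes the feed converges on the sources the
# user already agrees with -- an echo chamber in miniature.
```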

Facebook is not likely to stop using personalised news feeds, as they form a major source of income. But why hasn't Facebook started regulating fake news more actively? To come back to the case of Trump's ad against Biden, there would be two major consequences to taking it down. First, it would set a precedent obliging Facebook to take responsibility for every false political advertisement. Though this is not impossible, it is difficult to implement. Secondly, taking down the ad would cause as much controversy as keeping it up. Facebook, like Twitter, has already been accused of a bias against conservative content. If these platforms started intervening in political statements, the backlash would be substantial. Furthermore, Facebook would need to define what counts as a political post. This can be challenging – would an ad from the NRA count as political? Or one from Planned Parenthood?

Though Zuckerberg's free-press argument might have held in the earlier days of the internet, there is a fine line between facilitating free speech and allowing the spread of misinformation through targeted advertisements. It is important to recognise the free-market principle behind the platform. Like any market, author Simon Jenkins argues, Facebook needs to be regulated, and, much like other publishers, it has an obligation not to facilitate the spread of misinformation. This is especially applicable during election time. The current circumstances pollute legitimate debate and do not provide the open dialogue that is so vital to democracies.

 

Elise Renkema, Class of 2020, is a Politics and Law major from The Hague, the Netherlands.

 

Sources

Facebook Newsroom. (2019, 24 September). Facebook, Elections and Political Speech. Retrieved from https://newsroom.fb.com/news/2019/09/elections-and-political-speech/

Pariser, E. (2011). The Filter Bubble: What the Internet Is Hiding from You. Penguin UK.

Stewart, E. (2019, 9 October). Facebook won't take down Trump ad with false claims about Joe Biden. Vox. Retrieved from https://www.vox.com/policy-and-politics/2019/10/9/20906612/trump-campaign-ad-joe-biden-ukraine-facebook

Jenkins, S. (2019, 1 November). Ignore Zuckerberg's self-serving rubbish. Facebook must be regulated. The Guardian. Retrieved from https://www.theguardian.com/commentisfree/2019/oct/31/mark-zuckerberg-facebook-regulate

Stewart, E. (2019, 30 October). Facebook's political ads policy is predictably turning out to be a disaster. Vox. Retrieved from https://www.vox.com/recode/2019/10/30/20939830/facebook-false-ads-california-adriel-hampton-elizabeth-warren-aoc

The New York Times. (2019, 28 October). Dissent Erupts at Facebook Over Hands-Off Stance on Political Ads. Retrieved 2 November 2019, from https://www.nytimes.com/2019/10/28/technology/facebook-mark-zuckerberg-political-ads.html

Image source: 'Het goddelijke perspectief van Silicon Valley: Op maat gemaakt ongeluk', De Groene Amsterdammer, 1 August 2018, no. 31. Image: Milo.

 

 
