22 January 2024
Strong support for Coimisiún na Meán's new rules requiring that algorithms based on intimate profiling of people’s sexual desires, political and religious views, health conditions or ethnicity be turned off by default
Research commissioned by the Irish Council for Civil Liberties (ICCL) and Uplift has found that almost three-quarters (74%) of the Irish population believe that social media algorithms, which select content and insert it into users’ online feeds, should be regulated more strictly.[1]
The new poll also shows that more than four-fifths (82%) of people across Ireland are in favour of social media companies being forced to stop building up specific data about users’ sexual desires, political and religious views, health conditions or ethnicity, and using that data to pick what videos are shown to people.[2]
The research was conducted by Ireland Thinks, using a representative sample of 1,270 people, balanced by age, income, education and region across Ireland.
The findings come in the wake of a major step taken by Coimisiún na Meán, Ireland’s new online regulator. Its new draft rules say that recommender systems based on intimately profiling people should be turned off by default on social media video platforms like YouTube, Facebook, and TikTok.
These “recommender system” algorithms promote suicide and self-loathing among teens, drive our children into online addictions, and feed each of us a personalised diet of hate and disinformation for profit.
Our children are at risk. Just one hour after Amnesty’s researchers started a TikTok account posing as a 13-year-old child who views mental health content, TikTok’s algorithm started to show the child videos glamourising suicide.[3]
Recommender systems also promote hate and extremism. Meta’s own internal research reported that “64% of all extremist group joins are due to our recommendation tools… Our recommendation systems grow the problem”.[4]
The European Commission recently reported that Big Tech’s recommender systems aided Russia’s disinformation campaign about its invasion of Ukraine.[5]
Speaking today, Dr Johnny Ryan, a Senior Fellow of ICCL, said:
“Social media was supposed to bring us together. Instead, it tears us apart. Users – not Big Tech’s algorithms – should have the freedom to decide what they see and share online. These findings show that the vast majority of the Irish public do not want toxic algorithms interfering in their online lives.”
Siobhan O’Donoghue of Uplift said:
“Big Tech’s toxic recommender systems and algorithms are amplifying hate speech, weaponising every fault line within our communities - driven by relentless surveillance to maximise “engagement” and ultimately profits. It is time social media corporations be made to give users real control over what they see, and be held to account for failing to do so.”
Big Tech’s algorithmic “recommender systems” select emotive and extreme content and show it to people who the system estimates are most likely to be outraged. These outraged people then spend longer on the platform, which allows the company to make more money showing them ads.
Digital platforms have a very poor record of self-improvement and responsible behaviour. After years of scandals and purported fixes by YouTube, nearly three-quarters of the problematic content seen by more than 37,000 test volunteers on YouTube in 2022 was amplified by YouTube’s recommender system.[6]
Slides
Slides of poll available at https://www.iccl.ie/wp-content/uploads/2024/01/RELEASE-social-media-poll.pdf
Notes
[1] The question asked was “Social media companies choose what content their users see. Should this be regulated more strictly?”
Yes / No / Not sure
[2] The question asked was “Would you be in favour of social media companies being forced to stop building up specific data about you (your sexual desires, political and religious views, health conditions and or ethnicity) and using that data to pick what videos are shown to you (unless you have asked them to do this)?”
Yes / No / Not sure
[3] "Driven into the darkness", Amnesty International, 7 November 2023 (URL: https://www.amnesty.org/en/latest/news/2023/11/tiktok-risks-pushing-children-towards-harmful-content/).
[4] "Facebook Executives Shut Down Efforts to Make the Site Less Divisive", The Wall Street Journal, 26 May 2020 (URL: https://www.wsj.com/articles/facebook-knows-it-encourages-division-top-executives-nixed-solutions-11590507499). This internal research, conducted in 2016, was confirmed again in 2019.
[5] “Digital Services Act: Application of the Risk Management Framework to Russian disinformation campaigns”, European Commission, 30 August 2023 (URL: https://op.europa.eu/en/publication-detail/-/publication/c1d645d0-42f5-11ee-a8b8-01aa75ed71a1/language-en), p. 64.
[6] "YouTube Regrets: A crowdsourced investigation into YouTube's recommendation algorithm", Mozilla, July 2021 (URL: https://assets.mofoprod.net/network/documents/Mozilla_YouTube_Regrets_Report.pdf), pp 9-13.