An Investigation of the Effectiveness of Facebook and Twitter Algorithm and Policies on Misinformation and User Decision Making
Prominent social media sites such as Facebook and Twitter use content and filter algorithms that play a significant role in creating filter bubbles that may captivate many users. These bubbles can be defined as content that reinforces existing beliefs and exposes users to content they might have otherwise not seen...
| Main Authors: | Jordan Harner, Lydia Ray, Florence Wakoko-Studstill |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | International Institute of Informatics and Cybernetics, 2022-10-01 |
| Series: | Journal of Systemics, Cybernetics and Informatics |
| Subjects: | facebook, filter bubble, twitter, social cybersecurity, covid-19, social media, disinformation campaign |
| Online Access: | http://www.iiisci.org/Journal/PDV/sci/pdfs/SA050YB22.pdf |
| _version_ | 1850171457684897792 |
|---|---|
| author | Jordan Harner; Lydia Ray; Florence Wakoko-Studstill |
| author_facet | Jordan Harner; Lydia Ray; Florence Wakoko-Studstill |
| author_sort | Jordan Harner |
| collection | DOAJ |
| description | Prominent social media sites such as Facebook and Twitter use content and filter algorithms that play a significant role in creating filter bubbles that may captivate many users. These bubbles can be defined as content that reinforces existing beliefs and exposes users to content they might have otherwise not seen. Filter bubbles are created when a social media website feeds user interactions into an algorithm that then exposes the user to more content similar to that with which they have previously interacted. Continually exposing users to like-minded content can create what is called a feedback loop: the more the user interacts with certain types of content, the more they are algorithmically bombarded with similar viewpoints. This can expose users to dangerous or extremist content, as seen with the QAnon rhetoric leading to the January 6, 2021 attack on the U.S. Capitol and the unprecedented propaganda surrounding COVID-19 vaccinations. This paper hypothesizes that the secrecy around content algorithms and their ability to perpetuate filter bubbles create an environment where dangerous false information is pervasive and not easily mitigated by the existing algorithms designed to provide false information warning messages. In our research, we focused on disinformation regarding the COVID-19 pandemic. Both Facebook and Twitter provide various forms of false information warning messages, which sometimes include fact-checked research to provide a counter viewpoint to the information presented. Controversially, social media sites in most cases do not remove false information outright but instead promote these false information warning messages as a solution to extremist or false content.
The results of a survey administered by the authors indicate that users would spend less time on Facebook or Twitter once they understood how their data are used to influence their behavior on the sites and the information fed to them via algorithmic recommendations. Further analysis revealed that only 23% of respondents who had seen a Facebook or Twitter false information warning message changed their opinion "Always" or "Frequently," with 77% reporting that the warning messages changed their opinion only "Sometimes" or "Never," suggesting the messages may not be effective. Similarly, users who did not conduct independent research to verify information were likely to accept false information as factual and less likely to be vaccinated against COVID-19. Conversely, our research indicates a possible correlation between having seen a false information warning message and COVID-19 vaccination status. |
| format | Article |
| id | doaj-art-bf7dc39839944d6897f1935ebdbab20d |
| institution | OA Journals |
| issn | 1690-4524 |
| language | English |
| publishDate | 2022-10-01 |
| publisher | International Institute of Informatics and Cybernetics |
| record_format | Article |
| series | Journal of Systemics, Cybernetics and Informatics |
| title | An Investigation of the Effectiveness of Facebook and Twitter Algorithm and Policies on Misinformation and User Decision Making |
| title_sort | investigation of the effectiveness of facebook and twitter algorithm and policies on misinformation and user decision making |
| topic | facebook filter bubble social cybersecurity covid-19 social media disinformation campaign |
| url | http://www.iiisci.org/Journal/PDV/sci/pdfs/SA050YB22.pdf |