Social media platforms and the influx of fake news

Politics · Society · World
24 October 2023, 09:58

Following the Hamas attack on Israel in early October, a flood of brutal, sometimes explicitly violent, controversial, and bewildering videos swept the internet, especially social media. There is a video of a young woman in a pickup truck surrounded by men with automatic weapons. A child with a Palestinian flag is sent towards armed Israeli soldiers by someone who appears to be a relative. There are destroyed buildings in the Gaza Strip and devastated people mourning the victims of air raids. There are children in cages. Such photos and videos spread across social networks in a split second, making it extremely challenging for moderators to verify their authenticity. The platforms’ employees were unable to keep up with the wave of graphic material.

Such content not only complicates understanding of what is actually happening in the region but also poses a new challenge for social platforms. They have once again found themselves cornered by the rapid spread of disinformation and by their own algorithms, unable to control the pace at which this content is disseminated. Problems with war content and highly sensitive posts first emerged at the beginning of the full-scale Russian invasion of Ukraine, causing trouble for Ukrainians who tried to use social platforms to share information about Russian war crimes. Ukrainians often faced sanctions from moderators, who were at times overly pedantic in enforcing the platforms’ guidelines. This is why it is now rather baffling to watch these platforms being used to distribute graphic content (scenes of extreme violence or photos of dead bodies, both explicitly prohibited by the guidelines) while moderators respond slowly and fail to cope with the influx of blatant fakes. This prompted a swift reaction from European officials, who reminded the platforms that, under recently adopted EU laws, they are required to ensure the immediate removal of disinformation or face hefty fines and a ban on operating within the EU.

Information warfare

After the initial shock of the Hamas attack, it became obvious that both sides of the conflict were using social media platforms to fight an unprecedented information war. Many of the posts were evidently fake, and targeted efforts to block such content did not succeed. X (previously known as Twitter) particularly stood out in the largely futile efforts to contain the spread of fakes and banned content, and not just because of the eccentric behaviour of its new owner.

Within an hour of the attack, Twitter was flooded with photos depicting explicit violence. Many of these photos were either old or blatantly fake; some were generated by AI or crudely assembled in Photoshop from screenshots of computer games. Notably, more often than not these images were shared by verified accounts (those with a blue tick), further proof that the paid verification scheme introduced by Elon Musk has failed. Many experts described the situation as a test that the company did not pass.

This was a direct result of Elon Musk’s bizarre approach to combating disinformation on the platform, namely his dismantling of the tools created earlier for precisely such emergencies. Musk recently downsized the team responsible for fighting fakes, and the new owner of X (Twitter) has himself shared tweets from accounts spreading disinformation.

Twitter’s policies have drawn a reaction from European officials, as they directly contradict European laws: the Digital Markets Act (DMA) and the Digital Services Act (DSA). European Commissioner Thierry Breton warned Twitter’s management about the spread of fake news and advised it to cooperate with Europol. In its official response, the platform assured that it removes fakes and any accounts associated with them. This response evidently did not satisfy the EU authorities, and the European Commission opened an investigation into the company’s activities. The company had to provide its official guidelines for emergency situations by 18 October. If the analysis shows that X (Twitter) cannot combat the spread of false information online, it may face a fine of up to 6% of its annual revenue or even a complete ban within the EU.

Warnings were also issued to Meta, which was pressed to explain what measures it was taking to stop the spread of disinformation. Meta reported that it had created a so-called Special Operations Centre, staffed with media experts proficient in Hebrew and Arabic, and had started collaborating with local fact-checkers.

Social media platforms are divided over Hamas content

Another contentious issue is the platforms’ decision on whether to allow or ban content that supports Hamas, and their stances have varied. Users of YouTube, owned by Google, and of Facebook and Instagram, owned by Meta, are free to express their support for Israel, call for peace, or mourn the plight of Palestinians. Expressing explicit support for Hamas, however, is prohibited: these platforms’ community guidelines classify Hamas as an extremist organisation, so any content related to the group is banned. Nevertheless, Meta received a warning from the European Commissioner regarding Hamas-related content and was given a 24-hour deadline to remove content supporting the group. A similar warning was issued to the Chinese video platform TikTok. Although TikTok initially refused to comment on the permissibility of Hamas-related content, the company’s management later announced that Hamas-related photos and videos would be prohibited. Google also faced its share of warnings over its video hosting service, YouTube.

The Telegram messenger, which has traditionally paid little attention to content moderation, decided not to block content related to Hamas and announced that it would not remove a Telegram channel used to report the group’s activities. In his official statement, Telegram’s founder Pavel Durov explained that Telegram moderators do in fact remove graphic or sensitive content; reporting on armed conflicts, however, is rarely straightforward. He added that shutting down the Hamas channel would not help save lives. Durov also emphasised that complex situations like these “require careful handling, taking into account the differences between the social platforms”. He pointed out that Telegram users cannot accidentally stumble upon shocking content; they only see what they subscribe to. This approach turns Telegram into a “unique first-hand source of information for researchers, journalists, and fact-checkers”. The official Hamas account on Telegram has over 120,000 subscribers and regularly shares disturbing videos of attacks on Israel.

Content that first appeared on Telegram was often later shared on other platforms, primarily Twitter, making Twitter one of the largest social platforms spreading violent content about the war in Israel. Although Twitter ostensibly bans Hamas-related photos and videos, its moderators apparently struggle to remove all of the flagged content. With an audience far larger than Telegram’s, Twitter plays a significant role in spreading controversial and brutal videos worldwide. Experts have described it as a hub for publishing content previously removed by other platforms.

The sea of fakes

The rapid influx of edited or otherwise manipulated images, memes, videos, and posts complicates any attempt to understand what is actually happening in the region.

Fakes created by foreign intelligence specialists have become a hallmark of the hybrid media confrontation between Russia and Ukraine, and their flow has only increased since the start of the full-scale Russian invasion. By contrast, during the recent events in the Middle East, researchers have found only minimal evidence of disinformation originating from abroad. Instead, the internet is filled with content featuring violence, hate speech, and threats coming from both sides. Such content, along with the fakes, has alarmed EU representatives, who have demanded a quicker reaction to violent posts.

FakeReporter, an Israeli team of over 2,500 volunteers combating the spread of fakes, works to report suspicious content and debunk false narratives on social media. However, the team often struggles to keep up, especially with voice messages shared via WhatsApp, which circulate quickly in group chats.

Another challenge in the Middle Eastern information war is the use of AI, though, contrary to what one might expect, the problem is not image generators. In the effort to establish the truth, AI image detectors, tools designed to determine whether an image or video was artificially created, are being widely used. However, these tools often fail, flagging completely legitimate pictures as artificially generated or fake. Experts have called this paradox a second level of disinformation and note that there are currently no effective countermeasures.

Lessons to be learned

Social media networks have inadvertently become participants in geopolitical conflicts. This has been evident in the hybrid war between Russia and Ukraine since 2014 and has escalated further since the full-scale Russian invasion in 2022. During the war in the Middle East, social media platforms have spread both fake and violent content. Quite often these were old photos and videos, for instance from the conflicts in Syria or Libya, digitally altered, given a misleading caption, and presented as, say, evidence of Israeli atrocities against the residents of Gaza. Because such graphic content could easily be seen by anyone, Israeli parents were advised to remove social media apps from their children’s devices.

The new challenges that social media platforms faced during the Hamas attack on Israel exposed the need for changes in content moderation policies.

The verdict is clear: despite all their efforts to control the situation, the platforms failed to manage the content related to this conflict.

Sadly, similar conflicts are likely to keep erupting worldwide, and people caught up in them will try to present their own version of events, intentionally or unintentionally sharing false information in an attempt to ‘reveal the truth’ to the world. Social media platforms will be forced to react, and they will likely block not only fakes and violent content but also any mention of the ‘enemy’. Many Ukrainians may be familiar with Meta penalising users for the words ‘Azov’, ‘Moskal’, or ‘Rusnya’ [a slur for Russians].

Social media platforms need well-defined rules and policies on how to act in such contentious situations. The war in Ukraine and the conflict in the Middle East could serve as a testing ground for global changes, helping to find better solutions that prevent social media from turning into hubs of unverified, violent content. It is crucial to address this issue today; otherwise, when a new conflict emerges tomorrow, we will witness even more shocking footage, more sophisticated fakes, and further efforts to incite violence.
