On January 9th 2015 four French Jews were killed in an attack on the Hypercacher kosher supermarket in Paris, which was targeted following the attack on the offices of Charlie Hebdo and an aborted attack on a Jewish center which left a policewoman dead. On February 15th Dan Uzan, a community security volunteer, was killed outside the Great Synagogue in Copenhagen, Denmark. In Israel there were a multitude of fatal knife attacks on Jewish targets. The far right is gaining in popularity, particularly in parts of Europe, while antisemitism from parts of the Muslim and Arab world inspires self-radicalisation and violent extremism. These are just some of the results of rising antisemitism in 2015, and they highlight the need for urgent action.
Through the Internet, antisemitic content and messages spread across national borders, feeding not only anti-Jewish hate but violent extremism more generally. Removing the online incitement which leads to knife attacks in Israel is part and parcel of tackling the larger problem of online incitement, which has also led to a dramatic increase in attacks on refugees in Germany. Responding to the rising incitement on social media and its very real consequences, German prosecutors opened an investigation in late 2015 into the possibility of criminal liability for senior Facebook executives.
Following this move, an agreement was reached between the German Government, Facebook, Google and Twitter under which content that violates German law would be removed within 24 hours. Facebook has since gone further and announced a project to tackle online hate in Europe.
As 2016 starts it is clear we have reached a point where the status quo is no longer acceptable. Governments around the world are clearly telling social media platforms that if they do not do better at combating incitement, hate, and the use of their systems by violent extremists, governments will legislate to impose increased regulation. Social media platforms are starting to respond, but some are doing so more effectively than others.
As governments increase their efforts to tackle threats in social media, antisemitism remains a core part of the wider fight against hate speech, incitement and violent extremism. It is an area where international efforts are well established, and where experts have been working on the problem since it was first raised at the Global Forum for Combating Antisemitism in 2008. Through its Working Group on Antisemitism on the Internet and in the Media, the Global Forum for Combating Antisemitism has continued to work steadily on this problem and released a major report of recommendations and a review of work to date in 2013, and an interim version of this report in 2015.
This report represents the latest research and a major step forward in efforts to tackle online antisemitism. It also lights a path for tackling other forms of online hate and incitement. Hate in social media is explored empirically, both with respect to its relative prevalence across the major platforms and with respect to the nature of the antisemitic content. Most significantly, the report documents the rate of removal of antisemitic hate speech over the last 10 months, broken down by social media platform and by category of antisemitism.
The report is based on a sample totalling 2,024 antisemitic items, all drawn from Facebook, YouTube or Twitter. The items were classified into four categories of hate: incitement to violence (5%), including general statements advocating death to the Jews; Holocaust denial (12%); traditional antisemitism (49%), such as conspiracy theories and racial slurs; and New Antisemitism (34%), being antisemitism related to the State of Israel as per the Working Definition of Antisemitism.
The results in this report indicate significant variation in the way antisemitism is treated, both between companies and within a single company across the four categories of antisemitism. Positive responses by the platforms remain far lower than a concerned public, or the governments who represent them, would expect.
The best initial removal rates occur on Facebook for Holocaust denial where 46% is removed within 3 months. The best overall result is for incitement on Facebook with only 25% of the content remaining online. The worst case was YouTube New Antisemitism where after 10 months 96% of the New Antisemitism on YouTube remained online. This reflected an overall problem on YouTube with 91% of the classic antisemitism, 90% of the Holocaust denial, and 70% of the incitement found on YouTube remaining after 10 months. Twitter is removing content on an ongoing basis but at a slow rate.
On Twitter, classic antisemitism is the most likely to be removed (25% removed) and incitement is the least likely to be removed (14% removed). Changes to Twitter's policies, moving away from a US legal standard which requires a specific and immediate threat and towards a wider definition covering advocacy or support for violence, do not appear to have had an impact on this data. In contrast, the higher response rate for classic antisemitism seems to reflect Twitter's focus on racial slurs.
The German Government's moves, forcing the companies to apply domestic legal definitions of hate rather than those developed by the companies, are one way to close the gap between public expectations and current response rates. Another approach would be for the companies to actively work with civil society and governments to lift their internal standards closer to public expectations. This applies not only to antisemitism, but to hate speech, incitement, and violent extremist content more generally.
We hope this report sheds light on the areas where improvement is most urgently needed, and that it will encourage a closing of the gap between public expectations on how social media companies should respond to antisemitism and the reality of what is currently occurring.