Earlier this month our paper titled “Islamophobes are not all the same! A study of far right actors on Twitter” was published in the Journal of Policing, Intelligence and Counter Terrorism. This is one of those papers that has the whole story in its title! So there is very little that I can add, but in this short post I’d like to highlight one issue that we didn’t discuss in the paper: content moderation and platform policy.
This year has had a stormy start in terms of social media contestation and Twitter turmoil. With the attack on the Capitol in January, largely ignited and organised on Twitter and other social media, platform owners panicked a little and started to react. One reaction was Twitter’s Birdwatch initiative: a community-based approach to content moderation.
Assessing the effectiveness of such initiatives, and weighing their gains against their risks, requires extensive research. However, one point that stands out when looking at the design of most content moderation policies is their focus on a single tweet, comment, or post.
The new Twitter Birdwatch will facilitate collaboration among community members to produce a fact-checking note to accompany the original tweet. Whilst this might work for certain purposes, it might be completely irrelevant in terms of overall norm formation. In our paper, by analysing millions of tweets from thousands of users, we show how users’ (mis)behaviour differs in its characteristics and temporal evolution. We identified seven types of users based on the dynamics of the content they posted over time, and showed that these types differ fundamentally in the volume and rhythm of the hateful content they generate.
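To make this more concrete, here is a minimal, purely illustrative sketch of how a user-level typology could be derived from per-user weekly hate scores. It is not the pipeline we used in the paper (which combines a machine learning classifier with latent Markov modelling); the DataFrame columns, the weekly aggregation and the k-means grouping are all stand-in assumptions.

```python
# Purely illustrative sketch -- NOT the pipeline used in the paper.
# Assumes tweets have already been scored by a hate-speech classifier;
# the DataFrame schema (user_id, timestamp, is_islamophobic) is hypothetical.
import pandas as pd
from sklearn.cluster import KMeans

def user_trajectories(tweets: pd.DataFrame, n_weeks: int = 52) -> pd.DataFrame:
    """Weekly fraction of hateful tweets for each user."""
    tweets = tweets.copy()
    tweets["week"] = pd.to_datetime(tweets["timestamp"]).dt.isocalendar().week.astype(int)
    weekly = (tweets.groupby(["user_id", "week"])["is_islamophobic"]
                    .mean()
                    .unstack(fill_value=0.0))
    # Align every user to the same fixed set of weeks.
    return weekly.reindex(columns=range(1, n_weeks + 1), fill_value=0.0)

def type_users(trajectories: pd.DataFrame, n_types: int = 7) -> pd.Series:
    """Group users by the shape of their trajectories. K-means is used here
    only as a simple stand-in for the latent Markov modelling in the paper."""
    km = KMeans(n_clusters=n_types, n_init=10, random_state=0)
    return pd.Series(km.fit_predict(trajectories.values),
                     index=trajectories.index, name="user_type")
```

The point of the sketch is simply that the unit of analysis is the user’s trajectory over time, not any individual tweet.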
Back to Birdwatch and similar initiatives: whilst there are several advantages to policies that target individual tweets, it is important to understand the overall behaviour of a user over time. For instance, one of the user types we identified is the “escalating” haters, consisting of users whose posted content gradually becomes more and more hateful. Rather than labelling the tweets of such users one by one, it could be more effective to identify them early on and apply a user-level intervention (whatever that might be, perhaps a simple warning followed by more serious interventions if the “escalation” continues).
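As a back-of-the-envelope illustration of what catching an “escalating” user early might look like, the sketch below fits a simple linear trend to each user’s weekly hate fraction and flags those trending upwards. The function names and the threshold are hypothetical, and a real system would need something far more robust than a raw slope.

```python
# Illustrative only: flag potentially "escalating" users by fitting a linear
# trend to each user's weekly hate fraction. The slope threshold is a made-up
# number, not a value from the paper.
import numpy as np
import pandas as pd

def escalation_slope(trajectory: np.ndarray) -> float:
    """Least-squares slope of the weekly hate fraction over time."""
    weeks = np.arange(len(trajectory))
    slope, _ = np.polyfit(weeks, trajectory, deg=1)
    return slope

def flag_escalating_users(trajectories: pd.DataFrame,
                          slope_threshold: float = 0.005) -> pd.Index:
    """Users whose hate fraction grows by more than `slope_threshold` per week
    on average -- candidates for an early, user-level warning."""
    slopes = trajectories.apply(lambda row: escalation_slope(row.values), axis=1)
    return slopes[slopes > slope_threshold].index
```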
Such strategies could be more effective in the long term, both in stopping hateful content (or even misinformation) from spreading and in enforcing community norms and building a more constructive culture on social media.

Here is the abstract of our paper:
Far-right actors are often purveyors of Islamophobic hate speech online, using social media to spread divisive and prejudiced messages which can stir up intergroup tensions and conflict. Hateful content can inflict harm on targeted victims, create a sense of fear amongst communities and stir up intergroup tensions and conflict. Accordingly, there is a pressing need to better understand at a granular level how Islamophobia manifests online and who produces it. We investigate the dynamics of Islamophobia amongst followers of a prominent UK far right political party on Twitter, the British National Party. Analysing a new data set of five million tweets, collected over a period of one year, using a machine learning classifier and latent Markov modelling, we identify seven types of Islamophobic far right actors, capturing qualitative, quantitative and temporal differences in their behaviour. Notably, we show that a small number of users are responsible for most of the Islamophobia that we observe. We then discuss the policy implications of this typology in the context of social media regulation.