This morning, the social media giant Facebook began testing out a button for users to report posts as hate speech.
While the initiative appears to be in the trial stage, with only certain users granted access to it, it could well mark the start of a worrying trend for free speech on Facebook.
The option appears at the bottom of every post in the user’s News Feed as an orange “warning” box containing an exclamation mark and the question “Does this post contain hate speech?”
If the user selects “no,” the button disappears. If the user selects “yes,” a second pop-up box appears, asking the user to confirm that the post contains hate speech or to choose one of three other test options.
Given these seemingly unfinished options, and the fact that Facebook pulled the feature from its site just a few hours ago, this appears to have been only a test run.
Even though the hate speech button is not yet a permanent feature, this brief trial run raises some concerning questions for Facebook users.
Just what is hate speech, anyway? How does Facebook define it? What is the difference between offensive speech and hate speech? Where can the line be drawn?
As of this morning, Facebook does not appear to have definitive answers to these essential questions: nowhere in the hate speech button portal was there an option to find a definition of hate speech.
You would think the company would have clearly defined the term by now, given that the post settings already include an option to report a post for hate speech, alongside options for nudity, self-harm, false news, and spam.
These are all questions that Facebook will have to answer clearly if it plans to make the hate speech button a permanent feature.