Instagram have launched a new feature that restricts public comments from trolls and bullies on the platform. Is this enough of a step, or are they venturing down a tricky road of censorship?
Instagram previously mentioned that it was working towards “warning labels” for negative comments at the F8 conference back in April. The feature's initial aim is to urge those posting negative comments to rethink what they are saying rather than simply blocking them from saying it. The prompt asks “Are you sure you want to post this?”, giving the author an opportunity to reconsider. The hope is that this encourages a genuine change of mind in those who post negatively, whereas simply blocking them is not thought to be enough of a deterrent.
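Instagram has not published how the feature works internally, but the flow described above can be sketched in a few lines. Everything here is hypothetical: the word list stands in for whatever classifier Instagram actually uses, and the function names are illustrative.

```python
# Illustrative sketch only; Instagram's real implementation is not public.
# A comment flagged as potentially hurtful triggers a confirmation prompt
# before posting; the author can then undo it.

HURTFUL_TERMS = {"idiot", "loser", "ugly"}  # crude stand-in for a real ML model


def looks_hurtful(comment: str) -> bool:
    """Return True if the comment contains any flagged term."""
    words = set(comment.lower().split())
    return bool(words & HURTFUL_TERMS)


def submit_comment(comment: str, confirm) -> bool:
    """Post the comment, asking the author to confirm if it looks hurtful.

    `confirm` is a callback standing in for the "Are you sure you want to
    post this?" prompt; it returns True if the author still wants to post.
    Returns True if the comment was posted, False if the author undid it.
    """
    if looks_hurtful(comment):
        if not confirm(comment):
            return False  # author chose to undo the comment
    return True
```

The key design point is that nothing is blocked outright: a determined user can confirm and post anyway, which is what distinguishes this nudge-based approach from censorship.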
The head of Instagram, Adam Mosseri, wrote in a blog post that early testing over recent weeks had returned positive results. He explained:
“It encourages some people to undo their comment and share something less hurtful once they have had a chance to reflect,”
It is unclear, however, just how effective those warnings have proven to be. It would be interesting to see the number of times the warning was shown and the number of times this led to posts being reverted, edited or deleted.
According to research by the Pew Research Center in the United States, 59% of American teenagers have been bullied or harassed online. Here in the United Kingdom, the anti-bullying charity Ditch The Label found in its 2017 survey that 42% of British 12-20 year olds had been the target of cyberbullying. Ofcom are also among those warning that online bullying is on the rise, and with social media playing an ever more central role in people's lives, software development of this kind is needed to ensure that people's wider mental health isn't negatively affected.
Following Monday's announcement of the launch, English-speaking Instagram users will be the first to receive the new tool, although it is expected to roll out globally in the future, as reported by the BBC.
Instagram have also added a new ‘restrict’ feature that may prove more useful to people dealing with bullying and harassment targeted directly at them. Once someone has been designated as “restricted”, their comments on your posts are visible only to you and them, and become public only if you approve them. Unlike blocking, the restricted user is given no indication that their comments are being moderated. That visibility is something that has deterred people from acting against bullies in the past, for fear of escalating things.
It's important that the large social media companies are becoming more involved in developing software that reduces negative comments and bullying, but they need to be careful about what could be classed as over-censoring. Facebook and Twitter have faced much criticism in the past over what appeared to be politically motivated censoring. The long-running argument over whether social media companies should remain content platforms, not moderators, still rages on. It is spelled out in a simple thought experiment:
If someone threatens or harasses you on the phone, should the phone company be held accountable or responsible? Should they start to censor or block certain messages? Can the same be said for social media? This is where the disputes begin, and it reminds us of the importance of ethics in all forms of software development and design, whether bespoke or out-of-the-box.