The internet has a lot to answer for, or perhaps it’s social media more specifically that seems to be impacting our world in so many damaging ways. Facebook, Twitter, Snapchat and Instagram have all been linked to declining levels of mental health in teens and young adults. Not only do they exacerbate children’s and young people’s body-image worries and worsen bullying, but they have also been linked to sleep problems, anxiety, depression and loneliness.
One of the biggest concerns is undoubtedly the rise in online bullying and the ease with which faceless ‘trolls’ can say pretty much whatever they like, to anyone they like, with relative impunity. Thankfully, however, there is one way that those who post bullying, threatening and inflammatory comments can be caught.
When free speech turns ugly
The anonymity trolls enjoy is a major contributing factor to their behaviour. Believing they can say anything without consequences, many even use software that conceals their IP addresses, which makes it extremely difficult to prove who is behind the comments using conventional investigative techniques.
Increasingly, though, investigators are realising that linguistics could be the key to tracking trolls down. Much like fingerprints, our speech patterns leave behind clues about who we are, with linguistic quirks and our unique vocabulary providing insight into our experiences, our background and where we come from.
Forensic linguistics in action
Of course, linguistic identification is nowhere near as exact as biological identifiers such as fingerprint evidence or DNA, but it can be enough to change the direction of an investigation. In a recent American defamation case, a local businessman and various judges were being relentlessly trolled by an online commenter.
Forensic linguists were called in and noticed that a handful of nouns were being used in unusual ways. This helped to link the troll to articles a local lawyer had written and published online. After digging a little deeper, investigators then saw that the troll’s username was connected to the lawyer personally. When confronted with this evidence, the lawyer confessed and was subsequently demoted.
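The idea of matching writing samples can be sketched in code. The snippet below is a toy illustration only, not the method the forensic linguists in the case actually used: it builds relative word-frequency profiles for text samples (the example sentences are invented) and compares them with cosine similarity, a standard measure in stylometry. A suspiciously high score between an anonymous post and a known author’s writing is a lead, never proof.

```python
from collections import Counter
import math

def profile(text):
    """Build a relative word-frequency profile for a text sample."""
    words = [w.strip(".,!?;:\"'()").lower() for w in text.split()]
    counts = Counter(w for w in words if w)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def cosine_similarity(p, q):
    """Cosine similarity between two profiles: 0 = no overlap, 1 = identical."""
    dot = sum(p[w] * q[w] for w in set(p) & set(q))
    norm_p = math.sqrt(sum(v * v for v in p.values()))
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    if norm_p == 0 or norm_q == 0:
        return 0.0
    return dot / (norm_p * norm_q)

# Invented samples for illustration.
troll_posts = "The pavement hereabouts is a disgrace, a veritable disgrace to the borough."
known_articles = "Such conduct is a disgrace to the borough and a disgrace to the profession."
unrelated_text = "Great weather today, looking forward to the game tonight."

print(cosine_similarity(profile(troll_posts), profile(known_articles)))
print(cosine_similarity(profile(troll_posts), profile(unrelated_text)))
```

Real stylometric analysis looks at far richer features than raw word counts, such as function-word distributions, spelling habits and syntax, but the comparison step follows the same logic.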
The role of machine learning
Trolling is a very difficult issue to tackle in this way because it is so prevalent. The cost of using forensic linguistics in every case would be huge, which is where artificial intelligence can help. Some sites already use AI moderation tools to review and approve content before it appears on public-facing platforms. The simplest tools rely on keyword filters to catch offensive language, but others go a step further and can identify abusive messages even when no offensive words or slurs are used.
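A minimal sketch of the keyword-filter approach is shown below; the blocklist, function name and the crude all-caps heuristic are invented for illustration, with the heuristic standing in for the much more sophisticated signals a trained model would learn.

```python
import re

# Hypothetical blocklist; a real moderation system would use a much larger,
# curated list alongside a trained classifier.
BLOCKLIST = {"idiot", "loser", "moron"}

def flag_for_review(message):
    """Return True if a message should be held for human moderation.

    Two checks: exact blocklist matches, plus a crude tone heuristic
    (all-caps shouting with repeated exclamation marks) that can flag
    abuse even when no banned word appears.
    """
    words = re.findall(r"[a-z']+", message.lower())
    if any(w in BLOCKLIST for w in words):
        return True
    letters = [c for c in message if c.isalpha()]
    if letters and all(c.isupper() for c in letters) and "!!" in message:
        return True
    return False

print(flag_for_review("You absolute idiot"))         # caught by the blocklist
print(flag_for_review("GET OUT OF THIS TOWN!!"))     # caught by the tone heuristic
print(flag_for_review("Lovely article, thank you"))  # allowed through
```

The keyword check is cheap but easy to evade; that is exactly the gap the more advanced tools described above try to close by learning patterns of abusive phrasing rather than matching fixed words.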
Tools like these could be used by platforms such as Twitter and Facebook to identify and block abusive messages before they are ever received. Not only would that offer users a degree of protection, but it would also mean forensic linguistics could be reserved for the most serious cases.
What are your views?
This is a very emotive topic and one we would love to hear your views on. Please share your experiences on our Facebook page, or, to find out more about any of our translation services, get in touch with our team.