One of the top challenges facing social networks, business websites and other online communities is harassment and trolling. Amid an unprecedented explosion of user-generated content, platforms are struggling to keep tabs on what is appropriate and what is not. A key reason for this dilemma is that traditional moderation solutions simply cannot keep pace with the rate at which inappropriate content is generated.
The pressure to stave off online harassment and trolling is ever increasing due to their pervasive negative effects. Online discussion forums encourage people to express their opinions by commenting. All too often, though, innocent conversations spiral out of control and perpetrators unleash torrents of abuse and hate. Some go as far as using threatening language, causing untold distress to the recipients.
This behavior is becoming more and more commonplace: according to the Data & Society Research Institute, 72% of US internet users say they have witnessed online harassment, a further 47% have experienced it themselves, and 27% constantly self-censor out of fear of becoming a target.
In a bid to silence abuse, users and platform operators alike have resorted to a form of censorship that limits freedom of speech. Countless internet users have quit social networking sites to escape unnerving backlashes to innocent posts, while certain websites heavily moderate or disable comment features altogether to keep things in check. In doing so, however, businesses whose models revolve around online communities lose out on opportunities for interaction and brand promotion. Ultimately, the solution to the content filtering challenge lies in machine learning algorithms.
How Does ML-Based Content Filtering Work?
To understand why machine learning holds the key to solving this dilemma, let us start by exploring how ML-based content filtering works. Essentially, artificial intelligence (AI) systems are trained to process data in a manner loosely modeled on the human brain. By mimicking its behavior, they ‘reason’ over the input they receive in the form of text, images and sounds, and react accordingly.
They are, therefore, capable of scanning images as well as text for possible inappropriate content and flagging it. Under this approach, an artificial neural network undergoes training through exposure to real-world content, both appropriate and inappropriate. The training process seeks to equip computerized systems to distinguish between harmless banter and actual harassment. To do this, such a system requires a huge amount of data, from which it can pick out toxicity based on the words used as well as the volume of text and other patterns.
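As a minimal sketch of this idea, the snippet below trains a toy toxicity classifier on a handful of labeled comments using scikit-learn. The inline dataset and the choice of model are purely illustrative; a production system would learn from vast labeled corpora and far richer features.

```python
# Minimal sketch of training a toxicity classifier with scikit-learn.
# The tiny inline dataset is purely illustrative; a real system would
# train on hundreds of thousands of labeled comments.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "Great point, thanks for sharing!",
    "I disagree, but I see where you're coming from.",
    "You are an idiot and nobody wants you here.",
    "Shut up or you'll regret it.",
]
labels = [0, 0, 1, 1]  # 0 = acceptable, 1 = toxic

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word and bigram features
    LogisticRegression(),
)
model.fit(comments, labels)

# predict_proba yields a toxicity score between 0 and 1
score = model.predict_proba(["nobody wants your opinion, idiot"])[0][1]
print(f"Toxicity score: {score:.2f}")
```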
The more data such a system is exposed to, the better it becomes at scoring content. Developers of content filtering software decide how the system should handle the comments it deems toxic. For instance, it could flag inappropriate comments so that moderators can review them and decide whether to admit them into the conversation. Alternatively, it could show commenters the potential toxicity of a comment before they post it, possibly prompting them to rephrase.
However, before any such action is taken, the system automatically blocks suspicious content to prevent possible damage, ensuring that the targeted victims are never exposed to it.
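A hypothetical handling policy might look like the sketch below, where `score_toxicity` stands in for a trained model such as the classifier above, and the thresholds are placeholder values a real platform would tune.

```python
# Illustrative handling policy; score_toxicity is a stand-in for a
# trained model such as the classifier sketched earlier. The thresholds
# are hypothetical and would be tuned per platform.
def score_toxicity(text: str) -> float:
    """Placeholder: return a toxicity probability between 0 and 1."""
    hostile_phrases = {"idiot", "shut up", "regret"}
    return 1.0 if any(p in text.lower() for p in hostile_phrases) else 0.1

BLOCK_THRESHOLD = 0.9   # near-certain toxicity: hold back immediately
REVIEW_THRESHOLD = 0.5  # uncertain: queue for a human moderator

def handle_comment(text: str) -> str:
    score = score_toxicity(text)
    if score >= BLOCK_THRESHOLD:
        return "blocked"            # the target never sees the comment
    if score >= REVIEW_THRESHOLD:
        return "queued_for_review"  # a moderator makes the final call
    return "published"

print(handle_comment("Shut up or you'll regret it."))  # -> blocked
```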
Weaknesses of Traditional Approaches
To illustrate the scope of the problem and the weaknesses of traditional content filtering solutions, consider Facebook. Statistics from Digital Marketing reveal that in a span of 60 seconds, the platform receives 136,000 photo uploads, 510,000 comments and 293,000 status updates.
The traditional filtering approach involves having a team of moderators review all user-generated content before approval. While this may work for small platforms, it clearly cannot handle the volume of larger ones. For a site using this approach, scaling up means an ever-larger moderation team and, in turn, rising personnel costs. And even an expanded team often cannot keep up with the flood of toxic content targeting the site in real time.
At the same time, content moderation has gained a reputation as one of the worst jobs, as it involves constant exposure to gruesome and depraved content. This exposure has been shown to take a psychological toll on moderators, leading to burnout and even post-traumatic stress disorder.
Benefits of Machine Learning Content Filtering Technology
Content filtering technology based on machine learning offers a host of noteworthy benefits over traditional models. Let us briefly analyze some of them:
Ability to Handle Conversation Influxes
Chief among the merits of an ML-based system is that, unlike human moderators, it can keep up with a huge influx of comments in real time. It also means faster moderation at a lower cost, since fewer personnel are needed to handle the task.
Addressing Censorship Concerns
An automated model allows a site to keep enjoying the benefits of user interaction because it has the capacity to manage conversations effectively. It thereby avoids the censorship that would result from disallowing comments altogether, letting a business reap the rewards of an engaged community without the ill effects of toxicity.
A Capacity to Improve Continually
Much like human beings, machine learning systems improve continually as they are exposed to more data. Over time, their accuracy increases and they become better at rooting out inappropriate text and images.
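One way this plays out in practice is online (incremental) learning, sketched below under the assumption that newly moderated comments arrive in labeled batches; the vectorizer, model and data here are illustrative choices, not a prescribed design.

```python
# Minimal sketch of incremental (online) learning, assuming comments
# arrive in batches with moderator-confirmed labels. HashingVectorizer
# keeps the feature space fixed so partial_fit can be called repeatedly.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**18)
clf = SGDClassifier(loss="log_loss")  # logistic regression trained online

batches = [
    (["thanks for this", "you absolute moron"], [0, 1]),
    (["interesting take", "get lost, loser"], [0, 1]),
]

for texts, labels in batches:
    X = vectorizer.transform(texts)
    clf.partial_fit(X, labels, classes=[0, 1])  # model improves per batch
```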
Predictive vs. Reactive Approach
Another major benefit of an automated system is its ability to take a predictive rather than a reactive approach. Traditional solutions are often reactive, taking action only after the damage is done. With proper training, however, machine learning models can anticipate toxic situations before they actually occur. To do this, neural networks are exposed to data on conversations that got out of hand alongside those that remained civil; they learn the common patterns that precede toxicity and can issue alerts beforehand.
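The sketch below illustrates the idea with a deliberately simple derailment predictor: it extracts a few hand-picked signals from a conversation's opening messages and fits a classifier on a toy history. The features, data and model are hypothetical stand-ins for the patterns a real system would learn.

```python
# Hypothetical sketch of predicting whether a conversation will turn
# toxic from its opening exchanges. The features and training data are
# illustrative stand-ins for what a real model would learn.
from sklearn.linear_model import LogisticRegression

def early_features(messages: list[str]) -> list[float]:
    text = " ".join(messages).lower()
    words = text.split()
    n = max(len(words), 1)
    return [
        text.count("you") / n,                       # accusatory 2nd person
        text.count("!") / n,                          # heated punctuation
        sum(w.isupper() for w in " ".join(messages).split()) / n,  # shouting
    ]

# Toy training set: first two messages of past conversations,
# labeled 1 if the thread later derailed into harassment.
history = [
    (["Nice article.", "Agreed, well argued."], 0),
    (["YOU have no idea!", "Why are YOU even here?!"], 1),
]
X = [early_features(msgs) for msgs, _ in history]
y = [label for _, label in history]

model = LogisticRegression().fit(X, y)
risk = model.predict_proba([early_features(["You people are CLUELESS!"])])[0][1]
print(f"Derailment risk: {risk:.2f}")  # could trigger a moderator alert
```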
Machine Learning in the Fight against Trolls
In view of the high potential of machine learning-based content filtering software, a number of big-name brands are already implementing the technology to fight trolling. A good example is Jigsaw, a technology incubator under Alphabet, Google’s parent company, which has created a machine learning tool known as Perspective to automate content filtering.
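Perspective is exposed as a REST API; the sketch below shows roughly what a request for a toxicity score looks like. The request shape follows the public documentation at the time of writing, and YOUR_API_KEY is a placeholder for a real Google Cloud API key.

```python
# Sketch of querying Jigsaw's Perspective API for a toxicity score.
# The endpoint and payload follow the public API documentation;
# YOUR_API_KEY is a placeholder for a real Google Cloud API key.
import requests

URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def perspective_toxicity(text: str, api_key: str) -> float:
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, params={"key": api_key}, json=payload)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

print(perspective_toxicity("You are a waste of space.", "YOUR_API_KEY"))
```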
Twitter, for its part, seeks to address the problem at its root by using a machine learning system to identify troll accounts and take action on them automatically. This preventative approach aims to tackle the issue before it ever reaches users.
The Future of Content Filtering is Here
Machine learning algorithms offer a viable and sustainable solution to content filtering. By automating the identification of toxic comments that undermine civil interaction between users, they can prevail against online trolls. Notably, they overcome the challenges associated with traditional methods: limited speed, high costs and the psychological toll on human moderators. Even though they may be limited in their capabilities at the onset, they get better as they gain exposure to relevant data.
At a time when civility and freedom of expression are on the line, the merits of this approach hold great potential in winning the battle against internet trolling and harassment.