June 28, 2021
In a world where all information is suspect, where what’s real is muddled with what’s fake
The term “fake news” was popularized during the 2016 U.S. election campaign, even though lies, manufactured allegations, half-truths, and exaggerated gossip have been part of politics for as long as politics has existed. What’s new in our internet age is the speed and scale with which this type of false or unreliable information spreads.
Countless real-sounding news articles populate the web, as new technologies and tools such as AI have made them much easier to produce. The snowballing volume of such stories has made fake news a growing threat to business as well as local, national, and international security.
Some internet companies blame errors in their machine learning algorithms, though the real issue is less the errors than the fact that these companies originally designed the algorithms to generate engagement and traffic. Others point to how crowded the online world is and how, in such a world, it can be hard to discern gray areas, e.g., what is satire, created to entertain, and what is fake news, created to mislead.
Some pundits argue that human editors are the answer and should replace machine learning mechanisms. But in 2016, the year of the U.S. election, it turned out that human editors weren’t impartial after all, and that they favored certain politics over others.
Has Fake News Diluted Web Intelligence Efforts?
The short answer is no. First, there’s only so much clicking a human intelligence researcher can do when searching the web for forensic breadcrumbs, whereas AI-powered web intelligence tools continuously crawl the internet, including the deep and dark web, matching the speed at which content, fake or real, appears online.
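At its core, this kind of continuous sweep is a traversal of a link graph. The sketch below is a toy illustration only, not Cobwebs’ actual crawler: it walks a small in-memory “site map” (a stand-in for real HTTP fetching) breadth-first and flags pages mentioning a watched keyword.

```python
from collections import deque

# Toy "web": page -> (text, outgoing links). A real crawler would fetch
# these over HTTP; an in-memory map keeps the sketch self-contained.
SITE = {
    "home": ("breaking news about the summit", ["story1", "story2"]),
    "story1": ("officials deny the fabricated claim", ["home"]),
    "story2": ("satire: aliens endorse the candidate", ["story1", "story3"]),
    "story3": ("archived report on the summit", []),
}

def crawl(start: str, keyword: str) -> list[str]:
    """Breadth-first sweep from `start`, returning pages whose text
    contains `keyword` -- the basic loop behind link-graph monitoring."""
    seen, hits = {start}, []
    queue = deque([start])
    while queue:
        page = queue.popleft()
        text, links = SITE[page]
        if keyword in text:
            hits.append(page)
        for link in links:
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return hits

print(crawl("home", "summit"))  # → ['home', 'story3']
```

Because the traversal remembers visited pages, it terminates even on cyclic link graphs, which is what lets such a sweep run continuously at scale.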
Second, gathering web intelligence requires working with countless metrics as well as performing real-time anomaly detection and in-depth fact-checking. Since such an effort would take many people many hours, AI-powered analytics tools are the answer.
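To make “real-time anomaly detection” concrete, here is a minimal sketch of one common approach, a rolling z-score over a metric stream. This is an illustrative technique of my choosing, not a description of Cobwebs’ proprietary analytics: a point is flagged when it deviates from the trailing window’s mean by more than a few standard deviations.

```python
import statistics

def flag_anomalies(series, window=5, threshold=3.0):
    """Flag indices whose value deviates from the trailing window's mean
    by more than `threshold` standard deviations (rolling z-score)."""
    flagged = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        mean = statistics.fmean(past)
        stdev = statistics.pstdev(past)
        if stdev and abs(series[i] - mean) / stdev > threshold:
            flagged.append(i)
    return flagged

# Hourly share counts for a story; the sudden spike at index 8 is the
# kind of burst that coordinated amplification of fake news produces.
shares = [10, 12, 11, 13, 12, 11, 12, 13, 90, 14]
print(flag_anomalies(shares))  # → [8]
```

A human analyst could compute this for one story; an automated pipeline runs it across millions of metric streams at once, which is the scale argument made above.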
So, what’s the difference between the error-prone machine learning algorithms mentioned above and a successful AI-powered web intelligence solution such as Cobwebs? The former has no context, which makes mistakes more likely. The latter has a well-defined context and a goal that matches the desired outcome, so it can detect fake news by focusing on anomalies far faster than humans can.
Cobwebs’ solution transforms a single lead into a comprehensive AI-powered investigation, streamlining results to provide automated real-time alerts and insights. For example, it can monitor for specific keywords predefined by the end user. Using Natural Language Processing (NLP), it detects sentiment and assesses whether shared content is positive, neutral, or negative. Applying Optical Character Recognition (OCR) and object detection, it recognizes words appearing in images and videos as well as specific objects such as weapons and bombs.
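The keyword-alerting and sentiment flow described above can be sketched in a few lines. The lexicons and posts below are made up for illustration, and real NLP sentiment models are far richer than a word-list vote; this only shows the shape of the positive/neutral/negative pipeline.

```python
# Toy sentiment lexicons; a production system would use a trained NLP
# model rather than word lists.
POSITIVE = {"great", "safe", "praised", "success"}
NEGATIVE = {"attack", "threat", "hoax", "panic"}

def classify_sentiment(text: str) -> str:
    """Return 'positive', 'negative', or 'neutral' by lexicon vote."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def monitor(posts, keywords):
    """Yield (post, sentiment) alerts for posts containing any watched
    keyword, mimicking predefined-keyword alerting."""
    for post in posts:
        lowered = post.lower()
        if any(k in lowered for k in keywords):
            yield post, classify_sentiment(post)

posts = [
    "City marathon praised as a great success",
    "Viral hoax claims an attack downtown",
    "Weather update for the weekend",
]
for post, sentiment in monitor(posts, keywords={"attack", "marathon"}):
    print(sentiment, "->", post)
```

Running this prints a positive alert for the marathon post and a negative alert for the hoax post, while the post matching no keyword produces no alert at all.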
Anyone on the web can spread fake news, whether intentionally or accidentally. Using AI, Cobwebs’ web intelligence solution can detect anomalies and help prevent simple machine learning algorithms from promoting and further spreading fake news, effectively supporting business and national security efforts.