Bots (short for “robots”) are small artificial-intelligence programs that perform various tasks on the Internet, such as analyzing data, generating ad and product recommendations, and even writing text.
As technology advances, it becomes increasingly difficult to tell whether something was produced by a program or by a human. One website, however, promises to make that task simpler.
Bots vs. Humans
Developers at OpenAI announced that they had built a text-generation algorithm called GPT-2, which they said was too dangerous to release into the world, since it could be used to flood the web with endless bot-written material.

But now, a team of scientists from the MIT-IBM Watson AI Lab and Harvard University has built an algorithm called GLTR, which estimates the probability that a given text was written by a tool like GPT-2, creating a powerful weapon in the ongoing battle to reduce the amount of spam and fake news circulating on the web.
Battle of wits
When OpenAI revealed GPT-2, the company showed how it could be used to write fictional but convincing news articles, sharing one the algorithm had written about scientists discovering unicorns.

GLTR uses the very same model to read a finished text and predict whether it was written by a human or by GPT-2. Just as GPT-2 writes sentences by predicting which word should come next, GLTR checks whether each word in a sentence is the one such a fake-news-writing bot would have selected.
“We assume that computer-generated text fools humans by sticking to the most likely words at each position. In contrast, natural writing more often selects unpredictable words that still make sense for the domain. This means we can detect whether a text looks too predictable to have come from a human writer!”
The scientists behind GLTR, on their blog
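The idea above can be sketched in a few lines of Python. This is only an illustration: GLTR itself queries GPT-2's own probabilities, whereas here a toy bigram counter stands in for the language model, and the function names (`build_bigram_ranker`, `top_k_fraction`) are hypothetical.

```python
# Sketch of GLTR's core idea: score a text by how often each word is
# among a model's top-ranked predictions. A toy bigram counter stands
# in for the real language model (GLTR actually uses GPT-2 itself).
from collections import defaultdict

def build_bigram_ranker(corpus):
    """Return a function ranking follow-up words by how often they
    follow each word in `corpus` (rank 0 = most likely)."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1

    def rank(prev, word):
        ranked = sorted(counts[prev], key=counts[prev].get, reverse=True)
        return ranked.index(word) if word in ranked else len(ranked)

    return rank

def top_k_fraction(text, rank, k=1):
    """Fraction of words that sit in the model's top-k predictions.
    Machine-generated text tends to score high; human text lower."""
    words = text.split()
    pairs = list(zip(words, words[1:]))
    hits = sum(1 for prev, w in pairs if rank(prev, w) < k)
    return hits / len(pairs)
```

A text whose every word is the ranker's top pick scores 1.0 (suspiciously predictable), while text using less likely words scores closer to 0.0.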
Finding genuine text
The MIT and Harvard researchers behind the project have created a website that lets anyone test GLTR for themselves. The tool highlights words in different colors based on how likely it is that an algorithm like GPT-2 produced them. Unfortunately, the site's algorithm only works with English text so far.
In the results generated by the program, green means a word is one GPT-2 itself would most likely have chosen, while shades of yellow, red and especially purple mark increasingly unpredictable words, suggesting that a human probably wrote them.

We decided to test the system with a sentence of our own from this article. Although many of its semantic structures were flagged as bot-generated, the sentence contains elements showing that a human was involved in producing the content.
However, Janelle Shane, an artificial-intelligence researcher specializing in machine learning, found that GLTR does not generalize to text-generation algorithms other than OpenAI's GPT-2.

Testing it on her own text generator, Shane found that GLTR incorrectly concluded that the resulting text was so unpredictable that a human must have written it.
This suggests that we still need a more robust tool to advance the fight against misinformation and content spoofing on the Internet.