OpenAI Releases “Dangerous” Text-generating AI That Could Spread Fake News, Spam


Artificial intelligence research lab OpenAI has released a controversial text-generating AI system, dubbed GPT-2, that is capable of predicting the next word that should follow large passages of text. Are you wondering why I call GPT-2 controversial? Let me shed some more light on this AI system.

The text-generating AI system was first announced back in February 2019, but it wasn’t fully released back then. The reason was fear that the trained GPT-2 model could be used for malicious purposes such as spreading fake news, spam, and misleading information. At a time when social media giants and search engines are being blamed for spreading fake news and influencing elections, it seemed safer not to release the full version of the text-generating AI system.

OpenAI has since released smaller, less capable versions of GPT-2 online and studied how they were being used by developers and researchers. Since the organization has found no strong evidence of misuse, it has now officially released the AI system in its entirety.

So, what exactly is GPT-2 capable of, you ask? Well, GPT-2 is a text-generating language model that can produce coherent pieces of text based on user input. OpenAI trained this AI model on eight million text documents scraped from the web, along with sample text snippets provided by the researchers.
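The core idea behind a language model like GPT-2 is next-word prediction: given the words so far, output the most likely word to come next. GPT-2 does this with a massive Transformer network, but the concept can be sketched with a deliberately tiny bigram model in plain Python (a toy illustration only, not OpenAI's actual code):

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count how often each word follows each other word."""
    words = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Tiny example corpus; GPT-2 instead trained on eight million documents.
corpus = "the cat sat on the mat and the cat slept"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # → cat
```

Where this toy model just counts word pairs, GPT-2 learns statistical patterns over much longer contexts, which is what lets it continue a headline into a whole article.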

OpenAI has described this AI system as a “chameleon” since it can adapt to the writing style of the sample content it is given. And you don’t need to feed it much input to generate a news piece, story, or poem. There’s a reason experts feared that GPT-2 could be used for malicious activities: it can generate a fake news story from just a headline, spin out a complete poem from the first line, write recipes from a list of ingredients, and much more.

This is why researchers dubbed this AI system “too dangerous” to be published online. AI systems like these sit at the center of the debate around malicious use and the harm it could cause the masses. OpenAI has some reservations too, but it has gone ahead with the release. It says synthetic text has a higher chance of being misused when the output is highly coherent and convincing. Thus, as a safety measure to counter its own AI model, it has developed a tool to detect synthetic text.

So, if you are interested, you can check out GPT-2’s impressive AI capabilities using an online demo (Transformer is the machine learning component used to build GPT-2) by entering your own text snippet. Did it blow your mind? Do share your results with us in the comments below.
