BBC journalists tested the GPT-2 AI system and concluded that its writing was so convincing as to be dangerous.
OpenAI, a nonprofit research company, has upgraded its AI-based GPT-2 text generator and invited BBC reporters to test the new version.
The test results were so impressive that OpenAI decided not to make the full model publicly available, to avoid possible abuse. The algorithm was trained on a database of eight million web pages. It can now adapt to the content and style of a text: for example, it can continue what another person has written, or compose a text from scratch while imitating a specific author's expressions and style.
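GPT-2 itself is a large Transformer network, but the basic idea it relies on, predicting a likely next word from the words so far and sampling repeatedly to extend a prompt, can be sketched with a toy stand-in. The snippet below is a minimal word-level bigram model, purely illustrative; all names and the tiny corpus are invented for the example and have nothing to do with OpenAI's actual implementation.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Map each word to the list of words that follow it in the corpus."""
    model = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def continue_text(model, prompt, n_words=10, seed=0):
    """Extend a prompt by repeatedly sampling an observed next word."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(n_words):
        candidates = model.get(words[-1])
        if not candidates:  # no known continuation for this word
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

# Tiny made-up corpus standing in for GPT-2's eight million web pages.
corpus = ("the system generates text and the system adapts to the style "
          "of the prompt and continues the text")
model = train_bigrams(corpus)
print(continue_text(model, "the system", n_words=6))
```

A real language model replaces the bigram lookup with a neural network that conditions on the entire preceding context, which is what lets GPT-2 pick up an author's style rather than just the previous word.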
During testing, texts written by the AI were compared with texts written by human authors. It proved almost impossible to tell a generated text from a human-written one.
The developers fear that attackers could, in theory, use GPT-2 to spread fake news or spam and to sway public opinion. They have therefore released only a trimmed-down version with considerably fewer capabilities. Even in that form the system can generate text, but a reader would most likely recognize that it was written by a machine.