The Elon Musk-backed non-profit OpenAI has pulled the plug on releasing its recent research to the public for fear of misuse.
This is hands down one of the most controversial artificial intelligence news stories we've seen in many months! The pioneers behind this AI system, which can write convincing news content, have made the ethical decision to keep the full technology out of public reach. They have dubbed it potentially harmful in the wrong hands and will be conducting lengthy reviews of the repercussions this groundbreaking technology could have.
GPT-2 is a text generator that creates content from a paragraph or even a few words. Its core function is to predict what comes next, emulating the style and subject of the opening text. The results are astonishing. The system has a vast vocabulary and can easily recognise and replicate writing styles, from those of famous authors to news articles. A recent demonstration saw the system continue the story from the opening paragraph of George Orwell's 1984.
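GPT-2 itself is a large neural network, but the core idea described above — predicting the next word from the words so far, then feeding the result back in — can be sketched with a toy bigram (Markov) model. This is purely illustrative and nothing like OpenAI's actual architecture; the corpus and function names here are our own invention.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    model = defaultdict(list)
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, prompt, length=8, seed=0):
    """Continue the prompt by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(length):
        candidates = model.get(words[-1])
        if not candidates:
            break  # no known continuation for the last word
        words.append(rng.choice(candidates))
    return " ".join(words)

# Tiny example corpus (the famous 1984 opening line):
corpus = ("it was a bright cold day in april and the clocks "
          "were striking thirteen")
model = train_bigrams(corpus)
print(generate(model, "it was"))
# → it was a bright cold day in april and the
```

A real language model replaces the word-to-word lookup table with a neural network that conditions on the entire preceding text, which is why GPT-2 can sustain style and subject over whole paragraphs rather than just echoing local word pairs.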
GPT-2 has a broad knowledge of current and historical affairs and also holds expertise in what we'd probably class as trending subjects. OpenAI says it was trained extensively on web pages shared on Reddit that received at least 3 upvotes.
The detrimental side to this sophisticated AI system would come in the form of fake news and spam. It's great to see OpenAI leading the way by directing their research towards the risks of malicious use.
We're predicting a string of ethically challenging advances in artificial intelligence this year. AI companies will be flirting with ethical boundaries in a bid to control and secure their AI systems for safe use in the mainstream market.
AI Suppliers – Keeping you up to date with the latest Artificial Intelligence News.