One wonders how much of this is media-friendly hype versus an actual scary breakthrough in artificial intelligence:
The creators of a revolutionary AI system that can write news stories and works of fiction – dubbed “deepfakes for text” – have taken the unusual step of not releasing their research publicly, for fear of potential misuse.
OpenAI, a nonprofit research company backed by Elon Musk, Reid Hoffman, Sam Altman, and others, says its new AI model, called GPT2, is so good and the risk of malicious use so high that it is breaking from its normal practice of releasing the full research to the public, in order to allow more time to discuss the ramifications of the technological breakthrough.
At its core, GPT2 is a text generator. The AI system is fed text, anything from a few words to a whole page, and asked to write the next few sentences based on its predictions of what should come next. The system is pushing the boundaries of what was thought possible, both in terms of the quality of its output and the wide variety of potential uses.
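For anyone curious what "predicting what comes next" actually means, here is a minimal sketch of that loop in code. It assumes the small public GPT-2 checkpoint distributed through Hugging Face's transformers library, not the withheld full model the article describes, and it uses the first line of 1984 as the prompt to match the example below:

```python
# Minimal sketch of autoregressive text generation.
# Assumption: the small "gpt2" checkpoint from Hugging Face's transformers
# library (pip install transformers torch), not OpenAI's withheld full model.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Feed the model a prompt; generation just repeats "predict the next
# token, append it, predict again" until the length limit is reached.
prompt = "It was a bright cold day in April, and the clocks were striking thirteen."
input_ids = tokenizer.encode(prompt, return_tensors="pt")

output = model.generate(
    input_ids,
    max_length=80,          # prompt tokens plus generated tokens
    do_sample=True,         # sample from the distribution rather than always taking the top token
    top_k=50,               # restrict sampling to the 50 likeliest next tokens
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Because the output is sampled, every run continues the prompt differently, which is why the excerpt below wanders off in its own direction.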
Here’s what the program wrote after being fed the first line of 1984:
“I was in my car on my way to a new job in Seattle. I put the gas in, put the key in, and then I let it run. I just imagined what the day would be like. A hundred years from now. In 2045, I was a teacher in some school in a poor part of rural China. I started with Chinese history and history of science.”
Reminds me of the alarmist and mostly fake stories about how Facebook panicked and “shut down” an experiment in which two bots started to talk to each other in an incomprehensible language. It wasn’t exactly Skynet becoming self-aware, but the headlines tended to make you think that.
Granted, I’m probably not doing the OpenAI research justice here. I’m just very skeptical of AI-related stories that contain words like “fear” and “dangerous” in the headline and lead.