AI
OpenAI's apocalyptic worst-case existential risk scenarios
How could an AI model take over the world and destroy humanity? OpenAI found out...
AI
If relatively basic large language models (LLMs) are already giving us the runaround, what hope do we have against an AGI superintelligence?
Ilya Sutskever
AI pioneer and safe superintelligence visionary argues that the era of using ever-bigger datasets to train ever-larger neural networks is over.
AI
Study finds that GenAI testimonials are hyper-positive and totally convincing, putting humans at risk of manipulation.
AI
Don't expect to see AGI next year - or anytime soon.
ChatGPT
GenAI model spotted breaking rules, trying to kill off a competitor and then lying about it. No big deal...
space
Physicist suggests that automated extraterrestrials may seek to extract the power from the universe's darkest celestial entities.
security
"Individually, each risk is relatively minor, but combined, the danger increases considerably."
LLMs
GenAI models hit the 52nd percentile of creativity, potentially leaving four billion people eating their dust.
AI
"Your book sucked."
robots
"This is just the beginning of a complete digital transformation for the world."