Stephen Hawking’s Last Reddit Posts Warned of AI Apocalypse and Urged Its Judicious Use


Renowned theoretical physicist and cosmologist Stephen Hawking passed away a few days ago, leaving behind a legacy of groundbreaking work on relativity, cosmology, and quantum physics. Beyond his expertise in those fields, the celebrated scientist also offered deep insights into areas such as artificial intelligence, extraterrestrial life, and religion, and his last Reddit posts reflect exactly that.

The world is well aware of Elon Musk’s apocalyptic predictions about AI jeopardizing humanity’s future, but Stephen Hawking underlined the same concern in a Reddit AMA session two years ago, which was also his last interaction on the self-proclaimed ‘front page of the Internet’.


Answering questions about AI and the public perception surrounding the topic, Stephen Hawking wrote, “The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.” He further suggested that young minds should be educated about the beneficial aspects of AI and the huge role it can play in the development of human society, rather than dwelling on its destructive potential in some loosely imagined apocalyptic scenario.

When quizzed about the expected timeframe for the emergence of human-level AI (and, beyond that, superintelligent AI smarter than humans), he responded, “There’s no consensus among AI researchers about how long it will take to build human-level AI and beyond, so please don’t trust anyone who claims to know for sure that it will happen in your lifetime or that it won’t happen in your lifetime. When it eventually does occur, it’s likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right.”

He stressed that we should start figuring out now how to steer the development of AI toward purely productive purposes, rather than wait for superintelligent AI to arrive and create a conflict that could spiral out of humanity’s control. After all, isn’t that the very human blunder that led to the creation and destructive rise of Skynet in Terminator?
