What is Safe Superintelligence and What It Does

In Short
  • Superintelligence is a hypothetical form of AI that surpasses human capabilities, skills, and knowledge in every domain. It's even more powerful than AGI.
  • Safe superintelligence is the goal of aligning such powerful systems with human values to prevent catastrophic outcomes for humanity.
  • OpenAI says that Superintelligence is conceivable within the next 10 years.

It’s clear that we are in the initial stages of Artificial Intelligence (AI), using chatbots like ChatGPT, which are powered by Large Language Models (LLMs). However, AI is not limited to chatbots. AI agents, AGI, and Superintelligence are the next paradigms of the AI era we are about to witness. So in this article, I explain what Superintelligence is and how Safe Superintelligence can protect mankind from powerful AI systems.

What is Superintelligence?

As the name suggests, Superintelligence is a form of intelligence that far surpasses the brightest and most ingenious human minds in every domain. It possesses knowledge, skills, and creativity orders of magnitude beyond those of biological humans.

Keep in mind that Superintelligence is a hypothetical concept where AI systems gain superior cognitive abilities, beyond human capabilities. It can unlock new paradigms in scientific discovery, solve problems that have challenged human minds for centuries, think and reason much faster than humans, and perform actions in parallel.

It’s often said that Superintelligence will be even more capable than AGI (Artificial General Intelligence). David Chalmers, a cognitive scientist, argues that AGI will gradually lead to Superintelligence. An AGI system can match human abilities in reasoning, learning, and understanding, but Superintelligence goes beyond that and exceeds human intelligence in every aspect.

In May 2023, OpenAI shared its vision of superintelligence and how it could be governed in the future. The blog post, written by Sam Altman, Greg Brockman, and Ilya Sutskever, states that “it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations.”

Implications and Risks of Superintelligence

Since Superintelligence can surpass human capabilities, there are many risks associated with the technology. Nick Bostrom, a prominent thinker, argues that Superintelligence poses an existential risk to humanity if it is not aligned with human values and interests. It could lead to catastrophic outcomes for human society, possibly even human extinction.

Apart from that, Bostrom also raises ethical questions about the creation and use of superintelligent systems. What happens to individual rights, who gets to control such a system, and what will its impact be on society and welfare? Once such a system is developed, there is a high chance it could evade human attempts to control or limit its actions.

Not just that, Superintelligence could lead to an “Intelligence Explosion”, a term coined by the British mathematician I.J. Good in 1965. He theorized that a self-improving intelligent system could design and create even more powerful intelligent systems, leading to an intelligence explosion. In such a scenario, unintended consequences may follow that could be harmful to mankind.

How Can Safe Superintelligence Help?

Many AI theorists have argued that taming and controlling a superintelligent system will require rigorous alignment with human values. Such a system must be aligned so that it interprets instructions and performs actions correctly and responsibly.

Ilya Sutskever, the co-founder of OpenAI and former co-lead of the Superalignment project at the company, set out to work on aligning powerful AI systems. However, in May 2024, Sutskever left OpenAI along with Jan Leike, who co-led the Superalignment team.

Leike alleged that “safety culture and processes have taken a backseat to shiny products.” He has since joined Anthropic, a rival AI lab. Sutskever, on the other hand, has announced a new company called Safe Superintelligence Inc. (SSI) that aims to create a safe superintelligent system. SSI calls this “the most important technical problem of our time.”

Led by Sutskever, the company wants to work solely on achieving safe superintelligence, without being distracted by management overhead or product cycles. While at OpenAI, Sutskever gave an interview to The Guardian in which he emphasized the potential risks and benefits of powerful AI systems.

Sutskever says, “AI is a double-edged sword: it has the potential to solve many of our problems, but it also creates new ones.” He contends that “the future is going to be good for AI regardless, but it would be nice if it were good for humans as well.”
