Sam Altman, CEO of ChatGPT maker OpenAI, attends an open dialogue with students at Keio University in Tokyo, Japan, on June 12, 2023 (Reuters)

OpenAI, the maker of ChatGPT, plans to form a new team to ensure its artificial intelligence remains safe for humans

OpenAI said on Wednesday that it will dedicate significant resources to a new research team tasked with keeping artificial intelligence safe for humanity as the technology advances, with the eventual goal of using AI to supervise itself.

In a blog post, Ilya Sutskever, OpenAI's co-founder and chief scientist, and Jan Leike, its alignment lead, warned that the enormous power of superintelligence could lead to the disempowerment of humanity or even human extinction. Currently, they wrote, there is no solution for steering or controlling a potentially superintelligent AI and preventing it from going rogue.

The authors predicted that superintelligent AI, systems more intelligent than humans, could arrive this decade, and that humans will need better techniques than currently exist to control it. To address the problem, Microsoft-backed OpenAI is dedicating 20% of the compute power it has secured over the next four years to breakthroughs in "alignment research," which aims to ensure AI remains beneficial to humanity. The company is also forming a dedicated Superalignment team to lead the effort.

The team's goal is to build a roughly human-level automated alignment researcher, then scale it up using vast amounts of compute. OpenAI says this means training AI systems using human feedback, training AI systems to assist human evaluation, and ultimately training AI systems to do alignment research themselves.
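To illustrate the first of those steps, training with human input, here is a minimal sketch of learning a reward model from pairwise human preferences, the core idea behind reinforcement learning from human feedback. Every name, dimension, and data value below is an illustrative assumption, not OpenAI's actual implementation.

```python
# Minimal sketch: fit a reward model to pairwise human preference labels.
# Hypothetical toy example; real systems would score text via a language model.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Scores a response embedding; higher scores mean humans prefer it more."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-ins for embeddings of (chosen, rejected) response pairs, where a
# human labeler preferred "chosen" over "rejected" for the same prompt.
chosen = torch.randn(128, 64)
rejected = torch.randn(128, 64)

for step in range(100):
    # Bradley-Terry-style loss: push the preferred response's score
    # above the rejected one's.
    loss = -torch.nn.functional.logsigmoid(model(chosen) - model(rejected)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

A reward model trained this way can then stand in for the human labeler, scoring new outputs at scale; that substitution is what the second step, training AI systems to assist human evaluation, builds on.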

However, AI safety advocate Connor Leahy said the plan was fundamentally flawed, because an initial human-level AI could run amok and wreak havoc before it could be compelled to solve AI safety problems.

“You have to solve alignment before you build human-level intelligence, otherwise by default you won't control it,” he said in an interview. “I personally do not think this is a particularly good or safe plan.”

AI's potential dangers have weighed heavily on both AI researchers and the wider public. In April, a group of AI industry leaders and experts signed an open letter calling for a six-month pause in developing systems more powerful than OpenAI's GPT-4, citing potential risks to society.
