Is OpenAI’s latest development, the Q* model, an existential risk to humanity?

A recent report by Reuters revealed that some researchers at OpenAI, the company behind the ChatGPT system, had raised concerns about a powerful artificial intelligence model that they were developing. The model, called Q*, was able to solve basic maths problems that it had not encountered before, which could be a significant advancement in AI.

Could OpenAI’s powerful Q* model escalate the AI technological threat? (Photo: Kirill Kudryavtsev/AFP)

According to the tech news site The Information, the Q* model, pronounced “Q-star”, had alarmed some safety researchers at OpenAI, who wrote to the board of directors before the dismissal of Sam Altman, the company’s former chief executive. They warned that the model could pose a threat to humanity if it was not handled carefully.

The report came amid a turbulent week for OpenAI, which was founded as a nonprofit venture with a mission to create “safe and beneficial artificial general intelligence for the benefit of humanity”.

The board of directors fired Altman last Friday, but then reinstated him on Tuesday night after nearly all the company’s 750 staff threatened to resign if he was not brought back. Altman also had the support of Microsoft, the biggest investor in OpenAI’s for-profit subsidiary, which is legally bound to pursue the nonprofit’s mission.

Many experts are worried that companies such as OpenAI are moving too fast towards artificial general intelligence (AGI) – the term for a system that can perform a wide variety of tasks at or above human levels of intelligence, and which could, in theory, evade human control.

Andrew Rogoyski, of the Institute for People-Centred AI at the University of Surrey, said the existence of a maths-solving large language model (LLM) would be a breakthrough.


“The intrinsic ability of LLMs to do maths is a major step forward, allowing AIs to offer a whole new swathe of analytical capabilities,” he said.

Altman hinted at the progress of the Q* model in an appearance at the Asia-Pacific Economic Cooperation (Apec) summit last Thursday, the day before his surprise sacking.

“Four times now in the history of OpenAI – the most recent time was just in the last couple of weeks – I’ve gotten to be in the room when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honour of a lifetime,” he said.

As part of the agreement in principle for Altman’s return, OpenAI will have a new board chaired by Bret Taylor, a former co-chief executive of the software company Salesforce. Emmett Shear, who briefly served as interim chief executive, wrote this week that the board “did not remove Sam over any specific disagreement on safety”.
