• A mysterious new OpenAI model known as Q* has got the tech world talking.
  • The model reportedly sparked concern inside the startup ahead of the leadership chaos.
  • AI experts say the model could be a big step forward but is unlikely to end the world anytime soon.

As the dust settles on the chaos at OpenAI, we still don't know why CEO Sam Altman was fired — but reports have suggested it could be linked to a mysterious AI model.

The Information reported that a team led by OpenAI chief scientist Ilya Sutskever had made a breakthrough earlier this year, which allowed them to build a new model known as Q* (pronounced "Q star"). The model could reportedly solve grade-school math problems.

Reuters reported that this model provoked an internal firestorm, with several staff members writing a letter to OpenAI's board warning that the new breakthrough could threaten humanity.

This warning was reportedly cited as one of the reasons that the board chose to fire Sam Altman, who returned as CEO on Wednesday after days of turmoil at the company.

The ability to solve basic math problems may not sound that impressive, but AI experts told Business Insider it would represent a huge leap forward from existing models, which struggle to generalize outside of the data they are trained on.

"If it has the ability to logically reason and reason about abstract concepts, which right now is what it really struggles with, that's a pretty tremendous leap," said Charles Higgins, cofounder of AI training startup Tromero and a Ph.D. candidate in AI safety.

He added, "Maths is about symbolically reasoning — saying, for example, 'if X is bigger than Y and Y is bigger than Z, then X is bigger than Z.' Language models traditionally really struggle at that because they don't logically reason, they just have what are effectively intuitions."
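
As a toy illustration of the symbolic reasoning Higgins describes, a rule-based system derives the conclusion exactly rather than guessing it. The sketch below is purely hypothetical and has nothing to do with any OpenAI code; it simply computes the transitive closure of a set of "greater-than" facts:

```python
# Toy rule-based reasoner: repeatedly apply the rule
# (a > b) and (b > c)  =>  (a > c)
# until no new facts can be derived.

def transitive_closure(facts):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for a, b in list(derived):
            for b2, c in list(derived):
                if b == b2 and (a, c) not in derived:
                    derived.add((a, c))
                    changed = True
    return derived

facts = {("X", "Y"), ("Y", "Z")}  # X > Y, Y > Z
closure = transitive_closure(facts)
print(("X", "Z") in closure)  # the rule yields X > Z: True
```

Unlike a language model's learned "intuitions," the rule here either fires or it doesn't, so the conclusion X > Z follows with certainty.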

Fellow Tromero cofounder and Ph.D. candidate Sophia Kalanovska told BI that Q*'s name implied it was likely a combination of two well-known AI techniques, Q-learning and A* search.
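
Neither technique is exotic on its own. As a rough illustration of the A* half of that guess, here is a standard A* shortest-path search on a small grid; the grid, walls, and function are invented for illustration and say nothing about OpenAI's model:

```python
import heapq

# Classic A* search: expand nodes in order of g (cost so far) plus
# h (an admissible heuristic estimate of the remaining cost).

def a_star(start, goal, walls, size=5):
    def h(p):  # Manhattan distance: admissible on a 4-connected grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (f, g, position, path)
    best_g = {}
    while frontier:
        f, g, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if best_g.get(pos, float("inf")) <= g:
            continue  # already reached this cell at equal or lower cost
        best_g[pos] = g
        x, y = pos
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < size and 0 <= nxt[1] < size and nxt not in walls:
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # goal unreachable

path = a_star((0, 0), (4, 4), walls={(1, 1), (2, 1), (3, 1)})
print(len(path) - 1)  # moves in the shortest path: 8
```

Because the heuristic never overestimates, A* is guaranteed to return an optimal path, which is exactly the kind of hard correctness guarantee that pure deep-learning systems lack.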

She said this suggested that the new model could combine the deep-learning techniques that power ChatGPT with rules programmed by humans. It's an approach that could help fix the chatbot's hallucination problem.
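
The Q-learning half can likewise be sketched in a few lines. The toy environment below, a five-state walk with a reward at one end, is entirely hypothetical; it only shows the core Q-learning update Kalanovska is referring to, not how Q* actually works:

```python
import random

# Toy tabular Q-learning: states 0..4, stepping into state 4 earns
# reward 1 and ends the episode. The agent learns action values Q(s, a)
# from experience alone.

ALPHA, GAMMA = 0.5, 0.9   # learning rate, discount factor
N_STATES = 5
ACTIONS = (-1, +1)        # step left or step right

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

random.seed(0)
for _ in range(2000):  # episodes driven by a random exploration policy
    s = 0
    while s != N_STATES - 1:
        a = random.choice(ACTIONS)
        nxt, r = step(s, a)
        best_next = 0.0 if nxt == N_STATES - 1 else max(Q[(nxt, b)] for b in ACTIONS)
        # The Q-learning update rule:
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# The learned greedy policy steps right from every non-terminal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)  # {0: 1, 1: 1, 2: 1, 3: 1}
```

The speculation around Q* is that something like this trial-and-error value learning could be married to explicit search and rules, combining learned experience with hard logical guarantees.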

"I think it's symbolically very important. On a practical level, I don't think it's going to end the world," Kalanovska said.

"I think the reason why people believe that Q* is going to lead to AGI is because, from what we've heard so far, it seems like it will combine the two sides of the brain and be capable of knowing some things out of experience, while still being able to reason about facts," she added.

"That is definitely a step closer to what we consider intelligence and it is possible that it leads to the model being able to have new ideas, which is not the case with ChatGPT."

Existing models' inability to reason and develop new ideas, rather than simply regurgitate information from their training data, is seen as a huge limitation, even by the people building them.

Dr Andrew Rogoyski, a director at the Surrey Institute for People-Centred AI, told BI that solving unseen problems was a key step towards creating AGI.

"In the case of math, we know existing AIs have been shown to be capable of undergraduate-level math but to struggle with anything more advanced," he said.

"However, if an AI can solve new, unseen problems, not just regurgitate or reshape existing knowledge, then this would be a big deal, even if the math is relatively simple," he added.

Not everyone was so enthused by the reported breakthrough. AI expert and deep learning critic Gary Marcus expressed doubts about Q*'s reported capabilities in a post on his Substack.

"If I had a nickel for every extrapolation like that—'today, it works for grade school students! next year, it will take over the world!'—I'd be Musk-level rich," wrote Marcus.

OpenAI did not immediately respond to a request for comment from Business Insider, made outside normal working hours.
