Mint Primer | Strawberry: Can it unlock AI’s reasoning power?

OpenAI plans to release two highly anticipated models. Orion, potentially the new GPT-5, is expected to be an advanced large language model (LLM), while Strawberry aims to enhance AI reasoning and problem-solving, particularly in mastering mathematics.

Why are these projects important?

Project Strawberry (earlier dubbed Q*, or Q-Star) is reportedly a secret OpenAI initiative to improve AI’s reasoning and decision-making for more generalized intelligence. OpenAI co-founder Ilya Sutskever’s concerns about its risks led to CEO Sam Altman’s brief ouster. Unlike Orion, which focuses on optimizing existing LLMs such as GPT-4 by cutting computational costs and enhancing performance, Strawberry aims to boost AI’s cognitive abilities, according to The Information and Reuters. OpenAI may even integrate Strawberry into ChatGPT to enhance its reasoning.

If true, how will they impact the tech world?

For autonomous systems such as self-driving cars or robots, Strawberry could improve safety and efficiency. Future iterations may focus on interpretability, making its decision-making processes more transparent. Big tech rivals such as Google and Meta could face heightened competition as clients in healthcare, finance, automobiles and education, which increasingly rely on AI, embrace OpenAI’s newer, enhanced models. Smaller startups, too, could struggle to compete with the new products, affecting their market position and investment prospects.

How can we be sure OpenAI is developing these?

New investors appear keen to back OpenAI, which, according to The Wall Street Journal, is planning to raise funds in a round led by Thrive Capital that would value it at more than $100 billion. Apple and Nvidia are likely investors in this round. Microsoft has already invested more than $10 billion in OpenAI, lending credence to reports that OpenAI is boosting its AI models.

But can AI models actually reason?

AI struggles with human-like reasoning. But in March, researchers at Stanford and Notbad AI showed that their Quiet-STaR technique could train a model to think before it responds, a step towards AI models learning to reason. DeepMind’s proposed framework for classifying the capabilities and behaviour of Artificial General Intelligence (AGI) models acknowledges that an AI model’s “emergent” properties could give it capabilities, such as reasoning, that were not explicitly anticipated by its developers.

Will ethical concerns increase?

Despite claims of safe AI practices, big tech faces scepticism due to past misuse of data and violations of copyright and intellectual property (IP). AI models with enhanced reasoning could fuel misuse such as misinformation. Quiet-STaR’s researchers admit there are “no safeguards against harmful or biased reasoning”. Sutskever, who proposed what is now Strawberry, has launched Safe Superintelligence Inc., which aims to advance AI’s capabilities “as fast as possible while making sure our safety always remains ahead”.

 


