Weekly AI Highlights Review: November 5–12
OpenAI is reportedly developing an in-house chip with Broadcom; Amazon may invest billions more in Anthropic; a portrait of Turing painted by a humanoid robot sells for over $1 million at auction; Google's CEO claims 25% of the company's new code is AI-generated; OpenAI's 'Orion' may not be performing as expected; and more.
Two of the most impactful stories this week concern OpenAI and multiple reports that the company is leaving no stone unturned to keep up with the computing demands of its products on the one hand, and to find new ways to enhance its models' performance on the other.
Regarding the first one, recent reports claim that OpenAI has put its chip foundry ambitions on hold to focus on more immediate ways to support its massive computing infrastructure requirements: the startup is reportedly partnering with Broadcom to design a new in-house chip that will enable it to reduce its dependence on Nvidia's GPUs. According to the reports, it was through Broadcom that OpenAI managed to secure manufacturing with TSMC, which is tentatively set to begin in 2026. In the meantime, OpenAI is also considering additional alternatives, including using AMD chips through Microsoft Azure.
Nvidia has become a key player and the biggest beneficiary of the AI boom. Estimates place the company's market share at an astounding 80%, while companies like AMD and chip-design startups like Fractile vie for a bigger slice of the market. Perhaps the most telling indicator of the power and influence Nvidia has gained is that this week it became the world's most valuable company, surpassing Apple's $3.38 trillion capitalization at market close.
Even though it has proven nearly impossible for tech giants and AI startups alike to escape their reliance on Nvidia's products, companies are keen to find at least some respite from their dependence on the chipmaker's timelines and manufacturing and shipping capacity. In addition to the report that OpenAI may be looking to design and manufacture a custom chip, this week also saw Amazon consider another multi-billion-dollar investment in Anthropic, provided the latter agrees to incorporate more servers powered by Amazon's proprietary chips.
According to reports, Anthropic agreed to adopt AWS as its primary cloud provider for essential workloads, including model training and safety research, after Amazon committed to the $4 billion investment it completed earlier this year. However, Anthropic appears to have since preferred Amazon's Nvidia-powered servers. Adopting Amazon's custom chips would likely cost Anthropic some degree of freedom, at least in its choice of cloud computing providers, since the hardware is not as ubiquitous and its software stack is not as well supported as Nvidia's.
In addition to OpenAI's hardware woes, there have been reports that Orion, allegedly OpenAI's next-generation GPT model, has not shown a performance improvement comparable to the leap from GPT-3 to GPT-4, despite the corresponding increase in computing resources and training data. The matter is serious enough that some claim the new model may not be reliably better than GPT-4 at domain-specific tasks like coding.
There has been a myriad of opinions on this development, ranging from questioning the so-called "scaling laws" to accepting that, given the relatively smooth exponential performance gains seen to date, a hurdle like this was bound to appear eventually.
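For context, the "scaling laws" in question are empirical fits that predict how a model's loss falls as parameter count and training data grow. One widely cited formulation, from DeepMind's Chinchilla work and shown here purely as background rather than as anything OpenAI has confirmed about Orion, is:

```latex
% Chinchilla-style scaling law (Hoffmann et al., 2022):
% N = number of parameters, D = number of training tokens,
% E = irreducible loss; A, B, \alpha, \beta are fitted constants.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

The debate around Orion essentially comes down to whether further increases in N and D still buy the loss reductions this kind of curve predicts.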
Other headlines that caught our eye this week include:
The secret to Google's efficiency? Over 25% of its code is now AI-generated: During Google's 2024 Q3 earnings call, CEO Sundar Pichai revealed that AI now generates over 25% of Google's new code (which is then reviewed by engineers), highlighting the company's commitment to AI integration and development efficiency.
Meta announced the availability of its Llama models for U.S. government agencies and contractors: Following reports that researchers linked to the Chinese military had used Meta's Llama 2 model for military intelligence applications, Meta announced it would make its AI models available to US government agencies and contractors, despite ongoing debates about the safety and usefulness of commercial models in military contexts.
Research Grid secured $6.5M to automate clinical trials' administrative work: Research Grid, a startup founded by Dr. Amber Hill that uses AI to streamline clinical trial processes through its TrialEngine and Inclusive platforms, has secured $6.5 million in seed funding led by Fuel Ventures to expand its presence in the US and Asia and enhance its AI capabilities.
Predikt raised €750K to support CFOs in their decision-making processes: Belgian-Swiss startup Predikt has secured €750,000 in funding to develop its AI-powered financial forecasting platform. Predikt combines internal financial data with millions of macroeconomic indicators to help large companies make more accurate financial predictions.
Mistral has released batch and moderation APIs: Mistral AI has launched two new services: a moderation API that leverages a Ministral 8B fine-tune to classify text according to nine categories of common harms, and a batch API that lets developers process high volumes of data at 50% of the cost of equivalent synchronous API calls (a minimal sketch of a moderation request appears at the end of this roundup).
An AI robot's portrait of Alan Turing made history after being auctioned for $1M: A portrait of Alan Turing created by Ai-Da, the world's first ultra-realistic robot artist, just made history by selling for $1.08 million at Sotheby's, becoming the first artwork by a humanoid robot to be sold at auction.
Panjaya's BodyTalk is a dubbing platform that synchronizes speakers' voices, lips, and bodies: Panjaya's new AI dubbing platform BodyTalk synchronizes lip movements and body gestures for natural-looking video translations. BodyTalk early adopter TED reports doubled viewer completion rates and a 115% increase in views of dubbed content.
Conflixis secured $4.2M in seed funding to support smarter financial decisions in healthcare: Healthcare startup Conflixis has raised $4.2 million in seed funding to use AI and advanced analytics to help healthcare organizations better manage and understand conflicts of interest in their decision-making processes.
AI coding assistant Cursor's parent company has triggered a bidding war: AI coding assistant Cursor's parent company Anysphere is fielding unsolicited offers from major VC firms at valuations up to $2.5 billion, following explosive revenue growth from $4M annually to $4M monthly in less than a year.
The Beatles' final AI-produced song snagged two Grammy nominations: AI technology was used to clean up the vocals from John Lennon's 1978 "Now and Then" demo to create The Beatles' final song, which received two nominations for the 2025 Grammy Awards: Record of the Year and Best Rock Performance.
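For readers who want to try Mistral's new moderation service mentioned above, the sketch below shows what a request might look like. The endpoint path, model alias, and response fields are our assumptions based on Mistral's public documentation, not guaranteed details, so verify them against the official docs before relying on this.

```python
# Hypothetical sketch of a Mistral moderation request.
# The endpoint path, model alias, and response shape are assumptions;
# confirm them against Mistral's official API documentation.
import os
import requests

api_key = os.environ["MISTRAL_API_KEY"]  # assumes the key is set in the environment

response = requests.post(
    "https://api.mistral.ai/v1/moderations",          # assumed endpoint
    headers={"Authorization": f"Bearer {api_key}"},
    json={
        "model": "mistral-moderation-latest",          # assumed model alias
        "input": ["Text you want screened for harmful content."],
    },
    timeout=30,
)
response.raise_for_status()

# Each result is expected to contain per-category flags or scores
# (e.g. hate, violence, self-harm) for the corresponding input string.
for result in response.json().get("results", []):
    print(result)
```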