(News Bulletin 247) – The two companies announced an agreement covering the supply of AI chips. Broadcom gains more than 9% at the start of the session.

OpenAI had already approached the two specialists in artificial intelligence (AI) chips, AMD and Nvidia. The group led by Sam Altman has this time decided to strengthen its ties with the third major American name in the sector, namely Broadcom.

The American start-up, known for having developed the GPT large language models (LLMs) that power ChatGPT, announced this Monday, October 13, that it had entered into a partnership with the semiconductor specialist for the supply of “AI accelerators” (i.e. chips).

The agreement covers up to 10 gigawatts of capacity, the equivalent of New York City’s peak electricity consumption in summer.

“OpenAI will design the accelerators and systems, which will be developed and deployed in partnership with Broadcom,” the two companies said in a joint statement. The deployment of these chips should begin in the second half of 2026 and be completed in 2029.


OpenAI and Broadcom have not disclosed any financial amount for this contract. UBS, commenting on Nvidia’s recently announced investment in OpenAI’s AI infrastructure capacity, estimated that each gigawatt of data centers represents around $50 billion in spending.

On this basis, the contract announced by OpenAI and Broadcom would represent a total of around $500 billion in spending, although not all of it would be devoted to chip purchases.
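To see where the $500 billion figure comes from, the short Python sketch below applies UBS’s rough $50-billion-per-gigawatt estimate to the deal sizes mentioned in this article (10 gigawatts with Broadcom, six with AMD). The per-gigawatt cost is an analyst estimate and the resulting totals are purely illustrative orders of magnitude, not disclosed contract values.

```python
# Back-of-envelope check of the figures cited in the article, assuming UBS's
# rough estimate of ~$50 billion of data-center spending per gigawatt.
# Deal sizes come from the article; the totals are estimates, not disclosed amounts.

COST_PER_GW_BN_USD = 50  # UBS estimate: ~$50 billion per gigawatt

deals_gw = {
    "Broadcom (up to 10 GW, announced Oct. 13)": 10,
    "AMD (up to 6 GW, announced a week earlier)": 6,
}

for deal, gigawatts in deals_gw.items():
    implied_spend_bn = gigawatts * COST_PER_GW_BN_USD
    print(f"{deal}: ~${implied_spend_bn} billion in implied total spending")

# Expected output:
# Broadcom (up to 10 GW, announced Oct. 13): ~$500 billion in implied total spending
# AMD (up to 6 GW, announced a week earlier): ~$300 billion in implied total spending
```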

On Wall Street, Broadcom shares jumped 9.4% at the start of the session following the announcement of this partnership.

A different technology from Nvidia’s

Charlie Kawwas, a senior executive at Broadcom, told CNBC that this contract was not the $10 billion contract that management had referred to when it last reported earnings in early September.

Note that Broadcom markets processors (XPUs) that differ from the graphics processors (GPUs) sold by Nvidia and AMD. Broadcom’s XPUs offer an alternative way of providing the computing power needed to develop large AI language models (LLMs) such as those behind ChatGPT or Gemini, even though the two types of product are not exactly equivalent.

The specialist site KrAsia explains that Nvidia’s GPUs are chips designed for broader applications and are therefore more versatile than Broadcom’s XPUs. XPUs are application-specific integrated circuits (ASICs) and are therefore more closely tailored to particular tasks.

In practice, GPUs are better suited to training LLMs, i.e. the phase in which AI models are built, while XPUs are more relevant for “inference”, the stage at which LLMs are deployed at scale.

In addition, Broadcom has considerable know-how in network connections, a crucial point when building “AI factories”, to use the words of Jensen Huang, Nvidia’s boss. These are no longer really data centers but computing centers, whose capacity is now measured in gigawatts.

For OpenAI, this is therefore its third major semiconductor partnership.

A worrying circularity

A week ago to the day, OpenAI entered into a partnership with AMD covering up to six gigawatts of chips, accompanied by a potential equity stake involving up to 160 million shares. AMD’s stock jumped 23.7% that day.

Nvidia, for its part, announced in late September an investment of up to $100 billion in OpenAI.

This is, moreover, yet another announcement involving tens or even hundreds of billions of dollars in tech and AI. For example, Oracle announced a $300 billion contract with OpenAI in September.

These contracts between tech giants worry some observers, who see in them a dangerous circularity: these companies make outlays that, ultimately, flow directly or indirectly back into their own accounts.

For example, Nvidia’s $100 billion investment in OpenAI will be used to buy the chipmaker’s graphics processors, so at least part of it will come back to Jensen Huang’s company.

“One does not need to be skeptical of the general promise of AI technology to consider this announcement (Nvidia’s investment in OpenAI, editor’s note) as a worrying signal about the self-sufficiency now demonstrated by the entire sector,” wrote Bespoke Investment Group in a note cited by CNBC. “If Nvidia has to provide the capital that constitutes its revenue in order to maintain its growth, the entire ecosystem could become unviable,” the research firm added.