OpenAI hits the brakes on computing spend ahead of IPO: bidding farewell to "growth at all costs", its hundred-billion-dollar data center gamble turns pragmatic

09:29 23/03/2026
GMT Eight
Speaking at BlackRock's US infrastructure summit earlier this month, OpenAI CEO Sam Altman admitted that the company is facing a harsh reality: building data centers is no easy task. "On a project this large, there are just too many places where things can go wrong," Altman said during a fireside chat at the summit in Washington, D.C. He cited a data center campus in Abilene, Texas, where extreme weather caused a "brief interruption" in service. The facility is the flagship site of the $500 billion "Stargate" project funded by OpenAI, Oracle, and SoftBank. Altman added that the company has also been wrestling with supply chain challenges and pressure to meet tight deadlines.

The challenges are mounting as Altman works to turn OpenAI - the darling of the private market, valued at a record $730 billion in its last funding round - into an investable asset that can attract more discerning public market fund managers. That means scaling back some of its ambitious spending plans, shelving some grand projects, and accepting a role as a major purchaser of cloud computing power rather than a giant data center builder.

"OpenAI has realized that the market may not reward a growth and spending strategy that ignores consequences," Daniel Newman, CEO of Futurum Group, said in an interview. "The market wants to see OpenAI's revenue growth match the scale of its spending. In my view, this shift in strategy aims to demonstrate more financial responsibility."

The shift means OpenAI may have to slow its expansion even as it competes fiercely with Anthropic, Google, and the many other companies building AI models, applications, and features. Training and running AI models requires massive computing resources: chips, processing power, memory, and energy.
Altman and other executives have long emphasized that computing power is the main bottleneck on the company's growth, prompting them to raise enormous sums, including the $110 billion round closed earlier this year, $50 billion of which came from Amazon. In November last year, Altman wrote on X that because of severe compute constraints, OpenAI and other companies "had to restrict our products and could not roll out new features and models."

At the time, one hallmark of OpenAI was the lengths Altman would go to in order to secure compute. The company signed infrastructure agreements worth tens of billions of dollars with NVIDIA, AMD, Broadcom, and others; in his November post, Altman said the company had committed roughly $1.4 trillion over the next eight years. The agreements rattled the public markets, stoked concerns about a potential AI bubble, and left many investors asking: how can a company with annual revenue of just $13.1 billion commit to such staggering investments?

OpenAI's most striking agreement was with NVIDIA. The world's largest chipmaker agreed last September to invest up to $100 billion in OpenAI over the next several years, with the funding tied to OpenAI's deployment and use of NVIDIA technology. OpenAI announced plans to deploy at least 10 gigawatts of NVIDIA systems, with the first $10 billion to be invested once the first gigawatt (roughly the power consumption of a mid-sized city) is completed. The partnership, the companies said, "enables OpenAI to build and deploy at least 10 gigawatts of AI data centers." Analysts at the time noted that the deal's structure echoed the vendor financing that fueled the dot-com bubble in the late 1990s.
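For a sense of scale, the deal's headline numbers can be sanity-checked with back-of-envelope arithmetic. The gigawatt and dollar figures below come from the article; the per-accelerator power draw (about 1.5 kW including cooling and networking overhead) is an illustrative assumption, not a figure from the article:

```python
# Back-of-envelope scale check for the NVIDIA deal described above.
GW = 1e9                      # watts per gigawatt

deal_gigawatts = 10           # NVIDIA systems OpenAI plans to deploy (from the article)
invest_per_gw = 10e9          # dollars released per completed gigawatt (from the article)

# ASSUMPTION (not from the article): ~1.5 kW per accelerator,
# including cooling, networking, and other facility overhead.
watts_per_accelerator = 1500

accelerators_per_gw = GW / watts_per_accelerator           # ~667,000 per gigawatt
total_accelerators = deal_gigawatts * accelerators_per_gw  # ~6.7 million in total
total_investment = deal_gigawatts * invest_per_gw          # $100 billion at full deployment

# Annual energy for one gigawatt running continuously:
twh_per_gw_year = 1 * 8760 / 1000   # 8.76 TWh/year, roughly a mid-sized city's usage

print(f"~{accelerators_per_gw:,.0f} accelerators per GW, "
      f"~{total_accelerators / 1e6:.1f}M total, "
      f"${total_investment / 1e9:.0f}B at full deployment")
```

Under these assumptions, each gigawatt milestone corresponds to roughly two-thirds of a million accelerators, which is why a single weather-related outage at a site like Abilene registers at the company level.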
Altman has repeatedly downplayed outside concerns about OpenAI's infrastructure ambitions, suggesting the company's revenue would soar to several hundred billion dollars by 2030. In recent months, however, as it prepares for a possible IPO this year, OpenAI has tempered expectations and laid out a more cautious strategy. In February, the company told investors that its current goal is to spend a total of about $600 billion on computing through 2030, a figure meant to track more directly with expected revenue growth.

OpenAI is also emphasizing discipline elsewhere in the business. In December last year, facing intensifying competition from Google and Anthropic, OpenAI declared a "code red" to focus on improving its ChatGPT chatbot. Fidji Simo, OpenAI's CEO of Applications, held a company-wide meeting earlier this month to discuss operations and said the company is deliberately concentrating on high-productivity use cases. "What really matters for us right now is staying focused and executing extremely well," Simo said, according to meeting notes that were reviewed. "It's a race."

People familiar with the matter said OpenAI currently owns no data centers and may not own any for the foreseeable future. Instead, the company is leaning heavily on partners such as Oracle, Microsoft, and Amazon to pull together as much computing capacity as possible.

A year ago, the picture looked very different. In January 2025, President Donald Trump joined Altman, SoftBank CEO Masayoshi Son, and Oracle Chairman Larry Ellison at a White House event to announce the "Stargate" project. The companies pledged to invest $500 billion over four years to build new AI infrastructure in the United States. According to a blog post at the time, OpenAI would run the project's operations while SoftBank handled the finances.
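The investor math behind that newfound caution can be made concrete. The revenue and spending figures below are the ones cited in the article; the 2030 revenue target used in the growth-rate calculation is an illustrative assumption, reading "several hundred billion dollars" as $200 billion:

```python
# Spend-versus-revenue arithmetic using figures cited in the article.
annual_revenue = 13.1e9        # OpenAI's reported annual revenue (from the article)
committed_spend = 1.4e12       # total compute commitments (from the article)
commitment_years = 8           # horizon for those commitments (from the article)
planned_compute_2030 = 600e9   # compute spend targeted through 2030 (from the article)

avg_annual_commitment = committed_spend / commitment_years   # $175B per year
spend_to_revenue = avg_annual_commitment / annual_revenue    # ~13x current revenue

# Even the scaled-back $600B plan, spread over roughly five years,
# is still about 9x current annual revenue.
planned_yearly = planned_compute_2030 / 5

# ASSUMPTION: treat "several hundred billion by 2030" as $200B, five years out.
target_revenue_2030 = 200e9
years = 5
required_cagr = (target_revenue_2030 / annual_revenue) ** (1 / years) - 1  # ~72%/yr

print(f"Average commitment ≈ ${avg_annual_commitment / 1e9:.0f}B/yr "
      f"({spend_to_revenue:.0f}x revenue); implied growth ≈ {required_cagr:.0%}/yr")
```

Under these assumptions, the original commitments imply annual outlays around thirteen times current revenue, and even hitting a $200 billion revenue target by 2030 would require sustained growth above 70% per year, which is the gap public market investors are asking about.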
Oracle and NVIDIA were named as key initial technology partners. "Oracle, NVIDIA, and OpenAI will work closely together to build and operate this computing system," the press release said.

As Stargate got underway, OpenAI prepared to develop most of the project itself and planned to lease or directly own parts of the data center campuses. After running into real construction problems and struggling to win support from lenders, however, the company changed course. Oracle is leasing the Abilene campus for Stargate and is funding its construction by taking on billions of dollars in debt.

In their statement last September, OpenAI and NVIDIA said the first gigawatt of NVIDIA systems would come online in the second half of 2026. Experts say that even in the best case, that timetable is demanding. Walid Saad, an engineering professor at Virginia Tech, said building a 1-gigawatt data center from scratch can take three to ten years. Every step can pose challenges: selecting a site, obtaining the necessary permits and approvals, securing power, erecting the physical structures, delivering the hardware, and finally bringing it all online. "With regulations and permits, different locations have different processes," Saad said. "Some of those processes are beyond their control. You never know what problems may come up."

Arun Chandrasekaran, an AI analyst at Gartner, said in an interview that these obstacles have become very real for OpenAI. "They are realizing: let's get as much as we can from the suppliers who are willing to provide us compute right now," Chandrasekaran said. As part of the $110 billion financing announced last month, OpenAI agreed to consume about 2 gigawatts of Trainium computing power from Amazon Web Services. Trainium is Amazon Web Services' in-house AI chip.
Amazon released Trainium3, the chip's latest version, last December. NVIDIA also joined the financing round with a $30 billion investment. As part of that deal, OpenAI said it would expand its cooperation with NVIDIA, agreeing to use 3 gigawatts of dedicated inference capacity and 2 gigawatts of training capacity on NVIDIA's upcoming Vera Rubin systems.

"OpenAI is doing what it has to do, which is acquiring computing resources at scale," said Newman of Futurum Group, adding that Meta, Anthropic, and Google are doing much the same. "It's a race."

Before NVIDIA's latest investment, the market had spent months speculating about the fate of the major infrastructure agreement the two companies announced last September. The chipmaker disclosed in a quarterly filing last November that the $100 billion deal might not materialize, and reports in January said the agreement had been "put on hold." In a February filing, NVIDIA noted that it "cannot guarantee" reaching an "investment and cooperation agreement" with OpenAI, or that any transaction will be completed. At an event earlier this month, NVIDIA CEO Jensen Huang further tamped down expectations, suggesting the opportunity to invest $100 billion in OpenAI might be "off the table."

"To be honest, they have built an incredible growth story. It's just that the road ahead won't be smooth," Newman said of OpenAI. "And with a cost structure this heavy, every step toward profitability will be scrutinized closely."