AI computing power demand continues to surge! South Korea's "AI Squid Game" heats up, igniting demand for 10,000 AMD AI chips.

14:57 23/03/2026
GMT Eight
AMD's AI computing solutions are gaining broader market recognition and are expected to keep eroding the roughly 90% share that "AI chip superpower" NVIDIA holds in the trillion-dollar AI core computing cluster market.
According to media reports, South Korean artificial intelligence startup Upstage is in talks with AMD (AMD.US), the US leader in high-performance PC and data center chips, to purchase roughly 10,000 of its latest AI accelerators, a key part of Upstage's effort to bring larger-scale AI computing infrastructure to the South Korean market. A purchase of that size, together with AMD's recent collaborations with tech leaders such as Celestica Inc. (CLS.US) and Hewlett Packard Enterprise Co. (HPE.US), makes clear that AMD's AI computing solutions are winning broader market recognition, and that AMD could continue to erode the roughly 90% share that "AI chip superpower" NVIDIA Corporation (NVDA.US) holds in the trillion-dollar AI core computing cluster market.

Upstage CEO Sung Kim said in a media interview that he met AMD CEO Lisa Su in Seoul last week to discuss purchasing AMD's MI355 AI accelerators. "While we have many NVIDIA chips in the South Korean market, we want to diversify our AI computing supply and shift toward other AI chip suppliers, including AMD," Kim said in another interview on Monday.

Upstage is one of four teams in a government-backed competition to select South Korea's best national AI foundation model. The contest, nicknamed the "AI Squid Game" after the hit survival drama produced by Netflix's Korean team, is a key part of South Korea's ambition to become a top global AI power. Under the supervision of the Ministry of Science and ICT, a professional jury evaluates the teams' foundation models every six months and eliminates the weakest; South Korea plans to narrow the field to two teams for the final stage before the start of next year.
The winners will receive additional NVIDIA AI GPU computing infrastructure. Kim said Upstage is preparing a large language model with around 200 billion parameters for a key round of the competition this summer. The startup's advantage, he added, is its ability to build high-performance AI models at relatively low cost by combining economies of scale with efficient processing, allowing it to compete with Chinese and US rivals focused on cost-effective models.

Upstage is a leading South Korean AI startup focused on large AI models and enterprise AI software. Beyond its place among the four teams in the government-backed "Sovereign AI Foundation Model" competition, it has disclosed cumulative funding of more than $100 million as of 2024 and claims to be the best-funded large-model company in South Korea's history. The company invests not only in general-purpose AI models but also in enterprise Document AI and in taking its "LLM + sovereign AI" offering overseas. According to official data, it targets finance, insurance, healthcare, and high-end manufacturing as its key "AI+" industries. Kim also said Upstage sees Vietnam and the United Arab Emirates as significant potential markets, to which it would offer sovereign-grade AI training and inference systems deployable within their borders.

AMD steps into the era of rack-level AI infrastructure, striving to expand Helios computing cluster capacity

Upstage is currently in talks with AMD to purchase 10,000 MI355 chips.
With the CEO stating that South Korea already runs numerous NVIDIA chips but wants a "diversified" deployment strategy that includes AMD, it is clear that AMD is shifting from an "optional AI GPU substitute" to a viable choice for customers' large-scale AI computing infrastructure. Last week, media reports said AMD announced a deep collaboration with Celestica Inc. to bring its new Helios rack-level AI computing platform to the global AI data center market, as an answer to NVIDIA's NVL72 rack-level platform. Taken together, the two developments suggest AMD's AI computing cluster solutions are gaining broader market recognition.

More significantly, Helios is not about individual cards: AMD has elevated the competition from single GPUs to 72-GPU racks, network interconnects, and an integrated CPU + GPU + NIC platform, and has partnered with Celestica to accelerate mass production. The Celestica partnership comes as AMD joins hands with several tech leaders to counter NVIDIA's vertically integrated AI infrastructure. AMD previously announced collaborations with Hewlett Packard Enterprise Co. and Broadcom Inc. to provide open, rack-scale AI computing infrastructure for high-performance computing clusters and large AI data centers, with the goal of advancing global "sovereign AI" research.
The Helios platform is expected to reach customers by late 2026. Meta (Facebook's parent company) has signed a multi-generational, multi-year collaboration agreement with AMD, planning to deploy up to 6 gigawatts of AMD Instinct GPU computing clusters, with the first gigawatt-scale deployment expected to begin in the second half of 2026; OpenAI has also been involved in design optimization of the AMD MI450. Add the strong "sovereign AI / local large-scale computing" demand from companies such as South Korea's Upstage, and what emerges is not a single computing order but a trend: more and more customers are unwilling to bet all of their AI infrastructure on a single supplier, and AMD is well positioned to serve this demand for a second core source, open standards, and reduced vendor lock-in.

Will "King of the Hill" AMD's stock price enter a new bull market cycle?

These latest catalysts are undoubtedly strong drivers for AMD's stock in the short to medium term. AMD has moved from follower in the AI chip market to competitor in AI training and inference system infrastructure. At its 2025 Analyst Day the company set aggressive targets, including annual data center chip revenue of $100 billion within five years and a compound annual growth rate above 80% for AI-related data center revenue. AMD CEO Lisa Su also predicted at the Analyst Day that by 2030 the total market for AI data centers, including AI central processors, AI accelerators, and high-performance networking products, will surpass $1 trillion, up from a projected $200 billion in 2025, implying a compound annual growth rate of roughly 40%. On profitability, Su expects the company's earnings per share (EPS) to reach $20 within three to five years.
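As a quick sanity check on the implied growth rate, the $200 billion (2025) and $1 trillion-plus (2030) endpoints quoted above can be plugged into the standard CAGR formula (a back-of-the-envelope calculation on the article's figures, not data from AMD):

```python
# Implied compound annual growth rate (CAGR) from the article's endpoints:
# a ~$200B AI data center market in 2025 growing past $1T by 2030 (5 years).
start, end, years = 200e9, 1e12, 5
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # about 38%, in the ballpark of the ~40% cited
```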
AMD has been dubbed the "King of the Hill" by analysts at Wall Street powerhouse Citigroup, who set a 12-month price target as high as $260. Analyst targets compiled by TipRanks put Wall Street's average price target at $285, implying potential upside of roughly 42% over the next 12 months from last Friday's US close of $201.33.

NVIDIA CEO Jensen Huang, presenting at the GTC conference in the early hours of March 17 (Beijing time), laid out an unprecedented AI computing infrastructure vision, telling global investors that strong demand for Blackwell-architecture GPU computing and the explosive demand expected for the upcoming Vera Rubin-architecture AI computing systems could lift NVIDIA's AI chip revenue to at least $1 trillion by 2027, well above the $500 billion AI infrastructure blueprint through 2026 presented at the previous GTC.

As model scale, inference routing, and multimodal, agent-based Agentic AI workloads drive exponential growth in computing power consumption, tech giants' capital expenditure strategies are tilting toward AI computing infrastructure. Global investors are anchoring the "AI bull market" narrative on NVIDIA, Alphabet's TPU clusters, AMD's new product cycle, and expected AI computing cluster deliveries, keeping it among the most promising investment narratives in global equities. Investments tied to AI training and inference, such as power supply, liquid cooling systems, and optical interconnect supply chains, also remain hot themes alongside AI computing leaders NVIDIA Corporation, AMD, Broadcom Inc., Taiwan Semiconductor Manufacturing Co., Ltd.
ADR, and Micron.

According to Wall Street giants Morgan Stanley, Citigroup, Loop Capital, and Wedbush, the global investment wave centered on AI hardware infrastructure is far from over; it is just beginning. Under an unprecedented "AI inference computing demand storm" expected to run through 2030, this global AI infrastructure investment wave is projected to reach $3 trillion to $4 trillion.
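The roughly 42% upside figure cited above can be reproduced from the numbers quoted in the article (a back-of-the-envelope check using the $285 average target and the $201.33 close, not fresh market data):

```python
# Implied upside from Wall Street's average price target vs. the last close,
# using the figures quoted in the article.
target, last_close = 285.0, 201.33
upside = target / last_close - 1
print(f"Implied 12-month upside: {upside:.1%}")  # about 41.6%, i.e. ~42%
```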