Microsoft strengthens the "rise of the ASIC" narrative with "Maia 200," as self-developed AI chips lift high-speed copper cable, DCI, and optical interconnect names.

GMT Eight · 14:59 28/01/2026
A surge in the second half of 2026? Microsoft's upgraded self-developed AI chip, Maia 200, is set to boost AI ASIC computing power solutions, potentially benefiting ANET/CRDO/ALAB.
European financial giant BNP Paribas released a research report on Tuesday stating that the launch of Microsoft Corporation's (MSFT.US) upgraded, second-generation self-developed artificial intelligence chip, "Maia 200," has ignited a new round of investment enthusiasm across the AI computing power industry. The spotlight falls on the leaders of the market for customized AI chips (AI ASICs) used in large-scale AI data centers: American chip design giant Marvell (MRVL.US) and its largest competitor, Broadcom Inc. (AVGO.US), are expected to benefit the most from this wave of investment.

BNP Paribas analysts emphasized that self-developed AI chips among cloud computing giants, led by Microsoft, are the trend of the times, and that the split of AI computing infrastructure between ASICs and NVIDIA Corporation's AI GPU clusters may move from today's roughly 1:9 or 2:8 toward near parity. The team led by senior analyst Karl Ackerman added that, beyond the two AI ASIC leaders, the leaders in data center interconnect (DCI), high-speed copper cables, and data center optical interconnect are also expected to benefit significantly from this new round of AI computing power investment.

Looking across the entire AI computing power supply chain, AI ASICs, DCI, high-speed copper cables, and data center optical interconnects all benefit both from the AI ASIC super trend led by cloud computing giants Microsoft, Alphabet Inc. Class C, and Amazon.com, Inc., and from the AI GPU infrastructure led by NVIDIA Corporation and AMD. Whether it is Alphabet's massive TPU computing clusters (the TPU also follows the AI ASIC technology route) or the massive purchases of NVIDIA AI GPU clusters by Alphabet, OpenAI, Microsoft, and other AI giants, all of them rely on the leaders in DCI, high-speed copper cables, and data center optical interconnect.

Moreover, setting aside the GPU-versus-ASIC divide, both NVIDIA's "InfiniBand + Spectrum-X/Ethernet" high-performance network infrastructure and Alphabet's OCS (optical circuit switching) high-performance network infrastructure depend on suppliers of high-speed copper cables, DCI, and optical interconnect equipment. In other words, whether it is the "Alphabet AI ASIC computing chain" (TPU/OCS) or the "OpenAI computing chain" (NVIDIA InfiniBand/Ethernet), both ultimately converge on the same set of "hard constraints": data center interconnect (DCI), data center optical interconnect, high-speed copper cables, and the data storage base (enterprise storage, capacity media, and memory testing).

After Alphabet launched the Gemini 3 AI application ecosystem in late November, the cutting-edge AI applications quickly became popular worldwide, driving Alphabet's demand for AI computing power significantly higher.
The Gemini 3 series brought a huge amount of AI token processing immediately upon release, forcing Alphabet to sharply curtail free access to Gemini 3 Pro and Nano Banana Pro and to temporarily restrict Pro subscribers. In addition, recent trade export data from South Korea show strong demand for SK Hynix and Samsung Electronics HBM memory systems and enterprise-grade SSDs, further confirming the view proclaimed on Wall Street that the AI frenzy is still in an early stage in which computing power infrastructure remains undersupplied.

According to Wall Street heavyweights Morgan Stanley, Citigroup, Loop Capital, and Wedbush, the global wave of artificial intelligence infrastructure investment centered on AI computing hardware is far from over; it is just beginning. Under an unprecedented storm of demand for AI inference computing power, this round of global AI infrastructure investment is expected to reach a scale of 3-4 trillion US dollars by 2030.

As the AI inference frenzy sweeps the world, the golden age of the AI ASIC has arrived. Massive AI data center build-outs such as "Stargate" are extremely costly, so technology giants increasingly demand cost-effective AI computing systems; under electricity constraints, they strive to push "cost per token, output per watt" to the extreme, ushering in a prosperous era for the AI ASIC technology route. These cost-effectiveness and power constraints have pushed Microsoft, Amazon.com, Inc., Alphabet, and Facebook parent Meta to develop AI chips in-house for their cloud computing systems along the AI ASIC route, with the core aim of making AI computing clusters more cost-effective and energy-efficient. Microsoft positioned Maia 200 directly as "significantly improving the cost-effectiveness of AI token generation" and repeatedly emphasized performance per dollar; AWS set the goal for Trainium3 as the "best token economics," selling it on energy efficiency and cost-effectiveness; and Alphabet's cloud computing platform defined Ironwood as the TPU dedicated to the "era of AI inference" (the TPU also belongs to the AI ASIC technology route), emphasizing energy efficiency and large-scale inference serving.

With DeepSeek thoroughly overhauling the efficiency of AI training and inference and steering future large-model development toward "low cost" and "high performance," the AI ASIC route holds a cost-effectiveness advantage over NVIDIA's AI GPU route amid surging demand for cloud AI inference computing power. It is entering a demand expansion trajectory even more robust than during the 2023-2025 AI frenzy, with major customers such as Alphabet, OpenAI, and Meta expected to keep investing heavily in developing AI ASIC chips with Broadcom Inc.

The BNP Paribas analyst team noted, however, that Marvell and Broadcom are unlikely to be the chip design partners for Microsoft's self-developed Maia 200. BNP Paribas believes the exclusive technical partner for Maia 200 may be Taiwan's Global Unichip, in a relationship similar to Broadcom's partnership with Alphabet in developing TPU AI computing clusters.
However, BNP Paribas believes that, with Microsoft igniting a new wave of AI computing power investment enthusiasm, Marvell and Broadcom may still benefit from the self-developed AI chip investment theme by virtue of their label as "the absolute leaders in ASICs."

A research report recently released by Morgan Stanley shows that actual production volumes of Alphabet's TPU AI chips are expected to reach 5 million and 7 million units in 2027 and 2028, respectively, upward revisions of 67% and 120% versus the bank's previous expectations. This increase may indicate that Alphabet will begin selling TPU AI chips directly to external customers. More substantially, Morgan Stanley estimates that for every 500,000 TPU chips sold externally, Alphabet could generate an additional 13 billion US dollars of revenue and up to 0.40 US dollars of earnings per share.

Market research firm Counterpoint Research predicts in a recent report that the AI ASIC camp, the core AI chips of non-GPU AI servers, will see a steep growth curve in the near future, with shipments expected to triple by 2027 versus 2024 and potentially to surpass GPU shipments in 2028 at a scale of more than 15 million units. Behind this explosive growth, the report says, are strong demand for Alphabet's TPU infrastructure, the continuous expansion of AWS Trainium clusters, and the capacity added as Meta (MTIA) and Microsoft (Maia) broaden the in-house AI chip portfolios in their cloud computing systems.

High-speed copper cables, data center interconnect (DCI), and optical interconnect

The other potential beneficiaries listed in the BNP Paribas report include leaders in data center high-speed copper cables (DAC/AEC) such as Amphenol; participants in data center optical interconnect such as Lumentum; leaders in data center interconnect (DCI) such as Arista Networks (ANET.US); and, where active electrical cables (AEC) are used, potentially Credo Technology (CRDO.US) and Astera Labs (ALAB.US).

Whether it is NVIDIA's InfiniBand/Spectrum-X Ethernet high-performance networking or Google's introduction of OCS (optical circuit switching) into its Jupiter architecture, everything ultimately comes down to the "physical interconnect stack": short-distance high-speed copper links from servers and accelerator clusters to switches, medium- to long-distance optical interconnects at rack, data center, and building scale, and DCI optical transport linking network and storage domains across buildings, campuses, and sites. NVIDIA itself defines its Ethernet platform to include switches, NICs/SmartNICs, DPUs, and cables/transceivers (LinkX) as integral components, with coverage explicitly spanning DAC copper cables through AOC, multimode, and single-mode optics. Viewed from the perspective of super-scale AI data center projects such as "Stargate," the more precise description is that copper cables and optical interconnects each have their own role.
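As a rough illustration of that division of labor, the toy heuristic below maps link reach to the media discussed in this article. The reach thresholds are indicative assumptions of ours for 400G/800G-class links, not figures from the BNP Paribas report.

```python
# Illustrative only: a toy heuristic for the copper-versus-optics split described
# above. The reach thresholds are rough assumptions, not report figures.

def pick_interconnect(reach_m: float) -> str:
    """Suggest a link medium for one hop in an AI cluster, given its reach in meters."""
    if reach_m <= 3:
        return "DAC (passive copper): lowest latency, power, and cost inside the rack"
    if reach_m <= 7:
        return "AEC (active copper): retimed copper for adjacent racks and rows"
    if reach_m <= 100:
        return "AOC / pluggable multimode optics: row- and hall-scale links"
    if reach_m <= 2_000:
        return "Single-mode pluggable optics (or silicon photonics/CPO): building scale"
    return "DCI coherent optics: campus- and site-to-site interconnect"

if __name__ == "__main__":
    hops = {
        "accelerator tray -> ToR switch": 2,
        "rack -> end-of-row switch": 20,
        "hall -> hall spine": 500,
        "data center -> data center (DCI)": 40_000,
    }
    for hop, reach in hops.items():
        print(f"{hop:35s} {pick_interconnect(reach)}")
```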
In a super-scale AI cluster, short-distance high-speed links within and between racks (for example, switch to server NIC) usually default to DAC/AEC copper to reduce latency, power consumption, and cost; once bandwidth reaches 400G/800G and distances stretch across racks, rows, or the whole data center, link budgets and power consumption push solutions toward AOC and pluggable high-speed optics, and potentially toward more aggressive silicon photonics/CPO routes. Google's OCS, by contrast, introduces optical circuit switching into the Jupiter data center network architecture to address cabling and capacity at scale and over time; it is essentially a more radical approach, but one that still relies on optical-electrical interfaces and high-speed cabling on the port side.

Lumentum is one of the biggest winners from the explosion in Alphabet's AI demand, chiefly because it supplies the essential optical interconnect, OCS (optical circuit switches) plus high-speed optical components, in the high-performance networking base deeply integrated with Alphabet's TPU AI computing clusters; its shipments rise alongside the TPU count. On the OpenAI side, NVIDIA's "InfiniBand and Ethernet" are deeply integrated with switches and data center networking systems from companies such as ANET, CSCO, and HPE. Nor does NVIDIA disregard optical interconnect; in fact, NVIDIA's "IB + Ethernet" successfully combines the determinism of copper with the long-reach, high-bandwidth density of optics into a standardized interconnect system.

Ackerman and his fellow analysts wrote in the report to clients: "We believe that Maia 200's rack-level AI computing infrastructure will include 12 large compute trays, 4 Tier-1 Ethernet scale-up switches, 6 CPU head nodes, 2 top-of-rack (ToR) Ethernet switches for the front-end network, and 1 out-of-band management switch." They added: "What interests us the most is that Maia 200 may not use a back-end scale-out network architecture."

The BNP Paribas analysts also stated: "Maia 200 will be deployed in small scale-up clusters of 6,144 ASICs each, connected to the 'outside world' through CPU head nodes and front-end Ethernet switches. Given that Maia 200 is tailored for inference workloads, we believe this topology is reasonable, as massive AI inference workloads may not require superclusters of tens of thousands of ASICs that have to synchronize and collaborate."

Ackerman's team further added that large-scale deployment of Microsoft's self-developed Maia 200 AI computing infrastructure is expected to accelerate in the second half of 2026 and to penetrate further into Microsoft's global network of hyperscale data centers next year.
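To make the rack-level bill of materials described above easier to tally, here is a minimal sketch encoding it as a simple data structure. The ASICs-per-tray figure is a hypothetical placeholder of ours, purely for illustration; the report does not disclose how many Maia 200 ASICs each tray holds.

```python
# A minimal sketch of the Maia 200 rack composition as described by BNP Paribas.
# ASICS_PER_TRAY is a hypothetical assumption, not a figure from the report.

from dataclasses import dataclass

@dataclass(frozen=True)
class MaiaRack:
    compute_trays: int = 12           # large compute trays (per the report)
    scale_up_switches: int = 4        # Tier-1 Ethernet scale-up switches
    cpu_head_nodes: int = 6           # CPU head nodes linking to the front-end network
    front_end_tor_switches: int = 2   # top-of-rack (ToR) Ethernet switches
    oob_management_switches: int = 1  # out-of-band management switch

ASICS_PER_TRAY = 8      # hypothetical placeholder, not from the report
CLUSTER_ASICS = 6_144   # scale-up unit size cited by BNP Paribas

rack = MaiaRack()
asics_per_rack = rack.compute_trays * ASICS_PER_TRAY
racks_per_cluster = CLUSTER_ASICS / asics_per_rack
print(f"ASICs per rack (assumed): {asics_per_rack}")
print(f"Racks per 6,144-ASIC scale-up cluster (assumed): {racks_per_cluster:.0f}")
```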