"AI Bull Market Narrative" Sparks Another Wave! Huang Renxun throws out a trillion-dollar AI grand plan NVIDIA Corporation (NVDA.US) sets sail for a market value of 6 trillion dollars
NVIDIA's stock price may soon hit another all-time high and pull the global AI computing industry chain into a new upward cycle, as the company's trillion-dollar AI computing plan does everything possible to reinforce the "AI bull market narrative" that dominates capital markets.
At the GTC conference in the early hours of March 17 Beijing time, NVIDIA CEO Jensen Huang showcased the company's unprecedented AI computing infrastructure. He told global investors that, on the back of strong demand for Blackwell-architecture GPU computing power and the explosive demand expected from the upcoming mass production of Vera Rubin-architecture AI computing systems, NVIDIA's future revenue from AI chips may reach at least $1 trillion by 2027, far exceeding the $500 billion AI computing infrastructure blueprint for 2026 proposed at the previous GTC conference.
According to analysts at Goldman Sachs, Wedbush, and Morgan Stanley who are bullish on NVIDIA's stock, stronger-than-expected revenue growth prospects mean the company's market value is poised to cross the $5 trillion mark again, for the first time since last October, and is likely to climb well beyond its current all-time high.
Wall Street analysts' average target price implies that NVIDIA's market value will exceed $6 trillion within the next 12 months; the most optimistic target corresponds to a total market value as high as $8.8 trillion.
As model sizes, inference pathways, and multimodal agentic AI workloads drive computing consumption to expand exponentially, the capital expenditure of technology giants is converging on AI computing infrastructure. Global investors will continue to anchor the "AI bull market narrative" on NVIDIA, Google's TPU clusters, and AMD's new product iterations and expected AI cluster deliveries, and this is likely to remain one of the most certain investment narratives in global equity markets.
At the annual GTC developer conference in San Jose, California, Huang unveiled a new server-grade central processor (CPU) and an LPU-based AI inference infrastructure system built on proprietary inference architecture technology from Groq, an AI chip startup from which NVIDIA licensed technology for $17 billion last December.
These moves are part of Huang's push to consolidate NVIDIA's position in "inference computing," the computation involved in answering queries from enterprise and consumer users worldwide. In this arena, NVIDIA's AI GPU systems face fierce competition from CPUs and custom AI ASICs developed by companies such as Google. In recent years, NVIDIA's chips have dominated the training phase of large AI models, which has been the market's main focus.
NVIDIA's AI GPUs excel in cluster-level versatility and rapid iteration of the whole computing system, whereas the inference side prizes token cost, latency, and energy efficiency once advanced AI is deployed at scale.
"The age of artificial intelligence inference has arrived," Huang Renxun said at the GTC conference. "And the demand for inference is still growing," he added.
Wearing his iconic black leather jacket, Huang delivered his keynote in an ice hockey arena seating more than 18,000 people. The four-day conference has become one of the largest showcases for global AI technology. "I just want to remind everyone that this is a highly anticipated technology conference," he told the audience.
The AI inference frenzy approaches, and NVIDIA's "AI computing blueprint" rises to the trillion-dollar level
If Huang's GTC speech were condensed into one sentence, it would be this: NVIDIA is restructuring itself from a company that sells AI GPUs into a chip giant that sells AI factories. The keynote opened with the token as the basic unit of modern AI; Huang shifted the industry's main storyline from "training" to "inference + agentic AI," and raised the AI infrastructure revenue opportunity from the previous $500 billion estimate to at least $1 trillion over 2025-2027. This is not a simple demand adjustment but a signal to capital markets that the coming computing race will no longer be judged by peak training FLOPS alone, but by who can produce tokens at the lowest cost, with the highest data throughput and the best latency.
Around this narrative of expanding AI compute demand, the underlying logic Huang offered is clear: data centers are no longer mere "storage centers" but "AI factories." Under a fixed power budget, the metrics that matter are not the peak performance of a single chip but tokens per watt, cost per token, and time to first production. This is why he repeatedly emphasizes "extreme co-design": optimizing compute, networking, storage, software, power delivery, and cooling as a whole. According to official figures, the Vera Rubin NVL72 platform can deliver up to 10 times the inference throughput per watt of the Blackwell platform at one-tenth the cost per token, and the number of GPUs needed to train large MoE models can be cut to a quarter. This is not just a chip iteration but a rewriting of the economics of AI infrastructure.
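To make the "AI factory" metrics concrete, here is a toy back-of-the-envelope sketch. The 10x throughput-per-watt and one-tenth cost-per-token ratios mirror the claims above; every absolute number (tokens per joule, dollars per million tokens, the 1 MW budget) is an illustrative assumption, not a published figure.

```python
# Toy "AI factory" economics: compare two platforms at a fixed power budget.
# Ratios (10x tokens/watt, 1/10 cost/token) follow the keynote claims;
# absolute inputs are assumptions for illustration only.

def factory_output(power_mw, tokens_per_joule, cost_per_million_tokens):
    """Tokens produced per second, and dollars spent per second, at a power draw."""
    watts = power_mw * 1_000_000
    tokens_per_s = watts * tokens_per_joule            # tokens/s = W * tokens/J
    cost_per_s = tokens_per_s / 1e6 * cost_per_million_tokens
    return tokens_per_s, cost_per_s

# Assumed baseline (Blackwell-class): 10 tokens/joule, $2 per million tokens.
base_tps, base_cost = factory_output(1.0, 10.0, 2.00)
# Claimed next-gen (Vera-Rubin-class): 10x tokens/watt, 1/10 cost per token.
next_tps, next_cost = factory_output(1.0, 100.0, 0.20)

print(f"baseline: {base_tps:,.0f} tok/s at ${base_cost:.0f}/s")
print(f"next-gen: {next_tps:,.0f} tok/s ({next_tps / base_tps:.0f}x throughput)")
```

The point of the arithmetic: at the same power draw and the same dollars per second, the claimed platform produces ten times the tokens, which is exactly what "tokens per watt" and "cost per token" as factory metrics are meant to capture.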
At the hardware level, the most important change at this GTC is that NVIDIA has formally integrated CPU, GPU, LPU, DPU, SuperNIC, switch silicon, and storage architecture into a platform-level system. The Vera Rubin platform as defined by the company includes the Vera CPU, Rubin GPU, NVLink 6 Switch, ConnectX-9 SuperNIC, BlueField-4 DPU, Spectrum-6 Ethernet switch, and the newly integrated NVIDIA Groq 3 LPU; the Vera Rubin NVL72 rack combines 72 Rubin GPUs with 36 Vera CPUs, while the Groq 3 LPX rack is purpose-built for low-latency inference. Huang splits AI inference into two stages: prefill, handled by Vera Rubin, and decode, handled by the Groq AI chips. NVIDIA's answer to the inference era is thus no longer "let the GPU do everything" but a heterogeneous design that separates high-throughput work from ultra-low-latency work.
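Why split prefill from decode at all? A toy timing model makes the asymmetry visible. All rates below are assumptions for illustration; none are NVIDIA figures or APIs.

```python
# Toy model of disaggregated inference: prefill is batched and parallel
# (throughput-bound), decode generates tokens one at a time (latency-bound).
# Both rates are assumed, illustrative numbers.

PREFILL_TOKENS_PER_S = 50_000   # assumed: batched prompt-processing rate
DECODE_TOKENS_PER_S = 200       # assumed: sequential per-request decode rate

def serve(prompt_tokens: int, output_tokens: int) -> dict:
    """Estimate where a request spends its time under the two-stage split."""
    prefill_s = prompt_tokens / PREFILL_TOKENS_PER_S   # one parallel pass
    decode_s = output_tokens / DECODE_TOKENS_PER_S     # token-by-token loop
    return {"prefill_s": prefill_s,
            "decode_s": decode_s,
            "time_to_first_token_s": prefill_s}

t = serve(prompt_tokens=2048, output_tokens=256)
print(t)
```

Even with a long prompt, the sequential decode loop dominates wall-clock time in this sketch, which is the economic logic behind pairing a throughput-optimized prefill tier with a dedicated low-latency decode tier.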
On the software and ecosystem side, Huang's positioning is equally aggressive. Dynamo 1.0 is defined by NVIDIA as the inference operating system for the AI factory, which the company claims delivers up to a sevenfold increase in inference performance on Blackwell. For intelligent agents, NVIDIA introduced the Agent Toolkit, OpenShell, and NemoClaw, elevating OpenClaw into something like an "operating system for personal AI" and giving enterprises policy control, privacy routing, and security boundaries. NVIDIA also expanded its open model families, including Nemotron, Cosmos, Isaac GR00T, Alpaymayo, BioNeMo, and Earth-2, and announced the Feynman architecture roadmap: the next-generation platform will introduce the Rosa CPU, LP40 LPU, BlueField-5, CX10, and Kyber, advancing copper interconnects and integrated optical interconnects in the next-generation AI factory.
Looking further out, GTC 2026 is not only about the data center. NVIDIA also brought "physical AI" and "spatial computing" to the main stage: IGX Thor reached general availability, targeting industrial, medical, robotics, and edge computing; the Open Physical AI Data Factory Blueprint accelerates data generation, augmentation, and evaluation for robots, visual AI agents, and autonomous driving; and the Space-1 Vera Rubin Module extends the Vera Rubin architecture to orbital data centers, which the company claims can deliver up to 25 times the AI compute of the H100 for in-space inference. NVIDIA has thus expanded the "AI factory" from cloud data centers into a unified infrastructure paradigm spanning cloud, edge, endpoints, vehicles, robots, and even space.
The real theme of GTC 2026 is not the new product launches of previous years but NVIDIA elevating itself from a single GPU supplier to an AI infrastructure provider. That is why the conference deserves attention: not for the specifications of any particular AI chip, but because NVIDIA is using system-level products to lock in token economics, the inference monetization process, and infrastructure bargaining power for the years ahead.
Is NVIDIA's grip on AI computing infrastructure consolidating, with the stock heading for historic highs?
"Investors have previously been concerned about the sustainability of tech giants' massive AI infrastructure spending, but with Huang Renxun outlining a revenue opportunity of $1 trillion by 2027, investors begin to believe that NVIDIA's demand for AI infrastructure is still long-lasting and strong," said Emarketer analyst Jacob Bourne. "As the entire AI industry transitions from the early experimental stage to large-scale deployment, NVIDIA continues to maintain its leading position in the AI computing power market."
When Huang raised the revenue opportunity for NVIDIA's AI chips and infrastructure to at least $1 trillion by 2027, the market saw more than a company selling ever more powerful GPUs; it saw a company trying to define the production function of the next-generation "AI factory": from the training era to the inference era, and from competing on single chips to dominating the full rack, networking, and software stack. From Blackwell and Vera Rubin to the Groq collaboration aimed at low-latency decoding, NVIDIA is rewriting the value language of throughput per token, revenue per watt, and inference monetization.
While Huang used the $1 trillion revenue opportunity to demonstrate that demand is still expanding, he also presented a full platform of CPU, GPU, LPU, high-performance networking, software ecosystem, and agent toolchain, signaling that NVIDIA's unit of competition is no longer a single AI chip but an entire AI factory.
The "inference inflection point has arrived" as stated by Huang Renxun essentially declares to the capital market that AI capital spending is far from peaking, and the real large-scale deployment is just beginning. When NVIDIA integrates CPU, GPU, LPU, network, Agent software, and data center economics into a coherent narrative, it is not just starting a new product cycle but steering towards the super giant ship of imagining a $5 trillion market value space once again. The average stock price shown by Tipranks' Wall Street analysts suggests that analysts generally have a positive view of NVIDIA's stock price reaching $273, implying an astonishing 51% potential for growth in the next 12 months, with the most optimistic target price as high as $360. The $273 target price corresponds to NVIDIA's market value of about $6.6 trillion. As of the close of the US stock market on Monday, NVIDIA's stock price closed at $183.220, with a market value of around $4.45 trillion.
Huang's raising of the AI chip and infrastructure revenue opportunity to at least $1 trillion by 2027 is significantly above the previous estimate, built around the Blackwell and Rubin architectures, of $500 billion by 2026. Goldman Sachs said after the conference that the new $1 trillion outlook gives the market a longer-term demand endorsement, easing investors' anxiety that AI capital spending might peak in 2026. In other words, Goldman's analysts see the speech not merely as a product showcase but as a repositioning of NVIDIA's order ceiling and earnings durability for the next two to three years.
Goldman Sachs emphasized that NVIDIA did not merely release another extremely powerful AI GPU; it commercialized inference in a proprietary way, elevating NVIDIA's infrastructure into the core equipment of the next phase of the global AI arms race. As noted above, Huang splits inference into prefill and decode stages, the former handled by Vera Rubin and the latter by the Groq 3 LPX/LPU, meaning NVIDIA is expanding from "training dominator" into the general contractor of AI inference infrastructure. Goldman stresses that the figures NVIDIA presented exceed market expectations: Vera Rubin plus LPX can achieve up to 35 times the inference throughput per megawatt and up to a 10x revenue opportunity on trillion-parameter models.
In Goldman's view, NVIDIA is not merely defending the training market; in an inference era where power is constrained and latency is critical, it has presented a stronger monetization framework and a more complete heterogeneous computing solution. Goldman is more bullish chiefly because this GTC addressed investors' two main concerns at once: whether demand has peaked, and whether NVIDIA will be diluted in the inference era by CPUs, in-house ASICs, or other custom chips.
Goldman Sachs said the forward-looking $1 trillion opportunity far exceeds market expectations, confirming that demand from hyperscale cloud providers remains strong and persistent. Citing potential catalysts in the coming months, Goldman reiterated its "buy" rating on NVIDIA with a 12-month target price of $250, emphasizing that hyperscalers' capital expenditure plans and new models built on the Blackwell and Rubin architectures will continue to consolidate the company's earnings leadership.
