Initial Report: NVIDIA (NASDAQ: NVDA), 223% 5-yr Potential Upside (Khadijah PINARDI, EIP)
Khadijah PINARDI presents a "BUY" recommendation for NVIDIA based on growing future demand, its CUDA and software layer moat, and high margin business model.
I’m initiating coverage on NVIDIA (NASDAQ: NVDA) with a BUY and a 3-year target of $211.68, which implies roughly 96% upside from the current share price of $108 (adjusted for the 10-for-1 split in June 2024) and an estimated IRR of ~25%.
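As a quick sanity check on these headline figures, the implied return math can be verified directly. The target price, current price, and horizon are taken from the report; the computation is a simple annualization:

```python
# Sanity check of the headline figures: a 3-year price target of
# $211.68 versus the current split-adjusted price of $108.
target_price = 211.68   # 3-year target from the report
current_price = 108.00  # split-adjusted share price from the report
years = 3

upside = target_price / current_price - 1  # total return if the target is hit
irr = (1 + upside) ** (1 / years) - 1      # annualized return over the horizon

print(f"implied upside: {upside:.0%}")  # -> 96%
print(f"implied IRR:    {irr:.1%}")     # -> 25.1%
```

The ~25% IRR quoted above is simply the 96% total return compounded down to an annual rate over the 3-year horizon.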
The stock had a massive run — up over 200% in the year leading up to the June 2024 stock split — and naturally, there’s been hesitation and buzz around how much room is left. Most of the concern centers on two things: (1) signs of slowing sequential growth in the Data Center business, and (2) increasing competition from custom silicon players like Google (TPU), Amazon (Trainium), and AMD’s MI300X.
That said, I believe NVIDIA still has significant runway ahead. It’s no longer just selling chips — it’s building a tightly integrated AI platform that’s deeply embedded across the industry. CUDA, its proprietary parallel computing platform and GPU programming interface, anchors a developer ecosystem with real switching costs. This gives NVIDIA meaningful leverage that extends well beyond hardware. I see enterprise AI adoption and sovereign AI deployment as the next major legs of growth — and NVIDIA is just starting to tap into those opportunities.
With $130B+ in FY25 revenue, high gross margins, high ROIC, and a capital-light model, NVIDIA’s fundamentals are solid. Even if headline growth moderates, the business has more than enough tailwinds — software, systems, inference — to keep compounding.
A. Company Overview
NVIDIA began as a graphics-focused computing company, initially enhancing multimedia and gaming through its first integrated chip, NV1. It later pioneered the modern GPU and transformed it into a powerful parallel computing engine with the introduction of CUDA (Compute Unified Device Architecture). CUDA enabled GPUs to handle thousands of simultaneous tasks, expanding their role beyond graphics into high-performance computing and artificial intelligence.
In 2024, NVIDIA capitalized on the accelerating AI wave with the launch of the Blackwell architecture, designed to support trillion-parameter models and optimized for both AI training and inference at scale. Today, NVIDIA’s platform serves as the backbone of AI infrastructure globally.
The company now operates across four segments, with one clearly driving the bulk of the business:
Data Center ($115B): This is where most of NVIDIA's revenue comes from. It includes AI and high-performance computing GPUs (such as H100 and H200), high-speed networking (Mellanox, NVLink, InfiniBand), and software layers like CUDA and AI Enterprise.
Gaming ($11.4B): A steady business built around GeForce GPUs and ray tracing. Growth here is more cyclical and content-driven, but the segment remains high-margin and cash generative.
Professional Visualization ($1.8B): Focuses on workstation GPUs for industries such as architecture, design, and simulation. It’s not a needle-mover but is strategically important in high-performance visualization markets.
Automotive ($1.7B): Primarily includes NVIDIA DRIVE, which powers in-car systems and ADAS (Advanced Driver Assistance Systems). While the business is still in its early stages, NVIDIA has already secured over $14B in design wins and is actively deepening ties with major automotive OEMs.
B. Industry Overview
NVIDIA is competing in a market that is not only massive but still expanding at a rapid pace. Demand for accelerated computing is currently being driven by three structural trends:
AI Training and Inference:
The surge in LLMs and generative AI workloads has pushed hyperscalers, enterprises, and governments to invest heavily in compute infrastructure. Most of that spending continues to go toward NVIDIA’s GPUs, where it holds a dominant position — estimated at 80–90% share of AI training workloads.
Enterprise AI Adoption Is Just Beginning:
While hyperscalers have already built out massive clusters, enterprise adoption remains in its early stages. Companies across sectors — finance, healthcare, retail, industrials — are just starting to experiment with private AI models, copilots, and internal tools. These deployments may be smaller in scale but will be broader in scope.
AI Infrastructure Is Becoming More Vertically Integrated:
Since the breakout success of ChatGPT, companies are increasingly looking for an end-to-end stack — hardware, networking, frameworks, inference, and tools — that simply works. NVIDIA is ahead here: it already offers the full stack that companies are willing to pay for, while most competitors focus narrowly on hardware or have gaps in their offering. The performance lead and developer momentum still tilt heavily in NVIDIA’s favor, as most players — including cloud providers — continue building around NVIDIA’s infrastructure, even while exploring alternatives. While the initial AI buildout may begin to moderate, the deployment phase is just getting underway — and NVIDIA is well-positioned for both.
C. Competitive Positioning & Differentiation
While competitors are making progress in certain areas of the stack, NVIDIA maintains a defensible position through its deep integration across hardware, software, and systems — a combination that remains difficult to replicate.
Hardware
The H100, built on the Hopper architecture, remains the benchmark for AI training. It delivers leading performance per watt and broad support for state-of-the-art models. The newly introduced H200 builds on this with greater memory bandwidth and up to 2x faster inference throughput on LLM workloads.
AMD’s MI300X shows promise on paper, particularly for inference workloads. However, real-world adoption remains limited as most hyperscaler clusters continue to rely on NVIDIA hardware. Even Meta, while experimenting with AMD, is still scaling out deployments using NVIDIA’s H100s.
Software & Ecosystem Lock-In
CUDA represents a central part of NVIDIA’s long-term moat. The proprietary parallel computing platform, launched in 2006, is embedded across nearly all machine learning workflows. Switching away is technically possible, but the migration friction and cost are significant.
In addition to CUDA, NVIDIA owns other core software components — such as cuDNN (deep learning primitives), TensorRT (inference optimization), and a growing portfolio of enterprise software including AI Enterprise, Omniverse, and NIMs (NVIDIA Inference Microservices). These tools create ecosystem lock-in and lay the groundwork for more SaaS-like revenue streams in the future.
Systems & Vertical Integration
NVIDIA’s value proposition extends beyond chips. It offers complete AI systems — including DGX and HGX — which are tightly integrated across hardware and software. These turnkey solutions are optimized for enterprise training needs, minimizing setup complexity and engineering overhead.
This approach gives NVIDIA a meaningful edge over AMD and Intel, both of which still rely on OEMs to build and tune full systems. Through its acquisition of Mellanox, NVIDIA also controls the networking layer, further enhancing its ability to optimize system-wide performance.
Custom Silicon from Hyperscalers
One of the most prominent long-term risks stems from NVIDIA’s own customers: hyperscalers such as Google, Amazon, and Microsoft — all of whom are developing in-house AI chips.
Google Cloud continues to offer NVIDIA’s H100s alongside its own TPUs.
AWS has introduced Trainium2 but remains one of NVIDIA’s largest customers.
Microsoft is piloting Maia, while simultaneously deploying H100 racks in large volumes across Azure.
These internal chip efforts serve hyperscalers as strategic hedges to external dependence rather than full replacements. In most cases, custom silicon is limited to inference or internal applications. For general-purpose infrastructure and model training at scale, NVIDIA continues to dominate.
D. Investment Thesis
I believe understanding the durability of NVIDIA’s growth requires paying attention to several structural drivers — especially as the company transitions from one growth phase to the next.
The Capex Wave Is Only in Its First Phase
NVIDIA’s current growth cycle is being powered by hyperscalers building massive clusters to train foundation models. That buildout has been a key driver of the explosive growth in the Data Center segment. However, I believe this is just the first phase of a broader AI infrastructure wave.
As enterprises begin deploying large language models internally — either through fine-tuning or inference APIs — a second capex cycle is set to follow. While still early, this enterprise layer could ultimately be even larger: many companies are still experimenting, and only a small fraction have moved to scale. I believe NVIDIA is well-positioned to lead this next phase through:
DGX Cloud: AI infrastructure-as-a-service.
AI Enterprise: Full-stack runtime and development tools.
NIMs (Inference Microservices): NVIDIA’s push into serving high-volume inference.
As enterprise adoption matures — from pilots to scaled deployments — demand for inference infrastructure will accelerate. NVIDIA is already ahead of this curve.
CUDA and the Software Layer Are NVIDIA’s Real Moat
I believe NVIDIA’s long-term edge lies in software: CUDA has become the backbone of modern machine learning development, underpinning frameworks such as PyTorch (Meta), TensorFlow (Google), and JAX. This ecosystem lock-in is also a major differentiator from its peers, and as the enterprise AI stack becomes more complex, the stickiness of this software will only increase. While the market still largely values NVIDIA as a semiconductor company, I see the shift toward recurring software revenue as underappreciated.
If NVIDIA continues to scale its software monetization — especially through enterprise platforms — it could evolve into a hybrid hardware-platform business. This shift would justify an even higher multiple for NVIDIA.
Gross Margins and Free Cash Flow Provide Flexibility
One of NVIDIA’s strongest attributes is its high-margin business model. With FY25 gross margins exceeding 70%, among the highest in the semiconductor industry, the company holds a substantial edge over other market players.
More importantly, NVIDIA is also generating massive free cash flow — over $60 billion in FY25 — providing significant financial flexibility. This allows NVIDIA to do three important things:
First, buy back stock, which supports the share price and signals confidence in the business and operations;
Second, invest more heavily in R&D to widen its lead over competitors such as AMD;
Third, invest in new verticals such as healthcare and robotics without needing those investments to be immediately accretive.
This capital-light model, combined with high margins and strong free cash flow, enables NVIDIA to stay ahead in its core markets while scaling into new ones. In a landscape where many competitors are capital-constrained, NVIDIA has a clear advantage in terms of reinvestment capacity.
E. Risk & Mitigation
Slower Growth in Data Center
NVIDIA’s growth is heavily concentrated in its Data Center segment, and with some deceleration expected in the near term, particularly after the explosive growth of FY25, there is a risk that future results may not meet market expectations. If NVIDIA faces a steeper slowdown than anticipated, the stock could pull back as investors reassess its growth potential.
I believe NVIDIA has built a strong hedge against the risk of slower growth in any single segment by continuously expanding and advancing its product offerings, ensuring it is not solely reliant on one revenue stream—despite the Data Center business currently dominating its top line. While its Data Center revenue is still heavily driven by hyperscalers, NVIDIA has actively pursued geographic expansion to broaden its customer base.
In response to export restrictions, the company has also pivoted to alternative markets and deepened global partnerships. Notably, it has supported the development of sovereign AI initiatives—government-backed AI data centers designed to give countries full control over their data, infrastructure, and AI capabilities—further diversifying demand within the Data Center segment itself.
Competition from Custom Silicon (In-house chips and AI accelerators)
Hyperscalers such as Google, Amazon, and Microsoft have been investing in their own AI chips for the better part of a decade to reduce their dependence on NVIDIA’s GPUs. This creates a risk of declining Data Center revenue and of NVIDIA’s pricing power being undercut, weakening its ability to maintain premium pricing. As more companies adopt internal custom silicon, CUDA lock-in may weaken and NVIDIA’s moat may become less dominant.
While these in-house solutions do not yet match NVIDIA’s flexibility or comprehensive ecosystem, and today remain a secondary option to its GPUs, continued development and rising adoption could erode NVIDIA’s market share over time, particularly in inference workloads.
That said, I believe NVIDIA is managing this risk exceptionally well. Its pace of innovation remains unmatched, with each new product iteration delivering substantial performance gains and deepening ecosystem value. The company’s continued leadership in R&D — even from a position of dominance — ensures it stays several steps ahead of the competition. While internal chips may complement hyperscaler strategies, NVIDIA’s end-to-end platform and developer ecosystem will be difficult to displace.
Geopolitical Risk & Export Controls
NVIDIA’s chips, including the Hopper (H-series) and Blackwell architectures, are designed in the United States and therefore fall under U.S. jurisdiction via the Export Administration Regulations (EAR). As a result, they are subject to U.S. export controls, exposing NVIDIA to significant geopolitical risk. In recent years, the U.S. government has imposed escalating restrictions on the export of high-performance AI chips to China and other sensitive regions.
These measures have directly impacted NVIDIA’s ability to sell its flagship products, including the A100 and H100, to one of its largest international markets. Although NVIDIA attempted to mitigate this by offering downgraded versions such as the A800 and H800, further tightening of export rules in 2023 effectively blocked those as well, pushing the company toward even lower-spec parts like the H20 and reflecting the intensifying U.S.–China tech tensions.
While geopolitical tensions remain a material risk to NVIDIA’s access to China—historically its second-largest market—I believe the company is strategically positioned to manage these constraints through product adaptation, regulatory alignment, and demand diversification.
F. Financials & Projections
NVIDIA delivered outstanding financial results in FY2025, with $130.5 billion in revenue and $72.9 billion in net income, largely driven by continued growth in its Data Center segment. Given how comprehensive and powerful its platform has become, I ran a DCF based on a relatively bullish outlook.
To support a 25% IRR over the next five years, I reverse-engineered the DCF—backed by growing AI infrastructure spending from hyperscalers, NVIDIA's strong R&D investments, and the ecosystem advantages that continue to reinforce its edge.
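To make the reverse-engineering idea concrete, here is a minimal sketch of solving for the free-cash-flow growth rate that a 25% IRR would demand. The ~$60B FY25 FCF comes from the report; the ~$2.7T entry valuation, 5-year horizon, and 25x exit FCF multiple are illustrative placeholders, not the report’s actual model inputs:

```python
# Illustrative reverse-engineered DCF: what annual FCF growth over five
# years would an investor requiring a 25% IRR need to justify an assumed
# ~$2.7T entry valuation? (Entry value and exit multiple are hypothetical
# placeholders; the ~$60B FY25 FCF figure is from the report.)

def implied_value(growth, fcf0=60.0, years=5, exit_multiple=25.0, irr=0.25):
    """Present value (in $B) of the projected FCF stream plus an exit
    value, discounted at the required IRR."""
    pv, fcf = 0.0, fcf0
    for t in range(1, years + 1):
        fcf *= 1 + growth
        pv += fcf / (1 + irr) ** t
    pv += fcf * exit_multiple / (1 + irr) ** years  # exit value in year 5
    return pv

def required_growth(entry_value, lo=0.0, hi=1.0):
    """Bisect for the growth rate that makes the DCF equal the entry
    valuation (implied_value is monotonically increasing in growth)."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if implied_value(mid) < entry_value:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

g = required_growth(2700.0)  # assumed entry value of ~$2.7T, in $B
print(f"required FCF growth: {g:.1%} per year")
```

Under these placeholder inputs the solve lands around a mid-30s percent annual FCF growth requirement, which gives a feel for how demanding a 25% IRR hurdle is even for a business of NVIDIA’s quality.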
G. ESG & ESG Risks
As an industry leader in AI infrastructure, NVIDIA’s energy consumption and supply chain practices are increasingly coming under scrutiny, especially as ESG considerations grow more prominent. While the company operates a fabless model that limits direct control over its manufacturing, it still faces reputational and regulatory risks, particularly related to carbon emissions and supply chain accountability.
To address these issues, NVIDIA has made several tangible efforts:
In 2024, the company launched a Corporate Sustainability Steering (CSS) group to guide and execute its ESG strategy.
It achieved 76% renewable electricity usage across its operations in FY24 and aims to reach 100% by the end of FY25 for all offices and data centers.
Over 60% of Scope 3, Category 1 emissions are now covered through supplier engagement — a meaningful step for a fabless company.
It introduced a closed-loop liquid cooling system in its data centers to improve water efficiency and eliminate evaporation-based water loss.
It is conducting product lifecycle assessments to better quantify and reduce embedded emissions in its systems.
It promotes supply chain responsibility, including the use of recyclable materials in its GPU systems.
On the social front, NVIDIA continues to prioritize its employees. It is consistently ranked among the best places to work in the U.S. by Glassdoor, supported by strong retention, competitive compensation, and a culture of lifelong learning. The company also fosters a culture of diversity, inclusion, and belonging for its employees.
On governance, NVIDIA operates under a strong framework that combines a founder-led mindset with institutional discipline. Its core values — from innovation and integrity to acting as one team — are embedded in a global code of conduct that applies to employees, directors, and third-party partners. Ethics and compliance are reinforced through regular training, with over 98% completion rates in FY24 for code of conduct and anti-bribery courses.
The company maintains a no-retaliation policy and provides an anonymous reporting hotline through an independent third party. On the AI front, NVIDIA has also taken a leadership role in promoting responsible and transparent model development. Its Trustworthy AI principles are backed by strong governance mechanisms, including internal audits, model risk management, and oversight by the Audit Committee. NVIDIA also actively engages in global policy discussions to help shape responsible AI standards and supports international human rights frameworks through supplier audits and proactive remediation efforts.
ESG Risks
NVIDIA is exposed to several physical and reputational risks that it still needs to navigate and improve upon:
Its energy-intensive operations expose it to water scarcity and power grid instability, which could delay deployments or force relocation of data center facilities.
Scope 3 emissions, while partially addressed, remain challenging to fully control and may come under greater regulatory pressure going forward.
Growing institutional scrutiny on supply chain labor practices and lifecycle carbon footprints may intensify as AI infrastructure scales globally.
That said, I believe NVIDIA has taken meaningful steps toward addressing these challenges. The company is aligning with evolving global ESG expectations and regulations, and is beginning to embed sustainability into its long-term strategy — not just to meet compliance requirements, but also to take responsibility for its emissions and reduce operational risk as it scales.
H. Summary
Although NVIDIA’s stock has already experienced massive growth over the past few years, I believe the company’s core fundamentals remain strong. It is well-positioned to lead not only in high-performance computing hardware but also in the software ecosystem that enables and scales artificial intelligence globally.
I believe NVIDIA will continue to flourish, and I am excited to see what future innovations they bring—innovations that will power not just generative AI, but a broad range of chips and software that enhance human work and life. The company is deeply embedded in both cloud and enterprise infrastructure, and it has room to grow. With its strong free cash flow generation, NVIDIA has the flexibility to reinvest and defend its position well through its formidable moat.
While growth may moderate from FY25 levels, I have considered the risks NVIDIA may face going forward and based my projection on a 25% IRR over the next five years. I believe NVIDIA’s current position does not mark the end of the story, but rather the beginning of a stronger cycle. As enterprise demand scales, Blackwell delivers, and the software business shows meaningful and increasing monetization, there will be significant upside ahead.