Nvidia’s $46.7B Q2 proves the platform, but its next fight is ASIC economics on inference
Nvidia reported $46.7 billion in revenue for fiscal Q2 2026 in its earnings announcement and call yesterday, with data center revenue hitting $41.1 billion, up 56% year over year. The company also issued Q3 guidance of $54 billion. Behind the headline numbers lies a more complex story: custom application-specific integrated circuits (ASICs) are gaining ground in key Nvidia segments and will challenge the company's growth in the quarters to come.
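A quick back-of-envelope check puts those figures in context. The sketch below uses only the numbers stated on the call as inputs; the derived values are simple arithmetic, not company-reported figures:

```python
# Back-of-envelope check on figures stated in the earnings call.
# Inputs are from the call; derived values are arithmetic, not Nvidia-reported.
q2_total_revenue = 46.7   # $B, fiscal Q2 2026 total revenue
q2_dc_revenue = 41.1      # $B, data center segment
dc_yoy_growth = 0.56      # data center growth, year over year
q3_guidance = 54.0        # $B, Q3 total revenue guidance

implied_prior_year_dc = q2_dc_revenue / (1 + dc_yoy_growth)
implied_seq_growth = q3_guidance / q2_total_revenue - 1

print(f"Implied year-ago data center revenue: ${implied_prior_year_dc:.1f}B")  # ~$26.3B
print(f"Implied sequential growth to Q3 guide: {implied_seq_growth:.1%}")      # ~15.6%
```

In other words, the Q3 guide implies roughly 15.6% sequential growth on top of 56% year-over-year data center expansion.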
Bank of America's Vivek Arya asked Nvidia president and CEO Jensen Huang whether he saw any scenario in which ASICs could take market share from Nvidia GPUs. The question is a live one: ASICs continue to close the gap on performance and cost, and Broadcom projects 55% to 60% AI revenue growth next year.

Huang pushed back hard on the earnings call, emphasizing that building AI infrastructure is "really hard" and that most ASIC projects fail to reach production. That's a fair point, but Nvidia has a real competitor in Broadcom, whose AI revenue is steadily ramping toward a $20 billion annual run rate. Further underscoring the market's growing fragmentation, Google, Meta and Microsoft all deploy custom silicon at scale. The market has spoken.

ASICs are redefining the competitive landscape in real time

Nvidia is more than capable of competing with new ASIC providers. Where it runs into headwinds is in how effectively those competitors position the combination of their use cases, performance claims and cost. They also differentiate on how much ecosystem lock-in they require, with Broadcom leading on this dimension.

The following table compares Nvidia Blackwell with its primary competitors. Real-world results vary significantly depending on specific workloads and deployment configurations:

| Metric | Nvidia Blackwell | Google TPU v5e/v6 | AWS Trainium/Inferentia2 | Intel Gaudi2/3 | Broadcom Jericho3-AI |
|---|---|---|---|---|---|
| Primary Use Cases | Training, inference, generative AI | Hyperscale training & inference | AWS-focused training & inference | Training, inference, hybrid-cloud deployments | AI cluster networking |
| Performance Claims | Up to 50x improvement over Hopper* | 67% improvement, TPU v6 vs. v5* | Comparable GPU performance at lower power* | 2-4x price-performance vs. prior gen* | InfiniBand parity on Ethernet* |
| Cost Position | Premium pricing, comprehensive ecosystem | Significant savings vs. GPUs, per Google* | Aggressive pricing, per AWS marketing* | Budget-alternative positioning* | Lower networking TCO, per vendor* |
| Ecosystem Lock-In | Moderate (CUDA, proprietary) | High (Google Cloud, TensorFlow/JAX) | High (AWS, proprietary Neuron SDK) | Moderate (supports open stack) | Low (Ethernet-based standards) |
| Availability | Universal (cloud, OEM) | Google Cloud-exclusive | AWS-exclusive | Multiple clouds and on-premises | Broadcom direct, OEM integrators |
| Strategic Appeal | Proven scale, broad support | Cloud workload optimization | AWS integration advantages | Multi-cloud flexibility | Simplified networking |
| Market Position | Leadership with margin pressure | Growing in specific workloads | Expanding within AWS | Emerging alternative | Infrastructure enabler |

*Performance-per-watt improvements and cost savings depend on specific workload characteristics, model types, deployment configurations and vendor testing assumptions. Actual results vary significantly by use case.

Hyperscalers continue building their own paths

Every major cloud provider has adopted custom silicon to gain the performance, cost, ecosystem-scale and DevOps advantages of defining an ASIC from the ground up. Google operates TPU v6 in production through its partnership with Broadcom. Meta built MTIA chips specifically for ranking and recommendations. Microsoft is developing Project Maia for sustainable AI workloads. Amazon Web Services steers customers toward Trainium for training and Inferentia for inference. Add ByteDance, which runs TikTok recommendations on custom silicon despite geopolitical tensions, and that's billions of inference requests running on ASICs daily, not GPUs.

CFO Colette Kress acknowledged the competitive reality during the call, noting that China revenue had dropped to a low single-digit percentage of data center revenue. Current Q3 guidance excludes H20 shipments to China entirely. While Huang's comments about China's extensive opportunities tried to steer the call in a positive direction, equity analysts clearly weren't buying all of it. The prevailing view is that export controls create ongoing uncertainty for Nvidia in a market that arguably represents its second most significant growth opportunity. Huang said that 50% of all AI researchers are in China and that he is fully committed to serving that market.

Nvidia's platform advantage is one of its greatest strengths

Huang made a valid case for Nvidia's integrated approach on the call. Building modern AI requires six different chip types working together, he argued, and that complexity creates barriers competitors struggle to match. Nvidia doesn't just ship GPUs anymore, he emphasized repeatedly. The company delivers a complete AI infrastructure that scales globally, he stated emphatically, returning to AI infrastructure as a core message and invoking it six times.

The platform's ubiquity makes it the default configuration in nearly every cloud hyperscaler's DevOps cycle. Nvidia runs across AWS, Azure and Google Cloud, and PyTorch and TensorFlow optimize for CUDA by default. When Meta drops a new Llama model or Google updates Gemini, they target Nvidia hardware first, because that's where millions of developers already work. The ecosystem creates its own gravity.
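To make the "default" point concrete, here is the device-selection idiom found in virtually every PyTorch codebase. It is an illustrative sketch (not from Nvidia or the earnings call) showing that CUDA is the first target checked and everything else is the fallback:

```python
import torch

# The canonical PyTorch device-selection idiom: CUDA first, CPU as fallback.
# Defaults like this one are the ecosystem "gravity" described above.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = torch.nn.Linear(1024, 1024).to(device)
batch = torch.randn(8, 1024, device=device)

with torch.no_grad():
    out = model(batch)

print(f"Ran on: {device}, output shape: {tuple(out.shape)}")
```

Replicated across millions of repositories and CI pipelines, that one-line default becomes a structural switching cost rather than a contractual one.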
The networking business validates the AI infrastructure strategy. Revenue hit $7.3 billion in Q2, up 98% year over year, and NVLink connects GPUs at speeds traditional networking can't touch. Huang revealed the real economics during the call: Nvidia captures about 35% of a typical gigawatt AI factory's budget.

"Out of a gigawatt AI factory, which can go anywhere from 50 to, you know, plus or minus 10%, let's say, to $60 billion, we represent about 35% plus or minus of that. … And of course, what you get for that is not a GPU. … we've really transitioned to become an AI infrastructure company," Huang said.

That's