From VentureBeat · 6 min read

Google doesn't pay the Nvidia tax. Its new TPUs explain why.

Every frontier AI lab right now is rationing two things: electricity and compute. Most of them buy their compute for model training from the same supplier, at the steep gross margins that have turned Nvidia into one of the most valuable companies in the world. Google does not.

On Tuesday night, inside a private gathering at F1 Plaza in Las Vegas, Google previewed its eighth-generation Tensor Processing Units. The pitch: two custom silicon designs shipping later this year, each purpose-built for a different half of the modern AI workload. TPU 8t targets training for frontier models, and TPU 8i targets the low-latency, memory-hungry world of agentic inference and real-time sampling.

Amin Vahdat, Google's SVP and chief technologist for AI and infrastructure, used his time onstage to make a point that matters more to enterprise buyers than any individual spec: Google designs every layer of its AI stack end-to-end, and that vertical integration is starting to show up in cost-per-token economics that Google says its rivals cannot match.

"One chip a year wasn't enough": Inside Google's 2024 bet on a two-chip roadmap

The more interesting story behind v8t and v8i is when the decision to split the roadmap was made. The call came in 2024, according to Vahdat — a year before the industry at large pivoted to reasoning models, agents and reinforcement learning as the dominant frontier workload.

At the time, it was a contrarian read. "We realized two years ago that one chip a year wouldn't be enough," Vahdat said during the fireside. "This is our first shot at actually going with two super high-powered specialized chips."

For enterprise buyers, the implication is concrete. Until now, customers running fine-tuning or large-scale training on Google Cloud and customers serving production agents on Vertex AI have rented the same accelerators and absorbed the inefficiency. V8 is the first generation where the silicon treats those as two different problems, with a chip for each.

TPU 8t: A training fabric that scales to a million chips

On paper, TPU 8t is an aggressive generational step. According to Google, 8t delivers 2.8x the FP4 EFlops per pod (121 vs 42.5) against Ironwood, the seventh-generation TPU that shipped in 2025, doubles bidirectional scale-up bandwidth to 19.2 Tb/s per chip, and quadruples scale-out networking to 400 Gb/s per chip. Pod size grows modestly from 9,216 to 9,600 chips, held together by Google's 3D Torus topology.
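Dividing Google's stated pod-level numbers by pod size gives a rough per-chip picture. The sketch below is back-of-envelope arithmetic derived from the figures above; the per-chip values are our own derivation, not numbers Google quoted.

```python
# Back-of-envelope per-chip arithmetic derived from Google's stated pod figures.
# Only the pod-level numbers come from the announcement; the per-chip values are derived.

ironwood = {"fp4_eflops_per_pod": 42.5, "chips_per_pod": 9_216}   # 7th-gen figures as stated
tpu_8t   = {"fp4_eflops_per_pod": 121.0, "chips_per_pod": 9_600}  # claimed v8t figures

for name, pod in (("Ironwood", ironwood), ("TPU 8t", tpu_8t)):
    per_chip_pflops = pod["fp4_eflops_per_pod"] * 1_000 / pod["chips_per_pod"]
    print(f"{name}: ~{per_chip_pflops:.1f} FP4 PFlops per chip")

# Ironwood: ~4.6 FP4 PFlops per chip
# TPU 8t:   ~12.6 FP4 PFlops per chip -> roughly 2.7x per chip, since pod size grew only ~4%
```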

The number that matters most to IT leaders evaluating where to run frontier-scale training: 8t clusters (Superpods) can scale beyond 1 million TPU chips in a single training job via a new interconnect Google is calling Virgo networking. 

8t also introduces TPU Direct Storage, which moves data from Google's managed storage tier directly into HBM without the usual CPU-mediated hops. For long training runs where wall-clock time is the cost driver, collapsing that data path reduces the number of pod-hours needed to finish each epoch.
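Google has not published a Direct Storage API, so the snippet below is only an illustrative cost model, with every input a hypothetical placeholder. It shows the mechanism the claim relies on: cutting input-pipeline stall time raises effective throughput, which directly cuts pod-hours per epoch.

```python
# Illustrative pod-hours model for a training epoch; every input is a hypothetical
# placeholder, not a Google-published figure.

def pod_hours_per_epoch(tokens: float, tokens_per_sec: float, input_stall_frac: float) -> float:
    """Wall-clock pod-hours for one epoch, given the fraction of step time lost
    waiting on the input pipeline (host-mediated hops, storage reads, etc.)."""
    effective_rate = tokens_per_sec * (1.0 - input_stall_frac)
    return tokens / effective_rate / 3600.0

TOKENS_PER_EPOCH = 2e12       # hypothetical
POD_TOKENS_PER_SEC = 4e7      # hypothetical aggregate pod throughput

cpu_mediated = pod_hours_per_epoch(TOKENS_PER_EPOCH, POD_TOKENS_PER_SEC, input_stall_frac=0.08)
direct_to_hbm = pod_hours_per_epoch(TOKENS_PER_EPOCH, POD_TOKENS_PER_SEC, input_stall_frac=0.01)
print(f"CPU-mediated path: {cpu_mediated:.1f} pod-hours; direct-to-HBM: {direct_to_hbm:.1f} pod-hours")
```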

TPU 8i and Boardfly: Re-engineering the network for agents

If 8t is an evolutionary step, TPU 8i is the more architecturally interesting chip. It is also where the story is most compelling for IT buyers.

The year-over-year spec jumps are, as Vahdat put it, “stunning.” According to Google, 8i delivers 9.8x the FP8 EFlops per pod (11.6 vs 1.2), 6.8x the HBM capacity per pod (331.8 TB vs 49.2 TB), and a pod size that grows 4.5x from 256 to 1,152 chips.

What drove those numbers is a rethink of the network itself. Vahdat explained the insight directly: Google's default way of connecting chips prioritized bandwidth over latency. That is good for pushing large volumes of data through a pod, but not for minimizing how long a response takes to come back. That profile works for training; for agents, it does not. In partnership with Google DeepMind, the TPU team built what Google calls Boardfly topology specifically to reduce the network diameter, shrinking the number of hops between any two chips in a pod. Paired with a Collective Acceleration Engine and what Google describes as very large on-chip SRAM, 8i delivers a claimed 5x improvement in latency for real-time LLM sampling and reinforcement learning.
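Google has not published Boardfly's layout, but the diameter argument itself is easy to see with a toy calculation. In a balanced 3D torus, the worst-case hop count grows with the cube root of chip count; a low-diameter fabric holds it near-constant. The sketch below is illustrative only, and the "low-diameter" comparison is an assumption, not a Boardfly spec.

```python
# Toy illustration of network diameter: how worst-case hop count scales in a
# balanced 3D torus. The numbers are illustrative, not Boardfly or TPU specs.
def torus_3d_diameter(chips: int) -> int:
    """Worst-case hops in a balanced 3D torus of `chips` nodes; each dimension
    contributes up to side/2 hops thanks to wraparound links."""
    side = round(chips ** (1 / 3))
    return 3 * (side // 2)

for chips in (256, 1_152, 9_216):
    print(f"{chips:>5} chips: balanced 3D torus diameter ~{torus_3d_diameter(chips)} hops")

# 256 chips -> ~9 hops, 1,152 -> ~15, 9,216 -> ~30. A low-diameter topology aims to
# bound the worst case at a handful of hops regardless of pod size, which is the
# property a latency-bound sampling or RL loop cares about (our framing, not Google's).
```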

The vertical-integration moat: Why Google doesn't pay the "Nvidia tax"

The subtext across Vahdat's presentation was a six-layer diagram Google calls its AI stack: energy at the foundation, then data center land and enclosures, AI infrastructure hardware, AI infrastructure software, models (Gemini 3), and services on top. Vahdat's argument was that designing each layer in isolation forces every one of them down to a least common denominator. Google designs them together.

This is where the competitive story for IT buyers and analysts crystallizes. OpenAI, Anthropic, xAI and Meta all depend heavily on Nvidia silicon to train their frontier models. Every H200 and Blackwell GPU they buy carries Nvidia’s data-center gross margin — the informal "Nvidia tax" that industry analysts have flagged for two years running as a structural cost disadvantage for anyone renting rather than designing. Google pays fab, packaging and engineering costs on its TPUs. It does not pay that margin. 
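A hedged back-of-envelope makes the margin point concrete. The price, margin, and cluster size below are illustrative assumptions, not reported figures for any vendor.

```python
# Illustrative arithmetic behind the "Nvidia tax"; all inputs are assumptions,
# not reported pricing, margins, or cluster sizes.

ACCEL_PRICE = 30_000        # hypothetical street price per merchant accelerator, USD
GROSS_MARGIN = 0.75         # hypothetical data-center gross margin
CLUSTER_CHIPS = 100_000     # hypothetical frontier-training cluster size

cogs = ACCEL_PRICE * (1 - GROSS_MARGIN)      # rough production cost per chip

buyer_capex = ACCEL_PRICE * CLUSTER_CHIPS    # buying merchant silicon at list price
builder_capex = cogs * CLUSTER_CHIPS         # a vertical integrator pays roughly COGS,
                                             # plus its own design and engineering spend

print(f"merchant silicon: ${buyer_capex / 1e9:.1f}B vs. in-house: ${builder_capex / 1e9:.2f}B (before design costs)")
```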

What v8 means for the compute race: A new evaluation checklist for IT leaders

For procurement and infrastructure teams, TPUv8 reframes the 2026–2027 cloud evaluation in concrete ways.

Teams training large proprietary models should look at 8t availability windows, Virgo networking access, and goodput SLAs — not just headline EFlops. Teams serving agents or reasoning workloads should evaluate 8i availability on Vertex AI, independent latency benchmarks as they emerge, and whether HBM-per-pod sizing fits their context windows. Teams consuming Gemini through Gemini Enterprise should inherit the 8i lift and should expect the ceiling on what they can deploy in production to rise meaningfully through 2026.
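On the "goodput SLA" point: goodput is the share of headline compute that lands as useful training progress once utilization, failures and restarts are counted. A minimal way to frame the question for a vendor, with every number below a placeholder:

```python
# Minimal goodput framing for an SLA conversation; every number is a placeholder.

def effective_eflops(headline_eflops: float, mfu: float, failure_overhead: float) -> float:
    """Headline pod compute discounted by model FLOPs utilization (MFU) and by
    time lost to failures, checkpoint restores and restarts."""
    return headline_eflops * mfu * (1.0 - failure_overhead)

# e.g. a pod with 121 headline EFlops, 40% MFU and 5% failure overhead:
print(f"~{effective_eflops(121.0, mfu=0.40, failure_overhead=0.05):.0f} effective EFlops")
```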

The caveats are real. General availability is still "later in 2026"; v8 is a roadmap signal, not a procurement decision today. Google's benchmarks are self-reported, and independent numbers will have to come from early cloud customers and third-party evaluators over the next two quarters. And portability between JAX/XLA and the CUDA/PyTorch ecosystem remains a friction cost worth pricing into any multi-year commitment.

Looking further out, Vahdat made two predictions worth noting. First, general-purpose CPUs will see a resurgence inside AI systems, not as accelerators but as orchestration compute for agent sandboxes, virtual machines and tool execution. Second, framed explicitly as an industry prediction rather than a Google roadmap preview: specialization will keep going. As general-purpose CPU gains plateau at a few percent a year, the workloads that matter will demand purpose-built silicon. "Two chips might become more," Vahdat said, without specifying whether the "more" would mean future TPU variants or other classes of specialized accelerators.

The frontier compute race used to be a question of who could buy the most H100s. It is now a question of who controls the stack. The shortlist of companies that genuinely do is, for the moment, two: Google and Nvidia.
