

NVIDIA stock slipped after reports indicated that Meta Platforms plans to spend billions of dollars on Google’s custom AI chips. The talks highlight how Google’s tensor processing units, or TPUs, increasingly challenge NVIDIA’s graphics processing units in the fast-growing AI accelerator market.
Meta is in discussions to deploy Google’s TPUs across its data centers from 2027, according to a report from The Information. The company may also rent Google’s AI chips through Google Cloud as early as next year. These talks signal that Meta wants to diversify beyond NVIDIA’s GPUs, which currently power many of its artificial intelligence workloads.
If Meta finalizes a long-term deal, Google TPUs would gain a major new customer alongside existing users such as Anthropic. The move would present a direct alternative to NVIDIA’s high-end accelerators for training and running large language models. It would also give Meta more leverage over supply, pricing, and infrastructure choices as AI spending grows.
NVIDIA shares fell as much as roughly 2.7% in extended trading after the report, while shares of Alphabet, Google’s parent, gained around 2% to 3%. Advanced Micro Devices also traded lower, reflecting broader concerns that large buyers of AI hardware may shift part of their budgets to alternative platforms.
Google continues to promote TPUs as purpose-built accelerators for artificial intelligence and machine learning tasks. The chips use an application-specific integrated circuit (ASIC) design that the company has tuned over more than a decade for workloads such as large-model training and inference.
Earlier this year, Google agreed to supply up to 1 million TPUs to Anthropic, a leading AI startup. Analysts described that agreement as strong validation for the TPU ecosystem, showing that large-scale AI developers now view Google’s hardware as a credible alternative to relying solely on NVIDIA GPUs.
Bloomberg Intelligence analysts Mandeep Singh and Robert Biggar noted that Meta’s potential use of TPUs indicates a broader pattern. They expect many third-party large language model providers to treat Google as a secondary accelerator supplier, especially for inference workloads. The analysts estimate that Meta could spend $40 billion to $50 billion on inference chip capacity next year, based on its capital expenditure plans for 2026.
This trend could also support faster growth in Google Cloud consumption and backlog. Enterprise customers that want access to TPUs and Google’s Gemini models may increasingly route workloads through Google’s infrastructure, which would raise competitive pressure on rival hyperscalers.
Despite the latest pullback, NVIDIA stock still trades in an established long-term uptrend. Shares hover near $180, below recent highs above $190. Chart indicators show consolidation, with the 50-day moving average flattening near current levels and the 200-day moving average holding far lower, which underscores the longer-term positive structure.
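For context on the indicators cited above, a simple moving average is just the arithmetic mean of the most recent N daily closing prices, so a flattening 50-day line means the average of the last 50 closes has stopped rising. The short Python sketch below is illustrative only: the price series is made-up placeholder data, not NVIDIA’s actual closes, and the helper function is our own naming.

```python
def simple_moving_average(closes, window):
    """Return the mean of the most recent `window` closing prices."""
    if len(closes) < window:
        raise ValueError("not enough data points for this window")
    return sum(closes[-window:]) / window

# Hypothetical daily closes (oldest first, newest last); real usage would
# load an actual market data series instead of this synthetic ramp.
closes = [175.0 + 0.1 * i for i in range(250)]

print("50-day SMA: ", round(simple_moving_average(closes, 50), 2))
print("200-day SMA:", round(simple_moving_average(closes, 200), 2))
```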
Technical levels now frame a key range for NVIDIA. Support sits between $170 and $175, an area that attracted buyers during earlier declines. A break below that band could open room toward $150 to $155. On the upside, resistance appears near $190, followed by a psychological barrier around $200. A firm move above $200 could encourage momentum-driven buying.