Google in Talks with Marvell for Custom AI Inference Chips
Google is in talks with Marvell Technology to develop two custom AI inference chips, including a memory processing unit and an inference-optimized TPU. This move signals Google's strategic diversification of its chip supply chain, expanding beyond its primary partner Broadcom to address the rapidly growing demand and cost of AI inference workloads. The collaboration aims to enhance Google's competitive advantage in the burgeoning custom silicon market.

Google is reportedly in advanced discussions with Marvell Technology to develop two new specialized artificial intelligence (AI) chips, marking a significant move to diversify its custom silicon supply chain. This potential collaboration focuses on building a memory processing unit (MPU) and a Tensor Processing Unit (TPU) specifically optimized for AI inference workloads.
The talks, though not yet formalized into a signed contract, surfaced just days after Google solidified a long-term agreement with Broadcom, its primary custom chip partner, extending through 2031. This timing underscores Google’s strategy of expansion and diversification rather than a replacement of existing partners. Broadcom will continue to design and supply high-performance TPUs and networking components, while MediaTek contributes cost-optimized “e” variants. Should a deal with Marvell materialize, it would add a third key design partner to Google’s robust custom silicon ecosystem, alongside fabrication giant TSMC.
Shifting Focus to AI Inference
Google’s increased interest in inference-optimized silicon reflects a pivotal shift in AI compute demand. While training frontier AI models requires immense computational power over weeks or months, inference – the process of running trained models to serve user queries – represents a continuous, escalating cost that scales directly with user engagement. As AI-powered products reach hundreds of millions of users daily, inference costs are becoming the dominant expense.
Purpose-built inference chips offer a competitive advantage in terms of cost and efficiency that general-purpose GPUs often cannot match. Google’s seventh-generation TPU, Ironwood, launched earlier this month, is already being heralded as “the first Google TPU for the age of inference.” It boasts ten times the peak performance of its predecessor, the TPU v5p, and can scale to superpods of 9,216 liquid-cooled chips. Google plans to deploy millions of Ironwood units this year, and any Marvell-designed chips would likely complement this existing infrastructure, potentially addressing different workload profiles or cost requirements.
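The superpod figure above is ultimately a throughput multiplier. As a rough illustration of the aggregate-compute arithmetic, the sketch below uses a hypothetical per-chip figure (a placeholder assumption, not a published Ironwood specification) together with the 9,216-chip pod size cited above:

```python
# Illustrative scaling arithmetic only; the per-chip figure used below
# is a placeholder assumption, not a published Ironwood specification.
SUPERPOD_CHIPS = 9_216  # superpod size cited for Ironwood


def superpod_peak_pflops(per_chip_pflops: float,
                         chips: int = SUPERPOD_CHIPS) -> float:
    """Aggregate peak compute (petaFLOPS) for a pod of identical chips."""
    return per_chip_pflops * chips


# With an assumed 1 PFLOPS per chip, a full superpod would aggregate
# 9,216 PFLOPS, i.e. on the order of exaFLOPS of peak compute.
print(superpod_peak_pflops(1.0))
```

The point of the sketch is simply that pod-scale deployment, not any single chip, sets the ceiling on serving capacity.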
Marvell’s Growing Influence in Custom Silicon
Marvell Technology has emerged as a significant player in the custom silicon market. The company reported record data center revenue of $6.1 billion in its fiscal year ending February 2026, contributing to total revenue of $8.2 billion, a 42% year-over-year increase. Its custom silicon business has reached a $1.5 billion annual run rate, backed by 18 design wins with major cloud providers, including Amazon (Trainium processors), Microsoft (Maia AI accelerator), and Meta (a new data processing unit). Marvell already collaborates with Google on the Axion ARM CPU.
Recent strategic moves further bolster Marvell’s position. Nvidia invested $2 billion in Marvell in late March, forging a partnership through NVLink Fusion to integrate Marvell’s custom chips and networking with Nvidia’s interconnect fabric. Additionally, Marvell acquired Celestial AI in December 2025 for up to $5.5 billion, gaining advanced photonic interconnect technology. CEO Matt Murphy aims for a 20% market share in custom AI chips and projects roughly 30% year-over-year revenue growth in fiscal 2027. Marvell’s stock has reflected this strong momentum, rallying approximately 50% year-to-date.
Broadcom’s Enduring Market Leadership
Despite Google’s exploration of new partners, Broadcom’s position in the custom AI accelerator market remains robust. The company commands over 70% market share in this segment, with its AI revenue soaring to $8.4 billion in its most recent quarter—a 106% year-over-year increase. Guidance for the following quarter projects $10.7 billion in AI revenue, with the company targeting $100 billion in AI chip revenue by 2027. Following the Google extension announcement, Broadcom’s shares rose over 6%. Mizuho analysts estimate Broadcom will generate $21 billion in AI revenue from its Google and Anthropic relationships in 2026, potentially doubling to $42 billion in 2027.
The broader custom chip market is experiencing explosive growth, with TrendForce projecting a 45% increase in custom chip sales in 2026, significantly outpacing the 16% growth forecast for GPU shipments. Counterpoint Research anticipates Broadcom will hold approximately 60% of the custom AI accelerator market by 2027, with Marvell securing about 25%. The overall market for custom AI chips is projected to reach $118 billion by 2033.
Google’s Multi-Partner Approach
Google’s evolving chip strategy now encompasses at least four external partners (Broadcom, MediaTek, Marvell, and TSMC) alongside its own internal design teams. This complex, multi-vendor approach for its diverse product line—spanning AI training, inference, and general-purpose cloud compute—is a deliberate strategic choice. Hyperscalers dependent on a single chip supplier face inherent risks related to pricing, supply chain stability, and strategic vulnerability. By diversifying, Google aims to mitigate these risks and gain greater control over its foundational AI infrastructure.
This inference-focused engagement with Marvell highlights Google’s commitment to achieving massive cost efficiencies at scale. Shaving even a small percentage off the cost per inference across billions of daily AI-augmented search queries, Gemini conversations, and Cloud AI API calls translates into billions of dollars in annual savings. While chip development timelines mean any Marvell-designed products are likely years from production, the strategic direction is unequivocally clear: Google is building a resilient, multi-partner supply chain designed to power the world’s most demanding AI inference workloads.
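To make that savings arithmetic concrete, here is a minimal back-of-the-envelope model; the query volume, per-inference cost, and efficiency-gain rate are all illustrative assumptions, not Google figures:

```python
def annual_inference_savings(queries_per_day: float,
                             cost_per_query: float,
                             savings_rate: float) -> float:
    """Annual dollars saved by trimming a fraction off per-inference cost.

    All inputs are hypothetical; this only illustrates how a small
    percentage compounds across a large daily query volume.
    """
    return queries_per_day * cost_per_query * savings_rate * 365


# Assuming 5 billion AI-augmented queries per day at $0.02 each,
# a 5% efficiency gain from purpose-built inference silicon yields
# savings on the order of $1.8 billion per year.
savings = annual_inference_savings(5e9, 0.02, 0.05)
print(f"${savings / 1e9:.2f}B per year")
```

Even at a fraction of these assumed volumes, the multiplication shows why per-inference efficiency, rather than peak training throughput, drives the economics of custom inference silicon.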
FAQ
Q: Why is Google diversifying its AI chip suppliers?
A: Google is diversifying its AI chip suppliers to mitigate risks associated with relying on a single vendor, including pricing risk, supply chain vulnerabilities, and strategic dependence. A multi-partner approach provides greater control and resilience for its extensive AI infrastructure.
Q: What type of AI chips is Google discussing with Marvell?
A: Google is in talks with Marvell Technology to develop two specific types of AI chips: a memory processing unit (MPU) designed to work alongside existing Tensor Processing Units (TPUs), and a new TPU explicitly optimized for AI inference workloads.
Q: How does this potential partnership impact Google’s relationship with Broadcom?
A: The discussions with Marvell do not appear to replace Broadcom but rather to add a third design partner. Broadcom recently secured a long-term agreement with Google through 2031 and continues to hold a dominant market share in custom AI accelerators, indicating Google's strategy is diversification, not substitution.