Robbie Whelan, The Wall Street Journal
6 Dec 2025

[Image: Nvidia's dominance in AI computing power has made it the most valuable company in the world. (REUTERS/Dado Ruvic/Illustration/File Photo)]

Summary: Google, Amazon, AMD and Nvidia's own customers are rising to challenge the 800-pound gorilla.

For a decade, one company has had an almost total stranglehold on the business of selling the advanced computer chips that power machine learning and artificial intelligence: Nvidia.

Armed with the most advanced blueprints for graphics processing units, or GPUs, and aided by the rapid pace of innovation at Taiwan Semiconductor Manufacturing, the contract manufacturer that makes 90% of the world's advanced AI chips, Nvidia has become synonymous with AI processors.

This is starting to change. New entrants to the AI chip-design business, including Google and Amazon, are talking about selling their most advanced chips, which rival Nvidia's GPUs in power and efficiency, to a variety of external customers. Smaller rivals such as Advanced Micro Devices, Qualcomm and Broadcom are introducing products that sharpen their focus on AI data-center computing. Even some of Nvidia's biggest customers, such as ChatGPT maker OpenAI and Meta Platforms, are starting to design their own custom chips, posing a new challenge to the company's ubiquity.

While Nvidia is unlikely to see a mass exodus of customers, efforts by AI firms to diversify their suppliers could make it harder for the market leader to generate the stellar sales growth investors have grown accustomed to seeing.

The landscape is changing rapidly. Every week seems to bring a massive new technology-infrastructure deal or the release of a new generation of powerful AI chips. Here's an overview of the major companies vying for position in the fast-growing market for AI chips.

The Top Dog

Nvidia's dominance in AI computing power has made it the most valuable company in the world and propelled its leather-jacket-clad CEO, Jensen Huang, to celebrity status. Investors scrutinize Huang's every word, viewing the company's quarterly earnings as a barometer of the overall AI boom.

Nvidia likes to describe its business as more than just chips, stressing that it offers "rack-scale server solutions" and calling the data centers that run on its hardware "AI factories." But the basic product that Nvidia offers, accelerated computing, is the same one that all AI firms want.

From February to October, Nvidia sold $147.8 billion worth of chips, network connectors and other hardware supporting the explosive growth of AI. That's up from $91 billion in the same period a year earlier.

In July, Nvidia passed $4 trillion in market value, the first company on the planet to do so. Five months later, it briefly topped $5 trillion, before fears of a bubble swept through the AI industry. Nvidia's share price, like that of most of its competitors, has since fallen a little closer to earth. Even with the correction, the company is worth more than twice its closest rival, Broadcom, which is valued at $1.8 trillion.

Nvidia had humble beginnings. In what is now corporate legend, Huang, Curtis Priem and Chris Malachowsky, three friends and fellow electrical engineers, founded the company in 1993 over Grand Slam breakfast plates at a Denny's in San Jose, California. Their original goal was to develop chips that could produce more realistic 3-D graphics for personal computers.

Unlike the central processing units, or CPUs, that power most computers, GPUs are capable of parallel computing: they can perform millions or billions of simple tasks simultaneously. Originally used by video game developers, Nvidia's GPUs, the company later realized, were perfect for deep learning and AI. In 2006, Nvidia released CUDA, its proprietary software platform that lets developers build applications that run on the company's chips and make them run faster. As the AI gold rush took off, thousands of developers got caught up in Nvidia's ecosystem of hardware and software.
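That parallelism is easiest to see in code. The sketch below is purely illustrative and not drawn from any company mentioned in this article; it uses the public CUDA runtime API to add two arrays of about a million numbers, with each GPU thread handling a single element, the kind of workload a CPU would grind through one loop iteration at a time.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread computes one element of c = a + b. Launching a grid of
// roughly a million threads lets the additions run largely simultaneously,
// which is the parallelism described above.
__global__ void add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;               // about a million elements
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    // Unified memory keeps the sketch short; production code often manages
    // host-to-device copies explicitly with cudaMemcpy.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    add<<<blocks, threadsPerBlock>>>(a, b, c, n);
    cudaDeviceSynchronize();             // wait for the GPU to finish

    printf("c[0] = %.1f\n", c[0]);       // prints 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Code written against this interface runs only on Nvidia hardware, which is part of how the ecosystem locked developers in.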
Nvidia has accelerated its cadence for releasing each new generation of advanced AI chips. Late last year, it began shipping its Grace Blackwell line of servers, built around its most advanced chips yet, and sold out almost immediately. At an October conference in Washington, D.C., Huang said the company had sold six million Blackwell chips so far in 2025 and had orders for 14 million more, representing half a trillion dollars in total sales.

Challenges remain. Nvidia has been effectively banned from selling its chips in China for the past three years, a problem because Huang maintains that the rival superpower is home to half the world's AI developers. Without the billions in sales that Chinese customers represent, the company's growth will be limited, and China's technology sector will likely grow accustomed to working with homegrown chips instead. Nvidia now also faces increased pressure at home.

[Image: AMD CEO Lisa Su holds an MI355X GPU at the company's Austin, Texas, campus.]

Rival Designers

AMD made a critical change of course three years ago, setting up a classic David vs. Goliath challenge to Nvidia. When it became clear that demand for advanced AI processors was skyrocketing, AMD CEO Lisa Su told her board that she planned to reorient the entire company around AI, predicting that "insatiable" demand for computing would continue.

The bet has paid off handsomely so far: AMD's market capitalization has nearly quadrupled to more than $350 billion, and the company recently struck big deals to supply chips to OpenAI and Oracle.

Another chip designer, Broadcom, once a division of Hewlett-Packard, has also emerged as a formidable competitor, expanding into a $1.8 trillion leviathan through a series of big-ticket mergers. Broadcom now makes custom chips called XPUs, which are designed for specific computing tasks, as well as networking hardware that helps data centers link together large racks of servers.

Intel, one of the original Silicon Valley titans, has fallen on hard times, having mostly missed the AI revolution because of a series of strategic mistakes. But it has recently invested heavily in both its design and manufacturing businesses and is courting customers for its advanced data-center processors.

Qualcomm, best known for designing chips for mobile devices and cars, sent its stock up 20% with its October announcement that it would launch two new AI accelerator chips. The company says the new AI200 and AI250 stand out for their very high memory capacities and energy efficiency.

[Image: A Trainium2 chip produced by Amazon Web Services' Annapurna Labs. This week, the company introduced faster custom AI chips.]

The Giant Interlopers
In recent weeks, competition has intensified. Armed with mountains of cash from other lines of business, Alphabet's Google unit and Amazon's cloud-computing arm, Amazon Web Services, have invested in AI chips and are seeing increased demand for them from third-party customers.

For more than a decade, Google has been designing chips known as tensor processing units, or TPUs, for internal use. The company first made them available to outside customers in 2018, but for several years it didn't sell them widely to large clients. Now giants including Meta, Anthropic and Apple are either buying access to TPUs to train and run their models or are in talks to do so. In late November, Dylan Patel, founder of the influential AI-infrastructure consulting firm SemiAnalysis, opined that the growing popularity of Google's chips may mean "the end of Nvidia's dominance."

Amazon, meanwhile, is expanding a data-center cluster for Anthropic that will eventually contain more than one million of Amazon's Trainium chips, and AWS has just launched broader sales of chips it says are faster and use much less energy than Nvidia's equivalents.

[Image: OpenAI, run by CEO Sam Altman, has tremendous computing needs.]

The Do-It-Yourselfers

Even Nvidia's customers are beginning to eat away at its dominance by developing their own application-specific integrated circuits, or ASICs. This class of chips, co-designed by AI companies and the big silicon firms, is optimized for highly specific computing tasks.

OpenAI and Broadcom recently entered into a multibillion-dollar partnership to develop custom chips to meet the ChatGPT maker's computing needs. A few months ago, Meta announced that it would acquire chip startup Rivos to boost its efforts to develop in-house AI training chips. Microsoft's chief technology officer said in October that the company plans to rely more on its own custom accelerator chips in its data-center business. And over the summer, Elon Musk's xAI posted a job listing for chip designers to help "design and refine new hardware architectures" for AI model training.

Most industry watchers say Nvidia is unlikely to lose its dominant market position, and Nvidia argues that its computing systems are more flexible and have wider uses than custom chips. But with demand rising rapidly, it's no longer the only game in town.

Write to Robbie Whelan at [email protected]