The development of artificial intelligence was once a largely technical issue, confined to the halls of academia and the labs of the private sector. Today, it is an arena of geopolitical competition. The United States and China each invest billions every year in growing their AI industries, increasing the autonomy and power of futuristic weapons systems, and pushing the frontiers of possibility. Fears of an AI arms race between the two countries abound—and although the rhetoric often outpaces the technological reality, rising political tensions mean that both countries increasingly view AI as a zero-sum game.

For all its geopolitical complexity, AI competition boils down to a simple technical triad: data, algorithms, and computing power. The first two elements of the triad receive an enormous amount of policy attention. As the raw material of modern machine learning, data is often compared to oil—a trope repeated everywhere from technology marketing materials to presidential primaries. Equally central to the policy discussion are algorithms, which enable AI systems to learn from and interpret data. While it is important not to overstate China’s capabilities in these realms, the country does well in both: its expansive government bureaucracy hoovers up massive amounts of data, and its tech firms have made notable strides in advanced AI algorithms.

But the third element of the triad is often neglected in policy discussions. Computing power—or compute, in industry parlance—is treated as a boring commodity, unworthy of serious attention. That is in part because compute is usually taken for granted in everyday life. Few people know how fast the processor in their laptop is—only that it is fast enough. But in AI, compute is quietly essential. As algorithms learn from data and encode insights into neural networks, they perform trillions or quadrillions of individual calculations. Without processors capable of doing this math at high speed, progress in AI grinds to a halt. Cutting-edge compute is thus more than just a technical marvel; it is a powerful point of leverage between nations.

Recognizing the true power of compute would mean reassessing the state of global AI competition. Unlike the other two elements of the triad, compute has undergone a silent revolution led by the United States and its allies—one that gives these nations a structural advantage over China and other countries that are rich in data but lag in advanced electronics manufacturing. U.S. policymakers can build on this foundation as they seek to maintain their technological edge. To that end, they should consider increasing investments in research and development and restricting the export of certain processors or manufacturing equipment. Options like these have substantial advantages when it comes to maintaining American technological superiority—advantages that are too often underappreciated but too important to ignore.

A REVOLUTION IN COMPUTE

Computing power in AI has undergone a radical transformation in the last decade. According to the research lab OpenAI, the amount of compute used to train top AI projects increased by a factor of 300,000 between 2012 and 2018. To put that number into context, if a cell phone battery lasted one day in 2012 and its lifespan increased at the same rate as AI compute, the 2018 version of that battery would last more than 800 years.
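The arithmetic behind that analogy is simple enough to check. A rough calculation, assuming the battery’s one-day lifespan in 2012 scales linearly with the 300,000-fold increase in compute:

\[
1 \ \text{day} \times 300{,}000 = 300{,}000 \ \text{days} \approx \frac{300{,}000}{365} \ \text{years} \approx 822 \ \text{years}
\]

which is where the figure of more than 800 years comes from.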

Greater computing power has enabled remarkable breakthroughs in AI, including OpenAI’s GPT-3 language generator, which can answer science and trivia questions, fix poor grammar, unscramble anagrams, and translate between languages. Even more impressive, GPT-3 can generate original stories. Give it a headline and a one-sentence summary, and like a student with a writing prompt, it can conjure paragraphs of coherent text that human readers would struggle to identify as machine generated. GPT-3’s data (almost a trillion words of human writing) and complex algorithm (running on a giant neural network with 175 billion parameters) attracted the most attention, but both would have been useless without the program’s enormous computing power—enough to perform the equivalent of 3,640 quadrillion calculations per second, sustained for an entire day.
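To unpack that last figure: GPT-3’s training compute is typically quoted as 3,640 petaflop/s-days, where a petaflop/s-day is one quadrillion ($10^{15}$) calculations per second sustained for 24 hours. As a back-of-the-envelope conversion (a rough total, not an official OpenAI accounting):

\[
3{,}640 \times 10^{15} \ \frac{\text{calculations}}{\text{second}} \times 86{,}400 \ \text{seconds} \approx 3.1 \times 10^{23} \ \text{calculations in all}
\]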

The rapid advances in compute that OpenAI and others have harnessed are partly a product of Moore’s law, the long-standing observation that the basic computing power of cutting-edge chips doubles roughly every 24 months as a result of improved processor engineering. But also important have been rapid improvements in “parallelization”—that is, the ability of multiple computer chips to train an AI system at the same time. Those same chips have also become increasingly efficient and customizable for specific machine-learning tasks. Together, these three factors have supercharged AI computing power, improving its capacity to address real-world problems.
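It is worth spelling out just how little of that 300,000-fold increase Moore’s law alone can explain. Doubling every 24 months over the six years from 2012 to 2018 yields only about an eightfold improvement (a rough estimate that treats the doubling period as exact):

\[
2^{6 \ \text{years} / 2 \ \text{years}} = 2^{3} = 8
\]

The remaining factor of roughly 37,500 is what parallelization, efficiency gains, and task-specific chip designs, together with sheer spending, had to supply.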

None of these developments has come cheap. The production cost and complexity of new computer chip factories, for instance, increase as engineering problems get harder. Moore’s lesser-known second law says that the cost of building a factory to make computer chips doubles every four years. New facilities cost upward of $20 billion to build and feature chip-making machines that sometimes run more than $100 million each. The growing parallelization of machines also adds expense, as does the use of chips specially designed for machine learning. 

COMPUTING LEVERAGE

The increasing cost and complexity of compute give the United States and its allies an advantage over China, which still lags behind its competitors in this element of the AI triad. American companies dominate the market for the software needed to design computer chips, and the United States, South Korea, and Taiwan host the leading chip-fabrication facilities. Three countries—Japan, the Netherlands, and the United States—lead in chip-manufacturing equipment, controlling more than 90 percent of global market share.

For decades, China has tried to close these gaps, sometimes with unrealistic expectations. When Chinese planners decided to build a domestic computer chip industry in 1977, they thought the country could be internationally competitive within several years. Beijing made significant investments in the new sector. But technical barriers, a lack of experienced engineers, and poor central planning meant that Chinese chips still trailed behind their competitors several decades later. By the 1990s, the Chinese government’s enthusiasm had largely receded.

In 2014, however, a dozen leading engineers urged the Chinese government to try again. Chinese officials created the National Integrated Circuit Fund—more commonly known as “the big fund”—to invest in promising chip companies. Its long-term plan was to meet 80 percent of China’s demand for chips by 2030. But despite some progress, China remains behind. The country still imports 84 percent of its computer chips from abroad, and even among those produced domestically, half are made by non-Chinese companies. Even in Chinese fabrication facilities, Western chip design, software, and equipment still predominate.
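A rough reading of those figures underscores the gap (an approximation, assuming chips made by Chinese and non-Chinese companies are of comparable value):

\[
(100\% - 84\%) \times \frac{1}{2} = 8\%
\]

In other words, Chinese companies supply something on the order of eight percent of the country’s chip demand, an order of magnitude short of the 80 percent goal.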

The current advantage enjoyed by the United States and its allies—stemming in part from the growing importance of compute—presents an opportunity for policymakers interested in limiting China’s AI capabilities. By choking off the chip supply with export controls or limiting the transfer of chip-manufacturing equipment, the United States and its allies could slow China’s AI development and ensure its reliance on existing producers. The administration of U.S. President Donald Trump has already taken limited actions along these lines: in what may be a sign of things to come, in 2018, it successfully pressured the Netherlands to block the export to China of a $150 million cutting-edge chip-manufacturing machine.

Export controls on chips or chip-manufacturing equipment might well have diminishing marginal returns: a lack of competition from Western technology could simply spur China to build up its own industry in the long run. Limiting access to chip-manufacturing equipment may therefore be the most promising approach, as China is less likely to be able to develop that equipment on its own. But the issue is time sensitive and complex; policymakers have a window in which to act, and it is likely closing. Their priority must be to determine how best to preserve the United States’ long-term advantage in AI.

In addition to limiting China’s access to chips or chip-making equipment, the United States and its allies must also consider how to bolster their own chip industries. As compute becomes increasingly expensive to build and deploy, policymakers must find ways to ensure that Western companies continue to push technological frontiers. Over several presidential administrations, the United States has failed to maintain an edge in the telecommunications industry, ceding much of that sector to foreign competitors, including China’s Huawei. The United States cannot afford to meet the same fate when it comes to chips, chip-manufacturing equipment, and AI more generally.

Part of avoiding that fate will mean making compute accessible to academic researchers so that they can continue to train new experts and contribute to progress in AI. Already, some researchers have complained that the prohibitive cost of compute limits the pace and depth of their work. Few, if any, academic researchers could have afforded the compute necessary to develop GPT-3. If such power becomes too expensive for academics to employ, even more research will shift to large private-sector companies, crowding out startups and inhibiting innovation.

When it comes to U.S.-Chinese competition, the often-overlooked lesson is that computing power matters. Data and algorithms are critical, but they mean little without the compute to back them up. By taking advantage of their natural head start in this realm, the United States and its allies can preserve their ability to counter Chinese capabilities in AI.

BEN BUCHANAN is Director of the CyberAI Project at the Center for Security and Emerging Technology (CSET), an Assistant Teaching Professor in the School of Foreign Service at Georgetown University, and the author of The Hacker and the State.