Artificial Intelligence & Other Innovations in the Battle Between “Democracy” and “Autocracy”

September 11, 2022

The new cold war becomes technological

It is now fashionable in the United States, and in the West as a whole, to divide the world into “democracies” and “autocracies”. Yet the way events are unfolding inside America itself makes it ever less clear who counts as a democrat and who as an autocrat. What do these terms even mean? On the eve of the November 2022 midterm elections in the United States, supporters of the Democratic Party call Trump and all Republicans fascists, while the Republicans themselves have declared Biden a fascist.

Nevertheless, let us look at how American scholars see the world and the technologies of the 21st century through the categories of “autocracy” and “democracy”.

AI is reshaping the world

An article by Henry Farrell, Abraham Newman, and Jeremy Wallace entitled “Spirals of Delusion: How AI Distorts Decision-Making and Makes Dictators More Dangerous,” published in Foreign Affairs in September 2022, concludes that “machine learning challenges each political system in its own way”.

“The challenges to democracies such as the United States are all too visible. Machine learning may increase polarisation—reengineering the online world to promote political division. It will certainly increase disinformation in the future, generating convincing fake speech at scale. The challenges to autocracies are more subtle but possibly more corrosive. Just as machine learning reflects and reinforces the divisions of democracy, it may confound autocracies, creating a false appearance of consensus and concealing underlying societal fissures until it is too late.”

“Early pioneers of AI, including the political scientist Herbert Simon, realised that AI technology has more in common with markets, bureaucracies, and political institutions than with simple engineering applications. Another pioneer of artificial intelligence, Norbert Wiener, described AI as a ‘cybernetic’ system—one that can respond and adapt to feedback. Neither Simon nor Wiener anticipated how machine learning would dominate AI, but its evolution fits with their way of thinking.”

“Facebook and Google use machine learning as the analytic engine of a self-correcting system, which continually updates its understanding of the data depending on whether its predictions succeed or fail. It is this loop between statistical analysis and feedback from the environment that has made machine learning such a formidable force.

What is much less well understood is that democracy and authoritarianism are cybernetic systems, too. Under both forms of rule, governments enact policies and then try to figure out whether these policies have succeeded or failed. In democracies, votes and voices provide powerful feedback about whether a given approach is really working. Authoritarian systems have historically had a much harder time getting good feedback. Before the information age, they relied not just on domestic intelligence but also on petitions and clandestine opinion surveys to try to figure out what their citizens believed.”

Now, the authors write, machine learning is disrupting traditional forms of democratic feedback (voices and votes) as new technologies facilitate disinformation and worsen existing biases—taking prejudice hidden in data and confidently transforming it into incorrect assertions <…> Such technology can tell rulers whether their subjects like what they are doing without the hassle of surveys or the political risks of open debates and elections. For this reason, many observers have fretted that advances in AI will only strengthen the hand of dictators and further enable them to control their societies.

However, according to the authors, the truth is more complicated. Bias is visibly a problem for democracies. But because it is more visible, citizens can mitigate it through other forms of feedback. When, for example, a racial group sees that hiring algorithms are biased against them, they can protest and seek redress with some chance of success. Authoritarian countries are probably at least as prone to bias as democracies are, perhaps more so. Much of this bias is likely to be invisible, especially to the decision-makers at the top. That makes it far more difficult to correct, even if leaders can see that something needs correcting.

“Contrary to conventional wisdom, AI can seriously undermine autocratic regimes by reinforcing their own ideologies and fantasies at the expense of a finer understanding of the real world. Democratic countries may discover that, when it comes to AI, the key challenge of the twenty-first century is not winning the battle for technological dominance. Instead, they will have to contend with authoritarian countries that find themselves in the throes of an AI-fuelled spiral of delusion.”

Most discussions about AI, the authors write, have to do with machine learning—statistical algorithms that extract relationships between data. These algorithms make guesses: Is there a dog in this photo? Will this chess strategy win the game in ten moves? What is the next word in this half-finished sentence? A so-called objective function, a mathematical means of scoring outcomes, can reward the algorithm if it guesses correctly. This process is how commercial AI works. YouTube, for example, wants to keep its users engaged, watching more videos so that they keep seeing ads. The objective function is designed to maximise user engagement. The algorithm tries to serve up content that keeps a user’s eyes on the page. Depending on whether its guess was right or wrong, the algorithm updates its model of what the user is likely to respond to.
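The loop the authors describe (guess, score the guess against an objective function, update the model) can be sketched in a few lines. Below is a toy illustration of our own, not YouTube’s actual system: an epsilon-greedy bandit whose “objective function” scores a recommendation 1 if the simulated user watches and 0 otherwise, and which updates its model of the user after every guess. The content categories and engagement rates are invented purely for the example.

```python
import random

def recommend_and_learn(true_engagement, rounds=5000, epsilon=0.1, seed=0):
    """Toy engagement-maximising feedback loop (epsilon-greedy bandit)."""
    rng = random.Random(seed)
    estimates = {cat: 0.0 for cat in true_engagement}  # model of the user
    counts = {cat: 0 for cat in true_engagement}
    for _ in range(rounds):
        if rng.random() < epsilon:
            # Occasionally explore: try a random category.
            choice = rng.choice(list(true_engagement))
        else:
            # Otherwise exploit the current best guess.
            choice = max(estimates, key=estimates.get)
        # "Objective function": 1 if the user watched, 0 otherwise.
        reward = 1.0 if rng.random() < true_engagement[choice] else 0.0
        # Update the model depending on whether the guess paid off.
        counts[choice] += 1
        estimates[choice] += (reward - estimates[choice]) / counts[choice]
    return max(estimates, key=estimates.get)

# Hypothetical per-category probabilities that the user keeps watching.
best = recommend_and_learn({"news": 0.2, "gaming": 0.5, "outrage": 0.7})
```

After a few thousand guess-and-update cycles, the loop settles on whatever category maximises engagement, with no human ever deciding that this is desirable content.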

Machine learning’s ability to automate this feedback loop with little or no human intervention has reshaped e-commerce. It may, someday, allow fully self-driving cars, although this advance has turned out to be a much harder problem than engineers anticipated. Developing autonomous weapons is a harder problem still. When algorithms encounter truly unexpected information, they often fail to make sense of it. Information that a human can easily understand but that machine learning misclassifies—known as ‘adversarial examples’—can gum up the works badly. For example, black and white stickers placed on a stop sign can prevent a self-driving car’s vision system from recognising the sign. Such vulnerabilities suggest obvious limitations in AI’s usefulness in wartime.

Diving into the complexities of machine learning helps make sense of the debates about technological dominance. It explains why some thinkers, such as the computer scientist Kai-Fu Lee, believe that data is so important. The more data you have, the more quickly you can improve the performance of your algorithm, iterating tiny change upon tiny change until you have achieved a decisive advantage. But machine learning has its limits. For example, despite enormous investments by technology firms, algorithms are far less effective than is commonly understood at getting people to buy one nearly identical product over another. Reliably manipulating shallow preferences is hard, and it is probably far more difficult to change people’s deeply held opinions and beliefs.

General AI, a system that might draw lessons from one context and apply them in a different one, as humans can, faces similar limitations. Netflix’s statistical models of its users’ inclinations and preferences are almost certainly dissimilar to Amazon’s, even when both are trying to model the same people grappling with similar decisions. Dominance in one sector of AI, such as serving up short videos that keep teenagers hooked (a triumph of the app TikTok), does not easily translate into dominance in another, such as creating autonomous battlefield weapons systems. An algorithm’s success often relies on the very human engineers who can translate lessons across different applications rather than on the technology itself. For now, these problems remain unsolved.

Bias can also creep into code. When Amazon tried to apply machine learning to recruitment, it trained the algorithm on data from résumés that human recruiters had evaluated. As a result, the system reproduced the biases implicit in the humans’ decisions, discriminating against résumés from women. Such problems can be self-reinforcing.

As the sociologist Ruha Benjamin has pointed out, if policymakers used machine learning to decide where to send police forces, the technology could guide them to allocate more police to neighbourhoods with high arrest rates, in the process sending more police to areas with racial groups whom the police have demonstrated biases against. This could lead to more arrests that, in turn, reinforce the algorithm in a vicious circle.

The old programming adage “garbage in, garbage out” has a different meaning in a world where the inputs influence the outputs and vice versa. Without appropriate outside correction, machine-learning algorithms can acquire a taste for the garbage that they themselves produce, generating a loop of bad decision-making. All too often, policymakers treat machine learning tools as wise and dispassionate oracles rather than as fallible instruments that can intensify the problems they purport to solve.
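The self-reinforcing loop that Benjamin describes is easy to reproduce in a toy simulation (ours, not the article’s). Two districts have identical true crime rates; the only difference is a small initial bias in the recorded arrest data. Each year, police are allocated in proportion to last year’s arrests, and we assume, purely for illustration, that recorded arrests grow slightly superlinearly with police presence (exponent 1.2), since more officers observe more incidents:

```python
def run_feedback_loop(initial_arrests, years=10, total_police=100):
    """Simulate arrests -> police allocation -> arrests, year by year."""
    arrests = dict(initial_arrests)
    history = [dict(arrests)]
    for _ in range(years):
        total = sum(arrests.values())
        # Allocate officers in proportion to last year's recorded arrests...
        police = {d: total_police * a / total for d, a in arrests.items()}
        # ...and assume recorded arrests grow superlinearly with police
        # presence, even though true crime is identical in both districts.
        arrests = {d: p ** 1.2 for d, p in police.items()}
        history.append(dict(arrests))
    return history

# A 55/45 split in the initial arrest data is the only "bias" injected.
history = run_feedback_loop({"north": 55, "south": 45})
shares = [h["north"] / (h["north"] + h["south"]) for h in history]
```

Under these assumptions the north district’s share of recorded arrests rises every single year, from 55 percent to well over 70 percent, without any change in underlying crime: the garbage the loop produces becomes the garbage it consumes.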

Political systems, the authors write, are feedback systems too. In democracies, the public literally evaluates and scores leaders in elections that are supposed to be free and fair. Political parties make promises with the goal of winning power and holding on to it. A legal opposition highlights government mistakes, while a free press reports on controversies and misdeeds. Incumbents regularly face voters and learn whether they have earned or lost the public trust, in a continually repeating cycle.

But feedback in democratic societies does not work perfectly. The public may not have a deep understanding of politics, and it can punish governments for things beyond their control. Politicians and their staff may misunderstand what the public wants. The opposition has incentives to lie and exaggerate. Contesting elections costs money, and the real decisions are sometimes made behind closed doors. Media outlets may be biased or care more about entertaining their consumers than edifying them.

All the same, feedback makes learning possible. Politicians learn what the public wants. The public learns what it can and cannot expect. People can openly criticise government mistakes without being locked up. As new problems emerge, new groups can organise to publicise them and try to persuade others to solve them. All this allows policymakers and governments to engage with a complex and ever-changing world.

In autocracies, according to the authors, feedback works quite differently. “Leaders are chosen not through free and fair elections but through ruthless succession battles and often opaque systems for internal promotion. Even where opposition to the government is formally legal, it is discouraged, sometimes brutally. If media criticise the government, they risk legal action and violence. Elections, when they do occur, are systematically tilted in favour of incumbents. Citizens who oppose their leaders don’t just face difficulties in organising; they risk harsh penalties for speaking out, including imprisonment and death. For all these reasons, authoritarian governments often don’t have a good sense of how the world works or what they and their citizens want.

Such systems therefore face a tradeoff between short-term political stability and effective policymaking; a desire for the former inclines authoritarian leaders to block outsiders from expressing political opinions, while the need for the latter requires them to have some idea of what is happening in the world and in their societies. Because of tight controls on information, authoritarian rulers cannot rely on citizens, media, and opposition voices to provide corrective feedback as democratic leaders can.”

“Data seem to provide objective measures that explain the world and its problems, with none of the political risks and inconveniences of elections or free media. But there is no such thing as decision-making devoid of politics. The messiness of democracy and the risk of deranged feedback processes are apparent to anyone who pays attention to U.S. politics. Autocracies suffer similar problems, although they are less immediately perceptible. Officials making up numbers or citizens declining to turn their anger into wide-scale protests can have serious consequences, making bad decisions more likely in the short run and regime failure more likely in the long run.”

The most urgent question, the authors write, is not whether the United States or China will win or lose in the race for AI dominance. It is how AI will change the different feedback loops that democracies and autocracies rely on to govern their societies. Many observers have suggested that as machine learning becomes more ubiquitous, it will inevitably hurt democracy and help autocracy. In their view, social media algorithms that optimise engagement, for instance, may undermine democracy by damaging the quality of citizen feedback. As people click through video after video, YouTube’s algorithm offers up shocking and alarming content to keep them engaged. This content often involves conspiracy theories or extreme political views that lure citizens into a dark wonderland where everything is upside down.


By contrast, machine learning is supposed to help autocracies by facilitating greater control over their people. Historian Yuval Harari and a host of other scholars claim that AI “favours tyranny.” According to this camp, AI centralises data and power, allowing leaders to manipulate ordinary citizens by offering them information that is calculated to push their “emotional buttons.” This endlessly iterating process of feedback and response is supposed to produce an invisible and effective form of social control. In this account, social media allows authoritarian governments to take the public’s pulse as well as capture its heart.

But these arguments rest on uncertain foundations. Although leaks from inside Facebook suggest that algorithms can indeed guide people toward radical content, recent research indicates that the algorithms don’t themselves change what people are looking for. People who search for extreme YouTube videos are likely to be guided toward more of what they want, but people who aren’t already interested in dangerous content are unlikely to follow the algorithms’ recommendations.

“There is no good evidence that machine learning enables the sorts of generalised mind control that will hollow out democracy and strengthen authoritarianism. If algorithms are not very effective at getting people to buy things, they are probably much worse at getting them to change their minds about things that touch on closely held values, such as politics. The claims that Cambridge Analytica, a British political consulting firm, employed some magical technique to fix the 2016 U.S. presidential election for Donald Trump have unraveled. The firm’s supposed secret sauce provided to the Trump campaign seemed to consist of standard psychometric targeting techniques—using personality surveys to categorise people—of limited utility.”

Indeed, fully automated data-driven authoritarianism may turn out to be a trap for states such as China that concentrate authority in a tiny insulated group of decision-makers. Democratic countries have correction mechanisms—alternative forms of citizen feedback that can check governments if they go off track. Authoritarian governments, as they double down on machine learning, have no such mechanism.

“Although ubiquitous state surveillance could prove effective in the short term, the danger is that authoritarian states will be undermined by the forms of self-reinforcing bias that machine learning facilitates. As a state employs machine learning widely, the leader’s ideology will shape how machine learning is used, the objectives around which it is optimised, and how it interprets results. The data that emerge through this process will likely reflect the leader’s prejudices right back at him.

As the technologist Maciej Ceglowski has explained, machine learning is ‘money laundering for bias,’ a ‘clean, mathematical apparatus that gives the status quo the aura of logical inevitability.’ What will happen, for example, as states begin to use machine learning to spot social media complaints and remove them? Leaders will have a harder time seeing and remedying policy mistakes—even when the mistakes damage the regime. A 2013 study speculated that China has been slower to remove online complaints than one might expect, precisely because such griping provided useful information to the leadership <…> Artificial intelligence–fuelled disinformation may poison the well for autocracies, too.

Chinese President Xi Jinping is aware of these problems in at least some policy domains. He long claimed that his antipoverty campaign—an effort to eliminate rural impoverishment—was a signature victory powered by smart technologies, big data, and AI. But he has since acknowledged flaws in the campaign, including cases where officials pushed people out of their rural homes and stashed them in urban apartments to game poverty statistics. As the resettled fell back into poverty, Xi worried that ‘uniform quantitative targets’ for poverty levels might not be the right approach in the future. Data may indeed be the new oil, but it may pollute rather than enhance a government’s ability to rule.”

This problem, the authors write, has implications for China’s so-called social credit system, a set of institutions for keeping track of pro-social behaviour that Western commentators depict as a perfectly functioning “AI-powered surveillance regime that violates human rights.” As experts on information politics such as Shazeda Ahmed and Karen Hao have pointed out, the system is, in fact, much messier. The Chinese social credit system actually looks more like the U.S. credit system, which is regulated by laws such as the Fair Credit Reporting Act, than a perfect Orwellian dystopia.

“More machine learning may also lead authoritarian regimes to double down on bad decisions. If machine learning is trained to identify possible dissidents on the basis of arrest records, it will likely generate self-reinforcing biases similar to those seen in democracies—reflecting and affirming administrators’ beliefs about disfavoured social groups and inexorably perpetuating automated suspicion and backlash. In democracies, public pushback, however imperfect, is possible. In autocratic regimes, resistance is far harder; without it, these problems are invisible to those inside the system, where officials and algorithms share the same prejudices. Instead of good policy, this will lead to increasing pathologies, social dysfunction, resentment, and, eventually, unrest and instability.”

The international politics of AI, the authors believe, will not create a simple race for dominance.

“The crude view that this technology is an economic and military weapon and that data is what powers it conceals a lot of the real action. In fact, AI’s biggest political consequences are for the feedback mechanisms that both democratic and authoritarian countries rely on. Some evidence indicates that AI is disrupting feedback in democracies, although it doesn’t play nearly as big a role as many suggest. By contrast, the more authoritarian governments rely on machine learning, the more they will propel themselves into an imaginary world founded on their own tech-magnified biases. The political scientist James Scott’s classic 1998 book, Seeing Like a State, explained how twentieth-century states were blind to the consequences of their own actions in part because they could see the world through only bureaucratic categories and data. As sociologist Marion Fourcade and others have argued, machine learning may present the same problems but at an even greater scale.

One rapidly emerging problem is how autocracies such as Russia might weaponise large language models, a new form of AI that can produce text in response to a prompt, to generate disinformation at scale. As the computer scientist Timnit Gebru and her colleagues have warned, programs such as OpenAI’s GPT-3 system can produce apparently fluent text that is difficult to distinguish from ordinary human writing. BLOOM, a new open-access large language model, has just been released for anyone to use. Its license requires people to avoid abuse, but it will be very hard to police.

These developments will produce serious problems for feedback in democracies. Current online policy-comment systems are almost certainly doomed, since they require little proof to establish whether the commenter is a real human being. Contractors for big telecommunications companies have already flooded the U.S. Federal Communications Commission with bogus comments linked to stolen email addresses as part of their campaign against net neutrality laws. Still, it was easy to identify subterfuge when tens of thousands of nearly identical comments were posted. Now, or in the very near future, it will be trivially simple to prompt a large language model to write, say, 20,000 different comments in the style of swing voters condemning net neutrality.

Artificial intelligence–fuelled disinformation may poison the well for autocracies, too. As authoritarian governments seed their own public debate with disinformation, it will become easier to fracture opposition but harder to tell what the public actually believes, greatly complicating the policymaking process. It will be increasingly hard for authoritarian leaders to avoid getting high on their own supply, leading them to believe that citizens tolerate or even like deeply unpopular policies.

“<…> Data may be the new oil, but it may pollute rather than enhance a government’s ability to rule.”

Perhaps even more cynically, the authors write, policymakers in the West may be tempted to exploit the closed loops of authoritarian information systems. So far, the United States has focused on promoting Internet freedom in autocratic societies. Instead, it might try to worsen the authoritarian information problem by reinforcing the bias loops that these regimes are prone to. It could do this by corrupting administrative data or seeding authoritarian social media with misinformation. Unfortunately, there is no virtual wall to separate democratic and autocratic systems. Not only might bad data and crazy beliefs leak into democratic societies from authoritarian ones, but terrible authoritarian decisions could have unpredictable consequences for democratic countries, too. As governments think about AI, they need to realise that we live in an interdependent world, where authoritarian governments’ problems are likely to cascade into democracies.

“One dangerous path would be for the United States to get sucked into a race for AI dominance, which would extend competitive relations still further. Another would be to try to make the feedback problems of authoritarianism worse. Both risk catastrophe and possible war. Far safer, then, for all governments to recognise AI’s shared risks and work together to reduce them.

Commission

Not all politicians and scholars in the United States reach such conciliatory conclusions.


Keith Krach (chairman of the Krach Institute for Tech Diplomacy at Purdue and under secretary of state for economic growth, energy, and the environment from 2019 to 2021) and Kersti Kaljulaid (president of Estonia from 2016 to 2021), in a September 9, 2022 article in The National Interest entitled “Emerging Technologies Can Protect Democratic Freedoms”, conclude that “today, threats to the United States and our allies increasingly come from emerging technologies that can have devastating consequences if they are in the wrong hands. Quantum computing, next-generation drones, biomedical engineering, and other technologies have the potential to improve the lives of millions of people—or to empower dictators.”

The authors believe that “we shouldn’t have to wait for a crisis to start preparing ourselves for these threats.”

“New technologies are already reshaping our lives and how we communicate with people all over the world. Democratic nations need to stay at the forefront of innovation, not play catch-up with adversaries.”

“That’s why, as a group of former diplomats and technology industry experts from the Krach Institute for Tech Diplomacy at Purdue and the Atlantic Council, we’ve assembled a world-class commission to create a blueprint for how the free world can safeguard freedom through adopting trusted technology.”

According to the authors, who serve as the commission’s co-chairs, its two-year effort is distinguished by three factors.

First, the commission will focus on seventeen critical tech sectors and integrate our findings about each of them into one overarching security strategy.

Second, it will be led by the private sector and global stakeholders, with commissioners from international companies and institutions representing more than a dozen countries as part of democracies’ common effort to compete in emerging technologies.

Third, while previous commissions have focused on identifying defensive solutions, ours will integrate offensive strategies to develop common standards for trusted technologies and recommendations for investment in key areas of research and development.

The main goal of the commission is to resist China technologically. According to the authors, “Companies doing business with China have endured parasitic joint ventures, blatant thievery of intellectual property, a worldwide bullying spree, and the coerced collection of proprietary technology.

Corporate boards increasingly understand doing business with, in, or for China represents tremendous risk. That’s why many respected board members are demanding a China contingency plan from their CEOs.”

The commission already has the backing of lawmakers and private-sector leaders at a time when the United States is working to unite its transatlantic and Indo-Pacific allies and partners across a range of critical technology issues.

During recent briefings with U.S. under secretary of commerce Alan Estevez on “Global Tech Security” and President Joe Biden’s “Asia chief” Kurt Campbell on “Building Alliances with the ‘Trust Principle,’” both officials echoed strong support for the commission’s urgent mission of securing high tech against growing techno-authoritarian threats.

The commission’s work will build on lessons learned from the Clean Network initiative, which “united sixty nations around the world that all committed themselves to use only trusted 5G telecommunications vendors and rejected untrusted vendors like Huawei and ZTE that are known to follow the orders of the Chinese Communist Party.”

This is why, going forward, the commission should establish shared standards and agreements to make the most of the technologies created by the great minds of America’s private sector.

“The technologies that our commission will focus on include semiconductors; autonomous and electric vehicles; clean energy and electrical grids; quantum computing; robotics; and electronic payments and digital currencies.

These technologies have already demonstrated great promise for a more efficient and advanced world. But as recent supply chain disruptions have made clear, we need to do more to ensure we’re not overly reliant on authoritarian nations like China.”

GPUs

It should be noted that, alongside the commission’s activities, in early September 2022 the United States (according to Reuters) stepped up its efforts to stem the flow of cutting-edge technology to China by imposing on Nvidia (NVDA.O) and Advanced Micro Devices (AMD.O) a new licensing requirement that blocks shipments of their flagship AI chips to China.

The rules appear to apply to GPUs with the most powerful computing capabilities, a critical but niche market with only two significant players: Nvidia and AMD. Their only potential competitor, Intel Corp (INTC.O), is trying to break into the market but does not yet produce competitive products.

Originally developed for video games, GPUs have since been applied to a far wider range of tasks, including AI workloads such as image recognition, photo categorisation, and searching for military equipment in digital satellite images. Since all of these chip suppliers are American, the US controls access to the technology.

The only products that Nvidia says will be affected are the A100 and H100 chips. These chips cost tens of thousands of dollars each, and complete computers containing the chips cost hundreds of thousands of dollars.

Similarly, AMD said the new requirement affects only its most powerful chip, the MI250, a version of which is being used at Oak Ridge National Laboratory, one of several US supercomputing centres supporting nuclear weapons work. Less powerful chips, such as the AMD MI210 and below, are not affected.

What unites the affected chips is the ability to perform AI calculations quickly, on a huge scale, and with high accuracy. Less powerful AI chips can work quickly at lower levels of accuracy, which is enough for tagging photos of friends, where the cost of an accidental error is small, but not enough for designing fighter planes.
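The accuracy point can be made concrete with a small numerical sketch (a generic illustration of floating-point precision, not a model of any particular chip). Rounding every intermediate result to IEEE-754 single precision, as a lower-precision device effectively does, distorts small increments once the running total grows large:

```python
import struct

def to_float32(x):
    # Round-trip through IEEE-754 single precision ('f' format) to
    # mimic the arithmetic of a lower-precision device.
    return struct.unpack('f', struct.pack('f', x))[0]

# Accumulate one million increments of 0.001 toward a target of 1000.
step = 0.001
step32 = to_float32(step)
acc64 = 0.0   # double precision (an ordinary Python float)
acc32 = 0.0   # every intermediate result rounded to single precision
for _ in range(1_000_000):
    acc64 += step
    acc32 = to_float32(acc32 + step32)

# acc64 lands within a tiny fraction of 1000; acc32 drifts off by
# several whole units, because 0.001 approaches the rounding
# granularity of single precision once the total passes a few hundred.
```

The double-precision total stays essentially exact, while the single-precision total accumulates a visible error: acceptable for ranking photos, not for guidance systems.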

The only major market competitor to AMD’s and Nvidia’s chips is Intel’s yet-to-be-released Ponte Vecchio chip, whose first buyer is Argonne National Laboratory, another US entity that supports nuclear weapons.

Front line in the field of technical security

In an article by Tai Ming Cheung (director of the Institute on Global Conflict and Cooperation at the University of California) and Thomas G. Mahnken (president of the Centre for Strategic and Budgetary Assessments) entitled “The Grand Race for Techno-Security Leadership”, published in the Texas National Security Review on August 31, 2022, it is noted that “nowhere are the front lines drawn as clearly as in the field of techno-security”.

“Central to the Sino-American rivalry are two different models of industrial and technological innovation in defence: China’s top-down approach, driven by the state, and the bottom-up system, driven by the US market. Which of them will ultimately prevail will depend on how capable, strong, and adept they are at meeting the challenges of rapid and disruptive change <…>

<…> The U.S. and Chinese techno-security systems are designed, configured, and operated differently from each other. The U.S. techno-security system is anchored in a deeply held anti-statist ethos that emphasises limited government and an expansive leading role for the private sector, even though the U.S. government has at times exerted a powerful influence in shaping the techno-security ecosystem. By contrast, although pro-market forces have played a vital role in China’s economic development, its techno-security system is overwhelmingly statist with the party-state dominating ownership, control, and management. Since the end of the 20th century, the Chinese party-state has thrown its weight behind a focused program of innovation aimed at blunting the ability of the United States to defend its interests in the Western Pacific, and at closing the gap between U.S. and Chinese defence technology more broadly. Indeed, according to the deputy assistant secretary of the Air Force for acquisition, China has been acquiring new weapons five times faster than the United States. As a result, the United States now faces a series of increasingly unfavourable military balances in the Western Pacific and beyond. In order to regain momentum in the competition with China, the United States will need to unleash the power of its own unique approach to defence innovation by revitalising public-private partnerships and deepening engagement with allies.”

Since the 1990s, China has undertaken a concerted effort to transform itself from a struggling technological laggard to a leading global innovator. Defence innovation has been at the forefront of Beijing’s effort, and China has made impressive strides in pace, scale, and quality of output. At the outset of the reform drive in the mid-to-late 1990s, the Chinese defence science, technology, and innovation system was in a spiralling decline and could only produce outdated foreign-derived weapons. By the second half of the 2010s, select pockets of excellence within the defence innovation system began to turn out advanced armaments such as stealthy fighter aircraft and large-sized aircraft carriers and the strike planes that fly off their decks.

Today, the authors fear, Beijing could steal a march on the West in cutting-edge fields of innovation such as quantum computing and artificial intelligence.

“Although both the Bush and Obama administrations expressed concern about the growth of Chinese military power, it was not until the Trump administration that documents such as the National Security Strategy and National Defence Strategy spoke openly about the challenge posed by China and made great power competition the foremost priority.

The Biden administration views China as ‘our most consequential strategic competitor and the pacing challenge’ in its defence planning. Although today there is general consensus on the need to counter China’s aim of becoming a high-technology superpower, action has lagged rhetoric.

Centralised top-down coordination has been instrumental to many if not most of China’s signature strategic technological achievements, from nuclear weapons and ballistic missiles to the manned space program and high-performance computers. This top-down approach has been governed by a central planning system that relies on directly enforced administrative controls from state and party agencies and the use of penalties to ensure compliance by enterprises, research institutes, and other actors. Although there has been some relaxation and rollback of this pervasive state control in the post-1978 reform era, state planning, management, and intervention remain extensive because the techno-security ecosystem continues to be overwhelmingly under state ownership.

The Chinese authorities have sought to spur innovation by placing strategic bets on a hybrid approach to innovation and by seeking to promote domestic innovation.

First, in the second half of the 2010s, China began to lay the foundations of a robust and expansive military-civil fusion framework. Beijing seems to hope that it will be able to tap civilian sources of innovation as extensively as the United States within the next decade or so. Although the approach has yet to make a significant impact on Chinese innovation, and the structural barriers to realising this goal are high, Xi’s active leadership of the military-civil fusion initiative means the prospects for success are good.

Second, in another long-term big bet, Beijing is increasingly focused on self-reliance and broadening from foreign absorption of technology to emphasising original, indigenous innovation. That having been said, a key and intentionally designed limitation of this model is that it can only manage a select number of high priority strategic and defence-related projects. Gaining access to and leveraging foreign technology and knowledge will continue to be an essential feature for the foreseeable future. Techno-nationalist dependence is a well-proven low-risk, high-reward development strategy and provides a safeguard, whereas the forging of an original innovation capacity is a long-term high-risk endeavour.

A bottom-up approach to innovation focused on the US market

Whereas China has adopted a state-led, top-down approach to defence innovation, traditionally the United States has succeeded under a market-driven, bottom-up approach. The relationship between the state and market flourished during the Cold War, and this was a leading factor contributing to the success of the U.S. techno-security system over its counterpart in the Soviet Union. However, in the post-Cold War era, and especially in the 21st century, the traditional strengths of the U.S. techno-security system have not aged well.

<…> The public-private relationship has, however, become strained in the 21st century. All too often, the views of those in the defence industry have been greeted with suspicion, and an adversarial narrative between government and industry has grown more prominent in recent years. This threatens to turn this pillar of strength into a source of weakness. Whereas Beijing aspires to military-civil fusion, the U.S. government often holds the defence industry at arm’s length. Whereas there has been much talk in recent years about the need to embrace innovation, such talk has often not been matched by action.

<…> the defence acquisition system has become increasingly rigid and risk averse. It gives corporations few incentives to take the sort of risks that are crucial to innovation. The system also discourages firms from quickly fixing problems with known or promising solutions. The system is so expansive and complex as to defy reform. Moreover, the Defence Department is increasingly isolated from large portions of the most innovative and thriving commercial sectors of the economy. It should not be surprising that, according to former Under Secretary of Defence for Research and Engineering Mike Griffin, it takes the Defence Department 16 years to deliver an idea to operational capability, whereas it is claimed that China can sometimes do it in less than seven years — although a careful analysis of select Chinese programs shows that this is not the case.

<…> the U.S. techno-security system is struggling to have its voice heard in guiding innovation, as its once-dominant position as the biggest source of investment in research and development has eroded. The U.S. Department of Defence at the beginning of the 2020s accounts for a mere 3.6 percent of global research and development outlays, compared to 36 percent at its height in 1960.

Moreover, the Pentagon has gone from being a first adopter of technologies to being increasingly an investor in technology research. This means that many technologies originate in the civilian sphere and are subsequently — and often belatedly — adapted for defence and dual-use applications. While this is cost-efficient and allows access to a more extensive pool of innovation, the U.S. techno-security system risks becoming a follower rather than a leader unless it steps up to fill the gaps in defence-specific areas where the commercial sector is reluctant or unable to participate.

If these trends persist, the U.S. techno-security system could find its influence and place in the U.S. innovation system increasingly marginalised. This is already happening in the corporate sector. By the second half of the 2010s, the top five U.S. technology companies, such as Google, Amazon, and Apple, spent 10 times more annually on research and development than the top five U.S. defence prime contractors, including Lockheed Martin, Boeing, and Raytheon. This growing imbalance in the public-private relationship could lead firms to decide that doing business with the techno-security system is not sufficiently lucrative and encourage them to focus instead on more profitable commercial markets domestically and internationally, including in China. Reinvigorating the public-private relationship will be critical in any effort by the United States to credibly compete against China over the long term.

As the world’s most advanced techno-security power, the United States has been the dominant exporter of advanced technology, knowledge, and industrial products in both the military and civilian spheres. The possession of a comprehensive world-class science and technology base, especially in the defence technological arena, has meant that the United States has traditionally had little appetite to acquire foreign technology or know-how. This sense of industrial and technological superiority led to a fierce and enduring techno-nationalist ideology and posture in which the United States viewed itself as head and shoulders above the rest of the world.

But the global technological landscape has changed rapidly in the 21st century with the advent of a diverse array of emerging technologies, many of which have defence and dual-use applications. With its shrinking overall share of global research and development investment, the United States has found that it is increasingly difficult and costly to keep abreast of technological advances in all the key domains, which has made collaboration with foreign partners increasingly attractive and necessary. This cooperation is taking place in areas such as 5G, quantum computing, and communications — areas where China has been especially active and is vying for global leadership. But techno-nationalist primacy has been deeply entrenched within the institutional culture of the U.S. techno-security system for so long that a fundamental shift toward a more collaborative techno-globalist approach is likely to encounter entrenched resistance and will take time to effectively implement.

There have been occasional attempts to establish the foundations of a more globalist-oriented techno-security approach. The formation of the security compact known as ‘AUKUS’ (Australia, United Kingdom, and United States) in 2021 — centred on advanced defence and dual-use capabilities — is the most recent and promising opportunity for the rise of a U.S. globalist-oriented techno-security regime.

One area in which the United States has been able to pursue a more collaborative partnership with foreign allies is in controlling the spread of sensitive technologies. To respond to the technological challenges of the Soviet Union and Japan in the 20th century, the United States established a number of institutional frameworks to control the flow of technologies and know-how to these countries, especially the Coordinating Committee for Multilateral Export Controls. These regimes worked effectively in their own spheres, but the integrated civil-military challenge posed by China requires the U.S. government to develop a more robust and whole-of-government approach than the ad hoc and underdeveloped intra-agency process that currently exists.

The United States has been revamping these legacy regimes through incremental reforms such as the 2018 Foreign Investment Risk Review Modernisation Act and a revamped export control regime.”

However, the authors believe that there is still a gaping hole in the emerging areas of dual-use high technologies and strategic new technologies that require a new, fully specialised institutional mechanism that can respond and work more effectively in this area.


The authors’ main conclusion is that the U.S. techno-security system in the opening years of the 2020s remains much stronger and more innovative than its Chinese counterpart. This dominance is being steadily eroded, however, by U.S. institutional sclerosis, far-reaching global technological changes, and China’s intensive pace of techno-security development. Revitalising key components of the U.S. techno-security system, especially public-private partnerships and engagement with global partners, will allow the United States to retain its global leadership edge over the long term, although the gap with China will continue to shrink. The United States will need to undertake more transformative reforms to stay well ahead.

For China, the revamping of the techno-security state under Xi has seen the gap steadily close with the United States — but even more significant structural changes will be required to successfully transition from catching up to gaining parity or even leading. More effective coordination between the state and market mechanisms will be essential. Allowing hybridisation — greater military-civil fusion — to be fully implemented will also be a vital step. The enhancement of the centralised top-down coordination model will be especially important in the race for the development of emerging core technologies as active early state intervention can play a more effective and decisive role than “bottom-up” market support.


Vladimir Ovchinsky, Yury Zhdanov

