
Hitting the Books: When the military-industrial complex came to Silicon Valley

As with nearly every other aspect of modern society, computerization, augmentation and automation have hyper-accelerated the pace at which wars are prosecuted — and who better to help reshape the US military into a 21st century fighting force than an entire industry centered on moving fast and breaking things? In his latest book, War Virtually: The Quest to Automate Conflict, Militarize Data, and Predict the Future, Roberto J. González, professor and chair of the Anthropology Department at San José State University, examines how the military's increasing reliance on remote weaponry and robotic systems is changing the way wars are waged. In the excerpt below, González investigates Big Tech's role in the Pentagon's high-tech transformation.

UC Press

Excerpted from War Virtually: The Quest to Automate Conflict, Militarize Data, and Predict the Future by Roberto J. González, published by the University of California Press. © 2022 by Roberto J. González.


Ash Carter’s plan was simple but ambitious: to harness the best and brightest ideas from the tech industry for Pentagon use. Carter’s premise was that new commercial companies had surpassed the Defense Department’s ability to create cutting-edge technologies. The native Pennsylvanian, who had spent several years at Stanford University prior to his appointment as defense secretary, was deeply impressed with the innovative spirit of the Bay Area and its millionaire magnates. “They are inventing new technology, creating prosperity, connectivity, and freedom,” he said. “They feel they too are public servants, and they’d like to have somebody in Washington they can connect to.” Astonishingly, Carter was the first sitting defense secretary to visit Silicon Valley in more than twenty years.

The Pentagon has its own research and development agency, DARPA, but its projects tend to pursue objectives that are decades, not months, away. What the new defense secretary wanted was a nimble, streamlined office that could serve as a kind of broker, channeling tens or even hundreds of millions of dollars from the Defense Department’s massive budget toward up-and-coming firms developing technologies on the verge of completion. Ideally, this new office, the Defense Innovation Unit Experimental (DIUx), would act as a liaison, negotiating the needs of grizzled four-star generals, the Pentagon’s civilian leaders, and hoodie-clad engineers and entrepreneurs. Within a year, DIUx opened branch offices in two other places with burgeoning tech sectors: Boston, Massachusetts, and Austin, Texas.

In the short term, Carter hoped that DIUx would build relationships with local start-ups, recruit top talent, get military reservists involved in projects, and streamline the Pentagon’s notoriously cumbersome procurement processes. “The key is to contract quickly — not to make these people fill out reams of paperwork,” he said. His long-term goals were even more ambitious: to take career military officers and assign them to work on futuristic projects in Silicon Valley for months at a time, to “expose them to new cultures and ideas they can take back to the Pentagon... [and] invite techies to spend time at Defense.”

In March 2016, Carter organized the Defense Innovation Board (DIB), an elite brain trust of civilians tasked with providing advice and recommendations to the Pentagon’s leadership. Carter appointed former Google CEO (and Alphabet board member) Eric Schmidt to chair the DIB, which includes current and former executives from Facebook, Google, and Instagram, among others.

Three years after Carter launched DIUx, it was renamed the Defense Innovation Unit (DIU), indicating that it was no longer experimental. This signaled the broad support the office had earned from Pentagon leaders. The Defense Department had lavished nearly $100 million on projects from forty-five companies, almost none of which were large defense contractors. Despite difficulties in the early stages — and speculation that the Trump administration might not support an initiative focused on regions that tended to skew toward the Democratic Party — DIUx was “a proven, valuable asset to the DoD,” in the words of Trump’s deputy defense secretary, Patrick Shanahan. “The organization itself is no longer an experiment,” he noted in an August 2018 memo, adding: “DIU remains vital to fostering innovation across the Department and transforming the way DoD builds a more lethal force.” Defense Secretary James “Mad Dog” Mattis visited Amazon’s Seattle headquarters and Google’s Palo Alto office in August 2017 and had nothing but praise for the tech industry. “I’m going out to see what we can pick up in DIUx,” he told reporters. In early 2018, the Trump administration requested a steep increase in DIU’s budget for fiscal year 2019, from $30 million to $71 million. For 2020, the administration requested $164 million, more than doubling the previous year’s request.

Q BRANCH

Although Pentagon officials portrayed DIUx as a groundbreaking organization, it was actually modeled after another firm established to serve the US Intelligence Community in a similar way. In the late 1990s, Ruth David, the CIA’s deputy director for science and technology, suggested that the agency needed to move in a radically new direction to ensure that it could capitalize on innovations being developed in the private sector, with a special focus on Silicon Valley firms. In 1999, under the leadership of its director, George Tenet, the CIA established a nonprofit legal entity called Peleus to fulfill this objective, with help from former Lockheed Martin CEO Norman Augustine. Soon after, the organization was renamed In-Q-Tel.

The first CEO, Gilman Louie, was an unconventional choice to head the enterprise. Louie had spent nearly twenty years as a video game developer who, among other things, created a popular series of Falcon F-16 flight simulators. At the time he agreed to join the new firm, he was chief creative officer for the toy company Hasbro. In a 2017 presentation at Stanford University, Louie claimed to have proposed that In-Q-Tel take the form of a venture capital fund. He also described how, at its core, the organization was created to solve “the big data problem”:

The problem they [CIA leaders] were trying to solve was: How to get technology companies who historically have never engaged with the federal government to actually provide technologies, particularly in the IT space, that the government can leverage. Because they were really afraid of what they called at that time the prospects of a “digital Pearl Harbor.” Pearl Harbor happened with every different part of the government having a piece of information but they couldn’t stitch it together to say, “Look, the attack at Pearl Harbor is imminent.” The White House had a piece of information, naval intelligence had a piece of information, ambassadors had a piece of information, the State Department had a piece of information, but they couldn’t put it all together... [In] 1998, they began to realize that information was siloed across all these different intelligence agencies of which they could never stitch it together... [F]undamentally what they were trying to solve was the big data problem. How do you stitch that together to get intelligence out of that data?

Louie served as In-Q-Tel’s chief executive for nearly seven years and played a crucial role in shaping the organization.
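
Louie's "big data problem" is, at its core, a record-linkage problem: each agency holds a fragment that only becomes intelligence once the fragments are joined on some shared key. Here is a minimal Python sketch of that "stitching", using invented records loosely modeled on his Pearl Harbor example; the agencies, subjects and notes are all illustrative, not drawn from any real dataset.

```python
# Illustrative only: a toy version of the "stitching" problem Louie describes.
# Each agency holds a fragment keyed by a shared subject; a pattern emerges
# only when the fragments are joined. All records here are invented.
from collections import defaultdict

white_house = [{"subject": "Embassy Tokyo", "note": "diplomatic traffic spike"}]
naval_intel = [{"subject": "Embassy Tokyo", "note": "fleet movement reported"}]
state_dept  = [{"subject": "Embassy Tokyo", "note": "staff ordered to burn papers"}]

def stitch(*sources):
    """Group every agency's fragments under a shared subject key."""
    merged = defaultdict(list)
    for source in sources:
        for record in source:
            merged[record["subject"]].append(record["note"])
    return merged

for subject, notes in stitch(white_house, naval_intel, state_dept).items():
    if len(notes) > 1:  # a pattern no single silo could see on its own
        print(subject, "->", notes)
```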

By channeling funds from intelligence agencies to nascent firms building technologies that might be useful for surveillance, intelligence gathering, data analysis, cyberwarfare, and cybersecurity, the CIA hoped to get an edge over its global rivals by using investment funds to co-opt creative engineers, hackers, scientists, and programmers. The Washington Post reported that “In-Q-Tel was engineered with a bundle of contradictions built in. It is independent of the CIA, yet answers wholly to it. It is a nonprofit, yet its employees can profit, sometimes handsomely, from its work. It functions in public, but its products are strictly secret.” In 2005, the CIA pumped approximately $37 million into In-Q-Tel. By 2014, the organization’s funding had grown to nearly $94 million a year and it had made 325 investments with an astonishing range of technology firms, almost none of which were major defense contractors.

If In-Q-Tel sounds like something out of a James Bond movie, that’s because the organization was partly inspired by — and named after — Q Branch, a fictional research and development office of the British secret service, popularized in Ian Fleming’s spy novels and in the Hollywood blockbusters based on them, going back to the early 1960s. Ostensibly, both In-Q-Tel and DIUx were created to transfer emergent private-sector technologies into the US intelligence and military agencies, respectively. A somewhat different interpretation is that these organizations were launched “to capture technological innovations... [and] to capture new ideas.” From the perspective of the CIA these arrangements have been a “win-win,” but critics have described them as a boondoggle — lack of transparency, oversight, and streamlined procurement means that there is great potential for conflicts of interest. Other critics point to In-Q-Tel as a prime example of the militarization of the tech industry.

There’s an important difference between DIUx and In-Q-Tel. DIUx is part of the Defense Department and is therefore financially dependent on Pentagon funds. By contrast, In-Q-Tel is, in legal and financial terms, a distinct entity. When it invests in promising companies, In-Q-Tel also becomes part owner of those firms. In monetary and technological terms, it’s likely that the most profitable In-Q-Tel investment was funding for Keyhole, a San Francisco–based company that developed software capable of weaving together satellite images and aerial photos to create three-dimensional models of Earth’s surface. The program was capable of creating a virtual high-resolution map of the entire planet. In-Q-Tel provided funding in 2003, and within months, the US military was using the software to support American troops in Iraq.

Official sources never revealed how much In-Q-Tel invested in Keyhole. In 2004, Google purchased the start-up for an undisclosed amount and renamed it Google Earth. The acquisition was significant. Yasha Levine writes that the Keyhole-Google deal “marked the moment the company stopped being a purely consumer-facing internet company and began integrating with the US government... [From Keyhole, Google] also acquired an In-Q-Tel executive named Rob Painter, who came with deep connections to the world of intelligence and military contracting.” By 2006 and 2007, Google was actively seeking government contracts “evenly spread among military, intelligence, and civilian agencies,” according to the Washington Post.

Apart from Google, several other large technology firms have acquired startups funded by In-Q-Tel, including IBM, which purchased the data storage company Cleversafe; Cisco Systems, which absorbed a conversational AI interface startup called MindMeld; Samsung, which snagged nanotechnology display firm QD Vision; and Amazon, which bought multiscreen video delivery company Elemental Technologies. While these investments have funded relatively mundane technologies, In-Q-Tel’s portfolio includes firms with futuristic projects such as Cyphy, which manufactures tethered drones that can fly reconnaissance missions for extended periods, thanks to a continuous power source; Atlas Wearables, which produces smart fitness trackers that closely monitor body movements and vital signs; Fuel3d, which sells a handheld device that instantly produces detailed three-dimensional scans of structures or other objects; and Sonitus, which has developed a wireless communication system, part of which fits inside the user’s mouth. If DIUx has placed its bets with robotics and AI companies, In-Q-Tel has been particularly interested in those creating surveillance technologies — geospatial satellite firms, advanced sensors, biometrics equipment, DNA analyzers, language translation devices, and cyber-defense systems.

More recently, In-Q-Tel has shifted toward firms specializing in data mining social media and other internet platforms. These include Dataminr, which streams Twitter data to spot trends and potential threats; Geofeedia, which collects geographically indexed social media messages related to breaking news events such as protests; PATHAR, a company specializing in social network analysis; and TransVoyant, a data integration firm that collates data from satellites, radar, drones, and other sensors. In-Q-Tel has also created Lab41, a Silicon Valley technology center specializing in big data analysis and machine learning.

Hitting the Books: How American militarism and new technology may make war more likely

There's nobody better at prosecuting a war than the United States — we've got the best-equipped and biggest-budgeted fighting force on the face of the Earth. But does carrying the biggest stick still constitute a strategic advantage if the mere act of possessing it seems to make us more inclined to use it?

In his latest book, Future Peace (sequel to 2017's Future War), Dr. Robert H. Latiff, Maj Gen USAF (Ret), explores how the American military's increasing reliance on weaponized drones, AI and machine learning systems, automation and similar cutting-edge technologies, when paired with an increasingly rancorous and often outright hostile global political environment, could create the perfect conditions for getting a lot of people killed. In the excerpt below, Dr. Latiff looks at the impact that America's lionization of its armed forces in the post-Vietnam era and new access to unproven tech have on our ability to mitigate conflict and prevent armed violence.

Notre Dame University Press

Excerpted from Future Peace: Technology, Aggression, and the Rush to War by Robert H. Latiff. Published by University of Notre Dame Press. Copyright © 2022 by Robert H. Latiff. All rights reserved.


Dangers of Rampant Militarism

I served in the military in the decades spanning the end of the Vietnam War to the post-9/11 invasion of Iraq and the war on terror. In that time, I watched and participated as the military went from being widely mistrusted to being the subject of veneration by the public. Neither extreme is good or healthy. After Vietnam, military leaders worked to reestablish trust and competency and over the next decade largely succeeded. The Reagan buildup of the late 1980s further cemented the redemption. The fall of the USSR and the victory of the US in the First Gulf War demonstrated just how far we had come. America’s dominant technological prowess was on full display, and over the next decade the US military was everywhere. The attacks of 9/11 and the subsequent invasions of Afghanistan and Iraq, followed by the long war on terror, ensured that the military would continue to demand the public’s respect and attention. What I have seen is an attitude toward the military that has evolved from public derision to grudging respect, to an unhealthy, unquestioning veneration. Polls repeatedly list the military as one of the most respected institutions in the country, and deservedly so. The object of that adulation, the military, is one thing, but militarism is something else entirely and is something about which the public should be concerned. As a nation, we have become alarmingly militaristic. Every international problem is looked at first through a military lens; then maybe diplomacy will be considered as an afterthought. Non-military issues as diverse as budget deficits and demographic trends are now called national security issues. Soldiers, sailors, airmen, and marines are all now referred to as “warfighters,” even those who sit behind a desk or operate satellites thousands of miles in space. We are endlessly talking about threats and dismiss those who disagree or dissent as weak, or worse, unpatriotic.

The young men and women who serve deserve our greatest regard and the best equipment the US has to offer. Part of the respect we could show them, however, is to attempt to understand more about them and to question the mindset that is so eager to employ them in conflicts. In the words of a soldier frequently deployed to war zones in Iraq and Afghanistan, “[An] important question is how nearly two decades of sustained combat operations have changed how the Army sees itself... I feel at times that the Army culturally defines itself less by the service it provides and more by the wars it fights. This observation may seem silly at first glance. After all, the Army exists to fight wars. Yet a soldier’s sense of identity seems increasingly tied to war, not the service war is supposed to provide to our nation.” A 1955 American Friends Service Committee pamphlet titled Speak Truth to Power described eloquently the effects of American fascination with militarism:

The open-ended nature of the commitment to militarization prevents the pursuit of alternative diplomatic, economic, and social policies that are needed to prevent war. The constant preparation for war and large-scale investment in military readiness impose huge burdens on society, diverting economic, political and psychological resources to destructive purposes. Militarization has a corrosive effect on social values… distorting political culture and creating demands for loyalty and conformity… Under these conditions, mass opinion is easily manipulated to fan the flames of nationalism and military jingoism.

Barbara Tuchman described the national situation with regard to the Vietnam War in a way eerily similar to the present. First was an overreaction and overuse of the term national security and the conjuring up of specters and visions of ruin if we failed to meet the imagined threat. Second was the “illusion” of omnipotence and the failure to understand that conflicts were not always soluble by the application of American force. Third was an attitude of “Don’t confuse me with the facts”: a refusal to credit evidence in decision-making. Finally — and perhaps most importantly in today’s situation — was “a total absence of reflective thought” about what we were doing. Political leaders embraced military action on the basis of a perceived, but largely uninformed, view of our technological and military superiority. The public, unwilling to make the effort to challenge such thinking, just went along. “There is something in modern political and bureaucratic life,” Tuchman concluded, “that subdues the functioning of the intellect.”

High Tech Could Make Mistakes More Likely

Almost the entire world is connected and uses computer networks, but we’re never really sure whether they are secure or whether the information they carry is truthful. Other countries are launching satellites, outer space is getting very crowded, and there is increased talk of competition and conflict in space. Countries engage in attacks on adversary computers and networks, and militaries are rediscovering the utility of electronic warfare, employing radio-frequency (RF) signals to damage, disrupt, or spoof other systems. While cyber war and electronic warfare put a premium on speed, they, like conflict in space, are characterized by significant ambiguity. Cyber and space incidents like those described earlier, marked as they are by such great uncertainty, give the hotheads ample reason to call for a response, and the cooler heads reasons to question the wisdom of such a move.

What could drag us into conflict? Beyond the geographical hot spots, a mistake or miscalculation in the ongoing probes of each other’s computer networks could trigger an unwanted response. US weapon systems are extremely vulnerable to such probes. A 2018 study by the Government Accountability Office found mission-critical vulnerabilities in systems, and testers were able to take control of systems largely undetected. Worse yet, government managers chose not to accept the seriousness of the situation. A cyber probe of our infrastructure could be mistaken for an attack and result in retaliation, setting off response and counter response, escalating in severity, and perhaps lethality. Much of the DOD’s high-priority traffic uses space systems that are vulnerable to intrusion and interference from an increasing number of countries. Electronic warfare against military radios and radars is a growing concern as these capabilities improve.

China and Russia both have substantial space programs, and they intend to challenge the US in space, where we are vulnerable. With both low-earth and geosynchronous orbits becoming increasingly crowded, and with adversary countries engaging in close approaches to our satellites, the situation is ripe for misperception. What is mere intelligence gathering could be misconstrued as an attack and could generate a response, either in space or on the ground. There could be attacks, both direct and surreptitious, on our space systems. Or there could be misunderstandings, with too-close approaches of other satellites viewed as threatening. Threats could be space-based or, more likely, ground-based interference, jamming, or dazzling by lasers. Commercial satellite imagery recently revealed the presence of an alleged ground-based laser site in China, presumed by intelligence analysts to be for attacks against US satellites. Russia has engaged in close, on-orbit station-keeping with high-value US systems. New technology weapons give their owners a new sense of invincibility, and an action that might have in the past been considered too dangerous or provocative might now be deemed worth the risk.

Enormous vulnerability comes along with the high US dependence on networks. As the scenarios at the beginning of this chapter suggest, in a highly charged atmosphere, the uncertainty and ambiguity surrounding incidents involving some of the new war-fighting technologies can easily lead to misperceptions and, ultimately, violence. The battlefield is chaotic, uncertain, and unpredictable anyway. Such technological additions — and the vulnerabilities they entail — only make it more so. A former UK spy chief has said, “Because technology has allowed humans to connect, interact, and share information almost instantaneously anywhere in the world, this has opened channels where misinformation, blurred lines, and ambiguity reign supreme.”

It is easy to see how such an ambiguous environment could make a soldier or military unit anxious to the point of aggression. To carry the “giant armed nervous system” metaphor a bit further, consider a human being who is excessively “nervous.” Psychologists and neuroscientists tell us that excessive aggression and violence likely develop as a consequence of generally disturbed emotional regulation, such as abnormally high levels of anxiety. Under pressure, an individual is unlikely to exhibit what we could consider rational behavior. Just as a human can become nervous, super sensitive, overly reactive, jumpy, perhaps “trigger-happy,” so too can the military. A military situation in which threats and uncertainty abound will probably make the forces anxious or “nervous.” Dealing with ambiguity is stressful. Some humans are able to deal successfully with such ambiguity. The ability of machines to do so is an open question.

Technologies are not perfect, especially those that depend on thousands or millions of lines of software code. A computer or human error by one country could trigger a reaction by another. A computer exploit intended to gather intelligence or steal data might unexpectedly disrupt a critical part of an electric grid, a flight control system, or a financial system and end up provoking a nonproportional and perhaps catastrophic response. The hyper-connectedness of people and systems, and the almost-total dependence on information and data, are making the world—and military operations—vastly more complicated. Some military scholars are concerned about emerging technologies and the possibility of unintended, and uncontrollable, conflict brought on by decisions made by autonomous systems and the unexpected interactions of complex networks of systems that we do not fully understand. Do the intimate connections and rapid communication of information make a “knee-jerk” reaction more, or less, likely? Does the design for speed and automation allow for rational assessment, or will it ensure that a threat impulse is matched by an immediate, unfiltered response? Command and control can, and sometimes does, break down when the speed of operations is so great that a commander feels compelled to act immediately, even if he or she does not really understand what is happening. If we do not completely understand the systems—how they are built, how they operate, how they fail—they and we could make bad and dangerous decisions.

Technological systems, if they are not well understood by their operators, can cascade out of control. The horrific events at Chernobyl are sufficient evidence of that. Flawed reactor design and inadequately trained personnel, with little understanding of the concept of operation, led to a fatal series of missteps. Regarding war, Richard Danzig points to the start of World War I. The antagonists in that war had a host of new technologies never before used together on such a scale: railroads, telegraphs, the bureaucracy of mass mobilization, quick-firing artillery, and machine guns. The potential to deploy huge armies in a hurry put pressure on decision makers to strike first before the adversary was ready, employing technologies they really didn’t understand. Modern technology can create the same pressure for a first strike that the technology of 1914 did. Americans are especially impatient. Today, computer networks, satellites in orbit, and other modern infrastructures are relatively fragile, giving a strong advantage to whichever side strikes first. Oxford professor Lucas Kello notes that “in our era of rapid technological change, threats and opportunities arising from a new class of weapons produce pressure to act before the laborious process of strategic adoption concludes.” In other words, we rush them to the field before we have done the fundamental work of figuring out their proper use.

Decorated Vietnam veteran Hal Moore described the intense combat on the front lines with his soldiers in the Ia Drang campaign in 1965. He told, in sometimes gruesome detail, of the push and shove of the battle and how he would, from time to time, step back slightly to gather his thoughts and reflect on what was happening and, just as importantly, what was not happening. Political leaders, overwhelmed by pressures of too much information and too little time, are deprived of the ability to think or reflect on the context of a situation. They are hostage to time and do not have the luxury of what philosopher Simone Weil calls “between the impulse and the act, the tiny interval that is reflection.”

Today’s battles, which will probably happen at lightning speed, may not allow such a luxury as reflection. Hypersonic missiles, for instance, give their targets precious little time for decision-making and might force ill-informed and ill-advised counter decisions. Autonomous systems, operating individually or in swarms, connected via the internet in a network of systems, create an efficient weapon system. A mistake by one, however, could speed through the system with possibly catastrophic consequences. The digital world’s emphasis on speed further inhibits reflection.

With systems so far-flung, so automated, and so predisposed to action, it will be essential to find ways to program our weapon systems to prevent unrestrained independent, autonomous aggression. However, an equally, if not more, important goal will be to identify ways to inhibit not only the technology but also the decision makers’ proclivity to resort to violence.

Hitting the Books: The Soviets once tasked an AI with our mutually assured destruction

Barely a month into its already floundering invasion of Ukraine, Russia is rattling its nuclear saber and threatening to drastically escalate the regional conflict into all-out world war. But the Russians are no strangers to nuclear brinksmanship. In the excerpt below from Ben Buchanan and Andrew Imbrie's latest book, we can see how close humanity came to an atomic holocaust in 1983, and why an increasing reliance on automation — on both sides of the Iron Curtain — only served to heighten the likelihood of an accidental launch. The New Fire looks at the rapidly expanding roles of automated machine learning systems in national defense and how increasingly ubiquitous AI technologies (as examined through the thematic lenses of "data, algorithms, and computing power") are transforming how nations wage war both domestically and abroad.

MIT Press

Excerpted from The New Fire: War, Peace, and Democracy in the Age of AI by Andrew Imbrie and Ben Buchanan. Published by MIT Press. Copyright © 2021 by Andrew Imbrie and Ben Buchanan. All rights reserved.


THE DEAD HAND

As the tensions between the United States and the Soviet Union reached their apex in the fall of 1983, the nuclear war began. At least, that was what the alarms said at the bunker in Moscow where Lieutenant Colonel Stanislav Petrov was on duty. 

Inside the bunker, sirens blared and a screen flashed the word “launch.” A missile was inbound. Petrov, unsure if it was an error, did not respond immediately. Then the system reported two more missiles, and then two more after that. The screen now said “missile strike.” The computer reported with its highest level of confidence that a nuclear attack was underway.

The technology had done its part, and everything was now in Petrov’s hands. To report such an attack meant the beginning of nuclear war, as the Soviet Union would surely launch its own missiles in retaliation. To not report such an attack was to impede the Soviet response, surrendering the precious few minutes the country’s leadership had to react before atomic mushroom clouds burst out across the country; “every second of procrastination took away valuable time,” Petrov later said. 

“For 15 seconds, we were in a state of shock,” he recounted. He felt like he was sitting on a hot frying pan. After quickly gathering as much information as he could from other stations, he estimated there was a 50-percent chance that an attack was under way. Soviet military protocol dictated that he base his decision off the computer readouts in front of him, the ones that said an attack was undeniable. After careful deliberation, Petrov called the duty officer to break the news: the early warning system was malfunctioning. There was no attack, he said. It was a roll of the atomic dice.

Twenty-three minutes after the alarms—the time it would have taken a missile to hit Moscow—he knew that he was right and the computers were wrong. “It was such a relief,” he said later. After-action reports revealed that the sun’s glare off a passing cloud had confused the satellite warning system. Thanks to Petrov’s decisions to disregard the machine and disobey protocol, humanity lived another day.

Petrov’s actions took extraordinary judgment and courage, and it was only by sheer luck that he was the one making the decisions that night. Most of his colleagues, Petrov believed, would have begun a war. He was the only one among the officers at that duty station who had a civilian, rather than military, education and who was prepared to show more independence. “My colleagues were all professional soldiers; they were taught to give and obey orders,” he said. The human in the loop — this particular human — had made all the difference.

Petrov’s story reveals three themes: the perceived need for speed in nuclear command and control to buy time for decision makers; the allure of automation as a means of achieving that speed; and the dangerous propensity of those automated systems to fail. These three themes have been at the core of managing the fear of a nuclear attack for decades and present new risks today as nuclear and non-nuclear command, control, and communications systems become entangled with one another. 

Perhaps nothing shows the perceived need for speed and the allure of automation as much as the fact that, within two years of Petrov’s actions, the Soviets deployed a new system to increase the role of machines in nuclear brinkmanship. It was properly known as Perimeter, but most people just called it the Dead Hand, a sign of the system’s diminished role for humans. As one former Soviet colonel and veteran of the Strategic Rocket Forces put it, “The Perimeter system is very, very nice. We remove unique responsibility from high politicians and the military.” The Soviets wanted the system to partly assuage their fears of nuclear attack by ensuring that, even if a surprise strike succeeded in decapitating the country’s leadership, the Dead Hand would make sure it did not go unpunished.

The idea was simple, if harrowing: in a crisis, the Dead Hand would monitor the environment for signs that a nuclear attack had taken place, such as seismic rumbles and radiation bursts. Programmed with a series of if-then commands, the system would run through the list of indicators, looking for evidence of the apocalypse. If signs pointed to yes, the system would test the communications channels with the Soviet General Staff. If those links were active, the system would remain dormant. If the system received no word from the General Staff, it would circumvent ordinary procedures for ordering an attack. The decision to launch would then rest in the hands of a lowly bunker officer, someone many ranks below a senior commander like Petrov, who would nonetheless find himself responsible for deciding if it was doomsday.
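
The if-then chain described above maps almost directly onto a few lines of branching logic. Below is a deliberately toy Python sketch of that decision cascade; the three-step ordering comes from the paragraph itself, while the flag names and return strings are invented for illustration, since Perimeter's real indicators and thresholds were never made public.

```python
# A toy sketch of the if-then cascade described above -- not the actual
# Perimeter logic, whose real indicators and thresholds remain secret.

def dead_hand_check(seismic_rumble: bool, radiation_burst: bool,
                    general_staff_link_alive: bool) -> str:
    # Step 1: look for physical evidence that an attack has already happened.
    if not (seismic_rumble and radiation_burst):
        return "remain dormant: no attack indicators"
    # Step 2: indicators present -- test the channels to the General Staff.
    if general_staff_link_alive:
        return "remain dormant: leadership still in control"
    # Step 3: evidence of apocalypse and a silent chain of command --
    # circumvent ordinary procedures and hand the launch decision down
    # to the duty officer in the bunker.
    return "delegate launch authority to bunker officer"

# Example: sensors tripped, Moscow silent.
print(dead_hand_check(seismic_rumble=True, radiation_burst=True,
                      general_staff_link_alive=False))
```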

The United States was also drawn to automated systems. Since the 1950s, its government had maintained a network of computers to fuse incoming data streams from radar sites. This vast network, called the Semi-Automatic Ground Environment, or SAGE, was not as automated as the Dead Hand in launching retaliatory strikes, but its creation was rooted in a similar fear. Defense planners designed SAGE to gather radar information about a potential Soviet air attack and relay that information to the North American Aerospace Defense Command, which would intercept the invading planes. The cost of SAGE was more than double that of the Manhattan Project, or almost $100 billion in 2022 dollars. Each of the twenty SAGE facilities boasted two 250-ton computers, which each measured 7,500 square feet and were among the most advanced machines of the era.

If nuclear war is like a game of chicken — two nations daring each other to turn away, like two drivers barreling toward a head-on collision — automation offers the prospect of a dangerous but effective strategy. As the nuclear theorist Herman Kahn described:

The “skillful” player may get into the car quite drunk, throwing whisky bottles out the window to make it clear to everybody just how drunk he is. He wears very dark glasses so that it is obvious that he cannot see much, if anything. As soon as the car reaches high speed, he takes the steering wheel and throws it out the window. If his opponent is watching, he has won. If his opponent is not watching, he has a problem; likewise, if both players try this strategy. 

To automate nuclear reprisal is to play chicken without brakes or a steering wheel. It tells the world that no nuclear attack will go unpunished, but it greatly increases the risk of catastrophic accidents.

Automation helped enable the dangerous but seemingly predictable world of mutually assured destruction. Neither the United States nor the Soviet Union was able to launch a disarming first strike against the other; it would have been impossible for one side to fire its nuclear weapons without alerting the other side and providing at least some time to react. Even if a surprise strike were possible, it would have been impractical to amass a large enough arsenal of nuclear weapons to fully disarm the adversary by firing multiple warheads at each enemy silo, submarine, and bomber capable of launching a counterattack. Hardest of all was knowing where to fire. Submarines in the ocean, mobile ground-launched systems on land, and round-the-clock combat air patrols in the skies made the prospect of successfully executing such a first strike deeply unrealistic. Automated command and control helped ensure these units would receive orders to strike back. Retaliation was inevitable, and that made tenuous stability possible. 

Modern technology threatens to upend mutually assured destruction. When an advanced missile called a hypersonic glide vehicle nears space, for example, it separates from its booster rockets and accelerates down toward its target at five times the speed of sound. Unlike a traditional ballistic missile, the vehicle can radically alter its flight profile over long ranges, evading missile defenses. In addition, its low-altitude approach renders ground-based sensors ineffective, further compressing the amount of time for decision-making. Some military planners want to use machine learning to further improve the navigation and survivability of these missiles, rendering any future defense against them even more precarious.

Other kinds of AI might upend nuclear stability by making more plausible a first strike that thwarts retaliation. Military planners fear that machine learning and related data collection technologies could find their hidden nuclear forces more easily. For example, better machine learning–driven analysis of overhead imagery could spot mobile missile units; the United States reportedly has developed a highly classified program to use AI to track North Korean launchers. Similarly, autonomous drones under the sea might detect enemy nuclear submarines, enabling them to be neutralized before they can retaliate for an attack. More advanced cyber operations might tamper with nuclear command and control systems or fool early warning mechanisms, causing confusion in the enemy’s networks and further inhibiting a response. Such fears of what AI can do make nuclear strategy harder and riskier. 

For some, just like the Cold War strategists who deployed the expert systems in SAGE and the Dead Hand, the answer to these new fears is more automation. The commander of Russia’s Strategic Rocket Forces has said that the original Dead Hand has been improved upon and is still functioning, though he didn’t offer technical details. In the United States, some proposals call for the development of a new Dead Hand–esque system to ensure that any first strike is met with nuclear reprisal, with the goal of deterring such a strike. It is a prospect that has strategic appeal to some warriors but raises grave concern for Cassandras, who warn of the present frailties of machine learning decision-making, and for evangelists, who do not want AI mixed up in nuclear brinkmanship.

While the evangelists’ concerns are more abstract, the Cassandras have concrete reasons for worry. Their doubts are grounded in stories like Petrov’s, in which systems were imbued with far too much trust and only a human who chose to disobey orders saved the day. The technical failures described in chapter 4 also feed their doubts. The operational risks of deploying fallible machine learning into complex environments like nuclear strategy are vast, and the successes of machine learning in other contexts do not always apply. Just because neural networks excel at playing Go or generating seemingly authentic videos or even determining how proteins fold does not mean that they are any more suited than Petrov’s Cold War–era computer for reliably detecting nuclear strikes. In the realm of nuclear strategy, misplaced trust of machines might be deadly for civilization; it is an obvious example of how the new fire’s force could quickly burn out of control.

Of particular concern is the challenge of balancing between false negatives and false positives—between failing to alert when an attack is under way and falsely sounding the alarm when it is not. The two kinds of failure are in tension with each other. Some analysts contend that American military planners, operating from a place of relative security, worry more about the latter. In contrast, they argue that Chinese planners are more concerned about the limits of their early warning systems, given that China possesses a nuclear arsenal that lacks the speed, quantity, and precision of American weapons. As a result, Chinese government leaders worry chiefly about being too slow to detect an attack in progress. If these leaders decided to deploy AI to avoid false negatives, they might increase the risk of false positives, with devastating nuclear consequences.
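
The tension between the two failure modes is easy to make concrete: with a single alarm threshold, lowering it to catch more real attacks necessarily raises the false-alarm rate, and vice versa. A small illustrative Python example follows; the sensor scores and threshold values are made up purely to show the tradeoff, not drawn from any real warning system.

```python
# Toy illustration of the false-negative / false-positive tension described
# above. All scores are invented: higher means "looks more like an attack."
attack_scores = [0.91, 0.84, 0.78]          # readings when an attack is real
benign_scores = [0.80, 0.62, 0.35, 0.15]    # sun glare, flocks of birds, noise

def error_rates(threshold: float):
    """Fraction of real attacks missed, and of benign events flagged."""
    false_negatives = sum(s < threshold for s in attack_scores) / len(attack_scores)
    false_positives = sum(s >= threshold for s in benign_scores) / len(benign_scores)
    return false_negatives, false_positives

for t in (0.9, 0.7, 0.5):
    fn, fp = error_rates(t)
    print(f"threshold={t}: missed attacks={fn:.0%}, false alarms={fp:.0%}")
```

Run it and the tradeoff is explicit: the strict threshold misses two of three real attacks with zero false alarms, while the lenient one catches every attack but flags half the benign events, which is exactly the dilemma the authors attribute to Chinese planners.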

The strategic risks brought on by AI’s new role in nuclear strategy are even more worrying. The multifaceted nature of AI blurs lines between conventional deterrence and nuclear deterrence and warps the established consensus for maintaining stability. For example, the machine learning–enabled battle networks that warriors hope might manage conventional warfare might also manage nuclear command and control. In such a situation, a nation may attack another nation’s information systems with the hope of degrading its conventional capacity and inadvertently weaken its nuclear deterrent, causing unintended instability and fear and creating incentives for the victim to retaliate with nuclear weapons. This entanglement of conventional and nuclear command-and-control systems, as well as the sensor networks that feed them, increases the risks of escalation. AI-enabled systems may likewise falsely interpret an attack on command-and-control infrastructure as a prelude to a nuclear strike. Indeed, there is already evidence that autonomous systems perceive escalation dynamics differently from human operators.

Another concern, almost philosophical in its nature, is that nuclear war could become even more abstract than it already is, and hence more palatable. The concern is best illustrated by an idea from Roger Fisher, a World War II pilot turned arms control advocate and negotiations expert. During the Cold War, Fisher proposed that nuclear codes be stored in a capsule surgically embedded near the heart of a military officer who would always be near the president. The officer would also carry a large butcher knife. To launch a nuclear war, the president would have to use the knife to personally kill the officer and retrieve the capsule—a comparatively small but symbolic act of violence that would make the tens of millions of deaths to come more visceral and real. 

Fisher’s Pentagon friends objected to his proposal, with one saying, “My God, that’s terrible. Having to kill someone would distort the president’s judgment. He might never push the button.” This revulsion, of course, was what Fisher wanted: that, in the moment of greatest urgency and fear, humanity would have one more chance to experience—at an emotional, even irrational, level—what was about to happen, and one more chance to turn back from the brink.

Just as Petrov’s independence prompted him to choose a different course, Fisher’s proposed symbolic killing of an innocent was meant to force one final reconsideration. Automating nuclear command and control would do the opposite, reducing everything to error-prone, stone-cold machine calculation. If the capsule with nuclear codes were embedded near the officer’s heart, if the neural network decided the moment was right, and if it could do so, it would—without hesitation and without understanding—plunge in the knife.

Senators' letter claims a secret CIA surveillance program is bulk-collecting data

The CIA has been conducting a secret mass surveillance program that affects Americans' privacy, according to a newly declassified letter (PDF) by US Senators Ron Wyden (D-Ore.) and Martin Heinrich (D-N.M.). In the letter, dated April 2021, the members of the Senate Intelligence Committee urged the agency to tell the public what kind of records it collected, the number of Americans' records it maintained and the nature of the CIA's relationship with its sources.

The Senators also asked the Director of National Intelligence to declassify the studies conducted by a watchdog called the Privacy and Civil Liberties Oversight Board (PCLOB), which prompted the letter in the first place. PCLOB did in-depth examinations of two CIA counterterrorism-related programs back in 2015 as part of a larger oversight review of Executive Order 12333, a Reagan-era EO that extends the powers of US intelligence agencies. 

According to The Wall Street Journal, surveillance activities conducted under EO 12333 — like the CIA's bulk program — aren't subject to the same oversight as those under the Foreign Intelligence Surveillance Act. The publication also notes that the CIA isn't legally allowed to conduct domestic spying, but some Americans' information gets scooped up in certain instances. One example is if they're communicating with an overseas target via phone or the internet. Intelligence agencies are also required to protect Americans' information, such as by redacting their names unless they're pertinent to the investigation. According to the Senators, PCLOB noted problems with how the CIA handled and searched Americans' information under the program.

The Senators said the existence of the program was hidden not just from the public, but also from Congress. An intelligence official told The New York Times, though, that the Intelligence Committee already knew about the data collection. What the committee may not have known the details of are the program's tools for storing and querying the collected data, which are discussed in PCLOB's reports.

While both the Senators' letter and one of PCLOB's studies have now been released, they've both been heavily redacted. It's impossible to tell, based on the documents that came out, what kind of information was collected and what the nature of the program was. Or is — it's also unclear whether the program is still ongoing or if the CIA has already ended it. The CIA said in a statement:

"CIA has kept, and continues to keep, the Senate Select Committee for Intelligence (SSCI) and House Permanent Select Committee on Intelligence (HPSCI) fully and currently apprised of its intelligence programs, to include the activities reviewed by PCLOB. Moreover, all CIA officers have a solemn obligation to protect the privacy and civil liberties of Americans. CIA will continue to seek opportunities to provide better transparency into the rules and procedures governing our collection authorities to both Congress and the American public."

FedEx wants to equip cargo aircraft with anti-missile lasers

FedEx jets might soon pack defensive weaponry. NBC News and Reuters report FedEx has asked the Federal Aviation Administration for permission to equip an upcoming fleet of Airbus A321-200 aircraft with an anti-missile laser system. The proposed hardware would disrupt the tracking on heat-seeking missiles by steering infrared laser energy toward the oncoming projectiles.

The courier service pointed to "several" foreign incidents where attackers used portable air defense systems against civilian aircraft. While there weren't specific examples, NBC pointed to Iran shooting down a Ukrainian airliner in January 2020 (reportedly due to mistaking the jet for a cruise missile) and a Malaysian flight brought down by Russia-backed separatists in Ukraine in July 2014.

FedEx first applied for the laser system in October 2019. The FAA is open to approval, but has proposed "special conditions" before lasers could enter service. The system would need failsafes to prevent activation on the ground, and couldn't cause harm to any aircraft or people.

The concept of including countermeasures isn't strictly new. Some American commercial aircraft have carried anti-missile systems since as early as 2008, and FedEx helped trial a Northrop Grumman countermeasure system around the same time. Israel's El Al has used anti-missile systems since 2004. FedEx's plans would still be significant, though, as such systems are rare for courier companies. It wouldn't be surprising if more commercial aircraft followed suit, even if the risk of attack remains relatively low.

GM plans to build a military vehicle based on the Hummer EV

The Hummer H1 was based on a military truck, and now it appears GM is ready to return the favor. GM Defense president Steve duMont told CNBC the company planned to build a military vehicle prototype based on the upcoming Hummer EV. The eLRV, or electric Light Reconnaissance Vehicle, would modify the Hummer's frame, motors and Ultium batteries to suit US military requirements.

The prototype should be ready sometime in 2022. There's no guarantee American armed forces will use the eLRV, however. The Army is still exploring the viability of EVs like this, and GM will have to meet formal requirements (along with a rival manufacturer) if and when they exist. A choice is due sometime in the mid-2020s.

Any military EV faces logistical challenges, at least for machines on the front lines. Soldiers couldn't just find a charging station on the battlefield, for starters — they'd need transportable charging systems that aren't dependent on a working electrical grid. DuMont said GM could provide combustion-powered charging systems. We'd add that temperatures significantly affect EV range, and swappable batteries (important for quick turnarounds and repairs) are still in their relative infancy.

There could be advantages to military EV adoption. EVs might reduce overall emissions, even if the need for combustion-based chargers partly offsets that advantage. They generally require less maintenance thanks to fewer moving parts. And their quiet operation could be extremely useful for recon and stealth missions where conventional rides would be too noisy. The challenge is to make the most of these advantages while minimizing drawbacks that could hurt operational speeds.

Iraqi prime minister says he was the target of a drone assassination attempt

Drones are apparently turning into assassination tools. According to CBS News and Reuters, Iraqi Prime Minister Mustafa al-Kadhimi says he survived a drone-based assassination attempt today (November 7th) at his home in Baghdad's highly secure Green Zone. The country's Interior Ministry said the attack involved three drones, including at least one bomb-laden vehicle. Six bodyguards were injured during the incident, and an official speaking to Reuters claimed security forces obtained the remnants of a small drone at the scene.

While the Iraqi government publicly said it was "premature" to identify culprits, CBS sources suspected the perpetrators belonged to pro-Iranian militias that have used similar tactics against Erbil International Airport and the US Embassy. The militias directly blamed al-Kadhimi for casualties in a fight between Iraqi security forces and pro-militia protesters who objected to their side's losses in an October 10th parliamentary vote.

Iraq, the US, Saudi Arabia and Iran have publicly condemned the attack. Militia leaders, however, suggested the drone attack might have been faked to distract from protesters' reported deaths.

Drone-based terrorism isn't a completely novel concept. ISIS, for instance, modified off-the-shelf drones to drop explosives. Attacks against political leaders are still very rare, though. If accurate, the reported Iraqi plot suggests drone terrorism is entering a new phase — extremists are using robotic fliers to hit major targets too dangerous to strike using conventional methods.

The US Army will test a 300 kW laser weapon system in 2022

This week, the federal government awarded a team that includes Boeing a contract to build a prototype 300-kilowatt laser weapon for the US Army. The military will “demonstrate” the design sometime next year. The prototype will “produce a lethal output greater than anything fielded to date,” said General Atomics Electromagnetic Systems, the other company working on the project. “This technology represents a leap-ahead capability for air and missile defense that is necessary to support the Army’s modernization efforts and defeat next-generation threats in a multi-domain battlespace.”

Even if it’s only a demonstration, the system represents a significant step up from the lasers the military has had access to in the past. Back in 2014, the US Navy deployed the experimental Laser Weapon System (LaWS) on the USS Ponce. That system could reportedly output a 30-kilowatt beam, making it mostly useful for shooting down drones and other small craft. Per the New Scientist, a 300-kilowatt laser could potentially take down missiles, in addition to drones, helicopters and even airplanes. The announcement comes as the global weapons race intensifies following China’s successful trial of a hypersonic missile.

Researcher says a US terrorist watchlist was exposed online for three weeks

The FBI’s Terrorist Screening Center (TSC) may have exposed the records of nearly 2 million individuals and left them accessible online for three weeks. Security researcher Bob Diachenko says he discovered a terrorist watchlist on July 19th that included information like the name, date of birth and passport number of those listed in the database. The cluster also included “no-fly” indicators.

According to Diachenko, the watchlist wasn’t password protected. Moreover, it was quickly indexed by search engines like Censys and ZoomEye before the Department of Homeland Security took the server offline on August 9th. It’s unclear who may have accessed the data.

“I immediately reported it to Department of Homeland Security officials, who acknowledged the incident and thanked me for my work,” Diachenko said in a LinkedIn post spotted by Bleeping Computer. “The DHS did not provide any further official comment, though.” We’ve reached out to the Department of Homeland Security.

Among the watchlists the TSC maintains is America’s no-fly list. Federal agencies like the Transportation Security Administration (TSA) use the database to identify known or suspected terrorists attempting to enter the country. Suffice it to say, the information included in the exposed watchlist was highly sensitive.

A recent bipartisan Senate report warned of glaring cybersecurity holes at several federal agencies, including the Department of Homeland Security. It said many of the bodies it audited had failed to implement even basic cybersecurity practices like multi-factor authentication, and warned that national security information was open to theft as a result.

DARPA's PROTEUS program gamifies the art of war

The nature of war continues to evolve through the 21st century with conflict zones shifting from jungles and deserts to coastal cities. Not to mention the rapidly increasing commercial availability of cutting-edge technologies including UAVs and wireless communications. To help the Marine Corps best prepare for these increased complexities and challenges, the Department of Defense tasked DARPA with developing a digital training and operations planning tool. The result is the Prototype Resilient Operations Testbed for Expeditionary Urban Scenarios (PROTEUS) system, a real-time strategy simulator for urban-littoral warfare.

When the PROTEUS program first began in 2017, “there was a big push across DARPA under what we call a sustainment focus area, and that included urban warfare,” Dr. Tim Grayson, director of DARPA’s Strategic Technology Office, told Engadget. The goal was to figure out how best to support and “sustain” US fighting forces in various combat situations until they can finish their mission.

DARPA

The PROTEUS program manager (who has since departed DARPA), Dr. John S Paschkewitz, “came to the realization that the urban environment is really complex, both from a maneuver perspective,” Grayson said, “but also going into the future where there's all this commercial technology that will involve communications and spectrum stuff, maybe even robotics and things of that nature.”

Even without the threat of armed UAVs and autonomous killbots, modern urban conflict zones pose a number of challenges including limited lines of sight and dense, pervasive civilian populations. “There's such a wide range of missions that happen in urban environments,” Grayson said. “A lot of it is almost like peacekeeping, stabilization operations. How do we… help the local populace and protect them.” He also notes that the military is often called in to assist with both national emergencies and natural disasters, which pose the same issues albeit without nearly as much shooting.

“So, if someone like the Marines or some other kind of sustainment military unit had to go conduct operations in a complex urban environment,” he continued, “it'd be a limited footprint. So, [Paschkewitz] started looking at what we refer to as the ‘what do I put in the rucksack problem.’”

“The urban fight is about delivering precise effects and adapting faster than the adversary in an uncertain, increasingly complex environment,” Paschkewitz said in a DARPA release from June. “For US forces to maintain a distinct advantage in urban coastal combat scenarios, we need agile, flexible task organizations able to create surprise and exploit advantages by combining effects across operational domains.”

PROTEUS itself is a software program designed to run on a tablet or hardened PDA and allow anyone from a squad leader up to a company commander to monitor and adjust the “composition of battlefield elements — including dismounted forces, vehicles, unmanned aerial vehicles (UAVs), manned aircraft and other available assets,” according to the release. “Through PROTEUS, we aim to amplify the initiative and decision-making capabilities of NCOs and junior officers at the platoon and squad level, as well as field-grade officers, commanding expeditionary landing teams, for example, by giving them new tools to compose tailored force packages not just before the mission, but during the mission as it unfolds.”

But PROTEUS isn’t just for monitoring and redeploying forces, it also serves as a real-time strategy training system to help NCOs and officers test and analyze different capabilities and tactics virtually. “One of the beauties of [PROTEUS] is it's flexible enough to program with whatever you want,” Grayson said. It allows warfighters to “go explore their own ideas, their own structure concepts, their own tactics. They're totally free to use it just as an open-ended experimentation, mission rehearsal or even training type of tool.”

But for all its design flexibility, the system’s physics engine closely conforms to the real-world behaviors and tolerances of existing military equipment as well as commercial drones, cellular, satellite and Wi-Fi communications, sensors and even weapons systems. “The simulation environment is sophisticated but doesn't let them do things that are not physically realizable,” Grayson explained.

The system also includes a dynamic composition engine called COMPOSER, which not only automates the team’s equipment loadout but can also look at a commander’s plan and provide feedback on multiple aspects, including “electromagnetic signature risk, assignment of communications assets to specific units and automatic configuration of tactical networks,” according to a DARPA press release.

DARPA

“Without the EMSO and logistics wizards, it’s hard to effectively coordinate and execute multi-domain operations,” Paschkewitz said. “Marines can easily coordinate direct and indirect fires, but coordinating those with spectrum operations while ensuring logistical support without staff is challenging. These tools allow Marines to focus on the art of war, and the automation handles the science of war.”
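
DARPA hasn't published COMPOSER's internals, but the kind of feedback it describes, such as flagging electromagnetic-signature risk in a proposed plan, can be pictured as a simple rules pass over plan elements. The Python sketch below is entirely hypothetical: the field names, units and emission budget are invented to illustrate the idea, not taken from the actual engine.

```python
# Hypothetical sketch of the kind of plan feedback DARPA attributes to
# COMPOSER (signature risk, comms assignment). All rules, units and fields
# here are invented; the real engine's internals are not public.
plan = [
    {"unit": "squad-1", "radio": "UHF",    "emit_minutes": 45},
    {"unit": "uav-2",   "radio": "SATCOM", "emit_minutes": 10},
]

def review(plan, max_emit=30):
    """Flag any plan element whose radio emissions exceed a signature budget."""
    findings = []
    for element in plan:
        if element["emit_minutes"] > max_emit:
            findings.append(
                f"{element['unit']}: {element['emit_minutes']} min of "
                f"{element['radio']} emission exceeds signature budget"
            )
    return findings or ["no electromagnetic-signature risks flagged"]

for line in review(plan):
    print(line)
```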

Currently, the system is set up for standard Red vs Blue fights between opposing human forces. Grayson does not expect PROTEUS to be upgraded to the point that humans can compete against the CPU, and thinks CPU vs CPU matches are even less likely, given current computational and processing capabilities. He does note that the Constructive Machine-learning Battles with Adversary Tactics (COMBAT) program, which is still underway at DARPA, is working to develop “models of Red Force brigade behaviors that challenge and adapt to Blue Forces in simulation experiments.”

“Building a commander’s insight and judgment is driven by the fact that there’s a live opponent,” Paschkewitz said in June. “We built ULTRA [the sandbox module that serves as the basis for the larger system] around that concept from day one. This is not AI versus AI, or human versus AI, rather there is always a Marine against an ADFOR (adversary force), that’s another Marine, typically, forcing the commander to adapt tactics, techniques, and procedures (TTPs) and innovate at mission speed.”

“PROTEUS enables commanders to immerse themselves in a future conflict where they can deploy capabilities against a realistic adversary,” Ryan Reeder, model and simulation director, MCWL Experiment Division, said in a statement. “Commanders can hone their battlefield skills, while also training subordinates on employment techniques, delivering a cohesive unit able to execute in a more effective manner.”

Technically, DARPA’s involvement with the PROTEUS program has come to an end following its transfer to the Marine Corps Warfighting Lab where it is now being used for ADFOR training and developing new TTPs and CONOPS. “My guess is they will mostly use it for their own purposes, as opposed to continuing to develop it,” Grayson said. “The Warfighting Lab is less focused on technology and more focused on our future force, concepts and what are our new tactics.”