Posts with «nuclear policy» label

NASA picks Lockheed Martin to build the nuclear rocket that’ll take us to Mars

NASA and DARPA have chosen aerospace and defense company Lockheed Martin to develop a spacecraft with a nuclear thermal rocket engine. Announced in January, the initiative — in which BWX Technologies will provide the reactor and fuel — is dubbed the Demonstration Rocket for Agile Cislunar Operations (DRACO). The agencies aim to showcase the tech no later than 2027 with an eye toward future Mars missions.

Nuclear thermal propulsion (NTP) has several advantages over chemically propelled rockets. First, it’s two to five times more efficient, allowing ships to travel faster and farther with greater agility. In addition, its reduced propellant needs leave more room on the spaceship for storing scientific equipment and other essentials. It also provides more options for abort scenarios, as the nuclear engines make it easier to alter the ship’s trajectory for a quicker-than-expected return trip. These factors combine to make NTP (perhaps) the ideal Mars travel method.

“These more powerful and efficient nuclear thermal propulsion systems can provide faster transit times between destinations,” said Kirk Shireman, VP of Lunar Exploration Campaigns for Lockheed Martin. “Reducing transit time is vital for human missions to Mars to limit a crew’s exposure to radiation.”


The NTP system will use a nuclear reactor to heat hydrogen propellant rapidly to extremely high temperatures. That gas is funneled through the engine’s nozzle, creating the ship’s thrust. “This nuclear thermal propulsion system is designed to be extremely safe and reliable, using High Assay Low Enriched Uranium (HALEU) fuel to rapidly heat a super-cold gas, such as liquid hydrogen,” BWX said today. “As the gas is heated, it expands quickly and creates thrust to move the spacecraft more efficiently than typical chemical combustion engines.”
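To make the efficiency comparison concrete, here is a minimal sketch of the rocket-equation arithmetic, assuming ballpark specific-impulse figures (roughly 450 seconds for a chemical hydrogen/oxygen engine and roughly 900 seconds for a nuclear thermal engine) and an arbitrary example maneuver; none of these numbers are DRACO specifications.

```python
# Sketch: why higher exhaust velocity means less propellant, via the Tsiolkovsky
# rocket equation. Specific-impulse values and the delta-v figure are illustrative
# ballpark assumptions, not numbers from NASA, DARPA, or Lockheed Martin.
import math

G0 = 9.81          # standard gravity, m/s^2
DELTA_V = 4000.0   # hypothetical maneuver, m/s (illustrative, not a mission figure)

def propellant_fraction(isp_seconds: float, delta_v: float = DELTA_V) -> float:
    """Fraction of initial vehicle mass that must be propellant for a given delta-v."""
    exhaust_velocity = isp_seconds * G0
    return 1.0 - math.exp(-delta_v / exhaust_velocity)

for label, isp in [("chemical (~450 s Isp)", 450.0), ("nuclear thermal (~900 s Isp)", 900.0)]:
    print(f"{label}: {propellant_fraction(isp):.1%} of initial mass must be propellant")
```

Doubling the exhaust velocity roughly halves the propellant fraction needed for the same maneuver, which is the source of the extra room for equipment and the added abort flexibility described above.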

To help quell concerns about radioactive leaks in the Earth’s atmosphere, NASA and DARPA plan not to power up the reactor until the ship has reached a “nuclear safe orbit,” where any tragedies would occur outside the zone where it would affect Earth. The agencies aim for a nuclear spacecraft demonstration by 2027, launched from a conventional rocket until it reaches “an appropriate location above low earth orbit.”

Nuclear reactors will also likely play a key role in powering future Martian habitats, with NASA testing small and portable versions of the tech as far back as 2018.

Before NTP propels the first humans to Mars, it could find use on much shorter flights, as nuclear-powered spacecraft could also make transporting material to the Moon more efficient. “A safe, reusable nuclear tug spacecraft would revolutionize cislunar operations,” said Shireman. “With more speed, agility and maneuverability, nuclear thermal propulsion also has many national security applications for cislunar space.”


US lawmakers introduce bill to prevent AI-controlled nuclear launches

Bipartisan US lawmakers from both chambers of Congress introduced legislation this week that would formally prohibit AI from launching nuclear weapons. Although Department of Defense policy already states that a human must be “in the loop” for such critical decisions, the new bill — the Block Nuclear Launch by Autonomous Artificial Intelligence Act — would codify that policy, preventing the use of federal funds for an automated nuclear launch without “meaningful human control.”

Aiming to protect “future generations from potentially devastating consequences,” the bill was introduced by Senator Ed Markey (D-MA) and Representatives Ted Lieu (D-CA), Don Beyer (D-VA) and Ken Buck (R-CO). Senate co-sponsors include Jeff Merkley (D-OR), Bernie Sanders (I-VT), and Elizabeth Warren (D-MA). “As we live in an increasingly digital age, we need to ensure that humans hold the power alone to command, control, and launch nuclear weapons – not robots,” said Markey. “That is why I am proud to introduce the Block Nuclear Launch by Autonomous Artificial Intelligence Act. We need to keep humans in the loop on making life or death decisions to use deadly force, especially for our most dangerous weapons.”

Artificial intelligence chatbots (like the ever-popular ChatGPT, the more advanced GPT-4 and Google Bard), image generators and voice cloners have taken the world by storm in recent months. (Republicans are already using AI-generated images in political attack ads.) Various experts have voiced concerns that, if the technology is left unregulated, humanity could face grave consequences. “Lawmakers are often too slow to adapt to the rapidly changing technological environment,” Cason Schmit, Assistant Professor of Public Health at Texas A&M University, told The Conversation earlier this month. Although the federal government hasn’t passed any AI-based legislation since the proliferation of AI chatbots, a group of tech leaders and AI experts signed a letter in March requesting an “immediate” six-month pause on developing AI systems beyond GPT-4. Additionally, the Biden administration recently opened comments seeking public feedback about possible AI regulations.

“While we all try to grapple with the pace at which AI is accelerating, the future of AI and its role in society remains unclear,” said Rep. Lieu. “It is our job as Members of Congress to have responsible foresight when it comes to protecting future generations from potentially devastating consequences. That’s why I’m pleased to introduce the bipartisan, bicameral Block Nuclear Launch by Autonomous AI Act, which will ensure that no matter what happens in the future, a human being has control over the employment of a nuclear weapon – not a robot. AI can never be a substitute for human judgment when it comes to launching nuclear weapons.”

Given the current political climate in Washington, passing even the most common-sense of bills isn’t guaranteed. Nevertheless, perhaps a proposal as fundamental as “don’t let computers decide to obliterate humanity” will serve as a litmus test for how prepared the US government is to deal with this quickly evolving technology.


NASA and DARPA will test nuclear thermal engines for crewed missions to Mars

NASA is going back to an old idea as it tries to get humans to Mars. It is teaming up with the Defense Advanced Research Projects Agency (DARPA) to test a nuclear thermal rocket engine in space with the aim of using the technology for crewed missions to the red planet. The agencies hope to "demonstrate advanced nuclear thermal propulsion technology as soon as 2027," NASA administrator Bill Nelson said. "With the help of this new technology, astronauts could journey to and from deep space faster than ever — a major capability to prepare for crewed missions to Mars."

Under the Demonstration Rocket for Agile Cislunar Operations (DRACO) program, NASA's Space Technology Mission Directorate will take the lead on technical development of the engine, which will be integrated with an experimental spacecraft from DARPA. NASA says that nuclear thermal propulsion (NTP) could allow spacecraft to travel faster, which could reduce the volume of supplies needed to carry out a long mission. An NTP engine could also free up space for more science equipment and extra power for instrumentation and communication.

As far back as the 1940s, scientists started speculating about the possibility of using nuclear energy to power spaceflight. The US conducted ground experiments on that front starting in the '50s. Budget cutbacks and changing priorities (such as a focus on the Space Shuttle program) led to NASA abandoning the project at the end of 1972 before it carried out any test flights.

There are, of course, risks involved with NTP engines, such as the possible dispersal of radioactive material in the environment should a failure occur in the atmosphere or orbit. Nevertheless, NASA says the faster transit times that NTP engines can enable could lower the risk to astronauts — they could reduce travel times to Mars by up to a quarter. Nuclear thermal rockets could be at least three times more efficient than conventional chemical propulsion methods.

NASA is also looking into nuclear energy to power related space exploration efforts. In 2018, it carried out tests of a portable nuclear reactor as part of efforts to develop a system capable of powering a habitat on Mars. Last year, NASA and the Department of Energy selected three contractors to design a fission surface power system that it can test on the Moon. DARPA and the Defense Department have worked on other NTP engine projects over the last few years.

Meanwhile, the US has just approved a small modular nuclear design for the first time. As Gizmodo reports, the design allows for a nuclear facility that's around a third the size of a standard reactor. Each module is capable of producing around 50 megawatts of power. The design, from a company called NuScale, could lower the cost and complexity of building nuclear power plants.

NASA picks three companies to develop lunar nuclear power systems

NASA and the Department of Energy have awarded contracts to three companies that are designing concepts to bring nuclear power to the Moon. The agencies will award Lockheed Martin, Westinghouse and IX around $5 million each to fund the design of a fission surface power system, an idea that NASA has been working on for at least 14 years.

The three companies are being tasked with developing a 40-kilowatt class fission power system that can run for at least 10 years on the lunar surface. NASA hopes to test the system on the Moon as soon as the end of this decade. If the demonstration proves successful, it could lead to nuclear energy powering long-term missions on the Moon and Mars as part of the Artemis program. "Developing these early designs will help us lay the groundwork for powering our long-term human presence on other worlds," Jim Reuter, associate administrator for NASA's Space Technology Mission Directorate, said in a statement.
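For a rough sense of scale, here is a back-of-the-envelope sketch of the 40-kilowatt, 10-year specification; the average US household consumption figure is an assumed comparison point for illustration, not part of NASA's announcement.

```python
# Rough energy output of a 40 kW system running continuously for 10 years.
# The ~10,700 kWh/year average US household figure is an assumption used only
# as a familiar yardstick; it does not appear in NASA's announcement.
POWER_KW = 40.0
YEARS = 10
HOURS_PER_YEAR = 24 * 365.25

total_kwh = POWER_KW * HOURS_PER_YEAR * YEARS
households_for_decade = total_kwh / (10_700 * YEARS)  # assumed average consumption

print(f"Total energy over {YEARS} years: {total_kwh:,.0f} kWh (~{total_kwh / 1e6:.1f} GWh)")
print(f"Roughly enough to power {households_for_decade:.0f} average US homes for the decade")
```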

Under the 12-month contracts, Lockheed Martin will partner with BWXT and Creare. Westinghouse will team up with Aerojet Rocketdyne, while IX (a joint venture of Intuitive Machines and X-Energy) will work with Maxar and Boeing on a proposal.

Lockheed Martin was one of three companies chosen by the Pentagon's Defense Advanced Research Projects Agency last year to develop nuclear-powered spacecraft. The Defense Department has also sought nuclear propulsion systems for spacecraft.

Hitting the Books: The Soviets once tasked an AI with our mutually assured destruction

Barely a month into its already floundering invasion of Ukraine, Russia is rattling its nuclear saber and threatening to drastically escalate the regional conflict into all-out world war. But the Russians are no strangers to nuclear brinksmanship. In the excerpt below from Ben Buchanan and Andrew Imbrie's latest book, we can see how close humanity came to an atomic holocaust in 1983 and why an increasing reliance on automation — on both sides of the Iron Curtain — only served to heighten the likelihood of an accidental launch. The New Fire looks at the rapidly expanding roles of automated machine learning systems in national defense and how increasingly ubiquitous AI technologies (as examined through the thematic lenses of "data, algorithms, and computing power") are transforming how nations wage war both domestically and abroad.


Excerpted from The New Fire: War, Peace, and Democracy in the Age of AI by Andrew Imbrie and Ben Buchanan. Published by MIT Press. Copyright © 2021 by Andrew Imbrie and Ben Buchanan. All rights reserved.


THE DEAD HAND

As the tensions between the United States and the Soviet Union reached their apex in the fall of 1983, the nuclear war began. At least, that was what the alarms said at the bunker in Moscow where Lieutenant Colonel Stanislav Petrov was on duty. 

Inside the bunker, sirens blared and a screen flashed the word “launch.” A missile was inbound. Petrov, unsure if it was an error, did not respond immediately. Then the system reported two more missiles, and then two more after that. The screen now said “missile strike.” The computer reported with its highest level of confidence that a nuclear attack was underway.

The technology had done its part, and everything was now in Petrov’s hands. To report such an attack meant the beginning of nuclear war, as the Soviet Union would surely launch its own missiles in retaliation. To not report such an attack was to impede the Soviet response, surrendering the precious few minutes the country’s leadership had to react before atomic mushroom clouds burst out across the country; “every second of procrastination took away valuable time,” Petrov later said. 

“For 15 seconds, we were in a state of shock,” he recounted. He felt like he was sitting on a hot frying pan. After quickly gathering as much information as he could from other stations, he estimated there was a 50-percent chance that an attack was under way. Soviet military protocol dictated that he base his decision off the computer readouts in front of him, the ones that said an attack was undeniable. After careful deliberation, Petrov called the duty officer to break the news: the early warning system was malfunctioning. There was no attack, he said. It was a roll of the atomic dice.

Twenty-three minutes after the alarms—the time it would have taken a missile to hit Moscow—he knew that he was right and the computers were wrong. “It was such a relief,” he said later. After-action reports revealed that the sun’s glare off a passing cloud had confused the satellite warning system. Thanks to Petrov’s decisions to disregard the machine and disobey protocol, humanity lived another day.

Petrov’s actions took extraordinary judgment and courage, and it was only by sheer luck that he was the one making the decisions that night. Most of his colleagues, Petrov believed, would have begun a war. He was the only one among the officers at that duty station who had a civilian, rather than military, education and who was prepared to show more independence. “My colleagues were all professional soldiers; they were taught to give and obey orders,” he said. The human in the loop — this particular human — had made all the difference.

Petrov’s story reveals three themes: the perceived need for speed in nuclear command and control to buy time for decision makers; the allure of automation as a means of achieving that speed; and the dangerous propensity of those automated systems to fail. These three themes have been at the core of managing the fear of a nuclear attack for decades and present new risks today as nuclear and non-nuclear command, control, and communications systems become entangled with one another. 

Perhaps nothing shows the perceived need for speed and the allure of automation as much as the fact that, within two years of Petrov’s actions, the Soviets deployed a new system to increase the role of machines in nuclear brinkmanship. It was formally known as Perimeter, but most people just called it the Dead Hand, a sign of the system’s diminished role for humans. As one former Soviet colonel and veteran of the Strategic Rocket Forces put it, “The Perimeter system is very, very nice. We remove unique responsibility from high politicians and the military.” The Soviets wanted the system to partly assuage their fears of nuclear attack by ensuring that, even if a surprise strike succeeded in decapitating the country’s leadership, the Dead Hand would make sure it did not go unpunished.

The idea was simple, if harrowing: in a crisis, the Dead Hand would monitor the environment for signs that a nuclear attack had taken place, such as seismic rumbles and radiation bursts. Programmed with a series of if-then commands, the system would run through the list of indicators, looking for evidence of the apocalypse. If signs pointed to yes, the system would test the communications channels with the Soviet General Staff. If those links were active, the system would remain dormant. If the system received no word from the General Staff, it would circumvent ordinary procedures for ordering an attack. The decision to launch would then rest in the hands of a lowly bunker officer, someone many ranks below a senior commander like Petrov, who would nonetheless find himself responsible for deciding if it was doomsday.
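The if-then sequence described above can be sketched as a short decision routine; the inputs, names, and outcomes below are illustrative placeholders, since Perimeter's actual logic has never been published.

```python
# A minimal sketch of the Dead Hand decision chain as described in the text.
# Sensor names and return values are invented for illustration only.

def dead_hand_cycle(seismic_detected: bool, radiation_detected: bool,
                    general_staff_link_alive: bool) -> str:
    # Step 1: look for physical evidence that a nuclear strike has occurred.
    if not (seismic_detected and radiation_detected):
        return "remain dormant"

    # Step 2: an attack appears to have happened; test the link to the General Staff.
    if general_staff_link_alive:
        return "remain dormant"  # leadership is alive, normal chain of command applies

    # Step 3: no word from leadership; bypass ordinary launch procedures and hand
    # the decision to the duty officer in the bunker.
    return "transfer launch authority to bunker duty officer"

print(dead_hand_cycle(seismic_detected=True, radiation_detected=True,
                      general_staff_link_alive=False))
```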

The United States was also drawn to automated systems. Since the 1950s, its government had maintained a network of computers to fuse incoming data streams from radar sites. This vast network, called the Semi-Automatic Ground Environment, or SAGE, was not as automated as the Dead Hand in launching retaliatory strikes, but its creation was rooted in a similar fear. Defense planners designed SAGE to gather radar information about a potential Soviet air attack and relay that information to the North American Aerospace Defense Command, which would intercept the invading planes. The cost of SAGE was more than double that of the Manhattan Project, or almost $100 billion in 2022 dollars. Each of the twenty SAGE facilities boasted two 250-ton computers, which each measured 7,500 square feet and were among the most advanced machines of the era.

If nuclear war is like a game of chicken — two nations daring each other to turn away, like two drivers barreling toward a head-on collision — automation offers the prospect of a dangerous but effective strategy. As the nuclear theorist Herman Kahn described:

The “skillful” player may get into the car quite drunk, throwing whisky bottles out the window to make it clear to everybody just how drunk he is. He wears very dark glasses so that it is obvious that he cannot see much, if anything. As soon as the car reaches high speed, he takes the steering wheel and throws it out the window. If his opponent is watching, he has won. If his opponent is not watching, he has a problem; likewise, if both players try this strategy. 

To automate nuclear reprisal is to play chicken without brakes or a steering wheel. It tells the world that no nuclear attack will go unpunished, but it greatly increases the risk of catastrophic accidents.

Automation helped enable the dangerous but seemingly predictable world of mutually assured destruction. Neither the United States nor the Soviet Union was able to launch a disarming first strike against the other; it would have been impossible for one side to fire its nuclear weapons without alerting the other side and providing at least some time to react. Even if a surprise strike were possible, it would have been impractical to amass a large enough arsenal of nuclear weapons to fully disarm the adversary by firing multiple warheads at each enemy silo, submarine, and bomber capable of launching a counterattack. Hardest of all was knowing where to fire. Submarines in the ocean, mobile ground-launched systems on land, and round-the-clock combat air patrols in the skies made the prospect of successfully executing such a first strike deeply unrealistic. Automated command and control helped ensure these units would receive orders to strike back. Retaliation was inevitable, and that made tenuous stability possible. 

Modern technology threatens to upend mutually assured destruction. When an advanced missile called a hypersonic glide vehicle nears space, for example, it separates from its booster rockets and accelerates down toward its target at five times the speed of sound. Unlike a traditional ballistic missile, the vehicle can radically alter its flight profile over long ranges, evading missile defenses. In addition, its low-altitude approach renders ground-based sensors ineffective, further compressing the amount of time for decision-making. Some military planners want to use machine learning to further improve the navigation and survivability of these missiles, rendering any future defense against them even more precarious.

Other kinds of AI might upend nuclear stability by making more plausible a first strike that thwarts retaliation. Military planners fear that machine learning and related data collection technologies could find their hidden nuclear forces more easily. For example, better machine learning–driven analysis of overhead imagery could spot mobile missile units; the United States reportedly has developed a highly classified program to use AI to track North Korean launchers. Similarly, autonomous drones under the sea might detect enemy nuclear submarines, enabling them to be neutralized before they can retaliate for an attack. More advanced cyber operations might tamper with nuclear command and control systems or fool early warning mechanisms, causing confusion in the enemy’s networks and further inhibiting a response. Such fears of what AI can do make nuclear strategy harder and riskier. 

For some, just like the Cold War strategists who deployed the expert systems in SAGE and the Dead Hand, the answer to these new fears is more automation. The commander of Russia’s Strategic Rocket Forces has said that the original Dead Hand has been improved upon and is still functioning, though he didn’t offer technical details. In the United States, some proposals call for the development of a new Dead Hand–esque system to ensure that any first strike is met with nuclear reprisal, with the goal of deterring such a strike. It is a prospect that has strategic appeal to some warriors but raises grave concern for Cassandras, who warn of the present frailties of machine learning decision-making, and for evangelists, who do not want AI mixed up in nuclear brinkmanship.

While the evangelists’ concerns are more abstract, the Cassandras have concrete reasons for worry. Their doubts are grounded in stories like Petrov’s, in which systems were imbued with far too much trust and only a human who chose to disobey orders saved the day. The technical failures described in chapter 4 also feed their doubts. The operational risks of deploying fallible machine learning into complex environments like nuclear strategy are vast, and the successes of machine learning in other contexts do not always apply. Just because neural networks excel at playing Go or generating seemingly authentic videos or even determining how proteins fold does not mean that they are any more suited than Petrov’s Cold War–era computer for reliably detecting nuclear strikes. In the realm of nuclear strategy, misplaced trust of machines might be deadly for civilization; it is an obvious example of how the new fire’s force could quickly burn out of control.

Of particular concern is the challenge of balancing between false negatives and false positives—between failing to alert when an attack is under way and falsely sounding the alarm when it is not. The two kinds of failure are in tension with each other. Some analysts contend that American military planners, operating from a place of relative security, worry more about the latter. In contrast, they argue that Chinese planners are more concerned about the limits of their early warning systems, given that China possesses a nuclear arsenal that lacks the speed, quantity, and precision of American weapons. As a result, Chinese government leaders worry chiefly about being too slow to detect an attack in progress. If these leaders decided to deploy AI to avoid false negatives, they might increase the risk of false positives, with devastating nuclear consequences.
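A toy calculation makes that tension concrete: with an imperfect detector, moving the alert threshold to reduce one kind of failure necessarily increases the other. The score distributions and thresholds below are invented purely for illustration.

```python
# Toy illustration of the false-positive / false-negative trade-off described above.
# Benign events and real attacks produce overlapping detector scores, so no threshold
# eliminates both error types at once. All numbers here are fabricated for the sketch.
import random

random.seed(0)
benign_scores = [random.gauss(0.3, 0.15) for _ in range(10_000)]   # benign events score low
attack_scores = [random.gauss(0.7, 0.15) for _ in range(10_000)]   # real attacks score high

for threshold in (0.7, 0.5, 0.3):
    false_positive_rate = sum(s >= threshold for s in benign_scores) / len(benign_scores)
    false_negative_rate = sum(s < threshold for s in attack_scores) / len(attack_scores)
    print(f"threshold={threshold:.1f}  false positives={false_positive_rate:.1%}  "
          f"false negatives={false_negative_rate:.1%}")
```

Lowering the threshold to catch more real attacks (fewer false negatives) drives up the rate of false alarms, which is precisely the trade a planner worried about slow detection might be tempted to make.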

The strategic risks brought on by AI’s new role in nuclear strategy are even more worrying. The multifaceted nature of AI blurs lines between conventional deterrence and nuclear deterrence and warps the established consensus for maintaining stability. For example, the machine learning–enabled battle networks that warriors hope might manage conventional warfare might also manage nuclear command and control. In such a situation, a nation may attack another nation’s information systems with the hope of degrading its conventional capacity and inadvertently weaken its nuclear deterrent, causing unintended instability and fear and creating incentives for the victim to retaliate with nuclear weapons. This entanglement of conventional and nuclear command-and-control systems, as well as the sensor networks that feed them, increases the risks of escalation. AI-enabled systems may likewise falsely interpret an attack on command-and-control infrastructure as a prelude to a nuclear strike. Indeed, there is already evidence that autonomous systems perceive escalation dynamics differently from human operators.

Another concern, almost philosophical in its nature, is that nuclear war could become even more abstract than it already is, and hence more palatable. The concern is best illustrated by an idea from Roger Fisher, a World War II pilot turned arms control advocate and negotiations expert. During the Cold War, Fisher proposed that nuclear codes be stored in a capsule surgically embedded near the heart of a military officer who would always be near the president. The officer would also carry a large butcher knife. To launch a nuclear war, the president would have to use the knife to personally kill the officer and retrieve the capsule—a comparatively small but symbolic act of violence that would make the tens of millions of deaths to come more visceral and real. 

Fisher’s Pentagon friends objected to his proposal, with one saying, “My God, that’s terrible. Having to kill someone would distort the president’s judgment. He might never push the button.” This revulsion, of course, was what Fisher wanted: that, in the moment of greatest urgency and fear, humanity would have one more chance to experience—at an emotional, even irrational, level—what was about to happen, and one more chance to turn back from the brink.

Just as Petrov’s independence prompted him to choose a different course, Fisher’s proposed symbolic killing of an innocent was meant to force one final reconsideration. Automating nuclear command and control would do the opposite, reducing everything to error-prone, stone-cold machine calculation. If the capsule with nuclear codes were embedded near the officer’s heart, if the neural network decided the moment was right, and if it could do so, it would—without hesitation and without understanding—plunge in the knife.

China plans to build the first 'clean' commercial nuclear reactor

Are you intrigued by the possibility of using nuclear reactors to curb emissions, but worried about their water use and long-term safety? There might be an impending solution. LiveScience reports that China has outlined plans to build the first 'clean' commercial nuclear reactor using liquid thorium and molten salt.

The first prototype reactor should be ready in August, with the first tests due in September. A full-scale commercial reactor should be ready by 2030.

The technology should not only be kinder to the environment, but also mitigate some political controversy. Conventional uranium reactors produce waste that stays extremely radioactive for up to 10,000 years, requiring lead containers and extensive security, and that waste includes plutonium-239, an isotope crucial to nuclear weapons. The reactors also risk spilling dramatic levels of radiation in the event of a leak, as seen at Chernobyl, and they need large volumes of water, ruling out use in arid climates.

Thorium reactors, however, dissolve their key element in a fluoride salt and mostly output uranium-233, which can be recycled through other reactions. The other leftovers have a half-life of 'just' 500 years — still not spectacular, but much safer. If there is a breach, the molten salt cools and solidifies quickly enough to seal in the thorium, preventing any significant release. The technology doesn't require water and can't easily be used to produce nuclear weapons, so reactors could be built in the desert, far away from most cities, without raising concerns that they will add to nuclear weapon stockpiles.
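As a quick illustration of what a 500-year half-life means in practice: the fraction of an isotope remaining after t years is 0.5^(t / 500). The sketch below uses the article's round figure and ignores the fact that real waste is a mix of isotopes with different half-lives.

```python
# Quick decay arithmetic behind the "'just' 500 years" comparison above.
# The 500-year half-life is the article's round figure; this is illustrative only.
def fraction_remaining(years: float, half_life_years: float = 500.0) -> float:
    """Fraction of a radioactive isotope left after a given number of years."""
    return 0.5 ** (years / half_life_years)

for t in (500, 1_000, 2_500, 5_000):
    print(f"after {t:>5,} years: {fraction_remaining(t):.2%} of the isotope remains")
```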

China is accordingly building the first commercial reactor in Wuwei, a desert city in the country's Gansu province. Officials also see this as a way to foster China's international expansion — the country plans up to 30 such reactors in nations participating in its "Belt and Road" investment initiative. In theory, China can extend its political influence without contributing to nuclear arms proliferation.

That might worry the US and other political rivals that are behind on thorium reactors. The US-based Natrium reactor, for instance, is still in development. Even so, it might go a long way toward fighting climate change and meeting China's goal of becoming carbon neutral by 2060. The country is still heavily dependent on coal energy, and there's no guarantee renewable sources will keep up with demand by themselves. Thorium reactors could help China wean itself off coal relatively quickly, especially small-scale reactors that could be built over shorter periods and fill gaps where larger plants would be excessive.