Posts with «science» label

Scientists discover microbes that can digest plastics at cool temperatures

In a potentially encouraging sign for reducing environmental waste, researchers have discovered microbes from the Alps and the Arctic that can break down plastic without requiring high temperatures. Although this is only a preliminary finding, a more efficient and effective breakdown of industrial plastic waste in landfills would give scientists a new tool for trying to reduce its ecological damage.

Scientists from the Swiss Federal Institute WSL published their findings this week in Frontiers in Microbiology, detailing how cold-adapted bacteria and fungi from polar regions and the Swiss Alps digested most of the plastics they tested — while only needing low to average temperatures. That last part is critical because plastic-eating microorganisms tend to need impractically high temperatures to work their magic. “Several microorganisms that can do this have already been found, but when their enzymes that make this possible are applied at an industrial scale, they typically only work at temperatures above [30 degrees Celsius / 86 degrees Fahrenheit],” the researchers explained. “The heating required means that industrial applications remain costly to date, and aren’t carbon-neutral.”

Unfortunately, none of the microorganisms tested succeeded at breaking down non-biodegradable polyethylene (PE), one of the most challenging plastics commonly found in consumer products and packaging. (They failed at degrading PE even after 126 days of incubation on the material.) But 56 percent of the strains tested decomposed biodegradable polyester-polyurethane (PUR) at 15 degrees Celsius (59 degrees Fahrenheit). Others digested commercially available biodegradable mixtures of polybutylene adipate terephthalate (PBAT) and polylactic acid (PLA). The two most successful strains were fungi from the genera Neodevriesia and Lachnellula: They broke down every plastic tested other than the formidable PE.

Plastics are too recent an invention for the microorganisms to have evolved specifically to break them down. But the researchers highlight that natural selection likely played a part by equipping them to break down cutin, a protective layer in plants that has much in common with plastics. “Microbes have been shown to produce a wide variety of polymer-degrading enzymes involved in the break-down of plant cell walls. In particular, plant-pathogenic fungi are often reported to biodegrade polyesters, because of their ability to produce cutinases which target plastic polymers due [to] their resemblance to the plant polymer cutin,” said co-author Dr. Beat Frey.

The researchers see promise in their findings but warn that hurdles remain. “The next big challenge will be to identify the plastic-degrading enzymes produced by the microbial strains and to optimize the process to obtain large amounts of proteins,” said Frey. “In addition, further modification of the enzymes might be needed to optimize properties such as protein stability.”

This article originally appeared on Engadget at https://www.engadget.com/scientists-discover-microbes-that-can-digest-plastics-at-cool-temperatures-173419885.html?src=rss

Vast and SpaceX plan to launch the first commercial space station in 2025

Another company is racing to launch the first commercial space station. Vast is partnering with SpaceX to launch its Haven-1 station as soon as August 2025. A Falcon 9 rocket will carry the platform to low Earth orbit, with a follow-up Vast-1 mission using Crew Dragon to bring four people to Haven-1 for up to 30 days. Vast is taking bookings for crew aiming to participate in scientific or philanthropic work. The company has the option of a second crewed SpaceX mission.

Haven-1 is relatively small. It isn't much larger than SpaceX's capsule, and is mainly intended for science and small-scale orbital manufacturing for the four people who dock. Vast hopes to make Haven-1 just one module in a larger station, though, and it can simulate the Moon's gravity by spinning.

As TechCrunch notes, the 2025 target is ambitious and might see Vast beat well-known rivals to deploying a private space station. Jeff Bezos' Blue Origin doesn't expect to launch its Orbital Reef until the second half of the decade. Voyager, Lockheed Martin and Nanoracks don't expect to operate their Starlab facility before 2027. Axiom stands the best chance of upstaging Vast with a planned late 2025 liftoff.

There's no guarantee any of these timelines will hold given the challenges and costs of building an orbital habitat — this has to be a safe vehicle that comfortably supports humans for extended periods, not just the duration of a rocket launch. However, this suggests that stations represent the next major phase of private spaceflight after tourism and lunar missions.

This article originally appeared on Engadget at https://www.engadget.com/vast-and-spacex-plan-to-launch-the-first-commercial-space-station-in-2025-134256156.html?src=rss

Meta's open-source ImageBind AI aims to mimic human perception

Meta is open-sourcing an AI tool called ImageBind that predicts connections between data similar to how humans perceive or imagine an environment. While image generators like Midjourney, Stable Diffusion and DALL-E 2 pair words with images, allowing you to generate visual scenes based only on a text description, ImageBind casts a broader net. It can link text, images / videos, audio, 3D measurements (depth), temperature data (thermal), and motion data (from inertial measurement units) — and it does this without having to first train on every possibility. It’s an early stage of a framework that could eventually generate complex environments from an input as simple as a text prompt, image or audio recording (or some combination of the three).

You could view ImageBind as moving machine learning closer to human learning. For example, if you’re standing in a stimulating environment like a busy city street, your brain (largely unconsciously) absorbs the sights, sounds and other sensory experiences to infer information about passing cars and pedestrians, tall buildings, weather and much more. Humans and other animals evolved to process this data for our genetic advantage: survival and passing on our DNA. (The more aware you are of your surroundings, the more you can avoid danger and adapt to your environment for better survival and prosperity.) As computers get closer to mimicking animals’ multi-sensory connections, they can use those links to generate fully realized scenes based only on limited chunks of data.

So, while you can use Midjourney to prompt “a basset hound wearing a Gandalf outfit while balancing on a beach ball” and get a relatively realistic photo of this bizarre scene, a multimodal AI tool like ImageBind may eventually create a video of the dog with corresponding sounds, including a detailed suburban living room, the room’s temperature and the precise locations of the dog and anyone else in the scene. “This creates distinctive opportunities to create animations out of static images by combining them with audio prompts,” Meta researchers said today in a developer-focused blog post. “For example, a creator could couple an image with an alarm clock and a rooster crowing, and use a crowing audio prompt to segment the rooster or the sound of an alarm to segment the clock and animate both into a video sequence.”

Meta’s graph showing ImageBind’s accuracy outperforming single-mode models. (Image: Meta)

As for what else one could do with this new toy, it points clearly to one of Meta’s core ambitions: VR, mixed reality and the metaverse. For example, imagine a future headset that can construct fully realized 3D scenes (with sound, movement, etc.) on the fly. Or, virtual game developers could perhaps eventually use it to take much of the legwork out of their design process. Similarly, content creators could make immersive videos with realistic soundscapes and movement based on only text, image or audio input. It’s also easy to imagine a tool like ImageBind opening new doors in the accessibility space, generating real-time multimedia descriptions to help people with vision or hearing disabilities better perceive their immediate environments.

“In typical AI systems, there is a specific embedding (that is, vectors of numbers that can represent data and their relationships in machine learning) for each respective modality,” said Meta. “ImageBind shows that it’s possible to create a joint embedding space across multiple modalities without needing to train on data with every different combination of modalities. This is important because it’s not feasible for researchers to create datasets with samples that contain, for example, audio data and thermal data from a busy city street, or depth data and a text description of a seaside cliff.”
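To make the joint-embedding idea concrete, here is a minimal, purely illustrative Python sketch. The encoder functions, the embedding size and the random outputs are hypothetical stand-ins rather than Meta's actual ImageBind API; the point is only to show how, once every modality maps into one shared vector space, embeddings from different modalities can be compared directly.

```python
import numpy as np

EMBED_DIM = 1024  # hypothetical size of the shared embedding space


def _normalize(v: np.ndarray) -> np.ndarray:
    """Scale a vector to unit length so dot products become cosine similarities."""
    return v / np.linalg.norm(v)


def encode_text(prompt: str) -> np.ndarray:
    """Placeholder text encoder: a trained model would map the prompt to a
    meaningful point in the shared space; here we just return a seeded random vector."""
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    return _normalize(rng.standard_normal(EMBED_DIM))


def encode_audio(waveform: np.ndarray) -> np.ndarray:
    """Placeholder audio encoder producing a vector in the same shared space."""
    rng = np.random.default_rng(abs(int(waveform.sum() * 1e6)) % (2**32))
    return _normalize(rng.standard_normal(EMBED_DIM))


def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity; because both vectors live in one joint space,
    a text embedding can be compared directly with an audio embedding."""
    return float(np.dot(a, b))


# Compare a text prompt against a (fake) audio clip in the shared space.
text_vec = encode_text("a rooster crowing at dawn")
audio_vec = encode_audio(np.sin(np.linspace(0, 1000, 16000)))  # stand-in waveform
print(f"text-audio similarity: {similarity(text_vec, audio_vec):.3f}")
```

In the real system, the encoders are trained so that naturally paired inputs (an image of a rooster and a recording of it crowing, say) land close together in that space, which is what allows one modality to retrieve, segment or animate another.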

Meta views the tech as eventually expanding beyond its current six “senses,” so to speak. “While we explored six modalities in our current research, we believe that introducing new modalities that link as many senses as possible — like touch, speech, smell, and brain fMRI signals — will enable richer human-centric AI models.” Developers interested in exploring this new sandbox can start by diving into Meta’s open-source code.

This article originally appeared on Engadget at https://www.engadget.com/metas-open-source-imagebind-ai-aims-to-mimic-human-perception-181500560.html?src=rss

JWST captures images of the first asteroid belts seen beyond the Solar System

About 25 light years away from Earth lies Fomalhaut, one of the brightest stars in the night sky. The Fomalhaut system has captivated astronomers for decades, but it’s only now that we’re developing a better understanding of it thanks to the James Webb Space Telescope. In a study published in the journal Nature Astronomy on Monday, a group of scientists made up primarily of astronomers from the University of Arizona and NASA’s Jet Propulsion Laboratory say the Fomalhaut system is far more complex than previously thought.

Since 1983, astronomers have known that the 440 million-year-old Fomalhaut is surrounded by dust and debris, but what they didn’t expect to find was three distinct debris fields surrounding the star. One of those, the closest to Fomalhaut, is similar to our solar system’s asteroid belt but far more expansive than expected. As New Scientist explains, Fomalhaut’s inner asteroid belt stretches from about seven astronomical units from the star to about 80 astronomical units out. To put those numbers in perspective, that’s an inner asteroid belt roughly 10 times broader than astronomers expected to find.

(Image: NASA, ESA, CSA)

However, that’s not even the most interesting feature of the Fomalhaut system. Outside of Fomalhaut’s inner asteroid belt, there is a second debris belt that is tilted at 23 degrees from everything else in orbit of the star. “This is a truly unique aspect of the system,” András Gáspár, lead author on the study, told Science News. He added that the tilted belt could be the result of as-yet-undiscovered planets orbiting Fomalhaut.

“The belts around Fomalhaut are kind of a mystery novel: Where are the planets?” said George Rieke, one of the astronomers involved in the study. "I think it's not a very big leap to say there's probably a really interesting planetary system around the star.”

Still further out from Fomalhaut is an outer debris ring similar to our solar system’s Kuiper belt. It includes a feature Gáspár and his colleagues have named the Great Dust Cloud. It’s unclear if this feature is part of the Fomalhaut system or something shining from beyond it, but they suspect it was formed when two space rocks more than 400 miles wide collided with one another. According to Gáspár and company, there may be three or more planets about the size of Uranus and Neptune orbiting Fomalhaut. They’re now analyzing JWST images that may reveal the existence of those planets.

This article originally appeared on Engadget at https://www.engadget.com/jwst-captures-images-of-the-first-asteroid-belts-seen-beyond-the-solar-system-192847989.html?src=rss

Hitting the Books: Why a Dartmouth professor coined the term 'artificial intelligence'

If the Wu-Tang produced it in '23 instead of '93, they'd have called it D.R.E.A.M. — because data rules everything around me. Where once our society brokered power based on the strength of our arms and purse strings, the modern world is driven by data empowering algorithms to sort, silo and sell us out. These black box oracles of imperious and imperceptible decision-making decide who gets home loans, who gets bail, who finds love and who gets their kids taken from them by the state.

In their new book, How Data Happened: A History from the Age of Reason to the Age of Algorithms, which builds off their existing curriculum, Columbia University Professors Chris Wiggins and Matthew L Jones examine how data is curated into actionable information and used to shape everything from our political views and social mores to our military responses and economic activities. In the excerpt below, Wiggins and Jones look at the work of mathematician John McCarthy, the junior Dartmouth professor who single-handedly coined the term "artificial intelligence"... as part of his ploy to secure summer research funding.

(Book cover image: WW Norton)

Excerpted from How Data Happened: A History from the Age of Reason to the Age of Algorithms by Chris Wiggins and Matthew L Jones. Published by WW Norton. Copyright © 2023 by Chris Wiggins and Matthew L Jones. All rights reserved.


Confecting “Artificial Intelligence”

A passionate advocate of symbolic approaches, the mathematician John McCarthy is often credited with inventing the term “artificial intelligence,” including by himself: “I invented the term artificial intelligence,” he explained, “when we were trying to get money for a summer study” to aim at “the long term goal of achieving human level intelligence.” The “summer study” in question was titled “The Dartmouth Summer Research Project on Artificial Intelligence,” and the funding requested was from the Rockefeller Foundation. At the time a junior professor of mathematics at Dartmouth, McCarthy was aided in his pitch to Rockefeller by his former mentor Claude Shannon. As McCarthy describes the term’s positioning, “Shannon thought that artificial intelligence was too flashy a term and might attract unfavorable notice.” However, McCarthy wanted to avoid overlap with the existing field of “automata studies” (including “nerve nets” and Turing machines) and took a stand to declare a new field. “So I decided not to fly any false flags anymore.” The ambition was enormous; the 1955 proposal claimed “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” McCarthy ended up with more brain modelers than axiomatic mathematicians of the sort he wanted at the 1956 meeting, which came to be known as the Dartmouth Workshop. The event saw the coming together of diverse, often contradictory efforts to make digital computers perform tasks considered intelligent, yet as historian of artificial intelligence Jonnie Penn argues, the absence of psychological expertise at the workshop meant that the account of intelligence was “informed primarily by a set of specialists working outside the human sciences.” Each participant saw the roots of their enterprise differently. McCarthy reminisced, “anybody who was there was pretty stubborn about pursuing the ideas that he had before he came, nor was there, as far as I could see, any real exchange of ideas.”

Like Turing’s 1950 paper, the 1955 proposal for a summer workshop in artificial intelligence seems in retrospect incredibly prescient. The seven problems that McCarthy, Shannon, and their collaborators proposed to study became major pillars of computer science and the field of artificial intelligence:

  1. “Automatic Computers” (programming languages)

  2. “How Can a Computer be Programmed to Use a Language” (natural language processing)

  3. “Neuron Nets” (neural nets and deep learning)

  4. “Theory of the Size of a Calculation” (computational complexity)

  5. “Self-​improvement” (machine learning)

  6. “Abstractions” (feature engineering)

  7. “Randomness and Creativity” (Monte Carlo methods including stochastic learning).

The term “artificial intelligence,” in 1955, was an aspiration rather than a commitment to one method. AI, in this broad sense, involved both discovering what comprises human intelligence by attempting to create machine intelligence as well as a less philosophically fraught effort simply to get computers to perform difficult activities a human might attempt.

Only a few of these aspirations fueled the efforts that, in current usage, became synonymous with artificial intelligence: the idea that machines can learn from data. Among computer scientists, learning from data would be de-​emphasized for generations.

Most of the first half century of artificial intelligence focused on combining logic with knowledge hard-​coded into machines. Data collected from everyday activities was hardly the focus; it paled in prestige next to logic. In the last five years or so, artificial intelligence and machine learning have begun to be used synonymously; it’s a powerful thought-​exercise to remember that it didn’t have to be this way. For the first several decades in the life of artificial intelligence, learning from data seemed to be the wrong approach, a nonscientific approach, used by those who weren’t willing “to just program” the knowledge into the computer. Before data reigned, rules did.

For all their enthusiasm, most participants at the Dartmouth workshop brought few concrete results with them. One group was different. A team from the RAND Corporation, led by Herbert Simon, had brought the goods, in the form of an automated theorem prover. This algorithm could produce proofs of basic arithmetical and logical theorems. But math was just a test case for them. As historian Hunter Heyck has stressed, that group started less from computing or mathematics than from the study of how to understand large bureaucratic organizations and the psychology of the people solving problems within them. For Simon and Newell, human brains and computers were problem solvers of the same genus.

Our position is that the appropriate way to describe a piece of problem-​solving behavior is in terms of a program: a specification of what the organism will do under varying environmental circumstances in terms of certain elementary information processes it is capable of performing... ​Digital computers come into the picture only because they can, by appropriate programming, be induced to execute the same sequences of information processes that humans execute when they are solving problems. Hence, as we shall see, these programs describe both human and machine problem solving at the level of information processes.

Though they provided many of the first major successes in early artificial intelligence, Simon and Newell focused on a practical investigation of the organization of humans. They were interested in human problem-​solving that mixed what Jonnie Penn calls a “composite of early twentieth century British symbolic logic and the American administrative logic of a hyper-​rationalized organization.” Before adopting the moniker of AI, they positioned their work as the study of “information processing systems” comprising humans and machines alike, that drew on the best understanding of human reasoning of the time.

Simon and his collaborators were deeply involved in debates about the nature of human beings as reasoning animals. Simon later received the Nobel Prize in Economics for his work on the limitations of human rationality. He was concerned, alongside a bevy of postwar intellectuals, with rebutting the notion that human psychology should be understood as animal-​like reaction to positive and negative stimuli. Like others, he rejected a behaviorist vision of the human as driven by reflexes, almost automatically, and that learning primarily concerned the accumulation of facts acquired through such experience. Great human capacities, like speaking a natural language or doing advanced mathematics, never could emerge only from experience—​they required far more. To focus only on data was to misunderstand human spontaneity and intelligence. This generation of intellectuals, central to the development of cognitive science, stressed abstraction and creativity over the analysis of data, sensory or otherwise. Historian Jamie Cohen-​Cole explains, “Learning was not so much a process of acquiring facts about the world as of developing a skill or acquiring proficiency with a conceptual tool that could then be deployed creatively.” This emphasis on the conceptual was central to Simon and Newell’s Logic Theorist program, which didn’t just grind through logical processes, but deployed human-​like “heuristics” to accelerate the search for the means to achieve ends. Scholars such as George Pólya investigating how mathematicians solved problems had stressed the creativity involved in using heuristics to solve math problems. So mathematics wasn’t drudgery — ​it wasn’t like doing lots and lots of long division or of reducing large amounts of data. It was creative activity — ​and, in the eyes of its makers, a bulwark against totalitarian visions of human beings, whether from the left or the right. (And so, too, was life in a bureaucratic organization — ​it need not be drudgery in this picture — ​it could be a place for creativity. Just don’t tell that to its employees.)

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-how-data-happened-wiggins-jones-ww-norton-143036972.html?src=rss

Astronomers finally spot a star consuming a planet

Scientists know that a dying star will become a giant that swallows all the planets within a certain radius, but they've never seen it happen... before now, that is. Astronomers at Caltech, Harvard, MIT and other schools have detected a star consuming one of its orbiting planets as it turns into a red giant. A star about 12,000 light-years away, close to the Aquila constellation, became 100 times brighter for over 10 days in an outburst that researchers say represented a hot jovian world falling into its host star's atmosphere and, ultimately, its core.

The group first observed the burst in May 2020, but took roughly a year to determine what happened. Thanks to the NEOWISE infrared telescope, the team ruled out merging stars. The energy from the outburst was only a thousandth of what it should have been for a star-on-star collision, and there was a stream of cold dust rather than hot plasma. MIT's Kishalay De, who led the paper, also notes that Jupiter's mass is about a thousandth that of the Sun, providing a handy reference point.
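A hedged back-of-the-envelope reading of that reference point, assuming the energy of such an outburst scales roughly with the mass of the body being swallowed (our simplification, not a claim from the paper):

$$\frac{E_{\text{observed}}}{E_{\text{stellar merger}}} \sim 10^{-3} \approx \frac{M_{\text{Jupiter}}}{M_{\odot}}$$

That match in order of magnitude is what points to a planet-sized object, rather than a second star, plunging into the star.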

This phenomenon is thought to be common in the universe, and Earth and the other inner Solar System planets are expected to meet a similar demise when the Sun dies roughly 5 billion years from now. In that regard, the astronomers confirmed their existing models. Past studies caught stars just before and after they swallowed planets, but never in mid-digestion.

There are still unknowns surrounding planet-munching stars. This finding helps complete the picture, though, and De tells Science News that the next wave of infrared-capable observatories will increase the chances of finding similar events. That, in turn, could illustrate how these apocalyptic processes vary across the cosmos.

This article originally appeared on Engadget at https://www.engadget.com/astronomers-finally-spot-a-star-consuming-a-planet-145950652.html?src=rss

Scientists observe elusive missing step in photosynthesis’ final stage

Researchers at the SLAC National Accelerator Laboratory and Lawrence Berkeley National Laboratory (along with collaborators in Sweden, Germany and the UK) have shed new light on the final step of photosynthesis. They observed in atomic detail how Photosystem II, a protein complex found in plants, undergoes a transformation that leads to the loss of an extra oxygen atom. Scientists believe the discoveries will help provide a roadmap for optimizing clean energy sources. “It’s really going to change the way we think about Photosystem II,” said Uwe Bergmann, scientist and professor at the University of Wisconsin-Madison, who co-authored the paper.

Researchers took “extremely high-resolution images” of different stages of the process (at room temperature), giving them new insight into specifically how and where the oxygen is produced. Baseball can provide a simple (if somewhat forced) metaphor to illustrate the process. “The center cycles through four stable oxidation states, known as S0 through S3, when exposed to sunlight,” SLAC explains. “On a baseball field, S0 would be the start of the game when a player on home base is ready to go to bat. S1-S3 would be players on first, second, and third.” Based on this metaphor, a batter making contact to advance the runners signifies the complex absorbing a sunlight photon. “When the fourth ball is hit, the player slides into home, scoring a run or, in the case of Photosystem II, releasing one molecule of breathable oxygen.” It’s that final stage (S4, between third base and sliding home in our metaphor) that they imaged for the first time, where two oxygen atoms bond to release an oxygen molecule, revealing additional steps previously unseen.
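For context, the overall water-splitting chemistry that this cycle drives is textbook material even if the newly imaged step is not: each absorbed photon advances the catalytic center by one S-state, and a complete trip around the bases amounts to

$$2\,\mathrm{H_2O} \xrightarrow{\;4\,h\nu\;} \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^-$$

The new images capture the moment within that final, oxygen-releasing step when the two oxygen atoms actually bond.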


“Most of the process that produces breathable oxygen happens in this last step,” said Vittal Yachandra, a scientist at Berkeley Lab and co-author of the paper, published in Nature. “But there are several things happening at different parts of Photosystem II and they all have to come together in the end for the reaction to succeed. Just like how in baseball, factors like the location of the ball and the position of the basemen and fielders affect the moves a player takes to get to home base, the protein environment around the catalytic center influences how this reaction plays out.”

The researchers expect an X-ray upgrade later this year to shed more light on the process. It will use a repetition rate of up to a million pulses per second, up from the 120 per second used in this experiment. “With these upgrades, we will be able to collect several days’ worth of data in just a few hours,” Bergmann said. “We will also be able to use soft X-rays to further understand the chemical changes happening in the system. These new capabilities will continue to drive this research forward and shed new light on photosynthesis.”

The team believes the results will help them “develop artificial photosynthetic systems that mimic photosynthesis to harvest natural sunlight to convert carbon dioxide into hydrogen and carbon-based fuels.” Jan Kern, another co-author and scientist at Berkeley Lab, said, “The more we learn about how nature does it, the closer we get to using those same principles in human-made processes, including ideas for artificial photosynthesis as a clean and sustainable energy source.”

This article originally appeared on Engadget at https://www.engadget.com/scientists-observe-elusive-missing-step-in-photosynthesis-final-stage-214947146.html?src=rss

SpaceX's Starship didn't immediately respond to a self-destruct command

In a Twitter audio chat on Saturday, SpaceX's founder, Elon Musk, shared more details about what went awry during the first fully integrated Starship rocket and Super Heavy booster launch in April. One of the biggest revelations: The self-destruct command took 40 seconds to work — a seemingly short time, except when you're uncertain whether the massive rocket you just launched will blow up before hitting land. To recap the day's events, the rocket and booster cleared the launch pad, failed to separate from each other, flipped and, finally, blew up. The automated command should have triggered an explosion immediately, but the vehicle tumbled for a while first, The New York Times reported.

In one of many spins on the day's failures, Musk claimed it was because "the vehicle’s structural margins appear to be better than we expected." While SpaceX previously said the only goal was that initial takeoff, a lot clearly went wrong. 

The delayed self-destruction wasn't the only issue following the launch from SpaceX's facility in Boca Chica, Texas. After the eventual explosion, debris fell across about 385 acres of land made up of the SpaceX facility and Boca Chica State Park. The latter resulted in a 3.5-acre fire. Musk's response? "To the best of our knowledge there has not been any meaningful damage to the environment that we’re aware of." 

The FAA has already announced it's investigating the events and will ground Starship until "determining that any system, process or procedure related to the mishap does not affect public safety." Even with all of that, Musk went so far as to call the launch "successful" and "maybe slightly exceeding my expectations." 

In this case, success was clearing the launch pad and, apparently, learning lessons along the way. "The goal of these missions is just information," Musk said. "Like, we don’t have any payload or anything — it’s just to learn as much as possible." 

This article originally appeared on Engadget at https://www.engadget.com/spacexs-starship-didnt-immediately-respond-to-a-self-destruct-command-120010127.html?src=rss

Russia will continue supporting the International Space Station until 2028

Russia has formally agreed to remain aboard the International Space Station (ISS) until 2028, NASA has announced. Yuri Borisov, the Director General of Roscosmos, previously said that the country was pulling out of the ISS after 2024 so it could focus on building its own space station. "After 2024" is pretty vague, though, and even Roscosmos official Sergei Krikalev said it could mean 2025, 2028 or 2030. Now we have a more solid idea of how long Russia intends to remain a partner. To note, the United States, Japan, Canada and the participating countries of the ESA (European Space Agency) have previously agreed to keep the ISS running until 2030.

After the United States and other countries imposed sanctions on Russia following the invasion of Ukraine, former Roscosmos director Dmitry Rogozin spoke up and threatened to stop working with his agency's western counterparts. "I believe that the restoration of normal relations between the partners at the International Space Station and other projects is possible only with full and unconditional removal of illegal sanctions," Rogozin said at the time. 

While Roscosmos has now agreed to continue cooperating with its fellow ISS partners, the increasing tension between Russia and the US even before the invasion of Ukraine began prompted NASA to prepare for the possibility of the former leaving the space station. NASA and the White House reportedly drew plans to pull astronauts out of the station if Russia leaves abruptly, as well as to keep the ISS running without the Russian thrusters keeping the flying lab in orbit. 

Private space companies had reportedly been called in to help out, and a previous report said Boeing already formed a team of engineers to figure out how to control the ISS without Russia's thrusters. It's unclear if the remaining ISS partners will use any of those contingencies after 2028 and if a private space company will step in to keep the space station running. It's worth noting, however, that NASA and other space agencies are already preparing to leave low Earth orbit to explore the Moon.

This article originally appeared on Engadget at https://www.engadget.com/russia-will-continue-supporting-the-international-space-station-until-2028-121505126.html?src=rss

Japan's ispace confirms that Hakuto-R failed its lunar landing

ispace's Hakuto-R Mission 1 was poised to make history. It was going to be the first successful moon landing by a private company and the first Japanese lunar landing overall. But shortly before the spacecraft was supposed to touch down on the lunar surface, ispace lost contact with it. Now, the Japanese company has announced that there was a "high probability that the lander eventually made a hard landing on the moon's surface." It didn't use the word "crash," but the spacecraft is clearly not in a condition that would allow the company to proceed with the mission. 

The spacecraft was scheduled to land on the moon on April 26th at 1:40 AM Japan time (April 25th, 12:40 PM Eastern time). ispace said it was able to confirm that the lander was in a vertical position as it approached the surface and that its descent speed rapidly increased by the time its propellant was almost gone.

By 8 AM Japan time, ispace had determined that "Success 9" of Hakuto-R's mission milestones, which is the completion of its lunar landing, was no longer achievable. The company has yet to detail what happened to the spacecraft and what the root cause of the failure was, but it's currently analyzing the telemetry data it acquired and will announce its findings once it's done.

Hakuto-R launched on top of a SpaceX rocket around 100 days ago, carrying payloads from NASA and JAXA, as well as the UAE's first lunar rover, Rashid. While the mission failed to reach its ultimate goal, ispace said it was "able to acquire valuable data and know-how from the beginning to nearly the end of the landing sequence" and that it will use what it has learned from this event to enable a "future successful lunar landing mission." The company still intends to push through with Mission 2, scheduled for launch in 2024, and Mission 3 in 2025.

ispace will continue to make the most of the data and know-how acquired during the operation through Success 8, and landing sequence, including aspects of Success 9, aiming to dramatically improve the technological maturity of Mission 2 in 2024 and Mission 3 in 2025. (2/3)

— ispace (@ispace_inc) April 26, 2023

This article originally appeared on Engadget at https://www.engadget.com/japans-ispace-confirms-that-hakuto-r-failed-its-lunar-landing-110531710.html?src=rss