A Japanese company might be on the cusp of making history. Japan's ispace is attempting to land its Hakuto-R craft on the Moon at 12:40PM Eastern, and you can watch the livestream right now. If all goes well, ispace will claim both the first successful private Moon landing and the first Japanese lunar landing of any kind. To date, only China, the Soviet Union and the US have touched down. The vehicle includes payloads from NASA and Japan's JAXA, as well as a small robotic rover (Rashid) from the United Arab Emirates. The rover is also historic as the UAE's first lunar craft.
Hakuto-R launched aboard a SpaceX rocket about 100 days ago. The landing is divided into six stages that include a de-orbit insertion, a largely unpowered "cruise" phase, a braking burn, a reorientation and two final phases where the machine slows down and (hopefully) reaches the surface intact. Israel's SpaceIL tried a private Moon landing in 2019, but it crashed following an engine failure.
A successful landing would advance ispace's goal of sending two more landers to the Moon in 2024 and 2025. It could also spur Japan's broader spaceflight ambitions. Both JAXA and Japanese companies have struggled to reach space using domestically made rockets. While ispace is relying on an American rocket to complete its mission, a landing would upstage SpaceX, Blue Origin and other private outfits racing to land on Earth's cosmic neighbor.
This article originally appeared on Engadget at https://www.engadget.com/watch-japans-ispace-try-to-land-on-the-moon-today-at-1240pm-et-161525731.html?src=rss
The gargantuan artificial construct enveloping your local star is going to be rather difficult to miss, even from a few light-years away. And given the literally astronomical cost of the resources needed to construct such a device — the still-theoretical-for-humans Dyson sphere — having one in your solar system will also serve as a stark warning of your technological capacity to any ETs that come sniffing around.
Or at least that's how 20th century astronomers like Nikolai Kardashev and Carl Sagan envisioned our potential Sol-spanning distant future going. Turns out, a whole lot of how we predict intelligences from outside our planet will behave is heavily influenced by humanity's own cultural and historical biases. In The Possibility of Life, science journalist Jaime Green examines humanity's intriguing history of looking to the stars and finding ourselves reflected in them.
The way we imagine human progress — technology, advancement — seems inextricable from human culture. Superiority is marked by fast ships, colonial spread, or the acquisition of knowledge that fuels mastery of the physical world. Even in Star Trek, the post-poverty, post-conflict Earth is rarely the setting. Instead we spend our time on a ship speeding faster than light, sometimes solving philosophical quandaries, but often enough defeating foes. The future is bigger, faster, stronger — and in space.
Astronomer Nikolai Kardashev led the USSR’s first SETI initiatives in the early 1960s, and he believed that the galaxy might be home to civilizations billions of years more advanced than ours. Imagining these civilizations was part of the project of searching for them. So in 1964, Kardashev came up with a system for classifying a civilization’s level of technological advancement.
The Kardashev scale, as it’s called, is pretty simple: a Type I civilization makes use of all the energy available on or from its planet. A Type II civilization uses all the energy from its star. A Type III civilization harnesses the energy of its entire galaxy.
What’s less simple is how a civilization gets to any of those milestones. These leaps, in case it’s not clear, are massive. On Earth we’re currently grappling with how dangerous it is to try to use all the energy sources on our planet, especially those that burn. (So we’re not even a Type I civilization, more like a Type Three-quarters.) A careful journey toward Type I would involve taking advantage of all the sunlight falling on a planet from its star, but that’s just one billionth or so of a star’s total energy output. A Type II civilization would be harnessing all of it.
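The book's "Type Three-quarters" quip tracks with a continuous interpolation of the scale that Carl Sagan proposed, which pegs Type I at roughly 10^16 watts and adds a tenth of a type per order of magnitude of power. A quick sketch (the formula is Sagan's; the ~2×10^13 W figure for humanity's power use is a ballpark assumption, not a number from the book):

```python
import math

def kardashev_rating(power_watts: float) -> float:
    """Sagan's continuous Kardashev rating: K = (log10(P) - 6) / 10,
    where P is a civilization's power use in watts.
    Type I ~ 1e16 W, Type II ~ 1e26 W, Type III ~ 1e36 W."""
    return (math.log10(power_watts) - 6) / 10

# Humanity's total power use is very roughly 2e13 W (a ballpark assumption).
print(round(kardashev_rating(2e13), 2))  # about 0.73: "Type Three-quarters"
```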
It’s not just that a Type II civilization would have to be massive enough to make use of all that energy, they’d also have to figure out how to capture it. The most common imagining for this is called a Dyson sphere, a massive shell or swarm of satellites surrounding the star to capture and convert all its energy. If you wanted enough material to build such a thing, you’d essentially have to disassemble a planet, and not just a small one — more like Jupiter. And then a Type III civilization would be doing that, too, but for all the stars in its galaxy (and maybe doing some fancy stuff to suck energy off the black hole at the galaxy’s core).
On the one hand, these imaginings are about as close to culturally agnostic as we can get: they require no alien personalities, no sociology, just the consumption of progressively more power, to be put to use however the aliens might like. But the Kardashev scale still rests on assumptions that are baked into so many of our visions of advanced aliens (and Earth’s own future as well). This view conflates advancement not only with technology but with growth, with always needing more power and more space, just the churning and churning of engines. Astrophysicist Adam Frank identifies the Kardashev scale as a product of the midcentury “techno-utopian vision of the future.” At the point when Kardashev was writing, humanity hadn’t yet been forced to face the sensitive feedback systems our energy consumption triggers. “Planets, stars, and galaxies,” Frank writes, “would all simply be brought to heel.”
Even in the Western scientific tradition, alternatives to Kardashev’s scale have been offered. Aerospace engineer Robert Zubrin proposed one scale that measures planetary mastery and another that measures colonizing spread. Carl Sagan offered one that accounts for the information available to a civilization. Cosmologist John D. Barrow proposed microscopic manipulation, going from Type I–minus, where people can manipulate objects of their own scale, down through the parts of living things, molecules, atoms, atomic nuclei, subatomic particles, to the very fabric of space and time. Frank proposed looking not at energy consumption but transformation, noting that a sophisticated civilization does more than bring a planet to heel; it must learn to find balance between resource use and long-term survival.
Of these — again, all white American or European men — only Sagan offers a measure of advancement that isn’t necessarily acquisitive. Even the manipulation of atoms, which may seem so small and delicate, requires massive amounts of energy in the form of particle accelerators, not to mention that this kind of tinkering has also unleashed humanity’s greatest destructive force. But Sagan’s super-advanced civilization could be nothing more than a massive, massive library, filled with scholars and philosophers, expanding and exploring mentally but with no dominion over their planet or star. (Yet, one has to ask: What is powering those libraries? The internet is ephemeral, but it is not free.)
Implicit in any vision of vast progress is not just longevity but continuity. The assumption of the ever upward-sloping line is bold to say the least. In the novella A Man of the People, Ursula K. Le Guin writes of one world, Hain, where civilization has existed for three million years. But just as the last few thousand years on Earth have seen empires rise and fall, and cultures collapse and displace one another, so it is on Hain at larger scale. Le Guin writes, “There had been…billions of lives lived in millions of countries…infinite wars and times of peace, incessant discoveries and forgettings…an endless repetition of unceasing novelty.” To hope for more than that is perhaps more optimistic than to imagine we might domesticate a star. Perhaps it’s also shortsighted, extrapolating out eons of future from just the last few centuries of life on two continents, rather than a wider view of many millennia on our whole world.
All of these scales of progress are built on human assumptions, specifically the colonizing, dominating, fossil-fuel-burning history of Europe and the United States. But scientists don’t see much use in thinking about the super-advanced alien philosophers and artists and dolphins, brilliant as they might be, because it would be basically impossible for us to find them.
The scientific quest for advanced aliens is about trying to imagine not just who might be out there but how we might find them. Which is how we end up at Dyson spheres.
Dyson spheres are named for Freeman Dyson, the physicist, mathematician, and general polymath. While most SETI scientists in the early 1960s were looking for extraterrestrial beacons, Dyson thought “one ought to be looking at the uncooperative society.” Not obstinate, just not actively trying to help us. “The idea of searching for radio signals was a fine idea,” he said in a 1981 interview, “but it only works if you have some cooperation at the other end. So I was always thinking about what to do if you were looking just for evidence of intelligent activities without anything in the nature of a message.” And you might as well start with the easiest technology to detect — the biggest or brightest. So the massive spheres Dyson popularized in his 1960 paper were the result of him asking, “What is the largest feasible technology?”
In the Star Trek: The Next Generation episode “Relics,” the Enterprise finds itself caught in a massive gravitational field, even though there are no stars nearby. The source, on the view screen, is a matte, dark gray sphere. Riker says its diameter is almost as wide as the Earth’s orbit.
Picard asks, with hushed wonder, “Mr. Data, could this be a Dyson sphere?”
Data replies, “The object does fit the parameters of Dyson’s theory.”
Commander Riker isn’t familiar with the concept, but Picard doesn’t give him any trouble for that. “It’s a very old theory, Number One. I’m not surprised that you haven’t heard of it.” He tells him that a twentieth century physicist, Freeman Dyson, had proposed that a massive, hollow sphere built around a star could capture all the star’s radiating energy for use. “A population living on the interior surface would have virtually inexhaustible sources of power.”
Riker asks, with some skepticism, if Picard thinks there are people living in the sphere.
“Possibly a great number of people, Commander,” Data says. “The interior surface area of a sphere this size is the equivalent of more than two hundred and fifty million Class M [Earthlike] planets.”
In Dyson’s thinking, the goal wasn’t living space but energy — how would a civilization reach Type II? And Dyson’s writing was clearly speculative. In the paper, he wrote, “I do not argue that this is what will happen in our system; I only say that this is what may have happened in other systems.” Decades later, astrophysicist Jason Wright took up the search.
One of the great benefits to this approach, Wright told me, is that “nature doesn’t make Dyson spheres.” Wright is a professor of astronomy and astrophysics at Penn State, where he is director of the Penn State Extraterrestrial Intelligence Center. But while the best known version of SETI is listening for radio signals (more on that in the next chapter), Wright focuses on looking for technosignatures — evidence of technology out among the stars. Technosignatures allow you to find those uncooperative aliens Dyson thought would make the best targets. We don’t even need to find the aliens, in this case, just proof they once existed. That could be a stargate, or a distant planet covered in elemental silicon (geologically unlikely, but technologically great for solar panels), or it could be a Dyson sphere.
Wright’s first big search for Dyson spheres was called Glimpsing Heat from Alien Technologies, or G-HAT. Or, even better, Gˆ (because that’s a G with a little hat on it). The premise was simple: Dyson spheres don’t just absorb energy, they transform it, inevitably radiating some waste as heat which we can see as infrared radiation. So, from 2012 to 2015, Wright and his team looked at about a million galaxies, searching for a Type II civilization on its way to Type III, having ensconced enough of a galaxy’s stars in Dyson spheres that the galaxy might glow unusually bright in infrared. (They surveyed galaxies rather than individual stars because, as Wright writes, “A technological species that could build a Dyson sphere could also presumably spread to nearby star systems,” so it’s fair to think a galaxy with one Dyson sphere may have several, and several would be easier to find than just one. Might as well start there.) None were found, but you know that because you would’ve surely heard about it if Wright’s search had succeeded.
Wright prides himself on the agnosticism of this approach. He doesn’t need aliens to be looking for us or to have any certain sociological impulses. They just need technology. “Technology uses energy,” he told me. “That’s kind of what makes it technology. Just like life uses energy.” That view makes demolishing a Jupiter-sized planet to build a star-encompassing megastructure seem almost comically simple, but Wright doesn’t even see the existence of a Dyson sphere as requiring massive coordination or forethought on the aliens’ part. It is truly, in his view, a low-intensity ask. He compared it to Manhattan, a fair example of a human “megastructure,” a massive, interconnected, artificial system. “It was planned to some degree, but no one was ever like, ‘Hey, let’s build a huge city here.’ It’s just every generation made it a little bigger.” He thinks a Dyson sphere or swarm could accumulate in a similar manner. “If the energy is out there to take and it’s just gonna fly away to space anyway, then why wouldn’t someone take it?”
Wright knows the objections: that this imagines a capitalist orientation, a drive to “dominate nature” that is by no means universal, not even among human societies. But for his research to work, this drive doesn’t need to be universal among the stars. It just has to have happened sometimes, enough for us to see the results. As he put it, “There’s nothing that drives all life on Earth to be large. In fact, most life is small. But some life is large.” And if an alien were to come to Earth, they wouldn’t need to see all the small life to know the planet was inhabited. A single elephant would do the trick.
Some hypothetical alien technosignatures might be less definitive. In 2017, astronomers detected a roughly quarter-mile-long rocky object slingshotting through the solar system. They realized that this object, called ‘Oumuamua, came from outside the system — because of its speed and the path it took. It was the first interstellar object ever detected in our system. While hopes or fears that it was an alien probe were not realized, it was a reminder that alien technology could be found closer to home, lurking around our own sun.
“We don’t know that there’s not technology here because we’ve never really checked,” Wright said. “I mean, I guess if they had cities on Mars, we would notice—if they were on the surface, anyway.” But, he pointed out, much of the Earth’s surface doesn’t have active, visible technology. The same could go for the solar system beyond Earth, too. There could be alien probes or debris, like ‘Oumuamua but constructed, moving so fast or so dark that we don’t see them. Maybe there’s an alien base on the dwarf planet Ceres, or buried under the surface of Mars. The lunar monolith in 2001: A Space Odyssey, Wright reminded me, was buried just under the surface of the moon. All those ancient interstellar gates sci-fi is fond of have to be found before they can be used. Don’t forget, until 2015, our best image of Pluto was a blurry blob. So much of what we know about even our own solar system is inference and assumption.
Skeptics love to ask, “Okay, so where is everyone?” But we don’t know for sure that they aren’t — or haven’t been — here.
This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-the-possibility-of-life-jaime-greene-hanover-square-press-113047089.html?src=rss
SpaceX has finally completed its first fully integrated Starship flight test after months of delays and a scrubbed launch earlier in the week, albeit not as smoothly as it would have liked. The combination of Starship and a Super Heavy booster lifted off from SpaceX's Boca Chica, Texas facility at 9:34AM Eastern after a brief hold, but failed to separate and tumbled down in a botched flip maneuver before exploding.
CEO Elon Musk previously told enthusiasts to temper their expectations. The Starship flight was meant to collect data for future boosted trips. As SpaceX explained during its livestream, clearing the launchpad was the only objective — anything beyond that was just a bonus. The company scrubbed the first attempt due to a frozen pressurant valve.
SpaceX hasn't yet said when it expects to conduct its next flight attempt. The outfit says it can produce more than one Starship at a time, so the delay won't necessarily be lengthy.
Starship and Super Heavy together are 394 feet tall, or taller than the Saturn V rocket. The 39 total Raptor engines (33 in the booster, six in Starship) are powerful enough to haul payloads up to 330,000lbs to low Earth orbit when fully reusable, and 550,000lbs when expendable. For context, even Falcon Heavy can bring 'just' 141,000lbs to that orbit. The new rocketry allows missions that simply weren't possible before, including eventual trips to the Moon and Mars that require extensive fuel and supplies.
Success with the next test is vital given the timing for both SpaceX's own plans and NASA's exploration efforts. SpaceX is counting on Starship for lunar tourism and other commercial flights. NASA's Artemis Moon landings, currently slated to start in December 2025, will depend on the rocket for reaching the surface and returning astronauts to the Orion capsule for the trip home. The sooner SpaceX can prove Starship is viable, the better its chances of minimizing further delays.
This article originally appeared on Engadget at https://www.engadget.com/spacexs-starship-completes-its-first-fully-integrated-flight-test-but-fails-to-reach-orbit-134226956.html?src=rss
Drones have a wide range of applications, but sending them into unfamiliar environments can be a challenge. Whether delivering a package, monitoring wildlife or conducting search and rescue missions, knowing how to navigate previously unseen surroundings (or ones that have changed significantly) is critical for a drone to effectively complete tasks. Researchers at the Massachusetts Institute of Technology (MIT) believe they've found a more effective way of helping drones fly through unknown spaces, thanks to liquid neural networks.
MIT created its liquid neural networks — which are inspired by the adaptability of organic brains — in 2021. The artificial intelligence and machine learning algorithms are able to learn and adapt to new data in the real world, not only while they're being trained. They can think on the fly, in other words.
They're able to understand information that's critical to a drone's task while dismissing irrelevant features of an environment, the researchers note. The liquid neural nets can also "dynamically capture the true cause-and-effect of their given task," according to a paper published in Science Robotics. This is "the key to liquid networks’ robust performance under distribution shifts."
The liquid neural nets outperformed other approaches to navigation tasks, the researchers noted in the paper. The algorithms "showed prowess in making reliable decisions in unknown domains like forests, urban landscapes and environments with added noise, rotation and occlusion," the university said in a press release.
MIT points out that deep learning systems can flounder when it comes to understanding causality and can't always adapt to different environments or conditions. That poses a problem for drones, which have to be able to react quickly to obstacles.
"Our experiments demonstrate that we can effectively teach a drone to locate an object in a forest during summer, and then deploy the model in winter, with vastly different surroundings, or even in urban settings with varied tasks such as seeking and following,” Computer Science and Artificial Intelligence Laboratory (CSAIL) director, MIT professor and paper co-author Daniela Rus said in a statement. “This adaptability is made possible by the causal underpinnings of our solutions. These flexible algorithms could one day aid in decision-making based on data streams that change over time, such as medical diagnosis and autonomous driving applications."
The researchers trained their system on data captured by a human pilot. This enabled them to account for the pilot's ability to use their navigation skills in new environments that have undergone significant changes in conditions and scenery. In testing the liquid neural nets, the researchers found that drones were able to track moving targets, for instance. They suggest that marrying limited data from expert sources with an improved ability to understand new environments could make drone operations more reliable and efficient.
“Robust learning and performance in out-of-distribution tasks and scenarios are some of the key problems that machine learning and autonomous robotic systems have to conquer to make further inroads in society critical applications,” says Alessio Lomuscio, PhD, professor of AI Safety (in the Department of Computing) at Imperial College London. “In this context the performance of liquid neural networks, a novel brain-inspired paradigm developed by the authors at MIT, reported in this study is remarkable."
This article originally appeared on Engadget at https://www.engadget.com/drones-may-better-navigate-unfamiliar-surroundings-with-the-help-of-liquid-neural-networks-180015474.html?src=rss
After many delays and a last-minute approval, SpaceX appears ready to conduct Starship's first orbital test flight. The next-generation rocket is now expected to launch from Boca Chica, Texas at 9:20AM Eastern with a livestream already available through the company's YouTube channel (below). While conditions are generally favorable, there are backup launch windows on Tuesday and Wednesday.
This is the first time SpaceX is launching a fully integrated Starship system with a Super Heavy booster underneath to get the main vehicle into orbit. The combination is about 394 feet tall, or taller than the Saturn V rocket. While both Starship and the booster are designed to be reusable, both will splash into the sea during the test.
There's no guarantee of success. In a Twitter Spaces chat on Sunday, Elon Musk told fans to "set your expectations low." Don't be surprised if something goes awry, in other words. Instead, this test is more about collecting data to improve future boosted Starship flights.
A successful test is crucial both for SpaceX's long-term plans, including lunar tourism, as well as NASA's exploration plans. The Artemis Moon landings beginning in December 2025 will use Starship to take crews from an orbiting Gateway station to the lunar surface. While those won't depend on a booster, NASA needs to know that Starship is reliable before these crewed missions can go forward.
This article originally appeared on Engadget at https://www.engadget.com/watch-spacexs-starship-orbital-test-flight-at-920am-et-124553427.html?src=rss
If you haven’t heard of Virginia Norwood, it’s about time you did. An aerospace pioneer whose career would have been historic even without its undercurrent of triumph over misogynistic discrimination, she invented the multispectral scanner at the heart of the Landsat satellite program that monitors the Earth’s surface today. Norwood passed away on March 27th at the age of 96, as reported by NASA and The New York Times.
She achieved all this despite significant pushback from the male-dominated industry before and after her rise. Though her talent was obvious, numerous employers declined to hire her after she graduated from the Massachusetts Institute of Technology. Sikorsky Aircraft, for example, told her they would never pay her requested salary, which was equivalent to the lowest rank in the civil service. A food lab she applied to asked her to promise not to get pregnant as a condition of employment. (She withdrew her application.) And the gun manufacturer Remington appreciated her “brilliant” ideas in an interview but told her they were hiring a man instead.
Her career finally progressed after landing jobs with the US Army Signal Corps Laboratories (where she designed a radar reflector for weather balloons) and Sylvania Electronic Defense Labs (where she set up the company’s first antenna lab). Norwood began working in the 1950s as one of a small group of women at Hughes Aircraft Company, where she gained a reputation as a resourceful problem-solver. “She said, ‘I was kind of known as the person who could solve impossible problems,’” her daughter, Naomi Norwood, told NASA. “So people would bring things to her, even pieces of other projects.”
In the late 1960s, the director of the US Geological Survey wanted to take photographs of the Earth from space to help manage land resources; the agency partnered with NASA on a plan to send satellites into space. Then working on an advanced design team in Hughes’ space and communications division, Norwood formed the idea that would define her legacy. She gathered feedback from agriculture, meteorology and geology experts to develop a scanner to record different light and energy spectra. Although it used existing technology made for (lower-altitude) agricultural observations, she adapted the tech to meet the Geological Survey’s and NASA’s goals.
However, she faced numerous obstacles in securing a spot for her Multispectral Scanner System (MSS) on the launch satellite. It was already hauling an enormous three-camera system developed by RCA using television tube technology, which the agencies viewed as the primary imaging source. To get the MSS onboard, Norwood was tasked with scaling back its weight to no more than 100 pounds, a significant downsizing; the RCA system took up most of the satellite’s 4,000-pound payload.
She reduced the device to recording only four energy bands (down from its original seven) to ensure it would make the trip as a secondary measurement system. The satellite launched on July 23rd, 1972, and the MSS captured its first images — of Oklahoma’s Ouachita Mountains — two days later. The results exceeded all expectations, forcing a quick reevaluation of the satellite payload’s hierarchy. Norwood’s system performed better and was more reliable than the clunky RCA project, which caused power surges and had to be shut down for good two weeks into the mission.
Landsat quickly became the de facto method of surveying the Earth’s surface. Norwood continued to improve the system, leading the development of Landsat 2, 3, 4 and 5. Landsat 8 and 9, the current versions monitoring the effects of climate change today, are still based on her initial concept. Her other projects included leading the microwave group in Hughes Aircraft’s missile lab and designing the ground-control communications equipment for NASA’s Surveyor lunar lander.
She reportedly had no issue with the “mother of Landsat” moniker her peers gave her. “Yes, I like it, and it’s apt,” she said. “I created it, I birthed it, and I fought for it.”
This article originally appeared on Engadget at https://www.engadget.com/remembering-virginia-norwood-the-mother-of-nasas-landsat-program-213705046.html?src=rss
Astronomers have discovered a new exoplanet — but this time, the way they found it may be as significant as the discovery itself. Researchers used a breakthrough combination of indirect and direct planetary detection to locate the distant world known as HIP 99770 b. It could inch us closer to finding Earth-like exoplanets among our (distantly) neighboring stars.
Direct imaging is what most casual observers would expect to lie at the heart of exoplanet hunting: using powerful telescopes with advanced optics to capture images of distant planetary bodies. However, direct imaging is most effective for planets orbiting far from their stars; an exoplanet closer to its sun is usually obscured by the star’s bright light, making it difficult to detect or image. (When planets orbit farther out, they’re more easily separated from the star’s glare.)
Meanwhile, indirect detection via precision astrometry looks for stars that appear to “wobble,” meaning they may be tugged by the gravity of an (otherwise unseen to us) exoplanet. This method can more easily detect the presence of planets orbiting closer to their stars — like the Earth’s relationship to the Sun. As a result, indirect methods have yielded over 5,000 exoplanet discoveries, while direct imaging has only captured about 20.
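The size of that wobble is straightforward to estimate: the star circles the system's barycenter, so its angular displacement scales with the planet-to-star mass ratio and the orbital distance, and shrinks with the system's distance from us. A back-of-the-envelope sketch (the function and its simplifications are illustrative, not taken from the study):

```python
def wobble_microarcsec(m_planet_mjup: float, m_star_msun: float,
                       a_au: float, d_parsec: float) -> float:
    """Approximate astrometric semi-amplitude: alpha ~ (m_p / M_*) * a / d.
    With the orbit `a_au` in AU and distance `d_parsec` in parsecs,
    a/d is in arcseconds; scale to microarcseconds for readability."""
    MJUP_PER_MSUN = 1047.6  # Jupiter masses per solar mass
    mass_ratio = m_planet_mjup / (m_star_msun * MJUP_PER_MSUN)
    return mass_ratio * (a_au / d_parsec) * 1e6

# A Jupiter analog around a Sun-like star seen from 10 parsecs away
# works out to roughly 500 microarcseconds.
print(round(wobble_microarcsec(1.0, 1.0, 5.2, 10.0)))
```

Sub-milliarcsecond shifts like this are far below what ground-based imaging can measure directly, which is why the Gaia and Hipparcos astrometry catalogs are so useful for flagging candidate stars.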
The international team of researchers, led by Thayne Currie of the National Astronomical Observatory of Japan (NAOJ) and the University of Texas at San Antonio, combined the two methods to discover the new exoplanet. First, they used data from the Hipparcos-Gaia Catalogue of Accelerations — a map tracking the precise positions and motions of nearly two million stars in the Milky Way — to identify the star HIP 99770 as a prime candidate for hosting an exoplanet. Then, they used Japan’s ultra-powerful Subaru telescope (in Mauna Kea, Hawaii) to directly image the newly discovered exoplanet, creatively titled HIP 99770 b.
An illustration from the European Space Agency depicts the newly found world, which is about 16 times as massive as Jupiter. Despite orbiting more than three times farther from its star than Jupiter does from our Sun, HIP 99770 b receives around the same amount of light as Jupiter, because its host star is about twice as massive as ours. The researchers say it may have water and carbon monoxide in its atmosphere.
Astronomers believe the new method combining direct and indirect imaging opens an exciting new door for future discoveries. “It provides a new path forward to discovering more exoplanets, and characterizing them in a far more holistic way than we could do before,” says Currie. Additionally, the group believes Gaia’s upcoming fourth data release, which will yield nearly double the previous version’s data, will make it easier to identify stars wobbling from the gravity of planetary bodies. “The discovery of this planet will spawn dozens of follow-on studies,” Currie added. The team is now studying data from about 50 other stars showing promise for hosting exoplanets.
“This is sort of a test run for the kind of strategy we need to be able to image an earth,” said Currie. “It demonstrates that an indirect method sensitive to a planet’s gravitational pull can tell you where to look and exactly when to look for direct imaging. So I think that’s really exciting.”
This article originally appeared on Engadget at https://www.engadget.com/researchers-use-novel-method-to-find-a-distant-exoplanet-175055335.html?src=rss
Researchers at Stanford Medicine have made a promising discovery that could lead to new cancer treatments in the future. Scientists conducted tests in which they altered the genomes of skin-based microbes and bacteria to fight cancer. These altered microbes were swabbed onto cancer-stricken mice and, lo and behold, tumors began to dissipate.
The bacterium in question, Staphylococcus epidermidis, was taken from the fur of mice and altered to produce a protein that stimulates the immune system against specific tumors. The experiment seemed to be a resounding success, with the modified bacteria killing aggressive types of metastatic skin cancer after being gently applied to the fur. The results were also achieved without any noticeable inflammation.
“It seemed almost like magic,” said Michael Fischbach, PhD, an associate professor of bioengineering at Stanford. “These mice had very aggressive tumors growing on their flank, and we gave them a gentle treatment where we simply took a swab of bacteria and rubbed it on the fur of their heads.”
This is yet another foray into the misunderstood world of microbiomes and all of the bacteria that reside there. Gut biomes get all of the press these days, but the skin also plays host to millions upon millions of bacteria, fungi and viruses, and the purpose of these entities is often unknown.
In this instance, scientists found that S. epidermidis naturally triggers the production of immune cells called CD8 T cells. The researchers essentially hijacked this response, engineering S. epidermidis to carry antigens associated with skin cancer tumors so that the resulting CD8 T cells would target those tumors. When the T cells encountered a matching tumor, they began to rapidly multiply, shrinking the mass or extinguishing it entirely.
“Watching those tumors disappear — especially at a site distant from where we applied the bacteria — was shocking,” Fischbach said. “It took us a while to believe it was happening.”
As with all burgeoning cancer treatments, there are some heavy caveats. First, these experiments were conducted on mice. Humans and mice are biologically similar in many respects, but a great many treatments that work in mice turn out to be duds in people. The Stanford researchers don't yet know whether S. epidermidis triggers the same immune response in humans, even though our skin is littered with the stuff, so they may need to find a different microbe to alter. Also, this treatment is designed to treat skin cancer tumors and is applied topically; it remains to be seen whether the benefits carry over to internal cancers.
With that said, the Stanford team says they expect human trials to start within the next few years, though more testing is needed on both mice and other animals before going ahead with people. Scientists hope that this treatment could eventually be pointed at all kinds of infectious diseases, in addition to cancer cells.
This article originally appeared on Engadget at https://www.engadget.com/scientists-have-successfully-engineered-bacteria-to-fight-cancer-in-mice-165141857.html?src=rss
Researchers have used machine learning to sharpen a previously released image of a black hole. According to a report published today in The Astrophysical Journal Letters, the reworked portrait of the black hole at the center of the galaxy Messier 87, over 53 million light years from Earth, shows a thinner ring of light and matter surrounding its center.
The original images were captured in 2017 by the Event Horizon Telescope (EHT), a network of radio telescopes around Earth that combine to act as a planet-sized super-imaging tool. The initial picture looked like a “fuzzy donut,” as described by NPR, but researchers used a new method called PRIMO to reconstruct a more accurate image. PRIMO is “a novel dictionary-learning-based algorithm” that learns to “recover high-fidelity images even in the presence of sparse coverage” by training on generated simulations of over 30,000 black holes. In other words, it uses machine learning data based on what we know about the universe’s physical laws — and black holes specifically — to produce a better-looking and more accurate shot from the raw data captured in 2017.
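The core idea behind a dictionary-based reconstruction like this can be sketched in a few lines. The toy below is not PRIMO itself: it stands in simulated black-hole images with simple sine patterns, and stands in sparse telescope coverage with a random subset of observed pixels. The point is only to show how fitting a learned dictionary to sparse measurements can recover a full image:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_atoms = 64, 8

# Toy "dictionary" of basis patterns, standing in for structures learned
# from tens of thousands of simulated black-hole images.
x = np.linspace(0, 1, n_pix)
basis = np.stack([np.sin((k + 1) * np.pi * x) for k in range(n_atoms)], axis=1)

# The true image lies in the span of the dictionary.
true_coeffs = rng.normal(size=n_atoms)
image = basis @ true_coeffs

# Sparse coverage: only 16 of 64 pixels are actually measured,
# loosely analogous to the EHT's incomplete sampling of the sky.
observed = rng.choice(n_pix, size=16, replace=False)

# Fit dictionary coefficients to the observed pixels alone...
coeffs, *_ = np.linalg.lstsq(basis[observed], image[observed], rcond=None)

# ...then fill in every pixel from the fitted coefficients.
reconstruction = basis @ coeffs
print(np.max(np.abs(reconstruction - image)))  # near zero: full image recovered
```

Because the dictionary encodes what images in this family look like, a handful of measurements pins down the whole picture; PRIMO applies the same principle with a dictionary trained on physically simulated black holes.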
Black holes are mysterious and strange regions of space where gravity is so strong that nothing, not even light, can escape. They form when dying stars collapse under their own gravity, squeezing the star’s mass into a tiny space. The boundary around a black hole is called the event horizon, a point of no return where anything that crosses it (whether light, matter or Matthew McConaughey) won’t be coming back.
“What we really do is we learn the correlations between different parts of the image. And so we do this by analyzing tens of thousands of high-resolution images that are created from simulations,” astrophysicist Lia Medeiros of the Institute for Advanced Study in Princeton, NJ, an author of the paper, told NPR. “If you have an image, the pixels close to any given pixel are not completely uncorrelated. It’s not that each pixel is doing completely independent things.”
The researchers say the new image is consistent with Albert Einstein’s predictions. However, they expect further research in machine learning and telescope hardware to lead to additional revisions. “In 20 years, the image might not be the image I’m showing you today,” said Medeiros. “It might be even better.”
This article originally appeared on Engadget at https://www.engadget.com/researchers-used-machine-learning-to-improve-the-first-photo-of-a-black-hole-170722614.html?src=rss
SpaceX could conduct Starship’s first orbital flight test as early as the week after next. On Thursday, the private space firm tweeted new photos of the super heavy-lift rocket at its Boca Chica facility in Texas. “Starship fully stacked at Starbase,” SpaceX said of the images. “Team is working towards a launch rehearsal next week followed by Starship’s first integrated flight test ~week later pending regulatory approval.” That same day, SpaceX CEO Elon Musk offered an even more aggressive timeline. “Starship is stacked & ready to launch next week, pending regulatory approval,” he said on Twitter.
Starship fully stacked at Starbase. Team is working towards a launch rehearsal next week followed by Starship’s first integrated flight test ~week later pending regulatory approval pic.twitter.com/9VbJLppswp
The date of Starship’s first orbital flight has been a moving target for nearly two years. At the start of February, a week after SpaceX successfully carried out the rocket’s first-ever stacked fueling test, Musk said the company would attempt to launch Starship in March if its remaining tests went well. Days later, SpaceX attempted to static fire all of the vehicle’s 33 first-stage Raptor engines, something it had not tried to do before. The trial was a critical step toward Starship’s first orbital flight, though the rocket didn’t exactly ace the test, with two engines failing before the end of the firing.
Still, the timeline Musk shared this week may be overly optimistic. According to Space.com, the US Federal Aviation Administration (FAA) set a provisional April 17th launch window for Starship. However, the outlet reports the FAA has yet to grant SpaceX a launch license for the rocket, something it will need to do before Starship can legally fly.
This article originally appeared on Engadget at https://www.engadget.com/spacex-will-conduct-a-starship-launch-rehearsal-next-week-173504593.html?src=rss