NASA has achieved a technological milestone that could one day play an important role in missions to the Moon and beyond. This week, the space agency revealed (via Space.com) that the International Space Station’s Environmental Control and Life Support System (ECLSS) is recycling 98 percent of all water astronauts bring onboard the station. Functionally, you can imagine the system operating in a way similar to the Stillsuits described in Frank Herbert’s Dune. One part of the ECLSS uses “advanced dehumidifiers” to capture the moisture the station’s crew breathes and sweats out as they go about their daily tasks.
Another subsystem, the imaginatively named “Urine Processor Assembly,” recovers water from astronauts’ urine with the help of vacuum distillation. According to NASA, the distillation process produces water and a urine brine that still contains reclaimable H2O. The agency recently began testing a new device that can extract the water remaining in that brine, and it’s thanks to that system that NASA observed a 98 percent water recovery rate on the ISS, up from the roughly 93 to 94 percent the station was recycling previously.
“This is a very important step forward in the evolution of life support systems,” said NASA’s Christopher Brown, who is part of the team that manages the International Space Station’s life support systems. “Let’s say you collect 100 pounds of water on the station. You lose two pounds of that and the other 98 percent just keeps going around and around. Keeping that running is a pretty awesome achievement.”
If the thought of someone else drinking their urine is causing you to gag, fret not. “The processing is fundamentally similar to some terrestrial water distribution systems, just done in microgravity,” said Jill Williamson, NASA’s ECLSS water subsystems manager. “The crew is not drinking urine; they are drinking water that has been reclaimed, filtered, and cleaned such that it is cleaner than what we drink here on Earth.”
According to Williamson, systems like the ECLSS will be critical as NASA conducts more missions beyond Earth's orbit. “The less water and oxygen we have to ship up, the more science that can be added to the launch vehicle,” Williamson said. “Reliable, robust regenerative systems mean the crew doesn’t have to worry about it and can focus on the true intent of their mission.”
The problem with studying the universe around us is that it is simply too big. The stars overhead remain too far away to interact with directly, so we are relegated to testing our theories about the formation of galaxies against observable data.
Simulating these celestial bodies on computers has proven an immensely useful aid in wrapping our heads around the nature of reality and, as Andrew Pontzen explains in his new book, The Universe in a Box: Simulations and the Quest to Code the Cosmos, recent advances in supercomputing technology are further revolutionizing our capability to model the complexities of the cosmos (not to mention myriad Earth-based challenges) on a smaller scale. In the excerpt below, Pontzen looks at the recent emergence of astronomy-focused AI systems, what they're capable of accomplishing in the field and why he's not too worried about losing his job to one.
As a cosmologist, I spend a large fraction of my time working with supercomputers, generating simulations of the universe to compare with data from real telescopes. The goal is to understand the effect of mysterious substances like dark matter, but no human can digest all the data held on the universe, nor all the results from simulations. For that reason, artificial intelligence and machine learning are a key part of cosmologists’ work.
Consider the Vera Rubin Observatory, a giant telescope built atop a Chilean mountain and designed to repeatedly photograph the sky over the coming decade. It will not just build a static picture: it will particularly be searching for objects that move (asteroids and comets), or change brightness (flickering stars, quasars and supernovae), as part of our ongoing campaign to understand the ever-changing cosmos. Machine learning can be trained to spot these objects, allowing them to be studied with other, more specialized telescopes. Similar techniques can even help sift through the changing brightness of vast numbers of stars to find the telltale signs of which ones host planets, contributing to the search for life in the universe. Beyond astronomy there is no shortage of scientific applications: Google’s artificial intelligence subsidiary DeepMind, for instance, has built a network that can outperform all known techniques for predicting the shapes of proteins starting from their molecular structure, a crucial and difficult step in understanding many biological processes.
These examples illustrate why scientific excitement around machine learning has built during this century, and there have been strong claims that we are witnessing a scientific revolution. As far back as 2008, Chris Anderson wrote an article for Wired magazine that declared the scientific method, in which humans propose and test specific hypotheses, obsolete: ‘We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.’
I think this is taking things too far. Machine learning can simplify and improve certain aspects of traditional scientific approaches, especially where processing of complex information is required. Or it can digest text and answer factual questions, as illustrated by systems like ChatGPT. But it cannot entirely supplant scientific reasoning, because that is about the search for an improved understanding of the universe around us. Finding new patterns in data or restating existing facts are only narrow aspects of that search. There is a long way to go before machines can do meaningful science without any human oversight.
To understand the importance of context and understanding in science, consider the case of the OPERA experiment which in 2011 seemingly determined that neutrinos travel faster than the speed of light. The claim is close to a physics blasphemy, because relativity would have to be rewritten; the speed limit is integral to its formulation. Given the enormous weight of experimental evidence that supports relativity, casting doubt on its foundations is not a step to be taken lightly.
Knowing this, theoretical physicists queued up to dismiss the result, suspecting the neutrinos must actually be traveling slower than the measurements indicated. Yet, no problem with the measurement could be found – until, six months later, OPERA announced that a cable had been loose during their experiment, accounting for the discrepancy. Neutrinos travelled no faster than light; the data suggesting otherwise had been wrong.
Surprising data can lead to revelations under the right circumstances. The planet Neptune was discovered when astronomers noticed something awry with the orbits of the other planets. But where a claim is discrepant with existing theories, it is much more likely that there is a fault with the data; this was the gut feeling that physicists trusted when seeing the OPERA results. It is hard to formalize such a reaction into a simple rule for programming into a computer intelligence, because it is midway between the knowledge-recall and pattern-searching worlds.
The human elements of science will not be replicated by machines unless they can integrate their flexible data processing with a broader corpus of knowledge. There is an explosion of different approaches toward this goal, driven in part by the commercial need for computer intelligences to explain their decisions. In Europe, if a machine makes a decision that impacts you personally – declining your application for a mortgage, maybe, or increasing your insurance premiums, or pulling you aside at an airport – you have a legal right to ask for an explanation. That explanation must necessarily reach outside the narrow world of data in order to connect to a human sense of what is reasonable or unreasonable.
Problematically, it is often not possible to generate a full account of how machine-learning systems reach a particular decision. They use many different pieces of information, combining them in complex ways; the only truly accurate description is to write down the computer code and show the way the machine was trained. That is accurate but not very explanatory. At the other extreme, one might point to an obvious factor that dominated a machine’s decision: you are a lifelong smoker, perhaps, and other lifelong smokers died young, so you have been declined for life insurance. That is a more useful explanation, but might not be very accurate: other smokers with a different employment history and medical record have been accepted, so what precisely is the difference? Explaining decisions in a fruitful way requires a balance between accuracy and comprehensibility.
In the case of physics, using machines to create digestible, accurate explanations which are anchored in existing laws and frameworks is an approach in its infancy. It starts with the same demands as commercial artificial intelligence: the machine must not just point to its decision (that it has found a new supernova, say) but also give a small, digestible amount of information about why it has reached that decision. That way, you can start to understand what it is in the data that has prompted a particular conclusion, and see whether it agrees with your existing ideas and theories of cause and effect. This approach has started to bear fruit, producing simple but useful insights into quantum mechanics, string theory, and (from my own collaborations) cosmology.
These applications are still all framed and interpreted by humans. Could we imagine instead having the computer framing its own scientific hypotheses, balancing new data with the weight of existing theories, and going on to explain its discoveries by writing a scholarly paper without any human assistance? This is not Anderson’s vision of the theory-free future of science, but a more exciting, more disruptive and much harder goal: for machines to build and test new theories atop hundreds of years of human insight.
After years of development, Virgin Galactic is finally ready to take paying customers. The company has confirmed that its first commercial spaceflight, Galactic 01, will launch between June 27th and June 30th. This inaugural mission will carry three people from Italy's Air Force and National Research Council as they conduct microgravity research. Virgin had anticipated a late June start, but hadn't committed to that window until now.
The company already has follow-up flights scheduled. Galactic 02 is expected to launch in early August and will carry a private crew. Virgin will fly on a monthly basis afterward, although details of future missions aren't yet available. At least the first two flights will stream live through the company's website.
Virgin conducted its last pre-commercial flight test, its fifth spaceflight of any kind, in late May. The company faced numerous delays and incidents getting to that point, however. It completed its first SpaceShipTwo test flights in 2013, but paused its efforts after the deadly 2014 crash of VSS Enterprise. Flight testing didn't resume until VSS Unity's glide test at the end of 2016. The firm finally reached space in 2018, but had to wait until 2021 to complete its first fully crewed spaceflight with founder Richard Branson aboard. It pushed back commercial service multiple times for various reasons, most recently delays in upgrading the VMS Eve "mothership" that carries SpaceShipTwo vehicles to their launch altitude.
The debut is important for Virgin's business. Virgin has operated at a loss for years, losing more than $500 million just in 2022. Commercial service won't recoup those investments quickly even at $450,000 per ticket, but it will give the company a significant source of revenue.
This isn't the start of space tourism, either; in that sense, Virgin is still trailing Blue Origin. Galactic 01 will put Virgin ahead of SpaceX, though, as that company's Starship rocket has yet to reach space and isn't expected to launch its first lunar tourist flights until late 2024 at the earliest. While Virgin is less ambitious than Elon Musk's operation, it's also achieving its goals sooner.
Saturn’s moon Enceladus has phosphorus. The finding came from recently analyzed icy particles emitted from the natural satellite’s ocean plumes, detected by NASA’s Cassini spacecraft. The discovery means Enceladus has all the chemical building blocks for life as we know it on Earth. “This is the final one saying, ‘Yes, Enceladus does have all of the ingredients that typical Earth life would need to live and that the ocean there is habitable for life as we know it,’” Morgan Cable, astrobiology chemist at NASA’s Jet Propulsion Laboratory, told The Wall Street Journal.
Cassini, which plunged to its demise in Saturn’s atmosphere in 2017, collected data by passing through Enceladus’ continually erupting geysers at its south pole and through Saturn’s E ring, which also contains escaped particles from the moon. Beneath its icy crust, Enceladus has a warm subsurface ocean, over 30 miles deep, enveloping the entire moon. The eruptions at its south pole spit icy particles into space, allowing research craft like Cassini to study the ocean’s chemical makeup without taking a dip or even touching the moon’s surface.
Data from previous missions indicated the moon had all of life’s essential building blocks — carbon, hydrogen, nitrogen, oxygen and sulfur — except for phosphorus. A team of planetary scientists found nine grains containing phosphate (phosphorus bound to oxygen atoms) among around 1,000 samples initially overlooked by researchers. The tiny amount detected reflects phosphorus’ scarcity. “Of the six bioessential elements, phosphorus is by far the rarest in the cosmos,” said Frank Postberg, the study’s lead author.
Of course, Enceladus containing the requirements for life doesn’t necessarily mean life exists on the moon. “The next step is to figure out if indeed it is inhabited, and it is going to take a future mission to answer that question,” Cable said. “But this is exciting, because it makes Enceladus an even more compelling destination to go and do that kind of search.” NASA will get a chance to learn more when the Dragonfly mission heads for Saturn’s moon Titan in 2027; another proposed mission could arrive at Enceladus around 2050. In addition, the James Webb Space Telescope may further help illuminate the chemical makeup of Enceladus’ warm subterranean ocean.
Researchers have developed a promising synthetic heart valve that may one day grow along with the children who receive it. Harvard’s Wyss Institute and John A. Paulson School of Engineering and Applied Sciences (SEAS) created what they call FibraValve. The implant can be manufactured in minutes using a spun-fiber method that lets researchers shape the valve’s delicate flaps at a microscopic level — ready to be colonized by the patient’s living cells and to develop with them as they mature.
FibraValve is a follow-up to JetValve, the team’s 2017 artificial heart valve that employed many of the same principles. The updated version uses “focused rotary jet spinning,” which adds streams of focused air to more quickly and accurately collect synthetic fibers on a spinning mandrel — making it easier to fine-tune the valve’s shape. As a result, the polymer’s micro- and nano-fibers can more precisely replicate the tissue structure of an organic heart valve. The manufacturing process takes less than 10 minutes; alternative methods can require hours.
The technique also uses “a new, custom polymer material” called PLCL (a combination of polycaprolactone and polylactic acid) that can last inside a patient’s body for about six months — enough time (in theory) for the patient’s cells to infiltrate the structure and take over. Although it’s only been successfully tested in sheep so far, the long-term vision is for the resulting organic tissue to develop with human children as they mature, potentially eliminating the need for risky replacement surgeries as their bodies grow. “Our goal is for the patient’s native cells to use the device as a blueprint to regenerate their own living valve tissue,” said corresponding author Kevin “Kit” Parker.
In the researchers’ test on a living sheep, the FibraValve “started to function immediately, its leaflets opening and closing to let blood flow through with every heartbeat.” Additionally, they observed red and white blood cells and fibrin protein collecting on the valve’s scaffolding within the first hour. The scientists say the synthetic valve showed no signs of damage or other problems. “This approach to heart valve replacement might open the door towards customized medical implants that regenerate and grow with the patient, making children’s lives better,” said co-author Michael Peters.
The research is still preliminary, and the team plans to conduct longer-term animal testing over weeks and months for further evaluation. However, they believe the breakthrough could eventually find other uses, including creating different valves, cardiac patches and blood vessels. You can read the entire paper in the journal Matter.
It seems like every few weeks, NASA, the European Space Agency (ESA) and the Canadian Space Agency (CSA) drop an impressive image from the James Webb Space Telescope that is stunning to behold and advances our knowledge of the universe. The latest is of the barred spiral galaxy NGC 5068, called a "barred" galaxy because of the bright central bar you can see in the upper left of the above image. It's a combination image consisting of infrared shots taken from the telescope's MIRI (Mid-Infrared Instrument) and NIRCam (Near-Infrared Camera) sensors.
What those sensors captured is a galaxy in the Virgo constellation about 20 million light-years from Earth, and because the JWST can see through the dust and gas that surrounds stars as they're born, the instrument is particularly suited to producing images that show the process of star formation.
Looking at the two individual images that make up the composite reveals different layers of the galaxy. As Gizmodo notes, the image produced by the MIRI sensor provides a view of the galaxy's structure and the glowing gas bubbles that represent newly formed stars.
The second image, taken with the NIRCam, puts the focus on a huge swath of stars in the foreground. The composite, meanwhile, shows both the enormous number of stars in the region and the highlights of the stars that have just been "born."
There isn't one specific breakthrough finding in this image; instead, NASA notes that this is part of a wider effort to collect as many images of star formation from nearby galaxies as it can. (No, 20 million light-years doesn't exactly feel nearby to me, either, but that's how things go in space.) NASA pointed to another few images as other "gems" from its collection of star births, including this impressive "Phantom Galaxy" that was shown off last summer. As for what the agency hopes to learn? Simply that star formation "underpins so many fields in astronomy, from the physics of the tenuous plasma that lies between stars to the evolution of entire galaxies." NASA goes on to say that it hopes the data being gathered of galaxies like NGC 5068 can help to "kick-start" major scientific advances, though what those might be remains a mystery.
We Americans love to have ourselves a big old time. It's not just our waistlines that have exploded outward since the post-WWII era. Our houses have grown larger, as have the appliances within them, the vehicles in their driveways, the income inequalities between ourselves and our neighbors, and the challenges we face on a rapidly warming planet. In his new book, Size: How It Explains the World, Dr. Vaclav Smil, Distinguished Professor Emeritus at the University of Manitoba, takes readers on a multidisciplinary tour of the social quirks, economic intricacies, and biological peculiarities that result from our function following our form.
From SIZE by Vaclav Smil. Copyright 2023 by Vaclav Smil. Reprinted courtesy of William Morrow, an imprint of HarperCollins Publishers.
Modernity’s Infatuation With Larger Sizes
A single human lifetime will have witnessed many obvious examples of this trend in sizes. Motor vehicles are the planet’s most numerous heavy mobile objects. The world now has nearly 1.5 billion of them, and they have been getting larger: today’s bestselling pickup trucks and SUVs are easily twice or even three times heavier than Volkswagen’s Käfer, Fiat’s Topolino, or Citroën’s deux chevaux — family cars whose sales dominated the European market in the early 1950s.
Sizes of homes, refrigerators, and TVs have followed the same trend, not only because of technical advances but because the post–Second World War sizes of national GDPs, so beloved by the growth-enamored economists, have grown by historically unprecedented rates, making these items more affordable. Even when expressed in constant (inflation-adjusted) monies, US GDP has increased 10-fold since 1945; and, despite the postwar baby boom, the per capita rate has quadrupled. This affluence-driven growth can be illustrated by many other examples, ranging from the heights of the highest skyscrapers to the capacity of the largest airplanes or the multistoried cruise ships, and from the size of universities to the size of sports stadiums. Is this all just an expected, inevitable replication of the general evolutionary trend toward larger size?
We know that life began small (at the microbial level as archaea and bacteria that emerged nearly 4 billion years ago), and that, eventually, evolution took a decisive turn toward larger sizes with the diversification of animals during the Cambrian period, which began more than half a billion years ago. Large size (increased body mass) offers such obvious competitive advantages as increased defense against predators (compare a meerkat with a wildebeest) and access to a wider range of digestible biomass, outweighing the equally obvious disadvantages of lower numbers of offspring, longer gestation periods (longer time to reach maturity), and higher food and water needs. Large animals also live (some exceptions aside — some parrots make it past 50 years!) longer than smaller ones (compare a mouse with a cat, a dog with a chimpanzee). But at its extreme the relationship is not closely mass-bound: elephants and blue whales do not top the list; Greenland sharks (more than 250 years), bowhead whales (200 years), and Galapagos tortoises (more than 100 years) do.
The evolution of life is, indeed, the story of increasing size — from solely single-celled microbes to large reptiles and modern African megafauna (elephants, rhinos, giraffes). The maximum body length of organisms now spans the range of eight orders of magnitude, from 200 nanometers (Mycoplasma genitalium) to 31 meters (the blue whale, Balaenoptera musculus), and the extremes of biovolume for these two species range from 8 × 10^-12 cubic millimeters to 1.9 × 10^11 cubic millimeters, a difference of about 22 orders of magnitude.
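For readers who want to check that arithmetic, here is a minimal Python sketch (my own illustration, not part of Smil's text) that recomputes the two spans from the figures quoted above.

```python
import math

# Figures quoted in the text.
length_min_m = 200e-9        # Mycoplasma genitalium, 200 nanometers
length_max_m = 31.0          # blue whale, 31 meters
volume_min_mm3 = 8e-12       # smallest biovolume, cubic millimeters
volume_max_mm3 = 1.9e11      # largest biovolume, cubic millimeters

# Orders of magnitude separating the extremes.
length_span = math.log10(length_max_m / length_min_m)       # ~8.2
volume_span = math.log10(volume_max_mm3 / volume_min_mm3)   # ~22.4

print(f"Body length spans about {length_span:.0f} orders of magnitude")
print(f"Biovolume spans about {volume_span:.0f} orders of magnitude")
```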
The evolutionary increase in size is obvious when comparing the oldest unicellular organisms, archaea and bacteria, with later, larger, protozoans and metazoans. But average biovolumes of most extinct and living multicellular animals have not followed a similar path toward larger body sizes. The average sizes of mollusks and echinoderms (starfish, urchins, sea cucumbers) do not show any clear evolutionary trend, but marine fish and mammals have grown in size. The size of dinosaurs increased, but then diminished as the animals approached extinction. The average sizes of arthropods have shown no clear growth trend for half a billion years, but the average size of mammals has increased by about three orders of magnitude during the past 150 million years.
Analyses of living mammalian species show that subsequent generations tend to be larger than their parents, but a single growth step is inevitably fairly limited. In any case, the emergence of some very large organisms has done nothing to diminish the ubiquity and importance of microbes: the biosphere is a highly symbiotic system based on the abundance and variety of microbial biomass, and it could not operate and endure without its foundation of microorganisms. In view of this fundamental biospheric reality (big relying on small), is the anthropogenic tendency toward objects and design of larger sizes an aberration? Is it just a temporary departure from a long-term stagnation of growth that existed in premodern times as far as both economies and technical capabilities were concerned, or perhaps only a mistaken impression created by the disproportionate attention we pay nowadays to the pursuit and possession of large-size objects, from TV screens to skyscrapers?
The genesis of this trend is unmistakable: size enlargements have been made possible by the unprecedented deployment of energies, and by the truly gargantuan mobilization of materials. For millennia, our constraints — energies limited to human and animal muscles; wood, clay, stone, and a few metals as the only choices for tools and construction — circumscribed our quest for larger-designed sizes: they determined what we could build, how we could travel, how much food we could harvest and store, and the size of individual and collective riches we could amass. All of that changed, rather rapidly and concurrently, during the second half of the 19th century.
At the century’s beginning, the world had very low population growth. It was still energized by biomass and muscles, supplemented by flowing water turning small wheels and by wind powering mills as well as relatively small ships. The world of 1800 was closer to the world of 1500 than it was to the mundane realities of 1900. By 1900, half of the world’s fuel production came from coal and oil, electricity generation was rapidly expanding, and new prime movers—steam engines, internal combustion engines, steam and water turbines, and electric motors—were creating new industries and transportation capabilities. And this new energy abundance was also deployed to raise crop yields (through fertilizers and the mechanization of field tasks), to produce old materials more affordably, and to introduce new metals and synthetics that made it possible to make lighter or more durable objects and structures.
This great transformation only intensified during the 20th century, when it had to meet the demands of a rapidly increasing population. Despite the two world wars and the Great Depression, the world’s population had never grown as rapidly as it did between 1900 and 1970. Larger sizes of everything, from settlements to consumer products, were needed both to meet the growing demand for housing, food, and manufactured products and to keep the costs affordable. This quest for larger size—larger coal mines or hydro stations able to supply distant megacities with inexpensive electricity; highly automated factories producing for billions of consumers; container vessels powered by the world’s largest diesel engines and carrying thousands of steel boxes between continents—has almost invariably coincided with lower unit costs, making refrigerators, cars, and mobile phones widely affordable. But it has required higher capital costs and often unprecedented design, construction, and management efforts.
Too many notable size records have been broken since the beginning of the 20th century to recount them all here, but the following handful of increases (all quantified by 1900–2020 multiples, calculated from the best available information) indicates the extent of these gains. Capacity of the largest hydroelectricity-generating station is now more than 600 times larger than it was in 1900. The volume of blast furnaces — the structures needed to produce cast iron, modern civilization’s most important metal — has grown 10 times, to 5,000 cubic meters. The height of skyscrapers using steel skeletons has grown almost exactly nine times, to the Burj Khalifa’s 828 meters. The population of the largest city has seen an 11-fold increase, to Greater Tokyo’s 37 million people. The size of the world’s largest economy (using the total in constant monies): still that of the US, now nearly 32 times larger.
But nothing has seen a size rise comparable to the amount of information we have amassed since 1900. In 1897, when the Library of Congress moved to its new headquarters in the Thomas Jefferson Building, it was the world’s largest depository of information and held about 840,000 volumes, the equivalent of perhaps no more than 1 terabyte if stored electronically. By 2009 the library had about 32 million books and printed items, but those represented only about a quarter of all physical collections, which include manuscripts, prints, photographs, maps, globes, moving images, sound recordings, and sheet music, and many assumptions must be made to translate these holdings into electronic storage equivalents: in 1997 Michael Lesk estimated the total size of the Library’s holdings at “perhaps about 3 petabytes,” and hence at least a 3,000-fold increase in a century.
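As a back-of-the-envelope check on those storage figures (an illustrative sketch of my own, not part of the excerpt), a few lines of Python reproduce the size per volume implied by the 1897 estimate and the century-long multiple.

```python
# Back-of-the-envelope check of the storage figures quoted above (illustrative only).
volumes_1897 = 840_000
holdings_1897_tb = 1.0      # "perhaps no more than 1 terabyte"
holdings_1997_pb = 3.0      # Lesk's "perhaps about 3 petabytes"

mb_per_volume = holdings_1897_tb * 1e6 / volumes_1897       # terabytes -> megabytes
fold_increase = holdings_1997_pb * 1e3 / holdings_1897_tb   # petabytes -> terabytes

print(f"Implied size per volume: about {mb_per_volume:.1f} MB")  # ~1.2 MB
print(f"Growth in a century: about {fold_increase:,.0f}-fold")   # 3,000-fold
```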
Moreover, for many new products and designs it is impossible to calculate the 20th-century increases because they only became commercialized after 1900, and subsequently grew one, two, or even three orders of magnitude. The most consequential examples in this category include passenger air-travel (Dutch KLM, the first commercial airline, was established in 1919); the preparation of a wide variety of plastics (with most of today’s dominant compounds introduced during the 1930s); and, of course, advances in electronics that made modern computing, telecommunications, and process controls possible (the first vacuum-tube computers used during the Second World War; the first microprocessors in 1971). While these advances have been creating very large numbers of new, small companies, increasing shares of global economic activity have been coming from ever-larger enterprises. This trend toward larger operating sizes has affected not only traditional industrial production (be it of machinery, chemicals, or foods) and new ways of automated product assembly (microchips or mobile phones), but also transportation and a wide range of services, from banks to consulting companies.
This corporate aggrandization is measurable from the number and the value of mergers, acquisitions, alliances, and takeovers. There was a rise from fewer than 3,000 mergers — worth in total about $350 billion — in 1985 to a peak of more than 47,000 mergers worth nearly $5 trillion in 2007, and each of the four pre-COVID years had transactions worth more than $3 trillion. Car production remains fairly diversified, with the top five (in 2021 by revenue: Volkswagen, Toyota, Daimler, Ford, General Motors) accounting for just over a third of the global market share, compared to about 80 percent for the top five mobile phone makers (Apple, Samsung, Xiaomi, Huawei, Oppo) and more than 90 percent for the Boeing–Airbus commercial jetliner duopoly.
But another size-enlarging trend has been much in evidence: increases in size that have nothing to do with satisfying the needs of growing populations, but instead serve as markers of status and conspicuous consumption. Sizes of American houses and vehicles provide two obvious, and accurately documented, examples of this trend, and while imitating the growth of housing has been difficult in many countries (including Japan and Belgium) for spatial and historical reasons, the rise of improbably sized vehicles has been a global trend.
A Ford Model T — the first mass-produced car, introduced in 1908 and made until 1927 — is the obvious baseline for size comparisons. The 1908 Model T was a weakly powered (15 kilowatts), small (3.4 meters), and light (540 kilograms) vehicle, but some Americans born in the mid-1920s lived long enough to see the arrival of improbably sized and misleadingly named sports utility vehicles that have become global favorites. The Chevrolet Suburban (265 kilowatts, 2,500 kilograms, 5.7 meters) wins on length, but Rolls Royce offers a 441-kilowatt Cullinan and the Lexus LX 570 weighs 2,670 kilograms.
These size gains boosted the vehicle-to-passenger weight ratio (assuming a 70-kilogram adult driver) from 7.7 for the Model T to just over 38 for the Lexus LX and to nearly as much for the GMC Yukon. For comparison, the ratio is about 18 for my Honda Civic — and, looking at a few transportation alternatives, it is just over 6 for a Boeing 787, no more than 5 for a modern intercity bus, and a mere 0.1 for a light 7-kilogram bicycle. Remarkably, this increase in vehicle size took place during the decades of heightened concern about the environmental impact of driving (a typical SUV emits about 25 percent more greenhouse gases than the average sedan).
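Those ratios are easy to reproduce from the curb weights given above; the short Python sketch below is an illustration rather than anything from the book, and it assumes the same 70-kilogram occupant.

```python
# Vehicle-to-passenger weight ratios, using the masses given in the text
# and the 70-kilogram adult occupant assumed there (illustrative sketch).
OCCUPANT_KG = 70

vehicles_kg = {
    "Ford Model T (1908)": 540,
    "Chevrolet Suburban": 2_500,
    "Lexus LX 570": 2_670,
    "7 kg bicycle": 7,
}

for name, mass in vehicles_kg.items():
    print(f"{name}: {mass / OCCUPANT_KG:.1f}")
# Model T ~7.7, Suburban ~35.7, Lexus ~38.1, bicycle ~0.1
```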
This American preference for larger vehicles soon became another global norm, with SUVs gaining in size and expanding their market share in Europe and Asia. There is no rational defence of these extravaganzas: larger vehicles were not necessitated either by concerns for safety (scores of small- and mid-size cars get top marks for safety from the Insurance Institute for Highway Safety) or by the need to cater to larger households (the average size of a US family has been declining).
Running counter to the shrinking size of American families has been yet another trend: the increasing size of American houses. Houses in Levittown, the first post–Second World War large-scale residential suburban development in New York, were just short of 70 square meters; the national mean reached 100 square meters in 1950, topped 200 in 1998, and by 2015 it was a bit above 250 square meters, slightly more than twice the size of Japan’s average single-family house. American house size has grown 2.5 times in a single lifetime; average house mass (with air conditioning, more bathrooms, heavier finishing materials) has roughly tripled; and the average per capita habitable area has almost quadrupled. And then there are the US custom-built houses, whose average area has now reached almost 500 square meters.
As expected, larger houses have larger refrigerators and larger TV screens. Right after the Second World War, the average volume of US fridges was just 8 cubic feet; in 2020 the bestselling models made by GE, Maytag, Samsung, and Whirlpool had volumes of 22–25 cubic feet. Television screens started as smallish rectangles with rounded edges; their dimensions were limited by the size and mass of the cathode-ray tube (CRT). The largest CRT display (Sony PVM-4300 in 1991) had a 43-inch diagonal but it weighed 200 kilograms. In contrast, today’s popular 50-inch LED TV models weigh no more than 25 kilograms. But across the globe, the diagonals grew from the post–Second World War standard of 30 centimeters to nearly 60 centimeters by 1998 and to 125 centimeters by 2021, which means that the typical area of TV screens grew more than 15-fold.
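The "more than 15-fold" growth in screen area follows from the quoted diagonals once aspect ratio is taken into account; in the illustrative Python sketch below, the 4:3 and 16:9 shapes are my assumptions for a typical postwar CRT and a modern flat panel, not figures from the book.

```python
import math

def screen_area_cm2(diagonal_cm: float, aspect_w: int, aspect_h: int) -> float:
    """Area of a rectangular screen, computed from its diagonal and aspect ratio."""
    scale = diagonal_cm / math.hypot(aspect_w, aspect_h)
    return (aspect_w * scale) * (aspect_h * scale)

# Diagonals from the text; aspect ratios are assumptions (4:3 CRT vs. 16:9 flat panel).
old_area = screen_area_cm2(30, 4, 3)     # postwar standard, ~432 cm^2
new_area = screen_area_cm2(125, 16, 9)   # 2021 average, ~6,700 cm^2

print(f"Area growth: about {new_area / old_area:.1f}-fold")  # ~15.4-fold
```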
Undoubtedly, many larger sizes make life easier, more comfortable, and more enjoyable, but these rewards have their own limits. And there is no evidence for concluding that oversize houses, gargantuan SUVs, and commercial-size fridges have made their owners happier: surveys of US adults asked to rate their happiness or satisfaction in life actually show either no major shifts or long-term declines since the middle of the 20th century. There are obvious physical limits to all of these excesses, and in the fourth chapter I will examine some important long-term growth trends to show that the sizes of many designs have been approaching their inevitable maxima as S-shaped (sigmoid) curves are reaching the final stages of their course.
This new, nearly universal, worship of larger sizes is even more remarkable given the abundance of notable instances when larger sizes are counterproductive. Here are two truly existential examples. Excessive childhood weight is highly consequential because the burden of early onset obesity is not easily shed later in life. And on the question of height, armies have always had height limits for their recruits; a below-average size was often a gift, as it prevented a small man (or a very tall one!) getting drafted and killed in pointless conflicts.
Large countries pose their own problems. If their territory encompasses a variety of environments, they are more likely to be able to feed themselves and have at least one kind of major mineral deposit, though more often several. This is as true of Russia (the world’s largest nation) as it is of the USA, Brazil, China, and India. But nearly all large nations tend to have larger economic disparities than smaller, more homogeneous countries do, and tend to be riven by regional, religious, and ethnic differences. Examples include the North-South divide in the US; Canada’s perennial Quebec separatism; Russia’s problems with militant Islam (the Chechen war, curiously forgotten, was one of the most brutal post–Second World War conflicts); India’s regional, religious, and caste divisions. Of course, there are counterexamples of serious disparities and discord among small-size nations — Belgium, Cyprus, Sri Lanka — but those inner conflicts matter much less for the world at large than any weakening or unraveling of the largest nations.
But the last 150 years have not only witnessed a period of historically unprecedented growth of sizes, but also the time when we have finally come to understand the real size of the world, and the universe, we inhabit. This quest has proceeded at both ends of the size spectrum, and by the end of the 20th century we had, finally, a fairly satisfactory understanding of the smallest (at the atomic and genomic levels) and the largest (the size of the universe) scales. How did we get there?
The idea of solar energy being transmitted from space is not a new one. In 1968, a NASA engineer named Peter Glaser produced the first concept design for a solar-powered satellite. But only now, 55 years later, does it appear scientists have actually carried out a successful experiment. A team of researchers from Caltech announced on Thursday that their space-borne prototype, called the Space Solar Power Demonstrator (SSPD-1), had collected sunlight, converted it into electricity and beamed it to microwave receivers installed on a rooftop on Caltech's Pasadena campus. The experiment also proves that the setup, which launched on January 3, is capable of surviving the trip to space, along with the harsh environment of space itself.
"To the best of our knowledge, no one has ever demonstrated wireless energy transfer in space even with expensive rigid structures. We are doing it with flexible lightweight structures and with our own integrated circuits. This is a first," said Ali Hajimiri, professor of electrical engineering and medical engineering and co-director of Caltech's Space Solar Power Project (SSPP), in a press release published on Thursday.
The experiment — known in full as Microwave Array for Power-transfer Low-orbit Experiment (or MAPLE for short) — is one of three research projects being carried out aboard the SSPD-1. The effort involved two separate receiver arrays and lightweight microwave transmitters with custom chips, according to Caltech. In its press release, the team added that the transmission setup was designed to minimize the amount of fuel needed to send them to space, and that the design also needed to be flexible enough so that the transmitters could be folded up onto a rocket.
Space-based solar power has long been something of a holy grail in the scientific community. Although expensive in its current form, the technology carries the promise of potentially unlimited renewable energy, with solar panels in space able to collect sunlight regardless of the time of day. The use of microwaves to transmit power would also mean that cloud cover wouldn't interfere, as Nikkei notes.
Caltech's Space Solar Power Project (SSPP) is hardly the only team that has been attempting to make space-based solar power a reality. Late last month, a few days before Caltech's announcement, Japan's space agency, JAXA, announced a public-private partnership that aims to send solar power from space by 2025. The leader of that project, a Kyoto University professor, has been working on space-based solar power since 2009. Japan also had a breakthrough of its own nearly a decade ago in 2015, when JAXA scientists transmitted 1.8 kilowatts of power — about enough energy to power an electric kettle — more than 50 meters to a wireless receiver.
The Space Solar Power Project was founded back in 2011. In addition to MAPLE, the SSPD-1 is being used to assess what types of cells are the most effective in surviving the conditions of space. The third experiment is known as DOLCE (Deployable on-Orbit ultraLight Composite Experiment), a structure measuring six-by-six feet that "demonstrates the architecture, packaging scheme, and deployment mechanisms of the modular spacecraft," according to Caltech. It has not yet been deployed.
Boeing's Starliner was supposed to fly its first crewed mission to the International Space Station (ISS) on July 21st, but a couple of technical issues have kept the company from pushing through with its plan. Together with NASA, the aerospace corporation has announced that it's delaying the CST-100 Starliner spacecraft's Crew Flight Test date yet again to address the risks presented by two new problems Boeing engineers have detected.
The first issue lies with the spacecraft's parachute system. Boeing designed the Starliner capsule to float back down to Earth with the help of three parachutes. According to The New York Times, the company discovered that parts of the lines connecting the system to the capsule can't tolerate the spacecraft's load if only two of the three parachutes deploy correctly. Since the capsule will be carrying human passengers back to our planet, the company has to scrutinize every aspect of its spacecraft to ensure their safety as much as possible. Boeing expects to conduct another parachute test before it schedules another launch attempt.
In addition to its parachute problem, Boeing is also reassessing the use of a certain tape adhesive to wrap hundreds of feet of wiring. Apparently, the tape could be flammable, so engineers are looking to use another kind of wrapping for areas of the spacecraft with the greatest fire risk.
The Crew Flight Test is the last hurdle the company has to overcome before it can start regularly ferrying astronauts to the ISS. NASA chose Boeing as one of its commercial crew partners along with SpaceX, but Boeing has fallen behind its peer over the years. The Starliner has completed uncrewed flights in the past as part of the tests it has to go through for crewed missions. But SpaceX already has 10 crewed flights under its belt, with the first one taking place back in 2020. In addition to taking astronauts to the ISS and bringing human spaceflight back to American soil for the first time since the last space shuttle launch in 2011, SpaceX has also flown civilians to space.
That said, NASA and Boeing remain optimistic about Starliner's future. In a statement, NASA Commercial Crew Program manager Steve Stich said:
"Crew safety remains the highest priority for NASA and its industry providers, and emerging issues are not uncommon in human spaceflight especially during development. If you look back two months ago at the work we had ahead of us, it’s almost all complete. The combined team is resilient and resolute in their goal of flying crew on Starliner as soon as it is safe to do so. If a schedule adjustment needs to be made in the future, then we will certainly do that as we have done before. We will only fly when we are ready."
Last year, Rocket Lab announced that it would embark on an ambitious mission to send a small probe to Venus to hunt for organic molecules in its atmosphere. The launch was supposed to happen in May 2023, but now Rocket Lab has confirmed that it's "not imminent," TechCrunch has reported. While the company didn't provide a new date, a research paper published in July 2022 states that a "backup launch window is available in January 2025."
News of the mission flew under the radar, as it were, but it's rather ambitious. Rocket Lab plans to use its Electron booster and Photon spacecraft, sending a small probe into Venus's cloud layer about 30-37 miles up, where temperatures are Earth-like. (Thanks to the planet's greenhouse effect, temperatures on the surface are greater than 900 degrees F and the pressure is more than 75 Earth atmospheres.)
Once there, the tiny, 40-centimeter-diameter probe will search for organic molecules or other clues that the atmosphere could support life. Venus came into the news back in 2020 after researchers claimed to spot signs of phosphine, a chemical that's typically produced by living organisms. While controversial, the findings sparked new interest in Venus' atmosphere as a possible habitat for life, and Rocket Lab's mission is centered around just that.
At the same time, it's a way for the company to show off its Photon spacecraft designed to go beyond Earth orbit to the Moon and Mars. Last year, Rocket Lab successfully launched Photon on NASA's CAPSTONE mission, designed to verify the orbital stability of the planned Lunar Gateway space station. The lunar satellite spent nearly six months in orbit and flew within 1,000 miles of the Moon's North Pole in a so-called near-rectilinear halo orbit.