Posts with «science» label

A mischief of magpies defeated scientists' tracking devices

While we humans can't agree where we stand on tracking devices, one group of birds assertively came out against the technology. In The Conversation, Dominique Potvin, an Animal Ecology professor at the University of the Sunshine Coast in Australia, said she and her team recently witnessed a mischief of magpies display a rare cooperative “rescue” behavior when they attempted to track the birds.

As part of their study, Potvin’s team devised a seemingly ingenious way of collecting data on a group of five magpies. They developed a lightweight but tough harness the birds could wear like a backpack, carrying a small tracker with them as they went about their daily lives. They also created a feeding station that would wirelessly charge the trackers and download their data. It even had a magnet for freeing the birds of the harness. “We were excited by the design, as it opened up many possibilities for efficiency and enabled a lot of data to be collected,” Potvin said.

Unfortunately, the study fell apart in mere days. Within 10 minutes of Potvin’s team fitting the final tracker, they saw a female magpie use her bill to remove the harness from one of the younger birds. Hours later, most of the other test subjects had been freed of their trackers too. By day three, even the most dominant male in the group had allowed one of his flock to assist him.

“We don’t know if it was the same individual helping each other or if they shared duties, but we had never read about any other bird cooperating in this way to remove tracking devices,” Potvin said. “The birds needed to problem solve, possibly testing at pulling and snipping at different sections of the harness with their bill. They also needed to willingly help other individuals, and accept help.”

According to Potvin, the only other example they could find of that kind of behavior among birds involved Seychelles warblers who helped their flockmates escape from sticky Pisonia seed clusters. Visit The Conversation to read the full story.

Scientists study a 'hot Jupiter' exoplanet's dark side in detail for the first time

Astronomers have mapped the atmospheres of exoplanets for a while, but a good look at their night sides has proven elusive — until today. An MIT-led study has provided the first detailed look at a "hot Jupiter" exoplanet's dark side by mapping WASP-121b's altitude-based temperatures and water presence levels. As the distant planet (850 light-years away) is tidally locked to its host star, the differences from the bright side couldn't be starker.

The planet's dark side contributes to an extremely violent water cycle. Where the daytime side tears water molecules apart with temperatures beyond 4,940F, the nighttime side is cool enough ('just' 2,780F at most) for the atoms to recombine into water. The result is winds that fling those atoms around the planet at over 11,000MPH. That dark side is also cool enough to have clouds of iron and corundum (a mineral in rubies and sapphires), and you might see rain made of liquid gems and titanium as vapor from the day side cools down.
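For readers more comfortable with the Kelvin scale astronomers usually work in, here's a minimal conversion sketch (not from the study itself) for the Fahrenheit figures quoted above; the results round to roughly 3,000 K on the day side and 1,800 K on the night side.

```python
# Convert the article's Fahrenheit figures to kelvins.
# Illustrative only -- the input temperatures are the ones quoted above.

def fahrenheit_to_kelvin(temp_f: float) -> float:
    """Convert a temperature from degrees Fahrenheit to kelvins."""
    return (temp_f - 32.0) * 5.0 / 9.0 + 273.15

for label, temp_f in [("day side", 4940.0), ("night side", 2780.0)]:
    print(f"{label}: {temp_f:,.0f} F is about {fahrenheit_to_kelvin(temp_f):,.0f} K")

# day side: 4,940 F is about 3,000 K
# night side: 2,780 F is about 1,800 K
```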

The researchers collected the data using spectroscopy from the Hubble Space Telescope for two orbits in 2018 and 2019. Many scientists have used this method to study the bright sides of exoplanets, but the dark side observations required detecting minuscule changes in the spectral line indicating water vapor. That line helped the scientists create temperature maps, and the team sent those maps through models to help identify likely chemicals.

This represents the first detailed study of an exoplanet's global atmosphere, according to MIT. That comprehensive look should help explain where hot Jupiters like WASP-121b can form. And while a jovian world such as this is clearly too dangerous for humans, more thorough examinations of exoplanet atmospheres could help when looking for truly habitable planets.

Hitting the Books: Lab-grown meat is the future, just as Winston Churchill predicted

From domestication and selective breeding to synthetic insulin and CRISPR, humanity has long sought to understand, master and exploit the genetic coding of the natural world. In The Genesis Machine: Our Quest to Rewrite Life in the Age of Synthetic Biology, authors Amy Webb, professor of strategic foresight at New York University’s Stern School of Business, and Andrew Hessel, co-founder and chairman of the Center of Excellence for Engineering Biology and the Genome Project, delve into the history of the field of synthetic biology, examine today's state of the art and imagine what a future might look like where life itself can be manufactured molecularly.


Excerpted from THE GENESIS MACHINE: Our Quest to Rewrite Life in the Age of Synthetic Biology by Amy Webb and Andrew Hessel. Copyright © 2022. Available from PublicAffairs, an imprint of Hachette Book Group, Inc.


It’s plausible that by the year 2040, many societies will think it’s immoral to eat traditionally produced meat and dairy products. Some luminaries have long believed this was inevitable. In his essay “Fifty Years Hence,” published in 1931, Winston Churchill argued, “We shall escape the absurdity of growing a whole chicken in order to eat the breast or wing, by growing these parts separately under a suitable medium.”

That theory was tested in 2013, when the first lab-grown hamburger made its debut. It was grown from bovine stem cells in the lab of Dutch stem cell researcher Mark Post at Maastricht University, thanks to funding from Google cofounder Sergey Brin. It was fortuitous that a billionaire funded the project, because the price to produce a single patty was $375,000. But by 2015, the cost to produce a lab-grown hamburger had plummeted to $11.43. Late in 2020, Singapore approved a local competitor to the slaughterhouse: a bioreactor, a high-tech vat for growing organisms, run by US-based Eat Just, which produces cultured chicken nuggets. In Eat Just’s bioreactors, cells taken from live chickens are mixed with a plant-based serum and grown into an edible product. Chicken nuggets produced this way are already being sold in Singapore, a highly regulated country that’s also one of the world’s most important innovation hotspots. And the rising popularity of the product could accelerate its market entry in other countries.

An Israel-based company, Supermeat, has developed what it calls a “crispy cultured chicken,” while Finless Foods, based in California, is developing cultured bluefin tuna meat, from the sought-after species now threatened by long-standing overfishing. Other companies, including Mosa Meat (in the Netherlands), Upside Foods (in California, formerly known as Memphis Meats), and Aleph Farms (in Israel), are developing textured meats, such as steaks, that are cultivated in factory-scale labs. Unlike the existing plant-based protein meat alternatives developed by Beyond Meat and Impossible Foods, cell-based meat cultivation results in muscle tissue that is, molecularly, beef or pork.

Two other California companies are also offering innovative products: Clara Foods serves creamy, lab-grown eggs, fish that never swam in water, and cow’s milk brewed from yeast. Perfect Day makes lab-grown “dairy” products—yogurt, cheese, and ice cream. And a nonprofit grassroots project, Real Vegan Cheese, which began as part of the iGEM competition in 2014, is also based in California. This is an open-source, DIY cheese derived from caseins (the proteins in milk) rather than harvested from animals. Casein genes are added to yeast and other microflora to produce proteins, which are purified and transformed using plant-based fats and sugars. Investors in cultured meat and dairy products include the likes of Bill Gates and Richard Branson, as well as Cargill and Tyson, two of the world’s largest conventional meat producers.

Lab-grown meat remains expensive today, but the costs are expected to continue to drop as the technology matures. Until they do, some companies are creating hybrid animal-plant proteins. Startups in the United Kingdom are developing blended pork products, including bacon created from 70 percent cultured pork cells mixed with plant proteins. Even Kentucky Fried Chicken is exploring the feasibility of selling hybrid chicken nuggets, which would consist of 20 percent cultured chicken cells and 80 percent plants.

Shifting away from traditional farming would deliver an enormous positive environmental impact. Scientists at the University of Oxford and the University of Amsterdam estimated that cultured meat would require between 35 and 60 percent less energy, occupy 98 percent less land, and produce 80 to 95 percent fewer greenhouse gases than conventional animals farmed for consumption.

A synthetic-biology-centered agriculture also promises to shrink the distance between essential operators in the supply chain. In the future, large bioreactors will be situated just outside major cities, where they will produce the cultured meat required by institutions such as schools, government buildings and hospitals, and perhaps even local restaurants and grocery stores. Rather than shipping tuna from the ocean to the Midwest, which requires a complicated, energy-intensive cold chain, fish could instead be cultured in any landlocked state. Imagine the world's most delicate, delicious bluefin tuna sushi sourced not from the waters near Japan, but from a bioreactor in Hastings, Nebraska.

Synthetic biology will also improve the safety of the global food supply. Every year, roughly 600 million people become ill from contaminated food, according to World Health Organization estimates, and 400,000 die. Romaine lettuce contaminated with E. coli infected 167 people across 27 states in January 2020, resulting in 85 hospitalizations. In 2018, an intestinal parasite known as Cyclospora, which causes what is best described as explosive diarrhea, resulted in McDonald's, Trader Joe's, Kroger, and Walgreens removing foods from their shelves. Vertical farming can minimize these problems. But synthetic biology can help in a different way, too: Often, tracing the source of tainted food is difficult, and the detective work can take weeks. But a researcher at Harvard University has pioneered the use of genetic barcodes that can be affixed to food products before they enter the supply chain, making them traceable when problems arise.

That researcher's team engineered strains of bacteria and yeast with unique biological barcodes embedded in spores. Such spores are inert, durable, and harmless to humans, and they can be sprayed onto a wide variety of surfaces, including meat and produce. The spores are still detectable months later even after being subjected to wind, rain, boiling, deep frying, and microwaving. (Many farmers, including organic farmers, already spray their crops with Bacillus thuringiensis spores to kill pests, which means there's a good chance you've already ingested some.) These barcodes could not only aid in contact tracing, but also be used to reduce food fraud and mislabeling. In the mid-2010s, there was a rash of fake extra virgin olive oil on the market. The Functional Materials Laboratory at ETH Zurich, a public research university in Switzerland, developed a solution similar to the one devised at Harvard: DNA barcodes that revealed the producer and other key data about the oil.

Sealants made from nanomaterials could make concrete more durable

In the US, approximately one in every five miles of highway and major road is in poor condition. It’s a problem that’s even worse in colder states where moisture and, most of all, salt accelerate the deterioration of pavement and asphalt. A team of researchers from Washington State University believes nanomaterials like graphene oxide could help harden concrete infrastructure against the elements.

Many state transportation departments use topical sealers to protect bridges and other concrete structures from melting snow, rain and salt. Those products can help, but as is often the case with moisture, it’s a losing battle. What the WSU team found was that they could add nanomaterials – specifically graphene oxide and montmorillonite nanoclay – to a commercial siliconate-based sealer to make the microstructure of concrete denser, thereby making it more difficult for water to make its way into the material. The sealer also helped protect their samples from the physical and chemical abuse inflicted by deicing salts.

Comparing their sealer to a commercial one, they found it was 75 percent better at repelling water and 44 percent better at reducing salt damage. They also made it water-based instead of using an organic solvent, which means the final product is safer to use and less harmful to the environment. Normally, water-based sealants don't perform as well as their organic counterparts, but the nanomaterials the WSU team used helped close the performance gap.

“Concrete, even though it seems like solid rock, is basically a sponge when you look at it under a microscope,” said Professor Xianming Shi, the lead researcher on the project. “It’s a highly porous, non-homogenous composite material.” According to Shi, if you can keep the material dry, most of its durability issues go away.

Compared to most research projects involving the use of nanomaterials, this one looks like it has a chance to make it out of the lab. Sometime in the next two years, Professor Shi’s team plans to work with either the university or the city of Pullman to test the sealant in the real world.

How NASA spots potentially catastrophic geomagnetic storms before they strike

A recent batch of SpaceX’s internet-beaming Starlink satellites met with tragedy on February 3rd, when the 49-member cohort of newly launched spacecraft encountered a strong geomagnetic storm in orbit.

“These storms cause the atmosphere to warm and atmospheric density at our low deployment altitudes to increase. In fact, onboard GPS suggests the escalation speed and severity of the storm caused atmospheric drag to increase up to 50 percent higher than during previous launches,” SpaceX wrote in a blog update last Wednesday. “The Starlink team commanded the satellites into a safe-mode where they would fly edge-on (like a sheet of paper) to minimize drag.” Unfortunately, 40 of the satellites never came out of safe mode and, as of Wednesday’s announcement, are expected to, if they haven’t already, fall to their doom in Earth’s atmosphere.

While this incident is only a minor setback for SpaceX and its goal of entombing the planet with more than 42,000 of the signal-bouncing devices, geomagnetic storms pose an ongoing threat to the world’s electrical infrastructure — interrupting broadcast and telecommunications signals, damaging electrical grids, disrupting global navigation systems, and exposing astronauts and airline passengers alike to dangerous doses of solar radiation.

The NOAA defines geomagnetic storms as “a major disturbance of Earth's magnetosphere that occurs when there is a very efficient exchange of energy from the solar wind into the space environment surrounding Earth.” The solar wind, composed of plasma and high-energy particles, is ejected from the Sun’s outermost coronal layers and carries the Sun’s magnetic field along with it, oriented either north or south.

When that charged solar wind hits Earth’s magnetosphere — more so if it is especially energetic or carries a southward polarization — it can cause magnetic reconnection at the dayside magnetopause. This, in turn, accelerates plasma in that region down Earth’s magnetic field lines towards the planet’s poles, where the added energy excites nitrogen and oxygen atoms to generate the Northern Lights aurora effect. That extra energy also causes the magnetosphere itself to oscillate, creating electrical currents which further disrupt the region’s magnetic fields — all of which makes up a geomagnetic storm.

“Storms also result in intense currents in the magnetosphere, changes in the radiation belts, and changes in the ionosphere, including heating the ionosphere and upper atmosphere region called the thermosphere,” notes the NOAA. “In space, a ring of westward current around Earth produces magnetic disturbances on the ground.”

Basically, when the Sun belches out a massive blast of solar wind, it travels through space and smacks into the Earth’s magnetic shell where all that energy infuses into the planet’s magnetic field, causing electrical chaos while making a bunch of atoms in the upper reaches of the atmosphere jiggle in just the right way to create a light show. Behold, the majesty of our cosmos, the celestial equivalent of waving away a wet burp from the slob next to you at the bar.

Solar flares occur with varying frequency depending on where the Sun is in its 11-year solar cycle, ranging from fewer than one per week during solar minimum to multiple flares a day during the maximal period. Their intensities vary similarly, though if the geomagnetic storm of 1859 — the largest such event on record, dubbed the Carrington Event — were to occur today, its damage to Earth’s satellite and telecom systems is estimated to run into the trillions of US dollars and require months if not years of repairs to undo. The event pushed the aurora borealis as far south as the Caribbean and energized telegraph lines to the point of combustion. A similar storm in March of 1989 was only a third as powerful as the Carrington Event, but it still managed to straight up melt an electrical transformer in New Jersey and knock out Quebec’s power grid in a matter of seconds, leaving 6 million customers in the dark for nine hours until the system’s equipment could be sequentially checked and reset.


Even when they’re not electrocuting telegraph operators or demolishing power grids, geomagnetic storms can cause all sorts of havoc with our electrical systems. Geomagnetically induced currents can saturate the magnetic cores within power transformers, causing the voltage and currents traveling within their coils to spike and leading to overloads. Changes in the structure and density of the Earth’s ionosphere during solar storms can disrupt or outright block high-frequency radio and ultra-high frequency satellite transmissions. GPS navigation systems are similarly susceptible to disruption during these events.

"A worst-case solar storm could have an economic impact similar to a category 5 hurricane or a tsunami," Dr. Sten Odenwald of NASA's Goddard Space Flight Center, said in 2017. "There are more than 900 working satellites with an estimated replacement value of $170 billion to $230 billion, supporting a $90 billion-per-year industry. One scenario showed a 'superstorm' costing as much as $70 billion due to a combination of lost satellites, service loss, and profit loss."

Most importantly for SpaceX, solar storms can increase the amount of drag the upper edges of the atmosphere exert upon passing spacecraft. There isn’t much atmosphere in low Earth orbit, where the ISS and a majority of satellites reside, but there is enough to cause a noticeable amount of drag on passing objects. This drag increases during daylight hours as the Sun’s energy excites atoms in lower regions of the atmosphere, pushing them higher into LEO and creating a higher-density layer that satellites have to push through. Geomagnetic storms can exacerbate this effect by producing large short-term increases in the upper atmosphere’s temperature and density.
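Because drag scales linearly with atmospheric density, the roughly 50 percent density jump SpaceX described translates directly into roughly 50 percent more drag. Here's a minimal sketch using the standard drag equation; the density, speed, drag coefficient and cross-section values below are illustrative placeholders, not SpaceX figures.

```python
# Standard drag equation: F = 0.5 * rho * v^2 * Cd * A
# All values below are illustrative placeholders, not actual Starlink parameters.

def drag_force(rho: float, velocity: float, drag_coeff: float, area: float) -> float:
    """Drag force in newtons from density (kg/m^3), speed (m/s), Cd and area (m^2)."""
    return 0.5 * rho * velocity**2 * drag_coeff * area

v_orbital = 7_800.0   # m/s, typical low-Earth-orbit speed
rho_quiet = 1e-11     # kg/m^3, placeholder upper-atmosphere density
rho_storm = 1.5e-11   # kg/m^3, ~50% denser during a geomagnetic storm
cd, area = 2.2, 10.0  # placeholder drag coefficient and cross-sectional area

quiet = drag_force(rho_quiet, v_orbital, cd, area)
storm = drag_force(rho_storm, v_orbital, cd, area)
print(f"drag increase during the storm: {(storm / quiet - 1) * 100:.0f}%")  # -> 50%
```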


“There are only two natural disasters that could impact the entire US,” University of Michigan researcher, Gabor Toth, said in a press statement last August. “One is a pandemic. And the other is an extreme space weather event.”

"We have all these technological assets that are at risk," he continued. "If an extreme event like the one in 1859 happened again, it would completely destroy the power grid and satellite and communications systems — the stakes are much higher."


In order to extend the warning time between a solar eruption and its resulting winds slamming into our magnetosphere, Toth and his team developed the Geospace Model version 2.0 (which is what the NOAA currently employs) using state-of-the-art machine learning and statistical analysis techniques. With it, astronomers and power grid operators are afforded a scant 30 minutes of advance warning before solar winds reach the planet — just enough time to put vital electrical systems into standby mode or otherwise mitigate the storm’s impact.

Toth’s team relies on X-ray and UV data “from a satellite measuring plasma parameters one million miles away from the Earth,” he explained, in order to spot coronal mass ejections as they happen. “From that point, we can run a model and predict the arrival time and impact of magnetic events," Toth said.
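That half-hour figure lines up with simple arithmetic: the warning window is essentially the time the solar wind takes to cross the million miles between the monitoring satellite and Earth. A back-of-the-envelope sketch, assuming typical solar wind speeds rather than figures from Toth's team:

```python
# Back-of-the-envelope estimate of the warning window: the solar wind's travel
# time from a monitoring satellite ~1 million miles (~1.6 million km) sunward of Earth.
# The wind speeds are typical assumed values, not measurements from the article.

distance_km = 1.6e6                # roughly one million miles, in kilometres
for speed_km_s in (400, 800):      # slow solar wind vs. a fast, storm-driven stream
    minutes = distance_km / speed_km_s / 60
    print(f"{speed_km_s} km/s -> about {minutes:.0f} minutes of lead time")

# 400 km/s -> about 67 minutes; 800 km/s -> about 33 minutes, so a fast storm
# leaves roughly half an hour between detection and arrival.
```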

NASA has developed and launched a number of missions in recent years to better predict the tumultuous behavior of our local star. In 2006, for example, the space agency launched the STEREO (Solar TErrestrial RElations Observatory) mission in which a pair of observatories measured the “flow of energy and matter” from the Sun to Earth. Currently, NASA is working on two more missions — Multi-slit Solar Explorer (MUSE) and HelioSwarm — to more fully understand the Sun-Earth connection.

“MUSE and HelioSwarm will provide new and deeper insight into the solar atmosphere and space weather,” Thomas Zurbuchen, associate administrator for science at NASA, said in a February news release. “These missions not only extend the science of our other heliophysics missions—they also provide a unique perspective and a novel approach to understanding the mysteries of our star.”

MUSE aims to study the forces that heat the corona and drive eruptions in that solar layer. “MUSE will help us fill crucial gaps in knowledge pertaining to the Sun-Earth connection,” Nicola Fox, director of NASA’s Heliophysics Division, added. “It will provide more insight into space weather and complements a host of other missions within the heliophysics mission fleet.”

The HelioSwarm, on the other hand, is actually a collection of nine spacecraft tasked with taking “first multiscale in-space measurements of fluctuations in the magnetic field and motions of the solar wind.”

"The technical innovation of HelioSwarm's small satellites operating together as a constellation provides the unique ability to investigate turbulence and its evolution in the solar wind," Peg Luce, deputy director of the Heliophysics Division, said.

These ongoing research efforts to better comprehend our place in the solar system and how to be neighborly with the massive nuclear fusion reactor down the celestial block are sure to prove vital as humanity’s telecommunications technologies continue to mature. Because, no matter how hardened our systems, we simply cannot afford a repeat of 1859.

SpaceX plans its first commercial spacewalk for this year

SpaceX won't just have launched the first all-civilian spaceflight — it should soon be home to a full-fledged private space program. According to The Washington Post, Shift4 founder and Inspiration4 leader Jared Isaacman has unveiled a Polaris Program initiative that will include "up to" three crewed SpaceX flights. The first, Polaris Dawn, is planned for the fourth quarter of 2022 and should include the first commercial spacewalk. The effort will ideally end with the first human-occupied Starship flight. Sorry, Moon tourists.

The Polaris Dawn team will also aim for the highest-ever Earth orbit, conduct health research and test laser-based Starlink communication. Isaacman will return as mission commander, while Inspiration4 mission director and Air Force veteran Scott Poteet will serve as pilot. Two of SpaceX's lead operations engineers, Anna Menon and Sarah Gillis, will also be aboard. Menon's role is symbolic of the shift toward private spaceflight — her husband Anil was chosen to become a NASA astronaut, but she'll likely reach space before her spouse does.

The program hinges on SpaceX and partners solving a number of problems. SpaceX is developing spacesuits necessary for the spacewalk, and Isaacman's group hasn't yet decided how many crew members will step outside. Starship also carries some uncertainty. While there's been ample testing and plenty of progress, development of the next-gen rocket system hasn't always gone according to plan. Expect the Polaris Program to have a relatively loose schedule, and possibly a few setbacks.

Even so, this represents a further normalization of private spaceflight. While the Polaris Program continues a recent 'tradition' of civilian flights led by billionaires (Isaacman is no exception), it also promises to commercialize aspects that were still reserved for government astronauts, such as spacewalks and testing new spacecraft (NASA astronauts helmed SpaceX's Demo-2). Don't be shocked if private crews fulfill other roles in the near future.

Don't blame SpaceX for that rocket on a collision course with the Moon

This past January, astronomer Bill Gray said that the upper stage of a SpaceX Falcon 9 rocket would collide with the Moon sometime in early March. As you might expect, the prediction set off a flurry of media coverage, much of it critical of Elon Musk and his private space firm. After all, the event would be a rare misstep for SpaceX.

But it turns out Elon and company are not about to lose face. Instead, it’s more likely that fate will befall China. That’s because Gray now says he made a mistake in his initial identification of a piece of space debris he and other astronomers dubbed WE0913A in 2015.

When Gray and his colleagues first spotted the object, several clues led them to believe it was the second stage of a Falcon 9 rocket that carried the National Oceanic and Atmospheric Administration’s DSCOVR satellite into a deep-space orbit that same year. The object’s identification would probably have gone unreported in mainstream media if astronomers hadn't subsequently discovered it was about to collide with the Moon.

“Back in 2015, I (mis)identified this object as 2015-007B, the second stage of the DSCOVR spacecraft,” Gray said in a blog post he published on Saturday that was spotted by Ars Technica. “I had pretty good circumstantial evidence for the identification, but nothing conclusive,” Gray added. “That was not at all unusual. Identifications of high-flying space junk often require a bit of detective work, and sometimes, we never do figure out the ID for a bit of space junk.”

We may never have known the actual identity of the debris if not for NASA Jet Propulsion Laboratory engineer Jon Giorgini. He contacted Gray on Saturday to ask about the identification. According to Giorgini, NASA’s Horizons system, a database that can estimate the location and orbit of almost half a million celestial bodies in our solar system, showed that the DSCOVR spacecraft’s trajectory didn’t take it close to the Moon. As such, it would be unusual for its second stage to have strayed far enough off course to hit the Moon. Giorgini’s email prompted Gray to reexamine the data he used to make the initial identification.

Gray now says he’s reasonably certain the rocket that’s about to collide with the Moon belongs to China. In October 2014, the country’s space agency launched its Chang’e 5-T1 mission on a Long March 3C rocket. After reconstructing the probable trajectory of that mission, he found that the Long March 3C is the best fit for the mystery object that’s about to hit Earth’s natural satellite. “Running the orbit back to launch for the Chinese spacecraft makes ample sense,” he told The Verge. “It winds up with an orbit that goes past the Moon at the right time after launch.”

Gray went on to tell The Verge that episodes like this underline the need for more information on rocket boosters that travel into deep space. “The only folks that I know of who pay attention to these old rocket boosters are the asteroid tracking community,” he told the outlet. “This sort of thing would be considerably easier if the folks who launch spacecraft — if there was some regulatory environment where they had to report something.”

Hitting the Books: How crop diversity became a symbol of Mexican national sovereignty

Beginning in the 1940s, Mexico's Green Revolution saw the country's agriculture industrialized on a national scale, helping propel a massive, decades-long economic boom in what has become known as the Mexican Miracle. Though the modernization of Mexico's food production helped spur unparalleled market growth, these changes also opened the industry's doors to powerful transnational seed companies, eroding national control over the genetic diversity of its domestic crops and endangering the livelihoods of Mexico's poorest farmers. 

In the excerpt below from her new book Endangered Maize: Industrial Agriculture and the Crisis of Extinction, author and Peter Lipton Lecturer in History of Modern Science and Technology at Cambridge University, Helen Anne Curry, examines the country's efforts to maintain its cultural and genetic independence in the face of globalized agribusiness.


Excerpted from Endangered Maize: Industrial Agriculture and the Crisis of Extinction by Helen Anne Curry. Published by University of California Press. Copyright © 2021 by Helen Anne Curry. All rights reserved.


Amid the clatter and hum generated by several hundred delegates and observers to the 1981 Conference of FAO, a member of the Mexican delegation took the floor. Participants from 145 member nations had already reviewed the state of global agricultural production, assessed and commended ongoing FAO programs, agreed on budget appropriations, and wrestled over the wording of numerous conference resolutions. The Mexican representative opened discussion on yet another draft resolution, this one proposing “The Establishment of an International Plant Germplasm Bank.” Two interlocked elements lie at the resolution’s heart: a collection of duplicate samples of all the world’s major seed collections under the control of the United Nations and a legally binding international agreement that recognized “plant genetic resources” as the “patrimony of humanity.” Together, the bank and agreement would ensure the “availability, utilization and non-discriminatory benefit to all nations” of plant varieties in storage and in cultivation across the globe.

Today, international treaties are integral to the conservation and use of crop genetic diversity. The 1992 Convention on Biological Diversity aims to ensure the sustainable and just use of the world’s biodiversity, which includes plant genetic resources. Meanwhile, the 2001 International Treaty on Plant Genetic Resources for Food and Agriculture, also called the Seed Treaty, establishes protocols specific to crop diversity. Although it draws much of its power from the Convention on Biological Diversity, the roots of the Seed Treaty reach further back, to the 1981 resolution of the Mexican delegation and beyond.

Mexico’s resolution, like today’s Seed Treaty, offered conservation as a principal motivation. It told a story of farmers’ varieties displaced by breeders’ products, the attrition of genetic diversity, and the looming “extinction of material of incalculable value.” Earlier calls for conservation had sketched the same picture. Yet those who prepared and promoted the Mexican proposal mobilized this narrative to different ends. They may well have wanted to protect crop diversity. Far more important, however, was the guarantee of access to this diversity, once conserved. They insisted that a seed bank governed by the United Nations and an international treaty were needed to prevent the “monopolization” of plant genetic materials. This monopolization came in the form of control by national governments, the ultimate decision makers for most existing seed banks. It also resulted from possession by transnational corporations. By exercising intellectual property protections in crop varieties, seed companies could take ownership of these varieties, even if they were derived from seeds sourced abroad. In other words, the survival of a seed sample in a base collection, or its duplicate, did not mean this sample was available to breeders, let alone farmers, in its own place of origin. Binding international agreements were necessary to ensure access.

Mexico’s intervention at the 1981 FAO Conference was just one volley in what would later be called the seed wars, a decades-long conflict over the granting of property rights in plant varieties and the physical control of seed banks. Allusions to endangered crop diversity have been mostly rhetorical flourishes in this debate, deployed in defense of other things considered threatened by agricultural change—namely, peoples and governments across Africa, Asia, and Latin America in the later twentieth century. Seed treaties were meant to protect not seeds, but sovereignty.

Between the late 1960s and the early 1980s, in the midst of this struggle over seeds, consensus fractured about the loss of crop diversity—or, more specifically, about the meaning of this loss. When experts had gathered at FAO in the 1960s to discuss genetic erosion, most saw this as an inevitable consequence of a beneficial transition. Wherever farmers opted for breeders’ lines over their own seeds, the value of these so-called improved lines was confirmed, and agricultural productivity inched forward. In the 1970s genetic erosion featured centrally in a very different narrative. It was offered as evidence of the misguided ideas and practices driving agricultural development, especially the Green Revolution, and of the dangers posed by powerful transnational seed companies. Corporate greed emerged as a new driver of crop diversity loss. The willingness of wealthy countries to sustain this greed through friendly regulations meant both were complicit in undermining the capacities of developing countries to feed themselves. The extinction of farmers’ varieties and landraces was no longer an accepted byproduct of agricultural modernization. It was an argument against this development.

This shift pitted scientists committed to saving crop diversity against activists ostensibly interested in the same thing. It brought competing visions of what agriculture could and should be head to head. Invocations of the imminent loss of crop diversity, the one element everyone seemed able to agree on, reached a fever pitch during the seed wars. This rhetorical barrage often obscured on-the-ground realities. While FAO delegates, government officials, NGO activists, and prominent scientists waged a war of words in meeting rooms and magazines, plant breeders and agronomists tended experimental plots, tested genetic combinations, and presented farmers with varieties they hoped would be improvements. In 1970s Mexico some of these researchers were newly resolved to use Mexican seeds and methods to address the needs of the country’s poorest farmers. Keeping these individuals, their methods, and their corn collections in view grounds the seed wars in actual seeds. If the Mexican delegation’s invocation of crop diversity at FAO in 1981 was a rhetorical flourish in a bid to defend national sovereignty, the concurrent use of crop diversity by some Mexican breeders was a practical strategy for getting Mexican agriculture out from under the thumb of the United States and transnational agribusinesses. On the ground, seeds were not ornaments in oratory but the very stuff of sovereignty.

Inroads for Agribusiness

While scientists in Mexico searched for novel solutions to the country’s rural crises, critical assessments of agricultural aid bolstered the case for these alternatives. By the mid-1970s studies by economists, sociologists, and other development experts indicated that the much-vaunted Green Revolution had done more harm than help, thanks especially to the input- and capital-intensive model of farming it espoused.

The first critiques of the Green Revolution followed close on the heels of its initial celebration. In 1973 the Oxford economist Keith Griffin joined a growing chorus when he cataloged the harms introduced with “high-yielding varieties,” a phrase used to describe types bred to flourish with synthetic fertilizers. Their introduction had neither increased income per capita nor solved the problems of hunger and malnutrition, according to Griffin. They had produced effects, however: “The new technology... has accelerated the development of a market oriented, capitalist agriculture. It has hastened the demise of subsistence oriented, peasant farming... It has increased the power of landowners, especially the larger ones, and this in turn has been associated with a greater polarization of classes and intensified conflict.” In 1973 Griffin thought that the ultimate outcome depended on how governments responded to these changes. Five years later he had come to a final determination. “The story of the green revolution is a story of a revolution that failed,” he declared.

Griffin was a researcher on the project “Social and Economic Implications of the Large-Scale Introduction of High-Yielding Varieties of Foodgrain.” Carried out under the auspices of the United Nations Research Institute for Social Development, this project enlisted social scientists to document the uptake of new agricultural technologies — chiefly new crop varieties — and their social and economic effects across Asia and North Africa. Mexico was also included among the project’s case studies, since organizers pinpointed it as the historical site of the “first experiments in high-yielding seeds for modernizing nations.” An attempt to synthesize a single account from the case studies in the 1970s highlighted the problems arising from the integration of farmers into national and international markets. New varieties, chemical fertilizers, and mechanical equipment demanded that cultivators "become businessmen competent in market operations and small-scale financing and receptive to science-generated information." This was thought to be in marked contrast to their having once been "'artisan’ cultivators' who drew on 'tradition and locally valid practices'" to sustain their families. The fact that only a minority of better-off farmers could make such a transition meant that development programs benefited a few at the expense of the many. Drawing on her case study of Mexico, project contributor Cynthia Hewitt de Alcántara extended this observation about market integration into a reflection on the flow of economic resources around, and out of, the country — from laborers to landowners, from farms to industries, from national programs to foreign businesses. The reconfiguration of agriculture as what she labeled a "capitalist enterprise" had not brought more money to the countryside but instead robbed peasants of what little they had.

This apparent contradiction in Mexico’s agricultural development invited scrutiny from many besides Hewitt. The preceding three decades had been characterized by steady economic growth, thanks to increased international trade during World War II, government policies that encouraged national industry, and investments in infrastructure and education. This period of the so-called Mexican Miracle had also seen a transition from food dependency — needing to import grain to feed the nation — to self-sufficiency. At this level of abstraction, Mexico’s prospects for sustaining adequate food and nutrition looked rosy. When sociologists and economists delved into specifics, however, the miracle revealed itself a mirage. Investments in agriculture had focused on supplying food to urban workers and developing new products for export. State food-aid programs, too, had been oriented to urban labor, with set prices that kept food affordable for consumers in the city but made its cultivation unprofitable for farmers in the countryside. While well-off cultivators in the north of the country benefited from state-funded irrigation programs and guaranteed prices, poor farmers working small plots without access to state grain purchasers found that they could not sustain their families by selling surplus corn. Hewitt estimated that in 1969–70, one-third of the Mexican population experienced calorie deficiency. A 1974 national survey came to similar conclusions, calculating that 18.4 million Mexicans, over a quarter of the population, suffered from malnutrition.

The persistence of poverty in Mexico, in spite of the country’s celebrated economic growth, could be traced to the model of development embraced by national leaders since the 1940s. Politicians and policy makers had assumed that subsistence farmers could be made irrelevant, with their surplus labor absorbed into the growing industrial economy. Yet industry had not acted the sponge, with the result that this “irrelevant” segment of the population had grown while continuing to be neglected by the state. The economist David Barkin linked faulty Mexican policies to a more fundamental problem of emulating the market capitalism of its northern neighbor. The apparently flourishing Mexican economy had invited the interest of foreign investors, in particular US corporations. Despite protectionist policies, these companies had moved in, and national industries had been sold off, leaving Mexicans vulnerable to the whims of private capital.

Agriculture offered a prime example of this pattern. By the 1970s US firms dominated across the sector, from farm machinery (John Deere, International Harvester) to chemicals (Monsanto, DuPont, American Cyanamid) to production and processing (United Brands, Corn Products) to animal feed (Ralston Purina). Observing this trend, another economist pinpointed Mexican agriculture as the place of origin of a “new, world-wide modernization strategy.” He traced a path from the interventions of the Rockefeller Foundation to the stimulus these gave to the importation of costly agricultural inputs to the management of Mexican farms by foreign firms. Foreign control and deepening ties to international markets affected food self-sufficiency. It made sense, from the perspective of increasing individual profits, for large and well-financed producers in Mexico to focus on the crops that would bring the best prices. These were more likely to be fruits and vegetables for US supermarkets or sorghum to feed cattle than corn or wheat to feed Mexican workers. Thanks to these patterns, it was possible to see much of Mexican agriculture as an extension of US agribusiness, operating chiefly “to exploit Mexican rural labor, Mexican land and water resources, and Mexican private and public capital for the principal benefit of US entrepreneurs.” The ultimate outcome of technical assistance to enhance agricultural production, ostensibly undertaken for the betterment of Mexican farmers and the Mexican economy, was the dominance of transnational companies in that very task, for their own aggrandizement. This portended ill for Mexico and especially for the poorest Mexicans.

James Webb Space Telescope captures its first images of a star

The James Webb Space Telescope has finally captured its first image of a star — or rather, images. NASA has shared a mosaic of pictures (shown above) of a star taken using the primary mirror's 18 segments. It looks like a seemingly random collection of blurry dots, but that's precisely what the mission team was expecting. The imagery will help scientists finish the lengthy mirror alignment process using the telescope's Near Infrared Camera, or NIRCam. The first phase is nearly complete as of this writing.

The visuals came from a 25-hour effort that pointed the James Webb Space Telescope to 156 different positions and produced 1,560 images with the NIRCam's sensors. The team identified each mirror segment's signature and combined the images into a single mosaic frame. Visual artifacts come from using the infrared camera at temperatures well above the frigid conditions the telescope will need for scientific observation. And what you see here isn't the entirety of the mosaic — the full-resolution snapshot is over two gigapixels.


NASA also provided a rare real-world glimpse of the JWST in action: a "selfie" of the primary mirror (middle) created by a pupil imaging lens inside the NIRCam. This too is blurry, but it offers a valuable look at the fully deployed mirror and helps explain the importance of alignment. Notice how just one segment is brightly lit by a star? It's the only one aligned with that celestial body — it will take a while before all segments are operating in concert.

Researchers expect the first scientifically useful images from JWST in the summer. It's reasonable to presume those pictures will be considerably more exciting, especially as they start providing glimpses of the early universe. Still, what you see here demonstrates the telescope's health and suggests there won't be much trouble in the months ahead.

SpaceX shows what a Starship launch would look like

Elon Musk has given SpaceX's first huge Starship update in years, and during his presentation, the company showed off what a launch of the massive launch system would look like. The Starship system is composed of the Starship spacecraft itself on top of a Super Heavy booster. SpaceX is working toward making it rapidly and fully reusable so as to make launches to the Moon and to Mars feasible. After the system makes its way off the planet, the booster will separate and return to its launch tower, where it will ideally be caught by the tower's arms. As for the spacecraft, it will proceed to its destination before making its way back to Earth.

Musk said the booster will spend six minutes in the air in all: two on ascent and four on its return trip. In the future, the system could be reused every six to eight hours, allowing three launches a day. SpaceX says achieving a fully and rapidly reusable system is "key to a future in which humanity is out exploring the stars." Musk also talked about how in-orbit refilling — not "refueling," since the vehicle's Raptor engines use more liquid oxygen than fuel — is essential for long-duration flights.

The Super Heavy booster, Musk said, has more than twice the thrust of a Saturn V, the largest rocket ever to reach space. In its current iteration it has 29 Raptor engines, but it could eventually have 33. Speaking of those engines, Raptor version 2 is a complete redesign of the first: it costs half as much and needs fewer parts. The company can manufacture five to six of the engines a week at the moment, but it could apparently produce as many as seven a week by next month.

Aside from being able to carry hundreds of tons, Starship could revolutionize space travel if SpaceX can truly make launches as affordable as Musk says. He revealed during the event that a Starship launch could cost less than $10 million per flight, all in, within two to three years. That's significantly less than a Falcon 9 launch, which costs around $60 million.

SpaceX wants to launch Starship from its Boca Chica, Texas facility, called Starbase, where it's been building the rocket's prototype. It has yet to secure approval from the Federal Aviation Administration to do so, and Musk said the company doesn't know exactly where things stand with the agency. However, there's apparently a rough indication that the FAA will be done with its environmental assessment in March. SpaceX also expects the rocket to be ready by then, which means Starship's first orbital test flight could be on the horizon.