Swaddled in wet wipes, ensconced in congealed cooking grease and able to transform into pipe-blocking masses so hard as to require excavation equipment to dislodge, fatbergs are truly the bean-and-cheese burritos of the sewage world. They can wreak havoc on a town’s bowels, achieving lengths that outspan bridges and accumulating masses that dwarf double-deckers. Fatbergs are a modern problem that has civil engineers increasingly turning to tech in order to keep their cities’ subterranean bits clear of greasy obstructions.
Fatbergs — a portmanteau of fat and iceberg — are a relatively recent but fast-growing problem in the world’s sewers. They form when FOG (fats, oil, grease) poured down drains comes in contact with calcium, phosphorus and sodium to create a hard, soap-like material. This calcium soap then accumulates on non-degradable flushed items like wet wipes, sanitary pads, condoms, dental floss, clumps of hair, chunks of food waste, and diapers as they travel through a municipal waste disposal system. Though their components may start off soft and pliable (albeit damp), once ‘bergified they harden into a mass tougher than concrete, requiring sanitation workers to employ high-pressure water jets, shovels and pickaxes in order to break it up.
“These huge, solid masses can block the sewers, causing sewage to back up through drains, plugholes and toilets,” Anna Boyles, operations manager at Thames Water, told RICS in October. “It can take our teams days, sometimes weeks, to remove them.”
They can also offgas toxic compounds such as hydrogen sulfide. Forensic analyses of dislodged fatbergs have also revealed concentrations of all sorts of chemicals, including bodybuilding supplements and the metabolites of illicit drugs — not to mention myriad bacterial species. Not only do these deposits constitute a direct health hazard to the workers tasked with demolishing them, they can also block pipes and force wastewater to overflow aboveground, where the contagion can spread.
A blockage in Maryland in 2018 caused more than a million gallons of wastewater to spill into local waterways (it cost $60,000 to clear the 20-foot obstruction) while a similar backup in Michigan flooded the University of Michigan with 300,000 gallons of the stuff.
These cholesterol-like deposits can reach monumental proportions if left unchecked. Thames Water, which manages sewers in both London and the Thames Valley, told the BBC last February that it spent £18m a year clearing 75,000 blockages from its systems. One of the largest bergs to date was pulled from beneath Birchall Street in Liverpool, UK in 2019. It measured 820 feet in length, weighed 440 tons and required more than four months to clear. The month before, a 200-foot long fatberg was discovered under Sidmouth, a popular coastal tourist location in Devon, UK.
“It is the largest discovered in our service history and it will take our sewer team around eight weeks to dissect this monster in exceptionally challenging work conditions,” South West Water director of Wastewater, Andrew Roantree, told The Guardian in 2019. “Thankfully it has been identified in good time with no risk to bathing waters.”
“If you keep just one new year’s resolution this year,” he added, “let it be to not pour fats, oil or grease down the drain, or flush wet-wipes down the loo. Put your pipes on a diet and don’t feed the fatberg.”
These obstructions are just as problematic on this side of the pond. In 2018, officials in Charleston, South Carolina pulled a 2,000 pound, 12-foot by 3-foot berg from the city’s sewers. The same year, officials in Macomb County, Michigan removed a 100-foot fatberg from one of its 11-foot diameter Lakeshore Interceptor pipes at a cost of $100,000.
"To put it simply, this fatberg is gross. It provides an opportunity, however, to talk with people about the importance of restricting what goes down our sewers. This restriction was caused by people and restaurants pouring grease and similar materials down their drains. We want to change that behavior," Public Works Commissioner Candice S. Miller said at the time.
However, the problem is apparently not universal. “The city of Atlanta does not have ‘fatbergs’ within our sewer system,” a spokesperson from Atlanta’s Department of Water Management told Engadget via email. “Fatbergs are common in other countries.” Any blockages that are encountered within the city’s sewers are disposed of using “high pressure water and/or rodding equipment.”
These rodding tools, commonly known as hydrojets, are high-powered versions of the pressure washers used to clean siding and walkways. They’re capable of producing pressures in excess of 4,000 psi and spray omnidirectionally so that they’ll blast detritus from the entire interior surface of a pipe as they’re fed forward. That fecally-caked slurry is then sucked out of the main using a truck-mounted vacuum system and stored in an onboard tank for later disposal — as you can see in the 2010 video from the City of Carlsbad, California below. It’s the same basic idea as the trucks that service Port-A-Potties but, again, a more robust version.
A major contributor to the fatberg problem is wet wipes, which were invented in Manhattan in 1957 by Arthur Julius. He went on to found the Nice-Pak company and, by 1963, had partnered with KFC to offer his company’s pre-moistened Wet-Nap towelettes as an after-meal hand sanitizer to the fried chicken chain’s greasy-fingered customers. In subsequent decades Nice-Pak expanded its offerings to include products such as baby wipes and EPA-rated hand and surface disinfectants. As of 2020, the global market for wet wipes ran an estimated $24 billion annually, according to a report from Grand View Research.
“Wet wipes may be convenient, but flushing them is a major cause of sewage blockages. On top of this they contain plastic and can find their way into our seas where they pose a threat to wildlife,” Friends of the Earth spokesperson Julian Kirby explained to The Evening Standard in 2019. “Wet wipe manufacturers should be required to make their products plastic-free and clearly label them as ‘do not flush’.”
While the Museum of London has seen fit to preserve part of the famed Whitechapel fatberg for posterity, most municipalities want them gone, flushed and forgotten, but the fatbergs have to be found first. Typically, that involves visually inspecting the sewer mains either by sending down crews or remotely operated cameras like the modular Rovver X from Envirosight or the IRIS Portable Mainline Crawler from Insight Vision. Alternatively, the SL-RAT (Sewer Line Rapid Assessment Tool) from InfoSense Inc. relies on acoustic technology to check sewer lines for obstructions.
Relying on sound waves offers a number of advantages over conventional visual systems. The SL-RAT is set up at the access points at either end of a length of sewer main. The transmitting unit blasts a series of tones through the pipe, and the receiving unit measures how much each tone is attenuated in transit to determine the extent of any potential blockage. Since utilities don’t have to physically send people or drones into the pipes when using the SL-RAT, crews can inspect more of the sewer network in less time.
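InfoSense hasn’t published the SL-RAT’s internal math, but the basic transmit-compare-score idea can be sketched in a few lines. Everything below — the function name, the 0-to-1 score, the made-up tone power values — is a hypothetical illustration of acoustic attenuation scoring, not the actual product’s algorithm:

```python
# Illustrative sketch (not InfoSense's actual algorithm): estimate how
# blocked a pipe segment is by comparing the per-tone acoustic power the
# receiver measures against the power the transmitter sent. A clear pipe
# passes most of the energy; a fatberg absorbs or reflects it.

def blockage_score(sent_powers, received_powers):
    """Return a 0.0 (clear) to 1.0 (fully blocked) score.

    Each list holds the acoustic power of one test tone, in arbitrary
    linear units, measured at the transmitter and receiver ends.
    """
    if len(sent_powers) != len(received_powers):
        raise ValueError("need one received measurement per tone")
    # Fraction of energy lost per tone, averaged across all tones.
    losses = [
        1.0 - min(rx / tx, 1.0)
        for tx, rx in zip(sent_powers, received_powers)
        if tx > 0
    ]
    return sum(losses) / len(losses)

# A clear pipe: nearly all of the tone energy arrives.
print(blockage_score([1.0, 1.0, 1.0], [0.95, 0.9, 0.92]))  # low score
# A badly obstructed pipe: most of the energy never makes it through.
print(blockage_score([1.0, 1.0, 1.0], [0.1, 0.05, 0.08]))  # high score
```

The appeal of an approach like this is speed: a two-ended acoustic measurement takes minutes per segment, versus hours of crawling a camera through the same pipe.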
The city of Ivins, Utah, for example, used to expend 6,000 gallons of water daily flushing the entirety of its 50-mile wastewater system in order to dislodge blockages occurring in only about 5 percent of the network.
“Just to prevent one blockage, we were cleaning the whole system,” Ivins Public Works director Chuck Gillette told St George News in October. “You’re cleaning every pipe.”
With the city’s implementation of the SL-RAT system in 2020, city crews could more precisely locate clogs to dislodge. A process that used to take weeks and 1,100 labor hours is now done in a few days and 320 labor hours. “It’s less [noise] than the sound of a cleaning truck,” Gillette continued, “and there is zero water usage.”
While municipal authorities beg people to help prevent fatbergs from forming in the first place by following the 3Ps — as in, the only things that should go in the loo are pee, paper and poo — a team of Canadian researchers is looking at converting the ‘bergs into biofuels once they’ve been harvested from sanitation pipes.
“This method would help to recover and reuse waste cooking oil as a source of energy,” University of British Columbia researcher Asha Srinivasan told Smithsonian Magazine in 2018. The UBC team’s method involves first heating a fatberg chunk to between 90 and 110 degrees Celsius to loosen everything up, then adding hydrogen peroxide to break down organic components and free trapped fatty acids, then breaking those acids down into methane using anaerobic bacteria. This is similar in methodology, albeit on a much smaller scale, to how some wastewater treatment facilities produce natural gas from methane captured during the cleaning process.
Mildred Dresselhaus' life was one lived in defiance of the odds. Growing up poor in the Bronx — and even more to her detriment, growing up a woman in the 1940s — Dresselhaus' traditional career options were paltry. Instead, she rose to become one of the world's preeminent experts in carbon science as well as the first female Institute Professor at MIT, where she spent 57 years of her career. She collaborated with physics luminaries like Enrico Fermi, laid the essential groundwork for future Nobel Prize-winning research, directed the Office of Science at the U.S. Department of Energy and was herself awarded the National Medal of Science.
In the excerpt below from Carbon Queen: The Remarkable Life of Nanoscience Pioneer Mildred Dresselhaus, author and Deputy Editorial Director at MIT News Maia Weinstock tells of the time Dresselhaus collaborated with Iranian American physicist Ali Javan to investigate exactly how charge carriers — i.e., electrons — move about within a graphite matrix, research that would completely overturn the field's understanding of how these subatomic particles operate.
For anyone with a research career as long and as accomplished as that of Mildred S. Dresselhaus, there are bound to be certain papers that might get a bit lost in the corridors of the mind—papers that make only moderate strides, perhaps, or that involve relatively little effort or input (when, for example, being a minor consulting author on a paper with many coauthors). Conversely, there are always standout papers that one can never forget—for their scientific impact, for coinciding with particularly memorable periods of one’s career, or for simply being unique or beastly experiments.
Millie’s first major research publication after becoming a permanent member of the MIT faculty fell into the standout category. It was one she described time and again in recollections of her career, noting it as “an interesting story for history of science.”
The story begins with a collaboration between Millie and Iranian American physicist Ali Javan. Born in Iran to Azerbaijani parents, Javan was a talented scientist and award-winning engineer who had become well known for his invention of the gas laser. His helium-neon laser, coinvented with William Bennett Jr. when both were at Bell Labs, was an advance that made possible many of the late twentieth century’s most important technologies—from CD and DVD players to bar-code scanning systems to modern fiber optics.
After publishing a couple of papers describing her early magneto-optics research on the electronic structure of graphite, Millie was looking to delve even deeper, and Javan wanted to help. The two met during Millie’s work at Lincoln Lab; she was a huge fan, once calling him “a genius” and “an extremely creative and brilliant scientist.”
For her new work, Millie aimed to study the magnetic energy levels in graphite’s valence and conduction bands. To do this, she, Javan, and a graduate student, Paul Schroeder, employed a neon gas laser, which would provide a sharp point of light to probe their graphite samples. The laser had to be built especially for the experiment, and it took years for the fruits of their labor to mature; indeed, Millie moved from Lincoln to MIT in the middle of the work.
If the experiment had yielded only humdrum results, in line with everything the team had already known, it still would have been a path-breaking exercise because it was one of the first in which scientists used a laser to study the behavior of electrons in a magnetic field. But the results were not humdrum at all. Three years after Millie and her collaborators began their experiment, they discovered their data were telling them something that seemed impossible: the energy level spacings within graphite’s valence and conduction bands were totally off from what they expected. As Millie explained to a rapt audience at MIT two decades later, this meant that “the band structure that everybody had been using up till that point could certainly not be right, and had to be turned upside down.”
In other words, Millie and her colleagues were about to overturn a well-established scientific rule — one of the more exciting and important types of scientific discoveries one can make. As with the landmark 1957 publication led by Chien-Shiung Wu, which overturned a long-accepted particle physics concept known as conservation of parity, upending established science requires a high degree of precision — and confidence in one’s results. Millie and her team had both.
What their data suggested was that the previously accepted placement of entities known as charge carriers within graphite’s electronic structure was actually backward. Charge carriers, which allow energy to flow through a conducting material such as graphite, are essentially just what their name suggests: something that can carry an electric charge. They are also critical for the functioning of electronic devices powered by a flow of energy.
Electrons are a well-known charge carrier; these subatomic bits carry a negative charge as they move around. Another type of charge carrier can be seen when an electron moves from one atom to another within a crystal lattice, creating something of an empty space that also carries a charge—one that’s equal in magnitude to the electron but opposite in charge. In what is essentially a lack of electrons, these positive charge carriers are known as holes.
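The electron/hole picture above can be made concrete with a toy model (my own sketch, not anything from the book): a one-dimensional lattice of sites, where an electron hopping one direction moves the vacancy the opposite direction, which is exactly why a hole behaves like a positive charge carrier:

```python
# Toy model: a 1-D crystal lattice where 1 marks a site holding an
# electron and 0 marks a hole (a missing electron). When an electron
# hops into the hole, the hole effectively moves the opposite way --
# the reason holes act like positive charge carriers.

def hop_electron_into_hole(lattice, from_site):
    """Move the electron at from_site into the adjacent hole."""
    hole = lattice.index(0)
    if abs(from_site - hole) != 1 or lattice[from_site] != 1:
        raise ValueError("electron must sit next to the hole")
    lattice[hole], lattice[from_site] = 1, 0
    return lattice

lattice = [1, 1, 1, 0, 1]          # hole at index 3
hop_electron_into_hole(lattice, 2)  # electron at index 2 hops right...
print(lattice)                      # [1, 1, 0, 1, 1] -- hole moved left
```

One electron moved right; viewed as a single object, the hole moved left, carrying an equal-magnitude positive charge with it.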
MIT Press
FIGURE 6.1 In this simplified diagram, electrons (black dots) surround atomic nuclei in a crystal lattice. In some circumstances, electrons can break free from the lattice, leaving an empty spot or hole with a positive charge. Both electrons and holes can move about, affecting electrical conduction within the material.
Millie, Javan, and Schroeder discovered that scientists were using the wrong assignment of holes and electrons within the previously accepted structure of graphite: they found electrons where holes should be and vice versa. “This was pretty crazy,” Millie stated in a 2001 oral history interview. “We found that everything that had been done on the electronic structure of graphite up until that point was reversed.”
As with many other discoveries overturning conventional wisdom, acceptance of the revelation was not immediate. First, the journal to which Millie and her collaborators submitted their paper originally refused to publish it. In retelling the story, Millie often noted that one of the referees, her friend and colleague Joel McClure, privately revealed himself as a reviewer in hopes of convincing her that she was embarrassingly off-base. “He said,” Millie recalled in a 2001 interview, “‘Millie, you don’t want to publish this. We know where the electrons and holes are; how could you say that they’re backwards?’” But like all good scientists, Millie and her colleagues had checked and rechecked their results numerous times and were confident in their accuracy. And so, Millie thanked McClure and told him they were convinced they were right. “We wanted to publish, and we... would take the risk of ruining our careers,” Millie recounted in 1987.
Giving their colleagues the benefit of the doubt, McClure and the other peer reviewers approved publication of the paper despite conclusions that flew in the face of graphite’s established structure. Then a funny thing happened: bolstered by seeing these conclusions in print, other researchers emerged with previously collected data that made sense only in light of a reversed assignment of electrons and holes. “There was a whole flood of publications that supported our discovery that couldn’t be explained before,” Millie said in 2001.
Today, those who study the electronic structure of graphite do so with the understanding of charge carrier placement gleaned by Millie, Ali Javan, and Paul Schroeder (who ended up with quite a remarkable thesis based on the group’s results). For Millie, who published the work in her first year on the MIT faculty, the experiment quickly solidified her standing as an exceptional Institute researcher. While many of her most noteworthy contributions to science were yet to come, this early discovery was one she would remain proud of for the rest of her life.
Even as governments and corporations around the globe squeeze the Russian economy through increasingly stringent financial sanctions for the country's invasion of its neighbor, Ukraine, some within the aggrieved nation have sought to punish Russia further, by kicking it off the internet entirely.
On Monday, a pair of Ukrainian officials petitioned ICANN (the Internet Corporation for Assigned Names and Numbers) as well as the Réseaux IP Européens Network Coordination Centre (RIPE NCC) to revoke the domains ".ru", ".рф" and ".su." They also asked that root servers in Moscow and St. Petersburg be shut down — potentially knocking websites under those domains offline. On Thursday, ICANN responded to the request with a hard pass, explaining that doing so is not within the scope of ICANN's mission and that it's not really feasible to do in the first place.
"As you know, the Internet is a decentralized system. No one actor has the ability to control it or shut it down," ICANN CEO Göran Marby wrote in his response to ICANN representative for Ukraine, Andrii Nabok, and deputy prime minister and digital transformation minister, Mykhailo Fedorov, on Thursday.
"Our mission does not extend to taking punitive actions, issuing sanctions, or restricting access against segments of the Internet — regardless of the provocations," he continued. "Essentially, ICANN has been built to ensure that the Internet works, not for its coordination role to be used to stop it from working."
Following Russia’s unprovoked invasion of Ukraine last week, the West has united over its condemnation of the aggression and has enacted broad economic sanctions against the nation. A financial fallout is already occurring, with the ruble losing 20 percent of its value against the dollar nearly overnight, and it could fall even further as sanctions progressively excise Russia from the international monetary system. The pecuniary shockwaves created by these sanctions are likely to impact every stratum of Russian society, with far-reaching consequences for the Roscosmos space program and the continued safe operation of the International Space Station.
These “strong sanctions,” US President Joe Biden stated at a press conference last Thursday, will impose “severe costs on the Russian economy” in an effort to “strike a blow to their ability to continue to modernize their military. It’ll degrade their aerospace industry, including their space program.”
Russia has issued retaliatory sanctions of its own against Western companies. On Wednesday, Roscosmos announced that it will not launch the next round of 36 OneWeb internet satellites, which had been scheduled for liftoff March 4th from the Baikonur Cosmodrome in Kazakhstan. Those satellites will not get into orbit, Roscosmos officials threatened, until the UK-based company meets two demands: that the UK government sell its stake in OneWeb and that the company guarantee its satellite constellation will not be used in a military capacity. OneWeb has yet to respond publicly to the demands.
"Russia’s actions are an immediate danger to those living in Ukraine, but also pose a real threat to democracy throughout the world," US Commerce Secretary Gina Raimondo said in a statement Thursday. "By acting decisively and in close coordination with our allies and partners, we are sending a clear message today that the United States of America will not tolerate Russia's aggression against a democratically-elected government."
Despite the economic curb stomping the Russian people are about to endure on account of Putin’s cartographic quarrel, NASA remains optimistic that the sanctions will not adversely impact ongoing collaborative space programs, like the running of the ISS.
The ISS has, from its start, been a joint US-Russian effort. Originally born from a foreign policy plan to improve relations between the Cold War foes after the fall of the Berlin Wall and the conclusion of the Space Race, the International Space Station would not exist if not for Russia’s collaboration. Soyuz rockets helped bring ISS modules into orbit and, following the Space Shuttle’s retirement in 2011, served as the only means of getting astronauts into orbit and back, at least until SpaceX came along. Of the station’s 16 habitable modules, six were provided by Russia and eight by the US (with the rest sent up by Japan and the European Space Agency). Just last summer, Russia successfully launched its largest ISS component to date, the 813-cubic-meter Nauka science module.
Dmitry Rogozin, Director General of Roscosmos, himself still personally under sanctions due to the 2014 Crimea incident, voiced an alternative opinion in response to the news.
“If you block cooperation with us, who will save the ISS from an uncontrolled deorbit and fall into the United States or Europe?” Rogozin asked. “There is also the option of dropping a 500-ton structure on India and China. Do you want to threaten them with such a prospect? The ISS does not fly over Russia, so all the risks are yours. Are you ready for them?”
“In response to European Union sanctions against our enterprises, Roscosmos is suspending cooperation with European partners in organizing space launches from the Kourou spaceport and is recalling its technical personnel, including the consolidated launch crew, from French Guiana,” Roscosmos tweeted in Russian. pic.twitter.com/w05KACb9nI
“I was not surprised, based on his previous behavior,” former space station commander Terry Virts told Time of Rogozin’s outburst. “This is what I’ve come to expect.”
Rogozin’s comments come more than seven weeks after NASA announced its intent to keep the ISS operational until 2030, though the American space agency and Roscosmos are still negotiating a new "crew exchange" deal, which would see astronauts and cosmonauts sent to the ISS aboard both American and Russian rockets. Russia’s obligations to the ISS officially expire in 2024 and, even prior to the invasion of Ukraine, Russia was rumbling about pulling out of the project by 2025.
"The Russian segment can't function without the electricity on the American side, and the American side can't function without the propulsion systems that are on the Russian side," former NASA astronaut Garrett Reisman noted to CNN. "So you can't do an amicable divorce. You can't do a conscious uncoupling."
As such, “NASA continues working with all our international partners, including the State Space Corporation Roscosmos, for the ongoing safe operations of the International Space Station,” the agency told Reuters following Rogozin’s rant. “The new export control measures will continue to allow US-Russia civil space cooperation. No changes are planned to the agency’s support for ongoing in-orbit and ground station operations.”
However, Russia’s spacefaring future in the eyes of other ISS stakeholders is less clear. "I've been broadly in favor of continuing artistic and scientific collaboration," UK Prime Minister Boris Johnson said on the floor of the House of Commons Thursday. "But in the current circumstances, it's hard to see how even those can continue as normal."
More immediately, Roscosmos reported Monday that its public portal was under cyberassault. "A massive DDoS attack from various IP addresses has been carried out on the Roscosmos website for several days now. Its organizers may think that this affects something. I will answer: this only affects the timely awareness of space enthusiasts about Roscosmos news," Rogozin tweeted, while assuring that the safety of the ISS was not immediately at risk.
And since one cannot so much as utter the phrase “public crisis” without Elon Musk busting through a nearby wall like a mini-sub-slinging Kool-Aid man, SpaceX is of course getting shoehorned into this newfound global conflict.
On February 25th, Musk offered to have SpaceX step in and keep the ISS in orbit, should Russia refuse. The space station is currently where it is thanks to regular propellant deliveries by the Russian space agency, but should those shipments stop, the ISS would be unable to counter the planet’s atmospheric drag and would gradually lose altitude until it eventually fell back to Earth. By taking over those delivery flights, SpaceX could keep the ISS aloft without the added hassle of outfitting a Falcon 9 to stand in for Russia’s undelivered deorbiting spacecraft. And even if SpaceX can’t do so, the engine attached to the uncrewed Cygnus supply ship that arrived on February 21st is powerful enough to give the ISS an orbital boost and temporary reprieve.
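To get a feel for why those reboost burns matter, here’s a back-of-the-envelope sketch using textbook orbital mechanics (the vis-viva relation and a two-burn Hohmann transfer). The altitudes are assumed round numbers for illustration, not NASA’s actual reboost figures:

```python
# Rough sketch: the delta-v needed to raise a circular orbit by a few
# kilometers, the kind of periodic boost that offsets atmospheric drag.
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0      # mean Earth radius, m

def circular_speed(altitude_m):
    """Orbital speed (m/s) of a circular orbit at the given altitude."""
    return math.sqrt(MU_EARTH / (R_EARTH + altitude_m))

def reboost_delta_v(alt_low_m, alt_high_m):
    """Total delta-v (m/s) of a two-burn Hohmann transfer between
    circular orbits at the two altitudes."""
    r1, r2 = R_EARTH + alt_low_m, R_EARTH + alt_high_m
    a = (r1 + r2) / 2  # semi-major axis of the transfer ellipse
    # Vis-viva speeds on the transfer ellipse at its low and high points.
    v_low = math.sqrt(MU_EARTH * (2 / r1 - 1 / a))
    v_high = math.sqrt(MU_EARTH * (2 / r2 - 1 / a))
    return (v_low - circular_speed(alt_low_m)) + (circular_speed(alt_high_m) - v_high)

# Raising a station from an assumed 415 km back up to 420 km takes only
# a few meters per second of delta-v -- small, but it has to come from
# somewhere, which is why regular propellant deliveries are essential.
print(round(reboost_delta_v(415_000, 420_000), 2))
```

The individual burns are tiny compared with the station’s roughly 7.7 km/s orbital speed, but without propellant to perform them, drag wins.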
“Starlink service is now active in Ukraine. More terminals en route,” Musk tweeted.
Starlink has launched more than 2,000 internet-beaming satellites into orbit to date, a fraction of the more than 40,000 the company plans to eventually launch. CNBC reports that the company had more than 145,000 active subscribers as of January.
It would be imprudent at this point to predict how Russia’s invasion will pan out, whether the imposed economic sanctions will bring a quick resolution to the conflict or slowly strangle a fading world power. We can’t fully foresee the myriad implications emerging from these monetary decisions or how they’ll impact global collaboration and space exploration in coming years. But amidst this uncertainty and chaos we can take solace in knowing that life, aboard the ISS at least, continues unabated.
“Polestar O2 is our vision of a new era for sports cars," Polestar’s Head of Design, Maximilian Missoni, said in a Tuesday press statement. "By mixing the joy of open top driving with the purity of electric mobility, it unlocks a new mix of emotions in a car."
The O2 will reportedly be built upon the same "bespoke" bonded aluminum unibody platform that the company is using for the Polestar 5, and will generally resemble the Precept concept design it is derived from, which, according to Polestar PR, "shows how Polestar’s evolving design language can be adapted to different body styles with a strong family resemblance." That is, while the Polestar 5 will be a high-performance four-door grand touring vehicle, the O2 will offer a more compact, 2+2 sportscar feel, despite both being built on the same basic underpinnings.
Polestar
Now, you might be wondering how a convertible EV would even work, given that traditional convertibles are rather inefficient — their frames are thicker and heavier to offset the structural strength lost by cutting off the roof, and their aerodynamics are a mess because, again, no roof — and that is an excellent question. The company doesn't yet have drag coefficient data to share, but it did assert that "disguised design features like integrated ducts that improve laminar air flow over the wheels and body sides, and rear lights that function as air blades to reduce turbulence behind the car" are being investigated to maximize the vehicle's range.
With a shorter wheelbase and only an afterthought of rear seats, the O2 offers a sportier, more aggressive stance than the Polestar 2. And those wheels! The exterior is a study of sharp lines with a low-slung cabin seated between angular fender flares and an acutely angled glass-top roof that retracts back into a broad trunk. It looks like if you mashed up a Ferrari F40 with a Porsche 718 Spyder and then flattened out all the curves. It looks like a roadster you'd see on the streets of Los Santos. I am a fan.
The interior sounds equally supple, featuring a "thermoplastic mono-material" throughout for the hard bits, paired with recycled polyester as "the sole material used for all the soft components." Because nothing beats the seat-squelching experience of sitting on polyester and plastic in full sun with the roof down.
Drivers will also be able to film their top-down adventures thanks to the O2's integrated cinematography drone. Developed in collaboration with Hoco Flow, this autonomous camera drone rides in an area of negative pressure generated by an airfoil deployed behind the rear seats. The drone can follow along at speeds of up to 56 MPH, and the captured footage can subsequently be edited and shared from the central infotainment system once the vehicle is parked. Personally, I'd prefer an eATV or even an electric skateboard if automakers are going to bundle secondary transports with their vehicle offerings, but sure, a camera drone will definitely remain cool and novel and useful after the first couple of flights. Just look at how well they turned out for the Renault KWID or the Lexus LF-30 Electric Concept.
As with the Precept, we're unlikely to see a street-legal O2 in its current form. Instead, Polestar plans to launch three new cars over the coming three years, "each of which has potential to gradually realize some of the ideas presented by these concept cars," so keep an eye out for low-flying drones.
Who wouldn't want an AI-driven robot sidekick: a little mechanical pal, trustworthy and supportive — the perfect teammate? But should such an automaton be invented, would it really be your teammate, an equal partner in your adventurous endeavors? Or would it simply be a tool, albeit a wildly advanced one measured against today's standards? In the excerpt below from Human-Centered AI, author and professor emeritus at the University of Maryland Ben Shneiderman examines the pitfalls of our innate desire to humanize the mechanical constructs we build, and how we shortchange their continued development by doing so.
A common theme in designs for robots and advanced technologies is that human–human interaction is a good model for human–robot interaction, and that emotional attachment to embodied robots is an asset. Many designers never consider alternatives, believing that the way people communicate with each other, coordinate activities, and form teams is the only model for design. The repeated missteps stemming from this assumption do not deter others who believe that this time will be different, that the technology is now more advanced, and that their approach is novel.
Numerous psychological studies by Clifford Nass and his team at Stanford University showed that when computers are designed to be like humans, users respond and engage in socially appropriate ways. Nass’s fallacy might be described as this: since many people are willing to respond socially to robots, it is appropriate and desirable to design robots to be social or human-like.
However, what Nass and colleagues did not consider was whether other designs, which were not social or human-like, might lead to superior performance. Getting beyond the human teammate idea may increase the likelihood that designers will take advantage of unique computer features, including sophisticated algorithms, huge databases, superhuman sensors, information abundant displays, and powerful effectors. I was pleased to find that in later work with grad student Victoria Groom, Nass wrote: “Simply put, robots fail as teammates.” They elaborated: “Characterizing robots as teammates indicates that robots are capable of fulfilling a human role and encourages humans to treat robots as human teammates. When expectations go unmet, a negative response is unavoidable.”
Lionel Robert of the University of Michigan cautions that human-like robots can lead to three problems: mistaken usage based on emotional attachment to the systems, false expectations of robot responsibility, and incorrect beliefs about appropriate use of robots. Still, a majority of researchers believe that robot teammates and social robots are inevitable. That belief pervades the human–robot interaction research community, which has “rarely conceptualized robots as tools or infrastructure and has instead theorized robots predominantly as peers, communication partners or teammates.”
Psychologist Gary Klein and his colleagues clarify ten realistic challenges to making machines behave as effectively as human teammates. The challenges include making machines that are predictable, controllable, and able to negotiate with people about goals. The authors suggest that their challenges are meant to stimulate research and also “as cautionary tales about the ways that technology can disrupt rather than support coordination.” A perfect teammate, buddy, assistant, or sidekick sounds appealing, but can designers deliver on this image or will users be misled, deceived, and disappointed? Can users have the control inherent in a tele-bot while benefiting from the helpfulness suggested by the teammate metaphor?
My objection is that human teammates, partners, and collaborators are very different from computers. Instead of these terms, I prefer to use tele-bots to suggest human controlled devices. I believe that it is helpful to remember that “computers are not people and people are not computers.”
Margaret Boden, a long-term researcher on creativity and AI at the University of Sussex, makes an alternate but equally strong statement: “Robots are simply not people.” I think the differences between people and computers include the following:
Responsibility: Computers are not responsible participants, neither legally nor morally. They are never liable or accountable. They are a different category from humans. This continues to be true in all legal systems and I think it will remain so. Margaret Boden continues with a straightforward principle: “Humans, not robots, are responsible agents.” This principle is especially true in the military, where chain of command and responsibility are taken seriously. Pilots of advanced fighter jets with ample automation still think of themselves as in control of the plane and responsible for their successful missions, even though they must adhere to their commander’s orders and the rules of engagement. Astronauts rejected designs of early Mercury capsules, which had no window to eyeball the re-entry if they had to do it manually — they wanted to be in control when necessary, yet responsive to Mission Control’s rules. Neil Armstrong landed the Lunar Module on the Moon — he was in charge, even though there was ample automation. The Lunar Module was not his partner. The Mars Rovers are not teammates; they are advanced automation with an excellent integration of human tele-operation with high levels of automatic operation.
It is instructive that the US Air Force shifted from using the term unmanned autonomous/aerial vehicles (UAVs) to remotely piloted vehicles (RPVs) so as to clarify responsibility. Many of these pilots work from a US Air Force base in Nevada to operate drones flying in distant locations on military missions that often have deadly outcomes. They are responsible for what they do and suffer psychological trauma akin to what happens to pilots flying aircraft in war zones. The Canadian Government has a rich set of knowledge requirements that candidates must have to be granted a license to operate a remotely piloted aircraft system (RPAS).13 Designers and marketers of commercial products and services recognize that they and their organizations are the responsible parties; they are morally accountable and legally liable.14 Commercial activity is further shaped by independent oversight mechanisms, such as government regulation, industry voluntary standards, and insurance requirements.
Distinctive capabilities: Computers have distinctive capabilities of sophisticated algorithms, huge databases, superhuman sensors, information-abundant displays, and powerful effectors. To buy into the metaphor of “teammate” seems to encourage designers to emulate human abilities rather than take advantage of the distinctive capabilities of computers. One robot rescue design team described their project to interpret the robot’s video images through natural language text messages to the operators. The messages described what the robot was “seeing” when a video or photo could deliver much more detailed information more rapidly. Why settle for human-like designs when designs that make full use of distinctive computer capabilities would be more effective?
Designers who pursue advanced technologies can find creative ways to empower people so that they are astonishingly more effective — that’s what familiar supertools have done: microscopes, telescopes, bulldozers, ships, and planes. Empowering people is what digital technologies have also done, through cameras, Google Maps, web search, and other widely used applications. Cameras, copy machines, cars, dishwashers, pacemakers, and heating, ventilation, and air conditioning (HVAC) systems are not usually described as teammates — they are supertools or active appliances that amplify, augment, empower, and enhance people.
Human creativity: The human operators are the creative force — for discovery, innovation, art, music, etc. Scientific papers are always authored by people, even when powerful computers, telescopes, and the Large Hadron Collider are used. Artworks and music compositions are credited to humans, even if rich composition technologies are heavily used. The human qualities such as passion, empathy, humility, and intuition that are often described in studies of creativity are not readily matched by computers. Another aspect of creativity is to give human users of computer systems the ability to fix, personalize, and extend the design for themselves or to provide feedback to developers for them to make improvements for all users. The continuous improvement of supertools, tele-bots, and other technologies depends on human input about problems and suggestions for new features. Those who promote the teammate metaphor are often led down the path of making human-like designs, which have a long history as appealing robots but succeed only as entertainment, crash test dummies, and medical mannequins. I don’t think this will change. There are better designs than human-like rescue robots, bomb disposal devices, or pipe inspectors. In many cases four-wheeled or treaded vehicles are typical, usually tele-operated by a human controller.
The da Vinci surgical robot is not a teammate. It is a well-designed tele-bot that enables surgeons to perform precise actions in difficult-to-reach small body cavities (Figure 14.1, above). As Lewis Mumford reminds designers, successful technologies diverge from human forms. Intuitive Surgical, the developer of the da Vinci systems for cardiac, colorectal, urological, and other surgeries, makes clear that “Robots don’t perform surgery. Your surgeon performs surgery with Da Vinci by using instruments that he or she guides via a console.”
Many robotic devices combine a high degree of tele-operation, in which an operator controls activities, with a high degree of automation. For example, drones are tele-bots, even though they have the capacity to automatically hover or orbit at a fixed altitude, return to their take-off point, or follow a series of operator-chosen GPS waypoints. The NASA Mars Rover vehicles also have a rich mixture of tele-operated features and independent movement capabilities, guided by sensors to detect obstacles or precipices, with plans to avoid them. The control centers at NASA’s Jet Propulsion Labs have dozens of operators who control various systems on the Rovers, even when they are hundreds of millions of miles away. It is another excellent example of combining high levels of human control and high levels of automation.
Terms like tele-bots and telepresence suggest alternative design possibilities. These instruments enable remote operation and more careful control of devices, such as when tele-pathologists control a remote microscope to study tissue samples. Combined designs take limited, yet mature and proven features of teammate models and embed them in devices that augment humans by direct or tele-operated controls.
Another way that computers can be seen as teammates is by providing information from huge databases and superhuman sensors. When the results of sophisticated algorithms are displayed on information-abundant displays, such as in three-dimensional medical echocardiograms with false color to indicate blood flow volume, clinicians can be more confident in making cardiac treatment decisions. Similarly, users of Bloomberg Terminals for financial data see their computers as enabling them to make bolder choices in buying stocks or rebalancing mutual fund retirement portfolios (Figure 14.2, below). The Bloomberg Terminal uses a specialized keyboard and one or more large displays, with multiple windows typically arranged by users to be spatially stable so they know where to find what they need. With tiled, rather than overlapped, windows users can quickly find what they want without rearranging windows or scrolling. The voluminous data needed for a decision is easily visible and clicking in one window produces relevant information in other windows. More than 300,000 users pay $20,000 per year to have this supertool on their desks.
In summary, the persistence of the teammate metaphor shows its appeal for many designers and users. While users should feel free to describe their computers as teammates, designers who harness the distinctive features of computers, such as sophisticated algorithms, huge databases, superhuman sensors, information-abundant displays, and powerful effectors, may produce more effective tele-bots that users appreciate as supertools.
Strange days out here on the internet. Dangerous days, too. Facebook groups have people drinking horse dewormer in anticipation of JFK Jr’s resurrection, Instagram’s filling kids up with eating disorders and suicidal ideations, while Twitter just peals along with that irate, mosquito-pitched whine you hear right before everything goes red. Algorithmically-elected, engagement-optimized push notifications, suggestions, tips and tricks from the hottest thinkfluencers of the minute, pop, pop, popping up unbidden and inescapable, demanding the fealty of our screens as counted by our click-throughs.
But the internet today is not the internet of 13 halcyon years ago, in 2009. Nor is it now as it might be 13 years hence, in 2035. The societal divisions we currently face could deepen into outright catastrophe over the next decade because, remember kids, it’s only ever the worst day of your life so far. Then again, humanity might just buck its ingrained tendencies and come together to build a more robust, more resilient reimagining of today’s internet. One that finally exemplifies the “us” that could be if you wasn’t playin’.
What form those future public spaces eventually take is anybody’s guess… so Pew Research Center had some of the best-informed technologists in the industry give theirs. The PRC partnered with Elon University’s Imagining the Internet Center in mid-summer of 2021 to survey 862 “technology innovators, developers, business and policy leaders” in a non-scientific canvassing. They were asked, “looking ahead to 2035, will digital spaces and people’s use of them be changed in ways that significantly serve the public good?”
The results were mixed. Of those polled, 61 percent predicted that things will change for the better by 2035, though 18 percent argued that currently “digital spaces are evolving in a mostly negative way” (compared to just 10 percent who think their evolution is mostly positive).
Their concerns centered around four thematic problems: Humans behave selfishly when not tethered by traditional societal norms; the rate of online advancement has confounded society’s less tech-savvy members, making them more susceptible to malicious digital systems they don’t fully understand; governments are increasingly ineffective at regulating the tech industry; and, as such, trolls, scammers and Nazis continue to run amok in digital public spaces. And though few of the respondents held much confidence in society’s short term solutions, many remained hopeful that we’ll get our collective act together and start acting like grown-ups on the internet by the middle of the next decade. Three cheers for low bars.
A lack of real-life repercussions will continue to foster boorish behaviors online
Harassment, cyberbullying, and doxxing are endemic to online interaction. For example, a 2019 report from the Anti-Defamation League (ADL) found that two-thirds of US online gamers have experienced "severe" harassment, with more than half reporting having been targeted based on their race, religion, ability, gender, sexual orientation or ethnicity, and nearly 30 percent claiming they've been doxxed in an online game. Likewise, celebrities, politicians, professional athletes and public figures — even the unwilling ones — are all seemingly fair game for the vitriol of online mobs.
“Toxicity is a human attribute, not an element inherent to digital life,” Zizi Papacharissi, professor of political science and professor and head of communication at the University of Illinois-Chicago, told Pew surveyors. “Unless we design spaces to explicitly prohibit/penalize and curate against toxicity, we will not see an improvement.”
“My strong sense is that the conditions and causes that underlie the multiple negative affordances and phenomena now so obvious and prevalent will not change substantially,” Charles Ess, emeritus professor in the department of media and communication at the University of Oslo, told Pew. “This is… about human selfhood and identity as culturally and socially shaped, coupled with the ongoing, all but colonizing dominance of the US-based tech giants and their affiliates. Much of this rests on the largely unbridled capitalism favored and fostered by the United States.”
The progression of internet norms is occurring too rapidly for older generations to coherently process, leaving them increasingly vulnerable to bad actors
“Transformation and innovation in digital spaces and digital life have often outpaced the understanding and analysis of their intended or unintended impact and hence have far surpassed efforts to rein in their less-savory consequences,” Alexa Raad, chief purpose and policy officer at Human Security, told Pew Research. Rick Doner, a retired emeritus professor formerly at Emory University added, “We now have a vicious cycle in which the digital innovations are undermining both the existing institutions and the potential for stronger institutions down the road.”
The effects of this can be seen in the blackbox problem, in which the decision-making processes of AIs and algorithms are obscured from the humans who built them. Wisconsin’s use of the Compas judicial sentencing software is one such example.
“One of the biggest challenges is that the systems and algorithms that control these digital spaces have largely become unintelligible,” Ian O’Byrne, an assistant professor of Literacy Education at the College of Charleston, told Pew. “For the most part, the decisions that are made in our apps and platforms are only fully understood by a handful of individuals.”
“We have ample evidence that significant numbers of humans are inherently susceptible to demagogs and sociopaths,” Randall Gellens, director at Core Technology Consulting, told Pew Research. “I see digital communications turbocharging those aspects of social interaction and human nature that are exploited by those who seek power and financial gain, such as groupthink, longing for simplicity and certainty, and wanting to be part of something big and important.”
“Better education, especially honest teaching of history and effective critical-thinking skills, could mitigate this to some degree,” Gellens noted, “but those who benefit from this will fight such education efforts, as they have, and I don’t see how modern, pluralistic societies can summon the political courage to overcome this.”
Wherein America’s gerontocracy sets out to fix a series of tubes
Looking at the interactions between America’s elected representatives and the heads of various social media companies in recent years, Gellens’ prediction seems not just reasonable but expected. For example, hearings regarding Section 230 (which governs the liability social media companies face for their users’ posts) in October 2020 were little more than a partisan circus. Follow-up hearings last April, without the CEOs in attendance, were only marginally more productive, but neither event led to substantive changes in how social media companies operate or how the federal government regulates their actions.
“Laws and regulations might be tried, but these change much more slowly than digital technologies and business practices,” Richard Barke, associate professor in the School of Public Policy at Georgia Tech, commented to Pew. “Policies have always lagged technologies, but the speed of change is much greater now.”
Even when social media purveyors are caught dead to rights, there’s precious little political will to do anything about it. This dissonance between technology and policy has raised concerns among Pew respondents that it may lead to the weaponization of data and accelerate America’s transition to a surveillance state.
“We are in a new kind of arms race we naively thought was over with the collapse of the Soviet Union. We are experiencing quantum leaps in AI/robotics capabilities,” said David Barnhizer, professor of law emeritus and founder/director of an environmental law clinic.
“It’s like trying to negotiate a mutually-assured-destruction model with several dozen nation-states holding weapons of mass destruction,” added Sam Punnett, retired owner of FAD Research. “I’d guess many Western legislators aren’t even aware of the scope of the problem.”
Those in power have shown little interest in addressing these structural internet issues
Between the digital world evolving faster than many of us can comfortably accommodate, the ineffectiveness of our elected officials in regulating it and the erosion of societal norms combating bad behavior, it’s little wonder that bad actors run rampant on today’s internet. There’s very little downside to doing so, noted Chris Labash, associate teaching professor of information systems management at Carnegie Mellon.
“My fear is that negative evolution of the digital sphere may be more rapid, more widespread and more insidious than its potential positive evolution,” he told Pew. “We have seen, 2016 to present especially, how digital spaces act as cover and as a breeding ground for some of the most negative elements of society, not just in the US, but worldwide.
“Whether the bad actors are from terror organizations or ‘simply’ from hate groups, these spaces have become digital roach holes that research suggests will only get larger, more numerous and more polarized and polarizing,” he continued. “That we will lose some of the worst and most extreme elements of society to these places is a given. Far more concerning is the number of less-thoughtful people who will become mesmerized and radicalized by these spaces and their denizens: people who, in a less digital world, might have had more willingness to consider alternate points of view.”
Countering this effect will take more than sending out good vibes into the ether, Labash argued. Nor will simply offering alternative spaces be enough: “it will take strategies, incentives and dialogue that is expansive and persuasive to attract those people and subtly educate them in approaches to separate real and accurate information from that which fuels mistrust, stupidity and hate.”
On the other hand, nothing says everything has to be terrible in 2035 either
While the experts above raised a number of terrifying(ly salient) points, they represent a minority of respondents to the Pew survey. The majority, as one would expect, had a much rosier outlook for the future of the internet, though not without some reservations of their own. Their overarching reactions followed a common theme: while we face significant challenges now, users, governments and companies will eventually step up to do what is necessary and socially “right,” even if done out of naked self-interest.
As Jenny L. Davis, a senior lecturer in sociology at the Australian National University, pointed out, “By 2035, I expect platforms themselves to be better regulated internally. This will be motivated, indeed necessary, to sustain public support, commercial sponsorships and a degree of regulatory autonomy.”
“We need to assume that in the coming 10 to 15 years, we will learn to harness digital spaces in better, less polarizing manners,” Alf Rehn, professor of innovation, design and management at the University of Southern Denmark, added. “In part, this will be due to the ability to use better AI-driven filtering and thus develop more-robust digital governance.”
“There will of course always be those who would weaponize digital spaces, and the need to be vigilant isn’t going to go away for a long while,” he conceded. “Better filtering tools will be met by more-advanced forms of cyberbullying and digital malfeasance, and better media literacy will be met by more elaborate fabrications – so all we can do is hope that we can keep accentuating the positive.”
Social media companies, if properly motivated, could do much towards that goal, argued Internet Hall of Fame inductee and former CTO for the Federal Communications Commission, Henning Schulzrinne. “Some subset of people will choose fact-based, civil and constructive spaces, others will be attracted to or guided to conspiratorial, hostile and destructive spaces,” he replied to Pew. “For quite a few people, Facebook is a perfectly nice way to discuss culture, hobbies, family events or ask questions about travel – and even to, politely, disagree on matter politic. Other people are drawn to darker spaces defined by misinformation, hate and fear. All major platforms could make the ‘nicer’ version the easier choice.”
The problem with these sorts of solutions is that they have to be implemented by the social media companies themselves, few of whom have traditionally shown much concern for anything aside from their bottom line.
“Issues of privacy, autonomy, net neutrality, surveillance, sovereignty, will continue to mark the lines on the battlefield between community advocates and academics on the one hand, and corporations wanting to make money on the other hand,” Marcus Foth, professor of informatics at Queensland University of Technology, told Pew. Convincing these companies to act in the public good will be no easy feat, explained Chris Arkenberg, research manager at Deloitte’s Center for Technology Media and Communications.
“I do believe the largest social media services will continue spending to make their services more appealing to the masses and to avoid regulatory responses that could curb their growth and profitability,” he said. “They will look for ways to support public initiatives toward confronting global warming, advocating for diversity and equality and optimizing our civic infrastructure while supporting innovators of many stripes.” But, in doing so, Arkenberg continued, social media services may have to reevaluate their business models in the face of content moderation at scale.
Such changes could be led by the users themselves, countered Susan Price, human-centered design innovator at Firecat Studio. “People are taking more and more notice of the ways social media has systematically disempowered them, and they are inventing and popularizing new ways to interact and publish content while exercising more control over their time, privacy, content data and content feeds,” she said. “The average internet user in 2035 will be more aware of the value of their attention and their content contributions due to platforms like Clubhouse and Twitter Spaces that monetarily reward users for participation.”
Price envisions new platforms and apps touting “fairer value propositions” to set themselves apart from their competition and attract users. “Privacy, malware and trolls will remain an ongoing battleground,” she continued, “human ingenuity and lack of coordination between nations suggests that these larger issues will be with us for a long time.”
When in doubt, make more rules
Perhaps the most audacious suggestion put forth from the canvassed expert pool came from Barry Chudakov, founder and principal at Sertain Research.
“Digital spaces expand our notions of right and wrong; of acceptable and unworthy,” he exclaimed. “Rights that we have fought for and cherished will not disappear; they will continue to be fundamental to freedom and democracy. Public audiences have a significant role to play by expanding our notion of human rights to include integrities. Integrity – the state of being whole and undivided – is a fundamental new imperative in emerging digital spaces which can easily conflate real and fake, fact and artifact.”
As such, Chudakov has proposed a full conceptual framework for enacting more civil digital public spaces, a “Bill of Integrities” which would include Integrities of Speech, Identity, Transparency, Life and Exceptions. How we would enforce such a bill, whether through social norms or government policy, remains to be seen. But even though we don’t currently have all (or really, any) of the solutions to the structural problems we currently face, these challenges are not insurmountable.
“The only way we will push our digital spaces in the right direction will be through deliberation, collective action and some form of shared governance,” Erhardt Graeff, assistant professor of social and computer science at Olin College of Engineering, said. “I am encouraged by the growing number of intellectuals, technologists and public servants now advocating for better digital spaces, realizing that these represent critical public infrastructure that ought to be designed for the public good.”
“We need to continue strengthening our public conversation about what values we want in our technology,” he continued, “honoring the expertise and voices of non-technologists and non-elites; use regulation to address problems such as monopoly and surveillance capitalism; and, when we can, refuse to design or be subject to antidemocratic and oppressive digital spaces.”
During an Inside the Lab: Building for the metaverse with AI livestream event on Wednesday, Meta CEO Mark Zuckerberg didn't just expound on his company's unblinking vision for the future, dubbed the Metaverse. He also revealed that Meta's research division is working on a universal speech translation system that could streamline users' interactions with AI within the company's digital universe.
"The big goal here is to build a universal model that can incorporate knowledge across all modalities... all the information that is captured through rich sensors," Zuckerberg said. "This will enable a vast scale of predictions, decisions, and generation as well as whole new architectures, training methods and algorithms that can learn from a vast and diverse range of different inputs."
Zuckerberg noted that Facebook has continually striven to develop technologies that enable more people worldwide to access the internet and is confident that those efforts will translate to the Metaverse as well.
"This is going to be especially important when people begin teleporting across virtual worlds and experiencing things with people from different backgrounds," he continued. "Now, we have the chance to improve the internet and set a new standard where we can all communicate with one another, no matter what language we speak, or where we come from. And if we get this right, this is just one example of how AI can help bring people together on a global scale."
Meta's plan is two-fold. First, Meta is developing No Language Left Behind, a translation system capable of learning "every language, even if there isn't a lot of text available to learn from," according to Zuckerberg. "We are creating a single model that can translate hundreds of languages with state-of-the-art results and most of the language pairs — everything from Austrian to Uganda to Urdu."
Second, Meta wants to create an AI Babelfish. "The goal here is instantaneous speech-to-speech translation across all languages, even those that are mostly spoken; the ability to communicate with anyone in any language," Zuckerberg promised. "That's a superpower that people dreamed of forever and AI is going to deliver that within our lifetimes."
The company still faces the major hurdle of data scarcity. "Machine translation (MT) systems for text translations typically rely on learning from millions of sentences of annotated data," Facebook AI Research wrote in a Wednesday blog post. "Because of this, MT systems capable of high-quality translations have been developed for only the handful of languages that dominate the web."
Translating between two languages that aren't English is even more challenging, according to the FAIR team. Most MT systems will first convert one language to text, then translate that into the second language, before converting the text back to speech. This slows the translation process and creates an outsized dependence on the written word, limiting the effectiveness of these systems for primarily oral languages. Direct speech-to-speech systems, like what Meta is working on, would not be hindered in that way, resulting in a faster, more efficient translation process.
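The difference between those two pipeline shapes can be made concrete with a small sketch. The snippet below is purely illustrative: the functions and the toy word lexicon are hypothetical stand-ins, not Meta's models, but they show how the cascaded approach chains three separate stages (each a place to lose information, and each requiring a written form of the language), while a direct system collapses the hops into one.

```python
# Illustrative toy stand-ins for real MT models; every function and the
# lexicon below is a hypothetical placeholder, not an actual system.

def asr(audio: str) -> str:
    """Toy speech recognition: pretend the 'audio' is already a transcript."""
    return audio.lower()

TOY_LEXICON = {"hola": "hello", "mundo": "world"}  # stand-in for a trained model

def translate_text(text: str) -> str:
    """Toy text-to-text translation via word lookup."""
    return " ".join(TOY_LEXICON.get(w, w) for w in text.split())

def tts(text: str) -> str:
    """Toy text-to-speech: tag the text as synthesized audio."""
    return f"<audio:{text}>"

def cascaded_pipeline(audio: str) -> str:
    # Three chained models: speech -> text -> translated text -> speech.
    # Errors compound at each hop, and unwritten languages can't enter at all.
    return tts(translate_text(asr(audio)))

def direct_pipeline(audio: str) -> str:
    # A direct speech-to-speech model performs one learned mapping,
    # simulated here by skipping the intermediate text detour.
    return f"<audio:{translate_text(audio.lower())}>"

print(cascaded_pipeline("Hola mundo"))  # <audio:hello world>
print(direct_pipeline("Hola mundo"))    # <audio:hello world>
```

On this toy input both routes agree, but the cascaded version needed three models and a written intermediate; for a primarily oral language, the `translate_text` stage has nothing to train on, which is exactly the dependency Meta says direct systems avoid.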
It took NASA and its partners nearly four dozen trips between 1998 and 2010 to haul the roughly 900,000 pounds worth of various modules into orbit that make up the $100 billion International Space Station. But come the end of this decade, more than 30 years after the first ISS component broke atmosphere, the ISS will reach the end of its venerable service life and be decommissioned in favor of a new, privately-operated cadre of orbital research stations.
The problem NASA faces is what to do with the ISS once it’s been officially shuttered, because it’s not like we can just leave it where it is. Without regular shipments of propellant to keep the station on course, the ISS’ orbit would eventually degrade to the point where its forward momentum would be insufficient to overcome the effects of atmospheric drag, and the station would plummet back to Earth. So, rather than wait for the ISS to de-orbit on its own, or leave it in place for the Russians to use as target practice, NASA will instead cast down the station from upon high like Vader did Palpatine.
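To get a feel for why drag matters even 250 miles up, here is a rough back-of-the-envelope estimate of the ISS' orbital decay rate. This is emphatically not NASA's model: the area, drag coefficient, and air density values below are approximate public figures or outright assumptions (density at that altitude swings widely with solar activity), so treat the output as order-of-magnitude only.

```python
# Back-of-the-envelope sketch of ISS orbital decay from atmospheric drag.
# All parameter values are approximations or assumptions for illustration.
import math

G_M = 3.986e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6   # mean Earth radius, m

altitude = 400e3    # ISS altitude, m (approx.)
mass = 420_000      # ISS mass, kg (approx.)
area = 2_500        # cross-sectional area, m^2 (rough assumption)
c_d = 2.2           # drag coefficient typical for satellites (assumption)
rho = 3e-12         # air density at ~400 km, kg/m^3 (varies with solar activity)

r = R_EARTH + altitude
v = math.sqrt(G_M / r)                 # circular orbital speed, m/s
drag = 0.5 * rho * v**2 * c_d * area   # drag force, N
power_lost = drag * v                  # rate of orbital energy loss, W

# For a circular orbit, total energy E = -G_M * m / (2 r), so
# dE/dr = G_M * m / (2 r^2); the orbit shrinks at dr/dt = power / (dE/dr).
dE_dr = G_M * mass / (2 * r**2)
decay_m_per_day = power_lost / dE_dr * 86_400

print(f"orbital speed: {v:.0f} m/s")
print(f"drag force:    {drag:.2f} N")
print(f"decay rate:    {decay_m_per_day:.0f} m/day")
```

Even a sub-newton drag force, applied continuously at orbital speed, bleeds off enough energy to lower the orbit by on the order of a hundred meters a day, and the decay accelerates as the station sinks into denser air, which is why reboosts (or a controlled deorbit) are mandatory.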
NASA is no stranger to getting rid of refuse via atmospheric incineration. The space agency has long relied on it in order to dispose of trash, expended launch vehicles, and derelict satellites. Both America’s Skylab and Russia’s Mir space stations were decommissioned in this manner.
Skylab was America’s first space station, for the whole 24 weeks it was in use. When the final three-astronaut crew departed in early 1974, the station was boosted one last time, 6.8 miles higher into a 289-mile graveyard orbit. It was expected to remain there until the 1980s, when increased solar activity from the waxing 11-year solar cycle would eventually drag it down into a fiery reentry. However, astronomers miscalculated the strength of that solar maximum, which pushed Skylab’s demise up to 1979.
In 1978, NASA toyed with the idea of using its soon-to-be-completed Space Shuttle to boost Skylab into a higher orbit, but abandoned the plan when it became clear that the Shuttle wouldn’t be ready in time, given the accelerated reentry timetable. The agency also rejected a proposal to blow the station up with missiles while it was still in orbit. Skylab eventually came down on July 11th, 1979, though it didn’t burn up in the atmosphere as quickly as NASA had predicted, causing some rather large pieces of debris to overshoot the intended Indian Ocean target south-southeast of South Africa and instead land near Perth, Australia. Despite NASA’s calculation of a 1-in-152 chance that a piece of the lab could hit someone during its deorbit, no injuries were reported.
Mir's deorbit went much more smoothly. After 15 years of service it was brought down on March 23rd, 2001, in three stages. First, its orbit was allowed to degrade to an altitude of 140 miles. Then, the Progress M1-5 spacecraft — basically an attachable rocket designed specifically to help deorbit the station — docked with the Mir. It subsequently lit its engine for a little over 22 minutes to precisely put the Mir down over a distant expanse of the Pacific Ocean, east of Fiji.
As for the ISS’ impending demise, NASA has a plan — or at least a pretty good idea — for what’s going to happen. "We've done a lot of studies," Kirk Shireman, deputy manager of NASA's space station program, told Space.com in 2011. "We have found an orbit and a change in velocity that we believe is achievable, and it creates a debris footprint that’s all in water in an unpopulated area."
According to NASA standards — specifically NASA-STD-8719.14A, Process for Limiting Orbital Debris — the risk of human casualty on the ground must be kept below 1 in 10,000 (0.0001). However, a 1998 study conducted by the ISS Mission Integration Office found that an uncontrolled reentry would carry an unacceptable casualty probability of between 0.024 and 0.077 (roughly 2 in 100 to 8 in 100). A number of controlled decommissioning alternatives have been discussed over the decades, including boosting the ISS into a higher orbit in the event of an unexpected evacuation of the station’s crew.
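To put those figures in perspective, a back-of-the-envelope comparison, using only the numbers quoted above, shows just how far an uncontrolled reentry would overshoot the allowable risk:

```python
# Compare the 1998 uncontrolled-reentry casualty estimates against the
# NASA-STD-8719.14A limit, using the figures cited in the text.

NASA_CASUALTY_LIMIT = 1 / 10_000  # 0.0001, per NASA-STD-8719.14A

# Low and high ends of the 1998 ISS Mission Integration Office estimate
uncontrolled_reentry_risk = (0.024, 0.077)

for risk in uncontrolled_reentry_risk:
    # How many times over the allowable limit each estimate lands
    factor = risk / NASA_CASUALTY_LIMIT
    print(f"risk {risk:.3f} is {factor:,.0f}x the allowable limit")
```

Even the low-end estimate comes in at hundreds of times the permitted threshold, which is why a controlled, targeted deorbit has been part of the program's planning from the start.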
"We've been working on plans and update the plans periodically," Shireman continued. "We don’t want to ever be in a position where we couldn’t safely deorbit the station. It's been a part of the program from the very beginning."
Beginning about a year before the planned decommissioning date, NASA will allow the ISS to begin descending from its normal 240-mile-high orbit and will send up an uncrewed space vehicle (USV) to dock with the station and help propel it back Earthward. The final crew will evacuate just before the station hits an altitude of 115 miles, at which point the attached USV will fire its rockets in a series of deorbital burns to set the station on a capture trajectory over the Pacific Ocean.
NASA has not yet settled on which USV will be employed. A 2019 plan approved by NASA’s safety council, ASAP, relied on Roscosmos outfitting and launching another Progress spacecraft to do what it did for the Mir. However, that vehicle might not actually be available when the ISS is set to come down, because Russia’s commitment to the ISS program ends in 2024. In April of last year, Russian state media began making noise that the country would abandon the station entirely by 2025, potentially stripping parts from the station to reuse in its upcoming national station and leaving the ISS without a reliable way to break orbit. The ESA’s Automated Transfer Vehicle and NASA's Orion Multi-Purpose Crew Vehicle, though still in development, are both potential alternatives to the Progress.
“NASA is continuing to work with its international partners to ensure a safe deorbit plan of the station and is considering a number of options," spokeswoman Leah Cheshier told UPI via email in 2021, declining to elaborate on what those options might entail but adding that any deorbiting mission would be "shared by the ISS partnership and is negotiation-sensitive at this time."
The ISS is now entering its third and most productive decade of utilization, including research advancement, commercial value, and global partnership. The first decade of ISS was dedicated to assembly, and the second was devoted to research and technology development and learning how to conduct these activities most effectively in space. The third decade is one in which NASA aims to verify exploration and human research technologies to support deep space exploration, continue to return medical and environmental benefits to humanity, continue to demonstrate U.S. leadership in LEO through international partnerships, and lay the groundwork for a commercial future in LEO.
More than half of the experiments performed aboard the ISS nowadays are for non-NASA users, according to the report, which counts nearly two dozen commercial facilities and “hundreds of experiments from other government agencies, academia, and commercial users to return benefits to people and industry on the ground.” This influx of orbital commercial activity is expected — and being actively encouraged — to further increase over the next few years until humanity can collectively realize Jeff Bezos’ dream of building a low Earth orbit mixed-use business park.
From domestication and selective breeding to synthetic insulin and CRISPR, humanity has long sought to understand, master and exploit the genetic coding of the natural world. In The Genesis Machine: Our Quest to Rewrite Life in the Age of Synthetic Biology, authors Amy Webb, professor of strategic foresight at New York University’s Stern School of Business, and Andrew Hessel, co-founder and chairman of the Center of Excellence for Engineering Biology and the Genome Project, delve into the history of the field of synthetic biology, examine today's state of the art and imagine what a future might look like where life itself can be manufactured molecularly.
It’s plausible that by the year 2040, many societies will think it’s immoral to eat traditionally produced meat and dairy products. Some luminaries have long believed this was inevitable. In his essay “Fifty Years Hence,” published in 1931, Winston Churchill argued, “We shall escape the absurdity of growing a whole chicken in order to eat the breast or wing, by growing these parts separately under a suitable medium.”
That theory was tested in 2013, when the first lab-grown hamburger made its debut. It was grown from bovine stem cells in the lab of Dutch stem cell researcher Mark Post at Maastricht University, thanks to funding from Google cofounder Sergey Brin. It was fortuitous that a billionaire funded the project, because the price to produce a single patty was $375,000. But by 2015, the cost to produce a lab-grown hamburger had plummeted to $11.43. Late in 2020, Singapore approved a local competitor to the slaughterhouse: a bioreactor, a high-tech vat for growing organisms, run by US-based Eat Just, which produces cultured chicken nuggets. In Eat Just’s bioreactors, cells taken from live chickens are mixed with a plant-based serum and grown into an edible product. Chicken nuggets produced this way are already being sold in Singapore, a highly regulated country that’s also one of the world’s most important innovation hotspots. And the rising popularity of the product could accelerate its market entry in other countries.
An Israel-based company, Supermeat, has developed what it calls a “crispy cultured chicken,” while Finless Foods, based in California, is developing cultured bluefin tuna meat, from the sought-after species now threatened by long-standing overfishing. Other companies, including Mosa Meat (in the Netherlands), Upside Foods (in California, formerly known as Memphis Meats), and Aleph Farms (in Israel), are developing textured meats, such as steaks, that are cultivated in factory-scale labs. Unlike the existing plant-based protein meat alternatives developed by Beyond Meat and Impossible Foods, cell-based meat cultivation results in muscle tissue that is, molecularly, beef or pork.
Two other California companies are also offering innovative products: Clara Foods serves creamy, lab-grown eggs, fish that never swam in water, and cow’s milk brewed from yeast. Perfect Day makes lab-grown “dairy” products—yogurt, cheese, and ice cream. And a nonprofit grassroots project, Real Vegan Cheese, which began as part of the iGEM competition in 2014, is also based in California. This is an open-source, DIY cheese derived from caseins (the proteins in milk) rather than harvested from animals. Casein genes are added to yeast and other microflora to produce proteins, which are purified and transformed using plant-based fats and sugars. Investors in cultured meat and dairy products include the likes of Bill Gates and Richard Branson, as well as Cargill and Tyson, two of the world’s largest conventional meat producers.
Lab-grown meat remains expensive today, but the costs are expected to continue to drop as the technology matures. Until they do, some companies are creating hybrid animal-plant proteins. Startups in the United Kingdom are developing blended pork products, including bacon created from 70 percent cultured pork cells mixed with plant proteins. Even Kentucky Fried Chicken is exploring the feasibility of selling hybrid chicken nuggets, which would consist of 20 percent cultured chicken cells and 80 percent plants.
Shifting away from traditional farming would deliver an enormous positive environmental impact. Scientists at the University of Oxford and the University of Amsterdam estimated that cultured meat would require between 35 and 60 percent less energy, occupy 98 percent less land, and produce 80 to 95 percent fewer greenhouse gases than conventional animals farmed for consumption.

A synthetic-biology-centered agriculture also promises to shrink the distance between essential operators in the supply chain. In the future, large bioreactors will be situated just outside major cities, where they will produce the cultured meat required by institutions such as schools, government buildings and hospitals, and perhaps even local restaurants and grocery stores. Rather than shipping tuna from the ocean to the Midwest, which requires a complicated, energy-intensive cold chain, fish could instead be cultured in any landlocked state. Imagine the world’s most delicate, delicious bluefin tuna sushi sourced not from the waters near Japan, but from a bioreactor in Hastings, Nebraska.

Synthetic biology will also improve the safety of the global food supply. Every year, roughly 600 million people become ill from contaminated food, according to World Health Organization estimates, and 400,000 die. Romaine lettuce contaminated with E. coli infected 167 people across 27 states in January 2020, resulting in 85 hospitalizations. In 2018, an intestinal parasite known as Cyclospora, which causes what is best described as explosive diarrhea, resulted in McDonald’s, Trader Joe’s, Kroger, and Walgreens removing foods from their shelves. Vertical farming can minimize these problems. But synthetic biology can help in a different way, too: Often, tracing the source of tainted food is difficult, and the detective work can take weeks.
But a researcher at Harvard University has pioneered the use of genetic barcodes that can be affixed to food products before they enter the supply chain, making them traceable when problems arise.
That researcher’s team engineered strains of bacteria and yeast with unique biological barcodes embedded in spores. Such spores are inert, durable, and harmless to humans, and they can be sprayed onto a wide variety of surfaces, including meat and produce. The spores are still detectable months later, even after being subjected to wind, rain, boiling, deep frying, and microwaving. (Many farmers, including organic farmers, already spray their crops with Bacillus thuringiensis spores to kill pests, which means there’s a good chance you’ve already ingested some.) These barcodes could not only aid in contact tracing, but also be used to reduce food fraud and mislabeling. In the mid-2010s, there was a rash of fake extra virgin olive oil on the market. The Functional Materials Laboratory at ETH Zurich, a public research university in Switzerland, developed a solution similar to the one devised at Harvard: DNA barcodes that revealed the producer and other key data about the oil.