
Water recycling technologies developed for space are helping a parched American west

Whether you live in the rapidly drying American West or are aboard the International Space Station for a six-month stint, having enough water to live on is a constant concern. As climate change continues to wreak havoc on the West’s aquifers, and as humanity pushes further into the solar system, the potable supply challenges we face today will only grow. In the effort to ensure humanity has enough to drink, some of NASA’s cutting-edge in-orbit water recycling research is coming back down to Earth.

On Earth

In California, for example, the four billion gallons of wastewater generated daily by the state’s homes and businesses, along with storm drain and rooftop runoff, make their way through more than 100,000 miles of sewer lines where they — barring obstructionist fatbergs — eventually end up at one of the state’s 900 wastewater treatment plants. How that water is processed depends on whether it’s destined for human consumption or for non-potable uses like agricultural irrigation, wetland enhancement and groundwater replenishment.

The city of Los Angeles takes a multi-step approach to reclaiming wastewater for potable use. Large solids are first strained from incoming fluids using mechanical screens at the treatment plant’s headworks. From there, the wastewater flows into a settling tank where most of the remaining solids are removed — sludged off to anaerobic digesters after sinking to the bottom of the pool. The water is then sent to secondary processing where it is aerated with nitrogen-fixing bacteria before being pushed into another settling, or clarifying, tank. Finally, it’s filtered through a tertiary cleaning stage of cationic polymer filters where any remaining solids are removed. By 2035, LA plans to recycle all of its wastewater for potable reuse, while Aurora, Colorado, and Atlanta, Georgia, have both already begun augmenting their drinking water supplies with potable reuse.

“There are additional benefits beyond a secure water supply. If you're not relying on importing water, that means there's more water for ecosystems in northern California or Colorado,” Stanford professor William Mitch said in a recent Stanford Engineering post. “You're cleaning up the wastewater, and therefore you're not discharging wastewater and potential contaminants to California's beaches.”

Wastewater treatment plants in California face a number of challenges, the Water Education Foundation notes, including aging infrastructure; contamination from improperly disposed pharmaceuticals and pesticide runoff; and population demands combined with flows reduced by climate change-induced drought. However, their ability to deliver pristine water can actually outperform nature.

“We expected that potable reuse waters would be cleaner, in some cases, than conventional drinking water due to the fact that much more extensive treatment is conducted for them,” Mitch argued in an October study in Nature Sustainability. “But we were surprised that in some cases the quality of the reuse water, particularly the reverse-osmosis-treated waters, was comparable to groundwater, which is traditionally considered the highest quality water.”

The solids pulled from wastewater are also heavily treated during recycling. The junk from the first stage is sent to local landfills, while the biological solids strained from the second and third stages are sent to anaerobic chambers where their decomposition generates biogas that can be burned for electrical production and converted to nitrogen-rich fertilizer for agricultural use.

New York, for example, produces 22,746 tons of wastewater sludge per day from its 1,200-plus statewide wastewater treatment plants (WWTPs). However, fewer than a tenth of those plants (116, specifically) actually use that sludge to produce biogas, per a 2021 report from the Rockefeller Institute of Government, and the gas they do capture is “mainly utilized to fuel the facilities and for the combined heat and power generation of the WWTPs.”

Non-potable water can be treated even more directly and, in some cases, on-site. Wastewater, rainwater and greywater can all be reused for non-drinking purposes like watering the lobby plants and flushing toilets after being captured and treated in an onsite non-potable water reuse system (ONWS).

EPA

“Increasing pressures on water resources have led to greater water scarcity and a growing demand for alternative water sources,” the Environmental Protection Agency points out. “Onsite non-potable water reuse is one solution that can help communities reclaim, recycle, and then reuse water for non-drinking water purposes.”

In Orbit

Aboard the ISS, astronauts have even less leeway in their water use on account of the station being a closed-loop system isolated in space. There’s also the matter of cost: SpaceX charges $2,500 per pound of cargo sent into orbit on one of its rockets (after the first 440 pounds, for which it charges $1.1 million) — and liquid water is heavy.

ESA

While the ISS does get the occasional shipment of water in the form of 90-pound duffle bag-shaped Contingency Water Containers to replace what’s invariably lost to space, its inhabitants rely on the complicated web of levers and tubes you see above and below to reclaim every dram of moisture possible and process it into potability. The station’s Water Processing Assembly can produce up to 36 gallons of drinkable water every day from the crew’s sweat, breath and urine. When it was installed in 2008, the station’s water delivery needs dropped by around 1,600 gallons, weighing 15,960 pounds. It works in conjunction with the Urine Processor Assembly (UPA), Oxygen Generation Assembly (OGA), Sabatier reactor (which combines leftover hydrogen from the OGA with carbon dioxide from the cabin air to make water) and the Regenerative Environmental Control and Life Support System (ECLSS) to maintain the station’s “water balance” and supply American astronauts with a minimum of 2.5 liters of water each day. Cosmonauts in the Russian segment of the ISS rely on a separate filtration system that only collects shower runoff and condensation, and therefore require more regular water deliveries to keep their tanks topped off.

ESA

In 2017, NASA upgraded the WPA with a new reverse-osmosis filter to “reduce the resupply mass of the WPA Multi-filtration Bed” and an improved catalyst for the WPA Catalytic Reactor “to reduce the operational temperature and pressure,” the agency announced that year. “Though the WRS [water recovery system] has performed well since operations began in November 2008, several modifications have been identified to improve the overall system performance. These modifications aim to reduce resupply and improve overall system reliability, which is beneficial for the ongoing ISS mission as well as for future NASA manned missions.”

One such improvement is the upgraded Brine Processor Assembly (BPA) delivered in 2021, a filter that sieves more salt out of astronaut urine to produce more reclaimed water than its predecessor. But there is still a long way to go before we can securely transport crews through interplanetary space. NASA notes that the urine processing hardware delivered in 2008 was originally rated to recover 85 percent of the water in crew urine, though its performance has since improved to 87 percent.

NASA

“To leave low-Earth orbit and enable long-duration exploration far from Earth, we need to close the water loop,” Caitlin Meyer, deputy project manager for Advanced Exploration Systems Life Support Systems at NASA’s Johnson Space Center in Houston, added. “Current urine water recovery systems utilize distillation, which produces a brine. The [BPA] will accept that water-containing effluent and extract the remaining water.”

When the post-processed urine is then mixed with reclaimed condensation and runs through the WPA again, “our overall water recovery is about 93.5 percent,” Layne Carter, International Space Station Water Subsystem Manager at Marshall, said in 2021. To safely get to Mars, NASA figures it needs a reclamation rate of 98 percent or better.
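The jump from 87 percent urine recovery to a 93.5 percent overall figure makes sense once near-complete condensate reclamation is folded in. As a back-of-the-envelope sketch (the 50/50 split between condensate and urine below is an illustrative assumption, not a published NASA number), overall recovery is just each water stream's recovery rate weighted by its share of the total:

```python
# Hypothetical sketch: combine per-stream recovery rates into an
# overall figure. The stream split is an assumption for illustration.
def overall_recovery(streams):
    """streams: iterable of (fraction_of_total_water, recovery_rate) pairs."""
    return sum(frac * rate for frac, rate in streams)

# Assume condensate is reclaimed near-completely and urine at 87 percent:
rate = overall_recovery([(0.5, 1.00), (0.5, 0.87)])
print(f"{rate:.1%}")  # prints "93.5%"
```

Under those assumed weights, hitting NASA's 98 percent target means squeezing nearly all of the remaining water out of the urine-derived brine — which is exactly the BPA's job.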

But even if the ISS’s current state-of-the-art recycling technology isn’t quite enough to get us to Mars, it’s already making an impact planetside. For example, in the early 2000s the Argonide company developed a “NanoCeram” nanofiber water filtration system with NASA small business funding support. The filter uses positively charged microscopic alumina fibers to remove virtually all contaminants without overly restricting flow rate, eventually spawning the Oas shower from Orbital Systems.

“The shower starts with less than a gallon of water and circulates it at a rate of three to four gallons per minute, more flow than most conventional showers provide,” NASA noted last July. “The system checks water quality 20 times per second, and the most highly polluted water, such as shampoo rinse, is jettisoned and replaced. The rest goes through the NanoCeram filter and then is bombarded with ultraviolet light before being recirculated.” According to the Swedish Institute for Communicable Disease Control, the resulting water is cleaner than tap.

San Francisco police seek permission for its robots to use deadly force

The San Francisco Police Department is currently petitioning the city's Board of Supervisors for permission to deploy robots to kill suspects when law enforcement deems the threat sufficient that the "risk of loss of life to members of the public or officers is imminent and outweighs any other force option available to SFPD." The draft policy, which was written by the SFPD itself, also seeks to exclude "hundreds of assault rifles" from its inventory of military-style weapons and to leave personnel costs out of the stated price of its weapons, according to a report from Mission Local.

As Mission Local notes, this proposal has already seen significant opposition from both within and without the Board. Supervisor Aaron Peskin initially pushed back against the use-of-force provisions, inserting “Robots shall not be used as a Use of Force against any person" into the policy language. The SFPD removed that wording in a subsequent draft — apparently something the department can simply do. The three-member Rules Committee, which Peskin chairs, then unanimously approved that draft and advanced it to the full Board of Supervisors for a vote on November 29th. Peskin justified his decision by claiming that “there could be scenarios where deployment of lethal force was the only option.”

The police force currently maintains a dozen fully functional remote-controlled robots, which are typically used for area inspections and bomb disposal. However, as the Dallas PD showed in 2016, they make excellent bomb delivery platforms as well. Bomb disposal units are often equipped with blank shotgun shells used to forcibly disrupt an explosive device's internal workings, though there is nothing stopping police from loading live rounds if they need to, as Oakland police recently acknowledged to that city's civilian oversight board.

While San Francisco has never explicitly allowed robots to take human lives, lethal autonomous weapons (LAWs) are increasingly common in modern warfare. Anti-personnel mines, one of the earliest iterations of automated weaponry, have been banned since 1997 (but tell that to the mines already in the ground) and fully automated defenses like shipboard Phalanx systems have been in use since the 1970s. Offensive systems, such as UAVs and combat drones, have been used for years but have always required a "human in the loop" to bear the responsibility of actually firing the weapons. Now, the SFPD — the same department that regularly costs the city six-figure settlements for its excessive use of force and obstructs investigations into its affinity for baton-based beatings — wants to wield that same life-and-death power over San Francisco's civilians.

Add 'Diplomacy' to the list of games AI can play as well as humans

Machine learning systems have been mopping the floor with their human opponents for well over a decade now (seriously, that first Watson Jeopardy win was all the way back in 2011), though the types of games they excel at are rather limited. They're typically competitive board or video games with a limited play field, sequential moves and at least one clearly defined opponent: games where crunching the numbers works to the machine's advantage. Diplomacy, however, requires very little computation, instead demanding that players negotiate directly with their opponents and make their respective plays simultaneously, neither of which modern ML systems are generally built to do. But that hasn't stopped Meta researchers from designing an AI agent that can negotiate global policy positions as well as any UN ambassador.

Diplomacy was first released in 1959 and works like a more refined version of RISK, with between two and seven players each assuming the role of a European power and attempting to win the game by conquering their opponents' territories. Unlike RISK, where the outcome of conflicts is decided by a simple roll of the dice, Diplomacy demands players first negotiate with one another (setting up alliances, backstabbing, all that good stuff) before everybody moves their pieces simultaneously during the following game phase. The abilities to read and manipulate opponents, convince players to form alliances, plan complex strategies, navigate delicate partnerships and know when to switch sides are all a huge part of the game — and all skills that machine learning systems generally lack.

On Wednesday, Meta AI researchers announced that they had surmounted those machine learning shortcomings with CICERO, the first AI to display human-level performance in Diplomacy. The team trained Cicero, a model with 2.7 billion parameters, on some 50,000 rounds of play at webDiplomacy.net, an online version of the game, where it ended up in second place (out of 19 participants) in a 5-game league tournament while more than doubling the average score of its opponents.

The AI agent proved so adept "at using natural language to negotiate with people in Diplomacy that they often favored working with CICERO over other human participants," the Meta team noted in a press release Wednesday. "Diplomacy is a game about people rather than pieces. If an agent can't recognize that someone is likely bluffing or that another player would see a certain move as aggressive, it will quickly lose the game. Likewise, if it doesn't talk like a real person — showing empathy, building relationships, and speaking knowledgeably about the game — it won't find other players willing to work with it."

Meta

Essentially, Cicero combines the strategic mindset of Pluribus or AlphaGo with the natural language processing (NLP) abilities of BlenderBot or GPT-3. The agent is even capable of forethought. "Cicero can deduce, for example, that later in the game it will need the support of one particular player, and then craft a strategy to win that person’s favor – and even recognize the risks and opportunities that that player sees from their particular point of view," the research team noted.

The agent does not train through a standard reinforcement learning scheme as similar systems do. The Meta team explains that doing so would lead to suboptimal performance as, "relying purely on supervised learning to choose actions based on past dialogue results in an agent that is relatively weak and highly exploitable."

Instead, Cicero uses an "iterative planning algorithm that balances dialogue consistency with rationality." It first predicts its opponents' plays based on what happened during the negotiation round, as well as what play it thinks its opponents think it will make, before "iteratively improving these predictions by trying to choose new policies that have higher expected value given the other players' predicted policies, while also trying to keep the new predictions close to the original policy predictions." Easy, right?
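One way to picture that balancing act is a single anchored policy-improvement step. Everything below is a toy sketch, not Meta's implementation: the action names, payoff numbers, `pikl_step` function and the `lam` temperature are all invented for illustration. The idea is that the new policy reweights a dialogue-consistent anchor policy by the exponentiated expected value of each action, so the agent drifts toward higher-value moves without straying far from what its negotiations implied it would do:

```python
import math

def pikl_step(anchor, values, lam=1.0):
    """One anchored improvement step (illustrative sketch, not Meta's code).

    The new policy is proportional to anchor_prob * exp(value / lam):
    a large lam hugs the dialogue-consistent anchor policy, while a
    small lam chases raw expected value.
    """
    weights = {a: p * math.exp(values[a] / lam) for a, p in anchor.items()}
    total = sum(weights.values())
    return {a: w / total for a, w in weights.items()}

# Toy example: two candidate moves with hypothetical value estimates.
anchor = {"support_ally": 0.5, "stab_ally": 0.5}
values = {"support_ally": 1.0, "stab_ally": 0.2}
policy = pikl_step(anchor, values)  # support_ally now favored (~0.69)
```

In the real system this step would repeat each turn, with the value estimates themselves re-derived from the other players' predicted policies.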

The system is not yet fool-proof, as the agent will occasionally get too clever and wind up playing itself by taking contradictory negotiating positions. Still, its performance in these early trials is superior to that of many human politicians. Meta plans to continue developing the system to "serve as a safe sandbox to advance research in human-AI interaction."

Hitting the Books: How Dave Chappelle and curious cats made Roomba a household name

Autonomous vacuum maker iRobot is a lot like Tesla: not in reinventing an existing concept (vacuums, robots and electric cars all existed before these two companies came on the scene) but in imbuing its products with that intangible quirk that makes people sit up and take notice. Just as Tesla ignited the public's imagination as to what an electric car could be and do, iRobot has expanded our perception of how domestic robots can fit into our homes and lives.

More than two dozen leading experts from across the technology sector have come together in ‘You Are Not Expected to Understand This’: How 26 Lines of Code Changed the World to discuss how seemingly innocuous lines of code have fundamentally shaped and hemmed in the modern world. In the excerpt below, Upshot Deputy Editor Lowen Liu explores the development of iRobot's Roomba vacuum and its unlikely feline brand ambassadors.

Hachette Book Group

Excerpted with permission from ‘You Are Not Expected to Understand This’: How 26 Lines of Code Changed the World edited by Torie Bosch. Published by Princeton University Press. Copyright © 2022. All rights reserved.


The Code That Launched a Million Cat Videos 

by Lowen Liu

According to Colin Angle, the CEO and cofounder of iRobot, the Roomba faced some early difficulties before it was rescued by two events. The disc-shaped robot vacuum had gotten off to a hot start in late 2002, with good press and a sales partner in the novelty chain store Brookstone. Then sales started to slow, just as the company had spent heavily to stock up on inventory. The company found itself on the other side of Black Friday in 2003 with thousands upon thousands of Roombas sitting unsold in warehouses. 

Then around this time, Pepsi aired a commercial starring comedian Dave Chappelle. In the ad, Chappelle teases a circular robot vacuum with his soft drink while waiting for a date. The vacuum ends up eating the comedian’s pants—schlupp. Angle remembers that at a team meeting soon after, the head of e-commerce said something like: “Hey, why did sales triple yesterday?” The second transformative moment for the company was the rapid proliferation of cat videos on a new video-sharing platform that launched at the end of 2005. A very specific kind of cat video: felines pawing suspiciously at Roombas, leaping nervously out of Roombas’ paths, and, of course, riding on them. So many cats, riding on so many Roombas. It was the best kind of advertising a company could ask for: it not only popularized the company’s product but made it charming. The Roomba was a bona fide hit. 

By the end of 2020, iRobot had sold 35 million vacuums, leading the charge in a booming robot vacuum market.

The Pepsi ad and the cat videos appear to be tales of early-days serendipity, lessons on the power of good luck and free advertising. They also appear at first to be hardware stories — stories of cool new objects entering the consumer culture. But the role of the Roomba’s software shouldn’t be underestimated. It’s the programming that elevates the round little suckers from being mere appliances to something more. Those pioneering vacuums not only moved, they decided in some mysterious way where to go. In the Pepsi commercial, the vacuum is given just enough personality to become a date-sabotaging sidekick. In the cat videos the Roomba isn’t just a pet conveyer, but a diligent worker, fulfilling its duties even while carrying a capricious passenger on its back. For the first truly successful household robot, the Roomba couldn’t just do its job well; it had to win over customers who had never seen anything like it. 

Like many inventions, the Roomba was bred of good fortune but also a kind of inevitability. It was the brainchild of iRobot’s first hire, former MIT roboticist Joe Jones, who began trying to make an autonomous vacuum in the late 1980s. He joined iRobot in 1992, and over the next decade, as it worked on other projects, the company developed crucial expertise in areas of robotics that had nothing to do with suction: it developed a small, efficient multithreaded operating system; it learned to miniaturize mechanics while building toys for Hasbro; it garnered cleaning know-how while building large floor sweepers for SC Johnson; it honed a spiral-based navigation system while creating mine-hunting robots for the US government. It was a little like learning to paint a fence and wax a car and only later realizing you’ve become a Karate Kid. 

The first Roombas needed to be cheap—both to make and (relatively) to sell—to have any chance of success reaching a large number of American households. There was a seemingly endless list of constraints: a vacuum that required hardly any battery power, and navigation that couldn’t afford to use fancy lasers—only a single camera. The machine wasn’t going to have the ability to know where it was in a room or remember where it had been. Its methods had to be heuristic, a set of behaviors that combined trial and error with canned responses to various inputs. If the Roomba were “alive,” as the Pepsi commercial playfully suggested, then its existence would more accurately have been interpreted as a progression of instants—did I just run into something? Am I coming up to a ledge? And if so, what should I do next? All conditions prepared for in its programming. An insect, essentially, reacting rather than planning. 

And all this knowledge, limited as it was, had to be stuffed inside a tiny chip within a small plastic frame that also had to be able to suck up dirt. Vacuums, even handheld versions, were historically bulky and clumsy things, commensurate with the violence and noise of what they were designed to do. The first Roomba had to eschew a lot of the more complicated machinery, relying instead on suction that accelerated through a narrow opening created by two rubber strips, like a reverse whistle. 

But the lasting magic of those early Roombas remains the way they moved. Jones has said that the navigation of the original Roomba appears random but isn’t — every so often the robot should follow a wall rather than bounce away from it. In the words of the original patent filed by Jones and Roomba cocreator Mark Chiappetta, the system combines a deterministic component with random motion. That small bit of unpredictability was pretty good at covering the floor — and also made the thing mesmerizing to watch. As prototypes were developed, the code had to account for an increasing number of situations as the company uncovered new ways for the robot to get stuck, or new edge cases where the robot encountered two obstacles at once. All that added up until, just before launch, the robot’s software no longer fit within its allotted memory. Angle called up his cofounder, Rodney Brooks, who was about to board a transpacific flight. Brooks spent the flight rewriting the code compiler, packing the Roomba’s software into 30 percent less space. The Roomba was born.
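That "deterministic component with random motion" can be pictured as a tiny reactive behavior loop. The sketch below is hypothetical — the function name, the 20 percent wall-follow chance and the bounce angles are all invented for illustration, not taken from iRobot's code or the patent:

```python
import random

def next_heading(heading, bumped, wall_follow_prob=0.2, rng=random):
    """React to the current instant, insect-style (illustrative sketch).

    No bump: keep driving straight. On a bump: occasionally follow the
    wall (the deterministic component), otherwise bounce away at a
    random angle (the random motion).
    """
    if not bumped:
        return heading, "drive"                      # keep going straight
    if rng.random() < wall_follow_prob:
        return (heading + 90) % 360, "wall_follow"   # hug the obstacle
    return (heading + rng.uniform(90, 270)) % 360, "bounce"
```

Reacting to each instant rather than planning a route is exactly what let this fit on a tiny, cheap chip: no map, no memory of where the robot has been, just a handful of canned responses to inputs.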

In 2006 Joe Jones moved on from iRobot, and in 2015 he founded a company that makes robots to weed your garden. The weeding robots have not, as yet, taken the gardening world by storm. And this brings us to perhaps the most interesting part of the Roomba’s legacy: how lonely it is. 

You’d be in good company if you once assumed that the arrival of the Roomba would open the door to an explosion of home robotics. Angle told me that if someone went back in time and let him know that iRobot would build a successful vacuum, he would have replied, “That’s nice, but what else did we really accomplish?” A simple glance around the house is evidence enough that a future filled with domestic robots has so far failed to come true. Why? Well, for one, robotics, as any roboticist will tell you, is hard. The Roomba benefited from a set of very limited variables: a flat floor, a known range of obstacles, dirt that is more or less the same everywhere you go. And even that required dozens of programmed behaviors. 

As Angle describes it, what makes the Roomba’s success so hard to replicate is how well it satisfied the three biggest criteria for adoption: it performed a task that was unpleasant; it performed a task that had to be done relatively frequently; and it was affordable. Cleaning toilets is a pain but not done super frequently. Folding laundry is both, but mechanically arduous. Vacuuming a floor, though—well, now you’re talking. 

Yet for all the forces that led to the creation of the Roomba, its invention alone wasn’t a guarantee of success. What is it that made those cat videos so much fun? It’s a question that lies close to the heart of the Roomba’s original navigation system: part determinism, part randomness. My theory is that it wasn’t just the Roomba’s navigation that endeared it to fans—it was how halting and unpredictable that movement could be. The cats weren’t just along for an uneventful ride; they had to catch themselves as the robot turned unexpectedly or hit an object. (One YouTuber affectionately described the vacuum as “a drunk coming home from the bar.”) According to this theory, it’s the imperfection that is anthropomorphic. We are still more likely to welcome into our homes robots that are better at slapstick than superhuman feats. It’s worth noting that the top-of-the-line Roomba today will map your rooms and store that map on an app, so that it can choose the most efficient lawnmower-like cleaning path. In these high-end models, the old spiral navigation system is no longer needed. Neither is bumping into walls. 

Watching one of these Roombas clean a room is a lot less fun than it used to be. And it makes me wonder what the fate of the Roomba may have been had the first ever robot vacuum launched after the age of smartphones, already armed with the capacity to roll through rooms with precise confidence, rather than stumble along. It’s not always easy, after all, to trust someone who seems to know exactly where they are going.

Black market fears are hampering cannabis waste recycling efforts in California

As American cannabis has grown from cottage industry to a $25 billion-a-year commercial enterprise that employs 428,059 folks nationwide, the product that weed has become often bears little resemblance to the one that used to be sold raw. Flower, once delivered in sandwich bags, now arrives wrapped in child-safety-locked, plastic-lined mylar pouches; every gram of hash seemingly needs its own glass jar, plastic lid, and cardboard box; and half-gram vape pens must often be dug from three times their own weight in display and security packaging before use. And while most of the outer packaging can be easily recycled, vaporizer cartridges themselves can be far more problematic to dispose of.

Cannabis is more popular than ever in the US — 44 percent of adults have access to it, either medically or recreationally, more than 90 percent of adults support its full legalization, and a 2021 Weedmaps survey suggests that usage has increased by 50 percent since the start of the pandemic. What’s more, edibles and concentrates continue to rise in popularity among all age groups, from boomers to doomers. This increased demand for vape cartridges — both near-ubiquitous 510-threads like those from Rove or more specialized carts like the Pax Era Pods — has led to their increased production and, in turn, their inevitable arrival in American landfills. In California, the nation’s largest legal cannabis market, 510 cartridges are quite popular but, due to the state’s strict hazardous waste disposal regulations, difficult to dispose of in a responsible manner.

On the production side, virtually every ingredient, component, growth medium, nutrient, castoff, trimming, and scrap is carefully destroyed, typically either dismantled on-site or rendered unusable before being shipped to a certified waste facility. At the cultivation level, Taylor Vozniak, Sales and Marketing Manager for California cannabis waste management company Gaiaca, told Engadget, “it would be plants after they've been trimmed, grow medium — that's either going to be soil or rock wool or cocoa husk — any sort of water nutrients or pesticides.”

At the manufacturing stage, the company handles post-production green waste (think, mashed up stems and leaves) as well as hazardous waste like concentrate solvents and failed edible product batches like misshapen canna-gummies or burned weed brownies — the latter must be destroyed on-site to stay within bounds of the California Cannabis Track and Trace (CCTT) system operated by the state’s Department of Cannabis Control. The CCTT extends to the point of sale, meaning that local dispensaries are responsible for seeing returned product and defective merchandise properly destroyed.

“Single-use batteries have been a big sticking point for a while now,” Vozniak said. “We're proud that we can recycle those vape batteries either with or without cannabis.” As it turns out, much of the underlying impetus for the creation of the CCTT system, Vozniak notes, is to prevent this waste from being illicitly harvested and resold. “The overarching [reason] these regulations were written the way they were is to prevent any sort of product going into the black market,” he noted, which is why cannabis by-products, which is what all the stuff above is considered, have to be rendered into inert “waste” before they get put in the ground. It’s also why your local dispensary doesn’t have a drop-off bin for used cartridges.

Products are handled slightly differently depending on whether they’re THC or CBD-based. “CBD is federally legal,” Vozniak said — so that it can be transported across state lines for disposal, “while THC is state-by-state regulated. A lot of the time you'll see, especially in California, CBD destroyed on site, but I have a client in Dallas who I've been able to just take their product as-is off site to a disposal facility.”

The materials that can be directly recycled or composted will be. The six-month composting process is sufficient to leach out and fully decay any leftover THC before the material is repackaged and sold as a gardening amendment. Less sustainable materials, like used nitrile gloves and non-recyclable or food-contaminated packaging, will instead be routed to local landfills and incinerators. But not vape cartridges. Those, along with the Li-ion batteries that power them, are considered e-waste in California, so there’s a litany of additional regulatory hurdles to jump through before throwing one away.

“What ends up happening is you'll be able to take [used carts and batteries] to a recycling vendor for a while,” Vozniak said, until “they realize it's a difficult product to deal with, so we'll have to find new vendors.”

MediaNews Group/Reading Eagle via Getty Images

The difficulty with recycling cartridges lies in their complex construction and mix of materials — woven cloth wicks and aluminum atomizers sealed by plastic walls with rubber o-rings keeping the viscous liquid in place. You can’t very well clean, sort, and disassemble these items by hand; as e-waste, they’re sorted, cleaned and then repeatedly mechanically shredded and resorted into progressively smaller chunks until they’re reduced and separated into their constituent materials. Vape pen batteries, both rechargeable and single-use all-in-one varieties, go through a similar process, Vozniak explains. They’re first statically separated by density, then dipped into liquid nitrogen to instantly freeze and deactivate the lithium-ion cells before they’re pulverized with mechanical hammers and further sorted for commodity sale.

If that seems like a whole lot of work for such tiny devices, you’re not wrong. Though California’s legal cannabis industry is less than a decade old, much of the language of Prop 64 is already falling out of relevance. “When things were first written, there was a lack of understanding of how the cannabis industry would end up operating,” Vozniak said. He points to all-in-one (AIO) pen battery disposal as one such example.

“We still have to destroy these products on site — and I understand the concern there, they [state regulators] don't want anything going to the black market — but for these all-in-one-pens, there really is no way to destroy them without putting the operators at risk,” he continued. “A lot of times, operators are going to try to destroy these products themselves because Gaica can be on the more expensive side just because of the nature of what we do. It's very labor intensive.”

Vozniak has seen cannabis retailers encase old AIOs in blocks of resin to deactivate them — whole drums of resin-ensconced lithium batteries that no recycler would ever take — in order to comply with the state’s “destroy on-site” order. Vozniak argues that a basic exemption to that rule specifically for cannabis e-waste could, “really help the industry out because that's really what I'm seeing most — out of state as well.”

In addition to contacting their district and state representatives to advocate for regulatory amendments, vape pen users looking to reduce their consumption footprints have a number of options. Refillable 510 cartridges are a thing — they operate just as the single-use canisters from the dispensary do but have a screw-on lid for injecting fresh oil — such as the Flacko Jodye from KandyPens, the SPRK ceramic from PCKT, an all-in-one kit from Kiara Naturals, or the Puffco Plus. Maintaining and cleaning refillable tanks is straightforward and they can easily be topped off using a dab syringe from either your local dispensary or friendly neighborhood drug dealer if you prefer a more homebrewed product.

MIT solved a century-old differential equation to break 'liquid' AI's computational bottleneck

Last year, MIT developed an AI/ML algorithm capable of learning and adapting to new information while on the job, not just during its initial training phase. These “liquid” neural networks (in the Bruce Lee sense) literally play 4D chess — their models require time-series data to operate — which makes them ideal for time-sensitive tasks like pacemaker monitoring, weather forecasting, investment forecasting or autonomous vehicle navigation. The problem is that data throughput has become a bottleneck, and scaling these systems has grown prohibitively expensive, computationally speaking.

On Tuesday, MIT researchers announced that they have devised a solution to that restriction, not by widening the data pipeline but by solving a differential equation that has stumped mathematicians since 1907. Specifically, the team solved, “the differential equation behind the interaction of two neurons through synapses… to unlock a new type of fast and efficient artificial intelligence algorithms.”

“The new machine learning models we call ‘CfC’s’ [closed-form Continuous-time] replace the differential equation defining the computation of the neuron with a closed form approximation, preserving the beautiful properties of liquid networks without the need for numerical integration,” MIT professor and CSAIL Director Daniela Rus said in a Tuesday press statement. “CfC models are causal, compact, explainable, and efficient to train and predict. They open the way to trustworthy machine learning for safety-critical applications.”

So, for those of us without a doctorate in Really Hard Math: differential equations are formulas that can describe the state of a system at various discrete points or steps throughout a process. For example, if you have a robot arm moving from point A to B, you can use a differential equation to know where it is between the two points in space at any given step within the process. However, solving these equations for every step quickly gets computationally expensive. MIT’s “closed form” solution end-arounds that issue by functionally modeling the entire description of a system in a single computational step. As the MIT team explains:

Imagine if you have an end-to-end neural network that receives driving input from a camera mounted on a car. The network is trained to generate outputs, like the car's steering angle. In 2020, the team solved this by using liquid neural networks with 19 nodes, so 19 neurons plus a small perception module could drive a car. A differential equation describes each node of that system. With the closed-form solution, if you replace it inside this network, it would give you the exact behavior, as it’s a good approximation of the actual dynamics of the system. They can thus solve the problem with an even lower number of neurons, which means it would be faster and less computationally expensive.
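The distinction the team is drawing — stepwise numerical integration versus a closed-form expression — can be sketched with a toy example. The leaky-integrator ODE below is a stand-in for illustration, not MIT's actual neuron model:

```python
import math

# Toy "neuron": a leaky integrator, dx/dt = (I - x) / tau.
# Numerical integration marches through thousands of small steps;
# the closed form yields x(t) in a single evaluation.

def euler_solve(x0, I, tau, t, steps=10_000):
    """Approximate x(t) by stepping the ODE forward in time."""
    dt = t / steps
    x = x0
    for _ in range(steps):
        x += dt * (I - x) / tau
    return x

def closed_form(x0, I, tau, t):
    """Exact solution: x(t) = I + (x0 - I) * exp(-t / tau)."""
    return I + (x0 - I) * math.exp(-t / tau)

print(euler_solve(0.0, 1.0, 0.5, 2.0))   # ~0.98168 after 10,000 steps
print(closed_form(0.0, 1.0, 0.5, 2.0))   # ~0.98168 in one evaluation
```

CfC models apply an analogous idea at scale: each neuron's between-step dynamics are captured by a closed-form approximation evaluated once, rather than integrated over and over.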

By solving this equation at the neuron-level, the team is hopeful that they’ll be able to construct models of the human brain that measure in the millions of neural connections, something not possible today. The team also notes that this CfC model might be able to take the visual training it learned in one environment and apply it to a wholly new situation without additional work, what’s known as out-of-distribution generalization. That’s not something current-gen models can really do and would prove to be a significant step towards the generalized AI systems of tomorrow.

Tesla is offering its proprietary charge connector as a new North American standard

When it comes to charging your EV in the US, Canada and Mexico, the only two connector types available aren't cross-compatible. Tesla has its own proprietary connector, which in the company's defense was developed when Tesla was still the only EV game in town. Everybody else uses the current North American standard, the Combined Charging System (CCS), which builds on the J1772 plug. Tesla apparently hopes to upend that dynamic, announcing Friday that it is "opening our EV connector design to the world."

Tesla is releasing the specs and production designs for its connector, which it is rebranding as the North American Charging Standard (NACS), in hopes that charging networks like Electrify America and ChargePoint will incorporate the company's hardware in their stations. The NACS connector contains "no moving parts, is half the size, and twice as powerful" as the alternative, Tesla argues.

The company presses that these networks should adopt its technology because "NACS vehicles outnumber CCS two-to-one, and Tesla's Supercharging network has 60 percent more NACS posts than all the CCS-equipped networks combined." I mean, sure, but that's kind of ignoring that those numbers are a direct result of the multi-year head start Tesla held over its competition — a lead that is rapidly shrinking as the industry's marquee brands like GM, Honda and Audi pivot to electrification and Chinese makers like BYD dominate the EV space in Asia's largest market.

Tesla claims that "network operators already have plans in motion to incorporate NACS at their chargers," without specifying which networks are doing so and at what scale. The company "looks forward to future electric vehicles incorporating the NACS design and charging at Tesla’s North American Supercharging and Destination Charging networks."

We can only speculate as to why Tesla has decided that right now — even as Elon Musk sinks faster than Artax into the quicksands of Twitter ownership — is the best time to open up its standard to the rest of the industry. Tesla, and now Twitter too, does not employ a public-facing PR team, so your guess is as good as any blue check's.

Adaptive 'high-definition' headlights are just around the corner for American drivers

The first headlights to adorn automobiles weren’t all that much better than squinting real hard and hoping any cows in the road had the good sense to move out of your way. The dim light cast by early kerosene oil and acetylene gas lamps made most travel after dark a fool’s errand. 

Today, of course, the latest generation of headlights works much like a modern television, with tightly packed arrays of pixelated lights blinking up to 5,000 times a second, allowing drivers to essentially use high and low beams at the same time. Until very recently, however, cutting-edge features like that weren’t allowed on vehicles sold in the US due to an NHTSA regulation set in the 1960s. But thanks to a multi-year lobbying effort on the part of Toyota, those regulations changed last February — now America’s roadways are about to become a bit brighter and a whole lot safer.

How headlights evolved from open flames to laser pixels

Following the short-lived idea of using open flames to light the way, the first electric headlights appeared on the 1912 Cadillac Model 30 and, by the next decade, were quickly becoming mandatory equipment across the nation. The first split-intensity headlights offering separate low and high beams were produced in 1915 but wouldn’t be included in a vehicle’s OEM design until 1924, and the floor-mounted switch that controlled them wouldn’t be invented until three years after that — a full decade of having to get out of the car just to turn your lights on and blink between brightnesses!

The advent of sealed beam headlights with filaments for both low and high beams in 1954, and their widespread adoption by 1957, proved a massive technological leap. With low beams for dusk and evening driving, and high beams for late-night travel on otherwise unlit roads, these new headlights drastically extended the hours of the day a car could safely be on the road.

The first halogen light, which would itself quickly become a global standard, debuted in 1962. But halogens at the time were about as popular in the US as the metric system — we still preferred tungsten incandescents. That changed with the passage of the Motor Vehicle Safety Act of 1966 and the formation of the National Highway Traffic Safety Administration (NHTSA) in 1968, which took the existing hodge-podge of state-level vehicular regulations and federalized them, as well as the formal adoption that year of Federal Motor Vehicle Safety Standard (FMVSS) 108, which dictated that all headlights be constructed of sealed beams.

kampee patisena via Getty Images

By the 1970s, halogen bulbs, with their increased brightness and efficiency compared to tungsten incandescents, became the industry standard. The ‘80s, in turn, saw US regulations expand to allow for replaceable-bulb headlamps, which the European market had already been enjoying for a number of years. The ability to swap out a bulb rather than an entire headlight unit, combined with recent material advances that saw lamp lenses constructed out of plastic instead of glass, drastically reduced the cost of making and operating headlights. And by the ‘90s, halogens had themselves fallen to the wayside in favor of modern xenon and LED lighting technologies. The 21st century has seen further advances to not just the lighting technology itself — hello halo and laser headlights! — but also the control systems that direct the beams.

Due to differences in their respective transportation regulations, the rate of technological adoption has diverged between US drivers and their European counterparts — often with the Americans lagging behind. As with replaceable bulbs in the ‘50s and glare reduction efforts in the ‘30s, Europe has shown itself far more willing to innovate and readily implement new headlight advances, while the US has held back, in part due to the restrictions imposed by FMVSS 108. Because Standard 108 defined headlights as having only high or low beams — and legally required that they remain separate — it tacitly excluded all of the technical advances that followed, specifically adaptive driving beam (ADB) headlight systems as found in Audi’s matrix LEDs, Lexus’ Blade Scan LEDs or Ford’s Adaptive Front Lighting System, none of which you will currently find operable stateside.

Those and similar ADB systems have been available in Europe, Canada and Japan since the technology's debut in 2004 (though, technically, the 1967 Citroen DS did feature headlights that swiveled in sync with the steering). It would be more than a decade — not until Toyota’s monumental 2015 petition — before the NHTSA would even consider allowing their use in the North American market. In fact, it took another three years beyond that for the agency’s bureaucratic skullduggery to wrap up, and it wasn’t until February of this year — a year and a half ahead of schedule, thanks to a requirement set forth in the Bipartisan Infrastructure Bill — that the NHTSA amended the regulation.

“NHTSA prioritizes the safety of everyone on our nation’s roads, whether they are inside or outside a vehicle. New technologies can help advance that mission,” Dr. Steven Cliff, NHTSA’s Deputy Administrator, said in a February press release. “NHTSA is issuing this final rule to help improve safety and protect vulnerable road users.”

“Adaptive driving beam headlight systems, or ADB, use automatic headlight beam switching technology to shine less light on occupied areas of the road and more light on unoccupied areas,” the NHTSA further explained. “The adaptive beam is particularly useful for distance illumination of pedestrians, animals, and objects without reducing the visibility of drivers in other vehicles.”

How Adaptive Driving Beams bend light around rain

Broadly, ADB systems are headlights that actively adapt to prevailing conditions — redirecting light around falling rain and snow, extending the beams ahead of turns, or dimming only the portions of the high beams aimed at oncoming vehicles. These systems often leverage the same forward-facing cameras used by adaptive cruise control and can be programmed not just to illuminate the road ahead but to display pertinent navigation information as well.

Audi, outside of the US, for example, offers Digital Matrix LED headlights — LEDs arrayed in a grid pattern and granularly controlled via a DMD (digital micromirror device) chip. They operate much like the digital projection technology they’re based on.

”At its heart is a small chip containing one million micromirrors, each of whose edge length measures just a few hundredths of a millimeter,” Audi’s Lighting page explains. ”With the help of electrostatic fields, each individual micromirror can be tilted up to 5,000 times per second. Depending on the setting, the LED light is either directed via the lenses onto the road or is absorbed in order to mask out areas of the light beam.”

Those masked areas are where the light isn’t bouncing off falling water or glaring into the eyes of other drivers. What’s more, the system will project “dynamic leaving- and coming-home animations” onto nearby surfaces, as a treat. More practically, the system can angle the beams to illuminate farther into turns.

Similarly, the HD Matrix LED system found on later-model-year A8s will, under specific circumstances, dim the vehicle’s high beams without any human intervention. When the headlights are set to Automatic, the vehicle is traveling over 18 mph outside of urban areas (as determined by the navigation system), and the front camera sees another vehicle, the headlights will darken and dim individual LEDs in 64 stages — roughly several million potential patterns — to “mask out other vehicles while continuing to fully illuminate the zones between and adjacent to them.”
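At its core, this kind of selective dimming is a masking operation over the LED matrix. The sketch below is a deliberately simplified illustration — the grid size, dimming level and vehicle bounding boxes are invented for the example, and real systems blend dozens of brightness stages rather than a single cutoff:

```python
# Simplified ADB-style beam masking: dim the LED pixels covering detected
# vehicles while leaving the rest of the matrix at full brightness.
# (Illustrative only -- not any manufacturer's actual algorithm.)

def mask_beam(grid, vehicles, dim_level=0.1):
    """grid: 2D list of intensities (0.0-1.0).
    vehicles: (row0, col0, row1, col1) bounding boxes from the front camera."""
    out = [row[:] for row in grid]          # leave the input grid untouched
    for r0, c0, r1, c1 in vehicles:
        for r in range(r0, r1 + 1):
            for c in range(c0, c1 + 1):
                out[r][c] = min(out[r][c], dim_level)
    return out

beam = [[1.0] * 8 for _ in range(4)]      # toy 4x8 LED matrix at full high beam
masked = mask_beam(beam, [(1, 2, 2, 4)])  # one oncoming car detected

print(masked[1][3])  # 0.1 -- dimmed over the vehicle
print(masked[0][0])  # 1.0 -- full brightness everywhere else
```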

Ford’s high-resolution Adaptive Front Lighting System, which debuted in Europe this past August, offers similar capabilities. The company notes that roughly 40 percent of accidents occur on UK roads after sundown. Glancing down at bright infotainment displays while on dark roads can temporarily blind drivers, so Ford’s headlights will project speed limits, navigation cues and road hazard warnings onto the road itself. What’s more, the beams can “bend” around corners and penetrate fog, rain and other inclement weather conditions.

“What started as playing around with a projector light and a blank wall could take lighting technologies to a whole new level,” Ford engineer Lars Junker said in a press release. “There’s the potential now to do so much more than simply illuminate the road ahead, to help reduce the stress involved in driving at night. The driver could get essential information without ever needing to take their eyes off the road.”

Mercedes’ Digital Light system, on the other hand, uses a unique light module consisting of three LEDs mounted in each headlamp. Their light is reflected by a thumbnail-sized array of some 1.3 million micromirrors, each of which is controlled via an onboard graphics processor to precisely bend and attenuate the beams. According to Mercedes, that fidelity enables its Highbeam Assist to function two orders of magnitude more precisely in excluding oncoming traffic than conventional 84-pixel arrays.

At the other end of the spectrum, Lexus’ Blade Scan high-definition headlights, which debuted in Asian markets in 2019, only utilize 24 LEDs per headlight. Rather than an array of micromirrors, Lexus uses a pair of rapidly-rotating mirrors to direct their light through the lens and onto the road. Per the company, this allows the system to aim with 0.7 degrees of accuracy and detect pedestrians at the roadside up to 184 feet away.

Unfortunately, as cool — and as newly legal — as these capabilities are, American drivers still have a short wait before they come stateside. That's because the NHTSA must now devise a set of testing requirements by which to measure and regulate adaptive headlights under the revised standard. In the short term, that means we’ll likely see more new vehicles equipped with ADB-capable-but-disabled hardware that can be activated over the air later on, once the regulations have firmed up.

“While adaptive headlights have been approved, the testing requirements for approval put forth by NHTSA is still under discussion,” an Audi representative told Engadget. “Because of this, [I’m] afraid we are still not able to offer the matrix functionality in the US at this time and continue to work with regulators to bring this safety relevant function to market.”

Volvo officially reveals the EX90 EV SUV, its 'safest car ever'

Volvo took another step towards its 2030 goal of full electrification on Wednesday with the official unveiling of its new flagship, the all-electric EX90. The three-row, seven-seat SUV, which grew out of the Concept Recharge design, will go on sale alongside its gas-powered sibling, the XC90, in model year 2024.

The EX90 will initially come equipped with a 111 kWh battery pack powering a pair of permanent magnet electric motors for 380 kW (517 hp) and 910 Nm of AWD torque. Per the company, the pack can refill from 10 to 80 percent in under 30 minutes. Volvo has not yet shared performance figures for the vehicle.

The EX90 will be the first Volvo to offer bi-directional charging capabilities, which enable drivers to use their vehicles as home-scale batteries in the event of power outages — similar to what Hyundai's Ioniq 5 and the Ford F-150 Lightning offer. Volvo plans on selling home charging equipment as well, including a wall box and an energy management system. What's more, the EX90 — like the rest of its EV brethren — will enjoy over-the-air software updates, though Volvo has begun expanding that service to hybrid and ICE models as well.

The EX90's exterior is optimized for aerodynamics, boasting a drag coefficient of 0.29, just a touch behind the VW ID.4 and very respectable for a full-size SUV, electric or not. "We’ve taken inspiration from yacht design to outline the Volvo EX90’s beautiful and sleek proportions,” Volvo's head of design, T. Jon Mayer, said in a press release. “If you look at the front, it’s proud and confident – inspired by a sailboat’s ability to shear through the ocean’s slamming waves. But it’s also rounder overall, which enables the air to flow around the car more efficiently.”

Pedestrians will sail around the car more efficiently as well, thanks to the "shield of safety" that Volvo is working on. Using a mix of LiDAR, optical, ultrasonic and radar sensors, the EX90 will offer a 360 degree view around itself to proactively react to other vehicles and pedestrians even if the driver doesn't immediately notice the issue. The EX90's LiDAR sensors can reportedly spot pedestrians up to 250 meters away. The company believes that the system could reduce the rate of all accidents by 9 percent and cut accidents resulting in serious injury or death by as much as 20 percent. 

"No matter how much experience you have or how much competence you have, at the end of the day, we are all still human," Volvo Head of Safety Jim Rowan quips in the walkthrough above. "We want to help people become better drivers by being there when they're not at their very best." The company plans to further incorporate the LiDAR sensors into its future unsupervised autonomous driving system.

But like the oozing garbage bags one finds on the side of the highway, it's what's inside that counts. The EX90's cabin offers a well-lit, Scandinavian-minimalist design clad in sustainable and recycled materials, such as "Nordico," a fabric made from recycled PET bottles "as well as bio-attributed material from responsibly-managed forests in Sweden and Finland." All of the interior wood panels are FSC-certified, and if you opt for the wool seat fabric, that yarn will come from vetted suppliers "according to strict sustainability standards on animal welfare, environmental and social issues." In all, more than 50 kilograms of recycled plastic are used in each EX90 interior.

“We’ve put a lot of effort into the illumination inside the EX90, trying to create a warm interior and a somewhat colder expression for the exterior,” Mayer noted. “It’s also connected to how people in Scandinavia might be perceived. There’s a calm and understated confidence that can read as cold at first – but once you get to know people, you find that they're really warm.”

The EX90's in-cabin LiDAR is a world-first, designed to detect the presence of occupants and alert the driver if anyone is left behind in an effort to prevent hot car deaths — Volvo notes that more than 900 kids have died in these circumstances in the US since 1998. Where allowed by regulation, the system will prevent the keyfob from locking the doors should a child or pet be detected inside and display a warning icon on the central infotainment screen.
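The lock-out behavior described above boils down to a simple guard on the keyfob's lock request. The function below is a hypothetical sketch — the names, parameters and warning text are invented for illustration, not Volvo's software:

```python
# Hypothetical occupant-aware lock logic (illustrative only -- not Volvo code).

def handle_lock_request(occupant_detected, region_allows_lock_block):
    """Return (doors_locked, warning) for a keyfob lock attempt."""
    if occupant_detected:
        if region_allows_lock_block:
            # Where regulation permits, refuse the lock outright.
            return False, "Occupant detected - doors not locked"
        # Otherwise lock, but surface a warning on the infotainment screen.
        return True, "Occupant detected - check rear seats"
    return True, None

print(handle_lock_request(True, True)[0])   # False -- lock refused
print(handle_lock_request(False, True))     # (True, None) -- normal lock
```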

"No one chooses to be distracted or tired, but we know it can happen," said Lotta Jakobsson, Volvo senior technical specialist in injury prevention. "We’re all human and distraction is a fact of life. With the help of cutting-edge technology, we’ll support you when you’re not at your best and help you avoid leaving family members or pets behind by accident."


Audi's new flagship Q8 e-tron SUV boasts a maximum range of 373 miles

Audi has introduced seven, well, now eight, electric SUVs under the e-tron banner since production began in 2018 — with another dozen already in the works for release by 2026. After months of teasing, the company finally, officially introduced the latest iteration of its luxury EV line with Wednesday’s reveal of the upcoming Q8 e-tron crossover SUV.

Audi

Available for order starting next spring, the Q8 e-tron and its Sportback version will both offer three AWD powertrain options: the Q8 50, the Q8 55 and the top-of-the-line SQ8. The base model Q8 50 e-tron delivers 250 kW (335 HP) with 490 lb-ft of torque, a 0-60 time of six seconds flat and a max range of 306 miles for the SUV body style (313 miles for the Sportback).


The step-up Q8 55 e-tron, again in both SUV and Sportback forms, outputs a peak 300 kW (402 HP) with 490 lb-ft of torque. The SUV ranges out to 361 miles while the Sportback can hit 600 km (372 miles), thanks to their larger battery pack. The top-of-the-line SQ8 e-tron boasts three motors to the others’ pairs, letting it generate 370 kW (496 HP) and 717 lb-ft of torque with 307 miles (SUV) and 319 miles (Sportback) of range. The S model is software-limited to a top speed of 130 MPH and does 0-60 in 5.6 seconds.

The Q8 50s will come equipped with a net-89 kWh battery pack that can charge at up to 150 kW. The Q8 55s, however, are outfitted with a bigger net-106 kWh pack that can achieve 170 kW charging rates (higher rates equal faster charging times, which means less sitting around a roadside power station). The 55s will be able to refill from 10 to 80 percent (~260 miles of range) in just over half an hour using an L3 DC fast charger, though the rate drops to just 11 to 22 kW, depending on the wall box options, for home charging. You’re going to have to leave a Q8 50 on an 11 kW socket for about nine and a quarter hours to fully replenish it, and a whopping 11 and a half hours for the bigger Q8 55. Those numbers drop to just under five hours for the 50 and six hours for the 55 at 22 kW.
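Those home-charging figures are roughly what naive charge-time math predicts. The sketch below assumes a flat charge rate and a ~90 percent efficiency factor for AC losses — both are simplifying assumptions of this example, not Audi-published numbers, and real DC sessions taper as the pack fills:

```python
# Back-of-the-envelope EV charge-time estimate.
# The 0.9 efficiency factor is an assumed figure for charging losses.

def charge_hours(pack_kwh, rate_kw, start=0.0, end=1.0, efficiency=0.9):
    """Hours to bring a pack from `start` to `end` state of charge."""
    return pack_kwh * (end - start) / (rate_kw * efficiency)

print(round(charge_hours(89, 11), 1))    # Q8 50 on an 11 kW wall box: ~9.0 h
print(round(charge_hours(106, 11), 1))   # Q8 55 on the same box: ~10.7 h
print(round(charge_hours(106, 150, start=0.1, end=0.8), 2))  # DC fast, 10-80%: ~0.55 h
```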


The new e-tron line boasts more efficient motors than its predecessors, offering 14-coil power plants that can generate a stronger magnetic field for roughly the same amount of electrical input as the older 12-coil motors required. A stronger field can generate more torque when necessary but also step that power back when it isn’t needed to help extend the vehicle’s range. What’s more, the vehicle’s exteriors have been designed to minimize air resistance, with the Q8 Sportback offering a drag coefficient of just 0.24 — that’s the same as the Polestar 2 — and the Q8 e-tron offering 0.27, slightly better than the VW ID.4.


“Looking at the current Audi e-tron, it's clear that we're starting with a very solid base of technical features as we move forward into the Q8 e-tron family,” Audi spokesperson Benedikt Still said during a press preview last week. “We'll be carrying over and retaining this strong character in the new model; groundbreaking features such as the digital matrix headlights or the virtual side mirrors are still at the technical forefront today.”

As is the way in the luxury vehicle segment, the Q8 is packed with high-tech features. The vehicle offers nearly four dozen driver assist features based on data gathered from as many as five radar sensors, five optical cameras and a dozen ultrasonic pickups. This plethora of incoming information is enough to let the Q8 park itself. Available for order in 2023, remote park assist plus will autonomously guide your seventy-some odd thousand EV “into even the tightest parking spaces,” according to Wednesday's release, in a process controlled through the driver’s myAudi app.


And as with every e-tron released since the line went electric in 2018, the Q8 will come equipped with Audi’s Matrix LED headlights. But unlike previous model years, the Q8’s headlights will finally be ADB capable following a long-awaited NHTSA ruling this past February and, as such they’ll have three new features: enhanced traffic information, a lane light with a direction indicator and an orientation light on country roads.

If you were expecting the interior to be anything short of opulent, think again. The two-part panoramic roof is electronically controlled, as are the integrated sunshades. Audi is also offering four-zone automatic climate control as an option, as well as massaging functions for the synthetic leather-clad seats. You’ll be hard-pressed to find a physical button to push, as virtually all of the cabin’s features are controlled through the pair of central infotainment screens — a 10.1-inch upper and an 8.6-inch lower — or via voice command.


Order windows for both the Q8 e-tron and the Q8 Sportback open mid-November. Audi is aiming for an initial market launch in Germany and major European markets at the end of next February with arrivals to the US happening by the end of April. Audi has announced a base MSRP of 74,400 euros or around $72,500 US at current exchange rates.