Posts with «author_name|andrew tarantola» label

NYU is building an ultrasonic flood sensor network in New York's Gowanus neighborhood

People made some 760 million trips aboard New York’s subway system last year. Granted, that’s down from around 1.7 billion trips pre-pandemic, but it still far outpaced the next two largest transit systems — DC’s Metro and the Chicago Transit Authority — combined. So when major storms, like last year’s remnants of Hurricane Ida, nor'easters, heavy downpours or swelling tides swamp New York’s low-lying coastal areas and infrastructure, it’s a big deal.

Jonathan Oatis / reuters

And it’s a deal that’s only getting bigger thanks to climate change. Sea levels around the city have already risen a foot in the last century, with another 8- to 30-inch increase expected by mid-century and up to 75 additional inches by 2100, according to the New York City Panel on Climate Change. To help city planners, emergency responders and everyday citizens alike better prepare for 100-year storms that now seem to arrive every couple of years, researchers from NYU’s Urban Flooding Group have developed a street-level sensor system that can track rising street tides in real time.

The city of New York is set atop a series of low lying islands and has been subject to the furies of mid-Atlantic hurricanes throughout its history. In 1821, a hurricane reportedly hit directly over the city, flooding streets and wharves with 13-foot swells rising over the course of just one hour; a subsequent Cat I storm in 1893 then scoured all signs of civilization from Hog Island, and a Cat III passed over Long Island, killing 200 and causing major flooding. Things did not improve with the advent of a storm naming convention. Carol in 1954 also caused citywide floods, Donna in ‘60 brought an 11-foot storm surge with her, and Ida in 2021 saw an unprecedented amount of rainfall and subsequent flooding in the region, killing more than 100 people and causing nearly a billion dollars in damages.

NOAA

As the NYC Planning Department explains, when it comes to setting building codes, zoning and planning, the city works off of FEMA’s Preliminary Flood Insurance Rate Maps (PFIRMs) to calculate an area’s flood risk. PFIRMs cover the areas where “flood waters are expected to rise during a flood event that has a 1 percent annual chance of occurring,” sometimes called the 100-year floodplain. As of 2016, some 52 million square feet of NYC coastline falls within that categorization, impacting 400,000 residents — more than the entire populations of Cleveland, Tampa, or St. Louis. By 2050, that area of effect is expected to double and the probability of 100-year floods occurring could triple, meaning the chances that your home will face significant flooding over the course of a 30-year mortgage would jump from around 26 percent today to nearly 80 percent by mid-century.
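The 26 percent figure follows directly from the floodplain's definition: a 1 percent annual chance compounded over a 30-year mortgage. Here's a quick sketch of that arithmetic; note the 3 percent rate is just an illustration of "tripled" odds, and the article's "nearly 80 percent" projection also folds in the expanding floodplain itself.

```python
# Back-of-envelope check of the mortgage-horizon flood odds, using only
# the definition of a "1 percent annual chance" flood event.

def flood_risk_over_horizon(annual_prob: float, years: int) -> float:
    """Probability of at least one flood across `years` independent years."""
    return 1 - (1 - annual_prob) ** years

today = flood_risk_over_horizon(0.01, 30)   # 100-year floodplain, 30-yr mortgage
tripled = flood_risk_over_horizon(0.03, 30) # if the annual chance triples

print(f"today:   {today:.0%}")    # ~26%
print(f"tripled: {tripled:.0%}")
```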

NOAA

As such, responding to today’s floods while preparing for worsening events in the future is a critical task for NYC’s administration, requiring coordination between governmental organizations and NGOs at the local, state and federal levels. FloodNet, a program launched first by NYU and expanded with help from CUNY, operates on the hyperlocal level to provide a street-by-street look at flooding throughout a given neighborhood. The program began with NYU’s Urban Flooding Group.

“We are essentially designing, building and deploying low cost sensors to measure street level flooding,” Dr. Andrea Silverman, environmental engineer and Associate Professor at NYU’s Department of Civil and Urban Engineering, told Engadget. “The idea is that it can provide badly needed quantitative data. Before FloodNet, there was no quantitative data on street level flooding, so people didn't really have a full sense of how often certain locations were flooding — the duration of the floods, the depth, rates of onset and drainage, for example.”

Urban Flooding Group, NYU

“And these are all pieces of information that are helpful for infrastructure planning, for one, but also for emergency management,” she continued. “So we do have our data available, and we send alerts to folks that are interested, like the National Weather Service and emergency management, to help inform their response.”

FloodNet is currently in early development with just 23 sensor units erected on 8-foot tall posts throughout the Gowanus neighborhood in Brooklyn, though the team hopes to expand that network to more than 500 units citywide within the next half decade. Each FloodNet sensor is a self-contained, solar-powered system that uses ultrasound as an invisible rangefinder: as flood waters rise, the distance between the street surface and the sensor shrinks, and calculating the difference between that reading and the dry-street baseline shows how much the water level has risen. The NYU team opted for an ultrasound-based solution rather than, say, lidar or radar, because ultrasound tech is slightly less expensive and provides more focused return data, while also being more accurate and requiring less maintenance than a basic contact water sensor.
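The ranging arithmetic is simple enough to sketch. This is an illustration of the principle, not FloodNet's actual firmware; the mounting height and echo time below are invented for the example.

```python
# An ultrasonic sensor measures distance via round-trip time of flight;
# flood depth is the dry-street baseline distance minus the current reading.

SPEED_OF_SOUND_M_S = 343.0  # in air at ~20 C; real sensors compensate for temperature

def distance_from_echo(round_trip_s: float) -> float:
    """Distance to the surface below, in meters, from an echo's round trip."""
    return SPEED_OF_SOUND_M_S * round_trip_s / 2

def flood_depth(baseline_m: float, round_trip_s: float) -> float:
    """Water depth = baseline (sensor to dry pavement) minus current distance."""
    return max(0.0, baseline_m - distance_from_echo(round_trip_s))

# Example: sensor mounted 2.4 m above dry pavement; echo returns in 12 ms.
depth = flood_depth(2.4, 0.012)
print(f"{depth:.2f} m of standing water")  # 2.4 - 2.058 ≈ 0.34 m
```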

The data each sensor produces is transmitted wirelessly using a LoRa transceiver to a gateway hub, which can pull data from any sensor within a one-mile radius and push it over the internet to the FloodNet servers. The data is then displayed in real time on the FloodNet homepage.
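LoRa frames carry very small payloads, so a sensor network like this would typically pack each reading into a few bytes rather than send verbose text. The struct layout below is a hypothetical illustration, not FloodNet's actual wire format.

```python
# Hypothetical 8-byte payload: sensor id, unix timestamp, depth in millimeters.
import struct
import time

FMT = "<HIH"  # little-endian: uint16 sensor id, uint32 time, uint16 depth (mm)

def encode_reading(sensor_id: int, timestamp: int, depth_mm: int) -> bytes:
    return struct.pack(FMT, sensor_id, timestamp, depth_mm)

def decode_reading(payload: bytes) -> dict:
    sensor_id, timestamp, depth_mm = struct.unpack(FMT, payload)
    return {"sensor": sensor_id, "ts": timestamp, "depth_m": depth_mm / 1000}

frame = encode_reading(23, int(time.time()), 342)
print(len(frame), "bytes:", decode_reading(frame))  # 8 bytes
```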

Urban Flooding Group, NYU

“The city has invested a lot in predictive models [estimating] where it would flood with a certain amount of rain, or increase in tide,” Silverman said. Sensors won’t have to be installed on every corner to be most effective, she pointed out. There are “certain locations that are more likely to be flood prone because of topology or because of the sewer network or because of proximity to the coast, for example. And so we use those models to try to get a sense of locations where it may be most flood-prone,” as well as reach out to local residents with first-hand knowledge of likely flood areas.

In order to further roll out the program, the sensors will need to undergo a slight redesign, Silverman noted. “The next version of the sensor, we're taking what we've learned from our current version and making it a bit more manufacturable,” she said. “We're in the process of testing that and then we're hoping to start our first manufacturing round, and that's what's going to allow us to expand out.”

FloodNet is an open-source venture, so all of the sensor schematics, firmware, maintenance guides and data are freely available on the team’s GitHub page. “Obviously you need to have some sort of technical know-how to be able to build them — it may not be right now where just anyone could go build a sensor, deploy it and be online immediately, in terms of being able to just generate the data, but we're trying to get there,” Silverman conceded. “Eventually we'd love to get to a place where we can have the designs written up in a way that anyone can approach it.”

NASA successfully smacked its DART spacecraft into an asteroid

After nearly a year in transit, NASA's experimental Double Asteroid Redirection Test (DART) mission, which sought to answer the questions, "Could you potentially shove an asteroid off its planet-killing trajectory by hitting it with a specially designed satellite? How about several?" has successfully collided with the asteroid Dimorphos. Results and data from the collision are still coming in, but NASA ground control confirms that the DART impact vehicle intercepted the target asteroid. Yes, granted, Dimorphos is roughly the size of an American football stadium, but space is both very large and very dark, and both asteroid and spacecraft were moving quite fast at the time.

NASA launched the DART mission in November 2021 in an effort to explore the use of defensive satellites as a means of planetary defense against Near Earth Objects. The vending-machine-sized DART impactor vehicle was traveling at roughly 14,000 MPH when it fatally crossed Dimorphos' path nearly 68 million miles from Earth. Dimorphos itself is the smaller of a pair of gravitationally bound asteroids — its parent rock is more than five times as large.

Developing...    

Hitting the Books: How Southeast Asia's largest bank uses AI to fight financial fraud

Yes, robots are coming to take our jobs. That's a good thing, and we should be happy they are, because the jobs they're taking kinda suck. Do you really want to go back to the days of manually monitoring, flagging and investigating the world's daily bank transfers in search of financial fraud and money laundering schemes? DBS Bank, Singapore's largest financial institution, certainly doesn't. The company has spent years developing a cutting-edge machine learning system that heavily automates the minutiae-stricken process of "transaction surveillance," freeing up human analysts to perform higher-level work while operating in delicate balance with the antique financial regulations that bind the industry. It's fascinating stuff. Working with AI by Thomas H. Davenport and Steven M. Miller is filled with similar case studies from myriad tech industries, looking at commonplace human-AI collaboration and providing insight into the potential implications of these interactions.

MIT Press

Excerpted from Working with AI: Real Stories of Human-Machine Collaboration by Thomas H. Davenport and Steven M. Miller. Reprinted with permission from The MIT Press. Copyright 2022.


DBS Bank: AI-Driven Transaction Surveillance

Since the passage of the Bank Secrecy Act, also known as the Currency and Foreign Transactions Reporting Act, in the US in 1970, banks around the world have been held accountable by governments for preventing money laundering, suspicious cross-border flows of large amounts of money, and other types of financial crime. DBS Bank, the largest bank in Singapore and in Southeast Asia, has long had a focus on anti-money laundering (AML) and financial crime detection and prevention. According to a DBS executive for compliance, “We want to make sure that we have tight internal controls within the bank so the perpetrators, money launderers, and sanctions evaders do not penetrate into the financial system, either through our bank, through our national system, or internationally.”

The Limitations of Rule-Based Systems for Surveillance Monitoring

As at other large banks, the area of DBS that focuses on these issues, called “transaction surveillance,” has taken advantage of AI for many years to do this type of work. The people in this function evaluate alerts raised by a rule-based system. The rules assess transaction data from many different systems across the bank, including those for consumers, wealth management, institutional banking, and their payments. These transactions all flow through the rule-based system for screening, and the rules flag transactions that match conditions associated with an individual or entity doing suspicious transactions with the bank—those involving a potential money laundering event, or another type of financial fraud. Rule-based systems—in the past known as “expert systems” — are one of the oldest forms of AI, but they are still widely used in banking and insurance, as well as in other industries.
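The flavor of rule-based screening described here can be sketched in a few lines. The thresholds and rule names below are invented for illustration; they are not DBS's actual rules.

```python
# Toy rule-based transaction screen: each rule is a simple condition, and a
# transaction that trips any rule lands on the alert list for human review.

def screen_transaction(txn: dict) -> list[str]:
    """Return the names of any rules a transaction trips."""
    alerts = []
    if txn["amount"] >= 10_000:
        alerts.append("large_cash_movement")
    if txn["country"] in {"XX", "YY"}:  # placeholder high-risk country codes
        alerts.append("high_risk_jurisdiction")
    if txn["amount"] % 1000 == 0 and txn["amount"] >= 9_000:
        alerts.append("possible_structuring")  # round amounts just under a threshold
    return alerts

txn = {"amount": 9_000, "country": "SG"}
print(screen_transaction(txn))  # ['possible_structuring']
```

Rules like these are easy to audit, which is partly why regulators still require them, but as the next paragraph notes, they generate mostly false positives.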

At DBS and most other banks across the world, rule-based financial transaction surveillance systems of this sort generate a large number of alerts every day. The primary shortcoming of rule-based surveillance systems is that most — up to 98 percent — of the alerts generated are false positives. Some aspect of the transaction triggers a rule that leads the transaction to be flagged on the alert list. However, after follow-up investigation by a human analyst, it turns out that the alerted transaction is actually not suspicious.

The transaction surveillance analysts have to follow up on every alert, looking at all the relevant transaction information. They must also consider the profiles of the individuals involved in the transaction, their past financial behaviors, whatever they have declared in “know your customer” and customer due diligence documents, and anything else the bank might know about them. Following up on alerts is a time-intensive process.

If the analyst confirms that a transaction is justifiably suspicious or verified as fraud, the bank has a legal obligation to issue a Suspicious Activity Report (SAR) to the appropriate authorities. This is a high-stakes decision, so it is important for the analyst to get it right: if incorrect, law-abiding bank customers could be incorrectly notified that they are being investigated for financial crimes. On the other side, if a “bad actor” is not detected and reported, it could lead to problems related to money laundering and other financial crimes.

For now at least, rule-based systems can’t be eliminated because the national regulatory authorities in most countries still require them. But DBS executives realized there are many additional sources of internal and external information available to them that, if used correctly, could be applied to automatically evaluate each alert from the rule-based system. This could be done using ML, which can deal with more complex patterns and make more accurate predictions than rule-based systems.

Using the New Generation of AI Capabilities to Enhance Surveillance

A few years ago, DBS started a project to apply the new generation of AI/ML capabilities in combination with the existing rule-based screening system. The combination would enable the bank to prioritize all the alerts generated by the rule-based system according to a numerically calculated probability score indicating the level of suspicion. The ML system was trained to recognize suspicious and fraudulent situations from recent and historical data and outcomes. At the time of our interviews, the new ML-based filtering system had been in use for just over one year. The system reviews all the alerts generated by the rule-based system, assigns each alert a risk score, and categorizes each alert into higher-, medium-, and lower-risk categories. This type of “post-processing” of the rule-based alerts enables the analyst to decipher which ones to prioritize immediately (those in the higher- and medium-risk categories) and which ones can wait (those in the lowest-risk category). An important capability of this ML system is that it has an explainer that shows the analyst the evidence used in making the automated assessment of the probability that the transaction is suspicious. The explanation and guided navigation given by the AI/ML model helps the analyst make the right risk decision.
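Here is a minimal sketch of that post-processing step: scoring each rule-based alert, bucketing it by risk, and surfacing the evidence behind the score. The logistic weights stand in for a trained model, and the feature names and thresholds are invented for the example.

```python
# Score alerts from the rule engine, triage them into risk buckets, and
# expose a simple "explainer" (the top-contributing features).
import math

WEIGHTS = {"amount_zscore": 1.2, "new_counterparty": 0.8,
           "high_risk_country": 1.5, "velocity_spike": 1.0}
BIAS = -3.0

def score_alert(features: dict) -> tuple[float, list[str]]:
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    prob = 1 / (1 + math.exp(-z))  # logistic squash to a probability
    # Explainer: features contributing most to the score, largest first.
    evidence = sorted(features, key=lambda k: WEIGHTS[k] * features[k],
                      reverse=True)[:2]
    return prob, evidence

def triage(prob: float) -> str:
    return "higher" if prob >= 0.7 else "medium" if prob >= 0.3 else "lower"

prob, why = score_alert({"amount_zscore": 2.5, "new_counterparty": 1,
                         "high_risk_country": 0, "velocity_spike": 1})
print(triage(prob), round(prob, 2), why)  # higher 0.86 ['amount_zscore', 'velocity_spike']
```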

DBS also developed other new capabilities to support the investigation of alerted transactions, including a Network Link Analytics system for detecting suspicious relationships and transactions across multiple parties. Financial transactions can be represented as a network graph showing the people or accounts involved as nodes in the network and any interactions as the links between the nodes. This network graph of relationships can be used to identify and further assess suspicious patterns of financial inflows and outflows.
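The graph idea behind Network Link Analytics can be sketched with a plain breadth-first search: accounts become nodes, transactions become edges, and an investigation expands outward from a flagged account. The real system is proprietary; this shows only the underlying concept.

```python
# Build an undirected transaction graph and find every account within a
# few transfers of a flagged one.
from collections import defaultdict, deque

def build_graph(transactions):
    graph = defaultdict(set)
    for src, dst, _amount in transactions:
        graph[src].add(dst)
        graph[dst].add(src)
    return graph

def linked_parties(graph, start, max_hops=2):
    """All accounts reachable from `start` within `max_hops` transfers."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, hops = queue.popleft()
        if hops == max_hops:
            continue
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, hops + 1))
    return seen - {start}

txns = [("A", "B", 9000), ("B", "C", 8800), ("C", "D", 8500), ("E", "B", 500)]
print(sorted(linked_parties(build_graph(txns), "A")))  # ['B', 'C', 'E']
```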

In parallel, DBS has also replaced a labor-intensive approach to investigation workflow with a new platform that automates for the analyst much of the support for surveillance-related investigation and case management. Called CRUISE, it integrates the outputs of the rule-based engine, the ML filter model, and the Network Link Analytics system.

Additionally, the CRUISE system provides the analyst with easy and integrated access to the relevant data from across the bank needed to follow up on the transactions the analyst is investigating. Within this CRUISE environment, the bank also captures all the feedback related to the analyst’s work on the case, and this feedback helps to further improve DBS’s systems and processes.

Impact on the Analyst

Of course, these developments make analysts much more efficient in reviewing alerts. A few years ago, it was not uncommon for a DBS transaction surveillance analyst to spend two or more hours looking into an alert. This time included the front-end preparation time to fetch data from multiple systems and to manually collate relevant past transactions, and the actual analysis time to evaluate the evidence, look for patterns, and make the final judgment as to whether or not the alert appeared to be a bona fide suspicious transaction.

After the implementation of multiple tools, including CRUISE, Network Link Analytics, and the ML-based filter model, analysts are able to resolve about one-third more cases in the same amount of time. Also, for the high-risk cases that are identified using these tools, DBS is able to catch the “bad actors” faster than before. 

Commenting on how this differs from traditional surveillance approaches, the DBS head of transaction surveillance shared the following:

Today at DBS, our machines are able to gather the necessary support data from various sources across the bank and present it on the screen of our analyst. Now the analyst can easily see the relevant supporting information for each alert and make the right decision without searching through sixty different systems to get the supporting data. The machines now do this for the analyst much faster than a human can. It makes the life of the analysts easier and their decisions a lot sharper.

In the past, due to practical limitations, transaction surveillance analysts were able to collect and use only a small fraction of the data within the bank that was relevant to reviewing the alert. Today at DBS, with our new tools and processes, the analyst is able to make decisions based on instant, automatic access to nearly all the relevant data within the bank about the transaction. They see this data, nicely organized in a condensed manner on their screen, with a risk score and with the help of an explainer that guides them through the evidence that led to the output of the model.

DBS invested in a skill set “uplift” across the staff who were involved in creating and using these new surveillance systems. Among the staff benefiting from the upskilling were the transaction surveillance analysts, who had expertise in detecting financial crimes and were trained in using the new technology platform and in relevant data analytics skills. The teams helped design the new systems, beginning with the front-end work to identify risk typologies. They also provided inputs to identify the data that made most sense to use, and where automated data analytics and ML capabilities could be most helpful to them.

When asked how the systems would affect human transaction analysts in the future, the DBS compliance executive said:

Efficiency is always important, and we must always strive for higher levels of it. We want to handle the transaction-based aspects of our current and future surveillance workload with fewer people, and then reinvest the freed-up capacity into new areas of surveillance and fraud prevention. There will always be unknown and new dimensions of bad financial behavior and bad actors, and we need to invest more time and more people into these types of areas. To the extent that we can, we will do this through reinvesting the efficiency gains we achieve within our more standard transaction surveillance efforts.

The Next Phase of Transaction Surveillance

The bank’s overall aspiration is for transaction surveillance to become more integrated and more proactive. Rather than just relying on alerts generated from the rule-based engine, executives want to make use of multiple levels of integrated risk surveillance to monitor holistically from “transaction to account to customer to network to macro” levels. This combination would help the bank find more bad actors, and to do so more effectively and efficiently. The compliance executive elaborated:

It is important to note that money launderers and sanctions evaders are always finding new ways of doing things. Our people need to work with our technology and data analytics capabilities to stay ahead of these emerging threats. We want to free up the time our people have been spending on the tedious, manual aspects of reviewing alerts, and use that time to keep pace with the emerging threats.

Human analysts will continue to play an important role in AML transaction surveillance, though the way they use their time and their human expertise will continue to evolve.

The compliance executive also shared a perspective on AI: “It’s really augmented intelligence, rather than automated AI in risk surveillance. We do not think we can remove human judgment from the final decisions because there will always be a subjective element to evaluations of what is and is not suspicious in the context of money laundering and other financial crimes. We cannot eliminate this subjective element, but we can minimize the manual work that the human analyst does as part of reviewing and evaluating the alerts.”

Lessons We Learned from This Case

  • An automated system that generates large numbers of alerts, most of which turn out to be false positives, does not save human labor.

  • Multiple types of AI technology (in this case, rules, ML, and Network Link Analytics) can be combined to improve the capabilities of the system.

  • Companies may not reduce the number of people doing a job even when the AI system substantially improves the efficiency of doing it. Rather, employees can use the freed-up time to work on new and higher-valued tasks in their jobs.

  • Because there will always be subjective elements in the evaluation of complex business transactions, human judgment may not be eliminated from the evaluation process.

Tesla to recall more than a million vehicles over pinchy windows

More than a million Tesla owners will have yet another recall notice to deal with in the coming weeks. On Tuesday the National Highway Traffic Safety Administration filed a safety recall notice for numerous late model vehicles from across the EV maker's lineup because "the window automatic reversal system may not react correctly after detecting an obstruction," and as such, "a closing window may exert excessive force by pinching a driver or passenger before retracting, increasing the risk of injury," per the notice.

The following models and years are impacted: 2017-22 Model 3s as well as 2020-21 Model Y, X and S vehicles. Tesla has until mid-November to contact affected owners and plans to push an OTA software update to correct the issue. 

I have the exact same problem and have had two service appointments for it. It’s still happening. Tesla service says they don’t have a fix for it. Are you kidding me?

— Taylor Ogan (@TaylorOgan) May 5, 2021

Per the Associated Press, Tesla first identified the issues during product testing in August and has incorporated the update into newly built vehicles since September 13th. However, multiple Twitter users have sounded off in response to Tuesday's announcement, noting that their vehicles have been having nearly identical issues since at least 2021. 

This is far from Tesla's first safety recall. Over the last two years alone, Teslas have been recalled on account of overheating infotainment systems, camera and trunk defects, separating front suspensions, their "full self driving" ADAS, their pedestrian warning sounds, their seatbelt chimes, software glitches in their brakes, and sundry touchscreen failures. And that's just in the US. In Germany this past July, Tesla got popped trying to pass off painted-over frame damage on its Model 3s too.

Hertz to purchase 175,000 General Motors EVs over the next five years

Hertz is once again growing its EV fleet, announcing Tuesday that it has struck a deal with General Motors to purchase 175,000 electric vehicles from the automaker's Chevrolet, Buick, GMC, Cadillac and BrightDrop brands over the next five years. Customers will see the first offerings, namely the Chevrolet Bolt EV and Bolt EUV, arrive on Hertz lots beginning in the first quarter next year. 

The deal, which runs through 2027, will bring a wide variety of models to Hertz's growing EV herd. Between now and 2027, the rental company expects its customers to drive about 8 billion miles in said EVs, preventing an estimated 3.5 million metric tons of carbon dioxide from being released. Hertz plans to convert a quarter of its rental fleet to battery electric by 2024. 

This news follows Hertz's 65,000-vehicle order from Polestar in April, which the performance EV maker has already begun delivering on. A 2021 announcement that had many believing Tesla would supply the Hertz fleet with 100,000 vehicles, worth an estimated $4.2 billion, was quickly kiboshed by Tesla CEO Elon Musk. Hertz is nonetheless already planning to rent 50,000 Tesla EVs to Uber drivers, a program now operating in 25 North American cities; there's no word on whether GM's vehicles will be offered under similar terms.

Hitting the Books: What if 'Up' but pigeons?

We all have those thoughts, the ones that come to us in the small hours of the night. Who am I? Why are we here? What if my cellphone ran on vacuum tubes instead? Randall Munroe has the answer to, well, only one of those questions, but also the answers to a whole bunch of others, collected together in What If? 2: Additional Serious Scientific Answers to Absurd Hypothetical Questions. Yes, that is a T-Rex eating an airplane. In the excerpt below, Munroe examines what it would take to haul an average-sized human in a chair over Australia's tallest skyscraper using only the power of pigeons. Lots and lots of pigeons.

Penguin Random House

Excerpted from What If? 2 by Randall Munroe. Copyright © 2022 by Randall Munroe. Excerpted by permission of Riverhead, an imprint and division of Penguin Random House LLC, New York. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.


How many pigeons would it require in order to lift the average person and a launch chair to the height of Australia’s Q1 skyscraper?

In a 2013 study, researchers at the Nanjing University of Aeronautics and Astronautics led by Ting Ting Liu trained pigeons to fly up to a perch while wearing a weighted harness. They found that the average pigeon in their study could take off and fly upward while carrying 124 grams, about 25 percent of its body weight.

The researchers determined that the pigeons could fly better if the weights were slung below their bodies, rather than on their backs, so you would probably want pigeons to lift your chair from above rather than support it from below.

Let’s suppose your chair and harnesses weigh 5 kilograms and you weigh 65 kilograms. If you used the pigeons from the 2013 study, it would take a flock of about 600 of them to lift your chair and fly upward with it.

Unfortunately, flying with a load is a lot of work. The pigeons in the 2013 study were able to carry a load 1.4 meters upward to a perch, but they probably wouldn’t have been able to fly too much higher than that. Even unencumbered pigeons can only maintain strenuous vertical flight for a few seconds. One 1965 study measured a climb rate of 2.5 m/s for unencumbered pigeons,* so even if we’re being optimistic, it seems unlikely that pigeons could lift your chair more than 5 meters.

No problem, you might think. If 600 pigeons can lift you the first 5 meters, then you just need to bring another 600 along with you, like the second stage of a rocket, to carry you the next 5 meters when the first flock gets tired. You can bring another 600 for the 5 meters after that and so on. The Q1 is 322 meters high, so about 40,000 pigeons should be able to get you to the top, right?

No. There’s a problem with this idea.

Since a pigeon can carry only a quarter of its body weight, it takes four flying pigeons to carry one resting pigeon. That means each “stage” will need at least four times as many pigeons as the one above it. Lifting one person may only take 600 pigeons, but lifting one person and 600 resting pigeons would take another 3,000 pigeons.

This exponential growth means that a 9-stage vehicle, able to lift you 45 meters, would need almost 300 million pigeons, roughly equal to the entire global population. Reaching the halfway point would require 1.6 × 10²⁵ pigeons, which would weigh about 8 × 10²⁴ kilograms—more than the Earth itself. At that point, the pigeons wouldn’t be pulled down by the Earth’s gravity—the Earth would be pulled up by the pigeons’ gravity.

The full 65-stage craft to reach the top of the Q1 would weigh 3.5 × 10⁴⁶ kilograms. That’s not just more pigeons than there are on Earth, it’s more mass than there is in the galaxy.
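The staging arithmetic can be checked in a few lines of code, using the study's figures of 124 grams of lift per pigeon at roughly 496 grams of body weight. (The rounder 600-pigeon starting flock used in the text yields the "almost 300 million" quoted above; exact figures land a bit lower, at the same order of magnitude.)

```python
# Each stage must lift the person, the chair, and every resting flock above
# it, so stage sizes grow by roughly a factor of five.
import math

CAPACITY_KG = 0.124  # payload a flying pigeon can carry
PIGEON_KG = 0.496    # mass of a resting pigeon

payload = 65.0 + 5.0  # person plus launch chair
stages = []
for _ in range(9):    # nine 5-meter hops ≈ 45 meters of climb
    flock = math.ceil(payload / CAPACITY_KG)
    stages.append(flock)
    payload += flock * PIGEON_KG  # the next stage must lift this flock too

print(stages[0], sum(stages))  # 565 pigeons at stage one; ~276 million total
```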

You could make things more efficient by reusing pigeons. In the 2013 study, the researchers gave the pigeons 30 seconds to rest on the perch before bringing them down for another trial. If each “stage” is two seconds, and pigeons are refreshed after 30 seconds, you could fly arbitrarily high with a 15-stage craft—but that would still require trillions of pigeons.

A better approach might be to avoid carrying the pigeons with you. After all, pigeons can get up to the top of the skyscraper themselves, so you might as well send them ahead to wait for you there instead of having their friends carry them up with you. If you could train them well enough, you could have them glide along at the appropriate height, then grab you and tug you upward for a few seconds when you reach their altitude. Keep in mind that pigeons can’t grab and carry things with their feet, so they’d need little harnesses with aircraft-carrier-style hooks to intercept you.

With this arrangement, it’s possible you could fly yourself to the top of the tower with just a few tens of thousands of well-trained pigeons. You should probably make sure you have some kind of safety system that will keep you from plunging to your demise every time a falcon flies by and spooks the pigeons.

The craft wouldn’t just be more dangerous than an elevator, it would also be a lot harder to pick your destination. You might plan to go to the top of the Q1, but once you take off... you’ll be completely under the control of anyone with a bag of seeds.

Alexa to provide branded answers to your pressing questions

See, the problem is that you plebes simply aren't buying enough. To rectify this issue, Amazon announced on Thursday that it is introducing a new Alexa feature, dubbed "Customers Ask Alexa," wherein "expert brands" provide answers to customer questions like “How can I remove pet hair from my carpet?” that also just so happen to prominently feature that brand's particular product.

Per the company, brands will have to first sign up to the Amazon Brand Registry to gain access to the sellers hub where they can view and answer questions that customers ask their networked Alexa devices. Both questions and answers reportedly pass through the company's content moderation team before the most relevant answers are pushed live.

The program launches in limited release this October before expanding to all eligible US brands by 2023. Alexa users will see the responses appear in the Amazon search bar in late 2022 and on Echo devices by the middle of next year.

While this isn't the creepiest use of Alexa we've seen from the company in Q3 2022 — that honor goes to the ghouls who think using your Nan's vocal imprints like a goddamn auditory marionette is a good idea — it is among the most concerning. Amazon has made no secret of its goal to surveil (and subsequently profit from) every aspect of our public and private lives that it can worm its way into — whether that's knowing our shopping habits, viewing habits, eating habits, obviously our cleaning habits and, potentially soon, our healthcare habits. And if this announcement holds any portent for the future, getting reliable, non-partisan answers to even basic questions is going to get a lot harder for anyone navigating Amazon's sprawling online ecosphere.

What we bought: The Cosori C0165 dehydrator mummifies meat for $70

I’m a big fan of beef jerky. Not so much the eye-watering retail price, mind you, or the untraceable provenance of the commercial product — like when you get that one bag that’s nothing but scraps, unidentifiable knuckles and strands of desiccated flesh, ew. So I decided, in keeping with my recent self-sufficiency kick, to start dehydrating my own food for fun and, presumably, eventual profit. Certainly not because the USDA is warning that in 2022, “all food prices are predicted to increase between 8.5 and 9.5 percent,” with “food-at-home prices predicted to increase between 10 and 11 percent.”

Dehydration is one of humanity’s oldest and most useful food preparation techniques. We were doing it before we began farming, sun-drying meat and vegetable matter to wick away the moisture that leads to spoilage, extending its shelf life and making it easier to transport. Even with later advances in fermentation, pickling, curing and canning, drying remains a ubiquitous practice, with the global meat snack industry estimated at $9.47 billion in 2021.

Given that I was just getting into the activity, and am generally a cheap sumbitch, I ignored the advice of popular review sites and forwent the bells and whistles of Wi-Fi connectivity, stainless steel construction and associated smartphone apps, opting instead for the least expensive, most barebones dehydrator I could find: the Cosori C0165. It’s $70 and perfect.

Andrew Tarantola / Engadget.com

I mean, it’s a food dehydrator. It is, by definition, a box that blows hot air. You could literally MacGyver one out of a hair dryer, a plastic milk crate, a two-gallon water jug, some chicken wire and a roll of duct tape if you wanted to. And there is nothing fancy about the dehydration process: you set the temperature and a timer, then wait 6 to 18 hours for a bell to ding. So, like, why would I spend upwards of $200 to $500 for a bunch of features that only give the illusion of greater control but don’t make the actual process go any faster?

The C0165 does exactly what it's supposed to and not one iota more and I absolutely love it for that. You get five BPA-free plastic stacking trays, a fruit roll sheet and a mesh sheet for herbs (yes, those herbs too). You put moisture-filled stuff on those trays, you stack the trays, you turn on the machine, you set the temperature (95ºF-165ºF) and time (30-minute increments up to 48 hours), and then you move on with your life. There are no pop-up reminders to clear, no app permissions to grant, and very little to break, so long as you don’t dunk the base unit in liquid. The thing is damn near silent, running under 48 dB — you won’t notice it operating overnight unless it's in the same room as you — and is compact enough to fit into the cabinet when not in use. Clean-up is also easy: just wipe down the base with a sponge and give the trays a light scrubbing to take off any dried bits left behind.

Andrew Tarantola / Engadget.com

To date, I’ve managed to fit 2-plus pounds of sliced and marinated bottom round into the machine in one go, as well as around 3 pounds of roasted heirloom tomatoes at a time. Taller (or broader, depending on your angle of observation) items can be tricky, as there isn’t much space between each tray level, so stuff like Hatch chiles will need to be cut down to size before being processed. And while I have to run the machine for the better part of a day to see results, it is still far more efficient than using a full-size kitchen oven (which draws 2,000 to 3,000W, on average, to the C0165’s 450W) and orders of magnitude faster than waiting for the dumb old sun to do it — and that’s assuming you even live somewhere hot and dry enough to prevent the food from rotting before it fully dries (hint: that somewhere is sure not San Francisco).
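For the curious, the wattage gap above translates directly into running costs. Here's a minimal back-of-the-envelope sketch: the 450W and 2,500W figures come from the comparison above, while the 12-hour run time and the $0.30/kWh electricity rate are illustrative assumptions, not measured figures.

```python
# Back-of-the-envelope energy-cost comparison between a full-size oven
# and the C0165 dehydrator. Wattages are from the article; the run time
# and electricity rate below are assumptions for illustration only.

RATE_PER_KWH = 0.30  # assumed electricity price, $/kWh


def run_cost(watts: float, hours: float, rate: float = RATE_PER_KWH) -> float:
    """Dollar cost of running an appliance at `watts` for `hours`."""
    return watts / 1000 * hours * rate


# A 12-hour jerky session on the 450W dehydrator...
dehydrator = run_cost(450, 12)
# ...versus holding a 2,500W oven at temperature for the same stretch
# (ignoring duty cycling, which would lower the oven figure somewhat).
oven = run_cost(2500, 12)

print(f"Dehydrator: ${dehydrator:.2f}")  # $1.62
print(f"Oven:       ${oven:.2f}")        # $9.00
```

Even granting the oven some duty cycling, the dehydrator's overnight runs cost pocket change by comparison.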

Jeep adds new Grand Cherokee and Wrangler trims to its 4xe lineup

Jeep is making good on its commitment to release a "4xe" plug-in hybrid variant of each of its SUV models by 2025: Stellantis executives took the stage at the 2022 Detroit Auto Show Wednesday morning to unveil the 2023 Jeep Wrangler Willys 4xe and the Grand Cherokee 4xe 30th Anniversary edition. Both models will be on display through September 25th.

Stellantis

The Wrangler Willys 4xe, named after the venerated first-generation military Jeeps that debuted in WWII, matches a 2.0-liter turbocharged inline-4 with a 17kWh, 400-volt battery pack to deliver 375 horsepower and 470 lb-ft of torque along with 49 MPGe and 21 miles of electric-only range. Its Selec-Trac full-time 4WD system and Dana 44 axles ensure that the Willys 4xe will be just as at home at a trailhead as it is at a Whole Foods. Yeah, you're going to need "bougie grocery store" money if you want one: it will arrive with an MSRP of $53,995 (excluding the $7,500 tax credit and $1,595 destination charge). Per Jeep, "LED headlamps and fog lamps, Alpine 9-speaker premium audio, all-weather floor mats, rear limited-slip differential, rock rails, black grille, and 17-inch black-painted alloy wheels wearing LT255/75R17C mud-terrain tires" will all come standard, as will a "Willys" decal on the hood. Order banks for the new model open today, with deliveries scheduled to start in the fourth quarter of 2022.

The Grand Cherokee 4xe 30th Anniversary edition (wow, that's a mouthful) isn't so much a new model as a new options package. The GC 4xe already offers 56 MPGe and 25 miles of all-electric range, in addition to the same 375 HP / 470 lb-ft of torque as the Wrangler (unsurprising, since the two run identical powertrains), with 2022 model year Grand Cherokees available in Limited, Trailhawk, Overland and Summit trims.

Stellantis

The 30th Anniversary package will feature a blacked-out exterior — 20-inch black rims, special-edition badging and body-color rear fascia, lower moldings, sill claddings and wheel flares — along with dual exhaust and a dual-pane sunroof. The interior offers black Capri leather seats, wireless phone charging, a nine-speaker Alpine audio system and Uconnect 5 with a 10.1-inch touchscreen. The 30th Anniversary package will retail for $4,500 on top of the $58,465 you'll need for the rest of the vehicle. Orders for the Grand Cherokee open later this year, with deliveries set for early 2023.

Sony's next State of Play will highlight 10 PS4, PS5 and PS VR2 titles

Sony's next State of Play preview event is going down on Tuesday, September 13th at 6pm ET, the company announced via Twitter on Monday. "We’ll have some great updates from our amazing Japanese partners and developers all around the world," Sony noted in a subsequent tweet. "Expect about ~20 minutes covering 10 upcoming games."

State of Play returns tomorrow, September 13. Watch live to see new reveals and updates for PS5, PS4, and PS VR2.

Tune in at 3 PM PT / 11 PM BST: https://t.co/pB7wQ5ipwv pic.twitter.com/GfbT4uK1Cy

— PlayStation (@PlayStation) September 12, 2022

Further details regarding which games will be showcased were not provided; however, Sony has repeatedly focused on PS VR2 titles during the past two events, seemingly intent on teasing all of the nearly two dozen "major" titles — including entries from the Among Us and Horizon franchises — scheduled to launch alongside the next-generation headset.

The State of Play will be live streamed through PlayStation's Twitch and YouTube channels. Gaming fans will be in for a treat tomorrow, as the afternoon event follows Nintendo's next Direct stream, happening Tuesday morning at 10am ET.