Posts with «author_name|andrew tarantola» label

Honda's HALO facility is the 'world's most advanced' wind tunnel

Regular readers of Engadget may have noticed that many of our EV reviews and much of our coverage mention a vehicle's drag coefficient. It's a handy indicator that measures the ratio of the drag force to the force produced by the dynamic pressure times the frontal area — essentially, the lower the drag coefficient, the less drag the vehicle produces and the more efficiently it pushes through air. 
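That ratio is simple enough to compute. The sketch below uses the standard definition with illustrative numbers of our own (the speed, drag force and frontal area are assumptions, not Honda figures):

```python
# Drag coefficient: Cd = 2 * F_drag / (rho * v^2 * A)
# Illustrative values only -- not figures from Honda.

def drag_coefficient(drag_force_n, air_density, speed_ms, frontal_area_m2):
    """Ratio of drag force to dynamic pressure times frontal area."""
    dynamic_pressure = 0.5 * air_density * speed_ms ** 2
    return drag_force_n / (dynamic_pressure * frontal_area_m2)

# A sedan-like example: ~330 N of drag at highway speed (30 m/s),
# sea-level air density (1.225 kg/m^3), 2.2 m^2 frontal area.
cd = drag_coefficient(330, 1.225, 30.0, 2.2)
print(round(cd, 2))  # ~0.27, in the range of a slippery modern sedan
```

Halving that number at the same speed would halve the aerodynamic drag force, which is why automakers chase even hundredths of a point.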

Honda

For ICE (internal combustion engine) vehicles, a higher drag coefficient translates into lower fuel efficiency and more frequent trips to the pump. For EVs, a low drag coefficient is even more critical because it directly impacts the vehicle's driveable range, a continuing concern for many potential EV buyers. As such, designing optimally aerodynamic vehicles is in every automaker's interest, but doing so requires specialized wind tunnel technology, much like the $124 million state-of-the-art HALO facility Honda opened on Monday in Central Ohio.

HALO (Honda Automotive Laboratories of Ohio) is "the world's most advanced wind tunnel" according to Honda, offering three distinct testing capabilities — aerodynamics, aeroacoustics, and racing — with which to develop Honda and Acura products as well as conduct general science and research work with third parties.   

Honda

"I can tell you our new HALO wind tunnel will be an incredible new asset to our engineers as well as others involved in aerodynamic research in America, providing a critical new resource for future innovation," Jim Keller, EVP of Honda Development and Manufacturing of America, said in a Friday press call. "This new wind tunnel and our safety research center will provide our R&D engineers with two world-class facilities in Ohio to support the design and development of new products."

When vehicles are operated in a wind tunnel, they drive on what is essentially a giant treadmill belt. These belts are designed to control the boundary layer between the floor and the vehicle, a critical factor in generating accurate aerodynamic data, Mike Unger, Wind Tunnel Lead at HALO, explained during the call. HALO uses two 40-ton belt modules: a standard "wide" belt, which sits under the entire vehicle and works well for sedans and other low-riding vehicles, and a five-belt system, which puts one belt under each tire and a fifth beneath the body of the vehicle, meant for testing SUVs. Each can be swapped out for the other in under four hours.

Honda

For acoustics testing, the HALO utilizes more than 500 exterior microphones studded throughout the wind tunnel and another 54 mics within the vehicle itself. Thanks to a novel microphone array, Honda techs can switch the wind tunnel from aerodynamic testing to aeroacoustic testing in just half an hour — a process that used to take around half a day to complete.

Determining the frontal area of a new vehicle, a stat necessary to properly calibrate the tunnel's results, is done with lasers and optical cameras that precisely measure the vehicle's front and side proportions. The tunnel is also equipped with a 180-degree turntable, Unger said, "which allows us to test various, and sometimes extreme, yaw angles as well as load the car as quickly and as efficiently as possible." There's also an 80-ton diagnostic tool. 

Honda

"Essentially, it's a giant robotic arm that we can attach a sensor to the end of and locate anywhere in the tunnel," he continued. With it, techs can "measure any kind of phenomena we're looking for — it could be pressure, velocity, sound, or any other thing... this tool will allow the test engineer to look into detailed phenomena to understand exactly what's going on with the flow field." The system is so precise that it can measure drag forces with a sensitivity of +/- 2.5 Newtons, roughly the weight of a standard D battery.

The tunnel itself is an eighth of a mile long with a test area measuring 3m x 5m x 15m, large enough to accommodate up to a full-size delivery van. Its 8m-diameter fan is outfitted with a dozen hollow, fixed-pitch carbon fiber blades that spin at up to 253 RPM, driven by a 5MW (6,700HP) electric motor, and generates wind speeds in excess of 190 MPH.

Honda

Honda began development of the HALO facility in 2015, initially as an effort to reduce the expense of flying its technicians, along with the prototypes being developed at the Honda R&D Center in Ohio, around the world to reach suitable aerodynamic testing facilities, like the company's existing wind tunnel in Japan. Those considerations, as well as "the arrival of the electrified era, made building Honda's own wind tunnel a smart decision," Wind Tunnel Business Strategy Lead Chris Combs said during the call.

The company does not plan to bogart its new facility's research capabilities, however. "Honda has partnered with the Transportation Research Center to form a consortium for the purpose of promoting aerodynamic research that will be shared amongst consortium members," Combs said. "We look forward to hosting college students in the future to advance in STEM careers and overall aerodynamic endeavors. It is anticipated that some non-auto parties will utilize the facility for projects focused on renewable energy like wind turbines and even architectural design."

Hitting the Books: The mad science behind digging really huge holes

Sure, you could replace the President with a self-aware roboclone, take the moon hostage, threaten to release a millennia-old Eldritch horror to wreak unspeakable terror upon the populace, or just blow up a few financial servers in your pursuit of global dominion, but a savvy supervillain knows that the true path to power is through holes — the deeper, the better. 

In the excerpt below from his newest book, author Ryan North spelunks into the issues surrounding extreme mining and how the same principles that brought us the Kola Superdeep Borehole could be leveraged to dominate humanity, or turn a tidy profit. And, if you're not digging the whole hole scheme, How to Take Over the World has designs for every wannabe Brain, from pulling the internet's proverbial plug to bioengineering a dinosaur army — even achieving immortality if the first few plans fail to pan out.

Riverhead Books

From HOW TO TAKE OVER THE WORLD: Practical Schemes and Scientific Solutions for the Aspiring Supervillain by Ryan North published on March 15, 2022 by Riverhead, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2022 Ryan North.


The world’s deepest hole, as of this writing, is the now-­abandoned Kola Superdeep Borehole, located on the Kola Peninsula in Russia, north of the Arctic Circle. It’s a hole 23 centimeters (cm) in diameter, and it was started in May 1970 with a target depth of 15,000m. By 1989, Soviet scientists had reached a depth of 12,262m, but they found they were unable to make further progress due to a few related issues. The first was that temperatures were increasing faster than they’d expected. They’d expected to encounter temperatures of around 100°C at that depth but encountered 180°C heat instead, which was damaging their equipment. That, combined with the type of rock found and the pressure at those depths, was causing the rock to behave in a way that was almost plastic. Whenever the drill bit was removed for maintenance or repair, rocks would move into the hole to fill it. Attempts to dig deeper were made for years, but no hole ever made it farther than 12,262m, and the scientists were forced to conclude that there was simply no technology available at the time that could push any deeper. The Soviet Union dissolved in 1991 in an unrelated event, drilling stopped in 1992, the site was shut down, and the surface-­level opening to the hole was welded closed in 1995. Today, the drill site is an abandoned and crumbling ruin, and that still-­world-record-­holding maximum depth, 12,262m, is less than 0.2% of the way to the Earth’s center, some 6,371 km below.

So, that’s a concern.

But that was back in the ’90s, and we humans have continued to dig holes since! The International Ocean Discovery Program (IODP) has a plan to dig through the thinner oceanic crust, hoping to break through to the mantle and recover the first sample of it taken in place — but this project, estimated to cost $1 billion USD, has not yet been successful. Still, a ship built for the project, the Chikyū, briefly held the world record for deepest oceanic hole (7,740m below sea level!), until it was surpassed by the Deepwater Horizon drilling rig, which dug a hole 10,683m below sea level and then exploded.

The evidence here all points to one depressing conclusion: the deepest holes humanity has ever made don’t go nearly far enough, and they’ve already reached the point where things get too hot — and too plastic — to continue.

But these holes were all dug not by supervillains chasing lost gold but by scientists, a group largely constrained by their “ethical principles” and “socially accepted morals.” To a supervillain, the solution here is obvious. If the problem is that the rocks are so hot that they’re damaging equipment and flowing into the hole, why not simply make a hole wide enough that some slight movement isn’t catastrophic, and cool enough so the rocks are all hardened into place? Why not simply abandon the tiny, 23cm-­diameter boreholes of the Soviets and the similarly sized drill holes of the IODP, and instead think of something bigger? Something bolder?

Something like a colossal open-­pit mine?

Such a mine would minimize the effects of rocks shifting by giving them a lot more room to shift — and us a lot more time to react — before they become a problem. You could keep those rocks cool and rigid with one of the most convenient coolants we have: cold liquid water. On contact with hot rocks or magma, water turns to steam, carrying that heat up and away into the atmosphere, where it can disperse naturally — while at the same time cooling the rocks so that they remain both solid enough to drill and rigid enough to stay in place. It would take an incredible amount of water, but lucky for us, Earth’s surface is 71% covered with the stuff!

So if you build a sufficiently large open-­pit mine next to the ocean and use a dam to allow water to flow into the pit to cool the rocks as needed, then you’ll be the proud owner of a mine that allows you to reach greater depths, both literal and metaphorical, than anyone else in history! This scheme has the added benefit that, if we’re clever, we can use the steam that’s generated by cooling all that hot rock and magma to spin turbines, which could then generate more power for drilling. You’ll build a steam engine that’s powered by the primordial and nigh-inexhaustible heat of the Earth herself.

The exact dimensions of open-­pit mines vary depending on what’s being mined, but they’re all shaped like irregular cones, with the biggest part at ground level and the smallest part at the bottom of the pit. The open-­pit mine that’s both the world’s largest and deepest is the Bingham Canyon copper mine in Utah: it’s been in use since 1906, and in that time it has produced a hole in the Earth’s crust that’s 4km wide and 1.2km deep. Using those dimensions as a rough guide produces the following chart:

Penguin Random House
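The chart's scaling can be approximated with a quick back-of-the-envelope sketch, assuming the pit keeps Bingham Canyon's roughly 3.3-to-1 width-to-depth ratio. The layer depths below are rough reference figures of our own, not values taken from the book:

```python
# Back-of-the-envelope: how wide an open-pit mine would need to be
# to reach various depths, if it kept Bingham Canyon's proportions
# (4 km wide, 1.2 km deep -> width/depth ratio of ~3.33).

RATIO = 4.0 / 1.2          # width per unit of depth
EARTH_DIAMETER_KM = 12_742

targets_km = {
    "bottom of the continental crust": 35,
    "bottom of the lower mantle": 2_890,
    "bottom of the outer core": 5_150,
    "center of the Earth": 6_371,
}

for name, depth in targets_km.items():
    width = depth * RATIO
    print(f"{name}: ~{width:,.0f} km wide "
          f"({width / EARTH_DIAMETER_KM:.0%} of Earth's diameter)")
```

Even the shallowest target, the crust, demands a pit well over 100 km across; the lower mantle pushes the opening to about three-quarters of Earth's diameter, and the core targets require pits wider than the planet itself, matching the book's conclusions.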

… and here we have another problem. Just reaching the bottom of the crust needs a hole over five times the length of the island of Manhattan, dozens of times wider than any other hole made by humanity, and easily large enough to be seen from space. Reaching the bottom of the lower mantle would require a hole so huge that its opening would encompass 75% of the Earth’s diameter, and to do the same with the outer and inner cores requires holes that are wider than the Earth itself.

Even if you could turn almost half the Earth into an open-­pit mine cooled by seawater, the steam created by cooling a pit that size would effectively boil the oceans and turn the Earth into a sauna, destroying the climate, collapsing food chains, and threatening all life on the planet — and that’s before you even reach the hostage-­taking phase, let alone the part where you plunder forbidden gold! Things get even bleaker once you take into account the responses from the governments you’d upset by turning their countries into holes; the almost inconceivable amount of time, energy, and money required to move that much matter; where you’d put all that rock once you dug it up; or the true, objective inability for anyone, no matter how well funded, ambitious, or self-­realized, to possibly dig a hole this huge.

So.

That’s another concern.

It pains me to say this, but… there is absolutely no way, given current technology, for anyone to dig a hole to the center of the Earth no matter how well funded they are, even if they drain the world’s oceans in the attempt. We have reached the point where your ambition has outpaced even my wildest plans, most villainous schemes, and more importantly strongest and most heat-­resistant materials. Heck, we’re actually closer to immortal humans (see Chapter 8) than we are to tunneling to the Earth’s core. It’s unachievable. Impossible. There’s simply no way forward.

It’s truly, truly hopeless. It’s hard for me to admit it, but even the maddest science can’t realize every ambition.

I’m sorry. There’s nothing more I can do.

. . . for that plan, anyway!

But every good villain always has a Plan B, one that snatches victory from the jaws of defeat. And heck, if you’ve got your heart set on digging a hole, making some demands, and becoming richer than Midas and Gates and Luthor in the process—who am I to stop you?

You’re going to sidestep the issues of heat and pressure in the Earth’s core by staying safely inside the crust, within the depth range of holes we already know how to dig. And you’re going to sidestep the issues of legality that tend to surround schemes to take the Earth’s core hostage by instead legally selling access to your hole to large corporations and the megarich, who will happily pay through their noses for the privilege. Why?

Because instead of digging down, you’re going to dig sideways. Instead of mining gold, you’re going to mine information. And unlike even the lost gold of the Earth’s core, this mine is practically inexhaustible.

It all has to do with stock trading. In the mid-­twentieth century, stock exchanges had trading floors, which were actual, physical floors where offers to buy and sell were shouted, out loud, to other traders. It was noisy and chaotic, but it ensured everyone on the trading floor had, in theory, equal access to the same information. Those floor traders were later supplemented by telephone trading, and then almost entirely replaced by electronic trading, which is how most stock exchanges operate today. At the time, both telephone and electronic trading could be pitched as simply a higher-­tech version of the same floor trading that already existed, but they also did something more subtle: they moved trading from the trading floor to outside the exchanges themselves, where everyone might not have access to the same information.

Turns out, there’s money to be made from that.

Delivery apps are stepping in to help drivers hit by high gas prices

Russia's invasion of Ukraine and the resulting economic sanctions against the aggressor nation are already causing economic havoc the world over. Inflation is on the rise, causing the price of essentials like food, medicine and fuel to spike. Domestically, these additional financial strains are being deeply felt by gig economy workers and delivery drivers who are now struggling to stay on the road as gas averages $4.41 a gallon nationwide. In response, some delivery apps have extended financial lifelines to the "independent contractors" that their businesses rely upon — but not all of them, and not entirely without a catch.

Instacart is the latest service to adjust its pricing in response, announcing on Friday morning that it will institute a 40-cent per-order surcharge "over the next month" to help offset the increased costs to its drivers, as gas prices have risen 71 cents since February 28th.

Uber has already imposed a fuel surcharge of its own, though the amount depends on which state the driver is in and how far the trip is going. Roughly, the surcharge for a passenger Uber ride will be between $0.45 and $0.55 per trip, while having the food brought to you instead of the other way around will see a $0.35 to $0.45 per-trip charge added on. The charge went into effect on Wednesday and will be reevaluated in 60 days, according to the company. Uber, beacon of fair labor practices that it is, has made assurances that the added charges will go directly to drivers. And yes, cheapskates, the surcharge applies even if you and/or your food is riding in an EV. 

Nearly identically, Lyft announced on Monday that it will charge a flat $0.55 per-trip fee — ICE vehicle or not — starting next week and leave it in place for 60 days. Additionally, drivers can get 4 to 6 percent cashback on gas through June if they use the company-branded debit card. 

"We’ve been closely monitoring rising gas prices and their impact on our driver community," Lyft senior communications manager CJ Macklin told Engadget in a statement. "Driver earnings overall remain elevated compared to last year, but given the rapid rise in gas prices we’ll be asking riders to pay a temporary fuel surcharge, all of which will go to drivers." 

DoorDash has enacted a similar cashback scheme for its drivers as well, though you'll want to grab a pencil and calculator before trying to navigate it. 

"Beginning on March 17th, drivers for Doordash will be able to receive 10% cashback on gas purchases, though only if they’re enrolled in the company’s own DasherDirect Visa cards," Engadget reporter Amrita Khalid explains. "On top of that, drivers who drive a certain amount of miles per week will qualify for weekly gas rewards, ranging from $5 to $15 per week. Unlocking the $5 discount requires drivers to complete at least 100 miles worth of trips in a week. Drivers who total more than 225 miles worth of trips will earn a $15 weekly bonus."

That translates into around $2 of rewards per gallon, depending on the distance a Dasher drives. 
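The tier structure Khalid describes can be sketched in a few lines. The function names and the handling of the 100-to-225-mile band are our own reading of the program, not DoorDash's published logic:

```python
# Sketch of DoorDash's weekly gas rewards as described above.
# Tier thresholds come from the article; everything else here
# (names, the middle-band treatment) is our own assumption.

GAS_CASHBACK_RATE = 0.10   # requires the DasherDirect Visa card

def weekly_gas_bonus(miles_driven):
    """Weekly bonus tier based on miles of completed trips."""
    if miles_driven > 225:
        return 15
    if miles_driven >= 100:
        return 5
    return 0

def weekly_rewards(miles_driven, gas_spend_dollars):
    """Cashback on gas purchases plus the mileage-tier bonus."""
    return GAS_CASHBACK_RATE * gas_spend_dollars + weekly_gas_bonus(miles_driven)

# A Dasher driving 250 miles and spending $60 on gas in a week:
print(weekly_rewards(250, 60))  # $6 cashback + $15 bonus
```

Note that a Dasher who skips the DasherDirect card gets neither component, which is the catch mentioned above.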

Amazon Flex workers — drivers who use their own vehicles to make deliveries for the online retailer's Prime, Whole Foods and Fresh branded orders — have, unsurprisingly, largely been left to their own devices in navigating these higher fuel prices. "We’ve already made several adjustments through pricing surges in impacted areas to help ease some of the financial challenges,” an Amazon spokesperson told MSNBC on Thursday. “As the situation evolves, we’ll continue to make changes where we can to help support our partners.” The company is "closely monitoring the situation," said the spokesperson.

Engadget has reached out to Caviar (owned by DoorDash), GrubHub, Postmates (owned by Uber) and Shipt for comment and will update this post upon their replies.

Maserati plans to go fully electric by 2025

Maserati announced on Thursday that it will offer electric versions of its entire vehicle lineup by 2025 and is starting its efforts off with the GranTurismo EV, a 1,200 HP roadster slated for release next year. 

Stellantis

The GranTurismo “Folgore” will be the first entry into Maserati's new line of electric vehicles. Its thousand-plus horses will translate into a limited top speed of 190 MPH and a sub-3-second 0-60. It will be joined by an electrified version of the new Grecale SUV and the GranCabrio GT in 2023, followed by EV variants of the MC20, the Quattroporte and the Levante SUV by 2025. The company also announced its intention to halt production of internal combustion vehicles and go fully electric by 2030. 

The company, a subsidiary of the Stellantis Group, did not elaborate on the expected MSRPs for the upcoming vehicles, but given Maserati's current offerings, interested buyers will likely be looking to pay anywhere from the high five figures to the mid six figures.

Cornell researchers taught a robot to take Airbnb photos

Aesthetics is what happens when our brains interact with content and go, “ooh pretty, give me more of that please.” Whether it’s a starry night or The Starry Night, the sound of a scenic seashore or the latest single from Megan Thee Stallion, understanding how the sensory experiences that scintillate us most deeply do so has spawned an entire branch of philosophy studying art, in all its forms, as well as how it is devised, produced and consumed. While what constitutes “good” art varies between people as much as what constitutes porn, the appreciation of life’s finer things is an intrinsically human endeavor (sorry, Suda) — or at least it was until we taught computers how to do it too.

The study of computational aesthetics seeks to quantify beauty as expressed in human creative endeavors, essentially using mathematical formulas and machine learning algorithms to appraise a specific piece based on existing criteria, reaching (hopefully) an equivalent opinion to that of a human performing the same inspection. This field was founded in the early 1930s when American mathematician George David Birkhoff devised his theory of aesthetics, M=O/C, where M is the aesthetic measure (think, a numerical score), O is order and C is complexity. Under this metric, simple, orderly pieces would be ranked higher — i.e. be more aesthetically pleasing — than complex and chaotic scenes.
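Birkhoff's formula is just a ratio, which a few lines of code make concrete. The order and complexity values below are toy stand-ins of our own; Birkhoff's actual O and C were counts of specific features like symmetries and vertices:

```python
# Birkhoff's aesthetic measure: M = O / C
# (order over complexity; higher M means more aesthetically pleasing)

def aesthetic_measure(order, complexity):
    """Birkhoff (1933): aesthetic score as order divided by complexity."""
    return order / complexity

# Toy illustration: a simple, symmetric figure outranks a busy one
# with the same amount of "order" but far more complexity.
tidy = aesthetic_measure(order=8, complexity=4)    # M = 2.0
busy = aesthetic_measure(order=8, complexity=16)   # M = 0.5
print(tidy > busy)  # True
```

The hard part, then and now, is not the division but deciding how to measure O and C in the first place, which is exactly what later formalizations and today's learned models attempt.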

German philosopher Max Bense and French engineer Abraham Moles both, and independently, formalized Birkhoff’s initial works into a reliable scientific method for gauging aesthetics in the 1950s. By the ’90s, the International Society for Mathematical and Computational Aesthetics had been founded and, over the past 30 years, the field has further evolved, spreading into AI and computer graphics, with an ultimate goal of developing computational systems capable of judging art with the same objectivity and sensitivity as humans, if not superior sensibilities. As such, these computer vision systems have found use in augmenting human appraisers’ judgements and automating rote image analysis similar to what we’re seeing in medical diagnostics, as well as grading video and photographs to help amateur shutterbugs improve their craft.

Recently, a team of researchers from Cornell University took a state-of-the-art computational aesthetic system one step further, enabling the AI to not only determine the most pleasing picture in a given dataset, but capture new, original — and most importantly, good — shots on its own. They’ve dubbed it AutoPhoto, and its study was presented last fall at the International Conference on Intelligent Robots and Systems. This robo-photographer consists of three parts: the image evaluation algorithm, which evaluates a presented image and issues an aesthetic score; a Clearpath Jackal wheeled robot upon which the camera is affixed; and the AutoPhoto algorithm itself, which serves as a sort of firmware, translating the results from the image grading process into drive commands for the physical robot and effectively automating the optimized image capture process.

For its image evaluation algorithm, the Cornell team, led by second-year master's student Hadi AlZayer, leveraged an existing learned aesthetic estimation model, which had been trained on a dataset of more than a million human-ranked photographs. AutoPhoto itself was trained virtually on dozens of 3D images of interior room scenes to spot the optimally composed angle before the team attached it to the Jackal.

When let loose in a building on campus, as you can see in the video above, the robot starts off with a slew of bad takes, but as the AutoPhoto algorithm gains its bearings, its shot selection steadily improves until the images rival those of local Zillow listings. On average it took about a dozen iterations to optimize each shot and the whole process takes just a few minutes to complete.

“You can essentially take incremental improvements to the current commands,” AlZayer told Engadget. “You can do it one step at a time, meaning you can formulate it as a reinforcement learning problem.” This way, the algorithm doesn’t have to conform to traditional heuristics like the rule of thirds because it already knows what people will like as it was taught to match the look and feel of the shots it takes with the highest-ranked pictures from its training data, AlZayer explained.
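That "incremental improvements to the current commands" idea can be illustrated with a toy hill-climb over a camera pose. To be clear, this is our own drastic simplification, not the AutoPhoto algorithm (which is trained with reinforcement learning against a learned aesthetic model); the score function here is a made-up stand-in:

```python
import random

# Toy illustration of iteratively improving a shot: propose a small
# change to the current camera command, keep it only if the scored
# image improves. Our stand-in "aesthetic model" peaks at pose (3, 1).

def aesthetic_score(x, y):
    """Fake aesthetic model: higher (less negative) nearer (3, 1)."""
    return -((x - 3.0) ** 2 + (y - 1.0) ** 2)

def improve_shot(x, y, steps=200, step_size=0.1, seed=0):
    """Greedy hill-climb: accept a random nudge only if it scores better."""
    rng = random.Random(seed)
    best = aesthetic_score(x, y)
    for _ in range(steps):
        nx = x + rng.uniform(-step_size, step_size)
        ny = y + rng.uniform(-step_size, step_size)
        score = aesthetic_score(nx, ny)
        if score > best:          # keep only improvements
            x, y, best = nx, ny, score
    return x, y, best

x, y, score = improve_shot(0.0, 0.0)
print(f"score improved from {aesthetic_score(0.0, 0.0)} to {score:.2f}")
```

A reinforcement learning formulation replaces the blind accept/reject step with a learned policy that predicts which nudge will pay off, which is why AutoPhoto converges in about a dozen iterations rather than hundreds.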

“The most challenging part was the fact there was no existing baseline number we were trying to improve,” AlZayer noted to the Cornell Press. “We had to define the entire process and the problem.”

Looking ahead, AlZayer hopes to adapt the AutoPhoto system for outdoor use, potentially swapping out the terrestrial Jackal for a UAV. “Simulating high quality realistic outdoor scenes is very hard,” AlZayer said, “just because it's harder to perform reconstruction of a controlled scene.” To get around that issue, he and his team are currently investigating whether the AutoPhoto model can be trained on video or still images rather than 3D scenes.

Cadillac will offer two new features to select Super Cruise drivers this summer

GM's Super Cruise will learn two new tricks this summer. GM announced on Tuesday that the driver assist system will offer Automatic Lane Change and Trailering capabilities for eligible owners.

Owners of the 2021 Cadillac CT4 and CT5 will have the opportunity to purchase Automatic Lane Change capabilities, while 2021 Escalade owners will be given the Trailering option, which allows the SUV to tow without touching the steering wheel. The company estimates that some 12,000 CT4s, CT5s and Escalades will be eligible for the paid updates. "Eligible customers will receive communication from Cadillac about pricing, and how they can purchase and install these new upgrades in the near future," a GM spokesperson told Engadget via email.

Super Cruise, which can be found on a variety of GM products including the new Hummer EV, is a Level 2 system, meaning it is a driver assist and not fully autonomous. It relies on a mixture of LiDAR mapping, GPS, visual cameras and radar sensors to navigate traffic. GM originally announced these new features back in 2020; however, the COVID pandemic and a global processor shortage have hampered their rollout.

History Channel will tell the tale of the Hummer EV with a documentary

If you ever wondered how General Motors, one of the biggest automakers on the planet, went from 0 to EV so quickly while managing to reinvent its iconic Hummer SUV, former poster child of automotive excess, as a future-facing electric vehicle, the History Channel has a show for you. Revolution: GMC Hummer EV will take a behind-the-scenes look at the development of the all-electric supertruck when it premieres Sunday, March 21st at 11am ET. 

“Our goal was to upend what an electric vehicle is capable of and push the boundaries from 100 years of vehicle development experience,” Executive Chief Engineer for the Hummer EV, Josh Tavel, said in a press statement. “This documentary captures the soul of a team capable of incredible innovation and resilience. Their learnings are laying the foundation of vehicle development for decades to come.”

The hour-long documentary, produced by Hiatus and Detroit-based WTP Pictures and directed by Sean King O’Grady, followed the Hummer development team through two years of design work at the Global Technical Center in Warren, Michigan, and grueling environmental testing at GM's proving grounds in Milford and Yuma.

If you miss the live premiere, Revolution will hit History on Hulu, History.com and the GMC YouTube page the following Sunday, April 3rd.  

Hitting the Books: How Ronald Reagan torpedoed sensible drug patenting

Americans pay two and a half times more for their prescription drugs than residents of any other nation on Earth. Though generic versions of popular compounds accounted for 84 percent of America's annual sales volume in 2021, they only generated 12 percent of the actual dollars spent. The rest of the money pays for branded drugs — Lipitor, Zestril, Accuneb, Vicodin, Prozac — and we have the Reagan Administration in part to thank for that. In the excerpt below from Owning the Sun: A People's History of Monopoly Medicine from Aspirin to COVID-19 Vaccines, a fascinating look at the long, infuriating history of public research being exploited for private profit, author Alexander Zaitchik recounts former President Reagan's court-packing antics from the early 1980s that helped cement lucrative monopolies on name-brand drugs.

Counterpoint Press

Copyright © 2022 by Alexander Zaitchik, from Owning the Sun: A People's History of Monopoly Medicine from Aspirin to COVID-19 Vaccines. Reprinted by permission of Counterpoint Press.


When Estes Kefauver died in 1963, he was writing a book about monopoly power called In a Few Hands. Early into Reagan’s first term, the industry must have been tempted to publish a gloating retort titled In a Few Years. Between 1979 and 1981, the drug companies did more than break the stalemate of the 1960s and ’70s — they smashed it wide open. Stevenson-Wydler and Bayh-Dole replaced the Kennedy policy with a functioning framework for the high-speed transfer of public science into private hands. As the full machinery was built out, the industry-funded echo chamber piped a constant flow of memes into the culture: patents alone drive innovation... R&D requires monopoly pricing... progress and American competitiveness depend on it... there is no other way...

In December 1981, the drug companies celebrated another long-sought victory when Congress created a federal court devoted to settling patent disputes. Previously, patent disputes were heard in the districts where they originated. The problem, from industry’s perspective, was the presence of so many staunch New Deal judges in key regions like New York’s Second Circuit. These lifetime judges often understood patent challenges not as threats to property rights, but as opportunities to enforce antitrust law. Local circuit judges appointed by Republicans could also be dangerously old-fashioned in their interpretations of the “novelty” standard. By contrast, the judges on the new patent court, named the Court of Appeals for the Federal Circuit, were appointed by the president. Reagan stuffed its bench with corporate patent lawyers and conservative legal scholars influenced by the Johnny Appleseed of the Law and Economics movement, Robert Bork. Prior to 1982, federal district judges rejected around two-thirds of patent claims; the Court of Appeals has since decided two-thirds of all cases in favor of patent claims. Reagan’s first appointee, Pauline Newman, was the former lead patent counsel for the chemical firm FMC.

The Supreme Court also contributed to the industry’s 1979–1981 run of wins. When Reagan entered office, one of the great scientific-legal unknowns involved the patentability of modified genes. Similar to the uncertainty around the postwar antibiotics market—settled in the industry’s favor by the 1952 Patents Act — the uncertainty threatened the monopoly dreams of the emergent biotechnology sector. The U.S. Patent Office was against patenting modified genes. In 1979, its officers twice rejected an attempt by a General Electric microbiologist to patent a modified bacterium invented to assist in oil spill cleanups. The GE scientist, Ananda Chakrabarty, sued the Patent Office, and in the winter of 1980 Diamond v. Chakrabarty landed before the Supreme Court. In a 5–4 decision written by Warren Burger, the Court overruled the U.S. Patent Office and ruled that modified genes were patentable, as was “anything under the sun that is made by man.” The decision was greeted with audible exhales by the players in the Bayh-Dole alliance. “Chakrabarty was the game changer that provided academic entrepreneurs and venture capitalists the protection they were waiting for,” says economist Öner Tulum. “It paved the way for a more expansive commercialization of science.”

But the industry knew better than to relax. It understood that political victories could be impermanent and fragile, and it had the scar tissue to prove it. Uniquely profitable, uniquely hated, and thus uniquely vulnerable — the companies could not afford to forget that their fantastic postwar wealth and power depended on the maintenance of artificial monopolies resting on dubious if not indefensible ethical and economic arguments that were rejected by every other country on earth. In the United States, home to their biggest profit margins, danger lurked around every corner in the form of the next crusading senator eager to train years of unwanted attention on these facts. Not even Bayh-Dole, that precious newborn legislation, could be taken for granted. This mode of permanent crisis was validated by the return of a familiar menace in the early 1980s. Of all things, it was the generics industry, an old but weak enemy of the patent-based drug companies, that reappeared and threatened to ruin their celebration of achieving dominance over every corner of medical research and the billions of public dollars flowing through it.

***

As late as the 1930s, there was no “generic” drug industry to speak of. There were only big drug companies and small ones, some with stature, others obscure. They both sold products that were, in the parlance of ethical medicine, “nonproprietary.” To be listed in the United States Pharmacopeia and National Formulary, the official bibles of prescribable medicines, drugs could only carry scientific names; the essential properties of a good scientific name, according to the first edition of the Pharmacopeia, were “expressiveness, brevity, and dissimilarity.” The naming of drugs and medicines formed the other half of the patent taboo: branding a drug evidenced the same knavishness and greed as monopolizing one. The rules of “ethical marketing” did permit products to include an institutional affiliation—Parke-Davis Cannabis Indica Extract, or Squibb Digitalis Tincture—but the names of the medicines themselves (cannabis, digitalis) did not vary. “The generic name emerged as a parallel form of social property belonging to all that resisted commodification and thereby came to occupy a central place in debates about monopoly rights,” writes Joseph Gabriel.

As with patents on scientific medicine, the Germans gave the U.S. drug industry early instruction in the use of trademarks to entrench market control. Hoechst and Bayer broke every rule of so-called ethical marketing, aggressively advertising their breakthrough drugs under trademarks like Aspirin, Heroin, and Novocain. The idea was to twine these names and the things they described in the public mind so tightly, the brand name would secure a de facto monopoly long after the patent expired.

The strategy worked, but the German firms did not reap the benefits. The wartime Office of Alien Property redistributed the German patents and trademarks among domestic firms who produced competing versions of aspirin, creating the first “branded generic.” During the patent taboo’s extended death rattle of the interwar years, more U.S. companies waded into the use of original trademarks to suppress competition. As they experimented with German tactics to avoid “genericide” — the loss of markets after patent expiration — they were enabled by court decisions that transformed trademarks into forms of hard property, similar to the way patents were reconceived in the 1830s.

After World War II, branding and monopoly formed the two-valve heart of a post-ethical growth strategy. The industry’s incredible postwar success — between 1939 and 1959, drug profits soared from $300 million to $2.3 billion — was fueled in large part by expanding the German playbook. While branding monopolies with trade names, the industry initiated campaigns to ruin the reputations of scientifically identical but competing products. The goal was the “scandalization” of generic drugs, writes historian Jeremy Greene. The drug companies “worked methodically to moralize and sensationalize generic dispensing as a dangerous and subversive practice. Dispensing a non-branded product in place of a brand-name product was cast as ‘counterfeiting’; the act of substituting a cheaper version of a drug at the pharmacy was described as ‘beguilement,’ ‘connivance,’ ‘misrepresentation,’ ‘fraudulent,’ ‘unethical’ and ‘immoral.’”

As with patenting, it was the drug companies that dragged organized medicine with them into the post-ethical future. As late as 1955, the AMA’s Council on Pharmacy and Chemistry maintained a ban on advertisements for branded products in its Journal. That changed the year Equanil hit the market, opening the age of branded prescription drugs as a leading source of income for medical journals and associations. “Clinical journals and newer ‘throwaway’ promotional media now teemed with advertisements for Terramycin, Premarin, and Diuril rather than oxytetracycline (Pfizer), conjugated equine estrogens (Wyeth) or chlorothiazide (Merck),” writes Greene. In 1909, only one in ten prescription drugs carried a brand name. By 1969, the ratio had flipped, with only one in ten marketed under its scientific name. In another echo of the patent controversy, the rise of marketing and branded drugs produced division and resistance. By the mid-1950s, an alliance of so-called nomenclature reformers arose to decry trademarks as unscientific handmaidens of monopoly and call for a return to the use of scientific names. These reformers — doctors, pharmacists, labor leaders — made regular appearances before the Kefauver committee beginning in 1959. Their testimony on how the industry used trademarks to suppress competition informed a section in Kefauver’s original bill requiring doctors to use scientific names in all prescriptions. The proposed law reflected the norms that reigned during ethical medicine’s heyday, and would have allowed doctors to recommend firms, but not their branded products. Like most of Kefauver’s core proposals, however, the generic clause was excised. The only trademark-related reform in the final Kefauver-Harris Amendments placed limits on companies’ ability to rebrand and market old medicines as new breakthroughs.

Volkswagen officially unveils its ID.Buzz EV

The Microbus is back, baby! Nearly 75 years since the first Volkswagen Type 2 rolled off its assembly line and into the annals of Americana as an icon of 1960s counterculture, VW is re-releasing the emblematic vehicle — this time as a full EV for the 21st century hippy.

VW

VW executives took to the livestreaming stage on Wednesday ahead of SXSW 2022’s kickoff to debut the ID.Buzz, which will be available as both a people mover and a cargo van (dubbed the ID.Buzz Cargo). The ID.Buzz will appear in Europe first, arriving later this year, and will be available with a number of options lacking in its American-market cousins, including short-wheelbase and commercial variants. There’s even a self-driving version that likely won’t be making it across the pond. The American iterations are slated to arrive in 2023.

The ID.Buzz is built atop VW’s modular electric drive matrix (MEB if you say it in German), the same EV platform Ford plans to use for one of its European market vehicles come 2023. As such, it shares some build characteristics with the ID.4.

The ID.Buzz will come equipped with a 77kWh battery pack with a 170kW charging capacity powering a 150kW rear motor. The passenger model will seat five with 1.12 cubic meters (39.6 cubic feet) of cargo space, while the Cargo will offer 3.9 cubic meters (137.7 cubic feet) by replacing the rear seats with a partition behind the front row. For the interior, VW designers took inspiration from the aesthetics of the Microbus, pulling style elements from the T1 generation and matching the seat cushions, dash panels and door trim to the vehicle’s exterior paint color. Buyers will have their pick of seven solid-color options and four two-tone schemes (white plus another color).
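Those volume and charging figures are easy to sanity-check. Below is a quick back-of-the-envelope sketch — the 10–80 percent charge window and the constant-rate assumption are ours, not VW's, so treat the charge time as an idealized floor:

```python
CUBIC_FEET_PER_CUBIC_METER = 35.3147

def to_cubic_feet(m3: float) -> float:
    """Convert a volume in cubic meters to cubic feet."""
    return m3 * CUBIC_FEET_PER_CUBIC_METER

# Published cargo volumes
passenger_m3 = 1.12
cargo_m3 = 3.9
print(f"Passenger: {to_cubic_feet(passenger_m3):.1f} cu ft")  # ~39.6
print(f"Cargo: {to_cubic_feet(cargo_m3):.1f} cu ft")          # ~137.7

# Idealized 10-80% fast-charge time at a flat 170 kW
battery_kwh = 77
charge_kw = 170
hours = battery_kwh * 0.7 / charge_kw
print(f"10-80% charge: ~{hours * 60:.0f} minutes on paper")
```

In practice charging tapers off well before 80 percent, so the real-world time will be longer than the flat-rate math suggests.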

Ingo Barenschee

VW also noted during the presentation the extensive work it put into lessening environmental impacts arising from the ID.Buzz’s production. The interior upholstery is made completely animal-free — the steering wheel may be made of polyurethane, but VW executives swear that it has the same look and feel as leather. The seat covers, floor coverings and headliner are all similarly composed of recycled goods like marine plastic and old water bottles. Using these materials emits 32 percent less carbon than similar products would, according to the company. Overall, VW hopes to cut its carbon emissions in Europe by 40 percent by 2030 and achieve climate neutrality as part of its Way to Zero plan by 2050.


California pilot program turns GM's EVs into roving battery packs

While not nearly as much of a mess as Texas' energy infrastructure, California's power grid has seen its fair share of brownouts, rolling blackouts and outages stemming from wildfires sparked by PG&E equipment. To help mitigate the economic impact of those disruptions, this summer General Motors and Northern California's energy provider will team up to test using the automaker's electric vehicles as roving backup battery packs for the state's power grid.

The pilot program announced by GM CEO Mary Barra on CNBC Tuesday morning is premised on bidirectional charging technology, wherein power can both flow from the grid to a vehicle (G2V charging) and from a vehicle back to the grid (V2G), allowing the vehicle to act as an on-demand power source. GM plans to offer this capability as part of its Ultium battery platform on more than a million of its EVs by 2025. Currently the Nissan Leaf and the Nissan e-NV200 offer V2G charging, though Volkswagen announced in 2021 that its ID line will offer it later this year, and the Ford F-150 Lightning will as well.

This summer's pilot will initially investigate "the use of bidirectional hardware coupled with software-defined communications protocols that will enable power to flow from a charged EV into a customer’s home, automatically coordinating between the EV, home and PG&E’s electric supply," according to a statement from the companies. Should the initial tests prove fruitful, the program will expand first to a small group of PG&E customers before scaling up to "larger customer trials" by the end of 2022.

"Imagine a future in which there's an EV in every garage that functions as a backup power source whenever it's needed," GM spokesperson Rick Spina said during a press call on Monday.

"We see this expansion as being the catalyst for what could be the most transformative time for two industries, both utilities and the automotive industry," PG&E spokesperson Aaron August added. "This is a huge shift in the way we're thinking about electric vehicles, and personal vehicles overall. Really, it's not just about getting from point A to point B anymore. It's about getting from point A to point B with the ability to provide power."

From a hardware standpoint, GM vehicles as currently sold can already support bidirectional charging, Spina noted during the call. The current challenge, and what this pilot program is designed to address, is developing the software and UX infrastructure needed to ensure that PG&E customers can easily use the system day-to-day. "The good news there is, it's nothing different from what's already industry standard for connectors, software protocols," August said. "The industry is moving towards ISO 15118-20."

The length of time that an EV will be able to run the household it's tethered to will depend on a number of factors — from the size of the vehicle's battery to the home's power consumption to the prevailing weather — but August estimates that for an average California home using 20 kWh daily, a fully-charged Chevy Bolt would have enough juice to power the house for around 3 days. This pilot program comes as automakers and utilities alike work out how to most effectively respond to the state's recent directive banning the sale of internal combustion vehicles starting in 2035.
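August's estimate is simple division. A minimal sketch — the 65 kWh Chevy Bolt pack size is our assumption based on GM's published spec, and real systems would reserve some charge for driving rather than draining the pack flat:

```python
def backup_days(battery_kwh: float, daily_load_kwh: float,
                usable_fraction: float = 1.0) -> float:
    """Days a fully charged EV pack could carry a home's average daily load."""
    return battery_kwh * usable_fraction / daily_load_kwh

# ~65 kWh Bolt pack vs. the 20 kWh/day average California home cited above
print(round(backup_days(65, 20), 1))  # 3.2 -- in line with "around 3 days"

# Holding back 30% of the pack for driving cuts that to about two days
print(round(backup_days(65, 20, usable_fraction=0.7), 1))
```

The estimate scales linearly, so a hot week of air conditioning (say 30 kWh/day) or a bigger pack shifts the answer proportionally.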