As a white man in America with no discernible regional accent, I can simply assume that modern consumer technologies — virtual assistants like Siri, Alexa or Assistant, and my phone’s camera — will work seamlessly out of the box. I assume this because, well, they do. That’s largely because the nerds who design and program these devices overwhelmingly both look and sound just like me — if anything, even a little whiter. Folks with more melanin in their skin and extra twang on their tongue don’t enjoy that same privilege.
Tomorrow’s chatbots and visual AIs will only serve to exacerbate this bias unless steps are taken today to ensure a benchmark standard of fairness and equitable behavior from these systems. To address that issue, Meta AI researchers developed and released the Casual Conversations dataset in 2021, designed to “help researchers evaluate their computer vision and audio models for accuracy across a diverse set of age, genders, apparent skin tones and ambient lighting conditions.” On Thursday, the company unveiled Casual Conversations v2, which promises even more granular classification categories than its predecessor.
The original CC dataset included 45,000 videos from more than 3,000 paid subjects across age, gender, apparent skin tone and lighting conditions. These videos are designed to be used by other AI researchers, specifically those working with generative AIs like ChatGPT or visual AIs like those used in social media filters and facial recognition features, to help them ensure that their creations behave the same whether the user looks like Anya Taylor-Joy or Lupita Nyong’o, whether they sound like Colin Firth or Colin Quinn.
Since Casual Conversations first debuted two years ago, Meta has worked “in consultation with internal experts in fields such as civil rights,” according to Thursday’s announcement, to expand and improve upon the dataset. Professor Pascale Fung, director of the Centre for AI Research, as well as other researchers from Hong Kong University of Science and Technology, participated in the literature review of government and industry data to establish the new annotation categories.
Version 2 now includes 11 categories (seven self-reported and four researcher-annotated) and 26,467 video monologues recorded by nearly 5,600 subjects in seven countries — Brazil, India, Indonesia, Mexico, Vietnam, the Philippines and the US. While there aren’t as many individual videos in the new dataset, they are far more heavily annotated. As Meta points out, the first iteration only had a handful of categories: “age, three subcategories of gender (female, male, and other), apparent skin tone and ambient lighting,” according to the Thursday blog post.
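Disaggregated evaluation of the sort this dataset enables is straightforward to sketch. Below is a minimal, hypothetical Python example: the field names (`apparent_skin_tone`, `prediction`, `label`) are illustrative placeholders, not Meta's actual annotation schema, and the records are toy data.

```python
from collections import defaultdict

def accuracy_by_group(records, group_key):
    """Compute a model's accuracy separately for each value of one
    demographic attribute (illustrative, not Meta's schema)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        g = r[group_key]
        total[g] += 1
        if r["prediction"] == r["label"]:
            correct[g] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy predictions tagged with a self-reported/annotated attribute
records = [
    {"label": 1, "prediction": 1, "apparent_skin_tone": "light"},
    {"label": 1, "prediction": 0, "apparent_skin_tone": "dark"},
    {"label": 0, "prediction": 0, "apparent_skin_tone": "dark"},
    {"label": 0, "prediction": 0, "apparent_skin_tone": "light"},
]
print(accuracy_by_group(records, "apparent_skin_tone"))
# {'light': 1.0, 'dark': 0.5}
```

A large accuracy gap between subgroups, like the one in this toy output, is exactly the kind of signal a benchmark of this sort is meant to surface.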
“To increase nondiscrimination, fairness, and safety in AI, it’s important to have inclusive data and diversity within the data categories so researchers can better assess how well a specific model or AI-powered product is working for different demographic groups,” Roy Austin, Vice President and Deputy General Counsel for Civil Rights at Meta, said in the release. “This dataset has an important role in ensuring the technology we build has equity in mind for all from the outset.”
As with nearly all of its public AI research to date, Meta is releasing Casual Conversations v2 as an open source dataset for anyone to use and expand upon — perhaps to include markers such as “disability, accent, dialect, location, and recording setup,” as the company hinted on Thursday.
This article originally appeared on Engadget at https://www.engadget.com/metas-newest-ai-fairness-benchmark-measures-even-more-granular-bias-markers-140043320.html?src=rss
Even as it has radically expanded the hands-free driver-assist capabilities of its current-generation Super Cruise ADAS, General Motors has been hard at work on the system's successor, Ultra Cruise, since 2021. On Tuesday, GM finally revealed which model will be first to receive the upgraded system: the Cadillac Celestiq.
"We're trying to expand our hands-free driving experience that we have with Super Cruise to most paved and public roads," Jason Dittman, General Motors' Chief Engineer, said during a press call Monday. "It will be a 'destination to destination' experience."
"You get in your car, use the internal navigation system, put a destination in it, and the car would essentially do the driving — roughly 95 percent of the driving maneuvers on a typical drive, you'll be able to do hands-free," he added.
We already had a solid understanding of what Ultra Cruise would be capable of as GM went into detail when it first announced development of the system in 2021. Super Cruise currently works on around 400,000 miles of US and Canadian highways, allowing drivers to take their hands off the wheel when driving on a compatible highway or state route. It uses a mix of LiDAR, radar, GPS and cameras to know where the vehicle is on the road.
Ultra Cruise builds off this with a new computing system that will fuse the incoming data streams into a unified 360-degree view around the vehicle. "They're not redundant, they're fused together to give us the most accurate picture of the vehicle surroundings," Dittman said. Ultra Cruise-equipped vehicles will also use an interior-facing infrared driver attention monitor that will track the "driver’s head position and/or eyes in relation to the road," according to Tuesday's announcement.
Ultra Cruise will work on more than 2 million miles of highway at launch. Over time, the company plans to expand the Ultra Cruise network to cover 3.4 million miles of roadway, encompassing "nearly every paved road, city street, suburban street, subdivision, and rural road in addition to the highways that Super Cruise operates on today," Dittman added.
Note that despite the larger number of roads the new system will work on, it still offers the same Level 2 driver-assist capabilities as the rest of the auto industry, save Mercedes. That means you will have to keep paying attention to the road; you just won't have to keep your hands strictly on the wheel.
Unfortunately, current Super Cruise subscribers will not be able to upgrade to the new system once it arrives later this year. Ultra Cruise requires additional sensors and hardware to operate and GM doesn't currently have plans to offer a retrofit kit. You'll have to pony up the $300k Caddy is asking for the Celestiq if you want to be among the first to try it.
This article originally appeared on Engadget at https://www.engadget.com/gms-ultra-cruise-system-will-debut-on-the-cadillac-celestiq-later-this-year-140011591.html?src=rss
There is too much internet, and our attempts to keep up with the breakneck pace of, well, everything these days are breaking our brains. Parsing the deluge of information served up by algorithmic systems built to maximize engagement has trained us, like slavering Pavlovian dogs, to rely on snap judgments and gut feelings in our decision-making and opinion formation rather than deliberation and introspection. Which is fine when you're deciding between Italian and Indian for dinner or are waffling on a new paint color for the hallway, but not when we're out here basing existential life choices on friggin' vibes.
In his latest book, I, HUMAN: AI, Automation, and the Quest to Reclaim What Makes Us Unique, professor of business psychology and Chief Innovation Officer at ManpowerGroup, Tomas Chamorro-Premuzic explores the myriad ways that AI systems now govern our daily lives and interactions. From finding love to finding gainful employment to finding out the score of yesterday's game, AI has streamlined the information gathering process. But, as Chamorro-Premuzic argues in the excerpt below, that information revolution is actively changing our behavior, and not always for the better.
If the AI age requires our brains to be always alert to minor changes and react quickly, optimizing for speed rather than accuracy and functioning on what behavioral economists have labeled System 1 mode (impulsive, intuitive, automatic, and unconscious decision-making), then it shouldn’t surprise us that we are turning into a less patient version of ourselves.
Of course, sometimes it’s optimal to react quickly or trust our guts. The real problem comes when fast mindlessness is our primary mode of decision-making. It causes us to make mistakes and impairs our ability to detect mistakes. More often than not, speedy decisions are borne out of ignorance.
Intuition can be great, but it ought to be hard-earned. Experts, for example, are able to think on their feet because they’ve invested thousands of hours in learning and practice: their intuition has become data-driven. Only then are they able to act quickly in accordance with their internalized expertise and evidence-based experience. Alas, most people are not experts, though they often think they are. Most of us, especially when we interact with others on Twitter, act with expert-like speed, assertiveness, and conviction, offering a wide range of opinions on epidemiology and global crises, without the substance of knowledge that underpins it. And thanks to AI, which ensures that our messages are delivered to an audience more prone to believing it, our delusions of expertise can be reinforced by our personal filter bubble. We have an interesting tendency to find people more open-minded, rational, and sensible when they think just like us. Our digital impulsivity and general impatience impair our ability to grow intellectually, develop expertise, and acquire knowledge.
Consider how little perseverance and meticulousness we bring to consuming actual information. And I say consume rather than inspect, analyze, or vet. One academic study estimated that the top 10 percent of digital rumors (many of them fake news) account for up to 36 percent of retweets, and that this effect is best explained in terms of the so-called echo chamber, whereby retweets are based on clickbait that matches the retweeter’s views, beliefs, and ideology, to the point that any discrepancy between those beliefs and the actual content of the underlying article may go unnoticed. Patience would mean spending time determining whether something is real or fake news, or whether there are any serious reasons to believe in someone’s point of view, especially when we agree with it. It’s not the absence of fact-checking algorithms during presidential debates that deters us from voting for incompetent or dishonest politicians, but rather our intuition. Two factors mainly predict whether someone will win a presidential candidacy in the United States — the candidate’s height and whether we would want to have a beer with them.
While AI-based internet platforms are a relatively recent type of technology, their impact on human behavior is consistent with previous evidence about the impact of other forms of mass media, such as TV or video games, which show a tendency to fuel ADHD-like symptoms, like impulsivity, attention deficits, and restless hyperactivity. As the world increases in complexity and access to knowledge widens, we avoid slowing down to pause, think, and reflect, behaving like mindless automatons instead. Research indicates that faster information gathering online, for example, through instant Googling of pressing questions, impairs long-term knowledge acquisition as well as the ability to recall where our facts and information came from.
Unfortunately, it’s not so easy to fight against our impulsive behavior or keep our impatience in check. The brain is a highly malleable organ, with an ability to become intertwined with the objects and tools it utilizes. Some of these adaptations may seem pathological in certain contexts or cultures, but they are essential survival tools in others: restless impatience and fast-paced impulsivity are no exception.
Although we have the power to shape our habits and default patterns of behaviors to adjust to our habitat, if pace rather than patience is rewarded, then our impulsivity will be rewarded more than our patience. And if any adaptation is overly rewarded, it becomes a commoditized and overused strength, making us more rigid, less flexible, and a slave to our own habits, as well as less capable of displaying the reverse type of behavior. The downside of our adaptive nature is that we quickly become an exaggerated version of ourselves: we mold ourselves into the very objects of our experience, amplifying the patterns that ensure fit. When that’s the case, then our behaviors become harder to move or change.
When I first returned to my hometown in Argentina after having spent a full year in London, my childhood friends wondered why my pace was so unnecessarily accelerated—“Why are you in such a hurry?” Fifteen years later, I experienced the same disconnect in speed when returning to London from New York City, where the pace is significantly faster. Yet most New Yorkers seem slow by the relative standards of Hong Kong, a place where the button to close the elevator doors (two inward-looking arrows facing each other) is usually worn out, and the automatic doors of the taxis open and close while the taxis are still moving. Snooze, and you truly lose.
There may be limited advantages to boosting our patience when the world moves faster and faster. The right level of patience is always that which aligns with environmental demands and best suits the problems you need to solve. Patience is not always a virtue. If you are waiting longer than you should, then you are wasting your time. When patience breeds complacency or a false sense of optimism, or when it nurtures inaction and passivity, then it may not be the most desirable state of mind and more of a character liability than a mental muscle. In a similar vein, it is easy to think of real-life problems that arise from having too much patience or, if you prefer, would benefit from a bit of impatience: for example, asking for a promotion is usually a quicker way of getting it than patiently waiting for one; refraining from giving someone (e.g., a date, colleague, client, or past employer) a second chance can help you avoid predictable disappointments; and waiting patiently for an important email that never arrives can harm your ability to make better, alternative choices. In short, a strategic sense of urgency—which is the reverse of patience—can be rather advantageous.
There are also many moments when patience, and its deeper psychological enabler of self-control, may be an indispensable adaptation. If the AI age seems disinterested in our capacity to wait and delay gratification, and patience becomes somewhat of a lost virtue, we risk becoming a narrower and shallower version of ourselves.
This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-i-human-tomas-chamorro-premuzic-harvard-business-review-press-153003112.html?src=rss
Deep Brain Stimulation therapies have proven an invaluable treatment option for patients suffering from otherwise debilitating diseases like Parkinson's. However, it — and its sibling tech, the brain-computer interface — currently suffers from a critical shortcoming: the electrodes that convert electron pulses into bioelectric signals don't sit well with the surrounding brain tissue. And that's where the folks with the lab coats and squids come in! In We Are Electric: Inside the 200-Year Hunt for Our Body's Bioelectric Code, and What the Future Holds, author Sally Adee delves into two centuries of research into an often misunderstood and maligned branch of scientific discovery, guiding readers from the pioneering work of Alessandro Volta to the life-saving applications that might become possible once doctors learn to communicate directly with our body's cells.
“There’s a fundamental asymmetry between the devices that drive our information economy and the tissues in the nervous system,” Bettinger told The Verge in 2018. “Your cell phone and your computer use electrons and pass them back and forth as the fundamental unit of information. Neurons, though, use ions like sodium and potassium. This matters because, to make a simple analogy, that means you need to translate the language.”
“One of the misnomers within the field actually is that I’m injecting current through these electrodes,” explains Kip Ludwig. “Not if I’m doing it right, I don’t.” The electrons that travel down a platinum or titanium wire to the implant never make it into your brain tissue. Instead, they line up on the electrode. This produces a negative charge, which pulls ions from the neurons around it. “If I pull enough ions away from the tissue, I cause voltage-gated ion channels to open,” says Ludwig. That can — but doesn’t always — make a nerve fire an action potential. Get nerves to fire. That’s it — that’s your only move.
It may seem counterintuitive: the nervous system runs on action potentials, so why wouldn’t it work to just try to write our own action potentials on top of the brain’s own ones? The problem is that our attempts to write action potentials can be incredibly ham-fisted, says Ludwig. They don’t always do what we think they do. For one thing, our tools are nowhere near precise enough to hit only the exact neurons we are trying to stimulate. So the implant sits in the middle of a bunch of different cells, sweeping up and activating unrelated neurons with its electric field. Remember how I said glia were traditionally considered the brain’s janitorial staff? Well, more recently it emerged that they also do some information processing—and our clumsy electrodes will fire them too, to unknown effects. “It’s like pulling the stopper on your bathtub and only trying to move one of three toy boats in the bathwater,” says Ludwig. And even if we do manage to hit the neurons we’re trying to, there’s no guarantee that the stimulation is hitting it in the correct location.
To bring electroceuticals into medicine, we really need better techniques to talk to cells. If the electron-to-ion language barrier is an obstacle to talking to neurons, it’s an absolute non-starter for cells that don’t use action potentials, like the ones that we are trying to target with next-generation electrical interventions, including skin cells, bone cells, and the rest. If we want to control the membrane voltage of cancer cells to coax them back to normal behavior; if we want to nudge the wound current in skin or bone cells; if we want to control the fate of a stem cell—none of that is achievable with our one and only tool of making a nerve fire an action potential. We need a bigger toolkit. Luckily, this is the objective for a fast-growing area of research looking to make devices, computing elements, and wiring that can talk to ions in their native tongue.
Several research groups are working on “mixed conduction,” a project whose goal is devices that can speak bioelectricity. It relies heavily on plastics and advanced polymers with long names that often include punctuation and numbers. If the goal is a DBS electrode you can keep in the brain for more than ten years, these materials will need to safely interact with the body’s native tissues for much longer than they do now. And that search is far from over. People are understandably beginning to wonder: why not just skip the middle man and actually make this stuff out of biological materials instead of manufacturing polymers? Why not learn how nature does it?
It’s been tried before. In the 1970s, there was a flurry of interest in using coral for bone grafts instead of autografts. Instead of a traumatic double-surgery to harvest the necessary bone tissue from a different part of the body, coral implants acted as a scaffold to let the body’s new bone cells grow into and form the new bone. Coral is naturally osteoconductive, which means new bone cells happily slide onto it and find it an agreeable place to proliferate. It’s also biodegradable: after the bone grew onto it, the coral was gradually absorbed, metabolized, and then excreted by the body. Steady improvements have produced few inflammatory responses or complications. Now there are several companies growing specialized coral for bone grafts and implants.
After the success of coral, people began to take a closer look at marine sources for biomaterials. This field is now rapidly evolving — thanks to new processing methods which have made it possible to harvest a lot of useful materials from what used to be just marine waste, the last decade has seen an increasing number of biomaterials that originate from marine organisms. These include replacement sources for gelatin (snails), collagen (jellyfish), and keratin (sponges), marine sources of which are plentiful, biocompatible, and biodegradable. And not just inside the body — one reason interest in these has spiked is the effort to move away from polluting synthetic plastic materials.
Apart from all the other benefits of marine-derived dupes, they’re also able to conduct an ion current. That was what Marco Rolandi was thinking about in 2010 when he and his colleagues at the University of Washington built a transistor out of a piece of squid.
This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-we-are-electric-sally-adee-hachette-books-153003295.html?src=rss
Despite the repeated and audacious claims by its sometimes CEO, Elon Musk, the prospects of brain-computer interface (BCI) startup Neuralink bringing a product to market remain distant, according to a new report from Reuters. The BCI company was apparently denied authorization by the FDA in 2022 to conduct human trials using the same devices that killed all those pigs — namely, on account of the pig killing.
"The agency’s major safety concerns involved the device’s lithium battery; the potential for the implant’s tiny wires to migrate to other areas of the brain; and questions over whether and how the device can be removed without damaging brain tissue," current and former Neuralink employees told Reuters.
The FDA's concerns regarding the battery system and its novel transdermal charging capabilities revolve around the device's chances of failure. According to Reuters, the agency is seeking reassurances that the battery is "very unlikely to fail," because should it do so, the discharge of electrical current or heat energy from a ruptured pack could fry the surrounding tissue.
The FDA is also very concerned with potential problems should the device need to be removed wholesale, either for replacement or upgrades, due to the minuscule size of the electrical leads that extend into the patient's grey matter. Those leads are so small and delicate that they are at risk of breaking off during removal (or even during regular use) and then migrating to other parts of the brain where they might get lodged in something important.
During Neuralink's open house last November, Musk confidently claimed the company would secure FDA approval "within six months," basically by this spring. That estimate is turning out to be as accurate as his guesses for when the Cybertruck might finally enter production. “He can’t appreciate that this is not a car,” one employee told Reuters. “This is a person’s brain. This is not a toy.” Neuralink did not respond to requests for comment.
This article originally appeared on Engadget at https://www.engadget.com/fda-reportedly-denied-neuralinks-request-to-begin-human-trials-of-its-brain-implant-204454485.html?src=rss
Tesla's production capacities are in store for a significant growth spurt, CEO Elon Musk told the crowd assembled at the company's Austin, Texas Gigafactory for Investor Day 2023 — and AI will apparently be the magic bullet that gets them there. It's all part of what Musk is calling Master Plan part 3.
This is indeed Musk's third such Master Plan, the first two coming in 2006 and 2016, respectively. These have served as a roadmap for the company's growth and development over the past 17 years as Tesla has grown from neophyte startup to the world's leading EV automaker. "There is a clear path to a sustainable energy Earth and it does not require destroying natural habitats," Musk said during the keynote address.
"You could support a civilization much bigger than Earth [currently does]. Much more than the 8 billion humans could actually be supported sustainably on Earth and I'm just often shocked and surprised by how few people realize this," he continued.
The Master Plan aims to establish a sustainable energy economy by developing 240 terawatt-hours (TWh) of energy storage and 30 terawatts (TW) of renewable power generation, which would require an estimated $10 trillion investment, roughly 10 percent of global GDP. Musk notes, however, that the figure is less than half of what the world currently spends on the internal combustion economy. In all, he anticipates we'd need less than 0.2 percent of the world's land area to create the necessary solar and wind generation capacity.
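For readers keeping score, the investment figure checks out against the stated GDP share. A quick back-of-the-envelope in Python, assuming global GDP of roughly $100 trillion (my assumption; the keynote only quotes the $10 trillion figure and calls it roughly 10 percent of GDP):

```python
# Sanity check on the Master Plan investment figure quoted above.
investment_usd = 10e12            # $10 trillion, per Musk's keynote
assumed_global_gdp_usd = 100e12   # ~$100 trillion (assumption, not from keynote)

share = investment_usd / assumed_global_gdp_usd
print(f"{share:.0%}")  # prints "10%"
```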
"All cars will go to fully electric and autonomous," Musk declared, arguing once again that ICE vehicles will soon be viewed with the same disdain as the horse and buggy. He also teased potential plans to electrify aircraft and ships. "As we improve the energy density of batteries, you’ll see all transportation go fully electric, with the exception of rockets," he said. No further details as to when or how that might be accomplished were shared.
“A sustainable energy economy is within reach and we should accelerate it,” Drew Baglino, Tesla's SVP of Powertrain and Energy Engineering, added.
Developing...
This article originally appeared on Engadget at https://www.engadget.com/elon-musk-lays-out-his-vision-for-teslas-future-at-the-companys-investor-day-2023-215737642.html?src=rss
Engine knock, wherein fuel ignites unevenly along the cylinder wall resulting in damaging percussive shockwaves, is an issue that automakers have struggled to mitigate since the days of the Model T. The industry's initial attempts to solve the problem — namely tetraethyl lead — were, in hindsight, a huge mistake, having endumbened and stupefied an entire generation of Americans with their neurotoxic byproducts.
Dr. Vaclav Smil, Professor Emeritus at the University of Manitoba in Winnipeg, examines the short-sighted economic reasoning that led to leaded gas rather than a nationwide network of ethanol stations in his new book Invention and Innovation: A Brief History of Hype and Failure. Leaded gas is far from the only presumed advance to go over like a lead balloon. Invention and Innovation is packed with tales of humanity's best-intentioned, most ill-conceived and generally half-cocked ideas — from airships and hyperloops to DDT and CFCs.
Just seven years later Henry Ford began to sell his Model T, the first mass-produced affordable and durable passenger car, and in 1911 Charles Kettering, who later played a key role in developing leaded gasoline, designed the first practical electric starter, which obviated dangerous hand cranking. And although hard-topped roads were still in short supply even in the eastern part of the US, their construction began to accelerate, with the country’s paved highway length more than doubling between 1905 and 1920. No less important, decades of crude oil discoveries accompanied by advances in refining provided the liquid fuels needed for the expansion of the new transportation, and in 1913 Standard Oil of Indiana introduced William Burton’s thermal cracking of crude oil, the process that increased gasoline yield while reducing the share of volatile compounds that make up the bulk of natural gasolines.
But having more affordable and more reliable cars, more paved roads, and a dependable supply of appropriate fuel still left a problem inherent in the combustion cycle used by car engines: the propensity to violent knocking (pinging). In a perfectly operating gasoline engine, gas combustion is initiated solely by a timed spark at the top of the combustion chamber and the resulting flame front moves uniformly across the cylinder volume. Knocking is caused by spontaneous ignitions (small explosions, mini-detonations) taking place in the remaining gases before they are reached by the flame front initiated by sparking. Knocking creates high pressures (up to 18 MPa, or nearly 180 times the normal atmospheric level), and the resulting shock waves, traveling at speeds greater than sound, vibrate the combustion chamber walls and produce the telling sounds of a knocking, malfunctioning engine.
Knocking sounds alarming at any speed, but when an engine operates at a high load it can be very destructive. Severe knocking can cause brutal, irreparable engine damage, including cylinder head erosion, broken piston rings, and melted pistons; and any knocking reduces an engine’s efficiency and releases more pollutants; in particular, it results in higher nitrogen oxide emissions. The capacity to resist knocking — that is, fuel’s stability — is based on the pressure at which fuel will spontaneously ignite and has been universally measured in octane numbers, which are usually displayed by filling stations in bold black numbers on a yellow background.
Octane (C8H18) is one of the alkanes (hydrocarbons with the general formula CnH2n+2) that form anywhere from 10 to 40 percent of light crude oils, and one of its isomers (compounds with the same number of carbon and hydrogen atoms but with a different molecular structure), 2,2,4-trimethylpentane (iso-octane), was taken as the maximum (100) on the octane rating scale because the compound completely prevents any knocking. The higher the octane rating of gasoline, the more resistant the fuel is to knocking, and engines can operate more efficiently with higher compression ratios. North American refiners now offer three octane grades: regular gasoline (87), midgrade fuel (89), and premium fuel mixes (91–93).
During the first two decades of the twentieth century, the earliest phase of automotive expansion, there were three options to minimize or eliminate destructive knocking. The first one was to keep the compression ratios of internal combustion engines relatively low, below 4.3:1: Ford’s best-selling Model T, rolled out in 1908, had a compression ratio of 3.98:1. The second one was to develop smaller but more efficient engines running on better fuel, and the third one was to use additives that would prevent the uncontrolled ignition. Keeping compression ratios low meant wasting fuel, and the reduced engine efficiency was of particular concern during the years of rapid post–World War I economic expansion, as rising ownership of more powerful and more spacious cars led to concerns about the long-term adequacy of domestic crude oil supplies and the growing dependence on imports. Consequently, additives offered the easiest way out: they would allow using lower-quality fuel in more powerful engines operating more efficiently with higher compression ratios.
During the first two decades of the twentieth century there was considerable interest in ethanol (ethyl alcohol, C2H6O or CH3CH2OH), both as a car fuel and as a gasoline additive. Numerous tests proved that engines using pure ethanol would never knock, and ethanol blends with kerosene and gasoline were tried in Europe and in the US. Ethanol’s well-known proponents included Alexander Graham Bell, Elihu Thomson, and Henry Ford (although Ford did not, as many sources erroneously claim, design the Model T to run on ethanol or to be a dual-fuel vehicle; it was to be fueled by gasoline); Charles Kettering considered it to be the fuel of the future.
But three disadvantages complicated ethanol’s large-scale adoption: it was more expensive than gasoline, it was not available in volumes sufficient to meet the rising demand for automotive fuel, and increasing its supply, even if it were used only as the dominant additive, would have claimed significant shares of crop production. At that time there were no affordable, direct ways to produce the fuel on a large scale from abundant cellulosic waste such as wood or straw: cellulose had first to be hydrolyzed by sulfuric acid and the resulting sugars were then fermented. That is why fuel ethanol was made mostly from the same food crops that were used to make (in much smaller volumes) alcohol for drinking and medicinal and industrial uses.
The search for a new, effective additive began in 1916 in Charles Kettering’s Dayton Research Laboratories with Thomas Midgley, a young (born in 1889) mechanical engineer, in charge of this effort. In July 1918 a report prepared in collaboration with the US Army and the US Bureau of Mines listed ethyl alcohol, benzene, and cyclohexane as the compounds that did not produce any knocking in high-compression engines. In 1919, when Kettering was hired by GM to head its new research division, he defined the challenge as one of averting a looming fuel shortage: the US domestic crude oil supply was expected to be gone in fifteen years, and “if we could successfully raise the compression of our motors . . . we could double the mileage and thereby lengthen this period to 30 years.” Kettering saw two routes toward that goal: using a high-volume additive (ethanol or, as tests showed, fuel with 40 percent benzene that eliminated any knocking) or a low-percentage alternative, akin to but better than the 1 percent iodine solution that was accidentally discovered in 1919 to have the same effect.
In early 1921 Kettering learned about Victor Lehner’s synthesis of selenium oxychloride at the University of Wisconsin. Tests showed it to be a highly effective but, as expected, also a highly corrosive anti-knocking compound; the results led directly to considering compounds of other elements in group 16 of the periodic table. Both diethyl selenide and diethyl telluride showed even better anti-knocking properties, but the latter compound was poisonous when inhaled or absorbed through skin and had a powerful garlicky smell. Tetraethyl tin was the next compound found to be modestly effective, and on December 9, 1921, a solution of 1 percent tetraethyl lead (TEL) — (C2H5)4Pb — produced no knock in the test engine, and soon was found to be effective even when added in concentrations as low as 0.04 percent by volume.
TEL was originally synthesized in Germany by Karl Jacob Löwig in 1853 and had no previous commercial use. In January 1922, DuPont and Standard Oil of New Jersey were contracted to produce TEL, and by February 1923 the new fuel (with the additive mixed into the gasoline at pumps by means of simple devices called ethylizers) became available to the public in a small number of filling stations. Even as the commitment to TEL was going ahead, Midgley and Kettering conceded that “unquestionably alcohol is the fuel of the future,” and estimates showed that the 20 percent blend of ethanol and gasoline needed in 1920 could have been supplied by using only about 9 percent of the country’s grain and sugar crops, while providing an additional market for US farmers. And during the interwar period many European and some tropical countries used blends of 10–25 percent ethanol (made from surplus food crops and paper mill wastes) and gasoline, admittedly for relatively small markets, as the pre–World War II ownership of family cars in Europe was only a fraction of the US mean.
Other known alternatives included vapor-phase cracked refinery liquids, benzene blends, and gasoline from naphthenic crudes (containing little or no wax). Why did GM, well aware of these realities, decide not only to pursue the TEL route exclusively but also to claim (despite its own correct understanding) that there were no available alternatives: “So far as we know at the present time, tetraethyl lead is the only material available which can bring about these results”? Several factors help to explain the choice. The ethanol route would have required the mass-scale development of a new industry dedicated to an automotive fuel additive that could not be controlled by GM. Moreover, as already noted, the preferable option, producing ethanol from cellulosic waste (crop residues, wood) rather than from food crops, was too expensive to be practical. In fact, the large-scale production of cellulosic ethanol by new enzymatic conversions, once promised to be of epoch-making importance in the twenty-first century, has failed to meet those expectations, and by 2020 high-volume US production of ethanol (used as an anti-knocking additive) continued to be based on fermenting corn: in 2020 it claimed almost exactly one-third of the country’s corn harvest.
Cheaters are why we can't have nice things. All the time, money and effort that could be going towards expanded DLCs and improved gameplay mechanics is instead spent staving off the legions of mediocre players who mistake aimbots for actual gaming prowess. The entire exercise is exhausting and Ubisoft isn't going to take it anymore, the company announced Monday. Come the game's next update release, any 'Rainbow Six Siege' player found cheating through the use of input spoofing — that is, using a third-party device to run a keyboard and mouse on their console instead of a controller — will see their lag times drastically extended. Play stupid games, win stupid prizes.
These devices — which include the XIM APEX, the Cronus Zen, and the ReaSnow S1 — allow players to leverage the heightened sensitivity and faster reaction times that a keyboard and mouse offer over console controllers. They also incorporate aim assist, autoreload, and autoscope features which have long (and rightfully!) been scorned by the larger gaming community and banned from anything even loosely resembling official competition. But that hasn't stopped folks from increasingly relying on such devices to artificially boost their scores in online shooters from 'Destiny 2' to 'Overwatch.'
That will no longer be the case with 'Rainbow Six Siege.' The company revealed its Mousetrap system on Monday, a detection suite built specifically to sniff out accounts running these illicit hardware devices. Mousetrap is already live, and has been for a few seasons now, as the company honed the system's detection capabilities and built out a database of known cheats. Also, yes, they're very much onto you and your pedestrian FPS machinations.
“We know exactly which players are spoofing and when they were spoofing,” Jan Stahlhacke, gameplay programming team lead for 'Rainbow Six Siege,' announced in the Y8S1 reveal above. “We also know that at the highest ranks spoofers become much more common.”
Should the system spot one, that account will see a notable increase in its response times, more than enough to cancel out any ill-gained advantages. The user will have to unplug the device, then play a few more rounds with the "al-ping-tross" chained to their neck before the lag penalty (eventually) dissipates. Activision took similar — and equally inventive — measures in 2022 against Call of Duty cheaters with its Disarm measure.
The company does acknowledge that such devices are used legitimately by gamers with disabilities and Ubisoft urges those players to reach out with feedback about how these changes might impact them. Huh, seems like the sort of thing you'd want to get squared away before enacting a sweeping policy such as this but, then again, Ubisoft isn't exactly famous for its culture of inclusivity.
GT7 players will be able to access a special “Gran Turismo Sophy Race Together” mode from February 21st at 1am ET, when the update arrives. Players will face off against four separate GT Sophy AI opponents, all of whose vehicles are specced slightly differently so you’re not going up against a quartet of clones, in a four-circuit series tiered by difficulty (beginner, intermediate, expert).
“The difference [between racers] is that, it's essentially the power you have versus the other cars on the track,” Michael Spranger told Engadget. “You have different levels of performance. In the beginning level, you have a much more powerful vehicle — still within the same class, but you're much faster [than your competition].” That performance gap continues to shrink as you move up in difficulty until you reach the one-on-one against GT Sophy in identically specced vehicles.
The Sophy you race here is the exact same Sophy that’s been winning against the pros, Peter Wurman explained. The AI has not been hobbled or dumbed down in any way ahead of this release. “The power the player has is a car advantage, which allows them to be competitive, but otherwise, GT Sophy is the same. Really good driver, just all across the board.”
This is a limited-time event. The GT Sophy races will only be available until the end of March. The Sony AI team is time-limiting this initial release on account of a few technical reasons but, “mostly this is a new game design and we want to try it out, get feedback, and then take what we learned and iterate on that,” Wurman explained. The team can’t share any specifics about where the program goes from here.
“We believe this technology has a huge potential to really elevate player experience across different game types, different experiences,” Wurman continued. He notes that agent AIs like GT Sophy can accomplish a lot in terms of interacting with players but also sees related AI systems playing an expanded role as well. The “technology is really crucial for the content creation itself,” he said. “They're going to these race tracks, doing detailed capturing in order to create the environment and, speaking generally, you can imagine AI has a really big potential to help with many of those processes.”
If you’re thinking about grabbing a copy of the game ahead of tomorrow’s update, you’ll want to get some laps in before it arrives. Only players who’ve reached Collector Level 6 will qualify to race against the AI.
Some of us are destined to lead successful lives thanks to the circumstances of our birth. Others, like attorney Bruce Jackson, lead such lives in spite of them. Raised in New York's Amsterdam housing projects and subjected to the daily brutalities of growing up a Black man in America, Jackson tells a story that is ultimately one of tempered success. Sure, he went on to study at Georgetown Law before representing some of the biggest names in hip hop — LL Cool J, Heavy D, the Lost Boyz and Mr. Cheeks, SWV, Busta Rhymes — and working 15 years as Microsoft's associate general counsel. But at the end of the day, he is still a Black man living in America, with all the baggage that comes with it.
In his autobiography, Never Far from Home (out now from Atria), Jackson recounts the challenges he has faced in life, of which there is no shortage: from being falsely accused of robbery at age 10, to witnessing the murder of his friend at 15, to spending a night in lockup as an adult for the crime of driving his own car; the shock of navigating Microsoft's lilywhite workforce following years spent in the entertainment industry; and the end of a loving marriage brought low by his demanding work. While Jackson's story is ultimately one of triumph, Never Far from Home reveals a hollowness, a betrayal, of the American Dream that people of Bill Gates' (and this writer's) complexion will likely never have to experience. In the excerpt below, Jackson recalls his decision to leave a Napster-ravaged music industry for the clammy embrace of Seattle and the Pacific Northwest.
In the late 1990s, the digital revolution pushed the music business into a state of flux. And here was Tony Dofat, sitting in my office, apoplectic, talking about how to stop Napster and other platforms from taking the legs out from under the traditional recording industry.
I shook my head. “If they’re already doing it, then it’s too late. Cat’s out of the bag. I don’t care if you start suing people, you’re never going back to the old model. It’s over.”
In fact, the lawsuits spearheaded by Metallica and others, which became the chosen mode of defense in those early days of the digital music onslaught, only served to embolden consumers and publicize their cause. Free music for everyone! won the day.
These were terrifying times for artists and industry executives alike. A decades-old business model had been built on the premise that recorded music was a salable commodity.
Artists would put out a record and then embark on a promotional tour to support that record. A significant portion of a musician’s income (and the income of the label that supported the artist) was derived from the sale of a physical product: recorded albums (or singles) on vinyl, cassette, or compact disc. Suddenly, that model was flipped on its head... and still is. Artists earn a comparative pittance from downloads or streams, and most of their revenue is derived from touring, or from monetizing social media accounts whose numbers are bolstered by a song’s popularity. (Publicly, Spotify has stated that it pays artists between $.003 and $.005 per stream. Translation: 250 streams will result in revenue of approximately one dollar for the recording artist.)
Thus, the music itself has been turned primarily into a marketing tool used to entice listeners to the product: concert and festival tickets, and a social media advertising platform. It is a much tougher and leaner business model. Additionally, it is a model that changed the notion that record labels and producers needed only one decent track around which they could build an entire album. This happened all the time in the vinyl era: an artist came up with a hit single, and an album was quickly assembled, often with filler that did not meet the standard established by the single. Streaming platforms changed all of that. Consumers today seek out only the individual songs they like, and do it for a fraction of what they used to spend on albums. Ten bucks a month gets you access to thousands of songs on Spotify or Pandora or Apple Music, roughly the same amount a single album cost in the pre-streaming era. For consumers, it has been a landmark victory (except for the part about artists not being able to create art if they can’t feed themselves); for artists and record labels, it has been a catastrophic blow.
For everyone connected to the music business, it was a shock to the system. For me, it was provocation to consider what I wanted to do with the next phase of my career. In early 2000, I received a call from a corporate recruiter about a position with Microsoft, which was looking for an in-house counsel with a background in entertainment law — specifically, to work in the company’s burgeoning digital media division. The job would entail working with content providers and negotiating deals in which they would agree to make their content — music, movies, television shows, books — available to consumers via Microsoft’s Windows Media Player. In a sense, I would still be in the entertainment business; I would be spending a lot of time working with the same recording industry executives with whom I had built prior relationships.
But there were downsides, as well. For one thing, I was recently married, with a one-year-old baby and a stepson, living in a nice place in the New York City suburbs. I wasn’t eager to leave them—or my other daughters—three thousand miles behind while I moved to Microsoft’s headquarters in the Pacific Northwest. From an experience standpoint, though, it was almost too good an offer to turn down.
Deeply conflicted and at a crossroads in my career, I solicited advice from friends and colleagues, including, most notably, Clarence Avant. If I had to name one person who has been the most important mentor in my life, it would be Clarence, “the Black Godfather.” In an extraordinary life that now spans almost ninety years, Clarence has been among the most influential men in Black culture, music, politics, and civil rights. It’s no surprise that Netflix’s documentary on Clarence featured interviews with not just a who’s who of music and entertainment industry superstars, but also former US presidents Barack Obama and Bill Clinton.
In the early 1990s, Clarence became chairman of the board of Motown Records. As lofty a title as that might be, it denotes only a fraction of the wisdom and power he wielded. When the offer came down from Microsoft, I consulted with Clarence. Would I be making a mistake, I wondered, by leaving the music business and walking away from a firm I had started? Clarence talked me through the pros and cons, but in the end, he offered a steely assessment, in a way that only Clarence could.
“Son, take your ass to Microsoft, and get some of that stock.”