Posts with «author_name|andrew tarantola» label

The JumpMod haptic backpack makes virtual leaps more realistic

VR technology has come a long way from the early Virtuality systems that inhabited our local malls in the ‘80s and ‘90s, with modern headsets offering 4K resolution, Dolby Atmos surround sound, and motion-sensing controllers. But even the most convincing optical and auditory illusions can’t fully fool our inner ears.

“If you want to feel these big sensations, you’ve got to have the infrastructure first,” University of Chicago PhD student Romain Nith told Engadget. “You’ve got to go to theme parks, ride roller coasters, or you need bungee cords pulling you from the ceiling.” And while those sensations really are like what they’re simulating (because you're really being thrown around), “you can't have that in your living room.”

The JumpMod Haptic Backpack prototype, on the other hand, can effectively fool its user’s sense of proprioception, making jumping in VR feel far more lifelike with a device the size of, well, a backpack. It was developed by Nith and his research team at the University of Chicago’s Human-Computer Integration Lab, which is headed by Pedro Lopes, an associate professor in the Department of Computer Science. The HCI Lab’s research focuses on using technology to “borrow parts of the body for input and output, rather than adding more technology to the body” and, as such, has generated a veritable menagerie of novel devices exploring that concept.

“I think the next generation of devices is not going to be defined by how small they are, or how implanted they are in the body… but more about how deeply they integrate with your body,” Lopes told Engadget. He points to the functional issues of dealing with Google Maps in 2007 — specifically the need to physically print them out for them to be useful. “Now when that runs on your smartphone, the device that can move with you, in your pocket, you can access information anywhere, anytime,” he said. “All of a sudden that makes a lot of sense. So every jump of these paradigms allows you to do something new.”

“We're looking at the body and trying to create technology that really hybridizes with you,” Lopes continued, pointing to smartwatches as an example: they rely on small spinning motors to create their notification vibrations. “That is one of the reasons smart watches are so big.”

Instead, a small electrical charge can elicit the same tingling sensation without the need for a “big rotating mass type of device,” Lopes explained. “The sensations, the functionality, ends up being the same and the device looks very different.”

JumpMod takes a similar approach: rather than hoisting the user wholesale to physically recreate the sensation, it rapidly shifts the position of a worn weight to fool their senses. Used with a VR program, the device modifies the user’s sense of jumping by rapidly lifting and lowering a 2-kilogram weight (which doubles as the device’s power cell) in time with their physical movement. Adjusting the speed of the weight’s motion changes the user’s perceived jump momentum, enabling the team to create sensations of higher and broader jumps, softer and harder landings, and being pulled up or down.
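To make that mechanism concrete, here's a toy sketch of the timing logic. Everything in it is hypothetical (the threshold, the speeds and the actuate_weight stub included), since the team hasn't published the prototype's firmware; it only illustrates the boost-on-liftoff, soften-on-landing idea.

```python
# Toy sketch of JumpMod-style timing logic. All values and interfaces are
# hypothetical; the real prototype's firmware has not been published.
JUMP_THRESHOLD = 4.0   # m/s^2 spike treated as liftoff (illustrative)
BOOST_SPEED = 1.2      # m/s; faster weight travel -> bigger perceived jump
SOFTEN_SPEED = 0.4     # m/s; slower weight travel -> softer perceived landing

def actuate_weight(direction: int, speed: float) -> None:
    """Stand-in for the motor driving the 2 kg weight up (+1) or down (-1)."""
    print(f"weight {'up' if direction > 0 else 'down'} at {speed:.1f} m/s")

def control_loop(accel_samples) -> None:
    """Boost on liftoff, cushion on landing, in time with the user's jump."""
    airborne = False
    for a in accel_samples:  # gravity-compensated vertical acceleration, m/s^2
        if not airborne and a > JUMP_THRESHOLD:
            actuate_weight(+1, BOOST_SPEED)   # exaggerate perceived jump height
            airborne = True
        elif airborne and a < -JUMP_THRESHOLD:
            actuate_weight(-1, SOFTEN_SPEED)  # soften the perceived landing
            airborne = False

# Simulated IMU trace: stance, liftoff spike, flight, landing spike.
control_loop([0.1, 0.3, 6.5, 1.0, -0.8, -7.2, 0.2])
```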

The device itself is completely untethered and can operate both indoors and out. In one demo, the research team used the backpack to improve its user’s timing when jumping rope, and even took JumpMod to a basketball court to show how it could help (or hinder) players in a game of one-on-one. The current iteration is built to generate as much force as comfortably possible in order to maximize the sensation, so it does tend to be rather loud and heavy.

“We probably don't have to drive it as fast, which generates less noise, and probably don't even need all the weight that we have, which would make for a slimmer backpack,” Lopes said. “Where does that sensation start to occur? Is that at 100 grams, is it at 300 grams? We optimized it for maximum power, rather than for a minimal device. That's the kind of stuff one would do if one were to commercialize [the technology].”

Technically, the device doesn’t even need to be worn; it could theoretically be built into the backs of theater seats. “I think that the tension here in VR is really interesting,” Lopes said. “If you go to the Disney theme park, they play these super-immersive VR scenes, you're on a motion platform and when the scene jumps, the motion platform goes up.” Lopes argues that a similar sensation could potentially be produced with a fraction of the infrastructure using JumpMod.

“There's lots of proto-motion platforms for VR, some with special shoes, some move around, some rotate, but none of them have really paid off,” Lopes said. “It's a really difficult challenge where, if you want to create an involuntary force and involuntary movement, you need a big infrastructure. We are interested in whether that's possible, but honestly, we don't even know if it is.”

The “involuntary” aspect of these devices and technologies is an ethical sticking point for the field, and one that Lopes’ lab has studied frequently. His students have developed passive systems that allow one user to dictate the hand motions of another, or that use electrical muscle stimulation to improve a user’s dexterity — artificially boosting their reaction speeds and shaping their finger positions on a guitar fretboard. Fingers can even be controlled through an exoskeleton to properly form the words of American Sign Language. However, all of those devices require the user to relinquish some degree of control over their body to let the machines do their thing.

“We call it ‘optimizing agency,’” Lopes said. For most of the projects in his lab, “agency is not super critical.” The stakes are low when you let a robot guide your finger positioning while learning to play guitar, or let one physically steer your head, via electrical muscle stimulation, during a workplace safety training exercise. “We apply the [EMS pads] to the neck muscles,” Lopes reassured Engadget, which gently buzz the user into looking around their office space, “so they know where the fire extinguisher is, where the fire exit is.”

Lopes does concede that physically prompting a user to turn their head by externally stimulating their nervous system could be construed as “making people completely lose their sense of agency.” However, he notes that his lab consistently includes user overrides in all of its EMS-related devices. “In all these, we design some form [of override] to keep you in control. For example, in the case of [the head actuation study], if you push against the device, it senses that you're pushing against the direction that it’s starting to move your head and turns off.”

“I think there's more research to be done there, more complex ways to tackle this,” he continued. “Brain Computer Interfaces (BCIs) are really interesting because you can kind of detect what people are thinking, what their goal is, and then you don't even have to activate the system if it's not needed.”

This article originally appeared on Engadget at https://www.engadget.com/the-jumpmod-haptic-backpack-makes-virtual-leaps-more-realistic-160003718.html?src=rss

IBM and NASA teamed up to build the GPT of Earth sciences

NASA estimates that its Earth science missions will generate around a quarter million terabytes of data in 2024 alone. To help climate scientists and the research community efficiently dig through these reams of raw satellite data, IBM, Hugging Face and NASA have collaborated to build an open-source geospatial foundation model that will serve as the basis for a new class of climate and Earth science AIs that can track deforestation, predict crop yields and track greenhouse gas emissions.

For this project, IBM leveraged its recently released Watsonx.ai platform to build the foundation model, training it on a year’s worth of NASA’s Harmonized Landsat Sentinel-2 (HLS) data. That dataset harmonizes imagery from the Landsat and Sentinel-2 satellite programs; the ESA’s pair of Sentinel-2 satellites is built to acquire high-resolution optical imagery over land and coastal regions in 13 spectral bands.

For its part, Hugging Face is hosting the model on its open-source AI platform. According to IBM, by fine-tuning the model on “labeled data for flood and burn scar mapping,” the team was able to improve its performance by 15 percent over the current state of the art while using half as much data.
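For researchers who want to experiment with the model, fetching it from the Hub should look something like the sketch below. The repository identifier here is a placeholder rather than the official name, so check Hugging Face for the model card IBM and NASA actually publish, along with the fine-tuning instructions that accompany it.

```python
# Sketch: download the open-source geospatial foundation model from the
# Hugging Face Hub. The repo_id below is a placeholder, not the official name.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="ibm-nasa-geospatial/earth-foundation-model")
print(f"Model weights and configs saved to: {local_dir}")
```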

"The essential role of open-source technologies to accelerate critical areas of discovery such as climate change has never been clearer,” Sriram Raghavan, VP of IBM Research AI, said in a press release. “By combining IBM’s foundation model efforts aimed at creating flexible, reusable AI systems with NASA’s repository of Earth-satellite data, and making it available on the leading open-source AI platform, Hugging Face, we can leverage the power of collaboration to implement faster and more impactful solutions that will improve our planet.”

This article originally appeared on Engadget at https://www.engadget.com/ibm-and-nasa-teamed-up-to-build-the-gpt-of-earth-sciences-040116377.html?src=rss

AI-assisted cancer screening could cut radiologist workloads in half

A newly published study in the Lancet Oncology journal has found that the use of AI in mammogram cancer screening can safely cut radiologist workloads nearly in half without risk of increasing false-positive results. In effect, the study found that the AI’s recommendations were on par with those of two radiologists working together.

“AI-supported mammography screening resulted in a similar cancer detection rate compared with standard double reading, with a substantially lower screen-reading workload, indicating that the use of AI in mammography screening is safe,” the study found.

The study was performed by a research team out of Lund University in Sweden, which followed 80,033 Swedish women (average age: 54) for just over a year across 2021 and 2022. Of the 39,996 patients randomly assigned AI-supported breast cancer screenings, 244 tests (about 0.61 percent) returned screen-detected cancers. Of the other 40,024 patients, who received conventional screenings, just 203 tests (roughly 0.51 percent) did the same.

Of those extra 41 cancers detected by the AI side, 19 turned out to be invasive. Both the AI-supported and conventional screenings produced the same 1.5 percent false-positive rate. Most impressively, radiologists on the AI side had to look at 36,886 fewer screen readings than their counterparts, a 44 percent reduction in their workload.
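Those detection rates check out against the study's own patient counts, as a bit of quick arithmetic shows:

```python
# Back-of-the-envelope check of the interim results reported in the study.
ai_detected, ai_screened = 244, 39_996       # AI-supported arm
std_detected, std_screened = 203, 40_024     # conventional double-reading arm

print(f"AI arm detection rate:           {ai_detected / ai_screened:.2%}")   # ~0.61%
print(f"Conventional arm detection rate: {std_detected / std_screened:.2%}") # ~0.51%
print(f"Extra cancers found in the AI arm: {ai_detected - std_detected}")    # 41
```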

“These promising interim safety results should be used to inform new trials and program-based evaluations to address the pronounced radiologist shortage in many countries, but they are not enough on their own to confirm that AI is ready to be implemented in mammography screening," lead author Dr. Kristina Lång warned in a release. “We still need to understand the implications on patients’ outcomes, especially whether combining radiologists’ expertise with AI can help detect interval cancers that are often missed by traditional screening, as well as the cost-effectiveness of the technology.”

Cancer detection has been an aspirational goal for computer vision researchers and AI companies for years now. I mean, who doesn’t want to be the company that builds the tricorder that infallibly spots cancerous growths in their earliest stages? Machine vision systems designed for these screenings have improved steadily and, in specific cases, have been shown to be as reliable as human clinicians, with the likes of IBM, Google, MIT and NVIDIA all investing in similar cancer screening research.

This article originally appeared on Engadget at https://www.engadget.com/ai-assisted-cancer-screening-could-cut-radiologist-workloads-in-half-193427969.html?src=rss

Hitting the Books: The dangerous real-world consequences of our online attention economy

If reality television has taught us anything, it's that there's not much people won't do if offered enough money and attention. Sometimes, even just the latter. Unfortunately for the future prospects of our civilization, modern social media has focused on those same character foibles and optimized them at a global scale, as sacrifices at the altar of audience growth and engagement. In Outrage Machine, writer and technologist Tobias Rose-Stockwell walks readers through the inner workings of these modern technologies, illustrating how they're designed to capture and keep our attention, regardless of what they have to do in order to do it. In the excerpt below, Rose-Stockwell examines the human cost of feeding the content machine through a discussion of YouTube personality Nikocado Avocado's rise to internet stardom.


Excerpted from OUTRAGE MACHINE: How Tech Amplifies Discontent, Disrupts Democracy—And What We Can Do About It by Tobias Rose-Stockwell. Copyright © 2023 by Tobias Rose-Stockwell. Reprinted with permission of Legacy Lit. All rights reserved.


This Game Is Not Just a Game

Social media can seem like a game. When we open our apps and craft a post, the way we look to score points in the form of likes and followers distinctly resembles a strange new playful competition. But while it feels like a game, it is unlike any other game we might play in our spare time.

The academic C. Thi Nguyen has explained how games are different: “Actions in games are screened off, in important ways, from ordinary life. When we are playing basketball, and you block my pass, I do not take this to be a sign of your long-term hostility towards me. When we are playing at having an insult contest, we don’t take each other’s speech to be indicative of our actual attitudes or beliefs about the world.” Games happen in what the Dutch historian Johan Huizinga famously called “the magic circle”— where the players take on alternate roles, and our actions take on alternate meanings.

With social media we never exit the game. Our phones are always with us. We don’t extricate ourselves from the mechanics. And since the goal of the game designers of social media is to keep us there as long as possible, it’s an active competition with real life. With a constant type of habituated attention being pulled into the metrics, we never leave these digital spaces. In doing so, social media has colonized our world with its game mechanics.

Metrics are Money

While we are paid in the small rushes of dopamine that come from accumulating abstract numbers, metrics also translate into hard cash. Acquiring these metrics doesn’t just provide us with hits of emotional validation. They are transferable into economic value that is quantifiable and very real.

It’s no secret that the ability to consistently capture attention is an asset that brands will pay for. A follower is a tangible, monetizable asset worth money. If you’re trying to purchase followers, Twitter will charge you between $2 and $4 to acquire a new one using their promoted accounts feature.

If you have a significant enough following, brands will pay you to post sponsored items on their behalf. Depending on the size of your following in Instagram, for instance, these payouts can range from $75 per post (to an account with two thousand followers), up to hundreds of thousands of dollars per post (for accounts with hundreds of thousands of followers).

Between 2017 and 2021, the average cost for reaching a thousand Twitter users (the metric advertisers use is CPM, or cost per mille) was between $5 and $7. It costs that much to get a thousand eyeballs on your post. Any strategies that increase how much your content is shared also have a financial value.

Let’s now bring this economic incentive back to Billy Brady’s accounting of the engagement value of moral outrage. He found that adding a single moral or emotional word to a post on Twitter increased the viral spread of that content by 17 percent per word. All of our posts to social media exist in a marketplace for attention — they vie for the top of our followers’ feeds. Our posts are always competing against other people’s posts. If outraged posts have an advantage in this competition, they are literally worth more money.

For a brand or an individual, if you want to increase the value of a post, then including moral outrage, or linking to a larger movement that signals its moral conviction, might increase the reach of that content by at least that much. Moreover, it might actually improve the perception and brand affinity by appealing to the moral foundations of the brand’s consumers and employees, increasing sales and burnishing their reputation. This can be an inherently polarizing strategy, as a company that picks a cause to support, whose audience is morally diverse, might then alienate a sizable percentage of their customer base who disagree with that cause. But these economics can also make sense — if a company knows enough about its consumers’ and employees’ moral affiliations — it can make sure to pick a cause-sector that’s in line with its customers.

Since moral content is a reliable tool for capturing attention, it can also be used for psychographic profiling for future marketing opportunities. Many major brands do this with tremendous success — creating viral campaigns that utilize moral righteousness and outrage to gain traction and attention among core consumers who have a similar moral disposition. These campaigns also often get a secondary boost due to the proliferation of pile-ons and think pieces discussing these ad spots. Brands that moralize their products often succeed in the attention marketplace.

This basic economic incentive can help to explain how and why so many brands have begun to link themselves with online cause-related issues. While it may make strong moral sense to those decision-makers, it can make clear economic sense to the company as a whole as well. Social media provides measurable financial incentives for companies to include moral language in their quest to burnish their brands and perceptions.

But as nefarious as this sounds, moralization of content is not always the result of callous manipulation and greed. Social metrics do something else that influences our behavior in pernicious ways.

Audience Capture

In the latter days of 2016, I wrote an article about how social media was diminishing our capacity for empathy. In the wake of that year’s presidential election, the article went hugely viral, and was shared with several million people. At the time I was working on other projects full time. When the article took off, I shifted my focus away from the consulting work I had been doing for years, and began focusing instead on writing full time. One of the by-products of that tremendous signal from this new audience is the book you’re reading right now.

A sizable new audience of strangers had given me a clear message: This was important. Do more of it. When many people we care about tell us what we should be doing, we listen.

This is the result of “audience capture”: how we influence, and are influenced by those who observe us. We don’t just capture an audience — we are also captured by their feedback. This is often a wonderful thing, provoking us to produce more useful and interesting works. As creators, the signal from our audience is a huge part of why we do what we do.

But it also has a dark side. The writer Gurwinder Boghal has explained the phenomenon of audience capture for influencers by illustrating the story of a young YouTuber named Nicholas Perry. In 2016, Perry began a YouTube channel as a skinny vegan violinist. After a year of getting little traction online, he abandoned veganism, citing health concerns, and shifted to uploading mukbang (eating show) videos of himself trying different foods for his followers. These followers began demanding more and more extreme feats of food consumption. Before long, in an attempt to appease his increasingly demanding audience, he was posting videos of himself eating whole fast-food menus in a single sitting.

He found a large audience with this new format. In terms of metrics, this new format was overwhelmingly successful. After several years of following his audience’s continued requests, he amassed millions of followers, and over a billion total views. But in the process, his online identity and physical character changed dramatically as well. Nicholas Perry became the personality Nikocado — an obese parody of himself, ballooning to more than four hundred pounds, voraciously consuming anything his audience asked him to eat. Following his audience’s desires caused him to pursue increasingly extreme feats at the expense of his mental and physical health.


Nicholas Perry, left, and Nikocado, right, after several years of building a following on YouTube. Source: Nikocado Avocado YouTube Channel.

Boghal summarizes this cross-directional influence.

When influencers are analyzing audience feedback, they often find that their more outlandish behavior receives the most attention and approval, which leads them to recalibrate their personalities according to far more extreme social cues than those they’d receive in real life. In doing this they exaggerate the more idiosyncratic facets of their personalities, becoming crude caricatures of themselves.

This need not only apply to influencers. We are signal-processing machines. We respond to the types of positive signals we receive from those who observe us. Our audiences online reflect back to us their opinion of our behavior, and we adapt to fit it. The metrics (likes, followers, shares, and comments) available to us now on social media allow us to measure that feedback far more precisely than we previously could, leading us to internalize what is “good” behavior.

As we find ourselves more and more inside of these online spaces, this influence becomes more pronounced. As Boghal notes, “We are all gaining online audiences.” Anytime we post to our followers, we are entering into a process of exchange with our viewers — one that is beholden to the same extreme engagement problems found everywhere else on social media.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-the-dangerous-real-world-consequences-of-our-online-attention-economy-143050602.html?src=rss

MIT's 'PhotoGuard' protects your images from malicious AI edits

Dall-E and Stable Diffusion were only the beginning. As generative AI systems proliferate and companies work to differentiate their offerings from those of their competitors, chatbots across the internet are gaining the power to edit images — as well as create them — with the likes of Shutterstock and Adobe leading the way. But with those new AI-empowered capabilities come familiar pitfalls, like the unauthorized manipulation of, or outright theft of, existing online artwork and images. Watermarking techniques can help mitigate the latter, while the new "PhotoGuard" technique developed by MIT CSAIL could help prevent the former.

PhotoGuard works by altering select pixels in an image such that they will disrupt an AI's ability to understand what the image is. Those "perturbations," as the research team refers to them, are invisible to the human eye but easily readable by machines. The "encoder" attack method of introducing these artifacts targets the algorithmic model's latent representation of the target image — the complex mathematics that describes the position and color of every pixel in an image — essentially preventing the AI from understanding what it is looking at. 

The more advanced, and more computationally intensive, "diffusion" attack method camouflages an image as a different image in the eyes of the AI. It defines a target image and optimizes the perturbations in the protected image to resemble that target. Any edits an AI tries to make to these "immunized" images get applied to the fake "target" image instead, resulting in an unrealistic-looking generated output.

""The encoder attack makes the model think that the input image (to be edited) is some other image (e.g. a gray image)," MIT doctorate student and lead author of the paper, Hadi Salman, told Engadget. "Whereas the diffusion attack forces the diffusion model to make edits towards some target image (which can also be some grey or random image)." The technique isn't foolproof, malicious actors could work to reverse engineer the protected image potentially by adding digital noise, cropping or flipping the picture.

“A collaborative approach involving model developers, social media platforms, and policymakers presents a robust defense against unauthorized image manipulation. Working on this pressing issue is of paramount importance today,” Salman said in a release. “And while I am glad to contribute towards this solution, much work is needed to make this protection practical. Companies that develop these models need to invest in engineering robust immunizations against the possible threats posed by these AI tools."

This article originally appeared on Engadget at https://www.engadget.com/mits-photoguard-protects-your-images-from-malicious-ai-edits-213036912.html?src=rss

Hitting the Books: 'Vision Zero' could help reclaim roads from American car culture

Despite decades of national infrastructure built around personal vehicles (often at the direct exclusion and expense of other modes of transport), modern folks get around on far more than planes, trains and automobiles these days, with our city streets and suburban neighborhoods increasingly populated by an ever-widening variety of vehicles, from e-scooters and city bikes to autonomous EV taxis and internal combustion SUVs. The task of accommodating these competing priorities, ensuring that everybody in town, regardless of physical or financial ability, can get where they're going, is growing ever more challenging.

Inclusive Transportation: A Manifesto for Divided Communities, by civil engineer Veronica O. Davis, highlights the many failings (both procedural and structural) of America's transportation infrastructure and calls on city planners to reexamine how their public works projects actually affect the people they are intended to serve. Davis deftly argues in favor of a systemic revolution in the transportation planning field, demanding better and more functional training for civil engineers, more diverse voices in transportation planning projects, and the undoing of at least some of the community-dividing harms that America's past love affair with freeways has wrought. In the excerpt below, Davis examines the relative successes of Washington, DC's Vision Zero road safety program.

Island Press

From Inclusive Transportation by Veronica O. Davis. Copyright © 2023 Veronica O. Davis.


Reevaluating Transportation Policies

Policies lay the foundation for many decisions. For example, I worked with a city that had a policy that the curb-to-curb space could not be expanded unless there were extenuating circumstances, and even then the answer was no. That meant the roadway could not be expanded, but we could do a “road diet,” or narrowing of the roadway. As an example, if a road was sixty feet wide from curb to curb, all we had was sixty feet to work with as we developed alternatives to move the growing number of people moving into the corridor. The city’s policy decision was “Work with what you have, and if we are going to spend money to reconstruct the road, it will not be to widen it.”

Vision Zero could be a path forward as an overall framework for changing policy priorities, but it needs to be more than a plan, and it needs to be crafted with the people. Vision Zero is a concept from Sweden that recognizes we are human and we will make mistakes, but our mistakes should not lead to serious injuries or fatalities. One thing that gets muddled as people in the United States attempt to adopt Vision Zero is conflation of the total number of crashes with the total number of crashes that lead to deaths and serious injuries. Vision Zero does not demand perfect records, and it recognizes that crashes will occur because we are human. Instead, it argues that the focus should be on deaths and serious injuries. The distinction is important because crashes generally happen all over a community and people walk away from fender benders and sideswipes with minor or no injuries. Other than having a bad day, everyone is alive to recount the drama with their family and friends. But the more severe crashes tend to cluster in certain communities. If you focus on crashes regardless of the resulting injury, you may move resources from communities that need them more because they are where people are dying.

The Vision Zero plan of Washington, DC, is a great example of both successful interactions and some shortcomings. In 2015, only a few US cities embraced Vision Zero. DC’s plan was one of the first in the United States that included extensive outreach during the plan’s development. Over the course of a summer, we had ten meetings on street corners around the city, a youth summit with over two hundred young people, two meetings with special advocacy groups, and meetings with over thirty-five city agencies. We did not just inform people; we also engaged with them and used their feedback and stories to shape the plan. As an example, after talking with a group of young Black teens at the youth summit, we removed all enforcement related to people walking and biking. The young people conveyed to us that sometimes crossing the street mid-block got them away from a group of people who may want to cause them harm. The teens weighed their risk of being targeted by violence as higher than their risk of being struck by someone driving a vehicle.

In addition, we heard from people that having police enforce laws related to walking and biking put the community and law enforcement in conflict with each other. Charles T. Brown has documented in his research for his podcast Arrested Mobility how laws such as those prohibiting jaywalking are disproportionately enforced in Black and Brown communities, for men in particular. In DC’s Vision Zero plan, enforcement was instead targeted to dangerous driving behavior such as excessive speeding, driving under the influence, distracted driving, and reckless driving.

In a world where we are examining policing more closely after George Floyd’s murder, I think plans that reexamine equity in this way should take one more step. DC’s Vision Zero plan correctly focused on behaviors that lead to deaths and fatalities. However, the plan should have recommended a comprehensive evaluation of all the transportation laws and the removal of any that were not supported by data or did not lead to safer streets. If we are discussing data-driven approaches, the laws should target behaviors that lead to crashes that result in deaths and serious injuries.

Moreover, this plan offered recommendations and strategies and did not go further. After the Vision Zero plan was shared, communities were all demanding safer streets. This calls to mind the discussion [in chapter 2] of Montgomery County and the tension about who would get resources. All streets could be safer, even if incrementally, and without guiding principles imposing more of an “emergency room” structure, DC’s Vision Zero program led to resources going to where there was advocacy but not necessarily to the areas that needed the investment the most. If you have an opportunity similar to this, I emphasize the importance of putting in a framework that allocates resources to communities and areas experiencing high rates of fatalities and serious injuries, which tend to be the areas with high numbers of Black, Latino, or low-income residents, or all of these.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-vision-zero-could-help-reclaim-roads-from-american-car-culture-143043556.html?src=rss

Tesla built and delivered nearly half a million EVs in Q2

Tesla remains the top US EV producer, setting a new internal record in Q2 with 479,700 vehicles built and 466,140 of them delivered, up roughly 87 percent year-over-year.

The company made headlines in Q2 after opening its previously proprietary charging port design to the rest of the industry. Mercedes, Volvo, Rivian, and GM vehicles will use the design for their North American models beginning in the 2024 model year, and Texas went so far as to require that its state-funded EV charging stations accommodate the standard. Tesla's charging network capacity has grown by a third from this time last year, with 48,082 chargers in total spread across 5,265 stations globally.

The first production Cybertruck rolled off the assembly line this quarter as well, though you couldn't see much of the vehicle from its official release photo. The Cybertruck line has entered tooling, according to the company, and is expected to begin steady production sometime next year.

"We are now testing Cybertruck vehicles around the world for final certification and validation," the company wrote in its Q2 investors deck. "This might be the most unique vehicle product in decades; with that comes trialing and testing new technologies."

This past quarter has seen a number of scandals at the company, including its executives being accused of being overpaid by a cool $735 million since 2017, as well as Elon Musk being suspected of misappropriating company funds to build a glass house. Not a fancy arboretum, not a metaphor for Twitter, a literal "glass house."

Wednesday's investor deck specifically noted Tesla's "commitment to being at the forefront of AI development" with the start of production for its Dojo training computers, which will be used to help Autopilot developers iterate on future designs and features. Details were sparse, but we do expect company executives to further discuss the initiative during the Q2 investor call, which begins at 5:30PM ET.

Stay tuned to Engadget for up-to-the-minute breaking news from that call, as well as whatever wacky and problematic-for-Legal statements CEO Elon Musk shares.

This is a developing story. Please check back for updates.

This article originally appeared on Engadget at https://www.engadget.com/tesla-built-and-delivered-a-nearly-half-a-million-evs-in-q2-205948639.html?src=rss

Hitting the Books: How NASA helped JFK build his 'Nation of Immigrants'

The Apollo 11 moon landing was a seminal event in American history, one etched deeply into our nation's collective psyche. The event ushered in an era of unbridled possibilities — the stars were finally coming into reach — and its effects were felt across the culture, from art and fashion to politics. In After Apollo: Cultural Legacies of the Race to the Moon, a multidisciplinary collection of historians, researchers and academics explores the myriad ways that putting a man on the moon impacted the American Experience.


Excerpted from “Scientists Without Borders: Immigrants in NASA and the Apollo Program” by Rosanna Perotti from After Apollo: Cultural Legacies of the Race to the Moon, edited by J Bret Bennington and Rodney F. Hill. Gainesville: University of Florida Press, 2023. Reprinted with permission of the University of Florida Press.


Space Travel and the Immigrant Experience

From NASA’s very beginnings, immigrant engineers, scientists, and technicians lent their talent, labor, and technical skills to the space program. But space travel itself always represented more than a scientific endeavor. Human spaceflight was one of the “great dreams” of the 1960s, as space historian Valerie Neal reminds us, and as a “big idea,” spaceflight relied heavily on American cultural narratives. The Apollo program (1963–1972) conjured the image of pioneering the frontier in the 1960s—exploration and discovery were indispensable to America’s history and continuing redefinition, and Americans welcomed the frontier as a metaphor for space exploration (Neal 15). The shuttle program (1972–2011) echoed the narrative of Americans “going to work.” As the Apollo missions were replaced by the space shuttle, NASA supporters and commentators depicted the shuttle crews with imagery associated with blue-collar labor: “astronaut repairmen made service calls in a vehicle often called a space truck."

Both of these narratives — “pioneering the frontier” and “getting the job done” — are closely associated with a third narrative that was becoming deeply ingrained in American national identity in the 1960s: the myth of the United States as a nation of immigrants and of the immigrant as the backbone of America’s egalitarian democracy. This American immigrant myth was not born in the nineteenth or even in the early twentieth century, when immigration was peaking and Congress struggled to impose limitations and quotas. The myth reached wide acceptance only in the early 1960s. It is no coincidence that John F. Kennedy presented the immigrant myth most succinctly in his pamphlet, A Nation of Immigrants, in 1963, as Kennedy was preparing to ask Congress to overhaul the nation’s immigration laws. At the same time, his administration was pressing furiously to put a man on the Moon by the end of the decade, a central goal of the New Frontier. Interestingly, Kennedy’s space proposals were a far more important policy priority for the administration than immigration reform (the latter was not accomplished until 1965, as we shall see later). But his articulation of the “nation of immigrants” narrative provided powerful imagery in support of the space program he championed from the start of his administration.

Kennedy’s articulation of the complex immigration myth featured not just a welcoming America, but an idealized immigrant, united with others by little other than a common love of freedom. Ours was “a nation of people with the fresh memory of old traditions who dared to explore new frontiers, people eager to build lives for themselves in a spacious society that did not restrict their freedom of choice and action." Citing Tocqueville, Kennedy noted that immigrants’ very poverty made them more inclined toward egalitarian democracy. No arena of American life was untouched by the influence of immigrants, and immigrants themselves were paragons of self-reliance, ingenuity, entrepreneurship, and pioneer spirit. “It was the future and not the past to which he was compelled to address himself,” Kennedy wrote, describing the motivations of the nineteenth-century immigrant.

Except for the Negro slave, he could go anywhere and do anything his talents permitted. A sprawling continent lay before him, and he had only to weld it together by canals, by railroads and by roads . . . This has been the foundation of American inventiveness and ingenuity, of the multiplicity of new enterprises, and of the success in achieving the highest standard of living anywhere in the world.

The space program was the next frontier in the natural progression toward excellence. It evoked not only the immigrant’s capacity for adventure and discovery but also his practicality and capacity to work hard and tame his surroundings. From the time of the English settlers, who “fought a rugged land” in the words of Kennedy, immigrants had to overcome adversity to earn their fortunes and shape their environment. They had worked as artisans, provided cheap labor for American farms, factories, mills, and mines, and climbed the economic ladder to provide succeeding generations with educational opportunities. They had moved forward to get the job done. Launched under the motto “Going to Work in Space,” the space shuttle was a vehicle that could deliver satellites and repair them in orbit, carry commercial payloads, and support a research laboratory. Astronauts would carry out their work all but rolling up their sleeves as builders and repair technicians, wielding robotic arms and power hand tools. Businesses could use the shuttle as a workhorse to launch satellites or develop manufacturing capabilities. All of this economic productivity in space could be expected to resonate with a nation whose increasingly diverse immigrant workforce was transitioning to a new economy. American society was reflected not only symbolically but practically in NASA’s missions. They produced results that appeared almost impossibly ambitious. NASA represented excellence: the best work in the world.

Space travel also mirrored some of the risks and hardships of the immigrant experience. As the American public began questioning the nation’s investment in space travel through the 1980s, advocates harked back to this part of the immigrant narrative. In the aftermath of the 1986 Challenger tragedy, the Report of the Advisory Committee on the Future of the US Space Program (1990) reminded Americans that acceptance and resilience in the face of failure were a part of America’s pioneer and immigrant legacies:

In a very real sense, the space program is analogous to the exploration and settlement of the new world. In this view, risk and sacrifice are seen to be constant features of the American experience. There is a national heritage of risk-taking handed down from early explorers, immigrants, settlers, and adventurers. It is this element of our national character that is the wellspring of the U.S. space program.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-how-nasa-helped-jfk-build-his-nation-of-immigrants-143027063.html?src=rss

Meta's newest dataset will train speech recognition engines on 'clusters' of speakers

It is 2023 and, sorry, Siri somehow still didn’t catch that. Despite the tsunami of advancements generative AI systems have enjoyed in recent months, the synthetic assistants on our mobile devices remain nearly as hard of hearing as they were in 2011. A newly developed dataset from Meta AI, however, promises to improve the performance of such automatic speech recognition (ASR) tools by clustering speech at the “utterance level.”

Meta has long sought to improve its ASRs’ performance, teaching them to train without the aid of transcripts, recognize more than 4,000 spoken languages and even read lips at a higher proficiency than human experts. However, many of the datasets used to train ASR models are organized by demographic — age group, gender, nationality, English accent — a practice that limits the variety of pronunciations the models are trained on, ultimately hindering their ability to understand a broad cross section of users.

To get around this, Meta AI has developed a dataset that instead relies on an utterance clustering method. “Instead of dividing a dataset based on speakers’ demographic information … our proposed algorithm clusters speech at the utterance level,” the Meta AI team explained in Wednesday’s blog post. “A single cluster will contain similar utterances from a diverse group of speakers. We can then train our model using the various clusters and use fairness datasets to measure how the model impacts outcomes across different demographic groups.”
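As a rough illustration of what utterance-level clustering looks like in practice, here is a toy sketch. It is not Meta's implementation: it stands in random vectors for real speech-encoder embeddings and uses scikit-learn's k-means, purely to show how utterances, rather than speaker demographics, become the unit of grouping.

```python
# Toy sketch of utterance-level clustering (illustrative, not Meta's code).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=0)

# Stand-in for real utterance embeddings: 1,000 utterances as 64-dim vectors.
# In practice these would come from a speech encoder, not random noise.
embeddings = rng.normal(size=(1000, 64))

# Group similar utterances together, regardless of who spoke them.
kmeans = KMeans(n_clusters=20, n_init=10, random_state=0)
cluster_ids = kmeans.fit_predict(embeddings)

# A model can then be trained cluster by cluster, with separate fairness
# datasets used to measure outcomes across demographic groups.
for c in range(3):
    print(f"cluster {c}: {(cluster_ids == c).sum()} utterances")
```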

Meta’s resulting dataset includes just over 27,000 command utterances collected from 595 paid US volunteers. The utterances revolve around seven main themes — music, capture, utilities, notification control, messaging, calling and dictation — which other researchers can use to train their own models and digital assistants. Prompts included asking the speakers how they’d use their voice to search for a song, or how they’d make plans with friends and decide where to meet up.

To evaluate this new system, Meta first trained a model on publicly available, English-language Facebook videos. Researchers then evaluated that model using two other datasets: Casual Conversations v1, which Meta released in 2021, and a “de-identified dataset collected from a data supplier for ASR,” which includes 48,000 spoken utterances from 867 individuals.

The initial results proved promising, with model performance improvements “on all demographic groups in our evaluation datasets, though by far the largest gains are with respect to more inclusivity of accents,” per the blog. Overall, ASR performance increased by 10 percent using the clustering method, with large gains coming from the age 66-85 crowd as well, a traditionally underrepresented demographic in the voice command space.

“Our proposed algorithm is part of Meta’s long-term focus on responsible AI and just one part of our holistic approach to address fairness issues,” the researchers wrote. Looking ahead, the team is exploring adapting the system to other languages.

This article originally appeared on Engadget at https://www.engadget.com/meta-new-dataset-train-speech-recognition-engine-clusters-speaker-130012841.html?src=rss

Google's Bard AI chatbot has learned to talk

Google's Bard gained a handful of new features and functions Thursday in the chatbot's latest round of updates, including expanded linguistic knowledge, more nuanced response controls and the ability to respond aloud in addition to in text. In all, the AI can now converse in nearly four dozen languages.

Users can now converse with the AI in Arabic, Chinese, German, Hindi and Spanish, among others, as well as access the platform from more places on the planet, such as Brazil and "across Europe," Jack Krawczyk, Bard's product lead, and Amarnag Subramanya, Bard's VP of engineering, wrote in a blog post Thursday. "As we bring Bard to more regions and languages over time, we’ll continue to use our AI Principles as a guide, incorporate user feedback, and take steps to protect people’s privacy and data."

Bard now literally speaks. Users have the option to either read or listen to the AI's generated responses, which Krawczyk and Subramanya believe will help immensely when users want to hear the correct pronunciation of words in those 40-odd newly added languages. Users have also been afforded more robust control over Bard's responses, with five distinct options for tone: simple, long, short, professional or casual. Those options are only available for English-language requests at the moment, but the company is already working to expand them to the other languages "soon."

The chatbot also has some fancy new multimodal eyes, gaining the capacity to interpret images dropped directly into the chat through the prompt field, which is faster and easier than uploading them as documents. Users can request more information about the contents of an image or generate content, like captions, based on it. This feature is also currently English-only.

Getting the information and code that Bard generates out of the chat window and into the hands of collaborators is no longer quite such a slog. Starting Thursday, users will be able to export Bard-generated Python code to Replit, in addition to Colab. They'll also be able to copy and share portions of individual chats with other users. The process of organizing and revisiting old conversations is being streamlined as well with the addition of pinned conversations, which are just what they sound like, and the ability to rename them.

This article originally appeared on Engadget at https://www.engadget.com/googles-bard-ai-chatbot-has-learned-to-talk-070111881.html?src=rss