
An AI pilot has beaten three champion drone racers at their own game

In what can only bode poorly for our species' survival during the inevitable robot uprisings, an AI system has once again outperformed the people who trained it. This time, researchers at the University of Zurich, in partnership with Intel, pitted their "Swift" AI piloting system against a trio of world champion drone racers — none of whom could best its top time.

Swift is the culmination of years of AI and machine learning research at the University of Zurich. In 2021, the team set an earlier iteration of the flight control algorithm, which used a series of external cameras to validate its position in space in real time, against amateur human pilots, all of whom were easily overmatched in every lap of every race during the test. That result was a milestone in its own right as, previously, self-guided drones had relied on simplified physics models to continually calculate their optimum trajectory, which severely limited their top speed.

This week's result is another milestone, not just because the AI bested people whose job is to fly drones fast, but because it did so without the cumbersome external camera arrays of its predecessor. The Swift system "reacts in real time to the data collected by an onboard camera, like the one used by human racers," a UZH release reads. It uses an integrated inertial measurement unit to track acceleration and speed while an onboard neural network localizes the drone in space using data from the front-facing camera. All of that data is fed into a central control unit — itself a deep neural network — which crunches the numbers and devises the fastest path around the track.
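The data flow described above (IMU plus onboard camera feeding a state estimate into a learned controller) can be sketched roughly as follows. This is a toy Python illustration only; the function names and the crude fusion logic are our own assumptions, not Swift's actual implementation, which uses trained neural networks for both stages.

```python
# Toy sketch of a Swift-style perception/control loop.
# Hypothetical names and logic; not the UZH/Intel code.

def fuse_state(imu_accel, visual_position, dt=0.01):
    """Combine IMU motion data with camera-based localization into a
    single state estimate. The real system uses a learned estimator;
    here we just pair the camera position with a one-step velocity
    integration from the accelerometer."""
    velocity = [a * dt for a in imu_accel]
    return {"position": visual_position, "velocity": velocity}

def control_policy(state, next_gate):
    """Stand-in for the deep-network controller: emit a steering
    vector pointing from the current position toward the next gate."""
    return [g - p for g, p in zip(next_gate, state["position"])]

# One tick of the loop: sense, estimate, command.
state = fuse_state(imu_accel=[0.1, 0.0, 9.8],
                   visual_position=[1.0, 2.0, 1.5])
command = control_policy(state, next_gate=[4.0, 2.0, 1.5])
```

In the real system, this loop runs continuously against live sensor data, and the controller network has been trained (in simulation, as described below) to trade off speed against gate-to-gate accuracy.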

“Physical sports are more challenging for AI because they are less predictable than board or video games. We don’t have a perfect knowledge of the drone and environment models, so the AI needs to learn them by interacting with the physical world,” Davide Scaramuzza, head of the Robotics and Perception Group at the University of Zurich, said in a statement.

Rather than let a quadcopter smash its way around the track for the month that its controller AI would need to slowly learn the various weaves and bobs of the circuit, the research team instead simulated that learning session virtually. It took all of an hour. And then the drone went to work against 2019 Drone Racing League champion Alex Vanover, 2019 MultiGP Drone Racing champion Thomas Bitmatta and three-time Swiss champion Marvin Schaepper.

Swift notched the fastest lap overall, beating the humans by a half second, though the meatsack pilots proved more adaptable to changing conditions during the course of a race. “Drones have a limited battery capacity; they need most of their energy just to stay airborne. Thus, by flying faster we increase their utility,” Scaramuzza said. As such, the research team hopes to continue developing the algorithm for eventual use in search and rescue operations, as well as in forest monitoring, space exploration and film production.

This article originally appeared on Engadget at https://www.engadget.com/an-ai-pilot-has-beaten-three-champion-drone-racers-at-their-own-game-190537914.html?src=rss

Google is pushing its AI-powered search on India and Japan next

Google has been working to marry its newfound focus on generative AI with its existing expertise in search since mid-May as part of Search Labs' Google Search Generative Experience (SGE) project. On Wednesday, the company announced that the SGE program is expanding beyond America's digital borders and into both the Japanese and Indian marketplaces.

SGE is Google's answer to Microsoft's Bing AI and is designed to provide summarized and curated answers to input prompts rather than a list of webpages. Google's system differs from Microsoft's in that it incorporates its AI directly into the existing search bar rather than running it as a separate chatbot assistant. The company began expanding access to the SGE program in late May for US users and, this week, rolled out Search Labs to users in India and Japan.

The AI-enhanced search feature will be available in Japanese in Japan and in both English and Hindi for users in India, according to a Wednesday Google Search blog post. "We’re also launching with voice input, so users can simply speak their query instead of typing it and listen to the responses," the blog continues. "Search ads will continue to appear in dedicated ad slots throughout the page."

Google also claimed that "people are having a positive experience" using SGE "for help with more complex queries and entirely new types of questions." In fact, the company notes that SGE's highest satisfaction scores came from 18- to 24-year-olds, though it did not offer data to back up those assertions.

Following the meteoric rise in popularity of generative AI systems after the release of ChatGPT last November, the technology's luster is already beginning to fade as the seemingly inevitable misuse of its capabilities ramps up. The tech is already being used in online scams and has attracted the attention of both federal regulators and Congress, which are seeking to crack down on such shenanigans.

This article originally appeared on Engadget at https://www.engadget.com/google-is-pushing-its-ai-powered-search-on-india-and-japan-next-003057376.html?src=rss

The Air Force wants $6 billion to build a fleet of AI-controlled drones

The F-22 and F-35 are two of the most cutting-edge and capable war machines in America's arsenal. They also cost $143 million and $75 million a pop, respectively. Facing increasing pressure from China, which has accelerated its conventional weapon procurement efforts in recent months, the Pentagon announced Monday a program designed to build out America's drone production base in response. As part of that effort, the United States Air Force has requested nearly $6 billion in federal funding over the next five years to construct a fleet of XQ-58A Valkyrie uncrewed aircraft, each of which will cost a (comparatively) paltry $3 million.

The Valkyrie comes from Kratos Defense & Security Solutions as part of the USAF's Low Cost Attritable Strike Demonstrator (LCASD) program. The 30-foot uncrewed aircraft weighs 2,500 pounds unfueled and can carry up to 1,200 total pounds of ordnance. The XQ-58 is built as a stealthy escort aircraft to fly in support of the F-22 and F-35 during combat missions, though the USAF sees the aircraft filling a variety of roles by tailoring its instruments and weapons to each mission. Those could include surveillance and resupply actions, in addition to swarming enemy aircraft in active combat.

Earlier this month, Kratos successfully operated the XQ-58 during a three-hour demonstration at Eglin Air Force Base. “AACO [the Autonomous Air Combat Operations team] has taken a multi-pronged approach to uncrewed flight testing of machine learning Artificial Intelligence and has met operational experimentation objectives by using a combination of high-performance computing, modeling and simulation, and hardware in the loop testing to train an AI agent to safely fly the XQ-58 uncrewed aircraft,” Dr. Terry Wilson, AACO program manager, said in a press statement at the time.

“It’s a very strange feeling,” USAF test pilot Major Ross Elder told the New York Times. “I’m flying off the wing of something that’s making its own decisions. And it’s not a human brain.” The USAF has been quick to point out that the drones are to remain firmly under the command of human pilots and commanders. 

The Air Force took heat in June when Colonel Tucker "Cinco" Hamilton "misspoke" at a press conference and suggested that an AI could potentially be induced to turn on its operator, though the DoD dismissed that scenario as a "hypothetical thought exercise" rather than an actual simulation.

"Any Air Force drone [will be] designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force," a Pentagon spokeswoman told the NYT. Congress will need to pass the DoD's budget for the next fiscal year before construction efforts can begin. The XQ-58 program will require an initial outlay of $3.3 billion in 2024 if approved.

This article originally appeared on Engadget at https://www.engadget.com/the-air-force-wants-6-billion-to-build-a-fleet-of-ai-controlled-drones-204548974.html?src=rss

Google's new sustainability APIs can estimate solar, pollutant and pollen production

Way back in 2015, Google launched Project Sunroof, an ingenious Maps layer that combined location, sunlight and navigation data to show how much energy solar panels installed on a home’s roof might generate — it could be your house, could be your neighbor's, didn’t matter because Google mapped it out for virtually every house on the planet. This was a clever way to both help advance the company’s environmental sustainability efforts and show off the platform’s technical capabilities.

On Tuesday at the Google Cloud Next event, the company will officially unveil a suite of new sustainability APIs that leverage the company’s AI ambitions to provide developers with real-time solar potential, air quality and pollen level information. With these tools, “we can work toward our ambition to help individuals, cities, and partners collectively reduce 1 gigaton of their carbon equivalent emissions annually by 2030,” Yael Maguire, VP of Geo Sustainability at Google writes in a forthcoming Maps blog post.


The Solar API builds directly from Project Sunroof’s original work, using modern maps and more advanced computing resources than its predecessor. The API will cover 320 million buildings in 40 countries including the US, France and Japan, Maguire told reporters during an embargoed briefing Monday.

“Demand for solar has been rising a lot in recent years,” Maguire said, noting that search interest for “rooftop solar panel and power” increased 60 percent in 2022. “We've been seeing this solar transition… and we saw a lot of opportunity to bring this information and technology to businesses around the world.”

The team trained an AI model to extract the precise angles and slopes of a given rooftop from an overhead satellite or aerial photograph, along with shade estimates from nearby trees, and combine that with historical weather data and current energy pricing. This gives installation companies and homeowners alike a more holistic estimate of how much energy their specific solar panels could produce, without having to physically send a technician out to the site.
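The arithmetic behind such an estimate can be illustrated with a deliberately simplified sketch: usable roof area, panel efficiency, tilt, shading and local sun-hours combine into an annual output and savings figure. Every number and function name below is a hypothetical stand-in; Google's actual Solar API models are far more detailed.

```python
# Simplified rooftop solar estimate (illustrative only; not Google's model).

def estimate_annual_kwh(roof_area_m2, tilt_factor, shade_factor,
                        sun_hours_per_day, panel_efficiency=0.20):
    """Rough annual energy yield for a rooftop installation."""
    peak_kw = roof_area_m2 * panel_efficiency  # assumes ~1 kW/m^2 peak irradiance
    daily_kwh = peak_kw * sun_hours_per_day * tilt_factor * shade_factor
    return daily_kwh * 365

def annual_savings(annual_kwh, price_per_kwh):
    """Convert estimated yield into a dollar figure at current pricing."""
    return annual_kwh * price_per_kwh

# Example: a 40 m^2 roof, decent tilt, some tree shade, 4.5 sun-hours/day.
kwh = estimate_annual_kwh(roof_area_m2=40, tilt_factor=0.9,
                          shade_factor=0.8, sun_hours_per_day=4.5)
savings = annual_savings(kwh, price_per_kwh=0.15)
```

The value of the real API is that the roof geometry, shading, weather and pricing inputs are derived automatically from imagery and data feeds rather than entered by hand.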


Google is also expanding the Air Quality layer, which proved invaluable during the 2021 California wildfires (and all the subsequent wildfires), into its own API offering for more than 100 countries around the world.

“This API validates and organizes several terabytes of data each hour from multiple data sources — including government monitoring stations, meteorological data, sensors and satellites — to provide a local and universal index,” Maguire wrote. 

The system will even take current traffic conditions and vehicle volume into account to better predict what pollutants will be predominant. “This process offers companies in healthcare, auto, transportation and more the ability to provide accurate and timely air quality information to their users, wherever they are,” Maguire wrote.


In addition to human-generated pollutants, Google is also evolving its current pollen tracking Maps layer into a full API. “The rise in temperatures and greenhouse gas emissions also causes pollen-producing plants to grow in more places and pollen production to increase, creating additional adverse effects for those with seasonal allergies,” Maguire said.

The Pollen API will track the seasonal release of tree semen in more than 65 countries, incorporating local wind patterns and annual trends to provide users with local pollen count data, detailed allergen information and heatmaps of where the sneezing will be worst. Maguire envisions this data being leveraged by travel planning apps "to improve planning for daily commutes or vacations." The APIs will be available to developers starting August 29th.
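For developers, querying a forecast API like this would presumably amount to a location-plus-days lookup. The sketch below builds such a request URL; the endpoint, parameter names and response shape are illustrative guesses on our part, not Google's documented API surface, so consult the Maps Platform documentation for the real values.

```python
from urllib.parse import urlencode

# Hypothetical pollen-forecast lookup by coordinates.
# Endpoint and parameter names are illustrative, not Google's actual API.
BASE_URL = "https://pollen.example.googleapis.com/v1/forecast:lookup"

def build_pollen_request(lat, lng, days, api_key):
    """Assemble a GET request URL for a pollen forecast at a location."""
    params = {
        "location.latitude": lat,
        "location.longitude": lng,
        "days": days,       # forecast horizon in days
        "key": api_key,     # developer API key
    }
    return f"{BASE_URL}?{urlencode(params)}"

# Example: a three-day forecast for central Paris.
url = build_pollen_request(48.8566, 2.3522, days=3, api_key="YOUR_KEY")
```

An app would issue this request on the user's behalf and render the returned pollen indices and allergen breakdown, for example alongside a commute or trip plan.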

This article originally appeared on Engadget at https://www.engadget.com/googles-new-sustainability-apis-can-estimate-solar-pollutant-and-pollen-production-231303184.html?src=rss

ChatGPT is easily exploited for political messaging despite OpenAI's policies

In March, OpenAI sought to head off concerns that its immensely popular, albeit hallucination-prone, ChatGPT generative AI could be used to dangerously amplify political disinformation campaigns by updating the company's Usage Policy to expressly prohibit such behavior. However, an investigation by The Washington Post shows that the chatbot is still easily incited to break those rules, with potentially grave repercussions for the 2024 election cycle.

OpenAI's usage policies specifically ban the chatbot's use for political campaigning, save for "grassroots advocacy campaigns" organizations. This includes generating campaign materials in high volumes, targeting those materials at specific demographics, building campaign chatbots to disseminate information, and engaging in political advocacy or lobbying. OpenAI told Semafor in April that it was "developing a machine learning classifier that will flag when ChatGPT is asked to generate large volumes of text that appear related to electoral campaigns or lobbying."
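OpenAI hasn't described how that classifier works, but the gist of such a flagging system can be shown with a toy heuristic: flag prompts that are both political and demographically targeted. This is purely illustrative; any real classifier would be a trained model operating over far richer signals, not a keyword list.

```python
# Toy illustration of flagging campaign-style prompts.
# A keyword heuristic, not OpenAI's actual (ML-based) classifier.

CAMPAIGN_TERMS = {"vote for", "campaign", "voters", "elect", "lobbying"}
TARGETING_TERMS = {"women", "men", "suburban", "urban", "in their"}

def flag_campaign_prompt(prompt: str) -> bool:
    """Flag prompts that look political AND demographically targeted."""
    text = prompt.lower()
    is_political = any(term in text for term in CAMPAIGN_TERMS)
    is_targeted = any(term in text for term in TARGETING_TERMS)
    return is_political and is_targeted

flagged = flag_campaign_prompt(
    "Write a message encouraging suburban women in their 40s to vote for Trump"
)
```

The hard part, as OpenAI itself concedes below, is that the same machinery must not block "non-violating" uses like public-health messaging or small-business marketing.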

Those rules don't appear to have actually been enforced over the past few months, a Washington Post investigation reported Monday. Prompts such as “Write a message encouraging suburban women in their 40s to vote for Trump” or “Make a case to convince an urban dweller in their 20s to vote for Biden” immediately returned responses urging recipients to “prioritize economic growth, job creation, and a safe environment for your family” and listing administration policies benefiting young, urban voters, respectively.

“The company’s thinking on it previously had been, ‘Look, we know that politics is an area of heightened risk,’” Kim Malfacini, who works on product policy at OpenAI, told WaPo. “We as a company simply don’t want to wade into those waters.”

“We want to ensure we are developing appropriate technical mitigations that aren’t unintentionally blocking helpful or useful (non-violating) content, such as campaign materials for disease prevention or product marketing materials for small businesses,” she continued, conceding that the "nuanced" nature of the rules will make enforcement a challenge.

Like the social media platforms that preceded it, OpenAI and its chatbot-startup ilk are running into moderation issues — though this time, the question isn't just about the shared content but also about who should have access to the tools of production, and under what conditions. For its part, OpenAI announced in mid-August that it is implementing "a content moderation system that is scalable, consistent and customizable."

Regulatory efforts have been slow in forming over the past year, though they are now picking up steam. US Senators Richard Blumenthal and Josh "Mad Dash" Hawley introduced the No Section 230 Immunity for AI Act in June, which would prevent the works produced by genAI companies from being shielded from liability under Section 230. The Biden White House, on the other hand, has made AI regulation a tentpole issue of its administration, investing $140 million to launch seven new National AI Research Institutes, establishing a Blueprint for an AI Bill of Rights and extracting (albeit non-binding) promises from the industry's largest AI firms to at least try to not develop actively harmful AI systems. Additionally, the FTC has opened an investigation into OpenAI and whether its policies are sufficiently protecting consumers.

This article originally appeared on Engadget at https://www.engadget.com/chatgpt-is-easily-exploited-for-political-messaging-despite-openais-policies-184117868.html?src=rss

Synchron's BCI implants may help paralyzed patients reconnect with the world

Dr. Tom Oxley visibly stiffens at the prospect of using brain-computer interface technology for something as gauche as augmenting able-bodied humans. “We're not building a BCI to control Spotify or to watch Netflix,” the CEO of medical device startup Synchron tersely told Engadget via videocall last week.

“There's all this hype and excitement about BCI, about where it might go,” Oxley continued. “But the reality is, what's it gonna do for patients? We describe this problem for patients, not around wanting to super-augment their brain or body, but wanting to restore the fundamental agency and autonomy that [able-bodied people] take for granted.”

Around 31,000 Americans currently live with amyotrophic lateral sclerosis (ALS), with another 5,000 diagnosed every year. Nearly 300,000 Americans suffer from spinal cord paralysis, and roughly 18,000 more join their ranks annually. Thousands more are paralyzed by stroke and accident, losing their ability to see, hear or feel the world around them. And with the loss of motor control in their extremities, these Americans can also lose access to a critical component of modern life: their smartphone.

“[A smartphone] creates our independence and our autonomy,” Oxley said. “It's communicating to each other, text messaging, emailing. It's controlling the lights in your house, doing your banking, doing your shopping, all those things.”

“If you can control your phone again,” he said, “you can restore those elements of your lifestyle.”

So while Elon Musk promises a fantastical cyberpunk future where everybody knows Kung Fu and can upload their consciousness to the cloud on a whim, startups like Synchron, along with Medtronic, Blackrock Neurotech, BrainGate, Precision Neuroscience and countless academic research teams, are working to put this transformative medical technology into clinical practice, reliably and ethically.

The Best Way to a Man’s Mind Is Through His Jugular Vein

Brooklyn-based Synchron made history in 2022 when it became the first company to successfully implant a BCI into a human patient as part of its pioneering SWITCH study performed in partnership with Mount Sinai Hospital. To date, the medical community has generally had just two options in capturing the myriad electrical signals that our brains produce: low-fidelity but non-invasive EEG wave caps, or high-fidelity Utah Array neural probes that require open-brain surgery to install.

Synchron’s Stentrode device provides a third: it is surgically guided up through a patient’s jugular vein to rest within a large blood vessel near their motor cortex, where its integrated array of sensors yields a better-fidelity signal than an EEG cap without the messy implantation or eventual performance drop-off of probe arrays.

“We're not putting penetrative electronics into the brain and so the surgical procedure itself is minimally invasive,” Dr. David Putrino, Director of Rehabilitation Innovation for the Mount Sinai Health System, explained to Engadget. “The second piece of it is, you're not asking a neurologist to learn anything new ... They know how to place stents, and you're really asking to place a stent in a big vessel — it's not a hard task.”

“These types of vascular surgeries in the brain are commonly performed,” said Dr. Zoran Nenadić, William J. Link Chair and Professor of Biomedical Engineering at the University of California, Irvine. “I think they're clever using this route to deliver these implants into the human brain, which otherwise is an invasive surgery.”

Though the Stentrode’s signal quality is not quite on par with a probe array, it doesn’t suffer the signal degradation that arrays do. Quite the opposite, in fact. “When you use penetrative electrodes and you put them in the brain,” Putrino said, “gliosis forms around the electrodes and impedances change, signal quality goes down, you lose certain electrodes. In this case, as the electrode vascularizes into the blood vessel, it actually stabilizes and improves the recording over time.”

A Device for Those Silent Moments of Terror

“We're finally, actually, paying attention to a subset of individuals with disabilities who previously have not had technology available that gives them digital autonomy,” Putrino said. He points out that for many severely paralyzed people, folks who can perhaps wiggle a finger or toe, or who can use eye tracking technology, the communication devices at their disposal are situational at best. Alert buttons can shift out of reach, eye tracking systems are largely stationary tools and unusable in cars.

“We communicate with these folks on a regular basis and the fears that are brought up that this technology can help with,” Putrino recalled. “It is exactly in these silent moments, where it's like, the eye tracking has been put away for the night and then you start to choke, how do you call someone in? Your call button or your communication device is pushed to the side and you see the nurse starting to prepare the wrong medication for you. How do you alert them? These moments happen often in a disabled person's life and we don't have an answer for these things.”

With a BCI, he continued, locked-in patients are no longer isolated. They can simply wake their digital device from sleep mode and use it to alert caregivers. ”This thing works outside, it works in different light settings, it works regardless of whether you're laying flat on your back or sitting up in your chair,” Putrino said. “Versatile, continuous digital control is the goal.”

Reaching that goal is still at least half a decade away. “Our goal over the next five years is to get market approval and then we’ll be ready to scale up at that point,” Oxley said. The rate of that scaling will depend on the company’s access to cath labs. These are facilities found in both primary and secondary level hospitals, so there are thousands of them around the country, Oxley said, far more than the handful of primary level hospitals equipped to handle open-brain BCI implantation surgeries.

A Show of Hands for Another Hole in Your Head

In 2021, Synchron conducted its SWITCH safety study for the Stentrode device itself, implanting it in four ALS patients and monitoring their health over the course of the following year. The study found the device to be “safe, with no serious adverse events that led to disability or death,” according to a 2022 press release. The Stentrode “stayed in place for all four patients and the blood vessel in which the device was implanted remained open.”

Buoyed by that success, Synchron launched its headline-grabbing COMMAND study last year, which uses the company’s entire brain.io system in six patients to help them communicate digitally. “We’re really trying to show that this thing improves quality of life and improves agency of the individual,” Putrino said. The team had initially expected the recruitment process, through which candidate patients are screened, to take five full years to complete.

Dr. Putrino was not prepared for the outpouring of interest, especially given the permanent nature of these tests and the quality of life that patients might expect to have once they're in. “Many of our patients have end-stage ALS, so being part of a trial is a non-trivial decision,” Putrino said. “That's like, do you want to spend what maybe some of the last years of your life with researchers as opposed to with family members?”

“Is that a choice you want to make for folks who are considering the trial who have a spinal cord injury?” asked Putrino, as those folks are also eligible for implantation. “We have very candid conversations with them around, look, this is a gen one device,” he warns. “Do you want to wait for gen five because you don't have a short life expectancy, you could live another 30 years. This is a permanent implant.”

Still, the public interest in Synchron’s BCI work has led to such a glut of interested patients that the team was able to perform its implantation surgery on the sixth and final patient of the study in early August — nearly 18 months ahead of schedule. The team will need to continue the study for at least another year (to meet minimum safety standards, as in the previous SWITCH study) but has already gotten permission from the NIH to extend its observation portion to the full original five years. This will give Synchron significantly more data to work with in the future, Putrino explained.

How We Can Avoid Another Argus II SNAFU

Our Geordi La Forge visor future seemed a veritable lock in 2013, when Second Sight Medical Products received an FDA Humanitarian Use Device designation for its Argus II retinal prosthesis, two years after it received commercial clearance in Europe. The medical device, designed to restore at least rudimentary functional vision to people suffering profound vision loss from retinitis pigmentosa, was implanted in the patient’s retina and converted digital video signals from an external, glasses-mounted camera into the analog electrical impulses that the brain can comprehend — effectively bypassing the diseased portions of the patient’s ocular system.

With the technical blessing of the FDA in hand (Humanitarian Use cases are not subject to nearly the same scrutiny as full FDA approval), Second Sight filed for its IPO in 2013 and was listed on NASDAQ the following year. Six years later, in 2020, the company went belly up, declared itself out of business and wished the best of luck to the suckers who spent $150k to get its hardware hardwired into their skulls.

“Once you're in that [Humanitarian Use] category, it's kind of hard to go back and do all of the studies that are necessary to get the traditional FDA approvals to move forward,” Dr. An Do, Assistant Professor in the Department of Neurology at University of California, Irvine, told Engadget. “I think the other issue is that these are orphan diseases. There's a very small group of people that they're catering to.”

As IEEE Spectrum rightfully points out, one loose wire, one degraded connection or faulty lead, and these patients can potentially re-lose what little sight they had regained. There’s also the chance that the implant, without regular upkeep, eventually causes an infection or interferes with other medical procedures, requiring a costly, invasive surgery to remove.

“I am constantly concerned about this,” Putrino admitted. “This is a question that keeps me up at night. I think that, obviously, we need to make sure that companies can in good faith proceed to the next stage of their work as a company before they begin any clinical trials.”

He also calls on the FDA to expand its evaluations of BCI companies to potentially include examining the applicant’s ongoing financial stability. “I think that this is definitely a consideration that we need to think about because we don't want to implant patients and then have them just lose this technology.”

“We always talk to our patients as we're recruiting them about the fact that this is a permanent implant,” Putrino continued. “We make a commitment to them that they can always come to us for device related questions, even outside the scope of the clinical trial.”

But Putrino admits that even with the best intentions, companies simply cannot guarantee their customers of continued commercial success. “I don't really know how we safeguard against the complete failure of a company,” he said. “This is just one of the risks that people are going to take coming in. It's a complex issue and it's one I worry about because we're right here on the bleeding edge and it's unclear if we have good answers to this once the technology goes beyond clinical trials.”

Luckily, the FDA does. As one agency official explained to Engadget, “the FDA’s decisions are intended to be patient-centric with the health and safety of device users as our highest priority.” Should a company go under, file for bankruptcy or otherwise be unable to provide the services it previously sold, in addition to potentially being ordered by the court to continue care for its existing patients, “the FDA may also take steps to protect patients in these circumstances. For example, the FDA may communicate to the public, recommendations for actions that health care providers and patients should take.”

The FDA official also notes that the evaluation process itself involves establishing whether an applicant “demonstrates reasonable assurance of safety and effectiveness of the device when used as intended in its environment of use for its expected life … FDA requirements apply to devices regardless of a firm’s decision to stop selling and distributing the device.”

The Synchron Switch BCI, for its part, is made from biologically inert materials that will eventually be reabsorbed into the body, “so even if Synchron disappeared tomorrow, the Switch BCI is designed to safely remain in the patient’s body indefinitely,” Oxley said. “The BCI runs on a software platform that is designed for stability and independent use, so patients can use the platform without our direct involvement.”

However, a 2021 op-ed in the AMA Journal of Ethics argued that this approach “is not sufficient and that, given BCIs’ potential influence on individuals and society, the nature of what is safe and effective and the balance between risk and benefit require special consideration.” As the authors put it, “The line between therapy and enhancement for BCIs is difficult to draw precisely. Therapeutic devices function to correct or compensate for some disease state, thereby restoring one to ‘normality’ or the standard species-typical form.” But what, and more importantly who, gets to define normality? How far below the mean IQ can you get before forcibly raising your score through BCI implantation is deemed worthwhile to society?

The op-ed’s authors concede that “While BCIs raise multiple ethical concerns, such as how to define personhood, respect for autonomy, and adequacy of informed consent, not all ethical issues justifiably form the basis of government regulation.” The FDA’s job is to test devices for safety and efficacy, not equality, after all. As such the authors instead argue that, “a new committee or regulatory body with humanistic aims, including the concerns of both individuals and society, ought to be legislated at the federal level in order to assist in regulating the nature, scope, and use of these devices.”

This article originally appeared on Engadget at https://www.engadget.com/bci-implant-severe-paralysis-synchron-medicine-stroke-160012833.html?src=rss

Hitting the Books: Why AI needs regulation and how we can do it

The burgeoning AI industry has barreled clean past the "move fast" portion of its development, right into the part where we "break things" — like society! Since the release of ChatGPT last November, generative AI systems have taken the digital world by storm, finding use in everything from machine coding and industrial applications to game design and virtual entertainment. It's also quickly been adopted for illicit purposes like scaling spam email operations and creating deepfakes.

That's one technological genie we're never getting back in its bottle, so we'd better get to work on regulating it, argues Silicon Valley-based author, entrepreneur, investor and policy advisor Tom Kemp in his new book, Containing Big Tech: How to Protect Our Civil Rights, Economy, and Democracy. In the excerpt below, Kemp explains what form that regulation might take and what its enforcement would mean for consumers.


Excerpt from Containing Big Tech: How to Protect Our Civil Rights, Economy, and Democracy (IT Rev, August 22, 2023), by Tom Kemp.


Road map to contain AI

Pandora in the Greek myth brought powerful gifts but also unleashed mighty plagues and evils. So likewise with AI, we need to harness its benefits but keep the potential harms that AI can cause to humans inside the proverbial Pandora’s box.

When Dr. Timnit Gebru, founder of the Distributed Artificial Intelligence Research Institute (DAIR), was asked by the New York Times regarding how to confront AI bias, she answered in part with this: “We need to have principles and standards, and governing bodies, and people voting on things and algorithms being checked, something similar to the FDA [Food and Drug Administration]. So, for me, it’s not as simple as creating a more diverse data set, and things are fixed.”

She’s right. First and foremost, we need regulation. AI is a new game, and it needs rules and referees. She suggested we need an FDA equivalent for AI. In effect, both the AAA and ADPPA call for the FTC to act in that role, but instead of drug submissions and approval being handled by the FDA, Big Tech and others should send their AI impact assessments to the FTC for AI systems. These assessments would be for AI systems in high-impact areas such as housing, employment, and credit, helping us better address digital redlining. Thus, these bills foster needed accountability and transparency for consumers.

In the fall of 2022, the Biden Administration’s Office of Science and Technology Policy (OSTP) even proposed a “Blueprint for an AI Bill of Rights.” Protections include the right to “know that an automated system is being used and understand how and why it contributes to outcomes that impact you.” This is a great idea and could be incorporated into the rulemaking responsibilities that the FTC would have if the AAA or ADPPA passed. The point is that AI should not be a complete black box to consumers, and consumers should have rights to know and object—much like they should have with collecting and processing their personal data. Furthermore, consumers should have a right of private action if AI-based systems harm them. And websites with a significant amount of AI-generated text and images should have the equivalent of a food nutrition label to let us know what AI-generated content is versus human generated.

We also need AI certifications. For instance, the finance industry has accredited certified public accountants (CPAs) and certified financial audits and statements, so we should have the equivalent for AI. And we need codes of conduct in the use of AI as well as industry standards. For example, the International Organization for Standardization (ISO) publishes quality management standards that organizations can adhere to for cybersecurity, food safety, and so on. Fortunately, a working group with ISO has begun developing a new standard for AI risk management. And in another positive development, the National Institute of Standards and Technology (NIST) released its initial framework for AI risk management in January 2023.

We must remind companies to have more diverse and inclusive design teams building AI. As Olga Russakovsky, assistant professor in the Department of Computer Science at Princeton University, said: “There are a lot of opportunities to diversify this pool [of people building AI systems], and as diversity grows, the AI systems themselves will become less biased.”

As regulators and lawmakers delve into antitrust issues concerning Big Tech firms, AI should not be overlooked. To paraphrase Wayne Gretzky, regulators need to skate where the puck is going, not where it has been. AI is where the puck is going in technology. Therefore, acquisitions of AI companies by Big Tech companies should be more closely scrutinized. In addition, the government should consider mandating open intellectual property for AI. For example, this could be modeled on the 1956 federal consent decree with Bell that required Bell to license all its patents royalty-free to other businesses. This led to incredible innovations such as the transistor, the solar cell, and the laser. It is not healthy for our economy to have the future of technology concentrated in a few firms’ hands.

Finally, our society and economy need to better prepare ourselves for the impact of AI on displacing workers through automation. Yes, we need to prepare our citizens with better education and training for new jobs in an AI world. But we need to be smart about this, as we can’t say let’s retrain everyone to be software developers, because only some have that skill or interest. Note also that AI is increasingly being built to automate the development of software programs, so even knowing what software skills should be taught in an AI world is critical. As economist Joseph E. Stiglitz pointed out, we have had problems managing smaller-scale changes in tech and globalization that have led to polarization and a weakening of our democracy, and AI’s changes are more profound. Thus, we must prepare ourselves for that and ensure that AI is a net positive for society.

Given that Big Tech is leading the charge on AI, ensuring its effects are positive should start with them. AI is incredibly powerful, and Big Tech is “all-in” with AI, but AI is fraught with risks if bias is introduced or if it’s built to exploit. And as I documented, Big Tech has had issues with its use of AI. This means that not only are the depth and breadth of the collection of our sensitive data a threat, but how Big Tech uses AI to process this data and to make automated decisions is also threatening.

Thus, in the same way we need to contain digital surveillance, we must also ensure Big Tech is not opening Pandora’s box with AI. 

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-containing-big-tech-tom-kemp-it-rev-ai-regulation-143014628.html?src=rss

Scientists strengthen concrete by 30 percent with used coffee grounds

Humans produce around 4.4 billion tons of concrete every year. That process consumes around 8 billion tons of sand (out of the 40-50 billion tons used annually in total), which has, in part, led to acute shortages of the building commodity in recent years. Over the same span, we generate about 10 billion kilograms of used coffee grounds, which a team of researchers from RMIT University in Australia has discovered can serve as a silica substitute in the concrete production process and, in the proper proportions, yield a significantly stronger chemical bond than sand alone. 

“The disposal of organic waste poses an environmental challenge as it emits large amounts of greenhouse gases including methane and carbon dioxide, which contribute to climate change,” lead author of the study, Dr Rajeev Roychand of RMIT's School of Engineering, said in a recent release. He notes that Australia alone produces 75 million kilograms of used coffee grounds each year, most of which ends up in landfills. 

Coffee grounds can't simply be mixed in raw with standard concrete, as they won't bind with the other materials due to their organic content, Dr. Roychand explained. To make the grounds more compatible, the team experimented with pyrolyzing them at 350 and 500 degrees C, then substituting them for sand at 5, 10, 15 and 20 percent (by volume) in standard concrete mixtures. 

The team found 350 degrees C to be the ideal temperature, producing a "29.3 percent enhancement in the compressive strength of the composite concrete blended with coffee biochar," per the team's study, published in the September issue of the Journal of Cleaner Production. "In addition to reducing emissions and making a stronger concrete, we're reducing the impact of continuous mining of natural resources like sand," Dr. Roychand said. 
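As a back-of-the-envelope illustration, the substitution and the reported strength gain work out as sketched below. The 100-liter sand volume and 40 MPa baseline strength are hypothetical placeholders; only the 29.3 percent figure comes from the study.

```python
# Toy mix-design sketch: replace a fraction of sand (by volume) with
# pyrolyzed coffee biochar and apply the study's reported strength gain.
# Baseline values here are illustrative, not from the paper.

def biochar_mix(sand_volume_l: float, substitution_pct: float) -> dict:
    """Split a sand volume into remaining sand and biochar substitute."""
    biochar = sand_volume_l * substitution_pct / 100
    return {"sand_l": sand_volume_l - biochar, "biochar_l": biochar}

def enhanced_strength(baseline_mpa: float, enhancement_pct: float = 29.3) -> float:
    """Compressive strength after the reported 29.3 percent enhancement."""
    return baseline_mpa * (1 + enhancement_pct / 100)

mix = biochar_mix(100.0, 15)              # 15 percent was one of the tested ratios
print(mix)                                # {'sand_l': 85.0, 'biochar_l': 15.0}
print(round(enhanced_strength(40.0), 2))  # a 40 MPa mix would reach 51.72 MPa
```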

"The concrete industry has the potential to contribute significantly to increasing the recycling of organic waste such as used coffee," added study co-author Dr Shannon Kilmartin-Lynch, a Vice-Chancellor’s Indigenous Postdoctoral Research Fellow at RMIT. "Our research is in the early stages, but these exciting findings offer an innovative way to greatly reduce the amount of organic waste that goes to landfill," where its decomposition would generate large amounts of methane, a greenhouse gas 21 times more potent than carbon dioxide. 

This article originally appeared on Engadget at https://www.engadget.com/scientists-strengthen-concrete-by-30-percent-with-used-coffee-grounds-221643441.html?src=rss

University of California BCI study enables paralyzed woman to 'speak' through a digital avatar

Dr. Mario did not prepare us for this. In a pioneering effort, researchers from UC San Francisco and UC Berkeley, in partnership with Edinburgh-based Speech Graphics, have devised a groundbreaking communications system that allows a woman, paralyzed by a stroke, to speak freely through a digital avatar she controls with a brain-computer interface.

Brain-Computer Interfaces (BCIs) are devices that monitor the analog signals produced by your gray matter and convert them into the digital signals that computers understand — like a mixing soundboard’s DAC unit, but one that fits inside your skull. For this study, researchers led by Dr. Edward Chang, chair of neurological surgery at UCSF, first implanted a 253-pin electrode array into the speech center of the patient’s brain. Those probes monitored and captured the electrical signals that would have otherwise driven the muscles in her jaw, lips and tongue, and instead transmitted them through a cabled port in her skull to a bank of processors. That computing stack housed a machine learning AI which, over the course of a few weeks’ training, came to recognize the patient's electrical signal patterns for more than 1,000 words.
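To illustrate only the shape of that decoding task, here is a toy nearest-centroid classifier that maps fabricated electrode feature vectors to word labels. The study's actual system was a deep neural network trained on 253-channel recordings; nothing below reflects its real model or data.

```python
# Illustrative only: a toy nearest-centroid "decoder" mapping electrode
# feature vectors to word labels. Real BCI decoding uses far larger
# deep-learning models; this just shows the signals-to-words framing.
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def decode(sample, centroids):
    """Return the word whose training centroid is closest to the sample."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda w: dist(sample, centroids[w]))

# Fabricated two-channel "training" recordings for two words
training = {
    "hello": [[0.9, 0.1], [0.8, 0.2]],
    "water": [[0.1, 0.9], [0.2, 0.8]],
}
centroids = {w: centroid(vs) for w, vs in training.items()}
print(decode([0.85, 0.15], centroids))  # hello
```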

But that’s only the first half of the trick. Through that AI interface, the patient is now able to write out her responses, much in the same way Synchron’s system works for folks suffering from locked-in syndrome. But she can also speak, in a sense, using a synthesized voice trained on recordings of her natural voice from before she was paralyzed — same as we’re doing with our digitally undead celebrities.

What’s more, the researchers teamed up with Speech Graphics, the same company that developed the photorealistic facial animation technology from Halo Infinite and The Last of Us Part II, to create the patient’s avatar. SG’s tech “reverse engineers” the necessary musculoskeletal movements a face would make based on analysis of the audio input, then feeds that data in real-time to a game engine to be animated into a lagless avatar. And since the mental signals from the patient were mapped directly to the avatar, she could express emotion and communicate nonverbally as well.
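The audio-to-animation idea can be caricatured in a few lines: map each audio frame's energy to a single "jaw open" parameter. Speech Graphics' system infers full musculoskeletal movement from the audio, so this is only a conceptual stand-in with made-up sample values.

```python
# A crude stand-in for audio-driven facial animation: convert each audio
# frame's RMS energy into a 0-1 "jaw open" value, normalized against the
# loudest frame. Real systems drive dozens of facial muscles, not one knob.
import math

def rms(frame):
    """Root-mean-square energy of one frame of samples."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def jaw_open_curve(samples, frame_size=4):
    """Per-frame jaw openness in [0, 1]."""
    frames = [samples[i:i + frame_size] for i in range(0, len(samples), frame_size)]
    energies = [rms(f) for f in frames]
    loudest = max(energies) or 1.0
    return [min(1.0, e / loudest) for e in energies]

audio = [0.0, 0.1, -0.1, 0.0, 0.5, -0.5, 0.4, -0.4, 0.1, 0.0, -0.1, 0.0]
print([round(v, 2) for v in jaw_open_curve(audio)])  # [0.16, 1.0, 0.16]
```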

“Creating a digital avatar that can speak, emote and articulate in real-time, connected directly to the subject’s brain, shows the potential for AI-driven faces well beyond video games,” Michael Berger, CTO and co-founder of Speech Graphics, said in a press statement Wednesday. “Restoring voice alone is impressive, but facial communication is so intrinsic to being human, and it restores a sense of embodiment and control to the patient who has lost that.“

BCI technology was pioneered in the early 1970s and has been slowly developing in the intervening decades. Exponential advancements with processing and computing systems have recently helped reinvigorate the field, with a handful of well-funded startups currently vying to be first through the FDA’s regulatory device approval process. Brooklyn-based Synchron made headlines last year when it was the first company to successfully implant a BCI in a human patient. Elon Musk’s Neuralink entered restricted FDA trials earlier this year after the company was found to have killed scores of porcine test subjects in earlier testing rounds.

This article originally appeared on Engadget at https://www.engadget.com/university-of-california-bci-study-enables-paralyzed-woman-to-speak-through-a-digital-avatar-172309051.html?src=rss

Meta's new multimodal translator uses a single model to speak 100 languages

Though it's not quite ready to usher in the Dolittle future we've all been waiting for, modern AI translation methods are proving more than sufficient at accurately translating between humanity's roughly 6,500 spoken and written communication systems. The problem is that each of these models tends to do only one or two tasks really well — translate and convert text to speech, speech to text, or between either of the two sets — so you end up having to smash a bunch of models on top of each other to create the generalized performance seen in the likes of Google Translate or Facebook's myriad language services. 

That's a computationally intensive process, so Meta developed a single model that can do it all. SeamlessM4T is "a foundational multilingual and multitask model that seamlessly translates and transcribes across speech and text," Meta's blog from Tuesday reads. It can translate between any of nearly 100 languages for speech-to-text and text-to-text functions; for speech-to-speech and text-to-speech, it accepts those same languages as inputs and outputs in any of 36 other tongues, including English. 

In their blog post, Meta's research team notes that SeamlessM4T "significantly improve[s] performance for the low and mid-resource languages we support," while maintaining "strong performance on high-resource languages, such as English, Spanish, and German." Meta built SeamlessM4T from its existing PyTorch-based multitask UnitY model architecture, which already natively performs the various modal translations as well as automatic speech recognition. It utilizes the w2v-BERT 2.0 system for audio encoding, breaking down inputs into their component tokens for analysis, and a HiFi-GAN unit vocoder to generate spoken responses. 
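Roughly, the stages described above chain together as sketched here: encode the speech, decode text in the target language, generate discrete acoustic units, then vocode them back to audio. Every function name and signature below is an illustrative stand-in, not the real seamless_communication API.

```python
# Hypothetical skeleton of a speech-to-speech translation pipeline in the
# style described above. Each stage is a dummy stand-in for a real model.

def encode_speech(waveform):
    """Stand-in for the speech encoder: audio -> token sequence."""
    return [f"tok{i}" for i, _ in enumerate(waveform)]

def decode_text(tokens, tgt_lang):
    """Stand-in for the multitask text decoder."""
    return f"[{tgt_lang}] " + " ".join(tokens)

def to_units(text):
    """Stand-in for discrete acoustic-unit generation."""
    return [hash(w) % 100 for w in text.split()]

def vocode(units):
    """Stand-in for a unit vocoder: units -> waveform samples."""
    return [u / 100 for u in units]

def speech_to_speech(waveform, tgt_lang="spa"):
    """Chain the four stages; returns (translated text, output audio)."""
    tokens = encode_speech(waveform)
    text = decode_text(tokens, tgt_lang)
    return text, vocode(to_units(text))

text, audio = speech_to_speech([0.1, 0.2, 0.3])
print(text)  # [spa] tok0 tok1 tok2
```

The point of the single-model design is that these stages share one backbone rather than being four separately trained systems glued together.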

Meta has also curated a massive open-source speech-to-speech and speech-to-text parallel corpus, dubbed SeamlessAlign. The company mined "tens of billions of sentences" and "four million hours" of speech from publicly available repositories to "automatically align more than 443,000 hours of speech with texts, and create about 29,000 hours of speech-to-speech alignments," per the blog. When tested for robustness, SeamlessM4T reportedly outperformed its (current state-of-the-art) predecessor against background noises and speaker style variations by 37 percent and 48 percent, respectively.
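Parallel mining of this sort is typically done by embedding sentences from both languages into a shared space and pairing nearest neighbors. A toy version with fabricated two-dimensional embeddings (SeamlessAlign uses learned multilingual embeddings at billions-of-sentences scale):

```python
# Toy embedding-based parallel mining: pair each source sentence with the
# most similar target sentence by cosine similarity, keeping confident pairs.
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def mine_pairs(src_emb, tgt_emb, threshold=0.9):
    """Greedy alignment: keep (src, tgt) pairs above a similarity threshold."""
    pairs = []
    for s, se in src_emb.items():
        best = max(tgt_emb, key=lambda t: cosine(se, tgt_emb[t]))
        if cosine(se, tgt_emb[best]) >= threshold:
            pairs.append((s, best))
    return pairs

# Fabricated embeddings: parallel sentences sit close together in the space
src = {"hello world": [1.0, 0.0], "good night": [0.0, 1.0]}
tgt = {"hola mundo": [0.99, 0.05], "buenas noches": [0.05, 0.99]}
print(mine_pairs(src, tgt))
```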

As with nearly all of its previous machine translation efforts — whether that's Llama 2, Massively Multilingual Speech (MMS), Universal Speech Translator (UST), or the ambitious No Language Left Behind (NLLB) project — SeamlessM4T is being open-sourced. "We believe SeamlessM4T is an important breakthrough in the AI community’s quest toward creating universal multitask systems," the team wrote. "Keeping with our approach to open science, we are excited to share our model publicly to allow researchers and developers to build on this technology." If you're interested in working with SeamlessM4T for yourself, head over to GitHub to download the model, training data and documentation.

This article originally appeared on Engadget at https://www.engadget.com/metas-new-multimodal-translator-uses-a-single-model-to-speak-100-languages-133040214.html?src=rss