The ARC nerve stimulation therapy system from startup Onward Medical passed another developmental milestone on Wednesday, as the company announced the first successful installation of its brainwave-driven implantable electrode array to restore function and feeling to a patient’s hands and arms. The news comes just five months after the researchers implanted a similar system in a different patient to help them regain a more natural walking gait.
The ARC system used differs depending on the issue it's being applied to. The ARC-EX is an external, non-invasive stimulator array that sits on the patient’s lower back and helps regulate their bladder control and blood pressure, as well as improving limb function and control. Onward’s lower limb study from May employed the EX along with a BCI controller from CEA-Clinatec to create a “digital bridge” spanning the gap in the patient’s spinal column.
The study published Wednesday instead utilized the ARC-IM, an implantable version of the company’s stimulator array which is installed near the spinal cord and is controlled through wearable components and a smartwatch. Onward had previously used the IM system to enable paralyzed patients to stand and walk short distances without assistance, for which it was awarded an FDA Breakthrough Device Designation in 2020.
In mid-August, medical professionals led by neurosurgeon Dr. Jocelyne Bloch implanted the ARC-IM and the Clinatec BCI into a 46-year-old patient suffering from a C4 spinal injury. The BCI’s hair-thin leads pick up electrical signals in the patient’s brain, convert those analog signals into digital ones that machines can understand, and then transmit them to a nearby computing device, where a machine learning AI interprets the patient’s signals and issues commands to the implanted stimulator array. The patient thinks about what they want to do, and the two devices work together to translate that intent into computer-controlled movement.
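To make that flow concrete, here is a minimal, purely illustrative Python sketch of the signal path, from digitized electrode samples to a decoded intent to a stimulation command. The channel count, decoder thresholds and command format are invented for the example and are not drawn from Onward's or Clinatec's actual systems.

```python
# Hypothetical sketch of the "digital bridge" described above: neural signals
# are digitized, decoded into an intended movement, and mapped to a
# stimulation command. All names and thresholds are illustrative assumptions.
import random

def read_brain_signal(n_channels: int = 8) -> list[float]:
    """Stand-in for the BCI leads: returns one digitized sample per channel."""
    return [random.gauss(0.0, 1.0) for _ in range(n_channels)]

def decode_intent(sample: list[float]) -> str:
    """Placeholder for the machine-learning decoder. A real system would use
    a trained model; here we simply bucket the mean channel activity."""
    activity = sum(sample) / len(sample)
    if activity > 0.5:
        return "raise_arm"
    if activity > 0.1:
        return "open_hand"
    if activity < -0.5:
        return "close_hand"
    return "rest"

def stimulation_command(intent: str) -> dict:
    """Map a decoded intent to a (hypothetical) stimulator instruction."""
    amplitude = {"rest": 0.0, "open_hand": 1.2, "close_hand": 1.5, "raise_arm": 2.0}
    return {"electrode_group": intent, "amplitude_mA": amplitude[intent]}

if __name__ == "__main__":
    for _ in range(5):
        sample = read_brain_signal()
        intent = decode_intent(sample)
        print(intent, stimulation_command(intent))
```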
How well that translation occurs remains to be seen while the patient learns and adapts to the new system. “The implant procedures involving the Onward ARC-IM and Clinatec BCI went smoothly,” Dr. Bloch said in a press release. “We are now working with the patient to use this cutting-edge innovation to recover movement of his arms, hands, and fingers. We look forward to sharing more information in due course.”
“If the therapy continues to show promise, it is possible it could reach patients by the end of the decade,” Onward CEO Dave Marver said in a statement to Engadget. “It is important to note that we do not expect people with spinal cord injury to wait that long for Onward to commercialize an impactful therapy - we hope to commercialize our external spinal cord stimulation solution, ARC-EX Therapy, to restore hand and arm function in the second half of 2024.”
Onward Medical is among a quickly expanding field of BCI-based startups working to apply the fledgling technology to a variety of medical maladies. Those applications include restoring limb and self-regulatory function lost to stroke, traumatic brain or spinal cord injury, aiding physical rehabilitation from those same injuries, and providing a critical means of communication for people living with Locked-In Syndrome.
This article originally appeared on Engadget at https://www.engadget.com/the-arc-nerve-stimulation-system-could-help-quadriplegic-patients-move-their-arms-again-053027395.html?src=rss
Reports of David Limp's retirement have been greatly exaggerated. The former SVP of Devices and Services announced last week at Amazon's 2023 Devices Event that he would be stepping down from the role he had held for more than a decade. By Monday, however, Limp had reportedly been tapped by Jeff Bezos to take over for current Blue Origin CEO, Bob Smith, who is retiring at the start of December.
MSNBC reports that Smith will stick around until January 2, 2024 to assist with the transition. Bezos sent the following announcement to Blue Origin's workforce on Monday:
I’m excited to share that Dave Limp will join Blue starting December 4th as CEO, replacing Bob, who has elected to step aside on January 2. The overlap is purposeful to ensure a smooth transition.
Before I provide some background on Dave, I’d like to take the time to recognize Bob and the significant growth and transformation we’ve experienced during his tenure. Under Bob’s leadership, Blue has grown to several billion dollars in sales orders, with a substantial backlog for our vehicles and engines. Our team has increased from 850 people when Bob joined to more than 10,000 today. We’ve expanded from one office in Kent to building a launch pad at LC-36 and five million square feet of facilities across seven states.
Our mission has grown too – we’ve flown 31 people above the Kármán Line, almost five percent of all the people who have been to space. Flight-qualified BE-4 engines are ready to boost Vulcan into orbit. New Glenn is nearing launch next year, and, with our recent NASA contract, we will land Americans back on the Moon, this time to stay. We have also engaged and inspired millions of children and educators through our Club for the Future efforts. We’ve made tremendous progress in building a road to space for the benefit of Earth, thanks to each of you and Bob’s leadership.
I’ve worked closely with Dave for many years. He is the right leader at the right time for Blue. Dave joins us after almost 14 years at Amazon, where he most recently served as senior vice president of Amazon Devices and Services, leading Kuiper, Kindle, Alexa, Zoox, and many other businesses. Before Amazon, Dave had roles at other high-tech companies, including Palm and Apple. Dave is a proven innovator with a customer-first mindset and extensive experience leading and scaling large, complex organizations. Dave has an outstanding sense of urgency, brings energy to everything, and helps teams move very fast.
Please join me in welcoming Dave and thanking Bob. Through this transition, I know we’ll remain focused on our customer commitments, production schedules, and executing with speed and operational excellence. I look forward to the many exciting and historic milestones ahead of us!
Jeff
MSNBC also obtained Smith's farewell note to employees:
Team Blue,
It’s been about six years since I joined Blue Origin. During that time, our team, facilities, and sales orders have grown dramatically, and we’ve made significant contributions to the history of spaceflight.
With pride and satisfaction in all that we’ve accomplished, I’m announcing that effective December 4, I will be stepping aside as Chief Executive Officer of Blue Origin. I will remain with Blue until January 2 to ensure a smooth transition with the new CEO.
It has been my privilege to be part of this great team, and I am confident that Blue Origin’s greatest achievements are still ahead of us. We’ve rapidly scaled this company from its prototyping and research roots to a large, prominent space business. We have the right strategy, a supremely talented team, a robust customer base, and some of the most technically ambitious and exciting projects in the entire industry. We also have a team that cares deeply about its mission, legacy, and how we contribute to the next generation and bring everyone into a brighter future.
Jeff and I have been discussing my plan for months, and Jeff will announce Blue’s new CEO in a separate note shortly. I’m very excited about the operational excellence and culture of innovation this new leader will bring to Blue, building on the foundation we’ve created over the past few years.
I’m committed to ensuring this transition is flawless, and everyone should know that I’ll always be on Team Blue.
This article originally appeared on Engadget at https://www.engadget.com/dave-limp-will-lead-jeff-bezos-blue-origin-after-retiring-from-amazon-212411125.html?src=rss
NASA's pioneering OSIRIS-REx mission has successfully returned from its journey to the asteroid Bennu. The robotic spacecraft briefly set down on the celestial body in a first-of-its-kind attempt (by an American space agency) to collect pristine rock samples, before lifting off and heading back to Earth on a three-year roundtrip journey. The samples landed safely on Sunday in the desert at the DoD’s Utah Test and Training Range and Dugway Proving Grounds.
Even more impressive, the spacecraft performed its Touch-and-Go Sample Acquisition Mechanism (TAGSAM) maneuver autonomously through the craft’s onboard Natural Feature Tracking (NFT) visual navigation system — another first! Engadget recently sat down with Dr. Ryan Olds, Guidance, Navigation and Control Manager at Lockheed Martin, who helped develop the NFT system, to discuss how the groundbreaking AI was built and where in the galaxy it might be heading next.
The OSIRIS-REx (Origins, Spectral Interpretation, Resource Identification and Security – Regolith Explorer) mission is America’s first attempt at retrieving physical samples from a passing asteroid (Japan has already done it twice). Bennu, which was roughly 70 million miles from Earth when OSIRIS-REx first intercepted it, presented far more of a landing challenge than previous, larger and also-not-particularly-easy-to-reach targets like the moon or Mars.
“There's so many different factors,” in matching the myriad velocities and trajectories involved in these landing maneuvers, Olds told Engadget. “So many little details. A lot of what we're doing is based on models and, if you have little error sources in your model that aren't being taken into account, then those can lead to big mistakes. So it's really, really important to make sure you’re modeling everything accurately.”
In fact, after OSIRIS-REx rendezvoused with Bennu in 2020, the spacecraft spent more than 500 days circling the asteroid and capturing detailed images of its surface from which the ground control team generated digital terrain models. “It takes a lot of research to make sure you've got all the effects understood,” Olds said. “We did a lot of that with our work on Natural Feature Tracking to make sure we understood the gravity field around the asteroid. Even little things like the spacecraft’s heaters turning on and off — even that produces a very, very tiny propulsive effect because you're radiating heat, and on really small bodies like Bennu, those little things matter.”
Since the asteroid rotates on its axis, its surface transitioning from sunlit to dark and back again every four hours, the OSIRIS team had to “design all of our TAG trajectories so that we were flying over the lit portion of the asteroid,” Olds said. “We didn't want the spacecraft to ever miss the maneuver and accidentally drift back into the eclipse behind the asteroid.” The NFT system, much like a Tesla, relies primarily on an array of visual spectrum cameras to know where it is in space, with a LiDAR system operating as backup.
LiDAR was initially going to be the primary method of navigating, given the team’s belief during the planning phase that the surface of Bennu resembled a sandy, beach-like environment. “We weren't expecting to have any hazards like big boulders,” Olds said. ”So the navigation system was really only designed to make sure we would land within about a 25-meter area, and LiDAR was the system of choice for that. But quickly once we got to Bennu, we were really surprised by what it looked like, just boulders everywhere, hazards everywhere.”
The team had difficulty spotting any potential landing site with a radius larger than eight meters, which meant that the LiDAR system would not be precise enough for the task. They racked their brains and decided to switch over to using the NFT system, which offered the ability to estimate orbital state in three dimensions. This is helpful in knowing if there’s a boulder in the lander’s descent path. The spacecraft ultimately touched down within just 72cm of its target.
“We did have some ground-based models from radar imagery,” Olds said. “But that really only gave us a very kind of bulk shape — it didn't give us the detail.” OSIRIS’s 17 months of flyovers provided that missing granularity in the form of thousands of high-resolution images. Those images were subsequently transmitted back to Earth where members of the OSIRIS-REx Altimetry Working Group (AltWG) processed, analyzed and reassembled them into a catalog of more than 300 terrain reference maps and trained a 3D shape model of the terrain. The NFT system relied on these assets during its TAG maneuver to adjust its heading and trajectory.
That full maneuver was a four-part process starting at the “safe-home terminator orbit” of Bennu. The spacecraft moved onto the daylight side of the asteroid, to a position about 125m above the surface dubbed Checkpoint. The third maneuver shifted OSIRIS-REx to Matchpoint, 55m above the surface, so that by the time it finished descending and came into contact with the asteroid, it would be traveling at just 10 cm/s. At that point the ship switched from visual cameras (which were less useful due to kicked-up asteroid dust) to using its onboard accelerometer and the delta-v update (DVU) algorithm to accurately estimate its relative position. In its fourth and final maneuver, the craft — and its approximately eight-oz (250g) cargo — gently backed away from the 4.5 billion-year-old space rock.
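For readers who want the sequence laid out explicitly, here is a loose Python sketch of those descent phases as a lookup by altitude. The 125m, 55m and 10 cm/s figures come from the description above; the orbit-departure altitude and the phase-selection logic are placeholder assumptions, not flight software.

```python
# A loose sketch of the four-phase TAG descent sequence described above.
# Checkpoint/Matchpoint altitudes and the contact speed come from the article;
# the orbit-departure altitude and selection logic are illustrative only.
from dataclasses import dataclass

@dataclass
class Phase:
    name: str
    start_altitude_m: float
    nav_source: str

PHASES = [
    Phase("orbit_departure", 1000.0, "NFT cameras"),        # leaving the safe-home terminator orbit (altitude is a placeholder)
    Phase("checkpoint", 125.0, "NFT cameras"),               # above the sunlit side of Bennu
    Phase("matchpoint", 55.0, "NFT cameras"),                # matching the surface before final descent
    Phase("contact_and_backaway", 0.0, "accelerometer + delta-v update"),  # dust degrades the cameras near the surface
]

CONTACT_SPEED_MPS = 0.10  # ~10 cm/s at touchdown, per the article

def current_phase(altitude_m: float) -> Phase:
    """Pick the active phase for a given altitude: each phase runs from its
    start altitude down toward the next phase's start (illustrative logic)."""
    active = PHASES[0]
    for phase in PHASES:
        if altitude_m <= phase.start_altitude_m:
            active = phase
    return active

if __name__ == "__main__":
    for altitude in (800.0, 125.0, 55.0, 0.0):
        phase = current_phase(altitude)
        print(f"{altitude:7.1f} m -> {phase.name:22s} (nav: {phase.nav_source})")
    print(f"target contact speed: {CONTACT_SPEED_MPS} m/s")
```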
Sunday’s touchdown was not the end of the NFT’s spacefaring career. An updated and upgraded version of the navigation system will potentially be aboard the next OSIRIS mission, OSIRIS-APEX. “Next year, we're going to start hitting the whiteboard about what we want this updated system to do. We learned a lot of lessons from the primary mission.”
Olds notes that the asteroid’s small stature made navigation a challenge, “because of all those little tiny forces I was telling you about. That caused a lot of irritation on the ground … so we're definitely wanting to improve the system to be even more autonomous so that future ground crews don't have to be so involved.“ The OSIRIS spacecraft is already en route to its APEX target, the 1,000-foot wide Apophis asteroid, which is scheduled to pass within just 20,000 miles of Earth in 2029. NASA plans to put OSIRIS into orbit around the asteroid to see if doing so affects the body’s orbit, spin rate, and surface features.
This article originally appeared on Engadget at https://www.engadget.com/osiris-rex-used-a-tesla-esque-navigation-system-to-capture-45-billion-year-old-regolith-192132417.html?src=rss
American entrepreneurs have long fixated on extracting the maximum economic value out of, well really, any resource they can get their hands on — from Henry Ford's assembly line to Tony Hsieh's Zappos Happiness Experience Form. The same is true in the public sector where some overambitious streamlining of Texas' power grid contributed to the state's massive 2021 winter power crisis that killed more than 700 people. In her new book, the riveting Optimal Illusions: The False Promise of Optimization, UC Berkeley applied mathematician and author, Coco Krumme, explores our historical fascination with optimization and how that pursuit has often led to unexpected and unwanted consequences in the systems we're streamlining.
In the excerpt below, Krumme explores the recent resurgence of interest in Universal Basic (or Guaranteed) Income, and how tech evangelists like Sam Altman and Andrew Yang and social workers like Aisha Nyandoro, founder of the Magnolia Mother’s Trust, take contrasting approaches to the difficult questions of who should receive the financial support, and how much.
California, they say, is where the highway ends and dreams come home to roost. When they say these things, their eyes ignite: startup riches, infinity pools, the Hollywood hills. The last thing on their minds, of course, is the town of Stockton.
Drive east from San Francisco and, if traffic cooperates, you’ll be there in an hour and a half or two, over the long span of slate‑colored bay, past the hulking loaders at Oakland’s port, skirting rich suburbs and sweltering orchards and the government labs in Livermore, the military depot in Tracy, all the way to where brackish bay waters meet the San Joaquin River, where the east‑west highways connect with Interstate 5, in a tangled web of introductions that ultimately pitches you either north toward Seattle or south to LA.
Or you might decide to stay in Stockton, spend the night. There’s a slew of motels along the interstate: La Quinta, Days Inn, Motel 6. Breakfast at Denny’s or IHOP. Stockton once had its place in the limelight as a booming gold‑rush supply point. In 2012, the city filed for bankruptcy, the largest US city until then to do so (Detroit soon bested it in 2013). First light reveals a town that’s neither particularly rich nor desperately poor, hitched taut between cosmopolitan San Francisco on one side and the agricultural central valley on the other, in the middle, indistinct, suburban, and a little sad.
This isn’t how the story was supposed to go. Optimization was supposed to be the recipe for a more perfect society. When John Stuart Mill aimed for the greater good, when Allen Gilmer struck out to map new pockets of oil, when Stan Ulam harnessed a supercomputer to tally possibilities: it was in service of doing more, and better, with less. Greater efficiency was meant to be an equilibrating force. We weren’t supposed to have big winners and even bigger losers. We weren’t supposed to have a whole sprawl of suburbs stuck in the declining middle.
We saw how overwrought optimizations can suddenly fail, and the breakdown of optimization as the default way of seeing the world can come about equally fast. What we face now is a disconnect between the continued promises of efficiency, the idea that we can optimize into perpetuity, and the reality all around: the imperfect world, the overbooked schedules, the delayed flights, the institutions in decline. And we confront the question: How can we square what optimization promised with what it’s delivered?
Sam Altman has the answer. In his mid-thirties, with the wiry, frenetic look of a college student, he’s a young man with many answers. Sam’s biography reads like a leaderboard of Silicon Valley tropes and accolades: an entrepreneur, upper‑middle‑class upbringing, prep school, Stanford Computer Science student, Stanford Computer Science dropout, where dropping out is one of the Valley’s top status symbols. In 2015, Sam was named a Forbes magazine top investor under age thirty. (That anyone bothers to make a list of investors in their teens and twenties says as much about Silicon Valley as about the nominees. Tech thrives on stories of overnight riches and the mythos of the boy genius.)
Sam is the CEO and cofounder, along with electric‑car‑and‑rocket‑ship‑magnate Elon Musk, of OpenAI, a company whose mission is “to ensure that artificial general intelligence benefits all of humanity.” He is the former president of the Valley’s top startup incubator, Y Combinator, was interim CEO of Reddit, and is currently chairman of the board of two nuclear‑energy companies, Helion and Oklo. His latest venture, Worldcoin, aims to scan people’s eyeballs in exchange for cryptocurrency. As of 2022, the company had raised $125 million of funding from Silicon Valley investors.
But Sam doesn’t rest on, or even mention, his laurels. In conversation, he is smart, curious, and kind, and you can easily tell, through his veneer of demure agreeableness, that he’s driven as hell. By way of introduction to what he’s passionate about, Sam describes how he used a spreadsheet to determine the seven or so domains in which he could make the greatest impact, based on weighing factors such as his own skills and resources against the world’s needs. Sam readily admits he can’t read emotions well, treats most conversations as logic puzzles, and not only wants to save the world but believes the world’s salvation is well within reach.
A 2016 profile in The New Yorker sums up Sam like this: “His great weakness is his utter lack of interest in ineffective people.”
Sam has, however, taken an interest in Stockton, California.
Stockton is the site of one of the most publicized experiments in Universal Basic Income (UBI), a policy proposal that grants recipients a fixed stipend, with no qualifications and no strings attached. The promise of UBI is to give cash to those who need it most and to minimize the red tape and special interests that can muck up more complex redistribution schemes. On Sam’s spreadsheet of areas where he’d have impact, UBI made the cut, and he dedicated funding for a group of analysts to study its effects in six cities around the country. While he’s not directly involved in Stockton, he’s watching closely. The Stockton Economic Empowerment Demonstration was initially championed by another tech wunderkind, Facebook cofounder Chris Hughes. The project gave 125 families $500 per month for twenty‑four months. A slew of metrics was collected in order to establish a causal relationship between the money and better outcomes.
UBI is nothing new. The concept of a guaranteed stipend has been suggested by leaders from Napoleon to Martin Luther King Jr. The contemporary American conception of UBI, however, has been around just a handful of years, marrying a utilitarian notion of societal perfectibility with a modern‑day faith in technology and experimental economics.
Indeed, economists were among the first to suggest the idea of a fixed stipend, first in the context of the developing world and now in America. Esther Duflo, a creative star in the field and Nobel Prize winner, is known for her experiments with microloans in poorer nations. She’s also unromantic about her discipline, embracing the concept of “economist as plumber.” Duflo argues that the purpose of economics is not grand theories so much as on‑the‑ground empiricism. Following her lead, the contemporary argument for UBI owes less to a framework of virtue and charity and much more to the cold language of an econ textbook. Its benefits are described in terms of optimizing resources, reducing inequality, and thereby maximizing societal payoff.
The UBI experiments under way in several cities, a handful of them funded by Sam’s organization, have data‑collection methods primed for a top‑tier academic publication. Like any good empiricist, Sam spells out his own research questions to me, and the data he’s collecting to test and analyze those hypotheses.
Several thousand miles from Sam’s Bay Area office, a different kind of program is in the works. When we speak by phone, Aisha Nyandoro bucks a little at my naive characterization of her work as UBI. “We don’t call it universal basic income,” she says. “We call it guaranteed income. It’s targeted. Invested intentionally in those discriminated against.” Aisha is the powerhouse founder of the Magnolia Mother’s Trust, a program that gives a monthly stipend to single Black mothers in Jackson, Mississippi. The project grew out of her seeing the welfare system fail miserably for the very people it purported to help. “The social safety net is designed to keep families from rising up. Keep them teetering on edge. It’s punitive paternalism. The ‘safety net’ that strangles.”
Bureaucracy is dehumanizing, Aisha says, because it asks a person to “prove you’re enough” to receive even the most basic of assistance. Magnolia Mother’s Trust is unique in that it is targeted at a specific population. Aisha reels off facts. The majority of low‑income women in Jackson are also mothers. In the state of Mississippi, one in four children live in poverty, and women of color earn 61 percent of what white men make. Those inequalities affect the community as a whole. In 2021, the trust gave $1,000 per month to one hundred women. While she’s happy her program is gaining exposure as more people pay attention to UBI, Aisha doesn’t mince words. “I have to be very explicit in naming race as an issue,” she says.
Aisha’s goal is to grow the program and provide cash, without qualifications, to more mothers in Jackson. Magnolia Mother’s Trust was started around the same time as the Stockton project, and the nomenclature of guaranteed income has gained traction. One mother in the program writes in an article in Ms. magazine, “Now everyone is talking about guaranteed income, and it started here in Jackson.” Whether or not it all traces back to Jackson, whether the money is guaranteed and targeted or more broadly distributed, what’s undeniable is that everyone seems to be talking about UBI.
Influential figures, primarily in tech and politics, have piled on to the idea. Jack Dorsey, the billionaire founder of Twitter, with his droopy meditation eyes and guru beard, wants in. In 2020, he donated $15 million to experimental efforts in thirty US cities.
And perhaps the loudest bullhorn for the idea has been wielded by Andrew Yang, another product of Silicon Valley and a 2020 US presidential candidate. Yang is an earnest guy, unabashedly dorky. Numbers drive his straight‑talking policy. Blue baseball caps for his campaign are emblazoned with one short word: MATH.
UBI’s proponents see the potential to simplify the currently convoluted American welfare system, to equilibrate an uneven playing field. By decoupling basic income from employment, it could free some people up to pursue work that is meaningful.
And yet the concept, despite its many proponents, has managed to draw ire from both ends of the political spectrum. Critics on the right see UBI as an extension of the welfare state, as further interference into free markets. Left‑leaning critics bemoan its “inefficient” distribution of resources: Why should high earners get as much as those below the poverty line? Why should struggling individuals get only just enough to keep them, and the capitalist system, afloat?
Detractors on both left and right default to the same language in their critiques: that of efficiency and maximizing resources. Indeed, the language of UBI’s critics is all too similar to the language of its proponents, with its randomized control trials and its view of society as a closed economic system. In the face of a disconnect between what optimization promised and what it delivered, the proposed solution involves more optimizing.
Why is this? What if we were to evaluate something like UBI outside the language of efficiency? We might ask a few questions differently. What if we relaxed the suggestion that dollars can be transformed by some or another equation into individual or societal utility? What if we went further than that and relaxed the suggestion of measuring at all, as a means of determining the “best” policy? What if we put down our calculators for a moment and let go of the idea that politics is meant to engineer an optimal society in the first place? Would total anarchy ensue?
Such questions are difficult to ask because they don’t sound like they’re getting us anywhere. It’s much easier, and more common, to tackle the problem head‑on. Electric‑vehicle networks such as Tesla’s, billed as an alternative to the centralized oil economy, seek to optimize where charging stations are placed, how batteries are created, how software updates are sent out — and by extension, how environmental outcomes take shape. Vitamins fill the place of nutrients leached out of foods by agriculture’s maximization of yields; these vitamins promise to optimize health. Vertical urban farming also purports to solve the problems of industrial agriculture, by introducing new optimizations in how light and fertilizers are delivered to greenhouse plants, run on technology platforms developed by giants such as SAP. A breathless Forbes article explains that the result of hydroponics is that “more people can be fed, less precious natural resources are used, and the produce is healthier and more flavorful.” The article nods only briefly to downsides, such as high energy, labor, and transportation costs. It doesn’t mention that many grains don’t lend themselves easily to indoor farming, nor the limitations of synthetic fertilizers in place of natural regeneration of soil.
In working to counteract the shortcomings of optimization, have we only embedded ourselves deeper? For all the talk of decentralized digital currencies and local‑maker economies, are we in fact more connected and centralized than ever? And less free, insofar as we’re tied into platforms such as Amazon and Airbnb and Etsy? Does our lack of freedom run deeper still, by dint of the fact that fewer and fewer of us know exactly what the algorithms driving these technologies do, as more and more of us depend on them? Do these attempts to deoptimize in fact entrench the idea of optimization further?
A 1952 novel by Kurt Vonnegut highlights the temptation, and also the threat, of de-optimizing. Player Piano describes a mechanized society in which the need for human labor has mostly been eliminated. The remaining workers are those engineers and managers whose purpose is to keep the machines online. The core drama takes place at a factory hub called Ilium Works, where “Efficiency, Economy, and Quality” reign supreme. The book is prescient in anticipating some of our current angst — and powerlessness — about optimization’s reach.
Paul Proteus is the thirty‑five‑year‑old factory manager of the Ilium Works. His father served in the same capacity, and like him, Paul is one day expected to take over as leader of the National Manufacturing Council. Each role at Ilium is identified by a number, such as R‑127 or EC‑002. Paul’s job is to oversee the machines.
At the time of the book’s publication, Vonnegut was a young author disillusioned by his experiences in World War II and disheartened as an engineering manager at General Electric. Ilium Works is a not‑so‑thinly‑veiled version of GE. As the novel wears on, Paul tries to free himself, to protest that “the main business of humanity is to do a good job of being human beings . . . not to serve as appendages to machines, institutions, and systems.” He seeks out the elusive Ghost Shirt Society with its conspiracies to break automation, he attempts to restore an old homestead with his wife. He tries, in other words, to organize a way out of the mechanized world.
His attempts prove to be in vain. Paul fails and ends up mired in dissatisfaction. The machines take over, riots ensue, everything is destroyed. And yet, humans’ love of mechanization runs deep: once the machines are destroyed, the janitors and technicians — a class on the fringes of society — quickly scramble to build things up again. Player Piano depicts the outcome of optimization as societal collapse and the collapse of meaning, followed by the flimsy rebuilding of the automated world we know.
This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-optimal-illusions-coco-krumme-riverhead-books-143012184.html?src=rss
Despite it nominally being a Surface-centric event, Microsoft sure spent a lot of time talking about AI on Thursday. "We believe it has the potential to help you be more knowledgeable, more productive, more creative, more connected to the people and things around you," Microsoft CEO Satya Nadella told the assembled crowd of reporters. "We think there's also an opportunity beyond work and life to have one experience that works across your entire life." To that end, Microsoft announced that its Copilot AI, which currently exists in various iterations in the Edge browser, Microsoft 365 platform and Windows, will be bundled into a single, unified and ubiquitous generative AI assistant across all of Microsoft's products — from PowerPoint to Teams.
"It's kind of like your PC now it's kind of becoming your CP. We believe Copilot will fundamentally transform the relationship with technology and user in a new era of personal computing, the age of Copilots," Nadella said. He also noted that the new AI will also have the "power to harness all your work data and intelligence," inferring that the system will be tunable to a customer's personal data silo.
Microsoft has been at the forefront of the generative AI revolution since the debut of ChatGPT last November. The company has spent years and millions of dollars in R&D working on the technology, including purchasing GitHub in 2018 and dramatically expanding its ongoing partnership with OpenAI this past January.
"It's starting roll out on September 26th, informed by what you're doing on your PC," Yusuf Mehdi, CVP Consumer Chief Marketing Officer, said on stage. The AI will arrive as part of the new Windows 11 release, which Medhi confirmed will "have over 150 new features and be the biggest update since it was first released."
This is a developing story. Please check back for updates.
Follow all of the news live from Microsoft’s 2023 Surface event right here.
This article originally appeared on Engadget at https://www.engadget.com/windows-copilot-ai-starts-rolling-out-september-26-143148644.html?src=rss
Amazon's Alexa is set to receive a major upgrade that will bring its conversational capabilities more in line with modern chatbots like Google Bard or OpenAI's ChatGPT, David Limp, SVP of Amazon Devices & Services, announced during the company's 2023 Devices event on Wednesday. The long-running digital assistant will soon be driven by a purpose-built large language model that will be available in nearly every new Echo device.
Elon Musk's Neuralink, purveyor of the experimental N1 brain-computer interface (BCI), announced on Tuesday that it has finally opened enrollment for its first in-human study, dubbed Precise Robotically Implanted Brain-Computer Interface (PRIME, not PRIBCI). The announcement comes nearly a year after the company's most recent "show and tell" event, four months beyond the timeframe in which Musk had declared the trials would start, and nearly a month after rival Synchron had already beaten it to the punch.
Per the company's announcement, the PRIME study "aims to evaluate the safety of our implant (N1) and surgical robot (R1) and assess the initial functionality of our BCI for enabling people with paralysis to control external devices with their thoughts." As such, this study is looking primarily for "those who have quadriplegia due to cervical spinal cord injury or amyotrophic lateral sclerosis (ALS)," despite Musk's repeated and unfounded claims that the technology will be useful as a vehicle for transhumanistic applications like learning Kung Fu from an SD card, uploading your consciousness to the web and controlling various household electronics with your mind.
Actually, that last one is a real goal of both the company and the technology. BCIs operate as a bridge between the human mind and machines, converting the analog electrical signals of our brains into digital signals that machines understand. The N1 system from Neuralink leverages a high-fidelity array of hair-thin probes that, unlike Synchron's Stentrode, must be installed via robotic keyhole surgery (performed by Neuralink's sewing-machine-like R1 robot surgeon). This array will be fitted onto the patient's motor cortex, where it will record and wirelessly transmit electrical impulses produced by the region to an associated app, which will interpret them into actionable commands for a computer. "The initial goal of our BCI is to grant people the ability to control a computer cursor or keyboard using their thoughts alone," the release reads.
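As a rough illustration of what interpreting impulses into cursor commands can look like, here is a hypothetical Python sketch that maps per-channel firing rates to a 2D cursor velocity with a toy linear decoder. The channel count, weights and scaling are invented for the sketch; Neuralink has not published its decoding pipeline.

```python
# A minimal, hypothetical illustration of the decoding step described above:
# turning per-channel motor-cortex activity into a 2D cursor velocity.
# The weights and channel count are invented stand-ins, not Neuralink's model.
import random

N_CHANNELS = 16

# Invented linear-decoder weights mapping firing rates to (dx, dy) velocity.
WEIGHTS_X = [random.uniform(-1, 1) for _ in range(N_CHANNELS)]
WEIGHTS_Y = [random.uniform(-1, 1) for _ in range(N_CHANNELS)]

def read_firing_rates() -> list[float]:
    """Stand-in for a wireless packet of per-electrode spike rates (Hz)."""
    return [max(0.0, random.gauss(20.0, 5.0)) for _ in range(N_CHANNELS)]

def decode_cursor_velocity(rates: list[float]) -> tuple[float, float]:
    """Linear decode: weighted sum of firing rates per axis, lightly scaled."""
    dx = sum(w * r for w, r in zip(WEIGHTS_X, rates)) / N_CHANNELS
    dy = sum(w * r for w, r in zip(WEIGHTS_Y, rates)) / N_CHANNELS
    return dx, dy

if __name__ == "__main__":
    x, y = 0.0, 0.0
    for _ in range(5):
        dx, dy = decode_cursor_velocity(read_firing_rates())
        x, y = x + dx, y + dy
        print(f"cursor at ({x:6.2f}, {y:6.2f})")
```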
Neuralink has been working on the N1 system since 2017, making it one of the first companies in the industry to begin publicly developing a commercial BCI. However, Neuralink's efforts were waylaid last year after the company was credibly accused of causing the needless suffering and death of dozens of animal test subjects, which both led to a USDA investigation into animal cruelty allegations and prompted the FDA to deny the company's request to fast-track human trials. The PRIME study is being conducted under the auspices of an investigational device exemption (IDE), which the FDA awarded Neuralink this past May.
This article originally appeared on Engadget at https://www.engadget.com/neuralink-opens-enrollment-for-its-first-human-bci-implants-215822024.html?src=rss
We’ve already seen OpenAI and Salesforce incorporate their standalone chatbots into larger, more comprehensive machine learning platforms that span the breadth and depth of their businesses. On Tuesday, Google announced that its Bard AI is receiving the same treatment and has been empowered to pull real-time data from other Google applications including Docs, Maps, Lens, Flights, Hotels and YouTube, as well as the users’ own silo of stored personal data, to provide more relevant and actionable chatbot responses.
“I've had the great fortune of being a part of the team from the inception,” Jack Krawczyk, product lead for Bard, told Engadget. “This Thursday marks six months since Bard entered into the world.”
But despite the technology’s rapid spread, Krawczyk concedes that many users remain wary of it, either because they don’t see an immediate use-case for it in their personal lives or “some others are saying, ‘I've also heard that it makes things up a lot.’” Bard’s new capabilities are meant to help assuage those concerns and build public trust with the technology through increased transparency and more fully explained reasoning by the AI.
“We started off talking about Bard as a creative collaborator because that's what we saw in our initial testing; that's how people use it,” he continued. “Six months into the experiment, that hypothesis is truly validating.”
The new iteration of Bard, “is the first time a language model will not only talk about how confident it is in its answer by finding content from across the web and linking to it,” Krawczyk said. “It's also the first time the language model is willing to admit that it made a mistake or got something wrong, and we think that's a critical step.” Krawczyk notes that feedback provided by the experimental tool’s users over the past half year has enabled the company to rapidly iterate increasingly robust, “more intuitive and imaginative” language models.
In order to provide these more expansive responses, Google is following OpenAI and Salesforce’s lead in enabling its AI to access the real-time capabilities of the company’s other apps — including Maps, YouTube, Hotels and Flights, among others. What’s more, users will be able to mix and match those capabilities using natural language requests.
That is, if you want to take your partner to Puerto Rico on February 14, 2024 and go sightseeing, you’ll be able to ask Bard, “can you show me flights to Puerto Rico and available hotels on Valentine's Day next year?” and then follow up with, “show me a map of interesting sites near our hotel” and Bard should be able to provide a list of potential flights, available hotel rooms and things to do outside of said hotel room once you book it.
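Here is a hypothetical Python sketch of how a single natural-language request like the one above might fan out into several extension calls behind the scenes. The extension names, keyword routing and return values are invented stand-ins; Google has not detailed how Bard dispatches these requests.

```python
# Toy illustration of chaining several "extensions" from one prompt, as in the
# Puerto Rico example above. Extension names and results are placeholders.
def flights_extension(destination: str, date: str) -> list[str]:
    return [f"Flight to {destination} on {date} (placeholder result)"]

def hotels_extension(destination: str, date: str) -> list[str]:
    return [f"Hotel in {destination} for {date} (placeholder result)"]

def maps_extension(query: str) -> list[str]:
    return [f"Map pins for '{query}' (placeholder result)"]

def handle_prompt(prompt: str) -> list[str]:
    """Toy router: pick which extensions a prompt touches by keyword.
    A real assistant would let the language model decide this."""
    results = []
    lowered = prompt.lower()
    if "flight" in lowered:
        results += flights_extension("Puerto Rico", "2024-02-14")
    if "hotel" in lowered:
        results += hotels_extension("Puerto Rico", "2024-02-14")
    if "map" in lowered or "sites" in lowered:
        results += maps_extension("interesting sites near our hotel")
    return results

if __name__ == "__main__":
    for line in handle_prompt("show me flights to Puerto Rico and available hotels"):
        print(line)
    for line in handle_prompt("show me a map of interesting sites near our hotel"):
        print(line)
```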
“We believe there's already a high bar for the transparency, choice and control that you have with your data,” Krawczyk said. “It needs to be even higher as it relates to bringing in your private data.”
In an effort to improve the transparency of its AI’s reasoning, Google is both explicitly linking to the sites that it is summarizing, and introducing a Double Check feature that will highlight potentially unfounded responses. When users click on Bard’s G button, the AI will independently audit its latest response and search the web for supporting information. If Search turns up contradictory evidence, the statement is highlighted orange. Conversely, heavily referenced and supported statements will be highlighted green.
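A rough sketch of what that audit loop could look like in Python appears below: each statement in a response gets a stand-in web search and a resulting highlight color. The search stub and verdict rules are assumptions for illustration, not Google's implementation.

```python
# Rough sketch of the highlighting logic the Double Check feature is described
# as performing: audit each statement against (stand-in) search results and
# color it green (supported), orange (contradicted) or leave it unmarked.
from enum import Enum

class Verdict(Enum):
    SUPPORTED = "green"       # search found corroborating content
    CONTRADICTED = "orange"   # search found conflicting content
    UNVERIFIED = "none"       # nothing conclusive either way

def mock_search(statement: str) -> str:
    """Stand-in for the web-search step; a real system would query Search."""
    text = statement.lower()
    if "cheese" in text:
        return "contradicted"
    if "bard" in text:
        return "supported"
    return "unverified"

def double_check(response: list[str]) -> list[tuple[str, Verdict]]:
    """Audit each statement in the response and attach a highlight color."""
    verdicts = {
        "supported": Verdict.SUPPORTED,
        "contradicted": Verdict.CONTRADICTED,
        "unverified": Verdict.UNVERIFIED,
    }
    return [(s, verdicts[mock_search(s)]) for s in response]

if __name__ == "__main__":
    answer = [
        "Bard launched as an experiment six months ago.",
        "The moon is made of cheese.",
        "Traffic was light this morning.",
    ]
    for statement, verdict in double_check(answer):
        print(f"[{verdict.value:>6}] {statement}")
```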
Users will also be able to opt-in to a feature, dubbed Bard Extensions, that will allow the AI access to their personal Google data (emails, photos, calendar entries, et cetera) so that it can provide specific answers about their daily lives. Instead of digging through email chains looking for a specific important date, for example, users will be able to ask Bard to scour their Gmail account for the information, as well as summarize the most important points of the overall discussion. Or, the user could work with the chatbot to draft a cover letter based specifically on the work experience listed in their resume.
And to allay concerns over Google potentially having even more access to your personal data than it already does, the company has pledged that “your content from Gmail, Docs and Drive is not seen by human reviewers, used by Bard to show you ads or used to train the Bard model.” What’s more, users will be able to opt in and out of the system at will and can allow or deny access to specific files. The service is initially only available to non-enterprise users in English, though the company is working to expand those offerings in the future.
“We think that this is a really critical step, but so much context is required in communication,” Krawczyk said. “We think really harnessing the healthy and open web is key because what we found in the first six months of Bard is, people will see a response and then follow up with trusted content to actually understand and go deeper. We're excited to provide that for people with this new experience.”
This article originally appeared on Engadget at https://www.engadget.com/googles-bard-ai-can-tap-the-companys-apps--and-your-personal-data--for-better-responses-100020506.html?src=rss
Modern tech journalism would likely look very different today if not for the efforts of Jane Stafford, Marjorie Van de Water, Helen Davis and a host of other trailblazing female reporters who staffed the Science Service throughout the publication's history. These journalists were among the very first science communicators, making sense of the newfangled technological wonders of the 1920s through 1950s and bringing that understanding to their readers — often in spite of the personalities and institutions they were covering.
In Writing for Their Lives: America's Pioneering Female Science Journalists, historian Marcel Chotkowski LaFollette highlights not just the important work that these women performed but also examines their diverse careers. The excerpt below recounts the hectic days and weeks in the outlet's newsroom following America's use of a terrifying new "atom" bomb.
In the weeks following the August 1945 dropping of atomic bombs on Hiroshima and Nagasaki, the Science Service staff frequently apologized for their tardy responses to any correspondence that had arrived that month. “Just about the time that your letter arrived here, we were completely showered with debris from the atom bombs,” Martha Morrow wrote somewhat facetiously. “This note of appreciation would have gotten off sooner if we had not had atomic bombs and peace crashing down on us,” Jane Stafford told another scientist. The journalists’ internal memos, however, exuded a sense of accomplishment. They had risen to the challenge of covering extraordinary breaking news; they had collaborated, cooperated, and served their readers well.
Because Watson Davis happened to be traveling in South America during the first week of August 1945, the five editorial writers remaining in Washington worked as a team, with each person applying a different interpretative frame to explaining the development and use of an atomic bomb. Morrow focused on the physics; Stafford looked at radiation and physiology; Marjorie Van de Water concentrated on the psychological and social implications; Helen Davis explored the chemistry of explosions; and Frank Thone focused on the biological impacts. Van de Water later recalled the electric atmosphere:
The telephone ringing all the day interrupted thought and work. Two of these calls summed up neatly the problems of the writer who tries to tell the public about the “findings of scientific research.” One inquiry was concise and practical, easily answered. “What is an atom?” this caller wanted to know. I gave him a convenient definition, but he was not quite satisfied. “That’s fine,” he said, “But now could you add a little something to make this whole thing more comprehensible?” The other was a preacher. He was alarmed at what he had read in the afternoon papers. “What are the implications of this thing?” he wanted to know. “Where will it end? Is man going to destroy himself utterly? Does it mean the end of the world?”
As she concluded, “It was not possible to think of anything else except one stupendous fact—atomic fission, atomic power, atomic destruction, unlimited except by the unpredictable desires of the human heart.”
The general outlines and mission of the Manhattan Project had not, of course, surprised these reporters. Preliminary discussions about the feasibility of atomic weapons occurred long before the imposition of official secrecy. Helen’s daughter, Charlotte, used her family’s own special code words when she wrote her mother on August 7 from Rhode Island, where she worked in a US Navy laboratory:
The first I saw of the news was on the bus at Providence last night. A small boy came aboard selling the Boston Record which was headlined “Atomic Bomb Terror.” I regret to say that with all my previous knowledge and good guesses about Shangri-La and “that other place in Tennessee” I merely said to myself “Oh well, the Record!” and went to sleep. Not until I saw the Providence Journal and the New York Times did the import of the matter dawn on me.
Helen replied a few days later, apologizing for the delay—“as you can guess, the atomic bomb has us running in circles.” Watson was scheduled to be in Buenos Aires on August 6, yet cables to him at the US embassy in Argentina had gone unanswered. Helen quipped that she wanted to send him a telegram saying, “Having an awful time, wish you were here.” Messages from the office trailed Watson around Latin America, with Stafford’s telegram (“YOUR ATOMIZING STAFF MISSES AND GREETS YOU”) eventually catching up with him in Uruguay. His reply revealed his regret at having missed the action: “WHAT DAYS TO BE AWAY FROM WASHINGTON HOPE WE PLASTERED ATOMIC BOMB.”
Once the official technical report (a document known as the “Smyth Report”) was released, newspaper clients expected succinct technical summaries almost immediately. The news service produced that material in record time. Other than Martha, Helen was the only one on the staff who understood the bomb’s basic physics and chemistry, and she complained that she felt "more like Hamlet every day: ‘Oh, wretched spite, That I was ever born to set them right!'" Helen even quickly wrote an editorial on atomic power for the next issue of Chemistry, which was just going to press. On the afternoon of August 11, having “practically disintegrated along with the atom all this week," Helen wrote a catch-up letter to Charlotte. For the first few days, she explained, they had had only the bare announcement that the weapons had exploded as designed and civilians had been killed. In “the thick of the fight,” during the previous week, she had had doubts about their coverage, but “after seeing what the rest of the world did with the story,” she told Charlotte, she realized “we didn’t do too badly.”
New Questions
Helen’s September 2 letter to Watson (who was by then in Mexico and trying to get home) offered another perspective on the complicated office politics:
So much has happened, I probably can’t do more than hit the highest spots. First and biggest, of course, was the atom bomb. We will probably never be the same again! The story broke . . . with the President’s announcement. We had the War Department releases, but Frank was sitting on them, in a complete dither, but writing like mad. Nobody dared interrupt him. He finally yelled to me to do a piece on the atom and what it is. His story and mine were all that made the DMR [Daily Mail Report] that day.
Cool-headed preparation eventually prevailed. When the writers learned that the War Department planned to release the official technical report at the end of that first week, they decided to start drafting background material yet “not get too far out on a limb.” By the time copies of the Smyth Report arrived on Friday, Thone was already on his way to a meeting in Boston. Martha was racing back from vacation. For a time, “which seemed then just a few minutes short of eternity,” Helen wrote, “there was nobody but Jane, Marjorie, and me to carry on. When we three get together and pool our talents, you’d be surprised what a good physicist we make!” She described the Smyth Report as “amazing”:
It is multilithed, and over an inch thick. We got two copies. One we kept intact, the other we pulled the staples out of, so we could work on parts of it all at once. Jane Stafford, I think, has read all the chapter headings through consecutively, for she set herself that task. The rest of us just pick up any sheet at random and find at least one story that has to be written now, without bothering with anything else.
That report, Helen told Charlotte, made “all physics and chemistry B.A.B. (Before Atom Bomb, of course) completely obsolete,” and “is beautifully written and as exciting as a detective story.” Because the War Department wanted publishers to reprint the report “in whole or in part,” Helen “rearranged it and wrote connecting paragraphs,” making it the central focus of the September 1945 Chemistry. That issue was later praised for its clarity. Helen not only understood the technical aspects but also had the ability to explain them, as demonstrated in her revised edition of the “Laws of Matter Up-to-Date” feature in October 1945. During those same busy weeks, Helen even sketched mock-ups and text estimates for a brochure (“Atomic Power”) to advertise the organization’s capability to answer technical questions like, “When you split an atom of uranium, what elements do you have as a result?” And she compiled a three-page list of “important dates in the history of the atom” to share with her colleagues.
The real news story, though, would involve unpacking the weapon’s social, political, and economic consequences, attempting to understand whether and to what extent the awesome power would be “good only for the destruction of cities and of people” as well as how its existence might affect future generations. The implications of that “alchemist’s dream” (Helen’s ironic phrase) intensified public interest in all science. As the editor of the Pittsburgh Press told his staff, “Abstruse science has been popularized by a situation which has made the public read and discuss material it would otherwise never have heard of—because it involved the lives and safety of their own loved ones.” All over the country, adults and students began writing to newspapers, scientists, and public officials, asking for more information about atomic energy. One young woman who planned to major in chemistry and physics at Vassar College wrote directly to Vannevar Bush, head of the Office of Scientific Research and Development. Bush’s secretary asked Helen to respond. Helen answered each question (e.g., “Exactly what happens within the nucleus of the Uranium atom before it splits? What are the remaining materials after the atom splits? How long will it be before these radioactive materials disintegrate?”) with detailed explanations and references to relevant sections of the Smyth Report, and enclosed the latest issue of Chemistry as added encouragement to a budding young science student.
This article originally appeared on Engadget at https://www.engadget.com/how-a-pioneering-mixed-gender-newsroom-covered-the-a-bomb-160043585.html?src=rss
The CEOs of leading AI companies — including Meta's Mark Zuckerberg, Microsoft's Satya Nadella, Alphabet's Sundar Pichai, Tesla's Elon Musk and OpenAI's Sam Altman — appeared before Congress once again on Wednesday. But instead of the normal bombast and soapboxing we see during public hearings about the dangers of unfettered AI development, this conversation reportedly took on far more muted tones.
In all, more than 20 tech and civil society leaders spoke with lawmakers at Wednesday's meeting, organized by Senate Majority Leader Chuck Schumer, to discuss how AI development should be regulated moving forward. Senators Martin Heinrich (D-NM), Todd Young (R-IN) and Mike Rounds (R-SD) were also in attendance and are reportedly working with the majority leader to draft additional proposals.
The word of the day: consensus. “First, I asked everyone in the room, ‘Is government needed to play a role in regulating AI?’ and every single person raised their hands even though they had diverse views,” Schumer told reporters Wednesday.
But as Bloomberg reports, "areas of disagreement were apparent throughout the morning session" with Zuckerberg, Altman and Bill Gates all differing on the risks posed by open-source AI (three guesses as to where old Monopoly Bill came down on that issue). True to form, Elon Musk got into it with "Berkeley researcher Deb Raji for appearing to downplay concerns about AI-powered self-driving cars, according to one of the people in the room," Bloomberg reports.
“Some people mentioned licensing and testing and other ways of regulation … there were various suggestions as to how to do it, but no consensus emerged yet,” Schumer said following the event.
“That’s probably the worst wedding to try to do seating for,” Humane Intelligence CEO Rumman Chowdhury said of the event as an attendee. She also noted that Elon Musk and Mark Zuckerberg did not interact and sat at opposite ends of the room-width table — presumably to keep the two bloodthirsty cagefighting CEOs from throwing down and Royal Rumbling the esteemed proceedings.
The meeting participants generally agreed that the federal government needs to “help to deal with what we call transformational innovation,” one unnamed participant suggested. That could entail creating a $32 billion fund that would assist with “the kind of stuff that maximizes the benefits of AI,” Schumer told reporters.
Following the seven-hour event, Facebook released Mark Zuckerberg's official remarks. They cover the company's long-standing talking points about developing and rolling out the technology "in a responsible manner," coordinating its efforts with civil society leaders (instead of say, allegedly fomenting genocide like that one time in Myanmar) and ensuring "that America continue to lead in this area and define the technical standard that the world uses."
Elon Musk, famed libertarian and bloodsworn enemy of the FTC, warned reporters corralled outside of the hearing about the "civilizational risk" posed by AI. He wants a Federal Department of AI to help regulate the industry. He reportedly envisions it operating similarly to the FAA or SEC (two more agencies Musk has been variously scolded by) but did not elaborate beyond that. “I think this meeting could go down in history as important to the future of civilization,” he told reporters.
This article originally appeared on Engadget at https://www.engadget.com/ai-tech-leaders-make-all-the-right-noises-at-cozy-closed-door-senate-meeting-194505318.html?src=rss