Apparently those Super Bowl ads finally did the trick. The National Highway Traffic Safety Administration announced on Thursday that Tesla is recalling nearly 363,000 of its vehicles because the Full Self-Driving software may cause a crash. Specifically, the NHTSA warns that the software may allow vehicles to "exceed speed limits or travel through intersections in an unlawful or unpredictable manner," which "increases the risk of a crash."
In all, the recall impacts 362,758 vehicles. They include, according to the announcement, “certain 2016-2023 Model S, Model X, 2017-2023 Model 3, and 2020-2023 Model Y vehicles equipped with Full Self-Driving Beta (FSD Beta) software or pending installation.” Tesla will release an over-the-air (OTA) update, free of charge, to rectify the issue, Reuters reports.
The NHTSA initially launched its investigation into Tesla's much-hyped Autopilot and Full Self-Driving systems back in August 2021, following years of fatal highway accidents and terrifying social media posts documenting the software's seemingly self-destructive behavior.
"We're investing a lot of resources," NHTSA acting head Ann Carlson told reporters in January. "The resources require a lot of technical expertise, actually some legal novelty and so we're moving as quickly as we can, but we also want to be careful and make sure we have all the information we need."
Running from 1992 through 1995, Ghostwriter, the beloved PBS children’s television show, followed a diverse group of friends as they solved mysteries around their Brooklyn neighborhood with the help of their haunted typewriter, a cursed item possessed by the trapped soul of a murdered runaway Civil War slave. The Ghostwriter typewriter developed by interaction designer, artist and Lumen.world CTO Arvind Sanjeev, on the other hand, comes with none of the paranormal hang-ups of its coincidental namesake. Instead of a spirit bound to this hellish plane of existence, forced to help tweens solve low-stakes conundrums, the deus in Sanjeev’s machina is animated by OpenAI’s GPT-3.
He first devised this artistic endeavor in 2021 as a “poetic intervention that allows us to take a moment to breathe and reflect on this new creative relationship we are forming with machines.” Built over the course of weekends and evenings, Ghostwriter interacts with its user through the written word, allowing the two to converse and co-create freely through the physical medium of paper.
“I wanted Ghostwriter to evoke warm feelings and make people comfortable playing with it,” Sanjeev told Engadget via email. “I chose the mental model of the typewriter for this reason. It is an artifact from our past, a world where technology was more physical and mindful of people's lives.”
“People trust typewriters and feel comfortable with them because they know their sole purpose is just to create stories on paper,” he added. “This is contrary to today's technology, black boxes that try to propagate unethical business models based on the attention economy.”
Ghostwriter began as a vintage electronic Brother AX-325 typewriter (chosen on account of its encodable keypad matrix). Sanjeev selected the GPT-3 model in part due to his familiarity with it through his adjunct faculty position at CIID and in part due to its impressive “capability to generate creative content,” he noted. “The easily accessible API convinced me to integrate this into Ghostwriter.”
Sanjeev stripped out much of the machine’s existing mechanical guts and replaced them with an Arduino controller and a Raspberry Pi. The Arduino reads what the human user has typed on Ghostwriter’s keyboard, then feeds that input to OpenAI’s GPT-3 API through the onboard Raspberry Pi. The AI does its generative magic, spits out a response and Ghostwriter dutifully prints it back onto the page for the person’s perusal.
“The Ghostwriter's tactile slow-typed responses made people meditatively read each word one after the other, bringing out all the quirks and nuances of the AI through its finer details,” Sanjeev said. “Fast digital interactions that live on a word editor tend to hide things like this unintentionally.”
Teaching the system to tap the correct keys in response proved one of the project’s greatest challenges. Sanjeev had to first decode the existing electronic keyboard’s matrix — the device that converts a key’s physical press into its corresponding digital signal. “I pressed each key, read its triggered signal-scan lines, mapped it to the corresponding key, and finally made a driver that ran on an Arduino,” he wrote. Users can even influence the AI’s answers using two physical knobs that adjust Ghostwriter’s “creativity” and “response length” parameters.
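For the tinkerers wondering how such a pipeline hangs together, here is a rough sketch of the Raspberry Pi side of the loop. It is not Sanjeev's actual code; the serial port, baud rate, knob values and model name are all assumptions made for illustration. The idea is simply to read a typed line from the Arduino over serial, send it to the GPT-3 completion API with the "creativity" knob mapped to sampling temperature and the "response length" knob mapped to max tokens, then hand the reply back for the typewriter to tap out.

```python
# Hypothetical Raspberry Pi-side sketch, not Sanjeev's actual code.
# Assumes the Arduino keyboard driver is attached over USB serial.
import openai   # pip install openai (the GPT-3-era Completion API)
import serial   # pip install pyserial

openai.api_key = "YOUR_OPENAI_API_KEY"            # supplied by the builder
arduino = serial.Serial("/dev/ttyACM0", 115200)   # assumed port and baud rate

def read_knobs():
    """Stand-in for the two physical knobs; real values would come from the hardware."""
    creativity = 0.8        # mapped to the model's sampling temperature
    response_length = 120   # mapped to max_tokens
    return creativity, response_length

while True:
    prompt = arduino.readline().decode("utf-8").strip()  # one typed passage per exchange
    if not prompt:
        continue
    temperature, max_tokens = read_knobs()
    completion = openai.Completion.create(
        engine="text-davinci-003",   # a GPT-3 model; any completion model would do
        prompt=prompt,
        temperature=temperature,
        max_tokens=max_tokens,
    )
    reply = completion.choices[0].text.strip()
    arduino.write((reply + "\n").encode("utf-8"))  # the Arduino taps out the response
```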
Ghostwriter will remain unique for a while longer, unfortunately, though Sanjeev is working to open-source the project so that makers around the world might build their own. “I hope to carve out some time to clean up the code and package everything together soon,” he said.
“Generative AI is definitely not a fad,” Sanjeev declared, though neither is it a silver bullet for content creation. “It is evidence that we have crossed the tipping point for AI creativity that pioneers of AI thought was impossible,” he continued.
These tools help shape our ideas and can even inspire new ones, but at the end of the day they are still merely tools for our creativity, not replacements. “AI is a glorified brush that a painter can use to tell their stories,” Sanjeev said. “Humanity and life will always be the center of any successful work, regardless of whether it is realized through AI.”
GPT’s ultimate applications will depend on the medium in which it is employed, Sanjeev believes — an active, hands-on instrument for digital content creation, but more of a “library of ideas for inspiration” for makers in the physical space. “The key to unlocking the potential of ChatGPT in maker spaces lies in creating meaningful physical interfaces for it,” he said. “The role of an artist or creative using AI becomes that of a bona fide curator who selects the best works from the AI, filters it, and passes it to the next phase of the design process.”
He expects a similar synergy from knowledge workers as well. Automated text generation systems have been the focus of intense media and industry scrutiny in recent months amidst ChatGPT’s rocketing popularity. The technology has shown itself adept at everything from writing Linux code and haiku poetry to Wharton Business School entrance exams and CNET financial explainers. Knowledge workers — lawyers, business analysts and journalists, among myriad others — are rightly concerned that such automated systems might be used to replace them, as BuzzFeed recently did to its newsroom.
However, Sanjeev believes that AI will have a less conspicuous role to play, trickling down from its generalist creative uses and specializing into specific knowledge fields as it goes. “Just like how cloud computing has become pervasive and powers most of the applications today, AI will also become ubiquitous and recede into the background of our lives once the hype cycle fades away,” Sanjeev argued.
The AI revolution should lessen the rigors of such jobs and automate much of the drudgier aspects of the work. “The ability to synthesize vast amounts of niche data catered specifically to domains like software engineering, law, and business is being used to train hyper-specialized AIs for these respective fields,” Sanjeev noted.
OpenAI itself offers custom training packages for its systems so that customers might more easily spin up their own personalized AI doctors and robolawyers. Who ultimately bears responsibility when something goes wrong — whether it’s an AI doctor pushing quack diagnoses or an AI lawyer getting itself disbarred — remains a significant question with few easy answers.
Great news everyone, we’re pivoting to chatbots! Little did OpenAI realize when it released ChatGPT last November that the advanced LLM (large language model) designed to uncannily mimic human writing would become the fastest growing app to date with more than 100 million users signing up over the past three months. Its success — helped along by a $10 billion, multi-year investment from Microsoft — largely caught the company’s competition flat-footed, in turn spurring a frenetic and frantic response from Google, Baidu and Alibaba. But as these enhanced search engines come online in the coming days, the ways and whys of how we search are sure to evolve alongside them.
“I'm pretty excited about the technology. You know, we've been building NLP systems for a while and we've been looking every year at incremental growth,” Dr. Sameer Singh, Associate Professor of Computer Science at the University of California, Irvine (UCI), told Engadget. “For the public, it seems like suddenly out of the blue, that's where we are. I've seen things getting better over the years and it's good for all of this stuff to be available everywhere and for people to be using it.”
As to the recent public success of large language models, “I think it's partly that technology has gotten to a place where it's not completely embarrassing to put the output of these models in front of people — and it does look really good most of the time,” Singh continued. “I think that that’s good enough.”
“I think it has less to do with technology but more to do with the public perception,” he continued. “If GPT hadn't been released publicly… Once something like that is out there and it's really resonating with so many people, the usage is off the charts.”
Search providers have big, big ideas for how artificial intelligence-enhanced web crawlers and search engines might work, and damned if they aren’t going to break stuff and move fast to get there. Microsoft envisions its Bing AI serving as the user’s “copilot” in their web browsing, following them from page to page, answering questions and even writing social media posts on their behalf.
This is a fundamental change from the process we use today. Depending on the complexity of the question, users may have to visit multiple websites, then sift through that collected information and stitch it together into a cohesive idea before evaluating it.
“That's more work than having a model that hopefully has read these pages already and can synthesize this into something that doesn't currently exist on the web,” Brendan Dolan-Gavitt, Assistant Professor in the Computer Science and Engineering Department at NYU Tandon, told Engadget. “The information is still out there. It's still verifiable, and hopefully correct. But it's not all in place.”
For its part, Google’s vision of the AI-powered future has users hanging around its search page rather than clicking through to destination sites. Information relevant to the user’s query would be collected from the web, stitched together by the language model, then regurgitated as an answer, with references to the originating websites displayed as footnotes.
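To make that idea concrete, here is a toy sketch of the general "retrieve, stitch, cite" pattern. It is emphatically not Google's or Microsoft's actual pipeline: the page list, the keyword scoring and the prompt wording are all invented for illustration. A handful of pages get ranked against the query, numbered as sources and folded into a prompt that asks a language model to answer with footnote-style citations.

```python
# Toy illustration of the "retrieve, stitch, cite" pattern, not any vendor's real pipeline.
from collections import Counter

pages = {  # stand-in for crawled web pages
    "https://example.org/jwst": "JWST is an infrared space telescope launched in 2021.",
    "https://example.org/exoplanets": "The first exoplanet image was captured by the VLT in 2004.",
}

def retrieve(query, k=2):
    """Naive keyword-overlap ranking; real systems use search indexes and learned retrievers."""
    q = Counter(query.lower().split())
    ranked = sorted(pages.items(),
                    key=lambda kv: sum(q[w] for w in kv[1].lower().split()),
                    reverse=True)
    return ranked[:k]

def build_prompt(query):
    sources = retrieve(query)
    numbered = "\n".join(f"[{i + 1}] {url}: {text}" for i, (url, text) in enumerate(sources))
    return (f"Answer the question using only the numbered sources, citing them as footnotes.\n\n"
            f"Sources:\n{numbered}\n\nQuestion: {query}\nAnswer:")

# The assembled prompt would then be handed to a large language model for the final answer.
print(build_prompt("what telescope took the first picture of an exoplanet"))
```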
This all sounds great, and was all going great, right up to the very first opportunity for something to go wrong. Which it did. In its inaugural Twitter ad — less than 24 hours after debuting — Bard, Google’s answer to ChatGPT, confidently declared, “JWST took the very first pictures of a planet outside of our own solar system.” You will be shocked to learn that the James Webb Space Telescope did not, in fact, take the first picture of an exoplanet. The ESO’s Very Large Telescope holds that honor from 2004. Bard just sorta made it up. Hallucinated it out of the digital ether.
Bard is an experimental conversational AI service, powered by LaMDA. Built using our large language models and drawing on information from the web, it’s a launchpad for curiosity and can help simplify complex topics → https://t.co/fSp531xKy3 pic.twitter.com/JecHXVmt8l
Of course this isn’t the first time that we’ve been lied to by machines. Search has always been a bit of a crapshoot, ever since the early days of Lycos and AltaVista. “When search was released, we thought it was ‘good enough’ though it wasn't perfect,” Singh recalled. “It would give all kinds of results. Over time, those have improved a lot. We played with it, and we realized when we should trust it and when we shouldn’t — when we should go to the second page of results, and when we shouldn't.”
The subsequent generation of voice AI assistants worked through the same basic issues as their text-based predecessors. “When Siri and Google Assistant and Alexa and all of these came out,” Singh said, “they were not the assistants that they were being sold to us as.”
The performance of today’s LLMs, like Bard and ChatGPT, is likely to improve along similar paths through their public use, as well as through further specialization into specific technical and knowledge-based roles such as medicine, business analysis and law. “I think there are definitely reasons it becomes much better once you start specializing it. I don't think Google and Microsoft specifically are going to be specializing it too much — their market is as general as possible,” Singh noted.
In many ways, what Google and Bing are offering by interposing their services in front of the wider internet — much as AOL did with the America Online service in the ‘90s — is a logical conclusion to the challenges facing today’s internet users.
“Nobody's doing the search as the end goal. We are seeking some information, eventually to act on that information,” Singh argues. “If we think about that as the role of search, and not just search in the literal sense of literally searching for something, you can imagine something that actually acts on top of search results can be very useful.”
Singh characterizes this centralization of power as “a very valid concern. Simply put, if you have these chat capabilities, you are much less inclined to actually go to the websites where this information resides,” he said.
It’s bad enough that chatbots have a habit of making broad intellectual leaps in their summarizations, but the practice may also “incentivize users not to go to the website, not read the whole source, to just get the version that the chat interface gives you and sort of start relying on it more and more,” Singh warned.
In this, Singh and Dolan-Gavitt agree. “If you’re cannibalizing from the visits that a site would have gotten, and are no longer directing people there, but using the same information, there's an argument that these sites won't have much incentive to keep posting new content,” Dolan-Gavitt told Engadget. “On the other hand, the need for clicks is also one of the reasons we get lots of spam, and is one of the reasons why search has sort of become less useful recently. I think [the shortcomings of search are] a big part of why people are responding more positively to these chatbot products.”
That demand, combined with a nascent marketplace, is resulting in a scramble among the industry’s major players to get their products out yesterday, ready or not, underwhelming or not. That rush for market share is decidedly hazardous for consumers. Microsoft’s previous foray into AI chatbots, 2016’s Tay, ended poorly (to put it without the white hoods and goose stepping). Today, Redditors are already jailbreaking ChatGPT to generate racist content. These are two of the more innocuous challenges we will face as LLMs expand in use, but even they have proven difficult to stamp out, in part because they require coordination across an industry of vicious competitors.
“The kinds of things that I tend to worry about are, on the software side, whether this puts malicious capabilities into more hands, makes it easier for people to write malware and viruses,” Dolan-Gavitt said. “This is not as extreme as things like misinformation but certainly, I think it'll make it a lot easier for people to make spam.”
“A lot of the thinking around safety so far has been predicated on this idea that there would be just a couple kinds of central companies that, if you could get them all to agree, we could have some safety standards,” Dolan-Gavitt continued. “I think the more competition there is, the more you get this open environment where you can download an unrestricted model, set it up on your server and have it generate whatever you want. The kinds of approaches that relied on this more centralized model will start to fall apart.”
America's first astronauts from the 1960s were all pulled from the highest ranks of the nation's military. As such, NASA's first few classes tended to conform to a rather specific demographic theme — white, male, flattop haircut you could set a watch to. By the mid-'70s, however, the space agency had gotten with the times and opened up the spacewalking profession to more than former Air Force and Navy test pilots.
In The New Guys, author Meredith Bagby follows the exploits of NASA's Astronaut class of 1978 — "Class 8," America's first women, African Americans, Asian American, and gay person to fly to space — from the team's selection through their mastering of cutting-edge technologies aboard the Space Shuttle and their history-making orbital missions. In the excerpt below, Class 8 receives a brutal introduction to the dangers that await them.
“Hey! We’ve got a fire in the cockpit!” a man screamed, then his voice cut out. Within seconds, another desperate voice cut through the static.
“We’ve got a bad fire . . . !” the second man shouted in pain.
“We’re burning up . . . !!!” a third howled.
Then the transmission faded into nothing but static.
In one of the many tiered seats in Mission Control, Ron McNair and his new classmates listened to a recording of the Apollo 1 fire. During a preflight test on January 27, 1967, astronauts Gus Grissom, Ed White, and Roger Chaffee had burned alive. Even though over a decade had passed since the accident, the pain and fear of the astronauts who perished was palpable to the room of new recruits.
The instructor surveyed the faces of the astronaut candidates. Are you sure you’re ready for this? The audio was a wake-up call, especially for those like Ron who had not served in the military and had never had a job with life-and-death consequences. If this reality was too much for any of them to accept, the instructor suggested, now was the time to go. No one budged.
A few weeks earlier, as Ron moved his family across the country from left-leaning Malibu, California, to the Lone Star State, the summer sizzled. Disco hits from the Bee Gees, “Night Fever” and “Stayin’ Alive,” blared from the radio. Billboards advertised the new Hollywood blockbuster Grease, starring John Travolta and Olivia Newton-John. In the nation’s capital, almost a hundred thousand demonstrators marched in support of the Equal Rights Amendment—at the time, the largest march for women’s rights in US history. Muhammad Ali was on the verge of making history at the Louisiana Superdome, becoming the first man to win the World Heavyweight title three times in a row.
When Ron and his wife, Cheryl, arrived in Houston, they found a little starter apartment before moving to Clear Lake along with the Onizukas and the Gregorys. Everyone that had kids—or planned to—wanted a lawn for football and a cul-de-sac for bike riding. The neighborhood’s proximity to the middle and high schools made it the obvious choice for families. Single astronauts like Sally Ride, Kathy Sullivan, and Steve Hawley settled into apartments right outside Johnson’s back gate with a short commute, volleyball court, and communal barbecue pit.
On the Monday after the July 4th holiday, Ron drove through the gates of Johnson Space Center for his first day of work. Looking up from his baffling acronym-filled schedule, Ron spotted a few of his classmates and followed them to Building 4, the home of Johnson’s Flight Crew Operations. Everyone was rushing to the Monday morning all-hands meeting, a staple of the Astronaut Office since the Mercury days.
Standing watch from their office doors, Sylvia Salinas, Mary Lopez, and Estella Hernandez Gillette, all in their twenties, took in the excitement as the new astronauts stormed the hallways. The Hispanic American administrative staff — working in and around the Astronaut Office — came to be known as the Mexican Mafia. As the liaisons for George Abbey and John Young, Sylvia and Mary, and later Estella, ran the show behind the scenes, making sure things went smoothly in the Astronaut Office. Up until then, the astronauts they worked for were military men, older in age and more conventional in style; they did not fraternize with support staff. Now, “kids like them” were rolling in. The arrival of Astronaut Class 8 was like a breath of fresh air.
A large conference table surrounded by two rings of chairs dominated Room 3025, the locus of the Monday meeting. Assuming the first ring was reserved for administrators and senior astronauts, Ron took a seat in the back row, as did the rest of his class. Everyone, that is, except the blond, mustachioed Rick Hauck, a US Navy commander who by military standards was the most senior-ranking pilot of their class. Hauck took a seat at the table. Some in the room gasped. Others eyed him with suspicion. Wow, he must either be a fool or the most confident bastard among us. Maybe both. Either way he made an impression.
Like Hauck, the fifteen fighter pilots in Ron’s class had plenty of swagger and bravado, and mixed easily with the veteran astronauts. The old guys, twenty-eight in all, including moonwalkers John Young and Alan Bean, whom Ron met during interview week, filled the inner circle. Among them were astronauts still itching for their first trip to space, like Bob “Crip” Crippen, the baby of the group at forty years old, and Richard “Dick” Truly, both career military pilots who had flown for both the Navy and Air Force. These yet-to-fly guys were caught between programs, too late for Apollo and—so far—too early for the shuttle. Crippen and Truly were part of Astronaut Group 7, who had been transferred to NASA after the cancellation of the Manned Orbiting Laboratory (MOL), a classified Cold War military project developed to acquire surveillance images from space. After a decade at the agency, the former MOL astronauts had only ever flown a desk.
Everyone here wanted a ticket to space, but ten of them would be setting historical precedent, breaking barriers that in the past restricted people like them from space travel. Of the six women in the room, one would be the first American woman in space. While the Soviets had flown the first female astronaut, Valentina Tereshkova, being the first American woman in space would still earn a prominent place in the annals of history. In 1978, no Black person had flown to space. Ron, along with Guy Bluford and Fred Gregory, would compete to be the first, while Ellison Onizuka would almost certainly be the first Asian American to fly. Guy and Fred, both Vietnam vets, and El, an Air Force test pilot, all spoke the military language of the old guys. Ron was an outsider even among outsiders.
John Young, chief of the Astronaut Office, began the meeting, mumbling “a few forgettable words of welcome” while staring at his shoes. Though he had braved the depths of space four times, on both Apollo and Gemini, Young had not conquered public speaking. Compact, with a jockey’s build, Young was a handsome Navy devil with big ears and an aw-shucks demeanor that belied how truly meticulous he was. He preferred solving thorny engineering problems to dealing with management issues, and yet here he was as head of the Astronaut Office. He explained to the new class that they were not yet astronauts; they were still astronaut candidates, or “AsCans” for short. Only after two years of training would they earn the title astronaut and a silver pin to mark the achievement.
Inspired by Navy and Air Force aviator badges, the pin depicted a trio of rays merged atop a shining star and encircled by a halo denoting orbital flight. The silver pin meant you were flight-ready, but the gold pin meant you had flown to space. That’s when you make it. Young then left the group with a bit of sage advice: “Don’t talk about nothing you know nothing about.” Got it. So basically, keep our mouths shut.
As the old guys left the room, they once-overed the new guys. Quite simply, the old guys were a different generation. They were veterans, test pilots, and guys who had never worked with women or civilian graduate students. Underneath their pique was also perhaps a tinge of fear. The line to ride the bird just got a whole lot longer; maybe they would miss their chance altogether.
Who are these guys anyway? Hell, half of them are civilians, wet behind the ears, fresh off their mother’s teat. They traded in high grades and accolades, not in life-or-death. The old guys shook their heads. Those Fucking New Guys. “The Fucking New Guy,” a military term for the newest grunt in the unit, seemed to suit Astronaut Class 8 perfectly. So was born the official class nickname: TFNG. In polite company, the TFNGs referred to themselves as “Thirty-Five New Guys,” but everyone knew what the term really meant.
After the meeting, secretary Sylvia Salinas handed the New Guys their official NASA portraits and asked them to create signatures for the auto-pen machine. The agency would print thousands of autographed photos. Do thousands of people want our autograph? Ron wondered. It’s astronaut insurance, a veteran astronaut quipped. If you die, your family will have something to sell. The joke did not get any laughs.
ChatGPT, the automated text generation system from OpenAI, has taken the world by storm in the two months since its public beta release, but that time alone in the spotlight is quickly coming to an end. Google announced on Monday that its long-rumored chatbot AI project is real and on the way. It's called Bard.
Bard will serve as an "experimental conversational AI service," per a blog post by Google CEO Sundar Pichai Monday. It's built atop Google's existing Language Model for Dialogue Applications (LaMDA) platform, which the company has been developing for the past two years.
"Bard seeks to combine the breadth of the world’s knowledge with the power, intelligence and creativity of our large language models," Pichai declared. "It draws on information from the web to provide fresh, high-quality responses." Whether that reliance on the internet results in bigoted or racist behavior, as seemingly every chatbot before it has exhibited, remain to be seen.
The program will not simply be opened to the internet as ChatGPT was. Google is starting with the release of a lightweight version of LaMDA, which has far lower system requirements than its full-specced brethren, for a select group of trusted users before scaling up from there. "We’ll combine external feedback with our own internal testing to make sure Bard’s responses meet a high bar for quality, safety and groundedness in real-world information," Pichai said. "We’re excited for this phase of testing to help us continue to learn and improve Bard’s quality and speed."
Chatting with internet users is only the next step in Google's larger AI machinations. Pichai notes that as user search requests become more complex and nuanced, "you’ll see AI-powered features in Search that distill complex information and multiple perspectives into easy-to-digest formats, so you can quickly understand the big picture and learn more from the web." He added that such features would be rolling out to users "soon." The commercial API running atop LaMDA, dubbed the Generative Language API, will begin inviting select developers to explore the system starting next month.
For a product that its own creators, in a marketing pique, once declared “too dangerous” to release to the general public, OpenAI’s ChatGPT is seemingly everywhere these days. The versatile automated text generation (ATG) system, which is capable of outputting copy that is nearly indistinguishable from a human writer’s work, is officially still in beta but has already been utilized in dozens of novel applications, some of which extend far beyond the roles ChatGPT was originally intended for — like that time it simulated an operational Linux shell or that other time when it passed the entrance exam to Wharton Business School.
But these technical advancements come with a slew of opportunities for misuse and outright harm. And if our previous ham-fisted attempts at handling the spread of deepfake video and audio technologies were any indication, we’re dangerously underprepared for the havoc that at-scale, automated disinformation production will wreak upon our society.
OpenAI’s billion dollar origin story
OpenAI has been busy since its founding in 2015 as a non-profit by Sam Altman, Peter Thiel, Reid Hoffman, Elon Musk and a host of other VC luminaries, who all collectively chipped in a cool billion dollars to get the organization up and running. The “altruistic” venture argues that AI “should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible.”
The following year, the company released its first public beta of the OpenAI Gym reinforcement learning platform. Musk resigned from the board in 2018, citing a potential conflict of interest with his ownership of Tesla. 2019 was especially eventful for OpenAI. That year, the company established a “capped” for-profit subsidiary (OpenAI LP) of the original non-profit organization (OpenAI Inc), received an additional billion-dollar funding infusion from Microsoft and announced plans to begin licensing its products commercially.
In 2020, OpenAI officially launched GPT-3, a text generator able to “summarize legal documents, suggest answers to customer-service enquiries, propose computer code [and] run text-based role-playing games.” The company released its commercial API that year as well.
“I have to say I’m blown away,” startup founder Arram Sabeti wrote at the time, after interacting with the system. “It’s far more coherent than any AI language system I’ve ever tried. All you have to do is write a prompt and it’ll add text it thinks would plausibly follow. I’ve gotten it to write songs, stories, press releases, guitar tabs, interviews, essays, technical manuals. It’s hilarious and frightening. I feel like I’ve seen the future.”
2021 saw the release of DALL-E, a text-to-image generator; and the company made headlines again last year with the release of ChatGPT, a chat client based on GPT-3.5, the latest and current GPT iteration. In January 2023, Microsoft and OpenAI announced a deepening of their research cooperative with a multi-year, multi-billion-dollar ongoing investment.
“I think it does an excellent job at spitting out text that's plausible,” Dr. Brandie Nonnecke, Director of the CITRIS Policy Lab and Associate Professor of Technology Policy Research at UC Berkeley, told Engadget. “It feels like somebody really wrote it. I've used it myself actually to kind of get over a writer's block, to just think through how I flow in the argument that I'm trying to make, so I found it helpful.”
That said, Nonnecke cannot look past the system’s stubborn habit of producing false claims. “It will cite articles that don't exist,” she added. “Right now, at this stage, it's realistic but there's still a long way to go.”
What is generative AI?
OpenAI is far from the only player in the ATG game. Generative AI (or, more succinctly, gen-AI) is the practice of using machine learning algorithms to produce novel content — whether that’s text, images, audio, or video — based on a training corpus of labeled example databases. It’s your standard unsupervised reinforcement learning regimen, the likes of which have trained Google’s AlphaGo, song and video recommendation engines across the internet, as well as vehicle driver assist systems. Of course while models like Stability AI’s Stable Diffusion or Google’s Imagen are trained to convert progressively higher resolution patterns of random dots into images, ATGs like ChatGPT remix text passages plucked from their training data to output suspiciously realistic, albeit frequently pedestrian, prose.
“They're trained on a very large amount of input,” Dr. Peter Krapp, Professor of Film & Media Studies at the University of California, Irvine, told Engadget. “What results is more or less… an average of that input. It's never going to impress us with being exceptional or particularly apt or beautiful or skilled. It's always going to be kind of competent — to the extent that we all collectively are somewhat competent in using language to express ourselves.”
Generative AI is already big business. While flashy events like Stable Diffusion’s maker getting sued for scraping training data from Meta or ChatGPT managing to schmooze its way into medical school (yes, in addition to Wharton) grab headlines, Fortune 500 companies like NVIDIA, Facebook, Amazon Web Services, IBM and Google are all quietly leveraging gen-AI for their own business benefit. They’re using it in a host of applications, from improving search engine results and proposing computer code to writing marketing and advertising content.
The secret to ChatGPT’s success
Efforts to get machines to communicate with us as we do with other people, as Dr. Krapp notes, began in the 1960s and ‘70s with linguists being among the earliest adopters. “They realized that certain conversations can be modeled in such a way that they're more or less self-contained,” he explained. “If I can have a conversation with, you know, a stereotypical average therapist, that means I can also program the computer to serve as the therapist.” Which is how ELIZA became an NLP easter egg hidden in Emacs, the popular text editor.
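ELIZA-style programs are surprisingly easy to sketch. The toy below is illustrative only; it is not Weizenbaum's original 1960s program, and its three rules are invented. But it captures the trick: match a pattern in what the user says and reflect it back as a therapist-style question.

```python
# A toy ELIZA-style responder, illustrative only (not the original 1960s program).
import re

RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def respond(statement: str) -> str:
    """Return the first rule's reflection, or a stock prompt if nothing matches."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please, go on."

print(respond("I am feeling anxious about my typewriter"))
# -> "How long have you been feeling anxious about my typewriter?"
```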
Today, we use the technological descendants of those early efforts to translate the menus at fancy restaurants for us, serve as digital assistants on our phones, and chat with us as customer service reps. The problem, however, is that to get an AI to perform any of these functions, it has to be specially trained to do that one specific thing. We’re still years away from functional general AIs, but part of ChatGPT’s impressive capability stems from its ability to write middling poetry as easily as it can generate a fake set of Terms of Service for the Truth Social website in the voice of Donald Trump, without the need for specialized training between the two.
This prosaic flexibility is possible because, at its core, ChatGPT is a chatbot. It’s designed first and foremost to accurately mimic a human conversationalist, which it actually did on Reddit for a week in 2020 before being outed. It was trained using supervised learning methods wherein human trainers initially fed the model both sides of a given conversation — both what the human user and the AI agent were supposed to say. With the basics in its robomind, ChatGPT was then allowed to converse with humans, with its responses being ranked after each session. Subjectively better responses scored higher in the model’s internal rewards system and were subsequently optimized for. This has resulted in an AI with a silver tongue but a “just sorta skimmed the Wiki before chiming in” approach to fact-checking.
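For readers curious what "ranking responses" looks like in practice, here is a minimal sketch of a pairwise reward-model training step, assuming PyTorch and stand-in embeddings. It is not OpenAI's implementation, and it glosses over the reinforcement learning stage that follows; the point is only that a scoring model is nudged to rate the reply a human preferred above the one they rejected.

```python
# Minimal pairwise reward-model sketch (assumes PyTorch); not OpenAI's actual training code.
import torch
import torch.nn.functional as F

reward_model = torch.nn.Linear(768, 1)  # stand-in: a real reward model scores text embeddings
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-4)

# Fake embeddings for a human-preferred reply and a rejected reply to the same prompt.
preferred = torch.randn(1, 768)
rejected = torch.randn(1, 768)

for _ in range(100):
    r_pref = reward_model(preferred)
    r_rej = reward_model(rejected)
    # Bradley-Terry style ranking loss: push the preferred reply's score above the rejected one's.
    loss = -F.logsigmoid(r_pref - r_rej).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The trained reward model then scores new replies during the reinforcement learning stage.
```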
Part of ChatGPT’s boisterous success — having garnered a record 100 million monthly active users just two months after its launch — can certainly be chalked up to solid marketing strategies such as the “too dangerous” neg of 2020, Natasha Allen, a partner at Foley & Lardner LLP, told Engadget. “I think the other part is just how easy it is to use it. You know, the average person can just plug in some words and there you go.”
“People who previously hadn’t been interested in AI, didn't really care what it was,” are now beginning to take notice. Its ease of use is an asset, Allen argues, making ChatGPT “something that's enticing and interesting to people who may not be into AI technologies.”
“It's a very powerful tool,” she conceded. “I don't think it's perfect. I think that obviously there are some errors but… it'll get you 70 to 80 percent of the way.”
Will ChatGPT be Microsoft’s Tay for a new generation?
But a lot can go wrong in those last 20 to 30 percent, because ChatGPT doesn’t actually know what the words it’s remixing into new sentences mean; it just understands the statistical relationships between them. “The GPT-3 hype is way too much,” Sam Altman, OpenAI’s chief executive, warned in a July 2020 tweet. “It’s impressive but it still has serious weaknesses and sometimes makes very silly mistakes.”
Those “silly” mistakes range from making nonsensical comparisons like “A pencil is heavier than a toaster” to the racist bigotry we’ve seen with past chatbots like Tay — well, really, all of them to date if we’re being honest. Some of ChatGPT’s replies have even encouraged self-harm in its users, raising a host of ethical quandaries (not least: should AI byline scientific research?) for both the company and the field as a whole.
“I'm worried because if we have deep fake video and voice, tying that with ChatGPT, where it can actually write something mimicking the style of how somebody speaks,” Nonnecke said. “Those two things combined together are just a powder keg for convincing disinformation.”
“I think it's gasoline on the fire, because people write and speak in particular styles,” she continued. “And that can sometimes be the tell — if you see a deepfake and it just doesn't sound right, the way that they're talking about something. Now, GPT very much sounds like the individual, both how they would write and speak. I think it's actually amplifying the harm.”
The current generation of celebrity impersonating chatbots aren’t what would be considered historically accurate (Henry Ford’s avatar isn’t antisemitic, for example) but future improvements could nearly erase the lines between reality and created content. “The first way it's going to be used is very likely to commit fraud,“ Nonnecke said, noting that scammers have already leveraged voice cloning software to pose as a mark’s relative and swindle money from them.
“The biggest challenge is going to be how do we appropriately address it, because those deepfakes are out. You already have the confusion,” Nonnecke said. “Sometimes it's referred to as the liar's dividend: nobody knows if it's true, then sort of everything's a lie, and nothing can be trusted.”
ChatGPT goes to college
ChatGPT is raising hackles across academia as well. The text generator has notably passed the written portion of Wharton Business School’s entrance exam, along with all three parts of the US Medical Licensing exam. The response has been swift (as most panicked scramblings in response to new technologies tend to be) but widely varied. The New York City public school system took the traditional approach, ineffectually “banning” the app’s use by students, while educators like Dr. Ethan Mollick, associate professor at the University of Pennsylvania's prestigious Wharton School, have embraced it in their lesson plans.
"This was a sudden change, right? There is a lot of good stuff that we are going to have to do differently, but I think we could solve the problems of how we teach people to write in a world with ChatGPT," Mollick told NPR in January.
"The truth is, I probably couldn't have stopped them even if I didn't require it," he added. Instead, Mollick has his students use ChatGPT as a prompt and idea generator for their essay assignments.
UCI’s Dr. Krapp has taken a similar approach. “I'm currently teaching a couple of classes where it was easy for me to say, ‘okay, here's our writing assignment, let's see what ChatGPT comes up with,’” he explained. “I did the five different ways with different prompts or partial prompts, and then had the students work on, ‘how do we recognize that this is not written by a human and what could we learn from this?’”
Is ChatGPT coming for your writing job?
At the start of the year, tech news site CNET was outed for having used an ATG of its own design to generate entire feature-length financial explainer articles — 75 in all since November 2022. The posts were supposedly “rigorously” fact checked by human editors to ensure their output was accurate, though cursory examinations uncovered rampant factual errors requiring CNET and its parent company, Red Ventures, to issue corrections and updates for more than half of the articles.
BuzzFeed’s chief, Jonah Peretti, upon seeing the disastrous fallout CNET was experiencing from this computer generated dalliance, immediately decided to stick his tongue in the outlet too, announcing that his publication plans to employ gen-AI to create low-stakes content like personality quizzes.
This news came mere weeks after BuzzFeed laid off a sizable portion of its editorial staff on account of “challenging market conditions.” The coincidence is hard to ignore, especially given the waves of layoffs currently rocking the tech and media sectors for that specific reason, even as the conglomerates themselves bathe in record revenue and earnings.
This is not the first time that new technology has displaced existing workers. NYT columnist Paul Krugman points to coal mining as an example. The industry saw massive workforce reductions throughout the 20th century, not because our use of coal decreased, but because mining technologies advanced enough that fewer humans were needed to do the same amount of work. The same effect is seen in the automotive industry with robots replacing people on assembly lines.
“It is difficult to predict exactly how AI will impact the demand for knowledge workers, as it will likely vary, depending on the industry and specific job tasks,” Krugman opined. “However, it is possible that in some cases, AI and automation may be able to perform certain knowledge-based tasks more efficiently than humans, potentially reducing the need for some knowledge workers.”
However, Dr. Krapp is not worried. “I see that some journalists have said, ‘I'm worried. My job has already been impacted by digital media and digital distribution. Now the type of writing that I do well could be done by computer for cheap much more quickly,’” he said. “I don't see that happening. I don't think that's the case. I think we still, as humans, have a need — a desire — for recognizing in others what's human about them.”
“[ChatGPT is] impressive. It's fun to play with, [but] we're still here,” he added, “We're still reading, it's still meant to be a human size interface for human consumption, for human enjoyment.”
Fear not, for someone is sure to save us, probably
ChatGPT’s shared-reality shredding fangs will eventually be capped, Nonnecke is confident, whether by Congress or by the industry itself in response to public pressure. “I actually think that there's bipartisan support for this, which is interesting in the AI space,” she told Engadget. “And in data privacy, data protection, we tend to have bipartisan support.”
She points to efforts in 2022 spearheaded by OpenAI Safety and Alignment researcher Scott Aaronson to develop a cryptographic watermark so that the end user could easily spot computer generated material, as one example of the industry’s attempts to self-regulate.
“Basically, whenever GPT generates some long text, we want there to be an otherwise unnoticeable secret signal in its choices of words, which you can use to prove later that, yes, this came from GPT,” Aaronson wrote on his blog. “We want it to be much harder to take a GPT output and pass it off as if it came from a human. This could be helpful for preventing academic plagiarism, obviously, but also, for example, mass generation of propaganda.”
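As an illustration of how such a check could work, here is a generic sketch in the spirit of published "green list" watermarking proposals, not OpenAI's actual scheme. Suppose the generator secretly favors a keyed subset of words; anyone holding the key can count how often a text draws from that subset and ask whether the excess is too large to be chance.

```python
# Illustrative statistical watermark detector (generic "green list" idea), not OpenAI's method.
import hashlib
import math

def in_green_list(token: str, key: str = "secret-key", fraction: float = 0.5) -> bool:
    """Deterministically assign each token to the keyed 'green' subset via a hash."""
    digest = hashlib.sha256((key + token).encode()).hexdigest()
    return int(digest, 16) % 100 < fraction * 100

def watermark_z_score(text: str, fraction: float = 0.5) -> float:
    """How many standard deviations the green-token count sits above pure chance."""
    tokens = text.split()
    greens = sum(in_green_list(t) for t in tokens)
    n = len(tokens)
    expected = n * fraction
    std = math.sqrt(n * fraction * (1 - fraction))
    return (greens - expected) / std if std else 0.0

# A high z-score suggests the text came from a generator that knew the key;
# ordinary human prose should hover near zero.
print(watermark_z_score("the quick brown fox jumps over the lazy dog"))
```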
The efficacy of such a safeguard remains to be seen. “It's very much whack-a-mole, right now,” Nonnecke exclaimed. “It's the company themselves making that [moderation] decision. There's no transparency in how they're deciding what types of prompts to block or not block, which is very concerning to me.”
“Somebody's going to use this to do terrible things,” she said.
The science of grafting skin has come a long way from the days of scraping it off one part of a patient's body and slapping it back on somewhere else to cover a nasty burn or injury. These days grafts are commonly bioprinted like living inkjets using the patient's cultured cells to seed the growing process, down to the vascularization. The primary shortcoming of these printed grafts is that they can only be produced in flat sheets with open edges. This method "disregard[s] the fully enclosed geometry of human skin," argue a team of researchers from Columbia University. Instead, they've devised a novel means of producing skin in virtually any complex 3D shape they need — from ears and elbows to entire hands printed like a pair of Buffalo Bill's mittens.
The team published their findings, "Engineering edgeless human skin with enhanced biomechanical properties," in the January issue of Science Advances. They explained how they engineered "the skin as a fully enclosed 3D tissue that can be shaped after a body part and seamlessly transplanted as a biological clothing."
“Three-dimensional skin constructs that can be transplanted as ‘biological clothing’ would have many advantages,” Dr. Hasan Erbil Abaci, lead researcher and assistant professor of dermatology at Columbia University, said in a recent press release. “They would dramatically minimize the need for suturing, reduce the length of surgeries, and improve aesthetic outcomes.”
What's more, these uniform grafts have shown superior mechanical and functional performance compared to their patchwork alternatives. The Columbia team has dubbed the grafts "wearable edgeless skin constructs" (WESCs). Ok, but can you eat them?
The process of making these skin prosthetics isn't that far off from the existing techniques that result in flat slabs of skin. The transplant site is first scanned with a 3D laser to create a digital facsimile of the structure. That data is run through a CAD program to generate a hollow wireframe of the appendage, which is then printed. This serves as the scaffolding on which the patient's cultured cells will grow. It's coated with skin fibroblasts and collagen, then covered by an outer layer of keratinocytes (which make up the epidermis) and a growth medium to feed the cells as they mature. As with making flat sheets, the entire process requires around three weeks for the cells to fully set up and be ready for transplant.
Initial lab tests with mouse models were encouraging. “It was like putting a pair of shorts on the mice,” Abaci said. “The entire surgery took about 10 minutes.” Don't get too excited, mouse skin is not people skin. It heals differently enough that additional animal studies will be required before we start trying it on humans. Such tests are likely still years away.
There was a time in the last century when we, quite foolishly, believed incineration to be a superior means of waste disposal to landfills. And, for decades, many of America's most disadvantaged have been paying for those decisions with their lifespans. South Baltimore's Curtis Bay neighborhood, for example, is home to two medical waste incinerators and an open-air coal terminal. It's ranked in the 95th percentile for hazardous waste and boasts among the highest rates of asthma and lung disease in the entire country.
The city's largest trash incinerator is the Wheelabrator–BRESCO, which burns through 2,250 tons of garbage a day. It has been in operation since the 1970s, belching out everything from mercury and lead to hydrochloric acid, sulfur dioxide, and chromium into the six surrounding working-class neighborhoods and the people who live there. In 2011, students from Benjamin Franklin High School began to push back against the construction of a new incinerator, setting off a decade-long struggle that pitted high school and college students against the power of City Hall.
In Fighting to Breathe: Race, Toxicity, and the Rise of Youth Activism in Baltimore, Dr. Nicole Fabricant, Professor of Anthropology at Towson University in Maryland, chronicles the students' participatory action research between 2011 and 2021, organizing and mobilizing their communities to fight back against a century of environmental injustice, racism and violence in one of the nation's most polluted cities. In the excerpt below, Fabricant discusses the use of art — specifically that of crankies — in movement building.
Making Connections: Fairfield Houses and Environmental Displacement
As the students developed independent investigations, they discovered what had happened in the campaigns against toxins that preceded their own struggle against the incinerator. They learned that the Fairfield neighborhood, before being relocated to its current site, had been situated near where Energy Answers was planning to build their trash-to-energy incinerator. At the time of the students’ investigations, this area was an abandoned industrial site surrounded by heavy diesel truck traffic, polluting chemical and fertilizer industries, and abandoned brownfield sites.
Students read that the City had built basic infrastructure in Wagner’s Point, the all-white (though poor and white ethnic, to be clear) community on the peninsula in the 1950s, nearly thirty years before doing so in Fairfield, which was located alongside Wagner’s Point but all (or almost all) Black. As Destiny reiterated to me in the Fall of 2019:
Wagner’s Point was predominantly white and Fairfield predominantly Black, but both communities were company towns, living in poverty, working in dangerous hazardous conditions, and forced to live in a toxic environment.... On the surface, this history can be read as a story of two communities, different in culture and race, facing the issue together. But this ignores the issue of racism that divided the two communities. For instance, Fairfield did not get access to plumbing... until well into the 1970s. This is an example of structural racism. It is also a story not told by our history books.
The students talked in small groups about systemic and structural racism and unfair housing policies. They investigated the evacuation of Fairfield Housing. They learned that former residents were forcibly relocated to public housing and were offered $22,500 for renters and up to $5,250 per household. They also received moving costs of up to $1,500 per household. When 14 households remained in Fairfield a decade later, then-Mayor Kurt Schmoke stated that he would prefer to move all residents out of Fairfield, but the city did not have any money for relocation. This history provoked Free Your Voice youth to think beyond their community to how structural racism shaped citywide decisions and policies.
Despite attempts to integrate school systems in the 1950s and the passage of civil rights legislation in the 1960s intended, specifically, to mitigate racism in housing policies, the provision of public education and the regulation of housing practices remained uneven in the 1970s (and into the present). Students learned that in 1979 a CSX railroad car carrying nine thousand gallons of highly concentrated sulfuric acid overturned and the Fairfield Homes public housing complex was temporarily evacuated. That same year, they read, an explosion at the British Petroleum oil tank, located on Fairfield Peninsula, set off a seven-alarm fire. All of this led the students to deeper inquiry.
Figuring out the ways in which structural racism shaped contemporary ideas about people, bodies, and space is something that Destiny often referred to when speaking publicly. Destiny explained that studying “history allowed us to see our community in a way that gave us the ability to build power or collective strength. So, how do you confront this history, this marketplace?” Building power within the school was about “re-education,” she said, but it was also about rebuilding social relationships across the community and helping residents to understand the structural conditions and histories sustaining inequities that others (especially white others) tried to explain away using racist stereotypes and tropes (e.g., Black youth as “thugs”; “they’re poor because they’re lazy”). These tropes subtly and not so subtly suggested racial and cultural inferiority.
As a group, the students worked to establish a presence in the community and to create spontaneous spaces for dialogue and discussion. They attended a Fairfield reunion in Curtis Bay Park during the summer of 2013, where approximately 150 former Fairfield Homes residents gathered to celebrate their history, reminisce, and have a cookout together. Gathered on the grass next to the Curtis Bay Recreation Center, former residents reminisced about what life was like in the projects. At one point, an elder participant shared with Destiny, “Fairfield was the Cadillac of housing projects.... We were all a family, we took care of one another.” The Free Your Voice students engaged with living history as they listened and learned.
For many of the students, the combined processes of reading texts and listening to elder residents’ stories moved them from numbness to awareness. Being able to discuss what they learned in sophisticated conversations with their peers and the experts they sought out helped to build their confidence as activists and adult interlocutors.
Arts and Performance in Movement Building: The Crankie
While analysis and study were key to building change campaigns, the students also recognized that building a sociopolitical movement of economically disadvantaged people required more than mobilizing bodies. To be effective, they were going to have to move hearts and minds.
In 2014, Free Your Voice students decided to strengthen the emotional and relationship building aspects of their campaign by adopting art forms, including performance and storytelling, into their communication efforts. Destiny began a speech she delivered at The Worker Justice Center human rights dinner in 2015 by quoting W.E.B. Dubois: “‘Art is not simply works of art; it is the spirit that knows beauty, that has music in its being and the color of sunsets in its handkerchiefs, that can dance on a flaming world and make the world dance, too’” (Watford 2015). Art — in the form of a vintage performance genre known as “the crankie” and rap songs — became a tool the students utilized to tell their stories to much broader publics and to boost emotional connections with their allies. Performances particularly allowed youth to be creative and inventive. Their productions were often malleable. Sometimes, Free Your Voice youth would rewrite a script based on audience feedback. As a result, their performances were often improvisational, and they invited residents to be a part of the storytelling. This allowed the student-performers to develop strong narrative structures and especially realistic characters.
Not only did students do art, but they also invited artists, including performers, to join the Dream Team to broaden both the appeal and impact of the Stop the Incinerator campaign. One artist at the Maryland Institute College of Art, Janette Simpson, spoke to me at length about the genesis of her commitment to Free Your Voice’s organizing, and how that commitment deepened and extended her work with other campaigns originating with The Worker Justice Center. Free Your Voice students approached Simpson, with their teacher Daniel Murphy acting as their mediator, about incorporating her work in theater into their campaign. They sent her a recent report on the environmental history of the peninsula and asked that she read it. That report became the hook that convinced Simpson to collaborate:
I had been thinking about how art and artists can serve social movements, and how artists also have agency in the making of their artwork. Or maybe thinking about autonomy. Free Your Voice youth suggested I read the Diamond report, which was written by a team of researchers from the University of Maryland Law School. I remember being like, Wow! What a story! All these visuals came to my mind... like the guano factories, the ships, these agricultural communities, this Black community versus the white community... the relationship to the water and the relationship to the city. So I decided I would try to illustrate a version of that report in a way. Like, what did people look like in 1800s, and what were they wearing? ... Then I realized that this is not my history, who am I to tell someone else’s story? I need to think more symbolically, and then it came to me to write this illustrative history as a fable or an allegory.
Which is what she did, alongside Terrel Jones (whose childhood lived experiences I detailed in chapter 2). Terrel and Simpson created a crankie, an old storytelling art form popular in the nineteenth century that includes a long, illustrated scroll wound onto two spools. The spools are loaded into a box that has a viewing screen and the scroll is then hand-cranked, hence the name “crankie.” While the story is told, a tune is played or a song is sung. Terrel and Simpson created a show for the anti-incinerator campaign that was performed throughout the city for audiences of all ages and walks of life. The Holey Land, as their show was titled, was an allegory about the powerful connection between people and the place they call home. In this tale, the Peninsula People and the magic in their land are threatened when a stranger with a tall hat and a shovel shows up with big ideas for “improving” their community. As storybook images scroll past the viewing screen, the vibrant and colorful pictures of a peninsula rich in natural resources, including orange and pink fish, slowly get usurped by those of the man with the shovel building his factories, and the Peninsula People are left to ponder the fate of their land. The story ends with a surprising twist, and a hopeful message about a community’s ability to determine their own future.
"An unwavering commitment to innovation has consistently guided Mercedes-Benz from the very beginning," Dimitris Psillakis, President and CEO of MBUSA, said in Thursday's press statement. "It is a very proud moment for everyone to continue this leadership and celebrate this monumental achievement as the first automotive company to be certified for Level 3 conditionally automated driving in the US market."
Level 3 capabilities, as defined by the National Highway Traffic Safety Administration (NHTSA), would enable the vehicle to handle "all aspects of the driving" when engaged, but still require the driver to stay attentive enough to promptly take control if necessary. That's a big step up from the Level 2 systems we see today, such as Tesla's "Full Self-Driving," Ford's BlueCruise, and GM's Super Cruise. All of those are essentially extra-capable highway cruise controls where the driver must maintain their attention on driving, typically keeping their hands on or at least near the wheel, and remains responsible for what the ADAS is doing while it's doing it. That's a far cry from the Knight Rider-esque ADAS future Tesla is selling, and from what Level 2 autonomy is actually capable of.
Mercedes' Drive Pilot system can, "on suitable freeway sections and where there is high traffic density," according to the company, take over the bumper-to-bumper crawling duties up to 40 MPH without the driver needing to keep their hands on the wheel. When engaged, the system handles lane-keeping duties, stays with the flow of traffic, navigates to destinations programmed into the Nav system, and will even react to "unexpected traffic situations and handles them independently, e.g. by evasive maneuvers within the lane or by braking maneuvers."
To perform these feats, the Drive Pilot system relies on a suite of sensors embedded throughout the vehicle including visual cameras, LiDAR arrays, radar and ultrasound sensors, and audio mics to keep an ear out for approaching emergency vehicles. The system even compares its onboard sensor data with what it is receiving from its GPS to ensure it knows exactly where on the road it actually is.
Drive Pilot is only available on the 2024 S-Class and EQS Sedan for now. Those are already in production and the first cars should reach the Vegas strip in the second half of this year.
CNET's AI SNAFU turned out to be merely the first pebble kicked down the slippery slope. In a Thursday morning internal memo acquired by the Wall Street Journal, BuzzFeed Chief Executive Jonah Peretti announced plans to embrace AI in both editorial and business operations and utilize text generation systems similar to CNET's to produce, for example, the memeable quizzes that originally built BuzzFeed's following.
Such AI-powered quizzes could provide more personalized answers based on the user's more specific responses rather than based on a score range or ranked choice system like they are today. Peretti envisions AI not only producing content on its own but drawing inspiration from human writers. We squishy meat sacks would serve as idea sources for AI text generators, or as Peretti described members of his own species, “cultural currency” and “inspired prompts.” He further argues that within the next two decades, AI will "create, personalize, and animate the content itself” rather than simply regurgitate (read: plagiarize) already existing works.
On one hand, this seems a foolhardy move given the Low Orbit Ion Cannon-level fallout that CNET's reputation has suffered since news broke that it had employed AI text generation systems to produce nearly 75 financial explainers since late 2022. More than half of those posts had to be updated on account of shoddy math (you'd think a computer would be better at that), plus the whole plagiarism thing. On the other hand, a lot of the flak that CNET took in the early days of the controversy was because it had tried to be cute and sneak in the fact that it was having chatbots write entire feature posts without actually telling anybody. Peretti's announcement Thursday at least does that.
Of course, BuzzFeed is far from alone in the burgeoning "pivot to ChatGPT" movement. Microsoft announced this week a multiyear, "multi-billion dollar" investment in OpenAI's text generation systems — exactly two days before announcing that, despite $52.7 billion in Q2 revenue, it would be laying off 10,000 people on account of challenging macroeconomic conditions.