Sundance documentary Eternal You shows how AI companies are ‘resurrecting’ the dead

A woman has a text chat with her long-dead lover. A family gets to hear a deceased elder speak again. A mother gets another chance to say goodbye to her child, who died suddenly, via a digital facsimile. This isn't a preview of the next season of Black Mirror — these are all true stories from the Sundance documentary Eternal You, a fascinating and frightening dive into tech companies using AI to digitally resurrect the dead.

It's yet another way modern AI, from large language models like the one behind ChatGPT to more bespoke systems, has the potential to transform society. And as Eternal You shows, the AI afterlife industry is already having a profound effect on its early users.

The film opens on a woman having a late night text chat with a friend: "I can't believe I'm trying this, how are you?" she asks, as if she's using the internet for the first time. "I'm okay. I'm working, I'm living. I'm... scared," her friend replies. When she asks why, they reply, "I'm not used to being dead."


It turns out the woman, Christi Angel, is using the AI service Project December to chat with a simulation of her first love, Cameroun, who died many years ago. Angel is clearly intrigued by the technology, but as a devout Christian, she's also a bit spooked by the prospect of raising the dead. The AI system eventually gives her reasons to be concerned: Cameroun reveals that he's not in heaven, as she assumed. He's in hell.

"You're not in hell," she writes back. "I am in hell," the AI chatbot insists. The digital Cameroun says he's in a "dark and lonely" place, his only companions are "mostly addicts." The chatbot goes on to say he's currently haunting a treatment center and later suggests "I'll haunt you." That was enough to scare Angel and question why she was using this service in the first place.

While Angel was aware she was talking to a digital recreation of Cameroun, built from the information she provided to Project December, she interacted with the chatbot as if she were actually chatting with him on another plane of existence. That's a situation many users of AI resurrection services will likely encounter: An emotional response can easily overwhelm rationality while "speaking" with a dead loved one, even if the conversation is only happening over text.

In the film, MIT sociologist Sherry Turkle suggests that our current understanding of how AI affects people is similar to our relationship with social media over a decade ago. That makes it a good time to ask questions about the human values and purposes it's serving, she says. If we had a clearer understanding of social media early on, maybe we could have pushed Facebook and Twitter to confront misinformation and online abuse more seriously. (Perhaps the 2016 election would have looked very different if we were aware of how other countries could weaponize social media.)


Eternal You also introduces us to Joshua Barbeau, a freelance writer who became a bit of an online celebrity in 2021 when The San Francisco Chronicle reported on his Project December chatbot: a digital version of his late fiancée, Jessica. At first, he used Project December to chat with pre-built bots, but he eventually realized he could use the underlying technology (GPT-3, at the time) to create one with Jessica's personality. Their conversations look natural and clearly comfort Barbeau. But we're still left wondering whether chatting with a facsimile of his dead fiancée is actually helping Barbeau process his grief. It could just as easily be seen as a crutch that he feels compelled to pay for.

It's also easy to be cynical about these tools, given what we see from their creators in the film. We meet Jason Rohrer, the founder of Project December and a former indie game designer, who comes across as a typical techno-libertarian.

"I believe in personal responsibility," he says, after also saying that he's not exactly in control of the AI models behind Project December, and right before we see him nearly crash a drone into his co-founders face. "I believe that consenting adults can use that technology however they want and they're responsible for the results of whatever they're doing. It's not my job as the creator of the technology to prevent the technology from being released, because I'm afraid of what somebody might do with it."

But, as MIT's Turkle points out, reanimating the dead via AI introduces moral questions that engineers like Rohrer likely aren't considering. "You're dealing with something much more profound in the human spirit," she says. "Once something is constituted enough that you can project onto it, this life force. It's our desire to animate the world, which is human, which is part of our beauty. But we have to worry about it, we have to keep it in check. Because I think it's leading us down a dangerous path."


Another service, Hereafter.ai, lets users record stories to create a digital avatar of themselves, which family members can talk to now or after that person dies. One woman was eager to hear her father's voice again, but when she presented the avatar to her family, the reaction was mixed. Younger folks seemed intrigued, but the older generation didn't want any part of it. "I fear that sometimes we can go too far with technology," her father's sister said. "I would just love to remember him as a person who was wonderful. I don't want my brother to appear to me. I'm satisfied knowing he's at peace, he's happy, and he's enjoying the other brothers, his mother and father."

YOV, an AI company that also focuses on personal avatars, or "Versonas," wants people to have seamless communication with their dead relatives across multiple channels. But, like all of these other digital afterlife companies, it runs into the same moral dilemmas. Is it ethical to digitally resurrect someone, especially if they didn't agree to it? Is the illusion of speaking to the dead more helpful or harmful for those left behind?

The most troubling sequence in Eternal You focuses on a South Korean mother, Jang Ji-sun, who lost her young child and remains wracked with guilt about not being able to say goodbye. She ended up being the central subject in a VR documentary, Meeting You, which was broadcast in South Korea in early 2020. She went far beyond a mere text chat: Jang donned a VR headset and confronted a startlingly realistic model of her child in virtual reality. The encounter was clearly moving for Jang, and the documentary received plenty of media attention at the time.

"There's a line between the world of the living and the world of the dead," said Kim Jong-woo, the producer behind Meeting You. "By line, I mean the fact that the dead can't come back to life. But people saw the experience as crossing that line. After all, I created an experience in which the beloved seemed to have returned. Have I made some huge mistake? Have I broken the principle of humankind? I don't know... maybe to some extent."

Eternal You paints a haunting portrait of an industry that's already revving up to capitalize on grief-stricken people. That's not exactly new; psychics and people claiming to speak to the dead have been around for as long as civilization itself. But through AI, we now have the ability to reanimate those lost souls. While that might be helpful for some, we're clearly not ready for a world where AI resurrection is commonplace.

This article originally appeared on Engadget at https://www.engadget.com/sundance-documentary-eternal-you-shows-how-ai-companies-are-resurrecting-the-dead-153025316.html?src=rss

AI is coming for big pharma

If there’s one thing we can all agree upon, it’s that the 21st century’s captains of industry are trying to shoehorn AI into every corner of our world. But for all the ways AI will be shoved into our faces without proving very successful, it might actually have at least one useful purpose: dramatically speeding up the often decades-long process of designing, finding and testing new drugs.

Risk mitigation isn’t a sexy notion but it’s worth understanding how common it is for a new drug project to fail. To set the scene, consider that each drug project takes between three and five years just to form a hypothesis strong enough to start tests in a laboratory. A 2022 study from Professor Duxin Sun found that 90 percent of clinical drug development fails, with each project costing more than $2 billion. And that number doesn’t even include compounds found to be unworkable at the preclinical stage. Put simply, every successful drug has to prop up at least $18 billion of waste generated by its unsuccessful siblings, which all but guarantees that less lucrative cures for rarer conditions aren’t given as much focus as they may need.
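
To see where that $18 billion figure comes from, here’s a quick back-of-the-envelope sketch (in Python, purely for illustration) using the article’s two assumptions: a 90 percent failure rate and a cost of roughly $2 billion per project.

```python
# Back-of-the-envelope arithmetic behind the "$18 billion of waste per
# approved drug" figure. Both inputs are the article's assumptions.
failure_rate = 0.9              # ~90% of clinical drug projects fail
cost_per_project_usd = 2e9      # each project costs upwards of $2 billion

# For every approved drug, roughly nine projects fail (0.9 / 0.1).
failed_per_success = failure_rate / (1 - failure_rate)
wasted_per_success = failed_per_success * cost_per_project_usd

print(f"Failed projects per approved drug: {failed_per_success:.0f}")
print(f"Implied waste per approved drug: ${wasted_per_success / 1e9:.0f} billion")
```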

Dr. Nicola Richmond is VP of AI at Benevolent, a biotech company using AI in its drug discovery process. She explained that the classical approach tasks researchers with finding, for example, a misbehaving protein – the cause of a disease – and then finding a molecule that could make it behave. Once they’ve found one, they need to get that molecule into a form a patient can take, and then test whether it’s both safe and effective. The journey to clinical trials on living human patients takes years, and it’s often only then that researchers find out that what worked in theory doesn’t work in practice.

The current process takes “more than a decade and multiple billions of dollars of research investment for every drug approved,” said Dr. Chris Gibson, co-founder of Recursion, another company in the AI drug discovery space. He says AI’s great skill may be to dodge the misses and help researchers avoid spending too long running down blind alleys. A software platform that can churn through hundreds of options at a time can, in Gibson’s words, “fail faster and earlier so you can move on to other targets.”


Dr. Anne E. Carpenter is the founder of the Carpenter-Singh laboratory at the Broad Institute of MIT and Harvard. She has spent more than a decade developing techniques in Cell Painting, a way of highlighting elements in cells with dyes so they can be read by a computer. She is also the co-developer of CellProfiler, a platform that lets researchers use AI to scrub through vast troves of images of those dyed cells. Combined, this work makes it easy for a machine to see how cells change when they are affected by disease or a treatment. And by looking at every part of the cell holistically – a discipline known as “omics” – there are greater opportunities to make the sort of connections that AI systems excel at.

Using pictures as a way of identifying potential cures seems a little left-field, since how things look doesn’t always represent how things actually are, right? But Carpenter said humans have always made subconscious assumptions about medical status from sight alone. She explained that most people might conclude someone has a chromosomal condition just by looking at their face, and professional clinicians can identify a number of disorders by sight alone purely as a consequence of their experience. She added that if you took a picture of everyone’s face in a given population, a computer would be able to identify patterns and sort them based on common features.

This logic applies to pictures of cells, where it’s possible for a digital pathologist to compare images from healthy and diseased samples. If a human can do it, then it should be faster and easier to employ a computer to spot those differences at scale, so long as it’s accurate. “You allow this data to self-assemble into groups and now [you’re] starting to see patterns,” she explained. “When we treat [cells] with 100,000 different compounds, one by one, we can say ‘here’s two chemicals that look really similar to each other.’” And that resemblance isn’t just a coincidence; it seems to be indicative of how the compounds behave.

In one example Carpenter cited, two different compounds produced similar effects in a cell and, by extension, could potentially be used to treat the same condition. If so, it may be that one of the two – which may not have been intended for this purpose – has fewer harmful side effects. Then there’s the potential benefit of being able to identify something we didn’t know was affected by a disease. “It allows us to say, ‘hey, there’s this cluster of six genes, five of which are really well known to be part of this pathway, but the sixth one, we didn’t know what it did, but now we have a strong clue it’s involved in the same biological process,’” she said. “Maybe those other five genes, for whatever reason, aren’t great direct targets themselves, maybe the chemicals don’t bind, but the sixth one [could be] really great for that.”
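
To make the clustering idea a bit more concrete, here’s a minimal, hypothetical sketch of the kind of grouping Carpenter describes. It assumes each compound has already been reduced to a small numeric profile extracted from cell images; the feature values and compound names below are invented, not real CellProfiler output.

```python
# Hypothetical sketch: grouping compounds by the similarity of their
# image-derived cell profiles. Real pipelines extract hundreds of
# measurements per compound; these tiny vectors are made up.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

compounds = ["compound_A", "compound_B", "compound_C", "compound_D"]
profiles = np.array([
    [0.82, 0.10, 0.55],   # e.g. nucleus size, organelle texture, cell roundness
    [0.80, 0.12, 0.53],   # nearly identical to compound_A -> likely same cluster
    [0.15, 0.90, 0.40],
    [0.14, 0.88, 0.42],
])

# Hierarchical clustering on pairwise distances between profiles.
tree = linkage(pdist(profiles, metric="euclidean"), method="average")
labels = fcluster(tree, t=2, criterion="maxclust")

for name, label in zip(compounds, labels):
    print(f"{name} -> cluster {label}")
```

Compounds that land in the same cluster are the ones researchers would then flag as candidates for sharing a mechanism, or a side-effect profile.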


In this context, the startups using AI in their drug discovery processes are hoping that they can find the diamonds hiding in plain sight. Dr. Richmond said that Benevolent’s approach is for the team to pick a disease of interest and then formulate a biological question around it. So, at the start of one project, the team might wonder if there are ways to treat ALS by enhancing, or fixing, the way a cell’s own housekeeping system works. (To be clear, this is a purely hypothetical example supplied by Dr. Richmond.)

That question is then run through Benevolent’s AI models, which pull together data from a wide variety of sources. They then produce a ranked list of potential answers to the question, which can include novel compounds or existing drugs that could be adapted to suit. The data then goes to a researcher, who can decide what weight, if any, to give to its findings. Dr. Richmond added that the model has to provide evidence from existing literature or sources to support its findings, even if its picks are out of left field, and that, at all times, a human has the final say on which of its results should be pursued and how vigorously.

It’s a similar situation at Recursion, with Dr. Gibson claiming that its model is now capable of predicting “how any drug will interact with any disease without having to physically test it.” The model has now made around three trillion predictions connecting potential problems to their potential solutions, based on the data it has already absorbed and simulated. Gibson said that the process at the company now resembles a web search: Researchers sit down at a terminal, “type in a gene associated with breast cancer and [the system] populates all the other genes and compounds that [it believes are] related.”

“What gets exciting,” said Dr. Gibson, “is when [we] see a gene nobody has ever heard of in the list, which feels like novel biology because the world has no idea it exists.” Once a target has been identified and the findings checked by a human, the data will be passed to Recursion’s in-house scientific laboratory. Here, researchers will run initial experiments to see if what was found in the simulation can be replicated in the real world. Dr. Gibson said that Recursion’s wet lab, which uses large-scale automation, is capable of running more than two million experiments in a working week.

“About six weeks later, with very little human intervention, we’ll get the results,” said Dr. Gibson, and, if they’re positive, it’s then that the team will “really start investing.” Until that point, the short period of validation work has cost the company “very little money and time to get.” The promise is that, rather than a three-year preclinical phase, the whole process can be crunched down to a few database searches, some oversight and a few weeks of ex vivo testing to confirm whether the system’s hunches are worth a real effort to interrogate. Dr. Gibson said the company believes it has taken a “year’s worth of animal model work and [compressed] it, in many cases, to two months.”

Of course, there is not yet a concrete success story, no wonder cure that any company in this space can point to as validation of the approach. But Recursion can cite one real-world example of how close its platform came to matching the findings of a critical study. In April 2020, Recursion ran the COVID-19 sequence through its system to look for potential treatments. It examined both FDA-approved drugs and candidates in late-stage clinical trials. The system produced a list of nine potential candidates that would need further analysis, eight of which would later be proved correct. It also said that hydroxychloroquine and ivermectin, both much-ballyhooed in the earliest days of the pandemic, would flop.

And there are AI-informed drugs undergoing real-world clinical trials right now. Recursion points to five projects currently finishing phase one (tests in healthy patients) or entering phase two (trials in people with the rare diseases in question). Benevolent has started a phase one trial of BEN-8744, a treatment for ulcerative colitis that may help with other inflammatory bowel disorders. And BEN-8744 works on a target with no prior associations in the existing research which, if successful, will add weight to the idea that AI can spot connections humans have missed. Of course, we can’t draw any conclusions until at least early next year, when the results of those initial tests will be released.


There are plenty of unanswered questions, including how much we should rely upon AI as the sole arbiter of the drug discovery pipeline. There are also questions around the quality of the training data and the biases in the wider sources more generally. Dr. Richmond highlighted the issues around biases in genetic data sources, both in terms of the homogeneity of cell cultures and how those tests are carried out. Similarly, Dr. Carpenter said the results of her most recent project, the publicly available JUMP-Cell Painting project, were based on cells from a single participant. “We picked it with good reason, but it’s still one human and one cell type from that one human.” In an ideal world, she’d have a far broader range of participants and cell types, but the issues right now center on funding and time or, more accurately, the lack of both.

But, for now, all we can do is await the results of these early trials and hope they bear fruit. Like every other potential application of AI, its value will rest largely on its ability to improve the quality of the work – or, more likely, improve the bottom line for the business in question. If AI can make the savings attractive enough, however, then maybe the diseases that aren’t likely to recoup the investment demanded under the current system will finally stand a chance. It could all collapse in a puff of hype, or it may offer real hope to families struggling for help while dealing with a rare disorder.

This article originally appeared on Engadget at https://www.engadget.com/ai-is-coming-for-big-pharma-150045224.html?src=rss

Engadget Podcast: The Mac turns 40

Apple’s Mac just turned 40 years old! This week, Devindra chats with Deputy Editor Nathan Ingraham about his Mac retrospective. We focus on how much has changed since Apple’s disastrous 2016 lineup, why the Apple Silicon chips feel so revolutionary, and look back at our earliest Mac experiences. Also, we review the Framework Laptop 16, a wonderfully modular miracle of a laptop, but one we wish had more graphics power for gaming. (But hey, at least you can replace the GPU eventually!)


Listen below or subscribe on your podcast app of choice. If you've got suggestions or topics you'd like covered on the show, be sure to email us or drop a note in the comments! And be sure to check out our other podcast, Engadget News!

Topics

  • Framework Laptop 16 review: Amazingly modular, but not so great at gaming – 1:17

  • The Mac turns 40 – 19:27

  • More tech layoffs at Blizzard/Activision, Riot, eBay and others – 49:58

  • Apple’s Car concept is allegedly still alive – 52:44

  • Apple overhauls App Store rules in response to European Union regulation – 58:25

  • Working on – 1:09:30

  • Pop culture picks – 1:13:40


Credits
Hosts: Devindra Hardawar and Nathan Ingraham
Producer: Ben Ellman
Music: Dale North and Terrence O'Brien

This article originally appeared on Engadget at https://www.engadget.com/engadget-podcast-the-mac-turns-40-141509644.html?src=rss

Google Chrome for Windows is finally getting native Arm support

A major downside to Windows PCs with Arm64 processors, like Microsoft's own Surface Pro 9 5G, has been the lack of native support for Chrome, the world's most popular browser. Now, Google has finally released a Chrome Canary build that fully supports the Arm64 architecture, Windows Central has reported.

The new version should significantly accelerate Chrome performance on Arm64 PCs, negating the need to run Chrome in emulation mode. The download can be installed on PCs running recent versions of Windows 11 for Arm processors, with one user confirming it runs on a seven-year-old Snapdragon 835 SoC. 

Chrome has been available on Arm64 for some time on Google's own ChromeOS, and even on Linux, iOS and Mac. On top of that, Microsoft's Edge browser (which is based on Chromium) has run natively on Arm64 for years. So why the delay for Windows on Arm64? It may be because there aren't that many Arm64 Windows PCs, and those that do exist are relatively expensive, especially compared to Chromebooks.

Google might be reasoning that now is a good time to introduce the feature, since Qualcomm is set to release its Snapdragon X Elite chip, a successor to the Snapdragon 8cx Gen 3. Based on TSMC's latest 4-nanometer tech, it's promising performance double that of some 13th-gen Intel Core i7 CPUs with a third the power draw, allowing it to better compete with Apple's latest M-series silicon. 

If Windows laptops using the chip can deliver the performance that's sadly been lacking in models to date, we may finally see them arrive in decent numbers. Snapdragon X Elite models are supposed to launch in mid-2024, so hopefully Google will have a stable version of Chrome ready by then. If you have an Arm64 PC, you can download the Canary version here.

This article originally appeared on Engadget at https://www.engadget.com/google-chrome-for-windows-is-finally-getting-native-arm-support-134832609.html?src=rss

The Morning After: Apple explains how third-party app stores will work in Europe

Apple is making major changes to the App Store in Europe in response to new European Union laws. Beginning in March, Apple will allow users in the EU to download apps and make purchases from outside its App Store. These changes are already being stress-tested in the iOS 17.4 beta.

Developers will be able to take payments and distribute apps from outside the App Store for the first time. Apple will still enforce a review process for apps that don’t come through its store, but it will be “focused on platform integrity and protecting users” from things like malware. The company warns it has less chance of addressing other risks like scams, abuse and harmful content.

Apple is also changing its commission structure, so developers will pay 17 percent on subscriptions and in-app purchases, reducing the fee to 10 percent for “most developers” after the first year. The company is tacking on a new three percent “payment processing” fee for transactions through its store, and there’s a new €0.50 “core technology fee” for all app downloads after the first million installations.

That’s a lot of new money numbers to process, and it could shake out differently for different developers. Apple says the new fee structure will result in most developers paying the company less, since the core technology fee will have the greatest impact on larger developers.
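
As a rough, hypothetical illustration of how those numbers could stack up for a single developer under the new EU terms (the revenue and install figures below are invented; only the 17 percent, 3 percent and €0.50 rates come from the announcement):

```python
# Hypothetical worked example of the EU fee structure described above.
# All inputs are invented for illustration.
annual_revenue_eur = 5_000_000     # in-app purchase + subscription revenue (assumed)
first_annual_installs = 3_000_000  # installs in the first year (assumed)

commission = 0.17 * annual_revenue_eur          # standard 17% commission
payment_processing = 0.03 * annual_revenue_eur  # if using Apple's payment processing
core_technology_fee = 0.50 * max(first_annual_installs - 1_000_000, 0)  # per install past 1M

total = commission + payment_processing + core_technology_fee
print(f"Commission:          €{commission:,.0f}")
print(f"Payment processing:  €{payment_processing:,.0f}")
print(f"Core technology fee: €{core_technology_fee:,.0f}")
print(f"Total owed to Apple: €{total:,.0f}")
```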

This all means that yes, Fortnite is returning.

— Mat Smith


The biggest stories you might have missed

The FTC is investigating Microsoft, Amazon and Alphabet’s investments into AI startups

Budget retailer Newegg just started selling refurbished electronics

NASA’s Ingenuity Helicopter has flown on Mars for the final time

MIT researchers have developed a rapid 3D-printing technique that uses liquid metal

You can get these reports delivered daily direct to your inbox. Subscribe right here!

Microsoft launches its metaverse-styled virtual meeting platform

Mesh is a place for your avatars to float around.


Microsoft has announced the launch of Mesh, a feature for employees’ avatars to meet in the same place, even if the actual people are spread out. The virtual connection platform is powered through Microsoft Teams. Currently, Microsoft’s Mesh is only available on desktop PCs and Meta Quest VR devices (if employees want a more immersive experience). Microsoft is offering a six-month free trial to anyone with a business or enterprise plan. But no legs, it seems.

Continue reading.

The Ray-Ban Meta smart glasses’ new AI powers are impressive

And worrying.

When we first reviewed the Ray-Ban Meta smart glasses, multimodal AI wasn’t ready. The feature enables the glasses to respond to queries based on what you’re looking at. Meta has now made multimodal search available for “early access.” Multimodal search is impressive, if not entirely useful yet. But Meta AI’s grasp of real-time information is shaky at best.

We tried asking it to help pick out clothes, like Mark Zuckerberg did in a recent Instagram post, and were underwhelmed. Then again, it may work best for a guy who famously wore the exact same shirt every day for years.

Continue reading.

Elon Musk confirms new low-cost Tesla model

Coming in 2025.

In an earnings call yesterday, Elon Musk confirmed that a “next-generation low-cost” Tesla EV is in the works and said he’s “optimistic” it’ll arrive in the second half of 2025. He also promised “a revolutionary manufacturing system” for the vehicle. Reuters reported that the new vehicle would be a small crossover called Redwood. Musk previously stated the automaker is working on two new EV models that, combined, could sell up to five million units per year.

Musk said the company’s new manufacturing technique will be “very hard to copy” because “you have to copy the machine that makes the machine that makes the machine... manufacturing inception.”

I just audibly groaned reading that.

Continue reading. 

Japan’s lunar spacecraft landed upside down on the moon

It collected some data before shutting down.


This picture just makes me sad.

Continue reading.

This article originally appeared on Engadget at https://www.engadget.com/the-morning-after-apple-explains-how-third-party-app-stores-will-work-in-europe-121528606.html?src=rss

The Xbox Series S is just $230 right now

If you weren't able to get the Microsoft Xbox Series S at a discount this past holiday season, you may want to check out Dell's website. The digital-only console is currently on sale for $230, down $70 from its retail price of $300. While it can't play disc games, your $230 will get you 512GB of SSD storage and a wireless Xbox controller. The console supports variable refresh rates of up to 120Hz, and while it runs games at a max resolution of 1440p, you can use it to stream shows and movies in 4K. You just need to download the streaming apps you use, such as Disney+, Netflix or Amazon Prime Video.

While we called the Xbox Series S the least powerful console in its generation in our review, we found it to be capable of incredibly smooth gameplay. Even with boosted framerates, current and previous-gen games played like butter when we tested them out. Series S also starts up quickly, and a feature called Quick Resume lets you pick up from where you left off without having to suffer through loading screens that take forever to finish.

Storage could be an issue, seeing as the console doesn't come with a disc drive, but you can expand it with the 1TB card Microsoft developed with Seagate. You can also lean on the Game Pass subscription service, which gives you access to a library of hundreds of titles. The bottom line is that the Xbox Series S is a great console if you're looking to go fully digital, and this is your chance to grab one without having to pay full price.

Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/the-xbox-series-s-is-just-230-right-now-115520855.html?src=rss

GM's Cruise is being investigated by the DoJ and SEC following a pedestrian accident

GM's driverless Cruise division is under investigation by both the Department of Justice (DoJ) and Securities and Exchange Commission (SEC), The Washington Post has reported. The probes follow an incident last year in which a jaywalking pedestrian was struck by a Cruise autonomous vehicle and then dragged 20 feet, worsening her injuries.

At the same time, yesterday Cruise released its own third-party findings regarding the accident, which took place on October 2 and involved another vehicle (a Nissan). The company said it "failed to live up to the justifiable expectations of regulators and the communities we serve... [and] also fell woefully short of our own expectations," adding that it's "fully cooperating" with investigators. According to its own findings, that's an understatement to say the least. 

According to the report, Cruise withheld crucial information from officials during a briefing the day after the accident. Specifically, the company failed to mention that its autonomous vehicle (AV) had dragged the victim 20 feet at around 7 MPH, causing serious injuries. According to the internal report, that occurred because the vehicle mistakenly detected a side (rather than a frontal) collision and attempted to pull over rather than stopping. 

At least 100 Cruise employees, including members of senior leadership, legal and others, were aware of the dragging incident — but failed to disclose it during October 3 meetings with the San Francisco Mayor's Office, NHTSA, DMV and other officials, the report states.

The company said it intended to let a video of the dragging incident speak for itself, then answer questions about it. However, the video didn't play clearly and fully due to internet connection issues, and then Cruise employees failed to verbally affirm the pullover maneuver and dragging of the pedestrian. In case that's not bad enough, the third-party findings state:

Cruise leadership was fixated on correcting the inaccurate media narrative that the Cruise AV, not the Nissan, had caused the Accident. This myopic focus led Cruise to convey the information about the Nissan hit-and-run driver having caused the Accident to the media, regulators and other government officials, but to omit other important information about the Accident. Even after obtaining the Full Video, Cruise did not correct the public narrative but continued instead to share incomplete facts and video about the Accident with the media and the public.

The report says the failings came about due to "poor leadership, mistakes in judgment, lack of coordination, an 'us versus them' mentality with regulators, and a fundamental misapprehension of Cruise’s obligations of accountability and transparency to the government and the public." 

Prior to the crash, Cruise was facing other problems, with its autonomous vehicles failing to recognize children and with the frequency at which human operators had to take control. According to former CEO Kyle Vogt, human drivers needed to intervene in trips every four to five miles.

Cruise had its license to operate suspended in California back in October. The company also laid off 24 percent of its workforce late last year, following the resignation of co-founder Daniel Kan and the departure of its CEO Kyle Vogt. On top of the two federal investigations, the company is also facing a lawsuit from the city of San Francisco. 

This article originally appeared on Engadget at https://www.engadget.com/gms-cruise-is-being-investigated-by-the-doj-and-sec-following-a-pedestrian-accident-104030508.html?src=rss

Apple Podcasts will automatically generate transcripts in iOS 17.4

Catching up on a new podcast should get easier very soon. Apple has announced that it will automatically transcribe podcasts, which should allow more people to enjoy episodes. Apple Podcasts will let creators upload their own transcript for display or opt to have Apple create one.

There are some caveats to be aware of, though. Apple Podcasts should start creating the transcription when the episode is uploaded. However, it has a "short delay" until it's available, so people eager to play their favorite podcast right away will have to wait for an unspecified amount of time (Apple tells podcasters to give it at least 24 hours after uploading an episode). It's likely that the longer the episode is, the longer the transcription will take to be ready. The transcription will also not update if parts of the recording are changed with dynamically inserted audio, and it won't display music lyrics. 

Podcasters must follow Apple's quality requirements for their episodes to get correctly transcribed. According to Apple, podcasts with people talking over each other or music might not have as good a transcription. If someone chooses to upload their own, it must be a VTT or SRT file. A podcaster can also edit a transcription for greater accuracy. 
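
For anyone curious what those files look like, here's a minimal sketch in Python that writes a hypothetical WebVTT (.vtt) transcript; the timestamps and dialogue are invented, and the SRT format is similar but with comma-separated milliseconds and no WEBVTT header.

```python
# Minimal, hypothetical example of a WebVTT (.vtt) transcript file of the
# kind a podcaster could supply. Timestamps and lines are invented.
vtt_transcript = """WEBVTT

00:00:00.000 --> 00:00:04.000
Welcome back to the show.

00:00:04.000 --> 00:00:09.500
Today we're talking about automatic podcast transcription.
"""

with open("episode_transcript.vtt", "w", encoding="utf-8") as f:
    f.write(vtt_transcript)
```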

Apple Podcasts' transcriptions should launch in the spring with iOS 17.4, in English, German, Spanish and French. The feature will be available in more than 170 countries and regions, with older episodes getting transcribed over time.

This article originally appeared on Engadget at https://www.engadget.com/apple-podcasts-will-automatically-generate-transcripts-in-ios-174-091040750.html?src=rss

23andMe's data hack went unnoticed for months

In late 2023, genetic testing company 23andMe admitted that its customer data was leaked online. A company representative told us back then that the bad actors were able to access the DNA Relatives profile information of roughly 5.5 million customers and the Family Tree profile information of 1.4 million DNA Relatives participants. Now, the company has revealed more details about the incident in a legal filing, saying that the hackers started breaking into customer accounts in late April 2023. The bad actors' activity continued until September 2023, before the company finally found out about the security breach.

23andMe's filing contains the letters it sent to customers who were affected by the incident. In the letters, the company explained that the attackers used a technique called credential stuffing, which entails using previously compromised login credentials to access customer accounts through its website. The company didn't notice anything wrong until a user posted a sample of the stolen data on the 23andMe subreddit in October. As TechCrunch notes, hackers had already advertised the stolen data on a hacking forum a few months earlier, in August, but 23andMe didn't catch wind of that post. The stolen information included customer names, birth dates, ancestry and health-related data.

23andMe advised affected users to change their passwords after disclosing the data breach. But before sending out letters to customers, the company changed the language in its terms of service that reportedly made it harder for people affected by the incident to join forces and legally go after the company. 

This article originally appeared on Engadget at https://www.engadget.com/23andmes-data-hack-went-unnoticed-for-months-081332978.html?src=rss
