Posts with «Mariella Moon» label

Uh-oh: ICQ is shutting down on June 26

ICQ, which was a hugely popular messaging app in the '90s and the early aughts, only has a month left before it joins the other apps and software of old in the great big farm in the sky. It will stop working on June 26, according to its website, which also encourages users to move to VK Messenger for casual chats and to VK WorkSpace for professional conversations. ICQ came into the picture at a time when most people were using IRC to chat. IRC, however, was mostly meant for group conversations — ICQ made it easy to communicate one-on-one.

Users who signed up got assigned a number, and since numbers were issued sequentially, they grew longer as time went on. The shortest numbers had five digits, which means the users who got them were there at the very beginning. ICQ peaked in the early 2000s, when it reached 100 million registered accounts. And while it didn't take long for AIM, Yahoo Messenger and MSN Messenger to eclipse its popularity, ICQ's iconic "uh-oh!" notification sound remains memorable for a lot of people who were online during that era.

ICQ, derived from the phrase "I seek you," was developed by Israeli company Mirabilis. It was purchased by AOL and later by the Russian company Mail.Ru Group, now known as VK, which has its own social networking and messaging services.

This article originally appeared on Engadget at https://www.engadget.com/uh-oh-icq-is-shutting-down-on-june-26-153048381.html?src=rss

Meta and Activision face lawsuit by families of Uvalde school shooting victims

The families of the shooting victims at Robb Elementary School in Uvalde, Texas, have sued Call of Duty publisher Activision and Meta. They alleged that the companies "knowingly exposed the shooter to the weapon [he used], conditioned him to see it as the solution to his problems, and trained him to use it." The plaintiffs also accused the companies of "chewing up alienated teenage boys and spitting out mass shooters."

In the lawsuit, the plaintiffs explained that the Uvalde shooter played Call of Duty, which featured an assault-style rifle made by gunmaker Daniel Defense. They also mentioned that he frequently visited Instagram, which advertised the gunmaker's products. The lawsuit also claimed that Instagram gives gunmakers "an unsupervised channel to speak directly to minors, in their homes, at school, even in the middle of the night." It argued that the shooter was "a poor and isolated teenager" from small-town Texas who only learned about AR-15s, and set his sights on one, because he was exposed to the weapon through playing Call of Duty and visiting Instagram. In addition, it accused Meta of being more lenient toward firearms sellers than toward other users who break its rules. Meta prohibits the buying and selling of weapons and ammunition, but users can violate the policy 10 times before they're banned from its platforms.

"The truth is that the gun industry and Daniel Defense didn’t act alone. They couldn’t have reached this kid but for Instagram," the plaintiffs' lawyer, Attorney Josh Koskoff, said at a news conference. "They couldn’t expose him to the dopamine loop of virtually killing a person. That's what Call of Duty does." Koskoff's law firm was the same one who reached a $73 million settlement with rifle manufacturer Remington for the families of the Sandy Hook Elementary School shooting victims. 

An Activision spokesperson told The Washington Post and Bloomberg Law that the "Uvalde shooting was horrendous and heartbreaking in every way," and that the company expresses its deepest sympathies to the families, but "millions of people around the world enjoy video games without turning to horrific acts."

This article originally appeared on Engadget at https://www.engadget.com/meta-and-activision-face-lawsuit-by-families-of-uvalde-school-shooting-victims-130025901.html?src=rss

SpaceX Raptor engine test ends in a fiery explosion

A SpaceX testing stand at the company's McGregor, Texas facilities went up in flames during a test of its Raptor 2 engines on the afternoon of May 23. According to NASASpaceflight, the engine suffered an anomaly that caused vapors to seep out, leading to a secondary explosion. The news organization's livestream showed the engine shutting down before the fire started and eventually engulfed the stand in flames and smoke.

Here is the full clip of the test that ended in a RUD.

(video and audio synced)
See what's next steps are ahead: https://t.co/s5km3WuOOX https://t.co/lKDvoOXoz2 pic.twitter.com/rNgQUnz5LV

— carson_phillips (@Cphillips_03) May 23, 2024

SpaceX uses the Raptor engines for its Starship system's Super Heavy booster and upper-stage spacecraft. They burn liquid methane and liquid oxygen as propellants, and they were designed to be powerful enough to send Starship to the moon and Mars. As Gizmodo suggests, those propellants mixing due to a leak or a similar anomaly could've caused the explosion, though SpaceX has yet to officially address what happened during testing.

The company is currently preparing for Starship's fourth test flight, which is scheduled to take place on June 5, pending regulatory approval and barring poor weather or other factors that could delay the launch. This explosion likely wouldn't affect the flight's launch window. SpaceX's main goals for the fourth test flight are to make sure that the Super Heavy booster gets a soft splashdown in the Gulf of Mexico and to achieve a controlled entry of the Starship spacecraft. The company said it made several hardware and software upgrades to incorporate what it learned from its third flight test. Starship's upper stage reached space during that flight, but it burned up in the atmosphere upon reentry, while its Super Heavy booster broke apart in the final phases of its descent instead of softly splashing down into the ocean.

This article originally appeared on Engadget at https://www.engadget.com/spacex-raptor-engine-test-ends-in-a-fiery-explosion-110052362.html?src=rss

OpenAI scraps controversial nondisparagement agreement with employees

OpenAI will not enforce the nondisparagement agreements former employees signed and will remove the language from its exit paperwork altogether, the company told Bloomberg. Vox recently reported that OpenAI was making exiting employees choose between being able to speak against the company and keeping the vested equity they had earned. Employees could lose millions if they chose not to sign the agreement or if they violated it. Sam Altman, OpenAI's CEO, said he was "embarrassed" and didn't know that the provision existed, promising to have the company's paperwork altered.

According to Bloomberg, the company notified former employees that "[r]egardless of whether [they] executed the agreement... OpenAI has not canceled, and will not cancel, any vested units." It released them from the agreement altogether, "unless the nondisparagement provision was mutual." At least one former employee said they had given up vested equity worth multiple times their family's net worth by refusing to sign when they left. It's unclear if they're getting it back with this change. The company also talked to current employees about the development, easing worries that they would have to watch everything they say if they didn't want to lose their equity.

"We are sorry for the distress this has caused great people who have worked hard for us," Chief Strategy Officer Jason Kwon said in a statement. "We have been working to fix this as quickly as possible. We will work even harder to be better."

This wasn't the only controversial situation OpenAI has been involved in as of late. The company recently revealed that it was disbanding the team it formed last year to help make sure humanity is protected from future AI systems so powerful they could cause our extinction. Before that, OpenAI chief scientist Ilya Sutskever, who was one of the team's leads, left the company. Another team lead, Jan Leike, said in a series of tweets that "safety culture and processes have taken a backseat to shiny products" within OpenAI. In addition, Scarlett Johansson accused OpenAI of copying her voice without permission for ChatGPT's Sky voice assistant after she turned down Altman's request to lend her voice to the company. OpenAI denied that it copied the actor's voice and said that it hired another actor well before Altman contacted Johansson.

This article originally appeared on Engadget at https://www.engadget.com/openai-scraps-controversial-nondisparagement-agreement-with-employees-043750040.html?src=rss

Microsoft's Azure AI Speech lets Truecaller users create an AI assistant with their own voice

Truecaller, a caller ID app that can block and record calls, has teamed up with Microsoft to give its users a way to create an AI assistant that uses their own voice. The company originally introduced its AI assistant, which can answer and screen calls for users, back in 2022. It already offers several voices to choose from, but the personal voice feature of Microsoft's Azure AI Speech gives users the ability to make a custom digital assistant that sounds like them.

Users will have to record themselves reading a sentence giving Truecaller consent to use their voice. They'll also have to read a training script, which the technology uses to capture their speaking style and create a convincing digital replica of their voice. When someone calls, the assistant will screen the call and introduce itself as the "digital" version of the user. In the product demo presented by Truecaller Product Director and General Manager Raphael Mimoun, for instance, his assistant answered a call with: "Hi there! I'm digital Raphael Mimoun! May I ask who's calling?" After the caller responds, the assistant asks if the call is urgent or if it can wait before pushing it through.

"By integrating Microsoft Azure AI Speech’s personal voice capability into Truecaller, we've taken a significant step towards delivering a truly personalized and engaging communication experience," Mimoun said in a statement. That said, it could also feel unsettling, maybe even creepy, for callers to interact with a robotic version of their friend or colleague. 

Microsoft demonstrated Azure AI Speech's personal voice at Build this year, where it also revealed that digital creativity company Wondershare is integrating the new feature into its video editing tools. That will also allow Wondershare users to create an AI assistant using their voice, which they can then use to create audiobooks and podcasts. 

This article originally appeared on Engadget at https://www.engadget.com/microsofts-azure-ai-speech-lets-truecaller-users-create-an-ai-assistant-with-their-own-voice-133019270.html?src=rss

OpenAI didn't intend to copy Scarlett Johansson's voice, 'The Washington Post' reports

OpenAI cast the actor behind Sky's voice months before Sam Altman contacted Scarlett Johansson, and it had no intention of finding someone who sounded like her, according to The Washington Post. The publication said the flier OpenAI issued last year sought actors with "warm, engaging [and] charismatic" voices. They needed to be between 25 and 45 years old and had to be non-union, but OpenAI reportedly didn't specify that it was looking for a Scarlett Johansson sound-alike. If you'll recall, Johansson accused the company of copying her likeness without permission for its Sky voice assistant.

The agent for the actor behind Sky's voice told The Post that the company never discussed Johansson or the movie Her with their talent. OpenAI apparently didn't tweak the actor's recordings to sound like Johansson either, since her natural voice already sounded like Sky's, based on the clips of her initial voice test that The Post listened to. OpenAI product manager Joanne Jang told the publication that the company selected actors who were eager to work on AI. She said that Mira Murati, the company's Chief Technology Officer, made all the decisions about the AI voices project and that Altman was not intimately involved in the process.

Jang also told the publication that, to her, Sky sounded nothing like Johansson. Sky's actress told The Post through her agent that she simply used her natural voice and that the people who know her closely have never compared her to Johansson. But in a statement Johansson's team shared with Engadget, the actor said she was shocked that OpenAI pursued a voice that "sounded so eerily similar" to hers that her "closest friends and news outlets could not tell the difference" after she turned down Altman's offer to voice ChatGPT.

Johansson said that Altman first contacted her in September 2023 with the offer and then reached out again just two days before the company introduced GPT-4o to ask her to reconsider. Sky has been one of ChatGPT's voices since September, but GPT-4o gave it the power to have more human-like conversations with users. That made its similarities to Johansson's voice more apparent — Altman tweeting "her" after OpenAI demonstrated the new large language model didn't help with the situation and invited more comparisons to the AI virtual assistant Johansson voiced in the movie. OpenAI has paused using Sky's voice "out of respect" for Johansson's concerns, it wrote in a blog post. The actor said, however, that the company only stopped using Sky after she hired legal counsel who wrote Altman and the company to ask for an explanation. 

her

— Sam Altman (@sama) May 13, 2024

If you're wondering whether Sky truly does sound like Johansson, we embedded a video below so you can judge for yourself. It's a recording of Johansson's statement as read by the Sky voice assistant, posted by Victor Mochere on YouTube. Opinions in the comment section are divided, with some saying it does sound like a more robotic version of her, while others say the voice sounds more like Rashida Jones.

This article originally appeared on Engadget at https://www.engadget.com/openai-didnt-intend-to-copy-scarlett-johanssons-voice-the-washington-post-reports-041247992.html?src=rss

Ray-Ban Meta smart glasses can now upload photos directly to Instagram Stories

Meta has updated its Ray-Ban Meta smart glasses with more hands-free capabilities, starting with a new feature that lets you share images as Instagram Stories without having to take out your phone. You can just say "Hey Meta, share my last photo to Instagram" if you've already snapped the photo you want. But you can also say "Hey Meta, post a photo to Instagram" if you want to be more spontaneous and take a picture to upload as a Story on the spot. It's for those moments you don't mind sharing with your followers, unedited, in real time.

In addition, you'll now be able to get your glasses to quickly play your tunes on Amazon Music. Just say "Hey Meta, play Amazon Music" to start listening through the smart glasses' open-ear audio system. And yes, you'll be able to control the audio with the device's touch controls or with your voice. If you have a Calm account and need to decompress, you can listen to guided meditation or mindfulness exercises on your smart glasses instead. To do so, just say "Hey Meta, play the Daily Calm." And if you don't have a Calm account, you can get a three-month subscription for free if you follow the on-screen prompts in the Meta View app. All these features are "rolling out gradually," so you'll eventually get access to them if you don't have them yet. 

Last month, Meta also rolled out multimodal AI for the Ray-Ban smart glasses after months of testing. It enables the smart glasses to act as a personal AI gadget outside of the smartphone, similar to the Rabbit R1 and the Humane AI Pin. Thanks to that update, you can now ask the smart glasses to describe objects in the environment, identify landmarks and read signs in different languages, which sounds especially useful for frequent travelers. Meta also gave the device the ability to make hands-free video calls with WhatsApp and Messenger.

This article originally appeared on Engadget at https://www.engadget.com/ray-ban-meta-smart-glasses-can-now-upload-photos-directly-to-instagram-stories-130019041.html?src=rss

Nintendo snaps up a studio known for its Switch ports

Nintendo is buying (PDF) Florida-based studio Shiver Entertainment from the Embracer Group, which is splitting up its rather messy gaming empire and letting go of certain assets. Shiver was founded in 2012 and is mostly known for working with publishers and developers to port games to the Switch, including a couple of Scribblenauts titles and Hogwarts Legacy. Nintendo will acquire the "boutique-sized studio" in full, making it a wholly owned subsidiary that will continue working on Switch ports and developing software for multiple platforms.

The Japanese gaming company isn't known for gobbling up small studios and developers. In its announcement of the deal, it said it's aiming "to secure high-level resources for porting and developing software titles" with this purchase. By buying Shiver, Nintendo is also showing that it's committed to the Switch platform, which will remain its primary business for years to come.

As Nintendo Life notes, Nintendo may have decided to purchase Shiver to acquire its talent, as well. The studio's CEO, John Schappert, is an industry veteran who used to oversee Xbox Live, the Xbox platform software and Microsoft Game Studios. He also served as Chief Operating Officer at EA and at Zynga. Nintendo didn't say how much it's paying for the studio, but it doesn't sound like the purchase will make any considerable impact on its finances. "The Acquisition will have only a minor effect on Nintendo’s results for this fiscal year," the company wrote in its announcement. 

This article originally appeared on Engadget at https://www.engadget.com/nintendo-snaps-up-a-studio-known-for-its-switch-ports-100003358.html?src=rss

Wearable AI Pin maker Humane is reportedly seeking a buyer

The tech startup Humane is seeking a buyer for its business just a bit over a month after it released the AI Pin, according to Bloomberg. Engadget's Cherlynn Low described the AI Pin as a "wearable Siri button" because it's a small device you can wear that was designed with a very specific purpose in mind: to give you ready access to an AI assistant. Humane is working with a financial adviser, Bloomberg said, and is apparently hoping to sell for anywhere between $750 million and $1 billion.

The company drummed up a lot of interest and successfully raised $230 million from high-profile investors. However, a billion may be a huge ask when the AI Pin was mostly panned by critics upon launch. We gave the AI Pin a score of 50 out of 100 in our review for several reasons: It was slow and took a few seconds to reply when we asked it questions, and its responses were irrelevant at times and no better than what you could get with a quick Google search. Its touchpad grew warm with use, it had poor battery life and its projector screen, while novel, was pretty hard to control. The Humane AI Pin also isn't cheap: It costs $700 to buy and requires a $24 monthly fee to access the company's artificial intelligence technology and 4G service riding on T-Mobile's network. In a post on its website, Humane said that it was listening to feedback and listed several problem areas it intends to focus on.

Another dedicated AI gadget, the Rabbit R1, is much more affordable at $199, but it's still not cheap enough to make the category more popular than it is, especially since you could easily take out your phone to use AI tools when needed. Humane's effort to sell its business is still in its very early stages, Bloomberg noted, and the company might not close a deal at all.

This article originally appeared on Engadget at https://www.engadget.com/wearable-ai-pin-maker-humane-is-reportedly-seeking-a-buyer-035322167.html?src=rss

Microsoft unveils Copilot for Teams

At this year's Build event, Microsoft announced Team Copilot, and as you can probably guess from its name, it's a variant of the company's AI tool that caters to the needs of a group of users. It expands Copilot's abilities beyond those of a personal assistant, so that it can serve a whole team, a department or even an entire organization, the company said in its announcement. The new tool was designed to free up personnel by taking on time-consuming tasks, such as managing meeting agendas and taking down minutes that group members can tweak as needed.

Team Copilot can also serve as a meeting moderator by summarizing important information for latecomers (or for reference after the fact) and answering questions. Finally, it can create and assign tasks in Planner, track their deadlines, and notify team members when they need to contribute to or review a certain task. Customers paying for a Copilot license on Microsoft 365 will be able to test these features in preview starting later this year.

In addition to Team Copilot, Microsoft also announced new ways customers can personalize the AI assistant. In Copilot Studio, users will be able to build custom Copilots in SharePoint so they can more quickly access the information they need, as well as create custom Copilots that act as agents. The latter would allow companies and business owners to automate business processes, such as end-to-end order fulfillment. Finally, the debut of Copilot connectors in Studio will make it easier for developers to build Copilot extensions that customize the AI tool's actions.

This article originally appeared on Engadget at https://www.engadget.com/microsoft-unveils-copilot-for-teams-153059261.html?src=rss