Posts by Mariella Moon

X starts giving non-paying users the ability to make audio and video calls

X is slowly rolling out audio and video calling to users who don't pay for its premium subscription service, formerly known as Twitter Blue. Enrique Barragan, an engineer at the company, shared the news on the platform. X initially launched the feature for iOS users last year, giving paying subscribers the option to call other people through the app, a step toward making X the "everything" application Elon Musk wants it to be. Earlier this year, the capability made its way to Android devices, but the ability to make calls remained limited to Premium subscribers.

By the end of January, Musk said that X would make audio and video calling available to everyone as soon as the company was confident the feature was robust. We're still being asked to subscribe to X Premium when we hit the phone icon in DMs, but those who get the update will be able to make calls even if they're not paying subscribers. The official X support page for the feature now says that all accounts can make and receive calls, though both parties must have been in contact via Direct Messaging at least once. In the past, it said only "Premium subscribers have the ability to make audio and video calls."

In addition to announcing the capability's rollout, Barragan revealed that users will now also be able to receive calls from everyone on the app if they want to. Audio and video calls were automatically switched on for us when we checked our DMs' Settings menu, configured so that we can (thankfully) only receive calls from people we follow. We're already seeing the "Everyone" option in there, though, ready to be picked by the most intrepid X users. 

we’re slowly rolling out audio and video calling to non premium users, try it out! now you can also choose allow calls from everyone

— Enrique (@enriquebrgn) February 23, 2024

This article originally appeared on Engadget at

Google explains why Gemini's image generation feature overcorrected for diversity

After promising to fix Gemini's image generation feature and then pausing it altogether, Google has published a blog post offering an explanation for why its technology overcorrected for diversity. Prabhakar Raghavan, the company's Senior Vice President for Knowledge & Information, explained that Google's efforts to ensure that the chatbot would generate images showing a wide range of people "failed to account for cases that should clearly not show a range." Further, its AI model grew to become "way more cautious" over time and refused to answer prompts that weren't inherently offensive. "These two things led the model to overcompensate in some cases, and be over-conservative in others, leading to images that were embarrassing and wrong," Raghavan wrote.

Google made sure that Gemini's image generation couldn't create violent or sexually explicit images of real people and that the photos it whips up would feature people of various ethnicities and with different characteristics. But if a user asks it to create images of people of a certain ethnicity or sex, it should be able to do so. As users recently found out, Gemini would refuse to produce results for prompts that specifically requested white people. The prompt "Generate a glamour shot of a [ethnicity or nationality] couple," for instance, worked for "Chinese," "Jewish" and "South African" requests but not for ones requesting an image of white people. 

Gemini also had issues producing historically accurate images. When users requested images of German soldiers during World War II, Gemini generated images of Black men and Asian women in Nazi uniforms. When we tested it out, we asked the chatbot to generate images of "America's founding fathers" and "Popes throughout the ages," and it showed us photos depicting people of color in those roles. When we asked it to make its images of the Pope historically accurate, it refused to generate any result. 

Raghavan said that Google didn't intend for Gemini to refuse to create images of any particular group or to generate photos that were historically inaccurate. He also reiterated Google's promise that it will work on improving Gemini's image generation. That entails "extensive testing," though, so it may take some time before the company switches the feature back on. At the moment, if a user tries to get Gemini to create an image, the chatbot responds with: "We are working to improve Gemini’s ability to generate images of people. We expect this feature to return soon and will notify you in release updates when it does."


The latest experimental Threads features let you save drafts and take photos in-app

Meta is currently testing a couple of capabilities for Threads, which Instagram head Adam Mosseri describes as some of the "most requested" features for the social network. One of these experimental features is the ability to save drafts. Users can easily save a post they've typed as a draft, to edit and publish later, by swiping down on their mobile device's display. When there's a draft saved, the app's menu at the bottom of the screen highlights the post icon. At the moment, though, users can only save one draft, and it's unclear whether Meta plans to let them save more. 

In addition to drafts, Meta is also testing an in-app camera. It opens the mobile phone's camera from within Threads itself, so that users can more easily share photos and videos from their phone. Meta chief Mark Zuckerberg made a post on the service with a photo he says was taken with the new in-app camera the company is testing. 

Meta told us that these are initial tests of the experimental features, which means they're only available to a small number of people and could undergo a lot of changes before a wide release. Over the past month, Meta also started testing a bookmarking feature for Threads that allows users to save posts to refer to later. The company is experimenting with its own version of trending topics on Threads as well, along with the ability to cross-post between Threads and Facebook. 


Microsoft is giving Windows Photos a boost with a generative AI-powered eraser

Microsoft has announced a generative AI-powered eraser for pictures, which gives you an easy way to remove unwanted elements from your photos. Windows Photos has long had a Spot Fix tool that can remove parts of an image for you, but the company says Generative erase is an enhanced version of that feature. Apparently, this newer tool can create "more seamless and realistic" results even when large objects, such as bystanders or clutter in the background, are removed from an image. 

If you'll recall, both Google and Samsung have their own versions of AI eraser tools on their mobile devices. Google's used to be exclusively available on newer Pixel phones until it was rolled out to older models. Microsoft's version, however, gives you access to an AI-powered photo eraser on your desktop or laptop computer. You only need to fire up the image editor in Photos to start using the feature. Simply choose the Erase option and then use the brush to create a mask over the elements you want to remove. You can even adjust the brush size to make it easier to select thinner or thicker objects, and you can also choose to highlight more than one element before erasing them all.

At the moment, though, access to Generative erase is pretty limited. It hasn't been released widely yet, and you can only use it if you're a Windows Insider through the Photos app on Windows 10 and Windows 11 for Arm64 devices.


Chrome's latest experimental AI feature can help you write

Google has added an experimental generative AI feature to its browser with the launch of Chrome M122. The new AI tool is called "Help me write," and it can help you write more descriptive sentences or even full paragraphs from a short prompt. Google says the tool uses its Gemini models to understand the context of the web page you're on so that it can generate appropriate suggestions. If you're on a review page, for instance, it can give you a suggestion that reads like a review instead of sales copy.

In one of Google's examples, the tool was able to spit out a decent description of what the person was selling with a prompt that simply read: "moving to a smaller place selling air fryer for 50 bucks." The tool suggested a full paragraph that was able to better communicate the user's message. "I'm moving to a smaller place and won't have room for my air fryer. It's in good condition and works great. I'm selling it for $50. Please contact me if you're interested," the suggestion read. 

In another example, the user asked the tool to write them a request to return a defective bike helmet and to communicate that the product developed a crack, which isn't mentioned in the product warranty. As you can see in Google's examples, you can change the length and tone of the suggestion if the first thing the writing aid comes up with isn't good enough to serve your needs. Once you're done, you can click the Replace button to switch your prompt with the suggested writeup.

To activate the experimental tool, you have to go into Settings in Chrome's three-dot drop-down menu. There, you can find the Experimental AI page where you can activate "Help me write." To use the feature, just highlight the text you want to rewrite and then right-click on it to summon the "Help me write" box. Take note that it's only available for Chrome browsers on Macs and Windows PCs in the US at the moment. It can also only understand prompts and write suggestions in the English language. 

Google first announced the arrival of the writing tool back in January, when it revealed that it was going to start integrating AI features into its Chrome browser. In addition to "Help me write," Google said it's also giving the browser an AI-powered tab organizer and the ability to generate customized themes. 


PlayStation now supports passkey sign-ins

You don't have to type in your password every time you log into your PlayStation account anymore. Sony Interactive Entertainment (SIE) has launched passkey support for PlayStation accounts, which means you can simply sign in through your mobile device or computer and use its screen unlocking method to log in. If you use a PIN, your fingerprint or your face to unlock your phone, for instance, that's also how you'll be able to get into your PlayStation account. On desktop, we were easily able to link our account with 1Password and use its passkey capability. 

In its official page for the update, the company touches on the benefits of using passkeys, such as reducing account vulnerability. Passkeys can't be reused or given away, whether inadvertently or on purpose, as SIE explains, making them resistant to phishing and data breaches. 

To set up a passkey, you simply have to go to Security under Account Management. There, you can activate the option and create a passkey by following the on-screen instructions. The company warns that some hardware security keys could cause issues, and it might be better to use synced passkeys on mobile devices instead. It also cautions against the use of mobile PIN codes as passkeys on Android and recommends iCloud Keychain, Google Password Manager, 1Password and Dashlane as passkey providers. After setting up the option, you'll be prompted to use your passkey whenever you need to sign in on a PlayStation 5 or a PlayStation 4 console. You can deactivate the option anytime, though, if you want to go back to signing in with a password.

Login to your PlayStation account hassle-free with passkeys arriving later today! Keep an eye out for updates.

— Ask PlayStation (@AskPlayStation) February 21, 2024


Google promises to fix Gemini's image generation following complaints that it's 'woke'

Google's Gemini chatbot, which was formerly called Bard, has the capability to whip up AI-generated illustrations based on a user's text description. You can ask it to create pictures of happy couples, for instance, or people in period clothing walking modern streets. As the BBC notes, however, some users are criticizing Google for depicting specific white figures or historically white groups of people as racially diverse individuals. Now, Google has issued a statement, saying that it's aware Gemini "is offering inaccuracies in some historical image generation depictions" and that it's going to fix things immediately. 

We're aware that Gemini is offering inaccuracies in some historical image generation depictions. Here's our statement.

— Google Communications (@Google_Comms) February 21, 2024

According to the Daily Dot, a former Google employee kicked off the complaints when he tweeted images of women of color with a caption that read: "It's embarrassingly hard to get Google Gemini to acknowledge that white people exist." To get those results, he asked Gemini to generate pictures of American, British and Australian women. Other users, mostly those known for being right-wing figures, chimed in with their own results, showing AI-generated images that depict America's founding fathers and the Catholic Church's popes as people of color. 

In our tests, asking Gemini to create illustrations of the founding fathers resulted in images of white men with a single person of color or woman in them. When we asked the chatbot to generate images of the pope throughout the ages, we got photos depicting Black women and Native Americans as the leader of the Catholic Church. Asking Gemini to generate images of American women gave us photos with a white, an East Asian, a Native American and a South Asian woman. The Verge says the chatbot also depicted Nazis as people of color, but we couldn't get Gemini to generate Nazi images. "I am unable to fulfill your request due to the harmful symbolism and impact associated with the Nazi Party," the chatbot responded. 

Gemini's behavior could be a result of overcorrection, since AI-powered chatbots and robots have tended to exhibit racist and sexist behavior in recent years. In one experiment from 2022, for instance, a robot repeatedly chose a Black man when asked which among the faces it scanned was a criminal. In a statement posted on X, Gemini Product Lead Jack Krawczyk said Google designed its "image generation capabilities to reflect [its] global user base, and [it takes] representation and bias seriously." He said Gemini will continue to generate racially diverse illustrations for open-ended prompts, such as images of people walking their dog. However, he admitted that "[h]istorical contexts have more nuance to them and [his team] will further tune to accommodate that."

We are aware that Gemini is offering inaccuracies in some historical image generation depictions, and we are working to fix this immediately.

As part of our AI principles, we design our image generation capabilities to reflect our global user base, and we…

— Jack Krawczyk (@JackK) February 21, 2024


Google introduces a lightweight open AI model called Gemma

Google has released an open AI model called Gemma, which it says is created using the same research and technology that was used to build its Gemini AI models. The company says Gemma is its contribution to the open community and is meant to help developers "in building AI responsibly." As such, it also introduced the Responsible Generative AI Toolkit alongside Gemma. It contains a debugging tool, as well as a guide with best practices for AI development based on Google's experience.

The company has made Gemma available in two different sizes — Gemma 2B and Gemma 7B — which both come with pre-trained and instruction-tuned variants and are both lightweight enough to run directly on a developer's laptop or desktop computer. Google says Gemma surpasses much larger models when it comes to key benchmarks and that both model sizes outperform other open models out there. 

In addition to being powerful, the Gemma models were trained to be safe. Google used automated techniques to strip personal information from the data it used to train the models, and it used reinforcement learning from human feedback to ensure Gemma's instruction-tuned variants behave responsibly. Companies and independent developers can use Gemma to create AI-powered applications, especially if none of the currently available open models are powerful enough for what they want to build. 

Google has plans to introduce even more Gemma variants in the future for an even more diverse range of applications. That said, those who want to start working with the models right now can access them through data science platform Kaggle, the company's Colab notebooks or through Google Cloud. 


FuboTV accuses Disney, Fox and Warner Bros. of antitrust practices over joint streaming service

FuboTV, a streaming platform dedicated to live sports, has filed an antitrust lawsuit against Disney, Fox and Warner Bros. Discovery, accusing the companies of staging "a years-long campaign" to hamper its business. The company's lawsuit comes shortly after Disney-owned ESPN, Fox and Warner Bros. Discovery announced that they're launching a sports streaming service in the fall of 2024, which will give subscribers access to sporting events from the networks they own. FuboTV's complaint argued that the companies are stealing its playbook and that the launch of their joint venture will destroy competition and lead to price inflation for consumers. 

Further, FuboTV alleged that the launch of the defendants' streaming service is but "the latest coordinated step" in their "campaign to eliminate competition in the sports-first streaming market" and in their effort to block its business. The streaming service said the defendants charge it content licensing rates that are 30 to 50 percent higher than the rates they charge other distributors. They also allegedly force FuboTV to bundle "dozens of expensive non-sports channels" that "customers do not want" with their sports offerings as a condition of licensing their content. All of this increases the costs FuboTV must pass on to its customers, the company explained. 

FuboTV also claimed that the companies in question have prevented it from being able to offer streaming products subscribers would like, including content available on Hulu. Plus, the defendants allegedly impose a limitation on how many subscribers can buy their content package, ensuring that FuboTV can't make a dent in the market. 

"Each of these companies has consistently engaged in anticompetitive practices that aim to monopolize the market, stifle any form of competition, create higher pricing for subscribers and cheat consumers from deserved choice," FuboTV CEO David Gandler said in a statement. "By joining together to exclusively reserve the rights to distribute a specialized live sports package, we believe these corporations are erecting insurmountable barriers that will effectively block any new competitors from entering the market. This strategy ensures that consumers desiring a dedicated sports channel lineup are left with no alternative but to subscribe to the Defendants' joint venture."

Engadget has reached out to all three defendants: ESPN has declined to comment, while Fox and Warner Bros. Discovery have yet to get back to us. FuboTV is asking the court to prohibit the joint venture's launch or to impose restrictions, such as economic parity of licensing terms, on the defendants.


Samsung is upgrading a bunch of audio capabilities on its phones, tablets and earbuds

Samsung has announced a variety of updates designed to give its devices' audio capabilities a boost, starting with a Galaxy Buds capability that could make it easier to communicate in another language. The company launched a new feature called Live Translate with its Galaxy S24 series, which people can use as an interpreter for phone calls to, say, a restaurant in a foreign country they're visiting. Soon, Galaxy S24 owners will be able to use their phones as a real-time translation tool for in-person conversations if they pair their devices with their updated Galaxy Buds. 

When users listen to the other person through their earbuds, they'll hear the words translated into their own language. Meanwhile, the other person can hear them in their language through the phone speaker. The user can also swap the order of speech during the conversation by tapping on their Galaxy Buds. Samsung says this eliminates the need to pass a phone back and forth when trying to converse in two different languages. When we tested out Live Translate on the Galaxy S24 with a phone call, though, we experienced a noticeable lag before Samsung's computerized system interpreted our words. Still, this could be a valuable tool for travelers visiting foreign countries. 

The company has also revealed that it's expanding Auracast support to its phones and tablets, including the Galaxy S24 series. Auracast is a Bluetooth technology that allows users to broadcast audio from devices, such as phones and TVs, to an unlimited number of nearby headphones, speakers and earbuds. Samsung initially made the technology available only for its smart TVs. With this Galaxy Buds update, owners will be able to use Auracast to transmit audio from their mobile devices to multiple earbuds. 

In addition, Galaxy Buds2 Pro and Buds2 users will be able to enjoy 360 Audio if they pair their earbuds with certain Samsung Neo QLED, QLED and OLED TV models. By doing so, their earbuds will be able to track their head movements for an immersive watching or listening experience. Finally, Galaxy Buds2 Pro users will be able to use Samsung's Auto Switch feature to automatically switch their connection between the company's tablets, Galaxy Books and TVs and its phones if they need to take a phone call. These features are making their way to Galaxy Buds2 Pro, Buds2 and Buds FE users starting in late February. 
