WhatsApp is introducing a new feature called Advanced Chat Privacy that can block participants from sharing the contents of a conversation. This is an opt-in tool that’s available for both traditional chats and groups.
Once selected, the toolset will prevent anyone in the chat from sharing anything outside of the app. This means it’ll block all chat exports, but that’s just the beginning. The software also prevents a smartphone from auto-downloading media and will stop integration with AI assistants. Basically, what happens on WhatsApp stays on WhatsApp. However, it's unclear if it prevents screenshotting. We've reached out to Meta and will update this post when we hear back.
The platform says this is "best used when talking with groups where you may not know everyone closely but are nevertheless sensitive in nature." It gives examples like a support group about health challenges and a chat about community organizing.
WhatsApp says this is just the first version of the tool and that it’ll be adding "more protections" in the future. It’s rolling out right now across the globe, though it could take a month or two to reach everyone.
This article originally appeared on Engadget at https://www.engadget.com/apps/whatsapp-introduces-a-feature-that-blocks-chat-participants-from-sharing-content-150042001.html?src=rss
I try to play as broad a swathe of games as I can, including as many of the major releases as I am able to get to. Baldur's Gate 3 garnered near-universal praise when it arrived in 2023, and I was interested in trying it. But when I watched gameplay videos, the user interface seemed distressingly busy. There were far too many icons at the bottom of the screen and my brain crumbled at the sight of them. I have yet to try Baldur's Gate 3.
Two years later, I had similar feelings ahead of checking out Overwatch 2's Stadium, a major new mode for a game I play nearly every single day. Blizzard gave members of the press a spreadsheet that detailed all of the possible upgrades and powers for each hero, as well as a list of modifiers that any character can use. With two dozen or so unlockables for each of the 17 heroes that will be in Stadium at launch and about 70 general upgrades, that's hundreds of different options Blizzard is adding to the game all at once.
As I scrolled through the list, I was surprised that a feeling of dismay washed over me. I started to worry that Stadium might not be for me.
Figuring out how to combine the items and powers in effective ways for so many different characters seemed completely daunting. It doesn't help that I'm growing tired of more and more major games having RPG elements with deeper character customization. Taking some of the decision making out of my hands by giving a character a defined set of abilities and weapons with no stat or gear upgrades to worry about is more my speed.
Thankfully, Blizzard has some good ideas on how to welcome players into this new mode. And, as it turns out, once I actually started playing Stadium, my anxious feelings swiftly melted away and I had a great time with it.
Blizzard bills Stadium, which will go live for all players as part of season 16 on April 22, as the third pillar of Overwatch 2. It will nestle alongside the Competitive and Unranked modes and only be available in a ranked format.
Stadium is a very different take on Overwatch 2. For instance, it has a more sports-like presentation. Thanks to some tweaks to maps that seem a little out of the Apex Legends playbook and a new, looser announcer, it feels a bit more like a spectator sport than the lore-infused Competitive and Unranked formats. The maps in Stadium are either new stages or condensed versions of existing ones, with rounds typically lasting just a few minutes each.
On paper, Stadium is a more tactical spin on Overwatch 2, though with a vastly different approach than the likes of Valorant or CS:GO. Neither of those games really landed for me (I retired from Valorant with a very modest undefeated record), adding to my concern that I wouldn’t gel with Stadium.
This is a best-of-seven, 5v5 format built around customizing your hero during a match with various upgrades. What's more, this is the first time players can opt for a third-person view at all times. The first-person view is still there if you prefer it.
It's a little reductive to think of Stadium as Blizzard's answer to Marvel Rivals. It's been in development for over two years — it was conceived before Overwatch 2 even debuted and long before Marvel Rivals siphoned away a chunk of the player base. Still, it's hard not to make the comparison.
There's a lot to drink in here. Ahead of my hands-on time with Stadium, I asked game director Aaron Keller how the Overwatch 2 team designed the mode to avoid making it feel too overwhelming and how the developers hoped to ease players into Stadium.
The team has done a few things with the aim of making the transition "a little less intimidating" for both long-time players and newcomers to the game, such as having a tab with example builds in the Armory, the pre-round shop where you select your upgrades. "If you want to, when you're playing a hero for the first time, you can just click through a custom, designer-built set of powers and items that you can unlock over the course of that match," Keller said. "It takes a little bit of what can be an overwhelming decision-making process out of your first-time experience, but you'll still be able to feel yourself grow in power."
Restricting the initial roster of heroes to 17 out of 43 can help players get to grips with Stadium, Keller suggested, though Blizzard will add more characters to the mode each season (newcomer Freja will join Stadium after the midseason update). The lack of hero swapping could also be a boon here. "All you're really gonna have to focus on is what your hero, your team's heroes and the enemy team can do over the course of that match," Keller said.
The lack of hero swaps did seem odd at first. One of the things that initially drew me to Overwatch was that each character had a defined set of abilities. The idea of being able to switch to a different hero to counter a particular menace on the enemy team was such a core part of the Overwatch experience for so long, but that faded over time. The switch to role locks (which restrict each player to picking a hero from a single class) and the new perks system, which incentivizes sticking with one character over the course of a match to unlock useful upgrades, have diminished the freedom of swapping to any other hero at any time.
In Stadium, rather than hero swaps, the answer to countering a pesky opponent is optimizing your build. "A lot of Stadium takes place during combat, but it's just as important to be able to put a strategy together around what you're unlocking in the Armory," Keller said. "It becomes much, much harder to do that if you can't predict what the heroes are going to be on the enemy team from round to round."
To keep players from feeling like they're unable to deal with a certain enemy (such as a D.Va who normally couldn't block a Zarya's beam), they'll be able to put together counter builds in the Armory.
"We've got anti-barrier builds you can use. We've even got anti-beam builds that are available to different heroes," Keller said. "If you're going up against a Zarya, there are some things that you, or people on your team, are going to be able to do to counter that."
Through the Armory, you can unlock up to four powers: powerful and/or ridiculous abilities that you pick from every other round and that stay locked in for the duration of a match.
One power sees Ashe's ultimate cost slashed in half, but when she deploys B.O.B., he's just a little guy with lower attack speed and durability. Mini B.O.B. is just far too adorable for words. Another power lets Kiriko players spawn an AI-controlled clone of the support for a few seconds after she teleports.
Along with powers, there are items. These are purchased with earnable currency and can be swapped out before each round. You get some currency at the beginning of a match and earn more by playing well — dealing damage, scoring eliminations, healing allies, collecting a bounty by taking out an enemy who's crushing it and so on. Common and rare items boost your stats, but epic items are the ones you want. These are the more expensive upgrades that you unlock more of the longer a Stadium match goes.
Mei has some really great tweaks, such as moving faster on ground she has frozen, removing a burn effect with her chilling primary fire and turning into a rolling ice ball that damages opponents. One enemy I faced used a combo of Mei's ice ball and ice wall to trap me, with both abilities damaging my hero at the same time. I'm stealing that strategy.
Orisa, meanwhile, can use her javelin spin to fly a short distance. Ana (the best hero in the game) can cast her powerful Nano Boost through walls and to multiple allies. Soldier: 76 can get a short burst of his auto-aiming ultimate after damaging an enemy with his Helix Rockets. This is just scratching the surface of the items on offer, and the options can compound on each other to make abilities wildly powerful.
"I mostly just want to present a space for players where they feel like they can take the elements they really love about the other core modes that we have and just push them. Find that character that speaks to them and just push it as far as they can," senior game designer Dylan Snyder said when asked what would make the team's work on Stadium feel like it paid off.
"If we start seeing people sharing builds around and saying 'guys, I found this, this is the answer in this scenario, check this out.' They do write-ups on that, to me that's a win. Any numbers or metrics aside, to me, that's the mark of something that has landed with people."
Overwatch 2's practice range is there for a reason
I'm glad I took some time to play around with all of the heroes in the Stadium version of the practice range before hopping into a match. I started to get a feel for what each hero could do with maxed-out example builds. Certain abilities can quickly become very powerful if you pick powers and items that complement each other. When I hopped into matches, I made a conscious choice to stop worrying about understanding everything and to embrace the side of Overwatch 2 that I love the most: full-blown chaos.
Relying on the example builds was a big help at the outset. By focusing on those — and selecting the items that I felt would be the most effective at any given time — I didn't have to overthink anything. Just quickly pick a power and some items and try to enjoy myself, before switching to more powerful items as soon as I had a chance. That was my strategy.
Because of that, I've been having an absolute blast with Stadium so far. Playing around with all the new stuff you can do as each of the heroes is far more engaging than I expected. Piling every resource into survivability as a tank or weapon upgrades as a damage hero makes sense, but each hero has a ton of flexibility.
For instance, I could have gone all in on upgrading Ashe's Dynamite. But having a second Coach Gun charge to simultaneously blow up a trio of additional sticky explosives that can spawn when Ashe's Dynamite detonates was very impactful. I picked up quite a few kills with that trick.
My favorite upgraded ability so far is being able to fly while using Reinhardt's charge. He can soar across nearly half a map in a few seconds. It's absurd. Not even flying heroes are safe from Reinhardt barreling them into a wall.
I'm a bit more mixed on the third-person view. It does have a lot of advantages, such as a wider field of view and peeking around walls. Until now, I've often had to use a dance emote to secretly peer around a corner. A lot of players will also appreciate being able to get a better look at the skins they've worked so hard (or spent so much) to unlock.
But I think some of the game's tactility is lost in third-person mode. In that perspective, Reinhardt feels a little slower and the satisfying smack of his hammer when it clatters an enemy feels less impactful. It's also a little jarring to switch from a third-person view to aiming down a rifle's sights with Ashe or Ana. So, although the third-person perspective works well for heroes like D.Va, Kiriko and Lucio, I'm glad the first-person mode is still an option.
Meanwhile, Soldier: 76 feels completely overpowered as things stand. He's been ever-present in my matches, and those playing as him usually ended up with the most currency of anyone in the lobby. But that's the kind of thing the developers will be keeping a close eye on. It'll be even tougher to balance Stadium than the other modes, and doing so will be an ongoing process.
When I first started playing Overwatch in 2016, it took me several weeks to get my head around all of the heroes' abilities and how they could be combined or countered. It's going to take me a while to fully understand all of the new stuff here given the multiple layers of complexity, but I'm happy to just relax and have fun, and passively absorb all of the information instead of poring over it like I'm studying for a test.
Despite my initial reservations, I can see myself sticking with Stadium for a while. I've seen some wild stuff already, and things are going to get more bananas in the coming months as Blizzard folds more heroes into the mode. Plus, the Overwatch 2 I know and love is still there. If I ever feel too overwhelmed in Stadium, I can always retreat to the comfort of my beloved Mystery Heroes.
This article originally appeared on Engadget at https://www.engadget.com/gaming/overwatch-2s-frenetic-stadium-mode-is-a-new-lease-on-life-for-my-go-to-game-165053113.html?src=rss
Following customer outrage over its latest terms of service (ToS), Adobe is making updates to add more detail around areas like AI and content ownership, the company said in a blog post. "Your content is yours and will never be used to train any generative AI tool," wrote head of product Scott Belsky and VP of legal and policy Dana Rao.
Subscribers using products like Photoshop, Premiere Pro and Lightroom were incensed by new, vague language they interpreted to mean that Adobe could freely use their work to train the company's generative AI models. In other words, creators thought that Adobe could use AI to effectively rip off their work and then resell it.
Other language was thought to mean that the company could actually take ownership of users' copyrighted material (understandably so, given how that language read).
None of that was accurate, Adobe said, noting that the new terms of use were put in place for its product improvement program and content moderation for legal reasons, mostly around CSAM. However, many users didn't see it that way and Belsky admitted that the company "could have been clearer" with the updated ToS.
"In a world where customers are anxious about how their data is used, and how generative AI models are trained, it is the responsibility of companies that host customer data and content to declare their policies not just publicly, but in their legally binding Terms of Use," Belsky said.
To that end, the company promised to overhaul the ToS using "more plain language and examples to help customers understand what [ToS clauses] mean and why we have them," it wrote.
Adobe didn't help its own cause by releasing an update on June 6th with some minor changes to the same vague language as the original ToS and no sign of an apology. That only seemed to fuel the fire more, with subscribers to its Creative Cloud service threatening to quit en masse.
In addition, Adobe claims that it only trains its Firefly system on Adobe Stock images. However, multiple artists have noted that their names are used as search terms on Adobe's stock image site, as Creative Bloq reported. The results yield AI-generated art that occasionally mimics the artists' styles.
Its latest post is more of a true mea culpa with a detailed explanation of what it plans to change. Along with the AI and copyright areas, the company emphasized that users can opt out of its product improvement programs and that it will more "narrowly tailor" licenses to the activities required. It added that it only scans data on the cloud and never looks at locally stored content. Finally, Adobe said it will be listening to customer feedback around the new changes.
This article originally appeared on Engadget at https://www.engadget.com/adobe-is-updating-its-terms-of-service-following-a-backlash-over-recent-changes-120044152.html?src=rss
I'm the first to admit that the amount of joy Google Sheets brings me is a bit odd, but I use it for everything from tracking my earnings to planning trip budgets with friends. So, I'm excited to see that Google is making it easier to get notified about specific changes to my spreadsheet without me learning to code (something I've just never gotten into). The company has announced that Google Sheets is getting conditional notifications, meaning you can set rules in spreadsheets that send emails when certain things happen.
For example, you could set it to send you an email notification when a number drops below or above a certain amount or when a column's value changes at all. You can also set rules that align more with a project manager tool, like getting a notification when a task's status or owner changes. This tool only requires edit access, with anyone able to set notifications for themselves or others by entering their email addresses. Don't worry, you can unsubscribe if someone starts sending you unwanted notifications.
To use conditional notifications, go to tools and then conditional notifications or just right-click in a cell. From there, click add rule (you can name the rule or let Google auto-label it) and then select a custom range or column. You can add additional criteria for the rule, such as exactly what a box should say for you to receive a notification. Then, you can manually input email addresses or select a column containing them. However, Google warns that if you do the latter, the number of cells must match the number included in the rule. So, if you have three cells in the rule, you can only highlight three cells with email addresses. If you get confused, Google gets into all the nitty-gritty of it here.
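If you're curious what's happening under the hood, the logic Google describes boils down to a simple pattern: watch a column or range, check a condition whenever it changes and email the subscribers. Here's a minimal, hypothetical Python sketch of that pattern; the class, the condition lambda and the send_email stub are ours for illustration, not anything Sheets exposes (the whole point of the feature is that you don't have to write this yourself).

```python
# Illustrative sketch only: models the rule logic described above
# (watch a column, fire an email when its value changes and meets a
# condition). Names and the send_email stub are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class NotificationRule:
    name: str
    column: str                       # the column being watched, e.g. "Status"
    condition: Callable[[object], bool]
    recipients: list

def send_email(to, subject, body):
    # Stand-in for an email service; a real system would use SMTP or an API.
    print(f"To: {to}\nSubject: {subject}\n{body}\n")

def evaluate(rule, old_row, new_row):
    before, after = old_row.get(rule.column), new_row.get(rule.column)
    # Fire only when the watched column actually changes and the condition holds.
    if before != after and rule.condition(after):
        for recipient in rule.recipients:
            send_email(
                recipient,
                f"Rule '{rule.name}' triggered",
                f"{rule.column} changed from {before!r} to {after!r}",
            )

# Example: notify a hypothetical address when a task's status flips to "Blocked".
rule = NotificationRule(
    name="Blocked tasks",
    column="Status",
    condition=lambda value: value == "Blocked",
    recipients=["pm@example.com"],
)
evaluate(rule, {"Status": "In progress"}, {"Status": "Blocked"})
```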
Google Sheets' conditional notifications are available to anyone with the following Workspace editions: Business Standard and Plus, Education Plus and Enterprise Starter, Standard, Plus or Essential. The feature started rolling out for Rapid Release domains on June 4 and will begin showing up for Standard Release domains on June 18. In both cases, conditional notifications might take up to 15 days to appear.
This article originally appeared on Engadget at https://www.engadget.com/google-sheets-new-tool-lets-you-set-specific-rules-for-notifications-133030113.html?src=rss
Opera users can already rely on the capabilities of OpenAI's large language models (LLMs) whenever they use the browser's built-in Aria AI assistant. But now, the company has also teamed up with Google to integrate its Gemini AI models into Aria. According to Opera, its Composer AI engine can process the user's intent based on their inquiry and then decide which model to use for each particular task.
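Opera hasn't said how Composer makes that call, but the behavior it describes (work out the intent behind a prompt, then hand it to whichever model suits the job) maps onto a simple dispatch pattern. The sketch below is purely illustrative: the intent labels, the keyword classifier and the routing table are assumptions on our part, not Opera's actual engine.

```python
# Hypothetical sketch of intent-based model routing, in the spirit of what
# Opera describes for Composer. Intent labels, the classifier and the
# routing table are illustrative stand-ins, not Opera's implementation.
ROUTING_TABLE = {
    "image_generation": "imagen-2",
    "read_aloud": "google-text-to-audio",
    "code_help": "openai-llm",
    "general_chat": "gemini",
}

def classify_intent(prompt: str) -> str:
    # A real engine would likely use a trained classifier;
    # simple keyword matching stands in for it here.
    text = prompt.lower()
    if "image of" in text or "draw" in text:
        return "image_generation"
    if "read this" in text or "out loud" in text:
        return "read_aloud"
    if "code" in text or "function" in text:
        return "code_help"
    return "general_chat"

def route(prompt: str) -> str:
    intent = classify_intent(prompt)
    model = ROUTING_TABLE[intent]
    return f"[{model}] would handle: {prompt}"

print(route("make an image of a dog on vacation at a beach having a drink"))
print(route("summarize this article for me"))
```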
Google called Gemini "the most capable model [it has] ever built" when it officially announced the LLM last year. Since then, the company has announced Gemini-powered features across its products and has built the Gemini AI chatbot right into Android. Opera said that thanks to Gemini's integration, its browser "will now be able to provide its users with the most current information, at high performance."
The company's partnership with Google also enables Aria to offer new experimental features as part of its AI Feature Drop program. Users who have the Opera One Developer version of the browser can try a new image generation feature powered by Google's Imagen 2 model for free — in the image above, for instance, the user asked Aria to "make an image of a dog on vacation at a beach having a drink." In addition, users can listen to Aria read out responses in a conversational tone using Google's text-to-audio model. If everything goes well during testing, Opera could roll out the features to everyone, though they can still go through some changes, depending on early adopters' feedback.
This article originally appeared on Engadget at https://www.engadget.com/opera-is-adding-googles-gemini-ai-to-its-browser-120013023.html?src=rss
At this year's Build event, Microsoft announced Team Copilot, and as you can probably guess from its name, it's a variant of the company's AI tool that caters to the needs of a group of users. It expands Copilot's abilities beyond those of a personal assistant, so that it can serve a whole team, a department or even an entire organization, the company said in its announcement. The new tool was designed to free up personnel by taking on time-consuming tasks, such as managing meeting agendas and taking minutes that group members can tweak as needed.
The new Copilot for Teams can also serve as a meeting moderator by summarizing important information for latecomers (or for reference after the fact) and answering questions. Finally, it can create and assign tasks in Planner, track their deadlines, and notify team members if they need to contribute to or review a certain task. The company's customers paying for a Copilot license on Microsoft 365 will be able to test these features in preview starting later this year.
In addition to Team Copilot, Microsoft also announced new ways customers can personalize the AI assistant. In Copilot Studio, users will be able to make custom Copilots in SharePoint so that they can more quickly access the information they need, as well as create custom Copilots that act as agents. The latter would allow companies and business owners to automate business processes, such as end-to-end order fulfillment. Finally, the debut of Copilot connectors in Studio will make it easier for developers to build Copilot extensions that customize the AI tool's actions.
This article originally appeared on Engadget at https://www.engadget.com/microsoft-unveils-copilot-for-teams-153059261.html?src=rss
Google has updated some of its accessibility apps to add capabilities that will make them easier to use for the people who need them. It has rolled out a new version of the Lookout app, which can read text and even lengthy documents out loud for people with low vision or blindness. The app can also read food labels, recognize currency and tell users what it sees through the camera or in an image. Its latest version comes with a new "Find" mode that allows users to choose from seven item categories, including seating, tables, vehicles, utensils and bathrooms.
When users choose a category, the app will be able to recognize objects associated with it as they move their camera around a room. It will then tell them the direction or distance to the object, making it easier for them to interact with their surroundings. Google has also launched an in-app capture button, so users can take photos and quickly get AI-generated descriptions.
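Google hasn't detailed the pipeline behind Find mode, but the behavior it describes (filter what the camera detects by a chosen category, then announce a direction and distance) is easy to sketch. Everything below, from the category labels to the hard-coded detections, is a hypothetical stand-in for an on-device vision model, not Lookout's actual code.

```python
# Illustrative sketch of the "Find" flow described above: filter detected
# objects by a chosen category, then report a rough direction and distance.
# Detections are hard-coded stand-ins for a real on-device vision model.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    category: str
    bearing_degrees: float   # 0 = straight ahead, positive = to the right
    distance_meters: float

def describe(detection: Detection) -> str:
    if abs(detection.bearing_degrees) < 15:
        direction = "ahead"
    elif detection.bearing_degrees > 0:
        direction = "to your right"
    else:
        direction = "to your left"
    return f"{detection.label}, about {detection.distance_meters:.0f} meters {direction}"

def find(detections, category: str):
    # Keep only objects in the category the user asked for.
    return [describe(d) for d in detections if d.category == category]

frame = [
    Detection("armchair", "seating", -30.0, 2.0),
    Detection("dining table", "tables", 5.0, 3.5),
    Detection("stool", "seating", 40.0, 1.5),
]
for announcement in find(frame, "seating"):
    print(announcement)
```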
The company has updated its Look to Speak app as well. Look to Speak lets users communicate with other people by using eye gestures to select phrases from a list, which the app then speaks out loud. Now, Google has added a text-free mode that gives them the option to trigger speech by choosing from a photo book containing various emojis, symbols and photos. Even better, they can personalize what each symbol or image means to them.
Google has also expanded its screen reader capabilities for Lens in Maps, so that it can tell the user the names and categories of the places it sees, such as ATMs and restaurants. It can also tell them how far away a particular location is. In addition, it's rolling out improvements for detailed voice guidance, which provides audio prompts that tell the user where they're supposed to go.
Finally, Google has made Maps' wheelchair information accessible on desktop, four years after it launched on Android and iOS. The Accessible Places feature allows users to see if the place they're visiting can accommodate their needs — businesses and public venues with an accessible entrance, for example, will show a wheelchair icon. They can also use the feature to see if a location has accessible washrooms, seating and parking. The company says Maps has accessibility information for over 50 million places at the moment. Those who prefer looking up wheelchair information on Android and iOS will now also be able to easily filter reviews focusing on wheelchair access.
Google made all these announcements at this year's I/O developer conference, where it also revealed that it open-sourced more code for the Project Gameface hands-free "mouse," allowing Android developers to use it for their apps. The tool allows users to control the cursor with their head movements and facial gestures, so that they can more easily use their computers and phones.
This article originally appeared on Engadget at https://www.engadget.com/googles-accessibility-app-lookout-can-use-your-phones-camera-to-find-and-recognize-objects-160007994.html?src=rss
Google just announced scam detection tools coming to Android phones later this year, which is a good thing, as scammers keep getting better and better at parting people from their money. The toolset, revealed at Google I/O 2024, is still in the testing stages but uses AI to suss out fraudsters in the middle of a conversation.
You read that right. The AI will be constantly on the hunt for conversation patterns commonly associated with scams. Once detected, you’ll receive a real-time alert on the phone, putting to bed any worries that the person on the other end is actually heading over to deliver a court summons or whatever.
Google gives the example of a “bank representative” asking for personal information, like PINs and passwords. Those are requests a real bank wouldn’t make, so the AI would flag them and issue an alert. Everything happens on the device, so the conversation stays private. This feature isn’t coming to Android 15 right away, and the company says it’ll share more details later in the year. We do know that people will have to opt in to use the tool.
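To be clear, Google hasn't shared how its model works; the on-device flow it describes, though (listen for patterns commonly associated with scams and raise an alert when one appears), can be illustrated with a toy sketch. The regex patterns and the alert logic below are made up for demonstration and bear no relation to Gemini Nano's actual classifier.

```python
# Conceptual sketch only: flags conversation snippets that match patterns
# commonly associated with scams, entirely on-device. The patterns and the
# alert logic are illustrative, not Google's model.
import re

SCAM_PATTERNS = [
    r"\b(pin|password|one[- ]time code)\b",
    r"\bwire (the )?money\b",
    r"\bgift cards?\b",
    r"\bact (now|immediately)\b",
]

def flag_utterance(utterance: str):
    """Return the patterns the utterance matches, if any."""
    text = utterance.lower()
    return [p for p in SCAM_PATTERNS if re.search(p, text)]

def monitor(conversation):
    for line in conversation:
        hits = flag_utterance(line)
        if hits:
            # In the real feature this would surface a real-time alert
            # on the phone rather than printing to a console.
            print(f"Possible scam: {line!r} (matched {len(hits)} pattern(s))")

monitor([
    "Hello, this is your bank calling.",
    "To verify your identity, please read me your PIN and password.",
])
```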
Google made a big move with Android 15, bringing Gemini Nano on-device instead of requiring a connection to the cloud. In addition to this scam detection tech, onboard AI will allow for many more features, like contextual awareness when using apps.
This article originally appeared on Engadget at https://www.engadget.com/google-announces-new-scam-detection-tools-that-provide-real-time-alerts-during-phone-calls-181442091.html?src=rss
The Google I/O event is here, and the company is announcing lots of great updates for your Android device. As we heard earlier, Gemini Nano is getting multimodal support, meaning your Android will still process text but with a better understanding of other factors like sights, sounds and spoken language. Now Google has shared that the new tool is also coming to its TalkBack feature.
TalkBack is an existing tool that reads aloud a description of an image, whether it's one you captured or from the internet. Gemini Nano's multimodal support should provide a more detailed understanding of the image. According to Google, TalkBack users encounter about 90 images each day that don't have a label. Gemini Nano should be able to provide missing information, such as what an item of clothing looks like or the details of a new photo sent by a friend.
Gemini Nano works directly on a person's device, meaning it should still function properly without any network connection. While we don't yet have an exact date for when it will arrive, Google says TalkBack will get Gemini Nano's updated features later this year.
This article originally appeared on Engadget at https://www.engadget.com/googles-gemini-nano-brings-better-image-description-smarts-to-its-talkback-vision-tool-180759598.html?src=rss
When Google first showcased its Duplex voice assistant technology at its developer conference in 2018, it was both impressive and concerning. Today, at I/O 2024, the company may be bringing up those same reactions again, this time by showing off another application of its AI smarts with something called Project Astra.
The company couldn't even wait till its keynote today to tease Project Astra, posting a video to its social media of a camera-based AI app yesterday. At its keynote today, though, Google's DeepMind CEO Demis Hassabis shared that his team has "always wanted to develop universal AI agents that can be helpful in everyday life." Project Astra is the result of progress on that front.
What is Project Astra?
According to a video that Google showed during a media briefing yesterday, Project Astra appeared to be an app which has a viewfinder as its main interface. A person holding up a phone pointed its camera at various parts of an office and verbally said "Tell me when you see something that makes sound." When a speaker next to a monitor came into view, Gemini responded "I see a speaker, which makes sound."
The person behind the phone stopped and drew an onscreen arrow to the top circle on the speaker and said, "What is that part of the speaker called?" Gemini promptly responded "That is the tweeter. It produces high-frequency sounds."
Then, in the video that Google said was recorded in a single take, the tester moved over to a cup of crayons further down the table and asked "Give me a creative alliteration about these," to which Gemini said "Creative crayons color cheerfully. They certainly craft colorful creations."
Wait, were those Project Astra glasses? Is Google Glass back?
The rest of the video goes on to show Gemini in Project Astra identifying and explaining parts of code on a monitor, telling the user what neighborhood they were in based on the view out the window. Most impressively, Astra was able to answer "Do you remember where you saw my glasses?" even though said glasses were completely out of frame and were not previously pointed out. "Yes, I do," Gemini said, adding "Your glasses were on a desk near a red apple."
After Astra located those glasses, the tester put them on and the video shifted to the perspective of what you'd see on the wearable. Using a camera onboard, the glasses scanned the wearer's surroundings to see things like a diagram on a whiteboard. The person in the video then asked "What can I add here to make this system faster?" As they spoke, an onscreen waveform moved to indicate it was listening, and as it responded, text captions appeared in tandem. Astra said "Adding a cache between the server and database could improve speed."
The tester then looked over to a pair of cats doodled on the board and asked "What does this remind you of?" Astra said "Schrodinger's cat." Finally, they picked up a plush tiger toy, put it next to a cute golden retriever and asked for "a band name for this duo." Astra dutifully replied "Golden stripes."
How does Project Astra work?
All of this means that not only was Astra processing visual data in real time, it was also remembering what it saw and working with an impressive backlog of stored information. This was achieved, according to Hassabis, because these "agents" were "designed to process information faster by continuously encoding video frames, combining the video and speech input into a timeline of events, and caching this information for efficient recall."
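Hassabis's description suggests a rolling buffer of time-stamped, encoded events that the agent can search when asked about the past. As a rough illustration only, here's what such a timeline cache might look like; every name here is hypothetical, and plain-text descriptions stand in for the embeddings a real system would store.

```python
# Hypothetical sketch of a "timeline of events" cache like the one Hassabis
# describes: encoded video frames and speech are merged into one
# chronological buffer that can be queried later. All names and the toy
# recall logic are illustrative, not DeepMind's implementation.
from __future__ import annotations
from collections import deque
from dataclasses import dataclass

@dataclass
class Event:
    timestamp: float
    modality: str       # "video" or "speech"
    description: str    # stands in for an encoded embedding

class TimelineCache:
    def __init__(self, max_events: int = 1000):
        # Bounded buffer so memory stays flat during long sessions.
        self.events: deque[Event] = deque(maxlen=max_events)

    def add(self, event: Event) -> None:
        self.events.append(event)

    def recall(self, query: str) -> Event | None:
        # Toy recall: most recent event whose description mentions the query.
        # A real agent would do similarity search over embeddings instead.
        for event in reversed(self.events):
            if query.lower() in event.description.lower():
                return event
        return None

cache = TimelineCache()
cache.add(Event(1.0, "video", "glasses on a desk near a red apple"))
cache.add(Event(2.5, "speech", "tell me when you see something that makes sound"))
cache.add(Event(4.0, "video", "speaker next to a monitor"))

hit = cache.recall("glasses")
print(hit.description if hit else "not found")
```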
It's also worth noting that, at least in the video, Astra responded quickly. Hassabis noted in a blog post that "While we’ve made incredible progress developing AI systems that can understand multimodal information, getting response time down to something conversational is a difficult engineering challenge."
Google has also been working on giving its AI a greater range of vocal expression, using its speech models to enhance how the agents sound and give them "a wider range of intonations." This sort of mimicry of human expressiveness in responses is reminiscent of Duplex's pauses and utterances, which led people to think Google's AI might be a candidate for the Turing test.
When will Project Astra be available?
While Astra remains an early feature with no discernible plans for launch, Hassabis wrote that in future, these assistants could be available "through your phone or glasses." No word yet on whether those glasses are actually a product or the successor to Google Glass, but Hassabis did write that "some of these capabilities are coming to Google products, like the Gemini app, later this year."
This article originally appeared on Engadget at https://www.engadget.com/googles-project-astra-uses-your-phones-camera-and-ai-to-find-noise-makers-misplaced-items-and-more-172642329.html?src=rss