
Google’s accessibility app Lookout can use your phone’s camera to find and recognize objects

Google has updated some of its accessibility apps to add capabilities that will make them easier to use for the people who rely on them. It has rolled out a new version of the Lookout app, which can read text and even lengthy documents out loud for people with low vision or blindness. The app can also read food labels, recognize currency and tell users what it sees through the camera or in an image. Its latest version comes with a new "Find" mode that allows users to choose from seven item categories, including seating, tables, vehicles, utensils and bathrooms.

When users choose a category, the app will recognize objects associated with it as they move their camera around a room. It will then tell them the direction or distance of the object, making it easier for them to interact with their surroundings. Google has also launched an in-app capture button, so users can take photos and quickly get AI-generated descriptions.
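Google hasn't said exactly how Find mode turns a detection into guidance, but the last step, converting an object's on-screen position and estimated distance into a spoken cue, is simple enough to sketch. Here's a rough, hypothetical Kotlin illustration; the `Detection` type and the direction thresholds are ours, not Lookout's:

```kotlin
// Hypothetical sketch of turning an object detection into a spoken cue,
// in the spirit of Lookout's Find mode. Not Google's actual code.
data class Detection(
    val label: String,        // e.g. "seating"
    val centerX: Float,       // normalized 0..1 across the camera frame
    val distanceMeters: Float // e.g. from a depth estimate
)

// Map the object's horizontal position in the frame to a rough direction.
fun direction(centerX: Float): String = when {
    centerX < 0.33f -> "to your left"
    centerX > 0.66f -> "to your right"
    else -> "ahead of you"
}

fun announcement(d: Detection): String =
    "${d.label} ${direction(d.centerX)}, about ${"%.1f".format(d.distanceMeters)} meters away"

fun main() {
    val chair = Detection(label = "seating", centerX = 0.2f, distanceMeters = 1.5f)
    println(announcement(chair)) // seating to your left, about 1.5 meters away
}
```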


The company has updated its Look to Speak app, as well. Look to Speak enables users to communicate with other people by selecting from a list of phrases, which they want the app to speak out loud, using eye gestures. Now, Google has added a text-free mode that gives them the option to trigger speech by choosing from a photo book containing various emojis, symbols and photos. Even better, they can personalize what each symbol or image means for them. 
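One way to picture that personalization layer: the photo book is essentially a mapping from symbols to user-defined phrases that gets handed to a text-to-speech engine. A minimal Kotlin sketch with purely illustrative names (on Android, the `speak` callback would wrap the platform's `TextToSpeech` API):

```kotlin
// Minimal sketch of a text-free symbol board in the spirit of Look to
// Speak: each symbol carries a user-defined phrase that is spoken when
// the user selects it with an eye gesture. Names are illustrative only.
class SymbolBoard(private val speak: (String) -> Unit) {
    private val meanings = mutableMapOf(
        "☕" to "I'd like a coffee, please.",
        "🚗" to "Can we go for a drive?"
    )

    // "Personalize what each symbol or image means"
    fun assign(symbol: String, phrase: String) { meanings[symbol] = phrase }

    // Called when gaze tracking reports the user dwelled on a symbol.
    fun onSymbolSelected(symbol: String) {
        meanings[symbol]?.let(speak)
    }
}

fun main() {
    // On Android this lambda would call TextToSpeech.speak(...).
    val board = SymbolBoard { phrase -> println("TTS: $phrase") }
    board.assign("🎵", "Please put some music on.")
    board.onSymbolSelected("🎵") // TTS: Please put some music on.
}
```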

Google has also expanded its screen reader capabilities for Lens in Maps, so that it can tell the user the names and categories of the places it sees, such as ATMs and restaurants. It can also tell them how far away a particular location is. In addition, it's rolling out improvements for detailed voice guidance, which provides audio prompts that tell the user where they're supposed to go. 

Finally, Google has made Maps' wheelchair information accessible on desktop, four years after it launched on Android and iOS. The Accessible Places feature allows users to see if the place they're visiting can accommodate their needs — businesses and public venues with an accessible entrance, for example, will show a wheelchair icon. They can also use the feature to see if a location has accessible washrooms, seating and parking. The company says Maps has accessibility information for over 50 million places at the moment. Those who prefer looking up wheelchair information on Android and iOS will now also be able to easily filter reviews focusing on wheelchair access. 

Google made all these announcements at this year's I/O developer conference, where it also revealed that it open-sourced more code for the Project Gameface hands-free "mouse," allowing Android developers to use it for their apps. The tool allows users to control the cursor with their head movements and facial gestures, so that they can more easily use their computers and phones. 
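Gameface's source is on GitHub, but the core loop is easy to describe: track the head pose frame by frame and translate the change into cursor movement. A simplified, hypothetical Kotlin sketch of that idea follows (the real project builds on MediaPipe's face tracking and involves far more tuning):

```kotlin
// Hypothetical sketch of the core Project Gameface idea: translate head
// pose changes into cursor movement. The real project uses MediaPipe
// face tracking plus facial gestures for clicks; this is simplified.
data class HeadPose(val yaw: Float, val pitch: Float)   // degrees
data class Cursor(var x: Float, var y: Float)

class HeadMouse(private val sensitivity: Float = 10f) {
    private var last: HeadPose? = null

    // Convert the change in head pose since the last frame into a cursor delta.
    fun update(pose: HeadPose, cursor: Cursor) {
        last?.let { prev ->
            cursor.x += (pose.yaw - prev.yaw) * sensitivity
            cursor.y -= (pose.pitch - prev.pitch) * sensitivity
        }
        last = pose
    }
}

fun main() {
    val cursor = Cursor(500f, 500f)
    val mouse = HeadMouse()
    mouse.update(HeadPose(yaw = 0f, pitch = 0f), cursor)
    mouse.update(HeadPose(yaw = 2f, pitch = -1f), cursor) // turn right, tilt down
    println("cursor at (${cursor.x}, ${cursor.y})")       // cursor at (520.0, 510.0)
}
```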



Google announces new scam detection tools that provide real-time alerts during phone calls

Google just announced scam detection tools coming to Android phones later this year, which is a good thing, as scammers keep getting better and better at parting people from their money. The toolset, revealed at Google I/O 2024, is still in the testing stages but uses AI to suss out fraudsters in the middle of a conversation.

You read that right. The AI will be constantly on the hunt for conversation patterns commonly associated with scams. Once one is detected, you’ll receive a real-time alert on the phone, putting to bed any worries that the person on the other end is actually heading over to deliver a court summons or whatever.

Google gives the example of a “bank representative” asking for personal information, like PINs and passwords, requests a real bank is unlikely to make, so the AI would flag them and issue an alert. Everything happens on the device, so the conversation stays private. This feature isn’t coming to Android 15 right away, and the company says it’ll share more details later in the year. We do know that people will have to opt in to use the tool.
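Google hasn't detailed how the model works, and it's certainly doing something smarter than keyword matching. Still, the overall flow, scanning a live transcript locally and alerting on risky patterns, can be illustrated with a deliberately crude Kotlin sketch:

```kotlin
// Toy stand-in for on-device scam screening. Google's feature uses
// Gemini Nano rather than keyword rules; this only illustrates the
// flow: scan the live transcript locally, never send it anywhere,
// and raise an alert when a risky pattern appears.
val riskyPatterns = listOf(
    Regex("""(?i)\b(pin|password|one[- ]time code)\b"""),
    Regex("""(?i)move your (money|savings|funds)"""),
    Regex("""(?i)gift cards?""")
)

fun looksRisky(utterance: String): Boolean =
    riskyPatterns.any { it.containsMatchIn(utterance) }

fun main() {
    val transcript = listOf(
        "Hello, this is your bank's security department.",
        "To verify your identity, please read me your PIN."
    )
    transcript.forEach { line ->
        if (looksRisky(line)) {
            println("Possible scam: \"$line\"") // surfaced as a real-time alert
        }
    }
}
```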

Google made a big move with Android 15, bringing its Gemini Nano model onto devices themselves instead of requiring a connection to the cloud. In addition to this scam detection tech, onboard AI will enable many more features, like contextual awareness when using apps.


Google's Gemini Nano brings better image-description smarts to its TalkBack vision tool

The Google I/O event is here, and the company is announcing lots of great updates for your Android device. As we heard earlier, Gemini Nano is getting multimodal support, meaning your Android device will be able to process not just text but other inputs as well, like sights, sounds and spoken language. Now Google has shared that the new tool is also coming to its TalkBack feature.

TalkBack is an existing tool that reads aloud a description of an image, whether it's one you captured or from the internet. Gemini Nano's multimodal support should provide a more detailed understanding of the image. According to Google, TalkBack users encounter about 90 images each day that don't have a label. Gemini Nano should be able to provide missing information, such as what an item of clothing looks like or the details of a new photo sent by a friend. 

Gemini Nano works directly on a person's device, meaning it should still function properly without any network connection. While we don't yet have an exact date for when it will arrive, Google says TalkBack will get Gemini Nano's updated features later this year.
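The logic TalkBack needs here is essentially a fallback: use the author-provided label when one exists, and only ask the on-device model when it doesn't (the roughly 90-images-a-day case). A hypothetical Kotlin sketch, where `OnDeviceDescriber` is a stand-in for whatever plumbing Google ships around Gemini Nano:

```kotlin
// Sketch of the fallback TalkBack needs: prefer the author-provided
// label, and only generate a description locally when it's missing.
// `OnDeviceDescriber` is hypothetical, not Google's actual API.
interface OnDeviceDescriber {
    fun describe(imageBytes: ByteArray): String // runs locally, no network
}

fun labelForImage(
    altText: String?,
    imageBytes: ByteArray,
    describer: OnDeviceDescriber
): String =
    altText?.takeIf { it.isNotBlank() }
        ?: describer.describe(imageBytes)

fun main() {
    val fakeModel = object : OnDeviceDescriber {
        override fun describe(imageBytes: ByteArray) =
            "A blue denim jacket with silver buttons."
    }
    // No alt text supplied, so the model fills the gap.
    println(labelForImage(altText = null, imageBytes = ByteArray(0), describer = fakeModel))
}
```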



Google's Project Astra uses your phone's camera and AI to find noise makers, misplaced items and more

When Google first showcased its Duplex voice assistant technology at its developer conference in 2018, it was both impressive and concerning. Today, at I/O 2024, the company may be bringing up those same reactions again, this time by showing off another application of its AI smarts with something called Project Astra. 

The company couldn't even wait until its keynote today to tease Project Astra, posting a video of a camera-based AI app to its social media accounts yesterday. At its keynote today, though, Google DeepMind CEO Demis Hassabis shared that his team has "always wanted to develop universal AI agents that can be helpful in everyday life." Project Astra is the result of progress on that front. 

What is Project Astra?

According to a video that Google showed during a media briefing yesterday, Project Astra appeared to be an app with a viewfinder as its main interface. A person holding up a phone pointed its camera at various parts of an office and said, "Tell me when you see something that makes sound." When a speaker next to a monitor came into view, Gemini responded, "I see a speaker, which makes sound."

The person behind the phone stopped and drew an onscreen arrow to the top circle on the speaker and said, "What is that part of the speaker called?" Gemini promptly responded "That is the tweeter. It produces high-frequency sounds."

Then, in the video that Google said was recorded in a single take, the tester moved over to a cup of crayons further down the table and asked "Give me a creative alliteration about these," to which Gemini said "Creative crayons color cheerfully. They certainly craft colorful creations."

Wait, were those Project Astra glasses? Is Google Glass back?

The rest of the video goes on to show Gemini in Project Astra identifying and explaining parts of code on a monitor and telling the user what neighborhood they were in based on the view out the window. Most impressively, Astra was able to answer "Do you remember where you saw my glasses?" even though said glasses were completely out of frame and had not previously been pointed out. "Yes, I do," Gemini said, adding, "Your glasses were on a desk near a red apple."

After Astra located those glasses, the tester put them on and the video shifted to the perspective of what you'd see on the wearable. Using a camera onboard, the glasses scanned the wearer's surroundings to see things like a diagram on a whiteboard. The person in the video then asked "What can I add here to make this system faster?" As they spoke, an onscreen waveform moved to indicate it was listening, and as it responded, text captions appeared in tandem. Astra said "Adding a cache between the server and database could improve speed."

The tester then looked over to a pair of cats doodled on the board and asked "What does this remind you of?" Astra said "Schrodinger's cat." Finally, they picked up a plush tiger toy, put it next to a cute golden retriever and asked for "a band name for this duo." Astra dutifully replied "Golden stripes."

How does Project Astra work?

All of this means that Astra was not only processing visual data in real time, it was also remembering what it saw and working from an impressive backlog of stored information. This was achieved, according to Hassabis, because these "agents" were "designed to process information faster by continuously encoding video frames, combining the video and speech input into a timeline of events, and caching this information for efficient recall."
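In other words, the system behaves like a queryable timeline of events. Here's a hypothetical Kotlin sketch of that idea, strictly an illustration of the description above rather than anything DeepMind has published:

```kotlin
// Hypothetical sketch of the recall mechanism Hassabis describes:
// encoded video frames and transcribed speech merge into one timeline,
// and a query like "where did you see my glasses?" searches the cache.
sealed interface Event { val timestampMs: Long }
data class FrameEvent(
    override val timestampMs: Long,
    val objectsSeen: List<String>          // e.g. from a vision encoder
) : Event
data class SpeechEvent(
    override val timestampMs: Long,
    val transcript: String
) : Event

class EventTimeline {
    private val events = mutableListOf<Event>()   // a bounded cache in practice

    fun record(event: Event) { events += event }

    // Find the most recent frame in which a given object was seen.
    fun lastSighting(obj: String): FrameEvent? =
        events.filterIsInstance<FrameEvent>()
            .lastOrNull { frame -> frame.objectsSeen.any { it.contains(obj) } }
}

fun main() {
    val timeline = EventTimeline()
    timeline.record(FrameEvent(1_000, listOf("desk", "red apple", "glasses")))
    timeline.record(SpeechEvent(2_000, "Tell me when you see something that makes sound."))
    timeline.record(FrameEvent(3_000, listOf("speaker", "monitor")))

    val sighting = timeline.lastSighting("glasses")
    println("Glasses last seen at ${sighting?.timestampMs}ms near ${sighting?.objectsSeen}")
}
```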

It's also worth noting that, at least in the video, Astra responded quickly. Hassabis noted in a blog post that "While we’ve made incredible progress developing AI systems that can understand multimodal information, getting response time down to something conversational is a difficult engineering challenge."

Google has also been working on giving its AI a greater range of vocal expression, using its speech models to enhance "how they sound, giving the agents a wider range of intonations." This sort of mimicry of human expressiveness in responses is reminiscent of the pauses and utterances from Duplex that led people to think Google's AI might be a candidate for the Turing test.

When will Project Astra be available?

While Astra remains an early feature with no discernible launch plans, Hassabis wrote that in the future, these assistants could be available "through your phone or glasses." No word yet on whether those glasses are actually a product or the successor to Google Glass, but Hassabis did write that "some of these capabilities are coming to Google products, like the Gemini app, later this year."



How to watch Google's I/O 2024 keynote

It’s that time of year again. Google’s annual I/O keynote is upon us. This event is likely to be packed with updates and announcements. We’ll be covering all of the news as it happens and you can stream the full event below. The keynote starts at 1PM ET on May 14 and streams are available via YouTube and the company’s hub page.

In terms of what to expect, the rumor mill has been working overtime. There are multiple reports that the event will largely focus on the Android 15 mobile operating system, which seems like a given since I/O is primarily an event for developers and the beta version is already out in the wild.

So let’s talk about the Android 15 beta and what to expect from the full release. The beta includes an updated Privacy Sandbox feature, partial screen sharing to record a certain app or window instead of the whole screen and system-level app archiving to free up space. There’s also improved satellite connectivity, additional in-app camera controls and a new power efficiency mode.

Even with the beta already out, it’s highly probable that Google will drop some surprise Android 15 announcements. The company has confirmed that satellite messaging is coming to Android, so maybe that’ll be part of this event. Rumors also suggest that Android 15 will boast a redesigned status bar and an easier way to monitor battery health.


Android 15 won’t be the only thing Google discusses during the event. There’s a little acronym called AI you may have heard about and the company has gone all in. It’s a good bet that Google will spend a fair amount of time announcing updates for its Gemini AI, which could eventually replace Assistant entirely.

Back in December, it was reported that Google was working on an AI assistant called Pixie as an exclusive feature for Pixel devices. The branding is certainly on point. We could hear more about that, as it may debut in the Pixel 9 later this year. 

Google’s most popular products could also get AI-focused redesigns, including Search, Chrome, G Suite and Maps. We might get an update as to what the company plans on doing about third-party cookies and maybe it’ll throw some AI at that problem too.

What not to expect? Don’t get your hopes up for a Pixel 9 or refreshed Pixel Fold for this event, as I/O is more for software than hardware. We’ll likely get details on those releases in the fall. However, rules were made to be broken. Last year, we got a Pixel Fold announcement at I/O, so maybe the line between hardware and software is blurring. We’ll find out soon.


Instagram's 'Add Yours' sticker now lets you share songs

Instagram just announced some new features coming to Stories, including a suite of interactive stickers. The music one is perhaps the most interesting, as it's an extension of the pre-existing Add Yours feature. The Add Yours Music sticker lets users share their favorite songs, along with a prompt for followers to get in on the fun by sharing their own related tracks. Of course, the song has to already be in Instagram’s music library to work.

To that end, Instagram has partnered with Dua Lipa to promote her new album, Radical Optimism. Many of the songs from the album are available for use in this way, and the artist herself has been posting Stories with Add Yours Music stickers.


Another nifty sticker added today is called Reveal. Opting for this sticker blurs the visuals of a story post and the only way followers can see the content is to DM the person who shared it. Direct messages have become a key factor behind Instagram’s continued growth, with site head Adam Mosseri stating that teens actually spend more time in DMs than anywhere else on the platform.

He also says that “virtually all” engagement growth over the past few years has come from DMs and Stories, according to reporting by Business Insider. So, yeah, this will most definitely be used as a hack by savvy creators looking to boost their engagement. The thirst traps will be thirstier and trappier than ever before.


Instagram has also unveiled a sticker called Frames. This tool throws a Polaroid-esque overlay over a photo, turning it into an instant-print-style image. To reveal the contents, followers will have to channel Andre 3000 and shake their phones like a Polaroid picture, though there’s also a button. Creators can add captions, which are also revealed upon shaking. This feature debuted at this year’s Coachella festival.


Finally, there’s a feature called Cutouts. This tool turns any part of a video or photo in your camera roll into a sticker, which can then be applied to a story or reel. Once a cutout is created, it gets saved into an easily-accessible sticker tray for future uses. This also works with photos posted to Instagram, though the pictures have to be shared by public accounts.

This has been a big month of changes for Instagram. In addition to the aforementioned new sticker systems, the social media app recently overhauled its algorithm to boost original content and deemphasize aggregator accounts. The company also changed the way Reels works to give smaller accounts a chance to expand their reach, though it remains unclear how this works. Instagram has also recently made Meta’s AI chatbot available in DMs, if you want some confident, yet absolutely wrong, answers to questions.


Snapchat will finally let you edit your chats

Snapchat will finally join most of its messaging app peers and allow users to edit their chats. The feature, which will be rolling out “soon,” will initially be limited to Snapchat+ subscribers, the company said.

With the change, Snapchat users will have a five-minute window to rephrase their message, fix typos or otherwise edit their chats. Messages that have been edited will carry a label indicating the text has been changed. The company didn’t say when the feature might be available to more of its users, but it often brings sought-after features to its subscription service first. Snap announced last week that Snapchat+, which costs $3.99 a month, had reached 9 million subscribers.
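Mechanically, the rule as described is simple: allow an edit only within five minutes of sending, and flag the message afterward. A minimal Kotlin sketch, with illustrative names rather than anything from Snap's actual code:

```kotlin
// Minimal sketch of the edit rule as described: edits are allowed only
// within five minutes of sending, and an edited message is labeled.
import java.time.Duration
import java.time.Instant

data class ChatMessage(
    val sentAt: Instant,
    var text: String,
    var edited: Boolean = false
)

val EDIT_WINDOW: Duration = Duration.ofMinutes(5)

fun tryEdit(message: ChatMessage, newText: String, now: Instant = Instant.now()): Boolean {
    if (Duration.between(message.sentAt, now) > EDIT_WINDOW) return false
    message.text = newText
    message.edited = true // surfaces as an "edited" label in the chat
    return true
}

fun main() {
    val msg = ChatMessage(Instant.now().minusSeconds(120), "See you their!")
    println(tryEdit(msg, "See you there!")) // true: still within the window
    println(msg)
}
```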

The app is also adding several non-exclusive features, including updated emoji reactions for chats, the ability to use the My AI assistant to set reminders and AI-generated outfits for Bitmoji. Snap also showed off a new AI lens that transforms users’ selfies into 1990s-themed snapshots (just don’t look too closely at the wireless headphones appearing in many of the images).


Rabbit denies claims that its R1 virtual assistant is a glorified Android app

The Rabbit R1, a pocket-sized AI virtual assistant device, runs Android under the hood and is powered by a single app, according to Android Authority. Apparently, the publication was able to install the R1 APK on a Pixel 6a and make it run as if it were the $199 gadget, bobbing bunny head on the screen and all. If you already have a phone and aren't quite intrigued by specialized devices or keen on being an early adopter, you probably didn't see merit in getting the R1 (or its competitor, the Humane AI Pin) in the first place. But this information could make you question the device's purpose even more. Rabbit CEO Jesse Lyu, however, denied that the company's product could've simply been released as an Android app.

In a statement sent to Android Authority, Lyu said: "rabbit r1 is not an Android app." He added that the company is aware that there are "unofficial rabbit OS app/website emulators out there" and is discouraging their use. "We understand the passion that people have to get a taste of our AI and LAM instead of waiting for their r1 to arrive," he continued. "That being said, to clear any misunderstanding and set the record straight, rabbit OS and LAM run on the cloud with very bespoke AOSP and lower level firmware modifications, therefore a local bootleg APK without the proper OS and Cloud endpoints won’t be able to access our service. rabbit OS is customized for r1 and we do not support third-party clients. Using a bootlegged APK or webclient carries significant risks; malicious actors are known to publish bootlegged apps that steal your data. For this reason, we recommend that users avoid these bootlegged rabbit OS apps."
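Rabbit hasn't published how its cloud tells real R1 hardware from a sideloaded APK, but Lyu's statement implies a server-side gate of some kind. A generic, entirely hypothetical Kotlin sketch of that pattern (every name and check here is invented for illustration):

```kotlin
// Hypothetical sketch of a cloud gateway that refuses bootleg clients.
// Rabbit hasn't described its actual checks; this is a generic pattern.
data class ClientHello(
    val deviceModel: String,
    val firmwareVersion: String,
    val attestationToken: String? // e.g. signed by device hardware
)

// Stand-in for real cryptographic verification of a hardware-backed token.
fun verifyAttestation(token: String?): Boolean =
    token != null && token.startsWith("signed:")

fun admit(hello: ClientHello): Boolean =
    hello.deviceModel == "r1" && verifyAttestation(hello.attestationToken)

fun main() {
    val realDevice = ClientHello("r1", "rabbitOS", "signed:abc123")
    val bootlegApk = ClientHello("Pixel 6a", "Android 14", null)
    println(admit(realDevice)) // true
    println(admit(bootlegApk)) // false: no valid attestation, no service
}
```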

Android Authority acknowledged that Spotify integration and other features probably wouldn't work with the R1's app running on a phone, because the software was created to run on the company's specialized firmware. However, it promised a follow-up story delving deeper into the subject. 

The R1 can book you an Uber, name that song stuck in your brain or find recipes incorporating the ingredients you have in your fridge, among other things a virtual assistant or an AI chatbot can do. When Lyu introduced the R1 at CES 2024, he demonstrated how it can be trained to do a variety of other tasks by teaching it to generate an image using Midjourney. Engadget Deputy Editor Cherlynn Low found it more fun and accessible than the $700 Humane AI Pin, but she remains skeptical about the usefulness of AI devices overall. It may still be too early to tell whether they have the potential to become must-have products for your daily life or the high-tech equivalent of single-use kitchen tools. We're already in the midst of testing the R1 and will publish a review soon to help you decide if it's worth giving the product category a chance. 


Walmart thinks it's a good idea to let kids buy IRL items inside Roblox

Walmart's Discovered experience started out last year as a way for kids to buy virtual items for Roblox inside the game. But today, that partnership is testing an expanded pilot program that will allow teens to buy real-life goods stocked on digital shelves and have them shipped to their door. 

Available to children 13 and up in the US, the latest addition to Walmart Discovered is an IRL commerce shop featuring items created by partnered user-generated content creators including MD17_RBLX, Junozy, and Sarabxlla. Customers can browse and try on items inside virtual shops, after which the game will open a browser window to Walmart's online store (displayed on an in-game laptop) in order to view and purchase physical items. 

Furthermore, anyone who buys a real-world item from Discovered will receive a free digital twin, a matching virtual representation of what they've purchased. Some examples of the first products getting the dual IRL and virtual treatment are a crochet bag from No Boundaries, a TAL stainless steel tumbler and Onn Bluetooth headphones.

According to Digiday, during this initial pilot phase (which will take place throughout May), Roblox will not be taking a cut from any of the physical sales made as part of Walmart's Discovered experience as it looks to determine people's level of interest. However, the parameters of the partnership may change going forward as Roblox gathers more data about how people embrace buying real goods inside virtual stores. 

Unfortunately, while Roblox's latest test may feel like an unusually exploitative way to squeeze even more money from teenagers (or, more realistically, their parents' money), this is really just another small step in the company's efforts to turn the game into an all-encompassing online marketplace. Last year, Roblox made a big push into digital marketing when it launched new ways to sell and present ads inside the game, before later removing requirements for advertisers to create bespoke virtual experiences for each product. 

So in case you needed yet another reason not to save payment info inside a game's virtual store, now instead of wasting money on virtual items, kids can squander cash on junk that will clutter up their rooms too. 


Insta360’s X4 captures 8K 360-degree video

There’s a cult following for 360-degree cameras. While companies like GoPro and Ricoh continue to dabble in the category, Insta360 simply dominates it. Until today, the X3 was the ultimate 360 camera, with loads of features and shooting modes that were relatively easy to use. Insta360’s collection of selfie sticks, guards, cases and peripherals added even more cool tricks like bullet time effects and fast-zoom video effects. A few years later, we’re getting the Insta360 X4, with improvements prioritizing the fundamentals. There are higher-resolution camera sensors, a bigger battery and even more versatility, thanks to multiple resolutions and framerate options.


The Insta360 X4 doesn’t look hugely different from the X3. It has the same candy bar form factor, with two huge wide-angle lenses either side. It does seem more elongated, but I had no issue cramming it into my pocket during a week of testing.

The new camera has removable lens guards, which is an intelligent design improvement. Any damage or scratch to a lens will likely affect image quality, especially when it’s exposed in … adventurous settings. Previously, Insta360 offered sticky lens covers, but the X4’s new guards can be twisted on and off over the lenses. And they come included in the box, which is nice.

Both the USB-C port and battery compartment, where the microSD slot lives, are protected by solid covers with sliding locks. The Insta360 X4’s Type-C port now supports USB 3.0 speeds, arguably necessary when dealing with these higher-resolution videos and bigger files.
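Some rough arithmetic shows why the faster port matters. The bitrate and throughput figures below are our assumptions for the sake of illustration, not Insta360's published specs:

```kotlin
// Back-of-the-envelope look at offload times over USB 2.0 vs USB 3.0.
// Assumed numbers: ~200 Mbps 8K footage, ~35 MB/s and ~300 MB/s of
// real-world throughput for the two USB generations.
fun transferMinutes(fileGb: Double, throughputMBps: Double): Double =
    (fileGb * 1024) / throughputMBps / 60

fun main() {
    // Ten minutes of 8K 360 video at ~200 Mbps is roughly 15GB on disk.
    val fileGb = 200.0 / 8 * 60 * 10 / 1024  // ≈ 14.6 GB
    println("USB 2.0 (~35 MB/s): %.1f min".format(transferMinutes(fileGb, 35.0)))
    println("USB 3.0 (~300 MB/s): %.1f min".format(transferMinutes(fileGb, 300.0)))
}
```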


The button layout remains streamlined and familiar to anyone who’s used Insta360 cameras before. There’s a circular ‘shoot’ button (voice and gesture shooting options are built-in, too, but they’re a little less reliable), a mode switcher, a programmable Q button, and the power button. The 2.5-inch touchscreen is bigger, too, and most settings are only a few swipes away. It feels like using a smartphone, which helps make it intuitive.

However, the sheer versatility means there are a lot of menus to peruse. I never felt overwhelmed, but during testing I never quite managed to get Bullet Time and Time Shift to work anywhere near as well as I’ve seen on YouTube.


Newcomers can power up the X4 immediately and capture video and stills without too much struggle. Naturally, for those who know what they’re doing, this is where things get fun.

The technical improvements focus on video, with the new ability to record footage at up to 8K 30fps or 5.7K at 60fps. Slow-mo video has been boosted up to 4K resolution, too. Insta360’s Me Mode, which captures traditional ‘flat’ video (in combination with its ‘invisible’ selfie stick), has been upgraded to 4K 30fps. In short, it captures more of everything compared to its predecessor. More pixels mean more detail with 360-degree video (or any capture mode). It also ensures that when you crop down to create clips for social media, the footage doesn’t appear too low-res. Plus, Insta360 claims that stepping down to 5.7K resolution to record video will offer better performance in low light, which seemed true during my tests indoors and in the evening.
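To put the crop claim in numbers: an equirectangular 360 frame spreads its width across a full 360 degrees, so a flat social clip only keeps a slice of it. A quick Kotlin illustration (the 90-degree field of view and the frame widths of 7,680 and 5,760 pixels are our assumptions for typical 8K and 5.7K 360 footage):

```kotlin
// Rough crop math for 360 video: only the slice of the frame covering
// the crop's field of view ends up in a flat social-media clip.
fun croppedWidth(frameWidth: Int, cropFovDegrees: Double): Int =
    (frameWidth * cropFovDegrees / 360.0).toInt()

fun main() {
    println("8K source:   ${croppedWidth(7680, 90.0)} px across a 90-degree crop") // 1920 px
    println("5.7K source: ${croppedWidth(5760, 90.0)} px across a 90-degree crop") // 1440 px
}
```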

Insta360 has considered the increased processing demands of higher-resolution content. The X4 has a bigger 2,290mAh battery and, according to the press release, it should be able to capture video for up to 135 minutes, a roughly 67 percent longer runtime than the X3 managed.

While we’re focusing on the upgrades, a lot of Insta360’s best camera features are carryovers from the X3. 360-degree horizon lock keeps all your footage level regardless of how you hold the X4, and there’s still impressive image stabilization and waterproofing up to 33 feet. While the X3 fixed many of the biggest problems with capturing 360-degree video, the X4 has boosted fidelity to the point where it’s possible to capture polished footage without much effort.

The X4 is now available to order directly from Insta360, priced at $499.99. That is $100 more than its predecessor but still less than the company’s pro-level $800 camera, the One RS 1-inch 360 Edition.
