After breaking with tradition and devoting its entire Google I/O keynote to showcasing how it’s stuffing AI into everything imaginable, the company reserved day two to catch up with the one-time star of the show, Android. Alongside the arrival of the second Android 15 beta on Wednesday, Google is unveiling previously unannounced security features in its 2024 mobile software, including AI-powered theft detection, Google Play fraud protection and more.
Theft Detection Lock is a new Android 15 feature that will use AI (there it is again) to recognize phone thefts as they happen and lock things up accordingly. Google says its algorithms can detect motions associated with theft, such as someone grabbing the phone and bolting, biking or driving away. If an Android 15 handset detects one of these situations, the phone’s screen will quickly lock, making it much harder for the snatcher to access your data.
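Google hasn’t said how its detection model works under the hood, but the rough shape of the problem is clear from its description: watch the motion sensors for a sudden grab followed by sustained fast movement, then lock. The sketch below is a hypothetical heuristic along those lines, with made-up thresholds and sensor inputs; it is not Google’s algorithm.

```python
# Hypothetical grab-and-run heuristic. Google hasn't published its actual
# on-device model; this only illustrates the kind of signal such a detector
# might look for: a sharp jerk followed by sustained fast movement.
from collections import deque

JERK_THRESHOLD = 25.0   # m/s^2 spike suggesting the phone was snatched (assumed value)
SPEED_THRESHOLD = 3.0   # m/s sustained movement, e.g. running or biking away (assumed value)
WINDOW_SECONDS = 2.0

class TheftHeuristic:
    def __init__(self, sample_rate_hz: int = 50):
        self.window = deque(maxlen=int(sample_rate_hz * WINDOW_SECONDS))

    def on_sensor_sample(self, accel_magnitude: float, est_speed: float) -> bool:
        """Feed one sensor sample; return True when the screen should lock."""
        self.window.append((accel_magnitude, est_speed))
        if len(self.window) < self.window.maxlen:
            return False
        saw_jerk = any(a > JERK_THRESHOLD for a, _ in self.window)
        moving_fast = sum(s > SPEED_THRESHOLD for _, s in self.window) > len(self.window) // 2
        return saw_jerk and moving_fast
```

The real feature presumably uses a trained model over many more signals, but the shape is the same: a trigger event plus evidence the phone is moving away from its owner.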
A fallback Remote Lock feature lets you quickly lock your handset if someone manages to take it without triggering Theft Detection Lock. With Remote Lock, you can (you guessed it) remotely lock the phone’s screen from any device with only your phone number and the completion of a “quick security challenge.” This is designed to avoid situations where someone gets their phone taken (or loses it) but doesn’t know their Google account password to access Find My Device.
Along similar lines, Offline Device Lock automatically locks your phone’s screen — requiring authentication to unlock — when it’s off the grid. This is designed to counter thieves who quickly take a stolen device offline before the owner can lock or wipe it remotely.
Meanwhile, an update to factory reset protection will require your credentials to use the phone after a data wipe, reducing the incentive for thieves to steal it in the first place. In addition, disabling Find My Device or lengthening the phone’s screen timeout will require security authentication, thwarting another common tactic phone snatchers use to reset the device before getting locked out.
Similar to a feature Apple rolled out earlier this year, Android 15 will also require extra authentication when trying to change account security settings (changing the PIN, disabling theft protection or accessing Passkeys) from an untrusted location.
Catch up on all the news from Google I/O 2024 right here!
This article originally appeared on Engadget at https://www.engadget.com/android-15-will-make-it-harder-for-phone-thieves-to-steal-your-data-170037992.html?src=rss
Microsoft just released a new update for Xbox Cloud Gaming that finally brings mouse and keyboard support, after teasing the feature for years. The tool is currently in beta release and works with both the Edge and Chrome web browsers. It looks pretty simple to use. Just select a game that supports a mouse and keyboard and have at it.
You can also instantly switch between a mouse/keyboard setup and a standard controller by pressing the Xbox button on the controller or pressing a key on the keyboard. The company says it’ll be rolling out badges later in the month to show which games support mouse and keyboard inputs.
For now, there’s support for 26 games. These include blockbusters like ARK Survival Evolved, Halo Infinite and, of course, Fortnite. Smaller games like High on Life and Pentiment can also be controlled via mouse and keyboard. Check the above link for the full list.
Microsoft hasn’t said what took it so long to get this going. The feature was originally expected to launch back in June of 2022, but we didn’t get a progress update until two months ago. Whatever the reason, keyboard-and-mouse setups are practically a requirement for first-person shooters, so better late than never.
This article originally appeared on Engadget at https://www.engadget.com/xbox-cloud-gaming-finally-supports-keyboard-and-mouse-inputs-on-web-browsers-165215925.html?src=rss
Anyone who wants to keep an eye on their perimeter or see nighttime trash panda action may want to check out this deal on Amazon. Currently, bundles of the Blink Outdoor 4 cameras are on sale, with the deepest discount going to a five-pack set. At full price, it costs $400. With the discount, it's $200 instead. That matches the Prime member-only price we saw earlier this year, but this time, even those who don't pay for Amazon's program can get the offer. Other bundles and Blink devices are also discounted.
The Blink Outdoor 4 security cameras allow users to see, hear and talk with anyone who comes into view and send motion-detection alerts and live feeds to a connected smartphone. They can also send footage to an Echo Show smart display and receive commands from other Alexa-enabled devices like an Echo Dot or Fire TV. Just note that Blink equipment isn't Google Assistant- or Siri-compatible, so these really only make sense for the Amazon-based smart home.
The Outdoor 4 is the latest generation of the cameras, offering a wider field of view and better day and night image quality than the previous generation. During the day, they shoot 1080p video, and in the dark they use infrared night vision. Each unit runs on a pair of AA batteries, which should power the camera for two years. A plug-in Sync Module that stays inside is required to operate the Outdoor 4 cameras and, conveniently, is included in each bundle, as are enough batteries for the cameras, mounting kits and the plug for the Sync Module.
For those who just need to keep an eye on one area outside, there's the one-camera system, which also includes the Sync Module and other accessories. It's 40 percent off right now and down to an all-time low of $60. For a longer battery life, the Outdoor 4 single-cam system can also be bundled with a battery pack that extends the run time from two years to four. That version is $80 after a 33 percent discount.
Amazon is also discounting its Blink-branded doorbells, floodlights and indoor cameras as part of a larger sale. Blanketing a home in Alexa's watchful eye just got a whole lot cheaper.
This article originally appeared on Engadget at https://www.engadget.com/blink-outdoor-4-security-cameras-are-up-to-half-off-right-now-155239715.html?src=rss
After years of rumors, Canon has confirmed that a flagship EOS R1 camera is in the works for its EOS line. The full-frame mirrorless camera is slated to arrive later this year and, while Canon hasn't revealed all the details just yet, it teased enough to whet your appetite. There's no word on pricing either, but you may need to dig deep into your wallet for this one.
The company says that the professional-grade camera will have an RF mount and offer improved video and still performance compared with the EOS R3. It will boast an upgraded image processing system that combines a fresh CMOS sensor, a new image processor called Digic Accelerator and the existing Digic X processor.
Canon says the system will be able to process a large volume of data at high speed and deliver advances in autofocus and other areas. The company claims it's been able to combine the capabilities of the image processing system with its deep-learning tech to achieve "high-speed and high-accuracy subject recognition."
This powers a feature called Action Priority, which can, for instance, detect a player carrying out a certain action in a sports game (like shooting a ball) and identify them as the main subject for a shot. The system can then instantly shift the autofocus frame in that person's direction to help make sure the photographer doesn't miss key moments from a game.
Canon claims the EOS R1 can track athletes during sporting events even if they're momentarily out of line of sight. The focus on sports in the initial announcement suggests that the camera could be put to the test at this summer's Olympic Games in Paris.
In addition, Canon says it's bringing the image noise reduction feature that was initially built for PC software directly into the camera. It suggests this further improves image quality and can help users fulfill their creative goals.
This article originally appeared on Engadget at https://www.engadget.com/canon-confirms-its-long-rumored-flagship-eos-r1-is-coming-later-this-year-142838188.html?src=rss
Ahead of Global Accessibility Awareness Day this week, Apple is issuing its typical annual set of announcements around its assistive features. Many of these are useful for people with disabilities, but they also have broader applications. For instance, Personal Voice, which was released last year, helps preserve someone's speaking voice. It can be helpful to those who are at risk of losing their voice, or who have other reasons for wanting to retain their own vocal signature for loved ones in their absence. Today, Apple is bringing eye-tracking support to recent models of iPhones and iPads, as well as customizable vocal shortcuts, music haptics, vehicle motion cues and more.
Built-in eye-tracking for iPhones and iPads
The most intriguing feature of the set is the ability to use the front-facing camera on iPhones or iPads (at least those with the A12 chip or later) to navigate the software without additional hardware or accessories. With this enabled, people can look at their screen to move through elements like apps and menus, then linger on an item to select it.
That pause to select is something Apple calls Dwell Control, which is already available elsewhere in the company's ecosystem, such as in the Mac's accessibility settings. The setup and calibration process should only take a few seconds, and on-device AI handles understanding your gaze. It'll also work with third-party apps from launch, since it's a layer in the OS like AssistiveTouch. Apple already supported eye tracking in iOS and iPadOS when dedicated eye-detection devices were connected; the news today is the ability to do so without extra hardware.
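Apple hasn't shared how Dwell Control is implemented, but the basic dwell-to-select loop is easy to picture: keep track of what the user is looking at and fire a tap once their gaze has stayed put long enough. Below is a minimal sketch of that idea, with an assumed one-second dwell time; it's an illustration, not Apple's code.

```python
# Minimal sketch of dwell-to-select logic (the idea behind Dwell Control).
# Apple hasn't published implementation details; the dwell time here is an
# assumption for illustration only.
import time

DWELL_SECONDS = 1.0  # assumed time an element must be gazed at before it is "clicked"

class DwellSelector:
    def __init__(self):
        self.current_target = None
        self.gaze_start = None

    def update(self, target_id: str | None) -> str | None:
        """Feed the element currently under the user's gaze each frame.
        Returns an element id when it has been looked at long enough to select."""
        now = time.monotonic()
        if target_id != self.current_target:
            # Gaze moved to a new element (or off the UI): restart the timer.
            self.current_target, self.gaze_start = target_id, now
            return None
        if target_id is not None and now - self.gaze_start >= DWELL_SECONDS:
            self.gaze_start = now  # reset so we don't re-trigger every frame
            return target_id
        return None
```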
Vocal shortcuts for easier hands-free control
Apple is also working on improving the accessibility of its voice-based controls on iPhones and iPads. It again uses on-device AI to create personalized models for each person setting up a new vocal shortcut. You can set up a command for a single word or phrase, or even an utterance (like "Oy!" perhaps). Siri will understand these and perform your designated shortcut or task. You can have these launch apps or run a series of actions that you define in the Shortcuts app, and once they're set up, you won't have to wake Siri first.
Another improvement coming to vocal interactions is "Listen for Atypical Speech," which has iPhones and iPads use on-device machine learning to recognize speech patterns and customize their voice recognition around your unique way of vocalizing. This sounds similar to Google's Project Relate, which is also designed to help technology better understand those with speech impairments or atypical speech.
To build these tools, Apple worked with the Speech Accessibility Project at the Beckman Institute for Advanced Science and Technology at the University of Illinois Urbana-Champaign. The institute is also collaborating with other tech giants like Google and Amazon to further development in this space across their products.
Music haptics in Apple Music and other apps
For those who are deaf or hard of hearing, Apple is bringing haptics to music players on iPhone, starting with millions of songs on its own Music app. When enabled, music haptics will play taps, textures and specialized vibrations in tandem with the audio to bring a new layer of sensation. It'll be available as an API so developers can bring greater accessibility to their apps, too.
Help in cars — motion sickness and CarPlay
Drivers with disabilities need better systems in their cars, and Apple is addressing some of the issues with its updates to CarPlay. Voice control and color filters are coming to the interface for vehicles, making it easier to control apps by talking and for those with visual impairments to see menus or alerts. To that end, CarPlay is also getting bold and large text support, as well as sound recognition for noises like sirens or honks. When the system identifies such a sound, it will display an alert at the bottom of the screen to let you know what it heard. This works similarly to Apple's existing sound recognition feature in other devices like the iPhone.
For those who get motion sickness while using their iPhones or iPads in moving vehicles, a new feature called Vehicle Motion Cues might alleviate some of that discomfort. Since motion sickness is typically caused by a sensory conflict between looking at stationary content and being in a moving vehicle, the new feature is meant to better align those conflicting senses through onscreen dots. When enabled, these dots will line the four edges of your screen and sway in response to the motion the device detects. If the car moves forward or accelerates, the dots will sway backward, as if reacting to the increase in speed in that direction.
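Apple hasn't detailed how the dots are driven, but the description above maps naturally to a simple rule: offset the edge dots in the direction opposite the acceleration the device reports. The sketch below illustrates that mapping with made-up gain and clamp values; it is not Apple's implementation.

```python
# Sketch of how on-screen dots might be offset from accelerometer input.
# The mapping, gain and clamp values are assumptions for illustration;
# the only idea taken from the announcement is "dots sway opposite to
# the detected motion."

def dot_offset(accel_forward: float, accel_lateral: float,
               gain: float = 4.0, max_offset_px: float = 40.0) -> tuple[float, float]:
    """Map vehicle acceleration (m/s^2) to a pixel offset for the edge dots.
    Forward acceleration pushes the dots backward, braking pushes them
    forward, and turns push them sideways."""
    def clamp(v: float) -> float:
        return max(-max_offset_px, min(max_offset_px, v))
    return clamp(-accel_lateral * gain), clamp(-accel_forward * gain)

# Example: accelerating forward at 2 m/s^2 while drifting slightly right
print(dot_offset(accel_forward=2.0, accel_lateral=0.5))  # (-2.0, -8.0)
```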
Other Apple Accessibility updates
There are plenty more features coming to the company's suite of products, including Live Captions in visionOS, a new Reader mode in Magnifier, support for multi-line braille and a virtual trackpad for those who use AssistiveTouch. It's not yet clear when all of these announced updates will roll out, though Apple has historically made such features available in upcoming versions of iOS. With its developer conference WWDC just a few weeks away, it's likely many of today's tools will officially arrive with the next iOS.
This article originally appeared on Engadget at https://www.engadget.com/apple-brings-eye-tracking-to-recent-iphones-and-ipads-140012990.html?src=rss
At last year's Google I/O developer conference, the company introduced Project Gameface, a hands-free gaming "mouse" that allows users to control a computer's cursor with movements of their head and facial gestures. This year, Google has announced that it has open-sourced more code for Project Gameface, allowing developers to build Android applications that can use the technology.
The tool relies on the phone's front camera to track facial expressions and head movements, which can be used to control a virtual cursor. A user could smile to "select" items onscreen, for instance, or raise their left eyebrow to go back to the home screen on an Android phone. In addition, users can set thresholds or gesture sizes for each expression, so they can control how pronounced an expression needs to be to trigger a specific mouse action.
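The project's code is on GitHub for anyone who wants the specifics, but the threshold mechanic is straightforward to sketch: each frame, the face tracker produces a confidence score per expression, and an action only fires when its score clears the user's chosen threshold. The example below illustrates the idea with hypothetical gesture names and values, not Project Gameface's actual API.

```python
# Illustrative sketch of mapping per-gesture confidence scores to cursor
# actions via user-tunable thresholds. Gesture names, actions and values
# are hypothetical; they are not taken from the Project Gameface code.

GESTURE_ACTIONS = {
    "smile": "select",
    "raise_left_eyebrow": "go_home",
    "open_mouth": "drag",
}

# Users can raise a threshold so only a very pronounced expression triggers
# the action, or lower it for subtler movements.
user_thresholds = {
    "smile": 0.7,
    "raise_left_eyebrow": 0.5,
    "open_mouth": 0.8,
}

def actions_for_frame(gesture_scores: dict[str, float]) -> list[str]:
    """Return the cursor actions whose gesture score clears its threshold."""
    return [
        GESTURE_ACTIONS[name]
        for name, score in gesture_scores.items()
        if name in GESTURE_ACTIONS and score >= user_thresholds[name]
    ]

# Example frame: a strong smile and a faint eyebrow raise
print(actions_for_frame({"smile": 0.9, "raise_left_eyebrow": 0.3}))  # ['select']
```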
The company developed Project Gameface with gaming streamer Lance Carr, who has muscular dystrophy, a condition that weakens his muscles. Carr used a head-tracking mouse to game before a fire destroyed his home along with his expensive equipment. The early version of Project Gameface focused on gaming and used a webcam to detect facial expressions, though Google knew from the start that the technology had plenty of other potential uses.
For the tool's Android launch, Google teamed up with an Indian organization called Incluzza that supports people with disabilities. The partnership gave the company the chance to learn how Project Gameface can help people with disabilities further their studies, communicate with friends and family more easily and find jobs online. Google has released the project's open source code on GitHub and is hoping that more developers decide to "leverage it to build new experiences."
Catch up on all the news from Google I/O 2024 right here!
This article originally appeared on Engadget at https://www.engadget.com/googles-project-gameface-hands-free-mouse-launches-on-android-123029158.html?src=rss
Google boss Sundar Pichai wrapped up the company’s I/O developer conference by noting its almost two-hour presentation had mentioned AI 121 times. It was everywhere.
Google’s newest AI model, Gemini 1.5 Flash, is built for speed and efficiency. The company said it created Flash because developers wanted a lighter, less expensive model than Gemini Pro to build AI-powered apps and services.
Google says it’ll double Gemini’s context window to two million tokens, enough to process two hours of video, 22 hours of audio, more than 60,000 lines of code or 1.4 million-plus words at the same time.
But the bigger news is how the company is sewing AI into all the things you’re already using. With Search, it’ll be able to answer your complex questions (a la Copilot in Bing), but for now, you’ll have to sign up for the company’s Search Labs to try that out. AI-generated answers will also appear alongside typical search results, just in case the AI knows better.
Google Photos was already pretty smart at searching for specific images or videos, but with AI, Google is taking things to the next level. If you’re a Google One subscriber in the US, you will be able to ask Google Photos a complex question, like “show me the best photo from each national park I’ve visited.” You can also ask Google Photos to generate captions for you.
And if you have an Android phone, Gemini is being integrated directly into the device. Gemini will know the app, image or video you’re looking at, and you’ll be able to pull it up as an overlay and ask it context-specific questions, like how to change settings or maybe even who’s displayed on screen.
While these were the bigger beats, there was an awful lot to chew over. Check out all the headlines right here.
One of Google’s bigger projects is its visual multimodal AI assistant, currently called Project Astra. It taps into your smartphone (or smart glasses) camera and can contextually analyze and answer questions on the things it sees. Project Astra can offer silly wordplay suggestions, as well as identify and define the things it sees. A video demo shows Project Astra identifying the tweeter part of a speaker. It’s equal parts impressive and, well, familiar. We tested it out, right here.
Elon Musk continues to add policy after baffling policy.
The increasingly unhinged world of X (Twitter) now considers the term ‘cisgender’ a slur. Owner Elon Musk posted last June, to the delight of his unhingiest users, that “‘cis’ or ‘cisgender’ are considered slurs on this platform.” On Tuesday, X reportedly began posting an official warning. A quick reminder: It’s not a slur.
Ilya Sutskever announced on X, formerly Twitter, that he’s leaving OpenAI almost a decade after he co-founded the company. He’s confident OpenAI “will build [artificial general intelligence] that is both safe and beneficial” under the leadership of CEO Sam Altman, President Greg Brockman and CTO Mira Murati. While Sutskever and Altman praised each other in their farewell messages, the two were embroiled in the company’s biggest scandal last year. Sutskever, who was a board member at the time, was involved in both of their dismissals.
This article originally appeared on Engadget at https://www.engadget.com/the-morning-after-the-biggest-news-from-googles-io-keynote-111531702.html?src=rss
At the end of I/O, Google’s annual developer conference at the Shoreline Amphitheater in Mountain View, Google CEO Sundar Pichai revealed that the company had said “AI” 121 times. That, essentially, was the crux of Google’s two-hour keynote — stuffing AI into every Google app and service used by more than two billion people around the world. Here are all the major updates that Google announced at the event.
Gemini 1.5 Flash and updates to Gemini 1.5 Pro
Google announced a brand-new AI model called Gemini 1.5 Flash, which it says is optimized for speed and efficiency. Flash sits between Gemini 1.5 Pro and Gemini 1.5 Nano, the company’s smallest model, which runs locally on device. Google said it created Flash because developers wanted a lighter and less expensive model than Gemini Pro for building AI-powered apps and services, while keeping features like the long, one-million-token context window that differentiates Gemini Pro from competing models. Later this year, Google will double Gemini’s context window to two million tokens, which means it will be able to process two hours of video, 22 hours of audio, more than 60,000 lines of code or more than 1.4 million words at the same time.
Project Astra
Google showed off Project Astra, an early version of a universal AI-powered assistant that Google DeepMind CEO Demis Hassabis said was Google’s version of an AI agent “that can be helpful in everyday life.”
In a video that Google says was shot in a single take, an Astra user moves around Google’s London office holding up their phone and pointing the camera at various things — a speaker, some code on a whiteboard, and out a window — and has a natural conversation with the app about what it sees. In one of the video’s most impressive moments, the app correctly tells the user where she left her glasses, without the user ever having brought them up.
The video ends with a twist — when the user finds and wears the missing glasses, we learn that they have an onboard camera system and can use Project Astra to seamlessly carry on a conversation with the user, indicating that Google might be working on a competitor to Meta’s Ray-Ban smart glasses.
Ask Google Photos
Google Photos was already intelligent when it came to searching for specific images or videos, but with AI, Google is taking things to the next level. If you’re a Google One subscriber in the US, you will be able to ask Google Photos a complex question like “show me the best photo from each national park I’ve visited” when the feature rolls out over the next few months. Google Photos will use GPS information as well as its own judgment of what is “best” to present you with options. You can also ask Google Photos to generate captions so you can post the photos to social media.
Veo and Imagen 3
Google’s new AI-powered media creation engines are called Veo and Imagen 3. Veo is Google’s answer to OpenAI’s Sora. It can produce “high-quality” 1080p videos that can last “beyond a minute,” Google said, and it can understand cinematic concepts like a timelapse.
Imagen 3, meanwhile, is a text-to-image generator that Google claims handles text better than its previous version, Imagen 2. The result is the company’s “highest quality” text-to-image model, with an “incredible level of detail” for “photorealistic, lifelike images” and fewer artifacts — essentially pitting it against OpenAI’s DALL-E 3.
Big updates to Google Search
Google is making big changes to how Search fundamentally works. Most of the updates announced today, like the ability to ask really complex questions (“Find the best yoga or pilates studios in Boston and show details on their intro offers and walking time from Beacon Hill.”) and the ability to use Search to plan meals and vacations, won’t be available unless you opt in to Search Labs, the company’s platform that lets people try out experimental features.
But a big new feature, which Google is calling AI Overviews and which the company has been testing for a year now, is finally rolling out to millions of people in the US. Google Search will now present AI-generated answers on top of the results by default, and the company says it will bring the feature to more than a billion users around the world by the end of the year.
Gemini on Android
Google is integrating Gemini directly into Android. When Android 15 arrives later this year, Gemini will be aware of the app, image or video you’re running, and you’ll be able to pull it up as an overlay and ask it context-specific questions. Where does that leave Google Assistant, which already does this? Who knows! Google didn’t bring it up at all during today’s keynote.
Catch up on all the news from Google I/O 2024 right here!
This article originally appeared on Engadget at https://www.engadget.com/everything-announced-at-google-io-2024-including-gemini-ai-project-astra-android-15-and-more-210414580.html?src=rss
Google is adding Gemini-powered AI automation to more tasks in Workspace. In its Tuesday Google I/O keynote, the company said its advanced Gemini 1.5 Pro will soon be available in the Workspace side panel as “the connective tissue across multiple applications with AI-powered workflows,” as AI grows more intelligent, learns more about you and automates more of your workflow.
Gemini’s job in Workspace is to save you the time and effort of digging through files, emails and other data from multiple apps. “Workspace in the Gemini era will continue to unlock new ways of getting things done,” Google Workspace VP Aparna Pappu said at the event.
The refreshed Workspace side panel, coming first to Gmail, Docs, Sheets, Slides and Drive, will let you chat with Gemini about your content. Its longer context window (essentially, its memory) allows it to organize, understand and contextualize your data from different apps without leaving the one you’re in. This includes things like comparing receipt attachments, summarizing (and answering back-and-forth questions about) long email threads, or highlighting key points from meeting recordings.
Another example Google provided was planning a family reunion when your grandmother asks for hotel information. With the Workspace side panel, you can ask Gemini to find the Google Doc with the booking information by using the prompt, “What is the hotel name and sales manager email listed in @Family Reunion 2024?” Google says it will find the document and give you a quick answer, allowing you to insert it into your reply as you save time by faking human authenticity for poor Grandma.
The email-based changes are coming to the Gmail mobile app, too. “Gemini will soon be able to analyze email threads and provide a summarized view with the key highlights directly in the Gmail app, just as you can in the side panel,” the company said.
Summarizing in the Gmail app is coming to Workspace Labs this month. Meanwhile, the upgraded Workspace side panel will arrive starting Tuesday for Workspace Labs and Gemini for Workspace Alpha users. Google says all the features will arrive for the rest of Workspace customers and Google One AI Premium users next month.
Catch up on all the news from Google I/O 2024 right here!
This article originally appeared on Engadget at https://www.engadget.com/gemini-will-be-accessible-in-the-side-panel-on-google-apps-like-gmail-and-docs-185406695.html?src=rss
Google's Gemini AI systems can do a lot, judging by today's I/O keynote. That includes the option to set up a virtual teammate with its own Workspace account. You can configure the teammate to carry out specific tasks, such as monitoring and tracking projects, organizing information, providing context, pinpointing trends after analyzing data and playing a role in team collaboration.
In Google Chat, the teammate can join all relevant rooms and you can ask it questions based on all the conversation histories, Gmail threads and anything else it has access to. It can tell team members whether their projects are approved or if there might be an issue based on conflicting messages.
It seems like the virtual teammate is just a tech demo for now, however. Aparna Pappu, vice president and GM of Workspace, said Google has "a lot of work to do to figure out how to bring these agentive experiences, like virtual teammates, into Workspace." That includes finding ways to let third parties make their own versions.
While it doesn't seem like this virtual teammate will be available soon, it could eventually prove to be a serious timesaver — as long as you trust it to get everything right the first time around.
Catch up on all the news from Google I/O 2024 right here!
This article originally appeared on Engadget at https://www.engadget.com/google-gemini-can-power-a-virtual-ai-teammate-with-its-own-workspace-account-182809274.html?src=rss