Posts with «software» label

Meta's Threads gets its own Tweetdeck clone

The web version of Threads could soon be much more useful. Meta is starting to test custom Tweetdeck-like feeds that will allow users to track multiple topics, searches and accounts in a single view.

People who are part of the test can set “pinned columns” that will track updates around specific topics, tags, accounts or search terms. Users can also opt to have these columns automatically refresh with new content. Based on screenshots shared by Mark Zuckerberg, the new Threads columns look a lot like Tweetdeck, the desktop app long favored by Twitter’s power users. That app is now called X Pro and is only available to X’s paid subscribers.

The test is the latest sign Meta is looking to make Threads a more reliable source for real-time information. The company has also added a “recent” tab and trending topics to search. But being able to track multiple feeds of updates at once is even more useful. It could also address long-running complaints about Threads’ algorithmic “for you” feed, which tends to surface a random mix of days-old posts and bizarre personal stories from unconnected accounts.

It’s not clear how many people will be part of Meta’s initial test of the feature, though Adam Mosseri said the company is looking for feedback on the changes. The company has often rolled out major Threads changes to a small group of users first before making them more widely available.

This article originally appeared on Engadget at https://www.engadget.com/metas-threads-gets-its-own-tweetdeck-clone-172131218.html?src=rss

Google’s accessibility app Lookout can use your phone’s camera to find and recognize objects

Google has updated some of its accessibility apps to add capabilities that will make them easier to use for people who need them. It has rolled out a new version of the Lookout app, which can read text and even lengthy documents out loud for people with low vision or blindness. The app can also read food labels, recognize currency and can tell users what it sees through the camera and in an image. Its latest version comes with a new "Find" mode that allows users to choose from seven item categories, including seating, tables, vehicles, utensils and bathrooms.

When users choose a category, the app will recognize objects in that category as they move their camera around a room. It will then tell them the direction or distance to the object, making it easier for them to interact with their surroundings. Google has also launched an in-app capture button, so users can take photos and quickly get AI-generated descriptions.

The company has updated its Look to Speak app as well. Look to Speak enables users to communicate by using eye gestures to select from a list of phrases, which the app then speaks out loud. Now, Google has added a text-free mode that gives them the option to trigger speech by choosing from a photo book containing various emojis, symbols and photos. Even better, they can personalize what each symbol or image means to them.

Google has also expanded its screen reader capabilities for Lens in Maps, so that it can tell the user the names and categories of the places it sees, such as ATMs and restaurants. It can also tell them how far away a particular location is. In addition, it's rolling out improvements for detailed voice guidance, which provides audio prompts that tell the user where they're supposed to go. 

Finally, Google has made Maps' wheelchair information accessible on desktop, four years after it launched on Android and iOS. The Accessible Places feature allows users to see if the place they're visiting can accommodate their needs — businesses and public venues with an accessible entrance, for example, will show a wheelchair icon. They can also use the feature to see if a location has accessible washrooms, seating and parking. The company says Maps has accessibility information for over 50 million places at the moment. Those who prefer looking up wheelchair information on Android and iOS will now also be able to easily filter reviews focusing on wheelchair access. 

Google made all these announcements at this year's I/O developer conference, where it also revealed that it open-sourced more code for the Project Gameface hands-free "mouse," allowing Android developers to use it for their apps. The tool allows users to control the cursor with their head movements and facial gestures, so that they can more easily use their computers and phones. 

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/googles-accessibility-app-lookout-can-use-your-phones-camera-to-find-and-recognize-objects-160007994.html?src=rss

Intel's Thunderbolt Share makes it easier to move large files between PCs

Intel has launched a new software application called Thunderbolt Share that will make controlling two or more PCs a more seamless experience. It will allow you to sync files between PCs through its interface, or view multiple computers' folders so you can drag and drop specific documents, images and other file types. That makes collaboration easy if you're transferring particularly hefty files, say raw photos or unedited videos, to a colleague. You can also use the app to transfer data from an old PC to a new one, so you don't have to use an external drive to facilitate the move.

When it comes to screen sharing, Intel says the software can retain the resolution of the source PC without compression, as long as the stream tops out at Full HD (1080p) at up to 60 frames per second. The mouse cursor and keyboard also remain smooth and responsive between PCs, thanks to Thunderbolt's high bandwidth and low latency.
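That uncompressed ceiling is comfortably within Thunderbolt's headroom. A rough back-of-envelope calculation (assuming 24-bit color and ignoring framing overhead, details the article doesn't specify) shows why:

```python
# Back-of-envelope: raw bandwidth of an uncompressed 1080p60 stream.
# Assumes 24 bits per pixel; framing/blanking overhead is ignored.
width, height, fps, bits_per_pixel = 1920, 1080, 60, 24

bits_per_second = width * height * fps * bits_per_pixel
print(f"{bits_per_second / 1e9:.2f} Gbps")  # ~2.99 Gbps

# Thunderbolt 4 carries 40 Gbps, so uncompressed Full HD at 60 fps
# uses well under a tenth of the link.
print(f"{bits_per_second / 40e9:.1%} of a 40 Gbps link")  # ~7.5%
```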

The company says it's licensing Thunderbolt Share to OEMs as a value-add feature for their upcoming PCs and accessories. You will need Windows computers with Thunderbolt 4 or 5 ports to be able to use it, and they have to be directly connected with a Thunderbolt cable, or connected to the same Thunderbolt dock or monitor. The first devices that support the application will be available in the second half of 2024 and will be coming from various manufacturers, including Lenovo, Acer, MSI, Razer, Kensington and Belkin.

This article originally appeared on Engadget at https://www.engadget.com/intels-thunderbolt-share-makes-it-easier-to-move-large-files-between-pcs-123011505.html?src=rss

Sony PSP emulator PPSSPP hits the iOS App Store

PPSSPP, an app that's capable of emulating PSP games, has joined the growing number of retro game emulators on the iOS App Store. The program has been around for almost 12 years, but prior to this, you could only install it on your device through workarounds. "Thanks to Apple for relaxing their policies, allowing retro games console emulators on the store," its developer Henrik Rydgård wrote in his announcement. If you'll recall, Apple updated its developer guidelines in early April, and since then, the company has approved an app that can emulate Game Boy and DS games and another that can play PS1 titles.

Rydgård's app is free to download, but as he told The Verge, there's a $5 gold version coming as well. While the paid version of PPSSPP for Android does have some extra features, it's mostly available so that you can support his work. At the moment, the emulator you can download from the App Store doesn't support the iPad's Magic Keyboard, because he originally enabled compatibility using an undocumented API. RetroAchievements support is also currently unavailable. Rydgård said both will be re-added in future updates.

The emulator's other versions support just-in-time (JIT) compilation, which translates the emulated console's code into native code for the host platform so games run faster. However, the one on the App Store doesn't, and never will unless Apple changes its rules. Rydgård says iOS devices are "generally fast enough" to run almost all PSP games at full speed, though, so you may not notice much of a difference. Of course, the PPSSPP program only contains the emulator itself — you're responsible for finding games you can play on the app, since Apple will not allow developers to upload games they don't own the rights to.
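To see why JIT matters for emulation, here's a minimal, purely illustrative Python sketch (nothing like PPSSPP's actual MIPS-to-native recompiler): an interpreter re-decodes every instruction each time a block runs, while a JIT translates the block into host code once and reuses it.

```python
# Toy contrast between interpreting and JIT-compiling an instruction block.
def interpret(block, regs):
    # The interpreter decodes every instruction on every run.
    for op, a, b, dst in block:
        if op == "add":
            regs[dst] = regs[a] + regs[b]
        elif op == "mul":
            regs[dst] = regs[a] * regs[b]
    return regs

def jit_compile(block):
    # Translate the block into host (Python) code once; later runs skip decoding.
    lines = ["def compiled(regs):"]
    for op, a, b, dst in block:
        sym = "+" if op == "add" else "*"
        lines.append(f"    regs[{dst!r}] = regs[{a!r}] {sym} regs[{b!r}]")
    lines.append("    return regs")
    namespace = {}
    exec("\n".join(lines), namespace)
    return namespace["compiled"]

block = [("add", "r1", "r2", "r3"), ("mul", "r3", "r1", "r4")]
regs = {"r1": 2, "r2": 3, "r3": 0, "r4": 0}
compiled = jit_compile(block)        # pay the translation cost once
print(interpret(block, dict(regs)))  # {'r1': 2, 'r2': 3, 'r3': 5, 'r4': 10}
print(compiled(dict(regs)))          # same result, without per-run decoding
```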

This article originally appeared on Engadget at https://www.engadget.com/sony-psp-emulator-ppsspp-hits-the-ios-app-store-052506248.html?src=rss

Google I/O 2024: Everything revealed including Gemini AI, Android 15 and more

At the end of I/O, Google’s annual developer conference at the Shoreline Amphitheater in Mountain View, Google CEO Sundar Pichai revealed that the company had said “AI” 121 times. That, essentially, was the crux of Google’s two-hour keynote — stuffing AI into every Google app and service used by more than two billion people around the world. Here are all the major updates from Google's big event, along with some additional announcements that came after the keynote.

Gemini 1.5 Flash and updates to Gemini 1.5 Pro

Google announced a brand new AI model called Gemini 1.5 Flash, which it says is optimized for speed and efficiency. Flash sits between Gemini 1.5 Pro and Gemini 1.5 Nano, the company’s smallest model, which runs locally on device. Google said that it created Flash because developers wanted a lighter and less expensive model than Gemini Pro for building AI-powered apps and services, while keeping features like the one-million-token context window that differentiates Gemini Pro from competing models. Later this year, Google will double Gemini’s context window to two million tokens, which means that it will be able to process two hours of video, 22 hours of audio, more than 60,000 lines of code or more than 1.4 million words at the same time.
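For developers, Flash is available through the Gemini API. A minimal sketch of what a call might look like with Google's Python SDK (model name and SDK usage as publicized around I/O 2024; treat the specifics as assumptions):

```python
# Minimal sketch of calling Gemini 1.5 Flash via Google's Python SDK
# (pip install google-generativeai). The API key and prompt are placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

# The long context window is the headline feature: very large inputs
# (documents, transcripts, code) can go into a single request.
response = model.generate_content("Summarize this transcript: ...")
print(response.text)
```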

Project Astra

Google showed off Project Astra, an early version of a universal assistant powered by AI that Google’s DeepMind CEO Demis Hassabis said was Google’s version of an AI agent “that can be helpful in everyday life.”

In a video that Google says was shot in a single take, an Astra user moves around Google’s London office holding up their phone and pointing the camera at various things — a speaker, some code on a whiteboard, and out a window — and has a natural conversation with the app about what it sees. In one of the video’s most impressive moments, the app correctly tells the user where she left her glasses, without her ever having brought them up.

The video ends with a twist — when the user finds and wears the missing glasses, we learn that they have an onboard camera system and are capable of using Project Astra to seamlessly carry on the conversation, perhaps indicating that Google might be working on a competitor to Meta’s Ray-Ban smart glasses.

Ask Google Photos

Google Photos was already intelligent when it came to searching for specific images or videos, but with AI, Google is taking things to the next level. If you’re a Google One subscriber in the US, you will be able to ask Google Photos a complex question like “show me the best photo from each national park I’ve visited” when the feature rolls out over the next few months. Google Photos will use GPS information, as well as its own judgment of what is “best,” to present you with options. You can also ask Google Photos to generate captions for posting the photos to social media.

Veo and Imagen 3

Google’s new AI-powered media creation engines are called Veo and Imagen 3. Veo is Google’s answer to OpenAI’s Sora. It can produce “high-quality” 1080p videos that can last “beyond a minute,” Google said, and can understand cinematic concepts like timelapses.

Imagen 3, meanwhile, is a text-to-image generator that Google claims handles text better than its previous version, Imagen 2. The result is the company’s “highest quality” text-to-image model, with an “incredible level of detail” for “photorealistic, lifelike images” and fewer artifacts — essentially pitting it against OpenAI’s DALL-E 3.

Big updates to Google Search

Google is making big changes to how Search fundamentally works. Most of the updates announced today, like the ability to ask really complex questions (“Find the best yoga or pilates studios in Boston and show details on their intro offers and walking time from Beacon Hill.”) and the ability to use Search to plan meals and vacations, won’t be available unless you opt in to Search Labs, the company’s platform that lets people try out experimental features.

But one big new feature, which Google is calling AI Overviews and has been testing for a year now, is finally rolling out to millions of people in the US. Google Search will now present AI-generated answers on top of the results by default, and the company says it will bring the feature to more than a billion users around the world by the end of the year.

Gemini on Android

Google is integrating Gemini directly into Android. When Android 15 is released later this year, Gemini will be aware of the app, image or video that you’re running, and you’ll be able to pull it up as an overlay and ask it context-specific questions. Where does that leave Google Assistant, which already does this? Who knows! Google didn’t bring it up at all during today’s keynote.

Wear OS 5 battery life improvements

Google isn't quite ready to roll out the latest version of its smartwatch OS, but it is promising some major battery life improvements when it arrives. The company said that Wear OS 5 will consume 20 percent less power than Wear OS 4 when a user runs a marathon. Wear OS 4 already brought battery life improvements to smartwatches that support it, but it could still be a lot better at managing a device's power. Google has also provided developers with a new guide on how to conserve power and battery, so that they can create more efficient apps.

Android 15 anti-theft features

Android 15's developer preview may have been rolling out for months, but there are still features to come. Theft Detection Lock is a new Android 15 feature that will use AI (there it is again) to detect phone thefts and lock things up accordingly. Google says its algorithms can detect motions associated with theft, like someone grabbing the phone and bolting, biking or driving away. If an Android 15 handset pinpoints one of these situations, the phone's screen will quickly lock, making it much harder for the snatcher to access your data.
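Google hasn't said how its detection models work. As a purely illustrative sketch of the kind of signal involved (every threshold below is made up), a snatch looks like a sharp acceleration spike followed by sustained fast movement:

```python
# Toy illustration of motion-triggered locking (not Google's algorithm).
SPIKE_G = 3.0         # hypothetical spike threshold, in g
SUSTAIN_G = 1.5       # hypothetical sustained-motion threshold, in g
SUSTAIN_SAMPLES = 20  # hypothetical window checked after the spike

def looks_like_snatch(accel_magnitudes):
    """accel_magnitudes: acceleration magnitude per sample, in g."""
    for i, a in enumerate(accel_magnitudes):
        if a >= SPIKE_G:
            window = accel_magnitudes[i + 1 : i + 1 + SUSTAIN_SAMPLES]
            if window and sum(window) / len(window) >= SUSTAIN_G:
                return True  # spike + sustained movement: lock the screen
    return False

print(looks_like_snatch([1.0, 1.1, 4.2] + [2.0] * 25))  # True
print(looks_like_snatch([1.0, 1.1, 1.0, 0.9] * 10))     # False
```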

There were a bunch of other updates too. Google said it would add digital watermarks to AI-generated video and text, make Gemini accessible in the side panel in Gmail and Docs, power a virtual AI teammate in Workspace, listen in on phone calls and detect if you’re being scammed in real time, and a lot more.


Catch up on all the news from Google I/O 2024 right here!

Update May 15, 2:45PM ET: This story was updated after being published to include details on new Android 15 and Wear OS 5 announcements made following the I/O 2024 keynote.

This article originally appeared on Engadget at https://www.engadget.com/google-io-2024-everything-revealed-including-gemini-ai-android-15-and-more-210414423.html?src=rss

Google lets third-party developers into Home through new APIs

Google is opening up its Home platform to third-party developers through new APIs. As such, any app will eventually be able to tap into the more than 600 million devices that are connected to Home, even if they're not necessarily smart home-oriented apps. Google suggests, for instance, that a food delivery app might be able to switch on the outdoor lights before the courier shows up with dinner.

The APIs build on the foundation of Matter and Google says it created them with privacy and security at the forefront. For one thing, developers who tap into the APIs will need to pass certification before rolling out their app. In addition, apps won't be able to access someone's smart home devices without a user's explicit consent.

Developers are already starting to integrate the APIs, which include one focused on automation. Eve, for instance, will let you set up your smart blinds to lower automatically when the temperature dips at night. A workout app might switch on a fan for you before you start working up a sweat.
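Google hasn't published the API surface here, but the Eve example implies a starter/condition/action structure. A hypothetical sketch of such an automation (every name below is invented for illustration, not Google's actual SDK):

```python
# Hypothetical sketch of the Eve blinds automation: lower the blinds when
# the temperature dips at night. No names here come from Google's Home APIs.
automation = {
    "name": "Lower blinds on cold nights",
    "starter": {"device": "outdoor_sensor", "event": "temperature_changed"},
    "condition": {"temperature_below_c": 15, "after_local_time": "21:00"},
    "action": {"device": "bedroom_blinds", "command": "set_position", "percent": 0},
}

def evaluate(automation, temperature_c, local_time):
    """Return the action to run if the condition holds, else None."""
    cond = automation["condition"]
    # "HH:MM" strings compare correctly in lexicographic order.
    if temperature_c < cond["temperature_below_c"] and local_time >= cond["after_local_time"]:
        return automation["action"]  # hand the command off to the device
    return None

print(evaluate(automation, temperature_c=12, local_time="22:30"))  # the action
print(evaluate(automation, temperature_c=12, local_time="14:00"))  # None
```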

Google is taking things a little slow with the APIs, as there's a waitlist and it's working with select partners. It plans to open up access to the APIs on a rolling basis, and the first apps using them will hit the Play Store and App Store this fall.

Meanwhile, Google is turning TVs into smart home hubs. Starting later this year, you'll be able to control smart home devices via Chromecast with Google TV and certain models with Google TV running Android 14 or higher, as well as some LG TVs.

This article originally appeared on Engadget at https://www.engadget.com/google-lets-third-party-developers-into-home-through-new-apis-180420068.html?src=rss

Google announced an update for Android Auto with new apps and casting support

Google just announced an update coming to Android for Cars that should make paying attention to the road just a tiny bit harder. The automobile-based OS is getting new apps, screen casting and more, all revealed at Google I/O 2024.

First up, select car models are getting a suite of new entertainment apps, like Max and Peacock, for keeping passengers busy during road trips. The company hasn’t announced which makes and models are getting this particular update, and there are dozens upon dozens of major car models that use this platform. Still, more entertainment options are never a bad thing.

To that end, Android Auto is getting Angry Birds, for those who want another game to fool around with while stuck in traffic. The once-iconic bird-flinging simulator is likely the best-known gaming IP on the platform, as Android Auto’s other games include stuff like Pin the UFO and Zoo Boom.

Cars with Android Automotive OS are getting Google Cast as part of a forthcoming update, which will let users stream content from phones and tablets. Rivian models will be the first to get this particular feature, with more manufacturers to come.

Google’s also rolling out new developer tools to make it easier for folks to create new apps and experiences for Android Auto. There’s even a new program that should make it much easier to convert pre-existing mobile apps into car-ready experiences.

Android Auto is becoming the de facto standard when it comes to car-based operating systems. Google also used the event to announce that there are now over 200 million cars on the road compatible with the OS. Recent updates to the platform allow users to instantly check on EV battery levels and take Zoom calls while on the road.

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/google-announced-an-update-for-android-auto-with-new-apps-and-casting-support-170831358.html?src=rss

Apple brings eye-tracking to recent iPhones and iPads

Ahead of Global Accessibility Awareness Day this week, Apple is issuing its typical annual set of announcements around its assistive features. Many of these are useful for people with disabilities, but they have broader applications as well. For instance, Personal Voice, which was released last year, helps preserve someone's speaking voice. It can be helpful to those who are at risk of losing their voice, or who have other reasons for wanting to retain their own vocal signature for loved ones in their absence. Today, Apple is bringing eye-tracking support to recent models of iPhones and iPads, as well as customizable vocal shortcuts, music haptics, vehicle motion cues and more.

Built-in eye-tracking for iPhones and iPads

The most intriguing feature of the set is the ability to use the front-facing camera on iPhones or iPads (at least those with the A12 chip or later) to navigate the software without additional hardware or accessories. With this enabled, people can look at their screen to move through elements like apps and menus, then linger on an item to select it. 

That pause to select is something Apple calls Dwell Control, which has already been available elsewhere in the company's ecosystem like in Mac's accessibility settings. The setup and calibration process should only take a few seconds, and on-device AI is at work to understand your gaze. It'll also work with third-party apps from launch, since it's a layer in the OS like Assistive Touch. Since Apple already supported eye-tracking in iOS and iPadOS with eye-detection devices connected, the news today is the ability to do so without extra hardware.
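Apple hasn't detailed the logic beyond the dwell behavior itself. As an illustrative sketch (the thresholds are invented), dwell selection reduces to a small state machine: if the gaze stays within a small radius for long enough, a selection fires.

```python
# Illustrative dwell-to-select logic (not Apple's implementation).
import math

DWELL_SECONDS = 1.0  # hypothetical dwell threshold
RADIUS_PX = 40       # hypothetical tolerance for gaze jitter

class DwellSelector:
    def __init__(self):
        self.anchor = None  # (x, y, start_time) of the current dwell

    def update(self, x, y, now):
        """Feed one gaze sample; returns True when a selection fires."""
        if self.anchor is None:
            self.anchor = (x, y, now)
            return False
        ax, ay, start = self.anchor
        if math.hypot(x - ax, y - ay) > RADIUS_PX:
            self.anchor = (x, y, now)  # gaze moved: restart the timer
            return False
        if now - start >= DWELL_SECONDS:
            self.anchor = None         # dwell complete: select the item
            return True
        return False

selector = DwellSelector()
for t in range(12):  # simulated gaze samples 0.1 s apart, holding still
    if selector.update(500, 300, now=t * 0.1):
        print(f"select fired at t={t * 0.1:.1f}s")  # fires at t=1.0s
```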

Vocal shortcuts for easier hands-free control

Apple is also working on improving the accessibility of its voice-based controls on iPhones and iPads. It again uses on-device AI to create personalized models for each person setting up a new vocal shortcut. You can set up a command for a single word or phrase, or even an utterance (like "Oy!" perhaps). Siri will understand these and perform your designated shortcut or task. You can have these launch apps or run a series of actions that you define in the Shortcuts app, and once set up, you won't have to first ask Siri to be ready. 
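Under the hood, that presumably amounts to mapping a recognized utterance to a stored action. A trivial, purely illustrative sketch of that mapping step (Apple's feature trains personalized on-device models per phrase, not string matching):

```python
# Toy mapping from custom utterances to actions (illustrative only).
shortcuts = {
    "oy": "open Camera",
    "movie night": "run the 'Movie Night' shortcut",  # hypothetical shortcut
}

def handle_utterance(text):
    key = text.lower().strip(" !?.,")
    action = shortcuts.get(key)
    if action:
        print(f"Triggering: {action}")

handle_utterance("Oy!")  # Triggering: open Camera
```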

Another improvement coming to vocal interactions is "Listen for Atypical Speech," which has iPhones and iPads use on-device machine learning to recognize speech patterns and customize their voice recognition around your unique way of vocalizing. This sounds similar to Google's Project Relate, which is also designed to help technology better understand those with speech impairments or atypical speech.

To build these tools, Apple worked with the Speech Accessibility Project at the Beckman Institute for Advanced Science and Technology at the University of Illinois Urbana-Champaign. The institute is also collaborating with other tech giants like Google and Amazon to further development in this space across their products.

Music haptics in Apple Music and other apps

For those who are deaf or hard of hearing, Apple is bringing haptics to music players on iPhone, starting with millions of songs on its own Music app. When enabled, music haptics will play taps, textures and specialized vibrations in tandem with the audio to bring a new layer of sensation. It'll be available as an API so developers can bring greater accessibility to their apps, too. 
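Apple hasn't said how the taps and textures are derived from a song. One plausible, purely illustrative approach is to map the audio's short-term loudness onto vibration intensity:

```python
# Illustrative loudness-to-haptics mapping (not Apple's actual pipeline).
# Computes an RMS energy envelope, normalized to 0..1 per frame, where each
# value could drive the intensity of one haptic event.
import numpy as np

def haptic_envelope(samples, sample_rate, frame_ms=50):
    frame = int(sample_rate * frame_ms / 1000)
    usable = len(samples) // frame * frame
    frames = samples[:usable].reshape(-1, frame)
    rms = np.sqrt((frames.astype(np.float64) ** 2).mean(axis=1))
    return rms / max(rms.max(), 1e-9)  # one intensity per 50 ms frame

# Demo: a one-second 440 Hz tone yields a flat envelope near 1.0.
rate = 44100
tone = np.sin(2 * np.pi * 440 * np.arange(rate) / rate)
print(haptic_envelope(tone, rate)[:5].round(2))
```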

Help in cars — motion sickness and CarPlay

Drivers with disabilities need better systems in their cars, and Apple is addressing some of the issues with its updates to CarPlay. Voice control and color filters are coming to the interface for vehicles, making it easier to control apps by talking and for those with visual impairments to see menus or alerts. To that end, CarPlay is also getting bold and large text support, as well as sound recognition for noises like sirens or honks. When the system identifies such a sound, it will display an alert at the bottom of the screen to let you know what it heard. This works similarly to Apple's existing sound recognition feature in other devices like the iPhone.

For those who get motion sickness while using their iPhones or iPads in moving vehicles, a new feature called Vehicle Motion Cues might alleviate some of that discomfort. Since motion sickness is based on a sensory conflict from looking at stationary content while being in a moving vehicle, the new feature is meant to better align the conflicting senses through onscreen dots. When enabled, these dots will line the four edges of your screen and sway in response to the motion it detects. If the car moves forward or accelerates, the dots will sway backwards as if in reaction to the increase in speed in that direction.
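Apple hasn't published the mapping, but the described behavior suggests displacing the dots opposite to the measured acceleration, clamped so they stay near the screen edges. A toy sketch (gain and clamp values invented):

```python
# Toy version of motion-cue dots (not Apple's implementation).
GAIN = 12.0    # hypothetical pixels per m/s^2
MAX_PX = 30.0  # hypothetical clamp so dots stay near the screen edge

def dot_offset(accel_x, accel_y):
    """Map device acceleration (m/s^2) to a dot displacement in pixels."""
    clamp = lambda v: max(-MAX_PX, min(MAX_PX, v))
    # Forward acceleration sways the dots backward, and vice versa.
    return clamp(-GAIN * accel_x), clamp(-GAIN * accel_y)

print(dot_offset(0.5, 2.0))   # speeding up: (-6.0, -24.0)
print(dot_offset(-1.5, 1.0))  # braking while turning: (18.0, -12.0)
```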

Other Apple Accessibility updates

There are plenty more features coming to the company's suite of products, including Live Captions in VisionOS, a new Reader mode in Magnifier, support for multi-line braille and a virtual trackpad for those who use Assistive Touch. It's not yet clear when all of these announced updates will roll out, though Apple has historically made these features available in upcoming versions of iOS. With its developer conference WWDC just a few weeks away, it's likely many of today's tools get officially released with the next iOS.

This article originally appeared on Engadget at https://www.engadget.com/apple-brings-eye-tracking-to-recent-iphones-and-ipads-140012990.html?src=rss

Google's Project Gameface hands-free 'mouse' launches on Android

At last year's Google I/O developer conference, the company introduced Project Gameface, a hands-free gaming "mouse" that allows users to control a computer's cursor with movements of their head and facial gestures. This year, Google has announced that it has open-sourced more code for Project Gameface, allowing developers to build Android applications that can use the technology. 

The tool relies on the phone's front camera to track facial expressions and head movements, which can be used to control a virtual cursor. A user could smile to "select" items onscreen, for instance, or raise their left eyebrow to go back to the home screen on an Android phone. In addition, users can set thresholds or gesture sizes for each expression, so that they can control how prominent their expressions should be to trigger a specific mouse action. 
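Gameface is built on face-landmark tracking; Google's MediaPipe reports expression "blendshape" scores between 0 and 1, and the threshold step described above decides when a score counts as a deliberate gesture. A minimal sketch (threshold values invented):

```python
# Sketch of Gameface-style gesture thresholds (illustrative, not Google's code).
THRESHOLDS = {"mouthSmileLeft": 0.6, "browOuterUpLeft": 0.5}  # user-tunable
ACTIONS = {"mouthSmileLeft": "select", "browOuterUpLeft": "go_home"}

def gestures_to_actions(blendshapes):
    """blendshapes: dict mapping gesture name to the tracker's 0..1 score."""
    return [ACTIONS[name] for name, threshold in THRESHOLDS.items()
            if blendshapes.get(name, 0.0) >= threshold]

print(gestures_to_actions({"mouthSmileLeft": 0.72}))  # ['select']
print(gestures_to_actions({"mouthSmileLeft": 0.30}))  # []
```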

The company developed Project Gameface with gaming streamer Lance Carr, who has muscular dystrophy, a condition that weakens the muscles. Carr used a head-tracking mouse to game before a fire destroyed his home, along with his expensive equipment. The early version of Project Gameface was focused on gaming and used a webcam to detect facial expressions, though Google knew from the start that it had a lot of other potential uses.

For the tool's Android launch, Google teamed up with an Indian organization called Incluzza that supports people with disabilities. The partnership gave the company the chance to learn how Project Gameface can help people with disabilities further their studies, communicate with friends and family more easily and find jobs online. Google has released the project's open source code on GitHub and is hoping that more developers decide to "leverage it to build new experiences."

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/googles-project-gameface-hands-free-mouse-launches-on-android-123029158.html?src=rss

Gemini will be accessible in the side panel on Google apps like Gmail and Docs

Google is adding Gemini-powered AI automation to more tasks in Workspace. In its Tuesday Google I/O keynote, the company said its advanced Gemini 1.5 Pro will soon be available in the Workspace side panel as “the connective tissue across multiple applications with AI-powered workflows,” as AI grows more intelligent, learns more about you and automates more of your workflow.

Gemini’s job in Workspace is to save you the time and effort of digging through files, emails and other data from multiple apps. “Workspace in the Gemini era will continue to unlock new ways of getting things done,” Google Workspace VP Aparna Pappu said at the event.

The refreshed Workspace side panel, coming first to Gmail, Docs, Sheets, Slides and Drive, will let you chat with Gemini about your content. Its longer context window (essentially, its memory) allows it to organize, understand and contextualize your data from different apps without leaving the one you’re in. This includes things like comparing receipt attachments, summarizing (and answering back-and-forth questions about) long email threads, or highlighting key points from meeting recordings.

Another example Google provided was planning a family reunion when your grandmother asks for hotel information. With the Workspace side panel, you can ask Gemini to find the Google Doc with the booking information by using the prompt, “What is the hotel name and sales manager email listed in @Family Reunion 2024?” Google says it will find the document and give you a quick answer, allowing you to insert it into your reply as you save time by faking human authenticity for poor Grandma.

The email-based changes are coming to the Gmail mobile app, too. “Gemini will soon be able to analyze email threads and provide a summarized view with the key highlights directly in the Gmail app, just as you can in the side panel,” the company said.

Summarizing in the Gmail app is coming to Workspace Labs this month. Meanwhile, the upgraded Workspace side panel will arrive starting Tuesday for Workspace Labs and Gemini for Workspace Alpha users. Google says all the features will arrive for the rest of Workspace customers and Google One AI Premium users next month.

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/gemini-will-be-accessible-in-the-side-panel-on-google-apps-like-gmail-and-docs-185406695.html?src=rss