Posts with «information technology» label

Xbox Cloud Gaming finally supports keyboard and mouse inputs on web browsers

Microsoft just released a new update for Xbox Cloud Gaming that finally brings mouse and keyboard support, after teasing the feature for years. It’s currently in beta and works with both the Edge and Chrome web browsers. It looks pretty simple to use: just select a game that supports a mouse and keyboard and have at it.

You can also instantly switch between a mouse/keyboard combination and a standard controller by pressing the Xbox button on the controller or a key on the keyboard. The company says it’ll be rolling out badges later this month to alert users which games support mouse and keyboard inputs.

For now, there’s support for 26 games. These include blockbusters like ARK: Survival Evolved, Halo Infinite and, of course, Fortnite. Smaller games like High on Life and Pentiment can also be controlled via mouse and keyboard. Check the above link for the full list.

Microsoft hasn’t said what took it so long to get this going. The feature was originally expected to launch back in June of 2022, but we didn’t get a progress update until two months ago. No matter the reason, keyboard-and-mouse setups are practically a requirement for first-person shooters and, well, better late than never.

This article originally appeared on Engadget at https://www.engadget.com/xbox-cloud-gaming-finally-supports-keyboard-and-mouse-inputs-on-web-browsers-165215925.html?src=rss

Apple brings eye-tracking to recent iPhones and iPads

Ahead of Global Accessibility Awareness Day this week, Apple is issuing its typical annual set of announcements around its assistive features. Many of these are useful for people with disabilities, but they have broader applications as well. For instance, Personal Voice, which was released last year, helps preserve someone's speaking voice. It can be helpful to those who are at risk of losing their voice, or who have other reasons for wanting to retain their own vocal signature for loved ones in their absence. Today, Apple is bringing eye-tracking support to recent models of iPhones and iPads, as well as customizable vocal shortcuts, music haptics, vehicle motion cues and more.

Built-in eye-tracking for iPhones and iPads

The most intriguing feature of the set is the ability to use the front-facing camera on iPhones or iPads (at least those with the A12 chip or later) to navigate the software without additional hardware or accessories. With this enabled, people can look at their screen to move through elements like apps and menus, then linger on an item to select it. 

That pause to select is something Apple calls Dwell Control, which has already been available elsewhere in the company's ecosystem, like in the Mac's accessibility settings. The setup and calibration process should only take a few seconds, and on-device AI is at work to understand your gaze. It'll also work with third-party apps from launch, since it's a layer in the OS like AssistiveTouch. Since Apple already supported eye-tracking in iOS and iPadOS with eye-detection devices connected, the news today is the ability to do so without extra hardware.
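Apple hasn't published how its gaze pipeline works, but the dwell concept itself is simple to picture: if the gaze point lingers within a small radius long enough, treat it as a selection. Here's a minimal, hypothetical Swift sketch of that logic; the class name, thresholds and gaze-sample source are our own stand-ins, not Apple's API.

```swift
import Foundation
import CoreGraphics

// Hypothetical dwell-to-select logic. Apple's real implementation is
// internal to the OS; this only illustrates the idea.
final class DwellSelector {
    private let dwellDuration: TimeInterval = 1.0   // seconds to linger
    private let tolerance: CGFloat = 30             // max gaze drift, in points
    private var anchor: CGPoint?
    private var anchorTime: Date?

    /// Feed in gaze samples; returns true when a dwell completes.
    func process(gazePoint: CGPoint, at time: Date = Date()) -> Bool {
        if let anchor, hypot(gazePoint.x - anchor.x, gazePoint.y - anchor.y) <= tolerance {
            if let anchorTime, time.timeIntervalSince(anchorTime) >= dwellDuration {
                self.anchor = nil       // reset so one dwell fires only once
                self.anchorTime = nil
                return true
            }
        } else {
            anchor = gazePoint          // gaze moved; restart the timer
            anchorTime = time
        }
        return false
    }
}
```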

Vocal shortcuts for easier hands-free control

Apple is also working on improving the accessibility of its voice-based controls on iPhones and iPads. It again uses on-device AI to create personalized models for each person setting up a new vocal shortcut. You can set up a command for a single word or phrase, or even an utterance (like "Oy!" perhaps). Siri will understand these and perform your designated shortcut or task. You can have these launch apps or run a series of actions that you define in the Shortcuts app, and once set up, you won't have to first ask Siri to be ready. 

Another improvement coming to vocal interactions is "Listen for Atypical Speech," which has iPhones and iPads use on-device machine learning to recognize speech patterns and customize their voice recognition around your unique way of vocalizing. This sounds similar to Google's Project Relate, which is also designed to help technology better understand those with speech impairments or atypical speech.

To build these tools, Apple worked with the Speech Accessibility Project at the Beckman Institute for Advanced Science and Technology at the University of Illinois Urbana-Champaign. The institute is also collaborating with other tech giants like Google and Amazon to further development in this space across their products.

Music haptics in Apple Music and other apps

For those who are deaf or hard of hearing, Apple is bringing haptics to music players on iPhone, starting with millions of songs on its own Music app. When enabled, music haptics will play taps, textures and specialized vibrations in tandem with the audio to bring a new layer of sensation. It'll be available as an API so developers can bring greater accessibility to their apps, too. 
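Apple hasn't detailed what the Music Haptics API will look like. Purely as an illustration of the underlying idea, here's a sketch using Core Haptics, Apple's existing framework for custom vibrations, to play a transient tap plus a textured rumble timed to a beat; the timings and parameter values are invented for the example.

```swift
import CoreHaptics

// Illustrative only: plays a sharp "tap" on the beat followed by a softer
// sustained texture, the kind of pairing the Music Haptics feature describes.
func playBeatHaptics() throws {
    let engine = try CHHapticEngine()
    try engine.start()

    let tap = CHHapticEvent(
        eventType: .hapticTransient,
        parameters: [
            CHHapticEventParameter(parameterID: .hapticIntensity, value: 1.0),
            CHHapticEventParameter(parameterID: .hapticSharpness, value: 0.6)
        ],
        relativeTime: 0)          // on the beat

    let texture = CHHapticEvent(
        eventType: .hapticContinuous,
        parameters: [
            CHHapticEventParameter(parameterID: .hapticIntensity, value: 0.4),
            CHHapticEventParameter(parameterID: .hapticSharpness, value: 0.2)
        ],
        relativeTime: 0.1,
        duration: 0.4)            // sustained texture right after

    let pattern = try CHHapticPattern(events: [tap, texture], parameters: [])
    let player = try engine.makePlayer(with: pattern)
    try player.start(atTime: CHHapticTimeImmediate)
}
```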

Help in cars — motion sickness and CarPlay

Drivers with disabilities need better systems in their cars, and Apple is addressing some of the issues with its updates to CarPlay. Voice control and color filters are coming to the interface for vehicles, making it easier to control apps by talking and for those with visual impairments to see menus or alerts. To that end, CarPlay is also getting bold and large text support, as well as sound recognition for noises like sirens or honks. When the system identifies such a sound, it will display an alert at the bottom of the screen to let you know what it heard. This works similarly to Apple's existing sound recognition feature in other devices like the iPhone.


For those who get motion sickness while using their iPhones or iPads in moving vehicles, a new feature called Vehicle Motion Cues might alleviate some of that discomfort. Since motion sickness is based on a sensory conflict from looking at stationary content while being in a moving vehicle, the new feature is meant to better align the conflicting senses through onscreen dots. When enabled, these dots will line the four edges of your screen and sway in response to the motion it detects. If the car moves forward or accelerates, the dots will sway backwards as if in reaction to the increase in speed in that direction.
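Vehicle Motion Cues is a system-level feature, but the basic mechanism, reading the motion sensors and swaying dots against detected acceleration, can be sketched with Core Motion. The gain and smoothing constants below are illustrative guesses, not Apple's values.

```swift
import CoreMotion
import CoreGraphics

// Illustrative sketch: dots drift opposite to measured acceleration,
// echoing how Apple describes Vehicle Motion Cues behaving.
final class MotionCueModel {
    private let manager = CMMotionManager()
    private(set) var dotOffset = CGVector(dx: 0, dy: 0)

    func start() {
        guard manager.isDeviceMotionAvailable else { return }
        manager.deviceMotionUpdateInterval = 1.0 / 60.0
        manager.startDeviceMotionUpdates(to: .main) { [weak self] motion, _ in
            guard let self, let accel = motion?.userAcceleration else { return }
            // Car speeds up, dots sway backwards (and vice versa).
            let gain = 40.0   // points of sway per g, an invented constant
            let target = CGVector(dx: -accel.x * gain, dy: accel.z * gain)
            // Simple low-pass filter so the dots glide rather than jitter.
            self.dotOffset.dx += (target.dx - self.dotOffset.dx) * 0.1
            self.dotOffset.dy += (target.dy - self.dotOffset.dy) * 0.1
        }
    }

    func stop() { manager.stopDeviceMotionUpdates() }
}
```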

Other Apple Accessibility updates

There are plenty more features coming to the company's suite of products, including Live Captions in visionOS, a new Reader mode in Magnifier, support for multi-line braille and a virtual trackpad for those who use AssistiveTouch. It's not yet clear when all of these announced updates will roll out, though Apple has historically made these features available in upcoming versions of iOS. With its developer conference WWDC just a few weeks away, it's likely many of today's tools will be officially released with the next iOS.

This article originally appeared on Engadget at https://www.engadget.com/apple-brings-eye-tracking-to-recent-iphones-and-ipads-140012990.html?src=rss

The Morning After: The biggest news from Google's I/O keynote

Google boss Sundar Pichai wrapped up the company’s I/O developer conference by noting its almost-two-hour presentation had mentioned AI 121 times. It was everywhere.

Google’s newest AI model, Gemini 1.5 Flash, is built for speed and efficiency. The company said it created Flash because developers wanted a lighter, less expensive model than Gemini Pro to build AI-powered apps and services.

Google says it’ll double Gemini’s context window to two million tokens, enough to process two hours of video, 22 hours of audio, more than 60,000 lines of code or 1.4 million-plus words at the same time.

But the bigger news is how the company is sewing AI into all the things you’re already using. With search, it’ll be able to answer your complex questions (a la Copilot in Bing), but for now, you’ll have to sign up for the company’s Search Labs to try that out. AI-generated answers will also appear alongside typical search results, just in case the AI knows better.

Google Photos was already pretty smart at searching for specific images or videos, but with AI, Google is taking things to the next level. If you’re a Google One subscriber in the US, you will be able to ask Google Photos a complex question, like “show me the best photo from each national park I’ve visited.” You can also ask Google Photos to generate captions for you.

And, if you have an Android phone, Gemini is integrating directly into the device. Gemini will know the app, image or video you’re running, and you’ll be able to pull it up as an overlay and ask it context-specific questions, like how to change settings or maybe even who’s displayed on screen.

While these were the bigger beats, there was an awful lot to chew over. Check out all the headlines right here.

— Mat Smith

The biggest stories you might have missed

Google wants you to relax and have a natural chat with Gemini Live

Google Pixel 8a review

Google unveils Veo and Imagen 3, its latest AI media creation models

You can get these reports delivered daily direct to your inbox. Subscribe right here!

Google reveals its visual AI assistant, Project Astra

Full of potential.


One of Google’s bigger projects is its visual multimodal AI assistant, currently called Project Astra. It taps into your smartphone (or smart glasses) camera and can contextually analyze and answer questions on the things it sees. Project Astra can offer silly wordplay suggestions, as well as identify and define the things it sees. A video demo shows Project Astra identifying the tweeter part of a speaker. It’s equal parts impressive and, well, familiar. We tested it out, right here.

Continue reading.

X now treats the term cisgender as a slur

Elon Musk continues to add policy after baffling policy.

The increasingly unhinged world of X (Twitter) now considers the term ‘cisgender’ a slur. Owner Elon Musk posted last June, to the delight of his unhingiest users, that “‘cis’ or ‘cisgender’ are considered slurs on this platform.” On Tuesday, X reportedly began posting an official warning. A quick reminder: It’s not a slur.

Continue reading.

OpenAI co-founder Ilya Sutskever is leaving the company

He’s moving to a new project.

Ilya Sutskever announced on X, formerly Twitter, he’s leaving OpenAI almost a decade after he co-founded the company. He’s confident OpenAI “will build [artificial general intelligence] that is both safe and beneficial” under the leadership of CEO Sam Altman, President Greg Brockman and CTO Mira Murati. While Sutskever and Altman praised each other in their farewell messages, the two were embroiled in the company’s biggest scandal last year. Sutskever, who was a board member at the time, was involved in the ouster of both Altman and Brockman.

Continue reading.

This article originally appeared on Engadget at https://www.engadget.com/the-morning-after-the-biggest-news-from-googles-io-keynote-111531702.html?src=rss

Everything announced at Google I/O 2024 including Gemini AI, Project Astra, Android 15 and more

At the end of I/O, Google’s annual developer conference at the Shoreline Amphitheater in Mountain View, Google CEO Sundar Pichai revealed that the company had said “AI” 121 times. That, essentially, was the crux of Google’s two-hour keynote — stuffing AI into every Google app and service used by more than two billion people around the world. Here are all the major updates that Google announced at the event.

Gemini 1.5 Flash and updates to Gemini 1.5 Pro


Google announced a brand new AI model called Gemini 1.5 Flash, which it says is optimized for speed and efficiency. Flash sits between Gemini 1.5 Pro and Gemini 1.5 Nano, the company’s smallest model, which runs locally on device. Google said it created Flash because developers wanted a lighter and less expensive model than Gemini Pro for building AI-powered apps and services, while keeping features like the one-million-token context window that differentiates Gemini Pro from competing models. Later this year, Google will double Gemini’s context window to two million tokens, which means that it will be able to process two hours of video, 22 hours of audio, more than 60,000 lines of code or more than 1.4 million words at the same time.

Project Astra


Google showed off Project Astra, an early version of a universal assistant powered by AI that Google DeepMind CEO Demis Hassabis said was Google’s version of an AI agent “that can be helpful in everyday life.”

In a video that Google says was shot in a single take, an Astra user moves around Google’s London office holding up their phone and pointing the camera at various things — a speaker, some code on a whiteboard, and out a window — and has a natural conversation with the app about what it sees. In one of the video’s most impressive moments, the app correctly tells the user where she left her glasses, without her ever having brought them up.

The video ends with a twist — when the user finds and wears the missing glasses, we learn that they have an onboard camera system and are capable of using Project Astra to seamlessly carry on a conversation with the user, perhaps indicating that Google might be working on a competitor to Meta’s Ray-Ban smart glasses.

Ask Google Photos


Google Photos was already intelligent when it came to searching for specific images or videos, but with AI, Google is taking things to the next level. If you’re a Google One subscriber in the US, you will be able to ask Google Photos a complex question like “show me the best photo from each national park I’ve visited” when the feature rolls out over the next few months. Google Photos will use GPS information as well as its own judgment of what is “best” to present you with options. You can also ask Google Photos to generate captions to post the photos to social media.

Veo and Imagen 3


Google’s new AI-powered media creation engines are called Veo and Imagen 3. Veo is Google’s answer to OpenAI’s Sora. It can produce “high-quality” 1080p videos that can last “beyond a minute,” Google said, and can understand cinematic concepts like a timelapse.

Imagen 3, meanwhile, is a text-to-image generator that Google claims handles text better than its previous version, Imagen 2. The result is the company’s “highest quality” text-to-image model with an “incredible level of detail” for “photorealistic, lifelike images” and fewer artifacts — essentially pitting it against OpenAI’s DALL-E 3.

Big updates to Google Search


Google is making big changes to how Search fundamentally works. Most of the updates announced today, like the ability to ask really complex questions (“Find the best yoga or pilates studios in Boston and show details on their intro offers and walking time from Beacon Hill.”) and the ability to use Search to plan meals and vacations, won’t be available unless you opt in to Search Labs, the company’s platform that lets people try out experimental features.

But a big new feature, which Google calls AI Overviews and has been testing for a year now, is finally rolling out to millions of people in the US. Google Search will now present AI-generated answers on top of the results by default, and the company says that it will bring the feature to more than a billion users around the world by the end of the year.

Gemini on Android


Google is integrating Gemini directly into Android. When Android 15 releases later this year, Gemini will be aware of the app, image or video that you’re running, and you’ll be able to pull it up as an overlay and ask it context-specific questions. Where does that leave Google Assistant, which already does this? Who knows! Google didn’t bring it up at all during today’s keynote.

There were a bunch of other updates too. Google said it would add digital watermarks to AI-generated video and text, make Gemini accessible in the side panel in Gmail and Docs, power a virtual AI teammate in Workspace, listen in on phone calls and detect if you’re being scammed in real time, and a lot more.


Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/everything-announced-at-google-io-2024-including-gemini-ai-project-astra-android-15-and-more-210414580.html?src=rss

Google builds Gemini right into Android, adding contextual awareness within apps

Google just announced some nifty improvements to its Gemini AI chatbot for Android devices as part of the company’s I/O 2024 event. The AI is now part of the Android operating system, allowing it to integrate in a more comprehensive way.

The coolest new feature wouldn’t be possible without that integration with the underlying OS. Gemini is now much better at understanding context as you control apps on the smartphone. What does this mean exactly? Once the tool officially launches as part of Android 15, you’ll be able to bring up a Gemini overlay that rests on top of the app you’re using. This will allow for context-specific actions and queries.

Google gives the example of quickly dropping generated images into Gmail and Google Messages, though you may want to steer clear of historical images for now. The company also teased a feature called “Ask This Video” that lets users pose questions about a particular YouTube video, which the chatbot should be able to answer.

It’s easy to see where this tech is going. Once Gemini has access to the lion’s share of your app library, it should be able to actually deliver on some of those lofty promises made by rival AI companies like Humane and Rabbit. Google says it's “just getting started with how on-device AI can change what your phone can do,” so we imagine future integration with apps like Uber and DoorDash, at the very least.

Circle to Search is also getting a boost thanks to on-board AI. Users will be able to circle just about anything on their phone and receive relevant information. Google says people will be able to do this without having to switch apps. This even extends to math and physics problems: just circle for the answer, which is likely to please students and frustrate teachers.

This article originally appeared on Engadget at https://www.engadget.com/google-builds-gemini-right-into-android-adding-contextual-awareness-within-apps-180413356.html?src=rss

Google expands digital watermarks to AI-made video

As Google starts to make its latest video-generation tools available, the company says it has a plan to ensure transparency around the origins of its increasingly realistic AI-generated clips. All video made by the company’s new Veo model in the VideoFX app will have digital watermarks thanks to Google’s SynthID system.

SynthID is Google’s digital watermarking system that started rolling out to AI-generated images last year. The tech embeds imperceptible watermarks into AI-made content so that AI detection tools can recognize that the content was generated by AI. Considering that Veo, the company’s latest video generation model previewed onstage at I/O, can create longer and higher-res clips than what was previously possible, tracking the source of such content will be increasingly important.
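SynthID's actual technique is proprietary and designed to survive edits and compression. Purely to make the concept of an imperceptible watermark concrete, here's a toy least-significant-bit scheme over raw 8-bit pixel values; unlike SynthID, it would not survive any real-world transformation of the image.

```swift
// Toy watermark: hide one bit per pixel in the lowest bit, where changes
// are visually imperceptible. Not SynthID; just the general idea.
func embed(bits: [Bool], into pixels: [UInt8]) -> [UInt8] {
    var out = pixels
    for (i, bit) in bits.enumerated() where i < out.count {
        out[i] = (out[i] & 0b1111_1110) | (bit ? 1 : 0)  // rewrite lowest bit
    }
    return out
}

// A detector that knows the scheme can read the bits straight back out.
func extract(count: Int, from pixels: [UInt8]) -> [Bool] {
    pixels.prefix(count).map { $0 & 1 == 1 }
}
```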

During a briefing with reporters, DeepMind CEO Demis Hassabis said that SynthID watermarks would also expand to AI-generated text. As generative AI models advance, more companies have turned to watermarking amid fears that AI could fuel a new wave of misinformation. Watermarking systems would give platforms like Google a framework for detecting AI-generated content that may otherwise be impossible to distinguish. TikTok and Meta have also recently announced plans to support similar detection tools on their platforms and label more AI content in their apps.

Of course, there are still significant questions about whether digital watermarks on their own offer sufficient protection against deceptive AI content. Researchers have shown that watermarks can be easy to evade. But making AI-made content detectable in some way is an important first step toward transparency.

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/google-expands-digital-watermarks-to-ai-made-video-175232320.html?src=rss

Google just snuck a pair of AR glasses into a Project Astra demo at I/O

In a video demonstrating the prowess of its new Project Astra app, the demonstrator asked Gemini "do you remember where you saw my glasses?" The AI impressively responded "Yes, I do. Your glasses were on a desk near a red apple," despite said object not actually being in view when the question was asked. But these weren't your bog-standard visual aids. These glasses had a camera onboard and some sort of visual interface!

The tester picked up their glasses and put them on, and proceeded to ask the AI more questions about things they were looking at. Clearly, there is a camera on the device that's helping it take in the surroundings, and we were shown some sort of interface where a waveform moved to indicate it was listening. Onscreen captions appeared to reflect the answer that was being read aloud to the wearer, as well. So if we're keeping track, that's at least a microphone and speaker onboard too, along with some kind of processor and battery to power the whole thing. 

We only caught a brief glimpse of the wearable, but from the sneaky seconds it was in view, a few things were evident. The glasses had a simple black frame and didn't look at all like Google Glass. They didn't appear very bulky, either. 

In all likelihood, Google is not ready to actually launch a pair of glasses at I/O. It breezed right past the wearable's appearance and barely mentioned the glasses, saying only that Project Astra and the company's vision of "universal agents" could come to devices like our phones or glasses. We don't know much else at the moment, but if you've been mourning Google Glass or the company's other failed wearable products, this might instill some hope yet.

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/google-just-snuck-a-pair-of-ar-glasses-into-a-project-astra-demo-at-io-172824539.html?src=rss

Google's Project Astra uses your phone's camera and AI to find noise makers, misplaced items and more

When Google first showcased its Duplex voice assistant technology at its developer conference in 2018, it was both impressive and concerning. Today, at I/O 2024, the company may be bringing up those same reactions again, this time by showing off another application of its AI smarts with something called Project Astra. 

The company couldn't even wait till its keynote today to tease Project Astra, posting a video to its social media of a camera-based AI app yesterday. At its keynote today, though, Google's DeepMind CEO Demis Hassabis shared that his team has "always wanted to develop universal AI agents that can be helpful in everyday life." Project Astra is the result of progress on that front. 

What is Project Astra?

According to a video that Google showed during a media briefing yesterday, Project Astra appeared to be an app with a viewfinder as its main interface. A person holding up a phone pointed its camera at various parts of an office and said "Tell me when you see something that makes sound." When a speaker next to a monitor came into view, Gemini responded "I see a speaker, which makes sound."

The person behind the phone stopped and drew an onscreen arrow to the top circle on the speaker and said, "What is that part of the speaker called?" Gemini promptly responded "That is the tweeter. It produces high-frequency sounds."

Then, in the video that Google said was recorded in a single take, the tester moved over to a cup of crayons further down the table and asked "Give me a creative alliteration about these," to which Gemini said "Creative crayons color cheerfully. They certainly craft colorful creations."

Wait, were those Project Astra glasses? Is Google Glass back?

The rest of the video goes on to show Gemini in Project Astra identifying and explaining parts of code on a monitor and telling the user what neighborhood they were in based on the view out the window. Most impressively, Astra was able to answer "Do you remember where you saw my glasses?" even though said glasses were completely out of frame and had not previously been pointed out. "Yes, I do," Gemini said, adding "Your glasses were on a desk near a red apple."

After Astra located those glasses, the tester put them on and the video shifted to the perspective of what you'd see on the wearable. Using an onboard camera, the glasses scanned the wearer's surroundings to see things like a diagram on a whiteboard. The person in the video then asked "What can I add here to make this system faster?" As they spoke, an onscreen waveform moved to indicate it was listening, and as it responded, text captions appeared in tandem. Astra said "Adding a cache between the server and database could improve speed."

The tester then looked over to a pair of cats doodled on the board and asked "What does this remind you of?" Astra said "Schrodinger's cat." Finally, they picked up a plush tiger toy, put it next to a cute golden retriever and asked for "a band name for this duo." Astra dutifully replied "Golden stripes."

How does Project Astra work?

This means that not only was Astra processing visual data in real time, it was also remembering what it saw and working with an impressive backlog of stored information. This was achieved, according to Hassabis, because these "agents" were "designed to process information faster by continuously encoding video frames, combining the video and speech input into a timeline of events, and caching this information for efficient recall."
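As a schematic of the pipeline Hassabis describes, here's a hypothetical Swift sketch of a timestamped event timeline with a bounded cache and recall; every type, the string summaries and the matching logic are stand-ins for Google's actual encoders and embeddings.

```swift
import Foundation

// Stand-in for an encoded frame or speech segment on the timeline.
struct TimelineEvent {
    let timestamp: Date
    let summary: String   // a real system would store an embedding, not text
}

final class EventTimeline {
    private var events: [TimelineEvent] = []
    private let capacity = 10_000   // bounded cache for efficient recall

    // Continuously record encoded frames/speech as they arrive.
    func record(_ summary: String, at time: Date = Date()) {
        events.append(TimelineEvent(timestamp: time, summary: summary))
        if events.count > capacity { events.removeFirst() }
    }

    /// Naive recall: most recent event whose summary mentions the query.
    /// A real system would match embeddings, not substrings.
    func recall(_ query: String) -> TimelineEvent? {
        events.last { $0.summary.localizedCaseInsensitiveContains(query) }
    }
}

// let timeline = EventTimeline()
// timeline.record("glasses on a desk near a red apple")
// timeline.recall("glasses")?.summary  // answers "where did you see my glasses?"
```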

It's also worth noting that, at least in the video, Astra was responding quickly. Hassabis noted in a blog post that "While we’ve made incredible progress developing AI systems that can understand multimodal information, getting response time down to something conversational is a difficult engineering challenge."

Google has also been working on giving its AI a greater range of vocal expression, using its speech models to enhance how the agents sound and give them "a wider range of intonations." This sort of mimicry of human expressiveness in responses is reminiscent of Duplex's pauses and utterances that led people to think Google's AI might be a candidate for the Turing test.

When will Project Astra be available?

While Astra remains an early feature with no discernible launch plans, Hassabis wrote that, in the future, these assistants could be available "through your phone or glasses." No word yet on whether those glasses are actually a product or the successor to Google Glass, but Hassabis did write that "some of these capabilities are coming to Google products, like the Gemini app, later this year."

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/googles-project-astra-uses-your-phones-camera-and-ai-to-find-noise-makers-misplaced-items-and-more-172642329.html?src=rss

iPad Pro (2024) review: So very nice, and so very expensive

It hasn’t even been released yet, but Apple’s new iPad Pro is probably one of the most divisive devices the company has made in years. On the one hand, it’s an undeniable feat of engineering. Apple squeezed a new M4 chip and “tandem” OLED panel into a tablet that’s somehow thinner and lighter than the one it replaces. And the prior iPad Pro was no slouch either, garnering loads of praise for its combo of power and portability since it was first introduced in 2018.

On the other hand, this tech comes at a cost: the 11-inch iPad Pro starts at $999, while the 13-inch model costs $1,299. That’s $200 more than before, and that’s without a $299 or $349 Magic Keyboard and a $129 Pencil Pro. (The unit I’m testing is a 13-inch system with 1TB of storage and 5G, which costs $2,099.) The iPad Pro has always felt like Apple flexing its muscles, showing off an absurdly powerful and portable vision of tablet computing that’s overkill for almost everyone, and that’s more true than ever. Furious debate has ensued over the value of an iPad Pro and why in the world anyone would buy one instead of a MacBook. This isn’t a new conversation, but it feels particularly heated this time.

Before getting into the details, it’s worth noting that I haven’t even had a week to use the iPad Pro M4. So I can’t assess things like long-term durability, which I can’t help but wonder about given just how thin it is. But in the short time I’ve had the iPad Pro, I can say that it’s somehow a major leap forward that doesn’t significantly change the iPad experience. As such, you’ll have to really ask yourself if it’s worth the price.

Hardware

If you stare at the iPad Pro M4 head-on, you won’t notice any difference between it and the previous model. The display still makes up the vast majority of the front, with thin, equally sized bezels surrounding it. The Face ID camera is now on the landscape edge (a great change that Apple first brought to the basic iPad in late 2022), but it’s basically invisible to the eye — no notch for the Pro.

However, picking up the iPad Pro tells another story altogether. While the new 13-inch model is fractionally taller and wider than the 12.9-inch version it replaces, the iPad Pro M4 is 20 percent thinner and about a quarter-pound lighter. I cannot stress enough how radically this changes the experience of holding the iPad Pro, especially the larger of the two.

Before, the big iPad Pro was just a bit too big and heavy to be comfortable as a hand-held tablet. I used to prefer the 11-inch iPad Pro or Air for relaxing on the couch, browsing the web, playing some games, messaging friends and doing other light tasks. Now, however, it feels entirely reasonable to use the 13-inch model in that fashion. I still think smaller tablets are better for hand-held tasks, but the reduced thickness and weight make the new iPad Pro much easier to handle.

I want to talk a little more about how ridiculously thin this iPad is. Apple has rightly gotten its share of flak for relentlessly trying to make its products thinner, to the point where it affects durability and usability. Perhaps the best examples are the Touch Bar MacBook Pro models that Apple first introduced in 2016. Those laptops were indeed thinner and lighter than their predecessors, but at the expense of things like battery life, proper thermal cooling and a reliable keyboard. Apple reversed course once it brought its own chips to the MacBook Pro; those laptops were heavier and chunkier than the disastrous Touch Bar models, but they had more ports, better keyboards and no issues staying cool under a heavy workload.

This is all to say that, for those computers, the pursuit of “thin and light” hampered their primary purpose, especially since they aren’t devices you hold in your hands all day. But with something like an iPad, where you’re meant to pick it up, hold it and touch it, shaving off a quarter of a pound and 20 percent of its thickness actually makes a huge difference in the experience of using the product. It’s more comfortable and easier to use — and, provided that there are no durability concerns here, this is a major improvement. I’ve only had the iPad Pro for less than a week, so I can’t say how it’ll hold up over time, but so far it seems sturdy and not prone to bending.

The iPad Pro on the left, next to the iPad Air on the right.

Beyond that significant change, the new iPad Pro retains the same basic elements: There’s a power button in one corner, volume up and down buttons on another, and a USB-C Thunderbolt port on the bottom. There’s a camera bump on the back, in the same position as always, and a connector for the Magic Keyboard. Finally, there are four speakers, one in each corner, just as before. They sound much better than speakers from such a thin device should sound, a feat Apple has consistently pulled off across all its devices lately. Aside from the size and weight reduction, Apple hasn’t radically changed things here, and that’s mostly OK — though I could imagine some people wanting a second Thunderbolt port just for power when a peripheral is plugged in.

The specs of both the front- and back-facing cameras are unchanged; both are 12-megapixel sensors. Somewhat surprisingly, Apple removed the ultra-wide camera from the back, leaving it with a single standard camera alongside the LiDAR sensor and redesigned True Tone flash. That’s fine by me, as the standard lens covers most of what you’d want out of an iPad camera. Its video capabilities are still robust, with support for ProRes video recording and 4K at a variety of frame rates.

Meanwhile, the front-facing camera on the landscape edge of the tablet means you can actually do video calls when the iPad is in its keyboard dock and not look ridiculous. I generally avoided doing video calls with my iPad before, but I’ve done a handful on the iPad Pro and all the feedback I’ve received is that the video quality is solid if not spectacular. Regardless, I won’t think twice about jumping onto FaceTime or Google Meet with the iPad Pro now that the camera position is no longer an issue.


Tandem OLEDs

The next thing you’ll notice about the new iPad Pro is its OLED display. Specifically, Apple calls it a “tandem OLED” display, which means that you’re actually looking at two OLED panels layered on top of each other. The screen resolution is essentially the same as the old iPad Pro (2,752 x 2,064, 264 pixels per inch), but a number of other key specs have improved. It now features a 2,000,000-to-1 contrast ratio, one of the things OLED is best known for — blacks are literal darkness, as the pixels don’t emit any light.

The OLED enables more brightness and improved HDR performance compared to the old iPad Pro — standard screen brightness is up to 1,000 nits, compared to 600 nits for the last model. As before, though, HDR content maxes out at 1,600 nits. This is a nice upgrade over the Mini-LED screen on the old 12.9-inch iPad Pro, but it’s a massive improvement for the 11-inch iPad Pro. That model was stuck with a standard LCD with no HDR capabilities; the disparity between the screens Apple offered on the two iPad Pros was significant, but now both tablets have the same caliber display, and it’s one of the best I’ve ever seen.


Everything is incredibly bright, sharp and vibrant, whether I’m browsing the web, editing photos, watching movies or playing games. I cannot stress enough how delightful this screen is — I have a flight this week, and I can’t wait to spend it watching movies. Watching a selection of scenes from Interstellar shows off the HDR capabilities as well as the contrast between the blackness of space and the brightness of surrounding stars and galaxies, while more vibrant scenes like the Shire in Fellowship of the Ring had deep and gorgeous colors without feeling overly saturated or unrealistic. Given how the screen is the most crucial experience of using a tablet, I can say Apple has taken a major leap forward here. If you’re upgrading from the Mini-LED display in the 12.9-inch iPad Pro, it won’t be quite as massive a difference, but anyone who prefers the 11-inch model will be thrilled with this improvement.

As expected, these screens have all the usual high-end features from prior models, including the ProMotion variable refresh rate (up to 120Hz), fingerprint-resistant and antireflective coatings, True Tone color temperature adjustment, support for the P3 wide color gamut and full lamination. Other iPads have some, but not all, of these features; specifically, ProMotion is saved for the Pro line. And this year, Apple added a $100 nano-texture glass option for the 1TB and 2TB models to further reduce glare, a good option if you often work in bright sunlight. (My review iPad did not have this feature.) Between that and the improved brightness, these tablets are well-suited to working in difficult lighting conditions.

M4 performance

Choosing to debut the M4 chip in the iPad Pro rather than a Mac is a major flex by Apple. Prior M-series silicon hit Macs first, iPads later. But as Apple tells it, the tandem OLED displays needed the M4’s new display engine to hit the performance goals it wanted, so rather than engineer that into an existing chip, it just went forward with a whole new processor. The 1TB and 2TB iPad Pros have an M4 with four performance cores, six efficiency cores, a 10-core GPU and 16GB of RAM, while the less-expensive models have to make do with three performance cores and 8GB of RAM.

Either way, that’s more power than almost anyone buying an iPad will know what to do with. Interestingly, even Apple’s own apps don’t quite know what to do with it, either. When the company briefed the press last week, it showed off new versions of Final Cut Pro and Logic Pro for the iPad, both of which had some impressive additions. Final Cut Pro is getting a live multicam feature that lets you wirelessly sync multiple iPhones or iPads to one master device and record and direct all of them simultaneously. Logic Pro, meanwhile, has some new AI-generated “session players” that can create realistic backing tracks for you to play or sing over.


Both features were very impressive in the demos I saw — but neither of them requires the M4 iPad Pro. Final Cut Pro will still work on any iPad with an M-series processor, and Logic Pro works on M-series iPads as well as the iPad Pro models with the A12Z chip (first released in 2020).

Of course, when you’re spending in excess of $1,000, it’s good to know you’ll get performance that’ll last you years into the future, and that’s definitely the case here. As apps get even more complex, the iPad Pro should be able to make short work of them. That includes AI, of course — the M4’s neural engine is capable of 38 trillion operations per second, a massive upgrade over the 18 trillion number quoted for the M3.

Unsurprisingly, the iPad Pro M4’s Geekbench CPU scores of 3,709 (single-core), 14,680 (multi-core) and 53,510 (GPU) significantly eclipse those of the M2 iPad Air (2,621 / 10,058 / 41,950). In reality, though, both of these tablets will churn through basically anything you throw at them. If your time is money and having faster video rendering or editing matters, or you work with a lot of apps that rely heavily on machine learning, the M4 should shave precious seconds or minutes out of your workflow, which will add up significantly over time.

Fortunately, the new chip remains as power efficient as ever. I haven’t done deep battery testing yet given I’ve only had the iPad Pro for a few days at this point. But I did use it as my main computer for several days and got through almost 10 hours of work before needing the charger. My workload is comparatively modest though, as I’m not pushing the iPad through heavy video or AI workloads, so your mileage may vary. As it has for more than a decade now, Apple quotes 10 hours of web browsing or watching video. But given what the M4 is capable of, chances are people doing more process-intensive tasks will run through the battery a lot faster.

New Magic Keyboard

As rumored, Apple has two new accessories for the iPad Pro: a new keyboard and the Pencil Pro. Both are still just as pricey as before. $349 for a keyboard case still feels like highway robbery, no matter how nice it is. But at least they’re not more expensive.

The good news is that the new Magic Keyboard is definitively better than the old one in a number of ways. First off, it’s thinner and lighter than before, which makes a huge difference in how the whole package feels. The last iPad Pro and its keyboard were actually rather thick and heavy, weighing in around three pounds — more than a MacBook Air. Now, both the iPad and keyboard case are thinner and lighter on their own, making the whole package feel much more compact.


The base of the Magic Keyboard is now made of aluminum, which makes the typing experience more like what you’ll find on a MacBook. The keys are all about the same size as before, and typing on it remains extremely comfortable. If you’re familiar with the keyboards on Apple’s laptops, you’ll feel right at home here. Apple also made the trackpad bigger and added a function row of keys, both of which make the overall experience of navigating and using iPadOS much better.

The trackpad also now has no moving parts and instead relies on haptic feedback, similar to the MacBook trackpads. Every click is accompanied by a haptic that truly tricks me into thinking the trackpad moves, and small vibrations accompany other actions as well. For example, when I swipe up and hold to enter multitasking, there’s a haptic that confirms the gesture is recognized. Third-party developers will be able to add haptic trackpad feedback to their apps, as well.
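Apple didn't specify the API here, but the standard route for app-level feedback on iOS is UIFeedbackGenerator; assuming iPadOS routes it to the trackpad when a Magic Keyboard is attached, which is our reading rather than something Apple has spelled out, a minimal sketch looks like this.

```swift
import UIKit

// Minimal sketch: fire a light haptic tick when an app-defined gesture
// (say, the multitasking swipe example above) is recognized.
final class GestureFeedback {
    private let impact = UIImpactFeedbackGenerator(style: .light)

    func multitaskingGestureRecognized() {
        impact.prepare()          // spin up the haptic hardware to cut latency
        impact.impactOccurred()   // short confirming tick
    }
}
```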

Between the improved layout and thinner design, the Magic Keyboard is essential gear if, like me, you make your living while typing. It’s wildly expensive, yes, but it’s also extremely well-made and thoughtfully designed in a way that I just haven’t seen anyone else match yet. Yes, there are plenty of cheaper third-party options, but the Magic Keyboard is the best option I’ve tried.

Apple Pencil Pro


Whenever I review an iPad, I can’t help but lament my complete lack of visual art skill. But even I can tell that the new Pencil Pro is a notable upgrade over the model it replaces, which was already excellent. As before, it magnetically attaches to the side of the iPad Pro for charging and storage, something that remains an elegant solution.

The Pencil Pro does everything the second-gen Apple Pencil does and has some new tricks to boot. One is Squeeze, which by default brings up the brush picker interface in apps like Notes and Freeform. It’s a quick and smart way to scrub through your different options, and it’s open to third-party developers to use as they wish in their own apps. The Pencil Pro isn’t too sensitive to the Squeeze gesture; I didn’t find myself accidentally popping open the menu while doodling away. The new Pencil also has a gyroscope, which lets it recognize rotation gestures — this means you can “turn” your virtual brush as you paint, giving it another layer of realism. Between tilt, pressure and now rotation sensitivity, the Pencil Pro is even better at capturing how you are using it.
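For developers, squeeze handling surfaces through UIKit's UIPencilInteraction, which gained squeeze support in iOS 17.5. This is a minimal sketch; treat the exact symbol names as a best-effort reading of the new API rather than verified code.

```swift
import UIKit

final class CanvasViewController: UIViewController, UIPencilInteractionDelegate {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Attach a pencil interaction so the view receives squeeze events.
        let interaction = UIPencilInteraction()
        interaction.delegate = self
        view.addInteraction(interaction)
    }

    // Called as the Pencil Pro squeeze progresses; act once, on release.
    func pencilInteraction(_ interaction: UIPencilInteraction,
                           didReceiveSqueeze squeeze: UIPencilInteraction.Squeeze) {
        guard squeeze.phase == .ended else { return }
        showBrushPicker()   // the system default; apps can substitute their own action
    }

    private func showBrushPicker() {
        // Present the app's tool palette here.
    }
}
```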

Apple also added haptic feedback, so when you squeeze the Pencil you’ll get a vibration to confirm the action. It’s also used in a great new “undo” menu: if you squeeze the Pencil and then tap and hold on the back button, you can then quickly scrub through and undo everything you’ve written, step-by-step. This history makes it easy to take some risks while working on something and then quickly rewind if you’re not happy with the results. And each step of the log is accompanied by a haptic buzz as you scroll forwards and backwards.

Finally, the Apple Pencil Pro has Find My integration, which will make it easier to find when you inevitably lose it in the couch cushions (or leave it at a coffee shop). Given that Apple threw in a lot of new features and kept the price the same, I can’t complain too much about the Pencil Pro. The only bummer is that the new iPad Pro doesn’t work with the second-generation Pencil, presumably due to a different battery charging and pairing setup necessitated by moving the front camera to the same edge as the charging area. So if you’re upgrading, a Pencil Pro (or the less capable $79 USB-C Pencil) will be a requirement.

iPadOS

I think it’s worth a quick mention that Apple has not made any changes to iPadOS to go along with this release, and it’s one of the things that has made the internet very angry. There’s been a lot of chatter from some people who think the iPad Pro should run macOS or similar software; the vibe is that the iPad’s hardware is wasted on iPadOS.

I can only speak for myself and note that I was able to do everything my job asks of me on the iPad Pro while I was testing it, but that doesn’t mean it would be my choice over a Mac for certain situations. If I was at an event like CES, I’d want my MacBook Pro to facilitate things like transferring and editing photos as well as working in Google Docs. I can do those things on an iPad, but not as easily, mostly because the Google Docs app doesn’t handle going through comments and suggestions well. I did, however, find it easy and fast to import RAW photos from my SD card to the Lightroom app. For the first time, I felt comfortable doing my entire review photo workflow on an iPad.

Even things like tearing through my email are better in the Gmail web app than the Gmail app for iPad. Overall, though, I was perfectly happy using the iPad Pro as my main computer; some things are a little tougher and some are easier. The whole experience doesn’t feel significantly better or worse, it’s just different. And at this point, I enjoy seeing what I can do on platforms that aren’t Windows and macOS.

Ultimately, Apple has shown no indication it’s going to make iPadOS more like a Mac. By the same token, it still shows no indication of making a Mac with a touchscreen. For better or worse, those two worlds are distinct. And with no rumors pointing to a big iPadOS redesign at WWDC next month, you shouldn’t expect the software experience to radically change in the near future. As such, don’t buy an iPad Pro unless you’re content with the OS as it is right now.


Wrap-up

The iPad Pro M4 is a fascinating device. I can’t help but want to use it. All the time. For everything. It’s truly wild to me that Apple is putting its absolute best tech into not a Mac but an iPad. That’s been a trend for a while, as the iPad Pro lineup has always been about showing off just how good of a tablet Apple can make, but this one truly is without compromise. It doesn’t just have a nice screen, it has the best screen Apple has ever made. It doesn’t have the same processor as some Macs, it has a newer and better one.

To get all of that technology into a device this thin and light truly feels, well, magical. That’s how Steve Jobs described the first iPad; significantly, he also said it contained “our most advanced technology.” In 2010, it was debatable whether the first iPad really had Apple’s most advanced tech, but it’s absolutely true now. And that’s what makes the iPad Pro such a delight to use: it’s a bit of an otherworldly experience, something hard to come by at this point when so much of technology has been commoditized.

But when I think realistically about what I need and what I can reasonably justify spending, I realize that the iPad Pro is just too much for me. Too expensive, too powerful, maybe a little too large (I truly love the 11-inch model, however). If you’re in the same boat, then fortunately, there’s an iPad that offers nearly everything the iPad Pro does for significantly less money: the iPad Air may not be nearly as exciting as the Pro, but it delivers the same core experience for a lot less cash. But if you aren’t put off by the price, the new iPad Pro is sure to delight.

This article originally appeared on Engadget at https://www.engadget.com/ipad-pro-2024-review-so-very-nice-and-so-very-expensive-210012937.html?src=rss

iOS 17.5 is here with support for web-based app downloads in the EU

Apple has rolled out iOS 17.5, which brings a bunch of updates, including support for a cross-OS alert system for unwanted Bluetooth trackers that the company worked on with Google. The other headline feature is the introduction of web-based app distribution in the European Union.

This is a function that Apple is introducing in the wake of the bloc's Digital Markets Act coming into force. It won't be a free-for-all, however. Developers who want to let users download iOS apps from their websites will need to opt into new App Store rules that mean they'll have to pay a fee for each user after hitting a certain threshold. They'll also need a developer account that's in good standing and an app that had more than a million iOS installs in the EU in the previous year.

There's another notable update in iOS 17.5 in the form of a new feature called Repair State. In a nutshell, this will mean that iPhone users no longer need to turn off Find My when they send in their iPhone for repair.

Elsewhere, there are some changes on the Apple News+ front. The app now at long last has an offline mode, so you can use it to catch up on some reading while you're on a flight and don't feel like paying for Wi-Fi. The Today feed and News+ tab will work without an internet connection.

Apple is also moving beyond crosswords and deeper into the daily word game trend popularized by the likes of Wordle. Quartiles is a Boggle-style original game for Apple News+ subscribers. You'll combine tiles of word fragments to form words and score points. You'll be able to share your scores with other players.


Last but not least, Apple has the latest incarnation of its annual Pride collection in honor of the LGBTQ+ community, including a Pride Radiance watch face and iOS and iPadOS wallpapers. You'll be able to customize these with a range of colors. 

You'll see the colors trace numerals of the watch face and react as you move your Apple Watch. A matching Apple Watch Pride Edition Braided Solo Loop will be available to order on May 22 for $99. The iPhone and iPad backgrounds spell out "Pride" in bold beams of color and move when you unlock the device.

This article originally appeared on Engadget at https://www.engadget.com/ios-175-is-here-with-support-for-web-based-app-downloads-in-the-eu-192624433.html?src=rss