
HP Omnibook X hands-on: Vintage branding in the new era of AI

All over the PC industry today, we’re learning of new systems and products launching in conjunction with Microsoft’s Copilot+ push. But HP isn’t just showing off new Snapdragon-powered laptops as part of the program. The company has decided to nuke its entire product naming portfolio and unify most of its sub-series.

While HP was never the worst offender in the world of awful product names — I’m looking at you, Sony, LG and Lenovo — being able to quickly identify the make and model of a device is crucial when you’re deciding what to buy. HP’s vice president of consumer PC products Pierre-Antoine Robineau admits as much, saying “to be fair, we don’t make things easy with our portfolio.” He referred to the company’s brands like Spectre, Pavilion and Envy, saying that if you ask ChatGPT what they are, the answers you’d get might refer to a ghost or a gazebo.

To simplify things, HP is getting rid of all those names on its consumer product portfolio and unifying everything under the Omni label. It’ll use Omnibook to refer to laptops, Omnidesk for desktops and Omnistudio for all-in-ones. For each category, it’ll add a label saying “3,” “5,” “7,” “X” or “Ultra” to indicate how premium or high-end the model is. That means the Omnibook Ultra is the highest-tier laptop, while the Omnidesk 3 might be the most basic or entry-level desktop system. That sort of numbering echoes Sony’s recent streamlined nomenclature of its home theater and personal audio offerings.

If Omnibook sounds familiar, that’s because HP actually had a product with that name, and it was available from 1993 to about 2002. The Omni moniker makes sense now in the 2020s, HP says, because these are devices that can do just about anything and act as multiple things at once. (As long as they don’t claim to be omniscient, omnipresent or omnipotent, I’ll let this slide.)

The company is also cleaning things up on the commercial side of its business, where the word “Elitebook” has traditionally been the most recognized label. It’s keeping that name, adopting the same Elitebook, Elitedesk and Elitestudio distinctions across categories and using the same “Ultra” and “X” labels to denote each model’s tier. However, instead of “3,” “5” or “7” here, HP is using even numbers (2, 4, 6 or 8), in part because it has used even series numbers like “1040” and “1060” in the Elitebook line before. Keeping similar numbers around can help IT managers with the shift in names, HP said.

The first new laptops under this new naming system are the Omnibook X and the Elitebook Ultra. They share very similar specs, with the Elitebook offering software that makes it easier for IT managers to deploy to employees. Both of these come with 14-inch 2.2K touchscreens that were, at least in my brief time with them during a recent hands-on, bright and colorful.

I didn’t get to explore much of the new Windows 11, since the units available either ran existing software or were locked. I presume, though, that these would have other Copilot+ PC goodies that Microsoft announced earlier today.

What I can tell you is that I prefer the aesthetic of HP’s older Spectre models. The company’s machines turned heads and caught eyes thanks to their shiny edges and uniquely cut-off corners. I’m a sucker for razor sharp edges and gold or silver finishes, so that line of laptops really called to me.

In contrast, the HP Omnibook X seems plain. It comes in white or silver (the Elitebook is available in blue) and has a uniform thickness along its edges. It’s still thin and light, at 14mm (or about 0.55 inches) and 1.33 kilograms (or 2.93 pounds). But it’s certainly lost a little flavor, and I crave some spice in a device.

That’s not to say the Omnibook is hideous. It’s fine! I actually like the color accents on the keyboard deck. The power button is a different shade of blue depending on the version you get, while the row of function keys is a light shade of gray or blue. Typing on the demo units felt comfortable, too, though I miss the clicky feedback on older Elitebooks and would like a tad more travel on the keyboard.

You might need to invest in a dongle if you want a card reader or have lots of accessories, but the two USB-C sockets and one USB-A might be enough in a pinch. Thankfully, there’s a headphone jack, too. Like every other Copilot+ PC announced today, the Omnibook and Elitebook are both powered by Qualcomm’s Snapdragon X Elite processor and promise 26 hours of battery life when playing local video. HP says its “next-gen AI PCs” have dedicated NPUs that are “capable of 45 trillion operations per second (TOPS),” which is slightly more than the 40 TOPS Microsoft is claiming for its Copilot+ PCs.

The company is also distinguishing its own AI PCs by adorning them with a logo that’s the letters “A” and “I” twisted into a sort of DNA helix. You’ll find it on the keyboard deck and the spine of the machine. It’s not big enough to be annoying, though you’ll certainly see it.

If you're already a fan of the HP Omnibook X or Elitebook Ultra, you can pre-order them today. The Omnibook X will start at $1,200 and come with 1 TB of storage, while the Elitebook Ultra starts at $1,700. Both systems will begin shipping on June 18.

Catch up on all the news from Microsoft's Copilot AI and Surface event today!

This article originally appeared on Engadget at

Apple brings eye-tracking to recent iPhones and iPads

Ahead of Global Accessibility Awareness Day this week, Apple is issuing its typical annual set of announcements around its assistive features. Many of these are useful for people with disabilities, but also have broader applications. For instance, Personal Voice, which was released last year, helps preserve someone's speaking voice. It can be helpful to those who are at risk of losing their voice, or who have other reasons for wanting to retain their own vocal signature for loved ones in their absence. Today, Apple is bringing eye-tracking support to recent models of iPhones and iPads, as well as customizable vocal shortcuts, music haptics, vehicle motion cues and more. 

Built-in eye-tracking for iPhones and iPads

The most intriguing feature of the set is the ability to use the front-facing camera on iPhones or iPads (at least those with the A12 chip or later) to navigate the software without additional hardware or accessories. With this enabled, people can look at their screen to move through elements like apps and menus, then linger on an item to select it. 

That pause to select is something Apple calls Dwell Control, which has already been available elsewhere in the company's ecosystem, such as in the Mac's accessibility settings. The setup and calibration process should only take a few seconds, and on-device AI is at work to understand your gaze. It'll also work with third-party apps at launch, since it's a layer in the OS like Assistive Touch. Since Apple already supported eye-tracking in iOS and iPadOS with eye-detection devices connected, the news today is the ability to do so without extra hardware.
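Apple hasn't published how Dwell Control decides a selection, but the underlying idea (fire a selection once the gaze has rested on one element past a time threshold) can be sketched in a few lines. The class, threshold value and element names below are illustrative assumptions, not Apple's implementation:

```python
import time

DWELL_SECONDS = 1.0  # hypothetical threshold; Apple's actual value isn't public

class DwellSelector:
    """Fires a selection when the gaze rests on one element long enough."""

    def __init__(self, threshold=DWELL_SECONDS):
        self.threshold = threshold
        self.current = None   # element the gaze is currently resting on
        self.since = None     # when the gaze landed on it

    def update(self, element, now=None):
        """Feed the element under the gaze each frame; returns it when dwell completes."""
        now = time.monotonic() if now is None else now
        if element != self.current:
            # Gaze moved to a new element (or off-screen): restart the timer
            self.current, self.since = element, now
            return None
        if element is not None and now - self.since >= self.threshold:
            self.since = now  # reset so the same element doesn't re-fire instantly
            return element    # dwell complete: select
        return None
```

In use, a caller would feed this one gaze sample per frame and treat a non-None return as a tap on that element.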

Vocal shortcuts for easier hands-free control

Apple is also working on improving the accessibility of its voice-based controls on iPhones and iPads. It again uses on-device AI to create personalized models for each person setting up a new vocal shortcut. You can set up a command for a single word or phrase, or even an utterance (like "Oy!" perhaps). Siri will understand these and perform your designated shortcut or task. You can have these launch apps or run a series of actions that you define in the Shortcuts app, and once set up, you won't have to first ask Siri to be ready. 

Another improvement coming to vocal interactions is "Listen for Atypical Speech," which has iPhones and iPads use on-device machine learning to recognize speech patterns and customize their voice recognition around your unique way of vocalizing. This sounds similar to Google's Project Relate, which is also designed to help technology better understand those with speech impairments or atypical speech.

To build these tools, Apple worked with the Speech Accessibility Project at the Beckman Institute for Advanced Science and Technology at the University of Illinois Urbana-Champaign. The institute is also collaborating with other tech giants like Google and Amazon to further development in this space across their products.

Music haptics in Apple Music and other apps

For those who are deaf or hard of hearing, Apple is bringing haptics to music players on iPhone, starting with millions of songs on its own Music app. When enabled, music haptics will play taps, textures and specialized vibrations in tandem with the audio to bring a new layer of sensation. It'll be available as an API so developers can bring greater accessibility to their apps, too. 

Help in cars — motion sickness and CarPlay

Drivers with disabilities need better systems in their cars, and Apple is addressing some of the issues with its updates to CarPlay. Voice control and color filters are coming to the interface for vehicles, making it easier to control apps by talking and for those with visual impairments to see menus or alerts. To that end, CarPlay is also getting bold and large text support, as well as sound recognition for noises like sirens or honks. When the system identifies such a sound, it will display an alert at the bottom of the screen to let you know what it heard. This works similarly to Apple's existing sound recognition feature in other devices like the iPhone.


For those who get motion sickness while using their iPhones or iPads in moving vehicles, a new feature called Vehicle Motion Cues might alleviate some of that discomfort. Since motion sickness is based on a sensory conflict from looking at stationary content while being in a moving vehicle, the new feature is meant to better align the conflicting senses through onscreen dots. When enabled, these dots will line the four edges of your screen and sway in response to the motion it detects. If the car moves forward or accelerates, the dots will sway backwards as if in reaction to the increase in speed in that direction.

Other Apple Accessibility updates

There are plenty more features coming to the company's suite of products, including Live Captions in VisionOS, a new Reader mode in Magnifier, support for multi-line braille and a virtual trackpad for those who use Assistive Touch. It's not yet clear when all of these announced updates will roll out, though Apple has historically made these features available in upcoming versions of iOS. With its developer conference WWDC just a few weeks away, it's likely many of today's tools will get officially released with the next iOS.


Google just snuck a pair of AR glasses into a Project Astra demo at I/O

In a video demonstrating the prowess of its new Project Astra app, a tester asked Gemini "do you remember where you saw my glasses?" The AI impressively responded "Yes, I do. Your glasses were on a desk near a red apple," despite said object not actually being in view when the question was asked. But these weren't your bog-standard visual aids. These glasses had a camera onboard and some sort of visual interface!

The tester picked up their glasses and put them on, and proceeded to ask the AI more questions about things they were looking at. Clearly, there is a camera on the device that's helping it take in the surroundings, and we were shown some sort of interface where a waveform moved to indicate it was listening. Onscreen captions appeared to reflect the answer that was being read aloud to the wearer, as well. So if we're keeping track, that's at least a microphone and speaker onboard too, along with some kind of processor and battery to power the whole thing. 

We only caught a brief glimpse of the wearable, but from the sneaky seconds it was in view, a few things were evident. The glasses had a simple black frame and didn't look at all like Google Glass. They didn't appear very bulky, either. 

In all likelihood, Google is not ready to actually launch a pair of glasses at I/O. It breezed right past the wearable's appearance and barely mentioned them, only to say that Project Astra and the company's vision of "universal agents" could come to devices like our phones or glasses. We don't know much else at the moment, but if you've been mourning Google Glass or the company's other failed wearable products, this might instill some hope yet.

Catch up on all the news from Google I/O 2024 right here!


Google's Project Astra uses your phone's camera and AI to find noise makers, misplaced items and more.

When Google first showcased its Duplex voice assistant technology at its developer conference in 2018, it was both impressive and concerning. Today, at I/O 2024, the company may be bringing up those same reactions again, this time by showing off another application of its AI smarts with something called Project Astra. 

The company couldn't even wait till its keynote today to tease Project Astra, posting a video to its social media of a camera-based AI app yesterday. At its keynote today, though, Google's DeepMind CEO Demis Hassabis shared that his team has "always wanted to develop universal AI agents that can be helpful in everyday life." Project Astra is the result of progress on that front. 

What is Project Astra?

According to a video that Google showed during a media briefing yesterday, Project Astra appeared to be an app with a viewfinder as its main interface. A person holding up a phone pointed its camera at various parts of an office and said "Tell me when you see something that makes sound." When a speaker next to a monitor came into view, Gemini responded "I see a speaker, which makes sound."

The person behind the phone stopped and drew an onscreen arrow to the top circle on the speaker and said, "What is that part of the speaker called?" Gemini promptly responded "That is the tweeter. It produces high-frequency sounds."

Then, in the video that Google said was recorded in a single take, the tester moved over to a cup of crayons further down the table and asked "Give me a creative alliteration about these," to which Gemini said "Creative crayons color cheerfully. They certainly craft colorful creations."

Wait, were those Project Astra glasses? Is Google Glass back?

The rest of the video goes on to show Gemini in Project Astra identifying and explaining parts of code on a monitor and telling the user what neighborhood they were in based on the view out the window. Most impressively, Astra was able to answer "Do you remember where you saw my glasses?" even though said glasses were completely out of frame and were not previously pointed out. "Yes, I do," Gemini said, adding "Your glasses were on a desk near a red apple."

After Astra located those glasses, the tester put them on and the video shifted to the perspective of what you'd see on the wearable. Using a camera onboard, the glasses scanned the wearer's surroundings to see things like a diagram on a whiteboard. The person in the video then asked "What can I add here to make this system faster?" As they spoke, an onscreen waveform moved to indicate it was listening, and as it responded, text captions appeared in tandem. Astra said "Adding a cache between the server and database could improve speed."

The tester then looked over to a pair of cats doodled on the board and asked "What does this remind you of?" Astra said "Schrodinger's cat." Finally, they picked up a plush tiger toy, put it next to a cute golden retriever and asked for "a band name for this duo." Astra dutifully replied "Golden stripes."

How does Project Astra work?

This means that not only was Astra processing visual data in realtime, it was also remembering what it saw and working with an impressive backlog of stored information. This was achieved, according to Hassabis, because these "agents" were "designed to process information faster by continuously encoding video frames, combining the video and speech input into a timeline of events, and caching this information for efficient recall."
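Hassabis's description suggests a rolling, time-indexed store of encoded observations that the model can query later. A toy sketch of that "timeline of events" idea follows; the event structure and keyword-based recall are my own simplification for illustration, not DeepMind's actual system:

```python
from collections import deque

class EventTimeline:
    """Caches encoded video/speech events in order for efficient later recall."""

    def __init__(self, max_events=1000):
        # Bounded deque: the oldest events fall off, keeping the cache small
        self.events = deque(maxlen=max_events)

    def add(self, timestamp, modality, description):
        """Record one encoded observation (e.g. a frame caption or an utterance)."""
        self.events.append({"t": timestamp, "modality": modality, "desc": description})

    def recall(self, keyword):
        """Return the most recent cached event mentioning the keyword, if any."""
        for event in reversed(self.events):
            if keyword.lower() in event["desc"].lower():
                return event
        return None

timeline = EventTimeline()
timeline.add(1.0, "video", "desk with a red apple and a pair of glasses")
timeline.add(2.0, "video", "whiteboard with a system diagram")
found = timeline.recall("glasses")  # the "where are my glasses?" lookup
```

A real agent would store learned embeddings rather than text and search them semantically, but the principle is the same: answers to "do you remember...?" come from the cache, not the live camera feed.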

It's also worth noting that, at least in the video, Astra was responding quickly. Hassabis noted in a blog post that "While we’ve made incredible progress developing AI systems that can understand multimodal information, getting response time down to something conversational is a difficult engineering challenge."

Google has also been working on giving its AI a greater range of vocal expression, saying it used its speech models to "enhance[] how they sound, giving the agents a wider range of intonations." This sort of mimicry of human expressiveness in responses is reminiscent of Duplex's pauses and utterances that led people to think Google's AI might be a candidate for the Turing test.

When will Project Astra be available?

While Astra remains an early feature with no discernible plans for launch, Hassabis wrote that, in the future, these assistants could be available "through your phone or glasses." No word yet on whether those glasses are actually a product or the successor to Google Glass, but Hassabis did write that "some of these capabilities are coming to Google products, like the Gemini app, later this year."



Ask Google Photos to get help making sense of your gallery

Google is inserting more of its Gemini AI into every single product it has and the next target in its sights is Photos. At its I/O developer conference today, the company's CEO Sundar Pichai announced a feature called Ask Photos, which is designed to help you find specific images in your gallery by talking to Gemini. 

Ask Photos will show up as a new tab at the bottom of your Google Photos app. It'll roll out to Google One subscribers first, starting in US English, over the upcoming months. When you tap over to that panel, you'll see the Gemini star icon and a welcome message above a bar that prompts you to "search or ask about Photos."

According to Google, you can ask things like "show me the best photo from each national park I've visited," which not only draws upon GPS information but also requires the AI to exercise some judgement in determining what is "best." The company's VP for Photos Shimrit Ben-Yair told Engadget that you'll be able to provide feedback to the AI and let it know which pictures you preferred instead. "Learning is key," Ben-Yair said.

You can also ask Photos to find your top photos from a recent vacation and generate a caption to describe them so you can more quickly share them to social media. Again, if you didn't like what Gemini suggested, you can also make tweaks later on.

For now, you'll have to type your query to Ask Photos — voice input isn't yet supported. And as the feature rolls out, those who opt in to use it will see their existing search feature get "upgraded" to Ask. However, Google said that "key search functionality, like quick access to your face groups or the map view, won't be lost."

The company explained that there are three parts to the Ask Photos process: "Understanding your question," "crafting a response" and "ensuring safety and remembering corrections." Though safety is only mentioned in the final stage, it should be baked in throughout. The company acknowledged that "the information in your photos can be deeply personal, and we take the responsibility of protecting it very seriously."

To that end, queries are not stored anywhere, though they are processed in the cloud (not on device). People will not review conversations or personal data in Ask Photos, except "in rare cases to address abuse or harm." Google also said it doesn't train "any generative AI product outside of Google Photos on this personal data, including other Gemini models and products."

Your media continues to be protected by the same security and privacy measures that cover your use of Google Photos. That's a good thing, since one of the potentially more helpful ways to use Ask Photos might be to get information like passport or license expiry dates from pictures you might have snapped years ago. It uses Gemini's multimodal capabilities to read text in images to find answers, too.

Of course, AI isn't new in Google Photos. You've always been able to search the app for things like "credit card" or a specific friend, using the company's facial and object recognition algorithms. But Gemini AI brings generative processing so Photos can do a lot more than just deliver pictures with certain people or items in them.

For example, you might also get Photos to tell you what themes you might have used for the last few birthday parties you threw for your partner or child. Gemini AI is at work here to study your pictures and figure out what themes you already adopted.

There are a lot of promising use cases for Ask Photos, which is an experimental feature at the moment and is "starting to roll out soon." Like other Photos tools, it might begin as a premium feature for One subscribers and Pixel owners before trickling down to all who use the free app. No official word yet on when or whether that might happen, though.



Rabbit R1 hands-on: Already more fun and accessible than the Humane AI Pin

At CES this January, startup Rabbit unveiled its first device, just in time for the end of the year of the rabbit according to the lunar calendar. It’s a cute little orange square that was positioned as a “pocket companion that moves AI from words to action.” In other words, it’s basically a dedicated AI machine that acts kind of like a walkie talkie to a virtual assistant.

Sound familiar? You’re probably thinking of the Humane AI Pin, which was announced last year and started shipping this month. I awarded it a score of 50 (out of 100) earlier this month, while outlets like Wired and The Verge gave it similarly low marks of 4 out of 10.

The people at Rabbit have been paying close attention to the aftermath of the Humane AI Pin launch and reviews. It was evident in founder and CEO Jesse Lyu's address at an unboxing event at the TWA hotel in New York last night, where the company showed off the Rabbit R1 and eager early adopters listened rapturously before picking up their pre-orders. Engadget's sample unit is on its way to Devindra Hardawar, who will be tackling this review. But I was in attendance last night to check out units at the event that industry peers were unboxing (thanks to Max Weinbach for the assistance!).

What is the Rabbit R1?

As a refresher, the Rabbit R1 is a bright orange square, co-engineered by Teenage Engineering and Rabbit. It has a 2.88-inch color display built in, an 8-megapixel camera that can face both ways and a scroll wheel reminiscent of the crank on the Playdate. The latter, by the way, is a compact gaming handheld that was also designed by Teenage Engineering, and the Rabbit R1 shares its adorable retro aesthetic. Again, like the Humane AI Pin, the Rabbit R1 is supposed to be your portal to an AI-powered assistant and operating system. However, there are a few key differences, which Lyu covered extensively at the launch event last night.

Rabbit R1 vs Humane AI Pin

Let's get this out of the way: The Rabbit R1 already looks a lot more appealing than the Humane AI Pin. First of all, it costs $199 — less than a third of the AI Pin's $700. Humane also requires a monthly $24 subscription fee or its device will be rendered basically useless. Rabbit, as Lyu repeatedly reiterated all night, does not require such a fee. You'll just be responsible for your own cellular service (4G LTE only, no 5G), and can bring your own SIM card or just default to good old Wi-Fi. The SIM slot sits next to the USB-C charging port.

The R1's advantages over the Pin don't end there. By virtue of its integrated screen (instead of a wonky, albeit intriguing projector), the orange square is more versatile and a lot easier to interact with. You can use the wheel to scroll through elements and press the button on the right side to confirm a choice. You could also tap the screen or push down a button to start talking to the software.

Now, I haven’t taken a photo with the device myself, but I was pleasantly surprised by the quality of images I saw on its screen. Maybe my expectations were pretty low, but when reviewers in a media room were setting up their devices by using the onboard cameras to scan QR codes, I found the images on the screens clear and impressively vibrant. Users won’t just be capturing photos, videos and QR codes with the Rabbit R1, by the way. It also has a Vision feature like the Humane AI Pin that will analyze an image you take and tell you what’s in it. In Lyu’s demo, the R1 told him that it saw a crowd of people at “an event or concert venue.”


We’ll have to wait till Devindra actually takes some pictures with our R1 unit and downloads them from the web-based portal that Rabbit cleverly calls the Rabbit Hole. Its name for camera-based features is Rabbit Eye, which is just kind of delightful. In fact, another thing that distinguishes Rabbit from Humane is the former’s personality. The R1 just oozes character. From the witty feature names to the retro aesthetic to the onscreen animation and the fact that the AI will actually make (cheesy) jokes, Rabbit and Teenage Engineering have developed something that’s got a lot more flavor than Humane’s almost clinical appearance and approach.

Of all the things Lyu took shots at Humane about last night, though, talk of the R1’s thermal performance or the AI Pin’s heat issues was conspicuously absent. To be clear, the R1 is slightly bigger than the Humane device, and it uses an octa-core MediaTek MT6765 processor, compared to the AI Pin’s Snapdragon chip. There’s no indication at the moment that the Rabbit device will run as hot as Humane’s Pin, but I’ve been burned (metaphorically) before and remain cautious.

I am also slightly concerned about the R1’s glossy plastic build. It looks nice and feels lighter than expected, weighing just 115 grams or about a quarter of a pound. The scroll wheel moved smoothly when I pushed it up and down, and there were no physical grooves or notches, unlike the rotating hinge on Samsung’s Galaxy watches. The camera housing lay flush with the rest of the R1’s case, and in general the unit felt refined and finished.

Most of my other impressions of the Rabbit R1 come from Lyu’s onstage demos, where I was surprised by how quickly his device responded to his queries. He was able to type on the R1’s screen and tilted it so that the controls sat below the display instead of to its right. That way, there was enough room for an onscreen keyboard that Lyu said was the same width as the one on the original iPhone.

What’s next for the Rabbit R1?

Rabbit also drew attention for its so-called Large Action Model (LAM), which acts as an interpreter to convert popular apps like Spotify or Doordash into interfaces that work on the R1’s simple-looking operating system. Lyu also showed off some of these at the event last night, but I’d much rather wait for us to test these out for ourselves.

Lyu made many promises to the audience, seeming to acknowledge that the R1 might not be fully featured when it arrives in their hands. Even on the company’s website, there’s a list of features that are planned, in the works or being explored. For one thing, an alarm is coming this summer, along with a calendar, contacts app, GPS support, memory recall and more. Throughout his speech, Lyu repeated the phrase “we’re gonna work on” amid veiled references to Humane (for instance, emphasizing that Rabbit doesn’t require an additional subscription fee). Ultimately, Lyu said “we just keep adding value to this thing,” in reference to a roadmap of upcoming features.

Hopefully, Lyu and his team are able to deliver on the promises they’ve made. I’m already very intrigued by a “teach mode” he teased, which is basically a way to generate macros by recording an action on the R1, and letting it learn what you want to do when you tell it something. Rabbit’s approach certainly seems more tailored to tinkerers and enthusiasts, whereas Humane’s is ambitious and yet closed off. This feels like Google and Apple all over again, except whether the AI device race will ever reach the same scale remains to be seen.

Last night’s event also made it clear what Rabbit wants us to think. It was hosted at the TWA hotel, which itself used to be the head house of the TWA Flight Center. The entire place is an homage to retro vibes, and the entry to Rabbit’s event was lined with display cases containing gadgets like a Pokedex, a Sony Watchman, a Motorola pager, a Game Boy Color and more. Every glass box I walked by made me squeal, bringing up a pleasant sense memory that also resurfaced when I played with the R1. The R1 didn't feel good because it was premium or durable; it felt good because it reminded me of my childhood.

Whether Rabbit is successful with the R1 depends on how you define success. The company has already sold more than 100,000 units this quarter and looks poised to sell at least one more (I’m already whipping out my credit card). I remain skeptical about the usefulness of AI devices, but, in large part due to its price and ability to work with third-party apps at launch, Rabbit has already succeeded in making me feel like Alice entering Wonderland.


The Humane AI Pin is the solution to none of technology's problems

I’ve found myself at a loss for words when trying to explain the Humane AI Pin to my friends. The best description so far is that it’s a combination of a wearable Siri button with a camera and built-in projector that beams onto your palm. But each time I start explaining that, I get so caught up in pointing out its problems that I never really get to fully detail what the AI Pin can do. Or is meant to do, anyway.

Yet, words are crucial to the Humane AI experience. Your primary mode of interacting with the pin is through voice, accompanied by touch and gestures. Without speaking, your options are severely limited. The company describes the device as your “second brain,” but the combination of holding out my hand to see the projected screen, waving it around to navigate the interface and tapping my chest and waiting for an answer all just made me look really stupid. When I remember that I was actually eager to spend $700 of my own money to get a Humane AI Pin, not to mention shell out the required $24 a month for the AI and the company’s 4G service riding on T-Mobile’s network, I feel even sillier.

What is the Humane AI Pin?

In the company’s own words, the Humane AI Pin is the “first wearable device and software platform built to harness the full power of artificial intelligence.” If that doesn’t clear it up, well, I can’t blame you.

There are basically two parts to the device: the Pin and its magnetic attachment. The Pin is the main piece, which houses a touch-sensitive panel on its face, with a projector, camera, mic and speakers lining its top edge. It’s about the same size as an Apple Watch Ultra 2, both measuring about 44mm (1.73 inches) across. The Humane wearable is slightly squatter, though, with its 47.5mm (1.87 inches) height compared to the Watch Ultra’s 49mm (1.92 inches). It’s also half the weight of Apple’s smartwatch, at 34.2 grams (1.2 ounces).

The top of the AI Pin is slightly thicker than the bottom, since it has to contain extra sensors and indicator lights, but it’s still about the same depth as the Watch Ultra 2. Snap on a magnetic attachment, and you add about 8mm (0.31 inches). There are a few accessories available, with the most useful being the included battery booster. You’ll get two battery boosters in the “complete system” when you buy the Humane AI Pin, as well as a charging cradle and case. The booster helps clip the AI Pin to your clothes while adding some extra hours of life to the device (in theory, anyway). It also brings an extra 20 grams (0.7 ounces) with it, but even including that the AI Pin is still 10 grams (0.35 ounces) lighter than the Watch Ultra 2.

That weight (or lack thereof) is important, since anything too heavy would drag down on your clothes, which would not only be uncomfortable but also block the Pin’s projector from functioning properly. If you're wearing it with a thinner fabric, by the way, you’ll have to swap the booster for the latch accessory, a $40 plastic tile that provides no additional power. You can also get the stainless steel clip that Humane sells for $50 to stick it onto heavier materials or belts and backpacks. Whichever accessory you choose, though, you’ll place it on the underside of your garment and stick the Pin on the outside to connect the pieces.

Hayato Huseman for Engadget

How the AI Pin works

But you might not want to place the AI Pin on a bag, as you need to tap on it to ask a question or pull up the projected screen. Every interaction with the device begins with a touch; there is no wake word, so having it out of reach sucks.

Tap and hold on the touchpad, ask a question, then let go and wait a few seconds for the AI to answer. You can hold out your palm to read what it said, bringing your hand closer to and further from your chest to toggle through elements. To jump through individual cards and buttons, you’ll have to tilt your palm up or down, which can get in the way of seeing what’s on display. But more on that in a bit.

There are some built-in gestures offering shortcuts to functions like taking a picture or video or controlling music playback. Double-tapping the Pin with two fingers snaps a shot, while double-tapping and holding on the second tap triggers a 15-second video. Swiping up or down adjusts the device or Bluetooth headphone volume while the assistant is talking or when music is playing.

Cherlynn Low for Engadget

Each person who orders the Humane AI Pin will have to set up an account and go through onboarding on the website before the company will ship out their unit. Part of this process includes signing into your Google or Apple accounts to port over contacts, as well as watching a video that walks you through those gestures I described. Your Pin will arrive already linked to your account with its eSIM and phone number sorted. This likely simplifies things so users won’t have to fiddle with tedious steps like installing a SIM card or signing into their profiles. It felt a bit strange, but it’s a good thing because, as I’ll explain in a bit, trying to enter a password on the AI Pin is a real pain.

Talking to the Humane AI Pin

The easiest way to interact with the AI Pin is by talking to it. It’s supposed to feel natural, like you’re talking to a friend or assistant, and you shouldn’t have to feel forced when asking it for help. Unfortunately, that just wasn’t the case in my testing.

When the AI Pin did understand me and answer correctly, it usually took a few seconds to reply, in which time I could have already gotten the same results on my phone. For a few things, like adding items to my shopping list or converting Canadian dollars to USD, it performed adequately. But “adequate” seems to be the best case scenario.

Sometimes the answers were too long or irrelevant. When I asked “Should I watch Dream Scenario,” it said “Dream Scenario is a 2023 comedy/fantasy film featuring Nicolas Cage, with positive ratings on IMDb, Rotten Tomatoes and Metacritic. It’s available for streaming on platforms like YouTube, Hulu and Amazon Prime Video. If you enjoy comedy and fantasy genres, it may be worth watching.”

Setting aside the fact that the “answer” to my query came after a lot of preamble I found unnecessary, I also just didn’t find the recommendation satisfying. It wasn’t giving me a straight answer, which is understandable, but ultimately none of what it said felt different from scanning the top results of a Google search. I would have gleaned more info had I looked the film up on my phone, since I’d be able to see the actual Rotten Tomatoes and Metacritic scores.

To be fair, the AI Pin was smart enough to understand follow-ups like “How about The Witch” without needing me to repeat my original question. But it’s 2024; we’re way past assistants that need so much hand-holding.

We’re also past the days of needing to word our requests in specific ways for AI to understand us. Though Humane has said you can speak to the Pin “naturally,” there were some instances when that just didn’t work. First, it occasionally misheard me, even in my quiet living room. When I asked “Would I like YouTuber Danny Gonzalez,” it thought I said “would I like YouTube do I need Gonzalez” and responded “It’s unclear if you would like Dulce Gonzalez as the content of their videos and channels is not specified.”

When I repeated myself by carefully saying “I meant Danny Gonzalez,” the AI Pin spouted back facts about the YouTuber’s life and work, but did not answer my original question.

That’s not as bad as the fact that when I tried to get the Pin to describe what was in front of me, it simply would not. Humane has a Vision feature in beta that’s meant to let the AI Pin use its camera to see and analyze things in view, but when I tried to get it to look at my messy kitchen island, nothing happened. I’d ask “What’s in front of me” or “What am I holding out in front of you” or “Describe what’s in front of me,” which is how I’d phrase this request naturally. I tried so many variations of this, including “What am I looking at” and “Is there an octopus in front of me,” to no avail. I even took a photo and asked “can you describe what’s in that picture.”

Every time, I was told “Your AI Pin is not sure what you’re referring to” or “This question is not related to AI Pin” or, in the case where I first took a picture, “Your AI Pin is unable to analyze images or describe them.” I was confused why this wasn’t working even after I double checked that I had opted in and enabled the feature, and finally realized after checking the reviewers' guide that I had to use prompts that started with the word “Look.”

Look, maybe everyone else would have instinctively used that phrasing. But if you’re like me and didn’t, you’ll probably give up and never use this feature again. Even after I learned how to properly phrase my Vision requests, they were still clunky as hell. It was never as easy as “Look for my socks” but required two-part sentences like “Look at my room and tell me if there are boots in it” or “Look at this thing and tell me how to use it.”

When I worded things just right, results were fairly impressive. It confirmed there was a “Lysol can on the top shelf of the shelving unit” and a “purple octopus on top of the brown cabinet.” I held out a cheek highlighter and asked what to do with it. The AI Pin accurately told me “The Carry On 2 cream by BYBI Beauty can be used to add a natural glow to skin,” among other things, although it never explicitly told me to apply it to my face. I asked it where an object I was holding came from, and it just said “The image is of a hand holding a bag of mini eggs. The bag is yellow with a purple label that says ‘mini eggs.’” Again, it didn't answer my actual question.

Humane’s AI, which is powered by a mix of OpenAI’s recent versions of GPT and other sources including its own models, just doesn’t feel fully baked. It’s like a robot pretending to be sentient — capable of indicating it sort of knows what I’m asking, but incapable of delivering a direct answer.

My issues with the AI Pin’s language model and features don’t end there. Sometimes it just refuses to do what I ask of it, like restart or shut down. Other times it does something entirely unexpected. When I said “Send a text message to Julian Chokkattu,” who’s a friend and fellow AI Pin reviewer over at Wired, I thought I’d be asked what I wanted to tell him. Instead, the device simply said OK and told me it sent the words “Hey Julian, just checking in. How's your day going?” to Chokkattu. I've never said anything like that to him in our years of friendship, but I guess technically the AI Pin did do what I asked.

Hayato Huseman for Engadget

Using the Humane AI Pin’s projector display

If only voice interactions were the worst thing about the Humane AI Pin, but the list of problems only starts there. I was most intrigued by the company’s “pioneering Laser Ink display” that projects green rays onto your palm, as well as the gestures that enabled interaction with “onscreen” elements. But my initial wonder quickly gave way to frustration and a dull ache in my shoulder. It might be tiring to hold up your phone to scroll through Instagram, but at least you can set that down on a table and continue browsing. With the AI Pin, if your arm is not up, you’re not seeing anything.

Then there’s the fact that it’s a pretty small canvas. I would see about seven lines of text each time, with about one to three words on each row depending on the length. This meant I had to hold my hand up even longer so I could wait for notifications to finish scrolling through. I also have a smaller palm than some other reviewers I saw while testing the AI Pin. Julian over at Wired has a larger hand and I was downright jealous when I saw he was able to fit the entire projection onto his palm, whereas the contents of my display would spill over onto my fingers, making things hard to read.

It’s not just those of us afflicted with tiny palms that will find the AI Pin tricky to see. Step outside and you’ll have a hard time reading the faint projection. Even on a cloudy, rainy day in New York City, I could barely make out the words on my hands.

When you can read what’s on the screen, interacting with it might make you want to rip your eyes out. Like I said, you’ll have to move your palm closer to and further from your chest to select the right cards to enter your passcode. It’s a bit like dialing a rotary phone, with cards for individual digits from 0 to 9. Go further away to get to the higher numbers and the backspace button, and come back in for the smaller ones.

This gesture is smart in theory but it’s very sensitive. There’s a very small range of usable space since there is only so far your hand can go, so the distance between each digit is fairly small. One wrong move and you’ll accidentally select something you didn’t want and have to go all the way out to delete it. To top it all off, moving my arm around while doing that causes the Pin to flop about, meaning the screen shakes on my palm, too. On average, unlocking my Pin, which involves entering a four-digit passcode, took me about five seconds.

On its own, this doesn’t sound so bad, but bear in mind that you’ll have to re-enter this each time you disconnect the Pin from the booster, latch or clip. It’s currently springtime in New York, which means I’m putting on and taking off my jacket over and over again. Every time I go inside or out, I move the Pin to a different layer and have to look like a confused long-sighted tourist reading my palm at various distances. It’s not fun.

Of course, you can turn off the setting that requires password entry each time you remove the Pin, but that’s simply not great for security.

Though Humane says “privacy and transparency are paramount with AI Pin,” by its very nature the device isn’t suitable for performing confidential tasks unless you’re alone. You don’t want to dictate a sensitive message to your accountant or partner in public, nor might you want to speak your Wi-Fi password out loud.

The latter is one of two input methods for setting up an internet connection, by the way. If you choose not to spell your Wi-Fi key out loud, you can go to the Humane website and type in your network name (you have to enter it yourself; there’s no list of available networks to pick from) and password to generate a QR code for the Pin to scan. Having to verbally relay alphanumeric characters to the Pin is not ideal, and though the QR code technically works, it just involves too much effort. It’s like giving someone a spork when they asked for a knife and fork: good enough to get by, but not a perfect replacement.

Cherlynn Low for Engadget

The Humane AI Pin’s speaker

Since communicating through speech is the easiest means of using the Pin, you’ll need to be verbal and have hearing. If you choose not to raise your hand to read the AI Pin’s responses, you’ll have to listen for it. The good news is, the onboard speaker is usually loud enough for most environments, and I only struggled to hear it on NYC streets with heavy traffic passing by. I never attempted to talk to it on the subway, however, nor did I obnoxiously play music from the device while I was outside.

In my office and gym, though, I did get the AI Pin to play some songs. The music sounded fine — I didn’t get thumping bass or particularly crisp vocals, but I could hear instruments and crooners easily. Compared to my iPhone 15 Pro Max, it’s a bit tinny, as expected, but not drastically worse.

The problem is there are, once again, some caveats. The most important of these is that at the moment, you can only use Tidal’s paid streaming service with the Pin. You’ll get 90 days free with your purchase, and then have to pay $11 a month (on top of the $24 you already give to Humane) to continue streaming tunes from your Pin. Humane hasn’t said whether other music services will eventually be supported, so unless you’re already on Tidal, listening to music from the Pin might just not be worth the price. Annoyingly, Tidal also doesn’t have the extensive library that competing providers do, so I couldn’t even play songs like Beyoncé’s latest album or Taylor Swift’s discography (although remixes of her songs were available).

Though Humane has described its “personic speaker” as being able to create a “bubble of sound,” that “bubble” certainly has a permeable membrane. People around you will definitely hear what you’re playing, so unless you’re trying to start a dance party, it might be too disruptive to use the AI Pin for music without pairing Bluetooth headphones. You’ll also probably get better sound quality from Bose, Beats or AirPods anyway.

The Humane AI Pin camera experience

I’ll admit it — a large part of why I was excited for the AI Pin is its onboard camera. My love for taking photos is well-documented, and with the Pin, snapping a shot is supposed to be as easy as double-tapping its face with two fingers. I was even ready to put up with subpar pictures from its 13-megapixel sensor for the ability to quickly capture a scene without having to first whip out my phone.

Sadly, the Humane AI Pin was simply too slow and feverish to deliver on that premise. I frequently ran into times when, after taking a bunch of photos and holding my palm up to see how each snap turned out, the device would get uncomfortably warm. At least twice in my testing, the Pin just shouted “Your AI Pin is too warm and needs to cool down” before shutting down.

A sample image from the Humane AI Pin.
Cherlynn Low for Engadget

Even when it’s running normally, using the AI Pin’s camera is slow. I’d double tap it and then have to stand still for at least three seconds before it would take the shot. I appreciate that there’s audio and visual feedback through the flashing green lights and the sound of a shutter clicking when the camera is going, so both you and people around know you’re recording. But it’s also a reminder of how long I need to wait — the “shutter” sound will need to go off thrice before the image is saved.

I took photos and videos in various situations under different lighting conditions, from a birthday dinner in a dimly lit restaurant to a beautiful park on a cloudy day. I recorded some workout footage in my building’s gym with large windows, and in general anything taken with adequate light looked good enough to post. The videos might make viewers a little motion sick, since the camera was clipped to my sports bra and moved around with me, but that’s tolerable.

In dark environments, though, forget about it. Even my Nokia E7 from 2011 delivered clearer pictures, most likely because I could hold it steady while framing a shot. The photos of my friends at dinner were so grainy, one person even seemed translucent. To my knowledge, that buddy is not a ghost, either.

A sample image from the Humane AI Pin.
Cherlynn Low for Engadget

To its credit, Humane’s camera has a generous 120-degree field of view, meaning you’ll capture just about anything in front of you. When you’re not sure if you’ve gotten your subject in the picture, you can hold up your palm after taking the shot, and the projector will beam a monochromatic preview so you can verify. It’s less a way to admire your skilled composition or level of detail, and more a quick confirmation that you did indeed get the receipt in view before moving on.

Cosmos OS on the Humane AI Pin

When it comes time to retrieve those pictures off the AI Pin, you’ll just need to navigate to Humane’s web dashboard in any browser and sign in. There, you’ll find your photos and videos under “Captures,” your notes, recently played music and calls, as well as every interaction you’ve had with the assistant. That last one made recalling every weird exchange with the AI Pin for this review very easy.

You’ll have to make sure the AI Pin is connected to Wi-Fi and power, and has at least a 50 percent charge, before full-resolution photos and videos will upload to the dashboard. Before that, you can still scroll through previews in a gallery, even though you can’t download or share them.

The web portal is fairly rudimentary, with large square tiles serving as cards for sections like “Captures,” “Notes” and “My Data.” Going through them just shows you things you’ve saved or asked the Pin to remember, like a friend’s favorite color or their birthday. Importantly, there isn’t an area for you to view your text messages, so if you wanted to type out a reply from your laptop instead of dictating to the Pin, sorry, you can’t. The only way to view messages is by putting on the Pin, pulling up the screen and navigating the onboard menus to find them.

Hayato Huseman for Engadget

That brings me to what you see on the AI Pin’s visual interface. If you’ve raised your palm right after asking it something, you’ll see your answer in text form. But if you had brought up your hand after unlocking or tapping the device, you’ll see its barebones home screen. This contains three main elements — a clock widget in the middle, the word “Nearby” in a bubble at the top and notifications at the bottom. Tilting your palm scrolls through these, and you can pinch your index finger and thumb together to select things.

Push your hand further back and you’ll bring up a menu with five circles that will lead you to messages, phone, settings, camera and media player. You’ll need to tilt your palm to scroll through these, but because they’re laid out in a ring, it’s not as straightforward as simply aiming up or down. Trying to get the right target here was one of the greatest challenges I encountered while testing the AI Pin. I was rarely able to land on the right option on my first attempt. That, along with the fact that you have to put on the Pin (and unlock it), made it so difficult to see messages that I eventually just gave up looking at texts I received.

The Humane AI Pin overheating, in use and battery life

One reason I sometimes took off the AI Pin is that it would frequently get too warm and need to “cool down.” Once I removed it, I would not feel the urge to put it back on. I did wear it a lot in the first few days I had it, typically from 7:45AM when I headed out to the gym till evening, depending on what I was up to. Usually at about 3PM, after taking a lot of pictures and video, I would be told my AI Pin’s battery was running low, and I’d need to swap out the battery booster. This didn’t seem to work sometimes, with the Pin dying before it could get enough power through the accessory. At first it appeared the device simply wouldn’t detect the booster, but I later learned it’s just slow and can take up to five minutes to recognize a newly attached booster.

When I wore the AI Pin to my friend (and fellow reviewer) Michael Fisher’s birthday party just hours after unboxing it, I had it clipped to my tank top just hovering above my heart. Because it was so close to the edge of my shirt, I would accidentally brush past it a few times when reaching for a drink or resting my chin on my palm a la The Thinker. Normally, I wouldn’t have noticed the Pin, but as it was running so hot, I felt burned every time my skin came into contact with its chrome edges. The touchpad also grew warm with use, and the battery booster resting against my chest also got noticeably toasty (though it never actually left a mark).

Hayato Huseman for Engadget

Part of the reason the AI Pin ran so hot is likely that there’s not a lot of room for the heat generated by its octa-core Snapdragon processor to dissipate. I had also been using it near constantly to show my companions the pictures I had taken, and Humane has said its laser projector is “designed for brief interactions (up to six to nine minutes), not prolonged usage” and that it had “intentionally set conservative thermal limits for this first release that may cause it to need to cool down.” The company added that it not only plans to “improve uninterrupted run time in our next software release,” but also that it’s “working to improve overall thermal performance in the next software release.”

There are other things I need Humane to address via software updates ASAP. The fact that its AI sometimes decides not to do what I ask, like telling me “Your AI Pin is already running smoothly, no need to restart” when I asked it to restart, is not only surprising but limiting. There are no hardware buttons to turn the Pin on or off, and the only other way to trigger a restart is to pull up the dreaded screen, painstakingly go to the menu, hopefully land on settings and find the Power option. By which point, if the Pin hasn’t shut down, my arm will have.

A lot of my interactions with the AI Pin also felt like problems I encountered with earlier versions of Siri, Alexa and the Google Assistant. The overly wordy answers, for example, or the pronounced two- or three-second delay before a response, are all reminiscent of the early 2010s. When I asked the AI Pin to “remember that I parked my car right here,” it just saved a note saying “Your car is parked right here,” with no GPS information and no way to navigate back. So I guess I parked my car on a sticky note.

To be clear, that’s not something that Humane ever said the AI Pin can do, but it feels like such an easy thing to offer, especially since the device does have onboard GPS. Google’s made entire lines of bags and Levi’s jackets that serve the very purpose of dropping pins to revisit places later. If your product is meant to be smart and revolutionary, it should at least be able to do what its competitors already can, not to mention offer features they don’t.


The one thing the AI Pin actually manages to do competently is act as an interpreter. After you ask it to “translate to [x language],” you hold down two fingers while you talk; when you let go, it reads out what you said in the relevant tongue. I tried talking to myself in English and Mandarin, and was frankly impressed not only with the accuracy of the translation and general vocal expressiveness, but also with how fast responses came through. You don’t even need to specify the language the speaker is using. As long as you’ve set the target language, anything said in Mandarin will be translated to English and anything said in English will be read out in Mandarin.

It’s worth considering the fact that using the AI Pin is a nightmare for anyone who gets self-conscious. I’m pretty thick-skinned, but even I tried to hide the fact that I had a strange gadget with a camera pinned to my person. Luckily, I didn’t get any obvious stares or confrontations, but I heard from my fellow reviewers that they did. And as much as I like the idea of a second brain I can wear and offload little notes and reminders to, nothing that the AI Pin does well is actually executed better than a smartphone.


Not only is the Humane AI Pin slow, finicky and barely even smart, using it made me look pretty dumb. In a few days of testing, I went from being excited to show it off to my friends to not having any reason to wear it.

Humane’s vision was ambitious, and the laser projector initially felt like a marvel. At first glance, it looked and felt like a refined product. But it just seems like at every turn, the company had to come up with solutions to problems it created. No screen or keyboard to enter your Wi-Fi password? No worries, use your phone or laptop to generate a QR code. Want to play music? Here you go, a 90-day subscription to Tidal, but you can only play music on that service.

The company promises to make software updates that could improve some issues, and the few tweaks my unit received during this review did make some things (like music playback) work better. The problem is that as it stands, the AI Pin doesn’t do enough to justify its $700 price and $24-a-month subscription, and I simply cannot recommend that anyone spend this much money for the one or two things it does adequately.

Maybe in time, the AI Pin will be worth revisiting, but it’s hard to imagine why anyone would need a screenless AI wearable when so many devices exist today that you can use to talk to an assistant. From speakers and phones to smartwatches and cars, the world is full of useful AI access points that allow you to ditch a screen. Humane says it’s committed to a “future where AI seamlessly integrates into every aspect of our lives and enhances our daily experiences.” 

After testing the company’s AI Pin, that future feels pretty far away.

This article originally appeared on Engadget at

The best smartphone cameras for 2024: How to choose the phone with the best photography chops

I remember begging my parents to get me a phone with a camera when the earliest ones were launched. The idea of taking photos wherever I went was new and appealing, but it’s since become less of a novelty and more of a daily habit. Yes, I’m one of those. I take pictures of everything — from beautiful meals and funny signs to gorgeous landscapes and plumes of smoke billowing in the distance.

If you grew up in the Nokia 3310 era like me, then you know how far we’ve come. Gone are the 2-megapixel embarrassments that we used to post to Friendster with glee. Now, many of us use the cameras on our phones to not only capture precious memories of our adventures and loved ones, but also to share our lives with the world.

I’m lucky enough that I have access to multiple phones thanks to my job, and at times would carry a second device with me on a day-trip just because I preferred its cameras. But most people don’t have that luxury. Chances are, if you’re reading this, a phone’s cameras may be of utmost importance to you. But you’ll still want to make sure the device you end up getting doesn’t fall flat in other ways. At Engadget, we test and review dozens of smartphones every year; our top picks below represent not only the best phone cameras available right now, but also the most well-rounded options out there.

What to look for when choosing a phone for its cameras

Before scrutinizing a phone’s camera array, you’ll want to take stock of your needs — what are you using it for? If your needs are fairly simple, like taking photos and videos of your new baby or pet, most modern smartphones will serve you well. Those who plan to shoot for audiences on TikTok, Instagram or YouTube should look for video-optimizing features like stabilization and high frame rate support (for slow-motion clips).

Most smartphones today have at least two cameras on the rear and one up front. Those that cost more than $700 usually come with three, including wide-angle, telephoto or macro lenses. We’ve also reached a point where the number of megapixels (MP) doesn’t really matter anymore — most flagship phones from Apple, Samsung and Google have sensors that are either 48MP or 50MP. You’ll even come across some touting resolutions of 108MP or 200MP, in pro-level devices like the Galaxy S24 Ultra.

Most people won’t need anything that sharp, and in general, smartphone makers combine the pixels to deliver pictures that are the equivalent of 12MP anyway. The benefits of pixel-binning are fairly minor in phone cameras, though, and you’ll usually need to blow up an image to fit a 27-inch monitor before you’ll see the slightest improvements.
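To make the binning idea concrete, here's a minimal sketch in Python with NumPy. It's purely illustrative, not any manufacturer's actual pipeline — real binning happens on the sensor and image processor, typically on the raw Bayer mosaic — but it shows how a 48MP readout becomes a 12MP image by pooling each 2x2 block of pixels:

```python
import numpy as np

def bin_2x2(sensor: np.ndarray) -> np.ndarray:
    """Average each 2x2 block of sensor pixels into one output pixel.

    Each output pixel pools the light captured by four physical pixels,
    improving low-light signal at the cost of resolution (48MP -> 12MP).
    Assumes the sensor's height and width are even.
    """
    h, w = sensor.shape
    # Group rows and columns into pairs, then average over each pair.
    return sensor.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# A toy 4x4 "sensor" becomes a 2x2 image.
raw = np.arange(16, dtype=float).reshape(4, 4)
binned = bin_2x2(raw)
print(binned.shape)  # (2, 2)
```

The averaging is why the marketing math works out: four 48MP pixels feed one 12MP pixel, so the output has a quarter of the pixel count but each pixel is built from four times the captured light.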

In fact, smartphone cameras tend to be so limited in size that there’s often little room for variation across devices. They typically use sensors from the same manufacturers and have similar aperture sizes, lens lengths and fields of view. So while it might be worth considering the impact of sensor size on things like DSLRs or mirrorless cameras, on a smartphone those differences are minimal.

Sensor size and field of view

If you still want a bit of guidance on what to look for, here are some quick tips: By and large, the bigger the sensor the better, as this will allow more light and data to be captured. Not many phone makers list the sensor size in their spec sheets, so you’ll have to dig around for this info. A larger aperture (indicated by a smaller f-number, like f/1.8 versus f/2.2) is ideal for the same reason, and it also affects the depth of field (or background blur) you get without software. Since portrait modes are available on most phones these days, though, a big aperture isn’t as necessary to achieve this effect.
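For the curious, the f-number in those spec sheets is just a ratio, shown here as a quick sketch (the focal length and diameter figures below are hypothetical, not any specific phone's specs):

```python
def f_number(focal_length_mm: float, aperture_diameter_mm: float) -> float:
    # The f-number is the lens's focal length divided by the diameter of
    # its aperture; a smaller result means a wider opening and more light.
    return focal_length_mm / aperture_diameter_mm

# Hypothetical main-camera optics: 6.9mm focal length, 3.9mm opening.
print(round(f_number(6.9, 3.9), 2))  # 1.77, i.e. roughly f/1.8
```

Because light gathering scales with the aperture's area, halving the f-number at a given focal length quadruples the light collected — which is why seemingly small gaps like f/1.8 versus f/2.2 show up in low-light shots.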

When looking for a specific field of view on a wide-angle camera, know that the most common offering from companies like Samsung and Google is about 120 degrees. Finally, most premium phones like the iPhone 15 Pro Max and Galaxy S24 Ultra offer telephoto systems that go up to 5x optical zoom with software taking that to 20x or even 100x.

Processing and extra features

Those hardware features will likely perform at a similar quality across the board; where you really see a difference is in the processing. Samsung traditionally renders pictures that are more saturated, while Google’s Pixel phones take photos that are more neutral and evenly exposed. iPhones have historically produced pictures with color profiles that seem more accurate, though in comparison to images from the other two, they can come off yellowish. However, that was mostly resolved after Apple introduced a feature in the iPhone 13 called Photographic Styles, which lets you set a profile with customizable contrast levels and color temperature that applies to every picture taken via the native camera app.

Pro users who want to manually edit their shots should see if the phone they’re considering can take images in RAW format. Those who want to shoot a lot of videos while on the move should look for stabilization features and a decent frame rate. Most of the phones we’ve tested at Engadget record at either 60 frames per second at 1080p or 30 fps at 4K. It’s worth checking to see what the front camera shoots at, too, since they’re not usually on par with their counterparts on the rear.

Finally, while the phone’s native editor is usually not a dealbreaker (since you can install a third-party app for better controls), it’s worth noting that the latest flagships from Samsung and Google all offer AI tools that make manipulating an image a lot easier. They also offer a lot of fun, useful extras, like erasing photobombers, moving objects around or making sure everyone in the shot has their eyes open.

How we test smartphone cameras

For the last few years, I’ve reviewed flagships from Google, Samsung and Apple, and each time, I do the same set of tests. I’m especially particular when testing their cameras, and usually take all the phones I’m comparing out on a day or weekend photo-taking trip. Any time I see a photo- or video-worthy moment, I whip out all the devices and record what I can, doing my best to keep all factors identical and maintain the same angle and framing across the board.

It isn’t always easy to perfectly replicate the shooting conditions for each camera, even if I have one out immediately after putting the last one away. Of course, mounting them all on some sort of multi-camera rack would be the most scientific approach, but that makes framing shots a lot harder and isn’t representative of most people’s real-world use. Also, just imagine me holding up a three-pronged camera rig while chasing the poor panicked wildlife I’m trying to photograph. It’s just not practical.

For each device, I make sure to test all modes, like portrait, night and video, as well as all the lenses, including wide, telephoto and macro. When there are new or special features, I test them as well. Since different phone displays can affect how their pictures appear, I level the playing field by uploading all the material to Google Drive in full resolution, so I can compare everything on the same large screen. Because the photos from today’s phones are of mostly the same quality, I usually have to zoom in very closely to see the differences. I also frequently get a coworker who’s a photo or video expert to look at the files and weigh in.

This article originally appeared on Engadget at

The Pirate Queen interview: How Singer Studios and Lucy Liu brought forgotten history to life

I had a favorite version of Mulan growing up (Anita Yuen in the 1998 Taiwanese TV series). I obsessed over Chinese period TV series like Legend of the Condor Heroes, My Fair Princess and The Book and the Sword. I consider myself fairly well-versed in Chinese historical figures, especially those represented in ‘90s and 2000s entertainment in Asia. So when I found out that a UK-based studio had made a VR game called The Pirate Queen based on a forgotten female leader who was prolific in the South China Sea, I was shocked. How had I never heard of her? How had the Asian film and TV industry never covered her?

I got to play a bit of the game this week, which was released on the Meta Quest store and Steam on March 7th. The titular character Cheng Shih is voiced by actor Lucy Liu, who also executive produced this version of the game with UK-based Singer Studios’ CEO and founder Eloise Singer. Liu and Singer sat with me for an interview discussing The Pirate Queen, Cheng Shih, VR’s strengths and the importance of cultural and historical accuracy in games and films.

Cheng Shih, which translates to “Madam Cheng” or “Mrs Cheng,” was born Shi Yang. After she married the pirate Cheng Yi (usually romanized as Zheng Yi), she became known as Cheng Yi Sao, which translates to “wife of Cheng Yi.” Together they led the Guangdong Pirate Confederation in the 1800s. Upon her husband’s death in 1807, she took over the reins and went on to become what South China Morning Post described as “history’s greatest pirate.”

Singer Studios

How did Singer Studios learn about Cheng Shih and decide to build a game (and upcoming franchise including a film, podcast and graphic novels) around her? According to Singer, it was through word of mouth. “It was a friend of mine who first told me the story,” Singer said. “She said, ‘Did you know that the most famous pirate in history was a woman?’”

Cheng Shih had been loosely referenced in various films and games before this, like the character Mistress Ching in the 2007 film Pirates of the Caribbean: At World’s End and Jing Lang in Assassin’s Creed IV: Black Flag. As Singer pointed out, Cheng Shih had also appeared in a recent episode of Doctor Who.

Singer said that her team started developing the project as a film at the end of 2018. But the pandemic disrupted their plans, causing Singer to adapt it into a game. A short version of The Pirate Queen later debuted at Raindance Film Festival, and shortly after, Meta came onboard and provided funding to complete development of the game. Liu was then approached when the full version was ready and about to make its appearance at Tribeca Film Festival 2023.

“The rest is history,” Liu said. “But not forgotten history.” She said Cheng Shih was never really recognized for being the most powerful pirate. “It seems so crazy that in the 19th century, this woman who started as a courtesan would then rise to power and then have this fleet of pirates that she commanded,” Liu added. She went on to talk about how Cheng Shih was ahead of her time and also represented “a bit of an underdog story.” For the full 15-minute interview, you can watch the video in this article or listen to this week’s episode of The Engadget Podcast to learn more about Liu and Singer’s thoughts on VR and technology over the last 20 years.

Capturing the historical and cultural details of Cheng Shih’s life was paramount to Liu and Singer. They said the team had to create women’s hands from scratch to be represented from the player’s perspective in VR, and a dialect coach was hired to help Liu nail the pronunciation of the Cantonese words Cheng Shih speaks in the game. Though I’m not completely certain whether Cheng Shih spoke Mandarin or Cantonese, the latter seems like the more accurate choice, given that it’s the lingua franca of the Guangdong region.

Singer Studios

All that added to the immersiveness of The Pirate Queen, in which players find themselves in an atmospheric maritime environment. The Meta Quest 3’s controllers served as my hands in the game, and I rowed boats, climbed rope ladders and picked up items with relative ease. Some of the mechanics, especially the idea of “teleportation” as moving around, were a little clunky, but after about five minutes I got used to how things worked. You’ll have to point the left controller and push the joystick when you’ve chosen a spot, and the scene changes around you. This probably minimizes the possibility of nausea, since you’re not standing still while watching your surroundings move. It’s also pretty typical of VR games, so those who have experience playing in headsets will likely be familiar with the movement.

You can still walk around and explore, of course. I scrutinized the corners of rooms, inspected the insides of cabinets and more, while hunting for keys that would unlock boxes containing clues. A lot of this is pretty standard for a puzzle or room escape game, which is what I used to play the most in my teens. But I was particularly taken by sequences like rowing a boat across the sea and climbing up a rope ladder, both of which caused me to break a mild sweat. Inside Cheng Shih’s cabin, I lit a joss stick and placed it in an incense holder — an action I repeated every week at my grandfather’s altar when I was growing up. It felt so realistic that I tried to wave the joss stick to put out the flame and could almost smell the smoke.

It’s these types of activities that make VR games great vehicles for education and empathy. “We didn’t want to have these combat elements that traditional VR games do have,” Singer said, adding that it was one of the challenges in creating The Pirate Queen.

“It’s nice to see and to learn and be part of that, as opposed to ‘Let’s turn to page 48,’” Liu said. “That’s not as exciting as doing something and being actively part of something.” When you play as a historical character in a game, and one that’s as immersive as a VR game, “you’re living that person’s life or that moment in time,” Liu added.

While The Pirate Queen is currently only available on Quest devices, Singer said there are plans to bring it to “as many headsets as we possibly can.” Singer Studios also said it is “extending The Pirate Queen franchise beyond VR into a graphic novel, film and television series.”


Microsoft’s neural voice tool for people with speech disabilities arrives later this year

At its 14th Ability Summit, which kicks off today, Microsoft is highlighting developments and collaborations across its portfolio of assistive products. Much of that is around Azure AI, including features announced yesterday like AI-powered audio descriptions and the Azure AI Studio, which better enables developers with disabilities to create machine-learning applications. It also showed off updates like more languages and richer AI-generated descriptions for its Seeing AI tool, as well as new playbooks offering guidelines for best practices in areas like building accessible campuses and greater mental health support.

The company is also previewing a feature called “Speak For Me,” which is coming later this year. Much like Apple’s Personal Voice, Speak For Me can help those with ALS and other speech disabilities to use custom neural voices to communicate. Work on this project has been ongoing “for some time” with partners like the non-profit ALS organization Team Gleason, and Microsoft said it’s “committed to making sure this technology is used for good and plan to launch later in the year.” The company also shared that it’s working with Answer ALS and ALS Therapy Development Institute (TDI) to “almost double the clinical and genomic data available for research.”

One of the most significant accessibility updates coming this month is that Copilot will have new accessibility skills that enable users to ask the assistant to launch Live Caption and Narrator, among other assistive tools. The Accessibility Assistant feature announced last year will be available today in the Insider preview for M365 apps like Word, with the company saying it will be coming “soon” to Outlook and PowerPoint. Microsoft is also publishing four new playbooks today, including a Mental Health toolkit, which covers “tips for product makers to build experiences that support mental health conditions, created in partnership [with] Mental Health America.”

Ahead of the summit, the company’s chief accessibility officer Jenny Lay-Flurrie spoke with Engadget to share greater insight into the news, as well as her thoughts on generative AI’s role in building assistive products.

“In many ways, AI isn’t new,” she said, adding “this chapter is new.” Generative AI may be all the rage right now, but Lay-Flurrie believes that the core principle her team relies on hasn’t changed. “Responsible AI is accessible AI,” she said.

Still, generative AI could bring many benefits. “This chapter, though, does unlock some potential opportunities for the accessibility industry and people with disabilities to be able to be more productive and to use technology to power their day,” she said. She highlighted a survey the company ran with the neurodiverse community around Microsoft 365 Copilot; the feedback from the few hundred people who responded was “this is reducing time for me to create content and it’s shortening that gap between thought and action,” Lay-Flurrie said.

The idea of being responsible in embracing new technology trends when designing for accessibility isn’t far from Lay-Flurrie’s mind. “We still need to be very principled, thoughtful and if we hold back, it’s to make sure that we are protecting those fundamental rights of accessibility.”

Elsewhere at the summit, Microsoft is featuring guest speakers like actor Michelle Williams and its own employee Katy Jo Wright, who will discuss mental health and living with chronic Lyme disease, respectively. We will also see Amsterdam’s Rijksmuseum share how it used Azure AI’s computer vision and generative AI to provide image descriptions for over a million pieces of art for visitors who are blind or have low vision.
