
Rabbit R1 hands-on: Already more fun and accessible than the Humane AI Pin

At CES this January, startup Rabbit unveiled its first device, just in time for the end of the year of the rabbit according to the lunar calendar. It’s a cute little orange square that was positioned as a “pocket companion that moves AI from words to action.” In other words, it’s basically a dedicated AI machine that acts kind of like a walkie-talkie to a virtual assistant.

Sound familiar? You’re probably thinking of the Humane AI Pin, which was announced last year and started shipping this month. I awarded it a score of 50 (out of 100) earlier this month, while outlets like Wired and The Verge gave it similarly low marks of 4 out of 10.

The people at Rabbit have been paying close attention to the aftermath of the Humane AI Pin launch and reviews. It was evident in founder and CEO Jesse Lyu's address at an unboxing event at the TWA hotel in New York last night, where the company showed off the Rabbit R1 and eager early adopters listened rapturously before picking up their pre-orders. Engadget's sample unit is on its way to Devindra Hardawar, who will be tackling this review. But I was in attendance last night to check out units at the event that industry peers were unboxing (thanks to Max Weinbach for the assistance!).

What is the Rabbit R1?

As a refresher, the Rabbit R1 is a bright orange square, co-engineered by Teenage Engineering and Rabbit. It has a 2.88-inch color display built in, an 8-megapixel camera that can face both ways and a scroll wheel reminiscent of the crank on the Playdate. The latter, by the way, is a compact gaming handheld that was also designed by Teenage Engineering, and the Rabbit R1 shares its adorable retro aesthetic. Again, like the Humane AI Pin, the Rabbit R1 is supposed to be your portal to an AI-powered assistant and operating system. However, there are a few key differences, which Lyu covered extensively at the launch event last night.

Rabbit R1 vs Humane AI Pin

Let's get this out of the way: The Rabbit R1 already looks a lot more appealing than the Humane AI Pin. First of all, it costs $199 — less than a third of the AI Pin's $700. Humane also requires a monthly $24 subscription fee or its device will be rendered basically useless. Rabbit, as Lyu repeatedly reiterated all night, does not require such a fee. You'll just be responsible for your own cellular service (4G LTE only, no 5G), and can pop in your own SIM card (the slot sits on the device's bottom edge, next to the USB-C charging port) or just default to good old Wi-Fi.

The R1's advantages over the Pin don't end there. By virtue of its integrated screen (instead of a wonky, albeit intriguing projector), the orange square is more versatile and a lot easier to interact with. You can use the wheel to scroll through elements and press the button on the right side to confirm a choice. You could also tap the screen or push down a button to start talking to the software.

Now, I haven’t taken a photo with the device myself, but I was pleasantly surprised by the quality of images I saw on its screen. Maybe my expectations were pretty low, but when reviewers in a media room were setting up their devices by using the onboard cameras to scan QR codes, I found the images on the screens clear and impressively vibrant. Users won’t just be capturing photos, videos and QR codes with the Rabbit R1, by the way. It also has a Vision feature like the Humane AI Pin that will analyze an image you take and tell you what’s in it. In Lyu’s demo, the R1 told him that it saw a crowd of people at “an event or concert venue.”


We’ll have to wait till Devindra actually takes some pictures with our R1 unit and downloads them from the web-based portal that Rabbit cleverly calls the Rabbit Hole. Its name for camera-based features is Rabbit Eye, which is just kind of delightful. In fact, another thing that distinguishes Rabbit from Humane is the former’s personality. The R1 just oozes character. From the witty feature names to the retro aesthetic to the onscreen animation and the fact that the AI will actually make (cheesy) jokes, Rabbit and Teenage Engineering have developed something that’s got a lot more flavor than Humane’s almost clinical appearance and approach.

Of all the things Lyu took shots at Humane about last night, though, talk of the R1’s thermal performance or the AI Pin’s heat issues was conspicuously absent. To be clear, the R1 is slightly bigger than the Humane device, and it uses an octa-core MediaTek MT6765 processor, compared to the AI Pin’s Snapdragon chip. There’s no indication at the moment that the Rabbit device will run as hot as Humane’s Pin, but I’ve been burned (metaphorically) before and remain cautious.

I am also slightly concerned about the R1’s glossy plastic build. It looks nice and feels lighter than expected, weighing just 115 grams or about a quarter of a pound. The scroll wheel moved smoothly when I pushed it up and down, and there were no physical grooves or notches, unlike the rotating hinge on Samsung’s Galaxy watches. The camera housing lay flush with the rest of the R1’s case, and in general the unit felt refined and finished.

Most of my other impressions of the Rabbit R1 come from Lyu’s onstage demos, where I was surprised by how quickly his device responded to his queries. He was able to type on the R1’s screen and tilted it so that the controls sat below the display instead of to its right. That way, there was enough room for an onscreen keyboard that Lyu said was the same width as the one on the original iPhone.

What’s next for the Rabbit R1?

Rabbit also drew attention for its so-called Large Action Model (LAM), which acts as an interpreter to convert popular apps like Spotify or DoorDash into interfaces that work on the R1’s simple-looking operating system. Lyu also showed off some of these at the event last night, but I’d much rather wait for us to test these out for ourselves.

Lyu made many promises to the audience, seeming to acknowledge that the R1 might not be fully featured when it arrives in their hands. Even on the company’s website, there’s a list of features that are planned, in the works or being explored. For one thing, an alarm is coming this summer, along with a calendar, contacts app, GPS support, memory recall and more. Throughout his speech, Lyu repeated the phrase “we’re gonna work on” amid veiled references to Humane (for instance, emphasizing that Rabbit doesn’t require an additional subscription fee). Ultimately, Lyu said “we just keep adding value to this thing,” in reference to a roadmap of upcoming features.

Hopefully, Lyu and his team are able to deliver on the promises they’ve made. I’m already very intrigued by a “teach mode” he teased, which is basically a way to generate macros by recording an action on the R1, and letting it learn what you want to do when you tell it something. Rabbit’s approach certainly seems more tailored to tinkerers and enthusiasts, whereas Humane’s is ambitious and yet closed off. This feels like Google and Apple all over again, except whether the AI device race will ever reach the same scale remains to be seen.

Last night’s event also made it clear what Rabbit wants us to think. It was hosted at the TWA hotel, which itself used to be the head house of the TWA Flight Center. The entire place is an homage to retro vibes, and the entry to Rabbit’s event was lined with display cases containing gadgets like a Pokédex, a Sony Watchman, a Motorola pager, a Game Boy Color and more. Every glass box I walked by made me squeal, bringing up a pleasant sense memory that also resurfaced when I played with the R1. The R1 didn’t feel good because it was premium or durable; it felt good because it reminded me of my childhood.

Whether Rabbit is successful with the R1 depends on how you define success. The company has already sold more than 100,000 units this quarter and looks poised to sell at least one more (I’m already whipping out my credit card). I remain skeptical about the usefulness of AI devices, but, in large part due to its price and ability to work with third-party apps at launch, Rabbit has already succeeded in making me feel like Alice entering Wonderland.


The Humane AI Pin is the solution to none of technology's problems

I’ve found myself at a loss for words when trying to explain the Humane AI Pin to my friends. The best description so far is that it’s a combination of a wearable Siri button with a camera and built-in projector that beams onto your palm. But each time I start explaining that, I get so caught up in pointing out its problems that I never really get to fully detail what the AI Pin can do. Or is meant to do, anyway.

Yet, words are crucial to the Humane AI experience. Your primary mode of interacting with the pin is through voice, accompanied by touch and gestures. Without speaking, your options are severely limited. The company describes the device as your “second brain,” but the combination of holding out my hand to see the projected screen, waving it around to navigate the interface and tapping my chest and waiting for an answer all just made me look really stupid. When I remember that I was actually eager to spend $700 of my own money to get a Humane AI Pin, not to mention shell out the required $24 a month for the AI and the company’s 4G service riding on T-Mobile’s network, I feel even sillier.

What is the Humane AI Pin?

In the company’s own words, the Humane AI Pin is the “first wearable device and software platform built to harness the full power of artificial intelligence.” If that doesn’t clear it up, well, I can’t blame you.

There are basically two parts to the device: the Pin and its magnetic attachment. The Pin is the main piece, which houses a touch-sensitive panel on its face, with a projector, camera, mic and speakers lining its top edge. It’s about the same size as an Apple Watch Ultra 2, both measuring about 44mm (1.73 inches) across. The Humane wearable is slightly squatter, though, with its 47.5mm (1.87 inches) height compared to the Watch Ultra’s 49mm (1.92 inches). It’s also half the weight of Apple’s smartwatch, at 34.2 grams (1.2 ounces).

The top of the AI Pin is slightly thicker than the bottom, since it has to contain extra sensors and indicator lights, but it’s still about the same depth as the Watch Ultra 2. Snap on a magnetic attachment, and you add about 8mm (0.31 inches). There are a few accessories available, with the most useful being the included battery booster. You’ll get two battery boosters in the “complete system” when you buy the Humane AI Pin, as well as a charging cradle and case. The booster helps clip the AI Pin to your clothes while adding some extra hours of life to the device (in theory, anyway). It also brings an extra 20 grams (0.7 ounces) with it, but even including that the AI Pin is still 10 grams (0.35 ounces) lighter than the Watch Ultra 2.

That weight (or lack thereof) is important, since anything too heavy would drag down on your clothes, which would not only be uncomfortable but also block the Pin’s projector from functioning properly. If you're wearing it with a thinner fabric, by the way, you’ll have to use the latch accessory instead of the booster, which is a $40 plastic tile that provides no additional power. You can also get the stainless steel clip that Humane sells for $50 to stick it onto heavier materials or belts and backpacks. Whichever accessory you choose, though, you’ll place it on the underside of your garment and stick the Pin on the outside to connect the pieces.


How the AI Pin works

But you might not want to place the AI Pin on a bag, as you need to tap on it to ask a question or pull up the projected screen. Every interaction with the device begins with touching it (there is no wake word), so having it out of reach sucks.

Tap and hold on the touchpad, ask a question, then let go and wait a few seconds for the AI to answer. You can hold out your palm to read what it said, bringing your hand closer to and further from your chest to toggle through elements. To jump through individual cards and buttons, you’ll have to tilt your palm up or down, which can get in the way of seeing what’s on display. But more on that in a bit.

There are some built-in gestures offering shortcuts to functions like taking a picture or video or controlling music playback. Double-tapping the Pin with two fingers snaps a shot, while double-tapping and holding at the end triggers a 15-second video. Swiping up or down adjusts the device or Bluetooth headphone volume while the assistant is talking or when music is playing, too.


Each person who orders the Humane AI Pin will have to set up an account and go through onboarding on the website before the company will ship out their unit. Part of this process includes signing into your Google or Apple accounts to port over contacts, as well as watching a video that walks you through those gestures I described. Your Pin will arrive already linked to your account with its eSIM and phone number sorted. This likely simplifies things so users won’t have to fiddle with tedious steps like installing a SIM card or signing into their profiles. It felt a bit strange, but it’s a good thing because, as I’ll explain in a bit, trying to enter a password on the AI Pin is a real pain.

Talking to the Humane AI Pin

The easiest way to interact with the AI Pin is by talking to it. It’s supposed to feel natural, like you’re talking to a friend or assistant, and you shouldn’t have to feel forced when asking it for help. Unfortunately, that just wasn’t the case in my testing.

When the AI Pin did understand me and answer correctly, it usually took a few seconds to reply, in which time I could have already gotten the same results on my phone. For a few things, like adding items to my shopping list or converting Canadian dollars to USD, it performed adequately. But “adequate” seems to be the best case scenario.

Sometimes the answers were too long or irrelevant. When I asked “Should I watch Dream Scenario,” it said “Dream Scenario is a 2023 comedy/fantasy film featuring Nicolas Cage, with positive ratings on IMDb, Rotten Tomatoes and Metacritic. It’s available for streaming on platforms like YouTube, Hulu and Amazon Prime Video. If you enjoy comedy and fantasy genres, it may be worth watching.”

Setting aside the fact that the “answer” to my query came after a lot of preamble I found unnecessary, I also just didn’t find the recommendation satisfying. It wasn’t giving me a straight answer, which is understandable, but ultimately none of what it said felt different from scanning the top results of a Google search. I would have gleaned more info had I looked the film up on my phone, since I’d be able to see the actual Rotten Tomatoes and Metacritic scores.

To be fair, the AI Pin was smart enough to understand follow-ups like “How about The Witch” without needing me to repeat my original question. But it’s 2024; we’re way past assistants that need so much hand-holding.

We’re also past the days of needing to word our requests in specific ways for AI to understand us. Though Humane has said you can speak to the pin “naturally,” there are some instances when that just didn’t work. First, it occasionally misheard me, even in my quiet living room. When I asked “Would I like YouTuber Danny Gonzalez,” it thought I said “would I like YouTube do I need Gonzalez” and responded “It’s unclear if you would like Dulce Gonzalez as the content of their videos and channels is not specified.”

When I repeated myself by carefully saying “I meant Danny Gonzalez,” the AI Pin spouted back facts about the YouTuber’s life and work, but did not answer my original question.

That’s not as bad as the fact that when I tried to get the Pin to describe what was in front of me, it simply would not. Humane has a Vision feature in beta that’s meant to let the AI Pin use its camera to see and analyze things in view, but when I tried to get it to look at my messy kitchen island, nothing happened. I’d ask “What’s in front of me” or “What am I holding out in front of you” or “Describe what’s in front of me,” which is how I’d phrase this request naturally. I tried so many variations of this, including “What am I looking at” and “Is there an octopus in front of me,” to no avail. I even took a photo and asked “can you describe what’s in that picture.”

Every time, I was told “Your AI Pin is not sure what you’re referring to” or “This question is not related to AI Pin” or, in the case where I first took a picture, “Your AI Pin is unable to analyze images or describe them.” I was confused why this wasn’t working even after I double checked that I had opted in and enabled the feature, and finally realized after checking the reviewers' guide that I had to use prompts that started with the word “Look.”

Look, maybe everyone else would have instinctively used that phrasing. But if you’re like me and didn’t, you’ll probably give up and never use this feature again. Even after I learned how to properly phrase my Vision requests, they were still clunky as hell. It was never as easy as “Look for my socks” but required two-part sentences like “Look at my room and tell me if there are boots in it” or “Look at this thing and tell me how to use it.”

When I worded things just right, results were fairly impressive. It confirmed there was a “Lysol can on the top shelf of the shelving unit” and a “purple octopus on top of the brown cabinet.” I held out a cheek highlighter and asked what to do with it. The AI Pin accurately told me “The Carry On 2 cream by BYBI Beauty can be used to add a natural glow to skin,” among other things, although it never explicitly told me to apply it to my face. I asked it where an object I was holding came from, and it just said “The image is of a hand holding a bag of mini eggs. The bag is yellow with a purple label that says ‘mini eggs.’” Again, it didn't answer my actual question.

Humane’s AI, which is powered by a mix of OpenAI’s recent versions of GPT and other sources including its own models, just doesn’t feel fully baked. It’s like a robot pretending to be sentient — capable of indicating it sort of knows what I’m asking, but incapable of delivering a direct answer.

My issues with the AI Pin’s language model and features don’t end there. Sometimes it just refuses to do what I ask of it, like restart or shut down. Other times it does something entirely unexpected. When I said “Send a text message to Julian Chokkattu,” who’s a friend and fellow AI Pin reviewer over at Wired, I thought I’d be asked what I wanted to tell him. Instead, the device simply said OK and told me it sent the words “Hey Julian, just checking in. How's your day going?” to Chokkattu. I've never said anything like that to him in our years of friendship, but I guess technically the AI Pin did do what I asked.


Using the Humane AI Pin’s projector display

If only voice interactions were the worst thing about the Humane AI Pin, but the list of problems only starts there. I was most intrigued by the company’s “pioneering Laser Ink display” that projects green rays onto your palm, as well as the gestures that enabled interaction with “onscreen” elements. But my initial wonder quickly gave way to frustration and a dull ache in my shoulder. It might be tiring to hold up your phone to scroll through Instagram, but at least you can set that down on a table and continue browsing. With the AI Pin, if your arm is not up, you’re not seeing anything.

Then there’s the fact that it’s a pretty small canvas. I would see about seven lines of text each time, with about one to three words on each row depending on the length. This meant I had to hold my hand up even longer so I could wait for notifications to finish scrolling through. I also have a smaller palm than some other reviewers I saw while testing the AI Pin. Julian over at Wired has a larger hand and I was downright jealous when I saw he was able to fit the entire projection onto his palm, whereas the contents of my display would spill over onto my fingers, making things hard to read.

It’s not just those of us afflicted with tiny palms who will find the AI Pin tricky to see. Step outside and you’ll have a hard time reading the faint projection. Even on a cloudy, rainy day in New York City, I could barely make out the words on my hands.

When you can read what’s on the screen, interacting with it might make you want to rip your eyes out. Like I said, you’ll have to move your palm closer and further to your chest to select the right cards to enter your passcode. It’s a bit like dialing a rotary phone, with cards for individual digits from 0 to 9. Go further away to get to the higher numbers and the backspace button, and come back for the smaller ones.

This gesture is smart in theory but it’s very sensitive. There’s a very small range of usable space since there is only so far your hand can go, so the distance between each digit is fairly small. One wrong move and you’ll accidentally select something you didn’t want and have to go all the way out to delete it. To top it all off, moving my arm around while doing that causes the Pin to flop about, meaning the screen shakes on my palm, too. On average, unlocking my Pin, which involves entering a four-digit passcode, took me about five seconds.

On its own, this doesn’t sound so bad, but bear in mind that you’ll have to re-enter this each time you disconnect the Pin from the booster, latch or clip. It’s currently springtime in New York, which means I’m putting on and taking off my jacket over and over again. Every time I go inside or out, I move the Pin to a different layer and have to look like a confused long-sighted tourist reading my palm at various distances. It’s not fun.

Of course, you can turn off the setting that requires password entry each time you remove the Pin, but that’s simply not great for security.

Though Humane says “privacy and transparency are paramount with AI Pin,” by its very nature the device isn’t suitable for performing confidential tasks unless you’re alone. You don’t want to dictate a sensitive message to your accountant or partner in public, nor might you want to speak your Wi-Fi password out loud.

The latter is one of two input methods for setting up an internet connection, by the way. If you choose not to spell your Wi-Fi key out loud, then you can go to the Humane website and type in your network name (you have to spell it out yourself; there’s no list of available networks to pick from) and password to generate a QR code for the Pin to scan. Having to verbally relay alphanumeric characters to the Pin is not ideal, and though the QR code technically works, it just involves too much effort. It’s like giving someone a spork when they asked for a knife and fork: good enough to get by, but not a perfect replacement.
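For the curious, the QR approach itself isn’t exotic. Humane hasn’t documented exactly what payload its website encodes, but most camera-based Wi-Fi setup flows read the common ZXing-style string ("WIFI:T:<auth>;S:<ssid>;P:<password>;;"). Here is a minimal Python sketch of generating such a code with the open-source qrcode library; the network name and password are made up, and the assumption that the Pin accepts this standard format is mine, not Humane’s.

```python
# pip install "qrcode[pil]"
import qrcode

def wifi_qr(ssid: str, password: str, auth: str = "WPA") -> None:
    # Common ZXing-style Wi-Fi payload that most QR scanners understand.
    # (A robust version would also escape \ ; , : " characters in the SSID and password.)
    payload = f"WIFI:T:{auth};S:{ssid};P:{password};;"
    qrcode.make(payload).save("wifi.png")

# Hypothetical network details; display or print wifi.png for the Pin's camera to scan.
wifi_qr("MyHomeNetwork", "hunter2")
```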


The Humane AI Pin’s speaker

Since communicating through speech is the easiest means of using the Pin, you’ll need to be verbal and have hearing. If you choose not to raise your hand to read the AI Pin’s responses, you’ll have to listen for it. The good news is, the onboard speaker is usually loud enough for most environments, and I only struggled to hear it on NYC streets with heavy traffic passing by. I never attempted to talk to it on the subway, however, nor did I obnoxiously play music from the device while I was outside.

In my office and gym, though, I did get the AI Pin to play some songs. The music sounded fine — I didn’t get thumping bass or particularly crisp vocals, but I could hear instruments and crooners easily. Compared to my iPhone 15 Pro Max, it’s a bit tinny, as expected, but not drastically worse.

The problem is there are, once again, some caveats. The most important of these is that at the moment, you can only use Tidal’s paid streaming service with the Pin. You’ll get 90 days free with your purchase, and then have to pay $11 a month (on top of the $24 you already give to Humane) to continue streaming tunes from your Pin. Humane hasn’t said yet if other music services will eventually be supported, either, so unless you’re already on Tidal, listening to music from the Pin might just not be worth the price. Annoyingly, Tidal also doesn’t have the extensive library that competing providers do, so I couldn’t even play songs like Beyonce’s latest album or Taylor Swift’s discography (although remixes of her songs were available).

Though Humane has described its “personic speaker” as being able to create a “bubble of sound,” that “bubble” certainly has a permeable membrane. People around you will definitely hear what you’re playing, so unless you’re trying to start a dance party, it might be too disruptive to use the AI Pin for music without pairing Bluetooth headphones. You’ll also probably get better sound quality from Bose, Beats or AirPods anyway.

The Humane AI Pin camera experience

I’ll admit it — a large part of why I was excited for the AI Pin is its onboard camera. My love for taking photos is well-documented, and with the Pin, snapping a shot is supposed to be as easy as double-tapping its face with two fingers. I was even ready to put up with subpar pictures from its 13-megapixel sensor for the ability to quickly capture a scene without having to first whip out my phone.

Sadly, the Humane AI Pin was simply too slow and feverish to deliver on that premise. I frequently ran into times when, after taking a bunch of photos and holding my palm up to see how each snap turned out, the device would get uncomfortably warm. At least twice in my testing, the Pin just shouted “Your AI Pin is too warm and needs to cool down” before shutting down.

A sample image from the Humane AI Pin.
Cherlynn Low for Engadget

Even when it’s running normally, using the AI Pin’s camera is slow. I’d double tap it and then have to stand still for at least three seconds before it would take the shot. I appreciate that there’s audio and visual feedback through the flashing green lights and the sound of a shutter clicking when the camera is going, so both you and people around know you’re recording. But it’s also a reminder of how long I need to wait — the “shutter” sound will need to go off thrice before the image is saved.

I took photos and videos in various situations under different lighting conditions, from a birthday dinner in a dimly lit restaurant to a beautiful park on a cloudy day. I recorded some workout footage in my building’s gym with large windows, and in general anything taken with adequate light looked good enough to post. The videos might make viewers a little motion sick, since the camera was clipped to my sports bra and moved around with me, but that’s tolerable.

In dark environments, though, forget about it. Even my Nokia E7 from 2012 delivered clearer pictures, most likely because I could hold it steady while framing a shot. The photos of my friends at dinner were so grainy, one person even seemed translucent. To my knowledge, that buddy is not a ghost, either.

A sample image from the Humane AI Pin.
Cherlynn Low for Engadget

To its credit, Humane’s camera has a generous 120-degree field of view, meaning you’ll capture just about anything in front of you. When you’re not sure if you’ve gotten your subject in the picture, you can hold up your palm after taking the shot, and the projector will beam a monochromatic preview so you can verify. It’s not really for you to admire your skilled composition or level of detail, and more just to see that you did indeed manage to get the receipt in view before moving on.

Cosmos OS on the Humane AI Pin

When it comes time to retrieve those pictures off the AI Pin, you’ll just need to navigate to humane.center in any browser and sign in. There, you’ll find your photos and videos under “Captures,” your notes, recently played music and calls, as well as every interaction you’ve had with the assistant. That last one made recalling every weird exchange with the AI Pin for this review very easy.

You’ll have to make sure the AI Pin is connected to Wi-Fi and power, and is at least 50 percent charged, before full-resolution photos and videos will upload to the dashboard. But before that, you can still scroll through previews in a gallery, even though you can’t download or share them.

The web portal is fairly rudimentary, with large square tiles serving as cards for sections like “Captures,” “Notes” and “My Data.” Going through them just shows you things you’ve saved or asked the Pin to remember, like a friend’s favorite color or their birthday. Importantly, there isn’t an area for you to view your text messages, so if you wanted to type out a reply from your laptop instead of dictating to the Pin, sorry, you can’t. The only way to view messages is by putting on the Pin, pulling up the screen and navigating the onboard menus to find them.


That brings me to what you see on the AI Pin’s visual interface. If you’ve raised your palm right after asking it something, you’ll see your answer in text form. But if you had brought up your hand after unlocking or tapping the device, you’ll see its barebones home screen. This contains three main elements — a clock widget in the middle, the word “Nearby” in a bubble at the top and notifications at the bottom. Tilting your palm scrolls through these, and you can pinch your index finger and thumb together to select things.

Push your hand further back and you’ll bring up a menu with five circles that will lead you to messages, phone, settings, camera and media player. You’ll need to tilt your palm to scroll through these, but because they’re laid out in a ring, it’s not as straightforward as simply aiming up or down. Trying to get the right target here was one of the greatest challenges I encountered while testing the AI Pin. I was rarely able to land on the right option on my first attempt. That, along with the fact that you have to put on the Pin (and unlock it), made it so difficult to see messages that I eventually just gave up looking at texts I received.

The Humane AI Pin overheating, in use and battery life

One reason I sometimes took off the AI Pin is that it would frequently get too warm and need to “cool down.” Once I removed it, I would not feel the urge to put it back on. I did wear it a lot in the first few days I had it, typically from 7:45AM when I headed out to the gym till evening, depending on what I was up to. Usually at about 3PM, after taking a lot of pictures and video, I would be told my AI Pin’s battery was running low, and I’d need to swap out the battery booster. This didn’t seem to work sometimes, with the Pin dying before it could get enough power through the accessory. At first it appeared the device simply wouldn’t detect the booster, but I later learned it’s just slow and can take up to five minutes to recognize a newly attached booster.

When I wore the AI Pin to my friend (and fellow reviewer) Michael Fisher’s birthday party just hours after unboxing it, I had it clipped to my tank top just hovering above my heart. Because it was so close to the edge of my shirt, I would accidentally brush past it a few times when reaching for a drink or resting my chin on my palm a la The Thinker. Normally, I wouldn’t have noticed the Pin, but as it was running so hot, I felt burned every time my skin came into contact with its chrome edges. The touchpad also grew warm with use, and the battery booster resting against my chest also got noticeably toasty (though it never actually left a mark).


Part of the reason the AI Pin ran so hot is likely that there’s not a lot of room for the heat generated by its octa-core Snapdragon processor to dissipate. I had also been using it near constantly to show my companions the pictures I had taken, and Humane has said its laser projector is “designed for brief interactions (up to six to nine minutes), not prolonged usage” and that it had “intentionally set conservative thermal limits for this first release that may cause it to need to cool down.” The company added that it not only plans to “improve uninterrupted run time in our next software release,” but also that it’s “working to improve overall thermal performance in the next software release.”

There are other things I need Humane to address via software updates ASAP. The fact that its AI sometimes decides not to do what I ask, like telling me “Your AI Pin is already running smoothly, no need to restart” when I asked it to restart is not only surprising but limiting. There are no hardware buttons to turn the pin on or off, and the only other way to trigger a restart is to pull up the dreaded screen, painstakingly go to the menu, hopefully land on settings and find the Power option. By which point if the Pin hasn’t shut down my arm will have.

A lot of my interactions with the AI Pin also felt like problems I encountered with earlier versions of Siri, Alexa and the Google Assistant. The overly wordy answers, for example, or the pronounced two or three-second delay before a response, are all reminiscent of the early 2010s. When I asked the AI Pin to “remember that I parked my car right here,” it just saved a note saying “Your car is parked right here,” with no GPS information and no way to navigate back. So I guess I parked my car on a sticky note.

To be clear, that’s not something that Humane ever said the AI Pin can do, but it feels like such an easy thing to offer, especially since the device does have onboard GPS. Google’s made entire lines of bags and Levi’s jackets that serve the very purpose of dropping pins to revisit places later. If your product is meant to be smart and revolutionary, it should at least be able to do what its competitors already can, not to mention offer features they don’t.


One singular thing that the AI Pin actually manages to do competently is act as an interpreter. After you ask it to “translate to [x language],” you’ll have to hold down two fingers while you talk, let go and it will read out what you said in the relevant tongue. I tried talking to myself in English and Mandarin, and was frankly impressed with not only the accuracy of the translation and general vocal expressiveness, but also at how fast responses came through. You don’t even need to specify the language the speaker is using. As long as you’ve set the target language, the person talking in Mandarin will be translated to English and the words said in English will be read out in Mandarin.

It’s worth considering the fact that using the AI Pin is a nightmare for anyone who gets self-conscious. I’m pretty thick-skinned, but even I tried to hide the fact that I had a strange gadget with a camera pinned to my person. Luckily, I didn’t get any obvious stares or confrontations, but I heard from my fellow reviewers that they did. And as much as I like the idea of a second brain I can wear and offload little notes and reminders to, nothing that the AI Pin does well is actually executed better than a smartphone.

Wrap-up

Not only is the Humane AI Pin slow, finicky and barely even smart, using it made me look pretty dumb. In a few days of testing, I went from being excited to show it off to my friends to not having any reason to wear it.

Humane’s vision was ambitious, and the laser projector initially felt like a marvel. At first glance, it looked and felt like a refined product. But it just seems like at every turn, the company had to come up with solutions to problems it created. No screen or keyboard to enter your Wi-Fi password? No worries, use your phone or laptop to generate a QR code. Want to play music? Here you go, a 90-day subscription to Tidal, but you can only play music on that service.

The company promises to make software updates that could improve some issues, and the few tweaks my unit received during this review did make some things (like music playback) work better. The problem is that as it stands, the AI Pin doesn’t do enough to justify its $700 and $24-a-month price, and I simply cannot recommend anyone spend this much money for the one or two things it does adequately. 

Maybe in time, the AI Pin will be worth revisiting, but it’s hard to imagine why anyone would need a screenless AI wearable when so many devices exist today that you can use to talk to an assistant. From speakers and phones to smartwatches and cars, the world is full of useful AI access points that allow you to ditch a screen. Humane says it’s committed to a “future where AI seamlessly integrates into every aspect of our lives and enhances our daily experiences.” 

After testing the company’s AI Pin, that future feels pretty far away.


The best smartphone cameras for 2024: How to choose the phone with the best photography chops

I remember begging my parents to get me a phone with a camera when the earliest ones were launched. The idea of taking photos wherever I went was new and appealing, but it’s since become less of a novelty and more of a daily habit. Yes, I’m one of those. I take pictures of everything — from beautiful meals and funny signs to gorgeous landscapes and plumes of smoke billowing in the distance.

If you grew up in the Nokia 3310 era like me, then you know how far we’ve come. Gone are the 2-megapixel embarrassments that we used to post to Friendster with glee. Now, many of us use the cameras on our phones to not only capture precious memories of our adventures and loved ones, but also to share our lives with the world.

I’m lucky enough that I have access to multiple phones thanks to my job, and at times would carry a second device with me on a day-trip just because I preferred its cameras. But most people don’t have that luxury. Chances are, if you’re reading this, a phone’s cameras may be of utmost importance to you. But you’ll still want to make sure the device you end up getting doesn’t fall flat in other ways. At Engadget, we test and review dozens of smartphones every year; our top picks below represent not only the best phone cameras available right now, but also the most well-rounded options out there.

What to look for when choosing a phone for its cameras

Before scrutinizing a phone’s camera array, you’ll want to take stock of your needs — what are you using it for? If your needs are fairly simple, like taking photos and videos of your new baby or pet, most modern smartphones will serve you well. Those who plan to shoot for audiences on TikTok, Instagram or YouTube should look for video-optimizing features like stabilization and high frame rate support (for slow-motion clips).

Most smartphones today have at least two cameras on the rear and one up front. Those that cost more than $700 usually come with three, including wide-angle, telephoto or macro lenses. We’ve also reached a point where the number of megapixels (MP) doesn’t really matter anymore — most flagship phones from Apple, Samsung and Google have sensors that are either 48MP or 50MP. You’ll even come across some touting resolutions of 108MP or 200MP, in pro-level devices like the Galaxy S24 Ultra.

Most people won’t need anything that sharp, and in general, smartphone makers combine the pixels to deliver pictures that are the equivalent of 12MP anyway. The benefits of pixel-binning are fairly minor in phone cameras, though, and you’ll usually need to blow up an image to fit a 27-inch monitor before you’ll see the slightest improvements.
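If you’re wondering what “combining the pixels” looks like in practice: pixel binning simply averages (or sums) each small block of photosites into one output pixel, which is how a 48MP sensor ends up producing 12MP files. Below is a rough numpy sketch of 2x2 binning on a hypothetical 8,000 x 6,000 sensor grid; actual camera pipelines do this on the raw Bayer data with far more sophistication.

```python
import numpy as np

def bin_2x2(sensor: np.ndarray) -> np.ndarray:
    # Average each 2x2 block of photosites into a single output pixel.
    h, w = sensor.shape
    return sensor.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Hypothetical 48MP-class sensor: 8,000 x 6,000 photosites bin down to 4,000 x 3,000 (12MP).
raw = np.random.rand(6000, 8000)
print(bin_2x2(raw).shape)  # (3000, 4000)
```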

In fact, smartphone cameras tend to be so limited in size that there’s often little room for variation across devices. They typically use sensors from the same manufacturers and have similar aperture sizes, lens lengths and fields of view. So while it might be worth considering the impact of sensor size on things like DSLRs or mirrorless cameras, on a smartphone those differences are minimal.

Sensor size and field of view

If you still want a bit of guidance on what to look for, here are some quick tips: By and large, the bigger the sensor the better, as this will allow more light and data to be captured. Not many phone makers list the sensor size in their spec sheets, so you’ll have to dig around for this info. A larger aperture (indicated by a smaller f-number, such as f/1.8) is ideal for the same reason, and it also determines how much natural depth of field (or background blur) you get without software. Since portrait modes are available on most phones these days, though, a big aperture isn’t as necessary to achieve this effect.

When looking for a specific field of view on a wide-angle camera, know that the most common offering from companies like Samsung and Google is about 120 degrees. Finally, most premium phones like the iPhone 15 Pro Max and Galaxy S24 Ultra offer telephoto systems that go up to 5x optical zoom with software taking that to 20x or even 100x.

Processing and extra features

These hardware features will likely perform at a similar quality across the board; where you really see a difference is in the processing. Samsung traditionally renders pictures that are more saturated, while Google’s Pixel phones take photos that are more neutral and evenly exposed. iPhones have historically produced pictures with color profiles that seem more accurate, though in comparison to images from the other two, they can come off yellowish. However, that was mostly resolved after Apple introduced a feature in the iPhone 13 called Photographic Styles that lets you set a profile with customizable contrast levels and color temperature that applies to every picture taken via the native camera app.

Pro users who want to manually edit their shots should see if the phone they’re considering can take images in RAW format. Those who want to shoot a lot of videos while on the move should look for stabilization features and a decent frame rate. Most of the phones we’ve tested at Engadget record at either 60 frames per second at 1080p or 30 fps at 4K. It’s worth checking to see what the front camera shoots at, too, since they’re not usually on par with their counterparts on the rear.

Finally, while the phone’s native editor is usually not a dealbreaker (since you can install a third-party app for better controls), it’s worth noting that the latest flagships from Samsung and Google all offer AI tools that make manipulating an image a lot easier. They also offer a lot of fun, useful extras, like erasing photobombers, moving objects around or making sure everyone in the shot has their eyes open.

How we test smartphone cameras

For the last few years, I’ve reviewed flagships from Google, Samsung and Apple, and each time, I do the same set of tests. I’m especially particular when testing their cameras, and usually take all the phones I’m comparing out on a day or weekend photo-taking trip. Any time I see a photo- or video-worthy moment, I whip out all the devices and record what I can, doing my best to keep all factors identical and maintain the same angle and framing across the board.

It isn’t always easy to perfectly replicate the shooting conditions for each camera, even if I have them out immediately after I put the last one away. Of course, having them on some sort of multi-mount rack would be the most scientific way, but that makes framing shots a lot harder and is not representative of most people’s real-world use. Also, just imagine me holding up a three-prong camera rack running after the poor panicked wildlife I’m trying to photograph. It’s just not practical.

For each device, I make sure to test all modes, like portrait, night and video, as well as all the lenses, including wide, telephoto and macro. When there are new or special features, I test them as well. Since different phone displays can affect how their pictures appear, I wanted to level the playing field: I upload all the material to Google Drive in full resolution so I can compare everything on the same large screen. Because the photos from today’s phones are of mostly the same quality, I usually have to zoom in very closely to see the differences. I also frequently get a coworker who’s a photo or video expert to look at the files and weigh in.


The Pirate Queen interview: How Singer Studios and Lucy Liu brought forgotten history to life

I had a favorite version of Mulan growing up (Anita Yuen in the 1998 Taiwanese TV series). I obsessed over Chinese period TV series like Legend of the Condor Heroes, My Fair Princess and The Book and the Sword. I consider myself fairly well-versed in Chinese historical figures, especially those represented in ‘90s and 2000s entertainment in Asia. So when I found out that a UK-based studio had made a VR game called The Pirate Queen based on a forgotten female leader who was prolific in the South China Sea, I was shocked. How had I never heard of her? How had the Asian film and TV industry never covered her?

This week, I got to play a bit of the game, which was released on the Meta Quest store and Steam on March 7th. The titular character Cheng Shih is voiced by actor Lucy Liu, who also executive produced this version of the game with UK-based Singer Studios’ CEO and founder Eloise Singer. Liu and Singer sat with me for an interview discussing The Pirate Queen, Cheng Shih, VR’s strengths and the importance of cultural and historical accuracy in games and films.

Cheng Shih, whose name translates to “Madam Cheng” or “Mrs Cheng,” was born Shi Yang. After she married the pirate Cheng Yi (usually romanized as Zheng Yi), she became known as Cheng Yi Sao, which translates to “wife of Cheng Yi.” Together they led the Guangdong Pirate Confederation in the 1800s. Upon her husband’s death in 1807, she took over the reins and went on to become what the South China Morning Post described as “history’s greatest pirate.”


How did Singer Studios learn about Cheng Shih and decide to build a game (and upcoming franchise including a film, podcast and graphic novels) around her? According to Singer, it was through word of mouth. “It was a friend of mine who first told me the story,” Singer said. “She said, ‘Did you know that the most famous pirate in history was a woman?’”

Cheng Shih had been loosely referenced in various films and games before this, like the character Mistress Ching in the 2007 film Pirates of the Caribbean: At World’s End and Jing Lang in Assassin’s Creed IV: Black Flag. As Singer pointed out, Cheng Shih had also appeared in a recent episode of Doctor Who.

Singer said that her team started developing the project as a film at the end of 2018. But the pandemic disrupted their plans, causing Singer to adapt it into a game. A short version of The Pirate Queen later debuted at Raindance Film Festival, and shortly after, Meta came onboard and provided funding to complete development of the game. Liu was then approached when the full version was ready and about to make its appearance at Tribeca Film Festival 2023.

“The rest is history,” Liu said, “But not forgotten history.” She said Cheng Shih was never really recognized for being the most powerful pirate. “It seems so crazy that in the 19th century, this woman who started as a courtesan would then rise to power and then have this fleet of pirates that she commanded,” Liu added. She went on to talk about how Cheng Shih was ahead of the time and also represented “a bit of an underdog story.” For the full 15-minute interview, you can watch the video in this article or listen to this week’s episode of The Engadget Podcast and learn more about Liu and Singer’s thoughts on VR and technology over the last 20 years.

Capturing the historical and cultural details of Cheng Shih’s life was paramount to Liu and Singer. They said the team had to create women’s hands from scratch to be represented from the player’s perspective in VR, and a dialect coach was hired to help Liu nail the pronunciation for the Cantonese words that Cheng Shih speaks in the game. Though I’m not completely certain if Cheng Shih spoke Mandarin or Cantonese, the latter seems like the more accurate choice given it’s the lingua franca in the Guangdong region.


All that added to the immersiveness of The Pirate Queen, in which players find themselves in an atmospheric maritime environment. The Meta Quest 3’s controllers served as my hands in the game, and I rowed boats, climbed rope ladders and picked up items with relative ease. Some of the mechanics, especially the idea of “teleportation” as moving around, were a little clunky, but after about five minutes I got used to how things worked. You’ll have to point the left controller and push the joystick when you’ve chosen a spot, and the scene changes around you. This probably minimizes the possibility of nausea, since you’re not standing still while watching your surroundings move. It’s also pretty typical of VR games, so those who have experience playing in headsets will likely be familiar with the movement.

You can still walk around and explore, of course. I scrutinized the corners of rooms, inspected the insides of cabinets and more, while hunting for keys that would unlock boxes containing clues. A lot of this is pretty standard for a puzzle or room escape game, which is what I used to play the most in my teens. But I was particularly taken by sequences like rowing a boat across the sea and climbing up a rope ladder, both of which caused me to break a mild sweat. Inside Cheng Shih’s cabin, I lit a joss stick and placed it in an incense holder — an action I repeated every week at my grandfather’s altar when I was growing up. It felt so realistic that I tried to wave the joss stick to put out the flame and could almost smell the smoke.

It’s these types of activities that make VR games great vehicles for education and empathy. “We didn’t want to have these combat elements that traditional VR games do have,” Singer said, adding that it was one of the challenges in creating The Pirate Queen.

“It’s nice to see and to learn and be part of that, as opposed to ‘Let’s turn to page 48,’” Liu said. “That’s not as exciting as doing something and being actively part of something.” When you play as a historical character in a game, and one that’s as immersive as a VR game, “you’re living that person’s life or that moment in time,” Liu added.

While The Pirate Queen is currently only available on Quest devices, Singer said there are plans to bring it to “as many headsets as we possibly can.” Singer Studios also said it is “extending The Pirate Queen franchise beyond VR into a graphic novel, film and television series.”


Microsoft’s neural voice tool for people with speech disabilities arrives later this year

At its 14th Ability Summit, which kicks off today, Microsoft is highlighting developments and collaborations across its portfolio of assistive products. Much of that is around Azure AI, including features announced yesterday like AI-powered audio descriptions and Azure AI Studio, which better enables developers with disabilities to create machine-learning applications. It also showed off new updates like more languages and richer AI-generated descriptions for its Seeing AI tool, as well as new playbooks offering guidelines for best practices in areas like building accessible campuses and greater mental health support.

The company is also previewing a feature called “Speak For Me,” which is coming later this year. Much like Apple’s Personal Voice, Speak For Me can help those with ALS and other speech disabilities to use custom neural voices to communicate. Work on this project has been ongoing “for some time” with partners like the non-profit ALS organization Team Gleason, and Microsoft said it’s “committed to making sure this technology is used for good and plan to launch later in the year.” The company also shared that it’s working with Answer ALS and ALS Therapy Development Institute (TDI) to “almost double the clinical and genomic data available for research.”

One of the most significant accessibility updates coming this month is that Copilot will have new accessibility skills that enable users to ask the assistant to launch Live Caption and Narrator, among other assistive tools. The Accessibility Assistant feature announced last year will be available today in the Insider preview for M365 apps like Word, with the company saying it will be coming “soon” to Outlook and PowerPoint. Microsoft is also publishing four new playbooks today, including a Mental Health toolkit, which covers “tips for product makers to build experiences that support mental health conditions, created in partnership [with] Mental Health America.”

Ahead of the summit, the company’s chief accessibility officer Jenny Lay-Flurrie spoke with Engadget to share greater insight around the news as well as her thoughts on generative AI’s role in building assistive products.

“In many ways, AI isn’t new,” she said, adding “this chapter is new.” Generative AI may be all the rage right now, but Lay-Flurrie believes that the core principle her team relies on hasn’t changed. “Responsible AI is accessible AI,” she said.

Still, generative AI could bring many benefits. “This chapter, though, does unlock some potential opportunities for the accessibility industry and people with disabilities to be able to be more productive and to use technology to power their day,” she said. She highlighted a survey the company did with the neurodiverse community around Microsoft 365 Copilot, and the response of the few hundred people who responded was “this is reducing time for me to create content and it’s shortening that gap between thought and action,” Lay-Flurrie said.

The idea of being responsible in embracing new technology trends when designing for accessibility isn’t far from Lay-Flurrie’s mind. “We still need to be very principled, thoughtful and if we hold back, it’s to make sure that we are protecting those fundamental rights of accessibility.”

Elsewhere at the summit, Microsoft is featuring guest speakers like actor Michelle Williams and its own employee Katy Jo Wright, discussing mental health and their experience living with chronic Lyme disease respectively. We will also see Amsterdam’s Rijksmuseum share how it used Azure AI’s computer vision and generative AI to provide image descriptions for over a million pieces of art for visitors who are blind or have low vision.


Apple sold enough iPhones and services last quarter to reverse a downward revenue trend

After four consecutive quarters of revenue decline, Apple broke the trend and reported its first period of revenue growth today. In its earnings report for the first quarter of fiscal year 2024, the company announced quarterly revenue of $119.6 billion, an increase of 2 percent from the same period last year.

In addition, Apple CEO Tim Cook said its "installed base of active devices has now surpassed 2.2 billion, reaching an all-time high across all products and geographic segments." This quarter includes money brought in from the sales of the iPhone 15 line introduced in September 2023, which had an obvious impact on performance. 

"Today Apple is reporting revenue growth for the December quarter fueled by iPhone sales, and an all-time revenue record in Services,” Cook said. He noted the company hitting "all-time revenue records across advertising, Cloud services, payment services and video as well as December quarter records in App Store and Apple Care." Cook recapped some updates made to the Apple TV app, as well as TV+ content earning nominations and awards. 

Cook went on to remind us during the company's earnings call that tomorrow is the launch day for the Vision Pro headset, calling it historic. After saying that Apple is dedicated to investing in new technologies, Cook added that the company will be sharing more about its developments in AI later this year. 

Products in the wearables, home and accessories categories didn't fare well in this quarter, though sales in the Mac department did increase year over year. iPad sales in particular dropped 25 percent from the same period last year, though Cook attributed that to a "difficult compare" against the first quarter of 2023, when new models with refreshed Apple Silicon drove big numbers. Considering the company did not release a new iPad model in 2023 at all, this is not surprising.

Cook continued by highlighting developments like Apple opening its 100th retail location in Asia Pacific and updates on its sustainability efforts. He wrapped up by saying "Apple is a company that has never shied away from big challenges," adding "so we're optimistic about the future, confident in the long term and as excited as we've ever been to deliver for our users."

This article originally appeared on Engadget at https://www.engadget.com/apple-sold-enough-iphones-and-services-last-quarter-to-reverse-a-downward-revenue-trend-223109289.html?src=rss

Galaxy S24 and S24 Plus hands-on: Samsung's AI phones are here, but with mixed results

I’ve never thought of Samsung as a software company, let alone as a name to pay attention to in the AI race. But with the launch of the Galaxy S24 series today, the company is eager to have us associate it with the year’s hottest tech trend. The new flagship phones look largely the same as last year’s models, but on the inside, change is afoot. At a hands-on session during CES 2024 in Las Vegas last week, I was more focused on checking out the new software on the Galaxy S24 and S24 Plus.

Thanks to a new Snapdragon 8 Gen 3 processor (in the US) customized “for Galaxy,” the S24 series is capable of a handful of new AI-powered tasks that seem very familiar. In fact, if you’ve used Microsoft’s Copilot, Google’s Bard or ChatGPT, a lot of these tools won’t feel new. What is new is the fact that they’re showing up on the S24s, and are mostly processed on-device by Samsung’s recently announced Gauss generative AI model, which it has been quietly building out.

Samsung’s Galaxy AI features on the S24

There are five main areas where generative AI is making a big difference in the Galaxy S24 lineup — search, translations, note creation, message composition, and photo editing and processing. Aside from the notes and composition features, most of these updates seem like versions of existing Google products. In fact, the new Circle to Search feature is a Google service that is debuting on the S24 series, in addition to the Pixel 8 and Pixel 8 Pro.

Circle to Search

With Circle to Search, you basically press and hold the middle of the screen’s bottom edge, the Google logo and a search bar pop up, and you can draw a ring around anything on the display. Well, almost anything. DRMed content or things protected from screenshots, like your banking app, are off limits. Once you’ve made your selection, a panel slides up showing it, along with results from Google’s Search Generative Experience (SGE).

You can scroll down to see image matches, followed by shopping, text, website and other types of listings that SGE thought were relevant. I circled the Samsung clock widget, a picture of beef Wellington and a lemon, and each time I was given pretty accurate results. I was also impressed by how quickly Google correctly identified a grill that I circled on an Engadget article featuring a Weber Searwood, especially since the image I drew around was shot at an off angle.

This is basically image search via Google or Lens, except it saves you from having to open another app (and take screenshots). You’ll be able to circle items in YouTube videos, your friend’s Instagram Stories (or, let’s be honest, ads). Though I was intrigued by the feature and its accuracy, I’m not sure how often I’d use it in the real world. The long-press gesture to launch Circle to Search works whether you use gesture-based navigation or the three-button layout. The latter might be slightly confusing, since you hold your finger down roughly where the home button is, but not exactly on it.

Circle to Search is launching on January 31st, and though it’s reserved for the Galaxy S24s and Pixel 8s for now, it’s not clear whether older devices might get the feature.

Chat Assist to tweak the tone of your messages

The rest of Samsung’s AI features are actually powered by the company’s own language models, not Google’s. This part is worth making clear, because when you use the S24 to translate a message from, say, Portuguese to Mandarin, you’ll be using Samsung’s database, not Google’s. I really just want you to direct your anger at the right target when something inevitably goes wrong.

I will say, I was a little worried when I first heard about Samsung’s new Chat Assist feature. It uses generative AI to help reword a message you’ve composed to change up the tone. Say you’re in a hurry, firing off a reply to a friend who you know can get anxious and misinterpret texts. The S24 can take a sentence like “On my way back now what do you need” and make it less curt. The options I saw were “casual,” “emojify,” “polite,” “professional” and “social,” which is a hashtag-filled caption presumably for your social media posts.

I typed “Hey there. Where can I get some delicious barbecue? Also, how are you?” Then I tapped the AI icon above the keyboard and selected the “Writing Style” option. After about one or two seconds, the system returned variations of what I wrote.

At the top of the results was my original, followed by the Professional version, which I honestly found hilarious. It said “Hello, I would like to inquire about the availability of delectable barbecue options in the vicinity. Additionally, I hope this message finds you well. Thank you for your attention to this matter.”

It reminded me of an episode of Friends where Joey uses a thesaurus to sound smarter. Samsung’s AI seems to have simply replaced every word with a slightly bigger word, while also adding some formal greetings. I don’t think “inquire about the availability of delectable barbecue options in the vicinity” is anything a human would write.

That said, the casual option was a fairly competent rewording of what I’d written, as was the polite version. I cannot imagine a scenario where I’d pick the “emojify” option, except for the sake of novelty. And while the social option pained me to read, at least the hashtags of #Foodie and #BBQLover seemed appropriate.

Samsung Translate

You can also use Samsung’s AI to translate messages into one of 13 languages in real time, which is fairly similar to a feature Google launched on the Pixel 6 in 2021. The S24’s interface looks reminiscent of the Pixel’s, too, with both offering two text input fields. Like Google, Samsung also has a field at the top for you to select your target language, though the system is capable of automatically recognizing the language being used. I never got this to work correctly in a foreign language that I understand, and have no real way of confirming how accurate the S24 was in Portuguese.

Samsung’s translation engine is also used for a new feature called Live Translate, which basically acts as an interpreter for you during phone calls made via the native dialer app. I tried this by calling one of a few actors Samsung had on standby, masquerading as managers of foreign-language hotels or restaurants. After I dialed the number and turned on the Live Translate option, Samsung’s AI read out a brief disclaimer explaining to the “manager at a Spanish restaurant” that I was using a computerized system for translation. Then, when I said “Hello,” I heard a disembodied voice say “Hola” a few seconds later.

The lag was pretty bad and it threw off the cadence of my demo, as the person on the other end of the call clearly understood English and would answer in Spanish before my translated request was even sent over. So instead of:

Me: Can I make a reservation please?

S24: … ¿Puedo hacer una reserva por favor?

Restaurant: Sí, ¿cuántas personas y a qué hora?

S24 (to me): … Yes, for how many people and at what time?

My demo actually went:

Me: Can I make a reservation please?

pause

Restaurant: Sí, ¿cuántas personas y a qué hora?

S24: ¿Puedo hacer una reserva por favor?

pause

S24 (to me): Yes, for how many people and at what time?

It was slightly confusing. Do I think this is representative of all Live Translate calls in the real world? No, but Samsung will need to work on cutting down the lag if it wants the feature to be helpful rather than confusing.

Galaxy AI reorganizing your notes

I was most taken by what Samsung’s AI can do in its Notes app, which historically has had some pretty impressive handwriting recognition and indexing. With the AI’s assistance, you can quickly reformat your large blocks of text into easy-to-read headers, paragraphs and bullets. You can also swipe sideways to see different themes, with various colors and font styles.

Notes can also generate summaries for you, though most of the summaries on the demo units didn’t appear very astute or coherent. After it auto-formatted a note titled “An Exploration of the Celestial Bodies in Our Solar System,” the first section was aptly titled “Introduction,” but the first bullet point under that was, confusingly, “The Solar System.” The second bullet point was two sentences, starting with “The Solar System is filled with an array of celestial bodies.”

Samsung also borrowed another feature from the Pixel ecosystem, using its speech-to-text software to transcribe, summarize and translate recordings. The transcription of my short monologue was accurate enough, but the speaker labels weren’t. Summaries of the transcriptions were similar to those in Notes, in that they’re not quite what I’d personally highlight.

Photo by Sam Rutherford / Engadget

That’s already a lot to cover, and I haven’t even gotten to the photo editing updates yet. My colleague Sam Rutherford goes into a lot more detail on those in his hands-on with the Galaxy S24 Ultra, which has the more sophisticated camera system. In short, though, Samsung offers edit suggestions, generative background filling and an instant slow-mo tool that fills in frames when you choose to slow down a video.

Samsung Galaxy S24 and S24 Plus hardware updates

That brings me to the hardware. On the regular Galaxy S24 and S24 Plus, you’ll be getting a 50-megapixel main sensor, 12MP ultra-wide camera and 10MP telephoto lens with 3x optical zoom. Up front is a 12MP selfie camera. So, basically, the same setup as last year. The S24 has a 6.2-inch Full HD+ screen, while the S24 Plus sports a 6.7-inch Quad HD+ panel, and both offer adaptive refresh rates that can go between 1 and 120Hz. In the US, all three S24 models use a Snapdragon 8 Gen 3 for Galaxy processor, with the base S24 starting out with 8GB of RAM and 128GB of storage. Both the S24 and S24 Plus have slightly larger batteries than their predecessors, with their respective 4,000mAh and 4,900mAh cells coming in at 100mAh and 200mAh bigger than before.

Though the S24s look very similar to last year’s S23s, my first thought on seeing them was how much they looked like iPhones. That’s neither a compliment nor an indictment. And to be clear, I’m only talking about the S24 and S24 Plus, not the Ultra, which still has the distinctive look of a Note.

Photo by Sam Rutherford / Engadget

It feels like Samsung spent so much time upgrading the software and focusing on joining the AI race this year that it completely overlooked the S24’s design. Plus, unlike the latest iPhones, the S24s are missing support for the newer Qi2 wireless charging standard, which includes magnetic support a la Apple’s MagSafe.

Wrap-up

I know it’s just marketing-speak and empty catchphrases, but I’m very much over Samsung’s use of what it thinks is trendy to appeal to people. Don’t forget, this is the company that had an “Awesome Unpacked” event in 2021 filled to the brim with cringeworthy moments and an embarrassingly large number of utterances of the words “squad” and “iconic”.

That doesn’t mean what Samsung’s done with the Galaxy S24 series is completely meaningless. Some of these features could genuinely be useful, like summarizing transcriptions or translating messages in foreign languages. But after watching the company follow trend after trend (like introducing Bixby after the rise of digital assistants, or bringing scene optimizers to its camera app after Chinese phone makers did), launching generative AI features feels hauntingly familiar. My annoyance at Samsung’s penchant for #trendy #hashtags aside, the bigger issue here is that if the company is simply jumping on a fad instead of actually thoughtfully developing meaningful features, then consumers run the risk of losing support for tools in the future. Just look at what happened to Bixby.

This article originally appeared on Engadget at https://www.engadget.com/galaxy-s24-and-s24-plus-hands-on-samsungs-ai-phones-are-here-but-with-mixed-results-180008236.html?src=rss

Apple Vision Pro hands-on, redux: Immersive Video, Disney+ app, floating keyboard, and a little screaming

With pre-orders for the Apple Vision Pro headset opening this week, the company is getting ready to launch one of its most significant products ever. It announced this morning an “entertainment format pioneered by Apple” called Apple Immersive Video, as well as new viewing environments in the Disney+ app featuring scenes from the studio’s beloved franchises like the Avengers and Star Wars.

We already got hands-on once back at WWDC when the headset was first announced, but two of our editors, Dana Wollman and Cherlynn Low, had a chance to go back and revisit the device (and, in Dana’s case, experience it anew). Since we’ve already walked you through some of the basic UI elements in our earlier piece, we decided to focus on some of the more recently added features, including Apple Immersive Video, the new Disney+ environments, a built-in “Encounter Dinosaurs” experience, as well as the floating keyboard, which didn’t work for us when we first tried the device in June of last year. Here, too, we wanted to really get at what it actually feels like to use the device, from the frustrating to the joyful to the unintentionally eerie. (Yes, there was a tear, and also some screaming.)

Fit, comfort and strap options

Cherlynn: The best heads-up display in the world will be useless if it can’t be worn for a long time, so comfort is a crucial factor in the Apple Vision Pro’s appeal. This is also a very personal factor with a lot of variability between individual users. I have what has been described as a larger-than-usual head, and a generous amount of hair that is usually flat-ironed. This means that any headgear I put on tends to slip, especially if the band is elastic.

Unlike the version that our colleague Devindra Hardawar saw at WWDC last year, the Vision Pro unit I tried on today came with a strap that stretches around the back of your head. It was wide, ridged and soft, and I at first thought it would be very comfortable. But 15 minutes into my experience, I started to feel weighed down by the device, and five minutes after that, I was in pain. To be fair, I should have flagged my discomfort to Apple earlier, and alternative straps were available for me to swap out. But I wanted to avoid wasting time. When I finally told the company’s staff about my issues, they changed the strap to one that had two loops, with one that went over the top of my head.


Dana: The fitting took just long enough — required just enough tweaking — that I worried for a minute that I was doing it wrong, or that I somehow had the world’s one unfittable head. First, I struggled to get the lettering to look sharp. It was like sitting at an optometrist's office, trying out a lens that was just slightly too blurry for me. Tightening the straps helped me get the text as crisp as it needed to be, but that left my nose feeling pinched. The solution was swapping out the seal cushion for the lighter of the two options. (There are two straps included in the box, as well as two cushions.) With those two tweaks — the Dual Loop Band and the light seal cushion — I finally felt at ease.

Cherlynn: Yep, that Dual Loop band felt much better for weight distribution, and it didn’t keep slipping down my hair. It’s worth pointing out that Apple did first perform a scan to determine my strap size, and they chose the Medium for me. I also had to keep turning a dial on the back right to make everything feel more snug, so I had some control over how tightly the device sat. Basically, you’ll have quite a lot of options to adapt the Vision Pro to your head.

Immersive Video

Dana: Sitting up close in the center of spatial videos reminded me of Jimmy Stewart’s character in It’s A Wonderful Life: I was both an insider and outsider at the same time. In one demo, we saw Alicia Keys performing the most special of performances: just for us, in a living room. In a different series of videos, we saw the same family at mealtime, and a mother and daughter outside, playing with bubbles.

As I watched these clips, particularly the family home videos that reminded me of my own toddler, I felt immersed, yes, but also excluded; no one in the videos sees you or interacts with you, obviously. You are a ghost. I imagined myself years from now, peering in from the future on bygone videos of my daughter, and felt verklempt. I did not expect to get teary-eyed during a routine Apple briefing.

Cherlynn: The Immersive Video part of my demo was near the end, by which point I had already been overwhelmed by the entire experience and did not quite know what more to expect. The trailer kicked off with Alicia Keys singing in my face, which I enjoyed. But I was more surprised by the kids playing soccer with some rhinos on the field, and when the animals charged towards me, I physically recoiled. I loved seeing the texture of their skin and the dirt on the surface, and was also impressed when I saw the reflection of an Apple logo on the surface of a lake at the end. I didn’t have the same emotional experience that Dana did, but I can see how it would evoke some strong feelings.


Disney+ app

Dana: Apple was very careful to note that the version of the Disney+ app we were using was in beta, a work in progress. But what we saw was still impressive. Think of it like playing a video game: Before you select your race course, say, you get to choose your player. In this case, your “player” is your background. Do you want to sit on a rooftop from a Marvel movie? The desert of Tatooine? Make yourself comfortable in whatever setting tickles your fancy, and then you can decide if you actually want to be watching Ted Lasso in your Star Wars wasteland. It’s not enough to call it immersive. In some of these “outdoor” environments in particular, it’s like attending a Disney-themed drive-in. Credit to Disney: They both understand – and respect – their obsessive fans. They know their audience.

Cherlynn: As a big Marvel fangirl, I really geeked out when the Avengers Tower environment came on. I looked around and saw all kinds of Easter eggs, including a takeout container from Shawarma Grill on the table next to me. It feels a little silly to gush about the realism of the images, but I saw no pixels. Instead, I looked at a little handwritten note that Tony Stark had clearly left behind and felt like I was almost able to pick it up. When we switched over to the Tatooine environment, I was placed in the cockpit of Luke Skywalker’s landspeeder, and when I reached out to grab the steering controls, I was able to see my own hands in front of me. I felt slightly disappointed to not actually be able to interact with those elements, but it was definitely a satisfying experience for a fan.

Typing experience

Cherlynn: Devindra mentioned that the floating keyboard wasn’t available at his demo last year, and was curious to hear what that was like. I was actually surprised that it worked, and fairly well in my experience. When I selected the URL bar by looking at it and tapping my thumb and forefinger, the virtual keyboard appeared. I could either use my eyes to look at the keys I wanted, then tap my fingers together to push them. Or, and this is where I was most impressed, I could lean forward and press the buttons with my hands.

It’s not as easy as typing on an actual keyboard would be, but I was quite tickled by the fact that it worked. Kudos to Apple’s eye- and hand-tracking systems, because they were able to detect what I was looking at or aiming for most of the time. My main issue with the keyboard was that it felt a little too far away and I needed to stretch if I wanted to press the buttons myself. But using my eye gaze and tapping wasn’t too difficult for a short phrase, and if I wanted to input something longer I could use voice typing (or pair a Bluetooth keyboard if necessary).


Dana: This was one of the more frustrating aspects of the demo for me. Although there were several typing options – hunting and pecking with your fingers, using eye control to select keys, or just using Siri – none of them felt adequate for anything resembling extended use. It took several tries for me to even spell Engadget correctly in the Safari demo. This was surprising to me, as so many other aspects of the broader Apple experience – the pinch gesture, the touch keyboard on the original iPhone – “just work,” as Apple loves to say about itself. The floating keyboard here clearly needs improvement. In the meantime, it’s hard to imagine using the Vision Pro for actual work. The Vision Pro feels much further along as a personal home theater.

Meditation

Cherlynn: As someone who’s covered the meditation offerings by companies like Apple and Fitbit a fair amount, I wasn’t sure what to expect of the Vision Pro. Luckily, this experience took place in the earlier part of the demo, so I wasn’t feeling any head strain yet and was able to relax. I leaned back on the couch and watched as a cloud, similar to the Meditation icon on the Apple Watch, burst into dozens of little “leaves” and floated around me in darkness. As the 1-minute session started, soft, comforting music played in the background as a voice guided me through what to do. The leaves pulsed, and between the relaxing visuals and the calming sounds, it all felt quite soothing. It’s funny how oddly appropriate a headset is for something like meditating, where you can literally block out distractions in the world and simply focus on your breathing. This was a fitting use of the Vision Pro that I certainly did not anticipate.

Dana: I wanted more of this. A dark environment, with floating 3D objects and a prompt to think about what I am grateful for today. The demo only lasted one minute, but I could have gone longer.

Encounter Dinosaurs

Cherlynn: Fun fact about me: Dinosaurs don’t scare me, but butterflies do. Yep. Once you’ve stopped laughing, you can imagine the trauma I had to undergo at this demo. I’d heard from my industry friends and Devindra all about how they watched a butterfly land on their fingers in their demos at WWDC, before dinosaurs came bursting out of a screen to roar at them. Everyone described this as a realistic and impressive technological demo, since the Vision Pro was able to accurately pinpoint for everyone where their fingers were and have the butterflies land exactly on their fingertips.

I did not think I’d have to watch a butterfly land on my body today, and just generally do not want that in life. But for this demo, I kept my eyes open to see just how well Apple would do, and, because I had a minor calibration issue at the start of this demo, I had to do this twice. The first time this happened, I… screamed a bit. I could see the butterfly’s wings and legs. That’s really what creeped me out the most — seeing the insect’s legs make “contact” with my finger. There was no tactile feedback, but I could almost feel the whispery sensation of the butterfly’s hairy ass legs on my finger. Ugh.

Then the awful butterfly flew away and a cute baby dinosaur came out, followed by two ferocious dinosaurs that I then stood up to “pet”. It was much more fun after that, and actually quite an impressive showcase of the Vision Pro’s ability to blend the real world with immersive experiences, as I was able to easily see and walk around a table in front of me to approach the dinosaur.

Dana: Unlike Cher, I did not scream, though I did make a fool of myself. I held out my hand to beckon one of the dinosaurs, and it did in fact walk right up to me and make a loud sound in my face. I “pet” it before it retreated. Another dinosaur appeared. I once again held out my hand, but that second dino ignored me. As the demo ended, I waved and heard myself say “bye bye.” (Did I mention I live with a toddler?) I then remembered there were other adults in the room, observing me use the headset, and felt sheepish. Which describes much of the Vision Pro experience, to be honest. You could maybe even say the same of any virtual reality headset worth its salt. It is immersive to the point that you will probably, at some point, throw decorum to the wind.


Final (ish) thoughts

Cherlynn: I had been looking forward to trying on the Vision Pro for myself and was mostly not disappointed. The eye- and hand-tracking systems are impressively accurate, and I quickly learned how to navigate the interface, so much so that I was speeding ahead of the instructions given to me. I’m not convinced that I’ll want to spend hours upon hours wearing a headset, even if the experience was mind-blowing. The device’s $3,500 price is also way out of my budget.

But of all the VR, AR and MR headsets I’ve tried on in my career, the Apple Vision Pro is far and away the best, and easily the most thought-out. Apple also took the time to show us what you would look like to other people when using the device, with a feature called EyeSight that would put a visual feed of your eyes on the outside of the visor. Depending on what you’re doing in visionOS, the display would show some animations indicating whether you’re fully immersed in an environment or if you can see the people around you.

Dana: The Vision Pro was mostly easier to use than I expected, and while it has potential as an all-purpose device that you could use for web browsing, email, even some industrial apps, its killer application, for now, is clearly watching movies (home videos or otherwise). I can’t pretend that Apple is the first to create a headset offering an immersive experience; that would be an insult to every virtual reality headset we’ve tested previously (sorry, Apple, I’m going to use the term VR). But if you ask me what it felt like to use the headset, particularly photo and video apps, my answer is that I felt joy. It is fun to use. And it is up to you if this much fun should cost $3,500.

This article originally appeared on Engadget at https://www.engadget.com/apple-vision-pro-hands-on-redux-immersive-video-disney-app-floating-keyboard-and-a-little-screaming-180006222.html?src=rss

Audio Radar helps gamers with hearing loss 'see' sound effects instead

Audio cues can sometimes be crucial for success in games. Developers frequently design the sound environment for their experiences to be not only rich and immersive, but to also contain hints about approaching enemies or danger. Players who are hard of hearing can miss out on this, and it's not fair for them to be disadvantaged due to a disability. A product called Audio Radar, which launched at CES 2024, can help turn sound signals into visual cues so that gamers with hearing loss can "see the sound," according to its maker, AirDrop Gaming LLC.

The setup is fairly simple. A box plugs into a gaming console, interprets the audio output and converts that data into lights. A series of RGB light bars surrounds the screen, with each bar displaying different colors depending on the type of sound coming from the direction it represents. Put simply, if you're walking around a Minecraft world, like I did at the company's booth on the show floor, you'll see lights of different colors appear on the different bars.

Red lights mean sounds from enemies are in the area adjacent to the corresponding light, while green is for neutral sounds. An onscreen legend also explains what the sounds mean, though that might just be for the modded Minecraft scenario on display at CES. 

Photo by Cherlynn Low / Engadget

I walked around the scene briefly, and could see green lights hovering above a pen of farm animals, while purple lights fluttered in tandem with a dragon flying overhead. I did find it a little confusing, but that is probably due more to the fact that I know very little about Minecraft, and as someone with hearing I might not appreciate the added information as much as someone without.

With an SDK that the company launched at the show, developers will be able to customize the lights and visual feedback to elements in their game so that they have control over what their hard-of-hearing gamers see. In the meantime, Audio Radar is using its own software to detect stereo or surround sound signals to convert to feedback in lights and colors. 

Though the product may seem to be in its early stages, several major gaming companies appear to have taken an interest in Audio Radar. AirDrop Gaming's CEO Tim Murphy told me that Logitech is "providing support as we further develop our product and design our go-to-market strategy." Also, Microsoft CEO Satya Nadella was spotted at the booth on opening day.

Audio Radar is beginning to ship on a wider level this year, and the company continues to develop products for gamers who are deaf and hard of hearing, among other things. The system works with Xbox, PlayStation and PC.

We're reporting live from CES 2024 in Las Vegas from January 6-12. Keep up with all the latest news from the show here.

This article originally appeared on Engadget at https://www.engadget.com/audio-radar-helps-gamers-with-hearing-loss-see-sound-effects-instead-195001226.html?src=rss

Our favorite accessibility products at CES 2024

So much of what we see at CES tends to be focused on technological innovation for the sake of innovation, or obvious attempts to tap into whatever trend is gripping the internet's attention that year. In the last few shows, though, there has been a heartening increase in attention to assistive products that are designed to help improve the lives of people with disabilities and other different needs. At CES 2024, I was glad to see more development in the accessibility category, with many offerings appearing to be more thoughtfully designed in addition to being clever. It's so easy to get distracted by the shiny, eye-catching, glamorous and weird tech at CES, but I wanted to take the time to give due attention to some of my favorite accessibility products here in Las Vegas.

GyroGlove

Before I even packed my bags, numerous coworkers had sent me the link to GyroGlove's website after it had been recognized as an honoree for several CES Innovation Awards. The device is a hand-stabilizing glove that uses gyroscopic force to help those with hand tremors minimize the shakes. Because the demo unit on the show floor was too large for me, and, more importantly, I don't have hand tremors, I couldn't accurately assess the glove's effectiveness.

But I spoke with a person with Parkinson's disease at the booth, who had been wearing one for a few days. She said the GyroGlove helped her perform tasks like buttoning up a shirt more easily, and that she intended to buy one for herself. At $5,899, the device is quite expensive, which is the sad state of assistive products these days. But GyroGlove's makers said they're in talks with some insurance providers in the US, which could lead to it being covered for those in America who could benefit from it. That's one of the biggest reasons we named GyroGlove one of our winners for CES 2024.

Photo by Cherlynn Low / Engadget

MouthPad

I did not think I'd be looking deep into a person's mouth and up their nose at CES 2024, but here we are. Sometimes you have to do strange things to check out unconventional gadgets. The MouthPad is as unusual as it gets. It's a tongue-operated controller for phones, tablets and laptops, and basically anything that will accept a Bluetooth mouse input. The components include a touchpad mounted onto the palate of what's essentially a retainer, as well as a battery and Bluetooth radio.

As odd as the concept sounds, it actually could be a boon for people who aren't able to use their limbs, since your tongue, as a muscle, can offer more precise movement and control than, say, your eyes. If you're feeling apprehensive about sticking a device inside your mouth, it might be helpful to know that the battery comes from a company that makes batteries for medical-grade implants, while the rest of the dental tray is made from a resin that's commonly used in aligners and bite guards. The product is currently available as an early access package that includes setup and calibration assistance, with a new version (with longer battery life) slated for launch later this year.

OrCam Hear

Assistive tech company OrCam won our Best of CES award for accessibility in 2022, so I was eager to check out what it had in store this year. I wasn't disappointed. The company had a few updated products to show off, but the most intriguing was a new offering for people with hearing loss. The OrCam Hear system is a three-part package consisting of a pair of earbuds, a dongle for your phone and an app. Together, the different parts work to filter out background noise while identifying and isolating specific speakers in a multi-party conversation.

At a demo during a noisy event at CES 2024, I watched and listened as the voices of selected people around me became clear or muffled as company reps dragged their icons in or out of my field of hearing. I was especially impressed when the system was able to identify my editor next to me and let me choose to focus on or filter out his voice. 

Audio Radar

If you're a gamer, you'll know how important audio cues can sometimes be for a successful run. Developers frequently design the sound environment for their games to be not only rich and immersive, but to also contain hints about approaching enemies or danger. Players who are hard of hearing can miss out on this, and it's not fair for them to be disadvantaged due to a disability. 

A product called Audio Radar can help turn sound signals into visual cues, so that gamers with hearing loss can "see the sound," according to the company. The setup is fairly simple. A box plugs into a gaming console to interpret the audio output and convert it into lights. A series of RGB light bars surround the screen, and display different colors depending on the type of sound coming from the respective direction they represent.

CES 2024 saw not just Audio Radar's official launch, but was also where the company introduced its SDK for game developers to create custom visual cues for players who are hard of hearing. The company's founder and CEO Tim Murphy told Engadget that it's partnering with Logitech, with the gaming accessory maker "providing support as we further develop our product and design our go-to-market strategy." 

Photo by Cherlynn Low / Engadget

TranscribeGlass

Google Glass was resurrected at CES 2024. Sort of. A new product called TranscribeGlass is a small heads-up display you can attach to any frames, and the result looks a lot like the long-dead Google device. It connects to your phone and uses that device's onboard processing to transcribe what it hears, then projects the text onto the tiny transparent display hovering above the eye. You'll be able to resize the font, adjust the scrolling speed and pick your language model of choice, since TranscribeGlass uses third-party APIs for translation. Yes, it converts foreign languages into one you understand, too.

The company is targeting the end of the year for launch, and hoping to offer the device at $199 to start. When I tried it on at the show, I was surprised by how light and adjustable the hardware was. I had to squint slightly to see the captions, and was encountering some Bluetooth lag, but otherwise the transcriptions took place fairly quickly and appeared to be accurate. The TranscribeGlass should last about eight hours on a charge, which seems reasonable given all that it's doing.

Samsung's subtitle accessibility features

Though we didn't catch a demo of this in person, Samsung did briefly mention a "sign language feature in Samsung Neo QLED" that "can be easily controlled with gestures for the hearing impaired, and an Audio Subtitle feature [that] turns text subtitles into spoken words in real-time for those with low vision." We weren't able to find this at the show, but the concept is certainly meaningful. Plus, the fact that Samsung TVs have mainstream appeal means these features could be more widely available than most of the niche products we've covered in this roundup.

We're reporting live from CES 2024 in Las Vegas from January 6-12. Keep up with all the latest news from the show here.

This article originally appeared on Engadget at https://www.engadget.com/our-favorite-accessibility-products-at-ces-2024-170009710.html?src=rss