
CES 2024 Day 1 recap: Samsung and Sony dominated, as did chips and laptops

The first unofficial day of CES 2024 has come and gone and it feels like we’ve been run over by a giant metaphorical eighteen-wheeler full of press conferences. From home robots to electric vehicles to AI, laptops and processors, there was news from pretty much all areas of tech. There were pleasant surprises like Samsung’s cute new Ballie robot ball and Sony’s spatial content creation headset, and intriguing concepts like Razer’s vibrating cushion for gamers. We also got exactly what we expected in the form of new processors from the likes of AMD, Intel and NVIDIA, as well as the subsequent flood of laptops carrying the just-announced chips for 2024.

And for everyone else, this CES also saw the launch of things like headphones, electric vehicles, gaming handhelds, grills, gaming phones, e-ink tablets, strange hybrid devices, noise-suppressing masks, standing desks and more. It’s a free-for-all and we’re nowhere near done. Here’s just a small selection of the biggest news out of CES 2024’s press day, right before the show officially opens.

Samsung and Sony’s press conferences had some of the best surprises this year. Samsung showed us a new version of its Ballie robot, which is cute as heck. It’s basically a yellow bowling ball with a projector built in and can send you text messages and video clips of what’s at home while you’re out. You can ask it to close your curtains, turn on your lights or stream your favorite yoga video to your ceiling while you lie on your back for a meditative session. Samsung told The Washington Post that Ballie will be available for sale sometime this year, but did not say how much it would cost. I guess that’s another surprise we can look forward to in the coming months.

Meanwhile, Sony brought us a few unexpected demos, starting by driving its Afeela concept electric car onstage using a PlayStation controller. Then, it showed off its mixed reality headset for “spatial content creation,” which sounds somewhat similar to Apple’s Vision Pro and Microsoft’s HoloLens. Sony’s does appear to target content creators, though, and looks like a pared-down PSVR2 headset. It’ll be powered by a Snapdragon XR2+ Gen 2 chipset, sport dual 4K OLED microdisplays and have user and space tracking. The new Sony headset still has no name and no price, but it will be available later this year.

Also dominating our news feeds on Day 1 was the barrage of chip news coming from Intel, AMD and NVIDIA. AMD, for example, launched a new Radeon RX 7600 XT GPU, which is a slight upgrade from last year’s entry-level model. The company also brought processors with neural processing units for AI acceleration to its desktop offerings by announcing the Ryzen 8000G series. Meanwhile, NVIDIA unveiled the RTX 4080 Super, RTX 4070 Ti Super and RTX 4070 Super, which will cost $999, $799 and $599 respectively. It also announced updates for its GeForce Now cloud gaming service, adding G-Sync support and day passes for streaming. Intel kept things fairly tame and tidy, simply giving us its complete 14th-generation CPU family, including HX-series chips like a 24-core i9 model. It also launched the Core U Processor Series 1, which is designed to balance performance and power efficiency in thin and light laptops.

Speaking of laptops, most PC makers followed up the chip news flood by announcing all their new models containing the latest silicon. We saw notebooks from Alienware, Lenovo, MSI, Acer, Asus, and Razer, among others. MSI also had a new gaming handheld to show us, which is the first of its category to use Intel’s just-announced Core Ultra chip.

Asus also put that chip in a non-laptop product, debuting a new homegrown NUC. Meanwhile, Lenovo continued to challenge our notions of what a laptop can be with its ThinkBook Plus Gen 5, which is a weird gadget mermaid of sorts. Its top half is a 14-inch Android tablet, while its bottom half is a Windows keyboard and all of it is just funky.

Speaking of odd Android tablets, TCL was here with a new version of its NXTPAPER e-ink-ish tablet. This year’s model can switch between a matte e-paper-like display and a full-color LCD at the push of a button. The company also showed off a mini-LED TV, which, at 115 inches, is the biggest mini-LED TV with Quantum Dot technology to date.

We also got to check out Razer’s Project Esther, which is a proof-of-concept vibrating cushion showcasing the company’s new Sensa HD haptics platform for more immersive gaming experiences. That might be one of my favorite demos so far because… well… it vibrates. It’s a vibrating cushion for most office or gaming chairs.

There was plenty of car and transportation news, too, like Kia’s new fleet of modular vans and Volkswagen adding ChatGPT powers to its in-car voice assistant. The CES 2024 show floor was also littered with piles of headphones, earbuds (and earwax) thanks to announcements from JBL, Sennheiser and less-known names like Mojawa, which put an AI-powered running coach in its bone-conduction headphones.

At the Pepcom showcase, we also saw some intriguing and fun products, like the Skyted Silent Mask that lets you talk in private in public, as well as the LifeSpan standing desk bike that lets you cycle really hard to generate enough power to charge your phone.

Intrigued? Check out our articles and videos with more details on everything I’ve mentioned and more. Or if you prefer, we’ll be back tomorrow to recap all the biggest news again to make your life easier. We’ve got plenty of press conferences coming up, and the show floor has officially opened, which means there’s still lots of stuff to check out in the days to come. 

We're reporting live from CES 2024 in Las Vegas from January 6-12. Keep up with all the latest news from the show here.

This article originally appeared on Engadget at https://www.engadget.com/ces-2024-day-1-recap-samsung-and-sony-dominated-as-did-chips-and-laptops-140024127.html?src=rss

Humane lays off staff before its 'Ai Pin' begins shipping

Wearable startup Humane AI has laid off four percent of its employees before even starting to ship its Ai Pin, The Verge has reported. Leadership reportedly told employees that budgets would be lowered in 2024, according to sources familiar with the matter. The cuts were implemented earlier this week and affect around 10 people.

On LinkedIn, CEO and co-founder Bethany Bongiorno called the cuts "part of a wider refresh of our organizational structure as our company evolves with purpose for this next phase of growth." She added that CTO Patrick Gates will be transitioning to an advisor role, and that Humane AI had promoted new heads of hardware, software and other departments as part of a reorganization. Bongiorno told The Verge that the cuts were "not communicated as a layoff," though sources told that outlet that they were, both verbally and in writing.

Humane was founded by ex-Apple executives Bongiorno and her husband Imran Chaudhri. Its primary product is the "Ai Pin," which acts as a sort of wearable AI assistant. The company first unveiled the device at a Paris fashion show and announced last month that it would start shipping in March for $699.


The pin takes voice commands from the user and beams relevant information onto the user’s hand via a built-in projector. It can also perform AI-powered optical recognition via a camera. It's powered by a quad-core Snapdragon processor with a dedicated Qualcomm AI Engine using Cosmos OS software. The founders have said that it "quickly understands what you need, connecting you to the right AI experience or service instantly." 

The Ai Pin has yet to be thoroughly reviewed (other than a few short tests), but the company demonstrated how it works in a video released last month. The founders showed how you can give it specific commands like "play music written by Prince, but not performed by Prince." The device can then display the relevant information on your hand via the projector, and you can control music playback and more by tilting or closing your hand.

It can also answer questions by searching the web and send messages with modifiers like "add more excitement." You can use it to monitor your health and nutrition, and Humane provides a central hub for images, etc., along with accessories like clips, battery cases and more. How well it performs its AI tasks in the real world, though, remains to be seen.

This article originally appeared on Engadget at https://www.engadget.com/humane-lays-off-staff-before-its-ai-pin-begins-shipping-103548514.html?src=rss

Twitch is reportedly laying off 35 percent of its workforce

Amazon-owned Twitch is preparing to lay off 35 percent of its employees, or around 500 people, Bloomberg has reported, citing "people familiar with the matter." The move follows a headcount reduction of around 400 people in 2023 and Twitch's decision to cease operations in Korea. The cuts could be announced Wednesday, but no other details were provided, including who may be affected.

The move was reportedly made amid concerns over losses at Twitch, which has failed to become profitable nine years after Amazon acquired it for nearly $1 billion. The costs of running the site are huge, given that it supports around 1.8 billion hours of live video content a month. A similar issue forced Twitch to leave South Korea, where CEO Dan Clancy said costs are "ten times more expensive" than in other countries.

Near the end of last year, several key executives departed the company, including its chief product officer, chief customer officer, chief revenue officer and chief content officer. Clancy himself has been CEO for less than a year, having replaced co-founder Emmett Shear in March 2023.

In an attempt to boost profitability, Twitch has reworked the way it does advertising and pays streamers in recent years. The site had over 50,000 partner creators back in 2022, and many have reportedly praised Clancy for taking a more hands-on approach and listening to their concerns.

Parent Amazon has been on a cost-cutting mission, having laid off 27,000 employees over the last two years, including 9,000 in 2023. That's part of a downturn across tech companies, with large-scale layoffs last year at Google, Meta, Spotify, Epic Games, Unity and others. 

This article originally appeared on Engadget at https://www.engadget.com/twitch-is-reportedly-laying-off-35-percent-of-its-workforce-085946333.html?src=rss

The ASUS AirVision M1 is a wearable display for multi-taskers

ASUS has introduced quite a lengthy list of products at CES 2024 in Las Vegas, including high-tech eyewear called the AirVision M1. It's not really a competitor to the upcoming Apple Vision Pro and the mixed reality headsets other companies debuted at the event, though. The AirVision M1 is a wearable display with the ability to generate multiple virtual screens, supposedly so that users can juggle several tasks at once. It's equipped with an FHD (1,920 x 1,080) Micro OLED display that has a 57-degree vertical perspective field of view.

The device's system has three degrees of freedom, and users can pin several screens where they want in the aspect ratio they prefer, whether it's 16:9, 21:9 or 32:9. They can do so through the glasses' intuitive touchpad located on the left temple, where they can also adjust brightness and activate 3D mode. The device also comes with built-in noise-canceling microphones and speakers.

While it may sound like the AirVision M1 could be a good companion for people who need to bring their work with them when they travel, it's not a standalone wearable: It has to be connected to a PC or a phone via USB-C to work. ASUS has yet to reveal how much it costs and when it'll be available, but its specs and capabilities suggest it'll cost a fraction of what Apple's Vision Pro does.

We're reporting live from CES 2024 in Las Vegas from January 6-12. Keep up with all the latest news from the show here.

This article originally appeared on Engadget at https://www.engadget.com/the-asus-airvision-m1-is-a-wearable-display-for-multi-taskers-060237509.html?src=rss

TikTok pulled a hashtag-tracking feature researchers used to study the platform

TikTok recently pulled a tool that allowed researchers and others to study the popularity of hashtags on its app. The change, first reported by The New York Times, came shortly after researchers used data from the tool to publish a report critical of the company.

As The New York Times points out, the tool was one of the few publicly-accessible methods of tracking details about the popularity of specific hashtags. TikTok, like other social media companies, has made it difficult for outsiders to track how content spreads in its app.

The tool in question is a feature called Creative Center, which provides data about the popularity of hashtags to would-be advertisers and others. Researchers at Rutgers’ Network Contagion Research Institute (NCRI) had used Creative Center’s search function to track hashtags deemed “sensitive” to Chinese government interests. The researchers compared the prevalence of the hashtags between TikTok and Instagram and concluded that many "sensitive" topics were "dramatically underrepresented on TikTok" compared with Instagram.

Soon after the report was published, the researchers said the search feature in Creative Center disappeared without an explanation. “Search capacity for Hashtags has itself now been removed from the user interface entirely, which NCRI discovered to have occurred on Christmas day, days after this report’s initial release,” they wrote in an addendum to the report. They added that TikTok had also disabled direct access to a number of “sensitive” topics they had previously tracked, including hashtags related to US politics and other geopolitical issues.

In a statement to The New York Times, TikTok confirmed the change. “Unfortunately, some individuals and organizations have misused the Center’s search function to draw inaccurate conclusions, so we are changing some of the features to ensure it is used for its intended purpose,” a company spokesperson said.

The dust-up is the latest example of mounting tensions between social media companies and researchers trying to study thorny topics like misinformation. Meta has also found itself at odds with researchers, and reportedly plans to deprecate CrowdTangle, a tool widely used by researchers and journalists to study how content spreads on Facebook. X has also greatly restricted researchers’ access to data since Elon Musk took control of the company, making its once open APIs prohibitively expensive to most groups.

In TikTok’s case, the company may be particularly sensitive to what it considers improper use of its tools. The company has for years denied that it aligns its content policies with the interests of the Chinese government, even as numerous government officials have called for the app to be banned. More recently, the company faced increased scrutiny over its handling of content related to the Israel-Hamas war — criticism that was also fueled by what the company said was an inaccurate portrayal of hashtag data.

That said, the company has made some concessions to researchers. TikTok began offering an official Research API to some academic institutions last year, and reportedly plans to make the tools available to some civil society groups that have questioned the company’s content moderation practices.

But for researchers, the move to abruptly cut off a tool will likely fuel more questions about just how willing the company is to work with them. “This lack of transparency is of deep concern to researchers,” the NCRI researchers wrote.

This article originally appeared on Engadget at https://www.engadget.com/tiktok-pulled-a-hashtag-tracking-feature-researchers-used-to-study-the-platform-015454077.html?src=rss

This ring lets you whisper to your phone, because sometimes we need to use our inside voices

If there’s a problem with the world of ambient computing we’re all expected to live in, it’s that you can’t really be discreet. Most commands to your voice assistant of choice have to be spoken at a volume slightly higher than you’d use when talking to another person. That’s the societal ill VTouch, a South Korean company, has chosen to tackle with its WHSP Ring. It’s a ring with a proximity sensor and microphone that activates when you raise it to your mouth. So when you want to talk to your assistant, you can simply mutter toward your knuckle and have it understand you.

The idea is that you’ll utilise VTouch’s own app, which will offer a number of AI assistants to aid you. This includes, in the example shown to me, an AI Art Curator or even a digital Psychiatrist, all of which can be accessed from your phone. You can also interact with your smart home, setting all of the usual parameters from the comfort of your hushed tones. It’ll last for around a day and a half on a single charge, while the charging case you can carry around with it will extend the life by up to nine days. Any responses will, of course, be pumped through your headphones, enabling you to “chat” to your assistant. There’s even a button that, if tapped five times in quick succession, will trigger a system to alert your contacts of an emergency and record ambient sound.

VTouch is planning to launch the WHSP Ring on Kickstarter in the near future, but there’s no word yet on how much you could expect this thing to cost or when you can get your hands on one.

We're reporting live from CES 2024 in Las Vegas from January 6-12. Keep up with all the latest news from the show here.

This article originally appeared on Engadget at https://www.engadget.com/this-ring-lets-you-whisper-to-your-phone-because-sometimes-we-need-to-use-our-inside-voices-002529204.html?src=rss

Rabbit R1 is an adorable AI-powered assistant co-designed by Teenage Engineering

Yes, you probably already have a virtual assistant in your pocket on your phone. Heck, if you're reading Engadget, I'm willing to bet you've got at least one smart speaker floating around your home as well that you can ask to complete basic tasks. But a new startup called Rabbit seems to think these are less than ideal implementations of AI (if you can really call Siri and Alexa that). It envisions a world where you trade apps for conversation and, rather than a distracting device shoving icons in your face, you interact with what amounts to a walkie-talkie for an AI.

The R1 is the first device to be launched by Rabbit and it's an objectively adorable little square in an endearingly bright shade of orange. Even if you're not sold on the necessity of a dedicated gadget for a virtual assistant, it's hard to deny the aesthetic appeal, which comes courtesy of the design gurus at Teenage Engineering. It features a small 2.88-inch touchscreen, an analog scroll wheel, two mics, a speaker and a "360 degree rotational eye," which is just a fancy name for a camera you can spin to face toward you or out through the back of the handset.

The primary way you interact with the R1, though, is by pressing and holding the "Push-to-Talk" button. This tells Rabbit OS to start listening. A heavily stylized and disembodied rabbit head bobs slowly as you ask your question or give it a task, and then it quickly gets to work. Want to book an Uber? Need a recipe to use up the leftovers in your fridge? Wondering who sampled The Isley Brothers' "That Lady"? (The answer is Beastie Boys, Basement Jaxx and Kendrick Lamar, FTR.) The R1 seems pretty capable of handling those tasks, at least in the controlled video demo.

Rabbit OS is able to tackle those tasks using what it calls the Large Action Model (LAM). This is what founder and CEO Jesse Lyu pitches as the company's major innovation. It's designed to take actions on interfaces, rather than through APIs or apps. In short, it can be trained to carry out almost any task that can be accomplished through a user interface. It's sort of like a fancy version of a macro.

As a way of demonstrating its capabilities, Lyu teaches the R1 how to generate an image using Midjourney via Discord. Once Lyu walks through the process, with Rabbit OS recording his actions, it's able to repeat the task when asked.
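To make the macro comparison concrete, here's a minimal, purely illustrative Python sketch of the record-and-replay idea. It is not Rabbit's actual implementation; the class and step names are hypothetical, and the real LAM is a trained model that generalizes across interfaces rather than replaying literal steps.

```python
# Toy record-and-replay "macro" sketch (illustrative only, not Rabbit's LAM).
# Actions are plain Python callables standing in for real UI interactions.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ActionRecorder:
    steps: list[tuple[str, Callable[[], None]]] = field(default_factory=list)

    def record(self, description: str, action: Callable[[], None]) -> None:
        """Perform an action once and remember it for later replay."""
        action()
        self.steps.append((description, action))

    def replay(self) -> None:
        """Repeat the recorded sequence of interface actions."""
        for description, action in self.steps:
            print(f"Replaying: {description}")
            action()

# Hypothetical walkthrough mirroring the Midjourney-via-Discord demo.
recorder = ActionRecorder()
recorder.record("open Discord", lambda: print("  [ui] opening Discord"))
recorder.record("type /imagine prompt", lambda: print("  [ui] typing the prompt"))
recorder.record("submit request", lambda: print("  [ui] pressing Enter"))

recorder.replay()  # later, the whole task runs again without step-by-step input
```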

The rotating camera faces up into the body by default, acting as a sort of privacy shutter, only turning its sensor toward its target when summoned. It can do the usual tricks like identifying people or things in the real world (within reason, at least). But it's the way the camera works with the AI that is sure to pique people's interest. In the demo, Lyu points the R1 at a full refrigerator and asks it to suggest a recipe that's "low in calories" based on its contents.


Of course, there are still a lot of unanswered questions about the R1. How is the battery life? The company claims it's "all day," but what does that really mean? And will the average user be able to train it easily? At least we know a few things, though. We know it costs $199 and that it's available for preorder now, with an expected ship date sometime in March or April.

We're reporting live from CES 2024 in Las Vegas from January 6-12. Keep up with all the latest news from the show here.

This article originally appeared on Engadget at https://www.engadget.com/rabbit-r1-is-an-adorable-ai-powered-assistant-co-designed-by-teenage-engineering-001051537.html?src=rss

Squad Mobility’s tiny solar-powered EV is a dream for crowded cities

EVs, like me after the holidays, have a tendency to bloat at the slightest provocation, which is why I can’t fit into those size 34 jeans. The big issue for electric cars is that heavy batteries force them to grow in size to accommodate the extra weight. Of course, the heavier the load, the more power is needed to keep going, creating a vicious cycle. Even a small city car like the original Smart has, in its latest electric version, grown into a grotesque parody of its predecessor. Which is why there’s a lot of hope riding on truly small EVs, like Squad Mobility’s solar-powered car, which is designed to stay small enough to fit inside a city.

The company was founded by Chris Klok and Robert Hoevers, who met while working on the Lightyear solar car. Klok was chief vehicle engineer on that project, while Hoevers was previously involved with NIO’s Formula E team. But they left Lightyear to help develop a small, solar-powered car that would offer affordable and clean mobility for dense cities. And while the company has just a few prototypes to show off, like the one here at CES 2024, it’s expecting to begin production in 2025. Even better, many of its existing pre-order customers are based in the US, given the need for a car like this in communities that rely exclusively on golf carts to get around.

The Solar City car has a 250Wp solar panel in its roof, which is designed to generate enough power for a few short trips each day. The company says that, in Las Vegas, you could expect to travel around 13 miles purely on the energy collected from the panel. (You can plug it into an outlet if you really need to.) With a kerb weight of 794 pounds, it’s light and efficient enough to get you around short distances without much stress. Of course, the speed is limited: you’ll only get around 25mph out of the 4kW motor, but if you live in a big city and just need to get to work or pick up some groceries, that’s probably all you need.
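As a rough sanity check on that figure, here's a back-of-the-envelope estimate in Python. The 250Wp panel and the roughly 13-mile Las Vegas claim come from Squad; the peak-sun-hours, system losses and energy use per mile are my own assumptions, so treat the output as illustrative rather than official.

```python
# Rough sanity check of Squad's solar-range claim (illustrative numbers only).
# From the article: 250 Wp roof panel, ~13 miles/day of solar-only range in Las Vegas.
# Assumed, not from Squad: peak sun hours, system losses and energy use per mile.

PANEL_WATTS_PEAK = 250          # Wp, quoted by Squad
PEAK_SUN_HOURS = 6.0            # assumption: a sunny Las Vegas day
SYSTEM_EFFICIENCY = 0.8         # assumption: charging/conversion losses
CONSUMPTION_WH_PER_MILE = 90    # assumption: a 794 lb, 25 mph microcar

daily_harvest_wh = PANEL_WATTS_PEAK * PEAK_SUN_HOURS * SYSTEM_EFFICIENCY
solar_miles_per_day = daily_harvest_wh / CONSUMPTION_WH_PER_MILE

print(f"Estimated solar harvest: {daily_harvest_wh:.0f} Wh/day")
print(f"Estimated solar-only range: {solar_miles_per_day:.1f} miles/day")
# Prints ~1200 Wh/day and ~13.3 miles/day, in the same ballpark as Squad's figure.
```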

You might expect the car to be poky, but the high roofline and low floor mean it’s surprisingly comfortable. The prototype here has some quirks — like acceleration and brake pedals that are a bit too close to the seat — which will be eliminated in the production version. There’s even a rear load space big enough for a suitcase or a couple of decently sized bags, and the prominent tyres mean you could even tackle rough terrain in short doses. The fact that it measures just 6.6 feet long means you can park it sideways and it’ll take up the same amount of room as most cars, too.

We’re still a year out from seeing the production model of this car, but there are plenty of reasons to be hopeful. The company expects the retail price to be $6,250 excluding sales tax, making it ideal as a city runaround or second (or third) car. That said, the figure does exclude the cost of the doors, which, like AC, count as an optional extra.

We're reporting live from CES 2024 in Las Vegas from January 6-12. Keep up with all the latest news from the show here.

This article originally appeared on Engadget at https://www.engadget.com/squad-mobilitys-tiny-solar-powered-ev-is-a-dream-for-crowded-cities-235540577.html?src=rss

The Spacetop is a laptop that really wants to swap your screen for AR glasses

Right now there are a bunch of companies trying to figure out new and better ways to work on the go. Lenovo made a laptop with two displays and a detachable keyboard to help give owners additional screen space without too much added bulk. And there are headsets from Meta, Apple and others that offer a way to create a completely virtual workspace without the need for a tethered PC. But with the Spacetop, startup Sightful has come up with an in-between solution: the bottom half of a laptop that, instead of a traditional display, is attached to a pair of AR glasses.

In theory, the glasses provide a 100-inch virtual display that can hold more windows than you could ever fit on a traditional laptop screen. And with the Spacetop being powered by Android, you get a familiar working environment too. Instead of controllers or hand gestures, there’s a typical keyboard and touchpad for writing, browsing the web or anything else you might need to do. But after trying one out, while I like the idea, I’m not so sure about Sightful’s execution.

The system is powered by a Qualcomm Snapdragon XR2 chip (the same processor Meta used in the Quest 2 from 2020), along with just 8GB of RAM and 256GB of storage. It feels smooth, though I wouldn't necessarily call it fast. I didn’t notice much lag when dragging windows around or typing, but I didn’t get to see how it handles a ton of open apps or anything more demanding than a web browser. Sightful has also come up with some simple shortcuts for doing things like re-centering the screen (just press the left and right Shift keys at the same time).

The issue I ran into is that, while Sightful developed the base of the laptop itself and put its logo on the side of the glasses, the bundled AR eyewear attached to the PC is actually off-the-shelf specs from Xreal (in this case the Xreal Lights). So while Xreal glasses are somewhat sharp, their narrow field of view (especially vertically) and somewhat small sweet spot left a lot to be desired when it came to actually using that 100-inch virtual display.

Photo by Sam Rutherford/Engadget

Additionally, the Spacetop’s keys felt spongy and its touchpad was lackluster too. It’s a far cry from the more precise haptic surfaces you’d get on a premium Mac or Windows PC. And when you combine all this with a starting price of $2,000, I just don’t see the appeal. For people who already have a laptop, I’d argue you’d be much better off getting a Meta Quest 3, which costs $500, sports a much newer chip and can sync with your laptop to create its own version of a virtual desktop. Plus then you’d have a headset that’s way better at playing games, watching movies and more.

Photo by Sam Rutherford/Engadget

There’s a reason why Meta, Apple and others have sunk billions of dollars into making headsets and glasses with finely tuned optics. And I’m sure someday (maybe even sometime in the next few years), off-the-shelf AR glasses will make some pretty big advancements. But between its high price, dated specs and a big but not great-looking virtual display powered by two-year-old AR glasses, the Spacetop doesn’t quite deliver on the promise of revolutionizing the common laptop.

We're reporting live from CES 2024 in Las Vegas from January 6-12. Keep up with all the latest news from the show here.

This article originally appeared on Engadget at https://www.engadget.com/the-spacetop-is-a-laptop-that-really-wants-to-swap-your-screen-for-ar-glasses-ces-2024-233638523.html?src=rss

OrCam Hear hands-on: A surprisingly effective voice isolation platform for those with hearing loss

Imagine being at a crowded convention or noisy bar and trying to have a conversation with someone across from you. It's tough enough for people with typical hearing to focus on what the other person is saying, not to mention those with hearing loss. Assistive technology company OrCam has rolled into CES 2024 with a host of new products, including a set of devices and an iPhone app designed to help those with hearing loss deal with auditory overload. The platform is called OrCam Hear, and after a quick hands-on at the show in Las Vegas, I'm pleasantly surprised.

OrCam Hear consists of a pair of earbuds and a dongle that plugs into any phone, and you'll use the app to control who you want to listen to. The system listens to voices for a few seconds (via the dongle) and uses AI to create a speaker profile for each person, which then allows you to "selectively isolate specific voices even in noisy environments." This targets the issue sometimes known as the "cocktail party problem," a longstanding challenge for hearing aids.

During a demo, my editor Terrence O'Brien and I spoke to two people whose voice profiles were already set up in the app. We stood around a table with Terrence on my right and the two company spokespeople across from us, about five feet away. I put the earbuds in (after they were sanitized), and the noise around me immediately sounded a little quieter and a lot more muffled.

Photo by Terrence O'Brien / Engadget

I looked at everyone around me and, though I could see their lips moving, I couldn't hear anyone speaking. After OrCam's reps used the app to drag a floating circle into the ring surrounding me, I started to hear the person diagonally across from me talk. And though the executive next to him was also moving his mouth, I could still only hear the voice of the person selected. Only after we moved the other speaker's icon into the ring did I start to hear them.

What impressed me more, though, was how the system handled relatively new participants like Terrence. He didn't have a profile set up in the app, and I initially couldn't hear him at all. A few seconds into the demo, though, a new circle appeared with a gray icon indicating a new "Anonymous" person had been recognized. When we dragged that into the ring, I was suddenly able to hear Terrence. This was all the more impressive because Terrence was wearing a fairly thick mask, which would have made him hard to understand anyway. Yet, I was able to clearly make out what he was saying.
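To illustrate the general idea, here's a small, hypothetical Python sketch of profile-based voice selection of the kind OrCam describes: match each detected voice against enrolled profiles, only pass through speakers the listener has selected and enroll an "Anonymous" profile when nobody matches. OrCam hasn't published how its system actually works, so the embeddings, threshold and function names here are stand-ins rather than the company's method.

```python
# Hypothetical sketch of selective voice isolation via speaker profiles.
# Real systems would extract voice embeddings from audio; here they are
# just vectors, and the matching logic is deliberately simplified.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def route_segment(embedding, profiles, selected, threshold=0.75):
    """Return (speaker_label, audible) for one segment of detected speech."""
    best_label, best_score = None, -1.0
    for label, profile_vec in profiles.items():
        score = cosine_similarity(embedding, profile_vec)
        if score > best_score:
            best_label, best_score = label, score
    if best_score < threshold:  # unknown voice: enroll an "Anonymous" profile
        best_label = f"Anonymous {len(profiles) + 1}"
        profiles[best_label] = embedding
    return best_label, best_label in selected

# Toy example: two enrolled speakers, but only one selected to be heard.
rng = np.random.default_rng(0)
profiles = {"Speaker A": rng.normal(size=64), "Speaker B": rng.normal(size=64)}
selected = {"Speaker A"}

segment = profiles["Speaker B"] + 0.05 * rng.normal(size=64)  # Speaker B talks
label, audible = route_segment(segment, profiles, selected)
print(label, "-> audible" if audible else "-> muted")  # Speaker B -> muted
```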

The OrCam Hear isn't perfect, of course. I was still able to hear the speakers as they talked, and the audio playing through the earbuds was slightly delayed, so there was a small echo. But people who have hearing loss, whom this product is designed for, aren't likely to experience that. There was also some audio distortion when the selected speakers were talking, but not so much that it impeded my comprehension.

OrCam said that the Hear platform is "currently in a technology preview phase and is expected to be shipped later in the year." Hopefully, that gives the company time to iron out quirks and make the app available on both iOS and Android, so that the assistive tech can be truly inclusive and accessible to more people.

We're reporting live from CES 2024 in Las Vegas from January 6-12. Keep up with all the latest news from the show here.

This article originally appeared on Engadget at https://www.engadget.com/orcam-hear-hands-on-a-surprisingly-effective-voice-isolation-platform-for-those-with-hearing-loss-230243953.html?src=rss