
YouTube is rolling out a new 'You' section as part of a substantial update

YouTube’s rolling out a whole bunch of new features and design updates, three dozen in total. Some of these tools are for the web app, while others are for the smartphone app and smart TV software. These features aren’t game-changers by themselves, but they add up to an improved user experience. Let’s go over some of the more interesting ones.

It’s now easier to speed up videos for those who just can’t get enough of really fast podcast clips. Just hold your finger down on the video and it’ll automatically bump up the playback speed to 2x. Beyond fast-paced playback, the feature is also useful for scrubbing through a video to find a relevant portion. The tool’s available across web, tablets and mobile devices.

The app’s getting bigger preview thumbnails to help with navigation. There’s also new haptic feedback that vibrates when you scrub back over your original playback position, so you never lose your place. This should help when seeking through videos with your finger on a smartphone or tablet, as the current method isn’t exactly precise.

One of the more useful updates here is a new lock screen tool to avoid accidental interruptions while you watch stuff on your phone or tablet. This should be extremely handy for those who like to take walks or exercise while listening to YouTube, as the jostling typically interrupts whatever’s on-screen. In other words, your quiet meditation video won’t accidentally switch to some guy yelling about the end of masculinity as your phone sits in a pocket, purse or handbag.

Speaking of guys yelling about the end of masculinity, the company’s finally (finally) added a stable volume feature, which ensures that the relative loudness of videos doesn’t fluctuate too much. This tool’s automatically turned on once you snag the update.
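YouTube hasn’t detailed how stable volume works under the hood, but the general idea of any loudness-leveling pass is easy to sketch: measure short-term loudness, then smooth the gain toward a target. Here’s a minimal, purely illustrative Python sketch (the target level, window size and smoothing factor are arbitrary assumptions, not YouTube’s values):

```python
import numpy as np

def stabilize_volume(samples, rate, target_rms=0.1, window_s=0.4):
    """Naive gain-riding: nudge each short window toward a target
    loudness. Illustrative only -- not YouTube's actual algorithm."""
    out = np.asarray(samples, dtype=np.float64).copy()
    win = max(1, int(rate * window_s))
    gain = 1.0
    for start in range(0, len(out), win):
        chunk = out[start:start + win]
        rms = np.sqrt(np.mean(chunk ** 2)) + 1e-12  # avoid divide-by-zero
        # Smooth the gain so the correction itself isn't audible.
        gain = 0.8 * gain + 0.2 * (target_rms / rms)
        out[start:start + win] = chunk * gain
    return np.clip(out, -1.0, 1.0)
```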

Even the humble library tab has gotten a refresh. It’s now called “You” and relays a bit more data than before. You’ll have access to previously watched videos, playlists, downloads and purchases, all from one place. Again, this change impacts the app on both web and mobile devices.

The rest of the updates are design related, with on-screen visual cues that appear when creators ask you to subscribe, complete with dopamine-enhancing sparkles when you finally “smash that like button.” There’s even a new animation that follows the view count and like count throughout a video’s first 24 hours. Some design elements extend to the smart TV app, including a new vertical menu, video chapters, a scrollable description section and more.

YouTube’s latest update is a tiered release and the company says it could be a few weeks before it reaches every user throughout the globe. The popular streaming platform says more features are forthcoming, including a redesign of the YouTube Kids app.

YouTube’s constantly changing up its core features. The past year has seen an enhanced 1080p playback option for web users, and the company's even announced a spate of AI-enhanced creator tools, among other updates. Evolve or die, right? The social media landscape, after all, is currently in the midst of something of a sea change.

This article originally appeared on Engadget at https://www.engadget.com/youtube-is-rolling-out-a-new-you-section-as-part-of-a-substantial-update-174512477.html?src=rss

Could MEMS be the next big leap in headphone technology?

If you have a pair of in-ear headphones, there’s a good chance they’re using technology that’s several decades old. Despite attempts to introduce different, exotic-sounding systems like planar magnetic, electrostatic and even bone conduction, most IEMs (in-ear monitors) still use either balanced armature or dynamic drivers. But there’s another contender, promising high fidelity, low power consumption and a tiny physical footprint. The twist is, it’s a technology that’s been in your pocket for the last ten years already.

We’re talking about micro-electromechanical systems (MEMS), a technology that’s been used in almost every cell phone microphone since the 2010s. When applied to headphone drivers (the inverse of a microphone), the benefits are many. But until recently, the technology wasn’t mature enough for mainstream headphones. California-based xMEMS is one company pushing the technology, and consumer products featuring its solid-state MEMS drivers are finally coming to market. We tested the high-end Oni from Singularity, but Creative has also confirmed a set of TWS headphones with xMEMS drivers will be available in time for the holidays.

Where conventional speakers and drivers typically use magnets and coils, MEMS uses piezos and silicon. The result, if the hype is to be believed, is something more responsive, more durable and more consistent in fidelity. And unlike balanced-armature or dynamic drivers, MEMS drivers can be built on a production line with minimal-to-no need for calibration or driver matching, streamlining production. xMEMS, for example, has partnered with TSMC, one of the world’s largest chipmakers, for its manufacturing.


Of course, MEMS drivers lend themselves to any wearable that produces sound, from AR glasses to VR goggles and hearing aids. For most of us, though, it’s headphones where we’re going to see the biggest impact, not least because the consistency and precision of MEMS should marry perfectly with related technologies such as spatial audio, where fast response times and perfect phase matching (two drivers being calibrated exactly to each other) are essential.
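To make “phase matching” concrete: if you capture the same test signal from both drivers of a pair, the lag between the two recordings should be as close to zero as possible. A toy Python sketch of estimating that lag with cross-correlation (the signals here are simulated; this is an illustration, not xMEMS’ test procedure):

```python
import numpy as np

def channel_offset_samples(a, b):
    """Estimate the delay between two recordings of the same test
    signal. Returns ~0 for a perfectly phase-matched pair; a positive
    value means `a` lags `b`."""
    corr = np.correlate(a, b, mode="full")
    return int(np.argmax(corr)) - (len(b) - 1)

rate = 48_000
rng = np.random.default_rng(0)
test_signal = rng.standard_normal(rate // 10)              # 0.1 s noise burst
delayed = np.concatenate([np.zeros(5), test_signal[:-5]])  # 5-sample lag

print(channel_offset_samples(delayed, test_signal))  # -> 5 (~0.1 ms at 48 kHz)
```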

For now, MEMS is best suited to earbuds, IEMs and TWS-style headphones, but xMEMS hopes to change that. “The North Star of the company was to reinvent loudspeakers,” Mike Householder, Marketing & Business Development at the company, told Engadget. “But to generate that full bandwidth audio in free air is a little bit more of a development challenge that's going to take some more time. The easier lift for us was to get into personal audio and that's the product that we have today.”

To look at, Singularity’s Oni, the first IEM to feature xMEMS’ solid-state drivers, seems like a regular pair of stylish, high-end in-ear monitors. Once the music started to flow, though, there was a very clear difference. Electronic genres sounded crisp and impactful in a way that felt more immediate than with conventional drivers. The MEMS drivers’ fast transient response was evident in the sharp, punchy percussion of RJD2’s “Ghostwriter” and the Chemical Brothers’ “Live Again.” The latter’s mid- and high-end sections in particular shone through with remarkable clarity. Bass response was good, especially in the lower mids, but perhaps not the strong point of the experience.


When I tried Metallica’s “For Whom the Bell Tolls,” I immediately noticed the hi-hats pushing through in a way I’d never heard before. The only way I can describe it is “splashy.” It didn’t sound weird, just noticeable. I asked Householder about this, and he wasn’t surprised. “Yeah, the hi-hats, cymbals and percussion, you're gonna hear it with a new level of detail that you're really not accustomed to,” he said, adding that some of this comes down to the tuning of the supplied headphone amplifier (made by iFi), so it’s partly that EQ mixed with the MEMS drivers’ improved high-frequency clarity.

The supplied amp/DAC held another surprise: a specific “xMEMS” mode. I originally planned to use my own, but it turns out I needed this particular DAC because the MEMS drivers require a 10-volt bias to work. I asked Householder whether all MEMS headphones would require a dedicated DAC (effectively ending their chances of mainstream adoption), but apparently xMEMS has developed its own amp chip that can both drive the speakers and supply the 10-volt bias. The forthcoming true wireless buds from Creative, for example, won’t need any additional hardware.

This is where things get interesting. While we don’t know the price of Creative’s TWS buds with xMEMS drivers, we can be sure they will be a lot cheaper than Singularity’s IEMs, which retail for $1,500. “You know, they're appealing to a certain consumer, but you could just very easily put that same speaker into a plastic shell, and retail it for 150 bucks,” Householder told Engadget. The idea that xMEMS can democratize personal audio at every price point is a bold one, not least because most audiophiles aren’t used to seeing the exact same technology from their IEMs turn up in sub-$200 wireless products. Until we have another set to test, though, we can’t comment on the individual character each manufacturer can imbue them with.


One possible differentiating factor for higher-end products (and competing MEMS-based products) is something xMEMS calls “Skyline.” Householder described it as a dynamic “vent” that can be opened and closed depending on the listener’s needs. Much as open-back headphones are favored by some for their acoustic qualities, xMEMS-powered IEMs could include Skyline vents that open and close to prevent occlusion, improve passive noise cancellation and enable other acoustic features, such as a “transparency” mode for temporarily letting external, environmental noise come through.

For those who prefer on-ear or over-ear headphones, MEMS technology will likely be paired with legacy dynamic drivers, at least initially. “The first step that we're taking into headphone is actually a two way approach,” Householder said. The idea is that a smaller dynamic driver handles the low frequencies, where MEMS drivers currently don’t scale up so well. “It's really the perfect pairing. The dynamic for the low end, let it do what it does best, and then we've got the far superior high frequency response [from MEMS],” he said. “But the long term vision is to eventually fully replace that dynamic driver.”

The ultimate goal would, of course, be a set of solid-state desktop speakers, but it seems we’re still a little way out on that. For now, there’s a tantalizing promise that MEMS-based in-ears could modernize and maybe even democratize consumer audio, at least around a certain price point. And xMEMS isn’t the only company in the game: Austrian startup Usound showed its own reference-design earphone last year, and Sonic Edge has developed its own MEMS “speaker-in-chip” technology. With some competition in the market, there’s hope that the number of products featuring the technology will increase and improve steadily over the next year or so.

This article originally appeared on Engadget at https://www.engadget.com/could-mems-be-the-next-big-leap-in-headphone-technology-173034402.html?src=rss

Baidu's CEO says its ERNIE AI 'is not inferior in any aspect to GPT-4'

ERNIE, Baidu’s answer to ChatGPT, has “achieved a full upgrade,” company CEO Robin Li told the assembled crowd at the Baidu World 2023 showcase on Tuesday, “with drastically improved performance in understanding, generation, reasoning, and memory.”

During his keynote address, Li demonstrated improvements to those four core capabilities on stage by having the AI create a multimodal car commercial in a few minutes from a short text prompt, solve complex geometry problems and progressively iterate the plot of a short story on the spot. The fourth-gen generative AI system “is not inferior in any aspect to GPT-4,” he continued.

ERNIE 4.0 will offer an “improved” search experience resembling that of Google’s SGE, aggregating and summarizing information pulled from the wider web into a generated response. The system will be multimodal, providing answers as text, images or animated graphs through an “interactive chat interface for more complex searches, enabling users to iteratively refine their queries until reaching the optimal answer, all in one search interface,” per the company’s press release. What’s more, the AI will be able to recommend “highly customized” content streams based on previous interactions with the user.

Similar to ChatGPT Enterprise, ERNIE’s new Generative Business Intelligence will offer a more finely tuned and secure model trained on each client’s individual data silo. ERNIE 4.0 will also be capable of “conducting academic research, summarizing key information, creating documents, and generating slideshow presentations,” as well as letting users search and retrieve files using text and voice prompts.

Baidu is following the example set by the rest of the industry and has announced plans to put its generative AI in every app and service it can manage. The company has already integrated some of the AI’s functions into Baidu Maps, including navigation, ride hailing and hotel bookings. It is also offering “low-threshold access and productivity tools” to help individuals and enterprises develop API plugins for the Baidu Qianfan Foundation Model Platform.

Baidu had already been developing its ERNIE large language model for a number of years prior to ChatGPT’s debut in 2022, though its knowledge base is focused primarily on the Chinese market. Baidu released ERNIE Bot, its answer to ChatGPT, this March with some 550 billion facts packed into its knowledge graph, though it wasn’t until this August that the bot rolled out to the general public.

Baidu’s partner startups also showed off new product series that will integrate the AI’s functionality during the event, including a domestic robot, an All-in-One learning machine and a smart home speaker.

This article originally appeared on Engadget at https://www.engadget.com/baidus-ceo-says-its-ernie-ai-is-not-inferior-in-any-aspect-to-gpt-4-162333722.html?src=rss

The Stream Deck MK.2 is on sale for just $130

Elgato’s Stream Deck MK.2 is on sale for $130, a discount of $20 from the MSRP of $150. That’s 13 percent off and actually beats the sale price from last week’s Amazon Prime Day event. If you’re a podcaster or a livestreamer, this is a pretty good time to snag this highly useful streaming device.

This is the latest and greatest Stream Deck, and we said it sets a new standard for the industry when we placed it on our list of the best game streaming gear. Not to be confused with Valve’s Steam Deck, this similarly named device boasts a hub of 15 LCD hotkeys that you can customize to your liking to simplify livestreaming, podcasting and related activities.

For instance, one button press can turn on a connected accessory, instantly mute a microphone, adjust the lights, trigger on-screen effects or activate audio clips, to name a few examples. You have 15 of these keys, and each can be customized as you see fit. You can even set them to perform in-game actions, like any standard keyboard shortcut.

Additionally, many users have found these devices useful for programming, media editing and any other profession/hobby that could use a bit of hotkey simplification. The buttons are also really satisfying to press.

The main reason you’d get this, however, is right in the name. It’s for streamers who have to moderate a fast-moving chat while gaming or performing some other task. Each button has a tiny display that tells you at a glance what each press will do. Over time, you won’t even need these mini displays, relying instead on simple muscle memory, just like keyboard hotkeys. Each of the major streaming platforms, like Twitch and YouTube, offers its own plugin for the device, complete with a set of commonly used adjustment options.
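Elgato’s own software handles all of this through a GUI, but tinkerers also script the hardware directly. As a rough sketch, here’s how binding a key press to an action might look with the community python-elgato-streamdeck library (the library and the action here are assumptions on my part, not part of Elgato’s official toolchain):

```python
import threading

from StreamDeck.DeviceManager import DeviceManager  # pip install streamdeck

def on_key(deck, key, pressed):
    # Called on every key-down and key-up event.
    if pressed and key == 0:
        print("Top-left key pressed: run your mute-mic action here")

deck = DeviceManager().enumerate()[0]  # first connected Stream Deck
deck.open()
deck.reset()
deck.set_brightness(50)
deck.set_key_callback(on_key)

threading.Event().wait()  # keep the script alive to receive callbacks
```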

This article originally appeared on Engadget at https://www.engadget.com/the-stream-deck-mk2-is-on-sale-for-just-130-152539642.html?src=rss

The new $79 Apple Pencil has a USB-C charging port

Apple has unveiled a new Apple Pencil. The latest model costs $79 ($69 for education) and it pairs and charges via a USB-C cable. It’ll be available in early November and it’s compatible with every iPad that has a USB-C port.

This is the company’s most budget-friendly Apple Pencil yet. It’s $20 less than the original model and $40 cheaper than the second-gen Apple Pencil. Apple says features of the new version include pixel-perfect accuracy, low latency and tilt sensitivity.

There's no pressure sensitivity this time around, though, so if you want that feature, you'll need to stick with either of the previous iterations. While you can attach the USB-C Apple Pencil magnetically to the side of your iPad for storage (in which case it will go into a sleep state to prolong the battery life), there's no wireless charging support either. To top up the Pencil's battery, you'll need to slide back a cap to expose a USB-C port and plug in a charging cable.


Unlike the second-gen Pencil, you won't be able to double tap the latest version to change drawing tools. Apple has also declined to offer free engraving this time around. However, if you have an M2-powered iPad, you'll be able to take advantage of the hover feature that's supported on the second-gen Pencil. That enables you to preview any mark you intend to make before it's actually applied to your note, sketch, annotation and so on.

This is Apple's latest step in its transition away from the Lightning port, which was largely prompted by European Union rules. The company started embracing USB-C on iPads several years ago, while it ditched the Lightning port in all iPhone 15 models. It'll take Apple a while longer to move away from Lightning entirely. Several devices it sells — such as older iPhones, AirPods Max, Magic Mouse, Magic Trackpad and the first-gen Apple Pencil — still use that charging port. But this is another step toward an all-USB-C future, and one fewer charging cable you'll need to carry around.

This article originally appeared on Engadget at https://www.engadget.com/the-new-79-apple-pencil-has-a-usb-c-charging-port-141732710.html?src=rss

Microsoft Copilot: Here's everything you need to know about the company's AI assistant

Microsoft’s new Copilot AI has wormed its way into nearly every aspect of Windows 11. There’s a bit of a learning curve, but don’t worry: we’ve got you covered. We've put together a primer on the company's new AI assistant, along with step-by-step instructions on how to both enable and disable it on your Windows computer.

What does Microsoft Copilot do?

Microsoft’s Copilot is a suite of AI tools that work together to create a digital personal assistant of sorts. Just like other modern AI assistants, the tech is based on generative artificial intelligence and large language models (LLMs).

You can use Copilot to do a whole bunch of things to increase productivity, or just for fun. Use the service to summarize a web page or essay, write an email, quickly change operating system settings, generate custom images from text, transcribe audio or video, take a screenshot and even connect to an external device via Bluetooth. It also does the sorts of things other AI chatbots do, like creating lists of recipes, writing code or planning itineraries for trips. Think of it as a more robust version of the pre-existing Bing AI chatbot.

How to enable Microsoft Copilot

Update your computer to the latest version of Windows 11

First of all, you need the latest Windows 11 update, so go ahead and download that.

1. Head to Settings and look for the Windows Update option. 

2. Follow the prompts and restart your computer if required. 

You’re now ready to experience everything Copilot has to offer. If Microsoft just dropped an update, you may have to wait a bit before it reaches your region. You can also toggle the setting that automatically installs the latest updates as they become available.

Once your computer is updated, click the Copilot button

As for enabling the feature, click the Copilot button on the taskbar or press Win + C on the keyboard. That’s all there is to it.

How to disable Microsoft Copilot


Microsoft Copilot isn’t an always-on feature. Once it shows up in the taskbar, it only works when you ask it something. However, if you want to disable or delete the feature entirely, you have a couple of options.

The easiest method is to remove it from the taskbar. Out of sight, out of mind, right? Open up Settings and click on Personalization. Next, open the Taskbar page on the right side. Look for Taskbar items, then click the Copilot toggle to remove it from the lineup. This ensures you won’t ever accidentally turn it on via the taskbar, but you can still call up the AI by pressing Win + C.

If you want to delete the toolset entirely, the process is a bit more involved. Start by opening a PowerShell window: search for Windows PowerShell, right-click the result and select the option to run as an administrator. Next, click Yes on the UAC prompt. This opens an elevated PowerShell window.

Paste the following into the window: reg add HKCU\Software\Policies\Microsoft\Windows\WindowsCopilot /v "TurnOffWindowsCopilot" /t REG_DWORD /f /d 1

That should do it. After a restart, every trace of Copilot should disappear from your system.
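If you’d rather script that change than paste it in by hand, here’s a minimal equivalent using Python’s built-in winreg module (Windows only); it writes the same policy value the reg command above sets:

```python
import winreg

# Same policy key the reg command above targets.
KEY_PATH = r"Software\Policies\Microsoft\Windows\WindowsCopilot"

with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
    # 1 disables Copilot; delete the value (or set it to 0) to undo.
    winreg.SetValueEx(key, "TurnOffWindowsCopilot", 0, winreg.REG_DWORD, 1)
```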

What are the limitations of Copilot?

This is new technology, so the limitations are extensive. Like all modern LLMs, Microsoft’s Copilot can and will make up stuff out of thin air every once in a while, a phenomenon known as hallucination. It also doesn’t retain information from conversation to conversation, likely for security reasons. This means it restarts the conversation from a blank slate every time you close a window and open another one. It won’t remember anything about you, your preferences or even your favorite order from the coffee shop down the street. Finally, it doesn’t integrate with too many third-party sources of data, beyond the web, so you won’t be able to incorporate personal fitness data and the like.

What's the difference between GitHub Copilot and Microsoft Copilot?

Despite the similar names, there’s one primary difference between the two platforms. GitHub Copilot is all about helping craft and edit code for software development. Microsoft Copilot can whip up some rudimentary code, but it’s far from a specialty. If your primary use case for an AI assistant is coding, go with GitHub. If you only dabble in coding, or have no interest at all, go with Microsoft.

This article originally appeared on Engadget at https://www.engadget.com/microsoft-copilot-heres-everything-you-need-to-know-about-the-companys-ai-assistant-130004909.html?src=rss

WhatsApp debuts passkey logins on Android

WhatsApp just made logging in a much simpler and faster process, at least on Android devices. The Meta-owned chat application has launched passkey support for Android, which means users no longer have to rely on one-time passcodes (OTPs) from two-factor authentication to log into their accounts. Passkeys are a relatively new login technology designed to be resistant to phishing attacks, password leaks and other security vulnerabilities plaguing older login methods.

They're made up of cryptographic pairs consisting of one public key and one private key that lives on the user's device. Services that support passkeys don't have access to that private key, and it can't be written down or given away. Without the private key, nobody else can log into somebody's account. Now that WhatsApp has launched passkey support, users can log in using their device's own authentication, simply verifying their identity with their face, fingerprint or PIN.
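Under the hood, this is a WebAuthn-style challenge-response flow built on ordinary public-key cryptography. Here’s a deliberately simplified Python sketch of the idea, using the third-party cryptography package (real passkeys add attestation, relying-party checks and platform key storage):

```python
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: the private key never leaves the device; the service
# stores only the public half.
device_key = Ed25519PrivateKey.generate()
server_stored_public = device_key.public_key()

# Login: the server sends a random challenge, and the device signs it
# after a local unlock (face, fingerprint or PIN).
challenge = os.urandom(32)
signature = device_key.sign(challenge)

# The server verifies the signature with the stored public key. No
# shared secret crosses the network, so there's nothing to phish.
try:
    server_stored_public.verify(signature, challenge)
    print("login ok")
except InvalidSignature:
    print("rejected")
```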

While a lot of applications still don't have passkey support, the list continues to grow. PayPal launched passkey logins for Android back in March, while TikTok rolled out support for the technology in July. More recently, 1Password rolled out passkeys to all its users on desktop and iOS after testing the login solution for three months. 

Android users can easily and securely log back in with passkeys 🔑 only your face, finger print, or pin unlocks your WhatsApp account pic.twitter.com/In3OaWKqhy

— WhatsApp (@WhatsApp) October 16, 2023

This article originally appeared on Engadget at https://www.engadget.com/whatsapp-debuts-passkey-logins-on-android-122036260.html?src=rss

The Morning After: Get ready for the Myspace documentary

Myspace is getting the documentary treatment, with a film currently in the works chronicling the rise and fall of arguably the first big social network. When it launched in 2003, you chose your top eight digital friends, and drama ensued. The platform went mainstream, becoming an important music promotional tool long before Bandcamp or even YouTube.

The movie will be a joint project between production companies Gunpowder & Sky and The Documentary Group. Gunpowder & Sky has produced documentaries like 69: The Saga of Danny Hernandez and Everybody’s Everything, about deceased rapper Lil Peep. The Documentary Group’s behind shows like Amend: The Fight for America and The Deep End, a series focusing on spiritual wellness guru Teal Swan.

Maybe, just maybe, we’ll even learn what Myspace Tom’s last name is.

— Mat Smith​​

The biggest stories you might have missed

Intel hits 6GHz (again) with its 14th-gen desktop CPUs

Alienware’s new Aurora desktop can overclock to an astounding 6.1GHz

Google Pixel 8 bundles are up to 25 percent off at Amazon

Twitch adds stories to keep followers tuned in

Australian regulators fine X for dodging questions about CSAM response

The best VPN services for 2023


Marvel’s Spider-Man 2 review

Bigger and better.


Web-swinging around New York City in Marvel’s Spider-Man might be the best game mechanic in recent times, but why not add wings? With the sequel, Insomniac did just that — and gave players two Spideys to control.

The team has also streamlined and expanded combat movesets and abilities. A lot of the gadgets from the first game return, but they’re easier than ever to access. Previously, if you wanted to use a gadget, you’d have to hold R1 and switch from your web-shooters to another option. Now, web-shooters always fire when you mash R1, but you can hold R1 and hit one of the four face buttons to activate your slotted gadgets. It’s all further augmented by a compelling plot featuring the likes of Venom’s symbiote, the Lizard, Sandman and more.

Continue reading.

Ray-Ban Meta smart glasses review

Instagram-worthy shades.


After a week with Meta and Ray-Ban’s latest $299 smart sunglasses, they still feel a little bit like a novelty. But Meta has improved the core features, with better audio and camera quality, as well as the ability to livestream directly from the frames. If you’re a creator or already spend a lot of time in Meta’s apps (Facebook, Instagram, even WhatsApp), though, there are plenty of reasons to give the second-generation shades a look. These Ray-Ban Meta smart glasses feel more like a finished product.

Continue reading.

The Nintendo 64 gets a retro console remake from Analogue.

The Analogue 3D will output old game carts in 4K.

Analogue’s 3D aims to be the ultimate tribute to the Nintendo 64, playing original cartridges on modern 4K displays. All of Analogue’s machines use field-programmable gate arrays (FPGAs) coded to mimic the original hardware. Instead of playing often legally questionable ROM files, like most software emulators do, Analogue consoles play original media, without the downsides software emulation often brings. The Analogue 3D is currently slated to ship in 2024, but there’s no price yet.

Continue reading.

This article originally appeared on Engadget at https://www.engadget.com/the-morning-after-get-ready-for-the-myspace-documentary-111556330.html?src=rss

Snapchat enables video and stories embeds

Snapchat has rolled out two new features, including the ability to embed content from the platform into a website. Users can now embed Lenses, Spotlight videos and public stories or profiles through their computer browser by clicking the embed button under share options. This will automatically copy the code — just as competitors like Instagram and TikTok have long allowed users to do. 

Following years of trying to broaden beyond a platform for sending pictures back and forth with friends, the option to embed is a logical next step for Snapchat. It builds on other features like articles, discovering local places of interest and, in 2022, Snapchat for Web.

Along with embeds, Snapchat has also launched an OpenAI-powered feature that lets users extend their snaps to include more of their possible surroundings. The tool is reminiscent of Photoshop's Content-Aware Fill but, in this case, estimates what the entire border area looks like versus one targeted piece. Engadget has confirmed this feature is available for Snapchat+ subscribers. 

The company has regularly been using AI tools as perks for its now five million-plus Snapchat+ subscribers. The company's AI-powered Dreams feature — which lets users generate packs of eight "fantastical" images — is limited to a single pack for regular users or one pack per month for Snapchat+ subscribers. Anyone can buy extra packs for $0.99 each.

Snapchat was quick to hop on the AI boom, rolling out a chatbot called My AI, built on "OpenAI's GPT technology that Snapchat has customized," back in February. Initially available solely to Snapchat+ subscribers, My AI expanded to all global users two months later with everything from restaurant recommendations to photo responses (as has been the case for AI bots in 2023, not always appropriately). 

This article originally appeared on Engadget at https://www.engadget.com/snapchat-enables-video-and-stories-embeds-103535731.html?src=rss

Ray-Ban Meta smart glasses review: Instagram-worthy shades

A lot has changed in the two years since Facebook released its Ray-Ban-branded smart glasses. Facebook is now called Meta. And its smart glasses have a new name, too: the Ray-Ban Meta smart glasses. Two years ago, I was unsure exactly how I felt about the product. The Ray-Ban Stories were the most polished smart glasses I’d tried, but with mediocre camera quality, they felt like more of a novelty than something most people could use. 

After a week with the company’s latest $299 sunglasses, they still feel a little bit like a novelty. But Meta has managed to improve the core features, while making them more useful with new abilities like livestreaming and hands-free photo messaging. And the addition of an AI assistant opens up some intriguing possibilities. There are still privacy concerns, but the improvements might make the tradeoff feel more worth it, especially for creators and those already comfortable with Meta’s platform.

What’s changed

Just like its predecessor, the Ray-Ban Meta smart glasses look and feel much more like a pair of Ray-Bans than a gadget, and that’s still a good thing. Meta has slimmed down both the frames and the charging case, which now looks like the classic tan leather Ray-Ban pouch. The glasses are still a bit bulkier than a typical pair of shades, but they don’t feel heavy, even with extended use.

This year’s model has ditched the power switch of the original, which is nice. The glasses now automatically turn on when you pull them out of the case and put them on (though you sometimes have to launch the Meta View app to get them to connect to your phone).


The glasses themselves now charge wirelessly through the nosepiece, rather than near the hinges. According to Meta, the device can go about four hours on one charge, and the case holds an additional four charges. In a week of moderate use, I haven’t had to top up the case, but I do wish there was a more precise indication of its battery level than the light at the front (the Meta View app will display the exact power level of your glasses, but not the case).

My other minor complaint is that the new charging setup makes it slightly more difficult to pull the glasses out of the case. It takes a little bit of force to yank the frames off the magnetic charging contacts and the vertical orientation of the case makes it easy to grab (and smudge) the lenses.

The latest generation of smart glasses comes in both the signature Wayfarer style, which starts at $299, and a new, rounder “Headliner” design, which sells for $329. I opted for a pair of Headliners in the blue “shiny jean” color, but there are tan and black variations as well. One thing to note about the new colors is that both the “shiny jean” and “shiny caramel” options are slightly transparent, so you can see some of the circuitry and other tech embedded in the frames.

The lighter colors also make the camera and LED indicator on the top corner of each lens stand out a bit more than on their black counterparts. (Meta has also updated its software to prevent the camera from being used when the LED is covered.) None of this bothered me, but if you want a more subtle look, the black frames are better at disguising the tech inside.

New camera, better audio

Look closely at the transparent frames, though, and you can see evidence of the many upgrades. There are now five mics embedded in each pair, two in each arm and one in the nosepiece. The additional mics also enable some new “immersive” audio features for videos. If you record a clip with sound coming from multiple sources — like someone speaking in front of you and another person behind you — you can hear their voices coming from different directions when you play back the video through the frames. It’s a neat trick, but doesn’t feel especially useful.

The directional audio is, however, a sign of how dramatically the sound quality has improved. The open-ear speakers are 50 percent louder and, unlike the previous generation, don’t distort at higher volumes. Meta says the new design also has reduced the amount of sound leakage, but I found this really depends on the volume you’re listening at and your surrounding noise conditions.

There will always be some quality tradeoffs when it comes to open-ear speakers, but it’s still one of my favorite features of the glasses. The design makes for a much more balanced level of ambient noise than any kind of “transparency mode” I’ve experienced with earbuds or headphones. And it’s especially useful for things like jogging or hiking when you want to maintain some awareness of what’s around you.

Camera quality was one of the most disappointing features of the first-generation Ray-Ban Stories, so I was happy to see that Meta and Luxottica ditched the underpowered 5-megapixel cameras for a 12MP ultra-wide.

The upgraded camera still isn’t as sharp as most phones, but it’s more than adequate for social media. Shots in broad daylight were clear and the colors were more balanced than snaps from the original Ray-Ban Stories, which tended to look over-processed. I was surprised that even photos I took indoors or at dusk — occasions when most people wouldn't wear sunglasses — also looked decent. One note of caution about the ultra-wide lens, however: if you have long hair or bangs, it’s very easy for wisps of hair to end up in the edges of your frame if you're not careful.

The camera also has a few new tricks of its own. In addition to 60-second videos, you can now livestream directly from the glasses to your Instagram or Facebook account. You can even use touch controls on the side of the glasses to hear a readout of likes and comments from your followers. As someone who has live streamed to my personal Instagram account exactly one time before this week, I couldn’t imagine myself using this feature.

But after trying it out, it was a lot cooler than I expected. Streaming a first-person view from your glasses is much easier than holding up your phone, and being able to seamlessly switch between the first-person view and the one from your phone’s camera is something I could see being incredibly useful to creators. I still don’t see many IG Lives in my future, but the smart glasses could enable some really creative use cases for content creators.

The other new camera feature I really appreciated was the ability to snap a photo and share it directly with a contact via WhatsApp or Messenger (but not Instagram DMs) using only voice commands. While this means you can’t review the photo before sending it, it’s a much faster and more convenient way to share photos on the go.

Meta AI

Two years ago, I really didn’t see the point of having voice commands on the Ray-Ban Stories. “Hey Facebook” felt too cringey to utter in public, and there just didn’t seem to be much point to the feature. However, the addition of Meta’s AI assistant makes voice interactions a key feature rather than an afterthought.

The Ray-Ban Meta smart glasses are one of the first hardware products to ship with Meta’s new generative AI assistant built in. This means you can chat with the assistant about a range of topics. Answers to queries are broadcast through the internal speakers, and you can revisit your past questions and responses in the Meta View app.

To be clear: I still feel really weird saying “hey Meta,” or “OK Meta,” and haven’t yet done so in public. But there is now, at least, a reason you may want to. For now, the assistant is unable to provide “real-time” information other than the current time or weather forecast. So it won’t be able to help with some practical queries, like those about sports standings or traffic conditions. The assistant’s “knowledge cutoff” is December 2022, and it will remind you of that for most questions related to current events. However, there were a few questions I asked where it hallucinated and gave made-up (but nonetheless real-sounding) answers. Meta has said this kind of thing is an expected part of the development of large language models, but it’s important to keep in mind when using Meta AI.


Meta has suggested you should instead use it for more creative or general-interest questions, like basic trivia or travel ideas. As with other generative AI tools, I found that the more creative and specific your questions, the better the answer. For example, “Hey Meta, what’s an interesting Instagram caption for a view of the Golden Gate Bridge” generated a pretty generic response that sounded more like an ad. But “hey Meta, write a fun and interesting caption for a photo of the Golden Gate Bridge that I can share on my cat’s Instagram account” was slightly better.

That said, I’ve been mostly underwhelmed by my interactions with Meta AI. The feature still feels like something of a novelty, though I appreciated the mostly neutral personality of Meta AI on the glasses compared to the company’s corny celebrity-infused chatbots.

And, skeptical as I am, Meta has given a few hints about intriguing future possibilities for the technology. Onstage at Connect, the company offered a preview of an upcoming feature that will allow wearers to ask questions based on what they’re seeing through their glasses. For example, you could look at a monument and ask Meta to identify what you’re looking at. This “multi-modal” search capability is coming sometime next year, according to the company, and I’m looking forward to revisiting Meta AI once the update rolls out.

Privacy

The addition of generative AI also raises new privacy concerns. First, even if you already have a Facebook or Instagram account, you’ll need a Meta account to use the glasses. While this also means they don’t require you to use Facebook or Instagram, not everyone will be thrilled at the idea of creating another Meta-linked account.

The Meta View app still has no ads and the company says it won’t use the contents of your photos or video for advertising. The app will store transcripts of your voice commands by default, though you can opt to remove transcripts and associated voice recordings from the app’s settings. If you do allow the app to store voice recordings, these can be surfaced to “trained reviewers” to “improve, troubleshoot and train Meta’s products.”


I asked the company if it plans to use Meta AI queries to inform its advertising and a spokesperson said that “at this time we do not use the generative AI models that power our conversational AIs, including those on smart glasses, to personalize ads.” So you can rest easy that your interactions with Meta AI won’t be fed into Meta’s ad-targeting machine, at least for now. But it’s not unreasonable to imagine that could one day change. Meta tends to keep new products ad-free in the beginning and introduce ads once they start to reach a critical mass of users. And other companies, like Snap, are already using generative AI to boost their ad businesses.

Are they worth it?

If any of that makes you uncomfortable, or you’re interested in using the shades with non-Meta apps, then you might want to steer clear of the Ray-Ban Meta smart glasses. Though your photos and videos can be exported to any app, most of the glasses’ key features work best when you’re playing in Meta’s ecosystem. For example, you can connect your WhatsApp and Messenger accounts to send hands-free photos or messages, but you can’t send texts via SMS or other apps (Meta AI will read out incoming texts, however). Likewise, the livestreaming abilities are limited to Instagram and Facebook, and won’t work with other platforms.

If you’re a creator or already spend a lot of time in Meta’s apps, though, there are plenty of reasons to give the second-generation shades a look. While the Ray-Ban Stories of two years ago were a fun, if overly expensive, novelty, the $299 Ray-Ban Meta smart glasses feel more like a finished product. The improved audio and photo quality better justify the price, and the addition of AI makes them feel like a product that’s likely to improve rather than a gadget that will start to become obsolete as soon as you buy it.

This article originally appeared on Engadget at https://www.engadget.com/ray-ban-meta-smart-glasses-review-instagram-worthy-shades-070010365.html?src=rss