Could MEMS be the next big leap in headphone technology?

If you have a pair of in-ear headphones, there’s a good chance they are using a technology that’s several decades old. Despite attempts to introduce different, exotic-sounding systems like planar magnetic, electrostatic and even bone conduction, most in-ear monitors (IEMs) still use either balanced armature or dynamic drivers. But there’s another contender, promising high fidelity, low power consumption and a tiny physical footprint. The twist is, it’s a technology that’s been in your pocket for the last ten years already.

We’re talking about micro-electromechanical systems (MEMS), and it’s a technology that’s been used in almost every microphone in every cell phone since the 2010s. When applied to headphone drivers (the inverse of a microphone), the benefits are many. But until recently, the technology wasn’t mature enough for mainstream headphones. California-based xMEMS is one company pushing the technology, and consumer products featuring its solid-state MEMS drivers are finally coming to market. We tested the high-end Oni from Singularity, but Creative has also confirmed a set of TWS headphones with xMEMS drivers will be available in time for the holidays.

Where conventional speakers and drivers typically use magnets and coils, MEMS uses piezos and silicon. The result, if the hype is to be believed, is something that’s more responsive, more durable and with consistent fidelity. And unlike balanced-armature or dynamic drivers, MEMS drivers can be built on a production line with minimal-to-no need for calibration or driver matching, streamlining their production. xMEMS, for example, has partnered with TSMC, one of the largest producers of microprocessors, for its manufacturing process.

xMEMS

Of course, MEMS drivers lend themselves to any wearable that produces sound, from AR glasses to VR goggles and hearing aids. For most of us, though, it's headphones where we’re going to see the biggest impact. Not least because the potential consistency and precision of MEMS should marry perfectly with related technologies such as spatial audio, where fast response times and perfect phase matching (two headphones being perfectly calibrated to each other) are essential.

For now, MEMS is best suited to earbuds, IEMs and TWS-style headphones, but xMEMS hopes to change that. “The North Star of the company was to reinvent loudspeakers,” Mike Householder, Marketing & Business Development at the company, told Engadget. “But to generate that full bandwidth audio in free air is a little bit more of a development challenge that's going to take some more time. The easier lift for us was to get into personal audio and that's the product that we have today.”

To look at, the first IEMs to feature xMEMS’ solid-state drivers, Singularity’s Oni, seem like regular, stylish high-end in-ear monitors. Once the music started to flow, though, there was a very clear difference. Electronic genres sounded crisp and impactful. The MEMS drivers’ fast transient response was evident in the sharp, punchy percussion of RJD2’s “Ghostwriter” and the Chemical Brothers’ “Live Again.” The latter’s mid- and high-end sections in particular shone through with remarkable clarity. Bass response was good, especially in the lower mids, but perhaps not the strong point of the experience.

Singularity

When I tried Metallica’s “For Whom the Bell Tolls,” I immediately noticed the hi-hats pushing through in a way I’d never heard before. The only way I can describe it is “splashy.” It didn’t sound weird, just noticeable. I asked Householder about this and he wasn’t as surprised as I was. “Yeah, the hi-hats, cymbals and percussion, you're gonna hear it with a new level of detail that you're really not accustomed to,” he said, adding that some of this will be the tuning of the supplied headphone amplifier (made by iFi), so it’s partly the EQ of that, mixed with the improved clarity of high frequencies from the MEMS drivers.

The supplied amp/DAC held another surprise: a dedicated “xMEMS” mode. I originally planned to use my own, but it turns out I needed this specific DAC, as the MEMS drivers require a 10-volt bias to work. I asked Householder if all MEMS headphones would require a dedicated DAC (effectively ending their chances of mainstream adoption), but apparently xMEMS has developed its own amp “chip” that can both drive the speakers and supply the 10-volt bias. The forthcoming true wireless buds from Creative, for example, won’t need any additional hardware.

This is where things get interesting. While we don't know the price for Creative’s TWS buds with xMEMS drivers, we can be sure they will be a lot cheaper than Singularity’s IEMs, which retail for $1,500. “You know, they're appealing to a certain consumer, but you could just very easily put that same speaker into a plastic shell, and retail it for 150 bucks,” Householder told Engadget. The idea that xMEMS can democratize personal audio at every price point is a bold one, not least because most audiophiles aren’t used to seeing the exact same technology from their IEMs in sub-$200 wireless products. Until we have another set to test, though, we can’t comment on the individual character each manufacturer can give them.

xMEMS

One possible differentiating factor for higher-end products (and competing MEMS-based products) is something xMEMS is calling “Skyline.” Householder described it as a dynamic “vent” that can be opened and closed depending on the listener’s needs. Similar to how open-back headphones are favored by some for their acoustic qualities, xMEMS-powered IEMs could use Skyline vents that open and close to prevent occlusion, improve passive noise canceling and enable features such as a “transparency” mode that temporarily lets external, environmental noise through.

For those who prefer on-ear or over-ear headphones, MEMS technology will likely be paired with legacy dynamic drivers, at least initially. “The first step that we're taking into headphone is actually a two way approach,” Householder said. The idea is that a smaller dynamic driver can handle the low frequencies, since MEMS drivers currently don’t scale up so well. “It's really the perfect pairing. The dynamic for the low end, let it do what it does best, and then we've got the far superior high frequency response [from MEMS],” he said. “But the long term vision is to eventually fully replace that dynamic driver.”

The ultimate goal would of course be a set of solid-state desktop speakers, but it seems we’re still a little way out from that. For now, though, there’s a tantalizing promise that MEMS-based in-ears could modernize and maybe even democratize consumer audio, at least around a certain price point. Not to mention that xMEMS isn’t the only company in the game: Austrian startup Usound already showed its own reference-design earphone last year, and Sonic Edge has developed its own MEMS “speaker-in-chip” technology. With some competition in the market, there’s hope that the number of products featuring the technology will increase and improve steadily over the next year or so.

This article originally appeared on Engadget at https://www.engadget.com/could-mems-be-the-next-big-leap-in-headphone-technology-173034402.html?src=rss

Study: Wearable sensors more accurately track Parkinson’s disease progression than traditional observation

In a study from Oxford University, researchers found that a combination of wearable sensor data and machine learning algorithms can monitor the progression of Parkinson’s disease more accurately than traditional clinical observation. Monitoring movement data collected by sensor technology may not only improve predictions about disease progression but also allow for more precise diagnoses.

Parkinson’s disease is a neurological condition that affects motor control and movement. Although there is currently no cure, early intervention can help delay the progression of the disease in patients. Diagnosing and tracking the progression of Parkinson's disease currently involves a neurologist using the Movement Disorder Society-Unified Parkinson’s Disease Rating Scale (MDS-UPDRS) to assess the patient's motor symptoms by assigning scores to the performance of specific movements. However, because this is a subjective, human analysis, classification can be inaccurate.

In the Oxford study, 74 patients with Parkinson’s were monitored for disease progression over a period of 18 months. The participants wore wearables with sensors in different regions of the body: on the chest, at the base of the spine and on each wrist and foot. These sensors — which had gyroscopic and accelerometric capabilities — kept tabs on 122 different physiological measurements, and tracked the patients during walking and postural sway tests. Kinetic data was then analyzed by custom software programs using machine learning.
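As a rough illustration of the approach (not the Oxford team's actual software, and with invented sample values), raw accelerometer streams from tests like these are typically reduced to summary features that a machine-learning model can compare across clinic visits:

```go
package main

import (
	"fmt"
	"math"
)

// features reduces a window of accelerometer magnitudes (in g) to two
// simple summary statistics of the kind a gait or sway model might use.
func features(samples []float64) (mean, std float64) {
	for _, v := range samples {
		mean += v
	}
	mean /= float64(len(samples))
	for _, v := range samples {
		std += (v - mean) * (v - mean)
	}
	std = math.Sqrt(std / float64(len(samples)))
	return
}

func main() {
	// A made-up window of readings from a postural sway test.
	window := []float64{0.98, 1.02, 1.10, 0.95, 1.05, 0.99}
	m, s := features(window)
	fmt.Printf("mean=%.3f g, std=%.3f g\n", m, s)
}
```

In a real pipeline, hundreds of such features (the study tracked 122 measurements across six sensor sites) would feed a trained model rather than being inspected by hand.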

Oxford

The sensor data collected by the wearables were compared to standard MDS-UPDRS assessments, which are considered the gold standard in current practice. In this study's patients, that traditional test "did not capture any change," while the sensor-based analysis "detected a statistically significant progression of the motor symptoms," according to the researchers.

Having more precise data on the progression of Parkinson's isn't a cure, of course. But the incorporation of metrics from wearables could help researchers confirm the efficacy of novel treatment options.

This article originally appeared on Engadget at https://www.engadget.com/study-wearable-sensors-more-accurately-track-parkinsons-disease-progression-than-traditional-observation-171132495.html?src=rss

Baidu's CEO says its ERNIE AI 'is not inferior in any aspect to GPT-4'

ERNIE, Baidu’s answer to ChatGPT, has “achieved a full upgrade,” company CEO Robin Li told the assembled crowd at the Baidu World 2023 showcase on Tuesday, “with drastically improved performance in understanding, generation, reasoning, and memory.”

During his keynote address, Li demonstrated improvements to those four core capabilities on stage by having the AI create a multimodal car commercial in a few minutes based on a short text prompt, solve complex geometry problems and progressively iterate the plot for a short story on the spot. The fourth-gen generative AI system “is not inferior in any aspect to GPT-4,” he continued.

ERNIE 4.0 will offer an “improved” search experience resembling that of Google’s SGE, aggregating and summarizing information pulled from the wider web and distilling it into a generated response. The system will be multimodal, providing answers as text, images or animated graphs through an “interactive chat interface for more complex searches, enabling users to iteratively refine their queries until reaching the optimal answer, all in one search interface,” per the company’s press release. What’s more, the AI will be able to recommend “highly customized” content streams based on previous interactions with the user.

Similar to ChatGPT Enterprise, ERNIE’s new Generative Business Intelligence will offer a more finely tuned and secure model trained on each client’s individual data silo. ERNIE 4.0 will also be capable of “conducting academic research, summarizing key information, creating documents, and generating slideshow presentations,” as well as enabling users to search and retrieve files using text and voice prompts.

Baidu is following the example set by the rest of the industry and has announced plans to put its generative AI in every app and service it can manage. The company has already integrated some of the AI’s functions into Baidu Maps, including navigation, ride hailing and hotel bookings. It is also offering “low-threshold access and productivity tools” to help individuals and enterprises develop API plugins for the Baidu Qianfan Foundation Model Platform.

Baidu had already been developing its ERNIE large language model for a number of years prior to the debut of ChatGPT in 2022, though its knowledge base is focused primarily on the Chinese market. Baidu released ERNIE Bot, its answer to ChatGPT, this March with some 550 billion facts packed into its knowledge graph, though it wasn’t until this August that it rolled out to the general public.

Baidu’s partner startups also showed off new product series that will integrate the AI’s functionality during the event, including a domestic robot, an All-in-One learning machine and a smart home speaker.

This article originally appeared on Engadget at https://www.engadget.com/baidus-ceo-says-its-ernie-ai-is-not-inferior-in-any-aspect-to-gpt-4-162333722.html?src=rss

Netflix's first live sports event is a golf tournament featuring F1 drivers and PGA Tour pros

Netflix is getting into live sports streaming, but it's not shelling out hundreds of millions of dollars on NFL games, Formula 1 races or the English Premier League quite yet. The company's first live sports event is a pro-am golf tournament that features athletes from its Formula 1: Drive to Survive and Full Swing docuseries.

The Netflix Cup will see four pairs of Formula 1 drivers and PGA Tour golfers face off in a match play tournament in Las Vegas. You'll be able to watch the event starting at 6PM ET on Tuesday, November 14 — just a few days before F1's inaugural Las Vegas Grand Prix.

As things stand, The Netflix Cup is set to feature F1 drivers Alex Albon, Pierre Gasly, Lando Norris and Carlos Sainz. The golf pros who have lined up to take part are Rickie Fowler, Max Homa, Collin Morikawa and Justin Thomas. The tournament will see the pro-am pairs play an eight-hole match. The top two teams will duke it out on a final hole to try and win the Netflix Cup.

“The continued success of Drive to Survive has played a significant role in the growth of Formula 1 in the US, which has ultimately led to the addition of a third American race,” Emily Prazer, chief commercial officer of Las Vegas Grand Prix, Inc, said in a statement. “It’s only fitting that we kick off our inaugural race weekend with a fun event that can be streamed by F1 and PGA Tour fans around the globe.”

This is a logical way for Netflix to dip its toes into live sports streaming. It means the company doesn't have to immediately snap up expensive rights to high-profile leagues (many of which have deals with rival streaming services anyway) or settle for showcasing lower-tier sports.

It's also another example of Netflix's cross-branding coming to the forefront. The company is placing more focus on its own properties with things like a Squid Game reality competition series and branded retail stores that will feature an obstacle course based on its biggest hit to date. Netflix is also said to be developing more video game adaptations of its shows and movies, such as Extraction and Black Mirror.

Netflix's first livestreamed event was a Chris Rock standup special. However, the company ran into technical problems with its second planned livestream, a Love is Blind cast reunion. The company instead filmed the reunion and uploaded it to the platform as quickly as it could. Netflix will be hoping things go more smoothly this time around.

This article originally appeared on Engadget at https://www.engadget.com/netflixs-first-live-sports-event-is-a-golf-tournament-featuring-f1-drivers-and-pga-tour-pros-160042770.html?src=rss

Alan Wake brings his flashlight to Fortnite

Alan Wake is coming to Fortnite in a cross-promotional event ahead of the 2010 game’s long-awaited sequel. Alan Wake: Flashback “reimagines Remedy Entertainment’s iconic story in Fortnite” as Epic Games and Remedy Entertainment introduce younger players to a franchise that faded in and out of public consciousness before some of them were born.

The game within a game appears to provide a quick recap of the events of the first title within Fortnite. “Troubled author Alan Wake embarks on a desperate search for his missing wife, Alice,” Epic’s description reads. “Following her mysterious disappearance from the Pacific Northwest town of Bright Falls, he discovers pages of a horror story he has supposedly written, but has no memory of.”

The surreal pairing becomes more logical when you consider Epic and Alan Wake developer Remedy have a working relationship. Remedy signed a publishing agreement with Epic in 2020 in a program covering up to 100 percent of a title’s development costs, including paying for quality assurance, localization and marketing. Once a game recovers its development costs, the companies split their profits 50/50. So, the Fortnite tie-in is a win-win for both companies’ bottom lines.

Alan Wake will also be a playable character via an Alan Wake Outfit, which will launch in the “Waking Nightmare” set, available in the Fortnite shop beginning October 26. Meanwhile, Alan Wake 2 launches for $50 on October 27 for PlayStation 5, Xbox Series X/S and PC via the Epic Store.

This article originally appeared on Engadget at https://www.engadget.com/alan-wake-brings-his-flashlight-to-fortnite-155907947.html?src=rss

The Stream Deck MK.2 is on sale for just $130

Elgato’s Stream Deck MK.2 is on sale for $130, a discount of $20 from the MSRP of $150. That’s 13 percent off and actually beats the sale price from last week’s Amazon Prime Day event. If you’re a podcaster or a livestreamer, this is a pretty good time to snag this highly useful streaming device.

This is the latest and greatest Stream Deck, and we said it sets a new standard for the industry when we placed it in our list of the best game streaming gear. Not to be confused with Valve’s Steam Deck, this similarly-named device boasts a hub of 15 LCD hotkeys that you can customize to your liking to simplify livestreaming, podcasting and related activities.

For instance, one button press can turn on a connected accessory, instantly mute a microphone, adjust the lights, trigger on-screen effects or activate audio clips, to name a few examples. You have 15 of these keys, and each can be customized as you see fit. You can even set them to perform in-game actions, like any standard keyboard shortcut.

Additionally, many users have found these devices useful for programming, media editing and any other profession/hobby that could use a bit of hotkey simplification. The buttons are also really satisfying to press.

The main reason you’d get this, however, is right in the name. It’s for streamers who have to moderate a fast-moving chat all while gaming or performing some other task. Each button has a tiny display to let you know at a glance the end result of each press. Over time, you won’t even need these mini displays, instead relying on simple muscle memory, just like keyboard hotkeys. Each of the major streaming platforms, like Twitch and YouTube, offers its own plugins for the device, complete with a set of commonly used adjustment options.

This article originally appeared on Engadget at https://www.engadget.com/the-stream-deck-mk2-is-on-sale-for-just-130-152539642.html?src=rss

Honda to test its Autonomous Work Vehicle at Toronto's Pearson Airport

While many of the flashy, marquee mobility and transportation demos that go on at CES tend to be of the more... aspirational variety, Honda's electric cargo hauler, the Autonomous Work Vehicle (AWV), could soon find use on airport grounds as the robotic EV trundles towards commercial operations. 

Honda first debuted the AWV as part of its CES 2018 companion mobility demonstration, then partnered with engineering firm Black & Veatch to further develop the platform. The second-generation AWV was capable of being remotely piloted or following a preset path while autonomously avoiding obstacles. It could carry nearly 900 pounds of stuff onboard and tow another 1,600 pounds behind it, both on-road and off-road. Those second-gen prototypes spent countless hours ferrying building materials back and forth across a 1,000-acre solar panel construction worksite, both individually and in teams, as part of the development process. 

This past March, Honda unveiled the third-generation AWV with a higher carrying capacity, higher top speed, bigger battery and better obstacle avoidance. On Tuesday, Honda revealed that it is partnering with the Greater Toronto Airports Authority to test its latest AWV at the city's Pearson Airport. 

The robotic vehicles will begin their residencies by driving the perimeters of airfields, using mounted cameras and onboard AI to check fences and report any holes or intrusions. The company is also considering testing the AWV as a FOD (foreign object debris) tool to keep runways clear, as an aircraft component hauler, people mover or baggage cart tug. 

The AWV is just a small part of Honda's overall electrification efforts. The automaker is rapidly shifting its focus from internal combustion to e-motors, with plans to release a fully-electric mid-size SUV, as well as nearly a dozen EV motorcycle models by 2025, and to develop an EV sedan with Sony. Most importantly, however, the Motocompacto is making a comeback.

This article originally appeared on Engadget at https://www.engadget.com/honda-to-test-its-autonomous-work-vehicle-at-torontos-pearson-airport-153025911.html?src=rss

The new $79 Apple Pencil has a USB-C charging port

Apple has unveiled a new Apple Pencil. The latest model costs $79 ($69 for education) and it pairs and charges via a USB-C cable. It’ll be available in early November and it’s compatible with every iPad that has a USB-C port.

This is the company’s most budget-friendly Apple Pencil yet. It’s $20 less than the original model and $40 cheaper than the second-gen Apple Pencil. Apple says features of the new version include pixel-perfect accuracy, low latency and tilt sensitivity.

There's no pressure sensitivity this time around, though, so if you want that feature, you'll need to stick with either of the previous iterations. While you can attach the USB-C Apple Pencil magnetically to the side of your iPad for storage (in which case it will go into a sleep state to prolong the battery life), there's no wireless charging support either. To top up the Pencil's battery, you'll need to slide back a cap to expose a USB-C port and plug in a charging cable.

Apple

Unlike the second-gen Pencil, you won't be able to double tap the latest version to change drawing tools. Apple has also declined to offer free engraving this time around. However, if you have an M2-powered iPad, you'll be able to take advantage of the hover feature that's supported on the second-gen Pencil. That enables you to preview any mark you intend to make before it's actually applied to your note, sketch, annotation and so on.

This is Apple's latest step in its transition away from the Lightning port, which was largely prompted by European Union rules. The company started embracing USB-C on iPads several years ago, while it ditched the Lightning port in all iPhone 15 models. It'll take Apple a while longer to move away from Lightning entirely. Several devices it sells — such as older iPhones, AirPods Max, Magic Mouse, Magic Trackpad and the first-gen Apple Pencil — still use that charging port. But this is another step toward an all-USB-C future, and one fewer charging cable you'll need to carry around.

This article originally appeared on Engadget at https://www.engadget.com/the-new-79-apple-pencil-has-a-usb-c-charging-port-141732710.html?src=rss

Microsoft Copilot: Here's everything you need to know about the company's AI assistant

Microsoft’s new Copilot AI has wormed its way into nearly every aspect of Windows 11. There’s a bit of a learning curve, but don’t worry: we’ve got you covered. We've put together a primer on the company's new AI assistant, along with step-by-step instructions on how to both enable and disable it on your Windows computer.

What does Microsoft Copilot do?

Microsoft’s Copilot is a suite of AI tools that work together to create a digital personal assistant of sorts. Just like other modern AI assistants, the tech is based on generative artificial intelligence and large language models (LLMs).

You can use Copilot to do a whole bunch of things to increase productivity or just have fun. Use the service to summarize a web page or essay, write an email, quickly change operating system settings, generate custom images based on text, transcribe audio or video, generate a screenshot and even connect to an external device via Bluetooth. It also does the sorts of things other AI chatbots do, like creating lists of recipes, writing code or planning itineraries for trips. Think of it as a more robust version of the pre-existing Bing AI chatbot.

How to enable Microsoft Copilot

Update your computer to the latest version of Windows 11

First of all, you need the latest Windows 11 update, so go ahead and download that first. 

1. Head to Settings and look for the Windows Update option. 

2. Follow the prompts and reset your computer if required. 

You’re now ready to experience everything Copilot has to offer. If Microsoft just dropped an update, it may take a while to reach your region; you can also set Windows Update to install the latest updates automatically as they become available.

Once your computer is updated, click the Copilot button

As for enabling the feature, click the Copilot button on the taskbar or press Win + C on the keyboard. That’s all there is to it.

How to disable Microsoft Copilot

Engadget/Terrence O'Brien

Microsoft Copilot isn’t an always-on feature. Once it shows up in the taskbar, it only works when you ask it something. However, if you want to disable or delete the feature entirely, you have a couple of options.

The easiest method is to remove it from the taskbar. Out of sight, out of mind, right? Open up Settings and click on Personalization. Next, select the Taskbar page on the right side. Look for Taskbar items, then click the Copilot toggle switch to remove it from the line-up. This ensures you won’t ever accidentally turn it on via the taskbar, but you can still call up the AI by pressing Win + C.

If you want to disable the toolset entirely, the process is a bit more involved. Start by opening a PowerShell window: search for Windows PowerShell, right-click the result and select the option to run as an administrator. Next, click Yes on the UAC prompt. This opens an elevated PowerShell window where you can enter commands.

Paste the following into the window: reg add HKCU\Software\Policies\Microsoft\Windows\WindowsCopilot /v "TurnOffWindowsCopilot" /t REG_DWORD /f /d 1

That should do it. Copilot will no longer appear anywhere on your system.
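For reference, here is the same policy tweak as a config fragment, along with its reversal (standard reg.exe syntax; a sign-out or restart may be needed before the change takes effect):

```shell
:: Disable Windows Copilot via the per-user policy value (same command as above)
reg add HKCU\Software\Policies\Microsoft\Windows\WindowsCopilot /v "TurnOffWindowsCopilot" /t REG_DWORD /f /d 1

:: Changed your mind? Delete the value to re-enable Copilot
reg delete HKCU\Software\Policies\Microsoft\Windows\WindowsCopilot /v "TurnOffWindowsCopilot" /f
```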

What are the limitations of Copilot?

This is new technology, so the limitations are extensive. Like all modern LLMs, Microsoft’s Copilot can and will make up stuff out of thin air every once in a while, a phenomenon known as hallucination. It also doesn’t retain information from conversation to conversation, likely for security reasons. This means it restarts the conversation from a blank slate every time you close a window and open another one. It won’t remember anything about you, your preferences or even your favorite order from the coffee shop down the street. Finally, it doesn’t integrate with too many third-party sources of data, beyond the web, so you won’t be able to incorporate personal fitness data and the like.

What's the difference between GitHub Copilot and Microsoft Copilot?

There is one primary difference between the two platforms, despite the similar names. GitHub Copilot is all about helping craft and edit code for developing software applications. Microsoft Copilot can whip up some rudimentary code, but it’s far from a specialty. If your primary use case for an AI assistant is coding, go with GitHub. If you only dabble in coding, or have no interest at all, go with Microsoft.

This article originally appeared on Engadget at https://www.engadget.com/microsoft-copilot-heres-everything-you-need-to-know-about-the-companys-ai-assistant-130004909.html?src=rss

WhatsApp debuts passkey logins on Android

WhatsApp just made logging in a much simpler and faster process, at least on Android devices. The Meta-owned chat application has launched passkey support for Android, which means users no longer have to use one-time passwords (OTPs) from two-factor authentication to log into their accounts. Passkeys are a relatively new login technology designed to be resistant to phishing attacks, password leaks and other security vulnerabilities plaguing its older peers. 

They're made up of cryptographic pairs consisting of one public key and one private key that lives on the user's device. Services that support passkeys don't have access to that private key, and it also can't be written down or given away. Without that private key, nobody else can log into somebody's account. Now that WhatsApp has launched passkey support, users can log in using their device's authentication procedure, so they can simply verify their identities with their face, fingerprint or their PINs. 

While a lot of applications still don't have passkey support, the list continues to grow. PayPal launched passkey logins for Android back in March, while TikTok rolled out support for the technology in July. More recently, 1Password rolled out passkeys to all its users on desktop and iOS after testing the login solution for three months. 

Android users can easily and securely log back in with passkeys 🔑 only your face, finger print, or pin unlocks your WhatsApp account pic.twitter.com/In3OaWKqhy

— WhatsApp (@WhatsApp) October 16, 2023

This article originally appeared on Engadget at https://www.engadget.com/whatsapp-debuts-passkey-logins-on-android-122036260.html?src=rss