
The first affordable headphones with MEMS drivers don't disappoint

The headphone industry isn’t known for its rapid evolution. There are developments like spatial sound and steady advances in Bluetooth audio fidelity, but for the most part, the industry counts advances in decades rather than years. That makes the arrival of the Aurvana Ace headphones — the first wireless buds with MEMS drivers — quite the rare event. I recently wrote about what exactly MEMS technology is and why it matters, but Creative is the first consumer brand to sell a product that uses it.

Creative unveiled two models, the Aurvana Ace ($130) and the Aurvana Ace 2 ($150), in tandem. Both feature MEMS drivers; the main difference is that the Ace supports high-resolution aptX Adaptive while the Ace 2 adds top-of-the-line aptX Lossless (sometimes marketed as “CD quality”). The Ace 2 is the model we’ll be referring to from here on.

In fairness to Creative, the inclusion of MEMS drivers alone would be a unique selling point, but the aforementioned aptX support adds another layer of hi-fi credentials to the mix. Then there’s adaptive ANC and other details like wireless charging that give the Ace 2 a strong spec sheet for the price. The obvious omissions are small quality-of-life features, such as pausing playback when you remove a bud, and audio personalization. Those would have been two easy wins that could make both models hard to beat at this price, in terms of features if nothing else.


When I tested the first ever xMEMS-powered in-ear monitors, the Singularity Oni, the extra detail in the high end was instantly obvious, especially in genres like metal and drum & bass. The lower frequencies were more of a challenge; xMEMS, the company behind the drivers in both the Oni and the Aurvana, concedes that a hybrid setup with a conventional bass driver might be the preferred option until its own speakers can handle more bass. That’s exactly what we have here in the Aurvana Ace 2.

The key difference between the Aurvana Ace 2 and the Oni, though, is more important than a good low-end thump (if that’s even possible). MEMS-based headphones need a small amount of “bias” power to work. That doesn’t impact battery life, but Singularity handled it with a dedicated DAC that has a specific xMEMS “mode,” whereas Creative uses a dedicated amp chip, demonstrating for the first time that consumer MEMS headphones can work in a wireless configuration. The popularity of true wireless (TWS) headphones these days means that if MEMS is to catch on, it has to be compatible.

The good news is that, even without the expensive iFi DAC the Singularity Oni IEMs required, the Aurvana Ace 2 delivers more clarity in the higher frequencies than rival products at this price. That is to say, even with improved bass, the MEMS drivers clearly favor the mid and high frequencies. The result is a sound that strikes a good balance between detail and body.

Listening to “Master of Puppets,” the iconic chords had better presence and “crunch” than on a $250 pair of on-ear headphones I tried. Likewise, the aggressive snares in System of a Down’s “Chop Suey!” pop right through just as you’d hope. When I listened to the same song on the $200 Grell Audio TWS/1 with personalized audio activated, the two were actually comparable; the difference is that Creative’s sounded like that out of the box, while the Grell buds have slightly better dynamic range overall and more emphasis on the vocals.

For more electronic genres, the Aurvana Ace’s hybrid setup really comes into play. Listening to Dead Prez’s “Hip-Hop” really shows off the bass capabilities, with more oomph here than both the Grell and a pair of $160 House of Marley Redemption 2 ANC buds — but it never felt overdone, fuzzy or loose.


Despite the Ace 2 besting other headphones in specific like-for-like comparisons, the nuances and differences between the various sets are, as a whole, harder to quantify. The only set I tested that sounded consistently better, to me, was the Denon PerL Pro (formerly known as the NuraTrue Pro), but at $349 it’s also the most expensive.

It would be remiss of me not to point out that there were also many songs and tests where the differences between the various sets of earbuds were much harder to discern. With two iPhones, one Spotify account and a lot of swapping between headphones during the same song, it’s possible to tease out small preferences between different sets, but the form factor, consumer preferences and price point dictate that, to some extent, they all broadly overlap sonically.

The promise of MEMS drivers isn’t just about fidelity, though. The claim is that the lack of moving parts and the semiconductor-like fabrication process ensure a higher level of consistency, with less need for calibration and tuning. The end result is a more reliable production process, which should mean lower costs. In turn, that could translate into better value for money, or at least a more durable product, if companies choose to pass the saving on, of course.

For now, we’ll have to wait and see whether other companies explore MEMS drivers in their own products or whether the technology remains a specialist option for enthusiasts, alongside the likes of planar magnetic and electrostatic drivers. One thing’s for sure: Creative’s Aurvana Ace series offers a great audio experience alongside premium features like wireless charging and aptX Lossless for a reasonable price — what’s not to like about that?


Apple and Google are probably spying on your push notifications

Foreign governments likely spy on your smartphone usage, and now Senator Ron Wyden's office is pushing for Apple and Google to reveal how exactly it works. Push notifications, the dings you get from apps calling your attention back to your phone, can be handed over by those companies to government agencies if asked. But it appears the Department of Justice won't let the companies come clean about the practice.

Push notifications don't actually come straight from the app. Instead, they pass through the smartphone platform owner, Apple for iPhones or Google for Android, which delivers the notifications to your screen. That creates an opening for government surveillance. "Because Apple and Google deliver push notification data, they can be secretly compelled by governments to hand over this information," Wyden wrote in the letter on Wednesday.
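To make that relay concrete, here is a minimal sketch (in Python, using the firebase-admin SDK) of how an app's backend hands a notification to Google's Firebase Cloud Messaging, which then forwards it to the handset. The service-account path and device token are placeholder values; the point is simply that the device token and notification payload both pass through Google's servers, which is the kind of data a government could compel the company to disclose.

    import firebase_admin
    from firebase_admin import credentials, messaging

    # Authenticate the backend with a (placeholder) service account key.
    cred = credentials.Certificate("service-account.json")
    firebase_admin.initialize_app(cred)

    # The registration token identifies the target device; FCM sees this
    # token plus the payload below when it relays the notification.
    device_token = "EXAMPLE_DEVICE_REGISTRATION_TOKEN"

    message = messaging.Message(
        notification=messaging.Notification(
            title="New message",
            body="You have a new chat message",
        ),
        token=device_token,
    )

    # send() submits the message to Google's FCM servers for delivery.
    response = messaging.send(message)
    print("Sent:", response)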

Apple says it was barred from coming clean about this process, which is why Wyden's letter specifically targets the Department of Justice. “In this case, the federal government prohibited us from sharing any information and now that this method has become public we are updating our transparency reporting to detail these kinds of requests,” Apple said in a statement to Engadget. Apple's next transparency report will include requests for push notification tokens, according to the company. Specifically, Wyden asks the DOJ to let Apple and Google tell customers and the general public about the demand for these app notification records. Google did not respond to a request for comment by the time of publication.

It's even more complicated because apps can't do much about it. Even if an app makes its own security promises, any push notifications it sends must go through Apple's or Google's systems. In theory, this means details of your private messaging could be shared with a foreign government if you're getting push notifications from the app. That includes any metadata about the notification, too, like account information.

The revelation about push notifications comes at a time when privacy and security have become a selling point. Companies advertise how they'll keep your information safe, but as more loopholes come to light, it's becoming harder to suss out what's actually trustworthy.


Google's answer to GPT-4 is Gemini: 'the most capable model we’ve ever built'

OpenAI's run atop the generative AI heap may be coming to an end: on Wednesday, Google officially introduced its most capable large language model to date, dubbed Gemini 1.0. It's the first of “a new generation of AI models, inspired by the way people understand and interact with the world,” CEO Sundar Pichai wrote in a Google blog post.

“Ever since programming AI for computer games as a teenager, and throughout my years as a neuroscience researcher trying to understand the workings of the brain, I’ve always believed that if we could build smarter machines, we could harness them to benefit humanity in incredible ways,” Pichai continued.

The result of extensive collaboration between Google’s DeepMind and Research divisions, Gemini has all the bells and whistles cutting-edge genAIs have to offer. "Its capabilities are state-of-the-art in nearly every domain," Pichai declared. 

The system has been developed from the ground up as an integrated multimodal AI. Many foundation models can essentially be thought of as groups of smaller models stacked in a trench coat, with each individual model trained to perform its specific function as part of the larger whole. That’s all well and good for shallow functions like describing images, but not so much for complex reasoning tasks.

Google, conversely, pre-trained and fine-tuned Gemini “from the start on different modalities,” allowing it to “seamlessly understand and reason about all kinds of inputs from the ground up, far better than existing multimodal models,” Pichai said. Being able to take in all these forms of data at once should help Gemini provide better responses on more challenging subjects, like physics.

Gemini can code as well. It’s reportedly proficient in popular programming languages including Python, Java, C++ and Go. Google has even leveraged a specialized version of Gemini to create AlphaCode 2, a successor to last year's competition-winning generative AI. According to the company, AlphaCode 2 solved twice as many challenge questions as its predecessor did, which would put its performance above an estimated 85 percent of the previous competition’s participants.

While Google did not immediately share Gemini’s parameter count, the company did tout the model’s operational flexibility and its ability to work in form factors from large data centers to local mobile devices. To accomplish this, Gemini is being made available in three sizes: Nano, Pro and Ultra.

Nano, unsurprisingly, is the smallest of the trio and is designed primarily for on-device tasks. Pro is the next step up, a more versatile offering than Nano, and will soon be integrated into many of Google’s existing products, including Bard.

Starting Wednesday, Bard will begin using a specially tuned version of Pro that Google promises will offer “more advanced reasoning, planning, understanding and more.” The improved Bard chatbot will be available in the same 170 countries and territories as the regular Bard, and the company reportedly plans to expand the new version's availability as we move through 2024. Next year, with the arrival of Gemini Ultra, Google will also introduce Bard Advanced, an even beefier AI with added features.

Pro’s capabilities will also be accessible via API calls through Google AI Studio or Google Cloud Vertex AI. Search (specifically SGE), Ads, Chrome and Duet AI will also see Gemini functionality integrated into their features in the coming months.
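As a rough illustration of what that API access looks like, here is a minimal sketch assuming the google-generativeai Python package (installed with pip install google-generativeai) and an API key generated in Google AI Studio; the key and prompt below are placeholders, not working values.

    import google.generativeai as genai

    # Authenticate with a (placeholder) API key from Google AI Studio.
    genai.configure(api_key="YOUR_API_KEY")

    # "gemini-pro" is the mid-tier text model described above.
    model = genai.GenerativeModel("gemini-pro")

    # Send a prompt and print the text of the model's reply.
    response = model.generate_content("Summarize what a large language model is in two sentences.")
    print(response.text)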

Gemini Ultra won’t be available until at least 2024, as it reportedly requires additional red-team testing before being cleared for release to “select customers, developers, partners and safety and responsibility experts” for testing and feedback. But when it does arrive, Ultra promises to be an incredibly powerful tool for further AI development.


Google announces new AI processing chips and a cloud 'hypercomputer'

Undoubtedly, 2023 has been the year of generative AI, and Google is marking its end with even more AI developments. The company has announced its most powerful TPU (formally known as a Tensor Processing Unit) yet, the Cloud TPU v5p, along with an AI Hypercomputer from Google Cloud. "The growth in [generative] AI models — with a tenfold increase in parameters annually over the past five years — brings heightened requirements for training, tuning, and inference," Amin Vahdat, Google's Engineering Fellow and Vice President for the Machine Learning, Systems, and Cloud AI team, said in a release.

The Cloud TPU v5p is an AI accelerator for training and serving models. Google designed Cloud TPUs for models that are large, have long training periods, consist mostly of matrix computations and have no custom operations inside the main training loop, built with frameworks such as TensorFlow or JAX. Each TPU v5p pod packs 8,960 chips linked by Google's highest-bandwidth inter-chip interconnect.
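For a sense of the kind of workload that description implies, here is a small, generic JAX sketch in Python: a jit-compiled matrix multiplication that runs on whatever accelerator JAX detects (TPU cores on a Cloud TPU VM, otherwise CPU or GPU). It's only an illustration of the dense, custom-op-free computation Google describes, not Google's own training code; the matrix sizes are arbitrary.

    import jax
    import jax.numpy as jnp

    # Show which accelerators JAX can see; on a Cloud TPU VM this lists TPU cores.
    print(jax.devices())

    # A jit-compiled matrix multiply: dense linear algebra with no custom ops,
    # the shape of computation Cloud TPUs are designed around.
    @jax.jit
    def matmul(a, b):
        return jnp.dot(a, b)

    key = jax.random.PRNGKey(0)
    a = jax.random.normal(key, (1024, 1024))
    b = jax.random.normal(key, (1024, 1024))

    print(matmul(a, b).shape)  # (1024, 1024)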

The Cloud TPU v5p follows previous iterations like the v5e and v4. According to Google, the TPU v5p delivers twice the FLOPS of the TPU v4 and is four times more scalable in terms of total FLOPS per pod. It can also train large language models 2.8 times faster and train embedding-dense models 1.9 times faster than the TPU v4.

Then there's the new AI Hypercomputer, an integrated system combining open software, performance-optimized hardware, machine learning frameworks and flexible consumption models. The idea is that this amalgamation will improve productivity and efficiency compared with treating each piece separately. The AI Hypercomputer's performance-optimized hardware utilizes Google's Jupiter data center network technology.

In a change of pace, Google is providing developers with open software and "extensive support" for machine learning frameworks such as JAX, PyTorch and TensorFlow. The announcement comes on the heels of Meta and IBM's launch of the AI Alliance, which prioritizes open sourcing (and which Google is notably not part of). The AI Hypercomputer also introduces two flexible consumption models: Flex Start Mode and Calendar Mode.

Google shared the news alongside the introduction of Gemini, a new AI model that the company calls its "largest and most capable," and its rollout to Bard and the Pixel 8 Pro. It will come in three sizes: Gemini Pro, Gemini Ultra and Gemini Nano. 


Google’s Gemini AI is coming to Android

Google is bringing Gemini, the new large language model it just introduced, to Android, beginning with the Pixel 8 Pro. The company’s flagship smartphone will run Gemini Nano, a version of the model built specifically to run locally on smaller devices, Google announced in a blog post. The Pixel 8 Pro is powered by the Google Tensor G3 chip, which is designed to speed up on-device AI performance.

This lets the Pixel 8 Pro add smarts to several existing features. The phone’s Recorder app, for instance, has a Summarize feature that currently needs a network connection to give you a summary of recorded conversations, interviews and presentations. But thanks to Gemini Nano, the phone will now be able to provide that summary without needing a connection at all.

Gemini smarts will also power Gboard’s Smart Reply feature, with Gboard suggesting high-quality responses to messages that are aware of the conversation’s context. The feature is currently available as a developer preview and needs to be enabled in settings; for now it only works with WhatsApp, with more apps to come next year.

“Gemini Nano running on Pixel 8 Pro offers several advantages by design, helping prevent sensitive data from leaving the phone, as well as offering the ability to use features without a network connection,” wrote Brian Rakowski, Google Pixel’s vice president of product management.

As part of today’s AI push, Google is upgrading Bard, the company’s ChatGPT rival, with Gemini as well, so you should see significant improvements when using the Pixel’s Assistant with Bard experience. Google is also rolling out a handful of AI-powered productivity and customization updates on other Pixel devices, including the Pixel Tablet and the Pixel Watch, although it isn’t immediately clear what they are.


Gemini Nano is the smallest version of Google's large language model, while Gemini Pro is a larger model that will power not just Bard but other Google services like Search, Ads and Chrome, among others. Gemini Ultra, Google's beefiest model, will arrive in 2024 and will be used to further AI development.

Although today’s updates are focused on the Pixel 8 Pro, Google also spoke about AICore, an Android 14 service that lets developers access on-device AI features like Nano. Google says AICore is designed to run on “new ML hardware like the latest Google Tensor TPU and NPUs in flagship Qualcomm Technologies, Samsung S.LSI and MediaTek silicon.” The company adds that “additional devices and silicon partners will be announced in the coming months.”


AI joins you in the DJ booth with Algoriddim’s djay Pro 5

Algoriddim’s djay Pro software has always had close ties to Apple and often been at the forefront of new DJ tech, especially on Mac, iOS or iPadOS. Today marks the launch of djay Pro version 5 and it includes a variety of novel features, many of which leverage the company’s AI and a new partnership with the interactive team at AudioShake.

There are several buzzy trademarked names to remember this time around, including Next-generation Neural Mix, Crossfader Fusion and Fluid Beatgrid. These are the major points of interest in djay Pro 5, with only a passing mention of improved stem separation on mobile, UI refreshes for the library and a new, simplified Starter Mode that should cater to newcomers to the platform. The updates include some intriguing AI-automated features that put the system in control of more complex maneuvers. Best of all, existing users get it all for free as part of their subscription.

AudioShake and Algoriddim have been working on their audio separation tech (as have many other companies) and are calling this refreshed version Next-generation Neural Mix. We’re told to expect crisp, clear separation of elements such as vocals, harmonies and drums. The tools have also been optimized for mobile devices, as long as they run a supported OS.

Fluid Beatgrid is perhaps the easiest to understand and seems to underpin the crossfader updates. Anyone who’s used beatgrids knows they’re rarely perfect on first analysis and often take a bit of work to lock in. Songs with live instrumentation that naturally shifts tempo, EDM with tempo changes during breakdowns and even older dance tracks that tend to meander slightly throughout playback have all been pain points. Fluid Beatgrid is supposed to use AI to accommodate those shifts and find the right points to mark.

Crossfader Fusion is where stems, automation and those beatgrids all come into play. There are now a variety of settings for the crossfader beyond the usual curves. One of the highlighted modes is the Neural Mix (Harmonic Sustain) setting, which uses stem separation and automated level adjustments as you move from one track to the next.
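For readers unfamiliar with crossfader curves, the idea is simply that the outgoing and incoming tracks follow complementary gain envelopes during a transition. Below is a generic equal-power crossfade sketch in Python with NumPy; it's purely illustrative of the underlying concept and has nothing to do with Algoriddim's actual Neural Mix or Crossfader Fusion code. The test tones and fade length are arbitrary placeholders.

    import numpy as np

    def equal_power_crossfade(outgoing, incoming, fade_samples):
        # Equal-power curve: cosine fade-out against sine fade-in keeps the
        # combined loudness roughly constant across the transition.
        t = np.linspace(0.0, 1.0, fade_samples)
        gain_out = np.cos(t * np.pi / 2)
        gain_in = np.sin(t * np.pi / 2)
        return outgoing[-fade_samples:] * gain_out + incoming[:fade_samples] * gain_in

    # Two seconds of 440 Hz and 660 Hz test tones at 44.1 kHz (placeholder audio).
    sr = 44100
    t = np.arange(2 * sr) / sr
    track_a = np.sin(2 * np.pi * 440 * t)
    track_b = np.sin(2 * np.pi * 660 * t)

    # Blend the last second of track A into the first second of track B.
    mix = equal_power_crossfade(track_a, track_b, fade_samples=sr)
    print(mix.shape)  # (44100,)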

For those who enjoy cutting and scratching, there are crossfade settings that use automated curves and spatial effects so that, for example, the outgoing track’s vocals can drop out automatically as you cut into the next track. The incoming track’s vocals can be highlighted for scratching and, as your mix completes the transition, things are blended together further with AI.

There's even an example provided that shows how you can mix across vastly different BPMs: the incoming song matches up with the slower outgoing track, but its original tempo is gradually restored during the transition, leaving you at the new, faster tempo.

Existing users should be alerted to the update, but newcomers can find djay Pro version 5 on the App Store starting today. While there will continue to be a free version, the optional Pro subscription costs $7 per month or $50 per year and gives you access to all the features across Mac, iPhone and iPad. The app supports devices running macOS 10.15 or later and iOS 15 / iPadOS 15 or later.

And as a side note, we’re told that djay Pro for Windows was leveled up in September and will get Fluid Beatgrid in an update for that platform as soon as next week. Newer features like Crossfader Fusion are expected in the near future.


Goat Simulator 3's headbutting mayhem finally arrives to mobile

Everyone's favorite hooved menace is back on mobile with the launch of Goat Simulator 3 for iOS and Android, Swedish developer Coffee Stain Studios announced. As before, you play in an open world as a mayhem-loving goat trying to cause maximum chaos and ruin the day of as many NPCs as possible. The latest version dials up the destruction with accessories like jetpacks, rocket launchers and supercharged headbutts, while letting you kit out your goats with dubious fashion accessories.

The mobile version offers much the same feature set found on PS5, Xbox Series X/S and PC, particularly the co-op multiplayer support. Other mobile features include multiple goat options (tall, fishy, with hats), an "OK amount" of quests in the open world, mini-games, "ragdoll physics that slap Newton in the face" and more, according to the Play Store listing.

Goat Simulator famously started as a jokey demo for Global Game Jam 2014, replete with bugs, bizarre physics and just a weird, weird concept. Flocks of players loved the alpha version, though, so Coffee Stain elected to release it as a full game, leaving in the floppy necks and the ability to use your goat's tongue to somehow walk up construction cranes.

Goat Simulator 3 is actually the second game in the series (the developer famously skipped over 2), appearing last year, a full eight years after the original. The original version appeared shortly after the alpha and basically left most of the bugs in — part of the charm or terribleness of the game, depending on your point of view. It turns out that "buggy and stupid" is hard to do on purpose, though, as GS3's creative director put it, hence the long delay. In any case, it's now available on Android and iOS for $13.


iOS 17.2 will enable Qi2 next-gen wireless charging on iPhone 13 and 14

Apple, which usually plays it safe when it comes to new standards, already surprised us with Qi2 compatibility on the iPhone 15, but it turns out Cupertino has more up its sleeve. As spotted by 9to5Mac and some users, the release notes for the iOS 17.2 RC (release candidate) state that the update adds "Qi2 charger support for all iPhone 13 models and iPhone 14 models." This means those iPhone models should support up to 15W of wireless charging with Qi2-certified chargers, though the release notes stopped short of confirming the power specs. We'll be able to find out when iOS 17.2 rolls out to the general public — likely in a few days' time.

Until now, 15W input on these iPhone models was only possible through MagSafe-certified chargers, whereas the cheaper MagSafe-compatible ones are limited to 7.5W. With Qi2's matching performance, consumers will be offered more affordable choices when it comes to 15W wireless chargers, as manufacturers won't need to pay the Apple premium for MagSafe certification.

Qi2 was first announced at CES 2023, with its main highlight being its MagSafe-like wireless fast charging standard — even for Android devices. This uses "Magnetic Power Profile" to ensure compatibility across phones and chargers. While the output is currently capped at 15W, future iterations will "significantly" raise charging levels past 15W, according to the WPC (Wireless Power Consortium). We've been told to expect a slew of Qi2-compatible accessories — including some from Anker — arriving by the holidays, and I'm sure that it'll also become a theme at CES 2024 next month.

"iOS 17.2 enables Qi2 on the iPhones 13 and 14. (iPhones 15 already shipped with it)." — Rosyna Keller (@rosyna), December 5, 2023 (pic.twitter.com/pvwPvciq7q)


Beeper says it reverse-engineered iMessage into an Android app

The universal chat app Beeper just got a lot more, well, universal. The company has unveiled Beeper Mini, an app that makes the bold claim of bringing true iMessage support to Android devices. Even bolder? It seems to actually work, according to users who have tried it. This isn’t done in some hacky way that could compromise privacy and security, like Nothing’s beleaguered attempt to play nice with iOS devices.

Instead, the code has been reverse-engineered from the ground up, so it speaks the official iMessage protocol. The texts are even sent through Apple’s servers before moving on to their final destination, just like a real iMessage created on an iPhone. Even weirder? All of this high-tech wizardry was created by a 16-year-old high school student.

Once you open the app, it goes through all of your text message conversations and flags the ones with iMessage users. The system then switches them over to blue-bubble conversations via Apple’s official platform. From then on, every time you talk to that person, the bubbles will be bluer than a clear spring day. You also don’t need an Apple ID to log in, alleviating many of the security concerns that plagued rival offerings.

Beeper co-founder Eric Migicovsky was contacted by the talented high-schooler and was blown away by the tech. “No one on Earth had done that,” he told The Verge. “No one had put all the pieces together.”

It’s worth reiterating: this platform isn’t hacking together an iMessage-like experience for Android. It is the iMessage experience working on Android, as it's sending actual iMessages. The tech was created by jailbreaking iPhones to get a good look at how the operating system handles iMessages, then recreating the software.

Beeper is being really transparent here, and the company knows it's potentially skating on thin ice with regard to how Apple will respond. Apple has never been especially friendly to those it deems to be infringing on company secrets, but it did just announce forthcoming support for the RCS messaging standard. That will allow for greater interoperability between Android and iOS devices, so maybe it’ll let Beeper Mini slide for now. Given that the app reimplements Apple’s protocol and relies on its servers, however, it likely wouldn’t be difficult for Apple to put the kibosh on Beeper from its end.

Migicovsky says Beeper’s iMessage code will be open source to ensure there are no security or privacy lapses. As for potential legal hurdles, the co-founder says his company is on the right side of the law, noting there’s no actual Apple code in Beeper Mini, just custom-made recreated code. He also cites legal precedent in copyright law that has sided with those who reverse engineer code. In any event, Beeper Mini is available for now, and it's free to download, though it does feature in-app purchases.


Discord overhauls its mobile app with new tabs, messaging features and more

Discord bluntly describes the mobile app it launched in 2015 as a squished-down version of its desktop platform. But that acknowledgment comes with an announcement that said app is getting a complete redesign that's an "independent experience" from its computer-based counterpart. It includes a new set of navigation tabs prominently displayed at the bottom of your screen: Servers, Messages, Notifications and You. 

While Discord considered changes like a horizontal layout, the Servers tab looks very similar to before, just without the direct messaging option. Instead, a Messages tab replaces the existing Friends one, displaying all your one-on-one and group messages in one place instead of making you click through multiple pages. You can also favorite a conversation so it stays at the top of your chats, and use a search bar to find a message, file, pin or attachment across all discussions, much like WhatsApp or other messaging apps. Also new in conversations is the ability to swipe left on a message to reply to it, rather than having to hold it down. You might have noticed that Discord already changed the formatting of picture messages to show in a gallery style rather than one by one.


The Notifications tab will now include server events, friend requests and message replies, all of which you can tap to jump to the source immediately. Plus, notifications should now auto-clear instead of requiring you to remove them. Rounding out the now four tabs on the bottom (goodbye, search) is the You page. The Friends tab has been folded in here, alongside options like changing your status or profile picture. This is also still the tab for accessing account settings, but with a bit more convenience: you can double-tap the You tab to go directly to account settings and, once there, use a search bar to find whatever setting you need. One option you'll find there is the new Midnight theme, a pure black background that should be a bit easier on your eyes.

The app's performance has also improved, with Discord claiming that opening the app takes 55 percent less time on Android and 43 percent less on iOS, while apparently using a quarter of the data it did before. Android users' crash rate has also been cut in half over the past year. Plus, voice and video calls have been improved, with an updated UI allowing for "more intuitive interactions."

Discord also shared that it's working on other requested updates, such as quick access to a server's member list, better search filters, more customization options for viewing messages, and overall app performance improvements. You can use the feedback forum at any point to express things you're unhappy with or that you'd like to see changed.

Notably, Discord got itself in a bit of hot water recently with the US Senate Judiciary Committee. The company refused to have its CEO, Jason Citron, testify about children's safety online and wouldn't accept an electronic subpoena, prompting a visit from US Marshals to hand-deliver one to its office. Citron will speak with the committee about protecting kids — and Discord's "failures" to do so — alongside the CEOs of Meta, X, TikTok and Snap on January 31, 2024, at 10 AM ET.
