
Apple Vision Pro two months later: A telepresence dream

Two months after I started using the Apple Vision Pro, it hasn't transformed the way I live. It hasn't replaced my TV, and it doesn't make me want to give up my powerful desktop or slim laptops. It's just another tool in my gadget arsenal — one I can don to catch up on X-Men '97 in bed, or to help me dive deep into research while I'm away from my office. The Vision Pro becomes normal so quickly, it's almost easy to forget how groundbreaking it actually is. Its screens are still absolutely stunning, and the combination of eye tracking and Apple's gesture controls makes for the most intuitive AR/VR interface I've seen yet.

While the Vision Pro still isn't something most people should consider buying, Apple has thrown a few bones to early adopters. There are more games popping up on the App Store and Arcade every week, and there are also a handful of 3D films being offered to Apple TV+ subscribers. The addition of Spatial Personas also goes a long way towards making the Vision Pro more of a telepresence machine (more on that below). But we're still waiting for the company to make good on the promise of 180-degree Immersive Video, as well as to let users create higher-quality Spatial Videos on iPhones.

Photo by Devindra Hardawar/Engadget

How I use the Apple Vision Pro

Once the pressure of reviewing every aspect of the Vision Pro was over, I started incorporating it into my life like a typical user. (Full disclosure: I returned the unit I originally bought, but Apple sent along a sample for further testing.) Mostly, that means not forcing myself to use the headset for large chunks of the day. Instead, my Vision Pro time is more purpose-driven: I slip it on in the morning and project my MacBook's screen to clear out emails and catch up on Slack conversations, all while a YouTube video is virtually projected on my wall.

In the middle of a work session, or sometimes right before diving into a busy workday, I run through a five- or ten-minute meditation session with the Mindfulness app. I can easily meditate without any headgear, but I've found the app's calm narration and the immersive environment it creates (since it completely blocks out the real world) to be incredibly helpful. It's like having your own yoga teacher on staff, ready to help calm your brain whenever you have a free moment.

I've also learned to appreciate the Vision Pro as a way to expand where I can get work done. As someone who's been primarily working from home since 2009, I learned early on that changing locations was an easy way to keep myself engaged. I try not to write in the same place where I've been checking email in the morning, for example. I normally hop between a PC desktop and large monitor (currently it's Alienware's 32-inch 4K OLED) in my office, and a MacBook Air or Pro for writing around the house. Sometimes I'll go to a nearby park or cafe when I need to zone into a writing assignment for several hours.

Photo by Devindra Hardawar/Engadget

With the Vision Pro, I can actually handle some serious multitasking from my deck or kitchen without being tied to a desktop computer. I've found that useful when covering events, since it keeps me from getting stuck inside my basement office (I can have a video streaming in a virtual window, as well as Slack and web browsers open via a projected MacBook). I've also juggled conference calls with the Vision Pro while sick in bed, because it felt more comfortable than staring down at a tiny laptop display.

I still haven’t traveled much with the headset, but I can foresee it being useful the next time I take a weekend trip with my family. Tested's Norman Chan told me he's used the Vision Pro during long flights, where it makes the hours just disappear. I'm still working myself up to that — I'd much rather use a small laptop and headphones on planes, but I can imagine the beauty of watching big-screen movies on the Vision Pro while everyone else is staring at tablets or cramped seat-back displays.

The Vision Pro remains a fantastic way to watch movies or TV shows at home, as well. When I'm too tired to head downstairs after putting my kids to sleep, I sometimes just veg in bed while projecting YouTube videos or anime on the ceiling. That's where I experienced a trippy temporal shift while watching X-Men '97: As soon as its remastered theme song spun up, I was immediately transported back to watching the original show on a 13-inch TV in my childhood bedroom. If I could somehow jump back into the past, Bishop-style, it would be impossible to convince my 10-year-old self that I'd eventually be watching a sequel series in a futuristic headset, projected in a 200-inch window. How far we've come.

Photo by Devindra Hardawar/Engadget

Spatial Personas are a telepresence dream

When Apple first announced the Vision Pro last year, I couldn't help but be creeped out by its Persona avatars. They looked cold and inhuman, the exact sort of thing you'd imagine from soulless digital clones. The visionOS 1.1 update made them a bit less disturbing, but I didn't truly like the avatars until Apple unveiled Spatial Personas last week. Instead of being confined to a window, Spatial Personas hover in your virtual space, allowing you to collaborate with friends as if they were right beside you.

The concept isn't entirely new: I tested Microsoft Mesh a few years ago with a HoloLens 2 headset, which also brought digital avatars right into my home office. But they looked more like basic Miis from the Nintendo Wii than anything realistic. Meta's Horizon Workrooms did something similar in completely virtual spaces, but that's not nearly as impressive as collaborating digitally atop a view of the real world.

Apple's Spatial Personas are far more compelling than Microsoft’s and Meta's efforts because they're seamless to set up — you just have to flip on Spatial mode during a FaceTime chat — and they feel effortlessly organic. During a Spatial Persona call with Norm from Tested, we were conversing as if he was sitting right in front of me in my home theater. We were able to draw and write together in the Freeform app easily — when I stood up and reached out to the drawing board, it was almost as if we were standing beside each other at a real white board.

Photo by Devindra Hardawar/Engadget

SharePlay with Spatial Personas

We were also able to customize our viewing experiences while watching a bit of Star Trek Beyond together using SharePlay in the Vision Pro. Norm chose to watch it in 2D, I watched in 3D, and our progress was synchronized. The experience felt more engrossing than a typical SharePlay experience, since I could just lean over and chat with him instead of typing out a message or saying something over a FaceTime call. I also couldn't help but imagine how easy it would be to record movie commentaries for podcasts using Spatial Personas. (We'd have to use separate microphones and computers, in addition to Vision Pros, but it would make for a more comfortable recording session than following movies on a monitor or TV.)

Our attempts to play games together failed, unfortunately, because we were running slightly different versions of Game Room. We also didn’t have enough time during our session to sync our apps up. I eventually was able to try out Chess and Battleship with other Vision Pro-equipped friends and, once again, it felt like they were actually playing right beside me. (Norm and CNET's Scott Stein also looked like they were having a ball with virtual chess.)

The main stumbling block for Spatial Personas, of course, is that they require a $3,500 headset. Apple is laying the groundwork for truly great telepresence experiences, but it won't matter for most people until they can actually afford a Vision Pro or a cheaper Apple headset down the line.

With Horizon Workrooms, Meta allowed non-VR users to join virtual meetings using Messenger on phones and computers, so they weren't left out. Standard FaceTime users can also join Vision Pro chats alongside Spatial Personas, but they'll be stuck in a window. And unlike Meta's offering, regular users won't be able to see any virtual environments (though they can still collaborate in specific apps like Freeform). Meta's big advantage over Apple is capacity: Horizon Workrooms supports up to 16 people in VR, as well as 34 more calling in from other devices. Spatial Persona chats, on the other hand, are limited to five participants.

Apple

No momentum for Immersive Video

Apple's 180-degree Immersive Video format was one of the most impressive aspects of the Vision Pro when I previewed it last year, and the handful of experiences at launch were pretty compelling. But the Immersive Video well has been dry since launch — the only new experience was a five-minute short showing off the 2023 MLS Playoffs, which was mostly disappointing.

While that short had such great resolution and depth that it felt like I was actually on the pitch, it's also disorienting because it cuts far too often, and with no sense of rhythm. Once you get settled into a scene, perhaps watching someone gear up for a well-placed goal, the camera view changes and you have no idea where you are. It's almost like a five-minute lesson in what not to do with Immersive Video. Hopefully, the MLS has a longer experience in the works.

I'm not expecting a tsunami of Immersive Video content, since the Vision Pro is still an obscenely expensive device meant for developers and professionals, but it would be nice to see more of a push from Apple. The company is teasing another six-minute episode of Prehistoric Planet for later this month, but again that isn't really much. Where are the creators pushing Immersive Video to new heights? While the content is likely hard to work with since it's shot in 3D and 8K, the format could be a perfect way for Apple to extol the virtues of its new chips.

In lieu of more Immersive Videos, I’ve been spending more time re-watching Spatial Videos captured with my iPhone 15 Pro. They still look more realistic than 2D clips, but I’ve grown to dislike the 1080p/30fps limitation. It’s just hard to accept that resolution when I know my phone can also produce crisp 4K and 60fps footage. The $3 app Spatialify helps somewhat by unlocking 1080p/60fps and 4K/30fps spatial video capture, but its footage is also shakier and buggier than what the iPhone’s built-in camera produces. At this point, I’ll consider using Spatialify if my phone is on a tripod or gimbal, but otherwise I’ll stick with the native camera app.

Photo by Devindra Hardawar/Engadget

What’s next for the Apple Vision Pro

We’ll likely have to wait until Apple’s WWDC 24 event in June before we hear about any more major upgrades for Vision Pro or visionOS. That would be appropriate, since last year’s WWDC was the headset’s big debut (and a hellish day for us trying to cover all the news). Now that the hardware is in the wild, Apple has to convince developers that it’s worth building Vision Pro apps alongside their usual iOS, iPadOS and macOS wares. It’s not just some mythical spatial computing platform anymore, after all.

This article originally appeared on Engadget at https://www.engadget.com/apple-vision-pro-two-months-later-a-telepresence-dream-181550906.html?src=rss

The Razer Stream Controller is down to its all-time low price

Streaming can be an art form in its own right, and it's made all the more enjoyable with the proper tools — though they don't always come cheap. That's why the 26 percent discount currently running on the Razer Stream Controller is so exciting. The sale brings the all-in-one keypad down from $270 to $200 — a return to its record-low price.

Razer launched its Stream Controller back in 2022 as a competitor to Elgato's Stream Deck — albeit at a much higher price point. To be fair, the device offers quite a lot for the cost, including 12 haptic switchblade keys, six tactile analog dials and eight programmable buttons. The haptic switchblade keys have customizable icons and, of course, adjustable haptic feedback. The tactile analog dials control audio levels, and the programmable buttons can make regular actions all the more accessible — and speedy.

The Razer Stream Controller works with Mac or PC and has integrated support for platforms like Discord, Twitch and Spotify. It's also good for any artists looking for a new creative device, as it works with Photoshop, Premiere Pro, Illustrator and more.

Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/the-razer-stream-controller-is-down-to-its-all-time-low-price-143755384.html?src=rss

Google announces its first Arm-based CPU for data centers

Google Cloud Next 2024 has begun, and the company is starting the event with some big announcements, including its new Axion processor. It's Google's first Arm-based CPU created specifically for data centers, and it was designed using Arm's Neoverse V2 CPU.

According to Google, Axion performs 30 percent better than the fastest general-purpose Arm-based tools available in the cloud and 50 percent better than the most recent, comparable x86-based VMs. The company also claims it's 60 percent more energy efficient than those same x86-based VMs. Google is already using Axion in services like Bigtable and Google Earth Engine, and it plans to expand to more in the future.

The release of Axion could bring Google into competition with Amazon, which has led the field of Arm-based CPUs for data centers. The company's cloud business, Amazon Web Services (AWS), released the Graviton processor back in 2018, releasing the second and third iterations over the following two years. Fellow chip developer NVIDIA released Grace, its first Arm-based CPU for data centers, in 2021, and companies like Ampere have also been making gains in the area.

Arm-based processors are often a lower-cost and more energy-efficient option. Google's announcement came right after Arm CEO Rene Haas issued a warning about the energy usage of AI models, according to the Wall Street Journal. He called models such as ChatGPT "insatiable" regarding their need for electricity. "The more information they gather, the smarter they are, but the more information they gather to get smarter, the more power it takes," Haas stated. "By the end of the decade, AI data centers could consume as much as 20 percent to 25 percent of US power requirements. Today that's probably four percent or less. That's hardly very sustainable, to be honest with you." He stressed the need for greater efficiency in order to maintain the pace of breakthroughs.

This article originally appeared on Engadget at https://www.engadget.com/google-announces-its-first-arm-based-cpu-for-data-centers-120508058.html?src=rss

Logitech’s tiny G Pro X 60 gaming keyboard has some big competition

Logitech has unveiled the G Pro X 60, its latest mechanical gaming keyboard. Similar to the peripheral maker's G Pro X TKL from last year, this is a wireless model aimed at competitive-minded gamers first and foremost. Unlike that device, it has a smaller 60 percent layout, which means it lacks a dedicated function row, number pad, arrow keys and nav cluster but takes up much less space on a desk. This can be a boon for games because it leaves more room to flick a mouse around while retaining the most common action keys. Naturally, it’s also more portable.

The G Pro X 60 is up for pre-order today for $179 in the US or €229 in Europe. It’s available in three colors (black, white or pink) with either the linear or tactile version of Logitech’s GX Optical switches. The company says it’ll be available at major retailers in “late April.”

I’ve had the keyboard on hand for a few days prior to today’s announcement and have mostly been impressed, though I’d have a hard time calling it a great value.

Let’s start with the good: This thing is well-built. Its aluminum top plate is surrounded by a plastic frame, but it all feels sturdy, with no real flex or give when you press down. Its doubleshot PBT keycaps are pleasingly crisp and should avoid any of the shininess that'd develop with cheaper ABS plastic over time. The legends on the keycaps are neatly printed and transparent, so any RGB backlight effects you set will come through cleanly. All the keys are angled comfortably, and there’s a set of flip-out feet on the back.

Logitech

I’m not crazy about the side-mounted volume roller — once you’ve blessed your keyboard with a full-on rotary knob, it’s hard to give up — but it’s easy to reach with your pinky, so you can adjust volume without having to lift your other fingers during the heat of a game. There’s also a dedicated switch for flipping on Logitech’s “game mode,” which deactivates keys you might otherwise hit by accident; those include the Windows and Fn keys by default, but you can add others through Logitech’s G Hub software. 

The keyboard can connect over a detachable USB-C cable, Bluetooth or a 2.4GHz wireless dongle. Per usual with Logitech gear, the latter’s connection is rock solid; I’ve had none of the hiccups or stuttering I’ve seen with some wireless keyboards from less established brands, particularly when waking the device from sleep. There are buttons to swap between Bluetooth or the 2.4GHz connection built into the board, as well as a handy compartment for stashing the adapter itself. You can also connect the G Pro X 60 and certain Logitech mice simultaneously using one dongle. Logitech rates the battery life at up to 65 hours; that sounds about right based on my testing so far, but the exact amount will fluctuate based on how bright you set the RGB backlight.

The best thing about the G Pro X 60 might have nothing to do with the keyboard at all — it’s the fact that Logitech includes a hard carrying case in the box. More companies should do this! It makes the device much easier to transport.

Alas, this probably isn't a keyboard you’d want to take to the office. The linear GX Optical switches in my test unit feel totally pleasant: They’re fast enough for gaming, and they come pre-lubricated, so each press goes down smoothly. Since they’re optical, and thus not reliant on any physical contact points, they should also prove durable over time.

Logitech

But they aren’t exactly quiet. Logitech has fit a couple layers of silicone rubber inside the board, but there isn’t the wealth of sound-dampening foam you'd find in some other options in this price range. To peel back the curtain a bit: I received the G Pro X 60 just after testing a bunch of mechanical keyboards for an upcoming buying guide, so I’m a little spoiled on this point. Some people may like the obvious clack of each press here, too. I can’t imagine their coworkers or roommates being as thrilled, though, and some modifier and nav keys like Alt, Ctrl and Tab sound hollower than others.

Besides that, my issues with the G Pro X 60 are more about what's missing than anything the keyboard does wrong. For one, its switches aren’t hot-swappable, so you can’t easily remove and replace them without desoldering. Yes, this is a niche thing, but so are $180 gaming keyboards as a whole. Being able to pop in new switches isn’t just a plus for long-term repairability; it’s half the fun for some keyboard enthusiasts in the first place. Swapping keycaps is straightforward, though. 

Taking a step back, a growing number of the G Pro X 60’s peers have some sort of analog functionality, which means they can respond to varying levels of pressure. The top pick in our gaming keyboard buyer’s guide, the Wooting 60HE+, is a good example: Its magnetic Hall effect sensors let you set custom actuation points, so you can make each key extra sensitive while playing a fast FPS, then make them feel heavier and more deliberate while typing. They also enable a “rapid trigger” feature that lets you repeat inputs faster, which can be helpful for, say, strafing back and forth during an in-game shootout. Other models from Razer and SteelSeries provide similar functionality. But the G Pro X 60 lacks any sort of adjustable actuation or rapid trigger mode. That’s probably not a dealbreaker for most people, but the people who would use those features are the kind of hardcore gamers Logitech is targeting with this device.
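
For readers curious what "custom actuation points" and "rapid trigger" actually mean in practice, here is a rough sketch of the two behaviors. It isn't any vendor's firmware, and the travel values and thresholds are invented for illustration: a fixed actuation point only registers a new press after the key fully re-crosses one depth, while rapid trigger re-arms the key as soon as it rises a small amount, so quick repeated taps register sooner.

```python
# Illustrative sketch of fixed-actuation vs. "rapid trigger" behavior on an
# analog key. Not any vendor's firmware; travel values and thresholds are
# made up. Key travel is normalized from 0.0 (fully up) to 1.0 (bottomed out).

FIXED_ACTUATION = 0.6   # fixed mode: fire once travel crosses this depth
RT_PRESS_DELTA = 0.1    # rapid trigger: press after moving down this much
RT_RELEASE_DELTA = 0.1  # rapid trigger: release after rising back this much

def fixed_actuation(samples):
    """Press/release only when travel crosses one fixed depth."""
    pressed, events = False, []
    for depth in samples:
        if not pressed and depth >= FIXED_ACTUATION:
            pressed = True
            events.append("press")
        elif pressed and depth < FIXED_ACTUATION:
            pressed = False
            events.append("release")
    return events

def rapid_trigger(samples):
    """Fire after a small downward motion and reset after a small upward
    motion, regardless of absolute depth, so repeated taps register sooner."""
    pressed, extreme, events = False, 0.0, []
    for depth in samples:
        if not pressed:
            if depth < extreme:
                extreme = depth               # still rising: track lowest point
            elif depth - extreme >= RT_PRESS_DELTA:
                pressed, extreme = True, depth
                events.append("press")
        else:
            if depth > extreme:
                extreme = depth               # still travelling down
            elif extreme - depth >= RT_RELEASE_DELTA:
                pressed, extreme = False, depth
                events.append("release")
    return events

# A key bottoming out, easing up slightly, then being pressed again:
travel = [0.2, 0.7, 1.0, 0.85, 1.0, 0.3, 0.0]
print(fixed_actuation(travel))  # ['press', 'release']
print(rapid_trigger(travel))    # ['press', 'release', 'press', 'release']
```

The second press in that sample never registers with a fixed actuation point, because the key never rose back above the threshold; the rapid-trigger logic catches it, which is exactly the kind of fast repeated input the paragraph above describes.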

Logitech

What is here is a new remapping system called “Keycontrol.” Through G Hub, this allows you to assign several different commands or macros to each key, with three separate control layers. This is a convenient way to get around some of the design’s missing keys: I made it so holding Alt temporarily turns WASD into arrow keys, for example. But it also lets you base different actions on whether you press, hold or release a key, so you could tie complementary actions in a game — casting a couple of buffs in an RPG, perhaps — to one press. Some of the analog keyboards noted above can work like this, too, and you need to have G Hub open for some bindings to stay active. Still, it’s better to have this sort of flexibility than not. Logitech says more of its keyboards will receive Keycontrol support in the future but declined to give more specific details.
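
To picture what a layered, per-event remapping system like this boils down to, here's a small hypothetical sketch. The structure, layer names and actions are invented for illustration (G Hub's real Keycontrol configuration isn't exposed as code like this); it just shows the idea of binding different actions to press, hold and release events and resolving them through layers.

```python
# Hypothetical sketch of layered key remapping with per-event bindings.
# This is NOT G Hub's actual configuration format or API; the layer names,
# keys and actions are invented to illustrate the concept described above.

KEYMAP = {
    "base": {
        "w": {"press": "move_forward"},
        "g": {"press": "cast_buff_1", "hold": "cast_buff_2", "release": "attack"},
    },
    # A layer that's active while a modifier (say, Alt) is held, turning
    # WASD into arrow keys as mentioned in the text above.
    "alt": {
        "w": {"press": "arrow_up"},
        "a": {"press": "arrow_left"},
        "s": {"press": "arrow_down"},
        "d": {"press": "arrow_right"},
    },
}

def resolve(key, event, alt_held=False):
    """Return the bound action for a key event, preferring the active layer
    and falling back to the base layer when it has no binding."""
    layer = KEYMAP["alt"] if alt_held else KEYMAP["base"]
    binding = layer.get(key) or KEYMAP["base"].get(key, {})
    return binding.get(event)

print(resolve("w", "press"))                 # move_forward
print(resolve("w", "press", alt_held=True))  # arrow_up
print(resolve("g", "hold"))                  # cast_buff_2
```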

All of this makes for a keyboard that’s solid in a vacuum but faces some stiff competition. Rival gaming keyboards like the Wooting 60HE+ and SteelSeries Apex Pro Mini Wireless are a little richer with performance-focused features, while a slightly larger option like the ASUS ROG Azoth sounds better and offers more customizable hardware for keyboard geeks. There are plenty of great non-gaming keyboards that cost much less, too. But the G Pro X 60 isn’t a bad choice if you want something compact and wireless, so it might be worthwhile during a sale.

This article originally appeared on Engadget at https://www.engadget.com/logitechs-tiny-g-pro-x-60-gaming-keyboard-has-some-big-competition-070154542.html?src=rss

Apple’s second-generation AirPods Pro are back on sale for $190

Apple’s second-generation AirPods Pro have dipped to under $200 in a deal from Amazon. The AirPods Pro, which normally cost $250, are $60 off right now, bringing the price down to just $190. That’s the same price we saw during Amazon’s Big Spring Sale. The AirPods Pro offer a number of premium features over the standard AirPods, including active noise cancellation for when you want to shut out the world, and an impressive transparency mode for when you want to hear your surroundings.

The second-generation AirPods Pro came out in 2022 and brought Apple’s H2 chip to the earbuds for a notable performance boost. They offer Adaptive Audio, which automatically switches between Active Noise Cancellation and Transparency Mode based on what’s going on around you. With Conversation Awareness, they can lower the volume when you’re speaking and make it so other people's voices are easier to hear.

We gave this version of the AirPods Pro a review score of 88, and it’s one of our picks for the best wireless earbuds on the market. The second-generation AirPods Pro are dust, sweat and water resistant, so they should hold up well for workouts, and they achieve better battery life than the previous generation. They can get about six hours of battery life with features like ANC enabled, and that goes up to as much as 30 hours with the charging case. Apple says popping the AirPods Pro in the case for 5 minutes will give you an hour of additional listening or talking time.

AirPods Pro also offer Personalized Spatial Audio with head tracking for more immersive listening while you’re watching TV or movies. The gesture controls that were introduced with this generation of the earbuds might take some getting used to, though. With AirPods Pro, you can adjust the volume by swiping the touch control.

Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/apples-second-generation-airpods-pro-are-back-on-sale-for-190-142626914.html?src=rss

OpenAI and Google reportedly used transcriptions of YouTube videos to train their AI models

OpenAI and Google trained their AI models on text transcribed from YouTube videos, potentially violating creators’ copyrights, according to The New York Times. The report, which describes the lengths OpenAI, Google and Meta have gone to in order to maximize the amount of data they can feed to their AIs, cites numerous people with knowledge of the companies’ practices. It comes just days after YouTube CEO Neal Mohan said in an interview with Bloomberg Originals that OpenAI’s alleged use of YouTube videos to train its new text-to-video generator, Sora, would go against the platform’s policies.

According to the NYT, OpenAI used its Whisper speech recognition tool to transcribe more than one million hours of YouTube videos, which were then used to train GPT-4; OpenAI president Greg Brockman was reportedly among the people who worked on that effort. The Information previously reported that OpenAI had used YouTube videos and podcasts to train the two AI systems. Per Google’s rules, “unauthorized scraping or downloading of YouTube content” is not allowed, Matt Bryant, a spokesperson for Google, told the NYT, adding that the company was unaware of any such use by OpenAI.

The report, however, claims there were people at Google who knew but did not take action against OpenAI because Google was using YouTube videos to train its own AI models. Google told NYT it only does so with videos from creators who have agreed to take part in an experimental program. Engadget has reached out to Google and OpenAI for comment.

The NYT report also claims Google tweaked its privacy policy in June 2022 to more broadly cover its use of publicly available content, including Google Docs and Google Sheets, to train its AI models and products. Bryant told NYT that this is only done with the permission of users who opt into Google’s experimental features, and that the company “did not start training on additional types of data based on this language change.”

This article originally appeared on Engadget at https://www.engadget.com/openai-and-google-reportedly-used-transcriptions-of-youtube-videos-to-train-their-ai-models-163531073.html?src=rss

Apple officially allows retro game emulators on the App Store

In addition to updating its developer guidelines to allow music streaming apps to link to external websites, Apple has added new language that allows game emulators on the App Store. The updated guidelines, first noticed by 9to5Mac, now say that retro gaming console emulator apps are welcome and can even offer downloadable games. Apple also reportedly confirmed to developers in an email that they can create and offer emulators on its marketplace.

Emulator software wasn't allowed on the App Store prior to this update, though developers have found ways to distribute emulators to iOS users anyway. To install them, users usually need to resort to jailbreaking, or to downloading sideloading tools or unsanctioned alternate app stores first. This rule update potentially eliminates the need to go to such lengths and could bring more Android emulators to iOS.

Apple warns developers, however, that they "are responsible for all such software offered in [their] app, including ensuring that such software complies with these Guidelines and all applicable laws." Allowing emulators on the App Store clearly doesn't mean Apple is allowing pirated games as well. Any app offering titles for download that the developer doesn't own the rights to is a no-no, so fans of specific consoles will just have to hope that those consoles' makers plan to release official emulators for iOS. While these latest changes to Apple's developer guidelines seem to be motivated by the EU's Digital Markets Act regulation, which targets big tech companies' anti-competitive practices, the new rule on emulators applies to all developers worldwide.

This article originally appeared on Engadget at https://www.engadget.com/apple-officially-allows-retro-game-emulators-on-the-app-store-130044937.html?src=rss

Apple Vision Pro owners now have more decent controller options

The Apple Vision Pro is an impressive piece of hardware, and the eye-tracking/hand gesture input combo is fantastic for navigating menus and the like. It’s not so great for gaming. There haven't been many easy ways to connect a third-party controller for playing iPad or cloud games. This is changing, however, as accessory manufacturer 8BitDo just announced Vision Pro compatibility for a number of its controllers.

These accessories are officially supported by Apple, so they should work as soon as you make a Bluetooth connection. No muss and no fuss. All told, eight devices got the Apple seal of approval here. One such gadget is the company’s Ultimate Bluetooth Controller, which we basically called the perfect gamepad for PC.

8BitDo

Other compatible devices include various iterations of the SN30 Pro controller, the Lite 2 and the NES-inspired N30 Pro 2. The integration isn’t just for game controllers, as 8BitDo also announced AVP compatibility for its Retro Mechanical Keyboard. Of course, the Vision Pro works out of the box with most Bluetooth keyboards.

This is pretty big news, however, as media consumption is one of the best parts of the Vision Pro experience. Video games fall squarely in that category. Just about every iPad title works on the device. If playing Cut the Rope on a giant virtual screen doesn’t do it for you, the headset also integrates with Xbox Cloud Gaming and Nvidia GeForce Now for access to AAA titles. 

8BitDo announced official controller support for Apple devices last year, though this was primarily for smartphones, tablets and Mac computers. The integration was thanks to new controller firmware and Apple's recent iOS 16.3, iPadOS 16.3, tvOS 16.3 and macOS 13.2 updates. It looks like all of the accessories that work with iPhones and iPads also work with the Vision Pro. 

This article originally appeared on Engadget at https://www.engadget.com/apple-vision-pro-owners-now-have-more-decent-controller-options-150055872.html?src=rss

Who exactly is YouTube’s multicam Coachella stream for?

YouTube is hyping its exclusive Coachella streaming coverage, which starts next week. The headlining feature is the platform’s multiview experience (already familiar to sports fans) for the two-weekend festival. Our question from this announcement is, who wants to watch several different artists’ sets at the same time — when you can only listen to one?

The multiview experience will let you watch up to four stages simultaneously and pick which one to hear: exactly how multiview works for March Madness, NFL games or any other sporting event. Here’s how YouTube pitches the feature: “Two of your favorite bands playing on different stages at the same time? No problem, multiview will have you and your friends covered to catch both sets at the same time via the YouTube app on TV at no additional cost.”

Maybe I’m of the wrong generation and have too long of an attention span, but who wants to watch an artist’s set without hearing it? That’s what will happen to the three stages you aren’t listening to. Wouldn’t it be better to... watch the one you’re hearing? And then catch up on the others on-demand when you can listen to them as well?

Sports multiview makes sense because there are scores to track and timeouts, halftimes and blowouts to divert your attention to another game. You don’t need to hear an NBA game to keep an eye on the ball. (Depending on the commentators, you may prefer not to listen to it.) It’s primarily a visual experience; the audio is secondary.

But music, even when played live with all the light shows, fog machines and dancing accompanying it, is still an auditory experience first and foremost. If multiple artists you like play at once, you still can't (and wouldn't want to) hear more than one simultaneously. In YouTube's multiview, you pick one stage to hear, and the rest you… watch sing and dance on mute in little boxes alongside the one you're actually listening to. Yay?

It sounds like a solution looking for a problem — YouTube applying its existing tech (which, to be fair, works very well with sports) to a music festival. Never mind that it doesn’t make a lot of sense.

Perplexed rants aside, YouTube will have six livestream feeds to bounce between (but, again, only four at once in multiview). That includes Sonora for the first weekend and Yuma for the second. This year’s headliners include Lana Del Rey, Doja Cat, No Doubt and Tyler, the Creator.

Between sets, YouTube will stream “special editorial content” from the artists onsite. Each day after the night’s final set, YouTube’s Coachella channel will repeat that day’s sets until the livestream returns the next day. That sounds like a better way to catch up on the sets you didn’t see live.

The event takes place in Indio, California, about 130 miles east of LA, from April 12 to 14 and April 19 to 21. You can tune in on YouTube’s Coachella channel.

This article originally appeared on Engadget at https://www.engadget.com/who-exactly-is-youtubes-multicam-coachella-stream-for-183744741.html?src=rss

Microsoft may have finally made quantum computing useful

The dream of quantum computing has always been exciting: What if we could build a machine working at the quantum level that could tackle complex calculations exponentially faster than a computer limited by classical physics? But despite iterative quantum computing hardware announcements from IBM, Google and others, these machines still aren't being used for any practical purposes. That might change with today's announcement from Microsoft and Quantinuum, who say they've developed the most error-free quantum computing system yet.

While classical computers and electronics rely on binary bits as their basic unit of information (they can be either on or off), quantum computers work with qubits, which can exist in a superposition of two states at the same time. The trouble with qubits is that they're prone to error, which is the main reason today's quantum computers (known as Noisy Intermediate Scale Quantum [NISQ] computers) are just used for research and experimentation.
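
To make "superposition" a little more concrete, here is the standard way a single qubit's state is written: it's a weighted combination of the two classical values, and the squared magnitudes of the weights give the probability of measuring each one.

```latex
% A single qubit in superposition: a weighted mix of the classical 0 and 1
% states, with the weights' squared magnitudes summing to 1.
\[
  |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
  \qquad |\alpha|^2 + |\beta|^2 = 1 .
\]
```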

Microsoft's solution was to group physical qubits into virtual qubits, which allows it to apply error diagnostics and correction without destroying them, and run it all over Quantinuum's hardware. The result was an error rate that was 800 times better than relying on physical qubits alone. Microsoft claims it was able to run more than 14,000 experiments without any errors.
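
As rough intuition for why grouping physical qubits into larger logical units helps, here is a minimal sketch built on a classical repetition-code analogy. It is not Microsoft and Quantinuum's actual error-correction scheme, and the per-qubit error rate is a made-up number, but it shows how redundancy plus majority voting can drive the effective error rate down sharply.

```python
# Classical repetition-code analogy (illustrative only, not the actual
# Microsoft/Quantinuum scheme): encode one logical bit as n redundant
# physical bits and decode by majority vote. The logical value is wrong
# only when more than half of the physical bits flip.
from math import comb

def logical_error_rate(p_physical: float, n: int) -> float:
    """Probability that a majority of n independent copies flip,
    given each copy fails with probability p_physical."""
    return sum(
        comb(n, k) * p_physical**k * (1 - p_physical) ** (n - k)
        for k in range(n // 2 + 1, n + 1)
    )

p = 0.005  # hypothetical per-physical-qubit error rate
for n in (1, 3, 5, 7):
    rate = logical_error_rate(p, n)
    print(f"n={n}: logical error rate ~ {rate:.2e} ({p / rate:,.0f}x better)")
```

Real quantum error correction is far more involved, since qubits can't simply be copied, but the payoff Microsoft is describing, where a logical qubit dramatically outperforms any single physical qubit, follows the same basic logic.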

According to Jason Zander, EVP of Microsoft's Strategic Missions and Technologies division, this achievement could finally bring us to "Level 2 Resilient" quantum computing, which would be reliable enough for practical applications.

"The task at hand for the entire quantum ecosystem is to increase the fidelity of qubits and enable fault-tolerant quantum computing so that we can use a quantum machine to unlock solutions to previously intractable problems," Zander, wrote in a blog post today. "In short, we need to transition to reliable logical qubits — created by combining multiple physical qubits together into logical ones to protect against noise and sustain a long (i.e., resilient) computation. ... By having high-quality hardware components and breakthrough error-handling capabilities designed for that machine, we can get better results than any individual component could give us."

Microsoft

Researchers will be able to get a taste of Microsoft's reliable quantum computing via Azure Quantum Elements in the next few months, where it will be available as a private preview. The goal is to push even further to Level 3 quantum supercomputing, which will theoretically be able to tackle incredibly complex issues like climate change and exotic drug research. It's unclear how long it'll take to actually reach that point, but for now, at least we're moving one step closer towards practical quantum computing.

This article originally appeared on Engadget at https://www.engadget.com/microsoft-may-have-finally-made-quantum-computing-useful-164501302.html?src=rss