Posts by Sam Rutherford

What to expect from Microsoft Build 2024: The Surface event, Windows 11 and AI

If you can't tell by now, just about every tech company is eager to pray at the altar of AI, for better or worse. Google's recent I/O developer conference was dominated by AI features, like its seemingly lifelike Project Astra assistant. Just before that, OpenAI debuted GPT-4o, a free and conversational AI model that's disturbingly flirty. Next up is Microsoft Build 2024, the company's developer conference that's kicking off next week in Seattle.

Normally, Build is a fairly straightforward celebration of Microsoft's devotion to productivity, with a dash of on-stage coding to excite the developer crowd. But this year, the company is gearing up to make some more huge AI moves, following its debut of the ChatGPT-powered Bing Chat in early 2023. Take that together with rumors around new Surface hardware, and Build 2024 could potentially be one of the most important events Microsoft has ever held.

But prior to Build, Microsoft is hosting a showcase for new Surfaces and AI in Windows 11 on May 20. Build kicks off a day later on May 21. For the average Joe, the Surface event is shaping up to be the more impactful of the two, as rumors suggest we will see some of the first systems featuring Qualcomm’s Arm-based Snapdragon X Elite chip alongside new features coming in the next major Windows 11 update.

That's not to say it's all rosy for the Windows maker. Build 2024 is the point where we'll see if AI will make or break Microsoft. Will the billions in funding towards OpenAI and Copilot projects actually pay off with useful tools for consumers? Or is the push for AI, and the fabled idea of "artificial general intelligence," inherently foolhardy as it makes computers more opaque and potentially untrustworthy? (How, exactly, do generative AI models come up with their answers? It's not always clear.)

Here are a few things we expect to see at Build 2024:

New Surface hardware

While Microsoft did push out updates to the Surface family earlier this spring, those machines were more meant for enterprise customers, so they aren’t available for purchase in regular retail stores. A Microsoft spokesperson told us at the time that it "absolutely remain[s] committed to consumer devices," and that the commercial focused announcement was "only the first part of this effort."

Instead, the company's upcoming refresh for its consumer PCs is expected to consist of new 13- and 15-inch Surface Laptop 6 models with thinner bezels, larger trackpads, improved port selection and the aforementioned X Elite chip. There’s a good chance that at the May 20 showcase, we’ll also see an Arm-based version of the Surface Pro 10, which will sport a similar design to the business model that came out in March, but with revamped accessories including a Type Cover with a dedicated Copilot key.

According to The Verge, Microsoft is confident that these new systems could outmatch Apple's M3-powered MacBook Air in raw speed and AI performance.

The company has also reportedly revamped emulation for x86 software in its Arm-based version of Windows 11. That's a good thing, since poor emulation was one of the main reasons we hated the Surface Pro 9 5G, a confounding system powered by Microsoft's SQ3 Arm chip. That mobile processor was based on Qualcomm's Snapdragon 8cx Gen 3, which was unproven in laptops at the time. Using the Surface Pro 9 5G was so frustrating we felt genuinely offended that Microsoft was selling it as a "Pro" device. So you can be sure we're skeptical about any amazing performance gains from another batch of Qualcomm Arm chips.

It'll also be interesting to see if Microsoft's new consumer devices look any different than their enterprise counterparts, which were basically just chip swaps inside of the cases from the Surface Pro 9 and Laptop 5. If Microsoft is actually betting on mobile chips for its consumer Surfaces, there's room for a complete rethinking of its designs, just like how Apple refashioned its entire laptop lineup around its M-series chips.

AI Explorer

Aside from updated hardware, one of the biggest upgrades on these new Surfaces should be vastly improved on-device AI and machine learning performance thanks to the Snapdragon X Elite chip, which can deliver up to 45 TOPS (trillions of operations per second) from its neural processing unit (NPU). This is key because Microsoft has previously said PCs will need at least 40 TOPS in order to run Windows AI features locally. This leads us to some of the additions coming in the next major build of Microsoft’s OS, including something the company is calling its AI Explorer, expanded Studio effects and more.

According to Windows Central, AI Explorer is going to be Microsoft’s catch-all term covering a range of machine learning-based features. This is expected to include a revamped search tool that lets users look up everything from websites to files using natural language input. There may also be a new timeline that will allow people to scroll back through anything they've done recently on their computer and the addition of contextual suggestions that appear based on whatever they're currently looking at. And building off of some of the Copilot features we’ve seen previously, it seems Microsoft is planning to add support for tools like live captions, expanded Studio effects (including real-time filters) and local generative AI tools that can help create photos and more on the spot.

Smarter and more local Copilots

Microsoft wants an AI Copilot in everything. The company first launched GitHub Copilot in 2021 as a way to let programmers use AI to deal with mundane coding tasks. At this point, all of the company's other AI tools have also been rebranded as "Microsoft Copilot" (that includes Bing Chat, and Microsoft 365 Copilot for productivity apps). With Copilot Pro, a $20 monthly offering launched earlier this year, the company provides access to the latest GPT models from OpenAI, along with other premium features.

But there's still one downside to all of Microsoft's Copilot tools: They require an internet connection. Very little work is actually happening locally, on your device. That could change soon, though, as Intel confirmed that Microsoft is already working on ways to make Copilot local. That means it may be able to answer simpler questions, like basic math or queries about files on your system, more quickly without hitting the internet at all. As impressive as Microsoft's AI assistant can be, it still typically takes a few seconds to deal with your questions.

More from Microsoft at Build 2024

After all the new hardware and software are announced, Build is positioned to help developers lay even more groundwork to better support those new AI and expanded Copilot features. Microsoft has already teased things like Copilot on Edge and Copilot Plugins for 365 apps, so we’re expecting to hear more on how those will work. And by taking a look at some of the sessions already scheduled for Build, we can see there’s a massive focus on everything AI-related, with breakouts for Customizing Microsoft Copilot, Copilot in Teams, Copilot Extensions and more.

What else to look out for?

While Microsoft will surely draw a lot of attention, it’s important to mention that it won’t be the only manufacturer coming out with new AI PCs. That’s because alongside revamped Surfaces, we’re expecting to see a whole host of other laptops featuring Qualcomm’s Snapdragon X Elite chip (or possibly the X Plus) from other major vendors like Dell, Lenovo and more.

Admittedly, following the intense focus Google put on AI at I/O 2024, the last thing people may want to hear about is yet more AI. But at this point, like most of its rivals, Microsoft is betting big on machine learning to grow and expand the capabilities of Windows PCs.

This article originally appeared on Engadget at https://www.engadget.com/what-to-expect-from-microsoft-build-2024-the-surface-event-windows-11-and-ai-182010326.html?src=rss

Google Project Astra hands-on: Full of potential, but it’s going to be a while

At I/O 2024, Google’s teaser for Project Astra gave us a glimpse at where AI assistants are going in the future. It’s a multi-modal feature that combines the smarts of Gemini with the kind of image recognition abilities you get in Google Lens, as well as powerful natural language responses. However, while the promo video was slick, after getting to try it out in person, it's clear there’s a long way to go before something like Astra lands on your phone. So here are three takeaways from our first experience with Google’s next-gen AI.

Sam’s take:

Currently, most people interact with digital assistants using their voice, so right away Astra’s multi-modality (i.e., using sight and sound in addition to text/speech to communicate with an AI) is relatively novel. In theory, it allows computer-based entities to work and behave more like a real assistant or agent – which was one of Google’s big buzzwords for the show – instead of something more robotic that simply responds to spoken commands.

Photo by Sam Rutherford/Engadget

In our demo, we had the option of asking Astra to tell a story based on some objects we placed in front of the camera, after which it told us a lovely tale about a dinosaur and its trusty baguette trying to escape an ominous red light. It was fun and the tale was cute, and the AI worked about as well as you would expect. But at the same time, it was far from the seemingly all-knowing assistant we saw in Google's teaser. And aside from maybe entertaining a child with an original bedtime story, it didn’t feel like Astra was doing as much with the info as you might want.

Then my colleague Karissa drew a bucolic scene on a touchscreen, at which point Astra correctly identified the flower and sun she painted. But the most engaging demo was when we circled back for a second go with Astra running on a Pixel 8 Pro. This allowed us to point its cameras at a collection of objects while it tracked and remembered each one’s location. It was even smart enough to recognize my clothing and where I had stashed my sunglasses even though these objects were not originally part of the demo.

In some ways, our experience highlighted the potential highs and lows of AI. Just the ability for a digital assistant to tell you where you might have left your keys or how many apples were in your fruit bowl before you left for the grocery store could help you save some real time. But after talking to some of the researchers behind Astra, there are still a lot of hurdles to overcome.

Photo by Sam Rutherford/Engadget

Unlike a lot of Google’s recent AI features, Astra (which is described by Google as a “research preview”) still needs help from the cloud instead of being able to run on-device. And while it does support some level of object permanence, those “memories” only last for a single session, which currently only spans a few minutes. And even if Astra could remember things for longer, there are things like storage and latency to consider, because for every object Astra recalls, you risk slowing down the AI, resulting in a more stilted experience. So while it’s clear Astra has a lot of potential, my excitement was weighed down with the knowledge that it will be some time before we can get more full-featured functionality.

Karissa’s take:

Of all the generative AI advancements, multimodal AI has been the one I’m most intrigued by. As powerful as the latest models are, I have a hard time getting excited for iterative updates to text-based chatbots. But the idea of AI that can recognize and respond to queries about your surroundings in real-time feels like something out of a sci-fi movie. It also gives a much clearer sense of how the latest wave of AI advancements will find their way into new devices like smart glasses.

Google offered a hint of that with Project Astra, which may one day have a glasses component, but for now is mostly experimental (the glasses shown in the demo video during the I/O keynote were apparently a “research prototype”). In person, though, Project Astra didn’t exactly feel like something out of a sci-fi flick.

Photo by Sam Rutherford/Engadget

It was able to accurately recognize objects that had been placed around the room and respond to nuanced questions about them, like “which of these toys should a 2-year-old play with?” It could recognize what was in my doodle and make up stories about different toys we showed it.

But most of Astra’s capabilities seemed on-par with what Meta has already made available with its smart glasses. Meta’s multimodal AI can also recognize your surroundings and do a bit of creative writing on your behalf. And while Meta also bills the features as experimental, they are at least broadly available.

The Astra feature that may set Google’s approach apart is the fact that it has a built-in “memory.” After scanning a bunch of objects, it could still “remember” where specific items were placed. For now, it seems Astra’s memory is limited to a relatively short window of time, but members of the research team told us that it could theoretically be expanded. That would obviously open up even more possibilities for the tech, making Astra seem more like an actual assistant. I don’t need to know where I left my glasses 30 seconds ago, but if you could remember where I left them last night, that would actually feel like sci-fi come to life.

But, like so much of generative AI, the most exciting possibilities are the ones that haven’t quite happened yet. Astra might get there eventually, but right now it feels like Google still has a lot of work to do before it does.

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/google-project-astra-hands-on-full-of-potential-but-its-going-to-be-a-while-235607743.html?src=rss

Google Pixel 8a review: The best midrange Android phone gets flagship AI features

The recipe for Google’s A-series Pixels is incredibly straightforward: Combine top-notch cameras with a vivid display and then cram all that in a tried and tested design for a reasonable price. But with the addition of a Tensor G3 chip, the Pixel 8a now supports the same powerful AI features as Google’s flagship phones. So when you consider that all this comes for just $499, you’re looking at not just the top midrange Android handset on the market but possibly one of the best values of any phone on sale today.

Design and display

Aside from a new aloe color option – which in my opinion is the best of the bunch – the Pixel 8a is nearly identical to the standard Pixel 8. However, there are a few subtle differences that become more noticeable when the two are viewed side-by-side. The most obvious difference is the slightly larger bezels, which also have an impact on the Pixel 8a’s screen size. Instead of a 6.2-inch display like on its pricier sibling, the Pixel 8a tops out at 6.1 inches. That said, you still get a vibrant OLED panel that produces deep blacks and rich colors, plus a slightly faster 120Hz refresh rate compared to the 90Hz on last year’s Pixel 7a.

The phone’s frame is still made out of aluminum, which feels great, while the metal camera bar in the back is actually a millimeter or two thinner, resulting in an ever so slightly sleeker device. Google also switched out the Pixel 8’s rear glass panel for plastic. But thanks to a new matte finish that’s supposed to mimic the texture of cashmere, it definitely doesn’t feel cheap. And while its IP67 rating for dust and water resistance is one step down from what’s on the mainline Pixel 8, that’s still enough to withstand dunks of up to 1 meter for 30 minutes. Not bad.

Performance

One of the biggest knocks against Google’s Tensor chips is that they don’t offer the same level of raw performance you get from rival Apple or Qualcomm silicon. And while that’s still true of the G3, when we’re talking about it powering a phone that costs $499, I’m much less bothered. In normal use, the Pixel 8a feels swift and snappy, even when gaming. Titles like Marvel Snap and TMNT: Shredder’s Revenge looked smooth. The only time I noticed significant hiccups or lag was when playing more demanding shooters like Call of Duty: Mobile.

While both sport very similar designs, the Pixel 8a (left) has a slightly smaller 6.1-inch screen with larger bezels than the standard Pixel 8 (right).
Photo by Sam Rutherford/Engadget

Of course, the other part of the performance equation is all the on-device AI features that the Tensor G3 unlocks, such as Audio Magic Eraser, Best Take and the Magic Editor, which you can use as much as you want instead of being subject to the 10-save monthly cap that free users face in Google Photos.

Cameras

The Pixel 8a features the same 64MP main and 13MP ultra-wide sensors used in last year’s Pixel 7a. But that’s OK, because Google’s affordable phones punch way above their weight. So instead of comparing it with a similarly priced rival, I decided to really challenge the Pixel 8a by putting it up against the Samsung Galaxy S24 Ultra. And even then, it still largely kept up.

In bright light, I’d argue the Pixel 8a might be the superior shooter, as it captured more accurate colors and excellent details compared to the warmer tones and often oversaturated hues from Samsung. This was especially noticeable when shooting a single yellow rose. The S24 Ultra made the middle of the flower appear orange and super contrasty, which looks great in a vacuum but doesn’t reflect what I saw in real life.

However, at night the S24 Ultra’s massive 200MP main sensor pulled back in front, producing images that were generally sharper and more well-exposed. That said, thanks to Google’s powerful Night Sight mode, the Pixel 8a wasn’t far behind, an impressive feat for a phone that costs $800 less.

Finally, while the Pixel 8a doesn’t have any other hardware tricks besides a solid 13MP selfie cam, Google’s AI is here to take your photos even further. Best Take allows you to capture multiple group shots and then swap in people’s reactions from various options. It’s easy to use and lets you create a composite where everyone is smiling, which feels like a win-win scenario. Then there’s the Magic Editor, a fun and powerful way to eliminate distracting elements or move subjects around as you please. It’s the kind of thing you might not use every day, but now and then it will salvage a shot you might have otherwise deleted. So even if you don’t care about AI or how it works, Google is finding a way to add value with machine learning.

Battery life and charging

Photo by Sam Rutherford/Engadget

While the Pixel 8a’s 4,492 mAh battery is a touch smaller than what you get on the standard model (4,575 mAh), it actually boasts slightly better battery life, possibly due to its more petite screen. On our video rundown test, the 8a lasted a solid 20 hours and 29 minutes, barely beating the regular Pixel 8’s time of 20:16.

Meanwhile, when it comes to recharging, both wired and Qi wireless speeds have stayed the same. This means you get up to 18 watts when using a cable, but a rather lethargic rate of 7.5 watts if you slap it on an induction pad. That might not be a big deal if you only use wireless charging overnight or to conveniently top up the phone while you’re doing something else. But if you need some juice in a jiffy, you better grab a cord.

Wrap-up

Google isn’t breaking new ground with the Pixel 8a. But the simple formula of class-leading cameras, a great display, strong battery life and a slick design will never go out of style – especially when you get all this for just $499. And with the addition of AI features that were previously only available on Google’s flagship phones, the Pixel 8a is a midrange smartphone that really is smarter than all of its rivals. To top everything off, there’s a configuration with 256GB of storage for the first time on any A-series handset (though only on the Obsidian model), plus even better support with a whopping seven years of Android and security updates.

Photo by Sam Rutherford/Engadget

The one wrinkle to this is that the deciding factor comes down to how much its siblings cost. If you go by their default pricing, the $499 Pixel 8a offers incredible savings compared to the standard $799 Pixel 8. However, prior to the 8a’s announcement, we saw deals that brought the Pixel 8 down to as low as $549, at which point you might as well spend an extra $50 to get the full flagship experience.

But for those who don’t feel like waiting for a discount or might not care about details like slower wireless charging speeds, in addition to being the best midrange Android phone, the Pixel 8a is just a damn good deal.

This article originally appeared on Engadget at https://www.engadget.com/google-pixel-8a-review-the-best-midrange-android-phone-gets-flagship-ai-features-140046032.html?src=rss

Alienware m16 R2 review: When less power makes for a better laptop

The Alienware m16 R2 is a rarity among modern laptops. That’s because normally after a major revamp, gadget makers like to keep new models on the market for as long as possible to minimize manufacturing costs. However, after its predecessor launched last year sporting a fresh design, the company reengineered the entire system again for 2024 while also limiting how big of a GPU can fit inside. So what gives? The trick is that by looking at the configurations people actually bought, Alienware was able to rework the m16 into a gaming laptop with a sleeker design, better battery life and a more approachable starting price, which is a great recipe for a well-balanced notebook.

Design

There are so many changes on the m16 R2’s chassis it’s hard to believe it’s from the same line. Not only has Alienware gotten rid of the big bezels and chin from the R1, but the machine is also way more portable now. Weight is down more than 20 percent to 5.75 pounds (from 7.28 pounds) and it’s also significantly more compact with a depth of 9.8 inches (versus 11.4 inches before). For some style points, Alienware added RGB lighting around the perimeter of the touchpad. The result is a major upgrade for anyone who wants to take the laptop on the go. It fundamentally changes the system from something more like a desktop replacement to a portable all-rounder.

Critically, despite being smaller, the m16 R2 still has a great array of connectivity options. On its sides are two USB 3.2 Type-A ports, a microSD card reader, an Ethernet jack and a 3.5mm audio socket. Around back, there are two USB-C slots (one supports Thunderbolt 4 while the other has DisplayPort 1.4), a full-size HDMI 2.1 connector and a proprietary barrel plug for power. Generally, I like this arrangement as moving some ports to the rear of the laptop helps keep clutter down. That said, I wish Alienware had switched the placement of the Ethernet jack and one of the USB-C ports, as I find myself reaching for the latter much more often.

Display

Photo by Sam Rutherford/Engadget

The m16 R2 has a single display option: a 16-inch 240Hz panel with a QHD+ resolution (2,560 x 1,600). It’s totally serviceable, and for competitive gamers, that high refresh rate could be valuable during matches where every potential advantage matters. But you don’t get any support for HDR, so colors don’t pop as much as they would on a system with an OLED screen. Furthermore, brightness is just OK at around 300 nits, which might not be a big deal if you prefer gaming at night or in darker environments. But if you plan on lugging this around to a place with big windows or a lot of sunlight, games and movies may look a bit subdued. That said, it’s not a deal breaker; I just wish this model had some other display options like the previous one.

Performance

While the m16 R2’s sleeker design is a major plus, the trade-off is less space for a beefy GPU. So unlike its predecessor, the biggest card that fits is an NVIDIA RTX 4070. This may come as a downer for performance enthusiasts, but Alienware said it made this change after seeing only a small fraction of buyers opt for RTX 4080 graphics on the old model. Even so, the R2 can still hold its own when playing AAA titles. In Cyberpunk 2077 at 1080p and ultra graphics, it hit 94 fps, barely behind what we saw from the ASUS ROG G16 (95 fps) with a more powerful 4080. And while the performance gap grew slightly when I turned ray tracing on, the m16 still pumped out a very playable framerate of 62 fps (versus 69 fps for the G16).

Battery life

One of the biggest benefits of the m16 R2’s redesign is that it allowed Alienware to install a larger 90Wh battery versus the 84Wh pack in its predecessor. When you combine that with components and fans better tailored to the kind of performance this machine delivers, you get improved longevity. On our rundown test, the m16 R2 lasted 7 hours and 51 minutes, which is longer than both the Razer Blade 14 (6:46) and the ASUS ROG Zephyrus G14 (7:29) and just shy of what we got from a similarly specced XPS 16 (8:31). That said, it’s still not as good as the ASUS G16’s time of 9:17. Regardless, the ability to go longer between charges is never a bad thing. Meanwhile, for those who want to pack super light, one of the m16 R2’s USB-C ports in the back supports power input, though you won’t get the full 240 watts like you do with Alienware’s included brick.

Wrap-up

Photo by Sam Rutherford/Engadget

For 2024, it would have been so easy for Alienware to give the m16 a basic spec refresh and call it a day. But it didn’t. Instead, the company looked at its customers' preferences and gave it a revamp to match. So despite not having the same top-end performance as before, the R2 is still a very capable gaming laptop with a more compact chassis, improved battery life and a lower starting price of $1,500 with an RTX 4050. Sure, I wish its display was brighter and that there was another panel option, but getting 240Hz standard is pretty nice.

Really, the biggest argument against the m16 R2 is that for higher-specced systems like our $1,850 review unit with an RTX 4070, you can spend another $150 for an ASUS ROG G16 with the same GPU, a brighter and more colorful OLED display and an even lighter design that weighs a full pound less. But for people seeking a well-priced gaming machine that can do a bit of everything, there’s a lot of value in the m16 R2.

This article originally appeared on Engadget at https://www.engadget.com/alienware-m16-r2-review-when-less-power-makes-for-a-better-laptop-174027103.html?src=rss

Google Pixel 8a hands-on: Flagship AI and a 120Hz OLED screen for $499

A new Pixel A-series phone typically gets announced at Google I/O. Unfortunately, that means the affordable handset sometimes gets buried amongst all the other news during the company’s annual developer conference. So for 2024, Google moved things up a touch to give the new Pixel 8a extra attention. And after checking it out in person, I can see why. It combines pretty much everything I like about the regular Pixel 8 but with a lower price of $499.

Right away, you’ll see a very familiar design. Compared to the standard Pixel 8, which has a 6.2-inch screen, the 8a features a slightly smaller 6.1-inch OLED display with noticeably larger bezels. But aside from that, the Pixel 8 and 8a are almost the exact same size. Google says the material covering the display should be pretty durable as it's made out of Gorilla Glass, though it hasn’t specified an exact type (e.g. Gorilla Glass 6, Victus or something else).

Some other changes include a higher 120Hz refresh rate (up from 90Hz on the previous model), a more streamlined camera bar and a new matte finish on its plastic back that Google claims mimics the texture of cashmere. Now, I don’t think I’d go that far, but it did feel surprisingly luxurious. The 8a still offers decent water resistance thanks to an IP67 rating, though that is slightly worse than the IP68 certification on a regular Pixel 8. Its battery is a bit smaller too at 4,492 mAh (instead of 4,575 mAh). That said, Google says that, thanks to some power efficiency improvements, the new model should run longer than its predecessor.

As for brand new features, the most important addition is that alongside the base model with 128GB of storage, Google is offering a version with 256GB. That’s a first for any A-series Pixel. And, following in the footsteps of last year’s flagships, the Pixel 8a is also getting 7 years of software and security updates, which is a big jump from the three years of Android patches and five years of security on last year’s 7a. Finally, the Pixel 8a is getting a partially refreshed selection of colors including bay, porcelain, obsidian and a brand new aloe hue, which is similar to the mint variant of the Pixel 8 earlier this year but even brighter and more saturated. I must say, even though I’ve only played around with it for a bit, it's definitely the best-looking of the bunch.

Photo by Sam Rutherford/Engadget

One thing that hasn’t changed, though, is the Pixel 8a’s photography hardware. It uses the same 64-megapixel and 13MP sensors for its main and ultra-wide cameras. However, as the Pixel 7a offered the best image quality of any phone in its price range, it’s hard to get too mad about that. And because this thing is powered by a Tensor G3 chip, it supports pretty much all the AI features Google introduced on the regular Pixel 8 last fall, including Best Take, Audio Magic Eraser, Circle to Search, Live Translate and more. Furthermore, while Google is giving everyone access to its Magic Editor inside Google Photos later this month, free users are limited to 10 saves per month, whereas there’s no cap for people with Pixel 8s and now the 8a.

However, there are a few features available on the flagship Pixels that you don’t get on the 8a. The biggest omission is a lack of pro camera controls, so you can’t manually adjust photo settings like shutter speed, ISO, white balance and more. Google also hasn’t upgraded the 8a’s Qi wireless charging speed, which means you’re limited to just 7.5 watts instead of up to 18 watts. Finally, while the phone does offer a digital zoom, there’s no dedicated telephoto lens like on the Pixel 8 Pro.

But that’s not a bad trade-off to get a device that delivers 90 percent of what you get on Google’s top-tier phones for just $499, which is $200 less than the Pixel 8’s regular starting price. And for anyone who likes the Pixel 8a but might not care as much about AI, the Pixel 7a will still be on sale at a reduced price of $349. Though if you want one of those, you might want to scoop it up soon because there’s no telling how long supplies will last.

The one wrinkle to all this is that at the time of writing, the standard Pixel 8 has been discounted to $549, just $50 more than the Pixel 8a. So unless an extra Ulysses S. Grant is going to make or break your budget, I’d probably go with that. Still, even though the Pixel 8a doesn’t come with a lot of surprises, just like its predecessor, it’s shaping up to once again be the mid-range Android phone to beat.

Pre-orders go live today with official sales starting next week on May 14th.

This article originally appeared on Engadget at https://www.engadget.com/google-pixel-8a-hands-on-flagship-ai-and-a-120hz-oled-screen-for-499-160046236.html?src=rss

Apple's M4 chip arrives with a big focus on AI

Today at its "Let Loose" event, Apple detailed its new M4 chip featuring a major focus on improved AI and machine learning capabilities. 

Built on a new second-gen 3nm process, Apple's M4 chip features four performance and six efficiency cores along with a 10-core GPU. On top of that, Apple says it's maintaining class-leading energy efficiency. In terms of general performance, Apple claims the M4's CPU is 50 percent faster compared to M2, with a GPU that's four times as fast. Memory bandwidth has been improved with speeds of up to 120GB/s.

Apple

The M4 also features an upgraded 16-core neural engine capable of delivering up to 38 trillion operations per second.  

Developing...

Follow all of the news live from Apple's 'Let Loose' event right here.

This article originally appeared on Engadget at https://www.engadget.com/apples-m4-chip-arrives-with-a-big-focus-on-ai-142448428.html?src=rss

Walmart thinks it's a good idea to let kids buy IRL items inside Roblox

Walmart's Discovered experience started out last year as a way for kids to buy virtual items for Roblox inside the game. But today, that partnership is testing out an expanded pilot program that will allow teens to buy real-life goods stocked on digital shelves before they're shipped to their door. 

Available to children 13 and up in the US, the latest addition to Walmart Discovered is an IRL commerce shop featuring items created by partnered user-generated content creators including MD17_RBLX, Junozy, and Sarabxlla. Customers can browse and try on items inside virtual shops, after which the game will open a browser window to Walmart's online store (displayed on an in-game laptop) in order to view and purchase physical items. 

Furthermore, anyone who buys a real-world item from Discovered will receive a free digital twin so they can have a matching virtual representation of what they've purchased. Some examples of the first products getting the dual IRL and virtual treatment are a crochet bag from No Boundaries, a TAL stainless steel tumbler and Onn Bluetooth headphones.

According to Digiday, during this initial pilot phase (which will take place throughout May), Roblox will not be taking a cut from any of the physical sales made as part of Walmart's Discovered experience as it looks to determine people's level of interest. However, the parameters of the partnership may change going forward as Roblox gathers more data about how people embrace buying real goods inside virtual stores. 

Unfortunately, while Roblox's latest test may feel like an unusually exploitative way to squeeze even more money from teenagers (or, more realistically, their parents' money), this is really just another small step in the company's efforts to turn the game into an all-encompassing online marketplace. Last year, Roblox made a big push into digital marketing when it launched new ways to sell and present ads inside the game, before later removing requirements for advertisers to create bespoke virtual experiences for each product. 

So in case you needed yet another reason not to save payment info inside a game's virtual store, now instead of wasting money on virtual items, kids can squander cash on junk that will clutter up their rooms too. 

This article originally appeared on Engadget at https://www.engadget.com/walmart-thinks-its-a-good-idea-to-let-kids-buy-irl-items-inside-roblox-180054985.html?src=rss

It doesn’t matter how many Vision Pro headsets Apple sells

Earlier this week, noted Apple analyst Ming-Chi Kuo posted an updated forecast for Apple’s Vision Pro headset, claiming production was being cut to 400,000 or 450,000 units compared to a previous market consensus north of 700,000. This came after a related report from Bloomberg’s Mark Gurman, who said in his Power On newsletter that demand for Vision Pro demos is “way down” while sales in some locations have significantly slowed.

Naturally, this incited a lot of panic and hand-wringing among Apple enthusiasts who feared that the headset that was supposed to change VR forever might not have the staying power they expected. However, before anyone else starts clutching their pearls, I want to let you in on a secret: It doesn’t actually matter how many headsets Apple sells.

Photo by Devindra Hardawar/Engadget

First, let’s talk production numbers. Is it 400,000 or 800,000, or something in between? Back in January, the same Ming-Chi Kuo estimated that the company sold between 160,000 and 180,000 units during its initial pre-order weekend, which was up from previous production predictions of around 60,000 to 80,000. But if we go back even further to last July, the Financial Times cited two people who said Apple only asked its supplier to make fewer than 400,000 units in 2024 while other sources put that number closer to 150,000. Now obviously numbers are subject to change over time as Apple responds to feedback and interest from developers and the public. Regardless, trying to predict the exact number of devices to make is extremely tricky, especially for an attention-grabbing and innovative product that has been the subject of rumors dating back to 2015 (and even before that, according to some very early patent applications).

Still, let’s take that 400,000 number and see how far it goes. Without factoring in accessories (some of which are very important, especially if the owner wears glasses), the Vision Pro sells for $3,500. Rough napkin math suggests that Apple is looking at around $1.4 billion in sales. That’s a pretty big number, and for a lot of other companies it would represent a banner year. But this is Apple we’re talking about: it raked in $383 billion in 2023 with around $97 billion in net income. And that was considered a down year. So we're talking less than one percent of the company’s total revenue, which is basically a rounding error for Apple’s finances.
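For anyone who wants to check the napkin math, here's a quick sketch using only the figures cited above (the 400,000-unit forecast, the $3,500 sticker price and Apple's reported $383 billion in 2023 revenue):

```python
# Back-of-envelope check of the Vision Pro revenue estimate.
# All figures come from the article; this just redoes the arithmetic.
units = 400_000             # low end of Kuo's revised production forecast
price = 3_500               # Vision Pro base price in USD, accessories excluded
revenue = units * price
print(f"Estimated Vision Pro revenue: ${revenue / 1e9:.1f}B")  # → $1.4B

apple_2023_revenue = 383e9  # Apple's reported fiscal 2023 revenue
share = revenue / apple_2023_revenue
print(f"Share of Apple's 2023 revenue: {share:.2%}")           # → 0.37%
```

Even at the higher 450,000-unit figure, the share stays well under half a percent.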

Photo by Devindra Hardawar/Engadget

That figure looks even less impressive when you consider all the research and development that went into making the Vision Pro. Apple is always cagey when it comes to revealing how much money it invests in various departments. But if we look at another major player in VR, Meta, we can get a better sense of what Apple’s VR budget might look like. According to Business Insider, based on an analysis of regulatory filings, Meta’s Reality Labs has lost nearly $50 billion since the start of 2019. That’s a serious chunk of change and more than enough to cause some consternation among investors, with Meta’s stock dropping sharply after its most recent earnings report.

But all these numbers are just noise. Analysts like to look at this stuff to help predict company growth, though they’re so busy focusing on quarterly numbers that they often miss the bigger picture. Depending on who you ask, Apple has more cash on hand than any other company in the world, with upwards of $165 billion sitting in a bank somewhere. And with recent reports claiming that Apple has canceled its secretive car project, I’d argue that the company may want to double down on its headset endeavors.

Photo by Devindra Hardawar/Engadget

That’s because the Vision Pro might be the first step towards a platform that could reshape the company’s entire trajectory like the original iPhone did back in 2007. From the start, it was clear Apple’s first handset would have a massive impact. But when people look back, they never cite the iPhone’s first year of sales, which according to Statista only amounted to around 1.4 million units. Sure, that’s more than 400,000, but that was also for a significantly less expensive device and a drop in the bucket compared to the HUNDREDS of millions Apple has been selling more recently. Those figures were meaningless.

The Vision Pro is Apple’s Field of Dreams device for virtual reality, spatial computing or whatever you want to call the category that encompasses head-mounted displays. Apple had to build it so developers have actual hardware to test software on. Apple had to build it so there’s a platform for people to download apps from. (If you remember, the original App Store didn’t arrive until July 2008, more than a year after the OG iPhone went on sale, and it alone made an estimated $85 billion in 2022.) Apple had to build it to plant a flag, lest it cede the first-mover advantage completely to Meta or someone else.

Photo by Devindra Hardawar/Engadget

Even though I’d posit that the Vision Pro is a glorified dev kit (it was announced at WWDC after all), there are features that evoke the magical feeling I had the first time I used an iPhone. The Vision Pro has possibly the best optics I’ve seen on any headset, including enterprise-only models that cost way more than $3,500. It also has the best eye-tracking I’ve experienced, and it makes navigating menus and apps incredibly intuitive. It just kind of works. And slowly but surely, it’s getting better, as my colleague Devindra noted in his recent two-month check-in.

Just like Apple’s first phone, though, the Vision Pro isn’t without its issues. It’s heavy and not super comfortable during long sessions. Its wired battery pack isn’t the most elegant solution for power delivery. Its front visor is prone to cracking, typing still feels clunky and there aren’t enough bespoke apps to make it an essential part of your everyday tech kit. But those are fixable issues and there’s clearly something there, a foundation that Apple can iterate on. Even in its infancy, the Vision Pro brings enough to compel hundreds of thousands of people (or developers) to buy a device that doesn’t make much practical sense.

The focus should be on what upgrades or additions Apple can make in the future, not on how many units it does (or doesn’t) make. So don’t let analysts or other noisemakers convince you otherwise.

This article originally appeared on Engadget at https://www.engadget.com/it-doesnt-matter-how-many-vision-pro-headsets-apple-sells-ming-chi-kuo-production-numbers-143112470.html?src=rss

Qualcomm is expanding its next-gen laptop chip line with the Snapdragon X Plus

Last fall, Qualcomm revealed a major upgrade for its laptop chips with the Snapdragon X Elite. And while we’re still waiting for those processors to make their way into retail devices, today Qualcomm is expanding the line with the Snapdragon X Plus, which I had a chance to test out ahead of its arrival on gadgets later this year.

Like the X Elite, the X Plus is built on a 4nm process and uses Qualcomm's Arm-based Oryon CPU architecture. The difference is that the new chip is meant for slightly more affordable mainstream laptops, so it only has 10 CPU cores (vs 12 for the X Elite) and reduced clock speeds (3.4GHz vs 3.8GHz for the X Elite). This positioning is a lot like what Qualcomm’s rivals have been doing for a while, with the X Elite serving as the flagship chip (like Intel’s Core Ultra 9 series) and the X Plus sitting just below that (equivalent to the Core Ultra 7 line).

Qualcomm

However, one thing that hasn’t changed is that just like the X Elite, the X Plus’ Hexagon NPU puts out the same 45 TOPS of machine learning performance. This is particularly notable as Microsoft recently suggested that laptops would require at least 40 TOPS in order to run various elements of its Copilot AI service on-device. Qualcomm is also making some big claims regarding power efficiency, with the X Plus chip said to deliver 37 percent faster CPU performance compared to an Intel Core Ultra 7 155H when both chips are running at the same wattage. And when put up against other Arm-based chips, Qualcomm says the X Plus is 10 percent faster than Apple’s M3 processor in multi-threaded CPU tasks.

Photo by Sam Rutherford

Unfortunately, the X Plus is not expected to show up in retail devices until sometime in the second half of 2024. That said, at a hands-on event, I was able to run a few benchmarks on some early Qualcomm-built reference devices. And to my pleasant surprise, the X Plus performed as expected, posting a multi-core score of 12,905 in Geekbench and a multi-threaded score of 852 in Cinebench 2024. (Note: Because the processor has not been released yet, there’s an error in Cinebench that results in the chip’s GPU incorrectly being listed as belonging to the X Elite instead of the X Plus.)

This is a promising showing for Qualcomm’s second and less expensive chip featuring its Oryon architecture. Though as always, the real test will come when the X Plus starts showing up in proper retail hardware. That’s because even if it boasts impressive benchmark figures, these processors will still need to play nicely with Windows, which has not had nearly as smooth a transition to Arm-based silicon as Apple’s macOS.

Photo by Sam Rutherford/Engadget

But with renewed support for Windows on Snapdragon PCs and Qualcomm recently working with major players like Google to bring “dramatic performance improvements” in Chrome for devices running its laptop chips, things may be smoother this time.

This article originally appeared on Engadget at https://www.engadget.com/qualcomm-is-expanding-its-next-gen-laptop-chip-line-with-the-snapdragon-x-plus-130018288.html?src=rss

X’s AI bot is so dumb it can’t tell the difference between a bad game and vandalism

Last night, Golden State Warriors guard Klay Thompson had a rough outing shooting 0 for 10 in a loss against the Sacramento Kings, ending the team’s chances of making the NBA playoffs. But then, almost as if to add insult to injury, X’s AI bot Grok generated a trending story claiming Thompson was vandalizing homes in the area with bricks.

Now at this point, even casual basketball fans may be able to see what went wrong. But Grok isn’t very smart, because it seems that after seeing user posts about a player simply missing a bunch of shots (aka shooting bricks), the bot took things literally, resulting in a completely fictitious AI-generated report.

After misinterpreting user posts about Klay Thompson's poor shooting during an NBA game, X's AI bot Grok created a fictitious story on the social media platform's trending section. 
Screenshot by Sam Rutherford (via X)

In the event this fabrication — which was the #5 trending story at the time of writing — gets corrected or deleted by Elon Musk, Grok originally wrote “In a bizarre turn of events, NBA star Klay Thompson has been accused of vandalizing multiple houses with bricks in Sacramento. Authorities are investigating the claims after several individuals reported their houses being damaged, with windows shattered by bricks. Klay Thompson has not yet issued a statement regarding the accusations. The incidents have left the community shaken, but no injuries were reported. The motive behind the alleged vandalism remains unclear.” Amusingly, despite pointing out the unusual nature of the story, Grok went ahead and put out some nonsense anyway.

Granted, in fine print beneath the story, X says “Grok is an early feature and can make mistakes. Verify its outputs.” But even that warning seems to have backfired, as basketball fans began memeing on the AI with posts sarcastically verifying its erroneous statement.

After Grok created an erroneous story about Golden State Warriors guard Klay Thompson, users began memeing on the situation. 
Screenshot by Sam Rutherford (via X)

For most people, Grok’s latest gaffe may merely be another example in an ongoing series of early AI tools messing up. But for others like Musk, who believes that AI will be smarter than humans as soon as the end of next year, this should serve as a reminder that AI is still in desperate need of regular fact-checking.

This article originally appeared on Engadget at https://www.engadget.com/xs-ai-bot-is-so-dumb-it-cant-tell-the-difference-between-a-bad-game-and-vandalism-172707401.html?src=rss