It's all AI all the time at Google I/O! Today, Google announced its new AI media creation engines: Veo, which can produce "high-quality" 1080p videos; and Imagen 3, its latest text-to-image framework. Neither sounds particularly revolutionary, but they're a way for Google to keep up the fight against OpenAI's Sora video model and Dall-E 3, a tool that has practically become synonymous with AI-generated images.
Google claims Veo has "an advanced understanding of natural language and visual semantics" to create whatever video you have in mind. The AI-generated videos can last "beyond a minute." Veo is also capable of understanding cinematic and visual techniques, like the concept of a timelapse. But really, that should be table stakes for an AI video generation model, right?
To prove that Veo isn't out to steal artists' jobs, Google has also partnered with Donald Glover and Gilga, his creative studio, to show off the model's capabilities. We haven't yet seen that footage, but hopefully it's more like Atlanta season 3 and not season 2. According to Google, Veo can simulate real-world physics better than its previous models, and it's also improved how it renders high-definition footage.
It remains to be seen if anyone will actually want to watch AI-generated video, outside of the morbid curiosity of seeing a machine attempt to algorithmically recreate the work of human artists. But that's not stopping Google or OpenAI from promoting these tools and hoping they'll be useful (or at least, make a bunch of money). Veo will be available inside of Google's VideoFX tool today for some creators, and the company says it'll also be coming to YouTube Shorts and other products. If Veo does end up becoming a built-in part of YouTube Shorts, that's at least one feature Google can lord over TikTok.
As for Imagen 3, Google is making the usual promises: It's said to be the company's "highest quality" text-to-image model, with an "incredible level of detail" for "photorealistic, lifelike images" and fewer artifacts. The real test, of course, will be to see how it handles prompts compared to Dall-E 3. Imagen 3 handles text better than before, Google says, and it's also smarter about handling details from long prompts.
The sun rises and sets. We're all slowly dying. And AI is getting smarter by the day. That seems to be the big takeaway from Google's latest media creation tools. Of course they're getting better! Google is pouring billions into making the dream of AI a reality, all in a bid to own the next great leap for computing. Will any of this actually make our lives better? Will they ever be able to generate art with genuine soul? Check back at Google I/O every year until AGI actually appears, or our civilization collapses.
Developing...
Catch up on all the news from Google I/O 2024 right here!
This article originally appeared on Engadget at https://www.engadget.com/google-unveils-veo-and-imagen-3-its-latest-ai-media-creation-models-173617373.html?src=rss
They say "Twitter isn't real life," but Black Twitter proved otherwise. For years, that phrase has been a way to ignore the real-world impact of social media conversations, especially when they spark radically new ideas. But that's clearly not true when you look at Black Twitter, an unofficial community made up of the site's Black users, which inspired culturally significant movements with hashtags like #BlackLivesMatter and #OscarsSoWhite. Hulu's new documentary, "Black Twitter: A People's History," adapted from Jason Parham's Wired article, explores the rise and global influence of the community. Over the course of three engaging and often hilarious episodes, the series cements itself as an essential cultural document.
"The way I would define Black Twitter is a space where Black culture specifically was hanging out in a digital way," said Prentice Penny, the series director and former showrunner of HBO's Insecure, in an interview on the Engadget Podcast. "And even though it was a public space — clearly, it's Twitter, anybody can get on it — it still felt like you were having conversations with your friends that are like on the back of the bus. Or like on the stoop, or in the lunchroom. I mean, that's the energy of it."
In particular, Penny says that Twitter felt special because there was no real hierarchy, especially in the early days. That meant that even celebrities weren't immune to being mocked, or acting out on their own social media profiles (like Rihanna's notorious early Twitter presence). Twitter in its heyday felt like a place where money or class didn't really matter.
"This was kind of an equalization of a lot of things, that somebody in Kentucky who nobody knows could have the same strong opinion as someone who you revere, right?" Penny said. "And I think that's what made the space so fresh, because we don't really have spaces that are kind of a level playing ground in this country."
Twitter also felt genuinely different from the other social networks in the late 2000s. At the time, Facebook was mostly focused on connecting you with schoolmates and family members — it wasn't really a place for simply hanging out and joking around. Penny notes that the forced brevity on Twitter also made it unique, since you had to really focus on what you were trying to say in 140 characters.
"Each of the creators [in the series] had a different idea of what Twitter should be," Penny added. "Some thought it should be a town square, some people thought it should be a news information thing... I think like with Black culture, the one thing we do really well is, because we're often given the scraps of things, we have to repurpose something, like taking the worst of the pig and making soul food... I think we are really good at taking things that could kind of be different things and make it be pliable for us."
The documentary recounts the many ways Black Twitter leveraged the platform, both for fun and for kicking off serious social movements. The community helped make live-tweeting TV shows a common occurrence, and it's one reason Scandal became a hit.
This article originally appeared on Engadget at https://www.engadget.com/hulus-black-twitter-documentary-is-a-vital-cultural-chronicle-161557720.html?src=rss
As rumors foretold, Apple has revamped the iPad Pro with an M4 chip, tandem OLED screen and a thinner case. There's also a new Magic Keyboard that should deliver a more MacBook-like typing experience! In this week's episode, Cherlynn and Devindra discuss how Apple is shining a new light on tablets (which also includes the new iPad Air models) and reworking its vision of mobile computing. Does anyone really need the iPad Pro today? And could it be more compelling if iPadOS improves its multitasking capabilities? Also, we discuss the launch of Google's new mid-range phone, the Pixel 8a.
Listen below or subscribe on your podcast app of choice. If you've got suggestions or topics you'd like covered on the show, be sure to email us or drop a note in the comments! And be sure to check out our other podcast, Engadget News!
Topics
New iPad Pro with OLED and M4 processor, iPad Air and Apple Pencil announced at ‘Let Loose’ event – 1:04
Google announces Pixel 8a with 120Hz OLED screen and AI capability – 20:50
What the heck happened with Helldivers 2? – 28:31
Microsoft shuts down Tango Gameworks and Arkane Austin – 34:10
Hades 2 early access is out now – 42:01
Around Engadget: Steve Dent reviews Fujifilm X100 VI – 45:39
The iPad Pro has always struck me as a baffling device. It's significantly more expensive than the (very capable!) iPad and iPad Air. iPadOS still isn't a great environment for multitasking. And Apple hasn't yet justified why, exactly, you'd want a super-powerful tablet in the first place (simplified versions of Final Cut Pro and Logic Pro aren't enough!). If you're trying to get serious work done, you're better off buying a slightly used last-gen MacBook Pro, instead of shelling out $1,000 or more on a souped-up tablet.
Taken individually, most of the tablet's new features seem inessential. It's the first device with Apple's M4 chip, which has vastly better AI performance than its earlier M-series hardware. It has a "tandem" OLED display, which stacks two OLED panels together for better performance. And both the 11-inch and 13-inch iPad Pros are incredibly thin and light (the latter model is the slimmest device Apple has ever made, measuring 5.1mm).
But when you wrap all of those advancements together and pair them up with a redesigned, MacBook-like Magic Keyboard, the M4 iPad Pro is starting to look more and more like the ultra-light computer of my dreams. A super-powerful machine that's easy to take anywhere, with a gorgeous screen for binging TV shows and a capable keyboard for writing on the go. Maybe I'm just charmed by the side profile of the iPad Pro with the Magic Keyboard, which looks like it could have been designed by Syd Mead in the '90s, imagining how laptops could be transformed in a few decades.
I'll admit, the new iPad Pro looks very similar to the 2022 model. But, as the kids say, it just hits differently now. This year’s iPad Pro is thinner than I ever thought possible, and the revamped Magic Keyboard solves most of the problems I've had with earlier versions, thanks to its aluminum top cover, function keys and larger touchpad.
Part of the appeal, for me at least, is that Apple has also taken the idea of a tablet PC a step further than Microsoft's Surface tablets. While those devices can function as genuine PCs and run full Windows apps, Microsoft hasn't improved its keyboard covers or overall design in years. If you want to hold a Surface on your lap, you'll still have a kickstand digging into your legs and a pretty flimsy typing experience. The M4 iPad Pro, on the other hand, now more closely resembles an actual laptop.
Now I realize part of this gadget lust comes from covering Apple's recent launch event. I've been thinking far too much about iPads over the past few days, and it's taken a toll. You could potentially get a laptop-like PC experience from either the entry-level iPad or iPad Air when paired with a keyboard case. But, then again, I've already bought a 10th-gen iPad with Logitech's Slim Folio keyboard and I don't actually use it much for typing. It's fine for jotting down something short like emails, but the unsatisfying keys make it tough to get into a writing flow.
I'd also feel better about jumping on the iPad Pro bandwagon once iPadOS becomes an even better platform for multi-tasking. Stage Manager is a start, but it's a bit clunky and hard to navigate. Sure, Apple is constrained by what's possible on smaller displays, but I could imagine iPads (along with iPhones and Macs) becoming far more functional once the company starts rolling out its rumored local AI models.
What if Siri could accurately note down your shopping list, pull in prices from local stores and share it with your friends? What if it could automatically edit your vacation videos to post on Instagram? Now imagine you could do those things without losing focus from the email on your screen, or your company's Slack channel. Multitasking doesn't necessarily need to involve jumping between several apps. With AI enhancements down the line, we could potentially complete complex tasks with natural language, and our devices could better anticipate what we actually need.
Apple
The iPad Pro M4’s price problem
Price is another obvious problem facing the iPad Pro. It has always been expensive, but Apple is really pushing the boundaries of acceptability with these new models. Both the 11-inch and 13-inch tablets are $200 more than before, starting at $999 and $1,299 respectively. While it's nice to see them come with 256GB of storage by default (up from 128GB), creative professionals will probably want to spend another $200 to get 512GB.
If you want the full 10-core CPU power of the M4 chip, though, you'd have to shell out for at least 1TB of storage, which makes the 11-inch iPad Pro $1,599. Want nano-textured glass for additional glare reduction? That's another $100. Oh, and don't forget the Magic Keyboard! That's $299 or $349 more, depending on the size. If you actually wanted to spec out the iPad Pro like a laptop, it's easy to hit a price near $2,000.
Alternatively, you could just get a $1,299 MacBook Air, or $1,599 14-inch MacBook Pro. Maybe add another $200 to get 16GB of RAM. At least with those machines, you've got larger screens, excellent keyboards, the full desktop power of macOS and more than a single port for connectivity. If you really want an iPad Pro experience, you could always keep an eye out for used or refurbished 2022 models, which come with the very capable M2 chip.
Given just how expensive it is, I likely won't be buying a new iPad Pro anytime soon. But the desire is certainly there, sitting somewhere deep within me, ready to take over my cognitive functions the minute these tablets get cheaper.
This article originally appeared on Engadget at https://www.engadget.com/oh-no-i-think-i-want-an-ipad-pro-now-170041331.html?src=rss
Apple's new iPad Pro models are its most laptop-like tablets yet. They're the first devices powered by the company's M4 chip, which is said to deliver more AI capabilities. And, for the first time outside of the iPhone and Apple Watch, both the new 11-inch and 13-inch iPad Pros will sport OLED screens. That's a step up from the LCD and Mini-LED displays on the previous models, and it should lead to bolder colors, inky black levels and far better contrast.
These aren't your typical OLEDs, either: Apple says the iPad Pros feature "tandem" OLED screens for 1,000 nits of typical brightness and 1,500 nits of peak brightness. That solves the brightness issues facing earlier OLED screens, and it means you likely won't miss the extreme brightness of Mini-LED.
Developing...
Follow all of the news live from Apple's 'Let Loose' event right here.
This article originally appeared on Engadget at https://www.engadget.com/apples-thinner-new-ipad-pros-feature-the-m4-chip-and-tandem-oled-displays-142031520.html?src=rss
More than anything, Star Wars: Episode 1 - The Phantom Menace is a fascinating cultural object. It's been 25 years since I saw the film in theaters, and over a decade since I last rewatched it (in a vain attempt to help my Trekkie wife catch up to the prequels). I've had enough time to process the initial disappointment and embarrassment of introducing my wife to Jar Jar Binks. So when Disney announced it was bringing the prequel trilogy back to theaters, I was practically giddy about revisiting them to see how George Lucas's final films compared to the onslaught of Star Wars media we've experienced over the past decade. Was The Phantom Menace as bad as I'd remembered? Well, yes and no.
Disney/Lucasfilm
Boring but full of imagination
In 1999, I knew Episode 1 would be a bit of a slog as soon as we hit the second line of the opening crawl: "The taxation of trade routes to outlying star systems is in dispute." Really, George? This was what Star Wars fans had been waiting for since 1983's Return of the Jedi? During this rewatch, I was more tickled than annoyed by the many baffling narrative choices: The empty drama of a trade blockade; the confusing decision to establish a romance between a literal child and an older teenager; and throwing in Jar Jar Binks to appease kids amid the hideously dull dialog.
It's as if The Phantom Menace was written and directed by an alien who hadn't actually seen a movie, or engaged in any aspect of pop culture, since the early '80s. At the same time, that near-outsider perspective is part of the film's charm. Seeing a society slowly lose control of an idealistic democracy to a power-hungry dictator is a lot for a PG-rated fantasy film. Yet that also sets up the first two prequels to feel eerily prescient beside the global response to 9/11.
By the time we reached 2005's Revenge of the Sith, the allusions to George W. Bush's Patriot Act and Global War on Terror were hard to miss. "This is how liberty dies, with thunderous applause," Padme says as her fellow Senators hand emergency powers to Supreme Chancellor Palpatine, transforming him into the Emperor and the Galactic Republic into the Galactic Empire.
Disney/Lucasfilm
Beyond political machinations, The Phantom Menace is filled with loads of gorgeous imagery: Naboo's lush palace and aquatic Gungan city; the designs of new ships and weapons; and, of course, every single outfit worn by Queen Amidala. It would have been nice if these visuals cohered into the narrative better, but their presence makes it clear that Lucas was surrounded by world-class talent, like renowned costume designer Trisha Biggar.
The Phantom Menace also leaps to life in its handful of action set-pieces. Sure, maybe the pod-race goes on a bit too long, but the sense of speed, scale and bombastic sound throughout is still absolutely thrilling. (The film's sound team — Gary Rydstrom, Tom Johnson, Shawn Murphy and John Midgley — was nominated for an Oscar, but lost out to The Matrix.)
And yes, the entire Duel of the Fates fight is still an absolute banger. There's no doubt that The Phantom Menace would have been a stronger film with less-clunky dialog and more character development shown through action. At one point in the fight, all of the participants are separated by laser barriers. Qui-Gon Jinn meditates, almost completely at peace. Darth Maul prowls like a caged lion. And Obi-Wan Kenobi is simply eager to get on with the fight, like a hot-shot student who just wants to show off. That sequence tells you more about those characters than the remaining two hours of the film.
Disney/Lucasfilm
A precursor to ubiquitous digital characters
While I didn't come around to loving Jar Jar Binks during this rewatch, his very existence as a fully CG character felt more significant than ever. Voiced by the actor and comedian Ahmed Best, Jar Jar was roundly trashed upon release and his implementation was far from seamless. But it was also the first time we saw a motion-captured performance transformed into a fully realized character. Now that technology is so common in movies we practically take it for granted.
"You can’t have Gollum without Jar Jar," Best said in a recent interview for The New York Times. "You can’t have the Na’vi in ‘Avatar’ without Jar Jar. You can’t have Thanos or the Hulk without Jar Jar. I was the signal for the rest of this art form, and I’m proud of Jar Jar for that, and I’m proud to be a part of that. I’m in there!”
In 2017, Best offered an expanded version of his thoughts in a Twitter thread (via ScreenRant): "Jar Jar helped create the workflow, iteration process and litmus test for all CGI characters to this day. On some days the code was being written in real time as I was moving. To deny Jar Jar's place in film history is to deny the hundreds of VFX technicians, animators, code writers and producers their respect. People like John Knoll, Rob Coleman and scores of others who I worked with for two years after principal photography was ended to bring these movies to you."
Disney/Lucasfilm
A great story stuck in a bad film
I've learned the best way to watch The Phantom Menace is to take in the aspects that I like and replace Lucas's many baffling choices with my own head canon. The story of Anakin Skywalker being born through the sheer power of the Force and becoming the Jedi's Chosen One? That's interesting! Inventing Midi-chlorians to give people a literal Jedi power score? That's bad, to hell with you! (Midi-chlorians are still technically canon, but they've been largely ignored in recent Star Wars media.)
This time around, I couldn't help but imagine how a more natural and energetic storyteller would have tackled The Phantom Menace. Surely they wouldn't front-load trade disputes and taxation. A more skilled writer, like Andor's Tony Gilroy, could thoughtfully weave together the Republic's potential downfall. And I'd bet most people wouldn't waste Ewan McGregor's Obi-Wan by keeping him off-screen for an hour, while everyone else goes on a pod-racing adventure. (It sure would be nice to have him spend more time with Anakin!)
Disney/Lucasfilm
I still haven't seen Topher Grace's fabled 85-minute edit of the Star Wars prequels, but his decision to start in the middle of Phantom Menace's climactic lightsaber battle makes sense. So much of Episode 1 feels entirely superfluous when the real story of Anakin Skywalker is about falling in love, being tempted by the Dark Side and ultimately betraying his master.
This article originally appeared on Engadget at https://www.engadget.com/i-guess-i-learned-how-to-appreciate-the-phantom-menace-173010855.html?src=rss
I hate the Rabbit R1. It's yet another sign that standalone AI gadgets, like the Humane AI Pin, are fundamentally useless devices meant to attract hype and VC funding without benefitting users at all. It's like trying to build a skyscraper on quicksand: Today's AI models are great for parlor tricks, but they're ultimately untrustworthy. How do you create a device around that?
The Rabbit R1's big selling point has been its "large action model," or LAM, which can supposedly understand what you say and get things done. But really, that's just marketing speak. At the moment, the R1 can barely do anything as an AI assistant. And the few tasks it can actually accomplish, like placing DoorDash orders, are faster and easier to tackle on your phone. You know, the device we already own that can tap into AI features and fast cellular networking.
Rabbit R1: design and build
I'll admit, the Rabbit R1 looks adorable, but that's mostly down to the design magic of Teenage Engineering, a company that can make a simple tripod look desirable. The R1 is clearly building on the Playdate, another tiny square gadget from Teenage Engineering. Instead of that gaming handheld's iconic crank, the R1 has a far less satisfying scroll wheel. Its glossy plastic case also feels a lot cheaper and thicker than the Playdate's, almost like what you'd expect from a child's toy.
Alongside the dull 2.9-inch screen, there's a unique 8-megapixel "360 eye" camera, which can rotate either towards you or away from you. It's an interesting way to avoid bundling two separate cameras, so I'll give Rabbit credit for that. But the 360 eye isn't meant for taking photos: Instead, it's all about computer vision. You can ask the R1 to describe what's in front of you, from objects to documents and articles, and wait for an AI-generated summary. While this is something that could be useful for people with visual impairments, those users could do the same with ChatGPT, Microsoft's Copilot or built-in tools on their phones (which also have vastly superior cameras).
Using the Rabbit R1 is an exercise in futility
Beyond its looks, the Rabbit R1 is mostly a failure. Once it’s turned on, you should be able to hit the push-to-talk button on its side and ask the AI assistant whatever you want: the weather, local traffic or a summary of a recent book. In my testing, though, the R1 would often deliver the weather when I asked for traffic, and sometimes it would hear my request and simply do nothing.
The R1 becomes more frustrating the more you use it: Its scroll wheel is the only way to interact with its interface (even though the display is also a touchscreen), and it's simply awkward to use. There's no rhyme or reason for how long you need to scroll to move between menu options. The mere act of selecting things is a pain, since the confirmation button is on the right side of the R1. That button would be far easier to hit somewhere below the scroll wheel — or better yet, just let me use the damn touchscreen!
Photo by Devindra Hardawar/Engadget
Oddly, the Rabbit's touchscreen does recognize taps whenever you need to enter text like a Wi-Fi network password. But even that process is annoying, since it involves turning the R1 on its side and typing on a laughably tiny keyboard. Honestly, I felt like I was being punked every time I had to use it. (Cue the obligatory, "What is this, a keyboard for ants?")
Third-party apps on the Rabbit R1
The more I used the Rabbit R1, the more I felt like it was purposefully designed to drive me insane. It can play music from Spotify (if you have a paid subscription), but what's the point of doing that with its terrible 2-watt speaker? Are you expected to connect Bluetooth headphones? You can ask the R1 to generate art via Midjourney AI (again, with a paid account), but it often failed to show me the pictures that were created. On the rare occasion they did show up, I couldn't actually do anything with the AI pictures from the R1. I'd have to load up Midjourney's Discord server on my phone or computer to share them around.
Photo by Devindra Hardawar/Engadget
When I asked the R1 to find me an Uber to a local theater, it told me that the Uber service may be slow to load on RabbitOS and isn't available everywhere (uh, thanks?). After 30 seconds of idling, it said the Uber service may be under maintenance, or there may be an issue with my credentials. (I logged out and back into Uber on the "Rabbit Hole" website, which you use to manage the R1, but the error persisted.)
“LAM works by operating the Uber web app on the cloud on your behalf,” Rabbit representative Ryan Fenwick told me over e-mail when I asked why I couldn’t get the Uber service to work. “Uber ultimately decides how and whether it serves users, so depending on factors like the location you’re booking from, your ride history, etc., it may vary from time to time. We’re implementing measures that help to improve the success rate and transparency of ride booking through R1, so over time the experience should improve.”
At least the Rabbit R1 was able to get me a sandwich. I asked it to find some lunch nearby and it spent an entire minute communing with DoorDash and its AI cloud — the precise amount of time it would take me to complete a GrubHub order on my phone. The R1 eventually returned with three chaotic choices: Subway, a nearby Henri’s Bakery and a restaurant five miles away I’ve never heard of.
Photo by Devindra Hardawar/Engadget
I opted for Henri’s (they do make killer sandwiches), and the R1 showed me a whopping six menu items. Its tiny screen could only hold a picture of the item, its name and the price — you can't tap into it to get a longer description or customize anything. You can only add items to your cart or remove them. I chose two sandwiches and, to my surprise, the R1 completed the order without ever confirming my payment information or delivery address. It was working entirely off of my DoorDash defaults, and thankfully those were up to date.
As soon as the order was placed, my iPhone started lighting up with all sorts of useful information from DoorDash. I received a confirmation from the restaurant, a detailed look at the bill (the R1 apparently added my default 20% tip) and the name of my delivery driver. It took the R1 several minutes before it confirmed the order, and it only occasionally updated me that it was coming closer.
My sandwiches eventually arrived, but I was more struck by the many ways things could have gone wrong. This isn't 1999; I'm no longer impressed by simply being able to order food online like I did from Kozmo.com (RIP). But even back then, I was able to get a full look at menus and customize things. The fact that I could look over at my phone and see the DoorDash app being far more useful made me instantly lose faith in the R1.
There are other things the R1 can do, like recording and summarizing meetings. But that’s also something several apps can do on my phone and computer. The on-demand translation feature seemed to work fine converting English to Spanish and Japanese, but it’s no better than Google Translate or ChatGPT on my phone.
Photo by Devindra Hardawar/Engadget
What’s the point of the Rabbit R1?
All of this leads me to ask: What's the point of the Rabbit R1, really? It certainly can't replace your phone, since it can't make calls or send texts. While you can add a SIM card for always-on connectivity, that just makes it more expensive. It'll still be useless on the go, anyway. Perhaps, you could argue, it's a companion device to help avoid being distracted by your phone. But it's so slow and hard to use that I find my smartphone's notification-filled hellscape far more calming. There's nothing zen at all about having yet another device that you have to buy, charge and carry.
And if you suffer battery life anxiety, you absolutely should stay away from the Rabbit R1. When I first received it, the R1 would burn through its battery while sitting idle, doing absolutely nothing, for eight hours. The first major RabbitOS update helped considerably, but the R1 still can’t last an entire day on a single charge. For a device that has such a tiny screen and offloads its work to the cloud, that’s simply inexcusable.
Photo by Devindra Hardawar/Engadget
I suppose you could argue that the $199 Rabbit R1 is a good deal compared to the $699 Humane AI Pin (which also requires a $24 monthly subscription), but that’s like saying rabbit droppings don’t smell bad compared to dog poop. Technically true! But in the end it’s all still shit. The Humane Pin’s projection screen is at least an interesting twist on mobile UI, and it’s potentially less cumbersome as a wearable. The Rabbit AI assistant, on the other hand, is basically just a chunkier and dumber phone.
Don’t buy the R1. Even if Rabbit somehow manages to deliver on some of the promises of its LAM — like the ability to train the R1 to handle a variety of tasks — I have no faith that it’ll actually work well. My advice extends to every standalone AI gadget: Just stay away. Your phone is enough.
This article originally appeared on Engadget at https://www.engadget.com/rabbit-r1-review-a-199-ai-toy-that-fails-at-almost-everything-161043050.html?src=rss
The Rabbit R1 is finally here, and it's yet another useless AI gadget. Sure, at $199 with no monthly fee, it's a lot cheaper than the $699 Humane AI Pin. But the R1 is slow, hard to use, and doesn't actually do much. The much-promised "Large Action Model" mostly powers things you can easily do on your phone. In this episode, Devindra and Engadget's Sam Rutherford chat with CNET's Lisa Eadicicco about the Rabbit R1 and whether AI devices are necessary at all. Just like cameras, the best AI device is the one you always have with you: your smartphone.
Listen below or subscribe on your podcast app of choice. If you've got suggestions or topics you'd like covered on the show, be sure to email us or drop a note in the comments! And be sure to check out our other podcast, Engadget News!
Hosts: Devindra Hardawar and Sam Rutherford
Guest: Lisa Eadicicco from CNET
Producer: Ben Ellman
Music: Dale North and Terrence O'Brien
This article originally appeared on Engadget at https://www.engadget.com/engadget-podcast-why-tiktok-will-never-be-the-same-again-113009291.html?src=rss