
Rebel Moon Part 2 review: A slow-mo sci-fi slog

Rebel Moon: Part 2 - The Scargiver is an empty feast. It's a relentless onslaught of explosions, sci-fi tropes and meaningless exposition that amounts to nothing. And yet somehow it's still better than the first film in Zack Snyder's wannabe sci-fi epic franchise for Netflix, Rebel Moon: Part 1 - A Child of Fire. (What do these titles really mean? Who cares.) 

With all of the dull table-setting complete, Snyder is able to let his true talents soar in Rebel Moon: Part 2 by delivering endless battles filled with slow-motion action and heroic poses. It looks cool, I just wish it added up to something. Anything.

Spoilers ahead for Rebel Moon: Part 2.

If you somehow missed the first Rebel Moon film, the basic setup is that it's Star Wars meets Seven Samurai. Sofia Boutella stars as Kora, a former elite soldier of an evil empire who is hiding out in an all-too idyllic farming village, just planting and harvesting her days away. When a group of military baddies kills the chief of the village and starts threatening a young girl, Kora goes on a murdering spree (in defense!), leaving the community open to a retaliatory attack. 

She spends the first movie recruiting potential warriors to defend the village, including a fallen gladiator (Djimon Hounsou) and a bad-ass swordswoman (Doona Bae). (Their names are Titus and Nemesis, respectively, but those don't really matter because the characters are paper thin.)

Full disclosure: I tried writing a review for the first Rebel Moon and just gave up in disgust. It was a shockingly boring epic, so much so that it took me several days to watch without falling asleep. By the end, I was only left with a feeling of dread, knowing that there was still another two hours of Rebel Moon ahead of me.

It's somewhat empty praise, but at least I didn't fall asleep during The Scargiver. Mostly, that's due to the film actually having a sense of momentum and a lot more action. You can turn off your brain and enjoy the pretty pictures, much like you could for Snyder's Sucker Punch, Justice League and Watchmen adaptation. He's more a stylist than a natural storyteller, but occasionally Snyder's visuals, such as a baffling montage of our heroes harvesting wheat, can be almost poetic.

Netflix

It's just a shame that I didn't care much about the film's characters or any aspect of its story. James Gunn's Guardians of the Galaxy trilogy made us fall in love with a band of misfits and screwups, with storylines that directly led to their personal and emotional growth. The crew in Rebel Moon, instead, feel like cardboard cutouts from better movies, and the overall plot feels forced (there's even setup for another film by the end). 

Hounsou tries to sell the pathos of Titus with his eyes, but he can only do so much. And while Bae's warrior woman exudes cool (and has a very compelling flashback), she's mostly wasted when the action really heats up. Then there's Jimmy, a robot voiced by Anthony Hopkins, who is briefly introduced in the first film and pops up for a few minutes here to kick butt. Why? It doesn't matter. Somehow that character is also important enough to serve as the narrator for both Rebel Moon films (but really it seems Snyder just wanted Hopkins' voice adding gravitas).

Perhaps the only real saving grace for Rebel Moon: Part 2, much like the first film, is Ed Skrein as the villainous Atticus Noble. As a sadistic baddie, he's really nothing new, but Skrein's heightened scenery chomping makes the character interesting to watch. Where Darth Vader exudes a calm sense of dread, Skrein's Noble is entertainingly chaotic, like the Joker crossed with Christoph Waltz's Hans Landa from Inglourious Basterds. He just has a lot of fun being bad — that's something!

Given how popular the first film was (according to Snyder and Netflix, anyway), we'll likely see more Rebel Moon down the line. Snyder previously said he'd like to do a six-hour director's cut of both films, and he recently told Radio Times that he'd like to stretch the Rebel Moon series out to four or six films. Somehow, that just feels like a threat. 

This article originally appeared on Engadget at https://www.engadget.com/rebel-moon-part-2-review-a-slow-mo-sci-fi-slog-195505911.html?src=rss

Apple says it was ordered to remove WhatsApp and Threads from China App Store

Apple users in China won't be able to find and download WhatsApp and Threads from the App Store anymore, according to The Wall Street Journal and The New York Times. The company said it pulled the apps from the store to comply with orders it received from the Cyberspace Administration of China, the country's internet regulator, "based on [its] national security concerns." It explained to the publications that it's "obligated to follow the laws in the countries where [it operates], even when [it disagrees]."

The Great Firewall of China blocks a lot of non-domestic apps and technologies in the country, prompting locals to use VPNs if they want to access any of them. Meta's Facebook and Instagram are two of those applications, but WhatsApp and Threads have been available for download until now. The Chinese regulator's order comes shortly before the Senate is set to vote on a bill that could lead to a TikTok ban in the US. The Cyberspace Administration's reasoning — that the apps are a national security concern — even echoes American lawmakers' argument for blocking TikTok in the country. 

In the current version of the bill, ByteDance will have a year to divest TikTok, or else the short-form video-sharing platform will be banned from app stores. The House is expected to pass the bill, which is part of a package that also includes aid to Ukraine and Israel. President Joe Biden previously said that he supports the package and will immediately sign the bills into law. 

This article originally appeared on Engadget at https://www.engadget.com/apple-says-it-was-ordered-it-to-remove-whatsapp-and-threads-from-china-app-store-061441223.html?src=rss

Media coalition asks the feds to investigate Google’s removal of California news links

The News/Media Alliance, formerly the Newspaper Association of America, asked US federal agencies to investigate Google’s removal of links to California news media outlets. Google’s tactic is in response to the proposed California Journalism Preservation Act (CJPA), which would require it and other tech companies to pay for links to California-based publishers’ news content.

The News/Media Alliance, which represents over 2,200 publishers, sent letters to the Department of Justice, Federal Trade Commission and California State Attorney General on Tuesday. It says the removal “appears to be either coercive or retaliatory, driven by Google’s opposition to a pending legislative measure in Sacramento.”

The CJPA would require Google and other tech platforms to pay California media outlets in exchange for links. The proposed bill passed the state Assembly last year.

In a blog post last week announcing the removal, Google VP of Global News Partnerships Jaffer Zaidi warned that the CJPA is “the wrong approach to supporting journalism” (because Google’s current approach totally hasn’t left the industry in smoldering ruins!). Zaidi said the CJPA “would also put small publishers at a disadvantage and limit consumers’ access to a diverse local media ecosystem.” Nothing to see here, folks: just your friendly neighborhood multi-trillion-dollar company looking out for the little guy!

Google described its link removal as a test to see how the bill would impact its platform:

“To prepare for possible CJPA implications, we are beginning a short-term test for a small percentage of California users,” Zaidi wrote. “The testing process involves removing links to California news websites, potentially covered by CJPA, to measure the impact of the legislation on our product experience. Until there’s clarity on California’s regulatory environment, we’re also pausing further investments in the California news ecosystem, including new partnerships through Google News Showcase, our product and licensing program for news organizations, and planned expansions of the Google News Initiative.”

In its letters, The News/Media Alliance lists several laws it believes Google may be breaking with the “short-term” removal. Potential federal violations include the Lanham Act, the Sherman Antitrust Act and the Federal Trade Commission Act. The letter to California’s AG cites the state’s Unruh Civil Rights Act, regulations against false advertising and misrepresentation, the California Consumer Privacy Act and California’s Unfair Competition Law (UCL).

“Importantly, Google released no further details on how many Californians will be affected, how the Californians who will be denied news access were chosen, what publications will be affected, how long the compelled news blackouts will persist, and whether access will be blocked entirely or just to content Google particularly disfavors,” News/Media Alliance President / CEO Danielle Coffey wrote in the letter to the DOJ and FTC. “Because of these unknowns, there are many ways Google’s unilateral decision to turn off access to news websites for Californians could violate laws.”

Google has a mixed track record in dealing with similar legislation. It pulled Google News from Spain for seven years in response to local copyright laws that would have required licensing fees to publishers. However, it signed deals worth around $150 million to pay Australian publishers and retreated from threats to pull news from search results in Canada, instead spending the $74 million required by the Online News Act.

Google made more than $73 billion in profits in 2023. The company currently has a $1.94 trillion market cap.

This article originally appeared on Engadget at https://www.engadget.com/media-coalition-asks-the-feds-to-investigate-googles-removal-of-california-news-links-212052979.html?src=rss

Amazon says a whopping 140 third-party stores in four countries use its Just Walk Out tech

Amazon published a blog post on Wednesday providing an update about its Just Walk Out technology, which it reportedly pulled from its Fresh grocery stores earlier this month. While extolling Just Walk Out’s virtues as a sales pitch to potential retail partners, the article lists a startlingly minuscule number of (non-Amazon) stores using the tech. There are now “more than 140 third-party locations with Just Walk Out technology in the U.S., UK, Australia, and Canada.”

Mind you, that isn’t the number of companies or retail chains licensing the tech; that’s the total number of locations. Nor is that the tally in one state or even one country. In four countries combined — with a total population of about 465 million — Just Walk Out is being used in “more than 140 third-party locations.”

On average, that means there’s one third-party Just Walk Out store for every 3.3 million people in those four countries. (They must be busy!) By contrast, there are over one million retail locations in the US and, as of 2019, Starbucks had 241 locations in New York City alone.

Amazon had reportedly been planning for roughly a year to remove Just Walk Out tech from its Fresh grocery stores because it was too expensive and complicated for larger retail spaces to run and maintain. The company now pitches its tech as ideal for smaller convenience stores with fewer customers and products — like its own Amazon Go stores, which it has been busy shutting down over the last couple of years.

Amazon

The company reportedly gutted the team of developers working on Just Walk Out tech earlier this month. (You get one guess as to how the laid-off workers were instructed to leave the office.) As part of recent layoffs from Amazon’s AWS unit and Physical Stores Team, the company allegedly left only “a skeleton crew” to work on the tech moving forward. A skeleton crew to maintain a skeleton sounds about right.

In fairness, some of those locations are at high-traffic venues. That includes nine merch stores at Seattle’s Lumen Field (home to the Seahawks and Sounders), near Amazon’s headquarters. Delaware North, a large hospitality and entertainment company, has opened “more than a dozen” stores using the tech. Amazon says stores adopting Just Walk Out have reported increased transactions, sales and customer satisfaction.

Despite the reported gutting of Just Walk Out’s development team, Amazon says it “continues to invent the next generation of this technology to improve the checkout experience for large-format stores.” Its next steps include improving latency for “faster and more reliable receipts,” along with new algorithms and sensors to better recognize customer actions.

If the reports about layoffs are accurate, the handful of remaining Just Walk Out developers will have their work cut out for them.

This article originally appeared on Engadget at https://www.engadget.com/amazon-says-a-whopping-140-third-party-stores-in-four-countries-use-its-just-walk-out-tech-191649492.html?src=rss

The Morning After: Meta crams its AI chatbot into your Instagram DMs

Instagram got a surprise visitor. Meta AI, the company’s AI-powered chatbot that can answer questions, write poetry and generate images with a simple text prompt, is up in your DMs. Meta warned that Meta AI was coming and has spent the last few months adding the chatbot to products like Facebook Messenger and WhatsApp. We all knew Instagram would be next.

“Our generative AI-powered experiences are under development in various phases, and we’re testing a range of them publicly in a limited capacity,” a Meta spokesperson told Engadget. For some of us at Engadget, the feature appeared in Instagram’s Direct Messaging inbox.

We could tap it to start a conversation with Meta AI, where it could give definitions of words, suggest headlines and… generate images of dogs on skateboards.

Ah, the future.

— Mat Smith

The biggest stories you might have missed

The Humane AI Pin review

Our favorite Sony wireless earbuds are on sale for a record-low price

Interstellar is coming back to theaters in September for its 10-year anniversary

Playdate revisited: Two years later

You can get these reports delivered daily, direct to your inbox. Subscribe right here!

TCL’s first original movie is this terrible-looking AI-generated love story

Stop reading this and just watch.

TCL

TCL, maker of many TVs, is to release its first special — a short romance movie — on TCLtv+ this summer. Minimizing effort (and artistic license), it’s using generative AI, and the result is as creepy, dreamy and blurry as all the other generative AI video we’ve seen so far. Watch the protagonists’ faces contort and blur. Marvel at the tone and color profiles switching for no apparent reason. You have to watch it: a rare laugh on a Monday morning.

Continue reading.

Apple claims Epic is trying to ‘micromanage’ its business

The company is asking a judge to deny Epic’s recent motion.

Last month, Epic Games filed a motion asking a California judge to hold Apple in contempt for what it claims are violations of a 2021 injunction. Now, Apple is asking the judge to reject Epic’s request, alleging the motion is an attempt to “micromanage Apple’s business operations in a way that would increase Epic’s profitability.” Epic said Apple’s “so-called compliance is a sham” and accused the company of violating the injunction with its recent moves. Apple maintains it has acted in compliance with the injunction, stating in the new filing: “The purpose of the injunction is to make information regarding alternative purchase options more readily available, not to dictate the commercial terms.”

Continue reading.

Google, a $1.97 trillion company, is protesting California’s plan to pay journalists

The company is temporarily removing links to California news for some.

Google, the search giant that brought in more than $73 billion in profit last year, is protesting a California bill that would require it and other platforms to pay media outlets. The company announced it was beginning a “short-term test” to block links to local California news sources for a “small percentage” of users in the state. How will this end up? Let’s take a look elsewhere.

The company pulled its News service out of Spain for seven years in protest of local copyright laws. However, in Australia, the company signed deals worth about $150 million to pay publishers. It also eventually backed off threats to pull news from search results in Canada and forked over about $74 million.

Continue reading.

The best laptops for both gaming and schoolwork

True work-and-play machines.

Engadget

Gaming laptops are now cheaper and more powerful than ever, and many wouldn’t look out of place in a classroom. If you aim to do some serious multimedia work alongside playing video games online, it’s worth looking at a dedicated gaming system. We select the best machines for balancing work with play, with advice on screen sizes, portability and more. Jack will no longer be a dull boy.

Continue reading.

This article originally appeared on Engadget at https://www.engadget.com/the-morning-after-meta-crams-its-ai-chatbot-into-your-instagram-dms-111512763.html?src=rss

The latest version of xAI's Grok can process images

xAI, the OpenAI competitor founded by Elon Musk, has introduced the first version of Grok that can process visual information. Grok-1.5V is the company's first-generation multimodal AI model, which can not only process text, but also "documents, diagrams, charts, screenshots and photographs." In xAI's announcement, it gave a few samples of how its capabilities can be used in the real world. You can, for instance, show it a photo of a flow chart and ask Grok to translate it into Python code, get it to write a story based on a drawing and even have it explain a meme you can't understand. Hey, not everyone can keep up with everything the internet spits out. 

The new version comes just a couple of weeks after the company unveiled Grok-1.5. That model was designed to be better at coding and math than its predecessor, as well as to be able to process longer contexts so that it can check data from more sources to better understand certain inquiries. xAI said its early testers and existing users will soon be able to enjoy Grok-1.5V's capabilities, though it didn't give an exact timeline for its rollout. 

In addition to introducing Grok-1.5V, the company has also released a benchmark dataset it's calling RealWorldQA. You can use any of RealWorldQA's 700 images to evaluate AI models: Each item comes with questions and answers you can easily verify, but which may stump multimodal models like Grok. xAI claimed its technology received the highest score when the company tested it with RealWorldQA against competitors, such as OpenAI's GPT-4V and Google Gemini Pro 1.5.

This article originally appeared on Engadget at https://www.engadget.com/the-latest-version-of-xais-grok-can-process-images-120025782.html?src=rss

Meta is stuffing its AI chatbot into your Instagram DMs

On Friday, people around the web noticed a new addition to their Instagram: Meta AI, the company’s general-purpose, AI-powered chatbot that can answer questions, write poetry and generate images with a simple text prompt. The move isn’t surprising. Meta revealed Meta AI in September 2023 and has spent the last few months adding the chatbot to products like Facebook Messenger and WhatsApp, so adding it to Instagram seems like a no-brainer. 

Just got access to Meta AI on one of my Instagram accounts. pic.twitter.com/VNyRa5wbG4

— Krish Murjani  (@appleforever18) April 11, 2024

“Our generative AI-powered experiences are under development in various phases, and we’re testing a range of them publicly in a limited capacity,” a Meta spokesperson told Engadget, which suggests that not everyone has the feature available yet. TechCrunch, which first noted the news, said that Meta AI was showing up in Instagram’s search bar. But for some of us at Engadget, the feature actually showed up in the search bar in Instagram’s Direct Messaging inbox. 

Tapping it let me start a conversation with Meta AI just as I would DM a friend on Instagram. I was able to ask the chatbot to give me definitions of words, suggest headlines for some stories I’m working on, and generate images of dogs on skateboards. I was also able to ask Meta AI to recommend Reels with cats in them, which it was able to do easily.

But when my colleague Aaron Souppouris asked Meta AI in WhatsApp to recommend Reels, it showed him some Reels in that app too — suggesting that the bot in Instagram isn’t really doing anything specific to Instagram. Instead, Meta is simply shoehorning the same chatbot into every app it owns.

If you tap a hamburger menu within the bot, Meta AI will also show you a long list of possible actions you can ask the bot to take.

Aaron Souppouris

Why you would want a chatbot in Instagram to suggest tips for dealing with credit card debt, have a debate about cardio versus weights, or suggest hacks to travel with points, I do not know. But the point is that if you want to, you can.

This article originally appeared on Engadget at https://www.engadget.com/meta-is-stuffing-its-ai-chatbot-into-your-instagram-dms-231855991.html?src=rss

Instagram's status update feature is coming to user profiles

Instagram’s status update feature, Notes, will soon be more prominent in the app. Up until now, Notes have only been visible from Instagram’s inbox, but the brief updates will soon also be visible directly on users’ profiles.

The change should increase the visibility of the feature and give people a new place to interact with their friends’ updates. (Instagram added reply functionality to Notes back in December.) The app is also experimenting with “prompts” for Notes, which will allow users to share questions for their friends to answer in their updates, much like the collaborative “add yours” templates for Stories.

Notes are similar to Stories in that the updates only stick around for 24 hours, though they are only visible to mutual followers, so they aren’t meant to be as widely shared as a typical grid or Stories post. The latest updates are another sign of how Meta has used the feature, first introduced in 2022, to encourage users to post more often for smaller, more curated groups of friends.

Separately, the app is also adding a new “cutouts” feature, which allows users to make stickers out of objects in their photos, much like the iOS sticker feature. On Instagram, these stickers can be shared in Stories or in a Reel. Cutouts can also be made from other users’ public posts, effectively giving people a new way to remix content from others (Instagram’s help page notes that users can disable this feature if they prefer for their content to not be reused.)

This article originally appeared on Engadget at https://www.engadget.com/instagrams-status-update-feature-is-coming-to-user-profiles-182621692.html?src=rss

The Humane AI Pin is the solution to none of technology's problems

I’ve found myself at a loss for words when trying to explain the Humane AI Pin to my friends. The best description so far is that it’s a combination of a wearable Siri button with a camera and built-in projector that beams onto your palm. But each time I start explaining that, I get so caught up in pointing out its problems that I never really get to fully detail what the AI Pin can do. Or is meant to do, anyway.

Yet, words are crucial to the Humane AI experience. Your primary mode of interacting with the pin is through voice, accompanied by touch and gestures. Without speaking, your options are severely limited. The company describes the device as your “second brain,” but the combination of holding out my hand to see the projected screen, waving it around to navigate the interface and tapping my chest and waiting for an answer all just made me look really stupid. When I remember that I was actually eager to spend $700 of my own money to get a Humane AI Pin, not to mention shell out the required $24 a month for the AI and the company’s 4G service riding on T-Mobile’s network, I feel even sillier.

What is the Humane AI Pin?

In the company’s own words, the Humane AI Pin is the “first wearable device and software platform built to harness the full power of artificial intelligence.” If that doesn’t clear it up, well, I can’t blame you.

There are basically two parts to the device: the Pin and its magnetic attachment. The Pin is the main piece, which houses a touch-sensitive panel on its face, with a projector, camera, mic and speakers lining its top edge. It’s about the same size as an Apple Watch Ultra 2, both measuring about 44mm (1.73 inches) across. The Humane wearable is slightly squatter, though, with its 47.5mm (1.87 inches) height compared to the Watch Ultra’s 49mm (1.92 inches). It’s also half the weight of Apple’s smartwatch, at 34.2 grams (1.2 ounces).

The top of the AI Pin is slightly thicker than the bottom, since it has to contain extra sensors and indicator lights, but it’s still about the same depth as the Watch Ultra 2. Snap on a magnetic attachment, and you add about 8mm (0.31 inches). There are a few accessories available, with the most useful being the included battery booster. You’ll get two battery boosters in the “complete system” when you buy the Humane AI Pin, as well as a charging cradle and case. The booster helps clip the AI Pin to your clothes while adding some extra hours of life to the device (in theory, anyway). It also brings an extra 20 grams (0.7 ounces) with it, but even including that the AI Pin is still 10 grams (0.35 ounces) lighter than the Watch Ultra 2.

That weight (or lack thereof) is important, since anything too heavy would drag down on your clothes, which would not only be uncomfortable but also block the Pin’s projector from functioning properly. If you're wearing it with a thinner fabric, by the way, you’ll have to use the latch accessory instead of the booster, which is a $40 plastic tile that provides no additional power. You can also get the stainless steel clip that Humane sells for $50 to stick it onto heavier materials or belts and backpacks. Whichever accessory you choose, though, you’ll place it on the underside of your garment and stick the Pin on the outside to connect the pieces.

Hayato Huseman for Engadget

How the AI Pin works

But you might not want to place the AI Pin on a bag, as you need to tap on it to ask a question or pull up the projected screen. Every interaction with the device begins with touching it (there is no wake word), so having it out of reach sucks.

Tap and hold on the touchpad, ask a question, then let go and wait a few seconds for the AI to answer. You can hold out your palm to read what it said, bringing your hand closer to and further from your chest to toggle through elements. To jump through individual cards and buttons, you’ll have to tilt your palm up or down, which can get in the way of seeing what’s on display. But more on that in a bit.

There are some built-in gestures offering shortcuts to functions like taking a picture or video or controlling music playback. Double tapping the Pin with two fingers will snap a shot, while double-tapping and holding at the end will trigger a 15-second video. Swiping up or down adjusts the device or Bluetooth headphone volume while the assistant is talking or when music is playing, too.

Cherlynn Low for Engadget

Each person who orders the Humane AI Pin will have to set up an account and go through onboarding on the website before the company will ship out their unit. Part of this process includes signing into your Google or Apple accounts to port over contacts, as well as watching a video that walks you through those gestures I described. Your Pin will arrive already linked to your account with its eSIM and phone number sorted. This likely simplifies things so users won’t have to fiddle with tedious steps like installing a SIM card or signing into their profiles. It felt a bit strange, but it’s a good thing because, as I’ll explain in a bit, trying to enter a password on the AI Pin is a real pain.

Talking to the Humane AI Pin

The easiest way to interact with the AI Pin is by talking to it. It’s supposed to feel natural, like you’re talking to a friend or assistant, and you shouldn’t have to feel forced when asking it for help. Unfortunately, that just wasn’t the case in my testing.

When the AI Pin did understand me and answer correctly, it usually took a few seconds to reply, in which time I could have already gotten the same results on my phone. For a few things, like adding items to my shopping list or converting Canadian dollars to USD, it performed adequately. But “adequate” seems to be the best-case scenario.

Sometimes the answers were too long or irrelevant. When I asked “Should I watch Dream Scenario,” it said “Dream Scenario is a 2023 comedy/fantasy film featuring Nicolas Cage, with positive ratings on IMDb, Rotten Tomatoes and Metacritic. It’s available for streaming on platforms like YouTube, Hulu and Amazon Prime Video. If you enjoy comedy and fantasy genres, it may be worth watching.”

Setting aside the fact that the “answer” to my query came after a lot of preamble I found unnecessary, I also just didn’t find the recommendation satisfying. It wasn’t giving me a straight answer, which is understandable, but ultimately none of what it said felt different from scanning the top results of a Google search. I would have gleaned more info had I looked the film up on my phone, since I’d be able to see the actual Rotten Tomatoes and Metacritic scores.

To be fair, the AI Pin was smart enough to understand follow-ups like “How about The Witch” without needing me to repeat my original question. But it’s 2024; we’re way past assistants that need so much hand-holding.

We’re also past the days of needing to word our requests in specific ways for AI to understand us. Though Humane has said you can speak to the pin “naturally,” there are some instances when that just didn’t work. First, it occasionally misheard me, even in my quiet living room. When I asked “Would I like YouTuber Danny Gonzalez,” it thought I said “would I like YouTube do I need Gonzalez” and responded “It’s unclear if you would like Dulce Gonzalez as the content of their videos and channels is not specified.”

When I repeated myself by carefully saying “I meant Danny Gonzalez,” the AI Pin spouted back facts about the YouTuber’s life and work, but did not answer my original question.

That’s not as bad as the fact that when I tried to get the Pin to describe what was in front of me, it simply would not. Humane has a Vision feature in beta that’s meant to let the AI Pin use its camera to see and analyze things in view, but when I tried to get it to look at my messy kitchen island, nothing happened. I’d ask “What’s in front of me” or “What am I holding out in front of you” or “Describe what’s in front of me,” which is how I’d phrase this request naturally. I tried so many variations of this, including “What am I looking at” and “Is there an octopus in front of me,” to no avail. I even took a photo and asked “can you describe what’s in that picture.”

Every time, I was told “Your AI Pin is not sure what you’re referring to” or “This question is not related to AI Pin” or, in the case where I first took a picture, “Your AI Pin is unable to analyze images or describe them.” I was confused why this wasn’t working even after I double checked that I had opted in and enabled the feature, and finally realized after checking the reviewers' guide that I had to use prompts that started with the word “Look.”

Look, maybe everyone else would have instinctively used that phrasing. But if you’re like me and didn’t, you’ll probably give up and never use this feature again. Even after I learned how to properly phrase my Vision requests, they were still clunky as hell. It was never as easy as “Look for my socks” but required two-part sentences like “Look at my room and tell me if there are boots in it” or “Look at this thing and tell me how to use it.”

When I worded things just right, results were fairly impressive. It confirmed there was a “Lysol can on the top shelf of the shelving unit” and a “purple octopus on top of the brown cabinet.” I held out a cheek highlighter and asked what to do with it. The AI Pin accurately told me “The Carry On 2 cream by BYBI Beauty can be used to add a natural glow to skin,” among other things, although it never explicitly told me to apply it to my face. I asked it where an object I was holding came from, and it just said “The image is of a hand holding a bag of mini eggs. The bag is yellow with a purple label that says ‘mini eggs.’” Again, it didn't answer my actual question.

Humane’s AI, which is powered by a mix of recent versions of OpenAI’s GPT and other sources, including Humane’s own models, just doesn’t feel fully baked. It’s like a robot pretending to be sentient — capable of indicating it sort of knows what I’m asking, but incapable of delivering a direct answer.

My issues with the AI Pin’s language model and features don’t end there. Sometimes it just refuses to do what I ask of it, like restart or shut down. Other times it does something entirely unexpected. When I said “Send a text message to Julian Chokkattu,” who’s a friend and fellow AI Pin reviewer over at Wired, I thought I’d be asked what I wanted to tell him. Instead, the device simply said OK and told me it sent the words “Hey Julian, just checking in. How's your day going?” to Chokkattu. I've never said anything like that to him in our years of friendship, but I guess technically the AI Pin did do what I asked.

Hayato Huseman for Engadget

Using the Humane AI Pin’s projector display

If only voice interactions were the worst thing about the Humane AI Pin, but the list of problems only starts there. I was most intrigued by the company’s “pioneering Laser Ink display” that projects green rays onto your palm, as well as the gestures that enable interaction with “onscreen” elements. But my initial wonder quickly gave way to frustration and a dull ache in my shoulder. It might be tiring to hold up your phone to scroll through Instagram, but at least you can set that down on a table and continue browsing. With the AI Pin, if your arm is not up, you’re not seeing anything.

Then there’s the fact that it’s a pretty small canvas. I would see about seven lines of text each time, with about one to three words on each row depending on the length. This meant I had to hold my hand up even longer so I could wait for notifications to finish scrolling through. I also have a smaller palm than some other reviewers I saw while testing the AI Pin. Julian over at Wired has a larger hand and I was downright jealous when I saw he was able to fit the entire projection onto his palm, whereas the contents of my display would spill over onto my fingers, making things hard to read.

It’s not just those of us afflicted with tiny palms who will find the AI Pin tricky to see. Step outside and you’ll have a hard time reading the faint projection. Even on a cloudy, rainy day in New York City, I could barely make out the words on my hands.

When you can read what’s on the screen, interacting with it might make you want to rip your eyes out. Like I said, you’ll have to move your palm closer to and further from your chest to select the right cards to enter your passcode. It’s a bit like dialing a rotary phone, with cards for individual digits from 0 to 9. Go further away to get to the higher numbers and the backspace button, and come back in for the lower ones.

This gesture is smart in theory, but it’s very sensitive. There’s only a small range of usable space, since there’s only so far your hand can travel, so the distance between each digit is slight. One wrong move and you’ll accidentally select something you didn’t want, then have to go all the way back out to delete it. To top it all off, moving my arm around causes the Pin to flop about, meaning the screen shakes on my palm, too. On average, unlocking my Pin, which involves entering a four-digit passcode, took about five seconds.

On its own, this doesn’t sound so bad, but bear in mind that you’ll have to re-enter your passcode each time you disconnect the Pin from the booster, latch or clip. It’s currently springtime in New York, which means I’m putting on and taking off my jacket over and over again. Every time I go inside or out, I move the Pin to a different layer and have to look like a confused long-sighted tourist reading my palm at various distances. It’s not fun.

Of course, you can turn off the setting that requires password entry each time you remove the Pin, but that’s simply not great for security.

Though Humane says “privacy and transparency are paramount with AI Pin,” by its very nature the device isn’t suitable for performing confidential tasks unless you’re alone. You don’t want to dictate a sensitive message to your accountant or partner in public, nor might you want to speak your Wi-Fi password out loud.

The latter is one of two input methods for setting up an internet connection, by the way. If you choose not to spell your Wi-Fi password out loud, you can go to the Humane website and type in your network name (you spell it out yourself; there’s no list of available networks to pick from) and password to generate a QR code for the Pin to scan. Having to verbally relay alphanumeric characters to the Pin is not ideal, and though the QR code technically works, it just involves too much effort. It’s like giving someone a spork when they asked for a knife and fork: good enough to get by, but not a perfect replacement.

Cherlynn Low for Engadget

The Humane AI Pin’s speaker

Since communicating through speech is the easiest way to use the Pin, you’ll need to be able to speak and hear. If you choose not to raise your hand to read the AI Pin’s responses, you’ll have to listen for them. The good news is, the onboard speaker is usually loud enough for most environments, and I only struggled to hear it on NYC streets with heavy traffic passing by. I never attempted to talk to it on the subway, however, nor did I obnoxiously play music from the device while I was outside.

In my office and gym, though, I did get the AI Pin to play some songs. The music sounded fine — I didn’t get thumping bass or particularly crisp vocals, but I could hear instruments and crooners easily. Compared to my iPhone 15 Pro Max, it’s a bit tinny, as expected, but not drastically worse.

The problem is there are, once again, some caveats. The most important of these is that at the moment, you can only use Tidal’s paid streaming service with the Pin. You’ll get 90 days free with your purchase, and then have to pay $11 a month (on top of the $24 you already give to Humane) to continue streaming tunes from your Pin. Humane hasn’t said whether other music services will eventually be supported, either, so unless you’re already on Tidal, listening to music from the Pin might just not be worth the price. Annoyingly, Tidal also doesn’t have the extensive library that competing providers do, so I couldn’t even play Beyonce’s latest album or most of Taylor Swift’s discography (although remixes of her songs were available).

Though Humane has described its “personic speaker” as being able to create a “bubble of sound,” that “bubble” certainly has a permeable membrane. People around you will definitely hear what you’re playing, so unless you’re trying to start a dance party, it might be too disruptive to use the AI Pin for music without pairing Bluetooth headphones. You’ll also probably get better sound quality from Bose, Beats or AirPods anyway.

The Humane AI Pin camera experience

I’ll admit it — a large part of why I was excited for the AI Pin is its onboard camera. My love for taking photos is well-documented, and with the Pin, snapping a shot is supposed to be as easy as double-tapping its face with two fingers. I was even ready to put up with subpar pictures from its 13-megapixel sensor for the ability to quickly capture a scene without having to first whip out my phone.

Sadly, the Humane AI Pin was simply too slow and feverish to deliver on that promise. I frequently ran into times when, after taking a bunch of photos and holding my palm up to see how each snap turned out, the device would get uncomfortably warm. At least twice in my testing, the Pin just shouted “Your AI Pin is too warm and needs to cool down” before shutting down.

A sample image from the Humane AI Pin.
Cherlynn Low for Engadget

Even when it’s running normally, using the AI Pin’s camera is slow. I’d double tap it and then have to stand still for at least three seconds before it would take the shot. I appreciate that there’s audio and visual feedback through the flashing green lights and the sound of a shutter clicking when the camera is going, so both you and people around know you’re recording. But it’s also a reminder of how long I need to wait — the “shutter” sound will need to go off thrice before the image is saved.

I took photos and videos in various situations under different lighting conditions, from a birthday dinner in a dimly lit restaurant to a beautiful park on a cloudy day. I recorded some workout footage in my building’s gym with large windows, and in general anything taken with adequate light looked good enough to post. The videos might make viewers a little motion sick, since the camera was clipped to my sports bra and moved around with me, but that’s tolerable.

In dark environments, though, forget about it. Even my Nokia E7 from 2012 delivered clearer pictures, most likely because I could hold it steady while framing a shot. The photos of my friends at dinner were so grainy, one person even seemed translucent. To my knowledge, that buddy is not a ghost, either.

A sample image from the Humane AI Pin.
Cherlynn Low for Engadget

To its credit, Humane’s camera has a generous 120-degree field of view, meaning you’ll capture just about anything in front of you. When you’re not sure if you’ve gotten your subject in the picture, you can hold up your palm after taking the shot, and the projector will beam a monochromatic preview so you can verify. It’s not really for you to admire your skilled composition or level of detail, and more just to see that you did indeed manage to get the receipt in view before moving on.

Cosmos OS on the Humane AI Pin

When it comes time to retrieve those pictures off the AI Pin, you’ll just need to navigate to humane.center in any browser and sign in. There, you’ll find your photos and videos under “Captures,” your notes, recently played music and calls, as well as every interaction you’ve had with the assistant. That last one made recalling every weird exchange with the AI Pin for this review very easy.

You’ll have to make sure the AI Pin is connected to Wi-Fi and power, and is at least 50 percent charged, before full-resolution photos and videos will upload to the dashboard. Before then, you can still scroll through previews in a gallery, even though you can’t download or share them.

The web portal is fairly rudimentary, with large square tiles serving as cards for sections like “Captures,” “Notes” and “My Data.” Going through them just shows you things you’ve saved or asked the Pin to remember, like a friend’s favorite color or their birthday. Importantly, there isn’t an area for you to view your text messages, so if you wanted to type out a reply from your laptop instead of dictating to the Pin, sorry, you can’t. The only way to view messages is by putting on the Pin, pulling up the screen and navigating the onboard menus to find them.

Hayato Huseman for Engadget

That brings me to what you see on the AI Pin’s visual interface. If you’ve raised your palm right after asking it something, you’ll see the answer in text form. But if you bring up your hand after unlocking or tapping the device, you’ll see its barebones home screen. This contains three main elements — a clock widget in the middle, the word “Nearby” in a bubble at the top and notifications at the bottom. Tilting your palm scrolls through these, and you can pinch your index finger and thumb together to select things.

Push your hand further back and you’ll bring up a menu with five circles that will lead you to messages, phone, settings, camera and media player. You’ll need to tilt your palm to scroll through these, but because they’re laid out in a ring, it’s not as straightforward as simply aiming up or down. Trying to get the right target here was one of the greatest challenges I encountered while testing the AI Pin. I was rarely able to land on the right option on my first attempt. That, along with the fact that you have to put on the Pin (and unlock it), made it so difficult to see messages that I eventually just gave up looking at texts I received.

The Humane AI Pin overheating, in use and battery life

One reason I sometimes took off the AI Pin is that it would frequently get too warm and need to “cool down.” Once I removed it, I would not feel the urge to put it back on. I did wear it a lot in the first few days I had it, typically from 7:45AM when I headed out to the gym till evening, depending on what I was up to. Usually at about 3PM, after taking a lot of pictures and video, I would be told my AI Pin’s battery was running low, and I’d need to swap out the battery booster. Sometimes this didn’t seem to work, with the Pin dying before it could draw enough power from the accessory. At first it appeared the device simply wouldn’t detect the booster, but I later learned it’s just slow and can take up to five minutes to recognize a newly attached booster.

When I wore the AI Pin to my friend (and fellow reviewer) Michael Fisher’s birthday party just hours after unboxing it, I had it clipped to my tank top just hovering above my heart. Because it was so close to the edge of my shirt, I would accidentally brush past it a few times when reaching for a drink or resting my chin on my palm a la The Thinker. Normally, I wouldn’t have noticed the Pin, but as it was running so hot, I felt burned every time my skin came into contact with its chrome edges. The touchpad also grew warm with use, and the battery booster resting against my chest also got noticeably toasty (though it never actually left a mark).

Hayato Huseman for Engadget

Part of the reason the AI Pin ran so hot is likely that there’s not a lot of room for the heat generated by its octa-core Snapdragon processor to dissipate. I had also been using it near constantly to show my companions the pictures I had taken, and Humane has said its laser projector is “designed for brief interactions (up to six to nine minutes), not prolonged usage” and that it had “intentionally set conservative thermal limits for this first release that may cause it to need to cool down.” The company added that it not only plans to “improve uninterrupted run time in our next software release,” but also that it’s “working to improve overall thermal performance in the next software release.”

There are other things I need Humane to address via software updates ASAP. The fact that its AI sometimes decides not to do what I ask, like telling me “Your AI Pin is already running smoothly, no need to restart” when I asked it to restart, is not only surprising but limiting. There are no hardware buttons to turn the Pin on or off, and the only other way to trigger a restart is to pull up the dreaded screen, painstakingly go to the menu, hopefully land on settings and find the Power option. By which point, if the Pin hasn’t shut down, my arm will have.

A lot of my interactions with the AI Pin also felt like problems I encountered with earlier versions of Siri, Alexa and the Google Assistant. The overly wordy answers, for example, or the pronounced two- or three-second delay before a response, are all reminiscent of the early 2010s. When I asked the AI Pin to “remember that I parked my car right here,” it just saved a note saying “Your car is parked right here,” with no GPS coordinates and no way to navigate back. So I guess I parked my car on a sticky note.

To be clear, that’s not something that Humane ever said the AI Pin can do, but it feels like such an easy thing to offer, especially since the device does have onboard GPS. Google’s made entire lines of bags and Levi’s jackets that serve the very purpose of dropping pins to revisit places later. If your product is meant to be smart and revolutionary, it should at least be able to do what its competitors already can, not to mention offer features they don’t.


One singular thing that the AI Pin actually manages to do competently is act as an interpreter. After you ask it to “translate to [x language],” you hold down two fingers while you talk, then let go, and it will read out what you said in the relevant tongue. I tried talking to myself in English and Mandarin, and was frankly impressed not only with the accuracy of the translation and general vocal expressiveness, but also with how fast responses came through. You don’t even need to specify the language the speaker is using. As long as you’ve set the target language, a person talking in Mandarin will be translated to English and words said in English will be read out in Mandarin.

It’s worth considering the fact that using the AI Pin is a nightmare for anyone who gets self-conscious. I’m pretty thick-skinned, but even I tried to hide the fact that I had a strange gadget with a camera pinned to my person. Luckily, I didn’t get any obvious stares or confrontations, but I heard from my fellow reviewers that they did. And as much as I like the idea of a second brain I can wear and offload little notes and reminders to, nothing that the AI Pin does well is actually executed better than a smartphone.

Wrap-up

Not only is the Humane AI Pin slow, finicky and barely even smart, using it made me look pretty dumb. In a few days of testing, I went from being excited to show it off to my friends to not having any reason to wear it.

Humane’s vision was ambitious, and the laser projector initially felt like a marvel. At first glance, it looked and felt like a refined product. But it just seems like at every turn, the company had to come up with solutions to problems it created. No screen or keyboard to enter your Wi-Fi password? No worries, use your phone or laptop to generate a QR code. Want to play music? Here you go, a 90-day subscription to Tidal, but you can only play music on that service.

The company promises to make software updates that could improve some issues, and the few tweaks my unit received during this review did make some things (like music playback) work better. The problem is that as it stands, the AI Pin doesn’t do enough to justify its $700 and $24-a-month price, and I simply cannot recommend anyone spend this much money for the one or two things it does adequately. 

Maybe in time, the AI Pin will be worth revisiting, but it’s hard to imagine why anyone would need a screenless AI wearable when so many devices exist today that you can use to talk to an assistant. From speakers and phones to smartwatches and cars, the world is full of useful AI access points that allow you to ditch a screen. Humane says it’s committed to a “future where AI seamlessly integrates into every aspect of our lives and enhances our daily experiences.” 

After testing the company’s AI Pin, that future feels pretty far away.

This article originally appeared on Engadget at https://www.engadget.com/the-humane-ai-pin-is-the-solution-to-none-of-technologys-problems-120002469.html?src=rss

Google Gemini chatbots are coming to a customer service interaction near you

More and more companies are choosing to deploy AI-powered chatbots to deal with basic customer service inquiries. At the ongoing Google Cloud Next conference in Las Vegas, the company has revealed the Gemini-powered chatbots its partners are working on, some of which you could end up interacting with. Best Buy, for instance, is using Google's technology to build virtual assistants that can help you troubleshoot product issues and reschedule order deliveries. IHG Hotels & Resorts is working on another that can help you plan a vacation in its mobile app, while Mercedes-Benz is using Gemini to improve its own smart sales assistant. 

Security company ADT is also building an agent that can help you set up your home security system. And if you happen to be a radiologist, you may end up interacting with Bayer's Gemini-powered apps for diagnosis assistance. Meanwhile, other partners are using Gemini to create experiences that aren't quite customer-facing: Cintas, Discover and Verizon are using generative AI capabilities in different ways to help their customer service personnel find information more quickly and easily. 

Google has also launched the Vertex AI Agent Builder, which it says will help developers "easily build and deploy enterprise-ready gen AI experiences" akin to OpenAI's GPTs and Microsoft's Copilot Studio. The Builder provides developers with a set of tools they can use for their projects, including a no-code console that can understand natural language and build AI agents based on Gemini in minutes. Vertex AI has more advanced tools for more complex projects, of course, but their common goal is to simplify the creation and maintenance of personalized AI chatbots and experiences. 

At the same event, Google also announced its new AI-powered video generator for Workspace, as well as its first ARM-based CPU specifically made for data centers. By launching the latter, it's taking on Amazon, which has been using its Graviton processor to power its cloud network over the past few years. 

This article originally appeared on Engadget at https://www.engadget.com/google-gemini-chatbots-are-coming-to-a-customer-service-interaction-near-you-120035393.html?src=rss