
The Humane AI Pin is the solution to none of technology's problems

I’ve found myself at a loss for words when trying to explain the Humane AI Pin to my friends. The best description so far is that it’s a combination of a wearable Siri button with a camera and built-in projector that beams onto your palm. But each time I start explaining that, I get so caught up in pointing out its problems that I never really get to fully detail what the AI Pin can do. Or is meant to do, anyway.

Yet, words are crucial to the Humane AI experience. Your primary mode of interacting with the pin is through voice, accompanied by touch and gestures. Without speaking, your options are severely limited. The company describes the device as your “second brain,” but the combination of holding out my hand to see the projected screen, waving it around to navigate the interface and tapping my chest and waiting for an answer all just made me look really stupid. When I remember that I was actually eager to spend $700 of my own money to get a Humane AI Pin, not to mention shell out the required $24 a month for the AI and the company’s 4G service riding on T-Mobile’s network, I feel even sillier.

What is the Humane AI Pin?

In the company’s own words, the Humane AI Pin is the “first wearable device and software platform built to harness the full power of artificial intelligence.” If that doesn’t clear it up, well, I can’t blame you.

There are basically two parts to the device: the Pin and its magnetic attachment. The Pin is the main piece, which houses a touch-sensitive panel on its face, with a projector, camera, mic and speakers lining its top edge. It’s about the same size as an Apple Watch Ultra 2, both measuring about 44mm (1.73 inches) across. The Humane wearable is slightly squatter, though, with its 47.5mm (1.87 inches) height compared to the Watch Ultra’s 49mm (1.92 inches). It’s also half the weight of Apple’s smartwatch, at 34.2 grams (1.2 ounces).

The top of the AI Pin is slightly thicker than the bottom, since it has to contain extra sensors and indicator lights, but it’s still about the same depth as the Watch Ultra 2. Snap on a magnetic attachment, and you add about 8mm (0.31 inches). There are a few accessories available, with the most useful being the included battery booster. You’ll get two battery boosters in the “complete system” when you buy the Humane AI Pin, as well as a charging cradle and case. The booster helps clip the AI Pin to your clothes while adding some extra hours of life to the device (in theory, anyway). It also brings an extra 20 grams (0.7 ounces) with it, but even including that the AI Pin is still 10 grams (0.35 ounces) lighter than the Watch Ultra 2.

That weight (or lack thereof) is important, since anything too heavy would drag down on your clothes, which would not only be uncomfortable but also block the Pin’s projector from functioning properly. If you're wearing it with a thinner fabric, by the way, you’ll have to use the latch accessory instead of the booster, which is a $40 plastic tile that provides no additional power. You can also get the stainless steel clip that Humane sells for $50 to stick it onto heavier materials or belts and backpacks. Whichever accessory you choose, though, you’ll place it on the underside of your garment and stick the Pin on the outside to connect the pieces.


How the AI Pin works

But you might not want to place the AI Pin on a bag, as you need to tap on it to ask a question or pull up the projected screen. Every interaction with the device begins with touching it (there is no wake word), so having it out of reach sucks.

Tap and hold on the touchpad, ask a question, then let go and wait a few seconds for the AI to answer. You can hold out your palm to read what it said, bringing your hand closer to and further from your chest to toggle through elements. To jump through individual cards and buttons, you’ll have to tilt your palm up or down, which can get in the way of seeing what’s on display. But more on that in a bit.

There are some built-in gestures offering shortcuts to functions like taking a picture or video or controlling music playback. Double-tapping the Pin with two fingers will snap a shot, while double-tapping and holding at the end will trigger a 15-second video. Swiping up or down adjusts the device or Bluetooth headphone volume while the assistant is talking or when music is playing, too.


Each person who orders the Humane AI Pin will have to set up an account and go through onboarding on the website before the company will ship out their unit. Part of this process includes signing into your Google or Apple accounts to port over contacts, as well as watching a video that walks you through those gestures I described. Your Pin will arrive already linked to your account with its eSIM and phone number sorted. This likely simplifies things so users won’t have to fiddle with tedious steps like installing a SIM card or signing into their profiles. It felt a bit strange, but it’s a good thing because, as I’ll explain in a bit, trying to enter a password on the AI Pin is a real pain.

Talking to the Humane AI Pin

The easiest way to interact with the AI Pin is by talking to it. It’s supposed to feel natural, like you’re talking to a friend or an assistant, and asking it for help shouldn’t feel forced. Unfortunately, that just wasn’t the case in my testing.

When the AI Pin did understand me and answer correctly, it usually took a few seconds to reply, in which time I could have already gotten the same results on my phone. For a few things, like adding items to my shopping list or converting Canadian dollars to USD, it performed adequately. But “adequate” seems to be the best case scenario.

Sometimes the answers were too long or irrelevant. When I asked “Should I watch Dream Scenario,” it said “Dream Scenario is a 2023 comedy/fantasy film featuring Nicolas Cage, with positive ratings on IMDb, Rotten Tomatoes and Metacritic. It’s available for streaming on platforms like YouTube, Hulu and Amazon Prime Video. If you enjoy comedy and fantasy genres, it may be worth watching.”

Setting aside the fact that the “answer” to my query came after a lot of preamble I found unnecessary, I also just didn’t find the recommendation satisfying. It wasn’t giving me a straight answer, which is understandable, but ultimately none of what it said felt different from scanning the top results of a Google search. I would have gleaned more info had I looked the film up on my phone, since I’d be able to see the actual Rotten Tomatoes and Metacritic scores.

To be fair, the AI Pin was smart enough to understand follow-ups like “How about The Witch” without needing me to repeat my original question. But it’s 2024; we’re way past assistants that need so much hand-holding.

We’re also past the days of needing to word our requests in specific ways for AI to understand us. Though Humane has said you can speak to the pin “naturally,” there are some instances when that just didn’t work. First, it occasionally misheard me, even in my quiet living room. When I asked “Would I like YouTuber Danny Gonzalez,” it thought I said “would I like YouTube do I need Gonzalez” and responded “It’s unclear if you would like Dulce Gonzalez as the content of their videos and channels is not specified.”

When I repeated myself by carefully saying “I meant Danny Gonzalez,” the AI Pin spouted back facts about the YouTuber’s life and work, but did not answer my original question.

That’s not as bad as the fact that when I tried to get the Pin to describe what was in front of me, it simply would not. Humane has a Vision feature in beta that’s meant to let the AI Pin use its camera to see and analyze things in view, but when I tried to get it to look at my messy kitchen island, nothing happened. I’d ask “What’s in front of me” or “What am I holding out in front of you” or “Describe what’s in front of me,” which is how I’d phrase this request naturally. I tried so many variations of this, including “What am I looking at” and “Is there an octopus in front of me,” to no avail. I even took a photo and asked “can you describe what’s in that picture.”

Every time, I was told “Your AI Pin is not sure what you’re referring to” or “This question is not related to AI Pin” or, in the case where I first took a picture, “Your AI Pin is unable to analyze images or describe them.” I was confused why this wasn’t working even after I double checked that I had opted in and enabled the feature, and finally realized after checking the reviewers' guide that I had to use prompts that started with the word “Look.”

Look, maybe everyone else would have instinctively used that phrasing. But if you’re like me and didn’t, you’ll probably give up and never use this feature again. Even after I learned how to properly phrase my Vision requests, they were still clunky as hell. It was never as easy as “Look for my socks” but required two-part sentences like “Look at my room and tell me if there are boots in it” or “Look at this thing and tell me how to use it.”

When I worded things just right, results were fairly impressive. It confirmed there was a “Lysol can on the top shelf of the shelving unit” and a “purple octopus on top of the brown cabinet.” I held out a cheek highlighter and asked what to do with it. The AI Pin accurately told me “The Carry On 2 cream by BYBI Beauty can be used to add a natural glow to skin,” among other things, although it never explicitly told me to apply it to my face. I asked it where an object I was holding came from, and it just said “The image is of a hand holding a bag of mini eggs. The bag is yellow with a purple label that says ‘mini eggs.’” Again, it didn't answer my actual question.

Humane’s AI, which is powered by a mix of OpenAI’s recent versions of GPT and other sources including its own models, just doesn’t feel fully baked. It’s like a robot pretending to be sentient — capable of indicating it sort of knows what I’m asking, but incapable of delivering a direct answer.

My issues with the AI Pin’s language model and features don’t end there. Sometimes it just refuses to do what I ask of it, like restart or shut down. Other times it does something entirely unexpected. When I said “Send a text message to Julian Chokkattu,” who’s a friend and fellow AI Pin reviewer over at Wired, I thought I’d be asked what I wanted to tell him. Instead, the device simply said OK and told me it sent the words “Hey Julian, just checking in. How's your day going?” to Chokkattu. I've never said anything like that to him in our years of friendship, but I guess technically the AI Pin did do what I asked.


Using the Humane AI Pin’s projector display

If only voice interactions were the worst thing about the Humane AI Pin. Sadly, the list of problems only starts there. I was most intrigued by the company’s “pioneering Laser Ink display” that projects green rays onto your palm, as well as the gestures that enable interaction with “onscreen” elements. But my initial wonder quickly gave way to frustration and a dull ache in my shoulder. It might be tiring to hold up your phone to scroll through Instagram, but at least you can set that down on a table and continue browsing. With the AI Pin, if your arm is not up, you’re not seeing anything.

Then there’s the fact that it’s a pretty small canvas. I would see about seven lines of text each time, with about one to three words on each row depending on the length. This meant I had to hold my hand up even longer so I could wait for notifications to finish scrolling through. I also have a smaller palm than some other reviewers I saw while testing the AI Pin. Julian over at Wired has a larger hand and I was downright jealous when I saw he was able to fit the entire projection onto his palm, whereas the contents of my display would spill over onto my fingers, making things hard to read.

It’s not just those of us afflicted with tiny palms who will find the AI Pin tricky to see. Step outside and you’ll have a hard time reading the faint projection. Even on a cloudy, rainy day in New York City, I could barely make out the words on my hands.

When you can read what’s on the screen, interacting with it might make you want to rip your eyes out. Like I said, you’ll have to move your palm closer to and farther from your chest to select the right cards to enter your passcode. It’s a bit like dialing a rotary phone, with cards for the individual digits from 0 to 9. Push farther away to reach the higher numbers and the backspace button, and come back in for the lower ones.

This gesture is smart in theory, but it’s very sensitive. Your arm only reaches so far, so the usable range is small and the distance separating each digit is tiny. One wrong move and you’ll accidentally select something you didn’t want, then have to travel all the way out to delete it. To top it all off, moving my arm around causes the Pin to flop about, meaning the screen shakes on my palm, too. On average, unlocking my Pin, which involves entering a four-digit passcode, took me about five seconds.
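To put numbers on why the picker feels so touchy, here’s a back-of-the-envelope sketch in Python. The 30cm of usable reach and the 11 evenly spaced targets (ten digits plus backspace) are my assumptions for illustration, not Humane’s published specs.

```python
# Toy model of a rotary-style distance picker: quantize the palm-to-chest
# distance into discrete targets. All numbers here are assumptions.
TARGETS = [str(d) for d in range(10)] + ["backspace"]  # 0-9 plus delete
MIN_CM, MAX_CM = 10.0, 40.0  # assumed usable palm-to-chest range

def target_for_distance(distance_cm: float) -> str:
    """Map a palm-to-chest distance to the card it selects."""
    clamped = min(max(distance_cm, MIN_CM), MAX_CM - 1e-9)
    span = (MAX_CM - MIN_CM) / len(TARGETS)  # reach allotted to each card
    return TARGETS[int((clamped - MIN_CM) // span)]

print(round((MAX_CM - MIN_CM) / len(TARGETS), 2))  # 2.73 cm per card
print(target_for_distance(22.0))                   # "4"
```

Under those assumptions each card owns well under 3cm of reach, so a small wobble (say, the Pin flopping around on fabric) is enough to land on a neighboring digit.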

On its own, this doesn’t sound so bad, but bear in mind that you’ll have to re-enter this each time you disconnect the Pin from the booster, latch or clip. It’s currently springtime in New York, which means I’m putting on and taking off my jacket over and over again. Every time I go inside or out, I move the Pin to a different layer and have to look like a confused long-sighted tourist reading my palm at various distances. It’s not fun.

Of course, you can turn off the setting that requires password entry each time you remove the Pin, but that’s simply not great for security.

Though Humane says “privacy and transparency are paramount with AI Pin,” by its very nature the device isn’t suitable for performing confidential tasks unless you’re alone. You don’t want to dictate a sensitive message to your accountant or partner in public, nor might you want to speak your Wi-Fi password out loud.

The latter is actually one of two input methods for setting up an internet connection. If you’d rather not spell your Wi-Fi key out loud, you can go to the Humane website and type in your network name (you spell it out yourself; there’s no list of nearby networks to pick from) and password to generate a QR code for the Pin to scan. Having to verbally relay alphanumeric characters to the Pin is not ideal, and though the QR code technically works, it just involves too much effort. It’s like giving someone a spork when they asked for a knife and fork: good enough to get by, but not a perfect replacement.
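For the curious, Wi-Fi QR provisioning generally just encodes the credentials as a plain-text payload. The sketch below builds the widely used ZXing-style “WIFI:” string; whether Humane’s web tool emits exactly this format is an assumption on my part.

```python
# Build a Wi-Fi provisioning QR payload in the common ZXing "WIFI:" format.
# Whether humane.center generates exactly this payload is an assumption.
def wifi_qr_payload(ssid: str, password: str, security: str = "WPA") -> str:
    def escape(value: str) -> str:
        # Backslash-escape the characters the format reserves.
        for ch in ("\\", ";", ",", ":", '"'):
            value = value.replace(ch, "\\" + ch)
        return value
    return f"WIFI:T:{security};S:{escape(ssid)};P:{escape(password)};;"

payload = wifi_qr_payload("HomeNetwork", "hunter2;42")
print(payload)  # WIFI:T:WPA;S:HomeNetwork;P:hunter2\;42;;
# Feed the string to any QR encoder (e.g. qrcode.make(payload) with the
# third-party "qrcode" package) and point the Pin's camera at the result.
```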


The Humane AI Pin’s speaker

Since communicating through speech is the easiest means of using the Pin, you’ll need to be able to speak and to hear. If you choose not to raise your hand to read the AI Pin’s responses, you’ll have to listen for them. The good news is the onboard speaker is usually loud enough for most environments, and I only struggled to hear it on NYC streets with heavy traffic passing by. I never attempted to talk to it on the subway, however, nor did I obnoxiously play music from the device while I was outside.

In my office and gym, though, I did get the AI Pin to play some songs. The music sounded fine — I didn’t get thumping bass or particularly crisp vocals, but I could hear instruments and crooners easily. Compared to my iPhone 15 Pro Max, it’s a bit tinny, as expected, but not drastically worse.

The problem is there are, once again, some caveats. The most important of these is that, at the moment, you can only use Tidal’s paid streaming service with the Pin. You’ll get 90 days free with your purchase, and then have to pay $11 a month (on top of the $24 you already give to Humane) to continue streaming tunes from your Pin. Humane hasn’t said whether other music services will eventually be supported, so unless you’re already on Tidal, listening to music from the Pin might just not be worth the price. Annoyingly, Tidal also doesn’t have the extensive library that competing providers do, so I couldn’t even play songs like Beyoncé’s latest album or Taylor Swift’s discography (although remixes of her songs were available).

Though Humane has described its “personic speaker” as being able to create a “bubble of sound,” that “bubble” certainly has a permeable membrane. People around you will definitely hear what you’re playing, so unless you’re trying to start a dance party, it might be too disruptive to use the AI Pin for music without pairing Bluetooth headphones. You’ll also probably get better sound quality from Bose, Beats or AirPods anyway.

The Humane AI Pin camera experience

I’ll admit it — a large part of why I was excited for the AI Pin is its onboard camera. My love for taking photos is well-documented, and with the Pin, snapping a shot is supposed to be as easy as double-tapping its face with two fingers. I was even ready to put up with subpar pictures from its 13-megapixel sensor for the ability to quickly capture a scene without having to first whip out my phone.

Sadly, the Humane AI Pin was simply too slow and feverish to deliver on that promise. I frequently ran into times when, after taking a bunch of photos and holding my palm up to see how each snap turned out, the device would get uncomfortably warm. At least twice in my testing, the Pin just shouted “Your AI Pin is too warm and needs to cool down” before shutting down.

A sample image from the Humane AI Pin.

Even when it’s running normally, using the AI Pin’s camera is slow. I’d double-tap it and then have to stand still for at least three seconds before it would take the shot. I appreciate that there’s audio and visual feedback through the flashing green lights and the sound of a shutter clicking when the camera is going, so both you and the people around you know you’re recording. But it’s also a reminder of how long I need to wait — the “shutter” sound will need to go off thrice before the image is saved.

I took photos and videos in various situations under different lighting conditions, from a birthday dinner in a dimly lit restaurant to a beautiful park on a cloudy day. I recorded some workout footage in my building’s gym with large windows, and in general anything taken with adequate light looked good enough to post. The videos might make viewers a little motion sick, since the camera was clipped to my sports bra and moved around with me, but that’s tolerable.

In dark environments, though, forget about it. Even my Nokia E7 from 2012 delivered clearer pictures, most likely because I could hold it steady while framing a shot. The photos of my friends at dinner were so grainy, one person even seemed translucent. To my knowledge, that buddy is not a ghost, either.

A sample image from the Humane AI Pin.

To its credit, Humane’s camera has a generous 120-degree field of view, meaning you’ll capture just about anything in front of you. When you’re not sure if you’ve gotten your subject in the picture, you can hold up your palm after taking the shot, and the projector will beam a monochromatic preview so you can verify. It’s not really there for admiring your skilled composition or checking fine detail; it’s more to confirm you did indeed manage to get the receipt in view before moving on.

Cosmos OS on the Humane AI Pin

When it comes time to retrieve those pictures off the AI Pin, you’ll just need to navigate to humane.center in any browser and sign in. There, you’ll find your photos and videos under “Captures,” your notes, recently played music and calls, as well as every interaction you’ve had with the assistant. That last one made recalling every weird exchange with the AI Pin for this review very easy.

You’ll have to make sure the AI Pin is connected to Wi-Fi and power, and that it’s at least 50 percent charged, before full-resolution photos and videos will upload to the dashboard. Until then, you can still scroll through previews in a gallery, even though you can’t download or share them.

The web portal is fairly rudimentary, with large square tiles serving as cards for sections like “Captures,” “Notes” and “My Data.” Going through them just shows you things you’ve saved or asked the Pin to remember, like a friend’s favorite color or their birthday. Importantly, there isn’t an area for you to view your text messages, so if you wanted to type out a reply from your laptop instead of dictating to the Pin, sorry, you can’t. The only way to view messages is by putting on the Pin, pulling up the screen and navigating the onboard menus to find them.


That brings me to what you see on the AI Pin’s visual interface. If you’ve raised your palm right after asking it something, you’ll see your answer in text form. But if you had brought up your hand after unlocking or tapping the device, you’ll see its barebones home screen. This contains three main elements — a clock widget in the middle, the word “Nearby” in a bubble at the top and notifications at the bottom. Tilting your palm scrolls through these, and you can pinch your index finger and thumb together to select things.

Push your hand further back and you’ll bring up a menu with five circles that will lead you to messages, phone, settings, camera and media player. You’ll need to tilt your palm to scroll through these, but because they’re laid out in a ring, it’s not as straightforward as simply aiming up or down. Trying to get the right target here was one of the greatest challenges I encountered while testing the AI Pin. I was rarely able to land on the right option on my first attempt. That, along with the fact that you have to put on the Pin (and unlock it), made it so difficult to see messages that I eventually just gave up looking at texts I received.

The Humane AI Pin overheating, in use and battery life

One reason I sometimes took off the AI Pin is that it would frequently get too warm and need to “cool down.” Once I removed it, I wouldn’t feel the urge to put it back on. I did wear it a lot in the first few days I had it, typically from 7:45AM, when I headed out to the gym, till evening, depending on what I was up to. Usually at about 3PM, after taking a lot of pictures and video, I would be told my AI Pin’s battery was running low and I’d need to swap out the battery booster. Sometimes this didn’t work, with the Pin dying before it could draw enough power through the accessory. At first it appeared the device simply wouldn’t detect the booster, but I later learned it’s just slow, and can take up to five minutes to recognize a newly attached one.

When I wore the AI Pin to my friend (and fellow reviewer) Michael Fisher’s birthday party just hours after unboxing it, I had it clipped to my tank top, hovering just above my heart. Because it was so close to the edge of my shirt, I accidentally brushed against it a few times when reaching for a drink or resting my chin on my palm a la The Thinker. Normally, I wouldn’t have noticed the Pin, but as it was running so hot, I felt burned every time my skin came into contact with its chrome edges. The touchpad also grew warm with use, and the battery booster resting against my chest got noticeably toasty (though it never actually left a mark).


Part of the reason the AI Pin ran so hot is likely that there’s not a lot of room for the heat generated by its octa-core Snapdragon processor to dissipate. I had also been using it near constantly to show my companions the pictures I had taken, and Humane has said its laser projector is “designed for brief interactions (up to six to nine minutes), not prolonged usage” and that it had “intentionally set conservative thermal limits for this first release that may cause it to need to cool down.” The company added that it not only plans to “improve uninterrupted run time in our next software release,” but also that it’s “working to improve overall thermal performance in the next software release.”

There are other things I need Humane to address via software updates ASAP. The fact that its AI sometimes decides not to do what I ask, like telling me “Your AI Pin is already running smoothly, no need to restart” when I asked it to restart is not only surprising but limiting. There are no hardware buttons to turn the pin on or off, and the only other way to trigger a restart is to pull up the dreaded screen, painstakingly go to the menu, hopefully land on settings and find the Power option. By which point if the Pin hasn’t shut down my arm will have.

A lot of my interactions with the AI Pin also felt like problems I encountered with earlier versions of Siri, Alexa and the Google Assistant. The overly wordy answers, for example, or the pronounced two- or three-second delay before a response, are all reminiscent of the early 2010s. When I asked the AI Pin to “remember that I parked my car right here,” it just saved a note saying “Your car is parked right here,” with no GPS information and no way to navigate back. So I guess I parked my car on a sticky note.

To be clear, that’s not something that Humane ever said the AI Pin can do, but it feels like such an easy thing to offer, especially since the device does have onboard GPS. Google’s made entire lines of bags and Levi’s jackets that serve the very purpose of dropping pins to revisit places later. If your product is meant to be smart and revolutionary, it should at least be able to do what its competitors already can, not to mention offer features they don’t.


One thing the AI Pin actually manages to do competently is act as an interpreter. After you ask it to “translate to [x language],” you hold down two fingers while you talk, let go, and it reads out what you said in the relevant tongue. I tried talking to myself in English and Mandarin, and was frankly impressed not only with the accuracy of the translation and general vocal expressiveness, but also with how fast responses came through. You don’t even need to specify the language the speaker is using. As long as you’ve set the target language, words spoken in Mandarin will be translated to English, and words said in English will be read out in Mandarin.

It’s worth considering that using the AI Pin is a nightmare for anyone who gets self-conscious. I’m pretty thick-skinned, but even I tried to hide the fact that I had a strange gadget with a camera pinned to my person. Luckily, I didn’t get any obvious stares or confrontations, but I heard from fellow reviewers that they did. And as much as I like the idea of a second brain I can wear and offload little notes and reminders to, nothing the AI Pin does well is actually executed better than a smartphone.

Wrap-up

Not only is the Humane AI Pin slow, finicky and barely even smart, using it made me look pretty dumb. In a few days of testing, I went from being excited to show it off to my friends to not having any reason to wear it.

Humane’s vision was ambitious, and the laser projector initially felt like a marvel. At first glance, it looked and felt like a refined product. But it just seems like at every turn, the company had to come up with solutions to problems it created. No screen or keyboard to enter your Wi-Fi password? No worries, use your phone or laptop to generate a QR code. Want to play music? Here you go, a 90-day subscription to Tidal, but you can only play music on that service.

The company promises software updates that could improve some issues, and the few tweaks my unit received during this review did make some things (like music playback) work better. The problem is that, as it stands, the AI Pin doesn’t do enough to justify its $700 price and $24-a-month subscription, and I simply cannot recommend anyone spend this much money for the one or two things it does adequately.

Maybe in time, the AI Pin will be worth revisiting, but it’s hard to imagine why anyone would need a screenless AI wearable when so many devices exist today that you can use to talk to an assistant. From speakers and phones to smartwatches and cars, the world is full of useful AI access points that allow you to ditch a screen. Humane says it’s committed to a “future where AI seamlessly integrates into every aspect of our lives and enhances our daily experiences.” 

After testing the company’s AI Pin, that future feels pretty far away.


Google Gemini chatbots are coming to a customer service interaction near you

More and more companies are choosing to deploy AI-powered chatbots to deal with basic customer service inquiries. At the ongoing Google Cloud Next conference in Las Vegas, the company revealed the Gemini-powered chatbots its partners are working on, some of which you could end up interacting with. Best Buy, for instance, is using Google's technology to build virtual assistants that can help you troubleshoot product issues and reschedule order deliveries. IHG Hotels & Resorts is working on another that can help you plan a vacation in its mobile app, while Mercedes-Benz is using Gemini to improve its own smart sales assistant.

Security company ADT is also building an agent that can help you set up your home security system. And if you happen to be a radiologist, you may end up interacting with Bayer's Gemini-powered apps for diagnosis assistance. Meanwhile, other partners are using Gemini to create experiences that aren't quite customer-facing: Cintas, Discover and Verizon are using generative AI capabilities in different ways to help their customer service personnel find information more quickly and easily. 

Google has launched the Vertex AI Agent Builder, as well, which it says will help developers "easily build and deploy enterprise-ready gen AI experiences" along the lines of OpenAI's GPTs and Microsoft's Copilot Studio. The Builder will provide developers with a set of tools they can use for their projects, including a no-code console that can understand natural language and build AI agents based on Gemini in minutes. Vertex AI has more advanced tools for more complex projects, of course, but their common goal is to simplify the creation and maintenance of personalized AI chatbots and experiences.
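To give a flavor of what a Gemini-backed agent looks like from the code side, here is a minimal sketch using the Vertex AI Python SDK. The project ID, region, model name and prompt are placeholders I chose for illustration, and Agent Builder itself is driven largely through a console rather than handwritten code like this.

```python
# Minimal sketch: generating a customer service reply with Gemini via the
# Vertex AI Python SDK (pip install google-cloud-aiplatform). The project,
# region, model name and prompt below are placeholder assumptions.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="your-gcp-project", location="us-central1")
model = GenerativeModel("gemini-1.0-pro")

prompt = (
    "You are a customer service assistant for an electronics retailer. "
    "A customer writes: 'My soundbar won't pair with my TV.' "
    "Reply with three concise troubleshooting steps."
)
response = model.generate_content(prompt)
print(response.text)
```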

At the same event, Google also announced its new AI-powered video generator for Workspace, as well as its first ARM-based CPU specifically made for data centers. By launching the latter, it's taking on Amazon, which has been using its Graviton processor to power its cloud network over the past few years. 


Apple is developing personal robots for your home, Bloomberg says

Apple is still on the hunt for the next revolutionary product to help it remain dominant in the market and serve as a new source of revenue after abandoning its plans to develop an electric vehicle of its own. According to Bloomberg's Mark Gurman, one of the areas the company is exploring is personal robotics. It reportedly started looking into robots and electric vehicles at the same time, with the hope of developing a machine that doesn't need human intervention.

While Apple's robotics projects are still in the very early stages, Bloomberg said it had already started working on a mobile robot that can follow users around their home, and had developed a table-top device that uses robotics to move a screen around. The idea behind the latter is to have a machine that can mimic head movements and lock on to a single person in a group, presumably for a better video call experience. Since these robots are supposed to be able to move on their own, the company is also looking into the use of algorithms for navigation. Based on the report, Apple's home devices group is in charge of their development, and at least one engineer who worked on its scrapped EV initiative has joined the team.

Robots, however, aren't like phones, which people these days genuinely need in their lives. Apple is apparently worried about whether people would pay "top dollar" for the robots it has in mind, and executives still can't agree on whether the company should keep working on these projects. Gurman previously reported that Apple's EV could have sold for $100,000; if that's true, it had greater potential to grow the company's revenue. But the Apple Car is now out of the picture, and the company is reportedly putting all of its focus on the Vision Pro and new products for the home, including a home hub device with a display that resembles an iPad. Of course, Apple could still scrap these projects, and it could find other classes of products to invest in if it discovers they could bring in bigger money in the future.


Microsoft may have finally made quantum computing useful

The dream of quantum computing has always been exciting: What if we could build a machine working at the quantum level that could tackle complex calculations exponentially faster than a computer limited by classical physics? But despite IBM, Google and others announcing iterative quantum computing hardware, those machines are still not being used for any practical purposes. That might change with today's announcement from Microsoft and Quantinuum, who say they've developed the most error-free quantum computing system yet.

While classical computers and electronics rely on binary bits as their basic unit of information (they can be either on or off), quantum computers work with qubits, which can exist in a superposition of two states at the same time. The trouble with qubits is that they're prone to error, which is the main reason today's quantum computers (known as Noisy Intermediate Scale Quantum [NISQ] computers) are just used for research and experimentation.

Microsoft's solution was to group physical qubits into virtual qubits, which allows it to apply error diagnostics and correction without destroying them, and run it all over Quantinuum's hardware. The result was an error rate that was 800 times better than relying on physical qubits alone. Microsoft claims it was able to run more than 14,000 experiments without any errors.
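For intuition on why combining qubits helps, consider a toy classical repetition code: copy one logical bit across n physical bits and take a majority vote. This is not the scheme Microsoft and Quantinuum used (real quantum error correction must handle continuous errors without directly measuring the data qubits), but it shows how redundancy plus correction pushes the logical error rate far below the physical one.

```python
# Toy model: logical error rate of an n-bit repetition code under
# independent physical errors, to illustrate the principle behind logical
# qubits. This is not Microsoft/Quantinuum's actual error-correction scheme.
from math import comb

def logical_error_rate(p_physical: float, n: int) -> float:
    """Probability that a majority vote over n noisy copies comes out wrong."""
    majority = n // 2 + 1
    return sum(
        comb(n, k) * p_physical**k * (1 - p_physical)**(n - k)
        for k in range(majority, n + 1)
    )

p = 0.001  # assumed physical error rate per operation
for n in (1, 3, 5, 7):
    print(f"{n} copies -> logical error rate {logical_error_rate(p, n):.2e}")
# With a 0.1% physical error rate, five copies already drive the logical
# error rate below one in ten million, as long as errors stay independent.
```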

According to Jason Zander, EVP of Microsoft's Strategic Missions and Technologies division, this achievement could finally bring us to "Level 2 Resilient" quantum computing, which would be reliable enough for practical applications.

"The task at hand for the entire quantum ecosystem is to increase the fidelity of qubits and enable fault-tolerant quantum computing so that we can use a quantum machine to unlock solutions to previously intractable problems," Zander wrote in a blog post today. "In short, we need to transition to reliable logical qubits — created by combining multiple physical qubits together into logical ones to protect against noise and sustain a long (i.e., resilient) computation. ... By having high-quality hardware components and breakthrough error-handling capabilities designed for that machine, we can get better results than any individual component could give us."


Researchers will be able to get a taste of Microsoft's reliable quantum computing via Azure Quantum Elements in the next few months, where it will be available as a private preview. The goal is to push even further to Level 3 quantum supercomputing, which will theoretically be able to tackle incredibly complex issues like climate change and exotic drug research. It's unclear how long it'll take to actually reach that point, but for now, at least, we're moving one step closer to practical quantum computing.


Instagram is working on new Reels feed that combines two users' interests

Instagram is working on a feature that would recommend Reels to you and a friend based on videos you've shared with each other and your individual interests. Reverse engineer Alessandro Paluzzi unearthed the feature, which is called Blend. Instagram confirmed to TechCrunch that it's testing Blend internally and it hasn't started trialing it publicly. It may be the case that Blend never sees the light of day, though it's always intriguing to find out about the ideas Instagram is toying with.

The platform hasn't revealed more details about how Blend will work, though the idea seems to be that Instagram users and one of their besties will discover new Reels together, instead of one of them finding a video they like and DMing it to the other. It would make sense for Blend to have an indicator showing that the other person has already seen a particular Reel, so that the two people who have access to the feed can start chatting about it.
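Instagram hasn't said how Blend will rank videos, but one plausible shape for a two-person feed is scoring candidates against the midpoint of both users' interest vectors. The sketch below is purely speculative; the embedding sizes and the scoring are illustrative stand-ins, not anything Instagram has described.

```python
# Speculative sketch of a "blended" two-user ranking: score candidate Reels
# against the average of both users' taste vectors. Instagram hasn't said
# how Blend actually works; every detail here is an illustrative assumption.
import numpy as np

def blend_scores(user_a: np.ndarray, user_b: np.ndarray,
                 candidates: np.ndarray) -> np.ndarray:
    """Rank items by cosine similarity to the midpoint of two taste vectors."""
    joint = (user_a + user_b) / 2
    joint /= np.linalg.norm(joint)
    normed = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    return normed @ joint

rng = np.random.default_rng(1)
a, b = rng.normal(size=64), rng.normal(size=64)  # two users' interest vectors
reels = rng.normal(size=(100, 64))               # candidate video embeddings
top5 = np.argsort(blend_scores(a, b, reels))[::-1][:5]
print(top5)  # indices of the Reels the pair is jointly likeliest to enjoy
```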

TikTok doesn't have a feature along these lines, as TechCrunch notes, so Blend could give Instagram an advantage when it comes to folks who like to check out short-form videos together. As with many of the other features platforms of this ilk introduce, Blend fundamentally seems to be about increasing engagement.

#Instagram is working on Blend: #Reels recommendations based on reels you've shared each other and your reels interests 👀

ℹ️ Private between the two of you. You can leave a Blend at any time. pic.twitter.com/1kcssBuf7G

— Alessandro Paluzzi (@alex193a) March 28, 2024


How Uber and the gig economy changed the way we live and work

Gig work predates the internet. Besides traditional forms of self-employment, like plumbing, offers for ad-hoc services have long been found in the Yellow Pages and newspaper classified ads, and later on Craigslist and Backpage, which supplanted them. Low-cost broadband internet allowed for the proliferation of computer-based gig platforms like Mechanical Turk, Fiverr and Elance, which offered just about anyone some extra pocket change. But once smartphones took off, everywhere could be an office, and everything could be a gig — and thus the gig economy was born.

Maybe it was a confluence of technological advancement and broad financial anxiety from the 2008 recession, but prospects were bad, people needed money and many had no freedom to be picky about how they got it. This was the same era in which the phrase "the sharing economy" proliferated — at once sold as an antidote to overconsumption, even as that freedom from ownership belied a more worrying commoditization of any skill or asset. Of all the companies to take advantage of this climate, none went further or has held on harder than Uber.

Uber became infamous for railroading its way into new markets without getting approval from regulators. It cemented its reputation as a corporate ne'er-do-well through a byzantine scandal over dodging regulatory scrutiny, several smaller ones over user privacy and minimally beneficial surcharges, as well as, in its infancy, an internal reputation for sexual harassment and discrimination. Early on, the company used its deep reserves of venture capital to subsidize its own rides, eating away at the traditional cab industry in a given market, only to eventually increase prices and try to minimize driver pay once it reached a dominant position. Those same reserves were spent aggressively recruiting drivers with signup bonuses and convincing them they could be their own boss.

Self-employment has a whiff of something liberatory, but Uber effectively turned a traditionally employee-based industry into one that was contractor-based. This meant that one of the first casualties of the ride-sharing boom was the taxi medallion. For decades, cab drivers in many locales effectively saw these licenses as retirement plans, as they'd be able to sell them on to newcomers when it was time to hang up their flat cap. But in large part due to the influx of ride-sharing services, the value of medallions has plummeted over the last decade or so — in New York, for instance, the value of a medallion dropped from around $1 million in 2014 to $100,000 in 2021. That's in tandem with a drop in earnings, leaving many struggling to pay off the enormous loans they took out to buy a medallion.

Some jurisdictions have sought to offset that collapse in medallion value. Quebec pledged $250 million CAD in 2018 to compensate cab drivers. Other regulators, particularly in Australia, applied a per-ride fee to ride-sharing services as part of efforts to replace taxi licenses and compensate medallion holders. In each of those cases, taxpayers and riders, not rideshare companies, bore the brunt of the impact on medallion holders.

At first it was just cab drivers that were hurting, but over the years, compensation for this new class of non-employee app drivers dried up too. In 2017, Uber paid $20 million to settle allegations from the Federal Trade Commission that it used false promises about potential earnings to entice drivers to join its platform. Late last year, Uber and Lyft agreed to pay $328 million to New York drivers after the state conducted a wage theft investigation. The settlement also guaranteed a minimum hourly rate for drivers outside of New York City, where drivers were already subject to minimum rates under Taxi & Limousine Commission rules.

Many rideshare drivers have also sought recognition as employees rather than contractors, so they can have a consistent hourly wage, overtime pay and benefits — efforts that the likes of Uber and rival Lyft have been fighting against. In January, the Department of Labor issued a final rule that aims to make it more difficult for gig economy companies to classify workers as independent contractors rather than employees. The EU is also weighing a provisional deal to reclassify millions of app workers as employees.

Of course, the partial erosion of an entire industry's labor market wasn't always the end goal. At one point, Uber wanted to zero out labor costs by getting rid of drivers entirely. It planned to do so by rolling out a fleet of self-driving vehicles and flying taxis.

"The reason Uber could be expensive is because you're not just paying for the car — you're paying for the other dude in the car," former CEO Travis Kalanick said in 2014, a day after Uber suggested drivers could make $90,000 per year on the platform. "When there's no other dude in the car, the cost of taking an Uber anywhere becomes cheaper than owning a vehicle. So the magic there is, you basically bring the cost below the cost of ownership for everybody, and then car ownership goes away."

Uber's grand automation plans didn't work out as intended, however. The company, under current CEO Dara Khosrowshahi, sold its self-driving car and flying taxi units in late 2020.

Uber's success had second-order effects too: despite a business model best described as "set money on fire until (fingers crossed!) a monopoly is established," a whole slew of startups were born, taking their cues from Uber or explicitly pitching themselves as "Uber for X." Sure, you might find a place to stay on Airbnb or Vrbo that's nicer and less expensive than a hotel room. But studies have shown that such companies have harmed the affordability and availability of housing in some markets, as many landlords and real-estate developers opt for more profitable short-term rentals instead of offering units for long-term rentals or sale. Airbnb has faced plenty of other issues over the years, from a string of lawsuits to a mass shooting at a rental home.

Increasingly, this is becoming the blueprint. Goods and services are exchanged by third parties, facilitated by a semi-automated platform rather than a human being. The platform's algorithm creates the thinnest veneer between choice and control for the workers who perform identical labor to the industry that platform came to replace, but that veneer allows the platform to avoid traditionally pesky things like legal liability and labor laws. Meanwhile, customers with fewer alternative options find themselves held captive by these once-cheap platforms that are now coming to collect their dues. Dazzled by the promise of innovation, regulators rolled over or signed a deal with the devil. It's everyone else who's paying the cost.


To celebrate Engadget's 20th anniversary, we're taking a look back at the products and services that have changed the industry since March 2, 2004.


Snapchat’s latest paid perk is an AI Bitmoji of your pet

Snapchat has a new AI-powered perk for subscribers: Bitmoji versions of your pet. The feature, which is unfortunately not called “petmoji,” allows users to snap a photo of their four-legged friend to create a cartoon-like avatar to accompany their Bitmoji in the Snap Map.

Based on screenshots shared by the company, it seems users will be able to choose from a few different variations of the AI-generated images after sharing a photo of their pet. That’s considerably less customization than what you can do with your own human-inspired Bitmoji, though it should allow users to create something that looks similar to their IRL pet. (No word on whether Snap could one day introduce branded pet accessories for animal avatars like it does for human Bitmoji.)

The addition is also the latest example of how Snap has embraced AI features in its subscription offering. Since debuting Snapchat+ in 2022, the company has used the premium service to experiment with generative AI features, including its MyAI assistant as well as camera-powered features like Dreams and AI-generated snaps. Snapchat+ has more than 7 million subscribers, the company announced in December.

Elsewhere, Snap added some updates for non-subscribers, too. The app is adding a new template feature to make it easier to edit clips, and new swipe-based gestures to send and edit snaps more quickly. Snapchat will also support longer video uploads for Stories and Spotlight. In-app captures can now be three minutes long, while the app will support uploads of up to five minutes.


You can now use your phone to get started with Amazon’s palm-reading tech

Amazon just launched an app that lets people sign up for its palm recognition service without having to head to an in-store kiosk. The Amazon One app uses a smartphone’s camera to take a photo of a palm print to set up an account. Once signed up, you can pay for stuff by using just your hand, ending the tyranny of having to carry a smartphone, cash or a burdensome plastic card.

The tech uses generative AI to analyze a palm's vein structure, turning the data into a “unique numerical, vector representation” which is recognized by scanning machines at retail locations. You’ll have to add a payment method within the app to get started and upload a photo of your ID for the purpose of age verification.
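Amazon hasn’t publicly detailed the pipeline behind that claim, but biometric matching along these lines typically boils down to comparing embeddings: enrollment reduces the palm to a vector, and a later scan matches if its vector lands close enough to the stored one. Here’s a minimal sketch of that idea; the embedding size, noise level and threshold are all illustrative assumptions, not Amazon’s actual parameters.

```python
# Illustrative palm matching via vector similarity. Amazon hasn't published
# its actual pipeline; the dimensions, noise and threshold are assumptions.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def matches(enrolled: np.ndarray, scan: np.ndarray,
            threshold: float = 0.95) -> bool:
    """Accept a scan whose embedding is close enough to the enrolled one."""
    return cosine_similarity(enrolled, scan) >= threshold

rng = np.random.default_rng(0)
enrolled = rng.normal(size=256)                          # enrollment vector
same_palm = enrolled + rng.normal(scale=0.05, size=256)  # noisy rescan
other_palm = rng.normal(size=256)                        # someone else's palm

print(matches(enrolled, same_palm))   # True: small noise stays close
print(matches(enrolled, other_palm))  # False: random vectors are near-orthogonal
```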

The app launches today for iOS and Android. Previously, you’d have to go to a physical location to sign up for Amazon One. Beyond payments, the tech is also used as an age verification tool and as a way to enter concerts and sporting events without having to bring along a ticket.

Once you hand over your palm-print to the completely benevolent Amazon corporation, you’ll have unfettered access to each and every Whole Foods grocery store throughout the country. Amazon, after all, owns Whole Foods. Amazon One payments are also accepted at some Panera Bread locations, in addition to certain airports, stadiums and convenience stores.

There are obvious privacy concerns here, as passwords can change but palms cannot. Amazon says that all uploaded palm images are “encrypted and sent to a secure Amazon One domain” in the Amazon Web Service cloud. The company also says the app “includes additional layers of spoof detection,” noting that it’s not possible to save or download palm images to the phone itself.


Google's Circle to Search feature will soon handle language translation

Google just announced that it’s expanding its recently launched Circle to Search tool to include language translation, as part of an update to various core services. Circle to Search, as the name suggests, already lets some Android users research stuff by drawing a circle around an object.

The forthcoming language translation component won’t even require a drawn circle. Google says people will just have to long press the home button or the navigation bar and look for the translate icon. It’ll do the rest. The company showed the tech quickly translating an entire menu with one long press. Google Translate can already do this, though in a slightly different way, but this update means users won’t have to pop out of one app and into another just to check on something.

The translation tool begins rolling out in the “coming weeks”, though only to Android devices that can run Circle to Search. This list currently includes Pixel 7 devices, Pixel 8 devices and the Samsung Galaxy S24 series, though Google says it's coming to more phones and tablets this week, including some foldables.

Google Maps is also getting a refresh, with an emphasis on AI. When you pull up a place on Maps, like a restaurant, artificial intelligence will display a summary that describes unique points of interest and “what people love” about the business. The AI will also analyze photos of food and identify what the dish is called, in addition to the cost and whether it's vegetarian or vegan. The company hopes this will make it easier to make reservations and book trips.


On the non-AI side of things, Maps is getting an updated lists feature in select cities throughout the US and Canada. This will aggregate lists of must-visit destinations pulled from members of the community and local publishers. There will be tools to customize these lists as you see fit.

These will be joined by lists created by Google and its algorithm, including a weekly trending list to discover the “latest hot spots” and something called Gems that chronicles under-the-radar spots. All of these Maps updates are coming to both Android and iOS devices later this month.


China bans Intel and AMD processors in government computers

China has introduced guidelines that bar the use of US processors from AMD and Intel in government computers and servers, The Financial Times has reported. The new rules also block Microsoft Windows and foreign database products in favor of domestic solutions, marking the latest move in a long-running tech trade war between the two countries.

Government agencies must now use "safe and reliable" domestic replacements for AMD and Intel chips. The list includes 18 approved processors, including chips from Huawei and the state-backed company Phytium — both of which are banned in the US. 

The new rules — introduced in December and quietly implemented recently — could have a significant impact on Intel and AMD. China accounted for 27 percent of Intel's $54 billion in sales last year and 15 percent of AMD's revenue of $23 billion, according to the FT. It's not clear how many chips are used in government versus the private sector, however. 
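For scale, those percentages work out to roughly $14.6 billion of Intel's annual revenue and about $3.5 billion of AMD's.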

The moves are China's most aggressive yet to restrict the use of US-built technology. Last year, Beijing prohibited domestic firms from using Micron chips in critical infrastructure. Meanwhile, the US has banned a wide range of Chinese companies ranging from chip manufacturers to aerospace firms. The Biden administration has also blocked US companies like NVIDIA from selling AI and other chips to China. 

The US, Japan and the Netherlands have dominated the manufacturing of cutting-edge processors, and those nations recently agreed to tighten export controls on chipmaking equipment from ASML, Nikon and Tokyo Electron. However, Chinese companies including Baidu, Huawei, Xiaomi and Oppo have already started designing their own semiconductors to prepare for a future wherein they could no longer import chips from the US and other countries.
