When Snapchat introduced the notion of “ephemeral data” to the masses a decade ago, self-destructing messaging really took off. There were tons of companies trying to cash in, from Meta-created Poke to Wickr, Confide, Hash and others. Most of those apps failed, but the idea has thrived. Case in point: WhatsApp just introduced voice messages that automatically delete after being played.
The messaging app’s View Once feature already exists for photos and videos, but this is the first time it has been applied to voice messages. The interface is simple: just record a voice message with View Once selected, and it’ll self-destruct after the recipient hears it. This is not only fun in a Mission: Impossible sort of way, but it actively enhances privacy when an audio recording mentions sensitive topics. Hey, once in a while you have to give someone credit card details, and it’s better to be safe than sorry.
There are some caveats, as no technology is foolproof. WhatsApp encourages users to only send View Once voice messages to people they trust, as there are ways to get around the ephemeral nature of the data. For instance, Android users can use the screen record function as they listen and anyone can use another camera or external microphone to capture the message.
The tool’s rolling out globally over the next few days, so it might be a bit before the update hits your device. WhatsApp has been making all sorts of improvements throughout the past year. Just last week, the platform introduced the ability to share photos in their original format, without compression. The app also recently added a tool that masks your IP address when making calls.
This article originally appeared on Engadget at https://www.engadget.com/whatsapp-adds-disappearing-voice-messages-to-its-roster-of-privacy-features-172813331.html?src=rss
Cloud storage app Proton Drive is rolling out a new tool that automatically backs up photos to a private cloud server, bringing its feature set closer to something like Google Photos. Not only does the software automatically sync and upload photos to its servers, but there’s a management tool that categorizes images based on when they were taken, which Proton calls “snapshots of your life.” For now, all of these features are reserved for Android users.
All you have to do is download the update and enable photo uploads in the settings. Like all aspects of Proton Drive, the transfer will be end-to-end encrypted so you don’t have to worry about prying digital eyes. The encryption applies to the photo itself and any associated metadata.
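To make “end-to-end encrypted” concrete, here’s a minimal sketch of client-side encryption of a photo and its metadata in Python, using the cryptography package. This only illustrates the general idea, not Proton’s actual implementation (Proton’s encryption is built on OpenPGP), and the key handling and metadata fields here are invented for the example.

```python
# Illustrative sketch only: NOT Proton's actual scheme. It demonstrates
# the idea that both the image bytes and the metadata are encrypted
# on-device, so the server only ever stores ciphertext.
import json
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_photo(photo_bytes: bytes, metadata: dict, key: bytes) -> dict:
    aesgcm = AESGCM(key)
    photo_nonce = os.urandom(12)  # unique nonce per encryption
    meta_nonce = os.urandom(12)
    return {
        # The nonce is prepended so the client can decrypt later.
        "photo": photo_nonce + aesgcm.encrypt(photo_nonce, photo_bytes, None),
        "metadata": meta_nonce + aesgcm.encrypt(
            meta_nonce, json.dumps(metadata).encode(), None
        ),
    }

# In a real system the key would be derived from the user's account keys
# and never leave their devices; the metadata fields are made up.
key = AESGCM.generate_key(bit_length=256)
upload_blob = encrypt_photo(
    b"<raw image bytes>",
    {"taken_at": "2023-12-07T12:00:00Z", "location": "somewhere"},
    key,
)
```

The upshot of a design like this is that the server receives only the two opaque blobs, so neither the photo nor details like when and where it was taken are readable by anyone but the user.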
As for revisiting the photos, the app lays them all out in a grid view, with previews for a variety of formats, including panoramas, portraits and even timelapse videos. It’s worth noting that Proton Drive already offered cloud storage for photos, but there wasn’t an automatic sync. Now there is. Android users can rejoice, though the company has yet to announce an iOS version.
The update begins showing up today, but it’ll be a few days before everyone gets it. You know the drill. A 200GB Proton Drive subscription costs $5 per month, while a 500GB plan costs $13 each month. There’s a free tier, but it's only 1GB.
This article originally appeared on Engadget at https://www.engadget.com/proton-drive-for-android-can-back-up-your-photos-to-a-private-cloud-server-edited-163116819.html?src=rss
Truly great Android tablets are uncommon, but the Google Pixel Tablet stands out among them for its ability to function like a smart display. If you've been interested in picking one up, the 11-inch slate is back on sale for $399 at several retailers, including Amazon, Target, Wellbots, and Google's online store. We've seen this $100 discount a couple of times over the past month, but it nevertheless matches the lowest price we've tracked. This deal applies to the 128GB versions of the device in each colorway; if you need more storage space, the 256GB models are also $100 off at $499. Google says the offer will run through December 17, and it comes as part of a wider range of Pixel device deals the company is running this week.
We highlight the Pixel Tablet in our tablet buying guide, and Engadget Deputy Editor Cherlynn Low gave the device a review score of 85 this past June. Taken purely as a tablet, it's not as pleasant as the top Android pick in our guide, Samsung's Galaxy Tab S9: It uses an LCD panel instead of OLED, the screen is limited to a 60Hz refresh rate and Samsung's software experience is generally better suited to multitasking and productivity. But for $350 or so less, the Pixel Tablet's 2,560 x 1,600 resolution display, Tensor G2 chip and 5,000mAh battery are still more than nice enough for video streaming, gaming, web browsing and other casual tablet tasks. Google says it'll support the device with OS updates through June 2026 (and security updates through June 2028), though, as with all Android tablets, some apps aren't as optimized for large screens here as they are on iPads.
What sets the Pixel Tablet apart is the dock that comes with it, which serves as both a charger and a dedicated speaker. When you pop the tablet onto the dock, it can go into a "Hub Mode" and work along the lines of a Nest Hub Max. It's not quite as seamless, but you can still use the Google Assistant to control certain smart home devices, cast video from a phone, showcase photos, stream music and the like. If you've been in the market for both a tablet and a smart display anyway, this is a clever compromise, though you should still want the tablet first and foremost.
A few other Google devices we recommend are also on sale. The top Android picks in our guide to the best smartphones, the Pixel 8 and Pixel 8 Pro, are down to $531 and $799, respectively. The former applies to a 256GB model and beats the deal we saw on Black Friday by $78. Beyond that, the 4K Chromecast has dropped back to $38, the entry-level Nest Thermostat is down to $90 and the Pixel Watch 2 is still down to a low of $300.
This article originally appeared on Engadget at https://www.engadget.com/googles-pixel-tablet-falls-back-to-an-all-time-low-of-399-154549399.html?src=rss
Google officially introduced its most capable large language model to date, Gemini. CEO Sundar Pichai said it’s the first of “a new generation of AI models, inspired by the way people understand and interact with the world.” Of course, it’s all very complex, but Google’s multibillion-dollar investment in AI has created a model more flexible than anything before it. Let’s break it down.
The system has been developed from the ground up as an integrated multimodal AI. As Engadget’s Andrew Tarantola puts it, “think of many foundational AI models as groups of smaller models all stacked together.” Gemini is trained to seamlessly understand and reason on all kinds of inputs, and this should make it pretty capable in the face of complex coding requests and even physics problems.
Gemini comes in three sizes: Nano, Pro and Ultra. Nano runs on-device, and Pro will fold into Google’s chatbot, Bard. The improved Bard chatbot will be available in the same 170 countries and territories as the existing service. Gemini Pro apparently outscored GPT-3.5, the earlier model that initially powered ChatGPT, on six of eight AI benchmarks. However, there are no comparisons yet between this new challenger and OpenAI’s dominant chatbot running on GPT-4.
Meanwhile, Gemini Ultra, which won’t be available until at least 2024, reportedly scored higher than any other model, including GPT-4, on some benchmark tests. However, this Ultra flavor requires additional vetting before being released to “select customers, developers, partners and safety and responsibility experts” for further testing and feedback.
— Mat Smith
The headphone industry isn’t known for its rapid evolution, which makes the arrival of Creative’s Aurvana Ace headphones — the first wireless buds with MEMS drivers — notable. MEMS-based headphones need a small amount of “bias” power to work, and while Singularity used a dedicated DAC with a specific xMEMS “mode,” Creative uses an amp chip that demonstrates, for the first time, consumer MEMS headphones in a wireless configuration. If MEMS is to catch on, it has to be compatible with true wireless headphones.
Foreign governments likely spy on your smartphone use, and now Senator Ron Wyden’s office is pushing Apple and Google to reveal exactly how that works. Push notifications, the dings you get from apps calling your attention back to your phone, may be handed over by those companies to government agencies on request.
“Because Apple and Google deliver push notification data, they can be secretly compelled by governments to hand over this information,” Wyden wrote in the letter on Wednesday.
Apple claims it was barred from coming clean about this process, which is why Wyden’s letter specifically targets the Department of Justice. “In this case, the federal government prohibited us from sharing any information, and now that this method has become public, we are updating our transparency reporting to detail these kinds of requests,” Apple said in a statement to Engadget. Meanwhile, Google said it shared “the Senator’s commitment to keeping users informed about these requests.”
Scientists have developed a new implantable device that could change the way Type 1 diabetics receive insulin. The thread-like implant, or SHEATH (Subcutaneous Host-Enabled Alginate THread), is installed in a two-step process, which ultimately leads to the deployment of “islet devices,” derived from the cells that produce insulin in our bodies naturally. A 10-centimeter-long islet device secretes insulin through islet cells that form around it, while also receiving nutrients and oxygen from blood vessels to stay alive. Because the islet devices eventually need to be removed, the researchers are still working on ways to maximize the exchange of nutrients and oxygen in large-animal models — and eventually patients.
This article originally appeared on Engadget at https://www.engadget.com/the-morning-after-googles-gemini-is-the-companys-answer-to-chatgpt-121531424.html?src=rss
When LG still made phones (sigh), it at one point tried to implement a "Hand ID" unlock gimmick on the G8 ThinQ, though in our experience, there was much room for improvement. For one, you had to turn on the screen first to toggle hand tracking. That was dumb. Fast forward to today, and Realme is bringing a similar feature back on a new phone, the GT5 Pro, with support for some seemingly practical hand gestures.
Rather than using a time-of-flight camera and an infrared light, the Realme GT5 Pro utilizes its 32-megapixel selfie camera to detect your palm print. In Realme's demo, you can see how the screen wakes up automatically when the palm moves away from it. I highly doubt that the front camera stays on all the time, so I'm willing to bet that this works in conjunction with an ultrasonic proximity sensor — most likely one from Elliptic Labs, whose tech is present on many Android handsets.
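If that guess is right, the gating logic would be conceptually simple: leave the camera off until the cheap, always-on proximity sensor reports something near the screen, and only then run the expensive camera-based palm check. Here's a purely hypothetical sketch of that design; Realme hasn't documented its implementation, and every class and function below is an invented stand-in.

```python
# Hypothetical sketch of the speculated wake pipeline: a low-power
# ultrasonic proximity sensor gates the power-hungry selfie camera,
# which only runs palm detection once something is near the screen.
# None of this is Realme's code; the interfaces are invented.
import time

class ProximitySensor:
    def object_nearby(self) -> bool:
        # A real driver would compare an ultrasonic echo reading
        # against a near-field distance threshold.
        return False

class PalmDetector:
    def matches_enrolled_palm(self) -> bool:
        # Briefly power up the front camera and run the palm-print
        # model against the enrolled template.
        return False

def unlock_and_wake_screen() -> None:
    print("screen on, device unlocked")

def wake_loop(sensor: ProximitySensor, detector: PalmDetector) -> None:
    while True:
        # Cheap, always-on check first; expensive camera check second.
        if sensor.object_nearby() and detector.matches_enrolled_palm():
            unlock_and_wake_screen()
        time.sleep(0.1)  # a real implementation would be interrupt-driven
```

The point of the two-stage design is power: an ultrasonic sensor can run continuously for almost nothing, while the camera and palm model only spin up for the brief moments they're actually needed.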
Realme said palm unlock is faster than face recognition, partly thanks to machine learning using over 10,000 models. The company even went as far as claiming that this security feature passed a penetration test involving over 10 million attacks. The good old under-display fingerprint reader is still there, though, so palm unlock is just an extra option — probably the most convenient one when you're cooking or driving.
Like the LG G8, the Realme GT5 Pro also supports several hand gestures. A pinch gesture toggles the recent apps list, and from there you can gently brush left or right to browse through the recent apps. Holding up your index finger toggles cursor control, and hovering over a spot triggers a click. A three-finger palm gesture takes a screenshot. Flipping your palm around takes you back to the home screen. Pointing your thumb to the left triggers a "back" action. Finally, moving your palm toward the screen switches it off.
The phone itself is otherwise a standard flagship affair. It packs Qualcomm's latest Snapdragon 8 Gen 3 processor, a 6.78-inch curved AMOLED panel from China's BOE (2,780 x 1,264, 144Hz, 4,500 nits), a generous 5,400mAh battery that supports 100W fast charging (12 minutes to a 50 percent charge) or 50W wireless fast charging, USB-C 3.2, NFC, dual speakers and an infrared remote. As part of its nine-layer thermal structure, Realme threw in a three-layer vapor-cooling chamber, which apparently has the industry's largest cooling surface area. The device is also rated IP64 for dust and liquid protection.
Photography-wise, you get a 50-megapixel main camera (powered by a Sony LYT-808 sensor, as found on the OnePlus 12), an 8-megapixel ultra-wide camera and the same 50-megapixel, 3x periscope telephoto camera (with a Sony IMX890) as the one on the Oppo Find X6 series. You can already see the synergy among Realme, Oppo and OnePlus within the BBK family here.
The Realme GT5 Pro is available in China starting from 3,298 yuan (about $460) for the model with 12GB of RAM and 256GB of storage, and maxing out at 4,198 yuan ($590) for 16GB of RAM and 1TB of storage. Color options include black for the glass body, and orange or gold for the vegan leather versions.
This article originally appeared on Engadget at https://www.engadget.com/realmes-gt5-pro-phone-can-unlock-itself-by-reading-your-palm-091320182.html?src=rss
It was reported in late November that Google Drive for desktop (v84.0.0.0-84.0.4.0) had a sync issue that caused months or even years of files to disappear. If you were unfortunate enough to be part of this "small subset" of users, there's finally some good news. The latest version of the Drive for desktop app (85.0.13.0 or higher) includes a file recovery tool, which you can reach in a few steps: click the Drive for desktop icon in the menu bar or system tray, press and hold the "Shift" key, click "Settings," and then hit "Recover from backups."
From there, you should see a notification saying "Recovery has started," and hopefully you'll get a "Recovery is complete" message after a while. You'll then find a new folder named "Google Drive Recovery" containing the unsynced files on your desktop.
Good luck, though, as Google doesn't expect this method to work for everyone. "If you’ve tried to run the recovery tool and are experiencing issues, submit feedback through the Drive for desktop app with the hashtag '#DFD84' and make sure to check the box to include diagnostic logs," the company said on the support page. There are also instructions for those who prefer to try the command-line interface, a Windows backup or a Time Machine backup.
This article originally appeared on Engadget at https://www.engadget.com/updated-google-drive-for-desktop-offers-a-recovery-tool-for-missing-files-042758933.html?src=rss
Last January, AMD beat Intel to the punch by launching its Ryzen 7040 chips, the first x86 processors to integrate a neural processing unit (NPU) for AI workloads. Intel's long-delayed Core Ultra "Meteor Lake" chips, its first to integrate an NPU, are set to arrive on December 14th. But it seems AMD can't help but remind Intel it's lagging behind: Today, AMD is announcing the Ryzen 8040 series, its next batch of AI-equipped laptop chips, and it's also giving us a peek into its future AI roadmap.
The Ryzen 8040 chips, spearheaded by the 8-core Ryzen 9 8945HS, are up to 1.4 times faster than their predecessors when it comes to Llama 2 and AI vision model performance, according to AMD. They're also reportedly up to 1.8 times faster than Intel's high-end 13900H chip when it comes to gaming, and 1.4 times faster for content creation. Of course, the real test will be comparing them to Intel's new Core Ultra chips, which weren't available for AMD to benchmark.
AMD's NPU will be available on all of the Ryzen 8040 chips except for the two low-end models, the six-core Ryzen 5 8540U and the quad-core Ryzen 3 8440U. The company says the Ryzen 7040's NPU, AMD XDNA, is capable of reaching 10 TOPS (tera operations per second), while the 8040's NPU can hit 16 TOPS. Looking further into 2024, AMD also teased its next NPU architecture, codenamed "Strix Point," which will offer "more than 3x generative AI NPU performance." Basically, don't expect AMD to slow down its AI ambitions anytime soon.
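For context, here's the quick math on those figures, with the caveat that AMD didn't specify which chip its "3x" Strix Point claim is measured against, so both readings are shown:

```python
# Back-of-the-envelope math on AMD's stated NPU figures.
ryzen_7040_tops = 10  # AMD XDNA, per AMD
ryzen_8040_tops = 16  # per AMD

print(f"7040 -> 8040 uplift: {ryzen_8040_tops / ryzen_7040_tops:.1f}x")  # 1.6x

# "More than 3x generative AI NPU performance" for Strix Point; AMD
# didn't name the baseline, so here's what each reading would imply.
print(f"If 3x vs the 8040: >{3 * ryzen_8040_tops} TOPS")  # >48 TOPS
print(f"If 3x vs the 7040: >{3 * ryzen_7040_tops} TOPS")  # >30 TOPS
```

In other words, the 7040-to-8040 jump is a 60 percent uplift, and Strix Point would land somewhere north of 30 to 48 TOPS depending on what AMD is counting against.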
It's worth remembering that both AMD and Intel are lagging behind Qualcomm when it comes to bringing NPUs to Windows PCs: Qualcomm's SQ3 chip powered the ill-fated Surface Pro 9 5G. That was just a minor win for the Snapdragon maker, though, as the Windows-on-Arm experience is still a mess, especially when it comes to running older apps that require x86 emulation.
The far more compelling competitor for Intel and AMD is Apple, which has been integrating Neural Engines in its hardware since the A11 Bionic debuted in 2017, and has made them a core component in the Apple Silicon chips for Macs. Apple's Neural Engine speeds up AI tasks, just like AMD and Intel's NPUs, and it helps tackle things like Face ID and photo processing. On PCs, NPUs enable features like Windows 11's Studio Effects in video chats, which can blur your background or help maintain eye contact.
Just like Intel, AMD is also pushing developers to build NPU features into their apps. Today, it's also unveiling the Ryzen AI Software platform, which will allow developers to take pre-trained AI models and optimize them to run on Ryzen AI hardware. AMD's platform will also help those models run on Intel's NPUs, similar to how Intel's AI development tools will ultimately help Ryzen systems. We're still in the early days of seeing how devs will take advantage of NPUs, but hopefully AMD and Intel's competitive streak will help deliver genuinely helpful AI-powered apps soon.
This article originally appeared on Engadget at https://www.engadget.com/amds-ryzen-8040-chips-remind-intel-its-falling-behind-in-ai-pcs-200043544.html?src=rss
Acer just announced a new gaming laptop, the Nitro V 16. This computer has some serious bells and whistles, the key takeaway being the inclusion of AMD's just-announced Ryzen 8040 series processors. The processor has plenty of oomph for modern gaming applications, with the addition of AI technology to enable enhanced ray-traced visuals.
You can spec out the laptop how you see fit, with GPU options up to the respectable NVIDIA GeForce RTX 4060. This GPU supports DLSS 3.5, including its AI-powered ray-tracing denoiser, Ray Reconstruction. You have your pick of two display options, WQXGA or WUXGA. Both boast 165Hz refresh rates and 3ms response times. Acer promises that the displays offer “fluid visuals with minimal ghosting and screen tearing.”
As for other specs, you can beef up the laptop with up to 32GB of DDR5-5600 RAM and 2TB of PCIe Gen 4 SSD storage. Acer also touts a new cooling system with a pair of high-powered fans that make the machine “well-equipped to take on heavy gameplay.” To that end, you can monitor performance and temperature via the company’s proprietary NitroSense utility app.
There are three microphones outfitted with AI-enhanced noise reduction tech for online tomfoolery, and the speakers incorporate DTS:X Ultra sound optimization algorithms for immersive audio. Finally, you get a USB4 Type-C port, two USB 3 ports, an HDMI port, a microSD card reader and WiFi 6E compatibility.
If the name of the processor seems a bit confusing, that's because AMD recently changed up its naming conventions. Here's a simple breakdown. The "8" relates to 2024, and the second number refers to the product line or relevant market segment, so it doesn't mean much to consumers. The third number indicates the architecture: the "4" means the chip uses the advanced Zen 4 design. Finally, the fourth number differentiates tiers within an architecture. A "0" denotes the lower tier and a "5" the upper one; in the previous generation, for instance, that digit separated Zen 3 chips from their upgraded Zen 3+ counterparts.
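That scheme is simple enough to sanity-check in code. Below is a hypothetical decoder that applies the breakdown above, using the Ryzen 9 8945HS model number as an example; the digit mappings are abbreviated to what's described in this article.

```python
# Hypothetical decoder for AMD's mobile Ryzen model numbers, following
# the breakdown above. Mappings are abbreviated to this article's digits.
def decode_ryzen(model: str) -> dict:
    year, segment, arch, tier = model[:4]
    return {
        "portfolio_year": 2016 + int(year),  # "8" -> 2024
        "segment_digit": segment,            # product line / market segment
        "architecture": f"Zen {arch}",       # "4" -> Zen 4
        # In the previous generation this digit separated Zen 3 ("0")
        # from Zen 3+ ("5"); generally "5" marks the upper tier.
        "tier": "upper" if tier == "5" else "lower",
    }

print(decode_ryzen("8945"))  # the Ryzen 9 8945HS
# -> {'portfolio_year': 2024, 'segment_digit': '9',
#     'architecture': 'Zen 4', 'tier': 'upper'}
```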
The Windows 11 gaming laptop will be available in March, with a starting price of $1,000 for the base model. It also comes with one month of Xbox Game Pass, so you can put it through its paces.
This article originally appeared on Engadget at https://www.engadget.com/acers-nitro-v16-gaming-laptop-is-powered-by-new-amd-ryzen-8040-processors-200031118.html?src=rss
Ahead of the International Day of Persons with Disabilities last Sunday, Apple released a short film that showcased its Personal Voice accessibility feature, which debuted earlier this year in iOS 17. Personal Voice allows users to create digital versions of their voice to use on calls, supported apps and Apple’s own Live Speech tool.
For those who are at risk of permanently losing their voice due to conditions like Parkinson’s disease, multiple sclerosis, ALS and vocal cord paralysis, not sounding like yourself can be yet another form of identity loss. Being able to create a copy of your voice while you’re still able might help alleviate the feeling that you’ll never feel like yourself again, or that your loved ones won’t know what you sound like.
Anyone on iOS 17, iPadOS 17 or macOS Sonoma (on Macs with Apple Silicon) can create a Personal Voice in case they need it in the future, whether temporarily or for long-term use. I found the process (on my iPhone 14 Pro) pretty straightforward and was surprisingly satisfied with the resulting voice. Here’s how you can set up your own Personal Voice.
Before you start the process, make sure you have a window of about 30 minutes. You’ll be asked to record 150 sentences, and depending on how quickly you speak, it could take some time. You should also find a quiet place with minimal background sound and get comfortable. It’s also worth having a cup of water nearby and making sure your phone has at least 30 percent battery.
How to set up Personal Voice on iPhone
When you’re ready, go to the Personal Voice menu by opening Settings and finding Accessibility > Personal Voice (under Speech). Select Create A Personal Voice, and Apple will give you a summary of what to expect. Hit Continue, and you’ll see instructions like “Find a quiet place” and “Take your time.”
Importantly, one of the tips is to “Speak naturally.” Apple encourages users to “read aloud at a consistent volume, as if you’re having a conversation.” After you tap Continue on this page, there is one final step where your phone uses its microphone to analyze the level of background noise, before you can finally start reading prompts.
The layout for the recording process is fairly intuitive. Hit the big red record button at the bottom, and read out the words in the middle of the page. Below the record button, you can choose from “Continuous Recording” or “Stop at each phrase.”
In the latter mode, you’ll have to tap a button each time you’ve recorded a phrase, while Continuous is a more hands-free experience that relies on the phone to know when you’re done talking. For those with speech impairments or who read slowly, the continuous mode could feel stressful. Though it happened just once for me, the iPhone trying to skip ahead to the next phrase before I was ready was enough to make me feel I needed to be quick with my reactions.
Personal Voice on iOS 17: First impressions
Still, for the most part the system was accurate at recognizing when I was done talking, and offered enough of a pause that I could tap the redo button before moving to the next sentence. The prompts mostly consisted of historical and geographical information, with the occasional expressive exclamation thrown in. There’s a fairly diverse selection of phrases, ranging from simple questions like “Can you ask them if they’re using that chair?” to forceful statements like “Come back inside right now!” or “Ouch! That is really hot!”
I found myself trying to be more exaggerated when reading those particular sentences, since I didn’t want my resulting Personal Voice to be too robotic. But it was exactly while doing that that I realized the problem inherent in the process: No matter how well I performed or acted, there would always be an element of artifice in the recordings. Even when I did my best to pretend something was really hot and had hurt me, it still wasn’t a genuine reaction. And there’s definitely a difference between how I sound when narrating sentences and when chatting with my friends.
That’s not a ding on Apple or Personal Voice, but simply an observation that there is a limit to how well my verbal self can be replicated. When you’re done with all 150 sentences, Apple explains that the process “may need to complete overnight.” It recommends that you charge and lock your iPhone; your Personal Voice “will be generated only while iPhone is charging and locked,” and you’ll be alerted when it’s ready to use. It’s worth noting that during this time, Apple trains the text-to-speech neural networks fully on the device, not in the cloud.
In my testing, 20 minutes after I put down my iPhone 14 Pro, only 4 percent of the processing was complete. Twenty minutes after that, my Personal Voice was only 6 percent done. So this is definitely something you’ll need to allocate hours, if not a whole night, for. If you’re not ready to abandon your device for that long, you can still use your phone; just know that doing so will delay the process.
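For what it’s worth, a naive linear extrapolation from those two checkpoints (assuming the pace I saw holds, which it may not) shows why overnight is the realistic expectation:

```python
# Naive linear extrapolation from the two observed checkpoints.
t1, p1 = 20, 4  # minutes elapsed, percent complete
t2, p2 = 40, 6
rate = (p2 - p1) / (t2 - t1)         # 0.1 percent per minute
hours_left = (100 - p2) / rate / 60  # about 15.7 hours
print(f"~{hours_left:.0f} hours to go at that pace")  # ~16 hours
```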
When your Personal Voice is ready, you’ll get a notification and can then head to settings to try it out. On the same page where you started the creation process, you’ll see options to share your voice across devices, as well as to allow apps to request to use it. The former stores a copy of your voice in iCloud for use on your other devices. Your data is end-to-end encrypted in transit, and the recordings you made are only stored on the phone you used to create the voice, though you can export your clips in case you want to keep a copy elsewhere.
How to listen to and use Personal Voice
You can name your Personal Voice and create another if you prefer (you can generate up to three). To listen to the voice you’ve created, go back to the Speech section of the accessibility settings and select Live Speech. Turn it on, choose your new creation under Voices and triple-click your side button. Type something into the box and hit Send. You can then decide if you like what you hear and whether you need to make a new Personal Voice.
At first, I didn’t think mine sounded expressive enough when I tried things like “How is the weather today?” But after a few days, I started entering phrases like “Terrence is a monster,” and it definitely felt a little more like me. Still robotic, but there was just enough Cherlynn in the voice that my manager would know it was me calling him names.
With concerns around deepfakes and AI-generated content at an all-time high this year, perhaps a bit of artifice in a computer-generated voice isn’t such a bad thing. I certainly wouldn’t want someone to grab my phone and record my digital voice saying things I would never utter in real life. Finding a way to give people a sense of self and improve accessibility while working with all the limits and caveats that currently exist around identity and technology is a delicate balance, and one that I’m heartened to see Apple at least attempt with Personal Voice.
This article originally appeared on Engadget at https://www.engadget.com/how-to-use-personal-voice-on-iphone-with-ios-17-193002021.html?src=rss
Apple’s latest tvOS beta suggests the iTunes Movies and TV Shows apps on Apple TV are on their way out. 9to5Mac reports the set-top box’s former home of streaming purchases and rentals is no longer in the tvOS 17.2 release candidate (RC), now available to developers. (Unless Apple finds unexpected bugs, RC firmware usually ends up identical to the public version.) Apple’s folding of the iTunes apps into the TV app was first reported in October.
9to5Mac says the home screen icons for iTunes Movies and iTunes TV Shows are still present in the tvOS 17.2 firmware, but they point to the TV app, where the old functionality will live. The publication posted a photo of a redirect screen, which reads, “iTunes Movies and Your Purchases Have Moved. You can buy or rent movies and find your purchases in the Apple TV App.” Below it are options to “Go to the Store” or “Go to Your Purchases.”
The change doesn’t remove any core functionality since the TV app replicates the iTunes Movies and TV Shows apps’ ability to buy, rent and manage purchases. The move is likely about streamlining — shedding the last remnants of the aging iTunes brand — while perhaps nudging more users into Apple TV+ subscriptions.
The update also adds a few features to the TV app on Apple’s set-top box. These include the ability to filter by genre in the purchased section, the availability of box sets in store listings and a new sidebar design for easier navigation.
This article originally appeared on Engadget at https://www.engadget.com/apples-latest-tvos-beta-kills-the-itunes-movies-and-tv-shows-apps-192056618.html?src=rss