If you're one of the people who regularly use more than one WhatsApp account, this new beta update is going to be of interest to you. The messaging app is working on multi-account support for its Android app, an update that would allow you to switch between profiles on the same device, WABetaInfo reports. The feature appears to work much like switching accounts on fellow Meta-owned app Instagram, with a pop-up at the bottom of the app showing your current accounts and the option to add new ones.
"📝 WhatsApp beta for Android 2.23.13.5: what's new? Thanks to the business version of the app, we discovered that WhatsApp is working on a multi-account feature, and it will be available in a future update of the app!" (WABetaInfo on Twitter: https://t.co/jDnLxnJtbv, pic.twitter.com/kz4PrYbCvX)
Any new accounts will be stored on your device and, of course, can be logged out of at any point. Multi-account support could be handy if you have separate work and personal numbers, or if you want to try out recent social media-centric WhatsApp features like Channels, which lets you send broadcasts such as photos and polls to followers; WhatsApp plans to monetize the feature for creators in the future. Similarly, WhatsApp has reportedly been working on a username feature that would let you find people the same way you would on Instagram or Twitter, without needing their phone numbers.
Multi-account support also follows the iOS and Android release of companion mode, an update that allows you to use the same WhatsApp account on up to four phones. Previously, you could only be logged in on a single mobile phone along with your tablet and computer.
There's no timeline yet for when you'll be able to jump between accounts from one device. Expect multi-account support to be widely available once it's fully released; right now, it's only visible through an Android beta update.
Logitech has rolled out new AI-powered tools for its Streamlabs platform that could make editing podcasts go much, much faster. Starting today, Streamlabs Ultra subscribers will get access to Podcast Editor, which provides easy text-based editing capabilities they can use to auto-generate transcripts and real-time translations. They can also use the editor to add subtitles to their video podcasts in several languages, as well as create clips in different sizes (and with different template designs) for sharing on platforms such as Facebook and TikTok.
The tool's interface centers on a text editor where users can highlight parts of the transcript and automatically create short clips featuring those sections of their podcast. That editor is also where users can generate translations, as well as style and insert subtitles. Users can also remove filler words like "ums" and awkward pauses from their podcasts in just a few seconds.
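Under the hood, text-based editing like this generally works by mapping each transcript word to a timestamp and cutting the matching audio ranges. Below is a minimal sketch of the filler-removal idea in Python using pydub. It is not Streamlabs' implementation; the word-list format and the set of filler words are assumptions for illustration only.

```python
# Minimal sketch of text-based filler removal; NOT Streamlabs' code.
# Assumes a transcription step has already produced word-level
# timestamps, e.g. [("um", 1200, 1500), ...] in milliseconds.
from pydub import AudioSegment  # pip install pydub (requires ffmpeg)

FILLERS = {"um", "uh", "erm"}

def remove_fillers(audio_path: str, words: list, out_path: str) -> None:
    audio = AudioSegment.from_file(audio_path)
    cleaned = AudioSegment.empty()
    cursor = 0  # position in ms up to which audio has been kept
    for text, start_ms, end_ms in words:
        if text.lower().strip(".,?!") in FILLERS:
            cleaned += audio[cursor:start_ms]  # keep speech before the filler
            cursor = end_ms                    # skip over the filler itself
    cleaned += audio[cursor:]                  # keep everything after the last cut
    cleaned.export(out_path, format="mp3")

# Usage: drop two "um"s flagged in a (hypothetical) transcript
remove_fillers("episode.wav",
               [("um", 1200, 1500), ("so,", 1500, 1700), ("um", 8000, 8300)],
               "episode_clean.mp3")
```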
Logitech, which purchased the creator of the Streamlabs OBS livestreaming app back in 2019, says Podcast Editor could trim hours off creators' total edit time. Although Streamlabs Ultra subscribers will get the most out of Podcast Editor, seeing as the paid service allows them to manage 40 hours of content, non-paying users will also get limited access to the tool. They can use Podcast Editor through the free version of Streamlabs and edit one hour of content at no cost every month.
Vincent Borel, Head of PC Gaming and Creators at Logitech G, said: "Podcast Editor now enables Streamlabs to provide the most robust suite of offerings for creators to reach their audiences wherever they are while focusing on the elements of content creation they love the most - streaming and engaging with their audience."
Feel like the Steam desktop client was long overdue for a major upgrade? So did Valve. Today the company released an update for the platform that includes many of the features it has been testing in the app's beta channel. The latest client features updated fonts and menus, a revised notification system and a redesigned in-game overlay. Better still, Valve says the platform has been rebuilt with an all-new framework designed to help features ship simultaneously across all versions of Steam.
That means some of the new features baked into the desktop client are also already available on Steam Deck. Specifically, Valve called out the client's overhauled in-game overlay — in addition to a new interface and more versatile toolbar, players also now have access to a new notes tool that syncs across PCs. Thanks to the new framework, this feature is now available on desktop and Steam Deck simultaneously. The overlay also has a new "pin" feature that will allow users to keep that notes tool (or any other window from the in-game interface) visible during gameplay.
Valve is also trying to clean up the client's notifications. Clicking on the icon should now only show the most recent and relevant notifications, and will prompt users to click through to see a full historical view. Users should see small updates across the rest of the client as well, including updated dialog text, new fonts and tweaked colors.
The new features are nice, but Valve seems most proud of the improvements it's made under the hood. In addition to the new framework, the company says it's enabled hardware acceleration for Mac and Linux users, offering a smoother experience across all platforms.
Last, and perhaps least, the update may be the end of the legacy Steam Big Picture mode. Buried among the client's long list of bug fixes is a note that the command line option to enable "oldbigpicture" has been removed. We're all using the Steam Deck interface now.
The new Steam Client update is available to download now. Check out the full patch notes for details.
Capture One has brought its eponymous photography app to the iPhone. Photographers can connect their camera to their phone and shoot images directly to the app. Capture One works with more than 500 cameras, the company says, including Canon, Sony, Nikon, Fujifilm, Leica and Sigma models.
The app can automatically apply edits to images as your camera sends them to your iPhone. As such, Capture One suggests, photographers can swiftly provide their clients with edited images. You can plan ahead by creating styles on Capture One's desktop or iPad apps and AirDropping them to your iPhone.
Capture One enables photographers to share a live link of a shoot with their colleagues, who can follow along in real time whether they're remote or on location. The company suggests this will allow collaborators to quickly select their favorite shots and provide feedback from any device, wherever they might be.
Other features of the app include RAW conversion and color processing. You'll be able to transfer shots via the cloud to Capture One Pro and finish editing on your desktop. Capture One says ratings, color tags and edits will remain intact when you transfer your images. You can export images from the app however you like, the company said, including to an external SSD.
Capture One, which is an increasingly popular Lightroom alternative, is available in the App Store now. A subscription costs $5 per month after a seven-day trial, but users who have the iPad app or the All in One bundle can use Capture One on iPhone at no extra cost.
Google released its redesigned Home app last month, adding routines to give users more control over smart home automations. Now, it's introducing a new script editor, the company announced in a Google Nest blog post. It gives users even more granular control over automations, letting them do things like "dim lights and lower blinds when the living room TV is on after dark," to cite one Google example. It does require some basic programming abilities, though, as it uses the YAML data serialization language.
Building an automation requires three elements: starters, conditions and actions. A starter triggers the automation; in the example above, that's flipping on the TV. Conditions are prerequisites that must be met before the script will run; for example, the time must be between sunset and sunrise. Finally, actions specify what the devices then do; in this case, lowering the blinds and dimming the lights.
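To give a sense of how those three elements fit together, here's a hypothetical sketch of the TV example as a YAML script. The starter/condition/action structure follows Google's description above, but the specific type names and fields are illustrative assumptions, not confirmed script editor syntax.

```yaml
# Hypothetical sketch only: type and field names are illustrative
# and may not match the script editor's exact schema.
metadata:
  name: Movie night
  description: Dim lights and lower blinds when the TV turns on after dark
automations:
  starters:                          # what triggers the script
    - type: device.state.OnOff
      device: TV - Living Room
      state: on
      is: true
  condition:                         # must hold for the script to run
    type: time.between
    after: sunset
    before: sunrise
  actions:                           # what the script then does
    - type: device.command.BrightnessAbsolute
      devices: Floor Lamp - Living Room
      brightness: 20
    - type: device.command.OpenClose
      devices: Blinds - Living Room
      openPercent: 0
```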
While Google already offers a decent level of control with routines, you can't do things like program multiple starters into an automation or set conditions. The script editor, by comparison, offers nearly 100 starters, conditions and actions for building custom automations.
The script editor is available in a public preview build of the Google Home app or on the web, along with a variety of sample scripted automations. Google has provided a step-by-step guide as well. It works across Google Home and third-party smart home devices, and automations work with popular sensors including Matter-supported sensors. For more, check out the Google Nest blog.
It's been quite a day for Reddit. Thousands of communities have temporarily closed shop to protest changes the company is making to its API, which is impacting several third-party apps. On top of that, the platform suffered a "major outage" across its desktop and mobile websites, as well as the mobile apps.
"We're aware of problems loading content and are working to resolve the issues as quickly as possible," read a message on the Reddit status page as of 10:58AM ET. By 11:30AM, the site was loading again.
"A significant number of subreddits shifting to private caused some expected stability issues, and we’ve been working on resolving the anticipated issue," Reddit told Engadget in a statement.
A bot was tracking all of the subreddits that were going private as part of the protests. As you might expect, the bot was out of commission while Reddit was down, but it's up and running again.
Reddit said in April that it would start charging for access to its API, which third-party developers have used in thousands of apps that tie into the platform, such as moderation tools. While the primary target of the API changes may have been companies scraping Reddit for content to train large language models for generative AI systems, the move has been a significant blow for makers of third-party clients that many redditors prefer to the company's own website or apps.
After claiming that he would have to pay $20 million to keep operating Apollo for Reddit as is, Christian Selig ultimately decided to shut down the app. Apollo will close its doors on June 30th. RIF, another popular third-party Reddit app, will shut down on the same day.
The Reddit community’s mass protest over the company’s controversial API changes has started. Thousands of subreddits have “gone dark,” setting their communities private and making their content inaccessible to anyone not already subscribed.
Some of the site’s most popular subreddits, including r/Music, r/funny, r/aww and r/todayilearned — each of which has millions of followers — have joined the effort, along with thousands of other communities. The movement has grown significantly in the last few days following CEO Steve Huffman’s AMA with users in which he defended the new policies, which will result in popular third-party apps like RIF and Apollo shutting down for good.
As of last week, the number of participating subreddits was just over 3,000. But by Monday morning, the number had climbed to more than 6,200 communities, according to a Twitch stream tracking the protest. With the blackout, participating subreddits have posted brief messages alerting users that they are protesting the company’s planned API changes. Most have committed to a 48-hour blackout, but at least 60 subreddits say they plan to protest “indefinitely” until the company walks back its changes. Many are also urging users not to browse Reddit at all. Some have also set up Discord servers to encourage subscribers to stay off of Reddit.
The backlash against the company’s new API policy kicked off after Christian Selig, the developer behind Reddit client app Apollo, shared that Reddit’s new pricing would cost him as much as $20 million a year to keep his app going. The company further angered Apollo fans by claiming that Selig had “threatened” the company, which the developer promptly refuted with an audio clip of a phone call with a Reddit employee. Huffman then doubled down on the criticism in his AMA last week.
“As the subreddit blackout begins, I wanted to say thank you from the bottom of my heart to the Reddit community and everyone standing up,” Selig wrote in a post on Twitter. “Let's hope Reddit listens.”
Reddit’s users aren’t only upset about the company’s treatment of Selig and Apollo, though. They’re also frustrated about losing moderation and accessibility features that are only available via third-party apps. In a message to users, moderators of r/blind said the native Reddit app was so lacking in accessibility that a sighted user had to switch the subreddit to private.
If Reddit was a restaurant third party apps are franchises. We can get a burger from Reddit directly or from a franchise. The official Reddit location is at the top of a cliff. Disabled people can't get there. Reddit is charging franchise fees so high nobody else can afford to offer burgers. We, with thousands of other subreddits, have gone dark for 48 hours. We will be back on June 14. Our Discord server remains open. Thank you for understanding; app so bad, vision required to go dark
Reddit’s moderators — who are often quick to point out that they are unpaid volunteers — shared similar frustrations. “In many cases these apps offer superior mod tools, customization, streamlined interfaces, and other quality of life improvements that the official app does not offer,” moderators wrote in an open letter. “The potential loss of these services due to the pricing change would significantly impact our ability to moderate efficiently, thus negatively affecting the experience for users in our communities and for us as mods and users ourselves.”
For now, it's unclear whether the protest will be able to influence Reddit's leaders. The company didn’t immediately respond to a request for comment, but has previously defended the new API policy, citing the rise of generative AI companies taking advantage of its data. “We’ll continue to be profit-driven until profits arrive,” Huffman said last week in his AMA.
It’s easy to groan when Apple describes the Vision Pro as a “spatial computer.” Isn’t it just a high-end mixed reality headset? To a degree, yes. You can play games, create content and be productive on a much cheaper device like the $299 Meta Quest 2. And if you’re a professional who needs to get serious work done, wearables like the Quest Pro and Microsoft’s HoloLens 2 can already handle some of those duties. There’s not much point to buying Apple’s offering if you just want a refinement of the status quo.
However, it would be wrong to say that the Vision Pro is just a faster, prettier version of what you’ve seen before. In many ways, Apple’s headset concept is the polar opposite of Meta’s — it’s building a general computing platform that encompasses many experiences, whereas Meta mostly sees its hardware as a vehicle for the metaverse. And Microsoft’s HoloLens is courting a completely different audience with different needs. So, Apple already stands out from the herd simply by embracing a different mixed reality philosophy.
Software: A complete platform
The mixed reality headsets you’ve seen to date, including Meta’s, have typically centered around hop in, hop out experiences. That is, you don the headgear to accomplish one thing and leave as soon as it’s done. You’ll strap in to play a round of Beat Saber, meet your friends in Horizon Worlds or preview your company’s latest product design, but switch to your computer or phone for almost everything else.
That’s fine in many cases. You probably don’t want to play VR games for long periods, and you might rarely need an AR collaboration tool. But that also limits the incentive to buy a headset if it’s not for general use. And while Meta envisions Quest users spending much of their time in the metaverse, it hasn’t made a compelling argument for the concept. It’s still a novelty you enjoy for short stints before you return to Facebook or Instagram. You may come for a virtual party or meeting, but you’re not going to hang out for much longer. And that’s backed up by data: The Wall Street Journal reported last fall that most Horizon Worlds users don’t come back at all after the first month, and only nine percent of worlds have ever had more than 50 visitors.
The interface is barebones, too. While there’s a degree of multitasking, Meta’s front-end is largely designed to run one app at a time. There’s not much flexibility for positioning and resizing your apps, and you can’t really run 2D and 3D programs side-by-side. This helps make the most of modest hardware (more on that later), but you aren’t about to replace an office PC with a Quest Pro.
Meanwhile, Apple’s VisionOS is precisely what it sounds like: a general-purpose operating system. It’s clearly designed for running multiple apps at once, with a sophisticated virtual desktop that can juggle 2D and 3D software placed throughout your physical space. It includes familiar apps like the Safari web browser, and it can run hundreds of thousands of iPad and iPhone titles. That’s critical — even if you rarely need mixed reality apps, you can still take advantage of a vast software library without connecting to a computer. Meta has just over 1,000 apps in its store, and while all of those are designed with headsets in mind, they just won’t cover as many use cases.
Even at this early stage, the Vision Pro offers a greater breadth of possibilities. Yes, you can watch videos, make video calls or access your computer like you would on other headsets, but you also have enhanced versions of key apps from your phone or tablet, like Messages and Photos. You can play conventional video games on a virtual display. And since you have an extra-sharp view of the outside world, it’s easier to interact with others than it has been with past wearables — during the keynote, Apple showed people talking to coworkers and friends. My colleague Devindra says the Vision Pro interface is Minority Report-like in its sheer power and ease of use, and that’s no small compliment given that the movie’s portrayal of holographic computing is considered a holy grail.
And before you ask: While Microsoft’s HoloLens could easily be seen as the parent of Vision Pro-style spatial computing, Apple isn’t just following Microsoft’s lead. Aging hardware notwithstanding (HoloLens 2 has been around since 2019), Microsoft’s headset and interface are aimed primarily at business customers who need specialized mixed reality apps and only occasionally dip into semi-conventional software like Teams. Apple’s platform is simply more comprehensive. It’s meant to be used by everyone, even if the initial device is best-suited to developers and pros.
Hardware: A computer on your head
The technology in mixed reality headsets like Meta’s Quest line is frequently optimized for battery life and light weight at the expense of performance. Their mobile-oriented chips aren’t usually powerful enough to handle multiple demanding apps or render photorealistic visuals, and even the Quest Pro’s Snapdragon XR2+ chip has its roots in the 865 that powered the three-year-old Galaxy S20. There are advantages to this (you wouldn’t want a heavy headset during a Supernatural workout), but there’s also no question that Meta, HTC and others are making deliberate tradeoffs.
If Meta’s mixed reality proposition revolves around lean, focused headsets that get you into the metaverse, Apple’s Vision Pro is a do-it-all machine. The M2 inside is a laptop-class chip that can easily run multiple apps at once with rich graphics, and the 4K per eye resolution ensures you won’t have to squint at a web browser or spreadsheet on a virtual desktop. It’s also one of the few headsets that can capture 3D photos and videos, although that’s admittedly a novelty at the moment.
Apple is also taking a very different approach to input than Meta, or even Microsoft. While eye and hand tracking aren’t new, Apple is relying on them exclusively for navigating the general interface. You only want to use physical controllers if you’re playing conventional games or prefer the speed of typing on a real keyboard. And unlike HoloLens, you don’t need to point or otherwise make conspicuous gestures. You just look at what you want and pinch your fingers to manipulate it, even if your hands are on your lap. The Vision Pro is meant to be intuitive and comfortable for extended periods, like a computer, even if that means giving up the conveniences of buttons and triggers.
A new strategy doesn’t solve everything, but it might help
This isn’t to say that Apple has addressed all of mixed reality’s problems just by taking a different approach. Headsets still create solitary, isolating experiences. While you could more realistically wear a Vision Pro all day than a Quest Pro due to the stronger app selection and higher-resolution display, you’re still putting a screen between yourself and the outside world. It’s heavier than you might like. Apple also hasn’t solved the too-short battery life that’s common in this category, so you won’t be free to roam during the workday.
The $3,499 price underscores one of the biggest challenges: It’s difficult to make technology that lives up to the promises of mixed reality while remaining accessible to everyday users. Apple may have found a way to put a fast, easy-to-use computer on your head, but it hasn’t figured out how to make that computer affordable. It’s a much riskier strategy than Meta’s in that regard. Meta is undoubtedly cost-conscious (it even dropped the Quest Pro’s price to $999), and is gradually upgrading its hardware to make mixed reality more viable at a given price. See the $499 Quest 3’s pass-through cameras as an example. Apple, meanwhile, is betting that it’s more important to nail the execution first and think about affordability later.
Is Apple’s overall strategy better? Not necessarily. Meta may be struggling to popularize the metaverse, but it’s still the current frontrunner in mixed reality hardware for a reason: It offers well-made, reasonably priced headsets with enough useful apps to appeal to enthusiasts. As alluring as Apple’s spatial computing debut might be, it’s also untested. There’s no guarantee people will take a chance on the Vision Pro, even if rumors of an eventual lower-cost model prove true.
With that said, Apple’s different direction is notable. Mixed reality is still a niche industry, despite everything Meta and other companies have done to expand its appeal. Even if Apple fails with the Vision Pro, it will at least show more of what’s possible and provide lessons that could improve the technology at large.
The “save” icon for plenty of modern computer programs, including Microsoft Office, still looks like a floppy disk, despite the fact that these have been effectively obsolete for well over a decade. As fewer and fewer people recognize what this icon represents, a challenge is growing for retrocomputing enthusiasts who rely on floppy disk technology to load programs into their machines. For some older computers that often didn’t have hard disk drives at all, like the Commodore 64, it’s one of the few ways to load programs into memory. And rather than maintaining an enormous collection of floppy disks, [RaspberryPioneer] built a way to load programs onto a Commodore using Microsoft Excel.
The Excel sheet that manages this task uses Visual Basic for Applications (VBA), an event-driven programming language built into Office, to handle the library of applications for the Commodore (or Commodore-compatible clone), including D64, PRG, and T64 files. The sheet also stores details about each title, including original cover art and any notes the user wants to keep. Using VBA, it communicates with an attached Arduino, which is itself programmed to act as a disk drive for the Commodore. The necessary configuration for interfacing with the Arduino is handled within the spreadsheet as well. Some additional hardware is needed to connect the Arduino to the Commodore’s communications port, but as long as the Arduino is a 5V version and not a 3.3V one, this is fairly straightforward, and the code for it can be found on its GitHub project page.
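The build does all of this in VBA, but the PC-to-Arduino handoff is easy enough to sketch in general terms. Here is the gist in Python with pyserial rather than the project’s VBA; the one-byte command and length header are hypothetical stand-ins, since the real protocol lives in the project’s GitHub repo.

```python
# Conceptual sketch of the PC-to-Arduino handoff, in Python rather
# than the project's VBA. CMD_LOAD and the length header are
# hypothetical; see the project's GitHub page for its actual protocol.
import struct
import serial  # pip install pyserial

CMD_LOAD = b"L"  # hypothetical "load program" command byte

def send_prg(port: str, prg_path: str) -> None:
    with open(prg_path, "rb") as f:
        data = f.read()  # .PRG files begin with a 2-byte load address
    with serial.Serial(port, 115200, timeout=2) as ser:
        ser.write(CMD_LOAD)                      # tell the Arduino what's coming
        ser.write(struct.pack("<I", len(data)))  # 4-byte little-endian length
        ser.write(data)                          # stream the raw program bytes
        print(ser.readline().decode(errors="replace").strip())  # e.g. an "OK" ack

send_prg("COM3", "library/pitfall.prg")
```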
With all of that built right into Excel, and with an Arduino acting as the hard drive, this is one of the easiest ways we’ve seen to manage a large software library for a retrocomputer like the Commodore 64. Of course, emulating disk drives for older machines is not uncommon, but we like that this one can be much more dynamic and simplifies the transfer of files from a modern computer to a functionally obsolete one. One of the things we like about builds like this, or this custom Game Boy cartridge, is how easy it can be to get huge amounts of storage that the original users of these machines could have only dreamed of in their time.
Meta’s generative AI plans are starting to come into focus. Though the company hasn’t adopted much in the way of generative AI features yet, Mark Zuckerberg has made it clear he wants Meta to be viewed as one of the leaders in the field.
Now, Axios reports that at a companywide all-hands meeting this week, Zuckerberg laid out some of Meta’s plans in more detail. The CEO reportedly briefed employees on some of the ways Meta plans to put generative AI “into every single one of our products.”
The planned features include AI “agents” for WhatsApp and Messenger, something that Zuckerberg has discussed in the past. And while Axios reports that WhatsApp and Messenger may be first to get the feature, an early version of an AI chatbot was spotted in the Instagram app this week by app researcher Alessandro Paluzzi. Screenshots he shared indicated the app could have as many as 30 different “personalities” to choose from.
Also in the works, according to Axios: generative AI photo editing in Instagram. The feature would apparently allow users to edit their photos via text prompts and then share the images back to their Story. Zuckerberg has also recently discussed — during the company's most recent earnings call — post creation tools for Facebook, as well as for the platform’s advertisers.
It’s still unclear just how soon some of these features may launch, but it sounds like Zuckerberg is hoping to see them sooner rather than later. The company is also hoping employees will come up with ideas for new generative AI features of their own, and is reportedly hosting an internal hackathon to surface them.