Logitech is bringing together two of its acquisitions from the past few years in its quest to provide live streamers with the tools they need: The company has just launched a Streamlabs desktop plugin for Loupedeck consoles. If you'll recall, the company purchased Streamlabs — a popular livestreaming platform that offers a robust set of tools for broadcasting on Twitch, YouTube and Facebook — back in 2019. And in July this year, it acquired Loupedeck, which makes video streaming consoles that rival Elgato's Stream Deck.
The Streamlabs Desktop Plugin 1.0 turns the Loupedeck Live and Live S devices into external controllers for the streaming software. Creators can use the consoles' dials for more precise audio control, and they can trigger Streamlabs' desktop commands and view the status of their livestream straight from their Loupedeck device. They can also set up scenes, sources, audio sources and scene collections in Loupedeck's software, freeing up space on their monitor for other things, such as gameplay or chat. Logitech says the new plugin is rolling out with software update 5.8 today and will come preinstalled on all new Loupedeck devices.
"When we acquired Loupedeck earlier this year, we had a goal of providing a more seamless experience for Streamlabs creators from onboarding to everyday use with presets available out of the box for Loupedeck devices," Ujesh Desai, Logitech G's general manager, said in a statement. "This 1.0 release is our first offering, which accelerates our goal to bring an ecosystem of hardware and software to creators everywhere, enabling them to focus on doing what they do best, which is make amazing content."
This article originally appeared on Engadget at https://www.engadget.com/logitech-launches-a-streamlabs-plugin-for-loupedeck-consoles-070159158.html?src=rss
Third-party iOS Reddit app Narwhal has introduced a $4-per-month subscription plan that will take effect in the next week or two. The developer announced the plan Tuesday on Reddit (via The Verge). The pricing is designed to let the developer cover Reddit’s widely disparaged API fee hikes, which led to mass online protests earlier this year and the shutdown of the beloved client Apollo.
The app’s developer, who uses the handle u/det0ur on Reddit, wrote that the pricing was “definitely an experiment” as they try to keep their app afloat. “If I absolutely have to, I will transition to some other plans,” Narwhal’s creator wrote. “But I want to try this first.”
The developer had previously announced tiered pricing based on users’ API calls, but u/det0ur scrapped that after realizing how cumbersome that would be for users. “[Let’s] be real, the 2003-esque cell phone plan with monitored usage just isn’t great,” the developer wrote in a separate post from last week. “Who wants to even worry about what an API Call is? Let alone how much it will cost.”
However, API calls are an inescapable financial reality for third-party developers hoping to offer users an alternative to the official Reddit client. Reddit revealed its maligned API changes this spring, which led Apollo developer Christian Selig to say the updated pricing would cost him $20 million to keep the app going “as-is.” More than 6,000 subreddits went dark to protest the changes, but (unlike Unity’s recent walkback) Reddit dug in its heels and weathered the storm, leading to Apollo’s closure and the forced reopening of communities that continued to protest.
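For context, the rate Reddit announced was reported at $0.24 per 1,000 API calls, and Selig said Apollo was making roughly 7 billion requests a month. A quick back-of-envelope calculation (using those reported figures, which aren't independently verified here) shows how the math lands near $20 million a year:

```python
# Back-of-envelope check of the Apollo figure, using the reported rate of
# $0.24 per 1,000 API calls and Apollo's roughly 7 billion monthly requests
# (both numbers as cited by Apollo's developer).
RATE_PER_CALL = 0.24 / 1000          # dollars per API call
MONTHLY_CALLS = 7_000_000_000        # approximate monthly request volume

monthly_cost = MONTHLY_CALLS * RATE_PER_CALL
annual_cost = monthly_cost * 12

print(f"~${monthly_cost / 1e6:.2f}M per month")  # ~$1.68M per month
print(f"~${annual_cost / 1e6:.1f}M per year")    # ~$20.2M per year
```

A $4-per-month subscription, by the same logic, only works if each Narwhal subscriber generates few enough API calls that their share of the bill stays under that price.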
Users switching to Narwhal sound pleased with the pricing and the app. “The customization options are great; I have been able to ‘rebuild’ Apollo as closely as possible,” wrote u/Neryuslu. “You’re the first developer ever to get a monthly sub from me. I have refused this model so far, but in this case it’s obviously different. Still sucks to indirectly pay Reddit like this. Fuck you, u/spez,” they wrote, referring to the handle of Reddit CEO Steve Huffman.
This article originally appeared on Engadget at https://www.engadget.com/reddit-client-narwhal-tries-4-monthly-pricing-to-navigate-api-changes-163042022.html?src=rss
TikTok announced a new API today that will let you post (and do other things) directly to the platform from approved third-party apps. The company says the new feature, an extension of Share to TikTok, “increases the resources and tools our community can choose from to easily navigate every stage of the creation process both on and off the platform.” Early partners for the Direct Post API include Adobe, Twitch, Blackmagic Design (maker of DaVinci Resolve) and others.
TikTok Direct Post integrations will let creators make drafts, set captions or audience settings, and schedule or post content directly from supported third-party apps. Only videos are supported at launch, but photo content is “coming soon.”
Third-party apps supporting the feature at launch include Adobe Premiere Pro, Adobe Express, CapCut (owned by TikTok’s parent company ByteDance), DaVinci Resolve, SocialPilot and Twitch. TikTok says developer partners will be “vetted through an audit process” before they can use the API.
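To make the workflow concrete, here’s a rough sketch of the kind of request body a vetted partner app might assemble for a direct post. The endpoint path and field names below are illustrative assumptions based on the announcement, not TikTok’s documented API contract:

```python
import json

# Illustrative only: the URL and field names are assumptions for the sake
# of example, not TikTok's documented Direct Post contract.
DIRECT_POST_URL = "https://open.tiktokapis.com/v2/post/publish/video/init/"

def build_direct_post_payload(caption, privacy="PUBLIC_TO_EVERYONE",
                              schedule_time=None):
    """Assemble a hypothetical JSON body for publishing a video with a
    caption, an audience setting and an optional scheduled post time
    (a Unix timestamp), mirroring the capabilities TikTok describes."""
    post_info = {"title": caption, "privacy_level": privacy}
    if schedule_time is not None:
        post_info["schedule_time"] = schedule_time
    return {
        "post_info": post_info,
        "source_info": {"source": "FILE_UPLOAD"},
    }

payload = build_direct_post_payload("Cut together in Premiere Pro")
print(json.dumps(payload, indent=2))
```

An actual integration would send a body like this with an OAuth access token and then upload the video bytes in a follow-up request; consult TikTok’s developer documentation for the real flow.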
“Now more than ever, publishing content in real-time has become a necessity, and creators of all skill levels need tools that can empower them with greater efficiency and without constraints,” said Deepa Subramaniam, Adobe’s Creative Cloud marketing VP. “With the new Direct Post feature for TikTok available in Adobe Express and Premiere Pro, creators can continue to create standout content, but with increased speed and without adding more interruption to their creative workflows.”
This article originally appeared on Engadget at https://www.engadget.com/tiktok-now-lets-you-post-directly-from-third-party-apps-160514639.html?src=rss
Just seven months after its beta debut, Adobe's Firefly generative AI is set to receive a trio of new models as well as more than 100 new features and capabilities, company executives announced at the Adobe Max 2023 event on Tuesday. The Firefly Image 2 model promises higher-fidelity generated images and more granular controls for users, while the Vector model will allow graphic designers to rapidly generate vector images, a first for the industry. The Design model, for generating print and online advertising layouts, offers another first: text-to-template generation.
Adobe is no stranger to using machine learning in its products. The company released its earliest commercial AI, Sensei, in 2016. Firefly is built atop the Sensei system and offers image and video editors a whole slew of AI tools and features, from "text to color enhancement" saturation and hue adjustments to font and design element generation and even creating and incorporating background music into video scenes on the fly. The generative AI suite is available across Adobe's product ecosystem including Premiere Pro, After Effects, Illustrator, Photoshop and Express, as well as on all subscription levels of the Creative Cloud platform (yes, even the free one).
Firefly Image 2 is the updated version of the existing text-to-image system. Like its predecessor, this one is trained exclusively on licensed and public domain content to ensure that its output images are safe for commercial use. It also accommodates text prompts in any of 100 languages.
Adobe's AI already works across modalities, from still images, video and audio to design elements and font effects. As of Tuesday, it also generates vector art thanks to the new Firefly Vector model. Currently available in beta, this new model will also offer Generative Match, which recreates a given artistic style in its output images. This will enable users to stay within the bounds of a brand's guidelines and quickly spin up new designs using the aesthetics of existing images, as well as generate seamless, tileable fill patterns and vector gradients.
The final model, Design, is geared heavily toward advertising and marketing professionals for use in generating print and online copy templates in Adobe Express. Users will be able to generate images in Firefly, then port them to Express for use in a layout generated from a natural language prompt. Those templates can be generated in any of the popular aspect ratios and are fully editable through conventional digital methods.
The Firefly web application will also receive three new features. Generative Match, as above, maintains consistent design aesthetics across images and assets. Photo Settings will generate more photorealistic images (think: visible, defined pores) and let users tweak images using photography metrics like depth of field, blur and field of view; the system's depictions of plant foliage will reportedly also improve under this setting. Prompt Guidance will even rewrite whatever hackneyed prose you came up with into something the model can actually work from, reducing the need for the wholesale regeneration of prompted images.
This article originally appeared on Engadget at https://www.engadget.com/adobes-next-gen-firefly-2-offers-vector-graphics-more-control-and-photorealistic-renders-160030349.html?src=rss
Few tech companies have embraced generative AI as wholeheartedly as Adobe. At Adobe Max, its annual creativity conference, it unveiled a new version of the Firefly GAI model. Not only that, the company announced more GAI features for Adobe Express, just weeks after making Firefly more broadly available in the app.
Adobe Express now includes features such as Generative Fill. This enables users to add, remove or replace items, people and other aspects of images using text prompts. On a similar note, the Text to Template function can help users generate editable templates for things like graphics and social media posts based on text descriptions. Text to Template is powered by the new Firefly Design Model, which Adobe says will generate content that's safe for commercial use.
On top of that, Express now offers a GAI-powered translation tool. Translate can localize content between 45 languages, Adobe says. Meanwhile, new Drawing and Painting functions offer more than 50 multicolor paint and decorative brushes, which can mimic things like charcoal, pencil and watercolor textures. The company also noted that student-friendly drawing templates available in Express for Education make it easy to create effects like flowers and hearts.
A handy resize tool will make it a cinch for folks to automatically get multiple versions of a design to fit all social channels, Adobe said. Meanwhile, you'll be able to use the app to directly schedule and share videos to the likes of TikTok, Instagram, Facebook, LinkedIn, Pinterest and X.
Adobe isn't exactly stopping with Express. At Max, it announced more than 100 new features across its main Adobe Creative Cloud applications. These include additional Firefly-powered tools for Illustrator and Photoshop, as well as new editing capabilities in Lightroom.
This article originally appeared on Engadget at https://www.engadget.com/adobe-brings-more-generative-ai-features-to-express-160018288.html?src=rss
When Gmail launched for Wear OS last week, folks wondered how long it would be before the companion Google Calendar app would arrive. Well, it’s here. Google Calendar has officially shipped for Wear OS, giving smartwatch-wearers all kinds of access to their day-to-day schedules, as originally spotted by 9to5Google.
You’ll see it on the launcher as Calendar once installed. It’s basically a beefed-up version of the Schedule view found on phones. When you open up the app, you'll find a daily calendar complete with precise location details for events, notes, notifications and more. You can adjust whether or not you’ll be attending any saved event, in addition to deleting the event entirely.
If you’re tired of staring at a teensy smartwatch screen, you can also use the Calendar app to open up any date or event on your smartphone. Certain events, like holidays and birthdays, can appear as background images in the main feed of the app. There’s even some Google Tasks integration here, so you can mark events as complete.
Like many smartwatch apps, Google Calendar is for consumption and not creation. You cannot use the app to make new events or tasks. There are, however, two new Wear OS tiles that let you quickly glance at upcoming events and tasks without opening up the full app.
Google Calendar for Wear OS is now available to download on the Play Store. Just search for Calendar in the wearable version of the Play Store or remotely install it via the app listing.
As Wear OS 3 and Wear OS 4 continue to gain new features, the company’s older smartwatch operating systems are losing tools. Google recently announced that its proprietary voice assistant would no longer work on watches running anything before Wear OS 3.
This article originally appeared on Engadget at https://www.engadget.com/google-calendar-finally-lands-on-wearos-154535767.html?src=rss
Popular audio gadget maker Bastl just released an iOS app called Outsidify that lets you capture and transform audio directly from your iPhone’s speakers and microphone. Despite the slightly cringey app name, it looks pretty darned fun, allowing users to explore the ambient noise around them to create some truly unique soundscapes.
Once you capture some audio via the app, use Bastl’s latest creation to make harmonious or discordant feedback, manipulate responses, apply resonant filters and more. You can even use your mouth, speaking directly into the phone’s microphone, to create your own resonant filter.
Additionally, you can capture impulse responses from just about anything, with Bastl using a coffee cup or a construction site pipe as examples. These IRs can then be used to run other audio sources through to create custom reverbs and the like, via a DAW or a standalone piece of hardware. The only caveat here is the phone has to fit inside of the object or the space it's capturing, so your dreams of having an impulse response from the inside of a toilet paper roll are, sadly, quashed.
The integrated media player allows for full looping and lets you adjust the start and end points. There’s also a speed slider, from ¼ speed to 4x, and a cropping function. As for the recorder, it saves WAV files to pass on to other devices, again with adjustable start and end points. You can also speed match recordings, so the recorded speed automatically matches the tempo set in the player while preserving the pitch. Of course, there’s also a countdown timer so you can get in position before the app starts recording.
The audio gets transformed via a feedback pad with adjustment options for amount and tone. There’s even an adjustable delay that changes how long it takes the microphone to reach the feedback pad.
This article originally appeared on Engadget at https://www.engadget.com/bastles-outsidify-app-lets-you-capture-and-transform-sounds-via-a-smartphone-185421887.html?src=rss
Your latest iPhone update is officially here. iOS 17 brings some substantial new features and a lot of upgrades that streamline how you use your iPhone, especially when connecting with other iPhone users.
While the lock screen customizations introduced in iOS 16 formed the big visual change last year, Apple has now applied a similar makeover to your phone calls and contact lists. And at a time when there is no shortage of video call apps and services, it’s trying to make FaceTime even more compelling.
When I previewed the developer build a month or so ago, I focused on messages and FaceTime, both of which got a lot of attention in this update. After a little more time with the finished product, iOS 17 feels like a big quality-of-life upgrade for iPhone users. Without a big tentpole feature, it’s harder to pinpoint why it’s so much better — but I'll try.
Supported devices
Twenty different iPhone models support iOS 17, going as far back as 2018’s iPhone XR. As many of the OS updates this year aren’t particularly processor- or machine learning-intensive, you’re not missing out on much with older supported iPhones. One exception is StandBy, which works best (or as it should) with Apple’s best smartphone screens — always-on displays.
StandBy Mode
With StandBy, Apple is dipping its toe in the smart display waters without making you buy another device. (For now.)
If your iPhone is horizontal and charging, iOS 17 will shift into StandBy mode, ditching your wallpaper and icons for giant clocks, calendar info, now playing widgets, photos and the rest. (One curious oversight: no email widget.)
You’ll need an iPhone 14 Pro or iPhone 15 Pro to ensure it works like it should — that is, always on. With all the other devices, you’ll need to tap the screen to get your information, which defeats the point. StandBy also utilizes the same iOS widget Smart Stacks so you can swipe between different information.
With iOS 17, we finally get interactive widgets, too, so you can toggle smart home lights or tick something off your to-do list without having to launch an entire app. (Another helpful feature coming to Reminders and your to-do lists is an automated grocery list feature, which will detect when you’re composing a shopping list, and draw together products you’ll typically find in the same place in the grocery store.)
Apple’s Continuity upgrades mean you can now use widgets on your Mac, even if you don’t have the same app installed on your computer. There are also more curated widgets for iOS 17, so you can select a specific photo album to populate them (no more screenshots or very dated holiday photos) and dedicate widgets to podcasts, Safari or your music.
Contact posters and FaceTime
Contact Posters remain the big visual twist for iOS 17. However, I’m still waiting for my iPhone-carrying friends to update their devices so I can see the glossy upgrade. Contact Posters mix different profile photos, fonts and colors and will appear when someone calls you, FaceTimes you, or when you’re searching through contacts. This image will also appear when you try out NameDrop, Apple’s new feature for contactless… contact sharing. As I noted in my preview, the profile photo you use doesn’t have to be taken in Portrait mode to ensure the cutout effect between the image and text, which is nice.
NameDrop offers a degree of customization, so when you share your details, you can choose what phone numbers and emails to shoot across by bringing two compatible iPhones close to each other. There’s a lovely visual undulation, sound effect and haptic buzz, making it an odd delight to share your details. Apple also teased an upgraded AirDrop, able to transfer content online even if you step away. However, that feature will arrive later this year.
With FaceTime, alongside some new augmented reality gestures, you can leave a video voicemail if someone doesn’t answer your call. Yes, you’re just sending a video, to be honest, but it’s here if you need to do just that.
Messaging gets better and better
So, Messages is good now? It’s taken some time, but I’ll admit it: I want my friends to ditch WhatsApp and return to the other green messaging app. (And to my Android friends, I’m sorry.)
Apple has improved its sticker features, including Live Stickers, animated stickers taken from Live Photos. iOS 17 now collates all of my cut-outs of dogs, selfies and babies into one drawer. This drawer also houses Memoji, emoji and third-party stickers. Like static cutout stickers before, you can ‘lift’ subjects out of photos by long-pressing on them in the Photos app. With iOS 17, you can add sticker effects, like “shiny” and “puffy,” that reflect faux light when you move your phone. Stickers can now also be used from the sticker drawer and added to photos, documents and screenshots with Markup – that’s the little pencil tip icon.
A new Check In feature, embedded into Messages, can auto-notify someone that you’ve arrived at a destination. If you don’t arrive by a specified time, your iPhone will even ask you to confirm you’re okay, and if you don’t respond, an alert will be sent to whoever you sent the Check In notification to. The recipient can be informed of signal status and battery life. You can even share the route you take, if you’re willing to.
The keyboard is much improved, but I’m not sure how
Apple has taken on board the criticism of its often spotty autocorrect accuracy. It says it’s using a new “transformer language model” for its autocorrect suggestions in English, French and Spanish. Almost immediately, it worked better, and it has improved further over the last few weeks. I noticed my phone swapped ‘bbiab’ for ‘Brian’ in an email to Engadget’s head of video.
This is made even better by the temporary underlining on your autocorrected words, so you can see what’s changed — great for when you didn’t notice your iPhone tweaking your missives.
Tapping on an autocorrection shows a pop-up of the original, so you can easily swap it back if you want. Predictive text suggestions appear mildly improved too. iOS 17 ends Apple’s prudish approach regarding curse words, so you can now save your favorite naughty words, and your iPhone will learn them and (hopefully) use them appropriately.
Live Voicemail and voice note transcription
Live Voicemail is one feature not yet available to me in the UK. And I’d very much like it, please. This voicemail upgrade lets you screen a call through live transcription, with the iPhone parsing what someone says. You can then pick up the call if they’re saying something you’re interested in hearing about – or just let them leave a message.
I had our Executive Editor Aaron Souppouris — whose London accent is incomprehensible to a lot of people — test this feature in the US. If your phone is locked when the call comes in, the system prompts you to unlock to read live. Once unlocked, Live Voicemail caught every word he said, which is pretty impressive.
It’s a different approach from Google’s; the search giant introduced its own call-screening tricks to the Pixel years ago. In Android’s implementation, the device screens calls and asks the caller questions. It’s a little more… interactive. In iOS 17, you get a live transcript of the caller’s message and can choose to interrupt them by picking up the call. Or just get the gist. Google’s technique means people know they’re being screened, which I dislike.
Machine-learning transcription isn’t new on iPhones (you’ve been able to dictate on your phone for years), but the implementation in iOS 17 is. When someone sends you a voice note on Messages, the iPhone can now auto-transcribe the contents of that voice note, as long as the audio is clear enough. I think I made my point during our iOS 17 preview, but it’s my favorite feature this year.
Improvements beyond the iPhone
The iOS 17 benefits even stretch to your AirPods — if they’re the latest ones. With second-gen AirPods Pro, you’ll get adaptive audio — and dropdown icons on your phone to toggle the new features on and off. This adjusts the level of noise cancellation in a noisier environment and is bolstered by a new Conversation Awareness feature, which, when it detects you speaking, will lower the volume of your music or podcast. Unfortunately, it does the same when you cough. Check out our deep dive on the new features here.
If you own AirPlay-compatible devices, iOS 17 will offer up speaker options and automatically connect when you play audio on your iPhone, further streamlining the process. However, with my HomePod, I had to be very close for the auto-connect popup to appear.
While we’re talking audio, voice assistant Siri picks up some minor, but notable, improvements. No more ‘Hey Siri,’ just ‘Siri’ — they’re cool like that now. Siri now handles back-to-back commands, too.
Cross-device improvements even reach AirTags and other Find My-enabled peripherals. Other people can now track these, so two people can monitor the same item.
Elsewhere, Safari now offers separate browsing profiles for work and personal use – or any other way you’d like to divide up your internet exploring. iOS 17 also introduces group password and passkey sharing.
Another simple upgrade to your iPhone experience: any two-factor authentication codes and messages sent to your email will be automatically inserted into your web browser, a feature that’s been available for codes in text messages for years. Better yet, iOS can now automatically delete these texts or emails after you’ve inserted the code, clearing out space, especially in Messages, for the texts that matter.
Missing parts
During the big iOS 17 reveal at WWDC 2023, Apple noted that some features of the new OS wouldn’t be available at launch. One of the big ones is a Journal app.
Apple says that Journal will glean details from other apps, like Messages and Podcasts, automatically suggesting things you might want to recall and write about. The Journal app is scheduled to land before the end of the year.
There are a few other things not here at the time of public release, too, like the enhanced AirDrop capabilities I mentioned earlier. Music collaboration was also teased, with the ability to invite friends to your playlists and let anyone add, reorder, and remove songs – or react to poor choices with emoji.
Another feature I’m waiting on is intelligent form detection for PDFs. Apple says iOS 17 will eventually be able to identify PDF forms across Files, Mail and any scanned files you’ve snapped. If it means I don’t have to pull out my laptop every time I need to fill in a PDF form, I’m on board.
There are some major accessibility upgrades too, which might get lost in the barrage of features. The big one is Personal Voice. After 15 minutes of talking at your iPhone (reading set phrases aloud), the iPhone can simulate your voice, a la the deepfake tricks we’ve seen in recent years. While it’s cool to have a robo-Mat, the use case is anyone who may lose the ability to speak, or finds it difficult to do so now. (It also sounds pretty artificial, having toyed with similar voice models in Descript and other services.) With Personal Voice, you can convert written text into a voice for FaceTime, phone calls and other compatible communication apps.
Another feature tucked away in Accessibility settings is the ability to speed up Haptic Touch. Haptic Touch is the long-press feature that replaced the (arguably better?) 3D Touch first found on the iPhone 6S. A long press on an icon or a photo takes longer than pressing hard did with 3D Touch. Now you can tweak the settings (Accessibility > Touch > Haptic Touch). This immediately sped up all the menu browsing and secondary features I accessed through long presses — give it a try.
Wrap-up
With iOS 17, the visual differences are obvious. But underlying those are many small upgrades, especially for iPhone users who communicate mostly with other iPhone users. If you’re using FaceTime, you can leave a video message or use a handful of wacky augmented reality gestures. If you’re calling them or messaging other iOS 17 users, there are Live Stickers, Check In, Contact Posters, NameDrop and voice note transcription — already the standout feature to me this year. (I’m still waiting for more of my acquaintances to get up to speed and download the update, so I, selfishly, can use these features more.)
If you’re already using AirPods Pro, they’re better, too. Conversation Awareness is already making me look less of an ass when I order my drink at the coffee shop.
Alongside broader quality-of-life improvements to typing and Messages, Apple has continued to push forward with accessibility features, too. We’re still waiting on that journaling app and several more features. Still, there are enough notable changes this year, combining the new (StandBy) with the improved (predictive typing), to keep your iPhone fresh without having to invest in new hardware.
This article originally appeared on Engadget at https://www.engadget.com/ios-17-review-notable-new-features-and-streamlined-touches-140009954.html?src=rss
Tracking your fitness and health just got easier if you use MyFitnessPal and Google's Wear OS. The MyFitnessPal smartwatch app offers a way to keep track of your health stats, including calorie intake and gym gains. While this worked well, it was also limited, since you couldn't edit or input your data directly on your wearable device. However, the latest app update changes things for the better.
In a recent blog post, MyFitnessPal announced updates that will make tracking and logging easier on Wear OS. Now, users can track and log without pulling out their phones. When wearing compatible Android smartwatches, users will have access to new watch tiles and complications — which will give them the ability to log foods they eat regularly while keeping track of daily nutritional intake, like sugar, fiber, fats, calories and even hydration. Users will also be able to see a quick snapshot of their entire day right from their wrists.
This feature will be available to users with a smartwatch running Wear OS, like the new Pixel Watch 2. The MyFitnessPal app is available for download from the Google Play Store.
While this feature isn't entirely new to the MyFitnessPal app on smartwatches — Apple Watch users have had this option for a while — it’s good news that people using Wear OS have another way to track. Monitoring our health and wellness is important and can be easily neglected when we just don't have the time. Any innovation, big or small, that makes keeping up with health stats easier is always a good move. Now if only they could figure out a way to burn and log calories without the exercise.
This article originally appeared on Engadget at https://www.engadget.com/myfitnesspal-update-lets-users-track-meals-or-workouts-on-wear-os-watches-195919769.html?src=rss
In an otherwise incremental update, one of macOS Sonoma’s marquee features is interactive desktop widgets. Although Apple now lets you add widgets to the Notification Center on older versions of macOS, with Sonoma you can plop them down right on the desktop. Here’s how to set up and start using customizable widgets on your Mac computer.
Notification Center widgets
First, if you have any existing Notification Center widgets you’d rather put on the desktop, you can now drag them over directly. They’ll move seamlessly back and forth between the two places, and you can reposition them around the desktop to find a spot you like.
How to access the widget gallery
Apple added a widget gallery (similar to the one on iOS) to make setup easy. Start by right-clicking (or ctrl-clicking) on an unused space on your desktop, and choose the “Edit Widgets” option. The widget gallery will open, displaying available ones from installed Mac apps and your iPhone (if you have one).
The gallery’s main window (above) displays all widgets, while the left sidebar lets you scroll through the list of apps with available ones. If your iPhone’s widgets aren’t showing, ensure your handset is running iOS 17 or later, signed in with the same Apple ID as the Mac and on the same Wi-Fi network.
For apps with both macOS and iOS versions, you’ll see “On This Mac” and “From iPhone” tabs on the upper right where you can switch views. Whichever you choose, tapping on a widget will immediately place it on your desktop, or you can drag it around until you find a spot you like.
How to customize widgets
Once you’ve placed a widget on your desktop, you can right-click (or ctrl-click) on it to view its available options. If you want to make it bigger or smaller, you can switch between sizes in this menu. In addition, “Edit [widget name]” lets you adjust its specific settings (when applicable). You’ll also see the option to “Remove Widget.”
Many native macOS widgets are interactive, allowing you to change settings or perform other tasks without opening the corresponding app. For example, you can check off to-do list items in Reminders or toggle your lights on or off in the Home app — right from the widget. Unfortunately, iPhone widgets on your Mac aren’t interactive and will prompt you to “open [app] on your iPhone to continue” if you click them.
This article originally appeared on Engadget at https://www.engadget.com/how-to-set-up-widgets-in-macos-sonoma-133045709.html?src=rss