Posts with «software» label

How to use your iPhone as a webcam with your Mac

If you want to upgrade your video call setup without buying an external webcam, your iPhone can help. With macOS Ventura or later, Apple’s Continuity Camera feature lets you turn your iPhone into a high-quality, wireless webcam for your Mac. Whether you’re joining a meeting on Zoom, recording a presentation or creating content for YouTube, using your iPhone as a webcam can provide a sharper image, better low-light performance and useful extras like Center Stage and Desk View. The feature works natively in macOS, so it’s easy to set up: mount your phone and start your call. Here’s how to set up and use your iPhone as a webcam with your Mac, along with additional tips for microphone-only use, Desk View, Studio Light and more.

What you’ll need to use Continuity Camera

You’ll need the following things to use this feature properly:

  • An iPhone XR or newer running iOS 16 or later

  • A Mac running macOS Ventura or later

  • Wi-Fi and Bluetooth enabled on both devices

  • Both devices signed into the same Apple ID with two-factor authentication enabled

  • A way to mount your iPhone (Apple sells a MagSafe-compatible Belkin mount, but any secure mount or tripod will work)

Continuity Camera works wirelessly by default, though you can connect your iPhone to your Mac via USB if you prefer a more stable connection.

How to enable Continuity Camera

Continuity Camera is automatically enabled on supported iPhones and Macs. However, it’s worth confirming that the feature is active in your iPhone’s settings:

  1. Open Settings on your iPhone

  2. Tap General

  3. Select AirPlay & Handoff (called AirPlay & Continuity on newer iOS versions)

  4. Make sure Continuity Camera is toggled on

On your Mac, no additional setup is required, but you’ll want to ensure both Wi-Fi and Bluetooth are enabled and that both devices are nearby and awake.

How to use your iPhone as a webcam in macOS apps

Once Continuity Camera is active, your Mac should automatically detect your iPhone as a webcam source in any compatible app. That includes FaceTime, Zoom, Google Meet, Microsoft Teams, QuickTime, Safari and most other video and streaming applications.

To use your iPhone as the camera in a specific app:

  1. Open the app you want to use (e.g., Zoom or FaceTime)

  2. Go to the app’s video settings or preferences menu

  3. Select your iPhone from the list of available camera sources (it may appear as "iPhone Camera")

Your iPhone will automatically activate its rear camera and stream a live video feed to your Mac. Continuity Camera uses the iPhone’s higher-quality rear camera, though you can use the front camera with third-party apps such as EpocCam, iVCam or DroidCam.

If nothing happens, make sure:

  • Both devices are unlocked and on the same Wi-Fi network

  • Continuity Camera is enabled on your iPhone

  • You’re signed into the same Apple ID on both devices
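If you’d rather check from the command line, macOS’s built-in `system_profiler` tool can confirm whether the system sees an iPhone camera at all. This is a rough sketch; the exact device name can vary by macOS and iOS version, so the script just looks for "iPhone" in the camera list:

```shell
# Check whether macOS currently lists an iPhone as a camera device.
# system_profiler and its SPCameraDataType data type are part of macOS;
# on other systems the script skips the check gracefully.
if command -v system_profiler >/dev/null 2>&1; then
    if system_profiler SPCameraDataType 2>/dev/null | grep -qi "iphone"; then
        RESULT="iphone-camera-found"
    else
        RESULT="no-iphone-camera"
    fi
else
    RESULT="not-macos"   # system_profiler is macOS-only
fi
echo "$RESULT"
```

If the iPhone doesn’t appear here, the problem is at the Continuity Camera level (pairing, Wi-Fi/Bluetooth, Apple ID) rather than in any individual app.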

How to use microphone-only mode

In addition to camera input, Continuity Camera lets you use your iPhone as a high-quality microphone source. This is handy if you prefer to use your Mac’s built-in camera or another webcam but still want the clarity of the iPhone’s microphone.

To use your iPhone as a mic:

  1. Open System Settings on your Mac

  2. Go to Sound > Input

  3. Select your iPhone from the list of available input devices

You can also choose the iPhone microphone directly from within most video apps under their audio settings or microphone input menus.
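If you switch inputs often, the third-party SwitchAudioSource tool (installable via Homebrew as switchaudio-osx) can change the system input from Terminal. The device name "iPhone Microphone" below is an assumption: list your inputs first to confirm the exact name on your system.

```shell
# List available input devices, then switch to the iPhone microphone
# by name using the third-party SwitchAudioSource utility.
if command -v SwitchAudioSource >/dev/null 2>&1; then
    SwitchAudioSource -a -t input                      # list input devices
    SwitchAudioSource -s "iPhone Microphone" -t input  # switch by name (assumed name)
    STATUS="input-switch-attempted"
else
    STATUS="SwitchAudioSource-not-installed"
fi
echo "$STATUS"
```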

How to use Desk View

Desk View is a unique feature of Continuity Camera that uses the iPhone’s ultrawide lens to simulate a top-down camera angle. It creates a second video feed showing your desk or workspace, which is useful for demos, unboxings, or sketching on paper.

It’s worth mentioning that Desk View is only available on Macs with the 12MP Center Stage camera and with an iPhone 11 or later, excluding the iPhone 16e and iPhone SE, which don’t meet the feature’s hardware requirements.

To use Desk View:

  1. Position your iPhone horizontally in a mount at the top of your display

  2. Open the Desk View app on your Mac (found in Applications or Launchpad)

  3. The app will generate a simulated overhead view of your desk

  4. You can share this view in apps like Zoom by selecting Desk View as the video source

Some apps (such as FaceTime and the third-party Camo) also support displaying both your face and the Desk View simultaneously using picture-in-picture.

How to adjust Continuity Camera effects

macOS lets you enable various video effects from Control Center when using your iPhone as a webcam. These features enhance your appearance and help you stay centered on screen; the Video Effects menu appears whenever the camera is active.

To access these effects:

  1. While using a video conferencing app (such as FaceTime) on your Mac, click the Control Center icon in the top-right of your Mac’s menu bar

  2. Select Video Effects

  3. Choose from the following options:

  • Center Stage: Uses the iPhone’s ultrawide lens to keep you centered as you move

  • Portrait: Adds a soft background blur similar to Portrait Mode in the Camera app

  • Studio Light: Brightens your face and dims the background to mimic professional lighting

  • Desk View: Activates the Desk View camera feed

You can toggle these effects on or off at any time while the camera is active, whether you’re on a call or just recording.

Tips for mounting and positioning your iPhone

To get the best results, use a secure mount that keeps your iPhone stable and aligned with your face. Apple recommends positioning the iPhone horizontally with the rear camera facing you and the screen facing away.

If you’re using a MacBook, the Belkin iPhone Mount with MagSafe is designed to clip directly onto your Mac’s display. For desktop Macs, any tripod or adjustable mount that aligns the phone at eye level will work.

Avoid placing the iPhone too close to your face and ensure the camera lens is unobstructed. You’ll be able to see yourself during the call, so you can adjust the framing to your preference. The rear camera is used for higher video quality (though, as mentioned above, you can use the front camera with compatible third-party apps). Also make sure the iPhone isn’t in Low Power Mode, as that may affect performance.

Using Continuity Camera with third-party apps

Most popular video conferencing and streaming apps on macOS support Continuity Camera without any extra setup. However, some apps may require manual input selection.

Here’s how to change the camera on a few commonly used platforms:

  • Zoom: Go to Preferences > Video and select "iPhone Camera."

  • Google Meet (in Safari or Chrome): Click the gear icon before joining a call and select your iPhone under Camera

  • OBS Studio: Add a new video capture device source and select your iPhone as the input

  • QuickTime: Open QuickTime Player, choose New Movie Recording, click the arrow next to the record button, and select your iPhone

Continuity Camera works with most macOS-native and browser-based platforms as long as permissions for camera and microphone access are enabled.
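For scripted or ffmpeg-based recording workflows, you can also enumerate capture devices from Terminal. This sketch assumes a macOS build of ffmpeg with the avfoundation input compiled in; once Continuity Camera is active, "iPhone Camera" should appear in the device list alongside the built-in camera:

```shell
# List avfoundation capture devices with ffmpeg (macOS builds only).
# -list_devices prints the device table to stderr and exits non-zero
# by design, so capture both streams and tolerate the exit code.
if command -v ffmpeg >/dev/null 2>&1; then
    DEVICES=$(ffmpeg -f avfoundation -list_devices true -i "" 2>&1 || true)
else
    DEVICES="ffmpeg not installed"
fi
echo "$DEVICES" | head -n 20
```

The index shown next to each device can then be passed to ffmpeg’s `-i` option to record from that camera.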

How to switch between camera modes or devices

If you want to return to using your Mac’s built-in webcam or switch to another device, simply change the input source in your app’s settings. Continuity Camera only takes over as the default when an iPhone is detected and selected.

To switch back:

  1. Open the video or audio settings in your app

  2. Select a different camera or microphone input

  3. Your Mac will revert to using the built-in hardware or another connected device

You can also disconnect your iPhone from the mount or place it out of range to stop Continuity Camera from activating. Note that camera selection is per-app, so you’ll need to change the input in every app you’ve used. For a systemwide change, or if you’d rather not dismount or unplug your iPhone, you can switch off Continuity Camera by doing the following on your phone:

  1. Go to Settings > General > AirPlay & Continuity (or AirPlay & Handoff)

  2. Turn off Continuity Camera.

Troubleshooting Continuity Camera issues

If your iPhone is not showing up as an available webcam, try the following:

  • Ensure both devices are running the latest versions of iOS and macOS

  • Confirm that both devices are signed into the same Apple ID

  • Restart both the Mac and iPhone

  • Toggle Wi-Fi and Bluetooth off and on again on both devices

  • Make sure no other app is already using the iPhone camera

  • Try using a wired USB connection instead

For persistent issues, checking your Mac’s privacy permissions for camera and microphone access may help. Go to System Settings > Privacy & Security, then open both the Camera and Microphone sections and verify that the apps you’re using have access.
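If verifying access doesn’t help, macOS’s built-in `tccutil` can reset the permission grants entirely from Terminal; a quick sketch. Note this clears camera and microphone grants for all apps, which will re-prompt the next time they request access:

```shell
# Reset camera and microphone privacy permissions with tccutil
# (built into macOS). Apps will ask for access again on next use.
if command -v tccutil >/dev/null 2>&1; then
    tccutil reset Camera
    tccutil reset Microphone
    STATUS="permissions-reset"
else
    STATUS="tccutil-not-available"   # not macOS
fi
echo "$STATUS"
```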

Battery use and privacy

Using your iPhone as a webcam over an extended period can drain its battery quickly, especially with effects like Studio Light or Center Stage enabled. To avoid interruptions during longer calls or recordings, consider connecting the iPhone to power while it’s in use.

Apple includes privacy protections when using Continuity Camera. A green LED will appear next to your iPhone’s camera lens to indicate it’s active, and the screen will show a message confirming that the camera is in use. No video or audio is transmitted unless you have explicitly selected the iPhone as a source in your Mac app.

This article originally appeared on Engadget at https://www.engadget.com/mobile/smartphones/how-to-use-your-iphone-as-a-webcam-with-your-mac-164248242.html?src=rss

Yelp will use AI to help restaurants answer calls and make phone reservations

Yelp has announced new AI-powered call answering features for restaurants and services as part of its Spring Product Release. With the service, currently under development, the company hopes that "businesses never have to miss a call again.” 

"In this next step of our product transformation, we’re continuing to harness AI to unlock the potential of Yelp’s rich data in ways that build trust and simplify decision-making — whether users are hiring a pro or booking a reservation," Yelp's chief product officer, Craig Saldanha, said in a statement. "By grounding our AI in real consumer behavior and business data, we’re creating intuitive, transparent features that improve the experience for everyone on Yelp."

The AI-powered system "will be fully integrated into Yelp's platform with customizable features and the ability to answer general questions, filter spam, transfer calls when needed, and capture messages." For restaurants, it will make reservations, put guests on a waitlist and highlight deals like happy hours. It will be part of Yelp Guest Manager, which is also getting a few updates to streamline operations — plus, a new Guest Experience Survey. 

Users calling services will be able to provide project details, get answers to follow-up questions and receive a call back from the company. For example, if a person has an issue with their car, they can give information to the AI system and receive potential solutions. Every business will be able to customize the AI service's greetings, choose when a call should be forwarded and determine follow-up questions.

This article originally appeared on Engadget at https://www.engadget.com/ai/yelp-will-use-ai-to-help-restaurants-answer-calls-and-make-phone-reservations-143320476.html?src=rss

WhatsApp is reportedly bringing voice and video calls to browsers

Microsoft will shut down Skype on Monday, May 5, less than a week from now. While the service has long fallen by the wayside in favor of Zoom, Teams and Google Meet, some of its users might turn to WhatsApp. The Meta-owned messaging platform is reportedly working on a feature that would bring voice and video calling to its browser-based web version, WABetaInfo reports.

WhatsApp already allows voice and video calls on its Mac and Windows apps, but this update would let users access these functions without downloading an app. While I'm happy to have the app on my personal computer, this could benefit anyone who doesn't want to download WhatsApp on a work computer. Right now, the feature is still under development.

WhatsApp has taken additional measures recently to make calling easier. In March, the platform launched a call menu feature for one-on-one and group chats. It allows users to quickly choose a type of call from within the chat or to send a call link. 

This article originally appeared on Engadget at https://www.engadget.com/big-tech/whatsapp-is-reportedly-bringing-voice-and-video-calls-to-browsers-130026611.html?src=rss

LlamaCon 2025 live: Updates from Meta's first generative AI developer conference keynote

After a couple years of having its open-source Llama AI model be just a part of its Connect conferences, Meta is breaking things out and hosting an entirely generative AI-focused developer conference called LlamaCon on April 29. The event is streaming online, and you'll be able to watch along live on the Meta for Developers Facebook page.

LlamaCon kicks off today at 1PM ET / 10AM PT with a keynote address from Meta's Chief Product Officer Chris Cox, Vice President of AI Manohar Paluri and research scientist Angela Fan. The keynote is supposed to cover developments in the company's open-source AI community, "the latest on the Llama collection of models and tools" and offer a glimpse at yet-to-be released AI features. 

The keynote address will be followed by a conversation at 1:45PM ET / 10:45AM PT between Meta CEO Mark Zuckerberg and Databricks CEO Ali Ghodsi on "building AI-powered applications," followed by a chat at 7PM ET / 4PM PT about "the latest trends in AI" between Zuckerberg and Microsoft CEO Satya Nadella. It doesn't seem like either conversation will be used to break news, but Microsoft and Meta have collaborated before, so anything is possible.

Meta hasn't traditionally waited for a conference to launch updates to Meta AI or the Llama model. The company introduced its new Llama 4 family of models, which excel at image understanding and document parsing, on a Saturday in early April. It's not clear what new models or products the company could have saved for LlamaCon.

We'll be liveblogging the keynote presentation today, along with some of the subsequent interviews and sessions between Zuckerberg and his guests. Stay tuned and refresh this article at about 10AM ET today, when we'll kick off the live updates. 

Update, April 29 2025, 6:00AM ET: This story was updated to include the details of Engadget's liveblog, and correct a few typos in timezones.

This article originally appeared on Engadget at https://www.engadget.com/ai/llamacon-2025-live-updates-from-metas-first-generative-ai-developer-conference-keynote-215241436.html?src=rss

Duolingo will replace contract workers with AI

Duolingo is now going to be "AI-first," the company has announced, meaning it will phase out human contract work in favor of AI. In a publicly shared email, CEO Luis von Ahn outlined how Duolingo will "gradually stop using contractors to do work that AI can handle." This follows the company's January 2024 decision to cut 10 percent of its contractors, in part because AI could handle their tasks.

In the email, von Ahn points to Duolingo's "need to create a massive amount of content, and doing that manually doesn’t scale. One of the best decisions we made recently was replacing a slow, manual content creation process with one powered by AI. Without AI, it would take us decades to scale our content to more learners. We owe it to our learners to get them this content ASAP." 

The CEO claims that Duolingo still "cares deeply about its employees" but that it needs to remove bottlenecks to best utilize them. To that end, it will be looking for experience using AI both in hiring and when doing performance reviews.  

In the email, von Ahn admits that AI is far from perfect — wow, who knew? But, he states that Duolingo should move with urgency, rather than wait and take "occasional small hits" quality-wise. We'll have to wait and see what these AI-powered prompts will look like as a result. 

This article originally appeared on Engadget at https://www.engadget.com/ai/duolingo-will-replace-contract-workers-with-ai-123058178.html?src=rss

UK regulator wants to ban apps that can make deepfake nude images of children

The UK's Children's Commissioner is calling for a ban on AI deepfake apps that create nude or sexual images of children, according to a new report. It states that such "nudification" apps have become so prevalent that many girls have stopped posting photos on social media. And though creating or uploading child sexual abuse material (CSAM) is illegal, the apps used to create deepfake nude images remain legal.

"Children have told me they are frightened by the very idea of this technology even being available, let alone used. They fear that anyone — a stranger, a classmate, or even a friend — could use a smartphone as a way of manipulating them by creating a naked image using these bespoke apps," said Children’s Commissioner Dame Rachel de Souza. "There is no positive reason for these [apps] to exist."

De Souza pointed out that nudification AI apps are widely available on mainstream platforms, including the largest search engines and app stores. At the same time, they "disproportionately target girls and young women, and many tools appear only to work on female bodies." She added that young people are demanding action against the misuse of such tools.

To that end, de Souza is calling on the government to introduce a total ban on apps that use artificial intelligence to generate sexually explicit deepfakes. She also wants the government to create legal responsibilities for GenAI app developers to identify the risks their products pose to children, establish effective systems to remove CSAM from the internet and recognize deepfake sexual abuse as a form of violence against women and girls. 

The UK has already taken steps to ban such technology by introducing new criminal offenses for producing or sharing sexually explicit deepfakes. It also announced its intention to make it a criminal offense if a person takes intimate photos or video without consent. However, the Children's Commissioner is focused more specifically on the harm such technology can do to young people, noting that there is a link between deepfake abuse and suicidal ideation and PTSD, as The Guardian pointed out. 

"Even before any controversy came out, I could already tell what it was going to be used for, and it was not going to be good things. I could already tell it was gonna be a technological wonder that's going to be abused," said one 16-year-old girl surveyed by the Commissioner. 

In the US, the National Suicide Prevention Lifeline is 1-800-273-8255 or you can simply dial 988. Crisis Text Line can be reached by texting HOME to 741741 (US), 686868 (Canada), or 85258 (UK). Wikipedia maintains a list of crisis lines for people outside of those countries.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/uk-regulator-wants-to-ban-apps-that-can-make-deepfake-nude-images-of-children-110924095.html?src=rss

Gmail on Android tablets and foldables now has an adjustable layout

Android tablet or foldable owners can now adjust the divider between Gmail's message list and conversation views. 9to5Google spotted the change, which arrived in version 2025.04.13.x of Gmail for Android.

The update lets you drag your finger on the divider to dynamically change the sizes of the Gmail app's two panes: the list on the left with multiple messages and the pane on the right that shows individual emails and threads. Alternatively, you can drag the boundary to the display's edge to show only one or the other.

The adjustable divider is also available in the Google Chat app for large-screen Android devices. The new feature is available to everyone with a personal Google or Google Workspace account in both apps.

This article originally appeared on Engadget at https://www.engadget.com/mobile/tablets/gmail-on-android-tablets-and-foldables-now-has-an-adjustable-layout-202111833.html?src=rss