OpenAI rolls back update that made ChatGPT an ass-kissing weirdo

OpenAI is rolling back a recent update to GPT-4o, the default model that powers ChatGPT, following complaints from users that it made the chatbot act like a weirdo. "The last couple of GPT-4o updates have made the personality too sycophant-y and annoying (even though there are some very good parts of it), and we are working on fixes asap, some today and some this week," said OpenAI CEO Sam Altman in an X post spotted by TechCrunch.

As of midday Tuesday, Altman said ChatGPT was running on an older, less sycophantic version of GPT-4o for all free users. The company hopes to get paid users back on an older release of the model later today. "We're working on additional fixes to model personality and will share more in the coming days," Altman said, adding OpenAI would share more information about what went wrong "at some point."

OpenAI released the new GPT-4o late last week. By the weekend, people began noticing ChatGPT was being overly agreeable and verbose in its praise. As you can see from the X post below, often that praise was also inappropriate and strange. 

When is OpenAI pulling the plug on the new GPT-4o ?
This is the most misaligned model released to date by anyone.
This is OpenAI's Gemini image disaster moment.

image credit : r/u/Trevor050 pic.twitter.com/kNcdnEYMDq

— AshutoshShrivastava (@ai_for_success) April 27, 2025

Improving the emotional intelligence of its models, insofar as an algorithm can possess the trait, has been a recent focus for OpenAI. For instance, with GPT-4.5, the company said the model was better at responding with warmth and understanding than its previous systems. In trying to bring that same capability to the more affordable GPT-4o, it seems OpenAI got something wrong. 

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-rolls-back-update-that-made-chatgpt-an-ass-kissing-weirdo-203056185.html?src=rss

Firefox finally adds tab groups

Firefox now lets you organize your tabs. Four years after its biggest rivals launched tab groups, Mozilla published a nearly 1,000-word blog post recounting the feature's long road from user requests to launch. (Consider skipping it if you don’t like long-winded acceptance speeches.) "What happens when 4,500 people ask for the same feature?" the company asked rhetorically. "At Firefox, we build it."

Of course, those users may have requested tab groups partly because Firefox was the only major browser without them. Chrome, Safari and Edge launched tab groups in 2021. Hell, Vivaldi has had them since 2016.

Tardiness aside, Firefox users will welcome the chance to tidy up the clutter. The feature lets you drag and drop tabs into groups and label them by name or color. Mozilla says tab groups are on-device and never uploaded to the cloud. "Tab groups aren't just about decluttering," Firefox product manager Stefan Smagula said. "It's about reclaiming your flow and finding focus again."

Up next for Firefox tabs: The tech industry's favorite buzzword. Mozilla is testing smart tab groups, powered by AI, which suggest names and groups based on your open tabs.

This article originally appeared on Engadget at https://www.engadget.com/computing/firefox-finally-adds-tab-groups-195130482.html?src=rss

Meta has a plan to bring AI to WhatsApp chats without breaking privacy

As Meta’s first-ever generative AI conference gets underway, the company is also previewing a significant update on its plans to bring AI features to WhatsApp chats. Buried in its LlamaCon updates, the company shared that it’s working on something called “Private Processing,” which will allow users to take advantage of generative AI capabilities within WhatsApp without eroding its privacy features.

According to Meta, Private Processing is an “optional capability” that will enable people to “leverage AI capabilities for things like summarizing unread messages or refining them, while keeping messages private.” WhatsApp, of course, is known for its strong privacy protections and end-to-end encryption. That would seem incompatible with cloud-based AI features like Meta AI. But Private Processing will essentially allow Meta to do both.

Meta has shared more details about how it will accomplish this over on its engineering blog, but as Wired points out, it's a similar model to Apple's Private Cloud Compute (which allows the iPhone maker to implement Apple Intelligence without sending all your data to the cloud). Here's how Meta describes its approach.

We’re excited to share an initial overview of Private Processing, a new technology we’ve built to support people’s needs and aspirations to leverage AI in a secure and privacy-preserving way. This confidential computing infrastructure, built on top of a Trusted Execution Environment (TEE), will make it possible for people to direct AI to process their requests — like summarizing unread WhatsApp threads or getting writing suggestions — in our secure and private cloud environment. In other words, Private Processing will allow users to leverage powerful AI features, while preserving WhatsApp’s core privacy promise, ensuring no one except you and the people you’re talking to can access or share your personal messages, not even Meta or WhatsApp.

The company seems well aware that such a plan will likely be met with skepticism. WhatsApp is regularly targeted by bad actors as it is. To address inevitable concerns from the security community, the company says it will allow security researchers and others to audit Private Processing, and will make the technology part of its bug bounty program that rewards people who find security vulnerabilities in its services.

It’s not clear when generative AI features may actually be available in WhatsApp chats — the company describes its announcement today as merely a “first look” at the technology — but it does note that Private Processing and “similar infrastructure” could have use cases beyond its messaging app.

This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-has-a-plan-to-bring-ai-to-whatsapp-chats-without-breaking-privacy-193556026.html?src=rss

New Star Wars series will premiere in Fortnite of all places

There’s a new animated Star Wars show coming soon and it’s set to actually premiere in the game Fortnite. Star Wars: Tales of the Underworld will be available to watch in-game starting on May 2 at 10AM ET. This is two full days before the show streams on Disney+.

Viewing will take place in a new in-game location called Star Wars Watch Party Island. Epic Games says that this area was built using Unreal Editor for Fortnite and uses official assets to create a "breathtaking environment inspired by a galaxy far, far away." Players will only have access to the first two episodes.

For the uninitiated, Star Wars: Tales of the Underworld is an anthology series consisting of animated shorts. It takes a look at the criminal underworld, centering on the bounty hunter Cad Bane and the Force-sensitive assassin Asajj Ventress.

This is part of a larger collaboration between Fortnite and Star Wars. The game will receive new Star Wars content every week for use in Battle Royale. Players will be able to pilot X-wings and duke it out as Emperor Palpatine. A dark side version of Jar Jar Binks will also be a playable character.

This isn’t the first time our favorite space wizards have appeared in Fortnite. The game once made Luke, Han and Leia playable characters and added the iconic lightsaber as a weapon.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/new-star-wars-series-will-premiere-in-fortnite-of-all-places-185859342.html?src=rss

Speedrunner reaches Breath of the Wild credits on Switch 2, a console which isn't even out yet

The Nintendo Switch 2 won’t be in our hands for over a month yet (sigh), but a speedrunner has already reached the credits of the Nintendo Switch 2 Edition of The Legend of Zelda: Breath of the Wild.

As reported by VGC, the Japanese speedrunner known as Ikaboze posted a video of his handiwork on his YouTube channel after attending a Switch 2 preview event in Tokyo. Attendees were able to play a 10-minute demo of the souped-up original Switch launch game, but Ikaboze only needed seven minutes of the allotted time to dispose of Ganon in the game’s epic final battle.

To be clear, this was not an any% run of the entire game, where the current top times all clock in around 23 minutes. The speedrunner loaded an autosave that spawned him outside Hyrule Castle, where he immediately dropped all of Link’s equipment and made a beeline for his longtime nemesis. Ikaboze was able to take down Ganon before the demo's time was up, to the delight of a crowd of onlookers, who applauded as the credits started to roll. Nintendo staff at the event reportedly told the speedrunner that he was the first person to have completed the Breath of the Wild demo.

The updated versions of both Breath of the Wild and its 2023 sequel, Tears of the Kingdom, will be available to play on Switch 2 on launch day, which remains June 5 worldwide despite the pre-order holdup in the US. According to Nintendo, Nintendo Switch 2 Edition games improve performance and resolution and add HDR support. There’s also a new Zelda companion app that will let you track down missing Koroks and shrines on your save file.

Those who already own the base game can upgrade for $10, and if you’re a Nintendo Switch Online + Expansion Pack member you’ll be able to play the Switch 2 versions of both BotW and TotK as part of your subscription. Good luck trying to beat Ikaboze, though.

This article originally appeared on Engadget at https://www.engadget.com/gaming/speedrunner-reaches-breath-of-the-wild-credits-on-switch-2-a-console-which-isnt-even-out-yet-173004158.html?src=rss

Meta is making it easier to use Llama models for app development

Meta is releasing a new tool it hopes will encourage developers to use its family of Llama models for their next project. At its inaugural LlamaCon event in Menlo Park on Tuesday, the company announced the Llama API. Available as a limited free preview starting today, the tool gives developers a place to experiment with Meta's AI models, including the recently released Llama 4 Scout and Maverick systems. It also makes it easy to create new API keys, which devs can use for authentication purposes.     

"We want to make it even easier for you to quickly start building with Llama, while also giving you complete control over your models and weights without being locked to an API," the company said in a blog post published during the event. To that end, the initial release of the Llama API includes tools devs can use to fine-tune and evaluate their apps.  

Additionally, Meta notes it won't use user prompts and model responses to train its own models. "When you’re ready, the models you build on the Llama API are yours to take with you wherever you want to host them, and we don’t keep them locked on our servers," the company said. Meta expects to roll out the tool to more users in the coming weeks and months.
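Meta hasn't published the API's request format in this announcement, so here's a purely hypothetical Swift sketch of the bearer-token pattern that API keys like these typically plug into. The endpoint URL, JSON fields and model name below are illustrative placeholders, not Meta's documented values.

```swift
import Foundation

// Hypothetical sketch only: the URL, JSON fields and model name are
// illustrative placeholders, not Meta's documented Llama API values.
// The API key is read from the environment rather than hardcoded.
let apiKey = ProcessInfo.processInfo.environment["LLAMA_API_KEY"] ?? ""

var request = URLRequest(url: URL(string: "https://example.invalid/v1/chat/completions")!)
request.httpMethod = "POST"
request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
request.setValue("application/json", forHTTPHeaderField: "Content-Type")
request.httpBody = try? JSONSerialization.data(withJSONObject: [
    "model": "llama-4-scout",  // placeholder model identifier
    "messages": [["role": "user", "content": "Summarize LlamaCon in one sentence."]]
])

// Fire the request and print the raw JSON response (or the error).
let task = URLSession.shared.dataTask(with: request) { data, _, error in
    if let data, let body = String(data: data, encoding: .utf8) {
        print(body)
    } else if let error {
        print("Request failed: \(error)")
    }
}
task.resume()
```

Reading the key from an environment variable rather than baking it into the app is the usual practice with per-developer keys like the ones the dashboard issues.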

Despite the fact that Meta's Llama models have been downloaded more than one billion times, the company typically isn't viewed as a leader in the AI space in quite the same way as OpenAI and Anthropic. It doesn't help that the company was caught gaming LMArena to make its Llama 4 models look better than they actually were.

This article originally appeared on Engadget at https://www.engadget.com/ai/meta-is-making-it-easier-to-use-llama-models-for-app-development-171514630.html?src=rss

Meta’s ChatGPT competitor includes conversational voice chat and a social feed

Meta didn't wait for Tuesday's LlamaCon keynote to unveil its first big AI announcement of the week. The company launched a standalone app that competes with ChatGPT, Gemini, Claude and other multimodal AI chatbots. True to the company’s roots, the app also includes a social feed and the ability to draw on info from your profile and posts you’ve shared.

The Meta AI app offers similar features to rival chatbots, including text and voice chats, live web access and the ability to generate and edit images. But it also includes a Discover feed that (for better or worse) adds a social element to your AI queries. The company describes it as "a place to share and explore how others are using AI." It highlights the prompts that others share and lets you "remix them to make them your own."

Meta stresses that none of your private chats will post to others' feeds unless you explicitly choose to share them.

For users in the US and Canada, Meta AI can personalize its answers based on data you've shared with Meta products. This includes info like your social profile and content you like or engage with. The company says linking your Facebook and Instagram accounts to the same Meta AI account will provide "an even stronger personalized experience." If you don't want that, this might be a good time to check your privacy settings.

The app has a live conversation mode for users in the US, Canada, Australia and New Zealand. Much like a similar feature in ChatGPT and Gemini, Meta’s version lets you and the AI assistant listen and speak simultaneously, with a natural flow that should feel more like a real conversation. However, Meta only describes it as a demo that provides "a glimpse into the future," suggesting it's still in an early stage. This mode also doesn't offer live web access.

The Meta AI web version includes the app's new features, including voice interactions and the Discover feed. This version has a few differences, including enhanced image generation (more presets and new editing modes for style, mood, lighting and colors). The web version also lets you test a rich document editor (in some countries) that can spit out text- and image-rich docs to export as PDFs.

The app has merged with the Meta View companion app for the company's Ray-Ban glasses collab. The company says it will include a handoff feature that lets you start a conversation on the glasses and then access it in your history tab on the app or web. After installing the update, you can manage your glasses in the Meta AI app's Devices tab.

You can download the new Meta AI app from the App Store and Google Play.

This article originally appeared on Engadget at https://www.engadget.com/ai/metas-chatgpt-competitor-includes-conversational-voice-chat-and-a-social-feed-164735307.html?src=rss

How to use your iPhone as a webcam with your Mac

If you want to upgrade your video call setup without buying an external webcam, your iPhone can help. With macOS Ventura or later, Apple’s Continuity Camera feature lets you turn your iPhone into a high-quality, wireless webcam for your Mac. Whether you’re joining a meeting on Zoom, recording a presentation or creating content for YouTube, using your iPhone as a webcam can provide a sharper image, better low-light performance and useful extras like Center Stage and Desk View. The feature works natively in macOS, so it’s easy to set up: all you need to do is mount your phone and start your call. Here’s how to set up and use your iPhone as a webcam with your Mac, along with additional tips for microphone-only use, Desk View, Studio Light and more.

What you’ll need to use Continuity Camera

You’ll need the following things to use this feature properly:

  • An iPhone XR or newer running iOS 16 or later

  • A Mac running macOS Ventura or later

  • Wi-Fi and Bluetooth enabled on both devices

  • Both devices signed into the same Apple ID with two-factor authentication enabled

  • A way to mount your iPhone (Apple sells a MagSafe-compatible Belkin mount, but any secure mount or tripod will work)

Continuity Camera works wirelessly by default, though you can connect your iPhone to your Mac via USB if you prefer a more stable connection.

How to enable Continuity Camera

Continuity Camera is automatically enabled on supported iPhones and Macs. However, it’s worth confirming that the feature is active in your iPhone’s settings:

  1. Open Settings on your iPhone

  2. Tap General

  3. Select AirPlay & Handoff

  4. Make sure Continuity Camera is toggled on

On your Mac, no additional setup is required, but you’ll want to ensure both Wi-Fi and Bluetooth are enabled and that both devices are nearby and awake.

How to use your iPhone as a webcam in macOS apps

Once Continuity Camera is active, your Mac should automatically detect your iPhone as a webcam source in any compatible app. That includes FaceTime, Zoom, Google Meet, Microsoft Teams, QuickTime, Safari and most other video and streaming applications.

To use your iPhone as the camera in a specific app:

  1. Open the app you want to use (e.g., Zoom or FaceTime)

  2. Go to the app’s video settings or preferences menu

  3. Select your iPhone from the list of available camera sources (it may appear as "iPhone Camera")

Your iPhone will automatically activate its rear camera and stream a live video feed to your Mac. Continuity Camera uses the iPhone’s higher-quality rear camera, but you can leverage the front camera using third-party apps such as EpocCam, iVCam or DroidCam.

If nothing happens, make sure:

  • Both devices are unlocked and on the same Wi-Fi network

  • Continuity Camera is enabled on your iPhone

  • You’re signed into the same Apple ID on both devices

How to use microphone-only mode

In addition to camera input, Continuity Camera lets you use your iPhone as a high-quality microphone source. This is handy if you prefer to use your Mac’s built-in camera or another webcam but still want the clarity of the iPhone’s microphone.

To use your iPhone as a mic:

  1. Open System Settings on your Mac

  2. Go to Sound > Input

  3. Select your iPhone from the list of available input devices

You can also choose the iPhone microphone directly from within most video apps under their audio settings or microphone input menus.

How to use Desk View

Desk View is a unique feature of Continuity Camera that uses the iPhone’s ultrawide lens to simulate a top-down camera angle. It creates a second video feed showing your desk or workspace, which is useful for demos, unboxings, or sketching on paper.

It’s worth mentioning that Desk View is only available on Macs with the 12MP Center Stage camera, and with iPhone 11 or later (excluding iPhone 16e and iPhone SE, as these models do not meet the hardware requirements for this feature).

To use Desk View:

  1. Position your iPhone horizontally in a mount at the top of your display

  2. Open the Desk View app on your Mac (found in Applications or Launchpad)

  3. The app will generate a simulated overhead view of your desk

  4. You can share this view in apps like Zoom by selecting Desk View as the video source

Some apps, including FaceTime and third-party options like Camo, also support displaying both your face and the Desk View simultaneously using picture-in-picture.

How to adjust Continuity Camera effects

When you’re using your iPhone as a webcam, macOS lets you enable various video effects from Control Center. These features enhance your appearance and help you stay centered on screen, though the controls only appear while an app is actively using the camera.

To access these effects:

  1. While using a video conferencing app (such as FaceTime) on your Mac, click the Control Center icon in the top-right of your Mac’s menu bar

  2. Select Video Effects

  3. Choose from the following options:

  • Center Stage: Uses the iPhone’s ultrawide lens to keep you centered as you move

  • Portrait: Adds a soft background blur similar to Portrait Mode in the Camera app

  • Studio Light: Brightens your face and dims the background to mimic professional lighting

  • Desk View: Activates the Desk View camera feed

You can toggle these effects on or off at any time while the camera is in use, whether you’re in a call or a recording session.

Tips for mounting and positioning your iPhone

To get the best results, use a secure mount that keeps your iPhone stable and aligned with your face. Apple recommends positioning the iPhone horizontally with the rear camera facing you and the screen facing away.

If you’re using a MacBook, the Belkin iPhone Mount with MagSafe is designed to clip directly onto your Mac’s display. For desktop Macs, any tripod or adjustable mount that aligns the phone at eye level will work.

Avoid placing the iPhone too close to your face and ensure the camera lens is unobstructed. You’ll be able to see yourself during the call, so you can adjust to your preference. The rear camera is used for higher video quality (though, as mentioned above, you can use the front camera with compatible third-party apps). Make sure the iPhone is not in Low Power Mode, as that may affect performance.

Using Continuity Camera with third-party apps

Most popular video conferencing and streaming apps on macOS support Continuity Camera without any extra setup. However, some apps may require manual input selection.

Here’s how to change the camera on a few commonly used platforms:

  • Zoom: Go to Preferences > Video and select "iPhone Camera."

  • Google Meet (in Safari or Chrome): Click the gear icon before joining a call and select your iPhone under Camera

  • OBS Studio: Add a new video capture device source and select your iPhone as the input

  • QuickTime: Open QuickTime Player, choose New Movie Recording, click the arrow next to the record button, and select your iPhone

Continuity Camera works with most macOS-native and browser-based platforms as long as permissions for camera and microphone access are enabled.
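For developers wondering how the iPhone shows up on the Mac side, here's a minimal Swift sketch of how a capture app like the ones above might list cameras with AVFoundation and pick out the Continuity Camera feed. It assumes macOS 13 or later for the isContinuityCamera property and an app that already has camera permission; it's a general illustration, not code from any particular app.

```swift
import AVFoundation

// Enumerate video capture devices; the Continuity Camera iPhone appears
// alongside built-in and external webcams. (.externalUnknown is the
// pre-macOS 14 external-camera type; newer SDKs also offer .external.)
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInWideAngleCamera, .externalUnknown],
    mediaType: .video,
    position: .unspecified
)

for device in discovery.devices {
    let kind = device.isContinuityCamera ? "Continuity Camera (iPhone)" : "local camera"
    print("\(device.localizedName): \(kind)")
}

// Feeding the iPhone into a capture session works like any other webcam.
if let iphone = discovery.devices.first(where: { $0.isContinuityCamera }) {
    let session = AVCaptureSession()
    if let input = try? AVCaptureDeviceInput(device: iphone), session.canAddInput(input) {
        session.addInput(input)
        session.startRunning()
    }
}
```

Because the iPhone is exposed as a normal capture device, most apps pick it up without any special support, which is why the per-app steps above boil down to choosing it from a standard camera menu.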

How to switch between camera modes or devices

If you want to return to using your Mac’s built-in webcam or switch to another device, simply change the input source in your app’s settings. Continuity Camera only takes over as the default when an iPhone is detected and selected.

To switch back:

  1. Open the video or audio settings in your app

  2. Select a different camera or microphone input

  3. Your Mac will revert to using the built-in hardware or another connected device

You can also disconnect your iPhone from the mount or place it out of range to stop Continuity Camera from activating. Note that camera selection is per-app, so you’ll have to change the input in every app you’ve used. If you want a systemwide change, or you’d rather not dismount or unplug your iPhone, you can switch off Continuity Camera by doing the following on your phone:

  1. Go to Settings > General > AirPlay & Continuity (or AirPlay & Handoff)

  2. Turn off Continuity Camera.

Troubleshooting Continuity Camera issues

If your iPhone is not showing up as an available webcam, try the following:

  • Ensure both devices are running the latest versions of iOS and macOS

  • Confirm that both devices are signed into the same Apple ID

  • Restart both the Mac and iPhone

  • Toggle Wi-Fi and Bluetooth off and on again on both devices

  • Make sure no other app is already using the iPhone camera

  • Try using a wired USB connection instead

For persistent issues, resetting your Mac’s privacy permissions for camera and microphone access may help. Go to System Settings > Privacy & Security, check both the Camera and Microphone sections, and verify that the apps you’re using have access.

Battery use and privacy

Using your iPhone as a webcam over an extended period can guzzle its battery quickly, especially with effects like Studio Light or Center Stage enabled. To avoid interruptions during longer calls or recordings, consider connecting the iPhone to power while it’s in use.

Apple includes privacy protections when using Continuity Camera. A green LED will appear next to your iPhone’s camera lens to indicate it’s active, and the screen will show a message confirming that the camera is in use. No video or audio is transmitted unless you have explicitly selected the iPhone as a source in your Mac app.

This article originally appeared on Engadget at https://www.engadget.com/mobile/smartphones/how-to-use-your-iphone-as-a-webcam-with-your-mac-164248242.html?src=rss

Borderlands 4 release date moves up to September 12

The ground has just shifted for anyone trying to figure out the exact release date for Grand Theft Auto 6, as 2K has altered plans for another game in its stable, Borderlands 4. The latest entry in Gearbox's series will arrive 11 days earlier than previously announced, debuting on September 12.

After teasing a delay, Gearbox co-founder and CEO Randy Pitchford said "the team has been working very hard. Everything's going great, actually. In fact, everything's going kind of the best case scenario. The game is awesome, the team is cooking." As such, Gearbox and 2K are bringing the release date forward. Gearbox said the decision was made after "a lot of meetings, playtesting and incredible development work."

Announcement about the Borderlands 4 launch date - Please watch until the end: pic.twitter.com/cF85jG1p09

— Randy Pitchford (@DuvalMagic) April 29, 2025

Sony is hosting a dedicated State of Play for Borderlands 4 on April 30 (which is tomorrow, fact fans). The stream will run for around 20 minutes and you can watch it on PlayStation’s Twitch and YouTube channels at 5PM ET.

Meanwhile, that sound you just heard was countless game developers and publishers scrambling to deduce what the revised Borderlands 4 date means for GTA 6. The latter is still scheduled to arrive this fall and 2K is likely to want to have a buffer of at least a few weeks to avoid cannibalizing Borderlands 4 sales. If the fall release window still holds for GTA 6, that means it should arrive sometime in October or November (2K will certainly want to have the game out before Black Friday in that scenario).

It's been widely reported that publishers and studios are holding off on revealing release dates for any games they have coming out this fall to see when GTA 6 lands, so they can give that guaranteed juggernaut as wide a berth as possible. As it happens, Sony is one of the few companies that's locked in fall dates for major games. Marathon will drop on Borderlands 4's old date of September 23 while Ghost of Yōtei is slated to hit PS5 on October 2. Perhaps Sony, which is working with 2K on promoting Borderlands 4, knows more about the GTA 6 release date than it's letting on.

This article originally appeared on Engadget at https://www.engadget.com/gaming/borderlands-4-release-date-moves-up-to-september-12-154958162.html?src=rss