X is making a significant change to its crowd-sourced fact checking tool in an attempt to stem the flow of misinformation on its platform. The new rule is one that will be familiar to professional fact checkers, academics and Wikipedia editors, but is nonetheless new to X’s approach to fact-checking: the company will now require its volunteer contributors to include sources on every community note they write.
The company announced the change in a post on X, shortly after Wired reported that some community notes contributors are worried the tool is being manipulated by bad actors and worsening X’s misinformation problems amid the ongoing Israel-Hamas war. “Starting today, sources are now required for proposed notes,” the company wrote. “We haven’t previously required this, as some helpful notes inherently do not need sources – for example, they refer to details of the post or media it contains. But those instances are less common, and we believe the overall impact of this change will be positive.”
The change comes amid mounting scrutiny of the amount of misinformation and other falsehoods spreading on X in recent days. Longtime researchers have said that misinformation has reached new heights following Hamas’ attacks in Israel and the ensuing war. The advent of paid verification, and algorithm changes that boost paying subscribers, have allowed misinformation to spread relatively unchecked, researchers have said.
In its announcement, X added: “Notes that cite sources have a much higher likelihood of earning a ‘Helpful’ status. This is not surprising, as sources make notes more easily verifiable by viewers and raters.”
European Union officials have also raised concerns, pointing to the viral spread of video game footage and other unrelated content falsely claiming to depict scenes from the ongoing conflict. EU officials opened an investigation into X over its handling of misinformation last week.
Under Elon Musk’s leadership, X cut the teams responsible for curating reputable information about breaking news events, removed misinformation-reporting tools, slashed safety teams that patrolled for disinformation, and stopped labeling state-affiliated media accounts. Instead, the company has relied almost entirely on Community Notes, which allows volunteer contributors to append fact-checks to individual tweets.
Contributors are not vetted before joining the program, though notes have to reach a certain threshold of “helpful” ratings from other contributors before they’ll be visible. X CEO Linda Yaccarino told EU officials last week that the company had “recently launched a major acceleration in the speed at which notes appear.”
According to Wired, the system is easily manipulated as groups of contributors can rate each other’s notes, or selectively rate contributions that align with their opinions. The report also says that community notes related to the Israel-Hamas war have been filled with conspiracy theories and infighting between contributors.
The change to require a linked source may be X’s attempt to increase the quality of its notes, though it doesn’t seem to have any guidelines about the types of sources that can be cited. The company says “thousands” of new contributors have joined the program in recent days, and that notes have been viewed “millions” of times.
This article originally appeared on Engadget at https://www.engadget.com/x-now-requires-community-fact-checks-to-include-sources-235125787.html?src=rss
A lot has changed in the two years since Facebook released its Ray Ban-branded smart glasses. Facebook is now called Meta. And its smart glasses also have a new name: the Ray-Ban Meta smart glasses. Two years ago, I was unsure exactly how I felt about the product. The Ray-Ban Stories were the most polished smart glasses I’d tried, but with mediocre camera quality, they felt like more of a novelty than something most people could use.
After a week with the company’s latest $299 sunglasses, they still feel a little bit like a novelty. But Meta has managed to improve the core features, while making them more useful with new abilities like livestreaming and hands-free photo messaging. And the addition of an AI assistant opens up some intriguing possibilities. There are still privacy concerns, but the improvements might make the tradeoff feel more worth it, especially for creators and those already comfortable with Meta’s platform.
What’s changed
Just like its predecessor, the Ray-Ban Meta smart glasses look and feel much more like a pair of Ray-Bans than a gadget, and that’s still a good thing. Meta has slimmed down both the frames and the charging case, which now looks like the classic tan leather Ray-Ban pouch. The glasses are still a bit bulkier than a typical pair of shades, but they don’t feel heavy, even with extended use.
This year’s model has ditched the power switch of the original, which is nice. The glasses now automatically turn on when you pull them out of the case and put them on (though you sometimes have to launch the Meta View app to get them to connect to your phone).
Image by Karissa Bell for Engadget
The glasses themselves now charge wirelessly through the nosepiece, rather than near the hinges. According to Meta, the device can go about four hours on one charge, and the case holds an additional four charges. In a week of moderate use, I haven’t had to top up the case, but I do wish there was a more precise indication of its battery level than the light at the front (the Meta View app will display the exact power level of your glasses, but not the case.)
My other minor complaint is that the new charging setup makes it slightly more difficult to pull the glasses out of the case. It takes a little bit of force to yank the frames off the magnetic charging contacts and the vertical orientation of the case makes it easy to grab (and smudge) the lenses.
The latest generation of smart glasses comes in both the signature Wayfarer style, which starts at $299, as well as a new, rounder “Headliner” design, which sells for $329. I opted for a pair of Headliners in the blue “shiny jeans” color, but there are tan and black variations as well. One thing to note about the new colors is that both the “shiny jeans” and “shiny caramel” options are slightly transparent, so you can see some of the circuitry and other tech embedded in the frames.
The lighter colors also make the camera and LED indicator on the top corner of each lens stand out a bit more than on their black counterparts. (Meta has also updated its software to prevent the camera from being used when the LED is covered.) None of this bothered me, but if you want a more subtle look, the black frames are better at disguising the tech inside.
New camera, better audio
Look closely at the transparent frames, though, and you can see evidence of the many upgrades. There are now five mics embedded in each pair, two in each arm and one in the nosepiece. The additional mics also enable some new “immersive” audio features for videos. If you record a clip with sound coming from multiple sources — like someone speaking in front of you and another person behind you — you can hear their voices coming from different directions when you play back the video through the frames. It’s a neat trick, but doesn’t feel especially useful.
The directional audio is, however, a sign of how dramatically the sound quality has improved. The open-ear speakers are 50 percent louder and, unlike the previous generation, don’t distort at higher volumes. Meta says the new design also has reduced the amount of sound leakage, but I found this really depends on the volume you’re listening at and your surrounding noise conditions.
There will always be some quality tradeoffs when it comes to open-ear speakers, but it’s still one of my favorite features of the glasses. The design makes for a much more balanced level of ambient noise than any kind of “transparency mode” I’ve experienced with earbuds or headphones. And it’s especially useful for things like jogging or hiking when you want to maintain some awareness of what’s around you.
Camera quality was one of the most disappointing features on the first-generation Ray-Ban Stories so I was happy to see that Meta and Luxottica ditched the underpowered 5-megapixel cameras for a 12MP ultra-wide.
The upgraded camera still isn’t as sharp as most phones, but it’s more than adequate for social media. Shots in broad daylight were clear and the colors were more balanced than snaps from the original Ray-Ban Stories, which tended to look over-processed. I was surprised that even photos I took indoors or at dusk — occasions when most people wouldn't wear sunglasses — also looked decent. One note of caution about the ultra-wide lens, however: if you have long hair or bangs, it’s very easy for wisps of hair to end up in the edges of your frame if you're not careful.
The camera also has a few new tricks of its own. In addition to 60-second videos, you can now livestream directly from the glasses to your Instagram or Facebook account. You can even use touch controls on the side of the glasses to hear a readout of likes and comments from your followers. As someone who has live streamed to my personal Instagram account exactly one time before this week, I couldn’t imagine myself using this feature.
But after trying it out, it was a lot cooler than I expected. Streaming a first-person view from your glasses is much easier than holding up your phone, and being able to seamlessly switch between the first-person view and the one from your phone’s camera is something I could see being incredibly useful to creators. I still don’t see many IG Lives in my future, but the smart glasses could enable some really creative use cases for content creators.
The other new camera feature I really appreciated was the ability to snap a photo and share it directly with a contact via WhatsApp or Messenger (but not Instagram DMs) using only voice commands. While this means you can’t review the photo before sending it, it’s a much faster and more convenient way to share photos on the go.
Meta AI
Two years ago, I really didn’t see the point of having voice commands on the Ray-Ban Stories. Saying “hey Facebook” felt too cringey to utter in public, and it just didn’t seem like there was much point to the feature. However, the addition of Meta’s AI assistant makes voice interactions a key feature rather than an afterthought.
The Meta Ray-Ban smart glasses are one of the first hardware products to ship with Meta’s new generative AI assistant built in. This means you can chat with the assistant about a range of topics. Answers to queries are broadcast through the internal speakers, and you can revisit your past questions and responses in the Meta View app.
To be clear: I still feel really weird saying “hey Meta,” or “OK Meta,” and haven’t yet done so in public. But there is now, at least, a reason you may want to. For now, the assistant is unable to provide “real-time” information other than the current time or weather forecast. So it won’t be able to help with some practical queries, like those about sports standings or traffic conditions. The assistant’s “knowledge cutoff” is December 2022, and it will remind you of that for most questions related to current events. However, there were a few questions I asked where it hallucinated and gave made-up (but nonetheless real-sounding) answers. Meta has said this kind of thing is an expected part of the development of large language models, but it’s important to keep in mind when using Meta AI.
Meta has suggested you should instead use it more for creative or general interest questions, like basic trivia or travel ideas. As with other generative AI tools, I found that the more creative and specific your questions, the better the answer. For example, “Hey Meta, what’s an interesting Instagram caption for a view of the Golden Gate Bridge,” generated a pretty generic response that sounded more like an ad. But “hey Meta, write a fun and interesting caption for a photo of the Golden Gate Bridge that I can share on my cat’s Instagram account,” was slightly better.
That said, I’ve been mostly underwhelmed by my interactions with Meta AI. The feature still feels like something of a novelty, though I appreciated the mostly neutral personality of Meta AI on the glasses compared to the company’s corny celebrity-infused chatbots.
And, skeptical as I am, Meta has given a few hints about intriguing future possibilities for the technology. Onstage at Connect, the company offered a preview of an upcoming feature that will allow wearers to ask questions based on what they’re seeing through their glasses. For example, you could look at a monument and ask Meta to identify what you’re looking at. This “multi-modal” search capability is coming sometime next year, according to the company, and I’m looking forward to revisiting Meta AI once the update rolls out.
Privacy
The addition of generative AI also raises new privacy concerns. First, even if you already have a Facebook or Instagram account, you’ll need a Meta account to use the glasses. While this also means they don’t require you to use Facebook or Instagram, not everyone will be thrilled at the idea of creating another Meta-linked account.
The Meta View app still has no ads and the company says it won’t use the contents of your photos or video for advertising. The app will store transcripts of your voice commands by default, though you can opt to remove transcripts and associated voice recordings from the app’s settings. If you do allow the app to store voice recordings, these can be surfaced to “trained reviewers” to “improve, troubleshoot and train Meta’s products.”
I asked the company if it plans to use Meta AI queries to inform its advertising and a spokesperson said that “at this time we do not use the generative AI models that power our conversational AIs, including those on smart glasses, to personalize ads.” So you can rest easy that your interactions with Meta AI won’t be fed into Meta’s ad-targeting machine, at least for now. But it’s not unreasonable to imagine that could one day change. Meta tends to keep new products ad-free in the beginning and introduce ads once they start to reach a critical mass of users. And other companies, like Snap, are already using generative AI to boost their ad businesses.
Are they worth it?
If any of that makes you uncomfortable, or you’re interested in using the shades with non-Meta apps, then you might want to steer clear of the Ray-Ban Meta smart glasses. Though your photos and videos can be exported to any app, most of the device’s key features work best when you’re playing in Meta’s ecosystem. For example, you can connect your WhatsApp and Messenger accounts to send hands-free photos or messages but can’t send texts via SMS or other apps (Meta AI will read out incoming texts, however). Likewise, the livestreaming abilities are limited to Instagram and Facebook, and won’t work with other platforms.
If you’re a creator or already spend a lot of time in Meta’s apps, though, there are plenty of reasons to give the second-generation shades a look. While the Ray-Ban Stories of two years ago were a fun, if overly expensive, novelty, the $299 Ray-Ban Meta smart glasses feel more like a finished product. The improved audio and photo quality better justify the price, and the addition of AI makes them feel like a product that’s likely to improve rather than a gadget that will start to become obsolete as soon as you buy it.
This article originally appeared on Engadget at https://www.engadget.com/ray-ban-meta-smart-glasses-review-instagram-worthy-shades-070010365.html?src=rss
A top European Union official is warning Elon Musk about the spread of misinformation on X amid the Israel-Hamas war. EU Commissioner Thierry Breton sent Musk an "urgent" letter about the company’s handling of misinformation and its responsibilities under the Digital Services Act.
The letter comes as researchers and fact checkers have warned about a wave of misinformation on X in the wake of the Hamas attacks in Israel. While Musk’s recent move to strip headlines from links shared on the platform has made it more difficult to find news, verified users have also been sharing viral clips of completely unrelated content purporting to be scenes from the unfolding conflict.
“Following the terrorist attacks carried out by Hamas against Israel, we have indications that your platform is being used to disseminate illegal content and disinformation in the EU,” Breton wrote in the letter to Musk. “Let me remind you that the Digital Services Act sets very precise obligations regarding content moderation.”
In particular, Breton called out the spread of “fake and manipulated images and facts circulating on your platform in the EU, such as repurposed old images of unrelated armed conflicts or military footage that actually originated from video games.” He also flagged the company’s newly-changed public interest policy, saying that the change “left many European users uncertain” about what type of content the platform allows.
Breton also suggested X was not responding appropriately to requests to deal with “potentially illegal content,” on its platform. “When you receive notices of illegal content in the EU, you must be timely, diligent and objective in taking action and removing the relevant content when warranted,” Breton wrote. “We have, from qualified sources, reports about potentially illegal content circulating on your service despite flags from relevant authorities.”
X didn’t respond to a request for comment, but Musk issued a brief reply on X. “Our policy is that everything is open source and transparent, an approach that I know the EU supports,” Musk wrote. “Please list the violations you allude to on X, so that that [sic] the public can see them.”
The company, which recently removed its misinformation-reporting tool and cut safety teams tasked with handling disinformation, has pointed to its crowd-sourced fact-checking tool, Community Notes, as its primary way of addressing misinformation.
In an update posted shortly after Breton shared the letter, the company said that “more than 500 unique notes” had been created over the last three days, including notes addressing “fake videos made with game simulators” and other “out of context” and “unrelated” footage. X added that it’s “actively working on” changes “that will help automatically show notes on even more posts with matching video and images” and that it’s “scaling up” notifications for people who previously engaged with content later fact-checked with a note. The company didn’t say how many users have received such notifications.
It’s not the first time European Union officials have raised concerns about the amount of disinformation on X. An EU report last month found that X had the highest prevalence of misinformation and disinformation. Under the Digital Services Act, companies like X are required to disclose details about their handling of disinformation.
This article originally appeared on Engadget at https://www.engadget.com/eu-official-warns-elon-musk-about-xs-handling-of-disinformation-amid-israel-hamas-war-210909999.html?src=rss
The Oversight Board has shared the details surrounding a case involving an “altered” Facebook video of President Joe Biden, which could have significant implications for Meta’s “manipulated media” policy.
At the center of the case is a video of Biden from last fall, when he joined his granddaughter who was voting in-person for the first time. After voting, Biden placed an “I voted” sticker on her shirt. A Facebook user later shared an edited version of the encounter, making it appear as if he repeatedly touched her chest. The video caption called him a “sick pedophile,” and said those who voted for him were “mentally unwell.”
In a statement, the board also raised the issue of manipulated media and elections. “Although this case involves President Biden, it touches on the much broader issue of how manipulated media might impact elections in every corner of the world,” Thomas Hughes, director of the Oversight Board Administration, said in a statement. “It’s important that we look at what challenges and best practices Meta should adopt when it comes to authenticating video content at scale.”
According to the Oversight Board, a Facebook user reported the video, but Meta ultimately left the clip up saying it didn’t break its rules. As the board notes, the company’s manipulated media policy prohibits misleading video created with artificial intelligence, but doesn’t apply to deceptive edits made with more conventional techniques. “The Board selected this case to assess whether Meta’s policies adequately cover altered videos that could mislead people into believing politicians have taken actions, outside of speech, that they have not,” the Oversight Board said in a statement announcing the case.
The case also underscores the often glacial pace of the Oversight Board and its ability to effect change at Meta. The Biden clip at the center of the case was originally filmed last October, and edited versions have been spreading on social media since at least January (the version in this case was first posted in May). It will likely take several more weeks, if not months, for the board to make a decision on whether the Facebook video should be removed or left up. Meta will then have two months to respond to the board’s policy recommendations, though it could take many more weeks or months for the company to fully implement any suggestions it chooses to adopt. That means any meaningful policy change may fall much closer to the 2024 election than the 2022 midterm election that kickstarted the case in the first place.
This article originally appeared on Engadget at https://www.engadget.com/the-oversight-board-will-take-on-metas-manipulated-media-policy-ahead-of-2024-elections-120046787.html?src=rss
Elon Musk is once again in the crosshairs of the Securities and Exchange Commission (SEC). The regulator, which has been investigating Musk’s Twitter takeover, is now suing the owner of X after he failed to appear for previously-scheduled testimony, The Wall Street Journal reports.
The SEC’s investigation dates back to 2022, when it opened a probe into Musk’s delayed disclosure of his stake in Twitter, which was at the time a publicly-traded company. Musk was 10 days late in filing paperwork, required under US securities law, disclosing his investment in Twitter. The delay may have earned him as much as $156 million, and also made him the target of a class-action lawsuit from former Twitter shareholders.
Musk had been scheduled to testify in the SEC investigation into the matter last month, The Wall Street Journal reports. But Musk failed to appear at a scheduled meeting in San Francisco, and later gave a “blanket refusal to appear for testimony” when the SEC tried to reschedule. The regulator is now asking a San Francisco federal court to force Musk to comply with its subpoena.
It’s hardly the first time Musk has found himself on the wrong side of the SEC, which he has repeatedly ridiculed over the years. The Tesla CEO was charged with securities fraud over a now-infamous 2018 tweet claiming he had “funding secured” to take the electric car maker private. Musk eventually settled with the SEC, paying a $20 million fine and giving up his position as chairman of Tesla’s board. Musk is, however, still fighting a provision of that SEC settlement requiring a so-called “Twitter-sitter” to sign off on some of Musk’s Tesla-related tweets.
X didn’t respond to a request for comment.
This article originally appeared on Engadget at https://www.engadget.com/the-sec-is-suing-elon-musk-for-refusing-to-testify-in-twitter-investigation-212347834.html?src=rss
Reddit is revamping search and making a key feature of its app more accessible. The company announced a series of updates it says makes search faster and easier across its app and mobile site.
The changes include a new “media” tab in search and within individual subreddits so users can more easily browse images, video clips, and GIFs. Additionally, search results in Reddit’s app and website are getting a simpler, cleaner look.
Reddit is also making search easier for people using the mobile version of its site who aren’t logged in. Now, logged out searches will have more filters, as well as separate tabs for comments and posts. And mobile web searches are 85 percent faster overall, according to the company.
There are also search improvements specifically for Redditors who rely on screen readers. “The posts and comments tabs on the search result page are now screen reader compatible on native mobile apps,” Reddit explains in a blog post. “We’re adding labels, roles/traits, values, and states to all elements on these pages to help redditors discover content and take action. If a redditor uses a screen reader, they can hear the actions available and the results returned on these tabs.”
That change could help the company address some of the long-running accessibility complaints about its app. Members of r/blind were some of the most vocal opponents to the company’s API crackdown, which resulted in the shuttering of many third-party apps. The company later said that it would exempt some accessibility-focused apps from its API fees, but the moderators of r/blind have said the concession isn’t enough, and that the company has “made it impossible for blind Redditors to moderate their own sub.” While Reddit’s latest updates don’t address blind users’ complaints about its moderation tools, the changes could still be a significant improvement for people who browse the app with screen readers.
This article originally appeared on Engadget at https://www.engadget.com/reddit-is-revamping-search-and-improving-support-for-screen-readers-143054804.html?src=rss
Meta’s Oversight Board is set to take on a new high-profile case ahead of next year’s presidential election. The board said it planned to announce a case involving a user appeal related to an “altered” video of President Joe Biden. The board didn’t disclose specifics of the case, which it said would be announced formally “in the coming days,” but suggested it will touch on policies that could have far-reaching implications for Meta.
“In the coming days the Oversight Board will announce a new case regarding a user-appeal to remove an altered video of President Joe Biden on Facebook,” the Oversight Board said in a statement. “This case will examine issues related to manipulated media on Meta’s platforms and the company’s policies on misinformation, especially around elections.”
While neither Meta or the Oversight Board has shared details about the video in question, the case could further shape the social network’s policies around AI-generated or otherwise manipulated media. Even before the rise of generative AI tools that make it easier than ever to create fake videos of public figures, Meta has taken heat over its response to suggestively edited videos of politicians. In 2019, the company declined to remove an edited clip that falsely claimed then-Speaker of the House of Representatives Nancy Pelosi was “drunk.”
The incident prompted the company’s current policy that bars AI-generated deepfakes, but allows some other types of edited videos to remain up. Over the last year, fact checkers have regularly debunked deceptively edited videos of Joe Biden that often spread widely on Facebook and Instagram.
It’s not the first time the Oversight Board has weighed in on a case involving a head of state. The board previously got involved in Meta’s suspension of Donald Trump, and recently recommended Meta suspend the former prime minister of Cambodia (Meta ultimately declined to do so). When the Oversight Board agrees to a case, Meta is only required to implement the board’s decision for the specific Facebook or Instagram post in question. The board also makes a number of policy suggestions, which Meta is free to ignore, though it must provide written responses.
This article originally appeared on Engadget at https://www.engadget.com/metas-oversight-board-will-weigh-in-on-altered-facebook-video-of-joe-biden-181008196.html?src=rss
Elon Musk is looking to new video features, including game streaming and live shopping, as part of his attempt to turn X into an “everything app.” The company formerly known as Twitter is experimenting with basic, Twitch-like game streaming capabilities, which are currently accessible to X Premium subscribers.
Musk showed off the feature Sunday night in a 54-minute Diablo IV stream posted from an anonymous Twitter account with the handle @cyb3rgam3r420. Musk later replied to the account and confirmed the company was testing the feature. An engineer at X, Mark Kalman, also shared a video explaining how Premium subscribers can set up game streaming from their accounts by connecting Open Broadcaster Software (OBS) to their Twitter account via X Media Studio.
For now, it’s unclear how serious X is about courting streamers. The feature seems to support viewer comments in the streams, but for now lacks most of the creator-centric features of other platforms. But it is one of the latest examples of how X is turning to creators and new video features in an effort to lure more users to the platform. Separately, the company also said it would begin experimenting with live shopping features through a new partnership with Paris Hilton. Variety reports that Hilton has signed on to “create four original video content programs per year that include live-shopping features.”
It’s also unclear if X’s infrastructure will be able to keep up with new live video features. The company, which shed many of its site reliability engineers in layoffs last year, has struggled with large live audio and video streams, particularly those boosted by Musk’s account. When Florida Governor Ron DeSantis appeared in a chat on Spaces in May to announce a presidential run with Musk, the stream repeatedly crashed.
According to Musk’s biographer, Walter Isaacson, the issue was a result of instability caused by a poorly planned move of one of the company’s data centers. However, the issues still don’t seem to be fully resolved. Just last week, Musk attempted to live stream himself visiting the US border with Mexico when the video feed abruptly cut out after about four minutes. Musk was able to eventually restart the stream, but only after he sent a frantic, company-wide email to all of X’s staff, according to New York Times reporter Ryan Mac. “Please fix this,” he said.
This article originally appeared on Engadget at https://www.engadget.com/x-is-working-on-game-streaming-and-live-shopping-features-203902095.html?src=rss
As expected, Apple is making a last-ditch effort to get the Supreme Court to reverse a ruling that would force it to open up its App Store to third-party payments. The iPhone maker filed a petition with the Court Thursday, arguing that the lower court injunction was “breathtakingly broad” and “unconstitutional.”
It’s the latest beat in a long-simmering feud between Cupertino and the Fortnite developer that’s seen both sides ask the Supreme Court to reverse parts of a lower court ruling. But Apple's latest petition could have far-reaching consequences for all developers, should the Supreme Court decide to take up the case.
That’s because Apple is asking the Supreme Court to reverse an injunction that would require the company to allow app developers to offer payments that circumvent its App Store, and the fees associated with it. Such a move would be a major blow to the App Store’s business, which has used the rule to maintain strict control over in-app payments.
The rule, often referred to as an “anti-steering” policy, has long been controversial and a major gripe for developers. It not only prohibits app makers from providing links to web-based payments, it bars them from even telling their customers that a cheaper rate is available somewhere else.
Fortnite developer Epic made the issue a central part of its antitrust lawsuit against Apple in 2020, and the judge in the case ruled in Epic’s favor on the issue in 2021. Apple has spent the last two years fighting that part of the ruling.
Separately, Epic has also asked the Supreme Court to reconsider part of the lower court’s ruling in its bid to keep its antitrust claims against Apple alive.
This article originally appeared on Engadget at https://www.engadget.com/apple-asks-supreme-court-to-reverse-app-store-ruling-in-epic-case-221126323.html?src=rss
Meta’s Connect keynote felt different this year, and not just because it marked the return of an in-person event. It’s been nearly two years since Mark Zuckerberg used Connect to announce that Facebook was changing its name to Meta and reorienting the entire company around the metaverse.
But at this year’s event, it felt almost as if Zuckerberg was trying to avoid saying the word “metaverse.” While he did utter the word a couple of times, he spent much more time talking up Meta’s new AI features, many of which will be available on Instagram, Facebook and other non-metaverse apps. Horizon Worlds, the company’s signature metaverse experience that was highlighted at last year’s Connect, was barely mentioned.
That may not be particularly surprising if you’ve been following the company’s metaverse journey lately. Meta has lost so much money on the metaverse that its own investors have questioned the strategy. And Zuckerberg has been mercilessly mocked for trying to hype seemingly minor metaverse features like low-res graphics or avatars with legs.
AI, on the other hand, is much more exciting. The rise of large language models has fueled a huge amount of interest from investors and consumers alike. Services like OpenAI’s ChatGPT, Snap’s MyAI and Midjourney have made the technology accessible, and understandable, to millions.
Given all that, it’s not surprising that Zuckerberg and Meta used much of Connect — once known solely as a virtual reality conference — to talk about the company’s new generative AI tools. And there was a lot to talk about: the company introduced Meta AI, a generative AI assistant, which can answer questions and take on the personalities of dozens of characters; AI-powered image editing for Instagram; and tools that will enable developers, creators and businesses to make their own AI-powered bots. AI will even play a prominent role in the company’s new hardware, the Meta Quest 3 and the Ray-Ban Meta smart glasses, both of which will ship with the Meta AI assistant.
But that doesn't mean the company is giving up on the metaverse. Zuckerberg has said the two are very much linked, and has previously tried to dispel the notion that Meta’s current focus on AI has somehow supplanted its metaverse investments. “A narrative has developed that we're moving away from focusing on the metaverse vision,” Zuckerberg said in April. “We've been focusing on both AI and the metaverse for years now, and we will continue to focus on both.”
But at Connect he offered a somewhat different pitch for the metaverse than he has in the past. Over the last two years, Zuckerberg spent a lot of time emphasizing socializing and working in VR environments, and the importance of avatars. This year, he pitched an AI-centric metaverse.
"Pretty soon, I think we're going to be at a point where you're going to be there physically with some of your friends, and others will be there digitally as avatars, as holograms, and they'll feel just as present as everyone else. Or, you know, you'll walk into a meeting and you'll sit down at a table, and there will be people who are there physically and people who are there digitally as holograms. But also sitting around the table with you are gonna be a bunch of AIs who are embodied as holograms, who are helping you get different stuff done too. So, I mean, this is just a quick glimpse of the future and how these ideas of the physical and digital come together into this idea that we call the metaverse."
Notably, the addition of AI assistants could also make “the metaverse” a lot more useful. One of the more intriguing features previewed during Connect was Meta AI-powered search in the Ray-Ban Meta smart glasses. The Google Lens-like feature would enable wearers to “show” the AI what they are seeing through the glasses and ask questions about it, such as asking Meta AI to identify a monument or translate text.
It’s not hard to imagine users coming up with their own use cases for AI assistants in Meta’s virtual worlds, either. Angela Fan, a research scientist with Meta AI, says generative AI will change the type of experiences people have in the metaverse. “It’s almost like a new angle on it,” Fan tells Engadget. “When you're hanging out with friends, for example, you might also have an AI looped in to help you with tasks. It’s the same kind of foundation, but brought to life with the AIs that will do things in addition to some of the friends that you hang out with in the metaverse.”
For now, it’s not entirely clear just how long it will be before these new AI experiences reach the metaverse. The company said the new “multi-modal” search capabilities would be arriving on its smart glasses sometime next year. And it didn’t give a timeframe for when the new “embodied” AI assistants could be available for metaverse hangouts.
It’s also not yet clear if the new wave of AI assistants will be popular enough to fuel a renewed interest in the metaverse to begin with. Meta previously tried to make (non-AI) chatbots a thing in 2016 and the effort fell flat. And even though generative AI makes the latest generation of bots much more powerful, the company has plenty of competition in the space. But by putting its AI into its other apps now, Meta has a much better chance at reaching its billions of users. And that could lay important groundwork for its vision for an AI-centric metaverse.
This article originally appeared on Engadget at https://www.engadget.com/metas-metaverse-is-getting-an-ai-makeover-194004996.html?src=rss