Posts with «social & online media» label

Newsletter service Ghost will support the fediverse protocol ActivityPub

Newsletter platform Ghost is the latest service to pledge support for ActivityPub, the open protocol powering the fediverse. The company announced Monday that it would add ActivityPub support later this year, a move that could bring tens of millions of people into the fediverse.

The fediverse is a growing collection of services, including Mastodon, Flipboard and Threads, that support the ActivityPub protocol. It’s part of a broader movement toward decentralized social media services, which rely on open protocols rather than closed networks. Proponents often compare it to email, which allows people to communicate regardless of their preferred app or platform.

In a blog post laying out its vision, Ghost said it was joining the fediverse in an effort to “bring back” the open web. “Ghost publishers will be able to follow, like and interact with one another in the same way that you would normally do on a social network — but on your own website,” the company wrote. “The difference, of course, is that you’ll also be able to follow, like, and interact with users on Mastodon, Threads, Flipboard, Buttondown, WriteFreely, Tumblr, WordPress, PeerTube, Pixelfed... or any other platform that has adopted ActivityPub, too.”
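To make that cross-platform interaction concrete, here is a minimal sketch of the kind of exchange ActivityPub defines: one server delivering a "Follow" activity to another actor's inbox. Ghost has not published its implementation details, all URLs below are hypothetical, and a real integration would also need WebFinger discovery and HTTP Signatures, which are omitted here for brevity.

```python
# Minimal, illustrative ActivityPub "Follow" delivery (hypothetical URLs).
# Real servers must sign this request (HTTP Signatures) and discover actors
# via WebFinger before posting to an inbox.
import requests

follow_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "id": "https://example-ghost-site.com/activities/1",   # hypothetical activity ID
    "type": "Follow",
    "actor": "https://example-ghost-site.com/actor",        # the publisher doing the following
    "object": "https://mastodon.example/users/alice",       # the account being followed
}

# Deliver the activity to the target actor's inbox (unsigned here for brevity)
response = requests.post(
    "https://mastodon.example/users/alice/inbox",
    json=follow_activity,
    headers={"Content-Type": "application/activity+json"},
)
print(response.status_code)
```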

While Ghost says ActivityPub integration will be optional for publishers, the company notes that its entry into the fediverse could bring "tens of millions" of new people into the space. A number of popular newsletters run on Ghost, including Platformer, Garbage Day and She’s a Beast, as does the independent tech news site 404 Media.

This article originally appeared on Engadget at https://www.engadget.com/newsletter-service-ghost-will-support-the-fediverse-protocol-activitypub-231359155.html?src=rss

Mozilla urges WhatsApp to combat misinformation ahead of global elections

In 2024, four billion people — about half the world’s population — in 64 countries, including large democracies like the US and India, will head to the polls. Social media companies like Meta, YouTube and TikTok have promised to protect the integrity of those elections, at least as far as discourse and factual claims being made on their platforms are concerned. Missing from the conversation, however, is closed messaging app WhatsApp, which now rivals public social media platforms in both scope and reach. That absence has researchers from non-profit Mozilla worried.

“Almost 90% of the safety interventions pledged by Meta ahead of these elections are focused on Facebook and Instagram,” Odanga Madung, a senior researcher at Mozilla focused on elections and platform integrity, told Engadget. “Why has Meta not publicly committed to a public road map of exactly how it’s going to protect elections within [WhatsApp]?”

Over the last ten years, WhatsApp, which Meta (then Facebook) bought for $19 billion in 2014, has become the default way for most of the world outside the US to communicate. In 2020, WhatsApp announced that it had more than two billion users around the world — a scale that dwarfs every other social or messaging app except Facebook itself.

Despite that scale, Meta’s election-related safety measures have mostly focused on Facebook. Mozilla’s analysis found that Facebook has made 95 policy announcements related to elections since 2016, the year the social network came under scrutiny for helping spread fake news and foster extreme political sentiments, while WhatsApp has made only 14. By comparison, Google and YouTube have made 35 and 27 announcements respectively, while X and TikTok have made 34 and 21. “From what we can tell from its public announcements, Meta’s election efforts seem to overwhelmingly prioritize Facebook,” wrote Madung in the report.

Mozilla is now calling on Meta to make major changes to how WhatsApp functions on polling days and in the months before and after a country’s elections. They include adding disinformation labels to viral content (“Highly forwarded: please verify” instead of the current “forwarded many times” label), restricting the broadcast and Communities features that let people blast messages to hundreds of people at the same time, and nudging people to “pause and reflect” before they forward anything. More than 16,000 people have signed Mozilla’s pledge asking WhatsApp to slow the spread of political disinformation, a company spokesperson told Engadget.

WhatsApp first started adding friction to its service after dozens of people were killed in India, the company’s largest market, in a series of lynchings sparked by misinformation that went viral on the platform. These measures included limiting the number of people and groups that users could forward a piece of content to, and labeling forwarded messages as “forwarded” — the idea being that people might treat forwarded content with greater skepticism.

“Someone in Kenya or Nigeria or India using WhatsApp for the first time is not going to think about the meaning of the ‘forwarded’ label in the context of misinformation,” Madung said. “In fact, it might have the opposite effect — that something has been highly forwarded, so it must be credible. For many communities, social proof is an important factor in establishing the credibility of something.”

The idea of asking people to pause and reflect came from a feature Twitter once implemented that prompted people to actually read an article before retweeting it if they hadn’t opened the link first. Twitter said the prompt led to a 40% increase in people opening articles before retweeting them.

And the call for WhatsApp to temporarily disable its broadcast and Communities features arose from concerns over their potential to blast messages, forwarded or otherwise, to thousands of people at once. “They’re trying to turn this into the next big social media platform,” Madung said. “But without the consideration for the rollout of safety features.”

“WhatsApp is one of the only technology companies to intentionally constrain sharing by introducing forwarding limits and labeling messages that have been forwarded many times,” a WhatsApp spokesperson told Engadget. “We’ve built new tools to empower users to seek accurate information while protecting them from unwanted contact, which we detail on our website.”

Mozilla’s demands came out of research around platforms and elections that the organization conducted in Brazil, India and Liberia. The first two are among WhatsApp’s largest markets, while most of the population of Liberia lives in rural areas with low internet penetration, making traditional online fact-checking nearly impossible. Across all three countries, Mozilla found political parties using WhatsApp’s broadcast feature heavily to “micro-target” voters with propaganda and, in some cases, hate speech.

WhatsApp’s encrypted nature also makes it impossible for researchers to monitor what is circulating within the platform’s ecosystem — a limitation that isn’t stopping some of them from trying. In 2022, two Rutgers professors, Kiran Garimella and Simon Chandrachud, visited the offices of political parties in India and managed to convince officials to add them to 500 WhatsApp groups that they ran. The data they gathered formed the basis of an award-winning paper called “What circulates on Partisan WhatsApp in India?” Although the findings were surprising — Garimella and Chandrachud found that misinformation and hate speech did not, in fact, make up a majority of the content in these groups — the authors clarified that their sample size was small and that they may have been deliberately excluded from groups where hate speech and political misinformation flowed freely.

“Encryption is a red herring to prevent accountability on the platform,” Madung said. “In an electoral context, the problems are not necessarily with the content purely. It’s about the fact that a small group of people can end up significantly influencing groups of people with ease. These apps have removed the friction of the transmission of information through society.”

This article originally appeared on Engadget at https://www.engadget.com/mozilla-urges-whatsapp-to-combat-misinformation-ahead-of-global-elections-200002024.html?src=rss

Slack rolls out its AI tools to all paying customers

Slack just rolled out its AI tools to all paying users, after releasing them to a select subset of customers earlier this year. The company’s been teasing these features since last year and, well, now they’re here.

The AI auto-generates channel recaps to give people key highlights of what they missed while away from the keyboard or smartphone, whether that’s important work updates or office in-jokes. Slack says the algorithm that generates these recaps is smart enough to pull content from the various topics discussed in the channel. This means you’ll get a paragraph on how plans are going for Jenny’s cake party in the conference room and another on sales trends or whatever.

There’s something similar available for threads, which are smaller conversations between one or a few people. The tool will recap any of these threads into a short paragraph. Customers can also opt into a daily recap for any channel or thread, delivered each morning.

Another interesting feature is conversational search. Slack channels stretch on forever, and it can be tough to find the right conversation when you need it. Conversational search lets people ask questions in natural language, with the algorithm doing the actual searching.

These tools aren’t just for English speakers, as Slack AI now offers Japanese and Spanish language support. Slack says it’ll soon integrate some of its most-used third-party apps into the AI ecosystem. To that end, integration with Salesforce’s Einstein Copilot is coming in the near future.

It remains to be seen if these tools will actually be helpful or if they’re just more excuses to put the letters “AI” in promotional materials. I’ve been on Slack a long time and I haven’t encountered too many scenarios in which I’d need a series of auto-generated recaps, as longer conversations are typically relegated to one-on-one meetings, emails or video streams. However, maybe this will change how people use the service.

This article originally appeared on Engadget at https://www.engadget.com/slack-rolls-out-its-ai-tools-to-all-paying-customers-120045296.html?src=rss

YouTube prevents ad-blocking mobile apps from accessing its videos

YouTube's war with ad blockers is far from over, and it's focusing on tools that enable ad-free viewing on mobile this time. The Google-owned video platform has announced that it's "strengthening [its] enforcement on third-party apps that violate" its Terms of Service, "specifically ad-blocking apps." It's talking about mobile applications you can use to access videos without being interrupted by advertisements. When you use an application like that, you may experience buffering issues or see an error message that says "The following content is not available on this app."

The service says its terms don't allow third-party apps to switch off ads "because that prevents the creator from being rewarded for viewership." Like it's been doing over the past few months since it started cracking down on ad blockers, YouTube suggests signing up for a Premium membership if you want to watch ad-free. YouTube Premium will set you back $14 a month. 

Back in November, YouTube told us that it "launched a global effort to urge viewers with ad blockers enabled to allow ads on YouTube or try YouTube Premium for an ad free experience." It started by showing pop-ups whenever an ad blocker was in use, telling viewers that the practice is against the website's TOS. Soon after that, users with an ad blocker enabled could only play up to three videos before the site stopped loading any more. Google also later admitted that if you have an ad blocker installed, you "may experience suboptimal viewing," such as having to wait longer before a video loads.

This article originally appeared on Engadget at https://www.engadget.com/youtube-prevents-ad-blocking-mobile-apps-from-accessing-its-videos-123055735.html?src=rss

Meta is testing messaging capabilities for Threads, but don’t call them DMs

As Threads has grown to more than 130 million users, one of the major features users still complain about missing is direct messaging. But those missing out on DMs may soon have a new option to message other Threads users.

Meta is starting to test messaging features that rely on Instagram’s inbox but allow new messages to be initiated from the Threads app. The feature has begun to appear for some Threads users, who report seeing a “message” button atop other users’ profiles where the “mention” feature used to be. A Meta spokesperson confirmed the change, saying the company was “testing the ability to send a message from Threads to Instagram.”

Of note, Threads still doesn’t have its own inbox, and it’s not clear if it ever will. Instagram head Adam Mosseri has said multiple times that he doesn’t want to create a separate inbox for Threads, but would rather “make the Instagram inbox work” in the app. A Meta spokesperson further confirmed that “this is not a test of DMs on Threads.”

But even though it’s not a full-fledged DM feature, the ability to send a message from the Threads app without having to switch to Instagram could at least make messaging from Threads a little less clunky. Actually checking or replying to those messages, though, will still require users to head to the Instagram app.

That may still seem like an entirely unnecessary step, but Mosseri has pointed out that building two versions of the same inbox could easily get complicated. “If, in the end, we can’t make the Instagram inbox work for Threads, we’ll have a hard choice to make between (1) mirroring the Instagram inbox in Threads and dealing with notification routing weirdness, and (2) building a totally separate Threads inbox and dealing with the fact that you’ll have two redundant message threads with each of your friends with the same handles in two different apps,” he wrote in a post in November. “Neither seems great.”

This article originally appeared on Engadget at https://www.engadget.com/meta-is-testing-messaging-capabilities-for-threads-but-dont-call-them-dms-213536876.html?src=rss

Google, a $1.97 trillion company, is protesting California's plan to pay journalists

Google, the search giant that brought in more than $73 billion in profit last year, is protesting a California bill that would require it and other platforms to pay media outlets. The company announced that it was beginning a “short-term test” that will block links to local California news sources for a “small percentage” of users in the state.

The move is in response to the California Journalism Preservation Act, a bill that would require Google, Meta and other platforms to pay California publishers fees in exchange for links. The proposed law, which passed the state Assembly last year, amounts to a “link tax,” according to Google VP of News Partnerships Jaffer Zaidi.

“If passed, CJPA may result in significant changes to the services we can offer Californians and the traffic we can provide to California publishers,” Zaidi wrote. But even though the bill has yet to become law, Google is opting to give publishers and users in California a taste of what those changes could look like.

The company says it will temporarily test blocking links to California news sources that would be covered under the law in order “to measure the impact of the legislation on our product experience.” Zaidi didn’t say how large the test would be or how long it would last. Google is also halting all its spending on California newsrooms, including “new partnerships through Google News Showcase, our product and licensing program for news organizations, and planned expansions of the Google News Initiative.”

Google isn’t the first company to use hardball tactics in the face of new laws that aim to force tech companies to pay for journalism. Meta pulled news from Facebook and Instagram in Canada after a similar law passed and has threatened to do the same in California. (Meta did eventually cut deals to pay publishers in Australia after a 2021 law went into effect, but said last month it would end those partnerships.)

Google has a mixed track record on the issue. It pulled its News service out of Spain for seven years in protest of local copyright laws that would have required licensing fees. But the company signed deals worth about $150 million to pay Australian publishers. It also eventually backed off threats to pull news from search results in Canada, and forked over about $74 million. That may sound like a lot, but those amounts are still just a tiny fraction of the $10 billion to $12 billion that researchers estimate Google should be paying publishers.

This article originally appeared on Engadget at https://www.engadget.com/google-a-197-trillion-company-is-protesting-californias-plan-to-pay-journalists-175706632.html?src=rss

Twitch CEO says DJs will have to share what they earn on the website with music labels

In an interview with the channel TweakMusicTips, Twitch CEO Dan Clancy said that DJ streamers on the platform will have to share their revenue with music labels. As posted by Zach Bussey on X (formerly Twitter), Clancy said that Twitch is working on a "structure," wherein DJs and the platform "are gonna have to share money with the labels." He said he's already talked to some DJs about it, and they would, of course, rather not share what they earn. But Clancy said that Twitch will pay part of what the labels are owed, while the DJs hand over a portion of their revenue.

Clancy's statement was part of his response to the host's question about the copyright situation of music streamers on the platform. The CEO replied that Twitch has been talking to music labels about it in hopes of finding a stable solution so that DJ streamers don't get hit with DMCA takedown requests. He also said that the website has a "pretty good thing" going on with labels right now — a "thing" that involves Twitch paying them money, apparently — but it's not a sustainable long-term solution. Plus, the labels are only OK with that deal at the moment because they know Twitch is working on another solution that will make them (more) money. 

Clancy also clarified that live streams and videos on demand have different sets of rules for playing copyrighted music, and that the latter is definitely a problem. That's why he suggests DJs mute copyrighted tracks in their pre-recorded videos themselves, because Twitch's system doesn't always detect and mute those songs. The CEO said Twitch is close to signing the deal with labels, but it's unclear how the Amazon subsidiary intends to monitor live music streams and whether it already has the technology to do so.

This article originally appeared on Engadget at https://www.engadget.com/twitch-ceo-says-djs-will-have-to-share-what-they-earn-on-the-website-with-music-labels-060210010.html?src=rss

The Motion Picture Association will work with Congress to start blocking piracy sites in the US

At CinemaCon this year, Motion Picture Association Chairman and CEO Charles Rivkin revealed a plan that would make "sailing the digital seas" under the Jolly Roger banner just a bit harder. Rivkin said the association is going to work with Congress to establish and enforce site-blocking legislation in the United States. He added that almost 60 countries use site-blocking as a tool against piracy, "including leading democracies and many of America's closest allies." The only reason the US isn't one of them, he continued, is the "lack of political will, paired with outdated understandings of what site-blocking actually is, how it functions, and who it affects."

With the rule in place, "film and television, music and book publishers, sports leagues and broadcasters" can ask the court to order ISPs to block websites that share stolen content. Rivkin, arguing in favor of site-blocking, explained that the practice doesn't impact legitimate businesses. He said legislation around the practice would require detailed evidence to prove that a certain entity is engaged in illegal activities and that alleged perpetrators can appear in court to defend themselves. 

Rivkin cited FMovies, an illegal film streaming site, as an example of how site-blocking in the US would minimize traffic to piracy websites. FMovies reportedly gets 160 million visits per month, a third of which comes from the US. If such a rule existed in the country, the website's traffic would, theoretically, drop drastically. The MPA's chairman also talked about previous efforts to enforce site-blocking in the US, which critics said at the time would "break the internet" and could potentially stifle free speech. While he insisted that other countries' experiences since then had proven those predictions wrong, he promised that the organization takes those concerns seriously.

He ended his speech by asking for the support of theater owners in the country. "The MPA is leading this charge in Washington," he said. "And we need the voices of theater owners — your voices — right by our side. Because this action will be good for all of us: Content creators. Theaters. Our workforce. Our country."

This article originally appeared on Engadget at https://www.engadget.com/the-motion-picture-association-will-work-with-congress-to-start-blocking-piracy-sites-in-the-us-062111261.html?src=rss

OpenAI and Google reportedly used transcriptions of YouTube videos to train their AI models

OpenAI and Google trained their AI models on text transcribed from YouTube videos, potentially violating creators’ copyrights, according to The New York Times. The report, which describes the lengths OpenAI, Google and Meta have gone to in order to maximize the amount of data they can feed to their AIs, cites numerous people with knowledge of the companies’ practices. It comes just days after YouTube CEO Neal Mohan said in an interview with Bloomberg Originals that OpenAI’s alleged use of YouTube videos to train its new text-to-video generator, Sora, would go against the platform’s policies.

According to the NYT, OpenAI used its Whisper speech recognition tool to transcribe more than one million hours of YouTube videos, which were then used to train GPT-4. The Information previously reported that OpenAI had used YouTube videos and podcasts to train the two AI systems. OpenAI president Greg Brockman was reportedly among the people who worked on collecting those videos. Matt Bryant, a spokesperson for Google, told NYT that Google's rules prohibit "unauthorized scraping or downloading of YouTube content," and that the company was unaware of any such use by OpenAI.
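For context, the Whisper models OpenAI released are open source, and transcribing audio with them is straightforward. The sketch below uses the openai-whisper Python package on a hypothetical local audio file; it illustrates the general technique only, not OpenAI's internal pipeline, which reportedly operated at the scale of a million-plus hours of video.

```python
# Illustrative sketch using the open-source "openai-whisper" package
# (pip install openai-whisper) to transcribe a local audio file.
# The file path is hypothetical.
import whisper

model = whisper.load_model("base")        # small, general-purpose checkpoint
result = model.transcribe("lecture.mp3")  # hypothetical audio file
print(result["text"])                     # plain-text transcript
```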

The report, however, claims there were people at Google who knew but did not take action against OpenAI because Google was using YouTube videos to train its own AI models. Google told NYT it only does so with videos from creators who have agreed to take part in an experimental program. Engadget has reached out to Google and OpenAI for comment.

The NYT report also claims Google tweaked its privacy policy in June 2022 to more broadly cover its use of publicly available content, including Google Docs and Google Sheets, to train its AI models and products. Bryant told NYT that this is only done with the permission of users who opt into Google’s experimental features, and that the company “did not start training on additional types of data based on this language change.”

This article originally appeared on Engadget at https://www.engadget.com/openai-and-google-reportedly-used-transcriptions-of-youtube-videos-to-train-their-ai-models-163531073.html?src=rss

Meta plans to more broadly label AI-generated content

Meta says that its current approach to labeling AI-generated content is too narrow and that it will soon apply a "Made with AI" badge to a broader range of videos, audio and images. Starting in May, it will append the label to media when it detects industry-standard AI image indicators or when users acknowledge that they’re uploading AI-generated content. The company may also apply the label to posts that fact-checkers flag, though it's likely to downrank content that's been identified as false or altered.
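Meta hasn't detailed its detection pipeline, but "industry-standard indicators" generally refers to provenance metadata such as C2PA Content Credentials and the IPTC "digital source type" field that some generative tools embed in image files. As a rough, purely illustrative sketch (the byte-scan approach and file path are assumptions, not Meta's method), one crude check looks for the IPTC "trainedAlgorithmicMedia" marker in a file's embedded metadata:

```python
# Illustrative only: crude scan for the IPTC digital-source-type marker that
# some AI image tools embed in XMP metadata. Not Meta's actual detector.
from pathlib import Path

# IPTC NewsCodes value used to label content produced by generative AI models
AI_SOURCE_TYPE = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def looks_ai_labeled(image_path: str) -> bool:
    """Return True if the raw file bytes contain the IPTC 'trainedAlgorithmicMedia' marker."""
    data = Path(image_path).read_bytes()
    return AI_SOURCE_TYPE in data

if __name__ == "__main__":
    print(looks_ai_labeled("example.jpg"))  # hypothetical file
```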

The company announced the measure in the wake of an Oversight Board decision regarding a video that was maliciously edited to depict President Joe Biden touching his granddaughter inappropriately. The Oversight Board agreed with Meta's decision not to take down the video from Facebook as it didn't violate the company's rules regarding manipulated media. However, the board suggested that Meta should “reconsider this policy quickly, given the number of elections in 2024.”

Meta says it agrees with the board's "recommendation that providing transparency and additional context is now the better way to address manipulated media and avoid the risk of unnecessarily restricting freedom of speech, so we’ll keep this content on our platforms so we can add labels and context." The company added that, in July, it will stop taking down content purely based on violations of its manipulated video policy. "This timeline gives people time to understand the self-disclosure process before we stop removing the smaller subset of manipulated media," Meta's vice president of content policy Monika Bickert wrote in a blog post.

Meta had been applying an “Imagined with AI” label to photorealistic images that users whip up using the Meta AI tool. The updated policy goes beyond the Oversight Board's labeling recommendations, Meta says. "If we determine that digitally-created or altered images, video or audio create a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label so people have more information and context," Bickert wrote.

While the company generally believes that transparency and allowing appropriately labeled AI-generated photos, images and audio to remain on its platforms is the best way forward, it will still delete material that breaks the rules. "We will remove content, regardless of whether it is created by AI or a person, if it violates our policies against voter interference, bullying and harassment, violence and incitement, or any other policy in our Community Standards," Bickert noted.

This article originally appeared on Engadget at https://www.engadget.com/meta-plans-to-more-broadly-label-ai-generated-content-152945787.html?src=rss