Posts with «social & online media» label

Slack rolls out its AI tools to all paying customers

Slack just rolled out its AI tools to all paying users, after releasing them to a select subset of customers earlier this year. The company’s been teasing these features since last year and, well, now they’re here.

The AI auto-generates channel recaps to give people key highlights of what they missed while away from the keyboard or smartphone, whether that's important work updates or office in-jokes. Slack says the algorithm that generates these recaps is smart enough to pull content from the various topics discussed in the channel. This means you'll get one paragraph on how plans are going for Jenny's cake party in the conference room and another on sales trends, or whatever.

There’s something similar available for threads, which are smaller conversations between one or a few people. The tool will recap any of these threads into a short paragraph. Customers can also opt into a daily recap for any channel or thread, delivered each morning.

Another interesting feature is conversational search. The various Slack channels stretch on forever and it can be tough to find the right chat when necessary. This allows people to ask questions using natural language, with the algorithm doing the actual searching.

These tools aren’t just for English speakers, as Slack AI now offers Japanese and Spanish language support. Slack says it’ll soon integrate some of its most-used third-party apps into the AI ecosystem. To that end, integration with Salesforce’s Einstein Copilot is coming in the near future.

It remains to be seen if these tools will actually be helpful or if they’re just more excuses to put the letters “AI” in promotional materials. I’ve been on Slack a long time and I haven’t encountered too many scenarios in which I’d need a series of auto-generated recaps, as longer conversations are typically relegated to one-on-one meetings, emails or video streams. However, maybe this will change how people use the service.

This article originally appeared on Engadget at

YouTube prevents ad-blocking mobile apps from accessing its videos

YouTube's war with ad blockers is far from over, and it's focusing on tools that enable ad-free viewing on mobile this time. The Google-owned video platform has announced that it's "strengthening [its] enforcement on third-party apps that violate" its Terms of Service, "specifically ad-blocking apps." It's talking about mobile applications you can use to access videos without being interrupted by advertisements. When you use an application like that, you may experience buffering issues or see an error message that says "The following content is not available on this app."

The service says its terms don't allow third-party apps to switch off ads "because that prevents the creator from being rewarded for viewership." Like it's been doing over the past few months since it started cracking down on ad blockers, YouTube suggests signing up for a Premium membership if you want to watch ad-free. YouTube Premium will set you back $14 a month. 

Back in November, YouTube told us that it "launched a global effort to urge viewers with ad blockers enabled to allow ads on YouTube or try YouTube Premium for an ad free experience." It started by showing pop-ups whenever an ad blocker was in use, telling you that it's against the website's TOS. Soon after that, it limited viewers to playing up to three videos with an ad blocker on before videos stopped loading altogether. Google also later admitted that if you have an ad blocker installed, you "may experience suboptimal viewing," such as having to wait longer before a video loads.


Meta is testing messaging capabilities for Threads, but don’t call them DMs

As Threads has grown to more than 130 million users, one of the features users most often complain is missing is the ability to send direct messages. But those missing out on DMs may soon have a new option to message other Threads users.

Meta is starting to test messaging features that rely on Instagram’s inbox but allow new messages to be initiated from the Threads app. The feature has begun to appear for some Threads users, who report seeing a “message” button atop other users’ profiles where the “mention” feature used to be. A Meta spokesperson confirmed the change, saying the company was “testing the ability to send a message from Threads to Instagram.”

Of note, Threads still doesn’t have its own inbox, and it’s not clear if it ever will. Instagram head Adam Mosseri has said multiple times that he doesn’t want to create a separate inbox for Threads, but would rather “make the Instagram inbox work” in the app. A Meta spokesperson further confirmed that “this is not a test of DMs on Threads.”

But even though it’s not a full-fledged DM feature, the ability to send a message from the Threads app without having to switch to Instagram could at least make messaging from Threads a little less clunky. Actually checking or replying to those messages, though, will still require users to head to the Instagram app.

That may still seem like an entirely unnecessary step, but Mosseri has pointed out that building two versions of the same inbox could easily get complicated. “If, in the end, we can’t make the Instagram inbox work for Threads, we’ll have a hard choice to make between (1) mirroring the Instagram inbox in Threads and dealing with notification routing weirdness, and (2) building a totally separate Threads inbox and dealing with the fact that you’ll have two redundant message threads with each of your friends with the same handles in two different apps,” he wrote in a post in November. “Neither seems great.”


Google, a $1.97 trillion company, is protesting California's plan to pay journalists

Google, the search giant that brought in more than $73 billion in profit last year, is protesting a California bill that would require it and other platforms to pay media outlets. The company announced that it was beginning a “short-term test” that will block links to local California news sources for a “small percentage” of users in the state.

The move is in response to the California Journalism Preservation Act, a bill that would require Google, Meta and other platforms to pay California publishers fees in exchange for links. The proposed law, which passed the state Assembly last year, amounts to a “link tax,” according to Google VP of News Partnerships Jaffer Zaidi.

“If passed, CJPA may result in significant changes to the services we can offer Californians and the traffic we can provide to California publishers,” Zaidi writes. But though the bill has yet to become law, Google is opting to give publishers and users in California a taste of what those changes could look like.

The company says it will temporarily test blocking links to California news sources that would be covered under the law in order “to measure the impact of the legislation on our product experience.” Zaidi didn’t say how large the test would be or how long it would last. Google is also halting all its spending on California newsrooms, including “new partnerships through Google News Showcase, our product and licensing program for news organizations, and planned expansions of the Google News Initiative.”

Google isn’t the first company to use hardball tactics in the face of new laws that aim to force tech companies to pay for journalism. Meta pulled news from Facebook and Instagram in Canada after a similar law passed and has threatened to do the same in California. (Meta did eventually cut deals to pay publishers in Australia after a 2021 law went into effect, but said last month it would end those partnerships.)

Google has a mixed track record on the issue. It pulled its News service out of Spain for seven years in protest of local copyright laws that would have required licensing fees. But the company signed deals worth about $150 million to pay Australian publishers. It also eventually backed off threats to pull news from search results in Canada, and forked over about $74 million. That may sound like a lot, but those amounts are still just a tiny fraction of the $10 billion to $12 billion that researchers estimate Google should be paying publishers.


Twitch CEO says DJs will have to share what they earn on the website with music labels

In an interview with the channel TweakMusicTips, Twitch CEO Dan Clancy said that DJ streamers on the platform will have to share their revenue with music labels. As posted by Zach Bussey on X (formerly Twitter), Clancy said that Twitch is working on a "structure" wherein DJs and the platform "are gonna have to share money with the labels." He said he's already talked to some DJs about it, and the DJs, of course, would rather not share what they earn. But Clancy said that Twitch will pay part of what the labels are owed, while the DJs hand over a portion of their revenue.

Clancy's statement was part of his response to the host's question about the copyright situation of music streamers on the platform. The CEO replied that Twitch has been talking to music labels about it in hopes of finding a stable solution so that DJ streamers don't get hit with DMCA takedown requests. He also said that the website has a "pretty good thing" going on with labels right now — a "thing" that involves Twitch paying them money, apparently — but it's not a sustainable long-term solution. Plus, the labels are only OK with that deal at the moment because they know Twitch is working on another solution that will make them (more) money. 

Clancy also clarified that live streams and videos on demand have different sets of rules for playing copyrighted music, and the latter is definitely a problem. That's why he suggests that DJs should mute pre-recorded videos on their own, because Twitch's system doesn't always detect copyrighted songs to mute them. The CEO said Twitch is close to signing the deal with labels, but it's unclear how the Amazon subsidiary intends to monitor live music streams and if it already has the technology to do so. 


The Motion Picture Association will work with Congress to start blocking piracy sites in the US

At CinemaCon this year, Motion Picture Association Chairman and CEO Charles Rivkin revealed a plan that would make "sailing the digital seas" under the Jolly Roger banner just a bit harder. Rivkin said the association is going to work with Congress to establish and enforce site-blocking legislation in the United States. He added that almost 60 countries use site-blocking as a tool against piracy, "including leading democracies and many of America's closest allies." The only reason why the US isn't one of them, he continued, is the "lack of political will, paired with outdated understandings of what site-blocking actually is, how it functions, and who it affects."

With the rule in place, "film and television, music and book publishers, sports leagues and broadcasters" can ask the court to order ISPs to block websites that share stolen content. Rivkin, arguing in favor of site-blocking, explained that the practice doesn't impact legitimate businesses. He said legislation around the practice would require detailed evidence to prove that a certain entity is engaged in illegal activities and that alleged perpetrators can appear in court to defend themselves. 

Rivkin cited FMovies, an illegal film streamer, as an example of how site-blocking in the US would minimize traffic to piracy websites. FMovies reportedly gets 160 million visits per month, a third of which come from the US. If a similar rule existed in the country, the website's traffic would, theoretically, drop pretty drastically. The MPA's chairman also addressed previous efforts to enforce site-blocking in the US, which critics said would "break the internet" and could potentially stifle free speech. While he insisted that other countries' experiences since then have proven those predictions wrong, he promised that the organization takes those concerns seriously.

He ended his speech by asking for the support of theater owners in the country. "The MPA is leading this charge in Washington," he said. "And we need the voices of theater owners — your voices — right by our side. Because this action will be good for all of us: Content creators. Theaters. Our workforce. Our country."


OpenAI and Google reportedly used transcriptions of YouTube videos to train their AI models

OpenAI and Google trained their AI models on text transcribed from YouTube videos, potentially violating creators’ copyrights, according to The New York Times. The report, which describes the lengths OpenAI, Google and Meta have gone to in order to maximize the amount of data they can feed to their AIs, cites numerous people with knowledge of the companies’ practices. It comes just days after YouTube CEO Neal Mohan said in an interview with Bloomberg Originals that OpenAI’s alleged use of YouTube videos to train its new text-to-video generator, Sora, would go against the platform’s policies.

According to the NYT, OpenAI used its Whisper speech recognition tool to transcribe more than one million hours of YouTube videos, which were then used to train GPT-4. The Information previously reported that OpenAI had used YouTube videos and podcasts to train its AI models, and OpenAI president Greg Brockman was reportedly involved in the effort. Per Google's rules, "unauthorized scraping or downloading of YouTube content" is not allowed, Google spokesperson Matt Bryant told the NYT, adding that the company was unaware of any such use by OpenAI.

The report, however, claims there were people at Google who knew but did not take action against OpenAI because Google was using YouTube videos to train its own AI models. Google told NYT it only does so with videos from creators who have agreed to take part in an experimental program. Engadget has reached out to Google and OpenAI for comment.

The NYT report also claims Google tweaked its privacy policy in June 2022 to more broadly cover its use of publicly available content, including Google Docs and Google Sheets, to train its AI models and products. Bryant told NYT that this is only done with the permission of users who opt into Google’s experimental features, and that the company “did not start training on additional types of data based on this language change.”


Meta plans to more broadly label AI-generated content

Meta says that its current approach to labeling AI-generated content is too narrow and that it will soon apply a "Made with AI" badge to a broader range of videos, audio and images. Starting in May, it will append the label to media when it detects industry-standard AI image indicators or when users acknowledge that they’re uploading AI-generated content. The company may also apply the label to posts that fact-checkers flag, though it's likely to downrank content that's been identified as false or altered.
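Meta hasn't published how its detection works; the "industry-standard AI image indicators" it refers to generally mean provenance metadata embedded in the file, such as the IPTC DigitalSourceType value for AI-generated media. The sketch below is purely illustrative (the marker URL is the real IPTC newscode, but the helper name and the Pillow-based XMP check are assumptions, not Meta's implementation):

```python
from io import BytesIO
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# IPTC's DigitalSourceType newscode for fully AI-generated media.
AI_MARKER = "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def has_ai_indicator(png_bytes: bytes) -> bool:
    """Return True if the PNG's embedded XMP packet carries the IPTC AI marker.
    (Hypothetical helper; a real checker would also parse C2PA manifests.)"""
    img = Image.open(BytesIO(png_bytes))
    xmp = img.info.get("XML:com.adobe.xmp", "")
    return AI_MARKER in xmp

# Build a tiny PNG carrying a minimal XMP packet, as an AI generator might.
meta = PngInfo()
meta.add_itxt("XML:com.adobe.xmp",
              f'<x:xmpmeta xmlns:x="adobe:ns:meta/">{AI_MARKER}</x:xmpmeta>')
tagged = BytesIO()
Image.new("RGB", (1, 1)).save(tagged, format="PNG", pnginfo=meta)

# And an untagged image for comparison.
plain = BytesIO()
Image.new("RGB", (1, 1)).save(plain, format="PNG")

print(has_ai_indicator(tagged.getvalue()), has_ai_indicator(plain.getvalue()))
```

Note that metadata like this is trivially strippable, which is why self-disclosure and fact-checker flags remain part of Meta's approach.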

The company announced the measure in the wake of an Oversight Board decision regarding a video that was maliciously edited to depict President Joe Biden touching his granddaughter inappropriately. The Oversight Board agreed with Meta's decision not to take down the video from Facebook as it didn't violate the company's rules regarding manipulated media. However, the board suggested that Meta should “reconsider this policy quickly, given the number of elections in 2024.”

Meta says it agrees with the board's "recommendation that providing transparency and additional context is now the better way to address manipulated media and avoid the risk of unnecessarily restricting freedom of speech, so we’ll keep this content on our platforms so we can add labels and context." The company added that, in July, it will stop taking down content purely based on violations of its manipulated video policy. "This timeline gives people time to understand the self-disclosure process before we stop removing the smaller subset of manipulated media," Meta's vice president of content policy Monika Bickert wrote in a blog post.

Meta had been applying an “Imagined with AI” label to photorealistic images that users whip up using the Meta AI tool. The updated policy goes beyond the Oversight Board's labeling recommendations, Meta says. "If we determine that digitally-created or altered images, video or audio create a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label so people have more information and context," Bickert wrote.

While the company generally believes that transparency and allowing appropriately labeled AI-generated photos, images and audio to remain on its platforms is the best way forward, it will still delete material that breaks the rules. "We will remove content, regardless of whether it is created by AI or a person, if it violates our policies against voter interference, bullying and harassment, violence and incitement, or any other policy in our Community Standards," Bickert noted.


An old SEO scam has a new AI-generated face

Over the years, Engadget has been the target of a common SEO scam, wherein someone claims ownership of an image and demands a link back to a particular website. A lot of other websites would tell you the same thing, but now scammers are making their fake DMCA takedown notices and threats of legal action look more legit with the help of easily accessible AI tools.

According to a report by 404 Media, the publisher of the website Tedium received a "copyright infringement notice" via email from a law firm called Commonwealth Legal last week. Like older, similar attempts at duping the recipient, the sender said they were reaching out "in relation to an image" connected to their client. In this case, the sender demanded the addition of a "visible and clickable link" to a website called "tech4gods" underneath the photo that was allegedly stolen.

Since Tedium actually used a photo from a royalty-free provider, the publisher looked into the demand, found the law firm's website and, upon closer inspection, realized that the images of its lawyers were generated by AI. As 404 Media notes, the lawyers' headshots have the vacant look in the eyes that's commonly seen in photos created by AI tools. A reverse image search on them returns results from a website that uses artificial intelligence to make "unique, worry-free model photos... from scratch." The publisher also found that the law firm's listed address, supposedly on the fourth floor of a building, points to a one-floor structure on Google Street View. The owner of tech4gods said he had nothing to do with the scam but admitted that he used to buy backlinks for his website.

This is but one example of how bad actors can use AI tools to fool and scam people, and we'll have to be more vigilant as instances like this are likely to keep growing. Reverse image search engines are your friend, but they're not infallible and won't always help. Deepfakes, for instance, have become a big problem in recent years, as bad actors continue to use them to create convincing videos and audio, not just to scam people but also to spread misinformation online.


Meta’s AI image generator struggles to create images of couples of different races

Meta AI is consistently unable to generate accurate images for seemingly simple prompts like “Asian man and Caucasian friend,” or “Asian man and white wife,” The Verge reports. Instead, the company’s image generator seems to be biased toward creating images of people of the same race, even when explicitly prompted otherwise.

Engadget confirmed these results in our own testing of Meta's web-based image generator. Prompts for "an Asian man with a white woman friend" or "an Asian man with a white wife" generated images of Asian couples. When asked for "a diverse group of people," Meta AI generated a grid of nine white faces and one person of color. On a couple of occasions it created a single result that reflected the prompt, but in most cases it failed to depict it accurately.

As The Verge points out, there are other more “subtle” signs of bias in Meta AI, like a tendency to make Asian men appear older while Asian women appeared younger. The image generator also sometimes added “culturally specific attire” even when that wasn’t part of the prompt.

It’s not clear why Meta AI is struggling with these types of prompts, though it’s not the first generative AI platform to come under scrutiny for its depiction of race. Google paused Gemini's ability to create images of people after the tool overcorrected for diversity, producing bizarre results in response to prompts about historical figures. Google later explained that its internal safeguards failed to account for situations when diverse results were inappropriate.

Meta didn’t immediately respond to a request for comment. The company has previously described Meta AI as being in “beta” and thus prone to making mistakes. Meta AI has also struggled to accurately answer simple questions about current events and public figures.
