
X names its third head of safety in less than two years

X has named a new head of safety nearly a year after the last executive in the position resigned. The company said Tuesday that it had promoted Kylie McRoberts to Head of Safety and hired Yale Cohen as Head of Brand Safety and Advertiser Solutions.

The two will have the unenviable task of leading X’s safety efforts, including its attempts to reassure advertisers that the platform doesn’t monetize hate speech or terrorist content. The company said earlier this year it planned to hire 100 new safety employees after previously cutting much of its safety staff.

Head of safety has been a particularly fraught position since Elon Musk took over the company previously known as Twitter. Musk has clashed with his safety leads before, and McRoberts is the third person to hold the title in less than two years. Yoel Roth resigned shortly after the disastrous rollout of Twitter Blue in 2022; he was replaced by Ella Irwin, who resigned last year after Musk publicly criticized employees for enforcing policies around misgendering.

Not much is known about McRoberts, but she is apparently an existing member of X’s safety team (her X account is currently private and a LinkedIn profile appears to have been recently deleted). “During her time at X, she has led initiatives to increase transparency in our moderation practices through labels, improve security with passkeys, as well as building out our new Safety Center of Excellence in Austin,” X said in a statement.


Spotify's free audiobook credit is coming to Canada and other countries next week

Spotify Premium users in Canada, Ireland and New Zealand will have access to 15 hours of monthly audiobook listening at no extra cost starting on April 9. Subscribers in the US, UK and Australia have had access to this perk for several months.

The Premium audiobook catalog now includes more than 250,000 titles, a notable increase from the 200,000 audiobooks in the library as of late 2023. So if you could use a change from the millions of songs and podcasts on Spotify, you'll have plenty of books to choose from.

Those who hit the 15-hour limit can add more audiobook listening time in 10-hour top-ups. The extra listening time costs CAD $14.99 in Canada, €12.99 in Ireland or NZD $19.99 in New Zealand, per TechCrunch.

Spotify has also offered an audiobook-only subscription plan in the US since last month. At $10, it costs $1 per month less than Spotify Premium and includes the same 15 hours of audiobook listening time. Depending on the length of the books you listen to, this plan might prove a better value than Audible, which grants you one audiobook credit per month for $15; a typical 10- to 12-hour audiobook, for instance, fits comfortably within a single month's allotment. That said, unused audiobook listening time on Spotify doesn't carry over to the next month.


You can now use ChatGPT without an account

On Monday, OpenAI began opening up ChatGPT to users without an account. It described the move as part of its mission to “make tools like ChatGPT broadly available so that people can experience the benefits of AI.” It also gives the company more training data (for those who don’t opt out) and perhaps nudges more users into creating accounts and subscribing for superior GPT-4 access instead of the older GPT-3.5 model free users get.

I tested the instant access, which — as advertised — allowed me to start a new GPT-3.5 thread without any login info. The chatbot’s standard “How can I help you today?” screen appears, with optional buttons to sign up or log in. Although I saw it today, OpenAI says it’s gradually rolling out access, so check back later if you don’t see the option yet.

OpenAI says it added extra safeguards for accountless users, including blocking prompts and image generations in more categories than it does for logged-in users. When asked for more info on what new categories it’s blocking, an OpenAI spokesperson told me that, while developing the feature, the company considered how logged-out GPT-3.5 users could potentially introduce new threats.

The spokesperson added that the teams in charge of detecting and stopping abuse of its AI models were involved in creating the new feature and will adjust accordingly if unexpected threats emerge. Of course, ChatGPT still blocks everything it would for signed-in users, as detailed in OpenAI’s moderation API.
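For the curious, OpenAI exposes this kind of content check to developers through its public moderation endpoint. The snippet below is a minimal sketch of how a developer might call it with the official Python SDK; the sample prompt and client setup are illustrative and aren’t drawn from the article.

    # Illustrative sketch: classifying a prompt with OpenAI's moderation endpoint.
    # Assumes the "openai" package is installed and OPENAI_API_KEY is set.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.moderations.create(input="Sample user prompt to classify")
    result = response.results[0]

    print("Flagged overall:", result.flagged)
    print("Violence category triggered:", result.categories.violence)

Each result includes an overall flag plus per-category booleans, which is roughly the level of detail the company declined to spell out for logged-out users.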

You can opt out of data training for your prompts when not signed in. To do so, click on the little question mark to the right of the text box, then select Settings and turn off the toggle for “Improve the model for everyone.”

OpenAI says more than 100 million people across 185 countries use ChatGPT weekly. Those are staggering numbers for an 18-month-old service from a company many people still hadn’t heard of two years ago. Today’s move gives those hesitant to create an account an incentive to take the world-changing chatbot for a spin, boosting those numbers even more.


LinkedIn is testing a TikTok-like feed for vertical video

LinkedIn is testing a new feed of TikTok-like vertical videos. The feature hasn’t been publicly announced, but it’s been spotted by users in recent days and the company confirmed the tests to TechCrunch.

According to a screenshot shared by Instagram employee Jenny Eishingdrelo and a video posted to LinkedIn by influencer marketing exec Austin Null, the new feed will appear in a separate “video” tab in the LinkedIn app. Users will be able to scroll vertically to move between clips, much like TikTok or Instagram Reels.

It’s not the first time the company has hopped on a trendy format. LinkedIn previously experimented with a Stories feature for disappearing posts. That feature lasted less than a year, though the professional network hinted at the time that it wasn’t done with its video experiments, saying it was working “to evolve the Stories format into a reimagined video experience across LinkedIn.”

Presumably, LinkedIn is hoping the feed will showcase content from its ranks of professional creators and thought leaders, many of whom are already posting video to their feeds. However, it’s not clear how many of the site’s users are interested in a dedicated video feed for workplace-related content.


Instagram is working on a new Reels feed that combines two users' interests

Instagram is working on a feature that would recommend Reels to you and a friend based on videos you've shared with each other and your individual interests. Reverse engineer Alessandro Paluzzi unearthed the feature, which is called Blend. Instagram confirmed to TechCrunch that it's testing Blend internally and it hasn't started trialing it publicly. It may be the case that Blend never sees the light of day, though it's always intriguing to find out about the ideas Instagram is toying with.

The platform hasn't revealed more details about how Blend will work, though the idea seems to be that Instagram users and one of their besties will discover new Reels together instead of one of them finding a video they like and DMing it to the other. It would make sense for Blend to have an indicator that the other person has already seen a particular Reel so that the two people who have access to the feed can start chatting about it. 

TikTok doesn't have a feature along these lines, as TechCrunch notes, so Blend could give Instagram an advantage when it comes to folks who like to check out short-form videos together. As with many of the other features platforms of this ilk introduce, Blend fundamentally seems to be about increasing engagement.

#Instagram is working on Blend: #Reels recommendations based on reels you've shared each other and your reels interests 👀

ℹ️ Private between the two of you. You can leave a Blend at any time. pic.twitter.com/1kcssBuf7G

— Alessandro Paluzzi (@alex193a) March 28, 2024


More YouTube creators are now making money from Shorts, the company's TikTok competitor

YouTube’s TikTok competitor, Shorts, is becoming a more significant part of the company’s monetization program. The company announced that more than a quarter of channels in its Partner Program are now earning money from the short-form videos.

The milestone comes a little more than a year after YouTube began sharing ad revenue with creators making Shorts. YouTube says it currently has more than 3 million creators around the world in the Partner Program, which would put the number of Shorts creators making money from the platform at roughly 750,000 or more.

Because ads on Shorts appear between clips in a feed, revenue sharing for Shorts is structured differently than for longer-form content on YouTube. Ad revenue is pooled and divided among eligible creators based on factors like views and music licensing. The company has said this arrangement is far more lucrative for individuals than traditional creator funds.
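YouTube hasn’t published the exact formula, but the pooled model described above can be illustrated with a simple view-share split. The sketch below is purely illustrative: the pool size, the music-licensing deduction and the creator cut are assumptions for the demo, not YouTube’s actual figures.

    # Illustrative only: a simplified pooled ad revenue split by view share.
    # The 20% music deduction and 45% creator cut are assumed values, not
    # YouTube's published terms.
    def split_shorts_pool(pool_revenue, views_by_creator, music_share=0.2, creator_cut=0.45):
        """Divide pooled ad revenue among creators in proportion to their views."""
        creator_pool = pool_revenue * (1 - music_share)  # what's left after music licensing
        total_views = sum(views_by_creator.values())
        return {
            creator: round(creator_cut * creator_pool * views / total_views, 2)
            for creator, views in views_by_creator.items()
        }

    # Example: a hypothetical $1,000 pool split between two creators.
    print(split_shorts_pool(1000, {"creator_a": 750_000, "creator_b": 250_000}))
    # {'creator_a': 270.0, 'creator_b': 90.0}

The point of pooling is that each creator’s payout scales with their share of total Shorts views rather than with the ads shown against their individual clips.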

So far though, it’s unclear just how much creators are making from Shorts compared with the platform’s other monetization programs. YouTube declined to share details but said the company has paid out $70 billion to creators over the last three years.

Shorts’ momentum could grow even more in the coming months. TikTok, which itself has been trying to compete more directly with YouTube by encouraging longer videos, faces a nonzero chance of being banned in the United States. Though that outcome is far from certain, a ban would almost certainly push former TikTok users and creators toward YouTube.


The Oversight Board weighs in on Meta’s most-moderated word

The Oversight Board is urging Meta to change the way it moderates the word “shaheed,” an Arabic term that has led to more takedowns than any other word or phrase on the company’s platforms. Meta asked the group for help crafting new rules last year after its attempts to revamp the policy internally stalled.

The Arabic word “shaheed” is often translated as “martyr,” though the board notes that this isn’t an exact definition and the word can have “multiple meanings.” But Meta’s current rules are based only on the “martyr” definition, which the company says implies praise. This has led to a “blanket ban” on the word when used in conjunction with people designated as “dangerous individuals” by the company.

However, this policy ignores the “linguistic complexity” of the word, which is “often used, even with reference to dangerous individuals, in reporting and neutral commentary, academic discussion, human rights debates and even more passive ways,” the Oversight Board says in its opinion. “There is strong reason to believe the multiple meanings of ‘shaheed’ result in the removal of a substantial amount of material not intended as praise of terrorists or their violent actions.”

In its recommendations to Meta, the Oversight Board says that the company should end its “blanket ban” on the word being used to reference “dangerous individuals,” and that posts should be removed only if there are other clear “signals of violence” or if the content breaks other policies. The board also wants Meta to better explain how it uses automated systems to enforce these rules.

If Meta adopts the Oversight Board’s recommendations, it could have a significant impact on the platform’s Arabic-speaking users. The board notes that the word, because it is so common, likely “accounts for more content removals under the Community Standards than any other single word or phrase,” across the company’s apps.

“Meta has been operating under the assumption that censorship can and will improve safety, but the evidence suggests that censorship can marginalize whole populations while not improving safety at all,” the board’s co-chair (and former Danish prime minister) Helle Thorning-Schmidt said in a statement. “The Board is especially concerned that Meta’s approach impacts journalism and civic discourse because media organizations and commentators might shy away from reporting on designated entities to avoid content removals.”

This is hardly the first time Meta has been criticized for moderation policies that disproportionately impact Arabic-speaking users. A 2022 report commissioned by the company found that Meta’s moderators were less accurate when assessing Palestinian Arabic, resulting in “false strikes” on users’ accounts. The company apologized last year after Instagram’s automated translations began inserting the word “terrorist” into the profiles of some Palestinian users.

The opinion is also yet another example of how long it can take for Meta’s Oversight Board to influence the social network’s policies. The company first asked the board to weigh in on the rules more than a year ago (the Oversight Board said it “paused” the publication of the policy after the October 7 attacks in Israel to ensure its rules “held up” to the “extreme stress” of the conflict in Gaza). Meta will now have two months to respond to the recommendations, though actual changes to the company’s policies and practices could take several more weeks or months to implement.

“We want people to be able to use our platforms to share their views, and have a set of policies to help them do so safely,” a Meta spokesperson said in a statement. “We aim to apply these policies fairly but doing so at scale brings global challenges, which is why in February 2023 we sought the Oversight Board's guidance on how we treat the word ‘shaheed’ when referring to designated individuals or organizations. We will review the Board’s feedback and respond within 60 days.”


Ron DeSantis signs bill requiring parental consent for kids to join social media platforms in Florida

Florida Governor Ron DeSantis just signed into law a bill named HB 3 that creates much stricter guidelines for how kids under 16 can use and access social media. Most notably, the law completely bans children younger than 14 from participating in these platforms.

The bill requires parent or guardian consent for 14- and 15-year-olds to make an account or use a pre-existing account on a social media platform. Additionally, the companies behind these platforms must abide by requests to delete these accounts within five business days. Failing to do so could rack up major fines, as much as $10,000 for each violation. These penalties increase to $50,000 per instance if it is ruled that the company participated in a “knowing or reckless” violation of the law.

As previously mentioned, anyone under the age of 14 will no longer be able to create or use social media accounts in Florida. The platforms must delete pre-existing accounts and any associated personal information. The bill doesn’t name any specific social media platforms, but suggests that any service that promotes “infinite scrolling” will have to follow these new rules, as will those that display reaction metrics or feature live-streaming and auto-play videos. Email platforms are exempt.

This isn’t just going to change the online habits of kids. There’s also a mandated age verification component, though that only kicks in if the website or app contains a “substantial portion of material” deemed harmful to users under 18. Under the language of this law, Floridians visiting a porn site, for instance, will have to verify their age via a proprietary platform on the site itself or use a third party system. News agencies are exempt from this part of the bill, even if they meet the materials threshold. 

Obviously, that brings up some very real privacy concerns. Nobody wants to enter their private information to look at, ahem, adult content. There’s a provision that gives websites the option to route users to an “anonymous age verification” system, which is defined as a third party that isn’t allowed to retain identifying information. Once again, any platform that doesn’t abide by this restriction could be subject to a $50,000 civil penalty for each instance.

This follows DeSantis vetoing a similar bill earlier this month. That bill would have banned teens under 16 from using social media apps, with no option for parental consent.

NetChoice, a trade association that represents social media platforms, has come out against the law, calling it unconstitutional. The group says that HB 3 will essentially impose an “ID for the internet,” arguing that the age verification component will have to expand to adequately track whether or not children under 14 are signing up for social media apps. NetChoice says “this level of data collection will put Floridians’ privacy and security at risk.”

Paul Renner, the state’s Republican House Speaker, said at a press conference for the bill signing that a “child in their brain development doesn’t have the ability to know that they’re being sucked in to these addictive technologies, and to see the harm, and step away from it. And because of that, we have to step in for them.”

The new law goes into effect on January 1, but it could face some legal challenges. Renner said he expects social media companies to “sue the second after this is signed,” and DeSantis acknowledged that the law will likely be challenged on First Amendment grounds, according to the Associated Press.

Florida isn’t the first state to try to separate kids from their screens. In Arkansas, a federal judge recently blocked enforcement of a law that required parental consent for minors to create new social media accounts. The same thing happened in California. A similar law passed in Utah, but was hit with a pair of lawsuits that forced state reps back to the drawing board. On the federal side of things, the Protecting Kids on Social Media Act would require parental consent for kids under 18 to use social media and, yeah, there’s that whole TikTok ban thing.


Authorities reportedly ordered Google to reveal the identities of some YouTube videos' viewers

Federal authorities in the US asked Google for the names, addresses, telephone numbers and user activity of the accounts that watched certain YouTube videos between January 1 and 8, 2023, according to unsealed court documents viewed by Forbes. People who watched those videos while they weren't logged into an account weren't safe either, because the government also asked for their IP addresses. The investigators reportedly ordered Google to hand over the information as part of an investigation into someone who uses the name "elonmuskwhm" online. 

Authorities suspect that elonmuskwhm is selling bitcoin for cash and is thus breaking money laundering laws, as well as running an unlicensed money transmitting business. Undercover agents reportedly sent the suspect links to YouTube tutorials on mapping via drones and augmented reality software during their conversations in early January. Those videos, however, weren't private and had collectively been viewed more than 30,000 times, which means the government was potentially asking Google for private information on quite a large number of users. "There is reason to believe that these records would be relevant and material to an ongoing criminal investigation, including by providing identification information about the perpetrators," authorities reportedly told the company.

Based on the documents Forbes viewed, the court granted the order but asked Google to keep it under wraps. It's also unclear whether Google handed over the data the authorities were asking for. In another incident, authorities asked the company for a list of accounts that "viewed and/or interacted" with eight YouTube livestreams. Police requested that information after learning that they were being watched through a stream while searching an area following a report that an explosive had been placed inside a trash can. One of those livestreams was posted by the Boston and Maine Live account, which has over 130,000 subscribers.

A Google spokesperson told Forbes that the company follows a "rigorous process" to protect the privacy of its users. But critics and privacy advocates are still concerned that government agencies are overstepping, using their power to obtain sensitive information about people who merely watched specific YouTube videos and aren't doing anything illegal.

"What we watch online can reveal deeply sensitive information about us—our politics, our passions, our religious beliefs, and much more," John Davisson, senior counsel at the Electronic Privacy Information Center, told Forbes. "It's fair to expect that law enforcement won't have access to that information without probable cause. This order turns that assumption on its head."


Threads begins testing swipe gestures to help train the For You algorithm

Threads has begun testing swipe gestures to help users improve the algorithm that populates the For You feed. It’s reportedly called Algo Tune because, well, it helps people tune their algorithms. It’s pretty rare for any social media site, particularly one run by Meta, to let users adjust the parameters by which the great and powerful algorithm operates, so this feature is definitely worth keeping an eye on.

It works a lot like Tinder and other dating apps. If you don’t like something on your feed, you swipe left. If you like a post and want to see more like it, you swipe right. That’s pretty much it. The algorithm is allegedly tuned over time by these responses, adjusting your feed to provide more of the content you want and less of the stuff you don’t want. Meta CEO Mark Zuckerberg calls it an “easy way to let us know what you want to see more of on your feed.”

This is just an experiment, for now, so the feature’s only rolling out to a select number of Threads users. The company also hasn’t released any specific information as to how all of the swiping actually influences the algorithm, but that’s par for the course when it comes to these things. The algo must remain protected at all costs.
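Since Meta hasn’t detailed the mechanism, here’s a purely generic sketch of how swipe feedback could nudge per-topic weights in a ranked feed. None of the names, numbers or logic below come from Threads; it’s just one simple way such a signal might be used.

    # Purely hypothetical: nothing here reflects how Threads or Meta rank posts.
    from collections import defaultdict

    topic_weights = defaultdict(lambda: 1.0)  # every topic starts at a neutral weight

    def record_swipe(topic, liked, step=0.1):
        """Nudge a topic's weight up on a right swipe (liked) or down on a left swipe."""
        topic_weights[topic] *= (1 + step) if liked else (1 - step)

    def rank_posts(posts):
        """Order candidate posts by the weight of their topic, highest first."""
        return sorted(posts, key=lambda post: topic_weights[post["topic"]], reverse=True)

    record_swipe("tech", liked=True)     # right swipe: more of this
    record_swipe("sports", liked=False)  # left swipe: less of this
    print(rank_posts([{"id": 1, "topic": "sports"}, {"id": 2, "topic": "tech"}]))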

The social media app sure has been busy lately, adding new tools at a rapid clip. Threads finally rolled out trending topics to all users after experimenting with the feature since February. Meta also recently previewed fediverse integration, which would allow Threads posts to appear on fellow social media app Mastodon. The company’s also been testing features that let users save drafts and take photos directly in the app.
