Posts with «social & online media» label

More YouTube creators are now making money from Shorts, the company's TikTok competitor

YouTube’s TikTok competitor, Shorts, is becoming a more significant part of the company’s monetization program. The company announced that more than a quarter of channels in its Partner Program are now earning money from the short-form videos.

The milestone comes a little more than a year after YouTube began sharing ad revenue with creators making Shorts. YouTube says it currently has more than 3 million creators around the world in the Partner Program, which would imply the number of Shorts creators making money from the platform is somewhere in the hundreds of thousands.

Because ads on Shorts appear between clips in a feed, revenue sharing for Shorts is structured differently than for longer-form content on YouTube. Ad revenue is pooled and divided among eligible creators based on factors like views and music licensing. The company has said this arrangement is far more lucrative for individuals than traditional creator funds.
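YouTube hasn't published its exact formula, but the pooled, view-weighted arrangement described above can be sketched roughly as follows. The function name, the example numbers and the per-creator breakdown are illustrative assumptions; the 45% creator share matches YouTube's publicly stated Shorts revenue split, though the real system also adjusts for music licensing costs, which this sketch omits.

```python
def split_shorts_pool(pool, creators, creator_share=0.45):
    """Divide a pooled ad-revenue amount among eligible creators in
    proportion to their Shorts views, then apply the creator-side
    revenue share. A simplified sketch, not YouTube's actual formula
    (music licensing adjustments are ignored here)."""
    total_views = sum(creators.values())
    payouts = {}
    for name, views in creators.items():
        # Each creator gets a view-weighted slice of the pool...
        allocation = pool * views / total_views
        # ...and keeps their revenue-share percentage of that slice.
        payouts[name] = round(allocation * creator_share, 2)
    return payouts

# Hypothetical month: a $100,000 pool split among three creators
print(split_shorts_pool(100_000, {"a": 600_000, "b": 300_000, "c": 100_000}))
# → {'a': 27000.0, 'b': 13500.0, 'c': 4500.0}
```

The key design difference from a fixed creator fund is that the pool scales with ad revenue rather than being a capped amount the platform fronts.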

So far though, it’s unclear just how much creators are making from Shorts compared with the platform’s other monetization programs. YouTube declined to share details but said the company has paid out $70 billion to creators over the last three years.

Shorts’ momentum could grow even more in the coming months. TikTok, which itself has been trying to compete more directly with YouTube by encouraging longer videos, is facing a nonzero chance that its app could be banned in the United States. Though that outcome is far from certain, YouTube would almost certainly attract former TikTok users and creators.

This article originally appeared on Engadget at https://www.engadget.com/more-youtube-creators-are-now-making-money-from-shorts-the-companys-tiktok-competitor-130017537.html?src=rss

The Oversight Board weighs in on Meta’s most-moderated word

The Oversight Board is urging Meta to change the way it moderates the word “shaheed,” an Arabic term that has led to more takedowns than any other word or phrase on the company’s platforms. Meta asked the group for help crafting new rules last year after internal attempts to revamp the policy stalled.

The Arabic word “shaheed” is often translated as “martyr,” though the board notes that this isn’t an exact definition and the word can have “multiple meanings.” But Meta’s current rules are based only on the “martyr” definition, which the company says implies praise. This has led to a “blanket ban” on the word when used in conjunction with people designated as “dangerous individuals” by the company.

However, this policy ignores the “linguistic complexity” of the word, which is “often used, even with reference to dangerous individuals, in reporting and neutral commentary, academic discussion, human rights debates and even more passive ways,” the Oversight Board says in its opinion. “There is strong reason to believe the multiple meanings of ‘shaheed’ result in the removal of a substantial amount of material not intended as praise of terrorists or their violent actions.”

In its recommendations to Meta, the Oversight Board says that the company should end its “blanket ban” on the word being used to reference “dangerous individuals,” and that posts should only be removed if there are other clear “signals of violence” or if the content breaks other policies. The board also wants Meta to better explain how it uses automated systems to enforce these rules.

If Meta adopts the Oversight Board’s recommendations, it could have a significant impact on the platform’s Arabic-speaking users. The board notes that the word, because it is so common, likely “accounts for more content removals under the Community Standards than any other single word or phrase,” across the company’s apps.

“Meta has been operating under the assumption that censorship can and will improve safety, but the evidence suggests that censorship can marginalize whole populations while not improving safety at all,” the board’s co-chair (and former Danish prime minister) Helle Thorning-Schmidt said in a statement. “The Board is especially concerned that Meta’s approach impacts journalism and civic discourse because media organizations and commentators might shy away from reporting on designated entities to avoid content removals.”

This is hardly the first time Meta has been criticized for moderation policies that disproportionately impact Arabic-speaking users. A 2022 report commissioned by the company found that Meta’s moderators were less accurate when assessing Palestinian Arabic, resulting in “false strikes” on users’ accounts. The company apologized last year after Instagram’s automated translations began inserting the word “terrorist” into the profiles of some Palestinian users.

The opinion is also yet another example of how long it can take for Meta’s Oversight Board to influence the social network’s policies. The company first asked the board to weigh in on the rules more than a year ago (the Oversight Board said it “paused” the publication of the policy after the October 7 attacks in Israel to ensure its rules “held up” to the “extreme stress” of the conflict in Gaza). Meta now has two months to respond to the recommendations, though actual changes to the company’s policies and practices could take several more weeks or months to implement.

“We want people to be able to use our platforms to share their views, and have a set of policies to help them do so safely,” a Meta spokesperson said in a statement. “We aim to apply these policies fairly but doing so at scale brings global challenges, which is why in February 2023 we sought the Oversight Board's guidance on how we treat the word ‘shaheed’ when referring to designated individuals or organizations. We will review the Board’s feedback and respond within 60 days.”

This article originally appeared on Engadget at https://www.engadget.com/the-oversight-board-weighs-in-on-metas-most-moderated-word-100003625.html?src=rss

Ron DeSantis signs bill requiring parental consent for kids to join social media platforms in Florida

Florida Governor Ron DeSantis just signed HB 3 into law, a bill that creates much stricter guidelines for how kids under 16 can use and access social media. Most notably, the law completely bans children younger than 14 from these platforms.

The bill requires parent or guardian consent for 14- and 15-year-olds to make an account or use a pre-existing account on a social media platform. Additionally, the companies behind these platforms must abide by requests to delete these accounts within five business days. Failing to do so could rack up major fines, as much as $10,000 for each violation. These penalties increase to $50,000 per instance if it is ruled that the company participated in a “knowing or reckless” violation of the law.

As previously mentioned, anyone under the age of 14 will no longer be able to create or use social media accounts in Florida. The platforms must delete pre-existing accounts and any associated personal information. The bill doesn’t name any specific social media platforms, but suggests that any service that promotes “infinite scrolling” will have to follow these new rules, as will those that display reaction metrics, live-streaming and auto-play videos. Email platforms are exempt.

This isn’t just going to change the online habits of kids. There’s also a mandated age verification component, though that only kicks in if the website or app contains a “substantial portion of material” deemed harmful to users under 18. Under the language of this law, Floridians visiting a porn site, for instance, will have to verify their age via a proprietary platform on the site itself or use a third party system. News agencies are exempt from this part of the bill, even if they meet the materials threshold. 

Obviously, that brings up some very real privacy concerns. Nobody wants to enter their private information to look at, ahem, adult content. There’s a provision that gives websites the option to route users to an “anonymous age verification” system, which is defined as a third party that isn’t allowed to retain identifying information. Once again, any platform that doesn’t abide by this restriction could be subject to a $50,000 civil penalty for each instance.

This follows DeSantis’ veto of a similar bill earlier this month. That bill would have banned teens under 16 from using social media apps, with no option for parental consent.

NetChoice, a trade association that represents social media platforms, has come out against the law, calling it unconstitutional. The group says that HB 3 will essentially impose an “ID for the internet,” arguing that the age verification component will have to expand to adequately track whether children under 14 are signing up for social media apps. NetChoice says “this level of data collection will put Floridians’ privacy and security at risk.”

Paul Renner, the state’s Republican House Speaker, said at a press conference for the bill signing that a “child in their brain development doesn’t have the ability to know that they’re being sucked in to these addictive technologies, and to see the harm, and step away from it. And because of that, we have to step in for them.”

The new law goes into effect on January 1, but it could face some legal challenges. Renner said he expects social media companies to “sue the second after this is signed,” and DeSantis acknowledged that the law will likely be challenged on First Amendment grounds, according to the Associated Press.

Florida isn’t the first state to try to separate kids from their screens. In Arkansas, a federal judge recently blocked enforcement of a law that required parental consent for minors to create new social media accounts. The same thing happened in California. A similar law passed in Utah, but was hit with a pair of lawsuits that forced state reps back to the drawing board. On the federal side of things, the Protecting Kids on Social Media Act would require parental consent for kids under 18 to use social media and, yeah, there’s that whole TikTok ban thing.

This article originally appeared on Engadget at https://www.engadget.com/ron-desantis-signs-bill-requiring-parental-consent-for-kids-to-join-social-media-platforms-in-florida-192116891.html?src=rss

Authorities reportedly ordered Google to reveal the identities of some YouTube videos' viewers

Federal authorities in the US asked Google for the names, addresses, telephone numbers and user activity of the accounts that watched certain YouTube videos between January 1 and 8, 2023, according to unsealed court documents viewed by Forbes. People who watched those videos while they weren't logged into an account weren't safe either, because the government also asked for their IP addresses. The investigators reportedly ordered Google to hand over the information as part of an investigation into someone who uses the name "elonmuskwhm" online. 

Authorities suspect that elonmuskwhm is selling bitcoin for cash and is, thus, breaking money laundering laws, as well as running an unlicensed money transmitting business. Undercover agents reportedly sent the suspect links to YouTube tutorials on mapping via drones and augmented reality software during their conversations back in early January. Those videos, however, weren't private and had collectively been viewed more than 30,000 times, which means the government was potentially asking Google for private information on quite a large number of users. "There is reason to believe that these records would be relevant and material to an ongoing criminal investigation, including by providing identification information about the perpetrators," authorities reportedly told the company.

Based on the documents Forbes saw, the court granted the order but asked Google to keep it under wraps. It's also unclear whether Google handed over the data the authorities were asking for. In another incident, authorities asked the company for a list of accounts that "viewed and/or interacted" with eight YouTube livestreams. Police requested that information after learning that they were being watched through a stream while searching an area following a report that an explosive had been placed inside a trashcan. One of those livestreams was posted by the Boston and Maine Live account, which has over 130,000 subscribers.

A Google spokesperson told Forbes that the company follows a "rigorous process" to protect the privacy of its users. But critics and privacy advocates remain concerned that government agencies are overstepping, using their power to obtain sensitive information on people who simply happened to watch specific YouTube videos and did nothing illegal.

"What we watch online can reveal deeply sensitive information about us—our politics, our passions, our religious beliefs, and much more," John Davisson, senior counsel at the Electronic Privacy Information Center, told Forbes. "It's fair to expect that law enforcement won't have access to that information without probable cause. This order turns that assumption on its head."

This article originally appeared on Engadget at https://www.engadget.com/authorities-reportedly-ordered-google-to-reveal-the-identities-of-some-youtube-videos-viewers-140018019.html?src=rss

Threads begins testing swipe gestures to help train the For You algorithm

Threads has begun testing swipe gestures to help users improve the algorithm that populates the For You feed. It’s reportedly called Algo Tune as, well, it helps people tune their algorithms. It’s pretty rare for any social media site, particularly one run by Meta, to let users adjust the parameters by which the great and powerful algorithm operates, so this feature is definitely worth keeping an eye on.

It works a lot like Tinder and other dating apps. If you don’t like something on your feed, you swipe left. If you like a post and want to see more like it, you swipe right. That’s pretty much it. The algorithm is allegedly tuned over time by these responses, adjusting your feed to provide more of the content you want and less of the stuff you don’t want. Meta CEO Mark Zuckerberg calls it an “easy way to let us know what you want to see more of on your feed.”
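Meta hasn't disclosed how the swipes actually feed the ranking model, but the left/right mechanic described above resembles a simple per-topic preference update. Everything below (the function name, the learning rate, the topic labels) is a hypothetical illustration, not Threads' real system:

```python
def tune(scores, topic, liked, lr=0.2):
    """Nudge a per-topic preference score after a swipe.
    A right swipe (liked=True) moves the score toward 1.0,
    a left swipe moves it toward 0.0. Purely illustrative:
    Meta hasn't published how swipes influence For You ranking."""
    current = scores.get(topic, 0.5)  # new topics start neutral
    target = 1.0 if liked else 0.0
    # Exponential moving average: small steps, so one swipe
    # adjusts the feed without dominating it.
    scores[topic] = current + lr * (target - current)
    return scores

prefs = {}
tune(prefs, "tech", liked=True)     # right swipe → score rises to 0.6
tune(prefs, "sports", liked=False)  # left swipe → score falls to 0.4
```

The gradual update is the point of a scheme like this: the feed adapts over many swipes rather than flipping wholesale on a single gesture, which matches the article's description of the algorithm being "tuned over time."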

This is just an experiment, for now, so the feature’s only rolling out to a select number of Threads users. The company also hasn’t released any specific information as to how all of the swiping actually influences the algorithm, but that’s par for the course when it comes to these things. The algo must remain protected at all costs.

The social media app sure has been busy lately, adding new tools at a rapid clip. Threads finally rolled out trending topics to all users, after experimenting with the feature since February. Meta also recently previewed fediverse integration, which would allow Threads posts to appear on fellow social media app Mastodon. The company’s also been testing some features that let users save drafts and take photos directly in the app.

This article originally appeared on Engadget at https://www.engadget.com/threads-begins-testing-swipe-gestures-to-help-train-the-for-you-algorithm-175004586.html?src=rss

YouTube created the creator economy

Nineteen years after Jawed Karim uploaded the very first YouTube video, the awkward, 19-second clip in front of San Diego Zoo’s elephant enclosure is memorable today only because of what it represents: the start of a multibillion-dollar juggernaut that defines so much of what it means to be an online creator.

Today, YouTube is the most dominant social media platform by a sizable margin, especially among teenagers. Its influence is so vast it feels almost impossible to define. The service has birthed thousands of memes and internet personalities. Its recommendation algorithm has been credited with supercharging bizarre trends and viral misinformation.

But one of the most powerful ways YouTube has wielded its influence is through its Partner Program. The revenue sharing arrangement has generated billions of dollars for its most popular users and helped define the multibillion-dollar industry we now call the creator economy. Today, there are dozens of platforms and business models for making money via content creation, but it’s difficult to imagine any of them existing without YouTube’s Partner Program.

While YouTube is hardly the only platform that has made becoming an online creator feel like a viable career path, it has played an outsized role in creating and fueling the industry. When Google first introduced the Partner Program in 2007, there weren’t many ways to make a living from online content. The blogging industry was well established, but online media dynamics were already shifting away from independently run operations in favor of established platforms and brands.

YouTube, on the other hand, was a rising upstart in online media. Google had acquired the video service in 2006, before it had ads or even a mobile app. And when it announced it would make some of its most popular creators “partners” in its business, it promised some of Google’s ad money could flow directly to the people making content.

It would take several more years for the Partner Program to grow into the money-printing machine it is now. But it arrived in 2007 at a moment of growing demand for online video. Between 2006 and 2009, the audience for online video doubled, according to Pew Research, and YouTube was the biggest beneficiary. By the fall of 2009, YouTube was seeing more than one billion views a day.

That same year, YouTube made another important change to its monetization policies. It decided to spread the wealth so any single viral video could be eligible for revenue sharing, even if the creator wasn’t a partner, affirming that YouTube was the place to make money from viral content. In 2012, the Partner Program officially opened to everyone, and by 2014 there were one million creators making money from YouTube, according to The New York Times.

The flood of creators looking for a payout (and the sometimes scammy tactics that drove them) eventually led YouTube to again tighten its requirements for partner status in 2017. But YouTube had already cemented itself as the platform for amateur creators to turn their videos into a steady income. Today, there are more than three million channels with partner status, and the company has paid creators more than $70 billion in the last three years alone.

Of course, creators starting out now have many options available besides YouTube. Nearly every social media app offers some kind of monetization opportunity, though few have generated anything close to the eye-popping eight-figure sums made by YouTube’s top talent.

Other companies' creator funds, in which all creators draw payouts from the same pool of money fronted by the platform, have been underwhelming. YouTube star Jimmy Donaldson, better known as Mr. Beast, regularly tops the lists of YouTube’s highest earners. In 2022, he shared that he was making less than $10,000 a year from TikTok’s creator fund. And other apps’ monetization features, like tipping, subscriptions and virtual gifts, are difficult to scale.

Unsurprisingly, the number of YouTube-made multimillionaires has drastically changed teens’ ideas for career paths. In 2005, the year YouTube came online, teens said their top career aspirations were to become teachers or doctors, according to a poll conducted by Gallup. By 2021, YouGov found becoming a YouTuber or streamer was the top aspiration for Gen Z. In 2023, Morning Consult reported that 57% of Gen Z would like to pursue a career as an online creator “if given the opportunity.”

Polls like this often prompt a lot of eye rolls and snarky headlines. But it’s never been easier or more lucrative to be an online creator. At least one university offers a major in content creation and social media. Whether or not we like the idea of influencing as a career path, the industry of independent streamers, vloggers, newsletter writers, podcast producers, VTubers and others is worth hundreds of billions of dollars.

This article originally appeared on Engadget at https://www.engadget.com/youtube-created-the-creator-economy-130028016.html?src=rss

Glassdoor reportedly attaches real names to anonymous accounts

Is it really possible to keep anything hidden on the internet anymore? It seems very unlikely, with the latest example coming from Glassdoor, which published people's real names without their consent, Ars Technica reports. That's right, the site specifically designed to allow anonymous, often unfiltered posts about users' employers is now tattling.

Glassdoor's long-standing policy was that users could sign up with their name or anonymously. However, things changed when the company bought Fishbowl in 2021 and later integrated it. Now, Glassdoor users are signed up for a Fishbowl account and, as a result, must be verified (a Fishbowl requirement). This shift gives Glassdoor access to users' information, which it can display without consent, as is reportedly happening, or which could be exposed in the event of a leak or subpoena.

Ars Technica spoke with two individuals whose real names were added to their Glassdoor profiles, including Monica, who noticed the change after asking the company to remove some of her public-facing information. In an initial blog post, she claimed she had repeatedly withheld consent and that one Glassdoor employee told her all profiles are now required to include a name.

Monica reported that a Glassdoor manager then added, "I stand behind the decision that your name has to be placed on your profile and it cannot be reverted back to just your initials or nullified/anonymized from the platform. I am sorry that we disagree on this issue. We treat all users equally when it comes to what is eligible to be placed on the profile and what is not, but we know that there are times our users, such as yourself, may not always agree with us." However, a Glassdoor spokesperson told Ars Technica that users could remain fully anonymous — contradicting the manager and leaving the truth unclear.

Then there was Josh, who claimed that Glassdoor not only added private information without permission but that some of it was inaccurate. Glassdoor listed him as living in London when he's based in California and spelled his employer's name wrong. Both Monica and Josh removed their accounts and sent Glassdoor requests to delete their data.

This article originally appeared on Engadget at https://www.engadget.com/glassdoor-reportedly-attaches-real-names-to-anonymous-accounts-120058183.html?src=rss

YouTube lays out new rules for 'realistic' AI-generated videos

Many companies and platforms are wrangling with how to handle AI-generated content as it becomes more prevalent. One key concern for many is the labeling of such material to make it clear that an AI model whipped up a photo, video or piece of audio. To that end, YouTube has laid out its new rules for labeling videos made with artificial intelligence.

Starting today, the platform will require anyone uploading a realistic-looking video that "is made with altered or synthetic media, including generative AI" to label it for the sake of transparency. YouTube defines realistic content as anything that a viewer could "easily mistake" for an actual person, event or place.


If a creator uses a synthetic version of a real person's voice to narrate a video or replaces someone's face with another person's, they'll need to include a label. They'll also need to include the disclosure if they alter footage of a real event or place (such as by modifying an existing cityscape or making it look like a real building is on fire).

YouTube says that it might apply one of these labels to a video if a creator hasn't done so, "especially if the altered or synthetic content has the potential to confuse or mislead people." The team notes that while it wants to give creators some time to get used to the new rules, YouTube will likely penalize those who persistently flout the policy by not including a label when they should be.

These labels will start to appear across YouTube in the coming weeks, starting with the mobile app and then desktop and TVs. They'll mostly appear in the expanded description, noting that the video includes "altered or synthetic content," adding that "sound or visuals were significantly edited or digitally generated."


However, when it comes to more sensitive topics (such as news, elections, finance and health), YouTube will place a label directly on the video player to make it more prominent. 

Creators won't need to include the label if they only used generative AI to help with things like script creation, coming up with ideas for videos or to automatically generate captions. Labels won't be necessary for "clearly unrealistic content" or if changes are inconsequential. Adjusting colors or using special effects like adding background blur alone won't require creators to use the altered content label. Nor will applying lighting filters, beauty filters or other enhancements.
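Taken together, the rules above amount to a fairly simple classification: certain kinds of edits trigger the disclosure, others are exempt. The sketch below encodes that distinction as a lookup; the category names are assumptions made up for illustration, not YouTube's API or internal taxonomy.

```python
# Edits YouTube exempts from disclosure, per the policy described above.
EXEMPT = {"script_help", "idea_generation", "auto_captions",
          "color_adjustment", "background_blur", "beauty_filter",
          "lighting_filter"}

# Realistic alterations that require the altered/synthetic-content label.
REQUIRES_LABEL = {"synthetic_voice", "face_swap", "altered_real_event",
                  "realistic_generated_scene"}

def needs_label(edits):
    """Return True if any edit applied to a video triggers the
    disclosure requirement. Illustrative only: category names are
    invented for this sketch, not YouTube's actual system."""
    return any(e in REQUIRES_LABEL for e in edits)

print(needs_label({"auto_captions", "face_swap"}))  # True: face swap triggers it
print(needs_label({"color_adjustment", "beauty_filter"}))  # False: all exempt
```

The useful takeaway from the policy is that the trigger is the *realism* of the alteration, not the mere use of AI tooling; exempt helpers like captioning can appear alongside a triggering edit without changing the outcome.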

In addition, YouTube says it's still working on a revamped takedown request process for synthetic or altered content that depicts a real, identifiable person's face or voice. It plans to share more details about that updated procedure soon.

This article originally appeared on Engadget at https://www.engadget.com/youtube-lays-out-new-rules-for-realistic-ai-generated-videos-154248008.html?src=rss

LinkedIn is developing in-app games to further distract you from your job hunt

LinkedIn, a platform that surely everybody associates with fun, may soon offer puzzle-based games to give its users something to do besides networking. App researcher Nima Owji posted a series of screenshots on X this weekend showing some of the games LinkedIn is working on, and the company has since confirmed the plan to TechCrunch. Employees’ scores will reportedly affect how the companies they work for are ranked in the games.

BREAKING: #LinkedIn is working on IN-APP GAMES!

There are going to be a few different games and companies will be ranked in the games based on the scores of their employees!

Pretty cool and fun, in my opinion! pic.twitter.com/hLITqc8aqw

— Nima Owji (@nima_owji) March 16, 2024

Per TechCrunch, the titles LinkedIn is working on so far include “Queens,” “Inference” and “Crossclimb.” LinkedIn provided the publication with some newer images of the games, but for everyone just anxiously awaiting their rollout, there’s no timeline yet for when they’ll be released. It’s unclear if games will be available in full to free users or reserved for LinkedIn’s paid subscribers.

This article originally appeared on Engadget at https://www.engadget.com/linkedin-is-developing-in-app-games-to-further-distract-you-from-your-job-hunt-205953683.html?src=rss

It took Starbucks a little too long to realize coffee NFTs aren't it

Starbucks is pulling the plug on Odyssey, its Web3 rewards program that gave members access to collectible NFTs. The company updated its FAQ on Friday to let members know that the beta program is closing on March 31, and they have a little over a week left to complete any remaining activities (called journeys). Those will shut down March 25. Users won’t lose their Stamps (Starbucks’ NFTs), which are hosted on Nifty Gateway, but they’ll have to sign up for Nifty using their Starbucks Rewards email to access them there, if they haven’t already.

Starbucks was late to the NFT game with Odyssey, which launched in beta in late 2022 — well after interest in the digital collectibles peaked. Unlike some other NFT ventures from major brands, though, it seemed to be aiming for more than a quick cash grab. It gamified the rewards system, offering activities and coffee-related mini-games that encouraged members’ ongoing participation.

In a conversation with TechCrunch published just last month, Odyssey community lead Steve Kaczynski emphasized the community element, saying, “I’ve seen that people who live in California in the Starbucks Odyssey community are really good friends with people in Chicago and they have met up in real life at times. This never would have happened if not for Web3.” But it’s 2024, and brands and consumers alike have long since moved on from NFTs. (Naturally, Forum3, which worked with Starbucks on Odyssey, seems to have pivoted to AI).

Starbucks says the Odyssey marketplace, where members could buy and sell their Stamps, will move over to the Nifty marketplace. They can also withdraw their Stamps to trade them on other platforms.

This article originally appeared on Engadget at https://www.engadget.com/it-took-starbucks-a-little-too-long-to-realize-coffee-nfts-arent-it-170132305.html?src=rss