Posts by Karissa Bell

Social media scammers stole at least $770 million in 2021

The last year has been a boon for social media scammers, according to a new report from the FTC. The agency says more than 95,000 people lost $770 million to scammers who found them via social media platforms in 2021. That’s nearly triple the $258 million they say scammers made off with in 2020.

The report doesn’t speculate on why there was such a big increase in 2021, but it notes that reports of scams have “soared” over the last five years. It also states that there was a “massive surge” in scams related to “bogus cryptocurrency investments” and that investment scams accounted for nearly $285 million — more than a third — of the $770 million lost last year.

Romance scams have also “climbed to record highs in recent years,” according to the report. “These scams often start with a seemingly innocent friend request from a stranger, followed by sweet talk, and then, inevitably, a request for money,” the FTC says. Also prevalent are scams related to online shopping, most of which involve “undelivered goods” that were purchased as the result of an ad on social media.

Of note, Facebook and Instagram are the only two platforms named in the report. “More than a third of people who said they lost money to an online romance scam in 2021 said it began on Facebook or Instagram,” the report states. Likewise, the FTC says that Facebook and Instagram were the most commonly cited platforms in reports of undelivered goods, with the two apps cited in 9 out of 10 reports where a service was identified.

“We put significant resources towards tackling this kind of fraud and abuse,” a spokesperson for Meta said in a statement. “We also go beyond suspending and deleting accounts, Pages, and ads. We take legal action against those responsible when we can and always encourage people to report this behavior when they see it.”

Interestingly, one of the FTC’s recommendations is that users try to opt out of targeted advertising when possible, as scammers can “easily use the tools available to advertisers on social media platforms to systematically target people with bogus ads based on personal details such as their age, interests, or past purchases.” The agency also recommends that users lock down their privacy settings and be wary of any messages asking for money, especially in the form of cryptocurrency or gift cards.

TikTok will add PSAs to Holocaust-related content

TikTok is adding PSAs and informational resources about the Holocaust in an effort to combat antisemitism in its app. With the changes, TikTok will link to aboutholocaust.org when users search for Holocaust-related content. Holocaust-related hashtags will also link to the website, along with a brief PSA.

“While browsing this topic, we recommend you verify facts using trusted sources, such as the multilingual website (aboutholocaust.org) for essential information about the history of the Holocaust and its legacy,” the message says. The app will also add a permanent banner at the bottom of Holocaust-related videos that urges users to “get the facts about the Holocaust.” That change will be rolling out “in the coming months.”

Though TikTok’s rules prohibit hate speech, Holocaust denialism and other forms of antisemitism, the app has faced criticism in the past for allowing antisemitic content to spread on its platform. Last year, the Anti-Defamation League published a blog post with a number of examples of “posts perpetuating age-old antisemitic tropes and conspiracy theories.” The organization urged TikTok “to address this systematically.”


In its latest statement, TikTok notes that it blocks search results about the Holocaust that may break its rules, and that it uses “a combination of technologies and moderation teams to remove antisemitic content and accounts from our platform, including Holocaust denial or any other form of hate speech directed at the Jewish community.”

Facebook-backed Diem Association may be close to dissolving

It’s looking more and more likely that Diem, Meta’s ill-fated cryptocurrency previously known as Libra, will never actually materialize. The Diem Association is reportedly “weighing a sale of its assets as a way to return capital to its investor members,” Bloomberg reports.

It’s unclear what assets the Diem Association owns, but the report notes the group is talking to bankers about selling its intellectual property and finding "a new home for the engineers that developed the technology.”

If a sale were to happen, it would seem to be the final nail in the coffin for Diem, the cryptocurrency project that Mark Zuckerberg has championed. Plans to get the stablecoin off the ground have stalled for years amid regulatory pushback and lawmaker concerns. After first launching as Libra, several high-profile partners pulled out in 2019.

Last fall, Facebook started a small pilot of Novi, the cryptocurrency wallet formerly known as Calibra. But the fact that Novi was forced to launch without support for Diem — it used a different stablecoin called the Pax Dollar — was a sign that Diem's future remained uncertain. Longtime Facebook exec David Marcus, who oversaw the social network’s crypto plans, said at the time that Facebook remained committed to Diem. “I do want to be clear that our support for Diem hasn’t changed and we intend to launch Novi with Diem once it receives regulatory approval and goes live,” he wrote. Marcus announced a month later that he was leaving Facebook. 

A representative for the Diem Association said that Bloomberg's reporting contained unspecified "factual errors," but declined to elaborate or comment further.

Why TikTok stars are criticizing its creator fund

Being part of TikTok’s creator fund is apparently a lot less lucrative than it may seem, even for some of the app’s biggest stars. Over the last few days, some high-profile TikTokkers have taken the unusual step of publicizing how much — or in this case, how little — they are making from the fund.

TikTok is still in the relatively early stages of building out monetization features for creators. Instead of a revenue-sharing arrangement like YouTube’s Partner Program, TikTok pays its top stars out of a creator fund. Launched in 2020, the fund started out with $200 million, and TikTok said last year it was increasing the fund to $1 billion over the next three years in the U.S. But the company hasn’t provided details on how much it has distributed or how much participants can expect to earn.

But according to one prominent creator, most participants are earning very little. Last week, Hank Green, who has more than 6 million followers on TikTok, shared a YouTube video titled “So… TikTok sucks.” In the 24-minute video, he details his experience in TikTok’s creator fund and estimates that he currently makes about 2.5 cents per 1,000 views on the platform — a fraction of what he earns on YouTube and about half of what he had previously earned on TikTok.

The problem, as he explains it, is that TikTok offers a steadily growing number of creators a portion of a “static pool of money” that isn’t tied to TikTok’s revenue or its skyrocketing popularity. The result is that each creator makes less and less, even as TikTok becomes more successful. “Because of the way that TikTok shares a lot of audience among a lot of creators, that ends up being less than a dollar a day for most of the people in the creator fund,” he said.

Green, whose participation in the creator fund was previously touted by TikTok in a corporate blog post, said that creator funds aren’t bad on their own, but that TikTok’s current arrangement is preventing creators from being able to adequately support themselves.

His comments prompted others to share their frustrations with TikTok. Safwan AhmedMia, who goes by SuperSaf on TikTok, shared Green’s video along with a screenshot of his TikTok earnings: £112.04 (about $151). “This is how much I've made from the TikTok Creator Fund since April 2021 with over 25 million views in that time,” he wrote.

Then, Jimmy Donaldson, the YouTuber known as MrBeast, shared his TikTok earnings. According to the screenshot, he’s earned just under $15,000 from the app, with daily earnings between $18 and $32 in January. As The Information points out, that works out to less than $10,000 a year from TikTok, despite his estimate that he’s gotten “over a billion views” on the app. That number is particularly low considering Donaldson is YouTube’s top earner, and made $54 million on the platform in 2021.

Here’s mine, I’m 2 lazy to count the views on my tik toks but it’s prob over a billion views

— MrBeast (@MrBeast) January 21, 2022

It's not clear how much Green, AhmedMia and Donaldson's experiences reflect those of other creators in the fund. But TikTok hasn’t offered an alternative explanation for why its creators are making so little. “The Creator Fund is one of many ways that creators can make money on TikTok,” a TikTok spokesperson said in a statement, pointing to the company’s creator marketplace, which helps match creators with potential sponsors. “We continue to listen to and seek feedback from our creator community and evolve our features to improve the experience for those in the program.”

It's true that the creator fund isn't the only way TikTok stars make money from the app. The app has a tipping feature, though it's not available to everyone yet. Creators also regularly partner with brands and those deals can be worth millions for the app's most influential users. But inking a deal with a major brand requires time and effort, and that option may not be available to lesser known creators. And since TikTok doesn't have a revenue sharing feature, the fund is right now the only way creators can be paid directly by the company. 

Elsewhere, the app is testing other monetization features for creators. It's experimenting with subscriptions, which would allow creators to effectively move some of their content behind a paywall. The feature appears to be at an early stage, and the company hasn’t said when, or even if, it may be available more widely.

Are you in TikTok's creator fund or have a tip to share about how it distributes funds? Email me at karissa.bell [at] engadget.com.

YouTube considers jumping on the NFT bandwagon

YouTube is the latest platform eyeing a move into NFTs. In a new letter to creators about YouTube’s 2022 priorities, CEO Susan Wojcicki said the company is exploring how its creators could benefit from the digital collectibles.

In the letter, Wojcicki said that Web3 — a term used by crypto enthusiasts to refer to the collection of blockchain-based technologies they believe will usher in a new era of the internet — has been a “source of inspiration” for the company. She didn’t say exactly how YouTube might integrate NFTs into its platform, but suggested the technology could be a new source of revenue for creators.

“The past year in the world of crypto, nonfungible tokens (NFTs), and even decentralized autonomous organizations (DAOs) has highlighted a previously unimaginable opportunity to grow the connection between creators and their fans,” she wrote. “We’re always focused on expanding the YouTube ecosystem to help creators capitalize on emerging technologies, including things like NFTs, while continuing to strengthen and enhance the experiences creators and fans have on YouTube.”

If YouTube allowed creators to sell NFTs directly to their fans, it would be a major boon for the technology, which has grown in popularity over the last year, but hasn’t been widely adopted by major social platforms. But there are already signs that could change in 2022.

Twitter just introduced its first experiment with NFTs: NFT profile pictures. Instagram’s top executive has also expressed an interest in the technology, and The Financial Times reported last week that Facebook and Instagram are working on an NFT marketplace and other features.

NFTs aren’t the only new monetization opportunity YouTube is looking at in the coming year, though. Wojcicki also said the company is “excited” about podcasts and that “we expect it to be an integral part of the creator economy.” She also confirmed that YouTube would expand its shopping features to more creators and test “how shopping can be integrated into Shorts.”

The CEO also touched on the controversy surrounding YouTube’s decision to remove public dislike counts from its platform. She noted that the dislike count was often used to target smaller creators for harassment, and that the feature could still be used to inform individuals' recommendations. “Every way we looked at it, we did not see a meaningful difference in viewership, regardless of whether or not there was a public dislike count,” she said. “And importantly, it reduced dislike attacks.”

Instagram will now reduce the visibility of 'potentially harmful' content

Instagram is taking new steps to make “potentially harmful” content less visible in its app. The company says that the algorithm powering the way posts are ordered in users’ feeds and in Stories will now de-prioritize content that “may contain bullying, hate speech or may incite violence.”

While Instagram’s rules already prohibit much of this type of content, the change could affect borderline posts, or content that hasn’t yet reached the app’s moderators. “To understand if something may break our rules, we'll look at things like if a caption is similar to a caption that previously broke our rules,” the company explains in an update.

Up until now, Instagram has tried to hide potentially objectionable content from public-facing parts of the app, like Explore, but hasn’t changed how it appears to users who follow the accounts posting this type of content. The latest change means that posts deemed “similar” to those that have been previously removed will be much less visible even to followers. A spokesperson for Meta confirmed that “potentially harmful” posts could still be eventually removed if the post breaks its community guidelines.

The update follows a similar change in 2020, when Instagram began down-ranking accounts that shared misinformation that was debunked by fact checkers. Unlike that change, however, Instagram says that the latest policy will only affect individual posts and “not accounts overall.”

Additionally, Instagram says it will now factor in each individual user’s reporting history into how it orders their feeds. “If our systems predict you’re likely to report a post based on your history of reporting content, we will show the post lower in your Feed,” Instagram says.

Twitter brings NFTs to profile photos, but only for Twitter Blue subscribers

Twitter is giving NFT enthusiasts a new reason to pay for a Twitter Blue subscription. The company is testing a new feature that allows NFT owners to authenticate NFTs displayed in their profile photos.

The feature, which is being offered as an early stage “Labs” feature for Twitter Blue subscribers, allows NFT owners to connect their crypto wallet to their Twitter account and display an NFT as their profile photo. While many NFT owners already use the art in their profile photos, the Twitter Blue feature will also add an icon indicating that the NFT has been authenticated and that the person behind the account is the official owner of the piece.

Though only Twitter Blue subscribers can access the feature, the authentication symbol will be visible to everyone on Twitter. And other users will be able to tap on the hexagon symbol in order to learn more about the NFT in the image.


While Twitter has previously indicated that it was working on an NFT authentication service, it’s notable that it chose to offer the feature to Twitter Blue subscribers first. The company debuted the $3/month subscription service in November in a bid to appeal to power users who might pay for specialized features. The NFT feature is “still under active development,” according to the company, and it’s not clear if it plans to launch it more widely. Twitter has previously said that early-stage “Labs” features are experiments that could become available outside of Twitter Blue, be kept around for subscribers, or be killed off entirely.

Facebook takes down fake Iranian accounts that posed as Scottish locals

Facebook disabled a network of fake accounts that posed as English and Scottish locals, but were actually an Iran-based influence operation. The company detailed the takedowns in its latest report on coordinated inauthentic behavior on its platform.

The network was relatively small — eight accounts on Facebook and 126 on Instagram — though it had amassed about 77,000 followers, according to the company. Facebook’s security researchers didn’t indicate exactly who in Iran was behind the effort, or what their motives were, but said some of the people involved had a “background in teaching English as a foreign language.”

“This network posted photos and memes in English about current events in the UK, including supportive commentary about Scottish independence and criticism of the UK government,” Facebook writes in its report. In a call with reporters, Facebook’s Director of Threat Disruption, David Agranovich, said that it’s not the first time the company has caught Iran-linked fake accounts targeting Scotland, but that the latest network stood out for its “artisanal” approach to the fake personas.

“What was unique about this case was the effort that the operators took to make their fakes look like real people,” Agranovich said. He noted the accounts spent considerable time posting about their “side interests,” like football, in an attempt to boost their credibility. Some of the accounts also lifted profile photos from real celebrities or media personalities, and regularly updated the images in order to appear more real. Other accounts used fake photos generated by AI programs.

Overall, Facebook says that the fake accounts weren’t particularly successful, as the most popular account had only reached about 4,000 followers, about half of whom were actually located in the UK. “In a way, this is more like an old fashioned pre-internet influence operation, creating detailed fake personas and trying not to be noticed,” Agranovich said.

Reddit 'revamped' its block feature so blocking actually works

Reddit is “revamping” its block feature so that blocking on Reddit functions more like other social platforms. With the change, blocking a user on Reddit will not just block that person’s posts from your view, but will also prevent them from being able to see or interact with your posts.

That may sound obvious, but up until now, blocking on Reddit has been more like a “mute” feature in that it only worked one way. That made the feature particularly ineffectual for people dealing with harassment or other forms of abuse as blocking didn’t do anything to prevent the person from interacting with your posts.


Now, when a blocked user encounters posts or comments from the person who blocked them, the content will appear as if it had been deleted. Profiles will also be inaccessible after a user has been blocked. (One exception is moderators, who will still be able to view posts from users who blocked them when those posts appear in a subreddit they moderate.)

On the other side, Reddit is changing up how blocked users’ content appears for people who initiated the block in order to better protect against harassment. Now, those who have blocked someone else will still be able to view their posts, though the content will appear collapsed by default. Reddit says it’s keeping these posts accessible so that users can still utilize the site’s reporting features and track any potential harassment.

Snapchat is limiting friend recommendations for teen accounts

Snapchat is changing up its friend recommendation feature following calls for increased safety on the app. The company is making it harder for adult strangers to find teens in its app by limiting friend recommendations in its “Quick Add” feature. 

Now, the app won’t show the accounts of 13 to 17-year-olds in Quick Add unless they have “a certain number of accounts in common,” according to Snap. While the change won’t prevent adults and teens from connecting at all, it could make it more difficult for strangers to find teens they don’t already know. In a blog post, the company said the change was part of its work to "combat the fentanyl epidemic" and keep drug dealers from finding "new ways to abuse Snapchat."

The company has faced scrutiny over its handling of drug dealers on its platform in recent months. Lawmakers and safety advocates have pushed Snap to do more to keep dealers off of Snapchat following reports of overdoses linked to drugs bought through the app. Snap also said Tuesday that it has improved its ability to proactively detect “drug-related content” on its platform, with 88 percent of “drug related content” now being “proactively detected” with AI. The company also notes it has staffed up the team that works directly with law enforcement agencies and has “significantly improved” its response time to law enforcement requests.

At a Senate hearing last fall, Snap’s VP of Global Public Policy Jennifer Stout said the company was working on new parental control features that would make it easier for parents to monitor their children’s activity in the app. Those updates still have yet to launch, though the company hopes to make them available “in the coming months.”