LinkedIn is expanding its Clubhouse-style live audio feature as it looks to draw more creators to its platform. The company, which first launched live audio events in January, will now open up hosting capabilities to all creators.
With the update, all LinkedIn creators who use the platform’s “creator mode” will be able to host live audio events so long as they stay in line with the platform’s “community policies of being a trustworthy, safe, and professional provider of content.” Though event hosting is currently limited to creators, any LinkedIn user is able to participate in the chats.
Similar to Clubhouse, creators on LinkedIn can schedule their audio events in advance and share the upcoming talks with their network. The company says creators are already using audio features to expand their professional networks, connect with potential clients and reach new followers. Video-centric live events are also in the works, though LinkedIn hasn’t given an update on when that will launch.
The expansion comes as LinkedIn has significantly ramped up its efforts to become a more creator-centric platform. The company says more than 10 million people are using the site’s creator mode, nearly double the 5.5 million who were using it in March. Now, LinkedIn is trying to help those creators broaden their reach. The company is tweaking the way creator profiles and their content appear in search results and in the LinkedIn feed in order to make it easier for people to find and follow them. It also plans to make creator profiles embeddable to outside websites so creators can more easily promote their LinkedIn content on other platforms.
TikTok is joining forces with Pearpop to launch a comedy docuseries hosted by creator Jericho Mencke, according to The Hollywood Reporter. It costs $5 for all eight episodes, each 30 minutes long, with the first two running for free for all TikTok users.
Called Finding Jericho, the series will feature Mencke doing comedic interviews with characters like a clown from Craigslist. It'll be executive produced by Pearpop executives Zack Bernstein and Austin Sokol, along with Mencke.
Last month, TikTok rolled out its Live monthly subscription tool to creators on an invitation-only basis, after first announcing the service in January 2022. It allows creators to "increase their earnings while continuing to grow their communities" with perks like subscriber badges, custom emotes and a subscriber-only chat.
In 2020, TikTok announced a $200 million fund to support creators, but the subscription service gives personalities a more direct stream of income. Pearpop, meanwhile, is a separate platform that allows creators to "monetize their influence" through challenges and brand sponsorships. The first episode of Finding Jericho premiered late yesterday at 9PM PST, and subsequent episodes will arrive on Tuesdays and Thursdays at the same time on the @Jercho1 and @pearpopofficial TikTok accounts.
Meta is finally peeling back the curtain on how political and election ads are targeted on Facebook. The company is making information about how political and “social issue” ads are targeted available to researchers and the public, Meta said in an update.
Researchers who are part of the company’s Facebook Open Research and Transparency (FORT) program will get access to the most detailed information. “This data will be provided for each individual ad and will include information like the interest categories chosen by advertisers,” Facebook writes.
The company had previously experimented with making some targeting data available to researchers via FORT last year, but the information was only available for political ads that ran during a three-month period before the 2020 election. Now, researchers will also be able to access “all social issue, electoral and political ads run globally since August 2020.”
Meta is also making a more limited amount of political ad-targeting data available to the public via its Ad Library. That update, expected in July, will allow anyone to see more general information about how specific Facebook Pages are targeting their ads. “This update will include data on the total number of social issue, electoral and political ads a Page ran using each type of targeting (such as location, demographics and interests) and the percentage of social issue, electoral and political ad spend used to target those options,” the company writes. “For example, the Ad Library could show that over the last 30 days, a Page ran 2,000 ads about social issues, elections or politics, and that 40% of their spend on these ads was targeted to ‘people who live in Pennsylvania’ or ‘people who are interested in politics.’”
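To make that aggregation concrete, here is a minimal sketch of how per-Page spend percentages like the “40%” example above could be tallied. The ad records and field names are invented for illustration; Meta hasn’t published a schema for the Ad Library targeting data.

```python
# A minimal sketch of the kind of per-Page aggregation described above.
# The records and field names here are hypothetical; Meta has not
# published a schema for the Ad Library targeting data.
from collections import defaultdict

ads = [
    {"spend": 500, "targeting": ["location: Pennsylvania"]},
    {"spend": 300, "targeting": ["interest: politics"]},
    {"spend": 200, "targeting": ["location: Pennsylvania", "interest: politics"]},
]

total_spend = sum(ad["spend"] for ad in ads)
spend_by_option = defaultdict(int)
for ad in ads:
    for option in ad["targeting"]:
        spend_by_option[option] += ad["spend"]

print(f"Total ads run by this Page: {len(ads)}")
for option, spend in spend_by_option.items():
    print(f"{option}: {spend / total_spend:.0%} of ad spend")
# location: Pennsylvania -> 70% of spend; interest: politics -> 50% of spend.
# Percentages can sum past 100% because one ad can use several options at once.
```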
Questions about how political ads are targeted on Facebook have been a thorny topic for the company. Researchers have long argued that understanding how election and political ads are targeted is just as important as having a record of the people and organizations behind each ad. But Meta has resisted making detailed targeting data available, citing privacy concerns.
But that hasn’t stopped groups from trying to study the issue on their own. A team at New York University created a browser extension to help them understand how political ads are targeted on Facebook. Using the data, they uncovered multiple flaws in Facebook’s Ad Library. Meta accused the team of scraping and disabled their accounts, which also cut off their ability to use the company’s CrowdTangle tool to study misinformation.
Making more detailed targeting information available through FORT may still not go as far as some researchers would like — researchers still need to be vetted and approved by Facebook to access FORT — but it at least offers one avenue where the data is available. And, with the 2022 midterms coming up later this year, there’s likely to be significant interest in learning more about how political ads spread through Facebook.
Facebook is still struggling to contain the video of last weekend’s horrific mass shooting in Buffalo, New York. Now, not only are clips of the shooting accessible on the platform, but reposted clips of the attack are sometimes appearing alongside Facebook ads, The New York Times reports.
The Times notes that it’s not clear how often ads are appearing alongside clips of the shooting, but the paper said that “searches for terms associated with footage of the shooting have been accompanied by ads for a horror film, clothing companies and video streaming services” in its own tests and in tests conducted by the Tech Transparency Project.
While this isn’t a new problem for Facebook — the platform made similar missteps in the wake of the 2019 shooting in Christchurch, New Zealand — the company is in some cases actually recommending search terms associated with videos of the shooting, according to The New York Times, which said Facebook marked some of those searches as “popular now.”
As with previous mass shootings and violent events, footage originally streamed to Twitch by the gunman in Buffalo has proved difficult for social media platforms to contain. Facebook previously told Engadget that it had designated the event a terrorist attack, and that it was working to automatically detect new copies that are shared to its service.
But videos are still falling through the cracks. And the fact that Facebook is surfacing ads near those videos is likely to raise further questions about whether the company prioritizes profits over safety as a whistleblower has alleged.
In a statement, a company spokesperson told The Times it was trying “to protect people using our services from seeing this horrific content even as bad actors are dead-set on calling attention to it.”
Twitter is taking more steps to slow the spread of misinformation during times of crisis. The company will attempt to amplify credible and authoritative information while trying to avoid elevating falsehoods that can lead to severe harm. Under its new crisis misinformation policy, Twitter interprets crises as circumstances that pose a "widespread threat to life, physical safety, health or basic subsistence" in line with the United Nations’ definition of a humanitarian crisis.
For now, the policy will only apply to tweets regarding international armed conflict. It may eventually cover the likes of natural disasters and public health emergencies.
The company plans to fact-check information with the help of "multiple credible, publicly available sources." Those include humanitarian groups, open-source investigators, journalists and conflict monitoring organizations.
Twitter acknowledges that misinformation can spread quickly and it will take action "as soon as we have evidence that a claim may be misleading." Tweets that violate the rules of this policy won't appear in the Home timeline or the search or explore sections.
"Content moderation is more than just leaving up or taking down content, and we’ve expanded the range of actions we may take to ensure they’re proportionate to the severity of the potential harm," Twitter's head of safety and integrity Yoel Roth wrote in a blog post. "We’ve found that not amplifying or recommending certain content, adding context through labels, and in severe cases, disabling engagement with the Tweets, are effective ways to mitigate harm, while still preserving speech and records of critical global events.
We’ve been refining our approach to crisis misinformation, drawing on input from global experts and human rights organizations. As part of this new framework, we’ll start adding warning notices on high visibility misleading Tweets related to the war in Ukraine. pic.twitter.com/fr0NGleJXP
The company will also make it a priority to put notices on highly visible rule-breaking tweets and those from high-profile accounts, such as ones operated by state-run media or governments. Users will need to click through the notice to read the tweet. Likes, retweets and shares will be disabled on these tweets as well.
"This tweet violated the Twitter Rules on sharing false or misleading info that might bring harm to crisis-affected populations," the notice will read. "However, to preserve this content for accountability purposes, Twitter has determined this tweet should remain available." In addition, the notice will include a link to more details about Twitter's approach to crisis misinformation. The company says it will start adding the notice to highly visible misleading tweets related to the war in Ukraine.
The notice may appear on tweets that include falsehoods about on-the-ground conditions during an evolving conflict; misleading or incorrect allegations of war crimes or mass atrocities; or misinformation about the use of weapons or force. Twitter may also apply the label to tweets with "false information regarding international community response, sanctions, defensive actions or humanitarian operations."
There are some exceptions to the rules. They won't apply to personal anecdotes, first-person accounts, efforts to debunk or fact-check a claim or "strong commentary."
However, a lot of the fine details about Elon Musk's pending takeover of Twitter remain up in the air, and this policy could change if and when the deal closes. Musk has said Twitter should only suppress illegal speech (which is also a complex issue, since rules vary by jurisdiction). It remains to be seen exactly how he will handle content moderation.
Meta’s accounting of the most popular content on Facebook continues to be a confusing mess to untangle. The company released the latest version of its “widely viewed content report,” which details some of the most-viewed Facebook posts in the United States.
And, once again, the latest report raises questions about the company’s ability to limit the spread of what Meta euphemistically refers to as “lower-quality posts.” Between January and March of this year, six of the top 20 most popular links on Facebook were from a spammy website that has since been banned by the company for inauthentic behavior.
“In this report, there were pieces of content that have since been removed from Facebook for violating our policies of Inauthentic Behavior,” the company wrote in a blog post. “The removed links were all from the same domain, and links to that domain are no longer allowed on Facebook.”
The links all came from a Vietnam-based “news” site called Naye News. Unfortunately, Facebook didn’t share details about the actual URLs that went viral and were later removed, so there’s not much we can glean about the actual content. What we do know is that Naye News, which as Bloomberg reporter Davey Alba points out has never before appeared in a widely viewed content report, was able to reach a vast number of Facebook users before the company banned it. Links to Naye News appeared six times on the list of the top 20 URLs, including the two top spots. Together, these links got more than 112 million views, according to the report.
This website wasn’t the only source of questionable content that made it into the most-viewed list. The fourth-most popular link was a YouTube clip from a town hall meeting with Wisconsin Senator Ron Johnson, featuring a nurse making provably false claims about COVID-19 treatments.
During a call with reporters, head of Facebook Integrity Anna Stepanov said that links to the YouTube video were demoted in News Feed after it was debunked by fact-checkers. The company also added warning labels to discourage people from resharing it. “Without these features, this link would likely have received even more reach,” Stepanov said.
But even with those measures, the link was still viewed more than 22.1 million times on Facebook. That’s more than the number of views on the original YouTube video, which currently has about 6.5 million views.
Meanwhile, another URL in the report, which got 12.3 million views, is a link to a website called “heaveemotions.com,” which now redirects to a page that appears designed to trick visitors into installing malware. On Facebook, though, the link originally rendered a preview with meme-style text that reads: “They told me the virus is everywhere. I told them so is God. Can I get an Amen? I Bet you won’t repost.”
It’s not the first time overtly spammy content has appeared in one of these reports. In the last version of this report, the top Facebook Page was one later removed by the company for breaking its rules. Reporter Ryan Broderick later identified the page’s origins as a Sri Lankan content farm.
The reports, which Meta began releasing in part to rebut data suggesting far-right personalities consistently dominate the platform, are one of the only windows the company offers into what’s popular on Facebook. That’s been a key question for researchers trying to study the platform and how information, and misinformation, spreads across it. But researchers have also raised questions about how Meta was compiling these reports, which in the past have surfaced bizarre results.
Notably, Meta now says it’s changing the way it evaluates what content is the most “widely viewed” on its platform. Previous reports identifying the top links on Facebook were based on any public post that contained a URL, even if the URL was just appended to the body of a text post. This meant that popular Pages could effectively spam their followers with random links — like one to a website representing former Green Bay Packers football players — embedded in a text or photo post.
Researchers had widely criticized this approach, since a widely distributed text post with a link tacked onto the end is very different from a link post in which the linked content is fully rendered as a preview. Now, Meta is reversing course: “Moving forward, links will need to render a preview in order to be counted as a view, as that more accurately represents what people are seeing,” the company says.
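As a toy illustration of how much that counting change can matter, consider the sketch below. The post records and the renders_preview flag are invented for illustration; Meta hasn’t published its internal counting logic.

```python
# Toy example of the methodology change: the old method counted views for
# any public post containing a URL, while the new method counts a view
# only when the link actually renders a preview. All data here is made up.
posts = [
    # A true link post: the linked content renders as a preview card.
    {"views": 1_000_000, "renders_preview": True},
    # A viral photo post with a random URL pasted at the end of the caption.
    {"views": 5_000_000, "renders_preview": False},
]

old_method = sum(p["views"] for p in posts)                          # any post with a URL
new_method = sum(p["views"] for p in posts if p["renders_preview"])  # rendered previews only

print(old_method, new_method)  # 6000000 1000000: the spam-appended link no longer dominates
```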
Even so, these reports are still only a limited look at what’s most popular on Facebook. The company says the list of the top 20 most-viewed links — the list that included Naye News and COVID-19 misinformation — “collectively accounted for 0.03% of all Feed content views in the US during Q1 2022.” But as always with Facebook, its sheer size means that even a fraction of a percent can equate to millions of views. At the very least, these reports show that it’s still relatively easy to game Facebook’s algorithms and spread “low quality” content.
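A quick back-of-envelope calculation shows why. Using only figures cited in this article, and treating them as a loose lower bound rather than official totals:

```python
# Rough arithmetic behind the "sheer size" point; not official Meta numbers.
naye_news_views = 112e6       # six Naye News links in the top 20
johnson_clip_views = 22.1e6   # Ron Johnson town hall clip
heaveemotions_views = 12.3e6  # "heaveemotions.com" link

top20_lower_bound = naye_news_views + johnson_clip_views + heaveemotions_views
share_of_feed = 0.0003        # Meta: top 20 links = 0.03% of US Feed views in Q1

implied_total_views = top20_lower_bound / share_of_feed
print(f"{implied_total_views:.2e}")  # ~4.88e+11, i.e. hundreds of billions of views
```

Even as a lower bound, that implies hundreds of billions of US Feed content views in a single quarter, which is why even 0.03 percent translates into tens of millions of impressions.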
Following Saturday’s horrific mass shooting in Buffalo, online platforms like Facebook, TikTok and Twitter are seemingly struggling to prevent various versions of the gunman’s livestream from proliferating on their platforms. The shooter, an 18-year-old white male, attempted to broadcast the entire attack on Twitch using a GoPro Hero 7 Black. The company told Engadget it took his channel down within two minutes of the violence starting.
Not going to share screenshots, but the rate at which versions of the Buffalo video continue to proliferate on Facebook and Twitter is astonishing. We've been here before with Christchurch and it continues to happen.
“Twitch has a zero-tolerance policy against violence of any kind and works swiftly to respond to all incidents,” a Twitch spokesperson said. “The user has been indefinitely suspended from our service, and we are taking all appropriate action, including monitoring for any accounts rebroadcasting this content.”
Despite Twitch’s response, that hasn’t stopped the video from proliferating online. According to New York Times reporter Ryan Mac, one link to a screen-recorded copy of the livestream drew 43,000 interactions. Another Twitter user said they found a Facebook post linking to the video that had been viewed more than 1.8 million times, with an accompanying screenshot suggesting the post did not trigger Facebook’s automated safeguards. A Meta spokesperson told Mac the video violates Facebook’s Community Standards.
LISTEN: Police commissioner explains what happened today in Buffalo.
Sheriff followed up calling this shooting that killed 10 people, including a retired Buffalo police officer, a racially-motivated hate crime. @news4buffalo pic.twitter.com/qTWJ3YRUyC
— Austin Kellerman (@AustinKellerman) May 14, 2022
Responding to Mac’s Twitter thread, Washington Post reporter Taylor Lorenz said she found TikTok videos that share accounts and terms Twitter users can search for to view the full video. “Clear the vid is all over Twitter,” she said. We’ve reached out to the company for comment.
Preventing terrorists and violent extremists from disseminating their content online is one of the things Facebook, Twitter and a handful of other tech companies said they would do following the 2019 shooting in Christchurch, New Zealand. In the first 24 hours after that attack, Meta said it removed 1.5 million videos, but clips of the shooting continued to circulate on the platform for more than a month after the event. The company blamed its automated moderation tools for the failure, noting they had a hard time detecting the footage because of the way in which it was filmed. "This was a first-person shooter video, one where we have someone using a GoPro helmet with a camera focused from their perspective of shooting," Neil Potts, Facebook’s public policy director, told British lawmakers at the time.
Elon Musk’s tweeting may have landed him in legal trouble again. As you may recall, the Tesla and SpaceX executive tweeted on Friday that his deal to buy Twitter was “temporarily on hold” after the company disclosed that fake and spam accounts represented less than 5 percent of its monetizable daily active users during the first quarter of 2022.
After his tweet prompted Twitter CEO Parag Agrawal to say the company was “prepared for all scenarios,” Musk stated his team would test “a random sample of 100 followers” to verify Twitter’s numbers. According to the billionaire, one of the answers he gave to a question about his methodology prompted a response from Twitter’s legal team.
“I picked 100 as the sample size number, because that is what Twitter uses to calculate <5% fake/spam/duplicate,” he said in the alleged offending tweet. “Twitter legal just called to complain that I violated their NDA by revealing the bot check sample size is 100,” Musk later said of his actions.
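For a sense of why a sample of 100 invites skepticism, here is a rough sketch of the statistical uncertainty involved. It uses the textbook normal approximation for a binomial proportion and assumes, purely for illustration, that 5 of the 100 sampled accounts look fake; it is not a claim about how Twitter or Musk actually run the check.

```python
# Illustrative only: the approximate 95% confidence interval when estimating
# a ~5% spam rate from a random sample of 100 accounts (normal approximation
# to the binomial). The 5-in-100 figure is assumed for the example.
import math

n = 100        # Musk's stated sample size
p_hat = 0.05   # suppose 5 of the 100 sampled accounts look fake

standard_error = math.sqrt(p_hat * (1 - p_hat) / n)
margin = 1.96 * standard_error  # z-score for a 95% confidence level
print(f"estimate: {p_hat:.0%} +/- {margin:.1%}")  # estimate: 5% +/- 4.3%
```

In other words, a sample that small can’t reliably distinguish a 1 percent spam rate from a 9 percent one.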
We’ve reached out to Twitter for comment.
In another twist in Musk’s bid to buy Twitter, he also took aim at the platform’s algorithmic feed. “You are being manipulated by the algorithm in ways you don’t realize,” he said.
The message drew the attention of former Twitter CEO Jack Dorsey. “It was designed simply to save you time when you are away from [the] app for a while,” Dorsey told Musk. “Pull to refresh goes back to reverse chron as well.”
Dorsey then responded to someone who said Twitter’s algorithmic feed was “definitely” designed to manipulate. “No it wasn’t designed to manipulate. It was designed to catch you up and work off what you engage with,” Dorsey said. “That can def have unintended consequences tho.”
Musk later appeared to walk back his comment. “I’m not suggesting malice in the algorithm, but rather that it’s trying to guess what you might want to read and, in doing so, inadvertently manipulate/amplify your viewpoints without you realizing this is happening,” he said.
Should something come of Musk’s actions, this wouldn’t be the first time one of his tweets has landed him in legal trouble. Back in 2018, his now-infamous “funding secured” tweet attracted the attention of the US Securities and Exchange Commission, leading to a $40 million settlement with the agency that he’s now trying to end.
Elon Musk's deal to buy Twitter is "temporarily on hold" pending confirmation that spam and fake accounts do represent less than 5 percent of users, he tweeted. Attached to the tweet was a Reuters link reporting that Twitter estimated in a regulatory filing that those types of accounts represented less than 5 percent of its monetizable daily active users during the first quarter of 2022.
Twitter deal temporarily on hold pending details supporting calculation that spam/fake accounts do indeed represent less than 5% of users https://t.co/Y2t0QMuuyn
It appears that Musk may have some concerns about those figures, judging by the tweet. It's not clear what steps he and Twitter will take to verify them, however.
If you've been somehow disconnected from the internet (lucky you!), Musk is in the process of buying Twitter for $44 billion. He aims to quadruple the user base and has said he'll defeat spam bots, authenticate all humans and make its algorithms open source, while also championing free speech and walking back content moderation. As part of that, he said he'd reverse the Twitter ban on Donald Trump and other users.
However, some experts on social media content moderation have said that those goals conflict with each other. Facebook's former security chief Alex Stamos, for one, recently tweeted that Musk's ideals for Twitter may conflict with European laws, pointing out that there's "a large mismatch between the US and the UK's Online Safety Bill and EU Digital Services Act and Digital Markets Acts." Stamos also noted that Twitter is saturated in the developed world, so any growth "will require even more dealing with the challenges of autocracies and developing democracies."
Starting tomorrow, YouTube will give both fans and creators the ability to gift paid channel subscriptions. A number of influential streamers tweeted the announcement today, many of them ecstatic about the new monetization tool. Gifted subs have long been a popular feature on Twitch — YouTube Gaming's main rival — and many streamers see subscriptions as an easy way to generate revenue while also building their community. But YouTube dragged its heels on the much-anticipated feature for some time, only testing the waters earlier this year when YouTube Japan trialed gifted memberships with a select number of channels. Gifted memberships — which are still in beta — will now be available to all YouTube Gaming users in the US and UK.
Excited to announce that starting May 11th, memberships Gifting Beta will be enabled for YouTube streams!
Been streaming on YouTube for 2.5 years and just so happy to see the platform continue to focus working on improving the streaming side of it. Many more changes to come :)
Fans normally pay $4.99 per month for channel memberships, which give them access to user badges, emotes and other exclusive content from their favorite creators. YouTube Gaming has released a number of other Twitch-like features this year, such as Live Redirects, which lets streamers send fans to other streams or premieres.
While Twitch remains the biggest US-based platform for livestreaming, a number of its high-profile streamers have decamped to YouTube Gaming in recent years. And there may be more to follow. Bloomberg reported last month that Twitch partners will get a smaller cut of subscription revenue (50 percent, down from 70 percent) under a new monetization model from the Amazon-owned platform. YouTube Gaming takes only 30 percent of a streamer’s revenue from channel subscriptions. While YouTube Gaming doesn’t have as big an audience as Twitch, that could easily change if more popular Twitch creators leave for greener pastures.
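To put those splits in per-subscriber terms, here’s a quick comparison assuming a $4.99 monthly price on both platforms (the figure cited above for YouTube memberships); real payouts would vary with taxes, fees and regional pricing.

```python
# Simplified per-subscription payout comparison based on the splits cited above.
# The $4.99 price is assumed for both platforms for illustration.
sub_price = 4.99

splits = [
    ("Twitch, current 70/30 partner deal", 0.70),
    ("Twitch, reported new 50/50 model", 0.50),
    ("YouTube Gaming, 70/30", 0.70),
]

for label, creator_share in splits:
    print(f"{label}: ${sub_price * creator_share:.2f} per subscriber per month")
# Roughly $3.49 under a 70/30 split versus about $2.50 under 50/50: the creator
# loses about a dollar per subscriber each month under the reported change.
```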