TikTok adds warnings to search results for 'distressing content'

TikTok is adding new warnings to its in-app search that will alert users when results may include “distressing content.” The app has employed “sensitive content” warnings on individual videos since last year, but the updated alerts will appear on results pages for search terms that could surface such content.

In a blog post, TikTok uses the example of “scary makeup” as a search term that may prompt such a warning. The company notes that users will be able to click through the warning to view results anyway, and that individual videos deemed “graphic or distressing” are ineligible for the app’s recommendations.

TikTok is also changing up search results to provide more resources on searches related to suicide and self-harm, the company said. In addition to surfacing links to helplines like the Crisis Text Line, the app will also point users to “content from our creators where they share their personal experiences with mental well-being, information on where to seek support and advice on how to talk to loved ones about these issues.”

The app has at times struggled to deal with content related to self-harm. Last year, a video of a suicide, originally streamed to Facebook Live, went viral on TikTok as the company scrambled to take down new copies. But even as users came up with workarounds to skirt TikTok’s detection, other creators posted viral clips urging users not to engage with the content. That suggests TikTok’s plan to rely on creators to share positive PSAs could be an effective strategy for the company.

In the U.S., the number for the National Suicide Prevention Lifeline is 1-800-273-8255. Crisis Text Line can be reached by texting HOME to 741741 (US), 686868 (Canada), or 85258 (UK). TikTok has published a list of resources for other countries.

Researchers: Platforms like Facebook have played ‘major role’ in fueling polarization

Social media platforms like Facebook “have played a major role in exacerbating political polarization that can lead to such extremist violence,” according to a new report from researchers at New York University’s Stern Center for Business and Human Rights.

That may not seem like a surprising conclusion, but Facebook has long tried to downplay its role in fueling divisiveness. The company says that existing research shows that “social media is not a primary driver of harmful polarization.” But in their report, NYU’s researchers write that “research focused more narrowly on the years since 2016 suggests that widespread use of the major platforms has exacerbated partisan hatred.”

To make their case, the authors highlight numerous studies examining the links between polarization and social media. They also interviewed dozens of researchers, as well as at least one Facebook executive: Yann LeCun, Facebook’s top AI scientist.

While the report is careful to point out that social media is not the "original cause" of polarization, the authors say that Facebook and others have “intensified” it. They also note that Facebook’s own attempts to reduce divisiveness, such as de-emphasizing political content in News Feed, show the company is well aware of its role. “The introspection on polarization probably would be more productive if the company’s top executives were not publicly casting doubt on whether there is any connection between social media and political divisiveness,” the report says.

“Research shows that social media is not a primary driver of harmful polarization, but we want to help find solutions to address it,” a Facebook spokesperson said in a statement. “That is why we continually and proactively detect and remove content (like hate speech) that violates our Community Standards and we work to stop the spread of misinformation. We reduce the reach of content from Pages and Groups that repeatedly violate our policies, and connect people with trusted, credible sources for information about issues such as elections, the COVID-19 pandemic and climate change.”

The report also notes that these problems are difficult to address “because the companies refuse to disclose how their platforms work.” Among the researchers’ recommendations is that Congress force “Facebook and Google/YouTube to share data on how algorithms rank, recommend, and remove content.” Both the platforms releasing that data and the independent researchers who study it should be legally protected as part of that work, they write.

Additionally, Congress should “empower the Federal Trade Commission to draft and enforce an industry code of conduct,” and “provide research funding” for alternative business models for social media platforms. The researchers also suggest several changes that Facebook and other platforms could implement directly, including adjusting their internal algorithms to further de-emphasize polarizing content and making those changes more transparent to the public. The platforms should also “double the number of human content moderators” and make them all full employees, in order to make moderation decisions more consistent.

Facebook's program for VIPs allows politicians and celebs to break its rules, report says

Facebook has for years used a little-known VIP program that’s enabled millions of high-profile users to skirt its rules, according to a new report in The Wall Street Journal.

According to the report, the program, called “XCheck” or “cross check,” was created to avoid “PR fires,” the public backlash that occurs when Facebook makes a mistake affecting a high-profile user’s account. Under cross check, if one of these accounts broke the rules, the violation was sent to a separate team so it could be reviewed by Facebook employees, rather than the non-employee moderators who typically review rule-breaking content.

Facebook had previously disclosed the existence of cross check, which other outlets had also reported on. But The Wall Street Journal report revealed that “most of the content flagged by the XCheck system faced no subsequent review.” This effectively allowed celebrities, politicians and other high-profile users to break the rules without consequences.

In one incident described in the report, Brazilian soccer star Neymar posted nude photos of a woman who had accused him of sexual assault. Such a post violates Facebook’s rules on non-consensual nudity, and rule-breakers are typically banned from the platform. But the cross check system “blocked Facebook’s moderators from removing the video,” and the post was viewed nearly 60 million times before it was eventually removed. His account faced no other consequences.

Last year alone, the cross check system enabled rule-breaking content to be viewed more than 16 billion times before being removed, according to internal Facebook documents cited by The Wall Street Journal. The report also says Facebook “misled” its Oversight Board, which pressed the company on its cross check system in June when weighing in on how the company should handle Donald Trump’s “indefinite suspension.” The company told the board at the time that the system only affected “a small number” of its decisions and that it was “not feasible” to share more data.

“The Oversight Board has expressed on multiple occasions its concern about the lack of transparency in Facebook’s content moderation processes, especially relating to the company’s inconsistent management of high-profile accounts,” the Oversight Board said in a statement shared on Twitter. “The Board has repeatedly made recommendations that Facebook be far more transparent in general, including about its management of high-profile accounts, while ensuring that its policies treat all users fairly.”

Facebook told The Wall Street Journal that its reporting was based on “outdated information” and that the company has been trying to improve the cross check system. “In the end, at the center of this story is Facebook's own analysis that we need to improve the program,” Facebook spokesperson Andy Stone wrote in a statement. “We know our enforcement is not perfect and there are tradeoffs between speed and accuracy.”

The revelations could prompt new investigations into Facebook’s content moderation policies. Some information related to cross check has been “turned over to the Securities and Exchange Commission and to Congress by a person seeking federal whistleblower protection,” according to The Wall Street Journal.

Google is rolling out dark mode for Search on desktop

Google is finally rolling out a dark theme for Search on desktop. The change had been spotted as far back as December, but the feature is now official and rolling out to all users “over the next few weeks,” according to an update from a Google product support manager.

You can get the new, not-quite-black theme by heading to Settings > Search Settings > Appearance and selecting “dark.” There’s also a “device default” option which will automatically update the theme based on your device’s settings.

Though the change is starting to roll out now, it could take a few more days or weeks before it’s available to everyone. 9to5Google further notes that some users have spotted a sun icon that can be used to toggle it on or off without diving into the settings page, though it’s not clear if that’s an official part of the update or another test.

WhatsApp rolls out end-to-end encryption for chat backups

WhatsApp began quietly testing end-to-end encryption for chat history backups earlier this summer. Now, the company is making the feature official: WhatsApp announced today that all users will be able to encrypt backups of their chat history.

While WhatsApp messages have been encrypted since 2016, the app hasn’t offered end-to-end encryption of backups, which rely on iCloud or Google Drive. But with the latest update, users will be able to opt in to end-to-end encryption for their backups before those backups hit their cloud storage service. Users can expect the update “in the coming weeks,” according to the company.

Once end-to-end encryption is enabled, “neither WhatsApp nor the backup service provider will be able to access” the backup, WhatsApp writes in a blog post. Backups are encrypted with a “unique, randomly generated encryption key.” Users will then be able to choose between two options: manually storing the 64-digit key, or setting a password, which can be used to access the key.
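
WhatsApp hasn’t detailed the exact construction in this announcement (the company’s security whitepaper describes a hardware security module guarding the password-protected key), but the two options map onto a familiar cryptographic pattern: encrypt the backup with a random key, then optionally “wrap” that key with a key derived from the password. Below is a minimal sketch of that pattern, assuming AES-256-GCM and PBKDF2 purely for illustration and treating the 64-digit key as 32 random bytes rendered in hex; it is not WhatsApp’s actual code.

```python
import os
import secrets

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

# Option 1: a random backup key the user writes down themselves.
# 32 random bytes = 64 hex digits, matching the "64-digit key" above.
backup_key = secrets.token_bytes(32)
print("Store this key somewhere safe:", backup_key.hex())

def encrypt_backup(chat_history: bytes, key: bytes) -> bytes:
    """Encrypt the backup before it ever reaches iCloud or Google Drive."""
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, chat_history, None)

# Option 2: wrap the backup key with a password-derived key, so the
# user only has to remember the password instead of 64 digits.
def wrap_key(key: bytes, password: str) -> bytes:
    salt = os.urandom(16)
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    wrapping_key = kdf.derive(password.encode())
    nonce = os.urandom(12)
    return salt + nonce + AESGCM(wrapping_key).encrypt(nonce, key, None)
```

Either way, the cloud provider only ever stores ciphertext; the difference is whether the user holds the raw key or a password that unlocks it.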

While the feature certainly makes backups more secure, there are a few factors to keep in mind. The first is that opting in means there will be no way to recover your backup should you lose the 64-digit encryption key (you are able to reset the password if you forget it). Next is that even though WhatsApp recently announced support for multiple devices, you’ll only be able to use encrypted backups on your primary device.

It’s also worth pointing out that end-to-end encryption doesn’t guarantee your chats will never be used in a way you might not like. This week, ProPublica published a lengthy story on WhatsApp’s use of human moderators who review chats that are reported by WhatsApp users. And earlier this year The Information reported that Facebook may be researching AI that could one day allow it to serve users ads based on encrypted messages. While neither of these “break” the security offered by encryption — and there are many very good reasons why people should be able to report abusive messages — it's a good reminder that privacy is about much more than just the presence of end-to-end encryption.

I wish Ray-Ban's Stories smart glasses were made by anyone but Facebook

I’ve spent much of the last week trying out Ray-Ban Stories, the new smart glasses made by Facebook and Ray-Ban owner Luxottica, and I’m still not entirely sure how I feel about them.

Here’s what I do know: I don’t hate them. I might actually kind of like them, and I suspect others will too. Yet there are many valid reasons why you’d want to avoid ceding yet another surface of your life — much less your face — to Facebook. As with everything Facebook, it comes down to privacy and whether or not you trust the company (it doesn’t help that Ray-Ban Stories’ privacy settings leave a lot to be desired).

But, judged purely as a gadget, they offer a pretty intriguing look at what non-AR smart glasses can be.

I’ve spent quite a bit of time with the first three versions of Snapchat’s Spectacles, and Facebook has undoubtedly benefited from watching Snap experiment with sunglasses over the last five years. Even the name, Ray-Ban Stories, feels like some kind of subtweet at Evan Spiegel.

But while Spectacles always felt a little conspicuous and toy-like, these mostly look and feel like, well, a pair of Ray-Bans. Yes, they are a bit heavier, and the cameras are difficult to ignore, but from a distance they could easily pass as any other pair of Wayfarers. That’s a good thing for Facebook. Ray-Bans might not be for everyone, but they are certainly more appealing — and influencer-friendly — than anything with a Facebook logo (there is Facebook branding on the packaging).

Besides the look, one major difference between Ray-Ban Stories and Spectacles is that the Ray-Ban glasses also have audio capabilities built in. Each arm has a tiny speaker so you can stream music or podcasts while still being able to hear your surroundings. Much like the Bose Frames, it essentially turns the glasses into Bluetooth headphones (there are also onboard mics so you can pick up calls). Though the audio functionality is more of a secondary feature, so far it’s been my favorite part of wearing the glasses.

The open-ear speakers mean you get much more ambient sound than you do with, say, AirPods Pro in transparency mode, so going for a walk while listening to music feels pretty natural. But I’ve also been pleasantly surprised with the audio quality, which is much better than I expected.

Music sounds rich, and calls and podcasts are clear — at least when you listen at about mid-range volume. The audio quality degrades significantly if you crank the volume to full blast, but that also kills the whole letting-ambient-noise-in thing. They aren’t the same as a decent pair of earbuds, but I’d have no hesitation about wearing these in place of my normal AirPods for a quick run or stroll around the park (they aren’t water-resistant, so I’d probably leave them at home on longer runs when sweat could be more of an issue).

The cameras (and yet another Facebook app)

But the main reason Facebook and Luxottica want you to spend $299 and up is the cameras. Facebook says it crammed a lot of processing power into the 5-megapixel cameras to help them punch above their weight, and it mostly works. The videos I shot had good stabilization even when I was walking or in a moving car. The photos were considerably less impressive, but passable if all you’re looking for is something to share to your Instagram Story.

Unsurprisingly, they seemed to shoot better in bright light — the photos I took in direct sun were much clearer than those I took on a slightly overcast day. In shadier conditions, everything looked a bit dark and underexposed. But these are sunglasses, after all, so it makes sense they’d be optimized for sunnier days.

Viewing and sharing your photos and videos requires a separate app, called Facebook View. The app, which requires a Facebook login, allows you to view your shots and make some basic edits before downloading them to your phone or sharing to another app.

For now, the app feels a bit overlooked compared with the glasses. There are some barebones editing controls for adjusting the crop, brightness, saturation and a few other elements. The cameras don’t actually shoot 3D photos, but you can use the app to add depth effects that lend a bit of motion to a shot, similar to the “3D photos” you can share in News Feed. There’s also an option to make a video collage, set to a preset soundtrack, with up to 10 clips at a time.

While I appreciated the ability to do a bit of fine-tuning, the effects are limited and so cheesy I briefly wondered if Facebook and Luxottica were hoping to market the glasses to Boomers rather than Gen Z. But, purely as an app for dumping photos, Facebook View gets the job done — and you can always edit your shots in a separate app.

My other major complaint was with the voice commands, which allow you to shoot photos and videos hands-free. It worked fine the few times I tried it, but I mostly steered clear of the feature. I feel goofy enough talking to any digital assistant outside of my home, but there’s something deeply uncomfortable about saying “Hey Facebook, take a photo” in public.

Privacy

Which brings us to the elephant in the room: privacy. Facebook’s official line is that Ray-Ban Stories were designed with “privacy in mind,” but we all know the company has a pretty dismal track record when it comes to privacy.

Here’s what you need to know about Ray-Ban Stories and privacy:

There are no ads in the Facebook View app, and Facebook says it won’t use the contents of your photos and videos for advertising purposes. But, as is so often the case with Facebook, merely using the product can inform the company’s ability to “personalize” your experience, according to a disclaimer shown when you first set up the app. “We use this data to improve and personalize your experience with Facebook products,” it says. You can opt out of this data sharing, but it’s on by default.

And remember those voice commands I complained about? Facebook also stores those transcripts by default, according to the app. The transcripts and “related data” are stored and accessible by “trained reviewers.” And, as with other voice-activated assistants, there will inevitably be occasions when something is captured even when the wake word (in this case, “Hey Facebook”) isn’t uttered. I’ve only intentionally used voice commands twice, and yet there are already four transcripts saved in the app.

Thankfully, you can opt out of both storing transcripts and sharing “additional” info with Facebook, though both are unfortunately enabled by default. The app also lets you delete any transcripts of voice commands it’s captured.

If any of that makes you uneasy, then you’ll likely want to steer clear of not just Ray-Ban Stories but the eventual AR glasses that come next. There’s also the price: at $299 and up, they are considerably more expensive than a standard pair of Wayfarers, though still less than Ray-Ban’s priciest frames. (And nearly $100 less than the starting price of Snapchat’s third-gen Spectacles.)

Whether or not you’re willing to make that investment largely depends on how you feel about Facebook and what you are hoping to get out of a pair of “smart glasses.” At best, they feel like a better, more polished version of Snapchat's Spectacles. It’s still a novelty, but with decent audio, smart glasses are starting to feel a lot more useful. At worst, the glasses are yet another reminder of Facebook's dominance.

Twitter starts rolling out Communities, its dedicated space for groups

After 15 years, Twitter is getting dedicated features for groups. The company is now starting to test Communities, “a more intimate space for conversations” on the platform.

Communities, which the company first teased back in February, are sort of like Twitter’s version of a subreddit or a public-facing group on Facebook. Communities are dedicated to specific topics, and members can post tweets to a dedicated group timeline. Each community has its own moderators who set rules for the group, and users must be invited by an existing member or moderator to participate.

The feature is meant to address what’s been a long-running issue for the platform: that it can be incredibly difficult for new users to wade through the noise and find the corner of Twitter that speaks to their interests. The company has tried to address this with Topics, which injects tweets into your timeline based on your interests, but Communities takes the idea a step further.

Twitter notes that some of its first Communities will focus on popular topics like skincare, astrology, sneakers and dogs, but that over time it expects the groups to reflect the more “niche discussions” that happen on the platform. For now, Twitter is starting with just a handful of Communities, though moderators and members are able to invite anyone to join. The company says it plans to open up the feature for more users to create Communities in the “coming months.”

Notably, Twitter seems to be trying to avoid some of the issues that have plagued Facebook’s Groups. All Communities are publicly accessible and viewable by anyone on the platform — there’s no such thing as a private or “secret” Community — though only members can participate in the discussion directly. Like Reddit and Facebook, Twitter will also rely on admins and moderators to steer the day-to-day conversations and keep members in check. The company is also working on new reporting and detection features to weed out “potentially problematic” groups that may spring up.

Though the company is calling the feature a test, Twitter seems to be quite serious about its potential. Communities is getting its own tab in the center of Twitter’s app, between Explore and Notifications, which suggests the company plans for Communities to be a prominent feature of its platform for the long term.

The fight to study what happens on Facebook

Facebook recently added a new report to its transparency center. The "widely viewed content" report was ostensibly meant to shed light on what’s been a long-running debate: What is the most popular content on Facebook?

The 20-page report raised more questions than it answered. For example, it showed that the most viewed URL was a seemingly obscure website associated with former Green Bay Packers players. It boasted nearly 90 million views even though its official Facebook page has just a few thousand followers. The report also included URLs for e-commerce sites that seemed at least somewhat spammy, like online stores for CBD products and Bible-themed t-shirts. There was also a low-res cat GIF and several bland memes that asked people to respond with foods they like or don’t like, or items they had recently purchased.

Notably absent from the report were the right-wing figures who regularly dominate the unofficial “Facebook Top 10” Twitter account, which ranks content by engagement. In fact, there wasn’t very much political content at all, a point Facebook has long been eager to prove. For Facebook, its latest attempt at “transparency” was evidence that most users’ feeds aren’t polarizing, disinformation-laced swamps but something much more mundane.

Days later, The New York Times reported that the company had prepped an earlier version of the report, but opted not to publish it. The top URL from that report was a story from the Chicago Sun-Times that suggested the death of a doctor may have been linked to the COVID-19 vaccine. Though the story was from a credible news source, it’s also the kind of story that’s often used to fuel anti-vaccine narratives.

Almost as soon as the initial report was published, researchers raised other issues. Ethan Zuckerman, an associate professor of public policy and communication at the University of Massachusetts Amherst, called it “transparency theatre.” It was, he said, “a chance for FB to tell critics that they’re moving in the direction of transparency without releasing any of the data a researcher would need to answer a question like ‘Is extreme right-wing content disproportionately popular on Facebook?’”

The promise of ‘transparency’

For researchers studying how information travels on Facebook, it’s a familiar tactic: provide enough data to claim “transparency,” but not enough to actually be useful. “The findings of the report are debatable,” says Alice Marwick, principal researcher at the Center for Information Technology and Public Life at the University of North Carolina. “The results just didn’t hold up, they don’t hold up to scrutiny. They don’t map to any of the ways that people actually share information.”

Marwick and other researchers have suggested this may be because Facebook opted to slice its data in an unusual way: perhaps it only counted URLs that appeared in the body of a post, rather than in the link previews people typically share. Or perhaps Facebook just has a really bad spam problem. Or maybe it’s a combination of the two. “There’s no way for us to independently verify them … because we have no access to data compared to what Facebook has,” Marwick told Engadget.

Those concerns were echoed by Laura Edelson, a researcher at New York University. “No one else can replicate or verify the findings in this report,” she wrote in a tweet. “We just have to trust Facebook.” Notably, Edelson has her own experience running into the limits of Facebook’s push for “transparency.”

The company recently shut down her personal Facebook account, as well as those of several NYU colleagues, in response to their research on political ad targeting on the platform. Since Facebook doesn’t make targeting data available in its ad library, the researchers recruited volunteers to install a browser extension that could scoop up advertising info based on their feeds.

Facebook called it “unauthorized scraping,” saying it ran afoul of the company’s privacy policies. In doing so, it cited its obligation to the FTC, which the agency later said was “misleading.” Outside groups had vetted the project and confirmed it was only gathering data about advertisers, not users’ personal data. Guy Rosen, the company’s VP of Integrity, later said that even though the research was “well-intentioned,” it posed too great a privacy risk. Edelson and others said Facebook was trying to silence research that could make the company look bad. “If this episode demonstrates anything it is that Facebook should not have veto power over who is allowed to study them,” she wrote in a statement.

Rosen and other Facebook execs have said that Facebook does want to make more data available to researchers, but that they need to go through the company’s official channels to ensure the data is made available in a “privacy protected” way. The company has a platform called FORT (Facebook Open Research and Transparency), which allows academics to request access to some types of Facebook data, including election ads from 2020. Earlier this year, the company said it would expand the program to make more info available to researchers studying “fringe” groups on the platform.

But while Facebook has billed FORT as yet another step in its efforts to provide “transparency,” those who have used FORT have cited shortcomings. A group of researchers at Princeton hoping to study election ads ultimately pulled the project, citing Facebook’s restrictive terms. They said Facebook pushed a “strictly non-negotiable” agreement that required them to submit their research to Facebook for review prior to publishing. Even more straightforward questions about how they were permitted to analyze the data were left unanswered.

“Our experience dealing with Facebook highlights their long running pattern of misdirection and doublespeak to dodge meaningful scrutiny of their actions,” they wrote in a statement describing their experience.

A Facebook spokesperson said the company only checks for personally identifiable information, and that it’s never rejected a research paper.

“We support hundreds of academic researchers at more than 100 institutions through the Facebook Open Research and Transparency project,” Facebook’s Chaya Nayak, who heads up FORT at Facebook, said in a statement. “Through this effort, we make massive amounts of privacy-protected data available to academics so they can study Facebook’s impact on the world. We also proactively seek feedback from the research community about what steps will help them advance research most effectively going forward.”

Data access affects researchers’ ability to study Facebook’s biggest problems. And the pandemic has further highlighted just how significant that work can be. Facebook’s unwillingness to share more data about vaccine misinformation has been repeatedly called out by researchers and public health officials. It’s all the more vexing because Facebook employs a small army of its own researchers and data scientists. Yet much of their work is never made public. “They have a really solid research team, but virtually everything that research team does is kept only within Facebook, and we never see any of it,” says Marwick, the UNC professor.

But much of Facebook’s internal research could help those outside the platform who are trying to understand the same questions, she says. “I want more of the analysis and research that's going on within Facebook to be communicated to the larger scholarly community, especially stuff around polarization [and] news sharing. I have a fairly strong sense that there's research questions that are actively being debated in my research community that Facebook knows the answer to, but they can't communicate it to us.”

The rise of ‘data donation’

To get around this lack of access, researchers are increasingly looking to “data donation” programs. Like the browser extension used by the NYU researchers, these projects recruit volunteers to “donate” some of their own data for research.

NYU’s Ad Observer, for example, collected data about ads on Facebook and YouTube, with the goal of helping them understand the platform’s ad targeting at a more granular level. Similarly, Mozilla, maker of the Firefox browser, has a browser add-on called Rally that helps researchers study a range of issues from COVID-19 misinformation to local news. The Markup, a nonprofit news organization, has also created Citizen Browser, a customized browser that aids journalists’ investigations into Facebook and YouTube. (Unlike Mozilla and NYU’s browser-based projects, The Markup pays users who participate in Citizen Browser.)
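
Each of these projects defines its own collection rules, but they share one design constraint: keep the advertiser-facing fields and drop anything that identifies the volunteer. Purely as an illustration of that constraint, here is a hypothetical sanitizer with invented field names; it is not the actual Ad Observer, Rally or Citizen Browser code.

```python
# Hypothetical sketch of the sanitization step a data-donation tool
# might apply before uploading a captured ad. Field names are invented.
ADVERTISER_FIELDS = {
    "advertiser_name",        # who ran the ad
    "ad_text",                # what the ad said
    "paid_for_by",            # the disclosure string
    "targeting_description",  # e.g. "People ages 18+ in Ohio"
}

def sanitize(raw_ad: dict) -> dict:
    """Keep advertiser-facing fields; never copy anything about the user."""
    return {k: v for k, v in raw_ad.items() if k in ADVERTISER_FIELDS}

captured = {
    "advertiser_name": "Example PAC",
    "ad_text": "Vote on Tuesday!",
    "paid_for_by": "Paid for by Example PAC",
    "targeting_description": "People ages 18+ in Ohio",
    "viewer_user_id": "12345",          # personal data: dropped
    "viewer_interests": ["gardening"],  # personal data: dropped
}
print(sanitize(captured))  # only the four advertiser fields survive
```

That separation is the same one outside reviewers pointed to when vetting NYU’s project: the extension gathered data about advertisers, not about the volunteers running it.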

“The biggest single problem in our research community is the lack of access to private proprietary data,” says Marwick. “Data donation programs are one of the tactics that people in my community are using to try to get access to data, given that we know the platforms aren’t going to give it to us.”

Crucially, it’s also data that’s collected independently, and that may be the best way to ensure true transparency, says Rebecca Weiss, who leads Mozilla’s Rally project. “We keep getting these good faith transparency efforts from these companies but it's clear that transparency also means some form of independence,” Weiss tells Engadget.

For participants, these programs offer a way to make sure some of their data, which is constantly being scooped up by mega-platforms like Facebook, is also used in a way that’s within their control: to aid research. Weiss says that, ultimately, it’s not that different from market research or other public science projects. “This idea of donating your time to a good faith effort — these are familiar concepts.”

Researchers also point out that there are significant benefits to gaining a better understanding of how the most influential and powerful platforms operate. The study of election ads, for example, can expose bad actors trying to manipulate elections. Knowing more about how health misinformation spreads can help public health officials understand how to combat vaccine hesitancy. Weiss notes that having a better understanding of why we see the ads we do — political or otherwise — can go a long way toward demystifying how social media platforms operate.

“This affects our lives on a daily basis and there's not a lot of ways that we as consumers can prepare ourselves for the world that exists with these increasingly more powerful ad networks that have no transparency.”

Twitter opens Super Follow subscriptions for some creators

Twitter is finally flipping the switch on “Super Follows,” its new subscription feature that allows creators to charge their followers for exclusive content. Starting today, the company is making the feature available to a “small group” of creators, with plans to expand the lineup in the coming weeks (Twitter has been taking applications for Super Follows since June).

For now, creators can set monthly rates of $2.99, $4.99 or $9.99 for access to “subscriber-only” tweets. Twitter says it will eventually incorporate other features, such as Spaces and newsletters. But until then, the feature essentially amounts to… paying for tweets, which might explain why the company is trying it out with just a few people to start. The initial lineup includes:

  • @MakeupforWOC who will offer “client-level treatment” for subscribers with skincare questions

  • @myeshachou who will provide exclusive “behind-the-scenes stories”

  • @KingJosiah54 who will offer “in-depth sports analysis”

  • @tarotbybronx who will provide Super Followers with “astrology, tarot, and intuitive healing advice” and “extra spiritual guidance”

Of course, if you’re especially interested in one of these topics or just a dedicated fan, there is an upside to buying a subscription. You’ll be able to interact with creators in a smaller, and slightly more private, forum. That could be useful if, for example, you’re hoping to get some personalized skincare advice. On the other hand, asking fans to pay for the kind of content they’re used to getting for free might be a tough sell.

Super Follows is one piece of Twitter’s strategy to reshape its platform as a destination for creators. Outside of subscriptions, the company is also experimenting with letting creators sell tickets to audio chats in Spaces. Twitter is also working on a newsletter platform — it acquired Revue earlier this year — and has opened up tipping features in its app.

Twitter tests new harassment prevention feature with ‘Safety Mode’

Twitter is experimenting with its most aggressive anti-harassment features to date. The company will start testing “Safety Mode,” a new account setting that automatically blocks accounts using “potentially harmful language.” Twitter first previewed the feature back in February during its Analyst Day presentation, but is now starting to make it available to “a small feedback group.” It’s not clear when it might be available more widely.

When enabled, Safety Mode will proactively block accounts that are likely to be the source of harassment for a period of seven days. Twitter says the system is designed so that accounts of people you know or frequently interact with won’t be blocked, but trolls will.

“Safety Mode is a feature that temporarily blocks accounts for seven days for using potentially harmful language — such as insults or hateful remarks — or sending repetitive and uninvited replies or mentions,” Twitter writes in a blog post. “When the feature is turned on in your Settings, our systems will assess the likelihood of a negative engagement by considering both the Tweet’s content and the relationship between the Tweet author and replier.”
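
Twitter hasn’t said how that likelihood is actually computed. Purely as an invented illustration of combining the two signals the quote names (the tweet’s content and the relationship between author and replier), an autoblock heuristic could look something like the sketch below; the classifier score, interaction check and threshold are placeholders, not Twitter’s system.

```python
from datetime import datetime, timedelta

AUTOBLOCK_DURATION = timedelta(days=7)  # the duration Twitter describes
TOXICITY_THRESHOLD = 0.8                # placeholder value

def should_autoblock(reply_toxicity: float,
                     you_follow_author: bool,
                     recent_interactions: int) -> bool:
    """Combine a content score with relationship signals.

    reply_toxicity is a 0-to-1 score from some hypothetical
    harmful-language classifier; accounts you know are exempt,
    per Twitter's description.
    """
    if you_follow_author or recent_interactions > 0:
        return False  # "accounts of people you know" aren't blocked
    return reply_toxicity >= TOXICITY_THRESHOLD

def autoblock_expiry(now: datetime) -> datetime:
    """Autoblocks lapse after seven days, per the announcement."""
    return now + AUTOBLOCK_DURATION
```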

Introducing Safety Mode. A new way to limit unwelcome interactions on Twitter. pic.twitter.com/xa5Ot2TVhF

— Twitter Safety (@TwitterSafety) September 1, 2021

While Twitter has taken several steps in the past to address its long-running harassment problem, Safety Mode is notable because it takes more of the burden off of the person being harassed. Instead of manually blocking, muting and reporting problematic accounts, the feature should be able to catch the offending tweets before they are seen.

Because the feature is still at a relatively early stage, Twitter says it’s likely to make at least some mistakes. Users will be able to manually review the tweets and accounts flagged by Safety Mode and reverse faulty autoblocks. When the seven-day period ends, they’ll get a notification “recapping” the actions Safety Mode took.