Meta's Oversight Board will review the company's handling of election content in Brazil

The Oversight Board has agreed to review a case related to Meta’s handling of election content in Brazil. In a statement, the board said it planned to scrutinize the social network’s policies surrounding election content in “high-risk” areas.

The case stems from a user who posted a video in early January calling for people to “besiege” Brazil’s congress following the election of President Lula da Silva. The video also featured clips of a speech from a Brazilian general, who called for people to go into the streets and government buildings. The video was reported seven times by four different users, according to the board, but remained on Facebook even after it was reviewed by five separate moderators. Meta later opted to remove the post and issue a “strike” to the person who had originally posted it, following the Oversight Board’s decision to review the case.

Though the case is related to Brazil’s most recent presidential election, the board’s recommendations could have a more far-reaching impact. “The Board selected this case to examine how Meta moderates election-related content, and how it is applying its Crisis Policy Protocol in a designated ‘temporary high-risk location,’” the group wrote in a statement.

As the board points out, Meta’s “Crisis Policy Protocol” is a central aspect of the case. The protocol, which was created after the Oversight Board weighed in on the suspension of Donald Trump, allows Meta to respond to situations where there is a risk of “imminent harm,” either offline or online. So any recommendations that address that policy could end up affecting election-related content around the world, not just in Brazil.

However, that outcome is still months away. For now, the Oversight Board is asking for public feedback on various issues associated with the case before it makes recommendations to Meta. The company will then have 60 days to respond, though, as usual, Meta is not required to adopt policy changes suggested by the board.

The FTC is investigating Elon Musk's handling of Twitter Blue and the ‘Twitter Files’

The Federal Trade Commission is stepping up its investigation into some of Twitter’s most controversial decisions since Elon Musk took over the company last fall. That includes the company’s mass layoffs and the launch of Twitter Blue, as well as the company’s dealings with journalists involved with the so-called “Twitter Files,” according to a new report in The Wall Street Journal.

At issue is Twitter’s 2022 settlement with the FTC over its use of “deceptive” ad targeting. Along with a $150 million fine, the company at the time agreed to a “comprehensive privacy and information security program,” as well as other strict measures meant to protect users’ privacy. But there’s been widespread concern from lawmakers and others that Twitter has not adhered to those requirements under Musk’s leadership.

Now, The Wall Street Journal reports that the FTC has sent at least a dozen letters to Twitter since last fall in an effort to get more information about the company’s handling of layoffs, Twitter Blue, the “Twitter Files” and other issues. The agency is also reportedly trying to depose Musk as part of the inquiry. The House Judiciary Committee also released a report about the FTC's inquiries to Twitter.

The report isn’t the first suggestion that Twitter may have run afoul of the regulator since Musk’s takeover. The FTC previously said it had “deep concern” following the departures of key privacy and security executives. Lawmakers and others have also raised concerns about the hasty rollout of Twitter Blue, which reportedly launched without a proper privacy or security review, a requirement of Twitter’s FTC settlement.

Likewise, as Bloomberg pointed out last year, the settlement also requires Twitter to limit internal access to Twitter users’ data. Security experts have questioned whether Musk’s decision to hand over reams of internal documents and grant journalists access to internal systems could violate the company’s obligations under the settlement.

In a tweet, Musk called the FTC’s actions “a shameful case of weaponization of a government agency for political purposes and suppression of the truth.” Republican members of the House Judiciary Committee also criticized the agency’s investigation as “harassment.”

TikTok's Series feature will allow creators to charge for 'premium' content

TikTok has a new plan to challenge YouTube and help creators earn more on its platform. The company is introducing Series, a new feature that allows creators to charge for collections of "premium" videos.

Videos that are part of a “series” will differ from other TikTok videos in a couple of ways. First, the videos can be up to 20 minutes long, double the ten-minute limit for most other videos on the platform. And, unlike other TikTok content, series content will live behind a paywall, meaning the clips won’t show up in the app’s recommendations or be as easily shareable as typical TikTok videos. (Creators will, however, be able to link to their Series from other, non-paywalled videos.)

Series creators have considerable flexibility in how much they charge, with one-time payment options ranging from $0.99 to $189.99 for access to video collections. In a blog post, TikTok notes that creators are free to choose an amount “that best reflects the value of their exclusive content.”

The feature could help creators earn substantially more from their videos than existing monetization features on the platform. Creators have long criticized the company’s creator fund, which they say isn’t big enough to accommodate the app’s growing ranks of prominent users. TikTok has apparently paid close attention to that criticism: it recently introduced a newer version of its creator fund, the Creativity Program, which is meant to help creators earn more.

Interestingly, TikTok says that it’s not planning on taking a cut of creators’ revenue from Series, at least for now. A spokesperson for the company said that “for a limited time” creators will receive “100% of their earnings” minus app store fees. That could potentially help TikTok make its new longform offerings more competitive with YouTube, which creators often favor for longer videos because of the increased revenue potential.

Of course, a lot will depend on how individual TikTokkers use the new paywall features, and whether their fans are willing to pay for exclusive content. For now, only a handful of “select” creators have access to Series, but the company says it will start accepting applications from more creators “in the coming months.”

Meta agrees to change VIP 'cross-check' program but won't disclose who is in it

Meta has responded to the dozens of recommendations from the Oversight Board regarding its controversial cross-check program, which shields high-profile users from the company’s automated content moderation systems. In its response, Meta agreed to adopt many of the board’s suggestions, but declined to implement changes that would have increased transparency around who is in the program.

Meta’s response comes after the board had criticized the program for prioritizing “business concerns” over human rights. While the company had characterized the program as a “second layer of review” to help it avoid mistakes, the Oversight Board noted that cross-check cases are often so backlogged that harmful content is left up far longer than it otherwise would be.

In total, Meta agreed to adopt 26 of the 32 recommendations at least partially. These include changes around how cross-check cases are handled internally at the company, as well as promises to disclose more information to the Oversight Board about the program. The company also pledged to reduce the backlog of cases.

But, notably, Meta declined to take the Oversight Board up on its recommendation that it publicly disclose politicians, state actors, businesses and other public figures who benefit from the protections of cross-check. The company said publicly disclosing details about the program “could lead to myriad unintended consequences making it both unfeasible and unsustainable” and said that it would open cross-check to being “game(d)” by bad actors.

Likewise, the company declined, or didn’t commit, to recommendations that would alert people that they are subject to cross-check. Meta declined a recommendation that it require users who are part of cross-check to make “an additional, explicit, commitment” to follow the company’s rules. And Meta said it was “assessing the feasibility” of a recommendation that it allow people to opt out of cross-check (which would also, naturally, notify them that they are part of the program). “We will collaborate with our Human Rights and Civil Rights teams to assess options to address this issue, in an effort to enhance user autonomy regarding cross-check,” the company wrote.

While Meta’s response shows that the company is willing to make changes to one of its most controversial programs, it also underscores the company’s reluctance to make key details about cross-check public. That also aligns with the Oversight Board’s previous criticism, which last year accused the company of not being “fully forthcoming” about cross-check.

TikTok will automatically limit screen time for teens

TikTok is introducing new settings meant to reduce how much time teens spend in the app. In an update, the company says it will automatically set a default daily screen time limit of 60 minutes for users under the age of 18.

With the change, teens will still be able to bypass the daily limit, but they’ll be required to enter a passcode, “requiring them to make an active decision to extend that time,” the company says. Additionally, if teens opt to turn off the screen time limit altogether, TikTok will further prompt them to set a limit if they spend more than 100 minutes in the app.

The company is also adding new parental control features via the app’s “Family Pairing” feature, which allows parents to monitor their children’s activity on TikTok. Parents will be able to set their own custom screen time limits, and view a dashboard that details stats about their child’s time in the app, like how often they open it and what times of the day they use it most. Parents can also set a schedule for when their children can receive notifications, and choose to filter topics they don’t want to appear in their For You feeds.

The update comes as lawmakers in the United States have renewed their efforts to ban TikTok entirely. In addition to national security concerns, Congress has also criticized the company for not doing enough to protect its youngest users.

Twitter updates violent speech policy to ban ‘wishes of harm’

Twitter is once again tightening its rules around what users are permitted to say on the platform. The company introduced an updated “violent speech” policy, which contains some notable additions compared with previous versions of the rules.

Interestingly, the new policy prohibits users from expressing “wishes of harm” and similar sentiments. “This includes (but is not limited to) hoping for others to die, suffer illnesses, tragic incidents, or experience other physically harmful consequences,” the rules state. That’s a reversal from Twitter’s previous policy, which explicitly said that “statements that express a wish or hope that someone experiences physical harm” were not against the company’s rules.

“Statements that express a wish or hope that someone experiences physical harm, making vague or indirect threats, or threatening actions that are unlikely to cause serious or lasting injury are not actionable under this policy,” Twitter’s previous policy stated, according to the Wayback Machine.

That change isn't the only addition to the policy. Twitter’s rules now also explicitly protect “infrastructure that is essential to daily, civic, or business activities” from threats of damage. From the rules:

You may not threaten to inflict physical harm on others, which includes (but is not limited to) threatening to kill, torture, sexually assault, or otherwise hurt someone. This also includes threatening to damage civilian homes and shelters, or infrastructure that is essential to daily, civic, or business activities.

These may not seem like particularly eyebrow-raising changes, but they are notable given Elon Musk’s previous statements about how speech should be handled on Twitter. Prior to taking over the company, the Tesla CEO stated that his preference would be to allow all speech that is legal. “I think we would want to err on the side of, if in doubt, let the speech exist,” he said at the time.

It’s also not the first time Twitter’s rules have become more restrictive since Musk’s takeover. The company’s rules around doxxing changed following his dustup with the (now suspended) @elonjet account, which shared the whereabouts of Musk’s private jet.

Twitter didn’t explain its rationale for the changes, but noted in a series of tweets that it may suspend accounts breaking the rules or force them to delete the tweets in question. The company no longer has a communications team to respond to requests for comment.

Flipboard is leaning into Mastodon — and away from Twitter

Flipboard is the latest service to embrace Mastodon as Twitter becomes increasingly chaotic under Elon Musk. The news reading app, whose founder was once on Twitter’s board of directors, is now going all in on the Fediverse.

The company announced that it’s integrating Mastodon into its main app, so that users can browse their feeds much the way they can “flip” through their Twitter timelines. Flipboard is also starting up its own Mastodon instance in an effort to encourage broader adoption among its user base.

According to Flipboard CEO Mike McCue, the two updates are the first “very initial steps” of a broader plan to embrace the decentralized social networking protocols that have been popularized by Mastodon over the last year. Instead of relying on the “proprietary social graphs” of services like Twitter and Facebook — both of which have become increasingly hostile to outside developers — Flipboard could instead be centered around ActivityPub, the open source protocol that powers Mastodon and the rest of the decentralized services that make up the “Fediverse.”

“As we embrace ActivityPub at Flipboard, we’ll effectively allow anyone who's on Mastodon to follow a user on Flipboard, and to follow a Flipboard magazine, and vice versa,” McCue says in an interview. “What ActivityPub enables is a common, open social graph.” This means that services like Flipboard and Mastodon could eventually be interoperable with other platforms that have pledged to adopt ActivityPub, like Tumblr.
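
For readers curious what that “common, open social graph” looks like at the protocol level, here is a minimal, illustrative sketch of an ActivityPub “Follow” activity, the kind of object federated servers exchange so that an account on one service can follow an account on another. The domains and usernames are hypothetical, and Flipboard has not published its implementation details; this only illustrates the general ActivityStreams format the protocol uses.

```typescript
// Illustrative only: a minimal ActivityPub "Follow" activity expressed as a
// typed object. Domains and usernames are hypothetical placeholders.
interface FollowActivity {
  "@context": string; // JSON-LD context identifying the ActivityStreams vocabulary
  id: string;         // unique IRI for this specific activity
  type: "Follow";
  actor: string;      // the account doing the following
  object: string;     // the account (or, say, a Flipboard magazine) being followed
}

const follow: FollowActivity = {
  "@context": "https://www.w3.org/ns/activitystreams",
  id: "https://mastodon.example/users/alice/activities/1",
  type: "Follow",
  actor: "https://mastodon.example/users/alice",
  object: "https://flipboard.example/users/bob",
};

// In ActivityPub federation, the follower's server delivers this JSON to the
// target actor's inbox; the receiving server replies with an "Accept" activity.
console.log(JSON.stringify(follow, null, 2));
```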

The shift is especially notable for Flipboard given its once deep ties to Twitter. McCue served on Twitter's board of directors between 2010 and 2012, and Twitter once reportedly considered buying the app. But now, McCue says the current state of Twitter “is quite sad for a lot of people who were advocates and participants in the whole Twitter ecosystem.”

And, with Twitter set to end its free API, it’s not clear how much longer Flipboard will be able to maintain any kind of functionality with the service. “It's total chaos over there,” McCue says, referring to Twitter since Musk took over the company. “The writing on the wall is that I don't see [Flipboard’s] Twitter integration lasting much longer.”

But McCue describes Mastodon and the Fediverse as a kind of antidote to the Musk-induced chaos. “We need to get out of this world where one person can basically dictate how these communities of people are interacting with each other,” he says.

Of course, there are still questions about whether Mastodon will ever be more than a relatively niche Twitter alternative. The platform has seen explosive growth since last spring when Musk announced his takeover bid for Twitter, but the growth has since leveled off. And the decentralized nature of the platform isn’t necessarily intuitive for newcomers. McCue acknowledges that the Fediverse is still waiting for its “Netscape moment” (he was an executive at the browser company in the late ‘90s at the peak of the Web 1.0 era), but he predicts that other mainstream services may start looking at Mastodon more strategically as well.

“I think you're going to see, in the coming months, companies like us start to integrate ActivityPub and advocate to publishers and content creators that they should build a presence in the Fediverse,” he predicts. “Once that starts to reach critical mass … then I think you're gonna get that Netscape moment.”

Meta is reforming ‘Facebook jail’ in response to the Oversight Board

It’s now going to be harder to land in “Facebook jail.” Meta says it’s reforming its penalty system so that people are less likely to have their accounts restricted for less serious violations of the company’s rules.

“Under the new system, we will focus on helping people understand why we have removed their content, which is shown to be more effective at preventing re-offending, rather than so quickly restricting their ability to post,” Meta explains in a blog post. “We will still apply account restrictions to persistent violators, typically beginning at the seventh violation, after we’ve given sufficient warnings and explanations to help the person understand why we removed their content.”

Previously, users could land in “Facebook jail,” which could prevent them from posting on the platform for 30 days at a time, for relatively minor infractions. Meta says that it sometimes imposed these types of penalties mistakenly due to “missed context.” For example, someone who jokingly told a friend they would “kidnap” them, or posted a friend’s address in order to invite others to an event, may have been wrongly penalized. These punishments were not just unfair for “well-intentioned” users, but in some cases actually made it more difficult for the company to identify actual bad actors.

With the new system, users who receive a strike may still be restricted from certain features, like posting in groups, but will still be able to post elsewhere on the service. Longer, thirty-day restrictions will be reserved for a user’s tenth strike, though the company may impose additional restrictions for “severe” rule violations. Facebook users will be able to view their past violations and details about account restrictions in the “Account Status” section of the app.

Meta notes that the overhaul comes as a result of feedback from the Oversight Board, which has repeatedly criticized Meta for not providing users with information about why their posts were removed. In a statement responding to the new policy, the board said the changes were “a welcome step in the right direction,” but that “room for improvement remains.”

The board notes that the latest changes don’t do anything to address “severe strikes,” which can have an outsize impact on activists and journalists, especially when the company makes a mistake. The Oversight Board also said that Meta should provide users the opportunity to add context to their appeals, and that the information should be available to its moderators.

Two Supreme Court cases could upend the rules of the internet

The Supreme Court could soon redefine the rules of the internet as we know it. This week, the court will hear two cases, Gonzalez v. Google and Twitter v. Taamneh, that give it an opportunity to drastically change the rules of speech online.

Both cases deal with how online platforms have handled terrorist content. And both have sparked deep concerns about the future of content moderation, algorithms and censorship.

Section 230 and Gonzalez v. Google

If you’ve spent any time following the various culture wars associated with free speech online over the last several years, you’ve probably heard of Section 230. Sometimes referred to as “the twenty-six words that invented the internet,” Section 230 is a clause of the Communications Decency Act that shields online platforms from liability for their users' actions. It also protects companies’ ability to moderate what appears on their platforms.

Without these protections, Section 230 defenders argue, the internet as we know it couldn’t exist. But the law has also come under scrutiny over the last several years amid a larger reckoning with Big Tech’s impact on society. Broadly, those on the right favor repealing Section 230 because they claim it enables censorship, while some on the left have said it allows tech giants to avoid responsibility for the societal harms caused by their platforms. But even among those seeking to amend or dismantle Section 230, there’s been little agreement about specific reforms.

Section 230 also lies at the heart of Gonzalez v. Google, which the Supreme Court will hear on February 21st. The case, brought by family members of a victim of the 2015 Paris terrorist attack, argues that Google violated US anti-terrorism laws when ISIS videos appeared in YouTube’s recommendations. Section 230 protections, according to the suit, should not apply because YouTube’s algorithms suggested the videos.

“It basically boils down to saying platforms are not liable for content posted by ISIS, but they are liable for recommendation algorithms that promoted that content,” said Daphne Keller, who directs the Program on Platform Regulation at Stanford's Cyber Policy Center, during a recent panel discussing the case.

That may seem like a relatively narrow distinction, but algorithms underpin almost every aspect of the modern internet. So the Supreme Court’s ruling could have an enormous impact not just on Google, but on nearly every company operating online. If the court sides against Google, then “it could mean that online platforms would have to change the way they operate to avoid being held liable for the content that is promoted on their sites,” the Bipartisan Policy Center, a Washington-based think tank, explains. Some have speculated that platforms could be forced to do away with any kind of ranking at all, or would have to engage in content moderation so aggressive it would eliminate all but the most banal, least controversial content.

“I think it is correct that this opinion will be the most important Supreme Court opinion about the internet, possibly ever,” University of Minnesota law professor Alan Rozenshtein said during the same panel, hosted by the Brookings Institution.

That’s why dozens of other platforms, civil society groups and even the original authors of Section 230 have weighed in, via “friend of the court” briefs, in support of Google. In its brief, Reddit argued that eroding 230 protections for recommendation algorithms could threaten the existence of any platform that, like Reddit, relies on user-generated content.

“Section 230 protects Reddit, as well as Reddit’s volunteer moderators and users, when they promote and recommend, or remove, digital content created by others,” Reddit states in its filing. “Without robust Section 230 protection, Internet users — not just companies — would face many more lawsuits from plaintiffs claiming to be aggrieved by everyday content moderation decisions.”

Yelp, which has spent much of the last several years advocating for antitrust action against Google, shared similar concerns. “If Yelp could not analyze and recommend reviews without facing liability, those costs of submitting fraudulent reviews would disappear,” the company argues. “If Yelp had to display every submitted review, without the editorial freedom Section 230 provides to algorithmically recommend some over others for consumers, business owners could submit hundreds of positive reviews for their own business with little effort or risk of a penalty.”

Meta, on the other hand, argues that a ruling finding 230 doesn’t apply to recommendation algorithms would lead to platforms suppressing more “unpopular” speech. Interestingly, this argument would seem to play into the right’s anxieties about censorship. “If online services risk substantial liability for disseminating third-party content … but not for removing third-party content, they will inevitably err on the side of removing content that comes anywhere close to the potential liability line,” the company writes. “Those incentives will take a particularly heavy toll on content that challenges the consensus or expresses an unpopular viewpoint.”

Twitter v. Taamneh

The day after the Supreme Court hears arguments in Gonzalez v. Google, it will hear yet another case with potentially huge consequences for the way online speech is moderated: Twitter v. Taamneh. And while it doesn’t directly deal with Section 230, the case is similar to Gonzalez v. Google in a few important ways.

Like Gonzalez, the case was brought by the family of a victim of a terrorist attack. And, like Gonzalez, family members of the victim are using US anti-terrorism laws to hold Twitter, Google and Facebook accountable, arguing that the platforms aided terrorist organizations by failing to remove ISIS content from their services. As with the earlier case, the worry from tech platforms and advocacy groups is that a ruling against Twitter would have profound consequences for social media platforms and publishers.

“There are implications on content moderation and whether companies could be liable for violence, criminal, or defamatory activity promoted on their websites,” the Bipartisan Policy Center says of the case. If the Supreme Court were to agree that the platforms were liable, then “greater content moderation policies and restrictions on content publishing would need to be implemented, or this will incentivize platforms to apply no content moderation to avoid awareness.”

And, as the Electronic Frontier Foundation noted in its filing in support of Twitter, platforms “will be compelled to take extreme and speech-chilling steps to insulate themselves from potential liability.”

There could even be potential ramifications for companies whose services are primarily operated offline. “If a company can be held liable for a terrorist organization’s actions simply because it allowed that organization’s members to use its products on the same terms as any other consumer, then the implications could be astonishing,” Vox writes.

What’s next

It’s going to be several more months before we know the outcome of either of these cases, though analysts will be closely watching the proceedings to get a hint of where the justices may be leaning. It’s also worth noting that these aren’t the only pivotal cases concerning social media and online speech.

There are two other cases, related to restrictive social media laws out of Florida and Texas, that might end up at the Supreme Court as well. Both of those could also have significant consequences for online content moderation.

In the meantime, many advocates argue that Section 230 reform is best left to Congress, not the courts. As Jeff Kosseff, a law professor at the US Naval Academy who literally wrote the book about Section 230, recently wrote, cases like Gonzalez “challenge us to have a national conversation about tough questions involving free speech, content moderation, and online harms.” But, he argues, the decision should be up to the branch of government where the law originated.

“Perhaps Congress will determine that too many harms have proliferated under Section 230, and amend the statute to increase liability for algorithmically promoted content. Such a proposal would face its own set of costs and benefits, but it is a decision for Congress, not the courts.”

Meta is bringing Telegram-like ‘channels’ to Instagram

Meta has set its sights on copying a new messaging app: Telegram. Mark Zuckerberg just showed off “broadcast channels,” a new Instagram feature that brings one-way messaging to the app. The company is testing the feature with a handful of creators, and plans to bring the Telegram-like functionality to Facebook and Messenger as well.

Broadcast channels allow creators to stream updates to their followers’ inboxes, much like channels on Telegram. Those who join the channels are able to react to messages and vote in polls, but can’t participate in the conversation directly. For example, Mark Zuckerberg shared in his “Meta Channel” that he would use the space to “share news and updates on all the products and tech we’re building at Meta.” In addition to text updates, creators can also share audio clips, photos and other content. 

For now, it seems only Zuckerberg and about a dozen other creators have access to the feature. The initial group includes snowboarder Chloe Kim, Jiu-Jitsu fighter Mackenzie Dern, and meme account Tank Sinatra. The company says that others interested in using the feature can sign up to be considered for early access.

Though Meta describes channels as a “test,” the company seems to be fairly invested in the feature. Additional features, including the ability to add another creator to the chat and to conduct AMAs, are already in the works. Meta also plans to start testing the channels on Facebook and Messenger “in the coming months.”