
Meta will close a loophole in its doxxing policy in response to the Oversight Board

Meta has agreed to change some of its rules around doxxing in response to recommendations from the Oversight Board. The company had first asked the Oversight Board to help shape its rules last June, saying the policy was “significant and difficult.” The board followed up with 17 recommendations for the company in February, which Meta has now weighed in on.

Unlike decisions around whether specific posts should be taken down or left up, Meta is free to completely disregard policy proposals from the Oversight Board, but it is required to respond to each recommendation individually.

One of the most notable changes is that Meta agreed to end an exception to its existing rules that allowed users to post private residential information if it was “publicly available” elsewhere. The Oversight Board had pointed out that there was a significant difference between obtaining data from a public records request and a viral social media post.

In its response Friday, Meta agreed to remove the exception from its policy. “As the board notes in this recommendation, removing the exception for ‘publicly available’ private residential information may limit the availability of this information on Facebook and Instagram when it is still publicly available elsewhere,” the company wrote. “However, we recognize that implementing this recommendation can strengthen privacy protections on our platforms.” Meta added that the policy change would be implemented “by the end of the year.”

While the company ended one exception, it agreed to relax its policy on another issue. Meta said users would be able to share photos of the exterior of private homes “when the property depicted is the focus of the news story, except when shared in the context of organizing protests against the resident.” Likewise, the company also agreed that it would allow users to share addresses of “high ranking” government officials if the property is a publicly owned official residence, like those used by heads of state and ambassadors.

The policy changes could have a significant impact for people facing harassment, while also allowing some information to be shared in the context of news stories or protests against elected officials.

The board had also recommended Meta revamp the way that privacy violations are reported by users and how reports are handled internally. On the reporting front, Meta said it has already started experimenting with a simpler method for reporting privacy intrusions. Previously, users had to “click through two menus” and manually search for “privacy violation,” but now the option will appear without the extra search. Meta said it will have results from the experiment “later this month,” when it will decide whether to make the change permanent.

Notably, Meta declined to make another change that could make it easier for doxxing victims to get help more quickly. The company said that it would not act on a recommendation that it “create a specific channel of communications for victims of doxing” regardless of whether they are Facebook users. Meta noted that it’s already piloting some live chat help features, but said it “cannot commit to building a doxing-specific channel.”

Meta was also non-committal on a board recommendation that doxxing should be categorized as a “severe” violation resulting in a temporary suspension. The company said it was “assessing the feasibility” of the suggestion and “exploring ways to incorporate elements of this recommendation.”

In addition to the substance of the policy changes, Meta’s response to the Oversight Board in this case is notable because it represents the first time the company had asked for a policy advisory opinion, received recommendations and issued a response. Typically, the board weighs in on specific moderation decisions, which can then impact the underlying policies. But Meta can also ask for help shaping broader rules, like it did with doxxing. The company has also asked for help in creating rules around its controversial “cross check” system.

Facebook may crack down on Russian government accounts to fight disinformation

Facebook says it’s eyeing new ways to limit the influence of official Russian government accounts as it sees a surge in cyber espionage and “covert influence operations” tied to “government-linked actors” from Russia and Belarus.

Facebook’s security researchers shared the update as part of the company’s first quarterly threat report, which detailed its latest efforts to prevent its platform from being exploited amid Russia’s invasion of Ukraine.

During a call with reporters, Meta’s President of Public Policy Nick Clegg said that the company has seen an uptick in state-backed disinformation and other efforts to sow misinformation. “Since Russia's invasion of Ukraine, we've seen attacks on internet freedom and access to information intensified,” Clegg said. “It's manifested itself in two ways: One focus is on pushing state propaganda through state-run media, influence operations and espionage campaigns. And the other aimed at closing down the flow of credible information.”

Clegg added that the company is considering new steps to prevent official government accounts from spreading disinformation, but didn’t elaborate. Though Facebook has been demoting Russian state media outlets since March, the company hasn’t had a clear strategy for addressing misinformation and lies about the war from official government accounts. Up to now, it’s taken one-off actions against specific posts, like when an account belonging to Russia’s UK embassy falsely claimed a photo of a hospital bombing was staged.

Now Facebook is apparently considering how it can better prevent these accounts from spreading misinformation, said Clegg, who has previously been a vocal defender of Facebook’s policy against fact-checking politicians. “We are actively now reviewing additional steps to address misinformation and hoaxes coming from Russian government pages,” Clegg said.

Official pages are just one area of concern for Facebook though. In its report, Facebook security researchers detailed several influence operations and other campaigns to manipulate its platform in favor of pro-Russian interests and disinformation.

“For example, we detected and disrupted recidivist CIB [coordinated inauthentic behavior] activity linked to the Belarusian KGB who suddenly began posting in Polish and English about Ukrainian troops surrendering without a fight and the nation’s leaders fleeing the country on February 24, the day Russia began the war,” they wrote in the report. “On March 14, they pivoted back to Poland and created an event in Warsaw calling for a protest against the Polish government. We disabled the account and event that same day.”

The company also said it saw renewed activity from Ghostwriter, an entity that uses phishing attacks on email accounts to take over its targets’ social media accounts. Facebook previously said Ghostwriter targeted a handful of Ukrainian journalists, military officials and other public figures at the start of the war. This time, Ghostwriter “attempted to hack into the Facebook accounts of dozens of Ukrainian military personnel,” Facebook wrote. “In a handful of cases, they posted videos calling on the Army to surrender as if these posts were coming from the legitimate account owners. We blocked these videos from being shared.”

Facebook also spotted renewed activity from Russia’s Internet Research Agency, the troll farm behind Russia’s 2016 election interference campaign that’s made repeated attempts to get back on Facebook in recent years. Facebook said the group’s attempts to make new accounts on the platform were “unsuccessful” and appeared designed to drive traffic to a separate website that “blamed Russia’s attack on NATO and the West and accused Ukrainian forces of targeting civilians.”

Finally, Facebook also said it has removed “tens of thousands” of accounts, pages and groups for using spammy and misleading tactics in an attempt to profit off the war in Ukraine. These efforts included meme pages posing as on-the-ground reports from Ukraine as well as spammers trying to sell merch or lure people to outside websites for ad revenue.

Twitter appears to have quietly altered a key way deleted tweets can be preserved

Twitter might finally be delivering an edit button, but the company appears to have quietly altered a key way deleted tweets can be preserved. As writer Kevin Marks first pointed out, the company changed its embedded JavaScript so that the text of deleted tweets is no longer visible in embeds on outside websites.

Previously, the text of a deleted tweet was still visible on web pages on which it had been embedded, but now Twitter is using JavaScript to render the tweet as a blank white box. It might not seem like a major change on Twitter’s part, but it’s one that has significant implications. Tweets from public officials, celebrities and the general public are frequently embedded into news stories. Even if those tweets were later deleted, there was a clear record of what had been said.

Now, there are untold numbers of old articles where instead of a tweet there’s just a blank box without context. For example, tweets from former President Donald Trump were routinely cited by media organizations. Even after his account was permanently suspended, the text of those missives was still viewable on the sites where it had been embedded. Now, that’s no longer the case.

In Trump's case, there are extensive archives of those tweets. But that’s not the case for the majority of Twitter users, or even many public officials. And while it’s still technically possible to view the text by disabling JavaScript in your browser, it’s not a step most people would know how to take even if they knew the option existed.
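For context on why disabling JavaScript works: a Twitter embed is delivered to a page as a plain `<blockquote class="twitter-tweet">` containing the tweet text, which Twitter’s widgets.js script then replaces with the rendered version. A rough sketch of recovering that fallback text from a page’s HTML, using only Python’s standard library, might look like the following; the exact markup details are an assumption based on how Twitter’s oEmbed output has generally been structured, not something confirmed in this article.

```python
from html.parser import HTMLParser

class TweetFallbackParser(HTMLParser):
    """Collects the plain-text fallback inside <blockquote class="twitter-tweet">."""
    def __init__(self):
        super().__init__()
        self.inside = False   # True while we're within the embed blockquote
        self.chunks = []      # text fragments found inside it

    def handle_starttag(self, tag, attrs):
        # Twitter's oEmbed markup tags the fallback with class="twitter-tweet"
        if tag == "blockquote" and "twitter-tweet" in (dict(attrs).get("class") or ""):
            self.inside = True

    def handle_endtag(self, tag):
        if tag == "blockquote":
            self.inside = False

    def handle_data(self, data):
        if self.inside:
            self.chunks.append(data)

def tweet_fallback_text(page_html: str) -> str:
    """Return the fallback text of the first embedded tweet in a page, or ''."""
    parser = TweetFallbackParser()
    parser.feed(page_html)
    # Collapse whitespace so the fragments read as one line of text.
    return " ".join(" ".join(parser.chunks).split())
```

Because the fallback text lives in the page’s static HTML, it survives in archived copies of articles even after widgets.js starts blanking the deleted tweet on live pages.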

Twitter product manager Eleanor Harding told Marks the change was made “to better respect when people have chosen to delete their Tweets.” A spokesperson for Twitter declined to comment further on the change. 

Hey Kevin! We're doing this to better respect when people have chosen to delete their Tweets. Very soon it'll have better messaging that explains why the content is no longer available :) my DMs are open if you'd like to chat more about this

— Eleanor Harding (@tweetanor) March 29, 2022

Still, it’s a curious move because, as Marks points out in his post, Twitter’s original choice to maintain the text of deleted tweets was an intentional choice on the part of Twitter engineers. “If it's deleted, or 1000 years in the future, the text remains,” former Twitter engineer Ben Ward wrote in 2011 when embedding tweets was first announced.

That’s in line with statements from other Twitter executives over the years about the importance of Twitter as a kind of “public record.” For example, former CEO Jack Dorsey said in 2018 he was hesitant to build an edit button because it could erode Twitter’s ability to function as a public record. “It’s really critical that we preserve that,” he said at the time.

Facebook wants you to post Reels from third-party apps

Facebook is taking another step to encourage users to create original content for its TikTok clone. The company introduced a new “sharing to Reels” feature to allow users of third-party apps to post directly to Facebook Reels.

The update allows outside developers to add a “Reels button” to their app so users can post clips directly to Reels while taking advantage of Reels’ editing tools, Facebook wrote in a blog post. Initial developers to use the feature include Smule, which makes a popular karaoke app, and the video editing apps Vita and VivaVideo.

The move is yet another sign of the growing importance of Reels, and how Facebook has tried to borrow from the same playbook it used with Stories. Facebook has pushed Reels into nearly every part of its service in recent months just as it once did with Stories when the company viewed Snapchat as its chief rival. Now, with Facebook losing users to TikTok, Meta CEO Mark Zuckerberg has staked a lot on the success of Reels. He said last fall that Reels would be “as important for our products as Stories” and that reorienting its service to appeal to younger users was the company’s “North Star.”

But incentivizing users to post original content, not just ripped-off TikTok clips, has been somewhat of a challenge for the company. Instagram, which has had Reels the longest, said a year ago that it would stop promoting videos with other apps’ watermarks, but the service is still filled with recycled TikToks. Adding a “Reels” button to other content creation apps is unlikely to solve that overnight, but it could help bring in some fresh, non-TikTok-created clips.

Twitter confirms it will test an edit button

More than a decade and a half into its existence, Twitter has confirmed what was once unthinkable: an edit button is on the way. The company confirmed as much Tuesday, saying that it's "been working on an edit feature since last year."

The company was light on details, but it did share a mock-up of the feature, which it said it would test first with Twitter Blue subscribers. 

👀 pic.twitter.com/I13wE3eLdn

— Twitter Comms (@TwitterComms) April 5, 2022

Developing...

Twitter won’t let government-affiliated accounts tweet photos of POWs

Twitter is once again tightening its rules to address how its platform is handling the war in Ukraine. The company said Tuesday that it will no longer allow official government or government-affiliated accounts to tweet photos of prisoners of war “in the context of the war in Ukraine.”

The policy will apply to photos published "on or after April 5th," according to an update in Twitter's rules. Government accounts sharing such images will be required to delete them, said Yoel Roth, Twitter’s Head of Site Integrity. “Beginning today, we will require the removal of Tweets posted by government or state-affiliated media accounts which share media that depict prisoners of war in the context of the war in Ukraine,” Roth said.

“We’re doing so in line with international humanitarian law, and in consultation with international human rights groups. To protect essential reporting on the war, some exceptions apply under this guidance where there is a compelling public interest or newsworthy POW content.”

Beginning today, we will require the removal of Tweets posted by government or state-affiliated media accounts which share media that depict prisoners of war in the context of the war in Ukraine. https://t.co/WJ336RM8Gz.

— Yoel Roth (@yoyoel) April 5, 2022

In a blog post, the company added that in cases in which there is a “compelling public interest” for a government account to share photos of prisoners of war, it would add interstitial warnings to the images.

While the new rules apply to official government and government-affiliated accounts, Twitter noted that it will take down POW photos shared by anyone “with abusive intent, such as insults, calls for retaliation, mocking/taking pleasure in suffering of PoWs, or for any other behavior that violates the Twitter rules.”

Additionally, Twitter is taking new steps to limit the reach of Russian government accounts on its platform. Under a new policy, the company will no longer “amplify or recommend government accounts belonging to states that limit access to free information and are engaged in armed interstate conflict,” Roth said. “This measure drastically reduces the chance that people on Twitter see Tweets from these accounts unless they follow them.”

It’s not yet clear if or how Twitter plans to enforce this policy for contexts other than the war in Ukraine. In a blog post, the company left open the possibility that it would apply the rules to situations “beyond interstate armed conflict” but didn’t elaborate.

What does this mean?

We won’t recommend these accounts, and we won't amplify them across the Home Timeline, Explore, Search, and in other places on Twitter. This measure drastically reduces the chance that people on Twitter see Tweets from these accounts unless they follow them.

— Yoel Roth (@yoyoel) April 5, 2022

“Attempts by states to limit or block access to free information within their borders are uniquely harmful, and run counter to Twitter’s belief in healthy and open public conversation,” the company wrote. “We’re committed to treating conversations about global conflicts more equitably, and we’ll continue to evaluate whether this policy may be applied in other contexts, beyond interstate armed conflict.”

The changes are the latest way Russia’s invasion of Ukraine has forced Twitter to adapt its content moderation rules as it tries to suppress Russia-backed disinformation. The company has already taken steps to limit the visibility of Russian state media outlets and turned off advertising and recommendations in both Russia and Ukraine. Russia has blocked Twitter since March 4th.

Leaked document indicates Facebook may be underreporting images of child abuse

A training document used by Facebook’s content moderators raises questions about whether the social network is under-reporting images of potential child sexual abuse, The New York Times reports. The document reportedly tells moderators to “err on the side of an adult” when assessing images, a practice that moderators have taken issue with but company executives have defended.

At issue is how Facebook moderators should handle images in which the age of the subject is not immediately obvious. That decision can have significant implications, as suspected child abuse imagery is reported to the National Center for Missing and Exploited Children (NCMEC), which refers images to law enforcement. Images that depict adults, on the other hand, may be removed from Facebook if they violate its rules, but aren’t reported to outside authorities.

But, as The NYT points out, there isn’t a reliable way to determine age based on a photograph. Moderators are reportedly trained to use a more than 50-year-old method to identify “the progressive phases of puberty,” but the methodology “was not designed to determine someone’s age.” And, since Facebook’s guidelines instruct moderators to assume photos they aren’t sure of are adults, moderators suspect many images of children may be slipping through.

This is further complicated by the fact that Facebook’s contract moderators, who work for outside firms and don’t get the same benefits as full-time employees, may only have a few seconds to make a determination, and may be penalized for making the wrong call.

Facebook, which reports more child sexual abuse material to NCMEC than any other company, says erring on the side of adults is meant to protect users’ privacy and to avoid false reports that may hinder authorities’ ability to investigate actual cases of abuse. The company’s Head of Safety Antigone Davis told the paper that it may also be a legal liability for the company to make false reports. Notably, not every company shares Facebook’s philosophy on this issue. Apple, Snap and TikTok all reportedly take “the opposite approach” and report images when they are unsure of an age.

Facebook News Feed bug injected misinformation into users' feeds for months

A “bug” in Facebook’s News Feed ranking algorithm injected a “surge of misinformation” and other harmful content into users’ News Feeds between last October and March, according to an internal memo reported by The Verge. The unspecified bug, described by employees as a “massive ranking failure,” went unfixed for months and affected "as much as half of all News Feed views."

The problem affected Facebook’s News Feed algorithm, which is meant to down-rank debunked misinformation as well as other problematic and “borderline” content. But last fall, views on debunked misinformation began rising by “up to 30 percent,” according to the memo, while other content that was supposed to be demoted was not. “During the bug period, Facebook’s systems failed to properly demote nudity, violence, and even Russian state media the social network recently pledged to stop recommending in response to the country’s invasion of Ukraine,” according to the report.

More worrying is that Facebook engineers apparently realized something was very wrong — The Verge reports the problem was categorized as a “severe” vulnerability in October — but it went unfixed until March 11th because engineers were “unable to find the root cause.”

The incident underscores just how complex, and often opaque, Facebook’s ranking algorithms are even to its own employees. Whistleblower Frances Haugen has argued that issues like this one are evidence that the company needs to make its algorithms transparent to outside researchers or even move away from engagement-based ranking altogether.

A Facebook spokesperson confirmed to The Verge that the bug had been fixed, saying it “has not had any meaningful, long-term impact on our metrics.”

Still, the fact that it took Facebook so long to come up with a fix is likely to bolster calls for the company to change its approach to algorithmic ranking. The company recently brought back Instagram’s non-algorithmic feed partially in response to concerns about the impact its recommendations have on younger users. Meta is also facing the possibility of legislation that would regulate algorithms like the one used in News Feed.

Instagram will let you multitask while you DM

Instagram is making it a little easier to chat while browsing your feed. The app is adding a new multitasking feature that allows users to quickly respond to incoming messages without switching back and forth between their feed and the inbox.

With the change, new chats will appear at the top of your feed while browsing, and you can respond by tapping on the message. The app is also adding a shortcut to make sharing posts a little quicker. Instead of scrolling through a list of contacts, users can designate four friends that will appear as shortcuts when tapping and holding the share button.

The app is also adding a music sharing feature, so users can swap 30-second previews of songs from Spotify, Apple Music and Amazon Music.


Ever since Meta began merging Instagram and Messenger, Instagram’s in-app chat has been steadily getting more features — and becoming more like Messenger. Now, those who have opted to link their inboxes will also see a tray at the top of their DMs indicating which of their friends are currently online, much like the Messenger feature.

Instagram is also adding a Slack-style @silent shortcut, similar to the update Messenger showed off earlier this week. Adding @silent to messages allows them to be delivered without triggering a notification. (It’s not clear if Instagram plans to adopt more of the “command” shortcuts in the future.) And, finally, group chats are getting a polling feature of their own, so polls will no longer be limited just to Stories.

As is often the case, these updates won’t be available everywhere all at once. The company says the new features “are available in select countries, with plans to expand globally,” but didn’t elaborate on which countries, or how long it would take for the changes to reach everyone.

Apple, Facebook and Discord reportedly gave user data to hackers posing as law enforcement

Apple, Facebook and Discord turned over user data to hackers posing as law enforcement officials, according to a new report in Bloomberg. The demands, which were forged to look like authentic legal requests, reportedly came from legitimate email accounts that had been “compromised.”

According to Bloomberg, both Facebook and Apple turned over “basic subscriber details, such as a customer’s address, phone number and IP address.” Discord provided “the Internet address history of Discord accounts tied to a specific phone number,” according to Krebs on Security. The hackers also targeted Snap, though it’s not clear if the company actually turned over the requested data.

As Bloomberg points out, it’s not uncommon for companies like Apple and Facebook to turn over data to law enforcement, and these companies have dedicated teams to respond to such requests. Typically, these requests are accompanied by a court order, but there are “emergency” cases when law enforcement asks for data without one, like when someone’s life is believed to be in danger.

In this case, the hackers exploited this emergency process to access personal information about specific targets in order to “facilitate financial fraud schemes.” Using hacked emails tied to legitimate law enforcement personnel, they were able to successfully fool the companies into handing over the data.

In a statement to Bloomberg, Meta spokesperson Andy Stone said that the company has safeguards in place to verify legal requests and detect abuse. “We block known compromised accounts from making requests and work with law enforcement to respond to incidents involving suspected fraudulent requests, as we have done in this case,” Stone said.

Apple and Snap also pointed to company guidelines, saying they have policies to verify the legitimacy of requests for user data. But these safeguards can fall short if the requests appear to be from emails associated with legitimate law enforcement agencies. As Discord told Krebs on Security:

“We can confirm that Discord received requests from a legitimate law enforcement domain and complied with the requests in accordance with our policies. We verify these requests by checking that they come from a genuine source, and did so in this instance. While our verification process confirmed that the law enforcement account itself was legitimate, we later learned that it had been compromised by a malicious actor. We have since conducted an investigation into this illegal activity and notified law enforcement about the compromised email account.”

Interestingly, security researchers have reportedly tied some of the people involved in this scheme to another high-profile hacking group: Lapsus$, whose members allegedly hacked Microsoft and Okta. According to Bloomberg, one person involved with forging the requests is also “believed to be the mastermind behind the cybercrime group Lapsus$.”