
Facebook has banned 3,000 accounts for COVID-19 and vaccine misinformation

Since the start of the coronavirus pandemic, Facebook has taken a much tougher stance on health misinformation than it had in the past, removing millions of posts in the process. Now, we know just how many accounts, groups and pages have been banned from the platform for repeatedly breaking those rules: 3,000.

Facebook shared the stat as part of its community standards enforcement report, which measures how the company enforces its rules. The number may seem low given the vast amount of misinformation on Facebook about the pandemic and the vaccines. The company also said that more than 20 million pieces of content have been removed and more than 190 million have received warning labels between the start of the pandemic in 2020 and this past June.

But the relatively low number of bans — just 3,000 — tracks with findings by researchers who say that just a few individuals are responsible for the vast majority of vaccine mistruths on social media.

During a call with reporters, Facebook’s VP of Content Policy, Monika Bickert, said the company has had to continually evolve its policies, and that it now removes 65 types of vaccine falsehoods, such as posts saying COVID-19 shots cause magnetism. She also noted that some groups have used “coded language” to try to evade the company’s detection, which can pose a challenge.

Facebook’s handling of vaccine misinformation has been in the spotlight in recent months as government officials, including President Joe Biden, have said Facebook should do more to counter mistruths about the COVID-19 vaccines. For its part, Facebook says that vaccine hesitancy has declined by 50 percent in the US, according to its surveys, and that its COVID-19 Information Center has reached 2 billion people.

Facebook’s first ‘widely viewed content’ report argues political content isn’t actually popular

Facebook really wants people to know that its most popular content isn’t political, and it’s releasing a new report to try to prove it. The social network released its first-ever “Widely Viewed Content Report,” documenting what it claims is the most-viewed content on its platform during the second quarter of 2021.

The report is Facebook’s rebuttal to commonly cited data that indicates posts from polarizing figures are consistently among the best-performing on the platform. Data from Facebook-owned CrowdTangle, an analytics platform, commonly shows that posts from conservative figures and outlets like Newsmax, Fox News, Ben Shapiro and Dan Bongino get the most engagement.

Notably, none of those names appear in Facebook’s latest report, which included lists of top-performing domains, pages and specific public posts it said attracted the most eyeballs. Among the top domains were YouTube, Amazon, Unicef, GoFundMe, Spotify and TikTok. The most widely-viewed links included a website for an organization associated with former Green Bay Packers football players. That page drew more than 87 million viewers, according to Facebook. An online storefront for CBD products was #2 on the list, with 72 million views. Also on the list was a cat GIF from Tumblr with just over 49 million views.

When it comes to the most-viewed public pages, Facebook’s list included Unicef, as well as animal site The Dodo, LADbible, Sassy Media and other publishers that have built media companies off of viral Facebook content. Notably, most of the content Facebook put forward in its report didn’t appear to be overtly political. The most-viewed individual posts were all text-based memes encouraging users to answer light-hearted (and mostly boring) questions.

It’s not the first time Facebook has tried to counter perceptions that its most popular content is polarizing or political. The company says it will release the “widely viewed content” report on a regular basis to help people track what type of content is being seen the most.

“There's a few gaps in the data that's being used today, and the narrative that has emerged is quite simply wrong,” Facebook's VP of Integrity, Guy Rosen, said during a call with reporters. “CrowdTangle is focused on interaction, CrowdTangle only has a limited set of certain pages, groups, and accounts. We are creating a report that provides a broad view and … an accurate representation of what people’s experiences actually are on our platform.”

Developing...

Twitter's latest experiment is a tool for reporting 'misleading' tweets

A new test from Twitter will finally allow users to report “misleading” tweets. The company says it’s testing the feature for “some people” in the US, South Korea and Australia. Though only an experiment, it’s a significant step for Twitter, which has previously offered limited tools for reporting misinformation on its service.

With the change, though, users will now be able to report political and health misinformation, with sub-categories for election- and COVID-19-related tweets, according to The Verge. That tracks with other fact-checking and misinformation-busting efforts Twitter has made over the past year and a half. The company has previously introduced labels and PSAs to debunk health and election misinformation on its platform.

We're assessing if this is an effective approach so we’re starting small. We may not take action on and cannot respond to each report in the experiment, but your input will help us identify trends so that we can improve the speed and scale of our broader misinformation work.

— Twitter Safety (@TwitterSafety) August 17, 2021

At the moment, it’s not clear how reported tweets will be handled. Unlike Facebook, which uses a large network of fact checkers to debunk falsehoods, Twitter’s fact-checking initiatives have been more narrowly focused. In a tweet, the company said that users shouldn’t expect a response to every report, but that the reports “will help us identify trends so that we can improve the speed and scale of our broader misinformation work.”

Tinder will make ID verification available to all users

Tinder plans to make ID verification — a feature currently only available in Japan — accessible to all its users, the company says.

For now, we have few details about how this update, which is expected to roll out in “the coming quarters,” will work. In a statement, Tinder said ID verification would “begin as voluntary” and that it would consult with experts to determine “what documents are most appropriate in each country,” and other details.

Tinder execs noted that ID verification can be a “complex and nuanced” issue, and that some users “might have compelling reasons that they can’t or don’t want to share their real-world identity with an online platform.” The company said it’s working with various experts to ensure its approach is “equitable.”

ID verification would be the latest safety update for the dating app, which has added other features meant to reassure users about their potential matches’ identities. The company has also implemented an anti-catfishing feature, and Tinder parent company Match Group partnered with background check platform Garbo earlier this year.

Why is Facebook so bad at countering vaccine misinformation?

It’s been six months since Facebook announced a major reversal to its policies on vaccine misinformation. Faced with a rising tide of viral rumors and conspiracy theories, the company said it would start removing vaccine mistruths from its platform. Notably, the effort encompassed not only content about COVID-19 vaccines, but all vaccines. That includes many of the kinds of claims it had long allowed, like those linking vaccines and autism, or statements that vaccines are “toxic” or otherwise dangerous.

The move was widely praised, as disinformation researchers and public health officials have long urged Facebook and other platforms to treat vaccine misinformation more aggressively. Since then, the company has banned some prominent anti-vaxxers, stopped recommending health-related groups and shown vaccine-related PSAs across Facebook and Instagram. It now labels any post at all that mentions COVID-19 vaccines, whether factual or not.

Yet, despite these efforts, vaccine misinformation is still an urgent problem, and public health officials say Facebook and other social media platforms aren’t doing enough to address it. Last month, the Surgeon General issued an advisory warning of the dangers of health misinformation online. The accompanying 22-page report didn’t call out any platforms by name, but it highlighted algorithmic amplification and other issues commonly associated with Facebook. The following day, President Joe Biden made headlines when he said that misinformation on Facebook was “killing people.”

While Facebook has pushed back, citing its numerous efforts to quash health misinformation during the pandemic, the company’s past lax approach to vaccine misinformation has likely made that job much more difficult. In a statement, a Facebook spokesperson said vaccine hesitancy has decreased among its users in the US, but the company has also repeatedly rebuffed requests for more data that could shed light on just how big the problem really is.

“Since the beginning of the pandemic, we have removed 18 million pieces of COVID misinformation, labeled hundreds of millions of pieces of COVID content rated by our fact-checking partners, and connected over 2 billion people with authoritative information through tools like our COVID information center,” a Facebook spokesperson told Engadget. “The data shows that for people in the US on Facebook, vaccine hesitancy has declined by 50% since January, and acceptance is high. We will continue to enforce against any account or group that violates our COVID-19 and vaccine policies and offer tools and reminders for people who use our platform to get vaccinated.”

Facebook’s pandemic decision

Throughout the pandemic, Facebook has moderated health misinformation much more aggressively than it has in the past. Yet for the first year of the pandemic, the company made a distinction between coronavirus misinformation — e.g., statements about fake cures or disputing the effectiveness of masks, which it removed — and vaccine conspiracy theories, which it said did not break the company’s rules. Mark Zuckerberg even said that he would be reluctant to moderate vaccine misinformation the same way the company has with COVID misinformation.

That changed this year, with the advent of COVID-19 vaccines and the rising tide of misinformation and vaccine hesitancy that accompanied them, but the damage may have already been done. A peer-reviewed study published in Nature in February found that exposure to misinformation about the COVID-19 vaccines “lowers intent to accept a COVID-19 vaccine” by about 6 percent.

People are also more likely to be unvaccinated if they primarily get their news from Facebook, according to a July report from the COVID States Project. The researchers sampled more than 20,000 adults in all 50 states and found that those who cited Facebook as a primary news source were less likely to be vaccinated. While the authors note that it doesn’t prove that using Facebook affects someone’s choice to get vaccinated, they found a “surprisingly strong relationship” between the two.

“If you rely on Facebook to get news and information about the coronavirus, you are substantially less likely than the average American to say you have been vaccinated,” they write. “In fact, Facebook news consumers are less likely to be vaccinated than people who get their coronavirus information from Fox News. According to our data, Facebook users were also among the most likely to believe false claims about coronavirus vaccines.”

The researchers speculate that this could be because people who spend a lot of time on Facebook are less likely to trust the government, the media or other institutions. Or, it could be that spending time on the platform contributed to that distrust. While there’s no way to know for sure, we do know that Facebook has for years been an effective platform for spreading disinformation about vaccines.

A spotty record

Doctors and researchers have warned for years that Facebook wasn’t doing enough to prevent lies about vaccines from spreading. Because of this, prominent anti-vaxxers have used Facebook and Instagram to spread their message and build their followings.

A report published earlier this year by the Center for Countering Digital Hate (CCDH) found that more than half of all vaccine misinformation online could be linked to 12 individuals who are part of a long-running, and often coordinated, effort to undermine vaccines. But while the company has banned some accounts, some of those individuals still have a presence on a Facebook-owned platform, according to the CCDH. Facebook has disputed the findings of that report, which relied on analytics from the company's CrowdTangle tool. But the social network’s own research into vaccine hesitancy indicated “a small group appears to play a big role” in undermining vaccines, The Washington Post reported in March.

There are other issues, too. For years, Facebook’s search and recommendation algorithms have made it extraordinarily easy for users to fall into rabbit holes of misinformation. Simply searching the word “vaccine” would be enough to surface recommendations for accounts spreading conspiracy theories and other vaccine disinformation.

Engadget reported last year that Instagram’s algorithmic search results associated anti-vaccine accounts with COVID-19 conspiracies and QAnon content. More than a year later, a recent study from Avaaz found that although this type of content no longer appears at the top of search results, Facebook’s recommendation algorithms continue to recommend pages and groups that promote misinformation about vaccines. In their report, researchers document how users can fall into misinformation “rabbit holes” by liking seemingly innocuous pages or searching for “vaccines.” They also found that Facebook’s page recommendation algorithm appeared to associate vaccines and autism.

“Over the course of two days, we used two new Facebook accounts to follow vaccine-related pages that Facebook suggested for us. Facebook’s algorithm directed us to 109 pages, with 1.4M followers, containing anti-vaccine content — including pages from well-known anti-vaccine advocates and organizations such as Del Bigtree, Dr. Ben Tapper, Dr. Toni Bark, Andrew Wakefield, Children's Health Defense, Learn the Risk, and Dr. Suzanne Humphries. Many of the pages the algorithm recommended to us carried a label, warning that the page posts about COVID-19 or vaccines, giving us the option to go directly to the CDC website. The algorithm also recommended 10 pages related to autism — some containing anti-vaccine content, some not — suggesting that Facebook’s algorithm associates vaccines with autism, a thoroughly debunked link that anti-vaccine advocates continue to push.”

Facebook has removed some of these pages from its recommendations, though it’s not clear which. Avaaz points out that there’s no way to know why Facebook’s recommendation algorithm surfaces the pages it does as the company doesn’t disclose how these systems work. Yet it’s notable because content associating vaccines with autism is exactly one of the claims that Facebook said it would ban under its stricter misinformation rules during the pandemic. That Facebook’s suggestions are intermingling the topics is, at the very least, undermining those efforts.

Claims and counterclaims

Facebook has strongly opposed these claims. The company repeatedly points to its messaging campaign around COVID-19 vaccines, noting that more than 2 billion people have viewed its COVID-19 and vaccine PSAs.

In a blog post responding to President Biden’s comments last month, Facebook’s VP of Integrity Guy Rosen argued that “vaccine acceptance among Facebook users in the US has increased.” He noted that the company has “reduced the visibility of more than 167 million pieces of COVID-19 content debunked by our network of fact-checking partners so fewer people see it.”

He didn’t share, however, how much of that misinformation was about vaccines, or details on the company’s enforcement of its more general vaccine misinformation rules. That’s likely not an accident. The company has repeatedly resisted efforts that could shed light on how misinformation spreads on its platform.

Facebook executives declined a request from their data scientists who asked for additional resources to study COVID-19 misinformation at the start of the pandemic, according to The New York Times. It’s not clear why the request was turned down, but the company has also pushed back on outsiders’ efforts to gain insight into health misinformation.

Facebook has declined to share the results of an internal study on vaccine hesitancy on its platform, according to Washington DC Attorney General Karl Racine’s office, which has launched a consumer protection investigation into the company’s handling of vaccine misinformation.

“Facebook has said it’s taking action to address the proliferation of COVID-19 vaccine misinformation on its site,” a spokesperson said. “But then when pressed to show its work, Facebook refused.”

The Biden Administration has also — unsuccessfully — pushed Facebook to be more forthcoming about vaccine misinformation. According to The New York Times, administration officials have met repeatedly with Facebook and other platforms as part of an effort to curb misinformation about coronavirus vaccines. Yet when a White House official asked Facebook to share “how often misinformation was viewed and spread,” the company refused. According to The Times, Facebook responded to some requests for information by talking about vaccine promotion strategies, such as its PSAs or its tool to help users book vaccine appointments.

One issue is that it’s not always easy to define what is, and isn’t, misinformation. Factual information, like news stories or personal anecdotes about vaccine side effects, can be shared with misleading commentary. This, Facebook has suggested, makes it difficult to study the issue in the way that many have asked. At the same time, Facebook is a notoriously data-driven company. It’s constantly testing even the smallest features, and it employs scores of researchers and data scientists. It’s difficult to believe that learning more about vaccine hesitancy and how misinformation spreads is entirely out of reach.

Facebook delays office re-opening to January 2022

Facebook employees will be working from home at least until the end of the year. The company has pushed back its plans to bring US employees back to the office amid concerns about rising COVID-19 cases driven by the delta variant. The company had said earlier this summer that it planned to reopen US offices at 50 percent capacity by September. But that timeline has now been pushed back to January as cases have risen.

“Data, not dates, is what drives our approach for returning to the office,” the company said in a statement to CNBC. “Given the recent health data showing rising Covid cases based on the delta variant, our teams in the U.S. will not be required to go back to the office until January 2022. We expect this to be the case for some countries outside of the US, as well. We continue to monitor the situation and work with experts to ensure our return to office plans prioritize everyone’s safety.”

Facebook made headlines in early 2020 for being among the first companies to close its offices, well before many areas instituted their own lockdown policies. The company has also said that it will require its US employees to be vaccinated against COVID-19. Whenever it does reopen, it’s likely that work will look and feel much different for many employees. The company has said it will embrace remote work going forward, with Mark Zuckerberg saying that as much as 50 percent of the company’s workforce could work remotely within the next five to 10 years.

Why Apple's child safety updates are so controversial

Last week, Apple previewed a number of updates meant to beef up child safety features on its devices. Among them: a new technology that can scan the photos on users’ devices in order to detect child sexual abuse material (CSAM). Though the change was widely praised by some lawmakers and child safety advocates, it prompted immediate pushback from many security and privacy experts, who say the update amounts to Apple walking back its commitment to putting user privacy above all else.

Apple has disputed that characterization, saying that its approach balances both privacy and the need to do more to protect children by preventing some of the most abhorrent content from spreading more widely.

What did Apple announce?

Apple announced three separate updates, all of which fall under the umbrella of “child safety.” The most significant — and the one that’s gotten the bulk of the attention — is a feature that will scan iCloud Photos for known CSAM. The feature, which is built into iCloud Photos, compares a user’s photos against a database of previously identified material. If a certain number of those images is detected, it triggers a review process. If the images are verified by human reviewers, Apple will suspend that iCloud account and report it to the National Center for Missing and Exploited Children (NCMEC).

Apple also previewed new “communication safety” features for the Messages app. That update enables the Messages app to detect when sexually explicit photos are sent or received by children. Importantly, this feature is only available for children who are part of a family account, and it’s up to parents to opt in.


If parents do opt into the feature, they will be alerted if a child under the age of 13 views one of these photos. For children older than 13, the Messages app will show a warning upon receiving an explicit image, but won’t alert their parents. Though the feature is part of the Messages app, and separate from the CSAM detection, Apple has noted that the feature could still play a role in stopping child exploitation, as it could disrupt predatory messages.

Finally, Apple is updating Siri and its search capabilities so that it can “intervene” in queries about CSAM. If someone asks how to report abuse material, for example, Siri will provide links to resources to do so. If it detects that someone might be searching for CSAM, it will display a warning and surface resources to provide help.

When is this happening and can you opt out?

The changes will be part of iOS 15, which will roll out later this year. Users can effectively opt out by disabling iCloud Photos (instructions for doing so can be found here). However, anyone disabling iCloud Photos should keep in mind that doing so could affect their ability to access photos across multiple devices.

So how does this image scanning work?

Apple is far from the only company that scans photos to look for CSAM. Apple’s approach to doing so, however, is unique. The CSAM detection relies on a database of known material, maintained by NCMEC and other safety organizations. These images are “hashed” (Apple’s official name for this is NeuralHash) — a process that converts images to a numerical code that allows them to be identified, even if they are modified in some way, such as cropping or making other visual edits. As previously mentioned, CSAM detection only functions if iCloud Photos is enabled. What’s notable about Apple’s approach is that rather than matching the images once they’ve been sent to the cloud — as most cloud platforms do — Apple has moved that process to users’ devices.
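
Apple hasn’t published NeuralHash’s internals, but it belongs to a well-understood family of techniques known as perceptual hashing. As a rough illustration of the property described above, here is a minimal Python sketch using a classic “average hash”; the algorithm, bit size and match threshold are stand-ins, not Apple’s, and the only point is that visually similar images yield nearly identical fingerprints.

```python
# A minimal perceptual-hash sketch (a classic "average hash"), purely to
# illustrate the general technique. NeuralHash itself is a neural-network
# hash whose details Apple has not published; nothing here is Apple's code.
from PIL import Image  # Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Shrink and grayscale the image, then threshold each pixel against
    the mean to produce a 64-bit fingerprint."""
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (1 if px > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two fingerprints differ."""
    return bin(a ^ b).count("1")

# Small edits (cropping, re-encoding, color tweaks) change only a few bits,
# so near-identical hashes indicate the same underlying image.
MATCH_THRESHOLD = 5  # bits; an illustrative value, not an Apple parameter

def is_match(hash_a: int, hash_b: int) -> bool:
    return hamming_distance(hash_a, hash_b) <= MATCH_THRESHOLD
```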


Here’s how it works: Hashes of the known CSAM are stored on the device, and on-device photos are compared to those hashes. The iOS device then generates an encrypted “safety voucher” that’s sent to iCloud along with the image. If an account reaches a certain threshold of CSAM matches, Apple can decrypt the safety vouchers and conduct a manual review of those images. Apple isn’t saying what the threshold is, but has made clear a single image wouldn’t result in any action.
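
Apple’s actual scheme wraps this logic in cryptography (threshold secret sharing), so the server can read nothing until enough matches accumulate. Setting the cryptography aside, a simplified sketch of the counting behavior described above might look like the following; the threshold value, data structures and function names are all hypothetical.

```python
# A simplified model of the voucher-and-threshold flow described above.
# This is not Apple's implementation: the real system encrypts each voucher
# so nothing is readable server-side below the threshold, and THRESHOLD is
# a made-up number; Apple has not disclosed the real one.
THRESHOLD = 30  # hypothetical; Apple says only that one match is never enough

KNOWN_CSAM_HASHES: set[int] = set()  # stand-in for the NCMEC-derived database

def upload_photo(account: dict, photo_hash: int) -> None:
    """Attach a 'safety voucher' to each upload recording whether it matched."""
    matched = photo_hash in KNOWN_CSAM_HASHES  # real matching tolerates near-matches
    account.setdefault("vouchers", []).append({"hash": photo_hash, "matched": matched})
    if sum(v["matched"] for v in account["vouchers"]) >= THRESHOLD:
        flag_for_human_review(account)

def flag_for_human_review(account: dict) -> None:
    # Per Apple's description, only at this point can the vouchers be
    # decrypted and the matched images reviewed by a person.
    account["pending_review"] = True
```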

Apple also published a detailed technical explanation of the process here.

Why is this so controversial?

Privacy advocates and security researchers have raised a number of concerns. One of these is that this feels like a major reversal for Apple, which five years ago refused the FBI’s request to unlock a phone and has put up billboards stating “what happens on your iPhone stays on your iPhone.” To many, the fact that Apple created a system that can proactively check your images for illegal material and refer them to law enforcement feels like a betrayal of that promise.

In a statement, the Electronic Frontier Foundation called it “a shocking about-face for users who have relied on the company’s leadership in privacy and security.” Likewise, Facebook — which has spent years taking heat from Apple over its privacy missteps — has taken issue with the iPhone maker’s approach to CSAM. WhatsApp chief Will Cathcart described it as “an Apple built and operated surveillance system.”

More specifically, there are real concerns that once such a system is created, Apple could be pressured — either by law enforcement or governments — to look for other types of material. While CSAM detection is only going to be in the US to start, Apple has suggested it could eventually expand to other countries and work with other organizations. It’s not difficult to imagine scenarios where Apple could be pressured to start looking for other types of content that’s illegal in some countries. The company’s concessions in China — where Apple reportedly “ceded control” of its data centers to the Chinese government — are cited as proof that the company isn’t immune to the demands of less-democratic governments.

There are other questions, too, like whether it’s possible for someone to abuse this process by maliciously planting CSAM on someone’s device in order to get their iCloud account suspended, or whether a false positive or some other scenario could result in someone being incorrectly flagged by the company’s algorithms.

What does Apple say about this?

Apple has strongly denied that it’s degrading privacy or walking back its previous commitments. The company published a second document in which it tries to address many of these claims.

On the issue of false positives, Apple has repeatedly emphasized that it is only comparing users’ photos against a collection of known child exploitation material, so images of, say, your own children won’t trigger a report. Additionally, Apple has said that the odds of a false positive are around one in a trillion when you factor in the fact that a certain number of images must be detected in order to even trigger a review. Crucially, though, Apple is basically saying we just have to take its word on that. As Facebook’s former security chief Alex Stamos and security researcher Matthew Green wrote in a joint New York Times op-ed, Apple hasn’t provided outside researchers with much visibility into how all this actually works.
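
The one-in-a-trillion figure is Apple’s own and can’t be verified from the outside, but the reasoning behind a threshold is straightforward tail probability: requiring several independent matches makes an account-level false positive collapse geometrically. A back-of-envelope sketch, with every number invented for illustration:

```python
# Back-of-envelope math for why a match threshold crushes the account-level
# false-positive rate. Both inputs below are invented for illustration;
# Apple has published neither a per-image false-match rate nor the threshold.
from math import exp, factorial

def account_flag_probability(per_image_rate: float, n_photos: int, threshold: int) -> float:
    """P(an innocent library accrues >= threshold false matches), via a
    Poisson approximation (valid for small rates and many photos)."""
    lam = per_image_rate * n_photos  # expected number of false matches
    return sum(exp(-lam) * lam**k / factorial(k) for k in range(threshold, threshold + 50))

# A hypothetical one-in-a-million per-image false-match rate over 10,000 photos:
print(account_flag_probability(1e-6, 10_000, threshold=1))   # ~0.01: a lone match is plausible
print(account_flag_probability(1e-6, 10_000, threshold=10))  # ~3e-27: vanishingly rare
```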

Apple further says that its manual review, which relies on human reviewers, would be able to detect if CSAM was on a device as the result of some kind of malicious attack.

When it comes to pressure from governments or law enforcement agencies, the company has basically said that it would refuse to cooperate with such requests. “We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands,” it writes. “We will continue to refuse them in the future. Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government’s request to expand it.” Although, once again, we kind of just have to take Apple at its word here.

If it’s so controversial, why is Apple doing it?

The short answer is that the company believes this strikes the right balance between increasing child safety and protecting privacy. CSAM is illegal and, in the US, companies are obligated to report it when they find it. As a result, CSAM detection features have been baked into popular services for years. But unlike other companies, Apple hasn’t checked for CSAM in users’ photos, largely due to its stance on privacy. Unsurprisingly, this has been a major source of frustration for child safety organizations and law enforcement.

To put this in perspective: in 2019, Facebook reported 65 million instances of CSAM on its platform, according to The New York Times. Google reported 3.5 million photos and videos, while Twitter and Snap reported “more than 100,000.” Apple, on the other hand, reported 3,000 photos.

That’s not because child predators don’t use Apple services, but because Apple hasn’t been nearly as aggressive as some other platforms in looking for this material, and its privacy features have made it difficult to do so. What’s changed now is that Apple says it’s come up with a technical means of detecting collections of known CSAM in iCloud Photos libraries that still respects users’ privacy. Obviously, there’s a lot of disagreement over the details and whether any kind of detection system can truly be “private.” But Apple has calculated that the tradeoff is worth it. “If you’re storing a collection of CSAM material, yes, this is bad for you,” Apple’s head of privacy told The New York Times. “But for the rest of you, this is no different.”

Facebook caught a marketing firm paying influencers to criticize COVID-19 vaccines

Facebook has banned a marketing firm for its involvement in a disinformation campaign that used influencers and fake accounts to undermine COVID-19 vaccines. The company removed 65 Facebook accounts and 243 Instagram accounts associated with the campaign, which also recruited unwitting influencers to boost its message.

According to Facebook, the network “originated in Russia,” but was linked to Fazze, a subsidiary of a UK-registered marketing firm that operates from Russia. The accounts primarily targeted India and Latin America, though the United States was also targeted “to a much lesser extent.” The campaign came in “two distinct waves,” according to Facebook.

“First, in November and December 2020, the network posted memes and comments claiming that the AstraZeneca COVID-19 vaccine would turn people into chimpanzees,” the company wrote in a report. “Five months later, in May 2021, it questioned the safety of the Pfizer vaccine by posting an allegedly hacked and leaked AstraZeneca document.” Facebook didn’t speculate on who hired Fazze or what their motive was, but Ben Nimmo, the company’s Global Threat Intelligence Lead for Influence Operations, noted that the activity “coincided roughly with times when regulators and some of the target countries were discussing emergency authorization for each vaccine.”

Ultimately, the campaign was “sloppy” with “quite low” engagement, according to Nimmo. The exception was the paid posts from legitimate influencers who got caught up in the campaign, as those posts “attracted some limited attention.” However, it was influencers who exposed the campaign, after a handful publicly disclosed that Fazze had offered to pay them to “claim that Pfizer’s Covid-19 vaccine is deadly,” according to The New York Times.

While Facebook regularly publishes details about inauthentic behavior and foreign interference on its platform, this is one of the first such reports to center on COVID-19 vaccines. The topic has become a thorny issue for Facebook, as officials have blamed social media platforms for not doing enough to prevent vaccine disinformation from spreading.

In a call with reporters, Facebook’s Head of Security Policy, Nathaniel Gleicher, said that the effort, though unsuccessful, highlights how disinformation campaigns are evolving. “Influence operations increasingly span many platforms and target influential voices because running successful campaigns with large numbers of fake accounts on a single network has become harder and harder,” he said.

WhatsApp head says Apple's child safety update is a 'surveillance system'

One day after Apple confirmed plans for new software that will allow it to detect images of child abuse on users’ iCloud photos, Facebook’s head of WhatsApp says he is “concerned” by the plans.

In a thread on Twitter, Will Cathcart called it an “Apple built and operated surveillance system that could very easily be used to scan private content for anything they or a government decides it wants to control.” He also raised questions about how such a system may be exploited in China or other countries, or abused by spyware companies.

A spokesperson for Apple disputed Cathcart's characterization of the software, noting that users can choose to disable iCloud Photos. Apple has also said that the system is only trained on a database of “known” images provided by the National Center for Missing and Exploited Children (NCMEC) and other organizations, and that it wouldn’t be possible to make it work in a regionally-specific way since it’s baked into iOS.

This is an Apple built and operated surveillance system that could very easily be used to scan private content for anything they or a government decides it wants to control. Countries where iPhones are sold will have different definitions on what is acceptable.

— Will Cathcart (@wcathcart) August 6, 2021

Can this scanning software running on your phone be error proof? Researchers have not been allowed to find out. Why not? How will we know how often mistakes are violating people’s privacy?

— Will Cathcart (@wcathcart) August 6, 2021

It’s not surprising that Facebook would take issue with Apple’s plans. Apple has spent years bashing Facebook over its record on privacy, even as the social network has embraced end-to-end encryption. More recently, the companies have clashed over privacy updates that have hindered Facebook’s ability to track its users, an update the company has said will hurt its advertising revenue.

FTC rebukes Facebook for ‘misleading’ comments about NYU researchers

Earlier this week, Facebook followed through on its threats to cut a group of New York University researchers off from its platform. The researchers were part of a project called the Ad Observatory, which recruited volunteers to study how Facebook targets political ads on its platform.

In its decision to ban the researchers, Facebook repeatedly referred to its obligations to the FTC, saying it was acting against the researchers “in line with our privacy program under the FTC Order” — a reference to the company’s 2019 settlement with the agency over lax privacy practices. But the social network’s actions were roundly criticized by the research community and free speech advocates, who said the company was blocking legitimate research under the pretext of preventing “scraping.” As Wired pointed out, the company’s agreement with the FTC doesn’t even prohibit what the researchers were actually doing.

Now, the FTC has weighed in, calling the company’s explanation of its actions “misleading” and “inaccurate.” In a sharply worded letter addressed to Mark Zuckerberg, Acting Director of the Bureau of Consumer Protection Samuel Levine said that he was “disappointed by how your company has conducted itself in this matter.”

“The FTC is committed to protecting the privacy of people, and efforts to shield targeted advertising practices from scrutiny run counter to that mission,” Levine wrote. “Had you honored your commitment to contact us in advance, we would have pointed out that the consent decree does not bar Facebook from creating exceptions for good-faith research in the public interest. Indeed, the FTC supports efforts to shed light on opaque business practices, especially around surveillance-based advertising. While it is not our role to resolve individual disputes between Facebook and third parties, we hope that the company is not invoking privacy – much less the FTC consent order – as a pretext to advance other aims.”

Facebook didn’t immediately respond to a request for comment.