What’s in the Facebook Papers and what it means for the company

Facebook (and now, Meta) might just be experiencing its most sustained and intense bout of bad press ever, thanks to whistleblower Frances Haugen and the thousands of documents she spirited out of the company.

The Wall Street Journal was the first publication to report on the contents of the documents, which have also been turned over to the Securities and Exchange Commission. Since then, the documents have made their way into the hands of more than a dozen publications that formed “a consortium,” much to the dismay of Facebook’s PR department.

There have now been more than a hundred stories based on the documents. And while many of those stories reference the same source material, the details they surface are significant. As important as they are, though, the sheer volume of information is dizzying. There are detailed reports written by the company's researchers, free-form notes and memos, as well as comments and other posts in Workplace, the internal version of Facebook used by its employees.

This mix of sources, together with the fact that the consortium has not released most of the documents to researchers or other journalists, makes the Facebook Papers difficult to parse. Gizmodo has been publishing some of the underlying documents, but new revelations could be trickling out for weeks or months as the material becomes more widely distributed.

But amid all that noise, a few key themes have emerged, many of which have also been backed up by prior reporting on the company and its policies. This article will detail Haugen’s disclosures, and additional details that have arisen from reporting on the Facebook Papers. We'll continue to update it as fresh allegations emerge.

Facebook allowed politics to influence its decisions

This likely won’t be a surprise to anyone who has followed Facebook over the last five years or so, but the Facebook Papers add new evidence to years-long allegations that Mark Zuckerberg and other company leaders allowed politics to influence their decisions.

One of the first stories to break from Haugen’s disclosures (via The Wall Street Journal) included details about Facebook’s “cross check” program, which allowed politicians, celebrities and other VIPs to skirt the company’s rules. The initial motivation for the program? To avoid the “PR fires” that may occur if the social network were to mistakenly remove something from a famous person’s account. In another document, also reported by The Journal, a researcher on Facebook's integrity team complained that the company had made “special exceptions” for right-wing publisher Breitbart. The publication, part of Facebook’s official News Tab, also had “managed partner” status, which may have helped the publisher avoid consequences for sharing misinformation.

At the same time, while Facebook’s policies were often perceived internally as putting its thumb on the scale in favor of conservatives, Zuckerberg has also been accused of shelving ideas that could have been perceived as benefiting Democrats. The CEO was personally involved in killing a proposal to put a Spanish-language version of its voting information center into WhatsApp ahead of the 2020 presidential election, The Washington Post reported. Zuckerberg reportedly said the plan wasn’t “politically neutral.”

Facebook has serious moderation failures outside the US and Europe

Some of the most damning revelations in the Facebook Papers relate to how the social network handles moderation and safety issues in countries outside of the United States and Europe. The mere fact that Facebook is prone to overlook countries that make up its “rest of world” metrics is not necessarily new. The company's massive failure in Myanmar, where Facebook-fueled hate helped incite a genocide, has been well documented for years.

Yet a 2020 document noted the company still has “significant gaps” in its ability to detect hate speech and other rule-breaking content on its platform. According to Reuters, the company’s AI detection tools — known as “classifiers” — aren’t able to identify misinformation in Burmese. (Again, it’s worth pointing out that a 2018 report on Facebook’s role in the genocide in Myanmar cited viral misinformation and the lack of Burmese-speaking content moderators as issues the company should address.)

Unfortunately, Myanmar is far from the only country where Facebook’s under-investment in moderation has contributed to real-world violence. CNN notes that Facebook’s own employees have been warning that the social network is being abused by “problematic actors” to incite violence in Ethiopia. Yet Facebook lacked the automated tools to detect hate speech and other inciting content there, even though it had determined Ethiopia was among its most “at risk” countries.

Even in India — Facebook’s largest market — there’s a lack of adequate language support and resources to enforce the platform’s rules. In one document, reported by The New York Times, a researcher created a test account as an Indian user and started following Facebook’s automated recommendations for accounts and pages to follow. It took just three weeks for the new user’s feed to become flooded with “hate speech, misinformation and celebrations of violence.” At the end of the experiment, the researcher wrote: “I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life.” The report was not an outlier. Facebook groups and WhatsApp messages are being used to “spread religious hatred” in the country, according to The Wall Street Journal’s analysis of several internal documents.

Facebook has misled authorities and the public about its worst problems

Lawmakers, activists and other watchdogs have long suspected that Facebook knows far more about issues like misinformation, radicalization and other major problems than it publicly lets on. But many documents within the Facebook Papers paint a startling picture of just how much the company’s researchers know, often long before issues have boiled over into major scandals. That knowledge is often directly at odds with what company officials have publicly claimed.

For example, in the days after the Jan. 6 insurrection, COO Sheryl Sandberg said that rioters had “largely” organized using other platforms, not Facebook. Yet a report from the company’s own researchers, which first surfaced in April, found that the company had missed a number of warning signs about the brewing “Stop the Steal” movement. Though the company had spent months preparing for a chaotic election, including the potential for violence, organizers were able to evade Facebook’s rules by using disappearing Stories and other tactics, according to BuzzFeed.

Likewise, Facebook’s researchers were internally sounding the alarm about QAnon more than a year before the company banned the conspiracy movement. A document titled “Carol’s Journey to QAnon” detailed how a “conservative mom” could see QAnon and other conspiracy theories take over her News Feed in just five days simply by liking Pages that Facebook’s algorithms recommended. “Carol’s” experience was hardly an outlier. Researchers ran these types of experiments for years, and repeatedly found that Facebook’s algorithmic recommendations could push users deeper into conspiracies. But much of this research was not acted on until “things had spiraled into a dire state,” one researcher wrote in a document reported by NBC News.

The documents also show how Facebook has misleadingly characterized its ability to combat hate speech. The company has long faced questions about how hate speech spreads on its apps, and the issue sparked a mass advertiser boycott last year. According to a document cited by Haugen, the company’s own engineers estimate that the company is taking action on “as little as 3-5% of hate” on its platform. That’s in stark contrast to the statistics the company typically showcases.

Similarly, the Facebook Papers indicate that Facebook’s researchers knew much more about vaccine and COVID-19 misinformation than they would share with the public or officials. The company declined to answer lawmakers’ questions about how COVID-19 misinformation spreads even though, according to The Washington Post’s reporting, “researchers had deep knowledge of how covid and vaccine misinformation moved through the company’s apps.”

Facebook has misled advertisers and shareholders

These are the allegations that could end up being some of the most consequential because they show serious problems affecting the company’s core business — and could tie into any future SEC action.

Instagram has long been viewed as a bright spot for Facebook in terms of attracting the teens and younger users the company needs to grow. But increasingly, teens and younger users are spending more time and creating more content in competing apps like TikTok. The issue is even more stark for Facebook, where “teen and young adult DAU [daily active users] has been in decline since 2012/2013,” according to a slide shared by Bloomberg.

The story points out another issue that could get the company into hot water with the SEC: that the company is overcharging advertisers and misrepresenting the size of its user base due to the number of duplicate accounts. Though this is hardly the first time the issue has been raised, the company’s own reports suggest Facebook “undercounts” the metric, known as SUMA (single user multiple account), according to Bloomberg.

Zuckerberg prioritized growth over safety

While the Facebook Papers are far from the first time the company has faced accusations that it puts profit ahead of users’ wellbeing, the documents have shed new light on many of those claims. One point that’s come up repeatedly in the reporting is Zuckerberg’s obsession with MSI, or meaningful social interaction. Facebook retooled its News Feed around the metric in 2018 as a strategy to combat declining engagement. But the changes, meant to make sure Facebook users were seeing more content from friends and family, also made the News Feed angrier and more toxic.

Because Facebook optimized for “engagement,” publishers and other groups learned they could effectively game the company’s algorithms by, well, pissing people off. Politicians learned they could reach more people by posting more negative content, according to The Wall Street Journal. Publishers likewise complained that the platform was incentivizing more negative and polarizing content. Yet when Zuckerberg was presented with a proposal that found reducing the amount of some re-shared content could reduce misinformation, the CEO “said he didn’t want to pursue it if it reduced user engagement.”

That wasn’t the only time a Facebook leader was unwilling to make changes that could have a detrimental effect on engagement, even if it would address other serious issues like misinformation. Several documents detail research and concerns about Facebook’s “like” button and other reactions.

Because the News Feed algorithm gave a “reaction” more weight than a like, it boosted content that drew the “angry” reaction, even though researchers flagged that those posts were much more likely to be toxic. “Facebook for three years systematically amped up some of the worst of its platform, making it more prominent in users’ feeds and spreading it to a much wider audience,” The Washington Post wrote. The company finally stopped giving extra weight to “angry” last September.

Facebook slow-walked, and in some cases outright killed, proposals from researchers about how to address the flood of anti-vaccine comments on its platform, the AP reported.

The company has also been accused of downplaying research that found Instagram can exacerbate mental health issues for some of its teen users. The documents, which were some of the first records to emerge from Haugen’s disclosures, forced Facebook to “pause” work on an Instagram Kids app that had already drawn the attention of 44 state attorneys general. The research also prompted the first Congressional hearing to result from Haugen’s whistleblowing.

What does all this mean for Facebook (now Meta)?

While the Facebook Papers contain a dizzying amount of details about Facebook’s failures and misdeeds, many of the claims are not entirely new allegations. And if there’s one thing Facebook’s history has taught us, it’s that the company has never let a scandal affect its ability to make billions of dollars.

But, there are some signs that Haugen’s disclosures could be different. For one, she has turned over the documents to the SEC, which has the authority to conduct a wide-ranging investigation into the company’s actions. As many experts have pointed out, it’s not clear what could actually come from such an investigation, but it could at the very least force Facebook’s top executives to formally answer detailed questions from the regulator.

And though Haugen has said she is not in favor of antitrust action against the social network, the FTC has reportedly begun to take a look at the disclosures. (The FTC is already in the midst of a legal battle with Facebook.) Facebook already seems to be reacting as well. The company has asked employees to preserve documents going back to 2016, The New York Times reported this week. There are other, more practical, issues too. The company is reportedly struggling to recruit engineering talent, according to documents reported by Protocol.

The constant scandals and internal roadblocks have also taken a toll on existing employees. For as much scrutiny as the company has faced externally, the Facebook Papers paint a picture of a company whose employees are at times deeply divided and frustrated. The events of January 6th in particular sparked a heated debate about Facebook’s role, and how it missed opportunities to recognize the threat of the “Stop the Steal” movement. But fundamental disagreements between researchers and other staffers on one side, and Facebook’s leaders on the other, have persisted for years.

As Wired points out, the Facebook Papers are full of “badge posts” — Facebook speak for the companywide posts employees write upon their departure from the social network — from “dedicated employees who have concluded that change will not come, or who are at least too burned out to continue fighting for it.”

Facebook takes down government-run troll farm in Nicaragua

Facebook has taken down a government-run troll farm in Nicaragua, where multiple government agencies helped run a network of fake accounts and media pages that spanned Facebook, Instagram, TikTok, Twitter and YouTube. Facebook shared details of the network in its monthly report on coordinated inauthentic behavior on the platform.

In addition to hundreds of fake accounts on its platform, the troll farm also operated “a complex network of media brands” on Blogspot, WordPress and Telegram, Facebook said in a statement. Some fake accounts posed as government supporters, while others posed as university students, a group that led protests against the government in 2018. The fake accounts also mass-reported activists and other government critics in an attempt to get them banned from Facebook. Beginning in 2019, the group also began “posting and artificially amplifying praise about the Nicaraguan government and the ruling FSLN party.”

While it’s not the first time Facebook has caught a government running this kind of operation, the company said the Nicaraguan effort was unique because it was able to link the activity to multiple government institutions, including the Social Security Institute and Supreme Court. The country’s post office headquarters in Managua served as a “main hub” for the troll farm, and government workers even appeared to post on a regular 9am-5pm schedule.

“This was really the closest thing we've yet seen to a whole-of-government operation,” Facebook’s Global Threat Intelligence Lead for Influence Operations, Ben Nimmo, said during a call with reporters. “This is the first time that I can think of [that] we've seen so many different institutions getting involved.”

It’s also notable that Facebook traced the start of the operation back to 2018, meaning much of the activity went undetected for years. Nimmo noted that while the company’s automated systems were able to detect and disable some of the fake accounts in 2018, the operation was “complex” and time-consuming to investigate.

Facebook is rebranding itself as 'Meta'

Facebook, the social network, will no longer define the future of Facebook, the company that will now be known as Meta. Facebook Inc. is changing the name in order to distinguish its beleaguered social network, which has an increasingly poor reputation around the globe, from the company that is pinning its future on the promise of a “metaverse.”

"Our brand is so tightly linked to one product that it can't possibly represent everything that we're doing today, let alone in the future," Zuckerberg said. "From now on, we're going to be metaverse-first, not Facebook first."

Zuckerberg announced the new name during a virtual (meta-virtual?) keynote for the company’s Facebook Connect conference. Under its new arrangement, Facebook and its “family of apps” will be a division of the larger Meta company.

The restructuring bears some similarities to when Google restructured itself into Alphabet, the holding company that now operates Google, along with its “other bets” like DeepMind and Nest. Facebook has already said it plans to separate Facebook Reality Labs, its AR and VR group, from the rest of the company when reporting its financial performance.

Facebook is positioning the name as more reflective of its future ambitions to evolve from social network to metaverse company. Zuckerberg has yet to clearly define exactly what being a “metaverse company” means for its main platform and users, but augmented and virtual reality are central to the vision. The company has already shown off an early version of one project, called Horizon Workrooms, that allows people to conduct meetings in VR. The company also previewed new “Horizon Home” and “Horizon Venues” experiences. And, earlier this month, the company announced plans to hire 10,000 new workers in Europe in order to build out its metaverse.

The name change also comes at one of the most precarious moments in the company’s history. The social network is reeling from the fallout of the “Facebook Papers,” a trove of internal documents collected by a former employee turned whistleblower. The documents have been the basis for a series of complaints to the Securities and Exchange Commission, as well as the source of more than a dozen reports about the company’s failings to stem the tide of misinformation, hate speech and other harms caused by the platform.

Developing...

Snap, TikTok and YouTube need to do more to protect children, lawmakers say

The Senate Commerce Committee just wrapped up another three-hour hearing about social media’s effect on children and teens. But the latest hearing was different from previous ones in an important way: it featured representatives from TikTok, YouTube and Snap.

Though the three are among the most popular apps with teens and younger users, all three companies have gotten less attention from lawmakers than Facebook and even Twitter. It was the first time TikTok and Snap had appeared at such a hearing. All three companies tried to head off criticism by drawing distinctions between their platforms and Facebook, which has recently drawn comparisons to tobacco companies. And each company promised new features to ramp up parental controls and other child protections on their services.

YouTube VP Leslie Miller said the company was working on a new feature that would allow parents to “choose a locked default autoplay setting” in the YouTube Kids app, in addition to other new parental controls. She didn’t provide further detail, but said it would launch “in the coming months.”

Snap also said it was working on new features for parents, with Jennifer Stout, the company’s VP of Global Public Policy, saying the features would be “rolling out very soon.” She said the update would allow parents to view information about how their children are using Snapchat, such as who they spend the most time chatting with and what their privacy and location settings are.

TikTok said it would add additional controls to allow parents and children to better customize their feeds, but was light on specifics. “We're investing in new ways for our community to enjoy content based on age appropriateness or family comfort,” said Michael Beckerman, the company’s VP of Public Policy. “We're developing more features that empower people to shape and customize their experience in the app.”

But the senators on the Commerce Committee seemed unimpressed by these promises. Throughout the hearing, they pushed the companies on issues like algorithmically boosted content about eating disorders and self-harm on YouTube and TikTok. Snap’s Stout was pressed on what the company is doing to stop drug dealers on its platform.

Several Republican senators also pushed Beckerman on TikTok’s ties to Chinese parent company ByteDance, and how it handles US user data. In one particularly memorable exchange, Senator Ted Cruz said Beckerman was dodging questions about TikTok’s affiliation with a company called Beijing ByteDance Technology, which reportedly has links to the Chinese government. Beckerman also deflected questions about what data TikTok collects by saying Facebook and Instagram collect more data about users than TikTok does.

Though Facebook wasn’t officially part of the hearing, disclosures from whistleblower Frances Haugen were referenced several times. Senator Richard Blumenthal, who at a previous hearing said Facebook and other companies were facing a “big tobacco moment,” said that “tech is not irredeemably bad like big tobacco.”

But he said that the companies need to do much more than prove they are “different” from Facebook. “I understand from your testimony that your defense is ‘we’re not Facebook,’” he said. “Being different from Facebook is not a defense. That bar is in the gutter. It's not a defense to say that you are different.”

Mark Zuckerberg says Facebook's future is 'young adults' and the metaverse

Amid reports that Facebook has misled shareholders about significant declines in teens and younger users, Mark Zuckerberg said the company was “retooling” in order to make “serving young adults” its top priority. To do that, the company plans to make “significant changes” to its Facebook and Instagram apps, and spend billions of dollars building out its vision for a “metaverse.”

Citing increased competition from TikTok and iMessage, Zuckerberg said the company would do more to win over “young adult” users, even if it came at the expense of older users. He said the company’s TikTok rival Reels would be “as important for our products as Stories.” “We also expect to make significant changes to Instagram and Facebook in the next year to further lean into video and make Reels a more central part of the experience,” Zuckerberg said.

On top of that, he said the company was “retooling” internally in order to make young people its “North Star.” He added the shift “will take years, not months.” The issue of younger users has been particularly fraught for the company. A whistleblower’s disclosures about Facebook’s internal research on teen mental health prompted a series of Congressional hearings about child safety, and a wave of headlines about how Instagram may be harmful to some of its most vulnerable users. At the same time, other internal documents have indicated Facebook and Instagram have faced significant declines in engagement among teens and young adults for years.

Zuckerberg said another major priority for the company would be building its vision for a “metaverse.” He didn’t comment on reports that the company would change its name to reflect its new focus on augmented reality and virtual reality, but made clear the company has significant ambitions in the space. “Our goal is to help the metaverse reach a billion people,” he said. He added that a metaverse could enable “hundreds of billions of dollars of digital commerce.”

The company also said Monday that going forward it will report two sets of financials: one for its “family” of apps, which includes Facebook, Instagram, Messenger and WhatsApp; and one for its Reality Labs division that oversees its augmented and virtual reality work. In a statement, Facebook said its 2021 profit will be reduced by $10 billion due to its investment in Reality Labs, and that its AR and VR spending will only increase over the next “several years.”

Developing...

Facebook researchers were warning about its recommendations fueling QAnon in 2019

Facebook officials have long known about how the platform’s recommendations can lead users into conspiracy theory-addled “rabbit holes.” Now, we know just how clear that picture was thanks to documents provided by Facebook whistleblower Frances Haugen.

During the summer of 2019, a Facebook researcher found that it took just five days for the company to begin recommending QAnon groups and other disturbing content to a fictional account, according to an internal report whose findings were reported by NBC News, The Wall Street Journal and others Friday. The document, titled “Carol's Journey to QAnon,” was also in a cache of records provided by Haugen to the Securities and Exchange Commission as part of her whistleblower complaint.

It reportedly describes how a Facebook researcher set up a brand new account for “Carol,” who was described as a “conservative mom.” After liking a few conservative but “mainstream” pages, Facebook’s algorithms began suggesting more fringe and conspiratorial content. Within five days of joining Facebook, “Carol” was seeing “groups with overt QAnon affiliations,” conspiracy theories about “white genocide” and other material the researcher described as “extreme, conspiratorial, and graphic content.”

The fact that Facebook’s recommendations were fueling QAnon conspiracy theories and other concerning movements has been well known outside of the company for some time. Researchers and journalists have also documented the rise of the once-fringe conspiracy theory during the coronavirus pandemic in 2020. But the documents show that Facebook’s researchers were raising the alarm about the conspiracy theory prior to the pandemic. The Wall Street Journal notes that researchers suggested measures like preventing or slowing down re-shared content, but Facebook officials largely opted not to take those steps.

Facebook didn’t immediately respond to questions about the document. “We worked since 2016 to invest in people, technologies, policies and processes to ensure that we were ready, and began our planning for the 2020 election itself two years in advance,” Facebook’s VP of Integrity, Guy Rosen, wrote in a lengthy statement Friday evening. In the statement, Rosen recapped the numerous measures he said Facebook took in the weeks and months leading up to the 2020 election — including banning QAnon and militia groups — but didn’t directly address the company’s recommendations prior to QAnon’s ban in October 2020.

The documents come at a precarious moment for Facebook. There have now been two whistleblowers who have turned over documents to the SEC saying the company has misled investors and prioritized growth and profits over users’ safety. Scrutiny is likely to further intensify as more than a dozen media organizations now have access to some of those documents.

Another former Facebook employee has filed a whistleblower complaint

Another former Facebook employee has filed a whistleblower complaint with the Securities and Exchange Commission. The latest complaint, which was first reported by The Washington Post, alleges Facebook misled its investors about “dangerous and criminal behavior on its platforms, including Instagram, WhatsApp and Messenger.”

In the complaint, the former employee described a conversation with one of Facebook’s top communication executives who, following disclosures about Russia’s use of the platform to meddle in the 2016 election, said the scandal would be a “flash in the pan” and that “we are printing money in the basement, and we are fine.”

Like Frances Haugen, the latest whistleblower is also a former member of Facebook’s integrity team, which was tasked with fighting misinformation, voting interference and other major problems facing the company. And, like Haugen, the former Facebook staffer said that the company has “routinely undermined efforts to fight misinformation, hate speech and other problematic content out of fear of angering then-President Trump and his political allies, or out of concern about potentially dampening the user growth.”

The SEC filing also describes illegal activity in secret Facebook Groups, and Facebook’s policy of allowing politicians and other high-profile users to skirt its rules. It names Mark Zuckerberg and Sheryl Sandberg as being aware of the problems and not reporting them to investors, according to The Post.

While many of the details sound similar to other complaints from former company insiders, news of another complaint adds to the pressure on Facebook, which has spent much of the last month trying to discredit Haugen and downplay the significance of its own research. Meanwhile, lawmakers have called on Zuckerberg to answer questions from Congress, and Haugen is expected to brief European officials as well.

A Facebook spokesperson didn’t immediately respond to a request for comment. Zuckerberg is expected to announce plans to rebrand the company with a new name next week.

Instagram is testing tools to make it easier for creators to find sponsors

Instagram is testing new tools to make it easier for creators to earn money through its service. The app is now testing affiliate shops, a feature it first previewed at its Creator Week event in June, and a dedicated “partnerships” inbox.

Affiliate shops are an extension of Facebook’s existing shopping features, which are already widely available. But the latest version of the storefronts allows creators to link to products that are already part of their affiliate arrangements. Creators will earn commission fees when their followers buy products from these shops (though the exact terms of these arrangements haven’t been detailed). The company says that for now the shopping feature will only be available to creators who are part of that affiliate program.

Instagram is also testing new inbox features it says will make it easier for brands to connect with creators for sponsorships. Instagram DMs will get a dedicated “partnerships” section just for messages from brands. The company says this will give those messages “priority placement” and will allow them to skip the “requests” section where incoming messages are often lost.

Separately, the app is working on tools to match brands with creators looking for sponsorships. With the tools, creators can identify brands they are interested in working with directly from the app, while brands will be able to browse creators who fit their needs based on factors like age, gender and follower count.

The tools are still in an early stage, with only a handful of companies and creators participating for now. But the company has previously signaled such features could expand significantly. Mark Zuckerberg said earlier this year that Instagram is planning a “branded content marketplace” to help enable a bigger “creator middle class.”

Snap says Apple's privacy changes hurt its ad business more than it expected

Snap is finally seeing the effects of Apple’s iOS 14 privacy changes on its ad business, and the changes have had a bigger impact than the company expected.

The company reported revenue of just over $1 billion for the third-quarter of 2021. But despite that being a new milestone for Snap, it was $3 million shy of what the company had previously estimated. Snap executives said Apple’s iOS changes that make it more difficult for advertisers to track users were largely to blame for the shortfall.

“Our advertising business was disrupted by changes to iOS ad tracking that were broadly rolled out by Apple in June and July,” CEO Evan Spiegel said during a call with analysts. “While we anticipated some degree of business disruption, the new Apple-provided measurement solution did not scale as we had expected, making it more difficult for our advertising partners to measure and manage their ad campaigns for iOS.”

It wasn’t all bad news for Snap, though. The company once again beat expectations on user growth, adding 13 million new daily active users for the second quarter in a row. Snap now has 306 million DAUs, a new high for the company.

Still, Spiegel called it a “frustrating setback” for the company, but added that increased privacy protections are “really important for the long term health of the ecosystem and something we fully support.”

The iOS 14.5 update forced developers to ask users to explicitly agree to sharing their device identifier (known as IDFA), which is used by advertisers to track users across apps and services. Though Apple previewed the changes more than a year ago, the update wasn’t released until April. Since then, third-party analytics have estimated that a vanishingly small percentage of iOS users agreed to allow apps to track them.

Snap isn’t the only company that has warned about the impact of Apple’s iOS changes on its ad business. Facebook has been publicly slamming the changes for more than a year, saying they will have an outsize impact on developers and small businesses. But Facebook has also warned investors that the changes are likely to hurt its own ad revenue in 2021. The social network is reporting its third-quarter earnings Monday, when it will share just how significantly it's been affected.

Twitter says its algorithms amplify the ‘political right’ but it doesn’t know why

Twitter said in April that it was undertaking a new effort to study algorithmic fairness on its platform and whether its algorithms contribute to “unintentional harms.” As part of that work, the company promised to study the political leanings of its content recommendations. Now, the company has published its initial findings. According to Twitter’s research team, the company’s timeline algorithm amplifies content from the “political right” in six of the seven countries it studied.

The research looked at two issues: whether the algorithmic timeline amplified political content from elected officials, and whether some political groups received a greater amount of amplification. The researchers used tweets from news outlets and elected officials in seven countries (Canada, France, Germany, Japan, Spain, the United Kingdom, and the United States) to conduct the analysis, which they said was the first of its kind for Twitter.

“Tweets about political content from elected officials, regardless of party or whether the party is in power, do see algorithmic amplification when compared to political content on the reverse chronological timeline,” Twitter’s Rumman Chowdhury wrote about the research. “In 6 out of 7 countries, Tweets posted by political right elected officials are algorithmically amplified more than the political left. Right-leaning news outlets (defined by 3rd parties), see greater amplification compared to left-leaning.”

“Here’s what’s complex: The team did phenomenal work identifying *what* is happening. Establishing *why* these observed patterns occur is a significantly more difficult question to answer and something META [Twitter’s Machine Learning Ethics, Transparency and Accountability team] will examine.”

— Rumman Chowdhury (@ruchowdh) October 21, 2021

Crucially, as Chowdhury points out to Protocol, it’s not yet clear why this is happening. In the paper, the researchers posit that the difference in amplification could be a result of political parties pursuing “different strategies on Twitter.” But the team said that more research would be needed to fully understand the cause.

While the findings are likely to raise some eyebrows, Chowdhury also notes that “algorithmic amplification is not problematic by default.” The researchers further point out that their analysis “does not support the hypothesis that algorithmic personalization amplifies extreme ideologies more than mainstream political voices.”

But at the very least, the research would seem to further debunk the notion that Twitter is biased against conservatives. The research also offers an intriguing look at how a tech platform can study the unintentional effects of its algorithms. Facebook, which has come under pressure to make more of its own research public, has defended its algorithms even as a whistleblower has suggested the company should move back to a chronological timeline.

The study is part of a broader effort by Twitter to uncover bias and other issues in its algorithms. The company has also published research about its image-cropping algorithm and started a bug bounty program to find bias in its platform.