
Facebook’s first ‘widely viewed content’ report argues political content isn’t actually popular

Facebook really wants people to know that its most popular content isn’t political, and it’s releasing a new report to try to prove it. The social network released its first-ever “Widely Viewed Content Report,” documenting what it claims is the most-viewed content on its platform during the second quarter of 2021.

The report is Facebook’s rebuttal to commonly cited data that indicates posts from polarizing figures are consistently among the best-performing on the platform. Data from Facebook-owned CrowdTangle, an analytics platform, commonly shows that posts from conservative figures and outlets like Newsmax, Fox News, Ben Shapiro and Dan Bongino get the most engagement.

Notably, none of those names appear in Facebook’s latest report, which included lists of top-performing domains, pages and specific public posts it said attracted the most eyeballs. Among the top domains were YouTube, Amazon, Unicef, GoFundMe, Spotify and TikTok. The most widely-viewed links included a website for an organization associated with former Green Bay Packers football players. That page drew more than 87 million viewers, according to Facebook. An online storefront for CBD products was #2 on the list, with 72 million views. Also on the list was a cat GIF from Tumblr with just over 49 million views.

When it comes to the most viewed public pages, Facebook’s list included Unicef, as well as animal site The Dodo, LADbible, Sassy Media and other publishers that have built media companies off of viral Facebook content. Notably, most of the content Facebook put forward in its report didn’t appear to be overtly political. The most-viewed individual posts were a collection of text-based memes encouraging users to answer light-hearted (and mostly boring) questions.

It’s not the first time Facebook has tried to counter perceptions that its most popular content is polarizing or political. The company says it will release the “widely viewed content” report on a regular basis to help people track what type of content is being seen the most.

“There's a few gaps in the data that's being used today, and the narrative that has emerged is quite simply wrong,” Facebook's VP of Integrity, Guy Rosen, said during a call with reporters. “CrowdTangle is focused on interaction, CrowdTangle only has a limited set of certain pages, groups, and accounts. We are creating a report that provides a broad view and … an accurate representation of what people’s experiences actually are on our platform.”

Developing...

Taliban content is the latest issue for social media companies

While Facebook and Twitter are already struggling to handle vaccine misinformation and extremism, there's an increased focus on how social networks are handling Taliban-related content, following America's sudden withdrawal from Afghanistan. The militant group has swiftly overtaken Afghanistan's civilian government, taking control of the capital Kabul in only a few days, far sooner than intelligence analysts expected. Just like every modern organization, the Taliban relies heavily on social media to spread its messaging and communicate with followers, which puts the onus on technology companies to secure their platforms. 

“The Taliban is sanctioned as a terrorist organization under US law and we have banned them from our services under our Dangerous Organization policies," a Facebook spokesperson said in a statement. "This means we remove accounts maintained by or on behalf of the Taliban and prohibit praise, support, and representation of them." They went on to note that the company will be following the situation closely with the help of native Dari and Pashto speakers, who serve as local experts. Facebook isn't making any additions to its existing policies, which cover its core app, Instagram and WhatsApp, but it's clear that it's making the Taliban's uprising a priority.

Still, that statement doesn't mean much if Facebook can't actually see what's happening on its platforms. Vice reports that the Taliban has been spreading its message on WhatsApp, which uses end-to-end encryption to secure conversations. The company could technically ban specific accounts, but it won't be able to easily search and remove content like it can on Facebook proper and Instagram.

Twitter, meanwhile, wouldn't say if it would ban notable Taliban accounts like spokesperson Suhail Shaheen's. CNN reported yesterday that he had 347,000 followers on the platform, but now he's amassed over 361,000, a clear sign of growing influence. Twitter noted that people were using its service to seek help in Afghanistan, and that it would continue to enforce its existing rules around things like the glorification of violence and hateful conduct. The company also introduced the ability to report misleading tweets yesterday.

While Twitter is shying away from any definitive stances against the Taliban, a spokesperson noted: "Our enforcement approach is agile and we will remain transparent about our work as it continues to evolve to address these increasingly complex issues." Basically, the rules could change at any moment.

Moving forward, it's unclear how social media companies will recognize the Taliban as it takes control of Afghanistan. As the Washington Post reports, it's up to social media firms to determine who maintains official state accounts, like the Afghan president's Twitter account, which now has over 926,000 followers.

Snapchat Trends is an overview of the most popular keywords in use in Stories

Snap is introducing a new tool called Snapchat Trends that provides a public overview of the most popular keywords currently in use on the app. Accessible via the company’s website, the tool gives you a sense of the topics Snapchat users are referencing in Stories they share with the public and their friends. It also includes a database you can use to search for specific terms.

While the feature will primarily be of interest to marketers and advertisers looking to tap into Snapchat’s growing user base of 293 million daily active users, it’s something anyone can access, and looking at the trends gives you an insight into what young people care and worry about. Yes, you see topics related to shows like The Bachelorette, but things like the Taliban takeover of Kabul and the lockdown in Australia are also featured. It's a reminder that there's always more to youth culture than meets the eye.

YouTube will start showing video chapters in search results

YouTube has an impossibly large video library, and the company knows that navigating it is easier said than done. To that end, it's introducing a few new features to improve the search experience. Probably the most significant is the ability to see a video's chapters right from the search results page. YouTube already lets users break longer videos into separate chapters so that viewers can quickly find specific information, but those chapters were only visible once you clicked through to the video.

Now, chapters will appear alongside the search results, with a time-stamped image thumbnail for each section. This should give viewers more insight into the content inside each video, and you'll be able to tap or click right into a specific chapter if you find exactly the info you're looking for. We're not yet sure if this feature is coming to mobile, desktop or both, but we asked YouTube and will update this if we find out more.

Another new feature we do know is coming to mobile is the little video snippets that automatically play when you mouse over a video on the desktop. YouTube says it'll roll out "a version" of these previews on mobile, though it's not clear exactly what gesture will be used to get a snippet to play.

Finally, some of Google's auto-translate tools are coming to YouTube search results to make them useful to more people. Specifically, the company is starting to include automatically translated video titles, descriptions and captions in search results. These translated results will show up when there isn't enough relevant content in a user's local language. YouTube is adding these translations to English-language videos first, and right now the feature is only being tested on mobile devices in India and Indonesia; the company says it'll "consider" expanding to more locations based on user feedback.

Twitter tones down new buttons after complaints of eye strain

A few days ago, Twitter rolled out a number of design updates meant to make the website more accessible. It introduced a new proprietary typeface and increased contrast to make buttons and other visual elements like images stand out. Just because those changes make the website more accessible for some people, though, doesn't mean they work for everyone. As TechCrunch and CNET note, complaints started pouring in after the update went out, with people reporting eye strain and headaches caused by the changes. Now, the social network has announced that it's adjusting its buttons' contrast levels to make them easier on the eyes. 

We've identified issues with the Chirp font for Windows users and are actively working on a fix. Thanks for your patience and please let us know if you have additional feedback.

— Twitter Accessibility (@TwitterA11y) August 14, 2021

Twitter said it made the adjustment after people sent in complaints that the "new look is uncomfortable for people with sensory sensitivities." The company's accessibility account started asking for feedback a day after the updates went out, promising to track it all. It sounds like it's stayed true to its word, though the Chirp font remains even though it's supposedly giving people headaches. Twitter also hasn't changed the new colors for the Follow button, which has caused quite a bit of confusion: The button is now filled in with black for accounts you've yet to follow and shows up with a white background for accounts you're already following. It used to be the other way around.

The company may release more fixes to its accessibility update in the future, though. It told TechCrunch that "feedback was sought from people with disabilities throughout the process, from the beginning." However, it knows that "people have different preferences and needs and [it] will continue to track feedback and refine the experience." Twitter added: "We realize we could get more feedback in the future and we'll work to do that."

Researchers shut down Instagram study following backlash from Facebook

AlgorithmWatch, a group of researchers who had been studying how Instagram’s opaque algorithms function, say they were recently forced to halt their work over concerns that Facebook planned to take legal action against them. In a post spotted by The Verge, AlgorithmWatch claims the company accused it of breaching Instagram’s terms of service and said it would pursue “more formal engagement” if the project did not “resolve” the issue.

AlgorithmWatch’s research centered on a browser plugin that more than 1,500 people downloaded. The tool helped the team collect information that, it says, allowed it to make some inferences about how Instagram prioritizes specific photos and videos over others.

Most notably, the team found the platform encourages people to show skin. Before publishing its findings, AlgorithmWatch said it had reached out to Facebook for comment but initially received no response. In May 2020, however, Facebook told the researchers their work was “flawed in a number of ways,” pointing to a list of issues it said it had found with their methodology earlier that year.

When Facebook accused AlgorithmWatch of breaching its terms of service, the company pointed to a section of its rules that prohibits automated data collection. It also said the system violated GDPR, the European Union’s data privacy law. “We only collected data related to content that Facebook displayed to the volunteers who installed the add-on,” AlgorithmWatch said. “In other words, users of the plugin [were] only accessing their own feed, and sharing it with us for research purposes.” As for Facebook’s allegations related to GDPR, the group said, “a cursory look at the source code, which we open-sourced, shows that such data was deleted immediately when arriving at our server.”

Despite the belief they had done nothing wrong, the researchers eventually decided to shutter the project. “Ultimately, an organization the size of AlgorithmWatch cannot risk going to court against a company valued at one trillion dollars,” they said.

When Engadget reached out to Facebook for comment on the situation, the company denied it had threatened to sue the researchers. Here’s the full text of what it had to say:

We believe in independent research into our platform and have worked hard to allow many groups to do it, including AlgorithmWatch — but just not at the expense of anyone’s privacy. We had concerns with their practices, which is why we contacted them multiple times so they could come into compliance with our terms and continue their research, as we routinely do with other research groups when we identify similar concerns. We did not threaten to sue them. The signatories of this letter believe in transparency — and so do we. We collaborate with hundreds of research groups to enable the study of important topics, including by providing data sets and access to APIs, and recently published information explaining how our systems work and why you see what you see on our platform. We intend to keep working with independent researchers, but in ways that don’t put people’s data or privacy at risk.

This episode with AlgorithmWatch has worrisome parallels with the action Facebook took earlier this month against the NYU Ad Observatory, a project that had been studying how political advertisers target their ads. Facebook has some tools in place to assist researchers in their work, but for the most part, its platforms have been a black box since the fallout from the Cambridge Analytica scandal. That’s a significant problem, as AlgorithmWatch points out.

“Large platforms play an oversized, and largely unknown, role in society, from identity-building to voting choices,” it said. “Only if we understand how our public sphere is influenced by their algorithmic choices, can we take measures towards ensuring they do not undermine individuals’ autonomy, freedom, and the collective good.”

Why is Facebook so bad at countering vaccine misinformation?

It’s been six months since Facebook announced a major reversal of its policies on vaccine misinformation. Faced with a rising tide of viral rumors and conspiracy theories, the company said it would start removing vaccine mistruths from its platform. Notably, the effort encompassed not only content about COVID-19 vaccines, but all vaccines. That includes many of the kinds of claims it had long allowed, like those linking vaccines and autism, or statements that vaccines are “toxic” or otherwise dangerous.

The move was widely praised, as disinformation researchers and public health officials have long urged Facebook and other platforms to treat vaccine misinformation more aggressively. Since then, the company has banned some prominent anti-vaxxers, stopped recommending health-related groups and shown vaccine-related PSAs across Facebook and Instagram. It now labels any post at all that mentions COVID-19 vaccines, whether factual or not.

Yet, despite these efforts, vaccine misinformation is still an urgent problem, and public health officials say Facebook and other social media platforms aren’t doing enough to address it. Last month, the Surgeon General issued an advisory warning of the dangers of health misinformation online. The accompanying 22-page report didn’t call out any platforms by name, but it highlighted algorithmic amplification and other issues commonly associated with Facebook. The following day, President Joe Biden made headlines when he said that misinformation on Facebook was “killing people.”

While Facebook has pushed back, citing its numerous efforts to quash health misinformation during the pandemic, the company’s past lax approach to vaccine misinformation has likely made that job much more difficult. In a statement, a Facebook spokesperson said vaccine hesitancy has decreased among its users in the US, but the company has also repeatedly rebuffed requests for more data that could shed light on just how big the problem really is.

“Since the beginning of the pandemic, we have removed 18 million pieces of COVID misinformation, labeled hundreds of millions of pieces of COVID content rated by our fact-checking partners, and connected over 2 billion people with authoritative information through tools like our COVID information center,” a Facebook spokesperson told Engadget. “The data shows that for people in the US on Facebook, vaccine hesitancy has declined by 50% since January, and acceptance is high. We will continue to enforce against any account or group that violates our COVID-19 and vaccine policies and offer tools and reminders for people who use our platform to get vaccinated.”

Facebook’s pandemic decision

Throughout the pandemic, Facebook has moderated health misinformation much more aggressively than it did in the past. Yet for the first year of the pandemic, the company made a distinction between coronavirus misinformation — e.g., statements about fake cures or posts disputing the effectiveness of masks, which it removed — and vaccine conspiracy theories, which it said did not break its rules. Mark Zuckerberg even said he would be reluctant to moderate vaccine misinformation the same way the company had moderated COVID misinformation.

That changed this year, with the advent of COVID-19 vaccines and the rising tide of misinformation and vaccine hesitancy that accompanied them, but the damage may have already been done. A peer-reviewed study published in Nature in February found that exposure to misinformation about the COVID-19 vaccines “lowers intent to accept a COVID-19 vaccine” by about 6 percent.

People are also more likely to be unvaccinated if they primarily get their news from Facebook, according to a July report from the COVID States Project. The researchers sampled more than 20,000 adults in all 50 states and found that those who cited Facebook as a primary news source were less likely to be vaccinated. While the authors note that it doesn’t prove that using Facebook affects someone’s choice to get vaccinated, they found a “surprisingly strong relationship” between the two.

“If you rely on Facebook to get news and information about the coronavirus, you are substantially less likely than the average American to say you have been vaccinated,” they write. “In fact, Facebook news consumers are less likely to be vaccinated than people who get their coronavirus information from Fox News. According to our data, Facebook users were also among the most likely to believe false claims about coronavirus vaccines.”

The researchers speculate that this could be because people who spend a lot of time on Facebook are less likely to trust the government, the media or other institutions. Or, it could be that spending time on the platform contributed to that distrust. While there’s no way to know for sure, we do know that Facebook has for years been an effective platform for spreading disinformation about vaccines.

A spotty record

Doctors and researchers have warned for years that Facebook wasn’t doing enough to prevent lies about vaccines from spreading. Because of this, prominent anti-vaxxers have used Facebook and Instagram to spread their message and build their followings.

A report published earlier this year by the Center for Countering Digital Hate (CCDH) found that more than half of all vaccine misinformation online could be linked to 12 individuals who are part of a long-running, and often coordinated, effort to undermine vaccines. While the company has banned some accounts, some of those individuals still have a presence on a Facebook-owned platform, according to the CCDH. Facebook has disputed the findings of that report, which relied on analytics from the company's CrowdTangle tool. But the social network’s own research into vaccine hesitancy indicated “a small group appears to play a big role” in undermining vaccines, The Washington Post reported in March.

There are other issues, too. For years, Facebook’s search and recommendation algorithms have made it extraordinarily easy for users to fall into rabbit holes of misinformation. Simply searching the word “vaccine” would be enough to surface recommendations for accounts spreading conspiracy theories and other vaccine disinformation.

Engadget reported last year that Instagram’s algorithmic search results associated anti-vaccine accounts with COVID-19 conspiracies and QAnon content. More than a year later, a recent study from Avaaz found that although this type of content no longer appears at the top of search results, Facebook’s recommendation algorithms continue to recommend pages and groups that promote misinformation about vaccines. In their report, researchers document how users can fall into misinformation “rabbit holes” by liking seemingly innocuous pages or searching for “vaccines.” They also found that Facebook’s page recommendation algorithm appeared to associate vaccines and autism.

“Over the course of two days, we used two new Facebook accounts to follow vaccine-related pages that Facebook suggested for us. Facebook’s algorithm directed us to 109 pages, with 1.4M followers, containing anti-vaccine content — including pages from well-known anti-vaccine advocates and organizations such as Del Bigtree, Dr. Ben Tapper, Dr. Toni Bark, Andrew Wakefield, Children's Health Defense, Learn the Risk, and Dr. Suzanne Humphries. Many of the pages the algorithm recommended to us carried a label, warning that the page posts about COVID-19 or vaccines, giving us the option to go directly to the CDC website. The algorithm also recommended 10 pages related to autism — some containing anti-vaccine content, some not — suggesting that Facebook’s algorithm associates vaccines with autism, a thoroughly debunked link that anti-vaccine advocates continue to push.”

Facebook has removed some of these pages from its recommendations, though it’s not clear which ones. Avaaz points out that there’s no way to know why Facebook’s recommendation algorithm surfaces the pages it does, as the company doesn’t disclose how these systems work. Yet it’s notable because content associating vaccines with autism is exactly the kind of claim Facebook said it would ban under its stricter misinformation rules during the pandemic. That Facebook’s suggestions are intermingling the topics is, at the very least, undermining those efforts.

Claims and counterclaims

Facebook has strongly opposed these claims. The company repeatedly points to its messaging campaign around COVID-19 vaccines, noting that more than 2 billion people have viewed its COVID-19 and vaccine PSAs.

In a blog post responding to President Biden’s comments last month, Facebook’s VP of Integrity Guy Rosen argued that “vaccine acceptance among Facebook users in the US has increased.” He noted that the company has “reduced the visibility of more than 167 million pieces of COVID-19 content debunked by our network of fact-checking partners so fewer people see it.”

He didn’t share, however, how much of that misinformation was about vaccines, or details on the company’s enforcement of its more general vaccine misinformation rules. That’s likely not an accident. The company has repeatedly resisted efforts that could shed light on how misinformation spreads on its platform.

Facebook executives declined a request from the company's own data scientists for additional resources to study COVID-19 misinformation at the start of the pandemic, according to The New York Times. It’s not clear why the request was turned down, but the company has also pushed back on outsiders’ efforts to gain insight into health misinformation.

Facebook has declined to share the results of an internal study on vaccine hesitancy on its platform, according to Washington DC Attorney General Karl Racine’s office, which has launched a consumer protection investigation into the company’s handling of vaccine misinformation.

“Facebook has said it’s taking action to address the proliferation of COVID-19 vaccine misinformation on its site,” a spokesperson said. “But then when pressed to show its work, Facebook refused.”

The Biden Administration has also — unsuccessfully — pushed Facebook to be more forthcoming about vaccine misinformation. According to The New York Times, administration officials have met repeatedly with Facebook and other platforms as part of an effort to curb misinformation about coronavirus vaccines. Yet when a White House official asked Facebook to share “how often misinformation was viewed and spread,” the company refused. According to The Times, Facebook responded to some requests for information by talking about “vaccine promotion strategies,” such as its PSAs or its tool to help users book vaccine appointments.

One issue is that it’s not always easy to define what is, and isn’t, misinformation. Factual information, like news stories or personal anecdotes about vaccine side effects, can be shared with misleading commentary. This, Facebook has suggested, makes it difficult to study the issue in the way that many have asked. At the same time, Facebook is a notoriously data-driven company. It’s constantly testing even the smallest features, and it employs scores of researchers and data scientists. It’s difficult to believe that learning more about vaccine hesitancy and how misinformation spreads is entirely out of reach.

Facebook may be forced to sell Giphy following UK regulator findings

The UK's competition regulator has found that Facebook's acquisition of GIF-sharing platform Giphy will harm competition within social media and digital advertising. As part of its provisional decision, the watchdog voiced concerns that Facebook could prevent rivals including TikTok and Snapchat from accessing Giphy, a service they already use. It added that Facebook could also require customers of the GIF platform to hand over more data in return for access. If its objections are confirmed as part of the ongoing review, the regulator said it could force Facebook to unwind the deal and to sell off Giphy in its entirety.

The Competition and Markets Authority (CMA) ultimately determined that the deal stands to increase Facebook's sizeable market power. Together, its suite of apps — including Facebook, WhatsApp and Instagram — account for 70 percent of social media activity and are accessed at least once a month by 80 percent of internet users, the CMA said.

Beyond social media, the watchdog suggested that the acquisition could remove a potential challenger to Facebook in the $5.5 billion display advertising market. Citing Facebook's termination of Giphy's paid ad partnerships following the deal, the regulator said the move had effectively stopped the company's ad expansion (including to additional countries like the UK) in its tracks. This in turn had an impact on innovation in the broader advertising sector, the CMA explained.

Facebook's announcement last May that it was acquiring Giphy for a reported $400 million, with plans to integrate it with Instagram, immediately raised alarm bells for regulators. The social network is facing antitrust complaints in the US and the EU over its social media and advertising monopolies, respectively. At the same time, the UK has ramped up its scrutiny of Big Tech by creating a dedicated Digital Markets Unit to oversee the likes of Google, Facebook and Apple. The fledgling agency sits within the CMA and is designed to give people more control over their data.

Today, the CMA echoed those principles in its initial decision. The regulator said that it would "take the necessary actions" to protect users if it concludes that the merger is detrimental to competition. It will now consult on its findings as part of the review process. A final decision is slated for October 6th.

Facebook told Variety that it "disagrees" with the CMA's preliminary findings. "We will continue to work with the CMA to address the misconception that the deal harms competition,” the company added. It previously argued that Giphy has no operations in the UK, meaning that the CMA has no jurisdiction over the deal. In addition, it has claimed that Giphy's paid services cannot be classified as display advertising under the regulator's own market definition. 

Twitter rolls out redesign with proprietary Chirp font

If you went to scroll through your Twitter timeline today, you may have noticed that things look a bit different. That’s because Twitter has started rolling out a handful of design tweaks to its web client and mobile apps. The company’s Design account detailed them in a thread it posted earlier today.

Notice anything different?

Today, we released a few changes to the way Twitter looks on the web and on your phone. While it might feel weird at first, these updates make us more accessible, unique, and focused on you and what you’re talking about.

Let’s take a deeper look. 🧵 pic.twitter.com/vCUomsgCNA

— Twitter Design (@TwitterDesign) August 11, 2021

The most visible (and controversial) change involves Chirp, Twitter’s first proprietary typeface. The company introduced the font back in January. According to Twitter, one of the main advantages of Chirp is the way it can align the text of tweets written in Western languages to the left-hand side of the interface. The company says that’s something that should make it easier to read content as you scroll through your timeline.

The company also tweaked its use of color. It says it went out of its way to use less blue and increase contrast so that both frequently used icons and visual content like images stand out. If you're a fan of customization, Twitter plans to roll out additional color palettes soon. “This is only the start of more visual updates as Twitter becomes more centered on you and what you have to say,” the company said.

hey..quick update: now 50% of our iOS folks will be able to speak like 👽, 🤖,🐝& more! https://t.co/pShfK3RYfG

— Spaces (@TwitterSpaces) August 10, 2021

Separate from the redesign, the company is also rolling out a feature to the Spaces app on iOS that allows users to change how their voice sounds when they speak during a presentation. “We know people often feel uncomfortable by the sound of their own voice,” the company said. “Giving people fun effects and useful ones might lower the threshold.”

Facebook's Oversight Board orders a post criticizing the Myanmar coup to be restored

Facebook's Oversight Board has instructed the social network to restore a user's post that criticized the Chinese state. According to the board, Facebook mistakenly removed the post for violating its hate speech policy under the belief that it targeted Chinese people.

"This case highlights the importance of considering context when enforcing hate speech policies, as well as the importance of protecting political speech," the Oversight Board wrote. "This is particularly relevant in Myanmar given the February 2021 coup and Facebook’s key role as a communications medium in the country."

The user, who appeared to be in Myanmar, posted the message in question in April. The post argued that, rather than providing funding to Myanmar's military following the coup in February, tax revenue should be given to the Committee Representing Pyidaungsu Hluttaw, a group of legislators that opposed the coup. The post, which was written in Burmese, was viewed around half a million times.

Although no users reported the post, Facebook decided to take it down. The post used profanity while referencing Chinese policy in Hong Kong. Facebook's translation of the post led four content reviewers to believe that the user was criticizing Chinese people. 

Under its hate speech rules, Facebook doesn't allow content that targets a person or group of people based on ethnicity, race or national origin with “profane terms or phrases with the intent to insult.” The user who wrote the post claimed in their appeal that they shared it in an effort to “stop the brutal military regime.”

The Oversight Board says context is particularly important in this case. The Burmese language uses the same word to refer to both a state and people who are from that state. Other factors made it clear the user was referring to the Chinese state, according to the board.

Two translators who reviewed the post "did not indicate any doubt" that the word at the heart of the case was referring to a state. The translators told the board the post includes terms that Myanmar’s government and the Chinese embassy commonly use to refer to each other. Public comments the board received regarding the case indicated the post was political speech.

The Oversight Board ordered Facebook to restore the post and recommended that the company ensure "its Internal Implementation Standards are available in the language in which content moderators review content. If necessary to prioritize, Facebook should focus first on contexts where the risks to human rights are more severe."

The company has had a complicated history with Myanmar. In 2018, Facebook was accused of censoring information about ethnic cleansing in the country. It admitted it didn't do enough to stop people from using the platform to incite offline violence and "foment division," following a report it commissioned about the matter.

Soon after the coup, Facebook was temporarily blocked in Myanmar. After it returned, Facebook took steps to limit the reach of the country's military on its platform, and later banned the military outright on Facebook and Instagram.

The Oversight Board previously told Facebook to restore a post from another user based in Myanmar. As with the latest ruling, the board said Facebook misinterpreted the post as hate speech. While it was “pejorative or offensive,” the post didn't “advocate hatred” or directly call for violence.