Biden administration officials can freely communicate with social media companies — for now. The 5th Circuit Court of Appeals has put a pause on Judge Terry A. Doughty's order that prohibits most federal officials from talking to companies like Meta about content. According to The New York Times, the three-judge panel has ruled that Doughty's preliminary injunction be set aside "until further orders of the court."
If you'll recall, the state attorneys general of Louisiana and Missouri filed a lawsuit against President Joe Biden and other top government officials, including Dr. Anthony Fauci. They accused the current administration of pressuring social media companies to censor certain topics and remove content. The lawsuit, The Washington Post reports, is based on emails between the administration and social networks in which administration officials questioned the companies' handling of posts containing conservative claims about the COVID-19 pandemic and the 2020 presidential election, as well as anti-vaccine sentiments.
Doughty, a Trump-appointed judge, said the plaintiffs "produced evidence of a massive effort" by the defendants "to suppress speech based on its content." He also wrote in his decision that, if the allegations are true, "the present case arguably involves the most massive attack against free speech in United States history." His order prohibits federal agencies, including the Department of Health and Human Services and the Department of Homeland Security, from asking online platforms to take down content containing "protected free speech." They can still, however, communicate with those companies on issues related to criminal activity, national security and election interference by foreign players.
Conservatives have long believed that mainstream social media platforms are biased against right-wing ideologies. That has led to the launch of social networks associated with conservatives, such as Parler and Donald Trump's Truth Social. The state attorneys general argued that federal officials crossed the line by threatening to take antitrust actions against social networks and to limit their Section 230 protections, which allow internet companies to moderate content on their platforms as they see fit. It's worth noting that former President Trump previously signed an executive order that sought to limit the federal protections offered by Section 230 after Twitter fact-checked a false tweet he posted.
The Justice Department appealed Doughty's order the day after it was issued, arguing that it was too broad and could limit the government's ability to warn people about false information in times of emergency. The administration has apparently already felt its effects: a scheduled meeting with Meta to discuss strategies for countering foreign disinformation campaigns was cancelled. The stay will allow federal agencies to continue working with online platforms until the court can look further into the complaint. The appeals court has also ordered that oral arguments in the case be expedited so a final decision can be reached in the near future.
This article originally appeared on Engadget at https://www.engadget.com/appeals-court-pauses-order-that-restricts-biden-officials-from-contacting-social-networks-123040377.html?src=rss
American regulators now appear to be clamping down on generative AI in earnest. The Washington Post has learned the Federal Trade Commission (FTC) has launched an investigation into OpenAI, the creator of ChatGPT and DALL-E. Officials have requested documents showing how the company tackles risks stemming from its large language AI models. The FTC is concerned the company may be violating consumer protection laws through "unfair or deceptive" practices that could hurt the public's privacy, security or reputation.
The Commission is particularly interested in information linked to a bug that leaked ChatGPT users' sensitive data, including payments and chat histories. While OpenAI said the number of affected users was very small, the FTC is worried this stems from poor security practices. The agency also wants details of any complaints alleging the AI made false or malicious statements about individuals, and info showing how well users understand the accuracy of the products they're using.
We've asked OpenAI for comment. The FTC declined to comment and typically doesn't remark on investigations, but it has previously warned that generative AI could run afoul of the law by doing more harm than good to consumers. It could be used to perpetrate scams, run misleading marketing campaigns or lead to discriminatory advertising, for instance. If the agency finds a company in violation, it can impose fines or issue consent decrees that mandate certain practices.
AI-specific laws and rules aren't expected in the near future. Even so, the government has stepped up pressure on the tech industry. OpenAI chief Sam Altman testified before the Senate in May, where he defended his company by outlining privacy and safety measures while touting AI's claimed benefits. He said protections were in place, but that OpenAI would be "increasingly cautious" and continue to upgrade its safeguards.
It's not clear if the FTC will pursue other generative AI developers, such as Google and Anthropic. The OpenAI investigation shows how the Commission might approach other cases, though, and signals that the regulator is serious about scrutinizing AI developers.
This article originally appeared on Engadget at https://www.engadget.com/ftc-opens-investigation-into-chatgpt-creator-openai-164551958.html?src=rss
The Massachusetts state legislature is considering a bill that would ban the sale of users’ phone location data. If passed, the Location Shield Act would be the first such law in the nation as Congress stalls on comprehensive user privacy solutions on a national scale. The state’s proposed legislation would also require a warrant for law enforcement to access user location data from data brokers.
Today, The Wall Street Journal published a report with numerous details on the proposed legislation, following earlier discussions at the state house (as reported by The Athol Daily News). Of course, the bill wouldn’t prevent Massachusetts residents from using their phone’s location services for things that directly benefit them — like Google Maps navigation, DoorDash deliveries or hailing an Uber. However, it would bar tech companies and data vendors from selling that data to third parties — a practice without any clear consumer benefit.
The Location Shield Act is backed by the ACLU and various progressive and pro-choice groups, who see a greater urgency to block the dissemination of user location in a post-Dobbs world. As red states increasingly criminalize abortion, concerns have grown over the transfer of user data to catch women traveling out of state to undergo the procedure or access medication. In addition, the bill’s backers raise concerns about national security and digital-stalking implications.
Opposing the legislation is the State Privacy & Security Coalition, a trade association representing the tech industry. “The definition of sale is extremely broad,” said Andrew Kingman, an association lawyer. He says the group supports heightened protections but would prefer giving consumers “the ability to opt-out of sale,” as other state laws have done, rather than imposing an outright ban. Of course, making it optional rather than a complete ban would likely be much better for data brokers’ bottom lines.
Requiring law enforcement to provide a warrant to access user location data could also help curtail the rising trend of law enforcement buying that information commercially. A 2022 ACLU investigation found that the Department of Homeland Security bought over 336,000 data points to essentially bypass the Fourth Amendment requirement for a search warrant. Although the US Supreme Court has said a warrant is usually needed for agencies to access location data from carriers, purchasing the data from private companies has served as a loophole.
The Massachusetts legislative session runs through next year, and the bill’s backers are optimistic that it will pass. “I have every reason to be optimistic that something will be happening in this session,” MA Senate Majority Leader Cindy Creem (D), the bill’s sponsor, told the WSJ.
This article originally appeared on Engadget at https://www.engadget.com/massachusetts-weighs-outright-ban-on-selling-user-location-data-191637974.html?src=rss
French law enforcement may soon have far-reaching authority to snoop on alleged criminals. Lawmakers in France's National Assembly have passed a bill that lets police surveil suspects by remotely activating cameras, microphones and GPS location systems on phones and other devices. A judge will have to approve use of the powers, and the recently amended bill forbids use against journalists, lawyers and other "sensitive professions," according to Le Monde. The measure is also meant to limit use to serious cases, and only for a maximum of six months. Geolocation would be limited to crimes that are punishable by at least five years in prison.
An earlier version of the bill passed the Senate, but the amended bill will require that chamber's approval before it can become law.
Civil liberties advocates are alarmed. The digital rights group La Quadrature du Net previously pointed out the potential for abuse. As the bill isn't clear about what constitutes a serious crime, there are fears the French government might use this to target environmental activists and others who aren't grave threats. The organization also notes that worrying security policies have a habit of expanding to less serious crimes. Genetic registration was only used for sex offenders at first, La Quadrature says, but is now being used for most crimes.
The group further notes that the remote access may depend on security vulnerabilities. Police would be exploiting security holes instead of telling manufacturers how to patch those holes, La Quadrature says.
Justice Minister Éric Dupond-Moretti says the powers would only be used for "dozens" of cases per year, and that this was "far away" from the surveillance state of Orwell's 1984. It will save lives, the politician argues.
The legislation comes as concerns about government device surveillance are growing. There's been a backlash against NSO Group, whose Pegasus spyware has allegedly been misused to spy on dissidents, activists and even politicians. While the French bill is more focused, it's not exactly reassuring to those worried about government overreach.
This article originally appeared on Engadget at https://www.engadget.com/french-assembly-passes-bill-allowing-police-to-remotely-activate-phone-cameras-and-microphones-for-surveillance-210539401.html?src=rss
All of the world's governments will, at least officially, be out of the chemical weapons business. The US Army tells The New York Times it should finish destroying the world's last declared chemical weapons stockpile as soon as tomorrow, July 7th. The US and most other nations agreed to completely eliminate their arsenals within 10 years after the Chemical Weapons Convention took effect in 1997, but the sheer size of the American collection (many of the warheads are several decades old) and the complexity of safe disposal left the country running late.
The current method relies on robots that puncture, drain and wash the chemical-laden artillery shells and rockets, which are then baked to render them harmless. The drained gas is diluted in hot water and neutralized either with bacteria (for mustard gas) or caustic soda (for nerve agents). The remaining liquid is then incinerated. Teams use X-rays to check for leaks before destruction starts, and they remotely monitor robots to minimize contact with hazardous material.
The Army initially wanted to dispose of the weapons by sinking them on ships, as it had quietly done before, but faced a public backlash over the potential environmental impact. Proposals to incinerate chemical agents in the 1980s also met with objections, although the military ultimately destroyed a large chunk of the stockpile that way.
The US last used chemical weapons in World War I, but kept producing them for decades as a deterrent. Attention to the program first spiked in 1968, when strange sheep deaths led to revelations that the Army was storing chemical weapons across the US and even testing them in the open.
This measure will only wipe out confirmed stockpiles. Russia has been accused of secretly making nerve gas despite insisting that it destroyed its last chemical weapons in 2017. Pro-government Syrian military forces and ISIS extremists used the weapons throughout much of the 2010s. This won't stop hostile countries and terrorists from using the toxins.
Even so, this is a major milestone. In addition to wiping out an entire category of weapons of mass destruction, it represents another step toward reduced lethality in war. Drones reduce the exposure for their operators (though not the targets), and experts like AI researcher Geoffrey Hinton envision an era when robots fight each other. While humanity would ideally end war altogether, efforts like these at least reduce the casualties.
This article originally appeared on Engadget at https://www.engadget.com/the-us-is-destroying-the-worlds-last-known-chemical-weapons-stockpile-181026211.html?src=rss
A judge has blocked the Biden administration and other federal officials from communicating with social media companies in a case that could have far-reaching implications. On Tuesday, a Trump-appointed judge granted the state attorneys general of Louisiana and Missouri a temporary injunction against the federal government, reports The Washington Post. The two Republican attorneys general sued President Joe Biden and other top government officials, including Dr. Anthony Fauci and Surgeon General Vivek Murthy, last year, accusing them of colluding with Meta, Twitter and YouTube to remove “truthful information” related to the COVID-19 lab leak theory, the 2020 election and other topics.
Although he has yet to make a final ruling in the case, Judge Terry A. Doughty wrote in his order that the Republican attorneys general “produced evidence of a massive effort by Defendants, from the White House to federal agencies, to suppress speech based on its content.” While the order grants some exceptions for the government to communicate with Meta, Twitter and YouTube, it also specifically targets more than a dozen individual officials. Among those are Jen Easterly, the director of the Cybersecurity and Infrastructure Security Agency, and Alejandro Mayorkas, the secretary of Homeland Security.
The lawsuit is the latest effort by some Republicans to allege the Biden administration pressured social media platforms to censor conservative views. The GOP has aired that grievance in a few different venues — including, most notably, a contentious House Oversight Committee hearing at the start of the year related to the so-called “Twitter Files.” The lawsuit from the attorneys general of Louisiana and Missouri takes a different tack. Instead of directly targeting Meta, Twitter and YouTube, which argue they have a First Amendment right to decide what content is allowed on their platforms, the attorneys general sued the federal government. Whatever happens next, that strategy has already led to the most successful effort yet to counter online content moderation.
Separately, it's worth noting that Meta, Twitter and YouTube have all recently scaled back their moderation policies in one way or another. In the case of YouTube, for instance, the company said last month it would begin allowing videos that falsely claim fraud occurred during the 2020 election. Meta, meanwhile, last month rolled back its COVID-19 misinformation rules for Instagram and Facebook in countries where the pandemic is no longer deemed a national emergency.
This article originally appeared on Engadget at https://www.engadget.com/judge-blocks-federal-officials-from-contacting-tech-companies-192554203.html?src=rss
Nobody said dragging one of the largest government bureaucracies to ever exist into the digital era was going to be easy, but the sheer scale and variety of failings we've seen in recent decades have had very real, and nearly universally negative, consequences for the Americans who rely on these social systems. One need look no further than how SNAP — the federal Supplemental Nutrition Assistance Program — has repeatedly fallen short in its mission to help feed low-income Americans. Jennifer Pahlka, founder and former executive director of Code for America, takes an unflinching look at the many missteps and groupthink slip-ups committed by our government in the pursuit of bureaucratic efficiency in Recoding America: Why Government Is Failing in the Digital Age and How We Can Do Better.
The lawmakers who voted to cut the federal workforce in the 1990s, just as digital technology was starting to truly reshape our lives, wanted smaller government. But starving government of know-how, digital or otherwise, hasn’t made it shrink. It has ballooned it. Sure, there are fewer public servants, but we spend billions of dollars on satellite software that never goes to space, we pay vendors hundreds of thousands of dollars for basic web forms that don’t work, and we make applying for government services feel like the Inquisition. That’s the funny thing about small government: the things we do to get it — to limit government’s intrusion into our lives — have a habit of accomplishing the opposite.
Take, for example, an application for food stamps that requires answering 212 separate questions. That’s what Jake Solomon at Code for America discovered when he tried to find out why so few Californians in need enrolled in the state’s Supplemental Nutrition Assistance Program, or SNAP. Many of the questions were confusing, while others were oddly specific and seemed to assume the person applying was a criminal. “Have you or any member of your household ever been found guilty of trading SNAP benefits for drugs after September 22, 1996? Have you or any member of your household ever been found guilty of trading SNAP benefits for guns, ammunition, or explosives after September 22, 1996?” It would often take up to an hour for people to fill out the entire form. They couldn’t apply on a mobile phone; the application form, called MyBenefits CalWIN, didn’t work on mobile. Lots of the people Jake observed tried to complete the form on computers at the public library instead, but the library computers kicked you off after half an hour. You had to wait for your turn to come again and pick up where you left off.
SNAP is a federal program that states are responsible for administering. The smaller the jurisdiction in charge, the more likely that the program will be attuned to local needs and values. California, along with nine other states, has chosen to further devolve administration to its individual counties, putting the burden of managing client data on fifty-eight separate entities. To handle that burden, the counties (with the exception of Los Angeles) formed two consortia that pooled IT resources. When it became clear that clients should be able to apply online, each consortium then contracted for a single online application form to save money. It turned out to be quite expensive anyway: MyBenefits CalWIN, the form Jake studied, cost several million dollars to build. But at least that got divided across the eighteen counties in the consortium.
What those several million dollars had gotten them was another question. Jake and his Code for America colleagues published a “teardown” of the website, over a hundred screenshots of it in action, with each page marked up to highlight the parts that confused and frustrated the people trying to use it. (To be fair, the teardown also highlighted elements that were helpful to users; there were just far fewer of them.) The teardown was a powerful critique. It was noticed by anti-poverty advocates and the press alike, and the ways in which the counties were failing their clients started to get a lot of attention. Jake should not have been popular with the people responsible for MyBenefits CalWIN. Which was why he was surprised when HP, the vendor managing the website, invited him to a meeting of the consortium to present his work.
The meeting brought representatives from each of the counties to a business hotel in downtown Sacramento. It was only after Jake finished showing them his observations that he realized why he’d been invited. The HP representative at the meeting presented a variety of options for how the consortium might use its resources over the coming year, and then the county representatives began engaging in that hallmark of democracy: voting. One of the questions up for a vote was whether to engage some of HP’s contracted time to make MyBenefits CalWIN usable on a mobile phone. Fresh off Jake’s critique, that priority got the votes it needed to proceed. Jake had done the job he’d been invited to do without even knowing what it was.
What struck Jake about the process was not his success in convincing the county representatives. It was not that different from what Mary Ann had achieved when her recording of Dominic convinced the deputy secretary of the VA to let her team fix the health care application. The HP rep was interested in bringing to life for the county reps the burdens that applicants experienced. Jake was very good at doing that, and the rep had been smart to use him.
What Jake did find remarkable was the decision-making process. To him, it was clear how to decide the kinds of questions the group discussed that day. SNAP applicants were by definition low-income, and most low-income people use the web through their phones. So at Code for America, when Jake developed applications for safety-net benefits, he built them to work on mobile phones from the start. And when he and his team were trying to figure out the best way to phrase something, they came up with a few options that sounded simple and clear, and tested these options with program applicants. If lots of people stopped at some point when they filled out the form, it was a sign that that version of the instructions was confusing them. If some wording resulted in more applications being denied because the applicant misunderstood the question, that was another sign. Almost every design choice was, in effect, made by the users.
The counties, on the other hand, made those same choices by committee. Because each of the eighteen counties administers the SNAP program separately, the focus was on accommodating the unique business processes of each separate county and the many local welfare offices within the counties. It wasn’t that the county reps didn’t care about the experience of their users—their vote to start making MyBenefits CalWIN work on mobile phones was proof of that. But the process the consortium followed was not constructed to identify and address the needs of users. It had been set up to adjudicate between the needs of the counties. The result had been, for years, an experience for clients that was practically intolerable.
Ever since the founding of the United States, a core value for many has been restricting the concentration of government power. The colonists were, after all, rebelling against a monarchy. When power is concentrated in the hands of one person or one regime, the reasoning goes, we lose our liberty. We need to have some government, so we’ll have to trust some people to make some decisions, but best to make it hard for any one person to do anything significant, lest that person begin to act like a king. Best to make sure that any decisions require lots of different people to weigh in.
But as Jake saw, the way you get 212 questions on a form for food assistance is not concentrated power, it’s diffuse power. And diffuse power is not just an artifact of the complexities federalism can bring, with decisions delegated down to local government and then aggregated back up through mechanisms like the county consortia. The fear of having exercised too much power, and being criticized for it, is ever present for many public servants. The result is a compulsion to consult every imaginable stakeholder, except the ones who matter most: the people who will use the service.
A tech leader who made the transition from a consumer internet company to public service recently called me in frustration. He’d been trying to clarify roles on a new government project and had explained to multiple departments how important it would be to have a product manager, someone empowered to direct and absorb user research, understand both external and internal needs, and integrate all of it. The departments had all enthusiastically agreed. But when it came time to choose that person, each department presented my friend with a different name, sometimes several. There were more than a dozen in all.
He thought perhaps he was supposed to choose the product manager from among these names. But the department representatives explained that all these people would need to share the role of product manager, since each department had some stake in the product. Decisions about the product would be made by what was essentially a committee, something like the federal CIO Council that resulted in the ESB imperative. Members would be able to insist on what they believed their different departments needed, and no one would have the power to say no to anyone. Even without the complications of federalism, the project would still be doomed to exactly the kind of bloat that MyBenefits CalWIN suffered from.
This kind of cultural tendency toward power sharing makes sense. It is akin to saying this project will have no king, no arbitrary authority who might act imperiously. But the result is bloat, and using a bloated service feels intrusive and onerous. It’s easy to start seeing government as overreaching if every interaction goes into needless detail and demands countless hours.
Highly diffuse decision-making frameworks can make it very hard to build good digital services for the public. But they are rooted in laws that go back to long before the digital era.
This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-recoding-america-jennifer-phalka-metropolitan-books-food-stamps-143018881.html?src=rss
The technology industry isn't thrilled with Arkansas' law requiring social media age checks. NetChoice, a tech trade group that includes Google, Meta and TikTok, has sued the state of Arkansas over claimed US Constitution violations in the Social Media Safety Act. The measure allegedly treads on First Amendment free speech rights by making users hand over private data in order to access social networks. It also "seizes decision making" from families, NetChoice argues.
The alliance also believes the Act hurts privacy and safety by making internet companies rely on a third-party service to store and track kids' data. State residents often don't know or associate with that service, NetChoice claims, and an external firm is supposedly a "prime target" for hacks. The law also tries to regulate the internet outside state lines while ignoring federal law, according to the lawsuit. And since Arkansas can't verify residency without collecting data, it's effectively asking everyone to submit documents.
State Attorney General Tim Griffin tells Engadget in a statement that he looks forward to "vigorously defending" the Social Media Safety Act. The law requires all users to verify their age by submitting a driver's license or through other "commercially reasonable" methods. Anyone under 18 also needs a parent's consent. There are exceptions that appear to cover major social networks and their associated categories, such as those for "professional networking" (think LinkedIn) or short entertaining video clips (like TikTok).
Arkansas' requirement is part of a greater trend among politicians to demand age verification for social media. States like Utah, Connecticut and Ohio have either passed or are considering similar laws, while Senator Josh Hawley proposed a federal bill barring all social media access for kids under 16. They're concerned younger users might be exposed to creeps and inappropriate content, and that use can harm mental health by presenting a skewed view of the world and encouraging addiction.
There's no guarantee the lawsuit will succeed. If it does, though, it could affect similar attempts to verify ages through personal data. If Arkansas' approach is deemed unconstitutional, other states might have to drop their own efforts.
This article originally appeared on Engadget at https://www.engadget.com/tech-firms-sue-arkansas-over-social-media-age-verification-law-180002953.html?src=rss
Last year, Twitter sued India over orders to block content within the country, saying the government had applied its 2021 IT laws "arbitrarily and disproportionately." Now, India's Karnataka High Court has dismissed the plea, with a judge saying Twitter had failed to explain why it delayed complying with the new laws in the first place, TechCrunch has reported. The court also imposed a 5 million rupee (about $61,000) fine on the Elon Musk-owned firm.
"Your client (Twitter) was given notices and your client did not comply. Punishment for non-compliance is seven years imprisonment and unlimited fine. That also did not deter your client," the judge told Twitter's legal representation. "So you have not given any reason why you delayed compliance, more than a year of delay… then all of sudden you comply and approach the Court. You are not a farmer but a billon dollar company."
Twitter’s relationship with India was fraught for much of 2021. In February, the government threatened to jail Twitter employees unless the company removed content related to protests by farmers held that year. Shortly after that, India ordered Twitter to pull tweets criticizing the country’s response to the COVID-19 pandemic. More recently, the government ordered Twitter to block tweets from Freedom House, a nonprofit organization that claimed India was an example of a country where freedom of the press is on the decline.
Those incidents put Twitter in a compromising position. It either had to comply with government orders to block content (and face censorship criticism inside and outside the country), or ignore them and risk losing its legal immunity. In August, it complied and took down the content as ordered.
The court order follows recent comments from Twitter co-founder Jack Dorsey, who said India threatened to raid employees' homes if the company didn't comply with orders to remove posts and accounts. In a tweet, India's deputy minister for information technology called that "an outright lie," saying Twitter was "in non-compliance with the law."
Twitter filed the suit around the same time that Elon Musk started trying to wiggle out of buying Twitter. Since then, Twitter has often complied with government takedown requests — most recently in Turkey, where it limited access to some tweets ahead of a tightly contested election won by incumbent president Recep Tayyip Erdogan.
This article originally appeared on Engadget at https://www.engadget.com/twitters-lawsuit-over-censorship-in-india-has-been-dismissed-114031691.html?src=rss
Meta's Oversight Board has called for a six-month ban on Cambodian Prime Minister Hun Sen's Facebook and Instagram accounts for inciting violence, it wrote in a news release. It's the second time in the past week that the Board has reversed a high-profile Meta decision, after a case in which a Brazilian user posted a video asking followers to "besiege" the government. However, it's the first time the Oversight Board has asked for a head of state to be banned, a decision that may have ramifications for future policy decisions.
Hun Sen, who has led Cambodia since 1985, is facing an election this month. Earlier in the year, he posted a video of a speech telling political opponents he'd "gather CPP (Cambodia People's Party) people to protest and beat you up." Following several user reports and appeals, Meta policy and subject matter experts recommended leaving the post up based on newsworthiness, even though it violated the company's community standards for violence and incitement.
"Given the severity of the violation, Hun Sen’s history of committing human rights violations and intimidating political opponents, as well as his strategic use of social media to amplify such threats, the Board calls on Meta to immediately suspend Hun Sen’s Facebook page and Instagram account for six months," it wrote. The suspension is non-binding, but Meta must take down the contested video within 60 days.
In explaining the decision, the Board said that the "harm caused by allowing the content on the platform outweighs the post's public interest value," particularly given the prime minister's reach on social media. The original moderation decision, it added, "results in Meta's platforms contributing to these harms by amplifying the threats and resulting intimidation."
The Board argued that such behavior should not be rewarded: Meta should more heavily weigh press freedom when considering newsworthiness so that the allowance is not applied to government speech in situations where a government has made its own content more newsworthy by limiting the free press.
On top of Hun Sen's ban, the Board advised Meta to make clear that its moderation policies are not restricted to single incidents of civil unrest or violence. It also recommended removing the newsworthiness allowance in cases involving incitement to violence, and prioritizing reviews involving heads of state and senior members of government. Finally, it asked Meta to reveal the reasoning behind its decision on Hun Sen "and in all account-level actions against heads of state and senior members of government."
The Board's review could set a bar for moderation of other authoritarian leaders in Asia, Human Rights Watch director Phil Robertson told The Post, while calling the takedown request of Hun Sen "long overdue." Facebook famously banned former US president Donald Trump from the platform (and restored his account earlier this year), but has caved to censorship demands in nations including Vietnam. Twitter owner Elon Musk recently justified censorship in Turkey ahead of an election, saying the company has "no actual choice" but to comply with such requests.
The Cambodian government hasn't responded yet to the board's decision, but previously said that the remarks were "only a confirmation of the legal process" in the nation. Hun Sen, who has 14 million Facebook followers, said today that he would halt any active posting on Facebook and use Telegram instead.
This article originally appeared on Engadget at https://www.engadget.com/metas-oversight-board-urges-facebook-to-suspend-cambodias-prime-minister-132014772.html?src=rss