Posts by Karissa Bell

Elon Musk and Twitter are now fighting about Signal messages

Elon Musk’s private messages could once again land him in hot water in his legal fight with Twitter. Lawyers for the two sides faced off in Delaware’s Court of Chancery ahead of the October trial that will determine the fate of the deal.

Among the issues raised in the more than three-hour hearing was Musk’s use of the encrypted messaging app Signal. Twitter’s lawyers claim that Musk has been withholding messages sent via the app, citing a screenshot of an exchange between Musk and Jared Birchall, the head of Musk’s family office.

According to Twitter’s lawyers, the message referenced Morgan Stanley and Marc Andreessen as well as “a conversation about EU regulatory approval” of Musk’s deal with Twitter. Twitter’s lawyers said they uncovered a screenshot of the exchange after Musk and Birchall had denied using Signal to talk about the deal. The screenshot showed the message was set to automatically delete.

Lawyers for Twitter also cited “a missing text message” between Musk and Oracle Chairman Larry Ellison, who was set to be a co-investor in the Twitter deal. Musk and Ellison were texting the morning before Musk tweeted that the Twitter deal was “temporarily on hold.” It’s not clear what the significance of the texts is, but Twitter’s lawyers noted that Musk wrote to Ellison saying “interesting times” before arranging a phone call with him.

Twitter’s lawyers are asking the judge in the case, Kathaleen St. J. McCormick, to sanction Musk over his side’s handling of his messages. “We do think that the time has come for the court to issue a severe sanction,” Twitter’s lawyers said during the hearing.

Musk’s side attempted to downplay the significance of the Tesla CEO’s use of Signal. “There actually is no evidence that we destroyed evidence,” one of Musk’s lawyers responded. “Signal, you know, it sounds like it's a nefarious device,” she said. “In fact, Twitter executives have testified that a number of them actually use Signal messaging.”

Musk’s lawyers cited the existence of Signal messages between Jack Dorsey and board chair Bret Taylor, and noted that current CEO Parag Agrawal has also turned over Signal messages. “Signal is not some exotic mechanism, it's very common in Silicon Valley to use this platform,” she said.

Notably, the latest hearing is not the first time Twitter’s lawyers have used Musk’s private messages, obtained in discovery, in their bid to enforce the original terms of the deal. Twitter’s lawyers previously called out a text message between Musk and one of his Morgan Stanley bankers in which Musk cited concerns about “World War 3” as a reason to slow-roll his negotiations with Twitter.

McCormick is expected to rule on Twitter’s motion to sanction Musk in the next couple of days. A five-day trial that will determine the fate of the deal is scheduled for October 17th.

Meta dismantles a China-based network of fake accounts ahead of the midterms

Meta has taken down a network of fake accounts from China that targeted the United States with memes and posts about “hot button” political issues ahead of the midterm elections. The company said the fake accounts were discovered before they amassed a large following or attracted meaningful engagement, but that the operation was significant due to its timing and the topics the accounts posted about.

The network consisted of 81 Facebook accounts, eight Facebook Pages, two Instagram accounts and a single Facebook Group. Just 20 accounts followed at least one of the Pages and the group had about 250 members, according to Meta.

The fake accounts posted in four different “clusters” of activity, Meta said, beginning with Chinese-language content “about geopolitical issues, criticizing the US.” The next cluster graduated to memes and posts in English, while subsequent clusters created Facebook Pages and hashtags that also circulated on Twitter. In addition to the US, some clusters also targeted posts to people in the Czech Republic.

During a call with reporters, Meta’s Global Threat Intelligence Lead Ben Nimmo said the people behind the accounts “made a number of mistakes” that allowed Meta to catch them more easily, such as only posting during working hours in China. At the same time, Nimmo said the network represented a “new direction for Chinese influence operations” because the accounts posed as both liberals and conservatives, advocating for both sides on issues like gun control and abortion rights.

“It's like they were using these hot button issues to try and find an entry point into American discourse,” Nimmo said. “It is an important new direction to be aware of.” The accounts also shared memes about President Joe Biden, Florida Senator Marco Rubio, Utah Senator Mitt Romney and House Speaker Nancy Pelosi, according to Meta.

Meta also shared details about a much larger network of fake accounts from Russia, which it described as the “most complex Russian-origin operation that we’ve disrupted since the beginning of the war in Ukraine.” The company identified more than 1,600 Facebook accounts and 700 Facebook Pages associated with the effort, which drew more than 5,000 followers.

The network used the accounts to boost a series of fake websites that impersonated legitimate news outlets and European organizations. The operation targeted people in Germany, France, Italy, Ukraine and the United Kingdom, and posted in several languages.

“They would post original articles that criticized Ukraine and Ukrainian refugees, praised Russia and argued that Western sanctions on Russia would backfire,” Meta writes in its report. “They would then promote these articles and also original memes and YouTube videos across many internet services, including Facebook, Instagram, Telegram, Twitter, petitions websites Change[.]org and Avaaz[.]com, and even LiveJournal.”

Meta notes that “on a few occasions” the posts from these fake accounts were “amplified by Russian embassies in Europe and Asia” though it didn’t find direct links between the embassy accounts and the network. For both the Russia and China-based networks, Meta said it was unable to attribute the fake accounts to specific individuals or groups within the countries.

The takedowns come as Meta and its peers are ramping up security and anti-misinformation efforts to prepare for the midterm elections in the fall. For Meta, that means largely using the same strategy it employed in the 2020 presidential election: highlighting authoritative information and resources while relying on labels and third-party fact checkers to tamp down false and unverified info.

The FDA may have unintentionally made 'Nyquil Chicken' go viral on TikTok

If you’ve been anywhere near social media, local news, or late-night talk shows in the last few days, you’ve probably heard something about “Nyquil Chicken,” a supposedly viral TikTok “challenge” that’s exactly what it sounds like: cooking chicken in a marinade of cold medicine.

News about the supposed trend is usually accompanied by vomit-inducing photos of raw chicken simmering in dark green syrup. It’s both disgusting and, as the FDA recently reminded the public, just as toxic as it looks. But it turns out Nyquil Chicken was neither new nor particularly viral, and the FDA’s bizarrely timed warning may have backfired, making the meme more popular than ever.

First, a bit of history: As reporter Ryan Broderick points out in his newsletter Garbage Day, Nyquil Chicken originated as a joke on 4chan in 2017. The meme briefly resurfaced in January, when it got some traction on TikTok before once again fading away.

Then, last week, the FDA inexplicably issued a press release warning about the dangers of cooking chicken in Nyquil. In a notice titled “A Recipe for Danger: Social Media Challenges Involving Medicines,” the FDA refers to it as a “recent” trend. But the agency cites no recent examples, and it’s unclear why it opted to push out a warning more than eight months after the meme first appeared on TikTok.


Now, in what we can only hope will be a valuable lesson on unintended consequences, we know that it was likely the FDA’s warning about Nyquil Chicken that pushed this “challenge” to new levels of virality, at least on TikTok. TikTok has confirmed that on September 14th, the day before the FDA notice, there were only five searches for “Nyquil chicken” in the app. But by September 21st, that number had skyrocketed “by more than 1,400 times” — on the order of 7,000 searches — according to BuzzFeed News, which first reported the TikTok search data.

TikTok, which has recently taken steps to limit the spread of both dangerous “challenges” and “alarmist warnings” about hoaxes, is now blocking searches for “Nyquil Chicken.” Searches instead direct users to resources encouraging them to “stop and take a moment to think” before pursuing a potentially dangerous “challenge.”

As both BuzzFeed and Gizmodo note, there’s little evidence that people are actually cooking chicken in Nyquil, much less ingesting it. That’s a good thing because, as the FDA makes very clear, doing so is not only extremely gross but highly toxic. But the whole thing is yet another example of why we should all be more skeptical of panic-inducing viral “challenges.”

Facebook violated Palestinians' right to free expression, says report commissioned by Meta

Meta has finally released the findings of an outside report that examined how its content moderation policies affected Israelis and Palestinians amid an escalation of violence in the Gaza Strip last May. The report, from Business for Social Responsibility (BSR), found that Facebook and Instagram violated Palestinians’ right to free expression.

“Based on the data reviewed, examination of individual cases and related materials, and external stakeholder engagement, Meta’s actions in May 2021 appear to have had an adverse human rights impact on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination, and therefore on the ability of Palestinians to share information and insights about their experiences as they occurred,” BSR writes in its report.

The report also notes that “an examination of individual cases” showed that some Israeli accounts were also erroneously banned or restricted during this period. But the report's authors highlight several systemic issues they say disproportionately affected Palestinians.

According to the report, “Arabic content had greater over-enforcement,” and “proactive detection rates of potentially violating Arabic content were significantly higher than proactive detection rates of potentially violating Hebrew content.” The report also notes that Meta had an internal tool for detecting “hostile speech” in Arabic, but not in Hebrew, and that Meta’s systems and moderators had lower accuracy when assessing Palestinian Arabic.

As a result, many users’ accounts were hit with “false strikes,” and wrongly had posts removed by Facebook and Instagram. “These strikes remain in place for those users that did not appeal erroneous content removals,” the report notes.

Meta commissioned the report following a recommendation from the Oversight Board last fall. In a response to the report, Meta says it will update some of its policies, including several aspects of its Dangerous Organizations and Individuals (DOI) policy. The company says it’s “started a policy development process to review our definitions of praise, support and representation in our DOI Policy,” and that it’s “working on ways to make user experiences of our DOI strikes simpler and more transparent.”

Meta also notes it has “begun experimentation on building a dialect-specific Arabic classifier” for written content, and that it has changed its internal process for managing keywords and “block lists” that affect content removals.

Notably, Meta says it’s “assessing the feasibility” of a recommendation that it notify users when “feature limiting and search limiting” is placed on their accounts after a strike. Instagram users have long complained that the app shadowbans them, or reduces the visibility of their accounts, when they post about certain topics. These complaints increased last spring, when users reported that they were barred from posting about Palestine or that the reach of their posts was diminished. At the time, Meta blamed an unspecified “glitch.” BSR’s report notes that the company had also implemented emergency “break glass” measures that temporarily throttled all “repeatedly reshared content.”

Twitter is logging out some users following password reset 'incident'

Twitter has disclosed an “incident” affecting the accounts of an unspecified number of users who opted to reset their passwords. According to the company, a “bug” introduced sometime in the last year prevented Twitter users from being logged out of their accounts on all of their devices after initiating a password reset.

“[I]f you proactively changed your password on one device, but still had an open session on another device, that session may not have been closed,” Twitter explains in a brief blog post. “Web sessions were not affected and were closed appropriately.”
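For context on what went wrong: the expected behavior is for a password reset to revoke every active session token, not just the session on the device where the reset happened. Here’s a minimal sketch of that pattern, assuming a simple server-side session store — all names and structures below are illustrative, not Twitter’s actual systems:

```python
import secrets

# Illustrative in-memory session store: user_id -> set of active session
# tokens. A real service would use a shared database or cache so sessions
# on every device can be revoked at once.
sessions: dict[str, set[str]] = {}

def create_session(user_id: str) -> str:
    """Issue a new session token when a device logs in."""
    token = secrets.token_urlsafe(32)
    sessions.setdefault(user_id, set()).add(token)
    return token

def reset_password(user_id: str, new_password_hash: str) -> None:
    """Store the new credential, then revoke ALL of the user's sessions.

    The bug Twitter described is equivalent to skipping this revocation
    step for non-web sessions, leaving devices with old tokens logged in.
    """
    # (password hashing and persistence omitted; this sketch only shows
    # the session-revocation side of a reset)
    sessions.pop(user_id, None)

def is_logged_in(user_id: str, token: str) -> bool:
    """Check whether a presented token is still valid."""
    return token in sessions.get(user_id, set())
```

Twitter’s “proactive” logouts amount to applying that revocation retroactively to the sessions its buggy reset flow left open.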

Twitter says it is “proactively” logging some users out as a result of the bug. The company attributed the issue to “a change to the systems that power password resets” that occurred at some point in 2021. A Twitter spokesperson declined to elaborate on when this change was made or exactly how many users are affected. “I can share that for most people, this wouldn't have led to any harm or account compromise,” the spokesperson said. 

While Twitter states that “most people” wouldn’t have had their accounts compromised as a result, the news could be worrying for those who have used shared devices, or dealt with a lost or stolen device in the last year.

Notably, Twitter’s disclosure of the incident comes as the company is reeling from allegations by its former head of security, who filed a whistleblower complaint accusing the company of “grossly negligent” security practices. Twitter has so far declined to address the claims in detail, citing its ongoing litigation with Elon Musk, who is using the whistleblower allegations in his legal bid to get out of his $44 billion deal to buy the company.

Meta is reportedly cutting staff and reorganizing teams

Meta has begun cutting staff and reorganizing teams in an effort to reduce costs, according to a new report in The Wall Street Journal. The company apparently doesn’t want to frame the changes as layoffs, but is reportedly “quietly nudging out a significant number of staffers” as it prepares for deeper cuts.

It’s not clear how many Meta employees have been affected so far. According to the report, Meta has been allowing staffers to apply for new jobs within the company, but workers only have a 30-day window to do so. The result, according to The Journal, is that “workers with good reputations and strong performance reviews are being pushed out on a regular basis.”

Meta has been signaling for some time that it will reduce staff and cut projects as it deals with shrinking revenue amid what Mark Zuckerberg has described as an “economic downturn.” The CEO warned during the company’s most recent earnings call that Meta would slow hiring and would need to “get more done with fewer resources.”

Zuckerberg has recently told employees the company is facing “serious times,” and managers have been asked to identify “low performers” to cut. The company has also axed some projects from its Reality Labs division, which lost $10 billion in 2021. Dozens of Meta contractors employed by an outside firm were also recently told their jobs had been eliminated.

TikTok adds new rules for politicians' accounts ahead of the midterm elections

TikTok is adding new rules for accounts belonging to politicians, government officials and political parties ahead of the midterm elections. The company says it will require these accounts to go through a “mandatory” verification process, and will restrict them from accessing advertising and other revenue-generating features.

Up until now, verification for politicians and other officials was entirely optional. But that’s now changing, at least in the United States, as TikTok gears up for the midterm elections this fall. In a blog post, the company says the update is meant to help it enforce its rules, which bar political advertising of any kind, more consistently.

Once these accounts are verified, TikTok will be able to block politicians and political parties from accessing the platform’s advertising tools and other revenue-generating features like tipping. The accounts will also be barred from receiving payouts from the company’s creator fund and from using in-app shopping features.

TikTok says it also plans to add further restrictions that will prevent politicians and political parties from using the platform to solicit campaign contributions or other donations, even on outside websites. That policy, which will take effect “in the coming weeks,” will bar videos that direct viewers to third-party fundraising sites. It also means that politicians will not be allowed to post videos asking for donations.

The new policies are the latest piece of TikTok’s strategy to prepare for the midterm elections. The company already began rolling out an in-app Elections Center to highlight voting resources and details about local races. But enforcing its ban on political ads has proved to be challenging for TikTok, which has had to contend with undisclosed branded content from creators. The new rules don’t address that issue specifically, but the added restrictions for campaigns and politicians will make it more difficult for candidates and other officials to evade its rules.

YouTube will share ad revenue with Shorts creators

YouTube just announced a major change to its Partner Program that will allow its short-form video creators to make a lot more money from its platform. The company is adding advertising to its TikTok rival, YouTube Shorts, and will share the revenue with creators.

The change could help YouTube draw creators away from TikTok, where stars have complained about low creator fund payouts. “This is the first time real revenue sharing is being offered for short-form video on any platform at scale,” YouTube Chief Product Officer Neal Mohan said during an event announcing the news.

With the new revenue-sharing program, creators who get 10 million views on Shorts in a 90-day period can apply to join the Partner Program. As on TikTok, ads on Shorts will appear between videos in the feed. Revenue from the ads will be pooled and split among creators, Mohan said. Creators will get a 45 percent cut of the ad revenue, regardless of whether they use music.

“Each creator is paid on their share of total Shorts views, and this revenue share remains the same, even if they use music,” he explained.
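In rough terms, the model Mohan described is a two-step calculation: a creator’s slice of the pooled ad revenue is proportional to their share of total Shorts views, and they keep 45 percent of that slice. Here’s a quick sketch of the arithmetic with made-up numbers — the pool size and view counts are hypothetical, and this glosses over any licensing costs that may come out of the pool:

```python
def shorts_payout(creator_views: int, total_views: int,
                  ad_pool_dollars: float, creator_cut: float = 0.45) -> float:
    """Estimate a payout under the pooled revenue-share model as described.

    Step 1: allocate the pool by the creator's share of total Shorts views.
    Step 2: the creator keeps a 45 percent cut of their allocation.
    """
    allocation = ad_pool_dollars * (creator_views / total_views)
    return allocation * creator_cut

# Hypothetical: 20M of 1B total monetized Shorts views against a $10M pool
# -> $10M * 2% = $200,000 allocated, of which the creator keeps $90,000.
print(shorts_payout(20_000_000, 1_000_000_000, 10_000_000))  # 90000.0
```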

Developing…

YouTube’s ‘dislike’ barely works, according to new study on recommendations

If you’ve ever felt like it’s difficult to “un-train” YouTube’s algorithm from suggesting a certain type of video once it slips into your recommendations, you’re not alone. In fact, it may be even more difficult than you think to get YouTube to accurately understand your preferences. One major issue, according to new research conducted by Mozilla, is that YouTube’s in-app controls, such as the “dislike” button, are largely ineffective at managing suggested content. According to the report, these buttons “prevent less than half of unwanted algorithmic recommendations.”

Researchers at Mozilla used data gathered from RegretsReporter, its browser extension that allows people to “donate” their recommendations data for use in studies like this one. In all, the report relied on millions of recommended videos, as well as anecdotal reports from thousands of people.

Mozilla tested the effectiveness of four different controls: the thumbs down “dislike” button, “not interested,” “don’t recommend channel” and “remove from watch history.” The researchers found that these had varying degrees of effectiveness, but that the overall impact was “small and inadequate.”

Of the four controls, the most effective was “don’t recommend channel,” which prevented 43 percent of unwanted recommendations, while “not interested” was the least effective, preventing only about 11 percent of unwanted suggestions. The “dislike” button was nearly the same at 12 percent, and “remove from watch history” weeded out about 29 percent.
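A note on what “prevented” means here: the figures describe how much less often unwanted videos kept being recommended after a user pressed one of the controls, relative to a baseline. Below is a minimal sketch of that kind of metric, under the assumption that it’s a simple rate reduction against a control group — this framing is our reading, not Mozilla’s exact formula, which is detailed in its report:

```python
def prevention_rate(rate_after_action: float, rate_baseline: float) -> float:
    """Fraction of unwanted recommendations prevented, relative to baseline.

    Example: if unwanted videos make up 10% of a baseline user's
    recommendations but only 5.7% after pressing "don't recommend channel",
    the button prevented 1 - 0.057/0.10 = 43% of unwanted recommendations.
    """
    return 1.0 - rate_after_action / rate_baseline

print(round(prevention_rate(0.057, 0.10), 2))  # 0.43
```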

In their report, Mozilla’s researchers noted the great lengths study participants said they would sometimes go to in order to prevent unwanted recommendations, such as watching videos while logged out or while connected to a VPN. The researchers say the study highlights the need for YouTube to better explain its controls to users, and to give people more proactive ways of defining what they want to see.

“The way that YouTube and a lot of platforms operate is they rely a lot [on] passive data collection in order to infer what your preferences are,” says Becca Ricks, a senior researcher at Mozilla who co-authored the report. “But it's a little bit of a paternalistic way to operate where you're kind of making choices on behalf of people. You could be asking people what they want to be doing on the platform versus just watching what they're doing.”

Mozilla’s research comes amid increased calls for major platforms to make their algorithms more transparent. In the United States, lawmakers have proposed bills to scale back “opaque” recommendation algorithms and to hold companies accountable for algorithmic bias. The European Union is even further ahead: the recently passed Digital Services Act will require platforms to explain how recommendation algorithms work and to open them to outside researchers.

A new California law will require social media platforms to add more 'protections' for children

California Governor Gavin Newsom has signed into law a new bill that could upend how social media platforms deal with underage users. The bill, known as AB 2273, “requires online platforms to consider the best interest of child users and to default to privacy and safety settings that protect children’s mental and physical health and wellbeing,” according to a press release from Newsom’s office.

The law, which won’t go into effect until July of 2024, is meant to place further restrictions on the type of data that platforms can collect from children. From Newsom’s press release: “AB 2273 prohibits companies that provide online services, products or features likely to be accessed by children from using a child’s personal information; collecting, selling, or retaining a child’s geolocation; profiling a child by default; and leading or encouraging children to provide personal information.”

However, it’s not yet clear exactly what this will mean on a practical level for social media, games and other online platforms. And the bill has already faced sharp criticism from privacy advocates as well as the tech industry.

One criticism, backed by digital rights groups, is that requiring companies to identify child users could harm the privacy of everyone, not just kids. “The bill is so vaguely and broadly written that it will almost certainly lead to widespread use of invasive age verification techniques that subject children (and everyone else) to more surveillance while claiming to protect their privacy,” Fight For the Future wrote in a statement denouncing the bill. “Requiring age verification also makes it nearly impossible to use online services anonymously, which threatens freedom of expression, particularly for marginalized communities, human rights activists, whistleblowers, and journalists.”

Newsom’s office said in a statement that a “Children’s Data Protection Working Group” would write a report on “best practices” for implementing the law by January 2024.

The California law comes as pressure has mounted on social media companies to do more to protect the privacy and wellbeing of children who use their platforms. Lawmakers in the Senate have also proposed federal legislation that would increase data protections for younger users, and President Joe Biden has said he supports banning online advertising that targets children.