Posts with «author_name|karissa bell» label

Twitter's lawyers say Elon Musk wanted out of the deal because of 'World War 3,' not bots

The whistleblower complaint from Twitter’s former head of security is already complicating the company’s legal battle with Elon Musk. Lawyers representing Musk and Twitter met in court Tuesday for a hearing that will determine whether the claims made by Peiter “Mudge” Zatko can be added to Musk’s legal case to get out of his $44 billion commitment to buy Twitter.

Notably, the hearing was one of the first times any Twitter representative has publicly addressed Zatko’s complaint. In the two weeks since Zatko went public, Twitter has largely stayed silent on the substance of the claims.

During the hearing, Twitter’s lawyers portrayed Zatko as a disgruntled employee, saying that he had a “huge ax to grind” with the company and that he “was not in charge of spam at Twitter.” They accused him of “structuring his whistleblower complaint to tie it to the merger agreement.” (Zatko’s lawyers previously said he didn’t go public in order to “benefit Musk.”) Notably, Twitter’s lawyers didn’t address claims that the company’s lax security practices may have harmed national security or that CEO Parag Agrawal told Zatko to lie to the company’s board.

Twitter’s lawyers did suggest that Musk was looking for reasons to kill the deal before Zatko’s complaint was public. At one point, Twitter’s lawyer quoted from a May 3rd text message Musk sent to his banker at Morgan Stanley:

“Let’s slow down just a few days … it won’t make sense to buy Twitter if we’re headed into World War 3,” Twitter’s lawyer read aloud, quoting Musk. “This is why Mr. Musk didn’t want to buy Twitter, this stuff about the bots, mDAU [monetizable daily active users] and Zatko is all pretext.”

On the other side, Musk’s lawyers touted Zatko’s credentials as a “decorated” executive who had once been offered a position as a US government official. They said Musk had “nothing to do with” Zatko’s whistleblower complaint and that Twitter had purposely hidden damaging information. Whether it will be enough to sway the judge in the case, though, is unclear. In one exchange, the judge pointedly remarked on Musk’s decision to waive due diligence before agreeing to the acquisition.

“Why didn’t we discover this in diligence?” Musk’s lawyer said, referencing Zatko’s whistleblower complaint. “They hid it, that’s why.” “We’ll never know, right?” the judge responded. “Because the diligence didn’t happen.”

Musk’s lawyers, pushing for the October trial to be delayed, closed out the more than three-hour long hearing by arguing that “it’s not us causing this chaos or this delay.”

“Nobody at Twitter is having all-hands meetings today over the poop emoji from two months ago,” Musk’s lawyer said, in an apparent — and unprompted — reference to a May 16th tweet from Musk directed at Agrawal. “The reason that they're having all-hands meetings today at Twitter is because a senior decorated executive said that the company was committing fraud. That’s our fault? That’s our chaos? That’s their chaos.”

Meta's virtual Connect event will stream live Oct. 11th

Meta Connect, the company’s annual event devoted to augmented and virtual reality, is just over a month away. The newly renamed event will take place October 11th, Mark Zuckerberg announced in a post on Facebook.

As with last year, the event will be virtual and streamed live on Reality Labs’ Facebook page. For now, there are few other official details available. The Meta Connect website says more information about speakers and the schedule for the day are “coming soon.”

But we already know a bit about what to expect. Meta is likely to finally show off Project Cambria, its new high-end VR headset that may be called Meta Quest Pro. In his Facebook post, Zuckerberg seemed to tease the big reveal, with a photo of him wearing a headset that was almost entirely obscured. Zuckerberg also recently promised “major updates” to Horizon avatars after his own low-res VR likeness was mercilessly dragged.


It will also be the company’s first big VR event since Zuckerberg announced Facebook’s rebrand to Meta at Connect last fall. At the time, he tried to articulate his vision for the metaverse and the social network’s role in it. But that vision hasn't always been clear. And the idea of a "metaverse" has still been a source of confusion (and sometimes derision, as evidenced by the reaction to Zuckerberg’s avatar). Meta Connect will also be Zuckerberg’s latest opportunity to not only hype the company’s latest hardware, but to try to build excitement for an eventual metaverse.

Thousands of Google's cafeteria workers 'quietly unionized during the pandemic,' report says

Since the start of the COVID-19 pandemic, 4,000 people who work in Google’s cafeterias have joined unions, according to a new report in The Washington Post. The report notes that “about 90 percent of total food services workers at Google” are now unionized.

That number is particularly significant because the company’s cafeterias, like those of many of its peers, are overwhelmingly staffed by contract workers who don’t get the same benefits as full-time employees. Contractors across the company have pushed for higher wages and increased protections in recent months.

Cafeteria workers at Google’s Atlanta office could soon be the latest to join the ranks of unionized workers. Workers employed by a contracting firm called Sodexo reportedly told their manager they plan to unionize, and Sodexo said they would not block the move if “a majority” of workers supported it.

It’s unclear when an official agreement may be reached but a spokesperson for Unite Here, the union representing Google’s cafeteria workers, told The Post they were “hopeful that we can quickly reach an agreement on a union contract.” Other cafeteria workers at Google have already seen significant benefits since joining Unite Here. According to The Post, “the average unionized worker at a Google cafeteria makes $24 an hour, pays little to nothing for health insurance and has access to a pension plan.” By contrast, the Sodexo workers in Atlanta make $15 an hour and can spend “hundreds” on health insurance.

TikTok denies security breach after hackers claim to have records of more than a billion users

TikTok has denied a security breach after posts on hacking forums claimed to have compromised the app’s source code, as well as account details of potentially billions of people. In a statement posted to Twitter, the company said it “found no evidence of a breach,” following an investigation of the claims. The company also told Bloomberg UK that the alleged source code posted by the hackers “is completely unrelated to TikTok’s backend source code.”

Claims of a potential breach had been circulating among the security community after a post on a hacking forum claimed to be in possession of a database with more than two billion entries related to TikTok and WeChat accounts. The hacking group claimed to have obtained the TikTok records from an insecure cloud server.

The supposed hackers published a sample of the TikTok data but, as security researcher Troy Hunt pointed out, it contained data that was already publicly accessible and thus “could have been constructed without breach.” Hunt, who runs the “haveibeenpwned” service, said the data was overall “pretty inconclusive.”

TikTok prioritizes the privacy and security of our users’ data. Our security team investigated these claims and found no evidence of a security breach. https://t.co/TdCZDUFLPN

— TikTokComms (@TikTokComms) September 5, 2022

While TikTok has strongly denied a breach, the info in the database could have come from other sources. As Bleeping Computer notes, it could be the result of a data broker or some other third party that scraped publicly available data from the service.

Claims of a security breach come just days after Microsoft researchers disclosed that they had found a “high-severity vulnerability” in TikTok’s Android app that put millions of accounts at risk. Microsoft said the vulnerability was fixed less than a month after it alerted TikTok to the issue in February of 2022. TikTok has long faced questions about its security practices and what user data is shared with parent company ByteDance. The company said last month that Oracle would review its algorithms and content moderation systems in an effort to assuage concerns.

Meta faces $402 million EU fine over Instagram's privacy settings for children

Meta has been fined €405 million ($402 million) by the Irish Data Protection Commission for its handling of children’s privacy settings on Instagram, which violated Europe’s General Data Protection Regulation (GDPR). As Politico reports, it’s the second-largest fine to come out of Europe’s GDPR laws, and the third (and largest) fine levied against Meta by the regulator.

The fine stems from the photo-sharing app’s privacy settings on accounts run by children. The DPC had been investigating Instagram over children’s use of business accounts, which made personal data like email addresses and phone numbers publicly visible. The investigation also covered Instagram’s policy of defaulting all new accounts, including those of teens, to be publicly viewable.

“This inquiry focused on old settings that we updated over a year ago, and we’ve since released many new features to help keep teens safe and their information private," a Meta spokesperson told Politico in a statement. "Anyone under 18 automatically has their account set to private when they join Instagram, so only people they know can see what they post, and adults can’t message teens who don’t follow them. We engaged fully with the DPC throughout their inquiry, and we’re carefully reviewing their final decision.”

The fine, which Meta could still appeal, comes as Instagram has faced intense scrutiny over its handling of child safety issues. The company halted work on an Instagram Kids app last year following a whistleblower’s claims that Meta ignored its own research indicating the app can have a negative impact on some teens’ mental health. Since then, the app has added more safety features, including changing default settings on teen accounts to private.

The next USB standard will double existing speeds even with an older cable

There’s a new, super-fast version of USB4 on the horizon, and you won’t even have to buy a brand-new cable to take advantage of it. The USB Promoter Group announced that the next-gen USB4 Version 2.0 standard is “pending release” and will double the bandwidth of existing USB4 connections, from 40 Gbps to up to 80 Gbps.

Amazingly, the new standard is also backwards compatible with previous USB 4 cables. This means that existing USB-C cables capable of 40 Gbps will also get the faster speeds when the new standard becomes available. From the press release: “Key characteristics of the updated USB4 solution include: Up to 80 Gbps operation, based on a new physical layer architecture, using existing 40 Gbps USB Type-C passive cables and newly-defined 80 Gbps USB Type-C active cables.”
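To put the doubling in perspective, here's a back-of-the-envelope calculation (assuming ideal line rates and ignoring encoding and protocol overhead, so real-world transfers will be slower):

```python
def transfer_seconds(size_gigabytes: float, link_gbps: float) -> float:
    """Idealized transfer time: gigabytes * 8 bits per byte, divided by link speed in Gbps."""
    return size_gigabytes * 8 / link_gbps

# Moving a 100 GB video library over the same USB-C cable:
print(transfer_seconds(100, 40))  # 20.0 — seconds at today's 40 Gbps
print(transfer_seconds(100, 80))  # 10.0 — seconds at the announced 80 Gbps
```

In other words, the same passive cable that saturates at 40 Gbps today would, under the new spec, move the same data in half the time.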

The USB Promoter Group didn’t explain the details around how that’s possible, but a spokesperson for the organization told The Verge it was “a requirement when the new specification was developed and the specifics as to how 80Gbps signaling is accomplished will be disclosed once the final specification is released.”

It’s not yet clear when the new standard will actually reach consumers. In its press release, the USB Promoter Group said the “update is specifically targeted to developers at this time” so it could still be some time before the rest of us can get computers with the new super-fast connector.

YouTube is still battling 2020 election misinformation as it prepares for the midterms

YouTube and Google are the latest platforms to share more about how they are preparing for the upcoming midterm elections, and the flood of misinformation that will come with them.

For YouTube, much of that strategy hinges on continuing to counter misinformation about the 2020 presidential election. The company’s election misinformation policies already prohibit videos that allege “widespread fraud, errors, or glitches” occurred in any previous presidential election. In a new blog post about its preparations for the midterms, the company says it's already removed “a number of videos related to the midterms” for breaking these rules, and that other channels have been temporarily suspended for videos related to the upcoming midterms.

The update comes as YouTube continues to face scrutiny for its handling of the 2020 election, and whether its recommendations pushed some people toward election fraud videos. (Of note, the Journal of Online Trust and Safety published a study on the topic today.)

In addition to taking down videos, YouTube also says it will launch “an educational media literacy campaign” aimed at educating viewers about “manipulation tactics used to spread misinformation.” The campaign will launch in the United States first, and will cover topics like “using emotional language” and “cherry picking information,” according to the company.


Both Google and YouTube will promote authoritative election information in their services, including in search results. Before the midterms, YouTube will link to information about how to vote, and on Election Day, videos related to the midterms will link to “timely context around election results.” Similarly, Google will surface election results directly in search, as it has done in previous elections.

The company is also trying to make it easier to find details about local and regional races. Beginning in “the coming weeks,” Google will highlight local news sources from different states in election-related searches.

Twitter still hasn't addressed 'egregious' whistleblower claims

Twitter now has a whistleblower problem of its own. Last week, the company’s former head of security, Peiter “Mudge” Zatko, went public with an extensive whistleblower complaint detailing numerous security lapses and other issues he experienced during his tenure.

Much of the complaint details specific security problems he encountered. It also repeatedly blasts Twitter’s executives for putting user and revenue growth ahead of platform safety, and claims that in some cases executives lied to both Twitter’s board and the public about these issues.

But some of the most striking claims in the documents published by The Washington Post, which include the 84-page whistleblower complaint, as well as a report on the company’s misinformation policies, are about much more than a culture of growth at all costs. They detail significant lapses in the company’s security, and executives who were either absent or unconcerned by the risk presented by these practices. They also help shed light on the company’s at times chaotic approach to countering misinformation and other safety issues.

Notably, Twitter has said little about most of these claims. The company has said the whistleblower complaint is “riddled with inaccuracies,” but hasn’t elaborated. In fact, the company has largely declined to publicly address the specific issues raised by Zatko in any way in the week since the complaint became public.

But while many have focused on Zatko’s allegations that Twitter lied to Musk about the prevalence of bots, there are several other claims that merit scrutiny — none of which have been addressed by Twitter in any detail. The company didn't respond to questions about the substance of Zatko's claims.

Twitter might have foreign agents on its payroll

Some of the most explosive claims made by Zatko are those that talk about how Twitter’s interactions with foreign governments and organizations could be endangering national security. Among the issues he raises: Twitter could have people working for foreign governments on staff.

He states that at least one agent of the Indian government was on the company’s payroll, and claims that a U.S. government source separately warned that there was at least one employee “working on behalf of another particular foreign intelligence agency.” It’s unclear what country the source was referring to but, crucially, it wouldn’t be the first instance of a Twitter worker spying for another country.

He also raises concerns about Twitter’s ongoing financial relationship — presumably via advertising — with “Chinese entities” and how they may be able to use the company’s tools to identify people using VPNs to circumvent the country's ban on the service. “Mr. Zatko was told that Twitter was too dependent on the revenue stream to do anything other than attempt to increase it,” the complaint says.

Jack Dorsey was ‘disengaged,’ Parag Agrawal allowed problems to ‘fester’

Throughout the complaint, Zatko describes interactions with Jack Dorsey and current CEO Parag Agrawal (Agrawal was Chief Technology Officer when Zatko first joined the company). Neither executive comes off particularly well.

The complaint notes that Dorsey personally recruited Zatko for the job as head of security, yet once he started, Zatko says Dorsey was either absent or bizarrely silent. According to the complaint, the two executives had “no more than six” one-on-one phone calls — during which Dorsey “cumulatively spoke perhaps fifty words” — in the entire time they worked together. (Dorsey later tweeted that this was “completely false.”) Zatko, perhaps charitably, describes Dorsey’s demeanor as “disengaged,” and says the CEO was “experiencing a drastic loss of focus” in 2021. Zatko’s experience was apparently not unique either.

From the complaint:

In some meetings, even after he was briefed on complex corporate issues, Dorsey did not speak a word. Mudge heard from his colleagues that Dorsey would remain silent for days or weeks. Worried about Dorsey's health, the senior team mostly tried to cover up for him, but even mid- and lower-level staff could tell that the ship was rudderless.

Zatko also describes a strained relationship with Agrawal, both while he was CTO and later when he took over the CEO role after Dorsey stepped down. The complaint at one point notes that some of Twitter’s biggest problems “had developed under Agrawal's watch.” He claims Agrawal was well aware of the company’s security issues, but did little to address them because “Agrawal had caused them, or allowed them to fester, in his role as CTO.” In one incident described by the former security chief, Agrawal was notified of a “huge red flag” but made no effort to look into it further.

In or around August 2021, Mudge notified then-CTO Agrawal and others that the login system for Twitter's engineers was registering, on average, between 1500 and 3000 failed logins every day, a huge red flag. Agrawal acknowledged that no one knew that, and never assigned anyone to diagnose why this was happening or how to fix it.

More worryingly, he claims that Agrawal told him to lie to Twitter’s board of directors about how bad Twitter’s security problems were. And he says he was ultimately fired when he attempted to correct the misleading information they had been provided. (Agrawal told Twitter staffers that Zatko was fired for “ineffective leadership and poor performance.” Zatko, via his lawyers, has disputed the claim.)

Twitter’s internal security practices were shockingly lax

Zatko joined Twitter at the end of 2020 to shore up the company’s systems and practices following a high-profile and extremely embarrassing hack in which teenage Bitcoin scammers were able to take over the accounts of some of Twitter’s most influential users. So it’s not surprising that he identified several security issues soon after joining. But the complaint describes a number of “egregious deficiencies” that were clearly worse than anything Zatko had anticipated.

For example, he repeatedly points out that employee devices were poorly managed. Unlike many companies of Twitter’s size, it had no MDM (mobile device management) policy “leaving the company with no visibility or control over thousands of devices used to access core company systems.” Likewise, Zatko claims that many employee computers were also not properly maintained. According to him, more than 30 percent of employee devices had software updates disabled.

Twitter, he says, “did not actively monitor what employees were doing” on their devices, to the point that the company repeatedly caught employees “intentionally installing spyware on their work computers at the request of external organizations,” and their actions often came to light merely “by accident.”

The fact that Twitter did so little to monitor employee devices was even more concerning because, according to Zatko, roughly half of the company’s 10,000 employees were “given access to sensitive live production systems and user data in order to do their jobs.” He also claims Agrawal “misrepresented the truth” when he claimed the company had tightened access following the 2020 hack.

The company told The Washington Post it had improved its security practices since 2020, but hasn’t elaborated.

Twitter’s data centers were at risk of a “company ending” failure

According to Zatko, Twitter’s data centers were in such a sorry state that there was a nonzero risk that Twitter could lose service — permanently.

From the complaint:

Mudge was shocked to learn that even a temporary but overlapping outage of a small number of datacenters would likely result in the service going offline for weeks, months, or permanently. … On top of this all engineers had some form of access to the data centers, the majority of the systems in the data centers were running out of date software no longer supported by vendors, and there was minimal visibility due to extremely poor logging.

According to Zatko, these issues were so serious they could have potentially triggered “an existential company ending event.” Later, he says just such a scenario nearly occurred in the spring of 2021, when “Twitter engineers working around the clock were narrowly able to stabilize the problem before the whole platform shut down.”

New features like Fleets, Spaces and Birdwatch had safety issues

Twitter has been racing to create new features over the last year and a half as it’s faced pressure to grow its user base and revenue. But according to the whistleblower documents, major new features sometimes launched without adequately accounting for safety.

For example, Zatko claims that Fleets, the company’s now-defunct disappearing tweets feature, “avoided undergoing security and privacy reviews before launch.” The complaint notes that Twitter engineers had to race to address privacy issues that cropped up soon after its launch. A separate report on misinformation at Twitter also raised issues with Fleets. It states that the feature was originally slated to launch prior to the 2020 election, but that the company’s safety team had to “beg” to get the launch pushed back until after the election.

Multiple interviewees reported that they had to "beg" the product team not to launch before the election because they did not have the resources or capabilities to [take] action on disinformation or misinformation on a new product during such a busy, critical time.

Zatko also alleges that another high-profile new feature, Spaces, had significant issues with content moderation.

In December 2021, an executive incorrectly told staff and Board members that Twitter's "Spaces" product was being appropriately moderated. But Mudge researched and discovered that about half of "Spaces" content flagged for review was in a language that the moderators did not speak, and that there was little to no moderation happening.

Smaller experiments also ran into issues. Birdwatch, the company’s collaborative fact-checking feature, was also a “pain point” for Twitter’s safety team, who worried that QAnon-supporting accounts might join. That concern was apparently well founded, as one was discovered the night before the experiment went public.

In launching Twitter's Birdwatch program, members of the SI [Site Integrity] team said that they were involved in the process throughout, and made suggestions as to how the product could be more secure, including specifically warning that users aligned with QAnon would likely attempt to join. However, feedback was not incorporated in an attempt to keep the product open, leading to a last-minute scramble to secure the product launch. On the evening before Birdwatch launched, Twitter realized that an overt QAnon account had been accepted into the Birdwatch program.

Twitter lacks adequate resources for addressing misinformation

These issues are further detailed in a separate document, also published by The Washington Post, addressing Twitter’s misinformation policies. The report, prepared at Mudge’s request by an outside firm, found that the company is “consistently behind the curve in actioning against disinformation and misinformation threats.” It concluded that “a lack of investment in critical resources, and reactive policies and processes have driven Twitter to operate in a constant state of crisis that does not support the company's broader mission of protecting authentic conversation.”

The report details just how understaffed these teams are at Twitter, noting that the company relied on internal “volunteers” to staff up its misinformation efforts during the 2020 presidential election. It also repeatedly points out that the company lacks the staff or resources to effectively monitor misinformation and other threats in languages other than English. “Despite having a global mission, persistent gaps in resources, tools, and capabilities we identified means Twitter does not have the capabilities to operate globally — including in priority markets — when it comes to misinformation and disinformation,” the report’s authors write.

Zatko claims other Twitter executives attempted to “hide the findings” of the “damning independent report.”

Twitter’s internal support was at times nonexistent and ‘inappropriate’

Tracking misinformation and dealing with content moderation weren’t the only areas where Zatko says Twitter at times struggled to keep up. He reports that the @TwitterSupport account was “historically unmanned,” and that when he started there was a backlog of more than 1 million support cases, including “items such as harassment, violations of various rules, and reported accounts and tweets, problems with accounts.”

While he says he oversaw improvements that substantially cut down the number of cases in the backlog, “it was historically the norm that cases in backlogs would eventually become so old that they would be silently closed, which most would agree is inappropriate support.”

What’s next

Much of what happens next will be up to the government agencies investigating the claims — details were sent to the Justice Department, SEC and FTC — but the complaint will also make things a lot more complicated for the company in the short term.

Twitter was already in the midst of a high-stakes legal battle with Elon Musk over his $44 billion acquisition, and Musk is already using the complaint to try to delay the trial and fuel his arguments for reneging on the deal. (In a statement, Zatko’s lawyers said his compliance with a subpoena from Musk was “involuntary,” and that “he did not make his whistleblower disclosures to the appropriate governmental bodies to benefit Musk or to harm Twitter, but rather to protect the American public and Twitter shareholders.”)

The disclosures have also caught the attention of Congress, and Zatko is scheduled to testify to the Senate Judiciary Committee on September 13th. “Mr. Zatko’s allegations of widespread security failures and foreign state actor interference at Twitter raise serious concerns,” committee chair Sen. Dick Durbin said in a statement. “If these claims are accurate, they may show dangerous data privacy and security risks for Twitter users around the world.”

Twitter, naturally, hasn’t commented on the upcoming Senate hearing, Musk’s subpoena or potential investigations by the FTC or SEC.

Facebook, Instagram and WhatsApp could soon have exclusive features for those willing to pay

Facebook, Instagram and WhatsApp could soon have specialized features available only to users willing to pay for them. Meta is forming a new division called New Monetization Experiences that will be solely focused on paid features for the company’s apps, according to a memo reported by The Verge.

While Facebook and Instagram already have a number of paid features that cater to creators, like Stars, paid events and various subscription products, it sounds like the new division at Meta will be separate from those initiatives. (Of note, Meta had pledged not to take a cut of creator earnings until 2023.)

It’s not clear what type of paid features might come out of the effort, but Meta’s VP of monetization John Hegeman told The Verge the company is keeping a close eye on its industry peers. Twitter, Snapchat and Telegram have all recently launched monthly subscriptions that unlock exclusive features and other in-app perks for paid subscribers.

Paid features could help Meta find new sources of non-advertising revenue. The company’s multibillion-dollar advertising business has taken a significant hit of late due to iOS privacy changes and an economic downturn that’s also affected its competitors.

Instagram's new test lets you mute specific words from suggested posts

Instagram is giving users more ways to tweak their suggested posts amid a backlash to the app’s aggressive shift toward recommendations. The app is testing a new option that will allow users to use keywords and emoji to mute certain topics from appearing in suggested posts.

The change will block posts in which a user’s keywords, which can include emoji as well as words and phrases, appear in the caption or hashtags of a post. “You can use this feature to stop seeing content that’s not interesting to you,” Instagram writes in a blog post. Users can customize their filter words from the app's settings. The company notes that people can also opt to snooze all recommendations entirely.
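Conceptually, the muting described above amounts to simple substring matching against a post’s caption and hashtags. Here’s a minimal, hypothetical sketch of that idea (not Instagram’s actual implementation; the function name and signature are invented for illustration):

```python
def is_muted(caption: str, hashtags: list[str], muted_terms: list[str]) -> bool:
    """Hypothetical filter: True if any muted word, phrase, or emoji
    appears in the post's caption or hashtags (case-insensitive)."""
    haystack = (caption + " " + " ".join(hashtags)).casefold()
    return any(term.casefold() in haystack for term in muted_terms)

# Muting a word and an emoji:
print(is_muted("To the moon 🚀", ["#stocks"], ["crypto", "🚀"]))  # True (emoji match)
print(is_muted("Sunset at the beach", ["#travel"], ["crypto"]))  # False
```

Since emoji are just Unicode characters, they work in this kind of filter exactly like words do, which is presumably why Instagram can support both in one feature.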

Instagram is also testing a new way to weed out unwanted posts from the app’s Explore section. With the change, users can select multiple posts at a time and mark them all as “not interested.” This will hide those posts, and block out similar recommendations in the future, according to the company.


While both new options require a bit of extra work, the changes could bring some relief for users who have been frustrated by the quality of Instagram’s recommendations as the app has taken increasingly aggressive steps to become more like TikTok. Instagram’s top exec Adam Mosseri said last month the company would tone down the number of recommended posts and halt its experiment with a full-screen feed. Both changes have been deeply unpopular, prompting viral memes criticizing the company’s efforts to copy TikTok.

Regardless of criticism, Meta’s leaders have been clear that they intend to shift both Facebook and Instagram’s feeds from mostly friend content to more posts from AI-driven recommendations. But the new controls could help the company eventually improve the quality of those suggestions, which might make them more palatable to users in the long run.