Posts with «company legal & law matters» label

University professors in Texas are suing the state over ‘unconstitutional’ TikTok ban

A group of college professors sued Texas today for banning TikTok on state devices and networks, as reported by The Washington Post. The plaintiffs say the prohibition compromises their research and teaching while “preventing or seriously impeding faculty from pursuing research that relates to TikTok,” including studying the very disinformation and data-collection practices the restriction claims to address. They add that the ban makes it “almost impossible for faculty to use TikTok in their classrooms — whether to teach about TikTok or to use content from TikTok to teach about other subjects.”

The Knight First Amendment Institute at Columbia University filed the lawsuit on behalf of the Coalition for Independent Technology Research, an academic research advocacy group to which the Texas professors belong. The lawsuit names Governor Greg Abbott and 14 other state and public education officials as defendants. “The government’s authority to control their research and teaching… cannot survive First Amendment scrutiny,” the complaint says.

One example cited by the plaintiffs is Jacqueline Vickery, an associate professor in the Department of Media Arts at the University of North Texas, who studies and teaches how young people use social media for expression and political organizing. “The ban has forced her to suspend research projects and change her research agenda, alter her teaching methodology, and eliminate course materials,” the complaint reads. “It has also undermined her ability to respond to student questions and to review the work of other researchers, including as part of the peer-review process.”

The lawsuit says that, although faculty at public universities are public employees, the First Amendment shields them from government control over their research and teaching. “Imposing a broad restraint on the research and teaching of public university faculty is not a constitutionally permissible means of protecting Texans’ ‘way of life’ or countering the threat of disinformation,” the suit says, citing Abbott’s comments that he feared the Chinese government “wields TikTok to attack our way of life.” The suit also condemns the double standard of claiming to care about Texans’ privacy while still allowing Meta, Google and Twitter (all American companies) to harvest much of the same data as TikTok.

“The ban is suppressing research about the very concerns that Governor Abbott has raised, about disinformation, about data collection,” Jameel Jaffer, executive director of the Knight First Amendment Institute at Columbia University, told The Washington Post. “There are other ways to address those concerns that don’t impose the same severe burden on faculty and researchers’ First Amendment rights,” he added, as well as their “ability to continue studying what has, like it or not, become a hugely popular and influential communications platform.”

This is the third lawsuit this year challenging state TikTok bans. Two Montana lawsuits funded by the Chinese social media company claim the prohibition violates free speech rights. According to The New York Times, TikTok is not involved with the Texas suit.

Celsius founder Alex Mashinsky arrested and charged with fraud

The problems keep mounting for Celsius founder Alex Mashinsky, who has been arrested and charged by federal authorities with fraud. Mashinsky faces seven criminal counts, including securities, commodities and wire fraud, as originally reported by CBS News. He and his company are also being independently sued by three government agencies — the FTC, CFTC and SEC. The U.S. Attorney’s Office alleges that Mashinsky misled customers about the nature of his company, making it seem like a bank when it was actually a high-risk investment fund.

Celsius’s former chief revenue officer, Roni Cohen-Pavon, was also arrested. Both he and Mashinsky are charged with manipulating the price of the company’s proprietary crypto token so they could sell their own holdings at inflated prices.

“Mashinsky misrepresented, among other things, the safety of Celsius’s yield-generating activities, Celsius’s profitability, the long-term sustainability of Celsius’ high rewards rates and the risks associated with depositing crypto assets with Celsius,” federal prosecutors wrote in a charging document obtained by CNBC.

Additionally, the FTC reached a $4.7 billion settlement today with Celsius, which nearly matches the record fine levied against Meta in 2019 for violating consumers’ privacy. The company has agreed to the financial terms, but will only make payments once it returns what remains of customer assets as part of ongoing bankruptcy proceedings.

This all follows a New York lawsuit filed in January that also alleged massive fraud. That suit seeks damages after Celsius allegedly defrauded investors out of "billions of dollars" in cryptocurrency.

While details are scant on today’s arrest, the New York suit alleges that Mashinsky misled customers about the company’s worsening financial health and failed to register as a commodities and securities dealer, among many other allegations. New York State Attorney General Letitia James alleged that Mashinsky deceived hundreds of thousands of investors, with over 26,000 of them located in New York.

If convicted on all counts, Mashinsky and Cohen-Pavon face decades in prison. Mashinsky resigned as CEO of Celsius last year and is no longer involved with the company.

FTC opens investigation into ChatGPT creator OpenAI

American regulators now appear to be clamping down on generative AI in earnest. The Washington Post has learned the Federal Trade Commission (FTC) has launched an investigation into OpenAI, the creator of ChatGPT and DALL-E. Officials have requested documents showing how the company tackles risks stemming from its large language AI models. The FTC is concerned the company may be violating consumer protection laws through "unfair or deceptive" practices that could hurt the public's privacy, security or reputation.

The Commission is particularly interested in information linked to a bug that leaked ChatGPT users' sensitive data, including payments and chat histories. While OpenAI said the number of affected users was very small, the FTC is worried this stems from poor security practices. The agency also wants details of any complaints alleging the AI made false or malicious statements about individuals, and info showing how well users understand the accuracy of the products they're using.

We've asked OpenAI for comment. The FTC declined comment and typically doesn't remark on investigations, but has previously warned that generative AI could run afoul of the law by doing more harm than good to consumers. It could be used to perpetrate scams, run misleading marketing campaigns or lead to discriminatory advertising, for instance. If the government body finds a company in violation, it can apply fines or issue consent decrees that force certain practices.

AI-specific laws and rules aren't expected in the near future. Even so, the government has stepped up pressure on the tech industry. OpenAI chief Sam Altman testified before the Senate in May, where he defended his company by outlining privacy and safety measures while touting AI's claimed benefits. He said protections were in place, but that OpenAI would be "increasingly cautious" and continue to upgrade its safeguards.

It's not clear if the FTC will pursue other generative AI developers, such as Google and Anthropic. The OpenAI investigation shows how the Commission might approach other cases, though, and signals that the regulator is serious about scrutinizing AI developers.

FTC appeals ruling that would have let Microsoft’s Activision takeover move forward

The Federal Trade Commission isn't giving up on its attempt to halt Microsoft's pending $68.7 billion purchase of Activision Blizzard. The agency has appealed Judge Jacqueline Scott Corley's denial of its request for a preliminary injunction to temporarily stop the deal from going through.

The FTC has sued to prevent the merger from happening over antitrust concerns. An administrative trial is set to start in August, but the companies have a merger deadline of July 18th. The agency was concerned Microsoft and Activision would close the purchase by then despite a UK regulator blocking the deal in that country.

Bloomberg first reported that the agency was considering an appeal against Corley's decision. The FTC told Engadget after Tuesday's ruling that it would announce its "next step to continue our fight to preserve competition and protect consumers" in the following days.

Corley ruled that, unless the FTC obtains an emergency stay from the Ninth Circuit Court of Appeals by 11:59PM PT on July 14th, a temporary restraining order that's currently preventing Microsoft and Activision from closing the deal will be dissolved. The restraining order was put in place until Corley made a decision on the preliminary injunction.

Meanwhile, after Corley's ruling, Microsoft, Activision Blizzard and the UK's Competition and Markets Authority said they agreed to pause their legal battle and see if they could reach a compromise. The CMA later clarified that although "merging parties don’t have the opportunity to put forward new remedies once a final report has been issued, they can choose to restructure a deal." It added that doing so could lead to a fresh merger investigation, which would likely delay the takeover beyond July 18th.

Sarah Silverman sues OpenAI and Meta over copyright infringement

Sarah Silverman is suing OpenAI. On Friday, the comedian and author, alongside novelists Christopher Golden and Richard Kadrey, filed a pair of complaints against OpenAI and Meta (via Gizmodo). The group alleges the firms trained their large language models on copyrighted materials, including works they published, without obtaining consent.

The complaints center on the datasets OpenAI and Meta allegedly used to train ChatGPT and LLaMA. In the case of OpenAI, while its “Books1” dataset roughly matches the size of Project Gutenberg — a well-known repository of copyright-free books — lawyers for the plaintiffs argue that the “Books2” dataset is too large to have come from anywhere other than so-called “shadow libraries” of illegally available copyrighted material, such as Library Genesis and Sci-Hub. Everyday pirates can access these materials through direct downloads, but perhaps more usefully for those building large language models, many shadow libraries also make written material available in bulk torrent packages. One exhibit from Silverman’s lawsuit involves an exchange between the comedian’s lawyers and ChatGPT. Silverman’s legal team asked the chatbot to summarize The Bedwetter, a memoir she published in 2010. The chatbot not only outlined entire parts of the book, but some of the passages it relayed appear to have been reproduced verbatim.

Silverman, Golden and Kadrey aren’t the first authors to sue OpenAI over copyright infringement. In fact, the firm faces a host of legal challenges over how it went about training ChatGPT. In June alone, the company was served with two separate complaints. One is a sweeping class action suit that alleges OpenAI violated federal and state privacy laws by scraping data to train the large language models behind ChatGPT and DALL-E.

Canadian judge rules the thumbs up emoji counts as a contract agreement

A Canadian judge has ruled that the popular “thumbs-up” emoji not only can be used as a contract agreement, but is just as valid as an actual signature. The Saskatchewan-based judge made the ruling on the grounds that the courts must adapt to the “new reality” of how people communicate, as originally reported by The Guardian.

The case involved a grain buyer sending out a mass text to drum up clients and a farmer agreeing to sell 86 tons of flax for around $13 per bushel. The buyer texted a contract to the farmer and asked him to “confirm” receiving it. The farmer responded with a thumbs-up emoji as receipt of the document, but backed out of the deal after flax prices increased.

The buyer sued the farmer, arguing that the thumbs-up represented more than just receipt of the contract: it represented agreement to the contract’s terms. A judge agreed, ordering the farmer to cough up nearly $62,000, likely causing a string of puke emojis.

The farmer, Chris Achter, said in an affidavit that he “did not have time to review” the contract and that the thumbs-up was simply an acknowledgment of receipt. Justice Timothy Keene relied on Dictionary.com’s definition of the emoji, which notes the image is used to “express assent, approval, or encouragement in digital communications, especially in Western cultures,” ultimately siding with the grain buyer.

“This court readily acknowledges that a 👍 emoji is a non-traditional means to ‘sign’ a document but nevertheless under these circumstances this was a valid way to convey the two purposes of a ‘signature’,” Justice Keene wrote.

The defense argued that giving this type of power to an emoji would open the “floodgates” to enhanced interpretations of other emojis. While the justice dismissed this line of reasoning, anyone who regularly texts the LOL emoji without actually laughing out loud is likely quaking in their boots right now.

French Assembly passes bill allowing police to remotely activate phone cameras and microphones for surveillance

French law enforcement may soon have far-reaching authority to snoop on alleged criminals. Lawmakers in France's National Assembly have passed a bill that lets police surveil suspects by remotely activating cameras, microphones and GPS location systems on phones and other devices. A judge will have to approve use of the powers, and the recently amended bill forbids use against journalists, lawyers and other "sensitive professions," according to Le Monde. The measure is also meant to limit use to serious cases, and only for a maximum of six months. Geolocation would be limited to crimes that are punishable by at least five years in prison.

An earlier version of the bill passed the Senate, but the amendment will require that legislative body's approval before it can become law.

Civil liberties advocates are alarmed. The digital rights group La Quadrature du Net previously pointed out the potential for abuse. As the bill isn't clear about what constitutes a serious crime, there are fears the French government might use this to target environmental activists and others who aren't grave threats. The organization also notes that worrying security policies have a habit of expanding to less serious crimes. Genetic registration was only used for sex offenders at first, La Quadrature says, but is now being used for most crimes.

The group further notes that the remote access may depend on security vulnerabilities. Police would be exploiting security holes instead of telling manufacturers how to patch those holes, La Quadrature says.

Justice Minister Éric Dupond-Moretti says the powers would only be used for "dozens" of cases per year, and that this was "far away" from the surveillance state of Orwell's 1984. It will save lives, the politician argues.

The legislation comes as concerns about government device surveillance are growing. There's been a backlash against NSO Group, whose Pegasus spyware has allegedly been misused to spy on dissidents, activists and even politicians. While the French bill is more focused, it's not exactly reassuring to those worried about government overreach.

Apple wants to take the Epic Games case to the Supreme Court

Apple is making one last-ditch effort to maintain its cut of in-app sales, asking the Supreme Court to hear its appeal of Epic Games’ antitrust case, Reuters reports. Two lower courts ruled that Apple must drop its guidelines preventing apps from including their own payment options, a policy that helped Apple's bottom line.

The fight began in 2020 when Epic rolled out a Fortnite update that allowed gamers to purchase digital coins through a direct payment feature. The move violated Apple's policy that required all iOS games to use in-app purchases — and gave Apple a 30 percent cut of those sales. Apple removed Fortnite from its App Store in response, despite the game's regular status as one of the store's highest-grossing titles. In retaliation, Epic sued Apple to end its "unfair and anti-competitive actions," with the goal of changing Apple's policy rather than seeking damages.

The lawsuit was a mixed bag for both parties involved: In 2021, US District Judge Yvonne Gonzalez Rogers ruled that Epic knowingly violated Apple's rules and the iPhone maker wasn't required to add Fortnite back to its App Store. Rogers also stated that Apple wasn't acting like a monopoly but that the company must allow apps to provide their users with third-party payment systems. The change went into effect last year, and the US Ninth Circuit Court of Appeals upheld the entire injunction this past April. 

In their filing, Apple's lawyers claim that the ruling extends beyond Epic Games and "exceeds the district court's authority under Article III, which limits federal court jurisdiction to actual cases and controversies." Basically, they argue that the court overreached, and they are asking the Supreme Court to acknowledge that and let the App Store go back to business as usual (with developers giving Apple a cut of sales). One way or another, Apple will at least have to adapt in some countries, with new European Union regulations requiring the company to allow third-party app stores by 2024.

Tech firms sue Arkansas over social media age verification law

The technology industry isn't thrilled with Arkansas' law requiring social media age checks. NetChoice, a tech trade group whose members include Google, Meta and TikTok, has sued the state of Arkansas, claiming its Social Media Safety Act violates the US Constitution. The measure allegedly treads on First Amendment free speech rights by making users hand over private data in order to access social networks. It also "seizes decision making" from families, NetChoice argues.

The alliance also believes the Act hurts privacy and safety by making internet companies rely on a third-party service to store and track kids' data. State residents often don't know or associate with that service, NetChoice claims, and an external firm is supposedly a "prime target" for hacks. The law also tries to regulate the internet outside state lines while ignoring federal law, according to the lawsuit. And because Arkansas can't verify residency without requiring data, it's effectively asking everyone to submit documents.

State Attorney General Tim Griffin tells Engadget in a statement that he looks forward to "vigorously defending" the Social Media Safety Act. The law requires age verification for all users through driver's licenses or other "commercially reasonable" methods, and anyone under 18 also needs a parent's consent. There are exceptions that appear to cover major social networks and their associated categories, such as those for "professional networking" (think LinkedIn) or short entertaining video clips (like TikTok).

Arkansas' requirement is part of a greater trend among politicians to demand age verification for social media. States like Utah, Connecticut and Ohio have either passed or are considering similar laws, while Senator Josh Hawley proposed a federal bill barring all social media access for kids under 16. They're concerned younger users might be exposed to creeps and inappropriate content, and that use can harm mental health by presenting a skewed view of the world and encouraging addiction.

There's no guarantee the lawsuit will succeed. If it does, though, it could affect similar attempts to verify ages through personal data. If Arkansas' approach is deemed unconstitutional, other states might have to drop their own efforts.

Twitter's lawsuit over censorship in India has been dismissed

Last year, Twitter sued India over orders to block content within the country, saying the government had applied its 2021 IT laws "arbitrarily and disproportionately." Now, India's Karnataka High Court has dismissed the plea, with the judge saying Twitter had failed to explain why it delayed complying with the new laws in the first place, TechCrunch has reported. The court also imposed a 5 million rupee (about $61,000) fine on the Elon Musk-owned firm.

"Your client (Twitter) was given notices and your client did not comply. Punishment for non-compliance is seven years imprisonment and unlimited fine. That also did not deter your client," the judge told Twitter's legal representation. "So you have not given any reason why you delayed compliance, more than a year of delay… then all of sudden you comply and approach the Court. You are not a farmer but a billon dollar company."

Twitter’s relationship with India was fraught for much of 2021. In February, the government threatened to jail Twitter employees unless the company removed content related to protests by farmers held that year. Shortly after that, India ordered Twitter to pull tweets criticizing the country’s response to the COVID-19 pandemic. More recently, the government ordered Twitter to block tweets from Freedom House, a nonprofit organization that claimed India was an example of a country where freedom of the press is on the decline.

Those incidents put Twitter in a compromising situation. It either had to comply with government orders to block content (and face censorship criticism inside and outside the country), or ignore them and risk losing its legal immunity. In August, it complied with the orders and took down content as ordered.

The court order follows recent comments from Twitter co-founder Jack Dorsey, who said India had threatened to raid employees' homes if the company didn't comply with orders to remove posts and accounts. In a tweet, India's deputy minister for information technology called that "an outright lie," saying Twitter was "in non-compliance with the law."

Twitter filed the suit around the same time that Elon Musk started trying to wiggle out of buying Twitter. Since then, Twitter has often complied with government takedown requests — most recently in Turkey, where it limited access to some tweets ahead of a tightly contested election won by incumbent president Recep Tayyip Erdogan.  
