Posts with «legislative branch» label

TikTok is encouraging its users to call their representatives about attempts to ban the app

TikTok is stepping up its efforts to fight a new bill that could force a ban of the app in the United States. The app has been alerting its millions of US users about the measure, which would force ByteDance to sell TikTok in order for the app to remain available in US app stores.

“TikTok is at risk of being shut down in the US,” the push notification says. “Call your representative now.” An in-app message then instructs users to “speak up now — before your government strips 170 million Americans of their Constitutional right to free expression.” It also provides users a shortcut to dial their representative’s office if they enter their zip code.

The push alerts are reportedly already having a dramatic effect. Politico reporter Olivia Beavers said that House staffers report their offices are being inundated with calls. One staffer said on X that “we're getting a lot of calls from high schoolers asking what a Congressman is.”

We're getting a lot of calls from high schoolers asking what a Congressman is.

Yes really.

— Taylor Hulsey (@TaylorMHulsey) March 7, 2024

Unfortunately for TikTok, its plan to stir up resistance to the bill may not be having the intended effect. The flood of calls may in fact be “backfiring,” according to Beavers, who says the response may be increasing support for the bill among members of Congress. In a post on X, Representative Mike Gallagher, who chairs the select committee that introduced the bill, said the push notifications were “interfering with the legislative process.” TikTok didn’t immediately respond to a request for comment.

The alerts come amid growing support for the measure, which was introduced earlier this week by members of the House Energy and Commerce Committee. Committee members are expected to vote Thursday on whether to advance the bill. President Joe Biden, whose administration has also sought to force a divestiture of TikTok, is reportedly supportive of the bill. As Punchbowl News notes, previous bills to ban TikTok have not had the backing of the White House.

If passed, the bill would give TikTok about six months to separate itself from ByteDance or else an app store ban would take effect. Digital rights groups oppose the measure. The ACLU has called it “unconstitutional,” while other groups say that comprehensive privacy legislation would be a more effective way to protect Americans’ data.

This article originally appeared on Engadget at https://www.engadget.com/tiktok-is-encouraging-its-users-to-call-their-representatives-about-attempts-to-ban-the-app-202056111.html?src=rss

Oregon’s new Right to Repair bill targets anti-repair practices

Oregon is set to become the latest state to pass a Right to Repair law. The Oregon House of Representatives passed the Right to Repair Act (SB 1596) on March 4, two weeks after it advanced from the Senate. It now heads to Governor Tina Kotek's desk, who has five days to sign it.

California, Minnesota and New York have similar legislation, but Nathan Proctor, the Public Interest Research Group's Right to Repair Campaign senior director, calls Oregon's legislation "the best bill yet." (It's worth noting that Colorado also has its own Right to Repair legislation, but with a different remit, covering agricultural equipment rather than consumer electronics.)

If signed into law, Oregon's Right to Repair Act would be the first to ban "parts pairing," a practice that prevents individuals from swapping out a component for another, theoretically equivalent one. For example, a person might replace their iPhone battery with an identical one from the same model, only to receive an error message saying the part can't be verified or used. The system forces people to buy the part directly from the manufacturer, which alone can activate it; otherwise, users have to buy an entirely new device. Under the new bill, manufacturers would be prohibited from using parts pairing to:

  • Prevent or inhibit an independent repair provider or an owner from installing or enabling the function of an otherwise functional replacement part or a component of consumer electronic equipment, including a replacement part or a component that the original equipment manufacturer has not approved.

  • Reduce the functionality or performance of consumer electronic equipment.

  • Cause consumer electronic equipment to display misleading alerts or warnings, which the owner cannot immediately dismiss, about unidentified parts.

Along with restricting parts pairing, the act dictates that manufacturers must make compatible parts available to device owners through the company or an authorized service provider for the most favorable price and without any "substantial" conditions.

The parts pairing ban applies to devices first built or sold in Oregon starting in 2025. The law's general coverage of electronics, however, reaches back to devices made in 2015, with an exception for cell phones: phones purchased from July 2021 onward are covered, a stipulation in line with California's and Minnesota's Right to Repair laws.

This article originally appeared on Engadget at https://www.engadget.com/oregons-new-right-to-repair-bill-targets-anti-repair-practices-143001457.html?src=rss

Senate tells social media CEOs they have 'blood on their hands' for failing to protect children

The CEOs of Meta, Snap, Discord, X and TikTok testified at a high-stakes Senate Judiciary Committee hearing on child exploitation online. During the hearing, Mark Zuckerberg, Evan Spiegel, Jason Citron, Linda Yaccarino and Shou Chew spent hours being grilled by lawmakers about their records on child safety. 

The hearing was the first time Spiegel, Citron and Yaccarino testified to Congress. Notably, all three were subpoenaed by the committee after refusing to appear voluntarily, according to lawmakers. Judiciary Committee Chair Senator Dick Durbin noted that Citron “only accepted service of his subpoena after US Marshals were sent to Discord’s headquarters at taxpayers’ expense.”

The hearing room was filled with parents of children who had been victims of online exploitation on social media. Many members of the audience silently held up photos of their children as the CEOs entered the room, and Durbin kicked off the hearing with a somber video featuring victims of child exploitation and their parents.

“Discord has been used to groom, abduct and abuse children,” Durbin said. “Meta’s Instagram helped connect and promote a network of pedophiles. Snapchat’s disappearing messages have been co-opted by criminals who financially extort young victims. TikTok has become a quote platform of choice for predators to access, engage and groom children for abuse. And the prevalence of CSAM on X has grown as the company has gutted its trust and safety workforce.”

During the hearing, many of the senators shared personal stories of parents whose children had died by suicide after being exploited online. "Mr. Zuckerberg, you and the companies before us — I know you don't mean it to be so — but you have blood on your hands," Senator Lindsey Graham said in his opening remarks. The audience applauded. 

While years of similar hearings have so far failed to produce any new laws, there is growing bipartisan support in Congress for new safety regulations. As Tech Policy Press points out, there are currently more than half a dozen bills dealing with children's online safety that have been proposed by senators. These include the Kids Online Safety Act (KOSA), which would require platforms to create more parental control and safety features and submit to independent audits, and COPPA 2.0, a revised version of the 1998 Children and Teens' Online Privacy Protection Act, which would bar companies from collecting or monetizing children’s data without consent.

Senators have also proposed a number of bills to address child exploitation, including the EARN IT Act, currently in its third iteration since 2020, and the STOP CSAM Act. None of these have advanced to the Senate floor for a vote. Many of these bills have faced intense lobbying from the tech industry, though some companies in attendance said they are open to some aspects of the legislation.

Zuckerberg suggested a different approach, saying he supported age verification and parental control requirements at the app store level, which would effectively shift the burden to Apple and Google. Meta has come under particular pressure in recent months following a lawsuit from 41 states for harming teens’ mental health. Court documents from the suit allege that Meta turned a blind eye to children under 13 using its service, did little to stop adults from sexually harassing teens on Facebook and that Zuckerberg personally intervened to stop an effort to ban plastic surgery filters on Instagram.


This article originally appeared on Engadget at https://www.engadget.com/senate-tells-social-media-ceos-they-have-blood-on-their-hands-for-failing-to-protect-children-170411884.html?src=rss

Tech CEOs are set to testify in a Senate online child sexual exploitation hearing in December

The Senate Judiciary Committee will hold a hearing on online child sexual exploitation on December 6 and the CEOs of major tech companies are set to testify. The committee expects Meta CEO Mark Zuckerberg and his counterpart at TikTok, Shou Zi Chew, to testify voluntarily. It also wants to hear from the CEOs of X (formerly Twitter), Discord and Snap, and it has issued subpoenas to them.

"Big Tech’s failure to police itself at the expense of our kids cannot go unanswered," committee chair Sen. Dick Durbin (D-IL) and ranking member Sen. Lindsey Graham (R-SC) said in a joint statement, as Reuters reports. "I’m hauling in Big Tech CEOs before the Senate Judiciary Committee to testify on their failure to protect kids online," Durbin wrote on X.

JUST ANNOUNCED: Senate Judiciary Committee will press Big Tech CEOs on their failures to protect kids at hearing on Dec 6.

Subpoenas issued to CEOs of Discord, Snap, & X. Committee remains in discussion w/ Meta, TikTok—expects their CEOs will agree to testify voluntarily.

— Senate Judiciary Committee (@JudiciaryDems) November 20, 2023

According to the committee, X and Discord refused to accept service of the subpoenas on their CEOs' behalf, "requiring the committee to enlist the assistance of the US Marshals Service" to serve them personally. "We have been working in good faith to participate in the Judiciary committee’s hearing on child protection online as safety is our top priority at X," Wifredo Fernandez, head of US and Canada government affairs at X, told Engadget in a statement. "Today we are communicating our updated availability to participate in a hearing on this important issue." Engadget has also contacted Discord for comment.

The issue of tech platforms allegedly facilitating harms against kids has become an increasingly pressing issue. Earlier this month, former Meta executive Arturo Béjar testified that Zuckerberg failed to respond to his email detailing concerns about harms facing children on the company's platforms. Senators then demanded documents from the company's CEO "related to senior executives’ knowledge of the mental and physical health harms associated with its platforms, including Facebook and Instagram."

This article originally appeared on Engadget at https://www.engadget.com/tech-ceos-are-set-to-testify-in-a-senate-online-child-sexual-exploitation-hearing-in-december-180206072.html?src=rss

The Supreme Court will hear social media cases with immense free speech implications

On Friday, the US Supreme Court agreed to take on two landmark social media cases with enormous implications for online speech, as reported by The Washington Post. The conservative-dominated court will determine if laws passed by Texas and Florida are violating First Amendment rights by requiring social platforms to host content they would otherwise block.

Tech industry groups, including Meta, X (formerly Twitter) and Google, say the laws are unconstitutional and violate private companies’ First Amendment rights. “Telling private websites they must give equal treatment to extremist hate isn’t just unwise, it is unconstitutional, and we look forward to demonstrating that to the Court,” Matt Schruers of the Computer & Communications Industry Association (CCIA), one of the trade associations challenging the legislation, told The Washington Post. The CCIA called the order “encouraging.”

The groups representing the tech companies contesting the laws say platforms would be at legal risk for removing violent or hateful content, propaganda from hostile governments and spam. However, leaving the content online could be bad for their bottom lines as they would risk advertiser and user boycotts.

Supporters of the Republican-sponsored state laws claim that social media companies are biased against conservatives and are illegally censoring their views. “These massive corporate entities cannot continue to go unchecked as they silence the voices of millions of Americans,” said Texas Attorney General Ken Paxton (R), who recently survived an impeachment trial accusing him of abuses of office, bribery and corruption. Appeals courts (all with Republican-appointed judges) have issued conflicting rulings on the laws.

The US Supreme Court voted five to four in 2022 to put the Texas law on hold while the legal sparring continued. Justices John Roberts, Stephen Breyer, Sonia Sotomayor, Brett Kavanaugh and Amy Coney Barrett voted to prevent the law from taking effect. Meanwhile, Samuel Alito, Clarence Thomas, Elena Kagan and Neil Gorsuch dissented from the temporary hold. Alito (joined by Thomas and Gorsuch) said he hadn’t decided on the law’s constitutionality but would have let it stand in the interim. The dissenting Kagan didn’t sign off on Alito’s statement or provide separate reasoning.

The Biden administration is against the laws. “The act of culling and curating the content that users see is inherently expressive, even if the speech that is collected is almost wholly provided by users,” Solicitor General Elizabeth B. Prelogar said to the justices. “And especially because the covered platforms’ only products are displays of expressive content, a government requirement that they display different content — for example, by including content they wish to exclude or organizing content in a different way — plainly implicates the First Amendment.”

This article originally appeared on Engadget at https://www.engadget.com/the-supreme-court-will-hear-social-media-cases-with-immense-free-speech-implications-164302048.html?src=rss

California governor vetoes bill for obligatory human operators in autonomous trucks

California Gov. Gavin Newsom has blocked a bill that would have required autonomous trucks weighing more than 10,000 pounds (4,536kg) to have human safety drivers on board while operating on public roads. The governor said in a statement that the legislation, which California Senate members passed in a 36-2 vote, was unnecessary. Newsom believes existing laws are sufficient to ensure there's an "appropriate regulatory framework."

The governor noted that, under a 2012 law, the state's Department of Motor Vehicles collaborates with the National Highway Traffic Safety Administration, California Highway Patrol and other relevant bodies "to determine the regulations necessary for the safe operation of autonomous vehicles on public roads.” Newsom added that the DMV is committed to making sure rules keep up with the pace of evolving autonomous vehicle tech. "DMV continuously monitors the testing and operations of autonomous vehicles on California roads and has the authority to suspend or revoke permits as necessary to protect the public's safety," his veto message reads.

Newsom, who has a reputation for being friendly to the tech industry, reportedly faced pressure within his administration not to sign the bill. The state's Office of Business and Economic Development warned that the proposed law would lead companies working on self-driving tech to move out of California.

On the other hand, as the Associated Press notes, California Labor Federation head Lorena Gonzalez Fletcher estimates that not requiring human drivers in trucks would cost around 250,000 jobs. “We will not sit by as bureaucrats side with tech companies, trading our safety and jobs for increased corporate profits," Fletcher, who called autonomous trucks dangerous, said in a statement. "We will continue to fight to make sure that robots do not replace human drivers and that technology is not used to destroy good jobs.”

This article originally appeared on Engadget at https://www.engadget.com/california-governor-vetoes-bill-for-obligatory-human-operators-in-autonomous-trucks-170051289.html?src=rss

House and Senate bills aim to protect journalists' data from government surveillance

News gatherers in the US may soon have safeguards against government attempts to comb through their data. Bipartisan House and Senate groups have reintroduced the PRESS Act (Protect Reporters from Exploitive State Spying), legislation that limits the government's ability to compel data disclosures that might identify journalists' sources. The Senate bill would extend disclosure exemptions and standards to cover email, phone records and other info held by third parties.

The PRESS Act would also require that the federal government give journalists a chance to respond to data requests. Courts could still demand disclosure if it's necessary to prevent terrorism, identify terrorists or prevent serious "imminent" violence. The Senate bill is the work of Richard Durbin, Mike Lee and Ron Wyden, while the House equivalent comes from representatives Kevin Kiley and Jamie Raskin.

Sponsors characterize the bill as vital to protecting First Amendment press freedoms. Anonymous source leaks help keep the government accountable, Wyden says. He adds that surveillance like this can deter reporters and sources worried about retaliation. Lee, meanwhile, says the Act will also maintain the public's "right to access information" and help it participate in a representative democracy.

The senators point to instances from both Democratic and Republican administrations where law enforcement subpoenaed data in a bid to catch sources. Most notably, the Justice Department under Trump is known to have seized call records and email logs from major media outlets like CNN and The New York Times following an April 2017 report on how former FBI director James Comey handled investigations during the 2016 presidential election.

Journalist shield laws exist in 48 states and the District of Columbia, but there's no federal law. That void lets the Justice Department and other government bodies quietly grab data from telecoms and other providers. The PRESS Act theoretically patches that hole and minimizes the chances of abuse.

There's no guarantee the PRESS Act will reach President Biden's desk and become law. However, backers in both chambers are betting that bipartisan support will help. The House version passed "unanimously" in the previous session of Congress, Wyden's office says.

This article originally appeared on Engadget at https://www.engadget.com/house-and-senate-bills-aim-to-protect-journalists-data-from-government-surveillance-192907280.html?src=rss

Lawmakers seek 'blue-ribbon commission' to study impacts of AI tools

The wheels of government have finally begun to turn on the issue of generative AI regulation. US Representatives Ted Lieu (D-CA) and Ken Buck (R-CO) introduced legislation on Monday that would establish a 20-person commission to study ways to “mitigate the risks and possible harms” of AI while “protecting” America's position as a global technology power. 

The bill would require the Executive branch to appoint experts from throughout government, academia and industry to conduct the study over the course of two years, producing three reports during that period. The president would appoint eight members of the committee, while Congress, in an effort "to ensure bipartisanship," would split the remaining 12 positions evenly between the two parties (thereby ensuring the entire process devolves into a partisan circus).

"[Generative AI] can be disruptive to society, from the arts to medicine to architecture to so many different fields, and it could also potentially harm us and that's why I think we need to take a somewhat different approach,” Lieu told the Washington Post. He views the commission as a way to give lawmakers — the same folks routinely befuddled by TikTok — a bit of "breathing room" in understanding how the cutting-edge technology functions.

Senator Brian Schatz (D-HI) plans to introduce the bill's upper house counterpart, Lieu's team told WaPo, though no timeline for that happening was provided. Lieu also noted that Congress as a whole would do well to avoid trying to pass major legislation on the subject until the commission has had its time. “I just think we need some experts to inform us and just have a little bit of time pass before we put something massive into law,” Lieu said.

Of course, that would then push the passage of any sort of meaningful congressional regulation on generative AI out to 2027, at the very earliest, rather than right now, when we actually need it. Given how rapidly both the technology and the use cases for it have evolved in just the last six months, this study will have its work cut out just keeping pace with the changes, much less convincing the octogenarians running our nation of the potential dangers AI poses to our democracy.

This article originally appeared on Engadget at https://www.engadget.com/lawmakers-seek-blue-ribbon-commission-to-study-impacts-of-ai-tools-152550502.html?src=rss

Senators reintroduce COPPA 2.0 bill to tighten child safety online

Yet more senators are trying to resurrect legislation aimed at protecting kids' online privacy. Senators Bill Cassidy and Ed Markey have reintroduced a "COPPA 2.0" (Children and Teens' Online Privacy Protection Act) bill that would expand and revise the 1998 law to deal with the modern internet, particularly social media.

COPPA 2.0 would bar companies from gathering personal data from teens aged 13 to 16 without their consent. It would ban all targeted advertising to children and teens, and create a "bill of rights" that limits personal info gathering for marketing purposes. The measure would also require a button to let kids and parents delete personal data when it's "technologically feasible."

The sequel potentially makes it easier to take action in the first place. Where COPPA requires direct knowledge that companies are collecting data from kids under 13, 2.0 would cover apps and services that are "reasonably likely" to have children as users. The Federal Trade Commission, meanwhile, would have to establish a division committed to regulating youth marketing and privacy.

Cassidy and Markey portray the bill as necessary to tackle a "mental health crisis" where tech giants allegedly play a role. The politicians argue that social networks amplify teens' negative feelings, pointing to Facebook's own research as evidence.

Social networks have tried to clamp down on misuses of child data. Meta's Facebook and Instagram have limited ad targeting for teens, for instance. However, there have also been concerns that online platforms haven't gone far enough. On top of earlier calls for bans on ad targeting, states like Arkansas and Utah have already passed laws respectively requiring age verification and parental permission for social media. Another Senate bill, the Protecting Kids on Social Media Act, would require parents' approval across the US.

Whether or not COPPA 2.0 makes it to the President's desk for signature isn't clear. The first attempt stalled in committee before the current session of Congress. The reintroduction also comes just as other senators are trying to revive the EARN IT Act (aimed at curbing child sexual abuse material) and the Kids Online Safety Act (meant to fight toxic online content as a whole). All three reintroductions are bipartisan, but they'll need considerably stronger support in the Senate, plus successful equivalents in the House, to become law.

This article originally appeared on Engadget at https://www.engadget.com/senators-reintroduce-coppa-20-bill-to-tighten-child-safety-online-165043087.html?src=rss

House bill would demand disclosure of AI-generated content in political ads

At least one politician wants more transparency in the wake of an AI-generated attack ad. New York Democratic Representative Yvette Clarke has introduced a bill, the REAL Political Ads Act, that would require political ads to disclose the use of generative AI through conspicuous audio or text. The amendment to the Federal Election Campaign Act would also have the Federal Election Commission (FEC) create regulations to enforce this, although the measure would take effect January 1st, 2024 regardless of whether or not rules are in place.

The proposed law would help fight misinformation. Clarke characterizes this as an urgent matter ahead of the 2024 election — generative AI can "manipulate and deceive people on a large scale," the representative says. She believes unchecked use could have a "devastating" effect on elections and national security, and that laws haven't kept up with the technology.

The bill comes just days after Republicans used AI-generated visuals in a political ad speculating what might happen during a second term for President Biden. The ad does include a faint disclaimer that it's "built entirely with AI imagery," but there's a concern that future advertisers might skip disclaimers entirely or lie about past events.

Politicians already hope to regulate AI. California's Rep. Ted Lieu put forward a measure that would regulate AI use on a broader scale, while the National Telecommunications and Information Administration (NTIA) is asking for public input on potential AI accountability rules. Clarke's bill is more targeted and clearly meant to pass quickly.

Whether or not it does isn't certain. The act has to pass a vote in a Republican-led House, and the Senate would need to develop and pass an equivalent bill before the two bodies of Congress reconcile their work and send a law to the President's desk. Even then, the law wouldn't prevent unofficial attempts to fool voters, but it might discourage politicians and action committees from using AI deceptively in their official ads.

This article originally appeared on Engadget at https://www.engadget.com/house-bill-would-demand-disclosure-of-ai-generated-content-in-political-ads-190524733.html?src=rss