Posts with «legislative branch» label

Tech CEOs are set to testify in a Senate online child sexual exploitation hearing in December

The Senate Judiciary Committee will hold a hearing on online child sexual exploitation on December 6 and the CEOs of major tech companies are set to testify. The committee expects Meta CEO Mark Zuckerberg and his counterpart at TikTok, Shou Zi Chew, to testify voluntarily. It also wants to hear from the CEOs of X (formerly Twitter), Discord and Snap, and it has issued subpoenas to them.

"Big Tech’s failure to police itself at the expense of our kids cannot go unanswered," committee chair Sen. Dick Durbin (D-IL) and ranking member Sen. Lindsey Graham (R-SC) said in a joint statement, as Reuters reports. "I’m hauling in Big Tech CEOs before the Senate Judiciary Committee to testify on their failure to protect kids online," Durbin wrote on X.

JUST ANNOUNCED: Senate Judiciary Committee will press Big Tech CEOs on their failures to protect kids at hearing on Dec 6.

Subpoenas issued to CEOs of Discord, Snap, & X. Committee remains in discussion w/ Meta, TikTok—expects their CEOs will agree to testify voluntarily.

— Senate Judiciary Committee (@JudiciaryDems) November 20, 2023

According to the committee, X and Discord refused to accept service of the subpoenas on their CEOs' behalf, "requiring the committee to enlist the assistance of the US Marshals Service" to serve them personally. "We have been working in good faith to participate in the Judiciary committee’s hearing on child protection online as safety is our top priority at X," Wifredo Fernandez, head of US and Canada government affairs at X, told Engadget in a statement. "Today we are communicating our updated availability to participate in a hearing on this important issue." Engadget has also contacted Discord for comment.

The issue of tech platforms allegedly facilitating harms against kids has become increasingly pressing. Earlier this month, former Meta executive Arturo Béjar testified that Zuckerberg failed to respond to his email detailing concerns about harms facing children on the company's platforms. Senators then demanded documents from the company's CEO "related to senior executives’ knowledge of the mental and physical health harms associated with its platforms, including Facebook and Instagram."


The Supreme Court will hear social media cases with immense free speech implications

On Friday, the US Supreme Court agreed to take on two landmark social media cases with enormous implications for online speech, as reported by The Washington Post. The conservative-dominated court will determine if laws passed by Texas and Florida are violating First Amendment rights by requiring social platforms to host content they would otherwise block.

Tech industry groups representing companies including Meta, X (formerly Twitter) and Google say the laws are unconstitutional and violate private companies’ First Amendment rights. “Telling private websites they must give equal treatment to extremist hate isn’t just unwise, it is unconstitutional, and we look forward to demonstrating that to the Court,” Matt Schruers of the Computer & Communications Industry Association (CCIA), one of the trade associations challenging the legislation, told The Washington Post. The CCIA called the order “encouraging.”

The groups representing the tech companies contesting the laws say platforms would be at legal risk for removing violent or hateful content, propaganda from hostile governments and spam. However, leaving the content online could be bad for their bottom lines as they would risk advertiser and user boycotts.

Supporters of the Republican-sponsored state laws claim that social media companies are biased against conservatives and are illegally censoring their views. “These massive corporate entities cannot continue to go unchecked as they silence the voices of millions of Americans,” said Texas Attorney General Ken Paxton (R), who recently survived an impeachment trial accusing him of abuses of office, bribery and corruption. Appeals courts (all with Republican-appointed judges) have issued conflicting rulings on the laws.

The US Supreme Court voted five to four in 2022 to put the Texas law on hold while the legal sparring continued. Chief Justice John Roberts and Justices Stephen Breyer, Sonia Sotomayor, Brett Kavanaugh and Amy Coney Barrett voted to prevent the law from taking effect, while Justices Samuel Alito, Clarence Thomas, Elena Kagan and Neil Gorsuch dissented from the temporary hold. Alito (joined by Thomas and Gorsuch) said he hadn’t decided on the law’s constitutionality but would have let it stand in the interim. Kagan, though she also dissented, didn’t sign off on Alito’s statement or provide separate reasoning.

The Biden administration is against the laws. “The act of culling and curating the content that users see is inherently expressive, even if the speech that is collected is almost wholly provided by users,” Solicitor General Elizabeth B. Prelogar said to the justices. “And especially because the covered platforms’ only products are displays of expressive content, a government requirement that they display different content — for example, by including content they wish to exclude or organizing content in a different way — plainly implicates the First Amendment.”


California governor vetoes bill for obligatory human operators in autonomous trucks

California Gov. Gavin Newsom has blocked a bill that would have required autonomous trucks weighing more than 10,000 pounds (4,536kg) to have human safety drivers on board while operating on public roads. The governor said in a statement that the legislation, which California Senate members passed in a 36-2 vote, was unnecessary. Newsom believes existing laws are sufficient to ensure there's an "appropriate regulatory framework."

The governor noted that, under a 2012 law, the state's Department of Motor Vehicles collaborates with the National Highway Traffic Safety Administration, California Highway Patrol and other relevant bodies "to determine the regulations necessary for the safe operation of autonomous vehicles on public roads.” Newsom added that the DMV is committed to making sure rules keep up with the pace of evolving autonomous vehicle tech. "DMV continuously monitors the testing and operations of autonomous vehicles on California roads and has the authority to suspend or revoke permits as necessary to protect the public's safety," his veto message reads.

Newsom, who has a reputation for being friendly to the tech industry, reportedly faced pressure within his administration not to sign the bill. The state's Office of Business and Economic Development warned that the proposed law would prompt companies working on self-driving tech to move out of California.

On the other hand, as the Associated Press notes, California Labor Federation head Lorena Gonzalez Fletcher estimates that not requiring human drivers in trucks would cost around 250,000 jobs. “We will not sit by as bureaucrats side with tech companies, trading our safety and jobs for increased corporate profits," Fletcher, who called autonomous trucks dangerous, said in a statement. "We will continue to fight to make sure that robots do not replace human drivers and that technology is not used to destroy good jobs.”


House and Senate bills aim to protect journalists' data from government surveillance

News gatherers in the US may soon have safeguards against government attempts to comb through their data. Bipartisan House and Senate groups have reintroduced the Protect Reporters from Exploitative State Spying (PRESS) Act, legislation that limits the government's ability to compel data disclosures that might identify journalists' sources. The Senate bill would extend disclosure exemptions and standards to cover email, phone records and other information held by third parties.

The PRESS Act would also require that the federal government give journalists a chance to respond to data requests. Courts could still demand disclosure if it's necessary to prevent terrorism, identify terrorists or prevent serious "imminent" violence. The Senate bill is the work of Senators Richard Durbin, Mike Lee and Ron Wyden, while the House equivalent comes from Representatives Kevin Kiley and Jamie Raskin.

Sponsors characterize the bill as vital to protecting First Amendment press freedoms. Anonymous source leaks help keep the government accountable, Wyden says. He adds that surveillance like this can deter reporters and sources worried about retaliation. Lee, meanwhile, says the Act will also maintain the public's "right to access information" and help it participate in a representative democracy.

The senators point to instances from both Democratic and Republican administrations where law enforcement subpoenaed data in a bid to catch sources. Most notably, the Justice Department under Trump is known to have seized call records and email logs from major media outlets like CNN and The New York Times following an April 2017 report on how former FBI director James Comey handled investigations during the 2016 presidential election.

Journalist shield laws exist in 48 states and the District of Columbia, but there's no federal law. That void lets the Justice Department and other government bodies quietly grab data from telecoms and other providers. The PRESS Act theoretically patches that hole and minimizes the chances of abuse.

There's no guarantee the PRESS Act will reach President Biden's desk and become law. However, backers in both chambers are betting that bipartisan support will help. The House version passed "unanimously" in the previous session of Congress, Wyden's office says.


Lawmakers seek 'blue-ribbon commission' to study impacts of AI tools

The wheels of government have finally begun to turn on the issue of generative AI regulation. US Representatives Ted Lieu (D-CA) and Ken Buck (R-CO) introduced legislation on Monday that would establish a 20-person commission to study ways to “mitigate the risks and possible harms” of AI while “protecting” America's position as a global technology power. 

The bill would require the Executive branch to appoint experts from throughout government, academia and industry to conduct the study over the course of two years, producing three reports during that period. The president would appoint eight members of the committee, while Congress, in an effort "to ensure bipartisanship," would split the remaining 12 positions evenly between the two parties (thereby ensuring the entire process devolves into a partisan circus).

"[Generative AI] can be disruptive to society, from the arts to medicine to architecture to so many different fields, and it could also potentially harm us and that's why I think we need to take a somewhat different approach,” Lieu told the Washington Post. He views the commission as a way to give lawmakers — the same folks routinely befuddled by TikTok — a bit of "breathing room" in understanding how the cutting-edge technology functions.

Senator Brian Schatz (D-HI) plans to introduce the bill's upper house counterpart, Lieu's team told WaPo, though no timeline for that happening was provided. Lieu also noted that Congress as a whole would do well to avoid trying to pass major legislation on the subject until the commission has had its time. “I just think we need some experts to inform us and just have a little bit of time pass before we put something massive into law,” Lieu said.

Of course, that would push the passage of any meaningful congressional regulation of generative AI out to 2027 at the very earliest, rather than right now, when we actually need it. Given how rapidly both the technology and its use cases have evolved in just the last six months, the study will have its work cut out for it just keeping pace with the changes, much less convincing the octogenarians running our nation of the potential dangers AI poses to our democracy.


Senators reintroduce COPPA 2.0 bill to tighten child safety online

Yet more senators are trying to resurrect legislation aimed at protecting kids' online privacy. Senators Bill Cassidy and Ed Markey have reintroduced a "COPPA 2.0" (Children and Teens' Online Privacy Protection Act) bill that would expand and revise the 1998 law to deal with the modern internet, particularly social media.

COPPA 2.0 would bar companies from gathering personal data from teens aged 13 to 16 without their consent. It would ban all targeted advertising to children and teens, and create a "bill of rights" that limits personal info gathering for marketing purposes. The measure would also require a button to let kids and parents delete personal data when it's "technologically feasible."

The sequel potentially makes it easier to take action in the first place. Where COPPA requires direct knowledge that companies are collecting data from kids under 13, 2.0 would cover apps and services that are "reasonably likely" to have children as users. The Federal Trade Commission, meanwhile, would have to establish a division committed to regulating youth marketing and privacy.

Cassidy and Markey portray the bill as necessary to tackle a "mental health crisis" where tech giants allegedly play a role. The politicians argue that social networks amplify teens' negative feelings, pointing to Facebook's own research as evidence.

Social networks have tried to clamp down on misuses of child data. Meta's Facebook and Instagram have limited ad targeting for teens, for instance. However, there have also been concerns that online platforms haven't gone far enough. On top of earlier calls for bans on ad targeting, states like Arkansas and Utah have already passed laws respectively requiring age verification and parental permission for social media. Another Senate bill, the Protecting Kids on Social Media Act, would require parents' approval across the US.

Whether or not COPPA 2.0 makes it to the President's desk for signature isn't clear. The first attempt got stuck in committee ahead of the current Congress session. It also comes right as other senators are making attempts to revive the EARN IT Act (aimed at curbing child sexual abuse material) and the Kids Online Safety Act (meant to fight toxic online content as a whole). All three reintroductions are bipartisan, but they'll need considerably stronger support in the Senate, plus successful equivalents in the House, to become law.


House bill would demand disclosure of AI-generated content in political ads

At least one politician wants more transparency in the wake of an AI-generated attack ad. New York Democratic Representative Yvette Clarke has introduced a bill, the REAL Political Ads Act, that would require political ads to disclose the use of generative AI through conspicuous audio or text. The amendment to the Federal Election Campaign Act would also have the Federal Election Commission (FEC) create regulations to enforce this, although the measure would take effect January 1st, 2024 regardless of whether rules are in place.

The proposed law would help fight misinformation. Clarke characterizes this as an urgent matter ahead of the 2024 election — generative AI can "manipulate and deceive people on a large scale," the representative says. She believes unchecked use could have a "devastating" effect on elections and national security, and that laws haven't kept up with the technology.

The bill comes just days after Republicans used AI-generated visuals in a political ad speculating what might happen during a second term for President Biden. The ad does include a faint disclaimer that it's "built entirely with AI imagery," but there's a concern that future advertisers might skip disclaimers entirely or lie about past events.

Politicians already hope to regulate AI. California Rep. Ted Lieu put forward a measure that would regulate AI use on a broader scale, while the National Telecommunications and Information Administration (NTIA) is asking for public input on potential AI accountability rules. Clarke's bill is more targeted and clearly meant to pass quickly.

Whether it does isn't certain. The act has to pass a vote in a Republican-led House, and the Senate would need to develop and pass an equivalent bill before the two bodies of Congress reconcile their work and send a law to the President's desk. Success also wouldn't prevent unofficial attempts to fool voters. Still, the measure might discourage politicians and action committees from using AI to deceive the public.


The EARN IT Act will be introduced to Congress for the third time

The controversial EARN IT Act, first introduced in 2020, is returning to Congress after failing twice to land on the president’s desk. The Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act is intended to minimize the proliferation of Child Sexual Abuse Material (CSAM) throughout the web, but detractors say it goes too far and risks further eroding online privacy protections.

Here's how it would work, according to the language of the bill's reintroduction last year. Upon passing, EARN IT would create a national commission composed of politically-appointed law enforcement specialists. This body would be tasked with making a list of best practices to ostensibly curb the digital distribution of CSAM. If online service providers do not abide by these best practices, they would potentially lose blanket immunity under Section 230 of the Communications Decency Act, opening them up to all kinds of legal hurdles — including civil lawsuits and criminal charges.

Detractors say EARN IT places a great deal of power to regulate the internet in the hands of the commission the bill would create, as well as state legislatures. Additionally, language in last year's bill suggests that these guidelines would likely extend to encrypted information, so if an encrypted transmission ran afoul of any guidelines, the platform would be on the hook. That would effectively force providers to monitor encrypted communications, defeating the purpose of encryption. And because end-to-end encryption is designed so that not even the platform can read the contents, providers might not be able to offer those protections at all.

“This was a dangerous bill two years ago, and because it’s doubled down on its anti-encryption stance, it’s even more dangerous now,” The Center for Internet and Society at Stanford Law School wrote in a blog post last year, a stance also mirrored by the Center for Democracy and Technology. The American Civil Liberties Union, pushing back on a prior version of the bill, said that it "threatens our online speech and privacy rights in ways that will disproportionately harm LGBTQ people, sex workers and others who use the internet to privately communicate and share information and resources."

The Rape, Abuse & Incest National Network (RAINN) has come out in defense of the bill, saying that it will “incentivize technology companies to proactively search for and remove” CSAM materials. “Tech companies have the technology to detect, remove, and stop the distribution of child sexual abuse material. However, there is no incentive to do so because they are subject to no consequences for their inaction,” wrote Erin Earp, RAINN’s interim vice president for public policy.

The bipartisan Senate bills have consistently been introduced by Republican Senator Lindsey Graham and Democratic Senator Richard Blumenthal, and their companion bills in the House have likewise been sponsored by Republican Representative Ann Wagner and Democratic Representative Sylvia Garcia. The full text of H.R.2732 is not publicly available yet, so it's unclear if anything has changed since last year's attempt, though when reintroduced last year it was largely the same. (We've reached out to the offices of Reps. Wagner and Garcia for a copy of the bill's text.) A member of Senator Graham's office confirmed to Engadget that the companion bill will be introduced within the next week. It also remains to be seen if and when this will come up for a vote. Both prior versions of EARN IT died in committee before ever coming to a vote.


Legislation to ban government use of facial recognition hits Senate for the third time

Biometric technology may make it easy to unlock your phone, but Democratic lawmakers have long cautioned against the use of facial recognition and biometrics by law enforcement. Not only have researchers documented instances of racial and gender bias in such systems, false positives have even led to real instances of wrongful arrest. That's why lawmakers have reintroduced the Facial Recognition and Biometric Technology Moratorium Act. This marks the third time the bill has been introduced in the Senate; despite introductions in 2020 and 2021, it never advanced to a vote.

If passed, the Facial Recognition and Biometric Technology Moratorium Act would outright ban any use of facial recognition or biometric surveillance by the federal government unless that use is explicitly approved by an Act of Congress. That approval itself would be pretty limited: It would need to define who was allowed to use biometric surveillance, the exact type of biometric surveillance they would be using and the specific purpose it would be used for. Approval would also come with further restrictions, such as adhering to minimum accuracy rates meant to avoid false positives in the rare instances when use of the technology is approved.

The bill also aims to encourage local and state governments to follow its lead, with a clause that would tie some federal funding for local law enforcement to compliance with a "substantially similar" ban on facial recognition and biometrics.

While the bill hasn't had much luck making it to the floor of either chamber of Congress, some states and local governments have been banning facial recognition technology on their own. In 2020, Portland, Oregon, put strict guardrails on the use of facial recognition technology. New York State and Massachusetts have also put restrictions on the use of biometrics. Even the IRS walked back plans to use facial recognition for identity verification purposes.

That sounds encouraging for the re-introduced bill, but that momentum isn't universal: Law enforcement still sees biometrics as a useful tool for investigating crime, and the TSA has been testing systems that compare travelers to the photo on their passport or driver's license.


House committee approves bill that could lead to a TikTok ban in the US

The House Foreign Affairs Committee has voted to advance legislation that would give President Joe Biden the power to ban TikTok in the US, along with other apps owned by Chinese companies. The panel approved the Deterring America’s Technological Adversaries (DATA) Act in a 24-16 vote. All Republicans on the panel were in favor, while every Democrat voted against the bill.

There are several more steps that the bill needs to go through before it becomes law. The full House and the Senate would have to pass it, and Biden would have to sign the bill. Still, it's a notable step forward for the latest attempt to ban TikTok in the US entirely.

Republican committee chair Michael McCaul introduced the DATA Act (PDF) only last week. McCaul expects the bill to go to a full House vote later this month, according to Reuters.

The legislation would grant the president the power to enact sanctions, including bans, on any company that the Treasury Secretary deems "knowingly provides or may transfer sensitive personal data of persons subject to United States jurisdiction to any foreign person that is subject to the jurisdiction or direction" of China. The same applies to a foreign person or company that "is owned by, directly or indirectly controlled by, or is otherwise subject to the influence of China."

Democratic members of the Foreign Affairs Committee claimed that the legislation was too broad. It would "damage our allegiances across the globe, bring more companies into China's sphere, destroy jobs here in the United States and undercut core American values of free speech and free enterprise," Rep. Gregory Meeks, the ranking Democrat member, said. He suggested that the legislation as is could lead to sanctions against businesses in Korea and Taiwan that supply semiconductors and other parts to Chinese companies.


"A US ban on TikTok is a ban on the export of American culture and values to the billion-plus people who use our service worldwide," TikTok wrote on Twitter. "We're disappointed to see this rushed piece of legislation move forward, despite its considerable negative impact on the free speech rights of millions of Americans who use and love TikTok."

"Congress must not censor entire platforms and strip Americans of their constitutional right to freedom of speech and expression," American Civil Liberties Union senior policy counsel Jenna Leventoff said in a statement. "Whether we’re discussing the news of the day, live streaming protests, or ​​even watching cat videos, we have a right to use TikTok and other platforms to exchange our thoughts, ideas, and opinions with people around the country and around the world." Leventoff called the bill "vague, overbroad and unconstitutional."

TikTok has faced a growing backlash in recent months over concerns that the Chinese government may obtain user data from the app. Owner ByteDance is headquartered in Beijing, but TikTok claims it doesn't share data with the Chinese government. By last summer, TikTok was routing all US data to Oracle servers based in the country. It pledged to delete US users' private data from its own servers.

Nevertheless, the US government has banned the app from federally owned devices, this week giving agencies 30 days to make sure it's gone from phones and tablets they operate. Most US states, the European Union, Canada and Quebec are also preventing their employees from using TikTok on state-owned devices.

TikTok has been trying for years to convince US officials that it's not a threat to national security in an attempt to avert a complete ban. The company's CEO Shou Zi Chew is set to testify before the Energy and Commerce Committee on March 23rd to discuss privacy, as well as TikTok's influence on kids and its links to China.
