Posts with «politics & government» label

Biden Administration will invest $140 million to launch seven new National AI Research Institutes

Ahead of a meeting between Vice President Kamala Harris and the heads of America's four leading AI tech companies — Alphabet, OpenAI, Anthropic and Microsoft — the Biden Administration announced Thursday a sweeping series of planned actions to help mitigate some of the risks that these emerging technologies pose to the American public. That includes $140 million to launch seven new AI R&D centers as part of the National Science Foundation, extracting commitments from leading AI companies to participate in a "public evaluation" of their AI systems at DEFCON 31, and ordering the Office of Management and Budget (OMB) to draft policy guidance for federal employees.

"The Biden Harris administration has been leading on these issues since long before these newest generative AI products debuted last fall," a senior administration official said during a reporters call Wednesday. The Administration unveiled its AI Bill of Rights "blueprint" last October, which sought to "help guide the design, development, and deployment of artificial intelligence (AI) and other automated systems so that they protect the rights of the American public," per a White House press release.

"At a time of rapid innovation, it is essential that we make clear the values we must advance, and the common sense we must protect," the administration official continued. "With [Thursday's announcement] and the blueprint for an AI bill of rights, we've given company and policymakers and the individuals building these technologies, some clear ways that they can mitigate the risks [to consumers]."

While the federal government already has the authority to protect the citizenry and hold companies accountable, as the FTC demonstrated Monday, "there's a lot the federal government can do to make sure we get AI right," the official added — like founding seven new National AI Research Institutes under the NSF. The institutes will coordinate research efforts across academia, the private sector and government to develop ethical and trustworthy AI in fields ranging from climate, agriculture and energy to public health, education and cybersecurity.

"We also need companies and innovators to be our partners in this work," the White House official said. "Tech companies have a fundamental responsibility to make sure their products are safe and secure and that they protect people's rights before they're deployed or made public tomorrow."

To that end, the Vice President is scheduled to meet with tech leaders at the White House on Thursday for what is expected to be a "frank discussion about the risks we see in current and near-term AI development," the official said. "We're also aiming to underscore the importance of their role in mitigating risks and advancing responsible innovation, and will discuss how we can work together to protect the American people from the potential harms of AI so that they can reap the benefits of these new technologies."

The Administration also announced that it has obtained "independent commitment" from more than a half dozen leading AI companies — Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI and Stability AI — to put their AI systems up for public evaluation at DEFCON 31 (August 10-13). There, thousands of attendees will be able to poke and prod at these models to see if they square with the principles and practices of the administration's Blueprint. Finally, in the coming months the OMB will issue guidance to federal employees regarding official use of the technology, help establish specific policies for agencies to follow and allow for public comment before those policies are finalized.

"These are important new steps to come out responsible innovation and to make sure AI improved people's lives, without putting rights and safety at risk," the official noted.

This article originally appeared on Engadget at https://www.engadget.com/biden-administration-will-invest-140-million-to-launch-seven-new-national-ai-research-institutes-090026144.html?src=rss

White House proposes 30 percent tax on electricity used for crypto mining

The Biden administration wants to impose a 30 percent tax on the electricity used by cryptocurrency mining operations, and it has included the proposal in its budget for fiscal year 2024. In a blog post on the White House website, the administration formally introduced the Digital Asset Mining Energy (DAME) excise tax. It explained that it wants to tax cryptomining firms because they aren't paying for the "full cost they impose on others," including environmental pollution and high energy prices.

Crypto mining has "negative spillovers on the environment," the White House continued, and the pollution it generates "falls disproportionately on low-income neighborhoods and communities of color." It added that the operations' "often volatile power consumption " can raise electricity prices for the people around them and cause service interruptions. Further, local power companies are taking a risk if they decide to upgrade their equipment to make their service more stable, since miners can easily move away to another location, even abroad. 

It's no secret that the process of mining cryptocurrency uses up massive amounts of electricity. In April, The New York Times published a report detailing the power used by the 34 large-scale Bitcoin miners in the US that it had identified. Together, just those 34 operations use the same amount of electricity as three million households in the country. The Times explained that most Bitcoin mining took place in China until 2021, when the country banned it, making the United States the new leader. (In the US, New York Governor Kathy Hochul signed legislation restricting crypto mining in the state last year.) Previous reports estimated the electricity consumption related to Bitcoin alone to be more than that of some countries, including Argentina, Norway and the Netherlands.

As Yahoo News noted, there are other industries, such as steel manufacturing, that also use large amounts of electricity but aren't taxed for their energy consumption. In its post, the administration said that cryptomining "does not generate the local and national economic benefits typically associated with businesses using similar amounts of electricity."

Critics believe that the government made this proposal to go after and harm an industry it doesn't support. A Forbes report also suggested that DAME may not be the best solution for the issue, and that taxing the industry's greenhouse gas emissions might be a better alternative. That could encourage mining firms not just to minimize energy use, but also to find cleaner sources of power. It might be difficult to convince the administration to go down that route, though: In its blog post, it said that the "environmental impacts of cryptomining exist even when miners use existing clean power." Apparently, mining operations in communities with hydropower have been observed to reduce the amount of clean power available for use by others. That leads to higher prices and to even higher consumption of electricity from non-clean sources. 

If the proposal becomes law, the government would impose the excise tax in phases: a 10 percent tax on miners' electricity use in the first year, 20 percent in the second and 30 percent from the third year onwards.
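To make the phase-in concrete, here's a minimal sketch of how the tax would scale — assuming, hypothetically, that it is assessed on the dollar cost of electricity consumed and counted by years since enactment (the proposal's exact tax base isn't spelled out here):

```python
# Minimal sketch of the DAME excise tax phase-in described above.
# Assumption (not confirmed by the proposal text): the tax is assessed
# on a miner's dollar spend on electricity, by years since enactment.

def dame_tax_rate(years_since_enactment: int) -> float:
    """Return the phased rate: 10% in year one, 20% in year two, 30% after."""
    if years_since_enactment < 1:
        return 0.0  # tax not yet in effect
    return {1: 0.10, 2: 0.20}.get(years_since_enactment, 0.30)

def dame_tax_owed(electricity_cost_usd: float, years_since_enactment: int) -> float:
    """Tax owed on a mining operation's electricity spend in a given year."""
    return electricity_cost_usd * dame_tax_rate(years_since_enactment)

if __name__ == "__main__":
    # A hypothetical miner spending $1 million a year on electricity:
    for year in (1, 2, 3, 4):
        print(f"Year {year}: ${dame_tax_owed(1_000_000, year):,.0f}")
    # Prints $100,000, $200,000, then $300,000 from year three onwards.
```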

This article originally appeared on Engadget at https://www.engadget.com/white-house-proposes-30-percent-tax-on-electricity-used-for-crypto-mining-090342986.html?src=rss

Bipartisan Senate group reintroduces a revised Kids Online Safety Act

US Senators Richard Blumenthal (D-CT) and Marsha Blackburn (R-TN) reintroduced a bill today that would put the onus on social media companies to add online safeguards for children. The Kids Online Safety Act (KOSA) was first introduced last February (sponsored by the same pair) but never made it to the Senate floor after backlash from advocacy groups. The revamped legislation “provides specific tools to stop Big Tech companies from driving toxic content at kids and to hold them accountable for putting profits over safety,” said Blumenthal. It follows a separate bill introduced last month with a similar aim.

Like the original KOSA, the updated bill would require annual independent audits by “experts and academic researchers” to force regulation-averse social media companies to address the online dangers posed to children. However, the updated legislation attempts to address the concerns that led to its previous iteration’s downfall, namely that its overly broad nature could do more harm than good by requiring surveillance and censorship of young users. The EFF described the February 2022 bill as “a heavy-handed plan to force platforms to spy on young people” that “fails to properly distinguish between harmful and non-harmful content, leaving politically motivated state attorneys general with the power to define what harms children.” One of the primary fears is that states could use the flimsy definitions to ban content for political gain.

The rewritten bill adds new protections for services like the National Suicide Hotline, LGBTQ+ youth centers and substance-abuse organizations so they aren’t inadvertently harmed by the new rules. In addition, it would make social platforms give minors options to safeguard their information, turn off addictive features and opt out of algorithmic recommendations. (Social platforms would have to enable the strongest settings by default.) It would also give parents “new controls to help support their children and identify harmful behaviors” while offering children “a dedicated channel to report harms” on the platform. Additionally, it would specifically ban the promotion of suicide, eating disorders, substance abuse, sexual exploitation and the use of “unlawful products for minors” like gambling, drugs and alcohol. Finally, it would require social companies to provide “academic and public interest organizations” with data to help them research social media’s effects on the safety and well-being of minors.

The American Psychological Association, Common Sense Media and other advocacy groups support the updated bill. It has 26 cosponsors from both parties, including lawmakers ranging from Dick Durbin (D-IL) and Sheldon Whitehouse (D-RI) to Chuck Grassley (R-IA) and Lindsey Graham (R-SC). Blackburn told CNBC today that Senate Majority Leader Chuck Schumer (D-NY) is “a hundred percent behind this bill and efforts to protect kids online.”

Despite the Senators’ renewed optimism about passing the bill, some organizations believe it’s still too broad to avoid a negative net impact. “The changes made to the bill do not at all address our concerns,” Evan Greer, director of digital rights advocacy group Fight For the Future, said in an emailed statement to Engadget. “If Senator Blumenthal’s office had been willing to meet with us, we could have explained why. I can see where changes were made that attempt to address the concerns, but they fail to do so. Even with the new changes, this bill will allow extreme right-wing attorneys general to dictate what content platforms can recommend to younger users.”

The ACLU also opposes the resurrected bill. “KOSA’s core approach still threatens the privacy, security and free expression of both minors and adults by deputizing platforms of all stripes to police their users and censor their content under the guise of a ‘duty of care,’” ACLU Senior Policy Counsel Cody Venzke told CNBC. “To accomplish this, the bill would legitimize platforms’ already pervasive data collection to identify which users are minors when it should be seeking to curb those data abuses. Moreover, parental guidance in minors’ online lives is critical, but KOSA would mandate surveillance tools without regard to minors’ home situations or safety. KOSA would be a step backward in making the internet a safer place for children and minors.”

Blumenthal argues that the bill was “very purposely narrowed” to prevent harm. “I think we’ve met that kind of suggestion very directly and effectively,” he said at a press conference. “Obviously, our door remains open. We’re willing to hear and talk to other kinds of suggestions that are made. And we have talked to many of the groups that had great criticism and a number have actually dropped their opposition, as I think you’ll hear in response to today’s session. So I think our bill is clarified and improved in a way that meets some of the criticism. We’re not going to solve all of the problems of the world with a single bill. But we are making a measurable, very significant start.”

This article originally appeared on Engadget at https://www.engadget.com/bipartisan-senate-group-reintroduces-a-revised-kids-online-safety-act-212117992.html?src=rss

House bill would demand disclosure of AI-generated content in political ads

At least one politician wants more transparency in the wake of an AI-generated attack ad. Representative Yvette Clarke (D-NY) has introduced a bill, the REAL Political Ads Act, that would require political ads to disclose the use of generative AI through conspicuous audio or text. The amendment to the Federal Election Campaign Act would also have the Federal Election Commission (FEC) create regulations to enforce this, although the measure would take effect January 1st, 2024 regardless of whether or not rules are in place.

The proposed law would help fight misinformation. Clarke characterizes this as an urgent matter ahead of the 2024 election — generative AI can "manipulate and deceive people on a large scale," the representative says. She believes unchecked use could have a "devastating" effect on elections and national security, and that laws haven't kept up with the technology.

The bill comes just days after Republicans used AI-generated visuals in a political ad speculating what might happen during a second term for President Biden. The ad does include a faint disclaimer that it's "built entirely with AI imagery," but there's a concern that future advertisers might skip disclaimers entirely or lie about past events.

Politicians already hope to regulate AI. California's Rep. Ted Lieu put forward a measure that would regulate AI use on a broader scale, while the National Telecommunications and Information Administration (NTIA) is asking for public input on potential AI accountability rules. Clarke's bill is more targeted and clearly meant to pass quickly.

Whether or not it does isn't certain. The act has to pass a vote in a Republican-led House, and the Senate would need to develop and pass an equivalent bill before the two bodies of Congress reconcile their work and send a law to the President's desk. Success also won't prevent unofficial attempts to deceive voters. Still, this might discourage politicians and political action committees from using AI to fool voters.

This article originally appeared on Engadget at https://www.engadget.com/house-bill-would-demand-disclosure-of-ai-generated-content-in-political-ads-190524733.html?src=rss

The White House is examining how companies use AI to monitor workers

The Biden administration is preparing to examine how companies use artificial intelligence to monitor and manage workers. According to Bloomberg, the White House will publish a blog post later today that invites American workers to share how automated tools are being used in their workplaces.

“While these technologies can benefit both workers and employers in some cases, they can also create serious risks to workers,” the post states, per Bloomberg. “The constant tracking of performance can push workers to move too fast on the job, posing risks to their safety and mental health.” Citing media reports, the White House adds the technology has also been used to deter workers from organizing their workplaces and to perpetuate pay and discipline discrimination.

The blog post calls for input from a variety of stakeholders, including researchers, advocacy groups and even employers. Notably, the Biden administration says it wants to know what regulations and enforcement action the federal government should implement to address the “economic, safety, physical, mental and emotional impacts” of workplace surveillance tech.

The call for information comes after a handful of states passed laws against unreasonable productivity quotas. Specifically, New York’s Warehouse Worker Protection Act grants workers the right to request information on their quota at any time. It also prohibits companies from imposing productivity demands that interfere with an employee’s state-mandated meal and restroom breaks.

This article originally appeared on Engadget at https://www.engadget.com/the-white-house-is-examining-how-companies-use-ai-to-monitor-workers-174217114.html?src=rss

US lawmakers introduce bill to prevent AI-controlled nuclear launches

Bipartisan US lawmakers from both chambers of Congress introduced legislation this week that would formally prohibit AI from launching nuclear weapons. Although Department of Defense policy already states that a human must be “in the loop” for such critical decisions, the new bill — the Block Nuclear Launch by Autonomous Artificial Intelligence Act — would codify that policy, preventing the use of federal funds for an automated nuclear launch without “meaningful human control.”

Aiming to protect “future generations from potentially devastating consequences,” the bill was introduced by Senator Ed Markey (D-MA) and Representatives Ted Lieu (D-CA), Don Beyer (D-VA) and Ken Buck (R-CO). Senate co-sponsors include Jeff Merkley (D-OR), Bernie Sanders (I-VT), and Elizabeth Warren (D-MA). “As we live in an increasingly digital age, we need to ensure that humans hold the power alone to command, control, and launch nuclear weapons – not robots,” said Markey. “That is why I am proud to introduce the Block Nuclear Launch by Autonomous Artificial Intelligence Act. We need to keep humans in the loop on making life or death decisions to use deadly force, especially for our most dangerous weapons.”

Artificial intelligence chatbots (like the ever-popular ChatGPT, the more advanced GPT-4 and Google Bard), image generators and voice cloners have taken the world by storm in recent months. (Republicans are already using AI-generated images in political attack ads.) Various experts have voiced concerns that, if left unregulated, humanity could face grave consequences. “Lawmakers are often too slow to adapt to the rapidly changing technological environment,” Cason Schmit, Assistant Professor of Public Health at Texas A&M University, told The Conversation earlier this month. Although the federal government hasn’t passed any AI-based legislation since the proliferation of AI chatbots, a group of tech leaders and AI experts signed a letter in March requesting an “immediate” six-month pause on developing AI systems beyond GPT-4. Additionally, the Biden administration recently opened comments seeking public feedback about possible AI regulations.

“While we all try to grapple with the pace at which AI is accelerating, the future of AI and its role in society remains unclear,” said Rep. Lieu. “It is our job as Members of Congress to have responsible foresight when it comes to protecting future generations from potentially devastating consequences. That’s why I’m pleased to introduce the bipartisan, bicameral Block Nuclear Launch by Autonomous AI Act, which will ensure that no matter what happens in the future, a human being has control over the employment of a nuclear weapon – not a robot. AI can never be a substitute for human judgment when it comes to launching nuclear weapons.”

Given the current political climate in Washington, passing even the most common-sense of bills isn’t guaranteed. Nevertheless, perhaps a proposal as fundamental as “don’t let computers decide to obliterate humanity” will serve as a litmus test for how prepared the US government is to deal with this quickly evolving technology.

This article originally appeared on Engadget at https://www.engadget.com/us-lawmakers-introduce-bill-to-prevent-ai-controlled-nuclear-launches-184727260.html?src=rss

Canada's controversial streaming bill just became law

Canada has passed its controversial streaming bill that requires Netflix, Spotify and other companies to pay to support Canadian series, music and other content, the CBC has reported. After clearing a final hurdle in the Senate on Thursday, Bill C-11 imposes the same content laws on streamers as it does on traditional broadcasters. The government has promised that the bill only applies to companies and not individual content creators on YouTube or other platforms.

The new rules give the Canadian Radio-television and Telecommunications Commission (CRTC) broad powers over streaming companies, which could face fines or other penalties if they don't comply with the new laws. "Online streaming has changed how we create, discover, and consume our culture, and it's time we updated our system to reflect that," a Canadian government press release states.

Critics have said that the bill could cause over-regulation online. "Under this archaic system of censorship, government gatekeepers will now have the power to control which videos, posts and other content Canadians can see online," Canada's Conservative opposition wrote on a web page dedicated to C-11. Streaming companies like YouTube and TikTok opposed the bill as well. 

The law has also been criticized for being overly broad, with a lack of clarity on how it will apply in some cases. "The bill sets out a revised broadcasting policy for Canada, which includes an expanded list of things the Canadian broadcasting system 'should' do," a Senate page states. "But precisely what this would mean in concrete terms for broadcasters is not yet known." 

Canada is far from the first country to enact local content rules for streaming companies, though. The EU requires a minimum of 30 percent locally produced content for member nations, most of which easily exceed that. Australia also recently announced that content quotas will be placed on Netflix, Disney+, Prime Video and the other international streamers by July of 2024.

Some notable Canadian series include Schitt's Creek, Letterkenny and M'entends-tu. Numerous US and international shows are also shot in "Hollywood North" in cities like Montreal, Toronto and Vancouver, including The Handmaid's Tale, The Boys, Riverdale and others.

This article originally appeared on Engadget at https://www.engadget.com/canadas-controversal-streaming-bill-just-became-law-065036243.html?src=rss

SpaceX’s Starship launch caused a fire in a Texas state park

After a string of delays and a scrubbed launch attempt, SpaceX finally conducted the first test flight of its Starship spacecraft earlier this month. While the vehicle got off the ground, it seems federal agencies will be dealing with the explosive fallout of the mission for quite some time.

Federal agencies say the launch led to a 3.5-acre fire on state park land. The blaze was extinguished. Debris from the rocket, which SpaceX said it had to blow up in the sky for safety reasons after a separation failure, was found across hundreds of acres of land. “Although no debris was documented on refuge fee-owned lands, staff documented approximately 385 acres of debris on SpaceX’s facility and at Boca Chica State Park,” the Texas arm of the US Fish and Wildlife Service told Bloomberg.

The agency noted it hasn’t found evidence of dead wildlife as a result of the incident. Still, it’s working with the Federal Aviation Administration on a site assessment and post-launch recommendations, while ensuring compliance with the Endangered Species Act.

Soon after the launch and Starship’s explosion, the FAA said it was carrying out a mishap investigation. Starship is grounded for now and its return to flight depends on the agency “determining that any system, process or procedure related to the mishap does not affect public safety.”

Starship’s approved launch plan included an anomaly response process, which the FAA says was triggered after the spacecraft blew up. As such, SpaceX is required to remove debris from sensitive habitats, carry out a survey of wildlife and vegetation and send reports to several federal agencies. “The FAA will ensure SpaceX complies with all required mitigations,” the agency told Bloomberg.

Even if SpaceX can swiftly assuage federal agencies' concerns, it may be quite some time until the next Starship launch. The super heavy-lift space launch vehicle destroyed its launch pad, sending chunks of debris into the air. Footage showed the shrapnel landing on a nearby beach and even hitting a van hundreds of yards from the launch site. Fortunately, no one was hurt, according to the FAA.

This article originally appeared on Engadget at https://www.engadget.com/spacex-starship-launch-caused-a-fire-in-a-texas-state-park-165630774.html?src=rss

Republicans attack Biden with a fully AI-generated ad

It's not a huge surprise that the Republican National Committee (RNC) had attack ads ready to go whenever President Joe Biden officially announced his re-election campaign. What's novel this time is that the video uses imagery generated by artificial intelligence to present the RNC's vision of what the world may look like if Biden wins again in 2024. 

The RNC told Axios it was the first time it had used a video that was made entirely with AI. The ad starts by depicting Biden and Vice President Kamala Harris at an election victory party. Although there's a faint disclaimer in the top-left corner noting that the ad was "built entirely with AI imagery," there's a dead giveaway that it's not a real photo of Biden and Harris — both of the smiling, AI-generated figures have far too many teeth.

The ad goes on to depict several domestic and international incidents that the RNC suggests might happen if the Biden-Harris ticket wins again. "This morning, an emboldened China invades Taiwan," a fake news announcer says, for instance. The ad goes on to stoke fears of a financial crisis prompted by the closures of hundreds of regional banks, as well as border agents being overrun by asylum seekers and the military taking over San Francisco due to "escalating crime and the fentanyl crisis."

This particular ad doesn't stray too far from the kinds of talking points one might expect Republicans to hit in an attack ad. But the video is a sobering bellwether of what we may see more of from political campaigns in the months and years to come. It's not difficult to imagine AI-generated images depicting outright falsehoods in attack ads. 

This article originally appeared on Engadget at https://www.engadget.com/republicans-attack-biden-with-a-fully-ai-generated-ad-184055192.html?src=rss

Supreme Court will decide if government officials can block social media critics

The Supreme Court will soon hear two cases that could decide whether or not government figures can block their critics on social networks. The court has agreed to tackle appeals from California and Michigan residents who claim officials violated First Amendment free speech rights by blocking them on social media in response to critical commentary.

In California, Christopher and Kimberly Garnier believe Poway Unified School District members Michelle O'Connor-Ratcliff and TJ Zane unfairly blocked them on Facebook and Twitter for writing hundreds of critical comments on talking points like school budgets and race. Michigan's Kevin Lindke, meanwhile, says City Manager James Freed violated his rights by blocking him on Facebook over criticism regarding the pandemic.

The cases have had different outcomes so far. A federal judge sided with the Garniers in 2021, and an appeals court upheld the decision, noting that O'Connor-Ratcliff and Zane both used their social accounts in an official role. However, the federal judge in the other case ruled for Freed in 2021, and Freed prevailed again on appeal in 2022. Freed wasn't acting as City Manager when he blocked Lindke, the judges found.

Cases like this took the spotlight in 2019, when then-President Trump and Rep. Alexandria Ocasio-Cortez both faced accusations that they violated free speech rights by blocking critics. To date, courts have typically ruled based on whether or not officials are using their accounts for official business. Even a personal account used for official activity amounts to a public space where criticism must be allowed, a federal appeals court found when hearing Trump's case. These issues haven't reached the Supreme Court until now. The court's decisions could settle the question and force officials to allow critics so long as the posts don't amount to harassment or threats.

This article originally appeared on Engadget at https://www.engadget.com/supreme-court-will-decide-if-government-officials-can-block-social-media-critics-155717504.html?src=rss