Posts with «politics & government» label

Biden signs executive order to stop Russia and China from buying Americans’ personal data

President Joe Biden has signed an executive order that aims to limit the mass-sale of Americans’ personal data to “countries of concern,” including Russia and China. The order specifically targets the bulk sale of geolocation, genomic, financial, biometric, health and other personally identifying information.

During a briefing with reporters, a senior administration official said that the sale of such data to these countries poses a national security risk. “Our current policies and laws leave open access to vast amounts of American sensitive personal data,” the official said. “Buying data through data brokers is currently legal in the United States, and that reflects a gap in our national security toolkit that we are working to fill with this program.”

Researchers and privacy advocates have long warned about the national security risks posed by the largely unregulated multibillion-dollar data broker industry. Last fall, researchers at Duke University reported that they were able to easily buy troves of personal and health data about US military personnel while posing as foreign agents.

Biden’s executive order attempts to address such scenarios. It bars data brokers and other companies from selling large troves of Americans’ personal information to countries or entities in Russia, China, Iran, North Korea, Cuba and Venezuela either directly or indirectly. There are likely to be additional restrictions on companies’ ability to sell data as part of cloud service contracts, investment agreements and employment agreements.

Though the White House described the step as “the most significant executive action any President has ever taken to protect Americans’ data security,” it’s unclear how exactly enforcement of the new policies will be handled within the Justice Department. A DoJ official said the executive order would require due diligence from data brokers to vet who they are dealing with, similar to the way companies are expected to adhere to US sanctions.

As the White House points out, there are currently few regulations for the multibillion-dollar data broker industry. The order will do nothing to slow the bulk sale of Americans’ data to countries or companies not deemed to be a security risk. “President Biden continues to urge Congress to do its part and pass comprehensive bipartisan privacy legislation, especially to protect the safety of our children,” a White House statement says.

The Pentagon used Project Maven-developed AI to identify air strike targets

The US military has ramped up its use of artificial intelligence tools after the October 7 Hamas attacks on Israel, according to a new report from Bloomberg. Schuyler Moore, US Central Command's chief technology officer, told the news organization that machine learning algorithms helped the Pentagon identify targets for more than 85 air strikes in the Middle East this month. 

US bombers and fighter aircraft carried out those air strikes against seven facilities in Iraq and Syria on February 2, fully destroying or at least damaging rockets, missiles, drone storage facilities and militia operations centers. The Pentagon also used AI systems to find rocket launchers in Yemen and surface combatants in the Red Sea, which it then destroyed through multiple air strikes in the same month.

The machine learning algorithms used to narrow down targets were developed under Project Maven, Google's now-defunct partnership with the Pentagon. To be precise, the project entailed the use of Google's artificial intelligence technology by the US military to analyze drone footage and flag images for further human review. It caused an uproar among Google employees: thousands petitioned the company to end its partnership with the Pentagon, and some even quit over its involvement altogether. A few months after that employee protest, Google decided not to renew its contract, which expired in 2019. 

Moore told Bloomberg that US forces in the Middle East haven't stopped experimenting with the use of algorithms to identify potential targets using drone or satellite imagery even after Google ended its involvement. The military has been testing out their use over the past year in digital exercises, she said, but it started using targeting algorithms in actual operations after the October 7 Hamas attacks. She clarified, however, that human workers constantly checked and verified the AI systems' target recommendations. Human personnel were also the ones who proposed how to stage the attacks and which weapons to use. "There is never an algorithm that’s just running, coming to a conclusion and then pushing onto the next step," she said. "Every step that involves AI has a human checking in at the end."
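
Moore is describing a human-in-the-loop architecture: algorithms only nominate candidates, and a person must sign off before anything moves to the next step. Purely as an illustration of that general pattern, and emphatically not the Pentagon's actual pipeline (which isn't public), a minimal sketch in Python might look like this, with every name hypothetical:

```python
# A minimal sketch of a human-in-the-loop gate: the model only nominates
# candidates, and nothing advances without an explicit human decision.
# All names are hypothetical illustrations of the pattern described above.
from dataclasses import dataclass

@dataclass
class Candidate:
    frame_id: str      # drone or satellite frame the detection came from
    label: str         # what the model thinks it found
    confidence: float  # model score in [0, 1]

def model_propose(frame_ids):
    """Stand-in for an object detector run over imagery."""
    return [Candidate(frame_id=f, label="rocket launcher", confidence=0.91)
            for f in frame_ids]

def human_review(c: Candidate) -> bool:
    """Every AI step ends with a person; here, a console prompt."""
    answer = input(f"{c.frame_id}: model proposes {c.label!r} "
                   f"({c.confidence:.0%}). Confirm? [y/N] ")
    return answer.strip().lower() == "y"

if __name__ == "__main__":
    approved = [c for c in model_propose(["frame_001", "frame_002"])
                if human_review(c)]
    print(f"{len(approved)} candidate(s) approved by a human reviewer")
```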

India’s government is forcing X to censor accounts via executive order amid the farmers’ protest

X, formerly Twitter, is once again restricting content in India. The company's Global Government Affairs account announced that the Indian government had issued an executive order mandating that X withhold specific accounts and posts or face penalties such as "significant fines and imprisonment." X further stated that it doesn't agree with the order and is challenging it. 

The designated posts and accounts will only be blocked within India; however, there's no clear list of those affected. "Due to legal restrictions, we are unable to publish the executive orders, but we believe that making them public is essential for transparency," the Global Government Affairs post stated. "This lack of disclosure can lead to a lack of accountability and arbitrary decision-making." X claims to have notified all affected parties. 

The posts likely center around the ongoing farmers' protest, which, since February 13, has seen multiple farmers' unions on strike in a bid to get floor pricing, or a minimum support price, for crops sold. Violent clashes between protesters and police have already resulted in at least one death, AP News reports. Mohammed Zubair, an Indian journalist and co-founder of Alt News, shared purported screenshots of suspended accounts belonging to individuals critical of the current government, on-the-ground reporters, prominent farm unionists, and more. 

This forced blocking is far from the first clash between X and India. In 2022, X sued the Indian government for "arbitrarily and disproportionately" applying its IT rules, which were passed the year prior and required the company to hire a point of contact for the local authorities and a domestic compliance officer. Before that lawsuit, in early 2021, the Indian government had threatened to jail X's employees if posts about the farmers' protest happening at the time stayed live on the site. Shortly after, the country mandated that X remove content criticizing its COVID-19 response.

An Indian court dismissed X's suit in June 2023, saying the company didn't properly explain why it had delayed complying with the country's IT rules. The court also fined X 5 million rupees ($60,300), stating, "You are not a farmer but a billion dollar company." The order followed shortly after Twitter co-founder Jack Dorsey claimed that India had threatened to raid employees' homes and shut down the site if the company didn't take down posts during the farmers' protest. 

Microsoft, OpenAI, Google and others agree to combat election-related deepfakes

A coalition of 20 tech companies signed an agreement Friday to help prevent AI deepfakes in the critical 2024 elections taking place in more than 40 countries. OpenAI, Google, Meta, Amazon, Adobe and X are among the businesses joining the pact to prevent and combat AI-generated content that could influence voters. However, the agreement’s vague language and lack of binding enforcement call into question whether it goes far enough.

The list of companies signing the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections” includes those that create and distribute AI models, as well as social platforms where the deepfakes are most likely to pop up. The signees are Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap Inc., Stability AI, TikTok, Trend Micro, Truepic and X (formerly Twitter).

The group describes the agreement as “a set of commitments to deploy technology countering harmful AI-generated content meant to deceive voters.” The signees have agreed to the following eight commitments:

  • Developing and implementing technology to mitigate risks related to Deceptive AI Election Content, including open-source tools where appropriate

  • Assessing models in scope of this accord to understand the risks they may present regarding Deceptive AI Election Content

  • Seeking to detect the distribution of this content on their platforms

  • Seeking to appropriately address this content detected on their platforms

  • Fostering cross-industry resilience to deceptive AI election content

  • Providing transparency to the public regarding how the company addresses it

  • Continuing to engage with a diverse set of global civil society organizations, academics

  • Supporting efforts to foster public awareness, media literacy, and all-of-society resilience

The accord will apply to AI-generated audio, video and images that “deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can vote.”

The signees say they will work together to create and share tools to detect and address the online distribution of deepfakes. In addition, they plan to drive educational campaigns and “provide transparency” to users.

OpenAI, one of the signees, said last month that it plans to suppress election-related misinformation worldwide. Images generated with the company’s DALL-E 3 tool will be encoded with provenance metadata, a kind of digital watermark clarifying their origin as AI-generated pictures. The ChatGPT maker said it would also work with journalists, researchers and platforms for feedback on its provenance classifier. It also plans to prevent chatbots from impersonating candidates.
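
OpenAI has said this provenance data follows the C2PA standard, which embeds a signed manifest inside the image file. Purely as a rough illustration, the sketch below scans a file for the JUMBF box and "c2pa" labels such a manifest carries. Treat it as a presence heuristic under those assumptions, not a verifier: real validation means checking the manifest's cryptographic signatures with a proper C2PA library.

```python
# A rough heuristic, not a verifier: C2PA manifests live in JUMBF boxes
# labeled "c2pa", so those byte signatures appearing in a file hints that
# provenance data is present. This says nothing about whether the
# manifest's cryptographic signatures are valid.
import sys
from pathlib import Path

def has_c2pa_markers(image_path: str) -> bool:
    data = Path(image_path).read_bytes()
    return b"jumb" in data and b"c2pa" in data

if __name__ == "__main__":
    path = sys.argv[1]  # e.g. an image saved from a DALL-E 3 session
    print("C2PA markers found" if has_c2pa_markers(path)
          else "no C2PA markers found")
```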

“We’re committed to protecting the integrity of elections by enforcing policies that prevent abuse and improving transparency around AI-generated content,” Anna Makanju, Vice President of Global Affairs at OpenAI, wrote in the group’s joint press release. “We look forward to working with industry partners, civil society leaders and governments around the world to help safeguard elections from deceptive AI use.”

Notably absent from the list is Midjourney, the company whose AI image generator (of the same name) currently produces some of the most convincing fake photos. However, the company said earlier this month it would consider banning political generations altogether during election season. Last year, Midjourney was used to create a viral fake image of Pope Francis strutting down the street in a puffy white jacket. One of Midjourney’s closest competitors, Stability AI (maker of the open-source Stable Diffusion), did participate. Engadget contacted Midjourney for comment about its absence, and we’ll update this article if we hear back.

Only Apple is absent among Silicon Valley’s “Big Five.” However, that may be explained by the fact that the iPhone maker hasn’t yet launched any generative AI products, nor does it host a social media platform where deepfakes could be distributed. Regardless, we contacted Apple PR for clarification but hadn’t heard back at the time of publication.

Although the general principles the 20 companies agreed to sound like a promising start, it remains to be seen whether a loose set of agreements without binding enforcement will be enough to combat a nightmare scenario where the world’s bad actors use generative AI to sway public opinion and elect aggressively anti-democratic candidates — in the US and elsewhere.

“The language isn’t quite as strong as one might have expected,” Rachel Orey, senior associate director of the Elections Project at the Bipartisan Policy Center, told The Associated Press on Friday. “I think we should give credit where credit is due, and acknowledge that the companies do have a vested interest in their tools not being used to undermine free and fair elections. That said, it is voluntary, and we’ll be keeping an eye on whether they follow through.”

AI-generated deepfakes have already been used in the US Presidential Election. As early as April 2023, the Republican National Committee (RNC) ran an ad using AI-generated images of President Joe Biden and Vice President Kamala Harris. The campaign for Ron DeSantis, who has since dropped out of the GOP primary, followed with AI-generated images of rival and likely nominee Donald Trump in June 2023. Both included easy-to-miss disclaimers that the images were AI-generated.

In January, an AI-generated deepfake of President Biden’s voice was used by two Texas-based companies to robocall New Hampshire voters, urging them not to vote in the state’s primary on January 23. The clip, generated using ElevenLabs’ voice cloning tool, reached up to 25,000 NH voters, according to the state’s attorney general. ElevenLabs is among the pact’s signees.

The Federal Communications Commission (FCC) acted quickly to prevent further abuses of voice-cloning tech in fake campaign calls. Earlier this month, it voted unanimously to ban AI-generated robocalls. The (seemingly eternally deadlocked) US Congress hasn’t passed any AI legislation. In December, the European Union (EU) reached agreement on its expansive AI Act, which could influence other nations’ regulatory efforts.

“As society embraces the benefits of AI, we have a responsibility to help ensure these tools don’t become weaponized in elections,” Microsoft Vice Chair and President Brad Smith wrote in a press release. “AI didn’t create election deception, but we must ensure it doesn’t help deception flourish.”

Their children were shot, so they used AI to recreate their voices and call lawmakers

The parents of a teenager who was killed in Florida’s Parkland school shooting in 2018 have started a bold new project called The Shotline to lobby for stricter gun laws in the country. The Shotline uses AI to recreate the voices of children killed by gun violence and send recordings through automated calls to lawmakers, The Wall Street Journal reported.

The project launched on Wednesday, six years after a gunman killed 17 people and injured more than a dozen at a high school in Parkland, Florida. It features the voices of six children and young adults, some as young as ten, who lost their lives to gun violence across the US. Once you type in your zip code, The Shotline finds your local representative and lets you place an automated call from one of the six dead people, in their own voice, urging stronger gun control laws. “I’m back today because my parents used AI to recreate my voice to call you,” says the AI-generated voice of Joaquin Oliver, one of the teenagers killed in the Parkland shooting. “Other victims like me will be calling too.” At the time of publishing, more than 8,000 AI calls had been submitted to lawmakers through the website.
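
The Shotline hasn’t published its implementation, but the flow it describes (zip code in, automated call out) is simple to picture. Here’s a minimal sketch under two stated assumptions: a hypothetical zip-to-representative table standing in for a congressional directory lookup, and Twilio’s programmable voice API as one plausible way to place the call.

```python
# A hedged sketch of The Shotline's described flow, NOT its actual code.
# REPS_BY_ZIP is a hypothetical stand-in for a congressional directory API,
# and Twilio is assumed only as one plausible way to place the call.
from twilio.rest import Client  # pip install twilio

REPS_BY_ZIP = {
    "33076": {"name": "Rep. Jane Doe", "office_phone": "+12025550123"},
}

def place_call(zip_code: str, recording_url: str, account_sid: str,
               auth_token: str, from_number: str) -> str:
    rep = REPS_BY_ZIP.get(zip_code)
    if rep is None:
        raise ValueError(f"no representative on file for zip {zip_code}")
    client = Client(account_sid, auth_token)
    # The TwiML document at recording_url tells Twilio to play the
    # AI-generated voice message once the call connects.
    call = client.calls.create(to=rep["office_phone"],
                               from_=from_number,
                               url=recording_url)
    return call.sid  # Twilio's identifier for the queued call
```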

“This is a United States problem and we have not been able to fix it,” Oliver’s father Manuel, who started the project along with his wife Patricia, told the Journal. “If we need to use creepy stuff to fix it, welcome to the creepy.”

To recreate the voices, the Olivers used a voice cloning service from ElevenLabs, a two-year-old startup that recently raised $80 million in a funding round led by Andreessen Horowitz. Using just a few minutes of vocal samples, the software can recreate voices in more than two dozen languages. The Olivers reportedly used their son’s social media posts for his voice samples. Parents and legal guardians of gun violence victims can fill out a form to submit their voices to The Shotline to be added to its repository of AI-generated voices.
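
To give a sense of how little code this kind of voice generation takes, here is a hedged sketch against ElevenLabs’ public text-to-speech REST API. The voice_id is a placeholder for a cloned voice created from uploaded samples, and the endpoint and field names reflect the v1 API as publicly documented, which may change.

```python
# A hedged sketch of synthesizing speech with a cloned ElevenLabs voice.
# voice_id is a placeholder obtained after uploading voice samples; the
# endpoint and fields follow the documented v1 API and may change.
import requests  # pip install requests

def synthesize(text: str, voice_id: str, api_key: str,
               out_path: str = "message.mp3") -> str:
    """Generate speech in a cloned voice and save the audio locally."""
    url = f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}"
    resp = requests.post(
        url,
        headers={"xi-api-key": api_key},
        json={"text": text, "model_id": "eleven_multilingual_v2"},
        timeout=60,
    )
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)  # the API returns raw audio bytes (MP3)
    return out_path
```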


The project raises ethical questions about using AI to generate deepfakes of voices belonging to dead people. Last week, the Federal Communications Commission declared that robocalls made using AI-generated voices were illegal, a decision that came weeks after voters in New Hampshire received calls impersonating President Joe Biden telling them not to vote in their state’s primary. An analysis by a security company called Pindrop revealed that Biden’s audio deepfake was created using software from ElevenLabs.

The company’s co-founder Mati Staniszewski told the Journal that ElevenLabs allows people to recreate the voices of dead relatives if they have the rights and permissions. But so far, it's not clear whether parents of minors had the rights to their children's likenesses.

X let terrorist groups pay for verification, report says

X has allowed dozens of sanctioned individuals and groups to pay for its premium service, according to a new report from the Tech Transparency Project (TTP). The report raises questions about whether X is running afoul of US sanctions.

The report found 28 verified accounts belonging to people and groups the US government considers national security threats, including two leaders of Hezbollah, accounts associated with the Houthis in Yemen and state-run media accounts from Iran and Russia. Of those, 18 were verified after X began charging for verification last spring.

“The fact that X requires users to pay a monthly or annual fee for premium service suggests that X is engaging in financial transactions with these accounts, a potential violation of U.S. sanctions,” the report says. As the TTP points out, X’s own policies state that sanctioned individuals are prohibited from paying for premium services. Some of the accounts identified by the TTP also had ads in their replies, according to the group, “raising the possibility that they could be profiting from X’s revenue-sharing program.”
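
Compliance here usually comes down to sanctions screening: checking who you’re taking money from against the Treasury’s Specially Designated Nationals (SDN) list before processing a payment. Purely as an illustration of the idea, here’s a minimal sketch; the download URL and column position reflect OFAC’s long-published CSV layout but should be treated as assumptions, and real screening systems match fuzzily against aliases rather than doing exact name lookups.

```python
# A minimal sanctions-screening sketch. The URL and column position reflect
# OFAC's long-published SDN CSV layout and may have changed; real compliance
# systems match fuzzily against aliases, not exact names.
import csv
import io
import requests  # pip install requests

SDN_CSV_URL = "https://www.treasury.gov/ofac/downloads/sdn.csv"

def load_sdn_names() -> set:
    raw = requests.get(SDN_CSV_URL, timeout=30).text
    # The SDN CSV has no header row; the entity name is the second field.
    return {row[1].strip().upper()
            for row in csv.reader(io.StringIO(raw)) if len(row) > 1}

def is_sanctioned(account_name: str, sdn_names: set) -> bool:
    return account_name.strip().upper() in sdn_names

# Hypothetical usage in a payment path:
# sdn = load_sdn_names()
# if is_sanctioned(display_name, sdn):
#     reject_subscription()  # hypothetical handler
```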

Changing up Twitter’s verification policy was one of the most significant changes implemented by Elon Musk after he took over the company. Under the new rules, anyone can pay for a blue checkmark if they subscribe to X Premium. X doesn’t require users to submit identification, and the company has at times scrambled to shut down impersonators.

X also offers gold checkmarks to advertisers as part of its “verified organizations” tier, which starts at $200 a month. The TTP report found that accounts belonging to Iran’s Press TV and Russia’s Tinkoff Bank — both sanctioned entities — had gold checks. X has also given away gold checks to at least 10,000 companies. As the report points out, even giving away the gold badge to sanctioned groups could violate US government policies.

X didn’t immediately respond to a request for comment, but it appears that the company has removed verification from some of the accounts named in the TTP’s report. “X, formerly known as Twitter, has removed the blue check and suspended the paid subscriptions of several Iranian outlets,” Press TV tweeted from its account, which still has a gold check. The Hezbollah leaders’ accounts are also no longer verified.

Midjourney might ban Biden and Trump images this election season

With the rise of AI tools that can quickly create modified images and videos, making fake images to spread political misinformation ahead of the upcoming US presidential election has become easier than ever. Midjourney's solution might be to ban political images altogether, according to Bloomberg. David Holz, Midjourney's CEO, reportedly told users during a chat session on Discord that the company is close to banning images of political figures such as Biden and Trump for the next 12 months.

"I know it's fun to make Trump pictures — I make Trump pictures," he told users who attended the session. "Trump is aesthetically really interesting. However, probably better to just not — better to pull out a little bit during this election. We'll see." As Bloomberg notes, people had previously used the company's AI to generate deepfakes of Trump getting arrested. The company ended free trials for its AI image generator after those images — along with those infamous deepfakes of the pope wearing a Balenciaga-inspired coat — went viral.

At the moment, the company already has rules prohibiting the creation of "misleading public figures" and "events portrayals" with the "potential to mislead." Bloomberg was still able to create modified images of Trump covered in spaghetti using the older version of Midjourney's system, though the newer version refused to generate modified images of the former President. Of course, even if Midjourney does ban images of high-profile politicians, it will only be shielding its platform from critics' ire and from becoming a center of attention this election season. It will not prevent the use of AI tools in political disinformation campaigns or the spread of fake information meant to manipulate elections as a whole. 

Other tech companies have also taken steps to help prevent political disinformation, or at least to help make it easier to identify. ChatGPT will soon start tagging images created using DALL-E 3, while Meta is working to develop technology that can detect and signify whether an image, video or audio clip has been generated using AI.

The FCC says robocalls that use AI-generated voices are illegal

The Federal Communications Commission is moving forward with its plan to ban AI robocalls. Commissioners voted unanimously on Wednesday in favor of a Declaratory Ruling that was proposed in late January. Under the measure, the FCC deems robocalls made using AI-generated voices to be "artificial" voices per the Telephone Consumer Protection Act (TCPA). That makes the practice illegal. The ruling takes effect immediately.

“Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities and misinform voters. We’re putting the fraudsters behind these robocalls on notice,” FCC Chairwoman Jessica Rosenworcel said in a statement. “State Attorneys General will now have new tools to crack down on these scams and ensure the public is protected from fraud and misinformation.”

The TCPA is a 1991 law that bans artificial or recorded voices being used to call residences without the receivers' consent. It's up to the FCC to create rules to enforce that legislation, as Ars Technica notes. As the FCC pointed out last month, under the TCPA, telemarketers need "to obtain prior express written consent from consumers before robocalling them. If successfully enacted, this Declaratory Ruling would ensure AI-generated voice calls are also held to those same standards."

The FCC's vote in favor of the ban comes at something of an inflection point for AI. Not only have such technologies become vastly more widespread over the last year or so, but an AI-generated version of President Joe Biden's voice was used in a recent robocall that urged Democrats not to vote in New Hampshire's presidential primary. A criminal investigation into that incident is underway.

Given that we're in an election year and the volume of misinformation and disinformation is already likely to rise, clamping down on AI robocalls now seems like a wise move. While state AGs can take action against robocallers, the FCC also has the ability to fine them under the TCPA. Last year, the agency issued its largest-ever fine, $300 million, against a company that made more than 5 billion robocalls in a three-month period.

NASA's Jet Propulsion Laboratory is laying off 570 workers

Even NASA is not immune to layoffs. The agency says it's cutting around 530 employees from its Jet Propulsion Laboratory (JPL) in California amid budget uncertainty. That's eight percent of the facility's workforce. JPL is laying off about 40 contractors too, just weeks after imposing a hiring freeze and canning 100 other contractors. Workers are being informed of their fates today.

"After exhausting all other measures to adjust to a lower budget from NASA, and in the absence of an FY24 appropriation from Congress, we have had to make the difficult decision to reduce the JPL workforce through layoffs," NASA said in a statement spotted by Gizmodo. "The impacts will occur across both technical and support areas of the Lab. These are painful but necessary adjustments that will enable us to adhere to our budget allocation while continuing our important work for NASA and our nation."

Uncertainty over the final budget Congress will allocate to NASA for 2024 has been a major factor in the cuts. The agency is expected to receive around $300 million for Mars Sample Return (MSR), an ambitious mission in which NASA plans to launch a lander and orbiter to the red planet in 2028 and bring back soil samples. In its 2024 budget proposal, NASA requested just under $950 million for the project.

“While we still do not have an FY24 appropriation or the final word from Congress on our Mars Sample Return (MSR) budget allocation, we are now in a position where we must take further significant action to reduce our spending,” JPL Director Laurie Leshin wrote in a memo. "In the absence of an appropriation, and as much as we wish we didn’t need to take this action, we must now move forward to protect against even deeper cuts later were we to wait."

NASA has yet to provide a full cost estimate for MSR, though an independent report pegged the price at between $8 billion and $11 billion. In its proposed 2024 budget, the Senate Appropriations subcommittee ordered NASA to submit a year-by-year funding plan for MSR. If the agency does not do so, the subcommittee warned that the mission could be canceled.

That's despite MSR having enjoyed success so far. The Perseverance rover has dug up some soil samples that contain evidence of organic matter and would warrant closer analysis were NASA able to bring them back to Earth. The samples could help scientists learn more about Mars, such as whether the planet ever hosted life.

The EU wants to criminalize AI-generated porn images and deepfakes

Back in 2022, the European Commission released a proposal for a directive on how to combat domestic violence and violence against women in other forms. Now, the European Council and Parliament have agreed with the proposal to criminalize, among other things, different types of cyber-violence. The proposed rules will criminalize the non-consensual sharing of intimate images, including deepfakes made by AI tools, which could help deter revenge porn. Cyber-stalking, online harassment, misogynous hate speech and "cyber-flashing," or the sending of unsolicited nudes, will also be recognized as criminal offenses.

The commission says that having a directive for the whole European Union that specifically addresses those particular acts will help victims in member states that haven't criminalized them yet. "This is an urgent issue to address, given the exponential spread and dramatic impact of violence online," it wrote in its announcement. In addition, the directive will require member states to develop measures that help users identify cyber-violence, prevent it where possible and know where to seek help. Member states will also have to provide their residents with an online portal for submitting reports. 

In its reporting, Politico suggested that the recent spread of pornographic deepfake images using Taylor Swift's face spurred EU officials to move forward with the proposal. If you'll recall, X even had to temporarily block searches for the musician's name after the images went viral. "The latest disgusting way of humiliating women is by sharing intimate images generated by AI in a couple of minutes by anybody," European Commission Vice President Věra Jourová told the publication. "Such pictures can do huge harm, not only to popstars but to every woman who would have to prove at work or at home that it was a deepfake." For now, though, the rules are part of a bill that representatives of EU member states still need to approve. "The final law is also pending adoption in Council and European Parliament," the EU Council said. According to Politico, if all goes well and the bill becomes law soon, EU states will have until 2027 to enforce the new rules.
