Posts with «politics & government» label

Phony AI Biden robocalls reached up to 25,000 voters, says New Hampshire AG

Two companies based in Texas have been linked to a spate of robocalls that used artificial intelligence to mimic President Joe Biden. The audio deepfake was used to urge New Hampshire voters not to participate in the state's presidential primary. New Hampshire Attorney General John Formella said as many as 25,000 of the calls were made to residents of the state in January.

Formella says an investigation has linked the source of the robocalls to Texan companies Life Corporation and Lingo Telecom. No charges have yet been filed against either company or Life Corporation's owner, a person named Walter Monk. The probe is ongoing and other entities are believed to be involved. Federal law enforcement officials are said to be looking into the case too.

“We have issued a cease-and-desist letter to Life Corporation that orders the company to immediately desist violating New Hampshire election laws," Formella said at a press conference, according to CNN. "We have also opened a criminal investigation, and we are taking next steps in that investigation, sending document preservation notices and subpoenas to Life Corporation, Lingo Telecom and any other individual or entity."

The Federal Communications Commission also sent a cease-and-desist letter to Lingo Telecom. The agency said (PDF) it has warned both companies about robocalls in the past.

The deepfake was created using tools from AI voice cloning company ElevenLabs, which banned the user responsible. The company says it is "dedicated to preventing the misuse of audio AI tools and [that it takes] any incidents of misuse extremely seriously."

Meanwhile, the FCC is seeking to ban robocalls that use AI-generated voices. Under the Telephone Consumer Protection Act, the agency is responsible for making rules regarding robocalls. Commissioners are to vote on the issue in the coming weeks.

This article originally appeared on Engadget at https://www.engadget.com/phony-ai-biden-robocalls-reached-up-to-25000-voters-says-new-hampshire-ag-205253966.html?src=rss

Fallout from the Fulton County cyberattack continues, key systems still down

Key systems in Fulton County, Georgia have been offline since last week when a 'cyber incident' hit government systems. While the county has tried its best to continue operations as normal, phone lines, court systems, property records and more all went down. The county has not yet confirmed details of the cyber incident, such as what group could be behind it or motivations for the attack. As of Tuesday, there did not appear to be a data breach, according to Fulton County Board of Commissioners Chairman Robb Pitts.

Fulton County made headlines in August as the place where prosecutors chose to bring election interference charges against former president Donald Trump. But don't worry, officials assured the public that the case had not been impacted by the attack. “All material related to the election case is kept in a separate, highly secure system that was not hacked and is designed to make any unauthorized access extremely difficult if not impossible,” said Fulton County District Attorney Fani Willis.

Fulton County's election systems also did not appear to be the target of the attack. While Fulton County's Department of Registration and Elections went down, “there is no indication that this event is related to the election process,” Fulton County said in a statement. “In an abundance of caution, Fulton County and the (Georgia) Secretary of State’s respective technology systems were isolated from one another as part of the response efforts.”

So far, the impact of the attack ranges widely, from delays in getting marriage certificates to disrupted court hearings. On Wednesday, a miscommunication during the outage even let a murder suspect out of custody. A manhunt continues after officials mistakenly released the suspect while he was being transferred between Clayton County and Fulton County for a hearing.

The county has not released information on when it expects systems to be fully restored, but it is working with law enforcement on recovery efforts. In the meantime, while constituents have trouble reaching certain government services, Fulton County put out a list of contact information for impacted departments. Fulton County also released a full list of impacted systems.

Amid the government IT outages, a local student also hacked into Fulton County Schools systems, StateScoop reported on Friday. The school system is still determining if any personal information may have been breached, but most services came back online by Monday.

This article originally appeared on Engadget at https://www.engadget.com/fallout-from-the-fulton-county-cyberattack-continues-key-systems-still-down-161505036.html?src=rss

The FCC wants to make robocalls that use AI-generated voices illegal

The rise of AI-generated voices mimicking celebrities and politicians could make it even harder for the Federal Communications Commission (FCC) to fight robocalls and prevent people from getting spammed and scammed. That's why FCC Chairwoman Jessica Rosenworcel wants the commission to officially recognize calls that use AI-generated voices as "artificial," which would make the use of voice cloning technologies in robocalls illegal. Under the Telephone Consumer Protection Act (TCPA), which the FCC enforces, solicitations to residences that use an artificial voice or a recording are against the law. As TechCrunch notes, the FCC's proposal would make it easier to go after and charge bad actors.

"AI-generated voice cloning and images are already sowing confusion by tricking consumers into thinking scams and frauds are legitimate," FCC Chairwoman Jessica Rosenworcel said in a statement. "No matter what celebrity or politician you favor, or what your relationship is with your kin when they call for help, it is possible we could all be a target of these faked calls." If the FCC recognizes AI-generated voice calls as illegal under existing law, the agency can give State Attorneys General offices across the country "new tools they can use to crack down on... scams and protect consumers."

The FCC's proposal comes shortly after some New Hampshire residents received a call impersonating President Joe Biden, telling them not to vote in their state's primary. A security firm performed a thorough analysis of the call and determined that it was created using AI tools by a startup called ElevenLabs. The company had reportedly banned the account responsible for the message mimicking the president, but the incident could end up being just one of the many attempts to disrupt the upcoming US elections using AI-generated content. 

This article originally appeared on Engadget at https://www.engadget.com/the-fcc-wants-to-make-robocalls-that-use-ai-generated-voices-illegal-105628839.html?src=rss

NSA admits to buying Americans’ web browsing data from brokers without warrants

The National Security Agency’s director has confirmed that the agency buys Americans’ web browsing data from brokers without first obtaining warrants. Senator Ron Wyden (D-OR) blocked the appointment of the NSA’s incoming director, Timothy Haugh, until the agency answered his questions regarding its collection of Americans’ location and Internet data. Wyden said he’d been trying for three years to “publicly release the fact that the NSA is purchasing Americans’ internet records.”

In a letter dated December 11, current NSA Director Paul Nakasone confirmed to Wyden that the agency does make such purchases from brokers. "NSA acquires various types of [commercially available information] for foreign intelligence, cybersecurity, and other authorized mission purposes, to include enhancing its signals intelligence (SIGINT) and cybersecurity missions," Nakasone wrote. "This may include information associated with electronic devices being used outside and, in certain cases, inside the United States."

Nakasone went on to claim that the NSA "does not buy and use location data collected from phones known to be used in the United States either with or without a court order. Similarly, NSA does not buy and use location data collected from automobile telematics systems from vehicles known to be located in the United States."

An NSA spokesperson told Reuters that the agency uses such data sparingly but that it has notable value for national security and cybersecurity purposes. "At all stages, NSA takes steps to minimize the collection of US [personal] information, to include application of technical filters," the spokesperson said.

Wyden has called the practice unlawful. "Such records can identify Americans who are seeking help from a suicide hotline or a hotline for survivors of sexual assault or domestic abuse," he said.

The senator urged Director of National Intelligence Avril Haines to order US intelligence agencies to stop buying Americans’ private data without consent. He also asked Haines to direct intelligence agencies to "conduct an inventory of the personal data purchased by the agency about Americans, including, but not limited to, location and internet metadata." Wyden said that any data that does not comply with Federal Trade Commission standards regarding personal data sales should be deleted.

Wyden pointed to an FTC settlement that this month banned a data broker from selling location data. The agency alleged that the information, which it claimed was sold to buyers including government contractors, "could be used to track people’s visits to sensitive locations such as medical and reproductive health clinics, places of religious worship and domestic abuse shelters."

The FTC stated in its complaint against the broker, formerly known as X-Mode Social, that by "failing to fully inform consumers how their data would be used and that their data would be provided to government contractors for national security purposes, X-Mode failed to provide information material to consumers and did not obtain informed consent from consumers to collect and use their location data."

The settlement was the first of its kind with a data broker. In a statement, Wyden, who has been investigating the data broker industry for several years, said he was "not aware of any company that provides such a warning to users [regarding their consent] before collecting their data."

The issue of US federal agencies buying phone location data isn't exactly new. In 2020, it emerged that Customs and Border Protection had been doing so. The following year, Wyden claimed the Defense Intelligence Agency and the Pentagon bought and used location data from Americans’ phones.

This article originally appeared on Engadget at https://www.engadget.com/nsa-admits-to-buying-americans-web-browsing-data-from-brokers-without-warrants-154904461.html?src=rss

Facebook was inundated with deepfaked ads impersonating UK's Prime Minister

Facebook was flooded with fake advertisements featuring a deepfaked Rishi Sunak ahead of the UK's general election that's expected to take place this year, according to research conducted by communications company Fenimore Harper. The firm found 143 different ads impersonating the UK's Prime Minister on the social network last month, and it believes the ads may have reached more than 400,000 people. It also said that funding for the ads originated from 23 countries, including Turkey, Malaysia, the Philippines and the United States, and that the collective amount of money spent to promote them from December 8, 2023 to January 8, 2024 was $16,500.

As The Guardian notes, one of the fake ads showed a BBC newscast wherein Sunak said that the UK government has decided to invest in a stock market app launched by Elon Musk. That clip then reportedly linked to a fake BBC news page promoting an investment scam. The video, embedded in Fenimore Harper's website, seems pretty realistic if the viewer doesn't look too closely at people's mouths when they speak. Someone who has no idea what deepfakes are could easily be fooled into thinking that the video is legit.

The company says this is the "first widespread paid promotion of a deepfaked video of a UK political figure." That said, Meta has long been contending with election misinformation on its websites and apps. A spokesperson told The Guardian that the "vast majority" of the adverts were disabled before Fenimore Harper's report was published and that "less than 0.5 percent of UK users saw any individual ad that did go live."

Meta announced late last year that it was going to require advertisers to disclose whether the ads they submit have been digitally altered in the event that they're political or social in nature. It's going to start enforcing the rule this year, likely in hopes that it can help mitigate the expected spread of fake news connected to the upcoming presidential elections in the US. 

This article originally appeared on Engadget at https://www.engadget.com/facebook-was-inundated-with-deepfaked-ads-impersonating-uks-prime-minister-143009584.html?src=rss

OpenAI's policy no longer explicitly bans the use of its technology for 'military and warfare'

Until just a few days ago, OpenAI's usage policies page explicitly stated that the company prohibits the use of its technology for "military and warfare" purposes. That line has since been deleted. As first noticed by The Intercept, the company updated the page on January 10 "to be clearer and provide more service-specific guidance," as the changelog states. It still prohibits the use of its large language models (LLMs) for anything that can cause harm, and it warns people against using its services to "develop or use weapons." However, the company has removed language pertaining to "military and warfare."

While we've yet to see its real-life implications, this change in wording comes just as military agencies around the world are showing an interest in using AI. "Given the use of AI systems in the targeting of civilians in Gaza, it’s a notable moment to make the decision to remove the words ‘military and warfare’ from OpenAI’s permissible use policy,” Sarah Myers West, a managing director of the AI Now Institute, told the publication. 

The explicit mention of "military and warfare" in the list of prohibited uses indicated that OpenAI couldn't work with government agencies like the Department of Defense, which typically offers lucrative deals to contractors. At the moment, the company doesn't have a product that could directly kill or cause physical harm to anybody. But as The Intercept said, its technology could be used for tasks like writing code and processing procurement orders for things that could be used to kill people. 

When asked about the change in its policy wording, OpenAI spokesperson Niko Felix told the publication that the company "aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs." Felix explained that "a principle like ‘Don’t harm others’ is broad yet easily grasped and relevant in numerous contexts," adding that OpenAI "specifically cited weapons and injury to others as clear examples." However, the spokesperson reportedly declined to clarify whether prohibiting the use of its technology to "harm" others included all types of military use outside of weapons development. 

This article originally appeared on Engadget at https://www.engadget.com/openais-policy-no-longer-explicitly-bans-the-use-of-its-technology-for-military-and-warfare-123018659.html?src=rss

New Department of Labor rule could reclassify countless gig workers as employees

The US Department of Labor (DOL) published a final rule in the Federal Register on Wednesday that would make it more difficult to classify workers as independent contractors. If the rule survives court challenges unscathed, it will replace a business-friendly Trump-era regulation that did the opposite. It’s scheduled to go into effect on March 11.

The new rule, first proposed in 2022, could have profound implications for companies like Uber and DoorDash that rely heavily on gig workers. It would mandate that workers who are “economically dependent” on a company be considered employees.

The rule restores a pre-Trump precedent of using six factors to determine workers’ classification. These include their opportunity for profit or loss, the financial stake and nature of resources the worker has invested in the work, the work relationship’s permanence, the employer’s degree of control over the person’s work, how essential the person’s work is to the employer’s business and the worker’s skill and initiative.

In its decision to publish the new guidance, the DOL cites a “longstanding precedent” in the courts predating the Trump administration’s hard right turn. “A century of labor protections for working people is premised on the employer-employee relationship,” Acting Labor Secretary Julie Su said in a press call with Bloomberg.

“Misclassifying employees as independent contractors is a serious issue that deprives workers of basic rights and protections,” Su wrote in the announcement post. “This rule will help protect workers, especially those facing the greatest risk of exploitation, by making sure they are classified properly and that they receive the wages they’ve earned.”

If the rule takes effect, it’s expected to increase employer costs. The US Chamber of Commerce, a non-government lobby for business interests, unsurprisingly opposes it. “It is likely to threaten the flexibility of individuals to work when and how they want and could have significant negative impacts on our economy,” Marc Freedman, VP of the US Chamber of Commerce, said in a statement to Reuters.

DoorDash sounds optimistic that the rule wouldn’t apply to its workforce. “We are confident that Dashers are properly classified as independent contractors under the FLSA, and we do not anticipate this rule causing changes to our business,” the company wrote in a statement. “We will continue to engage with the Department of Labor, Congress, and other stakeholders to find solutions that ensure Dashers maintain their flexibility while gaining access to new benefits and protections.”

Groups with similar views are expected to mount legal challenges to the rule before it goes into effect. A previous attempt by the Biden Administration to void the Trump-era rules met such a fate when a federal judge blocked the DOL’s reversal.

Although the most prominent theoretical applications of the rule would be with gig economy apps like DoorDash, Lyft and Uber, it could stretch to sectors including healthcare, trucking and construction. “The department is seeing misclassifications in places it hasn’t seen it before,” Wage and Hour Division Administrator Jessica Looman said to Bloomberg on Monday. “Health care, construction, janitorial, and even restaurant workers who are often living paycheck to paycheck are some of the most vulnerable workers.”

This article originally appeared on Engadget at https://www.engadget.com/new-department-of-labor-rule-could-reclassify-countless-gig-workers-as-employees-130836919.html?src=rss

Apple reportedly faces pressure in India after sending out warnings of state-sponsored hacking

Indian authorities allied with Prime Minister Narendra Modi have questioned Apple on the accuracy of its internal threat algorithms and are now investigating the security of its devices, according to The Washington Post. Officials apparently targeted the company after it warned journalists and opposition politicians back in October that state-sponsored hackers may have infiltrated their devices. While officials publicly cast doubt on Apple's security measures, the Post says they were more upfront about what they wanted behind closed doors.

They reportedly called up the company's representatives in India to pressure Apple into finding a way to soften the political impact of its hacking warnings. The officials also called in an Apple security expert to conjure alternative explanations for the warnings that they could tell people — most likely one that doesn't point to the government as the possible culprit. 

The journalists and politicians who posted about Apple's warnings on social media had one thing in common: They were all critical of Modi's government. Amnesty International examined the phone of Anand Mangnale, a journalist who was investigating long-time Modi ally Gautam Adani, and found that an attacker had planted the Pegasus spyware on his Apple device. While Apple didn't explicitly say that the Indian government was to blame for the attacks, Pegasus, developed by the Israeli company NSO Group, is mostly sold to governments and government agencies.

The Post's report said India's ruling political party has never confirmed or denied using Pegasus to spy on journalists and political opponents, but this is far from the first time its critics have been infected with the Pegasus spyware. In 2021, an investigation by several publications that brought the Pegasus project to light found the spyware on the phones of people with a history of opposing and criticizing Modi's government. 

This article originally appeared on Engadget at https://www.engadget.com/apple-reportedly-faces-pressure-in-india-after-sending-out-warnings-of-state-sponsored-hacking-073036597.html?src=rss

UK Supreme Court rules AI can't be a patent inventor, 'must be a natural person'

AI may or may not take people's jobs in years to come, but in the meantime, there's one thing it cannot obtain: patents. Dr. Stephen Thaler has spent years trying to get patents for two inventions created by his AI "creativity machine" DABUS. Now, the United Kingdom's Supreme Court has rejected his appeal to approve these patents when listing DABUS as the inventor, Reuters reports.

The court's rationale stems from a provision in UK patent law that states, "an inventor must be a natural person." The ruling stipulated that the appeal was unconcerned with whether this should change in the future. "The judgment establishes that UK patent law is currently wholly unsuitable for protecting inventions generated autonomously by AI machines," Thaler's lawyers said in a statement. 

Thaler's first attempt to register the patents — for a food container and a flashing light — came in 2018, when he applied as owner of the machine that invented them. However, the UK's Intellectual Property Office said he must list an actual human being on the application, and when he refused, it withdrew his application. Thaler fought the decision in the High Court and then the Court of Appeal, with Lady Justice Elisabeth Laing stating, "Only a person can have rights. A machine cannot."

Thaler, an American, also submitted the two products to the United States Patent and Trademark Office, which rejected his application. He also previously sued the US Copyright Office (USCO) for not awarding him the copyright for a piece of art DABUS created. The case reached the US District Court for the District of Columbia, with Judge Beryl Howell's ruling explaining, "Human authorship is a bedrock requirement of copyright." Thaler has argued that this provision is unconstitutional, but the US Supreme Court declined to hear his case, ending any further chances to argue his stance. While the UK and US have rejected Thaler's petitions, he has succeeded in countries such as Australia and South Africa.

This article originally appeared on Engadget at https://www.engadget.com/uk-supreme-court-rules-ai-cant-be-a-patent-inventor-must-be-a-natural-person-131207359.html?src=rss

European Commission agrees to new rules that will protect gig workers' rights

Gig workers in the EU will soon get new benefits and protections, making it easier for them to receive employment status. Right now, over 500 digital labor platforms are actively operating in the EU, employing roughly 28 million platform workers. The new rules follow agreements made between the European Parliament and the EU Member States, after policies were first proposed by the European Commission in 2021.

The new rules highlight employment status as a key issue for gig workers, since an employed individual can reap the labor and social rights that come with an official worker title. These can include a legal minimum wage, the option to engage in collective bargaining, health protections at work, and options for paid leave and sick days. With recognized worker status in the EU, gig workers can also qualify for unemployment benefits.

Given that most gig workers are employed by digital apps, like Uber or Deliveroo, the new directive will require “human oversight of the automated systems” to make sure labor rights and proper working conditions are guaranteed. The workers also have the right to contest any automated decisions by digital employers — such as a termination.

The new rules will also require employers to inform and consult workers when there are “algorithmic decisions” that affect them. Employers will be required to report where their gig workers are fulfilling labor-related tasks to ensure the traceability of employees, especially when there are cross-border situations to consider in the EU.

Before the new gig worker protections can formally roll out, there needs to be a final approval of the agreement by the European Parliament and the Council. The stakeholders will have two years to implement the new protections into law. Similar protections for gig workers in the UK were introduced in 2021. Meanwhile, in the US, select cities have rolled out minimum wage rulings and benefits — despite Uber and Lyft’s pushback against such requirements.

This article originally appeared on Engadget at https://www.engadget.com/european-commission-agrees-to-new-rules-that-will-protect-gig-workers-rights-175155671.html?src=rss