Posts with «elections» label

Google will require political ads 'prominently disclose' their AI-generated aspects

AI-generated images and audio are already making their way into the 2024 Presidential election cycle. In an effort to stanch the flow of disinformation ahead of what is expected to be a contentious election, Google announced on Wednesday that it will require political advertisers to "prominently disclose" whenever their advertisement contains AI-altered or -generated aspects, "inclusive of AI tools." The new rules will be based on the company's existing Manipulated Media Policy and will take effect in November.

“Given the growing prevalence of tools that produce synthetic content, we’re expanding our policies a step further to require advertisers to disclose when their election ads include material that’s been digitally altered or generated,” a Google spokesperson said in a statement obtained by The Hill. Small, inconsequential edits like resizing images, minor background cleanup or color correction will still be allowed; ads that depict people or things doing something they never actually did, or that otherwise alter real footage, will be flagged.

Ads that do use AI-generated elements will need to be labeled as such in a "clear and conspicuous" manner that is easily seen by the user, per the Google policy. The ads will be moderated first through Google's own automated screening systems, then reviewed by a human as needed.

Google's actions run counter to those of other social media companies. X/Twitter recently announced that it has reversed its previous position and will allow political ads on the site, while Meta continues to take heat for its own lackadaisical ad moderation efforts.

The Federal Election Commission is also beginning to weigh in on the issue. Last month it sought public comment on amending a standing regulation "that prohibits a candidate or their agent from fraudulently misrepresenting other candidates or political parties" to clarify that the "related statutory prohibition applies to deliberately deceptive Artificial Intelligence campaign advertisements" as well.

This article originally appeared on Engadget at https://www.engadget.com/google-will-require-political-ads-prominently-disclose-their-ai-generated-aspects-232906353.html?src=rss

Trump's Georgia election interference trial will be livestreamed on YouTube

In an unprecedented decision, Fulton County Judge Scott McAfee announced on Thursday that he will not only allow a press pool, cameras and laptops in the courtroom during the election interference trial of former President Donald Trump, but that the entire proceedings will be livestreamed on YouTube as well. That stream will be operated by the court.

Trump and 18 co-defendants are slated to stand trial on October 23rd. They're facing multiple racketeering charges surrounding their efforts in the state of Georgia to subvert and overturn the results of the 2020 presidential election, what Fulton County DA Fani Willis describes as "a criminal enterprise" to unconstitutionally keep the disgraced politician in power. Trump has pled not guilty to all charges.

While recording court proceedings is uncommon in some jurisdictions, the state of Georgia takes a far more permissive approach to the practice.

“Georgia courts traditionally have allowed the media and the public in so that everyone can scrutinize how our process actually works,” Atlanta-based attorney Josh Schiffer told Atlanta First News. “Unlike a lot of states with very strict rules, courts in Georgia are going to basically leave it up to the judges.”

For example, when Trump was arraigned in New York on alleged financial crimes, only still photography was allowed, and for his federal charges in Miami, photography wasn't allowed at all. Those same federal restrictions mean the public will not be privy to the in-court proceedings of Trump's federal election interference case, only the Georgia state prosecution.

This article originally appeared on Engadget at https://www.engadget.com/trumps-georgia-election-interference-trial-will-be-livestreamed-on-youtube-193146662.html?src=rss

Trump's first post since he was reinstated on X is his mug shot

Former President Donald Trump is back on Twitter (now X) more than two years after he was banned from the platform in the aftermath of the January 6th Capitol riot. On August 24th, 2023, Trump tweeted for the first time since the website reinstated his account on November 19th, 2022. His first post? An image of the mug shot taken when he was booked at the Fulton County jail in Georgia on charges that he conspired to overturn the results of the 2020 Presidential election.

The image also includes the text "Election Interference" and "Never Surrender!" along with the URL of his website. Trump linked to the site in the tweet as well, where his mug shot is prominently featured alongside a lengthy note that starts with: "Today, at the notoriously violent jail in Fulton County, Georgia, I was ARRESTED despite having committed NO CRIME."

In November last year, Musk appeared to make the decision to reinstate Trump’s account based on the results of a Twitter poll. He asked people to vote on whether Trump should have access to his account restored. At the end of 24 hours, the option to reinstate the former president won with 51.8 percent of the more than 15 million votes cast. Musk admitted at the time that some of the action on the poll came from “bot and troll armies.” Prior to the poll, Musk also said the decision on whether to reinstate Trump would come from a newly formed moderation council, but he never followed through on that pledge.

The website then known as Twitter banned Trump in early 2021 after he broke the company’s rules against inciting violence. The initial suspension saw Trump lose access to his account for 12 hours, but days later, the company made the decision permanent. At first, Trump tried to skirt the ban, even going so far as to file a lawsuit against Twitter that ultimately failed. Following his de-platforming from Twitter, Facebook and other social media websites, Trump went on to create Truth Social. Following his reinstatement, Trump said he didn’t “see any reason” to return to the platform. That said, the promise of reaching a huge audience with something as dramatic as a mug shot was obviously too good for Trump to pass up, particularly with what is likely to be a messy Republican primary on the horizon.

This article originally appeared on Engadget at https://www.engadget.com/trumps-first-post-since-he-was-reinstated-on-x-is-his-mug-shot-025650320.html?src=rss

Hack left majority of UK voters' data exposed for over a year

The UK's Electoral Commission has revealed that some personal information of around 40 million voters was left exposed for over a year. The agency — which regulates party and election finance and elections in the country — said it was the target of a “complex cyberattack.” It detected suspicious activity on its network in October 2022, but said the intruders first gained access to its systems in August 2021.

The perpetrators found a way onto the Electoral Commission's servers, which hosted the agency's email and control systems, as well as copies of the electoral registers. Details of donations and loans to registered political parties and non-party campaigners were not affected, as those are stored on a separate system. The agency doesn't hold the details of anonymous voters or the addresses of overseas electors registered outside of the UK.

The data that was exposed included the names and addresses of UK residents who registered to vote between 2014 and 2022, along with those who are registered as overseas voters. Information provided to the commission through email and web forms was exposed too. 

"We know that this data was accessible, but we have been unable to ascertain whether the attackers read or copied personal data held on our systems," the commission said. The agency confirmed to TechCrunch that the attack could have affected around 40 million voters. According to UK census data, there were 46.6 million parliamentary electoral registrations and 48.8 million local government electoral registrations in December 2021.

The Electoral Commission says it had to adopt several measures before disclosing the hack. It had to lock out the "hostile actors," analyze the possible extent of the breach and put more security measures in place to stop a similar situation from happening in the future.

Data in the electoral registers is limited and much of it is in the public domain already, the agency said. As such, officials don't believe the data by itself represents a major risk to individuals. However, the agency warned, it's possible that the information "could be combined with other data in the public domain, such as that which individuals choose to share themselves, to infer patterns of behavior or to identify and profile individuals."

The Electoral Commission also noted that there was no impact on UK election security as a result of the attack. "The data accessed does not impact how people register, vote, or participate in democratic processes," it said. "It has no impact on the management of the electoral registers or on the running of elections. The UK’s democratic process is significantly dispersed and key aspects of it remain based on paper documentation and counting. This means it would be very hard to use a cyber-attack to influence the process."

This article originally appeared on Engadget at https://www.engadget.com/hack-left-majority-of-uk-voters-data-exposed-for-over-a-year-150045052.html?src=rss

House bill would demand disclosure of AI-generated content in political ads

At least one politician wants more transparency in the wake of an AI-generated attack ad. New York Democrat House Representative Yvette Clarke has introduced a bill, the REAL Political Ads Act, that would require political ads to disclose the use of generative AI through conspicuous audio or text. The amendment to the Federal Election Campaign Act would also have the Federal Election Commission (FEC) create regulations to enforce this, although the measure would take effect January 1st, 2024 regardless of whether or not rules are in place.

The proposed law would help fight misinformation. Clarke characterizes this as an urgent matter ahead of the 2024 election — generative AI can "manipulate and deceive people on a large scale," the representative says. She believes unchecked use could have a "devastating" effect on elections and national security, and that laws haven't kept up with the technology.

The bill comes just days after Republicans used AI-generated visuals in a political ad speculating what might happen during a second term for President Biden. The ad does include a faint disclaimer that it's "built entirely with AI imagery," but there's a concern that future advertisers might skip disclaimers entirely or lie about past events.

Politicians already hope to regulate AI. California's Rep. Ted Lieu put forward a measure that would regulate AI use on a broader scale, while the National Telecommunications and Information Administration (NTIA) is asking for public input on potential AI accountability rules. Clarke's bill is more targeted and clearly meant to pass quickly.

Whether or not it does isn't certain. The act has to pass a vote in a Republican-led House, and the Senate would need to develop and pass an equivalent bill before the two bodies of Congress reconcile their work and send a law to the President's desk. Success also won't prevent unofficial attempts to deceive voters. Still, the bill might discourage politicians and action committees from using AI to mislead the public.

This article originally appeared on Engadget at https://www.engadget.com/house-bill-would-demand-disclosure-of-ai-generated-content-in-political-ads-190524733.html?src=rss

Court rules that Uber and Lyft can keep treating drivers as contractors in California

Uber and Lyft don't have to worry about reclassifying their workers in California for now. An appeals court has just ruled that gig workers, such as rideshare drivers, can continue to be classified as independent contractors under Proposition 22.

If you'll recall, California passed Assembly Bill 5 (AB5) in September 2019, which legally obligates companies to treat their gig workers as full-time employees. That means providing them with all the appropriate benefits and protections, such as paying for their unemployment and health insurance. In response, Uber, Lyft, Instacart and DoorDash poured over $220 million into campaigning for the Prop 22 ballot measure, which would allow them to treat app-based workers as independent contractors. It ended up passing by a wide margin in the state.

In 2021, a group of critics that included the Service Employees International Union and the SEIU California State Council filed a lawsuit to overturn the proposition. The judge in charge of the case sided with them and called Prop 22 unconstitutional. He said back then that the proposition illegally "limits the power of a future legislature to define app-based drivers as workers subject to workers' compensation law."

The three appeals court judges have now overturned that ruling, though according to The New York Times, one of them wanted to throw out Prop 22 entirely for the same reason the lower court judge gave when he handed down his decision. While the appeals court upheld the policy in the end, it ordered that a clause that makes it hard for workers in the state to unionize be severed from the rest of the proposition. That particular clause required a seven-eighths majority vote from the California legislature to be able to amend workers' rights to collective bargaining. 

David Huerta, the president of the Service Employees International Union in California, told The Times in a statement: "Every California voter should be concerned about corporations’ growing influence in our democracy and their ability to spend millions of dollars to deceive voters and buy themselves laws." The group is now expected to appeal this ruling and to take their fight to the Supreme Court, which could take months to decide whether to hear the case. 

This article originally appeared on Engadget at https://www.engadget.com/court-rules-uber-lyft-keep-contractors-classification-drivers-california-054040457.html?src=rss

Trump has reportedly asked Meta to reinstate his Facebook account

Former President Donald Trump has reportedly petitioned Meta to restore his Facebook account. According to NBC News, the Trump campaign sent a letter to the company on Tuesday, pushing for a meeting to discuss his “prompt reinstatement to the platform.” Facebook banned Trump in January 2021 in the aftermath of the January 6th Capitol riot. At first, the suspension was set to last 24 hours, but the company made the ban indefinite less than a day later. In June 2021, following a recommendation from the Oversight Board, Meta said it would revisit the suspension after two years and “evaluate” the “risk to public safety” to determine if Trump should get his account back.

Meta did not immediately respond to Engadget’s comment request. The company told NBC News it would announce a decision “in the coming weeks in line with the process we laid out.” In 2021, Meta signaled Trump’s ban wouldn’t last forever. “When the suspension is eventually lifted, Mr Trump’s account will be subject to new enhanced penalties if he violates our policies, up to and including permanent removal of his accounts,” Nick Clegg, Meta’s president of global affairs, said at the time.

The letter is likely a bid by Trump to regain control of his Facebook account ahead of the 2024 presidential election. Trump has more than 34 million Facebook followers, and the platform was critical to his 2016 run. According to a Bloomberg report published after the election, the Trump campaign ran 5.9 million different versions of ads to test the ones that got the most engagement from the company’s users. Meta subsequently put a limit on high-volume advertising. One Trump Organization employee told NBC News that change prevented Trump’s 2020 campaign from using Facebook the way it did in 2016.

YouTube is still battling 2020 election misinformation as it prepares for the midterms

YouTube and Google are the latest platforms to share more about how they are preparing for the upcoming midterm elections, and the flood of misinformation that will come with them.

For YouTube, much of that strategy hinges on continuing to counter misinformation about the 2020 presidential election. The company’s election misinformation policies already prohibit videos that allege “widespread fraud, errors, or glitches” occurred in any previous presidential election. In a new blog post about its preparations for the midterms, the company says it's already removed “a number of videos related to the midterms” for breaking these rules, and that other channels have been temporarily suspended for videos related to the upcoming midterms.

The update comes as YouTube continues to face scrutiny for its handling of the 2020 election, and whether its recommendations pushed some people toward election fraud videos. (Of note, the Journal of Online Trust and Safety published a study on the topic today.)

In addition to taking down videos, YouTube also says it will launch “an educational media literacy campaign” aimed at educating viewers about “manipulation tactics used to spread misinformation.” The campaign will launch in the United States first, and will cover topics like “using emotional language” and “cherry picking information,” according to the company.

Both Google and YouTube will promote authoritative election information in their services, including in search results. Before the midterms, YouTube will link to information about how to vote, and on Election Day, videos related to the midterms will link to “timely context around election results.” Similarly, Google will surface election results directly in search, as it has done in previous elections.

The company is also trying to make it easier to find details about local and regional races. Beginning in “the coming weeks,” Google will highlight local news sources from different states in election-related searches.

Meta's anti-misinformation strategy for the 2022 midterms is mostly a repeat of 2020

Meta has outlined its strategy for combating misinformation during the 2022 US midterm elections, and it will mostly sound familiar if you remember the company's 2020 approach. The Facebook and Instagram owner said it will maintain policies and protections "consistent" with the presidential election, including policies barring vote misinformation and linking people to trustworthy information. It will once again ban political ads during the last week of the election campaign. This isn't quite a carbon copy, however, as Meta is fine-tuning its methods in response to lessons learned two years ago.

To start, Meta is "elevating" post comments from local elections officials to make sure reliable polling information surfaces in conversations. The company is also acknowledging concerns that it used info labels too often in 2020 — for the 2022 midterms, it's planning to show labels in a "targeted and strategic way."

Meta's update comes just days after Twitter detailed its midterm strategy, and echoes the philosophy of its social media rival. Both are betting that their 2020 measures were largely adequate, and that it's just a question of refining those systems for 2022.

Whether or not that's true is another matter. In a March 2021 study, advocacy group Avaaz said Meta didn't do enough to stem the flow of misinformation and allowed billions of views for known false content. Whistleblower Frances Haugen also maintains that Meta has generally struggled to fight bogus claims, and it's no secret that Meta had to extend its ban on political ads after the 2020 vote. Facebook didn't catch some false Brazilian election ads, according to Global Witness. Meta won't necessarily deal with serious problems during the midterms, but it's not guaranteed a smooth ride.

NGO says Facebook failed to detect misinformation in Brazilian election ads

Less than two months before Brazil’s 2022 election, a report from international NGO Global Witness found Facebook parent company Meta “appallingly” failed to detect false political ads. The organization tested Facebook’s ability to catch election-related misinformation by submitting 10 ads.

Five of the advertisements featured blatantly false information about the election. For instance, some mentioned the wrong election date and methods citizens could use to cast their votes. The other five ads sought to discredit Brazil’s electoral process, including the electronic voting system the country has used since 1996. Of the 10 ads, Facebook only rejected one initially but later approved it without any further action from Global Witness.

In addition to their content, the ads had other red flags Global Witness contends Meta should have caught. To start, the non-profit did not verify the account it used to submit the advertisements through the company’s ad authorizations process. “This is a safeguard that Meta has in place to prevent election interference, but we were easily able to bypass this,” Global Witness said.

Additionally, the organization submitted the ads from London and Nairobi. In doing so, it did not need to use a VPN or local payment system to mask its identity. Moreover, the ads did not feature a “paid for by” disclaimer, which Meta has required for all “social issue” advertisements in Brazil since June 22, 2022.

“What’s quite clear from the results of this investigation and others is that their content moderation capabilities and the integrity systems that they deploy in order to mitigate some of the risk during election periods, it’s just not working,” Jon Lloyd, senior advisor at Global Witness, told The Associated Press.

Meta did not immediately respond to Engadget’s request for comment. A Meta spokesperson told The Associated Press it has “prepared extensively” for Brazil’s upcoming election. “We’ve launched tools that promote reliable information and label election-related posts, established a direct channel for the Superior Electoral Court (Brazil’s electoral authority) to send us potentially-harmful content for review, and continue closely collaborating with Brazilian authorities and researchers,” the company said.

This isn’t the first time Global Witness has found Facebook’s election safeguards wanting. Earlier this year, the non-profit conducted a similar investigation ahead of Kenya’s recent election and reached many of the same conclusions. Then, as now, Global Witness called on Meta to strengthen and increase its content moderation and integrity systems.