Posts with «politics & government» label

Senate bill would hold AI companies liable for harmful content

Politicians think they have a way to hold companies accountable for troublesome generative AI: take away their legal protection. Senators Richard Blumenthal and Josh Hawley have introduced the No Section 230 Immunity for AI Act, which, as the name suggests, would prevent OpenAI, Google and similar firms from using the Communications Decency Act's Section 230 to shield themselves from liability for harmful content and avoid lawsuits. If someone created a deepfake image or sound bite to ruin a reputation, for instance, the tool developer could be held responsible alongside the person who used it.

Hawley characterizes the bill as forcing AI creators to "take responsibility for business decisions" as they're developing products. He also casts the legislation as a "first step" toward creating rules for AI and establishing safety measures. In a hearing this week on AI's effect on human rights, Blumenthal urged Congress to deny AI the broad Section 230 safeguards that have shielded social networks from legal consequences.

In May, Blumenthal and Hawley held a hearing where speakers like OpenAI chief Sam Altman called for the government to act on AI. Industry leaders have already urged a pause on AI experimentation, and more recently compared the threat of unchecked AI to that of nuclear war.

Congress has pushed for Section 230 reforms for years in a bid to rein in tech companies, particularly over concerns that internet giants might knowingly allow harmful content. A 2021 House bill would have held businesses liable if they knowingly used algorithms that cause emotional or physical harm. Such bills have stalled, though, and Section 230 has remained intact. Legislators have had more success setting age verification requirements that theoretically reduce mental health issues for younger users.

It's not clear this bill stands a greater chance of success. Blumenthal and Hawley are known for introducing online content bills that fail to gain traction, such as the child safety-oriented EARN IT Act and Hawley's anti-addiction SMART Act. On top of persuading fellow senators, they'll need an equivalent House bill that also survives a vote.

Google, OpenAI will share AI models with the UK government

The UK's AI oversight will include chances to directly study some companies' technology. In a speech at London Tech Week, Prime Minister Rishi Sunak revealed that Google DeepMind, OpenAI and Anthropic have pledged to provide "early or priority access" to AI models for the sake of research and safety. This will ideally improve inspections of these models and help the government recognize the "opportunities and risks," Sunak says.

It's not clear just what data the tech firms will share with the UK government. We've asked Google, OpenAI and Anthropic for comment.

The announcement comes weeks after officials said they would conduct an initial assessment of AI model accountability, safety, transparency and other ethical concerns. The country's Competition and Markets Authority is expected to play a key role. The UK has also committed to spending an initial £100 million (about $125.5 million) to create a Foundation Model Taskforce that will develop "sovereign" AI meant to grow the British economy while minimizing ethical and technical problems.

Industry leaders and experts have called for a temporary halt to AI development over worries creators are pressing forward without enough consideration for safety. Generative AI models like OpenAI's GPT-4 and Anthropic's Claude have been praised for their potential, but have also raised concerns about inaccuracies, misinformation and abuses like cheating. The UK's move theoretically limits these issues and catches problematic models before they've done much damage.

This doesn't necessarily give the UK complete access to these models and the underlying code. Likewise, there are no guarantees the government will catch every major issue. The access may provide relevant insights, though. If nothing else, the effort promises increased transparency for AI at a time when the long-term impact of these systems isn't entirely clear.

The DeSantis campaign used AI-generated images to attack Trump

The 2024 US presidential race is heating up, and the candidates and their supporters have a new weapon at their disposal: artificial intelligence tools that can easily generate realistic images and voices. As Semafor reports, the DeSantis War Room has shared a video featuring images of Donald Trump embracing Dr. Anthony Fauci, who became a controversial figure among conservatives following the COVID-19 pandemic. The short clip contrasts Trump the reality TV star, firing people on The Apprentice, with the "real life Trump" who apparently refused to fire the doctor.

While the video was marked on Twitter with a warning that says three of the still shots showing Trump embracing Fauci are "AI generated images," some people could believe they were real. DeSantis' camp didn't claim the images were real photos, but it didn't explicitly acknowledge they were fake, either. Someone familiar with DeSantis' political campaign told Semafor that the video was a "social media post" and not an ad, though it's unclear if that's the reason the images were not clearly marked as computer-generated.

The DeSantis camp isn't the only group using AI-generated imagery this election cycle, though. When President Joe Biden officially announced his re-election bid, the Republican National Committee (RNC) released a video ad created entirely with artificial intelligence, showing Biden and Kamala Harris at a victory party followed by several domestic and international incidents the RNC says might happen if they win. Donald Trump, Jr. also reportedly shared a video swapping DeSantis' face and voice into a scene from The Office. Meanwhile, former President Trump shared a video mocking DeSantis' Twitter Spaces launch with AI voices of figures such as Adolf Hitler. Clearly, Americans will be bombarded with AI-generated images and voices this election, and it's up to them to figure out what's real and what's not.

YouTube changes misinformation policy to allow videos falsely claiming fraud in the 2020 US election

In a Friday afternoon news dump, YouTube inexplicably announced that 2020 election denialism is a-okay. The company says it “carefully deliberated this change” without offering any specifics on its reasons for the about-face. YouTube initially banned content disputing the results of the 2020 election in December of that year.

In a feeble attempt to explain its decision, the company wrote that it “recognized it was time to reevaluate the effects of this policy in today's changed landscape. In the current environment, we find that while removing this content does curb some misinformation, it could also have the unintended effect of curtailing political speech without meaningfully reducing the risk of violence or other real-world harm. With that in mind, and with 2024 campaigns well underway, we will stop removing content that advances false claims that widespread fraud, errors, or glitches occurred in the 2020 and other past US Presidential elections.”

This is a developing story. Please check back for updates.

Meta test will limit news posts for Facebook and Instagram users in Canada

Last year, Facebook parent Meta said it might stop Canadians from sharing news content in response to the country's proposed Online News Act. Now, the company has announced that it will begin tests on Facebook and Instagram that "limit some users and publishers from viewing or sharing some news content in Canada," it wrote in a blog post. The testing will take place over several weeks, and the "small percentage" of users affected will be notified if they try to share news content.

"As we have repeatedly shared, the Online News Act is fundamentally flawed legislation that ignores the realities of how our platforms work, the preferences of the people who use them, and the value we provide news publishers," the company wrote.

The proposed law, also known as Bill C-18, was introduced by the ruling Liberal government earlier this year. Modeled after a similar Australian law, it aims to force internet platforms like Facebook into revenue-sharing partnerships with local news organizations. It came about, in part, because of Facebook and Google's dominance of the online advertising market — with both companies combined taking 80 percent of revenue.

Last year, Meta said it was trying to be "transparent about the possibility that we may be forced to consider whether we continue to allow the sharing of news content in Canada." The company made the threat after a government panel failed to invite Meta to a meeting about the legislation. Google also temporarily blocked some Canadian users from seeing news content. 

In response, Canadian Heritage Minister Pablo Rodriguez called the tests "unacceptable," Reuters reported. "When a big tech company... tells us, 'If you don't do this or that, then I'm pulling the plug' — that's a threat. I've never done anything because I was afraid of a threat," he told Reuters.

Facebook, Google and others eventually agreed to the Australian law, and now pay publishers to post news links with snippets. Before that happened, though, Facebook followed through on its threat to block users from sharing news links in the nation. It later reversed the ban following further discussions, after the government made amendments addressing Facebook's concerns about the value of its platform to publishers.

For now, the test will affect only a small number of users, and only for a limited time. If it follows the same playbook it used in Australia, though, Meta may block news sharing for all users in Canada, possibly as a way to force the government and publishers to the bargaining table.

"As the Minister of Canadian Heritage has said, how we choose to comply with the legislation is a business decision we must make, and we have made our choice," the company wrote. "While these product tests are temporary, we intend to end the availability of news content in Canada permanently following the passage of Bill C-18."

NASA's SLS rocket is $6 billion over budget and six years behind schedule

NASA's Space Launch System (SLS) rocket, designed to take astronauts to the moon, is over budget and far behind its original schedule, according to a scathing new audit from NASA's Inspector General. Furthermore, the report foresees "additional cost and schedule increases" that could potentially jeopardize the entire Artemis mission if the problems aren't addressed.

NASA's spending on the Artemis Moon Program is expected to reach $93 billion by 2025, including $23.8 billion already spent on the SLS system through 2022. That sum represents "$6 billion in cost increases and over six years in schedule delays above NASA’s original projections," the report states. 

The SLS, which finally launched for the first time in November 2022, uses four RS-25 engines per launch, including 16 salvaged from retired Space Shuttles. Once those run out (all engines on SLS are expendable), NASA will switch to RS-25E engines being built by Aerojet Rocketdyne, which are supposed to be 30 percent cheaper and 11 percent more powerful. It also uses solid rocket boosters provided by Northrop Grumman. 

The older technology isn't helping with the budget as NASA expected, though. "These increases are caused by interrelated issues such as assumptions that the use of heritage technologies from the Space Shuttle and Constellation Programs were expected to result in significant cost and schedule savings compared to developing new systems for the SLS," the audit states. "However, the complexity of developing, updating, and integrating new systems along with heritage components proved to be much greater than anticipated." 

For instance, only 5 of the 16 engine adaptations have been completed, and scope and cost increases have hit the booster contract as well. The latter has been the biggest issue, increasing from $2.5 billion to $4.4 billion since Artemis was announced, and delaying the schedule by five years. 

The Inspector General also blames the use of "cost-plus" contracts, which allow suppliers to inflate budgets more easily than fixed-price contracts would. The report recommends shifting upcoming work to a fixed-price regime and resolving procurement issues, among other changes. NASA management has agreed to all eight recommendations.

The Artemis moon mission project was based on the Constellation program, originally launched in 2005 with the goal of returning to the moon by 2020 and, eventually, reaching Mars. The Obama administration's cancellation of that project was met with widespread criticism, largely because the program guaranteed jobs around the US.

However, the NASA Authorization Act of 2010, introduced the same year, mandated construction of the SLS and required the repurposing of existing technology, contracts and workforce from Constellation. It also required partnerships with private space companies. SpaceX, for one, is developing its own Starship rocket system, also capable of carrying astronauts to the Moon and Mars, though Starship exploded on its first orbital launch and may not fly again soon due to issues with the self-destruct command and the considerable damage the launch did to local ecosystems.

Ron DeSantis can't announce he's running for president because Twitter's servers are 'kind of melting'

Ron DeSantis was supposed to take to Twitter Spaces today to officially announce his bid for the 2024 Republican presidential nomination. Unfortunately for the governor of Florida, it appears Twitter was not prepared for the influx of people waiting to listen to the announcement. Shortly after the Space went live, the call dropped, and DeSantis has yet to say he's running to become the president of the United States of America. "We've got so many people here that I think we are kind of melting the servers, which is a good sign," Republican megadonor and Elon Musk confidant David Sacks said during a moment when the Space briefly returned before dropping again.

Musk’s decision to personally give DeSantis a platform should put to bed any questions about his politics. Since his takeover of the platform last October, the billionaire has repeatedly engaged with and enabled fringe far-right voices. At the start of December, Twitter reinstated the account of Andrew Anglin, the creator of the white supremacist website The Daily Stormer. Before that, Musk elevated conspiracy theories about the attack on Paul Pelosi. More recently, he has publicly attacked Anthony Fauci and George Soros. In helping DeSantis announce his presidential bid, Musk is aligning himself with a politician who has signed legislation that has restricted access to abortion and banned transition care for minors.

Before today, Twitter, under its previous leadership, had never so directly engaged with a presidential candidate. At most, the platform's user-facing political outreach involved an election hub that pointed people to information on how to vote and livestreams devoted to debates between presidential candidates. Now the company plans to give former Fox News pundit Tucker Carlson a platform.

During Donald Trump's years as president, the public got used to seeing a US leader use Twitter as a personal megaphone. The former president was banned from the website in 2021 in the aftermath of the January 6th US Capitol riot. Last November, Musk appeared to base the decision to reinstate Trump’s account on the results of a Twitter poll. The company restored the account on November 19th, 2022, but even with some attempted public coaxing from Musk, the former president has not tweeted since before his ban.

This is a developing story. Please check back for updates.

Google and the European Commission will collaborate on AI ground rules

The world’s governments have taken note of generative AI’s potential for massive disruption and are acting accordingly. European Commission (EC) industry chief Thierry Breton said Wednesday that the Commission would work with Alphabet on a voluntary pact to establish artificial intelligence ground rules, according to Reuters. Breton met with Google CEO Sundar Pichai in Brussels to discuss the arrangement, which will include input from companies based in Europe and other regions. The EU has a history of enacting strict technology rules, and the alliance gives Google a chance to provide input while steering clear of trouble down the road.

The compact aims to set up guidelines ahead of official legislation like the EU’s proposed AI Act, which will take much longer to develop and enact. “Sundar and I agreed that we cannot afford to wait until AI regulation actually becomes applicable, and to work together with all AI developers to already develop an AI pact on a voluntary basis ahead of the legal deadline,” Breton said in a statement. He encouraged EU nations and lawmakers to settle on specifics by the end of the year.

In a similar move, EU tech chief Margrethe Vestager said Tuesday that the bloc would work with the United States on establishing minimum standards for AI. She hopes EU governments and lawmakers will “agree to a common text” for regulation by the end of 2023. “That would still leave one if not two years then to come into effect, which means that we need something to bridge that period of time,” she said. Topics of concern for the EU include copyright, disinformation, transparency and governance.

OpenAI’s ChatGPT, the service most associated with AI fears, exploded in popularity after its November launch, on its way to becoming the fastest-growing application ever (despite not having an official mobile app until this month). Unfortunately, its viral popularity is paired with legitimate fears about its capacity to upend society. In addition, image generators can produce AI-generated “photos” that are increasingly difficult to discern from reality, and speech cloners can mimic the voices of famous artists and public figures. Soon, video generators will evolve, making deepfakes even more of a concern.

Despite its undeniable potential for creativity and productivity, generative AI can threaten the livelihoods of countless content creators while posing new security and privacy risks and proliferating misinformation and disinformation. Left unregulated, corporations tend to maximize profits no matter the human cost, and generative AI is a tool that, in the hands of bad actors, could wreak immeasurable global havoc. “There is a shared sense of urgency. In order to make the most of this technology, guard rails are needed,” Vestager said. “Can we discuss what we can expect companies to do as a minimum before legislation kicks in?”

Pegasus spyware found on phones of Mexican president's close ally

It's not unusual to hear of countries using NSO Group's Pegasus spyware to surveil the public, but there are now concerns one government is spying on itself. Sources for The New York Times and The Washington Post claim Pegasus has been found on the phone of Mexico's undersecretary for human rights, Alejandro Encinas, a longtime ally of President Andrés Manuel López Obrador, as well as on the phones of at least two members of Encinas' office. While there's no firm evidence pointing to a culprit, the news comes as Encinas has been investigating alleged military abuses of power since 2018, including the notorious disappearance of 43 students in Iguala in 2014.

The University of Toronto-based Citizen Lab research team detected Pegasus in a 2022 audit, according to a source speaking to The Post. Encinas' phone has been compromised more than once, The Times says, including last year as he was heading the commission covering the Iguala disappearances. He blamed the tragedy on the police, the military, certain officials and drug traffickers. Encinas apparently briefed López Obrador about the spying this March, but has remained silent since.

Encinas, Citizen Lab and the Mexican Defense Ministry have all declined to comment. NSO Group told The Times in a statement that it looks into "all credible allegations" of misuse and ends contracts when it finds problems.

In a press conference, López Obrador minimized the alleged snooping and said he doesn't believe the military is to blame. However, anti-corruption critics Ángela Buitrago and Eduardo Bohorquez worry the Mexican army may be using Pegasus to retaliate against Encinas, revealing a lack of effective government oversight in the process.

NSO Group itself has faced widespread criticism. The US banned trade with the company in 2021 for allegedly selling spyware to authoritarian governments that used the tools to eliminate dissent by surveilling activists and journalists. NSO has denied enabling abuses and even hired a libel attorney who accused some journalists of misrepresenting its business.

White House reveals its next steps towards 'responsible' AI development

The White House has made responsible AI development a focus of this administration in recent months, releasing a Blueprint for an AI Bill of Rights, developing a risk management framework, committing $140 million to found seven new National AI Research Institutes and weighing in on how private enterprises are leveraging the technology. On Tuesday, the executive branch announced its next steps towards that goal, including the first update to the National AI R&D Strategic Plan since 2019 as well as a request for public input on critical AI issues. The Department of Education also dropped its hotly anticipated report on the effects and risks of AI for students.

The Office of Science and Technology Policy's (OSTP) National AI R&D Strategic Plan, which guides the federal government's investments in AI research, hadn't been updated since the Trump administration (which gutted OSTP staffing levels). The plan seeks to promote responsible innovation in the field that serves the public good without infringing on the public's rights, safety and democratic values, and until now it has done so through eight core strategies. Tuesday's update adds a ninth, establishing "a principled and coordinated approach to international collaboration in AI research," per the White House.

"The federal government plays a critical role in ensuring that technologies like AI are developed responsibly, and to serve the American people," the OSTP argued in its release. "Federal investments over many decades have facilitated many key discoveries in AI innovations that power industry and society today, and federally funded research has sustained progress in AI throughout the field’s evolution."

The OSTP also wants to hear the public's thoughts on both its new strategies and the technology's development in general. As such, it is inviting "interested individuals and organizations" to submit comments responding to one or more of nearly 30 prompt questions, including "How can AI rapidly identify cyber vulnerabilities in existing critical infrastructure systems and accelerate addressing them?" and "How can Federal agencies use shared pools of resources, expertise, and lessons learned to better leverage AI in government?" through the Federal eRulemaking Portal by 5:00 pm ET on July 7, 2023. Responses should be limited to 10 pages in 11-point font.

The Department of Education also released its report on the promises and pitfalls of AI in schools on Tuesday, focusing on how the technology impacts learning, teaching, assessment and research. Despite recent media hysteria about generative AIs like ChatGPT fomenting the destruction of higher education by helping students write their essays, the DoE noted that AI "can enable new forms of interaction between educators and students, help educators address variability in learning, increase feedback loops, and support educators."
