Posts with «politics & government» label

Apple says it was ordered to remove WhatsApp and Threads from China App Store

Apple users in China won't be able to find and download WhatsApp and Threads from the App Store anymore, according to The Wall Street Journal and The New York Times. The company said it pulled the apps from the store to comply with orders it received from the Cyberspace Administration, China's internet regulator, "based on [its] national security concerns." It explained to the publications that it's "obligated to follow the laws in the countries where [it operates], even when [it disagrees]."

The Great Firewall of China blocks many non-domestic apps and technologies in the country, prompting locals to use VPNs if they want to access them. Meta's Facebook and Instagram are two of those applications, but WhatsApp and Threads had remained available for download until now. The Chinese regulator's order comes shortly before the Senate is set to vote on a bill that could lead to a TikTok ban in the US. The Cyberspace Administration's reasoning — that the apps are a national security concern — even echoes American lawmakers' argument for blocking TikTok in the country.

Under the current version of the bill, ByteDance would have a year to divest TikTok, or else the short-form video-sharing platform would be banned from US app stores. The House is expected to pass the bill, which is part of a package that also includes aid to Ukraine and Israel. President Joe Biden previously said that he supports the package and will immediately sign the bills into law.

This article originally appeared on Engadget at https://www.engadget.com/apple-says-it-was-ordered-it-to-remove-whatsapp-and-threads-from-china-app-store-061441223.html?src=rss

The bill that could ban TikTok is barreling ahead

The bill that could lead to a ban of TikTok in the United States appears to be much closer to becoming law. The legislation sailed through the House of Representatives last month, but faced an uncertain future in the Senate due to opposition from a few prominent lawmakers.

But momentum for the “Protecting Americans from Foreign Adversary Controlled Applications Act” seems to once again be growing. The House is set to vote on a package of bills this weekend, which includes a slightly revised version of the TikTok bill. In the latest version of the bill, ByteDance would have up to 12 months to divest TikTok, instead of the six-month period stipulated in the original measure.

That change, as NBC News notes, was apparently key to winning over support from some skeptical members of the Senate, including Sen. Maria Cantwell, chair of the Senate Commerce Committee. So with the House expected to pass the revised bill Saturday — it’s part of a package that also includes aid to Ukraine and Israel — its path forward is starting to look much more certain, with a Senate vote coming “as early as next week,” according to NBC. President Joe Biden has said he would sign the bill if it’s passed by Congress.

If the bill becomes law, TikTok (and potentially other apps "controlled by a foreign adversary" and deemed to be a national security threat) would face a ban from US app stores if ByteDance declined to sell it to a new owner. TikTok CEO Shou Chew has suggested the company would likely mount a legal challenge to the law.

“It is unfortunate that the House of Representatives is using the cover of important foreign and humanitarian assistance to once again jam through a ban bill that would trample the free speech rights of 170 million Americans, devastate 7 million businesses, and shutter a platform that contributes $24 billion to the U.S. economy, annually,” TikTok said in a statement.

This article originally appeared on Engadget at https://www.engadget.com/the-bill-that-could-ban-tiktok-is-barreling-ahead-230518984.html?src=rss

Media coalition asks the feds to investigate Google’s removal of California news links

The News/Media Alliance, formerly the Newspaper Association of America, asked US federal agencies to investigate Google’s removal of links to California news media outlets. Google’s tactic is in response to the proposed California Journalism Preservation Act (CJPA), which would require it and other tech companies to pay for links to California-based publishers’ news content.

The News/Media Alliance, which represents over 2,200 publishers, sent letters to the Department of Justice, Federal Trade Commission and California State Attorney General on Tuesday. It says the removal “appears to be either coercive or retaliatory, driven by Google’s opposition to a pending legislative measure in Sacramento.”

The CJPA would require Google and other tech platforms to pay California media outlets in exchange for links. The proposed bill passed the state Assembly last year.

In a blog post last week announcing the removal, Google VP of Global News Partnerships Jaffer Zaidi warned that the CJPA is “the wrong approach to supporting journalism” (because Google’s current approach totally hasn’t left the industry in smoldering ruins!). Zaidi said the CJPA “would also put small publishers at a disadvantage and limit consumers’ access to a diverse local media ecosystem.” Nothing to see here, folks: just your friendly neighborhood multi-trillion-dollar company looking out for the little guy!

Google described its link removal as a test to see how the bill would impact its platform:

“To prepare for possible CJPA implications, we are beginning a short-term test for a small percentage of California users,” Zaidi wrote. “The testing process involves removing links to California news websites, potentially covered by CJPA, to measure the impact of the legislation on our product experience. Until there’s clarity on California’s regulatory environment, we’re also pausing further investments in the California news ecosystem, including new partnerships through Google News Showcase, our product and licensing program for news organizations, and planned expansions of the Google News Initiative.”

In its letters, The News/Media Alliance lists several laws it believes Google may be breaking with the “short-term” removal. Potential federal violations include the Lanham Act, the Sherman Antitrust Act and the Federal Trade Commission Act. The letter to California’s AG cites the state’s Unruh Civil Rights Act, regulations against false advertising and misrepresentation, the California Consumer Privacy Act and California’s Unfair Competition Law (UCL).

“Importantly, Google released no further details on how many Californians will be affected, how the Californians who will be denied news access were chosen, what publications will be affected, how long the compelled news blackouts will persist, and whether access will be blocked entirely or just to content Google particularly disfavors,” News/Media Alliance President / CEO Danielle Coffey wrote in the letter to the DOJ and FTC. “Because of these unknowns, there are many ways Google’s unilateral decision to turn off access to news websites for Californians could violate laws.”

Google has a mixed track record in dealing with similar legislation. It pulled Google News from Spain for seven years in response to local copyright laws that would have required licensing fees to publishers. However, it signed deals worth around $150 million to pay Australian publishers and retreated from threats to pull news from search results in Canada, instead spending the $74 million required by the Online News Act.

Google made more than $73 billion in profits in 2023. The company currently has a $1.94 trillion market cap.

This article originally appeared on Engadget at https://www.engadget.com/media-coalition-asks-the-feds-to-investigate-googles-removal-of-california-news-links-212052979.html?src=rss

Starlink terminals are reportedly being used by Russian forces in Ukraine

Starlink satellite internet terminals are being widely used by Russian forces in Ukraine, according to a report by The Wall Street Journal. The publication indicates that the terminals, which were developed by Elon Musk’s SpaceX, are being used to coordinate attacks in eastern Ukraine and Crimea. Additionally, Starlink terminals can be used on the battlefield to control drones and other forms of military tech.

The terminals are reaching Russian forces via a complex network of black market sellers. This is despite the fact that Starlink devices are banned in the country. WSJ followed some of these sellers as they smuggled the terminals into Russia and even made sure deliveries got to the front lines. Reporting also indicates that some of the terminals were originally purchased on eBay.

This black market for Starlink terminals allegedly stretches beyond occupied Ukraine and into Sudan. Many of these Sudanese dealers are reselling units to the Rapid Support Forces, a paramilitary group that’s been accused of committing atrocities like ethnically motivated killings, targeted abuse of human rights activists, sexual violence and the burning of entire communities. WSJ notes that hundreds of terminals have found their way to members of the Rapid Support Forces.

Back in February, Elon Musk addressed earlier reports that Starlink terminals were being used by Russian soldiers in the war against Ukraine. “To the best of our knowledge, no Starlinks have been sold directly or indirectly to Russia,” he wrote on X. The Kremlin also denied the reports, according to Reuters. Despite these proclamations, WSJ says that “thousands of the white pizza-box-sized devices” have landed with “some American adversaries and accused war criminals.”

After those February reports, House Democrats demanded that Musk take action, according to Business Insider, noting that Russian military use of the tech is “potentially in violation of US sanctions and export controls.” Starlink has the ability to disable individual terminals, and each unit includes geofencing technology that is supposed to prevent use in unauthorized countries, though it's unclear whether black-market sellers can get around these hurdles.

Musk's history with Starlink and the war in Ukraine is complicated. He took steps to limit Ukraine’s use of the technology on the grounds that the terminals were never intended for use in military conflicts. According to his biography, Musk also blocked Ukraine’s use of Starlink near Crimea early in the conflict, ending the country’s plans for an attack on Russia’s naval fleet. Mykhailo Podolyak, an advisor to Ukrainian President Volodymyr Zelensky, wrote on X that “civilians, children are being killed” as a result of Musk’s decision. He further dinged the billionaire by writing “this is the price of a cocktail of ignorance and a big ego.”

However, Musk fired back and said that Starlink was never active in the area near Crimea, so there was nothing to disable. He also said that the policy in question was decided upon before Ukraine’s planned attack on the naval fleet. Ukraine did lose access to more than 1,300 Starlink terminals in the early days of the conflict due to a payment issue. SpaceX reportedly charged Ukraine $2,500 per month to keep each unit operational, which added up to $3.25 million per month for those terminals. This pricing aligns with the company’s high-cost premium plan. It’s worth noting that SpaceX has donated more than 3,600 terminals to Ukraine.
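For readers checking the math, the $3.25 million figure follows directly from the per-unit pricing cited above; here's a minimal back-of-the-envelope sketch, assuming the reported numbers of 1,300 terminals at $2,500 per month each:

```python
# Back-of-the-envelope check of the reported Starlink billing figures,
# using the numbers cited above (1,300 terminals at $2,500 per month each).
terminals = 1_300
monthly_rate_usd = 2_500

total_monthly_usd = terminals * monthly_rate_usd
print(f"${total_monthly_usd:,} per month")  # -> $3,250,000 per month
```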

SpaceX has yet to comment on the WSJ report regarding the black-market proliferation of Starlink terminals. We’ll update this post when it does.

This article originally appeared on Engadget at https://www.engadget.com/starlink-terminals-are-reportedly-being-used-by-russian-forces-in-ukraine-154832503.html?src=rss

The FCC will vote to restore net neutrality later this month

The Federal Communications Commission (FCC) plans to vote to restore net neutrality later this month. With Democrats finally holding an FCC majority in the final year of President Biden’s first term, the agency can fulfill a 2021 executive order from the President and bring back the Obama-era rules that the Trump administration’s FCC gutted in 2017.

The FCC plans to hold the vote during a meeting on April 25. Net neutrality rules treat broadband as an essential service under Title II of the Communications Act, giving the FCC greater authority to regulate the industry. They let the agency prevent ISPs from engaging in anti-consumer behavior like unfair pricing, blocking or throttling content, and offering pay-to-play “fast lanes” to internet access.

Democrats had to wait three years to enact Biden’s 2021 executive order to reinstate the net neutrality rules passed in 2015 by President Obama’s FCC. The drawn-out confirmation process for Biden’s FCC nominee, Gigi Sohn, played no small part in the delay. She withdrew her nomination in March 2023 following what she called “unrelenting, dishonest and cruel attacks.”

Republicans (and Democratic Senator Joe Manchin) opposed her confirmation through a lengthy 16-month process. During that period, telecom lobbying dollars flowed freely and Republicans cited past Sohn tweets critical of Fox News, along with vocal opposition from law enforcement, as justification for blocking the confirmation. Democrats finally regained an FCC majority with the swearing-in of Anna Gomez in late September, near the end of Biden’s third year in office.

“The pandemic proved once and for all that broadband is essential,” FCC Chairwoman Rosenworcel wrote in a press release. “After the prior administration abdicated authority over broadband services, the FCC has been handcuffed from acting to fully secure broadband networks, protect consumer data, and ensure the internet remains fast, open, and fair. A return to the FCC’s overwhelmingly popular and court-approved standard of net neutrality will allow the agency to serve once again as a strong consumer advocate of an open internet.”

This article originally appeared on Engadget at https://www.engadget.com/the-fcc-will-vote-to-restore-net-neutrality-later-this-month-161813609.html?src=rss

California introduces 'right to disconnect' bill that would allow employees to possibly relax

Burnout, quiet quitting, strikes — the news (and likely your schedule) is filled with signs that workers are overwhelmed and that too much is expected of them. There's little regulation in the United States to prevent employers from forcing workers to be at their desks or on call at all hours, but that might soon change. California State Assemblyman Matt Haney has introduced AB 2751, a "right to disconnect" measure, The San Francisco Standard reports.

The bill is in its early stages but, if passed, would make every California employer lay out exactly what a person's hours are and ensure they aren't required to respond to work-related communications while off the clock. Time periods in which a salaried employee might have to work longer hours would need to be laid out in their contract. Exceptions would exist for emergencies. 

The Department of Labor would monitor adherence and fine companies a minimum of $100 for violations — whether that's forcing employees to be on Zoom, in their inboxes, answering texts or monitoring Slack when they're not getting paid to do so. "I do think it’s fitting that California, which has created many of these technologies, is also the state that introduces how we make it sustainable and update our protections for the times we live in and the world we’ve created," Haney told The Standard.

It's not clear how much support exists for AB 2751, but given California's status as a tech hub and major economic center, the bill has the potential to create a tremendous impact for workers in the state and to pressure other states to follow suit. The bill follows similar legislation in other countries. In 2017, France became the first nation to implement a "right to disconnect" policy, a model that has since been copied in Argentina, Ireland, Mexico and Spain.

This article originally appeared on Engadget at https://www.engadget.com/california-introduces-right-to-disconnect-bill-that-would-allow-employees-to-possibly-relax-151705072.html?src=rss

NYC’s business chatbot is reportedly doling out ‘dangerously inaccurate’ information

An AI chatbot released by the New York City government to help business owners access pertinent information has been spouting falsehoods, at times even misinforming users about actions that are against the law, according to a report from The Markup. The report, which was co-published with the local nonprofit newsrooms Documented and The City, includes numerous examples of inaccuracies in the chatbot’s responses to questions relating to housing policies, workers’ rights and other topics.

Mayor Adams’ administration introduced the chatbot in October as an addition to the MyCity portal, which launched in March 2023 as “a one-stop shop for city services and benefits.” The chatbot, powered by Microsoft’s Azure AI, is aimed at current and aspiring business owners, and was billed as a source of “actionable and trusted information” that comes directly from the city government’s sites. But it is a pilot program, and a disclaimer on the website notes that it “may occasionally produce incorrect, harmful or biased content.”

In The Markup’s tests, the chatbot repeatedly provided incorrect information. In response to the question, “Can I make my store cashless?”, for example, it replied, “Yes, you can make your store cashless in New York City” — despite the fact that New York City banned cashless stores in 2020. The report shows the chatbot also responded incorrectly about whether employers can take their workers’ tips, whether landlords have to accept Section 8 vouchers or tenants on rental assistance, and whether businesses have to inform staff of scheduling changes. A housing policy expert who spoke to The Markup called the chatbot “dangerously inaccurate” at its worst.

The city has indicated that the chatbot is still a work in progress. In a statement to The Markup, Leslie Brown, a spokesperson for the NYC Office of Technology and Innovation, said the chatbot “has already provided thousands of people with timely, accurate answers,” but added, “We will continue to focus on upgrading this tool so that we can better support small businesses across the city.” 

This article originally appeared on Engadget at https://www.engadget.com/nycs-business-chatbot-is-reportedly-doling-out-dangerously-inaccurate-information-203926922.html?src=rss

Microsoft Copilot has reportedly been blocked on all Congress-owned devices

US Congressional staff members can no longer use Microsoft's Copilot on their government-issued devices, according to Axios. The publication said it obtained a memo from House Chief Administrative Officer Catherine Szpindor telling Congress personnel that the AI chatbot is now officially prohibited. Apparently, the Office of Cybersecurity has deemed Copilot a risk "due to the threat of leaking House data to non-House approved cloud services." While there's nothing stopping staffers from using Copilot on their own phones and laptops, it will now be blocked on all Windows devices owned by Congress.

Almost a year ago, Congress also set strict limits on the use of ChatGPT, which is powered by OpenAI's large language models, just like Copilot. It banned staffers from using the chatbot's free version on House computers, but allowed them to continue using the paid ChatGPT Plus version for research and evaluation due to its tighter privacy controls. More recently, the White House revealed rules federal agencies have to follow when it comes to generative AI, meant to ensure that any tools they use "do not endanger the rights and safety" of Americans.

Microsoft told Axios that it recognizes government users' need for higher security requirements. Last year, it announced a roadmap of tools and services meant for government use, including an Azure OpenAI service for classified workloads and a new version of Microsoft 365's Copilot assistant. The company said that all of those tools and services will feature higher levels of security that would make them more suitable for handling sensitive data. Szpindor's office, according to Axios, will evaluate the government version of Copilot when it becomes available before deciding whether it can be used on House devices.

This article originally appeared on Engadget at https://www.engadget.com/microsoft-copilot-has-reportedly-been-blocked-on-all-congress-owned-devices-034946166.html?src=rss

The White House lays out extensive AI guidelines for the federal government

It's been five months since President Joe Biden signed an executive order (EO) to address the rapid advancements in artificial intelligence. The White House is today taking another step forward in implementing the EO with a policy that aims to regulate the federal government's use of AI. Safeguards that the agencies must have in place include, among other things, ways to mitigate the risk of algorithmic bias.

"I believe that all leaders from government, civil society and the private sector have a moral, ethical and societal duty to make sure that artificial intelligence is adopted and advanced in a way that protects the public from potential harm while ensuring everyone is able to enjoy its benefits," Vice President Kamala Harris told reporters on a press call.

Harris announced three binding requirements under a new Office of Management and Budget (OMB) policy. First, agencies will need to ensure that any AI tools they use "do not endanger the rights and safety of the American people." They have until December 1 to put "concrete safeguards" in place ensuring that the AI systems they employ don't compromise Americans' safety or rights. Otherwise, the agency will have to stop using an AI product unless its leaders can justify that scrapping the system would have an "unacceptable" impact on critical operations.

Impact on Americans' rights and safety

Per the policy, an AI system is deemed to impact safety if it "is used or expected to be used, in real-world conditions, to control or significantly influence the outcomes of" certain activities and decisions. Those include maintaining election integrity and voting infrastructure; controlling critical safety functions of infrastructure like water systems, emergency services and electrical grids; autonomous vehicles; and operating the physical movements of robots in "a workplace, school, housing, transportation, medical or law enforcement setting."

Unless they have appropriate safeguards in place or can otherwise justify their use, agencies will also have to ditch AI systems that infringe on the rights of Americans. Purposes the policy presumes to impact rights include predictive policing; social media monitoring for law enforcement; detecting plagiarism in schools; blocking or limiting protected speech; detecting or measuring human emotions and thoughts; pre-employment screening; and "replicating a person’s likeness or voice without express consent."

When it comes to generative AI, the policy stipulates that agencies should assess its potential benefits. They also need to "establish adequate safeguards and oversight mechanisms that allow generative AI to be used in the agency without posing undue risk."

Transparency requirements

The second requirement will force agencies to be transparent about the AI systems they're using. "Today, President Biden and I are requiring that every year, US government agencies publish online a list of their AI systems, an assessment of the risks those systems might pose and how those risks are being managed," Harris said. 

As part of this effort, agencies will need to publish government-owned AI code, models and data, as long as doing so won't harm the public or government operations. If an agency can't disclose specific AI use cases for sensitivity reasons, it will still have to report metrics.


Last but not least, federal agencies will need to have internal oversight of their AI use. That includes each department appointing a chief AI officer to oversee all of an agency's use of AI. "This is to make sure that AI is used responsibly, understanding that we must have senior leaders across our government who are specifically tasked with overseeing AI adoption and use," Harris noted. Many agencies will also need to have AI governance boards in place by May 27.

The vice president added that prominent figures from the public and private sectors (including civil rights leaders and computer scientists) helped shape the policy along with business leaders and legal scholars.

The OMB suggests that, by adopting these safeguards, the Transportation Security Administration may have to let airline travelers opt out of facial recognition scans without losing their place in line or facing a delay. It also suggests that there should be human oversight over things like AI fraud detection and diagnostic decisions in the federal healthcare system.

As you might imagine, government agencies are already using AI systems in a variety of ways. The National Oceanic and Atmospheric Administration is working on artificial intelligence models to help it more accurately forecast extreme weather, floods and wildfires, while the Federal Aviation Administration is using a system to help manage air traffic in major metropolitan areas to improve travel time.

"AI presents not only risk, but also a tremendous opportunity to improve public services and make progress on societal challenges like addressing climate change, improving public health and advancing equitable economic opportunity," OMB Director Shalanda Young told reporters. "When used and overseen responsibly, AI can help agencies to reduce wait times for critical government services to improve accuracy and expand access to essential public services."

This policy is the latest in a string of efforts to regulate the fast-evolving realm of AI. While the European Union has passed a sweeping set of rules for AI use in the bloc, and there are federal bills in the pipeline, efforts to regulate AI in the US have taken more of a patchwork approach at the state level. This month, Utah enacted a law to protect consumers from AI fraud. In Tennessee, the Ensuring Likeness Voice and Image Security Act (aka the Elvis Act — seriously) is an attempt to protect musicians from deepfakes, i.e. having their voices cloned without permission.

This article originally appeared on Engadget at https://www.engadget.com/the-white-house-lays-out-extensive-ai-guidelines-for-the-federal-government-090058684.html?src=rss

Israel’s military reportedly used Google Photos to identify civilians in Gaza

The New York Times reports that Israel’s military intelligence has been using an experimental facial recognition program in Gaza that’s misidentified Palestinian civilians as having ties to Hamas. Google Photos allegedly plays a part in the chilling program’s implementation, although it appears not to be through any direct collaboration with the company.

The surveillance program reportedly started as a way to search for Israeli hostages in Gaza. However, as often happens with new wartime technology, the initiative was quickly expanded to “root out anyone with ties to Hamas or other militant groups,” according to The NYT. The technology is flawed, but Israeli soldiers reportedly haven’t treated it as such when detaining civilians flagged by the system.

According to intelligence officers who spoke to The NYT, the program uses tech from the private Israeli company Corsight. Headquartered in Tel Aviv, the company promises its surveillance systems can accurately recognize people with less than half of their faces exposed. It can supposedly be effective even with “extreme angles (even from drones), darkness and poor quality.”

But an officer in Israel’s Unit 8200 learned that, in reality, it often struggled with grainy, obscured or injured faces. According to the official, Corsight’s tech produced false positives, as well as cases where a correctly identified Palestinian was wrongly flagged as having Hamas ties.

Three Israeli officers told The NYT that its military used Google Photos to supplement Corsight’s tech. Intelligence officials allegedly uploaded data containing known persons of interest to Google’s service, allowing them to use the app’s photo search feature to flag them among its surveillance materials. One officer said Google’s ability to match partially obscured faces was superior to Corsight’s, but they continued using the latter because it was “customizable.”

Engadget emailed Google for a statement but hadn’t heard back as of publication. We’ll update this story if we get a response.

One man erroneously detained through the surveillance program was poet Mosab Abu Toha, who told The NYT he was pulled aside at a military checkpoint in northern Gaza as his family tried to flee to Egypt. He was then allegedly handcuffed and blindfolded, and then beaten and interrogated for two days before finally being returned. He said soldiers told him before his release that his questioning (and then some) had been a “mistake.”

The Things You May Find Hidden in My Ear: Poems From Gaza scribe said he has no connection to Hamas and wasn’t aware of an Israeli facial recognition program in Gaza. However, during his detention, he said he overheard someone saying the Israeli army had used a “new technology” on the group with whom he was incarcerated.

This article originally appeared on Engadget at https://www.engadget.com/israels-military-reportedly-used-google-photos-to-identify-civilians-in-gaza-200843298.html?src=rss