Posts with «politics & government» label

FCC moves forward with its plan to restore net neutrality protections

As expected, the commissioners of the Federal Communications Commission voted along party lines to move forward with a plan to largely restore Obama-era net neutrality protections. All three of the agency's Democratic commissioners voted in favor of the Notice of Proposed Rulemaking (PDF), with the two Republican commissioners dissenting.

FCC Chairwoman Jessica Rosenworcel, who has long supported net neutrality rules, last month announced a proposal to reclassify fixed broadband as an essential communications service under Title II of the Communications Act of 1934. It also aims to reclassify mobile broadband as a commercial mobile service.

If broadband is reclassified in this way, the FCC would have greater scope to regulate it in a similar way to how water, power and phone services are overseen. As such, it would have more leeway to re-establish net neutrality rules.

Supporters believe that net neutrality protections are fundamental to an open and equitable internet. When such rules are in place, internet service providers have to give users access to all sites, content and apps at the same speeds and under the same conditions. They can't block or give preference to any content, and they're not allowed to, for instance, charge video streaming services for faster service.

"The proposed net neutrality rules will ensure that all viewpoints, including those with which I disagree, are heard," Commissioner Anna Gomez, who was sworn in as the panel's third Democratic member in September, said ahead of the vote. "Moreso, these principles protect consumers while also maintaining a healthy, competitive broadband internet ecosystem. Because we know that competition is required for access to a healthy, open internet that is accessible to all."

On the other hand, critics say that net neutrality rules are unnecessary. "Since the FCC’s 2017 decision to return the Internet to the same successful and bipartisan regulatory framework under which it thrived for decades, broadband speeds in the U.S. have increased, prices are down, competition has intensified, and record-breaking new broadband builds have brought millions of Americans across the digital divide," Brendan Carr, the senior Republican on the FCC, said in a statement. "The Internet is not broken and the FCC does not need Title II to fix it. I would encourage the agency to reverse course and focus on the important issues that Congress has authorized the FCC to advance."

Restoring previous net neutrality rules (which the Trump administration overturned in 2017) has been part of President Joe Biden's agenda for several years. However, until Gomez was sworn in, the FCC was deadlocked, leaving that goal in limbo until now.

The FCC suggests that reclassification will grant it more authority to "safeguard national security, advance public safety, protect consumers and facilitate broadband deployment." In addition, the agency wants to "reestablish a uniform, national regulatory approach to protect the open internet" and stop ISPs from "engaging in practices harmful to consumers."

The FCC will now seek comment on the proposal with members of the public and stakeholders (such as ISPs) having the chance to weigh in on the agency's plan. After reviewing and possibly implementing feedback, the FCC is then expected to issue a final rule on the reclassification of broadband internet access. As the Electronic Frontier Foundation points out, this means net neutrality protections could be restored as soon as next spring.

It's still not a sure thing that net neutrality protections will return, however. The implementation of revived rules could face legal challenges from the telecom industry. It may also take quite some time for the FCC to carry out the rulemaking process, which may complicate matters given that we're going into a presidential election year.

Nevertheless, net neutrality is a major priority for the fully staffed commission under Rosenworcel. “We’re laserlike focused on getting this rulemaking process started, then we're going to review the record, and my hope is we'll be able to move to order," the FCC chair told The Washington Post.

This article originally appeared on Engadget at https://www.engadget.com/fcc-moves-forward-with-its-plan-to-restore-net-neutrality-protections-154431460.html?src=rss

IRS will start piloting its free TurboTax alternative in 2024

It looks like the Internal Revenue Service (IRS) truly was working on a free TurboTax alternative, as earlier reports had claimed. The US tax authority has announced that it will start pilot testing its new Direct File program for the 2024 filing season, though it will initially be available only to select taxpayers in 13 states. During its pilot period, Direct File will only cover individual federal tax returns and won't have the capability to prepare people's state returns. That's why nine of the 13 states testing it — namely Alaska, Florida, New Hampshire, Nevada, South Dakota, Tennessee, Texas, Washington and Wyoming — don't levy state income taxes.

Arizona, California, Massachusetts and New York, the other four states in the list, worked with the IRS to integrate their state taxes into the Direct File system for 2024. The IRS says it invited all states to join the pilot program, but not all of them were in a position to participate "at this time." In addition to being only available in certain locations, Direct File will only be accessible by people with "relatively simple returns" at the beginning. It will cover W-2 wages and tax credits like the Earned Income Tax Credit and the Child Tax Credit, for instance, but it will not cover self-employment income and itemized deductions. However, the agency is still finalizing the tax scope for the pilot, so it could still change over the coming months. 

Based on the screenshots the IRS shared with The Washington Post, taxpayers will only have to answer a questionnaire to file their taxes directly, simplifying the process and sparing them the cost of a third-party service. An IRS official told the publication that select eligible taxpayers in the aforementioned states will start getting invitations to use the service sometime around mid-February next year. The agency says it will begin with a small group of taxpayers before expanding access to more and more people as the filing season for 2023 federal tax returns progresses.

"This is a critical step forward for this innovative effort that will test the feasibility of providing taxpayers a new option to file their returns for free directly with the IRS," IRS Commissioner Danny Werfel said in a statement. "In this limited pilot for 2024, we'll be working closely with the states that have agreed to participate in an important test run of the state integration. This will help us gather important information about the future direction of the Direct File program."

The IRS is hoping to gather data and feedback during the pilot to be able to analyze how effective Direct File is. It's also hoping to identify areas of improvement for a "potential large-scale launch in the future."

This article originally appeared on Engadget at https://www.engadget.com/irs-will-start-piloting-its-free-turbotax-alternative-in-2024-065553528.html?src=rss

YouTube warned by EU official to keep a close eye on Israel-Hamas war content

EU Commissioner Thierry Breton has been sending warning letters to online platforms, reminding them of their duty to address disinformation circulating about the Israel-Hamas war. Now Breton has written a letter addressed to Alphabet CEO Sundar Pichai, reminding him of the company's "precise obligations regarding content moderation under the EU Digital Services Act." Specifically, Breton is asking Alphabet to be "very vigilant" when it comes to Israel-Hamas-related content posted on YouTube.

The European Commission has been seeing a "surge of illegal content and disinformation" being disseminated via certain platforms, he said, telling Pichai that Alphabet has an obligation to protect children and teens from "violent content depicting hostage taking and other graphic videos." Breton also warned Pichai that if Alphabet receives notices of illegal content from the EU, it must respond in a timely manner. Finally, he reminded the CEO that the company must have mitigation measures in place to address risks to civic discourse stemming from disinformation. The video-sharing service must also adequately differentiate reliable news sources from terrorist propaganda and manipulated content, such as clickbait videos.

YouTube spokesperson Ivy Choi told The Verge that the service has "removed tens of thousands of harmful videos and terminated hundreds of channels" following the attacks in Israel and the "conflict now underway in Israel and Gaza." The platform's systems, she added, "continue to connect people with high-quality news and information." She also said that YouTube's teams are "working around the clock to monitor for harmful footage and remain vigilant to take action quickly if needed on all types of content, including Shorts and livestreams."

Breton is the same official who previously sent Elon Musk an "urgent" letter about the spread of disinformation on X amid the Israel-Hamas war. He called out the spread of "fake and manipulated images and facts circulating on [the platform formerly known as Twitter] in the EU, such as repurposed old images of unrelated armed conflicts or military footage that actually originated from video games." X CEO Linda Yaccarino published the company's response a day later, claiming to have removed or labeled "tens of thousands of pieces of content" and to have deleted hundreds of Hamas-affiliated accounts from the platform. Even so, the European Union still opened an investigation into X for its lackluster moderation of illegal content and disinformation related to the war.

The EU commissioner also sent Meta a stern letter, voicing similar concerns about misinformation on its platforms. Meta responded by saying that "expert teams from across [its] company have been working around the clock to monitor [its] platforms while protecting people's ability to use [its] apps to shed light on important developments happening on the ground." Breton sent TikTok a letter about disinformation spreading on its platform related to the Israel-Hamas war, as well, giving the company 24 hours to explain how it's complying with EU law.

In addition to asking YouTube to keep a close eye on Israel-Hamas disinformation, Breton also tackled the issue of election-related disinformation in his letter. He is asking the service to notify his team of the measures it has taken to mitigate deepfakes "in light of upcoming elections in Poland, The Netherlands, Lithuania, Belgium, Croatia, Romania and Austria, and the European Parliament elections."

Given the extensive reach of #YouTube, recalling the precise obligations of the #DSA in the context of the terrorist attacks by Hamas against Israel and disinformation around elections ⤵️ pic.twitter.com/82UXy3a8Dc

— Thierry Breton (@ThierryBreton) October 13, 2023

This article originally appeared on Engadget at https://www.engadget.com/youtube-warned-by-eu-official-to-keep-a-close-eye-on-israel-hamas-war-content-090134619.html?src=rss

The EPA won't force water utilities to inspect their cyber defenses

The EPA is withdrawing its plan to require states to assess the cybersecurity and integrity of public water system programs. While the agency says it continues to believe cybersecurity protective measures are essential for the public water industry, the decision was made after GOP-led states sued the agency over the proposed rule.

In a memo that accompanied the new rules in March, the EPA said that cybersecurity attacks on water and wastewater systems “have the potential to disable or contaminate the delivery of drinking water to consumers and other essential facilities like hospitals.” Despite the EPA’s willingness to provide training and technical support to help states and public water system organizations implement cybersecurity surveys, the move garnered opposition from both GOP state attorneys and trade groups.

Republican state attorneys general who opposed the proposed policies said that the call for new inspections could overwhelm state regulators. The attorneys general of Arkansas, Iowa and Missouri all sued the EPA, claiming the agency had no authority to set these requirements. This led to the EPA's proposal being temporarily blocked back in June.

While it's unclear if any cybersecurity regulations will be put in motion to protect the public moving forward, the EPA said it plans to continue working with the industry to “lower cybersecurity risks to clean and safe water.” It encourages all states to “voluntarily review” the cybersecurity of their water systems, noting that proactive actions might curb the potential public health impacts if a hack were to take place.

Ever since the highly publicized SolarWinds hack in 2020 that exposed government records and the 2021 Colonial Pipeline ransomware attack that temporarily shut down operations for the oil pipeline system, it's been abundantly clear that government entities and public agencies are hackable and prime targets for bad actors. The Biden administration has initiated a national strategy focused on public-private alliances to shift the burden of cybersecurity onto the organizations that are “best-positioned to reduce risks for all of us.”

This article originally appeared on Engadget at https://www.engadget.com/the-epa-wont-force-water-utilities-to-inspect-their-cyber-defenses-232301497.html?src=rss

X CEO responds to EU officials over handling of Israel-Hamas disinformation

Linda Yaccarino, X's CEO, said the company has redistributed its resources and refocused internal teams, which are now working around the clock to address the platform's needs related to the ongoing Israel-Hamas war. In her response to EU officials, Yaccarino detailed the measures the website has taken so far to contain fake news about the Hamas attacks on Israel, along with hateful posts in support of terrorism and violence.

On October 10, EU Commissioner Thierry Breton sent Elon Musk an "urgent letter," calling his attention and reminding him of X's content moderation obligations under the region's Digital Services Act. Breton said the EU had indications that the platform formerly known as Twitter is being used to disseminate illegal content and disinformation. Some of the images being circulated on the website, Breton said, were manipulated images from unrelated armed conflicts. Others, including supposed footage of military action, were taken from video games. 

Indeed, Open Source Intelligence (OSINT) researchers told Wired that they'd been inundated with false information on the website, making it difficult to rely on X for information gathering. In the past, posts from news outlets on the ground and reputable sources quickly showed up on people's timelines. Now, however, the website's algorithm is boosting posts by users paying $8 a month for their blue checkmarks, even when those posts contain misleading content and lies. It didn't help that Musk himself endorsed two accounts that had previously been proven to post false information to followers looking for details about the war. One of those accounts has also openly posted antisemitic comments.

In her response, Yaccarino claimed that X has removed or labeled "tens of thousands of pieces of content" since the attack on Israel began. She also said that X has deleted hundreds of Hamas-affiliated accounts from the platform so far, and that it continues to work with counter-terrorism organizations to prevent further distribution of terrorist content on the website. 

According to Yaccarino, the platform now has over 700 Community Notes, the website's crowd-sourced fact-checking tool, related to the attack. And since even media posts can now get notes, around 5,000 posts containing images and videos have been marked with the crowd-sourced messages. The CEO said that notes appear for media and image posts within minutes of them being created and for text posts within a median time of five hours, but X is working to make them show up on posts more quickly. 

In his letter, Breton said that the EU received reports from qualified sources that there was "potentially illegal content" circulating on X despite flags from relevant authorities. Yaccarino addressed that directly in her response, writing that the website has not received any notice from Europol and urging the European Commission to provide more details so that it can investigate further.

Everyday we're reminded of our global responsibility to protect the public conversation by ensuring everyone has access to real-time information and safeguarding the platform for all our users. In response to the recent terrorist attack on Israel by Hamas, we've redistributed… https://t.co/VR2rsK0J9K

— Linda Yaccarino (@lindayaX) October 12, 2023

This article originally appeared on Engadget at https://www.engadget.com/x-ceo-responds-to-eu-officials-over-handling-of-israel-hamas-disinformation-103956726.html?src=rss

New York lawmakers are cracking down on kids' exposure to social media algorithms

A new bill out of New York is targeting the thing we all have a love-hate relationship with on social media: the algorithm. Governor Kathy Hochul joined lawmakers in introducing the Stop Addictive Feeds Exploitation (SAFE) for Kids Act, which would require a parent or guardian's consent to access algorithm-based feeds on platforms such as TikTok, YouTube and Instagram. In her statement of support, Hochul called for adults to protect their children and villainized algorithms as technology that "follows" and "preys" on young people.

Lawmakers pointed to a range of studies demonstrating social media's association with poor mental health and sleep quality in young people — especially with excessive use. "Social media platforms are fueling a national youth mental health crisis that is harming children's wellbeing and safety," New York State Attorney General Letitia James said. "Young New Yorkers are struggling with record levels of anxiety and depression, and social media companies that use addictive features to keep minors on their platforms longer are largely to blame. This legislation will help tackle the risks of social media affecting our children and protect their privacy."

While pages like TikTok's For You face restrictions, the legislation would allow young people to view content from people they follow without permission. This setup means they can still see accounts with dangerous misinformation or ideals — such as promoting harmful eating habits — as long as they click the follow button. However, the law would also allow parents or guardians to limit the number of hours a person can spend on each app and to restrict access and notifications completely between midnight and 6 AM. Social media platforms that fail to enforce these policies could owe up to $5,000 in damages.

Lawmakers proposed an identical fine for violations of the New York Child Data Protection Act, which was introduced alongside the SAFE for Kids Act. This legislation would ban "collecting, using, sharing or selling" the personal data of anyone under 18 unless the company receives consent or can prove the data is absolutely necessary.

SAFE for Kids Act's sponsors, State Senator Andrew Gounardes and Assemblywoman Nily Rozic, could bring it before the New York legislature as soon as early 2024. The bill has already faced opposition from Meta and TikTok, as well as Tech:NYC, which represents more than 800 tech companies. Concerns range from restricting free speech to losing out on community-building.

The first state-led bill of this kind passed in Utah earlier this year, requiring anyone under the age of 18 to obtain a parent or guardian's consent to create a social media profile — not just to explore the algorithm. Arkansas enacted a similar law soon after, but a judge blocked it from taking effect in September. Utah's legislation is set to take effect in early 2024. Each of these laws would require more comprehensive age verification on the part of social media companies, likely involving review of an ID of some sort — not something every early adolescent has.

This article originally appeared on Engadget at https://www.engadget.com/new-york-lawmakers-are-cracking-down-on-kids-exposure-to-social-media-algorithms-095838157.html?src=rss

California’s new law makes it easier for consumers to request the deletion of their data

California is officially the first state to pass a law streamlining personal data removal. On October 10, Governor Gavin Newsom signed SB 362, known as the Delete Act, into law, requiring the California Privacy Protection Agency (CPPA) to create and roll out a tool allowing state residents to request that all data brokers delete their information. There are nearly 500 registered data brokers in California.

Advocates for the bill painted it as a necessary protection. “Data brokers possess thousands of data points on each and every one of us, and they currently sell reproductive healthcare, geolocation, and purchasing data to the highest bidder,” Senator Josh Becker, author of the bill, said in a statement. “The Delete Act protects our most sensitive information.”

Current privacy laws allow Californians to make this request, but they must contact each company, and it can be denied. The CPPA has until 2026 to build the necessary system and has the authority to charge brokers to use it. Under the Delete Act, each broker must register with the CPPA and fulfill deletion requests every 45 days or risk facing a penalty such as a fine. Third-party compliance audits are set to begin in 2028 and occur every three years moving forward.

The Delete Act met opposition from organizations such as the Association of National Advertisers, which voiced concerns about schemes that charge consumers exorbitant amounts of money to delete their data, and about small businesses or non-profits being unable to reach their target audiences without this detailed information.

This article originally appeared on Engadget at https://www.engadget.com/californias-new-law-makes-it-easier-for-consumers-to-request-the-deletion-of-their-data-095555419.html?src=rss

California's 'right to repair' bill is now California's 'right to repair' law

California became just the third state in the nation to pass a "right to repair" consumer protection law on Tuesday, following Minnesota and New York, when Governor Gavin Newsom signed SB 244. The California Right to Repair bill had originally been introduced in 2019. It passed, nearly unanimously, through the state legislature in September. 

“This is a victory for consumers and the planet, and it just makes sense,” Jenn Engstrom, state director of CALPIRG, told iFixit (which was also one of SB 244's co-sponsors). “Right now, we mine the planet’s precious minerals, use them to make amazing phones and other electronics, ship these products across the world, and then toss them away after just a few years’ use ... We should make stuff that lasts and be able to fix our stuff when it breaks, and now thanks to years of advocacy, Californians will finally be able to, with the Right to Repair.”

Turns out Google isn't offering seven years of replacement parts and software updates to the Pixel 8 out of the goodness of its un-beating corporate heart. The new law directly stipulates that all electronics and appliances costing $50 or more, and sold within the state after July 1, 2021 (yup, two years ago), will be covered under the legislation once it goes into effect next year, on July 1, 2024. 

For gear and gadgets that cost between $50 and $99, device makers will have to stock replacement parts and tools, and maintain documentation for three years. Anything over $100 in value gets covered for the full seven-year term. Companies that fail to do so will be fined $1,000 per day on the first violation, $2,000 a day for the second and $5,000 per day per violation thereafter.

There are, of course, carve-outs and exceptions to the rules. No, your PS5 is not covered. Not even that new skinny one. None of the game consoles are, and neither are alarm systems or heavy industrial equipment that "vitally affects the general economy of the state, the public interest, and the public welfare."

“I’m thrilled that the Governor has signed the Right to Repair Act into law," State Senator Susan Talamantes Eggman, one of the bill's co-sponsors, said. "As I’ve said all along, I’m so grateful to the advocates fueling this movement with us for the past six years, and the manufacturers that have come along to support Californians’ Right to Repair. This is a common sense bill that will help small repair shops, give choice to consumers, and protect the environment.”

The bill even received support from Apple, of all companies. The tech giant famous for its "walled garden" product ecosystem had railed against the idea when it was previously proposed in Nebraska, claiming the state would become "a mecca for hackers." However, the company changed its tune when SB 244 was being debated, writing a letter of support reportedly stating, "We support 'SB 244' because it includes requirements that protect individual users' safety and security as well as product manufacturers' intellectual property."

This article originally appeared on Engadget at https://www.engadget.com/californias-right-to-repair-bill-is-now-californias-right-to-repair-law-232526782.html?src=rss

The NSA has a new security center specifically for guarding against AI

The National Security Agency (NSA) is starting a dedicated artificial intelligence security center, as reported by AP. This move comes after the government has begun to increasingly rely on AI, integrating multiple algorithms into defense and intelligence systems. The security center will work to protect these systems from theft and sabotage, in addition to safeguarding the country from external AI-based threats.

The NSA’s recent move toward AI security was announced Thursday by outgoing director General Paul Nakasone. He says that the division will operate underneath the umbrella of the pre-existing Cybersecurity Collaboration Center. This entity works with private industry and international partners to protect the US from cyberattacks stemming from China, Russia and other countries with active malware and hacking campaigns.

For instance, the agency issued an advisory this week suggesting that Chinese hackers have been targeting government, industrial and telecommunications outfits via hacked router firmware. There’s also the specter of election interference, though Nakasone says he’s yet to see any evidence of Russia or China trying to influence the 2024 US presidential election. Still, this has been a big problem in the past, and that was before the rapid proliferation of AI algorithms like the CIA’s recently announced chatbot.

As artificial intelligence threatens to boost the abilities of these bad actors, the US government will look to this new security division to keep up. The NSA decided to establish the unit after conducting a study that suggested poorly secured AI models pose a significant national security challenge. This has only been compounded by the rise of generative AI technologies, which the NSA points out can be used for both good and bad purposes.

Nakasone says the organization will become “NSA’s focal point for leveraging foreign intelligence insights, contributing to the development of best practices guidelines, principles, evaluation, methodology and risk frameworks” for both AI security and for the goal of secure development and adoption of artificial intelligence within “our national security systems and our defense industrial base.” To that end, the group will work hand-in-hand with industry leaders, science labs, academic institutions, international partners and, of course, the Department of Defense.

Nakasone is on his way out of the NSA and the US Cyber Command and he’ll be succeeded by his current deputy, Air Force Lt. Gen. Timothy Haugh. Nakasone has been at his post since 2018 and, by all accounts, has had quite a successful run of it.

This article originally appeared on Engadget at https://www.engadget.com/the-nsa-has-a-new-security-center-specifically-for-guarding-against-ai-180354146.html?src=rss

The Supreme Court will hear social media cases with immense free speech implications

On Friday, the US Supreme Court agreed to take on two landmark social media cases with enormous implications for online speech, as reported by The Washington Post. The conservative-dominated court will determine if laws passed by Texas and Florida are violating First Amendment rights by requiring social platforms to host content they would otherwise block.

Tech industry groups, including Meta, X (formerly Twitter) and Google, say the laws are unconstitutional and violate private companies’ First Amendment rights. “Telling private websites they must give equal treatment to extremist hate isn’t just unwise, it is unconstitutional, and we look forward to demonstrating that to the Court,” Matt Schruers of the Computer & Communications Industry Association (CCIA), one of the trade associations challenging the legislation, told The Washington Post. The CCIA called the order “encouraging.”

The groups representing the tech companies contesting the laws say platforms would be at legal risk for removing violent or hateful content, propaganda from hostile governments and spam. However, leaving the content online could be bad for their bottom lines as they would risk advertiser and user boycotts.

Supporters of the Republican-sponsored state laws claim that social media companies are biased against conservatives and are illegally censoring their views. “These massive corporate entities cannot continue to go unchecked as they silence the voices of millions of Americans,” said Texas Attorney General Ken Paxton (R), who recently survived an impeachment trial over accusations of abuse of office, bribery and corruption. Appeals courts (all with Republican-appointed judges) have issued conflicting rulings on the laws.

The US Supreme Court voted five to four in 2022 to put the Texas law on hold while the legal sparring continued. Justices John Roberts, Stephen Breyer, Sonia Sotomayor, Brett Kavanaugh and Amy Coney Barrett voted to prevent the law from taking effect. Meanwhile, Samuel Alito, Clarence Thomas, Elena Kagan and Neil Gorsuch dissented from the temporary hold. Alito (joined by Thomas and Gorsuch) said he hadn’t decided on the law’s constitutionality but would have let it stand in the interim. The dissenting Kagan didn’t sign off on Alito’s statement or provide separate reasoning.

The Biden administration is against the laws. “The act of culling and curating the content that users see is inherently expressive, even if the speech that is collected is almost wholly provided by users,” Solicitor General Elizabeth B. Prelogar said to the justices. “And especially because the covered platforms’ only products are displays of expressive content, a government requirement that they display different content — for example, by including content they wish to exclude or organizing content in a different way — plainly implicates the First Amendment.”

This article originally appeared on Engadget at https://www.engadget.com/the-supreme-court-will-hear-social-media-cases-with-immense-free-speech-implications-164302048.html?src=rss