Just days after President Joe Biden unveiled a sweeping executive order retasking the federal government with regard to AI development, Vice President Kamala Harris announced at the UK AI Safety Summit on Wednesday a half dozen more machine learning initiatives that the administration is undertaking. Among the highlights: the establishment of the United States AI Safety Institute, the first release of draft policy guidance on the federal government's use of AI and a declaration on responsible military applications of the emerging technology.
"President Biden and I believe that all leaders, from government, civil society, and the private sector have a moral, ethical, and societal duty to make sure AI is adopted and advanced in a way that protects the public from potential harm and ensures that everyone is able to enjoy its benefits,” Harris said in her prepared remarks.
"Just as AI has the potential to do profound good, it also has the potential to cause profound harm, from AI-enabled cyber-attacks at a scale beyond anything we have seen before to AI-formulated bioweapons that could endanger the lives of millions," she said. The existential threats that generative AI systems present was a central theme of the summit.
"To define AI safety we must consider and address the full spectrum of AI risk — threats to humanity as a whole, threats to individuals, to our communities and to our institutions, and threats to our most vulnerable populations," she continued. "To make sure AI is safe, we must manage all these dangers."
To that end, Harris announced Wednesday that the White House, in cooperation with the Department of Commerce, is establishing the United States AI Safety Institute (US AISI) within the National Institute of Standards and Technology (NIST). It will be responsible for actually creating and publishing all of the guidelines, benchmark tests, best practices and such for testing and evaluating potentially dangerous AI systems.
These tests could include the red-team exercises that President Biden had mentioned in his EO. The AISI would also be tasked with providing technical guidance to lawmakers and law enforcement on a wide range of AI-related topics, including identifying generated content, authenticating live-recorded content, mitigating AI-driven discrimination, and ensuring transparency in its use.
Additionally, the Office of Management and Budget (OMB) is set to release for public comment the administration's first draft policy guidance on government AI use later this week. Like the Blueprint for an AI Bill of Rights that it builds upon, the draft policy guidance outlines steps that the national government can take to "advance responsible AI innovation" while maintaining transparency and protecting federal workers from increased surveillance and job displacement. This draft guidance will eventually be used to establish safeguards for the use of AI in a broad swath of public sector applications, including transportation, immigration, health and education, so it is being made available for public comment at ai.gov/input.
Harris also announced during her remarks that the Political Declaration on the Responsible Use of Artificial Intelligence and Autonomy the US issued in February has collected 30 signatories to date, all of whom have agreed to a set of norms for responsible development and deployment of military AI systems. Just 165 nations to go! The administration is also launching a virtual hackathon in an effort to blunt the harm AI-empowered phone and internet scammers can inflict. Hackathon participants will work to build AI models that can counter robocalls and robotexts, especially those targeting elderly folks with generated voice scams.
Content authentication is a growing focus of the Biden-Harris administration. President Biden's EO explained that the Commerce Department will be spearheading efforts to validate content produced by the White House through a collaboration with the C2PA and other industry advocacy groups. They'll work to establish industry norms, such as the voluntary commitments previously extracted from 15 of the largest AI firms in Silicon Valley. In her remarks, Harris extended that call internationally, asking for support from all nations in developing global standards for authenticating government-produced content.
“These voluntary [company] commitments are an initial step toward a safer AI future, with more to come," she said. "As history has shown, in the absence of regulation and strong government oversight, some technology companies choose to prioritize profit over the wellbeing of their customers, the security of our communities and the stability of our democracies."
"One important way to address these challenges — in addition to the work we have already done — is through legislation — legislation that strengthens AI safety without stifling innovation," Harris continued.
This article originally appeared on Engadget at https://www.engadget.com/kamala-harris-announces-ai-safety-institute-to-protect-american-consumers-060011065.html?src=rss
The Biden Administration unveiled its ambitious next steps in addressing and regulating artificial intelligence development on Monday. Its expansive new executive order seeks to establish further protections for the public as well as improve best practices for federal agencies and their contractors.
"The President several months ago directed his team to pull every lever," a senior administration official told reporters on a recent press call. "That's what this order does, bringing the power of the federal government to bear in a wide range of areas to manage AI's risk and harness its benefits ... It stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world and like all executive orders, this one has the force of law."
These actions will be introduced over the next year, with smaller safety and security changes happening in around 90 days and more involved reporting and data transparency schemes requiring nine to 12 months to fully deploy. The administration is also creating an “AI council,” chaired by White House Deputy Chief of Staff Bruce Reed, who will meet with federal agency heads to ensure that the actions are being executed on schedule.
Public Safety
"In response to the President's leadership on the subject, 15 major American technology companies have begun their voluntary commitments to ensure that AI technology is safe, secure and trustworthy before releasing it to the public," the senior administration official said. "That is not enough."
The EO directs the establishment of new standards for AI safety and security, including reporting requirements for developers whose foundation models might impact national or economic security. Those requirements will also apply to the development of AI tools that autonomously implement security fixes on critical software infrastructure.
By leveraging the Defense Production Act, this EO will "require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests," per a White House press release. That information must be shared prior to the model being made available to the public, which could help reduce the rate at which companies unleash half-baked and potentially deadly machine learning products.
In addition to the sharing of red team test results, the EO also requires disclosure of the system’s training runs (essentially, its iterative development history). “What that does is that creates a space prior to the release… to verify that the system is safe and secure,” officials said.
Administration officials were quick to point out that this reporting requirement will not impact any AI models currently available on the market, nor will it impact independent or small- to medium-size AI companies moving forward, as the threshold for enforcement is quite high. It's geared specifically toward the next generation of AI systems that the likes of Google, Meta and OpenAI are already working on, with enforcement kicking in for models trained using more than 10^26 floating-point operations of total compute, a scale no existing AI model has reached. "This is not going to catch AI systems trained by graduate students, or even professors,” the administration official said.
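For a rough sense of where that compute threshold sits, here's a minimal sketch (our own illustration, not anything from the EO) that uses the common scaling-laws approximation of roughly six operations per model parameter per training token:

```python
# Hypothetical back-of-the-envelope check against the EO's reporting threshold.
# The 6 * parameters * tokens estimate is a standard heuristic from the
# scaling-laws literature, not language from the executive order itself.

EO_THRESHOLD_OPS = 1e26  # total training operations cited in the EO

def estimated_training_ops(n_parameters: float, n_tokens: float) -> float:
    """Rough total compute: ~6 operations per parameter per training token."""
    return 6 * n_parameters * n_tokens

def must_report(n_parameters: float, n_tokens: float) -> bool:
    """Would a training run of this size cross the reporting line?"""
    return estimated_training_ops(n_parameters, n_tokens) >= EO_THRESHOLD_OPS

# A hypothetical 1-trillion-parameter model trained on 10 trillion tokens
# comes out around 6e25 operations, still under the 1e26 threshold.
print(must_report(1e12, 1e13))  # False
```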
What's more, the EO will encourage the Departments of Energy and Homeland Security to address AI threats "to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks," per the release. "Agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI." In short, any developers found in violation of the EO can likely expect a prompt and unpleasant visit from the DoE, FDA, EPA or other applicable regulatory agency, regardless of their AI model’s age or processing speed.
In an effort to proactively address the decrepit state of America's digital infrastructure, the order also seeks to establish a cybersecurity program, based loosely on the administration's existing AI Cyber Challenge, to develop AI tools that can autonomously root out and shore up security vulnerabilities in critical software infrastructure. It remains to be seen whether those systems will be able to address the concerns about misbehaving models that SEC head Gary Gensler recently raised.
AI Watermarking and Cryptographic Validation
We're already seeing the normalization of deepfake trickery and AI-empowered disinformation on the campaign trail. So, the White House is taking steps to ensure that the public can trust the text, audio and video content that it publishes on its official channels. The public must be able to easily validate whether the content they see is AI-generated or not, argued White House officials on the press call.
The Department of Commerce is in charge of that effort and is expected to work closely with existing industry advocacy groups like the C2PA and its sister organization, the CAI, to develop and implement a watermarking system for federal agencies. “We aim to support and facilitate and help standardize that work [by the C2PA],” administration officials said. “We see ourselves as plugging into that ecosystem.”
Officials further explained that the government is supporting the underlying technical standards and practices that will lead to digital watermarking’s wider adoption — similar to the work it did around developing the HTTPS ecosystem and getting both developers and the public on board with it. This will help federal officials achieve their other goal of ensuring that the government's official messaging can be relied upon.
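The C2PA's Content Credentials system is far more elaborate, but the cryptographic core of content validation is simple to sketch: the publisher signs a hash of the content, and anyone with the matching public key can confirm it hasn't been altered. Here's a minimal illustration (our own simplification using the third-party Python cryptography package, not the actual C2PA manifest format):

```python
# A toy version of cryptographic content validation, not the C2PA spec.
# Requires the third-party `cryptography` package (pip install cryptography).
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_content(private_key: Ed25519PrivateKey, content: bytes) -> bytes:
    """Sign the SHA-256 digest of the content with the publisher's key."""
    return private_key.sign(hashlib.sha256(content).digest())

def verify_content(public_key, content: bytes, signature: bytes) -> bool:
    """Return True only if the content still matches the signature."""
    try:
        public_key.verify(signature, hashlib.sha256(content).digest())
        return True
    except InvalidSignature:
        return False

key = Ed25519PrivateKey.generate()
release = b"Official statement: ..."
sig = sign_content(key, release)
print(verify_content(key.public_key(), release, sig))         # True
print(verify_content(key.public_key(), release + b"!", sig))  # False
```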
Civil Rights and Consumer Protections
The Blueprint for an AI Bill of Rights that the White House released last October directed agencies to “combat algorithmic discrimination while enforcing existing authorities to protect people's rights and safety,” the administration official said. “But there's more to do.”
The new EO will require guidance be extended to “landlords, federal benefits programs and federal contractors” to prevent AI systems from exacerbating discrimination within their spheres of influence. It will also direct the Department of Justice to develop best practices for investigating and prosecuting civil rights violations related to AI, as well as, per the announcement, “the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis."
Additionally, the EO calls for prioritizing federal support to accelerate development of privacy-preserving techniques that would enable future LLMs to be trained on large datasets without the current risk of leaking personal details that those datasets might contain. These solutions could include “cryptographic tools that preserve individuals’ privacy,” per the White House release, developed with assistance from the Research Coordination Network and National Science Foundation. The executive order also reiterates its calls for bipartisan legislation from Congress addressing the broader privacy issues that AI systems present for consumers.
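The release doesn't name specific techniques, but differential privacy is one well-established example of the genre: it answers aggregate questions about a dataset while mathematically limiting what any single person's record can reveal. A minimal sketch of its classic Laplace mechanism (our illustration, not drawn from the EO):

```python
# Toy Laplace mechanism for a differentially private count query.
import random

def dp_count(records, predicate, epsilon=0.5):
    """Noisy count: the true count plus Laplace(0, 1/epsilon) noise.
    A count has sensitivity 1 (one person shifts it by at most 1),
    so this scale of noise yields epsilon-differential privacy."""
    true_count = sum(1 for r in records if predicate(r))
    # The difference of two Exp(epsilon) draws is Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

ages = [34, 29, 41, 58, 23, 47]
print(dp_count(ages, lambda a: a > 40))  # roughly 3, plus noise
```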
In terms of healthcare, the EO states that the Department of Health and Human Services will establish a safety program that tracks and remedies unsafe, AI-based medical practices. Educators will also see support from the federal government in using AI-based educational tools like personalized chatbot tutoring.
Worker Protections
The Biden administration concedes that while the AI revolution is a decided boon for business, its capabilities make it a threat to worker security through job displacement and intrusive workplace surveillance. The EO seeks to address these issues with “the development of principles and employer best practices that mitigate the harms and maximize the benefit of AI for workers,” an administration official said. “We encourage federal agencies to adopt these guidelines in the administration of their programs.”
The EO will also direct the Department of Labor and the Council of Economic Advisers to study both how AI might impact the labor market and how the federal government might better support workers “facing labor disruption” moving forward. Administration officials also pointed to the potential benefits that AI might bring to the federal bureaucracy, including cutting costs and increasing cybersecurity efficacy. “There's a lot of opportunity here, but we have to ensure the responsible government development and deployment of AI,” an administration official said.
To that end, the administration is launching on Monday a new federal jobs portal, AI.gov, which will offer information and guidance on available fellowship programs for folks looking for work with the federal government. “We're trying to get more AI talent across the board,” an administration official said. “Programs like the US Digital Service, the Presidential Innovation Fellowship and USA jobs — doing as much as we can to get talent in the door.” The White House is also looking to expand existing immigration rules to streamline visa criteria, interviews and reviews for folks trying to move to and work in the US in these advanced industries.
The White House reportedly did not give the industry a preview of this particular swath of radical policy changes, though administration officials did note that they had already been collaborating extensively with AI companies on many of these issues. The Senate held its second AI Insight Forum event last week on Capitol Hill, while Vice President Kamala Harris is scheduled to speak at the UK AI Safety Summit, hosted by Prime Minister Rishi Sunak, later this week.
At a Washington Post event on Thursday, Senate Majority Leader Charles Schumer (D-NY) was already arguing that the executive order did not go far enough and could not be considered an effective replacement for congressional action, which, to date, has been slow in coming.
“There’s probably a limit to what you can do by executive order,” Schumer told WaPo. “They [the Biden Administration] are concerned, and they’re doing a lot regulatorily, but everyone admits the only real answer is legislative.”
This article originally appeared on Engadget at https://www.engadget.com/sweeping-white-house-ai-executive-order-takes-aim-at-the-technologys-toughest-challenges-090008655.html?src=rss
Senator Charles Schumer (D-NY) once again played host to Silicon Valley’s AI leaders on Tuesday as the US Senate reconvened its AI Insights Forum for a second time. On the guest list this go around: manifesto enthusiast Marc Andreessen and venture capitalist John Doerr, as well as Max Tegmark of the Future of Life Institute and NAACP CEO Derrick Johnson. On the agenda: “the transformational innovation that pushes the boundaries of medicine, energy, and science, and the sustainable innovation necessary to drive advancements in security, accountability, and transparency in AI,” according to a release from Sen. Schumer’s office.
Upon exiting the meeting Tuesday, Schumer told the assembled press, "it is clear that American leadership on AI can’t be done on the cheap. Almost all of the experts in today’s Forum called for robust, sustained federal investment in private and public sectors to achieve our goals of American-led transformative and sustainable innovation in AI."
Per estimates from the National Security Commission on Artificial Intelligence, that could cost around $32 billion a year. However, Schumer believes that those funding challenges can be addressed by "leveraging the private sector by employing new and innovative funding mechanisms – like the Grand Challenges prize idea."
"We must prioritize transformational innovation, to help create new vistas, unlock new cures, improve education, reinforce national security, protect the global food supply, and more," Schumer remarked. But in doing so, we must act sustainably in order to minimize harms to workers, civil society and the environment. "We need to strike a balance between transformational and sustainable innovation," Schumer said. "Finding this balance will be key to our success."
Senators Brian Schatz (D-HI) and John Kennedy (R-LA) also got in on the proposed regulatory action Tuesday, introducing legislation that would provide more transparency on AI-generated content by requiring clear labeling and disclosures. Such technology could resemble the Content Credentials tag that the C2PA and CAI industry advocacy groups are developing.
"Our bill is simple," Senator Schatz said in a press statement. "If any content is made by artificial intelligence, it should be labeled so that people are aware and aren’t fooled or scammed.”
The Schatz-Kennedy AI Labeling Act, as they're calling it, would require generative AI system developers to clearly and conspicuously disclose AI-generated content to users. Those developers, and their licensees, would also have to take "reasonable steps" to prevent "systematic publication of content without disclosures." The bill would also establish a working group to create non-binding technical standards to help social media platforms automatically identify such content as well.
“It puts the onus where it belongs: on the companies and not the consumers,” Schatz said on the Senate floor Tuesday. “Labels will help people to be informed. They will also help companies using AI to build trust in their content.”
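The bill doesn't prescribe an implementation, but the mechanics it describes could be as simple as pairing every generated artifact with a machine-readable flag and a human-readable notice. A hypothetical sketch (every name below is ours, not from the bill text):

```python
# Hypothetical disclosure wrapper of the sort the AI Labeling Act envisions.
from dataclasses import dataclass

DISCLOSURE = "This content was generated by artificial intelligence."

@dataclass
class LabeledContent:
    body: str           # the generated text, image reference, etc.
    ai_generated: bool  # machine-readable flag platforms could key on
    notice: str         # the "clear and conspicuous" human-readable label

def label_output(generated: str) -> LabeledContent:
    """Attach both forms of disclosure to a model's output."""
    return LabeledContent(body=generated, ai_generated=True, notice=DISCLOSURE)

post = label_output("A sunny afternoon on Capitol Hill ...")
print(f"{post.notice}\n\n{post.body}")
```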
Tuesday’s meeting follows the recent introduction of new AI legislation, dubbed the Artificial Intelligence Advancement Act of 2023 (S. 3050). Senators Martin Heinrich (D-NM), Mike Rounds (R-SD), Charles Schumer (D-NY) and Todd Young (R-IN) all co-sponsored the bill. The bill proposes AI bug bounty programs and would require a vulnerability analysis study for AI-enabled military applications. Its passage into law would also launch a report into AI regulation in the financial services industry (which the head of the SEC had recently been lamenting) as well as a second report on data sharing and coordination.
“It’s frankly a hard challenge,” SEC Chairman Gary Gensler told The Financial Times recently, speaking on the challenges the financial industry faces in AI adoption and regulation. “It’s a hard financial stability issue to address because most of our regulation is about individual institutions, individual banks, individual money market funds, individual brokers; it’s just in the nature of what we do.”
"Working people are fighting back against artificial intelligence and other technology used to eliminate workers or undermine and exploit us," AFL-CIO President Liz Shuler said at the conclusion of Tuesday's forum. "If we fail to involve workers and unions across the entire innovation process, AI will curtail our rights, threaten good jobs and undermine our democracy. But the responsible adoption of AI, properly regulated, has the potential to create opportunity, improve working conditions and build prosperity."
The forums are part of Senator Schumer’s SAFE Innovation Framework, which his office debuted in June. “The US must lead in innovation and write the rules of the road on AI and not let adversaries like the Chinese Communist Party craft the standards for a technology set to become as transformative as electricity,” the program announcement reads.
While Andreessen calls for AI advancement at any cost and Tegmark continues to advocate for a developmental “time out,” rank and file AI industry workers are also fighting to make their voices heard ahead of the forum. On Monday, a group of employees from two dozen leading AI firms published an open letter to Senator Schumer, demanding Congress take action to safeguard their livelihoods from the “dystopian future” that Andreessen’s screed, for example, would require.
“Establishing robust protections related to workplace technology and rebalancing power between workers and employers could reorient the economy and tech innovation toward more equitable and sustainable outcomes,” the letter authors argue.
Senator Ed Markey (D-MA) and Representative Pramila Jayapal (WA-07) had, the previous month, called on leading AI companies to “answer for the working conditions of their data workers, laborers who are often paid low wages and provided no benefits but keep AI products online.”
"We covered a lot of good ground today, and I think we’ll all be walking out of the room with a deeper understanding of how to approach American-led AI innovation," Schumer said Tueseay. "We’ll continue this conversation in weeks and months to come – in more forums like this and committee hearings in Congress – as we work to develop comprehensive, bipartisan AI legislation."
This article originally appeared on Engadget at https://www.engadget.com/the-us-senate-and-silicon-valley-reconvene-for-a-second-ai-insight-forum-143128622.html?src=rss
Currently, bad actors are using AI to steal people's voices and repurpose them in calls to loved ones — often feigning a state of distress. This advancement goes beyond seemingly real calls from banks and credit card companies, providing a disturbing and jarring experience: not knowing if you're speaking to someone you know.
The financial repercussions (not to mention the potential mental distress) are tremendous. Senator Ben Ray Luján, chair of the subcommittee, estimates that individuals nationwide receive 1.5 billion to 3 billion scam calls monthly, and that such schemes defrauded Americans out of $39 billion in 2022. Those losses have continued despite the Telephone Robocall Abuse Criminal Enforcement (TRACED) Act of 2019, which expanded the government's power to prosecute scam callers and made it easier for individuals to block them.
In fact, much of the blame for this continued issue has been collectively placed on government agencies like the Federal Communications Commission (FCC). "FCC enforcement actions are not sufficient to make a meaningful difference in these illegal calls. U.S.-based providers continue to spurn the Commission's requirements to respond to traceback requests," Margot Saunders, a senior attorney at the National Consumer Law Center, said in her testimony to the subcommittee. "The fines issued against some of the most egregious fraudsters have not been recovered, which undermines the intended deterrent effect of imposing these fines. Yet the Commission has referred only three forfeiture orders to the Department of Justice related to unwanted calls since the FCC began TRACED Act reporting in 2020."
Saunders called on the FCC to issue clearer guidance on existing regulations and harsher penalties (namely suspension) on complicit voice service providers. She further expressed the need for explicit consent requirements in order for individuals to be contacted.
Mike Rudolph, chief technology officer at robocall-blocking firm YouMail, pitched the idea of using AI to flag insufficient information in the FCC's Robocall Mitigation Database. Instead of properly completing and filing the required information, some phone providers submit blank or irrelevant paperwork, avoiding accountability for their (lack of) action.
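Even before reaching for machine learning, the kind of screening Rudolph described can be sketched as a simple completeness check that flags filings too thin to be a real mitigation plan. (The field names and threshold below are invented for illustration; the FCC has published no such tool.)

```python
# Hypothetical screen for blank or insubstantial Robocall Mitigation
# Database filings. Field names and the word-count floor are invented.
REQUIRED_FIELDS = ("provider_name", "mitigation_plan", "contact_email")
MIN_PLAN_WORDS = 50  # arbitrary floor; a real system might use an ML classifier

def flag_filing(filing: dict) -> list:
    """Return a list of problems found in a filing; an empty list passes."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS if not filing.get(f)]
    if len(filing.get("mitigation_plan", "").split()) < MIN_PLAN_WORDS:
        problems.append("mitigation plan too short to be substantive")
    return problems

print(flag_filing({"provider_name": "ExampleTel", "mitigation_plan": "N/A"}))
# ['missing field: contact_email', 'mitigation plan too short to be substantive']
```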
This article originally appeared on Engadget at https://www.engadget.com/us-senate-begins-collecting-evidence-on-how-ai-could-thwart-robocalls-102553733.html?src=rss
The Biden administration and the US Commerce Department just named 31 regions as "tech hubs," drawn from nearly 400 applicants. These hub areas are spread across the country, as well as in territories like Puerto Rico, and each spot could share in $500 million of funding as originally detailed in the CHIPS and Science Act that was signed into law back in 2022.
The administration hopes to use these hubs to “catalyze investment in technologies critical to economic growth, national security and job creation” with an end goal of helping “communities across the country become centers of innovation critical to American competitiveness.” Additionally, Commerce Secretary Gina Raimondo told reporters that the program seeks to diversify the country’s tech interests, moving away from traditional hubs like Silicon Valley, Seattle and Boston, as reported by Yahoo.
To that end, these hubs will focus on everything under the sun, from artificial intelligence and biotech to clean energy, semiconductors and quantum computing. Examples include a hub in Washington state that’s developing new materials for next-gen fuel-efficient aircraft, a Wisconsin program seeking to make advancements in personalized medicine and a New York organization researching new battery technologies, among 28 others. It’s worth noting that many of these hubs are in small or medium-sized cities, with Raimondo saying that “people shouldn't have to move to get a good job.”
There’s one caveat. Snagging one of these coveted hub designations doesn’t guarantee federal funding. The Commerce Department will track each program's progress over the next year, with funding decisions to follow. Raimondo says that five to 10 hubs will receive up to $75 million. With 31 hub areas and just $500 million to disperse, that could leave many locations in the financial cold.
Additionally, the CHIPS and Science Act is a robust piece of legislation that drops more than $280 billion into various sectors, so these hubs represent less than 1/500th of the allocated funding set aside by the bill. There’s $52 billion in tax credits and funding for US chipmakers to expand domestic production, $7 billion for clean hydrogen and $1.5 billion to “boost US leadership in wireless technologies and their supply chains.” The bill also sets aside $10 billion to “invest in regional innovation and technology,” which is the exact point of these hubs, so maybe more money is coming down the line.
Biden has asked Congress for an additional $4 billion to fund even more regional tech hubs, but, well, that would be part of the full-year budget and you may have noticed that the House still lacks a speaker with a government shutdown on the horizon.
This article originally appeared on Engadget at https://www.engadget.com/biden-administration-designates-31-new-tech-hubs-to-encourage-innovation-155812340.html?src=rss
As expected, the Supreme Court will weigh in on a controversial case attempting to limit contact between federal officials and social media companies. The case could have sweeping implications for how social media companies make policy and content moderation decisions.
The case stems from a lawsuit, brought by the attorneys general of Missouri and Louisiana, that alleges Biden Administration officials, the CDC and FBI overreached in their dealings with Meta, Google and Twitter as the companies responded to pandemic- and election-related misinformation. A lower court previously issued an injunction that severely limited government officials’ ability to communicate with social media companies, though some restrictions were later relaxed.
Now, with the Supreme Court agreeing to hear the government’s appeal in the case, the entire lower court order remains on hold. As The New York Times notes, three justices, Samuel Alito, Clarence Thomas and Neil Gorsuch, dissented, calling the decision to allow the lower court order to remain paused “highly disturbing.”
It’s not the only case involving free speech and social media on the Supreme Court docket this term. The court will also take on two landmark cases that could reshape how social media platforms enforce content moderation rules. Those cases involve two state laws, in Texas and Florida, that would prevent social media companies from removing certain types of posts.
This article originally appeared on Engadget at https://www.engadget.com/the-supreme-court-will-hear-case-on-governments-contacts-with-social-media-companies-224551081.html?src=rss
As expected, the commissioners of the Federal Communications Commission voted along party lines to move forward with a plan to largely restore Obama-era net neutrality protections. All three of the agency's Democratic commissioners voted in favor of the Notice of Proposed Rulemaking (PDF), with the two Republican commissioners dissenting.
FCC Chairwoman Jessica Rosenworcel, who has long supported net neutrality rules, last month announced a proposal to reclassify fixed broadband as an essential communications service under Title II of the Communications Act of 1934. It also aims to reclassify mobile broadband as a commercial mobile service.
If broadband is reclassified in this way, the FCC would have greater scope to regulate it in a similar way to how water, power and phone services are overseen. As such, it would have more leeway to re-establish net neutrality rules.
Supporters believe that net neutrality protections are fundamental to an open and equitable internet. When such rules are in place, internet service providers have to provide users with access to every site, every piece of content and every app at the same speed and under the same conditions. They can't block or give preference to any content, and they're not allowed to, for instance, charge video streaming services for faster service.
"The proposed net neutrality rules will ensure that all viewpoints, including those with which I disagree, are heard," Commissioner Anna Gomez, who was sworn in as the panel's third Democratic member in September, said ahead of the vote. "Moreso, these principles protect consumers while also maintaining a healthy, competitive broadband internet ecosystem. Because we know that competition is required for access to a healthy, open internet that is accessible to all."
On the other hand, critics say that net neutrality rules are unnecessary. "Since the FCC’s 2017 decision to return the Internet to the same successful and bipartisan regulatory framework under which it thrived for decades, broadband speeds in the U.S. have increased, prices are down, competition has intensified, and record-breaking new broadband builds have brought millions of Americans across the digital divide," Brendan Carr, the senior Republican on the FCC, said in a statement. "The Internet is not broken and the FCC does not need Title II to fix it. I would encourage the agency to reverse course and focus on the important issues that Congress has authorized the FCC to advance."
Restoring previous net neutrality rules (which the Trump administration overturned in 2017) has been part of President Joe Biden's agenda for several years. However, until Gomez was sworn in, the FCC was deadlocked, leaving that goal in limbo until now.
The FCC suggests that reclassification will grant it more authority to "safeguard national security, advance public safety, protect consumers and facilitate broadband deployment." In addition, the agency wants to "reestablish a uniform, national regulatory approach to protect the open internet" and stop ISPs from "engaging in practices harmful to consumers."
The FCC will now seek comment on the proposal with members of the public and stakeholders (such as ISPs) having the chance to weigh in on the agency's plan. After reviewing and possibly implementing feedback, the FCC is then expected to issue a final rule on the reclassification of broadband internet access. As the Electronic Frontier Foundation points out, this means net neutrality protections could be restored as soon as next spring.
It's still not a sure thing that net neutrality protections will return, however. The implementation of revived rules could face legal challenges from the telecom industry. It may also take quite some time for the FCC to carry out the rulemaking process, which may complicate matters given that we're going into a presidential election year.
Nevertheless, net neutrality is a major priority for the fully staffed commission under Rosenworcel. “We’re laserlike focused on getting this rulemaking process started, then we're going to review the record, and my hope is we'll be able to move to order," the FCC chair told The Washington Post.
This article originally appeared on Engadget at https://www.engadget.com/fcc-moves-forward-with-its-plan-to-restore-net-neutrality-protections-154431460.html?src=rss
It looks like the Internal Revenue Service (IRS) truly was working on a free TurboTax alternative, as earlier reports had claimed. The US tax authority has announced that it will start pilot testing its new Direct File program for the 2024 filing season, though it will initially be available only to select taxpayers in 13 states. During its pilot period, Direct File will only cover individual federal tax returns and won't have the capability to prepare people's state returns. That's why nine of the 13 states testing it — namely Alaska, Florida, New Hampshire, Nevada, South Dakota, Tennessee, Texas, Washington and Wyoming — don't levy state income taxes.
Arizona, California, Massachusetts and New York, the other four states in the list, worked with the IRS to integrate their state taxes into the Direct File system for 2024. The IRS says it invited all states to join the pilot program, but not all of them were in a position to participate "at this time." In addition to being only available in certain locations, Direct File will only be accessible by people with "relatively simple returns" at the beginning. It will cover W-2 wages and tax credits like the Earned Income Tax Credit and the Child Tax Credit, for instance, but it will not cover self-employment income and itemized deductions. However, the agency is still finalizing the tax scope for the pilot, so it could still change over the coming months.
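Taken together, the pilot's eligibility rules reduce to a couple of checks. Here's a hypothetical sketch of that logic (the IRS has published no such API, and the tax scope may still change):

```python
# Hypothetical Direct File pilot eligibility check based on the rules above.
PILOT_STATES = {"AK", "FL", "NH", "NV", "SD", "TN", "TX", "WA", "WY",
                "AZ", "CA", "MA", "NY"}
# W-2 wages are in scope; self-employment income and itemized
# deductions are explicitly out of scope for the pilot.
SUPPORTED_INCOME = {"w2_wages"}

def direct_file_eligible(state: str, income_types: set) -> bool:
    """True if the filer is in a pilot state with only supported income."""
    return state in PILOT_STATES and income_types <= SUPPORTED_INCOME

print(direct_file_eligible("TX", {"w2_wages"}))         # True
print(direct_file_eligible("TX", {"self_employment"}))  # False
print(direct_file_eligible("OR", {"w2_wages"}))         # False
```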
Based on the screenshots the IRS shared with The Washington Post, taxpayers will only have to answer a questionnaire to be able to file their taxes directly, simplifying the process without having to pay for a third-party service. An IRS official told the publication that select eligible taxpayers in the aforementioned states will start getting invitations to use the service sometime around mid-February next year. The agency says it will begin with a small group of taxpayers before expanding access to more and more people as the filing season for the 2023 federal tax return progresses.
"This is a critical step forward for this innovative effort that will test the feasibility of providing taxpayers a new option to file their returns for free directly with the IRS," IRS Commissioner Danny Werfel said in a statement. "In this limited pilot for 2024, we'll be working closely with the states that have agreed to participate in an important test run of the state integration. This will help us gather important information about the future direction of the Direct File program."
The IRS is hoping to gather data and feedback during the pilot to be able to analyze how effective Direct File is. It's also hoping to identify areas of improvement for a "potential large-scale launch in the future."
This article originally appeared on Engadget at https://www.engadget.com/irs-will-start-piloting-its-free-turbotax-alternative-in-2024-065553528.html?src=rss
The EPA is withdrawing its plan to require states to assess the cybersecurity and integrity of public water system programs. While the agency says it continues to believe cybersecurity protective measures are essential for the public water industry, the decision was made after GOP-led states sued the agency for proposing the rule.
In a memo that accompanied the new rules in March, the EPA said that cybersecurity attacks on water and wastewater systems “have the potential to disable or contaminate the delivery of drinking water to consumers and other essential facilities like hospitals.” Despite the EPA’s willingness to provide training and technical support to help states and public water system organizations implement cybersecurity surveys, the move garnered opposition from both GOP state attorneys and trade groups.
Republican state attorneys general who opposed the new proposed policies said that the call for new inspections could overwhelm state regulators. The attorneys general of Arkansas, Iowa and Missouri all sued the EPA – claiming the agency had no authority to set these requirements. This led to the EPA’s proposal being temporarily blocked back in June.
While it's unclear if any cybersecurity regulations will be put in motion to protect the public moving forward, the EPA said it plans to continue working with the industry to “lower cybersecurity risks to clean and safe water.” It encourages all states to “voluntarily review” the cybersecurity of their water systems, noting that proactive action might curb the potential public health impacts if a hack were to take place.
This article originally appeared on Engadget at https://www.engadget.com/the-epa-wont-force-water-utilities-to-inspect-their-cyber-defenses-232301497.html?src=rss
California became just the third state in the nation to enact a "right to repair" consumer protection law on Tuesday, following Minnesota and New York, when Governor Gavin Newsom signed SB 244. The California Right to Repair bill had originally been introduced in 2019. It passed, nearly unanimously, through the state legislature in September.
“This is a victory for consumers and the planet, and it just makes sense,” Jenn Engstrom, state director of CALPIRG, told iFixit (which was also one of SB 244's co-sponsors). “Right now, we mine the planet’s precious minerals, use them to make amazing phones and other electronics, ship these products across the world, and then toss them away after just a few years’ use ... We should make stuff that lasts and be able to fix our stuff when it breaks, and now thanks to years of advocacy, Californians will finally be able to, with the Right to Repair.”
Turns out Google isn't offering seven years of replacement parts and software updates to the Pixel 8 out of the goodness of its un-beating corporate heart. The new law directly stipulates that all electronics and appliances costing $50 or more, and sold within the state after July 1, 2021 (yup, two years ago), will be covered under the legislation once it goes into effect next year, on July 1, 2024.
For gear and gadgets that cost between $50 and $99, device makers will have to stock replacement parts and tools, and maintain documentation for three years. Anything over $100 in value gets covered for the full seven-year term. Companies that fail to do so will be fined $1,000 per day on the first violation, $2,000 a day for the second and $5,000 per day per violation thereafter.
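That schedule is easy to express as code. A quick sketch (the function names are ours, and since the text above is ambiguous about the exact $100 boundary, we treat $100 and up as the seven-year tier):

```python
# Hypothetical encoding of SB 244's support terms and penalty schedule.
def parts_support_years(price_usd: float) -> int:
    """Years that parts, tools and documentation must stay available."""
    if price_usd < 50:
        return 0  # below the law's $50 threshold
    return 3 if price_usd < 100 else 7

def daily_fine(violation_number: int) -> int:
    """$1,000/day for the first violation, $2,000/day for the second,
    $5,000/day per violation thereafter."""
    return {1: 1000, 2: 2000}.get(violation_number, 5000)

print(parts_support_years(79.99))  # 3
print(parts_support_years(999.0))  # 7
print(daily_fine(3))               # 5000
```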
“I’m thrilled that the Governor has signed the Right to Repair Act into law," State Senator Susan Talamantes Eggman, one of the bill's co-sponsors, said. "As I’ve said all along, I’m so grateful to the advocates fueling this movement with us for the past six years, and the manufacturers that have come along to support Californians’ Right to Repair. This is a common sense bill that will help small repair shops, give choice to consumers, and protect the environment.”
The bill even received support from Apple, of all companies. The tech giant famous for its "walled garden" product ecosystem had railed against the idea when it was previously proposed in Nebraska, claiming the state would become "a mecca for hackers." However, the company changed its tune when SB 244 was being debated, writing a letter of support reportedly stating, "We support 'SB 244' because it includes requirements that protect individual users' safety and security as well as product manufacturers' intellectual property."
This article originally appeared on Engadget at https://www.engadget.com/californias-right-to-repair-bill-is-now-californias-right-to-repair-law-232526782.html?src=rss