Posts with «government» label

TikTok creators sue Montana over statewide ban of the app

One day after Montana's governor signed the first statewide ban on TikTok into law, the measure is already facing a legal challenge. Five TikTok creators are suing in an effort to block the ban from taking effect.

In court filings, lawyers representing the TikTok creators say the ban is unconstitutional and that it violates their First Amendment rights. They also take issue with Montana’s supposed national security justification for the ban. 

“Montana has no authority to enact laws advancing what it believes should be the United States’ foreign policy or its national security interests, nor may Montana ban an entire forum for communication based on its perceptions that some speech shared through that forum, though protected by the First Amendment, is dangerous,” the suit states. “Montana can no more ban its residents from viewing or posting to TikTok than it could ban the Wall Street Journal because of who owns it or the ideas it publishes.”

In an interview on Fox News, Montana’s Attorney General, Austin Knudsen, said that legal challenges to the ban were expected. “There are some important issues here that I do think we probably need the federal courts to step in and answer for us here,” he said. “And that was part of our calculus in bringing this.”

The lawsuit is among the first legal challenges to the law, and will likely be closely watched as federal officials consider a nationwide ban on the app. Right now, the Montana ban is set to take effect January 1, 2024, though lawsuits challenging it could delay that timeline. TikTok itself hasn’t commented on whether it’s planning to bring its own litigation in Montana, but said in a statement following the bill’s signing that it planned “to defend the rights of our users inside and outside of Montana.”

This article originally appeared on Engadget at https://www.engadget.com/tiktok-creators-sue-montana-over-statewide-ban-of-the-app-225725851.html?src=rss

The IRS reportedly has a free TurboTax alternative in the works

Doing your taxes in the United States can be famously convoluted. It can also be expensive: on top of paying their tax bills, Americans who have more complicated finances often have to pay for software to help them navigate the US tax code. That might change soon: a report from the Washington Post says that the Internal Revenue Service is preparing to roll out a free direct filing system that will allow Americans to complete their taxes digitally.

The first version of the direct filing system could be available as soon as next year, according to the report, with a pilot program launching for a small group of taxpayers in January of 2024. That would arrive just a year after the IRS publicly started exploring the option, when the tax agency tapped the New America think tank to help explore the feasibility of an agency-run filing program. That effort was kicked off in February of this year, after the Inflation Reduction Act earmarked $15 million to the IRS to research a "multi-lingual and mobile-friendly" free direct e-file system.

That focus on a user-friendly system might be the point. The IRS already offers a Free File Online tool, but according to the Government Accountability Office, it's used by less than 3% of eligible taxpayers. If the program is a success, it could make filing taxes easier and more affordable for millions of Americans. If not? Well, TurboTax and H&R Block probably aren't going anywhere. After all, the US tax prep and filing industry is still worth about $14 billion.

This article originally appeared on Engadget at https://www.engadget.com/the-irs-reportedly-has-a-free-turbotax-alternative-in-the-works-191527170.html?src=rss

New York State AG proposes broad regulations for the cryptocurrency industry

New York State may soon have its own legislation to prevent crypto scandals on par with FTX's downfall. Attorney General Letitia James has proposed a law, the CRPTO Act (Crypto Regulation, Protection, Transparency and Oversight), that's meant to thwart cryptocurrency fraud and protect investors. Whether or not it's the "strongest and most comprehensive" set of crypto regulations that James touts, it would theoretically prevent repeats of some high-profile incidents.

The CRPTO Act would bar conflicts of interest, such as owning multiple practices or marketplaces that trade for their own accounts. Companies would have to publicly report financial statements, including risk disclosures. There would be a host of investor safeguards, such as "know-your-customer" requirements, compensation for fraud victims and a ban on stablecoins (crypto coins whose value is tied to a safe asset) that aren't pegged directly to US currency or "high-quality" liquid assets.

The bill would let the Attorney General's office shut down lawbreakers and issue fines of $10,000 per violation for individuals and $100,000 per violation for companies. The office would also have the power to issue subpoenas and demand damages, penalties and restitution. The Department of Financial Services, meanwhile, would be ensured authority to license various crypto service providers.

James pointed to multiple real-world examples of alleged abuse the CRPTO Act would potentially stop. Terraform Labs, for instance, promised a very high 20 percent interest rate to investors in one token on its marketplace if they bought the company's other token, supposedly hiding the assets' real value. Celsius, meanwhile, bought up its own token and created an artificial appearance of demand. That left investors "caught by surprise" when Celsius declared bankruptcy, according to the Attorney General.

The federal government is already cracking down on crypto fraud. The Securities and Exchange Commission (SEC) and Commodity Futures Trading Commission (CFTC) believe existing rules already cover numerous crypto-related activities, and in some cases have jockeyed to claim primary responsibility for regulating the technology. Politicians in the House and Senate are pushing for nationwide regulations. New York's efforts go one step further by tackling crypto-specific problems, though, and the state's role as a financial hub may effectively let it dictate policies guiding firms across the US.

This article originally appeared on Engadget at https://www.engadget.com/new-york-state-ag-proposes-broad-regulations-for-the-cryptocurrency-industry-162228624.html?src=rss

Vice President Harris tells tech CEOs they have a moral responsibility to safeguard AI

The Biden administration may be funding AI research, but it's also hoping to hold companies accountable for their behavior. Vice President Kamala Harris has met with the CEOs of Alphabet (Google's parent), Microsoft, OpenAI and Anthropic in a bid to get more safeguards for AI. Private firms have an "ethical, moral and legal responsibility" to make their AI products safe and secure, Harris says in a statement. She adds that they still have to honor current laws.

The Vice President casts generative AI technologies like Bard, Bing Chat and ChatGPT as having the potential to both help and harm the country. They can address some of the "biggest challenges," but they can also be used to violate rights, create distrust and weaken "faith in democracy," according to Harris. She pointed to investigations into Russian interference during the 2016 presidential election as evidence that hostile nations will use tech to undercut democratic processes.

Finer details of the discussions aren't available as of this writing. However, Bloomberg claims invitations to the meeting outlined discussions of the risks of AI development, efforts to limit those risks and other ways the government could cooperate with the private sector to safely embrace AI.

Generative AI has been helpful for detailed search answers, producing art and even writing messages for job hunters. Accuracy remains a problem, however, and there are concerns about cheating, copyright violations and job automation. IBM said this week it would pause hiring for roles that could eventually be replaced with AI. There's been enough worry about AI's dangers that industry leaders and experts have called for a six-month pause on experiments to address ethical issues.

Biden's officials aren't waiting for companies to act. The National Telecommunications and Information Administration is asking for public comments on possible rules for AI development. Even so, the Harris meeting sends a not-so-subtle message that AI creators face a crackdown if they don't act responsibly.

This article originally appeared on Engadget at https://www.engadget.com/vice-president-harris-tells-tech-ceos-they-have-a-moral-responsibility-to-safeguard-ai-211049047.html?src=rss

Biden Administration will invest $140 million to launch seven new National AI Research Institutes

Ahead of a meeting between Vice President Kamala Harris and the heads of America's four leading AI tech companies — Alphabet, OpenAI, Anthropic and Microsoft — the Biden Administration announced Thursday a sweeping series of planned actions to help mitigate some of the risks that these emerging technologies pose to the American public. That includes $140 million to launch seven new AI R&D centers as part of the National Science Foundation, extracting commitments from leading AI companies to participate in a "public evaluation" of their AI systems at DEFCON 31, and ordering the Office of Management and Budget (OMB) to draft policy guidance for federal employees.

"The Biden-Harris administration has been leading on these issues since long before these newest generative AI products debuted last fall," a senior administration official said during a reporters call Wednesday. The Administration unveiled its AI Bill of Rights "blueprint" last October, which sought to "help guide the design, development, and deployment of artificial intelligence (AI) and other automated systems so that they protect the rights of the American public," per a White House press release.

"At a time of rapid innovation, it is essential that we make clear the values we must advance, and the common sense we must protect," the administration official continued. "With [Thursday's announcement] and the blueprint for an AI bill of rights, we've given companies, policymakers and the individuals building these technologies some clear ways that they can mitigate the risks [to consumers]."

While the federal government does already have authority to protect the citizenry and hold companies accountable, as the FTC demonstrated Monday, "there's a lot the federal government can do to make sure we get AI right," the official added — like founding seven brand-new National AI Research Institutes as part of the NSF. They'll coordinate research efforts across academia, the private sector and government to develop ethical and trustworthy AI in fields ranging from climate, agriculture and energy to public health, education and cybersecurity.

"We also need companies and innovators to be our partners in this work," the White House official said. "Tech companies have a fundamental responsibility to make sure their products are safe and secure and that they protect people's rights before they're deployed or made public tomorrow."

To that end, the Vice President is scheduled to meet with tech leaders at the White House on Thursday for what is expected to be a "frank discussion about the risks we see in current and near-term AI development," the official said. "We're also aiming to underscore the importance of their role in mitigating risks and advancing responsible innovation, and will discuss how we can work together to protect the American people from the potential harms of AI so that they can reap the benefits of these new technologies."

The Administration also announced that it has obtained "independent commitment" from more than a half dozen leading AI companies — Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI and Stability AI — to put their AI systems up for public evaluation at DEFCON 31 (August 10-13th). There, thousands of attendees will be able to poke and prod around in these models to see if they square with the Biden admin's stated principles and practices of the Blueprint. Finally, the OMB will issue guidance to federal employees in coming months regarding official use of the technology and help establish specific policies for agencies to follow, and allow for public comment before those policies are finalized.

"These are important new steps to promote responsible innovation and to make sure AI improves people's lives, without putting rights and safety at risk," the official noted.

This article originally appeared on Engadget at https://www.engadget.com/biden-administration-will-invest-140-million-to-launch-seven-new-national-ai-research-institutes-090026144.html?src=rss

Senators reintroduce COPPA 2.0 bill to tighten child safety online

Yet more senators are trying to resurrect legislation aimed at protecting kids' online privacy. Senators Bill Cassidy and Ed Markey have reintroduced a "COPPA 2.0" (Children and Teens' Online Privacy Protection Act) bill that would expand and revise the 1998 law to deal with the modern internet, particularly social media.

COPPA 2.0 would bar companies from gathering personal data from teens aged 13 to 16 without their consent. It would ban all targeted advertising to children and teens, and create a "bill of rights" that limits personal info gathering for marketing purposes. The measure would also require a button to let kids and parents delete personal data when it's "technologically feasible."

The sequel potentially makes it easier to take action in the first place. Where COPPA requires direct knowledge that companies are collecting data from kids under 13, 2.0 would cover apps and services that are "reasonably likely" to have children as users. The Federal Trade Commission, meanwhile, would have to establish a division committed to regulating youth marketing and privacy.

Cassidy and Markey portray the bill as necessary to tackle a "mental health crisis" where tech giants allegedly play a role. The politicians argue that social networks amplify teens' negative feelings, pointing to Facebook's own research as evidence.

Social networks have tried to clamp down on misuses of child data. Meta's Facebook and Instagram have limited ad targeting for teens, for instance. However, there have also been concerns that online platforms haven't gone far enough. On top of earlier calls for bans on ad targeting, states like Arkansas and Utah have already passed laws respectively requiring age verification and parental permission for social media. Another Senate bill, the Protecting Kids on Social Media Act, would require parents' approval across the US.

Whether or not COPPA 2.0 makes it to the President's desk for signature isn't clear. The first attempt got stuck in committee ahead of the current Congress session. It also comes right as other senators are making attempts to revive the EARN IT Act (aimed at curbing child sexual abuse material) and the Kids Online Safety Act (meant to fight toxic online content as a whole). All three reintroductions are bipartisan, but they'll need considerably stronger support in the Senate, plus successful equivalents in the House, to become law.

This article originally appeared on Engadget at https://www.engadget.com/senators-reintroduce-coppa-20-bill-to-tighten-child-safety-online-165043087.html?src=rss

White House proposes 30 percent tax on electricity used for crypto mining

The Biden administration wants to impose a 30 percent tax on the electricity used by cryptocurrency mining operations, and it has included the proposal in its budget for the fiscal year of 2024. In a blog post on the White House website, the administration has formally introduced the Digital Asset Mining Energy or DAME excise tax. It explained that it wants to tax cryptomining firms, because they aren't paying for the "full cost they impose on others," which include environmental pollution and high energy prices. 

Crypto mining has "negative spillovers on the environment," the White House continued, and the pollution it generates "falls disproportionately on low-income neighborhoods and communities of color." It added that the operations' "often volatile power consumption" can raise electricity prices for the people around them and cause service interruptions. Further, local power companies are taking a risk if they decide to upgrade their equipment to make their service more stable, since miners can easily move away to another location, even abroad.

It's no secret that the process of mining cryptocurrency uses up massive amounts of electricity. In April, The New York Times published a report detailing the power used by the 34 large-scale Bitcoin miners in the US that it had identified. Apparently, just those 34 operations altogether use the same amount of electricity as three million households in the country. The Times explained that most Bitcoin mining took place in China until 2021, when the country banned it, making the United States the new leader. (In the US, New York Governor Kathy Hochul signed legislation that restricts crypto mining in the state last year.) Previous reports estimated the electricity consumption related to Bitcoin alone to be more than that of some countries, including Argentina, Norway and the Netherlands.

As Yahoo News noted, there are other industries, such as steel manufacturing, that also use large amounts of electricity but aren't taxed for their energy consumption. In its post, the administration said that cryptomining "does not generate the local and national economic benefits typically associated with businesses using similar amounts of electricity."

Critics believe that the government made this proposal to go after and harm an industry it doesn't support. A Forbes report also suggested that DAME may not be the best solution for the issue, and that taxing the industry's greenhouse gas emissions might be a better alternative. That could encourage mining firms not just to minimize energy use, but also to find cleaner sources of power. It might be difficult to convince the administration to go down that route, though: In its blog post, it said that the "environmental impacts of cryptomining exist even when miners use existing clean power." Apparently, mining operations in communities with hydropower have been observed to reduce the amount of clean power available for use by others. That leads to higher prices and to even higher consumption of electricity from non-clean sources. 

If the proposal ever becomes a law, the government would impose the excise tax in phases. It would start by adding a 10 percent tax on miners' electricity use in the first year, 20 percent in the second and then 30 percent from the third year onwards. 
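The phase-in described above is simple enough to sketch in code. Here is a minimal, illustrative Python function showing how the rate would climb; the function name and its inputs are hypothetical, not drawn from any official text, and it assumes the rate rises 10 percentage points per year before capping at 30 percent.

```python
def dame_tax(electricity_cost_usd: float, year: int) -> float:
    """Illustrative sketch of the proposed DAME excise tax phase-in:
    10% of a miner's electricity cost in year one, 20% in year two,
    and 30% from year three onward. Hypothetical helper, not an
    official formula."""
    if year < 1:
        raise ValueError("phase-in years start at 1")
    rate = min(0.10 * year, 0.30)  # rises 10 points per year, caps at 30%
    return electricity_cost_usd * rate

# For a mining operation with a $1,000,000 annual power bill:
for y in (1, 2, 3, 4):
    print(f"year {y}: ${dame_tax(1_000_000, y):,.0f}")
```

Under those assumptions, the same power bill would owe three times as much tax in year three as in year one.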

This article originally appeared on Engadget at https://www.engadget.com/white-house-proposes-30-percent-tax-on-electricity-used-for-crypto-mining-090342986.html?src=rss

House bill would demand disclosure of AI-generated content in political ads

At least one politician wants more transparency in the wake of an AI-generated attack ad. New York Democratic Representative Yvette Clarke has introduced a bill, the REAL Political Ads Act, that would require political ads to disclose the use of generative AI through conspicuous audio or text. The amendment to the Federal Election Campaign Act would also have the Federal Election Commission (FEC) create regulations to enforce this, although the measure would take effect January 1st, 2024 regardless of whether or not rules are in place.

The proposed law would help fight misinformation. Clarke characterizes this as an urgent matter ahead of the 2024 election — generative AI can "manipulate and deceive people on a large scale," the representative says. She believes unchecked use could have a "devastating" effect on elections and national security, and that laws haven't kept up with the technology.

The bill comes just days after Republicans used AI-generated visuals in a political ad speculating what might happen during a second term for President Biden. The ad does include a faint disclaimer that it's "built entirely with AI imagery," but there's a concern that future advertisers might skip disclaimers entirely or lie about past events.

Politicians already hope to regulate AI. California's Rep. Ted Lieu put forward a measure that would regulate AI use on a broader scale, while the National Telecommunications and Information Administration (NTIA) is asking for public input on potential AI accountability rules. Clarke's bill is more targeted and clearly meant to pass quickly.

Whether or not it does isn't certain. The act has to pass a vote in a Republican-led House, and the Senate would need to develop and pass an equivalent bill before the two bodies of Congress reconcile their work and send a law to the President's desk. Success also won't prevent unofficial attempts to fool voters. Still, it might discourage politicians and action committees from using AI to deceive the public.

This article originally appeared on Engadget at https://www.engadget.com/house-bill-would-demand-disclosure-of-ai-generated-content-in-political-ads-190524733.html?src=rss

Canada's controversial streaming bill just became law

Canada has passed its controversial streaming bill that requires Netflix, Spotify and other companies to pay to support Canadian series, music and other content, the CBC has reported. After clearing a final hurdle in the Senate on Thursday, Bill C-11 imposes the same content laws on streamers as it does on traditional broadcasters. The government has promised that the bill only applies to companies and not individual content creators on YouTube or other platforms.

The new rules give the Canadian Radio-television and Telecommunications Commission (CRTC) regulator broad powers over streaming companies, which could face fines or other penalties if they don't comply with the new laws. "Online streaming has changed how we create, discover, and consume our culture, and it's time we updated our system to reflect that," a Canadian government press release states.

Critics have said that the bill could cause over-regulation online. "Under this archaic system of censorship, government gatekeepers will now have the power to control which videos, posts and other content Canadians can see online," Canada's Conservative opposition wrote on a web page dedicated to C-11. Streaming companies like YouTube and TikTok opposed the bill as well. 

The law has also been criticized for being overly broad, with a lack of clarity on how it will apply in some cases. "The bill sets out a revised broadcasting policy for Canada, which includes an expanded list of things the Canadian broadcasting system 'should' do," a Senate page states. "But precisely what this would mean in concrete terms for broadcasters is not yet known." 

Canada is far from the first country to enact local content rules for streaming companies, though. The EU requires a minimum of 30 percent locally produced content for member nations, most of which easily exceed that. Australia also recently announced that content quotas will be placed on Netflix, Disney+, Prime Video and the other international streamers by July of 2024.

Some notable Canadian series include Schitt's Creek, Letterkenny and M'entends-tu. Numerous US and international shows are also shot in "Hollywood North" in cities like Montreal, Toronto and Vancouver, including The Handmaid's Tale, The Boys, Riverdale and others.

This article originally appeared on Engadget at https://www.engadget.com/canadas-controversal-streaming-bill-just-became-law-065036243.html?src=rss

SpaceX’s Starship launch caused a fire in a Texas state park

After a string of delays and a scrubbed launch attempt, SpaceX finally conducted the first test flight of its Starship spacecraft earlier this month. While the vehicle got off the ground, it seems federal agencies will be dealing with the explosive fallout of the mission for quite some time.

Federal agencies say the launch led to a 3.5-acre fire on state park land. The blaze was extinguished. Debris from the rocket, which SpaceX said it had to blow up in the sky for safety reasons after a separation failure, was found across hundreds of acres of land. “Although no debris was documented on refuge fee-owned lands, staff documented approximately 385 acres of debris on SpaceX’s facility and at Boca Chica State Park,” the Texas arm of the US Fish and Wildlife Service told Bloomberg.

The agency noted it hasn’t found evidence of dead wildlife as a result of the incident. Still, it’s working with the Federal Aviation Administration on a site assessment and post-launch recommendations, while ensuring compliance with the Endangered Species Act.

Soon after the launch and Starship’s explosion, the FAA said it was carrying out a mishap investigation. Starship is grounded for now and its return to flight depends on the agency “determining that any system, process or procedure related to the mishap does not affect public safety.”

Starship’s approved launch plan included an anomaly response process, which the FAA says was triggered after the spacecraft blew up. As such, SpaceX is required to remove debris from sensitive habitats, carry out a survey of wildlife and vegetation and send reports to several federal agencies. “The FAA will ensure SpaceX complies with all required mitigations,” the agency told Bloomberg.

Even if SpaceX can sate federal agencies' concerns swiftly, it may be quite some time until the next Starship launch. The super heavy-lift space launch vehicle destroyed its launch pad, sending chunks of debris into the air. Footage showed the shrapnel landing on a nearby beach and even hitting a van hundreds of yards from the launch site. Fortunately, no one was hurt, according to the FAA.

This article originally appeared on Engadget at https://www.engadget.com/spacex-starship-launch-caused-a-fire-in-a-texas-state-park-165630774.html?src=rss