Posts with «government» label

EPA creates youth council to advise the agency on climate change policy

If younger generations are more likely to feel the effects of climate change, shouldn't they have a say in related government policies? The Environmental Protection Agency (EPA) thinks so. It's officially forming its "first-ever" National Environmental Youth Advisory Council, and inviting 16 people aged 18 to 29 to help shape the agency's approach to environmental issues that affect young people.

In keeping with the EPA's increasing focus on environmental justice, at least half of the council's members will come from, live in or do most of their work in "disadvantaged" communities where clean air, land and water aren't guaranteed. Those interested in the panel will have until August 22nd at 11:59PM Eastern to apply, with webinars for would-be applicants on June 30th and August 7th.

Agency head Michael Regan argues that it's not practical to address environmental issues without the help of younger people, who are often at the "forefront of social movements." The council will ensure that young people have a role in those decisions, the administrator adds.

Plans for the council were originally unveiled in June 2022, and its launch comes several months after the EPA created an Office of Environmental Justice and External Civil Rights. That new division is meant to include "underserved communities" in the regulatory process, Vice President Kamala Harris said at the time. In that light, the youth council is an extension of last year's strategy.

President Biden's administration has made the environment a key element of its policy. The wide-ranging Inflation Reduction Act includes $3 billion in environmental justice grants as well as revised (if sometimes stricter) EV tax credits. The youth council won't necessarily lead to major changes in policy, but it makes sense when young adults are more likely to deal with the most severe effects of rising global temperatures than the official rule makers.

This article originally appeared on Engadget at https://www.engadget.com/epa-creates-youth-council-to-advise-the-agency-on-climate-change-policy-154558548.html?src=rss

House and Senate bills aim to protect journalists' data from government surveillance

News gatherers in the US may soon have safeguards against government attempts to comb through their data. Bipartisan House and Senate groups have reintroduced the PRESS Act (Protect Reporters from Exploitative State Spying Act), legislation that limits the government's ability to compel data disclosures that might identify journalists' sources. The Senate bill would extend disclosure exemptions and standards to cover email, phone records, and other info held by third parties.

The PRESS Act would also require that the federal government give journalists a chance to respond to data requests. Courts could still demand disclosure if it's necessary to prevent terrorism, identify terrorists or prevent serious "imminent" violence. The Senate bill is the work of senators Richard Durbin, Mike Lee and Ron Wyden, while the House equivalent comes from representatives Kevin Kiley and Jamie Raskin.

Sponsors characterize the bill as vital to protecting First Amendment press freedoms. Anonymous source leaks help keep the government accountable, Wyden says. He adds that surveillance like this can deter reporters and sources worried about retaliation. Lee, meanwhile, says the Act will also maintain the public's "right to access information" and help it participate in a representative democracy.

The senators point to instances from both Democratic and Republican administrations where law enforcement subpoenaed data in a bid to catch sources. Most notably, the Justice Department under Trump is known to have seized call records and email logs from major media outlets like CNN and The New York Times following an April 2017 report on how former FBI director James Comey handled investigations during the 2016 presidential election.

Journalist shield laws exist in 48 states and the District of Columbia, but there's no federal law. That void lets the Justice Department and other government bodies quietly grab data from telecoms and other providers. The PRESS Act theoretically patches that hole and minimizes the chances of abuse.

There's no guarantee the PRESS Act will reach President Biden's desk and become law. However, backers in both chambers are betting that bipartisan support will help. The House version passed "unanimously" in the previous session of Congress, Wyden's office says.

This article originally appeared on Engadget at https://www.engadget.com/house-and-senate-bills-aim-to-protect-journalists-data-from-government-surveillance-192907280.html?src=rss

Lawmakers seek 'blue-ribbon commission' to study impacts of AI tools

The wheels of government have finally begun to turn on the issue of generative AI regulation. US Representatives Ted Lieu (D-CA) and Ken Buck (R-CO) introduced legislation on Monday that would establish a 20-person commission to study ways to “mitigate the risks and possible harms” of AI while “protecting” America's position as a global technology power. 

The bill would require the executive branch to appoint experts from throughout government, academia and industry to conduct the study over the course of two years, producing three reports during that period. The president would appoint eight members of the commission, while Congress, in an effort "to ensure bipartisanship," would split the remaining 12 positions evenly between the two parties (thereby ensuring the entire process devolves into a partisan circus).

"[Generative AI] can be disruptive to society, from the arts to medicine to architecture to so many different fields, and it could also potentially harm us and that's why I think we need to take a somewhat different approach,” Lieu told the Washington Post. He views the commission as a way to give lawmakers — the same folks routinely befuddled by TikTok — a bit of "breathing room" in understanding how the cutting-edge technology functions.

Senator Brian Schatz (D-HI) plans to introduce the bill's upper house counterpart, Lieu's team told WaPo, though no timeline for that happening was provided. Lieu also noted that Congress as a whole would do well to avoid trying to pass major legislation on the subject until the commission has had its time. “I just think we need some experts to inform us and just have a little bit of time pass before we put something massive into law,” Lieu said.

Of course, that would push the passage of any sort of meaningful congressional regulation on generative AI out to 2027 at the very earliest, rather than right now, when we actually need it. Given how rapidly both the technology and its use cases have evolved in just the last six months, the commission will have its work cut out for it just keeping pace with the changes, much less convincing the octogenarians running our nation of the potential dangers AI poses to our democracy.

This article originally appeared on Engadget at https://www.engadget.com/lawmakers-seek-blue-ribbon-commission-to-study-impacts-of-ai-tools-152550502.html?src=rss

Biden administration announces $930 million in grants to expand rural internet access

The Biden administration on Friday announced $930 million in grants designed to expand rural access to broadband internet. Part of the Department of Commerce’s “Enabling Middle Mile Broadband Infrastructure Program,” the grants will fund the deployment of more than 12,000 miles of new fiber optic cable across 35 states and Puerto Rico. The administration said Friday it expects grant recipients to invest an additional $848.46 million, a commitment that should double the program's impact.
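As a quick back-of-the-envelope check on the "double the program's impact" framing, the federal grants plus the expected recipient investment come out to roughly twice the grant total. This is just a sketch using the figures quoted in the article:

```python
# Sanity check on the "double the impact" claim, using figures from the article.
grants = 930_000_000           # federal middle-mile grant funding
recipient_match = 848_460_000  # additional investment expected from recipients

total = grants + recipient_match
print(f"Total investment: ${total / 1e9:.2f}B")
print(f"Multiple of grant funding: {total / grants:.2f}x")
```

The combined figure is about $1.78 billion, roughly 1.9 times the grant funding alone, which is what the administration means by "double."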

“Much like how the interstate highway system connected every community in America to regional and national systems of highways, this program will help us connect communities across the country to regional and national networks that provide quality, affordable high-speed internet access,” Commerce Secretary Gina Raimondo said.

High-speed internet is no longer a luxury, it’s a necessity. That's why my Administration is investing in expanding access to affordable high-speed internet to close the digital divide.https://t.co/Mxd81tjeEg.

— President Biden (@POTUS) June 17, 2023

According to the Commerce Department, it received over 260 applications for the Middle Mile Grant Program, totaling $7.47 billion in funding requests. The agency primarily awarded grants to telecom and utility companies, though it also set aside funding for tribal governments and nonprofits. Per Gizmodo, the largest grant, valued at $88.8 million, went to a telecommunications company in Alaska that will build a fiber optic network in a part of the state where 55 percent of residents have no internet access. The average grant award was $26.6 million. Grant recipients now have five years to complete work on their projects, though the administration hopes many of the buildouts will be completed sooner.
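The numbers above also imply how oversubscribed the program was. These are rough figures derived from the article's totals, not official counts from the Commerce Department:

```python
# Rough figures implied by the article's numbers (not official counts).
total_awarded = 930_000_000      # total grant funding announced
total_requested = 7_470_000_000  # funding requested across 260+ applications
average_award = 26_600_000       # average grant size

implied_recipients = total_awarded / average_award
funded_fraction = total_awarded / total_requested
print(f"Implied number of grants: ~{implied_recipients:.0f}")
print(f"Share of requested funding awarded: {funded_fraction:.1%}")
```

By this arithmetic, roughly 35 grants were awarded, covering only about 12 percent of the funding requested.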

In addition to creating new economic opportunities in traditionally underserved communities, the government says the projects should improve safety in those areas too. “They can improve network resilience in the face of the climate crisis, and increasing natural disasters like wildfires, floods, and storms, creating multiple routes for the internet traffic to use instead of just one, like a detour on the freeway,” White House infrastructure coordinator Mitch Landrieu told Bloomberg.

The funding is just one of many recent efforts by the government to close the rural digital divide. At the start of last year, the Federal Communications Commission announced an accountability program designed to ensure recipients of the Rural Digital Opportunity Fund properly spend the money they receive from the public purse.

This article originally appeared on Engadget at https://www.engadget.com/biden-administration-announces-930-million-in-grants-to-expand-rural-internet-access-153708056.html?src=rss

Senate bill would hold AI companies liable for harmful content

Politicians think they have a way to hold companies accountable for troublesome generative AI: take away their legal protection. Senators Richard Blumenthal and Josh Hawley have introduced the No Section 230 Immunity for AI Act which, as the name suggests, would prevent OpenAI, Google and similar firms from using the Communications Decency Act's Section 230 to waive liability for harmful content and avoid lawsuits. If someone created a deepfake image or sound bite to ruin a reputation, for instance, the tool developer could be held responsible alongside the person who used it.

Hawley characterizes the bill as forcing AI creators to "take responsibility for business decisions" as they're developing products. He also casts the legislation as a "first step" toward creating rules for AI and establishing safety measures. In a hearing this week on AI's effect on human rights, Blumenthal urged Congress to deny AI the broad Section 230 safeguards that have shielded social networks from legal consequences.

In May, Blumenthal and Hawley held a hearing where speakers like OpenAI chief Sam Altman called for the government to act on AI. Industry leaders have already urged a pause on AI experimentation, and more recently compared the threat of unchecked AI to that of nuclear war.

Congress has pushed for Section 230 reforms for years in a bid to rein in tech companies, particularly over concerns that internet giants might knowingly allow hurtful content. A 2021 House bill would have held businesses liable if they knowingly used algorithms that cause emotional or physical harm. These bills have stalled, though, and Section 230 has remained intact. Legislators have had more success in setting age verification requirements that theoretically reduce mental health issues for younger users.

It's not clear this bill stands a greater chance of success. Blumenthal and Hawley are known for introducing online content bills that fail to gain traction, such as the child safety-oriented EARN IT Act and Hawley's anti-addiction SMART Act. On top of persuading fellow senators, they'll need an equivalent House bill that also survives a vote.

This article originally appeared on Engadget at https://www.engadget.com/senate-bill-would-hold-ai-companies-liable-for-harmful-content-212340911.html?src=rss

Google, OpenAI will share AI models with the UK government

The UK's AI oversight will include chances to directly study some companies' technology. In a speech at London Tech Week, Prime Minister Rishi Sunak revealed that Google DeepMind, OpenAI and Anthropic have pledged to provide "early or priority access" to AI models for the sake of research and safety. This will ideally improve inspections of these models and help the government recognize the "opportunities and risks," Sunak says.

It's not clear just what data the tech firms will share with the UK government. We've asked Google, OpenAI and Anthropic for comment.

The announcement comes weeks after officials said they would conduct an initial assessment of AI model accountability, safety, transparency and other ethical concerns. The country's Competition and Markets Authority is expected to play a key role. The UK has also committed to spending an initial £100 million (about $125.5 million) to create a Foundation Model Taskforce that will develop "sovereign" AI meant to grow the British economy while minimizing ethical and technical problems.

Industry leaders and experts have called for a temporary halt to AI development over worries creators are pressing forward without enough consideration for safety. Generative AI models like OpenAI's GPT-4 and Anthropic's Claude have been praised for their potential, but have also raised concerns about inaccuracies, misinformation and abuses like cheating. The UK's move theoretically limits these issues and catches problematic models before they've done much damage.

This doesn't necessarily give the UK complete access to these models and the underlying code. Likewise, there are no guarantees the government will catch every major issue. The access may provide relevant insights, though. If nothing else, the effort promises increased transparency for AI at a time when the long-term impact of these systems isn't entirely clear.

This article originally appeared on Engadget at https://www.engadget.com/google-openai-will-share-ai-models-with-the-uk-government-134318263.html?src=rss

FCC orders Avid Telecom to stop health insurance-related robocalls

The Federal Communications Commission has issued a cease-and-desist letter to Avid Telecom, the same company sued by nearly all Attorneys General in the US for alleged robocall activities. In the letter (PDF) addressed to Avid CEO Michael Lansky, the FCC said it has determined that the company "is apparently originating illegal robocall traffic on behalf of one or more of its clients." The commission explained that it worked with USTelecom’s Industry Traceback Group, which investigated prerecorded telemarketing calls related to health insurance that the aforementioned state attorneys general identified as robocalls made without consent.

The traceback group's investigation determined that Avid originated the calls. When notified about the calls, Avid told the group that its customer obtained consent through opt-in websites, but the FCC explained in its letter that the customer "failed to make adequate disclosures to obtain consent." That is, it didn't tell people that their consent authorizes the caller "to deliver advertisements or telemarketing messages using an auto-dialer or an artificial or prerecorded voice." In some cases, the customer allegedly called people even after they revoked their consent.

The FCC has outlined the steps Avid has to take to address the issue, starting by investigating the identified traffic. Then, it has to implement measures that can prevent new and existing customers from using its network to make illegal calls. Within 48 hours of receiving the letter, Avid is required to update the FCC with the measures it has taken to mitigate robocalls coming from its network. After that, it has to inform the commission of the safeguards it has implemented to prevent its customers from using its network to make robocalls. The FCC warned that if Avid fails to comply, downstream voice service providers might permanently block all of Avid’s traffic. 

In late May, Attorneys General from 48 states filed a lawsuit against the Arizona-based VoIP services provider, accusing it of being the origin for over 7.5 billion calls to people on the National Do Not Call Registry. According to the lawsuit, Avid spoofed phone numbers and made calls appear as if they were from government offices, law enforcement agencies and companies like Amazon. The Attorneys General are asking the court to issue an injunction on Avid for making robocalls and to make the company pay for damages and restitution to the people it called illegally.

This article originally appeared on Engadget at https://www.engadget.com/fcc-orders-avid-telecom-to-stop-health-insurance-related-robocalls-064428940.html?src=rss

Meta test will limit news posts for Facebook and Instagram users in Canada

Last year, Facebook parent Meta said it may stop Canadians from sharing news content in response to the country's proposed Online News Act. Now, the company has announced that it will begin tests on Facebook and Instagram that "limit some users and publishers from viewing or sharing some news content in Canada," it wrote in a blog post. The testing will take place over several weeks, and the "small percentage" of users affected will be notified if they try to share news content.

"As we have repeatedly shared, the Online News Act is fundamentally flawed legislation that ignores the realities of how our platforms work, the preferences of the people who use them, and the value we provide news publishers," the company wrote.

The proposed law, also known as Bill C-18, was introduced by the ruling Liberal government earlier this year. Modeled after a similar Australian law, it aims to force internet platforms like Facebook into revenue-sharing partnerships with local news organizations. It came about, in part, because of Facebook and Google's dominance of the online advertising market — with both companies combined taking 80 percent of revenue.

Last year, Meta said it was trying to be "transparent about the possibility that we may be forced to consider whether we continue to allow the sharing of news content in Canada." The company made the threat after a government panel failed to invite Meta to a meeting about the legislation. Google also temporarily blocked some Canadian users from seeing news content. 

In response, Canadian Heritage Minister Pablo Rodriguez called the tests "unacceptable," Reuters reported. "When a big tech company... tells us, 'If you don't do this or that, then I'm pulling the plug' — that's a threat. I've never done anything because I was afraid of a threat," he told Reuters.

Facebook, Google and others eventually agreed to the Australian law, and now pay publishers to post news links with snippets. Before that happened, though, Facebook followed through on its threat to block users from sharing news links in the nation. It later reversed the ban following further discussions, after the government made amendments addressing Facebook's concerns about the value of its platform to publishers.

For now, the test will only affect a small number of users, and only for a limited time. If it follows the same playbook it used in Australia, though, Meta may block news sharing for all users in Canada, possibly as a way to force the government and publishers to the bargaining table.

"As the Minister of Canadian Heritage has said, how we choose to comply with the legislation is a business decision we must make, and we have made our choice," the company wrote. "While these product tests are temporary, we intend to end the availability of news content in Canada permanently following the passage of Bill C-18."

This article originally appeared on Engadget at https://www.engadget.com/meta-test-will-limit-news-posts-for-facebook-and-instagram-users-in-canada-104026273.html?src=rss

White House reveals its next steps towards 'responsible' AI development

The White House has made responsible AI development a focus of this administration in recent months, releasing a Blueprint for an AI Bill of Rights, developing a risk management framework, committing $140 million to fund seven new National AI Research Institutes and weighing in on how private enterprises are leveraging the technology. On Tuesday, the executive branch announced its next steps toward that goal, including the first update to the National AI R&D Strategic Plan since 2019 as well as a request for public input on critical AI issues. The Department of Education also released its hotly anticipated report on the effects and risks of AI for students.

The OSTP's National AI R&D Strategic Plan, which guides the federal government's investments in AI research, hadn't been updated since the Trump administration (which gutted OSTP staffing levels). The plan seeks to promote responsible innovation in the field that serves the public good without infringing on the public's rights, safety and democratic values, a goal it has so far pursued through eight core strategies. Tuesday's update adds a ninth: establishing "a principled and coordinated approach to international collaboration in AI research," per the White House.

"The federal government plays a critical role in ensuring that technologies like AI are developed responsibly, and to serve the American people," the OSTP argued in its release. "Federal investments over many decades have facilitated many key discoveries in AI innovations that power industry and society today, and federally funded research has sustained progress in AI throughout the field’s evolution."

The OSTP also wants to hear the public's thoughts on both its new strategies and the technology's development in general. As such, it is inviting "interested individuals and organizations" to submit comments responding to one or more of nearly 30 prompt questions, including "How can AI rapidly identify cyber vulnerabilities in existing critical infrastructure systems and accelerate addressing them?" and "How can Federal agencies use shared pools of resources, expertise, and lessons learned to better leverage AI in government?" Comments are due through the Federal eRulemaking Portal by 5:00PM ET on July 7, 2023, and responses should be limited to 10 pages in 11-point font.

The Department of Education also released its report on the promises and pitfalls of AI in schools on Tuesday, focusing on how it impacts Learning, Teaching, Assessment, and Research. Despite recent media hysteria about generative AIs like ChatGPT fomenting the destruction of higher education by helping students write their essays, the DoE noted that AI "can enable new forms of interaction between educators and students, help educators address variability in learning, increase feedback loops, and support educators."

This article originally appeared on Engadget at https://www.engadget.com/white-house-reveals-its-next-steps-towards-responsible-ai-development-190636857.html?src=rss

TikTok is suing Montana over law banning the app in the state

TikTok filed a lawsuit Monday to challenge Montana's ban of the social platform, as reported by The Wall Street Journal. The case was brought against state attorney general Austin Knudsen.

Montana's governor signed the bill banning the app in the state last week, one month after the state's legislature passed it. The law is set to take effect on January 1st, 2024. The day after the signing, a group of creators sued the state on grounds similar to those in TikTok's suit.

We are challenging Montana’s unconstitutional TikTok ban to protect our business and the hundreds of thousands of TikTok users in Montana. We believe our legal challenge will prevail based on an exceedingly strong set of precedents and facts.

— TikTokComms (@TikTokComms) May 22, 2023

The law prohibits the ByteDance-owned platform from operating in the state and prevents app stores from listing the app for download. Although it isn't clear how Montana plans to enforce the ban, the law states that violations will incur fines of $10,000 per day.


This article originally appeared on Engadget at https://www.engadget.com/tiktok-is-suing-montana-over-law-banning-the-app-in-the-state-200642508.html?src=rss