Posts with «internet & networking technology» label

Google will keep third-party tracking cookies on Chrome as they are

Google will not make any changes to how third-party cookies work in the Chrome browser. Anthony Chavez, Google VP for Privacy Sandbox, announced that the company has "made the decision to maintain [its] current approach to offering users third-party cookie choice in Chrome." It will also "not be rolling out a new standalone prompt for third-party cookies" that would have allowed users to opt out of being tracked by advertisers. The announcement came a few days after a federal judge ruled that Google has an illegal monopoly on online advertising.

The company originally announced that it was going to phase out third-party tracking cookies in 2022 as part of its Privacy Sandbox initiative, which aims to make the web more secure and private to use. But due to a series of delays and regulatory hurdles — the UK's Competition and Markets Authority (CMA) and the US Department of Justice both looked into Google's initiative out of concerns that it could harm smaller advertisers — the planned deprecation got delayed to 2024 and then again to 2025. 

Last year, Google ultimately decided that it wasn't going to kill third-party cookies and would instead introduce "a new experience in Chrome that lets people make an informed choice that applies across their web browsing." That new experience isn't coming. In his new announcement, Chavez said that a lot has changed since the Privacy Sandbox initiative debuted, and that Google took new developments in privacy-enhancing technologies that secure people's browsing into consideration when making its decision.

Despite killing all its plans to remove third-party cookies from Chrome, Google will keep the Privacy Sandbox initiative alive. Chavez said it will continue enhancing tracking protections in Chrome's incognito mode, such as launching IP Protection later this year, and will continue working on features like Safe Browsing, Safety Check and built-in password protections.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/google-will-keep-third-party-tracking-cookies-on-chrome-as-they-are-130026362.html?src=rss

The Washington Post partners with OpenAI to bring its content to ChatGPT

The Washington Post is partnering with OpenAI to bring its reporting to ChatGPT. The two organizations did not disclose the financial terms of the agreement, but the deal will see ChatGPT display summaries, quotes and links to articles from The Post when users prompt the chatbot to search the web.

"We're all in on meeting our audiences where they are," said Peter Elkins-Williams, head of global partnerships at The Post. "Ensuring ChatGPT users have our impactful reporting at their fingertips builds on our commitment to provide access where, how and when our audiences want it."

The Post is no stranger to generative AI. In November, the publisher began using the technology to offer article summaries. Since the start of February, ChatGPT Search has been available to everyone, with no account or sign-in necessary. 

Later that same month, Jeff Bezos, the owner of The Washington Post, announced a "significant shift" in the publisher's editorial strategy. As part of the overhaul, the paper has been publishing daily opinion stories "in defense of two pillars," personal liberties and free markets. Given that focus and Amazon's own investments in artificial intelligence, it's not surprising to see The Washington Post and OpenAI sign a strategic partnership.

More broadly, today's announcement sees yet another publisher partnering with OpenAI, following an early but brief period of resistance from some players in the news media industry — most notably The New York Times. According to OpenAI, it has signed similar agreements with more than 20 news publishers globally.

This article originally appeared on Engadget at https://www.engadget.com/ai/the-washington-post-partners-with-openai-to-bring-its-content-to-chatgpt-141215314.html?src=rss

ProtonVPN two-year plans are 64 percent off right now

A VPN (virtual private network) can help you stay safe online and one of our top picks is currently on sale. A two-year subscription to the ProtonVPN Plus plan is currently $86.16. That’s 64 percent off the usual price. The deal drops the cost from $10 to $3.59 per month, and it reduces the overall price for 24 months by $153.
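The deal figures above are easy to verify; here's a quick sanity check of the math, using only the per-month prices quoted in the article:

```python
# Sanity-check the ProtonVPN deal math: $3.59/month on the deal
# vs. the usual $10/month, over a 24-month term.
deal_monthly = 3.59
list_monthly = 10.00
months = 24

deal_total = round(deal_monthly * months, 2)                 # total paid up front
savings = round((list_monthly - deal_monthly) * months, 2)   # saved over 24 months
discount_pct = round((1 - deal_monthly / list_monthly) * 100)  # percent off

print(deal_total, savings, discount_pct)  # 86.16 153.84 64
```

The total comes out to $86.16 and the discount to 64 percent, matching the listed deal; the 24-month savings work out to $153.84, which the article rounds to $153.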

This plan allows you to use ProtonVPN on up to 10 devices at a time. It should be pretty easy to find a server to route your internet traffic through as well, since ProtonVPN has more than 8,600 of them in over 110 countries.

ProtonVPN is our pick for the best VPN overall due to a blend of its security, usability and privacy. ProtonVPN has a no-logs policy. That means it doesn't keep any records of information that passes through its network. In other words, it doesn't track your internet activity while you're using it, helping to protect you and your anonymity.

Other features of ProtonVPN Plus include ad-, malware- and tracker-blocking, as well as fast performance. In our testing, ProtonVPN had a minimal impact on connection speeds in our geoblock, streaming and gaming tests. ProtonVPN is also open source, meaning that anyone with enough knowhow can take a look under the hood and validate Proton's technical claims.

Follow @EngadgetDeals on X for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/deals/protonvpn-two-year-plans-are-64-percent-off-right-now-152355804.html?src=rss

Humane is said to be seeking a $1 billion buyout after only 10,000 orders of its terrible AI Pin

It emerged recently that Humane was trying to sell itself for as much as $1 billion after its confuddling, expensive and ultimately pretty useless AI Pin flopped. A New York Times report that dropped on Thursday shed a little more light on the company's sales figures and, like the wearable AI assistant itself, the details are not good.

By early April, around the time that many devastating reviews of the AI Pin were published, Humane is said to have received around 10,000 orders for the device. That's a far cry from the 100,000 it was hoping to ship this year, and about 9,000 more than I thought it might get. It's hard to think it picked up many more orders beyond those initial 10,000 after critics slaughtered the AI Pin.

At a price of $700 (plus a mandatory $24 per month for 4G service), that puts Humane's initial revenue at a maximum of about $7.24 million, not accounting for canceled orders. And yet Humane wants a buyer for north of $1 billion after taking a swing and missing so hard it practically knocked out the umpire.
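That revenue ceiling follows directly from the figures above; a minimal sketch of the arithmetic, assuming each of the 10,000 orders paid the $700 price plus one $24 month of service:

```python
# Rough upper bound on Humane's initial AI Pin revenue,
# per the order and pricing figures reported above.
orders = 10_000
device_price = 700   # USD per AI Pin
first_month_sub = 24  # mandatory monthly 4G fee, counted once here

max_initial_revenue = orders * (device_price + first_month_sub)
print(max_initial_revenue)  # 7240000, i.e. about $7.24 million
```

Even letting subscriptions run for a full year would only roughly add another $2.6 million, which puts the $1 billion asking price in perspective.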

HP is reportedly one of the companies that Humane was in talks with over a potential sale, with discussions starting only a week or so after the reviews came out. Any buyer that does take the opportunity to snap up Humane's business and tech might be picking up somewhat of a poisoned chalice. Not least because the company this week urged its customers to stop using the AI Pin's charging case over a possible “fire safety risk.”

This article originally appeared on Engadget at https://www.engadget.com/humane-is-said-to-be-seeking-a-1-billion-buyout-after-only-10000-orders-of-its-terrible-ai-pin-134147878.html?src=rss

Former OpenAI, Google and Anthropic workers are asking AI companies for more whistleblower protections

A group of current and former employees from leading AI companies like OpenAI, Google DeepMind and Anthropic have signed an open letter asking for greater transparency and protection from retaliation for those who speak out about the potential concerns of AI. “So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public,” the letter, which was published on Tuesday, says. “Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues.”

The letter comes just a couple of weeks after a Vox investigation revealed OpenAI had attempted to muzzle recently departing employees by forcing them to choose between signing an aggressive non-disparagement agreement and risking the loss of their vested equity in the company. After the report, OpenAI CEO Sam Altman called the provision "genuinely embarrassing" and claimed it has been removed from recent exit documentation, though it's unclear whether it remains in force for some employees.

The 13 signatories include former OpenAI employees Jacob Hinton, William Saunders and Daniel Kokotajlo. Kokotajlo said that he resigned from the company after losing confidence that it would responsibly build artificial general intelligence, a term for AI systems that are as smart as or smarter than humans. The letter — which was endorsed by prominent AI experts Geoffrey Hinton, Yoshua Bengio and Stuart Russell — expresses grave concerns over the lack of effective government oversight for AI and the financial incentives driving tech giants to invest in the technology. The authors warn that the unchecked pursuit of powerful AI systems could lead to the spread of misinformation, exacerbation of inequality and even the loss of human control over autonomous systems, potentially resulting in human extinction.

“There is a lot we don’t understand about how these systems work and whether they will remain aligned to human interests as they get smarter and possibly surpass human-level intelligence in all areas,” wrote Kokotajlo on X. “Meanwhile, there is little to no oversight over this technology. Instead, we rely on the companies building them to self-govern, even as profit motives and excitement about the technology push them to ‘move fast and break things.’ Silencing researchers and making them afraid of retaliation is dangerous when we are currently some of the only people in a position to warn the public.”

OpenAI, Google and Anthropic did not immediately respond to requests for comment from Engadget. In a statement sent to Bloomberg, an OpenAI spokesperson said the company is proud of its “track record providing the most capable and safest AI systems" and it believes in its "scientific approach to addressing risk.” It added: “We agree that rigorous debate is crucial given the significance of this technology and we'll continue to engage with governments, civil society and other communities around the world.”

The signatories are calling on AI companies to commit to four key principles:

  • Refraining from retaliating against employees who voice safety concerns

  • Supporting an anonymous system for whistleblowers to alert the public and regulators about risks

  • Allowing a culture of open criticism

  • And avoiding non-disparagement or non-disclosure agreements that restrict employees from speaking out

The letter comes amid growing scrutiny of OpenAI's practices, including the disbandment of its "superalignment" safety team and the departure of key figures like co-founder Ilya Sutskever and Jan Leike, who criticized the company's prioritization of "shiny products" over safety.

This article originally appeared on Engadget at https://www.engadget.com/former-openai-google-and-anthropic-workers-are-asking-ai-companies-for-more-whistleblower-protections-175916744.html?src=rss

Malicious code has allegedly compromised TikTok accounts belonging to CNN and Paris Hilton

There’s a new exploit making its way through TikTok and it has already compromised the official accounts of Paris Hilton, CNN and others, as reported by Forbes. It’s spread via direct message and doesn’t require a download, click or any form of response, beyond opening the chat. It’s currently unclear how many accounts have been affected.

Even weirder? The hacked accounts aren’t really doing anything. A source within TikTok told Forbes that these impacted accounts “do not appear to be posting content”. TikTok issued a statement to The Verge, saying that it is "aware of a potential exploit targeting a number of brand and celebrity accounts." The social media giant is "working directly with affected account owners to restore access." 

Semafor recently reported that CNN’s TikTok had been hacked, which forced the network to disable the account. It’s unclear if this is the very same hack that has gone on to infect other big-time accounts. The news organization said that it was “working with TikTok on the backend on additional security measures.” 

CNN staffers told Semafor that the news entity had “grown lax” regarding digital safety practices, with one employee noting that dozens of colleagues had access to the official TikTok account. However, another network source suggested that the breach wasn’t the result of someone gaining access from CNN’s end. That’s about all we know for now. We’ll update this post when more news comes in.

Of course, this isn’t the first big TikTok hack. Back in 2023, the company acknowledged that around 700,000 accounts in Turkey had been compromised due to insecure SMS channels involved with its two-factor authentication. Researchers at Microsoft discovered a vulnerability in 2022 that allowed hackers to overtake accounts with just a single click. Later that same year, an alleged security breach impacted more than a billion users.

This article originally appeared on Engadget at https://www.engadget.com/malicious-code-has-allegedly-compromised-tiktok-accounts-belonging-to-cnn-and-paris-hilton-174000353.html?src=rss

Twitch removes every member of its Safety Advisory Council

Twitch signed up cyberbullying experts, web researchers and community members back in 2020 to form the Safety Advisory Council. The review board was formed to help it draft new policies, develop products that improve safety and protect the interests of marginalized groups. Now, CNBC reports that the streaming website has terminated all the members of the council. Twitch reportedly called the nine members into a meeting on May 6 to let them know that their existing contracts would end on May 31 and that they would not be getting paid for the second half of 2024. 

The Safety Advisory Council's members include Dr. Sameer Hinduja, co-director of the Cyber Bullying Research Center, and Dr. T.L. Taylor, the co-founder and director of AnyKey, an organization that advocates for inclusion and diversity in video games and esports. There's also Emma Llansó, the director of the Free Expression Project for the Center for Democracy and Technology.  

In an email sent to the members, Twitch reportedly told them that going forward, "the Safety Advisory Council will primarily be made up of individuals who serve as Twitch Ambassadors." The Amazon subsidiary didn't mention any names, but it describes its Ambassadors as people who "positively contribute to the Twitch community — from being role models for their community, to establishing new content genres, to having inspirational stories that empower those around them."

In a statement sent to The Verge, Twitch trust and safety communications manager Elizabeth Busby said that the new council members will "offer [the website] fresh, diverse perspectives" after working with the same core members for years. "We’re excited to work with our global Twitch Ambassadors, all of whom are active on Twitch, know our safety work first hand, and have a range of experiences to pull from," Busby added.

It's unclear if the Ambassadors taking the current council members' place will get paid or if they're expected to lend their help to the company for free. If it's the latter, then this development could be a cost-cutting measure: The outgoing members were paid between $10,000 and $20,000 a year, CNBC says. Back in January, Twitch also laid off 35 percent of its workforce to "cut costs" and to "build a more sustainable business." In the same month, it reduced how much streamers make from every Twitch Prime subscription they generate, as well.

This article originally appeared on Engadget at https://www.engadget.com/twitch-removes-every-member-of-its-safety-advisory-council-131501219.html?src=rss

OpenAI says it stopped multiple covert influence operations that abused its AI models

OpenAI said that it stopped five covert influence operations over the last three months that used its AI models for deceptive activities across the internet. These operations, which originated from Russia, China, Iran and Israel, attempted to manipulate public opinion and influence political outcomes without revealing their true identities or intentions, the company said on Thursday. “As of May 2024, these campaigns do not appear to have meaningfully increased their audience engagement or reach as a result of our services,” OpenAI said in a report about the operation, and added that it worked with people across the tech industry, civil society and governments to cut off these bad actors.

OpenAI’s report comes amidst concerns about the impact of generative AI on multiple elections around the world slated for this year including in the US. In its findings, OpenAI revealed how networks of people engaged in influence operations have used generative AI to generate text and images at much higher volumes than before, and fake engagement by using AI to generate fake comments on social media posts.

“Over the last year and a half there have been a lot of questions around what might happen if influence operations use generative AI,” Ben Nimmo, principal investigator on OpenAI’s Intelligence and Investigations team, told members of the media in a press briefing, according to Bloomberg. “With this report, we really want to start filling in some of the blanks.”

OpenAI said that the Russian operation, which it dubbed “Doppelganger”, used the company’s models to generate headlines, convert news articles to Facebook posts, and create comments in multiple languages to undermine support for Ukraine. Another Russian group used OpenAI's models to debug code for a Telegram bot that posted short political comments in English and Russian, targeting Ukraine, Moldova, the US, and Baltic States. The Chinese network "Spamouflage," known for its influence efforts across Facebook and Instagram, utilized OpenAI's models to research social media activity and generate text-based content in multiple languages across various platforms. The Iranian "International Union of Virtual Media" also used AI to generate content in multiple languages.

OpenAI’s disclosure is similar to the ones that other tech companies make from time to time. On Wednesday, for instance, Meta released its latest report on coordinated inauthentic behavior detailing how an Israeli marketing firm had used fake Facebook accounts to run an influence campaign on its platform that targeted people in the US and Canada.

This article originally appeared on Engadget at https://www.engadget.com/openai-says-it-stopped-multiple-covert-influence-operations-that-abused-its-ai-models-225115466.html?src=rss

The Internet Archive has been fending off DDoS attacks for days

If you couldn't access the Internet Archive and its Wayback Machine over the past few days, that's because the website has been under attack. In fact, the nonprofit organization announced in a blog post that it's currently in its "third day of warding off an intermittent DDoS cyber-attack." Over the Memorial Day weekend, the organization posted on Twitter/X that most of its services aren't available due to bad actors pummeling its website with "tens of thousands of fake information requests per second." On Tuesday morning, it warned that it's "continuing to experience service disruptions" because the attackers haven't stopped targeting it.

The website's data doesn't seem to be affected, though, and you could still look up previous pages' content whenever you could access it. "Thankfully the collections are safe, but we are sorry that the denial-of-service attack has knocked us offline intermittently during these last three days," Brewster Kahle, the founder of the Internet Archive, said in a statement. "With the support from others and the hard work of staff we are hardening our defenses to provide more reliable access to our library. What is new is this attack has been sustained, impactful, targeted, adaptive, and importantly, mean."

The Internet Archive has yet to identify the source of the attacks, but it did talk about how libraries and similar institutions are being targeted more frequently these days. One of the institutions it mentioned was the British Library, whose online information system was held hostage for ransom by a hacker group last year. It also talked about how it's being sued by the US book publishing and US recording industries, which accuse it of copyright infringement.

This article originally appeared on Engadget at https://www.engadget.com/the-internet-archive-has-been-fending-off-ddos-attacks-for-days-035950028.html?src=rss

Samsung reportedly requires independent repair stores to rat on customers using aftermarket parts

If you take your Samsung device to an independent shop for repair, Samsung requires the store to send your name, contact information, device identifier, and the nature of your complaint to the mothership. Worse, if the repair store detects that your device has been previously repaired with an aftermarket or non-Samsung part, Samsung requires the establishment to “immediately disassemble” your device and “immediately notify” the company.

These details were revealed thanks to 404 Media, which obtained a contract that Samsung requires all independent repair stores to sign in exchange for selling them genuine repair parts. Here’s the relevant section from the contract: “Company shall immediately disassemble all products that are created or assembled out of, comprised of, or that contain any Service Parts not purchased from Samsung.” It adds that the store “shall immediately notify Samsung in writing of the details and circumstances of any unauthorized use or misappropriation of any Service Part for any purpose other than pursuant to this Agreement. Samsung may terminate this Agreement if these terms are violated.” Samsung did not respond to a request for comment from Engadget.

Samsung’s contract is troubling — customers who take their devices to independent repair stores do not necessarily expect their personal information to be sent to the device manufacturer. And if they’ve previously repaired their devices using third-party parts that are often vastly cheaper than official ones (and just as good in many cases), they certainly do not expect a repair store to snitch on them to the manufacturer and have their device rendered unusable.

Experts who spoke to 404 Media said that consumers are within their rights to use third-party parts to repair devices they own under the Magnuson Moss Warranty Act, a federal law that governs consumer product warranties in the US. So far, Right to Repair legislation exists in 30 states in the country, according to the Public Interest Research Group (PIRG), a consumer advocacy organization. In states like New York, Minnesota and California, where this legislation goes into effect this year, contracts like the one Samsung makes repair stores sign would be illegal, 404 Media pointed out.

“This is exactly the kind of onerous, one-sided ‘agreement’ that necessitates the right-to-repair,” Kit Walsh, a staff attorney at the Electronic Frontier Foundation, told the publication. “In addition to the provision you mentioned about dismantling devices with third-party components, these create additional disincentives to getting devices repaired, which can harm both device security and the environment as repairable devices wind up in landfills.”

This isn’t the only device repair controversy that has landed Samsung in hot water. Hours before the report from 404 Media, repair blog and parts retailer iFixit announced that it was ending its collaboration with Samsung to launch a “Repair Hub” less than two years into the partnership. “Samsung’s approach to repairability does not align with our mission,” iFixit said in a blog post, citing the high prices of Samsung’s parts and the unrepairable nature of Samsung’s devices that “remained frustratingly glued together” as reasons for pulling the plug.

This article originally appeared on Engadget at https://www.engadget.com/samsung-reportedly-requires-independent-repair-stores-to-rat-on-customers-using-aftermarket-parts-203925729.html?src=rss