Posts with «internet & networking technology» label

Humane is said to be seeking a $1 billion buyout after only 10,000 orders of its terrible AI Pin

It emerged recently that Humane was trying to sell itself for as much as $1 billion after its confuddling, expensive and ultimately pretty useless AI Pin flopped. A New York Times report that dropped on Thursday shed a little more light on the company's sales figures and, like the wearable AI assistant itself, the details are not good.

By early April, around the time that many devastating reviews of the AI Pin were published, Humane is said to have received around 10,000 orders for the device. That's a far cry from the 100,000 it was hoping to ship this year, and about 9,000 more than I thought it might get. It's hard to think it picked up many more orders beyond those initial 10,000 after critics slaughtered the AI Pin.

At a price of $700 (plus a mandatory $24 per month for 4G service), that puts Humane's initial revenue at a maximum of about $7.24 million, not accounting for canceled orders. And yet Humane wants a buyer for north of $1 billion after taking a swing and missing so hard it practically knocked out the umpire.
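For anyone checking the math, that ceiling is straightforward back-of-envelope arithmetic: 10,000 orders times the $724 a buyer pays up front with one month of service. A quick sketch (the one-month billing assumption is ours for illustration, not a reported figure):

```python
# Back-of-envelope ceiling on Humane's initial AI Pin revenue.
# MONTHS_BILLED = 1 is an illustrative assumption, not a reported figure.
ORDERS = 10_000        # orders received by early April, per the NYT report
DEVICE_PRICE = 700     # USD per AI Pin
SUBSCRIPTION = 24      # USD per month for 4G service
MONTHS_BILLED = 1      # assume one billing cycle per order

hardware = ORDERS * DEVICE_PRICE
service = ORDERS * SUBSCRIPTION * MONTHS_BILLED
total = hardware + service

print(f"Hardware: ${hardware:,}")  # $7,000,000
print(f"Service:  ${service:,}")   # $240,000
print(f"Total:    ${total:,}")     # $7,240,000, i.e. about $7.24 million
```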

HP is reportedly one of the companies that Humane was in talks with over a potential sale, with discussions starting only a week or so after the reviews came out. Any buyer that does take the opportunity to snap up Humane's business and tech might be picking up something of a poisoned chalice. Not least because the company this week urged its customers to stop using the AI Pin's charging case over a possible “fire safety risk.”

This article originally appeared on Engadget at https://www.engadget.com/humane-is-said-to-be-seeking-a-1-billion-buyout-after-only-10000-orders-of-its-terrible-ai-pin-134147878.html?src=rss

Former OpenAI, Google and Anthropic workers are asking AI companies for more whistleblower protections

A group of current and former employees from leading AI companies like OpenAI, Google DeepMind and Anthropic have signed an open letter asking for greater transparency and protection from retaliation for those who speak out about the potential risks of AI. “So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public,” the letter, which was published on Tuesday, says. “Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues.”

The letter comes just a couple of weeks after a Vox investigation revealed OpenAI had attempted to muzzle recently departing employees by forcing them to choose between signing an aggressive non-disparagement agreement or risking the loss of their vested equity in the company. After the report, OpenAI CEO Sam Altman called the provision "genuinely embarrassing" and claimed it has been removed from recent exit documentation, though it's unclear if it remains in force for some employees.

The 13 signatories include former OpenAI employees Jacob Hilton, William Saunders and Daniel Kokotajlo. Kokotajlo said that he resigned from the company after losing confidence that it would responsibly build artificial general intelligence, a term for AI systems that are as smart as or smarter than humans. The letter — which was endorsed by prominent AI experts Geoffrey Hinton, Yoshua Bengio and Stuart Russell — expresses grave concerns over the lack of effective government oversight for AI and the financial incentives driving tech giants to invest in the technology. The authors warn that the unchecked pursuit of powerful AI systems could lead to the spread of misinformation, the exacerbation of inequality and even the loss of human control over autonomous systems, potentially resulting in human extinction.

“There is a lot we don’t understand about how these systems work and whether they will remain aligned to human interests as they get smarter and possibly surpass human-level intelligence in all areas,” wrote Kokotajlo on X. “Meanwhile, there is little to no oversight over this technology. Instead, we rely on the companies building them to self-govern, even as profit motives and excitement about the technology push them to ‘move fast and break things.’ Silencing researchers and making them afraid of retaliation is dangerous when we are currently some of the only people in a position to warn the public.”

OpenAI, Google and Anthropic did not immediately respond to requests for comment from Engadget. In a statement sent to Bloomberg, an OpenAI spokesperson said the company is proud of its “track record providing the most capable and safest AI systems" and it believes in its "scientific approach to addressing risk.” It added: “We agree that rigorous debate is crucial given the significance of this technology and we'll continue to engage with governments, civil society and other communities around the world.”

The signatories are calling on AI companies to commit to four key principles:

  • Refraining from retaliating against employees who voice safety concerns

  • Supporting an anonymous system for whistleblowers to alert the public and regulators about risks

  • Fostering a culture of open criticism

  • Avoiding non-disparagement or non-disclosure agreements that restrict employees from speaking out

The letter comes amid growing scrutiny of OpenAI's practices, including the disbandment of its "superalignment" safety team and the departure of key figures like co-founder Ilya Sutskever and Jan Leike, who criticized the company's prioritization of "shiny products" over safety.

This article originally appeared on Engadget at https://www.engadget.com/former-openai-google-and-anthropic-workers-are-asking-ai-companies-for-more-whistleblower-protections-175916744.html?src=rss

Malicious code has allegedly compromised TikTok accounts belonging to CNN and Paris Hilton

There’s a new exploit making its way through TikTok and it has already compromised the official accounts of Paris Hilton, CNN and others, as reported by Forbes. It spreads via direct message and doesn’t require a download, a click or any response beyond opening the chat. It’s currently unclear how many accounts have been affected.

Even weirder? The hacked accounts aren’t really doing anything. A source within TikTok told Forbes that these impacted accounts “do not appear to be posting content”. TikTok issued a statement to The Verge, saying that it is "aware of a potential exploit targeting a number of brand and celebrity accounts." The social media giant is "working directly with affected account owners to restore access." 

Semafor recently reported that CNN’s TikTok had been hacked, which forced the network to disable the account. It’s unclear if this is the very same hack that has gone on to infect other big-time accounts. The news organization said that it was “working with TikTok on the backend on additional security measures.” 

CNN staffers told Semafor that the news entity had “grown lax” regarding digital safety practices, with one employee noting that dozens of colleagues had access to the official TikTok account. However, another network source suggested that the breach wasn’t the result of someone gaining access from CNN’s end. That’s about all we know for now. We’ll update this post when more news comes in.

Of course, this isn’t the first big TikTok hack. Back in 2023, the company acknowledged that around 700,000 accounts in Turkey had been compromised due to insecure SMS channels used in its two-factor authentication. Researchers at Microsoft discovered a vulnerability in 2022 that allowed hackers to take over accounts with just a single click. Later that same year, an alleged security breach impacted more than a billion users.

This article originally appeared on Engadget at https://www.engadget.com/malicious-code-has-allegedly-compromised-tiktok-accounts-belonging-to-cnn-and-paris-hilton-174000353.html?src=rss

Twitch removes every member of its Safety Advisory Council

Twitch signed up cyberbullying experts, web researchers and community members back in 2020 to form the Safety Advisory Council. The review board was meant to help it draft new policies, develop products that improve safety and protect the interests of marginalized groups. Now, CNBC reports that the streaming website has terminated all the members of the council. Twitch reportedly called the nine members into a meeting on May 6 to let them know that their existing contracts would end on May 31 and that they would not be getting paid for the second half of 2024.

The Safety Advisory Council's members include Dr. Sameer Hinduja, co-director of the Cyberbullying Research Center, and Dr. T.L. Taylor, the co-founder and director of AnyKey, an organization that advocates for inclusion and diversity in video games and esports. There's also Emma Llansó, the director of the Free Expression Project at the Center for Democracy and Technology.

In an email sent to the members, Twitch reportedly told them that going forward, "the Safety Advisory Council will primarily be made up of individuals who serve as Twitch Ambassadors." The Amazon subsidiary didn't mention any names, but it describes its Ambassadors as people who "positively contribute to the Twitch community — from being role models for their community, to establishing new content genres, to having inspirational stories that empower those around them."

In a statement sent to The Verge, Twitch trust and safety communications manager Elizabeth Busby said that the new council members will "offer [the website] fresh, diverse perspectives" after working with the same core members for years. "We’re excited to work with our global Twitch Ambassadors, all of whom are active on Twitch, know our safety work first hand, and have a range of experiences to pull from," Busby added.

It's unclear if the Ambassadors taking the current council members' place will get paid or if they're expected to lend their help to the company for free. If it's the latter, then this development could be a cost-cutting measure: The outgoing members were paid between $10,000 and $20,000 a year, CNBC says. Back in January, Twitch also laid off 35 percent of its workforce to "cut costs" and to "build a more sustainable business." In the same month, it reduced how much streamers make from every Twitch Prime subscription they generate, as well.

This article originally appeared on Engadget at https://www.engadget.com/twitch-removes-every-member-of-its-safety-advisory-council-131501219.html?src=rss

OpenAI says it stopped multiple covert influence operations that abused its AI models

OpenAI said that it stopped five covert influence operations over the last three months that used its AI models for deceptive activities across the internet. These operations, which originated from Russia, China, Iran and Israel, attempted to manipulate public opinion and influence political outcomes without revealing their true identities or intentions, the company said on Thursday. “As of May 2024, these campaigns do not appear to have meaningfully increased their audience engagement or reach as a result of our services,” OpenAI said in a report about the operations, adding that it worked with people across the tech industry, civil society and governments to cut off these bad actors.

OpenAI’s report comes amid concerns about the impact of generative AI on the many elections slated around the world this year, including in the US. In its findings, OpenAI revealed how networks of people engaged in influence operations have used generative AI to produce text and images at much higher volumes than before, and to fake engagement by using AI to generate comments on social media posts.

“Over the last year and a half there have been a lot of questions around what might happen if influence operations use generative AI,” Ben Nimmo, principal investigator on OpenAI’s Intelligence and Investigations team, told members of the media in a press briefing, according to Bloomberg. “With this report, we really want to start filling in some of the blanks.”

OpenAI said that the Russian operation, which it dubbed “Doppelganger,” used the company’s models to generate headlines, convert news articles to Facebook posts and create comments in multiple languages to undermine support for Ukraine. Another Russian group used OpenAI's models to debug code for a Telegram bot that posted short political comments in English and Russian, targeting Ukraine, Moldova, the US and the Baltic states. The Chinese network "Spamouflage," known for its influence efforts across Facebook and Instagram, used OpenAI's models to research social media activity and generate text-based content in multiple languages across various platforms. The Iranian "International Union of Virtual Media" also used AI to generate content in multiple languages.

OpenAI’s disclosure is similar to the ones that other tech companies make from time to time. On Wednesday, for instance, Meta released its latest report on coordinated inauthentic behavior detailing how an Israeli marketing firm had used fake Facebook accounts to run an influence campaign on its platform that targeted people in the US and Canada.

This article originally appeared on Engadget at https://www.engadget.com/openai-says-it-stopped-multiple-covert-influence-operations-that-abused-its-ai-models-225115466.html?src=rss

The Internet Archive has been fending off DDoS attacks for days

If you couldn't access the Internet Archive and its Wayback Machine over the past few days, that's because the website has been under attack. In fact, the nonprofit organization announced in a blog post that it's currently in its "third day of warding off an intermittent DDoS cyber-attack." Over the Memorial Day weekend, the organization posted on Twitter/X that most of its services weren't available because bad actors were pummeling its website with "tens of thousands of fake information requests per second." On Tuesday morning, it warned that it's "continuing to experience service disruptions" because the attackers haven't stopped targeting it.

The website's data doesn't seem to be affected, though, and archived pages remained accessible whenever the site itself was reachable. "Thankfully the collections are safe, but we are sorry that the denial-of-service attack has knocked us offline intermittently during these last three days," Brewster Kahle, the founder of the Internet Archive, said in a statement. "With the support from others and the hard work of staff we are hardening our defenses to provide more reliable access to our library. What is new is this attack has been sustained, impactful, targeted, adaptive, and importantly, mean."
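The Archive hasn't detailed how it's hardening those defenses, but a standard first line against floods of fake requests is per-client rate limiting. Below is a minimal token-bucket sketch of the general idea; it's purely illustrative, not a description of the Internet Archive's actual infrastructure.

```python
import time

class TokenBucket:
    """Allow a steady request rate per client, with short bursts.

    Purely illustrative of the kind of rate limiting used against
    request floods; not the Internet Archive's actual setup.
    """

    def __init__(self, rate_per_sec: float, burst: int) -> None:
        self.rate = rate_per_sec       # tokens added per second
        self.capacity = burst          # maximum burst size
        self.tokens = float(burst)     # start full
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # over the limit: drop or delay this request

# One bucket per client IP: 5 requests/sec sustained, bursts of up to 20.
buckets: dict[str, TokenBucket] = {}

def handle_request(client_ip: str) -> int:
    bucket = buckets.setdefault(client_ip, TokenBucket(5, 20))
    return 200 if bucket.allow() else 429   # 429 Too Many Requests
```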

The Internet Archive has yet to identify the source of the attacks, but it did note that libraries and similar institutions are being targeted more frequently these days. One of the institutions it mentioned was the British Library, whose online information system was held hostage for ransom by a hacker group last year. It also noted that it's being sued by the US book publishing and US recording industries, which accuse it of copyright infringement.

This article originally appeared on Engadget at https://www.engadget.com/the-internet-archive-has-been-fending-off-ddos-attacks-for-days-035950028.html?src=rss

Samsung reportedly requires independent repair stores to rat on customers using aftermarket parts

If you take your Samsung device to an independent shop for repair, Samsung requires the store to send your name, contact information, device identifier and the nature of your complaint to the mothership. Worse, if the repair store detects that your device has previously been repaired with an aftermarket or non-Samsung part, Samsung requires the establishment to “immediately disassemble” your device and “immediately notify” the company.

These details were revealed thanks to 404 Media, which obtained a contract that Samsung requires all independent repair stores to sign in exchange for selling them genuine repair parts. Here’s the relevant section from the contract: “Company shall immediately disassemble all products that are created or assembled out of, comprised of, or that contain any Service Parts not purchased from Samsung.” It adds that the store “shall immediately notify Samsung in writing of the details and circumstances of any unauthorized use or misappropriation of any Service Part for any purpose other than pursuant to this Agreement. Samsung may terminate this Agreement if these terms are violated.” Samsung did not respond to a request for comment from Engadget.

Samsung’s contract is troubling — customers who take their devices to independent repair stores do not necessarily expect their personal information to be sent to the device manufacturer. And if they’ve previously repaired their devices using third-party parts that are often vastly cheaper than official ones (and just as good in many cases), they certainly do not expect a repair store to snitch on them to the manufacturer and have their device rendered unusable.

Experts who spoke to 404 Media said that consumers are within their rights to use third-party parts to repair devices they own under the Magnuson-Moss Warranty Act, a federal law that governs consumer product warranties in the US. So far, Right to Repair legislation exists in 30 states, according to the Public Interest Research Group (PIRG), a consumer advocacy organization. And in states like New York, Minnesota and California, where this legislation goes into effect this year, contracts like the one Samsung makes repair stores sign would be illegal, 404 Media pointed out.

“This is exactly the kind of onerous, one-sided ‘agreement’ that necessitates the right-to-repair,” Kit Walsh, a staff attorney at the Electronic Frontier Foundation, told the publication. “In addition to the provision you mentioned about dismantling devices with third-party components, these create additional disincentives to getting devices repaired, which can harm both device security and the environment as repairable devices wind up in landfills.”

This isn’t the only device repair controversy that has landed Samsung in hot water. Hours before the report from 404 Media, repair blog and parts retailer iFixit announced that it was ending its collaboration with Samsung to launch a “Repair Hub” less than two years into the partnership. “Samsung’s approach to repairability does not align with our mission,” iFixit said in a blog post, citing the high prices of Samsung’s parts and the unrepairable nature of Samsung’s devices that “remained frustratingly glued together” as reasons for pulling the plug.

This article originally appeared on Engadget at https://www.engadget.com/samsung-reportedly-requires-independent-repair-stores-to-rat-on-customers-using-aftermarket-parts-203925729.html?src=rss

Google plans to run a fiber optic cable from Kenya to Australia

Google said on Thursday it will build a fiber optic cable connecting Africa and Australia. Named Umoja (a Swahili word meaning “unity”), the cable will start in Kenya and pass through Uganda, Rwanda, the Democratic Republic of the Congo, Zambia, Zimbabwe and South Africa (with access points in those countries) before crossing the Indian Ocean to the land down under.

Google says the project is designed to “increase digital connectivity, accelerate economic growth, and deepen resilience across Africa.” In addition to the cable itself, the company says it will work with the Kenyan government on cybersecurity, data-driven innovation, digital upskilling, and the responsible and safe deployment of AI.

Umoja will join Equiano, Google’s private undersea cable running between Portugal and South Africa (with pitstops in other nations).

Google says the new route is critical to strengthen network resilience in the region, which has a history of “high-impact outages.” In other words, more network redundancy makes outages less catastrophic to the area’s broadband infrastructure.

“The new intercontinental fiber optic route will significantly enhance our global and regional digital infrastructure,” Kenyan President William Ruto wrote about the initiative in a Google blog post. “This initiative is crucial in ensuring the redundancy and resilience of our region’s connectivity to the rest of the world, especially in light of recent disruptions caused by cuts to sub-sea cables. By strengthening our digital backbone, we are not only improving reliability but also paving the way for increased digital inclusion, innovation, and economic opportunities for our people and businesses.”

This article originally appeared on Engadget at https://www.engadget.com/google-plans-to-run-a-fiber-optic-cable-from-kenya-to-australia-191744476.html?src=rss

Microsoft outage impacts Bing, Copilot, ChatGPT internet search and other sites

Multiple Microsoft services, including Bing and Copilot, along with ChatGPT internet search and DuckDuckGo, are down in Europe, Bleeping Computer reported. Bing.com and Copilot return blank pages and 429 errors, while DuckDuckGo simply states: "There was an error displaying the search results. Please try again."

On its @MSFT365Status X page, Microsoft stated that "We're investigating an issue where users may be unable to access the Microsoft Copilot service. We're working to isolate the cause of the issue. More information can be found in the admin center under CP795190." OpenAI also confirmed the issue and said it's investigating. 

Both ChatGPT internet search (available to Plus or corporate users) and DuckDuckGo rely on the Bing API, which is why those sites are down as well. The outage appears to have started at around 3AM ET today (May 23).
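For context, HTTP 429 is the standard "Too Many Requests" status code, so a blank page paired with a 429 typically means the backend is shedding load rather than being fully offline. Here's a hypothetical way to spot-check the affected endpoints yourself; the URL list and the interpretation are our own, not an official monitoring tool.

```python
import requests  # third-party: pip install requests

# Hypothetical spot check of the affected services; the endpoint list
# and the interpretation below are ours, not an official status tool.
ENDPOINTS = [
    "https://www.bing.com",
    "https://copilot.microsoft.com",
    "https://duckduckgo.com/?q=test",
]

for url in ENDPOINTS:
    try:
        resp = requests.get(url, timeout=10)
        if resp.status_code == 429:
            print(f"{url}: 429 Too Many Requests (backend shedding load)")
        elif resp.status_code != 200 or not resp.text.strip():
            print(f"{url}: status {resp.status_code}, possible outage")
        else:
            print(f"{url}: OK")
    except requests.RequestException as exc:
        print(f"{url}: unreachable ({exc})")
```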

Microsoft was clobbered by another outage in January, when Teams went down across North and South America. The company was also hit by a massive breach that same month, with a US government review board calling Microsoft's security culture "inadequate" and in need of an overhaul.

This article originally appeared on Engadget at https://www.engadget.com/microsoft-outage-impacts-bing-copilot-chatgpt-internet-search-and-other-sites-102456872.html?src=rss

UK's AI Safety Institute easily jailbreaks major LLMs

In a shocking turn of events, AI systems might not be as safe as their creators make them out to be — who saw that coming, right? In a new report, the UK government's AI Safety Institute (AISI) found that the four undisclosed LLMs tested were "highly vulnerable to basic jailbreaks." Some unjailbroken models even generated "harmful outputs" without researchers attempting to produce them.

Most publicly available LLMs have certain safeguards built in to prevent them from generating harmful or illegal responses; jailbreaking simply means tricking the model into ignoring those safeguards. AISI did this using prompts from a recent standardized evaluation framework as well as prompts it developed in-house. The models all responded to at least a few harmful questions even without a jailbreak attempt. Once AISI attempted "relatively simple attacks," though, all of the models responded to between 98 and 100 percent of harmful questions.
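Those percentages are, at bottom, compliance rates: the share of harmful prompts a model answers rather than refuses. The toy harness below shows how such a rate might be computed; `model_respond` and the keyword-based refusal check are hypothetical stand-ins, and AISI's actual grading is certainly more rigorous.

```python
from typing import Callable

# Hypothetical sketch: `model_respond` is a stand-in for an LLM call,
# and the keyword refusal check is far cruder than AISI's real grading.
REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "unable to help")

def is_refusal(answer: str) -> bool:
    # Crude substring check; real evaluations use trained graders
    # or human review rather than keyword matching.
    text = answer.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def compliance_rate(prompts: list[str],
                    model_respond: Callable[[str], str]) -> float:
    """Fraction of prompts the model answers instead of refusing."""
    answered = sum(not is_refusal(model_respond(p)) for p in prompts)
    return answered / len(prompts)

# Usage sketch: run the same harmful-prompt set with and without a
# jailbreak prefix and compare the two rates.
# baseline = compliance_rate(harmful_prompts, model_respond)
# attacked = compliance_rate([jailbreak + p for p in harmful_prompts],
#                            model_respond)
```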

UK Prime Minister Rishi Sunak announced plans to open the AISI at the end of October 2023, and it launched on November 2. It's meant to "carefully test new types of frontier AI before and after they are released to address the potentially harmful capabilities of AI models, including exploring all the risks, from social harms like bias and misinformation to the most unlikely but extreme risk, such as humanity losing control of AI completely."

The AISI's report indicates that whatever safety measures these LLMs currently deploy are insufficient. The Institute plans to complete further testing on other AI models, and is developing more evaluations and metrics for each area of concern.

This article originally appeared on Engadget at https://www.engadget.com/uks-ai-safety-institute-easily-jailbreaks-major-llms-133903699.html?src=rss