Posts with «internet & networking technology» label

Surprise, this $30 video doorbell has serious security issues

Video doorbells manufactured by a Chinese company called Eken and sold under different brands for around $30 each come with serious security issues that put their users at risk, according to Consumer Reports. The publication found that these doorbell cameras are sold on popular marketplaces like Walmart, Sears and Amazon, which has even given some of their listings the Amazon Choice badge. They're listed under the brands Eken, Tuck, Fishbot, Rakeblue, Andoe, Gemee and Luckwolf, among others, and they're typically linked to a user's phone via the Aiwit app. Outside the US, the devices are sold on global marketplaces like Shein and Temu. We found them on Chinese website Alibaba and Southeast Asian e-commerce website Lazada, as well. 

Based on Consumer Reports' investigation, these devices aren't encrypted and can expose a user's home IP address and WiFi network name to the internet, making it easy for bad actors to gain entry. Worse, somebody with physical access to the doorbell could easily take control of it by creating an account on the Aiwit app and then pressing down on its button to put it into pairing mode, which then connects it with their phone. And, even if the original owner regains control, the hijacker can still get time-stamped images from the doorbell as long as they know its serial number. If they choose "to share that serial number with other individuals, or even post it online, all those people will be able to monitor the images, too," Consumer Reports explains. 

Based on the ratings these doorbells' listings got on Amazon, the platform has sold thousands to people who were probably expecting the devices to be able to provide some form of security for their homes. Instead, the devices pose a threat to their safety and privacy. The doorbells could even put people's well-being and lives at risk if, say, they have stalkers or are domestic violence victims with dangerous exes who want to follow their every move. 

People who own one of these video doorbells can protect themselves by disconnecting it from their WiFi and physically removing it from their homes. Consumer Reports said it notified the online marketplaces selling them about its findings in hopes that their listings would get pulled down. Temu told the publication that it's looking into the issue, but Amazon, Sears and Shein reportedly didn't even respond. 

This article originally appeared on Engadget at https://www.engadget.com/surprise-this-30-video-doorbell-has-serious-security-issues-130630193.html?src=rss

A year of NordVPN Plus is just $55 right now

If you work over public Wi-Fi, need to access geo-restricted content or just want to add an extra layer of privacy to your internet connection, you may want to use a VPN service. NordVPN is one of the most popular providers out there and right now, a digital code giving you a year of access to NordVPN Plus is going for $55 at Amazon. The plan also throws in one of our top password managers, NordPass. For comparison, right now a year of the Plus service is $72 directly from Nord. Of course, the best deals the company offers is on its two-year plans. Right now two years of the Plus service is $60 from Nord — so you're still saving $5 with Amazon's deal, plus you're not locked into a full two-year commitment. 

If you just want the VPN coverage without the password manager, you can get the standard service. It's $45 for a year of access, a savings of $15 over buying from Nord directly. And if you're just interested in the password manager, two-years of NordPass is down to $35, which is $5 less than going through Nord's site. 

We named Nord's password manager one of the best for cross-platform use in our guide to those services. The service keeps your credentials safe while making it easy to access your vault from whichever device or operating system you happen to be on. It also allows for biometric sign ins, making it even easier to get at your saved passwords. 

As for Nord's VPN, like all such services, it masks your IP address and encrypts your data to and from its destination. It also blocks your ISP from seeing data about your browsing. Just keep in mind that VPNs can't protect against other security risks like phishing and identity theft. Nord's VPN service didn't make the cut in our testing of such services, partly because we thought the price was a bit too high for the features provided. This deal removes some of the hesitation and we did find the service to be speedy and like that it's based on WireGuard, one of the more secure protocols. But the lack of open source software for most of its products and a less-than-stellar record of customer data privacy prevents it from being the best we can recommend. The service that did top our list was ProtonVPN. It's currently $72 for one year of the service alone. A bundle that includes a password manager, email and other services is currently $120 for the year. 

Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/a-year-of-nordvpn-plus-is-just-55-right-now-165142120.html?src=rss

Tumblr and WordPress posts will reportedly be used for OpenAI and Midjourney training

Tumblr and WordPress are reportedly set to strike deals to sell user data to artificial intelligence companies OpenAI and Midjourney. 404 Media reports that the platforms’ parent company, Automattic, is nearing completion of an agreement to provide data to help train the AI companies’ models.

It isn’t clear which data will be included, but the report suggests Automattic may have overreached initially. An alleged internal post from Tumblr product manager Cyle Gage suggests Automattic prepared to send private or partner-related data that wasn’t supposed to be included in the deal. The questionable content reportedly included private posts on public blog posts, deleted or suspended blogs, unanswered (therefore, not publicly posted) questions, private answers, posts marked explicit and content from premium partner blogs (like Apple’s former music site).

The internal post suggests Automattic’s engineers are preparing a list of post IDs that should have been excluded. It isn’t clear whether the data had already been sent to the AI companies. Engadget emailed Automattic to ask for comment on the report, and we’ll update this article if we hear back.

The company reportedly plans to launch a new opt-out tool on Wednesday that claims to allow users to block third parties — including AI companies — from training on their data. 404 Media reviewed an alleged internal FAQ Automattic prepared for the tool, which includes the answer, “If you opt out from the start, we will block crawlers from accessing your content by adding your site on a disallowed list. If you change your mind later, we also plan to update any partners about people who newly opt-out and ask that their content be removed from past sources and future training.” 

The phrasing, describing it as “asking” the AI companies to remove the data, may be relevant.

An alleged internal document from Automattic’s AI head, Andrew Spittle, replying to a staff question about data-removal assurances when using the tool, explains, “We will notify existing partners on a regular basis about anyone who’s opted out since the last time we provided a list. I want this to be an ongoing process where we regularly advocate for past content to be excluded based on current preferences. We will ask that content be deleted and removed from any future training runs. I believe partners will honor this based on our conversations with them to this point. I don’t think they gain much overall by retaining it.”

So, if a Tumblr or WordPress user requests to opt out of AI training, Automattic will allegedly “ask” and “advocate for” their removal. And the company’s AI boss “believes” the AI companies will find it in their best interest to comply “based on our conversations.” (How’s that for reassurance!)

AI data training deals have become a lucrative opportunity for websites treading water in today’s slippery online publishing landscape. (Tumblr’s staff was reportedly reduced to a skeleton crew in late 2023.) Last week, Google struck a deal with Reddit (ahead of the latter’s IPO) to train on the platform’s vast knowledge base of user-created content. Meanwhile, OpenAI rolled out a partnership program last year to collect datasets from third parties to help train its AI models.

This article originally appeared on Engadget at https://www.engadget.com/tumblr-and-wordpress-posts-will-reportedly-be-used-for-openai-and-midjourney-training-204425798.html?src=rss

FTC concludes Twitter didn’t violate data security rules, in spite of Musk's orders

The Federal Trade Commission (FTC) concluded Elon Musk ordered Twitter (now X) employees to take actions that would have violated an FTC consent decree regarding consumers’ data privacy and security. The investigation arose from the late 2022 episode informally known as “The Twitter Files,” where Musk ordered staff to let outside writers access internal documents from the company’s systems. However, the FTC says Twitter security veterans “took appropriate measures to protect consumers’ private information,” likely sparing Musk’s company from government repercussions by ignoring his directive.

FTC Chair Lina Khan discussed the conclusions in a public letter sent Tuesday to House Judiciary Committee Chair Jim Jordan (via The Washington Post). Jordan and his Republican colleagues have tried to turn the FTC’s investigation into a political wedge issue, framing the inquiry as a free speech violation — perhaps to shore up GOP support from Musk’s legion of rabid supporters. Jordan and his peers previously described the investigation as “attempts to harass, intimidate, and target an American business.”

Khan’s response to Jordan adopts a tone resembling that of a patient teacher explaining the nuance of a complicated situation to a child who insists on seeing simplistic absolutes. “FTC staff efforts to ensure Twitter was in compliance with the Order were appropriate and necessary, especially given Twitter’s history of privacy and security lapses and the fact that it had previously violated the 2011 FTC Order,” Khan wrote.

“When a firm has a history of repeat offenses, the FTC takes particular care to ensure compliance with its orders,” the FTC Chair wrote.

House Judiciary Chair Jim Jordan (R-OH)
Tom Williams via Getty Images

The FTC’s investigation stemmed from allegations that Musk, newly minted as Twitter’s owner, ordered staff to give outside writers “full access to everything” in late 2022. Had staff obeyed Musk’s directive, the company likely would have violated a settlement with the FTC (originally from 2011 but updated in 2022) requiring the company to tightly restrict access to consumer data.

In November 2022, the FTC said publicly it was monitoring Twitter’s developments following Musk’s acquisition with “deep concern.” That followed the resignation of chief information security officer Lea Kissner and other members of the company’s data governance committee. They expressed concerns that Musk’s launch of a new account verification system didn’t give them adequate time to deploy security reviews required by the FTC.

Ultimately, Twitter security veterans ignored Musk’s “full access to everything” order. “Longtime information security employees at Twitter intervened and implemented safeguards to mitigate the risks,” Khan wrote in the letter. “The FTC’s investigation confirmed that staff was right to be concerned, given that Twitter’s new CEO had directed employees to take actions that would have violated the FTC’s Order.”

FTC Chair Lina Khan
Slaven Vlasic via Getty Images

Rather than supplying outside writers with the “full access” Musk wanted them to have, Twitter employees accessed the systems and relayed select information to the group of outsiders. “Ultimately the third-party individuals did not receive direct access to Twitter’s systems, but instead worked with other company employees who accessed the systems on the individuals’ behalf,” Khan wrote.

The FTC says it will continue to monitor X’s adherence to the order. “When we heard credible public reports of potential violations of protections for Twitter users’ data, we moved swiftly to investigate,” FTC spokesman Douglas Farrar said in a statement to The Washington Post. “The order remains in place and the FTC continues to deploy the order’s tools to protect Twitter users’ data and ensure the company remains in compliance.”

This article originally appeared on Engadget at https://www.engadget.com/ftc-concludes-twitter-didnt-violate-data-security-rules-in-spite-of-musks-orders-191917132.html?src=rss

Reddit reportedly signed a multi-million content licensing deal with an AI company

Ever posted or left a comment on Reddit? Your words will soon be used to train an artificial intelligence companies' models, according to Bloomberg. The website signed a deal that's "worth about $60 million on an annualized basis" earlier this year, it reportedly told potential investors ahead of its expected initial public offering (IPO). Bloomberg didn't name the "large AI company" that's paying Reddit millions for access to its content, but their agreement could apparently serve as a model for future contracts, which could mean more multi-million deals for the firm. 

Reddit first announced that it was going to start charging companies for API access in April last year. It said at the time that pricing will be split in tiers so that even smaller clientele could afford to pay. Companies need that API access to be able to train their chatbots on posts and comments — a lot of which had been written by real people over the past 18 years — from subreddits on a wide variety of topics. However, that API is also used by other developers, including those providing users with third-party clients that are arguably better than Reddit's official app. Thousands of communities shut down last year in protest and even caused stability issues that affected the whole website. 

Reddit could go public as soon as next month with a $5 billion valuation. As Bloomberg notes, the website could convince investors still on fence to take the leap by showing them that it can make big money and grow its revenue through deals with AI companies. The firms behind generative AI technologies are working to update their large language models or LLMs through various partnerships, after all. OpenAI, for instance, already inked an agreement that would give it the right to use Business Insider and Politico articles to train its AI models. It's also in talks with several publishers, including CNN, Fox Corp and Time, Bloomberg says.  

OpenAI is facing several lawsuits that accuse it of using content without the express permission of copyright holders, though, including one filed by The New York Times in December. The AI company previously told Engadget that the lawsuit was unexpected, because it had ongoing "productive conversations" with the publication for a "high-value partnership."

This article originally appeared on Engadget at https://www.engadget.com/reddit-reportedly-signed-a-multi-million-content-licensing-deal-with-an-ai-company-124516009.html?src=rss

Defense Department alerts over 20,000 employees about email data breach

The Department of Defense sent a data breach notification letter to thousands of current and former employees alerting that their personal information had been leaked, DefenseScoop reported on Tuesday. While the department first detected the incident in early 2023, the notifications didn't begin to go out until earlier this month. More than 20,000 individuals appear to be affected by the breach. 

The letter explains that emails messages were "inadvertently exposed to the internet" by a Defense Department "service provider." The emails contained personally identifiable information. While the agency doesn't clarify what type of information, PII generally ranges from information like social security numbers, home address or other sensitive details. "While there is no evidence to suggest that your PII was misused, the department is notifying those individuals whose PII may have been breached as a result of this unfortunate situation," the letter says. It urges affected parties to sign up for identity theft protection.

According to TechCrunch, the breach stems from an unsecured cloud email server that leaked sensitive emails onto the web. The Microsoft server, which was likely misconfigured, could be accessed from the internet without so much as a password. 

"As a matter of practice and operations security, we do not comment on the status of our networks and systems. The affected server was identified and removed from public access on February 20, 2023, and the vendor has resolved the issues that resulted in the exposure," the Department of Defense said in a statement. "DOD continues to engage with the service provider on improving cyber event prevention and detection. Notification to affected individuals is ongoing."

This article originally appeared on Engadget at https://www.engadget.com/defense-department-alerts-over-20000-employees-about-email-data-breach-164528056.html?src=rss

Russian and North Korean hackers used OpenAI tools to hone cyberattacks

Microsoft and OpenAI say that several state-backed hacking groups are using the latter’s generative AI (GAI) tools to bolster cyberattacks. The pair suggests that new research details for the first time how hackers linked to foreign governments are making use of GAI. The groups in question have ties to China, Russia, North Korea and Iran.

According to the companies, the state actors are using GAI for code debugging, looking up open-source information to research targets, developing social engineering techniques, drafting phishing emails and translating text. OpenAI (which powers Microsoft GAI products such as Copilot) says it shut down the groups’ access to its GAI systems after finding out they were using its tools.

Notorious Russian group Forest Blizzard (better known as Fancy Bear or APT 12) was one of the state actors said to have used OpenAI's platform. The hackers used OpenAI tools "primarily for open-source research into satellite communication protocols and radar imaging technology, as well as for support with scripting tasks," the company said.

As part of its cybersecurity efforts, Microsoft says it tracks north of 300 hacking groups, including 160 nation-state actors. It shared its knowledge of them with OpenAI to help detect the hackers and shut down their accounts.

OpenAI says it invests in resources to pinpoint and disrupt threat actors' activities on its platforms. Its staff uses a number of methods to look into hackers' use of its systems, such as employing its own models to follow leads, analyzing how they interact with OpenAI tools and determining their broader objectives. Once it detects such illicit users, OpenAI says it disrupts their use of the platform through the likes of shutting down their accounts, terminating services or minimizing their access to resources.

This article originally appeared on Engadget at https://www.engadget.com/russian-and-north-korean-hackers-used-openai-tools-to-hone-cyberattacks-152424393.html?src=rss

Mozilla is laying off around 60 workers

Mozilla is the latest in a long line of tech companies to lay off employees this year. The not-for-profit company is firing around 60 people, which equates to roughly five percent of its workforce. Most of those who are leaving Mozilla worked on the product development team. The news was first reported by Bloomberg

“We’re scaling back investment in some product areas in order to focus on areas that we feel have the greatest chance of success,” a Mozilla spokesperson told Engadget in a statement. “To do so, we've made the difficult decision to eliminate approximately 60 roles from across the company. We intend to re-prioritize resources towards products like Firefox Mobile, where there’s a significant opportunity to grow and establish a better model for the industry.”

According to an internal memo obtained by TechCrunch, Mozilla plans to pare back investments on several products, including its VPN and a tool that automatically scrubs a user's personal information from data broker sites. The company announced the latter just one week ago. Hubs, the 3D virtual world Mozilla debuted in 2018, is shutting down while the company is also reducing resources dedicated to its Mastodon instance.

One area into which Mozilla does plan to funnel extra resources is, unsurprisingly, artificial intelligence. "In 2023, generative AI began rapidly shifting the industry landscape. Mozilla seized an opportunity to bring trustworthy AI into Firefox, largely driven by the Fakespot acquisition and the product integration work that followed," the memo reportedly reads. "Additionally, finding great content is still a critical use case for the internet. Therefore, as part of the changes today, we will be bringing together Pocket, Content and the AI/ML teams supporting content with the Firefox Organization."

The reorganization comes after Mozilla appointed a new CEO just last week. Former Airbnb, PayPal and eBay executive Laura Chambers, who joined Mozilla's board three years ago, was appointed chief executive for the rest of this year. "Her focus will be on delivering successful products that advance our mission and building platforms that accelerate momentum," Mitchell Baker, Mozilla's former long-time CEO and its new executive chairman, wrote when Chambers took on the job.

This article originally appeared on Engadget at https://www.engadget.com/mozilla-is-laying-off-around-60-workers-210313813.html?src=rss

Who makes money when AI reads the internet for us?

Last week, The Browser Company, a startup that makes the Arc web browser, released a slick new iPhone app called Arc Search. Instead of displaying links, its brand new “Browse for Me” feature reads the first handful of pages and summarizes them into a single, custom-built, Arc-formatted web page using large language models from OpenAI and others. If a user does click through to any of the actual pages, Arc Search blocks ads, cookies and trackers by default. Arc’s efforts to reimagine web browsing have received near-universal acclaim. But over the last few days, “Browse for Me” earned The Browser Company its first online backlash.

For decades, websites have served ads and pushed people visiting them towards paying for subscriptions. Monetizing traffic is one of the primary ways most creators on the web continue to make a living. Reducing the need for people to visit actual websites deprives those creators of compensation for their work, and disincentivizes them from publishing anything at all.

“Web creators are trying to share their knowledge and get supported while doing so”, tweeted Ben Goodger, a software engineer who helped create both Firefox and Chrome. “I get how this helps users. How does it help creators? Without them there is no web…” After all, if a web browser sucked out all information from web pages without users needing to actually visit them, why would anyone bother making websites in the first place?

The backlash has prompted the company’s co-founder and CEO Josh Miller to question the fundamental nature of how the web is monetized. Miller, who was previously a product director at the White House and worked at Facebook after it acquired his previous startup, Branch, told Goodger on X that how creators monetize web pages needs to evolve. He also told Platformer’s Casey Newton that generative AI presents an opportunity to “shake up the stagnant oligopoly that runs much of the web today” but admitted that he didn’t know how writers and creators who made the actual website that his browser scrapes from would be compensated. “It completely upends the economics of publishing on the internet,” he admitted.

Miller declined to speak to Engadget, and The Browser Company did not respond to Engadget’s questions.

Arc set itself apart from other web browsers by fundamentally rethinking how web browsers look and work ever since it was released to the general public in July last year. It did this by adding features like the ability to split multiple tabs vertically and offering a picture-in-picture mode for Google Meet video conferences. But for the last few months, Arc has been rapidly adding AI-powered features such as automatic web page summaries, ChatGPT integration and giving users the option to switch their default search engine to Perplexity, a Google rival that uses AI to provide answers to search queries by summarizing web pages in a chat-style interface and providing tiny citations to sources. The “Browse for Me” feature lands Arc smack in the middle of one of AI’s biggest ethical quandaries: who pays creators when AI products rip off and repurpose their content?

“The best thing about the internet is that somebody super passionate about something makes a website about the thing that they love,” tech entrepreneur and blogging pioneer Anil Dash told Engadget. “This new feature from Arc intermediates that and diminishes that.” In a post on Threads shortly after Arc released the app, Dash criticized modern search engines and AI chatbots that sucked up the internet’s content and aimed to stop people from visiting websites, calling them “deeply destructive.”

It’s easy, Dash said, to blame the pop-ups, cookies and intrusive advertisements that power the economic engine of the modern web as the reason why browsing feels broken now. And there may be signs that users are warming to the concept of having their information presented to them summarized by large language models rather than manually clicking around multiple web pages. On Thursday, Miller tweeted that people chose “Browse for Me” over regular Google search in Arc Search on mobile for approximately 32 percent of all queries. The company is currently working on making that the default search experience and also bringing it to its desktop browser.

“It’s not intellectually honest to say that this is better for users,” said Dash. “We only focus on short term user benefit and not the idea that users want to be fully informed about the impact they’re having on the entire digital ecosystem by doing this.” Summarizing this double-edged sword succinctly a food blogger tweeted at Miller, "As a consumer, this is awesome. As a blogger, I’m a lil afraid.”

Last week, Matt Karolian, the vice president of platforms, research and development at The Boston Globe typed “top Boston news” into Arc Search and hit “Browse for Me”. Within seconds, the app had scanned local Boston news sites and presented a list of headlines containing local developments and weather updates. “News orgs are gonna lose their shit about Arc Search,” Karolian posted on Threads. “It’ll read your journalism, summarize it for the user…and then if the user does click a link, they block the ads.”

Local news publishers, Karolian told Engadget, almost entirely depend on selling ads and subscriptions to readers who visit their websites to survive. “When tech platforms come along and disintermediate that experience without any regard for the impact it could have, it is deeply disappointing.” Arc Search does include prominent links and citations to the websites it summarizes from. But Karolian said that this misses the point. “It fails to ponder the consequences of what happens when you roll out products like this.”

Arc Search isn’t the only service using AI to summarize information from web pages. Google, the world’s biggest search engine, now offers AI-generated summaries to users’ queries at the top of its search results, something that experts have previously called “a bit like dropping a bomb right at the center of the information nexus.” Arc Search, however, goes a step beyond and eliminates search results altogether. Meanwhile, Miller has continued to tweet throughout the controversy, posting vague musings about websites in an “AI-first internet” while simultaneously releasing products based on concepts he has admittedly still not sorted out.

On a recent episode of The Vergecast that Miller appeared on, he compared what Arc Search might do to the economics of the web to what Craigslist did to business models of print newspapers. “I think it’s absolutely true that Arc Search and the fact that we remove the clutter and the BS and make you faster and get you what you need in a lot less time is objectively good for the vast majority of people, and it is also true that it breaks something,” he says. “It breaks a bit of the value exchange. We are grappling with a revolution with how software works and how computers work and that’s going to mess up some things.”

Karolian from The Globe said that the behavior of tech companies applying AI to content on the web reminded him of a monologue delivered by Ian Malcolm, one of the protagonists in Jurassic Park to park creator John Hammond about applying the power of technology without considering its impact: “Your scientists were so preoccupied with whether or not they could they didn’t stop if they should.”

This article originally appeared on Engadget at https://www.engadget.com/who-makes-money-when-ai-reads-the-internet-for-us-200246690.html?src=rss

Google, Apple, Meta and other huge tech companies join US consortium to advance responsible AI

A whole bunch of big tech companies, 200 in all, have joined a US-based effort to advance responsible AI practices. The US AI Safety Institute Consortium (AISIC) will count Meta, Google, Microsoft and Apple as members. Commerce Secretary Gina Raimondo just announced the group's numerous new members and said that they'll be tasked with carrying out actions indicated by President Biden’s sweeping executive order on artificial intelligence.

"The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence," Raimondo said in a statement.

Biden’s October executive order was far-reaching, so this consortium will focus on developing guidelines for “red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content.”

Red-teaming is a cybersecurity term that dates back to the Cold War. It refers to simulations in which the enemy was called the “red team.” In this case, the enemy would be an AI hellbent on behaving badly. Those engaged in this practice will try to trick the AI into doing bad things, like exposing credit card numbers, via prompt hacking. Once people know how to break the system, they can build better protections.

Watermarking synthetic content is another important aspect of Biden’s original order. Consortium members will develop guidelines and actions to ensure that users can easily identify AI-generated materials. This will hopefully decrease deepfake trickery and AI-enhanced misinformation. Digital watermarking has yet to be widely adopted, though this program will “facilitate and help standardize” underlying technical specifications behind the practice.

The consortium’s work is just beginning, though the Commerce Department says it represents the largest collection of testing and evaluation teams in the world. Biden’s executive order and this affiliated consortium are pretty much all we’ve got for now. Congress keeps failing to pass meaningful AI legislation of any kind.

This article originally appeared on Engadget at https://www.engadget.com/google-apple-meta-and-other-huge-tech-companies-join-us-consortium-to-advance-responsible-ai-164352301.html?src=rss