Posts by Katie Malone

Benevolent hackers clear stalking spyware from 75,000 phones

Unnamed hackers claim they breached spyware firm WebDetetive's servers and deleted device information to protect victims from surveillance, TechCrunch reported on Saturday. Users of the spyware won't get any new data from their targets. "Because #fuckstalkerware," the hackers wrote in a note obtained by TechCrunch.

Spyware gives its users unfettered access to a victim's device, whether that's a government surveilling citizens or an abuser stalking a survivor. The spyware advertises the ability to monitor everything a victim types, listen to phone calls and track locations for "less than a cup of coffee," all without being seen. It works by installing an app on a person's phone under an innocuous alias that goes undetected, giving full access to the device. The WebDetetive breach compromised more than 76,000 devices belonging to customers of the stalkerware and freed more than 1.5 gigabytes of data from the app's servers, according to the hackers.

While TechCrunch did not independently confirm the deletion of victims' data from the WebDetetive servers, a cache of data shared by the hackers provided a look at what they were able to accomplish. TechCrunch also worked with DDoSecrets, a nonprofit that logs exposed datasets, to verify and analyze the information. The hackers obtained customer information such as IP addresses and the devices they targeted.

This article originally appeared on Engadget at https://www.engadget.com/benevolent-hackers-clear-stalking-spyware-from-75000-phones-141904990.html?src=rss

Discord's March data breach only affected 180 users, but it's worth a security checkup

Discord started notifying users affected by a March data breach on Monday, about three months after the communications platform went public about the attack in May. Of the 150 million monthly users Discord reports having, only 180 had sensitive information exposed in the attack, according to a data breach notification filed with the Office of the Maine Attorney General. That means if you're a Discord user, you're much more likely to be affected by the Discord.io breach that hit 760,000 users earlier this month and ultimately led to the site shutting down.

Discord.io let Discord users make custom links for their channels. On August 14, a major data breach caused by a vulnerability in the website's code let a third-party attacker steal information and put it up for sale on a breached-data forum. The stolen data included hashed passwords, billing information and Discord IDs.

"We have decided to take down our site until further notice," Discord.io wrote in a post. The company plans a "a complete rewrite of our website's code, as well as a complete overhaul of our security practices" as it looks for a way to mitigate the breach and prevent future problems.

This is different from the Discord breach that the company may have reached out to you about this week. That incident, which affected Discord itself and not the unaffiliated Discord.io, happened earlier this year when an unauthorized user gained access to Discord data via a third-party service provider. The hacker stole data on service tickets belonging to 180 users, which included personal information like driver's license numbers. Discord is reaching out via email to let impacted users know about the incident and offering credit monitoring and identity theft protection services to prevent further damage.

This article originally appeared on Engadget at https://www.engadget.com/discord-data-breach-personal-information-discordio-shutdown-142950237.html?src=rss

New York City bans TikTok for government employees

New York City will ban TikTok from government devices, The Verge reported on Wednesday. City agencies have 30 days to remove the ByteDance-owned app from their devices, and effective immediately, employees are not allowed to download or use TikTok on their city-sanctioned tech. The move comes three years after New York state banned TikTok from government devices in 2020, according to the Times Union.

NYC Cyber Command, a subset of the Office of Technology and Innovation, spurred the decision after reporting to the city that TikTok posed a security threat. Other states and localities, notably Montana, have made waves by banning TikTok outright across their jurisdictions. More commonly, though, legislators have limited bans to government employees, as the federal government has. Thirty-three states across party lines now have restrictions on the use of TikTok on government-owned tech.

As legislation considering a total ban on TikTok and other apps affiliated with the Chinese government continues to resurface, ByteDance is fighting to prove that it's not a threat to national security. TikTok CEO Shou Chew even testified in front of Congress, reiterating that "ByteDance is not an agent of China."

The NYC Office of Technology and Innovation did not respond to a request for comment by the time of publication.

This article originally appeared on Engadget at https://www.engadget.com/new-york-city-bans-tiktok-for-government-employees-174806575.html?src=rss

With some tech savvy, you can disconnect your robot vacuum from the cloud

Robot vacuums may seem like mindless suction machines with wheels. But today, “basically these devices are like smartphones,” said Dennis Giese, a PhD student at Northeastern University who researches robot vacuum security. From internet connectivity to video recording to voice control, robot vacuums have become advanced Internet of Things devices, but the security upkeep hasn’t kept pace.

“You don't have any insight, what kind of data they’re recording, what kind of data is stored on the device, what kind of data is sent to the cloud,” Giese told Engadget. That might seem harmless for a device that sweeps your floors, but real-life consequences have already surfaced.

Take 2022, when an iRobot Roomba J7 captured private moments, including photos of a woman on the toilet, that the company sent to the startup Scale AI to label and train AI algorithms. Amazon, which has experienced countless surveillance and data privacy scandals of its own, is currently attempting to acquire iRobot for over $1.4 billion.

With all these features, robot vacuums can act as a surveillance system in your own home, meaning there’s a world where someone can access live view functions and spy on you. Companies can say this information is secure and only used when needed to improve your experience, but there’s not enough transparency for reviewers or consumers to figure out what’s actually going on. “People like me are catching the companies basically lying,” Giese said.

So Giese is on a mission to give people more control over the robot vacuums in their homes, because every device he’s tested has some sort of vulnerability. He spoke at DEF CON on Sunday about how people can hack their devices to disconnect them from the cloud. Not only does this help protect your data from being used by the company, it also gives you enough access to the device to repair it on your own terms. That “right to repair” ethos means that even if the warranty ends or the company goes bankrupt and stops supporting the device, you can still use it.

Unfortunately, hacking your robot vacuum’s firmware isn’t for newbies. It requires a level of technical expertise to figure out, according to Giese, but owners can still take steps to improve on-device data security. At a minimum, wipe all of your data before selling or getting rid of a robot vacuum. Even if the device is broken, “as a malicious person, I can just repair the device and can just power it on and extract the data from it,” Giese said. “If you can, do factory resets.”

Or, for full data privacy control but none of the convenience, stick to the standard push vacuum.

This article originally appeared on Engadget at https://www.engadget.com/robot-vacuum-security-privacy-irobot-cloud-133008625.html?src=rss

The legal loophole that lets the government search your phone

Despite the US ethos that you’re innocent until proven guilty in a court of law, law enforcement needs only a presumption of wrongdoing to find an excuse to search your digital devices. The tech to do this already exists, and murky legislation lets it happen, speakers from the Legal Aid Society said at DEF CON last Friday.

“Technically and legally there's not much really truly blocking the government from getting the information they want if they want it,” Allison Young, digital forensics analyst at The Legal Aid Society, told Engadget. It’s easy, too. Without picking up any new skills or tools, Young was able to find sensitive data that could be used, for example, to prosecute someone targeted for getting an abortion as the procedure becomes increasingly illegal across the country.

The problem isn’t just the state of local law, either; it’s embedded in the Constitution. As Diane Akerman, digital forensics attorney at the Legal Aid Society, explained, the Fourth Amendment hasn’t been updated to account for modern problems like digital data. The Fourth Amendment is meant to protect people from “unreasonable searches and seizures” by the US government. It’s where we get legal protections like warrants, which require law enforcement to obtain court approval before looking for evidence in your home, car or elsewhere.

Today, that includes your digital belongings too, from your phone to the cloud and beyond, making way for legal loopholes as tech advancements outpace the law. For example, there’s no way to challenge a search warrant before it is executed, Akerman said. For physical evidence that makes some sense, because we don’t want someone flushing evidence down a toilet.

That’s not how your social media accounts or data in the cloud work, though, because those digital records are much harder to scrub. So, law enforcement can get a warrant to search your device, and there’s no process to litigate in advance whether the warrant is appropriate. Even if there’s reason for the warrant, Akerman and Young showed that officers can use intentionally vague language to search your entire cell phone when they know the evidence may only be in one account.

“You litigate the issues once they already have the data, which means the cat is out of the bag a lot of the time, and even if it's suppressed in court, there's still other ways it can be used in court,” Akerman said. “There's no oversight for the way the government is executing warrants on digital devices.”

The issue is only exacerbated by the third-party apps you use. Under the courts’ reading of the Fourth Amendment, if you give your information to a third party, you’ve lost any expectation of privacy, Akerman said. Because of that, the government can often very easily get information from the cloud, even if it’s not entirely relevant to the case. “You would be furious if police busted down your door and copied five years of texts for you walking out on a parking ticket five years ago, it's just not proportional,” Young said.

There are no easy ways for individuals to better protect themselves from these searches. On a case-by-case basis there are ways to lock down your device, but those change with every update or new feature, Young said. Instead, both speakers pushed to put the onus back on the systems and structures that uphold this law, not on the individuals affected by it.

“I live in a world where I have to opt out of modern society to not have other people housing my data in some way,” Akerman said. “The question really should be like, what responsibility do those people have to us, since they have made us into their profit, rather than forcing me to opt out in order to protect myself?”

This article originally appeared on Engadget at https://www.engadget.com/government-warrant-search-phone-cloud-fourth-amendment-legal-191533735.html?src=rss

America's original hacking supergroup creates a free framework to improve app security

Cult of the Dead Cow (cDc), a hacking group known for its activist endeavors, has released an open source framework that developers can use to build secure apps. Veilid, launched at DEF CON on Friday, has options like letting users opt out of data collection and online tracking, part of the group’s mission to fight the commercialization of the internet.

“We feel that at some point, the internet became less of a landscape of knowledge and idea sharing, and more of a monetized corporate machine,” cDc leader Katelyn “medus4” Bowden said. “Our idea of what the internet should be looks more like the open landscape it once was, before our data became a commodity.”

As with other privacy products like Tor, cDc said there’s no profit motive behind Veilid, which was created “to promote ideals without the compromise of capitalism.” The group emphasized its focus on building for good, not profit, by throwing slight shade at Black Hat, a competing conference for industry professionals held in Las Vegas at the same time as DEF CON. “If you wanted to go make a bunch of money, you’d be over at Black Hat right now,” Bowden said to the audience of hackers.

The design standards behind Veilid are “like Tor and IPFS had sex and produced this thing,” cDc hacker Christien “DilDog” Rioux said at DEF CON. Tor is the privacy-focused web browser best known for its connections to the “dark web,” or unlisted websites. Run by a nonprofit, Tor routes web traffic through various “tunnels” to obscure who you are and what you’re browsing on the web. IPFS, or the InterPlanetary File System, is an open source protocol for sharing files and publishing data on a decentralized network.

The bigger Veilid gets, the more secure it will be, according to Rioux. That strength doesn’t come from the number of apps built on the framework, but from how many people use those apps, since every user furthers the routing of the nodes that make up the network. “The network gains strength by a single popular app,” Rioux said. “The big Veilid network is supported by the entire ecosystem, not just your app.” In the presentation, cDc likened the nodes to mutual aid, in the sense that they work to strengthen and support one another to make the entire network more secure.

Rioux explained that VLD0 will be the cryptography (the protocols that keep information secure) behind Veilid. It’s a mix of existing cryptographic building blocks, like Ed25519 to support authentication and XChaCha20-Poly1305, with its 192-bit nonce, for encryption. But, recognizing that advancing technology will change cryptography needs over time, cDc already has a plan to handle updates. “Every new version of our crypto system is supported alongside the old ones” so that there are no gaps in security, Rioux said. cDc also put other measures in place, like anti-spoofing, end-to-end encryption (even for data at rest) and data protection even if you lose your device.
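For a sense of what that combination looks like in practice, here’s a minimal, hypothetical sketch (not Veilid’s actual code) of the two primitives Rioux named, written with the PyNaCl library: Ed25519 to sign and verify a message, and the XChaCha20-Poly1305 AEAD to encrypt it. The key handling and message framing below are assumptions for illustration only.

```python
# Hypothetical sketch of the primitives Rioux named, not Veilid's real protocol.
# Requires PyNaCl (pip install pynacl).
import nacl.secret   # nacl.secret.Aead wraps XChaCha20-Poly1305 (192-bit nonce)
import nacl.signing  # Ed25519 signatures
import nacl.utils

# Authentication: sign a payload so a peer can verify which node sent it.
signing_key = nacl.signing.SigningKey.generate()
signed = signing_key.sign(b"hello from a node")
signing_key.verify_key.verify(signed)  # raises BadSignatureError if tampered with

# Confidentiality: encrypt the payload with the XChaCha20-Poly1305 AEAD.
key = nacl.utils.random(nacl.secret.Aead.KEY_SIZE)   # 32-byte symmetric key
box = nacl.secret.Aead(key)
ciphertext = box.encrypt(signed.message)              # a random 24-byte nonce is prepended
assert box.decrypt(ciphertext) == signed.message
```

In a real network, the symmetric key would come from a key exchange between nodes rather than a random value, and, as Rioux described, older cipher versions would stay supported alongside newer ones.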

Veilid and cDc aim to build an approachable internet with fewer ads and more privacy, according to Bowden. Veilid Chat, a messaging app similar to Signal, will be the first app built on the framework. You’ll be able to sign up without using a phone number, to cut down on personal identifiers, Bowden told Engadget in an email.

cDc is currently in the process of putting together a community and foundation to support the project. “There are a lot of folks who can’t see past web3 as far as privacy (we are more like the web2 we should have had), and really can’t process the idea that we’re doing this without a profit motive,” Bowden said.

cDc, known as the “original hacking supergroup,” counts inventing hacktivism, helping to develop Tor and pushing top companies to take privacy seriously among its most noted accomplishments. Notable members include Beto O'Rourke, the former US representative from Texas.

This article originally appeared on Engadget at https://www.engadget.com/americas-original-hacking-supergroup-creates-a-free-framework-to-improve-app-security-190043865.html?src=rss

Chip implants get under your skin so you can leave your keys at home

Software engineer Miana Windall has about 25 implants under her skin, ranging from magnets to RFID tech. While that might make your skin crawl if you’re squeamish, “for the most part, they’re not really noticeable,” she told Engadget. At the DEF CON security conference on Thursday, Windall talked about how she became interested in the implants, and her experience programming them for personal use, like scanning into her former office building.

RFID powers scannable items like subway cards and tap-to-pay credit cards. The relatively simple tech was first patented in the 1970s, and body modification dates back millennia. Despite this, RFID implants still haven’t reached their full potential, and they’re still a gimmick for a lot of people, Windall said. But if you want to go clubbing without bringing a bag, you can buy the right style of lock and implant a sensor you can’t lose to scan in and out of your home.

Still, they’re not magic. “Chip implants don't work like Hollywood movies,” Amal Graafstra, founder of the biohacking and implant service Dangerous Things, told Engadget. “They're not even active or alive or energized when there's no reader that is within a very close proximity.”

That means the scope of use for RFID implants is pretty limited; it’s mostly foundational tech that you have to be able to hack on yourself for it to be useful. There are limited out-of-the-box use cases, like the Tesla keycard implant that lets you start your car, but usually users have to copy certain key configurations onto the chip themselves. “When we sell the transponder, we’re selling a key but not the lock,” Graafstra said. The user has to have some technical savvy to make “the lock.”
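For a sense of what “making the lock” can involve, here’s a hypothetical sketch of a reader-side check built with the nfcpy library and a USB NFC reader: it reads a presented tag’s UID and compares it against an allow list. The UID value and the unlock action are placeholders, and a bare UID is a weak, cloneable credential, so a real installation would want a chip that supports cryptographic authentication.

```python
# Hypothetical "lock" sketch using nfcpy (pip install nfcpy) and a USB NFC reader.
# The UID below is a placeholder, not a real implant's identifier.
import nfc

AUTHORIZED_UIDS = {"04a1b2c3d4e580"}  # hex UID(s) of chips allowed to unlock the door

def on_connect(tag):
    uid = tag.identifier.hex()        # tag.identifier is the chip's UID as bytes
    if uid in AUTHORIZED_UIDS:
        print("access granted")       # a real build would drive a relay or smart lock here
    else:
        print("access denied:", uid)
    return True                       # hold the connection until the tag leaves the field

clf = nfc.ContactlessFrontend("usb")  # open the first USB reader nfcpy can find
try:
    clf.connect(rdwr={"on-connect": on_connect})  # blocks until a tag is presented and removed
finally:
    clf.close()
```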

It’s helpful to know that before going to a body modification artist or piercer to get one put in, or else you might end up with a chip you can’t use. “Do your research and make sure what you want is possible before you have surgery,” Windall said. That said, Windall herself has some inactive implants, which are harmless to keep under the skin.

Companies are now looking at RFID implants as security tools, too. There’s an inherent vulnerability in RFID tech because its access credentials have to be readable, which leaves them open to being stolen. But carrying those credentials as an implant at least prevents someone from easily stealing your access card or information.

“The chances of someone coming along and being able to scan your credential without you knowing about it, it's probably not that high,” Windall said. “You can't have your hand pickpocketed, at least not without a machete.”

Plus, as authentication becomes more important to prevent unauthorized account access, these implants could be used to prove your identity. As companies look to replace two-factor authentication with passkeys, putting those credentials under your skin could become possible. A passkey can be uploaded to a chip implant that verifies your identity, as opposed to a hardware key that could get lost or a text message verification that can be duped, according to Graafstra.

RFID implants don’t require FDA approval because they’re not medical devices. While they appear generally safe and secure, there are risk factors to consider, according to Harsha Gangadharbatla, PhD, a professor in the College of Media at the University of Colorado Boulder.

“Consumers should be fully aware of the ‘hidden’ costs (privacy, risks, and advertising messages) associated with such tech and not just the cost of getting such implants,” he said in an email to Engadget.

This article originally appeared on Engadget at https://www.engadget.com/chip-implants-get-under-your-skin-so-you-can-leave-your-keys-at-home-170008199.html?src=rss

Tech companies are selling your privacy back to you

Security makes money. Every day, companies hawk their latest privacy and security features, be it on a billboard, an internet ad or a commercial in your favorite show. Think of Apple’s “Privacy. That’s iPhone” campaign, browsers like DuckDuckGo using privacy to set themselves apart or the targeted Google cybersecurity ads on social media that I probably get because of my job. That’s good for consumer awareness of privacy, but it adds new jargon and complexity to purchasing decisions.

This resurgence of privacy-focused ads has a lot to do with the rise of data laws. That’s not to say advertising privacy is new (it dates back as far as these companies themselves), but regulation made compliance a selling point. The General Data Protection Regulation, the California Consumer Privacy Act and various other local laws forced companies to prioritize data privacy at the same time that consumers homed in on it, too.

Whitney Parker Mitchell, CEO and founder of Beacon Digital Marketing, told Engadget that, behind the scenes, when regulations get put in place, compliance folks get hired to weigh in on buying decisions, and privacy and security take on new emphasis. From there, companies decide whether to advertise privacy and security compliance based on the target buyer they have in mind.

“Where you emphasize that and how much information you put forward within your marketing materials really depends on how important that is to that primary buyer,” Mitchell said. A cell phone, for example, feels very personal and may put security front of mind, while in a product like a robot vacuum you may value convenience more than anything.

Still, privacy and security are dense, complex topics, which makes them less than ideal for pithy slogans. Oftentimes when marketers try to reduce them to something catchy, the important nuance gets lost or buzzwords blur reality. “The advertising campaigns can make the issue seem more simple or overly simplistic than it actually is,” said Aaron Massey, technologist and senior policy analyst for advertising technologies and platforms at the Future of Privacy Forum.

It's similar to the market for lemons — used cars, not the fruit — Massey told Engadget. It’s easy to make a marketing claim, but it’s very hard for the buyer to confirm that it’s true because they don’t have the specialized skill set to verify it.

So, to go along with the ad campaigns, more consumer-friendly privacy awareness is cropping up on our devices. “Companies are recognizing that privacy policies are not enough to really help consumers understand what is really happening with the data,” said Cobun Zweifel-Keegan, DC managing director of the International Association of Privacy Professionals. That includes efforts like privacy checkups that prompt you to update your settings with a pop-up at login.

It’s been a net positive for privacy and security; more regulation and consumer education have ultimately driven these ads. Still, there are things to look out for before taking them at face value. While no ad can offer a well-rounded, detailed treatment of the topic, subjective claims like “we’re the most secure” should raise skepticism. “It's best to look for claims that the company can clearly stand behind,” Zweifel-Keegan told Engadget, along with high-level staffing, like chief privacy or security officers, to back them up.

This article originally appeared on Engadget at https://www.engadget.com/tech-companies-data-privacy-ads-marketing-183049670.html?src=rss

Colorado education department discloses data breach spanning 16 years

After a ransomware attack in June, the Colorado Department of Higher Education (CDHE) notified students on Friday of a potential data leak. In the attack, "unauthorized actor(s)" who have not yet been publicly identified accessed CDHE systems. While authorities continue to investigate the full extent of the damage, the department has disclosed that the breach exposed personally identifiable information like names and Social Security numbers.

"The review of the impacted records is ongoing and once complete, CDHE will be notifying individuals who are potentially impacted by mail or email to the extent we have contact information," CDHE wrote in a Notice of Data Incident. But the department warns students that the impact of the breach reaches across programs, from public schools to adult education initiatives, over a 16 year time period.

In response, CDHE is offering those affected free access to Experian credit monitoring and identity theft protection. The department recommends impacted groups keep an eye on their account statements and credit reports for suspicious activity.

Education systems are a popular target for ransomware attacks. In 2022, at least 44 colleges and 45 school districts reported ransomware attacks, compared to a total of 88 in 2021, according to data from Emsisoft. The Government Accountability Office has recommended that the Department of Education and the Department of Homeland Security coordinate to evaluate school cybersecurity efforts across the country.

This article originally appeared on Engadget at https://www.engadget.com/colorado-department-of-education-data-leak-personal-information-143001196.html?src=rss

FBI investigation reveals that it was unknowingly using NSO-backed spyware

A New York Times investigation earlier this year uncovered that the US government used spyware made by Israeli hacking firm NSO. Now, after an FBI investigation into who was using the tech, the bureau has arrived at a confusing answer: itself, The New York Times reported on Monday.

Since 2021, the Biden administration has taken steps toward parting ways with NSO, given the firm's reputation for shady tools like Pegasus, which lets governments discreetly pull personal information from hacked phones without the user's knowledge. But even after the president signed an executive order in March banning government use of commercial spyware, an FBI contractor used NSO's geolocation product Landmark to track the locations of targets in Mexico.

The FBI had inked a deal with telecommunications firm Riva Networks to track drug smugglers in Mexico, according to The Times. The spyware let US officials track mobile phones by exploiting existing security gaps in the country's cellphone networks. While the FBI says it was misled by Riva Networks into using the tech and has since terminated the contract, people with direct knowledge of the situation said the FBI used the spyware as recently as this year.

This isn't the FBI's first run-in with NSO and its spyware tools. Prior to the executive order banning the products for government use, the agency considered using Pegasus to aid in its criminal investigations. Spyware broadly has gained a bad reputation for its use in surveilling citizens and suppressing political dissent, with NSO considered one of the largest players in the business.

This article originally appeared on Engadget at https://www.engadget.com/fbi-investigates-use-of-nso-spyware-pegasus-landmark-163949655.html?src=rss