Posts by Katie Malone

The best VPNs for 2023

VPNs have been having a moment. The once-niche way to protect your online behavior took off in part thanks to massive marketing budgets and influencer collaborations convincing consumers that a VPN can solve all their security woes. But picking the best option for your browsing needs means digging through claims that aren't always accurate, which makes it harder to figure out which service to subscribe to, or whether you really need one at all. We tested nine of the best VPNs available now to help you choose the right one for your needs.

What you should know about VPNs

VPNs are not a one-size-fits-all security solution. Instead, they’re just one part of keeping your data private and secure. Roya Ensafi, assistant professor of computer science and engineering at the University of Michigan, told Engadget that VPNs don’t protect against common threats like phishing attacks, nor do they protect your data from being stolen. But they do come in handy when you’re connecting to an untrusted network somewhere public because they tunnel and encrypt your traffic to the next hop.

In other words, VPNs mask the identity of your computer on the network and create an encrypted "tunnel" that prevents your internet service provider from accessing data about your browsing history. Even then, much of that data is stored with the VPN provider instead of your ISP, which means that using a poorly designed or poorly secured VPN service can still undermine your security.
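
To picture what that tunnel does, consider a toy sketch in Python. This is purely illustrative, not how any real VPN protocol works (real VPNs negotiate keys through a handshake rather than generating them locally), and it assumes the third-party cryptography package is installed:

# Toy sketch of the encrypted-tunnel idea; NOT a real VPN protocol.
# Assumes: pip install cryptography
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()   # real VPNs negotiate keys via a handshake
tunnel = Fernet(shared_key)

request = b"GET https://example.com/private-page HTTP/1.1"
ciphertext = tunnel.encrypt(request)

print(ciphertext)                  # all your ISP would see: opaque bytes
print(tunnel.decrypt(ciphertext))  # the VPN server recovers and forwards the request

Because only your device and the VPN server hold the key, everything in between sees ciphertext. The trade-off, as noted above, is that the VPN provider itself can now see what your ISP no longer can.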

That means sweeping claims that sound promising, like military-grade encryption or total digital invisibility, may not be totally accurate. Instead, Yael Grauer, program manager of Consumer Reports' online security guide, recommends looking for concrete security practices: open-source software with reproducible builds, up-to-date support for industry-standard protocols like WireGuard and IPsec, and the ability to defend against attack vectors like brute-force attempts.

Who are VPNs really for?

Before considering a VPN, make sure the rest of your online security is up to date. That means using complex passwords, enabling multifactor authentication and locking down your data sharing preferences. Even then, you probably don't need to be using a VPN all the time.

“If you're just worried about somebody sitting there passively and looking at your data then a VPN is great,” Jed Crandall, an associate professor at Arizona State University, told Engadget.

If you use public WiFi a lot, like while working at a coffee shop, then a VPN can help keep your information private. It's also helpful for hiding your activity from other people on your network if you don't want members of your household to know what you're up to online.

Getting around geoblocking has also become a popular use case, since a VPN can make you appear to be browsing from another part of the world. For example, you can access shows that are only available on Netflix in other countries, or play online games with people located all over the globe.

Are VPNs worth it?

Whether a VPN is worth it depends on how often you would actually use it for the cases above. If you travel a lot and rely on public WiFi, want to browse outside of your home country or want to keep your traffic hidden from your ISP, then investing in a VPN is useful. But keep in mind that VPNs often slow down your internet speed, so they may not be ideal to run all the time.

We recommend not relying on a VPN as your main cybersecurity tool, because it can provide a false sense of security that leaves you vulnerable to attack. Plus, a carelessly chosen VPN can be less secure than simply relying on your ISP: the provider could be based in a country with weaker data privacy regulations, be obligated to hand information over to law enforcement or have weak user data protection policies.

For users in professions like activism or journalism who want to really strengthen their internet security, options like the Tor browser may be a worthwhile alternative, according to Crandall. Tor is free and, while it's less user-friendly, it's built for anonymity and privacy.

How we tested

To test the security specs of different VPNs, we relied on pre-existing academic work through Consumer Reports, VPNalyzer and other sources. We referenced privacy policies, transparency reports and security audits made available to the public. We also considered past security incidents like data breaches.

We looked at price, usage limits, effects on internet speed, possible use cases, ease of use and additional "extra" features for each VPN provider. We tested the VPNs on an iPhone, a Google Pixel and a Mac to see the state of the apps across platforms. We used each service's "quick connect" feature to reach the "fastest" server available while measuring internet speed, access to IP address data, and DNS and WebRTC leaks, which occur when a fault in the encrypted tunnel exposes requests to the ISP.
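
For the curious, the core of an IP leak check is simple enough to sketch in a few lines of Python. This is a minimal illustration rather than the harness we used, and it assumes the free ipify echo service is reachable:

# Minimal IP leak spot-check; illustrative, not our actual test harness.
# Assumes the public echo service api.ipify.org is reachable.
import urllib.request

def public_ip() -> str:
    # Returns the public IP address the wider internet sees for this machine.
    with urllib.request.urlopen("https://api.ipify.org") as resp:
        return resp.read().decode().strip()

print("Public IP:", public_ip())
# Run once with the VPN off and once with it on: if the address doesn't
# change, traffic is escaping the tunnel.

Dedicated leak-testing tools go further, checking DNS resolvers and WebRTC behavior as well, but the before-and-after comparison is the heart of the test.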

We also tested geoblock circumvention by accessing Canada-exclusive Netflix releases, streaming by watching a news livestream on YouTube through a Hong Kong-based VPN and gaming by playing on servers in the United Kingdom. Running these tests at the same time also let us check each provider's claims about simultaneous device use.

VPNs we tested:

Best VPN overall: ProtonVPN

The VPNs we tried out ranked pretty consistently across all of our tests, but ProtonVPN stood out as a strong option because of its overall security and ease of use. The Proton Technologies suite of services includes mail, calendar, drive and a VPN known for its end-to-end encryption. This makes it a strong contender for overall security, but its VPN specifically came across as a well-rounded independent service.

ProtonVPN’s no-logs security policy has passed audits, and the company has proven not to comply with law enforcement requests. Because it is based in Switzerland, there are no forced logging obligations, according to the company. Plus, it’s based on an open-source framework, and has an official vulnerability disclosure program along with clear definitions on what it does with personal information.

While ProtonVPN offers a free version, it's limited compared to other options, with access to servers in just three countries. Its paid version, starting at about $5.39 per month, includes access to servers in more than 65 countries on 10 devices at a time. Dedicated Proton Technologies users can pay closer to $8.63 each month for access to the entire suite.

ProtonVPN passed our geoblock, streaming and gaming tests with only a very small toll on internet speed. It also comes with malware-, ad- and tracker-blocking as an additional service. It’s available on most major operating systems, routers, TV services and more including Firefox, Linux and Android TV.

Best free VPN: Windscribe

Sign up for Windscribe with your email and you get 10GB of data per month, unlimited connections and access to servers in more than 10 countries. We selected it as the best free VPN because of its strong security and wide range of server options compared to other free VPNs. It has over 500 servers in over 60 countries, according to the company, and can be configured for routers, smart TVs and more, on top of the usual operating systems.

Windscribe doesn’t have a recent independent security audit, but it does publish a transparency report showing that it has complied with zero requests for its data, runs a vulnerability disclosure program encouraging researchers to report flaws and offers multiple protocols for users to connect with.

On top of that, it's easy to use. The setup is intuitive, and it passed our geoblock, streaming and gaming tests. The paid version costs $5.75 to $9 each month, depending on the plan you choose, and includes unlimited data, access to all servers and an ad/tracker/malware blocker. Or, for $1 per location per month, users can build a plan tailored to the server locations they want to access.

Best for frequent travel, gaming and streaming: ExpressVPN

We picked the best VPN for travel, gaming and streaming based on which one had access to the most locations with high speed connections and no lag. ExpressVPN met all those criteria.

Our internet speed test measured faster upload and download speeds with ExpressVPN than with no VPN at all, a result practically unheard of among the services we tested. It's likely a fluke, perhaps caused by the VPN circumventing traffic shaping by the ISP, since even the best VPNs slow speeds to some degree. With 2,000 servers in 160 cities, according to the company, ExpressVPN had one of the broadest global reaches. It also passed our geoblock, streaming and gaming tests, and it undergoes regular security audits. Subscriptions cost $8.32 to $12.95 per month depending on the term of the plan, and include a password manager.

With ExpressVPN, users can connect to up to five devices at once, which is on the low side compared to other services. That said, it works on a wide range of devices, from smart TVs to game consoles, unlike some services that lack support beyond the usual suspects like smartphones and laptops.

Best cross-platform accessibility: CyberGhost

Because several VPN services can run on a router, cross-platform accessibility isn't always necessary. Connect a VPN to your home router and you can cover however many devices you have in your household, as long as they all access the internet through that router.

But if you use VPNs on the go, and across several devices, being able to connect to a wide range of platforms will be indispensable. CyberGhost offers simultaneous connectivity on up to seven devices for $2.11 to $12.99 per month depending on subscription term. It supports several types of gadgets like routers, computers, smart TVs and more. It’s similar to the support that ExpressVPN offers, but CyberGhost provides detailed instructions on how to set up the cross-platform connections, making it a bit more user-friendly for those purposes.

From a security perspective, CyberGhost completed an independent security audit by Deloitte earlier this year, runs a vulnerability disclosure program and provides access to a transparency report explaining requests for its data. While it did pass all of our tests, it’s worth noting that we had trouble connecting to servers in the United Kingdom and had to opt to run our gaming test through an Ireland-based server instead.

Best for multiple devices: Surfshark

As we mentioned before, connecting to a router can provide nearly unlimited access to devices in a single household. But Surfshark is one of few VPNs that offer use on an unlimited number of devices without bandwidth restrictions, according to the company. And you get that convenience without a significant increase in price: Surfshark subscriptions cost about $2.49 to $12.95 per month, and the company recently conducted its first independent audit.

We ran into some trouble connecting to Surfshark's WireGuard protocol, so we tested with the IKEv2 protocol instead. It was a bit slow and initially struggled to connect for our geoblock test, but ultimately passed. What sets Surfshark apart from other VPNs with unlimited connection options is that it has more servers and is available on more types of devices.

This article originally appeared on Engadget at https://www.engadget.com/best-vpn-130004396.html?src=rss

What do AI chatbots know about us, and who are they sharing it with?

AI chatbots are relatively old by tech standards, but the newest crop, led by OpenAI's ChatGPT and Google's Bard, is vastly more capable than its ancestors, and not always for positive reasons. The recent explosion in AI development has already created concerns around misinformation, disinformation, plagiarism and machine-generated malware. What problems might generative AI pose for the privacy of the average internet user? The answer, according to experts, is largely a matter of how these bots are trained and how much we plan to interact with them.

In order to replicate human-like interactions, AI chatbots are trained on massive amounts of data, a significant portion of which is derived from repositories like Common Crawl. As the name suggests, Common Crawl has amassed petabytes of data over years of crawling and scraping the open web. "These models are training on large data sets of publicly available data on the internet," said Megha Srivastava, a PhD student in Stanford's computer science department and former AI resident at Microsoft Research. Even though ChatGPT and Bard use what they call a "filtered" portion of Common Crawl's data, the sheer size of the model makes it "impossible for anyone to kind of look through the data and sanitize it," according to Srivastava.
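
To make "crawling and scraping" concrete, here is a toy Python sketch of the mechanism: fetch a public page and boil it down to text. It is a simplified illustration, not Common Crawl's actual tooling, and the URL is a stand-in:

# Toy illustration of the crawl-and-scrape step behind web-scale training data.
# This is NOT Common Crawl's pipeline; it shows how a public page becomes text.
from html.parser import HTMLParser
import urllib.request

class TextExtractor(HTMLParser):
    # Collects visible text, skipping script and style blocks.
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

url = "https://example.com"  # stand-in for any page a crawler might visit
with urllib.request.urlopen(url) as resp:
    html = resp.read().decode("utf-8", errors="replace")

parser = TextExtractor()
parser.feed(html)
print(" ".join(parser.chunks))  # scraped text, ready to join a training corpus

Repeat that across billions of pages and you have a web-scale corpus, along with whatever personal details happened to be sitting on those pages.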

Sensitive information you've shared, whether through your own carelessness or the poor security practices of a third party, could be in some far-flung corner of the internet right now. Even though it might be difficult for the average user to access, it's possible that information was scraped into a training set and could be regurgitated by a chatbot down the line. And a bot spitting out someone's actual contact information is in no way a theoretical concern. Bloomberg columnist Dave Lee posted on Twitter that, when someone asked ChatGPT to chat on the encrypted messaging platform Signal, it provided his exact phone number. This sort of interaction is likely an edge case, but the information these learning models have access to is still worth considering. "It's unlikely that OpenAI would want to collect specific information like healthcare data and attribute it to individuals in order to train its models," David Hoelzer, a fellow at the security organization SANS Institute, told Engadget. "But could it inadvertently be in there? Absolutely."

OpenAI, the company behind ChatGPT, did not respond when we asked what measures it takes to protect data privacy, or how it handles personally identifiable information that may be scraped into its training sets. So we did the next best thing and asked ChatGPT itself. It told us that it is "programmed to follow ethical and legal standards that protect users' privacy and personal information" and that it doesn't "have access to personal information unless it is provided to me." Google, for its part, told Engadget that it programmed similar guardrails into Bard to prevent the sharing of personally identifiable information during conversations.

Helpfully, ChatGPT brought up the second major vector by which generative AI might pose a privacy risk: use of the software itself, either via information shared directly in chat logs or via device and user information captured by the service. OpenAI's privacy policy cites several categories of standard information it collects on users, which could be identifiable, and when you start it up, ChatGPT cautions that conversations may be reviewed by its AI trainers to improve its systems.

Google's Bard, meanwhile, does not have a standalone privacy policy; it instead uses the blanket privacy document shared by other Google products (which happens to be tremendously broad). Conversations with Bard don't have to be saved to the user's Google account, and users can delete the conversations via Google, the company told Engadget. "In order to build and sustain user trust, they're going to have to be very transparent around privacy policies and data protection procedures at the front end," Rishi Jaitly, professor and distinguished humanities fellow at Virginia Tech, told Engadget.

ChatGPT does have a "clear conversations" action, but pressing it does not actually delete your data, according to the service's FAQ page, nor is OpenAI able to delete specific prompts. While the company discourages users from sharing anything sensitive, seemingly the only way to remove personally identifying information provided to ChatGPT is to delete your account, which the company says will permanently remove all associated data.

Hoelzer told Engadget he's not worried that ChatGPT is ingesting individual conversations in order to learn. But that conversation data is being stored somewhere, so its security becomes a reasonable concern. Indeed, ChatGPT was taken offline briefly in March after a programming error revealed information about users' chat histories. It's unclear this early in their broad deployment whether chat logs from these sorts of AI will become valuable targets for malicious actors.

For the foreseeable future, it's best to treat these sorts of chatbots with the same suspicion users should be treating any other tech product. “A user playing with these models should enter with expectation that any interaction they're having with the model," Srivastava told Engadget, "it's fair game for Open AI or any of these other companies to use for their benefit.”

This article originally appeared on Engadget at https://www.engadget.com/what-do-ai-chatbots-know-about-us-and-who-are-they-sharing-it-with-140013949.html?src=rss

The dos and don’ts of location sharing

It’s easy to say “yes” when an app or website asks for your location data just to get past the pop-up and back to scrolling, but it pays to be thoughtful about who you share it with and why. More often than not, it's more information than apps and websites really need to know about you.

Like other kinds of personal information, location data is presented by companies as a trade-off: consumers willingly expose where they are, usually in exchange for a more convenient user experience; the companies in turn gather crucial intel about customers and, more often than not, resell that data to third parties for additional profit. Those third parties, according to Cooper Quintin, a security researcher at the Electronic Frontier Foundation, can include data brokers and advertisers, as well as law enforcement, bounty hunters, journalists and just about anyone else with the money to purchase this information. It's one of the reasons our devices can feel like they're "listening" to us: they probably don't hear you telling a friend that you've been craving fast food, but they do know there's a McDonald's nearby, and they will serve up an advertisement for its french fries.

Because there aren’t federal laws or regulations currently in place to fully protect consumer information, it falls on individual users to navigate how they want that information to be spread. As you install new apps, don’t blindly agree to share location data, even if you think you have nothing to hide. “It's better just not to generate it in the first place,” Quintin said.

Is there ever a good reason to share this sensitive data with a company? A good rule of thumb is to avoid giving out location information unless the app requires it to function, according to Megan Iorio, senior counsel and amicus director at the Electronic Privacy Information Center. A maps app might need it to give you real-time directions; food delivery apps probably can get by with a simple address. Websites may ask for location permissions to enable convenience features, like a weather service, but will generate the same results from a zip code at a much lower risk. Even with the caveat that sharing location data is unavoidable in some instances, Iorio cautioned that providing apps or sites blanket access is never a good idea. “If you wind up needing location services, then you'll figure that out after using the app, but maybe the best strategy is to just tell everybody no until you actually realize that you need it,” Iorio said.

It's also best practice to revoke location permissions for any apps or sites you no longer use, or may have enabled thoughtlessly in the past. You can see what apps use your location data by going into the settings of your smartphone and navigating to the location sharing tab, usually in the privacy and security settings of most devices. That will list all of the apps with access to your location information, and give options to toggle it on or off. Apple, Samsung, Google and others all provide specific instructions on their websites. Popular browsers like Firefox, Chrome, Edge and Safari, also provide specific instructions on how to disable location sharing. Generally it's best not to choose "always allow" or similarly phrased options in-browser — instead wait for the pop-up requesting access and, if it's necessary, share location data on a case-by-case basis.

This article originally appeared on Engadget at https://www.engadget.com/the-dos-and-donts-of-location-sharing-140002009.html?src=rss

It took a TikToker barely 30 minutes to doxx me

In 30 minutes or less, TikToker and Chicago-based server Kristen Sotakoun can probably find your birth date. She’s not a cybersecurity expert, despite what some of her followers suspect, but has found a hobby in what she calls “consensual doxxing.”

“My first thing is to be entertaining. My second thing is to show you cracks in your social media, which was the totally accidental thing that I became on TikTok,” Sotakoun, who goes by @notkahnjunior, told me.

It's not quite doxxing, which usually refers to making private information publicly available with malicious intent. Instead, it's known in the cybersecurity field as open-source intelligence, or OSINT. People unknowingly spell out private details about their lives in a breadcrumb trail across social media platforms that, when gathered together, paints a picture of their age, family, embarrassing childhood memories and more. In malicious cases, hackers gather information based on what you or your loved ones have published on the web to break into your accounts, commit fraud or socially engineer you into falling for a scam.

Sotakoun mostly just tracks down anonymous volunteers' birth dates. She has no malicious intent or interest in a security career, she said; she just likes to solve logic puzzles. Before TikTok, that meant spending the ride home from a friend's birthday dinner at Medieval Times working out the day job of their "knight." Sotakoun just happened to eventually go viral for her skills.

So, to show me her process, I let Sotakoun "consensually doxx" me. She found my Twitter quickly but, because I keep it locked down, it wasn't super helpful. Information in author bios from my past jobs, however, helped her figure out where I went to college.

My name plus where I studied led her to my Facebook account, another profile that didn’t reveal much. It did, however, lead her to my sister, who had commented on my cover photo nine years ago. She figured out it was my sister because we shared a last name, and we’re listed as sisters on her Facebook. That’s important to note because I don’t actually share a last name with most of my other siblings, which could’ve been an additional roadblock.

My sister and I have pretty common names, though, so Sotakoun also found my stepmom on my sister's profile. Searching my stepmom's much more distinctive name on Instagram helped lead Sotakoun to my and my sister's Instagram accounts, as opposed to those of the many other Malones online.

Still, my Instagram account is private. So it was my sister's Instagram account, which she took off private for a Wawa giveaway that ultimately won her a t-shirt, whose years-old birthday posts led Sotakoun to the day I was born. That took a ton of scrolling, and, to correct for the fact that a birthday post can come a day early or late, Sotakoun relied on the fact that my sister once shared that my birthday coincides with World Penguin Day, April 25.

Then, to find the year, she cross-referenced the year I started college, which was 2016 according to my public LinkedIn, with information in my high school newspaper. My senior year of high school, I won a scholarship only available to seniors, Sotakoun discovered, revealing that I graduated high school in 2016. From there, she counted back 18 years, and told me that I was born on April 25, 1998. She was right.

“My goal is always to find context clues, or find people who care less about their online presence than you do,” Sotakoun said.

Many people will push back on the idea that having personal information online is harmful, according to Matt Edmondson, an OSINT instructor at cybersecurity training organization SANS Institute. While there are obvious repercussions to having your social security number blasted online, people may wonder what the harm is in seemingly trivial information like having your pet’s name easily available on social media. But if that also happens to be the answer to a security question, an attacker may be able to use that to get into your Twitter account or email.

In my case, I’ve always carefully tailored my digital footprint to keep my information hidden. My accounts are private and I don’t share a lot of personal information. Still, Sotakoun’s OSINT methods found plenty to work with.

Facebook and Instagram are Sotakoun's biggest help for finding information, but she said she has also used Twitter and even Venmo to confirm relationships. She specifically avoids resources like records databases that could easily give away information.

That means there's still a lot of data out there on each of us that Sotakoun isn't even looking for. Especially if you're in the US, data like your date of birth and home address is likely already out there in some form, according to Steven Harris, an OSINT specialist who teaches at SANS.

“Once the data is out there, it’s very hard to take back,” Harris said. “What protects people is not that the information is securely locked away, it’s that most people don’t have the knowledge or inclination to go and find out.”

There are simple things you can do to keep attackers from using these details against you. Complex passwords and multi-factor authentication make it harder for unauthorized users to get into your account, even if they know the answers to your security questions.

That gets a bit more complicated, though, when we think about how much our friends and family post for us. In fact, Sotakoun said she noticed that even if a person takes many measures to hide themselves online, the lack of control over their social circle can help her discover their birth date.

“You have basically no control on your immediate social circle, or even your slightly extended social circle and how they present themselves online,” she said.

This article originally appeared on Engadget at https://www.engadget.com/it-took-a-tiktoker-barely-30-minutes-to-doxx-me-120022880.html?src=rss

Eight months post-Roe, reproductive-health privacy is still messy

Data privacy awareness boomed last June when the Supreme Court overturned Roe v. Wade, limiting access to safe, legal abortion. Now, eight months later, privacy experts say not to let your guard down. Legislative bodies have made little progress on health data security.

We give up so much data each day that it’s easy to tune out. We blindly accept permissions or turn on location sharing, but that data can also be used by governing bodies to prosecute civilians or by attackers looking to extort individuals. That’s why, when SCOTUS declared access to abortion would no longer be a constitutional right, people began to scrutinize the amount of private health data they were sending to reproductive-health apps.

"The burden is really on consumers to figure out how a company, an app, a website is going to collect and then potentially use and share their data," said Andrew Crawford, senior counsel for privacy and data at the Center for Democracy and Technology.

There are still no widespread industry standards or federal laws protecting sensitive data, despite some increased regulatory action since last year. Even data that isn't considered personally identifiable or explicitly health-related can put people at risk: location data, for example, can show whether a patient traveled to receive an abortion, possibly exposing them to prosecution.

“Companies see that as information they can use to make money,” Jen Caltrider, lead at Mozilla’s consumer privacy organization Privacy Not Included, told Engadget. Research released by Caltrider’s team in August analyzed the security of 25 reproductive-health apps. Eighteen of them earned a privacy warning label for failing to meet privacy standards.

So, what’s left for users of reproductive-health apps to do? The obvious advice is to carefully read the terms and conditions before signing up in order to better understand what’s happening with their data. If you don’t have a legal degree and an hour to spare, though, there are some basic rules to follow. Turning off data sharing that isn’t necessary to the function of the app, using encrypted chats to talk about reproductive care, signing up for a trustworthy VPN and leaving your phone at home if you’re accessing reproductive health care can all help protect your information, according to Crawford.

While industry standards are still lacking, increased public scrutiny has led to some improvements. Some reproductive-health apps now store data locally as opposed to on a server, collect data anonymously so that it cannot be accessed by law enforcement or base operations in places like Europe that have stronger data privacy laws. We spoke with three popular apps that were given warning labels by Privacy Not Included last August to see what’s changed since then.

Glow’s Eve reproductive-health app introduced an option to store data locally instead of on its server, among other security measures. Glow told Engadget that it doesn't sell data and employees are required to take privacy and security training.

A similar app, Flo Health, has introduced an anonymous mode and hired a new privacy exec since the report. The company told Engadget that it hopes to expand its anonymous mode features in the future with additions like the ability to stop receiving IP addresses completely.

Clue, another app that landed on the warning list, adheres to the stricter privacy laws of the European Union known as General Data Protection Regulation, co-CEO Carrie Walter told Engadget. She added that the company will never cooperate with a government authority to use people’s health data against them, and recommended users keep up with updates to its privacy policy for more information.

But there are no one-and-done solutions. With permissions changing frequently, people who use health apps are also signing up to consistently check their settings.

“Apps change constantly, so keep doing your research, which is a burden to ask consumers,” Caltrider said. “Use anonymous modes, when they're available, store things locally, as much as you can. Don't share location if you can opt out of location sharing.”

This article originally appeared on Engadget at https://www.engadget.com/eight-months-post-roe-reproductive-health-privacy-is-still-messy-160058529.html?src=rss

Apple is convinced my dog is stalking me

As far as I know, no one is using an Apple AirTag to stalk me. But if that were to change, I’m not even sure I’d notice Apple’s attempts to warn me. The “AirTag Found Moving With You” notification near-constantly sits on my homescreen, and I’ve gotten used to quickly swiping it away.

But I'm getting ahead of myself. Let me tell you about my dog, Rosie. She's a sweet-tempered, mild-mannered rescue. Still, there was one catch when we adopted her: she's a flight risk.

We’ve seen this firsthand when the sound of fireworks or a strong wind causes her to enter a full-blown panic. Rosie shuts down, shakes and, when it’s really bad, tries to run away. We’re working on it, but, in the meantime, we’ve turned to Apple AirTags as an extra reassurance.

The $29, quarter-sized AirTag attached to her collar keeps track of her location so that we can quickly find her if she ever gets away. It's mostly for peace of mind (we've only had to use it once), but it's also quickly become an annoying part of my daily routine.

The problem is that the AirTag is registered to my partner's device. That means Apple doesn't recognize my iPhone in connection with the AirTag, and sees the unknown tracker as a threat to my safety. It sends a notification that there's an AirTag following me, which won't go away until I acknowledge its presence in the Find My app, and there's no way to tell it "hey, that's just Rosie!" to disable the recurring notification. Plus, the AirTag will ping and make sounds to alert me of its presence, confusing our already skittish dog.

[Image: An example of what the unwanted tracking notification looks like and options to proceed. (Katie Malone)]

These safety features exist for good reason. They can notify survivors that they're being followed and put them in control, letting them bring the tracker to law enforcement as proof of stalking if that's something they feel safe doing, Audace Garnett, technology safety project manager at the National Network to End Domestic Violence, told Engadget. In cases like that, the AirTag's persistence may be a welcome way to manage one's safety. Competitors like Bluetooth tracker Tile have taken note, implementing a $1 million penalty for people who use the product to stalk someone.

"For us, who are not being stalked or harassed, it may be an annoyance to us," Garnett said. "But for someone that's in a domestic violence situation, who is under power and control, this may be a life-saving tool for them."

There are a few viable solutions, but none quite worked for me. The notification provides an option to disable the AirTag, which would be helpful to stop an unwanted third party from knowing your location. But that feature renders the AirTag useless, meaning it would no longer be able to track my dog if she did get out.

There is a way to pause tracking notifications for a specific AirTag, but it only lasts 24 hours. Disabling Find My notifications entirely didn't work, so I tried disabling unwanted tracking notifications. But that setting turns off all unwanted tracking alerts, not just those for this specific AirTag; if someone were to slip a different AirTag into my bag, I wouldn't be notified of that either. (Either way, the AirTag would still ping and make other noises as a backup safety feature for folks without smartphones.)

My partner and I could always set up iCloud Family Sharing, a joint account that connects our devices. If we did that, I would unlock an option to cancel notifications for Rosie's AirTag. We currently have separate accounts, though, and aren't interested in fully merging our clouds. I could also replace the AirTag with another tracking device, like one of the slew of options designed specifically for pets, if I wanted to spend the additional cash to avoid this feature.

Or, I could deal with the minor inconvenience knowing that somewhere out there, this feature is helping someone else stay safe. I think I’ll go with that.

If you are experiencing domestic violence and similar abuse, you can contact the National Domestic Violence Hotline by phone at 1-800-799-SAFE (7233) or by texting "START" to 88788.

Twitter’s 2FA paywall is a good opportunity to upgrade your security practices

Twitter announced plans to pull a popular method of two-factor authentication for non-paying customers last week. Not only could this make your account more vulnerable to attack, but it may even undermine the platform’s security as a whole and set a dangerous precedent for other sites.

Two-factor authentication, or 2FA, adds a layer of security beyond password protection. Weak passwords that are easily guessed by hackers, leaked passwords or phishing attacks that can lure password details out of a user can all lead to unwanted third-party account access.

With 2FA, a user has another guard up. Simply entering a password isn’t enough to gain account access, and instead the user gets a notification via text message, or uses an authenticator app or security key to approve access.

“Two factor authentication shouldn't be behind a paywall,” Rachel Tobac, CEO of security awareness organization SocialProof Security, told Engadget, “especially not the most introductory level of two factor that we find most everyday users employing.”

Starting March 20, non-subscribers to Twitter will no longer be able to use text message authentication to get into their accounts. The feature will be automatically disabled if users don’t set up another form of 2FA. That puts users who don’t act quickly to update their settings at risk.

If you don’t want to pay $8 to $11 per month for a Twitter Blue subscription, there are still some options to keep your account secure. Under security and account access settings, Twitter users can change to “authentication app” or “security key” as their two-factor authentication method of choice.

Software-based authentication apps like Duo, Authy, Google Authenticator and the 2FA feature built into iPhones either send you a notification or, in the case of Twitter, generate a token that lets you complete your login. Instead of just a password, you'll have to type in the six-digit code from the authentication app before it grants access to your Twitter account.
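
Those rotating six-digit codes aren't proprietary magic: most authenticator apps implement an open standard called TOTP (RFC 6238), in which your phone and the website each derive the code from a shared secret and the current time. Here is a minimal Python sketch using only the standard library; the secret shown is a made-up example (real apps receive theirs from the site, usually via a QR code):

# Minimal TOTP (RFC 6238) sketch: how authenticator apps derive their codes.
# The shared secret below is a made-up example for illustration.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval        # 30-second time step
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # phone and server compute the same code

Because both sides can compute the code locally, nothing secret has to travel over SMS, which is part of what makes authenticator apps sturdier than text messages.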

Security keys work in a similar way, requiring an extra step to access an account. They're a hardware-based option that plugs into your computer or connects wirelessly to confirm your identity. Brands include YubiKey, Thetis and others.

Security keys are often considered more secure because a hacker would have to physically acquire the device to get in. 2FA methods that require a code to get in, like via text message or authentication app, are phishable, according to Tobac. In other words, hackers can deceive a user into giving up that code in order to get into the account. But hardware like security keys can’t be remotely accessed in the same way.

“Cyber attackers don't stand next to you when they hack you. They're hacking you through the phone, email, text message or social media DM,” Tobac said.

Still, putting any 2FA behind a paywall makes it less accessible for users, especially when the version being paywalled is as widely used as text-based authentication. Fewer people may be inclined to set up an alternative, or they may ignore the pop-ups from Twitter prompting them to update their accounts so that they can get back to tweeting, Tobac said.

Without 2FA, it's a lot easier for unauthorized actors to get into your account. And more compromised accounts make Twitter a less secure platform, with more potential for attacks and impersonation.

“When it's easier for us to take over accounts, myths and disinformation increase and bad actors are going to increase on the site because it's easier to gain access to an account with a large following that you can tweet out whatever you like pretending to be them,” Tobac said.

Twitter CEO Elon Musk implied that paywalling text-message-based 2FA would save the company money. The controversial decision comes after a privacy and security exodus at Twitter last fall, when, in the midst of layoffs, high-level officials like former chief information security officer Lea Kissner and former head of trust and safety Yoel Roth left the company.

Hyundai and Kia release software update to prevent TikTok thefts

Kia and Hyundai released a software update on Monday after a viral TikTok challenge taught users how to hack their vehicles. For now, though, it's only available for a select one million vehicles, out of the four million cars that will eventually need the patch.

It started with the "Kia Challenge," dating back to at least May on TikTok, which demonstrated how "Kia Boys" use USB cords to hot-wire cars. Owners soon caught on to the widespread theft and began suing the car manufacturers over their lack of response. The class action lawsuit said that certain Kia and Hyundai models lacked engine immobilizers, a common device that prevents car theft, making the cars easy to steal, TechCrunch reported last September.

Owners of affected models like the 2017-2020 Elantra, 2015-2019 Sonata and 2020-2021 Venue can visit a local dealership to install the anti-theft update, Hyundai said in a release. The update includes an anti-theft sticker to deter would-be thieves, a longer alarm and a requirement that a physical key, rather than just a push start, be used to turn the vehicle on. Updates for other affected vehicles will be available by June, and you can find the whole list on Hyundai's website.

In the meantime, Kia and Hyundai have provided about 26,000 steering wheel locks to vehicle owners to prevent theft, according to the National Highway Traffic Safety Administration. NHTSA got involved in the saga after thefts sparked by the Kia Challenge resulted in at least 14 reported crashes and eight fatalities, the agency said, turning it into a matter of public safety.

Update your Apple devices now to patch a security flaw

Apple released security updates to its operating systems on Monday to resolve a security flaw. While such updates are common, the company said in the announcement that the issue “may have been actively exploited,” meaning hackers could’ve taken advantage of the issue to access Apple devices.

Apple issued security updates for macOS Ventura, its latest iPhone and iPad products and its Safari web browser. Security updates for the Apple TV and Apple Watch operating systems were also slated for Monday, according to the Apple security updates website, but details had not been released at the time of publication. While the security flaws vary across devices, WebKit, Apple's open-source browser engine, was a common target.

Apple does not have additional details to share on the exploits beyond the update release notes, spokesperson Scott Radcliffe told Engadget.

The company credited Xinru Chi of Pangu Lab, Ned Williamson of Google Project Zero, Wenchao Li and Xiaolong Bai of Alibaba Group and an anonymous researcher for finding the flaws, with additional recognition to The Citizen Lab at The University of Toronto’s Munk School for their assistance.

Patches for exploited security flaws aren't unusual for Apple devices, but keeping yours up to date helps protect you from falling victim to attack. Apple generally doesn't reveal details of an exploit until a patch is publicly available. In August, the company released similarly timely patches for iPad, iPhone and macOS users.

The Citizen Lab had not responded to a request for comment at the time of publication.