Instagram finally lets you edit DMs

Instagram launched direct messaging tools way back in 2013, but there hasn’t been a way to edit sent messages after the fact. That changes today. Meta just rolled out a software update for the Instagram app that finally allows for DM edits, with one major caveat: you only have 15 minutes to make any changes.

Here’s how it works. Once you send a direct message and realize you made a huge blunder, like “accidentally” professing your love to an old college buddy, just press and hold on the sent message. This will create a dropdown menu. Look for “edit” and make the required changes. Nobody will ever be the wiser, as long as you get there within 15 minutes.

This isn’t the only change to DMs found with today’s update. You can also pin up to three of your favorite direct messages on top of the feed, which is useful in the case of ongoing conversations. This can be done with standard one-on-one chats and group chats. Just hold on the conversation tab, look for the dropdown and tap “pin” to make the move.

The update also makes it easier to toggle read receipts on and off, depending on personal preference. The rest of today’s new features are cosmetic. There are new DM themes, and some include unique animations. Finally, the update features a way to save your favorite stickers in your DMs for easy access. Just hold on the sticker and it’ll be there next time you want it.

Meta’s constantly making changes to Instagram. Back in January, it began testing a feature that lets users access a secondary photo grid that only close friends can see. Late last year, the app got customizable story templates.

This article originally appeared on Engadget at https://www.engadget.com/instagram-finally-lets-you-edit-dms-183412692.html?src=rss

Saber Interactive may escape Embracer’s death hug and become a private company

Saber Interactive has reportedly found an exit strategy from the death grip of its parent company, Embracer Group AB. Bloomberg reported Thursday that “a group of private investors” will buy the studio in a deal worth roughly $500 million. Saber would then become a private company with about 3,500 employees.

Engadget emailed a spokesperson from Saber for confirmation about the alleged buyout. The studio declined to comment.

The alleged agreement would be one of Embracer’s most significant cost-cutting moves since the collapse of a reported $2 billion deal with a group backed by Saudi Arabia’s sovereign wealth fund. Some criticized the imperiled deal as the gaming equivalent of “sportswashing,” using popular sporting acquisitions and partnerships to boost beleaguered governments’ global images. That followed US intelligence’s conclusion that the Saudi regime murdered The Washington Post reporter Jamal Khashoggi in late 2018.

Other cost-cutting moves at Embracer have included laying off about 900 employees in September, cutting another 50 or so jobs at Chorus developer Fishlabs and implementing more layoffs at Tiny Tina’s Wonderland developer Lost Boys Interactive, Beamdog, Crystal Dynamics and Saber subsidiary New World Interactive. Embracer also closed Saints Row studio Volition Games and Campfire Cabal.

According to Bloomberg, Saber’s sale won’t affect the studio’s role in developing an upcoming Star Wars: Knights of the Old Republic (KOTOR) remake. That game has already changed hands once: One of Saber’s Eastern European studios took over from Aspyr Media in the summer of 2022.

Aspyr had reportedly already been working on the game for years before providing a demo for Lucasfilm and Sony in June 2022; a week later, Aspyr fired its design director and art director. (Reports that the KOTOR demo cost a disproportionate amount of time and money suggest a possible reason for the fallout.) By late that summer, Saber had taken over the development of the highly anticipated — and indefinitely delayed — remake.

Embracer bought Saber for $525 million in 2020 as it scooped up gaming studios left and right. It acquired at least 27 companies during that period, folding some of them (Demiurge Studios and New World Interactive) into Saber. Bloomberg reports that the deal to sell Saber to private investors includes an option to “bring along multiple Embracer subsidiaries.”

One studio that’s far too big to be included in this transaction is Borderlands developer Gearbox Entertainment. However, Kotaku reported Thursday that Gearbox CEO Randy Pitchford told staff this week that a decision about the studio’s future had been made. He allegedly said he’d be able to share more details with them next month.

In the meantime, a cloud of uncertainty envelops Gearbox — and Embracer’s other remaining studios. “I’ve personally been looking for roles elsewhere not just due to the Embracer layoff fears, but due to pay,” an anonymous developer reportedly said to Kotaku. “Vague and in a holding pattern is definitely par for the course at the moment and has been for most of 2023.”

This article originally appeared on Engadget at https://www.engadget.com/saber-interactive-may-escape-embracers-death-hug-and-become-a-private-company-203623311.html?src=rss

Google introduces a lightweight open AI model called Gemma

Google has released an open AI model called Gemma, which it says is created using the same research and technology that was used to build its Gemini AI models. The company says Gemma is its contribution to the open community and is meant to help developers "in building AI responsibly." As such, it also introduced the Responsible Generative AI Toolkit alongside Gemma. It contains a debugging tool, as well as a guide with best practices for AI development based on Google's experience.

The company has made Gemma available in two different sizes — Gemma 2B and Gemma 7B — each of which comes with pre-trained and instruction-tuned variants and is lightweight enough to run directly on a developer's laptop or desktop computer. Google says Gemma surpasses significantly larger models on key benchmarks, and that both sizes outperform other open models.

In addition to being powerful, the Gemma models were trained to be safe. Google used automated techniques to strip personal information from the data it used to train the models, and it used reinforcement learning based on human feedback to ensure Gemma's instruction-tuned variants show responsible behaviors. Companies and independent developers could use Gemma to create AI-powered applications, especially if none of the currently available open models are powerful enough for what they want to build. 

Google has plans to introduce even more Gemma variants in the future for an even more diverse range of applications. In the meantime, those who want to start working with the models right now can access them through the data science platform Kaggle, the company's Colab notebooks or Google Cloud.
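
As a quick illustration of how lightweight the models are to try, here's a minimal Python sketch for running an instruction-tuned Gemma variant through the Hugging Face transformers library. The checkpoint ID and access path are our assumptions rather than details from Google's announcement, so check the official model cards before relying on them:

```python
# A minimal sketch (our illustration, not Google's documented quickstart) of
# running a Gemma instruction-tuned variant locally with Hugging Face
# transformers (pip install transformers torch). The checkpoint name
# "google/gemma-2b-it" is an assumption; confirm it on the official model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b-it"  # assumed ID; if the repo is gated, accept its terms first
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Explain what an open-weights model is.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```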

This article originally appeared on Engadget at https://www.engadget.com/google-introduces-a-lightweight-open-ai-model-called-gemma-130053289.html?src=rss

Wyze camera security issue showed 13,000 users other owners' homes

Some Wyze camera owners have reported that they were suddenly given access to cameras that weren't theirs and even got notifications for events inside other people's homes. Wyze cofounder David Crosby has confirmed the issue to The Verge, telling the publication that "some users were able to see thumbnails of cameras that were not their own in the Events tab." Users started seeing strangers' camera feeds in their accounts after an outage that Wyze said was caused by an Amazon Web Services problem. 

Crosby wrote in a post on the Wyze forum that the company's servers got overloaded after the outage, which corrupted some user data. The security issue that resulted then allowed users to "see thumbnails of cameras that were not their own in the Events tab." Users couldn't view those videos and could only see their thumbnails, he clarified, and they were not able to view live streams from other people's cameras. Wyze identified 14 such incidents before taking down the Events tab altogether. 

The company said it's going to notify all affected users and that it has forcibly logged out everyone who has recently used the Wyze app in order to reset tokens. "We will explain in more detail once we finish investigating exactly how this happened and further steps we will take to make sure it doesn’t happen again," Crosby added. 

While the company doesn't have a detailed explanation for what happened yet, its swift confirmation of the incident is a huge departure from how it previously dealt with a security flaw. Back in 2022, cybersecurity firm Bitdefender revealed that in March 2019, it informed Wyze of a major security vulnerability in the Wyze Cam v1 model. The company didn't inform customers about the flaw, however, and didn't even issue a fix until three years later.

Update, February 20, 2024, 9:08PM ET: In an email received by Engadget, Wyze admits to affected users that "about 13,000 Wyze users received thumbnails from cameras that were not their own and 1,504 users tapped on them. Most taps enlarged the thumbnail, but in some cases an Event Video was able to be viewed." 

The company went on to explain that this glitch was caused by a mix-up of device ID and user ID mapping, due to a new third-party caching client library struggling to cope with the "unprecedented" data load from client devices rebooting all at once. Wyze promises to prevent this from happening again by adding "a new layer of verification" for connections, and that it'll look for more reliable client libraries to cope with such incidents.
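
Wyze hasn't published its code, but a toy Python sketch can illustrate the general failure mode it describes: if a caching layer hands back corrupted or mismatched device-to-user mappings, an events feed keyed on those mappings will show one user another user's cameras. Everything below is hypothetical:

```python
# Hypothetical sketch of the failure mode Wyze describes: an events feed
# built from a device-ID -> user-ID cache. If an overload corrupts a cache
# entry, the feed serves one user thumbnails from another user's camera.
device_owner_cache = {"cam-1": "alice", "cam-2": "bob"}

def event_thumbnails_for(user_id: str) -> list[str]:
    """Return thumbnails for every device the cache attributes to this user."""
    return [device for device, owner in device_owner_cache.items()
            if owner == user_id]

print(event_thumbnails_for("alice"))  # ['cam-1'] -- correct

# Simulate a corrupted mapping after a mass reconnect overwhelms the cache:
device_owner_cache["cam-2"] = "alice"
print(event_thumbnails_for("alice"))  # ['cam-1', 'cam-2'] -- bob's camera leaks
```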

This article originally appeared on Engadget at https://www.engadget.com/wyze-camera-security-issue-showed-13000-users-other-owners-homes-140059551.html?src=rss

Google's Gemini 1.5 Pro is a new, more efficient AI model

On Thursday, Google unveiled Gemini 1.5 Pro, which the company describes as delivering “dramatically enhanced performance” over the previous model. The company’s AI trajectory — viewed internally as increasingly critical for its future — follows the unveiling of Gemini 1.0 Ultra last week, alongside the rebranding of the Bard chatbot (to Gemini) to align with the new model’s more powerful and versatile capabilities.

In an announcement blog post, Google CEO Sundar Pichai and Google DeepMind CEO Demis Hassabis try to balance assurances about ethical AI safety with touting their models’ rapidly advancing capabilities. “Our teams continue pushing the frontiers of our latest models with safety at the core,” Pichai summarized.

The company needs to emphasize safety for AI skeptics (including one former Google CEO) and government regulators. But it also needs to stress its models’ accelerating performance for AI developers, potential customers and investors concerned the company was too slow to react to OpenAI’s breakout success with ChatGPT.

Pichai and Hassabis say Gemini 1.5 Pro delivers comparable results to Gemini 1.0 Ultra. However, Gemini 1.5 performs at that level more efficiently, with reduced computational requirements. The multimodal capabilities include processing text, images, videos, audio or code. As AI models advance, they’ll continue to offer a more versatile array of capabilities in one prompt box (another recent example was OpenAI integrating DALL-E 3 image generation into ChatGPT).

Google CEO Sundar Pichai (ALAIN JOCARD via Getty Images)

Gemini 1.5 Pro can also handle up to one million tokens, the units of data AI models process in a single request. Google says Gemini 1.5 Pro can process over 700,000 words, an hour of video, 11 hours of audio and codebases with over 30,000 lines of code. The company says it’s even “successfully tested” a version that supports up to 10 million tokens.

The company says Gemini 1.5 Pro maintains high accuracy even as queries grow to larger token counts. It says the model impressed in the Needle In a Haystack evaluation, in which developers insert a small piece of information inside a long text block to see if the AI model can pick it out. Google said Gemini 1.5 Pro could find the embedded text 99 percent of the time in data blocks as long as one million tokens.
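
To make the test concrete, here's a hypothetical Python sketch of how a needle-in-a-haystack evaluation can be constructed. The filler text, needle and model call are all stand-ins, not Google's actual harness:

```python
# Hypothetical needle-in-a-haystack harness: hide one out-of-place fact at a
# random depth in a long filler context, then check whether the model's
# answer recovers it. The model call itself is a stand-in.
import random

FILLER = "The quick brown fox jumps over the lazy dog. "
NEEDLE = "The magic number Engadget readers should remember is 7419. "

def build_prompt(num_sentences: int) -> str:
    sentences = [FILLER] * num_sentences
    sentences.insert(random.randrange(num_sentences), NEEDLE)  # bury the needle
    return "".join(sentences) + "\nWhat magic number should Engadget readers remember?"

def passed(answer: str) -> bool:
    return "7419" in answer

prompt = build_prompt(num_sentences=100_000)  # scale toward million-token contexts
# answer = call_model(prompt)  # hypothetical API call to the model under test
# print(passed(answer))
```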

Google says Gemini 1.5 Pro can reason about various details from the 402-page Apollo 11 moon mission transcripts. In addition, it can analyze plot points and events from an uploaded 44-minute silent film starring Buster Keaton. “As 1.5 Pro’s long context window is the first of its kind among large-scale models, we’re continuously developing new evaluations and benchmarks for testing its novel capabilities,” Hassabis wrote.

Google is launching Gemini 1.5 Pro with a 128,000-token context window, the same limit at which OpenAI’s (publicly announced) GPT-4 models max out. Hassabis says Google will eventually introduce new pricing tiers that support queries of up to one million tokens.

Google DeepMind CEO Demis Hassabis (Joy Malone via Getty Images)

Gemini 1.5 Pro is also adept at learning new skills from information in long prompts — without additional fine-tuning (“in-context learning”). In a benchmark called Machine Translation from One Book, the model was given a grammar manual for Kalamang, a language with fewer than 200 speakers globally that it hadn’t previously been trained on. The company says that when translating English to Kalamang, Gemini 1.5 Pro performed at a level similar to a human who had studied the same material.

In a piece of the announcement that will catch developers’ attention, Google says Gemini 1.5 Pro can perform problem-solving tasks across longer code blocks. “When given a prompt with more than 100,000 lines of code, it can better reason across examples, suggest helpful modifications and give explanations about how different parts of the code works,” Hassabis wrote.

On the ethics and safety front, Google says it’s taking “the same approach to responsible deployment” it took with Gemini 1.0 models. That includes developing and applying red-teaming techniques, where a group of ethical developers essentially serve as devil’s advocate, testing for “a range of potential harms.” In addition, the company says it heavily scrutinizes areas like content safety and representational harms. The company says it continues to develop new ethical and safety tests for its AI tools.

Google is launching Gemini 1.5 in early access for developers and enterprise customers. The company plans to make it more widely available eventually. Gemini 1.0 is currently available for consumers, alongside the $20-a-month Gemini Advanced tier powered by Ultra 1.0.

This article originally appeared on Engadget at https://www.engadget.com/googles-gemini-15-pro-is-a-new-more-efficient-ai-model-181909354.html?src=rss

Russian and North Korean hackers used OpenAI tools to hone cyberattacks

Microsoft and OpenAI say that several state-backed hacking groups are using the latter’s generative AI (GAI) tools to bolster cyberattacks. The pair says the new research details for the first time how hackers linked to foreign governments are making use of GAI. The groups in question have ties to China, Russia, North Korea and Iran.

According to the companies, the state actors are using GAI for code debugging, looking up open-source information to research targets, developing social engineering techniques, drafting phishing emails and translating text. OpenAI (which powers Microsoft GAI products such as Copilot) says it shut down the groups’ access to its GAI systems after finding out they were using its tools.

Notorious Russian group Forest Blizzard (better known as Fancy Bear or APT28) was one of the state actors said to have used OpenAI's platform. The hackers used OpenAI tools "primarily for open-source research into satellite communication protocols and radar imaging technology, as well as for support with scripting tasks," the company said.

As part of its cybersecurity efforts, Microsoft says it tracks north of 300 hacking groups, including 160 nation-state actors. It shared its knowledge of them with OpenAI to help detect the hackers and shut down their accounts.

OpenAI says it invests in resources to pinpoint and disrupt threat actors' activities on its platforms. Its staff uses a number of methods to look into hackers' use of its systems, such as employing its own models to follow leads, analyzing how they interact with OpenAI tools and determining their broader objectives. Once it detects such illicit users, OpenAI says it disrupts their use of the platform by shutting down their accounts, terminating services or limiting their access to resources.

This article originally appeared on Engadget at https://www.engadget.com/russian-and-north-korean-hackers-used-openai-tools-to-hone-cyberattacks-152424393.html?src=rss

Uber, Lyft and DoorDash drivers are striking on February 14

It could be a challenge hailing a ride from certain airports on Valentine's Day this year. Thousands of rideshare and delivery drivers for Uber, Lyft and DoorDash are planning to hold a demonstration on February 14 to demand fair pay and better security measures, according to Reuters. The strike was announced last week by Justice for App Workers, a coalition representing more than 100,000 rideshare and delivery drivers across the US. 

Based on the group's page for the rally, workers participating in the demonstration won't be taking rides to and from any airport in Austin, Chicago, Hartford, Miami, Newark, Orlando, Philadelphia, Pittsburgh, Rhode Island and Tampa. The coalition is asking drivers to join the event and "demand changes from Uber, Lyft, DoorDash, and all the app companies profiting off of [their] hard work." Meanwhile, Rideshare Drivers United, an independent union for Uber and Lyft drivers in Los Angeles, also revealed that its members are turning off their apps on February 14 to protest "the significant decrease in pay [they've] all felt this winter."

While the strikes could see the participation of tens of thousands of workers, Uber believes they won't have an impact on its business, since only a small portion of its drivers typically takes part in demonstrations. The company told The Hill and CBS News that a similar protest last year didn't affect its operations and that its driver earnings remain "strong." In the fourth quarter of 2023, "drivers in the US were making about $33 per utilized hour," a spokesperson said. 

The groups announced the strikes just a few days after Lyft promised guaranteed weekly earnings for its drivers in the country, ensuring that they'll make at least 70 percent of what their riders pay. DoorDash didn't respond to the publications' requests for comment, but it currently pays its drivers $29.93 for every active hour in states with minimum wage requirements for app-based delivery workers. It recently introduced new fees for customers in New York City and Seattle in response to their new minimum wage regulations.  

This article originally appeared on Engadget at https://www.engadget.com/uber-lyft-and-doordash-drivers-are-striking-on-february-14-055949899.html?src=rss

Google rebrands its Bard AI chatbot as Gemini, which now has its own Android app

Just as Microsoft renamed Bing Chat to Copilot to unify its generative AI branding, Google is doing the same thing with Bard and Duet AI. The services now bear the name Gemini, after Google's multimodal AI model. The name change leaked earlier this month. Google has also debuted a dedicated Gemini Android app alongside a paid version of the chatbot with enhanced capabilities.

"Bard has been the best way for people to directly experience our most capable models," Google CEO Sundar Pichai wrote in a blog post. "To reflect the advanced tech at its core, Bard will now simply be called Gemini. It’s available in 40 languages on the web and is coming to a new Gemini app on Android and on the Google app on iOS."

Those who download the Gemini Android app can actually replace Google Assistant as the default assistant on their device. So, when you long press the home button or utter "Hey Google," your phone or tablet can fire up Gemini instead of Assistant. You can also make this switch by opting in through Assistant.

Doing so will enable a new conversational overlay on your display. Along with swift access to Gemini, the overlay will offer contextual suggestions, such as the ability to generate a description for a photo you just took or ask for more information about an article that's on your screen.

You'll also be able to access commonly used Assistant features through the Gemini app, from making calls and setting timers to controlling smart home devices. Google said it will bring more Assistant functions to Gemini in the future. That certainly makes it sound as though Google is phasing out Assistant in favor of Gemini. The app also includes access to Gemini Advanced (more on that in a moment).

As for iOS, there won't be a separate Gemini app for now. Instead, you can access it through the Google app by tapping the Gemini toggle.

Gemini is available on Android and iOS in English in the US starting today. Next week, Google will start offering access to the chatbot in more locales in English, as well as in Japanese and Korean. As you might expect, Gemini is coming to more countries and languages down the line.

In addition, Google is opening up access to what it says is its largest and most capable AI model, Ultra 1.0, through Gemini Advanced. The company claims it can hold longer, more in-depth conversations and recall context from previous chats. It says Gemini Advanced "is far more capable at highly complex tasks like coding, logical reasoning, following nuanced instructions and collaborating on creative projects."

Gemini Advanced is available now in English in 150 countries and territories. To access it, you'll need to sign up for the new Google One AI Premium Plan. This costs $20 per month — the same price as Copilot Pro — after a two-month free trial. Along with Gemini Advanced, this subscription includes everything from the Google One Premium Plan, including 2TB of storage and a VPN. Subscribers will also be able to use Gemini in apps such as Gmail, Docs, Slides and Sheets in the near future (this is replacing Duet AI).

Of note, Google says it sought to mitigate concerns such as bias and unsafe content while building Gemini Advanced and other AI products. The company says it carried out "extensive trust and safety checks, including external red-teaming" (i.e. testing by third-party ethical hackers) on Gemini Advanced before refining the model with reinforcement learning and fine tuning.

This article originally appeared on Engadget at https://www.engadget.com/google-rebrands-its-bard-ai-chatbot-as-gemini-which-now-has-its-own-android-app-151303210.html?src=rss

How security experts unravel ransomware

Hackers use ransomware to go after every industry, charging as much money as they can to return access to a victim's files. It’s a lucrative business to be in. In the first six months of 2023, ransomware gangs bilked $449 million from their targets, even though most governments advise against paying ransoms. Increasingly, security professionals are coming together with law enforcement to provide free decryption tools — freeing locked files and eliminating the temptation for victims to pony up.

There are a few main ways that ransomware decryptors go about coming up with tools: reverse engineering for mistakes, working with law enforcement and gathering publicly available encryption keys. The length of the process varies depending on how complex the code is, but it usually requires information on the encrypted files, unencrypted versions of the files and server information from the hacking group. “Just having the output encrypted file is usually useless. You need the sample itself, the executable file,” said Jakub Kroustek, malware research director at antivirus business Avast. It’s not easy, but it does pay dividends to the impacted victims when it works.

First, we have to understand how encryption works. For a very basic example, let's say a piece of data might have started as a cognizable sentence, but appears like "J qsfgfs dbut up epht" once it's been encrypted. If we know that one of the unencrypted words in "J qsfgfs dbut up epht" is supposed to be "cats," we can start to determine what pattern was applied to the original text to get the encrypted result. In this case, it's just the standard English alphabet with each letter moved forward one place: A becomes B, B becomes C, and "I prefer cats to dogs" becomes the string of nonsense above. It’s much more complex for the sorts of encryption used by ransomware gangs, but the principle remains the same. The pattern of encryption is also known as the 'key', and by deducing the key, researchers can create a tool that can decrypt the files.
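
To make that concrete, here's a short Python sketch (ours, not code from any real decryption tool) that recovers the shift from the known word and decrypts the rest of the message:

```python
# Recover a Caesar/shift-cipher key from a known plaintext-ciphertext pair,
# then use it to decrypt the rest of the message (lowercased for simplicity).

def shift_char(c: str, shift: int) -> str:
    """Shift a single lowercase letter by `shift` places, wrapping at 'z'."""
    if not c.isalpha():
        return c  # leave spaces and punctuation untouched
    return chr((ord(c) - ord('a') + shift) % 26 + ord('a'))

def recover_shift(known_plain: str, known_cipher: str) -> int:
    """Deduce the shift by comparing one known word to its encrypted form."""
    return (ord(known_cipher[0]) - ord(known_plain[0])) % 26

ciphertext = "j qsfgfs dbut up epht"
shift = recover_shift("cats", "dbut")        # we know "dbut" decrypts to "cats"
plaintext = "".join(shift_char(c, -shift) for c in ciphertext)
print(plaintext)  # -> "i prefer cats to dogs"
```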

Some forms of encryption, like the Advanced Encryption Standard (AES) with 128-, 192- or 256-bit keys, are virtually unbreakable. At its most advanced level, bits of unencrypted "plaintext" data, divided into chunks called "blocks," are put through 14 rounds of transformation, and then output in their encrypted — or "ciphertext" — form. “We don’t have the quantum computing technology yet that can break encryption technology,” said Jon Clay, vice president of threat intelligence at security software company Trend Micro. But luckily for victims, hackers don’t always use strong methods like AES to encrypt files.
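
For illustration, here's what using AES-256 looks like in Python with the widely used third-party cryptography package. This is a generic sketch of the standard in action, not code tied to any ransomware or research tool:

```python
# Illustration: encrypting one 16-byte block with AES-256 using the
# `cryptography` package (pip install cryptography). With a random key,
# the ciphertext below is computationally infeasible to reverse.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)             # 256-bit key -> 14 internal rounds
iv = os.urandom(16)              # initialization vector for CBC mode
plaintext = b"sixteen byte msg"  # exactly one 128-bit block

encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
assert decryptor.update(ciphertext) + decryptor.finalize() == plaintext
```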

While some cryptographic schemes are virtually uncrackable, it’s a difficult science to perfect, and inexperienced hackers will likely make mistakes. If the hackers don’t apply a standard scheme, like AES, and instead opt to build their own, the researchers can then dig around for errors. Why would they do this? Mostly ego. “They want to do something themselves because they like it or they think it's better for speed purposes,” Jornt van der Wiel, a cybersecurity researcher at Kaspersky, said.

For example, here’s how Kaspersky decrypted the Yanluowang ransomware strain. It was a targeted strain aimed at specific companies, with an unknown list of victims. Yanluowang used the Sosemanuk stream cipher to encrypt data: a free-for-use process that encrypts the plaintext file one digit at a time. Then, it encrypted the key using an RSA algorithm, another type of encryption standard. But there was a flaw in the pattern. The researchers were able to compare the plaintext to the encrypted version, as explained above, and reverse engineer a decryption tool, which is now available for free. In fact, there are tons of strains that have already been cracked through the No More Ransom project.

Ransomware decryptors will use their knowledge of software engineering and cryptography to get the ransomware key and, from there, create a decryption tool, according to Kroustek. More advanced cryptographic processes may require either brute forcing, or making educated guesses based on the information available. Sometimes hackers use a pseudo-random number generator to create the key. A true RNG will be random, duh, but that means it won’t be easily predicted. A pseudo-RNG, as explained by van der Wiel, may rely on an existing pattern in order to appear random when it's actually not — the pattern might be based on the time it was created, for example. If researchers know a portion of that, they can try different time values until they deduce the key.
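
Here's a toy Python sketch of that attack on a time-seeded pseudo-RNG. The "ransomware" below is invented for illustration (a simple XOR cipher), but the seed-replay idea is the one van der Wiel describes:

```python
# Toy demonstration: recovering a time-seeded pseudo-RNG key by brute force.
# A hypothetical ransomware derives its key from the Unix timestamp at
# encryption time; knowing the rough time window lets us replay every seed.
import random

def keygen(seed: int) -> bytes:
    """The (weak) key derivation we assume the malware used."""
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(16))

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Attacker side: a file encrypted at some unknown second on a known day.
true_seed = 1_700_000_123
ciphertext = xor_encrypt(b"QUARTERLY REPORT", keygen(true_seed))

# Researcher side: we know a snippet of plaintext and the rough time window.
known_plaintext = b"QUARTERLY"
for seed in range(1_700_000_000, 1_700_086_400):  # one day's worth of seconds
    if xor_encrypt(ciphertext, keygen(seed)).startswith(known_plaintext):
        print("recovered seed:", seed)
        break
```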

But getting that key often relies on working with law enforcement to get more information about how the hacking groups work. If researchers are able to get the hacker’s IP address, they can ask local police to seize servers and get a memory dump of their contents. Or, if hackers have used a proxy server to obscure their location, police might use traffic analyzers like NetFlow to determine where the traffic goes and get the information from there, according to van der Wiel. The Budapest Convention on Cybercrime makes this possible across international borders because it lets police request an image of a server in another country urgently while they wait for the official request to go through.

The server provides information on the hacker’s activities, like who they might be targeting or their process for extorting a ransom. This can tell ransomware decryptors the process the hackers went through in order to encrypt the data, details about the encryption key or access to files that can help them reverse engineer the process. The researchers comb through the server logs for details in the same way you may help your friend dig up details on their Tinder date to make sure they’re legit, looking for clues or details about malicious patterns that can help suss out true intentions. Researchers may, for example, discover part of the plaintext file to compare to the encrypted file to begin the process of reverse engineering the key, or maybe they’ll find parts of the pseudo-RNG that can begin to explain the encryption pattern.

Working with law enforcement helped Cisco Talos create a decryption tool for the Babuk Tortilla ransomware. This version of ransomware targeted healthcare, manufacturing and national infrastructure, encrypting victims' devices and deleting valuable backups. Avast had already created a generic Babuk decryptor, but the Tortilla strain proved difficult to crack. The Dutch Police and Cisco Talos worked together to apprehend the person behind the strain, and gained access to the Tortilla decryptor in the process.

But often the easiest way to come up with these decryption tools stems from the ransomware gangs themselves. Maybe they’re retiring, or just feeling generous, but attackers will sometimes publicly release their encryption key. Security experts can then use the key to make a decryption tool and release that for victims to use going forward.

Generally, experts can’t share a lot about the process without giving ransomware gangs a leg up. If they divulge common mistakes, hackers can use that to easily improve their next ransomware attempts. If researchers tell us what encrypted files they’re working on now, gangs will know they’re on to them. But the best way to avoid paying is to be proactive. “If you’ve done a good job of backing up your data, you have a much higher opportunity to not have to pay,” said Clay.

This article originally appeared on Engadget at https://www.engadget.com/how-security-experts-unravel-ransomware-184531451.html?src=rss