When Google began rolling out Android’s March security update, the company addressed a “High” severity vulnerability involving the Pixel’s Markup screenshot tool. Over the weekend, Simon Aarons and David Buchanan, the reverse engineers who discovered CVE-2023-21036, shared more information about the security flaw, revealing that Pixel users are still at risk of their older images being compromised due to the nature of Google’s oversight.
In short, the “aCropalypse” flaw allowed someone to take a PNG screenshot cropped in Markup and undo at least some of the edits in the image. It’s easy to imagine scenarios where a bad actor could abuse that capability. For instance, if a Pixel owner used Markup to redact an image that included sensitive information about themselves, someone could exploit the flaw to reveal that information. The researchers have published full technical details of the flaw.
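The flaw stems from how the edited file was saved: the cropped PNG was written over the original file without truncating it, so the tail of the original image survives past the new file's IEND chunk. A minimal sketch (not the researchers' actual proof of concept) that checks a PNG byte string for such leftover data:

```python
PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def trailing_bytes_after_iend(data: bytes) -> int:
    """Count bytes that follow the IEND chunk of a PNG byte string.

    A cropped screenshot saved over a larger original file without
    truncating it leaves the original image's tail after IEND, which
    is what makes partial recovery of the uncropped image possible.
    """
    if not data.startswith(PNG_SIGNATURE):
        raise ValueError("not a PNG")
    marker = data.find(b"IEND")
    if marker == -1:
        raise ValueError("no IEND chunk found")
    # The IEND chunk ends after its 4-byte type name and 4-byte CRC.
    end = marker + 4 + 4
    return len(data) - end
```

A nonzero return value on a shared screenshot is the telltale the exploit relies on; actually reconstructing the hidden image data from those leftover bytes takes considerably more work.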
Introducing acropalypse: a serious privacy vulnerability in the Google Pixel's inbuilt screenshot editing tool, Markup, enabling partial recovery of the original, unedited image data of a cropped and/or redacted screenshot. Huge thanks to @David3141593 for his help throughout! pic.twitter.com/BXNQomnHbr
According to Buchanan, the flaw has existed for about five years, coinciding with the release of Markup alongside Android 9 Pie in 2018. And therein lies the problem. While March’s security patch will prevent Markup from compromising future images, some screenshots Pixel users may have shared in the past are still at risk.
It’s hard to say how concerned Pixel users should be about the flaw. According to an FAQ Aarons and Buchanan have shared, some websites, including Twitter, process images in such a way that someone could not exploit the vulnerability to reverse edit a screenshot or image. Users on other platforms aren’t so lucky. Aarons and Buchanan specifically identify Discord, noting the chat app did not patch out the exploit until its recent January 17th update. At the moment, it’s unclear if images shared on other social media and chat apps were left similarly vulnerable.
Google did not immediately respond to Engadget’s request for comment and more information. The March security update is currently available on the Pixel 4a, 5a, 7 and 7 Pro, meaning Markup can still produce vulnerable images on some Pixel devices. It’s unclear when Google will push the patch to other Pixel devices. If you own a Pixel phone without the patch, avoid using Markup to share sensitive images.
This article originally appeared on Engadget at https://www.engadget.com/google-pixel-vulnerability-allows-bad-actors-to-undo-markup-screenshot-edits-and-redactions-195322267.html?src=rss
US law enforcement authorities this week arrested the person allegedly responsible for running the hacking forum BreachForums. FBI agents on Wednesday arrested Conor Brian Fitzpatrick on suspicion of running the site. As security journalist Brian Krebs notes, the website’s administrator, “Pompompurin,” is responsible for or connected to some of the most high-profile hacks in recent memory, including multiple incidents involving the FBI.
In 2021, Pompompurin took credit for compromising the agency’s email servers and sending thousands of fake cybersecurity warnings. Pompompurin is also linked to the 2022 breach of InfraGard, the FBI’s threat information-sharing partnership, an incident that saw the contact information of its more than 80,000 members go on sale. Separately, Pompompurin is connected to the 2021 Robinhood breach that saw the data of 7 million users compromised.
In a sworn affidavit, one of the FBI agents involved in the arrest claims Fitzpatrick identified himself as Pompompurin and admitted to being the owner of BreachForums. The forum rose from the ashes of RaidForums, which the FBI raided and shut down in 2022. For the moment, BreachForums is still up and running. "I think it's safe to assume [Pompompurin] won't be coming back, so I'll be taking ownership of the forum," said a user named Baphomet. "I have most, if not all the access necessary to protect BF infrastructure and users." Fitzpatrick is scheduled to appear before a federal court.
This article originally appeared on Engadget at https://www.engadget.com/us-authorities-arrest-alleged-breachforums-owner-and-fbi-hacker-pompompurin-170009266.html?src=rss
If you’re in the market for a new Android phone, now is a good time to pick up one of the best Android phones at a significant discount. Google has reduced the price of the entire Pixel family, including the flagship 7 Pro. On both Amazon and the Google Store, you can get the Pixel 7 Pro for $150 off. That includes all colorways and storage variants, meaning the 128GB, 256GB and 512GB models are priced at $749, $849 and $949 at the moment. The more affordable Pixel 7 is also $150 off. Once again, all three colorways are included in the sale, as are both storage variants. As a result, you can get the 128GB model for $449 and the 256GB one for $549. When they’re not on sale, those two will set you back $599 and $699, respectively. Last but not least, the Pixel 6a is likewise $150 off, making it $299.
Between the Pixel 7 Pro, Pixel 7 and Pixel 6a, there isn’t a bad choice. All three phones are found in Engadget’s guide to the best smartphones. If you want a simple, affordable and easy-to-use device, the Pixel 6a is the one to get. It features a bright and vivid 6.1-inch OLED display, IP67-certified water and dust proofing, 6GB of RAM and Google’s in-house Tensor chip. Best of all, like all the other Pixels, the 6a comes with Google’s excellent photo processing software. One thing to note is that Google is likely to announce the Pixel 6a’s successor soon; the company is widely expected to debut the phone at Google I/O.
If you have a bigger budget, both the Pixel 7 and Pixel 7 Pro are compelling options too. Of the two, the latter is the one to go for if you love snapping photos. It features a 5x telephoto camera that’s ideal for capturing images of faraway subjects. Additionally, the wide-angle camera can capture macro shots, making it great for getting up close to small subjects like flowers and bugs.
In the years leading up to, and through, World War II, animal behaviorists thoroughly embraced motion picture technology as a means to better capture the daily experiences of their test subjects — whether exploring the nuances of contemporary chimpanzee society or running macabre rat-eat-rat survival experiments to determine the Earth's "carrying capacity." However, once the studies had run their course, much of that scientific content was simply shelved.
In his new book, The Celluloid Specimen: Moving Image Research into Animal Life, Seattle University Assistant Professor of Film Studies Dr. Ben Schultz-Figueroa pulls these historic archives out of the vacuum of academic research to examine how they have influenced America's scientific and moral compasses since. In the excerpt below, Schultz-Figueroa recounts the Allied war effort to guide precision aerial munitions towards their targets using live pigeons as onboard targeting reticles.
Project Pigeon: Rendering the War Animal through Optical Technology
In his 1979 autobiography, The Shaping of a Behaviorist, B. F. Skinner recounted a fateful train ride to Chicago in 1940, just after the Nazis had invaded Denmark. Gazing out the train window, the renowned behaviorist was ruminating on the destructive power of aerial warfare when his eye unexpectedly caught a “flock of birds lifting and wheeling in formation as they flew alongside the train.” Skinner recounts: “Suddenly I saw them as ‘devices’ with excellent vision and extraordinary maneuverability. Could they not guide a missile?” Observing the coordination of the flock, its “lifting and wheeling,” inspired in Skinner a new vision of aerial warfare, one that yoked the senses and movements of living animals to the destructive power of modern ballistics. This momentary inspiration began a three-year project to weaponize pigeons, code-named “Project Pigeon,” by having them guide the flight of a bomb from inside its nose, a project that tied together laboratory research, military technology, and private industry.
This strange story is popularly discussed as a historical fluke of sorts, a wacky one-off in military research and development. As Skinner himself described it, one of the main obstacles to Project Pigeon even at the time was the perception of a pigeon-guided missile as a “crackpot idea.” But in this section I will argue that it is, in fact, a telling example of the weaponization of animals in a modern technological setting where optical media was increasingly deployed on the battlefield, a transformation with increasing strategic and ethical implications for the way war is fought today. I demonstrate that Project Pigeon was historically placed at the intersection of a crucial shift in warfare away from the model of an elaborate chess game played out by generals and their armies and toward an ecological framework in which a wide array of nonhuman agents play crucial roles. As Jussi Parikka recently described a similar shift in artificial intelligence, this was a movement toward “agents that expressed complex behavior, not through preprogramming and centralization, but through autonomy, emergence, and distributed functioning.” The missile developed and marketed by Project Pigeon was premised on a conversion of the pigeon from an individual consciousness to a living machine, emptied of intentionality in order to leave behind only a controllable, yet dynamic and complex, behavior that could be designed and trusted to operate without the oversight of a human commander. Here is a reimagining of what a combatant can be, no longer dependent on a decision-making human actor but rather on a complex array of interactions among an organism, device, and environment. As we will see, the vision of a pigeon-guided bomb presaged the nonhuman sight of the smart bomb, drone, and military robot, where artificial intelligence and computer algorithms replace the operations of its animal counterpart.
Media and cinema scholars have written extensively about the transforming visual landscape of the battlefield and film’s place within this shifting history. Militaries from across the globe have pushed film to be used in dramatically unorthodox ways. Lee Grieveson and Haidee Wasson argue that the US military historically used film as “an iterative apparatus with multiple capacities and functions,” experimenting with the design of the camera, projector, and screen to fit new strategic interests as they arose. As Wasson argues in her chapter dedicated to experimental projection practices, the US Army “boldly dissembled cinema’s settled routines and structures, rearticulating film projection as but one integral element of a growing institution with highly complex needs.” As propaganda, film was used to portray the military to civilians at home and abroad; as training films, it was used to consistently instruct large numbers of recruits; as industrial and advertising films, different branches of the military used it to speak to each other. Like these examples, Project Pigeon relied on a radically unorthodox use of film that directed it into new terrains, intervening in the long-standing relationship between the moving image and its spectators to marshal its influence on nonhuman viewers, as well as humans. Here, we will see a hitherto unstudied use of the optical media, in which film was a catalyst for transforming animals into weapons and combatants.
Project Pigeon was one of the earliest projects to come out of an illustrious and influential career. Skinner would go on to become one of the most well-known voices in American psychology, introducing the “Skinner box” to the study of animal behavior and the vastly influential theory of “operant conditioning.” His influence was not limited to the sciences but was broadly felt across conversations in political theory, linguistics, and philosophy as well. As James Capshew has shown, much of Skinner’s later, more well-known research originated in this military research into pigeon-guided ballistics. Growing from initial independent trials in 1940, Project Pigeon secured funding from the US Army’s Office of Scientific Research and Development in 1943. The culmination of this work placed three pigeons in the head of a missile; the birds had been trained to peck at a screen showing incoming targets. These pecks were then translated into instructions for the missile’s guidance system. The goal was a 1940s version of a smart bomb, which was capable of course correcting mid-flight in response to the movement of a target. Although Project Pigeon developed relatively rapidly, the project was ultimately denied further funds in December of 1943, effectively ending Skinner’s brief oversight of the project. In 1948, however, the US Naval Research Laboratory picked up Skinner’s research and renamed it “Project ORCON” — a contraction of “organic” and “control.” Here, with Skinner’s consultation, the pigeons’ tracking capacity for guiding missiles to their intended targets was methodically tested, demonstrating a wide variance in reliability. In the end, the pigeons’ performance and accuracy relied on so many uncontrollable factors that Project ORCON, like Project Pigeon before it, was discontinued.
Moving images played two central roles in Project Pigeon: first, as a means of orienting the pigeons in space and testing the accuracy of their responses, examples of what Harun Farocki calls “operational images,” and, second, as a tool for convincing potential sponsors of the pigeon’s capacity to act as a weapon. The first use of moving image technology shows up in the final design of Project Pigeon, where each of the three pigeons was constantly responding to camera obscuras that were installed in the front of the bomb. The pigeons were trained to pinpoint the shape of incoming targets on individual screens (or “plates”) by pecking them as the bomb dropped, which would then cause it to change course. This screen was connected to the bomb’s guidance system through four small rubber pneumatic tubes that were attached to each side of the frame, which directed a constant airflow to a pneumatic pickup system that controlled the thrusters of the bomb. As Skinner explained: “When the missile was on target, the pigeon pecked the center of the plate, all valves admitted equal amounts of air, and the tambours remained in neutral positions. But if the image moved as little as a quarter of an inch off-center, corresponding to a very small angular displacement of the target, more air was admitted by the valves on one side, and the resulting displacement of the tambours sent appropriate correcting orders directly to the servo system.”
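In modern terms, the mechanism Skinner describes is a proportional controller: a centered peck produces no correction, and an off-center peck produces a correction proportional to its displacement. A toy sketch of that logic, purely illustrative rather than a model of the actual pneumatic hardware:

```python
def correction(peck_x, peck_y, center=(0.0, 0.0), gain=0.5):
    """Translate a peck's offset from the plate's center into a
    steering correction, mimicking the pneumatic valves: a centered
    peck yields no correction, while an off-center peck yields a
    correction proportional to its displacement."""
    dx = peck_x - center[0]
    dy = peck_y - center[1]
    return (-gain * dx, -gain * dy)

def track(offset, steps=20, gain=0.5):
    """Repeatedly apply corrections; the target's apparent offset on
    the plate shrinks toward zero with each control cycle."""
    x, y = offset
    for _ in range(steps):
        cx, cy = correction(x, y, gain=gain)
        x, y = x + cx, y + cy
    return x, y
```

With a gain below one, the offset shrinks geometrically each cycle, which is the "course correcting mid-flight" behavior the designers were after.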
In the later iteration of Project ORCON, the pigeons were tested and trained with color films taken from footage recorded on a jet making diving runs on a destroyer and a freighter, and the pneumatic relays between the servo system and the screen were replaced with electric currents. Here, the camera obscura and the training films were used to integrate the living behavior of the pigeon into the mechanism of the bomb itself and to produce immersive simulations for these nonhuman pilots in order to fully operationalize their behavior.
The second use of moving images for this research was realized in a set of promotional films for Project Pigeon, which Skinner largely credited for procuring its initial funding from General Mills Inc. and the navy’s later renewal of the research as Project ORCON. Skinner’s letters indicate that there were multiple films made for this purpose, which were often recut in order to incorporate new footage. Currently, I have been able to locate only a single version of the multiple films produced by Skinner, the latest iteration that was made to promote Project ORCON. Whether previous versions exist and have yet to be found or whether they were taken apart to create each new version is unclear. Based on the surviving example, it appears that these promotional films were used to dramatically depict the pigeons as reliable and controllable tools. Their imagery presents the birds surrounded by cutting-edge technology, rapidly and competently responding to a dynamic array of changing stimuli. These promotional films played a pivotal rhetorical role in convincing government and private sponsors to back the project. Skinner wrote that one demonstration film was shown “so often that it was completely worn out—but to good effect for support was eventually found for a thorough investigation.” This contrasted starkly with the live presentation of the pigeons’ work, of which Skinner wrote: “the spectacle of a living pigeon carrying out its assignment, no matter how beautifully, simply reminded the committee of how utterly fantastic our proposal was.” Here, the moving image performed an essentially symbolic function, concerned primarily with shaping the image of the weaponized animal bodies.
This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-the-celluloid-specimen-benjamin-schultz-figueroa-university-of-california-press-143028555.html?src=rss
Disco Elysium, one of the best releases of 2019, finally has a dedicated photo mode, but it’s not quite like the photo modes found in most games. The game’s new Collage Mode grants players full access to all the characters, environments and props found within the RPG. As you might imagine, you can use that power to pose your favorite NPCs in “a range of silly and sensible poses.” You’re then free to add filters and change the time of day to alter the mood of your capture.
But most interesting of all, Collage Mode gives you the freedom to write your own dialogue for Disco Elysium, and make it look like it came directly from the game. “Fabricate completely new dramas from unforgivable punch-ups to fruity yet forbidden kisses,” developer ZA/UM Studio suggests. “Corroborate your fan fiction with screenshots directly from the game.” Disco Elysium fan fiction will never be the same.
Collage Mode arrives amid an ongoing public dispute between ZA/UM and a handful of the studio’s former employees. The disagreement dates back to 2022, when three members of the Disco Elysium team – Robert Kurvitz, Helen Hindpere and Aleksander Rostov – said they were fired from their jobs following the studio’s takeover by a pair of Estonian businessmen in 2021. Kurvitz and Rostov went on to accuse ZA/UM’s new owners of fraud. On Tuesday, ZA/UM published a press release announcing the legal proceedings Kurvitz and Rostov had brought against it had been resolved after a court dropped the case. The two later said the announcement was "wrong and misleading in several respects," and that they would continue pursuing other legal options against their former employer.
This article originally appeared on Engadget at https://www.engadget.com/disco-elysiums-collage-mode-allows-you-to-write-new-dialogue-220437086.html?src=rss
Since its release in 2021, one of the most consistent criticisms of has been Microsoft’s . Compared to Windows 10, the newer OS makes it more complicated for users to move away from the company’s first-party offerings. For example, if you don’t want Edge to open every time you click on a webpage or PDF, you’re forced to launch Windows 11’s Settings menu and change the default app by file and link type. It’s an unnecessarily long process that makes customizing Windows 11 convoluted.
Microsoft is finally addressing some of those criticisms. In a blog post published Friday, the company said it was “reaffirming our long-standing approach to put people in control of their Windows PC experience.” Microsoft announced a feature it said would ensure Windows 11 users are in control of changes to their app defaults. Later this year, the company will introduce a new deep link uniform resource identifier (URI) that will allow developers to send users to the correct section of the Settings menu when they want to change how Windows 11 responds to specific links and file types.
Microsoft says it will also give users more control over what apps get pinned to their desktop, start menu and taskbar with a new public API that will display a prompt asking you to grant programs permission before they show up on those interface elements. Both features will first roll out to PCs enrolled in the Windows Insider Dev Channel in the coming months before arriving in the public release of Windows 11. Notably, Microsoft says it will “lead by example” and release updates for Edge that will see the browser add support for those features as they become available.
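Windows already exposes "ms-settings:" URIs for jumping directly to Settings pages, and Microsoft's announcement extends that scheme with a per-app deep link. The exact query parameter hasn't been publicly documented, so the "registeredAppUser" key below is an assumed placeholder; a minimal sketch of building and launching such a link:

```python
import subprocess
import sys
import urllib.parse

def default_apps_uri(app_name=None):
    """Build an ms-settings deep link to Windows 11's Default Apps page.

    "ms-settings:defaultapps" is an existing, documented Windows URI.
    The "registeredAppUser" query parameter stands in for the per-app
    deep link Microsoft announced; the real parameter name may differ.
    """
    uri = "ms-settings:defaultapps"
    if app_name:
        uri += "?registeredAppUser=" + urllib.parse.quote(app_name)
    return uri

def open_default_apps(app_name=None):
    """Ask Windows to open the Settings page (no-op on other OSes)."""
    if sys.platform == "win32":
        subprocess.run(
            ["cmd", "/c", "start", "", default_apps_uri(app_name)],
            check=False,
        )
```

Calling open_default_apps with an app name would land the user on that app's own defaults page rather than the top of the list, which is the behavior Microsoft describes.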
This article originally appeared on Engadget at https://www.engadget.com/microsoft-is-making-it-easier-to-set-default-apps-in-windows-11-202940444.html?src=rss
It’s safe to say Diablo IV’s early access beta weekend hasn’t gone as smoothly as Blizzard likely hoped it would. Shortly after the beta went live on Friday, many players found themselves in lengthy login queues. In my case, I had to wait nearly two hours before I got a chance to play the game, only to be quickly disconnected after about 15 minutes.
Blizzard addressed the issue after players took to social media and the official Diablo IV forums to complain. “The team is working through some issues behind the scenes that have been affecting players and causing them to be disconnected from the servers,” Blizzard said in a statement on the subject. “This is done so we can ensure stability amongst players who get into the game after the queue process.”
If you’re waiting to play, Blizzard asks that you stay in the login queue so as not to reset your timer. The studio said it would have a more accurate countdown in place by the start of next weekend’s open beta, when anyone who wants to try Diablo IV before its official release can do so. “We are actively working on these issues for this weekend,” Blizzard said. “Once these are resolved, we will be able to increase the influx of players and queue times will be significantly reduced.”
Later in the day, the studio shared an update, noting it was also working to resolve a handful of other issues that players had filed reports about, including a bug preventing some from joining parties. As of Saturday afternoon, the queue to play Diablo IV was much shorter. I got to the character selection screen in under a minute. “Many players have successfully logged in to the game, but we are aware that some have experienced longer than expected wait times,” Blizzard said. “As we continue to roll out improvements to our server stability, we expect our players to see continued improvements to the queue time.”
Hiccups are expected during a beta, particularly when a studio stress tests a live-service game like Diablo IV. The last thing Blizzard wants is a repeat of Diablo III’s launch when interest in the game overloaded Battle.net’s login servers, preventing many from playing the game at all.
This article originally appeared on Engadget at https://www.engadget.com/blizzard-is-working-to-shorten-diablo-iv-beta-queue-times-180535837.html?src=rss
After 15 years in space, NASA’s AIM mission is ending. In a brief blog post spotted by Gizmodo, the agency said Thursday it was ending operational support for the spacecraft due to a battery power failure. NASA first noticed issues with AIM’s battery in 2019, but the probe was still sending a “significant amount of data” back to Earth. Following another recent decline in battery power, NASA says AIM has become unresponsive. The AIM team will monitor the spacecraft for another two weeks in case it reboots, but judging from the tone of NASA’s post, the agency isn’t holding its breath.
NASA launched the AIM – Aeronomy of Ice in the Mesosphere – mission in 2007 to study noctilucent, or night-shining, clouds, which are sometimes known as fossilized clouds because they can last hundreds of years in the Earth's upper atmosphere. From its vantage point 370 miles above the planet's surface, the spacecraft proved invaluable to scientists, with data collected by AIM appearing in 379 peer-reviewed papers, including a 2018 study that found methane emissions from human-driven climate change are causing night-shining clouds to form more frequently. Pretty good for a mission NASA initially expected to operate for only two years. AIM’s demise follows that of another long-serving NASA spacecraft. At the start of the year, the agency deorbited the Earth Radiation Budget Satellite following a nearly four-decade run collecting ozone and atmospheric measurements.
This article originally appeared on Engadget at https://www.engadget.com/nasas-aim-spacecraft-goes-silent-after-a-15-year-run-studying-the-earths-oldest-clouds-162853411.html?src=rss
Amazon is facing a lawsuit over alleged biometric tracking at its Go stores in New York. In the lawsuit (PDF), filed by Alfredo Alberto Rodriguez Perez, the plaintiff argues that Go stores constantly use customers' biometrics "by scanning [their palms] to identify them and by applying computer vision, deep learning algorithms, and sensor fusion that measure the shape and size of each customer’s body to identify customers, track where they move in the stores, and determine what they have purchased." The complaint also says the company only put up signs about its biometric tracking activities over a year after the city's biometric surveillance law went into effect.
Amazon's Go stores give shoppers the option to take whatever products they want off shelves and walk out without the need to check out. To enter these stores, customers need to scan a code from the Amazon app with a connected credit card. However, some locations offer Amazon One, the e-commerce giant's palm-based identity and payment service, as an entry option. The plaintiff's complaint says the signage informs customers that Amazon will not be collecting their biometrics unless they choose to sign up for Amazon One. However, "Amazon Go stores do collect biometric identifier information on every single customer, including information on the size and shape of every customer's body," the complaint argues.
In a statement sent to NBC News, an Amazon spokesperson defended the company's practices and technologies. They explained that Amazon does not use facial recognition, and that the systems it uses to identify shoppers inside its Go stores don't constitute biometric tech. "Only shoppers who choose to enroll in Amazon One and choose to be identified by hovering their palm over the Amazon One device have their palm-biometric data securely collected," they insisted, "and these individuals are provided the appropriate privacy disclosures during the enrollment process."
The lawsuit's outcome could depend on whether the court sees someone's body shape and size as biometric information. In the complaint, the plaintiff quotes NYC Admin Code 22-1201's definition of a biometric identifier in the context of the law as "a physiological or biological characteristic that is used by or on behalf of a commercial establishment, singly or in combination, to identify, or assist in identifying, an individual, including, but not limited to: (i) a retina or iris scan, (ii) a fingerprint or voiceprint, (iii) a scan of hand or face geometry, or any other identifying characteristic."
This article originally appeared on Engadget at https://www.engadget.com/amazon-faces-lawsuit-over-alleged-biometric-tracking-at-go-stores-in-new-york-144429703.html?src=rss
Discord is finally giving you the power to customize your desktop app's interface with various themes in its latest beta test. The messaging app has introduced Themes — one of its most requested features — with 16 pre-made options to choose from. The not-so-good news? You'll only be able to apply them if you're paying for Nitro, its most expensive subscription option.
Nitro does have other perks, including a bigger file-sharing limit, 4K and 60fps streaming, as well as the ability to send messages up to 4,000 characters in length. But if you don't really need any of them, it's a matter of deciding whether it's worth paying $10 a month or $100 a year for the subscription tier just to be able to access Discord's themes.
If you have been waiting for the feature to drop and do decide to pay for Nitro, you can choose from the available color schemes by going to Appearance under Settings. You'll see a new Color section under the existing Light and Dark themes, where you can find the 16 main choices. There's apparently another hidden color scheme that appears when you click the Preview Themes button, which lets you test each option before applying one. Thankfully, Discord lets you use the preview button even if you don't have an existing Nitro subscription, so you can at least check out what's available before you make a purchase.
This article originally appeared on Engadget at https://www.engadget.com/discord-themes-nitro-subscription-100135630.html?src=rss