Posts with «personal finance - career & education» label

Why humans can't use natural language processing to speak with the animals

We’ve been wondering what goes on inside the minds of animals since antiquity. Doctor Dolittle’s talent was far from novel when the character debuted in 1920: Greco-Roman literature is lousy with speaking animals, writers in Zhanguo-era China routinely ascribed language to certain animal species, and talking beasts are prevalent in Indian, Egyptian, Hebrew and Native American storytelling traditions.

Even today, popular Western culture toys with the idea of talking animals, though often through a lens of technology-empowered speech rather than supernatural force. The dolphins from both Seaquest DSV and Johnny Mnemonic communicated with their bipedal contemporaries through advanced translation devices, as did Dug the dog from Up.

We’ve already got machine-learning systems and natural language processors that can translate human speech into any number of existing languages, and adapting that process to convert animal calls into human-interpretable signals doesn’t seem that big of a stretch. However, it turns out we’ve got more work to do before we can converse with nature.

What is language?

“All living things communicate,” an interdisciplinary team of researchers argued in 2018’s On understanding the nature and evolution of social cognition: a need for the study of communication. “Communication involves an action or characteristic of one individual that influences the behavior, behavioral tendency or physiology of at least one other individual in a fashion typically adaptive to both.”

From microbes, fungi and plants on up the evolutionary ladder, science has yet to find an organism that exists in such extreme isolation as to not have a natural means of communicating with the world around it. But we should be clear that “communication” and “language” are two very different things.

“No other natural communication system is like human language,” argues the Linguistic Society of America. Language allows us to express our inner thoughts and convey information, as well as request or even demand it. “Unlike any other animal communication system, it contains an expression for negation — what is not the case … Animal communication systems, in contrast, typically have at most a few dozen distinct calls, and they are used only to communicate immediate issues such as food, danger, threat, or reconciliation.”

That’s not to say that pets don’t understand us. “We know that dogs and cats can respond accurately to a wide range of human words when they have prior experience with those words and relevant outcomes,” Dr. Monique Udell, Director of the Human-Animal Interaction Laboratory at Oregon State University, told Engadget. “In many cases these associations are learned through basic conditioning,” Dr. Udell said — like when we yell “dinner” just before setting out bowls of food.

Whether or not our dogs and cats actually understand what “dinner” means outside of the immediate Pavlovian response remains to be seen. “We know that at least some dogs have been able to learn to respond to over 1,000 human words (labels for objects) with high levels of accuracy,” Dr. Udell said. “Dogs currently hold the record among non-human animal species for being able to match spoken human words to objects or actions reliably,” but it’s “difficult to know for sure to what extent dogs understand the intent behind our words or actions.”

Dr. Udell continued: “This is because when we measure a dog or cat’s understanding of a stimulus, like a word, we typically do so based on their behavior.” You can teach a dog to sit with both English and German commands, but “if a dog responds the same way to the word ‘sit’ in English and in German, it is likely the simplest explanation — with the fewest assumptions — is that they have learned that when they sit in the presence of either word then there is a pleasant consequence.”

Tea Stražičić for Engadget/Silica Magazine

Hush, the computers are speaking

Natural language processing (NLP) is the branch of AI that enables computers and algorithmic models to interpret text and speech, including the speaker’s intent, the same way we meatsacks do. It combines computational linguistics, which models the syntax, grammar and structure of a language, and machine-learning models, which “automatically extract, classify, and label elements of text and voice data and then assign a statistical likelihood to each possible meaning of those elements,” according to IBM. NLP underpins the functionality of every digital assistant on the market. Basically any time you’re speaking at a “smart” device, NLP is translating your words into machine-understandable signals and vice versa.

The field of NLP research has undergone a significant evolution in recent years, as its core systems have migrated from older recurrent and convolutional neural networks toward Google’s Transformer architecture, which greatly increases training efficiency.

Dr. Noah D. Goodman, Associate Professor of Psychology, Computer Science and Linguistics at Stanford University, told Engadget that, with RNNs, “you'll have to go time-step by time-step or like word by word through the data and then do the same thing backward.” In contrast, with a transformer, “you basically take the whole string of words and push them through the network at the same time.”

“It really matters to make that training more efficient,” Dr. Goodman continued. “Transformers, they're cool … but by far the biggest thing is that they make it possible to train efficiently and therefore train much bigger models on much more data.”
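To make that distinction concrete, here is a minimal PyTorch sketch, with toy dimensions and invented tensor names, of the difference Dr. Goodman describes: the recurrent network consumes the sequence one step at a time, while the transformer encoder pushes the whole string through in a single pass.

```python
import torch
import torch.nn as nn

seq_len, batch, dim = 10, 1, 16
tokens = torch.randn(seq_len, batch, dim)  # stand-ins for embedded words

# Recurrent approach: step through the sequence word by word, carrying a hidden state.
rnn = nn.RNN(input_size=dim, hidden_size=dim)
hidden = torch.zeros(1, batch, dim)
for t in range(seq_len):
    _, hidden = rnn(tokens[t:t + 1], hidden)

# Transformer approach: the whole string is processed at once, which parallelizes training.
encoder = nn.TransformerEncoderLayer(d_model=dim, nhead=4)
contextual = encoder(tokens)
```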

Talkin’ jive ain’t just for turkeys

While many species’ communication systems have been studied in recent years — most notably cetaceans like whales and dolphins, but also the southern pied babbler, for its song’s potentially syntactic qualities, and vervet monkeys’ communal predator warning system — none has displayed the sheer complexity of the call of the avian family Paridae: the chickadees, tits and titmice.

Dr. Jeffrey Lucas, professor in the Biological Sciences department at Purdue University, told Engadget that the Paridae call “is one of the most complicated vocal systems that we know of. At the end of the day, what the [field’s voluminous number of research] papers are showing is that it's god-awfully complicated, and the problem with the papers is that they grossly under-interpret how complicated [the calls] actually are.”

These parids often live in socially complex, heterospecific flocks, mixed groupings that include multiple songbird and woodpecker species. The complexity of the birds’ social system is correlated with an increased diversity in communications systems, Dr. Lucas said. “Part of the reason why that correlation exists is because, if you have a complex social system that's multi-dimensional, then you have to convey a variety of different kinds of information across different contexts. In the bird world, they have to defend their territory, talk about food, integrate into the social system [and resolve] mating issues.”

The chickadee call consists of at least six distinct notes set in an open-ended vocal structure, which is both monumentally rare in non-human communication systems and the reason for the chickadee call’s complexity. An open-ended vocal system means that “increased recording of chick-a-dee calls will continually reveal calls with distinct note-type compositions,” explained the 2012 study, Linking social complexity and vocal complexity: a parid perspective. “This open-ended nature is one of the main features the chick-a-dee call shares with human language, and one of the main differences between the chick-a-dee call and the finite song repertoires of most songbird species.”

Tea Stražičić for Engadget/Silica Magazine

Dolphins have no need for kings

Training language models isn’t simply a matter of shoving in large amounts of data. When training a model to translate an unknown language into the one you’re speaking, you need to have at least a rudimentary understanding of how the two languages correlate with one another so that the translated text retains the proper intent of the speaker.

“The strongest kind of data that we could have is what's called a parallel corpus,” Dr. Goodman explained, which is basically having a Rosetta Stone for the two tongues. In that case, you’d simply have to map between specific words, symbols and phonemes in each language — figure out what means “river” or “one bushel of wheat” in each and build out from there.

Without that perfect translation artifact, so long as you have large corpuses of data for both languages, “it's still possible to learn a translation between the languages, but it hinges pretty crucially on the idea that the kind of latent conceptual structure [is shared],” Dr. Goodman continued, which assumes that both cultures’ definitions of “one bushel of wheat” are generally equivalent.

Goodman points to the word pairs ’man and woman’ and ’king and queen’ in English. “The structure, or geometry, of that relationship we expect [in] English; if we were translating into Hungarian, we would also expect those four concepts to stand in a similar relationship,” Dr. Goodman said. “Then effectively the way we'll learn a translation now is by learning to translate in a way that preserves the structure of that conceptual space as much as possible.”
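A toy illustration of that geometry, using made-up two-dimensional vectors rather than real embeddings: the man-to-woman offset mirrors the king-to-queen offset, and any structure-preserving map (here a simple rotation standing in for a learned translation) keeps that relationship intact.

```python
import numpy as np

eng = {
    "man":   np.array([1.0, 0.0]),
    "woman": np.array([1.0, 1.0]),
    "king":  np.array([3.0, 0.0]),
    "queen": np.array([3.0, 1.0]),
}

# The analogy holds in the source space: king - man + woman lands on "queen".
print(eng["king"] - eng["man"] + eng["woman"])   # [3. 1.]

theta = np.pi / 6                                 # pretend the target language's space is rotated
rotate = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
target = {word: rotate @ vec for word, vec in eng.items()}

# Because the map preserves structure, the same analogy holds after "translation".
print(np.allclose(target["king"] - target["man"] + target["woman"], target["queen"]))  # True
```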

Having a large corpus of data to work with in this situation also enables unsupervised learning techniques to be used to “extract the latent conceptual space,” Dr. Goodman said, though that method is more resource intensive and less efficient. However, if all you have is a large corpus in only one of the languages, you’re generally out of luck.

“For most human languages we assume the [quartet concepts] are kind of, sort of similar, like, maybe they don't have ‘king and queen’ but they definitely have ‘man and woman,’” Dr. Goodman continued. ”But I think for animal communication, we can't assume that dolphins have a concept of ‘king and queen’ or whether they have ‘men and women.’ I don't know, maybe, maybe not.”

And without even that rudimentary conceptual alignment to work from, discerning the context and intent of an animal’s call — much less deciphering the syntax, grammar and semantics of the underlying communication system — becomes much more difficult. “You're in a much weaker position,” Dr. Goodman said. “If you have the utterances in the world context that they're uttered in, then you might be able to get somewhere.”

Basically, if you can obtain multimodal data that provides context for the recorded animal call — the environmental conditions, time of day or year, the presence of prey or predator species, etc — you can “ground” the language data into the physical environment. From there you can “assume that English grounds into the physical environment in the same way as this weird new language grounds into the physical environment” and use that as a kind of bridge between the languages.

Unfortunately, the challenge of translating bird calls into English (or any other human language) falls squarely into that last, weakest position: there is no parallel corpus to lean on, only recorded calls and the real-world context in which they’re uttered. This means we’ll need more data, and a lot of different types of data, as we continue to build our basic understanding of the structures of these calls from the ground up. Some of those efforts are already underway.

The Dolphin Communication Project, for example, employs a combined “mobile video/acoustic system” to capture both the utterances of wild dolphins and their relative position in physical space at that time to give researchers added context to the calls. Biologging tags — animal-borne sensors affixed to hide, hair, or horn that track the locations and conditions of their hosts — continue to shrink in size while growing in both capacity and capability, which should help researchers gather even more data about these communities.

What if birds are just constantly screaming about the heat?

Even if we won’t be able to immediately chat with our furred and feathered neighbors, gaining a better understanding of how they at least talk to each other could prove valuable to conservation efforts. Dr. Lucas points to a recent study he participated in that found environmental changes induced by climate change can radically change how different bird species interact in mixed flocks. “What we showed was that if you look across the disturbance gradients, then everything changes,” Dr. Lucas said. “What they do with space changes, how they interact with other birds changes. Their vocal systems change.”

“The social interactions for birds in winter are extraordinarily important because you know, 10 gram bird — if it doesn't eat in a day, it's dead,” Dr. Lucas continued. “So information about their environment is extraordinarily important. And what those mixed species flocks do is to provide some of that information.”

However, that network quickly breaks down as the habitat degrades, and in order to survive “they have to really go through fairly extreme changes in behavior and social systems and vocal systems … but that impacts fertility rates, and their ability to feed their kids and that sort of thing.”

Better understanding their calls will help us better understand their levels of stress, which can serve both modern conservation efforts and agricultural ends. “The idea is that we can get an idea about the level of stress in [farm animals], then use that as an index of what's happening in the barn and whether we can maybe even mitigate that using vocalizations,” Dr. Lucas said. “AI probably is going to help us do this.”

“Scientific sources indicate that noise in farm animal environments is a detrimental factor to animal health,” Jan Brouček of the Research Institute for Animal Production Nitra observed in 2014. “Especially longer lasting sounds can affect the health of animals. Noise directly affects reproductive physiology or energy consumption.” That continuous drone is thought to also indirectly impact other behaviors including habitat use, courtship, mating, reproduction and the care of offspring.

Conversely, 2021’s research, The effect of music on livestock: cattle, poultry and pigs, has shown that playing music helps to calm livestock and reduce stress during times of intensive production. We can measure that reduction in stress based on what sorts of happy sounds those animals make. Like listening to music in another language, we can get with the vibe, even if we can't understand the lyrics.

This article originally appeared on Engadget at https://www.engadget.com/why-humans-cant-use-natural-language-processing-to-speak-with-the-animals-143050169.html?src=rss

Hitting the Books: The dangerous real-world consequences of our online attention economy

If reality television has taught us anything, it's that there's not much people won't do if offered enough money and attention. Sometimes, even just the latter. Unfortunately for the future prospects of our civilization, modern social media has focused upon those same character foibles and optimized them at a global scale, sacrificing them at the altar of audience growth and engagement. In Outrage Machine, writer and technologist Tobias Rose-Stockwell walks readers through the inner workings of these modern technologies, illustrating how they're designed to capture and keep our attention, regardless of what they have to do in order to do it. In the excerpt below, Rose-Stockwell examines the human cost of feeding the content machine through a discussion of YouTube personality Nikocado Avocado's rise to internet stardom.


Legacy Lit

Excerpted from OUTRAGE MACHINE: How Tech Amplifies Discontent, Disrupts Democracy—And What We Can Do About It by Tobias Rose-Stockwell. Copyright © 2023 by Tobias Rose-Stockwell. Reprinted with permission of Legacy Lit. All rights reserved.


This Game Is Not Just a Game

Social media can seem like a game. When we open our apps and craft a post, the way we look to score points in the form of likes and followers distinctly resembles a strange new playful competition. But while it feels like a game, it is unlike any other game we might play in our spare time.

The academic C. Thi Nguyen has explained how games are different: “Actions in games are screened off, in important ways, from ordinary life. When we are playing basketball, and you block my pass, I do not take this to be a sign of your long-term hostility towards me. When we are playing at having an insult contest, we don’t take each other’s speech to be indicative of our actual attitudes or beliefs about the world.” Games happen in what the Dutch historian Johan Huizinga famously called “the magic circle”— where the players take on alternate roles, and our actions take on alternate meanings.

With social media we never exit the game. Our phones are always with us. We don’t extricate ourselves from the mechanics. And since the goal of the game designers of social media is to keep us there as long as possible, it’s an active competition with real life. With a constant type of habituated attention being pulled into the metrics, we never leave these digital spaces. In doing so, social media has colonized our world with its game mechanics.

Metrics are Money

While we are paid in the small rushes of dopamine that come from accumulating abstract numbers, metrics also translate into hard cash. Acquiring these metrics doesn’t just provide us with hits of emotional validation. They are transferable into economic value that is quantifiable and very real.

It’s no secret that the ability to consistently capture attention is an asset that brands will pay for. A follower is a tangible, monetizable asset worth money. If you’re trying to purchase followers, Twitter will charge you between $2 and $4 to acquire a new one using their promoted accounts feature.

If you have a significant enough following, brands will pay you to post sponsored items on their behalf. Depending on the size of your following on Instagram, for instance, these payouts can range from $75 per post (to an account with two thousand followers), up to hundreds of thousands of dollars per post (for accounts with hundreds of thousands of followers).

Between 2017 and 2021, the average cost for reaching a thousand Twitter users (the metric advertisers use is CPM, or cost per mille) was between $5 and $7. It costs that much to get a thousand eyeballs on your post. Any strategies that increase how much your content is shared also have a financial value.

Let’s now bring this economic incentive back to Billy Brady’s accounting of the engagement value of moral outrage. He found that adding a single moral or emotional word to a post on Twitter increased the viral spread of that content by 17 percent per word. All of our posts to social media exist in a marketplace for attention — they vie for the top of our followers’ feeds. Our posts are always competing against other people’s posts. If outraged posts have an advantage in this competition, they are literally worth more money.
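A back-of-envelope sketch of that math, using an assumed baseline reach alongside the figures quoted above, shows how quickly outrage compounds into money:

```python
# Assumed baseline reach; the 17% lift per word and $5-$7 CPM come from the figures above.
baseline_impressions = 10_000
boost_per_word = 1.17
cpm_low, cpm_high = 5, 7   # dollars per 1,000 impressions

for words in range(4):
    reach = baseline_impressions * boost_per_word ** words
    extra = reach - baseline_impressions
    print(f"{words} outrage words: ~{reach:,.0f} impressions "
          f"(extra reach worth ${extra / 1000 * cpm_low:,.0f}-${extra / 1000 * cpm_high:,.0f})")
```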

For a brand or an individual, if you want to increase the value of a post, then including moral outrage, or linking to a larger movement that signals its moral conviction, might increase the reach of that content by at least that much. Moreover, it might actually improve brand perception and affinity by appealing to the moral foundations of the brand’s consumers and employees, increasing sales and burnishing its reputation. This can be an inherently polarizing strategy, as a company that picks a cause to support, whose audience is morally diverse, might then alienate a sizable percentage of their customer base who disagree with that cause. But these economics can also make sense: if a company knows enough about its consumers’ and employees’ moral affiliations, it can make sure to pick a cause-sector that’s in line with its customers.

Since moral content is a reliable tool for capturing attention, it can also be used for psychographic profiling for future marketing opportunities. Many major brands do this with tremendous success — creating viral campaigns that utilize moral righteousness and outrage to gain traction and attention among core consumers who have a similar moral disposition. These campaigns also often get a secondary boost due to the proliferation of pile-ons and think pieces discussing these ad spots. Brands that moralize their products often succeed in the attention marketplace.

This basic economic incentive can help to explain how and why so many brands have begun to link themselves with online cause-related issues. While it may make strong moral sense to those decision-makers, it can make clear economic sense to the company as a whole as well. Social media provides measurable financial incentives for companies to include moral language in their quest to burnish their brands and perceptions.

But as nefarious as this sounds, moralization of content is not always the result of callous manipulation and greed. Social metrics do something else that influences our behavior in pernicious ways.

Audience Capture

In the latter days of 2016, I wrote an article about how social media was diminishing our capacity for empathy. In the wake of that year’s presidential election, the article went hugely viral, and was shared with several million people. At the time I was working on other projects full time. When the article took off, I shifted my focus away from the consulting work I had been doing for years, and began focusing instead on writing full time. One of the by-products of that tremendous signal from this new audience is the book you’re reading right now.

A sizable new audience of strangers had given me a clear message: This was important. Do more of it. When many people we care about tell us what we should be doing, we listen.

This is the result of “audience capture”: how we influence, and are influenced by, those who observe us. We don’t just capture an audience — we are also captured by their feedback. This is often a wonderful thing, provoking us to produce more useful and interesting works. As creators, the signal from our audience is a huge part of why we do what we do.

But it also has a dark side. The writer Gurwinder Boghal has explained the phenomenon of audience capture for influencers, illustrating the story of a young YouTuber named Nicholas Perry. In 2016, Perry began a YouTube channel as a skinny vegan violinist. After a year of getting little traction online, he abandoned veganism, citing health concerns, and shifted to uploading mukbang (eating show) videos of him trying different foods for his followers. These followers began demanding more and more extreme feats of food consumption. Before long, in an attempt to appease his increasingly demanding audience, he was posting videos of himself eating whole fast-food menus in a single sitting.

He found a large audience with this new format. In terms of metrics, this new format was overwhelmingly successful. After several years of following his audience’s continued requests, he amassed millions of followers, and over a billion total views. But in the process, his online identity and physical character changed dramatically as well. Nicholas Perry became the personality Nikocado — an obese parody of himself, ballooning to more than four hundred pounds, voraciously consuming anything his audience asked him to eat. Following his audience’s desires caused him to pursue increasingly extreme feats at the expense of his mental and physical health.

Legacy Lit

Nicholas Perry, left, and Nikocado, right, after several years of building a following on YouTube. Source: Nikocado Avocado YouTube Channel.

Boghal summarizes this cross-directional influence.

When influencers are analyzing audience feedback, they often find that their more outlandish behavior receives the most attention and approval, which leads them to recalibrate their personalities according to far more extreme social cues than those they’d receive in real life. In doing this they exaggerate the more idiosyncratic facets of their personalities, becoming crude caricatures of themselves.

This need not only apply to influencers. We are signal-processing machines. We respond to the types of positive signals we receive from those who observe us. Our audiences online reflect back to us what their opinion of our behavior is, and we adapt to fit it. The metrics (likes, followers, shares, and comments) available to us now on social media allow for us to measure that feedback far more precisely than we previously could, leading to us internalizing what is “good” behavior.

As we find ourselves more and more inside of these online spaces, this influence becomes more pronounced. As Boghal notes, “We are all gaining online audiences.” Anytime we post to our followers, we are entering into a process of exchange with our viewers — one that is beholden to the same extreme engagement problems found everywhere else on social media.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-the-dangerous-real-world-consequences-of-our-online-attention-economy-143050602.html?src=rss

Tor’s shadowy reputation will only end if we all use it

“Tor” evokes an image of the dark web: a place to hire hitmen or buy drugs that, at this point, is overrun by feds trying to catch you in the act. The reality, however, is a lot more boring than that — but it’s also more secure.

The Onion Router, now known simply as Tor, is a privacy-focused network and browser maintained by a nonprofit group. You can download it for free and use it to shop online or browse social media, just like you would on Chrome or Firefox or Safari, but with additional access to unlisted websites ending in .onion. This is what people think of as the “dark web,” because the sites aren’t indexed by search engines. But those sites aren’t an inherently criminal endeavor.

“This is not a hacker tool,” said Pavel Zoneff, director of strategic communications at The Tor Project. “It is a browser just as easy to use as any other browser that people are used to.”

That’s right, despite common misconceptions, Tor can be used for any internet browsing you usually do. The key difference with Tor is that the network hides your IP address and other system information for full anonymity. This may sound familiar, because it’s how a lot of people approach VPNs, but the difference is in the details.

VPNs are just encrypted tunnels hiding your traffic from one hop to another. The company behind a VPN can still access your information, sell it or pass it along to law enforcement. With Tor, there’s no link between you and your traffic, according to Jed Crandall, an associate professor at Arizona State University. Tor is built in the “higher layers” of the network and routes your traffic through separate tunnels, instead of a single encrypted tunnel. While the first tunnel may know some personal information and the last one may know the sites you visited, there is virtually nothing connecting those data points because your IP address and other identifying information are bounced from server to server into obscurity.

In simpler terms: a regular browser connects you directly to your traffic; adding a VPN routes that information through an encrypted tunnel so that your internet service provider can’t see it; and Tor scatters your identity and your search traffic from server to server until it becomes almost anonymous and very difficult to identify.
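The layering is easier to see in miniature. The toy sketch below (ordinary symmetric keys standing in for Tor’s real circuit cryptography, with invented variable names) wraps a request in one layer of encryption per relay, so each hop can peel off only its own layer and no single hop sees both who you are and what you requested.

```python
from cryptography.fernet import Fernet

relay_keys = [Fernet.generate_key() for _ in range(3)]   # entry, middle, exit relays
message = b"GET https://example.com"

# The client wraps the message for the exit relay first, then the middle, then the entry.
wrapped = message
for key in reversed(relay_keys):
    wrapped = Fernet(key).encrypt(wrapped)

# Each relay in turn removes exactly one layer before passing the rest along.
for key in relay_keys:
    wrapped = Fernet(key).decrypt(wrapped)

print(wrapped)  # the original request only emerges at the final hop
```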

Accessing unindexed websites adds extra perks, like secure communication. While a platform like WhatsApp offers encrypted conversations, there could be traces left on the device that the conversation happened if it’s ever investigated, according to Crandall. Tor’s communication tunnels are secure and leave far fewer traces that the conversation ever happened.

Other use cases may include keeping the identities of sensitive populations like undocumented immigrants anonymous, trying to unionize a workplace without the company shutting it down, victims of domestic violence looking for resources without their abuser finding out or, as Crandall said, wanting to make embarrassing Google searches without related targeted ads following you around forever.

Still, with added layers of security can come some additional hiccups, like lag or longer loading times. That could be true for some users depending on what they do online, but anecdotally it's gotten a lot faster in recent years, and users have said they barely notice a difference compared to other browsers. Sameer Patil, associate professor at the School of Computing at the University of Utah, studied this by having students and staff try out Tor as their main browser. “I was personally very surprised at how many sites and things just work fine in the Tor browser. So not only did they work as intended, but they also were fast enough,” Patil said.

But even if online privacy isn’t your main concern personally, using Tor can help support industries that heavily rely on it. By using the anonymous and secure browser, you’re supporting activists, journalists and everyone else’s privacy because the more people that use it, the more secure it gets, according to Patil. If only certain sensitive groups use it, it’ll be easier to deanonymize and ultimately track down identities. When you’re one in a billion using it, that task becomes nearly impossible.

This article originally appeared on Engadget at https://www.engadget.com/tor-dark-web-privacy-secure-browser-anonymous-130048839.html?src=rss

Astrophysicist who claimed to find alien tech may have done the science wrong

Last month, theoretical physicist Avi Loeb made headlines with the sensational claim that tiny spherules recovered from the bottom of the ocean were probably of alien origin. “It’s most likely a technological gadget with artificial intelligence,” he said to The New York Times, which published a story today about the Harvard professor’s contentious claims. Although the biggest scientific breakthroughs often start with a bold hypothesis, Loeb’s peers believe the decorated astrophysicist’s assertions can be called many things — but “good science” isn’t one of them.

Loeb’s proclamations stem from an object that US government sensors logged on January 8th, 2014: a fireball from space that blazed into the western Pacific Ocean off the northeastern coast of Papua New Guinea. Highlighting its logged speed and direction as an anomaly, Loeb and undergraduate assistant Amir Siraj targeted the otherwise inconsequential planetary entry as an object worthy of further investigation.

Fast-forward to last month, when Loeb led a voyage — funded by a crypto entrepreneur — to recover evidence from the fireball’s calculated crash path. Dragging a magnetic sled attached to the expedition boat across the ocean floor, the team recovered a series of tiny spherical objects which Loeb says “appear under a microscope as beautiful metallic marbles.” Preliminary analysis indicated that the sub-millimeter orbs were 84 percent iron, with silicon, magnesium and trace elements comprising the rest. Loeb believes that “as a result of being exposed to the fireball’s heat, the surface of the object likely disintegrated into tiny spherules, similar in number per unit area to those recovered by the expedition.”

Avi Loeb / Medium

Not one to exercise much caution with public pronouncements, Loeb wrote in a Medium post, “Their discovery opens a new frontier in astronomy, where what lay outside the solar system is studied through a microscope rather than a telescope.” He summarized, in an equally dramatic manner, “The discovery of spherules felt like a miracle.” Soon after, CBS News picked up on his excitement and published an attention-grabbing article titled, “Harvard professor Avi Loeb believes he’s found fragments of alien technology.” Loeb has sent the mysterious spheres to Harvard University, the University of California, Berkeley and the Bruker Corporation in Germany for more in-depth analysis.

“It has material strength that is tougher than all space rock that were seen before, and catalogued by NASA,” CBS News reported Loeb as saying earlier this month. “We calculated its speed outside the solar system. It was 60 km per second, faster than 95% of all stars in the vicinity of the sun. The fact that it was made of materials tougher than even iron meteorites, and moving faster than 95% of all stars in the vicinity of the sun, suggested potentially it could be a spacecraft from another civilization or some technological gadget.”

It all sounds fascinating, especially with the resurgent interest in UFOs and the quest to discover signs of alien life. But there’s one problem: The scientific community, by and large, believes Loeb is, if not entirely full of it, practicing something far outside what they’d call science.

Peter Brown, a meteor physicist at Western University in Ontario, said that “several percent” of detected events appear interstellar at first but almost always end up chalked up to a measurement error. Steve Desch, an astrophysicist at Arizona State University, argued at a recent conference that if the object were traveling as fast as the data suggests — one of the points Loeb uses to indicate its origin was from outside our solar system — it would have been wholly incinerated entering the Earth’s atmosphere. Brown and other scientists also highlight Loeb’s lack of engagement with peers who study similar unidentified fireballs.

Brown recently presented data (accepted for publication in The Astrophysical Journal) demonstrating that NASA’s recordings in cases like these often end up being proven untrustworthy. He believes the fireball likely impacted at a slower speed than the recorded data suggested. “If the speed was overestimated, then the object becomes, more or less, within the realm of what we see in terms of other bound solar system objects,” he said. (Loeb retorted by citing an unbendable trust in government data: “They are responsible for national security. I think they know what they are doing.”) The New York Times adds that the government is unlikely to declassify the data that would allow the scientific community to learn how precise (or not) it is.

Avi Loeb / Medium

Regardless of the spherules’ origins, researchers are alarmed by Loeb’s penchant for venturing outside science to make bold (and highly publicized) claims — with his scientific background boosting their perceived legitimacy. The gist of their alarm is that becoming a Harvard-employed astrophysicist doesn’t grant you the wizard-like ability to know the answers to questions the scientific method hasn’t yet confirmed. On the contrary, it’s supposed to mean your peers respect you for exercising restraint and doing quite the opposite. “[Loeb’s claims are] a real breakdown of the peer review process and the scientific method,” Desch said to The New York Times. “And it’s so demoralizing and tiring.”

Loeb’s views about his peers’ harsh response can be summarized in his cited quote from philosopher Arthur Schopenhauer from a recent blog post. “All truth passes through three stages: First, it is ridiculed; second, it is violently opposed; and third, it is accepted as self-evident.” Notably, Loeb seemingly refers to his team’s preliminary findings — with plenty of question marks still intact — as “truth.”

The Oxford English Dictionary defines confirmation bias as “the tendency to interpret new evidence as confirmation of one's existing beliefs or theories.” Loeb’s words and excited tone suggest he knows the answer and that his peers’ criticism stems from their resistance to the new frontier he’s discovered. However, their criticism seems only partially about his specific conclusions; it’s paired with a larger concern about an esteemed cohort jumping to conclusions that fall far outside of the scientific method. “What the public is seeing in Loeb is not how science works,” remarked Desch. “And they shouldn’t go away thinking that.”

This article originally appeared on Engadget at https://www.engadget.com/astrophysicist-who-claimed-to-find-alien-tech-may-have-done-the-science-wrong-214008434.html?src=rss

The best finance and security apps for college students

You may be more or less prepared for the academics of college, but the other life stuff can be an eye-opener. College might be the first time you’re in charge of your own finances, and with new living situations, new jobs and new connections, you may also be expanding the amount of personal data you’re putting out into the world. If you could use a little help with budgeting, remembering passwords or making sure everything you do online is secure, here are the finance and security apps we’ve used, tested and ultimately recommend.

Mint

If you’re new to tracking finances, getting an overview of your banking, credit and loan accounts in one place can be helpful. Mint is a simple and free app that does just that. I tested it for our subscription guide and continue to use it. The interface is intuitive and it’s pretty good at correctly categorizing purchases. The main features, like transaction history, self-budgeting and goal-setting, are available free. For $5 per month, you can have Mint cancel subscriptions on your behalf and you won’t see as many ad links peppered throughout the app (though I’ve never found the ads particularly distracting).

YNAB

For help creating a more formal budget, a few Engadget staffers use YNAB (You Need A Budget) and we recommend it in our guide to student budgeting. It’s based around a theory that imposes four “rules” to improve your money management, and learning those principles now will benefit you long after graduation. The browser and mobile app interfaces are pretty easy to use, and YNAB has a ton of instructional content for newbies that can point you in the right direction when you’re first setting up expense categories, debt trackers and sinking funds. It’s usually $15 per month or $99 per year, but students who can prove they’re in school can get a year for free.

Goodbudget

Between loans, jobs and, if you’re lucky, scholarships and financial aid, a student’s “extra” money can be pretty limited. Goodbudget translates the envelope technique to an app format, earmarking your money for the things you need to pay for. By visualizing what you have and what you need, you can see when there’s room for stuff you want, like going out with friends or decorating your first apartment. Plenty of graphs and sliders help map out your situation, and Goodbudget also offers free online classes for those who want to get better with money (granted, that may be a hard sell when you’re already in school). The free version gives you twenty total envelopes, split between expenses and goals, and lets you add one bank account. For unlimited accounts and envelopes, the paid version is $8 per month or $70 per year.

Acorns investment

Say you indulge in an Iced Toasted Vanilla Oatmilk Shaken Espresso for $5.75. The Acorns investment app rounds up that last 25 cents and deposits it into an investment account, and over time, your money grows. By providing a simple app and recommending just a few different portfolios, Acorns takes some of the complexity out of investing. For students in particular, it’s also easier to invest a few cents here and there than larger chunks of cash when you’re already just trying to get by. The monthly plan defaults to $5 per month with an option of a $3 plan at sign up. Both come with a checking and a retirement savings account in addition to the investment features, so if you’re totally starting fresh, this could prove useful.

1Password

Our senior security reporter, Katie Malone, put 1Password at the top of Engadget’s guide to password managers. Like other password managers, 1Password helps you create unique and complex credentials for every site you use, and then saves them securely so you don’t have to remember them all. It works across most platforms and even lets you share logins and credit card info with other people as needed, which will make it easier to access any family accounts you may need while in school. The security and encryption measures are top-notch, with a zero-knowledge policy that ensures the company doesn’t store your data, as well as a bug bounty program that rewards ethical hackers who discover any vulnerabilities.

Proton VPN

If you study in public places where the WiFi is suspect, a VPN can give you an extra layer of protection. It’s not a cure-all for online security woes, but VPNs do create a protected “tunnel” to keep out people who may otherwise have access to your data, like your internet service provider or hackers targeting public WiFi. Proton VPN is the best overall option not just because it’s easy to use. The Switzerland-based company also enforces a no-log policy and their open-source software continually stands up to independent audits. Unlike some VPNs, it didn’t tank our connection speeds in our tests, either. Proton goes for $10 per month to access servers in 65 countries, or you can get the free version with access to just three.

ProtonMail

Free email services are everywhere, but finding one that isn’t propped up by selling your habits and history to advertisers is almost impossible. And while you might get a school email address, a good personal email will serve you long after access to your alumni mail is discontinued. ProtonMail is focused on privacy: It uses end-to-end encryption, whereas a service like Gmail encrypts messages in transit only. Proton’s open-source encryption methods are independently audited, and since the service is supported by paid subscriptions and not advertising, the company has little incentive to snoop your info. Free plans give you one gigabyte of storage and allow for 150 emails per day, while a $12-per-month subscription grants 500GB of storage and removes email limits.

Signal

As a non-profit, there's no tech giant behind the wheel at Signal, which sets it apart from most other messaging services. A phone number is required for set up, but that’s about all the information Signal ever collects. It’s a favorite of journalists, protestors and people living in unstable territories, but students who realize their communications are no one else’s business will find the app useful, too. Texts, videos and images you send are end-to-end encrypted using open-source protocols, and you can even set messages to expire. Recent additions that enhance group chats may make Signal feel a little more like other messaging apps, but the core structure of the service will always be fundamentally more private than many competitors.

Noonlight

Staying safe in college extends beyond online safety, which is where apps like Noonlight come in. Tinder bought a stake in the app a few years ago to help people in the event of a date gone wrong. Within the app, you’ll find a giant white button that you press and hold in sketchy situations. As long as you hold the button, nothing happens. Let go of it, and unless you enter a secret pin to prove you’re safe, the police will be dispatched to your location. A timeline feature lets you add names and images when you’re meeting someone new. The safety network allows your friends and family to request check-ins and take action when they don’t hear from you. The free version includes all three of the features mentioned above, while the $5-per-month plan adds an iPhone widget and the ability to sync with rideshare apps.

This article originally appeared on Engadget at https://www.engadget.com/best-finance-and-security-apps-for-college-students-130035602.html?src=rss

Natural Language Programming AIs are taking the drudgery out of coding

“Learn to code.” That three-word pejorative is perpetually on the lips and at the fingertips of internet trolls and tech bros whenever media layoffs are announced. A useless sentiment in its own right, but with the recent advent of code generating AIs, knowing the ins and outs of a programming language like Python could soon be about as useful as knowing how to fluently speak a dead language like Sanskrit. In fact, these genAIs are already helping professional software developers code faster and more effectively by handling much of the programming grunt work.

How coding works

Two of today’s most widely used programming languages are Java and Python. The former almost single-handedly revolutionized cross-platform operation when it was released in the mid-’90s and now drives “everything from smartcards to space vehicles,” according to Java Magazine in 2020 — not to mention Wikipedia’s search function and all of Minecraft. The latter actually predates Java by a few years and serves as the code basis for many modern apps like Dropbox, Spotify and Instagram.

They differ significantly in their operation: Java needs to be compiled (having its human-readable code translated into computer-executable machine code) before it can run, while Python is an interpreted language, meaning its human-readable code is converted into machine instructions line by line as the program executes, so it can run without first being compiled. Interpretation allows code to be more easily written for multiple platforms, while compiled code tends to target a specific processor type. Regardless of how they run, the actual code-writing process is nearly identical between the two: somebody has to sit down, crack open a text editor or Integrated Development Environment (IDE) and actually write out all those lines of instruction. And up until recently, that somebody typically was a human.

The “classical programming” writing process of today isn’t that different from the one used in the days of ENIAC, with a software engineer taking a problem, breaking it down into a series of sub-problems, writing code to solve each of those sub-problems in order, and then repeatedly debugging and recompiling the code until it runs. “Automatic programming,” on the other hand, removes the programmer by a degree of separation. Instead of a human writing each line of code individually, the person creates a high-level abstraction of the task and the computer then generates low-level code to address it. This differs from “interactive” programming, which allows you to code a program while it is already running.

Today’s conversational AI coding systems, like what we see in Github’s Copilot or OpenAI’s ChatGPT, remove the programmer even further by hiding the coding process behind a veneer of natural language. The programmer tells the AI what they want programmed and how, and the machine can automatically generate the required code.

Building the tools to build the tools allowing any tool to build tools

Among the first of this new breed of conversational coding AIs was Codex, which was developed by OpenAI and released in late 2021. OpenAI had already deployed GPT-3 (a precursor to the GPT-3.5 that powers the public Bing Chat) by this point, a large language model remarkably adept at mimicking human speech and writing after being trained on billions of words from the public web. The company then fine-tuned that model using 100-plus gigabytes of GitHub data to create Codex. It is capable of generating code in 12 different languages and can translate existing programs between them.

Codex is adept at generating small, simple or repeatable assets, like “a big red button that briefly shakes the screen when clicked” or regular functions like the email address validator on a Google Web Form. But no matter how prolific your prose, you won’t be using it for complex projects like coding a server-side load balancing program — it’s just too complicated an ask.
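For a sense of scale, the email validator mentioned above is the kind of small, self-contained function these systems excel at. The version below is written by hand as an assumed example of that class of output, not Codex's actual response to any prompt.

```python
import re

# Simplified pattern for well-formed addresses; real-world validation is looser in practice.
EMAIL_PATTERN = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def is_valid_email(address: str) -> bool:
    """Return True if the address looks like a well-formed email."""
    return bool(EMAIL_PATTERN.match(address))

print(is_valid_email("student@example.edu"))   # True
print(is_valid_email("not-an-email"))          # False
```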

Google’s DeepMind developed AlphaCode specifically to address such challenges. Like Codex, AlphaCode was first trained on multiple gigabytes of existing GitHub code archives, but was then fed thousands of coding challenges pulled from online programming competitions, like figuring out how many binary strings with a given length don’t contain consecutive zeroes.
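That example challenge reduces to a short dynamic program. Here is one hand-written solution, included for reference rather than as AlphaCode's output:

```python
def count_no_consecutive_zeroes(n: int) -> int:
    """Count binary strings of length n that never contain "00"."""
    ends_in_zero, ends_in_one = 1, 1          # length-1 strings: "0" and "1"
    for _ in range(n - 1):
        # A zero may only follow a one; a one may follow anything.
        ends_in_zero, ends_in_one = ends_in_one, ends_in_zero + ends_in_one
    return ends_in_zero + ends_in_one

print([count_no_consecutive_zeroes(n) for n in range(1, 6)])  # [2, 3, 5, 8, 13]
```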

To do this, AlphaCode will generate as many as a million code candidates, then reject all but the top 1 percent that pass its test cases. The system then groups the remaining programs based on the similarity of their outputs and tests them sequentially until it finds a candidate that successfully solves the given problem. Per a 2022 study published in Science, AlphaCode managed to correctly answer those challenge questions 34 percent of the time (compared to Codex’s single-digit success on the same benchmarks, that’s not bad). DeepMind even entered AlphaCode in a 5,000-competitor online programming contest, where it surpassed nearly 46 percent of the human competitors.
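A drastically simplified sketch of that winnowing step, with hypothetical helper names and candidate programs represented as plain Python callables, might look like this:

```python
from collections import defaultdict

def filter_and_cluster(candidates, test_cases, probe_inputs):
    """Keep candidates that pass the sample tests, then group survivors by behavior."""
    survivors = [program for program in candidates
                 if all(program(inp) == expected for inp, expected in test_cases)]

    clusters = defaultdict(list)
    for program in survivors:
        # Programs that behave identically on fresh probe inputs land in the same bucket,
        # so only one representative per cluster needs to be tried against the judge.
        signature = tuple(program(inp) for inp in probe_inputs)
        clusters[signature].append(program)
    return list(clusters.values())
```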

Now even the AI has notes

Just as GPT-3.5 serves as a foundational model for ChatGPT, Codex serves as the basis for GitHub’s Copilot AI. Trained on billions of lines of code assembled from the public web, Copilot offers cloud-based AI-assisted coding autocomplete features through a subscription plugin for the Visual Studio Code, Visual Studio, Neovim, and JetBrains integrated development environments (IDEs).

Initially released as a developer’s preview in June of 2021, Copilot was among the very first coding-capable AIs to reach the market. More than a million devs have leveraged the system in the two years since, GitHub's VP of Product Ryan J Salva told Engadget during a recent interview. With Copilot, users can generate runnable code from natural language text inputs as well as autocomplete commonly repeated code sections and programming functions.

Salva notes that prior to Copilot’s release, GitHub’s previous machine-generated coding suggestions were only accepted by users 14 - 17 percent of the time, “which is fine. It means it was helping developers along.” In the two years since Copilot’s debut, that figure has grown to 35 percent, “and that's netting out to just under half of the amount of code being written [on GitHub] — 46 percent by AI to be exact.”

“[It’s] not a matter of just percentage of code written,” Salva clarified. “It's really about the productivity, the focus, the satisfaction of the developers who are creating.”

As with the outputs of natural language generators like ChatGPT, the code coming from Copilot is largely legible, but like any large language model trained on the open internet, GitHub made sure to incorporate additional safeguards against the system unintentionally producing exploitable code.

“Between when the model produces a suggestion and when that suggestion is presented to the developer,” Salva said, “we at runtime perform … a code quality analysis for the developer, looking for common errors or vulnerabilities in the code like cross-site scripting or path injection.”
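As a rough illustration only (GitHub hasn't published the internals of that analysis), a check of this kind amounts to scanning a suggestion for known-dangerous patterns before it is surfaced to the developer. The red-flag patterns below are toy stand-ins, not GitHub's actual rules.

```python
import re

# Toy red-flag patterns; a real analyzer would be far more sophisticated.
RED_FLAGS = {
    "possible path traversal": re.compile(r"open\([^)]*\+"),
    "possible SQL injection": re.compile(r"execute\([^)]*%"),
}

def audit_suggestion(snippet: str) -> list[str]:
    """Return the names of any red flags found in a suggested snippet."""
    return [issue for issue, pattern in RED_FLAGS.items() if pattern.search(snippet)]

print(audit_suggestion("open(base_dir + user_supplied_name)"))  # ['possible path traversal']
```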

That auditing step is meant to improve the quality of recommended code over time rather than monitor or police what the code might be used for. While Copilot can help developers create the code that makes up malware, the system won’t prevent it. “We've taken the position that Copilot is there as a tool to help developers produce code,” Salva said, pointing to the numerous white hat applications for such a system. “Putting a tool like Copilot in their hands … makes them more capable security researchers,” he continued.

As the technology continues to develop, Salva sees generative AI coding expanding far beyond its current technological bounds. That includes “taking a big bet” on conversational AI. “We also see AI-assisted development really percolating up into other parts of the software development life cycle,” he said, like using AI to autonomously repair CI/CD build errors, patch security vulnerabilities, or have the AI review human-written code.

“Just as we use compilers to produce machine-level code today, I do think they'll eventually get to another layer of abstraction with AI that allows developers to express themselves in a different language,” Salva said. “Maybe it's natural language like English or French, or Korean. And that then gets ‘compiled down’ to something that the machines can understand,” freeing up engineers and developers to focus on the overall growth of the project rather than the nuts and bolts of its construction.

From coders to gabbers

With human decision-making still firmly wedged within the AI programming loop, at least for now, we have little to fear from having software write software. As Salva noted, computers already do this to a degree when compiling code, and digital gray goo has yet to take over because of it. Instead, the most immediate challenges facing programming AI mirror those of generative AI in general: inherent biases skewing training data, model outputs that violate copyright, and concerns surrounding user data privacy when it comes to training large language models.

GitHub is far from alone in its efforts to build an AI programming buddy. OpenAI’s ChatGPT is capable of generating code — as are the already countless indie variants being built atop the GPT platform. So too is Amazon’s AWS CodeWhisperer system, which provides much of the same autocomplete functionality as Copilot, but optimized for use within the AWS framework. After multiple requests from users, Google incorporated code generation and debugging capabilities into Bard this past April as well, ahead of its ecosystem-wide pivot to embrace AI at I/O 2023 and the release of Codey, Alphabet’s answer to Copilot. We can’t be sure yet what generative coding systems will eventually become or how they might impact the tech industry — we could be looking at the earliest iterations of a transformative democratizing technology, or it could be Clippy for a new generation.

This article originally appeared on Engadget at https://www.engadget.com/natural-language-programming-ais-are-taking-the-drudgery-out-of-coding-140015594.html?src=rss

‘Black Mirror’ finds new life in our modern hellscape

In the three years since Black Mirror's previous (and somewhat disappointing) season, we've lived through a global pandemic, watched a US president trigger a mob attack on the Capitol, and AI has gone mainstream. We’re barreling towards the future faster than ever, but loneliness remains a key issue in modern life. What better time for Charlie Brooker to bring back his feel-bad series for another season?

In 2019, I argued that Brooker was running out of things to say with the show, despite his deft ability to predict our tech-infused dystopia with Black Mirror's first few seasons. Something was lost with his transition to Netflix, which led to bigger budgets and more notable stars, but less of the sharp insight that made the show so memorable. (At least we got “San Junipero,” though.) Thankfully, a few years away from the project seems to have helped. Season six of Black Mirror, which hit Netflix on June 15th, is the series at its best: Shocking, incisive and often hilarious. It also finds new life by looking back into the past frequently, as well as exploring horror more directly than before.

Minor spoilers ahead of Black Mirror season six.

"Joan is Awful" is the perfect way to kick off the new season – it's the most stereotypical Black Mirror setup. A disaffected big tech HR worker is surprised to find a show on Streamberry (an obvious Netflix stand-in) that recounts her daily life. That includes the cringeworthy layoff of a colleague (and supposed friend), and a therapist appointment where she reveals she's dissatisfied with her fiance.

It's a relatable Millennial malaise setup, the sort of thing Charlie Brooker captured so well early on in the series. Joan, played by Schitt's Creek star Annie Murphy, says she doesn't feel like a main character in her own life, so she coasts through everything on autopilot, almost always taking the easiest and least confrontational option. You'd think that it would be illegal for a network to just recount her life for all of its subscribers — turns out, she should have read the Terms of Service more closely.

I won't spoil where, exactly, that episode goes, or the familiar faces you end up seeing. But as the twists revealed themselves and it reached its inevitable bonkers conclusion, I couldn't help but smile. It was like Charlie Brooker shouting at me through the screen, "Black Mirror is back, baby!"

Netflix

What's truly surprising, though, is that this season of the series also feels refreshing in the ways it veers away from what we expect. "Loch Henry" is a fascinating exploration of our obsession with true crime dramas, and the impact they can have on the people affected by those stories. But aside from the presence of Streamberry as a service thirsty for true crime narratives, the story is more cultural than technology criticism.

Sure, we have more tools than ever to make true crime documentaries – a drone captures sweeping aerial shots, and digital cameras are perfectly suited to shooting in dimly lit basements – but the desire to tell and consume these stories is purely human. And when it comes to macabre drama, we can't help ourselves.

Black Mirror also gains some fresh perspective by exploring the past — or at least, timelines without smartphones and ubiquitous fast cellular internet. “Beyond the Sea” is an elegant yet brutal story set in 1969, focusing on two astronauts on a deep space mission who also wirelessly control mechanical bodies back on Earth. The episode is less interested in how any of that tech works — just accept the mystery, folks — and more in how it affects those astronauts, their families and society as a whole.

It's not too surprising when deranged hippie cultists appear, believing that mechanoid people are an affront to humanity. Both astronauts, played by Aaron Paul (Breaking Bad) and former heartthrob Josh Hartnett, are also trapped by the societal norms of the '60s. They may be world-class astronauts, but they're also men who can't share their feelings properly, who hit their kids to "keep them in line" and who have rigid expectations of the women in their lives. “Beyond the Sea” may not fully earn its tragic conclusion, but the journey is certainly powerful.

I was surprised to see how much Black Mirror leans into pure horror this season: “Demon 79” is a direct callback to '70s horror films, from its explosive score to its overall aesthetic. The story revolves around an immigrant shoe sales clerk who inadvertently summons a demon, and is tasked with murdering three people to prevent the apocalypse. There isn't a sliver of tech involved — perhaps that’s why the opening credits refer to it as a "Red Mirror" episode. But it's still a fun horror romp, with plenty of subtext around the South Asian experience in '70s London (thanks to co-writer Bisha K. Ali, who also served as the showrunner for Ms. Marvel).

“Mazey Day” also brings Black Mirror into fresh territory, but you're better off discovering how for yourself. I can reveal that its story of a young paparazzi photographer (Zazie Beetz) is a refreshing glimpse of the mid-2000s, filled with then-cutting-edge tech (the square iPod Shuffle! Dashboard GPS!) but also plenty of old-school touches. You still needed big paper map books in that era, because GPS wasn't always reliable. And even though high-speed internet was widely available, it wasn't unusual to find people still relying on dial-up in 2005.

It’s impossible for Black Mirror to feel as fresh as it did over a decade ago. Since then, the downsides of Big Tech have become impossible to ignore. But at least now, especially with some extra time to craft these episodes, it seems like Charlie Brooker has found something new to say with the show.

This article originally appeared on Engadget at https://www.engadget.com/black-mirror-season-six-review-netflix-130015184.html?src=rss

Everyone is selling VPNs, and that's a problem for security

Whatever YouTube rabbit hole you’ve spiraled down lately — gaming playthroughs, political commentary, niche eight-hour video essays — you’ve encountered an ad for virtual private network, or VPN, services. The influencers promise military-grade encryption and streaming content from anywhere, as long as you use code FOLLOWME10 at checkout so they get their cut.

It’s not just anecdotal that VPN ads are everywhere on YouTube. Since the beginning of 2016, VPN companies have collectively sponsored about 247,000 YouTube videos, according to Daniel Conn, co-founder of influencer marketing consulting firm ThoughtLeaders. Almost none came up before then, signaling rapid growth as both influencer marketing and VPN companies took off.

For the YouTubers, it’s a lucrative and consistent way to fund their aspirations; for VPN providers, it’s helping to bring the obscure security product into the mainstream. But for the casual viewer, the sharp spike in VPN ads adds to the confusion and jargon around cybersecurity — and it could be misleading us on how secure we really are.

“If you do think of it like education, it might be the most pervasive form of security education out there,” said Dave Levin, assistant professor in computer science at the University of Maryland.

Researchers at the University of Maryland took a random sample of those hundreds of thousands of ads to better understand what these influencers are saying about security. While not explicitly inaccurate, most of the ads featured vague or exaggerated claims about what VPNs can do, according to Michelle Mazurek, an associate professor in computer science at the university.

All a VPN can really do is mask your IP address and the identity of your computer on the network by creating an encrypted "tunnel" that prevents your internet service provider from accessing data about your browsing history. A VPN can’t keep your identity secret, protect you from financial exploitation or deliver on “military-grade encryption” and the other marketing terms these companies use. Military-grade encryption refers to AES-256, but that’s become an industry standard, and it won’t protect you from security threats like phishing attacks.
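To put the “military-grade” claim in perspective, here is a minimal sketch showing that AES-256 is an ordinary, freely available standard any developer can reach for. It assumes Python and the open-source cryptography package; the key, nonce and message are purely illustrative, not anything taken from a real VPN product.

```python
# Illustrative only: "military-grade" AES-256 in a few lines of Python,
# using the widely available open-source "cryptography" package.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # a 256-bit key, i.e. "AES-256"
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # 96-bit nonce, standard for GCM

message = b"hello, your ISP can't read this"
ciphertext = aesgcm.encrypt(nonce, message, None)   # encrypt + authenticate
assert aesgcm.decrypt(nonce, ciphertext, None) == message
```

In other words, the encryption itself is table stakes; what the ads gloss over is everything AES can’t do for you, like stopping you from clicking a phishing link.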

Still, it’s sold as a one-step security solution, when it’s really just the start of what you can do to protect yourself online. The companies and the ads are “overselling what a functional one could do,” Omer Akgul, the PhD student at University of Maryland who led the research paper on VPN advertising, said. “It's problematic that users think they're getting protections where they really aren't.”

Most advertising comes with these caveats, but in a field as high-risk and difficult to understand as security, the exaggerated claims can be damaging. If a YouTuber sells you on a new electric toothbrush, you can get first-hand experience deciding whether it’s worth your money. You can feel whether it leaves your teeth clean, see real results at your next dentist appointment and easily compare it to other options on the market. But security isn’t tangible. One VPN service might be more user-friendly than the next, but we rely on recommendations from others to tell us whether or not one is “more secure.”

The power behind influencer marketing lies in those recommendations. We trust the people we follow as we build parasocial relationships and see them advertise the same services over and over again. According to the UMD research, influencers use this to tailor their approach to VPN ads: a far-right conspiracy channel will tout a VPN’s privacy protections against government snooping, while a movie reviewer will say the VPN can help you access streaming platforms in different countries. “Because YouTubers know who their audiences are, they can frame it in such a way that their audience would be interested or understand,” Akgul said.

Influencers tend to be tight-lipped about these advertising relationships because talking about them can put future earnings in jeopardy. But according to Conn, the influencers he’s encountered generally like working with VPN providers because the deals can be so lucrative. And for VPNs, the competition to secure top converters is fierce, complete with exclusivity periods that prevent top YouTubers from working with competitors. The providers are also actively recruiting, with companies like Surfshark, NordVPN and ExpressVPN all touting open calls for influencers to sell their services.

“It's a battleground,” Conn said. “Because of these exclusivity clauses, it's a race between them to scoop up inventory, because effectively you're blocking your competitor from the advertising space as well with those clauses. It’s a very aggressive market for VPNs.”

If you’re looking to hide your internet data from your ISP, want to stream Netflix abroad or are connecting to an untrusted public network, a VPN would be a worthwhile investment. But just because you’ve seen more ads online doesn’t mean the use cases for VPNs have changed. Plus, as VPN sponsorships become a more lucrative way for influencers to make money online, you should probably be even more skeptical of both the advertisements and the providers themselves.

This article originally appeared on Engadget at https://www.engadget.com/youtube-influencer-selling-vpns-security-problems-153046206.html?src=rss

Pocket users can now create multiple collections of articles, videos and websites

Read-it-later service Pocket has unveiled some new features, including the option to create private lists of saved articles, videos and websites. Pocket Lists are only available in the US on the web for now, but the feature will be available globally starting next month and on mobile later this year.

You'll be able to create multiple lists with titles and descriptions. In the near future, you'll have the option to add several items to a list at once and attach notes to help you remember why an item is there. Later this year, Pocket will roll out the option to publish lists and share them with other users.

The Pocket team suggests that you might set up lists for things like recipes, trip planning or simply stuff that puts a smile on your face for whenever you need it. This is a handy update from Pocket, particularly for those who like to keep things organized. You might think of it as a bit like having bookmark folders in Pocket or a different place to save Pinterest-style collections.

Elsewhere, Pocket has built a new version of its iOS app with the aim of rolling out features more rapidly — the plan is to release updates every two weeks. You'll need to be on at least iOS 16 to use the latest app, which offers personalized recommendations and a more streamlined user interface, Pocket says. The My List tab is now called Saves, and it will offer access to features such as search, tagged items, favorites and a way to listen to audio versions of articles all in one place. One other handy update means that you'll be able to swiftly archive items with a swipe.

On Android, there's a very welcome update rolling out today. Pocket will now save log-in credentials for websites you've saved stuff from, so you'll no longer need to sign in every time you visit them. While in article view, you'll be able to move between saved items using Previous and Next buttons.

Pocket, which Mozilla bought back in 2017, added that it has removed some features. The team plans to bring back some of those within a few months, such as the option to highlight articles. Other features are gone for good, however, including the ability to recommend items to other users, which has been removed in favor of lists. To that end, here's hoping Pocket rolls out the option to share lists fairly swiftly.

This article originally appeared on Engadget at https://www.engadget.com/pocket-users-can-now-create-multiple-collections-of-articles-videos-and-websites-160043227.html?src=rss

The government is very hackable, and they have your data

Data breaches and security failures happen every day. There’s little we can do about that if we want to participate in modern society, except perhaps swap out the companies we interact with for competitors we presume to be more secure. But there’s one entity we have no choice about interacting with, no matter how high-profile its security incidents become: the federal government.

In 2015, the Office of Personnel Management announced a breach that leaked background investigation records, impacting 21.5 million individuals, according to the agency. The highly publicized SolarWinds hack discovered in 2020 exposed government and business records to Russian state-sponsored hackers. Earlier this year, the US Marshals Service, a division of the Department of Justice, became a target when hackers stole personal information about investigation targets, personnel and more.

Those attacks were targeted, usually seeking out some type of sensitive state information. But we all have sensitive information stored across federal agencies, like our Social Security numbers or home addresses. Even more is at stake if you use federal services like Medicare, student loans or SNAP benefits. Unless you’re reading this while living off the grid, you have no choice but to give the federal government access to your personal information in exchange for certain services.

“If we want to live in the information age, and we're using some of these systems, we are inherently giving up control,” Kevin Cleary, clinical assistant professor of management science and systems at University at Buffalo, told Engadget. “You have to trust that agency has put forward all the best controls and practices.”

In response, the federal government has developed agencies like the Cybersecurity and Infrastructure Security Agency (CISA) to lead better security initiatives across departments. In part, the agency is meant to make you feel a little better about keeping your data on federal servers by setting higher standards for how it is safeguarded. According to Michael Duffy, associate director of the cybersecurity division at CISA, the agency has spearheaded more progress since its establishment in 2018 than he has seen at any other point in his federal cybersecurity career.

So, things are improving, and you can probably trust the federal government to keep your data safe in roughly the same way you trust the companies you interact with every day. What makes the government different, though, is that it’s a high-profile target: adversarial countries want in on state secrets. At the same time, it’s hard to prioritize spending on security measures. Getting taxpayer funds to fill a pothole on your local highway is hard enough, and that damage is tangible and obvious; the benefits of security spending are difficult to quantify until an attack occurs. In other words, the value of a security investment isn’t proven until it’s already too late.

This has gotten better, and security investments in the federal government largely trend upward. Still, it’s not enough. “Sometimes their budgets don't allow them to take every step or do everything that they would like to do, because you just simply don't have the money,” said Marisol Cruz Cain, director of information technology and cybersecurity at the Government Accountability Office (GAO).

But part of the reason the federal government can appear less secure is its obligation to be transparent. There’s a responsibility to share lessons learned after an incident and to make sure citizens know what happened. That’s actually a big part of CISA’s job. “We are really looking at ways that we are making it more acceptable to raise the hand and say this is the way that we were attacked or an incident occurred,” Duffy said.

The government also interacts with a ton of outside businesses, so if a contractor experiences a breach or security incident, data held in federal systems could be exposed. That opens up a slew of new attack vectors, and new possibilities for malpractice.

You can actually see how secure certain agencies are thanks to the GAO and legislation like the Federal Information Technology Acquisition Reform Act. The latter documents tech modernization efforts across major agencies, including cyber readiness. The GAO, for its part, audits cybersecurity efforts and develops privacy impact assessments: publicly available descriptions of what information an agency collects, how it uses it and more.

But with all these audits comes a relatively bleak conclusion: agencies aren’t evaluating their policies and procedures to make sure high-profile incidents don’t become a regular occurrence, Cruz Cain said. Your information will be on those servers whether you like it or not.

This article originally appeared on Engadget at https://www.engadget.com/the-government-is-very-hackable-and-they-have-your-data-163034576.html?src=rss