Astronomers have used advanced instruments to calculate a more accurate age for Maisie’s galaxy, discovered by the James Webb Space Telescope (JWST) in June 2022. Although the galaxy isn’t quite as old as initially estimated, it’s still one of the oldest on record, dating to 390 million years after the Big Bang — meaning we see it as it was roughly 13.4 billion years ago. That’s a mere 70 million years younger than JADES-GS-z13-0, the (current) oldest-known system.
A team led by University of Texas at Austin astronomer Steven Finkelstein discovered the system last summer. (The name “Maisie’s galaxy” is an ode to his daughter because they spotted it on her birthday.) The group initially estimated that we were seeing it from just 290 million years after the Big Bang, but analyzing the galaxy with more advanced equipment revealed we’re seeing it about 100 million years later than that. “The exciting thing about Maisie’s galaxy is that it was one of the first distant galaxies identified by JWST, and of that set, it’s the first to actually be spectroscopically confirmed,” said Finkelstein.
The spectroscopic confirmation came courtesy of JWST’s Near-Infrared Spectrograph (NIRSpec), with observations conducted as part of the Cosmic Evolution Early Release Science Survey (CEERS). NIRSpec “splits an object’s light into many different narrow frequencies to more accurately identify its chemical makeup, heat output, intrinsic brightness and relative motion.” Redshift — the shift of light toward longer (redder) wavelengths, indicating motion away from the observer — held the key to more accurate dating than the original photometry-based estimate. The new measurements assigned a redshift of z=11.4 to Maisie’s galaxy, helping the researchers settle on the revised estimate of 390 million years after the Big Bang.
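For the curious, converting a spectroscopic redshift into a cosmic age is a standard calculation. Here’s a minimal sketch using the astropy library’s built-in Planck 2018 cosmology (my illustration, not the CEERS team’s actual pipeline):

```python
# Minimal sketch: convert a measured redshift into the age of the universe
# at the time the light we now see was emitted, using astropy's built-in
# Planck 2018 cosmology.
from astropy.cosmology import Planck18

z = 11.4               # spectroscopic redshift assigned to Maisie's galaxy
age = Planck18.age(z)  # cosmic age at that redshift
print(f"z = {z} -> {age.to('Myr'):.0f} after the Big Bang")  # ~400 Myr
```

With those parameters the result lands near 400 million years, in line with the article’s 390-million-year figure; the exact number shifts slightly with the cosmological model assumed.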
The astronomers also examined CEERS-93316, a galaxy initially estimated to date from just 235 million years after the Big Bang — which would have made it astonishingly old. Further study revealed a redshift of z=4.9, which places it at a mere one billion years after the Big Bang. The faulty first estimate was understandable: The galaxy emitted an unusual amount of light in narrow frequency bands associated with oxygen and hydrogen, making it appear bluer than it was.
Finkelstein chalks up the miss to bad luck. “This was a kind of weird case,” he said. “Of the many tens of high redshift candidates that have been observed spectroscopically, this is the only instance of the true redshift being much less than our initial guess.” Finkelstein added, “It would have been really challenging to explain how the universe could create such a massive galaxy so soon. So, I think this was probably always the most likely outcome, because it was so extreme, so bright, at such an apparent high redshift.”
If you've been looking to pick up Samsung's new Galaxy Z Flip 5, the foldable phone is now down to $900 at Amazon for a 256GB model. That's a $100 discount for a device that only officially went on sale last week. You'll just need to clip an on-page coupon to see the deal at checkout.
Previously, Amazon ran a pre-order deal that bundled the 512GB version of the Z Flip 5 with a $150 Amazon gift card for $1,000. This new offer isn't quite as strong of a value, but it's the first cash discount we've seen for the device. If you shop at Amazon regularly, you can still get the 256GB model with that $150 gift card, but you have to pay $1,000 upfront.
In any event, we gave the Galaxy Z Flip 5 a review score of 88 earlier this month, and we currently list it as the "best foldable for selfies" in our guide to the best smartphones. As our Deputy Editor Cherlynn Low notes in her review, the big upgrade this year is a roomier cover display; at 3.4 inches, it's much more useful for replying to texts, checking notifications, using apps, and yes, taking selfies without having to physically unfold the phone. You have to jump through a few hoops to get any app to work in full on the outer display, but once you do, it becomes a bit more versatile.
Beyond that, there's a new hinge that lets the whole thing fold flat when closed. The device is still capable as a "normal" phone, with a flagship-level Snapdragon 8 Gen 2 processor and a vibrant 6.7-inch OLED interior display. Samsung also promises four years of OS updates and five years of security patches, which is more extensive than many Android manufacturers.
The Galaxy Z Flip 5 is still a foldable, so you'll have to take more care than usual when handling it. Its IPX8 water-resistance rating means it can survive a brief dunking, but it may be more susceptible to damage from dust or sand. Its camera performance isn't as impressive as the best standard phones in its price range, particularly in low-light settings, and its battery life is only so-so by comparison. There's a visible crease on the interior display as well. Nevertheless, this is the best flip-style foldable you can buy, and this deal makes it a little more affordable.
AI and climate change represent two ways humans may ravage life as we know it on Earth, but the former can also help with the consequences of the latter. The California Department of Forestry and Fire Protection (Cal Fire) revealed a new program today that uses AI to detect wildfires. Created in partnership with the University of California San Diego, the Alert California AI program takes feeds from 1,032 360-degree rotating cameras and uses AI to “identify abnormalities within the camera feeds.” It then notifies emergency services and other authorities to check if a potential blaze warrants a response.
The program, launched in July, has already quelled at least one potential wildfire, according to Reuters. A camera reportedly recorded a fledgling fire burning at 3 am in the remote Cleveland National Forest east of San Diego. The AI spotted the blaze and alerted a fire captain “who called in about 60 firefighters including seven engines, two bulldozers, two water tankers and two hand crews.” Cal Fire says the flames were extinguished within 45 minutes.
The Alert California technology website says the program uses LiDAR scans taken from airplanes and drones to create “equally precise, three-dimensional information about scanned surfaces.” It combines this with the physical traits of tree species to learn more about California’s forest biomass and carbon content. Cal Fire says the machine-learning model leverages petabytes of camera data to differentiate between smoke and other airborne particles.
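Cal Fire and UCSD haven’t published the model’s internals, but the broad shape of such a system is familiar. Here’s a purely illustrative sketch (the model, threshold and function names are all my own assumptions) of scoring camera frames for smoke and escalating likely detections for human confirmation:

```python
# Illustrative only: score each camera frame for smoke and flag anomalies
# for emergency services to confirm. Not Cal Fire's actual pipeline.
import torch
import torchvision.models as models
import torchvision.transforms as T

# Hypothetical binary classifier (smoke vs. other airborne particles);
# in practice this would be fine-tuned on labeled wildfire camera footage.
model = models.resnet18(num_classes=2)
model.eval()

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])

def smoke_probability(frame):
    """frame: an RGB PIL image pulled from one of the rotating cameras."""
    with torch.no_grad():
        logits = model(preprocess(frame).unsqueeze(0))
        return torch.softmax(logits, dim=1)[0, 1].item()

# A detection loop would then notify authorities above some threshold:
# if smoke_probability(frame) > 0.9: alert_emergency_services(camera_id)
```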
The system was developed by UCSD engineers using AI from the California-based company DigitalPath. Cal Fire has invested over $20 million in the program over the past four years and promises an additional $3,516,000 in the near future.
“We’re in extreme climate right now. So we give them the data, because this problem is bigger than all of us,” Neal Driscoll, a geology and geophysics professor at UCSD who serves as the program’s principal investigator, told Reuters. “We need to use technology to help move the needle, even if it’s a little bit.” However, Driscoll adds that the current sample size is too small to determine the program’s overall effectiveness.
You can check out Alert California’s “camera quilt” on your computer or mobile device. The website displays a grid of the remotely operated live camera views from across the region.
This article originally appeared on Engadget at https://www.engadget.com/california-deploys-ai-to-detect-wildfires-before-they-start-spreading-194535845.html?src=rss
X is giving advertisers new ways to have some control over what type of content can appear near their ads. The company formerly known as Twitter introduced new “sensitivity settings” that allow advertisers to choose between different types of content filtering for their ads.
The new controls arrive as X is increasingly desperate to win back advertisers. The company’s ad revenue has dropped 50 percent since Elon Musk took over as brands cut spending on the platform amid concerns about the rise of hate speech and other unsavory content. Since then, watchdog groups have reported several instances of ads from major brands being placed near neo-Nazi accounts, Holocaust deniers and other previously suspended users.
With the new tool, X says it “will use machine learning to reduce adjacency to varying levels of content according to a brand’s sensitivity threshold in an upcoming campaign.” For now, advertisers can choose between two settings: “conservative” and “standard.” The company notes that all rule-breaking content is meant to be excluded from ads regardless of which setting advertisers have opted into.
Under the most restrictive “conservative” setting, ads would be excluded from appearing near “targeted hate speech, sexual content, gratuitous gore, excessive profanity, obscenity, spam and drugs” in the “For You” timeline. The “standard” option would avoid the same topics but allow spam and drug-related content, according to an example shared by X. The company also plans to add a “relaxed” setting for advertisers who want to “maximize reach” with the fewest limits on what can appear near their ads.
The update isn’t the first time X has introduced tools to promote brand safety. The company previously added other keyword-based “adjacency controls” that were also meant to limit ad-buyers’ exposure to problematic content. But those changes, introduced in December, seem to have had little effect on X’s ad business.
Though the company has repeatedly claimed that it has successfully limited the reach of hate speech on its site, researchers have said otherwise. Last month, Bloomberg reported on research from the Center for Countering Digital Hate (CCDH) indicating that hate speech has surged, both in sheer volume and in engagement, since Musk’s takeover of the company. X disputed the findings and is now suing CCDH, alleging the group “illegally” scraped data.
Even today, popular Western culture toys with the idea of talking animals, though often through a lens of technology-empowered speech rather than supernatural force. The dolphins from both Seaquest DSV and Johnny Mnemonic communicated with their bipedal contemporaries through advanced translation devices, as did Dug the dog from Up.
We’ve already got machine-learning systems and natural language processors that can translate human speech into any number of existing languages, and adapting that process to convert animal calls into human-interpretable signals doesn’t seem that big of a stretch. However, it turns out we’ve got more work to do before we can converse with nature.
What is language?
“All living things communicate,” an interdisciplinary team of researchers argued in 2018’s On understanding the nature and evolution of social cognition: a need for the study of communication. “Communication involves an action or characteristic of one individual that influences the behavior, behavioral tendency or physiology of at least one other individual in a fashion typically adaptive to both.”
From microbes, fungi and plants on up the evolutionary ladder, science has yet to find an organism that exists in such extreme isolation as to not have a natural means of communicating with the world around it. But we should be clear that “communication” and “language” are two very different things.
“No other natural communication system is like human language,” argues the Linguistics Society of America. Language allows us to express our inner thoughts and convey information, as well as request or even demand it. “Unlike any other animal communication system, it contains an expression for negation — what is not the case … Animal communication systems, in contrast, typically have at most a few dozen distinct calls, and they are used only to communicate immediate issues such as food, danger, threat, or reconciliation.”
That’s not to say that pets don’t understand us. “We know that dogs and cats can respond accurately to a wide range of human words when they have prior experience with those words and relevant outcomes,” Dr. Monique Udell, Director of the Human-Animal Interaction Laboratory at Oregon State University, told Engadget. “In many cases these associations are learned through basic conditioning,” Dr. Udell said — like when we yell “dinner” just before setting out bowls of food.
Whether or not our dogs and cats actually understand what “dinner” means outside of the immediate Pavlovian response remains to be seen. “We know that at least some dogs have been able to learn to respond to over 1,000 human words (labels for objects) with high levels of accuracy,” Dr. Udell said. “Dogs currently hold the record among non-human animal species for being able to match spoken human words to objects or actions reliably,” but it’s “difficult to know for sure to what extent dogs understand the intent behind our words or actions.”
Dr. Udell continued: “This is because when we measure a dog or cat’s understanding of a stimulus, like a word, we typically do so based on their behavior.” You can teach a dog to sit with both English and German commands, but “if a dog responds the same way to the word ‘sit’ in English and in German, it is likely the simplest explanation — with the fewest assumptions — is that they have learned that when they sit in the presence of either word then there is a pleasant consequence.”
Hush, the computers are speaking
Natural language processing (NLP) is the branch of AI that enables computers and algorithmic models to interpret text and speech, including the speaker’s intent, the same way we meatsacks do. It combines computational linguistics, which models the syntax, grammar and structure of a language, with machine-learning models, which “automatically extract, classify, and label elements of text and voice data and then assign a statistical likelihood to each possible meaning of those elements,” according to IBM. NLP underpins the functionality of every digital assistant on the market. Basically, any time you’re speaking at a “smart” device, NLP is translating your words into machine-understandable signals and vice versa.
The field of NLP research has undergone a significant evolution in recent years, as its core systems have migrated from older Recurrent and Convolutional Neural Networks toward Google’s Transformer architecture, which greatly increases training efficiency.
Dr. Noah D. Goodman, Associate Professor of Psychology, Computer Science and Linguistics at Stanford University, told Engadget that, with RNNs, “you'll have to go time-step by time-step or like word by word through the data and then do the same thing backward.” In contrast, with a transformer, “you basically take the whole string of words and push them through the network at the same time.”
“It really matters to make that training more efficient,” Dr. Goodman continued. “Transformers, they're cool … but by far the biggest thing is that they make it possible to train efficiently and therefore train much bigger models on much more data.”
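To make the contrast concrete, here’s a minimal PyTorch sketch (my illustration, not from the article). The RNN has to thread a hidden state through the sequence one step at a time, while the Transformer encoder pushes the whole string of words through the network at once:

```python
# Contrast: sequential RNN processing vs. a Transformer's parallel pass.
import torch
import torch.nn as nn

seq_len, batch, dim = 16, 2, 32
tokens = torch.randn(seq_len, batch, dim)  # stand-in for embedded words

# RNN: an inherently sequential loop over time steps.
rnn_cell = nn.RNNCell(dim, dim)
h = torch.zeros(batch, dim)
for t in range(seq_len):
    h = rnn_cell(tokens[t], h)  # each step depends on the previous one

# Transformer: self-attention sees every position simultaneously.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=4), num_layers=1
)
out = encoder(tokens)  # one parallel pass over the entire sequence
```

That per-step loop is what resists parallelization during training; the Transformer’s single pass is what makes it practical to train much bigger models on much more data.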
Dr. Jeffrey Lucas, professor in the Biological Sciences department at Purdue University, told Engadget that the Paridae call “is one of the most complicated vocal systems that we know of. At the end of the day, what the [field’s voluminous body of research] papers are showing is that it's god-awfully complicated, and the problem with the papers is that they grossly under-interpret how complicated [the calls] actually are.”
These parids often live in socially complex, heterospecific flocks, mixed groupings that include multiple songbird and woodpecker species. The complexity of the birds’ social system is correlated with an increased diversity in communications systems, Dr. Lucas said. “Part of the reason why that correlation exists is because, if you have a complex social system that's multi-dimensional, then you have to convey a variety of different kinds of information across different contexts. In the bird world, they have to defend their territory, talk about food, integrate into the social system [and resolve] mating issues.”
The chickadee call consists of at least six distinct notes set in an open-ended vocal structure, which is both monumentally rare in non-human communication systems and the reason for the call’s complexity. An open-ended vocal system means that “increased recording of chick-a-dee calls will continually reveal calls with distinct note-type compositions,” explained the 2012 study, Linking social complexity and vocal complexity: a parid perspective. “This open-ended nature is one of the main features the chick-a-dee call shares with human language, and one of the main differences between the chick-a-dee call and the finite song repertoires of most songbird species.”
Dolphins have no need for kings
Training language models isn’t simply a matter of shoving in large amounts of data. When training a model to translate an unknown language into one you speak, you need at least a rudimentary understanding of how the two languages correlate with one another, so that the translated text retains the proper intent of the speaker.
“The strongest kind of data that we could have is what's called a parallel corpus,” Dr. Goodman explained, which is basically having a Rosetta Stone for the two tongues. In that case, you’d simply have to map between specific words, symbols and phonemes in each language — figure out what means “river” or “one bushel of wheat” in each and build out from there.
Without that perfect translation artifact, so long as you have large corpuses of data for both languages, “it's still possible to learn a translation between the languages, but it hinges pretty crucially on the idea that the kind of latent conceptual structure” is shared, Dr. Goodman continued, which assumes that both cultures’ definitions of “one bushel of wheat” are generally equivalent.
Goodman points to the word pairs “man and woman” and “king and queen” in English. “The structure, or geometry, of that relationship we expect [to hold across languages]: if we were translating into Hungarian, we would also expect those four concepts to stand in a similar relationship,” Dr. Goodman said. “Then effectively the way we'll learn a translation now is by learning to translate in a way that preserves the structure of that conceptual space as much as possible.”
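As a toy illustration of that conceptual geometry (with made-up three-dimensional vectors; real embeddings use hundreds of dimensions), the offset between “man” and “woman” can mirror the offset between “king” and “queen”:

```python
# Toy demonstration of analogy structure in an embedding space.
import numpy as np

emb = {  # hypothetical vectors chosen so the analogy holds exactly
    "man":   np.array([0.8, 0.1, 0.2]),
    "woman": np.array([0.8, 0.9, 0.2]),
    "king":  np.array([0.2, 0.1, 0.9]),
    "queen": np.array([0.2, 0.9, 0.9]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# king - man + woman should land near queen if the geometry holds.
analogy = emb["king"] - emb["man"] + emb["woman"]
print(cosine(analogy, emb["queen"]))  # 1.0 for these toy vectors
```

Structure-preserving translation methods exploit exactly this: they search for a mapping between two embedding spaces that keeps such offsets intact.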
Having a large corpus of data to work with in this situation also enables unsupervised learning techniques to be used to “extract the latent conceptual space,” Dr. Goodman said, though that method is more resource intensive and less efficient. However, if all you have is a large corpus in only one of the languages, you’re generally out of luck.
“For most human languages we assume the [quartet of concepts] are kind of, sort of similar, like, maybe they don't have ‘king and queen’ but they definitely have ‘man and woman,’” Dr. Goodman continued. “But I think for animal communication, we can't assume that dolphins have a concept of ‘king and queen’ or whether they have ‘men and women.’ I don't know, maybe, maybe not.”
And without even that rudimentary conceptual alignment to work from, discerning the context and intent of an animal’s call — much less deciphering the syntax, grammar and semantics of the underlying communication system — becomes much more difficult. “You're in a much weaker position,” Dr. Goodman said. “If you have the utterances in the world context that they're uttered in, then you might be able to get somewhere.”
Basically, if you can obtain multimodal data that provides context for a recorded animal call — the environmental conditions, time of day or year, the presence of prey or predator species, etc. — you can “ground” the language data in the physical environment. From there, you can “assume that English grounds into the physical environment in the same way as this weird new language grounds into the physical environment” and use that as a kind of bridge between the languages.
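Here’s a toy sketch of that grounding idea (the calls, contexts and matching rule are entirely my own invention), in which call types are paired with the conditions they were produced in, and candidate translations are scored by how similar their context profiles are:

```python
# Toy grounding sketch: match utterances across "languages" by the overlap
# of the physical contexts in which they occur.
from collections import Counter

# Hypothetical (utterance, context) observations.
bird_obs = [("seet", "predator"), ("seet", "predator"),
            ("chick-a-dee", "flockmates"), ("chick-a-dee", "food"),
            ("gargle", "rival")]
english_obs = [("hawk!", "predator"), ("over here", "food"),
               ("over here", "flockmates"), ("back off", "rival")]

def context_profile(obs, word):
    contexts = Counter(c for w, c in obs if w == word)
    total = sum(contexts.values())
    return {c: n / total for c, n in contexts.items()}

def best_match(word, src_obs, tgt_obs):
    src = context_profile(src_obs, word)
    candidates = {w for w, _ in tgt_obs}
    overlap = lambda w: sum(
        min(src.get(c, 0), p) for c, p in context_profile(tgt_obs, w).items()
    )
    return max(candidates, key=overlap)

print(best_match("seet", bird_obs, english_obs))  # -> "hawk!"
```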
Unfortunately, the challenge of translating bird calls into English (or any other human language) falls squarely into the last and hardest of these scenarios: no parallel corpus, just recordings that must be grounded in context. That means we’ll need more data, and many different types of data, as we continue to build our basic understanding of the structures of these calls from the ground up. Some of those efforts are already underway.
The Dolphin Communication Project, for example, employs a combined “mobile video/acoustic system” to capture both the utterances of wild dolphins and their relative positions in physical space, giving researchers added context for the calls. Biologging tags — animal-borne sensors affixed to hide, hair or horn that track the locations and conditions of their hosts — continue to shrink in size while growing in both capacity and capability, which should help researchers gather even more data about these communities.
What if birds are just constantly screaming about the heat?
Even if we won’t be able to immediately chat with our furred and feathered neighbors, gaining a better understanding of how they at least talk to each other could prove valuable to conservation efforts. Dr. Lucas points to a recent study he participated in that found environmental changes induced by climate change can radically change how different bird species interact in mixed flocks. “What we showed was that if you look across the disturbance gradients, then everything changes,” Dr. Lucas said. “What they do with space changes, how they interact with other birds changes. Their vocal systems change.”
“The social interactions for birds in winter are extraordinarily important because you know, 10 gram bird — if it doesn't eat in a day, it's dead,” Dr. Lucas continued. “So information about their environment is extraordinarily important. And what those mixed species flocks do is to provide some of that information.”
However, that network quickly breaks down as the habitat degrades. In order to survive, “they have to really go through fairly extreme changes in behavior and social systems and vocal systems … but that impacts fertility rates, and their ability to feed their kids and that sort of thing.”
Better understanding their calls will help us better understand their levels of stress, which can serve both modern conservation efforts and agricultural ends. “The idea is that we can get an idea about the level of stress in [farm animals], then use that as an index of what's happening in the barn and whether we can maybe even mitigate that using vocalizations,” Dr. Lucas said. “AI probably is going to help us do this.”
“Scientific sources indicate that noise in farm animal environments is a detrimental factor to animal health,” Jan Brouček of the Research Institute for Animal Production Nitra observed in 2014. “Especially longer lasting sounds can affect the health of animals. Noise directly affects reproductive physiology or energy consumption.” That continuous drone is thought to also indirectly impact other behaviors, including habitat use, courtship, mating, reproduction and the care of offspring.
Behaviour Interactive is bringing yet another classic horror franchise to Dead by Daylight. Not long after the publisher added Nicolas Cage to the game as a playable character, Behaviour revealed in a teaser video that a crossover with the Alien series is coming very soon.
The clip includes several shots of what looks like the Nostromo, the spaceship from the original movie, as the Alien logo is gradually revealed. That suggests the chapter includes a new map set on the ship. Perhaps unsurprisingly, the iconic Xenomorph is featured as well. Given the terrifying creature's troubled history with humans and the fact it lunges toward the camera here, the smart money is on the Xenomorph being the game's latest killer.
According to the DbD roadmap, Behaviour plans to release a new chapter this month, suggesting the Alien DLC is only a few weeks away at most. The roadmap also indicates the chapter includes a survivor. What are the odds that individual turns out to be Ripley? In any case, we won't have to wait long to find out, as more details about the Alien chapter will be revealed on August 8th.
Dead by Daylight has many original survivors, killers and maps, but crossovers with major horror franchises help bring more attention to the game. Over the years, Behaviour has secured collaborations with the likes of The Ring, Resident Evil, Silent Hill and Stranger Things, to name but a few.
If reality television has taught us anything, it's that there's not much people won't do if offered enough money and attention. Sometimes, even just the latter. Unfortunately for the future prospects of our civilization, modern social media has focused on those same character foibles and optimized them at a global scale, sacrifices at the altar of audience growth and engagement. In Outrage Machine, writer and technologist Tobias Rose-Stockwell walks readers through the inner workings of these modern technologies, illustrating how they're designed to capture and keep our attention, regardless of what they have to do in order to do it. In the excerpt below, Rose-Stockwell examines the human cost of feeding the content machine through a discussion of YouTube personality Nikocado Avocado's rise to internet stardom.
Social media can seem like a game. When we open our apps and craft a post, the way we look to score points in the form of likes and followers distinctly resembles a strange new playful competition. But while it feels like a game, it is unlike any other game we might play in our spare time.
The academic C. Thi Nguyen has explained how games are different: “Actions in games are screened off, in important ways, from ordinary life. When we are playing basketball, and you block my pass, I do not take this to be a sign of your long-term hostility towards me. When we are playing at having an insult contest, we don’t take each other’s speech to be indicative of our actual attitudes or beliefs about the world.” Games happen in what the Dutch historian Johan Huizinga famously called “the magic circle”— where the players take on alternate roles, and our actions take on alternate meanings.
With social media, we never exit the game. Our phones are always with us. We don’t extricate ourselves from the mechanics. And since the goal of social media’s game designers is to keep us there as long as possible, it’s an active competition with real life. With our habituated attention constantly pulled back to the metrics, we never fully leave these digital spaces. In this way, social media has colonized our world with its game mechanics.
Metrics are Money
While we are paid in the small rushes of dopamine that come from accumulating abstract numbers, metrics also translate into hard cash. These metrics don’t just provide us with hits of emotional validation. They are transferable into economic value that is quantifiable and very real.
It’s no secret that the ability to consistently capture attention is an asset that brands will pay for. A follower is a tangible, monetizable asset worth money. If you’re trying to purchase followers, Twitter will charge you between $2 and $4 to acquire a new one using their promoted accounts feature.
If you have a significant enough following, brands will pay you to post sponsored items on their behalf. Depending on the size of your following on Instagram, for instance, these payouts can range from $75 per post (for an account with two thousand followers) up to hundreds of thousands of dollars per post (for accounts with hundreds of thousands of followers).
Between 2017 and 2021, the average cost for reaching a thousand Twitter users (the metric advertisers use is CPM, or cost per mille) was between $5 and $7. It costs that much to get a thousand eyeballs on your post. Any strategies that increase how much your content is shared also have a financial value.
Let’s now bring this economic incentive back to Billy Brady’s accounting of the engagement value of moral outrage. He found that adding a single moral or emotional word to a post on Twitter increased the viral spread of that content by 17 percent per word. All of our posts to social media exist in a marketplace for attention — they vie for the top of our followers’ feeds. Our posts are always competing against other people’s posts. If outraged posts have an advantage in this competition, they are literally worth more money.
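A rough back-of-the-envelope sketch (the base reach and CPM midpoint are my own assumptions, not the book’s) shows how that 17 percent compounds into dollar terms:

```python
# Pricing the outrage premium: naive compounding of Brady's 17%-per-word
# sharing boost against a $5-$7 CPM. Illustrative numbers only.
BASE_REACH = 10_000    # hypothetical organic impressions for a post
CPM = 6.0              # dollars per thousand impressions ($5-$7 midpoint)
BOOST_PER_WORD = 1.17  # viral-spread multiplier per moral/emotional word

for words in range(4):
    reach = BASE_REACH * BOOST_PER_WORD ** words
    value = reach / 1000 * CPM
    print(f"{words} moral words: ~{reach:,.0f} impressions, ~${value:,.2f}")
```

Under those assumptions, a post worth about $60 in equivalent ad reach becomes worth roughly $96 with three moral or emotional words added, which is the whole economic argument in miniature.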
For a brand or an individual, including moral outrage in a post, or linking it to a larger movement that signals moral conviction, might increase the post’s reach by at least that much. It might also improve brand perception and affinity by appealing to the moral foundations of the brand’s consumers and employees, increasing sales and burnishing its reputation. This can be an inherently polarizing strategy: a company that picks a cause to support, and whose audience is morally diverse, might alienate a sizable percentage of the customers who disagree with that cause. But the economics can still make sense — if a company knows enough about its consumers’ and employees’ moral affiliations, it can pick a cause-sector that’s in line with its customers.
Since moral content is a reliable tool for capturing attention, it can also be used for psychographic profiling for future marketing opportunities. Many major brands do this with tremendous success — creating viral campaigns that utilize moral righteousness and outrage to gain traction and attention among core consumers who have a similar moral disposition. These campaigns also often get a secondary boost due to the proliferation of pile-ons and think pieces discussing these ad spots. Brands that moralize their products often succeed in the attention marketplace.
This basic economic incentive can help to explain how and why so many brands have begun to link themselves with online cause-related issues. While it may make strong moral sense to those decision-makers, it can make clear economic sense to the company as a whole as well. Social media provides measurable financial incentives for companies to include moral language in their quest to burnish their brands and perceptions.
But as nefarious as this sounds, moralization of content is not always the result of callous manipulation and greed. Social metrics do something else that influences our behavior in pernicious ways.
Audience Capture
In the latter days of 2016, I wrote an article about how social media was diminishing our capacity for empathy. In the wake of that year’s presidential election, the article went hugely viral, and was shared with several million people. At the time I was working on other projects full time. When the article took off, I shifted my focus away from the consulting work I had been doing for years, and began focusing instead on writing full time. One of the by-products of that tremendous signal from this new audience is the book you’re reading right now.
A sizable new audience of strangers had given me a clear message: This was important. Do more of it. When many people we care about tell us what we should be doing, we listen.
This is the result of “audience capture”: how we influence, and are influenced by those who observe us. We don’t just capture an audience — we are also captured by their feedback. This is often a wonderful thing, provoking us to produce more useful and interesting works. As creators, the signal from our audience is a huge part of why we do what we do.
But it also has a dark side. The writer Gurwinder Boghal has explained the phenomenon of audience capture for influencers, illustrating it with the story of a young YouTuber named Nicholas Perry. In 2016, Perry began a YouTube channel as a skinny vegan violinist. After a year of getting little traction online, he abandoned veganism, citing health concerns, and shifted to uploading mukbang (eating show) videos of himself trying different foods for his followers. These followers began demanding more and more extreme feats of food consumption. Before long, in an attempt to appease his increasingly demanding audience, he was posting videos of himself eating entire fast-food menus in a single sitting.
He found a large audience with this new format, and in terms of metrics it was overwhelmingly successful. After several years of following his audience’s continued requests, he amassed millions of followers and over a billion total views. But in the process, his online identity and physical character changed dramatically as well. Nicholas Perry became the personality Nikocado — an obese parody of himself, ballooning to more than four hundred pounds, voraciously consuming anything his audience asked him to eat. Following his audience’s desires drove him to increasingly extreme feats at the expense of his mental and physical health.
Nicholas Perry, left, and Nikocado, right, after several years of building a following on YouTube. Source: Nikocado Avocado YouTube Channel.
Boghal summarizes this cross-directional influence.
When influencers are analyzing audience feedback, they often find that their more outlandish behavior receives the most attention and approval, which leads them to recalibrate their personalities according to far more extreme social cues than those they’d receive in real life. In doing this they exaggerate the more idiosyncratic facets of their personalities, becoming crude caricatures of themselves.
This need not only apply to influencers. We are signal-processing machines. We respond to the types of positive signals we receive from those who observe us. Our audiences online reflect their opinion of our behavior back to us, and we adapt to fit it. The metrics (likes, followers, shares, and comments) available to us now on social media allow us to measure that feedback far more precisely than we previously could, leading us to internalize what “good” behavior is.
As we find ourselves more and more inside of these online spaces, this influence becomes more pronounced. As Boghal notes, “We are all gaining online audiences.” Anytime we post to our followers, we are entering into a process of exchange with our viewers — one that is beholden to the same extreme engagement problems found everywhere else on social media.
If Apple is going to make the Vision Pro a success, it's going to need compelling apps — and that means giving developers hardware ahead of time. Accordingly, the company is now making Vision Pro developer kits available. If you qualify, you'll get a loaned mixed reality headset as well as help with setup, expert "check-ins" and extra support requests beyond what developers normally get.
The operative term, as you might guess, is "if." You're submitting an application, not buying a product like the old Apple Silicon Developer Transition Kit. In addition to being part of the Apple Developer Program, you'll need to detail your existing apps and overall team talent. The company will favor creators whose app "takes advantage" of the Vision Pro's features. You can't just assume you'll get a headset, then, and you're less likely to get one if you're a newcomer or simply porting an iPad app. You'll have to be content with the visionOS beta software if you don't make the cut.
You also can't use the wearable for bragging rights. Apple requires that developers keep the Vision Pro in a secure workspace that only authorized team members can access. The company can also request a unit return at any time. Don't expect many leaked details, in other words.
The current kit may only end up in the hands of larger developers as a result. However, the launch shows how Apple intends to court app creators, and what titles you're likely to see when Vision Pro arrives early next year. The focus is on polished experiences that help sell the concept, rather than a huge catalog. That's not surprising when the Vision Pro is a $3,499 device aimed at professionals and enthusiasts, but you may have to wait a while before small studios release apps based on real-world testing.
Some redditors seem very excited about a new World of Warcraft feature called Glorbo, which some believe will "make a huge impact on the game." Their palpable enthusiasm for Glorbo caught the attention of a blog named The Portal, which publishes "gaming content powered by Z League," an app that aims to bring gamers together.
Just one problem: Glorbo isn't real. The Portal appears to be using AI to scrape Reddit posts and turn them into content.
Redditor u/kaefer_kriegerin noticed that The Portal was seemingly turning discussions from some gaming subreddits into blog posts. They decided to try to trick the content farm into covering a fake WoW feature. The ruse was a success. Other redditors played along, as did some Blizzard developers, as Wowhead notes.
Feels soooooo good to be able to talk about Glorbo finally, I remember my first day at Blizzard we were just starting to work on implementation, and that was almost 15 years ago!
The Portal's now-deleted blog post even quoted u/kaefer_kriegerin as stating, "Honestly, this new feature makes me so happy! I just really want some major bot operated news websites to publish an article about this." You almost couldn't make this up. An archived version of the post is still available.
There appears to be at least some level of human input on The Portal. The site added "(Satire)" to the headline of the post before eventually deleting it entirely. It also published an article based on another Reddit troll post about WoW taking away players' keys (which is not a thing that's happening). That blog post is also gone from The Portal.
Engadget has contacted Blizzard to find out whether it will address the hype for Glorbo and actually bring the feature to WoW. As it happens, Blizzard is reportedly using AI to help create character outfits and concept art. We've also asked Z League for comment, and we'll let you know if it sends us a (presumably AI-generated) statement.
Given the rise of generative AI in recent months, we're likely to see a tidal wave of AI-generated guff appearing on websites, even including mainstream publications. Earlier this year, CNET had to correct dozens of AI-generated finance posts after errors were found. The site's staff has pushed back against CNET's plans to keep using AI amid efforts to unionize. Gizmodo publisher G/O Media is also forging ahead with AI-generated blog posts, despite one that was widely mocked for getting a chronological list of Star Wars movies and TV shows very wrong. That and other AI-generated articles that appeared across the G/O network this month infuriated the company's human writers and editors.
Mistakes happen. Human writers can't get everything right all of the time. But any journalist worth their salt will strive to make sure their work is as accurate and fair as possible. Generative AI isn't exactly there yet. There have been many instances of AI chatbots surfacing misinformation. However, some believe AI can help to actually combat misinformation by, for instance, assisting newsrooms with fact checking.
Meanwhile, Google appears to be working on an AI tool that can whip up news articles and automate certain tasks to help out journalists. Some critics who have seen the tool in action have suggested that it takes the work of producing accurate and digestible news stories for granted.
You may be more or less prepared for the academics of college, but the other life stuff can be an eye-opener. College might be the first time you’re in charge of your own finances, and with new living situations, new jobs and new connections, you may also be expanding the amount of personal data you’re putting out into the world. If you could use a little help with budgeting, remembering passwords or making sure everything you do online is secure, here are the finance and security apps we’ve used, tested and ultimately recommend.
Mint
If you’re new to tracking finances, getting an overview of your banking, credit and loan accounts in one place can be helpful. Mint is a simple and free app that does just that. I tested it for our subscription guide and continue to use it. The interface is intuitive and it’s pretty good at correctly categorizing purchases. The main features, like transaction history, self-budgeting and goal-setting, are available for free. For $5 per month, you can have Mint cancel subscriptions on your behalf, and you won’t see as many ad links peppered throughout the app (though I’ve never found the ads particularly distracting).
YNAB
For help creating a more formal budget, a few Engadget staffers use YNAB (You Need A Budget) and we recommend it in our guide to student budgeting. It’s based around a theory that imposes four “rules” to improve your money management, and learning those principles now will benefit you long after graduation. The browser and mobile app interfaces are pretty easy to use, and YNAB has a ton of instructional content for newbies that can point you in the right direction when you’re first setting up expense categories, debt trackers and sinking funds. It’s usually $15 per month or $99 per year, but students who can prove they’re in school can get a year for free.
Goodbudget
Between loans, jobs and, if you’re lucky, scholarships and financial aid, a student’s “extra” money can be pretty limited. Goodbudget translates the envelope technique to an app format, earmarking your money for the things you need to pay for. By visualizing what you have and what you need, you can see when there’s room for stuff you want, like going out with friends or decorating your first apartment. Plenty of graphs and sliders help map out your situation, and Goodbudget also offers free online classes for those who want to get better with money (granted, that may be a hard sell when you’re already in school). The free version gives you twenty total envelopes, split between expenses and goals, and lets you add one bank account. For unlimited accounts and envelopes, the paid version is $8 per month or $70 per year.
Acorns investment
Say you indulge in an Iced Toasted Vanilla Oatmilk Shaken Espresso for $5.75. The Acorns investment app rounds up that last 25 cents and deposits it into an investment account, and over time, your money grows. By providing a simple app and recommending just a few different portfolios, Acorns takes some of the complexity out of investing. For students in particular, it’s also easier to invest a few cents here and there than larger chunks of cash when you’re already just trying to get by. The monthly plan defaults to $5 per month, with the option of a $3 plan at sign-up. Both come with a checking account and a retirement savings account in addition to the investment features, so if you’re totally starting fresh, this could prove useful.
1Password
Our senior security reporter, Katie Malone, put 1Password at the top of Engadget’s guide to password managers. Like other password managers, 1Password helps you create unique and complex credentials for every site you use, then saves them securely so you don’t have to remember them all. It works across most platforms and even lets you share logins and credit card info with other people as needed, which will make it easier to access any family accounts you may need while in school. The security and encryption measures are top-notch, with a zero-knowledge policy that ensures the company doesn’t store your data, as well as a bug bounty program that rewards ethical hackers who discover any vulnerabilities.
Proton VPN
If you study in public places where the WiFi is suspect, a VPN can give you an extra layer of protection. It’s not a cure-all for online security woes, but VPNs do create a protected “tunnel” to keep out people who may otherwise have access to your data, like your internet service provider or hackers targeting public WiFi. Proton VPN is the best overall option, and not just because it’s easy to use: the Switzerland-based company also enforces a no-log policy, and its open-source software continually stands up to independent audits. Unlike some VPNs, it didn’t tank our connection speeds in our tests, either. Proton VPN goes for $10 per month for access to servers in 65 countries, or you can get the free version with access to just three.
ProtonMail
Free email services are everywhere, but finding one that isn’t propped up by selling your habits and history to advertisers is almost impossible. And while you might get a school email address, a good personal email will serve you long after access to your alumni mail is discontinued. ProtonMail is focused on privacy: It uses end-to-end encryption, whereas a service like Gmail encrypts messages in transit only. Proton’s open-source encryption methods are independently audited, and since the service is supported by paid subscriptions rather than advertising, the company has little incentive to snoop on your info. Free plans give you one gigabyte of storage and allow for 150 emails per day, while a $12-per-month subscription grants 500GB of storage and removes email limits.
Signal
As a non-profit, Signal has no tech giant behind the wheel, which sets it apart from most other messaging services. A phone number is required for setup, but that’s about all the information Signal ever collects. It’s a favorite of journalists, protestors and people living in unstable territories, but students who realize their communications are no one else’s business will find the app useful, too. Texts, videos and images you send are end-to-end encrypted using open-source protocols, and you can even set messages to expire. Recent additions that enhance group chats may make Signal feel a little more like other messaging apps, but the core structure of the service will always be fundamentally more private than many competitors.
Noonlight
Staying safe in college extends beyond online safety, which is where apps like Noonlight come in. Tinder bought a stake in the app a few years ago to help people in the event of a date gone wrong. Within the app, you’ll find a giant white button that you press and hold in sketchy situations. As long as you hold the button, nothing happens. Let go of it, and unless you enter a secret pin to prove you’re safe, the police will be dispatched to your location. A timeline feature lets you add names and images when you’re meeting someone new. The safety network allows your friends and family to request check-ins and take action when they don’t hear from you. The free version includes all three of the features mentioned above, while the $5-per-month plan adds an iPhone widget and the ability to sync with rideshare apps.