
Every car is a smart car, and it's a privacy nightmare

Mozilla recently reported that of the car brands it reviewed, all 25 failed its privacy tests. While all, in Mozilla's estimation, overreached in their policies around data collection and use, some even included caveats about obtaining highly invasive types of information, like your sexual history and genetic information. As it turns out, this isn’t just hypothetical: The technology in today’s cars has the ability to collect these kinds of personal information, and the fine print of user agreements describes how manufacturers get you to consent every time you put the keys in the ignition.

“These privacy policies are written in a way to ensure that whatever is happening in the car, if there's an inference that can be made, they are still ensuring that there is protection, and that they are compliant with different state laws,” Adonne Washington, policy counsel at the Future of Privacy Forum, said. The policies also account for technological advances that could happen while you own the car. Tools built to do one thing could eventually do more, so manufacturers have to be mindful of that, according to Washington.

So, it makes sense that a car manufacturer would include every type of data imaginable in its privacy policy to cover the company legally if it stumbled into certain data collection territory. Nissan’s privacy policy, for example, covers broad and frankly irrelevant classes of user information, such as “sexual orientation, sexual activity, precise geolocation, health diagnosis data, and genetic information” under types of personal data collected. 

Companies claim ownership in advance, so that you can’t sue if they accidentally record you having sex in the backseat, for example. Nissan claimed in a statement that this is more or less why its privacy policy remains so broad. The company says it "does not knowingly collect or disclose customer information on sexual activity or sexual orientation," but its policy retains those clauses because "some U.S. state laws require us to account for inadvertent data we have or could infer but do not request or use." Some companies Engadget reached out to — like Ford, Stellantis and GM — affirmed their commitment, broadly, to consumer data privacy; Toyota, Kia and Tesla did not respond to a request for comment.

Beyond covering all imaginable legal bases, there simply isn't any way to know why these companies would want deeply personal information on their drivers, or what they'd do with it. And even if it's not what you would consider a “smart” car, any vehicle equipped with USB, Bluetooth or recording capabilities can capture a lot of data about the driver. In much the same way a "dumb" TV is considerably harder to find these days, most consumers would be hard-pressed to find a new vehicle that doesn't include some level of onboard tech with the capacity to record their data. A study commissioned by Senator Ed Markey nearly a decade ago found all modern cars had some form of wireless technology included. Even internet listicles claiming to round up low-tech cars for "technophobes" are riddled with dashboard touchscreens and infotainment systems.

“How it works in practice we don’t have as much insight into, as car companies, data companies, and advertising companies tend to hold those secrets more close to the vest,” Jen Caltrider, a researcher behind Mozilla’s car study, said. “We did our research by combing through privacy policies and public documentation where car companies talked about what they *can* do. It is much harder to tell what they are actually doing as they aren’t required to be as public about that.”

The unavailability of disconnected cars, combined with the lack of transparency around how driver data is used, means consumers have essentially no choice but to trust that their information is being handled responsibly, or that at least some of the classes of data listed in these worrying privacy policies — like Nissan's decision to include "genetic information" — are there purely for hypothetical liability. The options are essentially: read every one of these policies and find the least draconian, buy a very old, likely fuel-inefficient car with no smart features whatsoever, or simply do without a car, period. To that last point, only about eight percent of American households are carless, often not because they live in a walkable city with robust public transit, but because they cannot afford one.

This gets even more complicated when you think about how cars are shared. Rental cars change drivers all the time, and a minor in your household might borrow your car to learn how to drive. Unlike a cell phone, which is typically a single-user device, cars don’t work that way, and vehicle manufacturers struggle to address that in their policies. And cars have the ability to collect information not just on drivers but on their passengers.

If simply trusting manufacturers after they ask for the right to collect your genetic characteristics tests credulity, the burden of anyone other than a contract lawyer reading back a software license agreement to the folks in the backseat is beyond absurd. Ford’s privacy policy explicitly states that the owners of its vehicles “must inform others who drive the vehicle, and passengers who connect their mobile devices to the vehicle, about the information in this Notice.” That’s about 60 pages of information to relay, if you’re printing it directly from Ford’s website — just for the company and not even the specific car.

And these contracts tend to compound on one another. If that 60-page privacy policy seems insurmountable, well, there's also a terms of service and a separate policy regarding the use of Sirius XM (on a website with its own 'accept cookies' popover, with its own agreement.) In fairness to Ford, its privacy notice does allow drivers to opt out of certain data sharing and connected services, but that would require drivers to actually comb through the documentation. Mozilla found many other manufacturers offered no such means to avoid being tracked, and a complete opt-out is something which the Alliance for Automotive Innovation — a trade group representing nearly all car and truck makers in the US, including Ford — has actively resisted. To top things off, academics, legal scholars and even one cheeky anti-spyware company have repeatedly shown consumers almost universally do not read these kinds of contracts anyway. 

The burden of these agreements doesn't end with their presumptive data collection, or the onus to relay them to every person riding in or borrowing your car. The data held in the vehicle and on the manufacturer's servers becomes yet another hurdle for drivers should they opt to sell the thing down the line. Privacy4Cars founder Andrea Amico recommends getting it in writing from the dealer how it plans to delete your data from the vehicle before reselling it. “There's a lot of things that consumers can do to actually start to protect themselves, and it's not going to be perfect, but it's going to make a meaningful difference in their lives,” Amico said.

Consumers are effectively hamstrung by the state of legal contract interpretation, and manufacturers are incentivized to mitigate risk by continuing to bloat these (often unread) agreements with increasingly invasive classes of data. Many researchers will tell you the only real solution here is federal regulation. There have been some cases of state privacy law being leveraged for consumers' benefit, as in California and Massachusetts, but in the main it's something drivers aren't even aware they should be outraged about, and even if they are, they have no choice but to own a car anyway.

This article originally appeared on Engadget at https://www.engadget.com/every-car-is-a-smart-car-and-its-a-privacy-nightmare-193010478.html?src=rss

The best white elephant gift ideas for 2023

According to legend, the King of Siam would give a white elephant to courtiers who had upset him. It was a far more devious punishment than simply having them executed. The recipient had no choice but to thank the king for such an opulent gift, knowing that they likely could not afford the upkeep for such an animal. It would inevitably lead them to financial ruin. This story is almost certainly untrue, but it has led to a modern holiday staple: the White Elephant gift exchange.

Getting a White Elephant gift right requires walking a very fine line. The goal isn’t to just buy something terrible and force someone to take it home with them. It should be useful or amusing enough that it won’t immediately end up in the trash. It also shouldn’t be easily tossed in a junk drawer and forgotten about. So here are a few suggestions that will not only get you a few chuckles, but will also make the recipient feel (slightly) burdened.

Clocky Alarm Clock on Wheels

KFC Fire Starter Log by Enviro-Log

LDKCOK USB 2.0 Active Repeater Extension Cable

Galaxy Projector

Msraynsford Useless Machine 2.0

Lightsaber Chopsticks

MMX Marshmallow Crossbow

Banana Phone

Friendship Lamp

FAQs

What is white elephant?

A white elephant gift exchange is a party game typically played around the holidays in which people exchange funny, impractical gifts.

How does white elephant work?

A group of people each bring one wrapped gift to the white elephant gift exchange, and each gift is typically of a similar value. All gifts are then placed together and the group decides the order in which they will each claim a gift. The first person picks a white elephant gift from the pile, unwraps it and their turn ends. The following players can either decide to unwrap another gift and claim it as their own, or steal a gift from someone who has already taken a turn. The rules can vary from there, including the guidelines around how often a single item can be stolen — some say twice, max. The game ends when every person has a white elephant gift.

Why is it called white elephant?

The term “white elephant” is said to come from the legend of the King of Siam gifting white elephants to courtiers who upset him. While it seems like a lavish gift on its face, the belief is that the courtiers would be ruined by the animal’s upkeep costs.

This article originally appeared on Engadget at https://www.engadget.com/white-elephant-gift-ideas-2023-130058973.html?src=rss

World of Horror is a skin-crawling dread machine that does its inspirations proud

I am fully encased in a bundle of spider’s silk, only my eyeballs still visible as I wait for my turn to be devoured. I’ve failed to save the city from the insatiable arachnidian Old God, and now I, along with all the inhabitants of Shiokawa, Japan, am caught in its web. I’d come so far this time, solved all of the mysteries tacked to my bulletin board, but in the end, I couldn’t escape the doom that had been closing in on me.

If World of Horror could be reduced to a single word, it’d be “dread.” It's a point-and-click cosmic horror game created by Polish developer and dentist Pawel Kozminski (also known as Panstasz). After years in early access, Ysbryd Games finally released it to the public this month on Steam, PlayStation 4 and 5, and Nintendo Switch. It was well worth the wait.

World of Horror is heavily text-based, and plays like a choose-your-own-adventure story — one in which most of your options are bad ones that will inevitably lead you to a gruesome death or irrevocable insanity. Players must solve five mysteries that are tormenting the townspeople, gathering information and fighting off the monstrous entities that get in their way. A slippery, boil-covered former teacher here, a woman with shards of broken ribs jammed into her gaping hole of a face, there.

All the while, you’ll be working to stave off whichever Old God has set its sights on Shiokawa for that run, and must keep an eye on the ever-ticking Doom meter to know how close you are to being overcome. Only after you’ve obtained five keys by solving each of the five mysteries can you unlock the town’s lighthouse, where you can banish the Old God. That is, if you’re able to make it through the trials on the way to the top. It’s a roguelite, too, so prepare to start from the beginning every time you make a fatal misstep.

The horror-manga-style RPG doesn't hide its Junji Ito and H.P. Lovecraft influences. It's so disquieting that you’ll find yourself jumpy and on edge even when nothing’s happening, which in some investigations is most of the time. The evil may not be coming for you right that moment, but there’s the sense that it could at any turn.

When those little jump scares do come — a particularly revolting attacker or a booming sound that cuts through the chiptune score — they’re made all the more jarring by the high-contrast 1- or 2-bit visuals (you can choose at the beginning) that were created, incredibly, in MS Paint. It nails the often hard to stomach Ito-esque gore, and there are a few scenes I had to force myself not to turn away from (a certain DIY eyeball operation comes to mind).

You’re given a few options for approaching the game, in terms of difficulty and complexity. Its short tutorial, “Spine-Chilling Story of School Scissors,” is a straightforward introduction. And in the beginner-level main story mode, “Extracurricular Activities,” you'll start with one mystery already solved.

Players also have the choice of a “Quick Play” mode, in which elements like your character, Old God and backstory are randomly selected, or a fully customized playthrough where you choose your own character and story elements. That last one is the most challenging route. You can also choose from a slew of color palettes at the start of each game, if you want to mix it up.

While the turn-based combat is nothing revolutionary, I found it to be engaging enough. There’s no guarantee all of your hits will land, and relying on spiritual attacks when going up against a ghost-type foe is a stressful game of “guess the right combo.” It keeps things interesting, albeit a bit frustrating. Since the runs are relatively short — about an hour, give or take 30 minutes — it doesn’t feel soul crushing every time you die and have to start fresh. If anything, it becomes an addicting cycle.

Where World of Horror truly excels is in its attention to horrifying detail. A TV playing in your home runs grisly newscasts nonstop, including one about a dentist who replaced his human patients’ teeth with dogs’ teeth. (Remember, the developer is also a dentist). Look through the peephole of your apartment door and you might see a shadow man down the hall, or the quickly retreating face of someone lurking around the corner, or just an empty corridor. Twisted ghouls wait behind dead-end classroom doors.

Things are rarely the same when you come back to them. Each mystery has multiple endings and multiple ways to get you there, so you can’t quite predict what’s going to happen next even if you just played 10 runs in a row. Some stories are more involved than others, better thought through. But each has at least one ghastly element that justifies its place among the rest. If World of Horror is anything, it’s effective, and I haven’t been able to stop thinking about it.

This article originally appeared on Engadget at https://www.engadget.com/world-of-horror-is-a-skin-crawling-dread-machine-that-does-its-inspirations-proud-183000816.html?src=rss

What the evolution of our own brains can tell us about the future of AI

The explosive growth in artificial intelligence in recent years — crowned with the meteoric rise of generative AI chatbots like ChatGPT — has seen the technology take on many tasks that, formerly, only human minds could handle. But despite their increasingly capable linguistic computations, these machine learning systems remain surprisingly inept at making the sorts of cognitive leaps and logical deductions that even the average teenager can consistently get right. 

In this week's Hitting the Books excerpt from A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains, AI entrepreneur Max Bennett explores the quizzical gap in computer competency by examining the development of the organic machine AIs are modeled after: the human brain. 

Focusing on the five evolutionary "breakthroughs," amidst myriad genetic dead ends and unsuccessful offshoots, that led our species to our modern minds, Bennett also shows that the same advancements that took humanity eons to evolve can be adapted to help guide development of the AI technologies of tomorrow. In the excerpt below, we take a look at how generative AI systems like GPT-3 are built to mimic the predictive functions of the neocortex, but still can't quite get a grasp on the vagaries of human speech.

Excerpted from A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains by Max Bennett. Published by Mariner Books. Copyright © 2023 by Max Bennett. All rights reserved.


Words Without Inner Worlds

GPT-3 is given word after word, sentence after sentence, paragraph after paragraph. During this long training process, it tries to predict the next word in any of these long streams of words. And with each prediction, the weights of its gargantuan neural network are nudged ever so slightly toward the right answer. Do this an astronomical number of times, and eventually GPT-3 can automatically predict the next word based on a prior sentence or paragraph. In principle, this captures at least some fundamental aspect of how language works in the human brain. Consider how automatic it is for you to predict the next symbol in the following phrases:

  • One plus one equals _____

  • Roses are red, violets are _____

You’ve seen similar sentences endless times, so your neocortical machinery automatically predicts what word comes next. What makes GPT-3 impressive, however, is not that it just predicts the next word of a sequence it has seen a million times — that could be accomplished with nothing more than memorizing sentences. What is impressive is that GPT-3 can be given a novel sequence that it has never seen before and still accurately predict the next word. This, too, clearly captures something that the human brain can _____.

Could you predict that the next word was do? I’m guessing you could, even though you had never seen that exact sentence before. The point is that both GPT-3 and the neocortical areas for language seem to be engaging in prediction. Both can generalize past experiences, apply them to new sentences, and guess what comes next.
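
As a toy illustration of the training Bennett describes, here is a minimal next-word predictor whose weights are nudged slightly toward the right answer after each guess. It is nowhere near GPT-3; the corpus, model size and learning rate are made-up placeholders, and unlike GPT-3 this sketch can only parrot word pairs it has already seen.

```python
# A minimal sketch, not GPT-3: a bigram next-word predictor trained by
# nudging its weights toward the right answer after every prediction.
# The corpus, model size and learning rate are toy placeholders.
import numpy as np

corpus = "roses are red violets are blue one plus one equals two".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(V, V))  # row = previous word, columns = scores for the next word

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

lr = 0.5
for _ in range(500):
    for prev, nxt in zip(corpus, corpus[1:]):
        p = softmax(W[idx[prev]])   # predicted distribution over the next word
        grad = p.copy()
        grad[idx[nxt]] -= 1.0       # gradient of the cross-entropy loss at this prediction
        W[idx[prev]] -= lr * grad   # nudge the weights slightly toward the right answer

def predict_next(word):
    return vocab[int(np.argmax(softmax(W[idx[word]])))]

print(predict_next("violets"))  # -> "are"
print(predict_next("equals"))   # -> "two"
```

After enough passes it reliably completes the phrases it has seen, which is only the memorization half of the story; generalizing to genuinely novel sentences is what requires the far larger models and datasets the passage goes on to describe.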

GPT-3 and similar language models demonstrate how a web of neurons can reasonably capture the rules of grammar, syntax, and context if it is given sufficient time to learn. But while this shows that prediction is part of the mechanisms of language, does this mean that prediction is all there is to human language? Try to finish these four questions:

  • If 3x + 1 = 3, then x equals _____

  • I am in my windowless basement, and I look toward the sky, and I see _____

  • He threw the baseball 100 feet above my head, I reached my hand up to catch it, jumped, and _____

  • I am driving as fast as I can to LA from New York. One hour after passing through Chicago, I finally _____

Here something different happens. In the first question, you likely paused and performed some mental arithmetic before being able to answer the question. In the other questions, you probably, even for only a split second, paused to visualize yourself in a basement looking upward, and realized what you would see is the ceiling. Or you visualized yourself trying to catch a baseball a hundred feet above your head. Or you imagined yourself one hour past Chicago and tried to find where you would be on a mental map of America. With these types of questions, more is happening in your brain than merely the automatic prediction of words.

We have, of course, already explored this phenomenon—it is simulating. In these questions, you are rendering an inner simulation, either of shifting values in a series of algebraic operations or of a three-dimensional basement. And the answers to the questions are to be found only in the rules and structure of your inner simulated world.

I gave the same four questions to GPT-3; here are its responses, with GPT-3's completions following each prompt:

  • If 3x + 1 = 3 , then x equals 1

  • I am in my windowless basement, and I look toward the sky, and I see a light, and I know that it is a star, and I am happy.

  • He threw the baseball 100 feet above my head, I reached my hand up to catch it, jumped, and caught it. It was a lot of fun!

  • I am driving as fast as I can to LA from New York. One hour after passing through Chicago, I finally get to see the Pacific Ocean.

All four of these responses demonstrate that GPT-3, as of June 2022, lacked an understanding of even simple aspects of how the world works. If 3x + 1 = 3, then x equals 2/3, not 1. If you were in a basement and looked toward the sky, you would see your ceiling, not stars. If you tried to catch a ball 100 feet above your head, you would not catch the ball. If you were driving to LA from New York and you’d passed through Chicago one hour ago, you would not yet be at the coast. GPT-3’s answers lacked common sense.

What I found was not surprising or novel; it is well known that modern AI systems, including these new supercharged language models, struggle with such questions. But that’s the point: Even a model trained on the entire corpus of the internet, running up millions of dollars in server costs — requiring acres of computers on some unknown server farm — still struggles to answer common sense questions, those presumably answerable by even a middle-school human.

Of course, reasoning about things by simulating also comes with problems. Suppose I asked you the following question:

Tom W. is meek and keeps to himself. He likes soft music and wears glasses. Which profession is Tom W. more likely to be?

1) Librarian

2) Construction worker

If you are like most people, you answered librarian. But this is wrong. Humans tend to ignore base rates—did you consider the base number of construction workers compared to librarians? There are probably one hundred times more construction workers than librarians. And because of this, even if 95 percent of librarians are meek and only 5 percent of construction workers are meek, there still will be far more meek construction workers than meek librarians. Thus, if Tom is meek, he is still more likely to be a construction worker than a librarian.
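
Worked through in code with the illustrative numbers from the passage (100 construction workers for every librarian, and the stated 95 percent versus 5 percent rates), the base-rate arithmetic looks like this:

```python
# The base-rate arithmetic from the passage, with illustrative numbers:
# 100 construction workers for every librarian, 95% vs. 5% "meek."
librarians = 1_000
construction_workers = 100 * librarians

meek_librarians = 0.95 * librarians                      # 950
meek_construction_workers = 0.05 * construction_workers  # 5,000

p_librarian_given_meek = meek_librarians / (meek_librarians + meek_construction_workers)
print(round(p_librarian_given_meek, 2))  # ~0.16: a meek person is still far more likely
                                         # to be a construction worker than a librarian
```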

The idea that the neocortex works by rendering an inner simulation and that this is how humans tend to reason about things explains why humans consistently get questions like this wrong. We imagine a meek person and compare that to an imagined librarian and an imagined construction worker. Who does the meek person seem more like? The librarian. Behavioral economists call this the representative heuristic. This is the origin of many forms of unconscious bias. If you heard a story of someone robbing your friend, you can’t help but render an imagined scene of the robbery, and you can’t help but fill in the robbers. What do the robbers look like to you? What are they wearing? What race are they? How old are they? This is a downside of reasoning by simulating — we fill in characters and scenes, often missing the true causal and statistical relationships between things.

It is with questions that require simulation where language in the human brain diverges from language in GPT-3. Math is a great example of this. The foundation of math begins with declarative labeling. You hold up two fingers or two stones or two sticks, engage in shared attention with a student, and label it two. You do the same thing with three of each and label it three. Just as with verbs (e.g., running and sleeping), in math we label operations (e.g., add and subtract). We can thereby construct sentences representing mathematical operations: three add one.

Humans don’t learn math the way GPT-3 learns math. Indeed, humans don’t learn language the way GPT-3 learns language. Children do not simply listen to endless sequences of words until they can predict what comes next. They are shown an object, engage in a hardwired nonverbal mechanism of shared attention, and then the object is given a name. The foundation of language learning is not sequence learning but the tethering of symbols to components of a child’s already present inner simulation.

A human brain, but not GPT-3, can check the answers to mathematical operations using mental simulation. If you add one to three using your fingers, you notice that you always get the thing that was previously labeled four.

You don’t even need to check such things on your actual fingers; you can imagine these operations. This ability to find the answers to things by simulating relies on the fact that our inner simulation is an accurate rendering of reality. When I mentally imagine adding one finger to three fingers, then count the fingers in my head, I count four. There is no reason why that must be the case in my imaginary world. But it is. Similarly, when I ask you what you see when you look toward the ceiling in your basement, you answer correctly because the three-dimensional house you constructed in your head obeys the laws of physics (you can’t see through the ceiling), and hence it is obvious to you that the ceiling of the basement is necessarily between you and the sky. The neocortex evolved long before words, already wired to render a simulated world that captures an incredibly vast and accurate set of physical rules and attributes of the actual world.

To be fair, GPT-3 can, in fact, answer many math questions correctly. GPT-3 will be able to answer 1 + 1 =___ because it has seen that sequence a billion times. When you answer the same question without thinking, you are answering it the way GPT-3 would. But when you think about why 1 + 1 = 2, when you prove it to yourself again by mentally imagining the operation of adding one thing to another thing and getting back two things, then you know that 1 + 1 = 2 in a way that GPT-3 does not.

The human brain contains both a language prediction system and an inner simulation. The best evidence for the idea that we have both these systems is experiments pitting one system against the other. Consider the cognitive reflection test, designed to evaluate someone’s ability to inhibit her reflexive response (e.g., habitual word predictions) and instead actively think about the answer (e.g., invoke an inner simulation to reason about it):

Question 1: A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?

If you are like most people, your instinct, without thinking about it, is to answer ten cents. But if you thought about this question, you would realize this is wrong; the answer is five cents. Similarly:

Question 2: If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?

Here again, if you are like most people, your instinct is to say “One hundred minutes,” but if you think about it, you would realize the answer is still five minutes.
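
For readers who want to check the arithmetic, here is a quick verification of both answers (a sketch only, using the numbers stated above):

```python
# Question 1: ball + bat = 1.10 and bat = ball + 1.00, so 2 * ball + 1.00 = 1.10
ball = (1.10 - 1.00) / 2
print(round(ball, 2))  # 0.05 -> five cents, not ten

# Question 2: 5 machines make 5 widgets in 5 minutes, so each machine makes
# one widget in 5 minutes; 100 machines make 100 widgets in those same 5 minutes.
machines, widgets, minutes_per_widget_per_machine = 100, 100, 5
print(widgets / machines * minutes_per_widget_per_machine)  # 5.0 minutes, not 100
```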

And indeed, as of December 2022, GPT-3 got both of these questions wrong in exactly the same way people do: it answered ten cents to the first question, and one hundred minutes to the second.

The point is that human brains have an automatic system for predicting words (one probably similar, at least in principle, to models like GPT-3) and an inner simulation. Much of what makes human language powerful is not the syntax of it, but its ability to give us the necessary information to render a simulation about it and, crucially, to use these sequences of words to render the same inner simulation as other humans around us.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-a-brief-history-of-intelligence-max-bennett-mariner-books-143058118.html?src=rss

Assassin's Creed Mirage review: A warm, bloody hug from an old friend

Editor's note: This article contains mild spoilers for Assassin's Creed Mirage.

The deeper I got into Assassin’s Creed Mirage, the more a sense of warm nostalgia washed over me. It felt like a cozy hug from an old friend. A comforting, bloody embrace.

The latest entry in Ubisoft's long-running open-world adventure franchise takes the series back to its roots. Mirage mostly forgoes the RPG approach Ubisoft adopted in the last three main games: Assassin's Creed Origins, Odyssey and Valhalla. I'd only played the latter of those and it didn't click for me, largely because of Ubisoft's propensity to overstuff its games and partially because it strayed so far from the earlier titles.

Some of Valhalla's DNA carries over to Mirage, which shouldn't be surprising as the latest game was originally envisioned as an expansion to that 100-plus-hour epic. There is some loot to hunt for in the form of swords, daggers and outfits that give protagonist Basim some small upgrades, such as reducing the level of notoriety he gains while carrying out illegal actions or passively regenerating some health. These items are upgradable, as are your tools. One neat, if unrealistic, perk makes an enemy disintegrate after Basim eliminates them with a throwing knife. So, you can tweak your build to fit your playstyle to a certain degree.

There are skill trees too, but rather than unlocking things like a slight increase to the damage Basim deals, the abilities here are genuinely impactful. Pinpointing opponents and important items from farther away, taking less fall damage and chaining assassinations are all super useful tools for Basim to have in his belt.

Ubisoft has pulled back quite a bit on the RPG elements of the previous few games. You won’t be using bows, shields or two-handed weapons as you might in Valhalla, for instance. Still, there's just enough customization for folks who want to optimize (or min/max) Basim for the way they like to play.

"Just enough" is a thought I kept coming back to in the 17 hours it took me to beat the main story. Mirage is just the right length. There are just enough collectibles and side-quests to make the world feel rich but not overwhelming. There's just enough to the story, which is fairly by-the-numbers though gets more intriguing in the last couple of hours. There's just enough variety to the enemies.

There are only a few enemy types, and I love that Mirage doesn't go down the well-worn and nonsensical path of arbitrarily making them stronger based on their geographical location — an aspect of Dead Island 2 I greatly disliked. Although Basim largely has to make do with his sword and dagger (and, of course, the Hidden Blade), enemies have a variety of weapons. A trio of goons will pose a different threat when they have spears instead of swords. You'll have to navigate that melange of weaponry carefully, especially so when enemies surround you. Putting an onus on that and the level design for encounters helps make Mirage feel like more of a refreshing throwback.

In the main missions, I only encountered one traditional boss fight toward the end of the story. Practically every other enemy was susceptible to a single-button slaying. I absolutely made the most of that by sneaking up on assassination targets or distracting them with noise-making devices. The game actually discourages open combat, anyway. You won't gain experience points by killing tons of enemies. Staying stealthy is usually the way to go — unless you're a completionist, since there's a trophy/achievement that requires you to stay in open combat for 10 minutes. Thankfully, the game makes it fairly easy for you to slink around.

Contrary to my first impressions, the guards of Baghdad aren't all that smart. They'll often be briefly puzzled when they encounter the dead body of a colleague they were chatting with seconds earlier before walking away. They'll quickly give up on a hunt for Basim. They'll see a cohort being yanked around a corner and think nothing of it. That breaks the immersion a bit, but it does make it easier to mess with these idiots.

I took some delight in tormenting my opponents, even if that may not match up to the code of conduct the assassins live by. One larger grunt was trapped in a room alone to guard a chest. I entered, used a smoke bomb to distract him, opened the chest and left, blocking the path behind me. I then made my way around to a gate that kept the guard locked in from the other side and spent a few minutes whistling at him, for no reason other than to annoy him and amuse myself.

The real star of the show is the version of ninth-century Baghdad Ubisoft has built. It feels rich and lived-in, with bystanders simply going about their day as a hooded figure darts by them to climb up the side of a building. Unfortunately, that level of detail wasn't reflected in the character models. Main characters and NPCs alike looked far less refined than their surroundings.

Some Arab critics and reviewers appreciated how Ubisoft represented Baghdad and Muslim culture in the game, and that's a positive sign. In that sense, Mirage seems like a prime candidate for the historical educational modes that Ubisoft has added to recent Assassin's Creed games.

I can't personally speak to the authenticity of the environment Ubisoft has created. The same goes for the Arabic used in the game, but the developers at least strove to avoid anachronisms. I spent an hour or so playing in Arabic with English subtitles and found it a compelling way to experience the game, though I ended up missing velvet-voiced Shohreh Aghdashloo's portrayal of Basim's mentor Roshan too much to stick with it.

Aghdashloo's performance is one of several highlights of a solid game. Developer Ubisoft Bordeaux has achieved what it set out to do in bringing back the format of early Assassin's Creed titles while adding some modern bells and whistles (such as a gameplay option to avoid the turgid pickpocketing minigame) and avoiding some of the old trappings.

No part of the game that I've encountered is set in the modern day. That's a wise move, since those parts of previous games pulled me out of the main experience and into some tedious sections that sought to serve a larger story. I didn't hear the word "animus" once this time around. Mirage does tie back into the broader Assassin's Creed narrative — Basim makes an appearance in Valhalla, after all — but you won't get sidetracked by Desmond Miles or Layla Hassan. That meant I could spend more of my time roaming the streets and rooftops of this well-crafted city, scouting enemy camps from above and figuring out the best way to approach an assassination mission.

Mirage probably won't be for everyone, including those who appreciated the format of the last three big Assassin's Creed games, but it struck a chord with me. Even though I've wrapped up the main story and have a bunch of other games to play (I'm looking at you, Cocoon and Spider-Man 2), I'll probably spend a little while longer nuzzled up in the comfort of Mirage.

Assassin's Creed Mirage is out now on PC, PlayStation 4, PlayStation 5, Xbox One and Xbox Series X/S. It's coming to iPhone 15 Pro devices next year.

This article originally appeared on Engadget at https://www.engadget.com/assassins-creed-mirage-review-a-warm-bloody-hug-from-an-old-friend-181918323.html?src=rss

ElevenLabs is building a universal AI dubbing machine

After Disney releases a new film in English, the company will go back and localize it in as many as 46 languages to make the movie accessible to as wide an audience as possible. This is a massive undertaking, one for which Disney has an entire division — Disney Character Voices International Inc — to handle the task. And it's not like you're getting Chris Pratt back in the recording booth to dub his GotG III lines in Icelandic and Swahili — each version sounds a little different given the local voice actors. But with a new "AI dubbing" system from ElevenLabs, we could soon get a close recreation of Pratt's voice, regardless of the language spoken on-screen.

ElevenLabs is an AI startup that offers a voice cloning service, allowing subscribers to generate nearly identical vocalizations with AI based on a few minutes' worth of uploaded audio samples. Unsurprisingly, as soon as the feature was released in beta, it was immediately exploited to impersonate celebrities, sometimes without their prior knowledge or consent.

The new AI dubbing feature does essentially the same thing — in more than 20 different languages including Hindi, Portuguese, Spanish, Japanese, Ukrainian, Polish and Arabic — but legitimately, and with permission. This tool is designed for use by media companies, educators and internet influencers who don't have Disney Money™ to fund their global adaptation efforts.

ElevenLabs asserts that the system will be able to not only translate "spoken content to another language in minutes" but also generate new spoken dialog in the target language using the actor's own voice. Or, at least, an AI-generated recreation of it. The system is even reportedly capable of maintaining the "emotion and intonation" of the existing dialog and transferring that over to the generated translation.

 "It will help audiences enjoy any content they want, regardless of the language they speak," ElevenLabs CEO Mati Staniszewski said in a press statement. "And it will mean content creators can easily and authentically access a far bigger audience across the world."

This article originally appeared on Engadget at https://www.engadget.com/elevenlabs-is-building-a-universal-ai-dubbing-machine-130053504.html?src=rss

Nintendo's new mobile game lets you pluck Pikmin on your browser

Nintendo has teamed up with Niantic for a new Pikmin mobile game that's better suited to passing time than serious gaming. It's called Pikmin Finder, and as Nintendo Life notes, the companies have released it in time for the Nintendo Live event in Seattle. You can access the augmented reality game from any browser on your phone, whether it's an iPhone or an Android device. We've tried it on several browsers, including Chrome and Opera, and we can verify that it works, as long as you allow it to access your camera.

Similar to Pikmin Bloom, the game superimposes Pikmin on your environment as seen through your phone's camera. You can then pluck the creatures by swiping up — take note that there are typically more of the same color lurking around when you do spot one. Afterward, you can use the Pikmin you've plucked to search for treasures, including cakes and rubber duckies. You'll even see them bring you those treasures on your screen. 

To play the game, you can go to its website on a mobile browser and start catching Pikmin on your phone. You can also scan the QR code that shows up on the website when you open it on a desktop browser.

This article originally appeared on Engadget at https://www.engadget.com/nintendos-new-mobile-game-lets-you-pluck-pikmin-on-your-browser-064423362.html?src=rss

New AP guidelines lay the groundwork for AI-assisted newsrooms

The Associated Press published standards today for generative AI use in its newsroom. The organization, which has a licensing agreement with ChatGPT maker OpenAI, laid out a fairly restrictive and common-sense set of measures around the burgeoning tech while cautioning its staff not to use AI to make publishable content. Although nothing in the new guidelines is particularly controversial, less scrupulous outlets could view the AP’s blessing as a license to use generative AI excessively or underhandedly.

The organization’s AI manifesto underscores a belief that artificial intelligence content should be treated as the flawed tool that it is — not a replacement for trained writers, editors and reporters exercising their best judgment. “We do not see AI as a replacement of journalists in any way,” the AP’s Vice President for Standards and Inclusion, Amanda Barrett, wrote in an article about its approach to AI today. “It is the responsibility of AP journalists to be accountable for the accuracy and fairness of the information we share.”

The article directs its journalists to view AI-generated content as “unvetted source material,” to which editorial staff “must apply their editorial judgment and AP’s sourcing standards when considering any information for publication.” It says employees may “experiment with ChatGPT with caution” but not create publishable content with it. That includes images, too. “In accordance with our standards, we do not alter any elements of our photos, video or audio,” it states. “Therefore, we do not allow the use of generative AI to add or subtract any elements.” However, it carved an exception for stories where AI illustrations or art are a story’s subject — and even then, it has to be clearly labeled as such.

Barrett warns about AI’s potential for spreading misinformation. To prevent the accidental publishing of anything AI-created that appears authentic, she says AP journalists “should exercise the same caution and skepticism they would normally, including trying to identify the source of the original content, doing a reverse image search to help verify an image’s origin, and checking for reports with similar content from trusted media.” To protect privacy, the guidelines also prohibit writers from entering “confidential or sensitive information into AI tools.”

Although that’s a relatively common-sense and uncontroversial set of rules, other media outlets have been less discerning. CNET was caught early this year publishing error-ridden AI-generated financial explainer articles (only labeled as computer-made if you clicked on the article’s byline). Gizmodo found itself in a similar spotlight this summer when it ran a Star Wars article full of inaccuracies. It’s not hard to imagine other outlets — desperate for an edge in the highly competitive media landscape — viewing the AP’s (tightly restricted) AI use as a green light to make robot journalism a central figure in their newsrooms, publishing poorly edited / inaccurate content or failing to label AI-generated work as such.

This article originally appeared on Engadget at https://www.engadget.com/new-ap-guidelines-lay-the-groundwork-for-ai-assisted-newsrooms-201009363.html?src=rss

Why humans can't use natural language processing to speak with the animals

We’ve been wondering what goes on inside the minds of animals since antiquity. Doctor Dolittle’s talent was far from novel when the character first appeared in print in 1920; Greco-Roman literature is lousy with speaking animals, writers in Zhanguo-era China routinely ascribed language to certain animal species and talking beasts are also prevalent in Indian, Egyptian, Hebrew and Native American storytelling traditions.

Even today, popular Western culture toys with the idea of talking animals, though often through a lens of technology-empowered speech rather than supernatural force. The dolphins from both Seaquest DSV and Johnny Mnemonic communicated with their bipedal contemporaries through advanced translation devices, as did Dug the dog from Up.

We’ve already got machine-learning systems and natural language processors that can translate human speech into any number of existing languages, and adapting that process to convert animal calls into human-interpretable signals doesn’t seem that big of a stretch. However, it turns out we’ve got more work to do before we can converse with nature.

What is language?

“All living things communicate,” an interdisciplinary team of researchers argued in 2018’s On understanding the nature and evolution of social cognition: a need for the study of communication. “Communication involves an action or characteristic of one individual that influences the behavior, behavioral tendency or physiology of at least one other individual in a fashion typically adaptive to both.”

From microbes, fungi and plants on up the evolutionary ladder, science has yet to find an organism that exists in such extreme isolation as to not have a natural means of communicating with the world around it. But we should be clear that “communication” and “language” are two very different things.

“No other natural communication system is like human language,” argues the Linguistics Society of America. Language allows us to express our inner thoughts and convey information, as well as request or even demand it. “Unlike any other animal communication system, it contains an expression for negation — what is not the case … Animal communication systems, in contrast, typically have at most a few dozen distinct calls, and they are used only to communicate immediate issues such as food, danger, threat, or reconciliation.”

That’s not to say that pets don’t understand us. “We know that dogs and cats can respond accurately to a wide range of human words when they have prior experience with those words and relevant outcomes,” Dr. Monique Udell, Director of the Human-Animal Interaction Laboratory at Oregon State University, told Engadget. “In many cases these associations are learned through basic conditioning,” Dr. Udell said — like when we yell “dinner” just before setting out bowls of food.

Whether or not our dogs and cats actually understand what “dinner” means outside of the immediate Pavlovian response remains to be seen. “We know that at least some dogs have been able to learn to respond to over 1,000 human words (labels for objects) with high levels of accuracy,” Dr. Udell said. “Dogs currently hold the record among non-human animal species for being able to match spoken human words to objects or actions reliably,” but it’s “difficult to know for sure to what extent dogs understand the intent behind our words or actions.”

Dr. Udell continued: “This is because when we measure a dog or cat’s understanding of a stimulus, like a word, we typically do so based on their behavior.” You can teach a dog to sit with both English and German commands, but “if a dog responds the same way to the word ‘sit’ in English and in German, it is likely the simplest explanation — with the fewest assumptions — is that they have learned that when they sit in the presence of either word then there is a pleasant consequence.”

Hush, the computers are speaking

Natural Language Processing (NLP) is the branch of AI that enables computers and algorithmic models to interpret text and speech, including the speaker’s intent, the same way we meatsacks do. It combines computational linguistics, which models the syntax, grammar and structure of a language, and machine-learning models, which “automatically extract, classify, and label elements of text and voice data and then assign a statistical likelihood to each possible meaning of those elements,” according to IBM. NLP underpins the functionality of every digital assistant on the market. Basically any time you’re speaking at a “smart” device, NLP is translating your words into machine-understandable signals and vice versa.
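
As a rough sketch of that "assign a statistical likelihood to each possible meaning" idea, here is a toy bag-of-words intent classifier. The intents and training phrases are made up for illustration and bear no resemblance to what a production assistant actually runs.

```python
# A toy sketch of assigning a likelihood to each possible "meaning" (intent)
# of an utterance. Intents and phrases are hypothetical placeholders.
from collections import Counter

training = {
    "set_alarm":  ["wake me at seven", "set an alarm for six"],
    "play_music": ["play some jazz", "play my workout playlist"],
}
counts = {intent: Counter(w for phrase in examples for w in phrase.split())
          for intent, examples in training.items()}

def classify(utterance):
    scores = {}
    for intent, c in counts.items():
        total = sum(c.values())
        score = 1.0
        for w in utterance.split():
            # likelihood of each word under this intent, with add-one smoothing
            score *= (c[w] + 1) / (total + len(c))
        scores[intent] = score
    return max(scores, key=scores.get)

print(classify("play some workout music"))  # -> "play_music"
```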

The field of NLP research has undergone a significant evolution in recent years, as its core systems have migrated from older recurrent and convolutional neural networks toward Google’s Transformer architecture, which greatly increases training efficiency.

Dr. Noah D. Goodman, Associate Professor of Psychology and Computer Science, and Linguistics at Stanford University, told Engadget that, with RNNs, “you'll have to go time-step by time-step or like word by word through the data and then do the same thing backward.” In contrast, with a transformer, “you basically take the whole string of words and push them through the network at the same time.”

“It really matters to make that training more efficient,” Dr. Goodman continued. “Transformers, they're cool … but by far the biggest thing is that they make it possible to train efficiently and therefore train much bigger models on much more data.”
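
A minimal sketch of the difference, using small PyTorch modules with placeholder sizes and random data (nothing close to the models being described): the recurrent cell has to walk the sentence one step at a time, while the transformer layer takes the whole string of words at once.

```python
# Toy contrast between step-by-step recurrent processing and whole-sequence
# transformer processing. Sizes and data are arbitrary placeholders.
import torch
import torch.nn as nn

seq_len, batch, dim = 12, 4, 32
words = torch.randn(seq_len, batch, dim)  # a batch of already-embedded "sentences"

# Recurrent model: one time step at a time, each step waiting on the last.
rnn_cell = nn.RNNCell(dim, dim)
hidden = torch.zeros(batch, dim)
for t in range(seq_len):
    hidden = rnn_cell(words[t], hidden)   # step t can't run until step t-1 is done

# Transformer encoder layer: the whole string of words goes through at once,
# which is what makes training on huge corpora so much more efficient.
encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4)
out = encoder_layer(words)                # all positions processed in parallel
print(hidden.shape, out.shape)            # torch.Size([4, 32]) torch.Size([12, 4, 32])
```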

Talkin’ jive ain’t just for turkeys

While many species’ communication systems have been studied in recent years — most notably those of cetaceans like whales and dolphins, but also the southern pied babbler, for its song’s potentially syntactic qualities, and vervet monkeys’ communal predator warning system — none has shown the sheer complexity of the call of the avian family Paridae: the chickadees, tits and titmice.

Dr. Jeffrey Lucas, professor in the Biological Sciences department at Purdue University, told Engadget that the Paridae call “is one of the most complicated vocal systems that we know of. At the end of the day, what the [field’s voluminous number of research] papers are showing is that it's god-awfully complicated, and the problem with the papers is that they grossly under-interpret how complicated [the calls] actually are.”

These parids often live in socially complex, heterospecific flocks, mixed groupings that include multiple songbird and woodpecker species. The complexity of the birds’ social system is correlated with an increased diversity in communications systems, Dr. Lucas said. “Part of the reason why that correlation exists is because, if you have a complex social system that's multi-dimensional, then you have to convey a variety of different kinds of information across different contexts. In the bird world, they have to defend their territory, talk about food, integrate into the social system [and resolve] mating issues.”

The chickadee call consists of at least six distinct notes set in an open-ended vocal structure, which is both monumentally rare in non-human communication systems and the reason for the chickadee call’s complexity. An open-ended vocal system means that “increased recording of chick-a-dee calls will continually reveal calls with distinct note-type compositions,” explained the 2012 study, Linking social complexity and vocal complexity: a parid perspective. “This open-ended nature is one of the main features the chick-a-dee call shares with human language, and one of the main differences between the chick-a-dee call and the finite song repertoires of most songbird species.”

Dolphins have no need for kings

Training language models isn’t simply a matter of shoving in large amounts of data. When training a model to translate an unknown language into the one you speak, you need at least a rudimentary understanding of how the two languages correlate with one another so that the translated text retains the proper intent of the speaker.

“The strongest kind of data that we could have is what's called a parallel corpus,” Dr. Goodman explained, which is basically having a Rosetta Stone for the two tongues. In that case, you’d simply have to map between specific words, symbols and phonemes in each language — figure out what means “river” or “one bushel of wheat” in each and build out from there.

Without that perfect translation artifact, so long as you have large corpuses of data for both languages, “it's still possible to learn a translation between the languages, but it hinges pretty crucially on the idea that the kind of latent conceptual structure [is shared],” Dr. Goodman continued. That assumes both cultures’ definitions of “one bushel of wheat” are generally equivalent.

Goodman points to the word pairs ‘man and woman’ and ‘king and queen’ in English. “The structure, or geometry, of that relationship we expect [to hold] in English; if we were translating into Hungarian, we would also expect those four concepts to stand in a similar relationship,” Dr. Goodman said. “Then effectively the way we'll learn a translation now is by learning to translate in a way that preserves the structure of that conceptual space as much as possible.”
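
A toy illustration of that geometry, using made-up two-dimensional vectors rather than real learned embeddings: because the man-to-woman offset mirrors the king-to-queen offset, "king - man + woman" lands nearest to "queen."

```python
# Hypothetical 2-D vectors (not real embeddings): one axis roughly for
# royalty, one for gender, so the man->woman offset mirrors king->queen.
import numpy as np

vec = {
    "man":   np.array([0.1,  1.0]),
    "woman": np.array([0.1, -1.0]),
    "king":  np.array([1.0,  1.0]),
    "queen": np.array([1.0, -1.0]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Preserving the structure of the conceptual space means this offset survives
# translation, so "king - man + woman" should land nearest to "queen."
target = vec["king"] - vec["man"] + vec["woman"]
print(max(vec, key=lambda w: cosine(vec[w], target)))  # -> queen
```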

Having a large corpus of data to work with in this situation also enables unsupervised learning techniques to be used to “extract the latent conceptual space,” Dr. Goodman said, though that method is more resource intensive and less efficient. However, if all you have is a large corpus in only one of the languages, you’re generally out of luck.

“For most human languages we assume the [quartet concepts] are kind of, sort of similar, like, maybe they don't have ‘king and queen’ but they definitely have ‘man and woman,’” Dr. Goodman continued. “But I think for animal communication, we can't assume that dolphins have a concept of ‘king and queen’ or whether they have ‘men and women.’ I don't know, maybe, maybe not.”

And without even that rudimentary conceptual alignment to work from, discerning the context and intent of an animal’s call — much less deciphering the syntax, grammar and semantics of the underlying communication system — becomes much more difficult. “You're in a much weaker position,” Dr. Goodman said. “If you have the utterances in the world context that they're uttered in, then you might be able to get somewhere.”

Basically, if you can obtain multimodal data that provides context for the recorded animal call — the environmental conditions, time of day or year, the presence of prey or predator species, etc. — you can “ground” the language data into the physical environment. From there you can “assume that English grounds into the physical environment in the same way as this weird new language grounds into the physical environment” and use that as a kind of bridge between the languages.

Unfortunately, the challenge of translating bird calls into English (or any other human language) is going to fall squarely into the fourth category. This means we’ll need more data and a lot of different types of data as we continue to build our basic understanding of the structures of these calls from the ground up. Some of those efforts are already underway.

The Dolphin Communication Project, for example, employs a combination “mobile video/acoustic system” to capture both the utterances of wild dolphins and their relative position in physical space at that time to give researchers added context to the calls. Biologging tags — animal-borne sensors affixed to hide, hair, or horn that track the locations and conditions of their hosts — continue to shrink in size while growing in both capacity and capability, which should help researchers gather even more data about these communities.

What if birds are just constantly screaming about the heat?

Even if we won’t be able to immediately chat with our furred and feathered neighbors, gaining a better understanding of how they at least talk to each other could prove valuable to conservation efforts. Dr. Lucas points to a recent study he participated in that found environmental changes induced by climate change can radically change how different bird species interact in mixed flocks. “What we showed was that if you look across the disturbance gradients, then everything changes,” Dr. Lucas said. “What they do with space changes, how they interact with other birds changes. Their vocal systems change.”

“The social interactions for birds in winter are extraordinarily important because you know, 10 gram bird — if it doesn't eat in a day, it's dead,” Dr. Lucas continued. “So information about their environment is extraordinarily important. And what those mixed species flocks do is to provide some of that information.”

However that network quickly breaks down as the habitat degrades and in order to survive “they have to really go through fairly extreme changes in behavior and social systems and vocal systems … but that impacts fertility rates, and their ability to feed their kids and that sort of thing.”

Better understanding their calls will help us better understand their levels of stress, which can serve both modern conservation efforts and agricultural ends. “The idea is that we can get an idea about the level of stress in [farm animals], then use that as an index of what's happening in the barn and whether we can maybe even mitigate that using vocalizations,” Dr. Lucas said. “AI probably is going to help us do this.”

“Scientific sources indicate that noise in farm animal environments is a detrimental factor to animal health,” Jan Brouček of the Research Institute for Animal Production Nitra, observed in 2014. “Especially longer lasting sounds can affect the health of animals. Noise directly affects reproductive physiology or energy consumption.” That continuous drone is thought to also indirectly impact other behaviors including habitat use, courtship, mating, reproduction and the care of offspring. 

Conversely, a 2021 study, “The effect of music on livestock: cattle, poultry and pigs,” found that playing music helps to calm livestock and reduce stress during periods of intensive production. We can measure that reduction in stress by the sorts of contented sounds those animals make. Like listening to music in another language, we can get with the vibe even if we can’t understand the lyrics.


Hitting the Books: The dangerous real-world consequences of our online attention economy

If reality television has taught us anything, it's that there's not much people won't do if offered enough money and attention. Sometimes even just the latter is enough. Unfortunately for the future prospects of our civilization, modern social media has focused on those same character foibles and optimized them at a global scale, treating them as sacrifices at the altar of audience growth and engagement. In Outrage Machine, writer and technologist Tobias Rose-Stockwell walks readers through the inner workings of these modern technologies, illustrating how they're designed to capture and keep our attention, regardless of what they have to do in order to do it. In the excerpt below, Rose-Stockwell examines the human cost of feeding the content machine through a discussion of YouTube personality Nikocado Avocado's rise to internet stardom.

 


Excerpted from OUTRAGE MACHINE: How Tech Amplifies Discontent, Disrupts Democracy—And What We Can Do About It by Tobias Rose-Stockwell. Copyright © 2023 by Tobias Rose-Stockwell. Reprinted with permission of Legacy Lit. All rights reserved.


This Game Is Not Just a Game

Social media can seem like a game. When we open our apps and craft a post, the way we look to score points in the form of likes and followers distinctly resembles a strange new playful competition. But while it feels like a game, it is unlike any other game we might play in our spare time.

The academic C. Thi Nguyen has explained how games are different: “Actions in games are screened off, in important ways, from ordinary life. When we are playing basketball, and you block my pass, I do not take this to be a sign of your long-term hostility towards me. When we are playing at having an insult contest, we don’t take each other’s speech to be indicative of our actual attitudes or beliefs about the world.” Games happen in what the Dutch historian Johan Huizinga famously called “the magic circle” — where the players take on alternate roles, and our actions take on alternate meanings.

With social media we never exit the game. Our phones are always with us. We don’t extricate ourselves from the mechanics. And since the goal of social media’s game designers is to keep us there as long as possible, the game is in active competition with real life. With a constant, habituated kind of attention being pulled toward the metrics, we never leave these digital spaces. In this way, social media has colonized our world with its game mechanics.

Metrics are Money

While we are paid in the small rushes of dopamine that come from accumulating abstract numbers, metrics also translate into hard cash. Acquiring these metrics doesn’t just provide us with hits of emotional validation. They are transferable into economic value that is quantifiable and very real.

It’s no secret that the ability to consistently capture attention is an asset that brands will pay for. A follower is a tangible, monetizable asset worth money. If you’re trying to purchase followers, Twitter will charge you between $2 and $4 to acquire a new one using its promoted accounts feature.

If you have a significant enough following, brands will pay you to post sponsored items on their behalf. Depending on the size of your following on Instagram, for instance, these payouts can range from $75 per post (for an account with two thousand followers) up to hundreds of thousands of dollars per post (for accounts with hundreds of thousands of followers).

Between 2017 and 2021, the average cost for reaching a thousand Twitter users (the metric advertisers use is CPM, or cost per mille) was between $5 and $7. It costs that much to get a thousand eyeballs on your post. Any strategies that increase how much your content is shared also have a financial value.

Let’s now bring this economic incentive back to Billy Brady’s accounting of the engagement value of moral outrage. He found that adding a single moral or emotional word to a post on Twitter increased the viral spread of that content by 17 percent per word. All of our posts to social media exist in a marketplace for attention — they vie for the top of our followers’ feeds. Our posts are always competing against other people’s posts. If outraged posts have an advantage in this competition, they are literally worth more money.
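To see how those figures combine, here is a quick back-of-the-envelope calculation using the numbers quoted above: a $5 to $7 CPM and a roughly 17 percent boost per moral or emotional word. The baseline impression count is invented, and treating the boost as compounding word by word is a simplifying assumption for illustration.

```python
# Back-of-the-envelope illustration of the figures quoted above.
# The baseline impression count is invented; the CPM range and the
# ~17% per-word sharing boost come from the text.

baseline_impressions = 10_000   # hypothetical reach of an ordinary post
boost_per_word = 1.17           # +17% spread per moral/emotional word
cpm_low, cpm_high = 5.0, 7.0    # dollars per thousand impressions

for words in range(4):
    impressions = baseline_impressions * boost_per_word ** words
    value_low = impressions / 1000 * cpm_low
    value_high = impressions / 1000 * cpm_high
    print(f"{words} moral/emotional words: ~{impressions:,.0f} impressions, "
          f"worth roughly ${value_low:,.0f} to ${value_high:,.0f} in ad-equivalent reach")
```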

For a brand or an individual, if you want to increase the value of a post, then including moral outrage, or linking to a larger movement that signals its moral conviction, might increase the reach of that content by at least that much. Moreover, it might actually improve perception of and affinity for the brand by appealing to the moral foundations of its consumers and employees, increasing sales and burnishing its reputation. This can be an inherently polarizing strategy: a company that picks a cause to support, but whose audience is morally diverse, might alienate a sizable percentage of the customers who disagree with that cause. But these economics can also make sense. If a company knows enough about its consumers’ and employees’ moral affiliations, it can make sure to pick a cause-sector that’s in line with its customers.

Since moral content is a reliable tool for capturing attention, it can also be used for psychographic profiling for future marketing opportunities. Many major brands do this with tremendous success — creating viral campaigns that utilize moral righteousness and outrage to gain traction and attention among core consumers who have a similar moral disposition. These campaigns also often get a secondary boost due to the proliferation of pile-ons and think pieces discussing these ad spots. Brands that moralize their products often succeed in the attention marketplace.

This basic economic incentive can help to explain how and why so many brands have begun to link themselves with online cause-related issues. While it may make strong moral sense to those decision-makers, it can make clear economic sense to the company as a whole as well. Social media provides measurable financial incentives for companies to include moral language in their quest to burnish their brands and perceptions.

But as nefarious as this sounds, moralization of content is not always the result of callous manipulation and greed. Social metrics do something else that influences our behavior in pernicious ways.

Audience Capture

In the latter days of 2016, I wrote an article about how social media was diminishing our capacity for empathy. In the wake of that year’s presidential election, the article went hugely viral, and was shared with several million people. At the time I was working on other projects full time. When the article took off, I shifted my focus away from the consulting work I had been doing for years, and began focusing instead on writing full time. One of the by-products of that tremendous signal from this new audience is the book you’re reading right now.

A sizable new audience of strangers had given me a clear message: This was important. Do more of it. When many people we care about tell us what we should be doing, we listen.

This is the result of “audience capture”: how we influence, and are influenced by, those who observe us. We don’t just capture an audience — we are also captured by their feedback. This is often a wonderful thing, prompting us to produce more useful and interesting work. As creators, the signal from our audience is a huge part of why we do what we do.

But it also has a dark side. The writer Gurwinder Boghal has explained the phenomenon of audience capture for influencers, illustrating it with the story of a young YouTuber named Nicholas Perry. In 2016, Perry began a YouTube channel as a skinny vegan violinist. After a year of getting little traction online, he abandoned veganism, citing health concerns, and shifted to uploading mukbang (eating show) videos of himself trying different foods for his followers. These followers began demanding more and more extreme feats of food consumption. Before long, in an attempt to appease his increasingly demanding audience, he was posting videos of himself eating whole fast-food menus in a single sitting.

He found a large audience with this new format. In terms of metrics, this new format was overwhelmingly successful. After several years of following his audience’s continued requests, he amassed millions of followers, and over a billion total views. But in the process, his online identity and physical character changed dramatically as well. Nicholas Perry became the personality Nikocado — an obese parody of himself, ballooning to more than four hundred pounds, voraciously consuming anything his audience asked him to eat. Following his audience’s desires caused him to pursue increasingly extreme feats at the expense of his mental and physical health.


Nicholas Perry, left, and Nikocado, right, after several years of building a following on YouTube. Source: Nikocado Avocado YouTube Channel.

Boghal summarizes this cross-directional influence:

When influencers are analyzing audience feedback, they often find that their more outlandish behavior receives the most attention and approval, which leads them to recalibrate their personalities according to far more extreme social cues than those they’d receive in real life. In doing this they exaggerate the more idiosyncratic facets of their personalities, becoming crude caricatures of themselves.

This need not only apply to influencers. We are signal-processing machines. We respond to the types of positive signals we receive from those who observe us. Our audiences online reflect back to us their opinion of our behavior, and we adapt to fit it. The metrics (likes, followers, shares, and comments) available to us now on social media allow us to measure that feedback far more precisely than we previously could, leading us to internalize what counts as “good” behavior.

As we find ourselves more and more inside of these online spaces, this influence becomes more pronounced. As Boghal notes, “We are all gaining online audiences.” Anytime we post to our followers, we are entering into a process of exchange with our viewers — one that is beholden to the same extreme engagement problems found everywhere else on social media.
