Posts with «author_name|andrew tarantola» label

How do you prevent an AI-generated game from losing the plot?

Did you ever get to the end of The Wizard of Oz and have notes – the nagging intuition that you could have taken down all those pesky flying monkeys or handled the backstabbing intricacies of Munchkin guild politics more effectively than Dorothy and her band of misfits did in the books? Thanks to the new AI storytelling platform Hidden Door, which plops players into TTRPG-like adventures based in their favorite literary universes, you’ll soon have the chance to walk the Yellow Brick Road however you see fit.

What’s behind (hidden) door number one

Hidden Door is both the company and the game. Hidden Door, the company, was co-founded by Hilary Mason, who is also CEO, and Matt Brandwein in 2020 with a mission to “inspire creativity through play with narrative AI.” The staff is split nearly evenly between machine learning engineers and traditional game designers, Mason told Engadget.

Hidden Door, the game, is the company’s currently-in-development social roleplaying narrative AI project. “[We are] trying to take all the joys of a tabletop game and allow you to play it without all the friction [of having to do it physically], and AI is the technology enabling that,” Mason said.

Leveraging the capabilities of large language models and procedural generation systems, Hidden Door creates immersive RPG campaigns using the player’s preferred IP — it could be The Wizard of Oz, which launched on Monday, or Star Trek, Old Man’s War, Dungeon Crawler Carl or Agatha Christie’s assembled murder mystery library. (Just so long as the IP owner agrees to license their proprietary universe for use; the latter four have not, while Oz’s author has been dead long enough that it no longer matters.)

“We solve a fundamentally different, technical problem than what you would see if you were just plugging content into an LLM like ChatGPT,” Mason said. “There, what you do is take an unstructured text prompt and put it into a model which is largely a black box.”

“GPT-3 came out a few months into our project and it was clearly incredibly biased – uncontrollable and … not useful in doing something like keeping a story on the rails,” she explained. “The core of our design came from that initial desire to build a safe, controllable system for telling amazing stories.”

“We realized that if we were able to accomplish our safety goals,” she continued, “we would also be able to create something controllable enough that authors would be comfortable allowing people to play in their worlds.”

The building blocks of a cursed village

Take The Wizard of Oz, for example – a public domain series that L. Frank Baum began in 1900 and that spans 14 books in total. Hidden Door has adapted that corpus of text into an immersive in-game universe that the user, and up to three teammates, can explore. The system does so by taking unstructured inputs from the players and mapping them to the Hidden Door game state, “which is essentially a game engine that represents in a database the characters, locations, items, relationships, and their conditions,” Mason explained.
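To make that description concrete, here is a purely hypothetical sketch of what such a game state might look like; the classes, fields and the "befriend" action below are invented for illustration and are not Hidden Door's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a narrative game state; every name here is
# illustrative, not Hidden Door's real data model.

@dataclass
class Character:
    name: str
    pronouns: str
    conditions: set[str] = field(default_factory=set)              # e.g. {"cursed", "injured"}
    relationships: dict[str, str] = field(default_factory=dict)    # other name -> "ally"/"rival"

@dataclass
class Location:
    name: str
    items: list[str] = field(default_factory=list)
    characters: list[str] = field(default_factory=list)

@dataclass
class GameState:
    characters: dict[str, Character] = field(default_factory=dict)
    locations: dict[str, Location] = field(default_factory=dict)

    def apply(self, action: str, actor: str, target: str) -> None:
        """Map a structured action onto the state, e.g. ('befriend', 'Player', 'Mayor')."""
        if action == "befriend":
            self.characters[actor].relationships[target] = "ally"

state = GameState()
state.characters["Player"] = Character("Player", "they/them")
state.characters["Mayor"] = Character("Mayor", "she/her")
state.apply("befriend", "Player", "Mayor")
print(state.characters["Player"].relationships)   # {'Mayor': 'ally'}
```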

Each player starts out by making a character sheet to establish their avatar’s stats and backstory. From there, the system incorporates that data, as well as the users’ responses to in-game prompts, to generate a story. Rather than create each scenario for each story from scratch every time, the story engine works on what are essentially pre-computed tropes, Mason explained. “We call them 'story thread templates' and they're at the level of things like … a cursed village. Your objective for the scene is to figure out where the curse is coming from and resolve it.”


The templates serve as the basic building blocks of the story, establishing the narrative, providing structure for the players to explore and interact with the scene, and ultimately helping define when the story ends. With the village curse, “you don't know what it is,” Mason said. “You don't know who has cursed the village or why, so it sets those things up and then it lets you loose so you explore, you interact, you set things up.”

Every template is either handwritten or generated and then hand-edited by a person, and the team has already created thousands of them. By stringing three or four such templates together, the game can create a compelling narrative arc that lets players deeply explore these universes while maintaining strong content and safety guardrails.
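As a toy illustration of how a few pre-written thread templates might be strung into an arc (the template contents, fields and selection logic here are invented, not Hidden Door's), consider:

```python
import random

# Toy illustration of stringing story-thread templates into a narrative arc.
# Hidden Door's real templates are hand-written or hand-edited and far richer;
# these entries exist only to show the shape of the idea.

TEMPLATES = [
    {"id": "cursed_village", "setup": "A village suffers under an unexplained curse.",
     "objective": "Find the source of the curse and resolve it."},
    {"id": "political_rivalry", "setup": "Two factions vie for control of the town.",
     "objective": "Win the support of key community members."},
    {"id": "stolen_relic", "setup": "A treasured relic has gone missing.",
     "objective": "Track down the thief and recover the relic."},
]

def build_arc(num_threads: int = 3) -> list[dict]:
    """Pick a few templates to form the spine of one play session."""
    return random.sample(TEMPLATES, k=min(num_threads, len(TEMPLATES)))

for act, thread in enumerate(build_arc(), start=1):
    print(f"Act {act}: {thread['setup']} Objective: {thread['objective']}")
```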

Safety (and inclusivity) first

We’ve already seen way too many examples of what goes wrong when you let a chatbot off its leash. Whether it’s spouting Nazi propaganda or making incorrect claims about space telescopes, today’s large language models are highly susceptible to veering unbidden into hate speech, “hallucinating” facts, and on occasion, bullying people into suicide. These are all issues you don’t want popping up in an all-ages game, so there are many things you cannot say while playing.

“You cannot submit anything you want,” Mason said. The system will generate suggested actions based on what the player writes, but will not accept the written input directly. It will even give feedback on what the player is suggesting (“it might say, ‘Oh, no one's ever tried that before’ or ‘that's gonna be really hard for you,’” she continued), but every action the system suggests has already been pre-approved.

“There is no word ever in one of those constructed sentences that's not in our dictionary,” Mason said. “That gives us control, both for safety and for preventing inappropriate content – like, if you were to type in, ‘I joined the Nazis,’ it would reply with, ‘you get a bowl of nachos.’ We're not gonna let you do that – and also, for keeping the story inside the bounds of believability for the in-game world.”
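Mason doesn't describe the mechanism, but the effect she outlines (every generated word drawn from a curated dictionary, with disallowed inputs snapping to harmless near-matches) can be approximated with a filter along these lines; the word list and matching logic are purely illustrative.

```python
import difflib

# Purely illustrative: a curated vocabulary plus a "snap to the nearest allowed
# word" fallback, approximating the behavior Mason describes. Not Hidden Door's code.

ALLOWED_WORDS = sorted({"i", "joined", "the", "you", "get", "a", "bowl", "of",
                        "nachos", "village", "curse", "explore", "mayor"})

def sanitize(player_input: str) -> str:
    words_out = []
    for word in player_input.lower().split():
        if word in ALLOWED_WORDS:
            words_out.append(word)
        else:
            # Replace any out-of-dictionary word with its closest allowed neighbor.
            match = difflib.get_close_matches(word, ALLOWED_WORDS, n=1, cutoff=0.0)
            words_out.append(match[0])
    return " ".join(words_out)

print(sanitize("I joined the Nazis"))   # -> "i joined the nachos"
```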


The company’s commitment to inclusivity is also plain to see in the character creation process. “We made a very deliberate decision to pull things out where we thought a model might inject bias [like a character’s pronouns],” Mason said, “such that they are essentially on a pre-computed distribution.”

That is, there is no machine learning associated with those attributes; they’re hard-coded into the gameplay. “Things like roles are in no way coupled to your avatar, your skills or anything like that. You decide your pronouns and they're respected throughout the system,” she said. “There's no machine learning model that is deciding that a doctor should be a he and a nurse should be a she. It'll be randomly assigned.”

Go ahead, snoop around

Aside from committing war atrocities, telling aristocrats jokes and other forms of mass violence, players can do most anything they want once the game starts. In Oz, each instance starts at the same point in the story, right when Dorothy splatters the Wicked Witch of the East under her house. The players aren’t part of Dorothy’s direct story but exist in the same time and space. “It's the moment most of us think about when we think about that world, which is why we chose it,” Mason said.

But from there, the player’s decisions and actions make the Land of Oz their own. “We think of the world almost as its own character that is collectively growing as people play the story,” Mason said. “You're discovering new locations that get generated as you're playing these stories and the world grows.”

And nothing says that you have to follow the conventional “off to see the Wizard” storyline. If a player gets to the Munchkin village, looks around and decides to declare themselves mayor, the game will absolutely adapt the story to those new conditions. Instead of completing quests like battling flying monkeys and tipping over pails of water, players will be tasked with running political campaigns and winning support from key members of the community. But again, you wouldn’t be able to walk into town, declare yourself Warlord and begin summary dissident purges — because those words aren’t in Hidden Door’s dictionary.

“We have thread templates that would be, ‘you're persuading a bunch of people to support you in a political race,’” Mason said, “And once you are a mayor, you would be able to tell stories that just start in a different place.”

Those decisions are also persistent within the game instance. Deciding to help (or not) an NPC will impact their opinion of the player and influence their future interactions, for example. What’s more, those generated NPCs will reappear in subsequent playthroughs as recurring characters within your specific game instance.

“You can play as many stories in the same world as you want,” Mason said, “and everybody's version of the Wizard of Oz will be really different depending on how they play over time.” NPCs and other generated assets aren’t sharable between groups yet, but that is something the team might look at implementing in the future.

In order to prevent playthroughs from getting bogged down in side quests, the Hidden Door team has developed a design philosophy that Mason refers to as “Chekhov’s Armory.” It’s basically where the system keeps track of all of the player’s in-game decisions and their influences on other assets within the story. Whenever the system needs to move the plot forward, or inject some additional drama to keep the players engaged, it can dip back into the Armory to pull out an earlier plot thread or previously wronged enemy. This also helps the system maintain continuity of the overall storyline and prevent catch-22s from forming.
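Mason doesn't detail how Chekhov's Armory is built, but the behavior described (log every loose end, then draw one back out when the pacing flags) could be sketched, hypothetically, like this:

```python
import random

# Hypothetical sketch of a "Chekhov's Armory": a log of unresolved plot hooks
# the engine can draw on when the story needs a push. All names are invented.

class ChekhovsArmory:
    def __init__(self):
        self._hooks: list[str] = []

    def stash(self, description: str) -> None:
        """Record a player decision or loose end for later use."""
        self._hooks.append(description)

    def draw(self) -> str | None:
        """Pull an earlier thread back out to move the plot forward."""
        if not self._hooks:
            return None
        hook = random.choice(self._hooks)
        self._hooks.remove(hook)
        return hook

armory = ChekhovsArmory()
armory.stash("The Munchkin merchant you swindled swore revenge.")
armory.stash("A locked chest in the witch's cellar was never opened.")
print(armory.draw())   # reintroduce an earlier loose end when the pacing slows
```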

“The idea was to create this feeling of the story, where your choices matter, where you have that full agency, but also there are rails moving you forward,” Mason said. “That's been one of our most frequent design challenges, to adjust how much freedom versus how much we should motivate the story forward.”

16 secret herbs and language models

Hidden Door’s LLM differs significantly from the likes of ChatGPT in that it is not a monolithic model but rather 16 individual ML algorithms, each specialized to address a specific sub-task within the larger generative task.

“We use a variety of models; some of them we're building on open source models, some of them are proprietary,” Mason explained. “It's not just one big LLM, it's decomposing it into an interpretable system where we can use the best [AI] at the right moment.” It also enables the team to quickly plug in and benchmark newly released AI models against the existing system to see whether they can improve game quality. “Frankly, we design these engines so that game designers and narrative designers can be the ones to come in and tune it, which means we have to give them those knobs.”

“One big question we worked on for a while was a plot-prediction algorithm,” Mason continued. “So, ‘what should happen next based on the series of actions that has just happened?’” Interestingly, the team quickly found that they could generate incredibly dull stories simply by consistently choosing the system’s top recommendation — because that choice is invariably “the most obvious thing” that could happen. Conversely, if the system works in too many twists and surprise reveals, the story quickly turns into chaos.
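That tension (the top-ranked continuation is dull, the long tail is chaos) is essentially a sampling problem. As a rough illustration, and not Hidden Door's actual algorithm, a temperature-style knob can bias the choice toward plausible-but-not-obvious beats:

```python
import math
import random

# Illustrative only: picking the next plot beat from ranked candidates.
# Always taking the top score yields "the most obvious thing"; flattening the
# distribution slightly admits mild surprises without inviting pure chaos.

def pick_next_beat(candidates: list[tuple[str, float]], temperature: float = 1.5) -> str:
    """candidates: (plot beat, model score). Higher temperature = more surprising."""
    weights = [math.exp(score / temperature) for _, score in candidates]
    return random.choices([beat for beat, _ in candidates], weights=weights, k=1)[0]

ranked = [
    ("The villagers thank you and the curse lifts", 3.0),     # most obvious
    ("The curse lifts, but attaches itself to you", 1.5),
    ("The 'cursed' villagers were faking it all along", 0.5),
]
print(pick_next_beat(ranked))
```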

This granularity is what enables the designers to tweak the underlying game architecture to work for, say, a light-hearted Pride and Prejudice RPG as well as a grimdark Pride and Prejudice and Zombies version. “We think a lot about how our creative colleagues are going to be able to use this system to create the story experiences,” Mason said.

Gore and smooching are A-OK (but only if it’s canon)

While the game is designed to be family friendly, Hidden Door’s target demographic is the 18-35 age range and, as such, more mature themes are very much on the table for designers, so long as they make sense within the existing story. For The Wizard of Oz, violence is both OK and a major plot point.

“We work directly with authors and creators and can use as little, or as much, written material as they have,” Mason said. “We extract the characters, the types of plots, the vocabulary, the elements, the writing style, the locations.”


The team also uses what it calls a “sub-genre based model” that helps to generate the “formula” of the story. “The Wizard of Oz is largely fantasy that has a few additional rules to it, like animals can talk, but there are no dragons or other sort of fantastical creatures.” Essentially, the system takes a more general “fantasy tale” template and molds it into the specific form of the story, “down to the specific rules of the Wizard of Oz universe,” Mason said. Authors who license their works for use in the game will be able to dictate not just the initial plot points of the story, but also the specific behaviors of NPCs and the inclusion of story arcs.

There is no “Adult” story module currently available but in-game physical affection is allowed. “You can make them kiss,” Mason said. “We have a very tasteful fade to black and then you're on to the next scene. The NPC may also reject you if they don't like you or you don't have the kind of relationship. That is something that's very tunable but we try to keep it at the level of relationship in the core material.”

The future of interactive fandom

“It raises the floor for creation dramatically,” Mason said of generative AI’s broader promise to the game industry, “but it doesn't raise the ceiling.” We’re just beginning to see gen AIs used for improving NPC dialog, Mason points out, and could be as little as a year or two away from seeing a game “fully realized” using generative AI. “The brilliance of a human with a creative vision is not something we see generally out of these systems and that is in part because of what they are: a compression of a large amount of data and an aspiration to the median.”

“I do think there's a lot of excitement in being able to raise the floor. I think it makes creativity more accessible to a large number of people who may then decide to pursue it in their own way or use it as a tool in their process,” she continued. “I also think it makes it possible for more people to be fans of things and to have some autonomy in the way they want to interact with creativity that we don't currently have.”

If you want to try Hidden Door for yourself, you can sign up for the waitlist ahead of future test runs.

This article originally appeared on Engadget at https://www.engadget.com/how-do-you-prevent-an-ai-generated-game-from-losing-the-plot-170002788.html?src=rss

Lawmakers seek 'blue-ribbon commission' to study impacts of AI tools

The wheels of government have finally begun to turn on the issue of generative AI regulation. US Representatives Ted Lieu (D-CA) and Ken Buck (R-CO) introduced legislation on Monday that would establish a 20-person commission to study ways to “mitigate the risks and possible harms” of AI while “protecting” America's position as a global technology power. 

The bill would require the Executive branch to appoint experts from throughout government, academia and industry to conduct the study over the course of two years, producing three reports during that period. The president would appoint eight members of the committee, while Congress, in an effort "to ensure bipartisanship," would split the remaining 12 positions evenly between the two parties (thereby ensuring the entire process devolves into a partisan circus).

"[Generative AI] can be disruptive to society, from the arts to medicine to architecture to so many different fields, and it could also potentially harm us and that's why I think we need to take a somewhat different approach,” Lieu told the Washington Post. He views the commission as a way to give lawmakers — the same folks routinely befuddled by TikTok — a bit of "breathing room" in understanding how the cutting-edge technology functions.

Senator Brian Schatz (D-HI) plans to introduce the bill's upper house counterpart, Lieu's team told WaPo, though no timeline for that happening was provided. Lieu also noted that Congress as a whole would do well to avoid trying to pass major legislation on the subject until the commission has had its time. “I just think we need some experts to inform us and just have a little bit of time pass before we put something massive into law,” Lieu said.

Of course, that would then push the passage of any sort of meaningful Congressional regulation on generative AI out to 2027, at the very earliest, rather than right now, when we actually need it. Given how rapidly both the technology and the use cases for it have evolved in just the last six months, the commission will have its work cut out for it just keeping pace with the changes, much less convincing the octogenarians running our nation of the potential dangers AI poses to our democracy.

This article originally appeared on Engadget at https://www.engadget.com/lawmakers-seek-blue-ribbon-commission-to-study-impacts-of-ai-tools-152550502.html?src=rss

Hitting the Books: Why AI won't be taking our cosmology jobs

The problem with studying the universe around us is that it is simply too big. The stars overhead remain too far away to interact with directly, so we are relegated to testing our theories on the formation of the galaxies based on observable data. 

Simulating these celestial bodies on computers has proven an immensely useful aid in wrapping our heads around the nature of reality and, as Andrew Pontzen explains in his new book, The Universe in a Box: Simulations and the Quest to Code the Cosmos, recent advances in supercomputing technology are further revolutionizing our capability to model the complexities of the cosmos (not to mention myriad Earth-based challenges) on a smaller scale. In the excerpt below, Pontzen looks at the recent emergence of astronomy-focused AI systems, what they're capable of accomplishing in the field and why he's not too worried about losing his job to one.  


Adapted from THE UNIVERSE IN A BOX: Simulations and the Quest to Code the Cosmos by Andrew Pontzen published on June 13, 2023 by Riverhead, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2023 Andrew Pontzen.


As a cosmologist, I spend a large fraction of my time working with supercomputers, generating simulations of the universe to compare with data from real telescopes. The goal is to understand the effect of mysterious substances like dark matter, but no human can digest all the data held on the universe, nor all the results from simulations. For that reason, artificial intelligence and machine learning is a key part of cosmologists’ work.

Consider the Vera Rubin Observatory, a giant telescope built atop a Chilean mountain and designed to repeatedly photograph the sky over the coming decade. It will not just build a static picture: it will particularly be searching for objects that move (asteroids and comets), or change brightness (flickering stars, quasars and supernovae), as part of our ongoing campaign to understand the ever-changing cosmos. Machine learning can be trained to spot these objects, allowing them to be studied with other, more specialized telescopes. Similar techniques can even help sift through the changing brightness of vast numbers of stars to find telltale signs of which ones host planets, contributing to the search for life in the universe. Beyond astronomy there is no shortage of scientific applications: Google’s artificial intelligence subsidiary DeepMind, for instance, has built a network that can outperform all known techniques for predicting the shapes of proteins starting from their molecular structure, a crucial and difficult step in understanding many biological processes.

These examples illustrate why scientific excitement around machine learning has built during this century, and there have been strong claims that we are witnessing a scientific revolution. As far back as 2008, Chris Anderson wrote an article for Wired magazine that declared the scientific method, in which humans propose and test specific hypotheses, obsolete: ‘We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.’

I think this is taking things too far. Machine learning can simplify and improve certain aspects of traditional scientific approaches, especially where processing of complex information is required. Or it can digest text and answer factual questions, as illustrated by systems like ChatGPT. But it cannot entirely supplant scientific reasoning, because that is about the search for an improved understanding of the universe around us. Finding new patterns in data or restating existing facts are only narrow aspects of that search. There is a long way to go before machines can do meaningful science without any human oversight.

To understand the importance of context and understanding in science, consider the case of the OPERA experiment which in 2011 seemingly determined that neutrinos travel faster than the speed of light. The claim is close to a physics blasphemy, because relativity would have to be rewritten; the speed limit is integral to its formulation. Given the enormous weight of experimental evidence that supports relativity, casting doubt on its foundations is not a step to be taken lightly.

Knowing this, theoretical physicists queued up to dismiss the result, suspecting the neutrinos must actually be traveling slower than the measurements indicated. Yet, no problem with the measurement could be found – until, six months later, OPERA announced that a cable had been loose during their experiment, accounting for the discrepancy. Neutrinos travelled no faster than light; the data suggesting otherwise had been wrong.

Surprising data can lead to revelations under the right circumstances. The planet Neptune was discovered when astronomers noticed something awry with the orbits of the other planets. But where a claim is discrepant with existing theories, it is much more likely that there is a fault with the data; this was the gut feeling that physicists trusted when seeing the OPERA results. It is hard to formalize such a reaction into a simple rule for programming into a computer intelligence, because it is midway between the knowledge-recall and pattern-searching worlds.

The human elements of science will not be replicated by machines unless they can integrate their flexible data processing with a broader corpus of knowledge. There is an explosion of different approaches toward this goal, driven in part by the commercial need for computer intelligences to explain their decisions. In Europe, if a machine makes a decision that impacts you personally – declining your application for a mortgage, maybe, or increasing your insurance premiums, or pulling you aside at an airport – you have a legal right to ask for an explanation. That explanation must necessarily reach outside the narrow world of data in order to connect to a human sense of what is reasonable or unreasonable.

Problematically, it is often not possible to generate a full account of how machine-learning systems reach a particular decision. They use many different pieces of information, combining them in complex ways; the only truly accurate description is to write down the computer code and show the way the machine was trained. That is accurate but not very explanatory. At the other extreme, one might point to an obvious factor that dominated a machine’s decision: you are a lifelong smoker, perhaps, and other lifelong smokers died young, so you have been declined for life insurance. That is a more useful explanation, but might not be very accurate: other smokers with a different employment history and medical record have been accepted, so what precisely is the difference? Explaining decisions in a fruitful way requires a balance between accuracy and comprehensibility.

In the case of physics, using machines to create digestible, accurate explanations which are anchored in existing laws and frameworks is an approach in its infancy. It starts with the same demands as commercial artificial intelligence: the machine must not just point to its decision (that it has found a new supernova, say) but also give a small, digestible amount of information about why it has reached that decision. That way, you can start to understand what it is in the data that has prompted a particular conclusion, and see whether it agrees with your existing ideas and theories of cause and effect. This approach has started to bear fruit, producing simple but useful insights into quantum mechanics, string theory, and (from my own collaborations) cosmology.

These applications are still all framed and interpreted by humans. Could we imagine instead having the computer framing its own scientific hypotheses, balancing new data with the weight of existing theories, and going on to explain its discoveries by writing a scholarly paper without any human assistance? This is not Anderson’s vision of the theory-free future of science, but a more exciting, more disruptive and much harder goal: for machines to build and test new theories atop hundreds of years of human insight.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-universe-in-a-box-andrew-pontzen-riverhead-books-153005483.html?src=rss

Meta's Voicebox AI is a Dall-E for text-to-speech

Today, we are one step closer to the immortal celebrity future we have long been promised (since April). Meta has unveiled Voicebox, its generative text-to-speech model that promises to do for the spoken word what ChatGPT and Dall-E, respectively, did for text and image generation.

Essentially, it’s a text-to-output generator just like GPT or Dall-E — just instead of creating prose or pretty pictures, it spits out audio clips. Meta defines the system as “a non-autoregressive flow-matching model trained to infill speech, given audio context and text.” It’s been trained on more than 50,000 hours of unfiltered audio. Specifically, Meta used recorded speech and transcripts from a bunch of public domain audiobooks written in English, French, Spanish, German, Polish, and Portuguese.

That diverse data set allows the system to generate more conversational-sounding speech, regardless of the languages spoken by each party, according to the researchers. “Our results show that speech recognition models trained on Voicebox-generated synthetic speech perform almost as well as models trained on real speech.” What’s more, the computer-generated speech performed with just a 1 percent error rate degradation, compared to the 45 to 70 percent drop-off seen with existing TTS models.

The system was first taught to predict speech segments based on the segments around them as well as the passage’s transcript. “Having learned to infill speech from context, the model can then apply this across speech generation tasks, including generating portions in the middle of an audio recording without having to recreate the entire input,” the Meta researchers explained.
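The flow-matching math is beyond a quick sketch, but the infilling setup the researchers describe (hide a span of audio, condition on the surrounding audio plus the transcript, and learn to reconstruct the hidden span) can be illustrated with a toy masking routine. The shapes and names below are invented; this is not Meta's code.

```python
import numpy as np

# Toy illustration of the infilling setup described above: mask a span of the
# audio features and ask a model to reconstruct it from the surrounding audio
# plus the transcript. Shapes, names and values are invented for illustration.

def make_infill_example(audio_features: np.ndarray, transcript: str,
                        mask_start: int, mask_len: int) -> dict:
    target = audio_features[mask_start:mask_start + mask_len].copy()
    context = audio_features.copy()
    context[mask_start:mask_start + mask_len] = 0.0   # the masked span to be filled in
    return {"context": context, "transcript": transcript, "target": target}

features = np.random.randn(200, 80)   # e.g. 200 frames of an 80-dim spectrogram
example = make_infill_example(features, "the quick brown fox", mask_start=90, mask_len=40)
print(example["context"].shape, example["target"].shape)   # (200, 80) (40, 80)
```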

Voicebox is also reportedly capable of actively editing audio clips, eliminating noise from the speech and even replacing misspoken words. “A person could identify which raw segment of the speech is corrupted by noise (like a dog barking), crop it, and instruct the model to regenerate that segment,” the researchers said, much like using image-editing software to clean up photographs.

Text-to-speech generators have been around for a minute — they’re how your parents’ TomToms were able to give dodgy driving directions in Morgan Freeman’s voice. Modern iterations like Speechify or ElevenLabs’ Prime Voice AI are far more capable but they still largely require mountains of source material in order to properly mimic their subject — and then another mountain of different data for every. single. other. subject you want it trained on.

Voicebox doesn’t, thanks to a novel zero-shot text-to-speech training method Meta calls Flow Matching. The benchmark results aren’t even close: Meta’s AI reportedly outperformed the current state of the art both in intelligibility (a 1.9 percent word error rate vs 5.9 percent) and “audio similarity” (a composite score of 0.681 to the state of the art’s 0.580), all while operating as much as 20 times faster than today’s best TTS systems.

But don’t get your celebrity navigators lined up just yet: neither the Voicebox app nor its source code is being released to the public at this time, Meta confirmed on Friday, citing “the potential risks of misuse” despite the “many exciting use cases for generative speech models.” Instead, the company released a series of audio examples as well as the program’s initial research paper. In the future, the research team hopes the technology will find its way into prosthetics for patients with vocal cord damage, in-game NPCs and digital assistants.

This article originally appeared on Engadget at https://www.engadget.com/metas-voicebox-ai-is-a-dall-e-for-text-to-speech-150021287.html?src=rss

'Clockwork Revolution' is a time-traveling RPG of steampunk anarchy

Who doesn't love a good Victorian era-adjacent dystopia? If you're a fan of the Dishonored or Bioshock universes, Clockwork Revolution from inXile Entertainment could be right up your cobblestone alley.

Set in the steampunk metropolis of Avalon, Clockwork Revolution pits players against the powers that be — and the secrets they're trying to conceal. Turns out the town's leader, Lady Ironwood, has been fiddling with the weft of fate, using a time-traveling device to jump back through history to selectively change events and shower both wealth and power upon herself. It'll be up to you to use her own Chronometer against her, undoing the temporal damage she has wrought. But beware, your own past actions will have very real consequences when you return to the present. 

The game is still in early development and the action we saw during the Xbox event was all from pre-alpha builds. InXile has not shared an expected release date but has promised to provide future updates, "... in due time."

Catch up on all of the news from Summer Game Fest right here!

This article originally appeared on Engadget at https://www.engadget.com/clockwork-revolution-is-a-time-traveling-rpg-of-steampunk-anarchy-183507054.html?src=rss

Rob banks with reckless abandon in 'Payday 3' on September 21st

We got our first glimpse at gameplay from the newest entry in the long-running Payday game series during the Xbox Games Showcase at Summer Game Fest Sunday. Starbreeze Studios may have assumed the franchise's helm from Overkill, but it looks like Payday 3 will deliver the same fraught and frenetic first person shooting that fans have come to expect. 

In the demo at Sunday's showcase, a familiar-looking quartet of gunmen sets about taking over and looting a neighborhood bank through the application of aggressive intimidation tactics, a variety of high tech gadgets and a whole lot of downrange fire. Gameplay mechanics like hostage-taking (which previously allowed robbers to trade in captured civilians after taking too much damage and being "arrested") appear to still be very much a part of the new title. Payday 3 arrives on September 21st for Xbox Series X|S (and presumably other platforms as well).

Catch up on all of the news from Summer Game Fest right here!

This article originally appeared on Engadget at https://www.engadget.com/rob-banks-with-reckless-abandon-in-payday-3-on-september-21st-174340939.html?src=rss

Hitting the books: Why you shouldn't blog about asking a cop to go shopping for you

You get free stuff, you get free travel, you get the nifty cool title of "brand ambassador," so what's not to love about being an internet influencer? There are the consequences, for one. Not even just the warranted consequences of your actual actions, mind you, but also those arriving unbidden based on the perception of your actions by your audience — and those can be two markedly different things bearing entirely disparate social costs. In her new book, Swipe Up for More! Inside the Unfiltered Lives of Influencers, Stephanie McNeal takes an unflinching look at the interplay between the public personas and private lives of three of the internet's most influential lifestyle bloggers: Caitlin Covington, Mirna Valerio, and Shannon Bird.

Equal parts fascinating and disquieting — like a slow-motion car crash where everybody's really, really good looking — Swipe Up for More! explores the people and personalities behind the product placement. In the excerpt below, mommy blogger Shannon Bird recounts the internet's response to her 911 call asking a local cop to make a midnight milk run for her hungry baby.


Excerpted from Swipe Up for More!: Inside the Unfiltered Lives of Influencers by Stephanie McNeal, in agreement with Portfolio, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © Stephanie McNeal, 2023.


On January 28, 2020, Shannon and her kids were home alone. Dallin was on a work trip. Her youngest child, London, was only six weeks old and her son Brooklyn had recently broken his leg. That night, she found herself unable to produce any breast milk. She was taking medication that made her supply decline. When she realized she had no formula or saved breast milk, she grew desperate to feed her hungry baby but didn’t want to rouse all her kids to bring them to the store with her. After calling some friends and neighbors, around two a.m. she called 911.

She knew the officer who responded. When we were wandering the neighborhood on my “Mormon blogger tour,” we even saw him driving by. According to Shannon, the police officer usually posts up on their street and stays there all night waiting for a call. It’s kind of funny, she says, because there’s not much going on in Alpine, so he spends many nights just chilling. Shannon often chats with him when she goes to get her mail.

So, when she began to rack her brain for who may be up at two in the morning, she immediately thought of the police officer.

“It didn’t even faze me in a way,” she said. “I was like, ‘I know who’s awake!’”

The officer came through for her, buying baby formula and delivering it to her house in the middle of the night. Shannon was grateful to him and decided to share the saga on her Instagram Story.

In Shannon’s mind, the story was both her typical, Sandler-esque goofy fare (silly her, ending up in this situation) and also a feel-good story about a nice cop doing good in her community. She never expected the story to go viral. However, it was catnip for local news stations like KSL in Utah. (Local news loves a “good cop doing a good deed” story, which is controversial, to say the least.)

The story then spread like wildfire. Shannon was featured on CNN (“As a mother of five young children, Shannon Bird said she considers herself somewhat of a pro at the baby-raising game,” the story reads) and outlets as far away as Chicago and the UK.

At first, the attention from the media was kind of cool. Shannon described it as a “whirlwind,” ticking off all the shows that contacted her and excitedly telling me she got a free trip to New York to do interviews. She had producers “pounding on [her] door” asking her for exclusives. For a minute, everyone seemed to want to talk to her.

Then came the backlash. Online, Shannon was painted as the epitome of a clueless white woman, using her privilege to call upon law enforcement as her personal errand boy. Many questioned how a mother of color would have been treated by police in this situation (probably very differently). People called Shannon a neglectful mother, pathetic, and an attention seeker, and accused her of perpetrating a publicity stunt.

In retrospect, Shannon says she didn’t really think about the implications of what she was posting. In her mind, she wasn’t taking resources away from her larger community. She figured her local cops likely were not out responding to a crime in the middle of the night.

“I was like, ‘Wait, you’re the ones bringing race into this, I didn’t think it was a racist thing at all.’ That’s just because I really am color-blind; I didn’t know my white privilege, I guess,” she said.

This decision to post about the cop and the formula has had a profound impact on every aspect of Shannon’s life since and has radically changed her perspective on both her life and her career as an influencer. It’s constantly on her mind. Even two years later, in January 2022 when I visited her, she brought up her 911 call within the first ten minutes of my arrival and referred to it constantly afterward.

The most serious and devastating impact it had on the Bird family was the real-world one. Shortly after the incident went viral, Shannon started getting more hate than she had ever before online. Then, things started to show up at her house. Her mailbox filled up with empty formula cans, though she had no idea how anyone had found her address to send them to.

Shannon wondered if the strange missives were coming from haters online or people in her community. She grew worried. Did everyone in her neighborhood know about the formula thing? What about the other parents at her kids’ school? Everywhere she looked she felt judged. More than ever, Shannon felt like the walls of Alpine were closing in on her.

Then, she said, Child and Family Services showed up at her house. Someone had called in a tip that the Bird children were in danger, and the agency needed to do a full investigation to clear the charges. Her kids had to be interviewed. Shannon was relieved when the officers seemed to be confused as to why they had been called to the Birds.

“You live in a seven-thousand-square-foot house,” she said they told her. “Your kids are eating takeout sushi right now. Like, what are they talking about?”

While she can make little jokes about it occasionally, Shannon was extremely traumatized by the DCFS visit. Dallin, on the other hand, is so easygoing that she said he was never really concerned when DCFS came, calling the whole saga “ridiculous.”

That’s his attitude to most things Shannon posts online, including the formula saga. When I asked him if online criticism ever bothered him, he shook his head with a laugh. Even he doesn’t really understand how he’s able to not let it bother him.

“You know, I just don’t care,” he said.

Sure, he may wish she didn’t post every single thing that comes into her head, but he long ago made his peace with the fact that he can’t control what Shannon wants to do. He is capable of tuning out the opinions of strangers. “You have to get to a point where, like, it’s funny. It’s funny to you,” he told me. “If you really, really care, then you can’t do this,” he said.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-swipe-up-for-more-stephanie-mcneal-portfolio-143057280.html?src=rss

Generative AI can help bring tomorrow's gaming NPCs to life

Elves and Argonians clipping through walls and stepping through tables, blacksmiths who won’t acknowledge your existence until you take a single step to the left, draugr that drop into rag-doll seizures the moment you put an arrow through their eye — Bethesda’s long-running Elder Scrolls RPG series is beloved for many reasons; the realism of its non-player characters (NPCs) is not among them. But the days of hearing the same rote quotes and watching the same half-hearted search patterns perpetually repeated from NPCs are quickly coming to an end. It’s all thanks to the emergence of generative chatbots that are helping game developers craft more lifelike, realistic characters and in-game action.

“Game AI is seldom about any deep intelligence but rather about the illusion of intelligence,” Steve Rabin, Principal Software Engineer at Electronic Arts, wrote in the 2017 essay, The Illusion of Intelligence. “Often we are trying to create believable human behavior, but the actual intelligence that we are able to program is fairly constrained and painfully brittle.”

Just as with other forms of media, video games require the player to suspend their disbelief for the illusions to work. That’s not a particularly big ask given the fundamentally interactive nature of gaming. “Players are incredibly forgiving as long as the virtual humans do not make any glaring mistakes,” Rabin continued. “Players simply need the right clues and suggestions for them to share and fully participate in the deception.”

Early days

Take Space Invaders and Pac-Man, for example. In Space Invaders, the falling enemies remained steadfast on their zig-zag path towards Earth’s annihilation, regardless of the player’s actions, with the only change coming as a speed increase when they got close enough to the ground. There was no enemy intelligence to speak of; only the player’s skill in leading targets would carry the day. Pac-Man, on the other hand, used enemy interactions as a tentpole of gameplay.

Under normal circumstances, the Ghost Gang will coordinate to track and trap The Pac — unless the player gobbled up a Power Pellet before vengefully hunting down Blinky, Pinky, Inky and Clyde. That simple, two-state behavior, essentially a fancy if-then statement in C, proved revolutionary for the nascent gaming industry, and the technique behind it, the finite-state machine (FSM), became a de facto method of programming NPC reactions for years to come.

Finite-state machines

A finite-state machine is a mathematical model that abstracts a theoretical “machine” capable of existing in any number of states — ally/enemy, alive/dead, red/green/blue/yellow/black — but occupying exclusively one state at a time. It consists “of a set of states and a set of transitions making it possible to go from one state to another one,” Viktor Lundstrom wrote in 2016’s Human-like decision making for bots in mobile gaming. “A transition connects two states but only one way so that if the FSM is in a state that can transit to another state, it will do so if the transition requirements are met. Those requirements can be internal like how much health a character has, or it can be external like how big of a threat it is facing.”

Like light switches in Half-Life and Fallout, or the electric generators in Dead Island: FSMs are either on or they’re off or they’re in a rigidly defined alternative state (real world examples would include a traffic light or your kitchen microwave). These machines can transition back and forth between states given the player’s actions but half measures like dimmer switches and low power modes do not exist in these universes. There are few limits on the number of states that an FSM can exist in beyond the logistical challenges of programming and maintaining them all, as you can see with the Ghost Gang’s behavioral flowcharts in Jared Mitchell’s blog post, AI Programming Examples. Lundstrom points out that an FSM “offers lots of flexibility but has the downside of producing a lot of method calls,” which ties up additional system resources.
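A minimal sketch of the idea, using the Ghost Gang's mode flip as the example (the transition table below is illustrative, not the arcade game's actual logic): a finite-state machine is little more than a current state plus a table of allowed transitions.

```python
# Minimal finite-state machine sketch, loosely modeled on Pac-Man's ghosts:
# exactly one state at a time, with transitions gated on external events.

class GhostFSM:
    TRANSITIONS = {
        ("chase", "power_pellet_eaten"): "frightened",
        ("frightened", "timer_expired"): "chase",
        ("frightened", "eaten_by_player"): "returning",
        ("returning", "reached_home"): "chase",
    }

    def __init__(self):
        self.state = "chase"

    def handle(self, event: str) -> str:
        # Only move if (current state, event) is a defined transition; otherwise stay put.
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state

ghost = GhostFSM()
for event in ["power_pellet_eaten", "eaten_by_player", "reached_home"]:
    print(event, "->", ghost.handle(event))
```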

Decision and behavior trees

Alternately, game AIs can be modeled using decision trees. “There are usually no logical checks such as AND or OR because they are implicitly defined by the tree itself,” Lundstrom wrote, noting that the trees “can be built in a non-binary fashion making each decision have more than two possible outcomes.”

Behavior trees are a logical step above that, offering characters contextual actions to take by chaining multiple smaller decisions together. For example, if the character is faced with the task of passing through a closed door, they can either perform the action to turn the handle to open it or, upon finding the door locked, take the “composite action” of pulling a crowbar from inventory and breaking the locking mechanism.

“Behavior trees use what is called a reactive design where the AI tends to try things and makes its decisions from things it has gotten signals from,” Lundstrom explained. “This is good for fast phasing games where situations change quite often. On the other hand, this is bad in more strategic games where many moves should be planned into the future without real feedback.”
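The locked-door example maps neatly onto the two standard node types: a selector tries alternatives until one succeeds, while a sequence requires every step to succeed in order. Here is a compact, illustrative sketch with the door scenario hard-coded:

```python
# Compact behavior-tree sketch: selectors try alternatives, sequences chain steps.
# The door scenario is hard-coded purely for illustration.

def selector(*children):
    def run(state):
        return any(child(state) for child in children)   # first success wins
    return run

def sequence(*children):
    def run(state):
        return all(child(state) for child in children)   # every step must succeed
    return run

def turn_handle(state):  return not state["door_locked"]
def has_crowbar(state):  return "crowbar" in state["inventory"]
def break_lock(state):
    state["door_locked"] = False
    return True

open_door = selector(
    turn_handle,                           # try the handle first
    sequence(has_crowbar, break_lock),     # otherwise: crowbar, then break the lock
)

world = {"door_locked": True, "inventory": ["crowbar"]}
print(open_door(world), world["door_locked"])   # True False
```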

GOAPs and RadiantAI

From behavior trees grew GOAPs (Goal-Oriented Action Planners), which we first saw in 2005’s F.E.A.R. An AI agent empowered with GOAP will use the actions available to choose from any number of goals to work towards, which have been prioritized based on environmental factors. “This prioritization can in real-time be changed if as an example the goal of being healthy increases in priority when the health goes down,” Lundstrom wrote. He asserts that they are “a step in the right direction” but suffer the drawback that “it is harder to understand conceptually and implement, especially when bot behaviors come from emergent properties.”
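A tiny illustration of that goal-reprioritization idea (the goals, weights and utility functions are invented; this is not F.E.A.R.'s planner): each tick, the agent re-scores its goals against the world state and pursues whichever is currently most urgent.

```python
# Toy goal-oriented sketch: goals are re-scored against the world state each
# tick and the agent pursues whichever is currently most urgent.
# Goals, weights and utility functions are invented for illustration.

GOALS = {
    "stay_healthy": lambda w: 1.0 - w["health"],   # urgency rises as health falls
    "eliminate_threat": lambda w: w["threat_level"],
    "patrol": lambda w: 0.2,                       # low, constant baseline
}

def choose_goal(world: dict) -> str:
    return max(GOALS, key=lambda goal: GOALS[goal](world))

print(choose_goal({"health": 0.9, "threat_level": 0.4}))   # eliminate_threat
print(choose_goal({"health": 0.2, "threat_level": 0.4}))   # stay_healthy
```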

Radiant AI, which Bethesda developed first for Elder Scrolls IV: Oblivion and then adapted to Skyrim, Fallout 3, Fallout 4 and Fallout: New Vegas, operates on a similar principle to GOAP. Whereas NPCs in Oblivion were only programmed with five or six set actions, resulting in highly predictable behaviors, by Skyrim, those behaviors had expanded to location-specific sets, so that NPCs working in mines and lumber yards wouldn’t mirror the movements of folks in town. What’s more, the character’s moral and social standing with the NPC’s faction in Skyrim began to influence the AI’s reactions to the player’s actions. “Your friend would let you eat the apple in his house,” Bethesda Studios creative director Todd Howard told Game Informer in 2011, rather than reporting you to the town guard like they would if the relationship were strained.

Modern AIs

Naughty Dog’s The Last of Us series offers some of today’s most advanced NPC behaviors for enemies and allies alike. “Characters give the illusion of intelligence when they are placed in well thought-out setups, are responsive to the player, play convincing animations and sounds, and behave in interesting ways,” Mark Botta, Senior Software Engineer at Ripple Effect Studios, wrote in Infected AI in The Last of Us. “Yet all of this is easily undermined when they mindlessly run into walls or do any of the endless variety of things that plague AI characters.”

“Not only does eliminating these glitches provide a more polished experience,” he continued, “but it is amazing how much intelligence is attributed to characters that simply don’t do stupid things.”

You can see this in both the actions of enemies, whether they’re human Hunters or infected Clickers, or allies like Joel’s ward, Ellie. The game’s two primary flavors of enemy combatant are built on the same base AI system but “feel fundamentally different” from one another thanks to a “modular AI architecture that allows us to easily add, remove, or change decision-making logic,” Botta wrote.

The key to this architecture was never referring to the enemy character types in the code but rather, “[specifying] sets of characteristics that define each type of character,” Botta said. “For example, the code refers to the vision type of the character instead of testing if the character is a Runner or a Clicker … Rather than spreading the character definitions as conditional checks throughout the code, it centralizes them in tunable data.” Doing so empowers the designers to adjust character variations directly instead of having to ask for help from the AI team.
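A rough sketch of what centralizing character definitions in tunable data can look like in practice; the traits and values here are invented for illustration rather than taken from Naughty Dog's tables. The point is that code asks about a trait, never about the character type.

```python
# Illustration of data-driven character definitions: the code queries traits
# ("what is this character's vision type?") rather than testing the type itself
# ("is this a Runner or a Clicker?"). All values are invented for illustration.

CHARACTER_DATA = {
    "runner":  {"vision": "normal", "hearing": "normal", "melee_damage": 10},
    "clicker": {"vision": "blind",  "hearing": "acute",  "melee_damage": 40},
}

def can_see_player(character_type: str, player_visible: bool) -> bool:
    traits = CHARACTER_DATA[character_type]
    return player_visible and traits["vision"] != "blind"

print(can_see_player("clicker", player_visible=True))   # False: clickers rely on hearing
print(can_see_player("runner", player_visible=True))    # True
```

Because the traits live in one table, a designer can tune a character variant by editing data rather than asking the AI team to change conditional logic scattered through the code.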

The AI system is divided into high-level logic (aka “skills”) that dictate the character’s strategy and the low-level “behaviors” that they use to achieve the goal. Botta points to a character’s “move-to behavior” as one such example. So when Joel and Ellie come across a crowd of enemy characters, their approach either by stealth or by force is determined by that character’s skills.

“Skills decide what to do based on the motivations and capabilities of the character, as well as the current state of the environment,” he wrote. “They answer questions like ‘Do I want to attack, hide, or flee?’ and ‘What is the best place for me to be?’” And then once the character/player makes that decision, the lower level behaviors trigger to perform the action. This could be Joel automatically ducking into cover and drawing a weapon or Ellie scampering off to a separate nearby hiding spot, avoiding obstacles and enemy sight lines along the way (at least for the Hunters — Clickers can hear you breathing).
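In code terms, that split might look something like a high-level skill choosing a strategy and delegating execution to low-level behaviors; again, a hypothetical sketch rather than Naughty Dog's implementation.

```python
# Hypothetical sketch of the skill/behavior split: the skill answers
# "what do I want to do?", the behaviors handle "how do I physically do it?".

def move_to_behavior(actor: str, destination: str) -> None:
    print(f"{actor}: pathfinding to {destination}, avoiding obstacles and sightlines")

def take_cover_behavior(actor: str) -> None:
    print(f"{actor}: ducking behind the nearest cover and drawing a weapon")

def combat_skill(actor: str, enemies_nearby: int, has_ammo: bool) -> None:
    # High-level decision: attack, hide, or flee?
    if enemies_nearby and not has_ammo:
        move_to_behavior(actor, "a hiding spot out of enemy sightlines")
    elif enemies_nearby:
        take_cover_behavior(actor)
    else:
        move_to_behavior(actor, "the next waypoint")

combat_skill("Ellie", enemies_nearby=3, has_ammo=False)
combat_skill("Joel", enemies_nearby=3, has_ammo=True)
```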

Tomorrow’s AIs

Generative AI systems have made headlines recently due in large part to the runaway success of next-generation chatbots from Google, Meta, OpenAI and others, but they’ve been a mainstay in game design for years. Dwarf Fortress and Deep Rock Galactic just wouldn’t be the same without their procedurally generated levels and environments — but what if we could apply those generative principles to dialog creation too? That’s what Ubisoft is attempting with its new Ghostwriter AI.

“Crowd chatter and barks are central features of player immersion in games – NPCs speaking to each other, enemy dialogue during combat, or an exchange triggered when entering an area all provide a more realistic world experience and make the player feel like the game around them exists outside of their actions,” Ubisoft’s Roxane Barth wrote in a March blog post. “However, both require time and creative effort from scriptwriters that could be spent on other core plot items. Ghostwriter frees up that time, but still allows the scriptwriters a degree of creative control.”

The use process isn’t all that different from messing around with public chatbots like Bing Chat and Bard, albeit with a few important distinctions. The scriptwriter first comes up with a character and the general idea of what that person would say. That gets fed into Ghostwriter, which then returns a rough list of potential barks. The scriptwriter can then choose a bark and edit it to meet their specific needs. The system generates these barks in pairs, and selecting one over the other serves as a quick training and refinement method: the model learns from the preferred choice and, after a few thousand repetitions, begins generating more accurate and desirable barks from the outset.
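That pick-one-of-two loop is a standard preference-collection pattern. A bare-bones sketch of the data it produces (field names and format invented, not Ubisoft's pipeline):

```python
import json

# Bare-bones sketch of pairwise preference collection: the writer sees two
# candidate barks, keeps one, and the choice is logged as a training signal.
# Field names and the storage format are invented for illustration.

def record_preference(prompt: str, bark_a: str, bark_b: str, chosen: str, log: list) -> None:
    log.append({"prompt": prompt, "chosen": chosen,
                "rejected": bark_b if chosen == bark_a else bark_a})

preferences: list[dict] = []
record_preference(
    prompt="market vendor selling fish, cheerful",
    bark_a="Fresh fish! Caught this very morning!",
    bark_b="Fish available for purchase here.",
    chosen="Fresh fish! Caught this very morning!",
    log=preferences,
)
print(json.dumps(preferences, indent=2))   # pairs later used to refine the bark generator
```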

“Ghostwriter was specifically created with games writers, for the purpose of accelerating their creative iteration workflow when writing barks [short phrases],” Yves Jacquier, Executive Director at Ubisoft La Forge, told Engadget via email. “Unlike other existing chatbots, prompts are meant to generate short dialogue lines, not to create general answers.”

“From here, there are two important differences,” Jacquier continued. “One is on the technical aspect: for using Ghostwriter writers have the ability to control and give input on dialogue generation. Second, and it’s a key advantage of having developed our in-house technology: we control on the costs, copyrights and confidentiality of our data, which we can re-use to further train our own model.”

Ghostwriter’s assistance doesn’t just make scriptwriters’ jobs easier, it in turn helps improve the overall quality of the game. “Creating believable large open worlds is daunting,” Jacquier said. “As a player, you want to explore this world and feel that each character and each situation is unique, and involve a vast variety of characters in different moods and with different backgrounds. As such there is a need to create many variations to any mundane situation, such as one character buying fish from another in a market.”

Writing 20 different iterations of ways to shout “fish for sale” is not the most effective use of a writer’s time. “They might come up with a handful of examples before the task might become tedious,” Jacquier said. “This is exactly where Ghostwriter kicks in: proposing such dialogs and their variations to a writer, which gives the writer more variations to work with and more time to polish the most important narrative elements.”

Ghostwriter is one of a growing number of generative AI systems Ubisoft has begun to use, including voice synthesis and text-to-speech. “Generative AI has quickly found its use among artists and creators for ideation or concept art,” Jacquier said, but clarified that humans will remain in charge of the development process for the foreseeable future, regardless of coming AI advancements. “Games are a balance of technological innovation and creativity and what makes great games is our talent – the rest are tools. While the future may involve more technology, it doesn’t take away the human in the loop.”

7.4887 billion reasons to get excited

Per a recent Market.us report, the value of generative AI in the gaming market could as much as septuple by 2032, growing from around $1.1 billion in 2023 to nearly $7.5 billion over the next decade. Those gains will be driven by improvements to NPC behaviors, productivity boosts from automating digital asset generation, and procedurally generated content creation.

And it won’t just be major studios cranking out AAA titles that will benefit from the generative AI revolution. Just as we are already seeing hundreds of mobile apps built atop ChatGPT mushrooming up on Google Play and the App Store for myriad purposes, these foundational models (not necessarily Ghostwriter itself but its inevitable open-source derivatives) are poised to spawn countless tools which will in turn empower indie game devs, modders and individual players alike. And given how quickly the need to write proper code, rather than natural language, is falling away, our holodeck immersive gaming days could be closer than we ever dared hope.

Catch up on all of the news from Summer Game Fest right here!

This article originally appeared on Engadget at https://www.engadget.com/generative-ai-can-help-bring-tomorrows-gaming-npcs-to-life-163037183.html?src=rss

Screenshots of Instagram's answer to Twitter leak online

After Elon Musk finalized his purchase of Twitter last October, Mark Zuckerberg's Meta reportedly began working on a social media platform of its own, codenamed Project 92. During a company-wide meeting on Thursday, Meta chief product officer Chris Cox showed off a set of UI mock-ups to the assembled employees, which were promptly leaked online.

The project's existence was first officially confirmed in March when the company told reporters, "We're exploring a standalone decentralized social network for sharing text updates. We believe there's an opportunity for a separate space where creators and public figures can share timely updates about their interests." A set of design images shared internally in May were leaked online as well.

The new platform, which Cox referred to as “our response to Twitter,” will be a standalone program, based on Instagram and integrating ActivityPub, the same networking protocol that powers Mastodon. The leaked images include a shot of the secure sign in screen; the main feed, which looks suspiciously like Twitter's existing mobile app; and the reply screen. There's no word yet on when the app will be available for public release.

“We’ve been hearing from creators and public figures who are interested in having a platform that is sanely run, that they believe that they can trust and rely upon for distribution,” Cox said, per a Verge report. Celebrities including Oprah and the Dalai Lama have both reportedly been attached to the project.

This article originally appeared on Engadget at https://www.engadget.com/screenshots-of-instagrams-answer-to-twitter-leak-online-212427998.html?src=rss

'John Carpenter's Toxic Commando' brings a co-op apocalypse to PS5, PC and Xbox

From Halloween to The Thing, Christine to They Live, John Carpenter is a modern master of cinematic horror. During Summer Game Fest on Thursday, Focus Home Entertainment and Saber Interactive announced that his unique zompocalyptic vision will be coming to Xbox Series X|S, the Epic Games Store and PlayStation 5 in 2024 with the release of John Carpenter's Toxic Commando.

The game's premise, based on the trailer that debuted on Thursday, is straightforward: you see a zombie, you shoot it until it stops twitching. The plot is equally nuanced, wherein an experiment seeking to draw energy from the Earth's core has instead unleashed an ambulatory zombie plague known as the Sludge God. Players will have to kill it and its unending army of undead monstrosities with an array of melee, edged and ranged weapons, and special abilities. Get ready to rock out with your Glock out because the enemies will be coming at you in hordes. 

Brace yourselves for an explosive, co-op shooter inspired by 80s Horror and Action in John Carpenter's #ToxicCommando!

Drive wicked vehicles unleash mayhem on hordes of monsters to save the world. Time to go commando!

Coming to Epic Games Store in 2024. #SGF pic.twitter.com/mpz1LQFwRX

— Epic Games Store (@EpicGames) June 8, 2023

A firm release date has not yet been set, but the studio did announce that there will be a closed beta offered ahead of its release. If you want to get in on the undead butchery ahead of time, sign up for PC on the beta website.

Catch up on all of the news from Summer Game Fest right here!

This article originally appeared on Engadget at https://www.engadget.com/john-carpenters-toxic-commando-brings-a-co-op-apocalypse-to-ps5-and-xbox-203721417.html?src=rss