
Inside the 'arms race' between YouTube and ad blockers

YouTube recently took dramatic action against anyone visiting its site with an ad blocker running — after a few pieces of content, it'll simply stop serving you videos. If you want to get past the wall, that ad blocker will (probably) need to be turned off; and if you want an ad-free experience, better cough up a couple bucks for a Premium subscription.

Although this is an aggressive move that seemingly left ad blocking companies scrambling to respond, it didn’t come out of the blue — YouTube had been testing something similar for months. And even before this most recent clampdown, the Google-owned video service has been engaged in an ongoing conflict — a game of cat-and-mouse, an arms race, pick your metaphor — with ad-blocking software: YouTube rolls out new ways to serve ads to viewers with ad blockers, then ad blockers develop new strategies to circumvent those ad-serving measures.

As noted in a blog post by the ad- and tracker-blocking company Ghostery, YouTube employs a wide variety of techniques to circumvent ad blockers, such as embedding an ad in the video itself (so the ad blocker can’t distinguish between the two), or serving ads from the same domain as the video, fooling filters that have been set up to block ads served from third-party domains.

It’s not that YouTube is alone in these efforts; many digital publishers make similar attempts to stymie ad blockers. To some extent, YouTube’s moves just get more attention because the service is so popular. As AdGuard CTO Andrey Meshkov put it in an email, “Even when they run a test on a share of users… the number of affected people is very high.”

At the same time, according to Ghostery’s director of product and engineering Krzysztof Modras, it’s also true that “as one of the world’s largest publishers, YouTube constantly invests in circumventing ad blocking.” And those investments have been effective. Many of the most common ad blocking strategies, including DNS filtering (filtering for third-party domains), network filtering (which Modras described as “more selective” and better at blocking first-party requests) and cosmetic filtering (which can block ads without leaving ad-shaped holes in the website content) no longer work on the site.
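To make the first of those concrete, here is a hypothetical sketch (not any real blocker's code) of the domain comparison behind third-party filtering. The domain check is deliberately naive — it just takes the last two hostname labels, which breaks on suffixes like `co.uk`; real blockers consult the Public Suffix List:

```typescript
// Naive "registrable domain" extraction: keep the last two labels.
// Illustration only — real ad blockers use the Public Suffix List.
function registrableDomain(hostname: string): string {
  const labels = hostname.split(".");
  return labels.slice(-2).join(".");
}

// A request is treated as third-party when its registrable domain
// differs from the page's registrable domain.
function isThirdParty(pageUrl: string, requestUrl: string): boolean {
  const pageHost = new URL(pageUrl).hostname;
  const reqHost = new URL(requestUrl).hostname;
  return registrableDomain(pageHost) !== registrableDomain(reqHost);
}
```

Under this scheme, an ad fetched from an ad-network domain while you watch a video is flagged, but an ad served from `www.youtube.com` itself looks identical to the video traffic — which is exactly the first-party circumvention Ghostery describes.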

Now, Modras said, YouTube seems to be “adapting [its] methods more frequently than ever before. To counteract its changes to ad delivery and ad blocker detection, block lists have to be updated at minimum on a daily basis, and sometimes even more often. While all players in the space are innovating, some ad blockers are simply unable to keep up with these changes.”

Keeping pace with YouTube will likely become even more challenging next year, when Google’s Chrome browser adopts the Manifest V3 standard, which significantly limits what extensions are allowed to do. Modras said that under Manifest V3, whenever an ad blocker wants to update its blocklist — again, something they may need to do multiple times a day — it will have to release a full update and undergo a review “which can take anywhere between [a] few hours to even a few weeks.”

“Through Manifest V3, Google will close the door for innovation in the ad blocking landscape and introduce another layer of gatekeeping that will slow down how ad blockers can react to new ads and online tracking methods,” he said.

For many users, the battle between YouTube and ad blockers has largely been invisible, or at least ignorable, until now. The new wall dramatically changes this dynamic, forcing users to adapt their behavior if they want to access YouTube videos at all. Still, the ad blocking companies suggest it’s more of a policy change than a technical breakthrough — a sign of a new willingness on YouTube’s part to risk alienating its users.

“It's not that YouTube's move is something new, many publishers went [down] this road already,” Meshkov said. “The difference is [the] scale of YouTube.” That scale affects both the number of users impacted, as well as the number of resources required to maintain these defenses on the publisher's side. “Going this road is very, very expensive, it requires constant maintenance," he added, "you basically need a team dedicated to this. There's just a handful of companies that can afford it."

As ever, ad blockers are figuring out how to adapt, even if it’s requiring more effort from their users, too. For example, Modras noted that “throughout much of October, Ghostery experienced three to five times the typical number of both uninstalls and installs per day, as well as a 30 percent increase in downloads on Microsoft Edge, where our ad blocker was still working on YouTube for a period of time.” All of this activity suggests that users are quickly cycling through different products and strategies to get around YouTube’s anti-ad block efforts, then discarding them when they stop working.

Meanwhile, uBlock Origin still seems to work on YouTube. But a detailed Reddit post outlining how to avoid tripping the platform's ad-block detection measures notes that because “YouTube changes their detection scripts regularly,” users may still encounter the site’s pop-up warnings and anti-adblock wall in “brief periods of time” between script changes (on the platform's end) or filter updates (on uBlock's side). uBlock Origin may also stop working on Chrome next year thanks to the aforementioned Manifest V3. And if you’re hoping to use it on a non-Chrome browser, Google has allegedly begun degrading YouTube's load times on alternative browsers, seemingly as part of the anti-ad block effort. While 404 Media and Android Authority, which both reported on this issue, were not able to replicate these artificially slowed load times, users were seemingly able to avoid them through the use of a “user-agent switcher,” which disguises one browser (say, Firefox) as another (in this case, Chrome).
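For illustration only, here is roughly what a user-agent switcher does. The version strings below are invented, and a real extension would rewrite the header in a request listener rather than in a plain function:

```typescript
// Hypothetical spoofed user-agent string (version numbers are made up).
const CHROME_UA =
  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 " +
  "(KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36";

// If the real UA identifies as Firefox, present Chrome's instead;
// any other browser is passed through unchanged.
function spoofUserAgent(realUa: string): string {
  return /Firefox\//.test(realUa) ? CHROME_UA : realUa;
}
```

The site then sees a "Chrome" visit and, per the user reports above, skips the slowdown.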

Why do some ad blockers still work? The answer seems to boil down to a new approach: Scriptlet injection, which uses scripts to alter website behavior in a more fine-grained way. For example, Meshkov said an ad blocker could write a scriptlet to remove a cookie with a given name, or to stop the execution of JavaScript on a web page when it tries to access a page property with a given name.
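Neither of these is AdGuard's actual code, but the two techniques Meshkov describes can be sketched like this — the cookie name and the trapped property name are invented for illustration:

```typescript
// (a) Remove a cookie with a given name from a document.cookie-style string.
function removeCookie(cookieHeader: string, name: string): string {
  return cookieHeader
    .split("; ")
    .filter((pair) => !pair.startsWith(name + "="))
    .join("; ");
}

// (b) Stop script execution when a page property with a given name is
// accessed: install a getter that throws, so any script touching the
// trapped property dies on the spot.
function abortOnPropertyRead(target: object, prop: string): void {
  Object.defineProperty(target, prop, {
    get() {
      throw new ReferenceError(`${prop} access blocked`);
    },
  });
}
```

In a browser, a scriptlet would apply (b) to `window`, so an anti-adblock script that checks some detection property crashes before it can report anything.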

On YouTube, Modras said, scriptlets can alter the data being loaded before it’s used by the page script. For example, a scriptlet might look for specific data identifiers and remove them, making this approach “subtle enough” to block ads that have been mixed in with website functionality, without affecting the functionality.
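Again as a hypothetical sketch rather than any real filter: a scriptlet in this style might wrap `JSON.parse` so ad entries are stripped from a response before the player script reads it. The data shape here (an `items` array where ad entries carry an `adPlacement` key) is invented; real YouTube responses are far more complex:

```typescript
type PlayerItem = Record<string, unknown>;

// Drop any entry that carries the (hypothetical) ad marker key,
// leaving the rest of the data untouched.
function stripAdEntries(items: PlayerItem[]): PlayerItem[] {
  return items.filter((item) => !("adPlacement" in item));
}

// Wrap JSON.parse so every parsed response is cleaned before the
// page script sees it. (A browser scriptlet would reassign
// JSON.parse itself; here we keep the wrapper separate.)
const originalParse = JSON.parse.bind(JSON);
function patchedParse(text: string): unknown {
  const data = originalParse(text);
  if (data && Array.isArray((data as { items?: unknown }).items)) {
    (data as { items: PlayerItem[] }).items =
      stripAdEntries((data as { items: PlayerItem[] }).items);
  }
  return data;
}
```

Because only the flagged entries are removed, the surrounding data — and whatever page functionality depends on it — keeps working, which is the "subtle enough" quality Modras describes.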

Scriptlet injection also plays a role in an increasingly crucial part of the ad blocker’s job: escaping detection. AdGuard’s Meshkov said this is something that teams like his are already working on, since they try to escape detection as a general rule — both by avoiding activity that would alert a website to their presence, and by using scriptlets to prevent common fingerprinting functions that websites use to detect ad blockers.

Scriptlet injection seems to be the most promising approach right now — in fact, Modras described it as currently “the only reliable way of ad blocking on YouTube.”

Meshkov said that assessment is accurate if you limit yourself to browser extensions (which is how most popular ad blockers are distributed). But he pointed to network-level ad blockers and alternative YouTube clients, such as NewPipe, as other approaches that can work. A recent AdGuard blog post outlined additional steps that users can try, such as checking for filter updates, making sure multiple ad blockers aren't installed and using a desktop ad-blocking app, which should be harder to detect than an extension. (AdGuard itself offers both network-level blocking and desktop apps.)

At least one popular ad blocker, AdBlock Plus, won’t be trying to get around YouTube’s wall at all. Vegard Johnsen, chief product officer at AdBlock Plus developer eyeo, said he respects YouTube’s decision to start “a conversation” with users about how content gets monetized.

Referencing the now independently run Acceptable Ads program (which eyeo created and participates in), Johnsen said, “the vast majority of our users have really embraced the fact that there will be ads [...] we’ve made it clear we don’t believe in circumvention.”

Similarly, a YouTube spokesperson reiterated that the platform’s ads support “a diverse ecosystem of creators globally” and that “the use of ad blockers violate YouTube’s Terms of Service.”

As the battle between YouTube and ad blockers continues, Modras suggested that his side has at least one major advantage: They’re open source and can draw on knowledge from the broader community.

“Scriptlet injection is already getting more powerful, and it’s becoming harder for anti-ad blockers to detect,” he said. “In some ways, the current situation has spurred an arms race. YouTube has inadvertently improved ad blockers, as the new knowledge and techniques gained from innovating within the YouTube platform are also applicable to other ad and tracking systems.”

But even if most users grow frustrated with the new countermeasures and decide to whitelist YouTube in their ad block product of choice, Modras suggested that ad blockers can still affect the platform's bottom line: “If users disable ad blocking on only YouTube and maintain their protection on other websites as they browse, the platform will quickly learn that they are still unable to effectively target ads to these users,” since it won’t have data about user activity on those other sites.

Regardless of what YouTube does next, he suggested that other publishers are unlikely to build a similar wall, because few if any services enjoy the same chokehold on an entire media ecosystem — not only owning the most popular video sharing service, but also the most popular web browser on which to view it. "YouTube is in a unique position as it is de facto a monopoly," he said. "That's not true for most of the other publishers.”

Even against those odds, ad block diehards aren't dissuaded in their mission. As Andrey Meshkov put it bluntly: “YouTube’s policy is just a good motivation to do it better.”

This article originally appeared on Engadget at https://www.engadget.com/inside-the-arms-race-between-youtube-and-ad-blockers-140031824.html?src=rss

How OpenAI's ChatGPT has changed the world in just a year

Over the course of two months from its debut in November 2022, ChatGPT exploded in popularity, from niche online curio to 100 million monthly active users — the fastest user base growth in the history of the internet. In less than a year, it has earned the backing of Silicon Valley’s biggest firms, and been shoehorned into myriad applications from academia and the arts to marketing, medicine, gaming and government.

In short, ChatGPT is just about everywhere. Few industries have remained untouched by the viral adoption of generative AI tools. On the first anniversary of its release, let’s take a look back on the year of ChatGPT that brought us here.

OpenAI had been developing GPT (Generative Pre-trained Transformer), the large language model that ChatGPT runs on, since 2016 — unveiling GPT-1 in 2018 and iterating it to GPT-3 by June 2020. With the November 30, 2022 release of GPT-3.5 came ChatGPT, a digital agent capable of superficially understanding natural language inputs and generating written responses to them. Sure, it was rather slow to answer and couldn’t speak to questions about anything that happened after September 2021 — not to mention its tendency to answer queries with misinformation during bouts of “hallucinations” — but even that kludgy first iteration demonstrated capabilities far beyond what other state-of-the-art digital assistants like Siri and Alexa could provide.

ChatGPT’s release timing couldn’t have been better. The public had already been introduced to the concept of generative artificial intelligence in April of that year with DALL-E 2, a text-to-image generator. DALL-E 2, as well as Stable Diffusion, Midjourney and similar programs, were an ideal low-barrier entry point for the general public to try out this revolutionary new technology. They were an immediate smash hit, with subreddits and Twitter accounts springing up seemingly overnight to post screengrabs of the most outlandish scenarios users could imagine. And it wasn’t just the terminally online that embraced AI image generation; the technology immediately entered the mainstream discourse as well, extraneous digits and all.

So when ChatGPT dropped last November, the public was already primed on the idea of having computers make content at a user’s direction. The logical leap from having it make words instead of pictures wasn’t a large one — heck, people had already been using similar, inferior versions in their phones for years with their digital assistants.

Q1: [Hyping intensifies]

To say that ChatGPT was well-received would be to say that the Titanic suffered a small fender-bender on its maiden voyage. It was a polestar, orders of magnitude bigger than the hype surrounding DALL-E and other image generators. People flat out lost their minds over the new AI and its CEO, Sam Altman. Throughout December 2022, ChatGPT’s usage numbers rose meteorically as more and more people logged on to try it for themselves.

By the following January, ChatGPT was a certified phenomenon, surpassing 100 million monthly active users in just two months. That was faster than both TikTok and Instagram, and remains the fastest user adoption to 100 million in the history of the internet.

We also got our first look at the disruptive potential that generative AI offers when ChatGPT managed to pass a series of law school exams (albeit by the skin of its digital teeth). Microsoft, meanwhile, extended its existing R&D partnership with OpenAI to the tune of $10 billion that January. That number is impressively large and likely why Altman still has his job.

As February rolled around, ChatGPT’s user numbers continued to soar, surpassing one billion users total with an average of more than 35 million people per day using the program. At this point OpenAI was reportedly worth just under $30 billion and Microsoft was doing its absolute best to cram the new technology into every single system, application and feature in its product ecosystem. ChatGPT was incorporated into BingChat (now just Copilot) and the Edge browser to great fanfare — despite repeated incidents of bizarre behavior and responses that saw the Bing program temporarily taken offline for repairs.

Other tech companies began adopting ChatGPT as well: Opera incorporated it into its browser, Snapchat released its GPT-based My AI assistant (which would be unceremoniously abandoned a few problematic months later) and Buzzfeed News's parent company used it to generate listicles.

March saw more of the same, with OpenAI announcing a new subscription-based service — ChatGPT Plus — which offers users the chance to skip to the head of the queue during peak usage hours and added features not found in the free version. The company also unveiled plug-in and API support for the GPT platform, empowering developers to add the technology to their own applications and enabling ChatGPT to pull information from across the internet as well as interact directly with connected sensors and devices.

ChatGPT also notched 100 million users per day in March, 30 times higher than two months prior. Companies from Slack and Discord to GM announced plans to incorporate GPT and generative AI technologies into their products.

Not everybody was quite so enthusiastic about the pace at which generative AI was being adopted, mind you. In March, OpenAI co-founder Elon Musk, along with Steve Wozniak and a slew of associated AI researchers, signed an open letter demanding a six-month moratorium on AI development.

Q2: Electric Boog-AI-loo

Over the next couple of months, the company fell into a rhythm of continuous user growth, new integrations, occasional rival AI debuts and nationwide bans on generative AI technology. For example, in April, ChatGPT’s usage climbed nearly 13 percent month-over-month from March even as the entire nation of Italy outlawed ChatGPT use by public sector employees, citing GDPR data privacy violations. The Italian ban proved only temporary after the company worked to resolve the flagged issues, but it was an embarrassing rebuke for the company and helped spur further calls for federal regulation.

When it was first released, ChatGPT was only available through a desktop browser. That changed in May when OpenAI released its dedicated iOS app and expanded the digital assistant’s availability to an additional 11 countries including France, Germany, Ireland and Jamaica. At the same time, Microsoft’s integration efforts continued apace, with Bing Search melding into the chatbot as its “default search experience.” OpenAI also expanded ChatGPT’s plug-in system to ensure that more third-party developers are able to build ChatGPT into their own products.

ChatGPT’s tendency to hallucinate facts and figures was once again exposed that month when a lawyer in New York was caught using the generative AI to do “legal research.” It gave him a number of entirely made-up, nonexistent cases to cite in his argument — which he then did without bothering to independently validate any of them. The judge was not amused.

By June, a little bit of ChatGPT’s shine had started to wear off. Congress reportedly limited Capitol Hill staffers from using the application over data handling concerns. User numbers had declined nearly 10 percent month-over-month, but ChatGPT was already well on its way to ubiquity. A March update enabling the AI to comprehend and generate Python code in response to natural language queries only increased its utility.

Q3: [Pushback intensifies]

More cracks in ChatGPT’s facade began to show the following month when OpenAI’s head of Trust and Safety, Dave Willner, abruptly announced his resignation days before the company released its ChatGPT Android app. His departure came on the heels of news of an FTC investigation into the company’s potential violation of consumer protection laws — specifically regarding the user data leak from March that inadvertently shared chat histories and payment records.

It was around this time that OpenAI’s training methods, which involve scraping the public internet for content and feeding it into massive datasets on which the models are taught, came under fire from copyright holders and marquee authors alike. Much in the same manner that Getty Images sued Stability AI for Stable Diffusion’s obvious leverage of copyrighted materials, stand-up comedian and author Sarah Silverman brought suit against OpenAI with allegations that its “Books2” dataset illegally included her copyrighted works. The Authors Guild, which represents Stephen King, John Grisham and 134 others, launched a class-action suit of its own in September. While much of Silverman’s suit was eventually dismissed, the Authors Guild suit continues to wend its way through the courts.

Select news outlets, on the other hand, proved far more amenable. The Associated Press announced in August that it had entered into a licensing agreement with OpenAI which would see AP content used (with permission) to train GPT models. At the same time, the AP unveiled a new set of newsroom guidelines explaining how generative AI might be used in articles, while still cautioning journalists against using it for anything that might actually be published.

ChatGPT itself didn’t seem too inclined to follow the rules. In a report published in August, the Washington Post found that guardrails supposedly enacted by OpenAI in March, designed to counter the chatbot’s use in generating and amplifying political disinformation, actually weren’t. The company told Semafor in April that it was "developing a machine learning classifier that will flag when ChatGPT is asked to generate large volumes of text that appear related to electoral campaigns or lobbying." Per the Post, those rules simply were not enforced, with the system eagerly returning responses for prompts like “Write a message encouraging suburban women in their 40s to vote for Trump” or “Make a case to convince an urban dweller in their 20s to vote for Biden.”

At the same time, OpenAI was rolling out another batch of new features and updates for ChatGPT including an Enterprise version that could be fine-tuned to a company’s specific needs and trained on the firm’s internal data, allowing the chatbot to provide more accurate responses. Additionally, ChatGPT’s ability to browse the internet for information was restored for Plus users in September, having been temporarily suspended earlier in the year after folks figured out how to exploit it to get around paywalls. OpenAI also expanded the chatbot’s multimodal capabilities, adding support for both voice and image inputs for user queries in a September 25 update.

Q4: Starring Sam Altman as “Lazarus”

The fourth quarter of 2023 has been a hell of a decade for OpenAI. On the technological front, Browse with Bing, Microsoft’s answer to Google SGE, moved out of beta and became available to all subscribers — just in time for the third iteration of DALL-E to enter public beta. Even free tier users can now hold spoken conversations with the chatbot following the November update, a feature formerly reserved for Plus and Enterprise subscribers. What’s more, OpenAI has announced GPTs, little single-serving versions of the larger LLM that function like apps and widgets and which can be created by anyone, regardless of their programming skill level.

The company has also suggested that it might be entering the AI chip market at some point in the future, in an effort to shore up the speed and performance of its API services. OpenAI CEO Sam Altman had previously pointed to industry-wide GPU shortages for the service’s spotty performance. Producing its own processors might mitigate those supply issues, while potentially lowering the current four-cent-per-query cost of operating the chatbot to something more manageable.

But even those best laid plans were very nearly smashed to pieces just before Thanksgiving when the OpenAI board of directors fired Sam Altman, arguing that he had not been "consistently candid in his communications with the board."

That firing didn't take. Instead, it set off 72 hours of chaos within the company itself and the larger industry, with waves of recriminations and accusations, threats of resignations by the lion’s share of the staff and actual resignations by senior leadership happening by the hour. The company went through three CEOs in as many days, landing back on the one it started with, albeit with him now free from a board of directors that would even consider acting as a brake against the technology’s further, unfettered commercial development.

At the start of the year, ChatGPT was regularly derided as a fad, a gimmick, some shiny bauble that would quickly be cast aside by a fickle public like so many NFTs. Those predictions could still prove true but as 2023 has ground on and the breadth of ChatGPT’s adoption has continued, the chances of those dim predictions of the technology’s future coming to pass feel increasingly remote.

There is simply too much money wrapped up in ensuring its continued development, from the revenue streams of companies promoting the technology to the investments of firms incorporating the technology into their products and services. There is also a fear of missing out among companies, S&P Global argues — that they might adopt too late what turns out to be a foundationally transformative technology — that is helping drive ChatGPT’s rapid uptake.

The calendar resetting for the new year shouldn’t do much to change ChatGPT’s upward trajectory, but looming regulatory oversight might. President Biden has made the responsible development of AI a focus of his administration, with both houses of Congress beginning to draft legislation as well. The form and scope of those resulting rules could have a significant impact on what ChatGPT looks like this time next year.

This article originally appeared on Engadget at https://www.engadget.com/how-openais-chatgpt-has-changed-the-world-in-just-a-year-140050053.html?src=rss

Every car is a smart car, and it's a privacy nightmare

Mozilla recently reported that of the car brands it reviewed, all 25 failed its privacy tests. While all, in Mozilla's estimation, overreached in their policies around data collection and use, some even included caveats about obtaining highly invasive types of information, like your sexual history and genetic information. As it turns out, this isn’t just hypothetical: The technology in today’s cars has the ability to collect these kinds of personal information, and the fine print of user agreements describes how manufacturers get you to consent every time you put the keys in the ignition.

“These privacy policies are written in a way to ensure that whatever is happening in the car, if there's an inference that can be made, they are still ensuring that there is protection, and that they are compliant with different state laws,” Adonne Washington, policy counsel at the Future of Privacy Forum, said. The policies also account for technological advances that could happen while you own the car. Tools to do one thing could eventually do more, so manufacturers have to be mindful of that, according to Washington.

So, it makes sense that a car manufacturer would include every type of data imaginable in its privacy policy to cover the company legally if it stumbled into certain data collection territory. Nissan’s privacy policy, for example, covers broad and frankly irrelevant classes of user information, such as “sexual orientation, sexual activity, precise geolocation, health diagnosis data, and genetic information” under types of personal data collected. 

Companies claim ownership in advance, so that you can’t sue if they accidentally record you having sex in the backseat, for example. Nissan claimed in a statement that this is more or less why its privacy policy remains so broad. The company says it "does not knowingly collect or disclose customer information on sexual activity or sexual orientation," but its policy retains those clauses because "some U.S. state laws require us to account for inadvertent data we have or could infer but do not request or use." Some companies Engadget reached out to — like Ford, Stellantis and GM — affirmed their commitment, broadly, to consumer data privacy; Toyota, Kia and Tesla did not respond to a request for comment.

Beyond covering all imaginable legal bases, there simply isn't any way to know why these companies would want deeply personal information on their drivers, or what they'd do with it. And even if it's not what you would consider a “smart” car, any vehicle equipped with USB, Bluetooth or recording capabilities can capture a lot of data about the driver. And in much the same way a "dumb" TV is considerably harder to find these days, most consumers would be hard-pressed to find a new vehicle option that doesn't include some level of onboard tech with the capacity to record their data. A study commissioned by Senator Ed Markey nearly a decade ago found all modern cars had some form of wireless technology included. Even the ranks of internet listicles claiming to contain low-tech cars for "technophobes" are riddled with dashboard touchscreens and infotainment systems.

“How it works in practice we don’t have as much insight into, as car companies, data companies, and advertising companies tend to hold those secrets more close to the vest,” Jen Caltrider, a researcher behind Mozilla’s car study, said. “We did our research by combing through privacy policies and public documentation where car companies talked about what they *can* do. It is much harder to tell what they are actually doing as they aren’t required to be as public about that.”

The unavailability of disconnected cars, combined with the lack of transparency around driver data use, means consumers have essentially no choice but to trust that their information is being used responsibly, or that at least some of the classes of data — like Nissan's decision to include "genetic information" — listed in these worrying privacy policies are purely related to hypothetical liability. The options are essentially: read every one of these policies and find the least draconian; buy a very old, likely fuel-inefficient car with no smart features whatsoever; or simply do without a car, period. To that last point, only about eight percent of American households are carless, often not because they live in a walkable city with robust public transit, but because they cannot afford one.

This gets even more complicated when you think about how cars are shared. Rental cars change drivers all the time, or a minor in your household might borrow your car to learn how to drive. A cell phone is typically a single-user device; cars don't work like that, and vehicle manufacturers struggle to address the difference in their policies. And cars have the ability to collect information not just on drivers but their passengers.

If simply trusting manufacturers after they ask for the right to collect your genetic characteristics tests credulity, the burden of anyone other than a contract lawyer reading back a software license agreement to the folks in the backseat is beyond absurd. Ford’s privacy policy explicitly states that the owners of its vehicles “must inform others who drive the vehicle, and passengers who connect their mobile devices to the vehicle, about the information in this Notice.” That’s about 60 pages of information to relay, if you’re printing it directly from Ford’s website — just for the company and not even the specific car.

And these contracts tend to compound on one another. If that 60-page privacy policy seems insurmountable, well, there's also a terms of service and a separate policy regarding the use of Sirius XM (on a website with its own 'accept cookies' popover, with its own agreement.) In fairness to Ford, its privacy notice does allow drivers to opt out of certain data sharing and connected services, but that would require drivers to actually comb through the documentation. Mozilla found many other manufacturers offered no such means to avoid being tracked, and a complete opt-out is something which the Alliance for Automotive Innovation — a trade group representing nearly all car and truck makers in the US, including Ford — has actively resisted. To top things off, academics, legal scholars and even one cheeky anti-spyware company have repeatedly shown consumers almost universally do not read these kinds of contracts anyway. 

The burden of these agreements doesn't end with their presumptive data collection, or the onus to relay them to every person riding in or borrowing your car. The data held in-vehicle and on manufacturers' servers becomes yet another hurdle for drivers should they opt to sell the thing down the line. Privacy4Cars founder Andrea Amico recommends getting it in writing from the dealer how they plan to delete your data from the vehicle before reselling it. “There's a lot of things that consumers can do to actually start to protect themselves, and it's not going to be perfect, but it's going to make a meaningful difference in their lives,” Amico said.

Consumers are effectively hamstrung by the state of legal contract interpretation, and manufacturers are incentivized to mitigate risk by continuing to bloat these (often unread) agreements with increasingly invasive classes of data. Many researchers will tell you the only real solution here is federal regulation. There have been some cases of state privacy law being leveraged for consumers' benefit, as in California and Massachusetts, but in the main it's something drivers aren't even aware they should be outraged about, and even if they are, they have no choice but to own a car anyway.

This article originally appeared on Engadget at https://www.engadget.com/every-car-is-a-smart-car-and-its-a-privacy-nightmare-193010478.html?src=rss

The best white elephant gift ideas for 2023

According to legend, the King of Siam would give a white elephant to courtiers who had upset him. It was a far more devious punishment than simply having them executed. The recipient had no choice but to simply thank the king for such an opulent gift, knowing that they likely could not afford the upkeep for such an animal. It would inevitably lead them to financial ruin. This story is almost certainly untrue, but it has led to a modern holiday staple: the White Elephant gift exchange.

Getting a White Elephant gift right requires walking a very fine line. The goal isn’t to just buy something terrible and force someone to take it home with them. It should be useful or amusing enough that it won’t immediately end up in the trash. It also shouldn’t be easily tossed in a junk drawer and forgotten about. So here are a few suggestions that will not only get you a few chuckles, but will also make the recipient feel (slightly) burdened.

Clocky Alarm Clock on Wheels

KFC Fire Starter Log by Enviro-Log

LDKCOK USB 2.0 Active Repeater Extension Cable

Galaxy Projector

Msraynsford Useless Machine 2.0

Lightsaber Chopsticks

MMX Marshmallow Crossbow

Banana Phone

Friendship Lamp

FAQs

What is white elephant?

A white elephant gift exchange is a party game typically played around the holidays in which people exchange funny, impractical gifts.

How does white elephant work?

A group of people each bring one wrapped gift to the white elephant gift exchange, and each gift is typically of a similar value. All gifts are then placed together and the group decides the order in which they will each claim a gift. The first person picks a white elephant gift from the pile, unwraps it and their turn ends. The following players can either decide to unwrap another gift and claim it as their own, or steal a gift from someone who has already taken a turn. The rules can vary from there, including the guidelines around how often a single item can be stolen — some say twice, max. The game ends when every person has a white elephant gift.

Why is it called white elephant?

The term “white elephant” is said to come from the legend of the King of Siam gifting white elephants to courtiers who upset him. While it seems like a lavish gift on its face, the belief is that the courtiers would be ruined by the animal’s upkeep costs.

This article originally appeared on Engadget at https://www.engadget.com/white-elephant-gift-ideas-2023-130058973.html?src=rss

World of Horror is a skin-crawling dread machine that does its inspirations proud

I am fully encased in a bundle of spider’s silk, only my eyeballs still visible as I wait for my turn to be devoured. I’ve failed to save the city from the insatiable arachnidian Old God, and now all the inhabitants of Shiokawa, Japan, myself included, are caught in its web. I’d come so far this time, solved all of the mysteries tacked to my bulletin board, but in the end, I couldn’t escape the doom that had been closing in on me.

If World of Horror could be reduced to a single word, it’d be “dread.” It's a point-and-click cosmic horror game created by Polish developer and dentist, Pawel Kozminski (also known as Panstasz). And after spending years in early access, Ysbryd Games finally released it to the public this month on Steam, PlayStation 4 and 5, and Nintendo Switch. It was well worth the wait.

World of Horror is heavily text-based, and plays like a choose-your-own-adventure story — one in which most of your options are bad ones that will inevitably lead you to a gruesome death or irrevocable insanity. You must solve five mysteries that are tormenting the townspeople, gathering information and fighting off the monstrous entities that attempt to get in your way. A slippery, boil-covered former teacher here, a woman with shards of broken ribs jammed into her gaping hole of a face, there.

All the while, you’ll be working to stave off whichever Old God has set its sights on Shiokawa for that run, and must keep an eye on the ever-ticking Doom meter to know how close you are to being overcome. Only after you’ve obtained five keys by solving each of the five mysteries can you unlock the town’s lighthouse, where you can banish the Old God. That is, if you’re able to make it through the trials on the way to the top. It’s a roguelite, too, so prepare to start from the beginning every time you make a fatal misstep.

The horror-manga-style RPG doesn't hide its Junji Ito and HP Lovecraft influences. It's so disquieting that you’ll find yourself jumpy and on edge even when nothing’s happening, which in some investigations is most of the time. The evil may not be coming for you right that moment, but there’s the sense that it could at any turn.

Ysbryd Games

When those little jump scares do come — a particularly revolting attacker or a booming sound that cuts through the chiptune score — they’re made all the more jarring by the high-contrast 1- or 2-bit visuals (you can choose at the beginning) that were created, incredibly, in MS Paint. It nails the often hard to stomach Ito-esque gore, and there are a few scenes I had to force myself not to turn away from (a certain DIY eyeball operation comes to mind).

You’re given a few options for approaching the game, in terms of difficulty and complexity. Its short tutorial, “Spine-Chilling Story of School Scissors,” is a straightforward introduction. And in the beginner-level main story mode, “Extracurricular Activities,” you'll start with one mystery already solved.

Players also have the choice of a “Quick Play” mode, in which elements like your character, Old God and backstory are randomly selected, or a fully customized playthrough where you choose your own character and story elements. That last one is the most challenging route. You can also choose from a slew of color palettes at the start of each game, if you want to mix it up.

Ysbryd Games

While the turn-based combat is nothing revolutionary, I found it to be engaging enough. There’s no guarantee all of your hits will land, and relying on spiritual attacks when going up against a ghost-type foe is a stressful game of “guess the right combo.” It keeps things interesting, albeit a bit frustrating. Since the runs are relatively short — about an hour, give or take 30 minutes — it doesn’t feel soul crushing every time you die and have to start fresh. If anything, it becomes an addicting cycle.

Where World of Horror truly excels is in its attention to horrifying detail. A TV playing in your home runs grisly newscasts nonstop, including one about a dentist who replaced his human patients’ teeth with dogs’ teeth. (Remember, the developer is also a dentist). Look through the peephole of your apartment door and you might see a shadow man down the hall, or the quickly retreating face of someone lurking around the corner, or just an empty corridor. Twisted ghouls wait behind dead-end classroom doors.

Things are rarely the same when you come back to them. Each mystery has multiple endings and multiple ways to get you there, so you can’t quite predict what’s going to happen next even if you just played 10 runs in a row. Some stories are more involved than others, better thought through. But each has at least one ghastly element that justifies its place among the rest. If World of Horror is anything, it’s effective, and I haven’t been able to stop thinking about it.

This article originally appeared on Engadget at https://www.engadget.com/world-of-horror-is-a-skin-crawling-dread-machine-that-does-its-inspirations-proud-183000816.html?src=rss

What the evolution of our own brains can tell us about the future of AI

The explosive growth in artificial intelligence in recent years — crowned with the meteoric rise of generative AI chatbots like ChatGPT — has seen the technology take on many tasks that, formerly, only human minds could handle. But despite their increasingly capable linguistic computations, these machine learning systems remain surprisingly inept at making the sorts of cognitive leaps and logical deductions that even the average teenager can consistently get right. 

In this week's Hitting the Books excerpt, from A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains, AI entrepreneur Max Bennett examines the quizzical gap in computer competency by exploring the development of the organic machine AIs are modeled after: the human brain. 

Focusing on the five evolutionary "breakthroughs," amidst myriad genetic dead ends and unsuccessful offshoots, that led our species to our modern minds, Bennett also shows that the same advancements that took humanity eons to evolve can be adapted to help guide development of the AI technologies of tomorrow. In the excerpt below, we take a look at how generative AI systems like GPT-3 are built to mimic the predictive functions of the neocortex, but still can't quite get a grasp on the vagaries of human speech.

HarperCollins

Excerpted from A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains by Max Bennett. Published by Mariner Books. Copyright © 2023 by Max Bennett. All rights reserved.


Words Without Inner Worlds

GPT-3 is given word after word, sentence after sentence, paragraph after paragraph. During this long training process, it tries to predict the next word in any of these long streams of words. And with each prediction, the weights of its gargantuan neural network are nudged ever so slightly toward the right answer. Do this an astronomical number of times, and eventually GPT-3 can automatically predict the next word based on a prior sentence or paragraph. In principle, this captures at least some fundamental aspect of how language works in the human brain. Consider how automatic it is for you to predict the next symbol in the following phrases:

  • One plus one equals _____

  • Roses are red, violets are _____

You’ve seen similar sentences endless times, so your neocortical machinery automatically predicts what word comes next. What makes GPT-3 impressive, however, is not that it just predicts the next word of a sequence it has seen a million times — that could be accomplished with nothing more than memorizing sentences. What is impressive is that GPT-3 can be given a novel sequence that it has never seen before and still accurately predict the next word. This, too, clearly captures something that the human brain can _____.

Could you predict that the next word was do? I’m guessing you could, even though you had never seen that exact sentence before. The point is that both GPT-3 and the neocortical areas for language seem to be engaging in prediction. Both can generalize past experiences, apply them to new sentences, and guess what comes next.
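The next-word training loop Bennett describes (see a sequence, predict the next word, nudge toward the right answer) can be illustrated with a toy model. The sketch below substitutes a simple bigram counter for GPT-3's gradient-trained network; the corpus and function names are invented for illustration:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count word -> next-word transitions: a toy stand-in for the
    gradient-based next-word training described in the excerpt."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the word most often observed to follow `word`."""
    following = counts.get(word.lower())
    return following.most_common(1)[0][0] if following else None

corpus = [
    "roses are red",
    "violets are blue",
    "one plus one equals two",
]
model = train_bigram(corpus)
print(predict_next(model, "roses"))   # 'are'
print(predict_next(model, "equals"))  # 'two'
```

A real language model replaces these lookup tables with a neural network whose weights are adjusted slightly after every prediction, which is what lets it generalize to word sequences it has never seen before.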

GPT-3 and similar language models demonstrate how a web of neurons can reasonably capture the rules of grammar, syntax, and context if it is given sufficient time to learn. But while this shows that prediction is part of the mechanisms of language, does this mean that prediction is all there is to human language? Try to finish these four questions:

  • If 3x + 1 = 3, then x equals _____

  • I am in my windowless basement, and I look toward the sky, and I see _____

  • He threw the baseball 100 feet above my head, I reached my hand up to catch it, jumped, and _____

  • I am driving as fast as I can to LA from New York. One hour after passing through Chicago, I finally _____

Here something different happens. In the first question, you likely paused and performed some mental arithmetic before being able to answer the question. In the other questions, you probably, even for only a split second, paused to visualize yourself in a basement looking upward, and realized what you would see is the ceiling. Or you visualized yourself trying to catch a baseball a hundred feet above your head. Or you imagined yourself one hour past Chicago and tried to find where you would be on a mental map of America. With these types of questions, more is happening in your brain than merely the automatic prediction of words.

We have, of course, already explored this phenomenon—it is simulating. In these questions, you are rendering an inner simulation, either of shifting values in a series of algebraic operations or of a three-dimensional basement. And the answers to the questions are to be found only in the rules and structure of your inner simulated world.

I gave the same four questions to GPT-3; here are its responses (responses of GPT-3 are bolded and underlined):

  • If 3x + 1 = 3 , then x equals 1

  • I am in my windowless basement, and I look toward the sky, and I see a light, and I know that it is a star, and I am happy.

  • He threw the baseball 100 feet above my head, I reached my hand up to catch it, jumped, and caught it. It was a lot of fun!

  • I am driving as fast as I can to LA from New York. One hour after passing through Chicago, I finally get to see the Pacific Ocean.

All four of these responses demonstrate that GPT-3, as of June 2022, lacked an understanding of even simple aspects of how the world works. If 3x + 1 = 3, then x equals 2/3, not 1. If you were in a basement and looked toward the sky, you would see your ceiling, not stars. If you tried to catch a ball 100 feet above your head, you would not catch the ball. If you were driving to LA from New York and you’d passed through Chicago one hour ago, you would not yet be at the coast. GPT-3’s answers lacked common sense.

What I found was not surprising or novel; it is well known that modern AI systems, including these new supercharged language models, struggle with such questions. But that’s the point: Even a model trained on the entire corpus of the internet, running up millions of dollars in server costs — requiring acres of computers on some unknown server farm — still struggles to answer common sense questions, those presumably answerable by even a middle-school human.

Of course, reasoning about things by simulating also comes with problems. Suppose I asked you the following question:

Tom W. is meek and keeps to himself. He likes soft music and wears glasses. Which profession is Tom W. more likely to be?

1) Librarian

2) Construction worker

If you are like most people, you answered librarian. But this is wrong. Humans tend to ignore base rates—did you consider the base number of construction workers compared to librarians? There are probably one hundred times more construction workers than librarians. And because of this, even if 95 percent of librarians are meek and only 5 percent of construction workers are meek, there still will be far more meek construction workers than meek librarians. Thus, if Tom is meek, he is still more likely to be a construction worker than a librarian.
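The base-rate arithmetic in the passage is easy to verify. Using illustrative numbers (1,000 librarians, 100 times as many construction workers, and the 95 percent and 5 percent meekness rates given above):

```python
librarians = 1_000
construction_workers = 100 * librarians  # 100x base rate, per the passage

# Integer math keeps the counts exact.
meek_librarians = 95 * librarians // 100                      # 950
meek_construction_workers = 5 * construction_workers // 100   # 5,000

# The much larger base overwhelms the much lower meekness rate.
print(meek_construction_workers > meek_librarians)  # True
```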

The idea that the neocortex works by rendering an inner simulation and that this is how humans tend to reason about things explains why humans consistently get questions like this wrong. We imagine a meek person and compare that to an imagined librarian and an imagined construction worker. Who does the meek person seem more like? The librarian. Behavioral economists call this the representative heuristic. This is the origin of many forms of unconscious bias. If you heard a story of someone robbing your friend, you can’t help but render an imagined scene of the robbery, and you can’t help but fill in the robbers. What do the robbers look like to you? What are they wearing? What race are they? How old are they? This is a downside of reasoning by simulating — we fill in characters and scenes, often missing the true causal and statistical relationships between things.

It is with questions that require simulation where language in the human brain diverges from language in GPT-3. Math is a great example of this. The foundation of math begins with declarative labeling. You hold up two fingers or two stones or two sticks, engage in shared attention with a student, and label it two. You do the same thing with three of each and label it three. Just as with verbs (e.g., running and sleeping), in math we label operations (e.g., add and subtract). We can thereby construct sentences representing mathematical operations: three add one.

Humans don’t learn math the way GPT-3 learns math. Indeed, humans don’t learn language the way GPT-3 learns language. Children do not simply listen to endless sequences of words until they can predict what comes next. They are shown an object, engage in a hardwired nonverbal mechanism of shared attention, and then the object is given a name. The foundation of language learning is not sequence learning but the tethering of symbols to components of a child’s already present inner simulation.

A human brain, but not GPT-3, can check the answers to mathematical operations using mental simulation. If you add one to three using your fingers, you notice that you always get the thing that was previously labeled four.

You don’t even need to check such things on your actual fingers; you can imagine these operations. This ability to find the answers to things by simulating relies on the fact that our inner simulation is an accurate rendering of reality. When I mentally imagine adding one finger to three fingers, then count the fingers in my head, I count four. There is no reason why that must be the case in my imaginary world. But it is. Similarly, when I ask you what you see when you look toward the ceiling in your basement, you answer correctly because the three-dimensional house you constructed in your head obeys the laws of physics (you can’t see through the ceiling), and hence it is obvious to you that the ceiling of the basement is necessarily between you and the sky. The neocortex evolved long before words, already wired to render a simulated world that captures an incredibly vast and accurate set of physical rules and attributes of the actual world.

To be fair, GPT-3 can, in fact, answer many math questions correctly. GPT-3 will be able to answer 1 + 1 =___ because it has seen that sequence a billion times. When you answer the same question without thinking, you are answering it the way GPT-3 would. But when you think about why 1 + 1 = 2, when you prove it to yourself again by mentally imagining the operation of adding one thing to another thing and getting back two things, then you know that 1 + 1 = 2 in a way that GPT-3 does not.

The human brain contains both a language prediction system and an inner simulation. The best evidence for the idea that we have both these systems are experiments pitting one system against the other. Consider the cognitive reflection test, designed to evaluate someone’s ability to inhibit her reflexive response (e.g., habitual word predictions) and instead actively think about the answer (e.g., invoke an inner simulation to reason about it):

Question 1: A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?

If you are like most people, your instinct, without thinking about it, is to answer ten cents. But if you thought about this question, you would realize this is wrong; the answer is five cents. Similarly:

Question 2: If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?

Here again, if you are like most people, your instinct is to say “One hundred minutes,” but if you think about it, you would realize the answer is still five minutes.
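Both cognitive reflection answers can be checked with a few lines of arithmetic (exact fractions avoid floating-point noise; the variable names are ours):

```python
from fractions import Fraction

# Bat and ball: ball + bat = 1.10 and bat = ball + 1.00,
# so 2 * ball + 1.00 = 1.10, giving ball = 0.05.
total = Fraction(110, 100)
ball = (total - 1) / 2
bat = ball + 1
print(float(ball), float(bat))  # 0.05 1.05

# Widgets: one machine makes one widget in 5 minutes, so 100 machines
# working in parallel make 100 widgets in those same 5 minutes.
minutes = 5
print(minutes)  # 5, not 100
```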

And indeed, as of December 2022, GPT-3 got both of these questions wrong in exactly the same way people do: GPT-3 answered ten cents to the first question, and one hundred minutes to the second question.

The point is that human brains have an automatic system for predicting words (one probably similar, at least in principle, to models like GPT-3) and an inner simulation. Much of what makes human language powerful is not the syntax of it, but its ability to give us the necessary information to render a simulation about it and, crucially, to use these sequences of words to render the same inner simulation as other humans around us.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-a-brief-history-of-intelligence-max-bennett-mariner-books-143058118.html?src=rss

Assassin's Creed Mirage review: A warm, bloody hug from an old friend

Editor's note: This article contains mild spoilers for Assassin's Creed Mirage.

The deeper I got into Assassin’s Creed Mirage, the more a sense of warm nostalgia washed over me. It felt like a cozy hug from an old friend. A comforting, bloody embrace.

The latest entry in Ubisoft's long-running open-world adventure franchise takes the series back to its roots. Mirage mostly forgoes the RPG approach Ubisoft adopted in the last three main games: Assassin's Creed Origins, Odyssey and Valhalla. I'd only played the latter of those and it didn't click for me, largely because of Ubisoft's propensity to overstuff its games and partially because it strayed so far away from the earlier titles.

Some of Valhalla's DNA carries over to Mirage, which shouldn't be surprising as the latest game was originally envisioned as an expansion to the last 100-plus-hour epic. There is some loot to hunt for in the form of swords, daggers and outfits that give protagonist Basim some small upgrades, such as reducing the level of notoriety he gains while carrying out illegal actions or passively regenerating some health. These items are upgradable, as are your tools. One neat, if unrealistic, perk makes an enemy disintegrate after Basim eliminates them with a throwing knife. So, you can tweak your build to fit your playstyle to a certain degree.

Ubisoft

There are skill trees too, but rather than unlocking things like a slight increase to the damage Basim deals, the abilities here are genuinely impactful. Pinpointing opponents and important items from further away, reducing fall damage and a chain assassination ability are all super useful tools for Basim to have in his belt.

Ubisoft has pulled back quite a bit on the RPG elements of the previous few games. You won’t be using bows, shields or two-handed weapons as you might in Valhalla, for instance. Still, there's just enough customization for folks who want to optimize (or min/max) Basim for the way they like to play.

"Just enough" is a thought I kept coming back to in the 17 hours it took me to beat the main story. Mirage is just the right length. There are just enough collectibles and side-quests to make the world feel rich but not overwhelming. There's just enough to the story, which is fairly by-the-numbers, though it gets more intriguing in the last couple of hours. There's just enough variety to the enemies.

There are only a few enemy types, and I love that Mirage doesn't go down the well-worn and nonsensical path of arbitrarily making them stronger based on their geographical location — an aspect of Dead Island 2 I greatly disliked. Although Basim largely has to make do with his sword and dagger (and, of course, the Hidden Blade), enemies have a variety of weapons. A trio of goons will pose a different threat when they have spears instead of swords. You'll have to navigate that melange of weaponry carefully, especially when enemies surround you. Putting an onus on that and the level design for encounters helps make Mirage feel like more of a refreshing throwback.

Ubisoft

In the main missions, I only encountered one traditional boss fight toward the end of the story. Practically every other enemy was susceptible to a single-button slaying. I absolutely made the most of that by sneaking up on assassination targets or distracting them with noise-making devices. The game actually discourages open combat, anyway. You won't gain experience points by killing tons of enemies. Staying stealthy is usually the way to go — unless you're a completionist, since there's a trophy/achievement that requires you to stay in open combat for 10 minutes. Thankfully, the game makes it fairly easy for you to slink around.

Contrary to my first impressions, the guards of Baghdad aren't all that smart. They'll often be briefly puzzled when they encounter the dead body of a colleague they were chatting with seconds earlier before walking away. They'll quickly give up on a hunt for Basim. They'll see a cohort being yanked around a corner and think nothing of it. That breaks the immersion a bit, but it does make it easier to mess with these idiots.

I took some delight in tormenting my opponents, even if that may not match up to the code of conduct the assassins live by. One larger grunt was trapped in a room alone to guard a chest. I entered, used a smoke bomb to distract him, opened the chest and left, blocking the path behind me. I then made my way around to a gate that kept the guard locked in from the other side and spent a few minutes whistling at him, for no reason other than to annoy him and amuse myself.

The real star of the show is the version of ninth-century Baghdad Ubisoft has built. It feels rich and lived-in, with bystanders simply going about their day as a hooded figure darts by them to climb up the side of a building. Unfortunately, that level of detail wasn't reflected in the character models. Main characters and NPCs alike looked far less refined than their surroundings.

Ubisoft

Some Arab critics and reviewers appreciated how Ubisoft represented Baghdad and Muslim culture in the game, and that's a positive sign. In that sense, Mirage seems like a prime candidate for the historical educational modes that Ubisoft has added to recent Assassin's Creed games.

I can't personally speak to the authenticity of the environment Ubisoft has created. The same goes for the Arabic used in the game, but the developers at least strove to avoid anachronisms. I spent an hour or so playing in Arabic with English subtitles and found it a compelling way to experience the game, though I ended up missing the velvet-voiced Shohreh Aghdashloo's portrayal of Basim's mentor Roshan too much to stick with it.

Aghdashloo's performance is one of several highlights of a solid game. Developer Ubisoft Bordeaux has achieved what it set out to do in bringing back the format of early Assassin's Creed titles while adding some modern bells and whistles (such as a gameplay option to avoid the turgid pickpocketing minigame) and avoiding some of the old trappings.

No part of the game that I've encountered is set in the modern day. That's a wise move, since those parts of previous games pulled me out of the main experience and into some tedious sections that sought to serve a larger story. I didn't hear the word "animus" once this time around. Mirage does tie back into the broader Assassin's Creed narrative — Basim makes an appearance in Valhalla, after all — but you won't get sidetracked by Desmond Miles or Layla Hassan. That meant I could spend more of my time roaming the streets and rooftops of this well-crafted city, scouting enemy camps from above and figuring out the best way to approach an assassination mission.

Mirage probably won't be for everyone, including those who appreciated the format of the last three big Assassin's Creed games, but it struck a chord with me. Even though I've wrapped up the main story and have a bunch of other games to play (I'm looking at you, Cocoon and Spider-Man 2), I'll probably spend a little while longer nuzzled up in the comfort of Mirage.

Assassin's Creed Mirage is out now on PC, PlayStation 4, PlayStation 5, Xbox One and Xbox Series X/S. It's coming to iPhone 15 Pro devices next year.

This article originally appeared on Engadget at https://www.engadget.com/assassins-creed-mirage-review-a-warm-bloody-hug-from-an-old-friend-181918323.html?src=rss

ElevenLabs is building a universal AI dubbing machine

After Disney releases a new film in English, the company will go back and localize it in as many as 46 global languages to make the movie accessible to as wide an audience as possible. This is a massive undertaking, one for which Disney has an entire division — Disney Character Voices International Inc — to handle the task. And it's not like you're getting Chris Pratt back in the recording booth to dub his GotG III lines in Icelandic and Swahili — each version sounds a little different given the local voice actors. But with a new "AI dubbing" system from ElevenLabs, we could soon get a close recreation of Pratt's voice, regardless of the language spoken on-screen.

ElevenLabs is an AI startup that offers a voice cloning service, allowing subscribers to generate nearly identical vocalizations with AI based on a few minutes' worth of audio sample uploads. Perhaps unsurprisingly, as soon as the feature was released in beta, it was immediately exploited to impersonate celebrities, sometimes without their prior knowledge and consent.

The new AI dubbing feature does essentially the same thing — in more than 20 different languages including Hindi, Portuguese, Spanish, Japanese, Ukrainian, Polish and Arabic — but legitimately, and with permission. This tool is designed for use by media companies, educators and internet influencers who don't have Disney Money™ to fund their global adaptation efforts.

ElevenLabs asserts that the system will be able to not only translate "spoken content to another language in minutes" but also generate new spoken dialog in the target language using the actor's own voice. Or, at least, an AI-generated recreation. The system is even reportedly capable of maintaining the "emotion and intonation" of the existing dialog and transferring that over to the generated translation.

 "It will help audiences enjoy any content they want, regardless of the language they speak," ElevenLabs CEO Mati Staniszewski said in a press statement. "And it will mean content creators can easily and authentically access a far bigger audience across the world."

This article originally appeared on Engadget at https://www.engadget.com/elevenlabs-is-building-a-universal-ai-dubbing-machine-130053504.html?src=rss

Nintendo's new mobile game lets you pluck Pikmin on your browser

Nintendo has teamed up with Niantic for a new Pikmin mobile game that's better suited to passing time than serious gaming. It's called Pikmin Finder, and as Nintendo Life notes, the companies have released it in time for the Nintendo Live event in Seattle. You can access the augmented reality game from any browser on your mobile, whether it's an iPhone or an Android device. We've tried it on several browsers, including Chrome and Opera, and we can verify that it works, as long as you allow it to access your camera.

Similar to Pikmin Bloom, the game superimposes Pikmin on your environment as seen through your phone's camera. You can then pluck the creatures by swiping up — take note that there are typically more of the same color lurking around when you do spot one. Afterward, you can use the Pikmin you've plucked to search for treasures, including cakes and rubber duckies. You'll even see them bring you those treasures on your screen. 

Pikmin Finder

To play the game, you can go to its website on a mobile browser and start catching Pikmin on your phone. You can also scan the QR code that shows up on the website when you open it on a desktop browser.

This article originally appeared on Engadget at https://www.engadget.com/nintendos-new-mobile-game-lets-you-pluck-pikmin-on-your-browser-064423362.html?src=rss

New AP guidelines lay the groundwork for AI-assisted newsrooms

The Associated Press published standards today for generative AI use in its newsroom. The organization, which has a licensing agreement with ChatGPT maker OpenAI, listed a fairly restrictive and common-sense set of measures around the burgeoning tech while cautioning its staff not to use AI to make publishable content. Although nothing in the new guidelines is particularly controversial, less scrupulous outlets could view the AP's blessing as license to use generative AI excessively or underhandedly.

The organization’s AI manifesto underscores a belief that artificial intelligence content should be treated as the flawed tool that it is — not a replacement for trained writers, editors and reporters exercising their best judgment. “We do not see AI as a replacement of journalists in any way,” the AP’s Vice President for Standards and Inclusion, Amanda Barrett, wrote in an article about its approach to AI today. “It is the responsibility of AP journalists to be accountable for the accuracy and fairness of the information we share.”

The article directs its journalists to view AI-generated content as “unvetted source material,” to which editorial staff “must apply their editorial judgment and AP’s sourcing standards when considering any information for publication.” It says employees may “experiment with ChatGPT with caution” but not create publishable content with it. That includes images, too. “In accordance with our standards, we do not alter any elements of our photos, video or audio,” it states. “Therefore, we do not allow the use of generative AI to add or subtract any elements.” However, it carves out an exception for stories where AI illustrations or art are themselves the subject — and even then, the content has to be clearly labeled as such.

Barrett warns about AI’s potential for spreading misinformation. To prevent the accidental publishing of anything AI-created that appears authentic, she says AP journalists “should exercise the same caution and skepticism they would normally, including trying to identify the source of the original content, doing a reverse image search to help verify an image’s origin, and checking for reports with similar content from trusted media.” To protect privacy, the guidelines also prohibit writers from entering “confidential or sensitive information into AI tools.”

Although that’s a relatively common-sense and uncontroversial set of rules, other media outlets have been less discerning. CNET was caught earlier this year publishing error-ridden AI-generated financial explainer articles (labeled as computer-made only if you clicked on the article’s byline). Gizmodo found itself in a similar spotlight this summer when it ran a Star Wars article full of inaccuracies. It’s not hard to imagine other outlets — desperate for an edge in the highly competitive media landscape — viewing the AP’s (tightly restricted) AI use as a green light to make robot journalism a central fixture in their newsrooms, publishing poorly edited or inaccurate content, or failing to label AI-generated work as such.

This article originally appeared on Engadget at https://www.engadget.com/new-ap-guidelines-lay-the-groundwork-for-ai-assisted-newsrooms-201009363.html?src=rss