Posts by Pranav Dixit

Apple has reportedly resumed talks with OpenAI to build a chatbot for the iPhone

Apple has resumed conversations with OpenAI, the maker of ChatGPT, to power some AI features coming to iOS 18, according to a new report in Bloomberg. Apple is also building its own large language models to power some iOS 18 features, but its talks with OpenAI are centered around a “chatbot/search component,” according to Bloomberg reporter Mark Gurman. 

Apple is also reportedly in talks with Google to license Gemini, Google’s own AI-powered chatbot, for iOS 18. Bloomberg reports that those talks are still on, and things could go either way because Apple hasn’t made a final decision on which company’s technology to use. It’s conceivable, Gurman says, that Apple could ultimately end up licensing AI tech from both companies or from neither.

So far, Apple has been notably quiet about its AI efforts even as the rest of Silicon Valley has descended into an AI arms race. But it has dropped enough hints to indicate that it’s cooking up something. When the company announced its earnings in February, CEO Tim Cook said that Apple is continuing to work on and invest in artificial intelligence and is “excited to share the details of our ongoing work in that space later this year.” Apple claimed that the M3 MacBook Air it launched last month was the “world’s best consumer laptop for AI,” and it will reportedly start releasing AI-centric laptops and desktops later this year. And earlier this week, Apple released a handful of open-source large language models that are designed to run locally on devices rather than in the cloud.

It’s still unclear what Apple’s AI features on the iPhone and other devices will look like. Generative AI is still notoriously unreliable and prone to making up answers. Recent AI-powered gadgets like the Humane Ai Pin have launched to disastrous reviews, while others like the Rabbit R1 have yet to prove their value.

We’ll find out more at WWDC on June 10.


The world's leading AI companies pledge to protect the safety of children online

Leading artificial intelligence companies including OpenAI, Microsoft, Google, Meta and others have jointly pledged to prevent their AI tools from being used to exploit children and generate child sexual abuse material (CSAM). The initiative was led by child-safety group Thorn and All Tech Is Human, a non-profit focused on responsible tech.

The pledges from AI companies, Thorn said, “set a groundbreaking precedent for the industry and represent a significant leap in efforts to defend children from sexual abuse as a feature of generative AI unfolds.” The goal of the initiative is to prevent the creation of sexually explicit material involving children and take it off social media platforms and search engines. More than 104 million files of suspected child sexual abuse material were reported in the US in 2023 alone, Thorn says. In the absence of collective action, generative AI is poised to make this problem worse and overwhelm law enforcement agencies that are already struggling to identify genuine victims.

On Tuesday, Thorn and All Tech Is Human released a new paper titled “Safety by Design for Generative AI: Preventing Child Sexual Abuse” that outlines strategies and lays out recommendations for companies that build AI tools, search engines, social media platforms, hosting companies and developers to take steps to prevent generative AI from being used to harm children.

One of the recommendations, for instance, asks companies to choose the data sets used to train AI models carefully and to avoid ones containing not only instances of CSAM but also adult sexual content altogether, because of generative AI’s propensity to combine the two concepts. Thorn is also asking social media platforms and search engines to remove links to websites and apps that let people “nudify” images of children, thus creating new AI-generated child sexual abuse material online. A flood of AI-generated CSAM, according to the paper, will make identifying genuine victims of child sexual abuse more difficult by worsening the “haystack problem” — a reference to the amount of content that law enforcement agencies must currently sift through.

“This project was intended to make abundantly clear that you don’t need to throw up your hands,” Thorn’s vice president of data science Rebecca Portnoff told the Wall Street Journal. “We want to be able to change the course of this technology to where the existing harms of this technology get cut off at the knees.”

Some companies, Portnoff said, had already agreed to separate images, video and audio that involved children from data sets containing adult content to prevent their models from combining the two. Others also add watermarks to identify AI-generated content, but the method isn’t foolproof — watermarks and metadata can be easily removed.


Adobe Photoshop's latest beta makes AI-generated images from simple text prompts

Nearly a year after adding generative AI-powered editing capabilities to Photoshop, Adobe is souping up its flagship product with even more AI. On Tuesday, the company announced that Photoshop is getting the ability to generate images from simple text prompts directly within the app. There are also new features that let the AI draw inspiration from reference images to create new ones, and generate backgrounds more easily. Adobe thinks the tools will make Photoshop easier to use for both professionals and casual enthusiasts who may have found the app’s learning curve steep.

“A big, blank canvas can sometimes be the biggest barrier,” Erin Boyce, Photoshop’s senior marketing director, told Engadget in an interview. “This really speeds up time to creation. The idea of getting something from your mind to the canvas has never been easier.” The new feature is simply called “Generate Image” and will be available as an option in Photoshop right alongside the traditional option that lets you import images into the app.

An existing AI-powered feature called Generative Fill that previously let you add, extend or remove specific parts of an image has been upgraded too. It now lets users add AI-generated elements to an existing image that blend in seamlessly with the original. In a demo shown to Engadget, an Adobe executive was able to circle a picture of an empty salad dish, for instance, and ask Photoshop to fill it with a picture of AI-generated tomatoes. She was also able to generate variations of the tomatoes and choose one of them to be part of the final image. In another example, the executive replaced an acoustic guitar held by an AI-generated bear with multiple versions of electric guitars just by using text prompts, and without resorting to Photoshop’s complex tools or brushes.


These updates are powered by Firefly Image 3, the latest version of Adobe’s family of generative AI models that the company also unveiled today. Adobe said Firefly 3 produces images of a higher quality than previous models, provides more variations, and understands your prompts better. The company claims that more than 7 billion images have been generated so far using Firefly.

Adobe is far from the only company stuffing generative AI features into its products. Over the last year, companies big and small have revamped their products and services with AI. Both Google and Microsoft, for instance, have upgraded their cash cows, Search and Office respectively, with AI features. More recently, Meta has started putting its own AI chatbot into Facebook, Messenger, WhatsApp, and Instagram. But while it’s still unclear how these bets will pan out, Adobe’s updates to Photoshop seem more materially useful for creators. The company said Photoshop’s new AI features had driven a 30 percent increase in Photoshop subscriptions.

Meanwhile, generative AI has been in the crosshairs of artists, authors and other creative professionals, who say that the foundational models powering the tech were trained on copyrighted media without consent or compensation. Generative AI companies are currently battling lawsuits from dozens of artists and authors. Adobe says that Firefly was trained on licensed media from Adobe Stock, since it was designed to create content for commercial use, unlike competitors like Midjourney, whose models are trained in part on images scraped from the internet. But a recent report from Bloomberg showed that Firefly, too, was trained in part on AI-generated images from those same rivals, including Midjourney (an Adobe spokesperson told Bloomberg that less than 5 percent of the images in its training data came from other AI rivals).

To address concerns about the use of generative AI to create disinformation, Adobe said that all images created in Photoshop using generative AI tools will automatically include tamper-proof “Content Credentials” in the file’s metadata, which act like digital “nutrition labels” indicating that an image was generated with AI. It’s still not a perfect defense against misuse, however: there are several ways to sidestep metadata and watermarks.

The new features will be available in beta in Photoshop starting today and will roll out to everyone later this year. Meanwhile, you can play with Firefly 3 on Adobe’s website for free. 


Mozilla urges WhatsApp to combat misinformation ahead of global elections

In 2024, four billion people — about half the world’s population — in 64 countries, including large democracies like the US and India, will head to the polls. Social media companies like Meta, YouTube and TikTok have promised to protect the integrity of those elections, at least as far as the discourse and factual claims made on their platforms are concerned. Missing from the conversation, however, is the closed messaging app WhatsApp, which now rivals public social media platforms in both scope and reach. That absence has researchers from the non-profit Mozilla worried.

“Almost 90% of the safety interventions pledged by Meta ahead of these elections are focused on Facebook and Instagram,” Odanga Madung, a senior researcher at Mozilla focused on elections and platform integrity, told Engadget. “Why has Meta not publicly committed to a public road map of exactly how it’s going to protect elections within [WhatsApp]?”

Over the last ten years, WhatsApp, which Meta (then Facebook) bought for $19 billion in 2014, has become the default way for most of the world outside the US to communicate. In 2020, WhatsApp announced that it had more than two billion users around the world — a scale that dwarfs every other social or messaging app except Facebook itself.

Despite that scale, Meta’s election-related safety measures have mostly focused on Facebook. Mozilla’s analysis found that Facebook has made 95 policy announcements related to elections since 2016, the year the social network came under scrutiny for helping spread fake news and foster extreme political sentiments, while WhatsApp has made only 14. By comparison, Google and YouTube made 35 and 27 announcements respectively, while X and TikTok had 34 and 21. “From what we can tell from its public announcements, Meta’s election efforts seem to overwhelmingly prioritize Facebook,” wrote Madung in the report.

Mozilla is now calling on Meta to make major changes to how WhatsApp functions during polling days and in the months before and after a country’s elections. They include adding disinformation labels to viral content (“Highly forwarded: please verify” instead of the current “forwarded many times”), restricting the broadcast and Communities features that let people blast messages to hundreds of people at the same time, and nudging people to “pause and reflect” before they forward anything. More than 16,000 people have signed Mozilla’s pledge asking WhatsApp to slow the spread of political disinformation, a company spokesperson told Engadget.

WhatsApp first started adding friction to its service after dozens of people were killed in India, the company’s largest market, in a series of lynchings sparked by misinformation that went viral on the platform. The changes included limiting the number of people and groups that users could forward a piece of content to, and labeling forwarded messages as “forwarded” — the idea being that people might treat forwarded content with greater skepticism.

“Someone in Kenya or Nigeria or India using WhatsApp for the first time is not going to think about the meaning of the ‘forwarded’ label in the context of misinformation,” Madung said. “In fact, it might have the opposite effect — that something has been highly forwarded, so it must be credible. For many communities, social proof is an important factor in establishing the credibility of something.”

The idea of asking people to pause and reflect came from a feature Twitter once implemented, in which the app prompted people to actually read an article before retweeting it if they hadn’t opened it first. Twitter said that the prompt led to a 40 percent increase in people opening articles before retweeting them.

And asking WhatsApp to temporarily disable its broadcast and Communities features arose from concerns over their potential to blast messages, forwarded or otherwise, to thousands of people at once. “They’re trying to turn this into the next big social media platform,” Madung said. “But without the consideration for the rollout of safety features.”

“WhatsApp is one of the only technology companies to intentionally constrain sharing by introducing forwarding limits and labeling messages that have been forwarded many times,” a WhatsApp spokesperson told Engadget. “We’ve built new tools to empower users to seek accurate information while protecting them from unwanted contact, which we detail on our website.”

Mozilla’s demands came out of research around platforms and elections that the organization did in Brazil, India and Liberia. The first two are among WhatsApp’s largest markets, while most of Liberia’s population lives in rural areas with low internet penetration, making traditional online fact-checking nearly impossible. Across all three countries, Mozilla found political parties using WhatsApp’s broadcast feature heavily to “micro-target” voters with propaganda and, in some cases, hate speech.

WhatsApp’s encrypted nature also makes it impossible for researchers to monitor what is circulating within the platform’s ecosystem — a limitation that isn’t stopping some of them from trying. In 2022, two Rutgers professors, Kiran Garimella and Simon Chandrachud visited the offices of political parties in India and managed to convince officials to add them to 500 WhatsApp groups that they ran. The data that they gathered formed the basis of an award-winning paper they wrote called “What circulates on Partisan WhatsApp in India?” Although the findings were surprising — Garimella and Chandrachud found that misinformation and hate speech did not, in fact, make up a majority of the content of these groups — the authors clarified that their sample size was small, and they may have deliberately been excluded from groups where hate speech and political misinformation flowed freely.

“Encryption is a red herring to prevent accountability on the platform,” Madung said. “In an electoral context, the problems are not necessarily with the content purely. It’s about the fact that a small group of people can end up significantly influencing groups of people with ease. These apps have removed the friction of the transmission of information through society.”


Netflix is done telling us how many people use Netflix

Netflix will stop disclosing the number of people who sign up for its service, as well as the revenue it generates from each subscriber, starting next year, the company announced on Thursday. It will focus instead on highlighting revenue growth and the amount of time people spend on its platform.

“In our early days, when we had little revenue or profit, membership growth was a strong indicator of our future potential,” the company said in a letter to shareholders. “But now we’re generating very substantial profit and free cash flow.”

Netflix revealed that the service added 9.33 million subscribers over the last few months, bringing the total number of paying households worldwide to nearly 270 million. Despite its decision to stop reporting user numbers each quarter, Netflix said that the company will “announce major subscriber milestones as we cross them,” which means we’ll probably hear about it when it crosses 300 million.

Netflix estimates that more than half a billion people around the world watch TV shows and movies through its service, an audience it is now figuring out how to squeeze even more money out of through new pricing tiers, a crackdown on password sharing and ads. Over the last few years, it has also steadily added games like the Grand Theft Auto trilogy, Hades, Dead Cells, Braid and more to its catalog.

Subscriber metrics are an important signal to Wall Street because they show how quickly a company is growing. But Netflix’s move to stop reporting these is something that we’ve seen from other companies before. In February, Meta announced that it would no longer break out the number of daily and monthly Facebook users each quarter but only reveal how many people collectively used Facebook, WhatsApp, Messenger, and Instagram. In 2018, Apple, too, stopped reporting the number of iPhones, iPads, and Macs it sold each quarter, choosing to focus, instead, on how much money it made in each category.


Meta is stuffing its AI chatbot into your Instagram DMs

On Friday, people around the web noticed a new addition to their Instagram: Meta AI, the company’s general-purpose, AI-powered chatbot that can answer questions, write poetry and generate images with a simple text prompt. The move isn’t surprising. Meta revealed Meta AI in September 2023 and has spent the last few months adding the chatbot to products like Facebook Messenger and WhatsApp, so adding it to Instagram seems like a no-brainer. 

Just got access to Meta AI on one of my Instagram accounts. pic.twitter.com/VNyRa5wbG4

— Krish Murjani  (@appleforever18) April 11, 2024

“Our generative AI-powered experiences are under development in various phases, and we’re testing a range of them publicly in a limited capacity,” a Meta spokesperson told Engadget, which suggests that not everyone has the feature available yet. TechCrunch, which first noted the news, said that Meta AI was showing up in Instagram’s search bar. But for some of us at Engadget, the feature actually showed up in the search bar in Instagram’s Direct Messaging inbox. 

Tapping it let me start a conversation with Meta AI just as I would DM a friend on Instagram. I was able to ask the chatbot to give me definitions of words, suggest headlines for some stories I’m working on, and generate images of dogs on skateboards. I was also able to ask Meta AI to recommend Reels with cats in them, which it did easily.

But when my colleague Aaron Souppouris asked Meta AI in WhatsApp to recommend Reels, it showed him some Reels in that app too — suggesting that the bot in Instagram isn’t really doing anything specific to Instagram. Instead, Meta is simply shoehorning the same chatbot into every app it owns.

If you tap the hamburger menu within the bot, Meta AI will also show you a long list of possible actions you can ask it to take.


Why you would want a chatbot in Instagram to suggest tips for dealing with credit card debt, debate cardio versus weights, or suggest hacks for traveling on points, I do not know. But the point is that if you want to, you can.


Google's new AI video generator is more HR than Hollywood

For most of us, creating documents, spreadsheets and slide decks is an inescapable part of work life in 2024. What's not is creating videos. That’s something Google would like to change. On Tuesday, the company announced Google Vids, a video creation app for work that the company says can make everyone a “great storyteller” using the power of AI.

Vids uses Gemini, Google’s latest AI model, to quickly create videos for the workplace. Type in a prompt, feed in some documents, pictures, and videos, and sit back and relax as Vids generates an entire storyboard, script, music and voiceover. "As a storytelling medium, video has become ubiquitous for its immediacy and ability to ‘cut through the noise,’ but it can be daunting to know where to start," said Aparna Pappu, a Google vice president, in a blog post announcing the app. "Vids is your video, writing, production and editing assistant, all in one."

In a promotional video, Google uses Vids to create a video recapping moments from its Cloud Next conference in Las Vegas, an annual event during which it showed off the app. Based on a simple prompt telling it to create a recap video and attaching a document full of information about the event, Vids generates a narrative outline that can be edited. It then lets the user select a template for the video — you can choose between research proposal, new employee intro, team milestone, quarterly business update, and many more — and then crunches for a few moments before spitting out a first draft of a video, complete with a storyboard, stock media, music, transitions, and animation. It even generates a script and a voiceover, although you can also record your own. And you can manually choose photos from Google Drive or Google Photos to drop them seamlessly into the video.


It all looks pretty slick, but it’s important to remember what Vids is not: a replacement for AI-powered video generation tools like OpenAI’s upcoming Sora or Runway’s Gen-2 that create videos from scratch from text prompts. Instead, Google Vids uses AI to understand your prompt, generate a script and a voiceover, and stitch together stock images, videos, music, transitions and animations to create what is, effectively, a souped-up slide deck. And because Vids is part of Google Workspace, you can collaborate on it in real time, just as you can in Google Docs, Sheets and Slides.

Who asked for this? My guess is HR departments and chiefs of staff, who frequently need to create onboarding videos for new employees, announce company milestones, or put together training materials for teams. But if and when Google makes Vids available beyond Workspace, which is typically used by businesses, I can also see people using it outside of work, say, to quickly create videos for a birthday party or a vacation from their own photos and videos.

Vids will be available in June and is first coming to Workspace Labs, which means you’ll need to opt in to test it. It’s not clear yet when it will be available more broadly.


How WhatsApp became the world’s default communication app

In 2014, WIRED asked me to write a few lines about my most-used app as part of an internship application. I wrote about WhatsApp because it was a no-brainer. I was an international student from India, and it was my lifeline to my family and to my girlfriend, now my wife, who lived on the other side of the world. “This cross-platform messenger gets all the credit for my long-distance relationship of two years, which is still going strong,” I wrote in my application. “Skype is great, Google+ Hangouts are the best thing to have happened since Gmail but nothing says ‘I love you’ like a WhatsApp text message.”

A few months into that internship, Facebook announced it was buying WhatsApp for a staggering $19 billion. In WIRED’s newsroom, there were audible gasps at this seemingly minor player's price tag. American journalists weren’t exactly familiar with WhatsApp. Much of the country was still locked in a battle between green and blue bubbles, even as the rest of the world had switched to an app created by two former Yahoo! engineers in WIRED’s Mountain View backyard.

Text messaging was one of the few things you could do on WhatsApp in 2014. There were no emoji you could react with, no high-definition videos you could send, no GIFs or stickers, no read receipts until the end of that year and certainly no voice or video calling. And yet, more than 500 million people around the world were hooked, reveling in the freedom of using nascent cellular data to swap unlimited messages with friends and family instead of paying mobile carriers per text.

WhatsApp’s founders, Jan Koum and Brian Acton, launched the app in 2009 simply to display status messages next to people’s names in a phone’s contact book. But after Apple introduced push notifications on the iPhone later that year, it evolved into a full-blown messaging service. Now, 15 years later, WhatsApp has become a lot more — an integral part of the propaganda machinery of political parties in India and Brazil, a way for millions of businesses to reach customers, a way to send money to people and merchants, a distribution platform for publications, brands and influencers, a video conferencing system and a private social network for older adults. And it is still a great way for long-distance lovers to stay connected.

“WhatsApp is kind of like a media platform and kind of like a messaging platform, but it’s also not quite those things,” Surya Mattu, a researcher at Princeton who runs the university’s Digital Witness Lab, which studies how information flows through WhatsApp, told Engadget. “It has the scale of a social media platform, but it doesn’t have the traditional problems of one because there are no recommendations and no social graph.”

Indeed, WhatsApp’s scale dwarfs nearly every social network and messaging app out there. In 2020, WhatsApp announced it had more than two billion users around the world. It’s bigger than iMessage (1.3 billion users), TikTok (1 billion), Telegram (800 million), Snap (400 million) and Signal (40 million). It stands head and shoulders above fellow Meta platform Instagram, which captures around 1.4 billion users. The only thing bigger than WhatsApp is Facebook itself, with more than three billion users.

WhatsApp has become the world’s default communications platform. Ten years after it was acquired, its growth shows no sign of stopping. Even in the US, it is finally beginning to break through the green and blue bubble battles and is reportedly one of Meta’s fastest-growing services. As Meta CEO Mark Zuckerberg told the New York Times last year, WhatsApp is the “next chapter” for the company.

Will Cathcart, a longtime Meta executive who took over WhatsApp in 2019 after its original founders departed the company, credits WhatsApp’s early global growth to the fact that it was free (or nearly free — at one point, WhatsApp charged people $1 a year), ran on almost any phone, including the world’s millions of low-end Android devices, reliably delivered messages even in large swathes of the planet with suboptimal network conditions and, most importantly, was dead simple, free of the bells and whistles that bloat most other messaging apps. In 2013, a year before Facebook acquired it, WhatsApp added the ability to send short audio messages.

“That was really powerful,” Cathcart told Engadget. “People who don’t have high rates of literacy or someone new to the internet could spin up WhatsApp, use it for the first time and understand it.”

In 2016, WhatsApp added end-to-end encryption, something Cathcart said was a huge selling point. The feature made WhatsApp a black box, hiding the contents of messages from everyone — even WhatsApp — except the sender and the receiver. The same year, WhatsApp announced that one billion people were using the service every month.

That explosive growth came with a huge flip side: As hundreds of millions of people in heavily populated regions, like Brazil and India, came online for the first time, thanks to inexpensive smartphone and data prices, WhatsApp became a conduit for hoaxes and misinformation to flow freely. In India, currently WhatsApp’s largest market with more than 700 million users, the app overflowed with propaganda and disinformation against opposition political parties, cheerleading Narendra Modi, the country’s nationalist Prime Minister accused of destroying its secular fabric.

Then people started dying. In 2017 and 2018, frenzied mobs in remote parts of the country, whipped up by baseless rumors about child abductors forwarded through WhatsApp, lynched nearly two dozen people in 13 separate incidents. In response to the crisis, WhatsApp swung into action. Among other things, it made significant product changes, such as clearly labeling forwarded messages — the primary way misinformation spread across the service — as well as severely restricting the number of people and groups users could forward content to at the same time.

In Brazil, the app is widely seen as a key tool in the country’s former President Jair Bolsonaro’s 2018 win. Bolsonaro, a far-right strongman, was accused of getting his supporters to circumvent WhatsApp’s spam controls to run elaborate misinformation campaigns, blasting thousands of WhatsApp messages attacking his opponent, Fernando Haddad.

Since these incidents, WhatsApp has established partnerships with more than 50 fact-checking organizations globally (because WhatsApp is encrypted, fact-checkers depend on users reporting messages to their WhatsApp hotlines, which they then respond to with fact checks). It also made additional product changes, like letting users quickly Google a forwarded message to fact-check it within the app. “Over time, there might be more things we can do,” said Cathcart, including potentially using AI to help with WhatsApp’s fact-checking. “There’s a bunch of interesting things we could do there, I don’t think we’re done,” he said.

Recently, WhatsApp has rapidly added new features, such as the ability to share large files, messages that auto-destruct after they’re viewed, Instagram-like Stories (called Statuses) and larger group calls, among other things. But a brand new feature rolled out globally in fall 2023 called Channels points to WhatsApp’s ambitions to become more than a messaging app. WhatsApp described Channels, in a blog post announcing the launch, as “a one-way broadcast tool for admins to send text, photos, videos, stickers and polls.” They’re a bit like a Twitter feed from brands, publishers and people you choose to follow. It has a dedicated tab in WhatsApp, although interaction with content is limited to responding with emoji — no replies. There are currently thousands of Channels on WhatsApp and 250-plus have more than a million followers each, WhatsApp told Engadget. They include Puerto Rican rapper Bad Bunny (18.9 million followers), Narendra Modi (13.8 million followers), FC Barcelona (27.7 million followers) and the WWE (10.9 million followers). And even though it’s early days, Channels is fast becoming a way for publishers to distribute their content and build an audience.

“It took a year for us to grow to an audience of 35,000 on Telegram,” Rachel Banning-Lover, the head of social media and development at the Financial Times (155,000 followers) told Nieman Lab in November. “Comparatively, we [grew] a similar-sized following [on WhatsApp] in two weeks.”

WhatsApp’s success at consistently adding new functionality without succumbing to feature sprawl has allowed it to thrive, both with its core audience and also, more recently, with users in the US. According to data that analytics firm Data.ai shared with Engadget, WhatsApp had nearly 83 million users in the US in January 2024, compared to 80 million a year before. A couple of years ago, WhatsApp ran an advertising campaign in the US — its first in the country — where billboards and TV spots touted the app’s focus on privacy.

It’s a sentiment shared by Zuckerberg himself, who, in 2019, shared a “privacy-focused vision for social networking” on his Facebook page. “I believe the future of communication will increasingly shift to private, encrypted services where people can be confident that what they say to each other stays secure and their messages and content won’t stick around,” he wrote. “This is the future I hope we will help bring about.”

Meta has now begun using WhatsApp’s sheer scale to generate revenue, although it’s unclear so far how much money, if any, the app makes. “The business model we’re really excited about and one that we’ve been growing for a couple of years successfully is helping people talk to businesses on WhatsApp,” Cathcart said. “That’s a great experience.” Meta monetizes WhatsApp by charging large businesses to integrate the platform directly into existing systems they use to manage interactions with customers. And it integrates the whole system with Facebook, allowing businesses to place ads on Facebook that, when clicked, open directly to a WhatsApp chat with the business. These have become the fastest-growing ad format across Meta, the company told The New York Times.

A few years ago, a configuration change in Facebook’s internal network knocked multiple Facebook services, including WhatsApp, off the internet for more than six hours and ground the world to a halt.

“It’s like the equivalent of your phone and the phones of all of your loved ones being turned off without warning. [WhatsApp] essentially functions as an unregulated utility,” journalist Aura Bogado reportedly wrote on then-Twitter. In New Delhi and Brazil, gig workers were unable to reach customers and lost out on wages. In London, crypto trades stopped as traders were unable to communicate with clients. One firm claimed a drop of 15 percent. In Russia, oil markets were hit after traders were unable to get in touch with buyers in Europe and Asia placing orders.

Fifteen years after it was created, the messaging app runs the world.


To celebrate Engadget's 20th anniversary, we're taking a look back at the products and services that have changed the industry since March 2, 2004.


NVIDIA's GPUs powered the AI revolution. Its new Blackwell chips are up to 30 times faster

In less than two years, NVIDIA’s H100 chips, which are used by nearly every AI company in the world to train large language models that power services like ChatGPT, made it one of the world’s most valuable companies. On Monday, NVIDIA announced a next-generation platform called Blackwell, whose chips are between seven and 30 times faster than the H100 and use 25 times less power.

“Blackwell GPUs are the engine to power this new Industrial Revolution,” said NVIDIA CEO Jensen Huang at the company’s annual GTC event in San Jose attended by thousands of developers, and which some compared to a Taylor Swift concert. “Generative AI is the defining technology of our time. Working with the most dynamic companies in the world, we will realize the promise of AI for every industry,” Huang added in a press release.

NVIDIA’s Blackwell chips are named in honor of David Harold Blackwell, a mathematician who specialized in game theory and statistics. NVIDIA claims that Blackwell is the world’s most powerful chip. It offers a significant performance upgrade to AI companies, with speeds of 20 petaflops compared to the 4 petaflops the H100 provided. Much of this speed comes from the 208 billion transistors in Blackwell chips, compared to 80 billion in the H100. To achieve this, NVIDIA connected two large chip dies that can talk to each other at speeds of up to 10 terabytes per second.

In a sign of just how dependent our modern AI revolution is on NVIDIA’s chips, the company’s press release includes testimonials from seven CEOs who collectively lead companies worth trillions of dollars. They include OpenAI CEO Sam Altman, Microsoft CEO Satya Nadella, Alphabet CEO Sundar Pichai, Meta CEO Mark Zuckerberg, Google DeepMind CEO Demis Hassabis, Oracle chairman Larry Ellison, Dell CEO Michael Dell, and Tesla CEO Elon Musk.

“There is currently nothing better than NVIDIA hardware for AI,” Musk says in the statement. "Blackwell offers massive performance leaps, and will accelerate our ability to deliver leading-edge models. We’re excited to continue working with NVIDIA to enhance AI compute,” Altman says.

NVIDIA did not disclose how much Blackwell chips would cost. Its H100 chips currently run between $25,000 and $40,000 apiece, according to CNBC, and entire systems powered by these chips can cost as much as $200,000.

Despite their costs, NVIDIA’s chips are in high demand. Last year, delivery wait times were as high as 11 months. And having access to NVIDIA’s AI chips is increasingly seen as a status symbol for tech companies looking to attract AI talent. Earlier this year, Zuckerberg touted the company’s efforts to build “a massive amount of infrastructure” to power Meta’s AI efforts. “At the end of this year,” Zuckerberg wrote, “we will have ~350k Nvidia H100s — and overall ~600k H100s or H100 equivalents of compute if you include other GPUs.”


TikTok's CEO urges users to 'protect your constitutional rights' as US ban looms

Hours after the House passed a bill that could ban TikTok in the United States, Shou Chew, the company’s CEO, urged users to “protect your constitutional rights.” Chew also implied that TikTok would mount a legal challenge if the bill is passed into law.

“We will not stop fighting and advocating for you,” Chew said in a video posted to X. “We will continue to do all we can including exercising our legal rights to protect this amazing platform that we have built with you.” He also asked TikTok users in the US to share their stories with friends, families, and senators. “This legislation, if passed into law, will lead to a ban of TikTok in the United States,” Chew said. “Even the bill’s sponsors admit that’s their goal.”

Our CEO Shou Chew's response to the TikTok ban bill: pic.twitter.com/7AnDYOLD96

— TikTok Policy (@TikTokPolicy) March 13, 2024

The bill, known as the “Protecting Americans from Foreign Adversary Controlled Applications Act,” passed the House on Wednesday with bipartisan support just days after it was introduced. Should it become law, TikTok’s parent company ByteDance, a Chinese corporation, would be forced to sell TikTok to a US company within six months or see the app banned from US app stores and web hosting services. TikTok has challenged state-level bans in the past. Last year, it sued Montana over the state’s ban on the app, and a federal judge temporarily blocked that ban in November, before it went into effect.

Last week, TikTok sent push notifications to the app’s more than 170 million users in the US urging them to call their representatives about the potential ban. “Speak up now — before your government strips 170 million Americans of their Constitutional right to free expression,” the notification said. The wave of notifications reportedly led to House staffers being inundated with calls from high schoolers asking what a Congressman is. Lawmakers criticized the company for what they perceived as an attempt to “interfere” with the legislative process.

In his appeal, Chew said that banning TikTok would give “more power to a handful of other social media companies.” Former President Donald Trump, who once tried to force ByteDance to sell TikTok in the US, recently expressed a similar sentiment, claiming that banning TikTok would strengthen Meta, whose Reels feature competes directly with TikTok. Chew added that taking TikTok away would also hurt hundreds of thousands of American jobs, creators and small businesses.
