Meta sues FTC to block new restrictions on monetizing kids’ data

Meta has sued the Federal Trade Commission (FTC) in an attempt to stop regulators from reopening a landmark $5 billion privacy settlement from 2020 and to preserve its ability to monetize kids’ data across apps like Facebook, Instagram and WhatsApp. The suit comes after a federal judge ruled on Monday that the FTC could expand on 2020’s privacy settlement, paving the way for the agency to propose tough new rules on how the social media giant operates in the wake of the Cambridge Analytica scandal.

Today’s lawsuit demands an immediate halt to the FTC’s proceedings, calling them an “obvious power grab” and an “unconstitutional adjudication by fiat.” A Meta spokesperson even referred to the FTC as “prosecutor, judge, and jury in the same case”, as reported by Bloomberg. This is the second attempt by Facebook’s parent company to stop the sanctions in court.

The FTC, for its part, says that Meta has repeatedly violated the terms of 2020’s settlement regarding user privacy. The agency also says that the company has violated the Children’s Online Privacy Protection Act (COPPA) by monetizing the data of younger users. The FTC has already been given the go ahead by a judge to restrict this type of monetization, a decision Meta hopes to overturn.

The FTC also seeks to implement new restrictions that limit Meta’s use of facial recognition, as well as a complete moratorium on new products and services until a third party completes an audit to determine whether the company is complying with its privacy obligations.

“Facebook has repeatedly violated its privacy promises,” Samuel Levine, director of the FTC’s Bureau of Consumer Protection, said in a statement. “The company’s recklessness has put young users at risk, and Facebook needs to answer for its failures.” To that end, multiple states, along with the EU, have sued Meta to stop the monetization of children’s data.

The FTC has been a consistent thorn in Meta’s side, as the agency tried to stop the company’s acquisition of VR software developer Within on the grounds that the deal would deter "future innovation and competitive rivalry." The agency dropped this bid after a series of legal setbacks. It also opened up an investigation into the company’s VR arm, accusing Meta of anti-competitive behavior.

Corporations have been piling on the FTC lately in attempts to paint the agency as a prime example of government overreach. Beyond Meta, biotech giant Illumina is suing the FTC to halt a decision blocking its $7 billion acquisition of the cancer detection startup Grail.

This article originally appeared on Engadget at https://www.engadget.com/meta-sues-ftc-to-block-new-restrictions-on-monetizing-kids-data-185051764.html?src=rss

Can digital watermarking protect us from generative AI?

The Biden White House recently enacted its latest executive order designed to establish a guiding framework for generative artificial intelligence development — including content authentication and using digital watermarks to indicate when digital assets made by the Federal government are computer generated. Here’s how it and similar copy protection technologies might help content creators more securely authenticate their online works in an age of generative AI misinformation.

A quick history of watermarking

Analog watermarking techniques were first developed in Italy in 1282. Papermakers would implant thin wires into the paper mold, creating almost imperceptibly thinner areas of the sheet that became apparent when held up to a light. Not only were analog watermarks used to authenticate where and how a company’s products were produced, but the marks could also be leveraged to pass concealed, encoded messages. By the 18th century, the technology had spread to government use as a means to prevent currency counterfeiting. Color watermark techniques, which sandwich dyed materials between layers of paper, were developed around the same period.

Though the term “digital watermarking” wasn’t coined until 1992, the technology behind it was first patented by the Muzac Corporation in 1954. The system they built, and which they used until the company was sold in the 1980s, would identify music owned by Muzac using a “notch filter” to block the audio signal at 1 kHz in specific bursts, like Morse Code, to store identification information.

Advertisement monitoring and audience measurement firms like the Nielsen Company have long used watermarking techniques to tag the audio tracks of television shows to track and understand what American households are watching. These steganographic methods have even made their way into the modern Blu-ray standard (the Cinavia system), as well as government applications like authenticating driver’s licenses, national currencies and other sensitive documents. The Digimarc Corporation, for example, has developed a watermark for packaging that prints a product’s barcode nearly invisibly all over the box, allowing any digital scanner in line of sight to read it. It’s also been used in applications ranging from brand anti-counterfeiting to enhanced material recycling efficiencies.

The here and now

Modern digital watermarking operates on the same principles, imperceptibly embedding additional information into a piece of content (be it image, video or audio) using special encoding software. These watermarks are easily read by machines but are largely invisible to human users. The practice differs from existing cryptographic protections like product keys or software protection dongles in that watermarks don’t actively prevent the unauthorized alteration or duplication of a piece of content, but rather provide a record of where the content originated or who the copyright holder is.
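To make the general idea concrete, here's a toy sketch of the simplest form of this technique, least-significant-bit (LSB) embedding. This is purely illustrative: commercial watermarks like Digimarc's use far more robust encodings that survive compression, cropping and re-photographing, and the pixel layout and tag below are invented for the example.

```python
# Toy LSB watermark: hide a short ASCII tag in the lowest bit of each pixel.
# Changing only the lowest bit shifts a pixel value by at most 1 out of 255,
# invisible to a viewer but trivially machine-readable.

def embed_watermark(pixels, tag):
    """Write each bit of `tag` into the LSB of successive pixel values."""
    bits = [(byte >> i) & 1 for byte in tag.encode() for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for this tag")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the tag bit
    return out

def read_watermark(pixels, length):
    """Recover `length` bytes of the tag from the pixel LSBs."""
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return data.decode()

image = [200, 201, 202, 203] * 16       # stand-in for grayscale pixel data
marked = embed_watermark(image, "CR1")  # hypothetical 3-byte owner tag
assert read_watermark(marked, 3) == "CR1"
assert all(abs(a - b) <= 1 for a, b in zip(image, marked))  # imperceptible
```

Note the trade-off the article describes: nothing here prevents copying; the mark only records provenance, and a naive scheme like this one is destroyed by the first JPEG re-encode.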

The system is not perfect, however. “There is nothing, literally nothing, to protect copyrighted works from being trained on [by generative AI models], except the unverifiable, unenforceable word of AI companies,” Dr. Ben Zhao, Neubauer Professor of Computer Science at University of Chicago, told Engadget via email.

“There are no existing cryptographic or regulatory methods to protect copyrighted works — none,” he said. “Opt-out lists have been made a mockery by stability.ai (they changed the model name to SDXL to ignore everyone who signed up to opt out of SD 3.0), and Facebook/Meta, who responded to users on their recent opt-out list with a message that said ‘you cannot prove you were already trained into our model, therefore you cannot opt out.’”

Zhao says that while the White House's executive order is “ambitious and covers tremendous ground,” plans laid out to date by the White House have lacked much in the way of “technical details on how it would actually achieve the goals it set.”

He notes that “there are plenty of companies who are under no regulatory or legal pressure to bother watermarking their genAI output. Voluntary measures do not work in an adversarial setting where the stakeholders are incentivized to avoid or bypass regulations and oversight.”

“Like it or not, commercial companies are designed to make money, and it is in their best interests to avoid regulations,” he added.

We could also very easily see the next presidential administration come into office and dismantle Biden’s executive order and all of the federal infrastructure that went into implementing it, since an executive order lacks the constitutional standing of congressional legislation. But don’t count on the House and Senate doing anything about the issue either.

“Congress is deeply polarized and even dysfunctional to the extent that it is very unlikely to produce any meaningful AI legislation in the near future,” Anu Bradford, a law professor at Columbia University, told MIT Tech Review. So far, enforcement mechanisms for these watermarking schemes have been generally limited to pinky swears by the industry’s major players.

How Content Credentials work

With the wheels of government turning so slowly, industry alternatives are proving necessary. Microsoft, the New York Times, CBC/Radio-Canada and the BBC began Project Origin in 2019 to protect the integrity of content, regardless of the platform on which it’s consumed. At the same time, Adobe and its partners launched the Content Authenticity Initiative (CAI), approaching the issue from the creator’s perspective. Eventually CAI and Project Origin combined their efforts to create the Coalition for Content Provenance and Authenticity (C2PA). From this coalition of coalitions came Content Credentials (“CR” for short), which Adobe announced at its Max event in 2021. 

CR attaches additional information about an image, in the form of a cryptographically secure manifest, whenever the image is exported or downloaded. The manifest pulls data from the image or video header — the creator’s information, where it was taken, when it was taken, what device took it, whether generative AI systems like DALL-E or Stable Diffusion were used and what edits have been made since — allowing websites to check that information against provenance claims made in the manifest. When combined with watermarking technology, the result is a unique authentication method that cannot be easily stripped the way EXIF data and other metadata (i.e. the technical details automatically added by the software or device that took the image) are when uploaded to social media sites, on account of the cryptographic file signing. Not unlike blockchain technology!
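The core mechanism, a signed manifest bound to a hash of the asset, can be sketched in a few lines. To be clear, this is not the actual C2PA format (which uses X.509 certificates and CBOR-encoded claims, not JSON with a shared-secret HMAC); every key, field name and value below is a stand-in chosen for the illustration.

```python
# Simplified provenance manifest: hash the asset, record claims about it, and
# sign the bundle so any tampering with either the claims or the pixels is
# detectable. A toy stand-in for C2PA, not the real spec.
import hashlib, hmac, json

SIGNING_KEY = b"demo-key"  # stand-in for a creator's private signing key

def make_manifest(asset_bytes, claims):
    manifest = {"asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
                "claims": claims}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return manifest

def verify_manifest(asset_bytes, manifest):
    body = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    sig_ok = hmac.compare_digest(
        manifest["signature"],
        hmac.new(SIGNING_KEY, payload, "sha256").hexdigest())
    hash_ok = body["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest()
    return sig_ok and hash_ok

photo = b"\x89PNG...raw image bytes..."  # hypothetical asset
m = make_manifest(photo, {"creator": "A. Photographer",
                          "generative_ai": False})
assert verify_manifest(photo, m)             # untouched asset passes
assert not verify_manifest(photo + b"!", m)  # any edit breaks the binding
```

This also shows why stripping is the weak point the article goes on to discuss: the manifest travels with the file, so a platform that discards it leaves nothing to verify, which is where the in-pixel watermark comes in.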

Metadata doesn’t typically survive common workflows as content is shuffled around the internet because, Digimarc Chief Product Officer Ken Sickles explained to Engadget, many online systems weren’t built to support or read it and so simply ignore the data.

“The analogy that we've used in the past is one of an envelope,” Digimarc Chief Technology Officer Tony Rodriguez told Engadget. Like an envelope, the valuable content that you want to send is placed inside “and that's where the watermark sits. It's actually part of the pixels, the audio, of whatever that media is. Metadata, all that other information, is being written on the outside of the envelope.”

Should someone manage to remove the watermark (it turns out that’s not that difficult: just screenshot the image and crop out the icon), the credentials can be reattached through Verify, which runs machine vision algorithms against an uploaded image to find matches in its repository. If the uploaded image can be identified, the credentials get reapplied. If a user encounters the image content in the wild, they can check its credentials by clicking on the CR icon to pull up the full manifest, verify the information for themselves and make a more informed decision about what online content to trust.
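Digimarc hasn't published how Verify's matching works, but the general re-identification idea is well illustrated by a perceptual "average hash": summarize an image by its pattern of brighter-and-darker-than-average regions, which survives transformations like a brightness shift or mild recompression, unlike a cryptographic hash of the file bytes, which changes completely. Everything below (the 8x8 grid, the sample values) is invented for the sketch.

```python
# Toy perceptual hash: threshold a downsampled grayscale grid against its own
# mean, then compare hashes by counting differing bits (Hamming distance).

def average_hash(pixels):
    """pixels: a small grayscale grid (e.g. 8x8), flattened row-major."""
    mean = sum(pixels) / len(pixels)
    return [1 if p >= mean else 0 for p in pixels]

def hamming(h1, h2):
    """Number of differing bits; small distance means 'probably same image'."""
    return sum(a != b for a, b in zip(h1, h2))

original = [10 * ((i * 7) % 13) for i in range(64)]  # stand-in 8x8 image
screenshot = [p + 3 for p in original]               # uniform brightness shift

# The shift moves every raw value, so a byte-level hash would diverge, but the
# bright/dark pattern relative to the mean is unchanged: the hashes match.
assert hamming(average_hash(original), average_hash(screenshot)) == 0
```

A repository like Verify's would store such a fingerprint for every registered work and look up near-matches for each upload; real systems use far more crop- and rotation-tolerant features than this sketch.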

Sickles envisions these authentication systems operating in coordinating layers, like a home security system that pairs locks and deadbolts with cameras and motion sensors to increase its coverage. “That's the beauty of Content Credentials and watermarks together,” Sickles said. They become “a much, much stronger system as a basis for authenticity and understanding provenance around an image” than they would individually. Digimarc freely distributes its watermark detection tool to generative AI developers, and is integrating the Content Credentials standard into its existing Validate online copy protection platform.

In practice, we’re already seeing the standard being incorporated into physical commercial products like the Leica M11-P, which will automatically affix a CR credential to images as they’re taken. The New York Times has explored its use in journalistic endeavors, Reuters employed it for its ambitious 76 Days feature and Microsoft has added it to Bing Image Creator and the Bing AI chatbot as well. Sony is reportedly working to incorporate the standard into its Alpha 9 III digital camera, with enabling firmware updates for the Alpha 1 and Alpha 7S III models arriving in 2024. CR is also available in Adobe’s expansive suite of photo and video editing tools including Illustrator, Adobe Express, Stock and Behance. The company’s own generative AI, Firefly, will automatically include non-personally identifiable information in a CR for some features like generative fill (essentially noting that the generative feature was used, but not by whom) but will otherwise be opt-in.

That said, the C2PA standard and front-end Content Credentials are barely out of development and currently exceedingly difficult to find on social media. “I think it really comes down to the wide-scale adoption of these technologies and where it's adopted; both from a perspective of attaching the content credentials and inserting the watermark to link them,” Sickles said.

Nightshade: The CR alternative that’s deadly to databases

Some security researchers have had enough waiting around for laws to be written or industry standards to take root, and have instead taken copy protection into their own hands. Teams from the University of Chicago’s SAND Lab, for example, have developed a pair of downright nasty copy protection systems for use specifically against generative AIs.

Zhao and his team have developed Glaze, a system for creators that disrupts a generative AI’s ability to mimic an artist’s style (by exploiting the concept of adversarial examples). It changes the pixels in a given artwork in a way that is undetectable by the human eye but appears radically different to a machine vision system. When a generative AI system is trained on these "glazed" images, it becomes unable to exactly replicate the intended style of art — cubism becomes cartoony, abstract styles are transformed into anime. This could prove a boon especially to well-known and often-imitated artists, keeping their branded artistic styles commercially safe.
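Glaze's actual optimization targets the feature extractors inside real diffusion models and is far more sophisticated than anything shown here. Purely to illustrate the underlying adversarial-example idea, that tiny, bounded pixel changes can hugely shift what a brittle model "sees", here is a toy sketch against a made-up linear style score; the weights, image and numbers are all hypothetical.

```python
# Adversarial perturbation in miniature: nudge each pixel by at most 1 unit
# in the direction of a linear "style score's" weights (FGSM-style). The
# per-pixel change is visually negligible, but because every nudge pushes the
# score the same way, the total shift equals the full pixel count.

def style_score(pixels, weights):
    """A stand-in for a machine-vision feature: a weighted sum of pixels."""
    return sum(w * p for w, p in zip(weights, pixels))

def cloak(pixels, weights, eps=1):
    """Shift each pixel by eps along the sign of its weight."""
    return [p + eps * (1 if w > 0 else -1) for w, p in zip(weights, pixels)]

weights = [1 if i % 2 == 0 else -1 for i in range(10_000)]  # brittle toy model
art = [128] * 10_000                                        # flat stand-in image

cloaked = cloak(art, weights)
assert max(abs(a - b) for a, b in zip(art, cloaked)) == 1   # imperceptible edit
assert style_score(cloaked, weights) - style_score(art, weights) == 10_000
```

A deep network is not a linear score, but the same mechanism, many small coordinated changes aligned with the model's sensitivities, is what lets a "glazed" image look unchanged to a person while reading as a different style to the machine.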

While Glaze focuses on preventative actions that deflect the efforts of illicit data scrapers, SAND Lab’s newest tool is wholeheartedly punitive. Dubbed Nightshade, the system also subtly changes the pixels in a given image, but instead of confusing the models trained on it as Glaze does, the poisoned image corrupts the entire training database it’s ingested into, forcing developers to go back through and manually remove each damaging image to resolve the issue; otherwise the system will simply retrain on the bad data and suffer the same issues again.

The tool is meant as a “last resort” for content creators but cannot be used as a vector of attack. “This is the equivalent of putting hot sauce in your lunch because someone keeps stealing it out of the fridge,” Zhao argued.

Zhao has little sympathy for the owners of models that Nightshade damages. “The companies who intentionally bypass opt-out lists and do-not-scrape directives know what they are doing,” he said. “There is no ‘accidental’ download and training on data. It takes a lot of work and full intent to take someone’s content, download it and train on it.”

This article originally appeared on Engadget at https://www.engadget.com/can-digital-watermarking-protect-us-from-generative-ai-184542396.html?src=rss

YouTube Music brings personalized album art to its 2023 Recap

YouTube Music users who have seen their Spotify- and Apple Music-using friends share their listening stats from this year can now join the party. YouTube Music Recap is now live and you can access it from the 2023 Recap page in the app. You'll be able to see your top artists, songs, moods, genres, albums, playlists and more from 2023. There's also the option to view your Recap in the main YouTube app, along with some other new features for 2023.

This year, you'll be able to add custom album art. YouTube will create this using your top song and moods from the year, as well as your energy score. The platform will mash together colors, vibes and visuals to create a representation of your year in music.


YouTube says another feature will match your mood with your top songs of the year. You might see, for instance, the percentages of songs you listened to that are classed as upbeat, fun, dancey or chill. Last but not least, you can use snaps from Google Photos to create a customized visual that sums up your year in music (and perhaps your year in travel too).

This article originally appeared on Engadget at https://www.engadget.com/youtube-music-brings-personalized-album-art-to-its-2023-recap-182904330.html?src=rss

Evernote officially limits free users to 50 notes and one measly notebook

Evernote has confirmed the service’s tightly leashed new free plan, which the company tested with some users earlier this week. Starting December 4, the note-taking app will restrict new and current accounts to 50 notes and one notebook. Existing free customers who exceed those limits can still view, edit, delete and export their notes, but they’ll need to upgrade to a paid plan (or delete enough old notes) to create new ones.

The company says most free accounts are already inside those lines. “When setting the new limits, we considered that the majority of our Free users fall below the threshold of fifty notes and one notebook,” the company wrote in an announcement blog post. “As a result, the everyday experience for most Free users will remain unchanged.” Engadget reached out to Evernote to clarify whether “the majority of Free users” staying within those bounds includes long-dormant accounts that may have tried the app for a few minutes a decade ago and never logged in again. We’ll update this article if we hear back.

Evernote’s premium plans, now practically essential for anything more than minimal use, include a $15 monthly Personal plan with 10GB of monthly uploads. You can double that to 20GB (and get other perks) with an $18 tier. It also offers annual versions of those plans for $130 and $170, respectively.

The company acknowledged in its announcement post that “these changes may lead you to reconsider your relationship with Evernote.” Leading alternatives with more bountiful free plans include Notion, Microsoft OneNote, Google Keep, Bear (Apple devices only), Obsidian and SimpleNote.

Earlier this year, Evernote’s parent company, Bending Spoons, moved its operations from the US and Chile to Europe, laying off nearly all of the note-taking app’s employees. When doing so, it said the app had been “unprofitable for years.”

This article originally appeared on Engadget at https://www.engadget.com/evernote-officially-limits-free-users-to-50-notes-and-one-measly-notebook-174436735.html?src=rss

Expressive E Osmose review: A game-changing MPE keyboard, but a frustrating synthesizer

When I first got to see the Expressive E Osmose way back in 2019, I knew it was special. In my 15-plus years covering technology, it was one of the only devices I’ve experienced that actually had the potential to be truly “game changing.” And I’m not being hyperbolic.

But, that was four years ago, almost to the day. A lot has changed in that time. MPE (MIDI Polyphonic Expression) has gone from futuristic curiosity to being embraced by big names like Ableton and Arturia. New players have entered and exited the scene. More importantly, the Osmose is no longer a promising prototype, but an actual commercial product. The questions, then, are obvious: Does the Osmose live up to its potential? And, does it seem as revolutionary today as it did all those years ago? The answers, however, are less clear.


What sets the Osmose ($1,799) apart from every other MIDI controller and synthesizer (MPE or otherwise) is its keybed. At first glance, it looks like almost any other keyboard, albeit a really nice one. The body is mostly plastic, but it feels solid and the top plate is made of metal. (Shoutout to Expressive E, by the way, for building the Osmose out of 66 percent recycled materials and for making the whole thing user repairable — no glue or specialty screws to be found.)

The keys themselves have this lovely, almost matte finish and a healthy amount of heft. It’s a nice change of pace from the shiny, springy keys on even some higher-end MIDI controllers. But the moment you press down on a key you’ll see what sets it apart — the keys move side to side. And this is not because it’s cheaply assembled and there’s a ton of wiggle. This is a purposeful design. You can bend notes (or control other parameters) by actually bending the keys, much like you would on a stringed instrument.

This is huge for someone like me who is primarily a guitar player. Bending strings and wiggling my fingers back and forth to add vibrato comes naturally. And, as I mentioned in my review of Roli’s Seaboard Rise 2, I find myself doing this even on keyboards where I know it will have no effect. It’s a reflex.

It’s a very simple thing to explain, but its effect on your playing is difficult to encapsulate. It’s all of the same things that make playing the Seaboard special: the slight pitch instability from the unintentional micro movements of your fingers, the ability to bend individual notes for shifting harmonies and the polyphonic aftertouch that allows you to alter things like filter cutoff on a per-note basis.

These tiny changes in tuning and expression add an almost ineffable fluidity to your playing. In particular, for sounds based on acoustic instruments like flutes and strings, it adds an organic element missing from almost every other synthesizer. There is a bit of a learning curve, but I got the hang of it after just a few days.

What separates it from the Roli, though, is its form factor. While the Seaboard is keyboard-esque, it’s still a giant squishy slab of silicone. It might not appeal to someone who grew up taking piano lessons every week. The Osmose, on the other hand, is a traditional keyboard, with full-sized keys and a very satisfying action. It’s probably the most familiar and approachable implementation of MPE out there.

If you are a pianist, or an accomplished keyboard player, this is probably the MPE controller you’ve been waiting for. And it’s hands-down one of the best on the market.

Where things get a little dicier is when looking at the Osmose as a standalone synthesizer. But let’s start where it goes right: the interface. The screen to the left of the keyboard is decently sized (around 4 inches) and easy to read at any angle. There are even some cute graphics for parameters such as timbre (a log), release (a yo-yo) and drive (a steering wheel).


There aren’t a ton of hands-on controls, but menu diving is kept to a minimum with some smart organization. The four buttons across the top of the screen take you to different sections for presets, synth (parameters and macros), sensitivity (MPE and aftertouch controls) and playing (mostly just for the arpeggiator at the moment). Then to the left of the screen there are two encoders for navigating the submenus, and the four knobs below control whatever option is listed above them on the screen. So, no, you’re not going to be doing a lot of live tweaking, but you also won’t spend 30 minutes trying to dial in a patch.

Part of the reason you won’t spend 30 minutes dialing in a patch is because there really isn’t much to dial in. The engine driving the Osmose is Haken Audio’s EaganMatrix, and Expressive E keeps most of it hidden behind six macro controls. In fact, you can’t really design a patch from scratch — at least not on the synth directly. You need to download the Haken Editor, which requires Max (not the streaming service), to do serious sound design. Then you need to upload your new patch to the Osmose over USB. Other than that, you’re stuck tweaking presets.


This isn’t necessarily a bad thing because, frankly, EaganMatrix feels less like a musical instrument and more like a PhD thesis. It is undeniably powerful, but it’s also confusing as hell. Expressive E even describes it as “a laboratory of synthesis,” and that seems about right; patching in the EaganMatrix is like doing science. Except, it’s not the fun science you see on TV with fancy machines and test tubes. Instead it’s more like the daily grind of real-life science, where you stare at a nearly inscrutable series of numbers, letters, mathematical constants and formulas.

I couldn’t get the Osmose and Haken Editor to talk to each other on my studio laptop (a five-year-old Dell XPS), though I did manage to get them working on my work-issued MacBook. That being said, it was mostly a pointless endeavor. I simply can’t wrap my head around the EaganMatrix. I was able to build a very basic patch with the help of a tutorial, but I couldn’t actually make anything usable.

There are some presets available on Patchstorage, but the community is nowhere near as robust as what you’d find for the Organelle or ZOIA. And it’s not obvious how to actually upload that handful of presets to the Osmose. You can drag and drop the .mid files you download into the empty slots across the top of the Haken Editor, which will add them to the Osmose's user presets. But you won't actually see that reflected on the Osmose itself until you turn it off and on again.

Honestly, many of the presets available on Patchstorage cover the same ground as the 500 or so factory ones that ship with the Osmose. And it’s while browsing those hundreds of presets that both the power and the limitations of the EaganMatrix become obvious. It’s capable of covering everything from virtual analog, to FM, to physical modeling, and even some pseudo-granular effects. Its modular, matrix-based patching system is so robust that it would almost certainly be impossible to recreate physically (at least without spending thousands of dollars).

Now, this is largely a matter of taste, but I find the sounds that come out of this obviously over-powered synth often underwhelming. They’re definitely unique and in some cases probably only possible with the EaganMatrix. But the virtual analog patches aren’t very “analog,” the FM ones lack the character of a DX7 or the modern sheen of a Digitone, and the bass patches could use some extra oomph. Sometimes patches on the Osmose feel like tech demos rather than something you’d actually use musically.


That’s not to say there are no good presets. There are some solid analog-ish sounds and there are a few decent FM pads. But it’s the physical modeling patches where EaganMatrix is at its best. They definitely land in a kind of uncanny valley, though — not convincing enough to be mistaken for the real thing, but close enough that it doesn’t seem quite right coming out of a synthesizer.

Still, the way tuned drums and plucked or bowed strings are handled by Osmose is impressive. Quickly tapping a key can get you a ringing resonant sound, while holding it down mutes it. Aftertouch can be used to trigger repeated plucks that increase in intensity as you press harder. And bowed patches can be smart enough to play notes within a certain range of each other as legato, while still allowing you to play more spaced out chords with your other hand. (This latter feature is called Pressure Glide and can be fine tuned to suit your needs.)

The level of precision with which you can gently coax sound out of some presets with the lightest touch is unmatched by any synth or MIDI controller I’ve ever tested. And that becomes all the more shocking when you realize that very same patch can also be a percussive blast if you strike the keys hard.

But, at the end of the day, I rarely find myself reaching for the Osmose — at least not as a synthesizer. I’ve been testing one for a few months now, and while I have used it quite extensively in my studio, it’s been mostly as a controller for MPE-enabled soft synths like Arturia’s Pigments and Ableton’s Drift. It’s undeniably one of the most powerful MIDI controllers on the market. My one major complaint on that front is that its incredible arpeggiator isn’t available in controller mode.

The Osmose is a gorgeous instrument that, in the right hands, is capable of delivering nuanced performances unlike anything else. Even if, at times, the borrowed sound engine doesn’t live up to the keyboard’s lofty potential.

This article originally appeared on Engadget at https://www.engadget.com/expressive-e-osmose-review-a-game-changing-mpe-keyboard-but-a-frustrating-synthesizer-170001300.html?src=rss

Google's latest Android update includes AI-created image descriptions and animations for voice messages

Google is rolling out a trio of system updates to Android, Wear OS and Google TV devices. Each brings new features to associated gadgets. Android devices, like smartphones, are getting updated Emoji Kitchen sticker combinations. You can remix emojis and share with friends as stickers via Gboard.

Google Messages for Android is getting a nifty little refresh. There’s a new beta feature that lets users add a unique background and an animated emoji to voice messages. Google’s calling the software Voice Moods and says it’ll help users better express how they’re “feeling in the moment.” Nothing conveys emotion more than a properly-positioned emoji. There are also new reactions for messages that go far beyond simple thumbs ups, with some taking up the entire screen. In addition, you’ll be able to change chat bubble colors.

The company’s also adding an interesting tool that provides AI-generated image descriptions for those with low vision. The TalkBack feature will read aloud a description of any image, whether sourced from the internet or a photo that you took. Google’s even adding new languages to its Live Caption feature, expanding the existing ability to take phone calls without needing to hear the speaker. Better accessibility is always a good thing.

Wear OS is getting a bunch of little updates. You can control more smart home devices and light groups directly from a watch, which comes in handy when creating mood lighting. You can also tell your smart home devices that you are home or away with a tap. There’s a new Assistant Routines feature that automates daily tasks and an Assistant At a Glance shortcut on the watch face that displays information relevant to your day, like the weather and traffic data.

As for Google TV, there are ten new free channels to choose from, bringing the grand total to well over 800. None of these channels require an additional subscription, but they will have commercials. All of these updates begin rolling out today, but it could be a few weeks before they reach everyone’s devices.

This article originally appeared on Engadget at https://www.engadget.com/googles-latest-android-update-includes-ai-created-image-descriptions-and-animations-for-voice-messages-172522129.html?src=rss

Google Messages now lets you choose your own chat bubble colors

Google is rolling out a string of updates for the Messages app, including the ability to customize the colors of the text bubbles and backgrounds. So, if you really want to, you can have blue bubbles in your Android messaging app. You can have a different color for each chat, which could help prevent you from accidentally leaking a secret to family or friends.

With the help of on-device Google AI (meaning you'll likely need a recent Pixel device to use this feature), you can transform photos into reactions with Photomoji. All you need to do is pick a photo, decide which object (or person or animal) you'd like to turn into a Photomoji and hit the send button. These reactions will be saved for later use, and friends in the chat can use any Photomoji you send them as well.

The new Voice Moods feature allows you to apply one of nine different vibes to a voice message, by showing visual effects such as heart-eye emoji, fireballs (for when you're furious) and a party popper. Google says it has also upgraded the quality of voice messages by bumping up the bitrate and sampling rate.

In addition, there are more than 15 Screen Effects you can trigger by typing things like "It's snowing" or "I love you." These will make "your screen erupt in a symphony of colors and motion," Google says. Elsewhere, Messages will display animated effects when certain reactions and emoji are used.


On top of all of that, users will now be able to set up a profile that associates their name and photo with their phone number, giving them more control over how they appear across Google services. The company says this could help when you receive a message from a phone number you don't recognize, and it could also help you identify everyone in a group chat.

Some of these features will be available in beta starting today in the latest version of Google Messages. Google notes that some feature availability will depend on market and device.

Google is rolling out these updates alongside the news that more than a billion people now use Google Messages with RCS enabled every month. RCS (Rich Communication Services) is a more feature-filled and secure format of messaging than SMS and MMS. It supports features such as read receipts, typing indicators, group chats and high-res media. Google also offers end-to-end encryption for one-on-one and group conversations via RCS.

For years, Google had been trying to get Apple to adopt RCS for improved interoperability between Android and iOS. Apple refused, perhaps because iMessage (and its blue bubbles) have long been a status symbol for its users. However, likely to ensure Apple falls in line with European Union regulations, Apple has relented. The company recently said it would start supporting RCS in 2024.

This article originally appeared on Engadget at https://www.engadget.com/google-messages-now-lets-you-choose-your-own-chat-bubble-colors-170042264.html?src=rss

Tesla will deliver the first Cybertrucks today at 3PM ET

If you’ve long dreamed of watching a very small number of vehicles roll off an assembly line, today’s your chance. Tesla is holding a livestream event to highlight deliveries of its long-awaited Cybertruck. The company has only managed to manufacture ten of them so far, despite a 2019 reveal, so that’s what we’ll be watching.

You can catch the Texas-based livestream on X, of course, but the event is also available via Tesla’s website. It all goes down at 3PM ET. Since there will only be ten trucks to show off, the livestream should also cover pertinent details like battery range, towing capacity, up-to-date pricing and, of course, general availability. Tesla plans on ramping up production in 2024 for the cute lil dystopian wonder cars.

It’s easy to make jokes at the automaker’s expense, given the recent history of its CEO, but this is something of a big deal. It’s Tesla’s first truck, despite looking nothing like a classic pickup. The aesthetics are absolutely wild, with it resembling something out of a 1970s sci-fi flick instead of something you’d spot at a tailgate party. As for performance, it remains to be seen if the Cybertruck can compete with rival vehicles in the off-road market.

Tesla’s Cybertruck has been plagued with issues from inception. During its 2019 product debut, Elon Musk crowed about the unbreakable glass window and invited a customer to try to break it by hurling a bowling ball. Well, it shattered, leading to a muttered curse from the embattled CEO. Despite that embarrassment, the company still says the vehicle boasts a “nearly impenetrable” exoskeleton that resists dents, damage and long-term corrosion. We shall see. There have been multiple delays and a redesign back in 2020.

There’s also the matter of price. When it was first revealed, the Cybertruck was set to cost around $40,000. However, the company has been fairly quiet on the subject since then, and a lot has changed since 2019. You can reserve a vehicle right now from Tesla by plopping down $100, but who knows when actual shipments will start. Despite that, Musk recently told investors that the company has accrued more than one million reservations. Those folks will be waiting a while, as even generous estimates put Tesla’s output at around 200,000 Cybertrucks per year.

The real question: Will Joe Rogan be one of the ten lucky golden ticket holders? We just might find out at 3PM ET.

This article originally appeared on Engadget at https://www.engadget.com/tesla-will-deliver-the-first-cybertrucks-today-at-3pm-et-160932259.html?src=rss

Logitech's Litra Glow streamer light falls to a new low of $40

It's getting dark much too early, and that means a lot more time spent on video calls or live streams with a harsh overhead light or frustrating shadows. Logitech's Litra Glow is a fantastic option for ensuring you look good on camera, and right now, it's at a new all-time low price. The light is down to $40 from $60 thanks to a 17 percent off sale and an additional $10 coupon applied at checkout.

Logitech's Litra Glow is a premium LED streaming light designed for creators, and it's our recommendation for game-streaming gear that will make you feel like a pro. It clips right onto your computer next to its webcam with three-way mounting, letting you adjust its height, tilt and rotation. The light is USB-powered, so you'll want room for its cord to hide behind your monitor.

The Litra Glow is equipped with Truesoft technology, so you won't just have a painfully bright light in your face. You can also adjust the light's brightness and temperature (a great tool for warm light fans) based on the time of day and personal preference. You can make these changes using manual controls or Logitech's app.

Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/logitechs-litra-glow-streamer-light-falls-to-a-new-low-of-40-141910194.html?src=rss

How OpenAI's ChatGPT has changed the world in just a year

Over the course of two months from its debut in November 2022, ChatGPT exploded in popularity, from niche online curio to 100 million monthly active users — the fastest user base growth in the history of the Internet. In less than a year, it has earned the backing of Silicon Valley’s biggest firms, and been shoehorned into myriad applications from academia and the arts to marketing, medicine, gaming and government.

In short, ChatGPT is just about everywhere. Few industries have remained untouched by the viral adoption of generative AI tools. On the first anniversary of its release, let’s look back on the year of ChatGPT that brought us here.

OpenAI had been developing GPT (Generative Pre-trained Transformer), the large language model that ChatGPT runs on, since 2016 — unveiling GPT-1 in 2018 and iterating it to GPT-3 by June 2020. With the November 30, 2022 release of GPT-3.5 came ChatGPT, a digital agent capable of superficially understanding natural language inputs and generating written responses to them. Sure, it was rather slow to answer and couldn’t speak to questions about anything that happened after September 2021 — not to mention its issues answering queries with misinformation during bouts of “hallucinations" — but even that kludgy first iteration demonstrated capabilities far beyond what other state-of-the-art digital assistants like Siri and Alexa could provide.

ChatGPT’s release timing couldn’t have been better. The public had already been introduced to the concept of generative artificial intelligence in April of that year with DALL-E 2, a text-to-image generator. DALL-E 2, as well as Stable Diffusion, Midjourney and similar programs, were an ideal low-barrier entry point for the general public to try out this revolutionary new technology. They were an immediate smash hit, with subreddits and Twitter accounts springing up seemingly overnight to post screengrabs of the most outlandish scenarios users could imagine. And it wasn’t just the terminally online that embraced AI image generation; the technology immediately entered the mainstream discourse as well, extraneous digits and all.

So when ChatGPT dropped last November, the public was already primed on the idea of having computers make content at a user’s direction. The logical leap from having it make words instead of pictures wasn’t a large one — heck, people had already been using similar, inferior versions in their phones for years with their digital assistants.

Q1: [Hyping intensifies]

To say that ChatGPT was well-received would be to say that the Titanic suffered a small fender-bender on its maiden voyage. The hype was magnitudes bigger than anything surrounding DALL-E and other image generators. People flat out lost their minds over the new AI and OpenAI’s CEO, Sam Altman. Throughout December 2022, ChatGPT’s usage numbers rose meteorically as more and more people logged on to try it for themselves.

By the following January, ChatGPT was a certified phenomenon, surpassing 100 million monthly active users in just two months. That was faster than either TikTok or Instagram, and it remains the fastest user adoption to 100 million in the history of the internet.

We also got our first look at the disruptive potential generative AI offers when ChatGPT managed to pass a series of law school exams (albeit by the skin of its digital teeth). Around that same time, in January, Microsoft extended its existing R&D partnership with OpenAI to the tune of $10 billion. That number is impressively large and likely why Altman still has his job.

As February rolled around, ChatGPT’s user numbers continued to soar, surpassing one billion users total with an average of more than 35 million people per day using the program. At this point OpenAI was reportedly worth just under $30 billion and Microsoft was doing its absolute best to cram the new technology into every single system, application and feature in its product ecosystem. ChatGPT was incorporated into BingChat (now just Copilot) and the Edge browser to great fanfare — despite repeated incidents of bizarre behavior and responses that saw the Bing program temporarily taken offline for repairs.

Other tech companies began adopting ChatGPT as well, with Opera incorporating it into its browser, Snapchat releasing its GPT-based My AI assistant (which would be unceremoniously abandoned a few problematic months later) and Buzzfeed News’ parent company using it to generate listicles.

March saw more of the same, with OpenAI announcing a new subscription-based service — ChatGPT Plus — which offers users the chance to skip to the head of the queue during peak usage hours and added features not found in the free version. The company also unveiled plug-in and API support for the GPT platform, empowering developers to add the technology to their own applications and enabling ChatGPT to pull information from across the internet as well as interact directly with connected sensors and devices.
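For developers, building GPT into an application boiled down to sending an authenticated request to OpenAI's chat completions endpoint. The sketch below is illustrative only — the helper function, prompt and temperature value are assumptions, not OpenAI's own sample code — and it stops at assembling the JSON payload, since actually sending it requires an API key.

```python
# Minimal sketch of how a third-party app might prepare a request for
# OpenAI's chat completions API. Payload shape follows the public API
# docs of the period; the helper and prompt text are hypothetical.

def build_chat_request(user_prompt: str, model: str = "gpt-3.5-turbo") -> dict:
    """Assemble the JSON payload for a chat completions call."""
    return {
        "model": model,
        "messages": [
            # A system message steers the assistant's behavior...
            {"role": "system", "content": "You are a helpful assistant."},
            # ...and the user message carries the actual query.
            {"role": "user", "content": user_prompt},
        ],
        "temperature": 0.7,  # higher values make replies more varied
    }

payload = build_chat_request("Summarize today's tech news in one sentence.")
print(payload["model"])          # gpt-3.5-turbo
print(len(payload["messages"]))  # 2
```

In a real integration, this payload would be POSTed to the endpoint with a bearer token, and plug-ins extend the same pattern by letting the model call out to a developer's declared tools.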

ChatGPT also notched 100 million users per day in March, 30 times higher than two months prior. Companies from Slack and Discord to GM announced plans to incorporate GPT and generative AI technologies into their products.

Not everybody was quite so enthusiastic about the pace at which generative AI was being adopted, mind you. In March, OpenAI co-founder Elon Musk, along with Steve Wozniak and a slew of AI researchers, signed an open letter demanding a six-month moratorium on AI development.

Q2: Electric Boog-AI-loo

Over the next couple of months, the company fell into a rhythm of continuous user growth, new integrations, occasional rival AI debuts and nationwide bans on generative AI technology. For example, in April, ChatGPT’s usage climbed nearly 13 percent month-over-month from March, even as the entire nation of Italy outlawed ChatGPT use by public sector employees, citing GDPR data privacy violations. The Italian ban proved only temporary after the company worked to resolve the flagged issues, but it was an embarrassing rebuke and helped spur further calls for federal regulation.

When it was first released, ChatGPT was only available through a desktop browser. That changed in May, when OpenAI released its dedicated iOS app and expanded the digital assistant’s availability to an additional 11 countries, including France, Germany, Ireland and Jamaica. At the same time, Microsoft’s integration efforts continued apace, with Bing Search melding into the chatbot as its “default search experience.” OpenAI also expanded ChatGPT’s plug-in system to ensure that more third-party developers were able to build ChatGPT into their own products.

ChatGPT’s tendency to hallucinate facts and figures was once again exposed that month when a lawyer in New York was caught using the generative AI to do “legal research.” It gave him a number of entirely made-up, nonexistent cases to cite in his argument — which he then did without bothering to independently validate any of them. The judge was not amused.

By June, a little bit of ChatGPT’s shine had started to wear off. Congress reportedly restricted Capitol Hill staffers’ use of the application over data handling concerns. User numbers had declined nearly 10 percent month-over-month, but ChatGPT was already well on its way to ubiquity. A March update enabling the AI to comprehend and generate Python code in response to natural language queries only increased its utility.

Q3: [Pushback intensifies]

More cracks in ChatGPT’s facade began to show the following month when OpenAI’s head of Trust and Safety, Dave Willner, abruptly announced his resignation days before the company released its ChatGPT Android app. His departure came on the heels of news of an FTC investigation into the company’s potential violation of consumer protection laws — specifically regarding the user data leak from March that inadvertently shared chat histories and payment records.

It was around this time that OpenAI’s training methods, which involve scraping the public internet for content and feeding it into massive datasets on which the models are taught, came under fire from copyright holders and marquee authors alike. Much in the same manner that Getty Images sued Stability AI for Stable Diffusion’s obvious leverage of copyrighted materials, stand-up comedian and author Sarah Silverman brought suit against OpenAI, alleging that its “Books2” dataset illegally included her copyrighted works. The Authors Guild, which represents Stephen King, John Grisham and 134 others, launched a class-action suit of its own in September. While much of Silverman’s suit was eventually dismissed, the Authors Guild suit continues to wend its way through the courts.

Select news outlets, on the other hand, proved far more amenable. The Associated Press announced in August that it had entered into a licensing agreement with OpenAI which would see AP content used (with permission) to train GPT models. At the same time, the AP unveiled a new set of newsroom guidelines explaining how generative AI might be used in articles, while still cautioning journalists against using it for anything that might actually be published.

ChatGPT itself didn’t seem too inclined to follow the rules. In a report published in August, the Washington Post found that guardrails supposedly enacted by OpenAI in March, designed to counter the chatbot’s use in generating and amplifying political disinformation, actually weren’t. The company told Semafor in April that it was "developing a machine learning classifier that will flag when ChatGPT is asked to generate large volumes of text that appear related to electoral campaigns or lobbying." Per the Post, those rules simply were not enforced, with the system eagerly returning responses for prompts like “Write a message encouraging suburban women in their 40s to vote for Trump” or “Make a case to convince an urban dweller in their 20s to vote for Biden.”

At the same time, OpenAI was rolling out another batch of new features and updates for ChatGPT including an Enterprise version that could be fine-tuned to a company’s specific needs and trained on the firm’s internal data, allowing the chatbot to provide more accurate responses. Additionally, ChatGPT’s ability to browse the internet for information was restored for Plus users in September, having been temporarily suspended earlier in the year after folks figured out how to exploit it to get around paywalls. OpenAI also expanded the chatbot’s multimodal capabilities, adding support for both voice and image inputs for user queries in a September 25 update.

Q4: Starring Sam Altman as “Lazarus”

The fourth quarter of 2023 has been a hell of a decade for OpenAI. On the technological front, Browse with Bing, Microsoft’s answer to Google SGE, moved out of beta and became available to all subscribers — just in time for the third iteration of DALL-E to enter public beta. Even free tier users can now hold spoken conversations with the chatbot following the November update, a feature formerly reserved for Plus and Enterprise subscribers. What’s more, OpenAI has announced GPTs, little single-serving versions of the larger LLM that function like apps and widgets and which can be created by anyone, regardless of their programming skill level.

The company has also suggested that it might be entering the AI chip market at some point in the future, in an effort to shore up the speed and performance of its API services. OpenAI CEO Sam Altman had previously pointed to industry-wide GPU shortages for the service’s spotty performance. Producing its own processors might mitigate those supply issues, while potentially lowering the current four-cent-per-query cost of operating the chatbot to something more manageable.

But even those best laid plans were very nearly smashed to pieces just before Thanksgiving when the OpenAI board of directors fired Sam Altman, arguing that he had not been "consistently candid in his communications with the board."

That firing didn't take. Instead, it set off 72 hours of chaos within the company itself and the larger industry, with waves of recriminations and accusations, threats of resignations by the lion’s share of the staff and actual resignations by senior leadership happening by the hour. The company went through three CEOs in as many days, landing back on the one it started with, albeit with him now free from a board of directors that would even consider acting as a brake against the technology’s further, unfettered commercial development.

At the start of the year, ChatGPT was regularly derided as a fad, a gimmick, some shiny bauble that would quickly be cast aside by a fickle public like so many NFTs. Those predictions could still prove true, but as 2023 has ground on and ChatGPT’s adoption has continued to broaden, those dim forecasts of the technology’s future feel increasingly remote.

There is simply too much money wrapped up in ensuring its continued development, from the revenue streams of companies promoting the technology to the investments of firms incorporating the technology into their products and services. There is also a fear of missing out among companies, S&P Global argues — that they might adopt too late what turns out to be a foundationally transformative technology — that is helping drive ChatGPT’s rapid uptake.

The calendar resetting for the new year shouldn’t do much to change ChatGPT’s upward trajectory, but looming regulatory oversight might. President Biden has made the responsible development of AI a focus of his administration, with both houses of Congress beginning to draft legislation as well. The form and scope of those resulting rules could have a significant impact on what ChatGPT looks like this time next year.

This article originally appeared on Engadget at https://www.engadget.com/how-openais-chatgpt-has-changed-the-world-in-just-a-year-140050053.html?src=rss