Ever found yourself with a killer podcast idea, only for it to fizzle out once you realize all the hoops you have to jump through just to make it? Learning an audio editing tool is a skill of its own and, while getting your audio masterpiece online has never been easier, today’s listeners are savvy and won’t tolerate subpar sound and editing for long. These are all problems that Adobe’s new browser-based Podcast tool aims to solve.
Adobe Podcast, formerly known as Project Shasta, is a cloud-based audio production tool. As the name suggests, it’s aimed primarily at podcast production, though it should interest anyone who works with narrative audio. The first thing you’ll notice is that it doesn’t look like an audio editor at all: there’s no audio timeline here and no mixer view with channels. In fact, it almost never was one.
“The goal was to come up with a broader voice strategy for Adobe,” Mark Webster, Director of Product, told Engadget. “That could have been creating a Creative Cloud voice assistant or speaking to Photoshop. But we kind of took a step back [...] it was really about just building services and a platform to make it really easy to create spoken audio.”
The result is Adobe Podcast, which is still in beta. Anyone can apply for access, but currently you’ll need to be based in the US.
Unlike traditional audio editors, including Adobe’s own Audition, you won’t work left to right, or even really work with audio files at all. Instead, you’ll work on your podcast like you would a text document. And not just because you work top down: for the most part, you really are just editing a text document. Anything you record through Adobe Podcast is automatically transcribed, and you simply edit the text to make changes (which are then magically reflected in the audio). There are even some extra tools for creating episode artwork.
“We don't think of Adobe Podcast as another audio tool. It really is a storytelling tool. When you think about it as a storytelling tool, suddenly all the things that are in traditional audio tools, like looking at the audio waveforms and decibel levels, they're actually not relevant,” Sam Anderson, Adobe Podcast’s Lead Designer, told Engadget.
Apps like Descript have been doing it this way for a while. And it makes some sense. Podcasts are about what is being said, so it’s logical to work on the text first rather than the raw audio.
Not to mention, being able to see what’s being said without endlessly playing it back to find the right spot is also much easier on the ears, eyes and soul. But it’s not without some trade-offs.
For one, there’s a certain amount of control you have to learn to relinquish. In an audio editor, you can choose exactly where to trim a segment of audio. In Adobe Podcast, you can only highlight text; the finer details of the edit are taken care of by the backend. For the most part that’s fine, but if you want to add or trim some silence, for example, you can’t do that here; you’ll have to get creative.
For example, removing a sentence is as easy as highlighting it in the transcription and smacking the delete key. Similarly, you can cut/paste to move things around as you see fit. But you might not quite get the smooth edit you would if you did this manually in an audio editing app. So, for now at least, you might still have to make some minor edits after you export from Podcast. In the future, the system might leverage AI to make these sorts of edits for you.
“I think we could use some really interesting technology to look at the space between words and when you make deletions and just find a way to just do it automatically.” Anderson said.
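To picture what that backend is doing, here’s a rough sketch of how a transcript deletion can map back onto the audio. This is decidedly not Adobe’s pipeline, just the general idea: word-level timestamps from the transcription define the cut points, and a short crossfade smooths the join. The filename and timestamps are hypothetical, and the sketch leans on the pydub library.

```python
# A rough sketch of text-driven deletion, not Adobe's actual pipeline.
# Assumes a transcription backend has emitted word-level timestamps;
# "episode.mp3" and the times below are hypothetical.
from pydub import AudioSegment

# (word, start_ms, end_ms), as a speech-to-text service might report them
words = [
    ("Welcome", 0, 420), ("back", 420, 700), ("to", 700, 820),
    ("the", 820, 940), ("show", 940, 1400),
    ("um", 1500, 1750), ("anyway", 1750, 2300),
]

def delete_words(audio, words, first, last, crossfade_ms=20):
    """Cut words[first..last] out of the audio, joining what's left
    with a short crossfade so the edit doesn't click or pop."""
    start_ms = words[first][1]
    end_ms = words[last][2]
    return audio[:start_ms].append(audio[end_ms:], crossfade=crossfade_ms)

audio = AudioSegment.from_file("episode.mp3")
edited = delete_words(audio, words, first=5, last=5)  # drop the "um"
edited.export("episode_edited.mp3", format="mp3")
```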
One of the major benefits of online tools like Podcast, or similar services such as Riverside.fm and Zencastr, is how easy it is to invite guests. In the past, you might have had to hold a pre-brief with a guest to figure out their audio setup, maybe guide them through recording locally with Audacity and then deal with transferring large audio files around after the fact.
With Podcast, your guests simply accept an invite, much like they would for a Zoom meeting, and then you converse in real time while the local audio is uploaded in the background. The result is an incredibly frictionless way to get local audio, transcribed and ready to be edited in one fell swoop.
Perhaps Adobe’s secret weapon here is two-fold. First, unlike the rival products mentioned above, Podcast has a singular focus on audio, so there are no video editing, presentation or livestreaming tools you might not need. Second is a set of proprietary tools, notably “Enhance Speech.” With one click, this magic button basically transforms garbage audio recorded in the worst of rooms into something that sounds more professional.
In testing this, I recorded a conversation between my colleague Mat Smith and myself. I was using a dedicated XLR podcasting mic (Focusrite’s DM14v) into an audio interface. Mat, on the other hand, was just speaking into his MacBook’s built-in microphone. Once we finished our recording, I tapped the “Enhance” toggle and suddenly it sounded like we were in the same room with the same equipment.
Audio purists might find the treated audio a little too dry or isolated (with no sense of space), especially right now, as there are no controls: the effect is either fully on or off. But Webster explained that in the future you’ll be able to adjust the strength of the effect if the default setting isn’t to your liking.
The effect was impressive enough that I tried uploading the audio from a telephone interview I conducted for a story a few weeks ago. The result was good enough that I’m considering cutting it down into an audio version of the article it was for.
Another feature in the works is the removal of filler words (uhms, ahhs and so on). Again, this is something you can find in rival products, but right now there’s no way to edit them out in Podcast, as the transcription doesn’t show them, so it’s something you’d have to do in post.
Handily, Adobe Podcast includes lots of free music for you to use for intros, outros and transitions. Editing that music to work with your speech isn’t as intuitive as it could be, but this is an example of why the service is still in beta, and you can get creative. For example, if you want to talk over a bit of music and then have it fade up to full volume, you can splice the track in two and set one half to “background,” achieving the effect that way (sketched below). Webster explained that the team is still figuring out the best way to add such tools so they guide novices without alienating more advanced users (and vice versa).
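As a rough illustration of that splice-and-duck workaround, here’s how the same effect looks using the pydub library (not Adobe’s tooling); the filenames are placeholders and the gain and fade values are just illustrative.

```python
# A rough sketch of the splice-and-duck trick described above, using
# pydub rather than Adobe's tooling. "speech.mp3" and "music.mp3" are
# placeholder files.
from pydub import AudioSegment

speech = AudioSegment.from_file("speech.mp3")
music = AudioSegment.from_file("music.mp3")

bed = music[:len(speech)].apply_gain(-15)    # the "background" half, ducked
outro = music[len(speech):].fade_in(2000)    # the half that fades up to full

mix = speech.overlay(bed)    # talk over the quiet music bed...
final = mix + outro          # ...then let the music take over
final.export("segment.mp3", format="mp3")
```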
If you’re wondering whether Adobe will add an AI voice tool so you can not only edit the audio you have with text, but actually add words by typing them in (something you can do in Descript), don’t hold your breath. Webster pointed out that an effective voice model needs to be trained on enough material that it really only makes sense for your own voice. Given that AI voices can be clunky, the team decided to just make it really, really easy to re-record the line you want. After all, this isn’t video, where patching over a misspeak is a lot more complicated.
Perhaps the best feature of all is the lack of friction between having an idea and getting something down on the page. If you can use Google Docs, you can make something with Adobe Podcast. And with the bundled music and mic-enhancement tools, there’s a solid chance it’ll sound pretty good, too.
Podcast will remain in beta for the foreseeable future, and Webster confirmed that there will always be a free tier. And if you don’t want to make a podcast but like the sound of the speech-enhancing feature, you don’t even need to sign up for the beta; it’s available to try right now.
I love to sleep. Then, after I wake up, I love to find out how well I slept. It might be because I’m highly competitive, or because I like the validation of an app confirming whether I’ve had a good or bad night’s rest. Despite this, I’ve avoided most sleep trackers because they’re generally too intrusive or uncomfortable. So when Amazon unveiled the Halo Rise, I was excited by its premise. For $140 (on sale now for $100), the Rise promises to use motion sensing to track your breathing rate and use that information to calculate how long you’ve slept. It’s also a bedside lamp, clock and smart alarm, and looks pretty, to boot.
Design
It fits nicely into my life in many ways. First, physically. The Halo Rise is a gray CD-sized disc (remember those? And yes, I know the D already stands for disc) that’s flat on one side, convex on the other and rests on top of a metal stand. The flat surface houses LEDs that show the time, as well as an arc of lights that can be set to simulate the gradual glow of sunrise and wake you up more gently.
I like the Rise’s clean, modern aesthetic that should blend in with most furnishings. Setup was also surprisingly painless. Like Google’s Nest Hub, which similarly uses motion detection to track your sleep, the Halo Rise needs to be next to your bed within arm’s reach. I was worried that my nightstand wasn’t tall enough for the device, but it was able to work even though it was set a few inches lower than Amazon recommended.
Placing the Rise close to your bed is also important because, unlike the Nest Hub, it doesn’t have an onboard mic, which means you’ll have to reach over and hit the snooze button when it goes off. This brings me to one of my small complaints: There are two buttons on the top of the Rise, a small, pinky-sized one for dismissing the alarm and a larger one on its left for snoozing. I know this is how most alarm clocks are designed and it makes sense – if you’re awake enough to accurately press the tinier button then you likely won’t need a follow-up. But since there’s no way to vocally stop the Halo Rise, the fact that the buttons are so small and close to each other is pretty frustrating. I accidentally hit snooze so many times and had to run back to my bedroom while brushing my teeth when the device rang again ten minutes later.
That’s my main gripe with the Halo Rise’s hardware, and it honestly isn’t much. I also wish it were a bit bigger so the buttons would be easier to hit and the clock font easier to read. But those are the only times you have to physically interact with it; everything else happens in the app.
Sleep-tracking
Every morning, Amazon will show you a summary of the last night, including a sleep score and the amount of time you’ve been asleep. Alongside that is a message either congratulating you on doing well or cautioning you to go easy that day if you didn’t catch enough shut-eye. I’ve definitely used the feedback from the Rise as an excuse to get out of working (or working out) in the last few weeks, when it told me to take it easy after getting just two hours of sleep.
In general, I’ve found the Halo Rise pretty accurate at detecting when I’ve dozed off and woken up. It actually performed better than Google’s smart display, which often misjudged when I awoke. I don’t like that, unlike most other sleep trackers, Amazon also includes my “time taken to fall asleep” as part of my so-called performance each night. Typically, after I get in bed, I spend some time scrolling Reddit or playing games, and I don’t consider that time spent trying to fall asleep. I wish the Rise were smart enough to use its onboard light sensor to determine when I put my phone away and turn off the light. That is when I’m actually trying to drift into la la land, but I guess not everyone sleeps in the dark, so this might not be suitable for all.
Still, I found the app surprisingly informative. Tapping into details brings up a chart of the sleep stages I was in the night before, as well as a timeline below it showing at which points during the night there were “Light Disruptions.” For me, the results were unsurprising – since I don’t use blackout curtains, my room got bright at sunrise every day. Otherwise, unless I had gotten up and turned on my lamp, there were no disruptions. This page also tells me the average brightness, humidity and temperature in my room overnight.
What was most helpful was understanding that my sleep environment was warmer than I thought. I was struggling to fall and stay asleep until the app suggested I adjust it to the recommended range of 60 to 70 degrees (Fahrenheit). As someone who avoids using the air conditioner out of guilt, having this information validated my desire and I started to turn it on more often right before bedtime. I slept much better after that, and the app congratulated me on keeping my room’s temperature within the ideal range.
To be clear, the Halo Rise isn’t the only sleep tracker that can do this. The Nest Hub also tracks your room’s temperature and light. But instead of humidity, Google uses its onboard mics to listen for sounds of snoring or coughing. As someone who doesn’t snore, but coughs a lot due to dry air, I found it more helpful to get insight on how humid my environment was. Depending on your concerns, your preferences here might differ.
Another key difference between the Halo Rise and the Nest Hub is that Google will track daytime naps while Amazon does not. If you go back to bed in the middle of the day, the Rise will not track your sleep. However, on one particular Saturday when I was recovering from a long, hard week, I stayed in bed for hours after waking up and passed out at 1:48pm. I finally got out of bed at about 4:43pm, and the Amazon app actually updated afterwards to add those three-ish hours to my record.
Wrap-up
Every morning in the past, I’d reach for my phone, check my notifications and the weather, as well as my horoscope. I know, it’s not scientific and I don’t put a lot of stock in it, but I think of it as a way to start my day off better prepared. Since setting up the Halo Rise, my first check-in has been replaced by looking at the Halo app. It’ll tell me whether I should take my daily workout easy, and how early I might need to get to bed that night.
The Halo Rise is also a small but significant piece of Amazon’s ongoing foray into the business of health and wellness. The device sits in the most intimate of our spaces and offers help on a specific area of wellbeing. Together with products like the Halo Band and app features like body composition scanning, mobility and posture assessment, as well as the controversial tone feature that monitors how you speak, the company is clearly investing in health management tools. Considering Amazon also recently finished acquiring One Medical and launched its pharmacy in 2020, its ambitions are obvious. The question is whether we’re willing to trade our personal data for the potential convenience that an all-Amazon healthcare infrastructure might bring.
On the eve of the launch of The Super Mario Bros. Movie, the pudgy plumber's days on smartphones may be dwindling. In an interview with Variety, celebrated Nintendo designer Shigeru Miyamoto said that "mobile apps will not be the primary path of future Mario games." Instead, he said, the company's strategy going forward is a "hardware and software integrated gaming experience."
Miyamoto's remarks aren't too surprising, considering that the last Mario game on mobile, Dr. Mario World, was pulled from the market just two years after its release. 2016's Super Mario Run grossed $60 million in its first year, while Mario Kart Tour has taken in $300 million so far. That compares with Nintendo's $3 billion gross to date on Mario Kart 8 for Wii U and Switch.
The designer said that since control intuitiveness is a key part of the gaming experience, smartphone development is problematic. "When we explored the opportunity of making Mario games for the mobile phone — which is a more common, generic device — it was challenging to determine what that game should be," he said. "That is why I played the role of director for Super Mario Run, to be able to translate that Nintendo hardware experience into the smart devices."
Miyamoto didn't address Nintendo's other mobile properties, including Animal Crossing Pocket Camp and Fire Emblem Heroes. The latter is Nintendo's top-earning mobile game by far, having crossed the $1 billion mark in June of last year, according to SensorTower. Miyamoto declined to say when the next Super Mario game would arrive, but The Super Mario Bros. Movie, starring Chris Pratt, is set to arrive today amid strong audience and tepid critic reviews.
Rakuten-owned Kobo unveiled its newest e-reader today, a $400 alternative to the Kindle Scribe and reMarkable 2. The Kobo Elipsa 2E iterates on its 2021 predecessor with a better stylus, more versatile lighting / color-temperature adjustments and other improvements.
The Kobo Elipsa 2E has a 10.3-inch e-ink touchscreen (like its predecessor), but the new model gets a resolution bump to 300ppi. Additionally, it adds ComfortLight Pro, which adjusts the front light’s color temperature and brightness to reduce eye strain. Kobo says its battery lasts longer, especially when using the stylus, although its description is only as specific as “weeks of battery life.”
Kobo says the new e-reader has a faster (dual-core 2GHz) processor, leading to lower latency and speedier zooming / page-turning. It also includes the Kobo Stylus 2, an improved (rechargeable and 25 percent lighter) digital pen for jotting notes. The stylus has an “eraser” on its back end and a separate highlighter button. In addition, the optional SleepCover includes a magnetic attachment for stashing away the stylus when you aren’t using it. Finally, the device has an improved design using recycled plastic and metals.
Kobo also announced the launch of Kobo Plus, its answer to Kindle Unlimited and Audible. The tier-based subscription service offers unlimited access to over 1.3 million e-books and 100,000 audiobooks. It starts at $8 per month for either e-books or audiobooks, or $10 per month for both.
The Kobo Elipsa 2E will cost $400 when it launches in stores and online on April 19th. Pre-orders begin April 5th at Kobo’s website, and customers in the US, UK and Australia who reserve one before the launch date will get a $25 Kobo e-gift card for digital reading content. The e-reader will be available in the US, Canada, UK, Netherlands, Belgium, France, Italy, Spain, Portugal, Sweden, Switzerland, Australia, New Zealand, Poland, the Czech Republic, Romania, Singapore, Malaysia, Taiwan, Hong Kong, Japan and Turkey.
Our reverence towards stars and celebrities was not born of the 19th century’s cinematic revolution, but rather has been a resilient aspect of our culture for millennia. Ancient tales of immortal gods rising again and again after fatal injury, the veneration and deification of social and political leaders, Madame Tussauds’ wax museums and the Academy Awards’ annual In Memoriam segment are all facets of the human compulsion to put well-known thought leaders, tastemakers and trendsetters up on pedestals. And with a new, startlingly lifelike generation of generative artificial intelligence (gen-AI) at our disposal, today’s celebrities could potentially remain with us long after their natural deaths. Like ghosts, but still on TV, touting Bitcoin and Metaverse apps. Probably.
Fame is the name of the game
American historian Daniel Boorstin once quipped, “to be famous is to be well known for being well-known.” With the rise of social media, achieving celebrity is now easier than ever, for better or worse.
“Whereas stars are often associated with a kind of meritocracy,” said Dr. Claire Sisco King, Associate Professor of Communication Studies and Chair of the Cinema and Media Arts program at Vanderbilt. “Celebrity can be acquired through all kinds of means, and of course, the advent of digital media has, in many ways, changed the contours of celebrity because so-called ordinary people can achieve fame in ways that were not accessible to them prior to social media.”
What’s more, social media provides a degree of access and intimacy between celebrities and their fans unmatched even at the peak of the paparazzi era. “We develop these imagined intimacies with celebrities and think about them as friends and loved ones,” King continued. “I think that those kinds of relationships illustrate the longing that people have for senses of connectedness and interrelatedness.”
For as vapid as modern celebrity existence is portrayed in popular media, famous people have long served important roles in society as trendsetters and cultural guides. During the Victorian era, for example, British folks would wear miniature portraits of Queen Victoria to signal their fealty, and her choice to wear a white wedding gown in 1840 is what started the modern tradition. In the US, that manifests with celebrities as personifications of the American Dream — each and every single one having pulled themselves up by the bootstraps and sworn off avocado toast to achieve greatness, despite their humble beginnings, presumably in a suburban garage of some sort.
“The narratives that we return to,” King said, “can become comforts for making sense of that inevitable part of the human experience: our finiteness.” But what if our cultural heroes didn’t die? At least not entirely? What if, even after Tom Hanks shuffles off this mortal coil, his likeness and personality were digitally preserved in perpetuity? We’re already sending long-dead recording artists like Roy Orbison, Tupac Shakur and Whitney Houston back out on tour as holographic performers. The large language models (LLMs) that power popular chatbots like ChatGPT, Bing Chat and Bard are already capable of mimicking the writing styles of whichever authors they’ve been trained on. What’s to stop us from smashing these technologies together into an interactive Tucker-Dolcetto amalgamation of synthesized content? Turns out, not much beyond the threat of a bad news cycle.
How to build a 21st century puppet
Cheating death has been an aspirational goal of humanity since prehistory. The themes of resurrection, youthful preservation and outright immortality are common tropes throughout our collective imagination — notions that have founded religions, instigated wars and launched billion-dollar beauty and skin care empires. If a society’s elites weren’t mummifying themselves ahead of a glorious afterlife, bits and pieces of their bodies and possessions were collected and revered as holy relics, cultural artifacts to be cherished and treasured as a physical connection to the great figures and deeds of yore.
Technological advances since the Middle Ages have, thankfully, by and large eliminated the need to carry desiccated bits of your heroes in a coat pocket. Today, fans can connect with their favorite celebrities — whether still alive or long since passed — through the star’s available catalog of work. For example, you can watch Robin Williams’ movies, stand-up specials and Mork and Mindy, and read his books, arguably more easily now than when he was alive. Nobody’s toting scraps of hallowed rainbow suspenders when they can rent Jumanji from YouTube on their phone for $2.99. It’s equally true for William Shakespeare, whose collected works you can read on a Kindle as you wait in line at the DMV.
At this point, it doesn’t really matter how long a beloved celebrity has been gone — so long as sufficiently large archives of their work remain, digital avatars can be constructed in their stead using today’s projection technologies, generative AI systems, and deepfake audio/video. Take the recent fad of deceased singers and entertainers “going back out on tour” as holographic projections of themselves for example.
The projection systems developed by BASE Hologram and the now-defunct HologramUSA, which made headlines in the middle of the last decade for their spectral representations of famously deceased celebrities, used a well-known projection effect known as Pepper’s Ghost. Popularized in the 1860s by British scientist John Henry Pepper, the technique reflects the image of an off-stage performer onto a transparent sheet of glass interposed between the stage and audience, producing the translucent, ethereal effect ideal for depicting the untethered spirits that routinely haunted theatrical protagonists at the time.
Turns out, the technique works just as well with high-definition video feeds and LED light sources as it did with people wiggling in bedsheets by candlelight. The modern equivalent is called the "Musion Eyeliner" and, rather than a transparent sheet of glass, it uses a thin metalized film set at a 45-degree angle towards the audience. It’s how the Gorillaz played “live” at the 2006 Grammy Awards and how Tupac posthumously performed at Coachella in 2012, but the technology is limited by the size of the transparent sheet. If we’re ever going to get the Jaws 19 signage Back to the Future II promised us, we’re likely going to use arrays of fan projectors like those developed by London-based holographic startup Hypervsn.
“Holographic fans are types of displays that produce a 3-dimensional image seemingly floating in the air using the principle of POV (Persistence of Vision), using strips of RGB LEDs attached to the blades of the fan and a control-unit lighting up the pixels,” Dr Priya C, Associate Professor at Sri Sairam Engineering College, and team wrote in a 2020 study on the technology. “As the fan rotates, the display produces a full picture.”
Dr Priya C goes on to say, “Generally complex data can be interpreted more effectively when displayed in three dimensions. In the information display industry, three dimensional (3D) imaging, display, and visualization are therefore considered to be one of the key technology developments that will enter our daily life in the near future.”
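That rendering principle is simple enough to sketch in code. The snippet below is a toy illustration of POV sampling, not Hypervsn’s firmware: it maps each LED along a spinning blade to a pixel of a square source image in polar coordinates, so that lighting the right colors at the right angles traces out the full picture.

```python
import math

def blade_column(frame, num_leds, angle_rad):
    """Return the (r, g, b) color each LED along the blade should show
    at this rotation angle, by sampling a square image in polar
    coordinates: LED index maps to radius, blade angle to direction."""
    size = len(frame)              # frame is a size x size grid of colors
    cx = cy = size // 2
    max_r = size // 2 - 1
    colors = []
    for led in range(num_leds):
        r = led * max_r / max(num_leds - 1, 1)   # spread LEDs hub-to-rim
        x = int(cx + r * math.cos(angle_rad))
        y = int(cy + r * math.sin(angle_rad))
        colors.append(frame[y][x])
    return colors

# A real controller would run this for every angular step, triggered by
# a position sensor each revolution; persistence of vision does the rest.
frame = [[(0, 0, 0)] * 64 for _ in range(64)]    # placeholder 64x64 image
column = blade_column(frame, num_leds=32, angle_rad=math.pi / 4)
```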
“From a technical standpoint, the size [of a display] is just a matter of how many devices you are using and how you actually combine them,” Hypervsn Lead Product Manager, Anastasia Sheluto, told Engadget. “The biggest wall we have ever considered was around 400 devices, that was actually a facade of one building. A wall of 12 or 15 [projectors] will get you up to 4k resolution.” While the fan arrays need to be enclosed to protect them from the elements and the rest of us from getting whacked by a piece of plastic revolving at a few thousand RPMs, these displays are already finding use in museums and malls, trade shows and industry showcases.
What’s more, these projector systems are rapidly gaining streaming capabilities, allowing them to project live interactions rather than merely pre-recorded messages. Finally, Steven Van Zandt’s avatar in the ARHT Media Holographic Cube at Newark International will do more than stare like he’s not mad, just disappointed, and the digital TSA assistants of tomorrow may do more than repeat rote instructions for passing travelers as the human ones do today.
Getting Avatar Van Zandt to sound like the man it’s based on is no longer much of a difficult feat either. Advances in the field of deepfake audio, more formally known as speech synthesis, and text-to-speech AI, such as Amazon Polly or Speech Services by Google, have led to the commercialization of synthesized celebrity voiceovers.
Where once a choice between Morgan Freeman and Darth Vader reading our TomTom directions was considered bleeding-edge cool, today, companies like Speechify offer voice models from Snoop Dogg, Gwyneth Paltrow and other celebs who (or whose estates) have licensed their voice models for use. Even recording artists who haven’t given express permission for their voices to be used are finding deepfakes of their work popping up across the internet.
In Speechify’s case at least, “our celebrity voices are strictly limited to personal consumption and exclusively part of our non-commercial text-to-speech (TTS) reader,” Tyler Weitzman, Speechify Co-Founder and Head of AI, told Engadget via email. “They're not part of our Voice Over Studio. If a customer wants to turn their own voice into a synthetic AI voice for their own use, we're open to conversations.”
“Text-to-speech is one of the most important technologies in the world to advance humanity,” Weitzman continued. “[It] has the potential to dramatically increase literacy rates, spread human knowledge, and break cultural barriers.”
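Weitzman has a point about how commoditized garden-variety speech synthesis has become. As a minimal sketch, here’s stock (decidedly non-celebrity) text-to-speech through Amazon Polly, one of the services mentioned above, via AWS’s boto3 SDK; it assumes AWS credentials are already configured, and “Joanna” is one of Polly’s standard voices.

```python
# A minimal sketch of off-the-shelf text-to-speech using Amazon Polly
# (one of the services mentioned above) via the boto3 SDK. Assumes AWS
# credentials and a default region are already configured; "Joanna" is
# a stock Polly voice, not a cloned celebrity.
import boto3

polly = boto3.client("polly")
response = polly.synthesize_speech(
    Text="To be famous is to be well known for being well-known.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)

with open("quote.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```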
ElevenLabs’ Prime Voice AI software can similarly recreate near-perfect vocal clones from uploaded voice samples. The entry-level Instant Voice Cloning service requires only around a minute of audio, but doesn’t utilize actual AI model training (limiting its range of speech); an enterprise version can only be accessed after showing proof that the voice being cloned is licensed for that specific use. What’s more, “Cloning features are limited to paid accounts so if any content created using ElevenLabs is shared or used in a way that contravenes the law, we can help trace it back to the content creator,” ElevenLabs added.
The enterprise-grade service also requires nearly three hours of input data to properly train the language model, but company reps assure Engadget that “the results are almost indistinguishable from the original person’s voice.” Surely Steven Van Zandt was onscreen for that long over the course of Lilyhammer’s three-season run.
Unfortunately, the current need for expansive, preferably high-quality audio recordings on which to train an AI TTS model severely limits which celebrity personalities we’d be able to bring back. Stars and public figures from the second half of the 20th century obviously stand a far better chance of having three hours of tape available for training than, say, Presidents Jefferson or Lincoln. Sure, a user could conceivably reverse-engineer a voiceprint from historical records — ElevenLabs’ Voice Design allows users to generate unique voices with adjustable qualities like age, gender or accent — and potentially recreate Theodore Roosevelt’s signature squeaky sound, but it’ll never be quite the same as hearing the 26th President himself.
Providing something for the synthesized voices to say is proving to be a significant challenge — at least, providing something historically accurate, as the GPT-3-powered iOS app Historical Figures Chat has shown. Riding the excitement around ChatGPT, the app was billed as able to impersonate any of 20,000 famous folks from the annals of history. Despite its viral popularity in January, the app has been criticized by historians for returning numerous factual and characterization inaccuracies from its figure models. Genocidal Cambodian dictator Pol Pot at no point in his reign showed remorse for his nation’s Killing Fields, nor did Nazi general and Holocaust architect Heinrich Himmler, but even gentle prodding was enough to have their digital recreations spouting mea culpas.
“It’s as if all of the ghosts of all of these people have hired the same PR consultants and are parroting the same PR nonsense,” Zane Cooper, a researcher at the University of Pennsylvania, remarked to the Washington Post.
We can, but should we?
Accuracy issues aren’t the only challenges generative AI “ghosts” currently face, as apparently, even death itself will not save us from copyright and trademark litigation. “There's already a lot of issues emerging,” Dan Schwartz, partner and IP trial lawyer at Nixon Peabody, told Engadget. “Especially for things like ChatGPT and generative AI tools, there will be questions regarding ownership of any intellectual property on the resulting output.
“Whether it's artwork, whether it's a journalistic piece, whether it's a literary piece, whether it is an academic piece, there will be issues over the ownership of what comes out of that,” he continued. “That issue has really yet to be defined and I think we're still a ways away from intellectual property laws fully having an opportunity to address it. I think these technologies have to percolate and develop a little bit and there will be some growing pains before we get to meaningful regulation on them.”
The US Copyright Office announced in March that AI-generated art cannot be copyrighted by the user under US law, equating the act of prompting the computer to produce a desired output with asking a human artist to do the same. "When an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the 'traditional elements of authorship' are determined and executed by the technology — not the human user," the office stated.
A federal appeals court has taken a similar stance on patents. “[Patent law regarding AI] for the most part, is pretty well settled here in the US,” Schwartz said, “that an AI system cannot be an inventor of a new, patentable invention. It's got to be a human, so that will impact how people apply for patents that come out of generative AI tools.”
Output-based infringement aside, the training methods used by firms like OpenAI and Stability AI, which rely on trawling the public web for data with which to teach their models, have proven problematic as well, having repeatedly caught lawsuits for getting handsy with other people’s licensed artwork. What’s more, generative AI has already shown tremendous capacity and capability in creating illegal content. Deepfake porn ads featuring the synthetic likenesses of Emma Watson and Scarlett Johansson ran on Facebook for more than two days in March before being flagged and removed, for example.
Until the wheels of government can turn enough to catch up to these emerging technologies, we’ll have to rely on market forces to keep companies from disrupting the rest of us back into the stone age. So far, such forces have proved quick and efficient. When Google’s new Bard system immediately (but confidently) fumbled basic facts about the James Webb Space Telescope, that little whoopsie-doodle promptly wiped $100 billion off the company’s market value. The Historical Figures Chat app, similarly, is no longer available for download on the App Store, despite reportedly receiving multiple investment offers in January. It has since been replaced with numerous, similarly named clone apps.
“I think what is better for society is to have a system of liability in place so that people understand what the risks are,” Schwartz argued. “So that if you put something out there that creates racist, homophobic, anti-any protected class, inappropriate content, whoever’s responsible for making that tool available, will likely end up facing the potential of liability. And I think that's going to be pretty well played out over the course of the next year or two.”
Celebrity as an American industry
While the term “celebrity” has been around since being coined in 18th century France, during the days of Jean-Jacques Rousseau, it was the Americans in the 20th century who first built the concept into a commercial enterprise.
By the late 1920s, with the advent of talkies, the auxiliary industry of fandom was already in full swing. “You [had] fan magazines like Motion Picture Story Magazine or Photoplay that would have pictures of celebrities on the cover, have stories about celebrities behind the scenes, stories about what happened on the film set,” King explained. “So, as the film industry develops alongside this, you start to get Hollywood Studios.” And with Hollywood Studios came the star system.
“Celebrity has always been about manufacturing images, creating stories,” King said. The star system existed in the 1930s and ‘40s and did to young actors and actresses what Crypton Future Media did to Hatsune Miku: it assembled them into products, constructing synthetic personalities for them from the ground up.
Actors, along with screenwriters, directors and studio executives of the era, would coordinate to craft specific personas for their stars. “You have the ingénue or the bombshell,” King said. “The studios worked really closely with fan magazines, with their own publicity arms and with gossip columnists to tell very calculated stories about who the actors were.” This diverted focus from the film itself and placed it squarely on the constructed, steerable personas crafted by the studio — another mask for actors to wear, publicly and even after the cameras were turned off.
“Celebrity has existed for centuries and the way it exists now is not fundamentally different from how it used to be,” King added. “But it has been really amplified, intensified and made more ubiquitous because of changing industry and technological norms that have developed in the 20th and 21st centuries.”
Even after Tom Hanks is dead, Tom Hanks Prime will live forever
Between the breakneck pace of technological advancement with generative AI (including deepfake audio and video), the promise of future “touchable” plasma displays offering hard light-style tactile feedback through femtosecond laser bursts, and Silicon Valley’s gleeful disregard towards the negative public costs borne from their “disruptive” ideas, the arrival of immortal digitized celebrities hawking eczema creams and comforting lies during commercial breaks is now far more likely a matter of when, rather than if.
But what does that mean for celebrities who are still alive? How will it feel, knowing that even after the ravages of time take Tom Hanks from us, at least a lightly interactable likeness might continue to exist digitally? Does the visceral knowledge that we’ll never truly be rid of Jimmy Fallon empower us to loathe him even more?
“This notion of the simulacra of the celebrity, again, is not entirely new,” King explained. “We can point to something like the Madame Tussaud's wax museum, which is an attempt to give us a version of the celebrity, there are impersonators who dress and perform as them, so I think that people take a certain kind of pleasure in having access to an approximation of the celebrity. But that experience never fully lives up.”
“If you go and visit the Mona Lisa in the Louvre, there's a kind of aura [to the space],” she continued. “There's something intangible, almost magical about experiencing that work of art in person versus seeing a print of it on a poster or on a museum tote bag or, you know, coffee mug that it loses some of its kind of ineffable quality.”
Microsoft is no stranger to making elaborate laptop docks, but its latest may be particularly appealing if you need a genuinely robust hub for work. The company has unveiled a Surface Thunderbolt 4 Dock that, as the name implies, uses speedy Thunderbolt 4 (and hence USB 4) to connect your laptop or tablet to all your peripherals. There's enough bandwidth to drive two 4K monitors at 60Hz, as well as 96W of power, enough to recharge some demanding portable PCs.
The dock offers a healthy mix of modern and legacy ports, plus a few helpful design touches. You'll find two USB-C ports, two USB-A ports, a 3.5mm headphone jack and 2.5Gbps Ethernet on the back, as well as one USB-C and one USB-A port on the front — it shouldn't be awkward to plug in a thumb drive or phone. Tactile indicators on the back make it easier for people of various abilities to find ports by feel, while the use of 20 percent ocean-bound plastic reduces the dock's environmental impact.
Before you ask: while the dock is designed with the Surface Laptop 5, Surface Laptop Studio and Intel-based Surface Pro 9 in mind, that's not a strict requirement. Any computer with Thunderbolt 4/USB 4 ports should work. You could attach a MacBook Pro, if you're feeling ironic.
The Surface Thunderbolt 4 Dock is available today on Microsoft's store for $300. That's considerably more expensive than many laptop docks, and you may wish it had features like a full-size SD card reader. The price is on par with similarly powerful docks, though, and it may be worthwhile if you'd rather not spend valuable minutes plugging in peripherals when you sit at your desk.
Apple's latest Mac Mini has dropped to $549 at Amazon and B&H. Outside of special discounts for education customers, this matches the lowest price we've seen for the entry-level model with 8GB of RAM, a 256GB SSD and Apple's M2 chip. For reference, Apple normally sells this variant for $599.
We gave the Mac Mini with the beefier M2 Pro chip a review score of 86 earlier this year. This model won't be as powerful for video editing or software development, but the hardware is just as compact, and the base M2 is still plenty fast and quiet for web browsing, less hardcore work and general use. Just make sure that's all you want out of the device first, as, like most Macs, you can't upgrade the Mini's internals over time. And while the Mini's lack of front-facing ports is annoying, on the back it has two Thunderbolt 4 USB-C ports, two USB-A ports, an HDMI port, an Ethernet jack and a headphone jack.
As with other recent Macs, this entry-level Mac Mini technically has slower SSD performance than its predecessor, but the drop-off shouldn't be significant in real-world use, especially if you stick to the less intense tasks at which this model is aimed. If you think you'll need more storage and don't want to use an external drive, a variant with a 512GB SSD is available for $749. If you plan on using the desktop daily for the next several years, buying a model with at least 16GB of RAM may be a better value; those options start at $799. But if you just want the cheapest Mac desktop possible, the base model is still a great compact PC for the essentials, and this discount makes it a little more affordable.
Sony just made it decidedly easier to find games that accommodate people with disabilities. As of this week, the company is rolling out accessibility tags on the PlayStation Store for PS5 users. Press the triangle button when looking at a game's hub and you'll see whether a title has features to support those with visual, audio and motor needs. You'll know if a game has alternative colors, a screen reader or controller adjustments, for instance.
The tags will be generally available this week. Most of the initial support revolves around marquee games like Death Stranding Director's Cut, God of War Ragnarök and Spider-Man: Miles Morales. Sony says it's working with a "wide range of developers" to deploy tags going forward, so you can expect to see them from smaller studios.
The option comes roughly a year and a half after Microsoft unveiled similar tags for Xbox gamers. Not that PlayStation developers have been waiting for Sony to act. The Last of Us creator Naughty Dog has made a point of prioritizing accessibility in its games, such as a feature that plays dialogue through the PS5's DualSense controller as haptic feedback. In that regard, the store upgrade helps expose and promote these efforts.
Sony hasn't been standing still. The firm is developing an accessible PS5 controller that, like Microsoft's Xbox Adaptive Controller, helps people with limited motor control play games that might otherwise be unusable. The tags are just part of a broader strategy to make gaming viable for many more people — provided they can find a PS5 in the first place, of course.
A UK privacy watchdog has fined TikTok £12.7 million ($15.8 million) for what it says are several breaches of data protection laws, including how the app handled children's personal information. The Information Commissioner's Office (ICO) says that, in 2020, TikTok allowed as many as 1.4 million kids aged under 13 to use the app in breach of its own rules.
The ICO states that companies offering "information society services" to under-13s need to obtain consent from the kids' parents or guardians. TikTok didn't do that, according to the regulator, which noted the company "ought to have been aware that under-13s were using its platform." Moreover, the ICO (an independent public body) said TikTok didn't do enough to find and remove underage users from the app — despite some senior employees raising concerns about the issue.
The office determined that, between May 2018 and July 2020, TikTok breached the UK General Data Protection Regulation in several ways. Among other things, the ICO says TikTok failed to properly inform users in an easy-to-understand way how it handles and shares their data. As such, TikTok users, including kids, "were unlikely to be able to make informed choices about whether and how to engage" with the app. The office added that TikTok failed to make sure that it was processing the data it held on UK users "lawfully, fairly and in a transparent manner."
“We invest heavily to help keep under-13s off the platform and our 40,000-strong safety team works around the clock to help keep the platform safe for our community,” TikTok told ABC News. “We will continue to review the decision and are considering next steps.”
The fine is not as steep as previously expected. After publishing the preliminary findings of its TikTok investigation, which started in February 2019, the ICO warned the company in September that it faced a fine of as much as £27 million ($33.7 million). The probe started around the time the Federal Trade Commission fined TikTok $5.7 million over child privacy violations.
More recently, TikTok has faced deeper scrutiny from regulators around the globe over privacy and security worries. Some governments have raised concerns that the platform's parent company ByteDance (which is based in Beijing) may be compelled to share data on their countries' residents with Chinese officials. Last month, TikTok CEO Shou Zi Chew told a House committee that "ByteDance is not an agent of China or any other country."
Nevertheless, the app has been banned from government devices in several places, including the US, UK, Canada, New Zealand, Australia and Norway, as well as at the European Parliament. Dozens of US states have prohibited TikTok on devices they own as well. Several bills have been introduced that would give the US the power to ban the platform completely, while TikTok has claimed the White House is trying to force ByteDance to sell the app.
Peloton owners with a Samsung Galaxy Watch 5 (including the Watch 5 Pro) or Galaxy Watch 4 can now monitor their heart rate on their exercise equipment. The Peloton Wear OS app update that enables the feature begins rolling out today.
The pairing process is similar to that of the Apple Watch, which got its Peloton app in 2019 and direct heart rate support in March 2022. Once you’ve installed the Peloton app update on your Galaxy Watch, choose a workout on your exercise equipment, open the app on your wearable and follow the “Connect” prompt. You should see your heart rate synced in real time on your exercise machine. Peloton launched its Wear OS app last October, but it only showed users’ heart rates on the watch, not the workout equipment.
The update arrives as Samsung and Peloton (the latter especially) could use the strategic partnership. After years of being one of the only big-name Android smartwatches, Samsung’s flagship wearable has new competition in the Pixel Watch, which launched last October. Meanwhile, Peloton has struggled financially after a lockdown-era boom, leading to four rounds of layoffs last year that cut over half its workforce.