Posts with «music» label

YouTube Music debuts Samples, a TikTok-style feed for music discovery

In the last few years, basically every platform of consequence has made its own take on TikTok's signature scrolling feed of vertical videos. YouTube Music is the latest. Today, the app will get a vertical video feed called Samples that YouTube describes as a one-tap way to quickly sample and find new music. 

Samples lives in a new tab at the bottom of the app, alongside the home feed, your library and the explore section. Tapping Samples automatically starts a short vertical video clip of a song that YouTube Music thinks you'll like. Naturally, it's pulling songs based on your taste profile, with an emphasis on artists that you like, and related ones who you might not have heard before. 

The app already has multiple playlists that are tuned to your listening habits, including a Supermix that pulls together songs and artists from across everything you play. There's also a Discover playlist that, naturally, focuses on things you're not familiar with but might enjoy based on your history. YouTube Music product manager Gregor Dodson told Engadget that the algorithm for the Samples feed is a little different from both of those — it's trying to sit in between the two playlists, highlighting artists that you may be familiar with but pulling clips you haven't watched before.

The clips you'll see in Samples are only 30 seconds long, but that's about enough to give you a sense of the song. If it's not what you're looking for, just swipe up and you'll jump to another song, and you can do this as much as you want. Spotify added its own vertical video feed recently, but that's less for discovery and more to offer a way to quickly scroll through previews of songs in its playlists. Still, with both Spotify and YouTube Music implementing such a feature, it seems likely that we'll see more apps do the same soon.

I got to try Samples for a few days before it launched, and the thing I found most interesting was to use it as a jumping-off point for a longer listening session, something Dodson confirmed was by design. "Short form video [and] the infinite scroll have really interesting applications in music discovery," he said. "It's a simple way to discover new music, but it's not the end of the journey — it's just the start of finding a new artist or song." 


To that end, the Samples video view is overlaid with a number of controls. From any Sample, you can tap the thumbs-up to save the song to your "liked songs" playlist. You can also save the song to any playlist you've created. Between these two options, it's pretty easy to quickly scan through Samples and save a bunch of songs to check out further. And if you hear something you want to immediately play in full, just tap the big old play button in the corner of the screen.

Since this is YouTube we're talking about, there's also a "Shorts" button that pops you into the main YouTube app. It'll show you other Shorts clips created with the audio you picked and let you jump into making your own clip with the song. Finally, the share button does just what you'd think — you get a link that can drop into a host of other apps like Messages, Reddit, Facebook and whatever else you might have installed. 

In the overflow menu, you can find a lot of other options, like starting a continuous radio station from the song, jumping into the artist's page to find more music, saving the track to your library and so forth. I wish the "start radio" button were featured more prominently in the Samples view, because I found that to be a great way to jump into an auto-generated set of tunes with the same vibe as the Sample that initially caught my eye (and ear). 

My main issue with Samples is one of UI. Specifically, it's not always clear where exactly you'll end up in the app when you switch views. Tapping the Samples tab starts the video scroll, and to exit it you can tap one of the other bottom navigation tabs; tapping the "play" button opens up the familiar YouTube Music player controls. Finally, swiping back down to hide the player interface returns you to Samples. Seems pretty clear. But at one point, I had hit the play button to hear a song and then rotated my phone to landscape to see the video in full screen. When the next song started, I rotated my phone back to portrait, and the app threw me immediately back into the Samples view, with the clip of the song I had already heard still playing.

Another time, I started playing a song, and then hit the toggle at the top of the Now Playing screen to switch the video off and just hear the song. I listened to a few more songs from the auto-generated station of similar music, and then swiped down to hide the Now Playing UI and browse around the app. I was again tossed back into the Samples view. That does make some logical sense, as that's where the whole listening experience started. But I'm so used to being able to hide the Now Playing screen and browse around the app while still playing music that it was a bit jarring. That said, I could then hit the "Home" tab at the bottom and resume what I was playing. It works, but the addition of Samples does change a few of the app's expected behaviors.

These quibbles aside, Samples seems like a pretty handy addition to the YouTube Music app. In the few days I had to test it, it consistently served up music from artists I liked, and the song selections were indeed things I was usually less familiar with. The video clip aspect of it doesn't really matter to me, but given how many artists are making excellent videos these days, it's a fun reminder that those visuals are sometimes worth checking out. (Thanks, YouTube Music, for surfacing the wild video for "I Know the End" by Phoebe Bridgers.) More importantly, it did work well as a jumping-off point for digging into some artists I had forgotten about or finding a tune to set the mood for a playlist or station.

YouTube says that the Samples feature is rolling out globally starting today for both Android and iOS users. As with most new features, it might not hit your app immediately, so you might need to be a little patient.

This article originally appeared on Engadget at https://www.engadget.com/youtube-music-debuts-samples-a-tiktok-style-feed-for-music-discovery-160007555.html?src=rss

Instagram's musical photo carousels are a lot like TikTok's Photo Mode

Instagram now lets you add music to photo carousels. Unveiled in partnership with pop star Olivia Rodrigo to promote her single “bad idea right?”, the feature allows you to pick licensed music to soundtrack your slideshows. In addition, the company announced that you can create Collabs with up to three co-authors and post audience-response prompts to Reels.

The carousel soundtracking feature adds a missing piece already found in TikTok’s Photo Mode, launched last year. “Whether you’re sharing a collection of summer memories with friends or moments from your camera roll, you can now add music to your photo carousels,” Instagram wrote in a blog post today. “Building off our launch of music for feed photos, anyone can add a song to capture the mood and bring their carousel to life.”

Also announced today, Instagram Collabs adds the ability to invite up to three friends (up from one) to help co-author feed posts, carousels or reels. The platform says each contributor’s audience will see the content (perhaps hinting that it could be a handy way for influencers to benefit from each other’s followings), and the post will appear on each account’s profile grid. In addition, the company says private profiles can still start posts and reels and invite collaborators, as long as the collaborators follow the private account.


Instagram also updated how the Add Yours sticker works. When a creator adds the new Add Yours prompt to a Reel and followers contribute content as a response, the creator can now highlight their favorite posted replies for all their followers to see. “With the Add Yours sticker, a creator or artist can invite their followers to join in on a fun prompt or challenge they create on Reels, and then hand-pick their favorite submissions to celebrate their fans’ creativity.” It essentially sounds like a way to use the human social desire to connect with high-status figures (especially celebrities like Rodrigo) to build engagement for creators and the platform as a whole.

Finally, Instagram noted that it’s bringing its music library “to more countries over the coming weeks,” although it hasn’t yet announced specific nations or dates. However, it did mention that Instagram is partnering with Spotify in Mexico and Brazil to showcase 50 of the most popular songs on Instagram Reels on the music platform’s Reels Music Chart.

This article originally appeared on Engadget at https://www.engadget.com/instagrams-musical-photo-carousels-are-a-lot-like-tiktoks-photo-mode-174008037.html?src=rss

Apple Music will help you find new songs and artists with Discovery Station

Apple has quietly launched a new feature for its music streaming service that could help you expand your playlists and find new artists to listen to. It's a personalized radio station called "Discovery Station," which picks the songs it thinks you'd be into from Apple Music's catalog. As Apple Insider notes, the tech giant's music service hasn't gone all in on algorithmic recommendations like Spotify, which has several playlists that can generate mixes based on your listening habits. 

An Apple spokesperson told us that Discovery Station will only play music you haven't played on the service before from both familiar artists and potentially unfamiliar ones it thinks you might like. And since its main purpose is to help you discover new music, it will never play the same song twice and will play continuously until you stop it. Like other playlists that use algorithms to recommend tracks, Discovery Station also bases its suggestions on your activity and will keep changing as your taste evolves. 

The feature is now live around the world. If you're an existing subscriber, you can access it by going to your Listen Now page and checking out the Stations for You section. If you don't have a subscription, it will cost you at least $5 a month in the US for an audio-only plan or at least $11 a month if you want access to Apple Music's video programming and other features, such as lossless audio and Dolby Atmos. 

This article originally appeared on Engadget at https://www.engadget.com/apple-music-will-help-you-find-new-songs-and-artists-with-discovery-station-051205049.html?src=rss

TikTok expands its music streaming service test to Australia, Mexico and Singapore

TikTok has started inviting users in Australia, Mexico and Singapore to participate in a closed beta test for its new music streaming service, according to TechCrunch and CNBC. The short-form video hosting app initially launched beta testing for its fledgling streaming service in Brazil and Indonesia in early July. Now, it's expanding the scope of the experiment and giving invited users in those regions a free three-month trial to try it out. 

TikTok Music is a completely separate app that testers will be able to download from the Apple App Store or the Google Play Store. It does, however, connect to the main TikTok app, so users can find the full versions of songs that go viral on the video-sharing platform. The music streaming app reportedly offers personalized song recommendations, real-time lyrics, collaborative playlists and the ability to find songs through a lyrics search feature, as well. TechCrunch says it has a Shazam-like feature, which presumably means it can identify songs by listening to them, and will let users download tracks for offline listening. 

The ByteDance-owned app told TechCrunch that once the testers' trial period is done, it will cost them AUD12 (US$8.16) per month in Australia, Mex$115 (US$6.86) in Mexico and S$9.90 (US$7.48) in Singapore to keep using the service. TikTok already has a music streaming service called Resso available in India, Brazil and Indonesia, but it's shutting the app down in the last two countries in September. The company has yet to announce if and when its music app is also coming to the US, but it did file a trademark application for "TikTok Music" in the country back in May 2022. 

This article originally appeared on Engadget at https://www.engadget.com/tiktok-expands-its-music-streaming-service-test-to-australia-mexico-and-singapore-055121108.html?src=rss

AI-generated music won’t win a Grammy anytime soon

It looks like Fake Drake won’t be taking home a Grammy. Recording Academy CEO Harvey Mason Jr. said this week that although the organization will consider music with limited AI-generated voices or instrumentation for award recognition, it will only honor songs written and performed “mostly by a human.”

“At this point, we are going to allow AI music and content to be submitted, but the Grammys will only be allowed to go to human creators who have contributed creatively in the appropriate categories,” Mason said in an interview with Grammy.com. “If there’s an AI voice singing the song or AI instrumentation, we’ll consider it. But in a songwriting-based category, it has to have been written mostly by a human. Same goes for performance categories – only a human performer can be considered for a Grammy. If AI did the songwriting or created the music, that’s a different consideration. But the Grammy will go to human creators at this point.”

The CEO’s comments mean the fake Drake / The Weeknd song “Heart on My Sleeve,” which went viral earlier this year before getting wiped from streaming platforms over copyright takedowns, wouldn’t be eligible. Another AI-generated scammer sold fake Frank Ocean tracks in April for a reported CAD 13,000 ($9,722 in US dollars), while Spotify has been busy purging tens of thousands of AI-made songs from its library.

On the other hand, it raises questions about whether artists like Holly Herndon, who used an AI version of her voice for a cover of Dolly Parton’s “Jolene,” would be eligible. (The AI-generated performance would suggest not, but would the fact that it’s her own voice make a difference?) Or, for that matter, there’s the upcoming “final” Beatles track that Paul McCartney says will use AI to isolate a garbled recording of John Lennon’s voice. And would Taryn Southern, who (also transparently) used AI to co-produce her 2018 debut album, be eligible? We reached out to the Recording Academy for clarification about these examples and will update this article if they respond.

Awards or not, Mason acknowledged that AI would upend the music industry. “AI is going to absolutely, unequivocally have a hand in shaping the future of our industry,” Mason said. “So, we have to start planning around that and thinking about what that means for us. How can we adapt to accommodate? How can we set guardrails and standards? There are a lot of things that need to be addressed around AI as it relates to our industry.” The CEO added that the Recording Academy recently held a summit “with industry leaders, tech entrepreneurs, streaming platforms, and people from the artist community” to discuss AI’s future. “We talked about the subject and discussed how the Recording Academy can be helpful: how we can play a role and the future of AI in music.”

This article originally appeared on Engadget at https://www.engadget.com/ai-generated-music-wont-win-a-grammy-anytime-soon-211855194.html?src=rss

Paul McCartney is using AI to create a final song for The Beatles

AI-assisted vocals aren't just for bootleg songs. Paul McCartney has revealed to BBC Radio 4 that he's using AI to turn a John Lennon demo into one last song for The Beatles. The technology helped extract Lennon's voice to get a "pure" version that could be mixed into a finished composition. The piece will be released later this year, McCartney says.

McCartney didn't name the song, but it's believed to be "Now and Then," a 1978 love song Lennon put on cassettes meant for the other former Beatle. The Guardian notes the tune was considered for release as a reunion song alongside tracks that did make the cut, such as "Free As A Bird," but there wasn't much to it — just a chorus, a crude backing track and the lightest of verses. The Beatles rejected it after George Harrison thought it was bad, and the electrical buzz from Lennon's apartment didn't help matters.

The inspiration for the revival came from dialogue editor Emile de la Rey's work on the Peter Jackson documentary Get Back, where AI separated the Beatles' voices from instruments and other sounds. The tech provides "some sort of leeway" for producing songs, McCartney adds.

To date, music labels typically haven't been fond of AI due to copyright clashes. Creators have used algorithms to have famous artists "sing" songs they never actually produced, such as a recently pulled fantasy collaboration between Drake and The Weeknd. This, however, is different — McCartney is using AI to salvage a track that otherwise wouldn't have reached the public. It won't be surprising if other artists use the technique to recover work that would otherwise sit in an archive.

This article originally appeared on Engadget at https://www.engadget.com/paul-mccartney-is-using-ai-to-create-a-final-song-for-the-beatles-133839244.html?src=rss

Meta's open-source MusicGen AI uses text to create song genre mashups

Meta's Audiocraft research team has just released MusicGen, an open source deep learning language model that can generate new music based on text prompts and even be aligned to an existing song, The Decoder reported. It's much like ChatGPT for audio, letting you describe the style of music you want, optionally drop in an existing tune and then click "Generate." After a good chunk of time (around 160 seconds in my case), it spits out a short piece of all-new music based on your text prompt and melody. 

The demo, hosted on Hugging Face, lets you describe your music, providing a handful of examples like "an 80s driving pop song with heavy drums and synth pads in the background." You can then "condition" that on a given song up to 30 seconds long, with controls letting you select a specific portion of it. Then, you just hit generate and it renders a high-quality sample up to 12 seconds long. 

We present MusicGen: A simple and controllable music generation model. MusicGen can be prompted by both text and melody.
We release code (MIT) and models (CC-BY NC) for open research, reproducibility, and for the music community: https://t.co/OkYjL4xDN7 pic.twitter.com/h1l4LGzYgf

— Felix Kreuk (@FelixKreuk) June 9, 2023

The team used 20,000 hours of licensed music for training, including 10,000 high-quality music tracks from an internal dataset, along with Shutterstock and Pond5 tracks. To make it faster, they used Meta's 32kHz EnCodec audio tokenizer to generate smaller chunks of music that can be processed in parallel. "Unlike existing methods like MusicLM, MusicGen doesn't require a self-supervised semantic representation [and has] only 50 auto-regressive steps per second of audio," wrote Hugging Face ML engineer Ahsen Khaliq in a tweet.

Last month, Google released a similar music generator called MusicLM, but MusicGen seems to generate slightly better results. On a sample page, the researchers compare MusicGen's output with MusicLM and two other models, Riffusion and Mousai, to prove that point. It can be run locally (a GPU with at least 16GB of RAM is recommended) and is available in four model sizes, from small (300 million parameters) to large (3.3 billion parameters) — with the latter having the greatest potential for producing complex music. 

As mentioned, MusicGen is open source and can even be used to generate commercial music (I tried it with "Ode to Joy" and several suggested genres and the results above were... mixed). Still, it's the latest example of the breathtaking speed of AI development over the past half year, with deep learning models threatening to make incursions into yet another genre. 

This article originally appeared on Engadget at https://www.engadget.com/metas-open-source-musicgen-ai-uses-text-to-create-song-genre-mashups-114030499.html?src=rss

Hitting the Books: How music chords hack your brain to elicit emotion

Johnny Cash's Hurt hits way different in A Major, as much so as Ring of Fire in G Minor. The dissonance in tone between the chords is, ahem, a minor one: simply the third note lowered to a flat. But that change can fundamentally alter how a song sounds, and what feelings that song conveys. In their new book Every Brain Needs Music: The Neuroscience of Making and Listening to Music, Dr. Larry S. Sherman, professor of neuroscience at the Oregon Health and Science University, and Dr. Dennis Plies, a music professor at Warner Pacific University, explore the fascinating interplay between our brains, our instruments, our audiences, and the music they make together. 


Excerpted from Every Brain Needs Music: The Neuroscience of Making and Listening to Music by Larry S. Sherman and Dennis Plies published by Columbia University Press. Copyright (c) 2023 Columbia University Press. Used by arrangement with the Publisher. All rights reserved.


The Minor Fall and The Major Lift: Sorting Out Minor and Major Chords

Another function within areas of the secondary auditory cortex involves how we perceive different chords. For example, part of the auditory cortex (the superior temporal sulcus) appears to help distinguish major from minor chords.

Remarkably, from there, major and minor chords are processed by different areas of the brain outside the auditory cortex, where they are assigned emotional meaning. For example, in Western music, minor keys are perceived as “serious” or “sad” and major keys are perceived as “bright” or “happy.” This is a remarkable response when you think about it: two or three notes played together for a brief period of time, without any other music, can make us think “that is a sad sound” or “that is a happy sound.” People around the world have this response, although the tones that elicit these emotions differ from one culture to another. In a study of how the brain reacts to consonant chords (notes that sound “good” together, like middle C and the E and G above middle C, as in the opening chord of Billy Joel’s “Piano Man”), subjects were played consonant or dissonant chords (notes that sound “bad” together) in the minor and major keys, and their brains were analyzed using a method called positron emission tomography (PET). This method of measuring brain activity is different from the fMRI studies we discussed earlier. PET scanning, like fMRI, can be used to monitor blood flow in the brain as a measure of brain activity, but it uses tracer molecules that are injected into the subjects’ bloodstreams. Although the approach is different, many of the caveats we mentioned for fMRI studies also apply to PET studies. Nonetheless, these authors reported that minor chords activated an area of the brain involved in reward and emotion processing (the right striatum), while major chords induced significant activity in an area important for integrating and making sense of sensory information from various parts of the brain (the left middle temporal gyrus). These findings suggest the locations of pathways in the brain that contribute to a sense of happiness or sadness in response to certain stimuli, like music.

Don't Worry, Be Happy (or Sad): How Composers Manipulate our Emotions

Although major and minor chords by themselves can elicit “happy” or “sad” emotions, our emotional response to music that combines major and minor chords with certain tempos, lyrics, and melodies is more complex. For example, the emotional link to simple chords can have a significant and dynamic impact on the sentiments in lyrics. In some of his talks on the neuroscience of music, Larry, working with singer, pianist, and songwriter Naomi LaViolette, demonstrates this point using Leonard Cohen’s widely known and beloved song “Hallelujah.” Larry introduces the song as an example of how music can influence the meaning of lyrics, and then he plays an upbeat ragtime, with mostly major chords, while Naomi sings Cohen’s lyrics. The audience laughs, but it also finds that the lyrics have far less emotional impact than when sung to the original slow-paced music with several minor chords.

Songwriters take advantage of this effect all the time to highlight their lyrics’ emotional meaning. A study of guitar tablatures (a form of writing down music for guitar) examined the relationship between major and minor chords paired with lyrics and what is called emotional valence: In psychology, emotions considered to have a negative valence include anger and fear, while emotions with positive valence include joy. The study found that major chords are associated with higher-valence lyrics, which is consistent with previous studies showing that major chords evoke more positive emotional responses than minor chords. Thus, in Western music, pairing sad words or phrases with minor chords, and happy words or phrases with major chords, is an effective way to manipulate an audience’s feelings. Doing the opposite can, at the very least, muddle the meaning of the words but can also bring complexity and beauty to the message in the music.

Manipulative composers appear to have been around for a long time. Music was an important part of ancient Greek culture. Although today we read works such as Homer’s Iliad and Odyssey, these texts were meant to be sung with instrumental accompaniment. Surviving texts from many works include detailed information about the notes, scales, effects, and instruments to be used, and the meter of each piece can be deduced from the poetry (for example, the dactylic hexameter of Homer and other epic poetry). Armand D’Angour, a professor of classics at Oxford University, has recently recreated the sounds of ancient Greek music using original texts, music notation, and replicated instruments such as the aulos, which consists of two double-reed pipes played simultaneously by a single performer. Professor D’Angour has organized concerts based on some of these texts, reviving music that has not been heard for over 2,500 years. His work reveals that the music then, like now, uses major and minor tones and changes in meter to highlight the lyrics’ emotional intent. Simple changes in tones elicited emotional responses in the brains of ancient Greeks just as they do today, indicating that our recognition of the emotional value of these tones has been part of how our brains respond to music deep into antiquity.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-every-brain-needs-music-sherman-piles-columbia-university-press-143039604.html?src=rss

Apple now helps you discover concerts in Maps and Music

Apple wants to go beyond streaming live music to helping you find it in real life. The company has added concert discovery features to both Apple Maps and Apple Music. In Maps, you'll find over 40 curated "Guides" that spotlight hot concert venues in 14 major cities around the world, such as a techno club in Brooklyn and symphony halls in Vienna. This could help you decide where to go when you're new in town, or highlight an unfamiliar scene. You can also browse upcoming shows at those venues through a Shazam discovery module that taps info from Bandsintown.

Apple Music, meanwhile, also includes the Shazam module to let you browse a musician's upcoming shows. If a favorite artist is playing soon, this could help you land tickets. There's also a Set Lists section where you can listen to tracks played at certain tours (such as Sam Smith's and Kane Brown's) while learning about the productions.

Both experiences are available today. The additions aren't completely surprising. Apple has long emphasized human curation in Music, such as many of its custom playlists and DJ mixes. The integrations expand on that strategy to cover in-person gigs. Maps has also had curated Guides for food, shopping and travel. A coordinated push for Maps and Music is relatively rare, though — the company is clearly betting that it can raise interest in both services by using concerts as a hook.

This article originally appeared on Engadget at https://www.engadget.com/apple-now-helps-you-discover-concerts-in-maps-and-music-140059227.html?src=rss

Scammers used AI-generated Frank Ocean songs to steal thousands of dollars

More AI-generated music mimicking a famous artist has made the rounds — while making lots of money for the scammer passing it off as genuine. A collection of fake Frank Ocean songs sold for a reported $13,000 CAD ($9,722 in US dollars) last month on a music-leaking forum devoted to the Grammy-winning singer, according to Vice. If the story sounds familiar, it’s essentially a recycling of last month’s AI Drake / The Weeknd fiasco.

As generative AI takes the world by storm — Google just devoted most of its I/O 2023 keynote to it — people eager to make a quick buck through unscrupulous means are seizing the moment before copyright laws catch up. It’s also caused headaches for Spotify, which recently pulled not just Fake Drake but tens of thousands of other AI-generated tracks after receiving complaints from Universal Music.

The scammer, who used the handle mourningassasin, told Vice they hired someone to make “around nine” Ocean songs using “very high-quality vocal snippets” of the “Thinkin Bout You” singer’s voice. The user posted a clip from one of the fake tracks to a leaked-music forum and claims to have quickly convinced its users of its authenticity. “Instantly, I noticed everyone started to believe it,” mourningassasin said. The fact that Ocean hasn’t released a new album since 2016 and recently teased an upcoming follow-up to Blond may have added to the eagerness to believe the songs were real.

The scammer claims multiple people expressed interest in private messages, offering to “pay big money for it.” They reportedly fetched $3,000 to $4,000 for each song in mid to late April. The user has since been banned from the leaked-music forum, which may be having an existential crisis as AI-generated music makes it easier than ever to produce convincing knockoffs. “This situation has put a major dent in our server’s credibility, and will result in distrust from any new and unverified seller throughout these communities,” said the owner of a Discord server where the fake tracks gained traction.

This article originally appeared on Engadget at https://www.engadget.com/scammers-used-ai-generated-frank-ocean-songs-to-steal-thousands-of-dollars-222042845.html?src=rss