Spotify is waving goodbye to its heart icon. The company is combining the icon, which enables you to quickly save music to your library, with the "add to playlist" prompt under a single plus button.
The plus button works a little differently on Spotify than it does on Apple Music, where tapping it a second time on an album or playlist downloads it to your device. When you tap Spotify's plus button once, you'll add a song, playlist, podcast or audiobook to your library. The plus button will then turn into a green check. If you tap the checkmark on the Now Playing view, you'll be able to add the song or podcast episode to a playlist rather than just saving it to your Liked Songs or Your Episodes.
Spotify says the plus button will streamline how folks save songs and podcasts. It wrote in a blog post that user research showed the button "helped save time and gave users the ability to add to multiple playlists at once." It could come in useful, for instance, if you're listening to a radio station or Discover Weekly and encounter a song you like that would work well on one or more of your playlists.
The plus button is starting to roll out on Spotify's iOS and Android apps today. It'll be available to all Spotify users in the coming weeks.
You may not know the name Tohru Okada, but if you've ever owned a PlayStation console, you'll be familiar with one of his most iconic creations: the sound that plays every time the PS logo appears. According to Japanese-language sources (via GameSpot), Okada passed away on the 14th due to heart failure. He was 73 years old. Okada was reportedly hospitalized early last year due to a compression fracture and was undergoing rehabilitation in hopes of performing at a music festival in April.
In addition to the PlayStation logo sound, Okada also composed the music for a series of Crash Bandicoot advertisements that aired in the '90s, as well as for some anime titles like Mobile Suit SD Gundam. For long-time fans of Japanese rock, though, he was more than just a game and anime composer. He was the keyboardist for a rock band called Moonriders, where he played with Keiichi Suzuki, who made music for Nintendo's Mother series that's also known as EarthBound outside Japan.
While the PS "bing" sound is short and unobtrusive, Sony has been using it for over 25 years. You can watch the video below to hear what it sounded like for various PS ads and loading screens from the very first PlayStation.
Generative AI is absolutely everywhere right now, so it’s no surprise to see Spotify putting it to use in its latest feature, simply called “DJ.” It’s a new way to immediately start a personalized selection of music, combining the personalization tools behind playlists like Discover Weekly and your home screen recommendations with some AI tricks. I got early access to DJ and have been playing with it for the last day to see how Spotify’s latest take on personalized music works, but the feature is available as of today in beta for all premium subscribers in the US and Canada.
While Spotify has loads of personalized playlists for users, I’ve found that the app lacks a simple way to tell it to just play some music you like. On Apple Music, for example, I can ask Siri to play music I like and it’ll start a personalized radio station based on music I’ve played alongside some things it thinks I’ll enjoy but haven’t played before. It’s a reliable way to jump right into my collection. In the same vein, Spotify’s DJ pulls together a mix of songs you’re currently listening to, old favorites you might have forgotten, and new tunes that fit in with what it thinks you’ll like.
The AI twist to DJ comes in the form of a literal DJ, which speaks to you in an AI voice generated by Sonantic, a startup that Spotify bought last year with a focus on generating realistic speech. In this case, the DJ’s voice model was trained on the voice of a real human, Spotify’s own Head of Cultural Partnerships, Xavier “X” Jernigan. Jernigan hosted “The Get Up,” Spotify’s morning show that combined recorded segments with music tailored to your tastes.
The DJ’s voice is generated through AI, and so are the things it says to you. When you first kick off a DJ session, you’ll get a quick overview of what you might expect to hear. For example, the first time I started up DJ, “X” came on and told me that it was a DJ designed for music and that it knew what I liked and for starters it was going to play me some Jenny Lewis. Sure enough, Lewis’s “Do Si Do” kicked things off, along with a few other songs with a similar vibe. At the top of the now playing screen, you’ll see a little info on how the song was picked, like “based on recent listening,” “throwbacks,” “recommended for you” or “from your past.”
Once you start a segment, you’ll generally hear a handful of songs that fit into the category, but if you want to change things up you can just tap the DJ button in the lower right corner of the now playing screen. At that point, X the DJ pops back up to give you some info about what’s coming up next. When I just tapped it, X said, “OK, changing it up. Here are our editor’s picks for the best in hard rock this week, starting with Motionless In White.”
Spotify says that none of the dialog you hear from X is pre-recorded; it’s all generated on the fly by OpenAI. However, the company wanted to make it clear that it looks at generative AI as a tool for its music editors, not something that it is just trusting to get everything right. Spotify’s VP of personalization Ziad Sultan told Engadget in a product demo that the company put together a “writers room” of script writers, music editors, data curators and engineers, all of whom are working together to make sure that the bits of info that the AI DJ drops are useful, accurate and relevant to the music you’re hearing.
Sultan stressed that Spotify’s usage was a lot different than implementations like free-form text, image generation and other such AI use cases. “We’ve built a very specific use case, and we’ve made a few choices about how it’ll be implemented,” he said. “The most important one is the creation of that writer’s room – we’re taking this [AI] tool and putting it into the hands of music experts.”
What’ll make Spotify’s DJ work or fail is whether it can pull up music you want to hear. From that perspective, Spotify isn’t doing anything wildly different than it already does: analyzing your listening history and finding stuff it knows you like and things it thinks you’ll enjoy. And as with everything else you do on Spotify, your DJ usage will be analyzed so that it can get better at serving you tunes you want to listen to. At the beginning, anyway, the AI DJ aspects are being used as small augmentations to a personalized music channel – and as long as Spotify can continue to know what songs you love and which ones you’re likely to fall in love with, DJ should be a useful addition.
Almost every music streaming service on the market offers a radio feature, allowing you to create an automatically generated playlist around a song or artist you love. For the most part, however, those features don’t offer a lot of flexibility. You pick a single song or artist and the platform does the rest – as is the case with Spotify and Apple Music.
Google has begun rolling out a redesigned radio feature on YouTube Music that the company claims provides users with a lot more control over their listening experience. Among the additions is the ability to pick up to 30 artists when creating your own radio station. You can also decide how frequently those artists repeat and apply filters that change the mood of the resulting playlist. For instance, a few of the selections include “chill,” “downbeat” and “pump-up.”
It’s also possible to adjust the parameters you set after creating a station by tapping the “Tune” option that appears at the bottom of the interface once you’re listening to your new playlist. Naturally, you can save the station to revisit it later. Once the new experience is available on your device, you will see a prompt in the main interface that says “Create a radio.” As with many of Google’s rollouts, it may take some time before you see the feature on your client.
On its own, it’s fair to say the feature won’t be enough to convince some to ditch Spotify or Apple Music for YouTube Music, but if you’re among the 50 million subscribers Google says have access to the service, it may prompt you to use it more frequently or convert the free trial you got with your phone into a paid subscription.
Some of us are destined to lead successful lives thanks to the circumstances of our birth. Some of us, like attorney Bruce Jackson, are destined to lead such lives in spite of them. Raised in New York's Amsterdam housing projects and subjected to the daily brutalities of growing up a black man in America, Jackson nonetheless found a tempered kind of success. Sure, he went on to study at Georgetown Law before representing some of the biggest names in hip hop — LL Cool J, Heavy D, the Lost Boyz and Mr. Cheeks, SWV, Busta Rhymes — and working 15 years as Microsoft's associate general counsel. But at the end of the day, he is still a black man living in America, with all the baggage that comes with it.
In his autobiography, Never Far from Home (out now from Atria), Jackson recounts the challenges he has faced in life, of which there is no shortage: from being falsely accused of robbery at age 10, to witnessing the murder of his friend at 15, to spending a night in lockup as an adult for the crime of driving his own car; the shock of navigating Microsoft's lilywhite workforce following years spent in the entertainment industry; and the end of a loving marriage brought low by his demanding work. While Jackson's story is ultimately one of triumph, Never Far from Home reveals a hollowness, a betrayal, of the American Dream that people of Bill Gates' (and this writer's) complexion will likely never have to experience. In the excerpt below, Jackson recalls his decision to leave a Napster-ravaged music industry for the clammy embrace of Seattle and the Pacific Northwest.
In the late 1990s, the digital revolution pushed the music business into a state of flux. And here was Tony Dofat, sitting in my office, apoplectic, talking about how to stop Napster and other platforms from taking the legs out from under the traditional recording industry.
I shook my head. “If they’re already doing it, then it’s too late. Cat’s out of the bag. I don’t care if you start suing people, you’re never going back to the old model. It’s over.”
In fact, the lawsuits spearheaded by Metallica and others, the chosen mode of defense in those early days of the digital music onslaught, only served to embolden consumers and publicize their cause. “Free music for everyone!” won the day.
These were terrifying times for artists and industry executives alike. A decades-old business model had been built on the premise that recorded music was a salable commodity.
Artists would put out a record and then embark on a promotional tour to support that record. A significant portion of a musician’s income (and the income of the label that supported the artist) was derived from the sale of a physical product: recorded albums (or singles), either in vinyl, cassette, or compact disc. Suddenly, that model was flipped on its head... and still is. Artists earn a comparative pittance from downloads or streams, and most of their revenue is derived from touring, or from monetizing social media accounts whose numbers are bolstered by a song’s popularity. (Publicly, Spotify has stated that it pays artists between $.003 and $.005 per stream. Translation: 250 streams will result in revenue of approximately one dollar for the recording artist.)
Thus, the music itself has been turned primarily into a marketing tool used to entice listeners to the product: concert and festival tickets, and a social media advertising platform. It is a much tougher and leaner business model. Additionally, it is a model that changed the notion that record labels and producers needed only one decent track around which they could build an entire album. This happened all the time in the vinyl era: an artist came up with a hit single, an album was quickly assembled, often with filler that did not meet the standard established by the single. Streaming platforms changed all of that. Consumers today seek out only the individual songs they like, and do so for a fraction of what they used to spend on albums. Ten bucks a month gets you access to thousands of songs on Spotify or Pandora or Apple Music – roughly the same amount a single album cost in the pre-streaming era. For consumers, it has been a landmark victory (except for the part about artists not being able to create art if they can’t feed themselves); for artists and record labels, it has been a catastrophic blow.
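As an aside, the parenthetical payout math above checks out: the 250-streams-per-dollar translation corresponds to the midpoint of Spotify’s stated range. A quick sketch, using only the rates quoted in the passage:

```python
# Spotify's publicly stated per-stream payout range, per the passage above.
LOW_RATE, HIGH_RATE = 0.003, 0.005  # dollars per stream

def streams_per_dollar(rate: float) -> float:
    """Number of streams needed to earn one dollar at a given rate."""
    return 1.0 / rate

MID_RATE = (LOW_RATE + HIGH_RATE) / 2  # $0.004 per stream

print(round(streams_per_dollar(LOW_RATE)))   # 333 streams at the low end
print(round(streams_per_dollar(HIGH_RATE)))  # 200 streams at the high end
print(round(streams_per_dollar(MID_RATE)))   # 250 streams at the midpoint
```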
For everyone connected to the music business, it was a shock to the system. For me, it was provocation to consider what I wanted to do with the next phase of my career. In early 2000, I received a call from a corporate recruiter about a position with Microsoft, which was looking for an in-house counsel with a background in entertainment law — specifically, to work in the company’s burgeoning digital media division. The job would entail working with content providers and negotiating deals in which they would agree to make their content — music, movies, television shows, books — available to consumers via Microsoft’s Windows Media Player. In a sense, I would still be in the entertainment business; I would be spending a lot of time working with the same recording industry executives with whom I had built prior relationships.
But there were downsides, as well. For one thing, I was recently married, with a one-year-old baby and a stepson, living in a nice place in the New York City suburbs. I wasn’t eager to leave them—or my other daughters—three thousand miles behind while I moved to Microsoft’s headquarters in the Pacific Northwest. From an experience standpoint, though, it was almost too good an offer to turn down.
Deeply conflicted and at a crossroads in my career, I solicited advice from friends and colleagues, including, most notably, Clarence Avant. If I had to name one person who has been the most important mentor in my life, it would be Clarence, “the Black Godfather.” In an extraordinary life that now spans almost ninety years, Clarence has been among the most influential men in Black culture, music, politics, and civil rights. It’s no surprise that Netflix’s documentary on Clarence featured interviews with not just a who’s who of music and entertainment industry superstars, but also former US presidents Barack Obama and Bill Clinton.
In the early 1990s, Clarence became chairman of the board of Motown Records. As lofty a title as that might be, it denotes only a fraction of the wisdom and power he wielded. When the offer came down from Microsoft, I consulted with Clarence. Would I be making a mistake, I wondered, by leaving the music business and walking away from a firm I had started? Clarence talked me through the pros and cons, but in the end, he offered a steely assessment, in a way that only Clarence could.
“Son, take your ass to Microsoft, and get some of that stock.”
Ever found yourself turning down the radio so you can focus on finding a parking spot? The music didn’t stop you from seeing, but it was taking up tangible mental resources. But what if you had a way to immediately make the music more calming? Or to change that distracting string section? That, effectively, is the promise of Aimi’s interactive music player app. It won’t help you find a parking spot, though; you’re on your own with that.
If the name Aimi sounds familiar, that’s because its self-described “generative music platform” has been available online for a while. What’s new is the mobile app, launching in beta today with 5,000 slots open globally. The mobile experience takes the endless mood-based music feeds from the Aimi website and adds the option to tweak them to your heart’s content. It’s not a full-bore music making app, more of a tailored soundtrack for when you want a certain vibe, or as Aimi calls them: Experiences. The basic app will be free, but unlocking the majority of those controls will cost $10 a month.
The app offers experiences with names such as Serenity, Flow, Electronica and Push. Each gives a clear hint at what the vibe is and there are 10 of them at launch. The slowest, Serenity, starts at 64 BPM and they ratchet up to Push’s time-honored throb of 128 BPM.
As a listener, you could just open one of the experiences, tap play and go about your business. The idea is that if what the app serves up isn’t quite what you wanted, you can mash the shuffle button and it’ll reconfigure the track with new sounds and energy. Or maybe you liked it, so there’s a thumbs-up option to tell it “more of this please.” That’s the most basic use case, which is also the extent of the free tier – but you can take it a few steps further with a subscription.
For premium users, once you have an experience playing, swiping left will give much more detailed control. The first screen shows a cluster of circles, each one labeled after a musical part (Beats, FX, Bass and so on). Hold down one of these circles and, for as long as it’s active, it’ll solo just that part. If you tap a circle, you’ll enter a submenu where you can adjust the volume of that part, along with a shuffle option for just that element and more thumbs up/down.
If you swipe left one more time, you’ll find a selection of sliders which vary from experience to experience, but tend to include “Intensity,” “Progression,” “Vocals” and “Texture.” It’s here that you can tell the app to do things like add a little intensity, mix things up more often or deliver more (or fewer) vocals. The changes are usually quite subtle – it’s more re-adjusting than remixing. These settings are remembered, too, so the next time you fire up that experience it’ll be to your taste. Or, at least, the taste you had the last time you listened to it.
All the music on offer here is of the electronic variety. And despite the relatively wide range of BPMs, there’s definitely a thread that runs through them. That’s to say, this isn’t genre-hopping in the sense that you might want a Hip Hop vibe before moving over to some Indie and back to EDM. It’s more like being at a large House club with different areas with different BPMs along with a few well-stocked chill out rooms.
According to the company, the musical loops in Aimi are created by a pool of over 150 artists including some big names like Carl Cox. Once the loops are fed into the platform, AI takes over to match the pitch, BPM and general vibe. Theoretically, you have an endless radio station of music you can interact with, and the library is set to keep growing over time. Let’s hope that includes some other genres. Hip Hop and anything with a breakbeat would instantly provide a shot of different energy here, for example. Likewise, something on the more acoustic side of things would at least provide an option for those less into electronic music.
Generative music has seen an increase in interest in recent years as the technology has developed enough to make it more fluid than just burping up clips that are in time and in key. Mostly this has been focused on the headspace area: meditative apps, concentration soundtracks and so on. Aimi’s main rivals here would include Endel ($15 a month) and Brain.fm ($7 a month).
While Aimi does occupy this space too, its emphasis on interactivity with its mood-based streams sets it apart. In fact, Aimi CEO Edward Balassanian sees it as a gateway for the musically curious. “One of the strengths of generative music is that we can use it to attract casual listeners with continuous music experiences and then introduce them to interactive music by letting them take ownership of their music experience,” he told Engadget.
This hints at a broader plan. Right now there’s the linear player on Aimi.fm and the new interactive app launching today. In the future, there will also be Aimi Studio, which Balassanian says will be released this summer. “Once we get you hooked on interacting with music through our player, we want you to feel inspired to try making music using Aimi studio. Aimi studio will be offered in both basic and pro editions for everyone from aspiring amateurs to professionals,” he added.
I’m uncertain whether this will appeal to users of something like Note by Ableton or Maschine by Native Instruments. The actual amount of impact you can have on the music in Aimi is very limited, as you’re effectively just giving nudges to the AI rather than being directly hands-on. Likewise, soloing parts in the app isn’t immediate, which means that if you were hoping to remix on the fly, DJ-style, by cutting the bass and beats before dropping them back in on the next phrase, it’s not really designed for that.
Likewise, sometimes you can find yourself distracted by the very thing that’s meant to help you focus. When I tried the “Flow” stream, the first “idea” it presented was actually a bit irritating to me, so it served the opposite purpose. Of course, I could shuffle it to something more agreeable, but the irony of being taken out of the moment, even if just temporarily, was not lost on me.
To that end, it’s hard to see where the interactive arm of Aimi excels, at least at launch. The genres, while varied, do overlap quite a bit. The control you have over the music is quite gentle in the scheme of things and feels more like fine-tuning than an actual creator tool. The core experience of listening to chill vibes is a great alternative to your tired Spotify playlist, but that part is free and has been available in some form for a while.
Balassanian says that even more experiences from more artists will be coming after launch and once the Studio app is released anyone will be able to make loops and upload them to the platform for users to enjoy. In the meantime, you can sign up for early beta access here and start configuring your own soundtrack today.
Never mind ChatGPT — music might be the next big frontier for AI content generation. Google recently published research on MusicLM, a system that creates music in any genre from a text description. This isn't the first AI music generator. As TechCrunch notes, projects like Google's AudioLM and OpenAI's Jukebox have tackled the subject. However, MusicLM's model and vast training database (280,000 hours of music) help it produce music with surprising variety and depth. You might just like the output.
The AI can not only combine genres and instruments, but also write tracks using abstract concepts that are normally difficult for computers to grasp. If you want a hybrid of dance music and reggaeton with a "spacey, otherworldly" tune that evokes a "sense of wonder and awe," MusicLM can make it happen. The technology can even craft melodies based on humming, whistling or the description of a painting. A story mode can stitch several descriptions together to produce a DJ set or soundtrack.
MusicLM has its problems, as with many AI generators. Some compositions sound strange, and vocals tend to be incomprehensible. And while the performances themselves are better than you'd expect, they can be repetitive in ways human works might not. Don't expect an EDM-style drop or the verse-chorus-verse pattern of a typical song.
Just don't plan on using the tech any time soon. As with other Google AI generators, the researchers aren't releasing MusicLM to the public over copyright concerns. At the time of publication, roughly one percent of the music the system produced was copied directly from its training songs. While questions regarding licensing for AI music haven't been settled, a 2021 whitepaper from Eric Sunray (now working for the Music Publishers Association) suggested that AI music contains enough "coherent" traces of the original sounds to violate reproduction rights. You may have to get clearances to release AI-created songs, much like musicians who rely on samples.
AI already has a place in music. Artists like Holly Herndon and Arca have used algorithms to produce albums and museum soundtracks. However, those are either collaborative (as with Herndon) or intentionally unpredictable (like Arca's). MusicLM may not be ready for prime time, but it hints at a future where AI could play a larger role in the studio.
“We are now at the dawn of the age of infinitely connected music,” the data alchemist announced from beneath the Space Needle. Glenn McDonald had chosen his title himself, preferring “alchemy,” with its esoteric associations, over the now-ordinary “data science.” His job, as he described it from the stage, was “to use math and typing and computers to help people understand and discover music.”
McDonald practiced his alchemy for the music streaming service Spotify, where he worked to transmute the base stuff of big data — logs of listener interactions, bits of digital audio files, and whatever else he could get his hands on — into valuable gold: products that might attract and retain paying customers. The mysterious power of McDonald’s alchemy lay in the way that ordinary data, if processed correctly, appeared to transform from thin interactional traces into thick cultural significance.
It was 2014, and McDonald was presenting at the Pop Conference, an annual gathering of music critics and academics held in a crumpled, Frank Gehry–designed heap of a building in the center of Seattle. I was on the other side of the country, and I followed along online. That year, the conference’s theme was “Music and Mobility,” and McDonald started his talk by narrating his personal musical journey, playing samples as he went. “When I was a kid,” he began, “you discovered music by holding still and waiting.” As a child at home, he listened to the folk music his parents played on the stereo. But as he grew up, his listening expanded: the car radio offered heavy metal and new wave; the internet revealed a world of new and obscure genres to explore. Where once he had been stuck in place, a passive observer of music that happened to go by, he would eventually measure the progress of his life by his ever-broadening musical horizons. McDonald had managed to turn this passion into a profession, working to help others explore what he called “the world of music,” which on-demand streaming services had made more accessible than ever before.
Elsewhere, McDonald (2013) would describe the world of music as though it were a landscape: “Follow any path, no matter how unlikely and untrodden it appears, and you’ll find a hidden valley with a hundred bands who’ve lived there for years, reconstructing the music world in methodically- and idiosyncratically-altered miniature, as in Australian hip hop, Hungarian pop, microhouse or Viking metal.”
Travelers through the world of music would find familiarity and surprise — sounds they never would have imagined and songs they adored. McDonald marveled at this new ability to hear music from around the world, from Scotland, Australia, or Malawi. “The perfect music for you may come from the other side of the planet,” he said, but this was not a problem: “in music, we have the teleporter.” On-demand streaming provided a kind of musical mobility, which allowed listeners to travel across the world of music instantaneously.
However, he suggested, repeating the common refrain, the scale of this world could be overwhelming and hard to navigate. “For this new world to actually be appreciable,” McDonald said, “we have to find ways to map this space and then build machines to take you through it along interesting paths.” The recommender systems offered by companies like Spotify were the machines. McDonald’s recent work had focused on the maps, or as he described them in another talk: a “kind of thin layer of vaguely intelligible order over the writhing, surging, insatiably expanding information-space-beast of all the world’s music.”
Although his language may have been unusually poetic, McDonald was expressing an understanding of musical variety that is widely shared among the makers of music recommendation: Music exists in a kind of space. That space is, in one sense, fairly ordinary — like a landscape that you might walk through, encountering new things as you go. But in another sense, this space is deeply weird: behind the valleys and hills, there is a writhing, surging beast, constantly growing and tying points in the space together, infinitely connected. The music space can seem as natural as the mountains visible from the top of the Space Needle; but it can also seem like the man-made topological jumble at its base. It is organic and intuitive; it is technological and chaotic.
Spatial metaphors provide a dominant language for thinking about differences among the makers of music recommendation, as they do in machine learning and among Euro-American cultures more generally. Within these contexts, it is easy to imagine certain, similar things as gathered over here, while other, different things cluster over there. In conversations with engineers, it is very common to find the music space summoned into existence through gestures, which envelop the speakers in an imaginary environment populated by brief pinches in the air and organized by waves of the hand. One genre is on your left, another on your right. On whiteboards and windows scattered around the office, you might find the music space rendered in two dimensions, containing an array of points that cluster and spread across the plane.
In the music space, music that is similar is nearby. If you find yourself within such a space, you should be surrounded by music that you like. To find more of it, you need only to look around you and move. In the music space, genres are like regions, playlists are like pathways, and tastes are like drifting, archipelagic territories. Your new favorite song may lie just over the horizon.
But despite their familiarity, spaces like these are strange: similarities can be found anywhere, and points that seemed far apart might suddenly become adjacent. If you ask, you will learn that all of these spatial representations are mere reductions of something much more complex, of a space comprising not two or three dimensions but potentially thousands of them. This is McDonald’s information-space-beast, a mathematical abstraction that stretches human spatial intuitions past their breaking point.
Spaces like these, generically called “similarity spaces,” are the symbolic terrain on which most machine learning works. To classify data points or recommend items, machine-learning systems typically locate them in spaces, gather them into clusters, measure distances among them, and draw boundaries between them. Machine learning, as the cultural theorist Adrian Mackenzie (2017, 63) has argued, “renders all differences as distances and directions of movement.” So while the music space is in one sense an informal metaphor (the landscape of musical variation), in another sense it is a highly technical formal object (the mathematical substrate of algorithmic recommendation).
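A toy sketch of such a similarity space, with hypothetical two-dimensional embedding vectors (real systems learn hundreds or thousands of dimensions from listening data): each track is a point, difference is rendered as distance, and recommendation reduces to a nearest-neighbor lookup.

```python
import math

# Hypothetical "music space": each track is an embedding vector.
# The track names and coordinates are invented for illustration only.
tracks = {
    "viking_metal_a": (0.9, 0.1),
    "viking_metal_b": (0.8, 0.2),
    "microhouse_a":   (0.1, 0.9),
}

def distance(p, q):
    """Euclidean distance: differences rendered as distances in the space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def nearest(name):
    """Recommend the closest other track -- a nearest-neighbor lookup."""
    point = tracks[name]
    return min((other for other in tracks if other != name),
               key=lambda other: distance(point, tracks[other]))

print(nearest("viking_metal_a"))  # viking_metal_b
```

Clustering and boundary-drawing are elaborations of the same primitive: everything depends on where points sit relative to one another in the space.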
Spatial understandings of data travel through technical infrastructures and everyday conversation; they are at once a form of metaphorical expression and a concrete computational practice. In other words, “space” here is both a formalism — a restricted, technical concept that facilitates precision through abstraction — and what the anthropologist Stefan Helmreich (2016, 468) calls an informalism — a less disciplined metaphor that travels alongside formal techniques. In practice, it is often hard or impossible to separate technical specificity from its metaphorical accompaniment. When the makers of music recommendation speak of space, they speak at once figuratively and technically.
For many critics, this “geometric rationality” (Blanke 2018) of machine learning makes it anathema to “culture” per se: it quantifies qualities, rationalizes passions, and plucks cultural objects from their everyday social contexts to relocate them in the sterile isolation of a computational grid. Mainstream cultural anthropology, for instance, has long defined itself in opposition to formalisms like these, which seem to lack the thickness, sensitivity, or adequacy to lived experience that we seek through ethnography. As the political theorists Louise Amoore and Volha Piotukh (2015, 361) suggest, such analytics “reduce heterogeneous forms of life and data to homogeneous spaces of calculation.”
To use the philosopher Henri Lefebvre’s (1992) terms, similarity spaces are clear examples of “abstract space” — a kind of representational space in which everything is measurable and quantified, controlled by central authorities in the service of capital. The media theorist Robert Prey (2015, 16), applying Lefebvre’s framework to streaming music, suggests that people like McDonald — “data analysts, programmers and engineers” — are primarily concerned with the abstract, conceived space of calculation and measurement. Conceived space, in Lefebvrian thought, is parasitic on social, lived space, which Prey associates with the listeners who resist and reinterpret the work of technologists. The spread of abstract space under capitalism portends, in this framework, “the devastating conquest of the lived by the conceived” (Wilson 2013).
But for the people who work with it, the music space does not feel like a sterile grid, even at its most mathematical. The makers of music recommendation do not limit themselves to the refined abstractions of conceived space. Over the course of their training, they learn to experience the music space as ordinary and inhabitable, despite its underlying strangeness. The music space is as intuitive as a landscape to be walked across and as alien as a complex, high-dimensional object of engineering. To use an often-problematized distinction from cultural geography, they treat “space” like “place,” as though the abstract, homogeneous grid were a kind of livable local environment.
Similarity spaces are the result of many decisions; they are by no means “natural,” and people like McDonald are aware that the choices they make can profoundly rearrange them. Yet spatial metaphorizing, moving across speech, gesture, illustration, and computation, helps make the patterns in cultural data feel real. A confusion between maps and territories — between malleable representations and objective terrains — is productive for people who are at once interested in creating objective knowledge and concerned with accounting for their own subjective influence on the process. These spatial understandings alter the meaning of musical concepts like genre or social phenomena like taste, rendering them as forms of clustering.
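That rendering of genre and taste as clustering can be made concrete. A toy k-means in pure Python, run over six invented points standing in for tracks in a flat two-dimensional space, shows how "regions" emerge from nothing but distances:

```python
import math

def kmeans(points, k, iters=20):
    # Plain k-means: "genres" here are just whichever clusters the distances yield.
    # Initialization takes the first k points; real systems seed more carefully.
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        # Assign every point to its nearest centroid (difference as distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: math.dist(p, centroids[j]))
            clusters[i].append(p)
        # Move each centroid to the mean of its cluster.
        centroids = [
            [sum(c) / len(cl) for c in zip(*cl)] if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return clusters

# Six invented tracks in a flat two-dimensional "music space", ordered so the
# two initial centroids land in different regions.
points = [(0.1, 0.2), (0.9, 0.8), (0.2, 0.1), (0.15, 0.15), (0.8, 0.9), (0.85, 0.85)]
low, high = kmeans(points, k=2)
```

Which points count as a "genre" depends entirely on k, the initialization, and the distance measure — one sense in which these spaces are the result of many decisions.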
Consumer electronics aimed at young children tend to be quite janky and cheap-looking, and they often have to be to survive the extreme stress-testing that normal use entails. You could buy a higher-quality item intended for adult use, but that carries the risk of burning a hole in the parents' pockets. To thread the needle on this dilemma for a child's audiobook player, [Turi] built the Grimmboy for a relative of his.
Taking its name from the Brothers Grimm, the player can play a number of children's stories and fables in multiple languages, each physically represented by a small cassette-tape likeness with an RFID tag hidden inside. A tape can be selected and placed in the player, and the Arduino at its center will recognize the tag and play the corresponding MP3 file stored locally on an SD card. There are simple playback controls, plus all the circuitry needed to support its lithium battery. All of the source code that [Turi] used to build this is available on the project's GitHub page.
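The firmware itself is on the project's GitHub page, but the core mechanism is simple enough to sketch. Assuming made-up tag UIDs and filenames (the real mapping lives in [Turi]'s Arduino C++ source), the RFID-to-MP3 lookup amounts to a small table, shown here in Python for brevity:

```python
# Sketch of the tag-to-story lookup at the heart of the player. The UIDs and
# filenames below are invented stand-ins, not values from the actual project.
TAG_TO_TRACK = {
    "04:A3:1F:2B": "hansel_and_gretel_de.mp3",
    "04:77:0C:9D": "hansel_and_gretel_en.mp3",
    "04:5E:88:41": "rumpelstiltskin_de.mp3",
}

def track_for_tag(uid):
    # Return the MP3 on the SD card for a scanned tape, or None for unknown tags.
    return TAG_TO_TRACK.get(uid)

print(track_for_tag("04:A3:1F:2B"))  # → hansel_and_gretel_de.mp3
```

Adding a new story to such a design means burning a tag, copying an MP3 to the card, and adding one table entry — no reflashing of the selection logic required.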
You don't need Spotify or a dedicated app to try karaoke at home. Apple Music has introduced a Sing feature that lets you take over the vocals. You can not only adjust the voice levels, but also use multiple lyric views depending on what you want to belt out — you can perform a duet or even handle background duties. Apple also notes that the lyric views are now cued to the beat and light up gradually, so it's easier to know when you should draw out a verse.
The feature will be available worldwide for "tens of millions" of tracks later in December on the new Apple TV 4K as well as recent iPhones (iPhone 11 and later) and iPads (such as last year's 9th-generation model). Android supports real-time lyrics, but won't let you adjust vocal levels. Separately, Apple Music plans to share more than 50 playlists devoted to songs "optimized" for the Sing feature. Don't be surprised if karaoke staples from Queen and other artists make the cut.
Spotify rolled out a karaoke feature in June, but with a very different focus. While Apple Music Sing is clearly aimed at parties, its Spotify counterpart is more of a gaming experience that records your voice and rates your performance. Apple tells Engadget its feature doesn't use microphones at all, so you won't have to worry if your version of "Islands in the Stream" needs some polish.
There's no mystery behind the addition. Sing gives you another reason to use Apple Music in group settings — it's not just for soundtracking your latest soirée. It could also serve as a selling point for the Apple TV, where music has rarely been a major priority. While this probably won't replace the karaoke machine at your favorite bar, it might offer a good-enough experience for those times when you'd rather stay home.