As is tradition, Apple is pulling the cover off of a new Apple Watch at its September event. Rumors suggested that the Apple Watch Series 8 would be a mostly iterative update to last year's model, and that appears to be the case so far: It looks essentially identical to the Series 7.
The first main new feature is a temperature sensor that Apple is tying to women's health. It'll use readings to give an estimate of when you may be ovulating. It's meant to be used overnight, sampling your wrist temperature every five seconds so you can see nightly shifts from your baseline temperature. This will work for everyone, but for people who ovulate it'll help indicate where they are in their cycle. You'll also get notifications on potential deviations from your norm.
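Apple hasn't published how it processes those overnight samples, but the general idea of reducing a night of frequent readings to a deviation from a personal baseline can be sketched roughly. Everything below is illustrative: the function names, the use of a median, and the notification threshold are assumptions, not Apple's algorithm.

```python
# Hypothetical sketch of deriving a nightly wrist-temperature deviation
# from frequent overnight samples. Names and thresholds are illustrative.
from statistics import median

def nightly_deviation(samples_c, baseline_c):
    """Summarize one night of wrist-temperature samples (degrees Celsius)
    as a single deviation from the user's established baseline."""
    return round(median(samples_c) - baseline_c, 2)

def should_notify(deviation_c, threshold_c=0.3):
    """Flag deviations large enough to be worth surfacing to the user.
    The 0.3 C cutoff is an invented example, not Apple's value."""
    return abs(deviation_c) >= threshold_c
```

So a night of slightly elevated readings against a 36.2 C baseline would yield a small positive deviation, and only deviations past the (assumed) threshold would trigger a notification.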
There's also a new safety feature called Crash Detection. Much as current watches can detect when you fall, the Series 8 can detect car crashes thanks to two new accelerometers. It works in concert with the other sensors already included in the Apple Watch to detect four different types of crashes, including rollovers, front impact, back impact and side impact.
Apple says that the Series 8 has the same 18-hour battery life, but there's a new low power mode that can give you up to 36 hours on a full charge. It keeps a lot of the core features like activity tracking and fall detection while also turning off things like the always-on display. Anyone who knows they'll be away from a charger for a long time should appreciate this feature, and it's coming to older Watch models as well, from Series 4 onward.
Apple Watch Series 8 comes in four aluminum colors (silver, a black-ish midnight, a gold-ish starlight, and red), as well as three stainless steel finishes — looks like the titanium models are going to be saved for the Apple Watch Pro, which we'll probably hear about momentarily. The GPS-only model starts at $399, while the cellular models start at $499. All will be available on September 16th.
Last October, Google added new, eco-friendly driving directions to Maps. In the US, you could pull up a driving route that would take into account things like congestion and incline to find you the directions that are best for your fuel economy. As part of a handful of Maps announcements today, Google says the feature will be available in "nearly 40" European countries, though we didn't get a full list of which ones. Google had previously added the feature to Maps in Canada and Germany.
Just like in the US, these routes are often not the fastest, so you'll have to decide whether speed is your ultimate priority. The app will show you both routes, and will default to the more eco-friendly option when the difference in time is small. But if you want to always default to the fastest route, you can do so in the Google Maps settings.
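The default-route behavior described above amounts to a simple decision rule. This is a sketch of that rule, not Google's implementation; in particular, the three-minute cutoff for "the difference in time is small" is an invented placeholder.

```python
# Illustrative sketch of Maps' default-route choice between the fastest
# and most fuel-efficient routes. The cutoff value is an assumption.
def pick_default_route(fastest_min, eco_min, prefer_fastest=False, small_diff_min=3):
    """Return which route to preselect: the eco-friendly route when the
    time difference is small, unless the user has opted for speed."""
    if prefer_fastest:
        return "fastest"
    return "eco" if (eco_min - fastest_min) <= small_diff_min else "fastest"
```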
Google
Regardless of whether you use this feature in the US, Canada or Europe, Google is also including a new preference setting that lets you tell the app what type of vehicle you drive. You'll be able to specify whether your car runs on gas (or diesel), is a hybrid or a full electric vehicle. That'll affect the specific directions, although we can't say yet just how different they will be. As is often the case with Google updates, the company says this new feature will be rolling out over the coming weeks.
Aside from whether The Last of Us Part I is worth the $70 asking price, the question surrounding this remake is how much the original 2013 game was going to change. Would developer Naughty Dog treat this as a total do-over, changing the level design, gameplay mechanics and player upgrades? It has become obvious over the last few weeks, as Sony released a handful of preview videos ahead of today's release, that that wouldn’t be the case. Instead, the goal was to bring massively updated visuals and a host of quality-of-life improvements to a game that would otherwise stay true to its roots.
“This is a unique project for Naughty Dog. It’s the first time we’ve taken on a full remake,” said creative director Shaun Escayg in an interview. “We knew that we wanted to stay true to the original game as closely as possible, [to] add what we think will heighten and enhance the experience but not fundamentally change the experience.” That mindset permeates the game, from everything you can see in the environment down to the battles against both humans and the Infected.
“We didn’t feel like these combat encounters were dated and there wasn’t really anything we were looking at saying ‘we want a do-over here,’” added game director Matthew Gallant. “We love the combat in The Last of Us. We think those spaces are really iconic: They’re really strong, they afford a ton of different options for moving around and fighting. What often was dated was perhaps the technology underlying some of these fights.”
Gallant, who was a combat designer on the original The Last of Us, says the game could only handle eight AI “brains” at any given time, despite fights that often had more enemies than that. This meant that they had to reactivate and deactivate those brains based on where the character was and what they were doing. Similarly, he described a lot of the battles in the original game as “hand scripted.” “You move here, they react by doing this; that was just the level of technology that we had at the time and it was what made sense, and we got pretty good results,” he said. “Those are really great fights, and they hold up really well. But with our latest engine technology we can be a lot more flexible.”
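The brain-budget workaround Gallant describes — keeping AI active only for the enemies that matter most at any moment — might look something like the following sketch. The function name, the distance heuristic, and the budget parameter are my own illustration, not Naughty Dog's code.

```python
# Hypothetical sketch of a limited "AI brain" budget: with only eight
# brains available, activate AI for the enemies nearest the player and
# deactivate the rest each frame.
def assign_brains(enemy_positions, player_pos, budget=8):
    """Return the indices of enemies whose AI should be active."""
    def dist2(p):
        # Squared 2D distance to the player; no sqrt needed for ranking.
        return (p[0] - player_pos[0]) ** 2 + (p[1] - player_pos[1]) ** 2

    ranked = sorted(range(len(enemy_positions)),
                    key=lambda i: dist2(enemy_positions[i]))
    return set(ranked[:budget])
```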
The new AI, unsurprisingly, is far more advanced. “We have the option to use information in the [level] layout to tell enemies ‘this is a strong position to defend, this is a good flanking route, this is a good line of sight to other enemies,’ and there’s an encounter manager layer that’s assigning NPCs to roles within the fight,” Gallant said. “Who would be the best flanker right now, who would be the best person to defend this point, who should be pushing up on the player right now?” In my experience so far, the end result is a game that’s far less predictable than the original – if you get caught out of stealth, enemies advance quickly and mercilessly, especially on harder difficulty levels. “You should be able to play a fight ten different ways and get ten different results,” he added.
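An encounter-manager layer like the one Gallant outlines can be sketched as a greedy matcher: score each NPC for each role using designer-annotated layout data, then hand each role to the best-scoring NPC still unassigned. The names and scoring interface here are assumptions for illustration only.

```python
# Illustrative sketch of an encounter manager assigning NPCs to combat
# roles (flanker, defender, pusher) based on per-role scores that would
# come from designer-annotated level layout data.
def assign_roles(npcs, roles, score):
    """Greedily give each role to the unassigned NPC that scores highest
    for it. `score(npc, role)` is a stand-in for layout-derived fitness."""
    assignments, free = {}, set(npcs)
    for role in roles:
        if not free:
            break
        best = max(free, key=lambda n: score(n, role))
        assignments[role] = best
        free.discard(best)
    return assignments
```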
Another big question around combat was why Naughty Dog didn’t add some of the new mechanics it built in The Last of Us Part II from 2020. In that game, players control Ellie and Abby, both of whom can lie on the ground to hide in grass or crawl under vehicles for cover. They can also dodge, a feature that added a whole new dimension to fights, giving you an out when a Clicker or Bloater is bearing down on you for a one-hit instant kill.
According to Gallant, the ripple effects of adding dodge to the original game would have been too significant. “Dodge isn’t something that lives in a vacuum,” he said. “You have to add tells to the enemy attacks, and now the enemies are going to be attacking differently. You also may need to change the encounter spaces; you need to give a little more room to have that dodge gameplay.” Beyond that, adding dodge would have diminished the tension that Naughty Dog tried to infuse into the game’s battles.
“If you have dodge, you kind of have an out. Then all of a sudden, a fight that was very claustrophobic and tense and nerve-wracking – this thing’s bearing down on you and you have to land that headshot to kill it – you get a very different sensation if you have dodge,” Gallant said.
But most importantly, according to Gallant, playing as Joel fundamentally needs to feel different than playing as Ellie, and just porting over her moves would diminish those differences. “The way Joel plays tells you a lot about his character,” he said. “He is a bruiser, he’s a brawler, he’s an older man. The way that he fights is supposed to feel very different than the way Ellie fights in The Last of Us Part II. She’s a younger woman, she’s nimble, she has a whole skill set that’s very different.” Fans can continue to argue about whether Naughty Dog should have gone further with the changes it made to gameplay, but it’s also reasonable that they want to keep the characters in Part I distinct from those in Part II.
While it might take players some time to recognize the extent of the AI updates, the graphical improvements are immediately obvious. For me, the most striking change is the facial animations, but the extent to which Naughty Dog went in and looked at every aspect of a scene to improve it is equally impressive. For example, as Joel and Ellie make their way through the suburbs of Pittsburgh with new companions Henry and his teenage brother Sam, Ellie and Sam take a break on a couch in a ruined house. From looking back at old screenshots, I saw that the couch was totally redesigned. Why not just use the original couch design?
“We’re trying to update everything with the decade of artistic development and improvement in technology since the PlayStation 3,” Escayg explained. “Is this the most grounded-looking couch? Can it stand up in this environment? How does it wear and tear over time? How does it work with the lighting and the time of day in that setting? Does it actually focus your attention on Sam and Ellie, or does it detract?”
Speaking more broadly, Escayg notes that Naughty Dog went through thousands of “micro decisions” across the entire game. “Does anything distract? Let’s remove it,” he said. “Do we absolutely need it? Are fans really attached to it? Are we really attached to it?”
Gallant says that a lot of the re-evaluation that Naughty Dog did focused on why it designed sections of the game the way it did a decade ago. “This area is plain — is it plain because we want you to kind of move through it and it’s meant to be unremarkable, or is it plain because we were low on memory on the PlayStation 3 and this was kind of a transition area from one detailed area to the next?”
Naughty Dog did make one major addition that will fundamentally change how The Last of Us Part I plays. There are now myriad accessibility options, none of which were present in either the PS3 game or the PS4 remaster. The feature set includes everything the developer put into Part II in 2020, along with a few new additions. Despite the fact that the game wasn’t originally designed with accessibility in mind, Gallant says that it was relatively straightforward to bring these features over – though some of the more unique scenes in the game were tougher to account for.
“One example is the arcade mini-game in Left Behind,” Gallant said. The mini-game in question requires you to make a specific series of directional and button presses in a limited amount of time, like you do in Street Fighter II or Mortal Kombat. “We needed to design our text-to-speech there to tell you the instructions of what buttons to put in very quickly so you have time to put in the inputs. We worked with accessibility consultants and they tried some various revisions of that mini game. We did a couple rounds there to make sure that experience was accessible.”
The PS5’s DualSense controller and its extensive haptics system opened up one of those new accessibility options, dialog haptics. “This is a feature where we play the spoken dialog as vibrations on the controller,” Gallant explained, “and the intent there is to give deaf players a sense of how the line was delivered. Where was the emphasis, what was the cadence? And that along with the subtitles provides more of that story context and the performance to deaf players as well.”
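Converting spoken dialog into vibration, as Gallant describes, is essentially an envelope-following problem: track the loudness of the line over time and map it to rumble intensity. The sketch below illustrates that idea only; the windowing, the mean-absolute-value envelope, and the scaling are my assumptions, not Naughty Dog's haptics pipeline.

```python
# Rough sketch of dialog haptics: map the loudness envelope of a spoken
# line onto controller rumble intensity so emphasis and cadence come
# through as vibration. Window size and scaling are illustrative.
def dialog_to_haptics(samples, window=4, max_amp=1.0):
    """Convert audio samples (floats in [-1, 1]) into per-window rumble
    intensities in [0, 1] using a mean-absolute-value envelope."""
    out = []
    for i in range(0, len(samples), window):
        chunk = samples[i:i + window]
        level = sum(abs(s) for s in chunk) / len(chunk)
        out.append(min(level / max_amp, 1.0))
    return out
```

A loud, punchy syllable would produce a spike in the output intensities, while a soft trailing word would produce low values — the "emphasis and cadence" Gallant refers to.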
While people will still continue to argue about whether The Last of Us Part I is worth $70, my conversation with Escayg and Gallant made it clear that Naughty Dog doesn’t believe things like the level design and core gameplay needed an overhaul. “We wanted anything we changed, anything we remade, anything that we adjusted to be in service of the original vision that was larger than what the hardware was capable of,” Gallant said. For better or worse, Naughty Dog’s mission was to make a version of The Last of Us that is, as Gallant puts it, “the best version of itself.”
Ever since Sony and Naughty Dog announced The Last of Us Part I, a $70, ground-up PS5 remake of the classic 2013 PS3 game, there’s been an intense discussion around whether this even needs to exist. After all, Naughty Dog remastered the original game in 2014 for the PS4, giving it 1080p graphics at 60 fps, and it still looks solid. But, compared to The Last of Us Part II, which came out in June of 2020, the original shows its age. Facial expressions are less lifelike, and the environments, while still beautiful and well-designed, lack a certain level of depth and detail.
As Naughty Dog co-president and The Last of Us co-creator Neil Druckmann tells it, the idea for this remake came when they were animating flashbacks for Part II. When I first played the sequel, I took note of a very brief sequence showing the game’s protagonists Joel and Ellie walking through a ruined city — the vastly improved animation and fidelity of the scene made me want to see more of a world that I had spent so much time in rendered with modern technology.
Now, two years later, my wish has been granted. The Last of Us Part I is a complete rebuild of the game (and its excellent two-ish hour DLC Left Behind), but it’s a bit of a different beast than remakes like Resident Evil 2 and Final Fantasy VII. It’s a massive visual upgrade over the original, and there are numerous other significant improvements — but the experience of playing the game itself will be extremely familiar to anyone who has experienced The Last of Us on the PS3 or PS4.
Level design and enemy placement is identical to the original, but the enemy AI has been significantly improved, which means some encounters can play out quite a bit differently. The music and voice performances are lifted straight from the original game, and the direction of cinematic sequences is completely faithful — but when they were so good to begin with, why change them?
For those who haven’t played The Last of Us, it’s a survival / action game that takes place 20 years after a pandemic wipes out most of the world’s population; the Infected are bloodthirsty, zombie-like beasts, and society as we know it has collapsed. Joel, a hardened, violent, emotionally stunted survivor, finds himself tasked with smuggling Ellie, a 14-year-old girl, across the country.
While the post-apocalyptic setting has been done many times, The Last of Us manages to tell an impactful story that balances brutal and devastating emotional beats with surprisingly hopeful and tender human connections. In my opinion, nothing about that original tale needed to be changed, and I’m glad that Joel and Ellie’s journey is authentic to the original vision.
A complete visual redesign
Naughty Dog has a reputation for building intricate, vast and beautiful worlds, and as the company’s first PS5 game, The Last of Us Part I continues that tradition. From the very first scene through the end credits, I marveled at the detail and richness of the world Naughty Dog created — it’s a huge upgrade over the original and easily on par with The Last of Us Part II.
For me, the most significant change is in facial animations. Naughty Dog has said they were able to go back to the original motion-captured performances and use them as a guide for putting more nuance and emotion into the game. The climax of the 15-minute prologue showing how the outbreak starts hit harder, thanks in large part to the facial expressions carrying more emotional heft. The original version of the game looked great but still occasionally dipped into the uncanny valley – that’s not the case here.
Characters' facial animations look amazing, whether in a cutscene or during gameplay. While there are plenty of cinematics in The Last of Us Part I, there’s a ton of storytelling that happens through the gameplay itself. I always enjoyed spinning the camera around to focus on the characters’ faces during quieter times of conversation, and they’re impressively detailed and expressive, reflecting the feel of the scene in subtle but noticeable ways. Enemies look more realistic than ever, too, whether it’s the angry faces of a pack of human hunters or the disturbingly distorted expressions of the Infected.
The improved environmental detail Naughty Dog added to the world is just as significant. The original game was already a stunning depiction of a post-apocalyptic United States, from the Quarantine Zones where humanity clung to safety, to abandoned towns overrun with Infected, to the lush forests and roaring rivers of the wilds. All those settings are amplified in The Last of Us Part I, with better lighting, more realistic trees and vegetation, stunning reflections on water and loads of tiny details everywhere you look.
A great example of this is in the pinboard above Joel’s desk in his Texas house, which we briefly see in the prologue. I used the game’s photo mode to zoom in on the details and found numerous handwritten notes reminding Joel of his grocery list, his daughter’s soccer schedule, a cleaning schedule and even a letter his brother Tommy wrote when they were kids. Unless you go into photo mode and zoom in, you’ll never notice these details, but they help build a world that feels lived in.
Throughout the game, you’ll come across certain areas like the subway under Boston and the Pittsburgh hotel basement that are shrouded with infectious spores and have little natural light. In the original game, those areas were particularly hard to see fine detail in — the spore particles overwhelmed the visuals and reduced the colors and visibility of the area to a shadowy gray mush. These types of environments look significantly better in Part I. The spores add an unsettling dimension, but the details shine through the fog. It’s also much easier to navigate, which is a welcome change — a lot of these darker areas involve going underwater to make your way around obstacles, and it was often hard to see a way forward.
The Last of Us Part I offers two visual modes: Fidelity runs the game at 30 fps in full 4K resolution, and Performance targets 60 fps while dynamically adjusting the resolution. (Alternatively, if you’re running the PS5 beta software that enabled 1440p as a resolution setting, it'll max out there.) I found myself jumping between the two modes, turning on Performance for battles and using Fidelity for quieter parts that were more focused on exploration. Overall, I prefer Performance mode, mostly because I find it hard to go back to 30 fps after playing at 60 for a while.
Gameplay tweaks and a new AI system
Improved graphics are table stakes for a remake, though. The big question around The Last of Us Part I was how much gameplay would change – whether we’d see new mechanics from Part II, or if the company would redesign levels to give experienced players something new.
Naughty Dog has been faithful to a fault. Level design is identical, and as best I can tell, even the number and placement of enemies throughout the world are the same. If you know the general progression of The Last of Us and Left Behind (which remains a separate experience from the main game), you won’t find any surprises here. It’s revealing that Naughty Dog apparently had the chance to “do over” any parts of the game it felt didn't age well and didn’t take it. A cynic would say the studio wanted to put less effort into the project, while an optimist would say it’s standing by the original game’s design. I think both points of view are valid and will simply note that people who know the game inside and out aren’t going to find anything unexpected.
Similarly, Joel still moves like the burly middle-aged man that he is. You can’t dodge, and you can’t lie prone. You can now, however, pick up and throw bricks and bottles on the run, just as you can in Part II. There’s definitely something satisfying about running towards an Infected, stunning them with a brick throw and then finishing them off with a swing of a melee weapon, but in the grand scheme of things it isn’t a major change.
One thing that is notably different is enemy AI. Human enemies are smarter and more aggressive, working together to flank you; they're also a lot harder to lose once they pick up your trail. Infected, meanwhile, present their own set of challenges. Clickers, the blind Infected that use echolocation to find you and can kill you in one shot, have the same behavior they do in Part II. They’ll often stop their wandering and let out a series of “barks” — and if you’re near them when they do, well, you’re probably going to die quickly. In the original game, you were mostly safe as long as you didn’t make too much noise walking, but now you have to keep moving or hiding at all times.
The mega-powerful Bloaters are also modeled after their counterparts in Part II. The biggest change in their behavior is that they’ll build up a head of steam and charge at you like a bull — if you get out of the way they’ll often slam into a wall or other object and be stunned for a moment, a great opportunity to blast away at them with your shotgun. But in Part II, you can use the dodge button to dance out of the way. Since there’s no dodge in Part I, you have to sprint out of the way instead, something that’s not nearly as reliable. After getting so used to dodging the Bloater’s charge in Part II, it was a real pain to not have the same move here. And if a Bloater grabs you, it’s an instant death, so you’ll want to treat these upgraded enemies with the utmost care.
The AI and behavior of your allies has been upgraded, too, which addresses a big complaint about the original game. If you were in stealth, your allies were essentially invisible to enemies, which meant that your cover couldn’t get blown if Ellie or another companion ran out in front of a Clicker. This avoided the frustration of being seen when you didn’t actually do anything to reveal your position, but it also meant that it looked pretty ridiculous when characters could run right out in front of enemies and not get spotted.
Now, your companions are much smarter at mimicking your behavior, going into cover when you’re in stealth and only revealing themselves if you do the same. Once or twice in my playthrough, an ally was “out of position” and in an enemy’s line of sight but, as in the first game, remained essentially invisible. The good news is that it just doesn't happen very often. It’s not perfect, but it’s an improvement.
The haptic feedback system and adaptive triggers on the PS5’s DualSense controller also offer some subtle but noteworthy improvements to gameplay. Naughty Dog says each weapon has different resistance and feedback from the triggers, and the haptic vibrations are unique as well. While I can’t recognize every slight detail, shooting a revolver feels quite different on the trigger than shooting the shotgun or drawing your bow. Haptics accompany actions like reloading too, so you’ll feel a vibration for each pump of the shotgun after Joel takes a shot. There are too many haptic touches throughout the game to count, but one of my favorites is that you can "feel" rainfall as it vibrates lightly across the controller, like droplets are bouncing off your body.
Updates galore
While graphics and AI are the changes most people will notice first, there are a lot of smaller tweaks throughout that make The Last of Us Part I feel more like Part II. Things like a redesigned HUD and weapon selection interface, aiming reticles for different weapons and button prompts (like mashing square to open a blocked door or holding triangle to lift a gate) all match their counterparts in Part II. While weapon upgrade options are identical to those in the original game, the new visuals of Joel working on his guns with various tools are a lot more interesting than before.
Sony / Naughty Dog
Upon finishing the game, you’ll unlock a host of bonus material and gameplay modifiers. Most significant are the Permadeath and Speed Run modes. Just as in Part II, Permadeath removes all checkpoints, and if you set it to the most difficult level, one death sends you back to the very beginning of the game. For those who want a significant challenge but aren’t quite that dedicated, you can do Permadeath “per act” (which Naughty Dog estimates encompasses two to three hours of gameplay) or “per chapter,” which adds some checkpoints within each act. You can also try it at any difficulty level, which makes the challenge a lot more accessible. I know I’m not good enough to try a truly obscene Permadeath run on the ultra-difficult Grounded difficulty, but I have kicked off a run on Hard, which I should have a prayer of surviving.
Speedrun mode is pretty self-explanatory, but it’s a nice quality of life enhancement for people who like to play games as quickly as possible. It enables an in-game timer that automatically pauses during cinematic and scene transitions. Once you finish the game, you’ll find a recap that breaks down your speed per chapter as well as your total play time, and the game saves records broken down by difficulty level and permadeath setting.
Other unlockable extras include tons of concept art, both from the original release and new art done for this 2022 rerelease. There’s also a viewer that lets you explore highly detailed character models for just about everyone in the game; it also lets you see the disgusting details of the Infected in close range if you’re into that sort of thing. More Part II extras brought over here include a set of filters you can apply to tweak the visuals of the game (think an 8-bit setting or one that renders the game in a comic book style) and a bunch of gameplay modifiers. You can turn on infinite ammo or crafting supplies, one-shot kills, slow motion, explosive arrows and much more. Only hardcore fans are probably going to spend time with these, but they can add some fun new ways to play the game — combining something like unlimited ammo with a permadeath setting on the game’s hardest difficulty would be a particularly unique challenge, for example.
It’s not a stretch to say that The Last of Us Part II helped push accessibility in the video games industry forward — Naughty Dog provided players with an extensive and impressive selection of options, and I’m very glad to see that the company replicated that with Part I. Settings include a host of control adjustments (including complete control remapping), visual aids like magnification and high contrast modes, features that make navigating the world easier like a ledge guard to keep you from falling to your death, a text-to-speech reader, audio cues, extensive combat modifications and much more.
Sony / Naughty Dog
It’s all present in Part I, along with a new feature that delivers haptic feedback on the controller to help deaf or hard-of-hearing players feel the emphasis in how lines of dialog are delivered. The game also includes audio descriptions for cutscenes, something that wasn’t present in Part II. All these accessibility modifications are important additions and things that any player can appreciate if they want to customize their experience with the game.
At a more basic level, Part I also lets you set a custom difficulty level. There are six options, but you can also set different challenges across five categories: player, enemies, allies, stealth and resources. So you could make it a little easier to stay in stealth, or make resources more plentiful while otherwise keeping enemy aggressiveness high, for example. It’s yet another way to tweak your experience to match your skill level.
I'd be remiss if I didn't mention that virtual photographers will love Photo Mode in The Last of Us Part I. It's even better than it is in Part II thanks to the addition of three lights that you can place anywhere around a scene to make things even more dramatic. You can adjust the color temperatures, brightness, position and many more options to customize the scene further than ever before. I can't wait to see what the incredibly skilled virtual photography community around these games does with Part I. (All screenshots in this review, with the exception of those credited to Sony, were taken by me using the game's Photo Mode.)
Is Part I worth it, and who is it for?
After going through the many things Naughty Dog added and changed for The Last of Us Part I, the $70 price point doesn't bother me as much as it initially did. Yes, that’s a lot of money for a game, and it's fair to ask whether replaying a game with nine-year-old mechanics should cost that much. If Sony and Naughty Dog had priced this at $50 or even $60, it would have been much harder to take issue with. Even at $70, though, the sheer breadth of changes and significance of things like the new visuals and accessibility options make this a major improvement over the remastered PS4 version.
In fact, I’d go so far as to say that this is the definitive version of The Last of Us. I know the original game inside and out, and everything that made it one of my favorites is here; the changes Naughty Dog made do nothing to diminish that original experience, only improve it. If the company had gone further and redesigned levels or made more extensive changes to gameplay mechanics, I don’t know if I’d feel the same. There’s something to be said for the purity of the original vision, and that’s fully intact. It just looks and plays better than ever, and the accessibility features mean more people can enjoy it.
That said, this game definitely isn’t for everyone. If you played The Last of Us and haven’t felt the need to revisit it, Part I won’t change your mind. The story is identical, and the combat and exploration formula is essentially unchanged.
But in a world where The Last of Us is going to premiere as a high-profile HBO series sometime in 2023, it's not surprising to see Sony and Naughty Dog revisit this game. The companies are surely expecting increased interest in the franchise, and having a beautiful, modern version of the game ready for new players makes a lot of sense. For those people new to the series, this is the version to play. And if you’re a big fan of the game, the kind of person who goes back to Joel and Ellie’s story every year or two (like yours truly), this is the best way to do it.
In the wake of Roe v. Wade being overturned, Google announced that it is making it easier to use its Maps and Search products to find medical providers that offer abortions. When someone searches for specific services and Google has confirmation that a location provides those services, it'll be clearly labeled in Search and Maps. For example, Google notes that it already does this when you search for EV charging stations or a specific COVID-19 vaccine brand, and now it'll do so for veterans hospitals and healthcare facilities that provide abortions. Searching for "abortion clinics near me" will bring up a list of locations that Google has confirmed provide abortion services.
While Google isn't coming right out and saying it, this seems to be an effort to avoid sending people who are searching for medical care to so-called "abortion crisis centers" or anti-abortion centers. For locations that Google doesn't have confirmation for, it'll instead say "might not provide abortions," which serves as a passive but still significant red flag for anyone looking for treatment. In 2018, the Supreme Court ruled that anti-abortion centers couldn't be forced to acknowledge abortion as an option, something that outsources that work to private companies like Google.
Earlier this week, Yelp announced it would take a similar but more definitive step by adding warning labels to anti-abortion centers; these warnings note that such facilities "may not have licensed medical professionals on site." Google says its update has been in the works for months, but it comes at a time when employees have said the company needs to do more to protect both users and its contract workers in a post-Roe world.
The fact that Google is getting confirmation from centers that provide abortions should make both Search and Maps results more useful. A Google spokesperson said that "we get confirmation that places provide a particular service in a number of ways, including regularly calling businesses directly and working with authoritative data sources." So users should be able to trust that medical centers that are labeled as providing abortions do in fact offer the service. Google is also making it easy to expand your search if you don't find relevant results nearby with a "search farther away" prompt, another tool that can help people looking for abortion services where clinics are rare.
Google says these changes will start rolling out today, though it may be a bit before they're visible to all Google users.
Just as it is to Eddie Munson in Stranger Things 4, Metallica's "Master of Puppets" is, to me, the “most metal ever.” I spent my teen years obsessively learning the guitar, and Metallica was one of my biggest influences. The combination of vocalist and rhythm guitarist James Hetfield's thrash riffs and progressive song structures along with lead guitarist Kirk Hammett's shredding gave me plenty to try and master. I was never quite fast or precise enough to fully nail Metallica's hardest songs, but I could do a pretty decent impression when I was on my game.
Some 20-plus years later, I am decidedly not on my game, having only played sporadically over the last decade. I've tried getting back into playing in fits and starts, but nothing has really stuck. Just recently, though, Finnish company Yousician came on my radar thanks to a collaboration with — who else? — Metallica.
At a high level, the Yousician software listens to your guitar playing and matches it to the lesson or song you're trying to play, giving you a higher score depending on how accurate you are. The app features courses and songs for guitar, piano, bass, ukulele and vocals, but my time was only spent on the guitar section.
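Yousician hasn't published how its note matching works, but the core idea of checking a played note against the one in the lesson can be sketched with a basic autocorrelation pitch estimator. Everything below (the function name, parameters and synthetic test tone) is an illustration of the general technique, not Yousician's actual code:

```python
import numpy as np

def estimate_pitch(signal, sr, fmin=80.0, fmax=1000.0):
    # Autocorrelation peaks at lags that are multiples of the waveform's
    # period, so the strongest peak in the allowed lag range gives the pitch.
    sig = signal - signal.mean()
    corr = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(corr[lo:hi]))
    return sr / lag

sr = 22050
t = np.arange(4096) / sr
open_a = np.sin(2 * np.pi * 110.0 * t)  # the open A string (A2, 110 Hz)
detected = estimate_pitch(open_a, sr)   # close to 110 Hz
```

A real app would layer onset detection and timing comparison on top of this, and would need to cope with chords and amp noise, but monophonic pitch tracking along these lines is the foundation.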
For people who've never played before, there are loads of introductory lessons — but the most interesting thing about Yousician for someone like me are the song transcriptions. The app is loaded up with tons of popular songs that have, in my limited testing, fairly accurate transcriptions that help you learn to play along with the original recording. Queuing a song up brings up a continuously scrolling tablature overview of the song; play along with it and Yousician will try and tell you if you hit a chord right on the beat, whether you're a little early or late or whether you blew it completely.
From what I can tell, the vast majority of the music on Yousician has been recorded by session musicians — so you're not playing along to the original Nirvana or Foo Fighters tracks, but a well-recorded, though somewhat soulless, reproduction. That's OK, as these exercises work well enough for learning a song, and then you can just go play along with the original once you have it perfected.
But the Metallica course is different, and far more compelling. Yousician got access to the master recordings for 10 of the band's songs, which means you're learning from and playing along with the original songs you (presumably) love.
The Metallica portion of Yousician isn't limited to learning specific songs, however. There are three courses to play through: Riff Life, Rock in Rhythm and Take the Lead, each of which dives into a different aspect of the band's music. Each of those courses, in turn, has a handful of lessons focused on a song and the skills needed to play it. There are also videos featuring members of the band talking about the overarching concept. While James and Kirk aren't literally teaching you the songs, it's still great to see them play up close and personal and hear about how they approach writing and performing.
For example, the "Rock in Rhythm" course has a whole section on downpicking, a more percussive and aggressive way of using your picking hand that has come to define much of Metallica's riffs and heavy metal music in general. Seeing James Hetfield perform some of his most complicated and fast riffs in great detail is an absolute treat.
Mixed in with these videos are lessons that focus on a specific part of a song. The Riff Life course starts things out extremely simple, with the key riffs to songs like "For Whom the Bell Tolls," "Nothing Else Matters" and "Enter Sandman." These lessons follow a pretty standard format. First, you'll listen to the isolated guitar part to get it in your head, sometimes accompanied by a Yousician instructor showing you how to approach the song. After that, you play the part in the context of the song, starting out slowly and then gradually speeding up to play it at full speed. Then, to complete the lesson, you perform the complete song.
For that last option, Yousician offers multiple ways to move forward. If you're a beginner, you can play simplified versions of the song — but Yousician also includes full versions of the rhythm guitar track or a combo of the rhythm and lead parts. If you're just learning the song for the first time, you're not going to want to jump right into those versions. But if you're up for the challenge, the practice mode helpfully divides the song up into sections like intro, verse, chorus, solo and so forth. You can slow the song down, work on those sections, and then string the entire thing together. The app uses time stretching so that the music’s pitch isn’t affected.
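Production time stretching is usually done with a phase vocoder; as a rough sketch of the simpler overlap-add idea behind it, here's a toy stretcher (the function name and parameters are my own illustration, not Yousician's implementation):

```python
import numpy as np

def time_stretch(signal, rate, frame=2048, hop=512):
    # Overlap-add: read analysis frames from the input at `rate` times the
    # synthesis hop, so rate=0.5 doubles the duration. Samples are reused at
    # their original sample rate, just repositioned in time, so pitch is
    # unchanged.
    window = np.hanning(frame)
    n_frames = int((len(signal) - frame) / (hop * rate))
    out = np.zeros(n_frames * hop + frame)
    norm = np.zeros_like(out)
    for i in range(n_frames):
        read = int(i * hop * rate)   # position in the input
        write = i * hop              # position in the output
        out[write:write + frame] += signal[read:read + frame] * window
        norm[write:write + frame] += window
    norm[norm < 1e-8] = 1.0          # avoid divide-by-zero at the edges
    return out / norm

sr = 22050
tone = np.sin(2 * np.pi * 440.0 * np.arange(sr) / sr)  # one second of A4
slow = time_stretch(tone, rate=0.5)  # roughly twice as long, same pitch
```

Naive overlap-add like this introduces phase artifacts that real practice tools smooth out, but it shows why slowing a track down doesn't drop its pitch the way slowing a tape would.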
As someone already familiar with the Metallica songs included, I can tell Yousician has done an impressive job with these full transcriptions. I've already picked up some tricks and learned a few improved ways to play these songs, even for very simple parts like the opening riff to "Enter Sandman." I've known that song basically since I first picked up a guitar, but Yousician identified that Hetfield plays the riff with his left hand in a fairly unconventional finger position, one that is not simple but makes the notes ring out clearer once you master it.
The lead guitar parts are also impressively detailed, considering how fast and complex some of Hammett's solos can be. This is a case where I'm sure it helped to have access to Metallica's master recordings for these songs; being able to isolate parts and slow things down makes the learning process much more accessible and also likely made a difference in the accuracy of the transcriptions. While I can't say that the notation for extremely fast solos like those in "One" or "Battery" is 100-percent accurate, it should be good enough for a convincing performance.
A screenshot of the guitar tablature for the guitar solo in the Metallica song "One."
Unfortunately, I ran into some problems when trying to tackle the aforementioned epic, “Master of Puppets.” While I was working my way through the downpicking lessons, I was presented with the riff played during the main verse. Whether through my own ineptitude, Yousician not “hearing” me well enough or some other unknown issue, I simply could not play the riff accurately enough to move forward. It’s definitely a fast one, but even at slowed down speeds, Yousician consistently didn’t recognize that I was hitting the sliding power chords that anchor the end of the riff. A colleague of mine had previously tried Yousician and had a similar problem with the app not recognizing his playing, which can be a major bummer if you’re trying to ace each lesson.
I can’t say why this happened with this particular riff. Yousician did a good job at hearing me play the song’s introduction, which is equally fast and pretty complex in its own right. There seemed to be something specific to those sliding chords that the app had a hard time picking up. I’m not well-practiced enough to attempt the fastest solos the Metallica course offers, so I can’t say how well it’ll pick those up, but it did a fine job of recognizing the quick, arpeggiated licks near the end of the “Fade to Black” solo. Yousician did a better job of picking things up when I plugged my guitar straight into my computer using the iRig 2 interface. But since I don't usually go straight into my computer, I didn't have any virtual amps or effects set up, which meant playing wasn't nearly as much fun as it is through my amp.
Despite these occasional issues, I really enjoyed the Yousician Metallica course. Whether it’s worth the money is another question altogether: Yousician costs $140 a year or $30 a month. That’s not cheap, but it’s less expensive than the private guitar lessons I took 20 years ago. Obviously, Yousician can’t tailor its lessons to me, but I’m still impressed with the attention to detail and comprehensive nature of the Metallica course, and there’s a host of other things I could play around with, too. Between the accuracy of the transcriptions, a solid song selection and the ability to slow down tracks for practicing, there’s a lot to like here.
It certainly would have been a fantastic tool when I was learning the guitar as a teenager, but in 2022, there are plenty of options for learning your favorite songs. That’s probably the biggest catch with Yousician. Most people will probably be happy to view YouTube instructional videos and look up transcriptions for free online. I just did a quick search for “Master of Puppets guitar lesson” and found a host of excellent videos, including one multi-parter where the instructor spent ten minutes just demonstrating the first two riffs. It was a thorough, detailed lesson from someone who clearly knows the song as well as Metallica’s approach to playing in general.
That said, I’d still encourage Metallica fans to check out a monthly subscription to Yousician. The song selection spans simpler tracks to some of their toughest material, making it useful regardless of your skill level. The video content is entertaining and informative; you don’t often get to see a band speaking so candidly about their approach to playing their instruments. And as good as some YouTube lessons are, being able to look at and play along with detailed tablature transcriptions of extremely fast guitar solos makes the learning experience much better. Those transcriptions combined with the original Metallica master tracks that you can slow down or speed up as needed are an excellent practice tool. For anyone looking to unleash their inner Eddie Munson, Yousician’s Metallica course is a solid place to start.
Ever since Apple launched the App Store, developers big and small have gotten caught up in the company's approval process and had their apps delayed or removed altogether. The popular messaging app Telegram is just the latest, according to the company's CEO Pavel Durov. On August 10th, Durov posted a message to his Telegram channel saying the app's latest update had been stuck in Apple's review process for two weeks without any real word from the company about why it was held up.
As noted by The Verge, the update was finally released yesterday, and Durov again took to Telegram to discuss what happened. The CEO says that Apple told Telegram that it would have to remove a new feature called Telemoji, which Durov described as "higher quality vector-animated versions of the standard emoji." He included a preview of what they would look like in his post — they're similar to the basic emoji set Apple uses, but with some pretty delightful animations that certainly could help make messaging a little more expressive.
"This is a puzzling move on Apple's behalf, because Telemoji would have brought an entire new dimension to its static low-resolution emoji and would have significantly enriched their ecosystem," Durov wrote in his post. It's not entirely clear how this feature would enrich Apple's overall ecosystem, but it still seems like quite the puzzling thing for Apple to get caught up over, especially since Telegram already has a host of emoji and sticker options that go far beyond the default set found in iOS. Indeed, Durov noted that there are more than 10 new emoji packs in the latest Telegram update, and said the company will take the time to make Telemoji "even more unique and recognizable."
There are still a lot of emoji-related improvements in the latest Telegram update, though. The company says it is launching an "open emoji platform" where anyone can upload their own set of emoji that people who pay for Telegram's premium service can use. If you're not a premium user, you'll still be able to see the customized emoji and test using them in "saved messages" like reminders and notes in the app. The custom emoji can be interactive as well — if you tap on them, you'll get a full-screen animated reaction.
To make it easier to access all this, the sticker, GIF and emoji panel has been redesigned, with tabs for each of those reaction categories. This makes the iOS keyboard match up with the Android app as well as the web version of Telegram. There are also new privacy settings that let you control who can send you video and voice messages: everyone, contacts or no one. Telegram notes that, like its other privacy settings, you can set "exceptions" so that specific groups or people can "always" or "never" send you voice or video messages. The new update — sans Telemoji — is available now.
Facebook and Apple have been at odds for several years now; Apple announced back at WWDC 2020 that iOS would require apps to ask users to opt in to cross-app advertising tracking. Facebook spent much of the following months speaking out against Apple's plans and predicting revenue instability due to the upcoming changes, but the feature was released in iOS 14.5 back in April of 2021. Somewhat surprisingly, though, a new report from The Wall Street Journal claims that before this all went down, Facebook and Apple were working on a partnership and revenue-sharing agreement.
According to the Journal, Apple and Facebook were considering a subscription service that would offer an ad-free version of the platform. And since Apple takes a cut of in-app purchases, including subscriptions, it could have been a very lucrative arrangement indeed.
Another arrangement that was discussed and ended up being a point of contention was Apple taking a cut of "boosted posts," which essentially amounts to paying to put a post in front of a larger audience. Facebook has long considered boosted posts part of its advertising portfolio; as the Journal notes, small businesses often use boosted posts to reach more people. The issue came down to Apple saying boosts should be considered in-app purchases, which would be subject to the 30 percent revenue cut that the company takes. Facebook, on the other hand, maintained that those were advertising products which aren't subject to Apple's cut.
According to research firm Insider Intelligence, 37 percent of iPhone users have opted in to letting companies track their activity across apps since Apple rolled out its user-tracking changes in 2021. Since the change went into effect, Facebook (now Meta) has seen its revenue growth shrink significantly — and last quarter, Meta reported the first revenue decline in the company's history.
As these discussions reportedly took place between 2016 and 2018, they're now well in the past. Apple is doing its best to position itself as a defender of privacy, and Meta... well, Meta is busy trying to make the metaverse a thing. But for now at least, advertising is the only notable way Meta makes revenue, so the company will have to continue to adjust to a world in which iOS app tracking protection is a thing that most users take advantage of.
Facebook parent company Meta has just reported its earnings for the second quarter of 2022, and it was another quarter of shrinking profits. Total revenue of $28.8 billion was only down one percent compared to Q2 one year ago, but net income dropped 36 percent to $6.7 billion. Making almost $7 billion in profit is not a bad quarter for anyone, but the size of the decline compared to a year ago is pretty significant. And, according to the Wall Street Journal, this is the first-ever drop in revenue for Meta / Facebook — so even though we're only talking one percent, it's still noteworthy.
Revenue from advertising and Meta's "family of apps" was essentially flat year-over-year, and Reality Labs (home to hardware like the Meta Quest and other metaverse-related initiatives) actually grew 48 percent year-over-year to $452 million. But Reality Labs accounted for a $2.8 billion loss this quarter, a 15 percent larger loss than Q2 one year ago. At this rate, it seems likely that Reality Labs will lose Meta more than the $10 billion it cost the company in 2021.
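The year-over-year percentages above can be sanity-checked with some quick arithmetic (figures in billions of dollars; the reported percentages are rounded, so the implied prior-year numbers are approximate):

```python
# Back-of-the-envelope check of the year-over-year figures reported above.
q2_2022_revenue = 28.8                                 # down ~1% YoY
q2_2021_revenue = q2_2022_revenue / (1 - 0.01)         # implies ~29.1

q2_2022_net_income = 6.7                               # down ~36% YoY
q2_2021_net_income = q2_2022_net_income / (1 - 0.36)   # implies ~10.5

# Reality Labs lost $2.8B this quarter; four quarters at that pace would
# exceed the roughly $10B the unit cost Meta across all of 2021.
annualized_rl_loss = 2.8 * 4                           # 11.2 > 10
```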
This comes the same day that the FTC announced it was seeking to block Meta's acquisition of 'Supernatural' VR workout app maker Within, a proposed sale that was announced last year. “Instead of competing on the merits, Meta is trying to buy its way to the top,” John Newman, deputy director of the FTC's Bureau of Competition, said in a statement.
Meta is holding a call with investors at 5PM ET, and we'll be listening in to hear comments from CEO Mark Zuckerberg and will update this post with anything we learn.
Later this year, Google Photos is going to get a significant update that has the distinction of first arriving on Chromebooks. According to a Google blog post, Google Photos will get a new movie editor and video editing features this fall as part of an update to Chrome OS. From the sound of things, it’ll let users make videos similar to the highlight clips the app already automatically makes. You’ll be able to select a theme as well as people or pets you want to feature in it; from there, Google Photos will pull together a movie using video clips and images from your library. It’ll be smart enough to scan longer videos and pull out specific clips to include in these new creations as well.
While it’s no surprise that Google is including an automated tool, the company is also including the ability to start from scratch, adding video clips and photos in any order you like. The app will let you adjust things like brightness and contrast, trim clips as you see fit, add title cards and music and apply Google’s Real Tone filters that work better with non-white hairstyles and darker skin.
Google isn’t saying yet if these video editing features will come to the mobile apps for iOS and Android, but Google Photos has usually had feature parity regardless of platform, so it wouldn’t be a surprise to see these tools expand past Chromebooks before long. In fact, the video editor will be built into an optimized version of the Android Google Photos app specifically built for larger screens. The app will also seamlessly work with the Files and Gallery Chrome OS apps, so you can open a video in the Gallery app and immediately move it right over to Google Photos for editing or including in a new creation.
There are numerous other handy updates planned for Chrome OS coming in the next few months. Another new Google Photos feature will allow Chromebooks to access your library and use those pictures for background wallpaper; like other Chrome OS wallpaper options, you can pick a specific album and set it to change daily. The aforementioned Gallery app is going to get PDF editing features, so you can fill out forms and sign them if you’re using a Chromebook with a stylus. That feature is coming next week. There’s also a new Cursive app for capturing and organizing hand-written notes; those can be copied and pasted into other apps or exported to PDFs, depending on how you need to share them.
Chrome OS is also getting a new dark mode, something that’s been rumored for a long time now. As you can on most other devices, you’ll be able to pick one mode or have it automatically switch based on the time of day. Some new wallpapers will also come with light and dark versions that automatically switch depending on which theme you use, too.
Finally, Google is making a few productivity improvements to Chrome OS. Clicking on the date in the Chromebook shelf will pop up a monthly calendar view; you can choose a date to see your Google Calendar events without having to open the app or website. And Chrome OS will let you save virtual desk setups, so if you have a specific set of tabs and apps you use frequently, you can call them up and dismiss them as needed.
Most of these updates should be coming in August, though Google specifically noted the virtual desk update won’t be available until late September. And the Google Photos video editing tools are set to arrive in the “fall” — hopefully sooner rather than later.