One of the first images from the James Webb Space Telescope released by NASA was the "sharpest infrared image of the distant universe to date." It's a wondrous photo showing a detailed cluster of galaxies. It's also currently being used by bad actors to infect systems with malware. Security analytics platform Securonix has identified a new malware campaign that uses the image, and the company is calling it GO#WEBBFUSCATOR.
The attack starts with a phishing email containing a Microsoft Office attachment. Hidden within the document's metadata is a URL that downloads a file with a script, which runs if certain Word macros are enabled. That, in turn, downloads a copy of Webb's First Deep Field photo (pictured above) containing malicious code masquerading as a certificate. In its report on the campaign, the company said that all the antivirus programs it tested were unable to detect the malicious code in the image.
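Securonix hasn't published the full embedding details, but one generic red flag for image-borne payloads is extra data appended after a JPEG's End-of-Image marker. What follows is a minimal, defensive sketch of that check in Python; the filename is hypothetical.

```python
# Minimal check for data appended after a JPEG's End-of-Image (EOI) marker,
# a common way payloads get smuggled inside an otherwise valid picture.
from pathlib import Path

def bytes_after_jpeg_eoi(path: str) -> int:
    data = Path(path).read_bytes()
    eoi = data.rfind(b"\xff\xd9")   # JPEG End-of-Image marker
    if eoi == -1:
        return -1                   # no EOI found: not a complete JPEG
    return len(data) - (eoi + 2)    # trailing bytes after the marker

extra = bytes_after_jpeg_eoi("webb_deep_field.jpg")  # hypothetical filename
if extra > 0:
    print(f"Suspicious: {extra} bytes appended after the image data ends")
```

A clean photo will usually report zero trailing bytes here, even a very large one, which is one reason appended payloads can slip past signature-based scanners while still being easy to spot if you look.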
Securonix VP Augusto Barros told Popular Science that there are a couple of possible reasons why the bad actors chose to use the popular James Webb photo. One is that the high-resolution images NASA had released come in massive file sizes and can evade suspicion in that regard. Also, even if an anti-malware program flags it, reviewers might pass it over since it's been widely shared online in the past couple of months.
Another notable thing about the campaign is that it uses Golang, Google's open-source programming language, for its malware. Securonix says Golang-based malware is rising in popularity because it offers flexible cross-platform support and is more difficult to analyze and reverse engineer than malware written in other programming languages. As with other malware campaigns that start with a phishing email, though, the best way to avoid becoming a victim of this attack is to avoid downloading attachments from untrusted sources.
Ten years ago, Teenage Engineering made a splash with the quirky, "boutique" OP-1 synthesizer. The b-word gets quotes because the OP-1 would go on to be a huge hit, enjoying a 10-year run and several restocks along the way. The success of the OP-1 and the equally unique products that followed saw big brands lining up to collaborate in the hope that some of that design magic might do wonders for their own products. Today, Teenage Engineering's unique style can be found in everything from adorable gaming consoles to living rooms and pant pockets across the globe.
Back here in 2022, the company has unveiled the sequel to the synth that started it all: the OP-1 Field ($1,999). The new "Field" line also includes the TX-6 mixer ($1,199), and it looks like the company is repositioning itself with a new design aesthetic and… price range. Teenage Engineering has always charged a premium for its genre-bending, playful design, but given that the original OP-1 cost around $800 at launch and that the OP-1 Field is largely based on it, it's understandable that some loyal fans are feeling a little… priced out this time around.
Perhaps the bigger question is: will the new Field series offer enough magical music dust and Nordic design delight not only to justify the expense, but also to keep Teenage Engineering's unique approach to making music relevant to new and existing artists alike?
@Random Mcranderson (YouTube) - I think what TE missed is the overall negative effect on their reputation this has had.
Introducing the TX-6 and OP-1 Field
Before we can try and understand “Field” and what it means for the company, we should probably get a feel for the latest two products that are dividing fans in comments sections and on forums. We’ll start with the TX-6 as that’s an entirely new product for Teenage Engineering.
In the simplest terms, the TX-6 is a tiny battery-powered mixer and audio interface. Despite its diminutive size, it has six stereo inputs, a built-in synth, eight effects, a "DJ mode," an instrument tuner and wireless/Bluetooth MIDI control. For something as portable as a deck of cards, that's quite impressive. The TX-6 could be your main desktop audio interface by day and the beating heart of your hyper-portable (or not) multi-synth studio by night.
Unsurprisingly, it's particularly well suited to connecting and mixing smaller studio gear. Teenage Engineering's own products are an obvious match, but Korg's Volca range and Roland's many compact synths and drum machines are also a good fit. Ultimately, anything you can wrangle into a 3.5mm line-level output is fair game here. Anything with an XLR connection or that requires phantom power is going to be a challenge, of course. For outputs, there are main, aux and "cue" (for DJ mode).
@Pretty.mess (Gear Space) If I had enough money to buy one of these, I would probably buy something else. But I do love TE and the built in sequencer and synth looks interesting.
As is often the case with Teenage Engineering products, there are some features you might not normally expect. On the TX-6, that would be the synth engine, which includes drum sounds. Without a MIDI controller you can't play it chromatically, but it's unusual to see creative tools like this in a mixer, and it provides a way to poke out ideas right on the device. The inclusion of Bluetooth MIDI feels in keeping with the portable form factor, and a recent firmware update added recording of the mixer's output directly to USB drives, which means you can lay down tracks without even needing a phone to record into. One might argue some internal storage could have been included for the price, but we presume the density of the hardware doesn't allow for it (we hope).
The OP-1 Field, on the other hand, is very clearly a successor to the decade-old OP-1. The launch was sold as the new synth being “100 times” better with a list of 100 new features or improvements. In practical terms, the main upgrades seem to be much improved on-board storage, new “tape” modes (more on this later), full stereo signal chain, a new synth engine, a new reverb effect, an improved display, longer battery life and 32-bit float recording.
@Tarekith (OP-Forums) not everything needs to be aimed at the lower end of the market. This is a massive update for an already very capable instrument.
With the Field, the OP-1 has been refreshed to bring it up to date after a decade of user feedback. And this includes squashing some long-standing limitations of the original. Most notably, the ability to work on multiple projects on the device without the hassle of backing them up to a PC. Yep, the original OP-1 could only record one song (or more accurately, one "tape") at a time. Another big one, particularly for the synth and drum engines, is the introduction of stereo.
To show this off, there's a new synth engine called "Dimension." It joins the other 10 that were on the original (which already covered most bases). Dimension is a subtractive/analog-style synth with a variable waveform (it gradually changes from various pulse styles through sawtooth and then noise). It also has a chorus feature for a fuller stereo sound. It does a pretty good job of replicating analog-style sounds as well as lush pads and even some horns and wind instruments.
With so many different synth engines you're not short of choice, but the OP-1 is sometimes considered to have quite a cold, digital sound. This is true to a degree, but – as with most things on this synth – there are creative ways to get around that if you know where to look.
Instead of including an internal sequencer, Teenage Engineering decided to imitate recording to tape on the OP-1, with just four tracks and a time limit of six minutes per track. The analogy is taken seriously: there's no "undo" or "copy/paste." Instead, you "lift" tape and drop it back elsewhere. There are modern concessions, so it's not just about making life needlessly hard. It's a very different way of making music, one that urges you to build songs in a way that most software's infinite options and endless tracks, ironically, don't.
But it's precisely this unusual approach, with anachronisms like "tape" and imitated physical limitations in a digital environment, that arguably makes Teenage Engineering products stand out. Most electronic music production these days likely happens in software on laptops. As computers became powerful enough to mimic outboard gear, often the only hardware you might see producers using was a MIDI controller for playing software instruments. In the last decade, at least, there's been a steady re-emergence of hardware at the center of the workflow. But little of it employs a workflow as restrictive as the OP-1's.
@ellisedwardsx (Reddit) Love it. Everything cool about Op-1 to me but minus the things I didn’t like. I love the new additions.
“It favors those that have playing skills,” YouTuber and OP-1 expert Cuckoo told Engadget, referring to the OP-1's live tape-recording approach. “Like if you want to be incredibly immediate, and you appreciate that, then it's for you,” he added.
Cuckoo, like many fans of Teenage Engineering, sees the lack of things like a MIDI sequencer or the ability to add and remove effects at any stage in the creation process as a good thing. A typical DAW lets you move single notes around or change almost anything at any time, which "feels" more useful (and it's how most modern production tools work). The OP-1, by contrast, is a lot more committal. Once an idea is recorded to tape, you're limited in what can be done with it. But for some, that's what makes it so exciting. Everything you do nudges you forward in the song-making process or, at the very least, keeps you from burning several hours trawling through VST presets, as often happens in something like Ableton Live.
“Teenage Engineering, they're very good at minimizing your options, and in a good way. Like on a computer, everything is possible. But because everything is possible, it's not like one optimized workflow. You need to find that workflow for yourself. And most people probably don't create like, a tight workflow,” Cuckoo said.
While this alternative way of working has its fans, it can feel like learning a new language if you're used to a more conventional DAW-plus-MIDI setup. At the beginning, at least, you'll almost certainly spend as much time googling for answers as you do actually creating. Before long, though, you'll start noticing the exciting things you can do that your faithful old DAW may never have put in your mind (even if it's something it can do).
Take the built-in FM radio on the OP-1, for example, as Cuckoo illustrates. “Sometimes I've been performing with an OP-1, and been sampling from the FM radio, chopping up a [drum] kit, making a song, making an improvisation in like, maybe seven minutes or so?” Making a song based on sampling the radio in under 10 minutes is not something most gear inspires you to do.
Most gear doesn't have an FM radio, to be fair.
You can also use that radio in other, weird creative ways. You could use it to modulate an LFO, for example (so the song on the radio is controlling a filter or other parameter). Or you can create synth sounds using a random FM sample looped and twisted in creative ways. You can also broadcast over FM (albeit incredibly short distances), which works perfectly with Teenage Engineering’s OB-4 speaker, which also has an FM receiver.
@finc (Reddit) But what is a tiny low powered FM transmitter for?
This is really where Teenage Engineering excels: adding playful touches that open creative opportunities you might not find elsewhere. Along with the FM radio, the OP-1 Field has a gyroscope that can also be used as a modulator, which makes it exciting for live performances.
And while Bluetooth LE MIDI is becoming more common, it makes particular sense in the portable form factor of the TX-6 and the OP-1 Field. Using the TX-6 wirelessly with the OP-1 Field was easy to set up and felt very natural. In fact, both play nicely with iOS natively too, so if you already have a suite of mobile apps you enjoy using, you can carve out quite a capable mobile studio with a good mix of hardware and software alike.
Using the OP-1 Field together with the TX-6 does feel decadent. The 3.5mm inputs on the mixer naturally make you think twice about what you might plug into it. I could plug in a full-size synth, or maybe something like the MPC Live II, but that would also be a bit bizarre. A relatively comprehensive mixer it might be, but its size begs you to, well, take it outside, we suppose.
“I think ‘Field’ leads my mind to think about field recordings and to be out in the field, work outside of the office… music that is portable. And I think it's very obvious especially if you look at the TX-6 mixer [...] this thing is so incredibly well designed. And it's hard to convince people that get angry when they see the price tag that how incredibly well engineered it is.” Cuckoo said.
@Brokener Than (YouTube) The personal aspect of buying the OP-1 and what you get out of it is really the only justification you need to overpay for an instrument.
But if portability is the key behind the “Field” moniker, Teenage Engineering has to convince people to part with thousands of dollars when there are apps and even mixers that can combine to do something similar for a fraction of the price.
“A lot of people say, well, ‘you can do all of this with Ableton Live and a computer that costs half the price!’ But it's not the point. It's, I think, if the result is all that counts, the end product you can do a lot of stuff with an iPad, and some apps are free even. The result is not everything, it’s also mastering a device. Like, playing this live is a joy.” Cuckoo said.
Take a look around on YouTube at who is actually using the OP-1 or the TX-6. If you look (and listen) long enough, you'll start to see that there has long been a corner of the music-making world that doesn't feel at home with the pads of an MPC or the endless expandability of Ableton Live. A group that doesn't want to get pulled into the world of modular synthesis or circuit bending. People who have a fondness for alternative methods and an appreciation for well-designed hardware. As long as they have the means.
It's hard not to get sucked into the OP-1 Field. I personally struggled with the workflow at first, but I find it intriguing. Something tells me that if I stick with it and break my old MIDI/DAW habits, great rewards await. The TX-6 mixer, on the other hand, is a harder sell, while still somehow incredibly appealing. It makes a lot more sense if you're already flush with portable gear. Perhaps the important question is: what's next in the Field series? An OP-Z Field? Some high-end Pocket Operators? Something completely different? Whatever it is, it could be the company's most crucial device yet, or the thing that ultimately alienates its loyal fan base.
Just as it is to Eddie Munson in Stranger Things 4, Metallica's "Master of Puppets" is, to me, the “most metal ever.” I spent my teen years obsessively learning the guitar, and Metallica was one of my biggest influences. The combination of vocalist and rhythm guitarist James Hetfield's thrash riffs and progressive song structures along with lead guitarist Kirk Hammett's shredding gave me plenty to try and master. I was never quite fast or precise enough to fully nail Metallica's hardest songs, but I could do a pretty decent impression when I was on my game.
Some 20-plus years later, I am decidedly not on my game, having only played sporadically over the last decade. I've tried getting back into playing in fits and starts, but nothing has really stuck. Just recently, though, Finnish company Yousician came on my radar thanks to a collaboration with — who else? — Metallica.
At a high level, the Yousician software listens to your guitar playing and matches it to the lesson or song you're trying to play, giving you a higher score depending on how accurate you are. The app features courses and songs for guitar, piano, bass, ukulele and vocals, but my time was only spent on the guitar section.
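Yousician's listening engine is proprietary, so what follows is just a minimal sketch of the general idea behind "listen and score": estimate the pitch of what was played and compare it to the note the lesson expects. It uses the open-source librosa library; the filename and target note are hypothetical.

```python
# Rough sketch of pitch detection and scoring (not Yousician's actual engine).
import librosa
import numpy as np

y, sr = librosa.load("my_take.wav", sr=None)          # hypothetical recording of one note
f0, voiced, _ = librosa.pyin(y,
                             fmin=librosa.note_to_hz("E2"),
                             fmax=librosa.note_to_hz("E6"),
                             sr=sr)
played_hz = np.nanmedian(f0[voiced])                  # rough pitch of the take
target_hz = librosa.note_to_hz("A2")                  # note the lesson expects (hypothetical)
cents_off = 1200 * np.log2(played_hz / target_hz)     # 100 cents = one semitone
print(f"Played ~{librosa.hz_to_note(played_hz)}, {cents_off:+.0f} cents from the target")
```

A real app also has to track timing and handle chords, but the core loop is the same: detect what was played, compare it to the transcription, score the difference.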
For people who've never played before, there are loads of introductory lessons, but the most interesting thing about Yousician for someone like me is the song transcriptions. The app is loaded up with tons of popular songs that have, in my limited testing, fairly accurate transcriptions that help you learn to play along with the original recording. Queuing a song up brings up a continuously scrolling tablature view of the song; play along with it and Yousician will try to tell you whether you hit a chord right on the beat, whether you're a little early or late, or whether you blew it completely.
From what I can tell, the vast majority of the music on Yousician has been recorded by session musicians — so you're not playing along to the original Nirvana or Foo Fighters tracks, but a well-recorded, though somewhat soulless, reproduction. That's OK, as these exercises work well enough for learning a song, and then you can just go play along with the original once you have it perfected.
But the Metallica course is different, and far more compelling. Yousician got access to the master recordings for 10 of the band's songs, which means you're learning from and playing along with the original songs you (presumably) love.
The Metallica portion of Yousician isn't limited to learning specific songs, however. There are three courses to play through: Riff Life, Rock in Rhythm and Take the Lead, each of which dives into a different aspect of the band's music. Each of those courses, in turn, has a handful of lessons focused on a song and the skills needed to play it. There are also videos featuring members of the band talking about the overarching concept. While James and Kirk aren't literally teaching you the songs, it's still great to see them play up close and personal and hear about how they approach writing and performing.
For example, the "Rock in Rhythm" course has a whole section on downpicking, a more percussive and aggressive way of using your picking hand that has come to define many of Metallica's riffs and heavy metal in general. Seeing James Hetfield perform some of his most complicated and fast riffs in great detail is an absolute treat.
Mixed in with these videos are lessons that focus on a specific part of a song. The Riff Life course starts things out extremely simply, with the key riffs to songs like "For Whom the Bell Tolls," "Nothing Else Matters" and "Enter Sandman." These lessons follow a pretty standard format. First, you'll listen to the isolated guitar part to get it in your head, sometimes accompanied by a Yousician instructor showing you how to approach the song. After that, you play the part in the context of the song, starting out slowly and then gradually speeding up to full speed. Then, to finish the lesson, you perform the complete song.
For that last option, Yousician offers multiple ways to move forward. If you're a beginner, you can play simplified versions of the song — but Yousician also includes full versions of the rhythm guitar track or a combo of the rhythm and lead parts. If you're just learning the song for the first time, you're not going to want to jump right into those versions. But if you're up for the challenge, the practice mode helpfully divides the song up into sections like intro, verse, chorus, solo and so forth. You can slow the song down, work on those sections, and then string the entire thing together. The app uses time stretching so that the music’s pitch isn’t affected.
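Yousician hasn't said how its slow-down feature is implemented, but as a quick illustration of what time stretching means in practice, here's a minimal sketch using the open-source librosa library; the filenames are hypothetical.

```python
# Slow a practice clip to 75% speed without changing its pitch
# (a sketch of the general technique, not Yousician's implementation).
import librosa
import soundfile as sf

y, sr = librosa.load("practice_clip.wav", sr=None)       # hypothetical clip
y_slow = librosa.effects.time_stretch(y, rate=0.75)      # 0.75x speed, pitch unchanged
sf.write("practice_clip_75pct.wav", y_slow, sr)
```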
As someone already familiar with the Metallica songs included, I can tell Yousician has done an impressive job with these full transcriptions. I've already picked up some tricks and learned a few improved ways to play these songs, even for very simple parts like the opening riff to "Enter Sandman." I've known that song basically since I first picked up a guitar, but Yousician identified that Hetfield plays the riff with his left hand in a fairly unconventional finger position, one that isn't simple but makes the notes ring out more clearly once you master it.
The lead guitar parts are also impressively detailed, considering how fast and complex some of Hammett's solos can be. This is a case where I'm sure it helped to have access to Metallica's master recordings for these songs; being able to isolate parts and slow things down makes the learning process much more accessible and also likely made a difference in the accuracy of the transcriptions. While I can't say that the notation for extremely fast solos like those in "One" or "Battery" is 100-percent accurate, it should be good enough for a convincing performance.
A screenshot of the guitar tablature for the guitar solo in the Metallica song "One."
Unfortunately, I ran into some problems when trying to tackle the aforementioned epic, “Master of Puppets.” While I was working my way through the downpicking lessons, I was presented with the riff played during the main verse. Whether through my own ineptitude, Yousician not “hearing” me well enough or some other unknown issue, I simply could not play the riff accurately enough to move forward. It’s definitely a fast one, but even at slowed down speeds, Yousician consistently didn’t recognize that I was hitting the sliding power chords that anchor the end of the riff. A colleague of mine had previously tried Yousician and had a similar problem with the app not recognizing his playing, which can be a major bummer if you’re trying to ace each lesson.
I can’t say why this happened with this particular riff. Yousician did a good job at hearing me play the song’s introduction, which is equally fast and pretty complex in its own right. There seemed to be something specific to those sliding chords that the app had a hard time picking up. I’m not well-practiced enough to attempt the fastest solos the Metallica course offers, so I can’t say how well it’ll pick those up, but it did a fine job of recognizing the quick, arpeggiated licks near the end of the “Fade to Black” solo. Yousician did a better job of picking things up when I plugged my guitar straight into my computer using the iRig 2 interface. But since I don't usually go straight into my computer, I didn't have any virtual amps or effects set up, which meant playing wasn't nearly as much fun as it is through my amp.
Despite these occasional issues, I really enjoyed the Yousician Metallica course. Whether it’s worth the money is another question altogether – Yousician costs $140 a year or $30 a month. That’s not cheap, but it’s less expensive than the private guitar lessons I took 20 years ago. Obviously, Yousician can’t tailor its lessons to me, but I’m still impressed with the attention to detail and comprehensive nature of the Metallica course, and there’s a host of other things I could play around with, too. Between the accuracy of the transcriptions, a solid song selection and the ability to slow down tracks for practicing, there’s a lot to like here.
It certainly would have been a fantastic tool when I was learning the guitar as a teenager – but in 2022, there are a wide variety of options for learning your favorite songs. That’s probably the biggest catch with Yousician. Most people will probably be happy to view YouTube instructional videos and look up transcriptions for free online. I just did a quick search for “Master of Puppets guitar lesson” and found a host of excellent videos, including one multi-parter where the instructor spent ten minutes just demonstrating the first two riffs. It was a thorough, detailed lesson from someone who clearly knows the song as well as Metallica’s approach to playing in general.
That said, I’d still encourage Metallica fans to check out a monthly subscription to Yousician. The song selection spans simpler tracks to some of their toughest material, making it useful regardless of your skill level. The video content is entertaining and informative; you don’t often get to see a band speaking so candidly about their approach to playing their instruments. And as good as some YouTube lessons are, being able to look at and play along with detailed tablature transcriptions of extremely fast guitar solos makes the learning experience much better. Those transcriptions combined with the original Metallica master tracks that you can slow down or speed up as needed are an excellent practice tool. For anyone looking to unleash their inner Eddie Munson, Yousician’s Metallica course is a solid place to start.
Google searches with quotes just became much more useful if you're looking for the exact place words appear on a page. The internet giant has updated quote-based searches with page snippets that show exactly where you'll find the text you're looking for. You might not have to scroll through a giant document just to find the right phrase.
There are limitations. Searches with quotes might turn up results that aren't visible (such as meta description tags) or only show up in web addresses and title links. You might not see all of the mentions in a snippet if they're too far apart. You'll "generally" only see bolded mentions on desktop, and you won't see the bolding at all for specialized searches and results (such as image searches and video boxes). You may have to use your browser's on-page search feature to jump to the relevant keywords.
The company characterized the change as a response to feedback. It hesitated to make snippets for these searches in the past, as documents didn't always produce readable descriptions. This is an acknowledgment that people using quotes to search are sometimes "power users" more interested in pinpointing words than reading site descriptions.
TikTok is conducting a broader test of games in its all-conquering app. The company recently added a way for creators in some markets (including the US) to append one of nine mini-games to a video by tapping the Add Link button and choosing the MiniGame option. When viewers come across a video that links to a game, they can start playing it by tapping a link next to the creator's username.
“Currently, we’re exploring bringing HTML5 games to TikTok through integrations with third-party game developers and studios," a TikTok spokesperson told TechCrunch. One of the games is from Aim Lab, the maker of a popular aim training app of the same name. Its TikTok game is called Mr. Aim Lab’s Nightmare. TikTok's other partners on the initiative include developers Voodoo, Nitro Games, FRVR and Lotem.
None of the games have ads or in-app purchases at the minute and the project is in the early stages of testing. TikTok is looking to find out how (or if) creators craft content around them, and how users interact with the games. As The Verge notes, users can record their gameplay and share it in a fresh video.
Reports in recent months suggested TikTok was gearing up for a major push into gaming. Parent company ByteDance bought game developer Moonton Technology last year. TikTok teamed up with Zynga for an exclusive mobile game called Disco Loco 3D; a charity game called Garden of Good, through which players can trigger donations to Feeding America, became available on the US version of TikTok in June. TikTok previously tested HTML5 games in Vietnam.
Other major tech companies have made a push into mobile gaming, including Apple, Google and, more recently, Netflix. Zynga, of course, became a social gaming giant with the help of Facebook's massive reach, while Facebook moved into cloud gaming in 2020. It's no secret that Meta is trying to ape many of TikTok's features across many of its apps, so it's interesting to see TikTok taking a leaf out of Facebook's playbook on the gaming front.
We hope you weren't using Meta's experimental Tuned app to keep your relationship fires burning. Gizmodo reports Meta is shutting down Tuned on September 19th, and sign-up attempts for the couple-oriented app now produce errors. The company wasn't shy about its reasons for the move. In a statement to Engadget, a spokesperson said Meta's New Product Experimentation team winds down apps if they "aren't sticking."
Meta's (then Facebook's) NPE Team launched Tuned in April 2020 to give partners a "private space" where they could share feelings, love notes, challenges and music streams. The timing was apt (if unintentional) given the start of the COVID-19 pandemic. In theory, this helped distant couples cement their bonds when they couldn't connect in person.
It's not certain how many people used the app, though. While Meta brought the initially iOS-only software to Android and said there were "many couples" who used Tuned to get closer, there's little doubt Tuned remained a niche product compared to the likes of Facebook or Instagram. There's a good chance you're hearing about this app for the first time, after all. We'd add that there wasn't much point when you could text, video chat or otherwise use existing services to accomplish many of the same goals.
You might have seen this coming. Meta has routinely shut down experimental apps, and has even axed higher-profile apps when they didn't gain traction. These closures help the company save resources and focus on more popular platforms. As it stands, Tuned was increasingly an outlier for a tech giant shifting its attention from social networking to the metaverse.
Google Calendar has released an update for an issue it promised to fix three years ago. The "known senders" feature will finally let you block invitations from people you don't know, which can otherwise effectively spam up your calendar. With the "Only if the sender is known" toggle enabled under "Event settings," Calendar will automatically add invites only from people in your contacts list, people you've interacted with, or users on the same domain.
Normally, Google Calendar automatically adds events when you receive emailed invites, no matter who sends them. The only way to prevent this until now was to disable automatic event adding completely, forcing you to manually deal with each invite.
Now, you can have automatic invitations from folks you know while cutting off spam events like "Crypto meetup 9PM tonight" sent by some rando. Simply navigate to your Google Calendar settings, choose "Event settings" and open the "Add invitations to my calendar" dropdown. Then, select the option "Only if the sender is known."
You'll still receive spammy invites, but the new option lets you trash them before they ever see your calendar. Google notes that this may alert a sender that they're not in your contacts list, but that seems to be the only potential downside. It's a small but useful tweak, joining recent Google updates for Calendar, Gmail and other apps.
Google Drive is an incredibly powerful tool for storing and organizing all sorts of data. And best of all, it’s available to anyone with a Google account for free (at least to start). Additionally, because Drive holds all your files in the cloud, it offers some important advantages compared to stashing everything locally on your phone or PC. Drive also works on practically any device with an internet connection, which makes it easy to use at home, at school, in the office and everywhere in between.
However, if you’re new to Drive, there are some important basics you should know before you transfer over all your data and files. So here’s a quick rundown covering the most critical things about Google’s popular cloud storage service.
Storage and pricing
Every Google Drive user gets 15GB of free storage. However, any data you have saved in Google Photos also counts toward that limit. So if you're using that to back up your pictures, you may not have a ton of room left over for documents and files. That said, you can increase your storage in Drive via a Google One subscription, which starts as low as $1.99 a month (or $20 a year) for 100GB of storage and goes up to $9.99 a month (or $100 a year) for 2TB.
For most people, 100GB is more than enough to stash important files, work docs and family photos. But if you're planning on using Drive as a way to back up all your data, you'll probably want to go with one of the bigger plans. The nice thing is that, in addition to the basic $20-a-year plan being relatively cheap, there are a number of ways to get additional storage for free, at least temporarily. For example, anyone who buys a new Chromebook gets 100GB of space in Drive free for a year, while customers new to Google One may get offers to test the service out with a free one-month subscription.
So before you start uploading all your files, you’re going to want to figure out how much storage you need and how much that may (or may not) cost you.
Uploading, supported files, and organization
Once you've figured out how much storage you need, you can begin uploading or transferring your files to Drive. For single files or data stored locally on your device, you can simply tap the New button and select the option for File or Folder upload. On a computer, you can also drag and drop files into your browser window when you're on the Drive website. Drive supports a wide variety of file types, including popular formats like JPG, PNG, GIF and MP3. For a full list of supported file types, check out Google's official Help Center here.
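If you'd rather script uploads than drag and drop, the Drive v3 API can do the same job. Below is a minimal sketch using Google's Python client; it assumes you've already completed the OAuth setup and saved an authorized token as token.json, and the filename is just an example.

```python
# Minimal Drive v3 upload sketch; assumes OAuth setup is done and a token
# is saved locally as token.json (a hypothetical path).
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

creds = Credentials.from_authorized_user_file(
    "token.json", ["https://www.googleapis.com/auth/drive.file"])
drive = build("drive", "v3", credentials=creds)

media = MediaFileUpload("family_photos.zip", resumable=True)  # hypothetical file
created = drive.files().create(
    body={"name": "family_photos.zip"},  # add "parents": [folder_id] to target a folder
    media_body=media,
    fields="id",
).execute()
print("Uploaded file ID:", created["id"])
```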
After you have all your files uploaded, you can manage them just like you would locally on your phone or computer. You can create nested folders and drag and drop files from one place to another. And of course, you can look for a specific file or folder by typing in the search box, though it’s important to remember that if you’re storing a lot of files in Drive, it may take a bit longer to find them (especially if your internet connection isn’t very speedy). So if you’re able to create a general directory of folders for important projects or data sets on day one, you’ll probably save yourself a lot of time and headaches later.
It's also important to note that while you can create new Google Docs, Sheets, Slides and so on directly within Drive on a PC, on mobile you need to install Drive and each individual productivity app you want to use. That's because, while they all work together, they're considered separate apps.
Another good way to use Google Drive to organize your work is to save templates for various projects in Docs. This allows you to start writing a script or create forms without starting from scratch every time. You can also save templates for things like bibliographies, potentially saving students time when trying to cite sources for a research paper.
Instead of using dedicated apps, you can also share a Google Sheet with roommates to help figure out the cost of utilities and other shared expenses. And while it wasn't strictly designed for this, students have discovered that when places like schools ban or restrict typical messaging apps, you can still chat with friends using Google Docs. All you have to do is invite people to a blank doc and then use real-time collaboration to talk and respond to each other. And once you're done, you can simply delete the doc, or keep it around for another day.
Collaboration
In addition to making cloud storage simple and easy to use, one of Google Drive’s most powerful features is its range of collaboration tools. Sharing a file or document with someone else is as simple as hitting the share button and entering their email. Alternatively, Drive can generate a link to send via text, social media or your messaging app of choice. Once someone has access, you’ll be able to view or edit the file with them in real-time.
That said, it’s important to know who you’re sharing your files with and how they are using them. For example, it might be really helpful to give editing permission to a teacher or mentor if you’re looking for help with an essay, but less so if you’re just sharing an ebook with a friend. In addition to the owner of the file, Drive offers three different levels of access: viewer, commenter and editor. And if something goes wrong and you ever want to see an older copy of a Google Doc, Sheet or Slide, you can open the File menu and select the option that says Version history.
Viewers are only able to see and read the document, but don’t have the ability to change any of the content. Commenters can view and surface thoughts and questions about the content by using Google’s Comment tool, while editors can make changes just like the owner of a doc.
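Those same levels map directly onto the Drive API if you ever want to share files from a script: "reader," "commenter" and "writer" correspond to viewer, commenter and editor. Here's a minimal sketch, reusing an authorized client like the one in the upload example above; the file ID and email address are placeholders.

```python
# Share a file as a commenter via the Drive v3 API (`drive` is an authorized
# v3 client, as in the upload sketch; the ID and address are placeholders).
permission = {
    "type": "user",
    "role": "commenter",              # or "reader" / "writer"
    "emailAddress": "classmate@example.com",
}
drive.permissions().create(
    fileId="YOUR_FILE_ID",
    body=permission,
    sendNotificationEmail=True,
).execute()
```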
If you want to see files that others have sent you recently, you can click on Google Drive’s Shared with me tab. And if you have a Google Workspace account through school or work, you can also open the handy Activity Dashboard by clicking on the squiggly icon. (It’s in the top right next to the blue Share button on a desktop.) Finally, if you want a fast way to see which files you’ve shared with others, you can type “to:” into Drive’s search box.
Accessing files offline
While Google Drive is intended primarily as a way to manage docs and files stored in the cloud, it does support offline access, which can be handy when you don’t have a good internet connection. However, there are some steps you need to take before you can get the full benefit of using Drive offline.
First, you need to make some changes to Drive's settings while connected to the internet. On a computer, click the gear icon in the top right corner of your Google Drive browser tab, hit Settings and then check the box next to the Offline menu option. On mobile, open the Drive app, find a specific file and then designate it for offline access by enabling the option from the More icon (it's the one that looks like three vertical dots). Once you do that, you'll be able to access, edit and save any changes you make. The next time your device connects to the internet, it will automatically sync any changes you made offline to the copy saved in the cloud. Meanwhile, on a Chromebook, all you have to do is open your Google Drive settings, scroll down, check the box next to the Offline option and hit Done.
Text-to-image generation is the hot algorithmic process right now, with systems like Craiyon (formerly DALL-E mini) and Google's Imagen unleashing tidal waves of wonderfully weird procedurally generated art synthesized from human and computer imaginations. On Tuesday, Meta revealed that it too has developed an AI image generation engine, one it hopes will help build immersive worlds in the Metaverse and create high-quality digital art.
A lot of work goes into creating an image from just a phrase like "there's a horse in the hospital" with a generative AI. First, the phrase is fed through a transformer model, a neural network that parses the words of the sentence and develops a contextual understanding of their relationship to one another. Once it gets the gist of what the user is describing, the AI synthesizes a new image using a set of GANs (generative adversarial networks).
Thanks to efforts in recent years to train ML models on increasingly expansive, high-definition image sets with well-curated text descriptions, today's state-of-the-art AIs can create photorealistic images of almost whatever nonsense you feed them. The specific creation process differs between AIs.
For example, Google’s Imagen uses a Diffusion model, “which learns to convert a pattern of random dots to images,” per a June Keyword blog. “These images first start as low resolution and then progressively increase in resolution.” Google’s Parti AI, on the other hand, “first converts a collection of images into a sequence of code entries, similar to puzzle pieces. A given text prompt is then translated into these code entries and a new image is created.”
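Neither Imagen, Parti nor Make-A-Scene is publicly available, but the basic text-in, image-out loop looks much the same with open diffusion models. Here's a minimal sketch using the open-source diffusers library; the model name and prompt are just examples, and it assumes a CUDA-capable GPU.

```python
# Rough sketch of text-to-image generation with an open diffusion model;
# not Meta's or Google's system, which aren't public.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
image = pipe("a horse in a hospital, photorealistic").images[0]  # prompt in, PIL image out
image.save("horse_in_hospital.png")
```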
While these systems can create almost anything described to them, the user doesn't have any control over the specific aspects of the output image. "To realize AI's potential to push creative expression forward," Meta CEO Mark Zuckerberg stated in Tuesday's blog, "people should be able to shape and control the content a system generates."
The company's "exploratory AI research concept," dubbed Make-A-Scene, does just that by incorporating user-created sketches into its text-based image generation, outputting a 2,048 x 2,048-pixel image. This combination lets the user not just describe what they want in the image but also dictate its overall composition. "It demonstrates how people can use both text and simple drawings to convey their vision with greater specificity, using a variety of elements, forms, arrangements, depth, compositions, and structures," Zuckerberg said.
In testing, a panel of human evaluators overwhelmingly chose the text-and-sketch image over the text-only image as better aligned with the original sketch (99.54 percent of the time) and better aligned with the original text description 66 percent of the time. To further develop the technology, Meta has shared its Make-A-Scene demo with prominent AI artists including Sofia Crespo, Scott Eaton, Alexander Reben, and Refik Anadol, who will use the system and provide feedback. There’s no word on when the AI will be made available to the public.
Google is rolling out Chrome OS version 103, which includes features that will make it easier for users to share things between Chromebooks and Android devices. For one thing, as the company announced at CES, Phone Hub is getting an upgrade. From your Chromebook, you'll instantly be able to access the latest photos you took with your phone, even when you're offline.
When you take a picture with your phone, it will automatically show up in the Recent Photos section of Phone Hub (which allows you to control some of your mobile device's features from your laptop). You'll need to click on the image to download it, but that's still a more elegant option than going to the Google Photos website or emailing yourself a photo.
Also new is a way to get a Chromebook connected to the internet more quickly. If you're trying to link your laptop to a WiFi network that's already saved on your Android phone, you can use Nearby Share. Go to the WiFi network tab in the internet settings on your phone. After you select the Share option, you can tap the Nearby button and choose the Chromebook you want to get online. The Chromebook should then automatically gain access to the internet and save the login credentials.
In addition, Google revealed that the Chrome OS Screencast app it announced earlier this month will start rolling out this week. You can use it to record, trim and transcribe video.
Later this summer, Chromebooks will gain fast pairing support for hundreds of Bluetooth headphone models including, of course, Pixel Buds. Fast Pair will save the headphones to your Google account, so both your Chromebook and Android phone can connect to them swiftly.
Google said it will roll out more features to make Chromebooks and Android devices play more nicely with each other later this year. The company is looking to take a page out of Apple's playbook with updates like these. Apple has long offered deep integration between its devices, including features such as WiFi password sharing and iCloud photo syncing, which helps it get people more invested in its ecosystem.