Meta has called for legislation that would require app stores to get parental approval before their teens download any app. That would effectively put more onus on parents, as well as Google and Apple, to protect younger users from apps that have the potential to cause harm.
"Parents should approve their teen’s app downloads, and we support federal legislation that requires app stores to get parents’ approval whenever their teens under 16 download apps," Antigone Davis, Meta's global head of safety, wrote. The company is proposing a plan that would see app stores notifying parents when their teen wants to download an app, in a similar way to how they are alerted when a kid wants to makean in-app purchase. The parent would then approve or deny the request.
Meta says its approach would let parents verify their teen's age when they set up a phone, rather than requiring everyone to verify their age multiple times across various apps. The company suggests legislation is needed to make sure all apps that teens use are held to the same standard.
It notes that states are enacting "a patchwork of different laws," some requiring teens to get parental approval for different apps and others mandating age verification. However, "teens move interchangeably between many websites and apps, and social media laws that hold different platforms to different standards in different states will mean teens are inconsistently protected," Davis wrote.
Under current proposals, Meta argues that parents would need to navigate different signup methods and provide "potentially sensitive identification information" for themselves and their teens "to apps with inconsistent security and privacy practices." Indeed, experts say that such age verification practices threaten the privacy of all users.
Utah is enacting legislation that will require social media apps to obtain parental consent before a teen can use them. That state and Arkansas both passed social media age verification laws. Following a lawsuit from tech companies, a federal judge struck down the Arkansas legislation a day before it was set to take effect in September. The Utah laws are scheduled to come into force in March.
Meta's call for federal legislation could be seen as a case of the company trying to pass the buck to parents and app stores. A judge this week rejected attempts by Meta, YouTube parent Google and TikTok owner ByteDance to dismiss lawsuits blaming them for teens' social media addiction. In October, 41 states and the District of Columbia sued Meta for allegedly releasing "harmful features on Instagram and Facebook that addict children and teens to their mental and physical detriment," among other things.
This article originally appeared on Engadget at https://www.engadget.com/meta-calls-for-legislation-to-require-parental-approval-for-teens-app-downloads-171016744.html?src=rss
Instagram is expanding its Close Friends feature from Stories and Notes to feed posts and Reels. As such, you'll be able to share Reels and feed posts with a smaller, perhaps more trusted audience instead of everyone who follows you.
The Instagram team says folks use Close Friends "as a pressure-free space to connect with the people that matter most." By expanding the Close Friends option to Reels and feed posts, the developers hope you'll have "more ways to be your most authentic self on Instagram while having more choices over who sees your content."
Sharing a Reel or feed post only with Close Friends is pretty straightforward. When you're creating one, hit the Audience button, select Close Friends and then tap Share. The post or Reel will have a green star label, so those on your Close Friends list who see it will know they're part of an exclusive club. To highlight the expansion of the feature, you might see the app's plus button turn into a green star icon today.
It's worth noting that the Close Friends list will be the same group of people across all Instagram features. However, Instagram has been looking at other ways for everyone to share things with smaller audiences. Last month, Instagram head Adam Mosseri revealed that his team was experimenting with a way to let users share Stories with different subsets of followers. Facebook users have long been able to set up many different lists of friends and choose which one to share a post with.
This article originally appeared on Engadget at https://www.engadget.com/you-can-now-limit-instagram-posts-and-reels-to-close-friends-181123680.html?src=rss
Last spring, Teenage Engineering announced a curious, tiny mixer. At $1,200, the TX-6 appeared to pair a serious price tag with almost comically small controls. It divided music-making forums, with naysayers deeming it evidence that the company was squandering its reputation as a maker of spendy-but-delightful products. Importantly, the TX-6 was the first in a new line of “Field” products. It was soon joined by the OP-1 Field synthesizer, but until recently that was it, and a mixer and a synth didn’t feel like much of a “system.”
With the arrival of the TP-7 recorder and the CM-15 microphone, the Field family is complete — although the company hasn’t ruled out adding more products further down the line. And like some sort of heavily designed musical Infinity Stones, all four products feel far more exciting and powerful together than they do individually. Or, forced metaphors aside, it’s easier to see where the company was going with all this now that the family is complete.
That’s if the $5,900 entry fee for the full set doesn’t make you balk. But let’s ignore the economics for now, as that’s an accepted part of the Teenage Engineering experience at this point. What we have here is a compact, creativity-inducing system that’s like no other and this off-beat, playful approach to product design is something I wish we saw more of (and ideally in a more accessible way).
We’ve already covered the TX-6 mixer and the OP-1 Field synthesizer and how they interact with each other. The two new arrivals bring their own set of skills to the Field system, most of which is laid out below. I say most, as every time I tinker with them, it occurs to me to try something new. Similarly, revisiting the online guide has an uncanny ability to throw up things you missed last time, further unlocking ideas or features.
CM-15
Teenage Engineering’s first studio microphone is nothing if not beautiful. The Field aesthetic of small, rectangular CNC aluminum makes the most sense here out of all the products; the CM15 could pass for just another fancy microphone. It’s also the Field product where the price is least conspicuous, given that high-end microphones tend to start around the $1,200 you’ll need to spend to add this one to your collection.
The CM15 is a large diaphragm condenser microphone, the type preferred in studios, and it tends to be a lot more sensitive than something like the podcaster’s favorite Shure SM7B. This microphone doesn’t have elaborate features like internal storage or any type of sound modification tools, but it’s not without some interesting details. For one, the CM15 has three output options — mini XLR, USB and 3.5mm — which makes it compatible with a wide range of devices. Specifically, as the CM15 has its own battery, it plays nice with more USB devices than rival condensers that may require more juice than your phone can deliver.
A switch around the back offers three levels of gain adjustment (neutral and +/- 18dB), which is handy given the variety of things you can plug this into. The gain is analog and, in testing, sounds pretty clean, with only a marginal effect on the noise floor. Being able to quickly adjust the gain directly on the mic for different situations made it feel like a really good all-rounder, both at home and on the go.
With regard to the Field range, and the intercompatibility thereof, there’s less here than with the other devices in the family. When you plug the CM15 into the TP-7 recorder over USB, it recognizes it as the CM15 and presents you with a cute mini icon of it. When the mic is detected you’ll also have the option to add an additional 12dB of digital gain — something that’s not an option when plugging in a phone, for example. The CM15 is also the only mic I tried that worked with the USB port of the TX-6 mixer. This allows you to add effects and, of course, mix it with other instruments, but it also frees up an analog input if needed (though the mic will share channel six with anything else on that input).
Teenage Engineering states the CM15 can also be used as an audio interface, but in testing this didn’t work on Windows, macOS or iOS, though the mic does work as a USB mic on all three operating systems.
As for sound, the CM15 is a very “close” sounding condenser microphone. By that, I mean it never seemed to pick up a lot of the room, which can often be the case with condensers, especially those with larger diaphragms. This is due to the supercardioid polar pattern, and the result is perfect for mobile applications: you may find yourself in different environments, and the CM15 will deliver fairly consistent sound across them. For my voice, I might appreciate the option to bump the mid-high frequencies a touch, but for most everything else, including foley and instruments, the CM15 sounds bright and clear.
TP-7
I’ll say it straight up front: the $1,499 TP-7 is my favorite of the four Field devices. The OP-1 Field is the flagship, but for pure portability-to-functionality balance, the TP-7 wins. Described as a “field recorder,” the TP-7 takes the idea of a portable cassette recorder and brings it up to date for the 21st century. There’s a built-in microphone, 128GB of storage and three stereo inputs (that can also be outputs). It can record multitrack podcasts, has tactile scrubbing controls and a thumb rocker, and can even become a tiny turntable complete with scratching and physical pitch control.
First and foremost, though, the TP-7 is a capable recorder. Press and hold the side button, even when the device is off, and it’ll spring to life and start recording via the internal mic. This feature is more about recording short notes and ideas, which you can then have transcribed via a companion app. The app connects over Bluetooth or USB, works offline and will even identify different speakers. It’s not as fully featured as a paid service like Trint or Otter, but it’s really cool extra functionality. I even tried loading an old interview from my PC onto the TP-7, and the app happily transcribed that, too. The only restriction, seemingly, is that you need a TP-7 (you can’t load audio up from your phone within the app, for example).
Beyond memo recording is more general recording of the TP-7’s various inputs. As with the TX-6 mixer, your main inputs are 3.5mm ports, which isn’t ideal, but most things with a line signal can be wrangled into 3.5mm easily enough. You can also record audio into and out of your phone via USB-C (including the iPhone 15), or directly from the CM15 digitally and over 3.5mm analog at the same time, if you wanted.
The three 3.5mm ports can be configured for line-level or headset/TRRS input, or flipped into outputs. Line level will cover most instruments and active electronics with audio output, while headset mode is for anything with a lower output signal such as, well, headsets, but also some other unpowered microphones like lavaliers. I even had some success recording an SM7B via an XLR-to-TRRS adapter. You can add up to 45dB of gain to the 3.5mm inputs, and with about 35dB the output from Shure’s gain-hungry mic was quiet, but clean and usable. Other XLR dynamic mics were much louder and perfectly usable.
With three microphones connected this way, the TP-7 will spit out a multitrack WAV file with each one recorded on its own channel, making this a capable podcast recording tool or mini studio recorder that you can mix properly after the fact.
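If you’d rather mix those channels in separate tools, splitting the file apart is straightforward. Here’s a minimal sketch, assuming a standard multichannel WAV and the third-party soundfile library; the filenames are hypothetical.

```python
# Split a multitrack WAV (one mic per channel) into separate mono files.
# Assumes: pip install soundfile; "tp7_session.wav" is a hypothetical file.
import soundfile as sf

data, samplerate = sf.read("tp7_session.wav")  # data shape: (frames, channels)
for channel in range(data.shape[1]):
    # Write each mic's channel as its own mono WAV for separate mixing
    sf.write(f"mic_{channel + 1}.wav", data[:, channel], samplerate)
```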
Connect a phone to the TP-7 over USB-C and you can record any sound from it directly, so you could grab the audio from a video and transcribe it with the app, or load up a beat and then sing or rhyme over it for an on-the-go demo whenever inspiration strikes. When playing back on the TP-7, the main front disk rotates, and you can speed it up, slow it down or even do some rudimentary scratching. This could be used for effect when feeding the output into the TX-6 mixer for recording onto another device.
Multitrack also works for playback. So if you have a WAV file with drums, vocals, synth and bass as separate tracks, you can play it on the TP-7 into the TX-6 over USB and mix and add effects to each part of the track separately. In this way, you can use the pair as an effective performance tool, creating an intro with just the beat, adding in the bassline and so on.
Taking this concept even further, with two TP-7s and the TX-6 mixer you effectively have a pair of tiny turntables, with actual turning platters, that can be pitched up or down in real time into the mixer. It’s a classic analog DJ setup, but the size of a paperback. I tried it, and mixing this way is really hard, as using the jog wheel to alter pitch is a bit heavy-handed. You can adjust the pitch more gently by holding the side button and then using the jog wheel, but if, like me, you haven’t mixed this way in 20 years, it takes a little getting used to. It’s also a little OTT, to be fair.
What’s much more reasonable is using the TP-7 as a general audio player. You can load files onto it and then play them back either on the internal speaker or (preferably) via headphones. You can use the side rocker or the main wheel to control the playback, too. Currently you can only play .wav and .flac files, which is fine, but the lack of MP3 support feels like an obvious omission (Teenage Engineering confirmed support is incoming).
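Until MP3 support lands, one workaround is converting files on your computer before loading them on. A minimal sketch using the third-party pydub library (which requires ffmpeg installed); the filename is hypothetical.

```python
# Convert an MP3 to WAV so the TP-7 can play it.
# Assumes: pip install pydub, with ffmpeg available on the system path.
from pydub import AudioSegment

audio = AudioSegment.from_mp3("song.mp3")   # hypothetical source file
audio.export("song.wav", format="wav")      # ready to copy onto the TP-7
```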
The flexibility of the TP-7 doesn’t stop out in the field. Connect it to your PC and it’ll become an audio interface, too. Or at least, that’s the idea. Right now, on Windows, I only had it working briefly and never in full. On macOS it was marginally better, but still not usable. Bear in mind the TX-6 also offers this functionality, and months later that still doesn’t work with Windows at all and remains imperfect on macOS. It’s a shame, as at this price point you’d hope it would work at launch and across both systems.
There’s really a lot more you can do with the TP-7, especially in combination with the TX-6. There’s Bluetooth MIDI functionality, for one. The two really make a great team, but the above covers much of the main functionality. Everything else starts to get a little bit niche. Fun, but niche. I’m also certain that functionality will continue to grow, as Teenage Engineering is generally pretty good about adding features, often based on user feedback.
Putting it all together
After spending days plugging different things into the TP-7 and the TX-6 and trying out various scenarios and ideas, it often felt like that was half the fun: wondering what will happen if you do X and connect it to Y. Like musical Lego. Much of this will be true for many combinations of audio gear, but the Field line lends itself particularly well to this playful experimentation.
That said, there are some bugs that you might not expect at this price point. The most obvious one I encountered was the audio interface functionality: at launch, I would expect Windows and macOS support, and for both to be fairly seamless. Other curiosities were less important but still confusing. Sometimes the CM15 wouldn’t be recognized over USB until a restart, or using the analog/3.5mm output would sporadically give crunchy audio when recording into one device but clear audio on the TP-7. This could well be down to cables, adapters and so on, but when the same scenario works just fine on a product a third of the price, it’s harder to justify.
Take the Tula mic, for example. It’s actually a device that’s already quite popular with Teenage Engineering fans. It has a more classic design, but offers similar functionality to both the TP-7 and the CM15 combined. The mic on it maybe isn’t as good as Teenage Engineering’s, and the recorder functionality doesn’t have the fancy rocker and jog wheel controls, but it’s a good mic and a good recorder all in one and it only costs $259 — less than a tenth of the TP-7 and CM15 together.
But as I said up top, this is less about the price. Teenage Engineering fans are aware of the expense that comes with the products. Many consider it worth it just for that extra dash of playfulness that you don’t find elsewhere. (Other fans are, to be clear, still not really okay with the pricing.) That’s perhaps a conundrum that good old market forces can decide. If, after all these years, the company is still chugging along, it suggests there are plenty of people that consider it a premium worth paying.
What is less contested is that Teenage Engineering does something unique enough to earn it enough fans for there to even be an argument. Or an article like this one. The Field system, in my opinion, exemplifies what the company does best: interesting tools that have a practical core and a less practical fun side. Individually, all four Field items will solve a basic problem, like most products do. Together, they become a little bit more than the sum of their parts. If you believe creativity lives in that space between functionality and possibility, then the Field range creates enough room for the right kind of creator that the price might just feel justified.
This article originally appeared on Engadget at https://www.engadget.com/teenage-engineering-tp7-cm15-field-series-review-170052292.html?src=rss
Google's Search Generative Experience (SGE), which currently provides generative AI summaries at the top of the search results page for select users, is about to be much more available. Just six months after its debut at I/O 2023, the company announced Wednesday that SGE is expanding to Search Labs users in 120 countries and territories, gaining support for four additional languages and receiving a handful of helpful new features.
Unlike its frenetic rollout of the Bard chatbot in March, Google has taken a slightly more measured tone in distributing its AI search assistant. The company began with English-language searches in the US in May, expanded to English-language users in India and Japan in August and on to teen users in September. As of Wednesday, users from Brazil to Bhutan can give the feature a try. SGE now supports Spanish, Portuguese, Korean and Indonesian in addition to the existing English, Hindi and Japanese, so you'll be able to search and converse with the assistant in natural language, whichever form it might take. These features arrive on Chrome desktop Wednesday, with the Search Labs for Android app versions slowly rolling out over the coming week.
Among SGE's new features is an improved follow-up function, where users can ask additional questions of the assistant directly on the search results page. Like a mini-Bard window tucked into the generated summary, the new feature enables users to drill down on a subject without leaving the results page or even needing to type their queries out. Google will reportedly restrict ads to specific, denoted areas of the page to avoid confusion between them and the generated content. Users can expect follow-ups to start showing up in the coming weeks. They're only for English-language users in the US to start, but will likely expand as Google continues to iterate on the technology.
SGE will start helping to clarify ambiguous translation terms as well. For example, if you're trying to translate "Is there a tie?" into Spanish, the output, the gender and the speaker's intention all change depending on whether you mean a tie as in a draw between two competitors ("un empate") or the tie you wear around your neck ("una corbata"). The new feature will automatically recognize such words and highlight them for you to click on, which pops up a window asking you to pick between the two versions. This should be especially helpful with languages that assign grammatical gender to nouns (treating cars as masculine but bicycles as feminine, say), where you need to specify the version you intend. Spanish is one of those languages, and this capability is coming first to US users for English-to-Spanish translations.
Finally, Google plans to expand the interactive definitions normally found in generated summaries for educational topics like science, history or economics to coding- and health-related searches as well. This update should arrive within the next month, again first for English-language users in the US before spreading to more territories in the coming months.
This article originally appeared on Engadget at https://www.engadget.com/googles-ai-empowered-search-feature-goes-global-with-expansion-to-120-countries-180028084.html?src=rss
Many Pixel owners have been left with a bad taste in their mouths after it took Google over a month to fix a serious bug, Ars Technica has reported. It first appeared with the launch of Android 14 back on October 8th, locking some users with multiple accounts out of their device's local storage. It affects multiple devices ranging from the Pixel 4 to the Pixel 8, and for many users, it was akin to being locked out of their phone by ransomware.
Some folks were unable to unlock their devices, while others were able to boot up but had no access to local storage. However, the bug rendered some phones completely unusable, as they would continuously bootloop and never reach the home screen. Reports of the issue appeared shortly after Android 14 launched, but Google kept rolling out the buggy release and only acknowledged the flaw some 20 days after it appeared.
The November patch is now rolling out, but the initial November 2 release notes weren't very positive. Google said users locked out of their storage may only get some data back, and those experiencing a bootloop may lose everything. Today's update, however, states that users who were unable to access media storage should get all their data back once they install the November patch.
Those stuck in a bootloop may not be as lucky, though. They will at least be able to get up and running again after submitting a form. However, Google said that "data recovery solutions are still being investigated for devices that are repeatedly rebooting," adding that "we'll share additional updates soon."
The sordid episode shows how Google failed to properly implement its own much-touted failsafe systems, as Ars Technica noted. It kept rolling out Android 14 with the flaw despite multiple reports, and the vaunted dual partition system didn't work because it didn't accurately detect a boot failure. Finally, it took Google ages to elevate the issue to a higher priority, leaving many users stuck with bricked phones for weeks. "Little did I realize that 'seven years of updates' was not a feature, but a threat," said one disgusted user on Google's issue tracker.
This article originally appeared on Engadget at https://www.engadget.com/google-has-fixed-a-bug-in-android-14-that-locked-pixel-users-out-of-their-devices-061040556.html?src=rss
Microsoft is injecting a ton of generative AI-powered features into Windows 11, but it's not all about the Copilot assistant. The company has started to update a string of apps with new AI functions, including Paint, Clipchamp, Snipping Tool and Photos. Microsoft released the Windows 11 2023 Update, known as 23H2, on October 31. That update expanded access to Copilot and other AI features.
Microsoft is rolling out the AI updates gradually, so you may not have access to everything just yet. Still, it may be handy for you to know what you can do with the new tools. Here are some pointers on how to use the AI features in each app.
How to use Paint in Windows 11
An AI-infused version of Paint that includes generative AI features is rolling out to Windows 11 users. Microsoft Paint Cocreator taps into the DALL-E model to enable you to create images based on a text description. The feature will whip up just about anything you can think of (within reason).
It's easy enough to get started with Cocreator, as long as you have access to it. To begin with, Cocreator is available in the US, UK, France, Australia, Canada, Italy and Germany. Only prompts in English are supported for now. At the outset, there's a waitlist to use Cocreator. You can join this from the Cocreator side panel and you'll receive an email to let you know when you can start using the feature.
You'll need to sign into your Microsoft account to use Cocreator. That's because the cloud-based service Cocreator runs on requires authentication and authorization. You also need to sign in to access credits; you'll need these to generate images with DALL-E. When you join Cocreator, you'll receive 50 credits with which you can create images. Each generated image costs one credit.
How to install Paint on Microsoft Windows 11
If you don't already have Paint installed, you can download it from the Microsoft Store. Once you have it, open Paint and select the Cocreator icon on the toolbar. From there, you can type in a description of the image you'd like the AI to generate. Microsoft suggests being as descriptive as possible in order to get results that match your concept.
After entering the text, select a style that you'd like your image to be in. Then hit the Create button.
Cocreator will then generate three different images based on your text input and the style you chose. Simply click on one of these images to add it to the Paint canvas so you can start modifying it.
Meanwhile, Paint now supports background removal as well as layers. With the help of AI, you can isolate an item (such as an object or person) and remove the background with a single click. You can also edit individual layers without affecting the rest of the image.
How to use video auto composition with Clipchamp on Windows 11
It should be easier for you to stitch footage together in the video-editing tool Clipchamp. The app will help guide you with automated suggestions for the likes of scenes, edits and narratives. But it's the auto compose feature that may prove most useful for many users. Auto compose is available on the web and in the Microsoft Clipchamp desktop app.
Microsoft says that the media you add to Clipchamp is not used to train AI models and all of the processing takes place in the app or browser. The app's AI video editor (which Microsoft says is useful for everyone) can automatically generate slideshows, montage videos and short videos in 1080p based on the photos and videos you add to it.
If you don't like the first video that Clipchamp offers up, you can check out a different version "instantly" since the app will generate multiple videos for you. Auto compose may also prove useful for professional video editors, Microsoft says, as the tool can generate several unique videos in the space of a few minutes.
After you sign into Clipchamp, click the "Create a video with AI" button. You'll find this front and center on the main page. After you give your project a working title, you can upload media by clicking the "Click to add or drag and drop" button. Alternatively, you can simply drag and drop videos and photos into the media window.
Once you've finished adding everything, hit the "Get started" button. Now, it's a case of letting the AI know what kind of style and aesthetic you're looking for. Styles include things like elegant, vibrant and bold. You'll use thumbs up and thumbs down buttons to inform the AI of your preferences. Alternatively, you can leave the decision up to Clipchamp by selecting the "Choose for me" option. When you're ready to move on to the next step, click the Next button.
Clipchamp will suggest a length for your video based on what it believes are the best combinations of your media. You'll be able to adjust the video length and the aspect ratio before moving on. Before you leave this screen, you can preview the video by clicking the play button.
Next up, you'll be able to change the background music on the "Finish your video" screen if you're not a fan of the track that the AI picked. Click the music button to change the tune. Again, you'll be able to preview your video and audio track. If you're not happy with the video, you can ask for a different take by clicking on "Create a new version."
If you do like the video Clipchamp has created, you're pretty much done at this point. Click the Export button to save the video. From the export page, you can share your video directly to the likes of YouTube and TikTok, or add a copy to your OneDrive storage.
After the AI is done with your video, you can further customize it in Clipchamp. Click on the "Edit in timeline" button and you'll be able to do things like add stickers, captions, animated text and audio files.
In addition, you can enhance your video with AI options including a text-to-speech voiceover feature and automatically generated subtitles. The speaker coach tool aims to provide you with real-time feedback on your camera recordings to help improve your speaking skills and video presentations.
Many Clipchamp features are available for free. But for videos in 4K resolution and other premium tools, you'll need to pay for the essentials plan, which costs $12 per month or $120 per year.
How to use Snipping Tool's AI features
The Snipping Tool is one of the most useful apps in Windows 11. It makes it a cinch to capture and share some or all of your display. The app's AI functions should come in useful in a number of ways.
First, the app supports text recognition. If you use the Snipping Tool to take a screenshot of something with text in it, you can click the Text Actions button. At the outset, you'll have two main options. You can copy all of the text and paste it into another app.
Alternatively, you can quickly redact private information. The tool should be able to recognize email addresses and phone numbers, and you'll be able to swiftly blur those out. That should save you having to manually cover up text in, say, Paint.
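To give a sense of what that kind of detection involves, here's a rough, hypothetical sketch of pattern-based redaction. These simplified regexes are illustrative stand-ins, not Microsoft's implementation, which operates on text recognized in your screenshot rather than plain strings.

```python
# A simplified, illustrative take on redacting emails and phone numbers.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Replace anything that looks like an email address or phone number."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

print(redact("Reach me at jane.doe@example.com or +1 (555) 010-9999."))
```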
The Snipping Tool should work quite nicely with Copilot as well. As indicated in a Windows 11 promo video, you can paste something you've clipped with the tool into Copilot, then do things like ask the assistant to remove the background from the image.
How to use Background Blur in Windows 11's Photos app
The Windows 11 Photos app has some useful AI features as well. Those include improved search for images stored in OneDrive accounts: it should be easier for you to find a photo based on its content or the location where it was taken.
The app’s editing features have been enhanced thanks to AI as well. One of the handier and easiest-to-use tools is the self-explanatory Background Blur (Paint 3D has a similar feature). That can help the subject of your photo stand out. AI separates the background from the subject, but to ensure your data stays on your device, the separation process takes place there rather than in the cloud.
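That underlying idea, segmenting the subject, blurring everything else and compositing the two, can be sketched with off-the-shelf tools. This hypothetical example uses the third-party rembg library for segmentation and Pillow for the blur; it is not Microsoft's implementation, just the general technique.

```python
# Background blur via subject segmentation: the general idea, not Microsoft's code.
# Assumes: pip install rembg pillow; "photo.jpg" is a hypothetical filename.
from PIL import Image, ImageFilter
from rembg import remove

original = Image.open("photo.jpg").convert("RGBA")
subject = remove(original)  # subject cut out, background made transparent
blurred = original.filter(ImageFilter.GaussianBlur(radius=12))
# Paste the sharp subject over the blurred frame, using its alpha as the mask
result = Image.composite(subject, blurred, subject.split()[3])
result.convert("RGB").save("photo_blurred.jpg")
```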
To use Background Blur, first select the image you want to use and open it in the Photos app. Click on "Edit image" at the top of the screen and select Background Blur. You'll then have a few options to choose from. You can opt to enable the blur effect instantly; adjust the intensity of the blur before applying it; or have more granular control by turning on the "Selection brush tool."
Opt for the Selection brush tool and you can manually denote more parts of the image for the AI to blur out. Alternatively, you can deselect parts of the image that you don't want to be blurred. You'll be able to change the brush size for finer control, and modify the brush softness to intensify or turn down the blur effect.
This article originally appeared on Engadget at https://www.engadget.com/windows-11s-new-ai-features-how-to-use-paint-clipchamp-snipping-tool-and-photos-191541014.html?src=rss
Brave joins the growing list of browsers that come with built-in generative AI assistants. The open source browser developer has started rolling out an update for Brave on desktop that gives users access to its AI assistant, Leo. Brave introduced Leo through its Nightly experimental channel back in August and has been testing it ever since. The assistant is based on Meta's Llama 2 large language model, which was released for commercial and research use in partnership with Microsoft.
Like other AI assistants, users can ask Leo to do various tasks, such as creating summaries of web pages and videos, translating and/or rewriting pages and even generating new content. The Llama 2-powered Leo is available for free to all users, but Brave has also introduced a paid version capable of "higher-quality conversations." Leo Premium, as it's called, is powered by Anthropic's Claude Instant and can produce longer and more detailed responses. Users will have to pay $15 a month for it, but they will also get priority queuing during peak periods and early access to new features.
In its announcement, Brave Software emphasized that Leo preserves users' privacy. The developer said that conversations with Leo are not persisted on its servers and that the assistant's responses are immediately discarded and "not used for model training." It also explained that it doesn't collect IP addresses or retain personal data that can identify a user. Plus, users don't even have to create an account to use Leo.
Back in July, Brave came under fire after it was accused of selling copyrighted information to train artificial intelligence models without consent. "Brave Search has the right to monetize and put terms of service on the output of its search-engine," the company's Chief of Search, Josep M. Pujol, said at the time in response to the allegations. "The 'content of web page' is always an excerpt that depends on the user’s query, always with attribution to the URL of the content. This is a standard and expected feature of all search engines."
Brave is rolling out Leo on desktop in phases over the next few days. Those using the browser on their Android and iOS devices, however, will have to keep an eye out for its release on mobile in the coming months.
This article originally appeared on Engadget at https://www.engadget.com/braves-ai-assistant-comes-to-its-desktop-browser-160010918.html?src=rss
Meta’s Threads continues to grow, all while the service it aped, X, continues to splutter and fall apart. Mark Zuckerberg said that Threads currently has “just under” 100 million monthly active users and that the app could reach 1 billion users in the next couple of years.
Threads picked up 100 million sign-ups in its first week, with easy ways to create an account from your existing Instagram profile. However, engagement dropped off amid complaints about limited functionality and feeds flooded with unwanted posts from brands and users with big audience numbers on Instagram. I was not interested in the piecemeal thoughts of startup execs with a podcast. Shocking, I know.
Meta has since steadily added new features, and engagement seems to have rebounded in recent weeks as Elon Musk continues to make unpopular changes to X, like stripping headlines from links and, well, all the other things.
– Mat Smith
You can get these reports delivered daily direct to your inbox. Subscribe right here!
X (formerly known as Twitter) has begun rolling out yet another feature nobody asked for. Now, users will have the option to call each other via audio and video calls on the platform. This doesn't come as a total surprise, as CEO Linda Yaccarino previously confirmed that video chat would be coming to the social media site back in August. The best explanation for the addition is Elon Musk’s aim to make X the “everything” app – a one-stop shop for multiple features and services.
DJI's Osmo Pocket 3 gimbal camera has arrived with major updates over the previous model, adding a much larger 1-inch sensor that should greatly improve image quality. It also packs a new 2-inch display with 4.7 times the area of the last model's. That said, it's also significantly more expensive than the Pocket 2 was at launch: it costs $520 in the US, $170 more than the Pocket 2.
Apple One, Arcade and News+ plans are now more expensive too.
The price of Apple TV+ is going up by $3 per month to $10. The annual TV+ plan has risen from $69 to $99. Apple Arcade is now $7 per month instead of $5. As for Apple News+, that'll now run you $13 per month for a standalone subscription, up from $10. The cost of an Apple TV+ subscription previously went up from $5 per month to $7 in October 2022, meaning that the price of the service has doubled in just over 12 months.
TikTok In The Mix will take place in Mesa, Arizona on December 10 – the first global live music event from the video platform. The headliners are Cardi B, Niall Horan, Anitta and Charlie Puth, with surprise guests and performances by emerging artists. Followers of the four headliners will get presale codes to buy In The Mix tickets starting on October 27. The general sale will start on November 2 and TikTok will stream the event live on its app too.
This article originally appeared on Engadget at https://www.engadget.com/the-morning-after-metas-threads-reaches-almost-100-million-active-users-111509107.html?src=rss
Back in May, Google announced it was working on a feature called “about this image” that gives users verified data regarding any photo on the internet. Well, it just rolled out as part of search, so you won’t be able to get away with passing off somebody else’s photo of a 1988 Burger King Alf plushie as your own.
Here’s how it works. Just use Google Search, select an image and click the three dots in the right-hand corner to access the tool. You’ll receive a whole gob of useful information, including when the image was originally published, whether it’s been published since then and where it’s popped up throughout the years. A veritable cornucopia of metadata.
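Google’s tool draws on search-wide context, but some of the raw material is metadata you can inspect yourself. As a point of reference, here’s a minimal sketch that dumps an image’s EXIF tags with the Pillow library; the filename is hypothetical, and this is obviously not how Google’s feature works internally.

```python
# Dump an image's EXIF metadata with Pillow (pip install pillow).
from PIL import Image, ExifTags

img = Image.open("photo.jpg")  # hypothetical file
for tag_id, value in img.getexif().items():
    # Translate numeric EXIF tag IDs into readable names where known
    name = ExifTags.TAGS.get(tag_id, tag_id)
    print(f"{name}: {value}")
```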
The obvious use case scenario for this is verifying whether or not an image used to accompany a news event is legit, or if it’s been taken out of context from something that happened in 2007 to drum up misinformation. To that end, the tool also shows you how other sites use and describe the image, similar to how search already handles factual information via the “perspectives” filter and the “about this result” tab. Google says you can also access the feature by clicking on the “more about this page” link, with more options to come.
Of course, there’s a little thing sweeping the world right now called artificial intelligence. The images generated by AI platforms can be tough to distinguish from the genuine article, so Google’s tool also lets you know if an image has been AI-generated or not. However, this depends on the metadata including that information, so the original image creators would have to opt in. Google says its own AI-generated images will always feature the appropriate metadata.
That’s not the only tool Google’s rolling out to provide increased nuance for image searches. Fact Check Explorer, a handy app for journalists, will soon expand to include images. As for non-image-based searches, the tech giant also announced software that creates AI-generated descriptions of websites, helping users research lesser-known entities.
This article originally appeared on Engadget at https://www.engadget.com/googles-new-image-verification-tool-combs-metadata-to-find-context-and-sniff-out-ai-fakes-165339778.html?src=rss