Google's Search Labs lets you test its AI-powered 'products and ideas'

It's fair to say that Google was caught flat-footed by Microsoft's launch of Bing search powered by ChatGPT, as it didn't have anything similar when it unveiled its own conversational AI, Bard. Now, Google has announced Search Labs, a new way for consumers to test "bold new products and ideas we're exploring" in search, the company said at its I/O conference.

There are three key features available for a limited time. The first is called Search Generative Experience (SGE), which brings generative AI directly into Google Search. "The new Search experience helps you quickly find and make sense of information," Google's director of Search wrote. "As you search, you can get the gist of a topic with AI-powered overviews, pointers to explore more, and ways to naturally follow up."

Also available from the Search prompt are Code Tips, which use large language models to provide snippets and "pointers for writing code faster and smarter," according to Google. You can get responses about languages and tools including Java, Go, Python, JavaScript, C++, Kotlin, shell, Docker and Git.
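
For a sense of what that looks like in practice, a query along the lines of "how do I reverse a string in Python?" might surface a short snippet like the one below. This is a hypothetical illustration of the kind of tip the feature could return, not an actual Code Tips response:

    # Hypothetical Code Tips-style snippet for reversing a string in Python.
    def reverse_string(s: str) -> str:
        # Slicing with a step of -1 walks the string backwards.
        return s[::-1]

    print(reverse_string("Search Labs"))  # prints "sbaL hcraeS"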

Finally, "Add to Sheets" lets you insert search results directly into a spreadsheet. For example, if you're planning a vacation in a Sheets document, you can easily add a link straight from Google Search.

Google's Bard could improve all of Google's products, from Maps to Drive. Search, however, is the company's core function and principal moneymaker, and was one of the first things it mentioned when announcing Bard. With that in mind, it'll be very interesting to see how this compares with what Microsoft's ChatGPT-powered Bing can do.

Follow all of the news from Google I/O 2023 right here.

Google’s Duet AI brings more generative features to Workspace apps

After OpenAI’s ChatGPT caught the tech world off guard late last year, Google reportedly declared a “code red,” scrambling to plan a response to the new threat. The first fruits of that reorientation trickled out earlier this year with its Bard chatbot and some generative AI features baked into Google Workspace apps. Today at Google I/O 2023, we’re finally seeing a more fleshed-out picture of how the company views AI’s role in its cloud-based productivity suite. Duet AI is Google’s branding for its collection of AI tools across Workspace apps.

Like Microsoft Copilot for Office apps, Duet AI is an umbrella term for a growing list of generative AI features across Google Workspace apps. (The industry seems to have settled on marketing language depicting generative AI as your workplace ally.) First, the Gmail mobile app will now draft full replies to your emails based on a prompt in a new “Help me write” feature. In addition, the mobile Gmail app will soon add contextual assistance, “allowing you to create professional replies that automatically fill in names and other relevant information.”

Duet AI also makes an appearance in Google Slides. Here, it takes the form of image generation for your presentations. Like Midjourney or DALL-E 2, Duet AI can now turn simple text prompts (entered into the Duet AI “Sidekick” side panel) into AI-generated images to enhance Slides presentations. It could save you the trouble of scouring the internet for the right slide image while spicing up your deck with something original.

In Google Sheets, Duet AI can understand the context of a cell’s data and label it accordingly. The spreadsheet app also adds a new “help me organize” feature to create custom plans: describe what you want to do in plain language, and Duet AI will outline strategies and steps to accomplish it. “Whether you’re an event team planning an annual sales conference or a manager coordinating a team offsite, Duet AI helps you create organized plans with tools that give you a running start,” the company said.

Meanwhile, Duet AI in Google Meet can generate custom background images for video calls with a text prompt. Google says the feature can help users “express themselves and deepen connections during video calls while protecting the privacy of their surroundings.” Like the Slides image generation, Duet’s Google Meet integration could be a shortcut to save you from searching for an image that conveys the right ambiance for your meeting (while hiding any unwanted objects or bystanders behind you).

Duet also adds an “assisted writing experience” in Google Docs’ smart canvas. Entering a prompt describing what you want to write about will generate a Docs draft. The feature also works in Docs’ smart chips (automatic suggestions and info about things like documents and people mentioned in a project). Additionally, Google is upgrading Docs’ built-in Grammarly-style tools. A new proofread suggestion pane will offer tips about concise writing, avoiding repetition and using a more formal or active voice. The company adds that you can easily toggle the feature off when you don’t want it nagging you about grammar.

Initially, you’ll have to sign up for a waitlist to try the new Duet AI Workspace features. Google says you can enter your info here to be notified as it opens the generative AI features to more users and regions “in the weeks ahead.”

This is a developing story. Please check back for updates.

Follow all of the news from Google I/O 2023 right here.

Google is incorporating Adobe's Firefly AI image generator into Bard

Back in March, Adobe announced that it too would be jumping into the generative AI pool alongside the likes of Google, Meta, Microsoft and other tech industry heavyweights with the release of Adobe Firefly, a suite of AI features. Available across Adobe's product lineup including Photoshop, After Effects and Premiere Pro, Firefly is designed to eliminate much of the drudge work associated with modern photo and video editing. On Wednesday, Adobe and Google jointly announced during the 2023 I/O event that both Firefly and the Express graphics suite will soon be incorporated into Bard, allowing users to generate, edit and share AI images directly from the chatbot's prompt window.

Per a release from the company, users will be able to generate an image with Firefly, then edit and modify it using Adobe Express assets, fonts and templates directly within the Bard platform, and even post it to social media once it's ready. Those generated images will reportedly be of the same high quality that Firefly beta users are already accustomed to, as they're all created from the same database of Adobe Stock images, openly licensed works and public domain content.

Additionally, Google and Adobe will leverage the latter's existing Content Authenticity Initiative to mitigate some of the threats that generative AI poses to creators. These include a "do not train" list, which will exclude a piece of art from Firefly's training data, as well as persistent tags that will tell future viewers whether a work was AI-generated and which model was used to make it. Bard users can expect to see the new features begin rolling out in the coming weeks ahead of a wide-scale release.
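
As a rough illustration, a persistent tag of that sort could carry provenance fields like the ones below. This is a hypothetical sketch of the idea, not the Content Authenticity Initiative's actual schema:

    # Hypothetical provenance record; not the actual CAI/C2PA schema.
    provenance = {
        "generated_by_ai": True,      # work was AI-generated
        "model": "Adobe Firefly",     # which model produced it
        "do_not_train": True,         # creator opted out of training data
    }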

Follow all of the news from Google I/O 2023 right here.

Google Photos will use generative AI to straight-up change your images

Google is stuffing generative AI into seemingly all its products, and that now includes the photo app on your phone. The company has previewed an "experimental" Magic Editor tool in Google Photos that can not only fix photos, but outright change them to create the shot you wanted all along. You can move and resize subjects, stretch objects (such as the bench above), remove an unwanted bag strap or even replace an overcast sky with a sunnier version.

Magic Editor will be available in early form on "select" Pixel phones later this year, Google says. The tech giant warns that output might be flawed, and that it will use feedback to improve the technology.

Google is no stranger to AI-based image editing. Magic Eraser already lets you remove unwanted subjects, while Photo Unblur resharpens jittery pictures. Magic Editor, however, takes things a step further. The technology adds content that was never there, and effectively lets you retake snapshots that were less-than-perfectly composed. You can manipulate shots with editors like Adobe's Photoshop, of course, but this is both easier and included in your phone's photo management app.

The addition may be helpful for salvaging photos that would otherwise be unusable. However, it also adds to the list of ethical questions surrounding generative AI. Google Photos' experiment will make it relatively simple to present a version of events that never existed. It may become that much harder to trust someone's social media snaps, even when they're not entirely fake.

Nikon's Z8 mirrorless camera offers 8K60p RAW video and 20fps burst speeds

Nikon has announced the 45.7-megapixel Z8, a powerful full-frame mirrorless camera with up to 8K60p RAW video, 20fps RAW burst speeds and more. It's effectively a slimmed-down version of Nikon's Z9, and shares the latter's stacked, backside-illuminated (BSI) sensor and complete lack of a mechanical shutter. The main thing the Z8 lacks next to the Z9 is unlimited video recording, but it's also $1,500 cheaper.

Nikon is best known for photography, but the Z8's headline feature is the 8K60p N-RAW video. There's an interesting story there, as the cinema camera company RED has used its patents to stop other camera companies from using RAW video in the past. However, RED's lawsuit against Nikon was dismissed late last month, allowing Nikon to use N-RAW (a compressed 12-bit RAW codec developed in conjunction with a company called intoPIX) in any of its cameras. It can also capture 12-bit ProRes RAW video. 

Along with 8K60p, the Z8 supports 4K capture at up to 120fps and 10-bit ProRes, H.264 and H.265 formats. It also offers exposure tools like waveforms, customizable autofocus and more. As mentioned, the smaller body means it can't record all video formats for an unlimited time like the Z9. Rather, you're limited to 90 minutes for 8K30p and two hours for 4K60p without overheating. With the stacked sensor, rolling shutter should be very well controlled, just like on the Z9.

In terms of photography, the Z8's burst speeds aren't restrained by a mechanical shutter, because there isn't one. As such, you can capture 14-bit RAW+JPEG images at up to 20fps, mighty impressive for such a high-resolution camera. It also comes with settings designed for portrait photographers, like skin softening and human-friendly white balance.

It offers face, eye, vehicle and animal detection autofocus, promising AF speeds at the same level as the (excellent) Z9. It can recognize nine types of subjects automatically, including eyes, faces, heads and upper bodies for both animals and people, along with vehicles and more. 

The Z8's magnesium-alloy body may be smaller than the Z9's, but it's just as dust- and weather-resistant. It's also much the same in terms of controls, with a generous array of dials and buttons to change settings. Battery life is good at 700 shots max (CIPA) and two-plus hours of 4K video shooting, but if you need more, you can get the optional MB-N12 battery grip ($350).

Other features include 6.0 stops of in-body stabilization with compatible lenses, which is good but not as good as recent Sony, Canon and Panasonic models. The electronic viewfinder (EVF) has a relatively low 3.69 million dots of resolution, but also very low lag and a high 120Hz refresh rate. Unfortunately, the 3.2-inch, 2,100K dot rear display only tilts up and doesn't flip out, so the camera won't be suitable for many vloggers — a poor decision on Nikon's part, in my opinion. 

It has one SD UHS-II slot and one CFexpress card slot, the latter supporting the 1,500 MB/s speeds required for internal 8K RAW recording. That differs from the Z9, which has two CFexpress card slots. On top of the usual USB-C charging port, it has a SuperSpeed USB communication terminal for rapid data transfers. It also comes with a full-sized HDMI connector for external video recording and monitoring, along with 3.5mm headphone and microphone ports.

The Nikon Z8 goes on sale on May 25th, 2023 for $4,000. That's $1,500 less than the $5,500 Z9, $2,500 less than the Sony A1 and $700 more than Canon's R5, albeit with far less severe overheating issues than that last one.

The Morning After: Nintendo wants to put several Switches ‘in every home’

After selling 23 million Switches two years ago and 18 million in the last year, Nintendo expects demand for the aging console to continue to fall. It's forecasting sales of 15 million for next year and isn't even confident of that figure, according to its latest earnings report. "Sustaining the Switch's sales momentum will be difficult in its seventh year," said president Shuntaro Furukawa on a call. "Our goal of selling 15 million units this fiscal year is a bit of a stretch." To achieve it, he added: "We try to not only put one system in every home but several in every home." Well, at least the new Zelda game is just around the corner…

– Mat Smith

The Morning After isn’t just a newsletter – it’s also a daily podcast. Get our daily audio briefings, Monday through Friday, by subscribing right here.

The biggest stories you might have missed

What to expect at Google I/O 2023

Pokémon developer Game Freak is partnering with Private Division on a new action franchise

Volvo’s compact electric SUV will be the EX30

Apple is bringing Final Cut Pro and Logic Pro to iPad on May 23rd

The best travel gear for graduates

Spotify has reportedly removed tens of thousands of AI-generated songs

Universal Music claimed bots inflated the number of streams.

Spotify has reportedly pulled tens of thousands of tracks from generative AI company Boomy. It's said to have removed seven percent of the songs created by the startup's systems, which underscores the swift proliferation of AI-generated content on music streaming platforms. Universal Music reportedly told Spotify and other major services that it had detected suspicious streaming activity on Boomy's songs, with bots allegedly inflating play counts to glean more money from Spotify, which pays out on a per-listen basis.

Continue reading.

VanMoof simplifies things for its new, cheaper S4 and X4 e-bikes

Pick from a typical or step-through frame.

VanMoof is trying to deliver premium e-bike features and build quality for substantially less money. At $2,498, the new bikes are $1,000 less than the company's top-of-the-range S5 and A5, but that doesn't make them exactly cheap. VanMoof co-founder Ties Carlier said in a press release that this was an attempt at a "more simple, more accessible and more reliable" e-bike. One major simplification is the transition to adaptive motor support and a two-speed gear hub. The SA5 series had a three-speed gear system, and while it had a torque sensor to assist, adaptive motor support is new for these cheaper e-bikes. The company expects the range to be equivalent to both the SA5 and the older SX3 e-bikes: 37-62 miles (60-150 km), depending on conditions and rider. Both the VanMoof S4 and X4 are available to pre-order now.

Continue reading.

Apple Watch Series 9 may finally get a new processor

The watches have used the same one since 2020.

The Apple Watch has effectively used the same processor since 2020's Series 6, but it's poised for a long-overdue upgrade. Bloomberg's Mark Gurman claims the Apple Watch Series 9 will use a truly "new processor." He believes the CPU in the S9 system-on-chip will be based on the A15 chip, which first appeared in the iPhone 13 family. Apple has historically introduced new Apple Watches in September, so it shouldn't be too long a wait.

Continue reading.

Twitter is going to purge and archive inactive accounts

Elon Musk says it's important to 'free up abandoned handles.'

Twitter owner Elon Musk has warned the social network’s users they may see a drop in followers because the company is purging accounts that have "had no activity at all" for several years. Musk's announcement was quite vague, so we'll have to wait for Twitter to announce more specific rules, such as how long "several years" actually is. At the moment, though, the website has yet to update its inactive account policy page, which only states users need to log in every 30 days to keep their account active.

Continue reading.

WhatsApp begins testing Wear OS support

The beta lets you record voice messages or chat on Google-powered wearables.

WhatsApp is now testing an app for Wear OS 3 on devices like the Galaxy Watch 5, Pixel Watch and others. It has much of the functionality of the mobile versions, showing recent chats and contacts while allowing you to send voice and text messages. WhatsApp offers a circular complication that shows unread messages on your watch's home page. There are also two tiles, for contacts and voice messages, that let you quickly access people or start a voice message recording. It's a significant release for Wear OS 3: an ultra-popular app that most people already have on their phones, which in turn furthers Google's aim of getting more developers on the platform.

Continue reading.

A robot puppet rolled through San Francisco singing Vanessa Carlton hits

Only 951 miles to go!

Twenty-one years after Vanessa Carlton released her debut single, ‘A Thousand Miles,’ a team of hobbyist roboticists has brought Carlton’s music back to the public ear — this time, to the streets of San Francisco, with an animatronic performer and, thankfully, a disco ball.

Continue reading.

MediaTek's newest Dimensity chip is built for gaming phones

MediaTek has a simple answer to the Qualcomm Snapdragon 8 Gen 2 you see in many gaming phones: deliver an uprated version of last year's high-end hardware. The brand has unveiled the Dimensity 9200+ system-on-chip with improvements that will be particularly noticeable in games. You'll find higher clock speeds for the main Cortex-X3 core (up from 3.05GHz to 3.35GHz), the three Cortex-A715 cores (from 2.85GHz to 3GHz) and the four Cortex-A510 efficiency cores (1.8GHz to 2GHz). More importantly, the company says it has "boosted" the Immortalis-G715 graphics by 17 percent — games that were borderline playable before should be smoother.
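
For a sense of scale, those CPU clock bumps work out to gains of roughly five to eleven percent. Here's the quick arithmetic on MediaTek's stated figures:

    # Clock-speed uplifts from the Dimensity 9200 to the 9200+,
    # using MediaTek's published figures (GHz).
    cores = {
        "Cortex-X3":   (3.05, 3.35),
        "Cortex-A715": (2.85, 3.00),
        "Cortex-A510": (1.80, 2.00),
    }
    for name, (old, new) in cores.items():
        print(f"{name}: +{(new - old) / old:.1%}")
    # Cortex-X3: +9.8%, Cortex-A715: +5.3%, Cortex-A510: +11.1%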

The Dimensity 9200+ is built using TSMC's newest 4-nanometer process, potentially extending battery life and allowing for cooler, slimmer phones. The WiFi 7 support, AI processing unit and image signal processor are unchanged, although there's not much room to complain. WiFi 7 still isn't a finished standard, for example, and routers that support it are still extremely rare.

You won't have to wait long to see the first phones based on this chip. MediaTek expects the first Dimensity 9200+ phones to launch later this month, although it hasn't named customers as of this writing. The question is whether or not this refresh is enough. The Snapdragon 8 Gen 2 has only a slight edge over the regular 9200, so a higher-clocked 9200+ might emerge victorious. However, Qualcomm doesn't usually sit still — it likes to ship mid-cycle upgrades of its own.

Nonetheless, this may be an important release if you're a mobile gamer. It gives Qualcomm fresh competition in the Android gaming world, and that, in turn, could lead to both more variety in phones and more aggressive pricing.

WhatsApp bug is making some Android phones falsely report microphone access

Google and WhatsApp have confirmed they are aware of a bug that makes it appear as if WhatsApp is accessing phones’ microphones unnecessarily on some Android devices. The issue first cropped up a month ago, but gained new attention after a Twitter engineer tweeted about it in a post that was boosted by Elon Musk.

An image shared by Twitter engineer Foad Dabiri appeared to show that the microphone had been repeatedly running in the background while he wasn’t using the app. He tweeted a screenshot from Android’s Privacy Dashboard, which tracks how often apps access a device’s microphone and camera.

WhatsApp has been using the microphone in the background, while I was asleep and since I woke up at 6AM (and that's just a part of the timeline!) What's going on? pic.twitter.com/pNIfe4VlHV

— Foad Dabiri (@foaddabiri) May 6, 2023

Musk retweeted Dabiri’s post, saying “WhatsApp cannot be trusted.” Incidentally, Musk is known to be a fan of Signal, and has said encrypted direct messages on Twitter could roll out as soon as this month. The company didn’t respond to a request for comment.

In a statement shared on Twitter, WhatsApp suggested it was an Android-related issue, and not a result of inappropriate microphone access by the messaging app. “We believe this is a bug on Android that mis-attributes information in their Privacy Dashboard and have asked Google to investigate and remediate,” the company said.

Dabiri is not the first to notice the issue. WhatsApp-focused blog WABetaInfo highlighted the bug a month ago, describing it at the time as “a false positive” affecting owners of some Pixel and Samsung devices. It added that restarting the phone may be a possible fix. Meanwhile, Google has said little about what could be causing the discrepancy, but confirmed it’s looking into the matter. “We are aware of the issue and are working closely with WhatsApp to investigate,” a Google spokesperson said in a statement.

Apple's new Beats Studio headphones could support personalized spatial audio

It has been more than five years since Beats last refreshed its top-end Studio headphones, but a new model could be on the way. According to 9to5Mac, Apple is “about” to launch a set of Beats Studio Pro headphones. The new model reportedly features a custom Beats chip that promises improved active noise cancellation and transparency mode performance. For the first time, the Studio line may also feature personalized spatial audio. Additionally, 9to5Mac speculates the new model will come with a USB-C port for fast charging.

Visually, the headphones look similar to the current Studio3 model, though it appears Apple has done away with the “Studio” branding found on the side of those headphones. Based on codenames found by 9to5Mac in the internal files for iOS 16.5’s release candidate, Apple collaborated with fashion designer Samuel Ross, best known for starting the clothing label A-Cold-Wall, on the design of the Beats Studio Pro. Images the outlet found in those same files suggest Apple will offer the headphones in four colorways: blue, black, brown and white.

It’s unclear if Apple intends for the Beats Studio Pro to replace the $349 Studio3 headphones, or if the company plans to market them as a more premium offering. According to 9to5Mac, Apple is also working on a set of Studio Buds+. They will reportedly support audio sharing, automatic device switching and Hey Siri integration. The outlet suggests both products will arrive in stores soon.

Meta's open-source ImageBind AI aims to mimic human perception

Meta is open-sourcing an AI tool called ImageBind that predicts connections between data much as humans perceive or imagine an environment. While image generators like Midjourney, Stable Diffusion and DALL-E 2 pair words with images, allowing you to generate visual scenes based only on a text description, ImageBind casts a broader net. It can link text, images/videos, audio, 3D measurements (depth), temperature data (thermal) and motion data (from inertial measurement units) — and it does this without having to first train on every possibility. It’s an early stage of a framework that could eventually generate complex environments from an input as simple as a text prompt, image or audio recording (or some combination of the three).

You could view ImageBind as moving machine learning closer to human learning. For example, if you’re standing in a stimulating environment like a busy city street, your brain (largely unconsciously) absorbs the sights, sounds and other sensory experiences to infer information about passing cars and pedestrians, tall buildings, weather and much more. Humans and other animals evolved to process this data for our genetic advantage: survival and passing on our DNA. (The more aware you are of your surroundings, the more you can avoid danger and adapt to your environment for better survival and prosperity.) As computers get closer to mimicking animals’ multi-sensory connections, they can use those links to generate fully realized scenes based only on limited chunks of data.

So, while you can use Midjourney to prompt “a basset hound wearing a Gandalf outfit while balancing on a beach ball” and get a relatively realistic photo of this bizarre scene, a multimodal AI tool like ImageBind may eventually create a video of the dog with corresponding sounds, including a detailed suburban living room, the room’s temperature and the precise locations of the dog and anyone else in the scene. “This creates distinctive opportunities to create animations out of static images by combining them with audio prompts,” Meta researchers said today in a developer-focused blog post. “For example, a creator could couple an image with an alarm clock and a rooster crowing, and use a crowing audio prompt to segment the rooster or the sound of an alarm to segment the clock and animate both into a video sequence.”

Meta’s graph showing ImageBind’s accuracy outperforming single-mode models.

As for what else one could do with this new toy, it points clearly to one of Meta’s core ambitions: VR, mixed reality and the metaverse. For example, imagine a future headset that can construct fully realized 3D scenes (with sound, movement, etc.) on the fly. Or, virtual game developers could perhaps eventually use it to take much of the legwork out of their design process. Similarly, content creators could make immersive videos with realistic soundscapes and movement based on only text, image or audio input. It’s also easy to imagine a tool like ImageBind opening new doors in the accessibility space, generating real-time multimedia descriptions to help people with vision or hearing disabilities better perceive their immediate environments.

“In typical AI systems, there is a specific embedding (that is, vectors of numbers that can represent data and their relationships in machine learning) for each respective modality,” said Meta. “ImageBind shows that it’s possible to create a joint embedding space across multiple modalities without needing to train on data with every different combination of modalities. This is important because it’s not feasible for researchers to create datasets with samples that contain, for example, audio data and thermal data from a busy city street, or depth data and a text description of a seaside cliff.”
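
To make the idea of a joint embedding space concrete, here's a minimal sketch in Python. The encoders are hypothetical stand-ins for ImageBind's per-modality models (this is not the project's actual API); the point is that every modality lands in the same vector space, where similarity can be compared directly:

    import numpy as np

    # Hypothetical per-modality encoders; in ImageBind, these are learned
    # models that all project into one shared embedding space.
    def encode_text(text: str) -> np.ndarray: ...
    def encode_audio(waveform: np.ndarray) -> np.ndarray: ...

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        # Related inputs point in similar directions, regardless of modality.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Because both embeddings live in the same space, text can be matched
    # against audio (or depth, thermal or motion data) it was never
    # explicitly paired with during training, e.g.:
    # score = cosine_similarity(encode_text("a rooster crowing"),
    #                           encode_audio(rooster_clip))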

Meta views the tech as eventually expanding beyond its current six “senses,” so to speak. “While we explored six modalities in our current research, we believe that introducing new modalities that link as many senses as possible — like touch, speech, smell, and brain fMRI signals — will enable richer human-centric AI models.” Developers interested in exploring this new sandbox can start by diving into Meta’s open-source code.
