Posts with «software» label

Google adds more context and AI-generated photos to image search

Google announced new image search features at Google I/O 2023 on Wednesday that are designed to make it easier to spot altered content. Photos on the search engine will soon include an "about this image" option that tells users when the image and ones like it were first indexed by Google, where it may have appeared first and other places it has been posted online. That information could help users figure out whether something they're seeing was generated by AI, according to Google. 

The new feature will be accessible by clicking the three dots on an image in Google Image results. Google did not say exactly when it will arrive, other than that it will launch first in the United States in the "coming months." Vice president of search Cathy Edwards told Engadget that the tool doesn't currently tell you whether an image has been edited or manipulated, though the company is researching effective ways of detecting such tweaks.

Meanwhile, Google is also beginning to label images generated by AI. Those images will include markup in the original file that adds context about their creation wherever they're used, and image publishers like Midjourney and Shutterstock will include the markup as well. Google's efforts to clarify where its search results come from started earlier this year with features like "About this result."
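Google hasn't published the exact schema for that markup. One plausible mechanism, used here purely for illustration, is IPTC photo metadata, whose DigitalSourceType field includes a "trainedAlgorithmicMedia" term for AI-generated imagery. The sketch below assumes that convention and an installed copy of the exiftool utility; neither is confirmed by Google's announcement.

```python
# Illustrative sketch: reading provenance markup from an image file with
# exiftool. Assumes the markup uses IPTC's DigitalSourceType field; Google
# hasn't confirmed the actual schema.
import json
import subprocess

def digital_source_type(path: str) -> str | None:
    """Return the image's declared digital source type, if any."""
    out = subprocess.run(
        ["exiftool", "-json", "-XMP-iptcExt:DigitalSourceType", path],
        capture_output=True, text=True, check=True,
    ).stdout
    tags = json.loads(out)[0]
    return tags.get("DigitalSourceType")

if __name__ == "__main__":
    source = digital_source_type("photo.jpg")
    # IPTC's vocabulary marks AI output with a "trainedAlgorithmicMedia" term.
    if source and "trainedAlgorithmicMedia" in source:
        print("This image declares itself AI-generated.")
    else:
        print("No AI-generation marker found:", source)
```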

This is a developing story. Please check back for updates.

Follow all of the news from Google I/O 2023 right here.

This article originally appeared on Engadget at https://www.engadget.com/generative-ai-google-image-search-context-175311217.html?src=rss

Google’s Duet AI brings more generative features to Workspace apps

After OpenAI’s ChatGPT caught the tech world off guard late last year, Google reportedly declared a “code red,” scrambling to plan a response to the new threat. The first fruit of that reorientation trickled out earlier this year with its Bard chatbot and some generative AI features baked into Google Workspace apps. Today at Google I/O 2023, we finally see a more fleshed-out picture of how the company views AI’s role in its cloud-based productivity suite. Google Duet AI is the company’s branding for its collection of AI tools across Workspace apps.

Like Microsoft Copilot for Office apps, Duet AI is an umbrella term for a growing list of generative AI features across Google Workspace apps. (The industry seems to have settled on marketing language depicting generative AI as your workplace ally.) First, the Gmail mobile app will now draft full replies to your emails based on a prompt in a new “Help me write” feature. In addition, the mobile Gmail app will soon add contextual assistance, “allowing you to create professional replies that automatically fill in names and other relevant information.”

Duet AI also makes an appearance in Google Slides. Here, it takes the form of image generation for your presentations. Like Midjourney or DALL-E 2, Duet AI can now turn simple text prompts (entered into the Duet AI “Sidekick” side panel) into AI-generated images to enhance Slides presentations. It could save you the trouble of scouring the internet for the right slide image while spicing up your deck with something original.

In Google Sheets, Duet AI can understand the context of a cell’s data and label it accordingly. The spreadsheet app also adds a new “help me organize” feature to create custom plans: describe what you want to do in plain language, and Duet AI will outline strategies and steps to accomplish it. “Whether you’re an event team planning an annual sales conference or a manager coordinating a team offsite, Duet AI helps you create organized plans with tools that give you a running start,” the company said.

Meanwhile, Duet AI in Google Meet can generate custom background images for video calls with a text prompt. Google says the feature can help users “express themselves and deepen connections during video calls while protecting the privacy of their surroundings.” Like the Slides image generation, Duet’s Google Meet integration could be a shortcut to save you from searching for an image that conveys the right ambiance for your meeting (while hiding any unwanted objects or bystanders behind you).

Duet also adds an “assisted writing experience” in Google Docs’ smart canvas. Entering a prompt describing what you want to write about will generate a Docs draft. The feature also works in Docs’ smart chips (automatic suggestions and info about things like documents and people mentioned in a project). Additionally, Google is upgrading Docs’ built-in Grammarly-style tools. A new proofread suggestion pane will offer tips about concise writing, avoiding repetition and using a more formal or active voice. The company adds that you can easily toggle the feature when you don’t want it to nag you about grammar.

Initially, you’ll have to sign up for a waitlist to try the new Duet AI Workspace features. Google says you can enter your info here to be notified as it opens the generative AI features to more users and regions “in the weeks ahead.”

This is a developing story. Please check back for updates.

Follow all of the news from Google I/O 2023 right here.

This article originally appeared on Engadget at https://www.engadget.com/googles-duet-ai-brings-more-generative-features-to-workspace-apps-173944737.html?src=rss

Google is incorporating Adobe's Firefly AI image generator into Bard

Back in March, Adobe announced that it too would be jumping into the generative AI pool alongside the likes of Google, Meta, Microsoft and other tech industry heavyweights with the release of Adobe Firefly, a suite of AI features. Available across Adobe's product lineup including Photoshop, After Effects and Premiere Pro, Firefly is designed to eliminate much of the drudge work associated with modern photo and video editing. On Wednesday, Adobe and Google jointly announced during the 2023 I/O event that both Firefly and the Express graphics suite will soon be incorporated into Bard, allowing users to generate, edit and share AI images directly from the chatbot's conversational interface.

Per a release from the company, users will be able to generate an image with Firefly, then edit and modify it using Adobe Express assets, fonts and templates within the Bard platform directly — even post to social media once it's ready. Those generated images will reportedly be of the same high quality that Firefly beta users are already accustomed to as they are all being created from the same database of Adobe Stock images, openly licensed and public domain content. 

Additionally, Google and Adobe will leverage the latter's existing Content Authenticity Initiative to mitigate some of the threats that generative AI poses to creators. This includes a "do not train" list, which will preclude a piece of art's inclusion in Firefly's training data, as well as persistent tags that will tell future viewers whether or not a work was AI-generated and what model was used to make it. Bard users can expect to see the new features begin rolling out in the coming weeks ahead of a wide-scale release.

Follow all of the news from Google I/O 2023 right here.

This article originally appeared on Engadget at https://www.engadget.com/google-is-incorporating-adobes-firefly-ai-image-generator-into-bard-174525371.html?src=rss

Google Photos will use generative AI to straight-up change your images

Google is stuffing generative AI into seemingly all its products, and that now includes the photo app on your phone. The company has previewed an "experimental" Magic Editor tool in Google Photos that can not only fix photos, but outright change them to create the shot you wanted all along. You can move and resize subjects, stretch objects (such as the bench above), remove an unwanted bag strap or even replace an overcast sky with a sunnier version.

Magic Editor will be available in early form to "select" Pixel phones later this year, Google says. The tech giant warns that output might be flawed, and that it will use feedback to improve the technology.

Google is no stranger to AI-based image editing. Magic Eraser already lets you remove unwanted subjects, while Photo Unblur resharpens jittery pictures. Magic Editor, however, takes things a step further. The technology adds content that was never there, and effectively lets you retake snapshots that were less-than-perfectly composed. You can manipulate shots with editors like Adobe's Photoshop, of course, but this is both easier and included in your phone's photo management app.

The addition may be helpful for salvaging photos that would otherwise be unusable. However, it also adds to the list of ethical questions surrounding generative AI. Google Photos' experiment will make it relatively simple to present a version of events that never existed. It may become that much harder to trust someone's social media snaps, even when they're not entirely fake.

This article originally appeared on Engadget at https://www.engadget.com/google-photos-will-use-generative-ai-to-straight-up-change-your-images-171014939.html?src=rss

Google Maps is expanding Immersive View to routes

Google Maps is expanding the Immersive View format it revealed last year to an important part of the app: routes. When you look for directions in Google Maps on iOS and Android in select cities, you'll see a more detailed view of the route.

The feature isn't just about making your journey look nicer than a bold line tracing the steps from point A to B (and maybe C). The idea is to bring all of the key information that you may need about the trip into a single place. You'll see details on traffic, weather, air quality, bike lanes and where to find nearby parking.

If you're planning a journey ahead of time and the weather is expected to be foggy or rainy, you'll see that in the visualization. Google is also using a blend of AI, real-time data and long-term trends to give you a sense of how busy traffic might be by displaying a certain number of vehicles on the virtual roads.

From Street View ➡️ New Immersive View for routes in @GoogleMaps 🧵↓ #GoogleIO pic.twitter.com/CMdR697hwm

— Google (@Google) May 10, 2023

Ahead of Google I/O, Miriam Daniel, the vice-president of Google Maps Experiences, told Engadget that the team was focusing on the above-ground parts of the journey for now. So, don't expect the visualizations to include your subway trips quite yet. Still, given that Google Maps users look up around 20 billion kilometers' worth of directions per day, Immersive View for Routes could come in handy for many folks.

Google plans to roll out Immersive View for Routes in 15 cities by the end of the year. In the coming months, you'll be able to check it out in Amsterdam, Berlin, Dublin, Florence, Las Vegas, London, Los Angeles, New York, Miami, Paris, Seattle, San Francisco, San Jose, Tokyo and Venice.

Immersive View uses AI and computer vision to blend together billions of aerial and Street View images to create 3D models of spaces. Google announced the feature at I/O last year and started rolling it out more broadly in February.

Elsewhere, Google has some Maps-related updates for developers (I/O is the company's annual developer conference, after all). The Google Maps Platform is offering a preview of an Aerial View API for locations in the US starting today. Developers can use this to add "a pre-packaged, bird's-eye view video" of a location to their apps or websites. Some of Google's partners are testing out the API, including Rent, which is using it to offer potential renters a more expansive look at a property and the surrounding area. That could give folks a clearer idea of the location where they may end up living before they visit an apartment in person.
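Google's documentation for the preview is still thin, but a minimal sketch of the developer-facing flow might look like the following, assuming the preview exposes a videos:lookupVideo endpoint keyed by address. The endpoint shape, parameters and response fields here are assumptions based on the announced preview, not confirmed specifics.

```python
# Minimal sketch: looking up a pre-rendered aerial view video for a US address.
# The endpoint and response shape are assumptions about the preview API; check
# Google's Maps Platform docs before relying on either.
import requests

API_KEY = "YOUR_MAPS_PLATFORM_KEY"  # hypothetical placeholder
LOOKUP_URL = "https://aerialview.googleapis.com/v1/videos:lookupVideo"

def lookup_aerial_video(address: str) -> dict:
    """Return aerial video metadata for an address, if Google has rendered one."""
    resp = requests.get(
        LOOKUP_URL,
        params={"address": address, "key": API_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # assumed to include a render state and playback URIs

if __name__ == "__main__":
    video = lookup_aerial_video("500 W 2nd St, Austin, TX 78701")
    if video.get("state") == "ACTIVE":
        # 'uris' is assumed to map formats (e.g. MP4) to playable URLs.
        print(video["uris"])
    else:
        print("No aerial view rendered for this address yet.")
```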

Meanwhile, Google is adding Photorealistic 3D Tiles to the Map Tiles API on an experimental basis starting today. This grants developers access to the high-resolution 3D imagery that powers Google Earth. It could make it easier for folks to create their own 3D maps. Google suggests that a tourism company might use the tiles to build interactive maps for guided tours or to show off the most striking features of a national park.
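Photorealistic 3D Tiles are delivered in the OGC 3D Tiles format, which renderers such as CesiumJS already understand. As a rough sketch of the first request a client would make, assuming the experimental endpoint follows the root-tileset pattern common to 3D Tiles services (the URL and response layout here are assumptions, not documented specifics):

```python
# Sketch: fetching the root tileset for Photorealistic 3D Tiles. A real
# client (e.g. CesiumJS) would walk the tileset's children and stream glTF
# tile content; here we just inspect the root metadata.
import requests

API_KEY = "YOUR_MAPS_PLATFORM_KEY"  # hypothetical placeholder
ROOT_URL = "https://tile.googleapis.com/v1/3dtiles/root.json"

resp = requests.get(ROOT_URL, params={"key": API_KEY}, timeout=10)
resp.raise_for_status()
tileset = resp.json()

# Per the OGC 3D Tiles spec, a tileset declares an asset version and a root
# tile whose bounding volume and children describe the renderable hierarchy.
print(tileset.get("asset", {}).get("version"))
print(tileset.get("root", {}).get("boundingVolume"))
```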

Follow all of the news from Google I/O 2023 right here.

This article originally appeared on Engadget at https://www.engadget.com/google-maps-is-expanding-immersive-view-to-routes-170618016.html?src=rss

Apple is bringing Final Cut Pro and Logic Pro to iPad on May 23rd

Apple finally has professional creative software to match the iPad Pro. The company is releasing both Final Cut Pro and Logic Pro for iPad on May 23rd. The two tablet apps now feature a touch-friendly interface and other iPad-specific improvements, such as Pencil and Magic Keyboard support (more on those in a moment). At the same time, Apple wants to reassure producers that these are full-featured apps that won't leave Mac users feeling lost.

The apps also represent a change in Apple's pricing strategy. Where Final Cut Pro and Logic Pro for Mac are one-time purchases, you'll have to subscribe to the iPad versions for either $5 per month or $49 per year. There's a one-month free trial. The move isn't surprising given Apple's increasing reliance on services for revenue, but it may be disappointing if you were hoping to avoid the industry's subscription trend.

Developing...

This article originally appeared on Engadget at https://www.engadget.com/apple-is-bringing-final-cut-pro-and-logic-pro-to-ipad-on-may-23rd-132957320.html?src=rss

Artiphon’s Minibeats AR app creates music from movement and gestures

Artiphon, the company behind the Orba handheld synth and MIDI controller, launched a new AR music creation app today that you don’t need a musical background to enjoy. Minibeats for iOS uses gestures, dance moves and facial expressions to craft songs played on 12 virtual instruments with colorful visual effects.

You could view the Minibeats app as a phone camera equivalent to Artiphon’s music-creation hardware. Here, instead of tapping touchpads on top of an orb-like device, the app lets you wave your hands, smile, frown and bust a move; the camera will capture your gestures and turn them into corresponding music.

The app is an extension of the company’s mission to make music creation a fun and simple activity that anyone can do. “With an intuitive interface and zero learning curve, Minibeats allows you to make music in innovative ways using simple gestures,” Artiphon’s announcement reads. “Dance to the beat as Minibeats tracks your movements and mixes the music. Wave your hands to draw across the sky with sparkles, lasers, and ripples. And even play music by smiling and frowning as Minibeats detects your emotions and scores it with a mood that matches the moment.”

The app taps into the Snapchat CameraKit SDK, which Artiphon already used in custom lenses it launched earlier this year in collaboration with electronic artists San Holo and LP Giobbi. “The iOS app will take this idea even further with more music to choose from and even more exciting ways to play it,” the company’s launch video states.

Although the app is tailored for simplicity, it provides hint videos to show you the ropes and teach you the subtler details of AR music creation. Additionally, it includes “dozens” of visual effects corresponding to your gestures and sounds. And, of course, the app makes it easy to share your creations, letting you download your makeshift music video to your iOS Photos library or share it with friends through text, email or social apps.

This article originally appeared on Engadget at https://www.engadget.com/artiphons-minibeats-ar-app-creates-music-from-movement-and-gestures-130025054.html?src=rss

WhatsApp begins testing Wear OS support

One of the largest apps in the world is coming to Wear OS watches, 9to5Google and WaBetaInfo have reported. WhatsApp is now testing an app for Wear OS 3 on devices like the Galaxy Watch 5, Pixel Watch and others. It offers much of the functionality of the mobile versions, showing recent chats and contacts, while allowing you to send voice and text messages. 

To set up the app, you'll need the beta version of WhatsApp on your phone. Once you install the app on your watch, it will display an eight-digit alphanumeric code that you punch into the mobile app.

From there, a list of recent conversations will pop up, along with "Settings" and "Open on phone." Tapping any conversation brings up the individual or group chat, showing messages, sent images and more. At the bottom of each chat, you can choose to send a voice or text message, using the system keyboard for the latter. You can likewise view or listen to any messages you've already sent or received. 

WhatsApp offers a circular complication that shows unread messages on your watch face. There are also two tiles for contacts and voice messages, letting you quickly access people or start a voice message recording. 

It's a significant release for Wear OS 3, offering an ultra-popular app that most people have on their phones — in turn fulfilling Google's aim of getting more developers on the platform. To get the app, you'll need to sign up for the WhatsApp beta and be running version 2.23.10.10+ on both your smartphone and watch. 

This article originally appeared on Engadget at https://www.engadget.com/whatsapp-begins-testing-wear-os-support-105519596.html?src=rss

Xbox app for PC now lets you find games based on accessibility features and estimated playtimes

Microsoft fine-tuned the discovery features in the Xbox app for PC this week: the desktop app’s April update adds the ability to filter games by accessibility features and browse collections grouped by how long games take to finish.

Microsoft first let developers add accessibility feature tags to their games in late 2021. Now, you can filter the All PC Games list in the Windows app to show results with specific accessibility features like a steady camera, narrated game menus or custom volume controls (among others). The update brings the desktop app up to speed with Xbox consoles, which already include accessibility filtering.

A byproduct of Microsoft’s HowLongToBeat integration last year, new collections make it easier to find games based on their approximate completion times. The new “Quick Games to Play” and “Longest Games” collections are on the PC app’s home screen. For example, HowLongToBeat’s estimates for Mass Effect 3 include 24.5 hours for the main story, an extra 11 hours to complete side quests and 50 total hours for completionists to wrap it all up. So if you’re hoping to avoid games requiring too much or too little investment, browsing these groups could be a handy way to find a starting point for your next adventure.

This article originally appeared on Engadget at https://www.engadget.com/xbox-app-for-pc-now-lets-you-find-games-based-on-accessibility-features-and-estimated-playtimes-162001538.html?src=rss

Slack is getting in on the GPT AI trend

At its World Tour NYC event, Salesforce introduced Slack GPT, which it describes as a three-pronged vision for integrating AI features into the business messaging app. Slack GPT comprises AI-powered features built natively into the app, a new AI-ready platform that was recently made available to developers, and Einstein GPT integration that will power the app's ability to instantly generate insights and summaries. Einstein GPT was developed by Salesforce as a generative AI for customer relationship management (CRM) and could assist businesses with sales-related tasks. 

The integrated AI features will give users access to a workflow builder that doesn't require them to know how to code. In it, they can automatically create or update a canvas, Slack's collaboration tool. Users can also summon Einstein GPT to summarize Huddle calls and create canvases from those calls simply by clicking a button. That said, companies aren't limited to Einstein GPT: they can integrate large language models of their choice, including OpenAI's, into the new AI-ready Slack platform. In fact, a Claude (Anthropic) app is now available for Slack, while the ChatGPT app for the messaging service is currently in beta. Salesforce assures customers that Anthropic and OpenAI will not take data from their Slack apps to train their language models. 
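The AI-ready platform itself isn't public yet, so there's no official example to point to. As a loose illustration of the bring-your-own-model idea, here's a hedged sketch that wires OpenAI's chat API into a Slack bot using the existing Bolt for Python SDK; the summarization behavior and prompt are invented for illustration and aren't Salesforce's implementation.

```python
# Illustrative sketch only: plugging an LLM into a Slack app with Bolt for
# Python. This mimics the "bring your own model" idea behind Slack GPT; it
# is not Salesforce's actual AI-ready platform.
import os

import openai  # pre-1.0 SDK style, current as of this writing
from slack_bolt import App

app = App(token=os.environ["SLACK_BOT_TOKEN"],
          signing_secret=os.environ["SLACK_SIGNING_SECRET"])
openai.api_key = os.environ["OPENAI_API_KEY"]

@app.event("app_mention")
def summarize_channel(event, say, client):
    """When the bot is mentioned, summarize recent channel messages."""
    history = client.conversations_history(channel=event["channel"], limit=30)
    transcript = "\n".join(m.get("text", "") for m in history["messages"])
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Summarize this Slack discussion."},
            {"role": "user", "content": transcript},
        ],
    )
    say(completion.choices[0].message["content"])

if __name__ == "__main__":
    app.start(port=3000)  # dev server; production apps often use Socket Mode
```

Swapping in a different model here would mean replacing only the completion call, which is roughly the flexibility Salesforce is promising with its AI-ready platform.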

Salesforce said Slack GPT is being developed to boost users' productivity and gave several examples of how its features could be used. Sales teams could use them to auto-generate account channel summaries, create canvases for investors and draft customer recommendations. Customer service agents could use AI-generated solutions and responses to quickly resolve issues and auto-generate case summaries. Developers could scan channel activity and summarize root-cause analyses when identifying solutions for issues in their software. The AI tools could also auto-generate images and copy for blogs, email campaigns and social media posts for marketers. At the moment, Slack GPT's native AI capabilities, the new AI-ready platform and the Einstein GPT app for Slack are still in development, and it's unclear when they'll roll out. 

In addition to Slack GPT, Salesforce has also announced its plans to collaborate with Accenture "to accelerate the deployment of generative AI for CRM." The companies are apparently planning to provide businesses and organizations with the technology and help they need to be able to adopt Einstein GPT to increase productivity and improve customer experiences.

This article originally appeared on Engadget at https://www.engadget.com/slack-is-getting-in-on-the-gpt-ai-trend-090054594.html?src=rss