For the first time, WhatsApp is coming to smartwatches. At its I/O 2023 keynote on Wednesday, Google announced that the chat app will be available this summer on Wear OS 3 devices, including Samsung's Galaxy Watch 5 and the Pixel Watch. Among other features, the smartwatch version of WhatsApp allows you to record and send voice messages. You can also use the app to send text messages and see a list of your favorite contacts.
A beta version of the software was spotted earlier in the week by 9to5Google. From that preview, we know adding a Wear OS device to your account will involve typing an eight-digit alphanumeric code provided to you through your phone. Additionally, the beta release features a circular complication that shows unread messages on your watch's home screen, as well as two tiles for contacts and voice messages that let you quickly send messages to your friends and family.
The news that WhatsApp is heading to Wear OS devices comes after Meta announced at the end of last month it had redesigned WhatsApp’s multi-device functionality to make it possible to use one account on more than one phone.
This article originally appeared on Engadget at https://www.engadget.com/whatsapp-arrives-on-wear-os-this-summer-182644527.html?src=rss
Google has stood at the forefront of many of the tech industry's AI breakthroughs in recent years, Zoubin Ghahramani, Vice President of Google DeepMind, declared in a blog post, asserting that the company's work in foundation models is "the bedrock for the industry and the AI-powered products that billions of people use daily." On Wednesday, Ghahramani and other Google executives took the Shoreline Amphitheater stage to show off the company's latest and greatest large language model, PaLM 2, which now comes in four sizes able to run locally on everything from mobile devices to server farms.
PaLM 2, obviously, is the successor to Google's existing PaLM model that, until recently, powered its experimental Bard AI. "Think of PaLM as a general model that then can be fine-tuned to achieve particular tasks," Ghahramani explained during a call with reporters earlier in the week. "For example: health research teams have fine-tuned PaLM with medical knowledge to help answer questions and summarize insights from a variety of dense medical texts." He also noted that PaLM was "the first large language model to perform at an expert level on the US medical licensing exam."
Bard now runs on PaLM 2, which offers improved multilingual, reasoning, and coding capabilities, according to the company. The language model has been trained far more heavily on multilingual texts than its predecessor, covering more than 100 languages with improved understanding of cultural idioms and turns of phrase.
PaLM 2 is equally adept at generating programming code in Python and JavaScript. The model has also reportedly demonstrated "improved capabilities in logic, common sense reasoning, and mathematics," thanks to extensive training data from "scientific papers and web pages that contain mathematical expressions."
Even more impressive is that Google was able to spin the base PaLM 2 model into versions of four different sizes, dubbed Gecko, Otter, Bison and Unicorn.
"We built PaLM to to be smaller, faster and more efficient, while increasing its capability," Ghahramani said. "We then distilled this into a family of models in a wide range of sizes so the lightest model can run as an interactive application on mobile devices on the latest Samsung Galaxy." In all, Google is announcing more than two dozen products that will feature PaLM capabilities at Wednesday's I/O event
This article originally appeared on Engadget at https://www.engadget.com/google-unveils-its-multilingual-code-generating-palm-2-language-model-180805304.html?src=rss
Google is adding new features to its image search to make it easier to spot altered content, the company announced at Google I/O 2023 on Wednesday. Photos on the search engine will soon include an "about this image" option that tells users when the image and ones like it were first indexed by Google, where it may have first appeared and other places it has been posted online. That information could help users figure out whether something they're seeing was generated by AI, according to Google.
The new feature will be accessible by clicking the three dots on an image in Google Images results. Google did not say exactly when it will launch, beyond that it'll arrive first in the United States in the "coming months." Vice president of search Cathy Edwards told Engadget that the tool doesn't currently tell you if an image has been edited or manipulated, though the company is researching effective ways of detecting such tweaks.
Meanwhile, as Google begins rolling out its own AI image generation tools, the images they create will include a markup in the original file to add context about their creation wherever they're used. Image publishers like Midjourney and Shutterstock will also include the markup. Google's efforts to clarify where its search results come from started earlier this year with features like "About this result."
This article originally appeared on Engadget at https://www.engadget.com/generative-ai-google-image-search-context-175311217.html?src=rss
It's fair to say that Google was caught flat-footed by Microsoft's launch of Bing search powered by ChatGPT, as it didn't have anything similar when it unveiled its own conversational AI, Bard. Now, Google has announced Search Labs, a new way for consumers to test "bold new ideas and ideas we're exploring" in search, the company said at its I/O conference.
There are three key features available for a limited time. The first is called Search Generative Experience (SGE), which brings generative AI directly into Google Search. "The new Search experience helps you quickly find and make sense of information," Google's vice president of Search wrote. "As you search, you can get the gist of a topic with AI-powered overviews, pointers to explore more, and ways to naturally follow up."
Also available from the Search prompt are Code Tips, which use large language models to provide snippets and "pointers for writing code faster and smarter," according to Google. You can get responses about languages and tools including Java, Go, Python, JavaScript, C++, Kotlin, shell, Docker and Git.
Finally, "Add to Sheets" lets you insert search results directly into a spreadsheet. For example, if you're planning a vacation on a Sheets document, you can easily add a link straight from Google Search.
Bard could potentially improve Google products ranging from Maps to Drive. Search, however, is the company's core function and principal moneymaker, and was one of the first things it mentioned when announcing Bard. To that end, it'll be very interesting to see how it compares with what Microsoft's ChatGPT-powered Bing can do.
This article originally appeared on Engadget at https://www.engadget.com/googles-search-labs-lets-you-test-its-ai-powered-products-and-ideas-175254478.html?src=rss
Search is, a quarter century since its launch, arguably still Google's most impactful and familiar creation, a way in this day and age to sort through the chaff of unsourced nonsense on Twitter or Facebook and hopefully find vetted, trusted information quickly. With that context in mind, Google announced an absolutely puzzling new feature for its flagship product today at its 2023 I/O conference: the Perspectives tab.
Perspectives, according to the company, is a means to "exclusively see long- and short-form videos, images and written posts that people have shared on discussion boards, Q&A sites and social media platforms." In addition to the Perspectives tab, a carousel of the same results "may" appear in some search results — a standalone module graphically resembling the Top Stories module.
In its announcement, Google mentioned the Perspectives initiative is happening in tandem with its quest to "transform" search through AI. What role AI will play in sourcing content for Perspectives is still unknown, but automating content selection and providing it a sheen of authenticity has caused hiccups for Google in the past. The Popular On Twitter module has surfaced misinformation about breaking news events like mass shootings, and it's nearly certain Perspectives will hit many of the same snags. Google was also among the tech companies grilled by Congress in 2021 over its role in the spread of fake news, an issue which, again, Perspectives seems poised to exacerbate.
This shift toward potentially dubious sources seems to be not simply the result of trawling forums and tweets for a new glut of search results, but a considered strategy to move away from legacy sources of information. The company claims Perspectives will coincide with changes in "how we rank results in Search overall, with a greater focus on content with unique expertise and experience." Has anyone at Google been on a forum or social media website? Unique expertise is almost universally in short supply.
Perspectives launches "in the coming weeks." God help us all.
This article originally appeared on Engadget at https://www.engadget.com/google-searchs-new-perspectives-tab-will-highlight-forum-and-social-media-posts-175209372.html?src=rss
Knowingly or unknowingly, Microsoft kicked off a race to integrate generative AI into search engines when it introduced Bing AI in February. Google seemingly rushed into an announcement just a day before Microsoft’s launch event, telling the world its generative AI chatbot would be called Bard. Since then, Google has opened up access to its ChatGPT and Bing AI rival, but while Microsoft’s offering has been embedded into its search and browser products, Bard remains a separate chatbot.
That doesn’t mean Google hasn’t been busy with generative AI. It’s infused basically all of its products with the stuff, while leaving Search largely untouched. That is, until now. At its I/O developer conference today, the company unveiled the Search Generative Experience (SGE) as part of the new experimental Search Labs platform. Users can sign up to test new projects and SGE is one of three available at launch.
I checked out a demo of SGE during a briefing with Google's vice president of Search, Cathy Edwards, and while it has some obvious similarities to Bing AI, there are notable differences as well.
For one thing, SGE doesn't look too different from your standard Google Search at first glance. The input bar is still the same (whereas Bing's is larger and more like a compose box on Twitter). But the results page is where I first saw something new. Near the top of the page, just below the search bar but above all other results, is a shaded section showing what the generative AI found. Google calls this the AI-powered snapshot, containing "key information to consider, with links to dig deeper."
At the top of this snapshot is a note reminding you that “Generative AI is experimental,” followed by answers to your question that SGE found from multiple sources online. On the top right is a button for you to expand the snapshot, as well as cards that show the articles from which the answers were drawn.
I asked Edwards to help me search for fun things to do in Frankfurt, Germany, as well as the best yoga poses for lower back pain. The typical search results showed up pretty much instantly, though the snapshot area showed a loading animation while the AI compiled its findings. After a few seconds, I saw a list of suggestions for the former, including Palmengarten and Römerberg. But when Edwards clicked the expand button, the snapshot opened up and revealed more questions that SGE thought might be relevant, along with answers. These included "is Frankfurt worth visiting" and "is three days enough to visit Frankfurt," and the results for each included source articles.
My second question yielded more interesting findings. Not only did SGE show a list of suggested poses, but expanding the answers also brought up pictures in the source articles that gave a better idea of how to perform each one. Below the list was a suggestion to avoid yoga "if you have certain back problems, such as a spinal fracture or a herniated disc." Further down in the snapshot, there was also a list of poses to avoid if you have lower back pain.
Importantly, the very bottom of the snapshot included a note saying “This is for informational purposes only. The information does not constitute medical advice or diagnosis.” Edwards said this is one of the safety features built into SGE, where the disclaimer shows up on sensitive topics that could affect a person’s health or financial decisions.
In addition, the snapshot doesn’t appear at all when Google’s algorithms detect that a query has to do with topics like self-harm, domestic violence or mental health crises. What you’ll see in those situations is the standard notice about how and where to get help.
Based on my brief and limited preview, SGE seemed at once similar to and different from Bing AI. When citing its sources, for example, SGE doesn't show inline notations with footnotes linking to each article. Instead, it shows cards to the right of or below each section, similar to how the cards in news results look.
Both Google's and Microsoft's layouts offer conversational views, with suggested follow-up prompts at the end of each response. But SGE doesn't have an input bar at the bottom, and the search bar remains up top, outside of the snapshot. This makes SGE feel less like talking to a chatbot than Bing AI does.
Google didn't say it set out to build a conversational experience, though. It said, "With new generative AI capabilities in Search, we're now taking more of the work out of searching." Instead of having to do multiple searches to get at a specific answer, itinerary or process, you can bundle your parameters into one query, like "What's better for a family with kids under 3 and a dog, Bryce Canyon or Arches?"
The good news is that when you use the suggested responses in the snapshot, you can go into a new conversational mode. Here, "context will be carried over from question to question," according to a press release. You'll also be able to ask Google for help buying things online, and Edwards said the company sees 1.8 billion updates to its product listings every hour, helping keep information about supply and prices fresh and accurate. And since this is Google, which relies heavily on ads to make money, SGE will also feature dedicated ad spaces throughout the page.
Google also said it would remain committed to making sure ads are distinguishable from organic search results. You can sign up to test SGE in Search Labs; the experiment will open up to everyone in the coming weeks, starting in the US in English. Look for the Labs icon in the Google app or in Chrome on desktop, and visit labs.google.com/search for more info.
This article originally appeared on Engadget at https://www.engadget.com/google-search-generative-experience-preview-a-familiar-yet-different-approach-175156245.html?src=rss
After OpenAI’s ChatGPT caught the tech world off guard late last year, Google reportedly declared a “code red,” scrambling to plan a response to the new threat. The first fruit of that reorientation trickled out earlier this year with its Bard chatbot and some generative AI features baked into Google Workspace apps. Today at Google I/O 2023, we finally see a more fleshed-out picture of how the company views AI’s role in its cloud-based productivity suite. Google Duet AI is the company’s branding for its collection of AI tools across Workspace apps.
Like Microsoft Copilot for Office apps, Duet AI is an umbrella term for a growing list of generative AI features across Google Workspace apps. (The industry seems to have settled on marketing language depicting generative AI as your workplace ally.) First, the Gmail mobile app will now draft full replies to your emails based on a prompt in a new “Help me write” feature. In addition, the mobile Gmail app will soon add contextual assistance, “allowing you to create professional replies that automatically fill in names and other relevant information.”
Duet AI also makes an appearance in Google Slides. Here, it takes the form of image generation for your presentations. Like Midjourney or DALL-E 2, Duet AI can now turn simple text prompts (entered into the Duet AI "Sidekick" side panel) into AI-generated images to enhance Slides presentations. It could save you the trouble of scouring the internet for the right slide image while spicing up your deck with something original.
In Google Sheets, Duet AI can understand the context of a cell’s data and label it accordingly. The spreadsheet app also adds a new “help me organize” feature to create custom plans: describe what you want to do in plain language, and Duet AI will outline strategies and steps to accomplish it. “Whether you’re an event team planning an annual sales conference or a manager coordinating a team offsite, Duet AI helps you create organized plans with tools that give you a running start,” the company said.
Meanwhile, Duet AI in Google Meet can generate custom background images for video calls with a text prompt. Google says the feature can help users “express themselves and deepen connections during video calls while protecting the privacy of their surroundings.” Like the Slides image generation, Duet’s Google Meet integration could be a shortcut to save you from searching for an image that conveys the right ambiance for your meeting (while hiding any unwanted objects or bystanders behind you).
Duet also adds an “assisted writing experience” in Google Docs’ smart canvas. Entering a prompt describing what you want to write about will generate a Docs draft. The feature also works in Docs’ smart chips (automatic suggestions and info about things like documents and people mentioned in a project). Additionally, Google is upgrading Docs’ built-in Grammarly-style tools. A new proofread suggestion pane will offer tips about concise writing, avoiding repetition and using a more formal or active voice. The company adds that you can easily toggle the feature when you don’t want it to nag you about grammar.
Initially, you'll have to sign up for a waitlist to try the new Duet AI Workspace features. Google says you can register your info to be notified as it opens the generative AI features to more users and regions "in the weeks ahead."
This article originally appeared on Engadget at https://www.engadget.com/googles-duet-ai-brings-more-generative-features-to-workspace-apps-173944737.html?src=rss
For the past two months, anybody wanting to try out Google's new chatbot AI, Bard, had to first register their interest and join a waitlist before being granted access. On Wednesday, the company announced that those days are over. Bard is immediately dropping the waitlist requirement as it expands to more than 180 countries and territories. What's more, the expanded Bard is built atop Google's newest large language model, PaLM 2, making it more capable than ever before.
Google hurriedly released the first-generation Bard back in February after OpenAI's ChatGPT came out of nowhere and began eating the industry's collective lunch like Gulliver in a Lilliputian cafeteria. Matters were made worse when Bard's initial performances proved less than impressive — especially given Google's generally accepted status at the forefront of AI development — which hurt both Google's public image and its bottom line. In the intervening months, the company has worked to further develop PaLM, the language model that essentially powers Bard, allowing it to produce better quality and higher fidelity responses, as well as perform new tasks like generating programming code.
As Google executives announced at I/O on Wednesday, Bard has been switched over to the new PaLM 2 model. As such, users can expect a bevy of new features and functions to roll out in the coming days and weeks. Those include more visual responses to your queries: ask for "must-see sights" in New Orleans, for example, and you'll be presented with images of the places you'd see, not just a bulleted list or text-based description. Conversely, users will be able to more easily include images in their written queries, bringing Google Lens capabilities to Bard.
Even as Google mixes and matches AI capabilities among its products (25 new offerings running on PaLM 2 are being announced today alone), the company is looking to ally with other industry leaders to further augment Bard's abilities. Google announced on Wednesday that it is partnering with Adobe to bring the Firefly generative AI to Bard as a means to counter Microsoft's DALL-E 2-powered Bing Chat image generation.
Finally, Google shared news that it will be implementing a number of changes and updates in response to feedback received from the community since launch. Click on a line of generated code or a chatbot answer, and Bard will provide a link to the source of that specific bit. Additionally, the company is working to add an export ability so users can easily run generated programming code on Replit or send their generated works to Docs or Gmail. There will even be a new dark theme.
This article originally appeared on Engadget at https://www.engadget.com/google-bard-transitions-to-palm-2-and-expands-to-180-countries-172908926.html?src=rss
Back in March, Adobe announced that it too would be jumping into the generative AI pool alongside the likes of Google, Meta, Microsoft and other tech industry heavyweights with the release of Adobe Firefly, a suite of AI features. Available across Adobe's product lineup, including Photoshop, After Effects and Premiere Pro, Firefly is designed to eliminate much of the drudge work associated with modern photo and video editing. On Wednesday, Adobe and Google jointly announced during the 2023 I/O event that both Firefly and the Express graphics suite will soon be incorporated into Bard, allowing users to generate, edit and share AI images directly from the chatbot interface.
Per a release from the company, users will be able to generate an image with Firefly, then edit and modify it using Adobe Express assets, fonts and templates directly within the Bard platform — even posting it to social media once it's ready. Those generated images will reportedly be of the same high quality that Firefly beta users are already accustomed to, as they're all created from the same database of Adobe Stock images, openly licensed work and public domain content.
Additionally, Google and Adobe will leverage the latter's existing Content Authenticity Initiative to mitigate some of the threats that generative AI poses to creators. This includes a "do not train" list, which will exclude a piece of art from Firefly's training data, as well as persistent tags that will tell future viewers whether a work was AI-generated and what model was used to make it. Bard users can expect to see the new features begin rolling out in the coming weeks ahead of a wide-scale release.
This article originally appeared on Engadget at https://www.engadget.com/google-is-incorporating-adobes-firefly-ai-image-generator-into-bard-174525371.html?src=rss
Google is stuffing generative AI into seemingly all its products, and that now includes the photo app on your phone. The company has previewed an "experimental" Magic Editor tool in Google Photos that can not only fix photos, but outright change them to create the shot you wanted all along. You can move and resize subjects, stretch objects (such as the bench above), remove an unwanted bag strap or even replace an overcast sky with a sunnier version.
Magic Editor will be available in early form to "select" Pixel phones later this year, Google says. The tech giant warns that output might be flawed, and that it will use feedback to improve the technology.
Google is no stranger to AI-based image editing. Magic Eraser already lets you remove unwanted subjects, while Photo Unblur resharpens jittery pictures. Magic Editor, however, takes things a step further. The technology adds content that was never there, and effectively lets you retake snapshots that were less-than-perfectly composed. You can manipulate shots with editors like Adobe's Photoshop, of course, but this is both easier and included in your phone's photo management app.
The addition may be helpful for salvaging photos that would otherwise be unusable. However, it also adds to the list of ethical questions surrounding generative AI. Google Photos' experiment will make it relatively simple to present a version of events that never existed. It may be that much harder to trust someone's social media snaps, even if they aren't entirely fake.
This article originally appeared on Engadget at https://www.engadget.com/google-photos-will-use-generative-ai-to-straight-up-change-your-images-171014939.html?src=rss