
Apple Silicon Macs now natively support Unreal Engine 5

Fortnite creator Epic Games' Unreal Engine 5 allows anyone to quickly build 3D worlds, so it's great not just for games but also for Hollywood virtual sets and more. Until now, users of recent Macs have relied on Apple's Rosetta translation technology to run it, but Epic has just released a new update, version 5.2, that runs natively on Apple Silicon. That should allow for significantly improved performance on M1 and M2 Macs.

There's more news for Apple users as well. Epic unveiled a new iPad app for virtual productions that works with the Unreal Engine's ICVFX (In-Camera VFX) editor. It offers "an intuitive touch-based interface for stage operations such as color grading, light card placement, and nDisplay management tasks from anywhere within the LED volume," the company said. In other words, it lets DPs, VFX folks and others tweak lighting and more on virtual sets from a simple, portable interface.


The update is interesting in the context of Apple's antitrust dispute with Epic Games over Fortnite commissions on the App Store. Apple largely won that fight, as an appeal panel found that the company wasn't a monopolist in the distribution of iOS apps. Back in 2020, Apple tried to suspend Epic Games' developer account, but that move was later blocked by a judge. 

Other new features introduced with the Unreal Engine 5.2 update include a "Procedural Content Generation framework" that lets you populate large scenes with the Unreal Engine assets of your choice, making it faster to build large worlds. Another feature, called Substrate, allows material creation with more control over the look and feel of objects used in real-time applications like games, as well as in linear content creation. Epic demonstrated the latter using its previous Rivian demo, giving a metallic-looking paint job to the R1T electric pickup.
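Epic exposes the PCG framework through graphs in the editor rather than through scripting, but the underlying idea, placing asset instances according to rules instead of by hand, is easy to sketch. Here's a minimal, purely illustrative Python example of rule-based scattering; the asset names and spacing rule are invented for the sketch and have nothing to do with Unreal's actual API.

```python
import random

# Hypothetical asset palette -- these names are invented for illustration
# and are not part of Unreal Engine's PCG framework.
ASSETS = ["rock_small", "rock_large", "pine_tree", "fern"]

def scatter(bounds, count, min_spacing, seed=42):
    """Place up to `count` asset instances inside `bounds`, rejecting any
    candidate that lands too close to an already-placed instance."""
    (x0, y0), (x1, y1) = bounds
    rng = random.Random(seed)
    placed, attempts = [], 0
    while len(placed) < count and attempts < count * 20:
        attempts += 1
        x, y = rng.uniform(x0, x1), rng.uniform(y0, y1)
        if all((x - px) ** 2 + (y - py) ** 2 >= min_spacing ** 2
               for px, py, _, _ in placed):
            placed.append((x, y, rng.uniform(0, 360), rng.choice(ASSETS)))
    return placed

for x, y, yaw, asset in scatter(((0, 0), (100, 100)), count=10, min_spacing=8):
    print(f"{asset:>10} at ({x:5.1f}, {y:5.1f}), yaw {yaw:5.1f}")
```

The rejection-sampling spacing check stands in for the kinds of placement rules a PCG graph encodes; the point is that the world is described by rules and regenerated on demand, not placed asset by asset.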


Facebook Messenger app for Apple Watch is going away after May 31st

Say goodbye to another high-profile Apple Watch app. As MacRumors notes, Meta is telling Facebook Messenger users that the Apple Watch version will be unavailable after May 31st. While you'll still get message notifications beyond that point, you won't have the option to respond. Meta didn't provide an explanation in a statement to Engadget. Instead, it pointed users to Messenger on "iPhone, desktop and the web."

Meta (then Facebook) introduced Messenger for the Apple Watch in 2015. The app couldn't offer text responses, but you could send audio clips, stickers and similar smartwatch-friendly responses from your wrist. That made it helpful for quickly acknowledging a message without reaching for your iPhone.

A few factors may have played a role in the decision. To start, the limited interaction hurt the app's appeal, which likely capped its potential audience. Meta is also laying off roughly 10,000 employees and refocusing its efforts in a bid to cut costs. That means cutting less essential projects, and it's safe to presume Messenger for Apple Watch wasn't a top priority.

Numerous well-known companies have dropped their Apple Watch apps over the years. Meta scrapped its wrist-worn Instagram app in 2018. Slack, Twitter, Uber and others have also ditched their wearable clients. In many cases, developers left due to either a lack of demand or a lack of necessity — there's not much point to a native smartwatch app if you'll likely pick up your phone regardless.

Apple may be aware of this. Rumors suggest watchOS 10 may be redesigned around widgets. Apps might stick around, but the emphasis could be on quick-glance information rather than navigating apps on a tiny screen. Even if you use Messenger for Apple Watch now, there might not be as much incentive to use it going forward.


Google’s Project Starline booth gave me a holographic meeting experience

It’s been two years since Google introduced its Project Starline holographic video conferencing experiment, and though we didn’t hear more about it during the keynote at I/O 2023 today, there’s actually been an update. The company quietly announced that it’s made new prototypes of the Starline booth that are smaller and easier to deploy. I was able to check out a demo of the experience here at Shoreline Park and was surprised by how much I enjoyed it.

But first, let’s get one thing out of the way: Google did not allow us to take pictures or video of the setup. It’s hard to capture holograms on camera anyway, so I’m not sure how effective that would have been. Due to that limitation, though, we don’t have many photos for this post, so I’ll do my best to describe the experience in words.

After some brief introductions, I entered a booth with a chair and desk in front of the Starline system. The prototype itself was made up of a light-field display that looked like a mesh window, which I’d guess is about 40 inches wide. Along the top, left and right edges of the screen were cameras that Google uses to gather the visual data required to generate a 3D model of me. At this point, everything looked fairly unassuming.

Things changed slightly when Andrew Nartker, who heads up the Project Starline team at Google, stepped into frame. He sat in his chair in a booth next to mine, and when I looked at him dead on, it felt like a pretty typical 2D experience, except in what seemed like very high resolution. He was life-sized, and it felt as if we were making eye contact and holding each other’s gaze, despite not looking into a camera. When I leaned forward or closer, he did too, and nonverbal cues like that made the call feel a little richer.

What blew me away, though, was when he picked up an apple (haha, I guess Apple can say it was at I/O) and held it out towards me. It was so realistic that I felt as if I could grab the fruit from his fist. We tried a few other things later, like fist bumping and high fiving, and though we never actually made physical contact, the positioning of our limbs on the call was accurate enough that we could meet the projections of each other’s fists.

The experience wasn’t perfect, of course. When Nartker and I talked at the same time, I could tell he couldn’t hear what I was saying. Every now and then, too, the graphics would blink or appear to glitch. But those were very minor issues, and overall the demo felt very refined. Some of the issues could even be chalked up to spotty event WiFi, and I can personally attest that the signal was indeed very shitty.

It’s also worth noting that Starline was capturing visual and audio data of me and Nartker, sending it to the cloud over WiFi, creating a 3D model of both of us and then sending that down to the light-field display and speakers on the prototype. Some hiccups are more than understandable.
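Based purely on that description, one frame of the round trip looks something like the sketch below. Every name here is hypothetical, invented to make the capture, cloud reconstruction and light-field rendering stages concrete; it is not Google’s actual code.

```python
# A purely hypothetical sketch of the Starline round trip described above.
# None of these functions correspond to real Google APIs; they only make
# the pipeline's stages concrete.

def capture(camera_edges):
    # Cameras along the top, left and right edges of the screen supply the
    # views needed to reconstruct a 3D model of the person in the booth.
    return {"views": camera_edges, "audio": "pcm_chunk"}

def reconstruct(frame):
    # Cloud side: fuse the multi-view capture into a textured 3D model.
    return {"mesh": f"model_from_{len(frame['views'])}_views"}

def render(model, viewer_head_pose):
    # Re-render the remote person's model for the local viewer's eye
    # position, which is why parallax works when you lean in or to the side.
    return f"light_field({model['mesh']}, pose={viewer_head_pose})"

# One frame: capture -> upload over WiFi -> reconstruct -> download -> render.
frame = capture(["top", "left", "right"])
model = reconstruct(frame)  # this step happens in the cloud
print(render(model, viewer_head_pose=(0.1, -0.05, 0.6)))
```

Running that loop dozens of times a second over conference WiFi makes the occasional blink or glitch easy to forgive.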

While the earliest Starline prototypes took up entire rooms, the current version is smaller and easier to deploy. To that end, Google announced today that it had shared some units with early access partners including T-Mobile, WeWork and Salesforce. The company hopes to get real-world feedback to “see how Project Starline can help distributed workforces stay connected.”

We’re clearly a long way off from seeing these in our homes, but it was nice to get a taste of what Project Starline feels like so far. This was also the first time media demos were available, so I’m glad I was able to check it out for myself and tell you about it instead of relying on Google’s own messaging. I’m impressed by the realism of the projections, but I remain uncertain about how effectively this might substitute for or complement in-person conversations. For now, though, we’ll keep an eye on Google’s work on Project Starline and keep you posted.



You can now stream Android phone apps to your Chromebook

You won't have to install Android apps on your Chromebook just to use them in a pinch. After a preview at CES last year, Google has enabled app streaming through Phone Hub in Chrome OS Beta. You can quickly check your messages or track the status of a food order without having to sign in again.

Once Phone Hub is enabled, you can stream apps by either clicking a messaging app notification or browsing the Hub's Recent Apps section after you've opened a given app on your phone. Google doesn't describe certain app types as off-limits, although it's safe to say that you won't want to play action games this way.

The feature works with "select" phones running Android 13 or newer. The Chromebook and handset need to be on the same WiFi network and physically close by, although you can use the phone as a hotspot through Instant Tethering if necessary.

Google is ultimately mirroring the remote Android app access Windows has offered for years. However, the functionality might be more useful on Chromebooks. While app streaming won't replace native apps, it can save precious storage space and spare you from jumping between devices just to complete certain tasks. It's also more manufacturer-independent, whereas Microsoft's solution is restricted to Samsung and Honor phones.



Google opens up access to its text-to-music AI

AI-generated music has been in the spotlight lately, from a track that seemingly featured vocals from Drake and The Weeknd gaining traction to Spotify reportedly removing thousands of songs over concerns that people were using them to game the system. Now, Google is wading further into that space as the company opens up access to its text-to-music AI, called MusicLM.

Google detailed the system back in January when it published research on MusicLM. At the time, the company said it didn't have any plans to offer the public access to MusicLM due to ethical concerns related to copyrighted material, some of which the AI copied directly into the songs it generated. 

The generative AI landscape has shifted dramatically this year, however, and now Google feels comfortable enough to let the public try MusicLM. "We’ve been working with musicians like Dan Deacon and hosting workshops to see how this technology can empower the creative process," Google Research product manager Hema Manickavasagam and Google Labs product manager Kristin Yim wrote in a blog post.

As TechCrunch points out, the current public version of MusicLM doesn't allow users to generate music with specific artists or vocals. That could help Google to avoid copyright issues and stop users from generating fake "unreleased songs" from popular artists and selling them for thousands of dollars.

You can now sign up to try MusicLM through AI Test Kitchen on the web, Android and iOS. Google suggests trying prompts based on mood, genre and instruments, such as "soulful jazz for a dinner party" or "two nylon string guitars playing in flamenco style." The experimental AI will generate two tracks, and you can identify your favorite by selecting a trophy icon. Google says doing so will help it improve the model.
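AI Test Kitchen is a signup-gated app rather than a public API, but the two-track flow Google describes is a classic pairwise preference loop, the kind of feedback commonly used to improve generative models. Below is a hedged Python sketch of that pattern; generate_track and the feedback log are invented stand-ins, not Google endpoints.

```python
import random

def generate_track(prompt, seed):
    # Stand-in for the model call -- MusicLM is only reachable through the
    # AI Test Kitchen UI, so this just fabricates a track identifier.
    return f"track(prompt={prompt!r}, seed={seed})"

def trophy_round(prompt, pick_winner, feedback_log):
    """Generate two candidates for one prompt, let the listener award the
    trophy to one, and log the pair -- the kind of pairwise preference
    data that can later be used to improve a generative model."""
    seed_a, seed_b = random.sample(range(10_000), 2)
    a, b = generate_track(prompt, seed_a), generate_track(prompt, seed_b)
    winner = pick_winner(a, b)
    feedback_log.append({"prompt": prompt,
                         "chosen": winner,
                         "rejected": b if winner == a else a})
    return winner

log = []
# In the real UI a human listens and taps the trophy; here we fake choices.
trophy_round("soulful jazz for a dinner party", lambda a, b: a, log)
trophy_round("two nylon string guitars playing in flamenco style", lambda a, b: b, log)
print(log)
```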



How to pre-order the Google Pixel Fold

Prior to today's I/O event, Google confirmed the leaks and rumors about the existence of its first foldable smartphone with a teaser video on YouTube. We now know the full specs and pre-order details for the $1,800 handset. Starting now, you can pre-order the Google Pixel Fold through Google's storefront, and units should begin shipping sometime in June. And when you pre-order, Google will throw in a free Pixel Watch, too.

Like the 7-series Pixel phones, the Pixel Fold will feature Google's Tensor G2 SoC and come with 12GB of RAM and either 256GB or 512GB of storage. Google claims battery life beyond 24 hours, and the phone supports both wireless charging and 30W fast charging. Google says it's the thinnest foldable phone on the market, measuring a half-inch thick when folded.

The exterior features an always-on, 5.8-inch OLED display with up to 1550 nits of brightness and a 120Hz refresh rate. It's covered in the same Gorilla Glass Victus as the Pixel 7 and 7 Pro — but it's the interior screen that's getting most of the attention. The 7.6-inch, 120Hz folding display is facilitated by a custom, dual-axis steel hinge and foldable Ultra Thin Glass with a layer of protective plastic. There's just enough friction in the hinge to enable different views when the phone is propped up in tabletop mode.

The Pixel Fold has a total of five cameras: an 8MP inner camera, a 9.5MP selfie cam on the front screen, and three cameras across the rear bar, including a telephoto lens, an ultrawide lens and a 48MP camera with a half-inch sensor. The multiple screens and cameras will enable features like split screen productivity, tripod-free astrophotography and real-time translation during face-to-face conversations. 

We'll have a full review of the foldable soon. In the meantime, our senior reviews writer, Sam Rutherford, was able to do a quick hands-on with the Pixel Fold and thinks it's a fitting rival for Samsung's foldables. You can get it in either black or white, and pre-orders placed now should ship near the end of June.



Google remembered to add Gmail and Calendar to Wear OS

Google says Wear OS’ time in the doldrums is over and, two years after it approached Samsung to help bail out the platform, it’s now the world’s “fastest-growing” wearable ecosystem. At its I/O 2023 keynote on Wednesday, the company laid out a roadmap of improvements arriving in the coming months. That includes tighter integrations with Google Home and Nest, the long-awaited launch of full Gmail and Calendar apps, and the first news about the next generation of Wear OS.

Probably the most asked-for feature coming to Wear OS is access to Gmail and Calendar from the wrist. At some point “later this year,” users will be able to handle key tasks on their watch, including triaging their inbox and sending quick replies. Similarly, in Calendar you’ll be able to check your schedule, RSVP to invites and update tasks and to-do list items. And if you’re ensconced in Google’s smart home ecosystem, you’ll soon be able to use your watch to see who rang your Nest doorbell and remotely unlock your door.

As part of today’s keynote, Google also wanted to highlight the big-name third parties that have thrown their weight behind Wear OS. That includes WhatsApp, which recently started beta testing a Wear OS app that offers standalone messaging support. Similarly, Spotify is launching new tiles to start a podcast episode or a DJ session, while Peloton has recently updated its own Wear OS app to enhance tracking support and let users view their weekly workout progress from their wrist.

Google also showed off its Watch Face Format, built with Samsung to make it easier for developers to create watch faces for Wear OS devices. The key selling point is that the system itself handles the hard work, such as optimizing a face for rendering and battery performance, letting designers focus on the actual design part of the job.

Of more interest, however, is the news that Google is launching the emulator and developer preview for Wear OS 4. The company says that, when it launches “later this year,” users should expect improved battery life and performance. In addition, there’s a promise of new accessibility features, including a faster and more reliable text-to-speech engine. There’s also new backup and restore support to make it easier to swap your watch (or phone) without a lot of awkward fiddling. We can expect to learn more about this, and everything else Google is showing off, in the coming months.



Google adds more context and AI-generated photos to image search

Google announced new image search features at I/O 2023 on Wednesday that will make it easier to spot altered content. Photos on the search engine will soon include an "about this image" option that tells users when the image and ones like it were first indexed by Google, where it may have appeared first and other places it has been posted online. That information could help users figure out whether something they're seeing was generated by AI, according to Google.

The new feature will be accessible by clicking the three dots on an image in Google Image results. Google did not say exactly when it will be available, beyond that it will arrive first in the United States in the "coming months." Vice president of search Cathy Edwards told Engadget that the tool doesn't currently tell you if an image has been edited or manipulated, though the company is researching effective ways of detecting such tweaks.

Meanwhile, Google is also beginning to roll out images generated by its own AI tools. Those images will include a markup in the original file to add context about their creation wherever they're used. Image publishers like Midjourney and Shutterstock will also include the markup. Google's efforts to clarify where its search results come from started earlier this year with features like "About this result."
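The article doesn't spell out the markup format, but provenance labels like this typically ride along in an image's embedded metadata. Here's a hedged sketch of inspecting a file for such hints with Pillow; the file name is a placeholder, and which fields a given publisher actually writes is an assumption.

```python
# A hedged sketch: peek at an image's embedded metadata for provenance
# hints. Which fields publishers actually use for AI markup isn't
# specified in the article, so treat the field choices as assumptions.

from PIL import Image, ExifTags

def provenance_hints(path):
    img = Image.open(path)
    hints = {}
    # Standard EXIF fields sometimes name the generating software.
    for tag_id, value in img.getexif().items():
        name = ExifTags.TAGS.get(tag_id, str(tag_id))
        if name in ("Software", "Artist", "ImageDescription"):
            hints[name] = value
    # XMP/IPTC blocks (a common home for AI-provenance markers) are kept
    # in img.info by formats that preserve them.
    if any(k in img.info for k in ("XML:com.adobe.xmp", "xmp")):
        hints["xmp_block_present"] = True
    return hints

print(provenance_hints("example.jpg"))  # placeholder file name
```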

This is a developing story. Please check back for updates.



IBM's Watson returns as an AI development studio

Years before ChatGPT and other generative AI systems started impressing everyone with human-like text, IBM's Watson was blowing our minds on Jeopardy. IBM's cognitive computing project famously dominated its human opponents, but the company had much larger long-term goals, such as using Watson's ability to simulate a human thought process to help doctors diagnose patients and recommend treatments. That didn't work out. Now, IBM is pivoting its supercomputer platform into Watsonx, an AI development studio packed with foundation and open-source models companies can use to train their own AI tools.

If that sounds familiar, it may be because NVIDIA recently announced a similar service with its AI Foundations program. Both platforms are designed to give enterprises a way to build, train, scale and deploy AI. IBM says Watsonx will provide AI builders with a robust set of training models with auditable data lineage — ranging from models focused on automatically generating code for developers or handling industry-specific databases to climate datasets designed to help organizations plan for natural disasters.

IBM has already built an example of what the platform can do with that latter dataset in collaboration with NASA, using the geospatial foundation model to convert satellite images into maps that track changes from natural disasters and climate change.
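IBM hadn't published SDK details alongside the announcement, so here is a purely hypothetical sketch of the workflow the company describes: pick a foundation model from a catalog, inspect its auditable data lineage, then tune it on your own data. Every class and method name below is invented for illustration and is not IBM's actual watsonx API.

```python
# Purely hypothetical -- these names are invented to illustrate the
# foundation-model-plus-lineage workflow IBM describes, not watsonx itself.

from dataclasses import dataclass, field

@dataclass
class FoundationModel:
    name: str
    lineage: list  # auditable record of what the base model was trained on
    tuned_on: list = field(default_factory=list)

    def fine_tune(self, dataset):
        """Adapt the base model to a customer dataset, keeping the
        provenance trail intact for later audits."""
        self.tuned_on.append(dataset)
        return self

CATALOG = {
    "code-gen": FoundationModel("code-gen", ["permissively licensed code"]),
    "geospatial": FoundationModel("geospatial", ["NASA satellite imagery"]),
}

model = CATALOG["geospatial"].fine_tune("regional_flood_maps")
print(model.name, "| base data:", model.lineage, "| tuned on:", model.tuned_on)
```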

Reimagining Watson as an AI development studio might lack the pizazz of a headline-grabbing supercomputer that can beat humans on a TV quiz show — but the original vision of Watson was out of reach for the average person. Depending on how companies use IBM's new AI training program, you may find yourself interacting with a piece of Watson sometime in the near future.

Watsonx is expected to be available in stages, starting with the Watsonx.ai studio in July, and expanding with new features debuting later this year.


Apple is bringing Final Cut Pro and Logic Pro to iPad on May 23rd

Apple finally has professional creative software to match the iPad Pro. The company is releasing both Final Cut Pro and Logic Pro for iPad on May 23rd. The two tablet apps now feature a touch-friendly interface and other iPad-specific improvements, such as Pencil and Magic Keyboard support (more on those in a moment). At the same time, Apple wants to reassure producers that these are full-featured apps that won't leave Mac users feeling lost.

The apps also represent a change in Apple's pricing strategy. Where Final Cut Pro and Logic Pro for Mac are one-time purchases, you'll have to subscribe to the iPad versions for either $5 per month or $49 per year. There's a one-month free trial. The move isn't surprising given Apple's increasing reliance on services for revenue, but it may be disappointing if you were hoping to avoid the industry's subscription trend.

Developing...
