In addition to a slew of Macs, new silicon and a compelling new Vision Pro headset, Apple also introduced a 15-inch MacBook Air at its Worldwide Developer Conference (WWDC) today. The larger MacBook Air not only offers a bigger display than its 13-inch counterpart, but it also comes with a more sophisticated sound system and battery life that's rated to last hours longer. I was able to pick one up here at Apple Park today to see how it feels, though I wasn't allowed to do much else with it.
I do like how thin and light the new MacBook Air is: at 11.5mm (0.45 inches) thick and 3.3 pounds (1.49kg), it beats the Dell XPS 15, which is both heavier and thicker. Apple's machine has a slightly smaller 15.3-inch Liquid Retina display, though, whereas Dell's comes in at 15.6 inches.
The 15-inch MacBook Air's screen can get up to 500 nits of brightness, and though I never got a chance to view it outdoors, the photos and interfaces I did see were lovely and crisp. Colors were vibrant and rich, and when an Apple rep showed me photos of long-haired dogs and a woman in a red dress in front of some cliffs, the details were tack sharp.
I also got to see how the laptop handles tasks like photo editing and gaming, which, thanks to its M2 chip, it did impressively quickly. An Apple rep used Photomator to erase multiple kayaks from a top-down photo of canoes on a river, and also changed the colors of certain parts of the image. Everything happened instantly and accurately. They also showed me part of a game called Stray so I could see how the laptop handled the graphics rendering of things like light reflecting off a puddle. These were very controlled demos, so while they all performed well and without lag, I would rather evaluate the MacBook Air based on our own real-world testing.
I did get to check out the new six-speaker sound system with spatial audio when a rep played some songs for me, including Beyoncé's "Cuff It." Unfortunately, because the demo space we were in was fairly noisy, it was hard to gauge how good the audio sounded. I stuck my ear right next to the machine and was only barely able to hear the song. This is another feature we'll have to wait for a review unit to test for ourselves.
I wasn't able to do much else with the new laptop, really, but here's a quick recap of some of its features. It has the same notch design as the 13-inch MacBook Air, housing its 1080p webcam, but unlike the smaller model, it comes with a 10-core GPU across the board instead of eight cores. It also ships with the dual-port 35W charger by default and has a larger trackpad.
If you're intrigued, you can order the 15-inch MacBook Air today, starting at $1,299, and it'll be available in stores on June 13th.
Follow all of the news from Apple's WWDC 2023 right here.
This article originally appeared on Engadget at https://www.engadget.com/apple-macbook-air-15-inch-preview-portable-power-213455527.html?src=rss
Apple is slated to hold its annual Worldwide Developer Conference today, and based on the rumors and leaks we've seen, it's shaping up to be a monumental year. The industry is expecting the company to launch its first mixed reality headset, along with a new platform that powers VR and AR applications, as well as the usual suspects like updates to iOS, macOS, watchOS and more. In addition, there might be new Mac hardware, and you never know what surprises might be in store. Will there be a Ted Lasso reveal? Or maybe new Fitness+ updates or a celebrity appearance? I guess we'll just have to wait to find out. The show kicks off at 1pm ET/10am PT, and we'll start publishing updates at 12pm ET, so stay tuned!
This article originally appeared on Engadget at https://www.engadget.com/apple-wwdc-2023-live-updates-160004876.html?src=rss
As Microsoft unveiled more of its plans for AI domination at its Build developer conference today, it made clear that no aspect of its business will be left untouched by AI. In addition to bringing its "Copilot" to Windows 11 and Edge, the company also shared details on how it will be infusing the Store with AI, beginning with the new AI Hub.
This is a "new curated section in the Microsoft Store where we will promote the best AI experiences built by the developer community and Microsoft," the company said in a press release. It will use this area to "educate customers on how to start and expand their AI journey, inspiring them to use AI in everyday ways to boost productivity, spark creativity and more." Examples include apps like Luminar Neo, Descript, Podcastle, Copy.ai, Kickresume, Play.ht and other services that let users tap AI to help them create content.
The Store will also get AI-generated review summaries that take feedback left by other users on apps and games and generate a concise rundown of what was said. This way, people won't have to sift through the "thousands of reviews" that Microsoft says some popular apps have.
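Microsoft hasn't explained how these summaries are produced, but the general shape of the task (condensing a large pile of user reviews into a few sentences with a text model) can be sketched roughly. In this illustrative Python sketch, `call_text_model` and `summarize_reviews` are hypothetical placeholders, not anything from Microsoft's actual pipeline:

```python
# Illustrative sketch only: Microsoft hasn't published how Store review
# summaries are generated. This shows the general shape of the task --
# condensing many user reviews into one short rundown via a text model.
# call_text_model is a stand-in for whatever summarization model is used.

def call_text_model(prompt: str) -> str:
    # Placeholder: in a real pipeline this would invoke an LLM or
    # summarization service. Here it just returns a canned response.
    return "Users praise the app's speed but note occasional sync issues."

def summarize_reviews(reviews: list[str], max_reviews: int = 200) -> str:
    """Build a prompt from a sample of reviews and ask the model for a rundown."""
    sample = reviews[:max_reviews]  # cap input size; a real system would chunk
    prompt = (
        "Summarize the following app reviews into two or three sentences, "
        "covering the most common praise and complaints:\n\n"
        + "\n".join(f"- {r}" for r in sample)
    )
    return call_text_model(prompt)

if __name__ == "__main__":
    fake_reviews = ["Fast and clean UI", "Crashes when syncing", "Love the dark mode"]
    print(summarize_reviews(fake_reviews))
```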
If you're a Windows Insider, you'll be able to try out a new feature in preview that will restore your Store app icons when you're transitioning to a new Windows 11 device. You'll have to be moving from a Windows 10 or 11 setup to begin with, and when you switch over, icons for your Store apps will "automatically get restored right where [you] had them — on the Start menu and Taskbar."
Developers will also be getting some AI support, like automatically generated keywords and suggested Search Tags in the Partner Center. This will use AI to "consume your metadata, as well as other signals, and help you improve the discoverability of your app in the Microsoft Store search results." The company is also adding the ability to list your app in multiple categories.
Microsoft Store Ads are also expanding in a few ways. First, they'll be added to search results on Bing starting next month, so people searching in their browsers will also see relevant Windows apps. Next month, they'll also reach beyond the US to more than 150 regions around the world. Developers will also get the option to display rich advertising in the spotlight section of the Store.
Most of the consumer-facing features announced for the Microsoft Store today will be available "soon," though more specific timeframes have yet to be shared. Still, it's clear the company is intent on bringing AI to every part of its business and all of its products, and the onslaught is nigh.
This article originally appeared on Engadget at https://www.engadget.com/ai-is-headed-to-the-microsoft-store-on-windows-150035716.html?src=rss
Guess Android tablets aren't dead just yet. Following Google's official launch of the Pixel Tablet last week, Amazon has unveiled a new Fire tablet called the Max 11. For just $230, the Fire Max 11 offers an 11-inch LCD screen, slim aluminum frame and smart home controls courtesy of Alexa. I was able to briefly check out a sample at a briefing last week and am impressed by how much Amazon is offering for the money.
This isn't your average Fire tablet, by the way. While the company's previous slates have found a niche as affordable, kid-friendly mobile entertainment devices, the Max 11 is all grown up. With slimmer bezels, a more premium aluminum build and a weight of just over a pound, it's designed for those who also want to do some work and multitasking. To that end, the tablet uses an octa-core MediaTek processor that Amazon said is almost 50 percent faster than its "next fastest tablet."
There's a fingerprint sensor embedded in the power button, making the Max 11 the company's first tablet to offer this feature. The 11-inch screen, which Amazon says is its "biggest, most vibrant... tablet display," has a 2,000 x 1,200 resolution and is certified for low blue light. It also supports WiFi 6 and runs Fire OS 8, which offers some split-screen and picture-in-picture features to let you fire off emails while keeping an eye on your favorite YouTube livestream (like the Engadget Podcast, perhaps?).
More importantly, though, the company also made a keyboard case and stylus for the Max 11 that you can get for an additional $100. If you don't need the pen ($35), you can get just the case for $90. It attaches to the device magnetically and connects via pogo pins, too. I like that the cover comes with a kickstand, and in my brief experience it was sturdy enough to prop the tablet up at various angles. The keyboard is detachable, so you can peel it off when you don't want it in the way. Its keys were surprisingly springy and deep, with a well-spaced layout. Though I think the trackpad is a little small, I'm glad that Amazon at least included one instead of ditching it altogether.
I also enjoyed casually scribbling my name and random greetings with the "Made For Amazon Stylus Pen," which uses a replaceable AAA battery that the company said should last six months. Palm rejection on OneNote was effective during the briefing, and you can also write directly into search and message fields, and the Max 11 will convert your scrawl into text that you can submit.
Of course, this is quite a different device from the Pixel Tablet, which comes with a speaker base that keeps it charged and turns into a smart display when attached. But lest you forget, Amazon already offers Show Mode on its tablets, which turns them into dashboards for your connected home, a la its Echo Shows. The same is true for the Max 11, and with the kickstand on the case, you can basically turn it into a smart display. Sure, it won't always remain charged unless you plug it in, nor will it have a superior audio system when left standing. But you can sort of replicate the Pixel Tablet experience here for $150 less. Alexa can always be listening, too.
The Max 11 itself will last 14 hours on a charge, according to Amazon, and 64GB and 128GB models will be available. For those who are curious, the device will have 4GB of RAM and 8-megapixel front and rear cameras. And in case you're clumsy or expect the kids in your life to fight over this tablet, it should be reassuring to know that Amazon claims the Max 11 is "three times as durable as the iPad 10.9-inch (10th generation)."
Like the company's other tablets, the Fire Max 11 supports comprehensive parental controls and multiple user profiles, so you can share it with some peace of mind. At $330 with the keyboard case and stylus (or $230 on its own), the Fire Max 11 offers plenty of features that make it seem like a solid value. It's certainly cheaper (even when you include the price of the keyboard case) than the Galaxy Tabs, Surfaces and iPads that have long dominated the tablet market. There are companies like Lenovo to look out for, of course, but given the strong foothold Amazon has had in the family-oriented slate space, the Max 11 appears poised to find its home in the backpacks of many school-going children soon. You can pre-order the Fire Max 11 starting at $230 today.
This article originally appeared on Engadget at https://www.engadget.com/amazons-latest-fire-tablet-is-a-230-android-powered-2-in-1-130022727.html?src=rss
Every third Thursday of May, the world commemorates Global Accessibility Awareness Day or GAAD. And as has become customary in the last few years, major tech companies are taking this week as a chance to share their latest accessibility-minded products. From Apple and Google to Webex and Adobe, the industry’s biggest players have launched new features to make their products easier to use. Here’s a quick roundup of this week’s GAAD news.
Apple's launches and updates
First up: Apple. The company actually had a huge set of updates to share, which makes sense since it typically releases most of its accessibility-centric news at this time each year. For 2023, Apple is introducing Assistive Access, which is an accessibility setting that, when turned on, changes the home screen for iPhone and iPad to a layout with fewer distractions and icons. You can choose from a row-based or grid-based layout, and the latter would result in a 2x3 arrangement of large icons. You can decide what these are, and most of Apple’s first-party apps can be used here.
The icons themselves are larger than usual, with high-contrast labels that make them more readable. When you tap into an app, a back button appears at the bottom for easier navigation. Assistive Access also includes a new Calls app that combines Phone and FaceTime features into one customized experience. Messages, Camera, Photos and Music have also been tweaked for the simpler interface, and they all feature high-contrast buttons, large text labels and tools that, according to Apple, "help trusted supporters tailor the experience for the individual they're supporting." The goal is to offer a less distracting or confusing system to those who may find the typical iOS interface overwhelming.
Apple also launched Live Speech this week, which works on iPhone, iPad and Mac. It will allow users to type what they want to say and have the device read it aloud. It not only works for in-person conversations, but for Phone and FaceTime calls as well. You'll also be able to create shortcuts for phrases you frequently use, like "Hi, can I get a tall vanilla latte?" or "Excuse me, where is the bathroom?" The company also introduced Personal Voice, which lets you create a digital voice that sounds like yours. This could be helpful for those at risk of losing their ability to speak due to conditions that could impact their voice. The setup process includes "reading alongside randomized text prompts for about 15 minutes on iPhone or iPad."
For those with visual impairments, Apple is adding a new Point and Speak feature to the detection mode in Magnifier. This will use an iPhone or iPad's camera, LiDAR scanner and on-device machine learning to understand where a person has positioned their finger and scan the target area for words, before reading them out for the user. For instance, if you hold up your phone and point at different parts on a microwave or washing machine's controls, the system will say what the labels are — like "Add 30 seconds," "Defrost" or "Start."
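Apple hasn't published implementation details for Point and Speak, but the flow described above (locate the pointed-at region using the camera and LiDAR, recognize the text there, then speak it) can be sketched conceptually. Every function in this Python sketch is a placeholder standing in for an on-device component, not Apple's actual code:

```python
# Conceptual sketch of the Point and Speak flow described above -- not
# Apple's implementation. Each helper is a placeholder for an on-device
# component (fingertip detection, OCR, text-to-speech).

from dataclasses import dataclass

@dataclass
class Region:
    x: int
    y: int
    width: int
    height: int

def detect_fingertip(frame, depth_map) -> Region:
    # Placeholder: a vision model would locate the user's fingertip using
    # the camera frame plus LiDAR depth, returning the area being pointed at.
    return Region(x=120, y=80, width=60, height=30)

def read_text_in_region(frame, region: Region) -> str:
    # Placeholder: OCR over the cropped region (e.g. an appliance button label).
    return "Defrost"

def speak(text: str) -> None:
    # Placeholder: hand the string to a text-to-speech engine.
    print(f"[spoken] {text}")

def point_and_speak(frame, depth_map) -> None:
    region = detect_fingertip(frame, depth_map)
    label = read_text_in_region(frame, region)
    speak(label)

if __name__ == "__main__":
    point_and_speak(frame=None, depth_map=None)  # dummy inputs for the sketch
```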
The company made a slew of other smaller announcements this week, including updates that allow Macs to pair directly with Made-for-iPhone hearing devices, as well as phonetic suggestions for text editing in voice typing.
Google's new accessibility tools
Meanwhile, Google is introducing a new Visual Question and Answer (or VQA) tool in the Lookout app, which uses AI to answer follow-up questions about images. Eve Andersson, the company's accessibility lead and senior director of Products For All, told Engadget in an interview that VQA is the result of a collaboration between the inclusion and DeepMind teams.
To use VQA, you'll open Lookout and start the Images mode to scan a picture. After the app tells you what's in the scene, you can ask follow-ups to glean more detail. For example, if Lookout said the image depicts a family having a picnic, you can ask what time of day it is or whether there are trees around them. This lets the user determine how much information they want from a picture, instead of being constrained to an initial description.
Figuring out how much detail to include in an image description is often tricky: you want to provide enough to be helpful, but not so much that you overwhelm the user. "What’s the right amount of detail to give to our users in Lookout?" Andersson said. "You never actually know what they want." Andersson added that AI can help determine the context of why someone is asking for a description or more information, and deliver the appropriate info.
When it launches in the fall, VQA will offer a way for users to decide when to ask for more and when they've learned enough. Of course, since it's powered by AI, the generated answers might not be accurate, so there's no guarantee this tool works perfectly, but it's an interesting approach that puts power in users' hands.
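Google hasn't shared how Lookout's VQA works under the hood, but the interaction it describes (an initial description followed by user-driven follow-up questions) boils down to a simple loop. In this sketch, `vqa_model` and `describe_then_ask` are hypothetical stand-ins, and the whole thing is illustrative rather than anything resembling Lookout's real code:

```python
# Illustrative sketch of the Lookout VQA interaction described above --
# an initial image description followed by follow-up questions. vqa_model
# is a stand-in for the DeepMind-built model; the real feature runs inside
# the Lookout app, not as a script like this.

def vqa_model(image_path: str, question: str, history: list[tuple[str, str]]) -> str:
    # Placeholder answer; a real model would ground this in the image
    # and the prior turns stored in history.
    return f"(answer about '{image_path}' to: {question})"

def describe_then_ask(image_path: str, questions: list[str]) -> None:
    history: list[tuple[str, str]] = []
    # Initial pass: a general description of the scene.
    first_question = "Describe this image."
    description = vqa_model(image_path, first_question, history)
    print(f"Description: {description}")
    history.append((first_question, description))
    # Follow-ups let the user pull only as much detail as they want.
    for q in questions:
        answer = vqa_model(image_path, q, history)
        print(f"Q: {q}\nA: {answer}")
        history.append((q, answer))

if __name__ == "__main__":
    describe_then_ask("picnic.jpg", ["What time of day is it?", "Are there trees nearby?"])
```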
Google is also expanding Live Captions to work in French, Italian and German later this year, as well as bringing the wheelchair-friendly labels for places in Maps to more people around the world.
Microsoft, Samsung, Adobe and more
Plenty more companies had news to share this week, including Adobe, which is rolling out a feature that uses AI to automate the process of generating tags for PDFs that would make them friendlier for screen readers. This uses Adobe's Sensei AI, and will also indicate the correct reading order. Since this could really speed up the process of tagging PDFs, people and organizations could potentially use the tool to go through stockpiles of old documents to make them more accessible. Adobe is also launching a PDF Accessibility Checker to "enable large organizations to quickly and efficiently evaluate the accessibility of existing PDFs at scale."
Microsoft also had some small updates to share, specifically around Xbox. It's added new accessibility settings to the Xbox app on PC, including options to disable background images and disable animations, so users can reduce potentially disruptive, confusing or triggering components. The company also expanded its support pages and added accessibility filters to its web store to make it easier to find optimized games.
Meanwhile, Samsung announced this week that it's adding two new levels of ambient sound settings to the Galaxy Buds 2 Pro, bringing the total number of options to five. This gives people who use the earbuds to listen to their environment greater control over how loud those sounds are. They'll also be able to select different settings for each ear, as well as choose levels of clarity and create customized profiles for their hearing.
We also learned that Cisco, the company behind the Webex video conferencing software, is teaming up with speech recognition company VoiceITT to add transcriptions that better support people with non-standard speech. This builds on Webex's existing live translation feature and uses VoiceITT's AI to familiarize itself with a person's speech patterns so it can better understand what they want to communicate. It then transcribes what is said, and the captions appear in a chat bar during calls.
While it's nice to see so many companies take the opportunity this week to release and highlight accessibility-minded features, it's important to remember that inclusive design should not and cannot be a once-a-year effort, and that's true not just for the tech industry but for the entire world. I was also glad to see that despite the current fervor around generative AI, most companies did not appear to stuff the buzzword into every assistive feature or announcement this week for no good reason. For example, Andersson said "we're typically thinking about user needs" and adopting a problem-first approach, as opposed to focusing on finding places where a type of technology can be applied to a solution.
While it's probably at least partially true that announcements around GAAD are a bit of a PR and marketing game, ultimately some of the tools launched today can actually improve the lives of people with disabilities or different needs. I call that a net win.
This article originally appeared on Engadget at https://www.engadget.com/the-tech-industrys-accessibility-related-products-and-launches-this-week-130022115.html?src=rss
It’s been two years since Google introduced its Project Starline holographic video conferencing experiment, and though we didn’t hear more about it during the keynote at I/O 2023 today, there’s actually been an update. The company quietly announced that it’s made new prototypes of the Starline booth that are smaller and easier to deploy. I was able to check out a demo of the experience here at Shoreline Park and was surprised by how much I enjoyed it.
But first, let’s get one thing out of the way: Google did not allow us to take pictures or video of the setup. It’s hard to capture holograms on camera anyway, so I’m not sure how effective that would have been. Due to that limitation, though, we don’t have a lot of photos for this post, and I’ll do my best to describe the experience in words.
After some brief introductions, I entered a booth with a chair and desk in front of the Starline system. The prototype itself was made up of a light-field display that looked like a mesh window, which I’d guess is about 40 inches wide. Along the top, left and right edges of the screen were cameras that Google uses to capture the visual data required to generate a 3D model of me. At this point, everything looked fairly unassuming.
Things changed slightly when Andrew Nartker, who heads up the Project Starline team at Google, stepped into frame. He sat in his chair in a booth next to mine, and when I looked at him dead on, it felt like a pretty typical 2D experience, except in what seemed like very high resolution. He was life-sized, and it felt as if we were making eye contact and holding each other’s gaze, despite not looking into a camera. When I leaned forward or moved closer, he did too, and nonverbal cues like that made the call feel a little richer.
What blew me away, though, was when he picked up an apple (haha I guess Apple can say it was at I/O) and held it out towards me. It was so realistic that I felt as if I could grab the fruit from his fist. We tried a few other things later — fist bumping and high fiving, and though we never actually made physical contact, the positioning of limbs on the call was accurate enough that we could grab the projections of each other’s fists.
The experience wasn’t perfect, of course. There were parts where, when Nartker and I were talking at the same time, I could tell he could not hear what I was saying. Every now and then, too, the graphics would blink or appear to glitch. But those were very minor issues, and overall the demo felt very refined. Some of the issues could even be chalked up to spotty event WiFi, and I can personally attest to the fact that the signal was indeed very shitty.
It’s also worth noting that Starline was capturing visual and audio data of both me and Nartker, sending it to the cloud over WiFi, creating a 3D model of each of us, and then sending that back down to the light-field display and speakers on the prototype. Some hiccups are more than understandable.
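Google hasn’t detailed Starline’s architecture beyond that high-level flow, but the round trip it implies (capture, upload, 3D reconstruction, then rendering on the light-field display) can be sketched as a simple pipeline. Every stage in this Python sketch is a placeholder based only on the description above, not Google’s actual system:

```python
# A rough sketch of the round trip described above (capture -> cloud ->
# 3D model -> light-field display). All stages are placeholders; Google
# hasn't published Starline's actual architecture or APIs.

import time

def capture_frame() -> dict:
    # Placeholder for the camera and microphone capture on the sending booth.
    return {"rgb": b"...", "depth": b"...", "audio": b"..."}

def upload_to_cloud(frame: dict) -> dict:
    # Placeholder for compressing and streaming the data over WiFi.
    return {"payload": frame, "sent_at": time.time()}

def reconstruct_3d_model(upload: dict) -> dict:
    # Placeholder for the cloud-side step that fuses color, depth and audio
    # into a renderable 3D representation of the caller.
    return {"mesh": "caller_mesh", "audio": upload["payload"]["audio"]}

def render_on_light_field_display(model: dict) -> None:
    # Placeholder for the receiving booth drawing the model with parallax.
    print(f"Rendering {model['mesh']} with spatial audio")

def starline_tick() -> None:
    frame = capture_frame()
    upload = upload_to_cloud(frame)
    model = reconstruct_3d_model(upload)
    render_on_light_field_display(model)

if __name__ == "__main__":
    starline_tick()  # one frame of the pipeline; the real system loops continuously
```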
While the earliest Starline prototypes took up entire rooms, the current version is smaller and easier to deploy. To that end, Google announced today that it had shared some units with early access partners including T-Mobile, WeWork and Salesforce. The company hopes to get real-world feedback to “see how Project Starline can help distributed workforces stay connected.”
We’re clearly a long way off from seeing these in our homes, but it was nice to get a taste of what Project Starline feels like so far. This was the first time media demos were available, too, so I’m glad I was able to check it out for myself and tell you about it instead of relying on Google’s own messaging. I am impressed by the realism of the projections, but I remain uncertain about how effectively this might substitute for or complement in-person conversations. For now, though, we’ll keep an eye on Google’s work on Project Starline and keep you posted.
Follow all of the news from Google I/O 2023 right here.
This article originally appeared on Engadget at https://www.engadget.com/googles-project-starline-booths-gave-me-a-holographic-meeting-experience-205804960.html?src=rss
When Google’s vice president of Nest products Rishi Chandra told me about the company’s vision for ambient computing in 2019, he talked about a hypothetical smart display that was basically a tablet attached to a speaker dock. You would be able to lift the screen off its base, take it with you to another room and place it on another station there. Today, at Google I/O, that hypothetical device is launching for real as the Pixel Tablet, and I was able to get some hands-on time with it in April.
Though it was initially teased at last year’s I/O, the Pixel Tablet is actually ready for purchase this year. Come May 10th, you’ll be able to pre-order it for $499, and that includes the speaker base. Google won’t be selling the tablet on its own, though you can buy additional docks for $120 each so you can have stations in multiple rooms.
Clearly, the company doesn’t want you to think of this as a $379 tablet (the $499 price minus the $120 dock). This is more of a hybrid device, better considered a smart display with a detachable screen. So don’t think of it as a successor to the discontinued Pixel Slate; Google said it was done making its own tablets (to focus on laptops instead), and it is… even if it’s confusingly calling this thing the Pixel Tablet.
In spite of its name, the Pixel Tablet will likely spend most of its time in your home as a smart display. With an 11-inch screen, rounded-rectangle shape and a mesh fabric covering the speaker base, the Pixel Tablet looks incredibly similar to the Nest Hub Max. You can get it in Hazel (gray), Porcelain (white) or Rose if you’re in the US; the latter two have white bezels, while the first has black borders. The device has a nano-ceramic coating that Google said was “inspired by the feel of porcelain,” lending it a “textured feel unlike any other tablet.” It’s hard to describe how this felt: I liked the matte finish, but it’s not like my fingers were sent into spasms of euphoria when I touched the device.
What I did find impressive was how little it weighed. I picked it up to use while it was still attached to the speaker, and didn’t feel any strain at all. Granted, I only held it that way for a few minutes, and it would probably start to feel heavy if I held it long enough. But then again, you’re not really supposed to hold the screen with the speaker connected.
On its own, the tablet is a fairly straightforward Android 13 slate. It offers the same multitasking features as most devices running the latest version of Android optimized for larger displays, though Google has also optimized 50 of its own apps for the Pixel Tablet. It also worked with developers to optimize apps like Spotify, Minecraft, Disney+ and more for the larger screen. For instance, Gmail and WhatsApp offer two-column layouts, and when I dragged a slider to expand the width of the former to take up more than half the screen, it went from a single column to a two-column view. Speaking of WhatsApp, you can now make video calls from the app on the Pixel Tablet, making it the first slate to support this.
You can also do things like drag and drop photos between apps while in split screen mode. A row of icons appears at the bottom of the screen when you drag your finger slightly from the bottom and pause. From here, you can launch your favorite and recently used apps.
The Pixel Tablet is also the first tablet to be powered by the company’s own Tensor G2 processor, which enables AI features like voice typing, Magic Eraser and Photo Unblur. Of course, you don’t have to be using the screen on its own to make use of these tools; the software works the same way whether the tablet is docked or disconnected.
Thankfully, the magnets holding the two parts together are strong enough to keep the display from sliding despite the angle it’s propped up at. It’s also possible to remove the screen with one hand, as a Google rep showed me at the demo, but it required some finesse in my experience. The dock isn’t heavy enough that you can simply peel the tablet off from the top — you’d need to use your hand as a lever along the bottom edge to separate the two. With practice, I could see this action becoming easier to do.
When the screen is attached to the base, a few things happen. The onboard speakers are deactivated and any media you’re playing will automatically stream through the dock’s more-capable system. From what I heard, the base speakers sound similar to those on the Nest Hub Max, which is to say the music was clear and had a nice amount of bass. I haven’t heard enough to judge the audio quality for sure, but it was definitely an upgrade from the tablet’s tinny output.
Another feature that becomes available when the display is connected to the dock is Hub Mode. You’ll see your selected photos on the lock screen, just like you do on Nest Hubs, as well as a home button on the bottom left. Tapping this brings up a control panel for your compatible connected home appliances like thermostats, lights, locks and camera feeds.
In this mode, anyone who can physically touch the Pixel Tablet can access this dashboard (it only works when the tablet is docked), so if you have a friend visiting, they can turn on the lights without having to unlock your device. They’ll also be able to set timers, play music and ask Google for answers. But don’t worry: they can’t do things that require your personal info, like see your calendar events. That would require unlocking the tablet, and I appreciate that there’s a fingerprint sensor on the power button at the top to make this more convenient.
During the hands-on event in New York, I used the demo unit to turn off a lamp in a San Francisco office and was able to watch it happen via the camera feed that was also onscreen.
When the tablet is docked, you’ll also be able to use it as an additional screen and Chromecast to it. Google said this is the first tablet with Chromecast built in, but to be clear, the feature is only available when the device is docked and in Hub Mode, not as a standalone slate. It’s a nice touch regardless, and great for places like your bedroom if you don’t have space for a TV. I’m definitely planning on leaving a Pixel Tablet dock by my bed so I can stream Netflix in the background when trying to fall asleep.
I also like the idea of using the Pixel Tablet as a dedicated device for my video conferences. The slate itself has two 8-megapixel cameras — one on the rear and one in front. Google has designed Meet to keep the user centered in the frame even if you’re moving around. The company says the Pixel Tablet “has the best Google Meet app video calls of any tablet,” which is a claim I’ll have to put to the test in the real world.
Using the Tensor G2 processor, the system will automatically adjust brightness to make sure you’re well lit. This was pretty funny to watch during our demo when the camera hunted for a person to keep in frame when I left its view. It discovered my colleague Sam about a foot away, even though he wasn’t facing the Tablet, and zoomed in on him. When both of us looked at the camera, the framing changed to accommodate us.
I’m not a fan of the low camera angle when the screen is docked, but the good news is you can still use Meet when the tablet is detached. Google also makes a case that you can buy for $79. It comes with a kickstand that doubles as a handle when unfolded all the way, so you can prop the device up on the go or hang it on a hook if you wish. I can see myself propping the tablet up on a higher surface or hanging it on a kitchen cabinet if I were to take a call from my parents while cooking dinner. What's nice is that because of the way the case is designed, you can easily snap the screen back onto the dock even with the case on, since the kickstand fits nicely around the base and the pogo pins can still make contact.
It’s worth noting that when the screen is detached, the speaker base is basically useless. You can’t cast to it, and because it doesn’t have a microphone onboard, it won’t hear your commands. It doesn’t have a battery onboard either, so this isn’t a portable system you can take to the beach or on a road trip (though I can’t imagine why you would).
The tablet battery will last for 12 hours of video streaming, according to Google, so you should at least be able to enjoy an entire season of You on a long-haul flight.
But remember: this isn’t meant to be a tablet first. Most other Android slates you’ll probably pick up a few times a month, only to be annoyed to find they’re dead and need charging. Or you’ll take one with you on a trip to watch shows on the train, or because you don’t like the inflight entertainment options. With the Pixel Tablet, you at least won’t have to worry about keeping it charged.
I’ve liked the idea of a smart display with a detachable screen since Chandra first mentioned it to me and, at first blush at least, the concept seems solid. I’ll have to wait till I can test out a unit in my own home to know how practical this idea is, but so far I’m intrigued.
Follow all of the news from Google I/O 2023 right here.
This article originally appeared on Engadget at https://www.engadget.com/pixel-tablet-hands-on-basically-a-500-smart-display-with-a-detachable-screen-185151133.html?src=rss
Knowingly or unknowingly, Microsoft kicked off a race to integrate generative AI into search engines when it introduced Bing AI in February. Google seemingly rushed into an announcement just a day before Microsoft’s launch event, telling the world its generative AI chatbot would be called Bard. Since then, Google has opened up access to its ChatGPT and Bing AI rival, but while Microsoft’s offering has been embedded into its search and browser products, Bard remains a separate chatbot.
That doesn’t mean Google hasn’t been busy with generative AI. It’s infused basically all of its products with the stuff, while leaving Search largely untouched. That is, until now. At its I/O developer conference today, the company unveiled the Search Generative Experience (SGE) as part of the new experimental Search Labs platform. Users can sign up to test new projects and SGE is one of three available at launch.
I checked out a demo of SGE during a briefing with Google’s vice president of Search and, while it has some obvious similarities to Bing AI, there are notable differences as well.
For one thing, SGE doesn’t look too different from your standard Google Search at first glance. The input bar is still the same (whereas Bing’s is larger, more like a compose box on Twitter). But the results page is where I first saw something new. Near the top of the page, just below the search bar but above all other results, is a shaded section showing what the generative AI found. Google calls this the AI-powered snapshot, containing “key information to consider, with links to dig deeper.”
At the top of this snapshot is a note reminding you that “Generative AI is experimental,” followed by answers to your question that SGE found from multiple sources online. On the top right is a button for you to expand the snapshot, as well as cards that show the articles from which the answers were drawn.
I asked Edwards to help me search for fun things to do in Frankfurt, Germany, as well as the best yoga poses for lower back pain. The typical search results showed up pretty much instantly, though the snapshot area showed a loading animation while the AI compiled its findings. After a few seconds, I saw a list of suggestions for the former, including Palmengarten and Römerberg. But when Edwards clicked the expand button, the snapshot opened up and revealed more questions that SGE thought might be relevant, along with answers. These included “is Frankfurt worth visiting” and “is three days enough to visit Frankfurt,” and the results for each included source articles.
My second question yielded more interesting findings. Not only did SGE show a list of suggested poses, but expanding the answers also brought up pictures from the source articles that gave a better idea of how to perform each one. Below the list was a suggestion to avoid yoga “if you have certain back problems, such as a spinal fracture or a herniated disc.” Further down in the snapshot, there was also a list of poses to avoid if you have lower back pain.
Importantly, the very bottom of the snapshot included a note saying “This is for informational purposes only. The information does not constitute medical advice or diagnosis.” Edwards said this is one of the safety features built into SGE, where the disclaimer shows up on sensitive topics that could affect a person’s health or financial decisions.
In addition, the snapshot doesn’t appear at all when Google’s algorithms detect that a query has to do with topics like self-harm, domestic violence or mental health crises. What you’ll see in those situations is the standard notice about how and where to get help.
Based on my brief and limited preview, SGE seemed at once similar to and different from Bing AI. When citing its sources, for example, SGE doesn’t show inline notations with footnotes linking to each article. Instead, it shows cards to the right of or below each section, similar to how the cards on news results look.
Both Google and Microsoft’s layouts offer conversational views, with suggested follow-up prompts at the end of each response. But SGE doesn’t have an input bar at the bottom, and the search bar remains up top, outside of the snapshot. This makes it seem less like talking to a chatbot than Bing AI.
Google didn’t say it set out to build a conversational experience, though. It said, “With new generative AI capabilities in Search, we’re now taking more of the work out of searching.” Instead of having to do multiple searches to get at a specific answer, itinerary or process, you can bundle your parameters into one query, like “What’s better for a family with kids under 3 and a dog, Bryce Canyon or Arches?”
The good news is that when you use the suggested responses in the snapshot, you can go into a new conversational mode. Here, “context will be carried over from question to question,” according to a press release. You’ll also be able to ask Google for help buying things online, and Edwards said the company sees 1.8 billion updates to its product listings every hour, helping keep information about supply and prices fresh and accurate. And since it’s Google after all, and Google relies heavily on ads to make money, SGE will also feature dedicated ad spaces throughout the page.
Google also said it would remain committed to making sure ads are distinguishable from organic search results. You can sign up to test the new SGE in Search Labs. The experiment will be available to everyone in the coming weeks, starting in the US in English. Look for the Labs icon in the Google app or Chrome on desktop, and visit labs.google.com/search for more info.
This article originally appeared on Engadget at https://www.engadget.com/google-search-generative-experience-preview-a-familiar-yet-different-approach-175156245.html?src=rss
Google is hosting its first full-on, in-person I/O developer conference since the pandemic, and we expect the company to announce a biblical amount of news at breakneck pace. Engadget is here at the show and will bring you a liveblog of the keynote as it happens. The show kicks off at 1pm ET today, and we'll be starting our commentary as early as noon. Keep your browser open here for our coverage of everything from Mountain View, CA today!
This article originally appeared on Engadget at https://www.engadget.com/live-updates-from-google-io-2023-163201853.html?src=rss
For better or worse, video calls have become an integral part of our lives. Whether you're chatting with a loved one who's oceans away, or collaborating with teammates across time zones, sitting in front of a webcam or holding up your phone is an inescapable reality. That's why many companies have developed products meant to make video calls feel more natural, like NVIDIA's Broadcast tool that makes it look like you're maintaining eye contact even if you aren't, and Google's experimental 3D telepresence or holographic booths. But Logitech is introducing something that uses dead simple technology to make video chats more like the real-world experience. The company announced Project Ghost in January, and recently invited us to check out a functioning version in New York.
The premise is straightforward. Instead of futzing around with holograms or algorithms that make your pupils look like you're staring at a camera, Logitech simply embedded its existing Rally Plus video conferencing system into a booth it teamed up with furniture maker Steelcase to create. The result is a booth that's like a larger business class seat (but not quite first class), with walls about 5 feet 10 inches tall. Light brown wooden slats line the exterior, matching the panels inside. On one side sits a hollow wall that's almost two feet thick, with a screen inside it and a mirror below that, placed at a perpendicular angle. Facing the TV is a light pink couch, a side table with a touchscreen control panel on it and some green plants behind that.
With its warm colors, soft curves, pink couch and greenery, the booth felt very inviting. I quickly collapsed onto the sofa and was slightly surprised to see a woman staring back at me. She appeared life-sized and it felt as if our eyes met, even though she was sitting in a similar booth all the way in Boston. Since the camera is embedded behind the display, it was easy for me to peer into her face and on the other end of the call it would look like I was staring right at her.
Though Logitech executives at the demo told me the video quality was capped at 1080p and was more likely streaming at 720p or lower, I initially thought the woman I was calling was rendered in 4K. But the clarity and realism that I had assumed was a result of high resolution was more likely because I wasn't used to talking to someone on such a large screen. Normally, I take my calls on a 13-inch laptop, and even when I'm in a meeting room with my colleagues' faces plastered on a 40-inch TV, I didn't get the sense that they were right in front of me.
The only time I felt like there was some distortion was when I heard feedback of my voice during parts of the demo. I couldn't tell where the speakers and mics were embedded in the space, so I didn't get to adjust or learn how to move to avoid the echoes. But for the most part, the meeting was smooth, and when the company's executives finally left the space for me to be alone with the caller, I was able to relax. Though I was only looking at the person's upper body, I was able to note small changes in body language like posture. It's not a perfect replacement for a real-world conversation, but possibly because I wasn't on my laptop, I was a lot more focused than I normally am on calls.
Much of that sense of realism and privacy might have to do with the setup of the booth. Behind the couch is a black wall, while above the TV box is a horizontal light with a filtered effect, and together they make the caller look well lit and in focus. And because both the person I'm talking to and I see only each other's upper bodies and heads, with nothing else in the background, there's little to distract from the conversation.
Of course, you could achieve something similar by investing in a tripod, a backdrop and a dedicated camera, and spend a lot less money, but this product isn't meant for the average consumer. Logitech said it received a lot of interest from companies wanting to order the booths for their office spaces, and that it was looking into iterating on the design to make them more suitable for different scenarios.
In addition to bigger setups for multiple people (the current couch is designed for one person), Logitech said it could also come up with something people could buy for home use. I could see Ghost being incredibly useful for calls with my therapist, telehealth appointments or even just as a dedicated livestreaming station. But considering Logitech estimates selling each unit for about $15,000 to $20,000 depending on the size or style, this is probably something I can only look at in envy. If you have that sort of money to spare, the company said it would be ready to sell these in the fall.
This article originally appeared on Engadget at https://www.engadget.com/logitech-made-googles-project-starline-video-conferencing-booth--minus-holograms-163058592.html?src=rss