After dipping its toes into live sports with golf and tennis exhibitions, Netflix is taking a major step forward on that front. The company has locked in a deal with the NFL to air a Christmas Day doubleheader, marking the first time that it will broadcast games from the league. Netflix will stream at least one holiday game in 2025 and 2026 as well. These games won't be blacked out in competing teams' home markets.
Reports last week suggested Netflix was in play for Christmas Day NFL games, and that was seemingly why the league postponed the reveal of its 2024 schedule until today (when Netflix is trying to win over advertisers at its upfront presentation). That said, it's not yet clear which NFL teams will be the first to square off live on Netflix around the world, though we'll find out when the league releases the schedule at 8PM ET.
Of course, Netflix isn't the first streaming service to broadcast NFL games. Prime Video has been showing them for years, while YouTube is the home of NFL Sunday Ticket.
It's not Netflix's first foray into the NFL as a whole, either. Last year, it debuted Quarterback, a hit unscripted series that followed Patrick Mahomes, Kirk Cousins, and Marcus Mariota during the 2022 season. A self-explanatory follow-up show called Receiver will arrive this summer.
Google I/O isn’t the only tech-adjacent event this week. Uber just held its annual GO-GET event and announced a whole bunch of new features coming to the rideshare platform/taxi app/whatever you wanna call it. Much of this news concerns shuttles and expanded ride sharing options, as Uber states in its promotional materials that “we cultivate the magic of human interaction.” Ah, yes. The magic of avoiding eye contact with a stranger sitting next to you in an Uber Pool. It truly is special.
Anyway, the big news here is something called Uber Shuttle. This lets users reserve up to five seats up to seven days in advance for transportation to and from major events like concerts and basketball games, though it's also available for trips to the airport. The company brags that this feature is particularly budget-friendly, noting that each rider will pay “a fraction of the price of an UberX.” The company promises that these rides will not be impacted by surge pricing. We’ll see about that. It’s also worth noting that these shuttles are only for events listed in the app, which is kind of a bummer.
Uber has partnered with Live Nation to bring these shuttles to certain venues throughout the summer, including Miami’s Hard Rock Stadium and Charlotte’s PNC Music Pavilion. These Uber Shuttles won’t be your typical Nissan Sentra or Toyota Camry: they're actual shuttles that hold anywhere from 14 to 55 occupants. The company says each driver will be commercially licensed to operate a large transport vehicle.
Rideshare companies have been trying to crack the "bus, but worse" code for a while now. Uber tried something similar in 2015, called Uber Hop, which failed spectacularly. Lyft followed suit in 2017, also failing spectacularly. Third time's the charm?
GO-GET wasn't just about the standard bus hiding under a fresh coat of Silicon Valley paint. UberX Share is getting a new feature that lets users schedule shared rides in advance to save a bit of money. The company notes that an average rider should save around 25 percent per ride using this tool when compared to a regular trip with UberX.
It says this is “perfect for commutes” and that it's “intentionally launching this new offering in cities that have experienced some of the highest rates of employees returning to office.” This includes New York City, Los Angeles, Chicago, San Francisco, San Diego and Atlanta, with more locations to be added in the near future.
The company also announced Uber Caregiver, which lets people book rides for loved ones to doctor's appointments and the like. The feature rolls out sometime during the summer and will be available for customers aged 65 and over.
Food delivery platform Uber Eats is getting a couple of updates. The company has added Costco to its lineup of retail delivery offerings. Costco members will not only get stuff delivered, but should also get an additional discount on top of their membership privileges. Finally, Uber Eats Lists is a new way for people to decide what to nosh on. It lets users peruse restaurant recommendations from friends and local foodies. Uber says this “makes it easy to explore a new city or switch up your go-tos.” The service launches in July in NYC and Chicago, with more cities to come.
Regular Uber users should look out for these features throughout the summer, though not if they live in Minneapolis. Uber’s pulling up stakes after the city council voted to increase driver pay. It would rather leave a bustling metropolis than abide by a slight pay increase. After all, the idea of fair pay could spread and infect the innocent minds of Uber drivers everywhere. Long live the totally healthy and normal gig economy.
Anyone who wants to keep an eye on their perimeter or see nighttime trash panda action may want to check out this deal on Amazon. Currently, bundles of the Blink Outdoor 4 cameras are on sale, with the deepest discount going to a five-pack set. At full price, it costs $400; with the discount, it's $200. That matches the Prime member-only price we saw earlier this year, but this time, even those who don't pay for Prime can get the offer. Other bundles and Blink devices are discounted too.
The Blink Outdoor 4 security cameras let users see, hear and talk with anyone who comes into view, and they send motion-detection alerts and live feeds to a connected smartphone. The cameras can also send footage to an Echo Show smart display and take commands from other Alexa-enabled devices like an Echo Dot or Fire TV. Just note that Blink equipment isn't Google Assistant- or Siri-compatible, so these really only make sense for an Amazon-based smart home.
The Outdoor 4 is the latest generation of the cameras, offering a wider field of view and better day and night image quality than the previous generation. During the day, they shoot 1080p video; in the dark, they use infrared night vision. Each unit runs on a pair of AA batteries, which should power the camera for two years. A plug-in Sync Module that stays inside is required to operate the Outdoor 4 cameras and, conveniently, is included in each bundle — as are enough batteries for the cameras, mounting kits and the plug for the Sync Module.
For those who just need to keep an eye on one area outside, there's the one-camera system, which also includes the Sync Module and other accessories. It's 40 percent off right now and down to an all-time low of $60. For a longer battery life, the Outdoor 4 single-cam system can also be bundled with a battery pack that extends the run time from two years to four. That version is $80 after a 33 percent discount.
Amazon is also discounting its Blink branded doorbells, floodlights and indoor cameras as part of a larger sale. Blanketing a home in Alexa's watchful eye just got a whole lot cheaper.
This week, streaming services are joining linear networks in revealing some of the projects they've got coming up in an attempt to win over advertisers. After Prime Video stepped up to the plate on Tuesday, it was Warner Bros. Discovery's turn at bat on Wednesday. The company surprised many by dropping a teaser trailer for Dune: Prophecy, a six-episode Dune prequel series that's coming to Max this fall.
The spinoff is set 10,000 years before the events of the Dune movies. It follows two Harkonnen sisters who tackle a threat to humanity while setting up the sisterhood that will eventually become the Bene Gesserit. Dune: Prophecy is based on the novel Sisterhood of Dune by Brian Herbert and Kevin J. Anderson.
The series stars Emily Watson, Olivia Williams, Travis Fimmel, Jodhi May and the always-great Mark Strong. The trailer makes the show look suitably large in scope, though you'll need to wait a few more months for it to arrive.
In the meantime, you'll soon be able to watch Dune: Part Two on Max (though we recommend catching this butt-kicking epic on a giant screen if it's still showing in a theater near you). The sequel is coming to the streaming service next week, on May 21.
It might be too early for a trailer for the second season of The Last of Us, but WBD has released the first official images. The shots of Joel (Pedro Pascal) and Ellie (Bella Ramsey) don't give much away, but fans of the second game in the series might recognize those fairy lights behind Joel's magnificent mane. The Last of Us will return on HBO and Max in 2025, hopefully on January 1.
AI's capabilities are growing at tremendous speed, and while that apparently warrants a ton of the United States' money for development, it doesn't seem to translate to a very obvious action: regulation. A bipartisan group of four senators, led by Majority Leader Chuck Schumer, has announced a legislative plan for AI that includes putting $32 billion toward research and development. But it passes off the responsibility of devising regulatory measures around areas such as job losses, discrimination and copyright infringement to Senate committees.
“It’s very hard to do regulations because AI is changing too quickly,” Schumer said in an interview published by The New York Times. Yet in March, the European Parliament approved wide-ranging legislation for regulating AI that scales the obligations of AI applications according to the risks and effects they could bring. The European Union said it hopes to "protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field."
Schumer seems uninterested in striking that balance, instead stating in the interview that investment in AI research and development "is sort of the American way — we are more entrepreneurial."
For absolutely no reason at all, and certainly not to invite speculation about why he's avoiding regulation: one of Schumer's daughters works as a senior policy manager for Amazon, and the other has worked for Meta (it's unclear if she still does). Furthermore, in May 2022, the New York Post reported that over 80 of Schumer's former employees held jobs in Big Tech at companies such as Google and Apple.
After years of rumors, Canon has confirmed that a flagship EOS R1 camera is in the works for its EOS line. The full-frame mirrorless camera is slated to arrive later this year and, while Canon hasn't revealed all the details just yet, it teased enough to whet your appetite. There's no word on pricing either, but you may need to dig deep into your wallet for this one.
The company says that the professional-grade camera will have an RF mount and offer improved video and still performance compared with the EOS R3. It will boast an upgraded image processing system that combines a fresh CMOS sensor, a new image processor called Digic Accelerator and the existing Digic X processor.
Canon says the system will be able to process a large volume of data at high speed and deliver advancements in autofocus and other areas. The company claims it's been able to combine the capabilities of the image processing system with its deep-learning tech to achieve "high-speed and high-accuracy subject recognition."
This powers a feature called Action Priority, which can, for instance, detect a player performing a certain action in a sports game (like shooting a ball) and identify them as the main subject for a shot. The system can then instantly shift the autofocus frame in that person's direction, helping to make sure the photographer doesn't miss key moments from a game.
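If you're curious how that might work under the hood, here's a rough sketch of the idea in Python. To be clear, this is purely illustrative and not Canon's algorithm: the detection and scoring are stand-ins, and the point is just how a "main subject" pick could drive where the autofocus frame goes.

```python
# Conceptual sketch of an "action priority" subject pick, not Canon's code.
from dataclasses import dataclass

@dataclass
class Subject:
    name: str
    action_score: float            # confidence the subject is doing the key action
    position: tuple[float, float]  # normalized (x, y) position in the frame

def pick_af_target(subjects: list[Subject], min_score: float = 0.5):
    """Return the frame position to focus on, or None if nobody qualifies."""
    candidates = [s for s in subjects if s.action_score >= min_score]
    if not candidates:
        return None
    main = max(candidates, key=lambda s: s.action_score)
    return main.position  # the camera would snap the AF frame here

players = [Subject("guard", 0.92, (0.4, 0.5)), Subject("center", 0.3, (0.7, 0.6))]
print(pick_af_target(players))  # -> (0.4, 0.5)
```

In a real camera, those action scores would presumably come from the deep-learning recognition system Canon describes, recomputed every frame.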
Canon claims the EOS R1 can track athletes during sporting events even if they momentarily drop out of the line of sight. The focus on sports in the initial announcement suggests the camera could be put to the test at this summer's Olympic Games in Paris.
In addition, Canon says it's bringing the image noise reduction feature that was initially built for PC software directly into the camera. It suggests this further improves image quality and can help users fulfill their creative goals.
Ahead of Global Accessibility Awareness Day this week, Apple is issuing its typical annual set of announcements around its assistive features. Many of these are useful for people with disabilities, but they have broader applications too. For instance, Personal Voice, which was released last year, helps preserve someone's speaking voice. It can be helpful to those who are at risk of losing their voice, or who have other reasons for wanting to retain their own vocal signature for loved ones in their absence. Today, Apple is bringing eye-tracking support to recent models of iPhones and iPads, as well as customizable vocal shortcuts, music haptics, vehicle motion cues and more.
Built-in eye-tracking for iPhones and iPads
The most intriguing feature of the set is the ability to use the front-facing camera on iPhones or iPads (at least those with the A12 chip or later) to navigate the software without additional hardware or accessories. With this enabled, people can look at their screen to move through elements like apps and menus, then linger on an item to select it.
That pause to select is something Apple calls Dwell Control, which has already been available elsewhere in the company's ecosystem, like in the Mac's accessibility settings. The setup and calibration process should only take a few seconds, and on-device AI is at work to understand your gaze. It'll also work with third-party apps at launch, since it's a layer in the OS, like AssistiveTouch. Since Apple already supported eye-tracking in iOS and iPadOS with connected eye-detection devices, the news today is the ability to do so without extra hardware.
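For a sense of the mechanics, here's a minimal sketch of dwell-based selection in Python. It's a conceptual illustration rather than Apple's implementation: gaze_target and select are hypothetical callables standing in for the gaze tracker and the UI layer.

```python
import time

# Minimal dwell-selection loop: if the gaze stays on the same element
# long enough, treat that as a click. Conceptual only, not Apple's code.
DWELL_SECONDS = 1.0

def run_dwell_loop(gaze_target, select, poll_interval=0.05):
    current, since = None, time.monotonic()
    while True:
        target = gaze_target()  # hypothetical: id of the element being looked at
        now = time.monotonic()
        if target != current:
            current, since = target, now  # gaze moved: restart the timer
        elif target is not None and now - since >= DWELL_SECONDS:
            select(target)                # lingered long enough: select it
            since = now                   # reset so it doesn't re-fire instantly
        time.sleep(poll_interval)
```

The interesting design choice is the dwell threshold: too short and users select things by accident just by reading the screen, too long and navigation feels sluggish.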
Vocal shortcuts for easier hands-free control
Apple is also working on improving the accessibility of its voice-based controls on iPhones and iPads. It again uses on-device AI to create personalized models for each person setting up a new vocal shortcut. You can set up a command for a single word or phrase, or even an utterance (like "Oy!" perhaps). Siri will understand these and perform your designated shortcut or task. You can have these launch apps or run a series of actions that you define in the Shortcuts app, and once they're set up, you won't have to get Siri's attention first.
Another improvement coming to vocal interactions is "Listen for Atypical Speech," which has iPhones and iPads use on-device machine learning to recognize speech patterns and customize their voice recognition around your unique way of vocalizing. This sounds similar to Google's Project Relate, which is also designed to help technology better understand those with speech impairments or atypical speech.
To build these tools, Apple worked with the Speech Accessibility Project at the Beckman Institute for Advanced Science and Technology at the University of Illinois Urbana-Champaign. The institute is also collaborating with other tech giants like Google and Amazon to further development in this space across their products.
Music haptics in Apple Music and other apps
For those who are deaf or hard of hearing, Apple is bringing haptics to music players on iPhone, starting with millions of songs in its own Music app. When enabled, Music Haptics will play taps, textures and specialized vibrations in tandem with the audio to bring a new layer of sensation. It'll be available as an API, so developers can bring greater accessibility to their apps, too.
Help in cars — motion sickness and CarPlay
Drivers with disabilities need better systems in their cars, and Apple is addressing some of the issues with its updates to CarPlay. Voice control and color filters are coming to the interface for vehicles, making it easier to control apps by talking and for those with visual impairments to see menus or alerts. To that end, CarPlay is also getting bold and large text support, as well as sound recognition for noises like sirens or honks. When the system identifies such a sound, it will display an alert at the bottom of the screen to let you know what it heard. This works similarly to Apple's existing sound recognition feature in other devices like the iPhone.
For those who get motion sickness while using their iPhones or iPads in moving vehicles, a new feature called Vehicle Motion Cues might alleviate some of that discomfort. Since motion sickness is based on a sensory conflict from looking at stationary content while being in a moving vehicle, the new feature is meant to better align the conflicting senses through onscreen dots. When enabled, these dots will line the four edges of your screen and sway in response to the motion it detects. If the car moves forward or accelerates, the dots will sway backwards as if in reaction to the increase in speed in that direction.
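As a toy illustration of that idea, the sway can be modeled as the acceleration signal flipped and clamped. The Python sketch below is conceptual only, not Apple's code; the accelerometer readings are assumed inputs in m/s^2.

```python
# Toy model of Vehicle Motion Cues: edge dots drift opposite to the
# vehicle's acceleration so what the eyes see matches what the inner
# ear feels. Conceptual illustration, not Apple's implementation.
MAX_OFFSET_PX = 12   # how far a dot may drift from its resting spot
SENSITIVITY = 4.0    # pixels of sway per m/s^2 of acceleration

def dot_offset(accel_forward: float) -> float:
    """Dots sway backwards when the car accelerates forward."""
    offset = -accel_forward * SENSITIVITY
    return max(-MAX_OFFSET_PX, min(MAX_OFFSET_PX, offset))

print(dot_offset(2.5))   # hard acceleration -> dots sway back (-10.0)
print(dot_offset(-1.0))  # braking -> dots sway forward (4.0)
```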
Other Apple Accessibility updates
There are plenty more features coming to the company's suite of products, including Live Captions in visionOS, a new Reader mode in Magnifier, support for multi-line braille and a virtual trackpad for those who use AssistiveTouch. It's not yet clear when all of these announced updates will roll out, though Apple has historically made these features available in upcoming versions of iOS. With its WWDC developer conference just a few weeks away, it's likely many of today's tools will officially arrive in the next version of iOS.
This might come as a shock to you, but the things people put on social media aren't always truthful — really blew your mind there, right? Because of this, it can be challenging for people to know what's real without context or expertise in a specific area. That's part of why many platforms use a fact-checking team to keep an eye (or, often, to at least look like they're keeping an eye) on what's getting shared. Now, Threads is getting its own fact-checking program, according to Adam Mosseri, the head of Instagram and de facto person in charge at Threads. He first shared the company's plans to do so in December.
Mosseri stated that Threads "recently" made it so that Meta's third-party fact-checkers could review and rate any inaccurate content on the platform. Before the shift, Meta was having fact-checks conducted on Facebook and Instagram and then matching "near-identical false content" that users shared on Threads. However, there's no indication of exactly when the program started or if it's global.
Then there's the matter of seeing how effective it really can be. Facebook and Instagram already had these dedicated fact-checkers, yet misinformation has run rampant across the platforms. Ahead of the 2024 Presidential election — and as ongoing elections and conflicts happen worldwide — is it too much to ask for some hardcore fact-checking from social media companies?
At last year's Google I/O developer conference, the company introduced Project Gameface, a hands-free gaming "mouse" that allows users to control a computer's cursor with movements of their head and facial gestures. This year, Google has announced that it has open-sourced more code for Project Gameface, allowing developers to build Android applications that can use the technology.
The tool relies on the phone's front camera to track facial expressions and head movements, which can be used to control a virtual cursor. A user could smile to "select" items onscreen, for instance, or raise their left eyebrow to go back to the home screen on an Android phone. In addition, users can set thresholds or gesture sizes for each expression, so that they can control how prominent their expressions should be to trigger a specific mouse action.
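Conceptually, that threshold mapping is straightforward. Here's a minimal Python sketch; to be clear, it's not Google's actual Gameface code. The expression names are illustrative (modeled on the blendshape-style labels face trackers emit), and the per-frame scores would come from a face tracker like the one the project open-sourced.

```python
# Conceptual gesture-to-action mapping with per-gesture thresholds.
# Not Google's Project Gameface code; names and scores are illustrative.
GESTURE_ACTIONS = {
    "mouthSmile": ("select", 0.7),       # smile to "click"
    "browOuterUpLeft": ("go_home", 0.8), # raise left eyebrow to go home
}

def dispatch_gestures(scores: dict[str, float]) -> list[str]:
    """Return the actions whose gesture score exceeds its threshold."""
    fired = []
    for gesture, (action, threshold) in GESTURE_ACTIONS.items():
        if scores.get(gesture, 0.0) >= threshold:
            fired.append(action)
    return fired

# A strong smile and a faint eyebrow raise only trigger "select".
print(dispatch_gestures({"mouthSmile": 0.9, "browOuterUpLeft": 0.4}))
```

Raising a threshold is how a user would demand a more exaggerated expression before the cursor reacts, which is exactly the knob Google describes.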
The company developed Project Gameface with gaming streamer Lance Carr, who has muscular dystrophy, a condition that weakens muscles. Carr used a head-tracking mouse to game before a fire destroyed his home, along with his expensive equipment. The early version of Project Gameface focused on gaming and used a webcam to detect facial expressions, though Google knew from the start that the technology had a lot of other potential uses.
For the tool's Android launch, Google teamed up with an Indian organization called Incluzza that supports people with disabilities. The partnership gave the company the chance to learn how Project Gameface can help people with disabilities further their studies, communicate with friends and family more easily and find jobs online. Google has released the project's open source code on GitHub and is hoping that more developers decide to "leverage it to build new experiences."
Google boss Sundar Pichai wrapped up the company's I/O developer conference by noting that its almost two-hour presentation had mentioned AI 121 times. It was everywhere.
Google’s newest AI model, Gemini 1.5 Flash, is built for speed and efficiency. The company said it created Flash because developers wanted a lighter, less expensive model than Gemini Pro to build AI-powered apps and services.
Google says it’ll double Gemini’s context window to two million tokens, enough to process two hours of video, 22 hours of audio, more than 60,000 lines of code or 1.4 million-plus words at the same time.
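For developers, checking whether an input actually fits in that window is easy to sketch with Google's google-generativeai Python SDK. Treat this as a sketch: the API key and file name are placeholders, and the two-million-token tier's availability may differ by model and account.

```python
# Sketch: count tokens before sending a huge input to Gemini 1.5 Flash.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-1.5-flash")

CONTEXT_WINDOW = 2_000_000  # tokens, per the announced expansion

long_document = open("transcript.txt").read()  # placeholder file
count = model.count_tokens(long_document)
if count.total_tokens <= CONTEXT_WINDOW:
    response = model.generate_content(["Summarize this:", long_document])
    print(response.text)
else:
    print(f"Too long: {count.total_tokens} tokens")
```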
But the bigger news is how the company is sewing AI into all the things you're already using. In Search, it'll be able to answer your complex questions (a la Copilot in Bing), but for now, you'll have to sign up for the company's Search Labs to try that out. AI-generated answers will also appear alongside typical search results, just in case the AI knows better.
Google Photos was already pretty smart at searching for specific images or videos, but with AI, Google is taking things to the next level. If you're a Google One subscriber in the US, you will be able to ask Google Photos a complex question, like "show me the best photo from each national park I've visited." You can also ask Google Photos to generate captions for you.
And if you have an Android device, Gemini is being integrated directly into it. Gemini will know which app, image or video you're running, and you'll be able to pull it up as an overlay and ask context-specific questions, like how to change settings or maybe even who's displayed on screen.
While these were the bigger beats, there was an awful lot to chew over. Check out all the headlines right here.
One of Google’s bigger projects is its visual multimodal AI assistant, currently called Project Astra. It taps into your smartphone (or smart glasses) camera and can contextually analyze and answer questions on the things it sees. Project Astra can offer silly wordplay suggestions, as well as identify and define the things it sees. A video demo shows Project Astra identifying the tweeter part of a speaker. It’s equal parts impressive and, well, familiar. We tested it out, right here.
Elon Musk continues to add policy after baffling policy.
The increasingly unhinged world of X (Twitter) now considers the term ‘cisgender’ a slur. Owner Elon Musk posted last June, to the delight of his unhingiest users, that “‘cis’ or ‘cisgender’ are considered slurs on this platform.” On Tuesday, X reportedly began posting an official warning. A quick reminder: It’s not a slur.
Ilya Sutskever announced on X, formerly Twitter, that he's leaving OpenAI almost a decade after he co-founded the company. He's confident OpenAI "will build [artificial general intelligence] that is both safe and beneficial" under the leadership of CEO Sam Altman, President Greg Brockman and CTO Mira Murati. While Sutskever and Altman praised each other in their farewell messages, the two were embroiled in the company's biggest scandal last year: Sutskever, who was a board member at the time, was involved in both of their dismissals.