Blink Outdoor 4 security cameras are up to half off right now

Anyone who wants to keep an eye on their perimeter or catch some nighttime trash panda action may want to check out this deal on Amazon. Bundles of Blink Outdoor 4 cameras are currently on sale, with the deepest discount going to a five-pack set: normally $400, it's down to $200. That matches the Prime member-only price we saw earlier this year, but this time even those who don't pay for Amazon's program can get the offer. Other bundles and Blink devices are discounted as part of a larger sale, too.

The Blink Outdoor 4 security cameras let users see, hear and talk with anyone who comes into view, and they send motion-detection alerts and live feeds to a connected smartphone. They can also send footage to an Echo Show smart display and take commands from other Alexa-enabled devices like an Echo Dot or Fire TV. Just note that Blink equipment isn't Google Assistant- or Siri-compatible, so these really only make sense for an Amazon-based smart home.

The Outdoor 4 is the latest generation of the camera, offering a wider field of view and better day and night image quality than the previous generation. During the day, it shoots 1080p video, and in the dark it uses infrared night vision. Each unit runs on a pair of AA batteries, which should power the camera for up to two years. A plug-in Sync Module that stays indoors is required to operate the Outdoor 4 cameras and, conveniently, is included in each bundle — as are enough batteries for the cameras, mounting kits and a power adapter for the Sync Module.

For those who just need to keep an eye on one area outside, there's the one-camera system, which also includes the Sync Module and other accessories. It's 40 percent off right now and down to an all-time low of $60. For longer battery life, the Outdoor 4 single-camera system can also be bundled with a battery pack that extends the run time from two years to four. That version is $80 after a 33 percent discount.

Amazon is also discounting its Blink-branded doorbells, floodlights and indoor cameras as part of the larger sale. Blanketing a home in Alexa's watchful eye just got a whole lot cheaper.

Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/blink-outdoor-4-security-cameras-are-up-to-half-off-right-now-155239715.html?src=rss

The first Dune: Prophecy teaser takes the action back by 10,000 years

This week, streaming services are joining linear networks in revealing some of the projects they've got coming up in an attempt to win over advertisers. After Prime Video stepped up to the plate on Tuesday, it was Warner Bros. Discovery's turn at bat on Wednesday. The company surprised many by dropping a teaser trailer for Dune: Prophecy, a six-episode Dune prequel series that's coming to Max this fall.

The spinoff is set 10,000 years before the events of the Dune movies. It follows two Harkonnen sisters who tackle a threat to humanity while setting up the sisterhood that will eventually become the Bene Gesserit. Dune: Prophecy is based on the novel Sisterhood of Dune by Brian Herbert and Kevin J. Anderson.

The series stars Emily Watson, Olivia Williams, Travis Fimmel, Jodhi May and the always-great Mark Strong. The trailer makes the show look suitably large in scope, though you'll need to wait a few more months for it to arrive.

In the meantime, you'll soon be able to watch Dune: Part Two on Max (though we recommend catching this butt-kicking epic on a giant screen if it's still showing in a theater near you). The sequel is coming to the streaming service next week, on May 21.

It might be too early for a trailer for the second season of The Last of Us, but WBD has released the first official images. The shots of Joel (Pedro Pascal) and Ellie (Bella Ramsey) don't give much away, but fans of the second game in the series might recognize those fairy lights behind Joel's magnificent mane. The Last of Us will return on HBO and Max in 2025, hopefully on January 1.


This article originally appeared on Engadget at https://www.engadget.com/the-first-dune-prophecy-teaser-takes-the-action-back-by-10000-years-152911407.html?src=rss

Chuck Schumer is dropping the ball on regulating AI

AI's capabilities are growing at tremendous speed, and while that apparently warrants a ton of the United States' money for development, it doesn't seem to translate into a more obvious action: regulation. A bipartisan group of four senators, led by Majority Leader Chuck Schumer, has announced a legislative plan for AI that includes putting $32 billion toward research and development. But it passes off the responsibility for devising regulatory measures around areas such as job elimination, discrimination and copyright infringement to Senate committees.

“It’s very hard to do regulations because AI is changing too quickly,” Schumer said in an interview published by The New York Times. Yet in March, the European Parliament approved wide-ranging AI legislation that scales the obligations placed on AI applications according to the risks and effects they could bring. The European Union said it hopes to "protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field."

Schumer seems uninterested in striking that balance, instead stating in the interview that investment in AI research and development "is sort of the American way — we are more entrepreneurial."

For absolutely no reason at all, and certainly not to speculate on why he's avoiding regulation: in case you didn't know, one of Schumer's daughters works as a senior policy manager for Amazon, and the other has worked for Meta (it's unclear if she still does). Furthermore, in May 2022, the New York Post reported that over 80 of Schumer's former employees held jobs in Big Tech at places such as Google and Apple.

This article originally appeared on Engadget at https://www.engadget.com/chuck-schumer-is-dropping-the-ball-on-regulating-ai-144957345.html?src=rss

Canon confirms its long-rumored flagship EOS R1 is coming later this year

After years of rumors, Canon has confirmed that a flagship EOS R1 camera is in the works for its EOS line. The full-frame mirrorless camera is slated to arrive later this year and, while Canon hasn't revealed all the details just yet, it teased enough to whet your appetite. There's no word on pricing either, but you may need to dig deep into your wallet for this one.

The company says that the professional-grade camera will have an RF mount and offer improved video and still performance compared with the EOS R3. It will boast an upgraded image processing system that combines a fresh CMOS sensor, a new image processor called Digic Accelerator and the existing Digic X processor.

Canon says the system will be able to process a large volume of data at high speed and deliver advancements in autofocus and other areas. The company claims it's been able to combine the capabilities of the image processing system with its deep-learning tech to achieve "high-speed and high-accuracy subject recognition."

This powers a feature called Action Priority, which can, for instance, detect a player performing a certain action in a sports game (like shooting a ball) and identify them as the main subject for a shot. The system can then instantly shift the autofocus frame in that player's direction to help make sure the photographer doesn't miss key moments from a game.

Canon claims the EOS R1 can track athletes during sporting events even if they're momentarily out of its line of sight. The focus on sports in the initial announcement suggests the camera could be put to the test at this summer's Olympic Games in Paris.

In addition, Canon says it's bringing the image noise reduction feature that was initially built for PC software directly into the camera. It suggests this further improves image quality and can help users fulfill their creative goals.

This article originally appeared on Engadget at https://www.engadget.com/canon-confirms-its-long-rumored-flagship-eos-r1-is-coming-later-this-year-142838188.html?src=rss

Apple brings eye-tracking to recent iPhones and iPads

Ahead of Global Accessibility Awareness Day this week, Apple is issuing its typical annual set of announcements around its assistive features. Many of these are useful for people with disabilities, but they have broader applications as well. For instance, Personal Voice, which was released last year, helps preserve someone's speaking voice. It can be helpful to those who are at risk of losing their voice, or who have other reasons for wanting to retain their own vocal signature for loved ones in their absence. Today, Apple is bringing eye-tracking support to recent models of iPhones and iPads, as well as customizable vocal shortcuts, music haptics, vehicle motion cues and more.

Built-in eye-tracking for iPhones and iPads

The most intriguing feature of the set is the ability to use the front-facing camera on iPhones or iPads (at least those with the A12 chip or later) to navigate the software without additional hardware or accessories. With this enabled, people can look at their screen to move through elements like apps and menus, then linger on an item to select it. 

That pause-to-select gesture is something Apple calls Dwell Control, which is already available elsewhere in the company's ecosystem, such as in the Mac's accessibility settings. The setup and calibration process should only take a few seconds, and on-device AI is at work to understand your gaze. It'll also work with third-party apps from launch, since it's a layer in the OS like AssistiveTouch. And since Apple already supported eye-tracking on iOS and iPadOS for connected eye-detection devices, the news today is the ability to do so without extra hardware.
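
To make the dwell idea concrete, here's a generic sketch of how dwell selection tends to work (this illustrates the general technique, not Apple's implementation; the one-second dwell time is an assumption):

```python
# Generic dwell-control sketch: an element is "selected" once the gaze
# rests on it continuously for dwell_time seconds. The timing value is
# an illustrative assumption, not Apple's.
import time

class DwellSelector:
    def __init__(self, dwell_time=1.0):
        self.dwell_time = dwell_time
        self.current = None  # element the gaze is currently on
        self.since = None    # when the gaze landed on it

    def on_gaze(self, element, now=None):
        """Feed the element under the gaze each frame; returns a selection or None."""
        now = time.monotonic() if now is None else now
        if element != self.current:
            self.current, self.since = element, now  # gaze moved: restart the timer
            return None
        if element is not None and now - self.since >= self.dwell_time:
            self.since = now  # reset so a held gaze doesn't re-fire every frame
            return element
        return None

selector = DwellSelector()
selector.on_gaze("Mail", now=0.0)         # gaze lands on the Mail icon
print(selector.on_gaze("Mail", now=1.2))  # still there after 1.2s -> "Mail"
```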

Vocal shortcuts for easier hands-free control

Apple is also working on improving the accessibility of its voice-based controls on iPhones and iPads. It again uses on-device AI to create personalized models for each person setting up a new vocal shortcut. You can set up a command for a single word or phrase, or even an utterance (like "Oy!" perhaps). Siri will understand these and perform your designated shortcut or task. You can have these launch apps or run a series of actions that you define in the Shortcuts app, and once they're set up, you won't have to get Siri's attention first.

Another improvement coming to vocal interactions is "Listen for Atypical Speech," which has iPhones and iPads use on-device machine learning to recognize speech patterns and customize their voice recognition around your unique way of vocalizing. This sounds similar to Google's Project Relate, which is also designed to help technology better understand those with speech impairments or atypical speech.

To build these tools, Apple worked with the Speech Accessibility Project at the Beckman Institute for Advanced Science and Technology at the University of Illinois Urbana-Champaign. The institute is also collaborating with other tech giants like Google and Amazon to further development in this space across their products.

Music haptics in Apple Music and other apps

For those who are deaf or hard of hearing, Apple is bringing haptics to music players on iPhone, starting with millions of songs on its own Music app. When enabled, music haptics will play taps, textures and specialized vibrations in tandem with the audio to bring a new layer of sensation. It'll be available as an API so developers can bring greater accessibility to their apps, too. 

Help in cars — motion sickness and CarPlay

Drivers with disabilities need better systems in their cars, and Apple is addressing some of the issues with its updates to CarPlay. Voice control and color filters are coming to the interface for vehicles, making it easier to control apps by talking and for those with visual impairments to see menus or alerts. To that end, CarPlay is also getting bold and large text support, as well as sound recognition for noises like sirens or honks. When the system identifies such a sound, it will display an alert at the bottom of the screen to let you know what it heard. This works similarly to Apple's existing sound recognition feature in other devices like the iPhone.


For those who get motion sickness while using their iPhones or iPads in moving vehicles, a new feature called Vehicle Motion Cues might alleviate some of that discomfort. Since motion sickness stems from a sensory conflict between looking at stationary content and feeling the movement of a vehicle, the new feature is meant to bring those senses into alignment with onscreen dots. When enabled, the dots line the four edges of your screen and sway in response to the motion the device detects. If the car moves forward or accelerates, the dots sway backward as if in reaction to the increase in speed in that direction.
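
As a rough illustration of that counter-sway idea, here's a minimal Python sketch (not Apple's implementation; the gain and smoothing values are assumptions) that turns an accelerometer sample into an onscreen offset for the dots:

```python
# Sketch of a vehicle-motion-cue effect: edge dots are offset opposite
# to the measured acceleration so that what the eyes see roughly matches
# what the inner ear feels. Gain and smoothing values are illustrative
# assumptions, not Apple's.
class MotionCues:
    def __init__(self, pixels_per_ms2=6.0, smoothing=0.2):
        self.pixels_per_ms2 = pixels_per_ms2  # assumed gain: pixels per m/s^2
        self.smoothing = smoothing            # low-pass factor to damp jitter
        self.offset_x = 0.0
        self.offset_y = 0.0

    def update(self, accel_x, accel_y):
        """Feed one accelerometer sample (m/s^2); returns the dot offset in pixels."""
        # Dots drift opposite to the acceleration, like scenery outside a window.
        target_x = -accel_x * self.pixels_per_ms2
        target_y = -accel_y * self.pixels_per_ms2
        # Exponential smoothing keeps the dots from twitching on small bumps.
        self.offset_x += self.smoothing * (target_x - self.offset_x)
        self.offset_y += self.smoothing * (target_y - self.offset_y)
        return self.offset_x, self.offset_y

cues = MotionCues()
print(cues.update(0.0, 2.5))  # car accelerates forward -> dots drift backward
```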

Other Apple Accessibility updates

There are plenty more features coming to the company's suite of products, including Live Captions in visionOS, a new Reader mode in Magnifier, support for multi-line braille and a virtual trackpad for those who use AssistiveTouch. It's not yet clear when all of these announced updates will roll out, though Apple has historically made these features available in upcoming versions of iOS. With its developer conference WWDC just a few weeks away, it's likely many of today's tools will be officially released with the next iOS.

This article originally appeared on Engadget at https://www.engadget.com/apple-brings-eye-tracking-to-recent-iphones-and-ipads-140012990.html?src=rss

Threads gets its own fact-checking program

This might come as a shock to you, but the things people put on social media aren't always truthful — really blew your mind there, right? Because of this, it can be challenging for people to know what's real without context or expertise in a specific area. That's part of why many platforms use a fact-checking team to keep an eye (or at least to look like they're keeping an eye) on what's getting shared. Now Threads is getting its own fact-checking program, according to Adam Mosseri, head of Instagram and de facto person in charge at Threads. He first shared the company's plans to do so in December.

Mosseri stated that Threads "recently" made it so that Meta's third-party fact-checkers could review and rate any inaccurate content on the platform. Before the shift, Meta was having fact-checks conducted on Facebook and Instagram and then matching "near-identical false content" that users shared on Threads. However, there's no indication of exactly when the program started or if it's global.

Then there's the matter of seeing how effective it can really be. Facebook and Instagram already had these dedicated fact-checkers, yet misinformation has run rampant across both platforms. Ahead of the 2024 US presidential election — and as elections and conflicts unfold worldwide — is it too much to ask for some hardcore fact-checking from social media companies?

This article originally appeared on Engadget at https://www.engadget.com/threads-gets-its-own-fact-checking-program-130013115.html?src=rss

Google's Project Gameface hands-free 'mouse' launches on Android

At last year's Google I/O developer conference, the company introduced Project Gameface, a hands-free gaming "mouse" that allows users to control a computer's cursor with movements of their head and facial gestures. This year, Google has announced that it has open-sourced more code for Project Gameface, allowing developers to build Android applications that can use the technology. 

The tool relies on the phone's front camera to track facial expressions and head movements, which can be used to control a virtual cursor. A user could smile to "select" items onscreen, for instance, or raise their left eyebrow to go back to the home screen on an Android phone. In addition, users can set thresholds or gesture sizes for each expression, so that they can control how prominent their expressions should be to trigger a specific mouse action. 
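
To make the threshold idea concrete, here's a hedged Python sketch of that mapping (the gesture names, scores and actions are placeholders; in the real project the per-frame confidence values come from a face-landmark model):

```python
# Sketch of per-gesture thresholds: each expression yields a 0..1
# confidence score every frame, and an action fires only when the score
# crosses the user-tuned threshold -- then not again until it resets.
# Names and bindings are illustrative, not Project Gameface's API.
class GestureMapper:
    def __init__(self, bindings):
        self.bindings = bindings  # {gesture: (threshold, action_callable)}
        self.active = set()       # gestures currently held past their threshold

    def on_frame(self, scores):
        """scores: {gesture: confidence in [0, 1]} for one camera frame."""
        for gesture, (threshold, action) in self.bindings.items():
            score = scores.get(gesture, 0.0)
            if score >= threshold and gesture not in self.active:
                self.active.add(gesture)  # rising edge: trigger exactly once
                action()
            elif score < threshold:
                self.active.discard(gesture)  # released; may fire again later

mapper = GestureMapper({
    "smile": (0.7, lambda: print("select item")),
    "raise_left_eyebrow": (0.5, lambda: print("go to home screen")),
})
mapper.on_frame({"smile": 0.85})  # fires "select item"
mapper.on_frame({"smile": 0.90})  # still held: no repeat
mapper.on_frame({"smile": 0.20, "raise_left_eyebrow": 0.6})  # fires "go to home screen"
```

Raising a gesture's threshold makes the mapper demand a more pronounced expression before acting, which is what the gesture-size setting controls.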

The company developed Project Gameface with gaming streamer Lance Carr, who has muscular dystrophy, a condition that weakens muscles. Carr used a head-tracking mouse to game before a fire destroyed his home, along with his expensive equipment. The early version of Project Gameface was focused on gaming and used a webcam to detect facial expressions, though Google knew from the start that the technology had plenty of other potential uses.

For the tool's Android launch, Google teamed up with an Indian organization called Incluzza that supports people with disabilities. The partnership gave the company the chance to learn how Project Gameface can help people with disabilities further their studies, communicate with friends and family more easily and find jobs online. Google has released the project's open source code on GitHub and is hoping that more developers decide to "leverage it to build new experiences."

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/googles-project-gameface-hands-free-mouse-launches-on-android-123029158.html?src=rss

4th Electronics Supply Chain Summit: Experts Highlight Opportunities to Grow the Security Products Industry in India

  • Various ministries and associated departments have been urged via PPO not to purchase security and related products from brands with a long history of security breaches and data leaks.
  • The global CCTV market is worth around ₹2 lakh crore, is growing at a rate of 17 percent and is expected to surpass $100 billion.
  • India has the potential to become the world's second-largest surveillance market by 2030.

The Morning After: The biggest news from Google's I/O keynote

Google boss Sundar Pichai wrapped up the company’s I/O developer conference by noting its almost-two-hour presentation had mentioned AI 121 times. It was everywhere.

Google’s newest AI model, Gemini 1.5 Flash, is built for speed and efficiency. The company said it created Flash because developers wanted a lighter, less expensive model than Gemini Pro to build AI-powered apps and services.
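
For developers, the appeal is that Flash is essentially a cheaper, faster model string in the same API. A minimal sketch using Google’s google-generativeai Python SDK looks something like this (the API key is a placeholder you’d supply yourself):

```python
# Minimal sketch: calling Gemini 1.5 Flash via the google-generativeai
# SDK (pip install google-generativeai). The key is a placeholder;
# swapping the model string for "gemini-1.5-pro" selects the heavier model.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential
model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content("Summarize the Google I/O keynote in one line.")
print(response.text)
```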

Google says it’ll double Gemini’s context window to two million tokens, enough to process two hours of video, 22 hours of audio, more than 60,000 lines of code or 1.4 million-plus words at the same time.

But the bigger news is how the company is sewing AI into all the things you’re already using. With Search, it’ll be able to answer your complex questions (a la Copilot in Bing), but for now, you’ll have to sign up for the company’s Search Labs to try that out. AI-generated answers will also appear alongside typical search results, just in case the AI knows better.

Google Photos was already pretty smart at searching for specific images or videos, but with AI, Google is taking things to the next level. If you’re a Google One subscriber in the US, you’ll be able to ask Google Photos a complex question, like “Show me the best photo from each national park I’ve visited.” You can also ask Google Photos to generate captions for you.

And if you have an Android phone, Gemini is being integrated directly into the device. Gemini will know which app, image or video you’re viewing, and you’ll be able to pull it up as an overlay and ask context-specific questions, like how to change settings or maybe even who’s displayed on screen.

While these were the bigger beats, there was an awful lot to chew over. Check out all the headlines right here.

— Mat Smith

The biggest stories you might have missed

Google wants you to relax and have a natural chat with Gemini Live

Google Pixel 8a review

Google unveils Veo and Imagen 3, its latest AI media creation models

You can get these reports delivered daily direct to your inbox. Subscribe right here!

Google reveals its visual AI assistant, Project Astra

Full of potential.


One of Google’s bigger projects is its visual multimodal AI assistant, currently called Project Astra. It taps into your smartphone (or smart glasses) camera and can contextually analyze and answer questions on the things it sees. Project Astra can offer silly wordplay suggestions, as well as identify and define the things it sees. A video demo shows Project Astra identifying the tweeter part of a speaker. It’s equal parts impressive and, well, familiar. We tested it out, right here.

Continue reading.

X now treats the term cisgender as a slur

Elon Musk continues to add policy after baffling policy.

The increasingly unhinged world of X (Twitter) now considers the term ‘cisgender’ a slur. Owner Elon Musk posted last June, to the delight of his unhingiest users, that “‘cis’ or ‘cisgender’ are considered slurs on this platform.” On Tuesday, X reportedly began posting an official warning. A quick reminder: It’s not a slur.

Continue reading.

OpenAI co-founder Ilya Sutskever is leaving the company

He’s moving to a new project.

Ilya Sutskever announced on X, formerly Twitter, that he’s leaving OpenAI almost a decade after he co-founded the company. He’s confident OpenAI “will build [artificial general intelligence] that is both safe and beneficial” under the leadership of CEO Sam Altman, President Greg Brockman and CTO Mira Murati. While Sutskever and Altman praised each other in their farewell messages, the two were embroiled in the company’s biggest scandal last year: Sutskever, who was a board member at the time, was involved in both of their dismissals.

Continue reading.

This article originally appeared on Engadget at https://www.engadget.com/the-morning-after-the-biggest-news-from-googles-io-keynote-111531702.html?src=rss

Google’s smart glasses can now process video and speech simultaneously


At Google I/O 2024, DeepMind co-founder Demis Hassabis introduced the company's latest work on an advanced AI assistant known as Project Astra. The new technology, built on the latest Gemini model, aims to make AI interactions feel more natural and intuitive. Astra processes video and speech simultaneously, organizing them into a timeline of events for efficient recall.
