After years of rumors, Canon has confirmed that a flagship EOS R1 camera is in the works. The full-frame mirrorless camera is slated to arrive later this year and, while Canon hasn't revealed all the details just yet, it teased just enough to whet your appetite. There's no indication of how much the EOS R1 will cost either, but you may need to dig deep into your wallet for this one.
The company says that the professional-grade camera will have an RF mount and offer improved video and still performance compared with the EOS R3. It will boast an upgraded image processing system that combines a fresh CMOS sensor, a new image processor called Digic Accelerator and the existing Digic X processor.
Canon says the system will be able to process a large volume of data at high speed and deliver advancements in autofocus and other areas. The company claims it's been able to combine the capabilities of the image processing system with its deep-learning tech to achieve "high-speed and high-accuracy subject recognition."
This powers a feature called Action Priority, which can, for instance, detect a player carrying out a certain action in a sports game (like shooting a ball) and identify them as the main subject of a shot. The system can then instantly shift the autofocus frame in that person's direction to help make sure the photographer doesn't miss key moments from a game.
Canon claims the EOS R1 can track athletes during sporting events even if they're momentarily out of sight. The focus on sports in the initial announcement suggests the camera could be put to the test at this summer's Olympic Games in Paris.
In addition, Canon says it's bringing the image noise reduction feature that was initially built for PC software directly into the camera. It suggests this further improves image quality and can help users fulfill their creative goals.
Ahead of Global Accessibility Awareness Day this week, Apple is issuing its typical annual set of announcements around its assistive features. Many of these are useful for people with disabilities, but they have broader applications as well. For instance, Personal Voice, which was released last year, helps preserve someone's speaking voice. It can be helpful to those who are at risk of losing their voice, or who have other reasons for wanting to retain their own vocal signature for loved ones in their absence. Today, Apple is bringing eye-tracking support to recent models of iPhones and iPads, as well as customizable vocal shortcuts, music haptics, vehicle motion cues and more.
Built-in eye-tracking for iPhones and iPads
The most intriguing feature of the set is the ability to use the front-facing camera on iPhones or iPads (at least those with the A12 chip or later) to navigate the software without additional hardware or accessories. With this enabled, people can look at their screen to move through elements like apps and menus, then linger on an item to select it.
That pause to select is something Apple calls Dwell Control, which is already available elsewhere in the company's ecosystem, such as in the Mac's accessibility settings. The setup and calibration process should only take a few seconds, and on-device AI is at work to understand your gaze. It'll also work with third-party apps from launch, since it's a layer in the OS like AssistiveTouch. Since Apple already supported eye-tracking in iOS and iPadOS for connected eye-detection devices, the news today is the ability to do so without extra hardware.
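Apple hasn't said how Dwell Control is implemented under the hood, but the basic linger-to-select idea is simple to sketch. Here's a minimal illustration in Python; the dwell time, radius and gaze source are placeholder assumptions of mine, not anything Apple has documented:

```python
import time

DWELL_SECONDS = 1.0  # assumed: how long a gaze must linger to count as a selection
DWELL_RADIUS = 40.0  # assumed: how far (in pixels) the gaze may wander and still "linger"

class DwellSelector:
    """Fires a callback when gaze samples stay near one spot long enough."""

    def __init__(self, on_select):
        self.on_select = on_select
        self.anchor = None  # (x, y) where the current dwell started
        self.start = None   # when the current dwell started
        self.fired = False  # avoid re-firing for the same dwell

    def update(self, x, y):
        now = time.monotonic()
        moved = (
            self.anchor is None
            or (x - self.anchor[0]) ** 2 + (y - self.anchor[1]) ** 2 > DWELL_RADIUS ** 2
        )
        if moved:
            # Gaze jumped somewhere new: restart the dwell timer there.
            self.anchor, self.start, self.fired = (x, y), now, False
        elif not self.fired and now - self.start >= DWELL_SECONDS:
            self.fired = True
            self.on_select(*self.anchor)

# Feed it gaze samples from whatever eye tracker you have; here the gaze holds still.
selector = DwellSelector(on_select=lambda x, y: print(f"select at ({x}, {y})"))
for _ in range(5):
    selector.update(100, 100)
    time.sleep(0.25)
```

The real feature presumably snaps to UI elements rather than raw coordinates, but a timer-plus-radius loop like this is the gist of dwell selection.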
Vocal shortcuts for easier hands-free control
Apple is also working on improving the accessibility of its voice-based controls on iPhones and iPads. It again uses on-device AI to create personalized models for each person setting up a new vocal shortcut. You can set up a command for a single word or phrase, or even an utterance (like "Oy!" perhaps). Siri will understand these and perform your designated shortcut or task. You can have these launch apps or run a series of actions that you define in the Shortcuts app, and once set up, you won't have to first ask Siri to be ready.
Another improvement coming to vocal interactions is "Listen for Atypical Speech," which has iPhones and iPads use on-device machine learning to recognize speech patterns and customize their voice recognition around your unique way of vocalizing. This sounds similar to Google's Project Relate, which is also designed to help technology better understand those with speech impairments or atypical speech.
To build these tools, Apple worked with the Speech Accessibility Project at the Beckman Institute for Advanced Science and Technology at the University of Illinois Urbana-Champaign. The institute is also collaborating with other tech giants like Google and Amazon to further development in this space across their products.
Music haptics in Apple Music and other apps
For those who are deaf or hard of hearing, Apple is bringing haptics to music players on iPhone, starting with millions of songs on its own Music app. When enabled, music haptics will play taps, textures and specialized vibrations in tandem with the audio to bring a new layer of sensation. It'll be available as an API so developers can bring greater accessibility to their apps, too.
Help in cars — motion sickness and CarPlay
Drivers with disabilities need better systems in their cars, and Apple is addressing some of the issues with its updates to CarPlay. Voice control and color filters are coming to the interface for vehicles, making it easier to control apps by talking and for those with visual impairments to see menus or alerts. To that end, CarPlay is also getting bold and large text support, as well as sound recognition for noises like sirens or honks. When the system identifies such a sound, it will display an alert at the bottom of the screen to let you know what it heard. This works similarly to Apple's existing sound recognition feature in other devices like the iPhone.
For those who get motion sickness while using their iPhones or iPads in moving vehicles, a new feature called Vehicle Motion Cues might alleviate some of that discomfort. Since motion sickness is based on a sensory conflict from looking at stationary content while being in a moving vehicle, the new feature is meant to better align the conflicting senses through onscreen dots. When enabled, these dots will line the four edges of your screen and sway in response to the motion it detects. If the car moves forward or accelerates, the dots will sway backwards as if in reaction to the increase in speed in that direction.
Other Apple Accessibility updates
There are plenty more features coming to the company's suite of products, including Live Captions in visionOS, a new Reader mode in Magnifier, support for multi-line braille and a virtual trackpad for those who use AssistiveTouch. It's not yet clear when all of these announced updates will roll out, though Apple has historically made these features available in upcoming versions of iOS. With its developer conference WWDC just a few weeks away, it's likely many of today's tools will be officially released with the next iOS.
This might come as a shock to you, but the things people put on social media aren't always truthful — really blew your mind there, right? Because of this, it can be challenging for people to know what's real without context or expertise in a specific area. That's part of why many platforms use a fact-checking team to keep an eye (or at least look like they're keeping an eye) on what's getting shared. Now, Threads is getting its own fact-checking program, Adam Mosseri, head of Instagram and de facto person in charge at Threads, announced. He first shared the company's plans to do so in December.
Mosseri stated that Threads "recently" made it so that Meta's third-party fact-checkers could review and rate any inaccurate content on the platform. Before the shift, Meta was having fact-checks conducted on Facebook and Instagram and then matching "near-identical false content" that users shared on Threads. However, there's no indication of exactly when the program started or if it's global.
Then there's the matter of seeing how effective it really can be. Facebook and Instagram already had these dedicated fact-checkers, yet misinformation has run rampant across the platforms. Ahead of the 2024 Presidential election — and as ongoing elections and conflicts happen worldwide — is it too much to ask for some hardcore fact-checking from social media companies?
At last year's Google I/O developer conference, the company introduced Project Gameface, a hands-free gaming "mouse" that allows users to control a computer's cursor with movements of their head and facial gestures. This year, Google has announced that it has open-sourced more code for Project Gameface, allowing developers to build Android applications that can use the technology.
The tool relies on the phone's front camera to track facial expressions and head movements, which can be used to control a virtual cursor. A user could smile to "select" items onscreen, for instance, or raise their left eyebrow to go back to the home screen on an Android phone. In addition, users can set thresholds or gesture sizes for each expression, controlling how pronounced an expression must be to trigger a specific mouse action.
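Google's released code is on GitHub, but as a rough illustration of how such thresholds might work, here's a minimal Python sketch using MediaPipe's Face Landmarker, the face-tracking library Project Gameface is built on. It reads blendshape scores from a single frame and compares them against per-gesture thresholds; the model path, threshold values and gesture mapping are placeholders of mine, not Gameface's actual configuration:

```python
# pip install mediapipe, and download MediaPipe's face_landmarker.task model file
import mediapipe as mp
from mediapipe.tasks import python as mp_python
from mediapipe.tasks.python import vision

# Hypothetical user-tunable thresholds: how strong each expression's
# blendshape score (0.0-1.0) must be to count as a deliberate gesture.
GESTURE_THRESHOLDS = {
    "mouthSmileLeft": 0.6,   # smile -> "select"
    "browOuterUpLeft": 0.5,  # raise left eyebrow -> "go home"
}

options = vision.FaceLandmarkerOptions(
    base_options=mp_python.BaseOptions(model_asset_path="face_landmarker.task"),
    output_face_blendshapes=True,
    num_faces=1,
)
landmarker = vision.FaceLandmarker.create_from_options(options)

frame = mp.Image.create_from_file("frame.png")  # one camera frame, for brevity
result = landmarker.detect(frame)

if result.face_blendshapes:
    for category in result.face_blendshapes[0]:
        threshold = GESTURE_THRESHOLDS.get(category.category_name)
        if threshold is not None and category.score >= threshold:
            print(f"trigger: {category.category_name} (score {category.score:.2f})")
```

Lowering a threshold makes a gesture easier to trigger, which is the "gesture size" tuning described above.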
The company developed Project Gameface with gaming streamer Lance Carr, who has muscular dystrophy, a condition that weakens his muscles. Carr used a head-tracking mouse to game before a fire destroyed his home, along with his expensive equipment. The early version of Project Gameface was focused on gaming and used a webcam to detect facial expressions, though Google had known from the start that it had a lot of other potential uses.
For the tool's Android launch, Google teamed up with an Indian organization called Incluzza that supports people with disabilities. The partnership gave the company the chance to learn how Project Gameface can help people with disabilities further their studies, communicate with friends and family more easily and find jobs online. Google has released the project's open-source code on GitHub and is hoping that more developers decide to "leverage it to build new experiences."
4th Electronics Supply Chain Summit: Experts Highlight Opportunities to Grow the Security Products Industry in India
Various ministries and associated departments were urged via PPO not to purchase security and related products from brands with a long history of security breaches and data leakage.
The global CCTV market is worth around ₹2 lakh crore, is growing at a rate of 17 percent and is expected to reach more than $100 billion.
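Those figures mix rupee and dollar units, so here's a quick sanity check; the arithmetic is mine and the exchange rate (about ₹83 to the dollar) is an assumption, not something from the summit:

```python
# Convert the rupee figure to dollars and see when 17% annual growth passes $100B.
INR_PER_USD = 83  # assumed exchange rate
market_inr = 2 * 100_000 * 10_000_000  # 2 lakh crore rupees = 2 trillion INR
market_usd = market_inr / INR_PER_USD
print(f"current size: ~${market_usd / 1e9:.0f}B")  # ~$24B

size, years = market_usd, 0
while size < 100e9:
    size *= 1.17  # 17 percent annual growth
    years += 1
print(f"passes $100B after ~{years} years")  # roughly a decade at this rate
```

So the two numbers are consistent: a ₹2 lakh crore market growing at 17 percent a year plausibly heads past $100 billion within about a decade.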
In the surveillance industry, India has the potential to become the second-largest market in the world by 2030.
Google boss Sundar Pichai wrapped up the company's I/O developer conference by noting its almost-two-hour presentation had mentioned AI 121 times. It was everywhere.
Google’s newest AI model, Gemini 1.5 Flash, is built for speed and efficiency. The company said it created Flash because developers wanted a lighter, less expensive model than Gemini Pro to build AI-powered apps and services.
Google says it’ll double Gemini’s context window to two million tokens, enough to process two hours of video, 22 hours of audio, more than 60,000 lines of code or 1.4 million-plus words at the same time.
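For a rough sense of scale, here's some quick back-of-envelope math (mine, not Google's) on the per-unit token rates those capacities imply:

```python
# Implied token rates from the quoted 2M-token context window.
CONTEXT = 2_000_000  # tokens

video_seconds = 2 * 3600   # two hours of video
audio_seconds = 22 * 3600  # 22 hours of audio
words = 1_400_000          # "1.4 million-plus words"

print(f"~{CONTEXT / video_seconds:.0f} tokens per second of video")  # ~278
print(f"~{CONTEXT / audio_seconds:.0f} tokens per second of audio")  # ~25
print(f"~{CONTEXT / words:.2f} tokens per word of text")             # ~1.43
```

In other words, video is by far the most token-hungry input, which is why two hours of it fills the same window as 22 hours of audio.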
But the bigger news is how the company is sewing AI into all the things you're already using. With search, it'll be able to answer your complex questions (a la Copilot in Bing), but for now, you'll have to sign up for the company's Search Labs to try that out. AI-generated answers will also appear alongside typical search results, just in case the AI knows better.
Google Photos was already pretty smart at searching for specific images or videos, but with AI, Google is taking things to the next level. If you're a Google One subscriber in the US, you will be able to ask Google Photos a complex question, like "show me the best photo from each national park I've visited." You can also ask Google Photos to generate captions for you.
And if you have an Android device, Gemini is being integrated directly into it. Gemini will know the app, image or video you're running, and you'll be able to pull it up as an overlay and ask it context-specific questions, like how to change settings or maybe even who's displayed on screen.
While these were the bigger beats, there was an awful lot to chew over. Check out all the headlines right here.
One of Google’s bigger projects is its visual multimodal AI assistant, currently called Project Astra. It taps into your smartphone (or smart glasses) camera and can contextually analyze and answer questions on the things it sees. Project Astra can offer silly wordplay suggestions, as well as identify and define the things it sees. A video demo shows Project Astra identifying the tweeter part of a speaker. It’s equal parts impressive and, well, familiar. We tested it out, right here.
Elon Musk continues to add policy after baffling policy.
The increasingly unhinged world of X (Twitter) now considers the term ‘cisgender’ a slur. Owner Elon Musk posted last June, to the delight of his unhingiest users, that “‘cis’ or ‘cisgender’ are considered slurs on this platform.” On Tuesday, X reportedly began posting an official warning. A quick reminder: It’s not a slur.
Ilya Sutskever announced on X, formerly Twitter, that he's leaving OpenAI almost a decade after he co-founded the company. He's confident OpenAI "will build [artificial general intelligence] that is both safe and beneficial" under the leadership of CEO Sam Altman, President Greg Brockman and CTO Mira Murati. While Sutskever and Altman praised each other in their farewell messages, the two were embroiled in the company's biggest scandal last year. Sutskever, who was a board member then, was involved in both of their dismissals.
Google's Smart Glasses Can Now Process Video and Speech Simultaneously
At Google I/O 2024, DeepMind co-founder Demis Hassabis introduced the company's latest work on an advanced AI assistant known as ASTRA. This new technology, built on the latest Gemini model, aims to make AI interactions feel more natural and intuitive. ASTRA processes video and speech simultaneously, organizing them into a timeline of events for efficient recall.
South Korea Looking to Invest $470bn to Form Semiconductor Mega-Cluster Outside Seoul
In an effort to strengthen the country's semiconductor industry, South Korea is aiming to provide 10 trillion won ($7.3 billion) in incentives. According to Finance Minister Choi Sang-mok, the government is in active discussions with the country's senior leaders to allocate billions of dollars in funding for the program.
Qorvo's QPC7330 Enhances CATV Networks with Simplified Installation and Versatile Control
Qorvo has introduced the industry’s first single-chip variable inverse cable equalizer, the QPC7330. This IC is designed specifically for CATV application developers aiming to upgrade their networks to the advanced DOCSIS 4.0 standard, facilitating symmetrical multi-gigabit speeds over cable’s hybrid fiber coax (HFC) networks.
Ilya Sutskever has announced on X, formerly known as Twitter, that he's leaving OpenAI almost a decade after he co-founded the company. He said he's confident that OpenAI "will build [artificial general intelligence] that is both safe and beneficial" under the leadership of CEO Sam Altman, President Greg Brockman and CTO Mira Murati. In his own post about Sutskever's departure, Altman called him "one of the greatest minds of our generation" and credited him for his work with the company. Jakub Pachocki, OpenAI's previous Director of Research who headed the development of GPT-4 and OpenAI Five, has taken Sutskever's role as Chief Scientist.
After almost a decade, I have made the decision to leave OpenAI. The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the…
While Sutskever and Altman praised each other in their farewell messages, the two were embroiled in the company's biggest scandal last year. In November, OpenAI's board of directors suddenly fired Altman and company President Greg Brockman. "[T]he board no longer has confidence in [Altman's] ability to continue leading OpenAI," the ChatGPT maker announced back then. Sutskever, who was a board member, was involved in their dismissal and was the one who asked Altman and Brockman to join separate meetings, where they were informed that they were being fired. According to reports that came out at the time, Altman and Sutskever had been butting heads over how quickly OpenAI was developing and commercializing its generative AI technology.
Both Altman and Brockman were reinstated just five days after they were fired, and the original board was disbanded and replaced with a new one. Shortly before that happened, Sutskever posted on X that he "deeply regre[tted his] participation in the board's actions" and that he would do everything he could "to reunite the company." He then stepped down from his role as a board member, and while he remained Chief Scientist, The New York Times says he never really returned to work.
Sutskever shared that he's moving on to a new project that's "very personally meaningful" to him, though he has yet to share details about it. As for OpenAI, it recently unveiled GPT-4o, which it claims can recognize emotion and can process and generate output in text, audio and images.
Ilya and OpenAI are going to part ways. This is very sad to me; Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend. His brilliance and vision are well known; his warmth and compassion are less well known but no less…