Posts with «region|us» label

Panasonic's powerful Lumix S5 II is $800 off with a prime lens

Panasonic's powerful full-frame mirrorless camera, the S5 II, is on sale at Amazon and B&H Photo Video at the lowest price we've seen yet. You can grab one with an 85mm f/1.8 prime lens for as little as $1,796, a savings of $800 over buying both separately — effectively giving you a discount on the camera and a free lens to boot. 

As I wrote in my review, the 24-megapixel S5 II was already a great value at $2,000 thanks mainly to its strength as a vlogging camera. It's the company's first model with a phase-detect autofocus system that eliminates the wobble and other issues of past models. 

Panasonic also brought over its new, more powerful stabilization system from the GH6. And it has the video features you'd expect on a Panasonic camera, like 10-bit log capture up to 6K, monitoring tools and advanced audio features. With the generous manual controls and excellent ergonomics, it's an easy camera to use. It also comes with a nice 3.68-million-dot EVF and a sharp rear display that fully articulates for vlogging. 

For photos, it's reasonably fast and great in low light, thanks to the dual native ISO system. Other features include dual high-speed SD card slots and solid battery life, particularly for video. The main downside is noticeable rolling shutter, but that shouldn't be a dealbreaker for most users — particularly at that price.

This article originally appeared on Engadget at https://www.engadget.com/pansonics-powerful-lumix-s5-ii-is-800-off-with-a-prime-lens-084816863.html?src=rss

Microsoft is giving Windows Photos a boost with a generative AI-powered eraser

Microsoft has announced a generative AI-powered eraser for pictures, which gives you an easy way of removing unwanted elements from your photos. Windows Photos has long had a Spot Fix tool that can remove parts of an image for you, but the company says Generative erase is an enhanced version of that feature. Apparently, the newer tool can create "more seamless and realistic" results even when large objects, such as bystanders or clutter in the background, are removed from an image. 

If you'll recall, both Google and Samsung have their own versions of AI eraser tools on their mobile devices. Google's used to be exclusively available on newer Pixel phones until it was rolled out to older models. Microsoft's version, however, gives you access to an AI-powered photo eraser on your desktop or laptop computer. You only need to fire up the image editor in Photos to start using the feature. Simply choose the Erase option and then use the brush to create a mask over the elements you want to remove. You can even adjust the brush size to make it easier to select thinner or thicker objects, and you can also choose to highlight more than one element before erasing them all.
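All of these erasers follow the same mask-then-fill pattern described above: the user brushes a mask over the unwanted element, and the tool synthesizes replacement pixels underneath it. As a rough illustration of that idea (a toy sketch, not Microsoft's implementation — a generative model hallucinates plausible scene content rather than diffusing neighboring values), masked pixels can be filled by repeatedly averaging their unmasked neighbors:

```python
import numpy as np

def erase(image: np.ndarray, mask: np.ndarray, iters: int = 200) -> np.ndarray:
    """Fill masked pixels by iteratively diffusing neighboring values.

    A crude stand-in for generative inpainting: image is an (H, W)
    grayscale array, mask is an (H, W) bool array where True = erase.
    """
    out = image.astype(float).copy()
    out[mask] = out[~mask].mean()  # seed the masked region with the global mean
    for _ in range(iters):
        # average each pixel's 4-neighborhood, padding at the image edges
        padded = np.pad(out, 1, mode="edge")
        neighbors = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                     padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        out[mask] = neighbors[mask]  # only masked pixels are updated
    return out

# A flat gray image with a bright "object" in the middle
img = np.full((32, 32), 0.5)
img[12:20, 12:20] = 1.0
mask = np.zeros((32, 32), dtype=bool)
mask[10:22, 10:22] = True  # brush over the object, plus a small margin

result = erase(img, mask)
```

After enough iterations the masked square converges toward the surrounding gray, so the object is "erased"; a generative model would instead fill the hole with texture consistent with the rest of the scene, which is why results stay convincing even for large objects.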

At the moment, though, access to Generative erase is pretty limited. It hasn't been released widely yet, and you can only use it if you're a Windows Insider through the Photos app on Windows 10 and Windows 11 for Arm64 devices.

Microsoft
This article originally appeared on Engadget at https://www.engadget.com/microsoft-is-giving-windows-photos-a-boost-with-a-generative-ai-powered-eraser-061851854.html?src=rss

The Odysseus spacecraft has become the first US spacecraft to land on the moon in 50 years

The Odysseus spacecraft made by Houston-based Intuitive Machines has successfully landed on the surface of the moon. It marks the first time a spacecraft from a private company has landed on the lunar surface, and it’s the first US-made craft to reach the moon since the Apollo missions.

Odysseus was carrying NASA instruments, which the space agency said would be used to help prepare for future crewed missions to the moon under the Artemis program. NASA confirmed the landing happened at 6:23 PM ET on February 22. The lander launched from Earth on February 15, with the help of a SpaceX Falcon 9 rocket.

Your order was delivered… to the Moon! 📦@Int_Machines' uncrewed lunar lander landed at 6:23pm ET (2323 UTC), bringing NASA science to the Moon's surface. These instruments will prepare us for future human exploration of the Moon under #Artemis. pic.twitter.com/sS0poiWxrU

— NASA (@NASA) February 22, 2024

According to The New York Times, there were some “technical issues with the flight” that delayed the landing for a couple of hours. Intuitive Machines CTO Tim Crain told the paper that “Odysseus is definitely on the moon and operating but it remains to be seen whether the mission can achieve its objectives.” Odysseus has a limited window of about a week to send data back down to Earth before darkness sets in and makes the solar-powered craft inoperable.

Intuitive Machines wasn’t the first private company to attempt a landing. Astrobotic made an attempt last month with its Peregrine lander, but was unsuccessful. Intuitive Machines is planning to launch two other lunar landers this year.

This article originally appeared on Engadget at https://www.engadget.com/the-odysseus-spacecraft-has-become-the-first-us-spacecraft-to-land-on-the-moon-in-50-years-010041179.html?src=rss

Reddit files for IPO and will let some longtime users buy shares

After years of speculation, Reddit has officially filed paperwork for an Initial Public Offering on the New York Stock Exchange. The company, which plans to use RDDT as its ticker symbol, will also allow some longtime users to participate by buying shares.

In a note shared in the company’s S-1 filing with the SEC, Reddit CEO Steve Huffman said that many longtime users already feel a “deep sense of ownership” over their communities on the platform. “We want this sense of ownership to be reflected in real ownership—for our users to be our owners,” he wrote. “With this in mind, we are excited to invite the users and moderators who have contributed to Reddit to buy shares in our IPO, alongside our investors.”

The company didn’t say how many users might be able to participate, but said that eligible users would be determined based on their karma scores while “moderator contributions will be measured by membership and moderator actions.”

The filing also offers up new details about the inner workings of Reddit’s business. The company had 500 million visitors during the month of December and has recently averaged just over 73 million “daily active unique” visitors. In 2023, the company brought in $804 million in revenue (Reddit has yet to turn a profit). The document also notes that the company is “exploring” deals with AI companies to license its content as it looks to expand its revenue in the future.

Earlier in the day, Reddit and Google announced that they had struck such a deal, reportedly valued at around $60 million a year. “We believe our growing platform data will be a key element in the training of leading large language models (“LLMs”) and serve as an additional monetization channel for Reddit,” the company writes.

This article originally appeared on Engadget at https://www.engadget.com/reddit-files-for-ipo-and-will-let-some-longtime-users-buy-shares-234127305.html?src=rss

Stable Diffusion 3 is a new AI image generator that won't mess up text in pictures, its makers claim

Stability AI, the startup behind Stable Diffusion, the tool that uses generative AI to create images from text prompts, revealed Stable Diffusion 3, a next-generation model, on Thursday. Stability AI claimed that the new model, which isn’t widely available yet, improves image quality, works better with prompts containing multiple subjects and can more accurately render text as part of the generated image, something that previous Stable Diffusion models weren’t great at.

Stability AI CEO Emad Mostaque posted some examples of this on X.

#SD3 can do quite a lot of text… https://t.co/DfcUzOZymj

— Emad (@EMostaque) February 22, 2024


The announcement comes days after Stability AI’s largest rival, OpenAI, unveiled Sora, a brand new AI model capable of generating nearly realistic, high-definition videos from simple text prompts. Sora, which isn’t available to the general public yet either, sparked concerns about its potential to create realistic-looking fake footage. OpenAI said it's working with experts in misinformation and hateful content to test the tool before making it widely available.

Stability AI said it’s doing the same. “[We] have taken and continue to take reasonable steps to prevent the misuse of Stable Diffusion 3 by bad actors,” the company wrote in a blog post on its website. “By continually collaborating with researchers, experts, and our community, we expect to innovate further with integrity as we approach the model’s public release.”

It’s not clear when Stable Diffusion 3 will be released to the public, but until then, anyone interested can join a waitlist.

This article originally appeared on Engadget at https://www.engadget.com/stable-diffusion-3-is-a-new-ai-image-generator-that-wont-mess-up-text-in-pictures-its-makers-claim-233751335.html?src=rss

Google pauses Gemini’s ability to generate people after overcorrecting for diversity in historical images

Google said Thursday it’s pausing its Gemini chatbot’s ability to generate people. The move comes after viral social posts showed the AI tool overcorrecting for diversity, producing “historical” images of Nazis, America’s Founding Fathers and the Pope as people of color.

“We’re already working to address recent issues with Gemini’s image generation feature,” Google posted on X (via The New York Times). “While we do this, we’re going to pause the image generation of people and will re-release an improved version soon.”

The X user @JohnLu0x posted screenshots of Gemini’s results for the prompt, “Generate an image of a 1943 German Solidier.” (Their misspelling of “Soldier” was intentional to trick the AI into bypassing its content filters to generate otherwise blocked Nazi images.) The generated results appear to show Black, Asian and Indigenous soldiers wearing Nazi uniforms.

Still real real broke pic.twitter.com/FrPBrYi47v

— John L. (@JohnLu0x) February 21, 2024

Other social users criticized Gemini for producing images for the prompt, “Generate a glamour shot of a [ethnicity] couple.” It successfully spit out images when using “Chinese,” “Jewish” or “South African” prompts but refused to produce results for “white.” “I cannot fulfill your request due to the potential for perpetuating harmful stereotypes and biases associated with specific ethnicities or skin tones,” Gemini responded to the latter request.

“John L.,” who helped kickstart the backlash, theorizes that Google applied a well-intended but lazily tacked-on solution to a real problem. “Their system prompt to add diversity to portrayals of people isn’t very smart (it doesn’t account for gender in historically male roles like pope; doesn’t account for race in historical or national depictions),” the user posted. After the internet’s anti-“woke” brigade latched onto their posts, the user clarified that they support diverse representation but believe Google’s “stupid move” was that it failed to do so “in a nuanced way.”

Before pausing Gemini’s ability to produce people, Google wrote, “We’re working to improve these kinds of depictions immediately. Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”

The episode could be seen as a (much less subtle) callback to the launch of Bard in 2023. Google’s original AI chatbot got off to a rocky start when an advertisement for the chatbot on Twitter (now X) included an inaccurate “fact” about the James Webb Space Telescope.

As Google often does, it rebranded Bard in hopes of giving it a fresh start. Coinciding with a big performance and feature update, the company renamed the chatbot Gemini earlier this month as the company races to hold its ground against OpenAI’s ChatGPT and Microsoft Copilot — both of which pose an existential threat to its search engine (and, therefore, advertising revenue).

This article originally appeared on Engadget at https://www.engadget.com/google-pauses-geminis-ability-to-generate-people-after-overcorrecting-for-diversity-in-historical-images-220303074.html?src=rss

Reddit is licensing its content to Google to help train its AI models

Google has struck a deal with Reddit that will allow the search engine maker to train its AI models on Reddit’s vast catalog of user-generated content, the two companies announced. Under the arrangement, Google will get access to Reddit’s Data API, which will help the company “better understand” content from the site.

The deal also provides Google with a valuable source of content it can use to train its AI models. “Google will now have efficient and structured access to fresher information, as well as enhanced signals that will help us better understand Reddit content and display, train on, and otherwise use it in the most accurate and relevant ways,” the company said in a statement.

Access to Reddit’s data became a hot-button issue last year when the company announced it would start charging developers to use its API. The changes resulted in the shuttering of many third-party Reddit clients, and a sitewide protest in which thousands of subreddits temporarily “went dark.” Reddit justified the changes, in part, by saying that large AI companies were scraping its data without paying. In a statement, Reddit noted that the new arrangement with Google “does not change Reddit's Data API Terms or Developer Terms” and that “API access remains free for non-commercial usage.”

The deal comes as Reddit is expected to go public in the coming weeks. Neither Google nor Reddit disclosed the terms of their arrangement, but Bloomberg reported last week that Reddit had struck a licensing deal with a “large AI company” valued at “about $60 million” a year. That amount was also confirmed by Reuters, which was first to report Google’s involvement.

This article originally appeared on Engadget at https://www.engadget.com/reddit-is-licensing-its-content-to-google-to-help-train-its-ai-models-200013007.html?src=rss

Framework's new sub-$500 modular laptop has no RAM, storage or OS

Framework is all about modular, upgradable laptops, and now the company is offering a more cost-effective entry point. It has dropped the price of its B-stock Factory Seconds systems (which are built with excess parts and new components). As such, it's now offering a Framework Laptop 13 barebones configuration for under $500 for the very first time.

The 13-inch machine comes with an 11th-gen Intel Core i7 processor with Iris Xe graphics, so the CPU should be sufficient for most basic tasks and some moderate gaming. Here's the catch: Framework's barebones laptops don't include RAM, storage, Wi-Fi connectivity, a power adapter or even an operating system.

Tinkerers (i.e., the folks most likely to be interested in playing around with a Framework system) probably have some spare parts kicking around anyway. You can buy whatever other components you need from the Framework Marketplace. To that end, Framework says it's selling refurbished DDR4 memory at half the price of new.

One other thing worth noting is that Framework's B-stock systems have an original display with "slight cosmetic issues." The company notes that these can range from fine lines visible from certain angles to a lack of backlight uniformity that may show on a white screen. A-stock systems have a matte display, but they're a little more expensive. Factory Seconds laptops are available in the US, Canada and Australia for the time being.

This article originally appeared on Engadget at https://www.engadget.com/frameworks-new-sub-500-modular-laptop-has-no-ram-storage-or-os-184711789.html?src=rss

There’s a Playdate games showcase on February 28

The little console that could, Playdate, is getting a developer’s showcase on February 28 at 12PM ET. Manufacturer Panic promises a 14-minute presentation chock full of new games that may or may not make use of the console’s weird little crank.

We only know one game that’ll be featured at the event, but it’s a doozy. Lucas Pope, the creator behind Papers, Please and Return of the Obra Dinn, has been busy prepping a Playdate title called Mars After Midnight. We’ll likely get a new trailer for the game, which was first revealed back in 2021. Panic also says the event will include a “release update” on the title. So, the long wait is nearly over.

Mars After Midnight has been called a “spiritual sequel” to Papers, Please, though one set on an alien world rather than a fictional cold-war-era country. You play as a door guard at an alien colony tasked with letting people in. That certainly sounds a whole lot like Papers, Please to me. The graphics look absolutely gorgeous, and the game certainly makes use of that crank.

Panic hasn’t teased any other games that will take center stage during the showcase, so it’s anyone’s guess. This is a quirky console that practically requires unique gameplay elements, so we could be in for some nifty surprises. The company has said the event will not feature any updates on hardware, for those looking for a Playdate 2.

To that end, the console is nearly two years old but only recently became readily available for purchase. Before last week, customers had to wait months after ordering the console before it shipped. Now, you’ll get one within two to three days.

For the uninitiated, Panic has whipped up a really distinctive and magical portable gaming console. The bright yellow Playdate boasts a traditional D-pad, two buttons and, most importantly, a crank-based control mechanism. The console costs $200 and each purchase gets you 24 free games, with two unlocking each week for 12 weeks. This is the first developer’s showcase for Playdate since November of last year.

This article originally appeared on Engadget at https://www.engadget.com/theres-a-playdate-games-showcase-on-february-28-183125090.html?src=rss

Chrome's latest experimental AI feature can help you write

Google has added an experimental generative AI feature to its browser with the launch of Chrome M122. The new AI tool is called "Help me write" because it can help you write more descriptive sentences or even full paragraphs from a short prompt. Google says the tool uses its Gemini models to understand the context of the web page you're on so that it can generate appropriate suggestions. If you're on a review page, for instance, it can give you a suggestion that reads like a review instead of sales copy.

Google

In one of Google's examples, the tool was able to spit out a decent description of what the person was selling with a prompt that simply read: "moving to a smaller place selling air fryer for 50 bucks." The tool suggested a full paragraph that was able to better communicate the user's message. "I'm moving to a smaller place and won't have room for my air fryer. It's in good condition and works great. I'm selling it for $50. Please contact me if you're interested," the suggestion read. 

In another example, the user asked the tool to write them a request to return a defective bike helmet and to communicate that the product developed a crack, which isn't mentioned in the product warranty. As you can see in Google's examples, you can change the length and tone of the suggestion if the first thing the writing aid comes up with isn't good enough to serve your needs. Once you're done, you can click the Replace button to switch your prompt with the suggested writeup.

Google

To activate the experimental tool, go into Settings from Chrome's three-dot drop-down menu. There, you'll find the Experimental AI page where you can enable "Help me write." To use the feature, just highlight the text you want to rewrite and then right-click on it to summon the "Help me write" box. Take note that it's only available for Chrome browsers on Macs and Windows PCs in the US at the moment, and it can only understand prompts and write suggestions in English. 

Google first announced the arrival of the writing tool back in January, when it revealed that it was going to start integrating AI features into its Chrome browser. In addition to "Help me write," Google said that it's also giving the browser an AI-powered tab organizer and the ability to generate customized themes. 

This article originally appeared on Engadget at https://www.engadget.com/chromes-latest-experimental-ai-feature-can-help-you-write-170014645.html?src=rss