
Huawei’s new foldable provokes scrutiny over Chinese-made chips

Following Huawei's surprise launch of the seemingly 5G-capable Mate 60 and Mate 60 Pro smartphones last week, the Chinese firm has today unveiled two more devices: the Mate 60 Pro+ and the Mate X5 foldable. Huawei has been largely limited to 4G connectivity on its handsets since the US sanctions took effect, but with this latest wave of smartphone launches, the company has been intentionally secretive about its choice of radio. Sources told Engadget that these are indeed 5G devices, a claim supported by Chinese blogger Vincent Zhong's speed test on the new foldable, which hit download speeds of over 1Gbps (notably, with no 5G indicator shown on the screen).

It's likely that both phones are also powered by Huawei's mysterious HiSilicon Kirin 9000S, whose 7nm process node has raised concerns about whether the local chip supplier, SMIC (Semiconductor Manufacturing International Corporation), violated US sanctions to access foreign chip-making technology. Huawei did not immediately respond to requests for comment about the specs of these new phones or the chip.

A recent Kirin 9000S teardown conducted by TechInsights for Bloomberg confirmed SMIC's 7nm process, which was thought to be impossible given the import ban on key manufacturing equipment — namely the EUV lithography machines from Dutch firm ASML (Advanced Semiconductor Materials Lithography). Before the US import ban, Huawei relied on TSMC (Taiwan Semiconductor Manufacturing Company) for its 5nm process, which was enabled by ASML's machines.

It is unlikely that SMIC procured such advanced machinery from ASML — at least not directly — without raising alarms. According to Bits & Chips, ASML CEO Peter Wennink recently remarked that "the Mate 60 Pro shouldn’t come as a surprise to anyone, as the restrictions essentially forced the Chinese to double down on innovation," the implication being that SMIC could well have developed its own high-end lithography machinery.

Benchmarks conducted by Chinese tech blog Geekerwan suggest that the Kirin 9000S' performance is close to Qualcomm's Snapdragon 888, meaning it's around two generations behind. The site added that the CPU features one big core and three middle cores based on Huawei's own "TaiShan" architecture, alongside four little cores based on Arm's efficient Cortex-A510. As a bonus, the Kirin 9000S is apparently the first mobile processor to support simultaneous multithreading, running its eight cores with 12 threads (the big and middle cores each run two threads: 4 × 2 + 4 = 12), though apps will reportedly require further optimization to make use of this feature. As for the GPU, Huawei added its own Maleoon 910, which is allegedly on par with the one in the Snapdragon 888.

Huawei Mate 60 Pro+
Huawei

Much like the Mate 60 Pro, the higher-end Mate 60 Pro+ supports satellite calls via China Telecom and satellite messaging using BeiDou. The only notable differences (that we can see for now) are a different "nanotech metallic double dye process" and better rear cameras. As for the Mate X5 foldable, it's almost identical to the super-slim Mate X3, save for the switch to Huawei's fancier Kunlun Glass on the external screen (hence a 2g bump in weight) and a slightly tweaked rear camera island. Huawei has yet to reveal prices for either model, though pre-orders will start at 6:08PM local time today.

If all four of Huawei's latest smartphones are indeed powered by the Kirin 9000S, it would suggest that Huawei is confident in its chip yield, potentially dealing a further blow to the effectiveness of US sanctions. Rumors suggest that we'll be hearing more about these devices towards the end of September — conveniently avoiding the iPhone 15 rush.

This article originally appeared on Engadget at https://www.engadget.com/huaweis-new-foldable-provokes-scrutiny-over-chinese-made-chips-104105500.html?src=rss

iOS apps will publish to the Apple Vision Pro store by default

Apple just announced that nearly every iOS app will automatically publish to the Vision Pro store by default, which the company says will give early adopters access to “hundreds of thousands of iPad and iPhone apps.” This will be in addition to whatever native Vision Pro apps launch on the official store.

Most apps can easily run on Vision Pro, but you won’t get a fully futuristic experience. Instead, you’ll see what you’d normally see on your phone or tablet, just blown up on a virtual screen in front of you. Apple says that “app experiences can easily extend to Apple Vision Pro from day one — with no additional work required.”

This is slightly underwhelming when you consider the usual apps, like Facebook, but it actually provides some real benefits. It means, for instance, that every streaming app will automatically be available at launch, so you can watch whatever you want on the headset’s virtual screen. Incidentally, that screen can appear up to 100 feet wide, so those lightsaber battles on Ahsoka will really pop. Marry that with the comfort-forward lightweight design and you’ve got yourself one heck of an entertainment machine, and that’s before purpose-built streaming apps begin showing up.

On the developer side, there’s a forthcoming visionOS beta launching this fall so devs can test their apps to make sure they work. Additionally, this toolset will allow developers to make adjustments to maximize integration with the headset. It’ll also let you know if your app isn’t eligible for some reason, though most will be.

Now onto the why of this. The Apple Vision Pro is set to be a niche product for at least its first generation, given the exorbitant price tag and limited use cases, so exclusive apps could be scarce at launch. Auto-publishing allows Apple to, in a sense, inflate the Vision Pro app store numbers to entice consumers. It could also pressure some of the larger developers out there, like Meta, to build features exclusive to the headset. Whatever the reason, one of the primary clarion calls whenever any new technology is announced is a cry for backwards compatibility, and well, this’ll do it.

For the uninitiated, the Apple Vision Pro is the company’s forthcoming mixed-reality headset. It boasts eye-tracking, so you can control apps via minute ocular movements, and an OLED screen on the exterior that displays a digital recreation of your eyes for others to interact with. It’ll cost a whopping $3,500 when it launches next year, equivalent to buying seven Meta Quest 3 VR headsets at $500 apiece.

This article originally appeared on Engadget at https://www.engadget.com/ios-apps-will-publish-to-the-apple-vision-pro-store-by-default-183016666.html?src=rss

Vampire: The Masquerade - Bloodlines 2 returns from the shadows with a new developer

Vampire: The Masquerade - Bloodlines 2 has risen from the depths of development hell, two years after Paradox Interactive parted ways with the game's former developer, Hardsuit Labs, and delayed the game indefinitely. The publisher has since recruited Dear Esther and Everybody’s Gone to the Rapture studio The Chinese Room to work on the sequel to the original RPG from 2004. Bloodlines 2 is now set to arrive in fall 2024.

The Chinese Room has retained some of the original concepts while tossing out others to reframe Bloodlines 2 in its own vision. The modern-day Seattle setting has survived, as has some of Hardsuit's level and art design. However, creative director Alex Skidmore told PC Gamer that the game now has “a new code base with different gameplay mechanics and RPG systems." You'll play as an elder vampire instead of the fresh face you might be familiar with from the original game, though the protagonist has been in stasis for some time, so you'll be getting used to the wintry setting at the same time as them.

This is a new type of challenge for The Chinese Room, which until now has focused on atmospheric walking simulators infused with mystery, as Polygon notes. Much like its latest project, the studio has endured troubles of its own over the years: it nearly shut down entirely in 2017 due to funding issues before Sumo Digital took over and revived it (Sumo Digital itself later found a new owner in Tencent).

We'll find out more about what The Chinese Room has in store for fans in the coming months. Paradox plans to discuss Vampire: The Masquerade - Bloodlines 2 in more depth in January.

This article originally appeared on Engadget at https://www.engadget.com/vampire-the-masquerade---bloodlines-2-returns-from-the-shadows-with-a-new-developer-200008403.html?src=rss

An AI pilot has beaten three champion drone racers at their own game

In what can only bode poorly for our species' survival during the inevitable robot uprisings, an AI system has once again outperformed the people who trained it. This time, researchers at the University of Zurich, in partnership with Intel, pitted their "Swift" AI piloting system against a trio of world-champion drone racers — none of whom could best its top time.

Swift is the culmination of years of AI and machine learning research by the University of Zurich. In 2021, the team pitted an earlier iteration of the flight-control algorithm, one that used a series of external cameras to validate its position in space in real time, against amateur human pilots, all of whom were easily overmatched in every lap of every race. That result was a milestone in its own right because, previously, self-guided drones relied on simplified physics models to continually calculate their optimum trajectory, which severely limited their top speed.

This week's result is another milestone, not just because the AI bested people whose job is to fly drones fast, but because it did so without the cumbersome external camera arrays of its predecessor. The Swift system "reacts in real time to the data collected by an onboard camera, like the one used by human racers," a University of Zurich release reads. It uses an integrated inertial measurement unit (IMU) to track acceleration and speed, while an onboard neural network localizes the drone's position in space using data from the front-facing camera. All of that data is fed into a central control unit, itself a deep neural network, which crunches the numbers and devises the fastest path around the track.
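To make that data flow concrete, here's a schematic sketch of the perception-to-control loop in Python. It's purely illustrative: the function names and the toy fusion and steering logic are our own assumptions, not the team's actual networks or code.

```python
import numpy as np

# Illustrative sketch of a Swift-style onboard loop: IMU + camera readings
# are fused into a state estimate, which a control policy maps to a command.

def fuse_state(position_from_vision, velocity, imu_accel, dt):
    """Toy sensor fusion: dead-reckon velocity from the IMU, trust the
    vision network for position. (Swift's actual estimator is learned.)"""
    velocity = velocity + imu_accel * dt
    return position_from_vision, velocity

def control_policy(position, velocity, next_gate):
    """Stand-in for the deep control network: thrust toward the next gate,
    damped by current velocity to avoid overshooting."""
    direction = next_gate - position
    return direction / (np.linalg.norm(direction) + 1e-9) - 0.1 * velocity

# One iteration of the loop the article describes:
position, velocity = np.zeros(3), np.zeros(3)
next_gate = np.array([5.0, 2.0, 1.5])
position, velocity = fuse_state(position, velocity, np.array([0.2, 0.0, 0.0]), dt=0.01)
print("thrust command:", control_policy(position, velocity, next_gate))
```

In the real system both the localizer and the controller are neural networks; the point of the sketch is simply that everything runs onboard, at each timestep, from sensors the drone carries itself.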

“Physical sports are more challenging for AI because they are less predictable than board or video games. We don’t have a perfect knowledge of the drone and environment models, so the AI needs to learn them by interacting with the physical world,” Davide Scaramuzza, head of the Robotics and Perception Group at the University of Zurich, said in a statement.

Rather than let a quadcopter smash its way around the track for the month or so its controller AI would need to slowly learn the various weaves and bobs of the circuit, the research team instead simulated that learning session virtually. It took all of an hour. And then the drone went to work against 2019 Drone Racing League champion Alex Vanover, 2019 MultiGP Drone Racing champion Thomas Bitmatta and three-time Swiss champion Marvin Schaepper.
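The underlying approach is reportedly deep reinforcement learning run against a simulator, so the expensive trial-and-error happens faster than real time and without wrecked hardware. As a loose illustration of that sim-first idea only (a toy search over a simulated model, not the team's actual method), consider:

```python
import numpy as np

def simulate(gain, steps=200, dt=0.05):
    """Toy 1D 'drone': score a controller gain by how quickly it reaches
    x = 10 in simulation, so no real hardware is risked during learning."""
    x, v, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        thrust = gain * (10.0 - x) - 0.5 * v   # proportional control with damping
        v += thrust * dt
        x += v * dt
        cost += abs(10.0 - x) * dt             # accumulate time-weighted error
    return cost

# All "training" happens in simulation: evaluate candidates, keep the best.
rng = np.random.default_rng(0)
candidates = rng.uniform(0.1, 5.0, size=500)
best_gain = min(candidates, key=simulate)
print(f"best gain found purely in simulation: {best_gain:.2f}")
```

Swift's policy is vastly more complex, but the economics are the same: thousands of simulated crashes cost nothing, which is how a month of physical practice compresses into an hour.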

Swift notched the fastest lap overall, beating the humans by a half second, though the meatsack pilots proved more adaptable to changing conditions over the course of a race. “Drones have a limited battery capacity; they need most of their energy just to stay airborne. Thus, by flying faster we increase their utility,” Scaramuzza said. As such, the research team hopes to continue developing the algorithm for eventual use in search-and-rescue operations, as well as in forest monitoring, space exploration and film production.

This article originally appeared on Engadget at https://www.engadget.com/an-ai-pilot-has-beaten-three-champion-drone-racers-at-their-own-game-190537914.html?src=rss

US Copyright Office opens public comments on AI and content ownership

The US Copyright Office (USCO) wants your thoughts on generative AI and who can theoretically be declared to own its outputs. The technology has increasingly commanded the legal system’s attention, and the office began seeking public comments on Wednesday about some of AI’s thorniest issues (via Ars Technica). These include questions about companies training AI models on copyrighted works, the copyright eligibility of AI-generated content (along with liability for infringing on it) and how to handle machine-made outputs that mimic human artists’ work.

“The adoption and use of generative AI systems by millions of Americans — and the resulting volume of AI-generated material — have sparked widespread public debate about what these systems may mean for the future of creative industries and raise significant questions for the copyright system,” the USCO wrote in a notice published on Wednesday.

One issue the office hopes to address is the required degree of human authorship to register a copyright on (otherwise AI-driven) content, citing the rising number of attempts to copyright material that names AI as an author or co-author. “The crucial question appears to be whether the ‘work’ is basically one of human authorship, with the computer merely being an assisting instrument, or whether the traditional elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.) were actually conceived and executed not by man but by a machine,” the USCO wrote.

Although the issue is far from resolved, several cases have hinted at where the boundaries may fall. For example, the office said in February that the (human-made) text and layout arrangement in a partially AI-generated graphic novel were copyrightable, but the work’s Midjourney-generated images weren’t. On the other hand, a federal judge recently rejected an attempt to register AI-generated art that had no human involvement beyond its inciting text prompt. “Copyright has never stretched so far [...] as to protect works generated by new forms of technology operating absent any guiding human hand, as plaintiff urges here,” US District Judge Beryl Howell wrote in that ruling.

The USCO also seeks input on the increasing number of infringement claims from copyright owners against AI companies for training on their published works. Sarah Silverman is among the high-profile plaintiffs suing OpenAI and Meta for allegedly training ChatGPT and LLaMA (respectively) on their written work — in her case, her 2010 memoir The Bedwetter. OpenAI also faces a class-action lawsuit over using scraped web data to train its viral chatbot.

The USCO says the public comment period will be open until November 15th. You can share your thoughts until then.

This article originally appeared on Engadget at https://www.engadget.com/us-copyright-office-opens-public-comments-on-ai-and-content-ownership-170225911.html?src=rss

Snapchat's new 'Dreams' feature uses generative AI to remix users' selfies

Snapchat has added a new generative AI feature to its app. Called “Dreams,” it’s in some ways similar to the company’s signature AR effects, known as lenses. But instead of real-time camera-based effects, the feature uses generative AI to remix users’ selfies into “fantastical images that transform their persona into new identities.”

The feature, which can be found in the app’s “Memories” section, begins by asking users to take selfies showing their face from different angles. The app will then create a series of eight images based on themes like “time travel” or “alternate universes.” Eventually, Snap says, users will be able to create Dreams that include their friends’ likenesses as well.

Dreams is the latest generative AI experiment from the company, which launched its MyAI chatbot earlier this year using OpenAI’s models. (Dreams uses open source tools and internal data, though the company hasn’t provided details about specific partners.)

The feature also highlights how the company is using interest in the technology as a source of revenue. MyAI was initially limited to Snapchat+, the app’s premium subscription tier, before it was released to all the app’s users this spring. The company has since added specialized features for subscribers, including the ability for MyAI to reply to photo Snaps with its own AI-generated images.

Likewise, Dreams will have both a free and paid component. Snap is allowing non-Snapchat+ subscribers to access just one — so use it wisely — “pack” of eight selfies, while subscribers will get access to one pack a month (the company says it plans to update Dreams with new themes and styles regularly). All users will be able to buy additional packs for a $0.99 in-app purchase.

Snap

In practice, the images appear to have some of the same limitations as other AI-based image generators. A promotional image shared by Snap showed what appeared to be the tips of partial fingers strangely placed over the subject's chest. When I tried Dreams to create my own AI selfies, some of the resulting images also had strange-looking hands, though it at least showed the correct number of fingers placed in an anatomically correct position.

Still, I can see how the feature could keep Snapchat users — who have collectively sent more than 10 billion messages to MyAI — coming back. And as tools like Midjourney have moved behind paywalls, Snap’s offerings might just seem like a better deal for those looking to try out generative AI.

This article originally appeared on Engadget at https://www.engadget.com/snapchats-new-dreams-feature-uses-generative-ai-to-remix-users-selfies-130038172.html?src=rss

Dolby Atmos will use your TV to expand living room speaker setups

Some companies allow you to use the speakers in your TV to augment the drivers in a soundbar or other speakers in order to enhance overall audio quality. Samsung has Q-Symphony and Sony has Acoustic Center Sync, for example. Today, Dolby has announced a new Atmos feature that will function similarly, pairing TV speakers with any wireless speakers you have in the room. Officially dubbed Dolby Atmos FlexConnect, the tech will debut first on 2024 TCL TVs.

Dolby explains that FlexConnect "intelligently optimizes the sound" based on the layout of the room and the location of any speakers. The company says the technology will free users from the sonic limitations of room size, furniture positioning or the location of power outlets. FlexConnect will allow speakers to be placed anywhere in a room and calibrate each of them to the TV's speakers, creating a customized Dolby Atmos sound profile unique to each user's home.

Dolby says setup is quick and easy, as acoustic mapping is done using microphones inside the TV. Those components locate each speaker before performing the aforementioned audio calibration. The company explains that the result should be more consistent immersive sound no matter where you're sitting in the room.

FlexConnect isn't just boosting the center channel either. Instead, the feature is adjusting the sound for each speaker, even the ones inside the TV. If the system notices that a pair of speakers are at the front of the room, for example, it can tweak the audio so that the TV handles the bulk of the dialog and the speakers take on the rest of the front soundstage. If there are two speakers near the back of the room, the TV then handles dialog and those sounds that need to come from the front of the room. 
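Dolby hasn't published the details of that mapping step, but acoustic auto-calibration of this kind typically works by playing a known test signal through each speaker and measuring when it arrives at the TV's microphones. Here's a minimal sketch of that general technique in Python; the chirp, the cross-correlation approach and all names here are our own illustrative assumptions, not Dolby's algorithm.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # meters per second

def estimate_distance(test_signal, mic_recording, sample_rate):
    """Estimate speaker-to-mic distance from the arrival delay of a test chirp."""
    # The lag of the cross-correlation peak is the propagation delay in samples.
    corr = np.correlate(mic_recording, test_signal, mode="full")
    delay_samples = np.argmax(corr) - (len(test_signal) - 1)
    return max(delay_samples, 0) / sample_rate * SPEED_OF_SOUND

# Synthetic check: a chirp arriving 231 samples late at 48kHz (~1.65m away).
sr = 48_000
t = np.linspace(0, 0.05, int(sr * 0.05), endpoint=False)
chirp = np.sin(2 * np.pi * (200 + 4000 * t) * t)
noise = 0.01 * np.random.default_rng(1).standard_normal(231 + len(chirp))
mic = np.concatenate([np.zeros(231), chirp]) + noise
print(f"estimated distance: {estimate_distance(chirp, mic, sr):.2f} m")
```

A real system would repeat this for every speaker, then convert the measured distances into per-speaker delay and level trims so the soundstage holds together from any seat.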

One item that could play a key role with Dolby Atmos FlexConnect is interoperability. Samsung's Q-Symphony and Sony's Acoustic Center Sync both require a compatible soundbar and TV made by those companies, and LG's Wow Orchestra works the same way. If manufacturers are free to integrate this new technology into their products, as they are with Dolby Atmos as a whole, it would be great if users could pair, say, a TCL TV with a Sennheiser soundbar. As you might expect, TCL plans to debut wireless speakers to accompany its upcoming FlexConnect-compatible TVs.

This article originally appeared on Engadget at https://www.engadget.com/dolby-atmos-will-use-your-tv-to-expand-living-room-speaker-setups-123021095.html?src=rss

WhatsApp lets you create groups without naming them

WhatsApp will now let you create small groups without first naming them. Mark Zuckerberg announced the new feature in a Facebook post (via TechCrunch). You previously had to choose your group’s name when setting it up.

TechCrunch reports that unnamed groups are capped at six members, versus the 1,024-participant limit for named groups. In addition, WhatsApp will reportedly auto-generate placeholder names for unnamed groups based on their members (for example, “Rocco & Li-Chen” for a chat between those two in Zuckerberg’s sample image below). Because the placeholder is built from contact names, the group's name will also appear differently for each member, depending on how they've saved the other members’ contacts.

Meta / Mark Zuckerberg

When you join an unnamed group that includes people who haven’t saved your contact info, WhatsApp will reportedly display your phone number to them. This suggests the feature is designed more for established friends, family or colleagues and less for strangers.
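WhatsApp hasn't described the exact rule, but the behavior above implies per-viewer naming logic roughly like this sketch (the formatting and fallback choices are our assumptions, not WhatsApp's implementation):

```python
def placeholder_name(members, my_contacts, max_names=3):
    """Build a group name like "Rocco & Li-Chen" from the viewer's contacts.

    members: phone numbers in the group (excluding the viewer)
    my_contacts: the viewer's own address book, mapping number -> saved name
    """
    # Fall back to the raw number when the viewer hasn't saved a contact,
    # mirroring how unsaved members reportedly appear by phone number.
    names = [my_contacts.get(number, number) for number in members]
    if len(names) <= max_names:
        return " & ".join(names)
    return f"{', '.join(names[:max_names])} & {len(names) - max_names} others"

# Two viewers of the same group can see different names:
group = ["+15550001", "+15550002"]
print(placeholder_name(group, {"+15550001": "Rocco", "+15550002": "Li-Chen"}))
print(placeholder_name(group, {"+15550001": "Rocco"}))  # unsaved contact shows as a number
```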

TechCrunch reports that the feature will roll out globally “over the next few weeks.”

This article originally appeared on Engadget at https://www.engadget.com/whatsapp-lets-you-create-groups-without-naming-them-174420165.html?src=rss

Meta's new multimodal translator uses a single model to speak 100 languages

Though it's not quite ready to usher in the Dolittle future we've all been waiting for, modern AI translation methods are proving more than sufficient at accurately translating between humanity's roughly 6,500 spoken and written communication systems. The problem is that each of these models tends to do only one or two tasks really well — translating text to speech, speech to text, or within a single modality — so you end up having to smash a bunch of models on top of each other to create the generalized performance seen in the likes of Google Translate or Facebook's myriad language services.

That's a computationally intensive process, so Meta developed a single model that can do it all. SeamlessM4T is "a foundational multilingual and multitask model that seamlessly translates and transcribes across speech and text," Meta's blog from Tuesday reads. It can translate between any of nearly 100 languages for speech-to-text and text-to-text functions; for speech-to-speech and text-to-speech, it supports those same languages as inputs and can output into any of 36 tongues, including English.

In its blog post, Meta's research team notes that SeamlessM4T "significantly improve[s] performance for the low and mid-resource languages we support," while maintaining "strong performance on high-resource languages, such as English, Spanish, and German." Meta built SeamlessM4T on its existing PyTorch-based multitask UnitY model architecture, which already natively performs the various modal translations as well as automatic speech recognition. It utilizes the w2v-BERT 2.0 system for audio encoding, breaking inputs down into their component tokens for analysis, and a HiFi-GAN unit vocoder to generate spoken responses.
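Because one checkpoint handles all of these directions, the developer-facing interface can boil down to a single call whose task tag picks the modality pairing. The sketch below is hypothetical: the module, function and argument names are illustrative assumptions, so consult Meta's GitHub release for the real interface.

```python
# Hypothetical sketch of driving a single multitask translation model.
# Module, function and task names below are illustrative assumptions;
# see Meta's GitHub repository for the actual API.
from seamless_m4t import load_model  # hypothetical import

model = load_model("seamlessM4T_large")

# One model, four task pairings, selected by a task tag:
text = model.predict("Wo ist der Bahnhof?", task="t2tt", src_lang="deu", tgt_lang="eng")
speech = model.predict("Where is the station?", task="t2st", src_lang="eng", tgt_lang="spa")
transcript = model.predict("clip.wav", task="s2tt", tgt_lang="eng")  # speech in, text out
dubbed = model.predict("clip.wav", task="s2st", tgt_lang="fra")      # speech in, speech out
```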

Meta has also curated a massive open-source speech-to-speech and speech-to-text parallel corpus, dubbed SeamlessAlign. The company mined "tens of billions of sentences" and "four million hours" of speech from publicly available repositories to "automatically align more than 443,000 hours of speech with texts, and create about 29,000 hours of speech-to-speech alignments," per the blog. When tested for robustness, SeamlessM4T reportedly outperformed its (current state-of-the-art) predecessor against background noises and speaker style variations by 37 percent and 48 percent, respectively.

As with most of its previous machine-translation efforts — whether that's Llama 2, Massively Multilingual Speech (MMS), Universal Speech Translator (UST) or the ambitious No Language Left Behind (NLLB) project — SeamlessM4T is being open-sourced. "We believe SeamlessM4T is an important breakthrough in the AI community’s quest toward creating universal multitask systems," the team wrote. "Keeping with our approach to open science, we are excited to share our model publicly to allow researchers and developers to build on this technology." If you're interested in working with SeamlessM4T yourself, head over to GitHub to download the model, training data and documentation.

This article originally appeared on Engadget at https://www.engadget.com/metas-new-multimodal-translator-uses-a-single-model-to-speak-100-languages-133040214.html?src=rss

Rode's Wireless Pro mic kit lets you forget about 'clipped' audio

It might not be an overstatement to say Rode's original Wireless GO microphone system changed how a lot of YouTubers work. It wasn't the first wireless mic system, not by a long shot, but its focus on creators made it incredibly popular. That success inspired a lot of competing products — such as DJI's — which have since won over fans in a category that Rode arguably defined. Today, Rode fights back with the Wireless Pro, its new flagship wireless microphone system for creators.

The headline feature is the inclusion of onboard 32-bit float recording, which means you should no longer have to worry about setting mic gain levels (though it's probably best that you do). With it, the onboard recording is almost impossible to "clip" or distort through being too loud. Effectively, you should always have a usable recording even if things got a bit too loud in the audio going to your camera, which will be a great anxiety reducer for anyone who's ever had a production ruined by bad audio.

The Wireless Pro could arguably help bring 32-bit float into the mainstream. There are specialist audio recorders that already offer the feature, and Rode already includes it on its NT1 hybrid studio microphone. But given that you can plug a wide variety of microphones into the Wireless Pro's transmitters, this opens the door to recording all sorts of audio content in 32-bit float — as long as you can feed it into a 3.5mm jack.
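To see why the float format matters: integer formats have a hard ceiling at full scale, while 32-bit float can store values far beyond it, so an overly hot recording can simply be turned down in post. A toy numpy illustration of that principle (not Rode's actual processing):

```python
import numpy as np

# A 440Hz test tone "recorded" four times hotter than full scale.
t = np.linspace(0, 1, 48_000, endpoint=False)
signal = 4.0 * np.sin(2 * np.pi * 440 * t)

# Fixed-point capture: anything beyond full scale is flattened permanently.
clipped = np.clip(signal, -1.0, 1.0)

# 32-bit float capture: out-of-range values are stored intact, so the
# recording can simply be scaled down afterward with nothing lost.
float_capture = signal.astype(np.float32)
recovered = float_capture / 4.0

print("clipped peak:", clipped.max())      # 1.0, waveform squared off
print("recovered peak:", recovered.max())  # ~1.0, sine wave fully intact
```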

In a further attempt at streamlining the creative process, the Wireless Pro also has advanced timecode capability, so you won't need an external device for that, though you will need to set it up via Rode Central, the mic's companion app (there's no on-device option for this setting).

Photo by James Trew / Engadget

The Wireless Pro borrows a few features from alternatives and aftermarket accessories by including a charging case as standard (Rode currently offers one as a standalone purchase). According to the company, the case holds two full charges for the entire system. The stated battery life for the transmitters and receiver is around seven hours, so with the case, the Wireless Pro should be good for at least 20 hours of recording (roughly seven hours per charge, times three) onto its 32GB of internal storage, which is apparently good for 40 hours of material.

Another key upgrade is improved range. The Wireless GO II, for example, has an approximate range of 656 feet (200 meters). The new Pro model expands that to 850 feet (260 meters), which is, coincidentally, a shade more than DJI's stated 820 feet (250 meters).

When Rode unveiled its more affordable Wireless ME kit, it introduced the idea of the receiver doubling as a "narrator" mic via a TRRS headset plugged into the headphone/monitoring port. That feature carries over to the Pro, meaning you can record up to three different speakers, albeit with one of them wired rather than cable-free.

There are a couple of minor but welcome quality-of-life updates, too, such as locking 3.5mm jacks, so you won't rip your lav mic out, and plug-in power detection, which lets the system sense when the camera it's plugged into is active and use that info to optimize power usage.

At the time of publication, DJI's dual-mic product retails for $330. The Rode Wireless Pro will cost $399. That's obviously a slice more, but the company has decided to include two Lavalier II mics as part of the bundle. The Lavalier II costs $99 on its own, so from that perspective the bundle represents decent value if you're looking for a complete solution.

This article originally appeared on Engadget at https://www.engadget.com/rodes-wireless-pro-mic-kit-lets-you-forget-about-clipped-audio-000028417.html?src=rss