Call of Duty players can bring most of their 'Modern Warfare II' gear over to 'Modern Warfare III'

Activision is doing something different with Call of Duty: Modern Warfare III, this year's entry in the blockbuster military shooter series. Rather than starting from scratch and having to rebuild your collection of weapons and cosmetic items, Activision is letting players carry over nearly everything they unlock in Modern Warfare II.

This so-called Carry Forward initiative also applies to Call of Duty: Warzone, given that content is shared between the mainline games and the free-to-play battle royale. Warzone Mobile, which is slated to arrive later this year, will be integrated into all of this too.

For the most part, your unlocked operators, operator skins, bundles, all weapons, attachments and other rewards and cosmetic items will move forward from MW2 to MW3. What's more, if you continue to level up guns in MW2, that progress will be reflected in MW3.

This is a one-way street, though. Any MW3 progress or unlocks won't be replicated in MW2. There's no Carry Back feature.

The main things that won't progress from the 2022 game to this year's one are cosmetics for vehicles that aren't present in MW3. War Tracks, which are songs that can be played in vehicles, won't move over either. "Some Tactical and Lethal equipment may not be available depending on the removing of those items in MW3, to be replaced with MW3-only equipment," Activision noted in an extensive FAQ.

Even though MW2 and MW3 are handled by different internal studios (Infinity Ward and Sledgehammer Games, respectively), you won't have to jump through any hoops to transfer your weapons and cosmetics either. Activision will handle everything, though of course you'll need to be using the same account or profile for both games.

Activision will release Call of Duty: Modern Warfare III on November 10th. We'll learn much more about the upcoming game at a reveal event, which is set for August 17th.

Meanwhile, following the game's latest trailer, fans are speculating that MW3 will include an updated take on the hugely controversial No Russian mission from the original Modern Warfare II, which came out in 2009. That level wasn't in last year's rebooted version of MW2.

X sued by AFP over not discussing payments for news content

Elon Musk and X, the site formerly known as Twitter, are in more legal trouble. The Agence France-Presse (AFP) is suing X for not engaging in discussions about payment to the French publisher in exchange for its articles appearing on the platform. In 2019, France passed neighboring rights legislation, extending copyright law to content produced by news publishers, such as text and videos, for two years after release. The law requires any sites that share this work to negotiate with the publishers about remuneration instead of sharing it without compensation for its creators. 

This is bizarre. They want us to pay *them* for traffic to their site where they make advertising revenue and we don’t!?

— Elon Musk (@elonmusk) August 3, 2023

In its press release, the AFP stated that it has "expressed its concerns over the clear refusal from Twitter (recently rebranded as 'X') to enter into discussions regarding the implementation of neighbouring rights for the press. These rights were established to enable news agencies and publishers to be remunerated by digital platforms which retain most of the monetary value generated by the distribution of news content."

X isn't the first tech company AFP has gone up against. In 2020, France's competition authority ordered Google to enter negotiations with publishers, and, while it reached an agreement in early 2021, the company was fined €500 million ($546 million) later that year for not reaching a fair agreement. In that case, part of the argument was that Google owns 90 percent of the search market, leaving it in a position where it could abuse its power if an equitable deal wasn't reached. Twitter's influence in this area of the internet isn't nearly as strong, so we'll have to wait and see if it will face the same fight.

Samsung Galaxy Z Fold 4 durability report: Has Samsung finally fixed its foldable phone's biggest weakness?

When Samsung released the original Galaxy Fold, it was about as durable as a Fabergé egg. But over the years, the company has made a number of changes to reduce the fragility of its flagship foldable phone. The Galaxy Z Fold 2 featured a redesigned hinge that prevented dirt from getting inside, while the Z Fold 3 added IPX8 water resistance and a stronger Armor Aluminum Chassis. And last year, the Z Fold 4 brought a more durable main screen and a new adhesive designed to keep its factory-installed screen protector more firmly in place.

That last one is a biggie because after owning a Z Fold 2 and a Z Fold 3, I found that the screen protectors on both phones started bubbling after six to eight months. This weakness is a concern for anyone thinking about buying an $1,800 foldable phone – especially when you consider that Samsung recommends that any repairs be done by an authorized service center. But as someone who really likes foldable phones, I bought my own Z Fold 4 anyway and used it for a year. Here’s how well it held up.

Photo by Sam Rutherford/Engadget

I should mention that I’ve never put the phone in a case or used any other protective accessories like skins or sleeves. Despite being naked the whole time, the phone has done a decent job of withstanding typical daily abuse. Sure, there are some scratches and bare spots where paint has flaked off, plus a few dents from the phone being dropped or falling out of a pocket. But that’s sort of expected for a phone with no additional protection, and both the front and back glass still look great.

More importantly, its flexible main screen looks practically as good as the day I got it. The screen protector is still sitting flat, there are no dead pixels or other blemishes and the hinge feels as sturdy as ever. All told, I’m pretty impressed considering some of the problems I encountered with previous generations. That said, while the pre-installed screen protector hasn’t started bubbling, there is one tiny spot along the top edge at the crease where you can see that it has started to (ever so slightly) separate from the display. So far, this hasn’t caused any issues. However, if past experience is any indication, this could cause the screen protector to start bubbling down the line.

Still, Samsung claimed it switched to a new, stickier adhesive to hold the Z Fold 4’s factory-installed screen protector in place, and at least on my phone, that tweak seems to have had at least some effect. Is the problem completely solved? No, not quite. Remember, this is just a single example, and it’s hard to account for things like the milder winter we’ve had this year, since chillier weather has sometimes caused issues for Z Fold and Z Flip owners.

Also, while my Z Fold 4 has aged rather nicely, the screen protector on Engadget’s executive editor Aaron Souppouris’ Z Flip 4 has not fared nearly as well. He says the screen on his device was basically pristine for the first nine months. But after that, bubbles began to form and grew larger and larger until he removed the protector entirely and began using the phone with its naked flexible display.

It’s important to mention that Samsung instructs Z Flip and Z Fold owners not to use their devices without a screen protector. If you do remove it, you’re supposed to get it replaced as soon as possible. If you’re lucky, that can be as simple as finding a local Best Buy or uBreakiFix location and spending half an hour without your phone, and thankfully, Samsung offers one free screen protector replacement on both the Z Flip and Z Fold lines. Unfortunately, if you live in a remote area or just don’t have a nearby service center, you may need to rely on a mail-in repair, potentially leaving you without a phone for a couple of weeks or more. And for a lot of people, that’s not a reasonable option.

However, after talking to a number of Galaxy Z Flip and Z Fold owners who have removed their screen protectors, I get the sense that this is merely a precaution. It’s totally possible to use a foldable phone without a screen protector, just like you can with a regular handset. But given the more delicate nature of flexible displays (which are largely made of plastic instead of glass), the risk factor is higher. And with flexible screens costing a lot more to replace – up to $599 depending on the specific model – you don’t need a galaxy brain-sized noggin to understand why you might want to heed Samsung’s warnings. The counterpoint is that because a foldable phone’s screen is protected by the rest of the device when closed, it’s only really vulnerable when you’re using it, as opposed to when it's simply resting in a pocket or bag.

Photo by Sam Rutherford/Engadget

So what’s the big takeaway? I think Samsung’s new adhesive has made a bit of a difference because, even in the case of Aaron’s Z Flip, the screen protector lasted longer than on both of my previous Z Folds before it started bubbling. Even so, the screen protectors on Samsung’s foldables still require a bit more babying than a standard glass brick. This sort of fragility may be a deal-breaker for some, and understandably so. Thankfully, I live near multiple repair centers and I’m prepared to use my foldable without a screen protector – even though that’s not advised.

For me, the ability to have a screen that expands when I want to watch a movie or multitask is worth the slightly reduced durability. But either way, this is something you need to consider before buying a foldable phone. In some ways, it’s like owning a car with a convertible roof, because while they're a bit more delicate and costly to repair, there’s nothing like driving around with the top off – or in this case a phone that can transform into a small tablet at a moment’s notice.

Just remember to do the sensible thing and put your expensive foldable phone in a case.

Nothing Phone 2 comes to the US on July 17th for $599

Nothing has finally unveiled the Phone 2 after plenty of teasers, and it's likely what you're looking for if you thought the Phone 1 was underpowered — or if you simply couldn't buy the earlier model where you live. The new device offers performance much closer to a flagship thanks to a Snapdragon 8+ Gen 1 chip versus the mid-tier 778G+ from last year's hardware. While that's still not cutting edge, the company claims it's 80 percent faster. It enables 4K video at 60 frames per second, too, and RAW HDR photography captures eight frames (and thus more overall scene detail) instead of three frames like its predecessor.

Accordingly, Nothing says it has upgraded the Phone 2's camera quality. The updated 50MP primary and 50MP ultra-wide rear cams now have 2X "super-res" digital zoom, object tracking and other imaging updates. The front camera, meanwhile, jumps from a 16MP sensor to 32MP. As with some competitors, there's now an "Action Mode" to deliver extra-stable video recording.

There are some more conspicuous changes. You can expect a larger 6.7-inch, 120Hz LTPO OLED (if still 1080p) screen with a higher 1,600-nit peak brightness and thinner bezels. There's a tapered "2.5D" glass back. And yes, the signature Glyph lighting on the back is more advanced. In addition to more LED segments, you can create different lighting sequences for every contact and notification type. You can also have persistent lights for must-see notifications, and some lights now double as progress trackers for delivery and ride hailing services like Uber.

Software plays a considerably more important role. Where the first model only had a few modest customizations, Nothing OS 2.0 on the Phone 2 lets you tweak considerably more. You can now have multiple home screens with custom color themes, grid sizes and app labels. You'll likewise find customizable folders, and a more advanced widget set includes shortcuts to quick settings. Those widgets are available on the lock screen as well.

The Phone 2 is billed as longer-lasting thanks to its 4,700mAh battery, and you'll get a complete charge in 55 minutes. The 15W wireless charging and 5W reverse wireless charging aren't surprising, but they're not always present in this upper-midrange phone segment.

Crucially, the Nothing Phone 2 will be priced right when it arrives in North America. It will be available in the US and Canada on July 17th at 4AM Eastern starting at $599 (and an oddly high $929 CAD) for a version with 8GB of RAM and 128GB of storage. Pay $699 ($999 CAD) and you'll get 12GB of RAM with 256GB of storage, while the top-end 12GB/512GB configuration sells for $799 ($1,099 CAD). Pre-orders are available now, and there will be early sales on July 13th through physical "Nothing Drops" in New York City (69 Gansevoort Street) and London (4 Peter Street).

There's no mention of North American carrier deals as of this writing, so this sequel might not be as easy to find as more mainstream offerings. However, the launch in the region remains a big deal. The Phone 2 significantly expands the audience for Nothing's handsets, and provides fresh competition to bang-for-the-buck phones like Google's similarly-priced Pixel 7.

Natural Language Programming AIs are taking the drudgery out of coding

“Learn to code.” That three-word pejorative is perpetually on the lips and at the fingertips of internet trolls and tech bros whenever media layoffs are announced. It's a useless sentiment in its own right, but with the recent advent of code-generating AIs, knowing the ins and outs of a programming language like Python could soon be about as useful as knowing how to fluently speak a dead language like Sanskrit. In fact, these genAIs are already helping professional software developers code faster and more effectively by handling much of the programming grunt work.

How coding works

Two of today’s most widely used programming languages are Java and Python. The former almost single-handedly revolutionized cross-platform operation when it was released in the mid-’90s and now drives “everything from smartcards to space vehicles,” according to Java Magazine in 2020 — not to mention Wikipedia’s search function and all of Minecraft. The latter actually predates Java by a few years and serves as the code basis for many modern apps like Dropbox, Spotify and Instagram.

They differ significantly in how they operate. Java needs to be compiled (having its human-readable code translated into computer-executable machine code) before it can run, while Python is an interpreted language, meaning its human-readable code is converted into machine code line by line as the program executes, so it can run without first being compiled. Interpretation makes it easier to write code for multiple platforms, while compiled code tends to be tied to a specific processor type. Regardless of how they run, the actual code-writing process is nearly identical between the two: somebody has to sit down, crack open a text editor or Integrated Development Environment (IDE) and actually write out all those lines of instruction. And until recently, that somebody typically was a human.

The “classical programming” writing process of today isn’t that different from that of the ENIAC era: a software engineer takes a problem, breaks it down into a series of sub-problems, writes code to solve each of those sub-problems in order, and then repeatedly debugs and recompiles the code until it runs. “Automatic programming,” on the other hand, removes the programmer by a degree of separation. Instead of a human writing each line of code individually, the person creates a high-level abstraction of the task and the computer then generates low-level code to address it. This differs from “interactive” programming, which allows you to code a program while it is already running.

Today’s conversational AI coding systems, like what we see in Github’s Copilot or OpenAI’s ChatGPT, remove the programmer even further by hiding the coding process behind a veneer of natural language. The programmer tells the AI what they want programmed and how, and the machine can automatically generate the required code.

Building the tools to build the tools allowing any tool to build tools

Among the first of this new breed of conversational coding AIs was Codex, which was developed by OpenAI and released in late 2021. By that point, OpenAI had already built GPT-3 (the precursor to the GPT-3.5 model that powers the public version of Bing Chat), a large language model remarkably adept at mimicking human speech and writing after being trained on billions of words from the public web. The company then fine-tuned that model using 100-plus gigabytes of GitHub data to create Codex. It is capable of generating code in 12 different languages and can translate existing programs between them.

Codex is adept at generating small, simple or repeatable assets, like “a big red button that briefly shakes the screen when clicked” or regular functions like the email address validator on a Google Web Form. But no matter how prolific your prose, you won’t be using it for complex projects like coding a server-side load balancing program — it’s just too complicated an ask.
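
To give a sense of scale, that "email address validator" use case boils down to a handful of lines. Below is a hypothetical Python sketch of the sort of small, self-contained helper a tool like Codex can produce from a one-sentence prompt; the pattern and function name are illustrative, not output captured from Codex itself.

```python
import re

# Hypothetical example of the kind of small, repeatable asset a code-generating
# AI can produce from a prompt like "validate an email address".
EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_email(address: str) -> bool:
    """Return True if the address loosely matches name@domain.tld."""
    return bool(EMAIL_PATTERN.match(address))

print(is_valid_email("reader@example.com"))  # True
print(is_valid_email("not-an-email"))        # False
```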

Google’s DeepMind developed AlphaCode specifically to address such challenges. Like Codex, AlphaCode was first trained on multiple gigabytes of existing GitHub code archives, but was then fed thousands of coding challenges pulled from online programming competitions, like figuring out how many binary strings with a given length don’t contain consecutive zeroes.
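
That particular puzzle has a tidy dynamic-programming answer, which is part of what makes it a good automated benchmark. Here is a minimal Python sketch of one way a human (or a model) might solve it; the function name is ours, not DeepMind's.

```python
# Count binary strings of length n that contain no two consecutive zeroes.
# The count follows a Fibonacci-style recurrence.
def count_no_consecutive_zeroes(n: int) -> int:
    end_in_one, end_in_zero = 1, 1  # strings of length 1 ending in '1' / '0'
    for _ in range(n - 1):
        # Appending '1' is always legal; '0' is only legal after a '1'.
        end_in_one, end_in_zero = end_in_one + end_in_zero, end_in_one
    return end_in_one + end_in_zero

print([count_no_consecutive_zeroes(n) for n in range(1, 6)])  # [2, 3, 5, 8, 13]
```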

To do this, AlphaCode will generate as many as a million code candidates, then reject all but the roughly 1 percent that pass its test cases. The system then groups the remaining programs based on the similarity of their outputs and sequentially tests them until it finds a candidate that successfully solves the given problem. Per a 2022 study published in Science, AlphaCode managed to correctly answer those challenge questions 34 percent of the time (compared to Codex’s single-digit success on the same benchmarks, that’s not bad). DeepMind even entered AlphaCode in a 5,000-competitor online programming contest, where it surpassed nearly 46 percent of the human competitors.
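
In rough terms, that filter-then-cluster loop looks something like the Python sketch below. It assumes hypothetical candidates, tests and probe_inputs collections and is only meant to illustrate the strategy described above, not DeepMind's actual pipeline.

```python
from collections import defaultdict

def pick_candidate(candidates, tests, probe_inputs):
    # Step 1: keep only candidates that pass every known test case.
    survivors = [f for f in candidates
                 if all(f(x) == expected for x, expected in tests.items())]

    # Step 2: group survivors by their behavior on extra probe inputs, so
    # functionally identical programs are only submitted once per group.
    groups = defaultdict(list)
    for f in survivors:
        signature = tuple(f(x) for x in probe_inputs)
        groups[signature].append(f)

    # Step 3: yield one representative from each group, largest group first.
    for _, members in sorted(groups.items(), key=lambda g: -len(g[1])):
        yield members[0]

# Usage: feed each yielded candidate to the real judge until one is accepted.
```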

Now even the AI has notes

Just as GPT-3.5 serves as a foundational model for ChatGPT, Codex serves as the basis for GitHub’s Copilot AI. Trained on billions of lines of code assembled from the public web, Copilot offers cloud-based AI-assisted coding autocomplete features through a subscription plugin for the Visual Studio Code, Visual Studio, Neovim, and JetBrains integrated development environments (IDEs).

Initially released as a developer’s preview in June of 2021, Copilot was among the very first coding-capable AIs to reach the market. More than a million devs have leveraged the system in the two years since, GitHub's VP of Product Ryan J Salva told Engadget during a recent interview. With Copilot, users can generate runnable code from natural language text inputs as well as autocomplete commonly repeated code sections and programming functions.

Salva notes that prior to Copilot’s release, GitHub’s previous machine-generated coding suggestions were only accepted by users 14 to 17 percent of the time, “which is fine. It means it was helping developers along.” In the two years since Copilot’s debut, that figure has grown to 35 percent, “and that's netting out to just under half of the amount of code being written [on GitHub] — 46 percent by AI to be exact.”

“[It’s] not a matter of just percentage of code written,” Salva clarified. “It's really about the productivity, the focus, the satisfaction of the developers who are creating.”

As with the outputs of natural language generators like ChatGPT, the code coming from Copilot is largely legible. But because Copilot, like any large language model, was trained on the open internet, GitHub made sure to incorporate additional safeguards against the system unintentionally producing exploitable code.

“Between when the model produces a suggestion and when that suggestion is presented to the developer,” Salva said, “we at runtime perform … a code quality analysis for the developer, looking for common errors or vulnerabilities in the code like cross-site scripting or path injection.”

That auditing step is meant to improve the quality of recommended code over time rather than monitor or police what the code might be used for. While Copilot can help developers create the code that makes up malware, the system won’t prevent it. “We've taken the position that Copilot is there as a tool to help developers produce code,” Salva said, pointing to the numerous white hat applications for such a system. “Putting a tool like Copilot in their hands … makes them more capable security researchers,” he continued.

As the technology continues to develop, Salva sees generative AI coding expanding far beyond its current technological bounds. That includes “taking a big bet” on conversational AI. “We also see AI-assisted development really percolating up into other parts of the software development life cycle,” he said, like using AI to autonomously repair CI/CD build errors, patch security vulnerabilities, or review human-written code.

“Just as we use compilers to produce machine-level code today, I do think they'll eventually get to another layer of abstraction with AI that allows developers to express themselves in a different language,” Salva said. “Maybe it's natural language like English or French, or Korean. And that then gets ‘compiled down’ to something that the machines can understand,” freeing up engineers and developers to focus on the overall growth of the project rather than the nuts and bolts of its construction.

From coders to gabbers

With human decision-making still firmly wedged within the AI programming loop, at least for now, we have little to fear from having software writing software. As Salva noted, computers already do this to a degree when compiling code, and digital gray goos have yet to take over because of it. Instead, the most immediate challenges facing programming AI mirror those of generative AI in general: inherent biases skewing training data, model outputs that violate copyright, and concerns surrounding user data privacy when it comes to training large language models.

GitHub is far from alone in its efforts to build an AI programming buddy. OpenAI’s ChatGPT is capable of generating code — as are the already countless indie variants being built atop the GPT platform. So too is Amazon’s AWS CodeWhisperer system, which provides much of the same autocomplete functionality as Copilot, but optimized for use within the AWS framework. After multiple requests from users, Google incorporated code generation and debugging capabilities into Bard this past April as well, ahead of its ecosystem-wide pivot to embrace AI at I/O 2023 and the release of Codey, Alphabet’s answer to Copilot. We can’t be sure yet what generative coding systems will eventually become or how they might impact the tech industry — we could be looking at the earliest iterations of a transformative democratizing technology, or it could be Clippy for a new generation.

Apple's Assistive Access simplifies iOS 16 for people with cognitive disabilities

With Global Accessibility Awareness Day just days away, Apple is previewing a raft of new iOS features for cognitive accessibility, along with Live Speech, Personal Voice and more. The company said it worked in "deep collaboration" with community groups representing users with disabilities, and drew on "advances in hardware and software, including on-device machine learning" to make them work. 

The biggest update is "Assistive Access," which is designed to support users with cognitive disabilities. Essentially, it provides a custom, simplified experience for the Phone, FaceTime, Messages, Camera, Photos and Music apps. That includes a "distinct interface with high contrast buttons and large text labels" along with tools that can be customized by trusted supporters for each individual.

"For example, for users who prefer communicating visually, Messages includes an emoji-only keyboard and the option to record a video message to share with loved ones. Users and trusted supporters can also choose between a more visual, grid-based layout for their Home Screen and apps, or a row-based layout for users who prefer text," Apple wrote. 

The aim is to break down technological barriers for people with cognitive disabilities. "The intellectual and developmental disability community is bursting with creativity, but technology often poses physical, visual, or knowledge barriers for these individuals," said The Arc's Katy Schmid in a statement. "To have a feature that provides a cognitively accessible experience on iPhone or iPad — that means more open doors to education, employment, safety, and autonomy. It means broadening worlds and expanding potential." 

Another important new feature is Live Speech and Personal Voice for iPhone, iPad and Mac. Live Speech lets users type what they want to say and have it spoken out loud during phone and FaceTime calls or for in-person conversations. For users who can still speak but are at risk of losing their ability to do so due to a diagnosis of ALS or other conditions, there's the Personal Voice feature.

It lets them create a voice that sounds like their own by reading along with a randomized set of text prompts to record 15 minutes of audio on iPhone or iPad. It then uses on-device machine learning to keep user information private, and works with Live Speech so users can effectively speak with others using a version of their own voices. "If you can tell [your friends and family] you love them, in a voice that sounds like you, it makes all the difference in the world," said Team Gleason board member and ALS advocate Philip Green, who has had his own voice impacted by ALS. 

Finally, Apple has introduced a Point and Speak function in the Magnifier to help users with vision disabilities interact with physical objects. "For example, while using a household appliance — such as a microwave — Point and Speak combines input from the Camera app, the LiDAR Scanner, and on-device machine learning to announce the text on each button as users move their finger across the keypad," it wrote. The feature is built into the Magnifier app on iPhone and iPad, and can be used with other Magnifier features like People Detection, Door Detection and others. 

Along with the new functions, Apple is introducing new features, curated collections and more for Global Accessibility Awareness Day. Those include a SignTime launch in Germany, Italy, Spain and South Korea to connect Apple Store and Support customers with on-demand sign language interpreters, along with informative accessibility sessions at select Apple Store locations around the world. It's also offering podcasts, movies and more around the impact of accessible tech. The new Assistive Access and other features are set to roll out later this year, Apple said — for more, check out its press release.

Google rolls out support for passkeys across its services

When you check the security settings of your Google account, you will now find a new section marked "Passkeys." That's because the tech giant has started rolling out support for the new authentication technology, which offers a passwordless experience across its services. I'm already seeing the option in my accounts, and activating it for my phone and laptop was almost a one-click experience.

The technology uses your device biometrics — your fingerprint or your face — or its PIN to confirm that it's you logging in. However, it's completely different from using your biometrics to auto-populate username and password boxes. Creating a passkey for your account generates a pair of cryptographic keys, one private and one public. The private key stays on your device, and it's what Google will use to verify your identity against the public key uploaded to its servers. Passkeys are considered more secure than current login technologies, since private keys only stay on the device where they're created and can't be stolen if a hacker breaks into Google's servers. The fact that you don't have to use a password to sign in means the technology can also protect you from phishing attempts.
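
For the curious, the underlying idea is ordinary public-key cryptography: the server issues a random challenge, the device signs it with the locked-away private key after a local biometric or PIN check, and the server verifies the signature with the public key it holds. The Python sketch below, using the third-party cryptography package, illustrates that principle only; it is not Google's actual passkey implementation.

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Enrollment: the device generates a key pair and shares only the public key.
private_key = Ed25519PrivateKey.generate()  # never leaves the device
public_key = private_key.public_key()       # uploaded to the server

# Sign-in: the server sends a random challenge, the device signs it, and the
# server checks the signature against the stored public key.
challenge = os.urandom(32)
signature = private_key.sign(challenge)

try:
    public_key.verify(signature, challenge)
    print("Signature valid — user authenticated")
except InvalidSignature:
    print("Signature invalid — sign-in rejected")
```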

Google has been championing the use of passwordless logins and added passkey support for Chrome and Android last year. That said, it will not be removing the option to sign in using passwords — or to activate two-factor authentication — which will be especially helpful if you have a device that doesn't support the newer technology yet. If you log into your account on multiple devices, you can create a passkey for each one of them, unless you have access to a service that backs up or syncs passkeys. A passkey you create on an iPhone, for instance, will sync with devices that use the same iCloud account, so it can also be available on an iPad or a MacBook.

You can also use a passkey stored on your current phone to sign into a new device. Just choose "use a passkey from another device" and click through, after which Google will ask if you want to create a separate passkey for that device. 

In a blog post, the Google Account Security and Safety teams said:

"Today's launch is a big step in a cross-industry effort that we started more than 10 years ago and we are committed to passkeys as the future of secure sign-in, for everyone. We hope that other web and app developers adopt passkeys as well and are able to use our deployment as a model."

Microsoft’s Activision Blizzard purchase will reportedly be approved by the EU

Microsoft has reportedly cleared a major regulatory hurdle as it tries to move toward finalizing its Activision Blizzard purchase. The company’s licensing offers to competitors are expected to appease European Union (EU) antitrust concerns about the $69 billion acquisition, according to Reuters. The EU previously said it believed the deal could “significantly reduce competition” in PC, console and cloud gaming.

The EU isn’t expected to demand asset sales to approve the deal. However, the potential sale of Call of Duty has been a point of contention; Microsoft wants to hang onto the property while using the licensing agreements to quell regulators. The company has pledged to keep the franchise on competing platforms for at least 10 years if the purchase closes; it’s even bringing Call of Duty to Nintendo’s consoles.

Microsoft says it’s “committed to offering effective and easily enforceable solutions that address the European Commission’s concerns.” “Our commitment to grant long-term 100% equal access to Call of Duty to Sony, Steam, NVIDIA and others preserves the deal’s benefits to gamers and developers and increases competition in the market,” a Microsoft spokesperson told Reuters.

The company announced the deal in January 2022 to help it compete against industry leaders Tencent and Sony while developing its take on the metaverse. “Gaming is the most dynamic and exciting category in entertainment across all platforms today and will play a key role in the development of metaverse platforms,” Microsoft CEO Satya Nadella said at the time.

Microsoft will still need to appease the US Federal Trade Commission and UK regulators before the deal can be finalized. The company only has until July to sort out the antitrust concerns, or it will need to renegotiate or abandon the purchase (which would mean paying a breakup fee of up to $3 billion).

DJI's $369 Mini 2 SE drone can fly up to 10km away

The rumors were true: DJI is releasing a new Mini 2 SE drone that features a couple of upgrades over the company’s existing entry-level drone. Most notably, DJI has equipped the Mini 2 SE with its in-house OcuSync 2.0 transmission system, meaning the drone can now effectively fly more than twice as far away as the original Mini SE. That model’s “Enhanced WiFi” system limited its range to 4km. The new system should also maintain a more stable video feed at greater distances. That said, the addition of OcuSync 2.0 might not be as valuable as the numbers suggest. Most jurisdictions require that you maintain a visual line of sight with your drone, and with a UAV as small as the Mini 2 SE, it’s very likely you’ll lose sight of it long before you get a chance to fly it 10km away.

Additionally, DJI says the Mini 2 SE can fly for 31 minutes on a single battery charge, a modest upgrade from the previous model’s maximum 30-minute flight time. Aside from those changes, the Mini 2 SE is nearly identical to the model it’s about to replace. That’s not necessarily a bad thing. Like its predecessor, the Mini 2 SE weighs less than 249 grams, meaning you’re not required to register it with the Federal Aviation Administration. The new drone also carries over the aging but decent camera system found on the Mini SE. It comes with a three-axis gimbal and a 1/2.3-inch CMOS sensor capable of capturing 2.7K video and 12-megapixel stills.

The DJI Mini 2 SE will cost $369 when it arrives next month. In addition to selling the drone on its own, DJI will offer the Mini 2 SE as part of a “Fly More Combo” bundle that comes with additional batteries, replacement propellers and a carrying case for $519.

Google unveils Bard, its ChatGPT rival

ChatGPT, the automated text generation system from OpenAI, has taken the world by storm in the two months since its public beta release, but that time alone in the spotlight is quickly coming to an end. Google announced on Monday that its long-rumored chatbot AI project is real and on the way. It's called Bard.

Bard will serve as an "experimental conversational AI service," per a blog post by Google CEO Sundar Pichai Monday. It's built atop Google's existing Language Model for Dialogue Applications (LaMDA) platform, which the company has been developing for the past two years. 

"Bard seeks to combine the breadth of the world’s knowledge with the power, intelligence and creativity of our large language models," Pichai declared. "It draws on information from the web to provide fresh, high-quality responses." Whether that reliance on the internet results in bigoted or racist behavior, as seemingly every chatbot before it has exhibited, remain to be seen.

The program will not simply be opened to the internet as ChatGPT was. Google is starting with the release of a lightweight version of LaMDA, which demands far less computing power than its full-sized counterpart, for a select group of trusted users before scaling up from there. "We’ll combine external feedback with our own internal testing to make sure Bard’s responses meet a high bar for quality, safety and groundedness in real-world information," Pichai said. "We’re excited for this phase of testing to help us continue to learn and improve Bard’s quality and speed."

Chatting with internet users is only the next step in Google's larger AI machinations. Pichai notes that as user search requests become more complex and nuanced, "you’ll see AI-powered features in Search that distill complex information and multiple perspectives into easy-to-digest formats, so you can quickly understand the big picture and learn more from the web." He added that such features would be rolling out to users "soon." The commercial API running atop LaMDA, dubbed Generative Language API, will begin inviting select developers to explore the system starting next month.