Meta faces another lawsuit over child safety

New Mexico is the latest jurisdiction to accuse Meta of failing to protect younger users. The state attorney general's office filed suit against the company this week after investigators set up test accounts on Instagram and Facebook in which they claimed to be preteens or teenagers. They used AI-generated profile photos for the accounts. The AG's office asserts that the accounts were barraged by explicit messages and images, along with sexual propositions from users. It also claimed that Meta's algorithms recommended sexual content to the test accounts.

The suit claims that “Meta has allowed Facebook and Instagram to become a marketplace for predators in search of children upon whom to prey,” according to The Wall Street Journal. In addition, it asserts that Meta failed to employ measures to stop those under 13 from using its platforms and that CEO Mark Zuckerberg is personally liable for product decisions that increased risks to children.

To get around Meta's age restrictions, investigators provided the company with adult dates of birth while setting up phony accounts for four children (kids often misstate their ages to access online services they're not supposed to use). However, they implied that the accounts were being used by children — one posted about losing a baby tooth and starting seventh grade. Per the suit, investigators also set up one account to make it seem as though the fictional child's mother might be trafficking her.

The suit alleges that, among other things, the accounts were sent child sex images and offers to pay for sex. Two days after investigators set up an account for a phony 13-year-old girl, Meta's algorithms suggested it follow a Facebook account with upwards of 119,000 followers that posted adult porn.

Investigators flagged inappropriate material (including some images that appeared to be of nude, underage girls) through Meta's reporting systems. According to the suit, Meta's systems often found these images to be permissible on its platforms.

In a statement to the Journal, Meta claimed it prioritizes child safety and invests heavily in safety teams. “We use sophisticated technology, hire child safety experts, report content to the National Center for Missing and Exploited Children, and share information and tools with other companies and law enforcement, including state attorneys general, to help root out predators,” the company said. Meta also says it works to prevent malicious adults from contacting children on its platforms.

Earlier this year, Meta set up a task force to tackle child safety issues after reports indicated Instagram's algorithms helped connect accounts that commission and purchase underage-sex material. Just last week, the Journal reported on the alleged prevalence of child exploitation material on Instagram and Facebook. According to the Canadian Centre for Child Protection, a “network of Instagram accounts with as many as 10 million followers each has continued to livestream videos of child sex abuse months after it was reported to the company.” Meta says it has taken action on such issues.

The New Mexico lawsuit follows suits that a group of 41 states and the District of Columbia filed in October. Among other things, those alleged that the company knew its “addictive” features were harmful to young users and that it misled people about safety on its platforms.

Twitch to cease operations in South Korea over ‘prohibitively expensive’ network fees

Twitch is leaving South Korea, with plans to cease all operations on February 27. This is due to ‘prohibitively expensive’ networking fees, according to CEO Dan Clancy. The news is a major bummer, as the country is one of the largest esports markets in the world, with some of the most competitive League of Legends and StarCraft players around.

Clancy calls this a “unique situation,” noting that operating in South Korea ends up being ten times more expensive than in other countries. He went on to write that Twitch undertook a “significant effort” to continue operations, but the Amazon-owned company simply couldn’t afford it.

Some of these efforts included incorporating a lower-cost peer-to-peer model and downgrading the resolution of streams to 720p, according to TechCrunch. The company had been running at a significant loss and it decided to, well, stop doing that. 

“I want to reiterate that this was a very difficult decision and one we are very disappointed we had to make. Korea has always and will continue to play a special role in the international esports community and we are incredibly grateful for the communities they built on Twitch,” wrote Clancy.

Netflix has also been open about its struggles to continue operating in South Korea. The streaming giant and local internet service provider SK Broadband tossed lawsuits back and forth over networking fees before settling in September. As usual, consumers got the shaft on this one, as Netflix ended up raising prices by around 13 percent.

So what’s the issue exactly? It all boils down to a particular type of internet traffic tax employed in South Korea called the “Sending Party Network Pays” (SPNP) model. This tax requires the tech company, Twitch in this case, to pay a fee to the ISP for traffic to be delivered to the end user. Foreign companies resisted these efforts for years but there have been recent crackdowns, and here we are.
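
To make the economics concrete, here's a back-of-the-envelope sketch of how a sending-party-pays fee scales with delivered traffic. The per-gigabyte rate and viewing figures below are hypothetical placeholders, not published Korean tariffs:

```python
# Illustrative only: how a "Sending Party Network Pays" fee scales with
# delivered traffic. The rate and usage numbers are hypothetical.
FEE_PER_GB_USD = 0.02  # assumed per-gigabyte settlement rate, not a real tariff

def monthly_spnp_fee(viewers: int, hours_per_viewer: float, gb_per_hour: float) -> float:
    """Fee a streaming service would owe an ISP for a month of delivered traffic."""
    total_gb = viewers * hours_per_viewer * gb_per_hour
    return total_gb * FEE_PER_GB_USD

# A 720p stream works out to very roughly 1 GB per viewer-hour.
print(f"${monthly_spnp_fee(100_000, 20, 1.0):,.0f}")  # -> $40,000 for one month
```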

South Korea is the first country to enforce the SPNP model, but other nations are looking to follow suit. India, for instance, has expressed interest in changing its telecom rules in favor of ISPs, and the EU has been debating the issue since March. As for Twitch, the company’s hosting a live stream today to address concerns from Korean users.

The first affordable headphones with MEMS drivers don't disappoint

The headphone industry isn’t known for its rapid evolution. There are developments like spatial sound and steady advances in Bluetooth audio fidelity, but for the most part, the industry counts advances in decades rather than years. That makes the arrival of the Aurvana Ace headphones — the first wireless buds with MEMS drivers — quite the rare event. I recently wrote about what exactly MEMS technology is and why it matters, but Creative is the first consumer brand to sell a product that uses it.

Creative unveiled two models in tandem: the Aurvana Ace ($130) and the Aurvana Ace 2 ($150). Both feature MEMS drivers; the main difference is that the Ace supports high-resolution aptX Adaptive while the Ace 2 has top-of-the-line aptX Lossless (sometimes marketed as “CD quality”). The Ace 2 is the model we’ll be referring to from here on.

In fairness to Creative, the inclusion of MEMS drivers alone would be a unique selling point, but the aforementioned aptX support adds another layer of hi-fi credentials to the mix. Then there’s adaptive ANC, plus details like wireless charging, that give the Ace 2 a strong spec sheet for the price. The obvious omissions are quality-of-life features like auto-pausing playback when you remove a bud and audio personalization. Those would have been two easy wins to make both models hard to beat at this price, in terms of features if nothing else.

When I tested the first-ever xMEMS-powered in-ear monitors, the Singularity Oni, the extra detail in the high end was instantly obvious, especially in genres like metal and drum & bass. The lower frequencies were more of a challenge; xMEMS, the company behind the drivers in both the Oni and the Aurvana, concedes that a hybrid setup with a conventional bass driver might be the preferred option until its own speakers can handle more bass. That’s exactly what we have here in the Aurvana Ace 2.

The key difference between the Aurvana Ace 2 and the Oni, though, is more important than a good low-end thump (if that’s even possible). MEMS-based headphones need a small amount of “bias” power to work. This doesn’t impact battery life, but Singularity delivered it via a dedicated DAC with a specific xMEMS “mode,” while Creative uses a dedicated amp chip, demonstrating for the first time consumer MEMS headphones in a wireless configuration. The popularity of true wireless (TWS) headphones these days means that if MEMS is to catch on, it has to work untethered.

The good news is that, even without the expensive iFi DAC the Singularity Oni IEMs required, the Aurvana Ace 2 delivers more clarity in the higher frequencies than rival products at this price. That is to say, even with improved bass, the MEMS drivers clearly favor the mid and high frequencies. The result is a sound that strikes a good balance between detail and body.

Listening to “Master of Puppets,” the iconic chords had better presence and “crunch” than on a $250 pair of on-ear headphones I tried. Likewise, the aggressive snares in System of a Down’s “Chop Suey!” pop right through just as you’d hope. When I listened to the same song on the $200 Grell Audio TWS/1 with personalized audio activated, the two were actually comparable. Creative’s just sounded like that out of the box, though the Grell buds have slightly better dynamic range overall and more emphasis on the vocals.

For more electronic genres, the Aurvana Ace 2’s hybrid setup really comes into play. Listening to Dead Prez’s “Hip-Hop” shows off the bass capabilities, with more oomph here than on both the Grell and a $160 pair of House of Marley Redemption 2 ANC buds — but it never felt overdone, fuzzy or loose.

Despite besting other headphones in specific like-for-like comparisons, as a whole the nuances and differences between these headphones are harder to quantify. The only set I tested that sounded consistently better, to me, was the Denon PerL Pro (formerly known as the NuraTrue Pro), but at $349 those are also the most expensive.

It would be remiss of me not to point out that there were also many songs and tests where the differences between the various sets of earbuds were much harder to discern. With two iPhones, one Spotify account and a lot of swapping between headphones during the same song, it’s possible to tease out small preferences between different sets. But form factor, personal preference and price point dictate that, to some extent, they all broadly overlap sonically.

The promise of MEMS drivers isn’t just about fidelity, though. The claim is that the lack of moving parts and a semiconductor-like fabrication process ensure a higher level of consistency with less need for calibration and tuning. The end result is a more reliable production process, which should mean lower costs. In turn, this could translate into better value for money, or at least a potentially more durable product, if companies choose to pass those savings on, of course.

For now, we’ll have to wait and see whether other companies explore MEMS drivers in their own products or whether the technology remains a specialist option for enthusiasts, alongside the likes of planar magnetic and electrostatic drivers. One thing’s for sure: Creative’s Aurvana Ace series offers a great audio experience alongside premium features like wireless charging and aptX Lossless for a reasonable price — what’s not to like about that?

Apple and Google are probably spying on your push notifications

Foreign governments likely spy on your smartphone usage, and now Senator Ron Wyden's office is pushing for Apple and Google to reveal exactly how that works. Push notifications, the dings you get from apps calling your attention back to your phone, may be handed over by the companies to government agencies on request. But it appears the Department of Justice won't let the companies come clean about the practice.

Push notifications don't actually come straight from the app. Instead, they pass through the platform provider, Apple for iPhones or Google for Android, which delivers the notifications to your screen. This creates murky room for government surveillance. "Because Apple and Google deliver push notification data, they can be secretly compelled by governments to hand over this information," Wyden wrote in the letter on Wednesday.

Apple claims it was barred from coming clean about this process, which is why Wyden's letter specifically targets the Department of Justice. “In this case, the federal government prohibited us from sharing any information and now that this method has become public we are updating our transparency reporting to detail these kinds of requests,” Apple said in a statement to Engadget. Apple's next transparency report will include requests for push notification tokens, according to the company. Specifically, Wyden asks the DOJ to let Apple and Google tell customers and the general public about the demand for these app notification records. Google did not respond to a request for comment by the time of publication.

It's even more complicated because apps can't do much about it. Even if an app makes its own security pledges, if it delivers push notifications, it must use the Apple or Google system to do so. In theory, this means details of your private messages could be shared with a foreign government whenever you're getting push notifications from the app. That includes any metadata about the notification, too, like account information.
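
For a sense of what actually transits Apple's and Google's servers, here's an illustrative sketch of the kind of payload an app's backend hands to a push relay, loosely modeled on the message shape of FCM's legacy HTTP API. The values are made up, but the point stands: the device token, the visible text and any attached metadata all pass through the intermediary:

```python
# Sketch of a push message as it passes through a platform relay such as
# Google's FCM (shape loosely follows the legacy HTTP API; values are fake).
push_message = {
    "to": "DEVICE_REGISTRATION_TOKEN",   # identifies the recipient's device
    "notification": {
        "title": "New message",
        "body": "Alice: see you at 7?",  # message preview is visible to the relay
    },
    "data": {                            # app-defined metadata rides along too
        "conversation_id": "chat-4821",
        "sender_account": "alice@example.com",
    },
}
```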

The revelation about push notifications comes at a time when privacy and security have become a selling point. Companies advertise how they'll keep your information safe, but as more loopholes come to light, it's becoming harder to suss out what's actually trustworthy.

GTA 6, The Game Awards and the great indie debate | This week's gaming news

After a slow month in the world of video game marketing, things are starting to pick up. The past week has given us a first look at the new Fallout TV show, a few release dates and a trailer for a little game called Grand Theft Auto VI — and the Game Awards are still to come. What good timing for us to launch a weekly video game show to dig into the news.

This week’s stories

The Game Awards

The Game Awards will go live on Thursday, December 7, at 7:30PM ET. Expect a few hours of game announcements, new trailers, awkward interviews and musical performances, including one by the fictional band from Alan Wake 2.

Fallout, but on TV!

Amazon dropped the first trailer for its live-action Fallout series — and, man, it sure does look like Fallout. The show is set in Los Angeles 200 years after the nuclear apocalypse, and it stars Yellowjackets actor Ella Purnell, plus Walton Goggins, Aaron Moten and Kyle MacLachlan. It’s heading to Prime Video on April 12, 2024.

GTA VI is coming in 2025

The biggest news item this week, pre-Game Awards, was the first official trailer for Grand Theft Auto VI. As of writing, it's already reached 105 million views on YouTube — a pace usually reserved for only the finest K-Pop videos. GTA VI is set in Vice City, it’s coming out in 2025 and I'm sure we’ll hear a lot more about it before then.

What is an indie game?

The meat of this week’s episode focuses on the longstanding debate about what “indie” actually means. One of the titles nominated for Best Independent Game at the Game Awards, Dave the Diver, was commissioned and bankrolled by Nexon, one of the largest video game companies in South Korea. It’s not indie, and its inclusion in this category highlights how little consensus there still is around the definition.

This is kinda my area of expertise — it’s my 13th year as a video game journalist, and indie games have always been a core part of my reporting. I’ve spent a lot of time thinking about what I mean when I say “indie,” so I sat down and formalized this thought process. There are three questions that can help define a game in an indie gray area: Is the team on the mainstream system’s payroll? Is the game or team owned by a platform holder? Do the artists have creative control? I dug into these questions this week and discussed why having a publisher isn’t related to the indie label at all.

But when all else fails in the indie debate, there’s one ultimate question to ask: Can this team exist without my support? This is why the distinction matters: The indie label helps identify the artists who would not exist without game sales, crowdfunding or word-of-mouth support from players. It singles out the teams that are truly living and dying on game sales, and it helps players decide where to spend their money. If Dave the Diver didn’t sell well, its team would likely have the chance to try again. If, say, Pizza Tower didn’t sell well, its studio could have folded.

I think this is an important conversation, so give that story a read and let us know in the comments if you think my questions help or just make things more confusing. It’s probably a little bit of both.

Now playing

I’ve been thoroughly enjoying The Cosmic Wheel Sisterhood on Steam Deck — it’s the latest game from Deconstructeam, the indie studio that made The Red Strings Club and Gods Will Be Watching. The Cosmic Wheel Sisterhood is a game about building tarot decks, manipulating elections, betraying a coven of witches and seducing everyone; it’s sexy and well-written, and I highly recommend it. Another game I’m looking forward to is A Highland Song from indie studio Inkle; it just came out this week and I’m excited to dive in.

Let us know in the comments what you’re playing! Also, we still don’t know what to call this weekly video game news show, so leave us some name suggestions, too. Thanks!

Google's answer to GPT-4 is Gemini: 'the most capable model we’ve ever built'

OpenAI's spot atop the generative AI heap may be coming to an end as Google officially introduced its most capable large language model to date on Wednesday, dubbed Gemini 1.0. It's the first of “a new generation of AI models, inspired by the way people understand and interact with the world,” CEO Sundar Pichai wrote in a Google blog post.

“Ever since programming AI for computer games as a teenager, and throughout my years as a neuroscience researcher trying to understand the workings of the brain, I’ve always believed that if we could build smarter machines, we could harness them to benefit humanity in incredible ways,” Pichai continued.

The result of extensive collaboration between Google’s DeepMind and Research divisions, Gemini has all the bells and whistles cutting-edge genAIs have to offer. "Its capabilities are state-of-the-art in nearly every domain," Pichai declared. 

The system has been developed from the ground up as an integrated multimodal AI. Many foundation models can essentially be thought of as groups of smaller models stacked in a trench coat, with each individual model trained to perform its specific function as part of the larger whole. That’s all well and good for shallow functions like describing images, but not so much for complex reasoning tasks.

Google, conversely, pre-trained and fine-tuned Gemini “from the start on different modalities,” allowing it to “seamlessly understand and reason about all kinds of inputs from the ground up, far better than existing multimodal models,” Pichai said. Being able to take in all these forms of data at once should help Gemini provide better responses on more challenging subjects, like physics.

Gemini can code as well. It’s reportedly proficient in popular programming languages including Python, Java, C++ and Go. Google has even leveraged a specialized version of Gemini to create AlphaCode 2, a successor to last year's competition-winning generative AI. According to the company, AlphaCode 2 solved twice as many challenge questions as its predecessor, which would put its performance above an estimated 85 percent of the previous competition’s participants.

While Google did not immediately share the number of parameters that Gemini can utilize, the company did tout the model’s operational flexibility and ability to work in form factors from large data centers to local mobile devices. To accomplish this transformational feat, Gemini is being made available in three sizes: Nano, Pro and Ultra. 

Nano, unsurprisingly, is the smallest of the trio, designed primarily for on-device tasks. Pro is the next step up, a more versatile offering than Nano, and will soon be integrated into many of Google’s existing products, including Bard.

Starting Wednesday, Bard will begin using a specially tuned version of Pro that Google promises will offer “more advanced reasoning, planning, understanding and more.” The improved Bard chatbot will be available in the same 170 countries and territories as regular Bard, and the company reportedly plans to expand the new version's availability through 2024. Next year, with the arrival of Gemini Ultra, Google will also introduce Bard Advanced, an even beefier AI with added features.

Pro’s capabilities will also be accessible via API calls through Google AI Studio or Google Cloud Vertex AI. Search (specifically SGE), Ads, Chrome and Duet AI will also see Gemini functionality integrated into their features in the coming months.
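
For developers, that means calls along these lines. Here's a minimal sketch using the google-generativeai Python SDK that accompanies AI Studio, assuming you've generated an API key there (model name and method shapes follow Google's launch documentation, but treat the details as subject to change):

```python
# Minimal Gemini Pro request via the google-generativeai SDK
# (pip install google-generativeai). Assumes an API key from Google AI Studio.
import google.generativeai as genai

genai.configure(api_key="YOUR_AI_STUDIO_KEY")  # placeholder key

model = genai.GenerativeModel("gemini-pro")
response = model.generate_content("Summarize what makes Gemini multimodal.")
print(response.text)
```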

Gemini Ultra won’t be available until at least 2024, as it reportedly requires additional red-team testing before being cleared for release to “select customers, developers, partners and safety and responsibility experts” for testing and feedback. But when it does arrive, Ultra promises to be an incredibly powerful tool for further AI development.

Google announces new AI processing chips and a cloud 'hypercomputer'

Undoubtedly, 2023 has been the year of generative AI, and Google is marking its end with even more AI developments. The company has announced the creation of its most powerful TPU (Tensor Processing Unit) yet, Cloud TPU v5p, and an AI Hypercomputer from Google Cloud. "The growth in [generative] AI models — with a tenfold increase in parameters annually over the past five years — brings heightened requirements for training, tuning, and inference," Amin Vahdat, Google's Engineering Fellow and Vice President for the Machine Learning, Systems, and Cloud AI team, said in a release.

The Cloud TPU v5p is an AI accelerator for training and serving models. Google designed Cloud TPUs for models that are large, have long training periods, consist mostly of matrix computations and have no custom operations inside their main training loop, built with frameworks such as TensorFlow or JAX. Each TPU v5p pod packs in 8,960 chips, linked by Google's highest-bandwidth inter-chip interconnect.

The Cloud TPU v5p follows previous iterations like the v5e and v4. According to Google, the TPU v5p offers two times greater FLOPS than the TPU v4 and is four times more scalable in terms of FLOPS per pod. It can also train large language models 2.8 times faster and embeddings-dense models 1.9 times faster than the TPU v4.

Then there's the new AI Hypercomputer: an integrated system of open software, performance-optimized hardware, machine learning frameworks and flexible consumption models. The idea is that this amalgamation will improve productivity and efficiency over treating each piece separately. The AI Hypercomputer's performance-optimized hardware utilizes Google's Jupiter data center network technology.

In a change of pace, Google is providing developers with open software and "extensive support" for machine learning frameworks such as JAX, PyTorch and TensorFlow. The announcement comes on the heels of Meta and IBM launching the AI Alliance, which prioritizes open sourcing (and which Google is notably not part of). The AI Hypercomputer also introduces two flexible consumption models: Flex Start Mode and Calendar Mode.

Google shared the news alongside the introduction of Gemini, a new AI model that the company calls its "largest and most capable," and its rollout to Bard and the Pixel 8 Pro. It will come in three sizes: Gemini Pro, Gemini Ultra and Gemini Nano. 

Google’s Gemini AI is coming to Android

Google is bringing Gemini, the new large language model it just introduced, to Android, beginning with the Pixel 8 Pro. The company’s flagship smartphone will run Gemini Nano, a version of the model built specifically to run locally on smaller devices, Google announced in a blog post. The Pixel 8 Pro is powered by the Google Tensor G3 chip designed to speed up AI performance.

This lets the Pixel 8 Pro add smarts to several existing features. The phone’s Recorder app, for instance, has a Summarize feature that currently needs a network connection to give you a summary of recorded conversations, interviews and presentations. But thanks to Gemini Nano, the phone will now be able to provide a summary without needing a connection at all.

Gemini smarts will also power Gboard’s Smart Reply feature. Gboard will suggest high-quality responses to messages and be aware of context in conversations. The feature is currently available as a developer preview and needs to be enabled in settings. However, it only works with WhatsApp currently and will come to more apps next year.

“Gemini Nano running on Pixel 8 Pro offers several advantages by design, helping prevent sensitive data from leaving the phone, as well as offering the ability to use features without a network connection,” wrote Brian Rakowski, Google Pixel’s vice president of product management.

As part of today’s AI push, Google is upgrading Bard, the company’s ChatGPT rival, with Gemini as well, so you should see significant improvements when using the Pixel’s Assistant with Bard experience. Google is also rolling out a handful of AI-powered productivity and customization updates on other Pixel devices, including the Pixel Tablet and the Pixel Watch, although it isn’t immediately clear what they are.

Gemini Nano is the smallest version of Google's large language model, while Gemini Pro is a larger model that will power not just Bard but other Google services like Search, Ads and Chrome, among others. Gemini Ultra, Google's beefiest model, will arrive in 2024 and will be used to further AI development.

Although today’s updates focus on the Pixel 8 Pro, Google also spoke about AICore, an Android 14 service that allows developers to access AI features like Nano. Google says AICore is designed to run on “new ML hardware like the latest Google Tensor TPU and NPUs in flagship Qualcomm Technologies, Samsung S.LSI and MediaTek silicon.” The company adds that “additional devices and silicon partners will be announced in the coming months.”

AI joins you in the DJ booth with Algoriddim’s djay Pro 5

Algoriddim’s djay Pro software has always had close ties to Apple and often been at the forefront of new DJ tech, especially on Mac, iOS or iPadOS. Today marks the launch of djay Pro version 5 and it includes a variety of novel features, many of which leverage the company’s AI and a new partnership with the interactive team at AudioShake.

There are several buzzy trademarked names to remember this time around, including Next-generation Neural Mix, Crossfader Fusion and Fluid Beatgrid. These are the major points of interest in djay Pro 5, with only a passing mention of improved stem separation on mobile, UI refreshes for the library and a new simplified Starter Mode that may cater to newcomers on the platform. The updates include some intriguing AI-automated features that put the system in control of more complex maneuvers. Best of all, existing users get it all for free as part of their subscription.

AudioShake and Algoriddim have been working on their audio separation tech (as many other companies have) and are calling this refreshed version Next-generation Neural Mix. We’re told to expect crisp, clear separation of elements like vocals, harmonies and drums. The tools have also been optimized for mobile devices, as long as they run a supported OS.

Fluid Beatgrid is perhaps the easiest to understand and seems to underpin the crossfader updates. Anyone who’s used beatgrids knows they’re rarely perfect on first analysis and often take a bit of work to lock in. Songs with live instrumentation that shift tempo naturally, EDM with tempo changes during breakdowns and even older dance tracks that meander slightly throughout playback have all been pain points. Fluid Beatgrid is supposed to use AI to accommodate those shifts and place markers at the right points.

Crossfader Fusion is where stems, automation and those beatgrids all come into play. There are now a variety of settings for the crossfader beyond the usual curves. One of the highlighted modes is the Neural Mix (Harmonic Sustain) setting. This utilizes stem separation and automated level adjustments as you go from one track to the next.
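
Algoriddim hasn't published how this works internally, but conceptually a stem-aware crossfade just runs separate gain automation on each separated stem rather than on the summed track. Here's a minimal numpy sketch of that idea, assuming both songs have already been split into aligned stems:

```python
import numpy as np

def stem_crossfade(outgoing: dict, incoming: dict,
                   sample_rate: int, fade_seconds: float = 8.0) -> dict:
    """Equal-power crossfade applied independently to each stem.

    `outgoing` and `incoming` map stem names ("vocals", "drums", ...) to
    mono float32 sample arrays. Per-stem gain curves are what would let
    vocals fade on a different schedule than drums in a real mixer.
    """
    n = int(sample_rate * fade_seconds)
    t = np.linspace(0.0, 1.0, n, dtype=np.float32)
    gain_out = np.cos(t * np.pi / 2)  # outgoing stem: 1 -> 0
    gain_in = np.sin(t * np.pi / 2)   # incoming stem: 0 -> 1
    return {
        stem: outgoing[stem][:n] * gain_out + incoming[stem][:n] * gain_in
        for stem in outgoing
    }
```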

For those who enjoy cutting and scratching, there are crossfade settings that use automated curves and spatial effects so that, for example, the outgoing track's vocals can drop out automatically as you cut into the next track. The incoming track’s vocals can be highlighted for scratching, and as your mix completes the transition, things are blended together further with AI.

There's even an example provided that shows how you can mix across vastly different BPMs: the incoming song matches up with a slower outgoing track, then its original tempo is slowly integrated during the transition, leaving you with the new, faster tempo.

Existing users should be alerted to the update, but newcomers can find djay Pro version 5 starting today on the App Store. While there will continue to be a free version, the optional Pro subscription costs $7 per month or $50 per year and gives you access to all the features across Mac, iPhone and iPad. The app supports devices running macOS 10.15 or later and iOS 15 / iPadOS 15 or later.

And as a side note, we’re told that djay Pro for Windows was leveled up in September and will get Fluid Beatgrid in an update for that platform as soon as next week. Newer features like Crossfader Fusion are expected in the near future.

Half of London's famed black cab taxi fleet are now EVs

Half of London's black cab fleet is now made up of zero-emission vehicles, manufacturer LEVC and Transport for London (TfL) announced. Of the 14,690 licensed taxis in the capital, 7,972 are battery electric vehicles (BEVs), with most manufactured by Geely's LEVC, according to the latest figures. The number of those models grew a fairly dramatic 10 percent in the last month alone. 

"Reaching this milestone is a great reflection of how London is working hard to be a greener, more sustainable, environmentally friendly city," said TfL's Helen Chapman. "London's black taxis are recognized worldwide and we are proud to see that so many drivers are helping clean up the air." 

New drivers haven't had a choice in the matter, though: since 2018, TfL has required that all new cabs licensed in the city be zero-emission capable (the rule was extended to private minicabs last year). Cabbies with existing licenses have been motivated to change, too, as any still using less efficient vehicles have been required since 2020 to pay a daily rate (now £12.50) to operate in central London's Ultra Low Emission Zone.

Many of London's larger taxi and minicab operators have committed to fully electric fleets by 2025, with the city's largest operator, Addison Lee (which uses VW ID.4s), saying it would get there by 2023. London's black cabs are generally independently owned and licensed under strict rules by TfL. Uber recently announced that London's black taxis would be listed on its app, and while some drivers have signed up, many decried the plan.
