
FCC blocks robocall middleman One Eye from future campaigns

The Federal Communications Commission (FCC) today ordered voice service providers to block the global gateway provider One Eye. The FCC says the company, which serves as an “on-ramp” to US phone networks from outside the country, enabled robocall scams like impersonating a major financial institution and calls about bogus “preauthorized orders” placed in consumers’ names. The Biden administration’s FCC has focused on increasing its ability to crack down on robocalls. “This company — what’s left of it — will now have a place in robocall history,” said FCC Chairwoman Jessica Rosenworcel. “We can and will continue to shut off providers that help scammers.”

Today’s order is the culmination of an escalating series of actions by the FCC to stop One Eye from facilitating shady robocall campaigns. First, the agency cited the company’s predecessor, PZ/Illum Telecommunication, for transmitting illegal robocalls. Then, in a cease-and-desist letter sent in February, the FCC’s Enforcement Bureau warned the newly minted One Eye that its rebranding wouldn’t help it avoid consequences while alerting it that a failure to comply would lead to a permanent block. (On the same day, it cautioned US voice providers about One Eye’s activity.) Finally, it sent an “initial determination order” in April, another step toward the block it ultimately issued today.

The FCC’s statement doesn’t specify where One Eye’s headquarters are. The February cease-and-desist letter was addressed to a registered LLC in Delaware, but that could merely be a US branch of a global operation based elsewhere.

The block has teeth thanks to the FCC’s Gateway Provider Order issued in May 2022. It laid out a new list of requirements for companies routing foreign calls to the US, including (among others) caller ID authentication using the STIR / SHAKEN framework, submitting certification plans, responding to traceback requests within 24 hours and blocking illegal traffic when notified by the FCC. 
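Under STIR/SHAKEN, the originating carrier signs each call with a small JSON Web Token called a PASSporT, whose "attest" claim records how confident that carrier is in the caller ID (A, B or C). The sketch below is a toy illustration of that structure, not the FCC's or any carrier's actual implementation: the token, phone numbers and claims are all made up, and a real terminating provider would also verify the ES256 signature against the signing carrier's certificate.

```python
import base64
import json

def b64url(data: dict) -> str:
    """Base64url-encode a JSON object without padding, as JWTs do."""
    raw = json.dumps(data, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).decode().rstrip("=")

def decode_segment(seg: str) -> dict:
    """Decode one base64url JWT segment back into a dict."""
    padded = seg + "=" * (-len(seg) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# A toy SHAKEN PASSporT: header + claims (signature left empty for brevity).
header = b64url({"alg": "ES256", "ppt": "shaken", "typ": "passport"})
claims = b64url({
    "attest": "A",                    # A = full attestation of the caller
    "orig": {"tn": "14155550100"},    # originating telephone number (fake)
    "dest": {"tn": ["14155550199"]},  # destination telephone number (fake)
})
token = f"{header}.{claims}."

# A terminating provider reads the attestation level before trusting caller ID.
attestation = decode_segment(token.split(".")[1])["attest"]
print(attestation)
```

A gateway provider like One Eye sits at the point where unsigned foreign traffic enters US networks, which is why the 2022 order makes it responsible for attaching and honoring this kind of attestation.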

“The Enforcement Bureau team has built a fair, transparent, but tough process by which we can essentially shut down access to U.S. communications networks by companies such as One Eye that are targeting consumers with illegal robocalls,” said Enforcement Bureau Chief Loyaan Egal. “Today’s action demonstrates another cutting-edge tool in our robocall enforcement options and represents a landmark date in our efforts to protect consumers from scam calls.”

This article originally appeared on Engadget at https://www.engadget.com/fcc-blocks-robocall-middleman-one-eye-from-future-campaigns-211509369.html?src=rss

Scientists discover microbes that can digest plastics at cool temperatures

In a potentially encouraging sign for reducing environmental waste, researchers have discovered microbes from the Alps and the Arctic that can break down plastic without requiring high temperatures. Although this is only a preliminary finding, a more efficient and effective breakdown of industrial plastic waste in landfills would give scientists a new tool for trying to reduce its ecological damage.

Scientists from the Swiss Federal Institute WSL published their findings this week in Frontiers in Microbiology, detailing how cold-adapted bacteria and fungi from polar regions and the Swiss Alps digested most of the plastics they tested — while only needing low to average temperatures. That last part is critical because plastic-eating microorganisms tend to need impractically high temperatures to work their magic. “Several microorganisms that can do this have already been found, but when their enzymes that make this possible are applied at an industrial scale, they typically only work at temperatures above [30 degrees Celsius / 86 degrees Fahrenheit],” the researchers explained. “The heating required means that industrial applications remain costly to date, and aren’t carbon-neutral.”

Unfortunately, none of the microorganisms tested succeeded at breaking down non-biodegradable polyethylene (PE), one of the most challenging plastics commonly found in consumer products and packaging. (They failed at degrading PE even after 126 days of incubation on the material.) But 56 percent of the strains tested decomposed biodegradable polyester-polyurethane (PUR) at 15 degrees Celsius (59 degrees Fahrenheit). Others digested commercially available biodegradable mixtures of polybutylene adipate terephthalate (PBAT) and polylactic acid (PLA). The two most successful strains were fungi from the genera Neodevriesia and Lachnellula: They broke down every plastic tested other than the formidable PE.

Plastics are too recent an invention for the microorganisms to have evolved specifically to break them down. But the researchers highlight how natural selection equipping them to break down cutin, a protective layer in plants that shares much in common with plastics, played a part. “Microbes have been shown to produce a wide variety of polymer-degrading enzymes involved in the break-down of plant cell walls. In particular, plant-pathogenic fungi are often reported to biodegrade polyesters, because of their ability to produce cutinases which target plastic polymers due [to] their resemblance to the plant polymer cutin,” said co-author Dr. Beat Frey.

The researchers see promise in their findings but warn that hurdles remain. “The next big challenge will be to identify the plastic-degrading enzymes produced by the microbial strains and to optimize the process to obtain large amounts of proteins,” said Frey. “In addition, further modification of the enzymes might be needed to optimize properties such as protein stability.”

This article originally appeared on Engadget at https://www.engadget.com/scientists-discover-microbes-that-can-digest-plastics-at-cool-temperatures-173419885.html?src=rss

Scammers used AI-generated Frank Ocean songs to steal thousands of dollars

More AI-generated music mimicking a famous artist has made the rounds — while making lots of money for the scammer passing it off as genuine. A collection of fake Frank Ocean songs sold for a reported $13,000 CAD ($9,722 in US dollars) last month on a music-leaking forum devoted to the Grammy-winning singer, according to Vice. If the story sounds familiar, it’s essentially a recycling of last month’s AI Drake / The Weeknd fiasco.

As generative AI takes the world by storm — Google just devoted most of its I/O 2023 keynote to it — people eager to make a quick buck through unscrupulous means are seizing the moment before copyright laws catch up. It’s also caused headaches for Spotify, which recently pulled not just Fake Drake but tens of thousands of other AI-generated tracks after receiving complaints from Universal Music.

The scammer, who used the handle mourningassasin, told Vice they hired someone to make “around nine” Ocean songs using “very high-quality vocal snippets” of the Thinkin Bout You singer’s voice. The user posted a clip from one of the fake tracks to a leaked-music forum and claims to have quickly convinced its users of its authenticity. “Instantly, I noticed everyone started to believe it,” mourningassasin said. The fact that Ocean hasn’t released a new album since 2016 and recently teased an upcoming follow-up to Blond may have added to the eagerness to believe the songs were real.

The scammer claims multiple people expressed interest in private messages, offering to “pay big money for it.” They reportedly fetched $3,000 to $4,000 for each song in mid to late April. The user has since been banned from the leaked-music forum, which may be having an existential crisis as AI-generated music makes it easier than ever to produce convincing knockoffs. “This situation has put a major dent in our server’s credibility, and will result in distrust from any new and unverified seller throughout these communities,” said the owner of a Discord server where the fake tracks gained traction.

This article originally appeared on Engadget at https://www.engadget.com/scammers-used-ai-generated-frank-ocean-songs-to-steal-thousands-of-dollars-222042845.html?src=rss

Google’s Duet AI brings more generative features to Workspace apps

After OpenAI’s ChatGPT caught the tech world off guard late last year, Google reportedly declared a “code red,” scrambling to plan a response to the new threat. The first fruit of that reorientation trickled out earlier this year with its Bard chatbot and some generative AI features baked into Google Workspace apps. Today at Google I/O 2023, we finally see a more fleshed-out picture of how the company views AI’s role in its cloud-based productivity suite. Google Duet AI is the company’s branding for its collection of AI tools across Workspace apps.

Like Microsoft Copilot for Office apps, Duet AI is an umbrella term for a growing list of generative AI features across Google Workspace apps. (The industry seems to have settled on marketing language depicting generative AI as your workplace ally.) First, the Gmail mobile app will now draft full replies to your emails based on a prompt in a new “Help me write” feature. In addition, the mobile Gmail app will soon add contextual assistance, “allowing you to create professional replies that automatically fill in names and other relevant information.”


Duet AI also makes an appearance in Google Slides. Here, it takes the form of image generation for your presentations. Like Midjourney or DALL-E 2, Duet AI can now turn simple text prompts (entered into the Duet AI “Sidekick” side panel) into AI-generated images to enhance Slides presentations. It could help save you the trouble of scouring the internet for the right slide image while spicing up your deck with something original.

In Google Sheets, Duet AI can understand the context of a cell’s data and label it accordingly. The spreadsheet app also adds a new “help me organize” feature to create custom plans: describe what you want to do in plain language, and Duet AI will outline strategies and steps to accomplish it. “Whether you’re an event team planning an annual sales conference or a manager coordinating a team offsite, Duet AI helps you create organized plans with tools that give you a running start,” the company said.


Meanwhile, Duet AI in Google Meet can generate custom background images for video calls with a text prompt. Google says the feature can help users “express themselves and deepen connections during video calls while protecting the privacy of their surroundings.” Like the Slides image generation, Duet’s Google Meet integration could be a shortcut to save you from searching for an image that conveys the right ambiance for your meeting (while hiding any unwanted objects or bystanders behind you).

Duet also adds an “assisted writing experience” in Google Docs’ smart canvas. Entering a prompt describing what you want to write about will generate a Docs draft. The feature also works in Docs’ smart chips (automatic suggestions and info about things like documents and people mentioned in a project). Additionally, Google is upgrading Docs’ built-in Grammarly-style tools. A new proofread suggestion pane will offer tips about concise writing, avoiding repetition and using a more formal or active voice. The company adds that you can easily toggle the feature when you don’t want it to nag you about grammar.

Initially, you’ll have to sign up for a waitlist to try the new Duet AI Workspace features. Google says you can enter your info here to be notified as it opens the generative AI features to more users and regions “in the weeks ahead.”

This is a developing story. Please check back for updates.

Follow all of the news from Google I/O 2023 right here.

This article originally appeared on Engadget at https://www.engadget.com/googles-duet-ai-brings-more-generative-features-to-workspace-apps-173944737.html?src=rss

Meta's open-source ImageBind AI aims to mimic human perception

Meta is open-sourcing an AI tool called ImageBind that predicts connections between data similar to how humans perceive or imagine an environment. While image generators like Midjourney, Stable Diffusion and DALL-E 2 pair words with images, allowing you to generate visual scenes based only on a text description, ImageBind casts a broader net. It can link text, images / videos, audio, 3D measurements (depth), temperature data (thermal), and motion data (from inertial measurement units) — and it does this without having to first train on every possibility. It’s an early stage of a framework that could eventually generate complex environments from an input as simple as a text prompt, image or audio recording (or some combination of the three).

You could view ImageBind as moving machine learning closer to human learning. For example, if you’re standing in a stimulating environment like a busy city street, your brain (largely unconsciously) absorbs the sights, sounds and other sensory experiences to infer information about passing cars and pedestrians, tall buildings, weather and much more. Humans and other animals evolved to process this data for our genetic advantage: survival and passing on our DNA. (The more aware you are of your surroundings, the more you can avoid danger and adapt to your environment for better survival and prosperity.) As computers get closer to mimicking animals’ multi-sensory connections, they can use those links to generate fully realized scenes based only on limited chunks of data.

So, while you can use Midjourney to prompt “a basset hound wearing a Gandalf outfit while balancing on a beach ball” and get a relatively realistic photo of this bizarre scene, a multimodal AI tool like ImageBind may eventually create a video of the dog with corresponding sounds, including a detailed suburban living room, the room’s temperature and the precise locations of the dog and anyone else in the scene. “This creates distinctive opportunities to create animations out of static images by combining them with audio prompts,” Meta researchers said today in a developer-focused blog post. “For example, a creator could couple an image with an alarm clock and a rooster crowing, and use a crowing audio prompt to segment the rooster or the sound of an alarm to segment the clock and animate both into a video sequence.”

Meta’s graph showing ImageBind’s accuracy outperforming single-mode models.

As for what else one could do with this new toy, the technology points clearly to one of Meta’s core ambitions: VR, mixed reality and the metaverse. For example, imagine a future headset that can construct fully realized 3D scenes (with sound, movement, etc.) on the fly. Or, virtual game developers could perhaps eventually use it to take much of the legwork out of their design process. Similarly, content creators could make immersive videos with realistic soundscapes and movement based on only text, image or audio input. It’s also easy to imagine a tool like ImageBind opening new doors in the accessibility space, generating real-time multimedia descriptions to help people with vision or hearing disabilities better perceive their immediate environments.

“In typical AI systems, there is a specific embedding (that is, vectors of numbers that can represent data and their relationships in machine learning) for each respective modality,” said Meta. “ImageBind shows that it’s possible to create a joint embedding space across multiple modalities without needing to train on data with every different combination of modalities. This is important because it’s not feasible for researchers to create datasets with samples that contain, for example, audio data and thermal data from a busy city street, or depth data and a text description of a seaside cliff.”
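The idea of a joint embedding space can be made concrete with a toy sketch. Below, hand-picked four-dimensional vectors stand in for the embeddings that ImageBind's encoders would produce — the numbers and labels are invented for illustration, and real embeddings have hundreds of dimensions and are learned, not chosen. The point is the mechanism: once every modality maps into one space, cross-modal retrieval is just nearest-neighbor search by cosine similarity.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy embeddings in a shared 4-dim space. In ImageBind, separate encoders
# map each modality (image, audio, text, ...) into one joint space; here
# the vectors are hand-picked so related concepts point the same way.
embed = {
    ("image", "dog on beach"): [0.9, 0.1, 0.0, 0.2],
    ("audio", "barking"):      [0.8, 0.2, 0.1, 0.1],
    ("audio", "rainfall"):     [0.0, 0.1, 0.9, 0.3],
}

img = embed[("image", "dog on beach")]
# Find the audio clip whose embedding lies closest to the image's:
best = max((k for k in embed if k[0] == "audio"),
           key=lambda k: cosine(img, embed[k]))
print(best)  # the barking clip sits closer to the dog image than rainfall does
```

This is also why Meta's point about datasets matters: the model never needs a training sample that pairs, say, thermal data with audio — it only needs each modality anchored to a common space.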

Meta views the tech as eventually expanding beyond its current six “senses,” so to speak. “While we explored six modalities in our current research, we believe that introducing new modalities that link as many senses as possible — like touch, speech, smell, and brain fMRI signals — will enable richer human-centric AI models.” Developers interested in exploring this new sandbox can start by diving into Meta’s open-source code.

This article originally appeared on Engadget at https://www.engadget.com/metas-open-source-imagebind-ai-aims-to-mimic-human-perception-181500560.html?src=rss

Artiphon’s Minibeats AR app creates music from movement and gestures

Artiphon, the company behind the Orba handheld synth and MIDI controller, launched a new AR music creation app today that you don’t need a musical background to enjoy. Minibeats for iOS uses gestures, dance moves and facial expressions to craft songs played on 12 virtual instruments with colorful visual effects.

You could view the Minibeats app as a phone camera equivalent to Artiphon’s music-creation hardware. Here, instead of tapping touchpads on top of an orb-like device, the app lets you wave your hands, smile, frown and bust a move; the camera will capture your gestures and turn them into corresponding music.

The app is an extension of the company’s mission to make music creation a fun and simple activity that anyone can do. “With an intuitive interface and zero learning curve, Minibeats allows you to make music in innovative ways using simple gestures,” Artiphon’s announcement reads. “Dance to the beat as Minibeats tracks your movements and mixes the music. Wave your hands to draw across the sky with sparkles, lasers, and ripples. And even play music by smiling and frowning as Minibeats detects your emotions and scores it with a mood that matches the moment.”


The app taps into the Snapchat CameraKit SDK, which Artiphon already used in custom lenses it launched earlier this year in collaboration with electronic artists San Holo and LP Giobbi. “The iOS app will take this idea even further with more music to choose from and even more exciting ways to play it,” the launch video below states.

Although the app is tailored for simplicity, it provides hint videos to show you the ropes and teach you the subtler details of AR music creation. Additionally, it includes “dozens” of visual effects corresponding to your gestures and sounds. And, of course, the app makes it easy to share your creations, letting you download your makeshift music video to your iOS Photos library or share it with friends through text, email or social apps.

This article originally appeared on Engadget at https://www.engadget.com/artiphons-minibeats-ar-app-creates-music-from-movement-and-gestures-130025054.html?src=rss

Bank of Canada asks for public feedback about a national digital currency

The Bank of Canada wants the public’s opinions on a potential digital Canadian dollar. Although the country’s central bank says a national digital currency isn’t yet needed, it wants to remain flexible and ready should that ever change.

“As Canada’s central bank, we want to make sure everyone can always take part in our country’s economy. That means being ready for whatever the future holds,” said Senior Deputy Governor Carolyn Rogers in a press release published today. The bank cites the diminishing use of cash, potential competition with cryptocurrencies and national economic stability as reasons to prepare for the potential shift.

“The Bank has been providing bank notes to Canadians for more than 85 years,” its announcement states. “Cash is a safe, accessible and trusted method of payment that anyone can use, including people who don’t have a bank account, a credit score or official identification documents. However, there may come a time when bank notes are not widely used in day-to-day transactions, which could risk excluding many Canadians from taking part in the economy.”

Although cryptocurrency is less of a threat to traditional financial institutions after last year’s epic collapses, it’s still a looming danger that likely motivated this move. If decentralized currencies ever became widely enough used to reduce demand for the Canadian dollar, that could threaten the bank’s (and government’s) ability to assert control over the economy, maintain stability and implement policies. “A digital Canadian dollar would ensure Canadians always have an official, safe, and stable digital payment option issued by Canada’s central bank,” the bank says. But it also emphasized that, even if it eventually launched a national digital currency, it would still issue bank notes for anyone who wants them. “Cash isn’t going anywhere,” it unequivocally states.

The survey is a standard online questionnaire about how Canadians would likely use digital currency, which security features are essential, and their concerns about accessibility and privacy. “We want to hear from Canadians about what they value most in the design of a digital dollar. This will help us make design choices and ensure that it is secure, reliable and meets the needs of Canadians,” said Rogers. The bank says Canadians’ feedback “will be kept anonymous, confidential, and be reported in aggregate only.”

This article originally appeared on Engadget at https://www.engadget.com/bank-of-canada-asks-for-public-feedback-about-a-national-digital-currency-172630056.html?src=rss

Sony is shutting down PixelOpus, the studio behind ‘Concrete Genie’

PixelOpus, a small in-house studio within PlayStation Studios, is closing down next month. In a statement to Engadget, a PlayStation representative confirmed, “PlayStation Studios regularly evaluates its portfolio and the status of studio projects to ensure they meet the organization’s short and long-term strategic objectives. As part of a recent review process, it has been decided that PixelOpus will close on June 2.” Earlier today, the studio tweeted that its “adventure has come to an end.” The developer of imaginative passion projects Concrete Genie and Entwined, PixelOpus had reportedly been working on a new PS5 game with Sony Pictures Animation.

The San Mateo-based studio was founded in 2014 under the Sony Interactive Entertainment umbrella in response to the surprise success of indie studio thatgamecompany’s Journey on the PS3. PixelOpus’ two games strove for similarly original ideas: Entwined was a dual-stick rhythm game with a distinct art style about a blue bird and orange fish soaring through the cosmos, while Concrete Genie is a story about using street art to stand up to bullying. Engadget’s Andrew Tarantola found the latter a “surprisingly pleasant and laid-back experience.”

Dear friends, our PixelOpus adventure has come to an end. As we look to new futures, we wanted to say a heartfelt thank you to the millions of passionate players who have supported us, and our mission to make beautiful, imaginative games with heart.
We are so grateful! ❤️🙏 pic.twitter.com/rQO2Cgvhnq

— PixelOpus (@Pixelopus) May 5, 2023

Sadly, it’s often the experimental studios with outside-the-box ideas that are first on the chopping block when mega-corporation parent companies look to cut costs. Although PixelOpus will soon be gone, the original and stylistic Concrete Genie remains available on the PlayStation Store for $30.

This article originally appeared on Engadget at https://www.engadget.com/sony-is-shutting-down-pixelopus-the-studio-behind-concrete-genie-210303026.html?src=rss

‘Hogwarts Legacy’ adds arachnophobia mode for spider-free gaming

Arachnophobic Harry Potter fans, rejoice. Developer Avalanche Software has added a new accessibility feature to Hogwarts Legacy that removes spiders from the game. The update coincides with the title’s arrival on PS4 and Xbox One today.

The Hogwarts Legacy update (build 1140773), launched Thursday, adds the new Arachnophobia Mode to the game’s accessibility options. It changes all enemy spider appearances to what you see in the image above: a floating meanie with glowing red eyes surrounded by hovering roller skates. The skates are a wink to Ron’s boggart encounter (manifested as a giant spider) in Harry Potter and the Prisoner of Azkaban, when students imagine their greatest fears in ridiculous situations that diminish their power; Ron conquers his fear by imagining the arachnid clumsily trying to stand up on slippery skates.

Arachnophobia Mode also “reduces and removes spider skitters and screeches,” “removes small spider ground effect spawners,” and “makes static spider corpses in the world invisible.” However, the game’s creators note that spider images in the Field Guide remain unchanged, so avoid that if static images of spiders creep you out.

Our latest patch for #HogwartsLegacy includes Arachnophobia Mode, making venturing into spider-infested areas significantly less intimidating!
Full notes: https://t.co/9Cods9n1G5 pic.twitter.com/nDck8b6SH1

— Hogwarts Legacy (@HogwartsLegacy) May 4, 2023

It’s the latest example of the gaming industry showing increased sensitivity toward people with common phobias. The miniaturized survival game Grounded added a similar mode that turned its spiders into floating orange blobs analogous to those in Hogwarts Legacy. Last month, a patch for Horizon Forbidden West addressed fears of deep bodies of water (thalassophobia). Games are ideally a fun time for all, and a little extra work from developers can go a long way toward preventing anxiety triggers.

The Hogwarts Legacy update corresponds with the game’s launch on previous-generation (PS4 / Xbox One) consoles today. It arrived on PC, PS5 and Xbox Series X / S in February. A Switch port is due on July 25th. Engadget’s Jessica Conditt found it “the coolest work of Harry Potter fanfiction in years,” fulfilling a teenage dream of being a witch.

This article originally appeared on Engadget at https://www.engadget.com/hogwarts-legacy-adds-arachnophobia-mode-for-spider-free-gaming-194215306.html?src=rss

8Bitdo launches a $30 version of its Ultimate controller

Gamepad maker 8Bitdo unveiled a cheaper version of its beloved Ultimate Controller today. The new Ultimate C 2.4G Wireless Controller is a $30 wireless accessory in purple or green color options. It’s compatible with Windows, Android, Steam Deck and Raspberry Pi.

As its name suggests, the new gamepad connects wirelessly using an included 2.4GHz USB dongle. 8Bitdo describes it as a “simplified” version of the popular Ultimate series of controllers while “offering the same ultimate quality.” As for what “simplified” means, the company appears to have scaled back production costs by dropping the charging dock (relying on cable charging instead) and the profile-toggling switch found on the more expensive variants. It also doesn’t support the company’s Ultimate Software for customizations.

8BitDo says the gamepad delivers up to 25 hours of playtime and recharges fully in two hours. In addition, it supports asymmetrical rumble, although vibration feedback only works on Windows. The controller also works in wired mode and is plug-and-play on PC.

The company expanded into modern console-style controllers last year after making its bones on nostalgic gamepads mimicking classic NES and SNES inputs. The Ultimate line’s design is much closer to today’s Xbox controllers, including asymmetric stick layouts. The more expensive 2.4GHz version is still available for $50, while the Bluetooth variant costs $70. You can pre-order the new model from Amazon ahead of its scheduled May 31st release date.

This article originally appeared on Engadget at https://www.engadget.com/8bitdo-launches-a-30-version-of-its-ultimate-controller-214509427.html?src=rss