Posts with «social & online media» label

Meta caught an Israeli marketing firm running hundreds of fake Facebook accounts

Meta caught an Israeli marketing firm using fake Facebook accounts to run an influence campaign on its platform, the company said in its latest report on coordinated inauthentic behavior. The scheme targeted people in the US and Canada and posted about the Israel-Hamas war.

In all, Meta’s researchers uncovered 510 Facebook accounts, 11 pages, 32 Instagram accounts and one group that were tied to the effort, including fake and previously hacked accounts. The accounts posed as “Jewish students, African Americans and ‘concerned’ citizens” and shared posts that praised Israel’s military actions and criticized the United Nations Relief and Works Agency (UNRWA) and college protests. They also shared Islamophobic comments in Canada, saying that “radical Islam poses a threat to liberal values in Canada.”

Meta’s researchers said the campaign was linked to STOIC, a “political marketing and business intelligence firm” based in Israel, though they didn’t speculate on the motives behind it. STOIC was also active on X and YouTube and ran websites “focused on the Israel-Hamas war and Middle Eastern politics.”

According to Meta, the campaign was discovered before it could build up a large audience and many of the fake accounts were disabled by the company’s automated systems. The accounts reached about 500 followers on Facebook and about 2,000 on Instagram.

The report also notes that the people behind the accounts seemed to use generative AI tools to write many of their comments on the pages of politicians, media organizations and other public figures. “These comments generally linked to the operations’ websites, but they were often met with critical responses from authentic users calling them propaganda,” Meta’s policy director for threat disruption, David Agranovich, said during a briefing with reporters. “So far, we have not seen novel Gen AI driven tactics that would impede our ability to disrupt the adversarial networks behind them.”

This article originally appeared on Engadget at https://www.engadget.com/meta-caught-an-israeli-marketing-firm-running-hundreds-of-fake-facebook-accounts-150021954.html?src=rss

The Internet Archive has been fending off DDoS attacks for days

If you couldn't access the Internet Archive and its Wayback Machine over the past few days, that's because the website has been under attack. In fact, the nonprofit organization announced in a blog post that it's currently in its "third day of warding off an intermittent DDoS cyber-attack." Over the Memorial Day weekend, the organization posted on Twitter/X that most of its services were unavailable due to bad actors pummeling its website with "tens of thousands of fake information requests per second." On Tuesday morning, it warned that it's "continuing to experience service disruptions" because the attackers haven't stopped targeting it.

The website's data doesn't seem to be affected, though, and archived pages remained accessible whenever the site could be reached. "Thankfully the collections are safe, but we are sorry that the denial-of-service attack has knocked us offline intermittently during these last three days," Brewster Kahle, the founder of the Internet Archive, said in a statement. "With the support from others and the hard work of staff we are hardening our defenses to provide more reliable access to our library. What is new is this attack has been sustained, impactful, targeted, adaptive, and importantly, mean."

The Internet Archive has yet to identify the source of the attacks, but it did talk about how libraries and similar institutions are being targeted more frequently these days. One of the institutions it mentioned was the British Library, whose online information system was held hostage for ransom by a hacker group last year. It also talked about how it's being sued by the US book publishing and US recording industries, which accuse it of copyright infringement.

This article originally appeared on Engadget at https://www.engadget.com/the-internet-archive-has-been-fending-off-ddos-attacks-for-days-035950028.html?src=rss

OpenAI’s new safety team is led by board members, including CEO Sam Altman

OpenAI has created a new Safety and Security Committee less than two weeks after the company dissolved the team tasked with protecting humanity from AI’s existential threats. This latest iteration of the group responsible for OpenAI’s safety guardrails will include two board members and CEO Sam Altman, raising questions about whether the move is little more than self-policing theatre amid a breakneck race for profit and dominance alongside partner Microsoft.

The Safety and Security Committee, formed by OpenAI’s board, will be led by board members Bret Taylor (Chair), Nicole Seligman, Adam D’Angelo and Sam Altman (CEO). The new team follows co-founder Ilya Sutskever’s and Jan Leike’s high-profile resignations, which raised more than a few eyebrows. Their former “Superalignment Team” was only created last July.

Following his resignation, Leike wrote in an X (Twitter) thread on May 17 that, although he believed in the company’s core mission, he left because the two sides (product and safety) “reached a breaking point.” Leike added that he was “concerned we aren’t on a trajectory” to adequately address safety-related issues as AI grows more intelligent. He posted that the Superalignment team had recently been “sailing against the wind” within the company and that “safety culture and processes have taken a backseat to shiny products.”

A cynical take would be that a company focused primarily on “shiny products” — while trying to fend off the PR blow of high-profile safety departures — might create a new safety team led by the same people speeding toward those shiny products.

Former OpenAI head of alignment Jan Leike
Jan Leike / X

The safety departures earlier this month weren’t the only concerning news from the company recently. It also launched (and quickly pulled) a new voice model that sounded remarkably like two-time Oscar nominee Scarlett Johansson. The Jojo Rabbit actor then revealed that OpenAI CEO Sam Altman had pursued her consent to use her voice to train an AI model but that she had refused.

In a statement to Engadget, Johansson’s team said she was shocked that OpenAI would cast a voice talent that “sounded so eerily similar” to her after pursuing her authorization. The statement added that Johansson’s “closest friends and news outlets could not tell the difference.”

OpenAI also backtracked on nondisparagement agreements it had required from departing executives, changing its tune to say it wouldn’t enforce them. Before that, the company forced exiting employees to choose between being able to speak against the company and keeping the vested equity they earned. 

The Safety and Security Committee plans to “evaluate and further develop” the company’s processes and safeguards over the next 90 days. After that, the group will share its recommendations with the entire board. After the whole leadership team reviews its conclusions, it will “publicly share an update on adopted recommendations in a manner that is consistent with safety and security.”

In its blog post announcing the new Safety and Security Committee, OpenAI confirmed that the company is currently training its next model, which will succeed GPT-4. “While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment,” the company wrote.

This article originally appeared on Engadget at https://www.engadget.com/openais-new-safety-team-is-led-by-board-members-including-ceo-sam-altman-164927745.html?src=rss

You can now hum to find a song on YouTube Music for Android

YouTube Music for Android is finally releasing a long-awaited tool that lets people hum a song to search for it, in addition to singing the tune or playing the melody on an instrument, according to reporting by 9to5Google. The software has been in the testing phase since March.

All you have to do is tap the magnifying glass in the top-right corner and look for the waveform icon next to the microphone icon. Tap the waveform icon and start humming or singing. A fullscreen results page should quickly bring up the cover art, song name, artist, album, release year and other important data about the song. The software builds upon the Pixel’s Now Playing feature, which uses AI to “match the sound to the original recording.”

The tool comes in a server-side update with version 7.02 of YouTube Music for Android. There doesn’t appear to be any availability information for an iOS release, though it’s most likely headed our way in the near future.

You don't need to be @KidCudi to use Hum to Search. Hum a song into your Google app, and we'll identify it for you. Test it with your favorite songs, or use it to figure out the song that's been stuck in your head and find your new favorite. 🎶 pic.twitter.com/MluVNesTpE

— Google (@Google) December 21, 2020

This type of feature isn’t exactly new, even if it’s new to YouTube Music. Google Search rolled out a similar tool back in 2020 and the regular YouTube app began offering something like this last year. Online music streaming platform Deezer also has a “hum to search” tool, released back in 2022.

This article originally appeared on Engadget at https://www.engadget.com/you-can-now-hum-to-find-a-song-on-youtube-music-for-android-190037510.html?src=rss

Sam Altman is ‘embarrassed’ that OpenAI threatened to revoke equity if exiting employees wouldn’t sign an NDA

OpenAI reportedly made exiting employees choose between keeping their vested equity and being able to speak out against the company. According to Vox, which viewed the document in question, employees could “lose all vested equity they earned during their time at the company, which is likely worth millions of dollars” if they didn’t sign a nondisclosure and non-disparagement agreement, thanks to a provision in the off-boarding papers. OpenAI CEO Sam Altman confirmed in a tweet on Saturday evening that such a provision did exist, but said “we have never clawed back anyone's vested equity, nor will we do that if people do not sign a separation agreement (or don't agree to a non-disparagement agreement).”

An OpenAI spokesperson echoed this in a statement to Vox, and Altman said the company “was already in the process of fixing the standard exit paperwork over the past month or so.” But as Vox notes in its report, at least one former OpenAI employee has spoken publicly about sacrificing equity by declining to sign an NDA upon leaving. Daniel Kokotajlo recently posted on an online forum that this decision led to the loss of equity likely amounting to “about 85 percent of my family's net worth at least.”

in regards to recent stuff about how openai handles equity:

we have never clawed back anyone's vested equity, nor will we do that if people do not sign a separation agreement (or don't agree to a non-disparagement agreement). vested equity is vested equity, full stop.

there was…

— Sam Altman (@sama) May 18, 2024

In Altman’s response, the CEO apologized and said he was “embarrassed” after finding out about the provision, which he claims he was previously unaware of. “[T]here was a provision about potential equity cancellation in our previous exit docs; although we never clawed anything back, it should never have been something we had in any documents or communication,” he wrote on X. “this is on me and one of the few times i've been genuinely embarrassed running openai; i did not know this was happening and i should have [sic].” In addition to acknowledging that the company is changing the exit paperwork, Altman went on to say, “[I]f any former employee who signed one of those old agreements is worried about it, they can contact me and we'll fix that too.”

All of this comes after two more high-profile resignations from OpenAI this week. OpenAI co-founder and Chief Scientist Ilya Sutskever announced on Wednesday that he was leaving the company, and was followed soon after by Jan Leike, who was a team leader on OpenAI’s now-dissolved “Superalignment” AI safety team.

This article originally appeared on Engadget at https://www.engadget.com/sam-altman-is-embarrassed-that-openai-threatened-to-revoke-equity-if-exiting-employees-wouldnt-sign-an-nda-184000462.html?src=rss

Twitter has officially moved to X.com

Twitter officially went through a rebranding almost a year ago, but most of its pages still used Twitter in their URLs until now. Elon Musk has announced that the social network is done moving all of its core systems to X.com, which means it's done transitioning into its new identity and scrubbing all traces of the name Twitter and its iconic blue bird logo. As The Verge notes, the website has also edited its landing and log-in page with a note at the bottom that says: "Welcome to x.com! We are letting you know that we are changing our URL, but your privacy and data protection settings remain the same." It then links to its Privacy page, which now uses x.com in its address.

All core systems are now on https://t.co/bOUOek5Cvy pic.twitter.com/cwWu3h2vzr

— Elon Musk (@elonmusk) May 17, 2024

Over the past year, the company has been shedding its pre-Elon Musk identity little by little. It changed its official handle from @Twitter to @X and replaced the Twitter logo on its headquarters building. Its website changed favicons, which initially triggered some browsers' security safeguards, while its apps switched over to the new X logo from its previous blue bird design. Tweetdeck has been renamed XPro and Twitter Blue became X Premium. The company has slowly been moving its pages to x.com as well — slow enough that the move became something of a security risk, since bad actors could take advantage of the inconsistent URLs to phish victims. Now that the company is done moving to its new URL, it's time to say goodbye to one of the last remaining parts of a website that helped shape the social media landscape.

This article originally appeared on Engadget at https://www.engadget.com/twitter-has-officially-moved-to-xcom-120028269.html?src=rss

Meta’s Oversight Board will wade into the debate over political content on Threads

Meta’s Oversight Board has accepted its first case involving a post on Threads, one that will allow the group to weigh in on the debate over the role of political content on the app. The board, which started taking appeals from Threads users earlier this year, announced the case — its first involving Meta’s newest app — this week.

The case stems from a post by a Japanese user who was replying to a screenshot of a news article about Prime Minister Fumio Kishida and allegations of tax evasion. The reply, according to the board, included “several hashtags using the phrase ‘drop dead.’” Meta’s content moderators removed the post, citing the company’s rules against inciting violence. But after the user appealed to the Oversight Board and had the case accepted, Meta reversed course, saying that the post didn’t violate its rules after all.

All that may sound like a fairly typical case for the board, which regularly reviews Meta’s content moderation decisions and pushes the social media company to change its policies. But it’s the first time the group will apply that same process to Threads. And the board has suggested it will use the case to weigh in on the company’s controversial decision to stop showing political content in its algorithmic recommendations on Threads and Instagram.

“The Board selected this case to examine Meta’s content moderation policies and enforcement practices on political content on Threads,” the Oversight Board wrote in a statement. “This is particularly important, in the context of Meta’s decision not to proactively recommend political content on Threads.”

As usual, it will likely be several months before we see the Oversight Board’s decision actually play out in any policy changes at Meta. In the meantime, the board is seeking public comment on “how Meta’s choice not to recommend political content on Threads and Instagram newsfeeds, or pages not followed by users, affects access to information and political speech.”

This article originally appeared on Engadget at https://www.engadget.com/metas-oversight-board-will-wade-into-the-debate-over-political-content-on-threads-120001168.html?src=rss

Threads search will finally be usable with 'recent' tab rollout

Threads is inching closer to becoming an actually useful source for real-time news and updates. The app is finally rolling out the ability to search posts in order of recency, after testing the feature last month.

“In an effort to make it easier to find timely, relevant content on Threads, we’re introducing a Recent tab for your searches,” Instagram’s Adam Mosseri wrote in an update. “Search results here are still evaluated for quality, but you can now see them in chronological order.”

The change has been a long-requested one from users hoping Meta’s app will one day be a source of breaking news and real-time information the way that Twitter historically functioned. Being able to search for topics and keywords and find the most recent results is key to finding up-to-date details and commentary about breaking news, sports and anything else happening in real time.

On the other hand, Meta has also made it clear that it would prefer “news” to not be what Threads is known for. Mosseri has said he doesn’t want to “encourage” hard news on Threads and the company actively discourages political content. Threads’ default “for you” algorithm is also known for surfacing days-old posts, random personal stories and other content that’s not exactly timely.

It’s also worth pointing out that Threads’ new recency filter in search is not the same as the “latest” search filter on X. As Mosseri noted in his post, Meta still hides an unknown number of posts in search results that have been “evaluated for quality,” so Threads search will never surface all of the posts containing your search terms. But being able to at least find posts that aren’t a few days old should make looking for timely information a lot less frustrating.

This article originally appeared on Engadget at https://www.engadget.com/threads-search-will-finally-be-usable-with-recent-tab-rollout-202054011.html?src=rss

Threads gets its own fact-checking program

This might come as a shock to you, but the things people put on social media aren't always truthful — really blew your mind there, right? Because of this, it can be challenging for people to know what's real without context or expertise in a specific area. That's part of why many platforms use a fact-checking team to keep an eye (or at least to look like they're keeping an eye) on what's getting shared. Now, Threads is getting its own fact-checking program, Adam Mosseri, head of Instagram and de facto person in charge at Threads, announced. He first shared the company's plans to do so in December.

Mosseri stated that Threads "recently" made it so that Meta's third-party fact-checkers could review and rate any inaccurate content on the platform. Before the shift, Meta was having fact-checks conducted on Facebook and Instagram and then matching "near-identical false content" that users shared on Threads. However, there's no indication of exactly when the program started or if it's global.

Then there's the matter of seeing how effective it really can be. Facebook and Instagram already had these dedicated fact-checkers, yet misinformation has run rampant across the platforms. Ahead of the 2024 Presidential election — and as ongoing elections and conflicts happen worldwide — is it too much to ask for some hardcore fact-checking from social media companies?

This article originally appeared on Engadget at https://www.engadget.com/threads-gets-its-own-fact-checking-program-130013115.html?src=rss

With Gemini Live, Google wants you to relax and have a natural chat with AI

While Google and OpenAI have been racing to win the AI crown over the past year, we've seemingly moved away from the idea of speaking to virtual assistants. Generative AI products have typically launched with text-only inputs, only later adding the ability to search images and handle basic voice commands. At Google I/O today, the company showed off Gemini Live, a new mobile experience for natural conversations with its AI.

Google offered up a few potential use cases: you could have a conversation with Gemini Live to help prepare for a job interview, where it could potentially ask you relevant questions about the position. It could also give you public speaking tips if you want to rehearse a speech. What makes Gemini Live unique is that you'll be able to speak at your own pace, or even interrupt its responses if you'd like. Ideally, it should be more like having a conversation with a person, instead of just voicing smart assistant commands or generative AI queries.

At I/O, Google also showed off Project Astra, a next-generation virtual assistant that takes the concept of Gemini Live even further. Astra is able to view your camera feed and answer questions in real-time. It's unclear how long that'll take to arrive, but Google says some of Astra's live video features will come to Gemini Live later this year. Gemini Live will be available for Gemini Advanced subscribers in the next few months.

Developing...

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/with-gemini-live-google-wants-you-to-relax-and-have-a-natural-chat-with-ai-181329788.html?src=rss