Posts with «social & online media» label

'Beat Saber' gets a $13 Billie Eilish track pack with 'Bad Guy' and 'Bury a Friend'

Facebook has just released a Billie Eilish pack for Beat Saber. Priced at $13 for the entire collection, the pack features 10 songs, including fan-favorites like “Bury a Friend” and “Bad Guy.” It also comes with a new environment inspired by Eilish’s “Happier Than Ever” music video. If you want to buy specific tracks, you can do so for $2 per song. The DLC is available on Oculus Quest, Rift, PSVR and SteamVR headsets. 

If you own an Oculus headset, you can also look forward to watching the singer’s upcoming Governors Ball performance when it’s livestreamed through the platform’s Venues app on September 24th. Facebook acquired Beat Saber creator Beat Games in 2019. Since then, the company has used its robust music licensing deals to bring paid content from artists including Kendrick Lamar and Linkin Park to the game. 

Facebook has a new policy for fighting 'coordinated social harm'

Facebook has announced a new policy that allows it to take down networks of accounts engaging in “coordinated social harm.” The company said the change could help the platform fight harmful behavior it wouldn’t otherwise be able to fully address under its existing rules.

Unlike “coordinated inauthentic behavior,” which is Facebook’s policy for dealing with harm that comes from networks of fake accounts, coordinated social harm gives the company a framework to address harmful actions from legitimate accounts. During a call with reporters, the company’s head of security policy Nathaniel Gleicher said the policy is necessary because bad actors are increasingly trying to “blur the lines” between authentic and inauthentic behavior.

“We are seeing groups that pose a risk of significant social harm, that also engage in violations on our platform, but don't necessarily rise to the level for either of those where we’d enforce against for inauthenticity under CIB [coordinated inauthentic behavior] or under our dangerous organizations policy,” Gleicher said. “So this protocol is designed to capture these groups that are sort of in between spaces.”

Gleicher added that the new protocols could help Facebook address networks of accounts spreading anti-vaccine misinformation or groups trying to organize political violence. In announcing the change, Facebook said it took down a small network of accounts in Germany that were linked to the “Querdenken” movement, which has spread conspiracy theories about the country’s COVID-19 restrictions and has been “linked to off-platform violence.”

Facebook said it could take “a range of actions” in enforcing its new rules around coordinated social harm. That could include banning accounts — as it did with the “Querdenken” movement — or throttling their reach to prevent content from spreading as widely.

The issue of how to handle groups that break Facebook’s rules in a coordinated way has been a difficult one for the company, which up until now has primarily focused on taking down networks that rely on fake accounts to manipulate its platform. The issue came up earlier this year following the January 6th insurrection as Facebook investigated the “Stop the Steal” movement. According to an internal report obtained by BuzzFeed News, Facebook employees suggested its existing policies weren’t equipped to handle “inherently harmful” coordination by legitimate accounts, which prevented it from realizing “Stop the Steal” was a “cohesive movement” until it was too late.

During a press call, Gleicher said that the “work on this policy started well before January 6th.” But he added that the company’s work against high-profile groups had informed their decision making. “If you think about our enforcement against QAnon-related actors, if you think about our enforcement against ‘Stop the Steal,’ if you think about our enforcement against other groups — we learned from all of them.”

Instagram is internally testing a feature that'll show posts from some people higher in its feed

Instagram is working on a tool that could give people more control over its famously obtuse feed algorithm. Mobile developer Alessandro Paluzzi recently shared screenshots of an in-development feature called Favorites. Those images suggest the tool will allow you to add friends, family members and creators to a list of accounts you want the software to prioritize when you’re scrolling through your feed.

#Instagram is working on "Favorites" 👀

ℹ️ Posts from your favorites are shown higher in feed. pic.twitter.com/NfBd8v4IHR

— Alessandro Paluzzi (@alex193a) September 9, 2021

Since Instagram switched from a chronological feed to an algorithmic one back in 2016, people have consistently complained the app doesn’t do an adequate job of showing them the images and videos they want to see the most. Adam Mosseri, the head of Instagram, tried to speak to those concerns recently when he wrote a blog post about how the platform's various algorithms work. Currently, the feed algorithm tends to look at the popularity of a post, in addition to your recent activity and history of interacting with someone, when deciding how to prioritize the content it shows you.
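
To make that concrete, here’s a toy sketch of how a Favorites boost could slot into a feed-ranking score. To be clear, the signal names, weights and multiplier below are invented purely for illustration; Instagram hasn’t published its ranking formula, and this isn’t it.

```typescript
// Toy feed-ranking sketch. The signals loosely mirror what Instagram says it
// considers (a post's popularity, recency and your history with its author),
// but the weights and the Favorites multiplier are made up for this example.
interface Post {
  author: string;
  likes: number;               // rough popularity signal
  hoursOld: number;            // recency signal
  interactionHistory: number;  // 0..1, how often you engage with this author
}

function scorePost(post: Post, favorites: Set<string>): number {
  const popularity = Math.log10(1 + post.likes);
  const recency = 1 / (1 + post.hoursOld);
  let score = 0.4 * popularity + 0.3 * recency + 0.3 * post.interactionHistory;

  // The hypothetical Favorites boost: accounts you hand-picked get ranked
  // higher regardless of how the other signals shake out.
  if (favorites.has(post.author)) {
    score *= 2;
  }
  return score;
}

// Example: a small feed with one favorited account.
const favorites = new Set(["best_friend"]);
const feed: Post[] = [
  { author: "big_creator", likes: 12_000, hoursOld: 2, interactionHistory: 0.1 },
  { author: "best_friend", likes: 42, hoursOld: 5, interactionHistory: 0.9 },
];
feed.sort((a, b) => scorePost(b, favorites) - scorePost(a, favorites));
console.log(feed.map((p) => p.author)); // ["best_friend", "big_creator"]
```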

It’s unclear if Favorites will become an official feature within Instagram. A spokesperson for Instagram told Engadget the company is currently testing the tool internally but offered no further details on when we might see an external test, if at all.

Facebook's first smart glasses are the Ray-Ban Stories

Facebook's first foray into the world of smart glasses is here. Confusingly dubbed Ray-Ban Stories, they start at $299 and bring together much of the technology we've already seen in smart eyewear. They'll let you take first-person photos and videos on the go, like Snap's Spectacles. And, similar to Bose and Amazon's speaker-equipped glasses, you'll be able to listen to media, as well as take calls.

But what's most impressive is that Facebook and Ray-Ban owner Luxottica have crammed the hardware components into frames that look and feel almost exactly like a pair of typical designer glasses. The only difference is the pair of cameras mounted at the corners. 

The Ray-Ban Stories in the iconic Wayfarer style — those chunky '50s-era frames that still look fashionable today — weigh just five grams more than the standard version. And that's including their dual 5-megapixel cameras, Snapdragon processor, touchpad, speakers, three-microphone array and other hardware. I'll be honest, I was a bit shocked when I learned how little extra weight all of that adds. We're used to smart glasses being thick and heavy, even when they're coming from major brands like Bose. The Ray-Ban Stories look, well, normal.

I suppose that shouldn't be too surprising, though, as both Facebook and Ray-Ban ultimately want to normalize smart frames to the point where they're as common as wireless earbuds. That also helps the companies avoid the mistake Google made with Glass: Those things looked so alien and Borg-like that they were almost instantly reviled. 

Privacy remains a concern with all smart glasses, though. The Ray-Ban Stories have a bright LED that lights up when you're taking photos and video, but I could see many people taking issue with the subtle camera placement. We're all used to people capturing everything with their smartphones these days, but doing so still requires more effort than tapping your glasses or issuing a voice command to an all-seeing social network. 

If Facebook can successfully deliver the first smart glasses that don't make the wearer feel like a joke, and which the general public doesn't want to throw in a fire, it could gain a serious foothold in the augmented reality market. And, well, we know how much Mark Zuckerberg wants to transform it into a "metaverse company." 

In addition to the Wayfarer style, Ray-Ban Stories will be available in the brand's Round and Meteor frames, five different colors, and your typical array of lenses: clear, sun, prescription, transition and polarized. I'm surprised Ray-Ban isn't offering polarized sunglass lenses by default, though, since they can reduce glare far better than lenses that are just tinted dark. As for battery life, Facebook claims the Ray-Ban Stories will last for around a day of use (around three hours of audio streaming), while the bundled charging case adds another three days of use.

As ambitious as they may seem, Ray-Ban Stories are also yet another example of how Facebook seemingly can't help but imitate Snapchat, which has been dabbling in smart glasses since 2016. Even their name hearkens back to the social story format that Snap kicked off and was later copied by Facebook, Instagram and pretty much every other social media outfit. But at this point, I don't think Facebook cares if everyone calls them copycats if it ultimately leads to more engagement.

After testing out the Ray-Ban Stories for a few days, I found them far more compelling than any other smart glasses available today. They don't look as goofy as the Snap Spectacles, and they're far more comfortable to wear than Bose and Amazon's Frames. I could only use the Stories in limited situations, though, since I need prescription lenses to actually see well.

Video quality is surprisingly good, very stable even when I’m moving around. The best use case is being able to record your kids without picking up a phone pic.twitter.com/axhkMYfN8e

— Devindra Hardawar (@Devindra) September 9, 2021

Still, I was surprised by how smooth video footage looked; it reminded me of YouTube professionals like J. Kenji Lopez-Alt who use head-mounted GoPros. It was also nice to have both hands free to capture fleeting moments of play with my daughter. I was less impressed with the Stories' photo quality, but I suppose it could be useful if you wanted to take a pic without pulling out your phone. You can import your photos and videos from the smart glasses into Facebook View, a new app that lets you quickly edit your media and share it to practically every social media site (yes, even Snapchat!).

While I didn't expect much when it comes to audio playback, the Stories surprised me with sound that was good enough for listening to light tunes or podcasts. I could see them being particularly useful while jogging or biking outdoors, where you want to maintain situational awareness. During the day, I'm never too far from my wireless earbuds, but being able to get a bit of audio from my glasses in a pinch could be genuinely useful.

To control the Ray-Ban Stories, you can either invoke the Facebook assistant by saying "Hey Facebook" or by tapping the button on the right arm, or swiping on the side touchpad. Personally, I never want to be caught in public talking to Facebook, so I mostly relied on touch controls. But the voice controls worked just fine during the few occasions when nobody could hear my shame.

While they're not exactly perfect, the Ray-Ban Stories are the first smart glasses I'd recommend to someone looking for a pair. But the Facebook of it all is still concerning. While the company says the glasses will only collect basic information to be functional — things like the battery level, your Facebook login and Wi-Fi details — who knows how that'll change as its future smart glasses become more fully featured. Perhaps that's why there's no Facebook branding on the Ray-Ban Stories case and frames: It's probably better if people forget these are also Facebook-powered products.

You'll be able to buy Ray-Ban Stories today in 20 different styles in the US, Australia, Canada, Ireland, Italy and the UK. Even though the Ray-Ban Stories may seem to have limited availability right now, Facebook and Luxottica have a multi-year partnership that will result in even more products. It's likely that true AR glasses, which can display information on your lenses, aren't far off. And you can be sure of that, since Snapchat has already shown off its own AR Spectacles.

Twitter tests four new emoji Tweet reactions alongside 'Like'

Other than some developer tests, Twitter has only ever offered you one way to react to tweets: the classic heart "Like" (and, prior to 2015, the "Favorite" star button). Now, the social media network might finally be expanding that range as it tests a new feature called "Reactions" with additional emoji over the coming days. 

For a limited time in Turkey, the company is testing four new emoji on top of the heart icon: 😂, 🤔, 👏 and 😢, aka "tears of joy," "thinking face," "clapping hands" and "crying face." Back in 2015, Twitter tested a wider variety, including the "100," "heart eyes" and other emoji. However, this time it wanted to "find emoji that are universally recognizable and represent what people want to express about Tweets," the company said.

Twitter narrowed it down to those additional four after conducting surveys and researching what the most common words and emoji are in Tweets. It found that the most popular one is the laughing emoji, and that people want to express reactions centered around "funny," "support/cheer," "agreement" and "awesome." It also identified "entertained" and "curious" as the top emotions people feel when reading tweets.

Its surveys also revealed that "frustration" and "anger" were common emotions experienced by users. While some people wanted to express disagreement with Tweets, the company decided not to incorporate that. Rather, it's trying to see if the new, more positive emoji will drive "healthy public conversations." It's also likely related to the high levels of polarization and toxicity on the site, something that Twitter has been keen to reduce over the past several years.

More specifically, Twitter said that reactions will help people better show how they feel in conversations "while also giving those Tweeting a better understanding of how their Tweets are received." The new reactions will be available in Turkey only for a limited time on Twitter for iOS, Android and the web over the coming days. However, it added that "based on this test, we may expand the availability of the Reactions experiment to other regions."

Twitter starts rolling out Communities, its dedicated space for groups

After 15 years, Twitter is getting dedicated features for groups. The company is now starting to test Communities, “a more intimate space for conversations” on the platform.

Communities, which the company first teased back in February, are sort of like Twitter’s version of a subreddit or a public-facing group on Facebook. Communities are dedicated to specific topics, and members can post tweets to a dedicated group timeline. Each community has its own moderators who set rules for the group, and users must be invited by an existing member or moderator to participate.

The feature is meant to address what’s been a long-running issue for the platform: that it can be incredibly difficult for new users to wade through the noise and find the corner of Twitter that speaks to their interests. The company has tried to address this with Topics, which injects tweets into your timeline based on your interests, but Communities takes the idea a step further.

Twitter notes that some of its first Communities will focus on popular topics like skincare, astrology, sneakers and dogs, but that over time it expects the groups to reflect the more “niche discussions” that happen on the platform. For now, Twitter is starting with just a handful of Communities, though moderators and members are able to invite anyone to join. The company says it plans to open up the feature for more users to create Communities in the “coming months.”

Notably, Twitter seems to be trying to avoid some of the issues that have plagued Facebook’s Groups. All Communities are publicly accessible and viewable by anyone on the platform — there’s no such thing as a private or “secret” Community — though only members can participate in the discussion directly. Like Reddit and Facebook, Twitter will also rely on admins and moderators to steer the day-to-day conversations and keep members in check. The company is also working on new reporting and detection features to weed out “potentially problematic” groups that may spring up.

Though the company is calling the feature a test, Twitter seems to be quite serious about its potential. Communities is getting its own tab in the center of Twitter’s app, between explore and notifications, which suggests the company plans for Communities to be a prominent feature of its platform for the long term.

Twitter web test lets you remove followers without blocking them

Twitter has launched its second feature test in one day, and this one could be particularly helpful if you've ever been subjected to online abuse. A newly available web test lets you remove followers without blocking them. You'll disappear from their feed without notifications that might spark harassment and threats.

The social network hasn't said if or when it might roll out follower removals. This is coming alongside a string of anti-harassment and privacy-related projects, though, including a "Safety Mode" test and an experimental option to automatically archive tweets. It might be just a matter of time before tighter follower control is available to a wider audience.

This test may be particularly useful in fighting abuse. Until now, Twitter users have typically had to either report offending accounts (and hope Twitter takes action) or block them and risk retaliation. This won't prevent creeps from following your activity if you have a public account, but it could lessen the chance of immediate outrage.

We're making it easier to be the curator of your own followers list. Now testing on web: remove a follower without blocking them.

To remove a follower, go to your profile and click “Followers”, then click the three dot icon and select “Remove this follower”. pic.twitter.com/2Ig7Mp8Tnx

— Twitter Support (@TwitterSupport) September 7, 2021

Twitter's latest test gives iOS users a larger, edge-to-edge view of photos

Twitter is an increasingly visual social network, and it's accordingly giving your media some more breathing room. The company has started testing an "edge-to-edge" timeline on iOS that gives you a much larger, borderless view for photos and videos. You won't have to tap on a picture just to make full use of your big smartphone screen, to put it another way.

The firm didn't say how soon the feature might move beyond the experimental stage, but did vow to "iterate" on the test. We've asked about the possibility of (and timing for) Android and web tests.

The test is a bid to "bring more focus to the content" as more Twitter users share media. We'd add that it could also help Twitter counter Instagram, TikTok and other imagery-driven social networks. You may have a stronger incentive to post on Twitter if you know people are more likely to see (and appreciate) your snapshots.

Now testing on iOS:

Edge to edge Tweets that span the width of the timeline so your photos, GIFs, and videos can have more room to shine. pic.twitter.com/luAHoPjjlY

— Twitter Support (@TwitterSupport) September 7, 2021

The fight to study what happens on Facebook

Facebook recently added a new report to its transparency center. The "widely viewed content" report was ostensibly meant to shed light on what’s been a long-running debate: What is the most popular content on Facebook?

The 20-page report raised more questions than it answered. For example, it showed that the most viewed URL was a seemingly obscure website associated with former Green Bay Packers players. It boasted nearly 90 million views even though its official Facebook page has just a few thousand followers. The report also included URLs for e-commerce sites that seemed at least somewhat spammy, like online stores for CBD products and Bible-themed t-shirts. There was also a low-res cat GIF, along with several bland memes that asked people to respond with foods they like or don’t like or items they had recently purchased.

Notably absent from the report were the right-wing figures who regularly dominate the unofficial “Facebook Top 10” Twitter account, which ranks content by engagement. In fact, there wasn’t very much political content at all, a point Facebook has long been eager to prove. For Facebook, its latest attempt at “transparency” was evidence that most users’ feeds aren’t polarizing, disinformation-laced swamps but something much more mundane.

Days later, The New York Times reported that the company had prepped an earlier version of the report, but opted not to publish it. The top URL from that report was a story from the Chicago Sun-Times that suggested the death of a doctor may have been linked to the COVID-19 vaccine. Though the story was from a credible news source, it’s also the kind of story that’s often used to fuel anti-vaccine narratives.

Almost as soon as the initial report was published, researchers raised other issues. Ethan Zuckerman, an associate professor of public policy and communication at University of Massachusetts at Amherst, called it “transparency theatre.” It was, he said, “a chance for FB to tell critics that they’re moving in the direction of transparency without releasing any of the data a researcher would need to answer a question like ‘Is extreme right-wing content disproportionately popular on Facebook?’”

The promise of ‘transparency’

For researchers studying how information travels on Facebook, it’s a familiar tactic: provide enough data to claim “transparency,” but not enough to actually be useful. “The findings of the report are debatable,” says Alice Marwick, principal researcher at the Center for Information Technology and Public Life at University of North Carolina. “The results just didn't hold up, they don't hold up to scrutiny. They don't map to any of the ways that people actually share information.”

Marwick and other researchers have suggested this may be because Facebook opted to slice its data in an unusual way: for instance, only counting URLs that actually appeared in the body of a post, rather than in the link previews people typically share. Or perhaps Facebook just has a really bad spam problem. Or maybe it’s a combination of the two. “There's no way for us to independently verify them … because we have no access to data compared to what Facebook has,” Marwick told Engadget.
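
To see why that slicing choice matters, here’s a minimal sketch of the two counting approaches the researchers describe. The post structure and field names below are made up for the example; Facebook hasn’t said exactly how the report’s pipeline works.

```typescript
// Toy illustration of two ways to tally "most viewed" URLs. The data shape is
// hypothetical; it exists only to show how counting URLs typed into the post
// text vs. URLs attached as link previews can produce very different rankings.
interface ToyPost {
  text: string;             // what the user typed
  linkPreviewUrl?: string;  // URL shared as a preview card, if any
  views: number;
}

const URL_REGEX = /https?:\/\/\S+/g;

// Approach A: only count URLs that literally appear in the post body.
function countBodyUrls(posts: ToyPost[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const post of posts) {
    for (const url of post.text.match(URL_REGEX) ?? []) {
      totals.set(url, (totals.get(url) ?? 0) + post.views);
    }
  }
  return totals;
}

// Approach B: count the link preview most people actually share.
function countPreviewUrls(posts: ToyPost[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const post of posts) {
    if (post.linkPreviewUrl) {
      totals.set(post.linkPreviewUrl, (totals.get(post.linkPreviewUrl) ?? 0) + post.views);
    }
  }
  return totals;
}

// A news link shared the usual way (as a preview card) is invisible to
// approach A, while a URL pasted directly into the text dominates it.
const posts: ToyPost[] = [
  { text: "Wow, read this", linkPreviewUrl: "https://example-news.com/story", views: 5_000_000 },
  { text: "50% off! https://example-shop.com/deal", views: 90_000 },
];
console.log(countBodyUrls(posts));    // only the shop URL shows up
console.log(countPreviewUrls(posts)); // only the news story shows up
```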

Those concerns were echoed by Laura Edelson, a researcher at New York University. “No one else can replicate or verify the findings in this report,” she wrote in a tweet. “We just have to trust Facebook.” Notably, Edelson has her own experience running into the limits of Facebook’s push for “transparency.”

The company recently shut down her personal Facebook account, as well as those of several NYU colleagues, in response to their research on political ad targeting on the platform. Since Facebook doesn’t make targeting data available in its ad library, the researchers recruited volunteers to install a browser extension that could scoop up advertising info based on their feeds.

Facebook called it “unauthorized scraping,” saying it ran afoul of its privacy policies. In doing so, it cited its obligation to the FTC, which the agency later said was “misleading.” Outside groups had vetted the project and confirmed it was only gathering data about advertisers, not users’ personal data. Guy Rosen, the company’s VP of Integrity, later said that even though the research was “well-intentioned” it posed too great a privacy risk. Edelson and others said Facebook was trying to silence research that could make the company look bad. “If this episode demonstrates anything it is that Facebook should not have veto power over who is allowed to study them,” she wrote in a statement.

Rosen and other Facebook execs have said that Facebook does want to make more data available to researchers, but that they need to go through the company’s official channels to ensure the data is made available in a “privacy protected” way. The company has a platform called FORT (Facebook Open Research and Transparency), which allows academics to request access to some types of Facebook data, including election ads from 2020. Earlier this year, the company said it would expand the program to make more info available to researchers studying “fringe” groups on the platform.

But while Facebook has billed FORT as yet another step in its efforts to provide “transparency,” those who have used FORT have cited shortcomings. A group of researchers at Princeton hoping to study election ads ultimately pulled the project, citing Facebook’s restrictive terms. They said Facebook pushed a “strictly non-negotiable” agreement that required them to submit their research to Facebook for review prior to publishing. Even more straightforward questions about how they were permitted to analyze the data were left unanswered.

“Our experience dealing with Facebook highlights their long running pattern of misdirection and doublespeak to dodge meaningful scrutiny of their actions,” they wrote in a statement describing their experience.

A Facebook spokesperson said the company only checks for personally identifiable information, and that it’s never rejected a research paper.

“We support hundreds of academic researchers at more than 100 institutions through the Facebook Open Research and Transparency project,” Facebook’s Chaya Nayak, who heads up FORT at Facebook, said in a statement. “Through this effort, we make massive amounts of privacy-protected data available to academics so they can study Facebook’s impact on the world. We also pro-actively seek feedback from the research community about what steps will help them advance research most effectively going forward.”

Data access affects researchers’ ability to study Facebook’s biggest problems. And the pandemic has further highlighted just how significant that work can be. Facebook’s unwillingness to share more data about vaccine misinformation has been repeatedly called out by researchers and public health officials. It’s all the more vexing because Facebook employs a small army of its own researchers and data scientists. Yet much of their work is never made public. “They have a really solid research team, but virtually everything that research team does is kept only within Facebook, and we never see any of it,” says Marwick, the UNC professor.

But much of Facebook’s internal research could help those outside the platform who are trying to understand the same questions, she says. “I want more of the analysis and research that's going on within Facebook to be communicated to the larger scholarly community, especially stuff around polarization [and] news sharing. I have a fairly strong sense that there's research questions that are actively being debated in my research community that Facebook knows the answer to, but they can't communicate it to us.”

The rise of ‘data donation’

To get around this lack of access, researchers are increasingly looking to “data donation” programs. Like the browser extension used by the NYU researchers, these projects recruit volunteers to “donate” some of their own data for research.

NYU’s Ad Observer, for example, collected data about ads on Facebook and YouTube, with the goal of helping researchers understand the platforms’ ad targeting at a more granular level. Similarly, Mozilla, maker of the Firefox browser, has a browser add-on called Rally that helps researchers study a range of issues, from COVID-19 misinformation to local news. The Markup, a nonprofit news organization, has also created Citizen Browser, a customized browser that aids journalists’ investigations into Facebook and YouTube. (Unlike Mozilla and NYU’s browser-based projects, The Markup pays users who participate in Citizen Browser.)
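
As a rough illustration of the general idea, here’s a minimal sketch of what a “data donation” content script can look like: something a volunteer installs that scans the page for sponsored posts and records only ad-level metadata, never the volunteer’s own posts or profile. The selector, record fields and logging below are placeholders; this is not the actual code behind Ad Observer, Rally or Citizen Browser.

```typescript
// Minimal sketch of a "data donation" collector: a browser content script a
// volunteer opts into, which looks for sponsored posts on the page and keeps
// only ad-level metadata. The [data-sponsored] selector and the record fields
// are hypothetical; real feeds don't label ads this cleanly, and this is not
// how Ad Observer, Rally or Citizen Browser are actually implemented.
interface DonatedAdRecord {
  advertiser: string;
  adText: string;
  observedAt: string; // ISO timestamp
}

function collectSponsoredPosts(root: Document): DonatedAdRecord[] {
  const records: DonatedAdRecord[] = [];
  for (const el of Array.from(root.querySelectorAll("[data-sponsored='true']"))) {
    records.push({
      advertiser: el.getAttribute("data-advertiser") ?? "unknown",
      adText: (el.textContent ?? "").slice(0, 200),
      observedAt: new Date().toISOString(),
    });
  }
  return records;
}

// A real extension would send these records to the research project's server;
// here we just log them, and nothing about the user's own account is touched.
console.log(collectSponsoredPosts(document));
```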

“The biggest single problem in our research community is the lack of access to private proprietary data,” says Marwick. “Data donation programs are one of the tactics that people in my community are using to try to get access to data, given that we know the platforms aren’t going to give it to us.”

Crucially, it’s also data that’s collected independently, and that may be the best way to ensure true transparency, says Rebecca Weiss, who leads Mozilla’s Rally project. “We keep getting these good faith transparency efforts from these companies but it's clear that transparency also means some form of independence,” Weiss tells Engadget.

For participants, these programs offer social media users a way to make sure some of their data, which is constantly being scooped up by mega-platforms like Facebook, can also be used in a way that is within their control: to aid in research. Weiss says that, ultimately, it’s not that different from market research or other public science projects. “This idea of donating your time to a good faith effort — these are familiar concepts.”

Researchers also point out that there are significant benefits to gaining a better understanding of how the most influential and powerful platforms operate. The study of election ads, for example, can expose bad actors trying to manipulate elections. Knowing more about how health misinformation spreads can help public health officials understand how to combat vaccine hesitancy. Weiss notes that having a better understanding of why we see the ads we do — political or otherwise — can go a long way toward demystifying how social media platforms operate.

“This affects our lives on a daily basis and there's not a lot of ways that we as consumers can prepare ourselves for the world that exists with these increasingly more powerful ad networks that have no transparency.”

Facebook AI mislabels video of Black men as 'Primates' content

Facebook has apologized after its AI slapped an egregious label on a video of Black men. According to The New York Times, users who recently watched a video posted by the Daily Mail featuring Black men saw a prompt asking them if they'd like to "[k]eep seeing videos about Primates." The social network apologized for the "unacceptable error" in a statement sent to the publication. It also disabled the recommendation feature responsible for the message while it looks into the cause, to prevent serious errors like this from happening again.

Company spokeswoman Dani Lever said in a statement: "As we have said, while we have made improvements to our AI, we know it's not perfect, and we have more progress to make. We apologize to anyone who may have seen these offensive recommendations."

Gender and racial bias in artificial intelligence is hardly a problem that's unique to the social network — facial recognition technologies are still far from perfect and tend to misidentify POCs and women in general. Last year, false facial recognition matches led to the wrongful arrests of two Black men in Detroit. In 2015, Google Photos tagged the photos of Black people as "gorillas," and Wired found a few years later that the tech giant's solution was to censor the word "gorilla" from searches and image tags.

A few months ago, the social network shared with the AI community a dataset it created in an effort to combat the issue. It contained over 40,000 videos featuring 3,000 paid actors who shared their age and gender with the company. Facebook even hired professionals to light their shoots and to label their skin tones, so AI systems can learn what people of different ethnicities look like under various lighting conditions. The dataset clearly wasn't enough to completely solve AI bias for Facebook, further demonstrating that the AI community still has a lot of work ahead of it.