Twitter has given Ye, formerly known as Kanye West, a 12-hour suspension after he tweeted a photo of the Star of David merged with a swastika. In a public exchange on Twitter, website owner Elon Musk told the rapper that tweeting a photo of Musk being hosed down on a yacht was fine, but tweeting the antisemitic image was not. Ye then posted a screenshot of his account on Truth Social, the social media platform backed by Donald Trump, showing that his account had been limited for 12 hours for violating Twitter's terms of service.
Ye also shared a screenshot of his private exchange with Musk, in which the executive said: "Sorry, but you have gone too far. This is not love." In a couple of follow-up tweets, Musk said he tried his best to communicate with Ye, but the rapper still chose to violate Twitter's rule against inciting violence. Twitter suspended Ye in October for posting antisemitic messages that said he would go "death con 3 On JEWISH PEOPLE." His account was reinstated in November, along with those of other controversial figures, including former President Donald Trump and Marjorie Taylor Greene.
Shortly after he was suspended on both Twitter and Instagram in October, Ye entered a deal to acquire the "free speech" social media app Parler. "In a world where conservative opinions are considered to be controversial we have to make sure we have the right to freely express ourselves," Ye said back then. Yesterday, however, Parler's parent company, Parlement Technologies, announced that the acquisition will no longer go through. While Parler said the two sides had mutually agreed to "terminate the intent of sale" in mid-November, the news came out only after Ye's appearance on Alex Jones' InfoWars show. During the interview, Ye went on an antisemitic tirade in which he denied that the Holocaust had happened while praising Nazis and Hitler.
Meta's election integrity efforts on Facebook may not have been as robust as claimed. Researchers at New York University's Cybersecurity for Democracy and the watchdog Global Witness have revealed that Facebook's automatic moderation system approved 15 out of 20 test ads threatening election workers ahead of last month's US midterms. The experiments were based on real threats and used "clear" language that should have been easy to catch. In some cases, the social network even approved ads after only minor changes: the research team just had to remove profanity and fix spelling to get past initial rejections.
The investigators also tested TikTok and YouTube. Both services stopped all the threats and banned the test accounts. In an earlier experiment before Brazil's election, Facebook and YouTube approved all of the election misinformation submitted during an initial pass, although Facebook rejected up to 50 percent in follow-up submissions.
In a statement to Engadget, a spokesperson said the ads were a "small sample" that didn't represent what users saw on platforms like Facebook. The company maintained that its ability to counter election threats "exceeds" that of rivals, but only backed the claim by pointing to quotes that illustrated the amount of resources committed to stopping violent threats, not the effectiveness of those resources.
The ads wouldn't have done damage, as the experimenters had the power to pull them before they went live. Still, the incident highlights the limitations of Meta's partial dependence on AI moderation to fight misinformation and hate speech. While the system helps Meta's human moderators cope with large amounts of content, it also risks greenlighting ads that might not be caught until they're visible to the public. That could not only let threats flourish, but invite fines from the UK and other countries that plan to penalize companies which don't quickly remove extremist content.
I don’t know about your LinkedIn experience, but each time I visit the website I find my inbox flooded with messages. Most aren’t even worth reading, but a few inevitably promise new career opportunities and the chance to work with interesting people.
LinkedIn wants to make it easier to find those messages quickly. Starting today, the social network is rolling out a new feature called Focused Inbox. It separates your inbox into two tabs titled “Focused” and “Other.” A machine learning algorithm will then do its best to flag messages that include the most relevant outreach to you and push them to the top of the Focused tab. If you don’t find the feature useful, you can switch to the old interface at any time.
LinkedIn’s hope is that the feature helps people be more productive. The Focused Inbox comes at a time when the company says more of its users are turning to its instant messaging feature to communicate. In the last year, LinkedIn says it has seen a 20 percent increase in those types of chats.
A lot more Netflix viewers will reportedly be able to watch the service's originals before they become available for streaming. According to The Wall Street Journal, the streaming service will expand its pool of preview viewers early next year, from its current group of around 2,000 people to as many as tens of thousands of subscribers around the world.
When Variety reported about the company's focus group earlier this year, the publication said that Netflix has been asking subscribers if they want to join "a community of members to view and give feedback on upcoming movies and series" since at least May 2021. "It's simple, but an incredibly important part of creating best-in-class content for you and Netflix members all around the world," the email reportedly said. Apparently, Netflix asks members of the group to watch several unreleased shows and movies over the course of six months. They then have to fill out a survey form to tell the company what they liked and what they didn't.
In its newer report, The Journal said the streaming service calls the group the "Netflix Preview Club," and that the Leonardo DiCaprio and Jennifer Lawrence film Don't Look Up was one of the movies that benefited from its feedback. The preview group's members reportedly told Netflix the movie was initially too serious, and the film's creators listened to them and ratcheted up its comedic elements.
As The Journal notes, Netflix is known for giving creators a lot of creative freedom — even if it doesn't always lead to great content — so running a preview group has been tricky. The company has apparently been careful when it comes to sharing feedback with creators and has not been forcing changes. It's still the creators' decision whether to incorporate changes based on the previewers' response.
YouTube has revealed its top videos and creators of 2022. At the top of the US trending video list is the final video from Technoblade, a Minecraft creator who died after a battle with cancer. Technoblade wrote a farewell message to fans that his father read in the video, which has more than 87.6 million views.
The trending video list is based on US video views, which explains why MrBeast's recreation (and giveaway) of Willy Wonka's chocolate factory is in fifth place despite having 126 million total views. YouTube also excludes Shorts, music videos, trailers and children's videos from this list.
Speaking of MrBeast, he was the top creator based on the number of subscribers gained in the US. That's not too surprising, since he has the most subscribers of any individual creator (Indian music label T-Series has the most overall). YouTube says that list doesn't take into account artists, brands, media companies or children's content.
Elsewhere, YouTube revealed the top songs in the US for 2022 (featuring tracks released this year or older ones that saw a significant uptick in views). "We Don't Talk About Bruno" from Disney's Encanto topped the list with 503 million views. Bad Bunny and Karol G each had two songs on the list.
Twitter is now pushing more tweets from accounts users don’t already follow into their timelines. The company revealed that it’s now surfacing recommendations to all its users, even people who had successfully avoided them in the past.
“We want to ensure everyone on Twitter sees the best content on the platform, so we’re expanding recommendations to all users, including those who may not have seen them in the past,” the company wrote in a tweet.
It’s not clear if this means recommendations will begin to appear in the “latest” timeline, which sorts tweets chronologically and has historically not included recommendations, or if Twitter is simply making recommendations more prominent in other parts of the app. In its tweet, the company pointed to a blog post from September, which states that “recommendations can appear in your Home timeline, certain places within the Explore tab, and elsewhere on Twitter.”
Anecdotally, it seems some users are already reporting noticeable changes to their timelines, with the appearance of new topic suggestions and many tweets from seemingly random accounts.
Though the change may feel jarring, it’s not the first time the company has experimented with adding more suggested content. Twitter has been pushing recommendations into various parts of its service for years, though it has sometimes tweaked how often these suggestions appear. In the past, Twitter has also been careful to note that it bars certain types of content from recommendations in order to avoid amplifying potentially harmful or low-quality content, though it’s not entirely clear if that’s still the case. The company no longer has a communications team.
Interestingly, Twitter’s current CEO, Elon Musk, hasn’t always spoken favorably about the platform’s recommendation algorithms. Back in May, he tweeted that using the app’s “latest” timeline was crucial to “fix” Twitter’s feed. “You are being manipulated by the algorithm in ways you don’t realize,” he said at the time. Musk, who has also spoken about his desire to open source Twitter’s algorithms, hasn’t yet weighed in on the new expansion of recommendations, or how the feature works.
From the moment that people started getting nasty with Johannes Gutenberg's newfangled printing press, sexually explicit content has led the way towards wide-scale adoption of mass communication technologies. But with every advance in methodology has invariably come a backlash — a moral panic here, a book burning there, the constant uncut threat of mass gun violence — aiming to suppress that expression. Now, given the things I saw Googling "sexually explicit printing press," dear reader, I can assure you that their efforts will ultimately be in vain.
But that hasn't stopped social media corporations, advertisers, government regulators and the people you most dread seeing in your building's elevator from working to erase sexuality-related content from the world wide web. In the excerpt below from her most excellent new book, How Sex Changed the Internet and the Internet Changed Sex: An Unexpected History, Motherboard Senior Editor Samantha Cole discusses the how and why of Facebook, Instagram and Google's slow strangling of online sexual speech over the past 15 years.
Human and algorithmic censorship has completely changed the power structure of who gets to post what types of adult content online. This has played out as independent sex workers struggling to avoid getting kicked off of sites like Instagram or Twitter just for existing as people—while big companies like Brazzers, displaying full nudity, have no problem keeping their accounts up.
Despite Facebook’s origins as Mark Zuckerberg’s Hot-or-Not rating system for women on his Harvard campus, the social network’s policies on sexuality and nudity are incredibly strict. Over the years, it’s gone through several evolutions and overhauls, but in 2022 forbidden content includes (but isn’t limited to) “real nude adults,” “sexual intercourse” and a wide range of things that could imply intercourse “even when the contact is not directly visible,” or “presence of by-products of sexual activity.” Nudity in art is supposedly allowed, but artists and illustrators still fight against bans and rejected posts all the time.
That’s not to mention “sexual solicitation,” which Facebook will not tolerate. That includes any and all porn, discussions of states of sexual arousal, and anything that both asks or offers sex “directly or indirectly” and also includes sexual emojis like peaches and eggplants, sexual slang, and depictions or poses of sexual activity.
These rules also apply on Instagram, the photo-sharing app owned by Facebook. As the two biggest social networks in the US, Facebook and Instagram dictate how much of the internet sees and interacts with sexual content.
In the earliest archived versions of Facebook’s terms of use, sex was never mentioned—but its member conduct guidelines did ban “any content that we deem to be harmful, threatening, abusive, harassing, vulgar, obscene, hateful, or racially, ethnically or otherwise objectionable.” This vagueness gives Facebook legal wiggle room to ban whatever it wants.
The platform took a more welcoming approach to sexual speech as recently as 2007, with Sexuality listed as one of the areas of interest users could choose from, and more than five hundred user-created groups for various discussions around the topic. But the platform’s early liberality with sex drew scrutiny. In 2007, then–New York attorney general Andrew Cuomo led a sting operation on Facebook in which investigators posed as teens and caught child predators.
As early as 2008, Facebook started banning female breasts—specifically, nipples. The areola violated its policy on “obscene, pornographic or sexually explicit” material. In December 2008, a handful of women gathered outside the company’s Palo Alto office to breastfeed in front of the building in protest (it was a Saturday; no executives were working).
As of 2018, Facebook lumped sex work under banned content that depicts “sexual exploitation,” stating that all references and depictions of “sexual services” were forbidden, “includ[ing] prostitution, escort services, sexual massages, and filmed sexual activity.”
A lot of this banned content is health and wellness education.
In 2018, sexuality educator Dr. Timaree Schmit logged in to Facebook and checked her page for SEXx Interactive, which runs an annual sex ed conference she’d held the day before. A notification from Facebook appeared: She and several other admins for the page were banned from the entire platform for thirty days, and the page was taken down, because an “offending image” had violated the platform’s community standards. The image in question was the word SEXx in block letters on a red background.
The examples of this sort of thing are endless and not limited to Facebook. Google AdWords banned “graphic sexual acts with intent to arouse including sex acts such as masturbation” in 2014. Android keyboards’ predictive text banned anything remotely sexual, including the words “panty,” “braless,” “Tampax,” “lactation,” “preggers,” “uterus,” and “STI” from its autocomplete dictionary. Chromecast and Google Play forbid porn. You can’t navigate to adult sites using Starbucks Wi-Fi. For a while in 2018, Google Drive seemed to be blocking users from downloading documents and files that contained adult content. The crowdfunding site Patreon forbids porn depicting real people, and in 2018 blamed its payment processor, Stripe, for not being sex-friendly. Much of this followed FOSTA/SESTA.
This is far from a complete list. There are countless stories like this, where sex educators, sex workers, artists, and journalists are censored or pushed off platforms completely for crossing these imaginary lines that are constantly moving.
Over the years, as these policies have evolved, they’ve been applied inconsistently and often with vague reasoning for the users themselves. There is one way platforms have been consistent, however: Images and content of Black and Indigenous women, as well as queer and trans people, sex workers, and fat women, experience the brunt of platform discrimination. This can lead to serious self-esteem issues, isolation, and in some cases, suicidal thoughts for people who are pushed off platforms or labeled “sexually explicit” because of their body shape or skin color.
“I’m just sick of feeling like something is wrong with my body. That it’s not OK to look how I do,” Anna Konstantopoulos, a fat Instagram influencer, said after her account was shut down and posts were deleted multiple times. Her photos in bikinis or lingerie were deleted by Instagram moderators, while other influencers’ posts stayed up and raked in the likes. “It starts to make you feel like crap about yourself.”
In spite of all of this, people project their full selves, or at least a version of themselves, onto Facebook accounts. Censorship of our sexual sides doesn’t stop people from living and working on the internet—unless that is your life and work.
Twitter won't be firing or laying off any more people, Elon Musk reportedly told remaining staff members during an all-hands meeting, after having asked employees to commit to an "extremely hardcore" Twitter. According to The Verge, which heard a partial recording of the event, the company is even actively looking for people to fill roles in engineering and sales. Musk apparently made the announcement on the same day layoffs hit the company's sales and partnerships teams. Robin Wheeler, Twitter's head of ad sales, and Maggie Suniewick, VP of partnerships, were reportedly fired for opposing Musk's directive to cut more employees. Of course, all of this happened after the website's new owner ordered layoffs that cut the company's workforce in half.
Musk didn't specify which roles Twitter is hiring for during the meeting, The Verge said, but he did say that "[i]n terms of critical hires, people who are great at writing software are the highest priority." Since this all-hands was also the first time Musk met with staff members following his takeover, employees asked him questions about the company's future, including whether Twitter will move its HQ to Texas like Tesla did. Musk replied that there are no plans for Twitter to move, but that being "dual-headquartered" in both states could make sense.
He also said moving to Texas would "play into the idea that Twitter has gone from being left-wing to right-wing." Musk said that's not the case. "It is a moderate-wing takeover of Twitter... to be the digital town square, we must represent people with a wide array of views even if we disagree with those views," he added. As The Verge notes, Twitter recently fired people who called out Musk through tweets and through other avenues.
In addition to addressing questions about the inner workings of the company, Musk announced during the meeting that Twitter might not be relaunching paid verification before this month ends, after all. If you'll recall, the website had to pause its $8-a-month Blue subscription with verification shortly after it was launched due to a steep rise in impersonation and fake accounts on the website.
Musk previously said that Blue Verified would return on November 29th. But he has now told employees, and also announced publicly, that Twitter won't relaunch the subscription system until the website is confident it can stop impersonation. Twitter might also ultimately give individuals and organizations different-colored checkmarks, which would make it apparent whether users are interacting with a company's or organization's actual account. Twitter already has a gray "Official" checkmark reserved for organizations, but it seems to want to make the indicator more visible and recognizable as a way to prevent people from being duped by impersonators.
Meta is taking new steps to lock down teens’ privacy settings. The company is making changes to the default privacy settings for teens’ Facebook accounts, and further limiting the ability of “suspicious” adults to message teens on Instagram and Facebook.
On Facebook, Meta says it will start automatically changing the default privacy settings on new accounts created by teens under 16. With the changes, the visibility of their friend list, tagged posts, and pages and accounts they follow will be automatically set to “more private settings.”
Notably, the new settings will only be automatically switched on for new accounts created by teens, though Meta says it will nudge existing teen accounts to adopt similar settings. The update follows a similar move from Instagram, which began making teen accounts private by default last year.
Meta is also making new changes meant to prevent “suspicious” adults from contacting teens. On Facebook, it will block these accounts from the site’s “people you may know” feature, and on Instagram it will test removing the message button from teens’ profiles. The company didn’t share exactly how it will determine who is “suspicious,” but said it would take into account factors like whether someone has been recently blocked or reported by a younger user.
Additionally, Meta said it’s working with the National Center for Missing and Exploited Children (NCMEC) on a “global platform” to prevent the non-consensual sharing of intimate images of teens. According to Meta, the platform, which could launch by mid-December, will work similarly to a system designed to prevent the sharing of similar images from adults.
According to a Facebook spokesperson, the system will allow teens to generate a “private report” for images on their devices they don’t want shared. The platform, operated by NCMEC, would then create a unique hash of the image, which would go into a database so companies like Facebook can detect when matching images are shared on their platforms. The spokesperson added that the original image never leaves the teen’s device.
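The flow described above can be sketched in a few lines of code. This is a hypothetical illustration only: the function and class names are invented, and where real systems of this kind use perceptual hashes that survive resizing and re-encoding, a plain cryptographic hash is used here just to show how a fingerprint, rather than the image itself, can be shared and matched.

```python
import hashlib


def hash_image(image_bytes: bytes) -> str:
    """Produce a fingerprint of an image; the image itself never leaves the device."""
    return hashlib.sha256(image_bytes).hexdigest()


class HashDatabase:
    """Stands in for the shared database of hashes from private reports."""

    def __init__(self) -> None:
        self._hashes: set[str] = set()

    def report(self, image_bytes: bytes) -> None:
        # A teen reports an image on-device; only the hash is uploaded.
        self._hashes.add(hash_image(image_bytes))

    def is_match(self, image_bytes: bytes) -> bool:
        # A platform checks an uploaded image against the database.
        return hash_image(image_bytes) in self._hashes


db = HashDatabase()
db.report(b"private-photo-bytes")
print(db.is_match(b"private-photo-bytes"))    # True: matching upload detected
print(db.is_match(b"unrelated-photo-bytes"))  # False: unrelated image passes
```

The key property, as the spokesperson describes it, is that the database holds only fingerprints, so participating companies can detect re-shared copies without ever receiving the original image.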
Twitter may not be restoring Blue verification for a couple of weeks, but it hopes to be more careful when the feature comes back. The social network has updated its FAQ site to warn that new accounts will have to wait 90 days before they can subscribe to Blue. The company also says it reserves the right to demand waiting periods "at our discretion without notice."
The new policy comes shortly after Twitter blocked new accounts from joining Blue. Within two days of Twitter adopting its pay-to-verify system, the social media service grappled with a flood of impersonators and trolls using their new checkmarks to confuse users. The firm tried using a secondary "official" checkmark for public figures and organizations, but new Twitter owner Elon Musk scrapped the system mere hours after it launched.
Musk added that a "new release" would discourage fraudsters by dropping the Blue checkmark if they change their name — they wouldn't get it back until Twitter confirmed that the new handle honored the Terms of Service. There isn't yet any official policy to this effect, however.
There's plenty of pressure for revised policies like these. Senator Ed Markey has grilled Elon Musk over the ease of creating fake accounts under the new verification system, and suggested that Congress might intervene if the entrepreneur doesn't fix Twitter and his other brands. Twitter is also dealing with internal chaos as employees resign en masse in response to Musk's demands for "long hours" from "hardcore" staff.