Posts by Karissa Bell

What happened to the 'Meta' Instagram handle?

When Facebook announced it was rebranding to Meta, the company was prepared. Right after Mark Zuckerberg delivered a meandering keynote extolling the benefits of the metaverse, the company revealed it had repainted its iconic “thumbs up” sign that sits at its headquarters in Menlo Park. Many of its social media accounts also switched over, from Facebook to Meta.

Except for one key account, that is. As many pointed out at the time, the company didn’t control the @Meta handle on Instagram. It belonged to a small Denver-based magazine called META. The day of Facebook’s announcement, the company, which publishes lifestyle stories about motorcycles, posted a photo of assorted print issues with the caption “Since 2014.”

That evening, recent posts from the @Meta account were filled with comments encouraging the owner to “hold” the account, or at least sell it for a high price. “Hold and sell high,” one user wrote. But by the next day, the account had mysteriously vanished, as Quartz reported. It’s unclear exactly what happened, but @Meta now hosts all the content from the company’s previous @Facebook Instagram page, with posts predating October 28th, as if the social network had always controlled the handle. Posts from META, the magazine, now appear under the @readmeta handle.

META the publisher didn’t respond to requests for comment. But signs of its former Instagram account remain: the company’s website still links to its old instagram.com/meta account. Oddly, clicking that link from the publisher’s site turns up an error, even though it points to the same URL as the now Facebook-owned Meta account.


On Tuesday, Ben Geise, META’s co-founder and editor-in-chief, announced that the magazine’s most recent issue would be the last under the name it had used for more than eight years. “We value our individuality above all else, so when the news broke that a corporate Goliath was changing its name to Meta, it felt like a punch to the gut,” he wrote in a blog post. “With the flip of a switch our identity was suddenly watered down, and we watched our name circle the drain and wash away with something we had no control over.”

Geise didn’t respond to requests for comment, so it’s difficult to know exactly what happened. But Instagram’s terms of service state that businesses are unable to “reserve” handles. And the terms stipulate that companies can’t claim trademark violations if the account owner is using it for an unrelated purpose. “Using another's trademark in a way that has nothing to do with the product or service for which the trademark was granted is not a violation of Instagram's trademark policy,” the policy states. “Instagram usernames are provided on a first-come, first-served basis.”

Of course, accounts and handles often trade hands anyway. Businesses have been known to use escrow services to negotiate account transfers, while others have used shadier marketplaces to gain access to accounts with desirable handles.

But the practice is also officially prohibited by Instagram’s terms of service. “You can’t sell, license, or purchase any account or data obtained from us or our Service,” the terms state. “This includes attempts to buy, sell, or transfer any aspect of your account (including your username); solicit, collect, or use login credentials or badges of other users; or request or collect Instagram usernames, passwords, or misappropriate access tokens.”

That raises questions about whether Facebook skirted its own rules in order to gain access to a coveted username, the kind of action other users are routinely banned for, or whether the company found another justification for taking over the account. An Instagram spokesperson didn’t immediately respond to a request for comment.

For now, META the publisher says it’s focused on the future. “Our brand is much more than just a name. We represent a way of living,” Geise wrote. “We speak to inspire and encourage the rare breed of humans out there bold enough to chase their dreams and never look back.”

Meta expands bug bounty program to reward discoveries of scraped data

Meta is expanding its bug bounty program to reward researchers who report data scraping. The change allows researchers to report both bugs that could enable scraping activity and previously scraped data that has already been published online.

In a blog post, Meta says it believes it is the first to launch a bug bounty program specifically targeting scraping activity. “We're looking to find vulnerabilities that enable attackers to bypass scraping limitations to access data at greater scale than what we initially intended,” Security Engineering Manager Dan Gurfinkle told reporters during a briefing.

Data scraping differs from other “malicious” activity Meta tracks in that it uses automated tools to mass-collect personal information from users’ profiles, such as email addresses, phone numbers, profile photos and other details. Even though users often willingly share this information on their public Facebook profiles, scrapers can expose these details more widely, such as by publishing the information in searchable databases.

It can also be difficult for Meta to combat this activity. For example, in April the personal information of more than 500 million Facebook users was published on a forum. In that case, the actual data scraping had occurred years prior, and the company had already addressed the underlying flaw. But there was little it could do once the data started circulating online. In some cases, the company has also sued individuals for data scraping.

Under the new bug bounty program, researchers will be rewarded for finding “unprotected or openly public databases containing at least 100,000 unique Facebook user records with PII [personally identifiable information] or sensitive data (e.g. email, phone number, physical address, religious or political affiliation).” Instead of its usual payouts, though, Meta says it will donate to a charity chosen by the researcher, so as not to incentivize the publication of scraped data.

For reports of bugs that can lead to data scraping, researchers can choose between a donation or a direct payout. Meta says each bug or dataset is eligible for at least a $500 award.

Senators came to the Instagram hearing armed with their teenaged finstas

Instagram’s top executive, Adam Mosseri, spent more than two hours being grilled by the Senate about the app’s safety policies and its impact on teens’ mental health. Unfortunately for Mosseri, members of the subcommittee on Consumer Protection, Product Safety, and Data Security came to the hearing armed with fresh anecdotes from their own finstas.

During the hearing, Mosseri’s first appearance before Congress, multiple senators revealed that they and their staffs had created fresh Instagram accounts disguised as teenagers. They all said that the app had steered them toward content that was inappropriate for young users, including “anorexia coaches” and other content related to self-harm.

The staff of one lawmaker, Tennessee Senator Marsha Blackburn, managed to uncover a significant bug in one of Instagram's teen safety features. Blackburn said that her staff created a fresh account as a 15-year-old girl, but that the account defaulted to public, not private. Instagram said in July that teens younger than 16 signing up for the first time would be defaulted to private accounts.

“While Instagram is touting all these safety measures, they aren't even making sure that the safety measures are in effect for me,” Blackburn said. Mosseri later confirmed that the company had mistakenly not enabled the private default settings for new accounts created on the web. “We will correct that quickly,” he said.

Blackburn wasn’t the only senator who came to the hearing prepared with questions about what they saw on a staff-created finsta. Connecticut Senator Richard Blumenthal said that his staff had created a new account posing as a 13-year-old just days before. He said after following “a few accounts promoting eating disorders,” that “within an hour, all of our recommendations promoted pro-anorexia and eating disorder content.” He later added that a search for self-harm content turned up results so graphic he didn’t feel he could describe them.

It was a notable shift from the cringeworthy moment at a September hearing, when Blumenthal clumsily pushed Facebook’s Head of Safety, Antigone Davis, on whether she could “commit to ending finsta.” This time, Blumenthal pressed Mosseri on whether he would commit to ending work on Instagram Kids entirely — Mosseri did not — and whether he would make more data about Instagram’s algorithms available to researchers outside the company.

“I can commit to you today that we will provide meaningful access to data so that third party researchers can design their own studies and make their own conclusions about the effects of well being on young people,” Mosseri said. He later added that Instagram is working on giving users the option for a chronological feed.

Utah Senator Mike Lee also shared his experience creating an Instagram account for a fictitious 13-year-old girl. He described how the recommendations in the account’s Explore page changed after following just one account. “The Explore page yielded fairly benign results at first,” he said. “We followed the first account that was recommended by Instagram, which happened to be a very famous female celebrity. After following that account, we went back to the Explore page and the content quickly changed.

“Why did following Instagram’s top recommended account for a 13-year-old girl cause our Explore page to go from showing really innocuous things, like hairstyling videos, to content that promotes body dysmorphia, sexualization of women and content otherwise unsuitable for a 13-year-old girl?”

Mosseri replied that, according to the company’s Community Standards Enforcement report, content promoting eating disorders accounts for “roughly five out of every 10,000 things viewed.” Lee didn’t buy it. “It went dark fast,” he said. “It was not five in 1,000 or five in 10,000, it was rampant.”

While much of what the senators described was similar to what journalists and others have reported experiencing on Instagram, the exchanges were telling because they underscored a point that’s been raised by whistleblower Frances Haugen and others studying the company: That Facebook often uses deceptive statistics to mask its problems. And that the sheer size of the platform means that even relatively low amounts of harmful content can have an outsize impact on users. 

It also indicated just how seriously lawmakers are taking the issue of teen safety and social media. While previous hearings with big tech executives have often veered wildly off topic, senators stayed relatively focused on the issues. And it was clear that there was bipartisan agreement on the need for Instagram to disclose more information about its platform to the public and to researchers.

“I would support federal legislation around the transparency of data, or the access to data from researchers, and around the prevalence of content problems on the platform,” Mosseri said. But Blumenthal pushed back, saying Instagram’s previous actions haven’t gone far enough.

“​​The kinds of baby steps that you've suggested so far, very respectfully, are underwhelming,” Blumenthal said at the close of the hearing. “I think you will sense on this committee, pretty strong determination to do something well beyond what you've indicated you have in mind. That's the reason that I think self-policing based on trust is no longer a viable solution.”

Instagram will bring back a chronological feed in 2022

After more than five years, Instagram plans to bring back “a version” of its chronological feed next year, the company’s top executive said on Wednesday. Speaking to lawmakers at a Senate hearing on Instagram and teen safety, Instagram head Adam Mosseri said that he supports “giving people the option to have a chronological feed.”

“We're currently working on a version of a chronological feed that we hope to launch next year,” Mosseri said, adding that the company has been working on the feature “for months.” He didn’t share additional details about how such a feed would work, but said that the company is “targeting the first quarter of next year” for a launch.

Launching an option for a chronological feed would be a major reversal for the photo sharing app, which has consistently defended its algorithmic feed despite persistent complaints and conspiracy theories from users about how their posts are ranked.

Developing…

Twitter will overhaul its reporting process for harmful tweets

Twitter is testing a new process for reporting tweets in what it says is a major overhaul intended to make it easier to flag harmful behavior on its platform. The change revamps how users flag tweets and significantly expands the criteria that can be included in reports.

In a blog post, Twitter says it’s revamping the process to take a “people first” approach, in which the reporting process begins by asking users “what happened” rather than expecting them to figure out which of the company’s complex policies may have been violated. That’s a significant change from the current process, which requires users to navigate through a series of menus and identify specific rules that were broken by the tweet in question.

Instead, the new reporting flow allows users to specify who was targeted and then describe how it happened. For example, it includes much more detailed ways to report hate speech, including hate speech targeting groups of people. Then, once users have described the incident, Twitter will suggest which of its rules may apply.


The company says this process is both simpler for users, and could help the company improve its policies and process further. “The more first-hand information they can gather about how people are experiencing certain content, the more precise Twitter can be when it comes to addressing it or ultimately removing it,” the company writes. “This rich pool of information, even if the Tweets in question don't technically violate any rules, still gives Twitter valuable input that they can use to improve people’s experience on the platform.”

Twitter is currently testing the new reporting flow with a “small group” of users in the US, and plans to expand it to more people in 2022.

Snapchat is hoping lens creators can make augmented reality useful

While Facebook is still trying to explain what a metaverse is, Snap has been working toward a very different vision for an AR-enabled future. Over the last four years, the company has recruited a small army of artists, developers and other AR enthusiasts to help build out its massive library of in-app AR effects.

Now Snap is tapping lens creators — it has more than 250,000 — to help it make AR more useful. The company is releasing new tools that allow artists and developers to essentially make mini apps inside of their lenses. And it’s working on getting its latest AR innovation into more of their hands: its next-generation Spectacles, which have AR displays.

The goal, according to Sophia Dominguez, Snap’s head of AR platform partnerships, is to expand the ways AR can be used, while keeping those experiences firmly rooted in the real world — not the metaverse. “You don't want to escape into another world,” she says of Snap’s philosophy. “To us, the world around us is magical, and there's things to learn from it.” Instead, she says, the company is looking to enable experiences that can “bridge” digital spaces and physical ones, and create “more useful applications” for AR.

For now, Snap is leaving it up to creators to figure out exactly what a “useful” augmented reality lens is, but it’s giving them a number of new tools to help build one. At its annual Lens Fest event, Snap is introducing a new version of Lens Studio, the software that allows developers to create and publish lenses inside of Snapchat.

The latest version includes new APIs that allow AR creators to connect their effects to real-time information, resulting in lenses that work almost like mini apps inside Snapchat. For example, creators can build lenses with real-time translation capabilities via iTranslate, or check on their preferred cryptocurrency with a lens powered by crypto platform FTX. Weather data (via AccuWeather) and stock market info (from Alpaca) will also be available, and the company is planning to add more partners in the near future.


These kinds of lenses are even more intriguing in the context of Snapchat’s augmented reality glasses, its “next-gen” Spectacles. Snap first showed off the glasses earlier this year, saying that the device would only be available to a small number of AR creators and developers. Since then, the company has handed out its latest Specs to hundreds of creators, who have been helping Snap figure out how far they can push the tech, and what kinds of new experiences AR glasses can enable.

That’s because with Spectacles, a “lens” doesn’t just have to be something that goes on top of a selfie — it can add contextual information to the world around you. For example, one creator experimenting with Spectacles recently showed off a concept where staring out a train window can surface details about where you are and what the weather is.

On a train trip today, I wanted to know what cities I'm passing by, so I created an #AR lens for @Spectacles to tell me where I am. pic.twitter.com/98Wud3i7K3

— Vova Kurbatov (@V_Kurbatov) November 29, 2021

Nike created a Spectacles lens that allows runners to follow an AR pigeon along their route, and view different animations along the way.

Nike + @snapchat AR = no more boring runs.

What do you think @EliudKipchoge? 🏃‍♀️🏃 pic.twitter.com/dB0Ik91DZB

— Nike.com (@nikestore) November 5, 2021

Creator Brielle Garcia, who has also been experimenting with Spectacles, recently previewed a concept for an AR menu that allows users to view 3D models of meals on a restaurant menu. Other creators are experimenting with interactive shopping and gaming lenses.

“When you think about what's gonna get people to put glasses on their face every single day, those are the things today you’re checking your apps for,” Dominguez said. “We’re really excited to see the different UIs that people can create in augmented reality with this kind of utility.”


All that comes with an important caveat: the AR Spectacles aren’t for sale, and likely won’t be anytime soon, due to some pretty significant hardware limitations. Battery life is extremely limited (the charging case provides four extra charges) and the glasses themselves are, well, ugly.

While previous iterations of Spectacles look and feel like sunglasses, the next-gen Specs are comically large. Every time I see them, all I can think of is the chunky black frames worn by Roddy Piper in They Live. And, after spending some time wearing them, I can confirm that they are even more ridiculous looking when you put them on your face.

That said, Snap has been clear from the get-go that this version isn’t intended to look good, or even like something people will want to buy. Rather, the goal is to enable new types of AR development. 

And, despite the looks, their capabilities are impressive. The frames are equipped with “3D waveguides,” which power the AR displays, as well as dual cameras, speakers and microphones. On the left side is a capture button, so you can snap a photo or video of your surroundings, and on the right is a “scan” button. Much like the feature of the same name in the Snapchat app, scanning can help you find lenses based on your surroundings.

I only got to experiment with a handful of AR lenses while wearing the Specs, but the process was strikingly similar to using lenses in Snapchat’s app. I was able to scroll through a selection of lenses by swiping along a touchpad on the outside of the frames, then place a lens into my surroundings to see the AR effects around me. Like other AR headsets, the field of view is narrow enough that it’s not fully immersive, but I was impressed by the resolution and brightness of what I saw.

“I’ve worked on each generation of Spectacles and this one is by far the most fun,” says David Meisenholder, a senior product designer at Snap, pointing to the company’s close collaboration with its creator community. “We're also learning how far we need to go to make those perfect for the consumer glasses of the future.”

Congress quizzes Facebook whistleblower on potential Section 230 reforms

Frances Haugen, the former Facebook employee turned whistleblower, testified in Congress for the second time in less than two months. Speaking to the House Energy and Commerce subcommittee, Haugen once again urged Congress to act to rein in Facebook.

Unlike Haugen’s last Congressional hearing, during which she briefed senators on Facebook’s internal research, Wednesday’s hearing was meant to focus on potential reforms of social media platforms, specifically Section 230 of the Communications Decency Act, the 1996 law that shields online platforms from liability for their users' actions.

“This committee's attention and this Congress' action are critical,” she said during her opening statement. But she also told Congress they should be careful with changing the law as it could have unintended consequences.

“As you consider reform to section 230, I encourage you to move forward with your eyes open to the consequences of reform,” Haugen said. “Congress has instituted carve outs to Section 230 in recent years. I encourage you to talk to human rights advocates who can help provide context on how the last reform of 230 had dramatic impacts on the safety of some of the most vulnerable people in our society, but has been rarely used for its original purpose.”

Pennsylvania Rep. Michael Doyle began the hearing by acknowledging the importance of Section 230, but said the courts’ interpretation of the rule should change. “To be clear, Section 230 is critically important to promoting a vibrant and free internet,” he said. “But I agree with those who suggest the courts have allowed it to stray too far.”

But throughout the hearing, there was little discussion of specific changes or potential legislation that would change 230. Many members of Congress repeated the need for bipartisan action, but there seemed to be little agreement on what actions they should take. Doyle noted in his opening statement that members of the committee have proposed four bills that would make changes to Section 230, including one that would limit protections for companies that deployed “malicious” algorithms.

But those four bills were barely discussed during the four-hour hearing, which once again, veered into other issues. Many Republican members on the committee opted to focus on “censorship,” and their belief that platforms like Facebook are biased against them. Haugen countered that Facebook could implement changes that would make the platform safer regardless of a user’s political beliefs.

“We spent a lot of time today talking about censorship ... what we need to do is make the platform safer through product choices,” Haugen said, describing how adding “friction” to resharing content could reduce the spread of misinformation. “We need solutions like friction to make the platform safe for everyone even if you don’t speak English.”

At one point, Jim Steyer, CEO of Common Sense Media, appeared to grow frustrated. “I would like to say to this committee, you've talked about this for years, but you haven't done anything,” he said. “Show me a piece of legislation that you passed. 230 reform is going to be very important for protecting kids and teens on platforms like Instagram and holding them accountable and liable. But you also as a committee have to do privacy, antitrust and design reform.”

Meta’s crypto chief is leaving the company

David Marcus, the longtime Facebook executive who has overseen the company’s embattled cryptocurrency plans, is leaving Meta. Marcus plans to depart at the end of 2021, he wrote in a Facebook post.

The former PayPal executive first joined Facebook in 2014; he ran Messenger for four years before leaving the post to kickstart Facebook’s blockchain division. Since then, he’s overseen Meta’s long-troubled cryptocurrency plans, as well as other payments products like Facebook Pay.

“While there’s still so much to do right on the heels of hitting an important milestone with Novi launching — and I remain as passionate as ever about the need for change in our payments and financial systems — my entrepreneurial DNA has been nudging me for too many mornings in a row to continue ignoring it,” Marcus said.

Developing...

Jack Dorsey took on Twitter’s biggest problems, but leaves plenty of challenges for his successor

After a six-year stint as CEO (again), Jack Dorsey is leaving Twitter in a very different place than when he took it over in 2015. Back then, not everyone was excited about the return of the company’s cofounder. Even though he initially came back temporarily, employees and investors were concerned that dual CEO roles — he was, and still is, the CEO of Square — would keep him from being able to tackle the company’s many problems.

“The general feeling among Twitter employees now is trepidation,” The New York Times wrote in 2015 of Dorsey’s surprise return. “Many are concerned at the prospect of Mr. Dorsey’s interim title becoming permanent, given his divisive and sometimes erratic management style and the fact that he had been dismissed and returned to the company before.”

At the time, the company was often described as being “in turmoil.” Twitter was churning through executives, and investors were concerned about lackluster user growth. Journalists and other pundits often noted that Twitter never knew how to explain what it was or why it mattered. The actual service had barely changed in years. Harassment was rampant, and relatively unchecked.

Much has changed since then. Hand-wringing over Dorsey’s two jobs never really abated, but turnover at the top of the company eventually slowed, and Twitter started growing again. The platform still struggles with abuse, but it has made a concerted effort to encourage “healthy conversations” and has significantly ramped up its policies against hate speech and harassment.

More recently, the company has undertaken a number of ambitious initiatives to change its core features and create new sources of revenue. In the last year alone, Twitter has introduced new features for live audio, groups, and payments. It rolled out creator-focused features like Super Follows, and acquired a newsletter platform for longform content. Last month, it introduced Twitter Blue, a subscription service aimed at power users. The company is also in the early stages of Bluesky, a plan to create a decentralized standard for social media platforms.

But incoming CEO Parag Agrawal will still be inheriting significant challenges alongside all the shiny new projects. Though the company has made strides in increasing conversational “health,” it’s also grappled with where to draw the line between free speech and toxicity, particularly when political figures are involved. And, like other platforms, the company struggled to rein in misinformation during the COVID-19 pandemic and 2020 presidential election.

“Dorsey leaves behind a mixed legacy: a platform that's useful and potent for quick communication but one that's been exploited by a range of bad actors, including former President Donald Trump, who did his best on Twitter to undermine democracy—until Dorsey's people finally had enough and shut him down,” says Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, who has researched social media polarization.

That Twitter under Dorsey did eventually permanently ban Trump has only made the company more of a target for politicians. And that’s unlikely to change just because Twitter’s new CEO has been one of the company’s lowest profile executives.

Agrawal is taking over as social media platforms face a bigger reckoning about their role in society. As lawmakers eye regulating algorithms and other reforms, Twitter has started to research algorithmic amplification and potential “unintentional harms” caused by its ranking systems. It will now be up to the company’s former CTO to steer that work while navigating scrutiny from lawmakers.

Agrawal will also inherit ambitious goals Twitter set earlier this year: to double its revenue and grow its user base to 315 million monetizable daily active users (mDAU) by the end of 2023 (the company reported 211 million mDAU in its most recent earnings report). And there are some signs he may be well positioned to make that happen. While Twitter under Dorsey has been slow to make decisions and release updates, Agrawal has been a proponent of new features like Bitcoin tipping. He also oversaw Bluesky, the decentralization project.

The company has been betting that moving away from advertising and leaning into subscription services and other new features will help it get there. But Twitter is hardly alone in pursuing creators and subscriptions, and it’s not clear the company will be able to easily persuade large swaths of users to start paying for extra content or premium features.

Twitter’s new CEO seems to be well aware of the challenges ahead. “We recently updated our strategy to hit ambitious goals, and I believe that strategy to be bold and right,” Agrawal wrote in an email to employees he shared on Twitter. “But our critical challenge is how we work to execute against it and deliver results.”

Personalized warnings could reduce hate speech on Twitter, researchers say

A set of carefully worded warnings directed at the right accounts could help reduce the amount of hate on Twitter. That’s the conclusion of new research examining whether targeted warnings could reduce hate speech on the platform.

Researchers at New York University’s Center for Social Media and Politics found that personalized warnings alerting Twitter users to the consequences of their behavior reduced the number of tweets with hateful language in the week that followed. While more study is needed, the experiment suggests there is a “potential path forward for platforms seeking to reduce the use of hateful language by users,” according to Mustafa Mikdat Yildirim, the lead author of the paper.

In the experiment, researchers identified accounts at risk of being suspended for breaking Twitter’s rules against hate speech. They looked for people who had used at least one word contained in “hateful language dictionaries” over the previous week, who also followed at least one account that had recently been suspended after using such language.

From there, the researchers created test accounts with personas such as “hate speech warner,” and used the accounts to tweet warnings at these individuals. They tested out several variations, but all had roughly the same message: that using hate speech put them at risk of being suspended, and that it had already happened to someone they follow.

“The user @account you follow was suspended, and I suspect this was because of hateful language,” reads one sample message shared in the paper. “If you continue to use hate speech, you might get suspended temporarily.” In another variation, the account doing the warning identified themselves as a professional researcher, while also letting the person know they were at risk of being suspended. “We tried to be as credible and convincing as possible,” Yildirim tells Engadget.

The researchers found that the warnings were effective, at least in the short term. “Our results show that only one warning tweet sent by an account with no more than 100 followers can decrease the ratio of tweets with hateful language by up to 10%,” the authors write. Interestingly, they found that messages that were “more politely phrased” led to even greater declines, with a decrease of up to 20 percent. “We tried to increase the politeness of our message by basically starting our warning by saying that ‘oh, we respect your right to free speech, but on the other hand keep in mind that your hate speech might harm others,’” Yildirim says.

In the paper, Yildirim and his co-authors note that their test accounts only had around 100 followers each, and that they weren’t associated with an authoritative entity. But if the same type of warnings were to come from Twitter itself, or an NGO or other organization, then the warnings may be even more useful. “The thing that we learned from this experiment is that the real mechanism at play could be the fact that we actually let these people know that there's some account, or some entity, that is watching and monitoring their behavior,” Yildirim says. “The fact that their use of hate speech is seen by someone else could be the most important factor that led these people to decrease their hate speech.”