Posts by Karissa Bell

Why Facebook’s betting $1 billion on creators

Last month, Instagram held its first-ever Creator Week, a virtual event the company described as “a life-changing three days with new feature news and celeb drop-ins.” One of those drop-ins was CEO Mark Zuckerberg, who made a brief appearance to share a message with creators.

“I think that any good vision of the future has to involve a lot more people being able to make a living by expressing their creativity and by doing things they want to do, rather than things they have to — and having the tools and the economy around them to support their work is critical,” he said. “Our goal is to be the best platform for creators like you to make a living.”

This week, Zuckerberg went even further, announcing that Facebook plans to invest $1 billion in creators by the end of 2022. The investment will fund bonus programs, creator funds and other monetization programs to boost all stripes of creators on its platform.

That Facebook is funneling so much money and resources toward creators is indicative of not just the opportunity the company sees, but how much ground it has to make up.

For years, Facebook simply didn’t do much for creators. While Instagram has long had its own influencer community, the company has at times tried to limit their reach. Instagram’s founders were reportedly uncomfortable with the rise of influencers, and introduced an algorithmic feed to ensure users would see more posts from friends and family than brands and businesses.

While YouTube has offered monetization features for more than a decade, Instagram didn’t offer any kind of revenue sharing feature until last year. And many creators often felt at odds with Instagram. The company’s ever-changing algorithm fueled suspicions that it “shadowbans” or otherwise penalizes users who post too much or about the “wrong” topics.

“Facebook has been late to the game in terms of supporting the creative community in a meaningful way,” says Qianna Smith Bruneteau, founder of the American Influencer Council, a trade group representing the creator industry.

But Facebook is now trying to reverse those perceptions. For the past year, the company has been steadily churning out new tools for creators to make money. Since last May alone, the company has introduced a dizzying number of money-making features.

On Instagram, creators can now make money from commercials in IGTV or open their own shops. They can sell badges and products in live streams. On Facebook, they can host paid virtual events, promote fan subscriptions, or sell in-app gifts in live streams or audio rooms. Soon, they’ll be able to start paid newsletters, earn affiliate commission from products their followers buy and participate in a branded content marketplace. The company is also launching several new bonus programs that will pay creators for signing up for IGTV ads, creating Reels or meeting live-streaming milestones.


Zuckerberg and other top executives now regularly speak about creators and the opportunity they represent. The company is so eager to win over the creator community that it has promised not to take a cut of their earnings until 2023.

Li Jin, founder of Atelier Ventures, a venture capital firm that invests in the creator economy, says interest in creators is surging because the industry has gotten so big that platforms can no longer afford to ignore it.

“I think for a long time there was no need to separately think of creators as a distinct segment that was in need of specialized features or funds,” Jin says. “I think what changed is the realization that … these creators’ content is driving a disproportionate amount of activity and engagement on the platforms.”

That Facebook is late to the creator economy also means the company is facing an incredible amount of competition. TikTok, which has a reputation for a creator-friendly algorithm, just passed 3 billion downloads, the first non-Facebook-owned app to do so, according to analytics company Sensor Tower. Users of TikTok and its Chinese counterpart Douyin together spent more than half a billion dollars in the app during the second quarter of 2021 alone. In the United States in 2020, TikTok was significantly ahead of Facebook and Instagram in user engagement, according to App Annie.


Meanwhile Twitter, Snapchat, Pinterest and other platforms are also pouring money into new initiatives for creators. “There's a limited number of creators and everyone is in competition for them,” Jin says.

Facebook has offered various explanations for its sudden interest in creators. Zuckerberg has said he wants to help more people “make a living” off Facebook’s services. Instagram chief Adam Mosseri recently said the company was responding to “the shift in power from institutions to individuals across industries.”

It’s also a major opportunity to shift Facebook’s business away from ads. Though Facebook has promised it won’t take a cut of creators’ earnings for more than a year, that will eventually change (the company hasn’t said what its cut will be, only that it will be “less” than Apple’s 30-percent commission).

Creators could also provide a massive boost to the company’s push into shopping. Commerce has been a major focus for the social network, which has already crammed shopping features into nearly every corner of Instagram, and Zuckerberg has said he intends to create “a full-featured commerce platform” across Facebook’s services.

What’s less clear is just how willing creators will be to buy into Facebook’s vision. While a $1 billion investment will almost certainly fuel more interest in the platform, it’s not clear if it will prompt the kind of content Facebook might be hoping for. Instagram’s Reels, for example, was meant to be the company’s chief TikTok competitor. Yet the company has at times had to push creators to post original content there.

And concerns about Facebook’s algorithms remain, says Bruneteau. “The algorithm should be favorable to creators like it is on TikTok,” she says. “You have these instant influencers on TikTok, who have been able to grow million-plus followings in less than a year. However those same instant influencers who have those accounts have a tendency to have less followers on Instagram.”

There are signs that Facebook might be willing to address these concerns. Mosseri recently raised eyebrows when he said that Instagram is no longer a photo-sharing app, and that the company was working on ways to insert more recommended content into users’ feeds in order to compete with TikTok.

But even with a kinder algorithm, both Bruneteau and Jin caution creators against pouring too many resources into Facebook or any one platform.

“When creators are building their processes on top of these like centralized platforms, they're actually creating more value for the underlying platform than they're able to create for themselves,” Jin says. “At the end of the day you're strengthening Facebook's dominance because the more content you put there, the more it attracts consumer users and the more that translates into Facebook revenue and Facebook's network effects.”

Facebook is notifying some users whose posts were removed by automation

Facebook is testing a change that will let users know when their post was removed as a result of automation. The new experiment comes in response to the Oversight Board, which has said the social network should be more transparent with users about how their posts are removed.

The company revealed the test in a new report that provides updates on how Facebook is handling the Oversight Board’s policy recommendations. The test comes in response to one of the first cases the Oversight Board took up, which dealt with an Instagram post meant to raise awareness of breast cancer that the company removed under its nudity rules.

Facebook restored the post, saying its automated systems had made a mistake, and updated Instagram’s rules to allow for “health-related nudity.” But the Oversight Board had also recommended that Facebook alert users in cases when a post was removed with automation rather than as a result of a human content reviewer. Facebook previously said it would test this change, which is now in effect.

“We’ve launched a test on Facebook to assess the impact of telling people more about whether automation was involved in enforcement,” Facebook writes in its report. “People in the test now see whether technology or a Facebook content reviewer made the enforcement decision about their content. We will analyze the results to see if people had a clearer understanding of who removed their content, while also watching for a potential rise in recidivism and appeals rates.” The company added that it will provide an update on the test later this year.

The report also shed some additional insight into how the company is working with the Oversight Board. It notes that between November 2020 and March 2021, Facebook referred 26 cases to the board, though the board has chosen to take up only three, one of which concerned the suspension of Donald Trump. (Notably, the latest report only covers the first quarter of 2021, so it doesn’t address the board’s recommendations in response to Trump’s suspension.)

Though the Oversight Board has only weighed in on a handful of cases, its decisions have resulted in a few policy changes by Facebook that could have a much broader effect. However, in some areas, the company has declined to follow up on the board’s policy suggestions, such as a recommendation that Facebook study its own role in enabling the events of January 6th. In a blog post, the company noted that “the size and scope of the board’s recommendations go beyond the policy guidance that we first anticipated when we set up the board, and several require multi-month or multi-year investments.”

US Surgeon General warns that health misinformation is an 'urgent threat'

US Surgeon General Dr. Vivek Murthy has issued an advisory warning of the dangers posed by health misinformation, calling it an “urgent threat” that social media companies and technology platforms need to do more to address.

As The New York Times points out, it’s a rather unusual step for the office of the Surgeon General, which typically issues advisories centered around specific health concerns like the opioid epidemic. In a press release, the Surgeon General said that “health misinformation has already caused significant harm” and undermined vaccination efforts.

The advisory includes a 22-page report on steps that individuals, health organizations, researchers and journalists can take to help mitigate the spread of misinformation. Notably, it also calls out social media companies, though it stops short of calling any of the platforms out by name. But the report echoes much of the criticism that platforms like Facebook and Twitter have faced during the pandemic.

“Product features built into technology platforms have contributed to the spread of misinformation,” the report states. “For example, social media platforms incentivize people to share content to get likes, comments, and other positive signals of engagement. These features help connect and inform people but reward engagement rather than accuracy, allowing emotionally charged misinformation to spread more easily than emotionally neutral content.”

The report also highlights the problem of algorithmic amplification, which can make it difficult for companies like Facebook to prevent misinformation from going viral.

“Algorithms that determine what users see online often prioritize content based on its popularity or similarity to previously seen content,” the report says. “As a result, a user exposed to misinformation once could see more and more of it over time, further reinforcing one’s misunderstanding. Some websites also combine different kinds of information, such as news, ads, and posts from users, into a single feed, which can leave consumers confused about the underlying source of any given piece of content.”

The report also recommends that companies “prioritize early detection of misinformation ‘super-spreaders’ and repeat offenders.” A widely cited report from the Center for Countering Digital Hate found more than half of anti-vaccine misinformation online can be linked to just 12 individuals. On Thursday, White House Press Secretary Jen Psaki also referenced that same report, noting that many of these “super-spreaders” remain active on Facebook.

Instagram’s latest test is… telling people about the Facebook app

Instagram is running a new test to tell users about another app they might want to check out: Facebook. The photo-sharing app is experimenting with a notice at the top of users’ feeds that encourages them to check out features that are “only available” on Facebook.

“We’re testing a way to let people who have connected their Instagram accounts to Facebook know about features only available there, such as how to find a job, date online, buy and sell goods, or catch up on the latest news,” a spokesperson said in a statement.

That the company is using one of its billion-user apps to promote another billion-user app might not seem to make a lot of sense, but it’s only the latest (and perhaps most aggressive) way the social network has used Instagram to drive people back to its main app. The company has been steadily bringing the two apps closer together and has been encouraging users to link their accounts. (A book published last year reported that Mark Zuckerberg was “jealous” of Instagram’s success and worried the app could eventually “cannibalize” Facebook. Tensions between him and the app’s founders ultimately led to their departure in 2018.)

Facebook points out that only “a very small group” of Instagram users who have previously opted to link their accounts will see the messages, which can be dismissed. But even if the test never expands, it suggests that the company is far from done with its attempts to get Instagram users to spend more time on Facebook.

Twitter is killing Fleets

Twitter is killing Fleets less than a year after launching the Stories-like feature to all its users. All Fleets will disappear for the final time on August 3rd.

The short-lived feature was at times controversial. Soon after it rolled out to all Twitter users last November, many raised questions about how the feature could be used to target others for harassment. Others questioned whether Twitter really needed a “Stories” feature of its own.

In a blog post, Twitter VP of Product Ilya Brown said the company hadn’t “seen an increase in the number of new people joining the conversation with Fleets like we hoped.” Brown added that Spaces will continue to get placement at the top of users’ timelines and that the company is still analyzing the full-screen ads it started testing in Fleets last month (Twitter hinted at the time that the new ad format could eventually make its way to other places in the service, too).

Both Brown and Twitter Product Lead Kayvon Beykpour pointed out that “winding down features every once in awhile” is something the company fully anticipates as it tries to reinvent itself. Twitter has been working on a number of features that could dramatically change its service, including subscriptions and paid features for creators, and has publicly previewed several new ideas it’s considering in recent months.

Facebook Groups can now have dedicated topic 'experts'

Facebook is working on a new way to highlight authoritative information within Groups. The platform is starting to roll out a new “expert” label for group members who have expertise in an area related to the group’s interests.

With the change, which Facebook says is available to “select” Groups, an admin can invite a group member to be a group “expert.” If the person accepts, then they’ll get a badge next to their name similar to the way group moderators and admins are identified.

Notably, being a group “expert” doesn’t grant you extra control of group features, or higher visibility within a group. Instead, Facebook is billing it as a way for group admins to highlight members who are likely to have helpful insights to share with the rest of the group. Experts can also host question and answer sessions or live audio rooms. 

Separately, Facebook is also testing a feature that would allow group admins to proactively find expert voices for their group. That test, which is starting with groups related to fitness and gaming, allows individuals to identify themselves as experts in a particular topic. In these cases, group admins will be able to search for experts who aren’t already members of their group and invite them to join. The experts will also have the ability to automatically invite “recently engaged Page followers” to any group they join as an expert.

Experts is the latest of several changes to Facebook Groups in recent months. The company has also taken steps to get moderators to shoulder more responsibility in ensuring group members follow Facebook’s rules, and introduced new tools to limit toxic conversations. While dedicated “experts” won’t directly impact these efforts, the addition of more knowledgeable voices could free up time for group admins.

Oversight Board says Facebook 'lost' an important rule for three years

Facebook “lost” an important policy for three years and only noticed after the Oversight Board began looking at the issue, according to the latest decision from the board. In its decision, the board questioned Facebook’s internal policies and said the company should be more transparent about whether other key policies may have been “lost.”

The underlying case stems from an Instagram post about Abdullah Öcalan, in which the poster “encouraged readers to engage in conversation about Öcalan’s imprisonment and the inhumane nature of solitary confinement.” (As the board notes, Öcalan is a founding member of the Kurdistan Workers’ Party, which Facebook has officially designated as a “dangerous organization.”)

Facebook had initially removed the post, as Facebook users are barred from praising or showing support for dangerous organizations or individuals. However, Facebook also had “internal guidance” — created partially as a result of discussions around Öcalan’s imprisonment — that “allows discussion on the conditions of confinement for individuals designated as dangerous.” But that rule was not applied, even after the user’s initial appeal. Facebook told the board it had “inadvertently not transferred” that part of its policy when it moved to a new review system in 2018.

Though Facebook had already admitted the error and reinstated the post, the board said it was “concerned” with how the case had been handled, and that “an important policy exception” had effectively fallen through the cracks for three years.

“The Board is concerned that Facebook lost specific guidance on an important policy exception for three years,” the group wrote. “Facebook’s policy of defaulting towards removing content showing ‘support’ for designated individuals, while keeping key exceptions hidden from the public, allowed this mistake to go unnoticed for an extended period. Facebook only learned that this policy was not being applied because of the user who decided to appeal the company’s decision to the Board.”

The board also chastised Facebook for not being transparent about how many other users may have been affected by the same issue. Facebook told the board it wasn’t “technically feasible” to determine how many other posts may have been mistakenly taken down. “Facebook’s actions in this case indicate that the company is failing to respect the right to remedy, contravening its Corporate Human Rights Policy,” the board said.

The case highlights how Facebook’s complex rules are often shaped by guidance that users can’t see, and how the Oversight Board has repeatedly challenged the company to make all its policies more clear to users.

Though it’s only taken up a handful of cases so far, the Oversight Board has repeatedly criticized Facebook for not following its own rules. “They can't just invent new unwritten rules when it suits them,” board co-chair Helle Thorning-Schmidt told reporters after they said Facebook was wrong to impose an “indefinite” suspension on Donald Trump. The board has also criticized Facebook for not alerting users to key parts of its policies, such as its “satire exception.” It’s pushed the company to clarify its hate speech policies, and how it treats speech from politicians and other high-profile figures.

Facebook has 30 days to respond to the Oversight Board in this case, including several recommendations that it further clarify its “Dangerous Individuals and Organizations” policy and update its transparency reporting process.

Facebook test warns users who may have seen 'harmful extremist content'

Facebook is testing new prompts to reach users who may be “becoming an extremist.” The in-app messages, which Facebook has confirmed are a test, direct users to resources aimed at combating extremism.

CNN first reported the new prompts, which have been spotted by Twitter users in recent days. One version is aimed at people who may know someone falling into extremism. “Are you concerned that someone you know is becoming an extremist?” it reads.

Facebook will also alert you if you’ve been exposed to extremist content pic.twitter.com/H64Qrki8Kj

— Matt Navarra (@MattNavarra) July 1, 2021

Another prompt appears to warn users who may have encountered extremist content on the platform. “Violent groups try to manipulate your anger and disappointment,” it says. “You can take action now to protect yourself and others.”

Facebook spokesperson Andy Stone confirmed the messages are “part of our ongoing Redirect Initiative work.” The initiative is part of a broader effort by Facebook to fight extremism on its platform by working with groups like Life After Hate, which helps people leave extremist groups. The prompts will send users to Life After Hate or other resources, according to CNN.

It’s not clear how Facebook is determining which users may be most likely to be affected by extremism, but the issue has become a hot-button topic for Facebook. The company was widely criticized for not doing enough to prevent QAnon and other fringe groups from using its platform to grow their followings. Facebook has also been accused of downplaying its role in enabling the events of January 6th. And when the Oversight Board recommended the company conduct its own inquiry into the issue, the company said investigations should remain in the hands of law enforcement and elected officials.

Twitter considers letting you tweet to 'trusted friends' only

Twitter is thinking about new ways to share tweets with specific groups of people. The company showed off two concepts for new features that would allow users to target tweets toward specific audiences without having to switch accounts or change privacy settings.

The first would enable people to designate “trusted friends” so some tweets would only be visible to that group. The idea is similar to Instagram’s “close friends” feature for Stories. According to an image shared by Twitter designer Andrew Courter, Twitter’s version would allow users to toggle the audience much like the way you can choose who is able to reply to you.

He added that “perhaps you could also see trusted friends' Tweets first” in your timeline, which would offer another alternative to the chronological or algorithmic “home” timelines Twitter currently offers.


Another feature would allow people to take on different personas or “facets” from the same account. For example, a user could have a professional identity, where they tweet about work-related topics, and a personal one that’s meant more for friends and family. According to the images, users could have the option of making any one persona public or private, and new followers would be able to choose which “facet” they want to see tweets from.

Finally, Courter showed off a new concept for filtering replies that would allow users to choose specific words or phrases “they prefer not to see.” Then, if someone replying to or mentioning that user tries to use one of those words or phrases, Twitter will let them know the words go against that person’s preferences.


According to the images shared by Courter, the feature wouldn’t prevent anyone from sending a tweet using the offending words, but it would make it less visible to the person on the receiving end. The idea is similar to other kinds of anti-bullying nudges Twitter has employed in the past, but would go a step further as each user could set their own conversational “boundaries.”

All these features are still just ideas — Courter noted that “we’re not building these yet” — so they may never actually launch. But the company is looking for feedback on the designs, so they could inform future tools Twitter does decide to build. At the very least, the concepts shed some light on how Twitter is thinking about issues like identity.

Apple’s developer problems are much bigger than Epic and ‘Fortnite’

Near the end of the Epic v. Apple trial, Judge Yvonne Gonzalez Rogers had some pointed questions for Tim Cook on the state of Apple’s relationship with its developers. Citing an internal survey of developers, she noted that 39 percent of them indicated they were unhappy with the App Store’s distribution. What incentive, then, she asked, does Apple have to work with them?

Cook seemed to be caught off guard by the question. He said Apple rejects a lot of apps and that “friction” can be a good thing for users. Rogers replied that it “doesn’t seem you feel pressure or competition to change the manner in which you act to address concerns of developers.”

It was a brief, but telling, exchange. And one that strikes at the heart of Apple’s currently rocky relationship with developers.

Epic vs. Apple vs. developers

Ostensibly, Epic’s antitrust case against Apple was about the iPhone maker’s treatment of Fortnite and its refusal to allow the game developer to bypass the App Store for in-app purchases. Epic, along with many other prominent developers, has long chafed at Apple’s 30 percent commission, or “App Store tax.”

It’s not just that they see 30 percent as greedy and unfair (Apple recently lowered its take to 15 percent for small developers). It’s that Apple has appeared to treat some developers differently than others. For example, documents unearthed during the trial detail how Apple went to great lengths to prevent Netflix from yanking in-app purchases from its app.

After considering “punitive measures” toward the streaming giant, Apple offered Netflix custom APIs that most developers don't have access to. It also dangled the possibility of additional promotion in the App Store or even at its physical retail stores. Netflix ended up pulling in-app purchases anyway, but it was illustrative of the kind of “special treatment” many developers have long suspected Apple employs towards some apps.

Meanwhile, game developers have no choice but to pay Apple’s “tax.” Not only that, but Apple’s rules prohibit them from even alerting their users that they may be able to make the same purchase elsewhere for less — what’s known as its “anti-steering” rules.

Friction over these rules is nothing new. But the details of these arrangements, and Apple’s hardball tactics with developers, had never been as exposed as they were during the trial.

“What was great about the Epic trial was that it brought many of these issues to light and into the public dialogue,” said Meghan DiMuzio, executive director for the Coalition for App Fairness, an advocacy group representing developers who believe Apple’s policies are anticompetitive. “I think we saw how Apple more generally chooses to approach their relationships with developers and how they value, or don't value, their relationships with developers. I think those are really incredible soundbites and storylines to have out in the public eye.”

The case touched on other issues that have been the source of long-simmering developer frustrations with Cupertino, and not just for giants like Netflix. Epic also highlighted common developer complaints around App Store search ads, fraudulent apps and Apple’s often inscrutable review process.

In one particularly memorable exchange, the developer of yoga app Down Dog spoke at length about how Apple’s opaque policies can have an outsize impact on developers. For example, he said Apple had repeatedly rejected app updates for seemingly bizarre reasons, like using a “wrong” color on a login page. Once, he said, an update was rejected because App Store reviewers couldn’t find his app’s integration with Apple’s Health app. He later realized it was because the reviewers were testing on an iPad, which doesn’t support the Health app.

These types of complaints are probably familiar to most developers. It’s not unusual for Apple to quibble over the placement of a particular button, or some other minor feature. These seemingly small issues can drag on for days or weeks, as Epic repeatedly pointed out. But it’s rare for such squabbles to spill over into public view as they did during the trial.

The trial raised other, more fundamental issues, too. A witness for Epic testified that the operating margin for the App Store was 78 percent, a figure Apple disputed without offering evidence to the contrary. Instead, Tim Cook and other execs have maintained they simply don’t know how much money the App Store makes.

Cook did, however, have much more to say when pressed on whether game developers effectively “subsidize” the rest of the App Store. “We are creating the entire amount of commerce on the store, and we’re doing that by focusing on getting the largest audience there,” Cook stated.

The argument struck a nerve with some. Marco Arment, a longtime iOS developer whose apps have been featured by Apple, wrote a scathing blog post in response.

“The idea that the App Store is responsible for most customers of any reasonably well-known app is a fantasy,” Arment writes. “The App Store is merely one platform’s forced distribution gateway, ‘facilitating’ the commerce no more and no less than a web browser, an ISP or cellular carrier, a server-hosting company, or a credit-card processor. For Apple to continue to claim otherwise is beyond insulting, and borders on delusion.”

Determining just how many developers agree with that sentiment, though, is trickier. There are millions of iOS developers and for much of the App Store’s history, most have been reluctant to publicly criticize Apple. The company has conducted its own surveys — as evidenced in the Epic trial disclosures — but the findings aren’t typically made public. And even Cook admitted he was unsure if it’s a metric the company regularly tracks.

“There's not a lot of actual third party survey on the developer ecosystem,” says Ben Bajarin, CEO of analyst firm Creative Strategies. He has been conducting his own poll of Apple developers to gauge their feelings toward the company.

He says he sees “a pretty big gap” between the smaller, independent developers and the larger businesses on the App Store. Developers with smaller projects, he says, are “simply much more reliant on Apple.” And while they quibble with things like search ads or Apple’s review process, they don’t have many alternatives. “These aren’t developers that have a huge budget for marketing… they’re entirely reliant on Apple to get them customers.”

The coming antitrust battles

These issues could end up being much bigger than Epic or a handful of other high-profile frustrated developers. Regardless of the outcome of the Epic trial, Apple is facing other antitrust battles in the United States and Europe, where many of the same issues are being raised.

UK regulators launched an investigation into the App Store in March. That probe, which came in response to developer complaints, is looking at Apple’s rules for developers and its policies around in-app purchases. Separately, the European Union is moving forward with its own antitrust case centered around the company’s commission structure and anti-steering rules. And earlier this month, US lawmakers, who have also heard from frustrated app developers in recent months, introduced five antitrust bills targeting Apple and its fellow tech giants, one of which would bar Apple from pre-installing any apps on iPhones at all.

The outcome of any one of these could dramatically reshape how Apple runs the App Store, and the rules it sets for developers.

For its part, Apple has argued that opening up the App Store would harm users and affect its ability to protect their privacy. Behind the scenes, Cook has reportedly personally lobbied members of Congress to rethink the proposed legislation.

Even if Apple is able to emerge from its antitrust fights relatively unscathed, dissatisfied developers could eventually pose a more existential problem for Apple. Bajarin, of Creative Strategies, says that issues with developers are unlikely to hurt Apple in the short term because there are still few alternatives. But, he says, that could change should Apple face competition from an emerging platform it doesn’t yet dominate, such as AR or VR.

“You just don't want this strain on developer relationships because Apple wants all those developers to be right on board day one for whatever's coming next. They need those larger developers to still prioritize their OS.”