
Meta’s Oversight Board will rule on AI-generated sexual images

Meta’s Oversight Board is once again taking on the social network’s rules for AI-generated content. The board has accepted two cases that deal with AI-made explicit images of public figures.

While Meta’s rules already prohibit nudity on Facebook and Instagram, the board said in a statement that it wants to address whether “Meta’s policies and its enforcement practices are effective at addressing explicit AI-generated imagery.” Sometimes referred to as “deepfake porn,” AI-generated images of female celebrities, politicians and other public figures have become an increasingly prominent form of online harassment and have drawn a wave of proposed regulation. With the two cases, the Oversight Board could push Meta to adopt new rules to address such harassment on its platform.

The Oversight Board said it’s not naming the two public figures at the center of each case in an effort to avoid further harassment, though it described the circumstances around each post.

One case involves an Instagram post showing an AI-generated image of a nude Indian woman that was posted by an account that “only shares AI-generated images of Indian women.” The post was reported to Meta, but the report was closed after 48 hours because it wasn't reviewed. The same user appealed that decision, but the appeal was also closed and never reviewed. Meta eventually removed the post after the user appealed to the Oversight Board and the board agreed to take the case.

The second case involved a Facebook post in a group dedicated to AI art. The post in question showed “an AI-generated image of a nude woman with a man groping her breast.” The woman was meant to resemble “an American public figure” whose name was also in the caption of the post. The post was taken down automatically because it had been previously reported and Meta’s internal systems were able to match it to the prior post. The user appealed the decision to take it down but the appeal was “automatically closed.” The user then appealed to the Oversight Board, which agreed to consider the case.
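Meta hasn't said how its internal systems matched the re-upload to the previously reported post, but the general technique (fingerprinting removed media and checking new uploads against the stored fingerprints) can be sketched. This is a minimal illustration with invented names; production systems use perceptual hashes such as PhotoDNA that also catch re-encoded or lightly edited copies, not the exact cryptographic hash used here.

```python
import hashlib

# Registry of fingerprints for media that was previously reported and removed.
# Illustrative only: a real system would store perceptual hashes, which also
# match near-duplicates, rather than exact SHA-256 digests.
removed_media_hashes: set[str] = set()

def fingerprint(media_bytes: bytes) -> str:
    """Exact-match fingerprint of a media file (stand-in for a perceptual hash)."""
    return hashlib.sha256(media_bytes).hexdigest()

def record_removal(media_bytes: bytes) -> None:
    """Called when a post is removed after review; remember its fingerprint."""
    removed_media_hashes.add(fingerprint(media_bytes))

def should_auto_remove(media_bytes: bytes) -> bool:
    """New uploads matching previously removed media are taken down automatically."""
    return fingerprint(media_bytes) in removed_media_hashes

# A post is reviewed and removed once...
record_removal(b"offending-image-bytes")
# ...so an identical re-upload is matched and removed without a fresh review.
assert should_auto_remove(b"offending-image-bytes")
assert not should_auto_remove(b"unrelated-image-bytes")
```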

In a statement, Oversight Board co-chair Helle Thorning-Schmidt said that the board took up the two cases from different countries in order to assess potential disparities in how Meta’s policies are enforced. “We know that Meta is quicker and more effective at moderating content in some markets and languages than others,” Thorning-Schmidt said. “By taking one case from the US and one from India, we want to look at whether Meta is protecting all women globally in a fair way.”

The Oversight Board is asking for public comment for the next two weeks and will publish its decision sometime in the next few weeks, along with policy recommendations for Meta. A similar process involving a misleadingly-edited video of Joe Biden recently resulted in Meta agreeing to label more AI-generated content on its platform.

This article originally appeared on Engadget at https://www.engadget.com/metas-oversight-board-will-rule-on-ai-generated-sexual-images-100047138.html?src=rss

The Morning After: Zuckerberg's Vision Pro review, and robotaxis crashing twice into same truck

Sometimes, timing ruins things. Instead of detailing the disgust I feel toward this 'meaty' rice, this week's Morning After sets its sights on Mark Zuckerberg, the multimillionaire who's decided to review technology now. Does he know that's my gig?

The Meta boss unfavorably compared Apple's new Vision Pro to his company's Meta Quest 3 headset, which is a delightfully hollow and petty reason to 'review' something. But hey, I had to watch it. And now, maybe, you'll watch me?

We also look closer at Waymo's disastrous December, where two of its robotaxis collided with a truck. The ... same truck.

This week:

🥽🥽: Zuckerberg thinks the Quest 3 is a 'better product' than the Vision Pro

🤖🚙💥💥: Waymo robotaxis crash into the same pickup truck, twice

🚭🛫🚫: United Airlines grounds new Airbus fleet over no smoking sign law

Read this:

GLAAD, the world's largest LGBTQ media advocacy group, has published its first annual report on the video game industry. It found that nearly 20 percent of all players in the United States identify as LGBTQ, yet just 2 percent of games contain characters and storylines relevant to this community. And half of those might be Baldur's Gate 3 alone. (I half-joke.) The report notes that not only does representation matter to many LGBTQ players, but also that new generations of gamers are becoming increasingly open to queer content regardless of their sexual orientation. We break down the full report here.

Like email more than video? Subscribe right here for daily reports, direct to your inbox.

This article originally appeared on Engadget at https://www.engadget.com/the-morning-after-zuckerbergs-vision-pro-review-and-robotaxis-crashing-twice-into-same-truck-150021958.html?src=rss

Lyft aims to match women and nonbinary riders and drivers with each other more often

Lyft has announced an initiative that aims to bolster safety for riders and drivers who identify as women or nonbinary. Women+ Connect is a feature that gives women and nonbinary drivers the option to match with women and nonbinary riders more often. Lyft says this is an opt-in feature that's preference-based. If a driver activates Women+ Connect but there are no women or nonbinary people who are looking for a ride close by, they'll still be matched with a male rider and vice versa.
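Lyft hasn't published how Women+ Connect's matching actually works; the sketch below only illustrates the soft-preference behavior described above (prefer women and nonbinary riders when the driver has opted in, but never leave an opted-in driver unmatched). All names, fields and the distance-based ordering are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rider:
    name: str
    gender: str          # "woman", "nonbinary" or "man" (simplified for the sketch)
    distance_km: float   # distance from the driver

def pick_rider(driver_opted_in: bool, nearby: list) -> Optional[Rider]:
    """Pick the nearest rider, preferring women and nonbinary riders when the
    driver has opted in to Women+ Connect. The preference is soft: if no
    women or nonbinary riders are nearby, the driver still gets a match."""
    if not nearby:
        return None
    by_distance = sorted(nearby, key=lambda r: r.distance_km)
    if driver_opted_in:
        preferred = [r for r in by_distance if r.gender in ("woman", "nonbinary")]
        if preferred:
            return preferred[0]
    return by_distance[0]  # fallback: nearest rider regardless of gender

riders = [Rider("A", "man", 0.5), Rider("B", "woman", 1.2)]
assert pick_rider(True, riders).name == "B"   # opted in: prefer the woman nearby
assert pick_rider(False, riders).name == "A"  # not opted in: just the nearest rider
assert pick_rider(True, [Rider("C", "man", 0.3)]).name == "C"  # soft fallback
```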

The feature will initially be available in Chicago, Phoenix, San Diego, San Francisco and San Jose. Lyft plans to enable it in more cities in the near future. When it's available in their area, women and nonbinary riders and drivers will see a "count me in" prompt. If they agree to this, it's more likely that they'll be matched with a woman or nonbinary person.

Improving safety is a major goal for Lyft with this effort. The company is also hoping it will encourage more women and nonbinary folks to sign up as drivers. Lyft says that, according to a recent survey, nearly half of riders are women, but they make up 23 percent of drivers on the platform. “Women drivers tell us it’s hard to drive at night,” Jody Kelman, Lyft’s executive vice president of customers, told The New York Times. “We need to remove a barrier for women drivers today.”

Ridesharing platforms such as Lyft and Uber have added more safety features to their apps over the years amid reports of sexual assaults and other violent encounters. They have made it easier for riders and drivers to contact support staff and 911, keep loved ones up to date with their location and record audio from the ride. Lyft consulted with experts such as the Human Rights Campaign and the National Association of Women Law Enforcement Executives as it built Women+ Connect.

It is worth noting that Lyft makes it a cinch for riders and drivers to change their gender identity in the app with a few taps. You'll see a driver or rider's preferred pronouns in the app, but not their gender identity.

Access to Women+ Connect is based on the gender that users identify with in the Lyft app. Lyft says the default gender identity it uses for drivers is based on the license it has on file, while riders always self-identify their gender. However, any user can change their gender identity in the app at any time. Balancing the ability for users to easily express their gender accurately (particularly for those who are transitioning) while ensuring this feature works as intended is a tricky needle to thread and may cause some issues, but at least Lyft is considering that factor while implementing Women+ Connect.

This article originally appeared on Engadget at https://www.engadget.com/lyft-aims-to-match-women-and-nonbinary-riders-and-drivers-with-each-other-more-often-145047131.html?src=rss

Discord bans teen dating servers and the sharing of AI-generated CSAM

Discord has updated its policy meant to protect children and teens on its platform after reports came out that predators have been using the app to create and spread child sexual abuse materials (CSAM), as well as to groom young teens. The platform now explicitly prohibits AI-generated photorealistic CSAM. As The Washington Post recently reported, the rise in generative AI has also led to the explosion of lifelike images with sexual depictions of children. The publication had seen conversations about the use of Midjourney — a text-to-image generative AI on Discord — to create inappropriate images of children.

In addition to banning AI-generated CSAM, Discord now also explicitly prohibits any other kind of text or media content that sexualizes children. The platform has banned teen dating servers, as well, and has vowed to take action against users engaging in this behavior. A previous NBC News investigation found Discord servers advertised as teen dating servers with participants that solicited nude images from minors. 

Adults have previously been prosecuted for grooming children on Discord, and there are even crime rings extorting underage users into sending sexual images of themselves. Banning teen dating servers completely could help mitigate the issue. Discord has also included a line in its policy stating that older teens found to be grooming younger teens will be "reviewed and actioned under [its] Inappropriate Sexual Conduct with Children and Grooming Policy."

Aside from updating its rules, Discord recently launched a Family Center tool that parents can use to keep an eye on their kids' activity on the chat service. While parents won't be able to see the actual contents of their kids' messages, the opt-in tool allows them to see who their children are friends with and who they talk to on the platform. Discord hopes these new measures and tools, along with existing ones such as proactively scanning images uploaded to its platform using PhotoDNA, can help keep its underage users safe.

Discord's Family Center is a new opt-in tool that makes it easy for teens to keep their parents and guardians informed about their Discord activity while respecting their own autonomy. pic.twitter.com/UFY0ybo0LR

— Discord (@discord) July 11, 2023

This article originally appeared on Engadget at https://www.engadget.com/discord-bans-teen-dating-servers-and-the-sharing-of-ai-generated-csam-071626058.html?src=rss

Facebook and Instagram will limit ads targeting teens' follows and likes

Meta is taking more steps to limit potentially harmful ad campaigns. The company is placing more restrictions on advertisers' ability to target teens. From February onward, Facebook and Instagram will no longer let marketers aim ads at teens based on gender — only age and location. Follows and likes on the social networks also won't influence the ads teens see.

In March, Meta will expand the ad preferences in Facebook and Instagram to let teens see fewer sales pitches for a given topic. Teens could already hide the ads from specific advertisers, but this gives them the choice of automatically downplaying whole categories like TV dramas or footwear.

The social media giant has put ever-tighter restrictions on the content teens can access. In 2021, Facebook and Instagram barred advertisers from using teens' interests to target ads. Instagram also made accounts private by default for teens under 16, and this year limited sensitive content for all new teen users. Meta has likewise limited the ability of "suspicious" adults to message teens on both platforms.

This is the second major ad policy change in a week. Just a day before, Meta rolled out an AI-based system meant to reduce discriminatory ad distribution. The technology is launching as part of a settlement with the federal government over charges that Facebook let companies target ads based on ethnicity, gender and other protected classes.

As with those earlier efforts, Meta has a strong incentive to act. The attorneys general of 10 states are investigating Instagram's effects on teens, while the European Union recently fined Meta the equivalent of $402 million for allegedly mishandling privacy settings for younger users. Governments are concerned that Meta might be exploiting teens' usage habits or exposing them to threats, including content that could lead to mental health issues. The new protections won't solve these problems by themselves, but they might show officials that Meta is serious about curbing ads that prey on teens.

Snapchat Family Center shows parents their children's friends list

Snapchat has launched a parental control portal that allows parents to keep an eye on who their young teenagers have been chatting with. The new in-app feature called Family Center shows parents their kids' friends list, as well as who they've messaged in the last seven days. Take note that parents can only see who their teens have been talking to, but they won't be able to read their chat history. Snap says the center was designed to "reflect the way... parents engage with their teens in the real world" in that they know (for the most part) who their kids have been hanging out with but don't listen in on their conversations.

In addition, parents can confidentially report accounts they think might be violating Snap's rules straight from the Family Center. Back in January, Snapchat changed its friend recommendation feature following calls for increased safety on the app by making it harder for adults to connect with teen users: In particular, it stopped showing accounts owned by 13-to-17-year-old users in Quick Add. Teens also can't have public profiles and have to be mutual friends to be able to communicate with each other. Plus, their accounts will only show up in search results under certain circumstances, such as if the one searching has a mutual friend with them.

Snap promised last year to launch new parental controls and other features designed to protect underage users on its service. The company revealed its plans in a hearing in which lawmakers put pressure on social networks and apps that cater to teens, such as Snapchat and TikTok, to do more to protect children on their platforms.

Family Center is completely voluntary, and teens can always leave the portal if they want — they'll even be given the choice to accept or ignore a parent's invitation to join. And since the feature was made for underage teens, users who turn 18 will automatically be removed from the tool.

The company plans to roll out more features for the Family Center on top of what it already has. In the coming weeks, it will let parents easily see the newest friends their teens have added. And over the next few months, Snap will add content controls for parents, as well as the ability for teens to notify their parents whenever they report an account or a piece of content.

Nintendo Japan will offer benefits to employees in same-sex unions

Nintendo Japan will provide employees in same-sex domestic partnerships with the same benefits it offers to those in heterosexual unions, even though Japanese law does not currently recognize gay marriages. The company announced the policy in a July 12th update to its corporate social responsibility guidelines that was spotted by Go Nintendo (via Variety).

A new section titled “Introduction of a Partnership System” notes the policy has been in place since March 2021, and that the company has since begun recognizing common-law marriages in the same way as legal marriages. “At Nintendo, we want to create a work environment that supports and empowers each and every one of our unique employees,” the company said.

Additionally, the update notes that Nintendo President Shuntaro Furukawa sent a note to employees on gender diversity, asking workers to understand that their words and actions can cause emotional pain, even if no harm was intended. Nintendo says it’s also working on implementing new systems and training courses designed to create a more supportive working environment.

Among G7 nations, Japan is the only country that does not recognize same-sex marriage. While LGBT activists have made some breakthroughs in recent years, a court in Osaka upheld the country’s ban this past June. While there’s growing public support for legalizing same-sex marriage, LGBTQ individuals still frequently face discrimination, according to a 2020 survey. Of course, discrimination, particularly the kind that happens in the workplace, is not unique to Japan. You need only look at all the news coming out of Activision Blizzard – and before that Riot Games, Ubisoft and countless other examples – to know that gaming companies frequently fail to protect their most vulnerable employees.

Nine women accuse Sony of systemic sexism in a potential class-action lawsuit

In November, former PlayStation IT security analyst Emma Majo filed a lawsuit against Sony, claiming the company discriminated against women at an institutional level. Majo alleged she was fired because she spoke up about gender bias at the studio, noting she was terminated shortly after submitting a signed statement to management detailing sexism she experienced there. 

Majo later filed the paperwork to turn her case into a class-action lawsuit, and just last month Sony attempted to have the whole thing thrown out, claiming her allegations were too vague to stand up to legal scrutiny. Plus, Sony's lawyers said, no other women were stepping forward with similar claims.

Today, eight additional women joined the lawsuit against Sony. The new plaintiffs are current and former employees, and only one of them has chosen to remain anonymous. One plaintiff, Marie Harrington, worked at Sony for 17 years and eventually became a senior director of program management and chief of staff to senior VP of engineering George Cacciopo.

"When I left Sony, I told the SVP and the Director of HR Rachel Ghadban in the Rancho Bernardo office that the reason I was leaving was systemic sexism against females," Harrington said in a court statement. "The Director of HR simply said, 'I understand.' She did not ask for any more information. I had spoken with the Director of HR many times before about sexism against females."

Harrington claimed women were overlooked for promotions, and said that during annual review sessions, Sony Interactive Entertainment engineering leaders rarely discussed female employees as potential "high performers." She said that in their April 2019 session, only four of the 70 employees under review were women, and while all of the men in this group were marked as high performers, just two of the women were. 

"Further, when two of the females were discussed, managers spent time discussing the fact that they have families," Harrington's statement reads. "Family status was never discussed for any males."

The remaining women shared similar stories in their statements, with the common theme being a lack of opportunity for female employees to advance and systemic favoritism toward male employees. The plaintiffs claimed male leaders at Sony made derogatory comments including, "you just need to marry rich," and, "I find that in general, women can’t take criticism.” 

One plaintiff alleged that while on a work trip to E3, her superior tricked her into having drinks with him at the hotel bar, hit on her even after she declined, and told other male employees that "he was going to try to 'hit that.'" Another plaintiff shared a story about a gender equality meeting at Sony that had a five-person panel, all of them men.

The lawsuit against Sony comes at a time of reckoning for many major video game studios, including Activision Blizzard, Ubisoft and Riot Games. Activision Blizzard is facing a lawsuit and multiple investigations into claims of institutional sexism, sexual harassment and gender discrimination, while Ubisoft has long faced similar allegations from former and current employees. Riot Games paid $100 million in December to settle a class-action lawsuit over workplace sexual harassment and discrimination.

Sony has not yet responded to the latest development in the class-action lawsuit, though it denies Majo's claims of gender discrimination. The company has requested that the lawsuit be dismissed; that question will be decided in a hearing in April.

Facebook and Instagram will limit advertisers’ ability to target teens

Facebook is taking new steps to limit advertisers’ ability to reach teens with targeted ads. With the change, advertisers will no longer be able to use “interests” or information gleaned from other services to show ads to Facebook, Instagram and Messenger’s youngest users.

The change won’t prevent advertisers from reaching teens at all — they can still use broad demographic information like age, gender, and location — but the update will prevent more granular data from being used, including info from third-party websites and apps.
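Meta hasn't described the enforcement mechanism, but the policy described above amounts to dropping granular targeting keys for under-18 accounts while keeping broad demographics. A hypothetical sketch, with all key names invented for illustration:

```python
# Hypothetical sketch: for users under 18, only broad demographic targeting
# keys are honored; interest-based and third-party signals are dropped.
# Key names are invented and do not reflect Meta's actual ad schema.
ALLOWED_FOR_TEENS = {"age", "gender", "location"}

def filter_targeting(spec: dict, user_age: int) -> dict:
    """Return the targeting spec that would actually be applied for this user."""
    if user_age >= 18:
        return spec  # adults: full spec honored
    return {k: v for k, v in spec.items() if k in ALLOWED_FOR_TEENS}

campaign = {
    "age": "13-17",
    "gender": "all",
    "location": "US",
    "interests": ["sneakers"],            # dropped for teens
    "third_party_segments": ["retail"],   # dropped for teens
}
assert filter_targeting(campaign, 16) == {"age": "13-17", "gender": "all", "location": "US"}
assert filter_targeting(campaign, 30) == campaign
```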

Instagram is also making several changes to make teens on its platform less visible. The app will begin making new accounts private by default for teens younger than 16, though teens as young as 13 can still opt for a public-facing account if they wish. Instagram said that in early tests “eight out of ten young people accepted the private default settings during sign-up,” suggesting the change could lead more teens to have non-public accounts.

For teens who do opt for public accounts, Instagram is making it more difficult for adults they don’t know to interact with them in the app. The company says it has “developed new technology” that makes it easier to identify “potentially suspicious behavior” in adults who could pose a risk to teens.

According to Instagram, adults flagged as “potentially suspicious” will be blocked from following teens or commenting on their posts (the app has previously limited adults' ability to direct message teens). These adults also won’t see content from teens in Reels, Explore and other in-app recommendations. The company isn’t sharing many details about how it determines which adults might be sketchy, but said one factor would be adults who get blocked or reported by younger users.

The changes come as Instagram is vying for younger users. The company has publicly discussed future plans for a version of its service for children younger than 13 years old. That idea, which the company has said is in early stages, has already prompted pushback from lawmakers and other officials. But Facebook is still pushing ahead with the idea. In a separate blog post, the company again said it plans to work with experts in child development and online safety as it creates the service, and that it welcomes "productive collaboration with lawmakers."