Meta’s Oversight Board said Thursday that posts of two AI-generated nude images resembling public figures in the United States and India violated Meta rules. Both posts have been removed. The board recommended making rules prohibiting non-consensual sexualized images “more intuitive.”
File photo by Terry Schmitt/UPI
July 25 (UPI) — Meta’s independent Oversight Board on Thursday found the company failed to remove two non-consensual deepfaked explicit images from its platforms and urged the social media company to do more to prevent such content from appearing on its platforms.
The board said that two images of people resembling Indian and American public figures were removed following its review, and that labeling them as manipulated content was not a sufficient response because the harm inflicted “stems from sharing and viewing these images,” not just from misleading people about their authenticity.
“These two cases involve AI-generated images of nude women, one resembling an Indian public figure, the other an American public figure,” the Oversight Board said in a statement. “The Board finds that both images violated Meta’s rule that prohibits ‘derogatory sexualized photoshop’ under the Bullying and Harassment policy.”
The board recommended changes to strengthen Meta’s efforts to stop non-consensual and deepfaked sexualized images.
The board overturned Meta’s decision to leave up the post with the Indian figure and upheld Meta’s decision to take down the post featuring the American public figure.
The board said a different term like “non-consensual” would be a clearer description “to explain the idea of unwanted sexualized manipulations of images.”
“Additionally, the Board finds that, ‘photoshop’ is too narrow to cover the array of media manipulation techniques available today, especially generative AI,” the board statement said. “Meta needs to specify in this rule that the prohibition on this content covers this broader range of editing techniques.”
And to make the rules barring non-consensual sexualized images more intuitive, the board found they should be part of the Adult Sexual Exploitation Community Standard rather than falling under Bullying and Harassment.
The Oversight Board recommended Meta “harmonize its policies on non-consensual content by adding a new signal for lack of consent in the Adult Sexual Exploitation policy: context that content is AI-generated or manipulated.”
The board said the policy should also specify that content need not be “non-commercial or produced in a private setting” to violate the rule.