
Oversight Board Criticizes Meta’s Automated Moderation in Israel-Hamas War


Today, Meta’s Oversight Board released its first emergency decision about content moderation on Facebook and Instagram, spurred by the conflict between Israel and Hamas.

The two cases center on pieces of content posted on Facebook and Instagram: one depicting the aftermath of a strike on Al-Shifa Hospital in Gaza, the other showing the kidnapping of an Israeli hostage. Meta initially removed both and then restored them once the board took on the cases. The kidnapping video had been removed under a policy, created in the aftermath of the October 7 Hamas attacks, against showing the faces of hostages, as well as the company’s long-standing rules on content related to “dangerous organizations and individuals.” The post from Al-Shifa Hospital was removed for violating the company’s policies on violent imagery.

In the rulings, the Oversight Board supported Meta’s decisions to reinstate both pieces of content, but took aim at some of the company’s other practices, particularly the automated systems it uses to find and remove content that violates its rules. To detect hateful content, or content that incites violence, social media platforms use “classifiers,” machine learning models that can flag or remove posts that violate their policies. These models are a foundational component of many content moderation systems, particularly because there is far too much content for human reviewers to evaluate every post.
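
As an illustration of how such a classifier-gated pipeline might be structured, here is a minimal Python sketch. It is hypothetical: the scoring function, thresholds, and actions are assumptions for demonstration, not Meta’s actual system.

```python
# Hypothetical sketch of a classifier-gated moderation pipeline.
# Not Meta's actual system; the scoring function, thresholds, and
# actions are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    text: str


def violation_score(post: Post) -> float:
    """Stand-in for a trained classifier that estimates the probability
    a post violates a policy (e.g., violent imagery)."""
    # A real system would run a machine learning model here.
    return 0.0


REMOVE_THRESHOLD = 0.90  # above this score, remove automatically
REVIEW_THRESHOLD = 0.60  # above this score, escalate to human review


def moderate(post: Post) -> str:
    score = violation_score(post)
    if score >= REMOVE_THRESHOLD:
        return "remove"        # automated takedown, no human in the loop
    if score >= REVIEW_THRESHOLD:
        return "human_review"  # borderline cases go to people
    return "keep"
```

The thresholds are where the trade-off lives: set them lower and more violating content is caught automatically, but more legitimate posts are swept up with it.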

“We as the board have recommended certain steps, including creating a crisis protocol center, in past decisions,” Michael McConnell, a cochair of the Oversight Board, told WIRED. “Automation is going to remain. But my hope would be to provide human intervention strategically at the points where mistakes are most often made by the automated systems, and [that] are of particular importance due to the heightened public interest and information surrounding the conflicts.”

Both videos were removed because Meta had adjusted these automated systems to be more sensitive to any content coming out of Israel and Gaza that might violate its policies. That adjustment made the systems more likely to mistakenly remove content that should have remained up. And these decisions can have real-world implications.
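
Continuing the hypothetical sketch above, the kind of adjustment the board describes could amount to lowering the removal threshold for content from the affected region, catching more violations at the cost of more false positives. The region check and numbers here are assumptions for illustration only.

```python
# Hypothetical illustration of a crisis-time sensitivity adjustment.
# The thresholds and region logic are assumptions, not Meta's configuration.

DEFAULT_REMOVE_THRESHOLD = 0.90
CRISIS_REMOVE_THRESHOLD = 0.70  # more sensitive: more violations caught,
                                # but more legitimate posts removed too


def removal_threshold(region: str, crisis_regions: set[str]) -> float:
    """Lower the auto-removal bar for posts from regions in crisis."""
    if region in crisis_regions:
        return CRISIS_REMOVE_THRESHOLD
    return DEFAULT_REMOVE_THRESHOLD
```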

“The [Oversight Board] believes that safety concerns do not justify erring on the side of removing graphic content that has the purpose of raising awareness about or condemning potential war crimes, crimes against humanity, or grave violations of human rights,” the Al-Shifa ruling notes. “Such restrictions can even obstruct information necessary for the safety of people on the ground in those conflicts.” Meta’s current policy is to retain content that may show war crimes or crimes against humanity for one year, though the board says that Meta is in the process of updating its documentation systems.

“We welcome the Oversight Board’s decision today on this case,” Meta wrote in a company blog post. “Both expression and safety are important to us and the people who use our services.”
