Meta’s oversight board is calling for a reversal of the blanket ban on the word “shaheed” (Arabic for “martyr”). The social media giant currently enforces a policy banning the word on Facebook, Instagram and Threads, without taking into account that the word has multiple meanings and is also a common first or last name. The Oversight Board, a group that operates independently but is funded by Meta, found that the approach was “overbroad” and has been curtailing countless users’ freedom of expression, according to the ruling on its website.
Instead, the board is arguing for a context-sensitive approach. It believes the current policy is too broad and fails to consider how “shaheed” is actually used. Banning the word altogether can prevent people from sharing important stories or commemorating those who have died, the group says, which marginalizes whole populations. It now recommends that Meta develop a more nuanced approach that takes the context of each use into account.
This is more urgent considering Israel’s ongoing war on Gaza, where Facebook and Instagram have been accused of limiting content that supports Palestinians. At present, Meta hides any posts that use the word to describe individuals the tech giant has placed on its list of “dangerous organizations and individuals,” which covers militant groups, drug organizations and racist groups, and includes Hamas.
The board’s recommendation, if adopted by Meta, could set a precedent for how tech companies handle content moderation in the future. The episode highlights the limitations of content moderation as it currently exists on platforms that operate globally. In a statement reported by Reuters, a Meta spokesperson said the company would review the board’s recommendation and respond within 60 days.
This isn’t the first time Meta has grappled with the issue. Almost exactly a year ago, Meta sought the board’s opinion as it reassessed the policy, which went into effect in 2020, after failing to reach a consensus internally. In its referral to the board, the company revealed that the ban on “shaheed” accounted for more content removals than any other single word or phrase on Facebook and Instagram.