Instagram chief Adam Mosseri said in a post on Threads that the platform’s “biggest safety focus” is “managing content responsibly” around the Israel-Hamas war, adding that the platform is “getting pulled in a lot of directions at once right now.”
In the meantime, Threads will continue to temporarily block searches for terms such as “Covid-19” and “long Covid.”
Threads, which launched in July, is Meta’s text-based competitor to X, formerly known as Twitter. Meta also owns the social media platforms Instagram and Facebook. The company has been facing pressure from regulators to be “vigilant” about removing disinformation during the Israel-Hamas war and ahead of upcoming elections.
But Meta has also received sharp criticism over its decision to block search results for certain terms on Threads. For instance, when users type the word “Covid” or “vaccine” into the search bar, they are prompted with a suggestion to leave the platform and visit the website for the Centers for Disease Control and Prevention.
Mosseri said he did not have a timeline for when the company will stop blocking Covid-related search terms on the platform, but said the block is “temporary and we are working on it,” adding that it will likely be resolved within weeks or months.
“The biggest safety focus right now is managing content responsibly given the war in Israel and Gaza,” Mosseri wrote. “The broader team is working on deeper integrations into Instagram and Facebook, graph building, EU compliance, Fediverse support, trending, and generally making sure Threads continues to grow.”
A Meta spokesperson told CNBC that people will be able to search for terms like “Covid” in future updates when the company is “confident in the quality of the results.”
In a blog post published Friday, Meta described the actions it has taken to moderate content related to the Israel-Hamas war since the conflict began. The company has created a special operations center staffed with experts fluent in Hebrew and Arabic, and it has removed or marked more than 795,000 posts that violated its policies against violent and graphic content, hate speech, harassment or coordinating harm, among others.
“The reality is that we have lots of important work to do,” Mosseri said. “The team is moving fast, but we’re not yet where we want to be.”