
#contentmoderation

1 post · 1 participant · 0 posts today
Miguel Afonso Caetano:

"A sweeping crackdown on posts on Instagram and Facebook that are critical of Israel—or even vaguely supportive of Palestinians—was directly orchestrated by the government of Israel, according to internal Meta data obtained by Drop Site News. The data show that Meta has complied with 94% of takedown requests issued by Israel since October 7, 2023. Israel is the biggest originator of takedown requests globally by far, and Meta has followed suit—widening the net of posts it automatically removes, and creating what can be called the largest mass censorship operation in modern history.

Government requests for takedowns generally focus on posts made by citizens inside that government's borders, Meta insiders said. What makes Israel's campaign unique is its success in censoring speech in many countries outside of Israel. What's more, Israel's censorship project will echo well into the future, insiders said, as the AI program Meta is currently training how to moderate content will base future decisions on the successful takedown of content critical of Israel's genocide.

The data, compiled and provided to Drop Site News by whistleblowers, reveal the internal mechanics of Meta's "Integrity Organization"—an organization within Meta dedicated to ensuring the safety and authenticity on its platforms. Takedown requests (TDRs) allow individuals, organizations, and government officials to request the removal of content that allegedly violates Meta's policies. The documents indicate that the vast majority of Israel's requests—95%—fall under Meta's "terrorism" or "violence and incitement" categories. And Israel's requests have overwhelmingly targeted users from Arab and Muslim-majority nations in a massive effort to silence criticism of Israel."

https://www.dropsitenews.com/p/leaked-data-israeli-censorship-meta

#SocialMedia #Meta #Facebook #Instagram #Israel #Palestine #Censorship #ContentModeration
Miguel Afonso Caetano:

"Yes, Facebook lied to the press often, about a lot of things; yes, Internet.org (Facebook's strategy to give "free internet" to people in the developing world) was a cynical ploy at getting new Facebook users; yes, Facebook knew that it couldn't read posts in Burmese and didn't care; yes, it slow-walked solutions to its moderation problems in Myanmar even after it knew about them; yes, Facebook bent its own rules all the time to stay unblocked in specific countries; yes, Facebook took down content at the behest of China then pretended it was an accident and lied about it; yes, Mark Zuckerberg and Sheryl Sandberg intervened on major content moderation decisions then implied that they did not. Basically, it confirmed my priors about Facebook, which is not a criticism because reporting on this company and getting anything beyond a canned statement or carefully rehearsed answer from them over and over for years and years and years has made me feel like I was going crazy. Careless People confirmed that I am not.

It has been years since Wynn-Williams left Facebook, but it is clear these are the same careless people running the company. When I wonder if the company knows that its platforms are being taken over by the worst AI slop you could possibly imagine, if it knows that it is directly paying people to flood these platforms with spam, if it knows it is full of deepfakes and AI generated content of celebrities and cartoon characters doing awful things, if it knows it is showing terrible things to kids. Of course it does. It just doesn't care."

https://www.404media.co/careless-people-is-the-book-about-facebook-ive-wanted-for-a-decade/

#SocialMedia #Facebook #Meta #BigTech #ContentModeration #Censorship #SiliconValley
Calishat:

#Facebook #NonEnglish #ContentModeration #misinformation #disinformation

'Meta's automated filters and content moderators often fall short when dealing with non-English content. This isn't just a technical limitation — it's a dangerous oversight. In communities where English isn't the dominant language, this gap leaves space for disinformation to flourish unchecked.

Two recent examples show the effects disinformation can have.'

https://www.poynter.org/fact-checking/2025/meta-disinformation-non-english-languages/
The Nexus of Privacy:

Here's the summary from DAIR's page at https://www.dair-institute.org/tigray-genocide/ #tigraygenocide #contentmoderation

"Protecting #democracy from threats created by Internet #platforms is a laudable goal. But it is not worth the cost imposed by legislative attempts so far: empowering the government to control legal speech online. Lawmakers’ attempts to impose their own top-down speech rules are particularly unwarranted given the far more promising possibilities offered by #usercontrolled and #decentralized #contentmoderation systems."

techpolicy.press/regulated-dem

Tech Policy Press · Regulated Democracy and Regulated Speech
The First Amendment is meant to protect us from short-sightedness about state power, writes Daphne Keller.

Online #ContentModeration: What works, and what people want | MIT Sloan
mitsloan.mit.edu/ideas-made-to

"Contrary to recent claims from many political elites, the main problem with professional #factchecking is not bias or overreach," he added. "The problem is that professional fact checkers can't keep up with the vast scale of content posted every day."

#misinformation

MIT Sloan · Online content moderation: What works, and what people want
Looks like Mastodon is going to need better moderator tools

Underprivileged people are apparently especially easy to target on #ActivityPub, or so I have been told, and I believe it. They have been complaining about it to the Mastodon developers over the years, but the Mastodon developers at best don't give a shit, at worst are hostile to the idea, and have mostly ignored these criticisms. Well, now we have "Nicole," the infamous "Fediverse Chick," a spambot that seems to be registering hundreds of accounts across several #Mastodon instances and then, once registered, sends everyone a direct message introducing itself.

You can't block it by domain or by name, since the accounts keep changing and span multiple instances. It is the responsibility of each server to prevent registrations of bots like this.

But what happens when the bot designer ups the ante? What happens when they try this approach with a different name each time? Who is to say that isn't already happening without our noticing? This seems to be an attempt to show everyone a huge weakness in the content moderation toolkit, and we are long overdue to address these weaknesses.
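As a concrete illustration of what a better tool might look like: the sketch below is not any existing Mastodon API or feature. It is a hypothetical server-side heuristic that flags brand-new accounts which send near-identical direct messages to many strangers shortly after registering, which is the Nicole pattern. The Account and Message records and every threshold are illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from difflib import SequenceMatcher


@dataclass
class Message:
    recipient: str      # account that received the DM
    body: str           # message text
    sent_at: datetime


@dataclass
class Account:
    created_at: datetime
    dms: list           # Messages sent by this account


def looks_like_greeting_spam(account: Account,
                             max_age: timedelta = timedelta(hours=24),
                             min_recipients: int = 20,
                             min_similarity: float = 0.9) -> bool:
    """Flag accounts that DM many distinct strangers with near-identical
    text soon after registering. All thresholds are illustrative."""
    recent = [m for m in account.dms
              if m.sent_at - account.created_at <= max_age]
    if len({m.recipient for m in recent}) < min_recipients:
        return False
    # Compare every body against the first one; a production system
    # would cluster messages rather than anchor on a single sample.
    first = recent[0].body
    similar = sum(
        SequenceMatcher(None, first, m.body).ratio() >= min_similarity
        for m in recent)
    return similar / len(recent) >= 0.8
```

Even a heuristic like this only works per server, since each instance sees only its own registrations and outgoing DMs; a real defense against a campaign that spans instances would need shared reporting between servers, which is precisely the gap the post complains about.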

Tamar Mitts, professor of international and public affairs at Columbia University, breaks down for @time why the fight against online extremism keeps failing and why we need to understand that the issue "is bigger than any one site can handle."

flip.it/qS0RGQ

TIME · Why the Fight Against Online Extremism Keeps Failing
Yes, Big Tech can do more. But all online spaces must commit to a more unified stance against extremism.

"Mark Zuckerberg might be done with factchecking, but he cannot escape the truth. The third richest man in the world announced that Meta will replace its independent factchecking with community notes. I went to the AI Action Summit in Paris this week to tell tech execs and policymakers why this is wrong.

Instead of scaling back programmes that make social media and artificial intelligence more trustworthy, companies need to invest in and respect the people who filter social media and who label the data that AI relies on. I know because I used to be one of them.

A mum of two young children, I was recruited from my native South Africa with the promise to join the growing tech sector in Kenya for a Facebook subcontractor, Sama, as a content moderator. For two years, I spent up to 10 hours a day staring at child abuse, human mutilation, racist attacks and the darkest parts of the internet so you did not have to.

It was not just the type of content I had to watch that gave me insomnia, anxiety and migraines, it was the quantity too. In Sama we had something called AHT, or action handling time. This was the amount of time we were given to analyse and rate a piece of content. We were being timed, and the company measured our success in seconds. We were constantly under pressure to get it right.

You could not stop if you saw something traumatic. You could not stop for your mental health. You could not stop to go to the bathroom. You just could not stop. We were told the client, in our case Facebook, required us to keep going."

theguardian.com/commentisfree/

The Guardian · I was a content moderator for Facebook. I saw the real cost of outsourcing digital labour
By Sonia Kgomo