Mirror of https://github.com/roostorg/awesome-safety-tools
0
fork

Configure Feed

Select the types of activity you want to include in your feed.

Merge branch 'main' into add-google-content-safety-api

Signed-off-by: Cassidy James Blaede <cassidyjames@roost.tools>

authored by

Cassidy James Blaede and committed by
GitHub
23c3233b 955a696f

+143 -113
+2
.github/pull_request_template.md
··· 1 + <!-- Thank you for opening a pull request! Please ensure your addition is in the correct section, follows existing formatting, and is in alphabetical order. If you have more information or context about your addition, please share it below: --> 2 +
+141 -113
README.md
··· 1 1 # awesome-safety-tools 2 + 2 3 A collection of open source tools for online safety 3 4 5 + Inspired by prior work like [Awesome Redteaming](https://github.com/yeyintminthuhtut/Awesome-Red-Teaming/) and [Awesome Phishing](https://github.com/PhishyAlice/awesome-phishing). This list is not an endorsement, but rather an attempt to organize and map the available technology. ❤️ 4 6 5 - Inspired by prior work like [Awesome Redteaming](https://github.com/yeyintminthuhtut/Awesome-Red-Teaming/) and [Awesome Phishing](https://github.com/PhishyAlice/awesome-phishing). This list is not an endorsement, but rather an attempt to organize and map the available technology ❤️ 7 + Help contribute by opening a pull request to add more resources and tools! 6 8 7 - Help and contribute by adding a pull request to add more resources and tools! 8 9 10 + ## Hash Matching 9 11 10 - ## Hash Matching 12 + * [Altitude by Jigsaw](https://github.com/jigsaw-code/altitude) 13 + * web UI and hash matching for violent extremism and terrorism content 11 14 * [Hasher Matcher Action (HMA) by Meta](https://github.com/facebook/ThreatExchange/tree/main/hasher-matcher-actioner) 12 15 * hashing algorithm, matching function, and ability to hook into actions 16 + * [Hasher-Matcher-Actioner (CLIP demo)](https://github.com/juanmrad/HMA-CLIP-demo) 17 + * HMA extension for CLIP as reference for adding other format extensions 18 + * [Lattice Extract by Adobe](https://github.com/adobe/lattice_extract) 19 + * grid and lattice detection to guard against FP in hash matching 20 + * [MediaModeration (Wiki Extension)](https://github.com/wikimedia/mediawiki-extensions-MediaModeration?tab=readme-ov-file) 21 + * CSAM hash matching for Wikimedia 13 22 * [PDQ by Meta](https://github.com/facebook/ThreatExchange/tree/main/pdq) 14 23 * perceptual hash algorithm for images 15 - * [TMK by Meta](https://github.com/facebook/ThreatExchange/tree/main/tmk) 16 - * visual similarity match for videos 17 - * [VPDQ by Meta](https://github.com/facebook/ThreatExchange/tree/main/vpdq) 18 - * visual similarity match for videos using PDQ algorithm 19 - * [Hasher-Matcher-Actioner (CLIP demo)](https://github.com/juanmrad/HMA-CLIP-demo) 20 - * HMA extension for CLIP as reference for adding other format extensins 21 24 * [Perception by Thorn](https://github.com/thorn-oss/perception) 22 25 * provides a common wrapper around existing, popular perceptual hashes (such as those implemented by ImageHash) 23 - * [Altitude by Jigsaw](https://github.com/jigsaw-code/altitude) 24 - * web UI and hash matching for violent extremism and terrorism content 25 - * [Lattice Extract by Adobe](https://github.com/adobe/lattice_extract) 26 - * grid and lattice detection to guard against FP in hash matching 27 26 * [RocketChat CSAM](https://github.com/prostasia/rocketchatcsam) 28 27 * CSAM hash matching for RocketChat 29 - * [MediaModeration (Wiki Extension)](https://github.com/wikimedia/mediawiki-extensions-MediaModeration?tab=readme-ov-file) 30 - * CSAM hash matching for Wikimedia 28 + * [TMK by Meta](https://github.com/facebook/ThreatExchange/tree/main/tmk) 29 + * visual similarity match for videos 30 + * [VPDQ by Meta](https://github.com/facebook/ThreatExchange/tree/main/vpdq) 31 + * visual similarity match for videos using PDQ algorithm 32 + 31 33 32 34 ## Classification 35 + 36 + * [CoPE by Zentropi](https://huggingface.co/zentropi-ai/cope-a-9b) 37 + * small language model trained for accurate, fast, steerable content classification based on developer-defined content policies 38 + * [Detoxify by Unitary AI](https://github.com/unitaryai/detoxify) 39 + * detects and mitigates generalized toxic language (including hate speech, harassment, bullying) in text 40 + * [Content Safety API by Google](https://cloud.google.com/safesearch/docs/content-safety) 41 + * uses machine learning to detect child sexual abuse material (CSAM), nudity, and sexually explicit content in images and videos 42 + * free service, but not open source 43 + * [gpt-oss-safeguard by OpenAI](https://github.com/openai/gpt-oss-safeguard) 44 + * open-weight reasoning model to classify text content based on provided safety policies 45 + * [NSFW Keras Model](https://github.com/GantMan/nsfw_model) 46 + * convoluted neural network (CNN) based explicit image ML model 47 + * [NSFW Filtering](https://github.com/nsfw-filter/nsfw-filter) 48 + * browser extension to block explicit images from online platforms; user facing 33 49 * [OSmod by Jigsaw](https://github.com/conversationai/conversationai-moderator) 34 50 * toolkit of machine learning (ML) tools, models, and APIs that platforms can use to moderate content 35 51 * [Perspective API by Jigsaw](https://github.com/conversationai/perspectiveapi) 36 52 * machine learning-powered tool that helps platforms detect and assess the toxicity of online conversations 53 + * [Private Detector by Bumble](https://github.com/bumble-tech/private-detector) 54 + * pretrained model for detecting lewd images 37 55 * [Roblox Voice Safety Classifier](https://github.com/Roblox/voice-safety-classifier) 38 - * machine learning model that detects and moderates harmful content in real-time voice chat on Roblox. Focuses on spoken language detection. 39 - * [Detoxify by Unitary AI](https://github.com/unitaryai/detoxify) 40 - * detects and mitigates generalized toxic language (including hate speech, harassment, bullying) in text 56 + * machine learning model that detects and moderates harmful content in real-time voice chat on Roblox; focuses on spoken language detection 57 + * [Sentinel by Roblox](https://github.com/Roblox/Sentinel/tree/main) 58 + * Python library designed specifically for realtime detection of extremely rare classes of text by using contrastive learning principles 41 59 * [Toxic Prompt RoBERTa by Intel](https://huggingface.co/Intel/toxic-prompt-roberta) 42 - * a BERT-based model for detecting toxic content in prompts to language models 43 - * [NSFW Filtering](https://github.com/nsfw-filter/nsfw-filter) 44 - * browser extension to block explicit images from online platforms. User facing. 45 - * [NSFW Keras Model](https://github.com/GantMan/nsfw_model) 46 - * convoluted neural network (CNN) based explicit image ML model 47 - * [Private Detector by Bumble](https://github.com/bumble-tech/private-detector) 48 - * a pretrained model for detecting lewd images 49 - * [Google Content Safety API](https://cloud.google.com/safesearch/docs/content-safety) 50 - * a free service by Google that uses machine learning to detect child sexual abuse material (CSAM), nudity, and sexually explicit content in images and videos. Widely used by NGOs, platforms, and law enforcement partners to support online child safety initiatives. (industry service not open source) 51 - * [Sentinel by Roblox](https://github.com/Roblox/Sentinel/tree/main) 52 - * a Python library designed specifically for realtime detection of extremely rare classes of text by using contrastive learning principles 60 + * BERT-based model for detecting toxic content in prompts to language models 61 + 53 62 54 63 ## AI-powered Guardrails 64 + 65 + * [Guardrails AI](https://github.com/guardrails-ai/guardrails) 66 + * Python framework that helps build safe AI applications checking input/output for predefined risks 67 + * [Kanana Safeguard By Kakao](https://huggingface.co/kakaocorp/kanana-safeguard-8b) 68 + * harmful content detection model based on Kanana 8B 55 69 * [Llama Guard by Meta](https://github.com/meta-llama/PurpleLlama/tree/main/Llama-Guard3) 56 70 * AI-powered content moderation model to detect harm in text-based interactions 57 71 * [Llama Prompt Guard 2 by Meta](https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Prompt-Guard-2/86M/MODEL_CARD.md) 58 - * Detects prompt injection and jailbreaking attacks in LLM inputs. 72 + * Detects prompt injection and jailbreaking attacks in LLM inputs 59 73 * [Purple Llama by Meta](https://github.com/meta-llama/PurpleLlama/tree/main/Llama-Guard3) 60 74 * set of tools to assess and improve LLM security. Includes Llama Guard, CyberSec Eval, and Code Shield 75 + * [RoGuard](https://github.com/Roblox/RoGuard-1.0) 76 + * LLM that helps safeguard unlimited text generation on Roblox 61 77 * [ShieldGemma by Google DeepMind](https://www.kaggle.com/code/fernandosr85/shieldgemma-web-content-safety-analyzer?scriptVersionId=198456916) 62 78 * AI safety toolkit by Google DeepMind designed to help detect and mitigate harmful or unsafe outputs in LLM applications 63 - * [Guardrails AI](https://github.com/guardrails-ai/guardrails) 64 - * a Python framework that helps build safe AI applications checking input/output for predefined risks 65 - * [RoGuard](https://github.com/Roblox/RoGuard-1.0) 66 - * a LLM that helps safeguard unlimited text generation on Roblox 79 + 67 80 68 81 ## Privacy Protection 82 + 69 83 * [Fawkes Facial De-Recognition Cloaking](https://github.com/Shawn-Shan/fawkes) 70 84 * Code and binaries to confuse AIs when trying to match identity to photos, such as [Clearview](https://www.theverge.com/23919134/kashmir-hill-your-face-belongs-to-us-clearview-ai-facial-recognition-privacy-decoder) 71 85 * Many other great tools at github.com/Shawn-Shan, MIT researcher 72 86 * [Presidio by Microsoft](https://github.com/microsoft/presidio) 73 87 * toolset for detecting Personal Identifiable Information (PII) and other sensitive data in images and text 74 88 89 + 75 90 ## Core Infrastructure 76 - * [Mjolnir by Matrix](https://github.com/matrix-org/mjolnir) 77 - * moderation bot for the Matrix protocol that automatically enforces content policies 91 + 78 92 * [AbuseIO](https://github.com/AbuseIO/AbuseIO) 79 93 * abuse management platform designed to help organizations handle and track abuse complaints related to online content, infrastructure, or services 80 - * [Open Truss by Github](https://github.com/open-truss/open-truss) 81 - * framework designed to help users create internal tools without needing to write code 82 94 * [Access by Discord](https://github.com/discord/access) 83 - * a centralized portal for managing access to internal systems within any organization 95 + * centralized portal for managing access to internal systems within any organization 96 + * [Mjolnir by Matrix](https://github.com/matrix-org/mjolnir) 97 + * moderation bot for the Matrix protocol that automatically enforces content policies 98 + * [Open Truss by GitHub](https://github.com/open-truss/open-truss) 99 + * framework designed to help users create internal tools without needing to write code 84 100 101 + 85 102 ## Redteaming Tools 86 103 104 + * [Aymara](https://github.com/aymara-ai/aymara-sdk-python) 105 + * Automated eval tools for AI safety, accuracy, and jailbreak vulnerability 106 + * [Counterfit by Microsoft](https://github.com/Azure/counterfit/) 107 + * Automation tool for assessing AI model security and robustness 108 + * [Garak by NVIDIA](https://github.com/NVIDIA/garak) 109 + * Framework for adversarial testing and model evaluation 110 + * [LLM Canary](https://github.com/LLM-Canary/LLM-Canary) 111 + * AI benchmarking tool that evaluates models for security vulnerabilities and adversarial robustness 112 + * [Prompt Fuzzer](https://github.com/prompt-security/ps-fuzz) 113 + * Tool for testing prompt injection vulnerabilities in AI systems 114 + * [Promptfoo](https://github.com/promptfoo/promptfoo) 115 + * Automated LLM evaluations, report generations, several ready-to-use attack strategies 87 116 * [PyRIT Documentation](https://azure.github.io/PyRIT/) 88 - * Microsoft’s Python-based tool for AI red teaming and security testing. 89 - * [AI Benchmarking Tool](https://github.com/LLM-Canary/LLM-Canary) 90 - * Evaluates AI models for security vulnerabilities and adversarial robustness. 91 - * [Prompt Fuzzer Red Teaming Tool](https://github.com/prompt-security/ps-fuzz) 92 - * Tool for testing prompt injection vulnerabilities in AI systems. 93 - * [Open Source Red Teaming Tool – Nvidia](https://github.com/NVIDIA/garak) 94 - * Framework for adversarial testing and model evaluation. 95 - * [Tool that Enables Models to Chat with One Another](https://github.com/socketteer?tab=repositories) 96 - * Allows AI models to interact, helping test conversational weaknesses. 97 - * [Microsoft AI Tool – Counterfit](https://github.com/Azure/counterfit/) 98 - * Automation tool for assessing AI model security and robustness. 99 - * [Automated AI Alignment Evals - Aymara](https://github.com/aymara-ai/aymara-sdk-python) 100 - * Automated eval tools for AI safety, accuracy, and jailbreak vulnerability. 117 + * Microsoft’s Python-based tool for AI red teaming and security testing 118 + * [Socketteer](https://github.com/socketteer?tab=repositories) 119 + * Allows AI models to interact, helping test conversational weaknesses 120 + 101 121 102 122 ## Clustering 103 - * [SpamAssassin by Apache](https://spamassassin.apache.org) 104 - * anti-spam platform that uses a variety of techniques, including text analysis, Bayesian filtering, and DNS blocklists, to classify and block unsolicited email 123 + 105 124 * [scikit-learn](https://github.com/scikit-learn/scikit-learn) 106 125 * python library including clustering through various algorithms, such as K-Means, DBSCAN, and hierarchical clustering 126 + * [SpamAssassin by Apache](https://spamassassin.apache.org) 127 + * anti-spam platform that uses a variety of techniques, including text analysis, Bayesian filtering, and DNS blocklists, to classify and block unsolicited email 107 128 108 129 109 130 ## Rules Engines 131 + 132 + * [Druid by Apache](https://github.com/apache/druid) 133 + * high performance real-time analytics database 134 + * [Marble](https://github.com/checkmarble/marble) 135 + * real-time fraud detection and compliance engine tailored for fintech companies and financial institutions 110 136 * [Osprey by ROOST](https://github.com/roostorg/osprey) 111 - * a high-performance rules engine for real-time event processing at scale, designed for Trust & Safety and anti-abuse work 137 + * high-performance rules engine for real-time event processing at scale, designed for Trust & Safety and anti-abuse work 112 138 * [RulesEngine by Microsoft](https://microsoft.github.io/RulesEngine/) 113 - * a library for abstracting business logic, rules, and policies from a system via JSON for .NET language families 114 - * [Marble](https://github.com/checkmarble/marble) 115 - * a real-time fraud detection and compliance engine tailored for fintech companies and financial institutions 139 + * library for abstracting business logic, rules, and policies from a system via JSON for .NET language families 116 140 * [Wikimedia Smite Spam](https://github.com/wikimedia/mediawiki-extensions-SmiteSpam) 117 - * an extension for MediaWiki that helps identify and manage spam content on a wiki 118 - * [Druid by Apache](https://github.com/apache/druid) 119 - * a high performance real-time analytics database 141 + * extension for MediaWiki that helps identify and manage spam content on a wiki 120 142 121 143 122 144 ## Review 123 - * [RabbitMQ](https://github.com/rabbitmq) 124 - * a message broker that enables applications to communicate with each other by sending messages through queues 145 + 125 146 * [BullMQ](https://github.com/taskforcesh/bullmq) 126 147 * message queue and batch processing for NodeJS and Python based on Redis 127 - * [Owlculus](https://github.com/be0vlk/owlculus) 128 - * an OSINT (Open-Source Intelligence) toolkit and case management platform 129 148 * [NCMEC Reporting by ello](https://github.com/ello/ncmec_reporting) 130 - * a Ruby client library for reporting incidents to the National Center for Missing & Exploited Children (NCMEC) CyberTipline 149 + * Ruby client library for reporting incidents to the National Center for Missing & Exploited Children (NCMEC) CyberTipline 150 + * [Owlculus](https://github.com/be0vlk/owlculus) 151 + * OSINT (Open-Source Intelligence) toolkit and case management platform 152 + * [RabbitMQ](https://github.com/rabbitmq) 153 + * message broker that enables applications to communicate with each other by sending messages through queues 131 154 132 155 133 156 ## Investigation 134 - * [ThreatExchange by Meta](https://github.com/facebook/ThreatExchange ) 135 - * a platform that enables organizations to share information about threats, such as malware, phishing attacks, and online safety harms in a structured and privacy-compliant manner 136 - * [ThreatExchange Client via PHP](https://github.com/certly/threatexchange) 137 - * a PHP client for ThreatExchange 138 - * [ThreatExchange via Python](https://github.com/facebook/ThreatExchange/tree/main/python-threatexchange) 139 - * a Python library for ThreatExchange 140 - * [Feluda by Tattle](https://github.com/tattle-made/feluda) 141 - * A configurable engine for analysing multi-lingual and multi-modal content 157 + 158 + * [CIB MangoTree](https://github.com/CIB-Mango-Tree/CIB-Mango-Tree-Website) 159 + * collection of tools to aid researchers in coordinated inauthentic behavior (CIB) analysis 160 + * [Crossover](https://crossover.social/) 161 + * open-source project that builds dashboards for monitoring and analyzing the recommendation algorithms of social networks, with a focus on disinformation and election monitoring 142 162 * [DAU Dashboard by Tattle](https://github.com/tattle-made/dau-dashboard) 143 163 * Deepfake Analysis Unit(DAU) is a collaborative space for analyzing deepfakes 144 - * [CIB MangoTree](https://github.com/CIB-Mango-Tree/CIB-Mango-Tree-Website) 145 - * A collection of tools to aid researchers in coordinated inauthentic behavior (CIB) analysis 164 + * [Feluda by Tattle](https://github.com/tattle-made/feluda) 165 + * configurable engine for analysing multi-lingual and multi-modal content 146 166 * [Interference by Digital Forensics Research Lab](https://github.com/DFRLab/interference2024) 147 - * an interactive, open-source database that tracks allegations of foreign interference or foreign malign influence relevant to the 2024 U.S. presidential election 148 - * [Crossover](https://crossover.social/) 149 - * An open-source project that builds dashboards for monitoring and analyzing the recommendation algorithms of social networks, with a focus on disinformation and election monitoring. 150 - * [TikTok Observatory](https://github.com/aiforensics/tkobservatory) 151 - * An open-source project maintained by [AI Forensics](https://aiforensics.org/) that allows researchers to monitor the promotion and demotion of content by the TikTok reccomendation algorithm. 167 + * interactive, open-source database that tracks allegations of foreign interference or foreign malign influence relevant to the 2024 U.S. presidential election 152 168 * [OpenMeasures](https://gitlab.com/openmeasures) 153 - * an open source platform for investigating internet trends 169 + * open source platform for investigating internet trends 170 + * [ThreatExchange by Meta](https://github.com/facebook/ThreatExchange ) 171 + * platform that enables organizations to share information about threats, such as malware, phishing attacks, and online safety harms in a structured and privacy-compliant manner 172 + * [ThreatExchange Client via PHP](https://github.com/certly/threatexchange) 173 + * PHP client for ThreatExchange 174 + * [ThreatExchange via Python](https://github.com/facebook/ThreatExchange/tree/main/python-threatexchange) 175 + * Python library for ThreatExchange 176 + * [TikTok Observatory](https://github.com/aiforensics/tkobservatory) 177 + * open-source project maintained by [AI Forensics](https://aiforensics.org/) that allows researchers to monitor the promotion and demotion of content by the TikTok reccomendation algorithm 154 178 155 179 156 180 ## Datasets 181 + 157 182 * [Aegis Content Safety by NVIDIA](https://huggingface.co/datasets/nvidia/Aegis-AI-Content-Safety-Dataset-2.0) 158 - * a dataset created by NVIDIA to aid in content moderation and toxicity detection 183 + * dataset created by NVIDIA to aid in content moderation and toxicity detection 184 + * [Toxic Chat by LMSYS](https://huggingface.co/datasets/lmsys/toxic-chat) 185 + * dataset of toxic conversations collected from interactions with Vicuna 159 186 * [Toxicity by Jigsaw](https://huggingface.co/datasets/google/jigsaw_toxicity_pred) 160 - * a large number of Wikipedia comments which have been labeled by human raters for toxic behavior 161 - * [Toxic Chat by LMSYS](https://huggingface.co/datasets/lmsys/toxic-chat) 162 - * a dataset of toxic conversations collected from interactions with Vicuna 187 + * large number of Wikipedia comments which have been labeled by human raters for toxic behavior 163 188 * [Uli Dataset by Tattle](https://github.com/tattle-made/uli_dataset) 164 - * A dataset of gendered abuse, created for Uli ML redaction. 189 + * dataset of gendered abuse, created for Uli ML redaction. 165 190 * [VTC by Unitary AI](https://github.com/unitaryai/VTC) 166 - * an implementation of video-text retrieval with comments including a dataset, method of identifying relevant auxiliary information that adds context to videos, and quantification of the value comment-modality bring to video. 191 + * implementation of video-text retrieval with comments including a dataset, method of identifying relevant auxiliary information that adds context to videos, and quantification of the value comment-modality bring to video 167 192 168 193 169 194 ## Red Teaming Datasets 170 - * [Red Team Resistance Leaderboard](https://huggingface.co/spaces/HaizeLabs/red-teaming-resistance-benchmark) 171 - * rankings of AI models based on resistance to adversarial attacks. 172 - * [JailbreakHub by WalledAI](https://huggingface.co/datasets/walledai/JailbreakHub) 173 - * a collection of jailbreak prompts and corresponding model responses 174 - * [SidFeel Jailbreak Dataset](https://github.com/sidfeels/PromptsDB) 175 - * a collection of prompts used for jailbreaking AI models. 195 + 196 + * [AI Alignment Dataset by Anthropic](https://atlas.nomic.ai/map/anthropic_rlhf) 197 + * data used for reinforcement learning with human feedback (RLHF) to align AI models. 198 + * [DEFCOM Red Teaming Dataset](https://github.com/humane-intelligence/ai_village_defcon_grt_data) 199 + * dataset from DEF CON’s AI red teaming event. 176 200 * [HackAPrompt Jailbreak Dataset](https://huggingface.co/datasets/hackaprompt/hackaprompt-dataset/viewer/default/train?p=1&row=137) 177 - * a dataset for testing AI vulnerability to prompt-based jailbreaking. 201 + * dataset for testing AI vulnerability to prompt-based jailbreaking 178 202 * [HiroKachi Jailbreak Dataset](https://sizu.me/love) 179 - * adataset focused on adversarial AI prompt attacks. 203 + * dataset focused on adversarial AI prompt attacks 204 + * [Jailbreak Prompt Generator AI Model](https://huggingface.co/tsq2000/Jailbreak-generator) 205 + * AI model that generates jailbreak-style prompts 206 + * [JailbreakHub by WalledAI](https://huggingface.co/datasets/walledai/JailbreakHub) 207 + * collection of jailbreak prompts and corresponding model responses 208 + * [Red Team Resistance Leaderboard](https://huggingface.co/spaces/HaizeLabs/red-teaming-resistance-benchmark) 209 + * rankings of AI models based on resistance to adversarial attacks 180 210 * [Rentry Jailbreak Datasets](https://rentry.org/gpt0721) 181 - * collection of datasets related to jailbreak attempts on AI models. 182 - * [DEFCOM Red Teaming Dataset](https://github.com/humane-intelligence/ai_village_defcon_grt_data) 183 - * dataset from DEF CON’s AI red teaming event. 184 - * [Anthropic’s AI Alignment Dataset](https://atlas.nomic.ai/map/anthropic_rlhf) 185 - * data used for reinforcement learning with human feedback (RLHF) to align AI models. 186 - * [Jailbreak Prompt Generator AI Model](https://huggingface.co/tsq2000/Jailbreak-generator) 187 - * AI model that generates jailbreak-style prompts. 211 + * collection of datasets related to jailbreak attempts on AI models 212 + * [SidFeel Jailbreak Dataset](https://github.com/sidfeels/PromptsDB) 213 + * collection of prompts used for jailbreaking AI models 188 214 215 + 189 216 ## Decentralized Platforms 217 + 218 + * [Automod by Bluesky](https://github.com/bluesky-social/indigo/tree/main/automod) 219 + * tool for automating content moderation processes for the Bluesky social network and other apps on the AT Protocol 190 220 * [FediCheck](https://connect.iftas.org/library/iftas-documentation/fedicheck/) 191 221 * domain moderation tool to assist ActivityPub service providers, such as Mastodon servers, now open-sourced. 192 222 * [Fediverse Spam Filtering](https://github.com/MarcT0K/Fediverse-Spam-Filtering/ ) 193 - * a spam filter for Fediverse social media platforms. For now, the current version is only a proof of concept. 223 + * spam filter for Fediverse social media platforms. For now, the current version is only a proof of concept. 194 224 * [FIRES](https://github.com/fedimod/fires) 195 225 * reference server + protocol for the exchange of moderation adivsories and recommendations 196 226 * [Ozone by Bluesky](https://github.com/bluesky-social/ozone) 197 227 * labeling tool designed for Bluesky. Includes moderation features to action on abuse flags, policy enforcement tools, and investigation features 198 - * [Automod by Bluesky](https://github.com/bluesky-social/indigo/tree/main/automod) 199 - * a tool for automating content moderation processes for the Bluesky social network and other apps on the AT Protocol 228 + 200 229 201 230 ## User Safety Tools 202 - * [Uli by Tattle](https://github.com/tattle-made/Uli) 203 - * Software and Resources for Mitigating Online Gender Based Violence in India 231 + 204 232 * [Frankly by Applied Social Media Lab](https://github.com/berkmancenter/frankly/) 205 - * an online deliberations platform that allows anyone to host video-enabled conversations about any topic 233 + * online deliberations platform that allows anyone to host video-enabled conversations about any topic 206 234 * [PolicyKit by UW Social Futures Lab](https://github.com/policykit/policykit) 207 - * a toolkit for building governance in your online community 235 + * toolkit for building governance in your online community 208 236 * [SquadBox by UW Social Futures Lab](https://github.com/amyxzhang/squadbox) 209 - * a tool to help people who are being harassed online by having their friends (or “squad”) moderate their messages 210 - 211 - 237 + * tool to help people who are being harassed online by having their friends (or “squad”) moderate their messages 238 + * [Uli by Tattle](https://github.com/tattle-made/Uli) 239 + * Software and Resources for Mitigating Online Gender Based Violence in India 212 240