Mirror of https://github.com/roostorg/awesome-safety-tools
README: alphabetize and tweak for consistency

+136 -117
README.md

# awesome-safety-tools

A collection of open source tools for online safety

Inspired by prior work like [Awesome Redteaming](https://github.com/yeyintminthuhtut/Awesome-Red-Teaming/) and [Awesome Phishing](https://github.com/PhishyAlice/awesome-phishing). This list is not an endorsement, but rather an attempt to organize and map the available technology. ❤️

Help contribute by opening a pull request to add more resources and tools!

## Hash Matching

* [Altitude by Jigsaw](https://github.com/jigsaw-code/altitude)
  * web UI and hash matching for violent extremism and terrorism content
* [Hasher Matcher Action (HMA) by Meta](https://github.com/facebook/ThreatExchange/tree/main/hasher-matcher-actioner)
  * hashing algorithm, matching function, and ability to hook into actions
* [Hasher-Matcher-Actioner (CLIP demo)](https://github.com/juanmrad/HMA-CLIP-demo)
  * HMA extension for CLIP as reference for adding other format extensions
* [Lattice Extract by Adobe](https://github.com/adobe/lattice_extract)
  * grid and lattice detection to guard against false positives in hash matching
* [MediaModeration (Wiki Extension)](https://github.com/wikimedia/mediawiki-extensions-MediaModeration?tab=readme-ov-file)
  * CSAM hash matching for Wikimedia
* [PDQ by Meta](https://github.com/facebook/ThreatExchange/tree/main/pdq)
  * perceptual hash algorithm for images
* [Perception by Thorn](https://github.com/thorn-oss/perception)
  * provides a common wrapper around existing, popular perceptual hashes (such as those implemented by ImageHash)
* [RocketChat CSAM](https://github.com/prostasia/rocketchatcsam)
  * CSAM hash matching for RocketChat
* [TMK by Meta](https://github.com/facebook/ThreatExchange/tree/main/tmk)
  * visual similarity match for videos
* [VPDQ by Meta](https://github.com/facebook/ThreatExchange/tree/main/vpdq)
  * visual similarity match for videos using PDQ algorithm

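The matching step behind tools like PDQ and HMA reduces to comparing fixed-length perceptual hashes by Hamming distance. A minimal sketch in plain Python — the 256-bit hash length and the distance threshold of 31 follow PDQ's conventions, and the hex strings below are made up for illustration:

```python
def hamming_distance(hash_a: str, hash_b: str) -> int:
    """Number of differing bits between two equal-length hex-encoded hashes."""
    if len(hash_a) != len(hash_b):
        raise ValueError("hashes must be the same length")
    # XOR the integer values; each set bit in the result is one differing bit
    return bin(int(hash_a, 16) ^ int(hash_b, 16)).count("1")


def is_match(hash_a: str, hash_b: str, threshold: int = 31) -> bool:
    """Match rule used with PDQ-style 256-bit hashes: Hamming distance <= threshold."""
    return hamming_distance(hash_a, hash_b) <= threshold


# Two made-up 256-bit hashes (64 hex characters) differing in a single bit
a = "f" * 64
b = "e" + "f" * 63
print(hamming_distance(a, b), is_match(a, b))  # 1 True
```

At scale, systems typically index hashes for nearest-neighbor lookup rather than comparing every pair like this.
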
## Classification

* [CoPE by Zentropi](https://huggingface.co/zentropi-ai/cope-a-9b)
  * small language model trained for accurate, fast, steerable content classification based on developer-defined content policies
* [Detoxify by Unitary AI](https://github.com/unitaryai/detoxify)
  * detects and mitigates generalized toxic language (including hate speech, harassment, bullying) in text
* [gpt-oss-safeguard by OpenAI](https://github.com/openai/gpt-oss-safeguard)
  * open-weight reasoning model to classify text content based on provided safety policies
* [NSFW Keras Model](https://github.com/GantMan/nsfw_model)
  * convolutional neural network (CNN) based explicit image ML model
* [NSFW Filtering](https://github.com/nsfw-filter/nsfw-filter)
  * browser extension to block explicit images from online platforms; user facing
* [OSmod by Jigsaw](https://github.com/conversationai/conversationai-moderator)
  * toolkit of machine learning (ML) tools, models, and APIs that platforms can use to moderate content
* [Perspective API by Jigsaw](https://github.com/conversationai/perspectiveapi)
  * machine learning-powered tool that helps platforms detect and assess the toxicity of online conversations
* [Private Detector by Bumble](https://github.com/bumble-tech/private-detector)
  * pretrained model for detecting lewd images
* [Roblox Voice Safety Classifier](https://github.com/Roblox/voice-safety-classifier)
  * machine learning model that detects and moderates harmful content in real-time voice chat on Roblox; focuses on spoken language detection
* [Sentinel by Roblox](https://github.com/Roblox/Sentinel/tree/main)
  * Python library designed specifically for realtime detection of extremely rare classes of text by using contrastive learning principles
* [Toxic Prompt RoBERTa by Intel](https://huggingface.co/Intel/toxic-prompt-roberta)
  * BERT-based model for detecting toxic content in prompts to language models

## AI-powered Guardrails

* [Guardrails AI](https://github.com/guardrails-ai/guardrails)
  * Python framework that helps build safe AI applications by checking input/output for predefined risks
* [Kanana Safeguard by Kakao](https://huggingface.co/kakaocorp/kanana-safeguard-8b)
  * harmful content detection model based on Kanana 8B
* [Llama Guard by Meta](https://github.com/meta-llama/PurpleLlama/tree/main/Llama-Guard3)
  * AI-powered content moderation model to detect harm in text-based interactions
* [Llama Prompt Guard 2 by Meta](https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Prompt-Guard-2/86M/MODEL_CARD.md)
  * detects prompt injection and jailbreaking attacks in LLM inputs
* [Purple Llama by Meta](https://github.com/meta-llama/PurpleLlama/tree/main/Llama-Guard3)
  * set of tools to assess and improve LLM security; includes Llama Guard, CyberSec Eval, and Code Shield
* [RoGuard](https://github.com/Roblox/RoGuard-1.0)
  * LLM that helps safeguard unlimited text generation on Roblox
* [ShieldGemma by Google DeepMind](https://www.kaggle.com/code/fernandosr85/shieldgemma-web-content-safety-analyzer?scriptVersionId=198456916)
  * AI safety toolkit designed to help detect and mitigate harmful or unsafe outputs in LLM applications

## Privacy Protection

* [Fawkes Facial De-Recognition Cloaking](https://github.com/Shawn-Shan/fawkes)
  * code and binaries to confuse AIs when trying to match identity to photos, such as [Clearview](https://www.theverge.com/23919134/kashmir-hill-your-face-belongs-to-us-clearview-ai-facial-recognition-privacy-decoder)
  * many other great tools from the same author at github.com/Shawn-Shan
* [Presidio by Microsoft](https://github.com/microsoft/presidio)
  * toolset for detecting Personal Identifiable Information (PII) and other sensitive data in images and text

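As a toy illustration of the PII-detection idea behind tools like Presidio, here is a regex-only redactor using just the standard library. The patterns and labels are illustrative, not Presidio's actual recognizers:

```python
import re

# Illustrative patterns only; production detectors use many more recognizers
# plus context words and checksum validation (e.g. Luhn for card numbers).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "US_PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def redact(text: str, placeholder: str = "<{label}>") -> str:
    """Replace each detected PII span with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(placeholder.format(label=label), text)
    return text


print(redact("Reach me at jane@example.com or 555-867-5309."))
# Reach me at <EMAIL> or <US_PHONE>.
```

Regexes alone produce plenty of false positives and negatives, which is exactly why dedicated toolkits like Presidio exist.
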
## Core Infrastructure

* [AbuseIO](https://github.com/AbuseIO/AbuseIO)
  * abuse management platform designed to help organizations handle and track abuse complaints related to online content, infrastructure, or services
* [Access by Discord](https://github.com/discord/access)
  * centralized portal for managing access to internal systems within any organization
* [Mjolnir by Matrix](https://github.com/matrix-org/mjolnir)
  * moderation bot for the Matrix protocol that automatically enforces content policies
* [Open Truss by GitHub](https://github.com/open-truss/open-truss)
  * framework designed to help users create internal tools without needing to write code

## Redteaming Tools

* [Aymara](https://github.com/aymara-ai/aymara-sdk-python)
  * automated eval tools for AI safety, accuracy, and jailbreak vulnerability
* [Counterfit by Microsoft](https://github.com/Azure/counterfit/)
  * automation tool for assessing AI model security and robustness
* [Garak by NVIDIA](https://github.com/NVIDIA/garak)
  * framework for adversarial testing and model evaluation
* [LLM Canary](https://github.com/LLM-Canary/LLM-Canary)
  * AI benchmarking tool that evaluates models for security vulnerabilities and adversarial robustness
* [Prompt Fuzzer](https://github.com/prompt-security/ps-fuzz)
  * tool for testing prompt injection vulnerabilities in AI systems
* [Promptfoo](https://github.com/promptfoo/promptfoo)
  * automated LLM evaluations, report generation, and several ready-to-use attack strategies
* [PyRIT Documentation](https://azure.github.io/PyRIT/)
  * Microsoft’s Python-based tool for AI red teaming and security testing
* [Socketteer](https://github.com/socketteer?tab=repositories)
  * allows AI models to interact with one another, helping test conversational weaknesses

## Clustering

* [scikit-learn](https://github.com/scikit-learn/scikit-learn)
  * Python library including clustering through various algorithms, such as K-Means, DBSCAN, and hierarchical clustering
* [SpamAssassin by Apache](https://spamassassin.apache.org)
  * anti-spam platform that uses a variety of techniques, including text analysis, Bayesian filtering, and DNS blocklists, to classify and block unsolicited email

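A common Trust & Safety use of clustering is grouping near-duplicate messages, such as spam variants. A stdlib-only sketch using greedy single-pass grouping by string similarity — real pipelines would typically vectorize the text and use scikit-learn's K-Means or DBSCAN instead:

```python
from difflib import SequenceMatcher


def cluster_messages(messages, threshold=0.8):
    """Greedy single-pass clustering: each message joins the first existing
    cluster whose representative (first member) is similar enough, else it
    starts a new cluster. O(n * clusters); fine for small batches."""
    clusters: list[list[str]] = []
    for msg in messages:
        for cluster in clusters:
            if SequenceMatcher(None, msg, cluster[0]).ratio() >= threshold:
                cluster.append(msg)
                break
        else:
            clusters.append([msg])
    return clusters


batch = [
    "WIN a FREE prize now!!",
    "WIN a FREE prize now !!",   # near-duplicate spam variant
    "hey, are we still on for lunch?",
]
print(len(cluster_messages(batch)))  # 2
```

The threshold and similarity measure here are arbitrary choices for illustration; tuning both is the hard part in practice.
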
## Rules Engines

* [Druid by Apache](https://github.com/apache/druid)
  * high performance real-time analytics database
* [Marble](https://github.com/checkmarble/marble)
  * real-time fraud detection and compliance engine tailored for fintech companies and financial institutions
* [Osprey by ROOST](https://github.com/roostorg/osprey)
  * high-performance rules engine for real-time event processing at scale, designed for Trust & Safety and anti-abuse work
* [RulesEngine by Microsoft](https://microsoft.github.io/RulesEngine/)
  * library for abstracting business logic, rules, and policies from a system via JSON for .NET language families
* [Wikimedia Smite Spam](https://github.com/wikimedia/mediawiki-extensions-SmiteSpam)
  * extension for MediaWiki that helps identify and manage spam content on a wiki

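The pattern these engines share is declarative rules, kept outside application code, evaluated against event data. A minimal sketch with a hypothetical JSON rule format — not the actual schema of Osprey or Microsoft RulesEngine:

```python
import json

# Hypothetical rule schema for illustration; each real engine defines its own
# format and expression language.
RULES = json.loads("""
[
  {"name": "new_account_link_spam",
   "when": {"account_age_days": {"lt": 2}, "contains_link": {"eq": true}},
   "action": "queue_for_review"},
  {"name": "rate_limit_breach",
   "when": {"messages_last_minute": {"gt": 30}},
   "action": "rate_limit"}
]
""")

OPS = {
    "eq": lambda a, b: a == b,
    "lt": lambda a, b: a < b,
    "gt": lambda a, b: a > b,
}


def evaluate(event: dict, rules: list) -> list:
    """Return the actions of every rule whose conditions all hold for the event."""
    actions = []
    for rule in rules:
        if all(OPS[op](event.get(field), value)
               for field, cond in rule["when"].items()
               for op, value in cond.items()):
            actions.append(rule["action"])
    return actions


event = {"account_age_days": 1, "contains_link": True, "messages_last_minute": 4}
print(evaluate(event, RULES))  # ['queue_for_review']
```

Keeping rules in data rather than code is what lets Trust & Safety analysts ship new detections without a deploy.
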
## Review

* [BullMQ](https://github.com/taskforcesh/bullmq)
  * message queue and batch processing for NodeJS and Python based on Redis
* [NCMEC Reporting by ello](https://github.com/ello/ncmec_reporting)
  * Ruby client library for reporting incidents to the National Center for Missing & Exploited Children (NCMEC) CyberTipline
* [Owlculus](https://github.com/be0vlk/owlculus)
  * OSINT (Open-Source Intelligence) toolkit and case management platform
* [RabbitMQ](https://github.com/rabbitmq)
  * message broker that enables applications to communicate with each other by sending messages through queues

## Investigation

* [CIB MangoTree](https://github.com/CIB-Mango-Tree/CIB-Mango-Tree-Website)
  * collection of tools to aid researchers in coordinated inauthentic behavior (CIB) analysis
* [Crossover](https://crossover.social/)
  * open-source project that builds dashboards for monitoring and analyzing the recommendation algorithms of social networks, with a focus on disinformation and election monitoring
* [DAU Dashboard by Tattle](https://github.com/tattle-made/dau-dashboard)
  * Deepfake Analysis Unit (DAU), a collaborative space for analyzing deepfakes
* [Feluda by Tattle](https://github.com/tattle-made/feluda)
  * configurable engine for analysing multi-lingual and multi-modal content
* [Interference by Digital Forensics Research Lab](https://github.com/DFRLab/interference2024)
  * interactive, open-source database that tracks allegations of foreign interference or foreign malign influence relevant to the 2024 U.S. presidential election
* [OpenMeasures](https://gitlab.com/openmeasures)
  * open source platform for investigating internet trends
* [ThreatExchange by Meta](https://github.com/facebook/ThreatExchange)
  * platform that enables organizations to share information about threats, such as malware, phishing attacks, and online safety harms in a structured and privacy-compliant manner
* [ThreatExchange Client via PHP](https://github.com/certly/threatexchange)
  * PHP client for ThreatExchange
* [ThreatExchange via Python](https://github.com/facebook/ThreatExchange/tree/main/python-threatexchange)
  * Python library for ThreatExchange
* [TikTok Observatory](https://github.com/aiforensics/tkobservatory)
  * open-source project maintained by [AI Forensics](https://aiforensics.org/) that allows researchers to monitor the promotion and demotion of content by the TikTok recommendation algorithm

## Datasets

* [Aegis Content Safety by NVIDIA](https://huggingface.co/datasets/nvidia/Aegis-AI-Content-Safety-Dataset-2.0)
  * dataset created by NVIDIA to aid in content moderation and toxicity detection
* [Toxic Chat by LMSYS](https://huggingface.co/datasets/lmsys/toxic-chat)
  * dataset of toxic conversations collected from interactions with Vicuna
* [Toxicity by Jigsaw](https://huggingface.co/datasets/google/jigsaw_toxicity_pred)
  * large set of Wikipedia comments labeled by human raters for toxic behavior
* [Uli Dataset by Tattle](https://github.com/tattle-made/uli_dataset)
  * dataset of gendered abuse, created for Uli ML redaction
* [VTC by Unitary AI](https://github.com/unitaryai/VTC)
  * implementation of video-text retrieval with comments, including a dataset, a method for identifying relevant auxiliary information that adds context to videos, and a quantification of the value the comment modality brings to video

## Red Teaming Datasets

* [AI Alignment Dataset by Anthropic](https://atlas.nomic.ai/map/anthropic_rlhf)
  * data used for reinforcement learning with human feedback (RLHF) to align AI models
* [DEF CON Red Teaming Dataset](https://github.com/humane-intelligence/ai_village_defcon_grt_data)
  * dataset from DEF CON’s AI red teaming event
* [HackAPrompt Jailbreak Dataset](https://huggingface.co/datasets/hackaprompt/hackaprompt-dataset/viewer/default/train?p=1&row=137)
  * dataset for testing AI vulnerability to prompt-based jailbreaking
* [HiroKachi Jailbreak Dataset](https://sizu.me/love)
  * dataset focused on adversarial AI prompt attacks
* [Jailbreak Prompt Generator AI Model](https://huggingface.co/tsq2000/Jailbreak-generator)
  * AI model that generates jailbreak-style prompts
* [JailbreakHub by WalledAI](https://huggingface.co/datasets/walledai/JailbreakHub)
  * collection of jailbreak prompts and corresponding model responses
* [Red Team Resistance Leaderboard](https://huggingface.co/spaces/HaizeLabs/red-teaming-resistance-benchmark)
  * rankings of AI models based on resistance to adversarial attacks
* [Rentry Jailbreak Datasets](https://rentry.org/gpt0721)
  * collection of datasets related to jailbreak attempts on AI models
* [SidFeel Jailbreak Dataset](https://github.com/sidfeels/PromptsDB)
  * collection of prompts used for jailbreaking AI models

## Decentralized Platforms

* [Automod by Bluesky](https://github.com/bluesky-social/indigo/tree/main/automod)
  * tool for automating content moderation processes for the Bluesky social network and other apps on the AT Protocol
* [FediCheck](https://connect.iftas.org/library/iftas-documentation/fedicheck/)
  * domain moderation tool, now open-sourced, to assist ActivityPub service providers such as Mastodon servers
* [Fediverse Spam Filtering](https://github.com/MarcT0K/Fediverse-Spam-Filtering/)
  * spam filter for Fediverse social media platforms; the current version is only a proof of concept
* [FIRES](https://github.com/fedimod/fires)
  * reference server and protocol for the exchange of moderation advisories and recommendations
* [Ozone by Bluesky](https://github.com/bluesky-social/ozone)
  * labeling tool designed for Bluesky; includes moderation features to action abuse flags, policy enforcement tools, and investigation features

## User Safety Tools

* [Frankly by Applied Social Media Lab](https://github.com/berkmancenter/frankly/)
  * online deliberations platform that allows anyone to host video-enabled conversations about any topic
* [PolicyKit by UW Social Futures Lab](https://github.com/policykit/policykit)
  * toolkit for building governance in your online community
* [SquadBox by UW Social Futures Lab](https://github.com/amyxzhang/squadbox)
  * tool to help people who are being harassed online by having their friends (or “squad”) moderate their messages
* [Uli by Tattle](https://github.com/tattle-made/Uli)
  * software and resources for mitigating online gender-based violence in India