# awesome-safety-tools

A collection of open source tools for online safety

Inspired by prior work like [Awesome Redteaming](https://github.com/yeyintminthuhtut/Awesome-Red-Teaming/) and [Awesome Phishing](https://github.com/PhishyAlice/awesome-phishing). This list is not an endorsement, but rather an attempt to organize and map the available technology. ❤️

Help contribute by opening a pull request to add more resources and tools!

## Hash Matching

* [Altitude by Jigsaw](https://github.com/jigsaw-code/altitude)
 * web UI and hash matching for violent extremism and terrorism content
* [Hasher Matcher Action (HMA) by Meta](https://github.com/facebook/ThreatExchange/tree/main/hasher-matcher-actioner)
 * hashing algorithm, matching function, and ability to hook into actions
* [Hasher-Matcher-Actioner (CLIP demo)](https://github.com/juanmrad/HMA-CLIP-demo)
 * HMA extension for CLIP as a reference for adding other format extensions
* [Lattice Extract by Adobe](https://github.com/adobe/lattice_extract)
 * grid and lattice detection to guard against false positives in hash matching
* [MediaModeration (Wiki Extension)](https://github.com/wikimedia/mediawiki-extensions-MediaModeration?tab=readme-ov-file)
 * CSAM hash matching for Wikimedia
* [PDQ by Meta](https://github.com/facebook/ThreatExchange/tree/main/pdq)
 * perceptual hash algorithm for images
* [Perception by Thorn](https://github.com/thorn-oss/perception)
 * provides a common wrapper around existing, popular perceptual hashes (such as those implemented by ImageHash)
* [RocketChat CSAM](https://github.com/prostasia/rocketchatcsam)
 * CSAM hash matching for RocketChat
* [TMK by Meta](https://github.com/facebook/ThreatExchange/tree/main/tmk)
 * visual similarity match for videos
* [VPDQ by Meta](https://github.com/facebook/ThreatExchange/tree/main/vpdq)
 * visual similarity match for videos using the PDQ algorithm
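
Several of the tools above (PDQ, VPDQ, Perception) come down to the same match step: comparing fixed-length perceptual hashes by Hamming distance against a threshold. A minimal stdlib sketch of that step follows; the 256-bit size and threshold of 31 mirror PDQ's commonly cited defaults and are assumptions here, not taken from any one tool's source.

```python
# Sketch of the matching step shared by perceptual-hash tools: two
# images are treated as a match when the Hamming distance between
# their fixed-length hashes falls under a threshold. The 256-bit
# hash size and threshold of 31 are assumed PDQ-style defaults.

def hamming_distance(a: int, b: int) -> int:
    """Count the bits that differ between two equal-length hashes."""
    return bin(a ^ b).count("1")

def is_match(hash_a: int, hash_b: int, threshold: int = 31) -> bool:
    """True when the hashes are near-duplicates under the threshold."""
    return hamming_distance(hash_a, hash_b) <= threshold

# Two 256-bit hashes differing in only two bits are a match;
# a hash differing in almost every bit is not.
base = 1 << 255
print(is_match(base | 0b1011, base | 0b0001))  # True
print(is_match(base, (1 << 256) - 1))          # False
```

Production systems index many hashes and use approximate-nearest-neighbor structures rather than pairwise scans, but the distance test itself is this simple.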

## Classification

* [CoPE by Zentropi](https://huggingface.co/zentropi-ai/cope-a-9b)
 * small language model trained for accurate, fast, steerable content classification based on developer-defined content policies
* [Detoxify by Unitary AI](https://github.com/unitaryai/detoxify)
 * detects and mitigates generalized toxic language (including hate speech, harassment, bullying) in text
* [gpt-oss-safeguard by OpenAI](https://github.com/openai/gpt-oss-safeguard)
 * open-weight reasoning model to classify text content based on provided safety policies
* [NSFW Keras Model](https://github.com/GantMan/nsfw_model)
 * convolutional neural network (CNN) based explicit-image ML model
* [NSFW Filtering](https://github.com/nsfw-filter/nsfw-filter)
 * browser extension to block explicit images from online platforms; user facing
* [OSmod by Jigsaw](https://github.com/conversationai/conversationai-moderator)
 * toolkit of machine learning (ML) tools, models, and APIs that platforms can use to moderate content
* [Perspective API by Jigsaw](https://github.com/conversationai/perspectiveapi)
 * machine learning-powered tool that helps platforms detect and assess the toxicity of online conversations
* [Private Detector by Bumble](https://github.com/bumble-tech/private-detector)
 * pretrained model for detecting lewd images
* [Roblox Voice Safety Classifier](https://github.com/Roblox/voice-safety-classifier)
 * machine learning model that detects and moderates harmful content in real-time voice chat on Roblox; focuses on spoken language detection
* [Sentinel by Roblox](https://github.com/Roblox/Sentinel/tree/main)
 * Python library designed specifically for realtime detection of extremely rare classes of text by using contrastive learning principles
* [Toxic Prompt RoBERTa by Intel](https://huggingface.co/Intel/toxic-prompt-roberta)
 * BERT-based model for detecting toxic content in prompts to language models

## AI-powered Guardrails

* [Guardrails AI](https://github.com/guardrails-ai/guardrails)
 * Python framework that helps build safe AI applications by checking input/output for predefined risks
* [Kanana Safeguard by Kakao](https://huggingface.co/kakaocorp/kanana-safeguard-8b)
 * harmful content detection model based on Kanana 8B
* [Llama Guard by Meta](https://github.com/meta-llama/PurpleLlama/tree/main/Llama-Guard3)
 * AI-powered content moderation model to detect harm in text-based interactions
* [Llama Prompt Guard 2 by Meta](https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Prompt-Guard-2/86M/MODEL_CARD.md)
 * detects prompt injection and jailbreaking attacks in LLM inputs
* [Purple Llama by Meta](https://github.com/meta-llama/PurpleLlama/tree/main/Llama-Guard3)
 * set of tools to assess and improve LLM security; includes Llama Guard, CyberSec Eval, and Code Shield
* [RoGuard by Roblox](https://github.com/Roblox/RoGuard-1.0)
 * LLM that helps safeguard unlimited text generation on Roblox
* [ShieldGemma by Google DeepMind](https://www.kaggle.com/code/fernandosr85/shieldgemma-web-content-safety-analyzer?scriptVersionId=198456916)
 * AI safety toolkit designed to help detect and mitigate harmful or unsafe outputs in LLM applications

## Privacy Protection

* [Fawkes Facial De-Recognition Cloaking](https://github.com/Shawn-Shan/fawkes)
 * code and binaries to confuse AIs when trying to match identity to photos, such as [Clearview](https://www.theverge.com/23919134/kashmir-hill-your-face-belongs-to-us-clearview-ai-facial-recognition-privacy-decoder)
 * many other great tools at github.com/Shawn-Shan, University of Chicago researcher
* [Presidio by Microsoft](https://github.com/microsoft/presidio)
 * toolset for detecting Personally Identifiable Information (PII) and other sensitive data in images and text
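
As a small (and greatly simplified) illustration of what PII detectors like Presidio build on, here is a stdlib-only sketch of the pattern-matching layer. Real pipelines add NER models, context scoring, and many more recognizers; the entity names and regexes below are illustrative assumptions, not Presidio's API.

```python
# Hedged sketch of regex-based PII detection: scan text against a
# dictionary of named patterns and report what matched. The entity
# labels and patterns are made up for illustration.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "US_PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def find_pii(text: str) -> list[tuple[str, str]]:
    """Return (entity_type, matched_text) pairs found in the text."""
    hits = []
    for label, pattern in PII_PATTERNS.items():
        hits.extend((label, m.group()) for m in pattern.finditer(text))
    return hits

print(find_pii("Reach me at jane@example.com or 555-867-5309."))
# [('EMAIL', 'jane@example.com'), ('US_PHONE', '555-867-5309')]
```

Tools in this space typically return spans and confidence scores rather than raw matches, so downstream code can redact, anonymize, or flag for review.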

## Core Infrastructure

* [AbuseIO](https://github.com/AbuseIO/AbuseIO)
 * abuse management platform designed to help organizations handle and track abuse complaints related to online content, infrastructure, or services
* [Access by Discord](https://github.com/discord/access)
 * centralized portal for managing access to internal systems within any organization
* [Mjolnir by Matrix](https://github.com/matrix-org/mjolnir)
 * moderation bot for the Matrix protocol that automatically enforces content policies
* [Open Truss by GitHub](https://github.com/open-truss/open-truss)
 * framework designed to help users create internal tools without needing to write code

## Red Teaming Tools

* [Aymara](https://github.com/aymara-ai/aymara-sdk-python)
 * automated eval tools for AI safety, accuracy, and jailbreak vulnerability
* [Counterfit by Microsoft](https://github.com/Azure/counterfit/)
 * automation tool for assessing AI model security and robustness
* [Garak by NVIDIA](https://github.com/NVIDIA/garak)
 * framework for adversarial testing and model evaluation
* [LLM Canary](https://github.com/LLM-Canary/LLM-Canary)
 * AI benchmarking tool that evaluates models for security vulnerabilities and adversarial robustness
* [Prompt Fuzzer](https://github.com/prompt-security/ps-fuzz)
 * tool for testing prompt injection vulnerabilities in AI systems
* [Promptfoo](https://github.com/promptfoo/promptfoo)
 * automated LLM evaluations, report generation, and several ready-to-use attack strategies
* [PyRIT by Microsoft](https://azure.github.io/PyRIT/)
 * Microsoft’s Python-based tool for AI red teaming and security testing
* [Socketteer](https://github.com/socketteer?tab=repositories)
 * allows AI models to interact, helping test conversational weaknesses

## Clustering

* [scikit-learn](https://github.com/scikit-learn/scikit-learn)
 * Python library including clustering through various algorithms, such as K-Means, DBSCAN, and hierarchical clustering
* [SpamAssassin by Apache](https://spamassassin.apache.org)
 * anti-spam platform that uses a variety of techniques, including text analysis, Bayesian filtering, and DNS blocklists, to classify and block unsolicited email
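
As a concrete taste of the clustering algorithms listed above, the sketch below runs scikit-learn's DBSCAN on a handful of toy 2-D points: two dense groups become clusters, and an isolated point is marked as noise. The coordinates and parameters are arbitrary illustrations, not tuned values.

```python
# DBSCAN groups points that have at least `min_samples` neighbors
# within radius `eps`; points in no dense region get the label -1.
import numpy as np
from sklearn.cluster import DBSCAN

points = np.array([
    [0.0, 0.0], [0.1, 0.0], [0.0, 0.1],  # first dense group
    [5.0, 5.0], [5.1, 5.0],              # second dense group
    [9.0, 0.0],                          # isolated outlier
])
labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(points)
print(labels)  # [ 0  0  0  1  1 -1]
```

In abuse work, the "points" are usually feature vectors or embeddings of messages and accounts, and the noise label is itself useful: content that clusters tightly with known spam campaigns is a stronger signal than a lone outlier.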

## Rules Engines

* [Druid by Apache](https://github.com/apache/druid)
 * high-performance real-time analytics database
* [Marble](https://github.com/checkmarble/marble)
 * real-time fraud detection and compliance engine tailored for fintech companies and financial institutions
* [Osprey by ROOST](https://github.com/roostorg/osprey)
 * high-performance rules engine for real-time event processing at scale, designed for Trust & Safety and anti-abuse work
* [RulesEngine by Microsoft](https://microsoft.github.io/RulesEngine/)
 * library for abstracting business logic, rules, and policies from a system via JSON for .NET language families
* [Wikimedia Smite Spam](https://github.com/wikimedia/mediawiki-extensions-SmiteSpam)
 * extension for MediaWiki that helps identify and manage spam content on a wiki
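
The engines above differ enormously in scale and features, but the core pattern is the same: declarative rules, each a condition plus an action, evaluated against incoming events. A stdlib-only sketch of that pattern, with made-up rule names and event fields:

```python
# Minimal rules-engine pattern: rules are data (name, predicate,
# action label), and evaluation is just filtering the rule list
# against an event. Field names here are invented for illustration.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]
    action: str

RULES = [
    Rule("rate-limit", lambda e: e.get("msg_per_min", 0) > 30, "throttle"),
    Rule("new-account-link",
         lambda e: e.get("account_age_days", 999) < 1 and e.get("has_link", False),
         "review"),
]

def evaluate(event: dict) -> list[str]:
    """Return the action of every rule the event triggers."""
    return [r.action for r in RULES if r.condition(event)]

print(evaluate({"msg_per_min": 50}))                        # ['throttle']
print(evaluate({"account_age_days": 0, "has_link": True}))  # ['review']
```

Keeping rules as data rather than code is what lets systems like RulesEngine load them from JSON and lets Trust & Safety teams ship policy changes without redeploying the service.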

## Review

* [BullMQ](https://github.com/taskforcesh/bullmq)
 * message queue and batch processing for NodeJS and Python based on Redis
* [NCMEC Reporting by ello](https://github.com/ello/ncmec_reporting)
 * Ruby client library for reporting incidents to the National Center for Missing & Exploited Children (NCMEC) CyberTipline
* [Owlculus](https://github.com/be0vlk/owlculus)
 * OSINT (Open-Source Intelligence) toolkit and case management platform
* [RabbitMQ](https://github.com/rabbitmq)
 * message broker that enables applications to communicate with each other by sending messages through queues

## Investigation

* [CIB MangoTree](https://github.com/CIB-Mango-Tree/CIB-Mango-Tree-Website)
 * collection of tools to aid researchers in coordinated inauthentic behavior (CIB) analysis
* [Crossover](https://crossover.social/)
 * open-source project that builds dashboards for monitoring and analyzing the recommendation algorithms of social networks, with a focus on disinformation and election monitoring
* [DAU Dashboard by Tattle](https://github.com/tattle-made/dau-dashboard)
 * Deepfake Analysis Unit (DAU), a collaborative space for analyzing deepfakes
* [Feluda by Tattle](https://github.com/tattle-made/feluda)
 * configurable engine for analysing multilingual and multimodal content
* [Interference by Digital Forensics Research Lab](https://github.com/DFRLab/interference2024)
 * interactive, open-source database that tracks allegations of foreign interference or foreign malign influence relevant to the 2024 U.S. presidential election
* [OpenMeasures](https://gitlab.com/openmeasures)
 * open source platform for investigating internet trends
* [ThreatExchange by Meta](https://github.com/facebook/ThreatExchange)
 * platform that enables organizations to share information about threats, such as malware, phishing attacks, and online safety harms, in a structured and privacy-compliant manner
* [ThreatExchange Client via PHP](https://github.com/certly/threatexchange)
 * PHP client for ThreatExchange
* [ThreatExchange via Python](https://github.com/facebook/ThreatExchange/tree/main/python-threatexchange)
 * Python library for ThreatExchange
* [TikTok Observatory](https://github.com/aiforensics/tkobservatory)
 * open-source project maintained by [AI Forensics](https://aiforensics.org/) that allows researchers to monitor the promotion and demotion of content by the TikTok recommendation algorithm

## Datasets

* [Aegis Content Safety by NVIDIA](https://huggingface.co/datasets/nvidia/Aegis-AI-Content-Safety-Dataset-2.0)
 * dataset created by NVIDIA to aid in content moderation and toxicity detection
* [Toxic Chat by LMSYS](https://huggingface.co/datasets/lmsys/toxic-chat)
 * dataset of toxic conversations collected from interactions with Vicuna
* [Toxicity by Jigsaw](https://huggingface.co/datasets/google/jigsaw_toxicity_pred)
 * large set of Wikipedia comments labeled by human raters for toxic behavior
* [Uli Dataset by Tattle](https://github.com/tattle-made/uli_dataset)
 * dataset of gendered abuse, created for Uli ML redaction
* [VTC by Unitary AI](https://github.com/unitaryai/VTC)
 * implementation of video-text retrieval with comments, including a dataset, a method for identifying relevant auxiliary information that adds context to videos, and a quantification of the value the comment modality brings to video

## Red Teaming Datasets

* [AI Alignment Dataset by Anthropic](https://atlas.nomic.ai/map/anthropic_rlhf)
 * data used for reinforcement learning with human feedback (RLHF) to align AI models
* [DEF CON Red Teaming Dataset](https://github.com/humane-intelligence/ai_village_defcon_grt_data)
 * dataset from DEF CON’s AI Village red teaming event
* [HackAPrompt Jailbreak Dataset](https://huggingface.co/datasets/hackaprompt/hackaprompt-dataset/viewer/default/train?p=1&row=137)
 * dataset for testing AI vulnerability to prompt-based jailbreaking
* [HiroKachi Jailbreak Dataset](https://sizu.me/love)
 * dataset focused on adversarial AI prompt attacks
* [Jailbreak Prompt Generator AI Model](https://huggingface.co/tsq2000/Jailbreak-generator)
 * AI model that generates jailbreak-style prompts
* [JailbreakHub by WalledAI](https://huggingface.co/datasets/walledai/JailbreakHub)
 * collection of jailbreak prompts and corresponding model responses
* [Red Team Resistance Leaderboard](https://huggingface.co/spaces/HaizeLabs/red-teaming-resistance-benchmark)
 * rankings of AI models based on resistance to adversarial attacks
* [Rentry Jailbreak Datasets](https://rentry.org/gpt0721)
 * collection of datasets related to jailbreak attempts on AI models
* [SidFeel Jailbreak Dataset](https://github.com/sidfeels/PromptsDB)
 * collection of prompts used for jailbreaking AI models

## Decentralized Platforms

* [Automod by Bluesky](https://github.com/bluesky-social/indigo/tree/main/automod)
 * tool for automating content moderation processes for the Bluesky social network and other apps on the AT Protocol
* [FediCheck](https://connect.iftas.org/library/iftas-documentation/fedicheck/)
 * domain moderation tool to assist ActivityPub service providers, such as Mastodon servers; now open-sourced
* [Fediverse Spam Filtering](https://github.com/MarcT0K/Fediverse-Spam-Filtering/)
 * spam filter for Fediverse social media platforms; the current version is only a proof of concept
* [FIRES](https://github.com/fedimod/fires)
 * reference server and protocol for the exchange of moderation advisories and recommendations
* [Ozone by Bluesky](https://github.com/bluesky-social/ozone)
 * labeling tool designed for Bluesky; includes moderation features to action on abuse flags, policy enforcement tools, and investigation features

## User Safety Tools

* [Frankly by Applied Social Media Lab](https://github.com/berkmancenter/frankly/)
 * online deliberations platform that allows anyone to host video-enabled conversations about any topic
* [PolicyKit by UW Social Futures Lab](https://github.com/policykit/policykit)
 * toolkit for building governance in your online community
* [SquadBox by UW Social Futures Lab](https://github.com/amyxzhang/squadbox)
 * tool to help people who are being harassed online by having their friends (or “squad”) moderate their messages
* [Uli by Tattle](https://github.com/tattle-made/Uli)
 * software and resources for mitigating online gender-based violence in India