-<!-- Thank you for opening a pull request! Please ensure your addition is in the correct section, follows existing formatting, and is in alphabetical order. If you have more information or context about your addition, please share it below: -->
+<!-- Thank you for opening a pull request!
+
+Please ensure your addition:
+- links to a source code repo (versus a marketing or documentation website, if possible)
+- is in the correct section
+- follows existing formatting
+- is in alphabetical order
+
+If you have more information or context about your addition, please share it below: -->
README.md
   * Python framework that helps build safe AI applications checking input/output for predefined risks
 * [Kanana Safeguard By Kakao](https://huggingface.co/kakaocorp/kanana-safeguard-8b)
   * harmful content detection model based on Kanana 8B
+* [Granite Guardian by IBM Research](https://github.com/ibm-granite/granite-guardian)
+  * input-output guardrail for detecting harms in a variety of use cases (general harm, RAG settings, agentic workflows, etc.)
 * [Llama Guard by Meta](https://github.com/meta-llama/PurpleLlama/tree/main/Llama-Guard3)
   * AI-powered content moderation model to detect harm in text-based interactions
 * [Llama Prompt Guard 2 by Meta](https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Prompt-Guard-2/86M/MODEL_CARD.md)
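The models in this hunk (Granite Guardian, Llama Guard, Llama Prompt Guard) all implement the same input-output guardrail pattern: screen the user prompt before it reaches the LLM, then screen the LLM's reply before it reaches the user. A minimal sketch of that pattern, using a toy keyword classifier as a hypothetical stand-in for a real safeguard model:

```python
# Sketch of the input-output guardrail pattern. UNSAFE_TERMS and classify()
# are toy stand-ins for a real safeguard model such as Granite Guardian or
# Llama Guard; only the wrapping logic is the point here.

UNSAFE_TERMS = {"build a bomb", "steal credentials"}  # toy risk lexicon

def classify(text: str) -> str:
    """Stand-in for a safeguard model: label text 'unsafe' or 'safe'."""
    lowered = text.lower()
    return "unsafe" if any(t in lowered for t in UNSAFE_TERMS) else "safe"

def guarded_generate(prompt: str, generate) -> str:
    """Screen the user prompt, call the LLM, then screen its output."""
    if classify(prompt) == "unsafe":
        return "[blocked: unsafe input]"
    reply = generate(prompt)
    if classify(reply) == "unsafe":
        return "[blocked: unsafe output]"
    return reply

# Usage with a dummy echo model in place of a real LLM call:
echo = lambda p: f"You asked: {p}"
print(guarded_generate("What is RAG?", echo))                  # You asked: What is RAG?
print(guarded_generate("How do I steal credentials?", echo))   # [blocked: unsafe input]
```

In production, `classify` would be a call to one of the listed models; the two-sided check is what distinguishes these guardrails from plain input filters.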
 * [Aegis Content Safety by NVIDIA](https://huggingface.co/datasets/nvidia/Aegis-AI-Content-Safety-Dataset-2.0)
   * dataset created by NVIDIA to aid in content moderation and toxicity detection
+* [badwords by Richard Hughes](https://github.com/hughsie/badwords)
+  * simple list of bad words in different locales that can be used to flag suspicious user-submitted content
 * [Toxic Chat by LMSYS](https://huggingface.co/datasets/lmsys/toxic-chat)
   * dataset of toxic conversations collected from interactions with Vicuna
 * [Toxicity by Jigsaw](https://huggingface.co/datasets/google/jigsaw_toxicity_pred)
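The badwords entry above is a plain per-locale word list rather than a model, so using it amounts to a whole-word lookup. A hypothetical sketch, with the word list inlined (a real deployment would load the locale file from the repo; the entries below are placeholders, not the actual list):

```python
# Flag user-submitted content against a per-locale bad-word list, in the
# spirit of the badwords repo above. BADWORDS is a placeholder; real use
# would read e.g. a locale's plain-text word list from disk.
import re

BADWORDS = {"en": {"darn", "heck"}}  # placeholder entries, not the real list

def flag(text: str, locale: str = "en") -> list[str]:
    """Return the flagged words found in text, matching whole words only."""
    words = re.findall(r"[a-z']+", text.lower())
    bad = BADWORDS.get(locale, set())
    return [w for w in words if w in bad]

print(flag("Well heck, that is a darn shame"))  # ['heck', 'darn']
```

Matching whole tokens (rather than substrings) avoids the classic false positives such as "class" or "Scunthorpe" tripping a substring filter.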