  * Python framework that helps build safe AI applications by checking input and output for predefined risks
* [Kanana Safeguard by Kakao](https://huggingface.co/kakaocorp/kanana-safeguard-8b)
  * harmful content detection model based on Kanana 8B
* [Granite Guardian by IBM Research](https://github.com/ibm-granite/granite-guardian)
  * an input-output guardrail for detecting harms in a variety of use cases (general harm, RAG settings, agentic workflows, etc.)
* [Llama Guard by Meta](https://github.com/meta-llama/PurpleLlama/tree/main/Llama-Guard3)
  * AI-powered content moderation model to detect harm in text-based interactions
* [Llama Prompt Guard 2 by Meta](https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Prompt-Guard-2/86M/MODEL_CARD.md)
  * classifier model for detecting prompt attacks such as prompt injection and jailbreak attempts