  * AI-powered content moderation model to detect harm in text-based interactions
* [Llama Prompt Guard 2 by Meta](https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Prompt-Guard-2/86M/MODEL_CARD.md)
  * Detects prompt injection and jailbreaking attacks in LLM inputs
* [OpenGuardrails](https://www.openguardrails.com/)
  * Security gateway providing a transparent reverse proxy for OpenAI APIs with integrated safety protection
* [Purple Llama by Meta](https://github.com/meta-llama/PurpleLlama/tree/main/Llama-Guard3)
  * Set of tools to assess and improve LLM security, including Llama Guard, CyberSec Eval, and Code Shield
* [RoGuard](https://github.com/Roblox/RoGuard-1.0)
  * LLM that helps safeguard open-ended text generation on Roblox
* [ShieldGemma by Google DeepMind](https://www.kaggle.com/code/fernandosr85/shieldgemma-web-content-safety-analyzer?scriptVersionId=198456916)
  * AI safety toolkit designed to help detect and mitigate harmful or unsafe outputs in LLM applications
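As a general illustration of the pattern these guard models share, a deployment typically classifies each user input before it reaches the main LLM and blocks flagged requests. Below is a minimal sketch; the `guard_check` keyword heuristic and both function names are hypothetical placeholders standing in for a real trained classifier such as Llama Guard or Prompt Guard, not the API of any tool listed above.

```python
def guard_check(text: str) -> bool:
    """Hypothetical stand-in for a guard model: return True if the
    input looks unsafe. Real deployments call a trained classifier,
    not keyword rules like these."""
    suspicious = (
        "ignore previous instructions",
        "disregard your system prompt",
    )
    lowered = text.lower()
    return any(phrase in lowered for phrase in suspicious)


def moderated_completion(user_input: str) -> str:
    """Route input through the guard before the (omitted) LLM call."""
    if guard_check(user_input):
        return "Request blocked by safety filter."
    # ... forward user_input to the actual LLM here ...
    return "LLM response placeholder"
```

The same check can be mirrored on the model's output before it is returned to the user, which is how output-filtering tools such as ShieldGemma are typically positioned.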

## Privacy Protection