  * AI-powered content moderation model to detect harm in text-based interactions
* [Llama Prompt Guard 2 by Meta](https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Prompt-Guard-2/86M/MODEL_CARD.md)
  * Detects prompt injection and jailbreaking attacks in LLM inputs
* [OpenGuardrails](https://github.com/openguardrails/openguardrails)
  * Security gateway providing a transparent reverse proxy for OpenAI APIs with integrated safety protection
* [Purple Llama by Meta](https://github.com/meta-llama/PurpleLlama/tree/main/Llama-Guard3)
  * Set of tools to assess and improve LLM security, including Llama Guard, CyberSec Eval, and Code Shield
  * Tool for testing prompt injection vulnerabilities in AI systems
* [Promptfoo](https://github.com/promptfoo/promptfoo)
  * Automated LLM evaluations, report generation, and several ready-to-use attack strategies
* [PyRIT](https://github.com/Azure/PyRIT)
  * Microsoft’s Python-based tool for AI red teaming and security testing
* [Socketteer](https://github.com/socketteer?tab=repositories)
  * Lets AI models interact with each other, helping to test conversational weaknesses
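To make concrete what the injection scanners above automate, here is a minimal, hypothetical sketch of a pattern-based input filter. This is a toy heuristic for illustration only, not the implementation of any listed tool: real detectors such as Llama Prompt Guard use trained classifiers with far broader coverage, and the phrase list below is an invented example.

```python
import re

# Hypothetical phrase list: a tiny sample of the attack patterns that
# real prompt-injection scanners cover via trained models, not regexes.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"you are now (in )?developer mode", re.IGNORECASE),
    re.compile(r"reveal your system prompt", re.IGNORECASE),
]


def looks_like_injection(text: str) -> bool:
    """Return True if the input matches any known injection phrase."""
    return any(p.search(text) for p in INJECTION_PATTERNS)


print(looks_like_injection("Please ignore previous instructions."))  # True
print(looks_like_injection("Summarize this article for me."))        # False
```

A gateway such as the ones listed above would run a (much stronger) check like this on every inbound prompt and block or flag matches before they reach the model.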