  * BERT-based model for detecting toxic content in prompts to language models

## AI for Safety

* [Guardrails AI](https://github.com/guardrails-ai/guardrails)
  * Python framework that helps build safe AI applications by checking model inputs and outputs against predefined risks (see the usage sketch below)
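
A minimal sketch of how Guardrails AI is commonly wired up: a `Guard` is configured with a validator and then used to check text before it is returned to the user. It assumes the `ToxicLanguage` validator has been installed from the Guardrails Hub; the threshold, `on_fail` policy, and example string are illustrative assumptions, not taken from the project docs.

```python
# Minimal sketch, assuming:
#   pip install guardrails-ai
#   guardrails hub install hub://guardrails/toxic_language
from guardrails import Guard
from guardrails.hub import ToxicLanguage

# Guard that checks text for toxic language, sentence by sentence.
# threshold and on_fail values here are illustrative assumptions.
guard = Guard().use(
    ToxicLanguage,
    threshold=0.5,
    validation_method="sentence",
    on_fail="exception",  # raise if the check fails
)

# Validate an LLM output before passing it on; with on_fail="exception",
# toxic text raises instead of reaching this print.
result = guard.validate("Thanks for the question, here is a short summary.")
print(result.validation_passed)
```

The same `Guard` object can wrap the LLM call itself, so the check runs on every generation rather than being invoked manually per output.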