Mirror of https://github.com/roostorg/coop

docs: Clean up Content Safety API application details (#23)

Moves the explanatory/disclaimer-y text to a footnote; uses more descriptive link names

Authored by Cassidy James Blaede; committed by GitHub (0408de5b, ee002e6e)

+4 -3
docs/SIGNALS.md
```diff
@@ -88,15 +88,16 @@
 | Integration | Signals | Configuration |
 | :---- | :---- | :---- |
 | **Moderation API by OpenAI** | There are two models you can use with this endpoint: **omni-moderation-latest:** This model and all snapshots support more categorization options and multi-modal inputs. <br> <br> **text-moderation-latest (Legacy):** Older model that supports only text inputs and fewer input categorizations. The newer omni-moderation models will be the best choice for new applications. | OpenAI API key |
-| **Content Safety API by Google** | V0: image classification | Content Safety API Key <br> <br> Industry and civil society third parties seeking to protect their platform against abuse can sign up to access the Content Safety API. Applications are subject to approval. You can submit an interest form through Google’s Child Safety Toolkit program [here](https://protectingchildren.google/toolkit-interest-form/?roost-coop). |
+| **Content Safety API by Google** | V0: image classification | Content Safety API Key[^csapi] |
+
+[^csapi]: Industry and civil society third parties seeking to protect their platform against abuse can sign up to access the Content Safety API. Applications are subject to approval. You can submit an interest form through [Google’s Child Safety Toolkit program](https://protectingchildren.google/toolkit-interest-form/?roost-coop).
 
 #### Moderation API by OpenAI
 Use the [moderations endpoint](https://platform.openai.com/docs/guides/moderation) to check whether text or images are potentially harmful. If harmful content is identified, you can take corrective action, like filtering content or intervening with user accounts creating offending content. The moderation endpoint is free to use.
 
-
 #### Content Safety API by Google
 
-The Content Safety API is an AI classifier which issues a Child Safety prioritization recommendation on content sent to it. Content Safety API users must conduct their own manual review in order to determine whether to take action on the content, and comply with applicable local reporting laws. Apply for API keys [HERE](https://protectingchildren.google/toolkit-interest-form/?roost-coop) and mention in your application that you are using the Coop review tool. Upon reviewing your application, Google will be back in touch shortly to take the application forward if you qualify.
+The Content Safety API is an AI classifier which issues a Child Safety prioritization recommendation on content sent to it. Content Safety API users must conduct their own manual review in order to determine whether to take action on the content, and comply with applicable local reporting laws. [Apply for an API key](https://protectingchildren.google/toolkit-interest-form/?roost-coop) and mention in your application that you are using the Coop review tool. Upon reviewing your application, Google will be back in touch shortly to take the application forward if you qualify.
 
 The API accepts a list of raw image bytes. The supported file types are listed below:
 
```
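As a rough illustration of the moderations endpoint the diffed docs describe, here is a minimal Python sketch using only the standard library. The helper names (`build_moderation_payload`, `moderate`) are illustrative, not part of Coop or any OpenAI SDK; the endpoint URL and request shape follow OpenAI's moderation guide.

```python
import json
import urllib.request

OPENAI_MODERATIONS_URL = "https://api.openai.com/v1/moderations"


def build_moderation_payload(text: str, model: str = "omni-moderation-latest") -> dict:
    """Build the JSON body for OpenAI's moderations endpoint."""
    return {"model": model, "input": text}


def moderate(text: str, api_key: str) -> dict:
    """POST text to the moderations endpoint and return the parsed response.

    The response's results[0]["flagged"] indicates whether any category fired.
    """
    req = urllib.request.Request(
        OPENAI_MODERATIONS_URL,
        data=json.dumps(build_moderation_payload(text)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

A flagged result would then drive whatever corrective action the docs mention, such as filtering the content or escalating the account for review.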