Mirror of https://github.com/roostorg/coop
# Coop User Guide

The Coop UI has several pages accessible from a left-hand menu:

* Overview
* Automated Enforcement
* Policies
* Review Console

There is also a bottom menu with buttons for logging out, managing settings, and viewing your profile.

## Dashboard

The Overview dashboard provides top-level metrics for Coop, filterable by hourly or daily breakdowns across a window of time, including:

* Total actions taken
* Jobs pending review
* Automated vs. manual actions
* Top policy violations
* Decisions per moderator
* Actions per rule (if rules are enabled)
* Count of violations by Policy
## Settings

### Configuring Items and Actions

Item Types represent the different types of entities on your platform. For example, if you've built a social network that allows users to create profiles, upload posts, and comment on other users' posts, then your Item Types might be **Profile**, **Post**, **Comment**, and **Comment Thread**. If you've built a marketplace platform, your Item Types might be **Buyer**, **Seller**, **Product Listing**, **Product Review**, **Direct Message**, **Transaction**, etc. Every Item you send Coop needs to be an instance of exactly one of these Item Types.

When creating an Item Type, define its schema: the fields each Item includes and which of them are shown to reviewers. These fields are also available in any rule logic, where you can combine them with signals for routing or automation.
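To make this concrete, here is a hypothetical Item Type definition for the social-network example. The schema format and field names below are illustrative, not Coop's exact configuration syntax:

```python
# Hypothetical "Comment" Item Type for the social-network example.
# The schema shape shown here is illustrative only.
comment_item_type = {
    "name": "Comment",
    "fields": [
        {"name": "id", "type": "string", "required": True},
        {"name": "text", "type": "string", "required": True},
        # A structured reference connecting the comment to its author.
        {"name": "creatorId", "type": "related_item", "itemType": "User"},
        {"name": "createdAt", "type": "datetime", "required": False},
    ],
}

# Fields declared in the schema are rendered to reviewers and can be
# referenced in rule logic (e.g. matching "text" against a keyword bank).
reviewable_fields = [f["name"] for f in comment_item_type["fields"]]
```

Declaring a related-item field such as `creatorId` is what lets Coop connect a piece of content back to the account that created it.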

Actions represent operations you can perform on Items. Common examples include Label, Send Warning, Delete, Ban, Mute, Send to Manual Review, Approve, etc.

For every Action you define in Coop, you have to expose that action through an API endpoint that can receive requests from Coop. Whenever your rules determine that some Item should receive an Action, Coop sends a POST request to the Action's API endpoint. When your server receives that POST request, your code should actually perform the corresponding action.
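A minimal sketch of the receiving side might look like this. The payload fields (`itemId`, `actionId`) and the `X-Api-Key` header name are assumptions for illustration, not Coop's documented request shape:

```python
import json

EXPECTED_API_KEY = "your-coop-api-key"  # generated in the Coop UI

def handle_coop_action(headers: dict, body: bytes) -> int:
    """Handle a POST from Coop for a custom "Delete" Action.

    The payload fields and header name here are illustrative
    assumptions, not Coop's documented contract. Returns an
    HTTP status code.
    """
    # Reject requests that don't carry the shared API key.
    if headers.get("X-Api-Key") != EXPECTED_API_KEY:
        return 401

    payload = json.loads(body)
    item_id = payload["itemId"]
    action_id = payload.get("actionId")
    # Actually perform the action on your platform, e.g.:
    # database.delete_post(item_id)
    print(f"Performing {action_id} on item {item_id}")
    return 200
```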

Coop uses an API key to authenticate these requests. Use the UI to generate an API key; Coop includes it on any requests it makes to your organization's endpoints, and your server should verify it.

### Integrations

Coop comes with pre-built integrations to common software used for online safety. Add your API key to enable integrations like OpenAI's Moderation API or Google's Content Safety API, and set up your instance of Meta's Hasher-Matcher-Actioner.

### User Management

Coop uses role-based access controls to make sure the right people can access and view the right data. You can use the UI to invite more users, then either copy the invite link for them to sign up with an account or set up an email service to email the link to the invited user.

### User Roles

Coop comes with 7 predefined roles that can be further customized:

| User Role | Access Manual Review Tool | View all Queues | Create, Delete and Edit Queues | Create, Delete and Edit Rules | Access NCMEC data | Access Insights |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| Admin | Yes | Yes | Yes | Yes | Yes | Yes |
| Moderator Manager | Yes | Yes | Yes | No | Yes | Yes |
| Analyst/Rules Manager | No | Yes | No | Yes | No | Yes |
| Child Safety Moderator | Yes | No | No | No | Yes | No |
| Moderator | Yes | No | No | No | No | No |
| External Moderator | Yes | No | No | No | No | No |

**Admin**
Admins manage their entire organization. They have full control over all of the organization's resources and settings within Coop.

**Analyst**
Analysts can view metrics for all Rules, create or edit Draft and Background Rules, and run Backtests. They cannot create or edit Live Rules, run Retroaction on Live Rules, or edit any other resources (Actions, Item Types, Signals, other Users, etc.). In short, they can experiment with Background Rules and view Rule metrics, but cannot affect any Live Rules or other features that actually mutate your data.

**Child Safety Moderator**
Child Safety Moderators have the same permissions as Moderators, but they are also able to review Child Safety jobs and can see previous Child Safety decisions.

**External Moderator**
External Moderators can only review jobs in the Manual Review tool. They cannot see any decisions or use any other tooling.

**Moderator**
Moderators can view the Manual Review tool, but are only able to review jobs from queues that they've been given permission to see. They can also view overall Manual Review metrics. They cannot see any Child Safety-related jobs or decisions.

**Moderator Manager**
Moderator Managers can view and edit queues within the Manual Review tool. They have full control over the permissions that moderators have, and over the Routing Rules that determine how to route each incoming job to the right queue.

**Rules Manager**
Rules Managers can create, edit, and deploy Rules, and they can view all metrics related to Rules. They cannot create, edit, or delete other organization-level settings, including Actions, Item Types, Manual Review Queues, or other Users in the organization.

### NCMEC Reporting Settings

If your organization submits reports to the NCMEC CyberTipline, configure NCMEC reporting under **Settings → NCMEC** (or `/dashboard/settings/ncmec`). These settings are used when building and submitting CyberTip reports.

| Setting | Required | Description |
|--------|----------|-------------|
| **Username** | Yes | Your NCMEC CyberTipline API username. |
| **Password** | Yes | Your NCMEC CyberTipline API password. |
| **Company Report Name** | Yes | Your organization name as it appears in NCMEC reports. This value is also sent as the reporter's product/service name (ESP service) for the reported user in each report. |
| **Legal URL** | Yes | URL to your Terms of Service or legal policies (e.g. `https://yourcompany.com/terms`). |
| **Contact Email** | No | Email for the reporting person on the CyberTip report. The XML receipt from NCMEC can serve as the ESP notification. |
| **Terms of Service** | No | Optional TOS relevant to the incident being reported, or a URL to your acceptable use policy. |
| **Contact person (for law enforcement)** | No | Optional person law enforcement can contact (other than the reporting person): first name, last name, email, phone. All fields optional. |
| **More Info URL** | No | Optional URL for additional information (e.g. `https://yourcompany.com/ncmec-info`). |
| **Default NCMEC queue** | No | When reviewers choose "Enqueue to NCMEC," jobs are sent to this manual review queue. Leave as "Use org default queue" to use the organization's default queue. |
| **Default internet detail type** | No | Incident context (channel/medium) for CyberTip reports: Web page, Email, Newsgroup, Chat/IM, Online gaming, Cell phone, Non-internet, or Peer-to-peer. When set, each report includes this in `internetDetails`. For "Web page," the More Info URL is used if set. |
| **NCMEC Preservation Endpoint** | No | Optional webhook URL for NCMEC preservation requests after a report is submitted. Your service can use this to preserve user data as required. |
| **NCMEC Additional Info Endpoint** | No | Optional webhook URL. When building a report, Coop calls this endpoint to request additional information (e.g. user email, screen name, IP capture events) for the reported users and media. If not set, reports use minimal defaults (e.g. user ID as screen name). |

Saving credentials and the required fields (Company Report Name, Legal URL) enables NCMEC reporting for the organization. Reporting only occurs when reviewers submit a report from the NCMEC Review queue.

### SSO

Coop supports SSO through Okta SAML only. This section explains how to configure it.

**Prerequisites**

To configure Okta SAML SSO, you must:

* Be in Admin mode in Okta.
* Have group names that match exactly between Okta and SAML.
* Have admin permissions in Coop.
* Have the ability to create a custom SAML application.

**Configuration**

1. Create a [custom SAML application](https://help.okta.com/oag/en-us/content/topics/access-gateway/add-app-saml-pass-thru-add-okta.htm) in Okta. Use the following settings.

   | Setting | Value |
   | :------ | :---- |
   | Single sign-on URL | Your organization's callback URL (e.g. `https://your-coop-instance.com/login/saml/12345/callback`). You can find your callback link in Coop under **Settings → SSO**. |
   | Audience URI (SP Entity ID) | Your Coop instance base URL (e.g. `https://your-coop-instance.com`). |
   | `email` attribute (in **Attribute Statements**) | `email`. This field depends on your Identity Provider's attribute mappings (e.g. Google SSO may use "Primary Email"). |

2. In the **Feedback** tab, check **I'm a software vendor. I'd like to integrate my app with Okta**.
3. In your app's settings, go to the **Sign On** tab. Under **SAML Signing Certificates → SHA-2**, click **Actions → View IdP metadata**.
4. Copy the contents of the XML file. In Coop, go to **Settings → SSO** and paste the XML into the **Identity Provider Metadata** field.
5. On the same page, enter `email` in the **Attributes** section.
6. In your Okta app under **Assignments**, assign users or groups to your app.

### Wellness and Safety

Reviewer safety and well-being are critical. Trust & Safety is an incredibly difficult field of work, and it can take a severe mental toll, especially on the moderators who are on the front line, reviewing disturbing content for hours on end every day.

That's why Coop includes customizable settings to prioritize reviewer safety. These come in two forms:

1. **Company-wide Safety Settings:** If you are an Admin in your platform's Coop organization, you can set the default safety settings for every employee at your company who has their own login credentials. These are configured in your Employee Safety settings page.

2. **Personal Safety Settings:** Any user can customize their own personal safety settings in their account settings page. These override the default, company-wide safety settings and allow users to create the best experience for themselves.

For both levels of safety settings, you can customize the following properties:

1. **Image & video blur strength:** You can configure whether images and videos are always blurred by default, along with the strength of the blur. When images are blurred within Coop, hovering over the image with your mouse will unblur the image, and moving your mouse outside the image will blur it again. When videos are blurred within Coop, playing the video will unblur it.
2. **Image & video grayscale:** You can decide whether images and videos are displayed in grayscale or in full color.
3. **Video muted by default:** You can ensure videos are always muted by default, even if your device's volume is on.

## Policies

Policies are categories of harm that are prohibited or monitored on your platform. Some typical examples include Spam, Nudity, Fraud, Violence, etc. Policies can have sub-policies underneath them, so the Spam policy could have sub-policies like Commercial Spam, Repetitive Content, Fake Engagement, Scams and Phishing, etc., all of which are specific types of Spam that could occur on your platform.

It is often useful (and in some cases, required by legislation) to tie every Action you take to one or more specific Policies. For example, you could Delete a comment under your Nudity policy, or you could Delete it under your Spam policy. Coop allows you to track those differences and measure how many Actions you've taken for each Policy. That way, you can see how effectively you're enforcing each Policy over time, identify Policies for which your enforcement is poor or degrading, and report performance metrics to your team (or to regulators).

Policies added in Coop's UI are visible to reviewers directly in the review flow, so they can easily reference policies and enforcement guidelines.

Learn more about Policies from the [Trust & Safety Professional Association](https://www.tspa.org/curriculum/ts-fundamentals/policy/policy-development/).


## Manual Review

### Queues

Coop uses [Queues](https://en.wikipedia.org/wiki/Queue_\(abstract_data_type\)) (i.e. ordered containers) to organize tasks. When you create a task in Coop, the task enters a Queue and waits to be reviewed. When a user visits the Queue dashboard and clicks "Start Reviewing" on a particular Queue, Coop pulls the oldest task in the Queue so the user can review it. After the user makes a decision on a task, they automatically see the next-oldest task until the Queue is empty or the user stops. Coop automatically ensures that two moderators never receive the same job, avoiding duplicate work.
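Conceptually, the claim-next-task behavior can be sketched like this (a toy model, not Coop's actual implementation):

```python
from collections import deque

class ReviewQueue:
    """Toy model of Coop's queue semantics: oldest task first, and a
    task claimed by one moderator is never handed to another."""

    def __init__(self, tasks):
        self._pending = deque(tasks)   # FIFO: oldest task at the left
        self._claimed = {}             # task -> moderator who holds it

    def claim_next(self, moderator: str):
        if not self._pending:
            return None                # queue is empty
        task = self._pending.popleft() # pop the oldest unclaimed task
        self._claimed[task] = moderator
        return task

q = ReviewQueue(["job-1", "job-2"])
```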

Queues can be starred (per user) to pin them to the top of the review console, and each queue shows its number of pending jobs.

You may want to restrict which Actions can be triggered from a particular Queue. When you create or edit a Queue, you can configure which Actions are hidden for that Queue through its "Hidden Actions" setting.

### Task View

The task (aka Job) view shows information about the flagged content or actor being reviewed. Each task has its own hyperlink and can be shared with anyone at your organization who is allowed to access Coop.

When a user reviews a task, Coop shows as much relevant information about the task and its corresponding Item as possible. As a baseline, the Item's fields are rendered, and depending on the Item's type (e.g. whether it's a DM, Comment, Profile, etc.), Coop will try to render additional relevant information and surrounding context, including:

* The user account associated with the content
* Additional content associated with the same user

Default actions include:

* Ignore (the task is closed with no action taken)
* Enqueue to NCMEC (moves the task to the NCMEC review queue and converts it into a User-type review, aggregating all media associated with the user)
* Move (moves the task to another existing queue)

Any Actions you configure will also show up. You can hide specific actions in specific queues when creating or editing a queue.

## Investigation

You can use the Investigation tool (either plug in the unique ID of an Item or click through from a manual review task) to see more information about it:

* The Item itself, including all relevant attributes and metadata Coop has on that Item.
* The user who created the Item (if applicable), and all the attributes and metadata Coop has on that user.
* The details and full context of any previous actions taken on the Item and on the user who created it.
* Other Items that are closely related to the main Item you're investigating. For example, if you're investigating a single comment within a larger comment thread, we'll show you preceding and subsequent comments in the thread.

**Taking action from Investigation:** You can also manually take action on the Item as you're investigating it. Use the **"Take action on this item"** form (above the results): select an action, add a policy if required, then click **Submit Actions**. You can do this without being in a Review Queue, for example to unban a user after a ban was applied earlier.

**Reversing an action (e.g. unbanning):** Coop has no built-in undo. To reverse a decision, create a separate custom Action in Settings that calls your platform's reverse endpoint (e.g. unban), if one exists. Run that Action on the Item from the Investigation tool or by navigating from the Recent Decision Log. When you take the Action, Coop's callback sends a request to your platform to perform the required changes.

## Automated Enforcement

Read more about rules in Coop in the [rules doc](RULES.md#automated-action-rules).

### Matching Banks

A Matching Bank is a way to keep track of a large list of values that you want to check for matches. For example, you could create a Matching Bank that holds 10,000 keywords, and you can check every new post or comment to see if its text matches any of the 10,000 banned keywords in the Matching Bank. You can build Rules that reference your Matching Banks, so you don't have to list out all 10,000 banned keywords every time you create a Rule that checks for matches against those keywords.

You can also use regular expressions in rules, for example a regex that checks whether a URL has been shortened.
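For example, a sketch of such a check in Python (the shortener domains listed are a small illustrative sample, not a complete bank):

```python
import re

# Matches URLs whose host is a well-known URL shortener. The domain
# list here is a small illustrative sample, not an exhaustive bank.
SHORTENED_URL = re.compile(
    r"https?://(?:www\.)?(?:bit\.ly|t\.co|tinyurl\.com|goo\.gl)/\S+",
    re.IGNORECASE,
)

def has_shortened_url(text: str) -> bool:
    """Return True if the text contains a link to a known shortener."""
    return bool(SHORTENED_URL.search(text))
```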

These banks can then be used as signals in both automatic enforcement rules and routing rules for manual review.

#### Hash Banks

Coop integrates with [hasher-matcher-actioner (HMA)](https://github.com/facebook/ThreatExchange/tree/main/hasher-matcher-actioner/), providing a configurable way to match known CSAM, non-consensual intimate imagery, terrorist and violent extremist content, and any internal hash banks you maintain. Setup requires API credentials from supported hash databases like NCMEC and StopNCII. Read more in the [HMA documentation](https://github.com/facebook/ThreatExchange/tree/main/hasher-matcher-actioner/docs).

HMA matches are available as signals in Coop's signal library.

##### Setting Up HMA

How you set things up depends on your use case:

* If items are submitted by user reports (`POST /api/v1/report`): no enforcement rule is needed. Reported items are automatically enqueued to the Manual Review tool, and routing rules will direct them to the right queue. Simply create a routing rule with the image hash condition and your target queue.

* If items are submitted via the items API (`POST /api/v1/items/async/`) and you want Coop to proactively flag matches without a user report: you need an automated enforcement rule with the image hash condition and a "Send to Manual Review" action. Optionally pair it with a routing rule to direct matches to a specific queue (otherwise they go to the default queue).
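For reference, here is a sketch of submitting a user report to the first endpoint above. The endpoint path comes from this section; the payload fields and `X-Api-Key` header are illustrative assumptions, so check Coop's API reference for the real request shape:

```python
import json
from urllib import request

COOP_BASE_URL = "https://your-coop-instance.com"  # your deployment
API_KEY = "your-coop-api-key"                     # generated in the UI

def build_report_request(item_id: str, item_type: str, reason: str):
    """Build (but do not send) a user-report submission for
    POST /api/v1/report. Payload fields here are assumptions."""
    payload = {"itemId": item_id, "itemTypeId": item_type, "reason": reason}
    return request.Request(
        f"{COOP_BASE_URL}/api/v1/report",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", "X-Api-Key": API_KEY},
        method="POST",
    )

req = build_report_request("post-123", "Post", "suspected hash match")
# urllib.request.urlopen(req) would actually submit the report.
```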

##### Managing Hash Banks

Banks created directly in HMA (e.g. via the HMA UI or seed scripts) will not appear in Coop's Matching Banks UI unless they are also registered in the `hash_banks` table. The recommended approach is to create banks through the Coop UI (Settings → Matching Banks), which registers the bank in both HMA and Coop's database automatically.

Banks created through Coop are named in HMA using the convention `COOP_<ORGID>_<NORMALIZED_NAME>`; for example, a bank named "Test Bank" for org e7c89ce7729 becomes `COOP_E7C89CE7729_TEST_BANK` in HMA. This is what you will see in the HMA UI. You can use the HMA UI to manually add content to a bank for local testing.
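The naming convention can be expressed as a small helper. The exact normalization Coop applies is not spelled out here; this sketch (uppercase, with runs of non-alphanumeric characters collapsed to underscores) reproduces the documented example:

```python
import re

def hma_bank_name(org_id: str, bank_name: str) -> str:
    """Derive the HMA-side name for a bank created through Coop.

    Follows the documented COOP_<ORGID>_<NORMALIZED_NAME> convention;
    the normalization shown (uppercase, non-alphanumerics collapsed to
    underscores) is an assumption that matches the documented example.
    """
    normalized = re.sub(r"[^A-Za-z0-9]+", "_", bank_name).strip("_").upper()
    return f"COOP_{org_id.upper()}_{normalized}"

# → "COOP_E7C89CE7729_TEST_BANK"
print(hma_bank_name("e7c89ce7729", "Test Bank"))
```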

#### Location Banks

A Location Matching Bank holds a list of [geohashes](https://en.wikipedia.org/wiki/Geohash) or [Google Maps Places](https://developers.google.com/maps/documentation/places/web-service), which are two types of references to geographical locations. This can be helpful if you have a list of locations where you want to apply a different set of Rules. For example, you can apply stricter sexual content Rules on college and high school campuses than you do elsewhere.
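Geohashes nest by prefix (a longer geohash lies inside the cell named by any of its prefixes), so a simple way to model bank matching is a prefix check. This is an illustrative sketch, not Coop's implementation:

```python
def location_matches(item_geohash: str, bank: set) -> bool:
    """Check an item's geohash against a Location Bank.

    Geohashes nest by prefix: "9q8yyk8" lies inside the cell "9q8y".
    This sketch treats a bank entry as a match when it is a prefix of
    the item's geohash.
    """
    return any(item_geohash.startswith(entry) for entry in bank)

# Hypothetical geohash cells covering campus areas.
campus_bank = {"9q8y", "dr5r"}
```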

## Appeals

When you make a decision that affects a user on your platform, you may want to give them the ability to "appeal" your decision (and the EU's [Digital Services Act](https://eur-lex.europa.eu/legal-content/en/TXT/?uri=COM%3A2020%3A825%3AFIN) requires this for many platforms). If a user appeals a moderation decision, you'll send that Appeal to Coop's Appeal API so that a moderator can review the Appeal and decide whether or not to overturn the original decision.

Reports and appeals can be closely related. For example:

* User A can report User B's profile, which you would send to Coop through the Report API.
* If a moderator suspends User B, then User B may appeal that suspension decision, which you would send to Coop through the Appeal API.
* A different moderator can then accept the Appeal and overturn the suspension, or they can reject the Appeal and sustain the suspension.
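The shape of an Appeal submission might look like the following. The field names are hypothetical (this guide names the Appeal API but not its payload), so consult Coop's API reference for the actual contract:

```python
import json

def build_appeal(actioned_item_id: str, appeal_text: str) -> bytes:
    """Build a hypothetical Appeal payload for Coop's Appeal API.

    The field names below are illustrative assumptions, not Coop's
    documented wire format.
    """
    return json.dumps({
        "itemId": actioned_item_id,   # the item the decision was made on
        "appealReason": appeal_text,  # the appealing user's own words
    }).encode()

body = build_appeal("profile-42", "My profile does not violate the rules.")
```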

## Recent Decision Log

Coop logs all actions taken in a Recent Decision Log that includes basic information like the decision(s) taken on each Job and the user who made the decision.

You can also see the full Job itself if you decide you want to investigate further, take additional Actions, or overturn a previous Decision.

## NCMEC Review and Reporting

Coop is integrated with the [CyberTip Reporting API](https://report.cybertip.org/ispws/documentation) from the National Center for Missing and Exploited Children (NCMEC). Head to [NCMEC Reporting](NCMEC.md) for more information.

### Prerequisites

In order to review accounts and content to report to NCMEC, you must have:

1. NCMEC API credentials — a username and password for the CyberTip API, obtained from NCMEC directly by [registering as an Electronic Service Provider](https://esp.ncmec.org/registration).
2. A User Item Type that associated content references through a RELATED_ITEM field (often the `creatorId` field). This field stores a structured reference to the User Item; NCMEC-type jobs extract the user identifier from this structured reference to look up the full user in the Item Investigation Service.
3. NCMEC org settings configured. These are set via the Settings page (Admin only): API credentials, Company Report Name, Legal URL, and Contact Email.
4. A dedicated NCMEC manual review queue called "NCMEC Review". Coop uses a `default_ncmec_queue_id` setting to route NCMEC jobs. Queue IDs registered as production queues submit real CyberTips; all others use the NCMEC test environment.
5. An Additional Info endpoint (optional but recommended): a signed webhook Coop calls before submitting a CyberTip to retrieve user email addresses, screen names, IP capture events, and per-media metadata. If not configured, Coop submits with minimal user data, which can make reports less actionable.
6. A Preservation endpoint (optional): a webhook Coop calls after a successful CyberTip submission with the report ID, so you can preserve relevant data per NCMEC requirements.
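As a sketch, an Additional Info endpoint might respond with a body like the one built below. The field names mirror the kinds of data listed above (email, screen name, IP capture events) but are illustrative, not the exact wire format:

```python
import json

def additional_info_response(user_id: str) -> bytes:
    """Build an example response body for the Additional Info webhook.

    Coop calls your endpoint while building a CyberTip report; the
    field names below are illustrative assumptions, not the exact
    wire format.
    """
    info = {
        "userId": user_id,
        "email": "user@example.com",        # improves report actionability
        "screenName": "user_display_name",
        "ipCaptureEvents": [
            {"ipAddress": "203.0.113.7",    # documentation-range IP
             "eventName": "LOGIN",
             "dateTime": "2024-01-01T00:00:00Z"},
        ],
    }
    return json.dumps(info).encode()
```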

When a reviewer enqueues content to NCMEC review, Coop automatically aggregates all media associated with the user and converts the job into an NCMEC-type job. This creates one detailed NCMEC report about a single user, rather than multiple NCMEC reports for multiple pieces of content from the same user.

The NCMEC job UI includes:

* Incident Type category (from the [Reporting API](https://report.cybertip.org/ispws/documentation/index.html#incident-summary)). Values include:
  * Child Pornography (possession, manufacture, and distribution)
  * Child Sex Trafficking
  * Child Sex Tourism
  * Child Sexual Molestation
  * Misleading Domain Name
  * Misleading Words or Digital Images on the Internet
  * Online Enticement of Children for Sexual Acts
  * Unsolicited Obscene Material Sent to a Child
* [Industry categorization](https://report.cybertip.org/ispws/documentation/index.html#incident-summary) (a categorization from the [ESP-designated categorization scale](https://technologycoalition.org/wp-content/uploads/Tech_Coalition_Industry_Classification_System.pdf)):
  * A1
  * A2
  * B1
  * B2
* [Labels (file annotations)](https://report.cybertip.org/ispws/documentation/index.html#file-annotations):
  * **animeDrawingVirtualHentai**: The file depicts anime, drawing, cartoon, virtual or hentai content.
  * **potentialMeme**: The file is being shared/posted out of mimicry or other seemingly non-malicious intent.
  * **viral**: The file is circulating rapidly from one user to another.
  * **possibleSelfProduction**: The file contains content that is believed to be self-produced.
  * **physicalHarm**: The file depicts an intentional act of causing physical injury or trauma to a person.
  * **violenceGore**: The file depicts graphic violence, including but not limited to acts of brutality or detailed or vivid gruesomeness.
  * **bestiality**: The file involves an animal.
  * **liveStreaming**: The file depicts content that was streamed live at the time it was uploaded.
  * **infant**: The file depicts an infant.
  * **generativeAi**: The file contains content that is believed to be Generative Artificial Intelligence.