Mirror of https://github.com/roostorg/osprey github.com/roostorg/osprey
explicitly bind to localhost in docker compose (#124)

authored by hailey, committed by GitHub
2e977211 5210214d

+22 -20
+5 -3
README.md
···
107 107
108 108   alternatively, you can start Osprey with `osprey-coordinator`, refer to the [Coordinator README](./example_docker_compose/run_osprey_with_coordinator/README.md) for more information
109 109
110     - 6. (Optional) **Port Forward the UI/UI API:**
    110 + 6. (Optional) **Open ports for the UI/UI API:**
111 111
112     -    If you are running the docker compose on a headless machine, you will need to port forward the UI and UI API.
113     -    Namely, ports `5002` (UI) and `5004` (UI API). Then, you can connect via http://localhost:5002/ :D
    112 +    By default, the `docker-compose.yaml` binds running services to `127.0.0.1`. If you are running the docker compose on a headless machine, you may need to modify this configuration and/or make changes to your firewall, specifically for ports `5002` and `5004`.
114 113
    114 +    For example, if you use Tailscale to access your Osprey instance, you may change `127.0.0.1:5002:5002` to `<Tailscale IP>:5002:5002`. Alternatively, if you wish for your instance to be accessible from the public internet, you may set it simply to `5002:5002` to bind to `0.0.0.0`.
    115 +
    116 +    Be aware that some firewalls like iptables/UFW do _not_ prevent access to ports being used by Docker networking. Not explicitly setting a bind address with only UFW as a firewall will not prevent access from the public internet unless [properly configured](https://github.com/chaifeng/ufw-docker).
115 117
116 118 ### Development Workflow
117 119
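The three binding options the README now describes can be sketched as compose `ports` variants. This is a minimal illustration, not part of the commit; the Tailscale address shown is a placeholder:

```yaml
services:
  osprey-ui:
    ports:
      # Loopback only (the new default): reachable solely from the host itself.
      - "127.0.0.1:5002:5002"
      # Bind to one specific interface, e.g. a Tailscale address (placeholder IP):
      # - "100.64.0.1:5002:5002"
      # No bind address means 0.0.0.0 (all interfaces): reachable from the
      # public internet unless a firewall that handles Docker's NAT rules blocks it.
      # - "5002:5002"
```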
+1 -1
docker-compose.test.yaml
···
 13  13     container_name: etcd
 14  14     image: quay.io/coreos/etcd:v3.4.18
 15  15     ports:
 16      -    - "2379:2379"
     16  +    - "127.0.0.1:2379:2379"
 17  17     environment:
 18  18       - ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379
 19  19       - ETCD_ADVERTISE_CLIENT_URLS=http://etcd:2379
+16 -16
docker-compose.yaml
···
 15  15     hostname: osprey-kafka
 16  16     container_name: osprey-kafka
 17  17     ports:
 18      -    - "9092:9092"
     18  +    - "127.0.0.1:9092:9092"
 19  19     environment:
 20  20       KAFKA_NODE_ID: 1
 21  21       KAFKA_PROCESS_ROLES: "broker,controller"
···
 47  47     container_name: minio
 48  48     hostname: minio
 49  49     ports:
 50      -    - "9000:9000" # minio API
 51      -    - "9001:9001" # minio Console
     50  +    - "127.0.0.1:9000:9000" # minio API
     51  +    - "127.0.0.1:9001:9001" # minio Console
 52  52     environment:
 53  53       MINIO_ROOT_USER: minioadmin
 54  54       MINIO_ROOT_PASSWORD: minioadmin123
···
103 103       minio-bucket-init:
104 104         condition: service_completed_successfully
105 105     ports:
106     -    - "5001:5000"
    106 +    - "127.0.0.1:5001:5000"
107 107     command: ["osprey-worker"]
108 108     environment:
109 109       - PYTHONPATH=/osprey
···
149 149       - bigtable
150 150       - bigtable-initializer
151 151     ports:
152     -    - "5004:5004"
    152 +    - "127.0.0.1:5004:5004"
153 153     command: ["osprey-ui-api"]
154 154     environment:
155 155       - PYTHONPATH=/osprey
···
180 180     depends_on:
181 181       - osprey-ui-api
182 182     ports:
183     -    - "5002:5002"
    183 +    - "127.0.0.1:5002:5002"
184 184     environment:
185 185       - NODE_ENV=development
186 186       - REACT_APP_API_BASE_URL=http://localhost:5004
···
193 193     container_name: snowflake-id-worker
194 194     image: ghcr.io/ayubun/snowflake-id-worker:0
195 195     ports:
196     -    - "8088:8088"
    196 +    - "127.0.0.1:8088:8088"
197 197     environment:
198 198       - WORKER_ID=0
199 199       - DATA_CENTER_ID=0
···
206 206     container_name: bigtable
207 207     image: gcr.io/google.com/cloudsdktool/cloud-sdk:latest
208 208     ports:
209     -    - "8361:8361"
    209 +    - "127.0.0.1:8361:8361"
210 210     command: >
211 211       bash -c "
212 212       gcloud beta emulators bigtable start --host-port=0.0.0.0:8361 --project=osprey-dev
···
256 256     container_name: postgres
257 257     image: postgres:18
258 258     ports:
259     -    - "5432:5432"
    259 +    - "127.0.0.1:5432:5432"
260 260     volumes:
261 261       - metadata_data:/var/lib/postgresql
262 262     environment:
···
277 277     container_name: druid-zookeeper
278 278     image: zookeeper:3.5.10
279 279     ports:
280     -    - "2181:2181"
    280 +    - "127.0.0.1:2181:2181"
281 281     environment:
282 282       - ZOO_MY_ID=1
283 283
···
292 292       - druid-zookeeper
293 293       - postgres
294 294     ports:
295     -    - "8081:8081"
    295 +    - "127.0.0.1:8081:8081"
296 296     command:
297 297       - coordinator
298 298     env_file:
···
309 309       - postgres
310 310       - druid-coordinator
311 311     ports:
312     -    - "8082:8082"
    312 +    - "127.0.0.1:8082:8082"
313 313     command:
314 314       - broker
315 315     env_file:
···
327 327       - postgres
328 328       - druid-coordinator
329 329     ports:
330     -    - "8083:8083"
    330 +    - "127.0.0.1:8083:8083"
331 331     command:
332 332       - historical
333 333     env_file:
···
345 345       - postgres
346 346       - druid-coordinator
347 347     ports:
348     -    - "8091:8091"
349     -    - "8100-8105:8100-8105"
    348 +    - "127.0.0.1:8091:8091"
    349 +    - "127.0.0.1:8100-8105:8100-8105"
350 350     command:
351 351       - middleManager
352 352     env_file:
···
363 363       - postgres
364 364       - druid-coordinator
365 365     ports:
366     -    - "8888:8888"
    366 +    - "127.0.0.1:8888:8888"
367 367     command:
368 368       - router
369 369     env_file:
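Every hunk above applies the same pattern: prefix each short-syntax `ports` entry with an explicit `127.0.0.1` bind address, since a bare `HOST:CONTAINER` mapping publishes on all interfaces. A change like this can be spot-checked mechanically; the sketch below (a hypothetical helper, not part of the repository) uses a naive regex over quoted short-syntax entries only, so it would miss unquoted or long-syntax mappings:

```python
import re

def unbound_ports(compose_text: str) -> list[str]:
    """Return quoted short-syntax port mappings with no explicit bind address.

    In compose short syntax, "5002:5002" binds to 0.0.0.0 (all interfaces),
    while "127.0.0.1:5002:5002" restricts the published port to loopback.
    """
    findings = []
    for match in re.finditer(r'-\s*"([^"]+)"', compose_text):
        mapping = match.group(1)
        parts = mapping.split(":")
        # HOST:CONTAINER (2 parts) has no bind address; ADDR:HOST:CONTAINER (3) does.
        # Port ranges like "8100-8105" are still digit runs once "-" is stripped.
        if len(parts) == 2 and all(p.replace("-", "").isdigit() for p in parts):
            findings.append(mapping)
    return findings

sample = '''
services:
  etcd:
    ports:
      - "2379:2379"
  osprey-ui:
    ports:
      - "127.0.0.1:5002:5002"
'''
print(unbound_ports(sample))  # -> ['2379:2379']
```

Running this against the pre-commit `docker-compose.yaml` would flag all sixteen mappings touched here; against the post-commit file it should report none.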