Shitty IaC

*: first commit yooo

Gee Sawra 6dfd0837

+1282
.gitignore (+1)

```
.vault_password
```
README.md (+34)

````markdown
# clorofilla.casa IaC

This is my first attempt at writing IaC """code""" for my home server.

It's a very basic Ansible-based setup:
- OS is AlmaLinux 10 for maximum street cred and stability
- service orchestration is managed by K3s
- storage is handled by my NAS
- everything is only accessible either through local IPs or Tailscale
- TLS is managed by Let's Encrypt
- DNS is handled by Porkbun, using Tailscale IPs
- all secrets are encrypted with Ansible Vault

Playbooks will take care of enrolling the system in the Tailscale tailnet
attached to the provided API key.

## How do I deploy this thing

Assuming you have a freshly built AlmaLinux 10 system with SSH keys configured,
run the playbooks in the order their filenames dictate:

```sh
TAILSCALE_KEY='tskey-your-API-key-here' ansible-playbook ansible/1_setup.yml -i ansible/inventory/hosts.yml --vault-password-file .vault_password
```

After that's done, the machine from which you're executing the playbooks will have a kubeconfig file in `~/.kube/config`: you're ready to use K8s:

```sh
ansible-playbook ansible/2_services.yml -i ansible/inventory/hosts.yml --vault-password-file .vault_password
```

## A note on reproducibility

This repo is mostly to keep me safe: you're not really supposed to deploy this stuff anywhere else, though feel free to draw inspiration from it!
````
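Both commands in the README read the vault password from a `.vault_password` file at the repo root, which this same commit adds to `.gitignore`. A minimal sketch for generating one locally (the filename matches the `--vault-password-file` flag above; the random-generation method is just one option):

```shell
# Restrict permissions before writing the secret.
umask 077
# Generate a random vault password and store it where the
# --vault-password-file flag expects it (repo root).
head -c 32 /dev/urandom | base64 > .vault_password
```

Any existing vaulted files would then need to be re-keyed with `ansible-vault rekey` if you change this password later.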
+177

ansible/.gitignore

```
.logs/*
*.retry
*.vault
collections/*
!collections/ansible_collections
!collections/requirements.yml
collections/ansible_collections/*
!collections/ansible_collections/clorofilla
collections/ansible_collections/clorofilla/*
!collections/ansible_collections/clorofilla/casa
# https://raw.githubusercontent.com/github/gitignore/main/Python.gitignore
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/#use-with-ide
.pdm.toml

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/

# MacOS
.DS_Store

# Ansible
.ansible/
```
+147

ansible/1_setup.yml

```yaml
---
- name: Import global variables (required due to import playbook for K3s)
  hosts: all
  gather_facts: false
  tasks:
    - name: Import global variables (required due to import playbook for K3s)
      set_fact:
        ansible_become_password: "{{ ansible_become_password }}"
        k3s_version: "{{ k3s_version_num }}"
        api_endpoint: "{{ api_endpoint_str }}"
      delegate_to: localhost
      run_once: true
      no_log: true

- name: Initialize system
  hosts: all
  tags: init
  become: true
  roles:
    - {
        role: exploide.dnf-automatic,
        dnf_automatic_reboot: true,
        dnf_automatic_reboot_OnCalendar: "04:00",
      }
  tasks:
    - name: Stop, disable, and mask firewalld
      ansible.builtin.systemd:
        name: firewalld
        state: stopped
        enabled: no
        masked: yes

    - name: Set hostname
      hostname:
        name: "{{ inventory_hostname | replace('_', '-') }}"

    - name: Ensure updated packages
      ansible.builtin.dnf:
        name: "*"
        state: latest

- name: Install Tailscale
  hosts: all
  become: true

  roles:
    - role: artis3n.tailscale.machine
      vars:
        tailscale_authkey: "{{ lookup('env', 'TAILSCALE_KEY') }}"
        tailscale_args: "--ssh"

- name: Install K3s
  tags: deploy_k3s
  ansible.builtin.import_playbook: k3s.orchestration.site

- name: Copy kubeconfig to current user's home
  hosts: all
  become: false
  tags: k8s,kubeconfig

  tasks:
    - name: Copy K3s kubeconfig
      ansible.builtin.fetch:
        src: /etc/rancher/k3s/k3s.yaml
        dest: ~/.kube/config
        flat: yes
      become: yes

    - name: Fix server address in kubeconfig
      ansible.builtin.replace:
        path: ~/.kube/config
        regexp: "https://127.0.0.1:6443"
        replace: "https://{{ ansible_host }}:6443"
      delegate_to: localhost

- name: Setup Traefik with Porkbun certs
  hosts: localhost
  become: false
  tags: traefik,k8s

  tasks:
    - name: Create Porkbun API credentials secret
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Secret
          metadata:
            name: porkbun-api-credentials
            namespace: kube-system
          type: Opaque
          stringData:
            api-key: "{{ porkbun_api_key }}"
            secret-api-key: "{{ porkbun_api_secret }}"

    - name: Configure traefik to use certificates
      kubernetes.core.k8s:
        state: present
        src: k8s/traefik_porkbun.yml

- name: Harden Log Rotation for K3s and Systemd
  hosts: all
  become: true
  tags: k8s,logging
  tasks:
    - name: Limit journald total disk usage to 500MB
      ansible.builtin.lineinfile:
        create: yes
        path: /etc/systemd/journald.conf
        regexp: "^#?SystemMaxUse="
        line: "SystemMaxUse=500M"
      notify: Restart journald

    - name: Ensure journald persistent storage is enabled
      ansible.builtin.lineinfile:
        create: yes
        path: /etc/systemd/journald.conf
        regexp: "^#?Storage="
        line: "Storage=persistent"
      notify: Restart journald

    - name: Ensure K3s config directory exists
      ansible.builtin.file:
        path: /etc/rancher/k3s
        state: directory
        mode: "0755"

    - name: Configure K3s log rotation via config.yaml
      ansible.builtin.blockinfile:
        path: /etc/rancher/k3s/config.yaml
        create: yes
        block: |
          kubelet-arg:
            - "container-log-max-files=3"
            - "container-log-max-size=10Mi"
      notify: Restart k3s

  handlers:
    - name: Restart journald
      ansible.builtin.service:
        name: systemd-journald
        state: restarted

    - name: Restart k3s
      ansible.builtin.service:
        name: k3s
        state: restarted
```
+57

ansible/2_services.yml

```yaml
---
- name: Deploy K8s services
  hosts: localhost
  become: false
  tags: k8s
  tasks:
    - name: Add NFS provisioner Helm repo
      kubernetes.core.helm_repository:
        name: nfs-subdir-external-provisioner
        repo_url: https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/

    - name: Deploy NFS provisioner
      kubernetes.core.helm:
        name: nfs-provisioner
        chart_ref: nfs-subdir-external-provisioner/nfs-subdir-external-provisioner
        release_namespace: kube-system
        create_namespace: true
        values:
          nfs:
            server: "{{ nas_ip }}"
            path: "{{ nas_path }}"
          storageClass:
            name: nfs-client
            defaultClass: false

    - name: Deploy Home Assistant stable
      tags: ha
      kubernetes.core.k8s:
        state: present
        src: k8s/ha.yml

    - name: Deploy Transmission for my torrents
      tags: transmission
      kubernetes.core.k8s:
        state: present
        src: k8s/transmission.yml

    - name: Deploy postgres operator
      tags: postgres
      kubernetes.core.k8s:
        state: present
        src: https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.28/releases/cnpg-1.28.0.yaml

    - name: Deploy Immich dependencies
      tags: immich
      kubernetes.core.k8s:
        state: present
        src: k8s/immich.yml

    - name: Deploy Immich through Helm
      tags: immich
      kubernetes.core.helm:
        name: immich
        chart_ref: oci://ghcr.io/immich-app/immich-charts/immich
        release_namespace: immich
        values_files:
          - k8s/immich_values.yml
```
+6

ansible/AGENTS.md

```markdown
<!--# cspell: ignore SSOT CMDB -->
# AGENTS.md

Ensure that all practices and instructions described by
https://raw.githubusercontent.com/ansible/ansible-creator/refs/heads/main/docs/agents.md
are followed.
```
+19

ansible/ansible.cfg

```ini
[defaults]
# Specify the inventory file
inventory = inventory/hosts.yml

# Set the logging verbosity level
verbosity = 2

# Set the default user for SSH connections
remote_user = geesawra

# Define the default become method
become_method = sudo

[persistent_connection]
# Controls how long the persistent connection will remain idle before it is destroyed
connect_timeout=30

# Controls the amount of time to wait for response from remote device before timing out persistent connection
command_timeout=30
```
ansible/collections/ansible_collections/clorofilla/casa/CHANGELOG.md

This is a binary file and will not be displayed.

+80

ansible/collections/ansible_collections/clorofilla/casa/README.md

````markdown
# Clorofilla Casa Collection

This repository contains the `clorofilla.casa` Ansible Collection.

## Tested with Ansible

Tested with ansible-core >=2.14 releases and the current development version of
ansible-core.

## External requirements

Some modules and plugins require external libraries. Please check the
requirements for each plugin or module you use in the documentation to find out
which requirements are needed.

## Included content

Please check the included content on the
[Ansible Galaxy page for this collection](https://galaxy.ansible.com/clorofilla/casa).

## Using this collection

```shell
ansible-galaxy collection install clorofilla.casa
```

You can also include it in a `requirements.yml` file and install it via
`ansible-galaxy collection install -r requirements.yml` using the format:

```yaml
collections:
  - name: clorofilla.casa
```

To upgrade the collection to the latest available version, run the following
command:

```bash
ansible-galaxy collection install clorofilla.casa --upgrade
```

You can also install a specific version of the collection, for example, if you
need to downgrade when something is broken in the latest version (please report
an issue in this repository). Use the following syntax where `X.Y.Z` can be any
[available version](https://galaxy.ansible.com/clorofilla/casa):

```bash
ansible-galaxy collection install clorofilla.casa:==X.Y.Z
```

See
[Ansible Using Collections](https://docs.ansible.com/ansible/latest/user_guide/collections_using.html)
for more details.

## Release notes

See the
[changelog](https://github.com/ansible-collections/clorofilla.casa/tree/main/CHANGELOG.rst).

## Roadmap

<!-- Optional. Include the roadmap for this collection, and the proposed release/versioning strategy so users can anticipate the upgrade/update cycle. -->

## More information

<!-- List out where the user can find additional information, such as working group meeting times, slack/Matrix channels, or documentation for the product this collection automates. At a minimum, link to: -->

- [Ansible collection development forum](https://forum.ansible.com/c/project/collection-development/27)
- [Ansible User guide](https://docs.ansible.com/ansible/devel/user_guide/index.html)
- [Ansible Developer guide](https://docs.ansible.com/ansible/devel/dev_guide/index.html)
- [Ansible Collections Checklist](https://docs.ansible.com/ansible/devel/community/collection_contributors/collection_requirements.html)
- [Ansible Community code of conduct](https://docs.ansible.com/ansible/devel/community/code_of_conduct.html)
- [The Bullhorn (the Ansible Contributor newsletter)](https://docs.ansible.com/ansible/devel/community/communication.html#the-bullhorn)
- [News for Maintainers](https://forum.ansible.com/tag/news-for-maintainers)

## Licensing

GNU General Public License v3.0 or later.

See [LICENSE](https://www.gnu.org/licenses/gpl-3.0.txt) to see the full text.
````
+15

ansible/collections/ansible_collections/clorofilla/casa/galaxy.yml

```yaml
---
# Minimal galaxy.yml for a playbook project for tools to recognize this as a collection

namespace: "clorofilla"
name: "casa"
readme: README.md
version: 0.0.1
authors:
  - your name <example@domain.com>

description: Collection for clorofilla.casa playbook project

# TO-DO: update the tags based on your content type
tags: ["tools"]
repository: NA
```
+2

ansible/collections/ansible_collections/clorofilla/casa/meta/runtime.yml

```yaml
---
requires_ansible: ">=2.17.0"
```
+57

ansible/collections/ansible_collections/clorofilla/casa/roles/run/README.md

````markdown
Clorofilla.Casa Run Role
========================

A brief description of the role is here.

Requirements
------------

Any prerequisites that may not be covered by Ansible itself or the role should be mentioned here. For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required.

Role Variables
--------------

A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables that are read from other roles and/or the global scope (ie. host vars, group vars, etc.) should be mentioned here as well.

Dependencies
------------

A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set for other roles, or variables that are used from other roles.

Example Playbook
----------------

Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too:

```yaml
- name: Execute tasks on servers
  hosts: servers
  roles:
    - role: clorofilla.casa.run
      run_x: 42
```

Another way to consume this role would be:

```yaml
- name: Initialize the run role from clorofilla.casa
  hosts: servers
  gather_facts: false
  tasks:
    - name: Trigger invocation of run role
      ansible.builtin.include_role:
        name: clorofilla.casa.run
      vars:
        run_x: 42
```

License
-------

<!-- TO-DO: Update the license to the one you want to use (delete this line after setting the license) -->
BSD

Author Information
------------------

An optional section for the role authors to include contact information, or a website (HTML is not allowed).
````
+12

ansible/collections/ansible_collections/clorofilla/casa/roles/run/tasks/main.yml

```yaml
---
- name: Debug print task-1
  ansible.builtin.debug:
    msg: "This is task-1"

- name: Debug print task-2
  ansible.builtin.debug:
    msg: "This is task-2"

- name: Debug print task-3
  ansible.builtin.debug:
    msg: "This is task-3"
```
+20

ansible/collections/requirements.yml

```yaml
---
collections:
  - name: ansible.posix
    version: 1.4.0

  - name: ansible.scm
    version: 2.0.0

  - name: ansible.utils
    version: 4.0.0

  - name: cisco.ios

  - name: https://github.com/redhat-cop/network.backup
    type: git

  # TO-DO: User's own collections can also be specified as mention below.
  # - name: my_organization.my_collection
  #   version: 1.2.3
  #   source: https://github.com/my_organization/my_collection.git
```
+5

ansible/group_vars/all/vars.yml

```yaml
nas_ip: 192.168.1.155
nas_path: /mnt/fast/1tb/services
ansible_port: 22
k3s_version_num: v1.35.0+k3s1
api_endpoint_str: "{{ hostvars[groups['server'][0]]['ansible_host'] | default(groups['server'][0]) }}"
```
+21

ansible/group_vars/all/vault.yml

```
$ANSIBLE_VAULT;1.1;AES256
65393661633633663565303165306635313264613262666235336264306335623833623633376537
6161333636373538363639653464656638316634346639350a363734396433623137636236346334
36623464666436323461363462313565313939623466633064316632353165323365353439643466
3334356437386164340a616234626136653831363561656232653063393466306562363633356335
38346361666462376635333936376432356636326331626634363961636332353638643334353132
63363436306437376537623735616361333166363730313435663432633831663334663833643761
64313435333536643132356134353462383933303765323430336432333064643332326536333464
62346232383266633837616566393531656339343139353764363966393935383763303836333064
35666664333664303663313632333765386630666239643739326630363561363362343631343935
34343965613132633163353934636333323562656464343861626338666262343834356231306234
35313430613565636261653161626362623566623863343062346236316162646335396166613237
31386333323436656566346166386232333761393161643030643765373635626663396533616131
34323639376662646536653666323831343036376135643339613162633365383834663761353066
30336264623665623431646437626237343433343430633165393762633364343366616562646561
35346230373037653235646561653835316263643764636261386134393063336434353735363864
30363261353762663334346130366635636232666631356430383834376539313538363038343365
62623962653131653532633765383034343435316462383264643262633033616138636564376438
34643430653864623565363635313366366235346531623639396163633164646439646536663433
63303262613539653534303231646637313363656530656337343530663630643264646637323233
39306631666365373065
```
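The vault blob above is opaque by design, but the playbooks in this commit reference several vaulted variables (`ansible_become_password` in 1_setup.yml, plus `porkbun_api_key` and `porkbun_api_secret` for the Traefik credentials secret). A purely hypothetical plaintext layout, for illustration only:

```yaml
# Illustrative only: the real vault contents are encrypted above.
# Variable names come from references in ansible/1_setup.yml; values are placeholders.
ansible_become_password: "changeme"
porkbun_api_key: "pk1_xxxxxxxx"
porkbun_api_secret: "sk1_xxxxxxxx"
```

Editing would go through `ansible-vault edit ansible/group_vars/all/vault.yml --vault-password-file .vault_password`.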
+15

ansible/inventory/hosts.yml

```yaml
---
all:
  children:
    k3s_cluster:
      children:
        server:
          hosts:
            elitedesk-downstairs:
              ansible_host: 192.168.1.242
        agent:
          hosts:
            # Leave empty or add other agent nodes here
    elitedesk_home_lab:
      hosts:
        elitedesk-downstairs:
```
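The `agent` group is left empty in this single-node setup. Following the same inventory layout, enrolling a worker node later would presumably look like this (the node name and IP are placeholders, not part of this commit):

```yaml
# Hypothetical addition under k3s_cluster.children:
agent:
  hosts:
    some-agent-node:            # placeholder hostname
      ansible_host: 192.168.1.243  # placeholder IP
```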
+24

ansible/k8s/argocd_traefik.yml

```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: argocd-server
  namespace: argocd
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`argocd.clorofilla.casa`)
      priority: 10
      services:
        - name: argocd-server
          port: 80
    - kind: Rule
      match: Host(`argocd.clorofilla.casa`) && Header(`Content-Type`, `application/grpc`)
      priority: 11
      services:
        - name: argocd-server
          port: 80
          scheme: h2c
  tls:
    certResolver: default
```
+154

ansible/k8s/ha.yml

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: home-automation
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: home-assistant-config
  namespace: home-automation
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: home-assistant-time-pvc
  namespace: home-automation
  labels:
    app: home-assistant
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Mi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: home-assistant-time-pv
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /etc/localtime
  claimRef:
    name: home-assistant-time-pvc
    namespace: home-automation
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: home-assistant-http-config
  namespace: home-automation
data:
  http.yaml: |
    http:
      use_x_forwarded_for: true
      trusted_proxies:
        - 10.42.0.0/16
        - 127.0.0.1
        - ::1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: home-assistant
  namespace: home-automation
  labels:
    app: home-assistant
spec:
  replicas: 1
  selector:
    matchLabels:
      app: home-assistant
  template:
    metadata:
      labels:
        app: home-assistant
    spec:
      initContainers:
        - name: copy-config
          image: busybox:latest
          command: ["sh", "-c"]
          args:
            - |
              grep -q "^http:" /config/configuration.yaml || { echo "" >> /config/configuration.yaml; cat /tmp/http-config/http.yaml >> /config/configuration.yaml; }
              echo "Config file copied"
          volumeMounts:
            - mountPath: /config
              name: home-assistant-config
            - mountPath: /tmp/http-config
              name: http-config
      containers:
        - name: home-assistant
          image: "ghcr.io/home-assistant/home-assistant:stable"
          securityContext:
            privileged: true
          ports:
            - name: http
              containerPort: 8123
              protocol: TCP
          volumeMounts:
            - mountPath: /config
              name: home-assistant-config
            - mountPath: /etc/localtime
              name: home-assistant-time
              readOnly: true
      hostNetwork: true
      volumes:
        - name: home-assistant-config
          persistentVolumeClaim:
            claimName: home-assistant-config
        - name: http-config
          configMap:
            name: home-assistant-http-config
        - name: home-assistant-time
          persistentVolumeClaim:
            claimName: home-assistant-time-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: home-assistant
  namespace: home-automation
  labels:
    app: home-assistant
spec:
  selector:
    app: home-assistant
  ports:
    - name: http
      port: 8123
      targetPort: 8123
      protocol: TCP
  type: ClusterIP
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: home-assistant-route
  namespace: home-automation
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`ha.clorofilla.casa`)
      priority: 10
      services:
        - name: home-assistant
          port: 8123
  tls:
    certResolver: default
```
+168

ansible/k8s/immich.yml

```yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: immich
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  namespace: immich
  name: immich-database
spec:
  instances: 1
  storage:
    size: 1Gi
  imageName: ghcr.io/tensorchord/cloudnative-vectorchord:16.9-0.4.3
  postgresql:
    shared_preload_libraries:
      - "vchord.so"
  bootstrap:
    initdb:
      # TODO: Use managed extensions (pg 18)
      postInitApplicationSQL:
        # Commands based on: https://immich.app/docs/administration/postgres-standalone/#without-superuser-permission
        - CREATE EXTENSION vchord CASCADE;
        - CREATE EXTENSION earthdistance CASCADE;
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: immich-data
  namespace: immich
spec:
  storageClassName: standard
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.1.155
    path: /mnt/fast/1tb/photos/ImmichData
  mountOptions:
    - nfsvers=3
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: immich-camera-photos
  namespace: immich
spec:
  storageClassName: standard
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.1.155
    path: /mnt/fast/1tb/photos/CameraPhotos
  mountOptions:
    - nfsvers=3
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: previous-immich-uploads
  namespace: immich
spec:
  storageClassName: standard
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.1.155
    path: /mnt/fast/1tb/photos/PreviousImmichUploads
  mountOptions:
    - nfsvers=3
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: immich-ml
  namespace: immich
spec:
  storageClassName: standard
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.1.155
    path: /mnt/fast/1tb/photos/ImmichMl
  mountOptions:
    - nfsvers=3
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: immich-data-pvc
  namespace: immich
spec:
  resources:
    requests:
      storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  volumeName: immich-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: immich-camera-photos-pvc
  namespace: immich
spec:
  resources:
    requests:
      storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  volumeName: immich-camera-photos
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: previous-immich-uploads-pvc
  namespace: immich
spec:
  resources:
    requests:
      storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  volumeName: previous-immich-uploads
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: immich-ml-pvc
  namespace: immich
spec:
  resources:
    requests:
      storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  volumeName: immich-ml
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: immich-route
  namespace: immich
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`photos.clorofilla.casa`)
      priority: 10
      services:
        - name: immich-server
          port: 2283
  tls:
    certResolver: default
```
+92

ansible/k8s/immich_values.yml

```yaml
server:
  persistence:
    existing-uploads:
      enabled: true
      existingClaim: previous-immich-uploads-pvc
      readOnly: true

    external:
      enabled: true
      existingClaim: immich-camera-photos-pvc
      readOnly: true

  controllers:
    main:
      containers:
        main:
          image:
            tag: v2.4.1
          env:
            DB_HOSTNAME:
              valueFrom:
                secretKeyRef:
                  name: immich-database-app
                  key: host
            DB_USERNAME:
              valueFrom:
                secretKeyRef:
                  name: immich-database-app
                  key: user
            DB_PASSWORD:
              valueFrom:
                secretKeyRef:
                  name: immich-database-app
                  key: password
            DB_DATABASE_NAME:
              valueFrom:
                secretKeyRef:
                  name: immich-database-app
                  key: dbname
valkey:
  enabled: true
immich:
  persistence:
    library:
      existingClaim: immich-data-pvc
machine-learning:
  enabled: true
  podSecurityContext:
    runAsUser: 0
    runAsGroup: 0
  containerSecurityContext:
    privileged: true
  controllers:
    main:
      containers:
        main:
          image:
            repository: ghcr.io/immich-app/immich-machine-learning
            tag: v2.4.1-openvino
            pullPolicy: IfNotPresent
          env:
            LOG_LEVEL: debug
            NEOReadDebugKeys: 1
            OverrideGpuAddressSpace: 48
            TRANSFORMERS_CACHE: /cache
            HF_XET_CACHE: /cache/huggingface-xet
            MPLCONFIGDIR: /cache/matplotlib-config
  persistence:
    dev-dri:
      enabled: true
      type: hostPath
      hostPath: /dev/dri
      globalMounts:
        - path: /dev/dri
    dev-bus-usb:
      enabled: true
      type: hostPath
      hostPath: /dev/bus/usb
      globalMounts:
        - path: /dev/bus/usb
    sys-class-drm:
      enabled: true
      type: hostPath
      hostPath: /sys/class/drm
      globalMounts:
        - path: /sys/class/drm
    cache:
      enabled: true
      type: persistentVolumeClaim
      existingClaim: immich-ml-pvc
      globalMounts:
        - path: /cache
```
+53

ansible/k8s/traefik_porkbun.yml

```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    ports:
      torrent-tcp:
        port: 51413
        expose: true
        exposedPort: 51413
        protocol: TCP
      torrent-udp:
        port: 51413
        expose: true
        exposedPort: 51413
        protocol: UDP
    additionalArguments:
      - "--certificatesresolvers.default.acme.email=hello@geesawra.industries"
      - "--certificatesresolvers.default.acme.storage=/data/acme.json"
      - "--certificatesresolvers.default.acme.dnschallenge=true"
      - "--certificatesresolvers.default.acme.dnschallenge.provider=porkbun"
      - "--certificatesresolvers.default.acme.dnschallenge.resolvers=1.1.1.1:53,8.8.8.8:53"
    env:
      - name: PORKBUN_API_KEY
        valueFrom:
          secretKeyRef:
            name: porkbun-api-credentials
            key: api-key
      - name: PORKBUN_SECRET_API_KEY
        valueFrom:
          secretKeyRef:
            name: porkbun-api-credentials
            key: secret-api-key
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-config
  namespace: kube-system
data:
  traefik.yaml: |
    entryPoints:
      web:
        address: ":80"
        http:
          redirections:
            entryPoint:
              to: websecure
              scheme: https
      websecure:
        address: ":443"
```
+123

ansible/k8s/transmission.yml

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: torrents
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: transmission-deployment
  namespace: torrents
  labels:
    app: transmission
spec:
  replicas: 1
  selector:
    matchLabels:
      app: transmission
  template:
    metadata:
      name: transmission-pod
      namespace: torrents
      labels:
        app: transmission
    spec:
      containers:
        - name: transmission-container
          image: ghcr.io/linuxserver/transmission
          ports:
            - containerPort: 9091
              protocol: TCP
            - containerPort: 51413
              protocol: TCP
            - containerPort: 51413
              protocol: UDP

          volumeMounts:
            - mountPath: /downloads
              name: files
            - mountPath: /config
              name: config

          env:
            - name: PUID
              value: "1000"
            - name: PGID
              value: "1000"
            - name: TZ
              value: "Europe/Rome"

      volumes:
        - name: config
          nfs:
            server: 192.168.1.155
            path: /mnt/torrents/data/mine/config
            mountOptions:
              - nfsvers=3

        - name: files
          nfs:
            server: 192.168.1.155
            path: /mnt/torrents/data/mine/files
            mountOptions:
              - nfsvers=3
---
kind: Service
apiVersion: v1
metadata:
  name: transmission-service
  namespace: torrents
spec:
  type: NodePort
  selector:
    app: transmission
  ports:
    - protocol: TCP
      name: web-interface
      port: 9091
      targetPort: 9091
    - protocol: TCP
      name: torrent-tcp
      port: 51413
      targetPort: 51413
    - protocol: UDP
      name: torrents-udp
      port: 51413
      targetPort: 51413
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: transmission-mine-route
  namespace: torrents
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`transmission.clorofilla.casa`)
      priority: 10
      services:
        - name: transmission-service
          port: 9091
  tls:
    certResolver: default
---
kind: Service
apiVersion: v1
metadata:
  name: transmission-peer-lb
  namespace: torrents
spec:
  type: LoadBalancer
  selector:
    app: transmission
  ports:
    - protocol: TCP
      name: peer-tcp
      port: 51413
      targetPort: 51413
    - protocol: UDP
      name: peer-udp
      port: 51413
      targetPort: 51413
```