A container registry that uses the AT Protocol for manifest storage and S3 for blob storage. atcr.io
docker container atproto go

Question: Storage limits and handling abuse on the hosted atcr.io service #5

Open · opened by andreijiroh.dev

Does the atcr.io service have any storage limits per AT Proto account, or other rate limiting behind the scenes for abuse prevention and mitigation, similar to Docker Hub and friends? Or is that planned?

It's planned; I haven't figured out how I want to do it yet. It's hard to know who owns what layer. For example, if two people share the base golang-1.24 image, who "pays" for it? Is it the first person to upload? Do we need to calculate how many people use it and charge them a percentage of the space?

I have never actually used Docker Hub for uploading images, so I don't know exactly how they calculate it.
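The "charge them a percentage" idea above could be sketched as a per-capita split: each account referencing a shared layer is billed an equal share of its size. This is only an illustration of the option discussed in the thread, not anything atcr.io implements; all names here are hypothetical.

```go
package main

import "fmt"

// splitLayerCost bills each referencing account an equal share of a
// shared layer's size (size/N bytes for N referencing accounts).
// Hypothetical sketch of the "charge a percentage" option.
func splitLayerCost(size int64, accounts []string) map[string]int64 {
	share := size / int64(len(accounts))
	out := make(map[string]int64, len(accounts))
	for _, a := range accounts {
		out[a] = share
	}
	return out
}

func main() {
	// Two accounts share the same golang-1.24 base layer of 120 MB.
	shares := splitLayerCost(120_000_000, []string{"did:plc:alice", "did:plc:bob"})
	fmt.Println(shares["did:plc:alice"]) // 60000000
}
```

One open question this sketch surfaces: shares would need recomputing whenever an account adds or drops a reference to the layer.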

As an update, I have added basic support for storage quotas per account. This simply works by counting unique layer sizes per account.
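Counting unique layer sizes per account might look like the following sketch: sum the sizes of the layers an account references, deduplicating by content digest so that re-uploading or re-tagging the same layer costs nothing extra. The types here are hypothetical, not atcr.io's actual schema.

```go
package main

import "fmt"

// layerRef is a hypothetical record: one account's reference to a blob layer.
type layerRef struct {
	Digest string // content digest, e.g. "sha256:..."
	Size   int64  // blob size in bytes
}

// accountUsage sums layer sizes for one account, counting each unique
// digest once, so duplicate references to a layer are free.
func accountUsage(refs []layerRef) int64 {
	seen := make(map[string]bool)
	var total int64
	for _, r := range refs {
		if seen[r.Digest] {
			continue
		}
		seen[r.Digest] = true
		total += r.Size
	}
	return total
}

func main() {
	refs := []layerRef{
		{"sha256:aaa", 100},
		{"sha256:bbb", 250},
		{"sha256:aaa", 100}, // same layer referenced by two tags
	}
	fmt.Println(accountUsage(refs)) // 350
}
```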

Rate limiting is still planned at some point.

I had a similar question but a different spin on a proposal. I would like to use this project to host images for a continuously deployed app, where there is very little need to store older images forever. Services like AWS ECR have a notion of a lifecycle policy, where the registry is free to remove images based on a user-defined policy, typically after x amount of time or after y newer images are published.

I was going to ask if there's a way to cap storage by garbage collecting old images. I figured this would help both this service and keep the AT Proto owner's PDS storage limits in check.

I do not currently have granular GC. All it does is delete layers that are no longer referenced. There is a user setting (as in, the user that uploads the image) to allow garbage collection of untagged manifests.

So if it's a continuous deployment under a single user, you can enable that flag and GC will prune the untagged manifests. You can also just delete the manifest records out of the PDS, and GC will find them dereferenced and delete the blobs from S3.
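The GC behavior described above amounts to a mark-and-sweep over blobs: a blob survives only if some kept manifest still references it, and untagged manifests are skipped (and thus pruned) only when the user has enabled that setting. A minimal sketch, with hypothetical types standing in for the real manifest records:

```go
package main

import "fmt"

// manifest is a hypothetical view of a manifest record: its tags
// (possibly empty) and the layer digests it references.
type manifest struct {
	Tags   []string
	Layers []string
}

// gcPlan returns blob digests that are safe to delete from S3: anything
// not reachable from a kept manifest. Untagged manifests are treated as
// deleted only when the user has opted in (pruneUntagged).
func gcPlan(manifests []manifest, blobs []string, pruneUntagged bool) []string {
	live := make(map[string]bool)
	for _, m := range manifests {
		if pruneUntagged && len(m.Tags) == 0 {
			continue // this manifest will be pruned, so it keeps nothing alive
		}
		for _, d := range m.Layers {
			live[d] = true
		}
	}
	var dead []string
	for _, b := range blobs {
		if !live[b] {
			dead = append(dead, b)
		}
	}
	return dead
}

func main() {
	ms := []manifest{
		{Tags: []string{"latest"}, Layers: []string{"sha256:a"}},
		{Tags: nil, Layers: []string{"sha256:b"}}, // untagged
	}
	blobs := []string{"sha256:a", "sha256:b", "sha256:c"}
	fmt.Println(gcPlan(ms, blobs, true)) // [sha256:b sha256:c]
}
```

With `pruneUntagged` false, only `sha256:c` (referenced by no manifest at all) would be collected.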


Participants 3
AT URI
at://did:plc:wcx4c3osbuzrwmxkqdfqygwv/sh.tangled.repo.issue/3m72dtwyrrr22