Allow push #59
Conversation
`proxy_cache_convert_head off;` came up too, during the Pull Rate Limits debacle.
In the end it's the right thing to do, just need to hide this under an ENV var for compatibility.
Unfortunately that (currently) involves bash...
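A minimal sketch of what that bash gating could look like — the ENV var name and fragment path here are assumptions for illustration, not taken from the PR. The idea is that the entrypoint writes the directive into a fragment that nginx.conf includes, so existing deployments keep the old behavior unless they opt in:

```bash
#!/usr/bin/env bash
# Sketch only: DO_NOT_CONVERT_HEAD and the fragment path are hypothetical.
if [[ "${DO_NOT_CONVERT_HEAD:-false}" == "true" ]]; then
    # Opt-in behavior: pass HEAD requests through to the upstream unchanged.
    echo "proxy_cache_convert_head off;" > /etc/nginx/conf.d/convert_head.conf
else
    # Compatibility default: keep nginx's HEAD-to-GET conversion.
    echo "proxy_cache_convert_head on;" > /etc/nginx/conf.d/convert_head.conf
fi
```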
nginx.conf (outdated suggestion):

```nginx
proxy_cache_convert_head off;
proxy_cache_methods GET;
proxy_cache_key $scheme$request_method$proxy_host$request_uri;
```
Yep, this has come up before. It is a breaking change: for people with huge 2TB caches, the cache_key changing means the existing cache will break, sit there collecting dust, and eventually run out of disk space.
Ok, given that we are only caching `GET` requests, maybe we could leave `proxy_cache_key` at its default value `$scheme$proxy_host$request_uri` and drop the change, wdyt?
So the big issue is that DockerHub allegedly does not count HEAD requests against its Pull Rate Limit.
@cpuguy83 discovered that the Docker client does not send HEADs, while containerd does. There are still patches and discussion upstream in Moby.
So the way it is now (before this change), all HEADs get upgraded to GETs, and it does not matter whether the method is included in the cache_key.
I really don't know how this affects pushes at all -- why did you include this in the patch to begin with?
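For context, this is the stock nginx behavior being described; both lines below are the documented defaults, shown purely as illustration:

```nginx
# With conversion on (the nginx default), an incoming HEAD is rewritten to
# GET before being proxied upstream, so a cached GET response can answer it.
proxy_cache_convert_head on;
# Default cache key: no $request_method in it, which is fine as long as
# only one method ever reaches the cache.
proxy_cache_key $scheme$proxy_host$request_uri;
```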
It is copied verbatim from #17, no special reason on my side.
Without those, the patch then just boils down to removing the 405 blocks from non-GET methods.
Those were in place for a reason, although I don't remember why.
Maybe the underlying reason has since been solved somewhere else and it's ok to just remove them, but I doubt it.
Anyway, if you boil it down to an ENV var `ALLOW_PUSH=true` that just removes the blocks, it would be okay.
The cache_key stuff I'll handle separately.
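A hypothetical sketch of the kind of guard being discussed (not the project's actual config): a 405 rejection for mutating methods, which the entrypoint could simply omit from the generated config when `ALLOW_PUSH=true` is set:

```nginx
# Sketch only: reject anything that is not a read-only request. When
# ALLOW_PUSH=true, the entrypoint would leave this block out entirely.
if ($request_method !~ ^(GET|HEAD)$) {
    return 405;
}
```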
Ok done, PTAL. I've only left `client_max_body_size` and `proxy_cache_methods` set when push is allowed, let me know wdyt.
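Roughly what those push-only settings would look like — the body-size value here is an assumption for illustration, not taken from the PR:

```nginx
# Applied only when pushing is enabled, so pull-only deployments keep the
# nginx defaults untouched.
client_max_body_size 0;    # 0 disables the limit, so large layer uploads pass
proxy_cache_methods GET;   # only GET responses are ever written to the cache
```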
@rpardini any news about this? Let me know if any other changes are required.
> Ok, are there any examples of how to do it, similar to the manifest caching configuration?

Yes, a few things are already configured externally, like manifest caching; take a look at the Dockerfile and entrypoint.sh. In essence, the nginx.conf will use an […]
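The reply above is cut off in the page capture, but the pattern it points at appears to be the usual one for this project: the entrypoint renders a small config fragment from an ENV var, and nginx.conf pulls it in. A sketch under that assumption (the file name is hypothetical):

```nginx
# nginx.conf (sketch): include whatever fragment entrypoint.sh generated at
# container start; an empty fragment means "feature disabled".
include /etc/nginx/conf.d/allow_push.conf;
```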
Based on #17, this adds additional modifications to actually allow pushing; with the changes there alone, I was getting errors about methods not being allowed.
It also doesn't restrict the size of the layers to be pushed. For my use case (a proxy accessed from a private network) that makes sense; maybe not so much on a generic public network.
Happy to propose the changes to be added on top of #17, let me know what works better for you.