To avoid confusion up front: I am not talking about the Scaleway container registry. I am self-hosting GitLab with its bundled container registry (as far as I know that is the Docker distribution registry, but I'm not sure) on my own server. Now I want to use the registry's S3 storage driver to store the container images in object storage instead of on my server's file system.
With AWS S3 everything works fine, but when I use Scaleway's object storage instead it doesn't. When the GitLab runner pushes the image to the registry (which then writes it to object storage, I assume), I get "Retrying in [some number] seconds" errors. The registry logs show errors like this:
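For reference, this is roughly how I point the registry's s3 driver at Scaleway — a minimal sketch, assuming the fr-par region; the bucket name and the two key values are placeholders:

```yaml
storage:
  s3:
    accesskey: SCW_ACCESS_KEY            # placeholder
    secretkey: SCW_SECRET_KEY            # placeholder
    region: fr-par
    regionendpoint: https://s3.fr-par.scw.cloud
    bucket: my-registry-bucket           # placeholder
    secure: true
    v4auth: true
```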
error resolving upload: s3aws: Path not found: /docker/registry/v2/repositories/myName/projectName/_uploads/bla-bla-bla/data
And even more of these:
s3aws: NoSuchUpload: The specified multipart upload does not exist. The upload ID might be invalid, or the multipart upload might have been aborted or completed.
As mentioned, the push fails after a few of these retries. Looking into the bucket afterwards, there is a folder structure and some files, so it is definitely not a problem with the basics (authentication, endpoint, …).
I found the following issue, which sounds somewhat similar: https://github.com/docker/distribution/issues/1948
The difference is that in that issue the error only appears after some time, once abandoned uploads have piled up, whereas for me it happens during the very first push into an empty bucket. It makes no sense to clean up in-progress uploads while a push is still running, so the workaround from that issue won't help me.
Has anyone else tried to configure Scaleway Object Storage as the backend for a Docker registry? Does the Scaleway API misbehave on this point? I can't say for sure, but I don't think the storage driver uses any API features beyond the basic ones, which Scaleway should support just like AWS does.
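One way to narrow this down might be to exercise the multipart-upload API against the Scaleway endpoint directly, outside the registry. Below is a rough sketch of what I have in mind, not something I've verified against Scaleway: the endpoint, region, bucket, and key are placeholders, and the 10 MiB default chunk size is my understanding of the s3 driver's default (the error is about multipart uploads, and as far as I know the driver only splits a blob into parts once it exceeds that chunksize):

```python
import math


def part_count(blob_size: int, chunk_size: int = 10 * 1024 * 1024) -> int:
    """Roughly how many parts the registry's s3 driver would upload for
    a blob of this size, assuming a 10 MiB default chunksize."""
    return max(1, math.ceil(blob_size / chunk_size))


def multipart_roundtrip(bucket: str, key: str, endpoint: str, region: str) -> None:
    """Create, fill, and complete a single-part multipart upload.

    If Scaleway's multipart handling misbehaves, this should surface a
    NoSuchUpload or similar error without the registry in the loop.
    """
    # boto3 is imported inside the function so the pure helper above
    # stays usable without the dependency installed.
    import boto3

    s3 = boto3.client("s3", endpoint_url=endpoint, region_name=region)
    mpu = s3.create_multipart_upload(Bucket=bucket, Key=key)
    upload_id = mpu["UploadId"]
    # S3 requires every part except the last to be at least 5 MiB.
    part = s3.upload_part(
        Bucket=bucket, Key=key, UploadId=upload_id,
        PartNumber=1, Body=b"x" * (5 * 1024 * 1024),
    )
    s3.complete_multipart_upload(
        Bucket=bucket, Key=key, UploadId=upload_id,
        MultipartUpload={"Parts": [{"ETag": part["ETag"], "PartNumber": 1}]},
    )
```

A typical image layer of, say, 25 MiB would then be split into `part_count(25 * 1024 * 1024)` = 3 parts, so multipart uploads are hit almost immediately on a normal push.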
Thanks in advance