Considering we have paid GitHub accounts at work, I’ve started using the GitHub Container Registry (GHCR) for storing our production-level container images. It was my first foray away from either:
- Docker Hub
- Locally copying around image archives (🤢)
There are limitations to how much GHCR can be used with our current plan, including 2 GB of overall storage and 10 GB of transfer out of the platform per month. As a result, my team currently is only leveraging it for our production-ready images. While we’re testing development either locally or in a lab, we still operate out of local archives. Since this isn’t using Docker Hub, I just had to figure out how to make authentication work for both the Docker CLI on my local machine to push images up and within K3s so that I could pull down whatever images I needed from my production clusters.
## Docker Authentication
GitHub Container Registry only supports classic Personal Access Tokens (PATs). You can generate one by going to your GitHub settings. Select Developer settings at the very bottom of the left-hand navigation menu. Under the Personal access tokens heading, pick Tokens (classic).
From the Generate new token button in the top-right, pick Generate new token (classic):
The token’s name is purely for your sake to know what the purpose is, so pick something meaningful to you. This can be helpful if you’ve generated multiple tokens for different purposes. Pick any settings you desire for the token, but the main ones that are important for container management are:
- `read:packages` is required at the bare minimum for pulling down images. In this case I also want the `write:packages` and `delete:packages` scopes so that I can push locally built images into GHCR and clean up ones that may no longer be needed so they aren’t eating into our 2 GB of storage.
You will need to instruct the local `docker` daemon on the system where you build images with what credentials to use when accessing `ghcr.io`. From the CLI, enter:

```shell
docker login ghcr.io -u {github_username}
```
When prompted for the password, use the PAT that was previously generated.
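As a sketch (the username `octocat` and the environment variable `GHCR_PAT` are placeholders), the login can also be done non-interactively with `--password-stdin`, which keeps the token out of your shell history:

```shell
# Placeholder: in practice, export the PAT from a secrets manager rather
# than hard-coding it in a script.
export GHCR_PAT="ghp_exampletoken"

# Pipe the token via stdin so it never appears in the process list.
echo "$GHCR_PAT" | docker login ghcr.io -u octocat --password-stdin
```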
If you need to upload an image to GHCR, it must first be properly tagged so the `docker` daemon knows where to send it. If you’ve built something locally with Docker, for example, it likely has no tag specific to any container registry.

To fix this, run:

```shell
docker tag {image_hash} ghcr.io/{company_account}/{image_name}:{version_number}
```
Note: You most likely should be pushing this to a company organization in GitHub rather than your own account. While your personal account does include some free storage and egress traffic for GHCR, work images belong to work and should be stored accordingly.
After running this, the same image should show under both tags, and the image ID will be the same value for each.
It’s worth mentioning that the tags are purely metadata; this isn’t two copies of the same image consuming (in my case) 162 MB of disk space each. The storage is only used once.
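For instance (the hash and image names below are all hypothetical), tagging a locally built image and confirming that both tags share one image ID might look like:

```shell
# Tag the locally built image (referenced by its short hash) for GHCR.
docker tag 3f2a8c1b9d0e ghcr.io/example-co/dashboard:1.0.0

# Both entries in the listing share the same IMAGE ID, so no extra
# disk space is consumed by the second tag.
docker images | grep 3f2a8c1b9d0e
```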
To push the image, you can run `docker push` just as you would for any other registry, since your local `docker` client now knows what to do for GHCR-tagged images:

```shell
docker push ghcr.io/{company_account}/{image_name}:{version_number}
```
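A hypothetical push, using the same placeholder names as above:

```shell
# Layers are uploaded individually; subsequent pushes of a rebuilt image
# only upload the layers that changed.
docker push ghcr.io/example-co/dashboard:1.0.0
```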
## K3s Configuration
Now that the shiny new image is in GHCR, the orchestrator also needs to be able to access it. In this case I’m using K3s, so other Kubernetes distributions may have slightly different steps.
K3s uses a file called `registries.yaml` to configure any registries it should leverage. If you want to know more than this, check out their full documentation. On Ubuntu systems, this file lives at `/etc/rancher/k3s/registries.yaml`.
From a base K3s installation, this file likely doesn’t exist, so create it and then open it with your preferred text editor. The file will have two top-level sections:
- `mirrors`: This defines each mirror K3s should use.
- `configs`: This specifies how K3s should interact with each mirror.
A sample for GHCR would be:
```yaml
mirrors:
  ghcr.io:
    endpoint:
      - "https://ghcr.io"
configs:
  ghcr.io:
    auth:
      username: jfabry-noc
      password: {github_pat}
```
Note: In my case, this was for a demo, and so I just used my own GitHub account for the authentication. In a real scenario, you would want to have some sort of dedicated account that only has permission to read the relevant images.
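Since `registries.yaml` now holds a credential in plain text, it’s also worth locking down its file permissions. This is a suggestion rather than a K3s requirement:

```shell
# The file contains a PAT, so restrict it to root only.
sudo chown root:root /etc/rancher/k3s/registries.yaml
sudo chmod 600 /etc/rancher/k3s/registries.yaml
```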
After the file has been created and updated appropriately, K3s will need to be restarted so it can pick up the changes:
```shell
sudo systemctl restart k3s
```
Assuming you’ve using a YAML file to manage your deployments, it will also need to be updated to point to the image in GHCR:
```yaml
image: ghcr.io/{company_account}/dashcentral:1.6.1
```
Note: Depending on your GitHub plan, you may also want to ensure that `imagePullPolicy` is set to `IfNotPresent` to avoid unnecessarily eating through your GHCR egress limit when a local version of the image is already present.
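As a sketch, the relevant portion of a Deployment spec might look like this (the container and image names are hypothetical):

```yaml
containers:
  - name: dashboard
    image: ghcr.io/example-co/dashboard:1.0.0
    # Only pull from GHCR when the tag isn't already cached on the node.
    imagePullPolicy: IfNotPresent
```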
Once this is all set, just re-apply any relevant YAML files:
```shell
sudo k3s kubectl apply -f {path/to/deployment.yaml}
```
Along with seeing the deployment running as expected, checking the local images known to K3s will now also show the new, properly tagged GHCR image.
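Assuming a default K3s installation, the bundled `crictl` can list the images the node has pulled from its embedded containerd:

```shell
# Filter the node's image list down to anything pulled from GHCR.
sudo k3s crictl images | grep ghcr.io
```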