GHCR releases #431
Conversation
polarathene left a comment:
I am posting this as-is without another revision pass. It is incomplete, and a good portion was lost when my system crashed mid write-up, taking some useful information and examples with it.

I'd suggest glossing over it, as some of the information may be outdated or invalid. Any questions I raised I'm pretty sure I had already answered myself afterwards. I'm submitting this review in the event it provides anything useful to you from what GitHub recovered.

I will note that I have a variety of workflow iterations that I could share with you here. Rather than parsing the caddy library file as you did, I had a minimal `yq` command to process `stackbrew-config.yaml` directly, optionally generating the matrix from that. I had also done the equivalent with JS, but in a much simpler approach than this PR's use of `actions/github-script`.
- I also investigated Windows image support and was successful at that, but it greatly complicates the workflow setup.
- I additionally had a Docker Bake config to share with you that could handle Linux and Windows image builds rather nicely (including tag management if desired), along with simplifying the matrix and publishing when shared tags were involved.
- The whole `Dockerfile` management itself was also explored and could be simplified (though that'd depend on any requirements to comply with DockerHub's official image support); at the very least, restructuring the file layout to more predictable names, where the `dir` field wouldn't be necessary, would be better.
- If you were okay with the release lag from the DockerHub releases, a much simpler process is to use a tool like `oras` or `crane` to copy from one registry to the other, and have that as a scheduled job instead of building images for GHCR (see the sketch after this list). I did have a link to another project that did this to refer you to, but have since lost it from that system crash :\
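For reference, a minimal sketch of what that scheduled mirror job could look like with `crane` (the action version, tag list, and schedule here are all assumptions for illustration):

```yaml
name: Mirror DockerHub to GHCR

on:
  schedule:
    - cron: '0 6 * * *'

jobs:
  mirror:
    runs-on: ubuntu-latest
    permissions:
      packages: write
    steps:
      - uses: imjasonh/setup-crane@v0.4
      - name: Copy tags from DockerHub to GHCR
        run: |
          crane auth login ghcr.io --username "${{ github.actor }}" --password "${{ secrets.GITHUB_TOKEN }}"
          # `crane copy` carries over the multi-arch manifest as-is:
          for tag in latest alpine builder builder-alpine; do
            crane copy "docker.io/caddy:${tag}" "ghcr.io/caddyserver/caddy:${tag}"
          done
```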
Let me know if any of that interests you; my memory recall on a variety of this work is fading (I worked on it over the past week or two).
```yaml
      - name: Parse library file
        id: parse
        uses: actions/github-script@v7
```
I can understand why you might want to do this, but you're only handling Linux images, correct?
Presently that covers Alpine with Caddy 2.10 and the 2.11 beta releases, runtime and builder image variants:
Lines 7 to 19 in cf30c98:

```
Tags: 2.11.0-beta.1-alpine, 2.11-alpine
SharedTags: 2.11.0-beta.1, 2.11
GitRepo: https://github.com/caddyserver/caddy-docker.git
Directory: 2.11/alpine
GitCommit: 33c593c5bd99287e66de4187a4e9a4097426253d
Architectures: amd64, arm64v8, arm32v6, arm32v7, ppc64le, riscv64, s390x

Tags: 2.11.0-beta.1-builder-alpine, 2.11-builder-alpine
SharedTags: 2.11.0-beta.1-builder, 2.11-builder
GitRepo: https://github.com/caddyserver/caddy-docker.git
Directory: 2.11/builder
GitCommit: 33c593c5bd99287e66de4187a4e9a4097426253d
Architectures: amd64, arm64v8, arm32v6, arm32v7, ppc64le, riscv64, s390x
```
Lines 69 to 81 in cf30c98:

```
Tags: 2.10.2-alpine, 2.10-alpine, 2-alpine, alpine
SharedTags: 2.10.2, 2.10, 2, latest
GitRepo: https://github.com/caddyserver/caddy-docker.git
Directory: 2.10/alpine
GitCommit: 5572371a83e48fd0368a4917d0fc48e44ef30582
Architectures: amd64, arm64v8, arm32v6, arm32v7, ppc64le, riscv64, s390x

Tags: 2.10.2-builder-alpine, 2.10-builder-alpine, 2-builder-alpine, builder-alpine
SharedTags: 2.10.2-builder, 2.10-builder, 2-builder, builder
GitRepo: https://github.com/caddyserver/caddy-docker.git
Directory: 2.10/builder
GitCommit: 5572371a83e48fd0368a4917d0fc48e44ef30582
Architectures: amd64, arm64v8, arm32v6, arm32v7, ppc64le, riscv64, s390x
```
Platforms are the same across all 4 configs above. The two Caddy releases only differ by git commit (which appears to reference the commit that last modified the Dockerfile in this repo, for the DockerHub-generated README to link to), and then it's your tags and directory, which have a (reasonably) reliable structure to leverage.
You could express this much more succinctly via Docker Bake config?
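For illustration, here is a minimal `docker-bake.hcl` sketch that deduplicates the shared platforms (target names, versions, and layout are assumptions on my part, not a drop-in config):

```hcl
variable "REPO" { default = "ghcr.io/caddyserver/caddy" }

# Platforms shared across all 4 configs:
target "_platforms" {
  platforms = [
    "linux/amd64", "linux/arm64", "linux/arm/v6", "linux/arm/v7",
    "linux/ppc64le", "linux/riscv64", "linux/s390x"
  ]
}

target "alpine" {
  inherits = ["_platforms"]
  context  = "2.10/alpine"
  tags     = ["${REPO}:2.10.2-alpine", "${REPO}:alpine", "${REPO}:latest"]
}

target "builder-alpine" {
  inherits = ["_platforms"]
  context  = "2.10/builder"
  tags     = ["${REPO}:2.10.2-builder-alpine", "${REPO}:builder"]
}

group "default" {
  targets = ["alpine", "builder-alpine"]
}
```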
```yaml
env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}
```
This would result in `ghcr.io/caddyserver/caddy-docker`, while for DockerHub there is `docker.io/caddy` (due to official library status, aka `docker.io/library/caddy`).

For GHCR, usually you'd associate the package with the actual repo (`caddyserver/caddy` in this case), with that repo being linked to the GHCR image too. However, since you've got a separate repo handling the image build and publish, and it's not being done from a `workflow_call` trigger (which would get the context of `github.repository` => `caddyserver/caddy` AFAIK), some extra care needs to be taken here.

Since the Dockerfiles in this repo aren't generic, it makes sense to not have `caddyserver/caddy` triggering this workflow, and to keep the `on.push` trigger for this repo.

Making the Dockerfiles version-generic would be a bit more complicated, because:

- No image is building Caddy; they only pull an existing release published on the `caddyserver/caddy` GH releases page, with checksum verification as a result.
- Presumably building from source with the builder image, instead of pulling from GH releases, is a no-go, as the hard-coded approach with all these very similar `Dockerfile` variants (which I understand are generated from a template) is a requirement/expectation for the DockerHub official library publishing? (since the tags link back to the build commits of their `Dockerfile`)
You'll likely need to go into the settings after publishing to GHCR and link it to `caddyserver/caddy` manually, so that it appears on that repo (which is more discoverable and the typical expectation for where to find the link to a project's GHCR image).
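As an aside, GHCR can also auto-link a package to a repo when the image carries the OCI source label, which `docker/build-push-action` can set via its `labels` input. A hedged sketch (the tag here is just a placeholder):

```yaml
      - uses: docker/build-push-action@v6
        with:
          context: ${{ matrix.directory }}
          push: true
          tags: ghcr.io/caddyserver/caddy:latest
          # GHCR associates the package with the repo named in this label:
          labels: |
            org.opencontainers.image.source=https://github.com/caddyserver/caddy
```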
Suggested change:

```diff
-  IMAGE_NAME: ${{ github.repository }}
+  IMAGE_NAME: 'caddyserver/caddy'
```
FWIW, an alternative would be to have the build+publish workflow at `caddyserver/caddy`, which could be triggered by a git tag push/release event. This would be similar to this PR (which is a callable workflow that also happens to use a Dockerfile that just bundles the GH release).

- You could even have a simple `Dockerfile` at `caddyserver/caddy` just for the non-DockerHub image releases (should there be requests for any other registries to publish to).
- The referenced workflow keeps it a bit simpler by avoiding publishing for pre-releases.

Alternatively, you could call the workflow at `caddyserver/caddy` with a clone of this repo via the checkout action (or similar) to get the appropriate Dockerfile from here, but I can see how that'd complicate maintenance, especially with the tag management and parallel release channels.
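A minimal sketch of that tag-triggered variant (the trigger pattern, Dockerfile location, and exact action versions are assumptions):

```yaml
# Hypothetical workflow at caddyserver/caddy:
on:
  push:
    tags: ['v*']

jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      packages: write
    steps:
      - uses: actions/checkout@v5
      - uses: docker/setup-qemu-action@v3
      - uses: docker/setup-buildx-action@v3
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: ghcr.io/caddyserver/caddy:${{ github.ref_name }}
```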
```yaml
      - name: Build and push image
        shell: bash
        run: |
          set -e

          DIRECTORY="${{ matrix.directory }}"
          PLATFORMS="${{ matrix.platforms }}"
          ALL_TAGS="${{ matrix.tags }}"
          IS_PR="${{ github.event_name == 'pull_request' }}"

          echo "=========================================="
          echo "Building image from: $DIRECTORY"
          echo "Platforms: $PLATFORMS"
          echo "Tags: $ALL_TAGS"
          echo "=========================================="
          echo

          # Build Docker tag arguments
          TAG_ARGS=""
          for tag in $ALL_TAGS; do
            TAG_ARGS="$TAG_ARGS --tag ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:$tag"
          done

          # Construct buildx command string (print it for logs), then run it
          if [[ "$IS_PR" == "true" ]]; then
            CMD="docker buildx build --platform '$PLATFORMS' --file '$DIRECTORY/Dockerfile' $TAG_ARGS '$DIRECTORY'"
          else
            CMD="docker buildx build --push --platform '$PLATFORMS' --file '$DIRECTORY/Dockerfile' $TAG_ARGS '$DIRECTORY'"
          fi

          echo "=========================================="
          echo "Running buildx command:"
          echo "$CMD"
          echo "=========================================="

          # Execute the command
          eval $CMD

          echo "Successfully processed $DIRECTORY"
```
I would suggest this change as it's more conventional/standard within GitHub Actions for this task. As you can see, it's quite simple to grok :)

- Switched from a bash script to the official action: `docker/build-push-action`.
  - This has a `push` input that more clearly communicates preventing pushes from PR trigger events.
  - `Dockerfile` is the default used and is relative to the `context` path, so you can just set the context to the directory with the `Dockerfile` 👍
  - `tags` is using the matrix input; you'll want to prepend the image name beforehand (I'm not sure why this wasn't done in your earlier preprocess step, other than for your logging in the script being replaced by this action).
- Removed `Docker` from the name, since these days it's not that Docker-specific and is an OCI container image that other container engines can also consume.
Suggested change (replacing the bash script step above with the action):

```yaml
      - name: Build and push image
        uses: docker/build-push-action@v6
        env:
          PLATFORMS: linux/amd64, linux/arm64, linux/arm/v6, linux/arm/v7, linux/ppc64le, linux/riscv64, linux/s390x
        with:
          context: ${{ matrix.directory }}
          platforms: ${{ env.PLATFORMS }}
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ matrix.tags }}
```
Since it's only handling the alpine images, the platforms are hardcoded here instead of using `${{ matrix.platforms }}`; a minor maintenance drawback if that support were ever to change 😅 (but it simplifies the workflow by skipping the platform parser mapping logic).
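If that hardcoding ever becomes a problem, a small step could translate the library's arch names into buildx platform strings. A sketch (the `matrix.architectures` field name is an assumption):

```yaml
      - name: Map architectures to buildx platforms
        id: platforms
        shell: bash
        env:
          # e.g. "amd64, arm64v8, arm32v6, arm32v7, ppc64le, riscv64, s390x"
          ARCHITECTURES: ${{ matrix.architectures }}
        run: |
          declare -A MAP=(
            [amd64]=linux/amd64    [arm64v8]=linux/arm64
            [arm32v6]=linux/arm/v6 [arm32v7]=linux/arm/v7
            [ppc64le]=linux/ppc64le [riscv64]=linux/riscv64 [s390x]=linux/s390x
          )
          PLATFORMS=()
          for arch in ${ARCHITECTURES//,/ }; do
            PLATFORMS+=("${MAP[$arch]}")
          done
          # Join with commas for the `platforms` input:
          echo "platforms=$(IFS=,; echo "${PLATFORMS[*]}")" >> "$GITHUB_OUTPUT"
```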
For your tags generation, you can get the registry + image name prefix handled by a related official action, `docker/metadata-action`.

This will also help with the tag generation in general, but it would be a bit more complicated to support exact parity with what you've got on DockerHub (the `type=semver` docs, for example, mention that pre-release versions are excluded from the major/minor/patch expression syntax, only supporting the `{{version}}`/`{{raw}}` template expressions). If that is acceptable (the linked reasoning is quite valid IMO), I'd encourage adopting this action too.
Without publishing the Windows image variants, this becomes rather simple to support:
List of tags to generate:

```yaml
# Generated from Docker Bake
target:
  2_10-alpine:
    context: 2.10/alpine
    dockerfile: Dockerfile
    tags:
      - ghcr.io/caddyserver/caddy:2.10.2
      - ghcr.io/caddyserver/caddy:2.10
      - ghcr.io/caddyserver/caddy:2
      - ghcr.io/caddyserver/caddy:latest
      - ghcr.io/caddyserver/caddy:2.10.2-alpine
      - ghcr.io/caddyserver/caddy:2.10-alpine
      - ghcr.io/caddyserver/caddy:2-alpine
      - ghcr.io/caddyserver/caddy:alpine
  2_10-builder-alpine:
    context: 2.10/builder
    dockerfile: Dockerfile
    tags:
      - ghcr.io/caddyserver/caddy:2.10.2-builder
      - ghcr.io/caddyserver/caddy:2.10-builder
      - ghcr.io/caddyserver/caddy:2-builder
      - ghcr.io/caddyserver/caddy:builder
      - ghcr.io/caddyserver/caddy:2.10.2-builder-alpine
      - ghcr.io/caddyserver/caddy:2.10-builder-alpine
      - ghcr.io/caddyserver/caddy:2-builder-alpine
      - ghcr.io/caddyserver/caddy:builder-alpine
  2_11-alpine:
    context: 2.11/alpine
    dockerfile: Dockerfile
    tags:
      - ghcr.io/caddyserver/caddy:2.11.0-beta.1
      - ghcr.io/caddyserver/caddy:2.11.0-beta.1-alpine
  2_11-builder-alpine:
    context: 2.11/builder
    dockerfile: Dockerfile
    tags:
      - ghcr.io/caddyserver/caddy:2.11.0-beta.1-builder
      - ghcr.io/caddyserver/caddy:2.11.0-beta.1-builder-alpine
```

Excluded tags (semver pre-release `<major>.<minor>`):

```yaml
  2_11-alpine:
    context: 2.11/alpine
    dockerfile: Dockerfile
    tags:
      - ghcr.io/caddyserver/caddy:2.11
      - ghcr.io/caddyserver/caddy:2.11-alpine
  2_11-builder-alpine:
    context: 2.11/builder
    dockerfile: Dockerfile
    tags:
      - ghcr.io/caddyserver/caddy:2.11-builder
      - ghcr.io/caddyserver/caddy:2.11-builder-alpine
```

There's a variety of ways to go about processing the tags, but that is a bit complicated with multiple images and version tag suffixes.
- `docker/metadata-action` has `flavor.suffix=builder-alpine` for example, but for multiple suffixes you'd have to manually append `,suffix=builder-alpine` and repeat the line for the shared `builder` suffix too.
- Likewise for multiple images, you would either repeat the step per image, or you could use a `matrix` to be DRY (as you have already chosen), but that can be a bit heavy of a workaround to involve an entirely separate CI runner for that convenience.
- There's also the runtime image variant that has the bare version tags without any suffix appended (and an additional `latest` tag). That logic would look a bit messy to express in a combined `docker/metadata-action` step.
- A variety of conveniences are lost when `docker/metadata-action` is unable to leverage the github context (for tag events, branch expressions, etc). So your decision to extract the already generated tags from the `./library/caddy` config is reasonable.
For the most part, it'd look something like this:
```yaml
      - name: Prepare image metadata
        id: image-metadata
        uses: docker/metadata-action@v5
        with:
          # Tags are appended to this list of base names (newline delimited):
          images: |
            ghcr.io/${{ env.IMAGE_NAME }}
          flavor: |
            # Avoid implicitly assuming a `latest` tag (based on tag heuristics):
            latest=false
            # Default suffix for all tags (otherwise can append `,suffix=-alpine` to each tag below):
            #suffix=-alpine
          tags: |
            type=raw,value=latest,enable={{ contains(inputs.suffixes, 'latest') }}
            type=semver,pattern={{version}},value=${{ inputs.version }}
            # These two additional `type=semver` are excluded from pre-release semver tags:
            type=semver,pattern={{major}}.{{minor}},value=${{ inputs.version }}
            # Only the primary release version should include the major version tag:
            type=semver,pattern={{major}},value=${{ inputs.version }},enable=${{ inputs.is-primary-release }}
```

Using `actions/github-script`, it would be possible to either generate the `tags` input for `docker/metadata-action` for a dynamic tags template, or post-process the output from `docker/metadata-action` afterwards (appending extra suffixes).
```js
let patterns = ['{{ version }}', '{{ major }}.{{ minor }}'];
isPrimaryRelease && patterns.push('{{ major }}');

suffixes.flatMap(suffix => patterns
  .map(pattern => `type=semver,pattern=${pattern},value=${version},suffix=${suffix}`)
  .concat(isPrimaryRelease ? [`type=raw,value=${suffix}`] : [])
);
```
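To make the shape concrete, with hypothetical inputs of `version = '2.10.2'`, `isPrimaryRelease = true`, and `suffixes = ['alpine']`, the snippet above would produce entries like:

```
type=semver,pattern={{ version }},value=2.10.2,suffix=alpine
type=semver,pattern={{ major }}.{{ minor }},value=2.10.2,suffix=alpine
type=semver,pattern={{ major }},value=2.10.2,suffix=alpine
type=raw,value=alpine
```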
What we could do is handle the suffixes separately such as with JS:
```yaml
      - name: Append tag suffixes
        id: image-tags
        uses: actions/github-script@v8
        env:
          INPUT_DATA: ${{ steps.image-metadata.outputs.json }}
        with:
          script: |
            const tags = JSON.parse(process.env.INPUT_DATA).tags;
            // `suffix` is assumed to be defined earlier in the script:
            return tags.map(tag => `${tag}-${suffix}`);
```

`yq` is available by default on GH runners (using the latest version from GH releases at the time of the runner image build). We can use `yq` to get the version and tag info to pass into `actions/github-script` (or even call `yq` from within the JS script 😎).
```sh
yq '{
  "versions": .versions,
  "variants": .variants | with_entries(.value | {
    "key": .tags[0],
    "value": [.tags, .shared_tags] | flatten
  })
}' ./stackbrew-config.yaml
```

Which produces this YAML (use `-o=j` / `--output-format=json` to get JSON output):
Collapsed for brevity:

```yaml
versions:
  - caddy_version: '2.11.0-beta.1'
    is_major: false
    is_latest: false
    dist_commit: 33ae08ff08d168572df2956ed14fbc4949880d94
  - caddy_version: '2.10.2'
    is_major: true
    is_latest: true
    dist_commit: 33ae08ff08d168572df2956ed14fbc4949880d94
variants:
  "alpine":
    - "alpine"
    - "latest"
  "builder-alpine":
    - "builder-alpine"
    - "builder"
  "windowsservercore-ltsc2022":
    - "windowsservercore-ltsc2022"
    - "windowsservercore"
    - "latest"
  "windowsservercore-ltsc2025":
    - "windowsservercore-ltsc2025"
    - "windowsservercore"
    - "latest"
  "nanoserver-ltsc2022":
    - "nanoserver-ltsc2022"
    - "nanoserver"
  "nanoserver-ltsc2025":
    - "nanoserver-ltsc2025"
    - "nanoserver"
  "builder-windowsservercore-ltsc2022":
    - "builder-windowsservercore-ltsc2022"
    - "builder"
  "builder-windowsservercore-ltsc2025":
    - "builder-windowsservercore-ltsc2025"
    - "builder"
```

That `yq` query is useful for access via `data.variants.alpine` in GHA expressions, or for input into something like Docker Bake, less so for matrix use.
Although you could manually construct a matrix config from this data output, possibly via the GHA `*` object filter expression (eg: `data.variants.*`) to get an array of objects.
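For instance, something along these lines might work (hypothetical wiring; the `parse` job and `image-config` output names are assumptions, and I haven't verified object filters inside a `matrix` value):

```yaml
    strategy:
      matrix:
        # `.*` turns the variants map into an array of its values:
        variant: ${{ fromJSON(needs.parse.outputs.image-config).variants.* }}
```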
For matrix usage, we could do this instead (with `select()` to filter to only the Linux / alpine variants):
```sh
yq '{
  "versions": [.versions[] | {
    "caddy": .caddy_version,
    "isPrimaryRelease": .is_major
  }],
  "variants": [.variants[] | {
    "dir": .dir,
    "tags": ([.tags, .shared_tags] | flatten)
  } | select(.tags[] == "*alpine")]
}' ./stackbrew-config.yaml
```

Which outputs this small YAML snippet:
```yaml
versions:
  - caddy: '2.11.0-beta.1'
    isPrimaryRelease: false
  - caddy: '2.10.2'
    isPrimaryRelease: true
variants:
  - dir: alpine
    tags:
      - "alpine"
      - "latest"
  - dir: builder
    tags:
      - "builder-alpine"
      - "builder"
```

As a workflow step:

```yaml
      - name: Generate image matrix
        id: image-matrix
        shell: bash
        run: |
          JSON=$(yq --output-format=json --indent=0 "${YQ_QUERY}" ./stackbrew-config.yaml)
          echo "image-config=${JSON}" >> "${GITHUB_OUTPUT}"
          # YQ query was extracted as an ENV to make `run` easier to grok at a glance
        env:
          YQ_QUERY: |
            {
              "versions": [.versions[] | {
                "caddy": .caddy_version,
                "isPrimaryRelease": .is_major
              }],
              "variants": [.variants[] | {
                "dir": .dir,
                "tags": ([.tags, .shared_tags] | flatten)
              } | select(.tags[] == "*alpine")]
            }
```
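Downstream, a build job could then consume that output as its matrix. A sketch (the `generate-matrix` job name and its `image-config` job-level output wiring are assumptions):

```yaml
  build:
    needs: [generate-matrix]
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # Each entry is an object, accessible as e.g. `matrix.variant.dir`:
        version: ${{ fromJSON(needs.generate-matrix.outputs.image-config).versions }}
        variant: ${{ fromJSON(needs.generate-matrix.outputs.image-config).variants }}
    steps:
      - name: Show combination
        run: echo "Caddy ${{ matrix.version.caddy }} (${{ matrix.variant.dir }})"
```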