Gitlab Runner Operator for Power Linux

Sonia Garudi
Jun 30, 2021


An Operator is a method of packaging, deploying, and managing a Kubernetes-native application. With operators, you can manage the lifecycle of the applications in your cluster and consistently install, update, and monitor system components.

The GitLab Runner operator manages the lifecycle of GitLab Runner instances on Kubernetes or OpenShift container platforms. It uses native Kubernetes resources to deploy and manage GitLab Runner instances in your cluster, so it should run on any container platform derived from Kubernetes.

Gitlab Runner Operator for Power

GitLab has undoubtedly gained a lot of popularity in the past few years. Providing a way to deploy GitLab and Red Hat OpenShift together supports developers in application development.

The GitLab Runner operator is currently available only for the amd64 architecture and does not have Power support.

This article takes you through the steps to create a ppc64le-specific GitLab Runner Operator image. It demonstrates two ways to test the operator: using a CatalogSource and running it locally on an OCP cluster.

Pre-requisites

  • A standalone VM to build the required image/binaries.

This example uses a ppc64le Ubuntu 20.04.2 LTS VM. You can use the PowerVS service on IBM Cloud or Minicloud to get your ppc64le virtual machine. Alternatively, you can also run these steps on an x86_64 VM.

  • A ppc64le OCP cluster for testing the Gitlab Runner Operator

You can deploy a Red Hat OpenShift cluster on IBM Power Systems Virtual Servers using steps in this article: https://developer.ibm.com/components/ibm-power/tutorials/install-ocp-on-power-vs/

Installing Dependencies

You need to install the following dependencies on the VM you will use to build the operator and related images:

  • Golang, Make and other dependencies
apt-get update && apt-get install -y qemu binfmt-support qemu-user-static qemu-system-ppc64 python3-pip golang
  • Docker

The steps for Docker installation can be found here.
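If you just need a quick path on Ubuntu, a minimal sketch using Docker's convenience script (my shortcut, not part of the original steps; follow the official documentation for production hosts):

curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
docker info    # confirm the daemon is up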

  • Operator SDK
wget https://github.com/operator-framework/operator-sdk/releases/download/v1.5.0/operator-sdk_linux_ppc64le
chmod +x operator-sdk_linux_ppc64le && mv operator-sdk_linux_ppc64le /usr/local/bin/operator-sdk
  • Kustomize
go get sigs.k8s.io/kustomize/kustomize/v3
  • OPM
wget https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp/4.6.22/opm-linux-4.6.22.tar.gz
tar -C /usr/bin/ -zxvf opm-linux-4.6.22.tar.gz && rm opm-linux-4.6.22.tar.gz
opm version
  • Export the environment variables below:
export OS=linux
export ARCH=ppc64le
export GITLAB_RUNNER_UBI_IMAGES_REPO_PATH=<Set cloned repo path>
export DOCKER_REGISTRY="<Path to your registry. We used quay.io>"
export VERSION=v13.11.0
export BASE_IMAGE=registry.access.redhat.com/ubi8/ubi:8.4-203
export GIT_LFS_VERSION=2.11.0-2.el8
export DOCKER_CLI_EXPERIMENTAL=enabled
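
Before continuing, a quick sanity check (my addition) that the tools are installed and on the PATH:

# kustomize installs to $(go env GOPATH)/bin; make sure that directory is on your PATH
export PATH=$PATH:$(go env GOPATH)/bin
operator-sdk version
kustomize version
opm version
docker version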

Gitlab Runner and Helper UBI images

The UBI images from the gitlab-runner-ubi-images project need to be built and published. The project builds the rootless UBI-based gitlab-runner and gitlab-runner-helper images for OpenShift. The upstream code does not have Power support and downloads x86 binaries (GitLab Runner, GitLab Runner Helper, Tini) for building the target UBI images.

  • Prepare the building workspace
./ubi.sh prepare "${VERSION}"
curl -Lf https://public.dhe.ibm.com/software/server/cicd/gitlab/v13.11.0/tini -o "${GITLAB_RUNNER_UBI_IMAGES_REPO_PATH}/build/runner/tini"
  • Build Gitlab Runner and Helper binaries for Power

The binaries downloaded into the workspace in the previous step are for x86_64; we need the GitLab Runner and helper binaries for Power. The steps to build the binaries can be found here.

For your convenience, you can download the binaries from the IBM-hosted GitLab Runner packages for Power:

curl -Lf "https://public.dhe.ibm.com/software/server/cicd/gitlab/v13.11.0/gitlab-runner-linux-ppc64le" -o "${GITLAB_RUNNER_UBI_IMAGES_REPO_PATH}/build/runner/gitlab-runner"
curl -Lf "https://public.dhe.ibm.com/software/server/cicd/gitlab/v13.11.0/gitlab-runner-helper.ppc64le" -o "${GITLAB_RUNNER_UBI_IMAGES_REPO_PATH}/build/helper/gitlab-runner-helper"
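
As an optional sanity check (my addition), confirm the downloaded binaries are actually ppc64le builds:

file "${GITLAB_RUNNER_UBI_IMAGES_REPO_PATH}/build/runner/gitlab-runner"           # expect: ELF 64-bit ... 64-bit PowerPC
file "${GITLAB_RUNNER_UBI_IMAGES_REPO_PATH}/build/helper/gitlab-runner-helper"    # expect: ELF 64-bit ... 64-bit PowerPC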
  • Build the UBI-based images
docker buildx build --platform "${OS}/${ARCH}" \
--build-arg BASE_IMAGE="${BASE_IMAGE}" \
--build-arg VERSION="${VERSION}" \
--build-arg GIT_LFS_VERSION="${GIT_LFS_VERSION}" \
-f ./build/runner/Dockerfile.OCP \
-t "${DOCKER_REGISTRY}/gitlab-runner-ocp:${VERSION}" ./build/runner
docker push ${DOCKER_REGISTRY}/gitlab-runner-ocp:v13.11.0
docker buildx build --platform "${OS}/${ARCH}" \
--build-arg BASE_IMAGE="${BASE_IMAGE}" \
--build-arg VERSION="${VERSION}" \
--build-arg GIT_LFS_VERSION="${GIT_LFS_VERSION}" \
-f ./build/helper/Dockerfile.OCP \
-t "${DOCKER_REGISTRY}/gitlab-runner-helper-ocp:${ARCH}-${VERSION}" ./build/helper
docker push ${DOCKER_REGISTRY}/gitlab-runner-helper-ocp:${ARCH}-${VERSION}
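
Optionally (my addition), confirm the pushed images report the expected platform:

docker buildx imagetools inspect "${DOCKER_REGISTRY}/gitlab-runner-ocp:${VERSION}"                 # look for Platform: linux/ppc64le
docker buildx imagetools inspect "${DOCKER_REGISTRY}/gitlab-runner-helper-ocp:${ARCH}-${VERSION}"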

Building Gitlab-Runner-Operator image and bundle

The source code for building this image can be found here. Switch to the multiarch-support branch, which has the code changes needed to build the required images for the specified OS/architecture. See this issue.

  • Generate release.yaml file
CERTIFIED=false \
ARCH="${ARCH}" \
UPSTREAM_UBI_IMAGES_REPOSITORY="${DOCKER_REGISTRY}" \
./scripts/create_release_config.sh v1.0.0 "${VERSION}"
cat hack/assets/release.yaml
  • Build gitlab-runner-operator image
docker buildx build --platform "${OS}/${ARCH}" \
--build-arg VERSION="${VERSION}" \
--build-arg ARCH="${ARCH}" \
-f Dockerfile \
-t "${DOCKER_REGISTRY}/gitlab-runner-operator-${OS}-${ARCH}" .
docker push ${DOCKER_REGISTRY}/gitlab-runner-operator-${OS}-${ARCH}
  • Generate related_images.yaml and create certification bundle
CERTIFIED=false \
ARCH="${ARCH}" \
UPSTREAM_UBI_IMAGES_REPOSITORY="${DOCKER_REGISTRY}" \
GITLAB_RUNNER_OPERATOR_REGISTRY="${DOCKER_REGISTRY}" \
KUBE_RBAC_PROXY_IMAGE="registry.redhat.io/openshift4/ose-kube-rbac-proxy:latest" \
./scripts/create_related_images_config.sh v1.0.0 "${VERSION}"
make bundle certification-bundle IMG=${DOCKER_REGISTRY}/gitlab-runner-operator-${OS}-${ARCH}:latest BUNDLE_IMG=${DOCKER_REGISTRY}/gitlab-runner-operator-bundle:v1.0.0 VERSION=1.0.0 CERTIFIED=false
  • Build operator bundle
docker buildx build --platform "${OS}/${ARCH}" \
--build-arg VERSION="${VERSION}" \
-f certification.Dockerfile \
-t "${DOCKER_REGISTRY}/gitlab-runner-operator-bundle:v1.0.0" .
docker push ${DOCKER_REGISTRY}/gitlab-runner-operator-bundle:v1.0.0
  • Generate index image
opm index add \
--bundles "${DOCKER_REGISTRY}/gitlab-runner-operator-bundle:v1.0.0" \
--tag "${DOCKER_REGISTRY}/operator-catalog-${OS}-${ARCH}:v1.0.0" \
--build-tool docker \
--binary-image registry.redhat.io/openshift4/ose-operator-registry@sha256:80e12437142a91b14c01f6c385c98387daa628669bbfd70bf78fb4602a398bc5
docker push “${DOCKER_REGISTRY}/operator-catalog-${OS}-${ARCH}:v1.0.0”
  • Create manifest
docker manifest create "${DOCKER_REGISTRY}/operator-catalog-manifest:v1.0.0" "${DOCKER_REGISTRY}/operator-catalog-${OS}-${ARCH}:v1.0.0"
docker manifest push "${DOCKER_REGISTRY}/operator-catalog-manifest:v1.0.0"

Installing and configuring the GitLab Runner Operator using a CatalogSource

  • Create CatalogSource

1. On the cluster bastion node, create a CatalogSource in the ‘openshift-marketplace’ namespace.

export DOCKER_REGISTRY="quay.io/soniagarudi"
cat > catalogsource.yaml << EOF
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: gitlab-runner-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: ${DOCKER_REGISTRY}/operator-catalog-manifest:v1.0.0
  displayName: GitLab Runner Operators
  publisher: GitLab Community
EOF
oc apply -f catalogsource.yaml

2. Verify the CatalogSource has been created

oc -n openshift-marketplace get catalogsource

Once you have confirmed the CatalogSource exists, run the following command to verify its associated pod is running successfully. This confirms that the index image has been pulled down from the registry.

oc -n openshift-marketplace get pods

You should see the gitlab-runner-catalog pod in a Running state alongside the other marketplace catalog pods.
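To narrow the output down to the catalog pod (a convenience on my part; the pod name is derived from the CatalogSource name with a generated suffix):

oc -n openshift-marketplace get pods | grep gitlab-runner-catalog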

The CatalogSource can also be viewed in the OpenShift console under Cluster Settings > Global Configuration > OperatorHub > Sources.
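
You can also confirm that the operator package is discoverable from the new catalog (an optional check, not part of the original steps):

oc get packagemanifests -n openshift-marketplace | grep -i gitlab-runner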

  • Create OperatorGroup

1. Create a namespace for your project.

export NEW_PROJECT=gitlab-runner-system
oc new-project $NEW_PROJECT

2. Create OperatorGroup.

cat > test-operatorgroup.yaml << EOF
apiVersion: operators.coreos.com/v1alpha2
kind: OperatorGroup
metadata:
  name: my-group
  namespace: ${NEW_PROJECT}
spec:
  targetNamespaces:
  - ${NEW_PROJECT}
EOF
oc apply -f test-operatorgroup.yaml

Verify you have a working operatorgroup within the namespace you created:

oc get og
NAME       AGE
my-group   8s
  • Create Subscription

1. Create a subscription

cat > test-subscription.yaml << EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: test-subscription
spec:
  channel: stable
  installPlanApproval: Automatic
  name: gitlab-runner-operator
  source: gitlab-runner-catalog
  sourceNamespace: openshift-marketplace
EOF
oc create -f test-subscription.yaml

Verify the subscription is created within your namespace:

oc get sub -n ${NEW_PROJECT}

The creation of the subscription should trigger the creation of the InstallPlan.

oc get installplan -n ${NEW_PROJECT}
NAME            CSV                             APPROVAL    APPROVED
install-lhfp4   gitlab-runner-operator.v1.0.0   Automatic   true

The install plan creates the gitlab-runner-controller-manager pod in your namespace:
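For example (the pod name suffix will differ):

oc get pods -n ${NEW_PROJECT}
# expect a gitlab-runner-controller-manager-* pod in the Running state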

  • Create Runner

1. Create the secret file with your GitLab project’s runner token:

Go to your GitLab project's Settings > CI/CD, expand Runners, and copy the registration token.

cat > gitlab-runner-secret.yml << EOF
apiVersion: v1
kind: Secret
metadata:
  name: gitlab-runner-secret
type: Opaque
stringData:
  runner-registration-token: REPLACE_ME # your project runner secret
EOF
oc apply -f gitlab-runner-secret.yml

2. Create the Runner from CRD

Create a custom resource (CR) file for the Runner with the following information. The tags value must be openshift for the job to run.

cat > gitlab-runner.yml << EOF
apiVersion: apps.gitlab.com/v1beta2
kind: Runner
metadata:
  name: gitlab-runner
spec:
  gitlabUrl: <YOUR GITLAB INSTANCE URL>
  buildImage: alpine
  token: gitlab-runner-secret
  tags: openshift
EOF
oc apply -f gitlab-runner.yml

Confirm that GitLab Runner is installed by running:

oc get runners
NAME            AGE
gitlab-runner   5m

The runner pod should also be visible in the ${NEW_PROJECT} namespace.

Verify that the runner is listed under "Specific runners" for the GitLab project. In your GitLab project, go to Settings > CI/CD and expand Runners.

3. Configuring code to use the new Runner

Modify the tags field in the .gitlab-ci.yml file to use the newly available runner, as in the sketch below. The sample repository used in this tutorial is at https://gitlab.com/soniagarudi/firstproject.
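A minimal .gitlab-ci.yml illustrating the tags entry (the job name and script below are placeholders, not taken from the sample repository):

build-job:
  tags:
    - openshift                         # route the job to the new runner
  script:
    - echo "Running on $(uname -m)"     # should print ppc64le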

A new build will now use the ‘openshift’ runner. It will create a new pod to run the build:

Testing Gitlab-Runner-Operator locally

The Gitlab-Runner-Operator can also be tested locally, without creating the index image and CatalogSource. The steps are documented here; this section walks through them.

After building the gitlab-runner-operator image, clone the gitlab-runner-operator repository on the OCP cluster’s bastion node and switch to the multiarch-support branch.

1. Install dependencies

yum install mercurial wget make golang
yum groupinstall 'Development Tools'
export OS=linux
export ARCH=ppc64le
export DOCKER_REGISTRY="<Path to your registry. We used quay.io>"
export VERSION=v13.11.0

2. Create release.yaml

CERTIFIED=false \
ARCH="${ARCH}" \
UPSTREAM_UBI_IMAGES_REPOSITORY="${DOCKER_REGISTRY}" \
./scripts/create_release_config.sh v1.0.0 "${VERSION}"
cat hack/assets/release.yaml

3. Make the following code changes:

  • In Makefile, change the IMG to ${DOCKER_REGISTRY}/gitlab-runner-operator-${OS}-${ARCH}:latest
  • In config/manager/manager.yaml, add the following environment variable (see the snippet after this list for where it goes):
name: ENABLE_WEBHOOKS
value: "false"
  • In config/default/manager_auth_proxy_patch.yaml, change the kube-rbac-proxy image to registry.redhat.io/openshift4/ose-kube-rbac-proxy:latest
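
For reference, a rough sketch of where the variable sits in config/manager/manager.yaml (placement assumed from the standard operator-sdk manager layout; surrounding fields are elided):

      containers:
      - name: manager
        # ... image, args, and other existing fields ...
        env:
        - name: ENABLE_WEBHOOKS
          value: "false"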

4. Create project namespace

oc new-project gitlab-runner-system

5. Install your Custom Resource Definitions (CRDs)

CRDs allow end users to extend the Kubernetes API by introducing custom resource types.

make install
./scripts/create_kustomization.sh
make deploy

These commands will create the required resources:
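To verify the operator deployment and CRDs (an optional check on my part):

oc get crd | grep gitlab          # expect runners.apps.gitlab.com among the results
oc get pods -n gitlab-runner-system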

6. Create gitlab-runner-secret

Edit the secret in config/samples/runner-secret.yaml by replacing the placeholder and create the secret:

kubectl create -f config/samples/runner-secret.yaml

7. Deploy the sample GitLab Runner instance included in the config/samples directory:

kubectl create -f config/samples/apps_v1beta2_runner.yaml

Verify that a Runner object is created and the corresponding pod is generated.
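For example:

oc get runners -n gitlab-runner-system
oc get pods -n gitlab-runner-system    # a runner pod should appear alongside the controller manager pod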

8. The steps to further configure the GitLab project to list and use the newly available runner are the same as described in the previous section.

Troubleshooting

I have listed a few errors I encountered during this entire process:

1. Invalid ClusterServiceVersion

INFO[0000] ConversionReviewVersion not found for the deployment "gitlab-runner-controller-manager"
ERRO[0000] ClusterServiceVersion validation: [CSVFileNotValid] (gitlab-runner-operator.v1.0.0) only AllNamespaces InstallModeType is supported when conversionCRDs is present
FATA[0000] Error generating bundle manifests: error generating ClusterServiceVersion: invalid generated ClusterServiceVersion
make: *** [Makefile:145: bundle] Error 1

Solution:

I faced the above error with Operator SDK v1.9.0. Make sure to install an appropriate version of operator-sdk; switching to Operator SDK v1.5.0 resolved the issue.

2. Failed calling webhook

Error from server (InternalError): error when creating "config/samples/apps_v1beta2_runner.yaml": Internal error occurred: failed calling webhook "vrunner.kb.io": Post "https://gitlab-runner-webhook-service.gitlab-runner-system.svc:443/validate-apps-gitlab-com-v1beta2-runner?timeout=30s": dial tcp 10.254.20.105:9443: connect: connection refused

Solution: Delete the validating webhook

# oc delete validatingwebhookconfiguration.admissionregistration.k8s.io/gitlab-runner-validating-webhook-configuration
validatingwebhookconfiguration.admissionregistration.k8s.io "gitlab-runner-validating-webhook-configuration" deleted

3. CertDir path not existing

2021-03-10T16:20:15.354Z ERROR setup problem running manager {"error": "open /tmp/k8s-webhook-server/serving-certs/tls.crt: no such file or directory"}

Solution: Disable webhooks (set ENABLE_WEBHOOKS to false)

In config/manager/manager.yaml, add the following environment variable:

name: ENABLE_WEBHOOKS
value: "false"

4. Error while building images using buildx

exec format error

Solution: Make sure your Docker host has QEMU support

apt-get install -y qemu binfmt-support qemu-user-static qemu-system-ppc
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

What’s Next?

With the active development and increasing popularity of GitLab, having a certified Runner Operator for ppc64le that can be deployed into OpenShift from the OperatorHub would be a big help for developers.

For the GitLab Runner Operator to have official Power support, we need to add Power-specific code changes to build binaries/images in the CI/CD pipelines for:

  • Gitlab-Runner
  • Gitlab-Runner-UBI-Images
  • Gitlab-Runner-Operator

While the first step is almost achieved (there is a PR awaiting merge for gitlab-runner :)), the remaining two are in progress.

In the meantime, do try the steps in this article and give it a shoutout :)
