Develop a Kubernetes Operator using GoLang


Kubernetes has revolutionized the world of container orchestration, and Kubernetes Operators take it a step further by automating complex application management tasks. In this blog, we'll delve into the realm of Kubernetes Operators, walking you through the process of developing one using the Go programming language.

Understanding Kubernetes Operators

Before we dive into the technical details, let's understand what a Kubernetes Operator is. An Operator is an extension of Kubernetes that automates the deployment, management, and scaling of applications. It encapsulates domain-specific knowledge to ensure your applications run seamlessly within a Kubernetes environment.

Why GoLang for Developing Operators?

Go, also known as Golang, has gained immense popularity for its simplicity, efficiency, and performance. It's an ideal choice for building Kubernetes Operators due to its strong concurrency support and suitability for systems programming.

Operator SDK

The Operator SDK provides building blocks that simplify Operator development. The Operator SDK makes it easier to build Kubernetes native applications, a process that can require deep, application-specific operational knowledge. Use the SDK to create controllers, handlers, and operators that follow best practices and ensure consistency across your project.

Start building the Operator

Prerequisite

You should have a working Kubernetes or OpenShift cluster.

GitHub repository used for this tutorial

The code for this tutorial is here:

https://github.com/techwithhuz/kubernetes-operator/tree/main/techwithhuz-operator

Step 1: Set Up Your Development Environment

Start by installing the necessary tools: Go, the Kubernetes command-line tool (kubectl), and the Operator SDK.

Install the Operator SDK CLI. We will use the GitHub release method to install the Operator SDK. For macOS users, brew install operator-sdk is also available.

Install from GitHub release

1. Download the release binary

Set platform information:

$ export ARCH=$(case $(uname -m) in x86_64) echo -n amd64 ;; aarch64) echo -n arm64 ;; *) echo -n $(uname -m) ;; esac)
$ export OS=$(uname | awk '{print tolower($0)}')

Download the binary for your platform:

$ export OPERATOR_SDK_DL_URL=https://github.com/operator-framework/operator-sdk/releases/download/v1.31.0
$ curl -LO ${OPERATOR_SDK_DL_URL}/operator-sdk_${OS}_${ARCH}

2. Verify the downloaded binary

Import the operator-sdk release GPG key from keyserver.ubuntu.com:

$ gpg --keyserver keyserver.ubuntu.com --recv-keys 052996E2A20B5C7E

Download the checksums file and its signature, then verify the signature:

$ curl -LO ${OPERATOR_SDK_DL_URL}/checksums.txt
$ curl -LO ${OPERATOR_SDK_DL_URL}/checksums.txt.asc
$ gpg -u "Operator SDK (release) <cncf-operator-sdk@cncf.io>" --verify checksums.txt.asc

You should see something similar to the following:

gpg: assuming signed data in 'checksums.txt'
gpg: Signature made Fri 30 Oct 2020 12:15:15 PM PDT
gpg:                using RSA key ADE83605E945FA5A1BD8639C59E5B47624962185
gpg: Good signature from "Operator SDK (release) <cncf-operator-sdk@cncf.io>" [ultimate]

Make sure the checksums match:

$ grep operator-sdk_${OS}_${ARCH} checksums.txt | sha256sum -c -

You should see something similar to the following:

operator-sdk_linux_amd64: OK
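
3. Install the release binary in your PATH

Following the official Operator SDK install docs (adjust the destination if you keep binaries elsewhere):

$ chmod +x operator-sdk_${OS}_${ARCH}
$ sudo mv operator-sdk_${OS}_${ARCH} /usr/local/bin/operator-sdk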
Install Go

Download the latest version of Go from https://go.dev/dl, then extract the archive:

$ wget https://go.dev/dl/go1.21.0.linux-amd64.tar.gz
Saving to: 'go1.21.0.linux-amd64.tar.gz'

go1.21.0.linux-amd64.tar.gz          100%[===================================================================>]  63.40M  85.2MB/s    in 0.7s    

2023-08-13 08:30:42 (85.2 MB/s) - 'go1.21.0.linux-amd64.tar.gz' saved [66479500/66479500]

$ tar -xvf go1.21.0.linux-amd64.tar.gz

$ pwd
/root/operator
Set the PATH variable for Go

You can either add the custom path where you extracted the Go binaries to PATH, or copy the binaries to the default /usr/local path.

Here, we will use the custom path and then check the version:

$ export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/root/operator/go/bin:/snap/bin:/root/operator/go/bin:/snap/bin

$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/root/operator/go/bin:/snap/bin:/root/operator/go/bin:/snap/bin

 $ go version
go version go1.21.0 linux/amd64

Step 2: Start building the Operator

We will create and deploy one demo application named techwithhuz through the GoLang operator, and we will expose it with a service. The application will be a simple Deployment running an nginx image.

1. Create a new project

Use the CLI to create a new techwithhuz-operator project:

$ mkdir techwithhuz-operator
$ cd techwithhuz-operator/

$ operator-sdk init --domain techwithhuz.com --repo github.com/techwithhuz/techwithhuz-operator
Writing kustomize manifests for you to edit...
Writing scaffold for you to edit...
Get controller runtime:
$ go get sigs.k8s.io/controller-runtime@v0.14.1
go: downloading sigs.k8s.io/controller-runtime v0.14.1
go: downloading k8s.io/apimachinery v0.26.0
go: downloading k8s.io/client-go v0.26.0
go: downloading k8s.io/api v0.26.0
(... remaining module download lines trimmed for brevity ...)
Update dependencies:
$ go mod tidy
go: downloading github.com/stretchr/testify v1.8.0
(... remaining module download lines trimmed for brevity ...)
Next: define a resource with:
$ operator-sdk create api

$ pwd
/root/techwithhuz-operator
$ ls
Dockerfile  Makefile  PROJECT  README.md  config  go.mod  go.sum  hack  main.go

The operator-sdk init command above creates all the necessary folder structure for us inside the techwithhuz-operator folder.

2. Create the API for Techwithhuz

$ operator-sdk create api --group cache --version v1alpha1 --kind Techwithhuz --resource --controller
Writing kustomize manifests for you to edit...
Writing scaffold for you to edit...
api/v1alpha1/techwithhuz_types.go
controllers/techwithhuz_controller.go
Update dependencies:
$ go mod tidy
Running make:
$ make generate
test -s /root/techwithhuz-operator/bin/controller-gen && /root/techwithhuz-operator/bin/controller-gen --version | grep -q v0.11.1 || \
GOBIN=/root/techwithhuz-operator/bin go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.11.1
go: downloading github.com/spf13/cobra v1.6.1
go: downloading github.com/gobuffalo/flect v0.3.0
go: downloading golang.org/x/tools v0.4.0
go: downloading github.com/fatih/color v1.13.0
go: downloading k8s.io/utils v0.0.0-20221107191617-1a15be271d1d
go: downloading github.com/mattn/go-colorable v0.1.9
go: downloading github.com/mattn/go-isatty v0.0.14
go: downloading golang.org/x/mod v0.7.0
go: downloading golang.org/x/net v0.4.0
/root/techwithhuz-operator/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
Next: implement your new API and generate the manifests (e.g. CRDs,CRs) with:
$ make manifests
$ 
$ ls
Dockerfile  Makefile  PROJECT  README.md  api  bin  config  controllers  go.mod  go.sum  hack  main.go

$ ls api/v1alpha1/techwithhuz_types.go 
api/v1alpha1/techwithhuz_types.go

Now we have the api folder created by the above command, and we can see the techwithhuz_types.go file inside it.

Step 3: Define the API

To begin, we will represent our API by defining the Techwithhuz type, which will have TechwithhuzSpec.Size and TechwithhuzSpec.ContainerPort fields to set the number of replicas to be deployed and the container port to be used, and a TechwithhuzStatus.Conditions field to store the CR's Conditions.

Define the API for the Techwithhuz Custom Resource (CR) by modifying the Go type definitions at api/v1alpha1/techwithhuz_types.go to have the following spec and status:

// TechwithhuzSpec defines the desired state of Techwithhuz
type TechwithhuzSpec struct {
    // INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
    // Important: Run "make" to regenerate code after modifying this file

    // Foo is an example field of Techwithhuz. Edit techwithhuz_types.go to remove/update
    // Foo string `json:"foo,omitempty"`

    // Add Size and ContainerPort properties whose values can be passed through the Custom Resource (CR) file.
    Size          int32 `json:"size,omitempty"`
    ContainerPort int32 `json:"containerPort,omitempty"`
}

// TechwithhuzStatus defines the observed state of Techwithhuz
type TechwithhuzStatus struct {
    // INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
    // Important: Run "make" to regenerate code after modifying this file
    // Conditions store the status conditions of the TechWithHuz instances
    // +operator-sdk:csv:customresourcedefinitions:type=status
    Conditions []metav1.Condition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions"`
}

Now add the +kubebuilder:subresource:status marker to add a status subresource to the CRD manifest so that the controller can update the CR status without changing the rest of the CR object:

//+kubebuilder:object:root=true
//+kubebuilder:subresource:status

// Techwithhuz is the Schema for the techwithhuzs API
type Techwithhuz struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   TechwithhuzSpec   `json:"spec,omitempty"`
    Status TechwithhuzStatus `json:"status,omitempty"`
}

After modifying the *_types.go file, always run the following command to update the generated code for that resource type:

$ make generate
test -s /root/kubernetes-operator/techwithhuz-operator/bin/controller-gen && /root/kubernetes-operator/techwithhuz-operator/bin/controller-gen --version | grep -q v0.11.1 || \
GOBIN=/root/kubernetes-operator/techwithhuz-operator/bin go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.11.1
/root/kubernetes-operator/techwithhuz-operator/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."

The above Makefile target invokes the controller-gen utility to update the api/v1alpha1/zz_generated.deepcopy.go file, ensuring our API's Go type definitions implement the runtime.Object interface that all Kind types must implement.
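
For context, here is an illustrative sketch of the kind of method controller-gen emits into api/v1alpha1/zz_generated.deepcopy.go (the real file is fully generated, also needs the k8s.io/apimachinery/pkg/runtime import, and should never be edited by hand):

// DeepCopyObject returns a generically typed copy of the receiver.
// This method is what satisfies the runtime.Object interface.
func (in *Techwithhuz) DeepCopyObject() runtime.Object {
    if c := in.DeepCopy(); c != nil {
        return c
    }
    return nil
}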

Step 4: Generating CRD manifests

Once the API is defined with spec/status fields and CRD validation markers, the CRD manifests can be generated and updated with the following command.

$ make manifests
test -s /root/kubernetes-operator/techwithhuz-operator/bin/controller-gen && /root/kubernetes-operator/techwithhuz-operator/bin/controller-gen --version | grep -q v0.11.1 || \
GOBIN=/root/kubernetes-operator/techwithhuz-operator/bin go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.11.1
/root/kubernetes-operator/techwithhuz-operator/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases

This Makefile target will invoke controller-gen to generate the CRD manifests at config/crd/bases/cache.techwithhuz.com_techwithhuzs.yaml.

Step 5: Implement the Controller

The controller file will contain the reconciliation logic. We have the controller file already generated controller/techwithuz_controller.go. Now we need to write the logic like in our case we want to create a kind Deployment for our application Techwithhuz.

So we need to write the yaml file of kind deployment but in Go lang format.

We will see now how to do that.
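
One thing before we start: the controller file must import every package the snippets below use. A sketch of the import block (the package paths match the controller-runtime v0.14 / k8s.io v0.26 versions pulled in earlier; adjust the last module path to your own repository):

import (
    "context"
    "fmt"
    "time"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    "k8s.io/apimachinery/pkg/api/meta"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
    "sigs.k8s.io/controller-runtime/pkg/log"

    cachev1alpha1 "github.com/techwithhuz/techwithhuz-operator/api/v1alpha1"
)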

// deploymentForTechwithhuz returns a Techwithhuz Deployment object
func (r *TechwithhuzReconciler) deploymentForTechwithhuz(techwithhuz *cachev1alpha1.Techwithhuz) (*appsv1.Deployment, error) {
    replicas := techwithhuz.Spec.Size
    ls := labelsForTechwithhuz(techwithhuz.Name)

    // Get the Operand image
    image := "nginx:latest"
    dep := &appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{
            Name:      techwithhuz.Name,
            Namespace: techwithhuz.Namespace,
        },
        Spec: appsv1.DeploymentSpec{
            Replicas: &replicas,
            Selector: &metav1.LabelSelector{
                MatchLabels: ls,
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{
                    Labels: ls,
                },
                Spec: corev1.PodSpec{
                    SecurityContext: &corev1.PodSecurityContext{
                        RunAsNonRoot: &[]bool{true}[0],
                        SeccompProfile: &corev1.SeccompProfile{
                            Type: corev1.SeccompProfileTypeRuntimeDefault,
                        },
                    },
                    Containers: []corev1.Container{{
                        Image:           image,
                        Name:            "techwithhuz",
                        ImagePullPolicy: corev1.PullIfNotPresent,
                        SecurityContext: &corev1.SecurityContext{
                            RunAsNonRoot:             &[]bool{true}[0],
                            RunAsUser:                &[]int64{1001}[0],
                            AllowPrivilegeEscalation: &[]bool{false}[0],
                            Capabilities: &corev1.Capabilities{
                                Drop: []corev1.Capability{
                                    "ALL",
                                },
                            },
                        },
                        Ports: []corev1.ContainerPort{{
                            ContainerPort: techwithhuz.Spec.ContainerPort,
                            Name:          "techwithhuz",
                        }},
                        Command: []string{"sleep", "1000s"},
                    }},
                },
            },
        },
    }
    if err := ctrl.SetControllerReference(techwithhuz, dep, r.Scheme); err != nil {
        return nil, err
    }
    return dep, nil
}

As you can see, the code above expresses the Deployment kind in Go.

To learn more about how to write this code and what fields are available, refer to the Kubernetes API reference (https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#deploymentspec-v1-apps) and the Go package documentation (https://pkg.go.dev/k8s.io/api/core/v1).
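
Note that deploymentForTechwithhuz calls a labelsForTechwithhuz helper that isn't shown above. A minimal sketch of it (the exact label keys are an illustrative convention, not a requirement; see the repository for the author's version):

// labelsForTechwithhuz returns the labels for selecting the resources
// belonging to the given Techwithhuz CR name.
func labelsForTechwithhuz(name string) map[string]string {
    return map[string]string{
        "app.kubernetes.io/name":       "Techwithhuz",
        "app.kubernetes.io/instance":   name,
        "app.kubernetes.io/managed-by": "TechwithhuzController",
    }
}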

Now we have to write the reconciliation logic: for example, when the user sets size to 3 in the CR file, our reconcile logic should scale the Deployment up to 3 pods.

func (r *TechwithhuzReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    log := log.FromContext(ctx)

    // Fetch the Techwithhuz instance.
    // The purpose is to check whether a Custom Resource of Kind Techwithhuz
    // is applied on the cluster; if not, we return nil to stop the reconciliation.
    techwithhuz := &cachev1alpha1.Techwithhuz{}
    err := r.Get(ctx, req.NamespacedName, techwithhuz)
    if err != nil {
        if apierrors.IsNotFound(err) {
            // If the custom resource is not found then, it usually means that it was deleted or not created
            // In this way, we will stop the reconciliation
            log.Info("techwithhuz resource not found. Ignoring since object must be deleted")
            return ctrl.Result{}, nil
        }
        // Error reading the object - requeue the request.
        log.Error(err, "Failed to get techwithhuz")
        return ctrl.Result{}, err
    }
    // Let's just set the status as Unknown when no status is available
    if len(techwithhuz.Status.Conditions) == 0 {
        meta.SetStatusCondition(&techwithhuz.Status.Conditions, metav1.Condition{Type: typeAvailableTechwithhuz, Status: metav1.ConditionUnknown, Reason: "Reconciling", Message: "Starting reconciliation"})
        if err = r.Status().Update(ctx, techwithhuz); err != nil {
            log.Error(err, "Failed to update techwithhuz status")
            return ctrl.Result{}, err
        }

        // Let's re-fetch the techwithhuz Custom Resource after updating the status
        // so that we have the latest state of the resource on the cluster, and we avoid
        // raising the error "the object has been modified, please apply
        // your changes to the latest version and try again", which would re-trigger the
        // reconciliation if we try to update it again in the following operations
        if err := r.Get(ctx, req.NamespacedName, techwithhuz); err != nil {
            log.Error(err, "Failed to re-fetch techwithhuz")
            return ctrl.Result{}, err
        }
    }
    // Let's add a finalizer. With it, we can define operations that should
    // occur before the custom resource is deleted.
    // More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/finalizers
    if !controllerutil.ContainsFinalizer(techwithhuz, techwithhuzFinalizer) {
        log.Info("Adding Finalizer for techwithhuz")
        if ok := controllerutil.AddFinalizer(techwithhuz, techwithhuzFinalizer); !ok {
            log.Error(err, "Failed to add finalizer into the custom resource")
            return ctrl.Result{Requeue: true}, nil
        }

        if err = r.Update(ctx, techwithhuz); err != nil {
            log.Error(err, "Failed to update custom resource to add finalizer")
            return ctrl.Result{}, err
        }
    }
    // Check if the Techwithhuz instance is marked to be deleted, which is
    // indicated by the deletion timestamp being set.
    isTechwithhuzMarkedToBeDeleted := techwithhuz.GetDeletionTimestamp() != nil
    if isTechwithhuzMarkedToBeDeleted {
        if controllerutil.ContainsFinalizer(techwithhuz, techwithhuzFinalizer) {
            log.Info("Performing Finalizer Operations for techwithhuz before delete CR")

            // Let's add a "Degraded" status here to indicate that this resource has begun its termination process.
            meta.SetStatusCondition(&techwithhuz.Status.Conditions, metav1.Condition{Type: typeDegradedTechwithhuz,
                Status: metav1.ConditionUnknown, Reason: "Finalizing",
                Message: fmt.Sprintf("Performing finalizer operations for the custom resource: %s ", techwithhuz.Name)})

            if err := r.Status().Update(ctx, techwithhuz); err != nil {
                log.Error(err, "Failed to update techwithhuz status")
                return ctrl.Result{}, err
            }

            // Perform all operations required before removing the finalizer, so the
            // Kubernetes API can then remove the custom resource.
            r.doFinalizerOperationsForTechwithhuz(techwithhuz)

            // Re-fetch the techwithhuz Custom Resource before updating the status
            // so that we have the latest state of the resource on the cluster, and we avoid
            // raising the error "the object has been modified, please apply
            // your changes to the latest version and try again", which would re-trigger the reconciliation
            if err := r.Get(ctx, req.NamespacedName, techwithhuz); err != nil {
                log.Error(err, "Failed to re-fetch techwithhuz")
                return ctrl.Result{}, err
            }

            meta.SetStatusCondition(&techwithhuz.Status.Conditions, metav1.Condition{Type: typeDegradedTechwithhuz,
                Status: metav1.ConditionTrue, Reason: "Finalizing",
                Message: fmt.Sprintf("Finalizer operations for custom resource %s name were successfully accomplished", techwithhuz.Name)})

            if err := r.Status().Update(ctx, techwithhuz); err != nil {
                log.Error(err, "Failed to update Techwithhuz status")
                return ctrl.Result{}, err
            }

            log.Info("Removing Finalizer for Techwithhuz after successfully perform the operations")
            if ok := controllerutil.RemoveFinalizer(techwithhuz, techwithhuzFinalizer); !ok {
                log.Error(err, "Failed to remove finalizer for techwithhuz")
                return ctrl.Result{Requeue: true}, nil
            }

            if err := r.Update(ctx, techwithhuz); err != nil {
                log.Error(err, "Failed to remove finalizer for techwithhuz")
                return ctrl.Result{}, err
            }
        }
        return ctrl.Result{}, nil
    }
    // Check if the deployment already exists, if not create a new one
    found := &appsv1.Deployment{}
    err = r.Get(ctx, types.NamespacedName{Name: techwithhuz.Name, Namespace: techwithhuz.Namespace}, found)
    if err != nil && apierrors.IsNotFound(err) {
        // Define a new deployment
        dep, err := r.deploymentForTechwithhuz(techwithhuz)
        if err != nil {
            log.Error(err, "Failed to define new Deployment resource for techwithhuz")

            // The following implementation will update the status
            meta.SetStatusCondition(&techwithhuz.Status.Conditions, metav1.Condition{Type: typeAvailableTechwithhuz,
                Status: metav1.ConditionFalse, Reason: "Reconciling",
                Message: fmt.Sprintf("Failed to create Deployment for the custom resource (%s): (%s)", techwithhuz.Name, err)})

            if err := r.Status().Update(ctx, techwithhuz); err != nil {
                log.Error(err, "Failed to update techwithhuz status")
                return ctrl.Result{}, err
            }

            return ctrl.Result{}, err
        }

        log.Info("Creating a new Deployment",
            "Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name)
        if err = r.Create(ctx, dep); err != nil {
            log.Error(err, "Failed to create new Deployment",
                "Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name)
            return ctrl.Result{}, err
        }

        // Deployment created successfully
        // We will requeue the reconciliation so that we can ensure the state
        // and move forward for the next operations
        return ctrl.Result{RequeueAfter: time.Minute}, nil
    } else if err != nil {
        log.Error(err, "Failed to get Deployment")
        // Let's return the error so that the reconciliation is re-triggered
        return ctrl.Result{}, err
    }

    // The CRD API defines that the Techwithhuz type has a TechwithhuzSpec.Size field
    // to set the desired quantity of Deployment replicas on the cluster.
    // Therefore, the following code ensures the Deployment size is the same as defined
    // via the Size spec of the Custom Resource which we are reconciling.
    size := techwithhuz.Spec.Size
    if *found.Spec.Replicas != size {
        found.Spec.Replicas = &size
        if err = r.Update(ctx, found); err != nil {
            log.Error(err, "Failed to update Deployment",
                "Deployment.Namespace", found.Namespace, "Deployment.Name", found.Name)

            // Re-fetch the techwithhuz Custom Resource before updating the status
            // so that we have the latest state of the resource on the cluster, and we avoid
            // raising the error "the object has been modified, please apply
            // your changes to the latest version and try again", which would re-trigger the reconciliation
            if err := r.Get(ctx, req.NamespacedName, techwithhuz); err != nil {
                log.Error(err, "Failed to re-fetch techwithhuz")
                return ctrl.Result{}, err
            }

            // The following implementation will update the status
            meta.SetStatusCondition(&techwithhuz.Status.Conditions, metav1.Condition{Type: typeAvailableTechwithhuz,
                Status: metav1.ConditionFalse, Reason: "Resizing",
                Message: fmt.Sprintf("Failed to update the size for the custom resource (%s): (%s)", techwithhuz.Name, err)})

            if err := r.Status().Update(ctx, techwithhuz); err != nil {
                log.Error(err, "Failed to update techwithhuz status")
                return ctrl.Result{}, err
            }

            return ctrl.Result{}, err
        }

        // Now that we have updated the size, we requeue the reconciliation
        // so that we can ensure we have the latest state of the resource before
        // the next update. This also helps ensure the desired state on the cluster.
        return ctrl.Result{Requeue: true}, nil
    }

    // The following implementation will update the status
    meta.SetStatusCondition(&techwithhuz.Status.Conditions, metav1.Condition{Type: typeAvailableTechwithhuz,
        Status: metav1.ConditionTrue, Reason: "Reconciling",
        Message: fmt.Sprintf("Deployment for custom resource (%s) with %d replicas created successfully", techwithhuz.Name, size)})

    if err := r.Status().Update(ctx, techwithhuz); err != nil {
        log.Error(err, "Failed to update techwithhuz status")
        return ctrl.Result{}, err
    }
    return ctrl.Result{}, nil
}
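
The Reconcile function also references a finalizer name, two condition-type constants, and a doFinalizerOperationsForTechwithhuz helper that aren't defined in the snippets above. A minimal sketch of what they could look like (the names and the log-only cleanup are assumptions for illustration; check the linked repository for the author's definitions):

const techwithhuzFinalizer = "cache.techwithhuz.com/finalizer"

// Condition types recorded in the CR status.
const (
    typeAvailableTechwithhuz = "Available"
    typeDegradedTechwithhuz  = "Degraded"
)

// doFinalizerOperationsForTechwithhuz runs any cleanup required before the CR
// is removed. Child objects that carry an owner reference (like our Deployment)
// are garbage-collected by Kubernetes automatically, so this sketch only logs;
// put external cleanup (e.g. releasing cloud resources) in this hook.
func (r *TechwithhuzReconciler) doFinalizerOperationsForTechwithhuz(cr *cachev1alpha1.Techwithhuz) {
    log.Log.Info("Finalizer cleanup complete", "name", cr.Name, "namespace", cr.Namespace)
}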

Lastly, we have to update the SetupWithManager function to tell the manager to watch our CR and the Deployments it owns. The Owns(&appsv1.Deployment{}) call means that changes to a Deployment created by the operator also trigger a reconcile.

// SetupWithManager sets up the controller with the Manager.
func (r *TechwithhuzReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        For(&cachev1alpha1.Techwithhuz{}).
        Owns(&appsv1.Deployment{}).
        Complete(r)
}

The complete controllers/techwithhuz_controller.go file can be found here - https://github.com/techwithhuz/kubernetes-operator/blob/feature/techwithhuz-operator/techwithhuz-operator/controllers/techwithhuz_controller.go

Step 6: Run the Operator

Before building a Docker image and deploying the operator to an environment, we should first run and test the code locally.

To test the operator locally, run this command:

$ make install run
test -s /root/kubernetes-operator/techwithhuz-operator/bin/controller-gen && /root/kubernetes-operator/techwithhuz-operator/bin/controller-gen --version | grep -q v0.11.1 || \
GOBIN=/root/kubernetes-operator/techwithhuz-operator/bin go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.11.1
/root/kubernetes-operator/techwithhuz-operator/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
/root/kubernetes-operator/techwithhuz-operator/bin/kustomize build config/crd | kubectl apply -f -
customresourcedefinition.apiextensions.k8s.io/techwithhuzs.cache.techwithhuz.com created
/root/kubernetes-operator/techwithhuz-operator/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
go: downloading github.com/onsi/ginkgo/v2 v2.6.0
go: downloading github.com/onsi/gomega v1.24.1

go run ./main.go
2023-08-16T12:41:36Z    INFO    controller-runtime.metrics      Metrics server is starting to listen    {"addr": ":8080"}
2023-08-16T12:41:36Z    INFO    setup   starting manager
2023-08-16T12:41:36Z    INFO    Starting server {"path": "/metrics", "kind": "metrics", "addr": "[::]:8080"}
2023-08-16T12:41:36Z    INFO    Starting server {"kind": "health probe", "addr": "[::]:8081"}
2023-08-16T12:41:36Z    INFO    Starting EventSource    {"controller": "techwithhuz", "controllerGroup": "cache.techwithhuz.com", "controllerKind": "Techwithhuz", "source": "kind source: *v1alpha1.Techwithhuz"}
2023-08-16T12:41:36Z    INFO    Starting EventSource    {"controller": "techwithhuz", "controllerGroup": "cache.techwithhuz.com", "controllerKind": "Techwithhuz", "source": "kind source: *v1.Deployment"}
2023-08-16T12:41:36Z    INFO    Starting Controller     {"controller": "techwithhuz", "controllerGroup": "cache.techwithhuz.com", "controllerKind": "Techwithhuz"}
2023-08-16T12:41:36Z    INFO    Starting workers        {"controller": "techwithhuz", "controllerGroup": "cache.techwithhuz.com", "controllerKind": "Techwithhuz", "worker count": 1}

If it throws any errors related to variables or functions, fix those first.

Successful logs will look like the above.

Step 7: Deploy the Custom Resource Definition (CRD) file

Now open another terminal tab or SSH session to the server where our operator is running, and deploy the Custom Resource Definition file.

$ pwd   
/root/kubernetes-operator/techwithhuz-operator/config/crd/bases

$ kubectl apply -f cache.techwithhuz.com_techwithhuzs.yaml 
customresourcedefinition.apiextensions.k8s.io/techwithhuzs.cache.techwithhuz.com configured

$ kubectl get crds techwithhuzs.cache.techwithhuz.com 
NAME                                 CREATED AT
techwithhuzs.cache.techwithhuz.com   2023-08-16T12:39:16Z

Step 8: Deploy the Custom Resource (CR) file

Now the last step is to deploy the CR file of Kind Techwithhuz and see the magic.

$ pwd
/root/kubernetes-operator/techwithhuz-operator/config/samples

$ cat techwithhuz-cr.yaml 
apiVersion: cache.techwithhuz.com/v1alpha1 
kind: Techwithhuz
metadata:
  name: techwithhuz-sample
spec:
  size: 1
  containerPort: 8080

$ kubectl get pods
No resources found in default namespace.

$ kubectl apply -f techwithhuz-cr.yaml 
techwithhuz.cache.techwithhuz.com/techwithhuz-sample created

$ kubectl get techwithhuz
NAME                 AGE
techwithhuz-sample   3m22s

Now go back to the previous console where our operator code is running:

2023-08-16T12:47:39Z    INFO    Adding Finalizer for techwithhuz        {"controller": "techwithhuz", "controllerGroup": "cache.techwithhuz.com", "controllerKind": "Techwithhuz", "Techwithhuz": {"name":"techwithhuz-sample","namespace":"default"}, "namespace": "default", "name": "techwithhuz-sample", "reconcileID": "674549cf-7a73-437e-a6cd-fd4805db6651"}
2023-08-16T12:47:40Z    INFO    Creating a new Deployment       {"controller": "techwithhuz", "controllerGroup": "cache.techwithhuz.com", "controllerKind": "Techwithhuz", "Techwithhuz": {"name":"techwithhuz-sample","namespace":"default"}, "namespace": "default", "name": "techwithhuz-sample", "reconcileID": "674549cf-7a73-437e-a6cd-fd4805db6651", "Deployment.Namespace": "default", "Deployment.Name": "techwithhuz-sample"}

We can see in the logs that the operator detected that a CR of kind Techwithhuz was applied, with spec.size set to 1 and spec.containerPort set to 8080. It then runs the reconcile loop, executes our deploymentForTechwithhuz code, and creates a new Deployment.

$ kubectl get deployments
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
techwithhuz-sample   0/1     1            0           70s

$ kubectl get pods
NAME                                  READY   STATUS              RESTARTS   AGE
techwithhuz-sample-6f6b66478b-jnbqg   0/1     ContainerCreating   0          85s
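
You can also check the status Conditions our controller sets on the CR (the field names come from the TechwithhuzStatus type we defined earlier):

$ kubectl get techwithhuz techwithhuz-sample -o jsonpath='{.status.conditions}'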

So now we have the Deployment created, and our application techwithhuz is managed by our GoLang operator.

Just as we created the Deployment kind, other Kubernetes objects such as Services, Secrets, Routes, ConfigMaps, PVCs, etc. can be created following the same logic, as sketched below.
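
For example, a Service fronting the same pods could be built with a sibling helper and reconciled with the same get-then-create pattern used for the Deployment. A hedged sketch (the helper name is mine, not from the repository; it additionally needs the k8s.io/apimachinery/pkg/util/intstr import):

// serviceForTechwithhuz returns a ClusterIP Service exposing the
// Techwithhuz pods on the port defined in the CR spec.
func (r *TechwithhuzReconciler) serviceForTechwithhuz(techwithhuz *cachev1alpha1.Techwithhuz) (*corev1.Service, error) {
    ls := labelsForTechwithhuz(techwithhuz.Name)
    svc := &corev1.Service{
        ObjectMeta: metav1.ObjectMeta{
            Name:      techwithhuz.Name,
            Namespace: techwithhuz.Namespace,
        },
        Spec: corev1.ServiceSpec{
            Selector: ls,
            Ports: []corev1.ServicePort{{
                Port:       techwithhuz.Spec.ContainerPort,
                TargetPort: intstr.FromInt(int(techwithhuz.Spec.ContainerPort)),
            }},
        },
    }
    // Set the CR as the owner so the Service is garbage-collected with it.
    if err := ctrl.SetControllerReference(techwithhuz, svc, r.Scheme); err != nil {
        return nil, err
    }
    return svc, nil
}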

This approach runs the operator locally, but for production a Docker image of the operator needs to be built. A Dockerfile is already present in the repository; it is generated by default when we run the operator-sdk init command.
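
The scaffolded Makefile already provides targets for this. Assuming you have a container registry you can push to (the IMG value below is a placeholder), the flow looks roughly like:

$ make docker-build docker-push IMG=<some-registry>/techwithhuz-operator:v0.0.1
$ make deploy IMG=<some-registry>/techwithhuz-operator:v0.0.1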

That's it for this blog post. We can make the code more modular and generic so that we can deploy and manage multiple applications, making our work and lives easier.

Lastly, I would like to say that building a GoLang operator is a little challenging and complicated, but once you have it ready, it is very easy to manage and very stable for use in a production environment.

Feel free to drop any queries in the comments.
