Sunday, May 10, 2020

NFS Server Provisioner on K8S

Another dynamic storage provisioner, hosted on Quay: an NFS server running inside K8S for a bare-metal/on-premise Kubernetes cluster.

https://github.com/kubernetes-incubator/external-storage/tree/master/nfs
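One way to deploy it is via the (now-deprecated) stable Helm chart; a sketch, assuming Helm 3 and that the release name nfs-provisioner is free:

```shell
helm repo add stable https://charts.helm.sh/stable
helm repo update
# Creates an in-cluster NFS server plus a StorageClass
# (named "nfs" by default, unless overridden in the chart values)
helm install nfs-provisioner stable/nfs-server-provisioner \
  --set persistence.enabled=true,persistence.size=10Gi
```

Requires a live cluster; the chart name and value keys are from the stable chart and may differ in forks of it.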

nfs-common (Debian/Ubuntu) or nfs-utils (RHEL-family) needs to be installed on every kubelet node, otherwise the error log might include: 
  • bad option; for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program.
  • Warning  FailedMount  24m  kubelet, MountVolume.SetUp failed for volume "pvc-2c49da00-3431-4e3c-a1cc-84d1daa4a6ab" : mount failed: exit status 32
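Installing the mount helper packages above on each node is a one-liner per distro family (run as root on the node, not inside the cluster):

```shell
# Debian/Ubuntu nodes:
apt-get install -y nfs-common
# RHEL/CentOS/Fedora nodes:
yum install -y nfs-utils
```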
If there is an error: pod has unbound immediate PersistentVolumeClaims, it has nothing to do with the storage provisioner itself; it just indicates that the PVC has no storage class defined in its manifest and K8S doesn't have a default storage class.

To set up default storage class in kubernetes:
  • kubectl patch storageclass storage-class-name -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
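Once a default class is set, a PVC that omits storageClassName binds to it automatically; a sketch using the document's heredoc style (the claim name test-pvc is arbitrary):

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
# The STORAGECLASS column should show the default class:
kubectl get pvc test-pvc
```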

Saturday, May 9, 2020

Jenkins on K8S

Jenkins can be deployed via the Rancher catalog or a Helm chart.

Jenkins K8S deployment will need persistent storage. The Jenkins service can be exposed directly or accessed via ingress rules.

Jenkins master -> Credentials -> add Credentials -> kind: Kubernetes Service Account
Manage Jenkins -> Manage Nodes and Clouds -> Configure Clouds (for newer versions of Jenkins)
  • Test Connection 
  • Pod Retention: Never (Jenkins slave pod will be terminated after build job completes)
  • Slave pod will be placed into jenkins namespace by default

Jenkins slave on K8S

Use K8S as Jenkins slave/agent via Jenkins kubernetes plugin. (continuous integration)
  • From Jenkins master, install Kubernetes plugin (This plugin integrates Jenkins with Kubernetes)
  • Credentials -> Add Credentials -> Kind: Secret file (kubeconfig file from K8S)
  • Manage Jenkins -> Manage Nodes and Cloud -> Configure Clouds (new version of Jenkins) 
  1. Credentials: select Secret file name just created from drop-down menu, and Test Connection
  2. Jenkins URL and tunnel should match the Jenkins master. The tunnel does not take an https:// prefix (it is host:port)
  3. Pod label will be used for slave pod label in K8S
  4. Pod Template: name will be the prefix for the slave pod name in K8S; a blank namespace will create slave pods in the default namespace in K8S;
  5. Pod Template -> Labels is critical; it helps the Jenkins master decide which builder will be used for the build job. 
  6. Pod Template: Usage: Only build jobs with label expressions matching this node
  7. Define container template details: name: jnlp; docker image: jenkins/jnlp-slave:latest; working directory: /home/jenkins/; also an environment variable key/value pair: {JENKINS_URL: http://jenkins-master:8080} (or add other agents as Pod Templates)
When creating a new build job, check Restrict where this project can be run and enter the Label Expression: the Pod Template label from step 5 above, so the Jenkins master will use a K8S pod to execute the build task.

The Jenkins slave starts in K8S, then is terminated after the build job completes, given the proper Pod Retention setting in the Pod Template.


Use K8S as Jenkins slave via the Jenkins Kubernetes Continuous Deploy plugin (continuous deployment on kubernetes)
  • From Jenkins master, install Kubernetes Continuous Deploy Plugin
  • Credentials -> Add Credentials -> Kind: Kubernetes Configuration (KubeConfig); directly copy the content of the kubeconfig file into the text box. The ID of this Jenkins credential must match the value of kubeConfigId in the Jenkinsfile, which lives in the source code repository, such as GitHub or a private repository
  • If Kubernetes plugin is already configured, no more configuration is required.
When creating a new pipeline in Jenkins, in the Pipeline section: select Pipeline script from SCM, then select the proper SCM, repository, and branch. The default script path is: Jenkinsfile
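A minimal Jenkinsfile sketch for such a pipeline (the credential ID my-kubeconfig and the manifest path k8s/deployment.yaml are hypothetical names; kubeConfigId must match the ID of the Kubernetes Configuration credential created above):

```groovy
pipeline {
  agent any
  stages {
    stage('Deploy') {
      steps {
        // kubernetesDeploy is provided by the Kubernetes Continuous Deploy plugin
        kubernetesDeploy(kubeConfigId: 'my-kubeconfig',
                         configs: 'k8s/deployment.yaml')
      }
    }
  }
}
```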

The Jenkins pipeline checks out source from SCM and starts a slave pod on K8S to build container images and upload them to a registry such as Docker Hub or any private/public registry defined in the Jenkinsfile from SCM. Then the slave pod on K8S is terminated upon completion of the Continuous Integration/Delivery process. Finally, the pipeline script deploys the app Pod via the manifest referenced in the Jenkinsfile, always pulling container images from the registry.

Tuesday, May 5, 2020

Delete Terminating hanging namespace

Clean up the content of the finalizers section via a JSON/YAML file, or directly edit the namespace via kubectl.

The method below always works:
Assuming local is terminating...
kubectl get namespace local -o json > local.json         
kubectl get namespace local -o json \
            | tr -d "\n" | sed "s/\"finalizers\": \[[^]]\+\]/\"finalizers\": []/" \
            | kubectl replace --raw /api/v1/namespaces/local/finalize -f -     
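The sed substitution above can be sanity-checked offline against a canned JSON snippet, no cluster needed:

```shell
# Simulate kubectl output containing a populated finalizers list
json='{"spec": {"finalizers": ["kubernetes"]}}'
echo "$json" | tr -d "\n" | sed "s/\"finalizers\": \[[^]]\+\]/\"finalizers\": []/"
# prints: {"spec": {"finalizers": []}}
```

Note the `\+` quantifier is a GNU sed extension; on BSD sed use `\{1,\}` instead.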

Rancher on bare-metal K8S

  • cert-manager on K8S (validate that cert-manager is up and running correctly, otherwise Rancher installation might fail with error: x509: certificate signed by unknown authority)
  • ingress-nginx on bare-metal (you might need to edit the NodePort option to match your load balancer)
  • Install helm3
  • helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org ### running with --dry-run first would be a good idea
  • kubectl -n cattle-system rollout status deploy/rancher ###verify status
  • A successful installation of Rancher in namespace cattle-system will also create extra namespaces: local, p-xxxxx, p-yyyyy, user-zzzzz, cattle-global-nt, and cattle-global-data. local is used for the cluster hosting Rancher; user-zzzzz is for user authentication. Not sure about the function of the rest yet; I would guess the global ones are for cross-cluster functionality.

Saturday, May 2, 2020

tdnf install -y kubelet kubeadm kubectl --nogpgcheck

tdnf install -y kubelet kubeadm kubectl fails with the error below:

Error processing package: Packages/548a0dcd865c16a50980420ddfa5fbccb8b59621179798e6dc905c9bf8af3b34-kubernetes-cni-0.7.5-0.x86_64.rpm
Error(1508) : GpgKey Url schemes other than file are not supported

tdnf install -y kubelet kubeadm kubectl --nogpgcheck ####disable gpg check as workaround

credit to: https://unix.stackexchange.com/questions/207907/how-to-fix-gpg-key-retrieval-failed-errno-14


cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF