Thursday, July 30, 2020

Jenkins ssh-agent plugin



The Jenkins ssh-agent plugin only accepts username and private key as credentials for build jobs, not username and password; otherwise you will get the error below:

FATAL: [ssh-agent] Could not find specified credentials
[ssh-agent] Looking for ssh-agent implementation... 
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
.....
Permission denied, please try again. 
Permission denied (publickey,password).

The target remote server must already be set up for SSH login via private key (public key appended to ~/.ssh/authorized_keys).
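
A minimal sketch of preparing that key-based login; the user jenkins and host target-host are placeholders, substitute your own:

ssh-keygen -t rsa -b 4096 -f ~/.ssh/jenkins_agent   # generate the key pair
ssh-copy-id -i ~/.ssh/jenkins_agent.pub jenkins@target-host   # appends the public key to ~/.ssh/authorized_keys on the server
ssh -i ~/.ssh/jenkins_agent jenkins@target-host   # confirm key-based login works before adding the private key to Jenkins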

Sunday, May 10, 2020

NFS Server Provisioner on K8S

Another dynamic storage provisioner, with images hosted on Quay: an NFS server running inside K8S for a bare-metal/on-premise Kubernetes cluster.

https://github.com/kubernetes-incubator/external-storage/tree/master/nfs

nfs-common and nfs-utils need to be installed on every node running kubelet, otherwise the error log might include: 
  • bad option; for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program.
  • Warning  FailedMount  24m  kubelet, MountVolume.SetUp failed for volume "pvc-2c49da00-3431-4e3c-a1cc-84d1daa4a6ab" : mount failed: exit status 32
In case there is the error: pod has unbound immediate PersistentVolumeClaims, it has nothing to do with storage provisioning; it just indicates that the pod does not have a storage class defined in its manifest file, or that K8S doesn't have a default storage class.

To set up the default storage class in Kubernetes:
  • kubectl patch storageclass storage-class-name -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
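
To verify the default class and exercise provisioning end to end, a minimal sketch (the PVC name test-claim is a placeholder):

kubectl get storageclass   # the default class is marked (default)
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Mi
EOF
kubectl get pvc test-claim   # STATUS should turn Bound once the provisioner works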

Saturday, May 9, 2020

Jenkins on K8S

Jenkins can be deployed via the Rancher catalog or a Helm chart.

A Jenkins K8S deployment will need persistent storage. The Jenkins service can be exposed directly or accessed via ingress rules.
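
For the Helm route, a minimal sketch; the chart repo URL is the official Jenkins one, while the storage class name nfs is an assumption, substitute your own:

helm repo add jenkins https://charts.jenkins.io
helm repo update
helm install jenkins jenkins/jenkins --namespace jenkins --create-namespace \
  --set persistence.storageClass=nfs   # assumes a storage class named "nfs" exists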

Jenkins master -> Credentials -> Add Credentials -> Kind: Kubernetes Service Account
Manage Jenkins -> Manage Nodes and Clouds -> Configure Clouds (for new versions of Jenkins)
  • Test Connection 
  • Pod Retention: Never (the Jenkins slave pod will be terminated after the build job completes)
  • The slave pod will be placed into the jenkins namespace by default

Jenkins slave on K8S

Use K8S as a Jenkins slave/agent via the Jenkins Kubernetes plugin (continuous integration).
  • From the Jenkins master, install the Kubernetes plugin (this plugin integrates Jenkins with Kubernetes)
  • Credentials -> Add Credentials -> Kind: Secret file (kubeconfig file from K8S)
  • Manage Jenkins -> Manage Nodes and Clouds -> Configure Clouds (new version of Jenkins) 
  1. Credentials: select Secret file name just created from drop-down menu, and Test Connection
  2. Jenkins URL and tunnel must match the Jenkins master. Do not include http(s):// in the tunnel value
  3. Pod label will be used for slave pod label in K8S
  4. Pod Template: name will be the prefix of the slave pod name in K8S; a blank namespace will create the slave pod in the default namespace in K8S;
  5. Pod Template -> Labels is critical; it helps the Jenkins master decide which builder will be used for the build job. 
  6. Pod Template: Usage: only build jobs with label expressions matching this label
  7. Define container template details: name: jnlp; docker image: jenkins/jnlp-slave:latest; working directory: /home/jenkins/; also an environment variable key-value pair: {JENKINS_URL: http://jenkins-master:8080} (or add other agents as Pod Templates)
When creating a new build job, check Restrict where this project can be run and enter the Pod Template label from step 5 as the Label Expression, so the Jenkins master will use a K8S pod to execute the build task.

The Jenkins slave starts in K8S, then is terminated in K8S after the build job completes, given the proper Pod Retention setting in the Pod Template.
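
A quick way to observe that lifecycle, assuming the blank-namespace default from step 4:

kubectl get pods -w   # a pod named after the Pod Template prefix appears during the build, then terminates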


Use K8S as a Jenkins slave via the Jenkins Kubernetes Continuous Deploy Plugin (continuous deployment on Kubernetes)
  • From the Jenkins master, install the Kubernetes Continuous Deploy Plugin
  • Credentials -> Add Credentials -> Kind: Kubernetes Configuration (kubeconfig); directly copy the content of the kubeconfig file into the text area. The ID of this Jenkins credential must match the value of kubeconfigId in the Jenkinsfile, which lives in the source code repository, such as GitHub or a private repository
  • If the Kubernetes plugin is already configured, no further configuration is required.
When creating a new pipeline in Jenkins, in the Pipeline section select Pipeline script from SCM, then select the proper SCM, repository, and branch. The default script path is: Jenkinsfile

The Jenkins pipeline checks out source from SCM and starts a slave pod on K8S to build container images and upload them to a registry such as Docker Hub or any private/public registry defined in the Jenkinsfile. The slave pod on K8S is then terminated on completion of the Continuous Integration/Delivery stage. Finally, the pipeline script deploys the app pod via the manifest referenced in the Jenkinsfile, always pulling the container images from the registry.
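
Conceptually, the build-and-deploy stages boil down to the shell steps below; the image name, tag, and manifest path are placeholders for whatever the Jenkinsfile defines:

docker build -t myregistry/myapp:${BUILD_NUMBER} .   # build the container image
docker push myregistry/myapp:${BUILD_NUMBER}         # upload it to the registry
kubectl apply -f k8s/deployment.yaml                 # deploy the app pod; the manifest references the pushed image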

Tuesday, May 5, 2020

Delete a namespace hanging in Terminating

Clean up the content of the finalizers section via a JSON/YAML file, or directly edit the namespace via kubectl.

The method below always works.
Assuming namespace local is terminating...
kubectl get namespace local -o json > local.json         # optional backup of the namespace object
kubectl get namespace local -o json \
            | tr -d "\n" | sed "s/\"finalizers\": \[[^]]\+\]/\"finalizers\": []/" \
            | kubectl replace --raw /api/v1/namespaces/local/finalize -f -     # empty the finalizers list via the finalize subresource
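
If jq is installed, a cleaner sketch of the same edit (same namespace local as above):

kubectl get namespace local -o json \
            | jq '.spec.finalizers = []' \
            | kubectl replace --raw /api/v1/namespaces/local/finalize -f -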

Rancher on bare-metal K8S

  • cert-manager on K8S (validate that cert-manager is up correctly, otherwise the Rancher installation might fail with error: x509: certificate signed by unknown authority; see the verification sketch after this list)
  • ingress-nginx on bare metal (you might need to edit the NodePort options to match your load balancer)
  • Install helm3
  • helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org ###running with --dry-run first would be a good idea
  • kubectl -n cattle-system rollout status deploy/rancher ###verify status
  • Successful installation of Rancher in namespace cattle-system will also create extra namespaces: local, p-xxxxx, p-yyyyy, user-zzzzz, cattle-global-nt, and cattle-global-data. local is used for the cluster hosting Rancher; user-zzzzz is for user authentication. I am not sure of the function of the rest yet; I would guess the global ones are for cross-cluster functionality.
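
A minimal pre-flight sketch covering the cert-manager check and the chart repo setup referenced above, assuming cert-manager sits in its default cert-manager namespace and using the documented rancher-latest repo URL:

kubectl get pods -n cert-manager   # cert-manager, cainjector, and webhook pods should all be Running
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace cattle-system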

Saturday, May 2, 2020

tdnf install -y kubelet kubeadm kubectl --nogpgcheck

tdnf install -y kubelet kubeadm kubectl fails with the error below:

Error processing package: Packages/548a0dcd865c16a50980420ddfa5fbccb8b59621179798e6dc905c9bf8af3b34-kubernetes-cni-0.7.5-0.x86_64.rpm
Error(1508) : GpgKey Url schemes other than file are not supported

tdnf install -y kubelet kubeadm kubectl --nogpgcheck ####disable gpg check as workaround

credit to: https://unix.stackexchange.com/questions/207907/how-to-fix-gpg-key-retrieval-failed-errno-14


cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Tuesday, April 14, 2020

Ingress-nginx on ARMv7 Raspberry Pi 3 Model B

https://github.com/kubernetes/ingress-nginx

Notes after the Installation Guide:
  1. Use MetalLB as the load balancer on bare metal for K8S on Raspberry Pi 3 Model B
  2. In the mandatory.yaml file, change the container image to: quay.io/kubernetes-ingress-controller/nginx-ingress-controller-arm:0.30.0 (thanks alexellis on github)
  3. In cloud-generic.yaml, remove the line: externalTrafficPolicy: Local. The Kubernetes default externalTrafficPolicy is: Cluster. (It matters if Weave Net is used for networking)
  4. kubectl create -f mandatory.yaml -f cloud-generic.yaml (one liner, so nginx-ingress-controller would not complain about the non-existence of the service ingress-nginx, which gets an IP address assigned from MetalLB)
  5. Check logs inside nginx-ingress-controller for readiness or other errors if any, as in the sketch below.
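
A quick verification sketch; the namespace and deployment names below assume the 0.30.0 manifests:

kubectl get svc -n ingress-nginx   # EXTERNAL-IP should show an address from the MetalLB pool
kubectl logs -n ingress-nginx deploy/nginx-ingress-controller --tail=20   # check readiness and errors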
I tried Helm 3 without luck; it seems there was a repo issue.

Monday, March 30, 2020

Open source load testing tool

https://k6.io/blog/comparing-best-open-source-load-testing-tools

Performance testing is a type of testing for determining the speed of a computer, network or device. It checks the performance of the components of a system by passing different parameters in different load scenarios.

Load testing is the process that simulates actual user load on any application or website. It checks how the application behaves during normal and high loads. This type of testing is applied when a development project nears completion.

Stress testing is a type of testing that determines the stability and robustness of the system. It is a non-functional testing technique. This technique uses an auto-generated simulation model to check hypothetical scenarios.
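
As a concrete example, a minimal load test with k6, the tool behind the linked comparison; the target URL is a placeholder:

cat <<'EOF' > script.js
import http from 'k6/http';
export default function () { http.get('https://example.com/'); }
EOF
k6 run --vus 10 --duration 30s script.js   # simulate 10 virtual users for 30 seconds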

Conclusion:
  • Performance testing is a testing method used to determine the speed of a computer, network or devices.
  • Load testing simulates real-world load on any application or website.
  • Stress testing determines the stability and robustness of the system.
  • Performance testing helps to check the performance of website servers, databases, networks.
  • Load testing is used for the Client/Server, Web-based applications.
  • Stress testing is done with unexpected test traffic on your website.

Monday, March 23, 2020

NFS share on OS X Sierra Version 10.12.6 (16G2128) MacBook Pro (15-inch, 2017)

https://support.apple.com/en-us/HT202243

Create NFS share on OS X for NFS client connection
  • mkdir <path to NFS share>
  • sudo chown -R nobody:nobody <path to NFS share>
  • sudo nano /etc/exports, and add line:
<absolute path to NFS share> -maproot=nobody -alldirs ##allow clients to mount at any point within the NFS file system
  • sudo chmod 640 /etc/exports
  • sudo nfsd status (if not running: sudo nfsd enable && sudo nfsd start)
  • showmount -e
Mount NFS share from OSX command line:
  • sudo mkdir /mnt
  • sudo mount -o hard,nolock <NFS share path> /mnt
  • mount | grep nfs
  • ls /mnt

Thursday, March 19, 2020

Kubernetes Metrics Server on Raspberry Pi 3 Model B (ARM v7)

Metrics Server exposes core Kubernetes metrics via the Metrics API. Without Metrics Server, the Horizontal Pod Autoscaler (HPA) and the kubectl top command will not work.

GitHub did not provide a deployment yaml file for a Raspberry Pi cluster, so I had to change the original deploy/kubernetes/metrics-server-deployment.yaml

1. Replace amd64 with arm (under the containers and nodeSelector sections)

2. Modify args as follows (two extra lines):
        args:
          - --cert-dir=/tmp
          - --secure-port=4443
          - --kubelet-insecure-tls ###gets rid of error: http: TLS handshake error
          - --kubelet-preferred-address-types=InternalIP ###fixes error: unable to fully scrape metrics from source kubelet


3. Deploy Metrics Server along with all the rest of the manifest yaml files in the same directory, and wait for a while to get rid of the error: unable to fetch node metrics for node
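
A quick verification sketch once the pods are up (the directory path follows the repo layout mentioned above):

kubectl apply -f deploy/kubernetes/   # apply all manifests in the directory
kubectl top nodes    # prints CPU/memory per node once metrics are being scraped
kubectl top pods -A  # same for pods across all namespaces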