Posts

etcd Clustering

In a Kubernetes cluster, the etcd database is the primary data store for all cluster state and configuration. In a multi-master setup, ensuring that etcd data is consistently shared across all master nodes is critical for cluster operation. Here’s how this sharing and synchronization of etcd data is managed:

1. etcd Clustering

When you have multiple Kubernetes master nodes, etcd is typically set up as a cluster: etcd runs in distributed mode with multiple etcd instances, all of which are part of a single logical etcd cluster. The key features of this setup include:

Consensus and Replication: etcd uses the Raft consensus algorithm to manage the distributed cluster. Raft ensures that all etcd instances in the cluster agree on the current state of the data and replicate changes consistently, so even if one etcd instance fails, the data remains available and consistent across the remaining instances.

Leader Election: Within an etcd cluster, one instance is elected as the leader; all writes go through the leader and are replicated to the followers, and a new leader is elected automatically if the current one becomes unavailable.
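As a rough sketch of how the members of such a cluster are wired together (the node names, IPs, and image tag below are placeholders, not values from the post), each etcd instance is started with flags that list every peer in the cluster:

# Excerpt from an etcd static Pod manifest, e.g. /etc/kubernetes/manifests/etcd.yaml
spec:
  containers:
  - name: etcd
    image: registry.k8s.io/etcd:3.5.12-0    # illustrative tag
    command:
    - etcd
    - --name=master-1
    - --listen-client-urls=https://10.0.0.11:2379,https://127.0.0.1:2379
    - --advertise-client-urls=https://10.0.0.11:2379
    - --listen-peer-urls=https://10.0.0.11:2380
    - --initial-advertise-peer-urls=https://10.0.0.11:2380
    - --initial-cluster=master-1=https://10.0.0.11:2380,master-2=https://10.0.0.12:2380,master-3=https://10.0.0.13:2380
    - --initial-cluster-state=new

The --initial-cluster flag is what ties the three instances into one logical cluster; master-2 and master-3 carry the same flag with their own --name and URLs.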

Resource Request

apiVersion: v1
kind: Pod
metadata:
  name: resource-demo-pod
  labels:
    app: resource-demo
spec:
  containers:
  - name: resource-demo-container
    image: nginx:latest
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
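To check that the requests and limits were picked up (the file name below is assumed), something like:

kubectl apply -f resource-demo-pod.yaml
kubectl get pod resource-demo-pod -o jsonpath='{.spec.containers[0].resources}'
kubectl describe pod resource-demo-pod    # shows Requests, Limits and the QoS Class (Burstable here, since requests < limits)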

Example Kubernetes Pipeline (GitLab CI)

stages:
  - build
  - deploy

variables:
  KUBECONFIG: "/path/to/kubeconfig"
  KUBE_NAMESPACE: "your-namespace"

before_script:
  - echo "Starting the CI/CD Pipeline"

build:
  stage: build
  script:
    - echo "Build stage, typically used for building Docker images, running tests, etc."
    - docker build -t your-image:latest .
    - docker push your-image:latest

deploy:
  stage: deploy
  script:
    - echo "Deploying to Kubernetes"
    - kubectl apply -f k8s/deployment.yaml -n $KUBE_NAMESPACE
    - kubectl apply -f k8s/service.yaml -n $KUBE_NAMESPACE
    - kubectl apply -f k8s/ingress.yaml -n $KUBE_NAMESPACE
  only:
    - main
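For reference, a minimal sketch of what the k8s/deployment.yaml applied above might contain; the Deployment name and labels are placeholders, not files from the post:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: your-app
  template:
    metadata:
      labels:
        app: your-app
    spec:
      containers:
      - name: your-app
        image: your-image:latest    # same image pushed in the build stage
        ports:
        - containerPort: 80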

Node Affinity
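Node affinity lets a Pod express rules against node labels, either as a hard requirement (requiredDuringSchedulingIgnoredDuringExecution) or as a preference (preferredDuringSchedulingIgnoredDuringExecution). A minimal sketch of the hard-requirement form, with a placeholder label key and values:

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: In
            values:
            - large
            - medium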

Taints & Tolerations

Taints are applied to Nodes; tolerations are applied to Pods.

kubectl taint nodes node-name key=value:taintEffect

The taint effect tells the scheduler what happens to Pods that do not tolerate the taint. There are three taint effects:

- NoSchedule: the Pods will not be scheduled on the Node.
- PreferNoSchedule: the system will try not to schedule the Pod on the Node, but this is not guaranteed.
- NoExecute: new Pods will not be scheduled on the Node and existing Pods that do not tolerate the taint will be evicted. These Pods may have been scheduled before the taint was applied.

kubectl taint nodes node1 app=blue:NoSchedule
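A Pod tolerates the taint above by declaring a matching toleration in its spec; a minimal sketch (the Pod name is a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: blue-pod
spec:
  containers:
  - name: nginx
    image: nginx
  tolerations:
  - key: "app"
    operator: "Equal"
    value: "blue"
    effect: "NoSchedule"

The taint can be removed again by appending a minus sign: kubectl taint nodes node1 app=blue:NoSchedule-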

Labels & Selector

kubectl get pods --selector env=dev                        # list Pods with the label env=dev
kubectl get pods --selector env=dev --no-headers           # same, without the header row
kubectl get pods --selector env=dev --no-headers | wc -l   # count the matching Pods
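For context, the labels being selected on are set under metadata in the Pod definition; a minimal sketch, followed by a selector matching on more than one label (names and values are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: app-dev
  labels:
    env: dev
    tier: frontend
spec:
  containers:
  - name: nginx
    image: nginx

kubectl get pods --selector env=dev,tier=frontend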

Kubernetes - Schedule | Annotations

Manual Scheduling: what happens when you do not have a scheduler in your cluster, or you do not want to rely on the built-in scheduler and want to schedule the Pods yourself?

Every Pod has a nodeName field that is not set by default; you don't specify it when you create your Pod manifest file, Kubernetes adds it automatically. The scheduler looks for all the Pods that do not have this property set; they are the candidates for scheduling. To manually schedule a Pod, you can set nodeName in the Pod definition file to place the Pod on a node of your choice. You can specify nodeName only at creation time.

What if the Pod is already created? The way to do it is to create a Binding object and send a POST request to the Pod's binding API. In Kubernetes, a Binding object assigns a Pod to a specific node; this is the same mechanism the Kubernetes scheduler uses to bind a Pod to a particular Node. Both approaches are sketched below.
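A minimal sketch of both approaches, assuming a Pod named nginx and a node named node01 (both placeholders):

# Approach 1: set nodeName directly at creation time
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeName: node01
  containers:
  - name: nginx
    image: nginx

# Approach 2: for a Pod that already exists, POST a Binding object to its binding subresource
apiVersion: v1
kind: Binding
metadata:
  name: nginx
target:
  apiVersion: v1
  kind: Node
  name: node01

The Binding is sent to the API server in JSON form, for example:

curl --header "Content-Type: application/json" --request POST \
  --data '{"apiVersion":"v1","kind":"Binding","metadata":{"name":"nginx"},"target":{"apiVersion":"v1","kind":"Node","name":"node01"}}' \
  http://$SERVER/api/v1/namespaces/default/pods/nginx/binding/

where $SERVER is a placeholder for the API server address.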