Posts

Showing posts from August, 2024

Prometheus

1. Prometheus ConfigMap

This ConfigMap contains the configuration for Prometheus, defining which services it will scrape metrics from.

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: monitoring
  labels:
    app: prometheus
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
    scrape_configs:
      - job_name: 'kubernetes-apiservers'
        kubernetes_sd_configs:
        - role: endpoints
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          insecure_skip_verify: true
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        relabel_configs:
        - source_labels: [__m...
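The excerpt above is cut off, but the ConfigMap only takes effect once a Prometheus instance mounts it. A minimal sketch of a Deployment that does so follows; the image tag, port, and volume names are illustrative assumptions rather than values from the original post.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
  namespace: monitoring
  labels:
    app: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - name: prometheus
        image: prom/prometheus:latest        # illustrative tag; pin a version in practice
        args:
        - --config.file=/etc/prometheus/prometheus.yml
        ports:
        - containerPort: 9090                # Prometheus default web port
        volumeMounts:
        - name: config-volume
          mountPath: /etc/prometheus
      volumes:
      - name: config-volume
        configMap:
          name: prometheus-config            # the ConfigMap defined above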

Monitoring Kubernetes Cluster

KUBELET: Installing kubelet on Master Nodes in the K8s Cluster

Yes, in some Kubernetes setups, it is possible and sometimes desirable to install kubelet on master nodes. This configuration can vary based on the deployment model and requirements. Here’s a detailed overview of when and why you might install kubelet on master nodes, and how it fits into different Kubernetes architectures.

1. Master Nodes Running kubelet: Scenarios and Considerations

1.1. Control Plane and Node in Single Node Deployments
Single Node Clusters: In development, testing, or small-scale setups, it’s common to run both control plane components (API server, controller manager, scheduler) and worker node components (kubelet, kube-proxy) on a single node. This setup simplifies the deployment and is useful for local testing or development environments.

1.2. High Availability and Redundancy
High Availability (HA) Setups: In production environments with a high-availability setup, master nodes can also run kubelet to maintain the Kubernetes control plane components and...
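On kubeadm-built clusters, master nodes running kubelet are typically tainted (newer kubeadm versions use node-role.kubernetes.io/control-plane; older ones used node-role.kubernetes.io/master) so ordinary workloads stay off them. A Pod that genuinely needs to run there can declare a matching toleration. A minimal sketch, with the Pod name and image as illustrative placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: control-plane-agent           # illustrative name
spec:
  containers:
  - name: agent
    image: busybox:1.36                # illustrative image
    command: ["sleep", "infinity"]
  tolerations:
  - key: "node-role.kubernetes.io/control-plane"   # taint kubeadm applies to control-plane nodes
    operator: "Exists"
    effect: "NoSchedule"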

etcd Clustering

In a Kubernetes cluster, the etcd database is the primary data store for all cluster state and configuration. In a multi-master setup, ensuring that etcd data is consistently shared across all master nodes is critical for cluster operation. Here’s how this sharing and synchronization of etcd data is managed:   1. etcd Clustering When you have multiple Kubernetes master nodes, etcd is typically set up as a cluster. This means that etcd runs in a distributed mode with multiple etcd instances, which are all part of a single logical etcd cluster. The key features of this setup include: Consensus and Replication : etcd uses the Raft consensus algorithm to manage a distributed cluster. Raft ensures that all etcd instances in the cluster agree on the current state of the data and replicate changes consistently. This means that even if one etcd instance fails, the data remains available and consistent across the remaining instances. Leader Election : Within an etcd cluster, one...
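As a rough sketch of what clustering looks like in practice, each etcd member is started with flags that name every peer in the cluster. The fragment below could appear in an etcd static Pod manifest; the member names, IP addresses, and image version are placeholders, not values from the post.

spec:
  containers:
  - name: etcd
    image: registry.k8s.io/etcd:3.5.12-0    # illustrative version
    command:
    - etcd
    - --name=master-1
    - --initial-advertise-peer-urls=https://10.0.0.1:2380
    - --listen-peer-urls=https://10.0.0.1:2380
    - --listen-client-urls=https://10.0.0.1:2379,https://127.0.0.1:2379
    - --advertise-client-urls=https://10.0.0.1:2379
    # every member lists all peers so Raft can form a single logical cluster
    - --initial-cluster=master-1=https://10.0.0.1:2380,master-2=https://10.0.0.2:2380,master-3=https://10.0.0.3:2380
    - --initial-cluster-state=new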

Resource Request

apiVersion: v1
kind: Pod
metadata:
  name: resource-demo-pod
  labels:
    app: resource-demo
spec:
  containers:
  - name: resource-demo-container
    image: nginx:latest
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
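Requests and limits can also be defaulted per namespace with a LimitRange, so containers that omit them still get sensible values. A minimal sketch; the name, namespace, and values are illustrative assumptions:

apiVersion: v1
kind: LimitRange
metadata:
  name: default-resources           # illustrative name
  namespace: your-namespace         # illustrative namespace
spec:
  limits:
  - type: Container
    defaultRequest:                 # applied when a container sets no requests
      memory: "64Mi"
      cpu: "250m"
    default:                        # applied when a container sets no limits
      memory: "128Mi"
      cpu: "500m"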

Example Kubernetes Pipeline

stages:
  - build
  - deploy

variables:
  KUBECONFIG: "/path/to/kubeconfig"
  KUBE_NAMESPACE: "your-namespace"

before_script:
  - echo "Starting the CI/CD Pipeline"

build:
  stage: build
  script:
    - echo "Build stage, typically used for building Docker images, running tests, etc."
    - docker build -t your-image:latest .
    - docker push your-image:latest

deploy:
  stage: deploy
  script:
    - echo "Deploying to Kubernetes"
    - kubectl apply -f k8s/deployment.yaml -n $KUBE_NAMESPACE
    - kubectl apply -f k8s/service.yaml -n $KUBE_NAMESPACE
    - kubectl apply -f k8s/ingress.yaml -n $KUBE_NAMESPACE
  only:
    - main
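The deploy stage assumes manifests under k8s/. A minimal sketch of what k8s/deployment.yaml might contain is below; the app name, replica count, and container port are illustrative and not taken from the original post:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-app                    # illustrative name
  labels:
    app: your-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: your-app
  template:
    metadata:
      labels:
        app: your-app
    spec:
      containers:
      - name: your-app
        image: your-image:latest    # the image pushed by the build stage
        ports:
        - containerPort: 80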

Node Affinity

Taints & Nodes

Taints: are applied to Nodes. Tolerations: are applied to Pods.

kubectl taint nodes node-name key=value:taintEffect

The taint effect tells Kubernetes what will happen to Pods that do not tolerate the taint. There are three taint effects:
--NoSchedule: the Pods will not be scheduled on the node.
--PreferNoSchedule: the system will try not to schedule the Pod on the node, but this is not guaranteed.
--NoExecute: new Pods will not be scheduled on the node and existing Pods will be evicted. These Pods may have been scheduled before the taint was applied.

kubectl taint nodes node1 app=blue:NoSchedule
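For the example taint above, a Pod needs a matching toleration to be scheduled on node1. A minimal sketch; the Pod name and image are illustrative placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: blue-app                 # illustrative name
spec:
  containers:
  - name: nginx
    image: nginx:latest
  tolerations:
  - key: "app"                   # matches the app=blue:NoSchedule taint on node1
    operator: "Equal"
    value: "blue"
    effect: "NoSchedule"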

Labels & Selector

kubectl get pods --selector env=dev
kubectl get pods --selector env=dev --no-headers
kubectl get pods --selector env=dev --no-headers | wc -l
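These selectors match Pods labelled env=dev. A minimal sketch of such a Pod; the name and image are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: dev-pod                  # illustrative name
  labels:
    env: dev                     # label matched by --selector env=dev
spec:
  containers:
  - name: nginx
    image: nginx:latest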

Kubernetes - Schedule | Annotations

Manual Scheduling: What happens when you do not have a scheduler in your cluster, or you do not want to rely on the built-in scheduler and want to schedule the Pods yourself?

Every Pod has a nodeName field that is not set by default; you don't specify it when you create your Pod manifest file, Kubernetes adds it automatically. The scheduler looks for all the Pods that do not have this property set; they are the candidates for scheduling. To manually schedule a Pod, you can set this field in the Pod definition file to place the Pod on a node of your choice.

You can specify nodeName only at creation time. What if the Pod is already created? The way to do it is to create a Binding object and send a POST request to the Pod's binding API. In Kubernetes, a Binding object is created when a Pod needs to be assigned to a specific node. This object is used by the Kubernetes scheduler to bind a Pod to a particular Node. Here’...
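A minimal sketch of such a Binding object; the Pod and node names are placeholders. In JSON form it would be POSTed to the Pod's binding subresource, i.e. /api/v1/namespaces/<namespace>/pods/<pod-name>/binding on the API server:

apiVersion: v1
kind: Binding
metadata:
  name: nginx-pod                # name of the Pod to bind (placeholder)
target:
  apiVersion: v1
  kind: Node
  name: node01                   # node to schedule the Pod onto (placeholder)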

Kubernetes- High Availability

Kubernetes High Availability

High availability (HA) in a Kubernetes cluster ensures that the system remains operational and accessible even in the event of hardware or software failures. Achieving HA involves configuring the Kubernetes components, control plane, and underlying infrastructure so that redundancy and failover mechanisms are in place. Here’s how you can achieve high availability in a Kubernetes cluster:

1. High Availability for Control Plane Components:
The Kubernetes control plane includes several critical components such as the API server, etcd, scheduler, and controller manager. These components must be highly available to ensure the cluster remains operational.

Multiple API Server Instances: Run multiple instances of the API server on different nodes. This can be done by setting up the API server behind a load balancer that distributes traffic among the instances. If one instance fails, the load balancer will direct traffic to the remaining healt...
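With kubeadm, the load-balanced API endpoint described above is typically configured through controlPlaneEndpoint, so every node and kubeconfig points at the load balancer rather than an individual API server. A minimal sketch; the DNS name and Kubernetes version are illustrative assumptions:

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.29.0                          # illustrative version
controlPlaneEndpoint: "k8s-api.example.com:6443"    # DNS name of the load balancer (placeholder)
etcd:
  local:
    dataDir: /var/lib/etcd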

Deploy Cluster: Managed K8s & Self-Managed K8s

Self-Managed K8s & Managed K8s

In a self-managed Kubernetes cluster, the user is responsible for managing almost every aspect of the cluster. Here are the main components that need to be managed by users:

1. Control Plane Components:
Kube-apiserver: The API server is the central management point of the Kubernetes cluster. Users are responsible for setting up, configuring, and managing its availability and performance.
Etcd: The key-value store used by Kubernetes for all cluster data. Users need to manage the deployment, backup, scaling, and recovery of etcd.
Kube-scheduler: This component is responsible for scheduling pods on available nodes. Users manage its configuration and ensure it runs reliably.
Kube-controller-manager: This component runs controller processes that handle replication, node management, endpoints, etc. Users are responsible for managing these controllers.

2. Worker Node Components:
Kubelet: This agent runs on each worker node and is responsible for ...

All Kubernetes Components

Kubernetes is a powerful container orchestration platform composed of several core components, each serving specific roles to manage containerized applications. Here’s a detailed overview of all the major components of Kubernetes:

1. Master Node Components
The master node manages the Kubernetes cluster. It has several key components:

API Server (kube-apiserver):
Role: Serves the Kubernetes API. It processes API requests from clients, such as kubectl, and communicates with other components in the cluster.
Endpoint: Exposes the Kubernetes API.

Controller Manager (kube-controller-manager):
Role: Manages controllers that handle routine tasks and maintain the desired state of the cluster. For example, it handles replication, deployment, and job controllers.
Functions: Includes controllers for nodes, replication, deployments, and more.

Scheduler (kube-scheduler):
Role: Assigns Pods to available nodes based on resource availability and other constraints.
Function: Ensure...

AKS Cluster: Step-by-step guide to create an AKS cluster with the Azure CLI

Creating a Kubernetes cluster involves several steps, and you can choose between different platforms like Google Kubernetes Engine (GKE), Amazon EKS, Azure AKS, or using tools like kubeadm for on-premises clusters. Creating an Azure Kubernetes Service (AKS) cluster involves a series of steps, from setting up the Azure environment to deploying and managing your Kubernetes resources. Here's a step-by-step guide to help you create and manage an AKS cluster on Azure.

Prerequisites
Azure Account: Ensure you have an Azure account. You can sign up for a free account here.
Azure CLI: Install the Azure CLI on your local machine. You can download it here.
Kubernetes CLI (kubectl): Install kubectl to interact with your Kubernetes cluster. Installation instructions are here.

Step-by-Step Process

Set Up Azure CLI
Log in to Azure: ...
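As a rough sketch of the CLI steps the post walks through, the commands below log in, create a resource group, create a small AKS cluster, and fetch its credentials. The resource group name, cluster name, location, and node count are illustrative placeholders, not values from the original post:

# Log in to Azure (opens a browser for authentication)
az login

# Create a resource group (placeholder name and location)
az group create --name myResourceGroup --location eastus

# Create the AKS cluster (placeholder name; 2 nodes for illustration)
az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 2 --generate-ssh-keys

# Merge the cluster credentials into your kubeconfig so kubectl can reach it
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

# Verify access
kubectl get nodes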