Posts

Prometheus

1. Prometheus ConfigMap

This ConfigMap contains the configuration for Prometheus, defining which services it will scrape metrics from.

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: monitoring
  labels:
    app: prometheus
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
    scrape_configs:
      - job_name: 'kubernetes-apiservers'
        kubernetes_sd_configs:
        - role: endpoints
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          insecure_skip_verify: true
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        relabel_configs:
        - source_labels: [__m...
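As a rough sketch of how such a ConfigMap is typically consumed (the Deployment name, image tag, and mount path below are assumptions for illustration, not taken from the post), Prometheus can mount prometheus-config as a volume and point --config.file at it:

# Minimal sketch of a Prometheus Deployment that mounts the ConfigMap above
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment   # hypothetical name, for illustration only
  namespace: monitoring
  labels:
    app: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - name: prometheus
        image: prom/prometheus:latest            # assumed image
        args:
        - "--config.file=/etc/prometheus/prometheus.yml"
        ports:
        - containerPort: 9090
        volumeMounts:
        - name: config-volume
          mountPath: /etc/prometheus             # assumed mount path
      volumes:
      - name: config-volume
        configMap:
          name: prometheus-config                # the ConfigMap defined above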

Monitoring Kubernetes Cluster

KUBELET: Installing Kubelet on Master Nodes in the K8s Cluster

Yes, in some Kubernetes setups, it is possible and sometimes desirable to install kubelet on master nodes. This configuration can vary based on the deployment model and requirements. Here’s a detailed overview of when and why you might install kubelet on master nodes, and how it fits into different Kubernetes architectures.

1. Master Nodes Running kubelet: Scenarios and Considerations

1.1. Control Plane and Node in Single Node Deployments
Single Node Clusters: In development, testing, or small-scale setups, it’s common to run both control plane components (API server, controller manager, scheduler) and worker node components (kubelet, kube-proxy) on a single node. This setup simplifies the deployment and is useful for local testing or development environments.

1.2. High Availability and Redundancy
High Availability (HA) Setups: In production environments with a high-availability setup, master nodes can also run kubelet to maintain the Kubernetes control plane components and...
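In kubeadm-based clusters this is visible in practice: the control plane components themselves run as static pods managed by the kubelet on each master, and masters carry a NoSchedule taint that keeps ordinary workloads off them. Below is a minimal sketch of a toleration that lets a pod be scheduled onto such a node anyway; the taint key is node-role.kubernetes.io/control-plane in current releases (node-role.kubernetes.io/master in older ones), and the pod name is a placeholder, not taken from the post.

apiVersion: v1
kind: Pod
metadata:
  name: demo-on-master          # placeholder name, for illustration only
spec:
  containers:
  - name: app
    image: nginx:latest
  tolerations:
  # Allows scheduling onto control-plane nodes despite their NoSchedule taint
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Exists"
    effect: "NoSchedule"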

etcd Clustering

In a Kubernetes cluster, the etcd database is the primary data store for all cluster state and configuration. In a multi-master setup, ensuring that etcd data is consistently shared across all master nodes is critical for cluster operation. Here’s how this sharing and synchronization of etcd data is managed:

1. etcd Clustering

When you have multiple Kubernetes master nodes, etcd is typically set up as a cluster. This means that etcd runs in a distributed mode with multiple etcd instances, which are all part of a single logical etcd cluster. The key features of this setup include:

Consensus and Replication: etcd uses the Raft consensus algorithm to manage a distributed cluster. Raft ensures that all etcd instances in the cluster agree on the current state of the data and replicate changes consistently. This means that even if one etcd instance fails, the data remains available and consistent across the remaining instances.

Leader Election: Within an etcd cluster, one...
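To make the clustering idea concrete, here is a minimal sketch of a configuration file for one member of a static three-member etcd cluster (passed to etcd via --config-file). The member names master-1/2/3 and the 10.0.0.x addresses are placeholders, not values from the post.

# etcd config file for member "master-1"; addresses and names are placeholders
name: master-1
data-dir: /var/lib/etcd
listen-peer-urls: https://10.0.0.1:2380
listen-client-urls: https://10.0.0.1:2379,https://127.0.0.1:2379
initial-advertise-peer-urls: https://10.0.0.1:2380
advertise-client-urls: https://10.0.0.1:2379
initial-cluster: master-1=https://10.0.0.1:2380,master-2=https://10.0.0.2:2380,master-3=https://10.0.0.3:2380
initial-cluster-token: etcd-cluster-1
initial-cluster-state: new

With all three members started this way, Raft elects a leader among them and replicates every write through it.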

Resource Request

apiVersion: v1
kind: Pod
metadata:
  name: resource-demo-pod
  labels:
    app: resource-demo
spec:
  containers:
  - name: resource-demo-container
    image: nginx:latest
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
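For a pod like the one above, the requests are what the scheduler reserves when choosing a node, while the limits cap what the running container may consume: CPU beyond the limit is throttled, and memory beyond the limit gets the container OOM-killed. Assuming the manifest is saved as resource-demo-pod.yaml, it can be applied with kubectl apply -f resource-demo-pod.yaml and inspected with kubectl describe pod resource-demo-pod.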

Example Kubernetes Pipeline

stages:
  - build
  - deploy

variables:
  KUBECONFIG: "/path/to/kubeconfig"
  KUBE_NAMESPACE: "your-namespace"

before_script:
  - echo "Starting the CI/CD Pipeline"

build:
  stage: build
  script:
    - echo "Build stage, typically used for building Docker images, running tests, etc."
    - docker build -t your-image:latest .
    - docker push your-image:latest

deploy:
  stage: deploy
  script:
    - echo "Deploying to Kubernetes"
    - kubectl apply -f k8s/deployment.yaml -n $KUBE_NAMESPACE
    - kubectl apply -f k8s/service.yaml -n $KUBE_NAMESPACE
    - kubectl apply -f k8s/ingress.yaml -n $KUBE_NAMESPACE
  only:
    - main
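The deploy stage above applies manifests from a k8s/ directory. As a rough sketch of what k8s/deployment.yaml might contain (the Deployment name, labels, replica count, and port are assumptions; only the your-image:latest tag mirrors the build stage):

# Hypothetical k8s/deployment.yaml referenced by the deploy stage
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-app                   # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: your-app
  template:
    metadata:
      labels:
        app: your-app
    spec:
      containers:
      - name: your-app
        image: your-image:latest   # the image pushed in the build stage
        ports:
        - containerPort: 80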

Node Affinity