
Configuration file - Locations and Configurations

In a Kubernetes cluster, the master node is responsible for managing the cluster's control plane components. These components include the API server, scheduler, controller manager, and etcd (the cluster's key-value store). Where the configuration files for these components live on the master node depends on how the cluster was set up.

How do you see the kube-apiserver in your existing cluster? It depends on how you set up your cluster. If you set it up with the kubeadm tool, kubeadm deploys the kube-apiserver as a static Pod. Location: /etc/kubernetes/manifests/kube-apiserver.yaml. In a non-kubeadm setup, you can inspect the kube-apiserver service unit instead: cat /etc/systemd/system/kube-apiserver.service

kube-apiserver: If the kube-apiserver is installed using the kubeadm tool. Location: /etc/kubernetes/manifests/kube-apiserver.yaml. Description: The kube-apiserver is the Kubernetes API server that exposes the Kubernetes API. It is the fron...
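For reference, a kubeadm-generated static Pod manifest at /etc/kubernetes/manifests/kube-apiserver.yaml looks roughly like this trimmed sketch (the image version, addresses, and flags are illustrative placeholders and vary by cluster):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    image: registry.k8s.io/kube-apiserver:v1.28.0   # version varies
    command:
    - kube-apiserver
    - --advertise-address=192.168.1.10              # placeholder address
    - --etcd-servers=https://127.0.0.1:2379
    - --service-cluster-ip-range=10.96.0.0/12
```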

Services

The only mandatory field is port, which is the port of the service. If you do not set targetPort, it defaults to the value of port, and if you do not define a nodePort, an available port within the range 30000-32767 is assigned automatically. So far there is nothing in the definition file that connects the service to the Pod: we have specified a targetPort, but not in which Pod, and there could be hundreds of other Pods with web services running on port 80. We use labels and selectors to make that link. We know the Pod was created with labels, so we pull those labels from the Pod definition file and place them under the selector section of the service. This links the service to the Pod. Once done: kubectl create -f service-definition.yml. The curl command uses the IP address of the node followed by the node port number. So far we have talked about a service mapped to a single Pod, but that is not the case all the time. What if we have multiple Pods? It uses a rand...
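Putting the pieces above together, a minimal NodePort service definition might look like the sketch below (the names are illustrative; the selector must match the labels in the Pod's definition file):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: NodePort
  ports:
  - port: 80          # port of the service (the only mandatory field)
    targetPort: 80    # defaults to port if omitted
    nodePort: 30008   # auto-assigned from 30000-32767 if omitted
  selector:
    app: myapp        # must match the Pod's labels
    type: front-end
```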

Command line Utilities

Run the following commands on the master node:
ps -aux | grep kube-apiserver
ps -aux | grep kube-scheduler
ps -aux | grep kube-controller-manager
ps -aux | grep etcd

Run the following commands on the worker nodes:
ps -aux | grep kubelet
ps -aux | grep kube-proxy
ps -aux | grep docker      # If Docker is used as the container runtime
ps -aux | grep containerd  # If containerd is used as the container runtime

Kubectl commands
kubectl run nginx --image nginx
kubectl get pods

Kubernetes Service
kubectl get service
kubectl get svc
kubectl describe svc <service_name>

Deployment
kubectl get deploy
kubectl describe deploy <deployment_name>

Namespace
kubectl get pods --namespace=kube-system
kubectl create -f pod-definition.yml --namespace=dev

If you do not want to mention the namespace on the command line, you can set it in the definition file instead:
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  namespace: dev

Creating Namespace
--- a...
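A Namespace object can itself be created from a definition file; a minimal sketch (the filename namespace-dev.yml is hypothetical):

```yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: dev
```

Apply it with kubectl create -f namespace-dev.yml, or equivalently create it directly with kubectl create namespace dev.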

Deployments

Deployments

Difference between Deployments and ReplicaSets
Deployments and ReplicaSets are both concepts in Kubernetes, a popular container orchestration platform. Let's explore the differences between them:

Purpose and Abstraction Level:
Deployments: Deployments are a higher-level abstraction in Kubernetes that manages the deployment and scaling of a set of Pods. Deployments provide a declarative way to define and manage applications and their updates.
ReplicaSets: ReplicaSets are a lower-level abstraction used for maintaining a set number of identical Pod replicas. While Deployments use ReplicaSets internally, Deployments provide additional features such as declarative updates, rollbacks, and scaling.

Updates and Rollbacks:
Deployments: Deployments allow you to perform rolling updates and rollbacks. Rolling updates allow you to update the Pods in a controlled manner, ensuring that the application remains available during the update. Rollbacks en...
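A minimal Deployment definition illustrating the relationship described above (the Deployment creates and manages a ReplicaSet, which in turn maintains the Pod replicas; names are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app: myapp
spec:
  replicas: 3                # the ReplicaSet keeps 3 identical Pods running
  selector:
    matchLabels:
      app: myapp             # must match the Pod template labels below
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx-container
        image: nginx
```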

Pod using yaml

Creating a Pod using a YAML-based file. Kubernetes uses YAML files as input to create objects such as Pods, ReplicaSets, Deployments, and Services.

pod-definition.yml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-app
  labels:
    app: myapp
    type: front-end
spec:
  containers:
  - name: nginx-container
    image: nginx
---
kubectl create -f pod-definition.yml
kubectl apply -f pod-definition.yml
Both these commands work the same way when you are creating a new object.
----
kubectl get pods
kubectl describe pod myapp-app
-- create a new pod using the nginx image
kubectl run nginx --image=nginx
kubectl get pods -o wide
-- deleting a pod
kubectl delete pod webapp
-- dry run: generate a Pod YAML file without creating the object. In this case the image is a wrong image, redis123
kubectl run redis --image redis123 --dry-run -o yaml
--dry-run=client   (this is the ...

Kube-Proxy

Within a Kubernetes cluster, every Pod can reach every other Pod. This is accomplished by deploying a Pod networking solution to the cluster. A Pod network is an internal virtual network that spans all the nodes in the cluster; because all the Pods connect to this network, they are able to communicate with each other. There are many solutions available to deploy such a network.

In the first scenario:
1. I have a web application installed on the first node.
2. A database is installed on the second node.
The web application can reach the database simply by using the IP of the database Pod, but there is no guarantee that this IP will remain the same. You must know that the better way to access the database is through a service, so we create a service to expose the database application across the cluster. The web application can now access the database by using the name of the service. The service also gets an IP address assigned to it; whenever a Pod reaches using its IP ...
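The database scenario above can be sketched with a ClusterIP service (the default service type), assuming the database Pods carry an app: db label and listen on port 3306 (both assumptions for illustration); other Pods then reach the database by the service name db-service instead of a Pod IP:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db-service
spec:
  type: ClusterIP
  ports:
  - port: 3306        # port exposed by the service
    targetPort: 3306  # port the database container listens on (assumed)
  selector:
    app: db           # assumed label on the database Pods
```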

Kubelet

If you use the kubeadm tool to deploy the cluster, it does not automatically install the kubelet; that is the difference between the kubelet and the other components. You must always manually install the kubelet on the worker nodes: extract it from the Kubernetes release page and run it as a service.
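Once installed, the kubelet is usually pointed at a configuration file. A minimal KubeletConfiguration sketch, assuming the default cluster DNS address and domain (all values are illustrative and vary by installation):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0          # listen on all interfaces
port: 10250               # default kubelet API port
clusterDNS:
- 10.96.0.10              # assumed cluster DNS service IP
clusterDomain: cluster.local
```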