
Thursday 29 July 2021

Unable to connect to server: x509: certificate signed by unknown authority

Today I was facing a strange issue: when trying to connect to Kubernetes on a remote server with kubectl, I got the error "Unable to connect to the server: x509: certificate signed by unknown authority".


The solution is to disable TLS verification with the --insecure-skip-tls-verify=true flag when configuring the cluster entry (or supply the cluster's CA certificate instead), by following the next post.
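For example, a minimal sketch (the host IP and cluster name here are placeholders for your setup):

kubectl config set-cluster my-cluster --server=https://<host-ip>:6443 --insecure-skip-tls-verify=true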


Happy Solving !!!

Accessing Remote Kubernetes server using the Kubectl

Below is the configuration required to make kubectl access a K8s cluster running on a remote server. I searched through the documentation and found the following solution, which works well.


"This Post does not require any certificate or key to access the remote k8s."


Below is the syntax given by the K8s documentation.


Syntax:


kubectl config set-cluster default-cluster --server=https://<host ip>:6443 --certificate-authority <path-to-kubernetes-ca> --embed-certs


kubectl config set-credentials <credential-name> --client-key <path-to-key>.pem --client-certificate <path-to-cert>.pem --embed-certs


kubectl config set-context default-system --cluster default-cluster --user <credential-name>


kubectl config use-context default-system


Examples:


kubectl config set-cluster my-cluster --server=https://1.2.3.4 --insecure-skip-tls-verify=true


kubectl config set-credentials my-credentials --token=<bearer_token>

or

kubectl config set-credentials my-credentials --username=<username> --password=<password>


In my case it was a token, hence I used the token; you can use the username and password as well.


kubectl config set-context my-system --cluster my-cluster --user my-credentials --namespace=default


kubectl config use-context my-system


After making these changes, the context is switched to my-system, and subsequent kubectl commands will return results from the remote K8s cluster. In case you need to switch back to the local or another remote cluster, use the below command with the appropriate context name. All of this information is stored in the .kube/config file. To access it, go to Run (Win+R), type .kube, and hit Enter; you can see the file there.


kubectl config use-context <context-name>
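To confirm which context is active and that the connection works, a quick check (the context names depend on your setup):

kubectl config get-contexts
kubectl get nodes

The active context is marked with an asterisk, and kubectl get nodes should return the nodes of that cluster.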


Happy Learning !!!!

Sunday 18 July 2021

Persistent Volumes (PV) Storing Files in Kubernetes

Let us first go through the definitions of PV and PVC.


A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. It is a resource in the cluster just like a node is a cluster resource. 


A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany, or ReadWriteMany, see AccessModes).


These are the definitions from the K8s documentation. In short, persistent volumes provide the space where we can store the files required for the functioning of our application.


Consider my requirement: I need to store large shell script files, which a cron job then triggers. A persistent volume is one way to do it.

This is one approach; there are other ways to achieve this as well.


We cannot access a persistent volume without a persistent volume claim. Hence, while creating the persistent volume, we also create the claim.


This Persistent volume remains even after the Pod is deleted.


Step:1 Persistent Volume creation in K8s.


apiVersion: v1
kind: PersistentVolume
metadata:
  name: scripts-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/opt/data"

Step:2 Create the Persistent Volume claim


apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scripts-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
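Apply both manifests and check that the claim binds to the volume; a quick sketch, assuming the YAML above is saved as pv-volume.yaml and pv-claim.yaml (the file names are my assumption):

kubectl apply -f pv-volume.yaml
kubectl apply -f pv-claim.yaml
kubectl get pv scripts-pv-volume
kubectl get pvc scripts-pv-claim

The PV's STATUS should change from Available to Bound once the claim is created.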

    

Step:3 Use the Storage.


Copy the script files to the "/opt/data" persistent volume with the below command.


kubectl cp welcome.ksh default/mypod:/opt/data


where default is the namespace, mypod is the name of the pod, and /opt/data is the path where the file needs to be copied.
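To verify the copy worked, list the mounted path inside the pod (mypod is the same pod name as above):

kubectl exec -n default mypod -- ls -l /opt/data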


Below is the CronJob manifest that mounts the claim and runs the script:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: welcome-jobs
spec:
  schedule: "*/5 * * * *"   # assumption: run every 5 minutes; use your own cron expression
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: welcome-jobs
        spec:
          volumes:
            - name: scripts-pv-storage
              persistentVolumeClaim:
                claimName: scripts-pv-claim
          containers:
            - name: scripts-pv-container
              image: busybox
              command: ["/opt/data/welcome.ksh"]
              volumeMounts:
                - mountPath: "/opt/data"
                  name: scripts-pv-storage
          restartPolicy: OnFailure


This will execute the Script welcome.ksh from the location /opt/data.
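To watch the runs and inspect their output, a quick sketch (the job names under a CronJob are generated, so substitute the one you see):

kubectl get cronjob welcome-jobs
kubectl get jobs --watch
kubectl logs job/<job-name>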


Happy Learning !!!!


Kubernetes read local docker images

Follow the below steps to read Docker images from the local machine instead of pulling them every time from Docker Hub. By default, Kubernetes always pulls the images from Docker Hub.


This saves us a lot of time, since pushing an image from local and tagging it takes a while.


There are two steps involved.


Step:1


Open the command prompt in admin mode and execute the below command.


C:\Users\Syed>minikube docker-env


Once you execute the command "minikube docker-env" you will see the following output. 


SET DOCKER_TLS_VERIFY=1
SET DOCKER_HOST=tcp://127.0.0.1:32770
SET DOCKER_CERT_PATH=C:\Users\Syed\.minikube\certs
SET MINIKUBE_ACTIVE_DOCKERD=minikube
REM To point your shell to minikube's docker-daemon, run:
REM @FOR /f "tokens=*" %i IN ('minikube -p minikube docker-env') DO @%i

Just copy the last line after REM and execute it in the same command prompt.


C:\Users\Syed>@FOR /f "tokens=*" %i IN ('minikube -p minikube docker-env') DO @%i


After making this change, the local Docker images will be visible to K8s.


Step:2


In the K8s YAML file, make sure the image pull policy is set to "Never", and point the image field to the local Docker build name and tag.


eg: imagePullPolicy: Never
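In context, the container section of the YAML could look like the sketch below (the image name and tag are placeholders for your local build):

containers:
  - name: my-app
    image: my-local-image:1.0
    imagePullPolicy: Never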


Once you complete the above two steps, you can make changes to the Dockerfile locally, build it, and see the changes in K8s.
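That loop could look like this (the image name and tag are placeholders, deployment.yaml stands for your own manifest referencing that image, and the command prompt must be the one pointed at minikube's daemon):

docker build -t my-local-image:1.0 .
kubectl apply -f deployment.yaml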


Happy Learning!!!!

Tuesday 29 June 2021

Running Batch Files in Kubernetes (KSH Files)

Check out the files from GitHub here.


Navigate to the folder where the checkout was done and execute the below commands.


C:\Users\Syed\Hello-K8s_Job> kubectl create configmap hello --from-file=hello.ksh

configmap/hello created


It creates the config map from the Script provided.


C:\Users\Syed\Hello-K8s_Job>kubectl apply -f deployment.yaml

cronjob.batch/hello-job created


This creates the cron job from deployment.yaml.
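For reference, a deployment.yaml along these lines could produce such a cron job; this is only a sketch under my assumptions (the schedule, mount path, and file mode are not taken from the repo):

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello-job
spec:
  schedule: "*/1 * * * *"       # assumption: run every minute
  jobTemplate:
    spec:
      template:
        spec:
          volumes:
            - name: hello-script
              configMap:
                name: hello           # the config map created above
                defaultMode: 0755     # assumption: make the script executable
          containers:
            - name: hello
              image: busybox
              command: ["/scripts/hello.ksh"]
              volumeMounts:
                - mountPath: /scripts  # assumption: where the script is mounted
                  name: hello-script
          restartPolicy: OnFailure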


Access the minikube Dashboard.




In the Cron Jobs tab, we can see the job created, named "hello-job".


Once you click the job, you will find the tabs below for Active and Inactive jobs: all currently executing jobs appear under Active Jobs, and finished jobs under Inactive Jobs.


In case you need to trigger the job manually, press the play button at the top right of the title bar.





When you look at the logs of the executed jobs, you can see the logging we added.




You can apply the same procedure for the complex KSH scripts as well.


Happy Learning.!!!!

Sunday 8 November 2020

Creating a CI/CD Pipeline for Deploying into Kubernetes using Jenkins

Create the pipeline script for the following purposes.


1. Checkout the Code from Git.


2. Build it using Maven.


3. Create the Docker Image.


4. Push it to the Docker Hub.


5. Deploy the Docker Image to Kubernetes.


I am using Minikube and Docker Desktop on a Windows 10 machine.



1. Install the Plugin Kubernetes Continuous Deploy.


2. Configure the KubeConfig from Dashboard.


Navigate to Manage Jenkins > Manage Credentials > Jenkins > Global Credentials > Add Credentials.


In the Kind field, select Kubernetes configuration (kubeconfig) and define the location of the kube config file.


C:\Users\Syed\.kube\config, then save it.




3. Make Sure the YAML File is available in the location provided.


4. Create the pipeline with the following syntax.


pipeline {
  environment {
    registry = "syedghouse14/greet-user-repo"
    registryCredential = 'Docker-Hub'
    dockerImage = ''
    dockerfile = "${workspace}\\GreetUser\\Dockerfile"
    pomfile = "${workspace}\\GreetUser\\pom.xml"
    JAR_FILE = "target/*.jar"
  }
  agent any
  stages {
    stage('Cloning Git') {
      steps {
        git 'https://github.com/Syed-SearchEndeca/gretuser.git'
      }
    }
    stage('Build') {
      steps {
        withMaven(maven: 'apache-maven-3.6.3') {
          bat "mvn clean package -f ${pomfile}"
        }
      }
    }
    stage('Building image') {
      steps {
        script {
          dockerImage = docker.build(registry + ":$BUILD_NUMBER",
            "--file ${dockerfile} --build-arg JAR_FILE=target/*.jar .")
        }
      }
    }
    stage('Deploy Image') {
      steps {
        script {
          docker.withRegistry('', registryCredential) {
            dockerImage.push()
          }
        }
      }
    }
    stage('Deploy on kubernetes') {
      steps {
        script {
          kubernetesDeploy(configs: "**/*.yaml", kubeconfigId: "KubeConfig")
        }
      }
    }
  }
}


The project is available on GitHub; download it from here.


Happy Learning !!!!

Kubernetes Error

I encountered the following error while executing the stages of the pipeline. I had the plugins Kubernetes Continuous Deploy 2.3.1 and Jackson 2 API 2.11.3 installed; together, these plugin versions cause the issue below.

Error Log:


[Pipeline] { (Deploy on kubernetes)

[Pipeline] script
[Pipeline] {
[Pipeline] kubernetesDeploy
Starting Kubernetes deployment
Loading configuration: C:\Users\Syed\AppData\Local\Jenkins\.jenkins\workspace\Maven-Pipeline\GreetUser\greet.yaml
ERROR: ERROR: Can't construct a java object for tag:yaml.org,2002:io.kubernetes.client.openapi.models.V1Service; exception=Class not found: io.kubernetes.client.openapi.models.V1Service
 in 'reader', line 1, column 1:
    apiVersion: v1
    ^

hudson.remoting.ProxyException: Can't construct a java object for tag:yaml.org,2002:io.kubernetes.client.openapi.models.V1Service; exception=Class not found: io.kubernetes.client.openapi.models.V1Service
 in 'reader', line 1, column 1:
    apiVersion: v1
    ^

	at org.yaml.snakeyaml.constructor.Constructor$ConstructYamlObject.construct(Constructor.java:335)
	at org.yaml.snakeyaml.constructor.BaseConstructor.constructObjectNoCheck(BaseConstructor.java:229)
	at org.yaml.snakeyaml.constructor.BaseConstructor.constructObject(BaseConstructor.java:219)
	at io.kubernetes.client.util.Yaml$CustomConstructor.constructObject(Yaml.java:337)
	at org.yaml.snakeyaml.constructor.BaseConstructor.constructDocument(BaseConstructor.java:173)
	at org.yaml.snakeyaml.constructor.BaseConstructor.getSingleData(BaseConstructor.java:157)
	at org.yaml.snakeyaml.Yaml.loadFromReader(Yaml.java:490)
	at org.yaml.snakeyaml.Yaml.loadAs(Yaml.java:456)
	at io.kubernetes.client.util.Yaml.loadAs(Yaml.java:224)
	at io.kubernetes.client.util.Yaml.modelMapper(Yaml.java:494)
	at io.kubernetes.client.util.Yaml.loadAll(Yaml.java:272)
	at com.microsoft.jenkins.kubernetes.wrapper.KubernetesClientWrapper.apply(KubernetesClientWrapper.java:236)
	at com.microsoft.jenkins.kubernetes.command.DeploymentCommand$DeploymentTask.doCall(DeploymentCommand.java:172)
	at com.microsoft.jenkins.kubernetes.command.DeploymentCommand$DeploymentTask.call(DeploymentCommand.java:124)
	at com.microsoft.jenkins.kubernetes.command.DeploymentCommand$DeploymentTask.call(DeploymentCommand.java:106)
	at hudson.FilePath.act(FilePath.java:1163)
	at com.microsoft.jenkins.kubernetes.command.DeploymentCommand.execute(DeploymentCommand.java:68)
	at com.microsoft.jenkins.kubernetes.command.DeploymentCommand.execute(DeploymentCommand.java:45)
	at com.microsoft.jenkins.azurecommons.command.CommandService.runCommand(CommandService.java:88)
	at com.microsoft.jenkins.azurecommons.command.CommandService.execute(CommandService.java:96)
	at com.microsoft.jenkins.azurecommons.command.CommandService.executeCommands(CommandService.java:75)
	at com.microsoft.jenkins.azurecommons.command.BaseCommandContext.executeCommands(BaseCommandContext.java:77)
	at com.microsoft.jenkins.kubernetes.KubernetesDeploy.perform(KubernetesDeploy.java:42)
	at com.microsoft.jenkins.azurecommons.command.SimpleBuildStepExecution.run(SimpleBuildStepExecution.java:54)
	at com.microsoft.jenkins.azurecommons.command.SimpleBuildStepExecution.run(SimpleBuildStepExecution.java:35)
	at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: hudson.remoting.ProxyException: org.yaml.snakeyaml.error.YAMLException: Class not found: io.kubernetes.client.openapi.models.V1Service
	at org.yaml.snakeyaml.constructor.Constructor.getClassForNode(Constructor.java:664)
	at org.yaml.snakeyaml.constructor.Constructor$ConstructYamlObject.getConstructor(Constructor.java:322)
	at org.yaml.snakeyaml.constructor.Constructor$ConstructYamlObject.construct(Constructor.java:331)
	... 30 more
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: Kubernetes deployment ended with HasError
Finished: FAILURE


Resolution:


Downgrade both plugins to the versions below.


1. Jackson 2 API 2.9.10


2. Kubernetes Continuous Deploy  2.3.0


Hope this helps to resolve the issue.


Happy Resolution!!!!


Saturday 26 September 2020

Important Concepts for Kubernetes

 

Nodes 


Kubernetes runs your workload by placing containers into Pods to run on Nodes. A node may be a virtual or physical machine, depending on the cluster. Each node contains the services necessary to run Pods, managed by the control plane. 


Pods 


Pods are the smallest deployable units of computing that you can create and manage in Kubernetes. 


Deployments 


A Deployment provides declarative updates for Pods and ReplicaSets.

You describe the desired state in a Deployment, and the Deployment controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets or to remove existing Deployments and adopt all their resources with new Deployments. 
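A minimal Deployment sketch (the names and image are illustrative, not from any particular project):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 2                # desired number of Pod replicas
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.21  # illustrative image
          ports:
            - containerPort: 80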


ReplicationController 


ReplicationController ensures that a specified number of pod replicas are running at any one time. In other words, a ReplicationController makes sure that a pod or a homogeneous set of pods is always up and available. 


Service 


An abstract way to expose an application running on a set of Pods as a network service. 

With Kubernetes you don't need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them. 
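A minimal Service sketch (the names and ports are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app        # routes to Pods carrying this label
  ports:
    - port: 80         # port exposed by the Service
      targetPort: 8080 # port the Pods listen on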


Ingress 


An API object that manages external access to the services in a cluster, typically HTTP. 

Ingress may provide load balancing, SSL termination and name-based virtual hosting. 
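A minimal Ingress sketch (the host and backend service are illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80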

 

Kubernetes Components

There are two types of components available in Kubernetes.


1. Control Plane Components


2. Node Components


1. Control Plane Components

The control plane's components make global decisions about the cluster (for example, scheduling), as well as detecting and responding to cluster events (for example, starting up a new pod when a deployment's replicas field is unsatisfied). 


kube-apiserver 

 

This exposes the Kubernetes API. The API server is the front end for the Kubernetes control plane.  

The main implementation of a Kubernetes API server is kube-apiserver. kube-apiserver is designed to scale horizontally, that is, it scales by deploying more instances. You can run several instances of kube-apiserver and balance traffic between those instances.


Etcd 

 

This is a consistent and highly-available key value store used as Kubernetes' backing store for all cluster data. If your Kubernetes cluster uses etcd as its backing store, make sure you have a backup plan for that data.


kube-scheduler 

 

The scheduler watches for newly created Pods with no assigned node, and selects a node for them to run on. 

Factors taken into account for scheduling decisions include: individual and collective resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and deadlines. 
 

kube-controller-manager 


This runs controller processes. 

Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.

These controllers include:

Node controller: Responsible for noticing and responding when nodes go down. 

Replication controller: Responsible for maintaining the correct number of pods for every replication controller object in the system. 

Endpoints controller: Populates the Endpoints object (that is, joins Services & Pods). 

Service Account & Token controllers: Create default accounts and API access tokens for new namespaces. 
 

cloud-controller-manager  


The cloud controller manager lets you link your cluster into your cloud provider's API, and separates out the components that interact with that cloud platform from components that just interact with your cluster.  

The cloud-controller-manager only runs controllers that are specific to your cloud provider. If you are running Kubernetes on your own premises, or in a learning environment inside your own PC, the cluster does not have a cloud controller manager. 

 
2. Node Components

Node components run on every node, maintaining running pods and providing the Kubernetes runtime environment. 

 

kubelet  

 
An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod. 

The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy.  

The kubelet doesn't manage containers which were not created by Kubernetes. 


kube-proxy  


 kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept. 

kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster. 

Addons 

  

Addons use Kubernetes resources (DaemonSet, Deployment, etc.) to implement cluster features. Because these provide cluster-level features, namespaced resources for addons belong within the kube-system namespace.

  

Addons are described below. 

  

DNS   


While the other addons are not strictly required, all Kubernetes clusters should have cluster DNS, as many examples rely on it. 

  

Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, which serves DNS records for Kubernetes services. 

  

Containers started by Kubernetes automatically include this DNS server in their DNS searches. 

  

Web UI (Dashboard)   


The dashboard is a general-purpose, web-based UI for Kubernetes clusters. 

  

Container Resource Monitoring 

  

Container Resource Monitoring records generic time-series metrics about containers in a central database and provides a UI for browsing that data. 


Cluster-level Logging 


A cluster-level logging mechanism is responsible for saving container logs to a central log store with a search/browsing interface.