In continuation of my last piece on working with Services, where we discussed how we can enable external applications to talk to our internal Pods deployed in the K8s cluster by making use of the NodePort type Service, today I am excited to discuss Kubernetes Services in more detail by venturing into the role of ClusterIP, one of the prominent Service types that enables communication between multiple Pods within the cluster.
But before we jump into the ClusterIP type Service, let's again briefly revisit what a Service in K8s is.
In Kubernetes, a Service is an abstraction that…
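As a minimal sketch of the idea (the name, labels, and ports below are illustrative, not from the original piece), a ClusterIP Service manifest might look like this:

```yaml
# Hypothetical ClusterIP Service: exposes Pods labeled app=my-backend
# inside the cluster on port 80, forwarding to container port 8080.
apiVersion: v1
kind: Service
metadata:
  name: my-backend-svc    # illustrative name
spec:
  type: ClusterIP         # the default Service type; reachable only within the cluster
  selector:
    app: my-backend       # matches Pods carrying this label
  ports:
    - port: 80            # port other Pods use to reach the Service
      targetPort: 8080    # containerPort on the selected Pods
```

Other Pods in the cluster can then reach the backend at the Service's stable virtual IP (or its DNS name) instead of tracking individual Pod IPs.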
The very need to make our complex applications highly available, scalable, portable, and deployable in small, independent modules led to the birth of Kubernetes.
Kubernetes is popularly known as K8s.
K8s is a production-grade open-source container orchestration tool developed by Google to help you manage containerized/Dockerized applications across multiple deployment environments such as on-premise, cloud, or virtual machines.
K8s automates the deployment of your containerized images and helps them scale horizontally to support a high level of application availability.
One of the primary reasons why K8s became so…
The Kubernetes Deployment (controller) is your buddy if you want to deploy, update, and maintain your hosted application on the production server with utmost efficiency, without any downtime, and with the required scalability and availability.
Many times while working on your production server, you may encounter an instance where you would like to push a new update of your web application; while doing so, it is desirable not to break the running web app and to avoid any downtime. …
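One way to sketch this zero-downtime update idea (the Deployment name, labels, and image tag here are illustrative assumptions) is a Deployment with a rolling-update strategy:

```yaml
# Illustrative Deployment: rolling updates replace Pods gradually,
# so a new image can be pushed without taking the running app down.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app            # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1    # at most one Pod down during the update
      maxSurge: 1          # at most one extra Pod created during the update
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: nginx:1.25   # bumping this tag triggers a rolling update
```

Changing the image tag and re-applying the manifest rolls Pods over one at a time, keeping the rest serving traffic throughout.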
It is not sufficient merely to have skilled individuals who know their jobs well; how these talented individuals communicate and help each other out as team players is what makes productive work happen.
In a similar fashion, deploying different types of front-end and back-end applications on the server is not sufficient; it is how these applications exchange data with each other that makes the client-server ecosystem function effectively. …
Welcome to this brand-new piece in the Working with Kubernetes series.
In the previous piece, we saw how one can allow a particular Pod with a certain type of workload to be specifically scheduled on a particular node, using the simplicity and power of nodeSelector.
Let’s revisit the example we discussed in the last piece:
- name: nginx
Here we are trying to place our Pod on a matching node using a nodeSelector key-value pair.
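Since only a fragment of the manifest survives above, here is a minimal sketch of what such a Pod spec might look like (the `size: large` label is an illustrative assumption, and the node must carry a matching label):

```yaml
# Minimal Pod spec using nodeSelector to pin the Pod to labeled nodes.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
  nodeSelector:
    size: large   # Pod is scheduled only on nodes labeled size=large
```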
In such a case, one thing which limits the node selector approach…
The same applies to our Kubernetes worker nodes. If you try to schedule a heavy workload onto a low-capacity worker node, there is a higher probability of that node filling up fast and creating inefficiency in the cluster. So, as a Kubernetes administrator or developer, what should you do?
To understand this scenario, let me paint a visual picture of a hypothetical situation below:
In everyday life, we often develop an affinity for people whom we like and want to engage with often; similarly, in Kubernetes, we can establish a close affinity between nodes and Pods.
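As a sketch of what node affinity looks like in a manifest (the label key and values are illustrative assumptions), a Pod can express a hard scheduling preference like this:

```yaml
# Illustrative nodeAffinity: a more expressive alternative to nodeSelector,
# supporting operators like In/NotIn over node labels.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-affinity
spec:
  containers:
    - name: nginx
      image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: size        # illustrative node label key
                operator: In
                values:
                  - large
                  - medium       # schedulable on either label value
```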
If you want to ensure that only certain types of Pods get accommodated on any given worker node, repelling the other Pods which you don't want to be part of that node, you can use the concept of
“Taints & Tolerations”
Suppose you don’t want the mosquito to bite you, in that case, what do you do? …
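To sketch the mechanics (the node name, taint key, and values below are illustrative assumptions): tainting a node repels all Pods except those that explicitly tolerate the taint.

```yaml
# First, taint the node so ordinary Pods are repelled:
#   kubectl taint nodes node1 app=blue:NoSchedule
#
# Then, only a Pod carrying a matching toleration can land on node1:
apiVersion: v1
kind: Pod
metadata:
  name: blue-pod
spec:
  containers:
    - name: nginx
      image: nginx
  tolerations:
    - key: "app"
      operator: "Equal"
      value: "blue"
      effect: "NoSchedule"   # must match the taint's effect
```

Note that a toleration only *permits* the Pod on the tainted node; it does not force it there — combining taints with node affinity gives both directions of control.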
“Choose your resources wisely, else you will be thrown out”
This is true for Kubernetes clusters with multiple nodes too. In our part 1 of handling resources and limits in Kubernetes:
So today we will go further and understand the few resource-management scenarios given below:
Any product and its management is somewhat similar to a growing plant that bears fruit. The fruit is the product, and the person who grows it is the product manager.
It starts with finding fertile land (finding the right market/target audience), adopting best seeding practices, planting seeds, nurturing them with the right kind of water and fertilizer, monitoring the growth, trimming the unwanted branches and leaves to keep it in the right shape (improving the product based on customer feedback), and plucking the fruits as they come along the way — then continuing the discipline of care to help it bear new fruit. …
When it comes to architecting your system infrastructure requirements as a cloud or system architect, you need an efficient orchestration tool to help you achieve the required flexibility and scalability. The Kubernetes orchestration platform gives you this ability and makes your resource-management process hassle-free.
In any given cluster with multiple nodes, there is a defined set of resources in the form of CPU and memory.
CPU and memory are collectively referred to as compute resources, or just resources. These resources are measurable quantities that can be requested, allocated, and consumed.
Each of the above resource types has a…
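As a minimal sketch of how these quantities are requested and capped per container (the Pod name, image, and the specific values are illustrative assumptions):

```yaml
# Illustrative container-level requests and limits.
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
    - name: app
      image: nginx
      resources:
        requests:
          cpu: "250m"       # a quarter of a CPU, guaranteed at scheduling time
          memory: "64Mi"
        limits:
          cpu: "500m"       # container is throttled beyond half a CPU
          memory: "128Mi"   # container is OOM-killed if it exceeds this
```

The scheduler places the Pod based on its requests, while the limits cap what the running container may actually consume.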