Friday, September 28, 2018

Kubernetes interview questions


  • Where is Kubernetes cluster data stored?

A) etcd is responsible for storing Kubernetes cluster data.
etcd is a distributed key-value store, written in the Go programming language, used for coordinating distributed work. So, etcd stores the configuration data of the Kubernetes cluster, representing the state of the cluster at any given point in time.

What is the role of kube-scheduler?
A) kube-scheduler is responsible for assigning a node to newly created pods.

Which process runs on the Kubernetes master node?
A) The kube-apiserver process runs on the Kubernetes master node.

Which process runs on Kubernetes non-master nodes?
A) The kube-proxy process runs on Kubernetes non-master nodes.

Which container runtimes are supported by Kubernetes?
A) Kubernetes supports the Docker and rkt container runtimes.

What is k8s?
A) Kubernetes, also sometimes called K8s (K, then the eight letters "ubernete", then s), is an open-source orchestration framework for containerized applications that was born in the Google data centers.

What is Kubectl?
A) kubectl is a command line interface for running commands against Kubernetes clusters.
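For illustration, a few common kubectl invocations (the pod and file names here are placeholders):

```shell
# List pods in the current namespace
kubectl get pods
# Show detailed information about a pod
kubectl describe pod my-app-pod
# Create or update resources from a manifest file
kubectl apply -f deployment.yaml
# Stream logs from a pod's container
kubectl logs -f my-app-pod
```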

What is Minikube?
A) Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop, for users looking to try out Kubernetes or develop with it day-to-day.

What is the Kubelet?
A) The kubelet is the agent that runs pods on each node. The unit of execution that Kubernetes works with is the pod: a collection of containers that share some resources, such as a single IP address and, optionally, volumes.

What is a node in Kubernetes?
A) A node is a worker machine in Kubernetes, previously known as a minion. A node may be a VM or a physical machine, depending on the cluster. Each node has the services necessary to run pods and is managed by the master components. The services on a node include Docker, the kubelet and kube-proxy.


What are the advantages of Kubernetes?
•Automated scheduling: Kubernetes automates the distribution (scheduling) of application containers across a cluster in an efficient way.
•Auto-healing capabilities: Kubernetes has auto-healing mechanisms, such as auto-restarting, re-scheduling, and replicating containers.
•Automated rollback: sometimes you may want to roll back a Deployment. By default, a Deployment's rollout history is kept in the system, so that you can roll back at any time.
•Horizontal scaling: the cluster can increase the number of pod replicas as demand for the service rises and decrease them as demand falls.
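These capabilities can be exercised directly from the command line; a sketch, with a hypothetical Deployment named my-app:

```shell
# Inspect the Deployment's rollout history
kubectl rollout history deployment/my-app
# Roll back to the previous revision
kubectl rollout undo deployment/my-app
# Scale horizontally by changing the replica count
kubectl scale deployment/my-app --replicas=5
```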

What are namespaces in Kubernetes?
A: Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces.
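A quick sketch of working with namespaces (the namespace and pod names are illustrative):

```shell
# Create a namespace for a dev environment
kubectl create namespace dev
# Run a pod in that namespace
kubectl run nginx --image=nginx --namespace=dev
# List only the pods in that namespace
kubectl get pods --namespace=dev
```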

What is a pod in Kubernetes?
A: A Kubernetes pod is a group of containers that are deployed together on the same host. If you frequently deploy single containers, you can generally replace the word "pod" with "container" and accurately understand the concept.
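As a minimal sketch, a two-container pod manifest; the names and images are illustrative:

```yaml
# A pod whose two containers share the pod's IP and can share volumes
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: web
    image: nginx:1.14
  - name: log-sidecar
    image: busybox
    command: ["sh", "-c", "tail -f /dev/null"]
```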

What is a swarm in Docker?
A) Docker Swarm is a clustering and scheduling tool for Docker containers.

What is Kubernetes?
•Kubernetes is Google's open-source system for managing Linux containers across private, public and hybrid cloud environments.
•It is an open-source container management tool which holds the responsibilities of container deployment, scaling and descaling of containers, and load balancing.
•It is a portable, extensible open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation.
•It contains tools for orchestration, service discovery and load balancing that can be used with Docker and rkt containers.
•As needs change, a developer can move container workloads in Kubernetes to another cloud provider without changing the code.
•It helps automate the deployment, scaling, maintenance, scheduling and operation of multiple application containers across clusters of nodes.
•Kubernetes is written in Go.


How is Kubernetes different from Docker Swarm?

Auto-scaling
Kubernetes: Kubernetes can do auto-scaling.
Docker Swarm: Docker Swarm cannot do auto-scaling.

Scalability
Kubernetes: Highly scalable, but scales more slowly than Docker Swarm.
Docker Swarm: Highly scalable and scales 5x faster than Kubernetes.

GUI
Kubernetes: The Kubernetes dashboard provides a GUI.
Docker Swarm: There is no GUI.

Load balancing
Kubernetes: Manual intervention is needed to balance load between different containers and pods.
Docker Swarm: Docker Swarm does automatic load balancing of traffic between containers in the cluster.

Rolling updates & rollbacks
Kubernetes: Can deploy rolling updates and does automatic rollbacks.
Docker Swarm: Can deploy rolling updates, but not automatic rollbacks.

Data volumes
Kubernetes: Can share storage volumes only with the other containers in the same pod.
Docker Swarm: Can share storage volumes with any other container.

Logging & monitoring
Kubernetes: Has built-in tools for logging and monitoring.
Docker Swarm: Third-party tools like the ELK stack should be used for logging and monitoring.

Q3. How is Kubernetes related to Docker?
A Docker image is used to build the runtime containers. When individual containers have to communicate, Kubernetes is used. Containers running on multiple hosts can be linked and orchestrated using Kubernetes.

Q5. What is Container Orchestration?
Consider a scenario where you have 5-6 microservices for an application, each put in its own container. Container orchestration means making all the services in those individual containers work together to fulfill the needs of the application.

Q8. How does Kubernetes simplify containerized deployment?
Kubernetes is cloud-agnostic and can run on any public or private provider, which makes it a good choice for simplifying containerized deployment.

Q8. What do you understand by load balancer in Kubernetes?
A load balancer is one of the most common and standard ways of exposing a service. There are two types of load balancer, depending on the working environment: the internal load balancer and the external load balancer. The internal load balancer automatically balances load and allocates it to the pods with the required configuration, whereas the external load balancer directs traffic from external sources to the backend pods.
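As a sketch, a Service of type LoadBalancer exposing hypothetical backend pods labelled app: my-app:

```yaml
# The cloud provider provisions an external load balancer for this Service
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```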

Q9. What is Ingress network, and how does it work?
An Ingress is a collection of rules that acts as an entry point to the Kubernetes cluster. It allows inbound connections, which can be configured to expose services externally through reachable URLs, to load-balance traffic, or to offer name-based virtual hosting. So, Ingress is an API object that manages external access to the services in a cluster, usually over HTTP, and is the most powerful way of exposing a service.
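A minimal Ingress sketch, using the extensions/v1beta1 API that was current when this was written; the host and service names are illustrative:

```yaml
# Routes HTTP traffic for app.example.com to the my-service backend
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 80
```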

There are two nodes, each having pod and root network namespaces with a Linux bridge. In addition to this, there is also a new virtual ethernet device called flannel0 (the network plugin) added to the root network.
Now, suppose we want a packet to flow from pod1 on node1 to pod4 on node2.
    So, the packet leaves pod1’s network at eth0 and enters the root network at veth0.
    Then it is passed on to cbr0, which makes the ARP request to find the destination and it is found out that nobody on this node has the destination IP address.
    So, the bridge sends the packet to flannel0 as the node’s route table is configured with flannel0.
    Now, the flannel daemon talks to the API server of Kubernetes to know all the pod IPs and their respective nodes to create mappings for pods IPs to node IPs.
    The network plugin wraps this packet in a UDP packet with extra headers, changing the source and destination IPs to their respective nodes, and sends this packet out via eth0.
    Now, since the route table already knows how to route traffic between nodes, it sends the packet to the destination node2.
    The packet arrives at eth0 of node2 and goes back to flannel0 to de-capsulate and emits it back in the root network namespace.
    Again, the packet is forwarded to the Linux bridge to make an ARP request to find out the IP that belongs to veth1.
    The packet finally crosses the root network and reaches the destination Pod4.


Q10.  What do you understand by Cloud controller manager?
The Cloud Controller Manager is responsible for persistent storage, network routing, abstracting the cloud-specific code from the core Kubernetes code, and managing the communication with the underlying cloud services. The cloud vendor develops their own code and connects it with the Kubernetes cloud-controller-manager when running Kubernetes.

Q11. What is Container resource monitoring?
Tools commonly used for container resource monitoring include:
Grafana
cAdvisor
Prometheus
InfluxDB

Q13. What is a Headless Service?
A Headless Service is similar to a normal Service but does not have a cluster IP. It enables you to reach the pods directly, without going through a proxy.
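A minimal headless Service sketch; the service name and selector are illustrative:

```yaml
# clusterIP: None makes this a headless Service; DNS returns the pod IPs directly
apiVersion: v1
kind: Service
metadata:
  name: my-headless-service
spec:
  clusterIP: None
  selector:
    app: my-app
  ports:
  - port: 80
```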

Q15. What are federated clusters?
Multiple Kubernetes clusters can be managed as a single cluster with the help of federated clusters. So, you can create multiple Kubernetes clusters within a data center/cloud and use federation to control/manage them all in one place.

Q4. Which of the following are core Kubernetes objects?
Pods
    Services
    Volumes
    All of the above [Answer]

DaemonSet
A DaemonSet ensures that all (or some) nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them; as nodes are removed from the cluster, those Pods are garbage-collected. Deleting a DaemonSet will clean up the Pods it created. Some typical uses of a DaemonSet are:
running a cluster storage daemon on every node
running a logs-collection daemon on every node
running a node-monitoring daemon on every node
This is helpful in situations where you want to run a monitoring or logging application on the nodes in the cluster.

Taints and tolerations
To understand tolerations, we have to look at another Kubernetes concept: taints. Taints are a way of indicating to the Kubernetes scheduler (the component responsible for placing created pods on nodes) that pods shouldn't be scheduled on a particular node. When you put a taint on a node, you mark it so that the scheduler ignores it; only pods that have a toleration for that taint will be scheduled on that node. A toleration, then, is just a way of ignoring the taints on a node so that a pod can be scheduled there. You can taint a node by running a command such as (the node name, key and value here are illustrative):
kubectl taint nodes node1 key=value:NoSchedule

Deployment
Deployments are Kubernetes objects that are used for managing pods. The first thing a Deployment does when created is create a ReplicaSet. The ReplicaSet creates pods according to the number specified in the replicas option.

ReplicaSet
The ReplicaSet is the replacement for the Replication Controller, which is used to create multiple copies of the same pod in a Kubernetes cluster. It helps ensure that, at any given time, the desired number of pods are in the running state. If a pod stops or dies, the ReplicaSet creates another one to replace it.

ConfigMap
ConfigMaps are used to separate configuration data from containers in your Kubernetes application. They offer you the ability to dynamically change data for containers at runtime. It's important to note that ConfigMaps are not meant for storing sensitive information; if the data you want to pass to your application is sensitive, it is recommended you use Secrets instead.

Namespace
Namespaces are used to organize objects in a Kubernetes cluster.
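As a sketch, a minimal DaemonSet manifest that runs a hypothetical log-collection agent on every node, with a toleration so it is also scheduled on tainted master nodes:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      # Tolerate the standard master-node taint so the agent runs there too
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluentd:latest
```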
https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/ https://www.magalix.com/blog/understanding-kubernetes-objects
https://www.javainuse.com/devops/kubernetes_intvw
https://codingcompiler.com/kubernetes-interview-questions-answers/
https://www.edureka.co/blog/interview-questions/kubernetes-interview-questions/

Ansible interview questions


  • What is Ansible?

Ansible is an open-source automation tool for configuration management, application deployment and task automation. It is agentless, connecting to managed machines over SSH, and is developed in the Python language.

2.What Are The Advantages and use Of Ansible?
Answer:
Ansible has a huge number of benefits:
•No agent: no agent is required to set up Ansible. If a box supports SSH and has Python, Ansible can manage it.
•Idempotent: the architecture of Ansible is structured around the concept of idempotency. The core idea is that only the changes that are actually needed get applied, and applying them repeatedly has no side effects.
•Declarative, not procedural: other configuration tools tend to follow a procedural process (do this, then do that, and so on). Ansible instead writes a description of the desired state of the machine and takes the proper steps toward fulfilling that description.
•Very easy to learn, with low overhead.

3.How Does Ansible Work?
Ansible connects to the machines in its inventory over SSH, pushes out small modules, executes them and removes them when done. The tasks to run are described in playbooks, which are in YAML file format.
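A minimal playbook sketch; the host group and package name are illustrative:

```yaml
# Ensure nginx is installed and running on all hosts in the "webservers" group
- hosts: webservers
  become: yes
  tasks:
    - name: Ensure nginx is installed
      yum:
        name: nginx
        state: present
    - name: Ensure nginx is running
      service:
        name: nginx
        state: started
```

Run with: ansible-playbook -i hosts site.yml (file names are placeholders).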

7.What Is The Best Way To Make Content Reusable/Redistributable?
There are three ways to reuse files in Ansible playbooks: includes, imports and roles.
Roles can also be uploaded to and shared via Ansible Galaxy.

https://www.educba.com/ansible-interview-questions/


  • If your Ansible inventory fluctuates over time, with hosts spinning up and shutting down in response to business demands, the static inventory solutions described in Working with Inventory will not serve your needs.

You may need to track hosts from multiple sources: cloud providers, LDAP, Cobbler, and/or enterprise CMDB systems.

Ansible integrates all of these options via a dynamic external inventory system. Ansible supports two ways to connect with external inventory: Inventory Plugins and inventory scripts.

If you’d like a GUI for handling dynamic inventory, the Red Hat Ansible Tower inventory database syncs with all your dynamic inventory sources, provides web and REST access to the results, and offers a graphical inventory editor. With a database record of all of your hosts, you can correlate past event history and see which hosts have had failures on their last playbook runs.

Inventory Script Example: Cobbler
Ansible integrates seamlessly with Cobbler, a Linux installation server.
While primarily used to kick off OS installations and manage DHCP and DNS, Cobbler has a generic layer that can represent data for multiple configuration management systems (even at the same time) and serve as a 'lightweight CMDB'.

Using Inventory Directories and Multiple Inventory Sources
Ansible can use multiple inventory sources at the same time. When doing so, it is possible to mix both dynamic and statically managed inventory sources in the same Ansible run: an instant hybrid cloud.
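A sketch of mixing sources, assuming a hypothetical static hosts file and a dynamic inventory script (which must be executable):

```shell
# Pass -i more than once to combine a static file and a dynamic script,
# or point -i at a directory containing both
ansible-playbook -i inventory/static_hosts -i inventory/ec2.py site.yml
```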


https://docs.ansible.com/ansible/latest/user_guide/intro_dynamic_inventory.html#static-groups-of-dynamic-groups