
Kubernetes Application Deployment Model Analysis (Principles)

By Sue Johnson, 2015-10-05 19:12

    Kubernetes manages clusters of Linux containers, accelerating development and simplifying operations. However, most of the material currently available online introduces Kubernetes rather than covering its actual use. This series of articles focuses on practical deployment to help you master Kubernetes quickly. This first article introduces the principles and concepts you need to understand before deploying: the Kubernetes component architecture, the role and function of each component, and the Kubernetes application deployment model.

    Google has run its business in containers in production for ten years, and the system responsible for managing those container clusters is Kubernetes' predecessor, Borg. Many of the Google developers now working on the Kubernetes project previously worked on Borg. Most of Kubernetes' application deployment models originated in Borg, and understanding these models is the key to mastering Kubernetes.

    The current Kubernetes API version is v1. This article introduces the application deployment model based on the 0.18.2 codebase and finally uses a simple use case to illustrate the deployment process. It closes by explaining how Kubernetes uses iptables rules to implement the various types of Service.

    Kubernetes architecture

    A Kubernetes cluster has two roles: Kubernetes agent nodes (agents) and the Kubernetes service node (master). The agent-role components are kube-proxy and kubelet; they are deployed together on a node, and that node serves as an agent. The service-role components are kube-apiserver, kube-scheduler, and kube-controller-manager; they can be deployed freely, either all on the same node or spread across different nodes (although the current version does not yet support the latter). Kubernetes currently depends on two third-party components, etcd and Docker: the former provides state storage, and the latter is used to manage containers. The cluster can also use distributed storage to provide storage space for containers. The figure below shows the current architecture of the system:

    Kubernetes agent node

    The kubelet and kube-proxy run on the agent nodes. They watch the service node for information about Service, Pod, and the other Kubernetes business-model objects, and use it to start containers and set up the Kubernetes network. Of course, every agent node also runs Docker, which is responsible for downloading container images and running containers.

    Kubelet

    The kubelet component manages Pods and their containers, images, and volumes.

    Kube-Proxy

    kube-proxy is a simple network proxy and load balancer. It is the concrete implementation of the Service model: every Service is reflected on every kube-proxy node. Based on the Pods covered by a Service's selector, kube-proxy load-balances that Service's visitors across those Pods.

    Kubernetes service node

    The Kubernetes service components form the Kubernetes control plane. They currently run on a single node, but in the future they will be deployed separately to support high availability.

    etcd

    All persistent state is stored in etcd. etcd also supports watches, so components can easily be notified of changes in system state and respond and coordinate quickly.

    Kubernetes API Server

    This component serves the API: it responds to REST operations, validates the API model, and updates the corresponding objects in etcd.

    Scheduler

    Through the Kubernetes /binding API, the scheduler is responsible for distributing Pods across the nodes. The scheduler is pluggable, and Kubernetes will be able to support user-defined schedulers in the future.

    Kubernetes Controller Manager Server

    The Controller Manager Server is responsible for all other functions. For example, the endpoints controller creates and updates Endpoints objects, and the node controller is responsible for node discovery, management, and monitoring. In the future these controllers may be split out and given plug-in implementations.

    Kubernetes model

    The greatness of Kubernetes lies in its application deployment model, which consists mainly of the Pod, the replication controller, the Label, and the Service.

    Pod

    The smallest deployment unit in Kubernetes is the Pod, not the container. As first-class citizens of the API, Pods can be created, scheduled, and managed. Simply put, like peas in a pod, the application containers in a Pod share the same context:

    1. PID namespace (not yet supported in Docker).

    2. Network namespace: multiple containers in the same Pod share the same IP and port space.

    3. IPC namespace: applications in the same Pod can communicate using System V IPC and POSIX message queues.

    4. UTS namespace: applications in the same Pod share a hostname.

    5. Volumes: applications in a Pod's containers can also access shared volumes defined at the Pod level.

    In terms of life cycle, Pods are meant to be short-lived rather than long-lived. A Pod is scheduled onto a node and stays there until it is destroyed; when a node dies, the Pods assigned to it are deleted. Pod migration may be implemented in the future. In practice we generally do not create Pods directly; instead we let a replication controller create, replicate, monitor, and destroy them. A Pod can contain multiple containers, which usually collaborate closely to implement one application function.
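
    As a concrete sketch, a minimal v1 Pod definition with two collaborating containers might look like the following; the names and images are hypothetical:

    apiVersion: v1
    kind: Pod
    metadata:
      name: image-processor
      labels:
        tier: backend
    spec:
      containers:
      - name: worker                       # main application container
        image: example/image-worker:1.0
        ports:
        - containerPort: 8080
      - name: log-collector                # sidecar sharing the Pod's network and volumes
        image: example/log-collector:1.0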

    Replication controller

    The replication controller ensures that a specified number of Pod replicas are running. If there are more than this number, the controller kills some; if there are fewer, it starts more. The controller maintains this number even through node failures and maintenance. It is therefore strongly recommended to use a replication controller even when the replica count is 1, rather than creating a Pod directly.

    In terms of life cycle, a replication controller does not terminate on its own, but its span is not as long as a Service's. A Service can span multiple replication controllers to manage its Pods, and within a Service's lifetime replication controllers may be deleted and created. Neither the Service nor its client programs are aware of the replication controller's existence.

    The Pods created by a replication controller are meant to be interchangeable and semantically identical, which makes it particularly suitable for stateless services.
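
    For example, a minimal replication controller keeping three interchangeable backend Pods running might be sketched as follows (names and image are hypothetical):

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: image-processor-rc
    spec:
      replicas: 3                 # desired number of Pod copies
      selector:
        tier: backend             # must match the Pod template's labels
      template:
        metadata:
          labels:
            tier: backend
        spec:
          containers:
          - name: worker
            image: example/image-worker:1.0
            ports:
            - containerPort: 8080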

    Pods are ephemeral objects: they are created and destroyed, and they are not resurrected. Replication controllers create and destroy Pods dynamically. Although a Pod is assigned an IP address, that IP address is not durable. This raises a question: how do external consumers access a Pod's services?

    Service

    A Service defines a logical collection of Pods and a policy for accessing that collection. The collection is selected by the Label selector defined in the Service. For example, suppose three backend Pod replicas perform image processing. These backend replicas are logically identical, and the frontend does not care which backend serves it. Although the actual Pods making up the backend may change, the frontend clients neither notice the change nor track the backends. The Service is the abstraction that achieves this decoupling.

    For a Service we can also define Endpoints; the Endpoints object connects the Service and its Pods dynamically.
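
    Continuing the image-processing example, a Service selecting those backend Pods might be sketched as follows (the name and ports are assumptions; port 1234 is reused in the discussion below):

    apiVersion: v1
    kind: Service
    metadata:
      name: image-service
    spec:
      selector:
        tier: backend        # picks up the Pods created by the controller above
      ports:
      - protocol: TCP
        port: 1234           # the Service port
        targetPort: 8080     # the Pods' container port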

    Service Cluster IP and kube-proxy

    Every agent node runs a kube-proxy process, which learns about changes to Service and Endpoints objects from the service node. For each Service, kube-proxy opens a port on the local node; any connection to that port is proxied to one of the backend Pods, that is, to one of a set of Pod IPs and ports. After a Service is created, its Endpoints object reflects the list of backend Pod IPs and ports, and kube-proxy selects backends from this list, which the Endpoints object keeps up to date. In addition, the sessionAffinity attribute of the Service object helps kube-proxy choose a specific backend. By default the backend Pod is chosen at random; setting service.spec.sessionAffinity to "ClientIP" directs traffic from the same client IP to the same backend. In its implementation, kube-proxy uses iptables rules to redirect traffic destined for the Service's Cluster IP and port to that local port. The following sections explain what a Service's Cluster IP is.
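
    As a sketch, enabling client-IP affinity on the Service defined above takes one extra field in its spec (the default value is "None", meaning a random backend):

    spec:
      sessionAffinity: ClientIP    # traffic from one client IP sticks to one backend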

    Note: in versions before 0.18, the Cluster IP was called the PortalNet IP.

    Service discovery for internal users

    By internal users we mean clients that access a Service from objects created within the Kubernetes cluster or from the cluster's agent nodes. To expose Services to internal users, Kubernetes supports two mechanisms: environment variables and DNS.

    Environment variables

    When the kubelet starts a Pod on a node, it sets a series of environment variables in the Pod's containers for every Service currently running, so that the Pod can access those Services. In general these are the {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT variables, where {SVCNAME} is the Service name converted to upper case with dashes turned into underscores. For example, for the Service "redis-master" with TCP port 6379 and assigned Cluster IP address 10.0.0.11, the kubelet produces the following variables in newly created Pod containers:

    REDIS_MASTER_SERVICE_HOST=10.0.0.11
    REDIS_MASTER_SERVICE_PORT=6379
    REDIS_MASTER_PORT=tcp://10.0.0.11:6379
    REDIS_MASTER_PORT_6379_TCP=tcp://10.0.0.11:6379
    REDIS_MASTER_PORT_6379_TCP_PROTO=tcp
    REDIS_MASTER_PORT_6379_TCP_PORT=6379
    REDIS_MASTER_PORT_6379_TCP_ADDR=10.0.0.11

    Note that only Pods created after a Service exists will have that Service's environment variables.
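
    A container in such a Pod can then reach the Service through these variables; for example, a hypothetical shell one-liner for the redis-master Service above:

    redis-cli -h "$REDIS_MASTER_SERVICE_HOST" -p "$REDIS_MASTER_SERVICE_PORT"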

    DNS

    An optional (though strongly recommended) Kubernetes add-on is the DNS service. It tracks the cluster's Service objects and creates a DNS record for each of them, so that all Pods can reach Services through DNS.

    For example, if we have a Service called "my-service" in the Kubernetes namespace "my-ns", the DNS service creates a DNS record "my-service.my-ns". Pods in the same namespace can look up "my-service" to obtain the Service's assigned Cluster IP, while Pods in other namespaces must use the fully qualified name "my-service.my-ns" to obtain the Service's address.
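
    For instance, from inside a Pod in another namespace, a standard lookup of the fully qualified name returns the Service's Cluster IP (the address here is hypothetical):

    nslookup my-service.my-ns    # returns the assigned Cluster IP, e.g. 10.0.0.11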

    Pod IP and Service Cluster IP

    A Pod's IP address actually exists on a network interface (possibly a virtual device). A Service's Cluster IP is different: no network device answers for this address. It is used by kube-proxy's iptables rules to redirect traffic to a local port and then balance it across the backend Pods. The Service environment variables and DNS records described above both use the Service's Cluster IP and port.

    Take the image-processing program mentioned above as an example. When our Service is created, Kubernetes assigns it an address, 10.0.0.1. This address is allocated from the pool specified by the API server's --service-cluster-ip-range startup parameter (the portal_net parameter in older versions), for example --service-cluster-ip-range=10.0.0.0/16. Suppose the Service's port is 1234. Every kube-proxy in the cluster will notice this Service. When a proxy discovers a new Service, it opens an arbitrary port on the local node, sets up the corresponding iptables rules to redirect the Service's IP and port to this new port, and begins accepting connections to the Service.
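
    For a rough idea of what such a redirection rule looks like, the sketch below shows an iptables rule of the kind kube-proxy installs; the chain name and the local proxy port 43210 are assumptions, and the exact form varies by version:

    # redirect traffic for the Service's Cluster IP and port to kube-proxy's local port
    iptables -t nat -A KUBE-PORTALS-CONTAINER -d 10.0.0.1/32 -p tcp --dport 1234 \
        -j REDIRECT --to-ports 43210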

    When a client accesses the Service, the iptables rules take effect: the client's traffic is redirected to the port that kube-proxy opened for this Service, and kube-proxy selects a backend Pod at random to serve the client. This process is shown in the figure below:

    Under the Kubernetes network model, a client that uses the Service's Cluster IP and port to access the Service can be located on any agent node. For external clients to access the Service, the Service needs an externally accessible IP.

    External access to Services

    An IP assigned to a Service object from the Cluster IP range pool is only accessible internally, which is entirely appropriate if the Service is an internal tier of an application. If the Service is a frontend Service that provides business functions to customers outside the cluster, we need to give it a public IP.

    External visitors are visitors who access the cluster's agent nodes. To serve them, we can specify spec.publicIPs when defining the Service; under normal circumstances, a public IP is the physical IP address of an agent node. Just as with the virtual IPs assigned from the Cluster IP range, kube-proxy also provides iptables redirection rules for public IPs, forwarding the traffic to the backend Pods. With public IPs, we can use common Internet technologies such as load balancers to organize external access to the Service.

    The spec.publicIPs field is deprecated in newer versions; it is replaced by spec.type=NodePort. For a Service of this type, the system allocates a node-level port on every agent node in the cluster, and any client that can reach an agent node can access that port and thereby reach the Service.
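
    A sketch of the frontend variant of our Service using this newer mechanism (the nodePort value is a hypothetical choice from the allowed range):

    apiVersion: v1
    kind: Service
    metadata:
      name: image-service-public
    spec:
      type: NodePort
      selector:
        tier: backend
      ports:
      - port: 1234
        targetPort: 8080
        nodePort: 30080      # opened on every agent node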

    Labels and Label selectors

    Labels occupy a very important place in the Kubernetes model. Labels are key/value pairs attached to objects managed by Kubernetes, Pods being the typical example. They define identifying attributes of an object and are used to organize and select objects. Labels can be added to an object at creation time, and they can also be managed through the API on an existing object.

    Once an object's Labels are defined, other model objects can use Label selectors to define which objects they act on.

    There are two kinds of Label selectors: equality-based and set-based.

    Examples of equality-based selectors:

    environment = production
    tier != frontend
    environment = production, tier != frontend

    Of the selectors above, the first matches objects that have a Label with the key environment and the value production; the second matches objects that have the tier key but whose value is not frontend. Because Kubernetes uses AND logic, the third matches objects that are in production but are not frontend.

    Examples of set-based selectors:

    environment in (production, qa)
    tier notin (frontend, backend)
    partition

    The first selects objects that have the environment key with the value production or qa. The second selects objects that have the tier key but whose value is neither frontend nor backend. The third selects objects that have the partition key; no value is checked.
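
    Both kinds of selectors can also be used on the kubectl command line with the -l flag, assuming a kubectl version that supports both forms; a sketch:

    kubectl get pods -l environment=production,tier!=frontend
    kubectl get pods -l 'environment in (production, qa)'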

    Replication controllers and Services use Labels and Label selectors to bind dynamically to the objects they act on. A replication controller's definition specifies both the Labels for the Pods it creates and the selector it uses to match those Pods, and the API server should validate this definition. We can dynamically modify the Labels of a Pod created by a replication controller, for example for debugging, data recovery, and so on. Once a Pod is moved out of a replication controller's scope because of a Label change, the controller immediately starts a new Pod to keep the replica count at the specified value. For a Service, the Label selector selects the Service's backend Pods.
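
    As an illustration of the debugging scenario, relabeling one of a controller's Pods moves it out of the controller's scope, and the controller starts a replacement (the Pod name is hypothetical):

    kubectl label pods image-processor-rc-x1y2z tier=debug --overwrite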
