Technological change: Kubernetes
This paper introduces how, in the Internet era, the evolution of container and virtualization technology has shaped the requirements of distributed computing: centralized integration and allocation of resources, effective scheduling, dynamic extension, elastic scaling, service sharing, and the high availability characteristic of big-data systems. Google carried the proven features of its internal Borg system into the open-source Kubernetes project; after a year of community effort, the v1.0 release became able to run in production environments. Kubernetes has greatly advanced the research and development of microservice architecture, stimulated rapid iteration across the container ecosystem, and provided many IT and Internet companies with a technical foundation for building efficient large-scale computing systems.
Container technology greatly improves computing power and changes the computing model
Container virtualization has existed for many years. Starting in the 2000s, the kernel foundations of containers, namespaces and cgroups, appeared in succession, followed by LXC, a lightweight tool for managing the container life cycle. In 2013 Docker released version 0.10; as a higher-level container engine built on LXC, it greatly simplified the creation and management of containers and spread container virtualization to a wide audience. Compared with LXC's complex operating procedures and heavy reliance on the user's kernel knowledge, Docker lets users manage containers with a few simple command-line operations. Its most direct convenience is the fast packaging and migration of service images, but its potential value goes beyond greatly simplifying software delivery, deployment, and traditional operations: its application model gradually began to change established ways of thinking about software. People stopped talking about the demise of SOA and came to recognize instead the rise of microservice architecture.
Figure 1 Docker container building scheme
Internationally, nearly all cloud-computing companies began, to some extent, to support and integrate Docker. In June 2014, Microsoft, Amazon, IBM, Google, Facebook, Twitter, Red Hat, Salesforce, and many other companies gathered as Docker supporters at DockerCon to discuss, and look ahead to, the future ecosystem of container technology.
Container virtualization partitions host computing resources at a finer granularity through kernel-level resource sharing and isolation: a service runs at process granularity inside its own PID namespace, network stack, and system environment. Container technology strengthens control over resources and is a further abstraction of the operating system. Compared with traditional virtual machines, containers share the host kernel at runtime and are isolated only by separate process and network namespaces, which reduces startup overhead and uses resources more efficiently. Moreover, running a container does not require booting or shutting down an entire operating system, only starting and terminating processes within their own isolated space, so containers can be created and deleted quickly. Container technology also greatly simplifies application deployment: an application and its dependencies can be bundled into a single image, stored in and distributed from a Registry with a single command, and easily migrated to any other Linux environment.
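As a minimal sketch of the packaging workflow described above (the base image, file names, and registry address are hypothetical, not taken from the paper), a Dockerfile bundles a service and its dependencies into a single image:

```dockerfile
# Package a hypothetical Python web service into a single image.
FROM python:3-slim
WORKDIR /app
# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# The service runs as an isolated process inside the container.
CMD ["python", "app.py"]
```

Built with `docker build -t registry.example.com/myapp .`, the image can be stored in a Registry with a single `docker push` and started on any other Linux host with a single `docker run`.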
Lightweight container technology has also promoted the evolution of microservice architecture in both theory and practice. Containers allow services to be deployed independently in separate processes and to interact through lightweight communication mechanisms. A system can therefore be designed around business functions, with the boundary between each service clearly defined and the services decoupled from one another. Compared with the traditional approach of library references, where an upgrade means relocating the whole application, the microservice model decomposes an application into a series of services running in separate processes. Its significance is that an obsolete component can be upgraded and redeployed locally without affecting the whole system, although cross-process calls certainly require clear boundaries and well-defined responsibilities from the very beginning of the design. Modular services keep components highly decoupled, and each component of a microservice system can be deployed independently. This changes the late stages of the software life cycle considerably: in the production release phase, even a partial failure no longer forces the rest of the system to stop working. The rapid deployment and rapid configuration of such services in turn lead inevitably to the continuous delivery of software applications.
The convenience of a container lies in the rapid deployment and migration of a single unit, but in a real production environment a single host often cannot satisfy the resource demands, and in a multi-host setup dynamic resource scheduling and application load balancing are also needed. Going deeper, service discovery must reach across the whole cluster, and at the same time containers need a stable and reliable virtual network between hosts. At the current stage of the Docker engine, containers on one host are connected through a bridge, but cross-host communication is still in the development phase, and Libnetwork, the intended future networking solution, still cannot be applied in production environments.
Kubernetes: a large-scale container cluster management tool derived from Borg
While Docker, as a higher-level container engine, was developing rapidly, Google also began to reveal its own accumulation of container technology and cluster experience. Inside Google, container techniques had been in use for many years: the Borg system manages applications running in tens of thousands of containers, and with its support, Google Search, Gmail, and Google Maps can all easily obtain enormous data-center resources to back their services.
Borg is a cluster manager. Its deployment runs a number of clusters, each of which may consist of hundreds of thousands of connected servers, and Borg is at all times handling hundreds of thousands of Jobs submitted by many applications: receiving, scheduling, starting, stopping, restarting, and monitoring them. As the Borg paper puts it, Borg provides three benefits: 1) it hides resource management and failure handling, so users need only focus on application development; 2) it offers highly available, highly reliable service; 3) it can run workloads across tens of thousands of machines joined into a cluster.
As one of Google's competitive technical advantages, Borg was of course kept as a trade secret, but when Twitter engineers set about building their own Borg-like system (Mesos), Google assessed the situation and launched a new open-source tool derived from its own technology and theory.
In June 2014, Google cloud-computing expert Eric Brewer announced the new open-source tool at a conference in San Francisco. Its name, Kubernetes, means helmsman or pilot in Greek, which happens to match its role in container cluster management: as the commander of the loaded containers, it shoulders the duties of globally scheduling and monitoring the many "cargoes" on board.
Although one of Google's purposes with Kubernetes is to promote its Compute Engine (Google Compute Engine) and App Engine (Google App Engine), the emergence of Kubernetes lets many more Internet companies enjoy the advantages of pooling connected machines into cluster resources.
Kubernetes raises computing resources to a higher level of abstraction and, through the careful combination of containers, delivers the final application service to users. From the very beginning its model took the connection of containers across machines into account: it supports multiple network solutions and, at the Service level, builds a cluster-wide SDN. Its purpose is to place service discovery and load balancing within the containers' reach, so that individual services can communicate with one another in a transparent and convenient way, which provides a platform for putting microservice architecture into practice. The Pod, the smallest object Kubernetes can operate on, is by design native support for microservice architecture.
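As a minimal sketch of that model in the v1.0-era API (all names and the image address are illustrative, not from the paper), a Service selects Pods by label and gives them one stable cluster-internal endpoint, which is what makes service discovery and load balancing reachable from any container:

```yaml
# A Pod: the smallest unit Kubernetes schedules and operates on.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod                 # hypothetical name
  labels:
    app: web
spec:
  containers:
  - name: web
    image: registry.example.com/web:1.0   # hypothetical image
    ports:
    - containerPort: 8080
---
# A Service: a stable cluster-internal endpoint that load-balances
# across all Pods whose labels match its selector.
apiVersion: v1
kind: Service
metadata:
  name: web-svc                 # hypothetical name
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
```

Any container in the cluster can then reach the application through the Service address rather than through the address of an individual Pod.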
The Kubernetes project grew out of Borg; it can be said to assemble the essence of Borg's design ideas and to absorb the experience and lessons of the Borg system.
Figure 2 Borg architecture
Borg defines a Job as an activity executed within certain constraints, to which the relevant requirement information can be attached, and the user can send commands to a Job by means of RPC. A Job can contain multiple Tasks and controls their states: Pending, Running, and Dead. But a Job is only a set of constraints on its Tasks; for organizing multiple Jobs at a higher level it is not flexible enough. Grouping can only be approximated through naming conventions on Job names, with custom name formats defined and parsed to filter out the desired group. Kubernetes' ReplicationController and Pod absorb, in some sense, the Job-Task notion of multiple task instances: the user places the actual containers in a Pod, sets the number of Pod replicas dynamically through a ReplicationController, and achieves load balancing across the instances at the Service level. At the same time, Kubernetes adds labels as properties of Services, ReplicationControllers, and Pods: through this key-value approach, Services, ReplicationControllers, and Pods are organized dynamically, and users can locate the instances of an application with a label query. Compared with the single, fixed grouping in Borg, labels greatly increase the flexibility of the dynamic relationships between the different elements at Kubernetes' various levels.
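The ReplicationController described above can be sketched as follows (names and the image address are illustrative): it keeps the declared number of Pod replicas running, and its label selector is the dynamic key-value grouping that replaces Borg's naming-convention shortcuts:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc              # hypothetical name
spec:
  replicas: 3               # desired number of Pod copies; adjustable at runtime
  selector:
    app: web                # label query defining membership in the group
  template:                 # Pod template stamped out for each replica
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: registry.example.com/web:1.0   # hypothetical image
```

The replica count can then be changed dynamically, and a label query such as `kubectl get pods -l app=web` locates every instance of the application regardless of how it was created.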
Figure 3 Job-Task task instances
The Alloc in the Borg project corresponds to the Kubernetes Pod: it is a reserved set of resources on a host in which one Task or a group of Tasks run; the Tasks in the group interact relatively closely and present themselves to the outside as a whole. Similarly, the Pod is the ingenious composition by which Kubernetes lets multiple containers share resources, including disk and network; communication between the containers needs nothing more than localhost. Pods also support process state detection and hooks, and with detailed configuration of these, containers can combine high performance with high fault tolerance. To the upper-level caller, a Pod holding multiple containers is created, run, and deleted as a whole.
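As a sketch of this Alloc-like grouping (container names, image addresses, paths, and ports are illustrative), two containers in one Pod share a volume and the network namespace, talk over localhost, and carry a liveness probe and a lifecycle hook:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-cache        # hypothetical name
spec:
  volumes:
  - name: shared-data         # disk shared by both containers
    emptyDir: {}
  containers:
  - name: web
    image: registry.example.com/web:1.0   # hypothetical image
    volumeMounts:
    - name: shared-data
      mountPath: /data
    livenessProbe:            # process-state detection: restart on failure
      httpGet:
        path: /healthz
        port: 8080
    lifecycle:
      preStop:                # hook run just before the container stops
        exec:
          command: ["/bin/sh", "-c", "sleep 5"]
  - name: cache
    image: memcached          # reachable from "web" at localhost:11211
    ports:
    - containerPort: 11211
```

The two containers are scheduled, started, and deleted together: the caller only ever deals with the Pod as a whole.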
In terms of monitoring and debugging, Borg provides many interfaces and debugging tools: users can quickly locate the log information of a given Job and use it to trace event and error details of the application and the underlying services. Kubernetes draws on much of this work from Borg, using cAdvisor to monitor hosts and instances, collecting logs with an Elasticsearch/Kibana stack, and so on.
In architecture, Kubernetes' master-slave mode and the lightweight API interaction among its many components also come from the inspiration of Borg. The Borg system is decomposed into multiple processes, such as the resource scheduler and the core interface, that run separately. Kubernetes goes a step further: the Master handles only the core request processing and the maintenance of object state, while other components such as the scheduler and controllers interact with the Master through a REST API. Kubernetes itself is thus a typical application of microservice architecture.
As a container cluster management tool, Kubernetes iterated to v1.0 on July 22, 2015 and was officially announced, which means the open-source container orchestration system can formally be used in production environments. At the same time, Google joined the Linux Foundation and other partners to found the CNCF (Cloud Native Computing Foundation), with Kubernetes as the first open-source project brought under CNCF management, to drive the development and progress of the container technology ecosystem. The Kubernetes project distills Google's experience and lessons from the past decade of production environments: from Borg's multi-task Alloc resource blocks to Kubernetes' multi-replica Pods, and from Borg's Cell cluster management to Kubernetes' cluster design concepts. While higher-level engines such as Docker drove the rise and popularization of container technology, Kubernetes alone provides distinctive insights and new ideas for container cluster management.