
Technological change: Kubernetes container management

By Frances Freeman, 2015-08-14 13:17

    This article introduces how, in the Internet era, the evolution of container and virtualization technology has created a need to integrate and centrally allocate distributed computing resources, and to build system models with the effective scheduling, dynamic extension, elastic scaling, service sharing, and high availability that characterize the big-data era. Google carried the proven features of its internal Borg system into the open-source Kubernetes project; after a year of community effort, the v1.0 release can now run in production environments. Kubernetes has greatly advanced the development of microservice architectures, stimulated rapid iteration of the surrounding container ecosystem, and given many IT and Internet companies a technical foundation for building efficient, large-scale computing systems.

    Container technology greatly improves computing power and changes the computing model

    Container virtualization has existed for many years. Beginning in the early 2000s, the kernel building blocks on which containers rest, namespaces and cgroups, appeared in succession, and the lifecycle of containers was later managed by the lightweight LXC toolset. Following Docker's first release in 2013, version 0.10 and its successors established Docker as a higher-level container engine built on LXC that greatly simplified the creation and management of containers and drove the spread and popularization of container virtualization. Compared with LXC's complex operating procedures and heavy reliance on the user's knowledge of the kernel, Docker lets users manage containers with simple command-line operations. The most direct convenience it brings is the fast packaging and migration of service images. Its potential value is not limited to greatly simplifying software delivery, deployment, and traditional operations; its characteristics have also begun to change traditional thinking about software models: people no longer talk about the demise of SOA, but have gradually recognized the rise of the microservice architecture.
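
    As a concrete illustration of that command-line simplicity, the sketch below is not from the original article; the image name demo-web, the nginx base image, and the ports are illustrative assumptions:

        # Dockerfile -- describes how the service image is built
        FROM nginx                            # assumed base image
        COPY ./site /usr/share/nginx/html     # the application's static content

        # Shell -- one command builds the image, one runs it as a container
        docker build -t demo-web .
        docker run -d -p 8080:80 --name web demo-web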

    Figure 1: Docker container build scheme

    Internationally, cloud-computing companies have, to some extent, almost all begun to support and integrate Docker. In June 2014, Microsoft, Amazon, IBM, Google, Facebook, Twitter, Red Hat, Salesforce, and many other companies gathered at DockerCon as Docker supporters to discuss and look ahead to the future ecosystem of container technology.

    Container virtualization shares and isolates kernel resources, further subdividing the host's computing resources so that a service runs, at process-level granularity, inside its own PID namespace, network stack, and system environment. Container technology strengthens control over resources and is another layer of abstraction over the system. Compared with a traditional virtual machine, a running container shares the host's kernel and is isolated only into separate process and network namespaces, which reduces the overhead of starting processes and uses resources more efficiently. In addition, operating a container does not require booting or shutting down an entire operating system, only terminating the processes running in the container's own isolated space, so containers can be created and deleted quickly. Container technology also greatly simplifies application deployment: an application and its dependencies can be bundled into a single image and stored in a Registry, a single command completes the deployment, and the image can be easily migrated to any other Linux environment.
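
    A hedged sketch of that packaging-and-migration path follows; the registry host, repository name, and tag are placeholders rather than details from the article:

        # On the build host: tag the image and push it to a registry
        docker tag demo-web registry.example.com/demo/web:1.0
        docker push registry.example.com/demo/web:1.0

        # On any other Linux host running Docker: a single command pulls and starts it
        docker run -d -p 8080:80 registry.example.com/demo/web:1.0

        # The container's processes live in their own PID namespace
        docker exec <container-id> ps aux     # typically lists only the container's processes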

    Lightweight container technology has also pushed forward the theory and practice of the microservice architecture. Containers allow services to be deployed independently as separate processes that interact through lightweight communication mechanisms. A design can therefore be divided along business functions, with a clear boundary defined between each service and the services decoupled from one another. Compared with the traditional library-reference approach, where an upgrade means relocating the whole application, the microservice model decomposes an application into a series of services running in their own processes. Its significance is that an obsolete component can be upgraded and redeployed locally without affecting the whole system, although cross-process calls certainly require clear boundaries and responsibilities to be considered at the start of the design. Modular services make the components highly discrete, and every component of a microservice system can be deployed independently. This greatly changes the later stages of the software life cycle: in the release-to-production stage, even a partial failure does not force the rest of the system to stop working. Rapid deployment and rapid configuration are prerequisites for such services and inevitably lead to continuous delivery of software.
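
    As a minimal illustration of services running as separate processes and talking over a lightweight mechanism, the sketch below uses hypothetical service names and images that are not from the article:

        # Start a catalog service in its own container (its own process and namespaces)
        docker run -d --name catalog demo-catalog

        # Start a storefront service that reaches the catalog over HTTP via the link alias
        docker run -d --link catalog:catalog -p 8080:8080 demo-storefront
        # Inside the storefront, calls such as http://catalog:5000/items cross process
        # boundaries, so either service can be upgraded and redeployed on its own.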

    The convenience of containers lies in rapid single-host deployment and migration, but in a real production environment a single host often cannot satisfy the resource demands, and with many hosts it also becomes necessary to schedule resources dynamically and load-balance the applications. Going a step further, service discovery must reach across the whole cluster, and at the same time containers need a reliable and stable virtual network between hosts. As the Docker engine has progressed, container connectivity on a single host is established through a bridge, but cross-host communication is still under development, and Libnetwork, the intended future networking solution, cannot yet be used in a production environment.
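
    A short sketch of what this looks like on a single host, using standard Linux tools rather than anything described in the article:

        # Containers on one host attach to the local docker0 bridge created by the engine
        ip addr show docker0      # the bridge device and its subnet
        brctl show docker0        # the veth interfaces of the running containers

        # Containers on the same host reach each other through this bridge, but there is
        # no built-in route to containers on another host; closing that gap is what
        # Libnetwork and cluster-level networking solutions aim to do.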

    Kubernetes, a large-scale container cluster management tool derived from Borg

    While Docker was developing rapidly as a higher-level container engine, Google began to look at its own accumulated container and cluster technology and what it could contribute. Inside Google, container technology had already been used for many years: the Borg system manages running applications in tens of thousands of containers, and with its support Google Search, Gmail, and Google Maps can easily obtain resources from vast data centers to back their services.

    Borg is a cluster manager. Within its system run a number of clusters, each of which may consist of hundreds of thousands of connected servers, and Borg is constantly handling the hundreds of thousands of Jobs submitted by many applications: receiving, scheduling, starting, stopping, restarting, and monitoring them. As the Borg paper puts it, Borg provides three benefits: 1) it hides resource management and error handling, so users only need to pay attention to developing their applications; 2) it offers highly available, highly reliable service; 3) it lets workloads run across tens of thousands of machines joined in a cluster.

    As one of Google's competitive technological advantages, Borg was of course treated as a trade secret and kept hidden, but when Twitter engineers carefully built their own Borg-like system (Mesos), Google assessed the situation and launched a new open-source tool derived from its own technology and theory.

    In June 2014, Google cloud-computing expert Eric Brewer announced the new open-source tool at a conference in San Francisco. Its name, Kubernetes, means helmsman or pilot in Greek, which happens to match its role in container cluster management: it is the commander of the shipping containers, shouldering the duties of globally scheduling and monitoring the operation of so much cargo.

    Although one of Google's purposes with Kubernetes is to promote its surrounding Google Compute Engine and Google App Engine, the emergence of Kubernetes lets many more Internet companies enjoy the advantage of linking computers into a pooled cluster resource.

    Kubernetes abstracts computing resources at a higher level: by carefully combining containers, it delivers the final application services to users. From the very beginning, the Kubernetes model took into account the need for containers to connect across machines, supporting multiple networking solutions and, at the Service level, building a cluster-wide SDN network. Its goal is to bring service discovery and load balancing within reach of the containers, giving individual services a transparent and convenient way to communicate with one another and providing a platform on which to practice the microservice architecture. The Pod, the smallest object Kubernetes can operate on, natively supports that microservice architecture.
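
    To ground these concepts, here is a hedged sketch of a Pod and a Service in the v1 API of that era; the names, labels, images, and ports are illustrative assumptions, not details from the article:

        # pod.yaml -- the smallest schedulable object: one or more tightly coupled
        # containers that share a network namespace
        apiVersion: v1
        kind: Pod
        metadata:
          name: web
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx
            ports:
            - containerPort: 80

        # service.yaml -- a stable virtual endpoint that load-balances across every Pod
        # whose labels match the selector, giving cluster-wide discovery
        apiVersion: v1
        kind: Service
        metadata:
          name: web
        spec:
          selector:
            app: web
          ports:
          - port: 80
            targetPort: 80

        # Create both with the command-line client
        kubectl create -f pod.yaml -f service.yaml

    In practice a ReplicationController of that release would usually manage the Pods, but a bare Pod keeps the sketch minimal.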

    The Kubernetes project grew out of Borg; it can be said to bring together the essence of Borg's design ideas and to absorb the experience and lessons of the Borg system.