"Small" is big: Docker containers and over 100,000 easily managed online stores

By Debbie Coleman, 2015-04-01 07:03

    Shopify is a company that provides e-commerce storefront solutions and currently serves more than 100,000 online stores (Tesla is among its users). The site runs primarily on Ruby on Rails, across 1,700 cores and 6 TB of RAM, and can respond to 8,000 user requests per second. To scale and manage the business more easily, Shopify adopted Docker and CoreOS. Shopify software engineer Graeme Johnson is writing a series of articles to share their experience; this is the second article in the series, and it describes how Shopify uses containers in production.

    The following is the translation:

    Why use container technology?

    Before we dig into the mechanics of building containers, let's discuss our motivation. Containers have the same potential in the data center that consoles had for gaming. In the early days of PC gaming, before you could start playing you typically had to install video or audio drivers. Consoles offered a different experience, one very similar to Docker:

    ; Predictable: the game cartridge is self-contained; it runs at any time, with nothing to download or install;

    ; Fast: game cartridges use read-only memory (ROM), so they are lightning fast;

    ; Simple: game cartridges are robust, largely child-proof, and for the most part true plug and play.

    Predictable, fast, and simple are all good things at scale. Docker containers provide the building blocks that make our data center easier to operate and that turn our applications into fully independent modules, units ready to run at any moment, just like game cartridges.


    Containerizing an application calls for a combination of development and operations skills. First, talk to your operations team: you need to be sure your container fully replicates your production environment. If you develop on OS X or Windows but deploy on Linux, use a virtual machine (for example, Vagrant) as a local test environment. The first step is to update your operating system and install supporting packages. Choose a base image that matches your production environment (we use Ubuntu 14.04), and don't get this wrong: you do not want to fight container conversion and an operating system/package upgrade at the same time.
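    As an illustration, a minimal Vagrantfile for such a local test VM might look like this (the box name and provisioner are assumptions, not Shopify's actual setup):

```ruby
# Hypothetical Vagrantfile: a Linux VM matching production, with Docker installed.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"   # Ubuntu 14.04, matching the production base image
  config.vm.provision "docker"        # Vagrant's built-in Docker provisioner
end
```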

    Select the container type

    Docker gives you plenty of room to choose a container type, from "thin" single-process containers to "fat" containers that feel like a traditional virtual machine (for example, Phusion).

    We chose the "thin" approach, removing everything not strictly needed from inside the container. Although the choice between the two is difficult, we prefer thin containers because they consume less CPU and memory. This approach is described in detail on the Docker blog.

    Environment configuration

    In production we use Chef as our deployment tool to manage every node in the system. We could easily run Chef inside each container, but there are services we do not want running in every container, such as log indexing and runtime-metrics collection. Running Chef would also install many services redundantly in every container; rather than accept that wasted duplication, we chose to have every container share a single copy of these services running on the Docker host.

    The key to making containers lightweight was converting our Chef recipes into a Dockerfile (we later replaced this with a custom Docker build process, which a later article will cover). Docker's arrival was a godsend here: it forced the operations staff to assess what is actually in the production environment, and to review and organize what is really needed across the system's life cycle. In this phase, be as ruthless as you can about discarding things from the system, but also be careful during code review.

    In fact, the whole process was not as hard as it sounds. Our team ended up with a 125-line Dockerfile that defines the base image shared by all containers at Shopify. The base image contains 25 packages, spanning programming-language runtimes (Ruby, Python, and Node), development tools (Git, Vim, build-essential, and Go), and a number of shared libraries. It also ships a set of utility scripts for tasks such as starting Ruby with tuned parameters or sending monitoring data to Datadog.

    On top of this, each application can add its own specific requirements to the base image. Even so, our largest application only adds a few extra operating-system packages, so in general our base image stays lean and hard-working.
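    A heavily abridged sketch of what such a base-image Dockerfile might contain (the package list is illustrative, not Shopify's actual 125-line file):

```dockerfile
# Illustrative base image: language runtimes, dev tools, shared libraries.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y \
    ruby python nodejs \
    git vim build-essential golang \
    && rm -rf /var/lib/apt/lists/*
# Utility scripts shared by all containers (e.g. a tuned Ruby launcher).
COPY scripts/ /usr/local/bin/
```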

    The rule of 100

    When deciding which services to put into a container, first imagine 100 small containers running on the same host, and then ask whether each of those 100 containers really needs its own copy of the service, or whether they are better off sharing a single host-wide copy.

    Here are some examples of how we used the rule of 100 to decide how to use containers:

    Log indexing: logs are critical for diagnosing errors, all the more so with containers, where the filesystem disappears after the container dies. We deliberately avoided changing applications' logging behavior (such as forcing them to redirect to syslog) and let them keep writing logs to the filesystem. Running 100 log agents seemed wrong, so we built a background process that handles the following important tasks:

    ; it runs on the host and subscribes to Docker events;

    ; when a container starts, it configures the log indexer to watch that container;

    ; when a container is destroyed, it removes the indexing instructions.

    To make sure all logs are indexed when a container exits, you may need to delay destruction of the container slightly.
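    The host-side daemon can be sketched as a loop over the output of `docker events`. The event-line format below matches old (1.x) Docker clients and is an assumption, and the handler return values are placeholders for the real indexer calls:

```ruby
# Hedged sketch: react to container lifecycle events by (de)registering
# log-index configuration. In production this would consume `docker events`
# line by line; here handle_event is a pure function over one line.
def handle_event(line)
  # Assumed Docker 1.x event format: "TIME ID: (from IMAGE) ACTION"
  return unless line =~ /\A(\S+) (\h+): \(from (\S+)\) (\w+)\z/
  id, action = $2, $4
  case action
  when "start"       then "configure log index for #{id}"
  when "die", "stop" then "remove log index for #{id}"
  end
end
```

In the real daemon, something like `IO.popen("docker events") { |io| io.each_line { |l| handle_event(l.chomp) } }` would drive this loop.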

    Metrics: Shopify generates runtime metrics at several levels (system, middleware, and application), reported either by agents or by application code.

    ; Many of our metrics travel over StatsD; fortunately, we could configure Datadog on the host to receive traffic from the containers, and with the right configuration the StatsD address can be handed to the container;

    ; Because a container is essentially a process tree, a single monitoring agent on the host can see across container boundaries, so shared system monitoring comes for free;

    ; For a more container-centric view, consider the Docker-Datadog integration, which adds Docker metrics to the host's monitoring agent;

    ; Most application-level metrics work as-is: they either emit events via StatsD or talk directly to other services. It is important to give containers meaningful names, so that the metrics are easier to read.
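    As a concrete sketch of that last point, an application-level StatsD metric is just a small UDP datagram, and prefixing it with the container name (for example, unicorn-1) is what keeps the metrics readable. The metric name and scheme here are illustrative, not Shopify's:

```ruby
require "socket"

# Hedged sketch: emit a StatsD counter, prefixed with the container's name.
# (8125 is the conventional StatsD port; the naming scheme is an assumption.)
def send_statsd_counter(container_name, metric, value, host: "127.0.0.1", port: 8125)
  sock = UDPSocket.new
  sock.send("#{container_name}.#{metric}:#{value}|c", 0, host, port)
ensure
  sock.close if sock
end
```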

    Kafka: we use Kafka to stream events from the Shopify stack to our partners in real time. Our Ruby on Rails code produces Kafka messages and writes them into a SysV message queue; a background Go program drains the queue and sends the messages out to Kafka. This keeps the time spent in the Ruby process low and makes us better able to ride out a Kafka server outage. There is one downside: SysV message queues are part of the IPC namespace, so we could not use the queue to connect container to host directly. Our solution was to add a socket on the host side that puts messages into the SysV queue.

    Applying the rule of 100 takes some flexibility. In some cases you only need to write a little "glue" between components; in others, configuration is enough. In the end, you should arrive at containers holding everything your application needs to run, plus a host environment that provides the Docker runtime and the shared services.

    Containerizing your application

    With the environment ready, we can turn our attention to containerizing the application itself.

    We prefer thin containers that do exactly one thing: for example, a unicorn master and its workers serving web requests (unicorn is an HTTP server optimized for Unix and fast local-network clients), or Resque workers serving a particular queue (Resque creates Redis-backed background jobs, places them on queues, and executes them later; it is one of the most common background-job tools for Rails). Thin containers allow fine-grained scaling to match demand; for instance, we can increase the number of Resque workers checking for spam in response to a spam attack.

    We found a few standard conventions useful for laying out code in the container:

    ; the application always lives under /app inside the container;

    ; the application usually exposes its service on a single port;

    We also established some conventions for the container's git repo:

    ; /container/files holds a file tree that is copied into the container when it is built. For example, to request Splunk indexing of the application logs, adding a /container/files/etc/splunk.d/inputs.conf file to your git repo is enough (this transfers control of log indexing to developers, a subtle but significant change: it used to be ops work);

    ; /container/compile is a shell script that compiles your application and produces a container ready to run at any time. Creating this file and adapting it to your application is the most complicated part;

    ; /container/roles.json holds, in machine-readable form, the command lines used to run the workload. Many of our applications run the same code base in multiple roles, some handling web traffic while others process background tasks. This part was inspired by Heroku's Procfile. Here is an example of a roles.json file:
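    An illustrative roles.json in the spirit described above (the exact schema is Shopify-internal, so the keys and commands here are assumptions):

```json
{
  "web":    "bundle exec unicorn -c config/unicorn.rb",
  "resque": "bundle exec rake environment resque:work QUEUE=*"
}
```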

    We use a simple Makefile to drive the build, and it can also be run locally. Our Dockerfile hands off to the compile script, whose goal is to produce a container that is ready to run the instant it starts. One of Docker's key advantages is super-fast startup, and you should not spoil it with extra work at launch time. To achieve this you need to understand your entire deployment process. Some examples:

    ; We use Capistrano (Capistrano is an open-source tool for running scripts on servers, mainly used to deploy web applications) to deploy code to machines, and asset compilation previously happened as part of that deployment. By moving asset compilation into the container build, deploying new code became simpler and several minutes faster.

    ; Our unicorn master fetched data from database tables as it started. Not only is this slow, but our smaller container size means we would need more database connections. Instead, it is possible to fetch this data at container build time, to speed up startup.
    In this case, the compilation phase consists of the following logical steps:

    ; bundle install

    ; asset compile

    ; database setup
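    Putting it together, the application-level Dockerfile on top of the base image can be sketched as follows (the image name is an assumption; the /container/compile convention comes from the repo layout described earlier):

```dockerfile
# Hypothetical application Dockerfile following the conventions above.
FROM shopify/base-image           # the shared 125-line base image
ADD . /app                        # code always lives under /app
RUN /app/container/compile        # compile at build time so startup stays instant
```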

    To keep this post a reasonable size, we have simplified some details. Key management is one major detail we have not discussed here: do not check secrets into source control. We are now used to encrypting secrets in code, and a blog post devoted to this subject will come soon.

    Debugging and details

    An application running in a container behaves the same as it does outside the container in the vast majority of cases. In addition, most of your debugging tools (for example strace, gdb, and the /proc filesystem) work from the Docker host. The tools nsenter and nsinit let you enter the namespaces of a running container for debugging.

    Docker 1.3.0 provides a new tool, docker exec, which can inject a process into a running container. Unfortunately, if you need the injected process to run as root, you still need nsenter, and in some cases things may not behave as expected.

    Process layering

    Although we run lightweight containers, we still want an init process (pid = 1) that integrates tightly with our monitoring, background management, and service-discovery tools, and that gives us fine-grained health monitoring.

    Alongside the init process, we add a ppidshim process (pid = 2) to each container, and the application runs as pid = 3. Because of the ppidshim, the application does not inherit directly from init, which keeps it from thinking it is a daemon and behaving accordingly, with unfortunate consequences.

    The final process hierarchy looks like this: init (pid = 1) → ppidshim (pid = 2) → application (pid = 3).


    If you adopt containers, you will probably modify existing run scripts or write new ones that include docker run calls. By default, docker run proxies signals to your application, so you must understand how your application interprets signals.

    The UNIX convention is to request an orderly shutdown with SIGTERM, and you should make sure your application honors it. Resque, for example, uses SIGQUIT for orderly shutdown and SIGTERM for emergency shutdown; fortunately, Resque can be configured to shut down gracefully on SIGTERM instead.
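    A minimal sketch of this signal mapping, assuming a worker loop you control (Resque's real implementation differs):

```ruby
# Hedged sketch: trap SIGTERM/SIGQUIT so `docker stop` (which sends SIGTERM)
# produces an orderly shutdown instead of an abrupt one.
Worker = Struct.new(:stop) do
  def install_signal_handlers
    %w[TERM QUIT].each do |sig|
      Signal.trap(sig) { self.stop = true }   # request orderly shutdown
    end
  end

  def run
    sleep 0.05 until stop   # stand-in for the real work loop
    :clean_shutdown
  end
end
```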


    We chose container names that describe the workload (for example, unicorn-1 or resque-2) and combined them with the host's name for easy traceability. We use Docker to stamp the resulting name into the container's hostname, which makes most applications report the correct value. Some programs (Ruby) ask the hostname for the short name (unicorn-1) rather than the expected FQDN. Because Docker manages /etc/resolv.conf, and our current version does not allow arbitrary changes to it, we rewrote gethostname() and uname() in a library injected via LD_PRELOAD. The end result is that monitoring tools publish the hostname we want without any application code changes.

    Registry and deployment

    We found that building a container amounts to a constant process of debugging until it replicates "bare metal" behavior. If you are sensible, you will automate your container builds.

    So that anyone can push, we use GitHub commit hooks to trigger container builds, and we log the commit status during the build to show whether the build succeeded. We use the git commit SHA as the container's "docker tag", so you can see exactly what version of the code a container holds. To make scripting and debugging easier, we also write the SHA into a file inside the container (/app/REVISION).

    Once your builds are healthy, you will want to push containers to a central registry. To increase deployment speed and reduce external dependencies, we chose to run the registry inside our own data center. We run multiple copies of the standard Python registry behind an Nginx reverse proxy that caches GET requests, as shown in the figure below:
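    A hedged sketch of such an Nginx front end (the ports, cache sizes, and paths are assumptions, not Shopify's actual configuration):

```nginx
# Illustrative reverse proxy for multiple registry replicas, caching GETs.
proxy_cache_path /var/cache/nginx/registry keys_zone=registry:16m max_size=50g;

upstream registries {
    server 127.0.0.1:5000;   # standard Python registry, replica 1
    server 127.0.0.1:5001;   # replica 2
}

server {
    listen 80;
    location / {
        proxy_pass http://registries;
        proxy_cache registry;
        proxy_cache_valid 200 1h;   # cache successful GETs (image layers)
    }
}
```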

    We found that the combination of a large network interface (10 Gbps) and the reverse proxy is very effective at solving the "thundering herd" problem that arises when many Docker hosts request the same image during a code deploy. The proxy approach also lets us run several kinds of registry, with automatic failover in case of failure.

    If you follow this blueprint, you can automate your container builds, store the containers safely in a central registry, and blend all of it into your deployment process.
