
Docker Core Technology and Implementation Principles

by Miles Warren
January 22, 2021

When it comes to virtualization, Docker is usually the first thing that comes to mind. After four years of rapid development, Docker has become a standard at many companies. It is no longer a toy used only during development: as a product widely deployed in production environments, Docker has a very mature community, a large user base, and a codebase that has grown considerably.

As the project has evolved, with features split into separate components and several confusing renames along the way, it has become harder to understand Docker's overall architecture.

Although Docker now consists of many components and its implementation is quite complex, this article does not aim to cover Docker's implementation details. Instead, it focuses on the core technology that made Docker possible: virtualization.

Docker emerged because modern back-end development genuinely needed a virtualization technology: one that keeps the development environment consistent with the production environment across development and operations. With Docker, we can also put the environment a program runs in under version control, eliminating the possibility of different results caused by differences in environment. However, even though these requirements drove the rise of virtualization technology, a polished product would still be impossible without the right underlying technologies. The rest of this article introduces several core technologies that Docker relies on; once we understand how they are used and how they work, we can clearly understand how Docker works.

Namespaces are a mechanism the Linux kernel provides for separating resources such as process trees, network interfaces, mount points, and inter-process communication. When we use Linux or macOS day to day, we rarely need to run multiple completely separate servers. But if we start several services on one server, those services can affect each other: each service can see the other services' processes and can access any file on the host machine. Much of the time this is not what we want. We would rather have different services running on the same machine be fully isolated, as if they were running on separate machines.

Otherwise, once one service on the server is compromised, the intruder can access every service and file on that machine, which is exactly what we want to avoid. In practice, Docker achieves the isolation between containers through Linux namespaces.
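To make this concrete, the snippet below is a minimal sketch in Go (Linux only, run as root) of the namespace primitive itself, not Docker's actual code: it launches a shell in new UTS, PID, and mount namespaces, so hostname changes and the process tree inside the shell are separated from the host.

```go
// namespaces_demo.go — start a shell in its own UTS, PID, and mount namespaces.
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr

	// Ask the kernel to create the child in new namespaces (Linux only).
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}

	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

Inside the spawned shell, `echo $$` prints 1 because the shell is PID 1 of its new PID namespace, and `hostname foo` no longer affects the host; remounting /proc inside the namespace is still needed before tools like `ps` reflect the isolated view.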

Docker has become a mainstream technology and is used in the production environments of many mature companies, but its core technologies are actually many years old. Linux namespaces, control groups (cgroups), and UnionFS underpin Docker's current implementation and are the most important reason Docker could emerge in the first place.
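To give a feel for the control-group piece, here is a minimal sketch, assuming a Linux host with cgroup v2 mounted at /sys/fs/cgroup, the memory controller enabled, and root privileges; the cgroup name "demo" and the 64 MB limit are illustrative assumptions, not Docker's defaults.

```go
// cgroup_demo.go — cap the memory of the current process with cgroup v2.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Create a new cgroup (hypothetical name for this sketch).
	cg := "/sys/fs/cgroup/demo"
	if err := os.MkdirAll(cg, 0755); err != nil {
		panic(err)
	}

	// Limit processes in this cgroup to 64 MB of memory.
	if err := os.WriteFile(filepath.Join(cg, "memory.max"), []byte("67108864"), 0644); err != nil {
		panic(err)
	}

	// Move the current process into the cgroup; the kernel now enforces
	// the limit on it and on any children it spawns.
	pid := []byte(fmt.Sprintf("%d", os.Getpid()))
	if err := os.WriteFile(filepath.Join(cg, "cgroup.procs"), pid, 0644); err != nil {
		panic(err)
	}

	fmt.Println("current process now runs under", cg)
}
```

Docker's runtime drives essentially the same kernel interface when you pass resource flags such as `--memory` to `docker run`.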

