Why do microservices use Docker?
Docker was released as early as 2013, but at first very few people knew about it. Then in 2014, Martin Fowler popularized the concept of microservices, and two seemingly unrelated technologies finally came together to create today's success!
In recent years, many Internet companies have started following the trend and building Docker + microservices architectures. However, from what I have observed, some engineers use Docker without ever understanding why; as far as they are concerned, the company uses it, so they use it! And some companies, despite adopting Docker, have not changed their operations practices at all, wasting much of what Docker has to offer!
Hence this article. It will not teach you the Docker API; the official documentation already covers that comprehensively. Instead, it explains the advantages of Docker, and then why those advantages make it a good fit for a microservices architecture!
First we need to compare the advantages and disadvantages of physical machines, virtual machines, and containers. Rather than piling up concepts, three pictures tell the story!
A physical machine is like the villa below.
A virtual machine is like an apartment in the building below.
Finally, our container is like the capsule apartment below.
More formally, containers are a lightweight, portable, self-contained software packaging technology that lets an application run the same way almost anywhere. Containers share the host operating system's resources. Because a container shares the host's kernel, it cannot run a different operating system than its host: you cannot run Windows containers on a Linux server. It is just like the picture above: every capsule shares the common toilet and kitchen, and no capsule can build its own!
Advantages of containers
In the past: I remember back in 2012, our department needed to bring a new project online. Here is how I did it: log directly into the production server, copy an existing Tomcat, change the port number, drop the application into the webapps folder, and restart. And hand on heart, plenty of traditional companies still deploy this way.
So what is wrong with this approach?
Obviously, applications interfere with each other. If one application misbehaves and pins the CPU at 100%, every other application on that server goes down with it. A large application split into dozens of microservices is built by different teams, and those teams' skill levels vary. With this deployment method, your application may land on the same server as one written by a sloppy team; I trust you can imagine the result.
Now: with Docker, we package our application into a container image that carries everything the application needs: code, runtime environment, dependencies, configuration files, and more. Containers are isolated from one another at the process level, so what happens inside one container affects neither the host nor other containers, and applications no longer interfere with each other!
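As a minimal sketch of such packaging (the base image, jar name, and file paths here are illustrative assumptions, not from the original article), a Dockerfile for a Java microservice might look like this:

```dockerfile
# Bundle the runtime, the application jar, and its config into one image.
FROM eclipse-temurin:17-jre
WORKDIR /app
# Hypothetical artifact and config paths, for illustration only.
COPY target/order-service.jar app.jar
COPY config/application.yml config/application.yml
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Building this (`docker build -t order-service .`) yields a self-contained unit: the host needs nothing installed except Docker itself.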
In the past: once upon a time, conversations between us and the testers went like this.
Development: "Go set up the test environment exactly like the development environment; we need three identical sets!"
A few hours pass…
Test: "Can you help me see why it fails on startup? Is some parameter missing?"
And so the next few hours were spent in this "pleasant" back-and-forth with the tester! I suspect that at companies that don't use Docker, this same problem is still being fought today.
And conversations with the operations folks generally went like this.
Operations: "What were you developers thinking? You released a new war package and production went down!"
Development: "That's strange, it works fine on my machine. How can it fail in production?"
And so the next few hours passed in finger-pointing between development and operations! And in the end, the ones who suffer most are the users!
Now: since adopting Docker containers, the development, test, and production environments can be unified and standardized. The image serves as the standard deliverable: it runs as a container in development, test, and production alike, finally giving us a consistent application and consistent runtime dependencies across all three environments.
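One way to sketch this "build once, run everywhere" idea is a docker-compose file in which every environment runs the exact same image and only external configuration differs (the image name, tag, and env-file paths are illustrative assumptions):

```yaml
# docker-compose.yml: the same image in every environment;
# only the env file selected by DEPLOY_ENV differs.
services:
  order-service:
    image: registry.example.com/order-service:1.4.0
    env_file:
      - ./env/${DEPLOY_ENV:-dev}.env   # dev.env, test.env, or prod.env
    ports:
      - "8080:8080"
```

Because the image is byte-for-byte identical everywhere, "it works on my machine" arguments largely disappear: if it runs in test, the same bytes run in production.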
Under a microservices architecture, an application is split into dozens of microservices, and each one needs its own development, test, and production environments. Do the math yourself: with the traditional deployment method, how many environments would you have to set up by hand? I have heard of a company that spends an entire week setting up environments every time it starts a new project. Simply terrifying!
What's that? You tell me you already use Docker and still have these problems?
I have seen companies use Docker like this: yes, they run containers, but then they install an SSH server inside each container. What follows is astonishing: the deployment process has not changed at all. They SSH straight into the container and deploy everything by hand, exactly as before. I hardly know what to say. Is your Docker just for show to the boss?
Lightweight and efficient
In the past: in 2016 I was working at another big company. Things there were a bit more standardized: one application per virtual machine. My strongest impression at the time was that virtual machines are heavy: slow to provision and hungry for resources, so a single physical machine could host only a handful of them!
Compared with a virtual machine, a container only has to package the application and the dependencies the application needs, giving it a lightweight runtime environment and far higher hardware utilization. In a microservices architecture, some services come under heavy load and must be deployed as clusters, perhaps across dozens of machines. For small and medium-sized companies, doing that with virtual machines costs too much; with containers, the same physical machine can support thousands of containers, which saves real money!
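Density only works if one container cannot starve the rest, so in practice each container gets explicit resource caps. A sketch in docker-compose form (the service name and limit values are illustrative):

```yaml
services:
  order-service:
    image: registry.example.com/order-service:1.4.0
    # Hard caps: throttle the container at half a CPU core and
    # kill it if it exceeds 512 MiB of memory.
    cpus: "0.5"
    mem_limit: 512m
```

The same limits are available on the command line as `docker run --cpus 0.5 --memory 512m`.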
A note from the author: I have always felt this particular selling point is a bit of smoke and mirrors.
For example, people say containers start up fast. But think about your own work: how often do you actually sit around restarting virtual machines?
People also say virtual machines consume more resources? At most companies, server utilization is below 50% anyway; huge amounts of CPU, memory, and local disk sit idle all year round, so the overhead of a virtual machine merely wastes resources that were being wasted already. For traditional applications, then, I believe adopting Docker may not bring the business any direct benefit, while the problems hit along the way will certainly bring it trouble. Traditional enterprises should not blindly follow the trend: for them, virtual machines are actually enough.
Summary
In the evolution of technology, Docker is just one stage along the way, not the destination. I am sure that even better deployment architectures will emerge in the future!