I. Preface


In the previous article (http://www.codecoder.top/dotnet/net-core-deployment-under-linux-tutorial.html), I introduced how to install the .NET Core SDK / .NET Core Runtime, Nginx, and MySQL in a Linux environment, how to deploy our ASP.NET Core MVC program to Linux, and how to use the supervisor daemon to keep our .NET Core program running.
If you have read that article and, like me, are a Linux novice, your first reaction may be that deploying a .NET Core project on IIS doesn't look so bad after all.

Is deploying a .NET Core project to Linux really this complicated? Is there a simpler way?

Yes, there is: Docker. Let's get to know it.

PS: The sample code here is still my earlier graduation project. As of this article, I have added Docker support to the program's repository; you can download it and try it yourself. After all, practice is how you really learn.

Code repository:


II. Step by Step


1. Install Docker & Docker Compose

In the process of delivering code, we occasionally run into this situation: everything works fine locally, but problems appear once the code is deployed to the test or production environment. Because the local, test, and production environments differ, we may be unable to reproduce these problems locally. Is there a tool that solves this well?
As the wheel of history rolls forward, container technology was born.

Docker, a virtualization container technology that has risen in recent years, isolates our running programs from the operating system. For example, to run a .NET Core program here, we no longer need to care about the underlying operating system, nor do we need to install the program's various dependencies on every machine that will run it. Instead, we package the application and all of its dependencies into an image file; as long as Docker is installed on another machine, it can run the program from the image we packaged.

1.1, uninstall Docker

Before installing Docker, we should check whether Docker is already installed on the current machine. To prevent conflicts with the Docker CE we are about to install, we first uninstall any previous version of Docker; if you are sure Docker is not installed on your machine, this step can be skipped.

In Linux, a long statement can be wrapped across lines with a backslash (\) followed by Enter; the commands here and in the rest of this article are written this way.

sudo yum remove docker \
    docker-client \
    docker-client-latest \
    docker-common \
    docker-latest \
    docker-latest-logrotate \
    docker-logrotate \
    docker-engine
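The backslash line-continuation used in commands like the one above can be shown in a minimal, runnable form (a generic echo example, nothing Docker-specific):

```shell
# These two invocations are equivalent; the backslash lets us split one long
# command across several lines for readability
one_line=$(echo alpha beta gamma)
wrapped=$(echo \
    alpha \
    beta \
    gamma)
echo "$wrapped"
```

Note that the backslash must be the very last character on the line; a trailing space after it breaks the continuation.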

1.2, add yum source

To install Docker CE, we first add the Docker CE repository to the yum sources; after that, we can install Docker CE directly with yum install. The whole installation process is as follows.

# Install a utility package that lets yum manage additional repositories
sudo yum install -y yum-utils \
    device-mapper-persistent-data \
    lvm2

# Set up the stable repository for Docker CE
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

# Install Docker CE
sudo yum install docker-ce docker-ce-cli containerd.io

Once Docker is installed, we can use the docker command to verify that the installation succeeded, and docker --version to view the installed Docker CE version.


1.3, Set Docker to start on boot

Once Docker is installed on our machine, we can register it as a self-starting service, so that if the server restarts, the Docker service starts automatically with it.

# Start the Docker service and enable it at boot
sudo systemctl start docker
sudo systemctl enable docker

# View the current status of the Docker service
sudo systemctl status docker

1.4, Hello World

Just as when learning a new language the first code we run almost always prints Hello World, Docker Hub has an image for exactly that. In countless Docker tutorials, the first thing to do after installing Docker is to pull this image and "tell" Docker: here I am.

Docker Hub is a registry of images and contains a great many image files. Because its servers are overseas, download speeds may not be ideal; providers such as Alibaba Cloud and Tencent Cloud offer accelerator services for Docker images that you can use as needed, and you can also set up your own private image registry.

docker run hello-world

The docker run command first looks for the image in our local image library and runs it if found.
If the image is not found locally, Docker automatically searches Docker Hub (a docker pull); if it exists there, it is downloaded and then run, and if it cannot be found at all, the command fails.
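That lookup order can be sketched as a toy shell function (purely illustrative; this is not how Docker is actually implemented, and the image lists are made up):

```shell
# Toy model of docker run's image lookup: local cache first, then Docker Hub
find_image() {
  name=$1
  # match the name as a whole word inside the space-separated lists
  case " $LOCAL_IMAGES " in *" $name "*) echo "run from local cache"; return;; esac
  case " $HUB_IMAGES "   in *" $name "*) echo "pulled from Docker Hub, then run"; return;; esac
  echo "error: image not found"
}

LOCAL_IMAGES="myapp"                      # images already on this machine
HUB_IMAGES="hello-world nginx mysql"      # images available remotely

find_image hello-world   # not local, so it would be pulled first
```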


1.5, install Docker Compose

In real project development we may have several application images. In this article's example, running our program in Docker takes three images: the application's own image, the MySQL Server image, and the Nginx image. To start the program we would have to manually supply each container's startup parameters, environment variables, container names, the link parameters between containers, and so on; more annoyingly, if any one step fails, the program won't run properly.
With Docker Compose, we write these commands once in a docker-compose.yml configuration file; each time we start our application, a single docker-compose command carries out all of these operations automatically.

# Download the docker-compose binary from GitHub
sudo curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" \
    -o /usr/local/bin/docker-compose

# Apply executable permissions to the downloaded binary
sudo chmod +x /usr/local/bin/docker-compose

# View the docker-compose version
docker-compose --version
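It may not be obvious how the download URL above is put together: $(uname -s) and $(uname -m) are command substitutions that expand to the current kernel name and machine architecture (for example Linux and x86_64), so the same command picks the right binary on any host:

```shell
# How the docker-compose download URL is assembled from the host's own details
compose_version="1.23.2"
url="https://github.com/docker/compose/releases/download/${compose_version}/docker-compose-$(uname -s)-$(uname -m)"
echo "$url"
```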



2, Build the program image

Once docker and docker-compose are installed on the server, we can start building our program image.
First we need to add Docker support to our program.
You can add a Dockerfile to your MVC project manually, or add Docker support via the right-click menu in VS.

A Dockerfile is like a build recipe that tells Docker which commands to run when building and running the image.
Open the Dockerfile that VS created for us automatically, and you can see the content is clearly divided into four stages.

We know that a .NET Core program depends on the .NET Core Runtime (CoreCLR), so for our program to run we need to pull the runtime image from the hub and build our application image on top of it.
At the same time, to avoid problems caused by differences in the base environment, the Runtime here should match the version of the .NET Core SDK used during development, so I am using the .NET Core 2.1 Runtime.

An image contains the application and all of its dependencies. Unlike a virtual machine, every container ultimately shares the host's operating system resources: a container runs as a separate process in user space on the host operating system.

PS: The copyright of the picture belongs to Microsoft's technical documentation. If there is any infringement, please contact me to delete it. Source address:

What is Docker?

An image can be seen as a small "virtual host". Here we create an /app path in the image as our program's working directory and expose port 80 to Docker, so that the program running in the image can be reached from outside through that port.

FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80

Because our application is a multi-layer solution, the final MVC project depends on the various class libraries in the solution as well as third-party components downloaded from NuGet, and at deployment time these must all be packaged into the referenced dlls.
So here we need to restore and build using the .NET Core CLI included in the .NET Core SDK.

As in the code below, we create a /src path inside the image, copy the solution's project files into this directory, and then restore the components our main program depends on with the dotnet restore command.
After restoring the dependencies, we can use the dotnet build command to produce the Release-version dll files and output them to the /app path created earlier.

FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY ["PSU.Site/PSU.Site.csproj", "PSU.Site/"]
COPY ["03_Logic/PSU.Domain/PSU.Domain.csproj", "03_Logic/PSU.Domain/"]
COPY ["03_Logic/PSU.Repository/PSU.Repository.csproj", "03_Logic/PSU.Repository/"]
COPY ["01_Entity/PSU.Entity/PSU.Entity.csproj", "01_Entity/PSU.Entity/"]
COPY ["02_Infrastructure/PSU.Utility/PSU.Utility.csproj", "02_Infrastructure/PSU.Utility/"]
COPY ["04_Rule/PSU.Model/PSU.Model.csproj", "04_Rule/PSU.Model/"]
COPY ["02_Infrastructure/PSU.EFCore/PSU.EFCore.csproj", "02_Infrastructure/PSU.EFCore/"]
COPY ["04_Rule/PSU.IService/PSU.IService.csproj", "04_Rule/PSU.IService/"]
COPY ["Controllers.PSU/Controllers.PSU.csproj", "Controllers.PSU/"]
RUN dotnet restore "PSU.Site/PSU.Site.csproj"
COPY . .
WORKDIR "/src/PSU.Site"
RUN dotnet build "PSU.Site.csproj" -c Release -o /app

The step above is the equivalent of using VS to produce a Release build of the solution. Once the build completes without errors, we can publish the program.

FROM build AS publish
RUN dotnet publish "PSU.Site.csproj" -c Release -o /app

After the published files have been generated, following our usual Windows deployment flow we would now host them in IIS; here, the final step in building the application image is to run our program with the dotnet command.

FROM base AS final
 COPY --from=publish /app .
 ENTRYPOINT ["dotnet", "PSU.Site.dll"]

It might seem that building the program image ends here. With this flow, however, we would have to upload the entire solution to the server, whereas much of the time we only upload the output we published locally. That differs from the build process above, so let's modify the Dockerfile to match our release process.

From the analysis of the Dockerfile above, it is easy to see that the second and third stages (build and publish) are exactly what we now do manually in the development environment, so we only need to delete those parts. The modified Dockerfile is as follows.

FROM microsoft/dotnet:2.1-aspnetcore-runtime
WORKDIR /app
COPY . /app
ENTRYPOINT ["dotnet", "PSU.Site.dll"]

In the modified Dockerfile we have removed the build and publish stages; we simply copy the files alongside the Dockerfile directly into the /app path in the image, and then run our program with the dotnet command.

To make sure the Dockerfile ends up in the same path as the published files, use VS to edit the Dockerfile's properties and set "Copy to Output Directory" to "Copy if newer".
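As a quick sanity check after publishing, you can confirm the Dockerfile really landed next to the published files. The sketch below simulates that layout by hand (the publish path is illustrative for a netcoreapp2.1 project, not taken verbatim from the article):

```shell
# Simulate the publish output folder that "Copy if newer" should produce,
# then verify the Dockerfile sits next to the published dlls
publish_dir="./bin/Release/netcoreapp2.1/publish"
mkdir -p "$publish_dir"              # stand-in for `dotnet publish` output
touch "$publish_dir/Dockerfile"      # stand-in for the copied Dockerfile
[ -f "$publish_dir/Dockerfile" ] && echo "Dockerfile is in the publish output"
```

If the check fails on a real project, the usual cause is that the Dockerfile's copy property was never changed from "Do not copy".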

3, write docker-compose.yml

With the application image built, Nginx and MySQL can simply be pulled from the hub and then given some configuration.
So we can now write a docker-compose file to define the dependencies our application needs at runtime and the order in which the images are started.

Right-click on the MVC project and add a docker-compose.yml file; as before, modify the file's properties so that it is copied to the output directory.
Note that this file name, like the Dockerfile name above, is fixed and must not be changed.
If you already have Docker for Windows installed, you can also right-click in VS, choose Add, and select Container Orchestrator Support to have docker-compose configured automatically.


In the yml file I define three services: psu.site, docker.mysql, and docker.nginx.
The three definitions share a lot: all are set to restart automatically, and all sit on the same bridge network (psu-net) so the containers can communicate with one another.


docker.mysql is the MySQL image. We set the MySQL root password through the MYSQL_ROOT_PASSWORD environment variable and persist the database files to a local path on the server by mounting a volume.
We also map the container's port 3306 to the server's port 3306.

psu.site is our program's image, built from the Dockerfile located under /usr/wwwroot/psu/. Because the main program depends on the database, the depends_on attribute is used to make our application depend on docker.mysql; that is, the application container will not be started until docker.mysql has started.
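One caveat worth knowing: depends_on only controls start order, not readiness, so MySQL may still be initializing when the application container starts. A common workaround is a small retry loop before the app connects. The sketch below shows the retry pattern in generic shell (the `probe` function is a stand-in for a real connectivity check such as pinging the database; it is not code from the article):

```shell
# Generic retry helper: run a command until it succeeds or `max` attempts pass
retry() {
  max=$1; shift
  n=0
  until "$@"; do
    n=$((n+1))
    [ "$n" -ge "$max" ] && return 1   # give up after max attempts
    sleep 1
  done
}

# Demo with a probe that always fails: retry gives up after 2 attempts
probe() { false; }
retry 2 probe || echo "gave up after 2 attempts"
```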

docker.nginx is our nginx image; its ports 80 and 443 are mapped to the server. Because Nginx must be configured to proxy our program, we mount a volume that maps the local nginx.conf configuration file into the container.
Also, since the application image's Dockerfile already exposes port 80, we can reach it through the links property. (If a port was not exposed at build time, you can expose it in the docker-compose file with the expose property.)

The Nginx configuration file is shown below. Pay particular attention to the file's format and indentation; a small mistake can stop the container from working.
If you ship nginx.conf alongside the project as I did, don't forget to modify the file's properties as well.

server {
    listen 80;
    location / {
        proxy_pass http://psu.site;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Host $http_host;
        proxy_cache_bypass $http_upgrade;
    }
}

A complete docker compose file is shown below, containing three images and a bridged network.

version: '3.7'

services:
  docker.mysql:
    image: mysql
    ports:
      - "3306:3306"
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=123456@sql
    volumes:
      - /usr/mysql:/var/lib/mysql
    networks:
      - psu-net

  psu.site:
    build: /usr/wwwroot/psu/
    restart: always
    depends_on:
      - docker.mysql
    networks:
      - psu-net

  docker.nginx:
    image: nginx
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    links:
      - psu.site
    networks:
      - psu-net

networks:
  psu-net:
    driver: bridge


Note that wherever containers refer to one another, the service name must be used. For example, in the nginx configuration above, the proxied address is the service name, and in the program the database server in the connection string must be changed to the service name. The modified connection string is as follows.

"ConnectionStrings": {
    "SQLConnection": "server=docker.mysql;database=PSU.Site;user=root;password=123456@sql;port=3306;persistsecurityinfo=True;"
}
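A quick way to double-check that the connection string points at the compose service name rather than localhost is to pull the server value out of it (a sketch; the string is the one from above):

```shell
# Extract the `server=` value from the connection string: it must match the
# MySQL service name from docker-compose.yml, not localhost or 127.0.0.1
conn="server=docker.mysql;database=PSU.Site;user=root;password=123456@sql;port=3306;persistsecurityinfo=True;"
server=$(printf '%s' "$conn" | tr ';' '\n' | sed -n 's/^server=//p')
echo "$server"
```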

4, Deploy and run the program

Once the docker-compose file is ready, we can upload all the files to the server and build the images there.
Here I put all the deployment files under the server's /usr/wwwroot/psu/ directory; we can then build the images with the docker-compose command.

From the directory containing the deployment files, the following command (re)builds the images, then creates, starts, and links the service containers. The whole process runs in the background; if you want to watch it, remove the -d parameter.

# Build the images and start the containers
docker-compose up -d

After the up command finishes, we can use the ps command to view the running containers. If a container is not running, use logs to inspect its output for troubleshooting.

# View all running containers
docker-compose ps

# Display container logs
docker-compose logs
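When redeploying after a code change, these commands tend to be run together, so a small helper script can be handy. The one below is a hypothetical convenience wrapper, not from the article; with DRY_RUN=1 it only prints what it would run, which also makes it safe to try on a machine without Docker:

```shell
# Hypothetical redeploy helper: set DRY_RUN=1 to print commands instead of running them
run() {
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "+ $*"
  else
    "$@"
  fi
}

DRY_RUN=1
run docker-compose up -d --build   # rebuild changed images and restart their containers
run docker-compose ps              # confirm everything is running
```

The --build flag forces docker-compose to rebuild the application image before starting, which is what you want after uploading freshly published files.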


III. Summary


This chapter mainly introduced how to deploy a .NET Core application end to end with Docker containers. Compared with the earlier direct Linux deployment of .NET Core applications, you can see that the whole process involves fewer steps and is much simpler.
Some docker commands appear in this article; if you have never touched Docker before, you may need to read up on them.
Once the program is packaged into an image, you can push the image to a private registry or export it as a compressed file. When you need to switch deployment environments, you only need to obtain the image to complete the deployment quickly, which is a great convenience compared with before.