Comparison of the HUAWEI ARM64 TaiShan Kunpeng Server and an Intel Server
Background
Since the Sino-U.S. trade conflict began, I think the strongest impression has not been "look how much tariff I put on you," but "I have it, and I just won't sell it to you." The sales ban has become the sharpest competitive weapon in a market economy.
Perhaps that is exactly why Huawei's Kunpeng chips, the "spare tire" promoted to the starting lineup, drew industry-wide attention the moment they launched.
After a long wait, Huawei servers built on the Kunpeng 920, representing its high-end computing capability, have started shipping in volume. Because of the professional barrier, though, server chips never make the kind of splash that 5G or the Mate 30 does.
Today I stumbled across a free trial of the Kunpeng Elastic Cloud Server on Huawei Cloud, so I quickly applied to get a taste of it.
Basic environment
The most basic trial package includes an elastic server with 1 core, 1 GB of memory, and 1 Mbps of bandwidth, plus a 100 GB cloud disk and a dynamic public IP. Individual users can try it free for 15 days.
The server offers a choice of operating systems. Huawei recommends its own EulerOS, a customized distribution based on CentOS that adds optimizations for a variety of server scenarios and better support for ARM64 chips. There are more than ten other options, all of them Linux distributions.
If you rely heavily on the Windows stack … you can stop right here. Windows is currently tied to x86 CPUs, and the Microsoft product line is on the ban list anyway.
Since this is just a trial and convenience of "playing around" comes first, I chose Ubuntu 18.04.
Like any common cloud offering, the server finishes configuring itself and boots shortly after the purchase completes. Huawei Cloud provides a browser-based terminal interface:
At the beginning there is only the root account. Using the browser terminal, I created an everyday account, applied the available updates and patches, rebooted, and could then log in remotely over SSH with peace of mind. Having a dynamic public IP makes this a lot more convenient.
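The steps themselves are routine; on Ubuntu they amount to roughly the following (the account name matches the one that appears in later logs, the rest is the standard procedure rather than a transcript of my session):

# As root in the browser terminal: create an everyday account, grant it sudo,
# apply updates and patches, then reboot.
# adduser andrew
# usermod -aG sudo andrew
# apt update && apt upgrade -y
# reboot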
The whole process was smooth and stable, and the first impression was no different from any ordinary server. If you never ran uname to check the kernel, you would not even notice it is an ARM machine.
$ uname -a
Linux ecs-kc1-small-1-linux-20191209185931 4.15.0-72-generic #81-Ubuntu SMP Tue Nov 26 12:21:09 UTC 2019 aarch64 aarch64 aarch64 GNU/Linux
Let's look at the configuration first. The CPU:
$ cat /proc/cpuinfo
processor       : 0
BogoMIPS        : 200.00
Features        : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma dcpop asimddp asimdfhm
CPU implementer : 0x48
CPU architecture: 8
CPU variant     : 0x1
CPU part        : 0xd01
CPU revision    : 0
Then comes the memory:
$ cat /proc/meminfo
MemTotal:        1006904 kB
MemFree:          387044 kB
MemAvailable:     671300 kB
Buffers:           33604 kB
Cached:           296076 kB
SwapCached:         1148 kB
Active:           217232 kB
Inactive:         275692 kB
Active(anon):      59824 kB
Inactive(anon):   119960 kB
Active(file):     157408 kB
Inactive(file):   155732 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:       1762696 kB
SwapFree:        1729472 kB
Dirty:             28632 kB
Writeback:             0 kB
AnonPages:        162892 kB
Mapped:            61680 kB
Shmem:             16508 kB
Slab:              96464 kB
SReclaimable:      60696 kB
SUnreclaim:        35768 kB
KernelStack:        2464 kB
PageTables:         3824 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     2266148 kB
Committed_AS:    1049036 kB
VmallocTotal:    135290290112 kB
VmallocUsed:           0 kB
VmallocChunk:          0 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
CmaTotal:              0 kB
CmaFree:               0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Although this is only a hands-on experience, it is hard to judge the result fairly without something to compare against.
So I turned to one of the top three domestic cloud providers (no names here; there is no intention of knocking that platform) and borrowed a traditional Intel Xeon server that is used for production.
It also runs Ubuntu 18.04:
$ uname -a
Linux ebs-31389 4.15.0-72-generic #81-Ubuntu SMP Tue Nov 26 12:20:02 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
CPU:
$ cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 63
model name      : Intel(R) Xeon(R) CPU E5-2678 v3 @ 2.50GHz
stepping        : 2
microcode       : 0x1
cpu MHz         : 2494.224
cache size      : 4096 KB
physical id     : 0
siblings        : 1
core id         : 0
cpu cores       : 1
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt arat
bugs            : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit
bogomips        : 4988.44
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:

processor       : 1
vendor_id       : GenuineIntel
cpu family      : 6
model           : 63
model name      : Intel(R) Xeon(R) CPU E5-2678 v3 @ 2.50GHz
stepping        : 2
microcode       : 0x1
cpu MHz         : 2494.224
cache size      : 4096 KB
physical id     : 1
siblings        : 1
core id         : 0
cpu cores       : 1
apicid          : 1
initial apicid  : 1
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt arat
bugs            : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit
bogomips        : 4988.44
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:
RAM:
$ cat /proc/meminfo
MemTotal:        4039500 kB
MemFree:         1083580 kB
MemAvailable:    3561040 kB
Buffers:          206180 kB
Cached:          2326624 kB
SwapCached:          296 kB
Active:          1394884 kB
Inactive:        1213580 kB
Active(anon):      40644 kB
Inactive(anon):    53080 kB
Active(file):    1354240 kB
Inactive(file):  1160500 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:       4038652 kB
SwapFree:        4033008 kB
Dirty:                20 kB
Writeback:             0 kB
AnonPages:         75392 kB
Mapped:            88396 kB
Shmem:             18068 kB
Slab:             305188 kB
SReclaimable:     251528 kB
SUnreclaim:        53660 kB
KernelStack:        2704 kB
PageTables:         8312 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     6058400 kB
Committed_AS:     597368 kB
VmallocTotal:    34359738367 kB
VmallocUsed:           0 kB
VmallocChunk:          0 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
CmaTotal:              0 kB
CmaFree:               0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:      106368 kB
DirectMap2M:     4087808 kB
DirectMap1G:     2097152 kB
Judging from the hardware specs, "Kunpeng" starts at a clear disadvantage: a meager 1 core and 1 GB of memory. The Intel Xeon on the other side is no premium part either, but with 2 cores and 4 GB of memory it looks like a crushing opponent by any measure. Had I known this would be the machine available for comparison, I would have opened a higher-spec instance on Huawei Cloud. Unfortunately the free trial is a one-time opportunity, so I could only grit my teeth and carry on.
| Host | TaiShan | Unknown brand |
| --- | --- | --- |
| CPU | Kunpeng 920 | Intel Xeon |
| Number of cores | 1 | 2 |
| RAM | 1 GB | 4 GB |
The remaining hardware is not listed because it has little effect on the comparison. The software configuration is the default base system, with no special settings or tuning on either side. If there is any difference, it comes from the tuning done by each cloud provider's own engineers, which should count as part of the product anyway.
Experience content and environment preparation
I used to enjoy looking at benchmark scores, but over time I found that benchmark workloads are quite different from real work. The numbers often look beautiful, yet actual use tells a different story.
So today we make things a bit more involved and experience the Kunpeng server from three angles: front-end development, back-end development and services, and containers. From a cloud-service perspective, these three kinds of applications should cover a good 80% of common needs.
(Throughout the article I follow the colloquial habit of mixing the chip brand and the server brand. I trust you will understand, so I will not bother to keep them apart.)
First we prepare the corresponding tools and environment.
Front-end development uses the node.js / npm / yarn toolchain and the Vue framework. The versions on the two machines are identical:
$ node -v
v12.13.1
$ npm -v
6.12.1
$ yarn -v
1.21.0
The back end uses the PostgreSQL database, the same version on both machines:
$ psql --version
psql (PostgreSQL) 10.10 (Ubuntu 10.10-0ubuntu0.18.04.1)
The back end itself is compiled with Rust nightly, again identical on both ends:
$ rustc -V
rustc 1.41.0-nightly (59947fcae 2019-12-08)
A note here: nightly is only suitable for development and experimentation, so please do not use it in production. Since this is a trial, a moderate amount of living on the edge is fine, hence the nightly build. By the time you actually need it, today's nightly features will most likely have landed in a normal release.
Back-end development also touches open-source toolchains such as gcc / git / openssl. Both servers use the versions bundled with Ubuntu, identical on the two machines. Since these are not the main development environment, I will not list their versions one by one, to save space.
As for containers, this part is more of a compatibility check and does not need any benchmark numbers, so Docker was installed only on the Kunpeng server. The version is as follows:
$ sudo docker version
Client:
 Version:           18.09.7
 API version:       1.39
 Go version:        go1.10.1
 Git commit:        2d0083d
 Built:             Fri Aug 16 14:20:24 2019
 OS/Arch:           linux/arm64
 Experimental:      false

Server:
 Engine:
  Version:          18.09.7
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.1
  Git commit:       2d0083d
  Built:            Wed Aug 14 19:41:23 2019
  OS/Arch:          linux/arm64
  Experimental:     false
This article is not a tutorial, so the installation steps are skipped. It is worth mentioning that throughout the setup of the two servers everything felt smooth and fast, and the operations were exactly the same on both sides. I often had to glance at the host name to remember which server I was on, a completely different experience from operating heterogeneous servers in the past.
In addition, the domestic mirror sites of the various toolchains helped a great deal and significantly sped up the environment setup.
Trial project
Partly out of habit, the project used to exercise the servers is Gothinkster's RealWorld. RealWorld is a minimalist micro-blogging system known as "the mother of all demo apps": small as a sparrow, but with all the organs in place. Its project website provides 22 open-source front-end and 50 back-end implementations, and any front end can be combined with any back end.
Given the environment prepared above, you have probably guessed it: I chose the combination of the Vue front end and the Rust + Rocket + Diesel back end.
Front-end development
Let’s start with the front end and first download the source code:
$ git clone https://github.com/gothinkster/vue-realworld-example-app
Then download the relevant dependencies:
$ cd vue-realworld-example-app
$ yarn install
We need to make four changes to the source code:
- Add a vue.config.js file to the project root and configure the sub-path under which the project lives on the site. Even though this is just a trial, handing the web root directly over to RealWorld would be a bit careless.
- The Vue front end is a single-page application. Different functions look like different web pages, but in reality Vue intercepts the URL and switches between screen components. For Vue routing to work correctly, we need to modify src/router/index.js to set the routing mode and the base URL of the page.
- The front end and the back end communicate over a RESTful interface, so we need to set the API base address in src/common/config.js.
- There is a bug in the UPDATE_USER handling in src/store/auth.module.js. Most back ends tolerate it, but with a back end as strict as the Rust one, users cannot edit their personal information. The data-submission part of that function needs to be fixed.
This article is not meant as a tutorial, and I doubt anyone wants to sit through one, so the specific modifications and configuration are skipped. Let's only look at the build process.
First, on the Kunpeng server:
$ time yarn build
yarn run v1.21.0
$ cross-env BABEL_ENV=dev vue-cli-service build

⠇ Building for production...

  File                                                         Size         Gzipped
  dist/js/chunk-vendors.dcd10e99.js                            172.11 KiB   58.87 KiB
  dist/js/chunk-52fabea2.8d54de7e.js                           35.24 KiB    10.74 KiB
  dist/js/app.5e06b01a.js                                      19.41 KiB    5.56 KiB
  dist/js/chunk-8ab06c80.0691ea34.js                           13.74 KiB    4.53 KiB
  dist/js/chunk-fee37f4e.962c341f.js                           5.50 KiB     1.80 KiB
  dist/js/chunk-2d0b3289.4ecc4d5e.js                           3.68 KiB     1.17 KiB
  dist/js/chunk-2d217357.a492fd23.js                           3.20 KiB     1.15 KiB
  dist/js/chunk-704fe663.1eb6fa07.js                           2.94 KiB     1.14 KiB
  dist/js/chunk-2d0d6d35.3e7333df.js                           2.92 KiB     1.15 KiB
  dist/js/chunk-2d2086b7.9e172229.js                           2.57 KiB     1.12 KiB
  dist/precache-manifest.d3673753a0030f7ef7bc3318dfea2bf8.js   1.66 KiB     0.55 KiB
  dist/service-worker.js                                       0.95 KiB     0.54 KiB
  dist/js/chunk-2d0bd246.4cab42ec.js                           0.58 KiB     0.40 KiB
  dist/js/chunk-2d0f1193.580d39c8.js                           0.57 KiB     0.40 KiB
  dist/js/chunk-2d0cedd0.a32d9392.js                           0.53 KiB     0.38 KiB
  dist/js/chunk-2d207fb4.d8669731.js                           0.48 KiB     0.35 KiB
  dist/js/chunk-2d0bac97.f736bcaf.js                           0.48 KiB     0.35 KiB

  Images and other types of assets omitted.

 DONE  Build complete. The dist directory is ready to be deployed.
 INFO  Check out deployment instructions at https://cli.vuejs.org/guide/deployment.html

Done in 23.57s.

real    0m23.889s
user    0m19.927s
sys     0m0.965s
To save space, warning messages about source formatting have been removed from the log. Everything works, and no incompatibility appears. Now let's see how Intel performs:
$ time yarn build
yarn run v1.21.0
$ cross-env BABEL_ENV=dev vue-cli-service build

⠇ Building for production...

  File                                                         Size         Gzipped
  dist/js/chunk-vendors.dcd10e99.js                            172.11 KiB   58.87 KiB
  dist/js/chunk-52fabea2.c34912e7.js                           35.24 KiB    10.74 KiB
  dist/js/app.348e5166.js                                      19.35 KiB    5.53 KiB
  dist/js/chunk-8ab06c80.3fa2c5de.js                           13.74 KiB    4.53 KiB
  dist/js/chunk-fee37f4e.55893266.js                           5.50 KiB     1.80 KiB
  dist/js/chunk-2d0b3289.7b3abcbe.js                           3.68 KiB     1.17 KiB
  dist/js/chunk-2d217357.e2eb7ad1.js                           3.20 KiB     1.14 KiB
  dist/js/chunk-704fe663.25958462.js                           2.94 KiB     1.14 KiB
  dist/js/chunk-2d0d6d35.ddc63fdd.js                           2.92 KiB     1.15 KiB
  dist/js/chunk-2d2086b7.35190064.js                           2.57 KiB     1.12 KiB
  dist/precache-manifest.049c26b68ee8b9c603c4f04a6cd8e3c8.js   1.55 KiB     0.53 KiB
  dist/service-worker.js                                       0.95 KiB     0.54 KiB
  dist/js/chunk-2d0bd246.b354ca7f.js                           0.58 KiB     0.40 KiB
  dist/js/chunk-2d0f1193.12c44839.js                           0.57 KiB     0.40 KiB
  dist/js/chunk-2d0cedd0.ea949ae4.js                           0.53 KiB     0.38 KiB
  dist/js/chunk-2d207fb4.245dc458.js                           0.48 KiB     0.35 KiB
  dist/js/chunk-2d0bac97.74e3c28d.js                           0.48 KiB     0.35 KiB

  Images and other types of assets omitted.

 DONE  Build complete. The dist directory is ready to be deployed.
 INFO  Check out deployment instructions at https://cli.vuejs.org/guide/deployment.html

Done in 16.27s.

real    0m16.548s
user    0m20.435s
sys     0m1.223s
In the end, with one extra core and four times the memory, the Intel machine compiled about 30% faster.
Considering the hardware gap between the two sides, I subjectively call this round a draw.
Back-end development
The first step is to download the source code from its repository.
Then make the following changes:
- The back end originally provides only a set of RESTful interface services. We want it to serve static files directly as well; otherwise a separate static file service would have to be set up to host the front end we just built. I modified src/lib.rs, added the handling, and exposed the ./static/ folder as the static file path.
- Copy dist/, the output of the Vue front-end build, into the static/ directory of the current project.
- Follow the instructions on the repository page to configure the PostgreSQL service and initialize the realworld database with the Diesel ORM tool (a rough sketch of these steps follows this list).
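For reference, the database initialization and the copy of the front-end bundle look roughly like this. It is only a sketch, with placeholder credentials and paths rather than the exact values used in this trial:

# Create a PostgreSQL role and database for RealWorld (role name and password are placeholders).
$ sudo -u postgres createuser -P realworld
$ sudo -u postgres createdb -O realworld realworld
$ echo DATABASE_URL=postgres://realworld:password@localhost/realworld > .env

# Install the Diesel CLI and apply the migrations shipped with the repository.
$ cargo install diesel_cli --no-default-features --features postgres
$ diesel migration run

# Copy the compiled Vue bundle into the back end's static directory.
$ cp -r ../vue-realworld-example-app/dist/* static/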
Next, we use Rust's development mode for a test run:
$ cargo run
On the Intel server this works fine. On Kunpeng, something unfortunate happened: an error occurred, with a very long log. Only one line of the error message is shown below:
undefined reference to `rust_crypto_util_fixed_time_eq_asm'
Not surprisingly, this is something related to assembly code.
With technology where it is today, and with the help of the almighty Linux, most software thrives on heterogeneous systems, provided no hand-written assembly is involved.
To continue the test, I checked the source code of the rust-crypto crate based on the error message.
It quickly turned out that rust-crypto-0.2.36/src/util_helpers.c contains assembly implementations for only two architectures, x64 and ARM. Kunpeng is ARM too, but of the aarch64 flavor, and the corresponding assembly code simply does not exist.
Since I am not familiar with assembly, I turned to the Internet. Persistence pays off: after about an hour of digging I found an aarch64 implementation of this function online:
#ifdef __aarch64__
uint32_t rust_crypto_util_fixed_time_eq_asm(uint8_t* lhsp, uint8_t* rhsp, size_t count) {
    if (count == 0) {
        return 1;
    }
    uint8_t result = 0;
    asm(
        " \
            1: \
            \
            ldrb w4, [%1]; \
            ldrb w5, [%2]; \
            eor w4, w4, w5; \
            orr %w0, %w0, w4; \
            \
            add %w1, %w1, #1; \
            add %w2, %w2, #1; \
            subs %w3, %w3, #1; \
            bne 1b; \
        "
        : "+&r" (result), "+&r" (lhsp), "+&r" (rhsp), "+&r" (count) // all input and output
        :                                                           // input
        : "w4", "w5", "cc"                                          // clobbers
    );
    return result;
}
#endif
Put this code into util_helpers.c and run cargo run again: RealWorld starts successfully. I casually posted a blog entry to try it out.
With the trial run working, let the two machines show off their compilation strength again, this time with a release build. Kunpeng goes first:
$ time cargo build --release
   Compiling libc v0.2.66
   Compiling autocfg v0.1.7
   Compiling cfg-if v0.1.10
   ...(omitted)...
   Compiling rocket_cors v0.4.0
   Compiling rocket_contrib v0.4.2
   Compiling realworld v0.4.0 (/home/andrew/dev/realworld-rust-rocket)
    Finished release [optimized] target(s) in 18m 28s

real    18m28.666s
user    18m8.184s
sys     0m10.982s
There are 191 source packages in total; to save space the log lists only six of them. The compilation took 18 minutes 28 seconds, and the resulting executable is 8.4 MB.
$ ls -lh target/release/
total 15M
drwxrwxr-x 64 andrew andrew 4.0K Dec 10 09:27 build
drwxrwxr-x  2 andrew andrew  32K Dec 10 09:45 deps
drwxrwxr-x  2 andrew andrew 4.0K Dec 10 09:27 examples
drwxrwxr-x  2 andrew andrew 4.0K Dec 10 09:27 incremental
-rw-rw-r--  1 andrew andrew 1.2K Dec 10 09:45 librealworld.d
-rw-rw-r--  2 andrew andrew 6.2M Dec 10 09:45 librealworld.rlib
-rwxrwxr-x  2 andrew andrew 8.4M Dec 10 09:45 realworld
-rw-rw-r--  1 andrew andrew 1.2K Dec 10 09:45 realworld.d
Then look at Intel’s speed:
$ time cargo build --release
   Compiling libc v0.2.65
   Compiling autocfg v0.1.7
   Compiling cfg-if v0.1.10
   ...(omitted)...
   Compiling rocket_cors v0.4.0
   Compiling rocket_contrib v0.4.2
   Compiling realworld v0.4.0 (/home/andrew/dev/rust/realworld-rust-rocket)
    Finished release [optimized] target(s) in 7m 39s

real    7m39.088s
user    15m1.126s
sys     0m13.470s

$ ls -lh target/release/
total 16M
drwxrwxr-x 64 andrew andrew 4.0K Dec 10 01:38 build
drwxrwxr-x  2 andrew andrew  36K Dec 10 01:45 deps
drwxrwxr-x  2 andrew andrew 4.0K Dec 10 01:38 examples
drwxrwxr-x  2 andrew andrew 4.0K Dec 10 01:38 incremental
-rw-rw-r--  1 andrew andrew 1.3K Dec 10 01:45 librealworld.d
-rw-rw-r--  2 andrew andrew 6.3M Dec 10 01:45 librealworld.rlib
-rwxrwxr-x  2 andrew andrew 9.0M Dec 10 01:45 realworld
-rw-rw-r--  1 andrew andrew 1.3K Dec 10 01:45 realworld.d
To spell it out: Intel finished the compilation in a little over one third of the time, producing a 9.0 MB executable. This round, Kunpeng falls behind.
Performance testing tools
Internet applications are different from desktop applications; to test performance properly we need an independent load-testing tool.
Ubuntu's package sources include several, but I chose wrk and compiled it from source. That way we also get a look at C/C++ compilation speed and compatibility along the way.
The following steps are performed on the Kunpeng server:
# Download the source code
$ git clone https://github.com/wg/wrk
# Build
$ cd wrk
$ time make
Building LuaJIT...
make[1]: Entering directory '/home/andrew/dev/wrk/obj/LuaJIT-2.1.0-beta3'
==== Building LuaJIT 2.1.0-beta3 ====
make -C src
make[2]: Entering directory '/home/andrew/dev/wrk/obj/LuaJIT-2.1.0-beta3/src'
HOSTCC    host/minilua.o
HOSTLINK  host/minilua
DYNASM    host/buildvm_arch.h
HOSTCC    host/buildvm.o
HOSTCC    host/buildvm_asm.o
HOSTCC    host/buildvm_peobj.o
HOSTCC    host/buildvm_lib.o
HOSTCC    host/buildvm_fold.o
HOSTLINK  host/buildvm
BUILDVM   lj_vm.S
ASM       lj_vm.o
CC        lj_gc.o
BUILDVM   lj_ffdef.h
CC        lj_err.o
CC        lj_char.o
BUILDVM   lj_bcdef.h
CC        lj_bc.o
...
gcc -I. -Icrypto/include -Iinclude -fPIC -pthread -Wa,--noexecstack -Wall -O3 -DOPENSSL_USE_NODELETE -DOPENSSL_PIC -DOPENSSL_CPUID_OBJ -DOPENSSL_BN_ASM_MONT -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DKECCAK1600_ASM -DVPAES_ASM -DECP_NISTZ256_ASM -DPOLY1305_ASM -DOPENSSLDIR="\"/home/andrew/dev/wrk/obj/ssl\"" -DENGINESDIR="\"/home/andrew/dev/wrk/obj/lib/engines-1.1\"" -DNDEBUG -MMD -MF crypto/ec/ec_check.d.tmp -MT crypto/ec/ec_check.o -c -o crypto/ec/ec_check.o crypto/ec/ec_check.c
gcc -I. -Icrypto/include -Iinclude -fPIC -pthread -Wa,--noexecstack -Wall -O3 -DOPENSSL_USE_NODELETE -DOPENSSL_PIC -DOPENSSL_CPUID_OBJ -DOPENSSL_BN_ASM_MONT -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DKECCAK1600_ASM -DVPAES_ASM -DECP_NISTZ256_ASM -DPOLY1305_ASM -DOPENSSLDIR="\"/home/andrew/dev/wrk/obj/ssl\"" -DENGINESDIR="\"/home/andrew/dev/wrk/obj/lib/engines-1.1\"" -DNDEBUG -MMD -MF crypto/ec/ec_curve.d.tmp -MT crypto/ec/ec_curve.o -c -o crypto/ec/ec_curve.o crypto/ec/ec_curve.c
...
gcc -I. -Iinclude -fPIC -pthread -Wa,--noexecstack -Wall -O3 -DOPENSSL_USE_NODELETE -DOPENSSL_PIC -DOPENSSL_CPUID_OBJ -DOPENSSL_BN_ASM_MONT -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DKECCAK1600_ASM -DVPAES_ASM -DECP_NISTZ256_ASM -DPOLY1305_ASM -DOPENSSLDIR="\"/home/andrew/dev/wrk/obj/ssl\"" -DENGINESDIR="\"/home/andrew/dev/wrk/obj/lib/engines-1.1\"" -DNDEBUG -MMD -MF ssl/t1_trce.d.tmp -MT ssl/t1_trce.o -c -o ssl/t1_trce.o ssl/t1_trce.c
gcc -I. -Iinclude -fPIC -pthread -Wa,--noexecstack -Wall -O3 -DOPENSSL_USE_NODELETE -DOPENSSL_PIC -DOPENSSL_CPUID_OBJ -DOPENSSL_BN_ASM_MONT -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DKECCAK1600_ASM -DVPAES_ASM -DECP_NISTZ256_ASM -DPOLY1305_ASM -DOPENSSLDIR="\"/home/andrew/dev/wrk/obj/ssl\"" -DENGINESDIR="\"/home/andrew/dev/wrk/obj/lib/engines-1.1\"" -DNDEBUG -MMD -MF ssl/tls13_enc.d.tmp -MT ssl/tls13_enc.o -c -o ssl/tls13_enc.o ssl/tls13_enc.c
gcc -I. -Iinclude -fPIC -pthread -Wa,--noexecstack -Wall -O3 -DOPENSSL_USE_NODELETE -DOPENSSL_PIC -DOPENSSL_CPUID_OBJ -DOPENSSL_BN_ASM_MONT -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DKECCAK1600_ASM -DVPAES_ASM -DECP_NISTZ256_ASM -DPOLY1305_ASM -DOPENSSLDIR="\"/home/andrew/dev/wrk/obj/ssl\"" -DENGINESDIR="\"/home/andrew/dev/wrk/obj/lib/engines-1.1\"" -DNDEBUG -MMD -MF ssl/tls_srp.d.tmp -MT ssl/tls_srp.o -c -o ssl/tls_srp.o ssl/tls_srp.c
...(omitted)...
make depend && make _build_engines
make[2]: Entering directory '/home/andrew/dev/wrk/obj/openssl-1.1.1b'
make[2]: Leaving directory '/home/andrew/dev/wrk/obj/openssl-1.1.1b'
make[2]: Entering directory '/home/andrew/dev/wrk/obj/openssl-1.1.1b'
make[2]: Nothing to be done for '_build_engines'.
make[2]: Leaving directory '/home/andrew/dev/wrk/obj/openssl-1.1.1b'
created directory `/home/andrew/dev/wrk/obj/lib/engines-1.1'
*** Installing engines
make depend && make _build_programs
make[2]: Entering directory '/home/andrew/dev/wrk/obj/openssl-1.1.1b'
make[2]: Leaving directory '/home/andrew/dev/wrk/obj/openssl-1.1.1b'
make[2]: Entering directory '/home/andrew/dev/wrk/obj/openssl-1.1.1b'
make[2]: Nothing to be done for '_build_programs'.
make[2]: Leaving directory '/home/andrew/dev/wrk/obj/openssl-1.1.1b'
*** Installing runtime programs
install apps/openssl -> /home/andrew/dev/wrk/obj/bin/openssl
install ./tools/c_rehash -> /home/andrew/dev/wrk/obj/bin/c_rehash
make[1]: Leaving directory '/home/andrew/dev/wrk/obj/openssl-1.1.1b'
CC src/wrk.c
CC src/net.c
CC src/ssl.c
CC src/aprintf.c
CC src/stats.c
CC src/script.c
CC src/units.c
CC src/ae.c
CC src/zmalloc.c
CC src/http_parser.c
LUAJIT src/wrk.lua
LINK wrk

real    3m31.575s
user    3m6.914s
sys     0m22.147s
This small tool contains a fairly large amount of C source code and some assembly, plus a small number of Lua scripts that ship as data files.
Kunpeng's build took 3 minutes 32 seconds.
Next, the Intel Xeon:
$ time make
Building LuaJIT...
make[1]: Entering directory '/home/andrew/dev/wrk/obj/LuaJIT-2.1.0-beta3'
==== Building LuaJIT 2.1.0-beta3 ====
make -C src
make[2]: Entering directory '/home/andrew/dev/wrk/obj/LuaJIT-2.1.0-beta3/src'
HOSTCC    host/minilua.o
HOSTLINK  host/minilua
DYNASM    host/buildvm_arch.h
HOSTCC    host/buildvm.o
HOSTCC    host/buildvm_asm.o
HOSTCC    host/buildvm_peobj.o
HOSTCC    host/buildvm_lib.o
HOSTCC    host/buildvm_fold.o
HOSTLINK  host/buildvm
BUILDVM   lj_vm.S
ASM       lj_vm.o
CC        lj_gc.o
BUILDVM   lj_ffdef.h
CC        lj_err.o
CC        lj_char.o
......
CC="gcc" /usr/bin/perl crypto/aes/asm/aesni-mb-x86_64.pl elf crypto/aes/aesni-mb-x86_64.s
gcc -fPIC -pthread -m64 -Wa,--noexecstack -Wall -O3 -DOPENSSL_USE_NODELETE -DL_ENDIAN -DOPENSSL_PIC -DOPENSSL_CPUID_OBJ -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DKECCAK1600_ASM -DRC4_ASM -DMD5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM -DGHASH_ASM -DECP_NISTZ256_ASM -DX25519_ASM -DPADLOCK_ASM -DPOLY1305_ASM -DOPENSSLDIR="\"/home/andrew/dev/wrk/obj/ssl\"" -DENGINESDIR="\"/home/andrew/dev/wrk/obj/lib/engines-1.1\"" -DNDEBUG -c -o crypto/aes/aesni-mb-x86_64.o crypto/aes/aesni-mb-x86_64.s
CC="gcc" /usr/bin/perl crypto/aes/asm/aesni-sha1-x86_64.pl elf crypto/aes/aesni-sha1-x86_64.s
gcc -fPIC -pthread -m64 -Wa,--noexecstack -Wall -O3 -DOPENSSL_USE_NODELETE -DL_ENDIAN -DOPENSSL_PIC -DOPENSSL_CPUID_OBJ -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DKECCAK1600_ASM -DRC4_ASM -DMD5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM -DGHASH_ASM -DECP_NISTZ256_ASM -DX25519_ASM -DPADLOCK_ASM -DPOLY1305_ASM -DOPENSSLDIR="\"/home/andrew/dev/wrk/obj/ssl\"" -DENGINESDIR="\"/home/andrew/dev/wrk/obj/lib/engines-1.1\"" -DNDEBUG -c -o crypto/aes/aesni-sha1-x86_64.o crypto/aes/aesni-sha1-x86_64.s
......
gcc -I. -Icrypto/include -Iinclude -fPIC -pthread -m64 -Wa,--noexecstack -Wall -O3 -DOPENSSL_USE_NODELETE -DL_ENDIAN -DOPENSSL_PIC -DOPENSSL_CPUID_OBJ -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DKECCAK1600_ASM -DRC4_ASM -DMD5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM -DGHASH_ASM -DECP_NISTZ256_ASM -DX25519_ASM -DPADLOCK_ASM -DPOLY1305_ASM -DOPENSSLDIR="\"/home/andrew/dev/wrk/obj/ssl\"" -DENGINESDIR="\"/home/andrew/dev/wrk/obj/lib/engines-1.1\"" -DNDEBUG -MMD -MF crypto/asn1/x_pkey.d.tmp -MT crypto/asn1/x_pkey.o -c -o crypto/asn1/x_pkey.o crypto/asn1/x_pkey.c
gcc -I. -Icrypto/include -Iinclude -fPIC -pthread -m64 -Wa,--noexecstack -Wall -O3 -DOPENSSL_USE_NODELETE -DL_ENDIAN -DOPENSSL_PIC -DOPENSSL_CPUID_OBJ -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DKECCAK1600_ASM -DRC4_ASM -DMD5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM -DGHASH_ASM -DECP_NISTZ256_ASM -DX25519_ASM -DPADLOCK_ASM -DPOLY1305_ASM -DOPENSSLDIR="\"/home/andrew/dev/wrk/obj/ssl\"" -DENGINESDIR="\"/home/andrew/dev/wrk/obj/lib/engines-1.1\"" -DNDEBUG -MMD -MF crypto/asn1/x_sig.d.tmp -MT crypto/asn1/x_sig.o -c -o crypto/asn1/x_sig.o crypto/asn1/x_sig.c
...(omitted)...
make depend && make _build_programs
make[2]: Entering directory '/home/andrew/dev/wrk/obj/openssl-1.1.1b'
make[2]: Leaving directory '/home/andrew/dev/wrk/obj/openssl-1.1.1b'
make[2]: Entering directory '/home/andrew/dev/wrk/obj/openssl-1.1.1b'
make[2]: Nothing to be done for '_build_programs'.
make[2]: Leaving directory '/home/andrew/dev/wrk/obj/openssl-1.1.1b'
*** Installing runtime programs
install apps/openssl -> /home/andrew/dev/wrk/obj/bin/openssl
install ./tools/c_rehash -> /home/andrew/dev/wrk/obj/bin/c_rehash
make[1]: Leaving directory '/home/andrew/dev/wrk/obj/openssl-1.1.1b'
CC src/wrk.c
CC src/net.c
CC src/ssl.c
CC src/aprintf.c
CC src/stats.c
CC src/script.c
CC src/units.c
CC src/ae.c
CC src/zmalloc.c
CC src/http_parser.c
LUAJIT src/wrk.lua
LINK wrk

real    3m48.678s
user    3m9.735s
sys     0m37.941s
andrew@ebs-31389:~/dev/wrk$
Huh? 3 minutes 48 seconds, actually slightly slower than Kunpeng.
Thinking it over, I believe this is normal. According to the official figures, Kunpeng's per-core performance is not bad at all. When a task is small enough to stay in memory, Kunpeng can indeed be quick.
When the task is large and forces heavy swapping to disk, the low-spec Kunpeng instance we chose cannot keep up, and the single core makes it worse. The bigger the job, the further this Kunpeng falls behind.
OK, the test tool is ready. Let's measure the web-service performance of the two servers separately; for a cloud host, I believe this is the hard benchmark.
Start both servers:
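Starting each back end simply means running the release binary built above, roughly like this (ROCKET_ENV=production is Rocket 0.4's standard switch for its production profile; treat the exact invocation as an assumption rather than a transcript):

$ ROCKET_ENV=production ./target/release/realworld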
First, the test data from Kunpeng:
$ wrk -t1 -c50 -d5s --latency --timeout 2s http://localhost:8000/index.html
Running 5s test @ http://localhost:8000/index.html
  1 threads and 50 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.97ms    1.04ms   4.58ms   57.80%
    Req/Sec     8.79k   228.58     9.09k    70.00%
  Latency Distribution
     50%    1.97ms
     75%    2.87ms
     90%    3.41ms
     99%    3.78ms
  44400 requests in 5.08s, 120.47MB read
  Socket errors: connect 0, read 44400, write 0, timeout 0
Requests/sec:   8745.40
Transfer/sec:     23.73MB
The configurations of the two servers differ a lot, so to be fair only one thread is enabled in the parameters. I chose 50 connections, which I think is a fairly typical number for a small website.
In the 5.08-second test, the Kunpeng server withstood 44,400 requests and transferred 120.47 MB of data.
Then Intel takes the stage:
$ wrk -t1 -c50 -d5s --latency --timeout 2s http://localhost:8000/index.html
Running 5s test @ http://localhost:8000/index.html
  1 threads and 50 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    32.32ms    5.00ms  64.20ms   75.23%
    Req/Sec     1.52k   176.64     1.91k    72.00%
  Latency Distribution
     50%   31.52ms
     75%   34.64ms
     90%   38.67ms
     99%   48.04ms
  7552 requests in 5.00s, 21.97MB read
  Socket errors: connect 0, read 7550, write 0, timeout 0
Requests/sec:   1509.75
Transfer/sec:      4.39MB
Wow… did that make your eyes pop? Nothing stings like a comparison. In the 5-second test, the Intel Xeon withstood only 7,552 requests and transferred 21.97 MB of data. As the veteran CPU maker, Intel, haven't you lost your "core"?
index.html is just a static page. Let's also hit a dynamic endpoint, so that the database side of things gets exercised as well.
The next test targets the RESTful interface that lists article content:
$ curl http://127.0.0.1:8000/api/articles
{"articles":[{"author":{"bio":null,"email":"andrewwang@sina.com","id":1,"image":null,"username":"andrew"},"body":"苹果公司近日宣布,新的Mac Pro和Pro Display XDR将于12月10日开始订购。新的Mac Pro起价为5,999美元(约合人民币42202元),而Pro Display XDR起价为4,999美元(约合人民币35167元)。\n5,999美元的基本款Mac Pro搭载了8核Intel Xeon处理器,256 GB SSD,32GB RAM等配置。最高配置支持28核Intel Xeon处理器,4块Vega显卡,1.5TB的超大容量内存。而其首次引入的Apple Afterburner加速卡,这使得Mac Pro可实时解码最多达 3 条 8K ProRes RAW 视频流和最多达 12 条 4K ProRes RAW 视频流。\n而新款的 Pro Display XDR则配置了分辨率达到6016 x 3384的32英寸显示屏,这款显示器的参数达到了静态 1000nit / 峰值 1600nits 的亮度,同时还有着1000000:1的对比度。如果用户追求低反射率和低眩光,可以多加1000美元(约合人民币7000元)给显示器添加一个“纳米纹理”哑光涂层。","createdAt":"2019-12-10T02:05:38.758Z","description":"nothing but test","favorited":false,"favoritesCount":0,"id":1,"slug":"test-sqzxyV","tagList":[],"title":"test","updatedAt":"2019-12-10T02:05:38.758Z"}],"articlesCount":1}
As before, Kunpeng goes first:
$ wrk -t1 -c50 -d5s --latency --timeout 2s http://127.0.0.1:8000/api/articles
Running 5s test @ http://127.0.0.1:8000/api/articles
  1 threads and 50 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    40.03ms    2.37ms  41.82ms   99.15%
    Req/Sec     1.25k    11.16     1.27k    66.00%
  Latency Distribution
     50%   40.21ms
     75%   40.48ms
     90%   40.75ms
     99%   41.34ms
  6199 requests in 5.00s, 8.92MB read
  Socket errors: connect 0, read 6198, write 0, timeout 0
Requests/sec:   1239.54
Transfer/sec:      1.78MB
Now Intel:
$ wrk -t1 -c50 -d5s --latency --timeout 2s http://127.0.0.1:8000/api/articles
Running 5s test @ http://127.0.0.1:8000/api/articles
  1 threads and 50 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    62.63ms    9.18ms  96.45ms   72.66%
    Req/Sec   787.82     95.11     1.01k    72.00%
  Latency Distribution
     50%   62.32ms
     75%   68.07ms
     90%   74.42ms
     99%   85.25ms
  3921 requests in 5.01s, 5.64MB read
  Socket errors: connect 0, read 3920, write 0, timeout 0
Requests/sec:    783.09
Transfer/sec:      1.13MB
The test article is short, the data volume is tiny, and almost all the time is spent on the round trip, so neither side can really show its strength… but Kunpeng is ahead again. And don't forget: this Kunpeng server has only a quarter of the memory and half the CPU cores of its Intel rival.
Based on these numbers, I can responsibly say that the cloud credentials of domestic chips and domestic servers are solid.
Container experience
Container-based microservices have become the mainstream model of server operation. Here the Kunpeng server is rather lucky: cgroups and namespaces are implemented entirely inside the Linux kernel, which is far easier to support than VM technologies that have to emulate an instruction set. You could say it was born for this.
But things are not quite that simple. For example, just search for an application:
# docker search mariadb
NAME                                   DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
mariadb                                MariaDB is a community-developed fork of MyS…   3135    [OK]
bitnami/mariadb                        Bitnami MariaDB Docker Image                    107     [OK]
linuxserver/mariadb                    A Mariadb container, brought to you by Linux…   95
toughiq/mariadb-cluster                Dockerized Automated MariaDB Galera Cluster …   41      [OK]
colinmollenhour/mariadb-galera-swarm   MariaDb w/ Galera Cluster, DNS-based service…   26      [OK]
panubo/mariadb-galera                  MariaDB Galera Cluster                          23      [OK]
lsioarmhf/mariadb                      ARMHF based Linuxserver.io image of mariadb     18
mariadb/server                         MariaDB Server is a modern database for mode…   18      [OK]
webhippie/mariadb                      Docker images for MariaDB                       16      [OK]
bianjp/mariadb-alpine                  Lightweight MariaDB docker image with Alpine…   12      [OK]
centos/mariadb-101-centos7             MariaDB 10.1 SQL database server                10
severalnines/mariadb                   A homogeneous MariaDB Galera Cluster image t…   7       [OK]
centos/mariadb-102-centos7             MariaDB 10.2 SQL database server                6
tutum/mariadb                          Base docker image to run a MariaDB database …   4
wodby/mariadb                          Alpine-based MariaDB container image with or…   4       [OK]
circleci/mariadb                       CircleCI images for MariaDB                     3       [OK]
tiredofit/mariadb-backup               MariaDB Backup image to backup MariaDB/MySQL…   2       [OK]
kitpages/mariadb-galera                MariaDB with Galera                             2       [OK]
rightctrl/mariadb                      Mariadb with Galera support                     2       [OK]
jonbaldie/mariadb                      Fast, simple, and lightweight MariaDB Docker…   2       [OK]
demyx/mariadb                          Non-root Docker image running Alpine Linux a…   0
ccitest/mariadb                        CircleCI test images for MariaDB                0       [OK]
jelastic/mariadb                       An image of the MariaDB SQL database server …   0
ansibleplaybookbundle/mariadb-apb      An APB which deploys RHSCL MariaDB              0       [OK]
alvistack/mariadb                      Docker Image Packaging for MariaDB              0
Hmm, it looks no different from an x86 server. But want to pull one down and try it? Dear reader, forget it. The binaries packaged inside a container image are CPU-specific, and most of these will not run here even if you pull them.
It seems the world is not yet ready for the arrival of ARM servers. At the very least Docker Hub should behave like APT / YUM and resolve the architecture automatically against per-architecture resource pools, shouldn't it?
So, for now, finding an image that suits the Kunpeng server means adding an extra keyword to the search by hand.
Docker Hub currently has two classifications that apply to 64-bit ARM servers: aarch64 and arm64v8. The aarch64 classification is no longer used for new images, which are now published under arm64v8, but for compatibility reasons the original aarch64 images still exist. In other words, if both classifications carry the image you need, prefer the one under arm64v8.
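A quick way to verify what you actually got is to check an image's architecture after pulling it; the image name here is only an example, not one used later in this article:

# Pull an image and confirm its OS/architecture (arm64v8/nginx is just an example;
# an arm64v8 image should report something like linux/arm64).
# docker pull arm64v8/nginx
# docker image inspect --format '{{.Os}}/{{.Architecture}}' arm64v8/nginx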
That brings up the second small problem. Search with these two keywords:
# docker search aarch64
NAME                                      DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
homeassistant/aarch64-homeassistant                                                       15
aarch64/ubuntu                            Ubuntu is a Debian-based Linux operating sys…   14
homeassistant/aarch64-hassio-supervisor                                                   5
balenalib/aarch64-ubuntu-node             This image is part of the balena.io base ima…   1
balenalib/aarch64-alpine-python           This image is part of the balena.io base ima…   1
resin/aarch64-alpine-python               This repository is deprecated.                  1
resin/aarch64-python                      This repository is deprecated.                  1
resin/aarch64-alpine-buildpack-deps       This repository is deprecated.                  0
resin/aarch64-ubuntu-golang               This repository is deprecated.                  0
resin/aarch64-fedora-buildpack-deps       This repository is deprecated.                  0
resin/aarch64-fedora-python               This repository is deprecated.                  0
resin/aarch64-alpine-openjdk              This repository is deprecated.                  0
balenalib/aarch64-alpine-node             This image is part of the balena.io base ima…   0
resin/aarch64-fedora-golang               This repository is deprecated.                  0
resin/aarch64-golang                      This repository is deprecated.                  0
resin/aarch64-fedora-openjdk              This repository is deprecated.                  0
resin/aarch64-alpine-golang               This repository is deprecated.                  0
balenalib/aarch64-node                    This image is part of the balena.io base ima…   0
balenalib/aarch64-debian-node             This image is part of the balena.io base ima…   0
resin/aarch64-fedora-node                 This repository is deprecated.                  0
resin/aarch64-node                        This repository is deprecated.                  0
resin/aarch64-ubuntu-python               This repository is deprecated.                  0
balenalib/aarch64-ubuntu-golang           This image is part of the balena.io base ima…   0
resin/aarch64-alpine-node                 This repository is deprecated.                  0
balenalib/aarch64-debian-python           This image is part of the balena.io base ima…   0

# docker search arm64v8
NAME                                      DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
arm64v8/alpine                            A minimal Docker image based on Alpine Linux…   45
arm64v8/ubuntu                            Ubuntu is a Debian-based Linux operating sys…   30
arm64v8/debian                            Debian is a Linux distribution thats compos…    21
arm64v8/nginx                             Official build of Nginx.                        18
arm64v8/python                            Python is an interpreted, interactive, objec…   18
arm64v8/nextcloud                         A safe home for all your data                   15
arm64v8/node                              Node.js is a JavaScript-based platform for s…   12
arm64v8/openjdk                           OpenJDK is an open-source implementation of …   9
arm64v8/redis                             Redis is an open source key-value store that…   7
arm64v8/php                               While designed for web development, the PHP …   7
arm64v8/mongo                             MongoDB document databases provide high avai…   6
arm64v8/golang                            Go (golang) is a general purpose, higher-lev…   6
arm64v8/docker                            Docker in Docker!                               6
arm64v8/ros                               The Robot Operating System (ROS) is an open …   5
arm64v8/buildpack-deps                    A collection of common build dependencies us…   3
arm64v8/busybox                           Busybox base image.                             3
arm64v8/ruby                              Ruby is a dynamic, reflective, object-orient…   2
arm64v8/tomcat                            Apache Tomcat is an open source implementati…   2
arm64v8/erlang                            Erlang is a programming language used to bui…   1
arm64v8/wordpress                         The WordPress rich content management system…   1
arm64v8/joomla                            Joomla! is an open source content management…   0
arm64v8/haxe                              Haxe is a modern, high level, static typed p…   0
troyfontaine/arm64v8_min-alpinelinux      Minimal 64-bit ARM64v8 Alpine Linux Image       0
arm64v8/hylang                            Hy is a Lisp dialect that translates express…   0
arm64v8/perl                              Perl is a high-level, general-purpose, inter…   0
You will find that, compared with the rich x86 community, ARM server resources are genuinely sparse, and most of what exists are base images.
I'm afraid there is no way around this: fewer users, fewer resources. Fortunately, with the base images available you can add the applications yourself, which is nothing unacceptable. Think about it: for which critical application would you dare to use a community image directly anyway?
Also because Docker Hub is poorly prepared for distinguishing architectures, searching for images directly with the docker search command is now inconvenient: besides the image keyword, we need an extra architecture qualifier.
So it is better to browse the corresponding pages directly:
https://hub.docker.com/u/aarch64
and
https://hub.docker.com/u/arm64v8
Below we take images from the arm64v8 classification, run a typical WordPress application, and see how the Kunpeng server behaves with containers.
A WordPress deployment needs two containers: one runs Apache / PHP and WordPress itself; the other provides a MySQL-compatible database, for which we use the community open-source MariaDB.
First pull the images:
# docker pull arm64v8/wordpress
Using default tag: latest
latest: Pulling from arm64v8/wordpress
a4f3dd4087f9: Pull complete
e54f8c59bdae: Pull complete
6ae19fe01dd7: Pull complete
939a6e43e07c: Pull complete
c7bc60aacdf3: Pull complete
c1e1bedfb04e: Pull complete
8332b8441264: Pull complete
012fa89ca2bc: Pull complete
c0dfb13372af: Pull complete
3cbeabdc4805: Pull complete
8e492268eedf: Pull complete
db2ddafb0478: Pull complete
a02565d248c3: Pull complete
7e8259639516: Pull complete
3efb6c94a4c9: Pull complete
77f6d83e6c7a: Pull complete
3601f2116010: Pull complete
4ec7c7d8a180: Pull complete
b834909e81a9: Pull complete
72c2b2a88763: Pull complete
d77d0ee96a04: Pull complete
Digest: sha256:28e7d4a7b3ba0d55f151e718e84de5f186b0c65adaac2da9005a64cb6ad82de8
Status: Downloaded newer image for arm64v8/wordpress:latest

# docker pull arm64v8/mariadb
Using default tag: latest
latest: Pulling from arm64v8/mariadb
6531af355894: Pull complete
82f7942d2fb7: Pull complete
fdce94e690d5: Pull complete
a96a89ada1c3: Pull complete
9bcef89e3002: Pull complete
06115e3e56a0: Pull complete
5712e955a6d4: Pull complete
afd2dc9f5e8f: Pull complete
07ef8ef990de: Pull complete
ae55899885f1: Pull complete
9c16c03a30d3: Pull complete
5f1431dbf111: Pull complete
58fecc1c9379: Pull complete
1c94839aac8b: Pull complete
Digest: sha256:c67410e8deeb6e165c867131c7669155e43b532d441120df2bbf4f12a3710cd7
Status: Downloaded newer image for arm64v8/mariadb:latest
Then start the database container first: set the root password through environment variables, create a regular user account, and create a dedicated database for WordPress. We will not access the database from the host, so no port mapping is needed:
# docker run -e MYSQL_ROOT_PASSWORD=rootpassword -e MYSQL_USER=wpuser -e MYSQL_PASSWORD=wpuserpassword -e MYSQL_DATABASE=wordpressdb --name wordpressdb -d arm64v8/mariadb
51e6d43af860e00c45cce81bed1918ae3c2a5c91bdcfca18203b0486d8f2783d
Then start the WordPress container, passing in the database account just created through environment variables, and link it to the database container so the two talk directly over Docker's internal network instead of going through the host:
# docker run -e WORDPRESS_DB_USER=wpuser -e WORDPRESS_DB_PASSWORD=wpuserpassword -e WORDPRESS_DB_NAME=wordpressdb -p 8080:80 --link wordpressdb:mysql --name wordpress -d arm64v8/wordpress
83cad21cf2a057273440cb919885c061b77711b4baedb64fd7bff683a1a30177
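If the site does not come up, the usual checks apply; this is just a generic sketch, not output from this run:

# Confirm both containers are running and look at their logs if needed.
# docker ps
# docker logs wordpressdb
# docker logs wordpress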
With that, a micro-blog is up and running. Open a browser and take a look:
First, some basic setup is required, but only for site information. Because the database settings were passed in when the container was started, the database-setup page never appears at all.
After the setup, open the home page of the site: it works very well.
Throughout the whole build, apart from the arm64v8 prefix on the two image names, configuration and use were no different from an x86 server.
Summary
A handful of small applications obviously cannot represent everything, but I think they are closer to reality than the usual benchmark suites.
Here are my impressions:
- The Kunpeng 920 performed well, a pleasant surprise. It is fully capable of playing the lead role in common enterprise cloud applications.
- Conventional scripting languages and VM-based languages have nothing to worry about in terms of compatibility.
- Conventional C / C++ / Rust code compiles to native binaries without any trouble; I believe 99% of enterprise applications are compatible.
- Hand-written assembly is the one obstacle: for aarch64, whether it is new development or hunting for existing code in the community, things will have to accumulate slowly.
- Community container resources are clearly insufficient. That hurts small teams that rely on community images a little, and matters much less for enterprise applications.
After this trial, I wish the TaiShan server and the Kunpeng chip an ever brighter future.
Original link: https://www.cnblogs.com/andrewwang/p/12020953.html