I. nginx access logging enabled

1. Docker deployment

[yeqiang@harbor wrk]$ docker run --name=test -d -p 81:80 nginx:alpine
3a37a184cb67d5ea9f608ae1b5c196c679b263872a8edc598d80929d05c7f9d0
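
A quick sanity check that the container answers before benchmarking (a sketch, not part of the original transcript):

curl -sI http://localhost:81/ | head -n 1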

wrk test (4 threads, 160 connections, 30 seconds)

[yeqiang@harbor wrk]$ ./wrk -t4 -c160 -d30s http://localhost:81/
Running 30s test @ http://localhost:81/
  4 threads and 160 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.77ms    1.21ms  39.14ms   79.73%
    Req/Sec    23.37k     2.54k   29.49k    70.42%
  2792821 requests in 30.04s, 2.21GB read
Requests/sec:  92963.41
Transfer/sec:     75.35MB

CPU status:
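
The CPU status snapshots referenced throughout were taken on the host during each run; one way to capture comparable per-process numbers (the process names come from the summary table at the end, the exact tool and flags are an assumption):

top -b -d 1 -n 30 | grep -E 'nginx|docker-proxy|dockerd|containerd-shim|ingress'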

2. k8s deployment test (single node, not cluster mode!)

One replica, no CPU limit, service exposed through an Ingress.

nginx-ingress-test.yml configuration

kind: Deployment
apiVersion: apps/v1
metadata:
  name: test-nginx
  namespace: default
  labels:
    app: test-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-nginx
  template:
    metadata:
      labels:
        app: test-nginx
    spec:
      containers:
        - name: test-nginx
          image: 'nginx:alpine'          
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: Always
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600


---
kind: Service
apiVersion: v1
metadata:
  name: test-nginx
  namespace: default
  labels:
    app: test-nginx
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: test-nginx
  sessionAffinity: None
  type: ClusterIP

---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: test-nginx-ingress
  namespace: default
  labels:
    app: test-nginx
  annotations:
    ingress.kubernetes.io/proxy-body-size: '0'
    ingress.kubernetes.io/ssl-redirect: 'true'
    nginx.ingress.kubernetes.io/proxy-body-size: '0'
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
spec:
  rules:
    - host: test-nginx.hknaruto.com
      http:
        paths:
          - backend:
              serviceName: test-nginx
              servicePort: 80
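
Note that the manifest targets the old extensions/v1beta1 Ingress API, which this cluster still serves; it was removed in Kubernetes 1.22, and on newer clusters the equivalent rule would look roughly like this:

kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: test-nginx-ingress
  namespace: default
spec:
  rules:
    - host: test-nginx.hknaruto.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: test-nginx
                port:
                  number: 80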

Apply the manifest

[yeqiang@harbor tmp]$ kubectl apply -f nginx-ingress-test.yml 
deployment.apps/test-nginx created
service/test-nginx created
ingress.extensions/test-nginx-ingress created

wrk test

[yeqiang@harbor wrk]$ ./wrk -t4 -c160 -d30s http://test-nginx.hknaruto.com/
Running 30s test @ http://test-nginx.hknaruto.com/
  4 threads and 160 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     4.37ms    6.23ms 129.74ms   93.63%
    Req/Sec    12.31k     2.12k   18.20k    64.00%
  1470519 requests in 30.03s, 1.20GB read
Requests/sec:  48965.59
Transfer/sec:     40.76MB

CPU status

3. Bare-metal deployment

Confirm the nginx version used in the Docker image

/ # /usr/sbin/nginx -V
nginx version: nginx/1.19.3
built by gcc 9.3.0 (Alpine 9.3.0) 
built with OpenSSL 1.1.1g  21 Apr 2020
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --with-perl_modules_path=/usr/lib/perl5/vendor_perl --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-Os -fomit-frame-pointer' --with-ld-opt=-Wl,--as-needed

Build and install the same nginx version with the same configure arguments (local gcc: 9.2.1)
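
Besides gcc, the configure flags above imply the pcre, zlib and openssl development headers must be present; on a yum/dnf-based host that would be roughly (package names are an assumption, not from the original transcript):

sudo yum install -y pcre-devel zlib-devel openssl-devel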

[yeqiang@harbor Downloads]$ cd nginx-1.19.3/
[yeqiang@harbor nginx-1.19.3]$ ./configure --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --with-perl_modules_path=/usr/lib/perl5/vendor_perl --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-Os -fomit-frame-pointer' --with-ld-opt=-Wl,--as-needed
[yeqiang@harbor nginx-1.19.3]$ make -j12
[yeqiang@harbor nginx-1.19.3]$ sudo make install

Copy the nginx configuration files directly from the container

[yeqiang@harbor nginx-1.19.3]$ sudo su
[root@harbor nginx-1.19.3]# cd /etc/nginx/
[root@harbor nginx]# cp nginx.conf nginx.conf.bak
[root@harbor nginx]# docker cp 5e639b847735:/etc/nginx/nginx.conf .
[root@harbor nginx]# docker cp 5e639b847735:/etc/nginx/conf.d .

In the copied nginx.conf, change the run user to nobody:

user  nobody;
worker_processes  auto;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}

In conf.d/default.conf, change the listen port to 81 (port 80 on the host is already taken by the ingress service):

server {
    listen       81;
    listen  [::]:81;
    server_name  localhost;

    #charset koi8-r;
    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}

Create the cache directory and set ownership

[root@harbor nginx]# mkdir -p /var/cache/nginx/client_temp
[root@harbor nginx]# chown root:nobody /var/cache/nginx/ -R

Copy the html directory from the container (so that index.html has exactly the same size)

[root@harbor ~]# docker cp 4370fb6acb18:/usr/share/nginx/html /usr/share/nginx/
[root@harbor nginx]$ sudo chown nobody:root /usr/share/nginx/ -R
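
The transcript does not show starting the freshly built nginx; presumably the configuration was validated and the server started roughly as follows (a sketch, not from the original run):

/usr/sbin/nginx -t     # check the copied nginx.conf and conf.d
/usr/sbin/nginx        # master runs as root, workers as nobody per the config
ls -l /usr/share/nginx/html/index.html   # confirm the index.html size matches the container's copy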

wrk test

[yeqiang@harbor wrk]$ ./wrk -t4 -c160 -d30s http://localhost:81/
Running 30s test @ http://localhost:81/
  4 threads and 160 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.11ms    1.37ms  24.79ms   88.74%
    Req/Sec    47.91k     5.44k   65.41k    76.58%
  5728491 requests in 30.06s, 4.53GB read
Requests/sec: 190587.74
Transfer/sec:    154.49MB

CPU status

4. Supplement: Docker deployment with net=host

Start nginx (host networking; the host's /etc is mounted so the container picks up the nginx config that listens on port 81)

[yeqiang@harbor wrk]$ docker run -d --rm -it --net=host -v /etc:/etc nginx:alpine
4cac307731580c954833a7a64dfd47a4e0b72c0742b8f18dbf7fd1c2d40fd889

wrk test

[yeqiang@harbor wrk]$ ./wrk -t4 -c160 -d30s http://localhost:81/
Running 30s test @ http://localhost:81/
  4 threads and 160 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.82ms    1.64ms  21.51ms   82.80%
    Req/Sec    25.03k     3.31k   39.45k    72.92%
  2992155 requests in 30.05s, 2.37GB read
Requests/sec:  99558.06
Transfer/sec:     80.70MB

CPU status

At this point the Docker container's terminal prints a large volume of access logs.
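
This is expected behaviour of the official nginx image: it symlinks the log files to the container's stdout/stderr, so every request ends up on the terminal. A quick way to confirm (command is a sketch, substitute the real container id):

docker exec <container-id> ls -l /var/log/nginx/
# access.log -> /dev/stdout
# error.log  -> /dev/stderr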

II. nginx access logging disabled

1. Docker deployment (the host's /etc is mounted so the container reuses the host nginx config listening on port 81)

[yeqiang@harbor wrk]$ docker run --name=test -d -v /etc:/etc -p 81:81 nginx:alpine
5dd64ba7937ee1c21563ce1350c23db3bee8b3bb1f28200774879e038c199f2d

wrk test

[yeqiang@harbor wrk]$ ./wrk -t4 -c160 -d30s http://localhost:81/
Running 30s test @ http://localhost:81/
  4 threads and 160 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.51ms    1.25ms  41.45ms   87.12%
    Req/Sec    28.16k     3.18k   40.94k    72.33%
  3365729 requests in 30.04s, 2.66GB read
Requests/sec: 112035.96
Transfer/sec:     90.81MB

CPU status

2. k8s deployment

Deployment configuration file

kind: Deployment
apiVersion: apps/v1
metadata:
  name: test-nginx
  namespace: default
  labels:
    app: test-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-nginx
  template:
    metadata:
      labels:
        app: test-nginx
    spec:
      containers:
        - name: test-nginx
          image: 'nginx:alpine'          
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: Always
          volumeMounts:
          - name: host-path-etc
            mountPath: /etc
      volumes:
      - name: host-path-etc
        hostPath:
          path: /etc
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600


---
kind: Service
apiVersion: v1
metadata:
  name: test-nginx
  namespace: default
  labels:
    app: test-nginx
spec:
  ports:
    - protocol: TCP
      port: 81
      targetPort: 81
  selector:
    app: test-nginx
  sessionAffinity: None
  type: ClusterIP

---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: test-nginx-ingress
  namespace: default
  labels:
    app: test-nginx
  annotations:
    ingress.kubernetes.io/proxy-body-size: '0'
    ingress.kubernetes.io/ssl-redirect: 'true'
    nginx.ingress.kubernetes.io/proxy-body-size: '0'
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
spec:
  rules:
    - host: test-nginx.hknaruto.com
      http:
        paths:
          - backend:
              serviceName: test-nginx
              servicePort: 81

wrk test

[yeqiang@harbor wrk]$ ./wrk -t4 -c160 -d30s http://test-nginx.hknaruto.com/
Running 30s test @ http://test-nginx.hknaruto.com/
  4 threads and 160 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     5.23ms   10.31ms 268.90ms   94.20%
    Req/Sec    13.01k     2.76k   26.20k    69.31%
  1554172 requests in 30.04s, 1.26GB read
Requests/sec:  51731.01
Transfer/sec:     43.07MB

CPU status

3. Bare metal

[yeqiang@harbor wrk]$ ./wrk -t4 -c160 -d30s http://localhost:81
Running 30s test @ http://localhost:81
  4 threads and 160 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     0.91ms    1.41ms  31.26ms   89.55%
    Req/Sec    76.03k    10.03k  116.32k    76.75%
  9083913 requests in 30.04s, 7.19GB read
Requests/sec: 302359.96
Transfer/sec:    245.09MB


4. Supplement: Docker net=host deployment with access logging disabled

Configure /etc/nginx/conf.d/default.conf:

access_log off;
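
For reference, a minimal sketch of where the directive sits in the server block shown earlier (only this line is added; everything else in default.conf stays as before):

server {
    listen       81;
    listen  [::]:81;
    server_name  localhost;

    access_log  off;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }
}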

wrk test

[yeqiang@harbor wrk]$ ./wrk -t4 -c160 -d30s http://localhost:81/
Running 30s test @ http://localhost:81/
  4 threads and 160 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.05ms    1.25ms  23.94ms   89.69%
    Req/Sec    46.81k    10.69k   96.71k    66.42%
  5596303 requests in 30.05s, 4.43GB read
Requests/sec: 186240.84
Transfer/sec:    150.96MB

CPU status at this point

Summary:

Data comparison (Requests/sec from wrk; CPU figures are the container-related processes on the host):

| Deployment      | Req/sec, access log on | CPU, access log on                                       | Req/sec, access log off | CPU, access log off                         |
|-----------------|------------------------|----------------------------------------------------------|-------------------------|---------------------------------------------|
| Docker          | 92963.41               | docker-proxy 118.6%, dockerd 28.9%, containerd-shim 8.3% | 112035.96               | docker-proxy 138.5%, docker 1.3%            |
| Docker net=host | 99558.06               | dockerd 50.2%, containerd-shim 46.2%                     | 186240.84               | dockerd 1.3%                                |
| k8s             | 48965.59               | nginx-ingress-controller 49.8%, dockerd 41.2%            | 51731.01                | nginx-ingress-controller 59%, dockerd 41.7% |
| Bare metal      | 190587.74              | -                                                        | 302359.96               | -                                           |
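
Put another way (derived from the table): with access logging on, Docker reaches roughly 49% of bare-metal throughput (92963 / 190588), Docker net=host about 52%, and k8s behind the ingress about 26%. With logging off, Docker reaches about 37% (112036 / 302360), net=host about 62%, and k8s about 17%.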

Ranking of performance overhead when the service is deployed with Docker:

1. Networking: the published port is fronted by docker-proxy (a possible mitigation is sketched below)

2. Filesystem I/O and terminal logging (with access logging enabled in this test, the logs are also printed to the container's terminal)
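
The net=host runs above already show what removing docker-proxy buys. For deployments that must keep -p port publishing, a commonly used alternative (not benchmarked here; its effect on these numbers is an assumption) is to disable the userland proxy in /etc/docker/daemon.json so published ports are handled by iptables DNAT instead of a docker-proxy process:

{
  "userland-proxy": false
}

Then restart the daemon: sudo systemctl restart docker.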

Ranking of performance overhead when the service is deployed with k8s:

1. Networking: the service is fronted by the ingress-controller (an alternative exposure path is sketched below)

2. dockerd
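
For a single-node test like this one, a hedged way to take the ingress-controller out of the path is to expose the pod directly, for example with a NodePort Service (the name and nodePort below are hypothetical, not part of the original manifests):

kind: Service
apiVersion: v1
metadata:
  name: test-nginx-nodeport
  namespace: default
spec:
  type: NodePort
  selector:
    app: test-nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30080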

 
