
SSL Caddy v2 versus Centmin Mod Nginx HTTP/2 & HTTP/3 HTTPS Benchmarks

Discussion in 'Domains, DNS, Email & SSL Certificates' started by eva2000, May 10, 2020.

Thread Status:
Not open for further replies.
  1. eva2000

    eva2000 Administrator Staff Member

    First attempt at using the newer Caddy v2 server, so I thought I'd do some quick HTTP/2 & HTTP/3 benchmarks against my Nginx HTTP/2 & HTTP/3 Cloudflare Quiche patched servers to see where performance is at. The full write-up and system/config details are at centminmod/centminmod-caddy-v2.

    Previous Caddy benchmarks
    Test Parameters


    Using h2load tester
    • h2load HTTP/2 HTTPS load tests at 150, 500 and 1,000 user concurrency, with varying request counts and max concurrent stream settings
    • h2load HTTP/3 HTTPS load tests at 150, 500 and 1,000 user concurrency, with varying request counts and max concurrent stream settings
    Caddy v2 keeled over at the 1,000 user concurrency mark for both the h2load HTTP/2 and HTTP/3 load tests, while Nginx handled them fine, both running in the same VirtualBox CentOS 7.8 guest OS environment.

    Testing the ngx.domain.com site over curl with HTTP/3 support built using the Cloudflare Quiche library.
    Code (Text):
    curl-http3 --http3 -skD - -H "Accept-Encoding: gzip" https://ngx.domain.com/caddy-index.html -o /dev/null         
    HTTP/3 200
    date: Sat, 09 May 2020 15:01:22 GMT
    content-type: text/html; charset=utf-8
    last-modified: Wed, 06 May 2020 18:44:09 GMT
    vary: Accept-Encoding
    etag: W/"5eb30579-2fc2"
    server: nginx centminmod
    x-powered-by: centminmod
    alt-svc: h3-27=":443"; ma=86400
    x-xss-protection: 1; mode=block
    x-content-type-options: nosniff
    content-encoding: gzip
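
    The alt-svc response header above is how the server advertises its HTTP/3 endpoint to clients (protocol id `h3-27`, the authority port, and an `ma` max-age in seconds). A small illustrative parser for that header value, just to show what each part means:

    ```python
    # Minimal sketch of parsing an Alt-Svc header value (RFC 7838 style),
    # e.g. the 'h3-27=":443"; ma=86400' value Nginx returned above.
    def parse_alt_svc(value):
        entries = {}
        for alternative in value.split(","):
            proto, _, rest = alternative.strip().partition("=")
            parts = [p.strip() for p in rest.split(";")]
            authority = parts[0].strip('"')
            params = dict(p.split("=", 1) for p in parts[1:] if "=" in p)
            entries[proto] = {"authority": authority,
                              "max_age": int(params.get("ma", 0))}
        return entries

    info = parse_alt_svc('h3-27=":443"; ma=86400')
    print(info)  # {'h3-27': {'authority': ':443', 'max_age': 86400}}
    ```

    Note how Nginx advertises `ma=86400` (1 day) while the Caddy response below uses `ma=2592000` (30 days).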

    Caddy v2 HTTP/3 with experimental_http3 enabled
    Code (Text):
    curl-http3 --http3 -skD - -H "Accept-Encoding: gzip" https://caddy.domain.com:4444/caddy-index.html -o /dev/null
    HTTP/3 200
    x-xss-protection: 1; mode=block
    etag: "q9xapl9fm"
    content-type: text/html; charset=utf-8
    last-modified: Wed, 06 May 2020 18:44:09 GMT
    content-encoding: gzip
    x-powered-by: caddy centminmod
    alt-svc: h3-27=":4444"; ma=2592000
    x-content-type-options: nosniff
    vary: Accept-Encoding
    server: Caddy
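
    For reference, a minimal Caddyfile sketch of the kind of config behind the response above. This is an assumption, not the actual test config (which is in the centminmod/centminmod-caddy-v2 write-up): `experimental_http3` was a global option in early Caddy v2 releases and later versions changed how HTTP/3 is enabled, and the root path here is hypothetical.

    ```
    # Global options: experimental_http3 enabled HTTP/3 in early Caddy v2
    {
        experimental_http3
    }

    # Hypothetical site block matching the test URL; root path is made up
    caddy.domain.com:4444 {
        root * /usr/local/caddy/html
        file_server
        encode gzip
    }
    ```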

    For h2load HTTP/3 tests
    Code (Text):
    h2load-http3 --version
    h2load nghttp2/1.41.0-DEV

    For curl
    Code (Text):
    curl-http3 -V
    curl 7.71.0-DEV (x86_64-pc-linux-gnu) libcurl/7.71.0-DEV BoringSSL zlib/1.2.11 brotli/1.0.7 libidn2/2.0.5 libpsl/0.20.2 (+libidn2/2.0.5) libssh2/1.8.0 nghttp2/1.36.0 quiche/0.3.0
    Release-Date: [unreleased]
    Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp scp sftp smb smbs smtp smtps telnet tftp
    Features: alt-svc AsynchDNS brotli HTTP2 HTTP3 HTTPS-proxy IDN IPv6 Largefile libz NTLM NTLM_WB PSL SSL UnixSockets
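
    The `Features:` line in the `curl -V` output above is the quick way to confirm a curl build has HTTP/3 support. A small sketch pulling it out (the version string here is abridged from the output above):

    ```python
    # curl lists build capabilities on the "Features:" line of `curl -V`;
    # check that HTTP3 is among them.
    curl_version_output = """\
    curl 7.71.0-DEV (x86_64-pc-linux-gnu) libcurl/7.71.0-DEV BoringSSL quiche/0.3.0
    Release-Date: [unreleased]
    Features: alt-svc AsynchDNS brotli HTTP2 HTTP3 HTTPS-proxy IDN IPv6 Largefile libz NTLM NTLM_WB PSL SSL UnixSockets"""

    def curl_features(output):
        for line in output.splitlines():
            if line.strip().startswith("Features:"):
                return set(line.split()[1:])
        return set()

    print("HTTP3" in curl_features(curl_version_output))  # True
    ```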


    Tabulated results are below:

    HTTP/2 HTTPS Benchmarks



    caddy-vs-nginx-http2-benchmarks-may10-01.png

    HTTP/3 HTTPS Benchmarks



    caddy-vs-nginx-http3-benchmarks-may10-01.png
     
  2. eva2000

    eva2000 Administrator Staff Member

    h2load HTTP/3 HTTPS Resource Usage



    After posting the initial results from my laptop's VirtualBox setup, folks asked what server resource usage is like between Caddy v2 and Nginx. This can vary greatly depending on how the respective web servers are built and configured. For example, if Nginx is configured with caching and buffers, its starting-state memory usage would be higher but more uniform under load.

    I intended to move my testing over to a proper server for resource monitoring measurements, as a laptop VirtualBox environment isn't ideal. But I decided to do a quick h2load HTTP/3 test between Caddy v2 and Centmin Mod Nginx 1.16.1 with the Cloudflare Nginx HTTP/3 patch, on the same setup used for the tests here.

    I had to change the h2load test parameters, as request-count based tests complete too quickly for any accurate resource monitoring measurements. For example, Nginx can complete 500 user concurrency with 2,000 requests for h2load HTTP/3 in 2.37 seconds. That's hardly enough time to take any measurements. So I changed the h2load HTTP/3 tests to a 20 second duration with a small user concurrency for a quick test. I will reserve further testing with longer durations for when I have time to move the setup to a proper server. The system resource usage statistics are at the very bottom of the page.
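
    The arithmetic behind switching to duration-based runs: at the throughput Nginx sustains here, a fixed request count is exhausted in a couple of seconds, which is why the tests below use `-D 20` (20s main duration) and `--warm-up-time=5` instead of `-n`:

    ```python
    # 2,000 requests finishing in 2.37 seconds (the Nginx HTTP/3 figure
    # quoted above) leaves almost no window for sampling CPU/RAM usage.
    requests, elapsed = 2000, 2.37
    rate = requests / elapsed
    print(f"~{rate:.0f} req/s, test over in {elapsed}s")  # ~844 req/s
    ```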

    The results below are from h2load HTTP/3 HTTPS tests with 50 concurrent users over a 20 second duration, with a 5 second warm-up time and 50 max concurrent streams.

    caddy-vs-nginx-http3-benchmarks-resource-duration-20s-may10-01.png

    Caddy



    Code (Text):
    caddyrestart; ngxstop; sleep 30; h2load-http3 -t1 -c50 -D 20 --warm-up-time=5 -m50 -H "Accept-Encoding:gzip" https://caddy.domain.com:4444/caddy-index.html
    Redirecting to /bin/systemctl restart caddy.service
    Stopping nginx (via systemctl):                            [  OK  ]
    starting benchmark...
    spawning thread #0: 50 total client(s). Timing-based test with 5s of warm-up time and 20s of main duration for measurements.
    Warm-up started for thread #0.
    progress: 10% of clients started
    progress: 20% of clients started
    progress: 30% of clients started
    progress: 40% of clients started
    progress: 50% of clients started
    progress: 60% of clients started
    progress: 70% of clients started
    progress: 80% of clients started
    progress: 90% of clients started
    progress: 100% of clients started
    TLS Protocol: TLSv1.3
    Cipher: TLS_AES_128_GCM_SHA256
    Server Temp Key: ECDH P-256 256 bits
    Application protocol: h3-27
    Warm-up phase is over for thread #0.
    Main benchmark duration is started for thread #0.
    Main benchmark duration is over for thread #0. Stopping all clients.
    Stopped all clients for thread #0
    
    finished in 25.86s, 373.00 req/s, 2.29MB/s
    requests: 7460 total, 7460 started, 7460 done, 7460 succeeded, 0 failed, 0 errored, 0 timeout
    status codes: 9015 2xx, 0 3xx, 0 4xx, 0 5xx
    traffic: 45.80MB (48023049) total, 2.73MB (2858502) headers (space savings -11.82%), 43.23MB (45326876) data
                        min         max         mean         sd        +/- sd
    time for request:    47.01ms      12.91s       7.22s       3.38s    55.87%
    time for connect:        0us         0us         0us         0us     0.00%
    time to 1st byte:        0us         0us         0us         0us     0.00%
    req/s           :       0.00       10.63        7.23        1.48    80.00%


    cpu.png loadaverage.png ram.png contextsw.png

    Nginx



    Code (Text):
    ngxrestart; caddystop; sleep 30; h2load-http3 -t1 -c50 -D 20 --warm-up-time=5 -m50 -H "Accept-Encoding:gzip" https://ngx.domain.com/caddy-index.html
    Restarting nginx (via systemctl):                          [  OK  ]
    Redirecting to /bin/systemctl stop caddy.service
    starting benchmark...
    spawning thread #0: 50 total client(s). Timing-based test with 5s of warm-up time and 20s of main duration for measurements.
    Warm-up started for thread #0.
    progress: 10% of clients started
    progress: 20% of clients started
    progress: 30% of clients started
    progress: 40% of clients started
    progress: 50% of clients started
    progress: 60% of clients started
    progress: 70% of clients started
    progress: 80% of clients started
    progress: 90% of clients started
    progress: 100% of clients started
    TLS Protocol: TLSv1.3
    Cipher: TLS_AES_128_GCM_SHA256
    Server Temp Key: X25519 253 bits
    Application protocol: h3-27
    Warm-up phase is over for thread #0.
    Main benchmark duration is started for thread #0.
    Main benchmark duration is over for thread #0. Stopping all clients.
    Stopped all clients for thread #0
    
    finished in 25.03s, 1446.55 req/s, 7.11MB/s
    requests: 28931 total, 28931 started, 28931 done, 28931 succeeded, 0 failed, 0 errored, 0 timeout
    status codes: 27904 2xx, 0 3xx, 0 4xx, 0 5xx
    traffic: 142.19MB (149098871) total, 4.59MB (4817820) headers (space savings 55.13%), 167.34MB (175464789) data
                        min         max         mean         sd        +/- sd
    time for request:   534.93ms      10.42s       1.72s       1.04s    83.99%
    time for connect:        0us         0us         0us         0us     0.00%
    time to 1st byte:        0us         0us         0us         0us     0.00%
    req/s           :      12.79       45.22       29.52        8.17    60.00%
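
    Note that h2load's headline req/s figure is computed over the 20s main measurement window (`-D 20`), not the ~25s wall time, which includes the 5s warm-up and ramp-up. The two summaries above check out:

    ```python
    # h2load reports req/s over the main measurement duration only;
    # verify against the Caddy and Nginx summaries above.
    duration = 20  # seconds of main measurement (-D 20)

    caddy_requests = 7460
    nginx_requests = 28931

    print(caddy_requests / duration)  # 373.0   (matches "373.00 req/s")
    print(nginx_requests / duration)  # 1446.55 (matches "1446.55 req/s")
    print(nginx_requests / caddy_requests)  # Nginx ~3.9x the throughput here
    ```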


    cpu.png
    loadaverage.png
    ram.png
    contextsw.png
     