Vultr 60% discount & Free $100 Credit Test Drive Vultr Bare Metal Instances!

Discussion in 'Virtual Private Server (VPS) hosting' started by eva2000, Jan 16, 2018.

  1. eva2000

    eva2000 Administrator Staff Member

    Spun up a Vultr Bare Metal E3-1270v6 instance again for some more Centmin Mod 123.09beta01 LEMP stack benchmarks, as it's rare to have access to a 10Gbps network :)

    Using my forked wrk load tester, wrk-cmm, I managed (with some extra system tuning) to push the Centmin Mod default Nginx 1.13.8 web server to 350,000 concurrent connections against Centmin Mod's default Nginx index page :D

    Only tested with 4 wrk threads, as the other 4 threads of the Xeon E3-1270v6 4C/8T processor were needed for Nginx. The 350,000 concurrent connections over 4 threads produced 130,156 requests/sec with transfers of 543.80MB/s, or 5.4+ Gbps. I also needed to use wrk's bind option (-b 127.0.0.1/28) to spread connections across multiple source addresses and get around the ephemeral port limit.

    Code (Text):
    wrk-cmm -b 127.0.0.1/28 -t4 -c350000 -d10s --latency --breakout -s scripts/pipeline2.lua http://localhost/
    Running 10s test @ http://localhost/
      4 threads and 350000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   915.79ms  345.52ms   1.90s    57.44%
        Connect   998.07ms  470.08ms   1.81s    57.36%
        TTFB      915.79ms  345.52ms   1.90s    57.44%
        TTLB        2.12us   59.82us  50.03ms   99.97%
        Req/Sec    66.13k    81.04k  244.32k    79.17%
      Latency Distribution
         50%  921.16ms
         75%    1.20s
         90%    1.36s
         99%    1.63s
      1399595 requests in 10.75s, 5.71GB read
    Requests/sec: 130156.48
    Transfer/sec:    543.80MB
    
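    The exact system tuning isn't shown above, but reaching this many local connections generally means raising file descriptor and ephemeral port limits. A minimal sketch of the kind of sysctl/ulimit adjustments involved (values are illustrative, not the exact ones used in these tests):
    Code (Text):
    ulimit -n 1048576                                    # per-process open file descriptor limit
    sysctl -w fs.file-max=2097152                        # system-wide file descriptor cap
    sysctl -w net.ipv4.ip_local_port_range='1024 65535'  # widen the ephemeral port range
    sysctl -w net.core.somaxconn=65535                   # listen() backlog
    sysctl -w net.ipv4.tcp_max_syn_backlog=65535         # half-open connection backlog
    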



    Nginx status page reporting active connections, accepts and handled requests :)

    wrk-nginx-status-010218-350k.png

    Centmin Mod's default Nginx index page

    Code (Text):
    curl -I http://localhost
    HTTP/1.1 200 OK
    Date: Thu, 01 Feb 2018 12:03:35 GMT
    Content-Type: text/html; charset=utf-8
    Content-Length: 4074
    Last-Modified: Thu, 01 Feb 2018 05:48:37 GMT
    Connection: keep-alive
    Vary: Accept-Encoding
    ETag: "5a72aa35-fea"
    Server: nginx centminmod
    X-Powered-By: centminmod
    Accept-Ranges: bytes
    
     
  2. pamamolf

    pamamolf Premium Member

    Is there any easy way to check how many users are connected on the server?

    I don't mean connections or requests per user...
     
  3. eva2000

    eva2000 Administrator Staff Member

    For Nginx, a user = an active connection, and the Nginx status page shows that count:
    Code (Text):
    curl -s localhost/nginx_status
    

    Code (Text):
    curl -s localhost/nginx_status
    Active connections: 1 
    server accepts handled requests
     533 533 538 
    Reading: 0 Writing: 1 Waiting: 0 
    

    or, if you enable the optional Nginx vhost status module, see Beta Branch - Centmin Mod Nginx live vhost traffic statistics preview & discussion

    upload_2018-2-1_23-0-2.png
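
    For reference, the /nginx_status endpoint queried above comes from Nginx's stub_status module; a minimal location block for it looks like this (a sketch only, Centmin Mod's bundled config may differ):
    Code (Text):
    location /nginx_status {
        stub_status;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }
    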
     
  4. eva2000

    eva2000 Administrator Staff Member

    More max Nginx concurrent connection tests following on from initial tests here.

    What's better than pushing Nginx to 350,000 concurrent connections? Doing so with better performance :) With a bit more tuning I managed to pull off 350k concurrent connections with the wrk-cmm load tester while improving average thread latency by 5.4% and throughput by 17.7%. Latency distribution at the 50%, 75% and 90% percentiles was faster, though 99th percentile latency was higher at 1.68s vs 1.63s.

    Before:
    • Avg Thread Latency 915.79ms
    • Requests/sec: 130156.48
    • Transfer/sec: 543.80MB ~5.43+ Gbps
    After:
    • Avg Thread Latency 866.07ms
    • Requests/sec: 153187.50
    • Transfer/sec: 640.02MB ~6.4 Gbps
    At 350k concurrent connections
    Code (Text):
    wrk-cmm -b 127.0.0.1/28 -t4 -c350000 -d10s --latency --breakout -s scripts/pipeline2.lua http://localhost/
    Running 10s test @ http://localhost/
      4 threads and 350000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   866.07ms  328.59ms   1.99s    62.38%
        Connect   990.43ms  459.15ms   1.79s    57.84%
        TTFB      866.07ms  328.59ms   1.99s    62.38%
        TTLB        2.29us   77.49us  50.03ms   99.93%
        Req/Sec    88.79k    98.99k  304.75k    76.47%
      Latency Distribution
         50%  815.30ms
         75%    1.10s
         90%    1.32s
         99%    1.68s
      1643912 requests in 10.73s, 6.71GB read
    Requests/sec: 153187.50
    Transfer/sec:    640.02MB
    
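    The extra tuning mentioned above isn't spelled out here; on the Nginx side it typically comes down to worker and file descriptor limits along these lines (a sketch with illustrative values, not the exact config used in these tests):
    Code (Text):
    # nginx.conf (illustrative values only)
    worker_processes 4;                  # leave the other 4 CPU threads for the wrk-cmm load tester
    worker_rlimit_nofile 524288;         # per-worker open file descriptor limit
    events {
        worker_connections 131072;       # per-worker connection cap (4 x 131072 > 350k)
        multi_accept on;
    }
    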

    Tried 375k concurrent connections but failed, with ~1717 socket timeout errors
    Code (Text):
    wrk-cmm -b 127.0.0.1/28 -t4 -c375000 -d10s --latency --breakout -s scripts/pipeline2.lua http://localhost/
    Running 10s test @ http://localhost/
      4 threads and 350000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   930.08ms  359.93ms   1.99s    62.38%
        Connect     1.04s   486.68ms   1.87s    57.44%
        TTFB      930.07ms  359.93ms   1.99s    62.38%
        TTLB        2.27us  100.27us  90.02ms   99.97%
        Req/Sec    89.75k   108.02k  275.98k    67.86%
      Latency Distribution
         50%  904.38ms
         75%    1.19s
         90%    1.46s
         99%    1.71s
      1497155 requests in 10.89s, 6.11GB read
      Socket errors: connect 0, read 0, write 0, timeout 1717
    Requests/sec: 137450.35
    Transfer/sec:    574.27MB
    

    Tried 400k concurrent connections but failed, with ~4511 socket timeout errors
    Code (Text):
    wrk-cmm -b 127.0.0.1/28 -t4 -c400000 -d10s --latency --breakout -s scripts/pipeline2.lua http://localhost/
    Running 10s test @ http://localhost/
      4 threads and 350000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   821.56ms  373.54ms   1.97s    64.21%
        Connect     1.09s   493.57ms   1.97s    58.69%
        TTFB      821.55ms  373.53ms   1.97s    64.21%
        TTLB        2.15us   62.79us  47.05ms   99.95%
        Req/Sec    72.44k    87.21k  272.89k    85.00%
      Latency Distribution
         50%  708.46ms
         75%    1.11s
         90%    1.39s
         99%    1.78s
      1400000 requests in 16.14s, 5.71GB read
      Socket errors: connect 0, read 0, write 0, timeout 4511
    Requests/sec:  86742.61
    Transfer/sec:    362.41MB
    

    Tried 500k concurrent connections but failed, with ~71718 socket timeout errors
    Code (Text):
    wrk-cmm -b 127.0.0.1/28 -t4 -c500000 -d10s --latency --breakout -s scripts/pipeline2.lua http://localhost/ 
    Running 10s test @ http://localhost/
      4 threads and 500000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency     1.19s   377.63ms   2.00s    61.42%
        Connect     1.17s   506.26ms   2.00s    57.34%
        TTFB        1.19s   377.63ms   2.00s    61.42%
        TTLB        2.53us   80.13us  53.17ms   99.89%
        Req/Sec    68.44k    87.24k  251.60k    73.68%
      Latency Distribution
         50%    1.18s 
         75%    1.50s 
         90%    1.72s 
         99%    1.94s 
      1373428 requests in 11.26s, 5.60GB read
      Socket errors: connect 0, read 0, write 0, timeout 71718
    Requests/sec: 121986.91
    Transfer/sec:    509.67MB
    


    350k

    wrk-nginx-status-020218-adv-tuned-350k.png

    375k

    wrk-nginx-status-020218-adv-tuned-375k-failed.png

    400k

    wrk-nginx-status-020218-adv-tuned-400k-failed.png

    500k

    wrk-nginx-status-020218-adv-tuned-500k-failed.png
     
  5. eva2000

    eva2000 Administrator Staff Member

    Now, instead of throughput, let's look at targeting response time latency: testing the maximum Nginx concurrent connection level where 99th percentile latency stays under a 400-500ms target. It seems I may be hitting the 10Gbps limit even at lower concurrency levels, as 10,000 concurrent connections was pushing 0.94GB/s or 9.4Gbps? and 5,000 concurrent connections 0.99GB/s or 9.9Gbps? Hmmm, maybe I need a server with 40Gbps network connectivity :D

    5k port 80 - 99th percentile latency = 49.65ms
    Code (Text):
    wrk-cmm -b 127.0.0.1/28 -t4 -c5000 -d10s --latency --breakout -s scripts/pipeline2.lua http://localhost:80/
    Running 10s test @ http://localhost:80/
      4 threads and 5000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency    18.05ms   10.77ms  83.05ms   69.03%
        Connect    13.42ms    6.50ms  26.56ms   58.12%
        TTFB       18.04ms   10.77ms  83.05ms   69.04%
        TTLB        2.32us   62.54us  25.01ms   99.99%
        Req/Sec    61.70k     8.91k   92.79k    71.61%
      Latency Distribution
         50%   15.97ms
         75%   24.41ms
         90%   32.84ms
         99%   49.65ms
      2461029 requests in 10.10s, 10.04GB read
    Requests/sec: 243600.83
    Transfer/sec:      0.99GB
    

    10k port 80 - 99th percentile latency = 99.26ms
    Code (Text):
    wrk-cmm -b 127.0.0.1/28 -t4 -c10000 -d10s --latency --breakout -s scripts/pipeline2.lua http://localhost:80/
    Running 10s test @ http://localhost:80/
      4 threads and 10000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency    39.50ms   21.39ms 229.24ms   71.51%
        Connect    27.65ms   13.35ms  54.67ms   59.86%
        TTFB       39.50ms   21.39ms 229.24ms   71.51%
        TTLB        2.61us   68.68us  26.13ms   99.93%
        Req/Sec    58.23k     7.58k   82.53k    69.85%
      Latency Distribution
         50%   33.01ms
         75%   52.40ms
         90%   71.20ms
         99%   99.26ms
      2315383 requests in 10.08s, 9.45GB read
    Requests/sec: 229656.18
    Transfer/sec:      0.94GB
    

    20k port 80
    Code (Text):
    wrk-cmm -b 127.0.0.1/28 -t4 -c20000 -d10s --latency --breakout -s scripts/pipeline2.lua http://localhost:80/
    Running 10s test @ http://localhost:80/
      4 threads and 20000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency    84.91ms   41.66ms 268.25ms   69.39%
        Connect    54.83ms   26.15ms 101.30ms   57.66%
        TTFB       84.91ms   41.66ms 268.24ms   69.39%
        TTLB        2.89us   75.44us  19.38ms   99.89%
        Req/Sec    55.21k     7.16k   77.06k    72.49%
      Latency Distribution
         50%   76.69ms
         75%  110.55ms
         90%  143.09ms
         99%  205.93ms
      2184479 requests in 10.10s, 8.91GB read
    Requests/sec: 216272.05
    Transfer/sec:      0.88GB
    

    30k port 80
    Code (Text):
    wrk-cmm -b 127.0.0.1/28 -t4 -c30000 -d10s --latency --breakout -s scripts/pipeline2.lua http://localhost:80/
    Running 10s test @ http://localhost:80/
      4 threads and 30000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   130.07ms   56.76ms 371.73ms   66.43%
        Connect    81.64ms   39.36ms 161.54ms   58.02%
        TTFB      130.07ms   56.76ms 371.73ms   66.43%
        TTLB        2.89us   70.17us  19.22ms   99.89%
        Req/Sec    54.07k     7.01k   73.91k    68.31%
      Latency Distribution
         50%  123.06ms
         75%  165.43ms
         90%  208.66ms
         99%  289.41ms
      2118384 requests in 10.08s, 8.64GB read
    Requests/sec: 210082.72
    Transfer/sec:      0.86GB
    

    40k port 80
    Code (Text):
    wrk-cmm -b 127.0.0.1/28 -t4 -c40000 -d10s --latency --breakout -s scripts/pipeline2.lua http://localhost:80/
    Running 10s test @ http://localhost:80/
      4 threads and 40000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   173.75ms   74.16ms 487.60ms   64.53%
        Connect   115.55ms   56.97ms 220.49ms   57.70%
        TTFB      173.74ms   74.16ms 487.59ms   64.53%
        TTLB        2.66us   60.08us  15.12ms   99.94%
        Req/Sec    54.50k     7.09k   72.51k    68.89%
      Latency Distribution
         50%  160.09ms
         75%  225.61ms
         90%  280.14ms
         99%  363.32ms
      2121728 requests in 10.11s, 8.66GB read
    Requests/sec: 209945.32
    Transfer/sec:      0.86GB
    

    50k port 80
    Code (Text):
    wrk-cmm -b 127.0.0.1/28 -t4 -c50000 -d10s --latency --breakout -s scripts/pipeline2.lua http://localhost:80/
    Running 10s test @ http://localhost:80/
      4 threads and 50000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   218.23ms   74.26ms 534.40ms   66.11%
        Connect   141.39ms   67.82ms 266.87ms   58.20%
        TTFB      218.23ms   74.26ms 534.40ms   66.11%
        TTLB        2.75us   69.65us  26.03ms   99.97%
        Req/Sec    53.88k     7.88k   87.18k    69.54%
      Latency Distribution
         50%  213.70ms
         75%  269.47ms
         90%  316.19ms
         99%  401.18ms
      2077696 requests in 10.11s, 8.48GB read
    Requests/sec: 205486.11
    Transfer/sec:    858.53MB
    
     
  6. eva2000

    eva2000 Administrator Staff Member

    Seeing as 5k concurrent connections was hitting 0.99GB/s or 9.9Gbps on the 10Gbps network that Vultr provides, the next wrk tests use gzip compressed HTTP loads to reduce the bandwidth transfer sizes. Tested at 12k, 30k, 40k and 60k concurrent Nginx connections, and it seems only the 12k load produced acceptable 99th percentile latency response times of ~405.29ms, while the 60k concurrent connection test's 99th percentile latency jumped to 1.62s. So this Vultr Bare Metal Intel Xeon E3-1270v6 Kabylake 32GB 2x240GB SSD server with 10Gbps network connectivity, running the Centmin Mod Nginx web server, managed to handle around 5,000 concurrent connections (non-gzip, limited by the 10Gbps network) or around 12,000 concurrent connections (gzip compressed) while keeping latency response times under the acceptable <500ms mark against the Centmin Mod Nginx default index page.
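
    The gzip compressed tests exercise Nginx's gzip module. A representative config is along these lines (a sketch, not necessarily Centmin Mod's exact defaults, though a later post notes the compression level here is 5):
    Code (Text):
    gzip on;
    gzip_comp_level 5;        # level Nginx is noted to use later in this thread
    gzip_min_length 256;
    gzip_vary on;             # emits the Vary: Accept-Encoding header seen in the curl output
    gzip_types text/plain text/css text/xml application/json application/javascript;
    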

    At 60,000 concurrent Nginx connections, the gzip compressed HTTP load and longer 330 second test duration reduced bandwidth transfers to 113.09 MB/s or 1.13Gbps, and Centmin Mod Nginx handled the 60,000 concurrent connections nicely with 99th percentile latency at 1.62s and 711 socket timeout errors
    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c60000 -d330s --latency --breakout -s scripts/pipeline2.lua http://localhost/
    Running 6m test @ http://localhost/
      4 threads and 60000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency     1.01s   190.01ms   2.00s    80.95%
        Connect   189.86ms   92.21ms 365.36ms   57.99%
        TTFB        1.01s   190.01ms   2.00s    80.95%
        TTLB        1.56us    7.41us   9.41ms   99.96%
        Req/Sec    14.85k     2.06k   29.89k    72.48%
      Latency Distribution
         50%  935.86ms
         75%    1.09s
         90%    1.30s
         99%    1.62s
      19485331 requests in 5.50m, 36.46GB read
      Socket errors: connect 0, read 0, write 0, timeout 711
    Requests/sec:  59027.07
    Transfer/sec:    113.09MB
    

    At 40,000 concurrent Nginx connections, the gzip compressed HTTP load and longer 330 second test duration reduced bandwidth transfers to 113.11 MB/s or 1.13Gbps, and Centmin Mod Nginx handled the 40,000 concurrent connections nicely with 99th percentile latency at 1.13s
    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c40000 -d330s --latency --breakout -s scripts/pipeline2.lua http://localhost/
    Running 6m test @ http://localhost/
      4 threads and 40000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   675.70ms  141.29ms   1.65s    81.95%
        Connect   133.10ms   64.76ms 247.40ms   57.69%
        TTFB      675.69ms  141.29ms   1.65s    81.95%
        TTLB        1.54us    0.67us   0.93ms   99.19%
        Req/Sec    14.84k     1.01k   18.94k    71.08%
      Latency Distribution
         50%  618.57ms
         75%  720.53ms
         90%  894.53ms
         99%    1.13s
      19487557 requests in 5.50m, 36.46GB read
    Requests/sec:  59038.26
    Transfer/sec:    113.11MB
    

    At 30,000 concurrent Nginx connections, the gzip compressed HTTP load and longer 330 second test duration reduced bandwidth transfers to 113.78 MB/s or 1.13Gbps, and Centmin Mod Nginx handled the 30,000 concurrent connections nicely with 99th percentile latency at 901.42ms
    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c30000 -d330s --latency --breakout -s scripts/pipeline2.lua http://localhost/
    Running 6m test @ http://localhost/
      4 threads and 30000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   504.22ms  114.24ms   1.33s    84.14%
        Connect    91.70ms   43.76ms 171.02ms   57.54%
        TTFB      504.22ms  114.24ms   1.33s    84.14%
        TTLB        1.55us    0.95us   3.01ms   99.24%
        Req/Sec    14.93k     1.07k   19.20k    70.00%
      Latency Distribution
         50%  460.18ms
         75%  531.98ms
         90%  673.24ms
         99%  901.42ms
      19601295 requests in 5.50m, 36.67GB read
    Requests/sec:  59383.90
    Transfer/sec:    113.78MB
    

    At 12,000 concurrent Nginx connections, the gzip compressed HTTP load and longer 330 second test duration reduced bandwidth transfers to 114.80 MB/s or 1.14Gbps, and Centmin Mod Nginx handled the 12,000 concurrent connections nicely with 99th percentile latency at 405.29ms
    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c12000 -d330s --latency --breakout -s scripts/pipeline2.lua http://localhost/
    Running 6m test @ http://localhost/
      4 threads and 12000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   200.29ms   55.47ms   1.71s    85.95%
        Connect    32.50ms   15.60ms  62.92ms   57.91%
        TTFB      200.29ms   55.47ms   1.71s    85.95%
        TTLB        1.63us    2.43us   1.22ms   99.64%
        Req/Sec    15.06k     0.98k   20.49k    69.94%
      Latency Distribution
         50%  182.66ms
         75%  196.54ms
         90%  284.29ms
         99%  405.29ms
      19779313 requests in 5.50m, 37.01GB read
    Requests/sec:  59919.89
    Transfer/sec:    114.80MB
    


    wrk-nginx-status-040218-gzip-330s-12k.png wrk-nginx-status-040218-gzip-330s-30k_2.png wrk-nginx-status-040218-gzip-330s-40k_2.png wrk-nginx-status-040218-gzip-330s-60k_2.png
     
  7. eva2000

    eva2000 Administrator Staff Member


    Centmin Mod Nginx Proxy vs Proxy Cache



    Next up, I spun up a 2nd Vultr Bare Metal Xeon E3-1270v6 in the same NJ region, installed Centmin Mod 123.09beta01 Nginx on it, and set it up as a proxy backend, with the existing Vultr Bare Metal Centmin Mod Nginx server proxying to it as the upstream. I'm testing both proxying with and without proxy_cache (?nocache requests bypass the cache). Proxy caching is only set to a 10 second limit, so the 60 second tests below will only show proxy_cache at work some of the time. I am testing HTTP gzip compressed loads at Nginx user connection concurrency levels of 5k, 10k and 30k, which is within the range of acceptable latency response times at the 99th percentile.
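
    The proxy vhost and cache settings themselves aren't posted; a minimal sketch consistent with the behaviour described (10 second cache validity, ?nocache bypassing the cache, the proxy_backend upstream shown further down) could look like the following. Zone names and paths are made up for illustration:
    Code (Text):
    # illustrative only, not the actual Centmin Mod vhost config used in these tests
    map $args $bypass_cache {
        default    0;
        ~*nocache  1;                              # ?nocache requests skip and bypass the cache
    }
    
    proxy_cache_path /home/nginx/proxy_cache levels=1:2 keys_zone=proxycache:64m inactive=60s max_size=1g;
    
    server {
        listen 8080;
        location / {
            proxy_pass         http://proxy_backend;   # upstream defined further down
            proxy_cache        proxycache;
            proxy_cache_valid  200 10s;                # the 10 second cache limit mentioned above
            proxy_cache_bypass $bypass_cache;
            proxy_no_cache     $bypass_cache;
        }
    }
    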

    I enabled Centmin Mod Nginx vhost statistics for some insight into what's going on, by setting the following variables in the persistent config file /etc/centminmod/custom_config.inc:
    Code (Text):
    NGXDYNAMIC_VHOSTSTATS='y'
    NGINX_VHOSTSTATS='y'
    

    Additional persistent config file /etc/centminmod/custom_config.inc settings switch Nginx compiles to GCC 7.2.1 with OpenSSL 1.1.0g:
    Code (Text):
    LIBRESSL_SWITCH='n'
    CLANG='n'
    DEVTOOLSETSEVEN='y'
    NGINX_DEVTOOLSETGCC='y'
    CLOUDFLARE_ZLIB='y'
    

    Then recompile Nginx via centmin.sh menu option 4 to add the nginx-module-vts module (GitHub - vozlt/nginx-module-vts: Nginx virtual host traffic status module) as an Nginx dynamic module.
    The auto-setup vhost include file for the main hostname vhost at /usr/local/nginx/conf/vts_mainserver.conf needs to be enabled and YOURIPADDRESS whitelisted so you can view stats at mainhostname/vhost_status
    Code (Text):
    location /vhost_status {
        allow 127.0.0.1;
        allow YOURIPADDRESS;
        deny all;
        vhost_traffic_status on;
        vhost_traffic_status_display;
        vhost_traffic_status_display_format html;
    }
    
    location = /vhost_status.html {
        allow 127.0.0.1;
        allow YOURIPADDRESS;
        deny all;
    }
    

    Then, in between test runs, to reset the vhost stats I visited the following in a web browser:
    Code (Text):
    mainhostname/vhost_status/control?cmd=reset&group=*
    
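    The same reset can be scripted from the shell, e.g. with curl from a whitelisted IP (the hostname is a placeholder):
    Code (Text):
    curl -s 'http://mainhostname/vhost_status/control?cmd=reset&group=*'
    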


    The Nginx upstream is set up as follows, where baremetal2 is the hostname/IP of the 2nd server.
    Code (Text):
    upstream proxy_backend {
        zone upstream_dynamic 512k;
        keepalive 4096;
        least_conn;
        #hash $scheme$proxy_host$request_uri consistent;
        #server 127.0.0.1:8686 weight=10;
        #server 127.0.0.1:8687 weight=10;
        #server 127.0.0.1:8688 weight=10;
        #server 127.0.0.1:8689 weight=10;
        server baremetal2:8686 weight=40;
        server baremetal2:8687 weight=40;
        server baremetal2:8688 weight=40;
        server baremetal2:8689 weight=40;
        server baremetal2:8690 weight=40;
        server baremetal2:8691 weight=40;
        server baremetal2:8692 weight=40;
        server baremetal2:8693 weight=40;
    }
    
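    One detail worth noting for the keepalive 4096 directive above: Nginx upstream keepalive only takes effect when the proxying location speaks HTTP/1.1 to the upstream and clears the Connection header (standard Nginx behaviour). In the proxy vhost that looks like:
    Code (Text):
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    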


    5k concurrency



    5k no proxy_cache tests with ?nocache appended to url on port 8080

    wrk-nginx-status-050218-gzip-60s-vhoststats-5k-remoteonly.png

    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c5000 -d60s --latency --breakout -s scripts/pipeline2.lua http://baremetal:8080/?nocache
    Running 1m test @ http://baremetal:8080/?nocache
      4 threads and 5000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   162.46ms   26.73ms 281.98ms   67.71%
        Connect    13.63ms    6.58ms  25.21ms   57.80%
        TTFB      162.45ms   26.73ms 281.98ms   67.71%
        TTLB        2.02us    3.04us 248.00us   97.99%
        Req/Sec     7.72k   669.39    10.49k    69.71%
      Latency Distribution
         50%  161.36ms
         75%  180.06ms
         90%  197.95ms
         99%  227.98ms
      1842993 requests in 1.00m, 3.92GB read
      Socket errors: connect 0, read 0, write 0, timeout 13
    Requests/sec:  30687.83
    Transfer/sec:     66.79MB
    

    5k proxy_cache tests without ?nocache on port 8080

    wrk-nginx-status-050218-gzip-60s-vhoststats-5k-proxycached-remoteonly-1.png

    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c5000 -d60s --latency --breakout -s scripts/pipeline2.lua http://baremetal:8080/
    Running 1m test @ http://baremetal:8080/
      4 threads and 5000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   104.28ms   19.06ms 664.18ms   68.56%
        Connect    13.52ms    6.56ms  25.51ms   57.92%
        TTFB      104.28ms   19.06ms 664.18ms   68.56%
        TTLB        1.73us    2.45us 288.00us   99.60%
        Req/Sec    12.04k   588.88    14.42k    72.88%
      Latency Distribution
         50%  104.12ms
         75%  117.06ms
         90%  128.45ms
         99%  147.54ms
      2875185 requests in 1.00m, 6.10GB read
    Requests/sec:  47881.81
    Transfer/sec:    104.07MB
    


    10k concurrency



    10k no proxy_cache tests with ?nocache appended to url on port 8080

    wrk-nginx-status-050218-gzip-60s-vhoststats-10k-remoteonly.png

    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c10000 -d60s --latency --breakout -s scripts/pipeline2.lua http://baremetal:8080/?nocache
    Running 1m test @ http://baremetal:8080/?nocache
      4 threads and 10000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   328.04ms   42.89ms 478.30ms   68.16%
        Connect    27.98ms   13.17ms  51.98ms   58.46%
        TTFB      328.03ms   42.89ms 478.29ms   68.16%
        TTLB        1.96us    2.21us 211.00us   97.93%
        Req/Sec     7.63k     1.32k   12.64k    69.29%
      Latency Distribution
         50%  326.14ms
         75%  357.59ms
         90%  384.08ms
         99%  430.02ms
      1823650 requests in 1.00m, 3.88GB read
    Requests/sec:  30350.41
    Transfer/sec:     66.05MB
    


    10k proxy_cache tests without ?nocache on port 8080

    wrk-nginx-status-050218-gzip-60s-vhoststats-10k-proxycached-remoteonly-1.png

    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c10000 -d60s --latency --breakout -s scripts/pipeline2.lua http://baremetal:8080/
    Running 1m test @ http://baremetal:8080/
      4 threads and 10000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   210.28ms   31.14ms   1.98s    70.90%
        Connect    29.79ms   14.34ms  58.38ms   58.90%
        TTFB      210.28ms   31.14ms   1.98s    70.90%
        TTLB        1.60us    0.75us 226.00us   99.28%
        Req/Sec    11.92k     1.42k   23.20k    76.21%
      Latency Distribution
         50%  210.49ms
         75%  230.61ms
         90%  247.49ms
         99%  274.89ms
      2847759 requests in 1.00m, 6.04GB read
      Socket errors: connect 0, read 0, write 0, timeout 106
    Requests/sec:  47388.91
    Transfer/sec:    103.00MB
    


    30k concurrency



    30k no proxy_cache tests with ?nocache appended to url on port 8080

    wrk-nginx-status-050218-gzip-60s-vhoststats-30k-remoteonly.png

    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c30000 -d60s --latency --breakout -s scripts/pipeline2.lua http://baremetal:8080/?nocache
    Running 1m test @ http://baremetal:8080/?nocache
      4 threads and 30000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency     1.02s   101.70ms   1.30s    78.93%
        Connect    85.09ms   40.22ms 161.53ms   58.14%
        TTFB        1.02s   101.70ms   1.30s    78.93%
        TTLB        2.09us    2.97us 544.00us   97.66%
        Req/Sec     7.34k     2.40k   16.41k    67.36%
      Latency Distribution
         50%    1.02s
         75%    1.08s
         90%    1.13s
         99%    1.21s
      1746834 requests in 1.00m, 3.71GB read
    Requests/sec:  29067.23
    Transfer/sec:     63.26MB
    


    30k proxy_cache tests without ?nocache on port 8080

    wrk-nginx-status-050218-gzip-60s-vhoststats-30k-proxycached-remoteonly-1.png

    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c30000 -d60s --latency --breakout -s scripts/pipeline2.lua http://baremetal:8080/
    Running 1m test @ http://baremetal:8080/
      4 threads and 30000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   627.68ms   66.79ms   1.98s    75.42%
        Connect    92.06ms   44.36ms 171.72ms   57.55%
        TTFB      627.68ms   66.79ms   1.98s    75.42%
        TTLB        1.59us    0.65us  96.00us   99.25%
        Req/Sec    11.93k     1.05k   16.11k    72.86%
      Latency Distribution
         50%  631.78ms
         75%  670.53ms
         90%  701.46ms
         99%  745.66ms
      2844712 requests in 1.00m, 6.04GB read
      Socket errors: connect 0, read 0, write 0, timeout 1
    Requests/sec:  47333.24
    Transfer/sec:    102.90MB
    


    Centmin Mod Nginx Proxy Cache At 60-80k Users



    Pushing just proxy_cache tests for 60,000 and 80,000 concurrent users.

    60k proxy_cache tests without ?nocache appended url on port 8080

    wrk-nginx-status-050218-gzip-60s-vhoststats-60k-proxycached-remoteonly-1.png

    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c60000 -d60s --latency --breakout -s scripts/pipeline2.lua http://baremetal:8080/ | grep -v 'unable to'
    Running 1m test @ http://baremetal:8080/
      4 threads and 60000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency     1.26s   132.62ms   1.78s    90.43%
        Connect   189.80ms   92.50ms 375.29ms   57.95%
        TTFB        1.26s   132.62ms   1.78s    90.43%
        TTLB        1.66us    0.77us 118.00us   98.00%
        Req/Sec    11.73k     1.66k   22.07k    73.75%
      Latency Distribution
         50%    1.28s
         75%    1.32s
         90%    1.37s
         99%    1.44s
      2785007 requests in 1.00m, 5.91GB read
      Socket errors: connect 0, read 0, write 0, timeout 379
    Requests/sec:  46316.29
    Transfer/sec:    100.70MB
    

    80k proxy_cache tests without ?nocache appended url on port 8080

    wrk-nginx-status-050218-gzip-60s-vhoststats-80k-proxycached-remoteonly-1.png

    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c80000 -d60s --latency --breakout -s scripts/pipeline2.lua http://baremetal:8080/ | grep -v 'unable to'
    Running 1m test @ http://baremetal:8080/
      4 threads and 80000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency     1.64s   192.69ms   1.98s    91.79%
        Connect   250.18ms  122.22ms 497.70ms   58.51%
        TTFB        1.64s   192.69ms   1.98s    91.79%
        TTLB        1.66us    0.97us 417.00us   97.75%
        Req/Sec    11.82k     2.42k   23.65k    76.32%
      Latency Distribution
         50%    1.67s
         75%    1.73s
         90%    1.79s
         99%    1.86s
      2798866 requests in 1.00m, 5.95GB read
      Socket errors: connect 0, read 0, write 0, timeout 283
    Requests/sec:  46568.81
    Transfer/sec:    101.31MB
    
     
  8. eva2000

    eva2000 Administrator Staff Member

    Spun up a 3rd Vultr Bare Metal E3-1270v6, so the 1st is the proxy server and the 2nd and 3rd are backend servers.

    The aim is to see what Nginx concurrency level can be achieved with 99th percentile latency response times of less than 500ms. It seems around 5k to 10k concurrent Nginx connections is the limit where latency response times stay acceptable.
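
    The upstream block isn't reposted for the three-server layout; presumably it simply gains baremetal3 entries alongside the baremetal2 ones, along these lines (a guess mirroring the earlier upstream config, not the actual file):
    Code (Text):
    upstream proxy_backend {
        zone upstream_dynamic 512k;
        keepalive 4096;
        least_conn;
        server baremetal2:8686 weight=40;
        server baremetal2:8687 weight=40;
        # ... remaining baremetal2 vhost ports 8688-8693 ...
        server baremetal3:8686 weight=40;
        server baremetal3:8687 weight=40;
        # ... remaining baremetal3 vhost ports 8688-8693 ...
    }
    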

    While playing with Ansible as well, I just noticed that 2 of the 3 Vultr Bare Metal servers have more memory, 64GB instead of 32GB?

    Code (Text):
    ansible -a 'free -m' all         
    baremetal2 | SUCCESS | rc=0 >>
                  total        used        free      shared  buff/cache   available
    Mem:          64216        4988       57534          48        1693       58626
    Swap:          1023           0        1023
    
    baremetal3 | SUCCESS | rc=0 >>
                  total        used        free      shared  buff/cache   available
    Mem:          64216        4984       58431          38         800       58662
    Swap:          1023           0        1023
    
    baremetal | SUCCESS | rc=0 >>
                  total        used        free      shared  buff/cache   available
    Mem:          31962        4661       19357          68        7943       26782
    Swap:          1023           0        1023

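    For context, the ad-hoc Ansible commands above assume an inventory along these lines (hostnames as used in the thread; the file layout itself is illustrative):
    Code (Text):
    # /etc/ansible/hosts
    [baremetals]
    baremetal
    baremetal2
    baremetal3
    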

    5k no proxy_cache tests with ?nocache appended to url on port 8080

    wrk-nginx-status-050218-gzip-60s-vhoststats-5k-remoteonlyx3-1.png

    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c5000 -d60s --latency --breakout -s scripts/pipeline2.lua http://baremetal:8080/?nocache
    Running 1m test @ http://baremetal:8080/?nocache
      4 threads and 5000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   159.15ms   26.96ms 501.99ms   67.63%
        Connect    13.51ms    6.60ms  27.65ms   58.58%
        TTFB      159.14ms   26.96ms 501.98ms   67.63%
        TTLB        2.02us    3.33us 394.00us   98.23%
        Req/Sec     7.87k   749.57    10.40k    68.54%
      Latency Distribution
         50%  157.95ms
         75%  176.99ms
         90%  194.57ms
         99%  226.25ms
      1881489 requests in 1.00m, 4.02GB read
      Socket errors: connect 0, read 0, write 0, timeout 19
    Requests/sec:  31319.75
    Transfer/sec:     68.55MB
    

    5k proxy_cache tests without ?nocache on port 8080

    wrk-nginx-status-050218-gzip-60s-vhoststats-5k-proxycached-remoteonly-x2-2-1.png
    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c5000 -d60s --latency --breakout -s scripts/pipeline2.lua http://baremetal:8080/
    Running 1m test @ http://baremetal:8080/
      4 threads and 5000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency    96.22ms   18.01ms 635.58ms   67.82%
        Connect    13.66ms    6.53ms  25.44ms   58.10%
        TTFB       96.22ms   18.01ms 635.58ms   67.82%
        TTLB        1.70us    2.52us 293.00us   99.63%
        Req/Sec    13.05k   734.82    15.74k    76.21%
      Latency Distribution
         50%   96.31ms
         75%  108.54ms
         90%  118.82ms
         99%  135.51ms
      3116246 requests in 1.00m, 6.65GB read
    Requests/sec:  51897.59
    Transfer/sec:    113.44MB
    


    10k no proxy_cache tests with ?nocache appended to url on port 8080

    wrk-nginx-status-050218-gzip-60s-vhoststats-10k-remoteonlyx3-1.png
    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c10000 -d60s --latency --breakout -s scripts/pipeline2.lua http://baremetal:8080/?nocache
    Running 1m test @ http://baremetal:8080/?nocache
      4 threads and 10000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   316.53ms   42.08ms 483.72ms   69.65%
        Connect    27.69ms   13.24ms  51.51ms   57.90%
        TTFB      316.52ms   42.08ms 483.71ms   69.65%
        TTLB        1.94us    2.29us 212.00us   98.03%
        Req/Sec     7.91k     1.13k   11.68k    70.46%
      Latency Distribution
         50%  315.30ms
         75%  343.32ms
         90%  370.49ms
         99%  421.79ms
      1888132 requests in 1.00m, 4.04GB read
      Socket errors: connect 0, read 0, write 0, timeout 21
    Requests/sec:  31419.96
    Transfer/sec:     68.77MB
    


    10k proxy_cache tests without ?nocache on port 8080

    wrk-nginx-status-050218-gzip-60s-vhoststats-10k-proxycached-remoteonly-x2-2-1.png
    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c10000 -d60s --latency --breakout -s scripts/pipeline2.lua http://baremetal:8080/
    Running 1m test @ http://baremetal:8080/
      4 threads and 10000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   189.51ms   27.96ms 645.58ms   68.43%
        Connect    30.34ms   14.80ms  57.20ms   58.34%
        TTFB      189.50ms   27.96ms 645.57ms   68.43%
        TTLB        1.59us    1.45us   1.91ms   99.80%
        Req/Sec    13.23k     1.06k   16.63k    71.92%
      Latency Distribution
         50%  189.88ms
         75%  208.61ms
         90%  224.89ms
         99%  251.03ms
      3161010 requests in 1.00m, 6.75GB read
    Requests/sec:  52605.00
    Transfer/sec:    114.99MB
    
     
  9. eva2000

    eva2000 Administrator Staff Member


    HTTP/1.1 HTTPS Proxy Load Tests



    Next up: 3x Vultr Bare Metal Intel Xeon E3-1270v6 servers in a Centmin Mod 123.09beta01 Nginx proxy + 2x Centmin Mod Nginx backend server configuration, testing HTTP/1.1 based HTTPS loads with the wrk-cmm tool. The acceptable <500ms latency range again peaks at around 5,000 concurrent Nginx connections, though the request rate is around 20-33% lower than non-HTTPS due to HTTPS overhead.

    proxy cache header check
    Code (Text):
    curl --http1.1 -I https://baremetal.domain.com/
    HTTP/1.1 200 OK
    Date: Mon, 05 Feb 2018 16:34:44 GMT
    Content-Type: text/html; charset=utf-8
    Content-Length: 4074
    Connection: keep-alive
    Vary: Accept-Encoding
    Last-Modified: Sun, 04 Feb 2018 20:06:59 GMT
    Vary: Accept-Encoding
    ETag: "5a7767e3-fea"
    Expires: Wed, 07 Mar 2018 16:34:21 GMT
    Cache-Control: max-age=2592000
    Access-Control-Allow-Origin: *
    Cache-Control: public, must-revalidate, proxy-revalidate, immutable, stale-while-revalidate=86400, stale-if-error=604800
    Server: nginx centminmod
    X-Powered-By: centminmod
    X-Cache-Status: HIT
    Accept-Ranges: bytes
    

    no proxy cache header check
    Code (Text):
    curl --http1.1 -I https://baremetal.domain.com/?nocache
    HTTP/1.1 200 OK
    Date: Mon, 05 Feb 2018 16:34:46 GMT
    Content-Type: text/html; charset=utf-8
    Content-Length: 4074
    Connection: keep-alive
    Vary: Accept-Encoding
    Last-Modified: Sun, 04 Feb 2018 12:30:13 GMT
    Vary: Accept-Encoding
    ETag: "5a76fcd5-fea"
    Expires: Wed, 07 Mar 2018 16:34:46 GMT
    Cache-Control: max-age=2592000
    Access-Control-Allow-Origin: *
    Cache-Control: public, must-revalidate, proxy-revalidate, immutable, stale-while-revalidate=86400, stale-if-error=604800
    Server: nginx centminmod
    X-Powered-By: centminmod
    X-Cache-Status: BYPASS
    Accept-Ranges: bytes
    
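    For reference, the X-Cache-Status: HIT/BYPASS header seen in these responses is typically emitted by the proxy vhost using Nginx's standard $upstream_cache_status variable; the exact placement in the Centmin Mod config is an assumption:
    Code (Text):
    add_header X-Cache-Status $upstream_cache_status;
    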


    wrk-cmm load tests at 1k, 2k, 5k, 10k and 20k concurrent users

    1k concurrent HTTP/1.1 HTTPS - no proxy cache

    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c1000 -d10s --latency --breakout -s scripts/pipeline2.lua https://baremetal.domain.com/?nocache | grep -v 'unable to'
    Running 10s test @ https://baremetal.domain.com/?nocache
      4 threads and 1000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency    33.66ms   10.13ms  81.52ms   65.90%
        Connect    37.74ms   85.96ms 280.08ms    0.00%
        TTFB       33.63ms   10.12ms  81.50ms   65.90%
        TTLB       30.19us   80.27us   7.18ms   97.39%
        Req/Sec     7.36k   767.89     9.04k    82.65%
      Latency Distribution
         50%   31.47ms
         75%   40.63ms
         90%   48.24ms
         99%   59.92ms
      289093 requests in 10.07s, 632.74MB read
    Requests/sec:  28702.64
    Transfer/sec:     62.82MB
    


    1k concurrent HTTP/1.1 HTTPS - proxy cache

    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c1000 -d10s --latency --breakout -s scripts/pipeline2.lua https://baremetal.domain.com/ | grep -v 'unable to'
    Running 10s test @ https://baremetal.domain.com/
      4 threads and 1000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency    21.25ms    8.94ms 533.57ms   84.03%
        Connect    26.64ms   73.95ms 279.74ms    0.00%
        TTFB       21.22ms    8.94ms 533.55ms   84.03%
        TTLB       26.87us   75.18us  12.69ms   97.02%
        Req/Sec    11.76k     1.26k   13.14k    83.16%
      Latency Distribution
         50%   19.96ms
         75%   25.82ms
         90%   30.77ms
         99%   39.69ms
      462233 requests in 10.07s, 0.99GB read
    Requests/sec:  45894.80
    Transfer/sec:    100.32MB
    



    2k concurrent HTTP/1.1 HTTPS - no proxy cache

    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c2000 -d10s --latency --breakout -s scripts/pipeline2.lua https://baremetal.domain.com/?nocache | grep -v 'unable to'
    Running 10s test @ https://baremetal.domain.com/?nocache
      4 threads and 2000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency    66.73ms   17.40ms 148.29ms   66.31%
        Connect    76.64ms  169.70ms 543.38ms    0.00%
        TTFB       66.69ms   17.42ms 148.27ms   66.36%
        TTLB       38.23us  171.18us   5.45ms   98.26%
        Req/Sec     7.41k     0.96k    9.47k    84.03%
      Latency Distribution
         50%   63.87ms
         75%   78.10ms
         90%   91.01ms
         99%  113.97ms
      284037 requests in 10.09s, 621.67MB read
    Requests/sec:  28149.47
    Transfer/sec:     61.61MB
    


    2k concurrent HTTP/1.1 HTTPS - proxy cache

    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c2000 -d10s --latency --breakout -s scripts/pipeline2.lua https://baremetal.domain.com/ | grep -v 'unable to'
    Running 10s test @ https://baremetal.domain.com/
      4 threads and 2000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency    44.56ms   12.07ms  90.62ms   65.14%
        Connect    50.15ms  145.36ms 533.71ms    0.00%
        TTFB       44.52ms   12.07ms  90.61ms   65.16%
        TTLB       40.55us  138.03us   9.01ms   96.57%
        Req/Sec    11.14k     1.28k   13.39k    86.58%
      Latency Distribution
         50%   43.46ms
         75%   52.97ms
         90%   61.28ms
         99%   74.17ms
      424710 requests in 10.03s, 0.91GB read
    Requests/sec:  42332.74
    Transfer/sec:     92.53MB
    


    5k concurrent HTTP/1.1 HTTPS - no proxy cache

    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c5000 -d10s --latency --breakout -s scripts/pipeline2.lua https://baremetal.domain.com/?nocache | grep -v 'unable to'
    Running 10s test @ https://baremetal.domain.com/?nocache
      4 threads and 5000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   167.05ms   31.39ms 280.84ms   68.52%
        Connect   212.66ms  434.31ms   1.37s     0.00%
        TTFB      166.97ms   31.50ms 280.67ms   68.70%
        TTLB       79.43us  491.71us  10.97ms   98.56%
        Req/Sec     7.35k     1.29k   10.32k    77.08%
      Latency Distribution
         50%  163.89ms
         75%  187.18ms
         90%  210.01ms
         99%  244.28ms
      258485 requests in 10.08s, 565.75MB read
    Requests/sec:  25645.41
    Transfer/sec:     56.13MB
    


    5k concurrent HTTP/1.1 HTTPS - proxy cache

    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c5000 -d10s --latency --breakout -s scripts/pipeline2.lua https://baremetal.domain.com/ | grep -v 'unable to'
    Running 10s test @ https://baremetal.domain.com/
      4 threads and 5000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   106.84ms   22.45ms 178.79ms   67.85%
        Connect   443.27ms  423.62ms   1.34s     0.57%
        TTFB      106.79ms   22.49ms 178.78ms   67.95%
        TTLB       51.93us  291.78us   6.61ms   98.14%
        Req/Sec    11.54k     1.59k   13.74k    86.04%
      Latency Distribution
         50%  106.90ms
         75%  121.60ms
         90%  136.32ms
         99%  160.19ms
      406378 requests in 10.07s, 0.87GB read
    Requests/sec:  40368.57
    Transfer/sec:     88.24MB
    


    10k concurrent HTTP/1.1 HTTPS - no proxy cache

    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c10000 -d10s --latency --breakout -s scripts/pipeline2.lua https://baremetal.domain.com/?nocache | grep -v 'unable to'
    Running 10s test @ https://baremetal.domain.com/?nocache
      4 threads and 10000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   350.36ms   60.41ms 523.53ms   72.33%
        Connect   622.84ms  788.14ms   1.90s     0.00%
        TTFB      349.99ms   61.17ms 523.51ms   72.76%
        TTLB      366.18us    1.85ms  18.65ms   96.96%
        Req/Sec     6.85k     2.55k   17.23k    71.97%
      Latency Distribution
         50%  354.29ms
         75%  389.02ms
         90%  422.43ms
         99%  471.84ms
      204598 requests in 10.07s, 447.80MB read
    Requests/sec:  20325.53
    Transfer/sec:     44.49MB
    


    10k concurrent HTTP/1.1 HTTPS - proxy cache

    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c10000 -d10s --latency --breakout -s scripts/pipeline2.lua https://baremetal.domain.com/ | grep -v 'unable to'
    Running 10s test @ https://baremetal.domain.com/
      4 threads and 10000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   205.20ms   35.80ms 308.70ms   75.50%
        Connect     0.00us    0.00us   0.00us    -nan%
        TTFB      204.76ms   36.37ms 308.67ms   76.16%
        TTLB      444.44us    2.30ms  59.86ms   96.69%
        Req/Sec    11.84k     3.53k   22.38k    72.60%
      Latency Distribution
         50%  207.23ms
         75%  228.58ms
         90%  245.45ms
         99%  277.27ms
      355552 requests in 10.10s, 777.18MB read
    Requests/sec:  35208.41
    Transfer/sec:     76.96MB
    


    20k concurrent HTTP/1.1 HTTPS - no proxy cache

    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c20000 -d10s --latency --breakout -s scripts/pipeline2.lua https://baremetal.domain.com/?nocache | grep -v 'unable to'
    Running 10s test @ https://baremetal.domain.com/?nocache
      4 threads and 20000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   612.85ms  145.08ms 913.16ms   73.77%
        Connect     0.00us    0.00us   0.00us    -nan%
        TTFB      606.41ms  153.17ms   1.77s    73.81%
        TTLB        6.47ms   17.36ms 513.72ms   88.13%
        Req/Sec     7.21k     2.80k   15.71k    76.40%
      Latency Distribution
         50%  649.93ms
         75%  705.80ms
         90%  760.42ms
         99%  857.30ms
      142530 requests in 10.10s, 311.96MB read
      Socket errors: connect 0, read 0, write 0, timeout 2
    Requests/sec:  14111.05
    Transfer/sec:     30.88MB
    


    20k concurrent HTTP/1.1 HTTPS - proxy cache

    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c20000 -d10s --latency --breakout -s scripts/pipeline2.lua https://baremetal.domain.com/ | grep -v 'unable to'
    Running 10s test @ https://baremetal.domain.com/
      4 threads and 20000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   420.80ms  107.05ms 616.08ms   77.78%
        Connect     0.00us    0.00us   0.00us    -nan%
        TTFB      418.48ms  109.84ms   1.81s    77.68%
        TTLB        2.33ms    7.67ms 353.92ms   91.91%
        Req/Sec    10.92k     4.66k   34.83k    73.37%
      Latency Distribution
         50%  447.08ms
         75%  490.30ms
         90%  524.21ms
         99%  587.48ms
      218054 requests in 10.08s, 476.63MB read
      Socket errors: connect 0, read 0, write 0, timeout 1
    Requests/sec:  21628.13
    Transfer/sec:     47.28MB
    
     
  10. eva2000

    eva2000 Administrator Staff Member

    While playing with Ansible as well, I just noticed that 2 of the 3 Vultr Bare Metal servers have more memory, 64GB instead of 32GB, and it looks like the 64GB servers are not the Xeon E3-1270v6 but the older previous-generation Xeon E3-1270v5!

    Ansible memory and CPU outputs for the 3x Vultr Bare Metal servers named baremetal, baremetal2 and baremetal3:

    Code (Text):
    ansible -a 'free -m' all       
    baremetal2 | SUCCESS | rc=0 >>
                  total        used        free      shared  buff/cache   available
    Mem:          64216        4988       57534          48        1693       58626
    Swap:          1023           0        1023
    
    baremetal3 | SUCCESS | rc=0 >>
                  total        used        free      shared  buff/cache   available
    Mem:          64216        4984       58431          38         800       58662
    Swap:          1023           0        1023
    
    baremetal | SUCCESS | rc=0 >>
                  total        used        free      shared  buff/cache   available
    Mem:          31962        4661       19357          68        7943       26782
    Swap:          1023           0        1023

    Code (Text):
    ansible -a 'lscpu' all
    baremetal3 | SUCCESS | rc=0 >>
    Architecture:          x86_64
    CPU op-mode(s):        32-bit, 64-bit
    Byte Order:            Little Endian
    CPU(s):                8
    On-line CPU(s) list:   0-7
    Thread(s) per core:    2
    Core(s) per socket:    4
    Socket(s):             1
    NUMA node(s):          1
    Vendor ID:             GenuineIntel
    CPU family:            6
    Model:                 94
    Model name:            Intel(R) Xeon(R) CPU E3-1270 v5 @ 3.60GHz
    Stepping:              3
    CPU MHz:               3699.703
    CPU max MHz:           4000.0000
    CPU min MHz:           800.0000
    BogoMIPS:              7200.00
    Virtualization:        VT-x
    L1d cache:             32K
    L1i cache:             32K
    L2 cache:              256K
    L3 cache:              8192K
    NUMA node0 CPU(s):     0-7
    Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb invpcid_single intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp
    

    Code (Text):
    baremetal2 | SUCCESS | rc=0 >>
    Architecture:          x86_64
    CPU op-mode(s):        32-bit, 64-bit
    Byte Order:            Little Endian
    CPU(s):                8
    On-line CPU(s) list:   0-7
    Thread(s) per core:    2
    Core(s) per socket:    4
    Socket(s):             1
    NUMA node(s):          1
    Vendor ID:             GenuineIntel
    CPU family:            6
    Model:                 94
    Model name:            Intel(R) Xeon(R) CPU E3-1270 v5 @ 3.60GHz
    Stepping:              3
    CPU MHz:               3699.843
    CPU max MHz:           4000.0000
    CPU min MHz:           800.0000
    BogoMIPS:              7200.00
    Virtualization:        VT-x
    L1d cache:             32K
    L1i cache:             32K
    L2 cache:              256K
    L3 cache:              8192K
    NUMA node0 CPU(s):     0-7
    Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb invpcid_single intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp
    

    Code (Text):
    baremetal | SUCCESS | rc=0 >>
    Architecture:          x86_64
    CPU op-mode(s):        32-bit, 64-bit
    Byte Order:            Little Endian
    CPU(s):                8
    On-line CPU(s) list:   0-7
    Thread(s) per core:    2
    Core(s) per socket:    4
    Socket(s):             1
    NUMA node(s):          1
    Vendor ID:             GenuineIntel
    CPU family:            6
    Model:                 158
    Model name:            Intel(R) Xeon(R) CPU E3-1270 v6 @ 3.80GHz
    Stepping:              9
    CPU MHz:               3999.945
    CPU max MHz:           4200.0000
    CPU min MHz:           800.0000
    BogoMIPS:              7584.00
    Virtualization:        VT-x
    L1d cache:             32K
    L1i cache:             32K
    L2 cache:              256K
    L3 cache:              8192K
    NUMA node0 CPU(s):     0-7
    Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb invpcid_single intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp
    


    Xeon E3-1270v6 vs E3-1270v5 Intel® Product Specification Comparison

    upload_2018-2-6_12-32-44.png
     
  11. eva2000

    eva2000 Administrator Staff Member


    Haproxy Loadbalancer + Centmin Mod Nginx Backends



    Next up is installing Haproxy 1.8.3 for load balancing to the same Centmin Mod Nginx backends - 8x backend vhosts on each of the 2x Vultr Bare Metal servers for 16 total. Centmin Mod has long term plans to integrate a Haproxy load balancer, so this is just part of the development testing :)

    Haproxy 1.8.3 was installed with OpenSSL 1.1.0g + PCRE JIT + multithreading support and the custom Cloudflare zlib performance fork for better HTTP compressed request performance [benchmarks], built with the native GCC 4.8.5 compiler and the Intel optimised march=native flag. Used 4 Haproxy threads.
    Code (Text):
    haproxy -vv
    HA-Proxy version 1.8.3-205f675 2017/12/30
    Copyright 2000-2017 Willy Tarreau <willy@haproxy.org>
    
    Build options :
      TARGET  = linux2628
      CPU     = native
      CC      = gcc
      CFLAGS  = -march=native -m64 -march=x86-64 -O2 -g
      OPTIONS = USE_LINUX_SPLICE=1 USE_LINUX_TPROXY=1 USE_ZLIB=1 USE_REGPARM=1 USE_THREAD=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1 USE_PCRE_JIT=1
    
    Default settings :
      maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
    
    Built with OpenSSL version : OpenSSL 1.1.0g  2 Nov 2017
    Running on OpenSSL version : OpenSSL 1.1.0g  2 Nov 2017
    OpenSSL library supports TLS extensions : yes
    OpenSSL library supports SNI : yes
    OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
    Built with Lua version : Lua 5.3.4
    Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
    Encrypted password support via crypt(3): yes
    Built with multi-threading support.
    Built with PCRE version : 8.41 2017-07-05
    Running on PCRE version : 8.41 2017-07-05
    PCRE library supports JIT : yes
    Built with zlib version : 1.2.8
    Running on zlib version : 1.2.8
    Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
    Built with network namespace support.
    
    Available polling systems :
          epoll : pref=300,  test result OK
           poll : pref=200,  test result OK
         select : pref=150,  test result OK
    Total: 3 (3 usable), will use epoll.
    
    Available filters :
            [SPOE] spoe
            [COMP] compression
            [TRACE] trace
    

    Cloudflare zlib performance fork = 1.2.8
    Code (Text):
    Built with zlib version : 1.2.8
    Running on zlib version : 1.2.8
    

    Built with new Haproxy multithreading support
    Code (Text):
    Built with multi-threading support.
    


    Haproxy 1.8.3 is set up to listen on port 90 for these tests with the wrk-cmm load tester. As expected, latency is a bit slower than using Centmin Mod Nginx as an upstream proxy based load balancer for the non-proxy_cache tests, but the request rate is more uniform with Haproxy. It looks like Haproxy's gzip compression level defaults to level 1, which would explain why Haproxy's requests/s were higher than Centmin Mod Nginx in proxy and backend modes, as Nginx defaults to gzip compression level 5.
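
    The haproxy.cfg used isn't posted. Below is a minimal sketch consistent with what's described (port 90 frontend, 4 threads, gzip compression, leastconn balancing across the 16 backend vhosts, and the SERVERID cookie seen in the curl output further down); server names and ports are assumptions mirroring the earlier Nginx upstream:
    Code (Text):
    global
        nbthread 4                                   # the 4 Haproxy threads mentioned above
        maxconn 100000
    
    defaults
        mode http
        timeout connect 5s
        timeout client  60s
        timeout server  60s
    
    frontend ft_web
        bind :90
        default_backend bk_centminmod
    
    backend bk_centminmod
        balance leastconn
        cookie SERVERID insert indirect nocache      # matches the Set-Cookie: SERVERID header below
        compression algo gzip
        compression type text/html text/plain text/css application/javascript
        server l_server1 baremetal2:8686 check cookie l_server1
        server l_server2 baremetal2:8687 check cookie l_server2
        # ... remaining baremetal2/baremetal3 vhost ports for 16 servers total ...
    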

    Haproxy vs Centmin Mod Nginx Proxy
    • At the 5k mark, 99th percentile latency was much higher on Haproxy at 0.305s vs Centmin Mod Nginx at 0.226s. Requests/s: Haproxy 57142 vs Centmin Mod Nginx 31319. Centmin Mod Nginx proxy_cache tests were around 51897 requests/s and 0.135s latency.
    • At the 10k mark, 99th percentile latency was much higher on Haproxy at 1.01s vs Centmin Mod Nginx at 0.421s. Requests/s: Haproxy 55019 vs Centmin Mod Nginx 31419. Centmin Mod Nginx proxy_cache tests were around 52605 requests/s and 0.251s latency.
    • At the 20k mark, 99th percentile latency: Haproxy at 1.21s vs Centmin Mod Nginx at 0.857s. Requests/s: Haproxy 53131 vs Centmin Mod Nginx 31319.
    • At the 30k mark, 99th percentile latency: Haproxy at 1.31s vs Centmin Mod Nginx at 1.21s. Requests/s: Haproxy 52648 vs Centmin Mod Nginx 29423. Centmin Mod Nginx proxy_cache tests were around 46797 requests/s and 0.758s latency.
    Centmin Mod Nginx as a proxy_cache based load balancer of course had better latency. Below are tests at wrk-cmm 1k, 2k, 5k, 10k, 20k and 30k concurrent user connections.

    curl header check
    Code (Text):
    curl -I http://baremetal.domain.com:90/
    HTTP/1.1 200 OK
    Date: Tue, 06 Feb 2018 15:10:04 GMT
    Content-Type: text/html; charset=utf-8
    Content-Length: 4074
    Last-Modified: Thu, 01 Feb 2018 22:50:21 GMT
    Vary: Accept-Encoding
    ETag: "5a7399ad-fea"
    Server: nginx centminmod
    X-Powered-By: centminmod
    Expires: Thu, 08 Mar 2018 15:10:04 GMT
    Cache-Control: max-age=2592000
    Access-Control-Allow-Origin: *
    Cache-Control: public, must-revalidate, proxy-revalidate, immutable, stale-while-revalidate=86400, stale-if-error=604800
    Accept-Ranges: bytes
    Set-Cookie: SERVERID=l_server3; path=/
    


    1k concurrent HTTP/1.1 HTTP - haproxy 1.8.3 + openssl 1.1.0g + Pcre Jit + cloudflare zlib performance fork library

    haproxy-183-admin-stats-wrk-cmm-http-gziptests-1k-02.png

    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c1000 -d10s --latency --breakout -s scripts/pipeline2.lua http://baremetal.domain.com:90/ | grep -v 'unable to'
    Running 10s test @ http://baremetal.domain.com:90/
      4 threads and 1000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency    15.85ms    6.31ms  89.86ms   62.46%
        Connect     3.74ms    2.33ms   9.42ms   64.20%
        TTFB       15.85ms    6.31ms  89.85ms   62.46%
        TTLB        1.78us    1.21us 301.00us   95.48%
        Req/Sec    15.82k   471.00    17.03k    74.25%
      Latency Distribution
         50%   16.32ms
         75%   19.87ms
         90%   23.48ms
         99%   31.35ms
      629696 requests in 10.02s, 1.33GB read
    Requests/sec:  62867.90
    Transfer/sec:    135.77MB
    

    2k concurrent HTTP/1.1 HTTP - haproxy 1.8.3 + openssl 1.1.0g + Pcre Jit + cloudflare zlib performance fork library

    haproxy-183-admin-stats-wrk-cmm-http-gziptests-2k-02.png

    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c2000 -d10s --latency --breakout -s scripts/pipeline2.lua http://baremetal.domain.com:90/ | grep -v 'unable to'
    Running 10s test @ http://baremetal.domain.com:90/
      4 threads and 2000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency    33.95ms   16.00ms 209.96ms   68.65%
        Connect     6.09ms    2.96ms  11.17ms   57.40%
        TTFB       33.94ms   16.00ms 209.94ms   68.65%
        TTLB        1.89us    1.81us 160.00us   97.50%
        Req/Sec    14.92k   676.40    17.49k    70.50%
      Latency Distribution
         50%   32.96ms
         75%   42.61ms
         90%   53.99ms
         99%   82.40ms
      593764 requests in 10.02s, 1.25GB read
    Requests/sec:  59284.65
    Transfer/sec:    128.03MB
    

    5k concurrent HTTP/1.1 HTTP - haproxy 1.8.3 + openssl 1.1.0g + Pcre Jit + cloudflare zlib performance fork library

    haproxy-183-admin-stats-wrk-cmm-http-gziptests-5k-02.png

    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c5000 -d10s --latency --breakout -s scripts/pipeline2.lua http://baremetal.domain.com:90/ | grep -v 'unable to'
    Running 10s test @ http://baremetal.domain.com:90/
      4 threads and 5000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency    91.70ms   60.43ms 763.62ms   76.99%
        Connect    15.49ms    7.46ms  29.73ms   57.92%
        TTFB       91.69ms   60.43ms 763.60ms   76.99%
        TTLB        2.03us    2.73us 359.00us   97.83%
        Req/Sec    14.41k   831.25    16.75k    75.00%
      Latency Distribution
         50%   77.76ms
         75%  117.22ms
         90%  168.36ms
         99%  305.31ms
      573528 requests in 10.04s, 1.21GB read
    Requests/sec:  57142.74
    Transfer/sec:    123.41MB
    

    10k concurrent HTTP/1.1 HTTP - haproxy 1.8.3 + openssl 1.1.0g + Pcre Jit + cloudflare zlib performance fork library

    haproxy-183-admin-stats-wrk-cmm-http-gziptests-10k-02.png

    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c10000 -d10s --latency --breakout -s scripts/pipeline2.lua http://baremetal.domain.com:90/ | grep -v 'unable to'
    Running 10s test @ http://baremetal.domain.com:90/
      4 threads and 10000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   202.60ms  179.14ms   1.82s    89.87%
        Connect    30.93ms   14.82ms  56.27ms   57.54%
        TTFB      202.60ms  179.14ms   1.82s    89.87%
        TTLB        1.95us    1.74us 429.00us   97.08%
        Req/Sec    13.92k     0.95k   16.86k    72.00%
      Latency Distribution
         50%  153.72ms
         75%  246.27ms
         90%  376.40ms
         99%    1.01s
      554008 requests in 10.07s, 1.17GB read
    Requests/sec:  55019.74
    Transfer/sec:    118.82MB
    

    20k concurrent HTTP/1.1 HTTP - haproxy 1.8.3 + openssl 1.1.0g + Pcre Jit + cloudflare zlib performance fork library

    Socket timeout errors started to show up: 7572

    haproxy-183-admin-stats-wrk-cmm-http-gziptests-20k-02.png

    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c20000 -d10s --latency --breakout -s scripts/pipeline2.lua http://baremetal.domain.com:90/ | grep -v 'unable to'
    Running 10s test @ http://baremetal.domain.com:90/
      4 threads and 20000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   305.04ms  243.31ms   2.00s    82.38%
        Connect    62.32ms   29.96ms 115.69ms   57.72%
        TTFB      305.04ms  243.30ms   2.00s    82.38%
        TTLB        2.03us    9.16us   4.55ms   99.59%
        Req/Sec    13.53k     1.13k   17.74k    70.20%
      Latency Distribution
         50%  230.91ms
         75%  400.81ms
         90%  614.67ms
         99%    1.21s
      533007 requests in 10.03s, 1.12GB read
      Socket errors: connect 0, read 0, write 0, timeout 7572
    Requests/sec:  53131.24
    Transfer/sec:    114.74MB
    

    30k concurrent HTTP/1.1 HTTP - haproxy 1.8.3 + openssl 1.1.0g + Pcre Jit + cloudflare zlib performance fork library

    Socket timeout errors started to show up: 6694

    haproxy-183-admin-stats-wrk-cmm-http-gziptests-30k-02.png

    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c30000 -d10s --latency --breakout -s scripts/pipeline2.lua http://baremetal.domain.com:90/ | grep -v 'unable to'
    Running 10s test @ http://baremetal.domain.com:90/
      4 threads and 30000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   351.94ms  262.48ms   2.00s    83.22%
        Connect    95.35ms   45.32ms 173.60ms   57.68%
        TTFB      351.93ms  262.48ms   2.00s    83.22%
        TTLB        2.03us    3.58us 686.00us   98.04%
        Req/Sec    13.54k     1.08k   15.62k    70.56%
      Latency Distribution
         50%  274.33ms
         75%  451.12ms
         90%  696.42ms
         99%    1.31s
      531629 requests in 10.10s, 1.12GB read
      Socket errors: connect 0, read 0, write 0, timeout 6694
    Requests/sec:  52648.67
    Transfer/sec:    113.70MB
    


    Haproxy Multithreaded Thread Tests



    Raising Haproxy 1.8.3 threads in multithreaded mode from 4 to 6 improved performance again. Strangely, the higher gzip compression level of 5 vs the default of 1 didn't change throughput much in terms of requests/s, though latency was better at the lower default compression level 1.
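    A hedged sketch of the two global haproxy.cfg knobs being compared here - nbthread for the 4 to 6 thread bump and tune.comp.maxlevel for the gzip level 1 vs 5 comparison (only these lines shown, the rest of the config omitted):
    Code (Text):
    global
        # Haproxy 1.8 native multithreading: raise worker threads from 4 to 6
        nbthread 6
        # maximum gzip compression level; Haproxy defaults to 1, raised to 5 for comparison
        tune.comp.maxlevel 5
    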

    Testing at 30k with gzip level 1 default
    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c30000 -d10s --latency --breakout -s scripts/pipeline2.lua http://baremetal.domain.com:90/ | grep -v 'unable to'
    Running 10s test @ http://baremetal.domain.com:90/
      4 threads and 30000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   406.19ms  354.50ms   2.00s    82.47%
        Connect   101.37ms   44.81ms 205.49ms   64.39%
        TTFB      406.18ms  354.50ms   2.00s    82.47%
        TTLB        2.41us   28.09us  16.85ms   99.91%
        Req/Sec    18.67k     2.27k   38.28k    83.85%
      Latency Distribution
         50%  292.19ms
         75%  574.16ms
         90%  901.79ms
         99%    1.58s
      725639 requests in 10.04s, 1.53GB read
      Socket errors: connect 0, read 4, write 0, timeout 7006
    Requests/sec:  72259.04
    Transfer/sec:    156.06MB
    

    Testing at 30k with gzip level 5
    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c30000 -d10s --latency --breakout -s scripts/pipeline2.lua http://baremetal.domain.com:90/ | grep -v 'unable to'
    Running 10s test @ http://baremetal.domain.com:90/
      4 threads and 30000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   407.12ms  363.36ms   2.00s    83.80%
        Connect    87.11ms   44.16ms 184.25ms   59.30%
        TTFB      407.11ms  363.36ms   2.00s    83.80%
        TTLB        2.31us   17.46us  11.26ms   99.78%
        Req/Sec    18.68k     2.39k   27.49k    86.63%
      Latency Distribution
         50%  291.72ms
         75%  573.52ms
         90%  912.39ms
         99%    1.63s
      733714 requests in 10.08s, 1.55GB read
      Socket errors: connect 0, read 0, write 0, timeout 8329
    Requests/sec:  72767.99
    Transfer/sec:    157.16MB
    


    Haproxy has a lot more load balancing options to tune, so with some extra tuning I managed to pull off 200,000 concurrent connections :)
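    A hedged sketch of the connection-limit related settings that typically need raising for a 200k concurrent connection test - the exact values here are illustrative assumptions, not the precise tuning used:
    Code (Text):
    # /etc/haproxy/haproxy.cfg
    global
        # global cap on concurrent connections; ulimit-n is derived from this
        maxconn 400000
    
    frontend ft_http
        bind :90
        # per-frontend cap, kept above the 200k wrk-cmm target
        maxconn 350000
    
    # plus kernel side headroom for sockets and ephemeral ports
    sysctl -w net.core.somaxconn=65535
    sysctl -w net.ipv4.ip_local_port_range="1024 65535"
    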

    200k
    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c200000 -d10s --latency --breakout -s scripts/pipeline2.lua http://baremetal.domain.com:90/ | grep -v 'unable to'
    Running 10s test @ http://baremetal.domain.com:90/
      4 threads and 200000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   111.20ms  106.22ms   1.38s    86.94%
        Connect   597.92ms  209.98ms 818.98ms   78.42%
        TTFB      111.20ms  106.22ms   1.38s    86.94%
        TTLB        2.54us   32.19us  14.15ms   99.79%
        Req/Sec    17.74k     3.79k   27.29k    57.34%
      Latency Distribution
         50%   77.31ms
         75%  148.38ms
         90%  246.56ms
         99%  514.09ms
      657202 requests in 10.13s, 1.39GB read
      Socket errors: connect 0, read 1, write 0, timeout 0
    Requests/sec:  64868.73
    Transfer/sec:    140.10MB
    
     
  12. eva2000

    eva2000 Administrator Staff Member

    53,152
    12,110
    113
    May 24, 2014
    Brisbane, Australia
    Ratings:
    +18,645
    Local Time:
    11:54 AM
    Nginx 1.27.x
    MariaDB 10.x/11.4+

    Haproxy HTTP/2 and HTTP/1.1 HTTPS Testing on port 444



    Next up is Haproxy HTTP/2 and HTTP/1.1 HTTPS Testing on port 444
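    A minimal hedged sketch of the port 444 TLS listener - ALPN on the bind line is what lets the same port negotiate HTTP/2 or HTTP/1.1 as seen in the curl checks below; the certificate path is a placeholder assumption:
    Code (Text):
    frontend ft_https
        # single PEM bundle (certificate + key); alpn advertises h2 first, then http/1.1
        bind :444 ssl crt /etc/haproxy/ssl/baremetal.domain.com.pem alpn h2,http/1.1
        compression algo gzip
        compression type text/html text/plain text/css application/javascript
        default_backend bk_centminmod
    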

    header checks for HTTP/2 and HTTP/1.1
    Code (Text):
    curl -I https://baremetal.domain.com:444/
    HTTP/2 200
    date: Wed, 07 Feb 2018 16:16:54 GMT
    content-type: text/html; charset=utf-8
    content-length: 4074
    last-modified: Sun, 04 Feb 2018 12:30:13 GMT
    vary: Accept-Encoding
    etag: "5a76fcd5-fea"
    server: nginx centminmod
    x-powered-by: centminmod
    expires: Fri, 09 Mar 2018 16:16:54 GMT
    cache-control: max-age=2592000
    access-control-allow-origin: *
    cache-control: public, must-revalidate, proxy-revalidate, immutable, stale-while-revalidate=86400, stale-if-error=604800
    accept-ranges: bytes
    set-cookie: SERVERID=1_server6; path=/
    

    Code (Text):
    curl --http1.1 -I https://baremetal.domain.com:444/
    HTTP/1.1 200 OK
    Date: Wed, 07 Feb 2018 16:17:37 GMT
    Content-Type: text/html; charset=utf-8
    Content-Length: 4074
    Last-Modified: Sun, 04 Feb 2018 12:30:13 GMT
    Vary: Accept-Encoding
    ETag: "5a76fcd5-fea"
    Server: nginx centminmod
    X-Powered-By: centminmod
    Expires: Fri, 09 Mar 2018 16:17:37 GMT
    Cache-Control: max-age=2592000
    Access-Control-Allow-Origin: *
    Cache-Control: public, must-revalidate, proxy-revalidate, immutable, stale-while-revalidate=86400, stale-if-error=604800
    Accept-Ranges: bytes
    Set-Cookie: SERVERID=1_server5; path=/
    

    https-haproxy-183-admin-stats-01.png
    https-haproxy-183-admin-stats-02.png

    With a re-tuned and revisited Haproxy 1.8.3 config, I'll retest both non-HTTPS and HTTP/1.1 HTTPS with 1k, 2k, 5k, 10k, 20k, and 30k wrk-cmm load tests. The wrk-cmm load testing tool only supports HTTP/1.1 HTTPS, so it can't test HTTP/2 HTTPS; I'd need to switch to the h2load HTTP/2 load testing tool for that. Socket errors in the wrk-cmm tests start showing up at the 20k and 30k mark for HTTP/1.1 HTTPS tests, while no errors show up for the non-HTTPS load tests. As you can see, HTTPS has overhead and thus results in lower requests/s throughput and higher latencies for the 1k and 2k tests, but for 5k and above it results in lower latencies, probably because the socket errors mean those requests are never fulfilled.
    • port 90 = non-HTTPS
    • port 444 = HTTP/1.1 based HTTPS

    1K Concurrent



    1k concurrent HTTP/1.1 HTTPS - haproxy 1.8.3 + openssl 1.1.0g + luajit + cloudflare zlib performance fork library
    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c1000 -d10s --latency --breakout -s scripts/pipeline2.lua https://baremetal.domain.com:444/ | grep -v 'unable to'
    Running 10s test @ https://baremetal.domain.com:444/
      4 threads and 1000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency    16.73ms    8.15ms  80.67ms   71.71%
        Connect    58.78ms  108.79ms 380.46ms    0.00%
        TTFB       16.73ms    8.15ms  80.65ms   71.71%
        TTLB        7.07us   49.04us   8.95ms   99.80%
        Req/Sec    14.78k     1.73k   20.23k    89.95%
      Latency Distribution
         50%   16.34ms
         75%   21.66ms
         90%   27.06ms
         99%   39.17ms
      576357 requests in 10.09s, 1.22GB read
    Requests/sec:  57093.55
    Transfer/sec:    123.31MB
    


    1k concurrent HTTP/1.1 HTTP - haproxy 1.8.3 + openssl 1.1.0g + luajit + cloudflare zlib performance fork library
    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c1000 -d10s --latency --breakout -s scripts/pipeline2.lua http://baremetal.domain.com:90/ | grep -v 'unable to'
    Running 10s test @ http://baremetal.domain.com:90/
      4 threads and 1000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency    12.78ms    7.26ms  71.80ms   82.96%
        Connect     3.96ms    2.06ms   7.94ms   61.90%
        TTFB       12.77ms    7.26ms  71.79ms   82.96%
        TTLB        1.87us    4.01us   2.00ms   97.52%
        Req/Sec    20.29k     1.38k   25.92k    81.25%
      Latency Distribution
         50%    9.82ms
         75%   14.29ms
         90%   24.79ms
         99%   36.66ms
      809554 requests in 10.05s, 1.71GB read
    Requests/sec:  80556.48
    Transfer/sec:    173.98MB
    


    2K Concurrent



    2k concurrent HTTP/1.1 HTTPS - haproxy 1.8.3 + openssl 1.1.0g + luajit + cloudflare zlib performance fork library
    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c2000 -d10s --latency --breakout -s scripts/pipeline2.lua https://baremetal.domain.com:444/ | grep -v 'unable to'
    Running 10s test @ https://baremetal.domain.com:444/
      4 threads and 2000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency    34.49ms   26.85ms 416.77ms   80.12%
        Connect   152.13ms  231.39ms   1.02s     0.00%
        TTFB       34.48ms   26.86ms 416.76ms   80.12%
        TTLB        9.19us  113.15us  15.34ms   99.82%
        Req/Sec    15.46k     1.78k   17.82k    94.15%
      Latency Distribution
         50%   27.21ms
         75%   46.82ms
         90%   69.06ms
         99%  126.67ms
      584838 requests in 10.06s, 1.23GB read
    Requests/sec:  58107.54
    Transfer/sec:    125.50MB
    


    2k concurrent HTTP/1.1 HTTP - haproxy 1.8.3 + openssl 1.1.0g + luajit + cloudflare zlib performance fork library
    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c2000 -d10s --latency --breakout -s scripts/pipeline2.lua http://baremetal.domain.com:90/ | grep -v 'unable to'
    Running 10s test @ http://baremetal.domain.com:90/
      4 threads and 2000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency    29.01ms   19.29ms 280.98ms   81.99%
        Connect     7.97ms    4.91ms  20.60ms   64.35%
        TTFB       29.01ms   19.29ms 280.97ms   81.99%
        TTLB        1.96us    4.79us   2.27ms   98.09%
        Req/Sec    18.32k     1.29k   21.83k    65.75%
      Latency Distribution
         50%   23.89ms
         75%   36.43ms
         90%   52.80ms
         99%   99.43ms
      730811 requests in 10.06s, 1.54GB read
    Requests/sec:  72660.14
    Transfer/sec:    156.93MB
    


    5K Concurrent



    5k concurrent HTTP/1.1 HTTPS - haproxy 1.8.3 + openssl 1.1.0g + luajit + cloudflare zlib performance fork library
    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c5000 -d10s --latency --breakout -s scripts/pipeline2.lua https://baremetal.domain.com:444/ | grep -v 'unable to'
    Running 10s test @ https://baremetal.domain.com:444/
      4 threads and 5000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency    92.75ms   79.21ms   1.00s    84.14%
        Connect   384.16ms  425.58ms   1.98s     5.97%
        TTFB       92.73ms   79.21ms   1.00s    84.14%
        TTLB        9.94us  362.48us 101.42ms   99.96%
        Req/Sec    14.44k     3.22k   19.45k    92.48%
      Latency Distribution
         50%   68.89ms
         75%  123.58ms
         90%  191.32ms
         99%  380.29ms
      525014 requests in 10.06s, 1.11GB read
    Requests/sec:  52168.66
    Transfer/sec:    112.67MB
    


    5k concurrent HTTP/1.1 HTTP - haproxy 1.8.3 + openssl 1.1.0g + luajit + cloudflare zlib performance fork library
    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c5000 -d10s --latency --breakout -s scripts/pipeline2.lua http://baremetal.domain.com:90/ | grep -v 'unable to'
    Running 10s test @ http://baremetal.domain.com:90/
      4 threads and 5000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency    87.17ms   89.85ms   1.11s    89.85%
        Connect    16.07ms    9.18ms  45.55ms   69.48%
        TTFB       87.16ms   89.85ms   1.11s    89.85%
        TTLB        2.09us    4.64us   1.71ms   98.02%
        Req/Sec    17.30k     1.74k   22.32k    77.19%
      Latency Distribution
         50%   62.51ms
         75%  110.88ms
         90%  178.26ms
         99%  462.36ms
      690598 requests in 10.10s, 1.46GB read
    Requests/sec:  68405.46
    Transfer/sec:    147.74MB
    


    10K Concurrent



    10k concurrent HTTP/1.1 HTTPS - haproxy 1.8.3 + openssl 1.1.0g + luajit + cloudflare zlib performance fork library
    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c10000 -d10s --latency --breakout -s scripts/pipeline2.lua https://baremetal.domain.com:444/ | grep -v 'unable to'
    Running 10s test @ https://baremetal.domain.com:444/
      4 threads and 10000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency    89.60ms   50.35ms 895.39ms   78.59%
        Connect   249.50ms  617.57ms   2.00s     0.00%
        TTFB       89.54ms   50.31ms 894.64ms   78.61%
        TTLB       59.07us    1.27ms  48.90ms   99.78%
        Req/Sec    13.93k     2.70k   25.50k    79.86%
      Latency Distribution
         50%   76.64ms
         75%  109.48ms
         90%  152.87ms
         99%  265.68ms
      443468 requests in 10.08s, 0.94GB read
    Requests/sec:  44011.61
    Transfer/sec:     95.06MB
    


    10k concurrent HTTP/1.1 HTTP - haproxy 1.8.3 + openssl 1.1.0g + luajit + cloudflare zlib performance fork library
    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c10000 -d10s --latency --breakout -s scripts/pipeline2.lua http://baremetal.domain.com:90/ | grep -v 'unable to'
    Running 10s test @ http://baremetal.domain.com:90/
      4 threads and 10000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   102.12ms  136.26ms   1.92s    92.44%
        Connect   259.75ms  332.08ms   1.01s    80.82%
        TTFB      102.12ms  136.26ms   1.92s    92.44%
        TTLB        2.19us    8.77us   5.47ms   99.22%
        Req/Sec    17.12k     2.96k   23.15k    60.00%
      Latency Distribution
         50%   62.88ms
         75%  122.79ms
         90%  208.78ms
         99%  781.86ms
      679586 requests in 10.08s, 1.43GB read
    Requests/sec:  67443.23
    Transfer/sec:    145.66MB
    


    20K Concurrent



    20k concurrent HTTP/1.1 HTTPS - haproxy 1.8.3 + openssl 1.1.0g + luajit + cloudflare zlib performance fork library
    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c20000 -d10s --latency --breakout -s scripts/pipeline2.lua https://baremetal.domain.com:444/ | grep -v 'unable to'
    Running 10s test @ https://baremetal.domain.com:444/
      4 threads and 20000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   119.97ms   85.97ms   1.30s    80.93%
        Connect   657.12ms  758.88ms   2.00s     0.00%
        TTFB      119.85ms   86.02ms   1.30s    80.84%
        TTLB      110.08us    1.60ms  63.88ms   99.46%
        Req/Sec    10.72k     2.02k   16.03k    76.24%
      Latency Distribution
         50%   97.53ms
         75%  151.99ms
         90%  226.28ms
         99%  433.03ms
      343116 requests in 10.08s, 741.06MB read
      Socket errors: connect 76, read 0, write 0, timeout 0
    Requests/sec:  34033.97
    Transfer/sec:     73.51MB
    


    20k concurrent HTTP/1.1 HTTP - haproxy 1.8.3 + openssl 1.1.0g + luajit + cloudflare zlib performance fork library
    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c20000 -d10s --latency --breakout -s scripts/pipeline2.lua http://baremetal.domain.com:90/ | grep -v 'unable to'
    Running 10s test @ http://baremetal.domain.com:90/
      4 threads and 20000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency    96.53ms  145.22ms   1.89s    94.14%
        Connect   337.97ms  357.93ms   1.02s    71.40%
        TTFB       96.53ms  145.22ms   1.89s    94.14%
        TTLB        2.21us   11.82us   7.37ms   99.57%
        Req/Sec    17.10k     3.18k   24.97k    62.53%
      Latency Distribution
         50%   59.73ms
         75%  110.72ms
         90%  189.18ms
         99%  841.75ms
      679031 requests in 10.09s, 1.43GB read
    Requests/sec:  67278.93
    Transfer/sec:    145.31MB
    


    30K Concurrent



    30k concurrent HTTP/1.1 HTTPS - haproxy 1.8.3 + openssl 1.1.0g + luajit + cloudflare zlib performance fork library
    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c30000 -d10s --latency --breakout -s scripts/pipeline2.lua https://baremetal.domain.com:444/ | grep -v 'unable to'
    Running 10s test @ https://baremetal.domain.com:444/
      4 threads and 30000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency    92.72ms   45.48ms 671.23ms   72.82%
        Connect     0.00us    0.00us   0.00us    -nan%
        TTFB       92.40ms   45.07ms 671.20ms   72.44%
        TTLB      314.02us    3.56ms  98.81ms   99.17%
        Req/Sec    11.93k     3.23k   20.54k    66.80%
      Latency Distribution
         50%   87.63ms
         75%  115.25ms
         90%  148.25ms
         99%  250.15ms
      362486 requests in 10.10s, 782.89MB read
      Socket errors: connect 7398, read 0, write 0, timeout 0
    Requests/sec:  35888.71
    Transfer/sec:     77.51MB
    


    30k concurrent HTTP/1.1 HTTP - haproxy 1.8.3 + openssl 1.1.0g + luajit + cloudflare zlib performance fork library
    Code (Text):
    wrk-cmm -H 'Accept-Encoding: gzip' -b 127.0.0.1/28 -t4 -c30000 -d10s --latency --breakout -s scripts/pipeline2.lua http://baremetal.domain.com:90/ | grep -v 'unable to'
    Running 10s test @ http://baremetal.domain.com:90/
      4 threads and 30000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency    87.22ms   82.56ms   1.16s    87.59%
        Connect   476.82ms  392.52ms   1.04s    51.08%
        TTFB       87.22ms   82.56ms   1.16s    87.59%
        TTLB        2.29us   40.81us  29.08ms   99.87%
        Req/Sec    17.05k     5.00k   25.23k    55.84%
      Latency Distribution
         50%   62.03ms
         75%  114.64ms
         90%  186.77ms
         99%  391.28ms
      673850 requests in 10.10s, 1.42GB read
    Requests/sec:  66723.64
    Transfer/sec:    144.11MB
    
     
  13. eva2000

    eva2000 Administrator Staff Member

    53,152
    12,110
    113
    May 24, 2014
    Brisbane, Australia
    Ratings:
    +18,645
    Local Time:
    11:54 AM
    Nginx 1.27.x
    MariaDB 10.x/11.4+

    Haproxy HTTP/2 HTTPS vs Centmin Mod Nginx HTTP/2 HTTPS



    Now let's test HTTP/2 HTTPS load using the h2load HTTP/2 load tester for both Haproxy 1.8.3 and Centmin Mod Nginx 1.13.8; both are compiled against OpenSSL 1.1.0g with the Cloudflare zlib performance library. I use the h2load HTTP/2 load tester when I want to test HTTP/2 HTTPS, like when I tested against Caddy HTTP/2 HTTPS. Haproxy 1.8 is the first branch with native HTTP/2 HTTPS support, so there are still bugs and improvements to be made with regards to HTTP/2 HTTPS. This seems to show up in the h2load HTTP/2 load testing below as well, in terms of widely fluctuating results for Haproxy HTTP/2 HTTPS and smaller header compression savings via HPACK compression. Haproxy HTTP/2 HTTPS might not use as high a compression level by default, which may also explain the better throughput in terms of requests/s as there's less overhead.

    h2load version
    Code (Text):
    h2load --version
    h2load nghttp2/1.30.0-DEV
    


    Test Configurations.
    • port 444 = with Haproxy 1.8.3 HTTP/2 HTTPS with 2x Vultr Baremetal Xeon E3-1270v5 64GB 2x 240GB SSD Raid 1 backends
    • port 443 without ?nocache = no proxy_cache with Centmin Mod Nginx 1.13.8 HTTP/2 HTTPS with 2x Vultr Baremetal Xeon E3-1270v5 64GB 2x 240GB SSD Raid 1 backends
    • port 443 with ?nocache = proxy_cache with Centmin Mod Nginx 1.13.8 HTTP/2 HTTPS with 2x Vultr Baremetal Xeon E3-1270v5 64GB 2x 240GB SSD Raid 1 backends
    Haproxy HTTP/2 HTTPS
    Code (Text):
    curl -I https://baremetal.domain.com:444/
    HTTP/2 200
    date: Wed, 07 Feb 2018 19:06:41 GMT
    content-type: text/html; charset=utf-8
    content-length: 4074
    last-modified: Sun, 04 Feb 2018 12:30:13 GMT
    vary: Accept-Encoding
    etag: "5a76fcd5-fea"
    server: nginx centminmod
    x-powered-by: centminmod
    expires: Fri, 09 Mar 2018 19:06:41 GMT
    cache-control: max-age=2592000
    access-control-allow-origin: *
    cache-control: public, must-revalidate, proxy-revalidate, immutable, stale-while-revalidate=86400, stale-if-error=604800
    accept-ranges: bytes
    set-cookie: SERVERID=1_server5; path=/
    

    Centmin Mod Nginx HTTP/2 HTTPS with proxy_cache
    Code (Text):
    curl -I https://baremetal.domain.com:443/
    HTTP/2 200
    date: Wed, 07 Feb 2018 19:07:23 GMT
    content-type: text/html; charset=utf-8
    content-length: 4074
    vary: Accept-Encoding
    last-modified: Sun, 04 Feb 2018 12:30:13 GMT
    vary: Accept-Encoding
    etag: "5a76fcd5-fea"
    expires: Fri, 09 Mar 2018 19:06:54 GMT
    cache-control: max-age=2592000
    access-control-allow-origin: *
    cache-control: public, must-revalidate, proxy-revalidate, immutable, stale-while-revalidate=86400, stale-if-error=604800
    server: nginx centminmod
    x-powered-by: centminmod
    x-cache-status: HIT
    accept-ranges: bytes
    

    Centmin Mod Nginx HTTP/2 HTTPS without proxy_cache
    Code (Text):
    curl -I https://baremetal.domain.com:443/?nocache
    HTTP/2 200
    date: Wed, 07 Feb 2018 19:07:36 GMT
    content-type: text/html; charset=utf-8
    content-length: 4074
    vary: Accept-Encoding
    last-modified: Sun, 04 Feb 2018 12:30:13 GMT
    vary: Accept-Encoding
    etag: "5a76fcd5-fea"
    expires: Fri, 09 Mar 2018 19:07:36 GMT
    cache-control: max-age=2592000
    access-control-allow-origin: *
    cache-control: public, must-revalidate, proxy-revalidate, immutable, stale-while-revalidate=86400, stale-if-error=604800
    server: nginx centminmod
    x-powered-by: centminmod
    x-cache-status: BYPASS
    accept-ranges: bytes
    


    Haproxy HTTP/2 HTTPS



    Light h2load test with 2 threads, 2 concurrent users and 10 requests
    Code (Text):
    /usr/local/bin/h2load -t2 -c2 -m100 -n10 -H 'Accept-Encoding: gzip' https://baremetal.domain.com:444/
    starting benchmark...
    spawning thread #0: 1 total client(s). 5 total requests
    spawning thread #1: 1 total client(s). 5 total requests
    TLS Protocol: TLSv1.2
    Cipher: ECDHE-RSA-AES128-GCM-SHA256
    Server Temp Key: ECDH P-256 256 bits
    Application protocol: h2
    progress: 20% done
    progress: 40% done
    progress: 60% done
    progress: 80% done
    progress: 100% done
    
    finished in 2.77ms, 3607.50 req/s, 7.43MB/s
    requests: 10 total, 10 started, 10 done, 10 succeeded, 0 failed, 0 errored, 0 timeout
    status codes: 10 2xx, 0 3xx, 0 4xx, 0 5xx
    traffic: 21.10KB (21607) total, 4.19KB (4287) headers (space savings 8.92%), 16.53KB (16930) data
                         min         max         mean         sd        +/- sd
    time for request:      524us       921us       653us       131us    80.00%
    time for connect:     1.60ms      1.84ms      1.72ms       168us   100.00%
    time to 1st byte:     2.20ms      2.29ms      2.24ms        61us   100.00%
    req/s           :    1947.82     1963.71     1955.77       11.24   100.00%
    

    Heavy h2load test with 7x runs of 2 threads, 500 concurrent users and 50k requests

    Seems to have a huge swing in performance between the 7 runs! Haproxy was configured with its native inbuilt caching as well, but it was only configured for the non-HTTPS tests, so it shouldn't show up for the HTTPS tests? Yet it seems maybe only 2 out of the 7 runs were served from cache. A Haproxy bug? FYI, caching TTL = 10s.
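    For reference, a hedged sketch of how the Haproxy 1.8 small-object cache with a 10s TTL is typically wired up - names and sizes here are placeholder assumptions, and in the actual test config it was only attached to the non-HTTPS path, which is why it shouldn't apply here:
    Code (Text):
    cache smallcache
        total-max-size 64   # total shared cache size in MB
        max-age 10          # matches the 10s caching TTL mentioned above
    
    backend bk_centminmod
        http-request cache-use smallcache
        http-response cache-store smallcache
        server l_server1 192.168.0.11:80 check
        # ... remaining backend servers
    
    The h2load stress test loop used: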
    Code (Text):
    echo "with cache - Haproxy 1.8.3 HTTP/2 h2load stress test"; sleep 30; for i in {1..7}; do /usr/local/bin/h2load -t2 -c500 -m100 -n50000 -H 'Accept-Encoding: gzip' https://baremetal.domain.com:444/ > h2load.$i.nginx.log; cat h2load.$i.nginx.log | awk '/finished in/ {print $4 " req/s "} /requests: / {print ($8/$2*100)"% completed"} /status codes: / {print " ",$0}  /traffic: / {print " ",$0}' | tr -d '\n'; echo; sleep 30; done;
    

    Code (Text):
    echo "with cache - Haproxy 1.8.3 HTTP/2 h2load stress test"; sleep 30; for i in {1..7}; do /usr/local/bin/h2load -t2 -c500 -m100 -n50000 -H 'Accept-Encoding: gzip' https://baremetal.domain.com:444/ > h2load.$i.nginx.log; cat h2load.$i.nginx.log | awk '/finished in/ {print $4 " req/s "} /requests: / {print ($8/$2*100)"% completed"} /status codes: / {print " ",$0}  /traffic: / {print " ",$0}' | tr -d '\n'; echo; sleep 30; done;
    with cache - Haproxy 1.8.3 HTTP/2 h2load stress test
    4192.64 req/s 100% completed  status codes: 50000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 102.51MB (107492141) total, 20.44MB (21434340) headers (space savings 8.92%), 80.73MB (84650000) data
    43055.35 req/s 100% completed  status codes: 50000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 102.51MB (107492162) total, 20.44MB (21434379) headers (space savings 8.92%), 80.73MB (84650000) data
    43342.96 req/s 100% completed  status codes: 50000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 102.51MB (107492323) total, 20.44MB (21434369) headers (space savings 8.92%), 80.73MB (84650000) data
    4225.61 req/s 100% completed  status codes: 50000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 102.51MB (107492273) total, 20.44MB (21434355) headers (space savings 8.92%), 80.73MB (84650000) data
    4462.79 req/s 100% completed  status codes: 50000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 102.51MB (107492048) total, 20.44MB (21434355) headers (space savings 8.92%), 80.73MB (84650000) data
    4238.36 req/s 100% completed  status codes: 50000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 102.51MB (107492414) total, 20.44MB (21434388) headers (space savings 8.92%), 80.73MB (84650000) data
    4452.79 req/s 100% completed  status codes: 50000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 102.51MB (107491991) total, 20.44MB (21434379) headers (space savings 8.92%), 80.73MB (84650000) data
    


    Centmin Nginx HTTP/2 HTTPS with proxy_cache



    Nginx proxy_cache TTL = 10s
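    A hedged sketch of the kind of Nginx proxy_cache setup these results imply - the 10s TTL, the ?nocache bypass and the x-cache-status header seen in the earlier curl checks; zone names, paths, upstream members and the bypass mapping are placeholder assumptions, not the exact vhost config:
    Code (Text):
    # map the ?nocache query string to a cache-bypass flag (http{} level)
    map $query_string $cache_bypass {
        default      0;
        ~*nocache    1;
    }
    
    proxy_cache_path /tmp/nginx-proxycache levels=1:2 keys_zone=lb_cache:64m inactive=60m max_size=1g;
    
    upstream backend_pool {
        server 192.168.0.11:80;
        server 192.168.0.12:80;
        # ... remaining backend vhosts
    }
    
    server {
        listen 443 ssl http2;
        # ssl_certificate / ssl_certificate_key etc. omitted
        location / {
            proxy_pass http://backend_pool;
            proxy_cache lb_cache;
            proxy_cache_valid 200 301 302 10s;            # 10s proxy_cache TTL
            proxy_cache_bypass $cache_bypass;             # ?nocache => x-cache-status: BYPASS
            proxy_no_cache $cache_bypass;
            add_header X-Cache-Status $upstream_cache_status;
        }
    }
    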

    Light h2load test with 2 threads, 2 concurrent users and 10 requests
    Code (Text):
    /usr/local/bin/h2load -t2 -c2 -m100 -n10 -H 'Accept-Encoding: gzip' https://baremetal.domain.com:443/
    starting benchmark...
    spawning thread #0: 1 total client(s). 5 total requests
    spawning thread #1: 1 total client(s). 5 total requests
    TLS Protocol: TLSv1.2
    Cipher: ECDHE-RSA-AES256-GCM-SHA384
    Server Temp Key: ECDH P-256 256 bits
    Application protocol: h2
    progress: 20% done
    progress: 40% done
    progress: 60% done
    progress: 80% done
    progress: 100% done
    
    finished in 2.27ms, 4399.47 req/s, 8.75MB/s
    requests: 10 total, 10 started, 10 done, 10 succeeded, 0 failed, 0 errored, 0 timeout
    status codes: 10 2xx, 0 3xx, 0 4xx, 0 5xx
    traffic: 20.37KB (20860) total, 3.57KB (3652) headers (space savings 22.30%), 16.53KB (16930) data
                         min         max         mean         sd        +/- sd
    time for request:      175us       586us       374us       138us    60.00%
    time for connect:     1.46ms      1.47ms      1.47ms         7us   100.00%
    time to 1st byte:     1.66ms      1.68ms      1.67ms        14us   100.00%
    req/s           :    2361.31     2444.93     2403.12       59.12   100.00%
    

    Heavy h2load test with 7x runs of 2 threads, 500 concurrent users and 50k requests
    Code (Text):
    echo "with proxy_cache CentminMod.com Nginx 1.13.8 HTTP/2 h2load stress test"; sleep 30; for i in {1..7}; do /usr/local/bin/h2load -t2 -c500 -m100 -n50000 -H 'Accept-Encoding: gzip' https://baremetal.domain.com:443/ > h2load.$i.nginx.log; cat h2load.$i.nginx.log | awk '/finished in/ {print $4 " req/s "} /requests: / {print ($8/$2*100)"% completed"} /status codes: / {print " ",$0}  /traffic: / {print " ",$0}' | tr -d '\n'; echo; sleep 30; done;
    

    Code (Text):
    echo "with proxy_cache CentminMod.com Nginx 1.13.8 HTTP/2 h2load stress test"; sleep 30; for i in {1..7}; do /usr/local/bin/h2load -t2 -c500 -m100 -n50000 -H 'Accept-Encoding: gzip' https://baremetal.domain.com:443/ > h2load.$i.nginx.log; cat h2load.$i.nginx.log | awk '/finished in/ {print $4 " req/s "} /requests: / {print ($8/$2*100)"% completed"} /status codes: / {print " ",$0}  /traffic: / {print " ",$0}' | tr -d '\n'; echo; sleep 30; done;
    with proxy_cache CentminMod.com Nginx 1.13.8 HTTP/2 h2load stress test
    41114.53 req/s 100% completed  status codes: 50000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 99.02MB (103825021) total, 17.41MB (18250521) headers (space savings 22.34%), 80.73MB (84650000) data
    41366.07 req/s 100% completed  status codes: 50000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 99.17MB (103985124) total, 17.56MB (18410624) headers (space savings 22.32%), 80.73MB (84650000) data
    44522.57 req/s 100% completed  status codes: 50000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 99.15MB (103970551) total, 17.54MB (18396051) headers (space savings 22.32%), 80.73MB (84650000) data
    43232.74 req/s 100% completed  status codes: 50000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 99.12MB (103938667) total, 17.51MB (18364167) headers (space savings 22.32%), 80.73MB (84650000) data
    42330.59 req/s 100% completed  status codes: 50000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 99.16MB (103973060) total, 17.55MB (18398560) headers (space savings 22.32%), 80.73MB (84650000) data
    44525.14 req/s 100% completed  status codes: 50000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 99.18MB (103998511) total, 17.57MB (18424011) headers (space savings 22.32%), 80.73MB (84650000) data
    42880.43 req/s 100% completed  status codes: 50000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 99.13MB (103949287) total, 17.52MB (18374787) headers (space savings 22.32%), 80.73MB (84650000) data
    


    Centmin Nginx HTTP/2 HTTPS without proxy_cache



    Light h2load test with 2 threads, 2 concurrent users and 10 requests
    Code (Text):
    /usr/local/bin/h2load -t2 -c2 -m100 -n10 -H 'Accept-Encoding: gzip' https://baremetal.domain.com:443/?nocache
    starting benchmark...
    spawning thread #0: 1 total client(s). 5 total requests
    spawning thread #1: 1 total client(s). 5 total requests
    TLS Protocol: TLSv1.2
    Cipher: ECDHE-RSA-AES256-GCM-SHA384
    Server Temp Key: ECDH P-256 256 bits
    Application protocol: h2
    progress: 20% done
    progress: 40% done
    progress: 60% done
    progress: 80% done
    progress: 100% done
    
    finished in 2.57ms, 3885.00 req/s, 7.74MB/s
    requests: 10 total, 10 started, 10 done, 10 succeeded, 0 failed, 0 errored, 0 timeout
    status codes: 10 2xx, 0 3xx, 0 4xx, 0 5xx
    traffic: 20.40KB (20890) total, 3.60KB (3682) headers (space savings 22.16%), 16.53KB (16930) data
                         min         max         mean         sd        +/- sd
    time for request:      444us       858us       660us       147us    60.00%
    time for connect:     1.46ms      1.47ms      1.47ms         8us   100.00%
    time to 1st byte:     1.93ms      1.97ms      1.95ms        24us   100.00%
    req/s           :    2100.06     2112.52     2106.29        8.82   100.00%
    

    Heavy h2load test with 7x runs of 2 threads, 500 concurrent users and 50k requests
    Code (Text):
    echo "no proxy_cache CentminMod.com Nginx 1.13.8 HTTP/2 h2load stress test"; sleep 30; for i in {1..7}; do /usr/local/bin/h2load -t2 -c500 -m100 -n50000 -H 'Accept-Encoding: gzip' https://baremetal.domain.com:443/?nocache > h2load.$i.nginx.log; cat h2load.$i.nginx.log | awk '/finished in/ {print $4 " req/s "} /requests: / {print ($8/$2*100)"% completed"} /status codes: / {print " ",$0}  /traffic: / {print " ",$0}' | tr -d '\n'; echo; sleep 30; done;
    

    Code (Text):
    echo "no proxy_cache CentminMod.com Nginx 1.13.8 HTTP/2 h2load stress test"; sleep 30; for i in {1..7}; do /usr/local/bin/h2load -t2 -c500 -m100 -n50000 -H 'Accept-Encoding: gzip' https://baremetal.domain.com:443/?nocache > h2load.$i.nginx.log; cat h2load.$i.nginx.log | awk '/finished in/ {print $4 " req/s "} /requests: / {print ($8/$2*100)"% completed"} /status codes: / {print " ",$0}  /traffic: / {print " ",$0}' | tr -d '\n'; echo; sleep 30; done;
    no proxy_cache CentminMod.com Nginx 1.13.8 HTTP/2 h2load stress test
    
    3255.46 req/s 100% completed  status codes: 50000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 99.16MB (103975000) total, 17.55MB (18400500) headers (space savings 22.20%), 80.73MB (84650000) data
    3270.14 req/s 100% completed  status codes: 50000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 99.16MB (103975000) total, 17.55MB (18400500) headers (space savings 22.20%), 80.73MB (84650000) data
    3206.48 req/s 100% completed  status codes: 50000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 99.16MB (103975000) total, 17.55MB (18400500) headers (space savings 22.20%), 80.73MB (84650000) data
    3256.87 req/s 100% completed  status codes: 50000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 99.16MB (103975000) total, 17.55MB (18400500) headers (space savings 22.20%), 80.73MB (84650000) data
    3236.25 req/s 100% completed  status codes: 50000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 99.16MB (103975000) total, 17.55MB (18400500) headers (space savings 22.20%), 80.73MB (84650000) data
    3304.80 req/s 100% completed  status codes: 50000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 99.16MB (103975000) total, 17.55MB (18400500) headers (space savings 22.20%), 80.73MB (84650000) data
    3270.14 req/s 100% completed  status codes: 50000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 99.16MB (103975000) total, 17.55MB (18400500) headers (space savings 22.20%), 80.73MB (84650000) data
    
     
  14. eva2000

    eva2000 Administrator Staff Member

    53,152
    12,110
    113
    May 24, 2014
    Brisbane, Australia
    Ratings:
    +18,645
    Local Time:
    11:54 AM
    Nginx 1.27.x
    MariaDB 10.x/11.4+

    Part 2 - Haproxy HTTP/2 HTTPS vs Centmin Mod Nginx HTTP/2 HTTPS



    At higher levels of h2load concurrency, I'm bumping from 500 to 5,000 concurrent users, from 50k to 100k requests, and h2load from 2 to 4 threads. Again Haproxy HTTP/2 HTTPS throughput beats Nginx HTTP/2 HTTPS for the non-cached request h2load tests, though as you can see the HTTP/2 header compression ratio is lower for Haproxy HTTP/2 HTTPS at 8.92% savings vs 22%+ savings for Nginx HTTP/2, so that is probably why throughput is higher as there's less compression overhead. Nginx's total h2load data traffic is ~198 MB, of which ~35 MB is headers, while Haproxy's total is ~205 MB, of which ~40.88 MB is headers - ~14.4% larger in total header size.

    Haproxy HTTP/2 HTTPS



    Heavy h2load test with 7x runs of 4 threads, 5000 concurrent users and 100k requests

    average = 7,000.77 req/s, ~15.7% higher than the Centmin Mod Nginx non-cached HTTP/2 HTTPS tests below.
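    For reference, a hedged one-liner to compute that average from the per-run h2load.$i.nginx.log files written by the loop below (field $4 of h2load's 'finished in' line is the req/s figure):
    Code (Text):
    awk '/finished in/ {sum += $4; n++} END {printf "average = %.2f req/s over %d runs\n", sum/n, n}' h2load.{1..7}.nginx.log
    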

    Code (Text):
    echo "Haproxy 1.8.3 HTTP/2 h2load stress test"; sleep 30; for i in {1..7}; do /usr/local/bin/h2load -t4 -c5000 -m100 -n100000 -H 'Accept-Encoding: gzip' https://baremetal.domain.com:444/ > h2load.$i.nginx.log; cat h2load.$i.nginx.log | awk '/finished in/ {print $4 " req/s "} /requests: / {print ($8/$2*100)"% completed"} /status codes: / {print " ",$0}  /traffic: / {print " ",$0}' | tr -d '\n'; echo; sleep 30; done;
    Haproxy 1.8.3 HTTP/2 h2load stress test
    7115.71 req/s 100% completed  status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 205.09MB (215053563) total, 40.88MB (42868754) headers (space savings 8.92%), 161.46MB (169300000) data
    8324.00 req/s 100% completed  status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 205.09MB (215052443) total, 40.88MB (42868714) headers (space savings 8.92%), 161.46MB (169300000) data
    6329.77 req/s 100% completed  status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 205.09MB (215051915) total, 40.88MB (42868735) headers (space savings 8.92%), 161.46MB (169300000) data
    6566.54 req/s 100% completed  status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 205.09MB (215054951) total, 40.88MB (42868792) headers (space savings 8.92%), 161.46MB (169300000) data
    6921.47 req/s 100% completed  status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 205.09MB (215053985) total, 40.88MB (42868843) headers (space savings 8.92%), 161.46MB (169300000) data
    7133.50 req/s 100% completed  status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 205.09MB (215054324) total, 40.88MB (42868813) headers (space savings 8.92%), 161.46MB (169300000) data
    7614.40 req/s 100% completed  status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 205.09MB (215053501) total, 40.88MB (42868764) headers (space savings 8.92%), 161.46MB (169300000) data
    


    Centmin Nginx HTTP/2 HTTPS with proxy_cache



    Nginx proxy_cache TTL = 10s

    Heavy h2load test with 7x runs of 4 threads, 5000 concurrent users and 100k requests

    average = 30,119.74 req/s

    Code (Text):
    echo "with proxy_cache CentminMod.com Nginx 1.13.8 HTTP/2 h2load stress test"; sleep 30; for i in {1..7}; do /usr/local/bin/h2load -t4 -c5000 -m100 -n100000 -H 'Accept-Encoding: gzip' https://baremetal.domain.com:443/ > h2load.$i.nginx.log; cat h2load.$i.nginx.log | awk '/finished in/ {print $4 " req/s "} /requests: / {print ($8/$2*100)"% completed"} /status codes: / {print " ",$0}  /traffic: / {print " ",$0}' | tr -d '\n'; echo; sleep 30; done;
    with proxy_cache CentminMod.com Nginx 1.13.8 HTTP/2 h2load stress test
    27936.36 req/s 100% completed  status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 198.22MB (207850008) total, 34.81MB (36505008) headers (space savings 22.33%), 161.46MB (169300000) data
    27770.82 req/s 100% completed  status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 198.25MB (207883428) total, 34.85MB (36538428) headers (space savings 22.33%), 161.46MB (169300000) data
    31370.32 req/s 100% completed  status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 198.49MB (208135263) total, 35.09MB (36790263) headers (space savings 22.31%), 161.46MB (169300000) data
    30790.68 req/s 100% completed  status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 198.57MB (208218579) total, 35.17MB (36873579) headers (space savings 22.31%), 161.46MB (169300000) data
    30545.24 req/s 100% completed  status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 198.42MB (208055355) total, 35.01MB (36710355) headers (space savings 22.32%), 161.46MB (169300000) data
    31300.47 req/s 100% completed  status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 198.53MB (208176203) total, 35.12MB (36831203) headers (space savings 22.31%), 161.46MB (169300000) data
    31124.27 req/s 100% completed  status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 198.51MB (208155007) total, 35.10MB (36810007) headers (space savings 22.31%), 161.46MB (169300000) data
    


    Centmin Nginx HTTP/2 HTTPS without proxy_cache



    Heavy h2load test with 7x runs of 4 threads, 5000 concurrent users and 100k requests

    average = 5,901.85 req/s

    Code (Text):
    echo "no proxy_cache CentminMod.com Nginx 1.13.8 HTTP/2 h2load stress test"; sleep 30; for i in {1..7}; do /usr/local/bin/h2load -t4 -c5000 -m100 -n100000 -H 'Accept-Encoding: gzip' https://baremetal.domain.com:443/?nocache > h2load.$i.nginx.log; cat h2load.$i.nginx.log | awk '/finished in/ {print $4 " req/s "} /requests: / {print ($8/$2*100)"% completed"} /status codes: / {print " ",$0}  /traffic: / {print " ",$0}' | tr -d '\n'; echo; sleep 30; done;
    no proxy_cache CentminMod.com Nginx 1.13.8 HTTP/2 h2load stress test
    5839.77 req/s 100% completed  status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 198.51MB (208150000) total, 35.10MB (36805000) headers (space savings 22.19%), 161.46MB (169300000) data
    6041.26 req/s 100% completed  status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 198.51MB (208150000) total, 35.10MB (36805000) headers (space savings 22.19%), 161.46MB (169300000) data
    5288.68 req/s 100% completed  status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 198.51MB (208150000) total, 35.10MB (36805000) headers (space savings 22.19%), 161.46MB (169300000) data
    5939.11 req/s 100% completed  status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 198.51MB (208150000) total, 35.10MB (36805000) headers (space savings 22.19%), 161.46MB (169300000) data
    6024.00 req/s 100% completed  status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 198.51MB (208150000) total, 35.10MB (36805000) headers (space savings 22.19%), 161.46MB (169300000) data
    5921.01 req/s 100% completed  status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 198.51MB (208150000) total, 35.10MB (36805000) headers (space savings 22.19%), 161.46MB (169300000) data
    6259.09 req/s 100% completed  status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx  traffic: 198.51MB (208150000) total, 35.10MB (36805000) headers (space savings 22.19%), 161.46MB (169300000) data
    
     
  15. eva2000

    eva2000 Administrator Staff Member

    53,152
    12,110
    113
    May 24, 2014
    Brisbane, Australia
    Ratings:
    +18,645
    Local Time:
    11:54 AM
    Nginx 1.27.x
    MariaDB 10.x/11.4+
    Well, it was definitely a lot of fun testing Vultr Bare Metal servers; having access to 10Gbps network connectivity really makes OVH's offered 250-500Mbit/s caps feel very limiting! But today is the final day with Vultr Bare Metal as the $100 free promo credit runs out :(

    The benchmarks and testing done on Vultr Bare Metal have been useful for gathering data for future Centmin Mod LEMP stack optimisations. Of course, having a larger benchmarking budget would always help CentminMod Benchmarking Budget Assistance ;)

    Example of 10Gbps network speed during a Centmin Mod 123.09beta01 install downloading the pcre tarball, which is hosted on centminmod.com's local New York/NJ mirror on a Vultr VPS, so download speed was 130MB/s :D

    Code (Text):
    *************************************************
    * Installing nginx
    *************************************************
    Installing nginx Modules / Prerequisites...
    Download pcre-8.41.tar.gz ...
    --2018-02-10 22:38:45--  https://centminmod.com/centminmodparts/pcre/pcre-8.41.tar.gz
    Resolving centminmod.com... 45.63.18.5
    Connecting to centminmod.com|45.63.18.5|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 2068775 (2.0M) [application/octet-stream]
    Saving to: ‘pcre-8.41.tar.gz’
    
         0K .......... .......... .......... .......... ..........  2% 84.5M 0s
        50K .......... .......... .......... .......... ..........  4%  161M 0s
       100K .......... .......... .......... .......... ..........  7%  161M 0s
       150K .......... .......... .......... .......... ..........  9%  158M 0s
       200K .......... .......... .......... .......... .......... 12%  155M 0s
       250K .......... .......... .......... .......... .......... 14%  159M 0s
       300K .......... .......... .......... .......... .......... 17%  117M 0s
       350K .......... .......... .......... .......... .......... 19%  174M 0s
       400K .......... .......... .......... .......... .......... 22%  164M 0s
       450K .......... .......... .......... .......... .......... 24%  159M 0s
       500K .......... .......... .......... .......... .......... 27% 74.9M 0s
       550K .......... .......... .......... .......... .......... 29%  158M 0s
       600K .......... .......... .......... .......... .......... 32%  170M 0s
       650K .......... .......... .......... .......... .......... 34%  156M 0s
       700K .......... .......... .......... .......... .......... 37%  125M 0s
       750K .......... .......... .......... .......... .......... 39%  134M 0s
       800K .......... .......... .......... .......... .......... 42%  128M 0s
       850K .......... .......... .......... .......... .......... 44%  124M 0s
       900K .......... .......... .......... .......... .......... 47%  117M 0s
       950K .......... .......... .......... .......... .......... 49%  106M 0s
      1000K .......... .......... .......... .......... .......... 51%  121M 0s
      1050K .......... .......... .......... .......... .......... 54%  112M 0s
      1100K .......... .......... .......... .......... .......... 56%  144M 0s
      1150K .......... .......... .......... .......... .......... 59%  128M 0s
      1200K .......... .......... .......... .......... .......... 61%  102M 0s
      1250K .......... .......... .......... .......... .......... 64%  130M 0s
      1300K .......... .......... .......... .......... .......... 66% 99.9M 0s
      1350K .......... .......... .......... .......... .......... 69%  126M 0s
      1400K .......... .......... .......... .......... .......... 71%  131M 0s
      1450K .......... .......... .......... .......... .......... 74%  140M 0s
      1500K .......... .......... .......... .......... .......... 76%  146M 0s
      1550K .......... .......... .......... .......... .......... 79%  131M 0s
      1600K .......... .......... .......... .......... .......... 81%  134M 0s
      1650K .......... .......... .......... .......... .......... 84%  143M 0s
      1700K .......... .......... .......... .......... .......... 86%  131M 0s
      1750K .......... .......... .......... .......... .......... 89%  136M 0s
      1800K .......... .......... .......... .......... .......... 91%  142M 0s
      1850K .......... .......... .......... .......... .......... 94%  124M 0s
      1900K .......... .......... .......... .......... .......... 96%  145M 0s
      1950K .......... .......... .......... .......... .......... 98%  142M 0s
      2000K .......... ..........                                 100%  100M=0.02s
    
    2018-02-10 22:38:45 (130 MB/s) - ‘pcre-8.41.tar.gz’ saved [2068775/2068775]
    
    Download done.
    pcre-8.41.tar.gz valid file.