
PHP PHP 7.x Benchmarks Centmin Mod vs Easyengine vs Webinoly vs VestaCP vs OneInStack

Discussion in 'Nginx and PHP-FPM news & discussions' started by eva2000, Jun 17, 2018.

Thread Status:
Not open for further replies.
  1. eva2000

    eva2000 Administrator Staff Member

    53,134
    12,108
    113
    May 24, 2014
    Brisbane, Australia
    Ratings:
    +18,641
    Local Time:
    10:18 PM
    Nginx 1.27.x
    MariaDB 10.x/11.4+
    Previously, I did some static HTML HTTP/2 HTTPS Nginx benchmarks and non-HTTPS Nginx benchmarks comparing several LEMP stacks - Centmin Mod vs EasyEngine vs Webinoly vs VestaCP vs OneInStack. Now it's time to look at each LEMP stack's PHP-FPM performance.

    LEMP Stack Configurations



    The following LEMP stack installers were installed in LXD containers on an ssdnodes 4 CPU, 16GB RAM, 80GB disk KVM VPS running Ubuntu 18.04 LTS.
    • Centmin Mod 123.09beta01 beta Nginx 1.15.0 on CentOS 7.5 64bit (default gzip compression = 5)
    • EasyEngine 3.8.1 using Nginx 1.14.0 on Ubuntu 16.04 LTS (default gzip compression = 6)
    • OneInStack Nginx 1.14.0 on Ubuntu 16.04 LTS (default gzip compression = 6)
    • OneInStack OpenResty Nginx 1.13.6 on Ubuntu 16.04 LTS (default gzip compression = 6)
    • VestaCP 0.9.8-21 using Nginx 1.15.0 on Ubuntu 16.04 LTS (default gzip compression = 9)
    • Webinoly 1.4.3 using Nginx 1.14.0 on Ubuntu 18.04 LTS (default gzip compression = 6)

    PHP-FPM Version Background Info



    For each LEMP stack, I will be updating to the latest PHP 7.x version available via the LEMP stack's own routines. This excludes PHP 7.x upgrade paths that require any manual/additional repository configuration to set up. That means the following PHP-FPM versions were tested:
    • Centmin Mod 123.09beta01 beta Nginx 1.15.0 on CentOS 7.5 64bit = PHP 7.2.6 without PGO and PHP 7.2.6 with PGO. PGO = Profile Guided Optimizations (Updated via centmin.sh menu option 5)
    • EasyEngine 3.8.1 using Nginx 1.14.0 on Ubuntu 16.04 LTS = PHP 7.0.30 (EE upgrade routine latest)
    • OneInStack Nginx 1.14.0 on Ubuntu 16.04 LTS = PHP 7.2.6 (default)
    • OneInStack OpenResty Nginx 1.13.6 on Ubuntu 16.04 LTS = PHP 7.2.6 (default)
    • VestaCP 0.9.8-21 using Nginx 1.15.0 on Ubuntu 16.04 LTS = PHP 7.0.30 (default)
    • Webinoly 1.4.3 using Nginx 1.14.0 on Ubuntu 18.04 LTS = PHP 7.2.5 (default)

    Centmin Mod
    Code (Text):
    PHP 7.2.6 (cli) (built: Jun 16 2018 12:20:03) ( NTS )
    Copyright (c) 1997-2018 The PHP Group
    Zend Engine v3.2.0, Copyright (c) 1998-2018 Zend Technologies
       with Zend OPcache v7.2.6, Copyright (c) 1999-2018, by Zend Technologies
    
    curl -Iks https://http2.domain.com/bench.php
    HTTP/1.1 200 OK
    Date: Sat, 16 Jun 2018 17:10:05 GMT
    Content-Type: text/html; charset=UTF-8
    Connection: keep-alive
    Vary: Accept-Encoding
    Server: nginx centminmod
    X-Powered-By: centminmod
    
    curl -Iks https://http2.domain.com/micro_bench.php
    HTTP/1.1 200 OK
    Date: Sat, 16 Jun 2018 17:10:09 GMT
    Content-Type: text/html; charset=UTF-8
    Connection: keep-alive
    Vary: Accept-Encoding
    Server: nginx centminmod
    X-Powered-By: centminmod
    

    EasyEngine
    Code (Text):
    PHP 7.0.30-1+ubuntu16.04.1+deb.sury.org+1 (cli) (built: May  2 2018 12:43:14) ( NTS )
    Copyright (c) 1997-2017 The PHP Group
    Zend Engine v3.0.0, Copyright (c) 1998-2017 Zend Technologies
       with Zend OPcache v7.0.30-1+ubuntu16.04.1+deb.sury.org+1, Copyright (c) 1999-2017, by Zend Technologies
    
    curl -Iks https://http2.domain.com/bench.php
    HTTP/1.1 200 OK
    Server: nginx
    Date: Sat, 16 Jun 2018 17:11:28 GMT
    Content-Type: text/html; charset=UTF-8
    Connection: keep-alive
    Vary: Accept-Encoding
    X-Powered-By: EasyEngine 3.8.1
    
    curl -Iks https://http2.domain.com/micro_bench.php
    HTTP/1.1 200 OK
    Server: nginx
    Date: Sat, 16 Jun 2018 17:11:33 GMT
    Content-Type: text/html; charset=UTF-8
    Connection: keep-alive
    Vary: Accept-Encoding
    X-Powered-By: EasyEngine 3.8.1
    

    OneInStack Std Nginx
    Code (Text):
    PHP 7.2.6 (cli) (built: Jun 12 2018 04:14:37) ( NTS )
    Copyright (c) 1997-2018 The PHP Group
    Zend Engine v3.2.0, Copyright (c) 1998-2018 Zend Technologies
       with Zend OPcache v7.2.6, Copyright (c) 1999-2018, by Zend Technologies
    
    curl -Iks https://http2.domain.com/bench.php
    HTTP/1.1 200 OK
    Server: nginx
    Date: Sat, 16 Jun 2018 17:12:31 GMT
    Content-Type: text/html; charset=UTF-8
    Connection: keep-alive
    Vary: Accept-Encoding
    
    curl -Iks https://http2.domain.com/micro_bench.php
    HTTP/1.1 200 OK
    Server: nginx
    Date: Sat, 16 Jun 2018 17:12:34 GMT
    Content-Type: text/html; charset=UTF-8
    Connection: keep-alive
    Vary: Accept-Encoding
    

    OneInStack OpenResty Nginx
    Code (Text):
    PHP 7.2.6 (cli) (built: Jun 12 2018 06:10:31) ( NTS )
    Copyright (c) 1997-2018 The PHP Group
    Zend Engine v3.2.0, Copyright (c) 1998-2018 Zend Technologies
       with Zend OPcache v7.2.6, Copyright (c) 1999-2018, by Zend Technologies
    
    curl -Iks https://http2.domain.com/bench.php
    HTTP/1.1 200 OK
    Server: openresty
    Date: Sat, 16 Jun 2018 17:13:26 GMT
    Content-Type: text/html; charset=UTF-8
    Connection: keep-alive
    Vary: Accept-Encoding
    
    curl -Iks https://http2.domain.com/bench.php
    HTTP/1.1 200 OK
    Server: openresty
    Date: Sat, 16 Jun 2018 17:13:30 GMT
    Content-Type: text/html; charset=UTF-8
    Connection: keep-alive
    Vary: Accept-Encoding
    

    VestaCP Nginx
    Code (Text):
    PHP 7.0.30-0ubuntu0.16.04.1 (cli) ( NTS )
    Copyright (c) 1997-2017 The PHP Group
    Zend Engine v3.0.0, Copyright (c) 1998-2017 Zend Technologies
       with Zend OPcache v7.0.30-0ubuntu0.16.04.1, Copyright (c) 1999-2017, by Zend Technologies
    
    curl -Iks https://http2.domain.com/bench.php
    HTTP/1.1 200 OK
    Server: nginx
    Date: Sat, 16 Jun 2018 17:14:22 GMT
    Content-Type: text/html; charset=UTF-8
    Connection: keep-alive
    Keep-Alive: timeout=60
    
    curl -Iks https://http2.domain.com/micro_bench.php
    HTTP/1.1 200 OK
    Server: nginx
    Date: Sat, 16 Jun 2018 17:14:27 GMT
    Content-Type: text/html; charset=UTF-8
    Connection: keep-alive
    Keep-Alive: timeout=60
    

    Webinoly Nginx
    Code (Text):
    PHP 7.2.5-1+ubuntu18.04.1+deb.sury.org+1 (cli) (built: May  5 2018 05:00:15) ( NTS )
    Copyright (c) 1997-2018 The PHP Group
    Zend Engine v3.2.0, Copyright (c) 1998-2018 Zend Technologies
       with Zend OPcache v7.2.5-1+ubuntu18.04.1+deb.sury.org+1, Copyright (c) 1999-2018, by Zend Technologies
    
    curl -Iks https://http2.domain.com/bench.php
    HTTP/2 200
    server: nginx
    date: Sat, 16 Jun 2018 17:15:25 GMT
    content-type: text/html; charset=UTF-8
    vary: Accept-Encoding
    x-frame-options: SAMEORIGIN
    x-content-type-options: nosniff
    x-xss-protection: 1; mode=block
    cache-control: public, no-cache
    referrer-policy: unsafe-url
    
    curl -Iks https://http2.domain.com/micro_bench.php
    HTTP/2 200
    server: nginx
    date: Sat, 16 Jun 2018 17:15:30 GMT
    content-type: text/html; charset=UTF-8
    vary: Accept-Encoding
    x-frame-options: SAMEORIGIN
    x-content-type-options: nosniff
    x-xss-protection: 1; mode=block
    cache-control: public, no-cache
    referrer-policy: unsafe-url
    


    PHP CLI micro_bench.php & bench.php benchmarks



    First up are PHP CLI command line tests run against each LEMP stack's PHP binary for the micro_bench.php and bench.php benchmarks, with the following test parameters:
    • Each PHP script is tested with Zend OPcache opcache.enable_cli disabled (0) and enabled (1)
    • Each test is run 3 times; runtimes are reported as minimum, average, maximum and standard deviation, along with average max memory usage
    • Full raw numbers here.
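    A minimal sketch of that timing methodology, assuming GNU time and awk are available (the `stats` helper and file names are hypothetical, not part of the original test harness):

```shell
# Hypothetical timing harness sketch: run a PHP CLI benchmark several times
# and summarise min/avg/max/standard deviation, as in the methodology above.

# stats: read one runtime (seconds) per line on stdin, print the summary
stats() {
  awk '{ s += $1; ss += $1 * $1
         if (NR == 1 || $1 < min) min = $1
         if (NR == 1 || $1 > max) max = $1 }
       END { avg = s / NR
             printf "min=%.3f avg=%.3f max=%.3f sd=%.3f\n",
                    min, avg, max, sqrt(ss / NR - avg * avg) }'
}

# Example run (illustrative paths; opcache.enable_cli toggled per test set):
#   for i in 1 2 3; do
#     { /usr/bin/time -f '%e' php -d opcache.enable_cli=1 micro_bench.php \
#         > /dev/null ; } 2>> runtimes.txt
#   done
#   stats < runtimes.txt
```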

    micro_bench.php opcache.enable_cli=0



    runtimes in seconds (lower = faster)

    php-microbench-cli0.png

    avg runtimes in seconds (lower = faster) vs max memory (MB) for benchmark run

    php-microbench-maxmem-cli0.png

    micro_bench.php opcache.enable_cli=1



    runtimes in seconds (lower = faster)

    php-microbench-cli1.png

    avg runtimes in seconds (lower = faster) vs max memory (MB) for benchmark run

    php-microbench-maxmem-cli1.png


    bench.php opcache.enable_cli=0



    runtimes in seconds (lower = faster)

    php-bench-cli0.png

    avg runtimes in seconds (lower = faster) vs max memory (MB) for benchmark run

    php-bench-maxmem-cli0.png

    bench.php opcache.enable_cli=1



    runtimes in seconds (lower = faster)

    php-bench-cli1.png

    avg runtimes in seconds (lower = faster) vs max memory (MB) for benchmark run

    php-bench-maxmem-cli1.png
     
    Last edited: Jun 17, 2018
  2. eva2000


    Nginx HTTP/2 HTTPS PHP-FPM bench.php Benchmarks



    The above tests were pure PHP CLI command line tests. Now for more realistic usage, where Nginx serves PHP requests for the bench.php file, tested via the h2load HTTP/2 HTTPS load tester. The internet is moving towards HTTPS only, so these tests showcase how the respective LEMP stacks perform when serving PHP requests via Nginx HTTP/2 HTTPS PHP-FPM.

    The Nginx HTTP/2 HTTPS configuration is the exact same as used in the static HTML benchmark tests.

    Nginx TLS Protocol / SSL Cipher Summary



    A summary of each LEMP stack's Nginx default preferred TLS protocol and SSL cipher for the h2load HTTP/2 HTTPS client is in the table below, with all Nginx configurations manually set up with dual RSA 2048bit + ECDSA 256bit SSL certificates.

    Nginx                             | OpenSSL        | Protocol | SSL Cipher                    | Server Temp Key     | App Protocol
    Centmin Mod Nginx 1.15.0          | OpenSSL 1.1.0h | TLSv1.2  | ECDHE-ECDSA-AES128-GCM-SHA256 | ECDH P-256 256 bits | h2
    EasyEngine Nginx 1.14.0           | OpenSSL 1.0.2g | TLSv1.2  | ECDHE-ECDSA-AES128-GCM-SHA256 | ECDH P-256 256 bits | h2
    OneInStack Std Nginx 1.14.0       | OpenSSL 1.0.2o | TLSv1.2  | ECDHE-RSA-AES256-GCM-SHA384   | ECDH P-256 256 bits | h2
    OneInStack OpenResty Nginx 1.13.6 | OpenSSL 1.0.2o | TLSv1.2  | ECDHE-RSA-AES256-GCM-SHA384   | ECDH P-256 256 bits | h2
    VestaCP Nginx 1.15.0              | OpenSSL 1.0.2g | TLSv1.2  | ECDHE-RSA-AES256-GCM-SHA384   | ECDH P-256 256 bits | h2
    Webinoly Nginx 1.14.0             | OpenSSL 1.1.0g | TLSv1.2  | ECDHE-RSA-AES128-GCM-SHA256   | X25519 253 bits     | h2


    Centmin Mod 123.09beta01 Nginx HTTPS settings



    default ssl cipher settings
    Code (Text):
    lxc exec centos75-2 -- egrep 'ssl_session_cache|ssl_session_timeout|ssl_prefer_server_ciphers|ssl_ciphers|ssl_protocols|ssl_dhparam' /usr/local/nginx/conf/conf.d/http2.domain.com.ssl.conf
      ssl_dhparam /usr/local/nginx/conf/ssl/http2.domain.com/dhparam.pem;
      ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS;
      ssl_prefer_server_ciphers   on;
    

    h2load version
    Code (Text):
    lxc exec centos75-2 -- h2load --version
    h2load nghttp2/1.31.1
    

    h2load tested SSL cipher/protocol when connecting to https://http2.domain.com/ - Centmin Mod prefers TLSv1.2 with the ECDHE-ECDSA-AES128-GCM-SHA256 ssl cipher for the h2load HTTP/2 HTTPS client
    Code (Text):
    lxc exec centos75-2 -- h2load -t1 -c1 -n1 https://http2.domain.com/ | egrep 'TLS Protocol:|Cipher:|Server Temp Key:|Application protocol:'
    TLS Protocol: TLSv1.2
    Cipher: ECDHE-ECDSA-AES128-GCM-SHA256
    Server Temp Key: ECDH P-256 256 bits
    Application protocol: h2
    


    EasyEngine 3.8.1 Nginx HTTPS settings



    default ssl cipher settings
    Code (Text):
    lxc exec easyengine-ubuntu1604 -- egrep 'ssl_session_cache|ssl_session_timeout|ssl_prefer_server_ciphers|ssl_ciphers|ssl_protocols|ssl_dhparam' /etc/nginx/nginx.conf
           ssl_session_cache shared:SSL:20m;
           ssl_session_timeout 10m;
           ssl_prefer_server_ciphers on;
           ssl_ciphers ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS;
           ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    

    h2load version
    Code (Text):
    lxc exec easyengine-ubuntu1604 -- h2load --version
    h2load nghttp2/1.33.0-DEV
    

    h2load tested SSL cipher/protocol when connecting to https://http2.domain.com/ - EasyEngine prefers TLSv1.2 with the ECDHE-ECDSA-AES128-GCM-SHA256 ssl cipher for the h2load HTTP/2 HTTPS client
    Code (Text):
    lxc exec easyengine-ubuntu1604 -- h2load -t1 -c1 -n1 https://http2.domain.com/ | egrep 'TLS Protocol:|Cipher:|Server Temp Key:|Application protocol:'
    TLS Protocol: TLSv1.2
    Cipher: ECDHE-ECDSA-AES128-GCM-SHA256
    Server Temp Key: ECDH P-256 256 bits
    Application protocol: h2
    


    VestaCP 0.9.8-21 Nginx HTTPS settings



    default ssl cipher settings
    Code (Text):
    lxc exec vestacp-ubuntu1604 -- egrep 'ssl_session_cache|ssl_session_timeout|ssl_prefer_server_ciphers|ssl_ciphers|ssl_protocols|ssl_dhparam' /etc/nginx/nginx.conf
       ssl_session_cache   shared:SSL:10m;
       ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;
       ssl_prefer_server_ciphers on;
       ssl_ciphers        "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";
    

    h2load version
    Code (Text):
    lxc exec vestacp-ubuntu1604 -- h2load --version
    h2load nghttp2/1.33.0-DEV
    

    h2load tested SSL cipher/protocol when connecting to https://http2.domain.com/ - VestaCP Nginx prefers TLSv1.2 with the ECDHE-RSA-AES256-GCM-SHA384 ssl cipher for the h2load HTTP/2 HTTPS client
    Code (Text):
    lxc exec vestacp-ubuntu1604 -- h2load -t1 -c1 -n1 https://http2.domain.com/ | egrep 'TLS Protocol:|Cipher:|Server Temp Key:|Application protocol:'
    TLS Protocol: TLSv1.2
    Cipher: ECDHE-RSA-AES256-GCM-SHA384
    Server Temp Key: ECDH P-256 256 bits
    Application protocol: h2
    


    OneInStack v1.7 Standard Nginx HTTPS settings



    default ssl cipher settings - seems to use Nginx upstream defaults?
    Code (Text):
    lxc exec oneinstack-ubuntu16-nginx -- egrep 'ssl_session_cache|ssl_session_timeout|ssl_prefer_server_ciphers|ssl_ciphers|ssl_protocols|ssl_dhparam' /usr/local/nginx/conf/nginx.conf.default
       #    ssl_session_cache    shared:SSL:1m;
       #    ssl_session_timeout  5m;
       #    ssl_ciphers  HIGH:!aNULL:!MD5;
       #    ssl_prefer_server_ciphers  on;
    

    h2load version
    Code (Text):
    lxc exec oneinstack-ubuntu16-nginx -- h2load --version
    h2load nghttp2/1.33.0-DEV
    

    h2load tested SSL cipher/protocol when connecting to https://http2.domain.com/ - OneInStack Nginx prefers TLSv1.2 with the ECDHE-RSA-AES256-GCM-SHA384 ssl cipher for the h2load HTTP/2 HTTPS client
    Code (Text):
    lxc exec oneinstack-ubuntu16-nginx -- h2load -t1 -c1 -n1 https://http2.domain.com/ | egrep 'TLS Protocol:|Cipher:|Server Temp Key:|Application protocol:'
    TLS Protocol: TLSv1.2
    Cipher: ECDHE-RSA-AES256-GCM-SHA384
    Server Temp Key: ECDH P-256 256 bits
    Application protocol: h2
    


    OneInStack v1.7 OpenResty Nginx HTTPS settings



    default ssl cipher settings - seems to use Nginx upstream defaults?
    Code (Text):
    lxc exec oneinstack-ubuntu16-openresty -- egrep 'ssl_session_cache|ssl_session_timeout|ssl_prefer_server_ciphers|ssl_ciphers|ssl_protocols|ssl_dhparam' /usr/local/openresty/nginx/conf/nginx.conf.default
       #    ssl_session_cache    shared:SSL:1m;
       #    ssl_session_timeout  5m;
       #    ssl_ciphers  HIGH:!aNULL:!MD5;
       #    ssl_prefer_server_ciphers  on;
    

    h2load version
    Code (Text):
    lxc exec oneinstack-ubuntu16-openresty -- h2load --version
    h2load nghttp2/1.33.0-DEV
    

    h2load tested SSL cipher/protocol when connecting to https://http2.domain.com/ - OneInStack OpenResty Nginx prefers TLSv1.2 with the ECDHE-RSA-AES256-GCM-SHA384 ssl cipher for the h2load HTTP/2 HTTPS client
    Code (Text):
    lxc exec oneinstack-ubuntu16-openresty -- h2load -t1 -c1 -n1 https://http2.domain.com/ | egrep 'TLS Protocol:|Cipher:|Server Temp Key:|Application protocol:'
    TLS Protocol: TLSv1.2
    Cipher: ECDHE-RSA-AES256-GCM-SHA384
    Server Temp Key: ECDH P-256 256 bits
    Application protocol: h2
    


    Webinoly v1.4.3 Nginx HTTPS settings



    default ssl cipher settings
    Code (Text):
    lxc exec webinoly -- egrep 'ssl_session_cache|ssl_session_timeout|ssl_prefer_server_ciphers|ssl_ciphers|ssl_protocols|ssl_dhparam' /etc/nginx/nginx.conf
           ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
           ssl_session_timeout 10m;
           ssl_session_cache shared:SSL:20m;
           ssl_dhparam /etc/ssl/dhparam.pem;
           ssl_prefer_server_ciphers on;
           ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT;
    

    h2load version
    Code (Text):
    lxc exec webinoly -- h2load --version
    h2load nghttp2/1.33.0-DEV
    

    h2load tested SSL cipher/protocol when connecting to https://http2.domain.com/ - webinoly prefers TLSv1.2 with
    ECDHE-RSA-AES128-GCM-SHA256 ssl cipher for h2load HTTP/2 HTTPS client
    Code (Text):
    lxc exec webinoly -- h2load -t1 -c1 -n1 https://http2.domain.com/ | egrep 'TLS Protocol:|Cipher:|Server Temp Key:|Application protocol:'
    TLS Protocol: TLSv1.2
    Cipher: ECDHE-RSA-AES128-GCM-SHA256
    Server Temp Key: X25519 253 bits
    Application protocol: h2
    


    h2load HTTP/2 HTTPS bench.php Baseline



    Baseline results use just 1 user and 1 request to give an idea of the overhead differences between the pure PHP CLI runs above and PHP-FPM served via Nginx HTTP/2 HTTPS. All tests are done with Zend OPcache enabled, so compare against the PHP CLI opcache.enable_cli=1 results above.

    The h2load tests were run 9 times each to derive the results below. The following h2load test parameters were used:
    • -t1 = 1 thread
    • --ciphers = the default h2load client preferred ciphers, just spelled out explicitly; running h2load without --ciphers would use the same defaults
    • -H 'Accept-Encoding: gzip' = test compressed HTTP requests like a web browser would
    • -c1 = 1 concurrent user
    • -n1 = 1 request
    h2load test command used:
    Code (Text):
    h2load -t1 --ciphers=ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256 -H 'Accept-Encoding: gzip' -c1 -n1 https://http2.domain.com/bench.php
    


    • The PHP-FPM bench.php baseline HTTP/2 HTTPS results show the fastest was Centmin Mod PHP-FPM 7.2.6 with PGO enabled, with Webinoly PHP-FPM 7.2.5 coming in 2nd place.
    • EasyEngine and VestaCP were running the slower PHP-FPM 7.0.30, so they fell behind.

    phpbench-h2load-c1-n1-baseline-01.png

    h2load HTTP/2 HTTPS hello.php Baseline



    Next, for extended tests, a hello world hello.php file will be used instead, again with 1 user and 1 request to get a baseline.

    PHP:
    <html>
     <head>
      <title>Hello World</title>
     </head>
     <body>
     <?php echo '<p>Hello World</p>'?>
     </body>
    </html>
    The h2load tests were run 9 times each to derive the results below. The following h2load test parameters were used:
    • -t1 = 1 thread
    • --ciphers = the default h2load client preferred ciphers, just spelled out explicitly; running h2load without --ciphers would use the same defaults
    • -H 'Accept-Encoding: gzip' = test compressed HTTP requests like a web browser would
    • -c1 = 1 concurrent user
    • -n1 = 1 request
    h2load test command used:
    Code (Text):
    h2load -t1 --ciphers=ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256 -H 'Accept-Encoding: gzip' -c1 -n1 https://http2.domain.com/hello.php
    

    • For LEMP stacks you cannot look at PHP-FPM performance in isolation, as Nginx is the middleman responsible for passing PHP-FPM processed output to visitors. Nginx HTTP/2 HTTPS performance therefore plays a role in PHP-FPM performance, and this is clearly shown in the results below, given how much better Centmin Mod Nginx HTTP/2 HTTPS performed in the previous static HTML tests.
    • Again, Centmin Mod PHP-FPM 7.2.6 with Profile Guided Optimizations (PGO) was clearly ahead of non-PGO and the other LEMP stacks' PHP-FPM.
    • Centmin Mod PHP-FPM 7.2.6 without PGO was also faster than the other LEMP stacks' PHP-FPM.

    helloworld-h2load-c1-n1-baseline-01.png

    These baseline numbers, when compared with the planned higher concurrency load tests, will highlight the differences between each LEMP stack's configured Nginx + PHP-FPM servers.
     
  3. eva2000


    Default PHP-FPM Configuration Settings



    Now to look into each LEMP stack's default PHP-FPM configuration settings. I am not posting the entire config files, just excerpts of the most relevant settings for each.
    • OneInStack's Standard Nginx/OpenResty Nginx use local Unix sockets for PHP-FPM, while the other LEMP stacks default to TCP listener based PHP-FPM configs.
    • Unix sockets do in theory perform faster than TCP based PHP-FPM, though depending on server environment configuration, TCP based PHP-FPM may scale better than Unix sockets. We shall test this theory :)
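    The two transports differ only in how the Nginx fastcgi_pass directive pairs with the PHP-FPM listen directive; a sketch (socket path taken from the OneInStack configs below, TCP port from the others):

```nginx
# TCP listener (default for Centmin Mod, EasyEngine, VestaCP, Webinoly);
# PHP-FPM side: listen = 127.0.0.1:9000 (port varies per stack)
fastcgi_pass 127.0.0.1:9000;

# Unix socket (OneInStack default);
# PHP-FPM side: listen = /dev/shm/php-cgi.sock
#fastcgi_pass unix:/dev/shm/php-cgi.sock;
```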

    Centmin Mod



    /usr/local/etc/php-fpm.conf uses TCP by default but has optional Unix socket support. To switch, edit /usr/local/etc/php-fpm.conf, uncomment the ;listen = /tmp/php5-fpm.sock line and comment out listen = 127.0.0.1:9000, then edit the /usr/local/nginx/conf/php.conf include file to switch fastcgi_pass from TCP port 9000 to the Unix socket unix:/tmp/php5-fpm.sock
    Code (Text):
        fastcgi_pass   127.0.0.1:9000;
        #fastcgi_pass   unix:/tmp/php5-fpm.sock;
    

    with ondemand process management
    Code (Text):
    [www]
    user = nginx
    group = nginx
    
    listen = 127.0.0.1:9000
    listen.allowed_clients = 127.0.0.1
    ;listen.backlog = -1
    
    ;listen = /tmp/php5-fpm.sock
    listen.owner = nginx
    listen.group = nginx
    listen.mode = 0660
    
    pm = ondemand
    pm.max_children = 16
    ; Default Value: min_spare_servers + (max_spare_servers - min_spare_servers) / 2
    pm.start_servers = 8
    pm.min_spare_servers = 4
    pm.max_spare_servers = 12
    pm.max_requests = 1000
    
    ; PHP 5.3.9 setting
    ; The number of seconds after which an idle process will be killed.
    ; Note: Used only when pm is set to 'ondemand'
    ; Default Value: 10s
    pm.process_idle_timeout = 10s;
    
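    As a side note, pm.max_children is usually sized against available RAM; a rough rule-of-thumb sketch (both numbers below are assumptions for illustration, not Centmin Mod's values):

```shell
# Rule-of-thumb sizing sketch: divide the RAM budget for PHP-FPM by the
# average per-worker memory footprint (measure worker RSS on your own server).
php_ram_mb=2048      # assumed RAM budget for PHP-FPM workers
avg_worker_mb=64     # assumed average worker size in MB
echo "pm.max_children = $(( php_ram_mb / avg_worker_mb ))"
```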

    EasyEngine



    /etc/php/7.0/fpm/pool.d/www.conf with ondemand process management

    Code (Text):
    [www]
    user = www-data
    group = www-data
    listen = 127.0.0.1:9070
    listen.owner = www-data
    listen.group = www-data
    pm = ondemand
    pm.max_children = 100
    pm.start_servers = 20
    pm.min_spare_servers = 10
    pm.max_spare_servers = 30
    ping.path = /ping
    pm.status_path = /status
    pm.max_requests = 500
    request_terminate_timeout = 300
    

    OneInStack STD Nginx



    /usr/local/php/etc/php-fpm.conf uses local Unix Sockets for PHP-FPM on shared memory /dev/shm and with dynamic process management

    Code (Text):
    [www]
    listen = /dev/shm/php-cgi.sock
    listen.backlog = -1
    listen.allowed_clients = 127.0.0.1
    listen.owner = www
    listen.group = www
    listen.mode = 0666
    user = www
    group = www
    
    pm = dynamic
    pm.max_children = 80
    pm.start_servers = 60
    pm.min_spare_servers = 50
    pm.max_spare_servers = 80
    pm.max_requests = 2048
    pm.process_idle_timeout = 10s
    request_terminate_timeout = 120
    request_slowlog_timeout = 0
    

    OneInStack OpenResty



    /usr/local/php/etc/php-fpm.conf uses local Unix Sockets for PHP-FPM on shared memory /dev/shm and with dynamic process management

    Code (Text):
    [www]
    listen = /dev/shm/php-cgi.sock
    listen.backlog = -1
    listen.allowed_clients = 127.0.0.1
    listen.owner = www
    listen.group = www
    listen.mode = 0666
    user = www
    group = www
    
    pm = dynamic
    pm.max_children = 80
    pm.start_servers = 60
    pm.min_spare_servers = 50
    pm.max_spare_servers = 80
    pm.max_requests = 2048
    pm.process_idle_timeout = 10s
    request_terminate_timeout = 120
    request_slowlog_timeout = 0
    

    VestaCP



    /etc/php/7.0/fpm/pool.d/http2.domain.com.conf with ondemand process management

    Code (Text):
    [http2.domain.com]
    listen = 127.0.0.1:9002
    listen.allowed_clients = 127.0.0.1
    
    user = admin
    group = admin
    
    pm = ondemand
    pm.max_children = 4
    pm.max_requests = 4000
    pm.process_idle_timeout = 10s
    

    Webinoly



    /etc/php/7.2/fpm/pool.d/www.conf with ondemand process management

    Code (Text):
    [www]
    user = www-data
    group = www-data
    
    listen = 127.0.0.1:9000
    listen.owner = www-data
    listen.group = www-data
    
    pm = ondemand
    pm.max_children = 100
    pm.start_servers = 20
    pm.min_spare_servers = 10
    pm.max_spare_servers = 30
    pm.max_requests = 500
    pm.status_path = /status
    ping.path = /ping
    request_terminate_timeout = 300
    
     
  4. eva2000


    High User Concurrency hello.php PHP-FPM Benchmarks



    Next up are h2load HTTP/2 HTTPS PHP-FPM tests against the hello.php file at a much higher concurrency workload of 500 users and 5,000 requests. As previously mentioned, PHP-FPM Unix sockets (the OneInStack LEMP stacks' default config) can be faster, but only up to a point: they hit a concurrent workload limit and requests start to fail. PHP-FPM TCP listeners, on the other hand, are slower but scale much better under high concurrent workloads. This can be clearly seen in the test results below.
    • OneInStack LEMP stacks default to PHP-FPM Unix sockets, unlike the other tested LEMP stacks which default to TCP listeners. At 500 user concurrency, the OneInStack PHP-FPM configs start to fail under the h2load load tester. Between 35-38% of all requests failed, which inflates and skews the requests/s and TTFB 99th percentile latency values. Requests per second and latency are based on the time to complete a request, so failed requests resulted in h2load reporting higher requests/s and lower TTFB 99th percentile latency values. You do not want to be using PHP-FPM Unix sockets under high concurrent user loads when almost 2 out of 5 requests fail!
    • h2load requests/s numbers alone won't show the complete picture until you factor in request latency. In this case I added to the chart the 99th percentile value for Time To First Byte (TTFB), meaning 99% of requests had a latency at or below that value. Here Webinoly had decent requests/s but much higher TTFB due to one of the 9 test runs stalling, dropping its minimum requests/s to just 265.33. EasyEngine also had one of its 9 test runs stall, dropping requests/s to 240.3.
    • Only Centmin Mod no-PGO/PGO, VestaCP and Webinoly managed to complete 100% of the requests, but VestaCP's TTFB 99th percentile value was twice as slow, and Webinoly's 5x slower, than Centmin Mod's PHP-FPM.
    • Full raw h2load results here.
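    For reference, a 99th percentile can be approximated from raw per-request latencies (e.g. captured via h2load's --log-file option) with a short sort/awk sketch; the p99 helper here is hypothetical, not part of h2load itself:

```shell
# p99: read one latency value per line on stdin, print the 99th-percentile
# value using the nearest-rank method (rank = ceil(0.99 * N)).
p99() {
  sort -n | awk '{ a[NR] = $1 }
                 END { i = int((NR * 99 + 99) / 100); print a[i] }'
}
```

So for 100 samples it picks the 99th smallest value, which is what the chart's TTFB 99% percentile column represents.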

    phpbench-h2load-c500-n5000-01.png

    Here is an example of one of the OneInStack h2load test runs where 33.84% of requests failed. Notice the recorded h2load completion (finish) time is 1 second, which inflates the requests/s and latency numbers, and the traffic total = 997.36KB
    Code (Text):
    Test Run: 1 (oneinstack-ubuntu16-nginx)
    h2load -t1 --ciphers=ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256 -H 'Accept-Encoding: gzip' -c500 -n5000 https://http2.domain.com/hello.php
    TLS Protocol: TLSv1.2
    Cipher: ECDHE-RSA-AES256-GCM-SHA384
    Server Temp Key: ECDH P-256 256 bits
    Application protocol: h2
    finished in 1.00s, 4979.12 req/s, 993.19KB/s
    requests: 5000 total, 5000 started, 5000 done, 3308 succeeded, 1692 failed, 0 errored, 0 timeout
    status codes: 3308 2xx, 0 3xx, 0 4xx, 1692 5xx
    traffic: 997.36KB (1021292) total, 343.12KB (351356) headers (space savings 39.67%), 542.42KB (555436) data
                         min         max         mean         sd        +/- sd
    time for request:      195us    314.86ms     33.47ms     22.12ms    68.14%
    time for connect:   181.81ms    572.18ms    395.21ms     96.80ms    59.40%
    time to 1st byte:   318.26ms    663.97ms    446.64ms     92.25ms    53.40%
    req/s           :      10.13       22.20       14.09        2.44    57.80%
    

    Compare that to a Centmin Mod h2load test run where 100% of requests completed, the h2load finish time was recorded as 2.89 seconds and traffic total = 1.06MB
    Code (Text):
    Test Run: 2 (centos75-2)
    h2load -t1 --ciphers=ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256 -H 'Accept-Encoding: gzip' -c500 -n5000 https://http2.domain.com/hello.php
    TLS Protocol: TLSv1.2
    Cipher: ECDHE-ECDSA-AES128-GCM-SHA256
    Server Temp Key: ECDH P-256 256 bits
    Application protocol: h2
    finished in 2.89s, 1728.49 req/s, 376.42KB/s
    requests: 5000 total, 5000 started, 5000 done, 5000 succeeded, 0 failed, 0 errored, 0 timeout
    status codes: 5000 2xx, 0 3xx, 0 4xx, 0 5xx
    traffic: 1.06MB (1115000) total, 562.01KB (575500) headers (space savings 28.95%), 415.04KB (425000) data
                         min         max         mean         sd        +/- sd
    time for request:      447us       1.87s    131.15ms    336.58ms    92.58%
    time for connect:    63.00ms    588.84ms    484.45ms    110.10ms    92.60%
    time to 1st byte:   445.86ms       2.02s    653.13ms    318.92ms    93.80%
    req/s           :       3.47       12.64        6.82        3.50    70.00%
    


    You can check out the official h2load manual for an explanation of the output numbers: h2load(1) — nghttp2 1.33.0-DEV documentation
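    To make the distortion concrete, a quick sketch in plain Python that recomputes throughput from succeeded requests only, using the counters from the two runs quoted above:

```python
# Sanity-check h2load numbers: recompute throughput from succeeded requests
# only, so a run that failed 33.84% of its requests is not credited with an
# inflated requests/s figure.

def effective_req_s(succeeded: int, finish_seconds: float) -> float:
    """Requests/s counting only succeeded requests."""
    return succeeded / finish_seconds

def success_rate(succeeded: int, total: int) -> float:
    """Percentage of requests that succeeded."""
    return 100.0 * succeeded / total

# OneInStack run quoted above: finished in 1.00s, 3308 of 5000 succeeded
print(f"OneInStack: {effective_req_s(3308, 1.00):.0f} req/s "
      f"({success_rate(3308, 5000):.2f}% success)")   # ~3308 req/s, not 4979

# Centmin Mod run quoted above: finished in 2.89s, 5000 of 5000 succeeded
print(f"Centmin Mod: {effective_req_s(5000, 2.89):.0f} req/s "
      f"({success_rate(5000, 5000):.2f}% success)")   # ~1730 req/s
```

    Even adjusted this way, a run that only serves 66% of its requests in 1 second is not comparable to one that serves 100% of them.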
     
  5. eva2000


    Low Concurrency PHP-FPM Benchmark Tests



    Now to test low user concurrency between the LEMP stacks' PHP-FPM implementations, with the focus shifted to PHP-FPM response time latency, i.e. TTFB, rather than pure throughput (requests/s). Centmin Mod 123.09beta01 was just updated with revised Nginx and PHP-FPM defaults (one benefit of detailed benchmarking is the insights it gives for further optimising Centmin Mod :)).

    The changes were:
    Nginx from
    Code (Text):
    keepalive_timeout 6
    

    to
    Code (Text):
    keepalive_timeout 8
    

    PHP-FPM from
    Code (Text):
    pm.max_children = 16
    pm.max_requests = 1000
    

    to
    Code (Text):
    pm.max_children = 20
    pm.max_requests = 5000
    


    Testing will continue to use the hello world hello.php script.
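    For background on the pm.max_children bump, a common sizing heuristic (a general rule of thumb, not Centmin Mod's own formula) divides the RAM budget reserved for PHP-FPM by the average per-worker memory footprint:

```python
# Rule-of-thumb sizing for pm.max_children: divide the RAM budget you can
# spare for PHP-FPM by the average memory footprint of one worker.
# The 2048 MB budget and 64 MB per-worker figures below are illustrative
# assumptions, not measured values from these benchmarks.

def suggested_max_children(ram_budget_mb: int, avg_worker_mb: int) -> int:
    return max(1, ram_budget_mb // avg_worker_mb)

print(suggested_max_children(2048, 64))  # 32 workers for a 2 GB budget
```

    Set it too high and workers swap under load; too low and requests queue behind busy workers, which shows up as exactly the kind of TTFB latency measured below.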

    The h2load tests were run 9 times each to derive the results below. The following h2load test parameters were used, with -c50 and -n50 values chosen:
    • h2load -t2 = 2 threads
    • --ciphers = the listed ciphers are h2load's default client preferred ciphers, just spelled out explicitly. Running h2load without --ciphers would use the same defaults.
    • Accept-Encoding: gzip = test compressed HTTP responses like a web browser would
    • -c50 = 50 concurrent users
    • -n50 = 50 requests
    h2load test command used:

    -t2 = 2 threads
    Code (Text):
    h2load -t2 --ciphers=ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256 -H 'Accept-Encoding: gzip' -c50 -n50 https://http2.domain.com/hello.php
    


    • TTFB min, average and max latency response numbers are in milliseconds (ms), where lower equals faster response times and thus ultimately faster page load speed. These numbers are the average of 9 h2load test runs
    • TTFB average shows Centmin Mod PHP-FPM having the fastest response times, followed by VestaCP, EasyEngine, OneInStack OpenResty, OneInStack Nginx and, in last place with an almost 2x slower TTFB average, Webinoly.
    • TTFB maximum shows Centmin Mod PHP-FPM having the fastest response times, followed by VestaCP, EasyEngine, OneInStack OpenResty, OneInStack Nginx and, in last place with an almost 2x slower TTFB maximum, Webinoly.

    phpbench-h2load-c50-n50-01.png

    Then the tests switch to 4 h2load threads with the following command:

    -t4 = 4 threads
    Code (Text):
    h2load -t4 --ciphers=ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256 -H 'Accept-Encoding: gzip' -c50 -n50 https://http2.domain.com/hello.php
    


    • TTFB min, average and max latency response numbers are in milliseconds (ms), where lower equals faster response times and thus ultimately faster page load speed. These numbers are the average of 9 h2load test runs
    • TTFB average shows Centmin Mod PHP-FPM having the fastest response times, followed by EasyEngine, OneInStack OpenResty, OneInStack Nginx, VestaCP and, in last place with an almost 2x slower TTFB average, Webinoly.
    • TTFB maximum shows Centmin Mod PHP-FPM having the fastest response times, followed by EasyEngine, OneInStack OpenResty, OneInStack Nginx, VestaCP and, in last place with an almost 2x slower TTFB maximum, Webinoly.

    phpbench-h2load-c50-n50-t4-01.png
     
  6. eva2000


    High Concurrency PHP-FPM Benchmark Tests 1000 & 2000 & 5000 Users



    Now to test very high 1000 and 2000 (plus bonus 5000) user concurrency, again looking at PHP-FPM response time latency (TTFB) rather than pure throughput (requests/s), and also at the percentage of completed requests. Only the Centmin Mod LEMP stack's PHP-FPM managed to serve 100% of the requests. Though the response times are less than ideal, the results do show exactly how well each respective LEMP stack handles high user concurrency with out-of-the-box defaults. Granted, Nginx and PHP-FPM settings can be changed and tuned to be more optimal on all of the stacks.

    The h2load tests were run 9 times each to derive the results below. The following h2load test parameters were used:
    • h2load -t4 = 4 threads
    • --ciphers = the listed ciphers are h2load's default client preferred ciphers, just spelled out explicitly. Running h2load without --ciphers would use the same defaults.
    • Accept-Encoding: gzip = test compressed HTTP responses like a web browser would
    • -c1000/-c2000/-c5000 = 1000, 2000 and 5000 concurrent users
    • -n10000 = 10000 requests

    1000 concurrent users



    • Centmin Mod 123.09beta01 beta Nginx 1.15.0 PHP-FPM 7.2.6 = 100% completed requests with TTFB average = 1543ms and TTFB max = 4514ms for no-PGO and slightly better result for PGO based PHP 7.2.6 with 100% completed requests with TTFB average = 1501ms and TTFB max = 3811ms
    • Easyengine 3.8.1 Nginx 1.14.0 PHP-FPM 7.0.30 = 95% completed requests with TTFB average = 3326ms and TTFB max = 13508ms
    • OneInStack Nginx 1.14.0 PHP-FPM 7.2.6 showed its limitations in using a Unix socket instead of a TCP listener when it comes to concurrency scaling, only managing 30% completed requests with TTFB average = 728ms and TTFB max = 1350ms
    • OneInStack OpenResty Nginx 1.13.6 PHP-FPM 7.2.6 showed its limitations in using a Unix socket instead of a TCP listener when it comes to concurrency scaling, only managing 42% completed requests with TTFB average = 727ms and TTFB max = 1312ms
    • VestaCP 0.9.8-21 using Nginx 1.15.0 PHP-FPM 7.0.30 = 98% completed requests with TTFB average = 2273ms and TTFB max = 5657ms
    • Webinoly 1.4.3 using Nginx 1.14.0 PHP-FPM 7.2.5 = 99% completed requests with TTFB average = 4304ms and TTFB max = 13443ms
    • 1st place = Centmin Mod PHP-FPM with Profile Guided Optimization (PGO) enabled, with average and max TTFB response times between 1/3 and 1/2 of 3rd place's times
    • 2nd place = Centmin Mod PHP-FPM non-PGO, with average and max TTFB response times between 1/3 and 1/2 of 3rd place's times
    • 3rd place = VestaCP PHP-FPM; despite 1% fewer completed requests than Webinoly, its average and max TTFB latency was faster than Webinoly's
    • 4th place = Webinoly PHP-FPM with 4% more completed requests than EasyEngine
    • 5th place = EasyEngine PHP-FPM
    • 6th place = OneInStack OpenResty PHP-FPM
    • 7th place = OneInStack Std PHP-FPM
    phpbench-h2load-c1000-n10000-t4-01.png
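    The Unix socket vs TCP listener difference called out above comes down to the PHP-FPM pool config's listen directive. As a hedged illustration (the socket path, port and backlog value are examples, not the exact values any of these stacks ship with):

```ini
; PHP-FPM pool config - Unix socket listener (OneInStack-style; example path)
;listen = /dev/shm/php-cgi.sock

; TCP listener on loopback (the style that held up better here; example port)
listen = 127.0.0.1:9000
; A larger listen backlog lets connection bursts queue instead of being refused
listen.backlog = 65535
```

    The matching nginx side would then use fastcgi_pass 127.0.0.1:9000; instead of fastcgi_pass unix:/dev/shm/php-cgi.sock;.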

    2000 concurrent users



    • Centmin Mod 123.09beta01 beta Nginx 1.15.0 PHP-FPM 7.2.6 = 100% completed requests with TTFB average = 2176ms and TTFB max = 5653ms for no-PGO and slightly better result for PGO based PHP 7.2.6 with 100% completed requests with TTFB average = 2012ms and TTFB max = 5082ms
    • Easyengine 3.8.1 Nginx 1.14.0 PHP-FPM 7.0.30 = 99% completed requests with TTFB average = 7623ms and TTFB max = 37700ms
    • OneInStack Nginx 1.14.0 PHP-FPM 7.2.6 showed its limitations in using a Unix socket instead of a TCP listener when it comes to concurrency scaling, only managing 19% completed requests with TTFB average = 1271ms and TTFB max = 1911ms
    • OneInStack OpenResty Nginx 1.13.6 PHP-FPM 7.2.6 showed its limitations in using a Unix socket instead of a TCP listener when it comes to concurrency scaling, only managing 23% completed requests with TTFB average = 1364ms and TTFB max = 2044ms
    • VestaCP 0.9.8-21 using Nginx 1.15.0 PHP-FPM 7.0.30 failed completely, with what amounted to 0% completed requests, TTFB average = 3475ms and TTFB max = 8305ms
    • Webinoly 1.4.3 using Nginx 1.14.0 PHP-FPM 7.2.5 = 99% completed requests with TTFB average = 5233ms and TTFB max = 14216ms
    • 1st place = Centmin Mod PHP-FPM with Profile Guided Optimization (PGO) enabled, with average and max TTFB response times between 1/3 and 2/5 of 3rd place's times
    • 2nd place = Centmin Mod PHP-FPM non-PGO, with average and max TTFB response times between 1/3 and 2/5 of 3rd place's times
    • 3rd place = Webinoly PHP-FPM
    • 4th place = EasyEngine PHP-FPM
    • 5th place = OneInStack OpenResty PHP-FPM
    • 6th place = OneInStack Std PHP-FPM
    • 7th place = VestaCP PHP-FPM
    phpbench-h2load-c2000-n10000-t4-01.png

    5000 concurrent users



    A last minute addition: see how far the LEMP stacks can go pushing 5,000 concurrent user loads :)
    • Centmin Mod 123.09beta01 beta Nginx 1.15.0 PHP-FPM 7.2.6 = 99% completed requests with TTFB average = 4321ms and TTFB max = 23482ms for no-PGO and slightly better result for PGO based PHP 7.2.6 with 100% completed requests with TTFB average = 4336ms and TTFB max = 13252ms
    • Easyengine 3.8.1 Nginx 1.14.0 PHP-FPM 7.0.30 = 95% completed requests with TTFB average = 12129ms and TTFB max = 50578ms
    • OneInStack Nginx 1.14.0 PHP-FPM 7.2.6 showed its limitations in using a Unix socket instead of a TCP listener when it comes to concurrency scaling, only managing 13% completed requests with TTFB average = 2537ms and TTFB max = 3553ms
    • OneInStack OpenResty Nginx 1.13.6 PHP-FPM 7.2.6 showed its limitations in using a Unix socket instead of a TCP listener when it comes to concurrency scaling, only managing 11% completed requests with TTFB average = 2662ms and TTFB max = 3751ms
    • VestaCP 0.9.8-21 using Nginx 1.15.0 PHP-FPM 7.0.30 failed completely, with what amounted to 0% completed requests, TTFB average = 0ms and TTFB max = 0ms. The raw logs show some requests were served in the first 8 test runs, but the 9th run failed completely, breaking my bench script's statistics calculations - read further below for the actual breakdown of the raw stats.
    • Webinoly 1.4.3 using Nginx 1.14.0 PHP-FPM 7.2.5 = 99% completed requests with TTFB average = 8267ms and TTFB max = 17967ms
    • 1st place = Centmin Mod PHP-FPM with Profile Guided Optimization (PGO) enabled, with average and max TTFB response times 1/2 of 3rd place's times
    • 2nd place = Centmin Mod PHP-FPM non-PGO, with average and max TTFB response times 1/2 of 3rd place's times; it also edges out Webinoly, despite both serving 99% of requests, due to almost 2x faster TTFB average times
    • 3rd place = Webinoly PHP-FPM
    • 4th place = EasyEngine PHP-FPM
    • 5th place = OneInStack Std PHP-FPM
    • 6th place = OneInStack OpenResty PHP-FPM
    • 7th place = VestaCP PHP-FPM
    phpbench-h2load-c5000-n10000-t4-01.png

    The raw numbers for 5000 concurrent user test run

    1st place = Centmin Mod PHP-FPM Profile Guided Optimization (PGO) enabled
    Code (Text):
    users requests req/s encoding cipher protocol started succeeded
    500 1000 1192.38 gzip ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 10000 10000
    500 1000 820.41 gzip ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 10000 10000
    500 1000 916.02 gzip ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 10000 10000
    500 1000 986.69 gzip ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 10000 10000
    500 1000 1127.76 gzip ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 10000 10000
    500 1000 1013.81 gzip ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 10000 10000
    500 1000 962.71 gzip ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 10000 10000
    500 1000 592.05 gzip ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 10000 10000
    500 1000 292.67 gzip ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 10000 10000
    

    Code (Text):
    ttfb-time-min ttfb-time-avg ttfb-time-max
    883.36ms 3.80s 8.11s
    770.44ms 3.88s 12.03s
    2.05s 4.68s 10.73s
    1.50s 3.98s 9.94s
    1.25s 3.54s 8.70s
    1.86s 4.20s 9.63s
    1.50s 4.05s 9.54s
    1.38s 4.71s 16.58s
    1.77s 6.18s 34.01s
    -------------------------------------------------------------------------------------------
    h2load ttfb latency result summary
    ttfb-min:  ttfb-avg:  ttfb-max:  ttfb-stddev:  ttfb-perc99-min:  ttfb-perc99-avg:  ttfb-perc99-max:
    185.012    4.336      13.252     0.792         874.326           6.062             32.616
    -------------------------------------------------------------------------------------------
    

    2nd place = Centmin Mod PHP-FPM non-PGO
    Code (Text):
    users requests req/s encoding cipher protocol started succeeded
    500 1000 1319.37 gzip ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 10000 10000
    500 1000 519.67 gzip ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 10000 9925
    500 1000 1035.51 gzip ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 10000 10000
    500 1000 1160.48 gzip ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 10000 10000
    500 1000 1123.22 gzip ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 9998 9998
    500 1000 787.86 gzip ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 10000 9995
    500 1000 90.95 gzip ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 10000 9881
    500 1000 984.22 gzip ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 10000 10000
    500 1000 328.77 gzip ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 10000 9719
    

    Code (Text):
    ttfb-time-min ttfb-time-avg ttfb-time-max
    1.27s 3.46s 5.75s
    1.29s 4.05s 16.84s
    656.15ms 3.28s 9.44s
    921.73ms 3.28s 8.42s
    1.26s 3.21s 8.75s
    1.03s 3.85s 12.36s
    1.59s 7.42s 109.77s
    1.72s 4.32s 9.76s
    1.57s 6.02s 30.25s
    h2load ttfb latency result summary
    ttfb-min:  ttfb-avg:  ttfb-max:  ttfb-stddev:  ttfb-perc99-min:  ttfb-perc99-avg:  ttfb-perc99-max:
    176.401    4.321      23.482     1.455         900.484           7.308             103.408
    

    3rd place = Webinoly PHP-FPM raw numbers
    Code (Text):
    users requests req/s encoding cipher protocol started succeeded
    500 1000 574.64 gzip ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 10000 9999
    500 1000 598.82 gzip ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 10000 10000
    500 1000 577.51 gzip ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 10000 10000
    500 1000 530.71 gzip ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 10000 10000
    500 1000 939.28 gzip ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 10000 10000
    500 1000 685.01 gzip ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 10000 10000
    500 1000 295.70 gzip ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 10000 9983
    500 1000 578.78 gzip ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 10000 10000
    500 1000 572.49 gzip ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 10000 9955
    

    Code (Text):
    ttfb-time-min ttfb-time-avg ttfb-time-max
    252.15ms 8.52s 17.28s
    883.74ms 8.59s 16.59s
    603.29ms 7.98s 17.13s
    715.04ms 8.67s 18.29s
    852.35ms 6.02s 9.84s
    384.50ms 7.60s 14.50s
    1.50s 11.88s 33.62s
    614.19ms 7.88s 17.14s
    859.82ms 7.26s 17.31s
    h2load ttfb latency result summary
    ttfb-min:  ttfb-avg:  ttfb-max:  ttfb-stddev:  ttfb-perc99-min:  ttfb-perc99-avg:  ttfb-perc99-max:
    574.064    8.267      17.967     1.586         881.826           11.623            32.394
    

    4th place = EasyEngine PHP-FPM
    Code (Text):
    users requests req/s encoding cipher protocol started succeeded
    500 1000 295.40 gzip ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 10000 10000
    500 1000 154.29 gzip ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 10000 8773
    500 1000 161.65 gzip ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 10000 9952
    500 1000 159.44 gzip ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 10000 9564
    500 1000 161.41 gzip ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 10000 9692
    500 1000 295.14 gzip ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 10000 10000
    500 1000 144.61 gzip ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 10000 8164
    500 1000 161.78 gzip ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 10000 9870
    500 1000 571.01 gzip ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 10000 10000
    

    Code (Text):
    ttfb-time-min ttfb-time-avg ttfb-time-max
    1.82s 4.83s 33.61s
    967.66ms 26.94s 61.80s
    1.22s 6.15s 61.83s
    1.33s 10.48s 61.57s
    1.61s 10.18s 61.81s
    797.40ms 5.04s 33.64s
    1.64s 33.43s 61.95s
    1.52s 8.21s 61.61s
    1.61s 3.90s 17.38s
    -------------------------------------------------------------------------------------------
    h2load ttfb latency result summary
    ttfb-min:  ttfb-avg:  ttfb-max:  ttfb-stddev:  ttfb-perc99-min:  ttfb-perc99-avg:  ttfb-perc99-max:
    197.312    12.129     50.578     10.618        954.039           32.911            61.940
    -------------------------------------------------------------------------------------------
    

    5th place = OneInStack Std PHP-FPM
    Code (Text):
    users requests req/s encoding cipher protocol started succeeded
    500 1000 2132.25 gzip ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 6920 700
    500 1000 2087.88 gzip ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 7338 912
    500 1000 2165.97 gzip ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 9134 1182
    500 1000 2078.79 gzip ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 8308 1376
    500 1000 2232.27 gzip ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 8054 936
    500 1000 1955.50 gzip ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 6400 859
    500 1000 2204.79 gzip ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 8962 1024
    500 1000 2029.75 gzip ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 7132 1731
    500 1000 2061.81 gzip ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 9520 910
    

    Code (Text):
    ttfb-time-min ttfb-time-avg ttfb-time-max
    550.22ms 2.18s 2.95s
    1.16s 2.24s 3.38s
    904.68ms 2.99s 4.06s
    735.54ms 2.60s 3.74s
    674.91ms 2.58s 3.51s
    557.78ms 1.82s 2.63s
    1.24s 2.91s 3.81s
    962.11ms 2.53s 3.44s
    531.25ms 2.98s 4.46s
    -------------------------------------------------------------------------------------------
    h2load ttfb latency result summary
    ttfb-min:  ttfb-avg:  ttfb-max:  ttfb-stddev:  ttfb-perc99-min:  ttfb-perc99-avg:  ttfb-perc99-max:
    546.543    2.537      3.553      0.399         957.516           2.989             4.428
    -------------------------------------------------------------------------------------------
    

    6th place = OneInStack OpenResty PHP-FPM
    Code (Text):
    users requests req/s encoding cipher protocol started succeeded
    500 1000 2171.40 gzip ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 8978 1089
    500 1000 1943.47 gzip ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 9696 905
    500 1000 2233.61 gzip ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 9606 1044
    500 1000 2149.44 gzip ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 7086 983
    500 1000 2156.73 gzip ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 8908 999
    500 1000 1820.10 gzip ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 6006 917
    500 1000 2402.42 gzip ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 7954 1037
    500 1000 2274.09 gzip ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 8442 881
    500 1000 1807.63 gzip ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 8216 1037
    

    Code (Text):
    ttfb-time-min ttfb-time-avg ttfb-time-max
    848.57ms 2.89s 3.93s
    892.61ms 3.22s 4.92s
    785.08ms 2.98s 4.14s
    679.90ms 2.13s 2.86s
    1.35s 3.09s 3.91s
    585.51ms 1.95s 3.03s
    739.68ms 2.38s 3.14s
    662.76ms 2.57s 3.49s
    527.13ms 2.75s 4.34s
    -------------------------------------------------------------------------------------------
    h2load ttfb latency result summary
    ttfb-min:  ttfb-avg:  ttfb-max:  ttfb-stddev:  ttfb-perc99-min:  ttfb-perc99-avg:  ttfb-perc99-max:
    635.843    2.662      3.751      0.438         889.087           3.210             4.874
    -------------------------------------------------------------------------------------------
    

    7th place = VestaCP PHP-FPM - this is where my script's calculations broke: the 9th test failed totally, throwing off the statistics. If we count the 9th test as 0 successful requests and average the other 8 results over 9 tests, we get 2628/10000 = 26.28% successful requests. Unfortunately, the TTFB numbers are not calculable.
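    One way to make the statistics step robust against a totally failed run is to drop malformed rows before averaging. A minimal sketch in plain Python standing in for the datamash step (sample rows abbreviated from the VestaCP table below; the 8-field whitespace-separated layout is an assumption about the raw output format):

```python
# Average the "succeeded" column from the raw h2load result tables while
# skipping rows that a totally failed test run left without numeric fields
# (the cause of the datamash errors quoted below).

def mean_succeeded(raw: str, runs: int) -> float:
    total = 0
    for line in raw.strip().splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) == 8 and fields[-1].isdigit():
            total += int(fields[-1])           # "succeeded" column
    return total / runs                        # failed runs count as 0

sample = """users requests req/s encoding cipher protocol started succeeded
500 1000 892.04 gzip ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 7648 3449
500 1000 1027.73 gzip ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 7541 3208
500 1000  gzip ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2"""

print(mean_succeeded(sample, runs=3))  # (3449 + 3208 + 0) / 3 = 2219.0
```

    The failed run still drags the average down, but it no longer aborts the whole calculation the way the empty fields did with datamash.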
    Code (Text):
    users requests req/s encoding cipher protocol started succeeded
    500 1000 892.04 gzip ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 7648 3449
    500 1000 1027.73 gzip ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 7541 3208
    500 1000 948.23 gzip ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 7096 2114
    500 1000 457.71 gzip ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 6860 2167
    500 1000 1033.16 gzip ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 8564 2743
    500 1000 827.71 gzip ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 8299 2897
    500 1000 1100.63 gzip ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 7574 2599
    500 1000 767.96 gzip ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 8683 4483
    500 1000  gzip ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2
    datamash: invalid numeric value in line 9 field 1: 'gzip'
    datamash: invalid numeric value in line 9 field 1: 'gzip'
    datamash: invalid numeric value in line 9 field 1: 'gzip'
    datamash: invalid numeric value in line 9 field 1: 'gzip'
    datamash: invalid numeric value in line 9 field 1: 'gzip'
    datamash: invalid input: field 1 requested, line 9 has only 0 fields
    datamash: invalid input: field 1 requested, line 9 has only 0 fields
    EOF encountered in a comment.
    (standard_in) 1: syntax error
    

    which also threw off the TTFB calculations
    Code (Text):
    ttfb-time-min ttfb-time-avg ttfb-time-max
    -------------------------------------------------------------------------------------------
    h2load ttfb latency result summary
    ttfb-min:  ttfb-avg:  ttfb-max:  ttfb-stddev:  ttfb-perc99-min:  ttfb-perc99-avg:  ttfb-perc99-max:
    0.000      0.000      0.000      0.000         0.000             0.000             0.000
    -------------------------------------------------------------------------------------------
    


    It has been interesting doing these comparative LEMP stack benchmarks for PHP-FPM as well as the earlier static HTML HTTP/2 HTTPS Nginx benchmarks and non-HTTPS Nginx benchmarks. Been having heaps of fun playing with Ubuntu 18.04 LTS LXD containers on an ssdnodes 4 CPU, 16GB RAM, 80GB disk KVM VPS :D
     
    Last edited: Jun 20, 2018