
Nginx SSL Offloading, Encryption and Certificates with NGINX

Discussion in 'Nginx and PHP-FPM news & discussions' started by eva2000, May 25, 2014.

    NGINX provides a number of SSL features that allow it to handle most SSL requirements. NGINX uses OpenSSL and the power of standard processor chips to provide cost-effective SSL performance. As the power of standard processor chips continues to increase and as chip vendors add cryptographic acceleration support, the cost advantage of using standard processor chips over specialized SSL chips continues to widen.




    Decrypting HTTPS traffic on NGINX brings many benefits

    The three major use cases for NGINX with SSL are:

    SSL Offloading


    When NGINX is used as a proxy, it can offload the SSL decryption processing from the backend servers. There are a number of advantages to doing decryption at the proxy:

    • Improved performance. The biggest performance hit when doing SSL decryption is the initial handshake. To improve performance, the server doing the decryption can cache SSL session IDs and manage TLS session tickets. If this is done at the proxy, then all requests from the same client will be able to use the cached values; if it is done on the backend servers, then a client’s requests may go to more than one server, requiring the client to re-authenticate. The use of TLS tickets can help mitigate this issue, but they are not supported by all clients and can be difficult to configure and manage.
    • Better utilization of the backend servers. SSL processing is very CPU intensive, and is becoming more intensive as key sizes increase. Removing this work from the backend servers allows them to focus on what they are most efficient at, delivering content.
    • Intelligent routing. By decrypting the traffic, the proxy has access to the request content, such as headers, URI, etc., and can use this data to route requests.
    • Certificate management. Certificates only need to be purchased and installed on the proxy servers and not the backend servers. This saves both time and money.
    • Security patches. If vulnerabilities arise in the SSL stack, the appropriate patches need only be applied to the proxy servers.
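
    As a sketch of the session-reuse point above, a shared session cache can be enabled on the decrypting proxy so that all worker processes reuse cached sessions. The directives are from ngx_http_ssl_module; the cache size and timeout shown are illustrative values, not recommendations:

    server {
        listen 443 ssl;
        server_name www.example.com;
        ssl_certificate www.example.com.crt;
        ssl_certificate_key www.example.com.key;

        # A cache shared between all worker processes;
        # 1 MB stores roughly 4000 sessions.
        ssl_session_cache shared:SSL:1m;

        # How long a client may reuse a cached session
        # before a full handshake is required again.
        ssl_session_timeout 10m;
    }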


    SSL Encryption to the Origin Servers


    There may be times when you need NGINX to encrypt requests that it sends to the backend servers. These requests may arrive at the NGINX server as plain text or as encrypted traffic that NGINX must decrypt in order to make a routing decision. Using a pool of keepalive connections to the backend servers minimizes the number of SSL handshakes and thus maximizes SSL performance. This is achieved very simply by configuring NGINX to proxy to “https”, and NGINX then automatically encrypts any traffic that is not already encrypted.
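
    A minimal sketch of this setup, assuming two hypothetical backends listening for HTTPS on port 443. The keepalive directive keeps idle connections to the upstream group open so that repeated requests avoid new SSL handshakes; HTTP/1.1 and a cleared Connection header are needed for upstream keepalive to take effect:

    upstream secure_backends {
        server 192.168.100.100:443;
        server 192.168.100.101:443;
        # Keep up to 16 idle connections open per worker,
        # avoiding a new SSL handshake for each request.
        keepalive 16;
    }

    server {
        listen 80;
        server_name www.example.com;
        location / {
            proxy_pass https://secure_backends;
            # Required for upstream keepalive connections.
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }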

    End-to-End Encryption


    Because NGINX can do both decryption and encryption, you can achieve end-to-end encryption of all requests while still allowing NGINX to make Layer 7 routing decisions. In this case the clients communicate with NGINX over HTTPS, and NGINX decrypts the requests and then re-encrypts them before sending them to the backend servers. This can be desirable when the NGINX proxy is not colocated in a data center with the backend servers. As more and more servers are moved to the cloud, it is becoming more necessary to use HTTPS between the proxy and the backend servers.

    Client Certificates


    NGINX can handle SSL client certificates and can be configured to make them optional or required. Client certificates are a way of restricting access to your systems to pre-approved clients only, without requiring a password. You can manage revoked certificates by adding them to a Certificate Revocation List (CRL), which NGINX checks to make sure a client certificate is still valid.
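
    A sketch of required client-certificate verification, assuming hypothetical CA certificate and CRL file names (ca.crt, ca.crl):

    server {
        listen 443 ssl;
        server_name www.example.com;
        ssl_certificate www.example.com.crt;
        ssl_certificate_key www.example.com.key;

        # CA certificate used to verify client certificates.
        ssl_client_certificate ca.crt;
        # Require a valid client certificate; "optional" would
        # allow clients without one and just record the result.
        ssl_verify_client on;
        # Reject any certificate listed in this revocation list.
        ssl_crl ca.crl;
    }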

    Other Features


    There are a number of features available in support of these use cases, including but not limited to the following:

    • Multiple certificates. A single NGINX instance can support many certificates for different domains and can scale up to support hundreds of thousands of certificates. It is a common use case to have an NGINX instance serving many IP addresses and domains with each domain requiring its own certificate.
    • OCSP Stapling. When this is enabled, NGINX will include a time-stamped OCSP response signed by the certificate authority that the client can use to verify the server’s certificate and avoid the performance penalty of having to contact the OCSP server directly.
    • SSL Ciphers. You can specify which ciphers are enabled.
    • SSL Protocols. You can specify which protocols are enabled, including SSLv2, SSLv3, TLSv1, TLSv1.1 and TLSv1.2.
    • Chained Certificates. NGINX supports Certificate Chains, used when the website’s certificate is not signed directly by the root certificate of a CA (Certificate Authority), but rather by a series of intermediate certificates. The web server presents a ‘certificate chain’ containing the intermediate certificates, so that the web client can verify the chain of trust that links the website certificate to a trusted root certificate.
    • HTTPS server optimizations. NGINX can be tuned to maximize its SSL performance by configuring the number of worker processes, using keepalive connections and using an SSL session cache.
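
    Several of the features listed above map to a single directive each. A sketch combining the protocol, cipher and OCSP stapling settings in one server block; the cipher string and trusted-certificate file name are illustrative assumptions:

    server {
        listen 443 ssl;
        server_name www.example.com;
        ssl_certificate www.example.com.crt;
        ssl_certificate_key www.example.com.key;

        # Enable only these protocol versions.
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        # Restrict the ciphers offered to clients.
        ssl_ciphers HIGH:!aNULL:!MD5;

        # Staple a CA-signed OCSP response to the handshake.
        ssl_stapling on;
        ssl_stapling_verify on;
        # Certificates used to verify the OCSP response.
        ssl_trusted_certificate ca.crt;
    }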

    For more details, see Configuring HTTPS Servers, the Admin Guide: SSL Termination and ngx_http_ssl_module

    Examples


    Here are a few examples demonstrating some of NGINX’s SSL features. These examples assume a basic understanding of NGINX configurations.

    Let’s say you have the following configuration for a simple site to handle HTTP traffic for www.example.com and to proxy it to an upstream group:

    upstream backends {
        server 192.168.100.100:80;
        server 192.168.100.101:80;
    }

    server {
        listen 80;
        server_name www.example.com;
        location / {
            proxy_pass http://backends;
        }
    }

    And now you want to add HTTPS support, with NGINX decrypting the traffic using the certificate and private key and communicating with the backend servers over HTTP:

    upstream backends {
        server 192.168.100.100:80;
        server 192.168.100.101:80;
    }

    server {
        listen 80;
        listen 443 ssl;  # The ssl parameter tells NGINX to decrypt the traffic
        server_name www.example.com;
        ssl_certificate www.example.com.crt;      # This is the certificate file
        ssl_certificate_key www.example.com.key;  # This is the private key file
        location / {
            proxy_pass http://backends;
        }
    }

    Now let’s say that rather than needing to decrypt the traffic, you want to receive traffic over HTTP but send it to the backend servers over HTTPS:


    upstream backends {
        server 192.168.100.100:443;
        server 192.168.100.101:443;
    }

    server {
        listen 80;
        server_name www.example.com;
        location / {
            proxy_pass https://backends;  # Specifying "https" causes NGINX to
                                          # encrypt the traffic
        }
    }
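
    Combining the two configurations above gives the end-to-end encryption case described earlier: NGINX decrypts the client’s traffic on port 443, makes its Layer 7 routing decision, and re-encrypts before proxying to the backends:

    upstream backends {
        server 192.168.100.100:443;
        server 192.168.100.101:443;
    }

    server {
        listen 443 ssl;  # Decrypt incoming client traffic
        server_name www.example.com;
        ssl_certificate www.example.com.crt;
        ssl_certificate_key www.example.com.key;
        location / {
            proxy_pass https://backends;  # Re-encrypt to the backends
        }
    }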





    The post SSL Offloading, Encryption and Certificates with NGINX appeared first on NGINX.
