
[Benchmarks] Optimizing and tuning WordPress and XenForo with NewRelic

Discussion in 'Dedicated server hosting' started by deltahf, Oct 8, 2021.

  1. eva2000 (Administrator, Staff Member)
    Yup, that's what I do - my Evernote notes are in the 10,000s hehe

    Closest would be https://community.centminmod.com/forums/centmin-mod-user-tutorials-guides.50/ - might rework the forum title hmmm.

    The KeyCDN performance tool has a down arrow on the right side - click it to reveal the HTTP response headers for each request. Check the cf-cache-status and cf-ray headers for the cache status and the Cloudflare datacenter the request was served from, then compare that against your own requests using response header inspection in your browser's network dev tools.


    Should be fine https://developers.cloudflare.com/workers/learning/how-the-cache-works :)

    You're not using Cloudflare Polish WebP?
     
  2. deltahf (Premium Member)
    Oh, I should have clicked that arrow... :ROFLMAO:

    Sure enough, the "x-cache-handler: cache-enabler-engine" response header inserted by Cache Enabler is NOT listed in the KeyCDN tool. The header shows as expected when I view the site in an Incognito browser window, and other third-party response header tools like https://httpstatus.io/ also show the header. I wonder what the KeyCDN tool is doing?

    I'm using Polish but not with WebP. I did when I first switched to Cloudflare a few years ago, but some of my users and writers complained about WebP breaking their ability to save and edit images. A lot of users use my forums to swap and share images that they want to edit/collaborate on. They didn't understand what WebP was or what was actually going on; I was busy with other things and just disabled it at the time to save everyone the hassle.

    Now I see that I can enable it or disable Polish based on Page Rules. If I enable WebP and then set a Page Rule to disable Polish on forum attachments and image proxy images, that might work. Of course then I will lose Polish (non-WebP) compression on forum images, so... I don't know. I wish there was a way to just enable/disable only WebP conversion with Page Rules. Or maybe I should just do it anyway and drag all my users into the future kicking and screaming. :whistle:
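    For reference, a Page Rule along these lines should handle the disable-Polish-on-attachments part of that idea (the URL pattern is just illustrative):

    Code (Text):
    URL: example.com/forum/attachments/*
    Setting: Polish -> Off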

    I didn't have much time to work on this today, but as of now my plan of attack looks like this:
    1. Use your optimise-images.sh on all the files in my /wp-content/uploads folder.
    2. Install/configure either EWWW or another image compression tool to compress uploads automatically in the future.
    3. Configure Cloudflare to do WebP conversion and delivery.
     
  3. deltahf (Premium Member)
    I figured out why KeyCDN's TTFB Performance Tester (and other TTFB tools) were bypassing Cache Enabler. :)

    Those tools send "HEAD" requests instead of "GET" requests. I noticed this after closer inspection of the server request logs.

    HTTP/1.1: Method Definitions
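    A quick way to reproduce this yourself (hostname is a placeholder): curl's -I flag sends a HEAD request, while -D - dumps the response headers of a normal GET:

    Code (Text):
    # GET request - the x-cache-handler header shows up when Cache Enabler serves the page
    curl -s -o /dev/null -D - https://example.com/ | grep -i x-cache-handler

    # HEAD request - bypasses the cached file, so the header is missing
    curl -s -I https://example.com/ | grep -i x-cache-handler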

    Cache Enabler's custom nginx rules use 405 errors and a @fallback "named location" which effectively bypasses the cached file for any request that is not a GET:

    Code (Text):
    location / {
            error_page 405 = @fallback;
            recursive_error_pages on;
    
            # check request method
            if ( $request_method != GET ) {
                return 405;
            }
    ...
    }
    


    I think it might be worth modifying the nginx configuration so that it serves cached files for HEAD requests too. It is somewhat ironic that the benefits of KeyCDN's own speed plugin are not visible in their own testing tool because of this! :ROFLMAO:
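    Something like this should do it, assuming the rest of Cache Enabler's location block stays unchanged - nginx's "if" supports regex matching, so both GET and HEAD can be allowed through:

    Code (Text):
    location / {
            error_page 405 = @fallback;
            recursive_error_pages on;

            # allow both GET and HEAD to be served from the cached file
            if ( $request_method !~ ^(GET|HEAD)$ ) {
                return 405;
            }
    ...
    }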
     
  4. eva2000 (Administrator, Staff Member)
    I guess you set up Cache Enabler advanced caching manually and not via the centmin.sh menu option 22 auto WordPress installer, as it's an existing WordPress site? Because Centmin Mod's installer for Cache Enabler doesn't use the 405 directive that excludes HEAD requests :)
     
  5. deltahf (Premium Member)
    Ha, yep, that explains it. Another reminder to always do things "the Centminmod way". :D
     
  6. deltahf (Premium Member)
    A very interesting update this time.

    While checking my Crawl Stats in Google Search Console (Settings -> Crawl Stats), I discovered that millions of XenForo search results pages (/forum/search/*) were being crawled by Google every single day. I could not believe it, because I thought robots.txt would prevent that... but when I checked, I realized I had not updated my robots.txt since 2015!

    That was before I upgraded to XF 2.x, so I have unfortunately had an outdated robots.txt file for six and a half years. I know the importance of robots.txt but always just thought, "oh yeah, I have that taken care of". The fact that it took me so long to discover this is embarrassing, but I digress.

    This is the contents of my new file, inspired by the defaults on XenForo.com:

    Code (Text):
    # BAD BOTS
    User-agent: PetalBot
    User-agent: AspiegelBot
    User-agent: AhrefsBot
    User-agent: SemrushBot
    User-agent: DotBot
    User-agent: MauiBot
    User-agent: MJ12bot
    Disallow: /
    
    # AMAZON BOT
    User-agent: Amazonbot
    Disallow: /forum/threads/*/reply
    
    # ALL BOTS
    User-agent: *
    Disallow: /wp-admin/
    Disallow: /forum/whats-new/
    Disallow: /forum/account/
    Disallow: /forum/attachments/
    Disallow: /forum/goto/
    Disallow: /forum/posts/
    Disallow: /forum/login/
    Disallow: /forum/search/
    Disallow: /forum/admin.php
    Allow: /
    


    I implemented the necessary changes and submitted the update to Google. To Google's credit, the bot picked up the changes and altered its behavior almost immediately.

    This had absolutely destroyed my crawl budget, so it was interesting to watch as suddenly Google started crawling a lot more of my legitimate content, most of it older.

    It is still too early for these changes to show up in the Google Search Console reports, but my other reports saw drastic changes. At the same time, I noticed that there were still a lot of requests coming from AhrefsBot and SemrushBot. I blocked them with Cloudflare Firewall rules.
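    For anyone wanting to do the same, a Firewall Rule with a Block action and an expression along these lines does it at the edge:

    Code (Text):
    (http.user_agent contains "AhrefsBot") or (http.user_agent contains "SemrushBot")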

    Most of the worthless requests were going to XenForo, so it saw the biggest change in request volume in NewRelic. The changes were implemented late afternoon on Jan. 11, 2022.

    You can see request volume drop:

    [IMG]

    With the cheap bot requests gone, this of course makes XF look a bit slower on average! But that's expected, and these numbers are more representative of real traffic:

    [IMG]

    Cloudflare Analytics were also quite interesting. "Page views" and "visits" both collapsed, but bandwidth and requests remained largely the same, as you can see here:

    Screen Shot 2022-01-15 at 8.00.40 PM.png


    Cache performance also looks so much better, as the origin is not having to serve so many crap requests. :)

    Screen Shot 2022-01-15 at 8.03.04 PM.png
     
  7. deltahf (Premium Member)
    Oh yes, I almost forgot... it's custom Cloudflare Worker time. :)

    I have finally implemented a custom CF Worker to cache just my WordPress pages. It took some tweaking, but the results have been incredible!

    Because my WordPress install is at the root and my articles are not in any subdirectory, this creates a possible problem: the Worker will run on every request to the domain not serviced by other Workers. So I thought it would be wise to set up a bunch of null Worker routes on my domain so the Worker isn't inspecting a lot of requests it will never need to cache. (A null route is just a Worker route whose service is set to "None". As far as I know, this is the only way to achieve this.)

    I set up null routes for a few custom directories I use on my site in addition to WP directories. There are also a ton of requests to files like the favicon and Apple touch icons, so I included those as well.

    Code (Text):
    /wp-content/*
    /wp-includes/*
    /wp-admin/*
    /favicon*
    /apple*
    /forum/*
    


    For the other requests, I still had concerns about my Worker caching things that it should not. So I decided to define a custom HTTP header in my theme's functions.php file, which is only inserted on the types of pages I want cached. There may be a better way to accomplish this, but it seemed like the most reliable and straightforward approach to me.

    functions.php code:

    PHP:
    /**
     * Function to include the MyCF-Cache HTTP header that will be used to signal
     * which pages to cache for the custom Cloudflare Worker.
     *
     * Here is why we use the 'template_redirect' hook:
     * https://wordpress.stackexchange.com/a/258899/160222
     */
    add_action( 'template_redirect', 'send_mycf_header' );
    function send_mycf_header() {
        // If this is a page type we want to cache, send the header
        if ( is_home() OR is_category() OR is_tag() OR is_author() OR ( is_single() && 'post' == get_post_type() ) ) {
            header( 'MYCF-Cache: true' );
        }
    }
    I discovered this header is not included if the page is returned by Cache Enabler, because of course the request never touches PHP and the file is served directly by Nginx. However, those responses include an "x-cache-handler" header.

    So, in my Worker, I check for the presence of either of those headers before determining if the request should be cached. Here is the Worker with some descriptive comments for anyone curious about how it works. It also includes a lot of "console.log" statements to make testing and debugging easier using Wrangler's tail command.

    Code:
    addEventListener('fetch', event => {
      // We add "passThroughOnException()" here in case
      // there is an error. This line means it will automatically
      // pass the request through to the origin transparently
      // in case something goes wrong in the Worker.
      event.passThroughOnException()
      event.respondWith(handleRequest(event))
    })
    
    async function handleRequest(event) {
      const request = event.request
      const { city, region, country, colo } = request.cf
      const ip = request.headers.get("CF-Connecting-IP")
      console.log(`Request from ${ip} (${city}, ${region}, ${country})`)
    
      /**
       * If the request contains wp or xf cookies, bypass the cache
       * and fetch from the origin.
       */
      if (request.headers.get("Cookie")) {
        const cookies = request.headers.get("Cookie")
        const has_target_cookies = Boolean(
          cookies.includes('xf_user') ||
          cookies.includes('wordpress_logged_in') ||
          cookies.includes('wp-postpass') ||
          cookies.includes('woocommerce_')
        );
     
        if (has_target_cookies) {
          console.log(`CACHE BYPASSED: target cookies detected`)
          return fetch(request)
        }
      }
     
      /**
       * Check to see if this page is already in the cache. If not, we need
       * to fetch it from the origin and (maybe) store it into the cache.
       *
       * The cacheKey will be constructed from the URL and will only contain
       * the origin and pathname. If we do not do this, the full URL, including
       * any query parameters, will be cached. This will break caching for any links
       * with utm_ tags, social media tools, or clicks via Facebook.
       *
       * https://developers.cloudflare.com/workers/examples/cache-api
       */
      const cache = caches.default    // use the "default" cache
      const url = new URL(request.url)
      const cacheKey = url.origin + url.pathname
    
      let response = await cache.match(cacheKey)
    
      if (!response) {
        // If no response from the cache, we will fetch it from the origin
        response = await fetch(request)
    
        // Do not cache an unsuccessful (non-200) response
        if (response.status != 200) {
          console.log(`CACHE BYPASS: ${response.status} response code, will not be cached`)
          return response
        }
    
        // Check custom headers
        const myCacheHeader = response.headers.get('mycf-cache')
        const cacheEnablerHeader = response.headers.get('x-cache-handler')
    
        if (myCacheHeader || cacheEnablerHeader) {
          console.log(`CACHE MISS: Storing into ${colo} edge cache`)
    
          // We will be storing this response in the cache. We must create a copy and
          // add our own Cache-Control headers to store it in cache
          // for seven days (604800 seconds).
          response = new Response(response.body, response)
          response.headers.append("Cache-Control", "s-maxage=604800")
    
          // Store the fetched response into the cache. We use "waitUntil" here to return
          // the response without having to wait for the cache write to complete.
          event.waitUntil(cache.put(cacheKey, response.clone()))
        } else {
          console.log(`CACHE BYPASS: No custom headers detected, not something we want to cache`)
        }
      } else {
        console.log(`CACHE HIT: Serving from ${colo} edge cache`)
      }
    
      return response
    }
    
    This was paired with the Cloudflare WordPress plugin (with APO disabled) to clear the cache as necessary on post publication.
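    For reference, streaming those console.log statements from the deployed Worker with Wrangler looks like this (the Worker name is a placeholder):

    Code (Text):
    wrangler tail my-wp-cache-worker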

    The results speak for themselves. :) Here are the KeyCDN TTFB Performance Tester results.

    No worker (Cache Enabler only):
    new server ttfb.png

    With CF Worker after cache primed at all edge locations:
    all cache hits.png

    Comparing these numbers to where I was on my old server in November: for a user in London, that is a 90% faster TTFB, and for a user in Sydney, that's a 94% faster TTFB!

    The Worker was implemented on January 12, 2022, so there are just a few days' worth of data, but we can already see a change in Google Analytics server response times:

    Screen Shot 2022-01-15 at 9.33.47 PM.png

    And for some more perspective, let's take that chart back out to October 1... :)

    Screen Shot 2022-01-15 at 9.34.20 PM.png

    This is a major improvement so I am hopeful this will have at least some impact on my Google rankings. I will be keeping a close eye on my Search Console data over the coming weeks and months.
     
  8. eva2000 (Administrator, Staff Member)
    Yup, crawlers hitting search paths - I had to do the same as well. Google Search Console is very useful for diagnosing such issues.
    Tiny changes in robots.txt can have a big impact.
    You can add an array of exclusion paths to your worker's Cache API logic instead of using many workers. If you add path inclusion/exclusion logic, you can then do much more granular caching based on URLs and even query strings etc. In your case you would be inspecting url.pathname - see the sketch below.
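    You could do something like this rough sketch early in handleRequest (the prefixes are just examples; reuse the existing url variable rather than redeclaring it):

    Code (Text):
    // bypass the Cache API entirely for excluded path prefixes
    const BYPASS_PREFIXES = ['/wp-admin/', '/wp-content/', '/forum/']

    const url = new URL(request.url)
    if (BYPASS_PREFIXES.some(prefix => url.pathname.startsWith(prefix))) {
      return fetch(request)
    }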
    Awesome result there. It will definitely show up in your Google Core Web Vitals metrics eventually, via the Chrome User Experience Report that Google PageSpeed Insights' field metrics are based on.
     
    Last edited: Jan 16, 2022
  9. deltahf (Premium Member)
    That would definitely be the easiest solution, but cost becomes an issue if a Worker is handling all those requests.

    Right now my Workers in total (I have other Workers doing things) serve around 150k requests/day, or 4.5 million/month. My CF Paid plan includes 10 million Worker requests/month, so they don't cost me anything. Without the null routes, all 3.5+ million requests a day would be getting processed by Workers and it would cost around $70+ extra per month! :eek:
     
  10. eva2000 (Administrator, Staff Member)
    Interesting perspective, and valid on the cost side :) Right now my CF Worker usage tops out around 80 million requests/month :D
     
  11. deltahf (Premium Member)
    Another quick optimization win...

    For some time now I have been battling with WordPress plugins that include a lot of extra JS and CSS files. I am a fairly proficient WordPress developer — I built my site's own theme from scratch and created custom plugins — but wrestling with these stupid files was always a challenge.

    Today, though, I finally figured out the problem was the official Braintree (payment processor) WooCommerce plugin. It was forcing the entire jQuery library to load, along with some other jQuery-related plugins and CSS, on every WordPress page, though it wasn't obvious that this was the plugin causing the issues. It was only when I started comparing the files included on the page between my local development site and my public production site that I became suspicious of it.

    Anyway, I switched over to the official WooCommerce Stripe plugin, which is apparently much better built, and the extra files have finally been removed without the need for any ugly hacks on my part. :)

    The result is a roughly 36% reduction in average JS file download size on WordPress pages, from 114KB to 73KB. According to my Cloudflare Analytics report, this will save me and my visitors over 650MB of bandwidth per day, or nearly 20GB per month! :D
     
  12. eva2000 (Administrator, Staff Member)
    Very nice. jQuery in WordPress can be problematic, especially if plugins bundle their own copy, so you can end up with duplicate files being served too.

    Just love seeing and hearing about your optimization journey as it plays out :D
     
  13. Jay Chen (Active Member)
    Very detailed - now I'm curious to see if I can squeeze more performance out of my WordPress sites.

    What service did you use to test the speed from various locations?
    upload_2022-1-19_11-47-43.png
     
  14. eva2000 (Administrator, Staff Member)
  15. Jay Chen (Active Member)
  16. deltahf (Premium Member)
    Yes, extremely tricky!

    I thought jQuery was being included by a "quiz" plugin that we use to make quiz-type articles, which are pretty popular on our site. I assumed that plugin needed jQuery and was too poorly coded to insert the library only on posts/pages that actually had quizzes on them, so I had kind of given up on removing it. I was actually looking into using Cloudflare Workers' HTMLRewriter to see if I could strip the library conditionally, depending on whether or not the page included a quiz! :confused: Fortunately I found the real culprit.
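    For anyone in a similar spot, the conditional dequeue itself is simple enough in a theme - the hard part is knowing which script handle and condition to target. A rough sketch, assuming a hypothetical [quiz] shortcode:

    PHP:
    // dequeue jQuery late, only on pages that don't contain the quiz shortcode
    add_action( 'wp_enqueue_scripts', function () {
        $post = get_post();
        if ( ! $post || ! has_shortcode( $post->post_content, 'quiz' ) ) {
            wp_dequeue_script( 'jquery' );
        }
    }, 100 );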

    There is SO MUCH performance to be pulled out of WordPress, and there are SO MANY bad plugins out there.

    That's one reason I like NewRelic so much. It makes it easier to see where the problems are coming from.
     
  17. deltahf (Premium Member)
    A few graphs from Cloudflare Analytics...

    As a reminder, the Cloudflare Worker which caches WordPress pages was implemented on January 11, 2022.

    Here we can see how it affected the median page load metrics for the entire site:

    full site.png

    Now we filter this to just include WordPress pages... Look at the instant drop in "Request" time. :) We can also exclude traffic from the United States, where the server is actually hosted in Ashburn, VA, to make the improvement really stand out!

    Screen Shot 2022-01-25 at 12.58.12 AM.png

    And finally, for fun, let's look at how much things have improved for visitors on the other side of the world in Australia... :D

    Screen Shot 2022-01-25 at 12.58.41 AM.png

    How awesome is that?

    Even inside the United States, though, we can still see a real improvement in speeds for visitors due to the number of POPs Cloudflare has in U.S. cities:

    Screen Shot 2022-01-25 at 1.07.39 AM.png
     


  18. eva2000 (Administrator, Staff Member)
    Bloody awesome mate (y) Looking at the 99th percentile metrics in Cloudflare's Web Site Analytics should be telling too, to see what the peaks were around.
     
  19. deltahf (Premium Member)
    Still going. :)

    For the past two weeks, I have had the Image Optimizer for XF 2.x plugin optimizing my forum's 2.1 million attachments and proxied images in the background. (I actually added a secondary cron job for it so it would run every minute instead of the default five minutes, so it wouldn't take 8 months to complete... :whistle:)
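    The extra cron entry is just a standard every-minute trigger of XenForo's CLI job runner - the path here is a placeholder:

    Code (Text):
    * * * * * php /home/nginx/domains/example.com/public/forum/cmd.php xf:run-jobs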

    Screen Shot 2022-01-26 at 11.24.28 PM.png

    This is pretty crazy. Due to the age of my forum (21 years) and the fact that many of these images had never been optimized beyond XenForo's (or vBulletin's... or Ikonboard's...!) modifications, I think these results were especially dramatic. I will actually be able to downgrade my server backup plan because of this plugin and save a good bit of money. I could even go a step further and ditch the secondary HDD which stores my attachments/image proxy and put everything on the 1TB NVMe drive... but I'll have to think about that a bit more.

    Looking at some of the images, a few of the default settings might have been slightly too aggressive. I am now using 70-90 for the PNGQuant quality range and 1 for PNGQuant speed. I've also increased the JPEG optimization level to 86. However, the vast majority of the optimized images are over a year old and will rarely be viewed, so I think it's a good compromise. Needless to say, I highly recommend this plugin for anyone on XF2.
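    Assuming the plugin drives pngquant and jpegoptim (or equivalents), those settings map to roughly these standalone invocations:

    Code (Text):
    pngquant --quality=70-90 --speed 1 image.png
    jpegoptim --max=86 image.jpg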

    I was waiting for this process to complete before enabling Cloudflare's WebP conversion (Polish). Now that it has finished and the option is enabled, I did a full CF cache purge for my site and ventured out to see just what kind of real-world improvement this brought.
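    The full purge can also be scripted via Cloudflare's API if you'd rather not click through the dashboard (zone ID and token are placeholders):

    Code (Text):
    curl -X POST "https://api.cloudflare.com/client/v4/zones/ZONE_ID/purge_cache" \
         -H "Authorization: Bearer API_TOKEN" \
         -H "Content-Type: application/json" \
         --data '{"purge_everything":true}'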

    My forum hosts some huge gallery threads and users stuff them with high-resolution images, so we have some pages that are ridiculously large.

    I checked specific pages in some of these threads and looked at the page size in Chrome Developer Tools at three stages: before optimization, after plugin optimization, and finally after enabling CF WebP conversion. This lets us see just how much was gained from each step.

    Thread 1
    • Original size: 68.3 MB
    • Plugin optimized: 26.7 MB
    • WebP enabled: 20.8 MB (70% smaller than original)
    • Total savings: 47.5 MB

    Thread 2
    • Original size: 73.5 MB
    • Plugin optimized: 43.8 MB
    • WebP enabled: 35 MB (52% smaller than original)
    • Total savings: 38.5 MB

    Thread 3
    • Original size: 151 MB
    • Plugin optimized: 58.8 MB
    • WebP enabled: 48.4 MB (68% smaller than original)
    • Total savings: 102.6 MB

    These improvements should be especially noticeable for mobile users on capped plans and those on slower internet connections. :)
     
  20. eva2000 (Administrator, Staff Member)
    Very nice indeed! I haven't tried that addon yet, but one XenForo user we know in common had 4TB of XenForo attachments and used my optimise-images.sh shell script to optimize and resize them :)

    One of my clients was curious to know if he could optimize his XenForo avatar image sizes, so I started writing a newer XenForo-specific image optimizer based on optimise-images.sh just for XenForo avatars, which will also add AVIF and animated WebP support for GIFs. I will possibly extend this to XenForo attachments one day, but avatars are easier due to their smaller sizes and fixed, known dimensions :)

    The original 1.jpg and 2.jpg in the example below are actually PNG images, because the XenForo avatar upload system always renames any image format to a .jpg extension!
    Code (Text):
    ./xf-avatar-optimizer.sh list
    nginx  nginx  982     Jan  23  22:41  /data/avatars/s/0/2.webp
    nginx  nginx  1629    Jan  23  22:41  /data/avatars/s/0/2.jpg.pngquant
    nginx  nginx  4128    Jan  23  09:04  /data/avatars/s/0/2.jpg.optipng
    nginx  nginx  4625    Jan  23  22:41  /data/avatars/s/0/2.jpg.opt
    nginx  nginx  4648    Jan  22  23:32  /data/avatars/s/0/2.jpg
    nginx  nginx  1437    Jan  23  22:41  /data/avatars/s/0/2.avif
    nginx  nginx  570     Jan  23  22:41  /data/avatars/s/0/1.webp
    nginx  nginx  730     Jan  23  22:41  /data/avatars/s/0/1.jpg.pngquant
    nginx  nginx  1488    Jan  23  09:04  /data/avatars/s/0/1.jpg.optipng
    nginx  nginx  1788    Jan  23  22:41  /data/avatars/s/0/1.jpg.opt
    nginx  nginx  1809    Jan  17  11:52  /data/avatars/s/0/1.jpg
    nginx  nginx  1007    Jan  23  22:41  /data/avatars/s/0/1.avif
    nginx  nginx  9546    Jan  23  22:41  /data/avatars/o/0/2.webp
    nginx  nginx  25234   Jan  23  22:41  /data/avatars/o/0/2.jpg.pngquant
    nginx  nginx  112717  Jan  23  09:04  /data/avatars/o/0/2.jpg.optipng
    nginx  nginx  127125  Jan  23  22:41  /data/avatars/o/0/2.jpg.opt
    nginx  nginx  127102  Jan  22  23:32  /data/avatars/o/0/2.jpg
    nginx  nginx  9407    Jan  23  22:41  /data/avatars/o/0/2.avif
    nginx  nginx  1142    Jan  23  22:41  /data/avatars/o/0/1.webp
    nginx  nginx  1169    Jan  23  22:41  /data/avatars/o/0/1.jpg.pngquant
    nginx  nginx  2065    Jan  23  09:04  /data/avatars/o/0/1.jpg.optipng
    nginx  nginx  2092    Jan  23  22:41  /data/avatars/o/0/1.jpg.opt
    nginx  nginx  2087    Jan  17  11:52  /data/avatars/o/0/1.jpg
    nginx  nginx  1270    Jan  23  22:41  /data/avatars/o/0/1.avif
    nginx  nginx  2184    Jan  23  22:41  /data/avatars/m/0/2.webp
    nginx  nginx  3542    Jan  23  22:41  /data/avatars/m/0/2.jpg.pngquant
    nginx  nginx  12178   Jan  23  09:04  /data/avatars/m/0/2.jpg.optipng
    nginx  nginx  13601   Jan  23  22:41  /data/avatars/m/0/2.jpg.opt
    nginx  nginx  13634   Jan  22  23:32  /data/avatars/m/0/2.jpg
    nginx  nginx  2672    Jan  23  22:41  /data/avatars/m/0/2.avif
    nginx  nginx  934     Jan  23  22:41  /data/avatars/m/0/1.webp
    nginx  nginx  1232    Jan  23  22:41  /data/avatars/m/0/1.jpg.pngquant
    nginx  nginx  3351    Jan  23  09:04  /data/avatars/m/0/1.jpg.optipng
    nginx  nginx  4193    Jan  23  22:41  /data/avatars/m/0/1.jpg.opt
    nginx  nginx  4214    Jan  17  11:52  /data/avatars/m/0/1.jpg
    nginx  nginx  1448    Jan  23  22:41  /data/avatars/m/0/1.avif
    nginx  nginx  4592    Jan  23  22:41  /data/avatars/l/0/2.webp
    nginx  nginx  8683    Jan  23  22:41  /data/avatars/l/0/2.jpg.pngquant
    nginx  nginx  36771   Jan  23  09:04  /data/avatars/l/0/2.jpg.optipng
    nginx  nginx  41347   Jan  23  22:41  /data/avatars/l/0/2.jpg.opt
    nginx  nginx  41411   Jan  22  23:32  /data/avatars/l/0/2.jpg
    nginx  nginx  5042    Jan  23  22:41  /data/avatars/l/0/2.avif
    nginx  nginx  1734    Jan  23  22:41  /data/avatars/l/0/1.webp
    nginx  nginx  2512    Jan  23  22:41  /data/avatars/l/0/1.jpg.pngquant
    nginx  nginx  7403    Jan  23  09:04  /data/avatars/l/0/1.jpg.optipng
    nginx  nginx  9375    Jan  23  22:41  /data/avatars/l/0/1.jpg.opt
    nginx  nginx  9410    Jan  17  11:52  /data/avatars/l/0/1.jpg
    nginx  nginx  2408    Jan  23  22:41  /data/avatars/l/0/1.avif
    nginx  nginx  9694    Jan  23  22:41  /data/avatars/h/0/2.webp
    nginx  nginx  23463   Jan  23  22:41  /data/avatars/h/0/2.jpg.pngquant
    nginx  nginx  110569  Jan  23  09:04  /data/avatars/h/0/2.jpg.optipng
    nginx  nginx  124695  Jan  23  22:41  /data/avatars/h/0/2.jpg.opt
    nginx  nginx  124820  Jan  22  23:32  /data/avatars/h/0/2.jpg
    nginx  nginx  9318    Jan  23  22:41  /data/avatars/h/0/2.avif
    nginx  nginx  3972    Jan  23  22:41  /data/avatars/h/0/1.webp
    nginx  nginx  3802    Jan  23  22:41  /data/avatars/h/0/1.jpg.pngquant
    nginx  nginx  11459   Jan  23  09:04  /data/avatars/h/0/1.jpg.optipng
    nginx  nginx  13184   Jan  23  22:41  /data/avatars/h/0/1.jpg.opt
    nginx  nginx  13218   Jan  17  11:52  /data/avatars/h/0/1.jpg
    nginx  nginx  4580    Jan  23  22:41  /data/avatars/h/0/1.avif
    


    As you can see, AVIF isn't always smaller than WebP - for example, /data/avatars/l/0/1.avif (2,408 bytes) vs /data/avatars/l/0/1.webp (1,734 bytes).

    Exactly - my updated image resizer and optimise-images.sh can also set a date threshold so you only optimise images of a certain age. So you could do several separate optimization runs with different image quality and age threshold settings :D
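    Plain find shows the idea behind the age threshold - only files older than the cutoff get picked up for a pass (path and cutoff are examples):

    Code (Text):
    # select only images older than 365 days for an optimization run
    find /data/attachments -type f -name '*.jpg' -mtime +365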
     
    Last edited: Jan 27, 2022