Over the past two weeks, I've had three incidents (the third is currently ongoing) of suddenly slow database performance. The issue is not consistent, but when it does appear, it always begins shortly after 1:00am local (server) time. There is nothing scheduled at this time of day (backups, etc.) that I'm aware of that could cause it.

In NewRelic, the incident shows up as a drastic increase in MySQL's contribution to the overall web transaction time. Taking a closer look at NewRelic's database activity report unfortunately offers no further clues: all of the tracked queries seem to have become "slow", with no particular outliers.

Restarting MySQL or any of the related services (Nginx, PHP) has no effect on the problem. The only way I can resolve the issue is by rebooting the entire server. However, rebooting now takes an unusually long time (15 minutes, compared to the previous 3 minutes; this is a dedicated box at ReliableSite). And of course, a reboot is not a permanent fix, as the problem randomly resurfaces a few days later.

I am running the latest versions of Centminmod, Nginx (1.13.3), PHP-FPM (5.6.31), and MariaDB (5.5.57). I checked mysqld.log, but nothing stood out as unusual.

I have no idea how to troubleshoot this further. Anyone have some ideas?
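
For reference on the scheduling angle, these are the places I know to look for jobs firing around 1:00am on a standard CentOS box (a sketch; the paths are the stock locations and may differ per setup):

```bash
# Per-user and system-wide cron entries (stock CentOS locations)
crontab -l
sudo cat /etc/crontab
ls -l /etc/cron.d /etc/cron.hourly /etc/cron.daily /etc/cron.weekly

# On CentOS 7, systemd timers can also fire scheduled work
systemctl list-timers --all
```

If nothing in there lines up with 1:00am, the trigger might be external (a remote backup agent, a monitoring probe) rather than local cron.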
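
The next time the incident fires, I plan to capture MySQL's state while it's live. A minimal sketch of what I'd run (output filenames are just examples; slow_query_log and long_query_time are dynamic variables in MariaDB 5.5, so no restart is needed):

```bash
# Turn on the slow query log at runtime and lower the threshold to 1 second
mysql -e "SET GLOBAL slow_query_log = 'ON'; SET GLOBAL long_query_time = 1;"

# Snapshot what the server is actually doing during the slowdown
mysql -e "SHOW FULL PROCESSLIST;"      > processlist.txt
mysql -e "SHOW ENGINE INNODB STATUS\G" > innodb-status.txt
mysql -e "SHOW GLOBAL STATUS;"         > global-status.txt
```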
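
And since a service restart doesn't help but a full reboot does, I'm wondering whether I should be looking below MySQL at the host itself while the incident is live. These are generic checks, not specific to my setup (iostat comes from the sysstat package, which may not be installed by default):

```bash
# Memory and swap pressure; heavy swapping could make every query slow at once
free -m
vmstat 5 5

# Per-device disk latency (watch the 'await' column); needs sysstat
iostat -x 5 3

# Kernel messages: disk errors or OOM-killer activity would surface here
dmesg | tail -n 50
```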