Sysadmin How do you back up your public directory?

Discussion in 'System Administration' started by BamaStangGuy, Mar 31, 2017.

  1. BamaStangGuy

    BamaStangGuy Active Member

    475
    137
    43
    May 25, 2014
    Ratings:
    +181
    Local Time:
    11:48 AM
    Right now we have a backup script that tars our entire /home/nginx/domains/ folder and then sends it to Amazon S3 every day. We move it to Glacier after 7 days and delete it after 14 days.

    It takes 7 minutes to tar all of the sites on that server, which isn't bad. We don't compress, since when I tested compression it only saved 3GB on a 51GB tar.

    We really love S3 for price and speed, but I'm just seeing if anyone else has come up with a better system. I'm not really interested in rsync, and we are OK with losing a day's worth of attachments if something does happen, since we don't run attachment-heavy sites.
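
    Something like this, in simplified form (bucket name and paths here are placeholders, not our actual script):
    Code (Text):
    #!/bin/bash
    # daily uncompressed tar of all sites, pushed to S3
    DATE=$(date +%F)
    tar -cf /backup/sites-$DATE.tar -C /home/nginx domains
    # lifecycle rules on the bucket handle the move to Glacier at 7 days
    # and deletion at 14 days
    aws s3 cp /backup/sites-$DATE.tar s3://example-backup-bucket/
    rm -f /backup/sites-$DATE.tar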
     
  2. eva2000

    eva2000 Administrator Staff Member

    31,001
    6,920
    113
    May 24, 2014
    Brisbane, Australia
    Ratings:
    +10,426
    Local Time:
    3:48 AM
    Nginx 1.13.x
    MariaDB 5.5
    I usually do per-site vhost backups, so for /home/nginx/domains/domain1.com/public I just back up the web roots, unless there's important stuff elsewhere in /home/nginx/domains/domain1.com/*, and I exclude all the logs in /home/nginx/domains/domain1.com/logs/*. Tar first, then multi-threaded compression via pigz, lbzip2 or pxz after the tar.
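
    Roughly like this (domain and paths are just examples):
    Code (Text):
    # per-site vhost backup: tar first with logs excluded,
    # then multi-threaded compression via pigz
    tar -cf /backup/domain1.com.tar \
        --exclude='domain1.com/logs/*' \
        -C /home/nginx/domains domain1.com
    pigz -p4 /backup/domain1.com.tar   # produces domain1.com.tar.gz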

    Like with dbbackup.sh, I save backups locally + to Amazon S3 remotely every 6-8 hrs depending on the site, and I also have a backup server with rsnapshot that pulls backups from servers every 4hrs, 24hrs, weekly and monthly.
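
    The rsnapshot side is just the usual retain intervals driven by cron, e.g. (illustrative values only; fields are tab-separated in the real config):
    Code (Text):
    # /etc/rsnapshot.conf excerpt - illustrative only
    retain  hourly   6      # cron runs "rsnapshot hourly" every 4hrs
    retain  daily    7
    retain  weekly   4
    retain  monthly  6
    # pull the web roots from each server over ssh
    backup  root@server1:/home/nginx/domains/   server1/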

    On Amazon S3, I transition backups to the Standard-Infrequent Access storage class after 30 days but retain them, depending on importance, for up to 366 days in Glacier. You never know when you'll need a working backup, and you only realise it when you need one most, i.e. data corruption, data restoration etc. :)
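
    The lifecycle rules for that look roughly like this via the AWS CLI (bucket name and the Glacier transition day are placeholders):
    Code (Text):
    aws s3api put-bucket-lifecycle-configuration --bucket example-backups \
      --lifecycle-configuration '{
        "Rules": [{
          "ID": "backup-rotation",
          "Status": "Enabled",
          "Filter": {"Prefix": ""},
          "Transitions": [
            {"Days": 30, "StorageClass": "STANDARD_IA"},
            {"Days": 60, "StorageClass": "GLACIER"}
          ],
          "Expiration": {"Days": 366}
        }]
      }'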
     
    Last edited: Mar 31, 2017
    • Like Like x 1
  3. BamaStangGuy

    BamaStangGuy Active Member

    475
    137
    43
    May 25, 2014
    Ratings:
    +181
    Local Time:
    11:48 AM
    We have a custom bash script for databases that runs every 6 hours and backs up to S3; it moves to Glacier after 7 days and deletes after 30.

    Looks like S3 may be my best option yet.
     
  4. eva2000

    eva2000 Administrator Staff Member

    31,001
    6,920
    113
    May 24, 2014
    Brisbane, Australia
    Ratings:
    +10,426
    Local Time:
    3:48 AM
    Nginx 1.13.x
    MariaDB 5.5
    I usually write backup scripts to back up per-site vhost + DB pairs. That way I can schedule them at each site's specific off-peak time rather than the server's off-peak, as they are not necessarily the same. For sites with very large data sets, I use site maintenance mode, i.e. https://community.centminmod.com/threads/sitestatus-maintenance-mode.5599/. All my backups are timed and monitored, so I have set target criteria for how fast my backups must run locally and remotely, i.e. locally 30GB of data in 5 minutes or 60GB of data in 10-15 mins, and I spec the servers accordingly.
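
    Adding timing to a backup script is trivial, e.g. (threshold and addresses are examples):
    Code (Text):
    # time the backup and alert if it misses the local target
    START=$(date +%s)
    tar -cf /backup/domain1.com.tar -C /home/nginx/domains domain1.com
    ELAPSED=$(( $(date +%s) - START ))
    # local target: 30GB in 5 minutes (300s)
    if [ "$ELAPSED" -gt 300 ]; then
      echo "backup took ${ELAPSED}s" | mail -s "slow backup" admin@example.com
    fi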

    Remote time criteria can vary - I've had clients needing 150-200MB/s backup speeds :) They wanted 600MB/s, but that was priced way out of their budget, heh.
     
  5. BamaStangGuy

    BamaStangGuy Active Member

    475
    137
    43
    May 25, 2014
    Ratings:
    +181
    Local Time:
    11:48 AM
    Since all of our databases are XenForo and use InnoDB, I use --single-transaction for mysqldump, which allows me to dump even Christian Forums' 60GB database during peak hours. It helps having the 16 core / 32 HT server, though.
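
    i.e. along these lines (database name is an example):
    Code (Text):
    # non-blocking dump of an all-InnoDB database, safe during peak hours
    mysqldump --single-transaction --quick xenforo_db > xenforo_db.sql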
     
    • Like Like x 1
  6. BamaStangGuy

    BamaStangGuy Active Member

    475
    137
    43
    May 25, 2014
    Ratings:
    +181
    Local Time:
    11:48 AM
    I also get an email report for each backup. All server email goes through Postfix, which is set up for SES via SMTP. The last few months I have been lucky enough to have the time to not only dig deep into CentminMod but also research Postfix and other server tools heavily to make my life easier.
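
    The Postfix side is just the standard SES relay settings, roughly (the endpoint depends on your SES region):
    Code (Text):
    # /etc/postfix/main.cf excerpt
    relayhost = [email-smtp.us-east-1.amazonaws.com]:587
    smtp_sasl_auth_enable = yes
    smtp_sasl_security_options = noanonymous
    smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
    smtp_use_tls = yes
    smtp_tls_security_level = encrypt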

    Screen Shot 2017-03-31 at 6.13.41 AM.png
     
    • Informative Informative x 1
  7. eva2000

    eva2000 Administrator Staff Member

    31,001
    6,920
    113
    May 24, 2014
    Brisbane, Australia
    Ratings:
    +10,426
    Local Time:
    3:48 AM
    Nginx 1.13.x
    MariaDB 5.5
    Yeah, I'm using a Postfix relay for AWS SES SMTP as well on servers that need to hide their sending IP address, and the same setup for email and Pushover notifications :)

    yup reading is good (y)

    Yeah, having that many CPU threads helps. Looking forward to AMD Zen Naples giving Intel some competition in price-to-CPU-thread ratios :)

    In dbbackup.sh I scripted it so that if it detects a 100% InnoDB-based MySQL database, it uses the --single-transaction flag for mysqldump; otherwise, if the database has non-InnoDB tables, i.e. MyISAM, it doesn't use the --single-transaction flag.
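
    The engine check itself can be as simple as counting non-InnoDB tables (a simplified sketch, not the exact dbbackup.sh code; database name is an example):
    Code (Text):
    NON_INNODB=$(mysql -N -e "SELECT COUNT(*) FROM information_schema.TABLES WHERE TABLE_SCHEMA='mydb' AND ENGINE <> 'InnoDB'")
    if [ "$NON_INNODB" -eq 0 ]; then
      mysqldump --single-transaction mydb > mydb.sql
    else
      mysqldump mydb > mydb.sql
    fi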
     
  8. Matt

    Matt Moderator Staff Member

    697
    322
    63
    May 25, 2014
    Sheffield, UK
    Ratings:
    +449
    Local Time:
    5:48 PM
    1.7.1
    MariaDB 10
    I have a 2x2TB E3 server from SYS which I use as my main backup machine. I then have multiple scripts running on there which push various backups to S3. I'm only just starting to play with moving this to Standard-IA and Glacier, though.
     
    • Informative Informative x 1
  9. eva2000

    eva2000 Administrator Staff Member

    31,001
    6,920
    113
    May 24, 2014
    Brisbane, Australia
    Ratings:
    +10,426
    Local Time:
    3:48 AM
    Nginx 1.13.x
    MariaDB 5.5
    Yeah, AWS S3 with the Standard-IA storage class is the best balance of AWS S3 pricing against speed of access to data saved in the S3 bucket :)
     
    • Like Like x 1
  10. Matt

    Matt Moderator Staff Member

    697
    322
    63
    May 25, 2014
    Sheffield, UK
    Ratings:
    +449
    Local Time:
    5:48 PM
    1.7.1
    MariaDB 10
    TBH, I've only really needed to access a couple of things over the years, and S3 is really being used as redundant storage. Anything needed I can quickly access from the dedicated backup server.
     
    • Agree Agree x 1
  11. eva2000

    eva2000 Administrator Staff Member

    31,001
    6,920
    113
    May 24, 2014
    Brisbane, Australia
    Ratings:
    +10,426
    Local Time:
    3:48 AM
    Nginx 1.13.x
    MariaDB 5.5
    Yeah, true - after a while in Standard-IA, I set the lifecycle in AWS S3 to move it to Glacier :)

    I'm starting to play with my addons/acmetool.sh and acme.sh for Let's Encrypt SSL certificates via DNS validation for Cloudflare and AWS Route53, so I'm looking at scripting the syncing of Let's Encrypt SSL certs issued via DNS validation to multiple servers, i.e. clusters serving the same domain/SSL cert.
    Cloudflare DNS validation is already supported; AWS Route53 DNS just takes more manual work to set up a dedicated AWS IAM user and permissions/groups, and then to sync across server clusters via a dedicated AWS S3 SSL bucket :)
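
    For the Cloudflare side, acme.sh only needs the API credentials exported, e.g. (values are placeholders):
    Code (Text):
    # Cloudflare DNS validation with acme.sh
    export CF_Key="your_cloudflare_global_api_key"
    export CF_Email="you@example.com"
    acme.sh --issue --dns dns_cf -d domain1.com -d www.domain1.com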
     
  12. RB1

    RB1 Active Member

    281
    72
    28
    Nov 11, 2016
    California
    Ratings:
    +119
    Local Time:
    9:48 AM
    Nginx 1.13.x
    MariaDB 10.1.x
    My sites are small enough and mostly static, so I just back up the entire nginx domains folder manually every week.
    LOL :D
    Code (Text):
    # tar -zcvf 4-7-2017.tar.gz /home/nginx/domains
     
    Last edited: Apr 8, 2017
  13. Revenge

    Revenge Active Member

    289
    64
    28
    Feb 21, 2016
    Portugal
    Ratings:
    +228
    Local Time:
    5:48 PM
    1.9.x
    10.1.x
    How much do you pay for S3?

    I have a script that uploads my backups to Dropbox (public folders, confs and databases). 1TB costs only €9.99/month or €99/year.
     
  14. pamamolf

    pamamolf Well-Known Member

    2,826
    253
    83
    May 31, 2014
    Ratings:
    +449
    Local Time:
    7:48 PM
    Nginx-1.13.x
    MariaDB 10.1.x
  15. eva2000

    eva2000 Administrator Staff Member

    31,001
    6,920
    113
    May 24, 2014
    Brisbane, Australia
    Ratings:
    +10,426
    Local Time:
    3:48 AM
    Nginx 1.13.x
    MariaDB 5.5
    It depends on the storage class used - Cloud Storage Pricing – Amazon Simple Storage Service (S3) – AWS - but AWS S3 has greater availability and redundancy than normal dedicated servers, plus region availability selections to get closer to your server location for better speed.

    Here's a snapshot from one of my AWS accounts at ~1.5TB of AWS S3 storage, where I mainly use the Standard-IA and Glacier storage classes:
    Code (Text):
    Amazon Simple Storage Service TimedStorage-ByteHrs
    $0.023 per GB - first 50 TB / month of storage used    74.047 GB-Mo    $1.70
    
    Amazon Simple Storage Service TimedStorage-SIA-ByteHrs
    $0.0125 per GB-Month of storage used in Standard-Infrequent Access    1,133.646 GB-Mo    $14.17
    
    Amazon Simple Storage Service USW2-TimedStorage-ByteHrs
    $0.023 per GB - first 50 TB / month of storage used    36.708 GB-Mo    $0.84
    
    Amazon Simple Storage Service USW2-TimedStorage-GlacierByteHrs
    $0.004 per GB / month of storage used - Amazon Glacier    187.502 GB-Mo    $0.75
    
    Amazon Simple Storage Service USW2-TimedStorage-SIA-ByteHrs
    $0.0125 per GB-Month of storage used in Standard-Infrequent Access    134.054 GB-Mo    $1.68
    


    I also set up dedicated AWS IAM users/groups per AWS S3 bucket per server for greater security. I like AWS IAM security because you control, grant or revoke privileges from central AWS Console management, so if you have 100s of servers it's easier to manage user privileges - Identity and Access Management (IAM) - Amazon Web Services (AWS)
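
    A per-bucket IAM policy is basically this pattern (user and bucket names are placeholders):
    Code (Text):
    aws iam put-user-policy --user-name server1-backup \
      --policy-name server1-s3-backup --policy-document '{
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Allow",
          "Action": ["s3:ListBucket", "s3:GetObject",
                     "s3:PutObject", "s3:DeleteObject"],
          "Resource": ["arn:aws:s3:::example-server1-backups",
                       "arn:aws:s3:::example-server1-backups/*"]
        }]
      }'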

    Use the AWS S3 bucket's lifecycle management to automatically transition data from one storage class to another :)

    aws-s3-life-cycle-01.png
     
    Last edited: Apr 8, 2017
    • Informative Informative x 1