
Beta Branch addons/rclone.sh discussion thread

Discussion in 'Beta release code' started by eva2000, Oct 28, 2016.

  1. eva2000 (Administrator, Staff Member)

    Dedicated discussion thread for beta testing and feedback on addons/rclone.sh, outlined at Beta Branch - addons/rclone.sh client for syncing to remote cloud storage providers | Centmin Mod Community

    Currently, the addons/rclone.sh sync and copy flag options are not yet implemented, as I need to figure out how to look up which remote name the end user configured for their remote providers. Otherwise, it's hard to sync and copy to a remote name in an unattended fashion.

    The rclone config file ~/.rclone.conf can only be grepped/searched if it's unencrypted.
    Code (Text):
    grep '\[' /root/.rclone.conf | sed 's/\(\[\|\]\)//g'
    dropbox1
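
    For anyone scripting around this, here's a minimal sketch building on that grep; the variable names and loop are my own illustration, not the actual rclone.sh code:
    Code (Text):
    #!/bin/bash
    # sketch: enumerate configured rclone remote names from an unencrypted
    # /root/.rclone.conf so a script can target them without prompting
    RCLONE_CONF="/root/.rclone.conf"
    if [ -f "$RCLONE_CONF" ]; then
      REMOTES=$(grep '\[' "$RCLONE_CONF" | sed 's/\(\[\|\]\)//g')
      for remote in $REMOTES; do
        echo "detected remote: $remote"
      done
    fi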
    


     
    Last edited: Oct 28, 2016
  2. eva2000 (Administrator, Staff Member)

    Working on the copy command for addons/rclone.sh, but hit something interesting: each rclone.sh run creates a log file at /root/centminlogs. So when copying /root/centminlogs via the copy command, rclone reports a failed attempt on that log file as corrupted on transfer, since the log isn't completely written until after addons/rclone.sh finishes its run.
    Code (Text):
    ./rclone.sh copy dropbox1      
    remote = dropbox1
    
    copy /root/centminlogs to cloud storage remote dropbox1
    https://community.centminmod.com/posts/39071/
    
    rclone copy /root/centminlogs dropbox1:centminlogs
    2016/10/27 23:55:41 Dropbox root 'centminlogs': Waiting for checks to finish
    2016/10/27 23:55:41 Dropbox root 'centminlogs': Waiting for transfers to finish
    2016/10/27 23:55:43 rclone_copy_271016-235512.log: corrupted on transfer: sizes differ 173 vs 330
    2016/10/27 23:55:44 Attempt 1/3 failed with 1 errors and: corrupted on transfer: sizes differ 173 vs 330
    2016/10/27 23:55:45 Dropbox root 'centminlogs': Waiting for checks to finish
    2016/10/27 23:55:45 Dropbox root 'centminlogs': Waiting for transfers to finish
    2016/10/27 23:55:46
    Transferred:   3.066 MBytes (91.979 kBytes/s)
    Errors:                 0
    Checks:                52
    Transferred:           54
    Elapsed time:       34.1s
    
    
    Total Rclone Copy Time: 34.147900166 seconds
    

    Looks like I can exclude the in-progress log from the run:
    Code (Text):
    ./rclone.sh copy dropbox1       
    remote = dropbox1
    
    copy /root/centminlogs to cloud storage remote dropbox1
    https://community.centminmod.com/posts/39071/
    
    rclone copy /root/centminlogs dropbox1:centminlogs
    rclone copy /root/centminlogs dropbox1:centminlogs --exclude rclone_copy_281016-000242.log
    2016/10/28 00:02:44 Dropbox root 'centminlogs': Waiting for checks to finish
    2016/10/28 00:02:44 Dropbox root 'centminlogs': Waiting for transfers to finish
    2016/10/28 00:02:47
    Transferred:    908 Bytes (177 Bytes/s)
    Errors:                 0
    Checks:                53
    Transferred:            1
    Elapsed time:        5.1s
    
    
    Total Rclone Copy Time: 5.121360189 seconds
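
    For reference, a minimal sketch of how a script could exclude its own log, assuming a timestamped log name like the ones above (the DT/LOG variables are illustrative, not the actual rclone.sh internals):
    Code (Text):
    #!/bin/bash
    # sketch: name this run's log first, then tell rclone to skip it so the
    # still-growing log isn't flagged as "corrupted on transfer"
    DT=$(date +"%d%m%y-%H%M%S")
    LOG="rclone_copy_${DT}.log"
    rclone copy /root/centminlogs dropbox1:centminlogs \
      --exclude "$LOG" 2>&1 | tee "/root/centminlogs/$LOG"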
    

    Trying the same with the sync command:
    Code (Text):
    ./rclone.sh sync dropbox1
    remote = dropbox1
    
    sync /root/centminlogs to cloud storage remote dropbox1
    https://community.centminmod.com/posts/39071/
    
    rclone sync /root/centminlogs dropbox1:centminlogs
    rclone sync /root/centminlogs dropbox1:centminlogs --exclude rclone_sync_281016-000453.log
    2016/10/28 00:04:55 Dropbox root 'centminlogs': Waiting for checks to finish
    2016/10/28 00:04:55 Dropbox root 'centminlogs': Waiting for transfers to finish
    2016/10/28 00:04:58 Waiting for deletions to finish
    2016/10/28 00:04:58 
    Transferred:    632 Bytes (124 Bytes/s)
    Errors:                 0
    Checks:                54
    Transferred:            1
    Elapsed time:          5s
    
    
    Total Rclone Sync Time: 5.089820731 seconds
    
     
    Last edited: Oct 28, 2016
  3. eva2000 (Administrator, Staff Member)

    Added /usr/local/nginx/conf to the copy/sync commands as well:
    Code (Text):
    ./rclone.sh copy dropbox1        
    remote = dropbox1
    
    copy /root/centminlogs to cloud storage remote dropbox1
    https://community.centminmod.com/posts/39071/
    
    rclone copy /root/centminlogs dropbox1:centminlogs --exclude rclone_copy_281016-002424.log
    2016/10/28 00:24:26 Dropbox root 'centminlogs': Waiting for checks to finish
    2016/10/28 00:24:26 Dropbox root 'centminlogs': Waiting for transfers to finish
    2016/10/28 00:24:28 
    Transferred:    684 Bytes (144 Bytes/s)
    Errors:                 0
    Checks:                56
    Transferred:            1
    Elapsed time:        4.7s
    
    copy /usr/local/nginx/conf to cloud storage remote dropbox1
    
    rclone copy /usr/local/nginx/conf dropbox1:nginxconf
    2016/10/28 00:24:58 Dropbox root 'nginxconf': Waiting for checks to finish
    2016/10/28 00:24:58 Dropbox root 'nginxconf': Waiting for transfers to finish
    2016/10/28 00:25:03 
    Transferred:   102.562 kBytes (2.979 kBytes/s)
    Errors:                 0
    Checks:                 0
    Transferred:           64
    Elapsed time:       34.4s
    
    
    Total Rclone Copy Time: 39.212982797 seconds
    

    Code (Text):
    rclone lsd dropbox1:
               0 0001-01-01 00:00:00        -1 centminlogs
               0 0001-01-01 00:00:00        -1 nginxconf
    2016/10/28 00:25:51 
    Transferred:      0 Bytes (0 Bytes/s)
    Errors:                 0
    Checks:                 0
    Transferred:            0
    Elapsed time:        1.6s
    

    Code (Text):
    rclone lsl dropbox1:nginxconf/conf.d
         1120 2016-10-28 00:24:55.000000000 demodomain.com.conf
          846 2016-10-28 00:24:57.000000000 ssl.conf
         1701 2016-10-28 00:24:59.000000000 virtual.conf
         2052 2016-10-28 00:25:00.000000000 newdomain.com.conf
    2016/10/28 00:27:32 
    Transferred:      0 Bytes (0 Bytes/s)
    Errors:                 0
    Checks:                 4
    Transferred:            0
    Elapsed time:        1.5s
    

    Attached screenshots: upload_2016-10-28_10-26-26.png, upload_2016-10-28_10-27-0.png
     
  4. Sunka (Well-Known Member)

    Nice @eva2000
    Waiting for Google Drive support because I have a business account with unlimited storage, so I will put about 50% of my server there for backup, similar to how I use Rsnapshot to back up those files (40GB) to my other server (keeping the last 7 days via Rsnapshot + rsync).
     
  5. pamamolf (Premium Member)

    Waiting for Google Drive support also, as it offers more free space on a free account :)
     
  6. eva2000 (Administrator, Staff Member)

    For non-Dropbox storage, the setup steps involve a browser link and a local port 53682; see the Google Cloud Storage instructions at Google Cloud Storage. Not sure how to automate that in a script.

    And the Google Drive instructions at Google drive.

    Non-Dropbox storage providers involve a setup that needs a web browser on the machine running rclone (see Remote Setup). They have a Windows rclone client though, so that might be the way to do it: http://rclone.org/downloads/
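
    For what it's worth, the rclone docs' Remote Setup flow looks roughly like this (a hedged sketch; check the docs for the exact prompts):
    Code (Text):
    # on a machine that has a web browser (e.g. the Windows rclone client):
    rclone authorize "drive"
    # rclone opens the browser for OAuth, then prints a token block.
    # Back on the headless server, run `rclone config`, answer "n" to
    # "Use auto config?", and paste that token block when prompted.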
     
    Last edited: Oct 28, 2016
  7. eva2000 (Administrator, Staff Member)

  8. pamamolf (Premium Member)

    Encryption? :)
     
  9. eva2000 (Administrator, Staff Member)

    Not yet.. still need to read up on and test that. Hence why I suggest setting up dummy test Google Drive or Dropbox accounts for now :)
     
  10. eva2000 (Administrator, Staff Member)

    Last edited: Oct 28, 2016
  11. eva2000 (Administrator, Staff Member)

  12. Sunka (Well-Known Member)

    For Google Drive:

    Making your own client_id
    When you use rclone with Google drive in its default configuration you are using rclone’s client_id. This is shared between all the rclone users. There is a global rate limit on the number of queries per second that each client_id can do set by Google. rclone already has a high quota and I will continue to make sure it is high enough by contacting Google.

    However you might find you get better performance making your own client_id if you are a heavy user. Or you may not depending on exactly how Google have been raising rclone’s rate limit.

    Here is how to create your own Google Drive client ID for rclone:

    1. Log into the Google API Console with your Google account. It doesn’t matter what Google account you use. (It need not be the same account as the Google Drive you want to access)

    2. Select a project or create a new project.

    3. Under Overview, Google APIs, Google Apps APIs, click “Drive API”, then “Enable”.

    4. Click “Credentials” in the left-side panel (not “Go to credentials”, which opens the wizard), then “Create credentials”, then “OAuth client ID”. It will prompt you to set the OAuth consent screen product name, if you haven’t set one already.

    5. Choose an application type of “other”, and click “Create”. (the default name is fine)

    6. It will show you a client ID and client secret. Use these values in rclone config to add a new remote or edit an existing remote.
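
    For context, the resulting remote section in ~/.rclone.conf ends up looking roughly like this (all values below are placeholders):
    Code (Text):
    [gdrive]
    type = drive
    client_id = 1234567890-example.apps.googleusercontent.com
    client_secret = your-client-secret
    token = {"access_token":"...","expiry":"..."}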
     
  13. Sunka (Well-Known Member)

    I configured Google Drive.

    I have 4 folders to back up.
    How do I configure it to sync these 4 folders into one folder on the remote every day, and after 3 days start over from the first one?
    1. DAY
    folder 1, folder 2, folder 3, folder 4 >> Folder A
    2. DAY
    folder 1, folder 2, folder 3, folder 4 >> Folder B
    3. DAY
    folder 1, folder 2, folder 3, folder 4 >> Folder C

    4. DAY
    folder 1, folder 2, folder 3, folder 4 >> Folder A
    ...

    I can do it manually, but is there a way to create a bash script (in this case it would need one bash script for every day, so 3 in total) and put it in cron to run each script every third day?

    Maybe I will be able to configure that, but is there an option to run sync for all 4 folders one after another in one command line?
    If I create 4 scripts for 4 folders, how do I set up the cron times when I do not know how long each previous script will run?

    Any thoughts, help...
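
    One way to handle the 3-day rotation in a single cron'd script (a rough sketch; the remote name "gdrive", folder names and source paths are placeholders to adapt):
    Code (Text):
    #!/bin/bash
    # sketch: pick destination Folder A/B/C from the day-of-year modulo 3,
    # then sync all 4 source folders sequentially in one run
    FOLDERS=(A B C)
    DEST="Folder ${FOLDERS[$(( 10#$(date +%j) % 3 ))]}"  # 10# avoids octal parsing of e.g. "089"
    for src in /path/folder1 /path/folder2 /path/folder3 /path/folder4; do
      rclone sync "$src" "gdrive:${DEST}/$(basename "$src")"
    done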
     
  14. Sunka (Well-Known Member)

    Also, if I run copy or sync manually through the shell, do I have to leave the shell open, or can I close the shell and turn off my computer?
     
  15. eva2000 (Administrator, Staff Member)

    Great time to learn and play with shell scripting, seeing that is what Centmin Mod is built upon ;)
    Ah, didn't read that far down the page at Google drive, so thanks for the heads up :)

    Commands don't normally survive exiting the SSH session. That's why you can set up a shell script and cron schedule it.
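
    For example (paths and remote name here are illustrative only):
    Code (Text):
    # one-off run that survives closing the SSH session:
    nohup rclone sync /home/nginx/domains/example.com gdrive:backup \
      > /root/rclone-nohup.log 2>&1 &

    # or schedule a script via cron, e.g. daily at 02:00 (crontab -e):
    0 2 * * * /root/tools/rclone-backup.sh >/dev/null 2>&1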
     
  16. cloud9 (Premium Member)

    Very small world, that's a 40 minute drive from me!
     
  17. eva2000 (Administrator, Staff Member)

    Indeed a small world.
    One note for folks unaware: files rclone copies/syncs to cloud storage won't preserve file and user/group permissions. If those are important and crucial, you'd need to tar the directory with the -p flag and upload the tarball (*.tar) or its gzip-compressed (*.tar.gz) version to the cloud storage instead of individual files.

    For the addons/rclone.sh sync and copy options backing up /usr/local/nginx/conf and /root/centminlogs, the file permissions are not important, as they're owned by root, so restoration by root will be fine.

    The alternative, if you want to upload individual files to cloud storage, is to also back up all directory and file permissions before or during the upload, i.e. tools/backup-perm.sh in 123.09beta01 does this (Beta Branch - add tools/backup-perm.sh | Centmin Mod Community), where it runs against individual nginx vhost directories and files to back up file user, group and permissions and allows restoration of those permissions. Still, a limitation of this is that such permission backup files are only really valid for the moment in time they were backed up, as any new files added to the source nginx vhost site won't have their permissions backed up.

    Though a tarball backup is also a moment-in-time backup as well :)
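
    A quick illustration of the tar -p approach (example paths and remote name, not part of addons/rclone.sh):
    Code (Text):
    # -c create, -p preserve permissions, -z gzip, -f output file;
    # -C sets the base directory so the archive holds relative paths
    tar -cpzf /home/sitebackup-$(date +%d%m%y).tar.gz -C /home/nginx/domains example.com
    rclone copy /home/sitebackup-$(date +%d%m%y).tar.gz dropbox1:vhost-backups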
     
  18. Sunka (Well-Known Member)

    All day I have been trying to create a script to rclone several folders from my server.
    I ended up with this configuration.

    Btw, I have permissions backed up on a daily basis and they are saved in /home/nginx/domains/pijanitvor.com/ so they are backed up too.
    I cannot tar the directories first, because there would not be enough space for both the files and the tarred files, so I am running rclone on the uncompressed files.

    It is a pity that incremental backups are not an option in rclone like they are in rsync, so my script is just a classic "add another command".
    Also, there is still(?) a bug where sync does not delete empty directories on the remote server.
    Also, it is recommended (I do not know if this bug has been fixed or not) to delete and create directories on the remote server with the explicit commands (purge and mkdir).

    So, I have 4 directories where everything (I think) that needs to be backed up is backed up.
    First I ran an initial "0 days old" backup of all 4 directories.
    Because there are 10,000 attachments in there, this backup is still running (more than 4 hours, and the whole backup will be about 37 GB), but I started it through cron with a bash script, so I can close the CLI session and turn off my computer.

    Code:
    /usr/sbin/rclone sync /home/nginx/domains/pijanitvor.com/ ptdrive:'##PT - Rclone'/0/pijanitvor.com
    /usr/sbin/rclone sync /usr/local/nginx/conf/ ptdrive:'##PT - Rclone'/0/conf
    /usr/sbin/rclone sync /mnt/xenforo-data/ ptdrive:'##PT - Rclone'/0/xenforo-data
    /usr/sbin/rclone sync /var/lib/redis/ ptdrive:'##PT - Rclone'/0/redis
    When that finishes, I will manually create another 13 folders on Gdrive, so I will have folders 0-13. The folder number marks how many days old the backup is (similar to what rsync does), and after that I will create this script and make it run with cron once a day.

    Code:
    #!/bin/bash
    /usr/sbin/rclone purge ptdrive:'##PT - Rclone'/13
    /usr/sbin/rclone mkdir ptdrive:'##PT - Rclone'/13
    /usr/sbin/rclone copy ptdrive:'##PT - Rclone'/12/ ptdrive:'##PT - Rclone'/13
    /usr/sbin/rclone sync ptdrive:'##PT - Rclone'/11/ ptdrive:'##PT - Rclone'/12
    /usr/sbin/rclone sync ptdrive:'##PT - Rclone'/10/ ptdrive:'##PT - Rclone'/11
    /usr/sbin/rclone sync ptdrive:'##PT - Rclone'/9/ ptdrive:'##PT - Rclone'/10
    /usr/sbin/rclone sync ptdrive:'##PT - Rclone'/8/ ptdrive:'##PT - Rclone'/9
    /usr/sbin/rclone sync ptdrive:'##PT - Rclone'/7/ ptdrive:'##PT - Rclone'/8
    /usr/sbin/rclone sync ptdrive:'##PT - Rclone'/6/ ptdrive:'##PT - Rclone'/7
    /usr/sbin/rclone sync ptdrive:'##PT - Rclone'/5/ ptdrive:'##PT - Rclone'/6
    /usr/sbin/rclone sync ptdrive:'##PT - Rclone'/4/ ptdrive:'##PT - Rclone'/5
    /usr/sbin/rclone sync ptdrive:'##PT - Rclone'/3/ ptdrive:'##PT - Rclone'/4
    /usr/sbin/rclone sync ptdrive:'##PT - Rclone'/2/ ptdrive:'##PT - Rclone'/3
    /usr/sbin/rclone sync ptdrive:'##PT - Rclone'/1/ ptdrive:'##PT - Rclone'/2
    /usr/sbin/rclone sync ptdrive:'##PT - Rclone'/0/ ptdrive:'##PT - Rclone'/1
    /usr/sbin/rclone sync /home/nginx/domains/pijanitvor.com/ ptdrive:'##PT - Rclone'/0/pijanitvor.com
    /usr/sbin/rclone sync /usr/local/nginx/conf/ ptdrive:'##PT - Rclone'/0/conf
    /usr/sbin/rclone sync /mnt/xenforo-data/ ptdrive:'##PT - Rclone'/0/xenforo-data
    /usr/sbin/rclone sync /var/lib/redis/ ptdrive:'##PT - Rclone'/0/redis
    It needs to be in that order, from 13 to 0, so every backup moves to the next-older day's directory. In the end, that gives backups for the last 2 weeks, one day per folder, all together about 500 GB.
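
    As an aside, the same 13-to-0 rotation can be written as a loop so the retention window becomes a single variable (a sketch using the same remote and paths as above):
    Code (Text):
    #!/bin/bash
    # sketch: rotate remote day-folders 13..1, then refresh day 0
    REMOTE="ptdrive:##PT - Rclone"
    DAYS=13

    /usr/sbin/rclone purge "$REMOTE/$DAYS"
    /usr/sbin/rclone mkdir "$REMOTE/$DAYS"
    /usr/sbin/rclone copy "$REMOTE/$((DAYS-1))/" "$REMOTE/$DAYS"
    for (( i=DAYS-1; i>=1; i-- )); do
      /usr/sbin/rclone sync "$REMOTE/$((i-1))/" "$REMOTE/$i"
    done
    for src in /home/nginx/domains/pijanitvor.com /usr/local/nginx/conf /mnt/xenforo-data /var/lib/redis; do
      /usr/sbin/rclone sync "$src/" "$REMOTE/0/$(basename "$src")"
    done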

    Also, I back up the same folders (with 7 days of backups) to another server with rsync (incremental backup).

    I hope this helps someone.

    Any suggestions, thoughts, bugs?
     
  19. eva2000 (Administrator, Staff Member)

  20. Sunka (Well-Known Member)

    But this is my rclone usage.

    This is a bash script for rclone to run once a day through cron. It creates a 14-day backup.

    And there are also two "bugs" in rclone that I found on the rclone forum; I think it is OK to mention them here so other users have them in mind.

    I posted it here so maybe someone will use it, or it will help with creating another script.
    Feel free to move these few posts of mine from this thread to another one (y)