Rclone Backblaze



B2 is a supported backend, with its own documentation and options. But where it all started to fall apart for me was with encryption. Rclone does no encryption at all by default. It has a whole separate system for that which, somewhat confusingly, is treated as a separate “backend” (crypt) layered on top of the one you are currently using. When something is wrong with Backblaze it is usually a transient problem; rclone will retry, by default up to 10 times, with built-in rate limiting (the pacer), as shown with the incident a7691a3d7f71-e47fc872d7ba below.
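For context, a crypt remote in rclone.conf simply wraps another remote. A minimal sketch, assuming the b2 remote created below and a placeholder bucket name:

```
# Hypothetical crypt remote layered on top of the b2 remote configured below.
# Both passwords are stored obscured by rclone, not in plain text.
[b2-crypt]
type = crypt
remote = b2:BUCKET_NAME/encrypted
password = <obscured>
password2 = <obscured>
```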


Create B2 remote


  • Where 'Key ID' is the account.
  • Where 'Application key' is the key.
  • Using b2 as the remote name, which is used in all task examples (e.g. as b2:).
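A non-interactive sketch of the same setup; KEY_ID and APPLICATION_KEY are placeholders for your own credentials:

```
# Create a B2 remote named "b2" without the interactive prompts.
rclone config create b2 b2 account KEY_ID key APPLICATION_KEY
```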

All done, confirm settings:
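Running rclone config show b2 should print an entry roughly like this (values are placeholders):

```
[b2]
type = b2
account = KEY_ID
key = APPLICATION_KEY
```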

Tasks

List buckets
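A sketch of the listing command; at the root of a B2 remote, the directories shown are the buckets:

```
rclone lsd b2:
```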

Synchronize local files to bucket

Will perform the following:

  • Push all files from /path/to/local/source to target bucket BUCKET_NAME.
  • Files found in target bucket not in source will be deleted.
  • Files considered identical if file size and modification date match.
  • Progress displayed to terminal, output sent to /path/to/rclone.log.
  • Using --fast-list tells rclone to pull the listing of current target bucket files in a single (or minimal number of) API calls. Depending on how many files the target bucket holds, this can help or hurt execution time and API cost.
  • Use --transfers to control the number of parallel file transfers to the target bucket; tune it based on available upstream bandwidth. A sketch of the full command follows this list.
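A sketch of the sync command described above; the paths, bucket name, and transfer count are placeholders to adjust for your own setup:

```
rclone sync /path/to/local/source b2:BUCKET_NAME \
  --fast-list --transfers 8 \
  --progress --log-file /path/to/rclone.log
```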

Synchronize local files to bucket - checksum

  • Identical to Synchronize local files to bucket, but using the --checksum flag means files are considered identical if file size and SHA-1 checksum match.
  • A more thorough synchronization, but it will take longer to execute as rclone must calculate a SHA-1 checksum for every source file. B2 already keeps a SHA-1 checksum for every target bucket file, so there is no additional overhead on that side. A sketch follows.
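The same sketch with checksum comparison enabled:

```
rclone sync /path/to/local/source b2:BUCKET_NAME \
  --checksum --fast-list --transfers 8 \
  --progress --log-file /path/to/rclone.log
```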

Verify local files against bucket


Will perform the following:

  • Verify all files at /path/to/local/source against target bucket BUCKET_NAME.
  • Files considered identical if file size and SHA-1 match.
  • To speed up the check, provide the --size-only flag, which will consider files identical if only file sizes match.
  • Progress displayed to terminal, output sent to /path/to/rclone.log.
  • Using --fast-list tells rclone to pull the listing of current target bucket files in a single (or minimal number of) API calls. Depending on how many files the target bucket holds, this can help or hurt execution time and API cost. A sketch of the command follows this list.
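A sketch of the verification command; add --size-only for the quicker size-only comparison:

```
rclone check /path/to/local/source b2:BUCKET_NAME \
  --fast-list --progress --log-file /path/to/rclone.log
```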


Introduction

I was trying to use Synology’s Hyper Backup and it was taking months with still no results. So I gave up and decided to set up an rclone Docker container on my Synology NAS to sync my data to Google Drive. Here are the steps I took…

Setup Folders

Create a folder where you will store your rclone/Google Drive configuration, e.g. /docker/rclone.

You will also need to know which folder you want to back up or sync. I’m doing a backup, but similar steps can be taken for a sync, e.g. /ETdoFresh/sync.


Download Docker rclone Image

Download the rclone/rclone Docker Image from the Registry

Launch rclone Image

Launching an image creates a docker container. Launch the rclone/rclone Image. Click on Advanced Settings.

Mount the following volumes under the Volume tab, e.g.:

  • /ETdoFresh/sync => /sync
  • /docker/rclone => /config/rclone

Enter config under Command in the Environment tab.


Apply. Next. Apply.
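For reference, the same launch expressed as a plain docker run (a sketch, assuming the volume mappings above; the Synology UI does the equivalent behind the scenes):

```
docker run --rm -it \
  -v /docker/rclone:/config/rclone \
  -v /ETdoFresh/sync:/sync \
  rclone/rclone config
```

The image’s entrypoint is the rclone binary, so config here runs rclone config interactively.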

Running the Configuration

At this point, rclone has already prompted you to enter an option. To verify this, check Logs and then go to the Terminal.

Here is the console output of a successful transaction:


Verify Configuration

You should now have an rclone.conf file in your configuration directory.
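As a rough idea of what to expect inside it, assuming the remote was named google during rclone config and with the token values elided:

```
[google]
type = drive
scope = drive
token = {"access_token":"…","token_type":"Bearer","refresh_token":"…","expiry":"…"}
```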

Change Command Line

Now we will change the command line from configuration to backup. To do this, we edit the rclone-rclone1 container to…

Well, I just found out, you can’t change the command line :(

So, repeat the Launch rclone Image step, using the following instead of config on the last step…
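A sketch of what goes in the Command field, assuming the remote was named google during rclone config and the target folder on Google Drive is sync:

```
sync /sync google:sync -v
```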


Notice 'google:sync', which is {name-you-gave-during-rclone-config}:{directory-on-google}.

Arguments and their descriptions can be found in the rclone documentation.

Conclusion

These are the first steps to getting rclone working with your Synology. The setup we have is manual: you have to run the Docker container every time you want to back up, and it will then shut down when complete.

If you want to automate this, here are some options.

  1. Check the Enable auto-restart option in Edit. This is particularly interesting as it will slow down when there is nothing to sync. It’s what I currently have set up, but I will probably do Option #2 in the future.
  2. Extend the rclone/rclone Docker image to use crond/crontabs to schedule a nightly backup.
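A minimal sketch of that second option, assuming the Alpine-based rclone/rclone image and its busybox crond; the schedule, paths, and remote name are placeholders:

```
FROM rclone/rclone:latest
# Nightly sync at 02:00; rclone is already on the PATH in this image.
RUN echo '0 2 * * * rclone sync /sync google:sync --log-file /config/rclone/nightly.log' \
    > /etc/crontabs/root
# Replace the default rclone entrypoint with crond running in the foreground.
ENTRYPOINT ["crond", "-f"]
```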