Preface

The saintly CloudFlare offers an object storage service, R2, which includes 10GB of free storage and charges nothing for egress traffic. It's honestly quite good, but I recently bought a server with plenty of storage and set up RustFS on it, so I figured I'd migrate my data over.

Steps

1. Set up RustFS

I deployed it with Docker:

 # create data and logs directories
 mkdir -p data logs

 # using latest alpha version
 docker run -d -p 9000:9000 -v $(pwd)/data:/data -v $(pwd)/logs:/logs rustfs/rustfs:alpha
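Once the container is up, a quick sanity check that the mapped S3 port is answering (this is just a plain HTTP probe against the port mapped above, not a RustFS-specific API):

```shell
# confirm the container is running
docker ps --filter ancestor=rustfs/rustfs:alpha

# any HTTP response on the mapped port means the S3 endpoint is listening
curl -i http://localhost:9000
```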

2. Set up Rclone

a. Download Rclone

Rclone downloads

b. Configure Rclone

I'm on Windows myself, so I'll use Windows as the example:

Configure the rclone R2 remote
rclone.exe config
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n

Enter name for new remote.
name> r2
Storage> s3
provider> Cloudflare
env_auth> false
access_key_id> your R2 access key
secret_access_key> your R2 secret key
region> APAC
endpoint> *****.r2.cloudflarestorage.com
Keep this "r2" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
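For reference, the interactive session above just writes an entry into rclone's config file (run `rclone config file` to see its location). The equivalent section looks roughly like this, with the key values as placeholders:

```ini
[r2]
type = s3
provider = Cloudflare
access_key_id = your R2 access key
secret_access_key = your R2 secret key
region = APAC
endpoint = https://*****.r2.cloudflarestorage.com
```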

Configure the Rclone RustFS remote
Current remotes:

Name                 Type
====                 ====
r2                   s3

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> n

Enter name for new remote.
name> rustfs
Storage> s3
provider> Other
env_auth> false
access_key_id> the access key configured in RustFS
secret_access_key> the secret key configured in RustFS
region> 1
endpoint> rustfs.naweiluren.com (replace with your own)
Keep this "rustfs" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
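With both remotes saved, it's worth confirming the credentials work and that the destination bucket exists before moving any data (bucket names here are just the examples used in this post):

```shell
# list buckets on each remote to confirm both sets of credentials work
rclone lsd r2:
rclone lsd rustfs:

# create the destination bucket if it does not exist yet
rclone mkdir rustfs:bucket2
```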

3. Run the migration commands

rclone sync r2:bucket1 rustfs:bucket2 --progress --log-file=<logfile>  # full sync: make the destination identical to the source (extra files on the destination are deleted)
rclone copy r2:bucket1 rustfs:bucket2 --progress --log-file=<logfile>  # incremental copy: transfer only new or changed files, never delete from the destination
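After the transfer finishes, rclone can also verify that the two sides match (again against the example bucket names):

```shell
# compare source and destination; --one-way ignores files that exist only on the destination
rclone check r2:bucket1 rustfs:bucket2 --one-way

# quick totals (object count and bytes) on each side
rclone size r2:bucket1
rclone size rustfs:bucket2
```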