Tutorial · 10 min read · syncopio Team

Getting Started with rclone: Cloud & S3 Migration Made Simple

From installation to your first cloud sync. Learn rclone configuration, essential commands, and provider-specific tips for AWS S3, Azure, and more.

rclone is often described as “rsync for the cloud.” With support for 40+ storage providers — from AWS S3 to Google Drive to local filesystems — it’s the go-to tool for cloud data migration. This guide gets you from installation to your first sync in 15 minutes.

Installation

Linux / macOS

# Official install script
curl https://rclone.org/install.sh | sudo bash

# Or via package manager
sudo apt install rclone       # Debian/Ubuntu
sudo dnf install rclone       # Fedora/RHEL
brew install rclone           # macOS

Windows

# Using winget
winget install Rclone.Rclone

# Or download from https://rclone.org/downloads/

Verify Installation

rclone version
# rclone v1.68.x

Configuration with rclone config

rclone stores remote configurations in ~/.config/rclone/rclone.conf. The interactive wizard makes setup straightforward.

rclone config

This launches an interactive menu. Let’s configure a few common providers.

AWS S3

rclone config
# n) New remote
# name> aws-s3
# Storage> s3
# provider> AWS
# access_key_id> AKIA...
# secret_access_key> wJal...
# region> us-east-1
# acl> private

Or create the config directly:

[aws-s3]
type = s3
provider = AWS
access_key_id = AKIA...
secret_access_key = wJal...
region = us-east-1
acl = private
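If you prefer scripting setup over the wizard, the same remote can be created non-interactively with rclone config create (placeholder credentials here, matching the example above):

```shell
# Create the S3 remote without the interactive wizard
rclone config create aws-s3 s3 \
  provider=AWS \
  access_key_id=AKIA... \
  secret_access_key=wJal... \
  region=us-east-1 \
  acl=private

# Confirm what was written to rclone.conf
rclone config show aws-s3
```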

Azure Blob Storage

rclone config
# n) New remote
# name> azure
# Storage> azureblob
# account> mystorageaccount
# key> base64key...

MinIO (S3-Compatible)

rclone config
# n) New remote
# name> minio
# Storage> s3
# provider> Minio
# endpoint> https://minio.example.com
# access_key_id> minioadmin
# secret_access_key> minioadmin

MinIO status update

MinIO entered maintenance mode in December 2025. If you’re planning a new deployment, consider alternatives. See our MinIO maintenance mode analysis for options.

Google Cloud Storage

rclone config
# n) New remote
# name> gcs
# Storage> google cloud storage
# project_number> 123456789
# service_account_file> /path/to/credentials.json
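Once your remotes are configured, a quick sanity check confirms rclone can reach each one (the remote names below assume the four examples configured above):

```shell
# List every configured remote
rclone listremotes
# aws-s3:
# azure:
# gcs:
# minio:

# Test connectivity by listing top-level directories
rclone lsd aws-s3:
```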

Essential Commands

Copy — transfer files without deleting

# Copy local to S3
rclone copy /local/data aws-s3:my-bucket/data

# Copy S3 to local
rclone copy aws-s3:my-bucket/data /local/data

# Copy between providers
rclone copy aws-s3:source-bucket azure:dest-container

rclone copy only transfers new and changed files. It does not delete files on the destination.
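Before running a copy against a new remote, a dry run shows exactly what would be transferred without moving any data:

```shell
# Preview the transfer; -v lists each file that would be copied
rclone copy /local/data aws-s3:my-bucket/data --dry-run -v
```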

Sync — make destination match source

rclone sync /local/data aws-s3:my-bucket/data

sync deletes destination files

rclone sync deletes files on the destination that don’t exist on the source. Use rclone copy if you don’t want deletions, or add --dry-run to preview changes first.
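If you need sync’s mirroring behaviour but want a safety net, the --backup-dir flag moves would-be-deleted files aside instead of destroying them (the backup- prefix below is just an example; it must not overlap the destination path):

```shell
# Files sync would delete are moved to a dated backup prefix instead
rclone sync /local/data aws-s3:my-bucket/data \
  --backup-dir aws-s3:my-bucket/backup-$(date +%Y-%m-%d)
```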

Check — verify files match

# Compare sizes and hashes
rclone check /local/data aws-s3:my-bucket/data

# Check with one-way comparison
rclone check /local/data aws-s3:my-bucket/data --one-way

List — explore remote storage

# List top-level contents
rclone ls aws-s3:my-bucket

# List directories only
rclone lsd aws-s3:my-bucket

# List with size, modification time, and path
rclone lsl aws-s3:my-bucket

# Show total size
rclone size aws-s3:my-bucket/data

Move — transfer and delete from source

rclone move /local/data aws-s3:my-bucket/data

Transfers files then deletes them from the source. Use with caution.
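A dry run pairs well with move, and if you also want the emptied source directories cleaned up, rclone has a flag for that:

```shell
# Preview first, then move and prune emptied source directories
rclone move /local/data aws-s3:my-bucket/data --dry-run
rclone move /local/data aws-s3:my-bucket/data --delete-empty-src-dirs
```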

Performance Tuning

Concurrent Transfers

# Default is 4 concurrent transfers
rclone copy /data aws-s3:bucket --transfers 16

For S3 and other cloud providers, 8-32 concurrent transfers typically saturate available bandwidth. Monitor for throttling (HTTP 429 errors).

Checkers (Parallel Comparison Threads)

rclone sync /data aws-s3:bucket --checkers 32

Checkers compare files in parallel before deciding what to transfer. More checkers = faster decision-making on large datasets.

Bandwidth Limiting

# Limit to 100 MB/s
rclone copy /data aws-s3:bucket --bwlimit 100M

# Time-based limits (full speed at night, throttled during business hours)
rclone copy /data aws-s3:bucket --bwlimit "08:00,10M 18:00,off"

Chunk Size for Large Files

# S3 multipart upload with 64MB chunks
rclone copy /data aws-s3:bucket --s3-chunk-size 64M --s3-upload-concurrency 8

Larger chunks reduce API calls but increase memory usage. 64MB is a good starting point for large file uploads.

Encryption

rclone can encrypt data before uploading:

rclone config
# n) New remote
# name> encrypted-s3
# Storage> crypt
# remote> aws-s3:my-bucket/encrypted
# filename_encryption> standard
# directory_name_encryption> true
# password> [enter password]

Now rclone copy /data encrypted-s3: automatically encrypts before upload and decrypts on download.
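The equivalent hand-written rclone.conf entry looks like the following. Note that crypt passwords in the config file must be in obscured form: the wizard obscures them for you, but if you edit the file directly, run rclone obscure first and paste its output:

```shell
# Generate the obscured form of your password
rclone obscure 'my-secret-password'

# Resulting rclone.conf section (password is the obscured output above)
# [encrypted-s3]
# type = crypt
# remote = aws-s3:my-bucket/encrypted
# filename_encryption = standard
# directory_name_encryption = true
# password = <output of rclone obscure>
```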

Encryption adds CPU overhead

Client-side encryption uses AES-256 and adds ~10-20% CPU overhead. Test throughput before committing to encryption for large migrations. Consider server-side encryption (SSE-S3, SSE-KMS) as an alternative.

Provider-Specific Tips

AWS S3

  • Use --s3-storage-class STANDARD_IA for infrequently accessed data (cheaper)
  • Raise --s3-upload-concurrency (e.g., 8) to parallelize multipart uploads of large files
  • Use --s3-no-check-bucket to skip bucket existence checks (faster)

Azure Blob

  • Use --azureblob-access-tier Cool for archival data
  • Azure has higher per-request costs than S3 — use larger chunk sizes

Google Cloud Storage

  • Use --gcs-bucket-policy-only for uniform bucket-level access
  • Nearline/Coldline classes have minimum storage durations (30/90 days)

MinIO / S3-Compatible

  • Set --s3-force-path-style if your provider doesn’t support virtual-hosted style
  • Test with rclone lsd minio: to verify connectivity
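For reference, a path-style MinIO remote in rclone.conf looks like this (endpoint and credentials are the example values from earlier; rclone already defaults to path style for the Minio provider, but setting it explicitly documents intent):

```shell
# rclone.conf entry for a path-style S3-compatible endpoint
# [minio]
# type = s3
# provider = Minio
# endpoint = https://minio.example.com
# access_key_id = minioadmin
# secret_access_key = minioadmin
# force_path_style = true

# Verify connectivity
rclone lsd minio:
```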

Common Workflows

Daily Backup to S3

#!/bin/bash
DATE=$(date +%Y-%m-%d)
rclone sync /data/important aws-s3:backups/$DATE \
  --transfers 16 \
  --bwlimit 50M \
  --log-file /var/log/rclone-backup.log \
  --log-level INFO
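Scheduled via cron, the script above runs unattended (the path /usr/local/bin/rclone-backup.sh is an assumed install location, and the retention command below is a sketch to adapt):

```shell
# crontab -e, then add: run the backup every night at 02:00
0 2 * * * /usr/local/bin/rclone-backup.sh

# Optionally prune backup objects older than 30 days
# (always test retention rules with --dry-run first)
rclone delete aws-s3:backups --min-age 30d --dry-run
```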

Cross-Cloud Migration

# AWS S3 to Azure Blob
rclone copy aws-s3:source-bucket azure:dest-container \
  --transfers 32 \
  --checkers 32 \
  --s3-chunk-size 64M \
  --log-level INFO \
  --stats 30s
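After a cross-cloud copy, verify the result before decommissioning the source. Both S3 and Azure Blob expose MD5 hashes, so hash comparison generally works here:

```shell
# Confirm every source file exists and matches on the destination
rclone check aws-s3:source-bucket azure:dest-container --one-way
```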

Mount Cloud Storage as Local Filesystem

# Mount S3 bucket as local directory
rclone mount aws-s3:my-bucket /mnt/s3 --vfs-cache-mode full &

# Access files normally
ls /mnt/s3
cat /mnt/s3/data/file.txt
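Instead of backgrounding the mount with &, rclone’s --daemon flag detaches it cleanly; the mount is released with fusermount on Linux or umount on macOS:

```shell
# Mount in the background without shell job control
rclone mount aws-s3:my-bucket /mnt/s3 --vfs-cache-mode full --daemon

# Unmount when finished
fusermount -u /mnt/s3     # Linux
# umount /mnt/s3          # macOS
```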

Limitations

rclone excels at cloud-to-cloud transfers but has gaps for NAS migrations:

  • No native NFS/SMB — requires local mount points for NAS-to-NAS
  • Single machine — no distributed workers; throughput limited by one host
  • No enterprise job management — no web dashboard, pause/resume, or scheduling
  • Size+timestamp comparison — not checksum by default (use --checksum flag)
  • No compliance reporting — log files only, no audit-ready reports
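The checksum limitation in particular is easy to work around, at the cost of extra API calls and read time on large datasets:

```shell
# Force hash comparison instead of the default size+modtime check
rclone sync /local/data aws-s3:my-bucket/data --checksum
```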

For NAS + cloud migrations

If your migration involves NFS or SMB alongside S3, syncopio handles all three protocols natively with a web dashboard for visibility. Compare syncopio vs rclone in detail.


Ready to simplify your migrations?

See how syncopio can save you hours on every migration project.

Request a Demo