Overview
This post shows you how to migrate your Nextcloud data directory from local storage to Amazon S3 object storage. My Nextcloud server was running on CentOS 7.9 at the time of the migration. I later moved my Nextcloud install to Fedora 39, but that’s a story for another day.
First, credit where it’s due: thank you to the contributors to this post on the Nextcloud GitHub. I could not have done it without you.
Create the S3 bucket
Create the file nextcloud-s3.yaml and copy & paste the template below. Browse to CloudFormation in the AWS console and use it to spin up your Amazon S3 bucket and IAM user. The nextcloud-s3 user’s Access Key and Secret Access Key will be stored in the Parameter Store.
AWSTemplateFormatVersion: 2010-09-09
Description: >-
  Create S3 Bucket and IAM user and store access key and secret in the parameter store
Parameters:
  ParameterS3UserAccessKey:
    Description: S3 User access key
    Type: String
    Default: /nextcloud/s3accesskey
  ParameterS3UserSecretAccessKey:
    Description: S3 User secret access key
    Type: String
    Default: /nextcloud/s3secretaccesskey
  S3BucketName:
    Description: >-
      S3 Bucket Name that must be globally unique.
      Replace 999999 with your own number.
    Type: String
    Default: nextcloud-s3-999999
Resources:
  # S3
  S3BucketNextCloud:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    Properties:
      BucketName: !Ref S3BucketName
  S3IamUser:
    Type: AWS::IAM::User
    Properties:
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonS3FullAccess
      Path: /
      UserName: nextcloud-s3
  S3IamUserAccessKey:
    Type: AWS::IAM::AccessKey
    Properties:
      Status: Active
      UserName: !Ref S3IamUser
  S3UserAccessKey:
    Type: AWS::SSM::Parameter
    Properties:
      Name: !Ref ParameterS3UserAccessKey
      Type: String
      Value: !Ref S3IamUserAccessKey
      Description: "S3 User Access Key"
  S3UserSecretAccessKey:
    Type: AWS::SSM::Parameter
    Properties:
      Name: !Ref ParameterS3UserSecretAccessKey
      Type: String
      Value: !GetAtt S3IamUserAccessKey.SecretAccessKey
      Description: "S3 User Secret Access Key"
...
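If you prefer the command line, something like this should also work. This is a sketch assuming the AWS CLI is installed and configured; the stack name nextcloud-s3 is my own choice, and CAPABILITY_NAMED_IAM is required because the template creates a named IAM user. The two get-parameter calls pull back the credentials you’ll need later.
# aws cloudformation deploy --template-file nextcloud-s3.yaml --stack-name nextcloud-s3 --capabilities CAPABILITY_NAMED_IAM
# aws ssm get-parameter --name /nextcloud/s3accesskey --query Parameter.Value --output text
# aws ssm get-parameter --name /nextcloud/s3secretaccesskey --query Parameter.Value --output text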
Configure rclone
I used rclone to sync the files from local storage to S3 because it preserves file timestamps, and my testing confirmed it does. The goal was to avoid Nextcloud clients re-syncing all their data just because the timestamps had changed.
Instructions on how to install and configure rclone for S3 can be found here:
Install: https://rclone.org/install/
Configure: https://rclone.org/s3/#configuration
Here’s the short version…
Install
# wget -q -O /root/install.sh https://rclone.org/install.sh
# chmod +x /root/install.sh
# /root/install.sh
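If the script completes without errors, you can confirm the binary is on your PATH:
# rclone version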
Configure
Create the directory /root/.config/rclone/ and create rclone.conf inside it. Replace the access_key values with those stored in the AWS Parameter Store and change the region to your own.
# mkdir -p /root/.config/rclone
# touch /root/.config/rclone/rclone.conf
# chmod 0600 /root/.config/rclone/rclone.conf
# vim /root/.config/rclone/rclone.conf
[s3]
type = s3
provider = AWS
access_key_id = ABC..............XYZ
secret_access_key = aBC..................................XYz
region = ap-southeast-2
location_constraint = ap-southeast-2
acl = private
server_side_encryption = AES256
storage_class = INTELLIGENT_TIERING
You should now be able to copy files to your S3 bucket with rclone:
# rclone copy testfile1 s3:/nextcloud-s3-999999/
# rclone ls s3:/nextcloud-s3-999999/
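Since the whole point of using rclone here is preserving timestamps, it’s worth comparing the modification time on the uploaded object against the source file. rclone lsl prints the size and modtime of each object, and ls with a full timestamp gives you the local side to compare against:
# rclone lsl s3:/nextcloud-s3-999999/
# ls -l --time-style=full-iso testfile1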
Migration Preparation
Do some cleaning and checks
# su apache -s /bin/bash -c '/opt/remi/php82/root/bin/php /domains/cloud.example.com/occ files:cleanup'
0 orphaned file cache entries deleted
0 orphaned mount entries deleted
# su apache -s /bin/bash -c '/opt/remi/php82/root/bin/php /domains/cloud.example.com/occ files:scan --all'
Starting scan for user 1 out of 7 (xxx)
Starting scan for user 2 out of 7 (xxx)
Starting scan for user 3 out of 7 (xxx)
Starting scan for user 4 out of 7 (xxx)
Starting scan for user 5 out of 7 (xxx)
Starting scan for user 6 out of 7 (xxx)
Starting scan for user 7 out of 7 (xxx)
+---------+-------+--------+--------------+
| Folders | Files | Errors | Elapsed time |
+---------+-------+--------+--------------+
| 2226    | 19284 | 0      | 00:00:13     |
+---------+-------+--------+--------------+
Put your server into maintenance mode
# su apache -s /bin/bash -c '/opt/remi/php82/root/bin/php /domains/cloud.example.com/occ maintenance:mode --on'
Then wait about 5-10 minutes for all Nextcloud clients to go offline.
Backup your instance
How you do this is up to you. My regular routine is a daily database dump plus a weekly zip of the Nextcloud directory, excluding the /data directory.
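For reference, a minimal version of that routine looks roughly like this. The /backup destination is just an example path; adjust for your setup (newer MariaDB installs ship mariadb-dump instead of mysqldump):
# mysqldump -p --single-transaction nextclouddb > /backup/nextclouddb_$(date +%F).sql
# tar --exclude='/domains/cloud.example.com/data' -czf /backup/nextcloud_$(date +%F).tar.gz /domains/cloud.example.com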
Get the user files
# mariadb -p -B --disable-column-names -D nextclouddb -e "select concat('urn:oid:', fileid, ' ', '/domains/cloud.example.com/data/', substring(id from 7), '/', path) from oc_filecache join oc_storages on storage = numeric_id where id like 'home::%' order by id;" > user_file_list
Get the meta files
# mariadb -p -B --disable-column-names -D nextclouddb -e "select concat('urn:oid:', fileid, ' ', substring(id from 8), path) from oc_filecache join oc_storages on storage = numeric_id where id like 'local::%' order by id;" > meta_file_list
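Each line in these lists maps the S3 object name Nextcloud will use (urn:oid: followed by the file cache fileid) to the file’s current path on disk. A quick peek confirms the format; the fileid and path below are made-up examples:
# head -n 1 user_file_list
urn:oid:1234 /domains/cloud.example.com/data/alice/files/example.txt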
Make a directory for the symbolic links
# mkdir s3_files
# cd s3_files
Create symbolic links
# while read target source; do if [ -f "$source" ]; then ln -s "$source" "$target"; fi; done < ../user_file_list
# while read target source; do if [ -f "$source" ]; then ln -s "$source" "$target"; fi; done < ../meta_file_list
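As a sanity check, the number of links should be close to the combined length of the two lists (it can be a little lower because directories and any missing files are skipped):
# cat ../user_file_list ../meta_file_list | wc -l
# ls -1 | wc -l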
Sync your Nextcloud local storage data to your S3 bucket
# cd ..
# rclone sync --copy-links --stats-log-level NOTICE --progress s3_files/ s3:/nextcloud-s3-999999/
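Before touching the database, I’d suggest confirming the bucket contents line up with the local data. rclone size reports the object count and total size, and rclone check compares source and destination (it re-hashes everything, so it can take a while on a big data set):
# rclone size s3:/nextcloud-s3-999999/
# rclone check --copy-links --one-way s3_files/ s3:/nextcloud-s3-999999/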
Stop!
From here onwards, mistakes can damage your Nextcloud instance, so make sure your backup is good before continuing.
Database updates
# mariadb -p -D nextclouddb -e "update oc_storages set id = concat('object::user:', substring(id from 7)) where id like 'home::%';"
# mariadb -p -D nextclouddb -e "update oc_storages set id = 'object::store:amazon::nextcloud-s3-999999/' where id like 'local::%';"
# mariadb -p -D nextclouddb -e "update oc_mounts set mount_provider_class = 'OC\\\Files\\\Mount\\\ObjectHomeMountProvider' where mount_provider_class like '%LocalHomeMountProvider%';"
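A read-only query makes a handy sanity check here; after the updates, every former home and local storage row should now have an object:: id:
# mariadb -p -D nextclouddb -e "select numeric_id, id from oc_storages;"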
Update Nextcloud config
Backup your config.
# cp -a config.php config.php_backup
Edit config.php, adding the objectstore block shown in the diff below. Use the access key values stored in the AWS Parameter Store and change the region to your own.
# diff config.php config.php_backup
46,55d45
<   'objectstore' => [
<     'class' => '\\OC\\Files\\ObjectStore\\S3',
<     'arguments' => [
<       'bucket' => 'nextcloud-s3-999999',
<       'region' => 'ap-southeast-2',
<       'key' => 'ABC..............XYZ',
<       'secret' => 'aBC..................................XYz',
<       'storageClass' => 'INTELLIGENT_TIERING',
<     ],
<   ],
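You can confirm Nextcloud parses the new block with occ (sensitive values such as the key and secret are masked in the output):
# su apache -s /bin/bash -c '/opt/remi/php82/root/bin/php /domains/cloud.example.com/occ config:list system'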
Take your server out of maintenance mode
# su apache -s /bin/bash -c '/opt/remi/php82/root/bin/php /domains/cloud.example.com/occ maintenance:mode --off'
Test your server
If everything went well, your Nextcloud server is back up and running, and all your data is now stored in your Amazon S3 bucket.
The Amazon S3 bucket metrics ‘Total bucket size’ & ‘Total number of objects’ take a day or so to populate, so don’t worry if they look empty at first.
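Two quick checks I’d suggest: confirm status.php reports maintenance is off, then upload a file through the web UI and watch a new urn:oid object appear in the bucket:
# curl -s https://cloud.example.com/status.php
# rclone ls s3:/nextcloud-s3-999999/ | tail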
Issues encountered
Database table oc_mounts and mount_provider_class
I only updated the rows with mount_provider_class = ‘OC\Files\Mount\LocalHomeMountProvider’. I’m not sure if I should have updated them all, but I haven’t seen any errors regarding shares yet. The other mount_provider_class values in my database are ‘OCA\Files_Sharing\MountProvider’ and NULL.
Database table oc_storages duplicate entry
I hit an error when updating the oc_storages table for ‘local::’ entries because I had two. One was an old entry (/var/www/data, I think) that no longer existed. I removed the redundant row, leaving ‘/domains/cloud.example.com/data/’, and re-ran the SQL command successfully.
ERROR 1062 (23000): Duplicate entry ‘object::store:amazon::nextcloud-s3-999999/’ for key ‘storages_id_index’
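If you hit the same error, list the local:: rows and delete the stale one before re-running the update. The numeric_id in the delete below is hypothetical; check which row is the stale one in your database first:
# mariadb -p -D nextclouddb -e "select numeric_id, id from oc_storages where id like 'local::%';"
# mariadb -p -D nextclouddb -e "delete from oc_storages where numeric_id = 2;"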
Wrap Up
Drop us a comment on how you went, or if you spot any mistakes in the post. Bye.