Restore from S3 bucket #461
Replies: 3 comments
I have created a similar script. The feature would be very much appreciated.

```bash
export AWS_ACCESS_KEY_ID=$(cat "${DB01_S3_KEY_ID_FILE}")
export AWS_SECRET_ACCESS_KEY=$(cat "${DB01_S3_KEY_SECRET_FILE}")
export AWS_DEFAULT_REGION=${DB01_S3_REGION}
export SOURCE_FILE="mysql_.*\.gz$"
export BACKUP_LOCATION="/backup"

aws sts get-caller-identity

# Pick the newest backup matching SOURCE_FILE by reverse-sorting the listing
export LATEST_FILE=$(aws s3 ls "s3://${DB01_S3_BUCKET}/${DB01_S3_PATH}/" | grep -E "${SOURCE_FILE}" | sort -r | head -n 1 | awk '{print $4}')
export TARGET_FILE=${LATEST_FILE}  # TARGET_FILE was undefined in the original; default to the source name

aws s3 cp "s3://${DB01_S3_BUCKET}/${DB01_S3_PATH}/${LATEST_FILE}" "${BACKUP_LOCATION}/${TARGET_FILE}"
aws s3 cp "s3://${DB01_S3_BUCKET}/${DB01_S3_PATH}/${LATEST_FILE}.sha1" "${BACKUP_LOCATION}/"

restore "${BACKUP_LOCATION}/${TARGET_FILE}" "${DB01_TYPE}" "${DB01_HOST}" "${DB01_NAME}" "${DB01_USER}" "$(cat ${DB01_PASS_FILE})" "${DB01_PORT}"
```
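The script downloads the `.sha1` alongside the backup but never checks it. A verification step could run before `restore`; here is a self-contained sketch using a scratch file in place of the real download (`BACKUP_LOCATION` and `TARGET_FILE` play the same roles as in the script):

```shell
# Scratch setup so the sketch runs standalone; in the real script these
# variables are already set and the files come from `aws s3 cp`.
BACKUP_LOCATION=$(mktemp -d)
TARGET_FILE=mysql_demo.sql.gz
printf 'demo backup contents' > "${BACKUP_LOCATION}/${TARGET_FILE}"
( cd "${BACKUP_LOCATION}" && sha1sum "${TARGET_FILE}" > "${TARGET_FILE}.sha1" )

# The actual check, as it would run after the two `aws s3 cp` calls:
if ( cd "${BACKUP_LOCATION}" && sha1sum -c --quiet "${TARGET_FILE}.sha1" ); then
  echo "checksum OK"
else
  echo "checksum mismatch, aborting restore" >&2
  exit 1
fi
```

Running the check from inside `BACKUP_LOCATION` matters because the `.sha1` file records only the bare filename.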
Updated for v4:
Came across this with the same idea, but had to fiddle a lot to make it work for me, so here is a more copy-paste solution for v4. It takes its defaults from `DB01_` and falls back to `DEFAULT_` where possible. It is also quite hard-wired to MariaDB, which I might make more flexible in the future. It also only takes the latest backup, but I'd like to be able to choose which backup to restore.
```bash
#!/bin/bash
set -e -u -o pipefail
# restore-s3 <s3-path> <db_type> <db_hostname> <db_name> <db_user> <db_pass> <db_port>

export AWS_ACCESS_KEY_ID=${DB01_S3_KEY_ID:-${DEFAULT_S3_KEY_ID}}             # or: $(cat ${DEFAULT_S3_KEY_ID_FILE})
export AWS_SECRET_ACCESS_KEY=${DB01_S3_KEY_SECRET:-${DEFAULT_S3_KEY_SECRET}} # or: $(cat ${DEFAULT_S3_KEY_SECRET_FILE})
export AWS_DEFAULT_REGION=${DB01_S3_REGION:-${DEFAULT_S3_REGION}}
export DEFAULT_PARAMS_AWS_ENDPOINT_URL=" --endpoint-url ${DB01_S3_PROTOCOL:-${DEFAULT_S3_PROTOCOL:-https}}://${DB01_S3_HOST:-${DEFAULT_S3_HOST}}"
export DEFAULT_FILESYSTEM_PATH=${DEFAULT_FILESYSTEM_PATH:-/backup}
export SOURCE_FILE="mariadb_.*\.gz$"

# Newest matching backup; `aws s3 ls --recursive` prints the full key in column 4
export LATEST_FILE=$(aws s3 ls "s3://${DEFAULT_S3_BUCKET}/${DEFAULT_S3_PATH}/" ${DEFAULT_PARAMS_AWS_ENDPOINT_URL} --recursive | grep -E "${SOURCE_FILE}" | sort -r | head -n 1 | awk '{print $4}')
export TARGET_FILE=$(basename "${LATEST_FILE}")

aws s3 cp "s3://${DEFAULT_S3_BUCKET}/${LATEST_FILE}" "${DEFAULT_FILESYSTEM_PATH}/${TARGET_FILE}" ${DEFAULT_PARAMS_AWS_ENDPOINT_URL}
aws s3 cp "s3://${DEFAULT_S3_BUCKET}/${LATEST_FILE}.sha1" "${DEFAULT_FILESYSTEM_PATH}/${TARGET_FILE}.sha1" ${DEFAULT_PARAMS_AWS_ENDPOINT_URL}

# restore <filename> <db_type> <db_hostname> <db_name> <db_user> <db_pass> <db_port>
restore "${DEFAULT_FILESYSTEM_PATH}/${TARGET_FILE}" "${DB01_TYPE}" "${DB01_HOST}" "${DB01_NAME}" "${DB01_USER}" "${DB01_PASS}" "${DB01_PORT:-${DEFAULT_PORT:-3306}}"
```

In your docker-compose.yml, map this file from the host into the container:

```yaml
services:
  db-backup:
    # ...
    volumes:
      - ./restore-s3.sh:/usr/local/bin/restore-s3:ro
    # ...
```
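On the "choose which backup to restore" point: one way would be to let a script argument override the latest-file selection. A minimal sketch, with the function and argument names invented here (they are not part of the script above):

```shell
# Use a requested S3 key if one was passed in; otherwise fall back to the
# latest key found by the existing `aws s3 ls | grep | sort -r | head` pipeline.
pick_backup() {
  requested="${1:-}"   # e.g. "$1" as passed to restore-s3
  latest="$2"          # what the pipeline found
  if [ -n "${requested}" ]; then
    printf '%s\n' "${requested}"
  else
    printf '%s\n' "${latest}"
  fi
}

pick_backup "" "path/mariadb_20240301.sql.gz"
# → path/mariadb_20240301.sql.gz (falls back to latest)
pick_backup "path/mariadb_20230101.sql.gz" "path/mariadb_20240301.sql.gz"
# → path/mariadb_20230101.sql.gz (explicit choice wins)
```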
Description of the feature
Add the possibility to restore database backups stored in an S3 bucket.
Benefits of feature
Restore from a remote S3 location.
Additional context
Right now, I use an additional script:
It copies the latest S3 `${SOURCE_FILE}` (= `mysql_.*\.gz$`) file from `${S3_BUCKET}/${S3_PATH}` and downloads it to `${TEMP_LOCATION}/${TARGET_FILE}`, then executes the restore command. `aws cp` can be used with `${LATEST_FILE}` (found via a `${SOURCE_FILE}` grep), or with `${SOURCE_FILE}` directly.
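The "latest matching file" selection described above can be illustrated with a local directory standing in for the `aws s3 ls` listing (the filenames here are invented for the demo):

```shell
# Emulate picking the newest backup by filename. Because the names embed a
# timestamp, reverse lexicographic sort puts the newest first.
dir=$(mktemp -d)
touch "${dir}/mysql_20240101-000000.sql.gz" \
      "${dir}/mysql_20240301-000000.sql.gz" \
      "${dir}/unrelated.txt"

SOURCE_FILE='mysql_.*\.gz$'
LATEST_FILE=$(ls "${dir}" | grep -E "${SOURCE_FILE}" | sort -r | head -n 1)
echo "${LATEST_FILE}"   # → mysql_20240301-000000.sql.gz
```

This only works when the backup names sort chronologically; for arbitrary names you would sort on the date column of the `aws s3 ls` output instead.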