This repository was archived by the owner on Mar 24, 2025. It is now read-only.

Commit 0baf83b — Update to version v2.4.0
1 parent: a19a2cc

22 files changed: +358 −84 lines

CHANGELOG.md

Lines changed: 4 additions & 0 deletions

@@ -4,6 +4,10 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [2.4.0] - 2023-04-28
+### Added
+- Support for requester pay mode in S3 transfer task.
+
 ## [2.3.0] - 2023-03-30
 - Support S3 Access Key Rotation
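The new requester-pays support means the transfer task must declare that the caller, not the bucket owner, pays the request and data-transfer costs when reading from the source bucket. Below is a minimal sketch of such a read with the AWS SDK for JavaScript v3; it is illustrative only — the region, function, and bucket/key names are hypothetical, not the plugin's actual worker code.

```ts
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

// Hypothetical client; region is an assumption for the example.
const s3 = new S3Client({ region: "us-east-1" });

async function getFromRequesterPaysBucket(bucket: string, key: string) {
  const resp = await s3.send(
    new GetObjectCommand({
      Bucket: bucket,
      Key: key,
      // Declares that the requester (this account) pays the request and
      // transfer costs; without it, GetObject against a requester-pays
      // bucket is rejected with 403 Access Denied.
      RequestPayer: "requester",
    })
  );
  return resp.Body; // stream of the object's contents
}
```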

README.md

Lines changed: 11 additions & 7 deletions

@@ -46,13 +46,17 @@ If you have deployed a version before v2.0.2 (You can go to CloudFormation, chec
 
 ![S3 Plugin Architecture](s3-plugin-architect.png)
 
-A *Finder* job running in AWS Fargate lists all the objects in source and destination buckets and determines what objects should be transferred, a message for each object to be transferred will be created in SQS. A *time-based CloudWatch rule* will trigger the ECS task to run every hour.
-
-This plugin also supports S3 Event notification to trigger the data transfer (near real-time), only if the source bucket is in the same account (and region) as the one you deploy this plugin to. The event message will also be sent the same SQS queue.
-
-The *Worker* job running in EC2 consumes the message in SQS and transfer the object from source bucket to destination bucket. You can use Auto Scaling Group to controll the number of EC2 instances to transfer the data based on your business need.
-
-If an object or a part of an object failed to transfer, the EC2 instance will release the message in the Queue, and the object will be transferred again after the message is visible in the queue (Default visibility timeout is set to 15 minutes, extended for large objects). After a few retries, if the transfer still failed, the message will be sent to the Dead Letter Queue and an alarm will be triggered.
+The Amazon S3 plugin runs the following workflows:
+
+1. A time-based Amazon EventBridge rule triggers an AWS Lambda function on an hourly basis.
+2. AWS Lambda uses the launch template to launch a data comparison job (JobFinder) on an [Amazon Elastic Compute Cloud (Amazon EC2)](https://aws.amazon.com/ec2/) instance.
+3. The job lists all the objects in the source and destination
+buckets, compares them, and determines which objects should be transferred.
+4. Amazon EC2 sends a message for each object to be transferred to [Amazon Simple Queue Service (Amazon SQS)](https://aws.amazon.com/sqs/). Amazon S3 event messages are also supported for near-real-time data transfer; whenever an object is uploaded to the source bucket, an event message is sent to the same Amazon SQS queue.
+5. A JobWorker running in Amazon EC2 consumes the messages in SQS and transfers the object from the source bucket to the destination bucket. You can use an Auto Scaling Group to control the number of EC2 instances based on your business needs.
+6. A record with the transfer status of each object is stored in Amazon DynamoDB.
+7. The Amazon EC2 instance gets (downloads) the object from the source bucket based on the Amazon SQS message.
+8. The Amazon EC2 instance puts (uploads) the object to the destination bucket based on the Amazon SQS message.
 
 This plugin supports transferring large files: it divides them into small parts and leverages the [multipart upload](https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html) feature of Amazon S3.
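Steps 5–8 above describe a consume-and-copy loop, and the context line notes that large objects go through S3 multipart upload. Below is a simplified sketch of what such a JobWorker loop could look like with the AWS SDK for JavaScript v3. The queue URL, bucket names, and message shape are assumptions for illustration, not the plugin's actual implementation; `Upload` from `@aws-sdk/lib-storage` switches to multipart upload automatically for large bodies.

```ts
import {
  SQSClient,
  ReceiveMessageCommand,
  DeleteMessageCommand,
} from "@aws-sdk/client-sqs";
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";
import { Readable } from "stream";

const sqs = new SQSClient({});
const s3 = new S3Client({});

// Hypothetical queue URL and bucket names, for illustration only.
const QUEUE_URL =
  "https://sqs.us-east-1.amazonaws.com/123456789012/transfer-queue";
const SRC_BUCKET = "source-bucket";
const DST_BUCKET = "destination-bucket";

async function pollOnce(): Promise<void> {
  const { Messages } = await sqs.send(
    new ReceiveMessageCommand({
      QueueUrl: QUEUE_URL,
      MaxNumberOfMessages: 1,
      WaitTimeSeconds: 20, // long polling
    })
  );

  for (const msg of Messages ?? []) {
    // Assumed message shape: { "key": "<object key>" }.
    const { key } = JSON.parse(msg.Body!);

    // Step 7: get (download) the object from the source bucket.
    const obj = await s3.send(
      new GetObjectCommand({ Bucket: SRC_BUCKET, Key: key })
    );

    // Step 8: put (upload) to the destination bucket; Upload performs
    // an S3 multipart upload automatically when the body is large.
    await new Upload({
      client: s3,
      params: { Bucket: DST_BUCKET, Key: key, Body: obj.Body as Readable },
    }).done();

    // Delete the message only after a successful transfer, so a failed
    // attempt becomes visible again in the queue and is retried.
    await sqs.send(
      new DeleteMessageCommand({
        QueueUrl: QUEUE_URL,
        ReceiptHandle: msg.ReceiptHandle!,
      })
    );
  }
}
```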

s3-plugin-architect.png

326 KB (binary file changed)

source/bin/main-stack.ts

Lines changed: 4 additions & 0 deletions

@@ -43,6 +43,10 @@ stackSuppressions([
 ], [
   { id: 'AwsSolutions-IAM5', reason: 'some policies need to get dynamic resources' },
   { id: 'AwsSolutions-IAM4', reason: 'these policies is used by CDK Customer Resource lambda' },
+  {
+    id: 'AwsSolutions-L1',
+    reason: 'not applicable to use the latest lambda runtime version for aws cdk cr',
+  },
 ]);
 
 Aspects.of(app).add(new AwsSolutionsChecks());
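For context: `stackSuppressions` is a helper defined in this repository, and the cdk-nag rule `AwsSolutions-L1` flags Lambda functions not using the latest runtime — here the runtime is pinned by CDK's custom-resource provider, which the project cannot upgrade directly. A hypothetical standalone sketch of the same suppression using cdk-nag's public `NagSuppressions` API (the app, stack name, and reason text are illustrative):

```ts
import { App, Stack, Aspects } from "aws-cdk-lib";
import { AwsSolutionsChecks, NagSuppressions } from "cdk-nag";

const app = new App();
const stack = new Stack(app, "ExampleStack"); // hypothetical stack name

// Run the AwsSolutions rule pack against every construct in the app.
Aspects.of(app).add(new AwsSolutionsChecks());

// Suppress a specific finding at stack scope with a documented reason,
// mirroring the suppression added in this commit.
NagSuppressions.addStackSuppressions(stack, [
  {
    id: "AwsSolutions-L1",
    reason:
      "Runtime is pinned by the CDK custom-resource provider; " +
      "latest Lambda runtime is not applicable.",
  },
]);
```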
4 files renamed without changes.
