[FEATURE] Improve EncryptorImpl with Asynchronous Handling for Scalability #3919
Conversation
…ility Removed the usage of CountDownLatch. Every request is submitted and returns a Future. Added a list to track ongoing master key generation: if a tenant id is in the list, its key generation is in progress, and the thread will wait until the other thread completes the key generation. Meanwhile the system accepts other requests; if a key is already available in the map, that request proceeds, otherwise key generation for the new tenant starts in a different thread. So key generation for multiple tenants can happen simultaneously. Resolves opensearch-project#3510 Signed-off-by: Abdul Muneer Kolarkunnu <[email protected]>
Awesome! Thanks for raising the PR. This will be a great improvement. I'll start actively reviewing this PR from tomorrow. Can you also please update your PR with details such as how you tested for single tenancy and for multi tenancy?
Codecov Report
@@             Coverage Diff             @@
##               main    #3919     +/-  ##
=========================================
  Coverage     80.39%   80.40%
- Complexity     7910     7915      +5
=========================================
  Files           693      693
  Lines         34849    34863     +14
  Branches       3872     3877      +5
=========================================
+ Hits          28018    28030     +12
  Misses         5096     5096
- Partials       1735     1737      +2
Resolved (outdated) review comments on:
- common/src/main/java/org/opensearch/ml/common/connector/Connector.java
- common/src/main/java/org/opensearch/ml/common/connector/HttpConnector.java
- common/src/main/java/org/opensearch/ml/common/connector/McpConnector.java
- common/src/test/java/org/opensearch/ml/common/connector/AwsConnectorTest.java
- ml-algorithms/src/main/java/org/opensearch/ml/engine/encryptor/EncryptorImpl.java
Hey @akolarkunnu, first I want to thank you for your contribution! I've experienced flaky test behavior from this particular class (see #2888) and this may help with that.
That said, I've made multiple implementations on OpenSearch plugins and our Remote Metadata SDK using futures and learned some hard lessons along the way. I did leave a line by line review above but wanted to follow up with some general comments.
- TL;DR on using Futures: avoid them if you can implement something with an `ActionListener`. These are longstanding, well-established and tested callback mechanisms that handle most asynchronous work in OpenSearch. When you have a single "thread" with async breaks they are almost always the correct way to handle it. When you are awaiting multiple things to happen, that's where it gets complex. That may be the case here.
- OpenSearch has an `ActionFuture` class that doubles as an `ActionListener`. It has an `actionGet()` method that implements some exception handling (unwrapping the nested exceptions, etc.) that is pretty useful. Please take a look at using that.
- Generally speaking, I see the code replacing an entire map with an entirely new map every time an encrypt/decrypt call is made. It's hard for me to think this approach is thread safe without synchronization of the map. It seems to me we can probably easily add to a map without touching the other keys, but the existing "replace the map" implementation raises a lot of questions.
- Consider an approach using an `AtomicReference` to the decryptedCredential map, taking advantage of the `updateAndGet()` method. (I'm not sure that will work here, but it seems a possible improvement over iterate-all-and-replace-without-atomicity.)
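A minimal sketch of the `AtomicReference.updateAndGet()` idea suggested above: each writer builds a new immutable snapshot from the current one and publishes it atomically, so concurrent writers cannot lose each other's entries. The class and method names here are illustrative, not the PR's actual code.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

// Illustrative stand-in for the decryptedCredential map handling.
class CredentialCache {
    private final AtomicReference<Map<String, String>> decryptedCredential =
        new AtomicReference<>(Map.of());

    // Add one entry. updateAndGet retries on contention, so two threads
    // adding different keys at the same time both end up in the map.
    Map<String, String> put(String key, String value) {
        return decryptedCredential.updateAndGet(current -> {
            Map<String, String> next = new HashMap<>(current);
            next.put(key, value);
            return Map.copyOf(next); // publish an immutable snapshot
        });
    }

    Map<String, String> snapshot() {
        return decryptedCredential.get();
    }
}
```

Unlike a plain "build a new map and assign the field" approach, the compare-and-set loop inside `updateAndGet` guarantees the update is applied to the latest snapshot.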
I couldn't agree more on this point.
Resolves opensearch-project#3510 Signed-off-by: Abdul Muneer Kolarkunnu <[email protected]>
@dbwiddis @dhrubo-os Please review the latest changes.
@dbwiddis @dhrubo-os @pyek-bot Gentle reminder for review!
Hey @akolarkunnu thanks for continuing to try to iterate on this, but using thread-blocking code for concurrency is not ideal.
Please try to find a solution using only a chain of action listeners without blocking. These all naturally block on IO threads and don't consume thread pool resources.
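The "chain of action listeners" the reviewer describes can be sketched as follows. To keep the example self-contained, a minimal `Listener` interface stands in for OpenSearch's real `org.opensearch.core.action.ActionListener`; the flow and names (`EncryptFlow`, `getMasterKey`) are illustrative, not the PR's actual API.

```java
// Simplified stand-in for OpenSearch's ActionListener callback contract.
interface Listener<T> {
    void onResponse(T result);
    void onFailure(Exception e);
}

class EncryptFlow {
    // Step 1: obtain the master key asynchronously; step 2 (encrypt) runs
    // inside the callback, so no latch, wait(), or Future.get() is needed.
    static void encrypt(String tenantId, String plaintext, Listener<String> listener) {
        getMasterKey(tenantId, new Listener<String>() {
            @Override public void onResponse(String masterKey) {
                // Runs only once the key is available.
                listener.onResponse("enc(" + masterKey + "," + plaintext + ")");
            }
            @Override public void onFailure(Exception e) {
                listener.onFailure(e); // propagate instead of blocking on it
            }
        });
    }

    // Simulated async key lookup; real code would query the metadata index
    // and invoke the listener from its response handler.
    static void getMasterKey(String tenantId, Listener<String> listener) {
        listener.onResponse("key-" + tenantId);
    }
}
```

The calling thread returns immediately after registering the callbacks; completion and failure both travel through the listener chain rather than through a blocked thread.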
initMasterKey(tenantId);
final AwsCrypto crypto = AwsCrypto.builder().withCommitmentPolicy(CommitmentPolicy.RequireEncryptRequireDecrypt).build();
JceMasterKey jceMasterKey = createJceMasterKey(tenantId);
CountDownLatch latch = new CountDownLatch(1);
We really shouldn't use a latch here, as waiting for it blocks the thread and consumes a thread pool resource. Try to stick to plain action listeners here.
final AwsCrypto crypto = AwsCrypto.builder().withCommitmentPolicy(CommitmentPolicy.RequireEncryptRequireDecrypt).build();
JceMasterKey jceMasterKey = createJceMasterKey(tenantId);
CountDownLatch latch = new CountDownLatch(1);
AtomicReference<Object> decryptResponse = new AtomicReference<>();
Using `Object` for a reference here creates a lot of complexity and loses type safety.
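To illustrate the reviewer's point, a typed `AtomicReference` avoids casts and `instanceof` checks on the read side. `DecryptResult` is a hypothetical placeholder for whatever response type the real code stores.

```java
import java.util.concurrent.atomic.AtomicReference;

class TypedRef {
    // Hypothetical response type; the real code would use the SDK's type.
    record DecryptResult(byte[] plaintext) {}

    static DecryptResult roundTrip(byte[] data) {
        // Parameterized with the concrete type, not Object.
        AtomicReference<DecryptResult> response = new AtomicReference<>();
        response.set(new DecryptResult(data)); // no cast on write...
        return response.get();                 // ...and none on read
    }
}
```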
// checking and waiting if the master key generation triggered by any other thread
if (existingWaitingListener != null && tenantWaitingListenerMap.containsKey(tenantId)) {
    log.info("Waiting for other thread to generate master key");
    waitingListeners.wait();
This is a blocking call, and it's inside a synchronized block, so it'll never release `waitingListeners`, blocking both this method and every other method synchronizing on the same object.
Description
This fix mainly addresses the issue of duplicate master key generation. If key generation for a tenant is in progress and another encryption/decryption request comes in with the same tenant id, it will again try to generate another master key, because the old request is still in the process of creating the key and has not yet completed. So there is a chance of creating duplicate keys for a single tenant. This fix also improves scalability for this scenario.
Fix: Moved the CountDownLatch from initMasterKey() to the encrypt() and decrypt() methods. All tenants waiting for key generation are stored in the map tenantWaitingListenerMap. Whenever key generation completes or fails for a tenant, all requestors waiting for that tenant's key generation are notified.
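The deduplication idea behind the fix can be sketched without blocking: only the first caller per tenant triggers key generation, and later callers register a callback that fires when the key arrives. This is a simplified, synchronous sketch under assumed names (`KeyWaiters`, `generate`), not the PR's actual implementation, which uses tenantWaitingListenerMap with wait/notify.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

class KeyWaiters {
    private final Map<String, String> keys = new ConcurrentHashMap<>();
    private final Map<String, List<Consumer<String>>> waiting = new ConcurrentHashMap<>();

    void getKey(String tenantId, Consumer<String> callback) {
        String existing = keys.get(tenantId);
        if (existing != null) {
            callback.accept(existing); // fast path: key already cached
            return;
        }
        // computeIfAbsent is atomic: exactly one caller observes a freshly
        // created list and becomes responsible for generating the key.
        boolean[] first = {false};
        waiting.computeIfAbsent(tenantId, t -> {
            first[0] = true;
            return new CopyOnWriteArrayList<>();
        }).add(callback);
        if (first[0]) {
            generate(tenantId); // would be dispatched asynchronously in real code
        }
    }

    private void generate(String tenantId) {
        String key = "key-" + tenantId; // stand-in for real master key creation
        keys.put(tenantId, key);
        // Drain and notify every waiter for this tenant exactly once.
        List<Consumer<String>> callbacks = waiting.remove(tenantId);
        if (callbacks != null) {
            callbacks.forEach(cb -> cb.accept(key));
        }
    }
}
```

Because every waiter is a callback rather than a parked thread, no thread pool resources are consumed while the key is being generated, which matches the reviewers' guidance above.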
Testing:
Added more test cases covering multi-threaded success and failure use cases.
Also manually tested single and multi tenant use cases with multiple invocations using scripts. Tested this by setting the property "plugins.ml_commons.multi_tenancy_enabled".
Related Issues
Resolves #3510
Check List
- Commits are signed per the DCO using --signoff.
- By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following the Developer Certificate of Origin and signing off your commits, please check here.