Fix KeyError in s3fs _prune_deleted_files function #68452
base: 3007.x
Conversation
Fixes issue saltstack#68335, where fileserver.update fails with KeyError: 'Key' when using S3 buckets in multiple environments per bucket mode. The issue occurs in the _prune_deleted_files function, where the code assumes a different data structure than what is actually provided. The fix properly handles the nested bucket structure by iterating through the buckets and the objects within each bucket to extract the 'Key' field correctly.

Signed-off-by: GRomR1 <[email protected]>
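A minimal sketch of the iteration the commit describes, assuming (as the review diff below suggests) that the per-environment metadata maps bucket names to lists of S3 object dicts. The helper name and sample data are hypothetical, and this is not the literal patch:

    def _collect_s3_keys(meta):
        """Flatten {bucket_name: [object dicts]} into the set of S3 object keys."""
        keys = set()
        for objects in meta.values():   # one entry per bucket
            for obj in objects:         # each dict describes one S3 object
                keys.add(obj["Key"])    # the field the broken code failed to reach
        return keys

    # Hypothetical metadata: one bucket holding files for two environments.
    meta = {"shared-bucket": [{"Key": "base/top.sls"}, {"Key": "dev/app/init.sls"}]}
    assert _collect_s3_keys(meta) == {"base/top.sls", "dev/app/init.sls"}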
Hi there! Welcome to the Salt Community! Thank you for making your first contribution. We have a lengthy process for issues and PRs. Someone from the Core Team will follow up as soon as possible. In the meantime, here's some information that may help as you continue your Salt journey. There are lots of ways to get involved in our community. Every month, there are around a dozen opportunities to meet with other contributors and the Salt Core team and collaborate in real time. The best way to keep track is by subscribing to the Salt Community Events Calendar.
salt/fileserver/s3fs.py (outdated review thread)
    for bucket in meta.keys():
        for obj in meta[bucket]:
bdrx312 left a comment
To iterate over items (key, value pairs) in a dict, use .items(), but since this code isn't using the key, just use .values():

    for objects in meta.values():
        for obj in objects:

Alternatively, use itertools:

    import itertools

    for obj in itertools.chain.from_iterable(meta.values()):
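As a quick, self-contained illustration of the two suggestions (the sample data is made up), both forms visit exactly the same objects:

    import itertools

    meta = {  # hypothetical per-bucket metadata
        "bucket-a": [{"Key": "base/top.sls"}, {"Key": "base/web.sls"}],
        "bucket-b": [{"Key": "dev/db.sls"}],
    }

    nested = [obj["Key"] for objects in meta.values() for obj in objects]
    chained = [obj["Key"] for obj in itertools.chain.from_iterable(meta.values())]
    assert nested == chained  # both walk every object once, ignoring the bucket names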
GRomR1 replied
Thank you for the excellent feedback! You're absolutely right - using .values() is much more Pythonic and readable. I've applied your suggestion and updated the code accordingly.
The improved version is cleaner and more efficient since we're not using the dictionary keys anyway. I've created a new commit with this improvement: 38d24bc
Thanks for helping improve the code quality! 🙏
twangboy left a comment
Please write a test for this as well
Use .values() instead of .keys() for a more Pythonic iteration style. This addresses the review comment from bdrx312.

Signed-off-by: GRomR1 <[email protected]>
Add comprehensive tests for both single environment per bucket and multiple environments per bucket modes to ensure the KeyError fix works correctly in both scenarios. Tests cover:
- File deletion from cache when files are removed from S3
- Proper handling of the nested metadata structure in multiple env mode
- Cache cleanup for both bucket configuration modes

Signed-off-by: GRomR1 <[email protected]>
Thank you for the feedback! I've added comprehensive tests for the s3fs _prune_deleted_files fix. The tests cover both configuration modes: single environment per bucket and multiple environments per bucket.

Both tests verify that files deleted from S3 are also pruned from the local cache and that the nested metadata structure is handled correctly.

The tests are included in this PR. Please review them and let me know if any additional test coverage is needed.
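The PR's actual tests are in its changed files and are not reproduced in this thread. Purely as an illustration of the kind of checks described above, a unit test might look like the sketch below; it reuses the hypothetical _collect_s3_keys helper sketched earlier rather than Salt's real _prune_deleted_files, whose exact signature is not shown here:

    def _collect_s3_keys(meta):
        # Hypothetical stand-in for the nested iteration added by the fix.
        return {obj["Key"] for objects in meta.values() for obj in objects}


    def test_multiple_envs_per_bucket_keys_are_found():
        # One bucket holding files for two salt environments (made-up names).
        meta = {
            "shared-bucket": [
                {"Key": "base/top.sls"},
                {"Key": "dev/app/init.sls"},
            ],
        }
        assert _collect_s3_keys(meta) == {"base/top.sls", "dev/app/init.sls"}


    def test_deleted_file_disappears_from_results():
        # After a file is removed from S3 its key is gone, which is what lets
        # the pruning logic drop the stale entry from the local cache.
        meta = {"shared-bucket": [{"Key": "base/top.sls"}]}
        assert "dev/app/init.sls" not in _collect_s3_keys(meta)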
Fixes issue #68335 where fileserver.update fails with KeyError: 'Key' when using S3 buckets in multiple environments per bucket mode.
Problem
The issue occurs in the _prune_deleted_files function, where the code assumes a different data structure than what is actually provided. When using multiple environments per bucket mode, the metadata structure is nested differently than expected.

Solution
The fix properly handles the nested bucket structure by iterating through buckets and objects within each bucket to extract the 'Key' field correctly.
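To make "nested differently than expected" concrete: the exact structures are not shown in this thread, so the shapes and names below are assumptions inferred from the description and the review diff, not Salt's documented format:

    # Shape the broken code effectively assumed (assumption): a flat sequence
    # of object dicts, each indexable with ['Key'].
    assumed_flat = [{"Key": "base/top.sls"}, {"Key": "dev/app/init.sls"}]

    # Shape actually provided in multiple-environments-per-bucket mode
    # (inferred from the fixed loop): bucket name -> list of object dicts.
    actual_nested = {
        "shared-bucket": [
            {"Key": "base/top.sls"},
            {"Key": "dev/app/init.sls"},
        ],
    }

    # The fix therefore iterates buckets first, then the objects inside each one:
    for objects in actual_nested.values():
        for obj in objects:
            print(obj["Key"])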
Changes Made

1. Core Fix
- Fixed the _prune_deleted_files function in salt/fileserver/s3fs.py

2. Code Improvements
- Changed meta.keys() to meta.values() for cleaner iteration

3. Test Coverage
- test_prune_deleted_files_multiple_envs_per_bucket - Tests the fix for multiple environments per bucket mode
- test_prune_deleted_files_single_env_per_bucket - Tests single environment per bucket mode to prevent regression
- Tests added for the _prune_deleted_files function

Testing
This fix resolves the KeyError that was preventing fileserver.update from working with S3 buckets in multiple environments per bucket mode. The added tests ensure the fix works correctly and prevent future regressions.
Related Issues
Fixes #68335
Signed-off-by: GRomR1 <[email protected]>