fix: add pagination to 'submit all' to process all datasets #252
dsanchezmatilla wants to merge 2 commits into ckan:master from
Conversation
```python
for page in range(0, num_pages):
    paged_response = tk.get_action('package_search')({'ignore_auth': True}, arguments)
    package_list.extend([pkg['id'] for pkg in paged_response['results']])
```
Suggestion: Since the whole purpose of pagination is to handle arbitrarily large numbers of datasets, perhaps it would be better to process each batch of 1000 before retrieving the next, rather than assembling a package ID list of arbitrarily large size?
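The batch-at-a-time flow ThrawnCA suggests could look roughly like the sketch below. `fake_package_search` is a stand-in for `tk.get_action('package_search')`, and the page size and dataset IDs are invented illustration values; the real command would call the CKAN action with the same `start`/`rows` arguments.

```python
# Sketch of the suggestion: submit each page of results before fetching
# the next one, so a full package ID list never sits in memory.
# fake_package_search stands in for tk.get_action('package_search');
# PAGE_SIZE and ALL_IDS are made-up illustration values.

PAGE_SIZE = 3
ALL_IDS = [f'dataset-{i}' for i in range(8)]  # pretend site contents

def fake_package_search(context, data_dict):
    start = data_dict.get('start', 0)
    rows = data_dict.get('rows', 10)
    return {'count': len(ALL_IDS),
            'results': [{'id': i} for i in ALL_IDS[start:start + rows]]}

def submit_all(submit_fn):
    start = 0
    while True:
        page = fake_package_search({'ignore_auth': True},
                                   {'start': start, 'rows': PAGE_SIZE})
        if not page['results']:
            break
        for pkg in page['results']:  # submit this batch right away
            submit_fn(pkg['id'])
        start += PAGE_SIZE

submitted = []
submit_all(submitted.append)
```

Because each page is discarded after its jobs are submitted, peak memory stays proportional to the page size rather than the total dataset count.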
Building on @ThrawnCA's input, you could get the total number of packages in the system (without retrieving all the data), then ask the question.
If yes, then do the pagination and submit the jobs per page so you can discard/release memory. This becomes essential when you have 100,000+ datasets and are running in small containers.
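The count-first step described above can be sketched as below. In CKAN's `package_search`, passing `rows=0` returns only metadata such as the total `count`, with no result rows. `fake_package_search` is a stand-in for the real action, and the dataset total is an invented example value.

```python
# Sketch of the count-first approach: query package_search with rows=0
# to get only the total count, then prompt before doing any pagination.
# fake_package_search stands in for tk.get_action('package_search');
# TOTAL is an invented example value.

TOTAL = 2534  # pretend number of datasets on the site

def fake_package_search(context, data_dict):
    start = data_dict.get('start', 0)
    rows = data_dict.get('rows', 10)
    ids = [f'dataset-{i}' for i in range(TOTAL)]
    return {'count': TOTAL,
            'results': [{'id': i} for i in ids[start:start + rows]]}

# Step 1: cheap count query -- rows=0 returns the count, no result data.
count = fake_package_search({'ignore_auth': True}, {'rows': 0})['count']

# Step 2: the real command would confirm here before paginating, e.g.:
# input(f'About to submit {count} datasets. Continue? y/N\n')
```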
duttonw left a comment
It's a good contribution, but it does require spelling corrections and a slight change in logic.
Overall, a good PR :)
```python
        {'ignore_auth': True}, {})
    check_start = input('This action could take a few minutes depending on the number of datasets:\n'
                        'Did you want to start the process? y/N\n')
    if check_start.lower() == 'y':
        for p_id in package_list:
            self._submit_package(p_id, user, indent=2, sync=sync, queue=queue)
    else:
        print('Submit all process stopped')
```
This PR adds a loop to paginate through all datasets in batches of 1000 using package_search, so that all datasets are submitted to the xloader, not just the first 1000.
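The arithmetic behind the batching is a ceiling division: the number of `package_search` calls is the total dataset count divided by the page size, rounded up. The total used below is an invented example figure.

```python
# With a page size of 1000, how many package_search calls are needed?
# 2534 is an invented example dataset count.
import math

PAGE_SIZE = 1000
total = 2534
num_pages = math.ceil(total / PAGE_SIZE)  # 3 pages: 1000 + 1000 + 534
```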