Releases: broadinstitute/sparklespray
v5.8.0
This release adds support for mounting Google Cloud Storage buckets as directories. To do so, specify the disk type as `gcs` and the name as a Google bucket path such as `gs://mybucket/folder` in your `.sparkles` config.
For example:
mount_count=2
mount_2_path=/mnt/disks/bucket
mount_2_type=gcs
mount_2_name=gs://sparkles-bucket/sparkles-testing
When the job starts, the files in gs://sparkles-bucket/sparkles-testing will be accessible as read-only files under the path /mnt/disks/bucket.
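The resulting mapping from bucket objects to local paths can be sketched as follows (the helper name is illustrative, not part of sparklespray; the bucket and mount paths are taken from the example config above):

```python
def mounted_path(gcs_url: str,
                 mount_name: str = "gs://sparkles-bucket/sparkles-testing",
                 mount_path: str = "/mnt/disks/bucket") -> str:
    """Translate a gs:// object URL into the local path it appears at once
    the bucket prefix (mount_2_name) is mounted at mount_2_path."""
    if not gcs_url.startswith(mount_name + "/"):
        raise ValueError(f"{gcs_url} is not under {mount_name}")
    # Replace the bucket prefix with the local mount point.
    return mount_path + gcs_url[len(mount_name):]

# An object inside the mounted folder resolves to a read-only local file:
print(mounted_path("gs://sparkles-bucket/sparkles-testing/inputs/data.csv"))
# /mnt/disks/bucket/inputs/data.csv
```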
v5.7.0
This version includes:
- In `sparkles sub ...`, the job name (`-n`) is now required. Previously, omitting it caused sparkles to generate a job name that was then used in a Batch API call, but the Batch API has constraints on the format of the ID, and the generated job ID violated those constraints.
- In `sparkles setup`, grants are now automatically applied to the service account so that it has access to all Docker images residing in any Artifact Registry mentioned in the config.
- In `sparkles setup`, additional retry logic has been added for known spurious failures that occur when an account is first created.
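The Batch API ID constraints behind the first change can be sketched as a validation check. The pattern below assumes the commonly documented Cloud Batch format (start with a lowercase letter; only lowercase letters, digits, and hyphens; no trailing hyphen; at most 63 characters); the function name is illustrative:

```python
import re

# Assumed job-ID format per Google Cloud Batch documentation.
_JOB_ID_RE = re.compile(r"^[a-z]([a-z0-9-]{0,61}[a-z0-9])?$")

def is_valid_batch_job_id(job_id: str) -> bool:
    """Return True if job_id satisfies the assumed Batch ID constraints."""
    return _JOB_ID_RE.fullmatch(job_id) is not None

print(is_valid_batch_job_id("my-job-1"))       # True
print(is_valid_batch_job_id("MyJob_2024-01"))  # False: uppercase and underscore
```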
v5.5.0
Fixes a problem where Docker images configured to run as a non-root user would fail on startup.
(Specifically, they'd report a permissions error when trying to create /mnt/data.)
This has been worked around by adding the option "-u 0" to the docker command, forcing the user within the container to be root.
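The workaround amounts to inserting "-u 0" into the docker invocation. A minimal sketch, assuming a hypothetical helper that assembles the command (the function name and volume mount are illustrative, not sparklespray's actual code):

```python
def build_docker_command(image: str, task_cmd: list) -> list:
    """Assemble a docker run invocation for a task.

    "-u 0" forces the process inside the container to run as root,
    working around images whose default non-root user lacks permission
    to create /mnt/data on startup.
    """
    return [
        "docker", "run",
        "-u", "0",                    # run as root inside the container
        "-v", "/mnt/data:/mnt/data",  # illustrative data-disk mount
        image,
    ] + task_cmd

print(build_docker_command("ubuntu:22.04", ["echo", "hello"]))
```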
v5.4.1
v5.4.0
v5.3.6
v5.3.5
Various bug fixes in running workflows
v5.3.1
Various bug fixes to workflow execution and improvements to error messages
Full Changelog: v5.3.0...v5.3.1
v5.3.0
Various bug fixes
v5.2.0
Switched job submission to use the Google Batch API instead of the Google Life Sciences API.
Added a new "workflow" mode for running a sequence of jobs.