
feat: add custom mount on request #154

Open
yunnian wants to merge 3 commits into alibaba:main from yunnian:feature/custom-mount

Conversation

@yunnian
Contributor

yunnian commented Jan 30, 2026

Summary

  • What is changing and why?

Previously I only supported custom mounts via the batchsandbox-template.yaml template, which does not satisfy the need to dynamically mount per-session data directories. Each user/agent session can generate files; when the container is destroyed and the session later resumes, those files must be re-mounted and restored. The mount path therefore needs to accept dynamic identifiers (e.g., uid-sessionid) so it can map back to the correct persisted data. This change adds request-level volumes and mounts to support that workflow.

Example (request-level mounts/volumes):

curl -X POST "http://localhost:8080/v1/sandboxes" \
  -H "Content-Type: application/json" \
  -d '{
    "image": {"uri": "python:3.11-slim"},
    "entrypoint": ["python","-m","http.server","8000"],
    "timeout": 3600,
    "resourceLimits": {"cpu":"500m","memory":"512Mi"},
    "volumes": [
      {"name":"user-session-data","persistentVolumeClaim":{"claimName":"user-session-data"}},
      {"name":"public-skills-dir","persistentVolumeClaim":{"claimName":"public-skills-dir"}}
    ],
    "mounts": [
      {"name":"user-session-data","mountPath":"/workspace","subPath":"uid-1-sessionId-1"},
      {"name":"public-skills-dir","mountPath":"/skills","readOnly":true}
    ]
  }'


Testing

  • Not run (explain why)
  • Unit tests
  • Integration tests
  • e2e / manual verification

Breaking Changes

  • None
  • Yes (describe impact and migration path)

Checklist

  • Linked Issue or clearly described motivation
  • Added/updated docs (if needed)
  • Added/updated tests (if needed)
  • Security impact considered
  • Backward compatibility considered

@hittyt
Collaborator

hittyt commented Feb 4, 2026

Hi @yunnian , thanks for the contribution and for sharing your use case!

The dynamic per-session mounting scenario you described is exactly the kind of workflow we want to support. We've been working on a formal volume support proposal that treats volumes as first-class citizens in the Lifecycle API.

Our design takes a slightly different approach, unifying volume definition and mount configuration into a single structure:

volumes:
  - name: user-session-data
    pvc:
      claimName: user-session-data
    mountPath: /workspace
    subPath: uid-1-sessionId-1
    accessMode: RW

This keeps the API simpler for the common case (one volume = one mount point), while still supporting subPath for your per-session isolation needs.
We've just landed the OpenAPI spec and server-side schema in #166.
Your PVC use case aligns perfectly with what we're building.
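
For comparison, the request-level example from this PR would look roughly like this under the unified structure (a sketch only, not the final API: field names follow the YAML sketch above, and the RO access mode for the read-only skills mount is an assumption):

curl -X POST "http://localhost:8080/v1/sandboxes" \
  -H "Content-Type: application/json" \
  -d '{
    "image": {"uri": "python:3.11-slim"},
    "entrypoint": ["python","-m","http.server","8000"],
    "volumes": [
      {"name":"user-session-data","pvc":{"claimName":"user-session-data"},"mountPath":"/workspace","subPath":"uid-1-sessionId-1","accessMode":"RW"},
      {"name":"public-skills-dir","pvc":{"claimName":"public-skills-dir"},"mountPath":"/skills","accessMode":"RO"}
    ]
  }'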

Would you be interested in collaborating on pvc backend support? We'd love your input on the implementation, especially since you have a concrete production scenario. Feel free to check out the OSEP and let us know if the proposed design covers your needs!

@yunnian
Contributor Author

yunnian commented Feb 5, 2026

@hittyt
Thanks for the invitation! I'm interested in collaborating, but I've been a bit busy recently, so my pace may be slower.
The backend needs to cover both Docker and K8s; my previous change only covered K8s.
Do you want me to implement the backend in the current PR, or close it and build on #166?
Do you have a faster coordination channel, such as a WeChat group? If so, please send it to my email at 936321732@qq.com.
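
For illustration, a Docker-backend equivalent of the K8s example above could be approximated with bind mounts, roughly like the sketch below (the host paths are hypothetical and this is not an existing implementation in this repo, only standard docker run options):

docker run --rm \
  --mount type=bind,source=/data/user-session-data/uid-1-sessionId-1,target=/workspace \
  --mount type=bind,source=/data/public-skills-dir,target=/skills,readonly \
  python:3.11-slim python -m http.server 8000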
