
Commit 90e4346

Document the change to three containers (#1581)
Beginning with the 22.06 release, the Merlin team provides the following three containers that are capable of both training and inference:

- `merlin-hugectr`
- `merlin-tensorflow`
- `merlin-pytorch`

This is a change from the 22.05 release, which used six containers: for each framework, one container for training and a separate container for inference. This commit also removes the step to install JupyterLab.
1 parent a6733bb commit 90e4346

File tree

8 files changed: +58 −52 lines


.pre-commit-config.yaml

Lines changed: 5 additions & 5 deletions
@@ -1,20 +1,20 @@
 repos:
   - repo: https://github.com/timothycrosley/isort
-    rev: 5.9.3
+    rev: 5.10.1
     hooks:
       - id: isort
         additional_dependencies: [toml]
         exclude: examples/*
   - repo: https://github.com/python/black
-    rev: 21.7b0
+    rev: 22.3.0
     hooks:
       - id: black
   - repo: https://gitlab.com/pycqa/flake8
     rev: 3.9.2
     hooks:
       - id: flake8
   - repo: https://github.com/pycqa/pylint
-    rev: pylint-2.7.4
+    rev: v2.14.1
     hooks:
       - id: pylint
   - repo: https://github.com/econchick/interrogate
@@ -28,12 +28,12 @@ repos:
     hooks:
       - id: codespell
   - repo: https://github.com/PyCQA/bandit
-    rev: 1.7.0
+    rev: 1.7.4
     hooks:
       - id: bandit
         args: [--verbose, -ll, -x, tests,examples,bench]
   - repo: https://github.com/s-weigand/flake8-nb
-    rev: v0.3.0
+    rev: v0.4.0
     hooks:
       - id: flake8-nb
         files: \.ipynb$

README.md

Lines changed: 5 additions & 8 deletions
@@ -65,17 +65,14 @@ pip install nvtabular
 #### Installing NVTabular with Docker
 
 NVTabular Docker containers are available in the [NVIDIA Merlin container
-repository](https://catalog.ngc.nvidia.com/?filters=&orderBy=scoreDESC&query=merlin). There are six different containers:
-
+repository](https://catalog.ngc.nvidia.com/?filters=&orderBy=scoreDESC&query=merlin).
+The following table summarizes the key information about the containers:
 
 | Container Name | Container Location | Functionality |
 | -------------------------- | ------------------ | ------------- |
-| merlin-tensorflow-inference | https://catalog.ngc.nvidia.com/orgs/nvidia/teams/merlin/containers/merlin-tensorflow-inference | NVTabular, Tensorflow and Triton Inference |
-| merlin-pytorch-inference | https://catalog.ngc.nvidia.com/orgs/nvidia/teams/merlin/containers/merlin-pytorch-inference | NVTabular, PyTorch, and Triton Inference |
-| merlin-inference | https://ngc.nvidia.com/catalog/containers/nvidia:merlin:merlin-inference | NVTabular, HugeCTR, and Triton Inference |
-| merlin-training | https://ngc.nvidia.com/catalog/containers/nvidia:merlin:merlin-training | NVTabular and HugeCTR |
-| merlin-tensorflow-training | https://ngc.nvidia.com/catalog/containers/nvidia:merlin:merlin-tensorflow-training | NVTabular, TensorFlow, and HugeCTR Tensorflow Embedding plugin |
-| merlin-pytorch-training | https://ngc.nvidia.com/catalog/containers/nvidia:merlin:merlin-pytorch-training | NVTabular and PyTorch |
+| merlin-hugectr | https://catalog.ngc.nvidia.com/orgs/nvidia/teams/merlin/containers/merlin-hugectr | NVTabular, HugeCTR, and Triton Inference |
+| merlin-tensorflow | https://catalog.ngc.nvidia.com/orgs/nvidia/teams/merlin/containers/merlin-tensorflow | NVTabular, Tensorflow and Triton Inference |
+| merlin-pytorch | https://catalog.ngc.nvidia.com/orgs/nvidia/teams/merlin/containers/merlin-pytorch | NVTabular, PyTorch, and Triton Inference |
 
 To use these Docker containers, you'll first need to install the [NVIDIA Container Toolkit](https://github.com/NVIDIA/nvidia-docker) to provide GPU support for Docker. You can use the NGC links referenced in the table above to obtain more information about how to launch and run these containers. To obtain more information about the software and model versions that NVTabular supports per container, see [Support Matrix](https://github.com/NVIDIA/NVTabular/blob/main/docs/source/resources/support_matrix.rst).
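The renamed containers from the table above can be pulled and launched directly. A minimal sketch follows; the `22.06` tag matches this release, but check the NGC catalog for the tags that are currently published:

```shell
# Pull one of the three merged training-and-inference images
docker pull nvcr.io/nvidia/merlin/merlin-tensorflow:22.06

# Launch it with GPU access and an interactive shell
docker run --gpus all --rm -it nvcr.io/nvidia/merlin/merlin-tensorflow:22.06 /bin/bash
```

The same pattern applies to `merlin-hugectr` and `merlin-pytorch`; only the image name changes.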

examples/README.md

Lines changed: 29 additions & 22 deletions
@@ -60,46 +60,53 @@ Rossmann operates over 3,000 drug stores across seven European countries. Histor
 
 ## Running the Example Notebooks
 
-You can run the example notebooks by [installing NVTabular](https://github.com/NVIDIA/NVTabular#installation) and other required libraries. Alternatively, Docker containers are available on http://ngc.nvidia.com/catalog/containers/ with pre-installed versions. Depending on which example you want to run, you should use any one of these Docker containers:
+You can run the example notebooks by [installing NVTabular](https://github.com/NVIDIA/NVTabular#installation) and other required libraries.
+Alternatively, Docker containers are available from the NVIDIA GPU Cloud (NGC) at <http://ngc.nvidia.com/catalog/containers/> with pre-installed versions.
+Depending on which example you want to run, you should use any one of these Docker containers:
 
-- Merlin-Tensorflow-Training (contains NVTabular with TensorFlow)
-- Merlin-Pytorch-Training (contains NVTabular with PyTorch)
-- Merlin-Training (contains NVTabular with HugeCTR)
-- Merlin-Tensorflow-Inference (contains NVTabular with TensorFlow and Triton Inference support)
+- `merlin-hugectr` (contains NVTabular with HugeCTR)
+- `merlin-tensorflow` (contains NVTabular with TensorFlow)
+- `merlin-pytorch` (contains NVTabular with PyTorch)
+
+Beginning with the 22.06 release, each container includes the software for training models and performing inference.
 
 To run the example notebooks using Docker containers, do the following:
 
 1. Pull the container by running the following command:
 
-   ```
+   ```sh
    docker run --gpus all --rm -it -p 8888:8888 -p 8797:8787 -p 8796:8786 --ipc=host <docker container> /bin/bash
    ```
 
    **NOTES**:
 
-   - If you are running `Getting Started with MovieLens` , `Advanced Ops with Outbrain` or `Tabular Problems with Rossmann` example notebooks you need to add `-v ${PWD}:/root/` to the docker script above. Here `PWD` is a local directory in your system, and this very same directory should also be mounted to the `merlin-inference` container if you would like to run the inference example. Please follow the `start and launch triton server` instructions given in the inference notebooks.
-   - If you are running `Training-with-HugeCTR` notebooks, please add `--cap-add SYS_NICE` to `docker run` command to suppress the `set_mempolicy: Operation not permitted` warnings.
+   - If you are running the Getting Started with MovieLens, Advanced Ops with Outbrain, or Tabular Problems with Rossmann example notebooks, add a `-v ${PWD}:/root/` argument to the preceding Docker command.
+     The `PWD` environment variable refers to a local directory on your computer, and you should specify this same directory with the `-v` argument when you run a container to perform inference.
+     Follow the instructions for starting Triton Inference Server that are provided in the inference notebooks.
+   - If you are running `Training-with-HugeCTR` notebooks, please add `--cap-add SYS_NICE` to the `docker run` command to suppress the `set_mempolicy: Operation not permitted` warnings.
 
-   The container will open a shell when the run command execution is completed. You will have to start JupyterLab on the Docker container. It should look similar to this:
+   The container opens a shell when the run command execution is completed.
+   Your shell prompt should look similar to the following example:
 
-   ```
-   root@2efa5b50b909:
-   ```
+   ```sh
+   root@2efa5b50b909:
+   ```
 
-2. If jupyter-lab is not installed, install jupyter-lab with `pip` by running the following command:
+1. Start the jupyter-lab server by running the following command:
 
-   ```
-   pip install jupyterlab
-   ```
-
-   For more information, see [Installation Guide](https://jupyterlab.readthedocs.io/en/stable/getting_started/installation.html).
+   ```shell
+   jupyter-lab --allow-root --ip='0.0.0.0'
+   ```
 
-3. Start the jupyter-lab server by running the following command:
+   View the messages in your terminal to identify the URL for JupyterLab.
+   The messages in your terminal show similar lines to the following example:
 
-   ```
-   jupyter-lab --allow-root --ip='0.0.0.0' --NotebookApp.token='<password>'
-   ```
+   ```shell
+   Or copy and paste one of these URLs:
+      http://2efa5b50b909:8888/lab?token=9b537d1fda9e4e9cadc673ba2a472e247deee69a6229ff8d
+   or http://127.0.0.1:8888/lab?token=9b537d1fda9e4e9cadc673ba2a472e247deee69a6229ff8d
+   ```
 
-4. Open any browser to access the jupyter-lab server using <MachineIP>:8888.
+1. Open a browser and use the `127.0.0.1` URL provided in the messages by JupyterLab.
 
-5. Once in the server, navigate to the `/nvtabular/` directory and try out the examples.
+1. After you log in to JupyterLab, navigate to the `/nvtabular` directory to try out the example notebooks.
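Putting the steps above together for a MovieLens or Rossmann run, the full launch sequence looks roughly like the following sketch; the image name and tag are assumptions, so substitute the container and release that match your notebook:

```shell
# Start the container with the notebook ports published and the current
# directory mounted so data and models persist outside the container
docker run --gpus all --rm -it \
  -p 8888:8888 -p 8797:8787 -p 8796:8786 \
  --ipc=host -v ${PWD}:/root/ \
  nvcr.io/nvidia/merlin/merlin-tensorflow:22.06 /bin/bash

# Then, inside the container shell:
jupyter-lab --allow-root --ip='0.0.0.0'
```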

examples/getting-started-movielens/04-Triton-Inference-with-HugeCTR.ipynb

Lines changed: 6 additions & 4 deletions
@@ -221,7 +221,7 @@
     "id": "4e4592a9",
     "metadata": {},
     "source": [
-     "### Load Models on Triton Server"
+     "### Load Models on Triton Inference Server"
     ]
    },
    {
@@ -238,16 +238,18 @@
     "metadata": {},
     "source": [
      "```\n",
-     "docker run -it --gpus=all -p 8000:8000 -p 8001:8001 -p 8002:8002 -v ${PWD}:/model nvcr.io/nvidia/merlin/merlin-inference:21.11\n",
-     "```"
+     "docker run -it --gpus=all -p 8000:8000 -p 8001:8001 -p 8002:8002 -v ${PWD}:/model nvcr.io/nvidia/merlin/merlin-hugectr:latest\n",
+     "```\n",
+     "\n",
+     "> For production use, refer to the [Merlin containers](https://catalog.ngc.nvidia.com/?filters=&orderBy=scoreDESC&query=merlin) from the NVIDIA GPU Cloud (NGC) catalog and specify a tag rather than `latest`."
     ]
    },
    {
     "cell_type": "markdown",
     "id": "c6f50e9e",
     "metadata": {},
     "source": [
-     "After you started the container you can start triton server with the command below:"
+     "After you start the container, start Triton Inference Server with the following command:"
     ]
    },
    {

examples/getting-started-movielens/04-Triton-Inference-with-TF.ipynb

Lines changed: 8 additions & 8 deletions
@@ -54,30 +54,30 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Launching and Starting the Triton Server"
+    "## Starting Triton Inference Server"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Before we get started, you should launch the Triton Inference Server docker container with the following script. This script will mount your local `model-repository` folder that includes your saved models from the previous notebook (`03a-Training-with-TF.ipynb`) to `/model` directory in the `merlin-inference` docker container."
+    "Before we get started, start Triton Inference Server in the Docker container with the following command. The command includes the `-v` argument to mount your local `model-repository` directory, which includes your saved models from the previous notebook (`03a-Training-with-TF.ipynb`), to the `/model` directory in the container."
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "```\n",
-    "docker run -it --gpus device=0 -p 8000:8000 -p 8001:8001 -p 8002:8002 -v ${PWD}:/model/ nvcr.io/nvidia/merlin/merlin-tensorflow-inference:22.04\n",
-    "```\n"
+    "docker run -it --gpus device=0 -p 8000:8000 -p 8001:8001 -p 8002:8002 -v ${PWD}:/model/ nvcr.io/nvidia/merlin/merlin-tensorflow:latest\n",
+    "```"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "After you started the `merlin-inference` container, you can start triton server with the command below. You need to provide correct path for the `models` directory.\n",
+    "After you start the container, you can start Triton Inference Server with the following command. You need to provide the correct path for the `models` directory.\n",
     "\n",
     "```\n",
     "tritonserver --model-repository=path_to_models --backend-config=tensorflow,version=2 --model-control-mode=explicit \n",
@@ -150,14 +150,14 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Loading Ensemble Model with Triton Inference Serve"
+    "## Loading Ensemble Model with Triton Inference Server"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "At this stage, you should have launched the Triton Inference Server docker container with the instructions above."
+    "At this stage, you should have started the Triton Inference Server in a container with the instructions above."
    ]
   },
   {
@@ -336,7 +336,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Send request to Triton IS to transform raw dataset"
+    "## Send request to Triton Inference Server to transform raw dataset"
    ]
   },
   {
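Once `tritonserver` is running inside the container, its HTTP endpoint can be polled to confirm the server is up before loading the ensemble. This readiness check is part of Triton's standard HTTP/REST API; port 8000 is Triton's default HTTP port, published by the `docker run` command above:

```shell
# Prints the HTTP status code: 200 once the server is ready
# to accept inference requests
curl -s -o /dev/null -w "%{http_code}\n" localhost:8000/v2/health/ready
```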

examples/scaling-criteo/docker-compose-fastai.yml

Lines changed: 1 addition & 1 deletion
@@ -11,7 +11,7 @@ volumes:
 services:
   lab:
     runtime: nvidia
-    image: nvcr.io/nvidia/merlin/merlin-pytorch-training:21.11
+    image: nvcr.io/nvidia/merlin/merlin-pytorch:22.06
     command: "/bin/bash -c 'pip install jupyterlab jupytext pydot && apt-get update && apt-get install -y tree && python -m ipykernel install --user --name=merlin && jupyter notebook --no-browser --allow-root --port=8888 --ip=0.0.0.0 --NotebookApp.token='demotoken' --NotebookApp.allow_origin='*' --notebook-dir=/'"
     volumes:
       - models:/models

examples/scaling-criteo/docker-compose-hugectr.yml

Lines changed: 2 additions & 2 deletions
@@ -11,7 +11,7 @@ volumes:
 services:
   triton:
     command: "/bin/bash -c 'tritonserver --model-repository=/model/ --backend-config=hugectr,ps=/model/ps.json --model-control-mode=explicit'"
-    image: nvcr.io/nvidia/merlin/merlin-inference:21:11
+    image: nvcr.io/nvidia/merlin/merlin-hugectr:22.06
     runtime: nvidia
     shm_size: "1g"
     ulimits:
@@ -26,7 +26,7 @@ services:
 
   lab:
     runtime: nvidia
-    image: nvcr.io/nvidia/merlin/merlin-training:21:11
+    image: nvcr.io/nvidia/merlin/merlin-hugectr:22.06
     command: "/bin/bash -c 'pip install jupyterlab jupytext pydot nvidia-pyindex tritonclient geventhttpclient && apt-get update && apt-get install -y tree && jupyter notebook --no-browser --allow-root --port=8888 --ip=0.0.0.0 --NotebookApp.token='demotoken' --NotebookApp.allow_origin='*' --notebook-dir=/'"
     volumes:
       - model:/model

examples/scaling-criteo/docker-compose-tf.yml

Lines changed: 2 additions & 2 deletions
@@ -11,7 +11,7 @@ volumes:
 services:
   triton:
     command: "/bin/bash -c 'pip install grpcio-channelz && tritonserver --model-repository=/models/ --model-control-mode=explicit'"
-    image: nvcr.io/nvidia/merlin/merlin-inference:21:11
+    image: nvcr.io/nvidia/merlin/merlin-tensorflow:22.06
     runtime: nvidia
     shm_size: "1g"
     ulimits:
@@ -26,7 +26,7 @@ services:
 
   lab:
     runtime: nvidia
-    image: nvcr.io/nvidia/merlin/merlin-tensorflow-training:21:11
+    image: nvcr.io/nvidia/merlin/merlin-tensorflow:22.06
     command: "/bin/bash -c 'pip install jupyterlab jupytext pydot nvidia-pyindex tritonclient geventhttpclient && apt-get update && apt-get install -y tree && python -m ipykernel install --user --name=merlin && jupyter notebook --no-browser --allow-root --port=8888 --ip=0.0.0.0 --NotebookApp.token='demotoken' --NotebookApp.allow_origin='*' --notebook-dir=/'"
     volumes:
       - models:/models
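The updated compose files can be brought up and torn down as usual. A hedged usage sketch, assuming the file paths as they appear in this repository:

```shell
# Start the triton and lab services defined in the file, in the background
docker-compose -f examples/scaling-criteo/docker-compose-tf.yml up -d

# Stop the stack; named volumes such as `models` are preserved
docker-compose -f examples/scaling-criteo/docker-compose-tf.yml down
```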
