Commit 5e8286f

Update ipex-llm default transformers version to 4.37.0 (intel#11859)

* Update default transformers version to 4.37.0
* Add dependency requirements for qwen and qwen-vl
* Temporarily pin the transformers version for models not yet verified with 4.37.0
* Skip the qwen test in UT for now, as it requires transformers<4.37.0
1 parent d4ee0a8 commit 5e8286f
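
The unit-test change mentioned in the last bullet is not among the file diffs shown below. As a rough illustration only, a pytest-style skip that gates a Qwen test on the installed transformers version could look like the sketch below; the marker and test names are hypothetical, not taken from the ipex-llm test suite.

```
# Hypothetical sketch of the "skip qwen test in UT" item above; names are illustrative,
# not the actual ipex-llm unit-test code (which is not shown in this diff excerpt).
import pytest
import transformers
from packaging import version

requires_old_transformers = pytest.mark.skipif(
    version.parse(transformers.__version__) >= version.parse("4.37.0"),
    reason="Qwen example still requires transformers<4.37.0",
)

@requires_old_transformers
def test_qwen_generation():
    ...  # placeholder for the real Qwen generation test body
```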

15 files changed: 27 additions, 7 deletions


python/llm/example/CPU/HF-Transformers-AutoModels/Model/qwen-vl/README.md

Lines changed: 2 additions & 0 deletions
@@ -20,6 +20,7 @@ conda activate llm
 # install the latest ipex-llm nightly build with 'all' option
 pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
 
+pip install "transformers<4.37.0"
 pip install accelerate tiktoken einops transformers_stream_generator==0.0.4 scipy torchvision pillow tensorboard matplotlib # additional package required for Qwen-VL-Chat to conduct generation
 
 ```
@@ -32,6 +33,7 @@ conda activate llm
 
 pip install --pre --upgrade ipex-llm[all]
 
+pip install "transformers<4.37.0"
 pip install accelerate tiktoken einops transformers_stream_generator==0.0.4 scipy torchvision pillow tensorboard matplotlib
 
 ```

python/llm/example/CPU/HF-Transformers-AutoModels/Model/qwen/README.md

Lines changed: 4 additions & 0 deletions
@@ -22,6 +22,8 @@ conda activate llm
 
 # install the latest ipex-llm nightly build with 'all' option
 pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
+
+pip install "transformers<4.37.0"
 pip install tiktoken einops transformers_stream_generator # additional package required for Qwen-7B-Chat to conduct generation
 ```
@@ -32,6 +34,8 @@ conda create -n llm python=3.11
 conda activate llm
 
 pip install --pre --upgrade ipex-llm[all]
+
+pip install "transformers<4.37.0"
 pip install tiktoken einops transformers_stream_generator
 ```

python/llm/example/CPU/PyTorch-Models/Model/qwen-vl/README.md

Lines changed: 4 additions & 0 deletions
@@ -19,6 +19,8 @@ conda activate llm
 
 # install the latest ipex-llm nightly build with 'all' option
 pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
+
+pip install "transformers<4.37.0"
 pip install accelerate tiktoken einops transformers_stream_generator==0.0.4 scipy torchvision pillow tensorboard matplotlib # additional package required for Qwen-VL-Chat to conduct generation
 ```
@@ -29,6 +31,8 @@ conda create -n llm python=3.11
 conda activate llm
 
 pip install --pre --upgrade ipex-llm[all]
+
+pip install "transformers<4.37.0"
 pip install accelerate tiktoken einops transformers_stream_generator==0.0.4 scipy torchvision pillow tensorboard matplotlib
 ```

python/llm/example/GPU/HuggingFace/LLM/qwen/README.md

Lines changed: 2 additions & 0 deletions
@@ -15,6 +15,7 @@ conda activate llm
 # below command will install intel_extension_for_pytorch==2.1.10+xpu as default
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
 
+pip install "transformers<4.37.0"
 pip install tiktoken einops transformers_stream_generator # additional package required for Qwen-7B-Chat to conduct generation
 ```
@@ -27,6 +28,7 @@ conda activate llm
 # below command will install intel_extension_for_pytorch==2.1.10+xpu as default
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
 
+pip install "transformers<4.37.0"
 pip install tiktoken einops transformers_stream_generator # additional package required for Qwen-7B-Chat to conduct generation
 ```

python/llm/example/GPU/HuggingFace/Multimodal/qwen-vl/README.md

Lines changed: 2 additions & 0 deletions
@@ -15,6 +15,7 @@ conda activate llm
 # below command will install intel_extension_for_pytorch==2.1.10+xpu as default
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
 
+pip install "transformers<4.37.0"
 pip install accelerate tiktoken einops transformers_stream_generator==0.0.4 scipy torchvision pillow tensorboard matplotlib # additional package required for Qwen-VL-Chat to conduct generation
 ```
@@ -27,6 +28,7 @@ conda activate llm
 # below command will install intel_extension_for_pytorch==2.1.10+xpu as default
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
 
+pip install "transformers<4.37.0"
 pip install accelerate tiktoken einops transformers_stream_generator==0.0.4 scipy torchvision pillow tensorboard matplotlib # additional package required for Qwen-VL-Chat to conduct generation
 ```

python/llm/example/GPU/HuggingFace/Multimodal/voiceassistant/README.md

Lines changed: 2 additions & 0 deletions
@@ -17,6 +17,7 @@ conda activate llm
 # below command will install intel_extension_for_pytorch==2.1.10+xpu as default
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
 
+pip install transformers==4.36.2
 pip install librosa soundfile datasets
 pip install accelerate
 pip install SpeechRecognition sentencepiece colorama
@@ -33,6 +34,7 @@ conda activate llm
 # below command will install intel_extension_for_pytorch==2.1.10+xpu as default
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
 
+pip install transformers==4.36.2
 pip install librosa soundfile datasets
 pip install accelerate
 pip install SpeechRecognition sentencepiece colorama

python/llm/example/GPU/HuggingFace/Multimodal/whisper/readme.md

Lines changed: 2 additions & 0 deletions
@@ -16,6 +16,7 @@ conda activate llm
 # below command will install intel_extension_for_pytorch==2.1.10+xpu as default
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
 
+pip install transformers==4.36.2
 pip install datasets soundfile librosa # required by audio processing
 ```
@@ -28,6 +29,7 @@ conda activate llm
 # below command will install intel_extension_for_pytorch==2.1.10+xpu as default
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
 
+pip install transformers==4.36.2
 pip install datasets soundfile librosa # required by audio processing
 ```

python/llm/example/GPU/PyTorch-Models/Model/llava/README.md

Lines changed: 0 additions & 2 deletions
@@ -16,7 +16,6 @@ conda activate llm
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
 
 pip install einops # install dependencies required by llava
-pip install transformers==4.36.2
 
 git clone https://github.com/haotian-liu/LLaVA.git # clone the llava libary
 cp generate.py ./LLaVA/ # copy our example to the LLaVA folder
@@ -34,7 +33,6 @@ conda activate llm
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
 
 pip install einops # install dependencies required by llava
-pip install transformers==4.36.2
 
 git clone https://github.com/haotian-liu/LLaVA.git # clone the llava libary
 copy generate.py .\LLaVA\ # copy our example to the LLaVA folder

python/llm/example/GPU/PyTorch-Models/Model/qwen-vl/README.md

Lines changed: 2 additions & 0 deletions
@@ -15,6 +15,7 @@ conda activate llm
 # below command will install intel_extension_for_pytorch==2.1.10+xpu as default
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
 
+pip install "transformers<4.37.0"
 pip install accelerate tiktoken einops transformers_stream_generator==0.0.4 scipy torchvision pillow tensorboard matplotlib # additional package required for Qwen-VL-Chat to conduct generation
 ```
@@ -27,6 +28,7 @@ conda activate llm
 # below command will install intel_extension_for_pytorch==2.1.10+xpu as default
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
 
+pip install "transformers<4.37.0"
 pip install accelerate tiktoken einops transformers_stream_generator==0.0.4 scipy torchvision pillow tensorboard matplotlib # additional package required for Qwen-VL-Chat to conduct generation
 ```

python/llm/example/GPU/PyTorch-Models/Model/speech-t5/README.md

Lines changed: 2 additions & 0 deletions
@@ -15,6 +15,7 @@ conda activate llm
 # below command will install intel_extension_for_pytorch==2.1.10+xpu as default
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
 
+pip install transformers==4.36.2
 pip install "datasets<2.18" soundfile # additional package required for SpeechT5 to conduct generation
 ```
@@ -27,6 +28,7 @@ conda activate llm
 # below command will install intel_extension_for_pytorch==2.1.10+xpu as default
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
 
+pip install transformers==4.36.2
 pip install "datasets<2.18" soundfile # additional package required for SpeechT5 to conduct generation
 ```
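
A common thread in the diffs above: the Qwen and Qwen-VL examples pin transformers below 4.37.0, the Whisper, voice-assistant, and SpeechT5 examples pin exactly 4.36.2, and the LLaVA example drops its old pin. After following any of these READMEs, the resolved version can be confirmed with a short check (no assumptions beyond having transformers installed):

```
# Print the transformers version actually installed in the current environment.
import transformers
print(transformers.__version__)  # expect <4.37.0 for Qwen/Qwen-VL, 4.36.2 for the pinned audio examples
```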
