
Commit 52a2135

Replace ipex with ipex-llm (intel#10554)
* fix ipex with ipex_llm * fix ipex with ipex_llm * update * update * update * update * update * update * update * update
1 parent 0a2e820 commit 52a2135

106 files changed (+127 -122 lines)


docker/llm/README.md

Lines changed: 1 addition & 1 deletion

@@ -62,7 +62,7 @@ After the container is booted, you could get into the container through `docker
 docker exec -it my_container bash
 ```

-To run inference using `IPEX-LLM` using cpu, you could refer to this [documentation](https://github.com/intel-analytics/IPEX/tree/main/python/llm#cpu-int4).
+To run inference using `IPEX-LLM` using cpu, you could refer to this [documentation](https://github.com/intel-analytics/ipex-llm/tree/main/python/llm#cpu-int4).


 #### Getting started with chat
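For orientation on what the renamed link points to, the CPU INT4 flow covered by that documentation looks roughly like the sketch below. This is a minimal sketch, not the repository's own example code: the model path is a placeholder, and it assumes `ipex-llm` is installed and exposes the post-rename `ipex_llm.transformers` entry point.

```python
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM  # post-rename package path

model_path = "meta-llama/Llama-2-7b-chat-hf"  # placeholder; any supported HF checkpoint

# load_in_4bit=True applies the INT4 optimizations the linked docs describe
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

with torch.inference_mode():
    input_ids = tokenizer.encode("What is IPEX-LLM?", return_tensors="pt")
    output = model.generate(input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```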

docker/llm/finetune/qlora/cpu/kubernetes/Chart.yaml

Lines changed: 1 addition & 1 deletion

@@ -1,5 +1,5 @@
 apiVersion: v2
-name: ipex-fintune-service
+name: ipex_llm-fintune-service
 description: A Helm chart for IPEX-LLM Finetune Service on Kubernetes
 type: application
 version: 1.1.27

docker/llm/serving/cpu/docker/README.md

Lines changed: 1 addition & 1 deletion

@@ -30,7 +30,7 @@ sudo docker run -itd \

 After the container is booted, you could get into the container through `docker exec`.

-To run model-serving using `IPEX-LLM` as backend, you can refer to this [document](https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/src/ipex/llm/serving).
+To run model-serving using `IPEX-LLM` as backend, you can refer to this [document](https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/src/ipex_llm/serving/fastchat).
 Also you can set environment variables and start arguments while running a container to get serving started initially. You may need to boot several containers to support. One controller container and at least one worker container are needed. The api server address(host and port) and controller address are set in controller container, and you need to set the same controller address as above, model path on your machine and worker address in worker container.

 To start a controller container:

docker/llm/serving/cpu/kubernetes/README.md

Lines changed: 1 addition & 1 deletion

@@ -10,7 +10,7 @@ To deploy IPEX-LLM-serving cpu in Kubernetes environment, please use this image:

 In this document, we will use `vicuna-7b-v1.5` as the deployment model.

-After downloading the model, please change name from `vicuna-7b-v1.5` to `vicuna-7b-v1.5-ipex` to use `ipex-llm` as the backend. The `ipex-llm` backend will be used if model path contains `ipex-llm`. Otherwise, the original transformer-backend will be used.
+After downloading the model, please change name from `vicuna-7b-v1.5` to `vicuna-7b-v1.5-ipex-llm` to use `ipex-llm` as the backend. The `ipex-llm` backend will be used if model path contains `ipex-llm`. Otherwise, the original transformer-backend will be used.

 You can download the model from [here](https://huggingface.co/lmsys/vicuna-7b-v1.5).
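The path-based switch described above can be pictured with a small sketch. It is illustrative only: the helper name is hypothetical and this is not the actual serving code.

```python
def pick_backend(model_path: str) -> str:
    # Hypothetical helper illustrating the documented rule: the ipex-llm backend
    # is selected when the model path contains "ipex-llm"; otherwise the stock
    # transformers backend is used.
    return "ipex-llm" if "ipex-llm" in model_path else "transformers"

assert pick_backend("/models/vicuna-7b-v1.5-ipex-llm") == "ipex-llm"
assert pick_backend("/models/vicuna-7b-v1.5") == "transformers"
```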

python/llm/example/CPU/Deepspeed-AutoTP/deepspeed_autotp.py

Lines changed: 1 addition & 1 deletion

@@ -102,7 +102,7 @@
 # Batch tokenizing
 prompt = args.prompt
 input_ids = tokenizer.encode(prompt, return_tensors="pt").to(f'cpu:{local_rank}')
-# ipex model needs a warmup, then inference time can be accurate
+# ipex-llm model needs a warmup, then inference time can be accurate
 output = model.generate(input_ids,
                         max_new_tokens=args.n_predict,
                         use_cache=True)
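The warmup comment reflects a standard benchmarking pattern: run one untimed `generate()` call so one-time costs (kernel compilation, weight repacking) are paid up front, then time subsequent calls. A minimal sketch of that pattern, assuming `model` and `input_ids` are prepared as in the script above:

```python
import time

# Warmup: the first call absorbs one-time costs and is not measured.
_ = model.generate(input_ids, max_new_tokens=32, use_cache=True)

# Timed run: later calls reflect steady-state inference latency.
start = time.perf_counter()
output = model.generate(input_ids, max_new_tokens=32, use_cache=True)
print(f"generation took {time.perf_counter() - start:.3f} s")
```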

python/llm/example/CPU/LangChain/README.md

Lines changed: 8 additions & 3 deletions

@@ -1,8 +1,8 @@
 ## Langchain Examples

-This folder contains examples showcasing how to use `langchain` with `ipex`.
+This folder contains examples showcasing how to use `langchain` with `ipex-llm`.

-### Install IPEX
+### Install IPEX-LLM

 Ensure `ipex-llm` is installed by following the [IPEX-LLM Installation Guide](https://github.com/intel-analytics/ipex-llm/tree/main/python/llm#install).

@@ -36,7 +36,7 @@ To run the example, execute the following command in the current directory:
 ```bash
 python transformers_int4/rag.py -m <path_to_model> [-q <your_question>] [-i <path_to_input_txt>]
 ```
-> Note: If `-i` is not specified, it will use a short introduction to Big-DL as input by default. if `-q` is not specified, `What is IPEX?` will be used by default.
+> Note: If `-i` is not specified, it will use a short introduction to Big-DL as input by default. if `-q` is not specified, `What is IPEX LLM?` will be used by default.


 ### Example: Math
@@ -66,3 +66,8 @@ python transformers_int4/voiceassistant.py -m <path_to_model> [-q <your_question
 - `-x MAX_NEW_TOKENS`: the max new tokens of model tokens input
 - `-l LANGUAGE`: you can specify a language such as "english" or "chinese"
 - `-d True|False`: whether the model path specified in -m is saved low bit model.
+
+### Legacy (Native INT4 examples)
+
+IPEX-LLM also provides langchain integrations using native INT4 mode. Those examples can be found in the [native_int4](./native_int4/) folder. For detailed instructions on setting up and running `native_int4` examples, refer to the [Native INT4 Examples README](./README_nativeint4.md).
+
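For readers new to these examples, one generic way to put an `ipex-llm` INT4 model behind LangChain is the standard HuggingFace pipeline adapter. This is only a sketch under assumptions (placeholder model path, `langchain-community` installed); the `rag.py` and `voiceassistant.py` scripts in this folder use their own integration code.

```python
from transformers import AutoTokenizer, pipeline
from ipex_llm.transformers import AutoModelForCausalLM  # post-rename package path
from langchain_community.llms import HuggingFacePipeline

model_path = "meta-llama/Llama-2-7b-chat-hf"  # placeholder path
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Wrap the INT4 model in a transformers pipeline, then hand it to LangChain.
hf_pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=64)
llm = HuggingFacePipeline(pipeline=hf_pipe)

print(llm.invoke("What is IPEX LLM?"))
```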

python/llm/example/CPU/PyTorch-Models/Model/mixtral/generate.py

Lines changed: 1 addition & 1 deletion

@@ -54,7 +54,7 @@
 with torch.inference_mode():
     prompt = MIXTRAL_PROMPT_FORMAT.format(prompt=args.prompt)
     input_ids = tokenizer.encode(prompt, return_tensors="pt").to('cpu')
-    # ipex model needs a warmup, then inference time can be accurate
+    # ipex-llm model needs a warmup, then inference time can be accurate
     output = model.generate(input_ids,
                             max_new_tokens=args.n_predict)

python/llm/example/CPU/QLoRA-FineTuning/alpaca-qlora/README.md

Lines changed: 2 additions & 2 deletions

@@ -28,7 +28,7 @@ Example usage:
 python ./alpaca_qlora_finetuning_cpu.py \
     --base_model "meta-llama/Llama-2-7b-hf" \
     --data_path "yahma/alpaca-cleaned" \
-    --output_dir "./ipex-qlora-alpaca"
+    --output_dir "./ipex-llm-qlora-alpaca"
 ```

 **Note**: You could also specify `--base_model` to the local path of the huggingface model checkpoint folder and `--data_path` to the local path of the dataset JSON file.
@@ -109,7 +109,7 @@ def generate_and_tokenize_prompt(data_point):
 python ./quotes_qlora_finetuning_cpu.py \
     --base_model "meta-llama/Llama-2-7b-hf" \
     --data_path "./english_quotes" \
-    --output_dir "./ipex-qlora-alpaca" \
+    --output_dir "./ipex-llm-qlora-alpaca" \
     --prompt_template_name "english_quotes"
 ```

python/llm/example/CPU/QLoRA-FineTuning/alpaca-qlora/finetune_one_node_two_sockets.sh

Lines changed: 1 addition & 1 deletion

@@ -14,5 +14,5 @@ mpirun -n 2 \
     --max_steps -1 \
     --base_model "meta-llama/Llama-2-7b-hf" \
     --data_path "yahma/alpaca-cleaned" \
-    --output_dir "./ipex-qlora-alpaca"
+    --output_dir "./ipex-llm-qlora-alpaca"

python/llm/example/GPU/Deepspeed-AutoTP/deepspeed_autotp.py

Lines changed: 1 addition & 1 deletion

@@ -109,7 +109,7 @@ def get_int_from_env(env_keys, default):
 with torch.inference_mode():
     prompt = args.prompt
     input_ids = tokenizer.encode(prompt, return_tensors="pt").to(f'xpu:{local_rank}')
-    # ipex model needs a warmup, then inference time can be accurate
+    # ipex_llm model needs a warmup, then inference time can be accurate
     output = model.generate(input_ids,
                             max_new_tokens=args.n_predict,
                             use_cache=True)
