
Could not consume arg: --local_rank #18

@Droliven

Description

After fine-tuning the model on 4 × A100 80G GPUs with the following command:

CUDA_VISIBLE_DEVICES=0,1,2,3 python3 -m torch.distributed.launch --nproc_per_node=4 --nnodes=$WORLD_SIZE --node_rank=$RANK --master_addr=$MASTER_ADDR --master_port=29005 \
    lora_finetune.py \
    --base_model model_zoos/lmsys_vicuna-13b-v1.3 \
    --data_path datas/gpt4tools_instructions/origin/gpt4tools_71k.json \
    --output_dir outputs/gpt4tools \
    --prompt_template_name gpt4tools \
    --num_epochs 6 \
    --batch_size 512 \
    --cutoff_len 2048 \
    --group_by_length \
    --lora_target_modules '[q_proj,k_proj,v_proj,o_proj]' \
    --lora_r 16 \
    --micro_batch_size=8 \
    2>&1 | tee outputs/gpt4tools/log_`date +%Y%m%d-%H%M%S`.out

the log shows:

100%|██████████| 834/834 [16:13:11<00:00, 70.01s/it]

 If there's a warning about missing keys above, please disregard :)
ERROR: Could not consume arg: --local_rank=3
Usage: lora_finetune.py --local_rank=3 --base_model /input/danglingwei.dlw/model_zoos/lmsys_vicuna-13b-v1.3 --data_path /input/danglingwei.dlw/datas/gpt4tools_instructions/origin/gpt4tools_71k.json --output_dir /input/danglingwei.dlw/outputs/gpt4tools --prompt_template_name gpt4tools --num_epochs 6 --batch_size 512 --cutoff_len 2048 --group_by_length --lora_target_modules '[q_proj,k_proj,v_proj,o_proj]' --lora_r 16 -

For detailed information on this command, run:
  lora_finetune.py --local_rank=3 --base_model /input/danglingwei.dlw/model_zoos/lmsys_vicuna-13b-v1.3 --data_path /input/danglingwei.dlw/datas/gpt4tools_instructions/origin/gpt4tools_71k.json --output_dir /input/danglingwei.dlw/outputs/gpt4tools --prompt_template_name gpt4tools --num_epochs 6 --batch_size 512 --cutoff_len 2048 --group_by_length --lora_target_modules '[q_proj,k_proj,v_proj,o_proj]' --lora_r 16 - --help

 If there's a warning about missing keys above, please disregard :)
ERROR: Could not consume arg: --local_rank=2
Usage: lora_finetune.py --local_rank=2 --base_model /input/danglingwei.dlw/model_zoos/lmsys_vicuna-13b-v1.3 --data_path /input/danglingwei.dlw/datas/gpt4tools_instructions/origin/gpt4tools_71k.json --output_dir /input/danglingwei.dlw/outputs/gpt4tools --prompt_template_name gpt4tools --num_epochs 6 --batch_size 512 --cutoff_len 2048 --group_by_length --lora_target_modules '[q_proj,k_proj,v_proj,o_proj]' --lora_r 16 -

For detailed information on this command, run:
  lora_finetune.py --local_rank=2 --base_model /input/danglingwei.dlw/model_zoos/lmsys_vicuna-13b-v1.3 --data_path /input/danglingwei.dlw/datas/gpt4tools_instructions/origin/gpt4tools_71k.json --output_dir /input/danglingwei.dlw/outputs/gpt4tools --prompt_template_name gpt4tools --num_epochs 6 --batch_size 512 --cutoff_len 2048 --group_by_length --lora_target_modules '[q_proj,k_proj,v_proj,o_proj]' --lora_r 16 - --help

 If there's a warning about missing keys above, please disregard :)
ERROR: Could not consume arg: --local_rank=1
Usage: lora_finetune.py --local_rank=1 --base_model /input/danglingwei.dlw/model_zoos/lmsys_vicuna-13b-v1.3 --data_path /input/danglingwei.dlw/datas/gpt4tools_instructions/origin/gpt4tools_71k.json --output_dir /input/danglingwei.dlw/outputs/gpt4tools --prompt_template_name gpt4tools --num_epochs 6 --batch_size 512 --cutoff_len 2048 --group_by_length --lora_target_modules '[q_proj,k_proj,v_proj,o_proj]' --lora_r 16 -

For detailed information on this command, run:
  lora_finetune.py --local_rank=1 --base_model /input/danglingwei.dlw/model_zoos/lmsys_vicuna-13b-v1.3 --data_path /input/danglingwei.dlw/datas/gpt4tools_instructions/origin/gpt4tools_71k.json --output_dir /input/danglingwei.dlw/outputs/gpt4tools --prompt_template_name gpt4tools --num_epochs 6 --batch_size 512 --cutoff_len 2048 --group_by_length --lora_target_modules '[q_proj,k_proj,v_proj,o_proj]' --lora_r 16 - --help

 If there's a warning about missing keys above, please disregard :)
ERROR: Could not consume arg: --local_rank=0
Usage: lora_finetune.py --local_rank=0 --base_model /input/danglingwei.dlw/model_zoos/lmsys_vicuna-13b-v1.3 --data_path /input/danglingwei.dlw/datas/gpt4tools_instructions/origin/gpt4tools_71k.json --output_dir /input/danglingwei.dlw/outputs/gpt4tools --prompt_template_name gpt4tools --num_epochs 6 --batch_size 512 --cutoff_len 2048 --group_by_length --lora_target_modules '[q_proj,k_proj,v_proj,o_proj]' --lora_r 16 -

For detailed information on this command, run:
  lora_finetune.py --local_rank=0 --base_model /input/danglingwei.dlw/model_zoos/lmsys_vicuna-13b-v1.3 --data_path /input/danglingwei.dlw/datas/gpt4tools_instructions/origin/gpt4tools_71k.json --output_dir /input/danglingwei.dlw/outputs/gpt4tools --prompt_template_name gpt4tools --num_epochs 6 --batch_size 512 --cutoff_len 2048 --group_by_length --lora_target_modules '[q_proj,k_proj,v_proj,o_proj]' --lora_r 16 - --help
/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py:180: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use_env is set by default in torchrun.
If your script expects `--local_rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See 
https://pytorch.org/docs/stable/distributed.html#launch-utility for 
further instructions

  warnings.warn(
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 570 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 571 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 572 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 2) local_rank: 3 (pid: 573) of binary: /opt/conda/bin/python3
Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/conda/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py", line 195, in <module>
    main()
  File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py", line 191, in main
    launch(args)
  File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py", line 176, in launch
    run(args)
  File "/opt/conda/lib/python3.8/site-packages/torch/distributed/run.py", line 753, in run
    elastic_launch(
  File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 246, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
============================================================
lora_finetune.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2023-09-18_05:24:03
  host      : zcoregputrain-55-011068195071
  rank      : 3 (local_rank: 3)
  exitcode  : 2 (pid: 573)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
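The "Could not consume arg" / "Usage:" output matches the error format of the python-fire CLI wrapper, which suggests that fire.Fire(train) in lora_finetune.py is rejecting the --local_rank=N flag that torch.distributed.launch appends to each worker's arguments. Below is a minimal sketch of the workaround hinted at by the deprecation warning above; it assumes (hypothetically) that lora_finetune.py exposes a fire-wrapped train() entry point, and the parameter names are illustrative only:

```python
import os

import fire  # assumption: lora_finetune.py builds its CLI with python-fire


def train(
    base_model: str = "",
    data_path: str = "",
    output_dir: str = "",
    # Accepting local_rank as a parameter lets fire consume the
    # --local_rank=N flag injected by torch.distributed.launch,
    # instead of aborting with "Could not consume arg".
    local_rank: int = 0,
    **kwargs,
):
    # torchrun (and launch with --use_env) exports LOCAL_RANK as an
    # environment variable rather than a flag, so prefer it when set.
    local_rank = int(os.environ.get("LOCAL_RANK", local_rank))
    print(f"initializing on local rank {local_rank}")
    # ... rest of the fine-tuning setup ...


if __name__ == "__main__":
    fire.Fire(train)
```

Alternatively, launching with torchrun instead of python3 -m torch.distributed.launch avoids the flag entirely, since torchrun only sets the LOCAL_RANK environment variable and never passes --local_rank on the command line.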
