Refine axolotl quickstart (#10957)
* Add a default accelerate config for the axolotl quickstart.
* Fix the requirements link.
* Upgrade peft to 0.10.0 in requirements-xpu.txt.
This commit is contained in:
parent c801c37bc6
commit 164e6957af

4 changed files with 38 additions and 4 deletions
@@ -33,7 +33,7 @@ git clone https://github.com/OpenAccess-AI-Collective/axolotl/tree/v0.4.0
 cd axolotl
 # replace requirements.txt
 rm requirements.txt
-wget -O requirements.txt https://github.com/intel-analytics/ipex-llm/blob/main/python/llm/example/GPU/LLM-Finetuning/axolotl/requirements-xpu.txt
+wget -O requirements.txt https://raw.githubusercontent.com/intel-analytics/ipex-llm/main/python/llm/example/GPU/LLM-Finetuning/axolotl/requirements-xpu.txt
 pip install -e .
 pip install transformers==4.36.0
 # to avoid https://github.com/OpenAccess-AI-Collective/axolotl/issues/1544
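The link fix matters because `github.com/.../blob/...` URLs return GitHub's HTML viewer, so the saved `requirements.txt` would contain markup rather than pip requirements; `raw.githubusercontent.com` serves the plain file. A quick sanity check after downloading (a minimal sketch, not part of the commit):

```bash
# A file fetched from the old blob URL starts with HTML, not requirements.
if grep -qi "<html" requirements.txt; then
    echo "Got an HTML page; re-download from raw.githubusercontent.com"
else
    head -n 3 requirements.txt   # should print pip requirement lines
fi
```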
@@ -92,7 +92,14 @@ Configure oneAPI variables by running the following command:
 ```

-Configure accelerate to avoid training with CPU
+Configure accelerate to avoid training with CPU. You can download a default `default_config.yaml` with `use_cpu: false`.
+
+```cmd
+mkdir -p ~/.cache/huggingface/accelerate/
+wget -O ~/.cache/huggingface/accelerate/default_config.yaml https://raw.githubusercontent.com/intel-analytics/ipex-llm/main/python/llm/example/GPU/LLM-Finetuning/axolotl/default_config.yaml
+```
+
+As an alternative, you can configure accelerate based on your requirements.

 ```cmd
 accelerate config
 ```
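Either way, the quickstart's own check (that `use_cpu` is false) can be scripted; a minimal sketch, not part of the commit:

```bash
# Confirm accelerate will not fall back to CPU-only training.
grep "use_cpu" ~/.cache/huggingface/accelerate/default_config.yaml   # expect: use_cpu: false
```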
@@ -36,13 +36,22 @@ source /opt/intel/oneapi/setvars.sh

 #### 2.2 Configure `accelerate` in the command line interactively.

+You can download a default `default_config.yaml` with `use_cpu: false`.
+
+```bash
+mkdir -p ~/.cache/huggingface/accelerate/
+wget -O ~/.cache/huggingface/accelerate/default_config.yaml https://raw.githubusercontent.com/intel-analytics/ipex-llm/main/python/llm/example/GPU/LLM-Finetuning/axolotl/default_config.yaml
+```
+
+As an alternative, you can configure accelerate based on your requirements.
+
 ```bash
 accelerate config
 ```

 Please answer `NO` to the prompt `Do you want to run your training on CPU only (even if a GPU / Apple Silicon device is available)? [yes/NO]:`.

-After finish accelerate config, check if `use_cpu` is disable (i.e., ` use_cpu: false`) in accelerate config file (`~/.cache/huggingface/accelerate/default_config.yaml`).
+After finishing accelerate config, check that `use_cpu` is disabled (i.e., `use_cpu: false`) in the accelerate config file (`~/.cache/huggingface/accelerate/default_config.yaml`).

 #### 2.3 (Optional) Set `HF_HUB_OFFLINE=1` to avoid Hugging Face Hub sign-in.
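For step 2.3, `HF_HUB_OFFLINE=1` is a standard `huggingface_hub` environment variable: it tells the client to rely solely on locally cached files instead of contacting (and authenticating with) the Hub. A minimal usage sketch:

```bash
# Skip all Hugging Face Hub requests; use only the local cache.
export HF_HUB_OFFLINE=1
```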
@@ -0,0 +1,18 @@
+compute_environment: LOCAL_MACHINE
+debug: false
+distributed_type: 'NO'
+downcast_bf16: 'no'
+gpu_ids: all
+ipex_config:
+  use_xpu: true
+machine_rank: 0
+main_training_function: main
+mixed_precision: 'no'
+num_machines: 1
+num_processes: 1
+rdzv_backend: static
+same_network: true
+tpu_env: []
+tpu_use_cluster: false
+tpu_use_sudo: false
+use_cpu: false
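With this `default_config.yaml` in `~/.cache/huggingface/accelerate/`, `accelerate launch` picks it up automatically, so a single-process XPU run needs no extra flags. A hedged sketch, where `lora.yml` stands in for your axolotl training config (`axolotl.cli.train` is axolotl's usual entry point):

```bash
# accelerate reads ~/.cache/huggingface/accelerate/default_config.yaml by default.
accelerate launch -m axolotl.cli.train lora.yml
```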
@@ -1,7 +1,7 @@
 # This file is copied from https://github.com/OpenAccess-AI-Collective/axolotl/blob/v0.4.0/requirements.txt
 --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
 packaging==23.2
-peft==0.5.0
+peft==0.10.0
 tokenizers
 bitsandbytes>=0.41.1
 accelerate==0.23.0
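After reinstalling from the updated requirements, the bump can be verified directly; an illustrative check, not part of the commit:

```bash
pip install -r requirements.txt
python -c "import peft; print(peft.__version__)"   # expect: 0.10.0
```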