Refine axolotl quickstart (#10957)

* Add default accelerate config for axolotl quickstart.
* Fix requirements link.
* Upgrade peft to 0.10.0 in requirements.
Qiyuan Gong 2024-05-08 09:34:02 +08:00 committed by GitHub
parent c801c37bc6
commit 164e6957af
4 changed files with 38 additions and 4 deletions


@@ -33,7 +33,7 @@ git clone https://github.com/OpenAccess-AI-Collective/axolotl/tree/v0.4.0
 cd axolotl
 # replace requirements.txt
 remove requirements.txt
-wget -O requirements.txt https://github.com/intel-analytics/ipex-llm/blob/main/python/llm/example/GPU/LLM-Finetuning/axolotl/requirements-xpu.txt
+wget -O requirements.txt https://raw.githubusercontent.com/intel-analytics/ipex-llm/main/python/llm/example/GPU/LLM-Finetuning/axolotl/requirements-xpu.txt
 pip install -e .
 pip install transformers==4.36.0
 # to avoid https://github.com/OpenAccess-AI-Collective/axolotl/issues/1544
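The URL swap above matters because `github.com/.../blob/...` serves the rendered HTML page, so the old `wget` saved HTML markup rather than a pip requirements file; `raw.githubusercontent.com` serves the raw file contents. A quick optional check that the download is usable (not part of the commit):

```bash
# Optional sanity check: the file should start with a comment or package pin,
# not an HTML tag (which is what the old blob URL would have returned).
head -n 3 requirements.txt
if grep -qi '<html' requirements.txt; then
  echo "Downloaded an HTML page -- use the raw.githubusercontent.com URL"
fi
```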
@@ -92,7 +92,14 @@ Configure oneAPI variables by running the following command:
 ```
-Configure accelerate to avoid training with CPU
+Configure accelerate to avoid training with CPU. You can download a default `default_config.yaml` with `use_cpu: false`.
+```cmd
+mkdir -p ~/.cache/huggingface/accelerate/
+wget -O ~/.cache/huggingface/accelerate/default_config.yaml https://raw.githubusercontent.com/intel-analytics/ipex-llm/main/python/llm/example/GPU/LLM-Finetuning/axolotl/default_config.yaml
+```
+As an alternative, you can configure accelerate based on your requirements.
 ```cmd
 accelerate config
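Accelerate reads `~/.cache/huggingface/accelerate/default_config.yaml` by default, so dropping the file there replaces the interactive `accelerate config` step. One way to confirm the file was picked up (a quick check, not part of the commit):

```bash
# Optional: print the accelerate configuration currently in effect.
# The reported values should match the downloaded file, including use_cpu: False.
accelerate env
```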


@@ -36,6 +36,15 @@ source /opt/intel/oneapi/setvars.sh
 #### 2.2 Configures `accelerate` in command line interactively.
+You can download a default `default_config.yaml` with `use_cpu: false`.
+```bash
+mkdir -p ~/.cache/huggingface/accelerate/
+wget -O ~/.cache/huggingface/accelerate/default_config.yaml https://raw.githubusercontent.com/intel-analytics/ipex-llm/main/python/llm/example/GPU/LLM-Finetuning/axolotl/default_config.yaml
+```
+As an alternative, you can configure accelerate based on your requirements.
 ```bash
 accelerate config
 ```
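With the default config in place, `accelerate launch` starts training directly instead of prompting interactively. A sketch of the launch step; the training YAML path is illustrative, not from this commit:

```bash
# Illustrative: launch axolotl fine-tuning under the downloaded accelerate
# config. The example YAML path is a placeholder, not part of this commit.
accelerate launch -m axolotl.cli.train examples/openllama-3b/lora.yml
```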


@@ -0,0 +1,18 @@
+compute_environment: LOCAL_MACHINE
+debug: false
+distributed_type: 'NO'
+downcast_bf16: 'no'
+gpu_ids: all
+ipex_config:
+  use_xpu: true
+machine_rank: 0
+main_training_function: main
+mixed_precision: 'no'
+num_machines: 1
+num_processes: 1
+rdzv_backend: static
+same_network: true
+tpu_env: []
+tpu_use_cluster: false
+tpu_use_sudo: false
+use_cpu: false
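Two details of this file are easy to miss: `'NO'` and `'no'` are quoted so YAML does not coerce them to booleans, and `use_xpu: true` is nested under `ipex_config`, which routes single-process training to the Intel XPU while `use_cpu: false` keeps it off the CPU. A quick parse check (assumes PyYAML, which accelerate itself depends on):

```bash
# Optional: confirm the YAML parses and CPU training stays disabled.
python -c "import yaml; cfg = yaml.safe_load(open('default_config.yaml')); print('use_cpu:', cfg['use_cpu'], '| use_xpu:', cfg['ipex_config']['use_xpu'])"
```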


@@ -1,7 +1,7 @@
 # This file is copied from https://github.com/OpenAccess-AI-Collective/axolotl/blob/v0.4.0/requirements.txt
 --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
 packaging==23.2
-peft==0.5.0
+peft==0.10.0
 tokenizers
 bitsandbytes>=0.41.1
 accelerate==0.23.0
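After replacing `requirements.txt` and re-running `pip install -e .`, the installed version should reflect the new pin. A quick verification (not part of the commit):

```bash
# Optional: verify the peft pin took effect after installation.
python -c "import peft; print(peft.__version__)"   # expect 0.10.0
pip show peft | grep -i '^version'
```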