ChatGLM3, Baichuan2 and Qwen1.5 QLoRA example (#11078)

* Add ChatGLM3, Qwen1.5-7B and Baichuan2-7B QLoRA Alpaca examples
* Remove unnecessary tokenization setting.
Qiyuan Gong 2024-05-21 15:29:43 +08:00 committed by GitHub
parent ecb16dcf14
commit 1210491748
5 changed files with 104 additions and 6 deletions

@@ -142,6 +142,45 @@ bash qlora_finetune_llama3_8b_arc_1_card.sh
</details>
<details>
<summary> Show ChatGLM3-6B examples </summary>
##### Finetuning ChatGLM3-6B on a single Arc A770
```bash
bash qlora_finetune_chatglm3_6b_arc_1_card.sh
```
</details>
<details>
<summary> Show Qwen1.5-7B examples </summary>
##### Finetuning Qwen1.5-7B on a single Arc A770
Qwen1.5 requires transformers 4.37.0; install it first:
```bash
pip install transformers==4.37.0
```
```bash
bash qlora_finetune_qwen15_7b_arc_1_card.sh
```
</details>
<details>
<summary> Show Baichuan2-7B examples </summary>
##### Finetuning Baichuan2-7B on a single Arc A770
```bash
bash qlora_finetune_baichuan2_7b_arc_1_card.sh
```
</details>
### 4. (Optional) Resume Training
If the finetuning process is interrupted before completion, you can resume training from a previously saved checkpoint by setting `resume_from_checkpoint` to the local checkpoint folder, as follows:
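For example (a sketch only: the `checkpoint-1100` folder name is illustrative and depends on your save interval, and `--resume_from_checkpoint` is assumed to be passed straight through to the underlying training loop):
```bash
python ./alpaca_qlora_finetuning.py \
    --base_model "Qwen/Qwen1.5-7B-Chat" \
    --data_path "yahma/alpaca-cleaned" \
    --output_dir "./ipex-llm-qlora-alpaca" \
    --resume_from_checkpoint "./ipex-llm-qlora-alpaca/checkpoint-1100"
```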

@@ -173,6 +173,7 @@ def train(
bnb_4bit_compute_dtype=torch.bfloat16
)
model = AutoModelForCausalLM.from_pretrained(base_model,
torch_dtype=torch.bfloat16,
quantization_config=bnb_config,
trust_remote_code=True)
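# Note: torch_dtype above matches bnb_4bit_compute_dtype (bfloat16), keeping
# the non-quantized modules in the same dtype that QLoRA computes in.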
# below is also supported
@@ -191,12 +192,6 @@ def train(
tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)
print(f"Tokenizer loaded on rank {os.environ.get('LOCAL_RANK')}")
tokenizer.pad_token_id = (
0 # unk. we want this to be different from the eos token
)
tokenizer.padding_side = "left" # Allow batched inference
print(model)
# Prepare an IPEX-LLM compatible PEFT model

@@ -0,0 +1,21 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# You can also set `--base_model` to the local path of a Hugging Face model checkpoint folder and `--data_path` to the local path of a dataset JSON file.
python ./alpaca_qlora_finetuning.py \
--base_model "baichuan-inc/Baichuan2-7B-Chat" \
--data_path "yahma/alpaca-cleaned" \
--output_dir "./ipex-llm-qlora-alpaca"
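# A sketch of the fully local variant mentioned in the comment above
# (both paths are illustrative, not shipped with this example):
# python ./alpaca_qlora_finetuning.py \
#     --base_model "/path/to/Baichuan2-7B-Chat" \
#     --data_path "/path/to/alpaca_data_cleaned.json" \
#     --output_dir "./ipex-llm-qlora-alpaca"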

@@ -0,0 +1,22 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# You can also set `--base_model` to the local path of a Hugging Face model checkpoint folder and `--data_path` to the local path of a dataset JSON file.
python ./alpaca_qlora_finetuning.py \
--base_model "THUDM/chatglm3-6b" \
--data_path "yahma/alpaca-cleaned" \
--lora_target_modules '[query_key_value,dense,dense_h_to_4h,dense_4h_to_h]' \
--output_dir "./ipex-llm-qlora-alpaca"
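# Note: ChatGLM3 names its attention/MLP linear layers differently from
# Llama-style models, hence the explicit --lora_target_modules list above.
# A sketch to double-check those module names (loads the full model, so it
# takes a while):
# python -c "from transformers import AutoModel; \
#     model = AutoModel.from_pretrained('THUDM/chatglm3-6b', trust_remote_code=True); \
#     print(sorted({n.rsplit('.', 1)[-1] for n, _ in model.named_modules()}))"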

@@ -0,0 +1,21 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# You can also set `--base_model` to the local path of a Hugging Face model checkpoint folder and `--data_path` to the local path of a dataset JSON file.
python ./alpaca_qlora_finetuning.py \
--base_model "Qwen/Qwen1.5-7B-Chat" \
--data_path "yahma/alpaca-cleaned" \
--output_dir "./ipex-llm-qlora-alpaca"