Qiyuan Gong | f6c9ffe4dc | 2024-05-22 15:20:53 +08:00
Add WANDB_MODE and HF_HUB_OFFLINE to XPU finetune README (#11097)
* Add WANDB_MODE=offline to avoid multi-GPUs finetune errors.
* Add HF_HUB_OFFLINE=1 to avoid Hugging Face related errors.

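The two environment variables named in this commit can be exported in the shell before launching the finetune script. A minimal sketch (the actual launch command is omitted; `WANDB_MODE=offline` keeps Weights & Biases logging local, and `HF_HUB_OFFLINE=1` makes the Hugging Face libraries use only the local cache):

```shell
# Keep W&B runs on local disk instead of syncing to wandb.ai
export WANDB_MODE=offline
# Use only locally cached Hugging Face models/datasets; no Hub network calls
export HF_HUB_OFFLINE=1

# Confirm the settings before launching the finetune script
echo "WANDB_MODE=$WANDB_MODE HF_HUB_OFFLINE=$HF_HUB_OFFLINE"
```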
Qiyuan Gong | 492ed3fd41 | 2024-05-21 15:49:15 +08:00
Add verified models to GPU finetune README (#11088)
* Add verified models to GPU finetune README

Qiyuan Gong | b727767f00 | 2024-04-10 14:38:29 +08:00
Add axolotl v0.3.0 with ipex-llm on Intel GPU (#10717)
* Add axolotl v0.3.0 support on Intel GPU.
* Add finetune example on llama-2-7B with Alpaca dataset.

Wang, Jian4 | 16b2ef49c6 | 2024-03-25 10:06:02 +08:00
Update_document by heyang (#30)

binbin Deng | 2958ca49c0 | 2024-03-21 16:01:01 +08:00
LLM: add patching function for llm finetuning (#10247)

binbin Deng | c1ec3d8921 | 2024-02-07 15:02:24 +08:00
LLM: update FAQ about too many open files (#10119)

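The usual remedy for a "too many open files" error in multi-process finetuning is raising the per-process open-file limit before launching. A minimal sketch, assuming a POSIX shell with the `ulimit` builtin (raising the soft limit to the hard limit always succeeds without root; the specific limit values on a given machine will differ):

```shell
# Show the current soft limit on open file descriptors
ulimit -Sn
# Raise the soft limit to the hard limit for this shell session
ulimit -Sn "$(ulimit -Hn)"
# Verify the new soft limit before launching the finetune job
ulimit -Sn
```

A persistent change would instead go through `/etc/security/limits.conf` or the service manager's configuration; the commands above only affect the current shell and its children.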
binbin Deng | aae20d728e | 2024-02-01 14:18:08 +08:00
LLM: Add initial DPO finetuning example (#10021)

binbin Deng | 171fb2d185 | 2024-01-25 19:02:38 +08:00
LLM: reorganize GPU finetuning examples (#9952)