ipex-llm/python/llm/example/GPU/LLM-Finetuning/axolotl/requirements-xpu.txt
Qiyuan Gong f2e923b3ca
Axolotl v0.4.0 support (#10773)
* Add Axolotl 0.4.0, remove legacy 0.3.0 support.
* replace is_torch_bf16_gpu_available
* Add HF_HUB_OFFLINE=1
* Move transformers out of requirement
* Refine readme and qlora.yml
2024-04-17 09:49:11 +08:00


# This file is copied from https://github.com/OpenAccess-AI-Collective/axolotl/blob/v0.4.0/requirements.txt
--extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
packaging==23.2
peft==0.5.0
tokenizers
bitsandbytes>=0.41.1
accelerate==0.23.0
deepspeed>=0.13.1
addict
fire
PyYAML>=6.0
datasets
#flash-attn==2.3.3
sentencepiece
wandb
einops
#xformers==0.0.22
optimum==1.13.2
hf_transfer
colorama
numba
numpy>=1.24.4
mlflow
# qlora things
bert-score==0.3.13
evaluate==0.4.0
rouge-score==0.1.2
scipy
scikit-learn==1.2.2
pynvml
art
fschat==0.2.34
gradio==3.50.2
tensorboard
mamba-ssm==1.1.1
# remote filesystems
s3fs
gcsfs
# adlfs
trl>=0.7.9
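# Side note: the pins above mix exact versions (`peft==0.5.0`), lower bounds
# (`deepspeed>=0.13.1`), and bare names (`sentencepiece`). A minimal sketch of a
# parser for such lines (a hypothetical helper, not part of ipex-llm or pip):

```python
# Hypothetical helper: split a pip requirement line such as "peft==0.5.0"
# or "deepspeed>=0.13.1" into (name, operator, version).
# Bare names yield (name, None, None); comments and pip options yield None.
import re

def parse_requirement(line: str):
    line = line.split("#", 1)[0].strip()  # drop comments and whitespace
    if not line or line.startswith("--"):
        return None  # blank line or a pip option like --extra-index-url
    m = re.match(r"^([A-Za-z0-9_.-]+)\s*(==|>=|<=|~=|!=|>|<)?\s*(\S+)?$", line)
    if not m:
        return None
    name, op, ver = m.groups()
    return (name, op, ver)

print(parse_requirement("peft==0.5.0"))        # ('peft', '==', '0.5.0')
print(parse_requirement("deepspeed>=0.13.1"))  # ('deepspeed', '>=', '0.13.1')
print(parse_requirement("sentencepiece"))      # ('sentencepiece', None, None)
```

# Note that this toy parser only handles the specifier shapes used in this file;
# real requirement lines can carry extras, markers, and URLs (see PEP 508),
# for which `packaging.requirements.Requirement` is the proper tool.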