Shaojun Liu
72b4efaad4
Enhanced XPU Dockerfiles: Optimized Environment Variables and Documentation ( #11506 )
...
* Added SYCL_CACHE_PERSISTENT=1 to xpu Dockerfile
* Update the document to add explanations for environment variables.
* update quickstart
2024-07-04 20:18:38 +08:00
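The environment variable this commit bakes into the XPU Dockerfile tells the SYCL runtime to cache JIT-compiled kernels on disk, so later runs skip recompilation. A minimal sketch of the equivalent setting outside the image (the `docker run` form is an assumption, not the commit's exact Dockerfile line):

```shell
# Equivalent of the Dockerfile's ENV SYCL_CACHE_PERSISTENT=1;
# persists compiled SYCL kernels across container runs.
# At runtime it could also be passed as: docker run -e SYCL_CACHE_PERSISTENT=1 ...
export SYCL_CACHE_PERSISTENT=1
echo "$SYCL_CACHE_PERSISTENT"
```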
Qiyuan Gong
1e00bd7bbe
Re-org XPU finetune images ( #10971 )
...
* Rename xpu finetune image from `ipex-llm-finetune-qlora-xpu` to `ipex-llm-finetune-xpu`.
* Add axolotl to xpu finetune image.
* Upgrade peft to 0.10.0, transformers to 4.36.0.
* Add accelerate default config to home.
2024-05-15 09:42:43 +08:00
Shengsheng Huang
0b7e78b592
revise the benchmark part in python inference docker ( #11020 )
2024-05-14 18:43:41 +08:00
Shengsheng Huang
586a151f9c
update the README and reorganize the docker guides structure. ( #11016 )
...
* update the README and reorganize the docker guides structure.
* modified docker install guide into overview
2024-05-14 17:56:11 +08:00
Shaojun Liu
7f8c5b410b
Quickstart: Run PyTorch Inference on Intel GPU using Docker (on Linux or WSL) ( #10970 )
...
* add entrypoint.sh
* add quickstart
* remove entrypoint
* update
* Install related library of benchmarking
* update
* print out results
* update docs
* minor update
* update
* update quickstart
* update
* update
* update
* update
* update
* update
* add chat & example section
* add more details
* minor update
* rename quickstart
* update
* minor update
* update
* update config.yaml
* update readme
* use --gpu
* add tips
* minor update
* update
2024-05-14 12:58:31 +08:00
Zephyr1101
7e7d969dcb
an experimental fix for workflow abuse, step 1: fix a typo ( #10965 )
...
* Update llm_unit_tests.yml
* Update README.md
* Update llm_unit_tests.yml
* Update llm_unit_tests.yml
2024-05-08 17:12:50 +08:00
Guancheng Fu
3b82834aaf
Update README.md ( #10838 )
2024-04-22 14:18:51 +08:00
Shaojun Liu
1aef3bc0ab
verify and refine ipex-llm-finetune-qlora-xpu docker document ( #10638 )
...
* verify and refine finetune-xpu document
* update export_merged_model.py link
* update link
2024-04-03 11:33:13 +08:00
Shaojun Liu
20a5e72da0
refine and verify ipex-llm-serving-xpu docker document ( #10615 )
...
* refine serving on cpu/xpu
* minor fix
* replace localhost with 0.0.0.0 so that service can be accessed through ip address
2024-04-02 11:45:45 +08:00
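The last bullet above matters in practice: a service bound to `localhost` only accepts connections from inside the container, while `0.0.0.0` listens on all interfaces so the service is reachable via the host's IP. A sketch of the substitution on a hypothetical serving command (`serve.py` and its flags are assumptions, not the actual serving script):

```shell
# Hypothetical command line before the doc change:
cmd="python serve.py --host localhost --port 8000"
# Replace localhost with 0.0.0.0 so the service accepts external connections.
cmd=$(echo "$cmd" | sed 's/localhost/0.0.0.0/')
echo "$cmd"
```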
Shaojun Liu
b06de94a50
verify xpu-inference image and refine document ( #10593 )
2024-03-29 16:11:12 +08:00
Shaojun Liu
52f1b541cf
refine and verify ipex-inference-cpu docker document ( #10565 )
...
* restructure the index
* refine and verify cpu-inference document
* update
2024-03-29 10:16:10 +08:00
ZehuaCao

52a2135d83
Replace ipex with ipex-llm ( #10554 )
...
* replace ipex with ipex_llm
* replace ipex with ipex_llm
* update
* update
* update
* update
* update
* update
* update
* update
2024-03-28 13:54:40 +08:00
Wang, Jian4
e2d25de17d
Update docker by heyang ( #29 )
2024-03-25 10:05:46 +08:00
Wang, Jian4
9df70d95eb
Refactor bigdl.llm to ipex_llm ( #24 )
...
* Rename bigdl/llm to ipex_llm
* rm python/llm/src/bigdl
* from bigdl.llm to from ipex_llm
2024-03-22 15:41:21 +08:00
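The import rename described by the bullets above ("from bigdl.llm to from ipex_llm") is a mechanical text substitution across the tree. A hedged sketch of the transformation on one line (the module path after `from` is illustrative; the commit's actual rename script is not shown here):

```shell
# Rewrite an old-style import to the new package name,
# as the commit's "from bigdl.llm to from ipex_llm" bullet describes.
result=$(echo "from bigdl.llm.transformers import AutoModelForCausalLM" \
  | sed 's/from bigdl\.llm/from ipex_llm/')
echo "$result"
```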
Heyang Sun
c672e97239
Fix CPU finetuning docker ( #10494 )
...
* Fix CPU finetuning docker
* Update README.md
2024-03-21 11:53:30 +08:00
Wang, Jian4
1de13ea578
LLM: remove CPU english_quotes dataset and update docker example ( #10399 )
...
* update dataset
* update readme
* update docker cpu
* update xpu docker
2024-03-18 10:45:14 +08:00
Lilac09
a5c481fedd
add chat.py dependency in Dockerfile ( #9699 )
2023-12-18 09:00:22 +08:00
Shaojun Liu
0e5ab5ebfc
update docker tag to 2.5.0-SNAPSHOT ( #9443 )
2023-11-13 16:53:40 +08:00
Lilac09
74a8ad32dc
Add entry point to llm-serving-xpu ( #9339 )
...
* add entry point to llm-serving-xpu
* manually build
* manually build
* add entry point to llm-serving-xpu
* manually build
* add entry point to llm-serving-xpu
* add entry point to llm-serving-xpu
* add entry point to llm-serving-xpu
2023-11-02 16:31:07 +08:00
Ziteng Zhang
0d62bd4adb
Added Docker installation guide and modified link in Dockerfile ( #9224 )
...
* changed '/ppml' into '/bigdl' and modified llama-7b
* Added the contents of finetuning in README
* Modified link of qlora_finetuning.py in Dockerfile
2023-10-19 15:28:05 +08:00
Lilac09
160c543a26
README for BigDL-LLM on docker ( #9197 )
...
* add instruction for macOS/Linux
* modify html label of gif images
* organize structure of README
* change title name
* add inference-xpu, serving-cpu and serving-xpu parts
* revise README
* revise README
* revise README
2023-10-19 13:48:06 +08:00
Lilac09
326ef7f491
add README for llm-inference-cpu ( #9147 )
...
* add README for llm-inference-cpu
* modify README
* add README for llm-inference-cpu on Windows
2023-10-16 10:27:44 +08:00