Qiyuan Gong
1e00bd7bbe
Re-org XPU finetune images ( #10971 )
...
* Rename xpu finetune image from `ipex-llm-finetune-qlora-xpu` to `ipex-llm-finetune-xpu`.
* Add axolotl to xpu finetune image.
* Upgrade peft to 0.10.0, transformers to 4.36.0.
* Add accelerate default config to home.
2024-05-15 09:42:43 +08:00
Shengsheng Huang
0b7e78b592
revise the benchmark part in python inference docker ( #11020 )
2024-05-14 18:43:41 +08:00
Shengsheng Huang
586a151f9c
update the README and reorganize the docker guides structure. ( #11016 )
...
* update the README and reorganize the docker guides structure.
* modified docker install guide into an overview
2024-05-14 17:56:11 +08:00
Shaojun Liu
7f8c5b410b
Quickstart: Run PyTorch Inference on Intel GPU using Docker (on Linux or WSL) ( #10970 )
...
* add entrypoint.sh
* add quickstart
* remove entrypoint
* update
* Install related library of benchmarking
* update
* print out results
* update docs
* minor update
* update
* update quickstart
* update
* update
* update
* update
* update
* update
* add chat & example section
* add more details
* minor update
* rename quickstart
* update
* minor update
* update
* update config.yaml
* update readme
* use --gpu
* add tips
* minor update
* update
2024-05-14 12:58:31 +08:00
Zephyr1101
7e7d969dcb
An experimental fix for workflow abuse, step 1: fix a typo ( #10965 )
...
* Update llm_unit_tests.yml
* Update README.md
* Update llm_unit_tests.yml
* Update llm_unit_tests.yml
2024-05-08 17:12:50 +08:00
Qiyuan Gong
c11170b96f
Upgrade Peft to 0.10.0 in finetune examples and docker ( #10930 )
...
* Upgrade Peft to 0.10.0 in finetune examples.
* Upgrade Peft to 0.10.0 in docker.
2024-05-07 15:12:26 +08:00
Qiyuan Gong
41ffe1526c
Modify CPU finetune docker for bz2 error ( #10919 )
...
* Avoid bz2 error
* change to cpu torch
2024-05-06 10:41:50 +08:00
Guancheng Fu
2c64754eb0
Add vLLM to ipex-llm serving image ( #10807 )
...
* add vllm
* done
* doc work
* fix done
* temp
* add docs
* format
* add start-fastchat-service.sh
* fix
2024-04-29 17:25:42 +08:00
Heyang Sun
751f6d11d8
fix typos in qlora README ( #10893 )
2024-04-26 14:03:06 +08:00
Guancheng Fu
3b82834aaf
Update README.md ( #10838 )
2024-04-22 14:18:51 +08:00
Shaojun Liu
7297036c03
upgrade python ( #10769 )
2024-04-16 09:28:10 +08:00
Shaojun Liu
3590e1be83
revert python to 3.9 for finetune image ( #10758 )
2024-04-15 10:37:10 +08:00
Shaojun Liu
29bf28bd6f
Upgrade python to 3.11 in Docker Image ( #10718 )
...
* install python 3.11 for cpu-inference docker image
* update xpu-inference dockerfile
* update cpu-serving image
* update qlora image
* update lora image
* update document
2024-04-10 14:41:27 +08:00
Heyang Sun
4f6df37805
fix wrong cpu core num seen by docker ( #10645 )
2024-04-03 15:52:25 +08:00
Shaojun Liu
1aef3bc0ab
verify and refine ipex-llm-finetune-qlora-xpu docker document ( #10638 )
...
* verify and refine finetune-xpu document
* update export_merged_model.py link
* update link
2024-04-03 11:33:13 +08:00
Heyang Sun
b8b923ed04
move chown step to after the add-script step in qlora Dockerfile
2024-04-02 23:04:51 +08:00
Shaojun Liu
a10f5a1b8d
add python style check ( #10620 )
...
* add python style check
* fix style checks
* update runner
* add ipex-llm-finetune-qlora-cpu-k8s to manually_build workflow
* update tag to 2.1.0-SNAPSHOT
2024-04-02 16:17:56 +08:00
Shaojun Liu
20a5e72da0
refine and verify ipex-llm-serving-xpu docker document ( #10615 )
...
* refine serving on cpu/xpu
* minor fix
* replace localhost with 0.0.0.0 so that service can be accessed through ip address
2024-04-02 11:45:45 +08:00
Shaojun Liu
59058bb206
replace 2.5.0-SNAPSHOT with 2.1.0-SNAPSHOT for llm docker images ( #10603 )
2024-04-01 09:58:51 +08:00
Shaojun Liu
b06de94a50
verify xpu-inference image and refine document ( #10593 )
2024-03-29 16:11:12 +08:00
Shaojun Liu
52f1b541cf
refine and verify ipex-inference-cpu docker document ( #10565 )
...
* restructure the index
* refine and verify cpu-inference document
* update
2024-03-29 10:16:10 +08:00
ZehuaCao
52a2135d83
Replace ipex with ipex-llm ( #10554 )
...
* fix ipex with ipex_llm
* fix ipex with ipex_llm
* update
* update
* update
* update
* update
* update
* update
* update
2024-03-28 13:54:40 +08:00
Cheen Hau, 俊豪
1c5eb14128
Update pip install to use --extra-index-url for ipex package ( #10557 )
...
* Change to 'pip install .. --extra-index-url' for readthedocs
* Change to 'pip install .. --extra-index-url' for examples
* Change to 'pip install .. --extra-index-url' for remaining files
* Fix URL for ipex
* Add links for ipex US and CN servers
* Update ipex cpu url
* remove readme
* Update for github actions
* Update for dockerfiles
2024-03-28 09:56:23 +08:00
Wang, Jian4
e2d25de17d
Update_docker by heyang ( #29 )
2024-03-25 10:05:46 +08:00
Wang, Jian4
9df70d95eb
Refactor bigdl.llm to ipex_llm ( #24 )
...
* Rename bigdl/llm to ipex_llm
* rm python/llm/src/bigdl
* from bigdl.llm to from ipex_llm
2024-03-22 15:41:21 +08:00
Heyang Sun
c672e97239
Fix CPU finetuning docker ( #10494 )
...
* Fix CPU finetuning docker
* Update README.md
2024-03-21 11:53:30 +08:00
Shaojun Liu
0e388f4b91
Fix Trivy Docker Image Vulnerabilities for BigDL Release 2.5.0 ( #10447 )
...
* Update pypi version to fix trivy issues
* refine
2024-03-19 14:52:15 +08:00
Wang, Jian4
1de13ea578
LLM: remove CPU english_quotes dataset and update docker example ( #10399 )
...
* update dataset
* update readme
* update docker cpu
* update xpu docker
2024-03-18 10:45:14 +08:00
ZehuaCao
146b77f113
fix qlora-finetune Dockerfile ( #10379 )
2024-03-12 13:20:06 +08:00
ZehuaCao
267de7abc3
fix fschat DEP version error ( #10325 )
2024-03-06 16:15:27 +08:00
Lilac09
a2ed4d714e
Fix vllm service error ( #10279 )
2024-02-29 15:45:04 +08:00
Ziteng Zhang
e08c74f1d1
Fix build error of bigdl-llm-cpu ( #10228 )
2024-02-23 16:30:21 +08:00
Ziteng Zhang
f7e2591f15
[LLM] change IPEX230 to IPEX220 in dockerfile ( #10222 )
...
* change IPEX230 to IPEX220 in dockerfile
2024-02-23 15:02:08 +08:00
Shaojun Liu
079f2011ea
Update bigdl-llm-finetune-qlora-xpu Docker Image ( #10194 )
...
* Bump oneapi version to 2024.0
* pip install bitsandbytes scipy
* Pin level-zero-gpu version
* Pin accelerate version 0.23.0
2024-02-21 15:18:27 +08:00
Lilac09
eca69a6022
Fix build error of bigdl-llm-cpu ( #10176 )
...
* fix build error
* fix build error
* fix build error
* fix build error
2024-02-20 14:50:12 +08:00
Lilac09
f8dcaff7f4
use default python ( #10070 )
2024-02-05 09:06:59 +08:00
Lilac09
72e67eedbb
Add speculative support in docker ( #10058 )
...
* add speculative environment
* add speculative environment
* add speculative environment
2024-02-01 09:53:53 +08:00
binbin Deng
171fb2d185
LLM: reorganize GPU finetuning examples ( #9952 )
2024-01-25 19:02:38 +08:00
ZehuaCao
51aa8b62b2
add gradio_web_ui to llm-serving image ( #9918 )
2024-01-25 11:11:39 +08:00
Lilac09
de27ddd81a
Update Dockerfile ( #9981 )
2024-01-24 11:10:06 +08:00
Lilac09
a2718038f7
Fix qwen model adapter in docker ( #9969 )
...
* fix qwen in docker
* add patch for model_adapter.py in fastchat
* add patch for model_adapter.py in fastchat
2024-01-24 11:01:29 +08:00
Lilac09
052962dfa5
Use original fastchat and add bigdl worker in docker image ( #9967 )
...
* add vllm worker
* add options in entrypoint
2024-01-23 14:17:05 +08:00
Shaojun Liu
32c56ffc71
pip install deps ( #9916 )
2024-01-17 11:03:57 +08:00
ZehuaCao
05ea0ecd70
add pv for llm-serving k8s deployment ( #9906 )
2024-01-16 11:32:54 +08:00
Guancheng Fu
0396fafed1
Update BigDL-LLM-inference image ( #9805 )
...
* upgrade to oneapi 2024
* Pin level-zero-gpu version
* add flag
2024-01-03 14:00:09 +08:00
Lilac09
a5c481fedd
add chat.py dependency in Dockerfile ( #9699 )
2023-12-18 09:00:22 +08:00
Lilac09
3afed99216
fix path issue ( #9696 )
2023-12-15 11:21:49 +08:00
ZehuaCao
d204125e88
[LLM] Use Dockerfile.k8s to build a slimmer docker image for k8s ( #9608 )
...
* Create Dockerfile.k8s
* Update Dockerfile
More slim standalone image
* Update Dockerfile
* Update Dockerfile.k8s
* Update bigdl-qlora-finetuing-entrypoint.sh
* Update qlora_finetuning_cpu.py
* Update alpaca_qlora_finetuning_cpu.py
Referring to this [pr](https://github.com/intel-analytics/BigDL/pull/9551/files#diff-2025188afa54672d21236e6955c7c7f7686bec9239532e41c7983858cc9aaa89), update the LoraConfig
* update
* update
* update
* update
* update
* update
* update
* update transformer version
* update Dockerfile
* update Docker image name
* fix error
2023-12-08 10:25:36 +08:00
Heyang Sun
4e70e33934
[LLM] code and document for distributed qlora ( #9585 )
...
* [LLM] code and document for distributed qlora
* doc
* refine for gradient checkpoint
* refine
* Update alpaca_qlora_finetuning_cpu.py
* Update alpaca_qlora_finetuning_cpu.py
* Update alpaca_qlora_finetuning_cpu.py
* add link in doc
2023-12-06 09:23:17 +08:00
Guancheng Fu
8b00653039
fix doc ( #9599 )
2023-12-05 13:49:31 +08:00