Commit graph

172 commits

Shaojun Liu
5aa3e427a9
Fix docker images (#11362)
* Fix docker images

* add-apt-repository requires gnupg, gpg-agent, software-properties-common

* update

* avoid importing ipex again
2024-06-20 15:44:55 +08:00
Xiangyu Tian
ef9f740801
Docs: Fix CPU Serving Docker README (#11351)
Fix CPU Serving Docker README
2024-06-18 16:27:51 +08:00
Guancheng Fu
c9b4cadd81
fix vLLM/docker issues (#11348)
* fix

* fix

* ffix
2024-06-18 16:23:53 +08:00
Qiyuan Gong
de4bb97b4f
Remove accelerate 0.23.0 install command in readme and docker (#11333)
* ipex-llm's accelerate has been upgraded to 0.23.0. Remove accelerate 0.23.0 install command in README and docker.
2024-06-17 17:52:12 +08:00
Shaojun Liu
77809be946
Install packages for ipex-llm-serving-cpu docker image (#11321)
* apt-get install patch

* Update Dockerfile

* Update Dockerfile

* revert
2024-06-14 15:26:01 +08:00
Shaojun Liu
9760ffc256
Fix SDLe CT222 Vulnerabilities (#11237)
* fix ct222 vuln

* update

* fix

* update ENTRYPOINT

* revert ENTRYPOINT

* Fix CT222 Vulns

* fix

* revert changes

* fix

* revert

* add sudo permission to ipex-llm user

* do not use ipex-llm user
2024-06-13 15:31:22 +08:00
Shaojun Liu
84f04087fb
Add intelanalytics/ipex-llm:sources image for OSPDT (#11296)
* Add intelanalytics/ipex-llm:sources image

* apt-get source
2024-06-13 14:29:14 +08:00
Guancheng Fu
2e75bbccf9
Add more control arguments for benchmark_vllm_throughput (#11291) 2024-06-12 17:43:06 +08:00
Guancheng Fu
eeffeeb2e2
fix benchmark script(#11243) 2024-06-06 17:44:19 +08:00
Shaojun Liu
1f2057b16a
Fix ipex-llm-cpu docker image (#11213)
* fix

* fix ipex-llm-cpu image
2024-06-05 11:13:17 +08:00
Xiangyu Tian
ac3d53ff5d
LLM: Fix vLLM CPU version error (#11206)
Fix vLLM CPU version error
2024-06-04 19:10:23 +08:00
Guancheng Fu
3ef4aa98d1
Refine vllm_quickstart doc (#11199)
* refine doc

* refine
2024-06-04 18:46:27 +08:00
Shaojun Liu
744042d1b2
remove software-properties-common from Dockerfile (#11203) 2024-06-04 17:37:42 +08:00
Guancheng Fu
daf7b1cd56
[Docker] Fix image using two cards error (#11144)
* fix all

* done
2024-05-27 16:20:13 +08:00
Qiyuan Gong
21a1a973c1
Remove axolotl and python3-blinker (#11127)
* Remove axolotl from image to reduce image size.
* Remove python3-blinker to avoid axolotl lib conflict.
2024-05-24 13:54:19 +08:00
Wang, Jian4
1443b802cc
Docker:Fix building cpp_docker and remove unimportant dependencies (#11114)
* test build

* update
2024-05-24 09:49:44 +08:00
Xiangyu Tian
b3f6faa038
LLM: Add CPU vLLM entrypoint (#11083)
Add CPU vLLM entrypoint and update CPU vLLM serving example.
2024-05-24 09:16:59 +08:00
Shaojun Liu
e0f401d97d
FIX: APT Repository not working (signatures invalid) (#11112)
* chmod 644 gpg key

* chmod 644 gpg key
2024-05-23 16:15:45 +08:00
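The fix above works because APT silently skips a keyring file it cannot read and then reports the repository's signatures as invalid; making the key world-readable (mode 644) restores verification. A minimal sketch of the permission change, using an illustrative filename rather than the actual key shipped in the image:

```python
# Sketch of the "chmod 644 gpg key" fix: a keyring that is not
# world-readable causes APT's "signatures invalid" error.
# The key name below is an assumption for illustration.
import os
import stat
import tempfile

tmp = tempfile.mkdtemp()
key = os.path.join(tmp, "intel-graphics.gpg")
open(key, "wb").close()

os.chmod(key, 0o600)  # root-only: APT would treat the key as unreadable
os.chmod(key, 0o644)  # world-readable, as applied in the Dockerfile fix

print(oct(stat.S_IMODE(os.stat(key).st_mode)))  # 0o644
```

In a Dockerfile this is the equivalent of a `RUN chmod 644 /path/to/key.gpg` step placed before `apt-get update`.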
binbin Deng
ecb16dcf14
Add deepspeed autotp support for xpu docker (#11077) 2024-05-21 14:49:54 +08:00
Wang, Jian4
00d4410746
Update cpp docker quickstart (#11040)
* add sample output

* update link

* update

* update header

* update
2024-05-16 14:55:13 +08:00
Guancheng Fu
7e29928865
refactor serving docker image (#11028) 2024-05-16 09:30:36 +08:00
Wang, Jian4
86cec80b51
LLM: Add llm inference_cpp_xpu_docker (#10933)
* test_cpp_docker

* update

* update

* update

* update

* add sudo

* update nodejs version

* no need npm

* remove blinker

* new cpp docker

* restore

* add line

* add manually_build

* update and add mtl

* update for workdir llm

* add benchmark part

* update readme

* update 1024-128

* update readme

* update

* fix

* update

* update

* update readme too

* update readme

* no change

* update dir_name

* update readme
2024-05-15 11:10:22 +08:00
Qiyuan Gong
1e00bd7bbe
Re-org XPU finetune images (#10971)
* Rename xpu finetune image from `ipex-llm-finetune-qlora-xpu` to `ipex-llm-finetune-xpu`.
* Add axolotl to xpu finetune image.
* Upgrade peft to 0.10.0, transformers to 4.36.0.
* Add accelerate default config to home.
2024-05-15 09:42:43 +08:00
Shengsheng Huang
0b7e78b592
revise the benchmark part in python inference docker (#11020) 2024-05-14 18:43:41 +08:00
Shengsheng Huang
586a151f9c
update the README and reorganize the docker guides structure. (#11016)
* update the README and reorganize the docker guides structure.

* modified docker install guide into overview
2024-05-14 17:56:11 +08:00
Shaojun Liu
7f8c5b410b
Quickstart: Run PyTorch Inference on Intel GPU using Docker (on Linux or WSL) (#10970)
* add entrypoint.sh

* add quickstart

* remove entrypoint

* update

* Install related library of benchmarking

* update

* print out results

* update docs

* minor update

* update

* update quickstart

* update

* update

* update

* update

* update

* update

* add chat & example section

* add more details

* minor update

* rename quickstart

* update

* minor update

* update

* update config.yaml

* update readme

* use --gpu

* add tips

* minor update

* update
2024-05-14 12:58:31 +08:00
Zephyr1101
7e7d969dcb
a experimental for workflow abuse step1 fix a typo (#10965)
* Update llm_unit_tests.yml

* Update README.md

* Update llm_unit_tests.yml

* Update llm_unit_tests.yml
2024-05-08 17:12:50 +08:00
Qiyuan Gong
c11170b96f
Upgrade Peft to 0.10.0 in finetune examples and docker (#10930)
* Upgrade Peft to 0.10.0 in finetune examples.
* Upgrade Peft to 0.10.0 in docker.
2024-05-07 15:12:26 +08:00
Qiyuan Gong
41ffe1526c
Modify CPU finetune docker for bz2 error (#10919)
* Avoid bz2 error
* change to cpu torch
2024-05-06 10:41:50 +08:00
Guancheng Fu
2c64754eb0
Add vLLM to ipex-llm serving image (#10807)
* add vllm

* done

* doc work

* fix done

* temp

* add docs

* format

* add start-fastchat-service.sh

* fix
2024-04-29 17:25:42 +08:00
Heyang Sun
751f6d11d8
fix typos in qlora README (#10893) 2024-04-26 14:03:06 +08:00
Guancheng Fu
3b82834aaf
Update README.md (#10838) 2024-04-22 14:18:51 +08:00
Shaojun Liu
7297036c03
upgrade python (#10769) 2024-04-16 09:28:10 +08:00
Shaojun Liu
3590e1be83
revert python to 3.9 for finetune image (#10758) 2024-04-15 10:37:10 +08:00
Shaojun Liu
29bf28bd6f
Upgrade python to 3.11 in Docker Image (#10718)
* install python 3.11 for cpu-inference docker image

* update xpu-inference dockerfile

* update cpu-serving image

* update qlora image

* update lora image

* update document
2024-04-10 14:41:27 +08:00
Heyang Sun
4f6df37805
fix wrong cpu core num seen by docker (#10645) 2024-04-03 15:52:25 +08:00
Shaojun Liu
1aef3bc0ab
verify and refine ipex-llm-finetune-qlora-xpu docker document (#10638)
* verify and refine finetune-xpu document

* update export_merged_model.py link

* update link
2024-04-03 11:33:13 +08:00
Heyang Sun
b8b923ed04
move chown step to behind add script in qlora Dockerfile 2024-04-02 23:04:51 +08:00
Shaojun Liu
a10f5a1b8d
add python style check (#10620)
* add python style check

* fix style checks

* update runner

* add ipex-llm-finetune-qlora-cpu-k8s to manually_build workflow

* update tag to 2.1.0-SNAPSHOT
2024-04-02 16:17:56 +08:00
Shaojun Liu
20a5e72da0
refine and verify ipex-llm-serving-xpu docker document (#10615)
* refine serving on cpu/xpu

* minor fix

* replace localhost with 0.0.0.0 so that service can be accessed through ip address
2024-04-02 11:45:45 +08:00
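The `localhost` → `0.0.0.0` change matters because a service bound to the loopback address only accepts connections from inside the container, while `0.0.0.0` listens on all interfaces so clients can reach it through the host's IP. A minimal, self-contained illustration (stdlib `http.server` standing in for the actual serving stack, which is an assumption here):

```python
# Binding to "0.0.0.0" (all interfaces) vs "127.0.0.1" (loopback only).
# A server bound to 127.0.0.1 is unreachable from other machines;
# 0.0.0.0 makes it accessible via the host's IP address.
import threading
import urllib.request
from http.server import HTTPServer, BaseHTTPRequestHandler

class Ping(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep the demo quiet
        pass

# host="0.0.0.0" listens on every interface; port 0 picks a free port.
srv = HTTPServer(("0.0.0.0", 0), Ping)
threading.Thread(target=srv.serve_forever, daemon=True).start()

port = srv.server_address[1]
print(urllib.request.urlopen(f"http://127.0.0.1:{port}").read())  # b'ok'
srv.shutdown()
```

With loopback binding, only the `127.0.0.1` request above would succeed; requests addressed to the machine's LAN IP would be refused.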
Shaojun Liu
59058bb206
replace 2.5.0-SNAPSHOT with 2.1.0-SNAPSHOT for llm docker images (#10603) 2024-04-01 09:58:51 +08:00
Shaojun Liu
b06de94a50
verify xpu-inference image and refine document (#10593) 2024-03-29 16:11:12 +08:00
Shaojun Liu
52f1b541cf
refine and verify ipex-inference-cpu docker document (#10565)
* restructure the index

* refine and verify cpu-inference document

* update
2024-03-29 10:16:10 +08:00
ZehuaCao
52a2135d83
Replace ipex with ipex-llm (#10554)
* fix ipex with ipex_llm

* fix ipex with ipex_llm

* update

* update

* update

* update

* update

* update

* update

* update
2024-03-28 13:54:40 +08:00
Cheen Hau, 俊豪
1c5eb14128
Update pip install to use --extra-index-url for ipex package (#10557)
* Change to 'pip install .. --extra-index-url' for readthedocs

* Change to 'pip install .. --extra-index-url' for examples

* Change to 'pip install .. --extra-index-url' for remaining files

* Fix URL for ipex

* Add links for ipex US and CN servers

* Update ipex cpu url

* remove readme

* Update for github actions

* Update for dockerfiles
2024-03-28 09:56:23 +08:00
Wang, Jian4
e2d25de17d
Update_docker by heyang (#29) 2024-03-25 10:05:46 +08:00
Wang, Jian4
9df70d95eb
Refactor bigdl.llm to ipex_llm (#24)
* Rename bigdl/llm to ipex_llm

* rm python/llm/src/bigdl

* from bigdl.llm to from ipex_llm
2024-03-22 15:41:21 +08:00
Heyang Sun
c672e97239 Fix CPU finetuning docker (#10494)
* Fix CPU finetuning docker

* Update README.md
2024-03-21 11:53:30 +08:00
Shaojun Liu
0e388f4b91 Fix Trivy Docker Image Vulnerabilities for BigDL Release 2.5.0 (#10447)
* Update pypi version to fix trivy issues

* refine
2024-03-19 14:52:15 +08:00
Wang, Jian4
1de13ea578 LLM: remove CPU english_quotes dataset and update docker example (#10399)
* update dataset

* update readme

* update docker cpu

* update xpu docker
2024-03-18 10:45:14 +08:00
ZehuaCao
146b77f113 fix qlora-finetune Dockerfile (#10379) 2024-03-12 13:20:06 +08:00
ZehuaCao
267de7abc3 fix fschat DEP version error (#10325) 2024-03-06 16:15:27 +08:00
Lilac09
a2ed4d714e Fix vllm service error (#10279) 2024-02-29 15:45:04 +08:00
Ziteng Zhang
e08c74f1d1 Fix build error of bigdl-llm-cpu (#10228) 2024-02-23 16:30:21 +08:00
Ziteng Zhang
f7e2591f15 [LLM] change IPEX230 to IPEX220 in dockerfile (#10222)
* change IPEX230 to IPEX220 in dockerfile
2024-02-23 15:02:08 +08:00
Shaojun Liu
079f2011ea Update bigdl-llm-finetune-qlora-xpu Docker Image (#10194)
* Bump oneapi version to 2024.0

* pip install bitsandbytes scipy

* Pin level-zero-gpu version

* Pin accelerate version 0.23.0
2024-02-21 15:18:27 +08:00
Lilac09
eca69a6022 Fix build error of bigdl-llm-cpu (#10176)
* fix build error

* fix build error

* fix build error

* fix build error
2024-02-20 14:50:12 +08:00
Lilac09
f8dcaff7f4 use default python (#10070) 2024-02-05 09:06:59 +08:00
Lilac09
72e67eedbb Add speculative support in docker (#10058)
* add speculative environment

* add speculative environment

* add speculative environment
2024-02-01 09:53:53 +08:00
binbin Deng
171fb2d185 LLM: reorganize GPU finetuning examples (#9952) 2024-01-25 19:02:38 +08:00
ZehuaCao
51aa8b62b2 add gradio_web_ui to llm-serving image (#9918) 2024-01-25 11:11:39 +08:00
Lilac09
de27ddd81a Update Dockerfile (#9981) 2024-01-24 11:10:06 +08:00
Lilac09
a2718038f7 Fix qwen model adapter in docker (#9969)
* fix qwen in docker

* add patch for model_adapter.py in fastchat

* add patch for model_adapter.py in fastchat
2024-01-24 11:01:29 +08:00
Lilac09
052962dfa5 Using original fastchat and add bigdl worker in docker image (#9967)
* add vllm worker

* add options in entrypoint
2024-01-23 14:17:05 +08:00
Shaojun Liu
32c56ffc71 pip install deps (#9916) 2024-01-17 11:03:57 +08:00
ZehuaCao
05ea0ecd70 add pv for llm-serving k8s deployment (#9906) 2024-01-16 11:32:54 +08:00
Guancheng Fu
0396fafed1 Update BigDL-LLM-inference image (#9805)
* upgrade to oneapi 2024

* Pin level-zero-gpu version

* add flag
2024-01-03 14:00:09 +08:00
Lilac09
a5c481fedd add chat.py denpendency in Dockerfile (#9699) 2023-12-18 09:00:22 +08:00
Lilac09
3afed99216 fix path issue (#9696) 2023-12-15 11:21:49 +08:00
ZehuaCao
d204125e88 [LLM] Use to build a more slim docker for k8s (#9608)
* Create Dockerfile.k8s

* Update Dockerfile

More slim standalone image

* Update Dockerfile

* Update Dockerfile.k8s

* Update bigdl-qlora-finetuing-entrypoint.sh

* Update qlora_finetuning_cpu.py

* Update alpaca_qlora_finetuning_cpu.py

Refer to this [pr](https://github.com/intel-analytics/BigDL/pull/9551/files#diff-2025188afa54672d21236e6955c7c7f7686bec9239532e41c7983858cc9aaa89), update the LoraConfig

* update

* update

* update

* update

* update

* update

* update

* update transformer version

* update Dockerfile

* update Docker image name

* fix error
2023-12-08 10:25:36 +08:00
Heyang Sun
4e70e33934 [LLM] code and document for distributed qlora (#9585)
* [LLM] code and document for distributed qlora

* doc

* refine for gradient checkpoint

* refine

* Update alpaca_qlora_finetuning_cpu.py

* Update alpaca_qlora_finetuning_cpu.py

* Update alpaca_qlora_finetuning_cpu.py

* add link in doc
2023-12-06 09:23:17 +08:00
Guancheng Fu
8b00653039 fix doc (#9599) 2023-12-05 13:49:31 +08:00
Heyang Sun
74fd7077a2 [LLM] Multi-process and distributed QLoRA on CPU platform (#9491)
* [LLM] Multi-process and distributed QLoRA on CPU platform

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* enable llm-init and bind to socket

* refine

* Update Dockerfile

* add all files of qlora cpu example to /bigdl

* fix

* fix k8s

* Update bigdl-qlora-finetuing-entrypoint.sh

* Update bigdl-qlora-finetuing-entrypoint.sh

* Update bigdl-qlora-finetuning-job.yaml

* fix train sync and performance issues

* add node affinity

* disable user to tune cpu per pod

* Update bigdl-qlora-finetuning-job.yaml
2023-12-01 13:47:19 +08:00
Lilac09
b785376f5c Add vllm-example to docker inference image (#9570)
* add vllm-serving to cpu image

* add vllm-serving to cpu image

* add vllm-serving
2023-11-30 17:04:53 +08:00
Lilac09
2554ba0913 Add usage of vllm (#9564)
* add usage of vllm

* add usage of vllm

* add usage of vllm

* add usage of vllm

* add usage of vllm

* add usage of vllm
2023-11-30 14:19:23 +08:00
Lilac09
557bb6bbdb add judgement for running serve (#9555) 2023-11-29 16:57:00 +08:00
Guancheng Fu
2b200bf2f2 Add vllm_worker related arguments in docker serving image's entrypoint (#9500)
* fix entrypoint

* fix missing long mode argument
2023-11-21 14:41:06 +08:00
Lilac09
566ec85113 add stream interval option to entrypoint (#9498) 2023-11-21 09:47:32 +08:00
Lilac09
13f6eb77b4 Add exec bash to entrypoint.sh to keep container running after being booted. (#9471)
* add bigdl-llm-init

* boot bash
2023-11-15 16:09:16 +08:00
Lilac09
24146d108f add bigdl-llm-init (#9468) 2023-11-15 14:55:33 +08:00
Lilac09
b2b085550b Remove bigdl-nano and add ipex into inference-cpu image (#9452)
* remove bigdl-nano and add ipex into inference-cpu image

* remove bigdl-nano in docker

* remove bigdl-nano in docker
2023-11-14 10:50:52 +08:00
Wang, Jian4
0f78ebe35e LLM : Add qlora cpu finetune docker image (#9271)
* init qlora cpu docker image

* update

* remove ipex and update

* update

* update readme

* update example and readme
2023-11-14 10:36:53 +08:00
Shaojun Liu
0e5ab5ebfc update docker tag to 2.5.0-SNAPSHOT (#9443) 2023-11-13 16:53:40 +08:00
Lilac09
5d4ec44488 Add all-in-one benchmark into inference-cpu docker image (#9433)
* add all-in-one into inference-cpu image

* manually_build

* revise files
2023-11-13 13:07:56 +08:00
Lilac09
74a8ad32dc Add entry point to llm-serving-xpu (#9339)
* add entry point to llm-serving-xpu

* manually build

* manually build

* add entry point to llm-serving-xpu

* manually build

* add entry point to llm-serving-xpu

* add entry point to llm-serving-xpu

* add entry point to llm-serving-xpu
2023-11-02 16:31:07 +08:00
Ziteng Zhang
4df66f5cbc Update llm-finetune-lora-cpu dockerfile and readme
* Update README.md

* Update Dockerfile
2023-11-02 16:26:24 +08:00
Lilac09
2c2bc959ad add tools into previously built images (#9317)
* modify Dockerfile

* manually build

* modify Dockerfile

* add chat.py into inference-xpu

* add benchmark into inference-cpu

* manually build

* add benchmark into inference-cpu

* add benchmark into inference-cpu

* add benchmark into inference-cpu

* add chat.py into inference-xpu

* add chat.py into inference-xpu

* change ADD to COPY in dockerfile

* fix dependency issue

* temporarily remove run-spr in llm-cpu

* temporarily remove run-spr in llm-cpu
2023-10-31 16:35:18 +08:00
Lilac09
030edeecac Ubuntu upgrade: fix installation error (#9309)
* upgrade ubuntu version in llm-inference cpu image

* fix installation issue

* fix installation issue

* fix installation issue
2023-10-31 09:55:15 +08:00
Lilac09
5842f7530e upgrade ubuntu version in llm-inference cpu image (#9307) 2023-10-30 16:51:38 +08:00
Ziteng Zhang
ca2965fb9f hosted k8s.png on readthedocs (#9258) 2023-10-24 15:07:16 +08:00
Guancheng Fu
7f66bc5c14 Fix bigdl-llm-serving-cpu Dockerfile (#9247) 2023-10-23 16:51:30 +08:00
Shaojun Liu
9dc76f19c0 fix hadolint error (#9223) 2023-10-19 16:22:32 +08:00
Ziteng Zhang
0d62bd4adb Added Docker installation guide and modified link in Dockerfile (#9224)
* changed '/ppml' into '/bigdl' and modified llama-7b

* Added the contents of finetuning in README

* Modified link of qlora_finetuning.py in Dockerfile
2023-10-19 15:28:05 +08:00
Lilac09
160c543a26 README for BigDL-LLM on docker (#9197)
* add instruction for MacOS/Linux

* modify html label of gif images

* organize structure of README

* change title name

* add inference-xpu, serving-cpu and serving-xpu parts

* revise README

* revise README

* revise README
2023-10-19 13:48:06 +08:00
Ziteng Zhang
2f14f53b1c changed '/ppml' into '/bigdl' and modified llama-7b (#9209) 2023-10-18 10:25:12 +08:00
Lilac09
326ef7f491 add README for llm-inference-cpu (#9147)
* add README for llm-inference-cpu

* modify README

* add README for llm-inference-cpu on Windows
2023-10-16 10:27:44 +08:00
Lilac09
e02fbb40cc add bigdl-llm-tutorial into llm-inference-cpu image (#9139)
* add bigdl-llm-tutorial into llm-inference-cpu image

* modify Dockerfile

* modify Dockerfile
2023-10-11 16:41:04 +08:00
Ziteng Zhang
4a0a3c376a Add stand-alone mode on cpu for finetuning (#9127)
* Added steps for finetune on CPU in stand-alone mode

* Add stand-alone mode to bigdl-lora-finetuing-entrypoint.sh

* delete redundant docker commands

* Update README.md

Turn to intelanalytics/bigdl-llm-finetune-cpu:2.4.0-SNAPSHOT and append example outputs to allow users to check the running

* Update bigdl-lora-finetuing-entrypoint.sh

Add some tunable parameters

* Add parameters --cpus and -e WORKER_COUNT_DOCKER

* Modified the cpu number range parameters

* Set -ppn to CCL_WORKER_COUNT

* Add related configuration suggestions in README.md
2023-10-11 15:01:21 +08:00
Lilac09
30e3c196f3 Merge pull request #9108 from Zhengjin-Wang/main
Add instruction for chat.py in bigdl-llm-cpu
2023-10-10 16:40:52 +08:00
Lilac09
1e78b0ac40 Optimize LoRA Docker by Shrinking Image Size (#9110)
* modify dockerfile

* modify dockerfile
2023-10-10 15:53:17 +08:00