Commit graph

32 commits

Author SHA1 Message Date
Xiangyu Tian
ef9f740801
Docs: Fix CPU Serving Docker README (#11351)
Fix CPU Serving Docker README
2024-06-18 16:27:51 +08:00
Shaojun Liu
77809be946
Install packages for ipex-llm-serving-cpu docker image (#11321)
* apt-get install patch

* Update Dockerfile

* Update Dockerfile

* revert
2024-06-14 15:26:01 +08:00
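
The commit body boils down to an apt-get layer; a minimal sketch of what that typically looks like (only `patch` is named in the commit, the flags and cleanup line are standard practice, not confirmed):

```dockerfile
# Sketch: extra system packages for the ipex-llm-serving-cpu image.
# "patch" comes from the commit body; the rest is conventional apt hygiene.
RUN apt-get update && \
    apt-get install -y --no-install-recommends patch && \
    rm -rf /var/lib/apt/lists/*
```
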
Shaojun Liu
9760ffc256
Fix SDLe CT222 Vulnerabilities (#11237)
* fix ct222 vuln

* update

* fix

* update ENTRYPOINT

* revert ENTRYPOINT

* Fix CT222 Vulns

* fix

* revert changes

* fix

* revert

* add sudo permission to ipex-llm user

* do not use ipex-llm user
2024-06-13 15:31:22 +08:00
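
The back-and-forth in this commit body circles around running as a dedicated `ipex-llm` user; a hypothetical sketch of the sudo variant that was tried and, per the final bullet, ultimately dropped:

```dockerfile
# Hypothetical sketch: non-root ipex-llm user with passwordless sudo.
# The commit history shows this was attempted and then reverted.
RUN useradd -m -s /bin/bash ipex-llm && \
    echo 'ipex-llm ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/ipex-llm
USER ipex-llm
```
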
Xiangyu Tian
ac3d53ff5d
LLM: Fix vLLM CPU version error (#11206)
Fix vLLM CPU version error
2024-06-04 19:10:23 +08:00
Guancheng Fu
3ef4aa98d1
Refine vllm_quickstart doc (#11199)
* refine doc

* refine
2024-06-04 18:46:27 +08:00
Xiangyu Tian
b3f6faa038
LLM: Add CPU vLLM entrypoint (#11083)
Add CPU vLLM entrypoint and update CPU vLLM serving example.
2024-05-24 09:16:59 +08:00
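
A sketch of how a CPU vLLM serving entrypoint of this kind is typically launched; the module path and flags follow upstream vLLM's OpenAI-compatible server and are assumptions, as are the model path and port:

```bash
# Sketch: start an OpenAI-compatible vLLM server on CPU.
# Model path and port are placeholders.
python -m vllm.entrypoints.openai.api_server \
  --model /llm/models/Llama-2-7b-chat-hf \
  --device cpu \
  --port 8000
```
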
Shaojun Liu
29bf28bd6f
Upgrade python to 3.11 in Docker Image (#10718)
* install python 3.11 for cpu-inference docker image

* update xpu-inference dockerfile

* update cpu-serving image

* update qlora image

* update lora image

* update document
2024-04-10 14:41:27 +08:00
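
A sketch of the Python 3.11 install step across these images; using the deadsnakes PPA on an Ubuntu base is an assumption, not confirmed by the commit:

```dockerfile
# Sketch: install Python 3.11 (deadsnakes PPA is an assumption).
RUN apt-get update && \
    apt-get install -y software-properties-common && \
    add-apt-repository -y ppa:deadsnakes/ppa && \
    apt-get update && \
    apt-get install -y python3.11 python3.11-distutils && \
    ln -sf /usr/bin/python3.11 /usr/bin/python3
```
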
Shaojun Liu
59058bb206
replace 2.5.0-SNAPSHOT with 2.1.0-SNAPSHOT for llm docker images (#10603) 2024-04-01 09:58:51 +08:00
ZehuaCao
52a2135d83
Replace ipex with ipex-llm (#10554)
* fix ipex with ipex_llm

* fix ipex with ipex_llm

* update

* update

* update

* update

* update

* update

* update

* update
2024-03-28 13:54:40 +08:00
Wang, Jian4
e2d25de17d
Update_docker by heyang (#29)
2024-03-25 10:05:46 +08:00
Wang, Jian4
9df70d95eb
Refactor bigdl.llm to ipex_llm (#24)
* Rename bigdl/llm to ipex_llm

* rm python/llm/src/bigdl

* from bigdl.llm to from ipex_llm
2024-03-22 15:41:21 +08:00
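
The commit body spells the rename out: the bigdl/llm tree becomes ipex_llm and imports change to match. A sketch of the mechanical half:

```bash
# Sketch: rewrite imports from bigdl.llm to ipex_llm across the tree.
# The python/llm path follows the commit body's "rm python/llm/src/bigdl".
grep -rl 'from bigdl\.llm' python/llm | xargs sed -i 's/from bigdl\.llm/from ipex_llm/g'
```
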
Shaojun Liu
0e388f4b91
Fix Trivy Docker Image Vulnerabilities for BigDL Release 2.5.0 (#10447)
* Update pypi version to fix trivy issues

* refine
2024-03-19 14:52:15 +08:00
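
Trivy findings like these are typically confirmed by rescanning the built image; a sketch with a placeholder image name and tag:

```bash
# Sketch: scan a built image for HIGH/CRITICAL CVEs with Trivy.
trivy image --severity HIGH,CRITICAL intelanalytics/bigdl-llm-serving-cpu:2.5.0-SNAPSHOT
```
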
ZehuaCao
267de7abc3
fix fschat DEP version error (#10325)
2024-03-06 16:15:27 +08:00
ZehuaCao
51aa8b62b2
add gradio_web_ui to llm-serving image (#9918)
2024-01-25 11:11:39 +08:00
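
FastChat ships a gradio front end, which is presumably what this image gained; a sketch of the usual three-process setup, with placeholder model path and port:

```bash
# Sketch: FastChat controller + model worker + gradio web UI.
python -m fastchat.serve.controller &
python -m fastchat.serve.model_worker --model-path /llm/models/vicuna-7b-v1.5 &
python -m fastchat.serve.gradio_web_server --port 7860
```
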
Lilac09
de27ddd81a
Update Dockerfile (#9981)
2024-01-24 11:10:06 +08:00
Lilac09
a2718038f7
Fix qwen model adapter in docker (#9969)
* fix qwen in docker

* add patch for model_adapter.py in fastchat

* add patch for model_adapter.py in fastchat
2024-01-24 11:01:29 +08:00
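
Per the commit body, the fix ships a patch for FastChat's model_adapter.py that is applied at build time; a sketch of that pattern, with the patch file name and destination assumed:

```dockerfile
# Sketch: apply a local patch to FastChat's model_adapter.py at build time.
# Resolving the module's file path avoids hard-coding site-packages.
COPY model_adapter.py.patch /llm/
RUN patch "$(python -c 'import fastchat.model.model_adapter as m; print(m.__file__)')" \
    /llm/model_adapter.py.patch
```
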
Lilac09
052962dfa5
Using original fastchat and add bigdl worker in docker image (#9967)
* add vllm worker

* add options in entrypoint
2024-01-23 14:17:05 +08:00
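
With stock FastChat in the image, the entrypoint has to pick a worker; a sketch of such a dispatch, where the variable names and worker set are assumptions:

```bash
# Sketch: choose the serving worker from an env var in entrypoint.sh.
case "$WORKER_TYPE" in
  model_worker)
    exec python -m fastchat.serve.model_worker --model-path "$MODEL_PATH" ;;
  vllm_worker)
    exec python -m fastchat.serve.vllm_worker --model-path "$MODEL_PATH" ;;
  *)
    echo "Unknown WORKER_TYPE: $WORKER_TYPE" >&2
    exit 1 ;;
esac
```
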
ZehuaCao
05ea0ecd70
add pv for llm-serving k8s deployment (#9906)
2024-01-16 11:32:54 +08:00
Lilac09
2554ba0913
Add usage of vllm (#9564)
* add usage of vllm

* add usage of vllm

* add usage of vllm

* add usage of vllm

* add usage of vllm

* add usage of vllm
2023-11-30 14:19:23 +08:00
Lilac09
557bb6bbdb
add judgement for running serve (#9555)
2023-11-29 16:57:00 +08:00
Guancheng Fu
2b200bf2f2
Add vllm_worker related arguments in docker serving image's entrypoint (#9500)
* fix entrypoint

* fix missing long mode argument
2023-11-21 14:41:06 +08:00
Lilac09
566ec85113
add stream interval option to entrypoint (#9498)
2023-11-21 09:47:32 +08:00
Lilac09
13f6eb77b4
Add exec bash to entrypoint.sh to keep container running after being booted. (#9471)
* add bigdl-llm-init

* boot bash
2023-11-15 16:09:16 +08:00
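
The commit body gives the tail of entrypoint.sh almost verbatim: initialize the environment, then hand the container an interactive shell so it stays up.

```bash
# Sketch: end of entrypoint.sh — set up the bigdl-llm environment,
# then replace the shell so the container keeps running.
source bigdl-llm-init
exec bash
```
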
Lilac09
24146d108f
add bigdl-llm-init (#9468)
2023-11-15 14:55:33 +08:00
Lilac09
b2b085550b
Remove bigdl-nano and add ipex into inference-cpu image (#9452)
* remove bigdl-nano and add ipex into inference-cpu image

* remove bigdl-nano in docker

* remove bigdl-nano in docker
2023-11-14 10:50:52 +08:00
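
A sketch of the dependency swap in the inference-cpu Dockerfile; only the two package names come from the commit, the pip invocation itself is an assumption:

```dockerfile
# Sketch: the bigdl-nano install line is dropped; IPEX is added instead.
RUN pip install --no-cache-dir intel_extension_for_pytorch
```
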
Shaojun Liu
0e5ab5ebfc
update docker tag to 2.5.0-SNAPSHOT (#9443)
2023-11-13 16:53:40 +08:00
Lilac09
2c2bc959ad
add tools into previously built images (#9317)
* modify Dockerfile

* manually build

* modify Dockerfile

* add chat.py into inference-xpu

* add benchmark into inference-cpu

* manually build

* add benchmark into inference-cpu

* add benchmark into inference-cpu

* add benchmark into inference-cpu

* add chat.py into inference-xpu

* add chat.py into inference-xpu

* change ADD to COPY in dockerfile

* fix dependency issue

* temporarily remove run-spr in llm-cpu

* temporarily remove run-spr in llm-cpu
2023-10-31 16:35:18 +08:00
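
Among the body's items is the ADD-to-COPY switch; COPY is the preferred instruction when no tar extraction or URL fetching is needed. A sketch of the tool-copying lines, with destination paths assumed:

```dockerfile
# Sketch: copy the chat and benchmark tools into the image.
# COPY is preferred over ADD for plain local files.
COPY chat.py /llm/chat.py
COPY benchmark/ /llm/benchmark/
```
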
Guancheng Fu
7f66bc5c14
Fix bigdl-llm-serving-cpu Dockerfile (#9247)
2023-10-23 16:51:30 +08:00
Shaojun Liu
9dc76f19c0
fix hadolint error (#9223)
2023-10-19 16:22:32 +08:00
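
hadolint is the Dockerfile linter in play here; a fix like this is verified by rerunning it against the offending file (path assumed):

```bash
# Sketch: lint a Dockerfile with hadolint.
hadolint docker/llm/serving/cpu/docker/Dockerfile
```
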
Guancheng Fu
df8df751c4
Modify readme for bigdl-llm-serving-cpu (#9105)
2023-10-09 09:56:09 +08:00
ZehuaCao
b773d67dd4
Add Kubernetes support for BigDL-LLM-serving CPU. (#9071)
2023-10-07 09:37:48 +08:00
Guancheng Fu
cc84ed70b3
Create serving images (#9048)
* Finished & Tested

* Install latest pip from base images

* Add blank line

* Delete unused comment

* fix typos
2023-09-25 15:51:45 +08:00
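
Images like these are built and smoke-tested with plain docker commands; a sketch with placeholder tag and context path:

```bash
# Sketch: build and run the CPU serving image (tag and paths are placeholders).
docker build -t intelanalytics/bigdl-llm-serving-cpu:2.4.0-SNAPSHOT \
  -f docker/llm/serving/cpu/docker/Dockerfile .
docker run -itd --net=host intelanalytics/bigdl-llm-serving-cpu:2.4.0-SNAPSHOT
```
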