Shaojun Liu
29bf28bd6f
Upgrade python to 3.11 in Docker Image ( #10718 )
* install python 3.11 for cpu-inference docker image
* update xpu-inference dockerfile
* update cpu-serving image
* update qlora image
* update lora image
* update document
2024-04-10 14:41:27 +08:00
Shaojun Liu
59058bb206
replace 2.5.0-SNAPSHOT with 2.1.0-SNAPSHOT for llm docker images ( #10603 )
2024-04-01 09:58:51 +08:00
ZehuaCao
52a2135d83
Replace ipex with ipex-llm ( #10554 )
* fix ipex with ipex_llm
* fix ipex with ipex_llm
* update
* update
* update
* update
* update
* update
* update
* update
2024-03-28 13:54:40 +08:00
Wang, Jian4
e2d25de17d
Update docker by heyang ( #29 )
2024-03-25 10:05:46 +08:00
Wang, Jian4
9df70d95eb
Refactor bigdl.llm to ipex_llm ( #24 )
* Rename bigdl/llm to ipex_llm
* rm python/llm/src/bigdl
* from bigdl.llm to from ipex_llm
2024-03-22 15:41:21 +08:00
Shaojun Liu
0e388f4b91
Fix Trivy Docker Image Vulnerabilities for BigDL Release 2.5.0 ( #10447 )
* Update pypi version to fix trivy issues
* refine
2024-03-19 14:52:15 +08:00
ZehuaCao
267de7abc3
fix fschat DEP version error ( #10325 )
2024-03-06 16:15:27 +08:00
ZehuaCao
51aa8b62b2
add gradio_web_ui to llm-serving image ( #9918 )
2024-01-25 11:11:39 +08:00
Lilac09
de27ddd81a
Update Dockerfile ( #9981 )
2024-01-24 11:10:06 +08:00
Lilac09
a2718038f7
Fix qwen model adapter in docker ( #9969 )
* fix qwen in docker
* add patch for model_adapter.py in fastchat
* add patch for model_adapter.py in fastchat
2024-01-24 11:01:29 +08:00
Lilac09
052962dfa5
Using original fastchat and add bigdl worker in docker image ( #9967 )
* add vllm worker
* add options in entrypoint
2024-01-23 14:17:05 +08:00
ZehuaCao
05ea0ecd70
add pv for llm-serving k8s deployment ( #9906 )
2024-01-16 11:32:54 +08:00
Lilac09
2554ba0913
Add usage of vllm ( #9564 )
* add usage of vllm
* add usage of vllm
* add usage of vllm
* add usage of vllm
* add usage of vllm
* add usage of vllm
2023-11-30 14:19:23 +08:00
Lilac09
557bb6bbdb
add judgement for running serve ( #9555 )
2023-11-29 16:57:00 +08:00
Guancheng Fu
2b200bf2f2
Add vllm_worker related arguments in docker serving image's entrypoint ( #9500 )
* fix entrypoint
* fix missing long mode argument
2023-11-21 14:41:06 +08:00
Lilac09
566ec85113
add stream interval option to entrypoint ( #9498 )
2023-11-21 09:47:32 +08:00
Lilac09
13f6eb77b4
Add exec bash to entrypoint.sh to keep container running after being booted. ( #9471 )
* add bigdl-llm-init
* boot bash
2023-11-15 16:09:16 +08:00
Lilac09
24146d108f
add bigdl-llm-init ( #9468 )
2023-11-15 14:55:33 +08:00
Lilac09
b2b085550b
Remove bigdl-nano and add ipex into inference-cpu image ( #9452 )
* remove bigdl-nano and add ipex into inference-cpu image
* remove bigdl-nano in docker
* remove bigdl-nano in docker
2023-11-14 10:50:52 +08:00
Shaojun Liu
0e5ab5ebfc
update docker tag to 2.5.0-SNAPSHOT ( #9443 )
2023-11-13 16:53:40 +08:00
Lilac09
74a8ad32dc
Add entry point to llm-serving-xpu ( #9339 )
* add entry point to llm-serving-xpu
* manually build
* manually build
* add entry point to llm-serving-xpu
* manually build
* add entry point to llm-serving-xpu
* add entry point to llm-serving-xpu
* add entry point to llm-serving-xpu
2023-11-02 16:31:07 +08:00
Lilac09
2c2bc959ad
add tools into previously built images ( #9317 )
* modify Dockerfile
* manually build
* modify Dockerfile
* add chat.py into inference-xpu
* add benchmark into inference-cpu
* manually build
* add benchmark into inference-cpu
* add benchmark into inference-cpu
* add benchmark into inference-cpu
* add chat.py into inference-xpu
* add chat.py into inference-xpu
* change ADD to COPY in dockerfile
* fix dependency issue
* temporarily remove run-spr in llm-cpu
* temporarily remove run-spr in llm-cpu
2023-10-31 16:35:18 +08:00
Guancheng Fu
7f66bc5c14
Fix bigdl-llm-serving-cpu Dockerfile ( #9247 )
2023-10-23 16:51:30 +08:00
Shaojun Liu
9dc76f19c0
fix hadolint error ( #9223 )
2023-10-19 16:22:32 +08:00
Guancheng Fu
df8df751c4
Modify readme for bigdl-llm-serving-cpu ( #9105 )
2023-10-09 09:56:09 +08:00
ZehuaCao
b773d67dd4
Add Kubernetes support for BigDL-LLM-serving CPU. ( #9071 )
2023-10-07 09:37:48 +08:00
Guancheng Fu
cc84ed70b3
Create serving images ( #9048 )
* Finished & Tested
* Install latest pip from base images
* Add blank line
* Delete unused comment
* fix typos
2023-09-25 15:51:45 +08:00