Commit graph

1635 commits

Lilac09
74a8ad32dc Add entry point to llm-serving-xpu (#9339)
* add entry point to llm-serving-xpu

* manually build

* manually build

* add entry point to llm-serving-xpu

* manually build

* add entry point to llm-serving-xpu

* add entry point to llm-serving-xpu

* add entry point to llm-serving-xpu
2023-11-02 16:31:07 +08:00
Ziteng Zhang
4df66f5cbc Update llm-finetune-lora-cpu dockerfile and readme
* Update README.md

* Update Dockerfile
2023-11-02 16:26:24 +08:00
dingbaorong
2e3bfbfe1f Add internlm_xcomposer cpu examples (#9337)
* add internlm-xcomposer cpu examples

* use chat

* some fixes

* add license

* address shengsheng's comments

* use demo.jpg
2023-11-02 15:50:02 +08:00
Jin Qiao
97a38958bd LLM: add CodeLlama CPU and GPU examples (#9338)
* LLM: add codellama CPU pytorch examples

* LLM: add codellama CPU transformers examples

* LLM: add codellama GPU transformers examples

* LLM: add codellama GPU pytorch examples

* LLM: add codellama in readme

* LLM: add LLaVA link
2023-11-02 15:34:25 +08:00
Chen, Zhentao
d4dffbdb62 Merge harness (#9319)
* add harness patch and llb script

* add readme

* add license

* use patch instead

* update readme

* rename tests to evaluation

* fix typo

* remove nano dependency

* add original harness link

* rename title of usage

* rename BigDLGPULM as BigDLLM

* empty commit to rerun job
2023-11-02 15:14:19 +08:00
Zheng, Yi
63b2556ce2 Add cpu examples of skywork (#9340) 2023-11-02 15:10:45 +08:00
dingbaorong
f855a864ef add llava gpu example (#9324)
* add llava gpu example

* use 7b model

* fix typo

* add in README
2023-11-02 14:48:29 +08:00
Ziteng Zhang
dd3cf2f153 LLM: Add python 3.10 & 3.11 UT
2023-11-02 14:09:29 +08:00
Wang, Jian4
149146004f LLM: Add qlora finetuning CPU example (#9275)
* add qlora finetuning example

* update readme

* update example

* remove merge.py and update readme
2023-11-02 09:45:42 +08:00
Jasonzzt
d1bdc0ef72 spr & arc ut with python 3.9 & 3.10 & 3.11 2023-11-01 22:57:48 +08:00
Jasonzzt
687da21467 test 3.11 2023-11-01 19:14:53 +08:00
WeiguangHan
9722e811be LLM: add more models to the arc perf test (#9297)
* LLM: add more models to the arc perf test

* remove some old models

* install some dependencies
2023-11-01 16:56:32 +08:00
Jasonzzt
3c3329010d add conda update -n base conda 2023-11-01 16:36:35 +08:00
Jasonzzt
2fff0e8c21 use runner avx2 with linux 2023-11-01 16:28:29 +08:00
Jasonzzt
964a8e6dc1 update conda 2023-11-01 16:20:19 +08:00
Jin Qiao
6a128aee32 LLM: add ui for portable-zip (#9262) 2023-11-01 15:36:59 +08:00
Jasonzzt
cb7ef38e86 rerun 2023-11-01 15:30:34 +08:00
Jasonzzt
8f6e979fad test again 2023-11-01 15:10:11 +08:00
Jasonzzt
b66584f23b test 2023-11-01 14:51:23 +08:00
Jasonzzt
ba148ff3ff test py311 2023-11-01 14:08:49 +08:00
Yishuo Wang
726203d778 [LLM] Replace Embedding layer to fix it on CPU (#9254) 2023-11-01 13:58:10 +08:00
Jasonzzt
6f1cee90a4 test 2023-11-01 13:58:03 +08:00
Jasonzzt
d51821e264 test 2023-11-01 13:49:32 +08:00
Jasonzzt
7c7a7f2ec1 spr & arc ut with python 3.9 & 3.10 & 3.11 2023-11-01 13:17:13 +08:00
Yang Wang
e1bc18f8eb fix import ipex problem (#9323)
* fix import ipex problem

* fix style
2023-10-31 20:31:34 -07:00
Cengguang Zhang
9f3d4676c6 LLM: Add qwen-vl gpu example (#9290)
* create qwen-vl gpu example.

* add readme.

* fix.

* change input figure and update outputs.

* add qwen-vl pytorch model gpu example.

* fix.

* add readme.
2023-11-01 11:01:39 +08:00
Ruonan Wang
7e73c354a6 LLM: decoupling bigdl-llm and bigdl-nano (#9306) 2023-11-01 11:00:54 +08:00
Yina Chen
2262ae4d13 Support MoFQ4 on arc (#9301)
* init

* update

* fix style

* fix style

* fix style

* meet comments
2023-11-01 10:59:46 +08:00
Jasonzzt
4f9fd0dffd arc-ut with 3.10 & 3.11 2023-11-01 10:51:57 +08:00
binbin Deng
8ef8e25178 LLM: improve response speed in multi-turn chat (#9299)
* update

* fix stop word and add chatglm2 support

* remove system prompt
2023-11-01 10:30:44 +08:00
Cengguang Zhang
d4ab5904ef LLM: Add python 3.10 llm UT (#9302)
* add py310 test for llm-unit-test.

* add py310 llm-unit-tests

* add llm-cpp-build-py310

* test

* test

* test.

* test

* test

* fix deactivate.

* fix

* fix.

* fix

* test

* test

* test

* add build chatglm for win.

* test.

* fix
2023-11-01 10:15:32 +08:00
WeiguangHan
03aa368776 LLM: add the comparison between latest arc perf test and last one (#9296)
* add the comparison between latest test and last one to html

* resolve some comments

* modify some code logics
2023-11-01 09:53:02 +08:00
Jin Qiao
96f8158fe2 LLM: adjust dolly v2 GPU example README (#9318) 2023-11-01 09:50:22 +08:00
Jin Qiao
c44c6dc43a LLM: add chatglm3 examples (#9305) 2023-11-01 09:50:05 +08:00
Xin Qiu
06447a3ef6 add malloc and intel openmp to llm deps (#9322) 2023-11-01 09:47:45 +08:00
Cheen Hau, 俊豪
d638b93dfe Add test script and workflow for qlora fine-tuning (#9295)
* Add test script and workflow for qlora fine-tuning

* Test fix export model

* Download dataset

* Fix export model issue

* Reduce number of training steps

* Rename script

* Correction
2023-11-01 09:39:53 +08:00
Lilac09
2c2bc959ad add tools into previously built images (#9317)
* modify Dockerfile

* manually build

* modify Dockerfile

* add chat.py into inference-xpu

* add benchmark into inference-cpu

* manually build

* add benchmark into inference-cpu

* add benchmark into inference-cpu

* add benchmark into inference-cpu

* add chat.py into inference-xpu

* add chat.py into inference-xpu

* change ADD to COPY in dockerfile

* fix dependency issue

* temporarily remove run-spr in llm-cpu

* temporarily remove run-spr in llm-cpu
2023-10-31 16:35:18 +08:00
Ruonan Wang
d383ee8efb LLM: update QLoRA example about accelerate version(#9314) 2023-10-31 13:54:38 +08:00
Lilac09
030edeecac Ubuntu upgrade: fix installation error (#9309)
* upgrade ubuntu version in llm-inference cpu image

* fix installation issue

* fix installation issue

* fix installation issue
2023-10-31 09:55:15 +08:00
Lilac09
5842f7530e upgrade ubuntu version in llm-inference cpu image (#9307) 2023-10-30 16:51:38 +08:00
Cheen Hau, 俊豪
cee9eaf542 [LLM] Fix llm arc ut oom (#9300)
* Move model to cpu after testing so that gpu memory is deallocated

* Add code comment

---------

Co-authored-by: sgwhat <ge.song@intel.com>
2023-10-30 14:38:34 +08:00
dingbaorong
ee5becdd61 use coco image in Qwen-VL (#9298)
* use coco image

* add output

* address yuwen's comments
2023-10-30 14:32:35 +08:00
Yang Wang
163d033616 Support qlora in CPU (#9233)
* support qlora in CPU

* revert example

* fix style
2023-10-27 14:01:15 -07:00
Yang Wang
8838707009 Add deepspeed autotp example readme (#9289)
* Add deepspeed autotp example readme

* change word
2023-10-27 13:04:38 -07:00
dingbaorong
f053688cad add cpu example of LLaVA (#9269)
* add LLaVA cpu example

* Small text updates

* update link

---------

Co-authored-by: Yuwen Hu <yuwen.hu@intel.com>
2023-10-27 18:59:20 +08:00
Zheng, Yi
7f2ad182fd Minor Fixes of README (#9294) 2023-10-27 18:25:46 +08:00
Zheng, Yi
1bff54a378 Display demo.jpg in the README.md of HuggingFace Transformers Agent (#9293)
* Display demo.jpg

* remove demo.jpg
2023-10-27 18:00:03 +08:00
Zheng, Yi
a4a1dec064 Add a cpu example of HuggingFace Transformers Agent (use vicuna-7b-v1.5) (#9284)
* Add examples of HF Agent

* Modify folder structure and add link of demo.jpg

* Fixes of readme

* Merge applications and Applications
2023-10-27 17:14:12 +08:00
Guoqiong Song
aa319de5e8 Add streaming-llm using llama2 on CPU (#9265)
Enable streaming-llm to let the model take infinite inputs; tested on desktop and SPR10
2023-10-27 01:30:39 -07:00
Yuwen Hu
21631209a9 [LLM] Skip CPU performance test for now (#9291)
* Skip llm cpu performance test for now

* Add install for wheel package
2023-10-27 12:55:04 +08:00