Commit graph

397 commits

Author SHA1 Message Date
Yang Wang
067c7e8098 Support deepspeed AutoTP (#9230)
* Support deepspeed

* add test script

* refactor convert

* refine example

* refine

* refine example

* fix style

* refine example and adapt to latest ipex

* fix style
2023-10-24 23:46:28 -07:00
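
A minimal sketch of the DeepSpeed AutoTP flow this commit enables, assuming the standard deepspeed.init_inference API; the model id, world-size handling, and exact integration order are illustrative placeholders, not the commit's code:

    import os

    import torch
    import deepspeed
    from transformers import AutoModelForCausalLM

    from bigdl.llm import optimize_model

    # Launched with multiple ranks (e.g. via the `deepspeed` launcher).
    world_size = int(os.environ.get("WORLD_SIZE", "1"))

    # Load the full-precision model first; DeepSpeed shards it below.
    model = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16)

    # AutoTP: with no injection policy and no kernel injection, DeepSpeed
    # tensor-parallelizes the model across the launched ranks automatically.
    model = deepspeed.init_inference(
        model,
        mp_size=world_size,
        dtype=torch.float16,
        replace_with_kernel_inject=False,
    ).module

    # Apply bigdl-llm low-bit optimization to the local shard.
    model = optimize_model(model)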
Yining Wang
a6a8afc47e Add qwen vl CPU example (#9221)
* eee

* add examples on CPU and GPU

* fix

* fix

* optimize model examples

* add Qwen-VL-Chat CPU example

* Add Qwen-VL CPU example

* fix optimize problem

* fix error

* Updated; benchmark fix removed from this PR

* add generate API example

* Change formats in qwen-vl example

* Add CPU transformer int4 example for qwen-vl

* fix repo-id problem and add Readme

* change picture url

* Remove unnecessary file

---------

Co-authored-by: Yuwen Hu <yuwen.hu@intel.com>
2023-10-25 13:22:12 +08:00
binbin Deng
f597a9d4f5 LLM: update perf test configuration (#9264) 2023-10-25 12:35:48 +08:00
binbin Deng
770ac70b00 LLM: add low_bit option in benchmark scripts (#9257) 2023-10-25 10:27:48 +08:00
WeiguangHan
ec9195da42 LLM: using html to visualize the perf result for Arc (#9228)
* LLM: using html to visualize the perf result for Arc

* deploy the html file

* add python license

* resolve some comments
2023-10-24 18:05:25 +08:00
Jin Qiao
90162264a3 LLM: replace torch.float32 with auto type (#9261) 2023-10-24 17:12:13 +08:00
SONG Ge
bd5215d75b [LLM] Reimplement chatglm fuse rms optimization (#9260)
* re-implement chatglm rope rms

* update
2023-10-24 16:35:12 +08:00
dingbaorong
5a2ce421af add cpu and gpu examples of flan-t5 (#9171)
* add cpu and gpu examples of flan-t5

* address yuwen's comments
* Add explanation of why we add modules to not convert
* Refine prompt and add a translation example
* Add an empty line at the end of files

* add examples of flan-t5 using optimize_model api

* address bin's comments

* address binbin's comments

* add flan-t5 in readme
2023-10-24 15:24:01 +08:00
Yining Wang
4a19f50d16 phi-1_5 CPU and GPU examples (#9173)
* eee

* add examples on CPU and GPU

* fix

* fix

* optimize model examples

* have updated

* Warmup and configs added

* Update two tables
2023-10-24 15:08:04 +08:00
SONG Ge
bfc1e2d733 add fused rms optimization for chatglm model (#9256) 2023-10-24 14:40:58 +08:00
Ruonan Wang
b15656229e LLM: fix benchmark issue (#9255) 2023-10-24 14:15:05 +08:00
Guancheng Fu
f37547249d Refine README/CICD (#9253) 2023-10-24 12:56:03 +08:00
binbin Deng
db37edae8a LLM: update langchain api document page (#9222) 2023-10-24 10:13:41 +08:00
Xin Qiu
0c5055d38c add position_ids and fuse embedding for falcon (#9242)
* add position_ids for falcon

* add cpu

* add cpu

* add license
2023-10-24 09:58:20 +08:00
Wang, Jian4
c14a61681b Add load low-bit in model-serving to reduce EPC (#9239)
* init load low-bit

* fix

* fix
2023-10-23 11:28:20 +08:00
Yina Chen
0383306688 Add arc fp8 support (#9232)
* add fp8 support

* add log

* fix style
2023-10-20 17:15:07 +08:00
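
A minimal sketch of the new fp8 option on Arc, assuming bigdl-llm's transformers-style from_pretrained API and an available XPU device; the model id is a placeholder:

    import intel_extension_for_pytorch as ipex  # noqa: F401, enables the "xpu" device
    from bigdl.llm.transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Llama-2-7b-hf",   # placeholder model id
        load_in_low_bit="fp8",        # the fp8 weight format added in #9232
        trust_remote_code=True)
    model = model.to("xpu")           # run on the Arc GPU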
Yang Wang
118249b011 support transformers 4.34+ for llama (#9229) 2023-10-19 22:36:30 -07:00
Chen, Zhentao
5850241423 correct Readme GPU example and API docstring (#9225)
* update readme to correct GPU usage

* update from_pretrained supported low bit options

* fix style check
2023-10-19 16:08:47 +08:00
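
For reference, a sketch of the low-bit options the updated docstring covers, assuming the option names from bigdl-llm's documentation of this period; the model path is a placeholder:

    from bigdl.llm.transformers import AutoModelForCausalLM

    # load_in_4bit=True is shorthand for load_in_low_bit="sym_int4";
    # load_in_low_bit also accepts e.g. "asym_int4", "sym_int5",
    # "asym_int5", "sym_int8" and "nf4".
    model = AutoModelForCausalLM.from_pretrained(
        "your-model-repo-or-path",    # placeholder
        load_in_low_bit="sym_int8",
        trust_remote_code=True)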
WeiguangHan
f87f67ee1c LLM: arc perf test for some popular models (#9188) 2023-10-19 15:56:15 +08:00
Yang Wang
b0ddde0410 Fix removing convert dtype bug (#9216)
* Fix removing convert dtype bug

* fix style
2023-10-18 11:24:22 -07:00
Ruonan Wang
942d6418e7 LLM: fix chatglm kv cache (#9215) 2023-10-18 19:09:53 +08:00
SONG Ge
0765f94770 [LLM] Optimize kv_cache for mistral model family (#9189)
* add kv_cache optimization for mistral model

* kv_cache optimize for mistral

* update style

* update
2023-10-18 15:13:37 +08:00
Ruonan Wang
3555ebc148 LLM: fix wrong length in gptj kv_cache optimization (#9210)
* fix wrong length in gptj kv cache

* update
2023-10-18 14:59:02 +08:00
Shengsheng Huang
6dad8d16df optimize NormHead for Baichuan2 (#9205)
* optimize NormHead for Baichuan2

* fix ut and change name

* rename functions
2023-10-18 14:05:07 +08:00
Jin Qiao
a3b664ed03 LLM: add GPU More-Data-Types and Save/Load example (#9199) 2023-10-18 13:13:45 +08:00
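
A minimal sketch of the save/load flow such examples demonstrate, assuming the save_low_bit/load_low_bit API from bigdl-llm's examples; paths are placeholders:

    from bigdl.llm.transformers import AutoModelForCausalLM

    # Quantize once at first load...
    model = AutoModelForCausalLM.from_pretrained(
        "your-model-repo-or-path",    # placeholder
        load_in_4bit=True,
        trust_remote_code=True)

    # ...persist the already-quantized weights...
    model.save_low_bit("./model-sym_int4")

    # ...and reload them directly in low-bit form on later runs,
    # skipping re-conversion and lowering peak memory.
    model = AutoModelForCausalLM.load_low_bit(
        "./model-sym_int4", trust_remote_code=True)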
WeiguangHan
b9194c5786 LLM: skip some model tests using certain api (#9163)
* LLM: Skip some model tests using certain api

* initialize variable named result
2023-10-18 09:39:27 +08:00
Ruonan Wang
09815f7064 LLM: fix RMSNorm optimization of Baichuan2-13B/Baichuan-13B (#9204)
* fix rmsnorm of baichuan2-13B

* update baichuan1-13B too

* fix style
2023-10-17 18:40:34 +08:00
Jin Qiao
d7ce78edf0 LLM: fix portable zip README image link (#9201)
* LLM: fix portable zip readme img link

* LLM: make README first image center align
2023-10-17 16:38:22 +08:00
Cheen Hau, 俊豪
66c2e45634 Add unit tests for optimized model correctness (#9151)
* Add test to check correctness of optimized model

* Refactor optimized model test

* Use models in llm-unit-test

* Use AutoTokenizer for bloom

* Print out each passed test

* Remove unused tokenizer from import
2023-10-17 14:46:41 +08:00
Jin Qiao
d946bd7c55 LLM: add CPU More-Data-Types and Save-Load examples (#9179) 2023-10-17 14:38:52 +08:00
Ruonan Wang
c0497ab41b LLM: support kv_cache optimization for Qwen-VL-Chat (#9193)
* support qwen_vl_chat

* fix style
2023-10-17 13:33:56 +08:00
binbin Deng
1cd9ab15b8 LLM: fix ChatGLMConfig check (#9191) 2023-10-17 11:52:56 +08:00
Yang Wang
7160afd4d1 Support XPU DDP training and autocast for LowBitMatmul (#9167)
* support autocast in low bit matmul

* Support XPU DDP training

* fix amp
2023-10-16 20:47:19 -07:00
Ruonan Wang
77afb8796b LLM: fix convert of chatglm (#9190) 2023-10-17 10:48:13 +08:00
dingbaorong
af3b575c7e expose modules_to_not_convert in optimize_model (#9180)
* expose modules_to_not_convert in optimize_model

* some fixes
2023-10-17 09:50:26 +08:00
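
A minimal sketch of the modules_to_not_convert argument this commit exposes, assuming optimize_model's keyword interface; the choice of lm_head is illustrative (an accuracy-sensitive layer, as discussed for flan-t5 in #9171), not mandated by the commit:

    from transformers import AutoModelForSeq2SeqLM

    from bigdl.llm import optimize_model

    model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")

    # Quantize to 4-bit, but keep the listed submodules in full precision.
    model = optimize_model(model, low_bit="sym_int4",
                           modules_to_not_convert=["lm_head"])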
Cengguang Zhang
5ca8a851e9 LLM: add fuse optimization for Mistral. (#9184)
* add fuse optimization for mistral.

* fix.

* fix

* fix style.

* fix.

* fix error.

* fix style.

* fix style.
2023-10-16 16:50:31 +08:00
Jiao Wang
49e1381c7f update rope (#9155) 2023-10-15 21:51:45 -07:00
Jason Dai
b192a8032c Update llm-readme (#9176) 2023-10-16 10:54:52 +08:00
binbin Deng
a164c24746 LLM: add kv_cache optimization for chatglm2-6b-32k (#9165) 2023-10-16 10:43:15 +08:00
Yang Wang
7a2de00b48 Fixes for xpu Bf16 training (#9156)
* Support bf16 training

* Use a stable transformer version

* remove env

* fix style
2023-10-14 21:28:59 -07:00
Cengguang Zhang
51a133de56 LLM: add fuse rope and norm optimization for Baichuan. (#9166)
* add fuse rope optimization.

* add rms norm optimization.
2023-10-13 17:36:52 +08:00
Jin Qiao
db7f938fdc LLM: add replit and starcoder to gpu pytorch model example (#9154) 2023-10-13 15:44:17 +08:00
Jin Qiao
797b156a0d LLM: add dolly-v1 and dolly-v2 to gpu pytorch model example (#9153) 2023-10-13 15:43:35 +08:00
Yishuo Wang
259cbb4126 [LLM] add initial bigdl-llm-init (#9150) 2023-10-13 15:31:45 +08:00
Cengguang Zhang
433f408081 LLM: Add fuse rope and norm optimization for Aquila. (#9161)
* add fuse norm optimization.

* add fuse rope optimization
2023-10-13 14:18:37 +08:00
SONG Ge
e7aa67e141 [LLM] Add rope optimization for internlm (#9159)
* add rope and norm optimization for internlm and gptneox

* revert gptneox back and split with PR #9155

* add norm_forward

* style fix

* update

* update
2023-10-13 14:18:28 +08:00
Jin Qiao
f754ab3e60 LLM: add baichuan and baichuan2 to gpu pytorch model example (#9152) 2023-10-13 13:44:31 +08:00
Ruonan Wang
b8aee7bb1b LLM: Fix Qwen kv_cache optimization (#9148)
* first commit

* ut pass

* accelerate rotate half by using common util function

* fix style
2023-10-12 15:49:42 +08:00
binbin Deng
69942d3826 LLM: fix model check before attention optimization (#9149) 2023-10-12 15:21:51 +08:00
JIN Qiao
1a1ddc4144 LLM: Add Replit CPU and GPU example (#9028) 2023-10-12 13:42:14 +08:00