Commit graph

1058 commits

Author SHA1 Message Date
Ruonan Wang
28c315a5b9 LLM: fix deepspeed error of finetuning on xpu (#10484) 2024-03-21 09:46:25 +08:00
Kai Huang
021d77fd22 Remove softmax upcast fp32 in llama (#10481)
* update

* fix style
2024-03-20 18:17:34 +08:00
Yishuo Wang
cfdf8ad496 Fix modules_not_to_convert argument (#10483) 2024-03-20 17:47:03 +08:00
Xiangyu Tian
cbe24cc7e6 LLM: Enable BigDL IPEX Int8 (#10480)
Enable BigDL IPEX Int8
2024-03-20 15:59:54 +08:00
ZehuaCao
1d062e24db Update serving doc (#10475)
* update serving doc

* add tob

* update

* update

* update

* update vllm worker
2024-03-20 14:44:43 +08:00
Cengguang Zhang
4581e4f17f LLM: fix whisper model missing config. (#10473)
* fix whisper model missing config.

* fix style.

* fix style.

* style.
2024-03-20 14:22:37 +08:00
Jin Qiao
e41d556436 LLM: change fp16 benchmark to model.half (#10477)
* LLM: change fp16 benchmark to model.half

* fix
2024-03-20 13:38:39 +08:00
Yishuo Wang
749bedaf1e fix rwkv v5 fp16 (#10474) 2024-03-20 13:15:08 +08:00
Yuwen Hu
72bcc27da9 [LLM] Add TransformersBgeEmbeddings class in bigdl.llm.langchain.embeddings (#10459)
* Add TransformersBgeEmbeddings class in bigdl.llm.langchain.embeddings

* Small fixes
2024-03-19 18:04:35 +08:00
Cengguang Zhang
463a86cd5d LLM: fix qwen-vl interpolation gpu abnormal results. (#10457)
* fix qwen-vl interpolation gpu abnormal results.

* fix style.

* update qwen-vl gpu example.

* fix comment and update example.

* fix style.
2024-03-19 16:59:39 +08:00
Jin Qiao
e9055c32f9 LLM: fix fp16 mem record in benchmark (#10461)
* LLM: fix fp16 mem record in benchmark

* change style
2024-03-19 16:17:23 +08:00
Jiao Wang
f3fefdc9ce fix pad_token_id issue (#10425) 2024-03-18 23:30:28 -07:00
Yuxuan Xia
74e7490fda Fix Baichuan2 prompt format (#10334)
* Fix Baichuan2 prompt format

* Fix Baichuan2 README

* Change baichuan2 prompt info

* Change baichuan2 prompt info
2024-03-19 12:48:07 +08:00
Jin Qiao
0451103a43 LLM: add int4+fp16 benchmark script for windows benchmarking (#10449)
* LLM: add fp16 for benchmark script

* remove transformer_int4_fp16_loadlowbit_gpu_win
2024-03-19 11:11:25 +08:00
Xin Qiu
bbd749dceb qwen2 fp8 cache (#10446)
* qwen2 fp8 cache

* fix style check
2024-03-19 08:32:39 +08:00
Yang Wang
9e763b049c Support running pipeline parallel inference by vertically partitioning model to different devices (#10392)
* support pipeline parallel inference

* fix logging

* remove benchmark file

* fix

* need to warmup twice

* support qwen and qwen2

* fix lint

* remove genxir

* refine
2024-03-18 13:04:45 -07:00
Ruonan Wang
66b4bb5c5d LLM: update setup to provide cpp for windows (#10448) 2024-03-18 18:20:55 +08:00
Xiangyu Tian
dbdeaddd6a LLM: Fix log condition for BIGDL_OPT_IPEX (#10441)
remove log for BIGDL_OPT_IPEX
2024-03-18 16:03:51 +08:00
Wang, Jian4
1de13ea578 LLM: remove CPU english_quotes dataset and update docker example (#10399)
* update dataset

* update readme

* update docker cpu

* update xpu docker
2024-03-18 10:45:14 +08:00
Xin Qiu
399843faf0 Baichuan 7b fp16 sdp and qwen2 pvc sdp (#10435)
* add baichuan sdp

* update

* baichuan2

* fix

* fix style

* revert 13b

* revert
2024-03-18 10:15:34 +08:00
Jiao Wang
5ab52ef5b5 update (#10424) 2024-03-15 09:24:26 -07:00
Yishuo Wang
bd64488b2a add mask support for llama/chatglm fp8 sdp (#10433)
* add mask support for fp8 sdp

* fix chatglm2 dtype

* update
2024-03-15 17:36:52 +08:00
Keyan (Kyrie) Zhang
444b11af22 Add LangChain upstream ut test for ipynb (#10387)
* Add LangChain upstream ut test for ipynb

* Integrate unit test for LangChain upstream ut and ipynb into one file

* Modify file name

* Remove LangChain version update in unit test

* Move Langchain upstream ut job to arc

* Modify path in .yml file

* Modify path in llm_unit_tests.yml

* Avoid create directory repeatedly
2024-03-15 16:31:01 +08:00
Jin Qiao
ca372f6dab LLM: add save/load example for ModelScope (#10397)
* LLM: add save/load example for modelscope

* fix according to comments

* move file
2024-03-15 15:17:50 +08:00
Xin Qiu
24473e331a Qwen2 fp16 sdp (#10427)
* qwen2 sdp and refine

* update

* update

* fix style

* remove use_flash_attention
2024-03-15 13:12:03 +08:00
Kai Huang
1315150e64 Add baichuan2-13b 1k to arc nightly perf (#10406) 2024-03-15 10:29:11 +08:00
Ruonan Wang
b036205be2 LLM: add fp8 sdp for chatglm2/3 (#10411)
* add fp8 sdp for chatglm2

* fix style
2024-03-15 09:38:18 +08:00
Wang, Jian4
fe8976a00f LLM: Support gguf models using low_bit and fix missing json (#10408)
* support other models using low_bit

* update readme

* update to add *.json
2024-03-15 09:34:18 +08:00
Xin Qiu
cda38f85a9 Qwen fp16 sdp (#10401)
* qwen sdp

* fix

* update

* update

* update sdp

* update

* fix style check

* add to origin type
2024-03-15 08:51:50 +08:00
dingbaorong
1c0f7ed3fa add xpu support (#10419) 2024-03-14 17:13:48 +08:00
Heyang Sun
7d29765092 refactor qwen2 forward to enable XPU (#10409)
* refactor qwen2 forward to enable XPU

* Update qwen2.py
2024-03-14 11:03:05 +08:00
Yuxuan Xia
f36224aac4 Fix ceval run.sh (#10410) 2024-03-14 10:57:25 +08:00
ZehuaCao
f66329e35d Fix multiple get_enable_ipex function error (#10400)
* fix multiple get_enable_ipex function error

* remove get_enable_ipex_low_bit function
2024-03-14 10:14:13 +08:00
Kai Huang
76e30d8ec8 Empty cache for lm_head (#10317)
* empty cache

* add comments
2024-03-13 20:31:53 +08:00
Ruonan Wang
2be8bbd236 LLM: add cpp option in setup.py (#10403)
* add llama_cpp option

* meet code review
2024-03-13 20:12:59 +08:00
Ovo233
0dbce53464 LLM: Add decoder/layernorm unit tests (#10211)
* add decoder/layernorm unit tests

* update tests

* delete decoder tests

* address comments

* remove none type check

* restore nonetype checks

* delete nonetype checks; add decoder tests for Llama

* add gc

* deal with tuple output
2024-03-13 19:41:47 +08:00
Yishuo Wang
06a851afa9 support new baichuan model (#10404) 2024-03-13 17:45:50 +08:00
Yuxuan Xia
a90e9b6ec2 Fix C-Eval Workflow (#10359)
* Fix Baichuan2 prompt format

* Fix ceval workflow errors

* Fix ceval workflow error

* Fix ceval error

* Fix ceval error

* Test ceval

* Fix ceval

* Fix ceval

* Fix ceval

* Fix ceval

* Fix ceval

* Fix ceval

* Fix ceval

* Fix ceval

* Fix ceval

* Fix ceval

* Fix ceval

* Fix ceval

* Add ceval dependency test

* Fix ceval

* Fix ceval

* Test full ceval

* Test full ceval

* Fix ceval

* Fix ceval
2024-03-13 17:23:17 +08:00
Yishuo Wang
b268baafd6 use fp8 sdp in llama (#10396) 2024-03-13 16:45:38 +08:00
Xiangyu Tian
60043a3ae8 LLM: Support Baichuan2-13b in BigDL-vLLM (#10398)
Support Baichuan2-13b in BigDL-vLLM.
2024-03-13 16:21:06 +08:00
Xiangyu Tian
e10de2c42d [Fix] LLM: Fix condition check error for speculative decoding on CPU (#10402)
Fix condition check error for speculative decoding on CPU
2024-03-13 16:05:06 +08:00
Keyan (Kyrie) Zhang
f158b49835 [LLM] Recover arc ut test for Falcon (#10385) 2024-03-13 13:31:35 +08:00
Heyang Sun
d72c0fad0d Qwen2 SDPA forward on CPU (#10395)
* Fix Qwen1.5 CPU forward

* Update convert.py

* Update qwen2.py
2024-03-13 13:10:03 +08:00
Yishuo Wang
ca58a69b97 fix arc rms norm UT (#10394) 2024-03-13 13:09:15 +08:00
Wang, Jian4
0193f29411 LLM: Enable gguf float16 and Yuan2 model (#10372)
* enable float16

* add yuan files

* enable yuan

* enable set low_bit on yuan2

* update

* update license

* update generate

* update readme

* update python style

* update
2024-03-13 10:19:18 +08:00
Yina Chen
f5d65203c0 First token lm_head optimization (#10318)
* add lm head linear

* update

* address comments and fix style

* address comment
2024-03-13 10:11:32 +08:00
Keyan (Kyrie) Zhang
7cf01e6ec8 Add LangChain upstream ut test (#10349)
* Add LangChain upstream ut test

* Add LangChain upstream ut test

* Specify version numbers in yml script

* Correct langchain-community version
2024-03-13 09:52:45 +08:00
Xin Qiu
28c4a8cf5c Qwen fused qkv (#10368)
* fused qkv + rope for qwen

* quantized kv cache

* fix

* update qwen

* fixed quantized qkv

* fix

* meet code review

* update split

* convert.py

* extend when not enough kv

* fix
2024-03-12 17:39:00 +08:00
Yishuo Wang
741c2bf1df use new rms norm (#10384) 2024-03-12 17:29:51 +08:00
Xiangyu Tian
0ded0b4b13 LLM: Enable BigDL IPEX optimization for int4 (#10319)
Enable BigDL IPEX optimization for int4
2024-03-12 17:08:50 +08:00