Commit graph

1343 commits

Author SHA1 Message Date
Yishuo Wang
453df868c9 add rwkv v5 attention kernel (#9927) 2024-01-18 10:16:29 +08:00
Ruonan Wang
054952f82f LLM: Fix rope of chatglm3 to support speculative decoding on CPU (#9926) 2024-01-18 09:28:10 +08:00
Ziteng Zhang
18cd1f1432 [LLM] Solve the problem of calling bmm operator in BF16Linear (#9924)
* Solve the problem of calling bmm operator in BF16Linear
2024-01-17 18:08:35 +08:00
Yina Chen
98b86f83d4 Support fast rope for training (#9745)
* init

* init

* fix style

* add test and fix

* address comment

* update

* merge upstream main
2024-01-17 15:51:38 +08:00
Yuwen Hu
0c498a7b64 Add llama2-13b to igpu perf test (#9920) 2024-01-17 14:58:45 +08:00
Ruonan Wang
b059a32fff LLM: add benchmark api for bigdl-llm fp16 on GPU (#9919)
* add bmk for bigdl fp16

* fix
2024-01-17 14:24:35 +08:00
Ruonan Wang
427f75000b LLM: fix sdp of chatglm3 (#9917)
* fix

* fix

* fix
2024-01-17 13:37:28 +08:00
Yishuo Wang
94767da7cf optimize rwkv v4 first token performance (#9912) 2024-01-17 09:27:41 +08:00
Cengguang Zhang
511cbcf773 LLM: add Ceval benchmark test. (#9872)
* init ceval benchmark test.

* upload dataset.

* add other tests.

* add qwen evaluator.

* fix qwen evaluator style.

* fix qwen evaluator style.

* update qwen evaluator.

* add llama evaluator.

* update eval

* fix typo.

* fix

* fix typo.

* fix llama evaluator.

* fix bug.

* fix style.

* delete dataset.

* fix style.

* fix style.

* add README.md and fix typo.

* fix comments.

* remove run scripts
2024-01-16 19:14:26 +08:00
Shaojun Liu
b909c5c9c2 GGUF load memory optimization (#9913)
* block-wise

* convert linear for module

* revert

* Fix PEP8 checks Error
2024-01-16 18:54:39 +08:00
Yuwen Hu
8643b62521 [LLM] Support longer context in iGPU perf tests (2048-256) (#9910) 2024-01-16 17:48:37 +08:00
Xin Qiu
dee32f7d15 copy fused rms norm's result to avoid <unk> (#9909) 2024-01-16 16:54:08 +08:00
Ruonan Wang
8d7326ae03 LLM: fix chatglm3 sdp to support speculative decoding (#9900)
* fix chatglm3

* fix

* update

* meet code review

* fix
2024-01-16 11:29:13 +08:00
Guancheng Fu
9f34da7cdb Update PVC XMX condition (#9901)
* update pvc xmx condition

* update condition

* update condition
2024-01-15 15:42:15 +08:00
Yishuo Wang
6637860ddf change xmx condition (#9896) 2024-01-12 19:51:48 +08:00
WeiguangHan
0e69bfe6b0 LLM: fix the performance drop of starcoder (#9889)
* LLM: fix the performance drop of starcoder

* small fix

* small fix
2024-01-12 09:14:15 +08:00
Ruonan Wang
d9cf55bce9 LLM: fix MLP check of mixtral (#9891) 2024-01-11 18:01:59 +08:00
Ziteng Zhang
4f4ce73f31 [LLM] Add transformer_autocast_bf16 into all-in-one (#9890)
* Add transformer_autocast_bf16 into all-in-one
2024-01-11 17:51:07 +08:00
Ziteng Zhang
4af88a67b9 support chatglm3 with bf16 (#9888)
* support chatglm3 with bigdl-bf16
2024-01-11 16:45:21 +08:00
Yuwen Hu
0aef35a965 [LLM] Improve LLM doc regarding windows gpu related info (#9880)
* Improve runtime configuration for windows

* Add python 310/311 supports for wheel downloading

* Add troubleshooting for windows gpu

* Remove manual ipex import due to auto importer

* Add info regarding cpu_embedding=True on iGPU

* More info for Windows users

* Small updates to API docs

* Python style fix

* Remove tip for loading from saved optimize_model for now

* Updated based on comments

* Update win info for multi-intel gpus selection

* Small fix

* Small fix
2024-01-11 14:37:16 +08:00
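One bullet above documents cpu_embedding=True for iGPU inference; a minimal sketch of how the documented flag is typically passed (the model id and surrounding flow are illustrative assumptions, not taken from the commit):

```python
# Hedged sketch of the cpu_embedding=True usage documented in #9880: keep the
# embedding table on the CPU while the rest of the model runs on the iGPU.
from bigdl.llm.transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",  # placeholder model id
    load_in_4bit=True,
    cpu_embedding=True,  # embeddings stay on CPU to save iGPU memory
)
model = model.to("xpu")
```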
Jinyi Wan
07485eff5a Add SOLAR-10.7B to README (#9869) 2024-01-11 14:28:41 +08:00
WeiguangHan
33fd1f9c76 LLM: fix input length logic for run_transformer_int4_gpu (#9864)
* LLM: fix input length logic for run_transformer_int4_gpu

* small fix

* small fix

* small fix
2024-01-10 18:20:14 +08:00
Ruonan Wang
53531ae4ee LLM: support qkv fusion for fp8e5 (#9878)
* update

* add mistral

* meet code review
2024-01-10 17:50:00 +08:00
Lilac09
cb32b985ec add mistral and chatglm support to vllm (#9879)
* add mistral and chatglm support to vllm

* add mistral and chatglm support to vllm
2024-01-10 15:38:42 +08:00
ZehuaCao
e76d984164 [LLM] Support llm-awq vicuna-7b-1.5 on arc (#9874)
* support llm-awq vicuna-7b-1.5 on arc

* support llm-awq vicuna-7b-1.5 on arc
2024-01-10 14:28:39 +08:00
Ruonan Wang
3e05c9e11b LLM: update esimd sdp kernel (#9871) 2024-01-09 18:10:01 +08:00
Yuwen Hu
023679459e [LLM] Small fixes for finetune related examples and UTs (#9870) 2024-01-09 18:05:03 +08:00
Cheen Hau, 俊豪
b2aa267f50 Enhance LLM GPU installation document (#9828)
* Improve gpu install doc

* Add troubleshooting - setvars.sh not done properly.

* Further improvements

* 2024.x.x -> 2024.0

* Fixes

* Fix Install BigDL-LLM From Wheel: bigdl-llm[xpu_2.0]

* Remove "export USE_XETLA=OFF" for Max GPU
2024-01-09 16:30:50 +08:00
Yuwen Hu
23fc888abe Update llm gpu xpu default related info to PyTorch 2.1 (#9866) 2024-01-09 15:38:47 +08:00
Yishuo Wang
36496d60ac only use quantize kv cache on MTL (#9862) 2024-01-09 13:24:02 +08:00
ZehuaCao
146076bdb5 Support llm-awq backend (#9856)
* Support for LLM-AWQ Backend

* fix

* Update README.md

* Add awqconfig

* modify init

* update

* support llm-awq

* fix style

* fix style

* update

* fix AwqBackendPackingMethod not found error

* fix style

* update README

* fix style

---------

Co-authored-by: Uxito-Ada <414416158@qq.com>
Co-authored-by: Heyang Sun <60865256+Uxito-Ada@users.noreply.github.com>
Co-authored-by: cyita <yitastudy@gmail.com>
2024-01-09 13:07:32 +08:00
Ruonan Wang
fea6f16057 LLM: add mlp fusion for fp8e5 and update related check (#9860)
* update mlp fusion

* fix style

* update
2024-01-09 09:56:32 +08:00
binbin Deng
294fd32787 LLM: update DeepSpeed AutoTP example with GPU memory optimization (#9823) 2024-01-09 09:22:49 +08:00
Yuwen Hu
5ba1dc38d4 [LLM] Change default Linux GPU install option to PyTorch 2.1 (#9858)
* Update default xpu to ipex 2.1

* Update related install ut support correspondingly

* Add arc ut tests for both ipex 2.0 and 2.1

* Small fix

* Disable ipex 2.1 test for now as oneapi 2024.0 has not been installed on the test machine

* Update document for default PyTorch 2.1

* Small fix

* Small fix

* Small doc fixes

* Small fixes
2024-01-08 17:16:17 +08:00
Mingyu Wei
ed81baa35e LLM: Use default typing-extension in LangChain examples (#9857)
* remove typing extension downgrade in readme; minor fixes of code

* fix typos in README

* change default question of docqa.py
2024-01-08 16:50:55 +08:00
Jiao Wang
3b6372ab12 Fix Llama transformers 4.36 support (#9852)
* support 4.36

* style

* update

* update

* update

* fix merge

* update
2024-01-08 00:32:23 -08:00
Chen, Zhentao
1b585b0d40 set fp8 default as e5m2 (#9859) 2024-01-08 15:53:57 +08:00
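For context on the default set above: fp8 e5m2 spends more bits on exponent (range) and fewer on mantissa (precision) than e4m3. A plain-PyTorch 2.1 illustration, independent of bigdl-llm's own quantization code:

```python
import torch

x = torch.tensor([0.1234])
# e5m2: 5 exponent bits, 2 mantissa bits -> wider range, coarser rounding
print(x.to(torch.float8_e5m2).to(torch.float32))
# e4m3: 4 exponent bits, 3 mantissa bits -> narrower range, finer rounding
print(x.to(torch.float8_e4m3fn).to(torch.float32))
```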
Ruonan Wang
dc995006cc LLM: add flash attention for mistral / mixtral (#9846)
* add flash attention for mistral

* update

* add flash attn for mixtral

* fix style
2024-01-08 09:51:34 +08:00
Yishuo Wang
afaa871144 [LLM] support quantize kv cache to fp8 (#9812) 2024-01-08 09:28:20 +08:00
Jiao Wang
248ae7fad2 LLama optimize_model to support transformers 4.36 (#9818)
* support 4.36

* style

* update

* update

* update
2024-01-05 11:30:18 -08:00
Ruonan Wang
a60bda3324 LLM: update check for deepspeed (#9838) 2024-01-05 16:44:10 +08:00
Ruonan Wang
16433dd959 LLM: fix first token judgement of flash attention (#9841)
* fix flash attention

* meet code review

* fix
2024-01-05 13:49:37 +08:00
Yina Chen
f919f5792a fix kv cache out of bound (#9827) 2024-01-05 12:38:57 +08:00
Ruonan Wang
5df31db773 LLM: fix accuracy issue of chatglm3 (#9830)
* add attn mask for first token

* fix

* fix

* change attn calculation

* fix

* fix

* fix style

* fix style
2024-01-05 10:52:05 +08:00
Jinyi Wan
3147ebe63d Add cpu and gpu examples for SOLAR-10.7B (#9821) 2024-01-05 09:50:28 +08:00
WeiguangHan
ad6b182916 LLM: change the color of peak diff (#9836) 2024-01-04 19:30:32 +08:00
Xiangyu Tian
38c05be1c0 [LLM] Fix dtype mismatch in Baichuan2-13b (#9834) 2024-01-04 15:34:42 +08:00
Ruonan Wang
8504a2bbca LLM: update qlora alpaca example to change lora usage (#9835)
* update example

* fix style
2024-01-04 15:22:20 +08:00
Ziteng Zhang
05b681fa85 [LLM] IPEX auto importer set on by default (#9832)
* Set BIGDL_IMPORT_IPEX default to True

* Remove import intel_extension_for_pytorch as ipex from GPU example
2024-01-04 13:33:29 +08:00
Wang, Jian4
4ceefc9b18 LLM: Support bitsandbytes config on qlora finetune (#9715)
* test support bitsandbytesconfig

* update style

* update cpu example

* update example

* update readme

* update unit test

* use bfloat16

* update logic

* use int4

* set default bnb_4bit_use_double_quant

* update

* update example

* update model.py

* update

* support lora example
2024-01-04 11:23:16 +08:00
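The bullets above mention bnb_4bit_use_double_quant and bfloat16; a plausible config object along those lines, built with the standard transformers API (whether bigdl-llm consumes exactly these kwargs is an assumption):

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # "use int4" per the bullets
    bnb_4bit_use_double_quant=True,         # default mentioned in the bullets
    bnb_4bit_compute_dtype=torch.bfloat16,  # "use bfloat16" per the bullets
)
```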
WeiguangHan
9a14465560 LLM: add peak diff (#9789)
* add peak diff

* small fix

* revert yml file
2024-01-03 18:18:19 +08:00
Mingyu Wei
f4eb5da42d disable arc ut (#9825) 2024-01-03 18:10:34 +08:00
Ruonan Wang
20e9742fa0 LLM: fix chatglm3 issue (#9820)
* fix chatglm3 issue

* small update
2024-01-03 16:15:55 +08:00
Wang, Jian4
a54cd767b1 LLM: Add gguf falcon (#9801)
* init falcon

* update convert.py

* update style
2024-01-03 14:49:02 +08:00
Yuwen Hu
668c2095b1 Remove unnecessary warning when installing llm (#9815) 2024-01-03 10:30:05 +08:00
dingbaorong
f5752ead36 Add whisper test (#9808)
* add whisper benchmark code

* add librispeech_asr.py

* add bigdl license
2024-01-02 16:36:05 +08:00
binbin Deng
6584539c91 LLM: fix installation of codellama (#9813) 2024-01-02 14:32:50 +08:00
Kai Huang
4d01069302 Temp remove baichuan2-13b 1k from arc perf test (#9810) 2023-12-29 12:54:13 +08:00
dingbaorong
a2e668a61d fix arc ut test (#9736) 2023-12-28 16:55:34 +08:00
Qiyuan Gong
f0f9d45eac [LLM] IPEX import support bigdl-core-xe-21 (#9769)
Add support for bigdl-core-xe-21.
2023-12-28 15:23:58 +08:00
dingbaorong
a8baf68865 fix csv_to_html (#9802) 2023-12-28 14:58:51 +08:00
Guancheng Fu
5857a38321 [vLLM] Add option to adjust KV_CACHE_ALLOC_BLOCK_LENGTH (#9782)
* add option kv_cache_block

* change var name
2023-12-28 14:41:47 +08:00
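A hedged sketch of an adjustable allocation block length; only the name KV_CACHE_ALLOC_BLOCK_LENGTH comes from the commit title, the default value and helper are assumptions:

```python
import os

KV_CACHE_ALLOC_BLOCK_LENGTH = int(os.environ.get("KV_CACHE_ALLOC_BLOCK_LENGTH", "256"))

def blocks_needed(seq_len: int) -> int:
    # Round the sequence length up to whole allocation blocks.
    return -(-seq_len // KV_CACHE_ALLOC_BLOCK_LENGTH)
```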
Ruonan Wang
99bddd3ab4 LLM: better FP16 support for Intel GPUs (#9791)
* initial support

* fix

* fix style

* fix

* limit esimd usage condition

* refactor code

* fix style

* small fix

* meet code review

* small fix
2023-12-28 13:30:13 +08:00
Yishuo Wang
7d9f6c6efc fix cpuinfo error (#9793) 2023-12-28 09:23:44 +08:00
Wang, Jian4
7ed9538b9f LLM: support gguf mpt (#9773)
* add gguf mpt

* update
2023-12-28 09:22:39 +08:00
Cengguang Zhang
d299f108d0 update falcon attention forward. (#9796) 2023-12-28 09:11:59 +08:00
Shaojun Liu
a5e5c3daec set warm_up: 3 num_trials: 50 for cpu stress test (#9799) 2023-12-28 08:55:43 +08:00
dingbaorong
f6bb4ab313 Arc stress test (#9795)
* add arc stress test

* trigger ci

* trigger CI

* trigger ci

* disable ci
2023-12-27 21:02:41 +08:00
Kai Huang
40eaf76ae3 Add baichuan2-13b to Arc perf (#9794)
* add baichuan2-13b

* fix indent

* revert
2023-12-27 19:38:53 +08:00
Shaojun Liu
6c75c689ea bigdl-llm stress test for stable version (#9781)
* 1k-512 2k-512 baseline

* add cpu stress test

* update yaml name

* update

* update

* clean up

* test

* update

* update

* update

* test

* update
2023-12-27 15:40:53 +08:00
dingbaorong
5cfb4c4f5b Arc stable version performance regression test (#9785)
* add arc stable version regression test

* empty gpu mem between different models

* trigger ci

* comment spr test

* trigger ci

* address kai's comments and disable ci

* merge fp8 and int4

* disable ci
2023-12-27 11:01:56 +08:00
binbin Deng
40edb7b5d7 LLM: fix get environment variables setting (#9787) 2023-12-27 09:11:37 +08:00
Kai Huang
689889482c Reduce max_cache_pos to reduce Baichuan2-13B memory (#9694)
* optimize baichuan2 memory

* fix

* style

* fp16 mask

* disable fp16

* fix style

* empty cache

* revert empty cache
2023-12-26 19:51:25 +08:00
Jason Dai
361781bcd0 Update readme (#9788) 2023-12-26 19:46:11 +08:00
Yuwen Hu
c38e18f2ff [LLM] Migrate iGPU perf tests to new machine (#9784)
* Move 1024 test just after 32-32 test; and enable all models for 1024-128

* Make sure python output encoding is utf-8 so that redirecting to txt always succeeds

* Upload results to ftp

* Small fix
2023-12-26 19:15:57 +08:00
WeiguangHan
c05d7e1532 LLM: add star_corder_15.5b model (#9772)
* LLM: add star_corder_15.5b model

* revert llm_performance_tests.yml
2023-12-26 18:55:56 +08:00
Ziteng Zhang
44b4a0c9c5 [LLM] Correct prompt format of Yi, Llama2 and Qwen in generate.py (#9786)
* correct prompt format of Yi

* correct prompt format of llama2 in cpu generate.py

* correct prompt format of Qwen in GPU example
2023-12-26 16:57:55 +08:00
Xiangyu Tian
0ea842231e [LLM] vLLM: Add api_server entrypoint (#9783)
Add vllm.entrypoints.api_server for benchmark_serving.py in vllm.
2023-12-26 16:03:57 +08:00
dingbaorong
64d05e581c add peak gpu mem stats in transformer_int4_gpu (#9766)
* add peak gpu mem stats in transformer_int4_gpu

* address weiguang's comments
2023-12-26 15:38:28 +08:00
Ziteng Zhang
87b4100054 [LLM] Support Yi model in chat.py (#9778)
* Support Yi model

* code style& add reference link
2023-12-26 10:03:39 +08:00
Ruonan Wang
11d883301b LLM: fix wrong batch output caused by flash attention (#9780)
* fix

* meet code review

* move batch size check to the beginning

* move qlen check inside function

* meet code review
2023-12-26 09:41:27 +08:00
Heyang Sun
66e286a73d Support for Mixtral AWQ (#9775)
* Support for Mixtral AWQ

* Update README.md

* Update README.md

* Update awq_config.py

* Update README.md

* Update README.md
2023-12-25 16:08:09 +08:00
Ruonan Wang
1917bbe626 LLM: fix BF16Linear related training & inference issue (#9755)
* fix bf16 related issue

* fix

* update based on comment & add arc lora script

* update readme

* update based on comment

* update based on comment

* update

* force to bf16

* fix style

* move check input dtype into function

* update convert

* meet code review

* meet code review

* update merged model to support new training_mode api

* fix typo
2023-12-25 14:49:30 +08:00
Xiangyu Tian
30dab36f76 [LLM] vLLM: Fix kv cache init (#9771)
Fix kv cache init
2023-12-25 14:17:06 +08:00
Yina Chen
449b387125 Support relora in bigdl-llm (#9687)
* init

* fix style

* update

* support resume & update readme

* update

* update

* remove important

* add training mode

* meet comments
2023-12-25 14:04:28 +08:00
Shaojun Liu
b6222404b8 bigdl-llm stable version: let the perf test fail if the difference between perf and baseline is greater than 5% (#9750)
* test

* test

* test

* update

* revert
2023-12-25 13:47:11 +08:00
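The gate described in the title above, written out as a hedged helper; the 5% threshold is from the commit, the names and the lower-is-better convention are assumptions:

```python
def check_regression(perf: float, baseline: float, threshold: float = 0.05) -> None:
    # Fail the perf test when the measurement is more than 5% worse than baseline.
    if (perf - baseline) / baseline > threshold:
        raise AssertionError(
            f"perf {perf:.3f} regressed more than {threshold:.0%} vs baseline {baseline:.3f}"
        )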
Ziteng Zhang
986f65cea9 [LLM] Add trust_remote_code for local renamed model in bigdl_llm_model.py (#9762) 2023-12-25 11:31:14 +08:00
Yishuo Wang
be13b162fe add codeshell example (#9743) 2023-12-25 10:54:01 +08:00
Guancheng Fu
daf536fb2d vLLM: Apply attention optimizations for selective batching (#9758)
* fuse_rope for prefill

* apply kv_cache optimizations

* apply fast_decoding_path

* Re-enable kv_cache optimizations for prefill

* reduce KV_CACHE_ALLOC_BLOCK for selective_batching
2023-12-25 10:29:31 +08:00
binbin Deng
ed8ed76d4f LLM: update deepspeed autotp usage (#9733) 2023-12-25 09:41:14 +08:00
Yuwen Hu
02436c6cce [LLM] Enable more long context in-out pairs for iGPU perf tests (#9765)
* Add test for 1024-128 and enable more tests for 512-64

* Fix date in results csv name to the time when the performance test is triggered

* Small fix

* Small fix

* further fixes
2023-12-22 18:18:23 +08:00
Chen, Zhentao
7fd7c37e1b Enable fp8e5 harness (#9761)
* fix precision format like fp8e5

* match fp8_e5m2
2023-12-22 16:59:48 +08:00
Qiyuan Gong
4c487313f2 Revert "[LLM] IPEX auto importer turn on by default for XPU (#9730)" (#9759)
This reverts commit 0284801fbd.
2023-12-22 16:38:24 +08:00
Qiyuan Gong
0284801fbd [LLM] IPEX auto importer turn on by default for XPU (#9730)
* Set BIGDL_IMPORT_IPEX default to true, i.e., auto import IPEX for XPU.
* Remove import intel_extension_for_pytorch as ipex from GPU example.
* Add support for bigdl-core-xe-21.
2023-12-22 16:20:32 +08:00
Chen, Zhentao
86a69e289c fix harness runner label of manual trigger (#9754)
* fix runner

* update golden
2023-12-22 15:09:22 +08:00
Guancheng Fu
fdf93c9267 Implement selective batching for vLLM (#9659)
* add control to load hf model

* finish initial version of selective_batching

* temp

* finish

* Remove print statement

* fix error

* Apply yang's optimization

* a version that works

* We need to check the kv_cache passed in; this could be an error. TODO: add fast decoding path

* format

* temp solution: not batching prefill requests

* a version that works for prefill batching

* format

* a solid version: works normally

* a temp version

* Solid version: remove redundant functions

* fix format

* format

* solid: add option to enable selective_batching

* remove logic for using transformer models

* format

* format

* solid: enable argument VLLM_ENABLE_SELECTIVE_BATCHING

* format

* finish

* format
2023-12-22 13:45:46 +08:00
Ruonan Wang
2f36769208 LLM: bigdl-llm lora support & lora example (#9740)
* lora support and single card example

* support multi-card, refactor code

* fix model id and style

* remove torch patch, add two new class for bf16, update example

* fix style

* change to training_mode

* small fix

* add more info in help

* fix style, update readme

* fix ut

* fix ut

* Handling compatibility issues with default LoraConfig
2023-12-22 11:05:39 +08:00
SONG Ge
ba0b939579 [LLM] Support transformers-v4.36.0 on mistral model (#9744)
* add support transformers-v4.36.0 on mistral model

* python/llm/src/bigdl/llm/transformers/models/mistral.py

* make the redundant implementation as utils

* fix code style

* fix

* fix style

* update with utils enough_kv_room
2023-12-22 09:59:27 +08:00
Xin Qiu
e36111e713 mixtral fused qkv and rope (#9724)
* mixtral fused qkv and rope

* fix and clean

* fix style

* update

* update

* fix

* update

* fix
2023-12-22 09:26:35 +08:00
Jiao Wang
e4f6e43675 safetensor to false (#9728) 2023-12-21 14:41:51 -08:00
Shaojun Liu
bb52239e0a bigdl-llm stable version release & test (#9732)
* stable version test

* trigger spr test

* update

* trigger

* test

* test

* test

* test

* test

* refine

* release linux first
2023-12-21 22:55:33 +08:00
WeiguangHan
d4d2ccdd9d LLM: remove starcoder-15.5b (#9748) 2023-12-21 18:52:52 +08:00
WeiguangHan
474c099559 LLM: using separate threads to do inference (#9727)
* using separate threads to do inference

* resolve some comments

* resolve some comments

* revert llm_performance_tests.yml file
2023-12-21 17:56:43 +08:00
Yishuo Wang
426660b88e simplify qwen attention (#9747) 2023-12-21 17:53:29 +08:00
Wang, Jian4
984697afe2 LLM: Add bloom gguf support (#9734)
* init

* update bloom add merges

* update

* update readme

* update for llama error

* update
2023-12-21 14:06:25 +08:00
Heyang Sun
df775cf316 fix python style (#9742)
* fix python style

* fix

* fix
2023-12-21 11:25:05 +08:00
Chen, Zhentao
b06a3146c8 Fix 70b oom (#9738)
* add default value to bigdl llm

* fix model oom
2023-12-21 10:40:52 +08:00
Xin Qiu
6c3e698bf1 mistral decoding_fast_path and fused mlp (#9714)
* mistral decoding_fast_path and fused mlp

* meet code review
2023-12-21 10:11:37 +08:00
Heyang Sun
d157f623b6 Load Mixtral gguf in a block-wise way (#9725)
* Load Mixtral gguf in a block-wise way

* refine
2023-12-21 10:03:23 +08:00
WeiguangHan
34bb804189 LLM: check csv and its corresponding yaml file (#9702)
* LLM: check csv and its corresponding yaml file

* run PR arc perf test

* modify the name of some variables

* execute the check results script in right place

* use cp to replace mv command

* resolve some comments

* resolve more comments

* revert the llm_performance_test.yaml file
2023-12-21 09:54:33 +08:00
Zhao Changmin
4bda975a3e LLM: Align lowbit model config (#9735)
* align lowbit model config
2023-12-21 09:48:58 +08:00
Wang, Jian4
e1e921f425 LLM: gguf other model using dtype (#9729) 2023-12-21 09:33:40 +08:00
Yishuo Wang
13ea6330bd optimize qwen rope (#9737) 2023-12-20 17:34:34 +08:00
Ziteng Zhang
4c032a433e [LLM] Add glibc checker (#9624)
* Add glibc checker
* Add env BIGDL_GLIBC_CHECK to control glibc checker. The default is false, i.e., don't check.
2023-12-20 16:52:43 +08:00
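A minimal sketch consistent with the bullets above; only BIGDL_GLIBC_CHECK and its off-by-default behavior come from the commit, the version logic is illustrative:

```python
import os
import platform

def check_glibc(min_version=(2, 17)):
    if os.environ.get("BIGDL_GLIBC_CHECK", "false").lower() != "true":
        return  # checking is off by default
    libc, version = platform.libc_ver()
    if libc == "glibc" and tuple(int(p) for p in version.split(".")) < min_version:
        raise RuntimeError(f"glibc {version} is older than the required {min_version}")
```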
Yina Chen
cd652a1710 Support fp8 e5m2 on arc (#9711)
* init

* fix style

* update

* fix style

* update
2023-12-20 16:26:17 +08:00
Yishuo Wang
e54c428d30 add bf16/fp16 fuse mlp support (#9726) 2023-12-20 10:40:45 +08:00
Heyang Sun
612651cb5d fix typo (#9723) 2023-12-20 09:41:59 +08:00
WeiguangHan
3aa8b66bc3 LLM: remove starcoder-15.5b model temporarily (#9720) 2023-12-19 20:14:46 +08:00
Yishuo Wang
522cf5ed82 [LLM] Improve chatglm2/3 rest token performance with long context (#9716) 2023-12-19 17:29:38 +08:00
Yishuo Wang
f2e6abb563 fix mlp batch size check (#9718) 2023-12-19 14:22:22 +08:00
Heyang Sun
1fa7793fc0 Load Mixtral GGUF Model (#9690)
* Load Mixtral GGUF Model

* refactor

* fix empty tensor when to cpu

* update gpu and cpu readmes

* add dtype when set tensor into module
2023-12-19 13:54:38 +08:00
Qiyuan Gong
d0a3095b97 [LLM] IPEX auto importer (#9706)
* IPEX auto importer and get_ipex_version.
* Add BIGDL_IMPORT_IPEX to control auto import, default is false.
2023-12-19 13:39:38 +08:00
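A hedged sketch of an opt-in auto importer matching the bullets above; the real hook in bigdl-llm may be wired differently (e.g., triggered at package import time):

```python
import importlib
import os

if os.environ.get("BIGDL_IMPORT_IPEX", "false").lower() == "true":
    try:
        ipex = importlib.import_module("intel_extension_for_pytorch")
    except ImportError:
        ipex = None  # IPEX not installed; continue without XPU acceleration
```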
Yang Wang
f4fb58d99c fusing qkv project and rope (#9612)
* Try fusing qkv project and rope

* add fused mlp

* fuse append cache

* fix style and clean up code

* clean up
2023-12-18 16:45:00 -08:00
Kai Huang
4c112ee70c Rename qwen in model name for arc perf test (#9712) 2023-12-18 20:34:31 +08:00
Cengguang Zhang
4d22add4af LLM: fix qwen efficiency issue in perf-test. 2023-12-18 18:32:54 +08:00
Ruonan Wang
8ed89557e5 LLM: add mlp optimization of mixtral (#9709) 2023-12-18 16:59:52 +08:00
Chen, Zhentao
b3647507c0 Fix harness workflow (#9704)
* error when larger than 0.001

* fix env setup

* fix typo

* fix typo
2023-12-18 15:42:10 +08:00
binbin Deng
12df70953e LLM: add resume_from_checkpoint related section (#9705) 2023-12-18 12:27:02 +08:00
Xin Qiu
320110d158 handle empty fused norm result (#9688)
* handle empty fused norm result

* remove fast_rms_norm

* fix style
2023-12-18 09:56:11 +08:00
Ziteng Zhang
67cc155771 [LLM] Correct chat format of llama and add llama_stream_chat in chat.py
* correct chat format of llama
* add llama_stream_chat
2023-12-15 16:36:46 +08:00
Ziteng Zhang
0d41b7ba7b [LLM] Correct chat format & add stop words for chatglm3 in chat.py
* correct chat format of chatglm3
* correct stop words of chatglm3
2023-12-15 16:35:17 +08:00
Ziteng Zhang
d57efd8eb9 [LLM] Add stop_word for Qwen model and correct qwen chat format in chat.py (#9642)
* add stop words list for qwen

* change qwen chat format
2023-12-15 14:53:58 +08:00
SONG Ge
d5b81af7bd Support mixtral attention optimization on transformers-v4.36.0 (#9674)
* add example code to support mistral/mixtral attention on transformers v4.36.0

* update

* style fix

* add update for seen-tokens

* support mixtral

* rm mistral change

* small fix

* add more comments and remove use_cache part

---------

Co-authored-by: plusbang <binbin1.deng@intel.com>
2023-12-15 14:30:23 +08:00
Cengguang Zhang
adbef56001 LLM: update qwen attention forward. (#9695)
* feat: update qwen attention forward.

* fix: style.
2023-12-15 14:06:15 +08:00
Wang, Jian4
b8437a1c1e LLM: Add gguf mistral model support (#9691)
* add mistral support

* need to upgrade transformers version

* update
2023-12-15 13:37:39 +08:00
Wang, Jian4
496bb2e845 LLM: Support load BaiChuan model family gguf model (#9685)
* support baichuan model family gguf model

* update gguf generate.py

* add verify models

* add support model_family

* update

* update style

* update type

* update readme

* update

* remove support model_family
2023-12-15 13:34:33 +08:00
Lilac09
3afed99216 fix path issue (#9696) 2023-12-15 11:21:49 +08:00
Jason Dai
37f509bb95 Update readme (#9692) 2023-12-14 19:50:21 +08:00
WeiguangHan
1f0245039d LLM: check the final csv results for arc perf test (#9684)
* LLM: check the final csv results for arc perf test

* delete useless python script

* change threshold

* revert the llm_performance_tests.yml
2023-12-14 19:46:08 +08:00
Yishuo Wang
9a330bfc2b fix fuse mlp when using q5_0 or fp8 (#9689) 2023-12-14 16:16:05 +08:00
Yuwen Hu
82ac2dbf55 [LLM] Small fixes for win igpu test for ipex 2.1 (#9686)
* Fixes to install for igpu performance tests

* Small update for core performance tests model lists
2023-12-14 15:39:51 +08:00
WeiguangHan
3e8d198b57 LLM: add eval func (#9662)
* Add eval func

* add left eval
2023-12-14 14:59:02 +08:00
Ziteng Zhang
21c7503a42 [LLM] Correct prompt format of Qwen in generate.py (#9678)
* Change qwen prompt format to chatml
2023-12-14 14:01:30 +08:00
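The ChatML layout that the commit above switches Qwen prompts to; the system message and question below are placeholders:

```python
question = "What is AI?"
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    f"<|im_start|>user\n{question}<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```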
Qiyuan Gong
223c9622f7 [LLM] Mixtral CPU examples (#9673)
* Mixtral CPU PyTorch and hugging face examples, based on #9661 and #9671
2023-12-14 10:35:11 +08:00
Xin Qiu
5e46e0e5af fix baichuan2-7b 1st token performance regression on xpu (#9683)
* fix baichuan2-7b 1st token performance regression

* add comments

* fix style
2023-12-14 09:58:32 +08:00
ZehuaCao
877229f3be [LLM] Add Yi-34B-AWQ to verified AWQ models. (#9676)
* verify Yi-34B-AWQ

* update
2023-12-14 09:55:47 +08:00
binbin Deng
68a4be762f remove disco mixtral, update oneapi version (#9671) 2023-12-13 23:24:59 +08:00
Ruonan Wang
1456d30765 LLM: add dot to option name in setup (#9682) 2023-12-13 20:57:27 +08:00
Yuwen Hu
cbdd49f229 [LLM] win igpu performance for ipex 2.1 and oneapi 2024.0 (#9679)
* Change igpu win tests for ipex 2.1 and oneapi 2024.0

* Qwen model repo id updates; updates model list for 512-64

* Add .eval for win igpu all-in-one benchmark for best performance
2023-12-13 18:52:29 +08:00
Mingyu Wei
16febc949c [LLM] Add exclude option in all-in-one performance test (#9632)
* add exclude option in all-in-one perf test

* update arc-perf-test.yaml

* Exclude in_out_pairs in main function

* fix some bugs

* address Kai's comments

* define excludes at the beginning

* add bloomz:2048 to exclude
2023-12-13 18:13:06 +08:00
Ruonan Wang
9b9cd51de1 LLM: update setup to provide new install option to support ipex 2.1 & oneapi 2024 (#9647)
* update setup

* default to 2.0 now

* meet code review
2023-12-13 17:31:56 +08:00
Yishuo Wang
09ca540f9b use fuse mlp in qwen (#9672) 2023-12-13 17:20:08 +08:00
Ruonan Wang
c7741c4e84 LLM: update moe block convert to optimize rest token latency of Mixtral (#9669)
* update moe block convert

* further accelerate final_hidden_states

* fix style

* fix style
2023-12-13 16:17:06 +08:00
ZehuaCao
503880809c verify codeLlama (#9668) 2023-12-13 15:39:31 +08:00
Xiangyu Tian
1c6499e880 [LLM] vLLM: Support Mixtral Model (#9670)
Add Mixtral support for BigDL vLLM.
2023-12-13 14:44:47 +08:00
Ruonan Wang
dc5b1d7e9d LLM: integrate sdp kernel for FP16 rest token inference on GPU [DG2/ATSM] (#9633)
* integrate sdp

* update api

* fix style

* meet code review

* fix

* distinguish mtl from arc

* small fix
2023-12-13 11:29:57 +08:00
Qiyuan Gong
5b0e7e308c [LLM] Add support for empty activation (#9664)
* Add support for empty activation, e.g., [0, 4096]. Empty activation is allowed by PyTorch.
* Add comments.
2023-12-13 11:07:45 +08:00
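A quick plain-PyTorch demonstration of the empty-activation case noted above:

```python
import torch

linear = torch.nn.Linear(4096, 4096)
x = torch.empty(0, 4096)   # empty activation: zero rows, 4096 features
print(linear(x).shape)     # torch.Size([0, 4096])
```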
SONG Ge
284e7697b1 [LLM] Optimize ChatGLM2 kv_cache to support beam_search on ARC (#9579)
* optimize kv_cache to support beam_search on Arc

* correctness test update

* fix query_length issue

* simplify implementation

* only enable the optimization on gpu device

* limit the beam_search support only enabled with gpu device and batch_size > 1

* add comments for beam_search case and revert ut change

* meet comments

* add more comments to describe the difference between the multiple cases
2023-12-13 11:02:14 +08:00
Heyang Sun
c64e2248ef fix str returned by get_int_from_str rather than expected int (#9667) 2023-12-13 11:01:21 +08:00
binbin Deng
bf1bcf4a14 add official Mixtral model support (#9663) 2023-12-12 22:27:07 +08:00
Ziteng Zhang
8931f2eb62 [LLM] Fix transformer qwen size mismatch and rename causal_mask (#9655)
* Fix size mismatching caused by context_layer
* Change registered_causal_mask to causal_mask
2023-12-12 20:57:40 +08:00
binbin Deng
2fe38b4b9b LLM: add mixtral GPU examples (#9661) 2023-12-12 20:26:36 +08:00
Yuwen Hu
968d99e6f5 Remove empty cache between each iteration of generation (#9660) 2023-12-12 17:24:06 +08:00
Xin Qiu
0e639b920f disable test_optimized_model.py temporarily due to out of memory on A730M(pr validation machine) (#9658)
* disable test_optimized_model.py

* disable seq2seq
2023-12-12 17:13:52 +08:00
binbin Deng
59ce86d292 LLM: support optimize_model=True for Mixtral model (#9657) 2023-12-12 16:41:26 +08:00
Yuwen Hu
d272b6dc47 [LLM] Enable generation of html again for win igpu tests (#9652)
* Enable generation of html again and comment out rwkv for 32-512 as it is not very stable

* Small fix
2023-12-11 19:15:17 +08:00
WeiguangHan
afa895877c LLM: fix the issue that may generate blank html (#9650)
* LLM: fix the issue that may generate blank html

* reslove some comments
2023-12-11 19:14:57 +08:00
ZehuaCao
45721f3473 verify llava (#9649) 2023-12-11 14:26:05 +08:00
Heyang Sun
9f02f96160 [LLM] support for Yi AWQ model (#9648) 2023-12-11 14:07:34 +08:00
Xin Qiu
82255f9726 Enable fused layernorm (#9614)
* bloom layernorm

* fix

* layernorm

* fix

* fix

* fix

* style fix

* fix

* replace nn.LayerNorm
2023-12-11 09:26:13 +08:00
Yuwen Hu
894d0aaf5e [LLM] iGPU win perf test reorg based on in-out pairs (#9645)
* trigger pr temporarily

* Separate benchmark run for win igpu based on in-out pairs

* Rename fix

* Test workflow

* Small fix

* Skip generation of html for now

* Change back to nightly triggered
2023-12-08 20:46:40 +08:00
Chen, Zhentao
972cdb9992 gsm8k OOM workaround (#9597)
* update bigdl_llm.py

* update the installation of harness

* fix partial function

* import ipex

* force seq len in decreasing order

* put func outside class

* move comments

* default 'trust_remote_code' as True

* Update llm-harness-evaluation.yml
2023-12-08 18:47:25 +08:00
WeiguangHan
1ff4bc43a6 degrade pandas version (#9643) 2023-12-08 17:44:51 +08:00
Yina Chen
70f5e7bf0d Support peft LoraConfig (#9636)
* support peft loraconfig

* use testcase to test

* fix style

* meet comments
2023-12-08 16:13:03 +08:00
Xin Qiu
0b6f29a7fc add fused rms norm for Yi and Qwen (#9640) 2023-12-08 16:04:38 +08:00
Xin Qiu
5636b0ba80 set new linear status (#9639) 2023-12-08 11:02:49 +08:00
binbin Deng
499100daf1 LLM: Add solution to fix oneccl related error (#9630) 2023-12-08 10:51:55 +08:00
ZehuaCao
6eca8a8bb5 update transformer version (#9631) 2023-12-08 09:36:00 +08:00
WeiguangHan
e9299adb3b LLM: Highlight some values in the html (#9635)
* highlight some values in the html

* revert the llm_performance_tests.yml
2023-12-07 19:02:41 +08:00
Yuwen Hu
6f34978b94 [LLM] Add more performance tests for win iGPU (more in-out pairs, RWKV model) (#9626)
* Add supports for loading rwkv models using from_pretrained api

* Temporarily enable pr tests

* Add RWKV in tests and more in-out pairs

* Add rwkv for 512 tests

* Make iterations smaller

* Change back to nightly trigger
2023-12-07 18:55:16 +08:00
Ruonan Wang
d9b0c01de3 LLM: fix unlora module in qlora finetune (#9621)
* fix unlora module

* split train and inference
2023-12-07 16:32:02 +08:00
Heyang Sun
3811cf43c9 [LLM] update AWQ documents (#9623)
* [LLM] update AWQ and verified models' documents

* refine

* refine links

* refine
2023-12-07 16:02:20 +08:00
Yishuo Wang
7319f2c227 use fused mlp in baichuan2 (#9620) 2023-12-07 15:50:57 +08:00
Xiangyu Tian
deee65785c [LLM] vLLM: Delete last_kv_cache before prefilling (#9619)
Remove last_kv_cache before prefilling to reduce peak memory usage.
2023-12-07 11:32:33 +08:00
Yuwen Hu
48b85593b3 Update all-in-one benchmark readme (#9618) 2023-12-07 10:32:09 +08:00
Xiangyu Tian
0327169b50 [LLM] vLLM: fix memory leak in prepare_kv_cache (#9616)
Revert modification in prepare_kv_cache to fix memory leak.
2023-12-07 10:08:18 +08:00
Xin Qiu
13d47955a8 use fused rms norm in chatglm2 and baichuan (#9613)
* use fused rms norm in chatglm2 and baichuan

* style fix
2023-12-07 09:21:41 +08:00
Jason Dai
51b668f229 Update GGUF readme (#9611) 2023-12-06 18:21:54 +08:00
dingbaorong
a7bc89b3a1 remove q4_1 in gguf example (#9610)
* remove q4_1

* fixes
2023-12-06 16:00:05 +08:00
Yina Chen
404e101ded QALora example (#9551)
* Support qa-lora

* init

* update

* update

* update

* update

* update

* update merge

* update

* fix style & update scripts

* update

* address comments

* fix typo

* fix typo

---------

Co-authored-by: Yang Wang <yang3.wang@intel.com>
2023-12-06 15:36:21 +08:00
Guancheng Fu
6978b2c316 [VLLM] Change padding patterns for vLLM & clean code (#9609)
* optimize

* fix minor error

* optimizations

* fix style
2023-12-06 15:27:26 +08:00
dingbaorong
89069d6173 Add gpu gguf example (#9603)
* add gpu gguf example

* some fixes

* address kai's comments

* address json's comments
2023-12-06 15:17:54 +08:00
Yuwen Hu
0e8f4020e5 Add traceback error output for win igpu test api in benchmark (#9607) 2023-12-06 14:35:16 +08:00
Ziteng Zhang
aeb77b2ab1 Add minimum Qwen model version (#9606) 2023-12-06 11:49:14 +08:00
Yuwen Hu
c998f5f2ba [LLM] iGPU long context tests (#9598)
* Temp enable PR

* Enable tests for 256-64

* Try again 128-64

* Empty cache after each iteration for igpu benchmark scripts

* Try tests for 512

* change order for 512

* Skip chatglm3 and llama2 for now

* Separate tests for 512-64

* Small fix

* Further fixes

* Change back to nightly again
2023-12-06 10:19:20 +08:00
Heyang Sun
4e70e33934 [LLM] code and document for distributed qlora (#9585)
* [LLM] code and document for distributed qlora

* doc

* refine for gradient checkpoint

* refine

* Update alpaca_qlora_finetuning_cpu.py

* Update alpaca_qlora_finetuning_cpu.py

* Update alpaca_qlora_finetuning_cpu.py

* add link in doc
2023-12-06 09:23:17 +08:00
Zheng, Yi
d154b38bf9 Add llama2 gpu low memory example (#9514)
* Add low memory example

* Minor fixes

* Update readme.md
2023-12-05 17:29:48 +08:00
Jason Dai
06febb5fa7 Update readme for FP8/FP4 inference examples (#9601) 2023-12-05 15:59:03 +08:00
dingbaorong
a66fbedd7e add gpu more data types example (#9592)
* add gpu more data types example

* add int8
2023-12-05 15:45:38 +08:00
Ziteng Zhang
65934c9f4f [LLM] Fix Qwen causal_mask and attention_mask size mismatching (#9600)
* Fix #9582, caused by Qwen's modification of modeling_qwen.py in 7f62181c94 (d2h-049182)
2023-12-05 15:15:54 +08:00
Jinyi Wan
b721138132 Add cpu and gpu examples for BlueLM (#9589)
* Add cpu int4 example for BlueLM

* add example optimize_model cpu for bluelm

* add example gpu int4 blueLM

* add example optimize_model GPU for bluelm

* Fixing naming issues and BigDL package version.

* Fixing naming issues...

* Add BlueLM in README.md "Verified Models"
2023-12-05 13:59:02 +08:00
Guancheng Fu
8b00653039 fix doc (#9599) 2023-12-05 13:49:31 +08:00
Qiyuan Gong
f211f136b6 Configurable TORCH_LINEAR_THRESHOLD from env (#9588)
* Add TORCH_LINEAR_THRESHOLD from env (BIGDL_LLM_LINEAR_THRESHOLD)
* Change default to 512
2023-12-05 13:19:47 +08:00
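A sketch of the env-configurable threshold above; the variable name and the 512 default come from the bullets, the dispatch helper is an assumption:

```python
import os

TORCH_LINEAR_THRESHOLD = int(os.environ.get("BIGDL_LLM_LINEAR_THRESHOLD", "512"))

def use_stock_linear(num_tokens: int) -> bool:
    # Above the threshold, fall back to plain torch.nn.Linear.
    return num_tokens > TORCH_LINEAR_THRESHOLD
```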
Yuwen Hu
1012507a40 [LLM] Fix performance tests (#9596)
* Fix missing key for cpu_embedding

* Remove 512 as it stuck for now

* Small fix
2023-12-05 10:59:28 +08:00
Chen, Zhentao
8c8a27ded7 Add harness summary job (#9457)
* format yml

* add make_table_results

* add summary job

* add a job to print single result

* upload full directory
2023-12-05 10:04:10 +08:00
Yuwen Hu
3f4ad97929 [LLM] Add performance tests for windows iGPU (#9584)
* Add support for win gpu benchmark with peak gpu memory monitoring

* Add win igpu tests

* Small fix

* Forward outputs

* Small fix

* Test and small fixes

* Small fix

* Small fix and test

* Small fixes

* Add tests for 512-64 and change back to nightly tests

* Small fix
2023-12-04 20:50:02 +08:00
Chen, Zhentao
9557aa9c21 Fix harness nightly (#9586)
* update golden

* loose the restriction of diff

* only compare results when scheduled
2023-12-04 11:45:00 +08:00
Xiangyu Tian
5c03651309 [LLM] vLLM: Add Preempt for scheduler (#9568)
Implement Preempt_by_recompute method for vllm.
2023-12-03 20:16:25 +08:00
Chen, Zhentao
cb228c70ea Add harness nightly (#9552)
* modify output_path as a directory

* schedule nightly at 21 on Friday

* add tasks and models for nightly

* add accuracy regression

* comment out if to test

* mixed fp4

* for test

* add missing delimiter

* remove comma

* fixed golden results

* add mixed 4 golden result

* add more options

* add mistral results

* get golden result of stable lm

* move nightly scripts and results to test folder

* add license

* add fp8 stable lm golden

* run on all available devices

* trigger only when ready for review

* fix new line

* update golden

* add mistral
2023-12-01 14:16:35 +08:00
Chen, Zhentao
4d7d5d4c59 Add 3 leaderboard tasks (#9566)
* update leaderboard map

* download model and dataset without overwritten

* fix task drop

* run on all available devices
2023-12-01 14:01:14 +08:00
Wang, Jian4
ed0dc57c6e LLM: Add cpu qlora support other models guide (#9567)
* use bf16 flag

* add using baichuan model

* update merge

* remove

* update
2023-12-01 11:18:04 +08:00
Jason Dai
bda404fc8f Update readme (#9575) 2023-11-30 22:45:52 +08:00
Xin Qiu
69c49d21f5 use fused rms norm (#9572)
* use fused rms norm

* meet code review
2023-11-30 21:47:41 +08:00
Yishuo Wang
66f5b45f57 [LLM] add a llama2 gguf example (#9553) 2023-11-30 16:37:17 +08:00
Yishuo Wang
7f6465518a support loading llama tokenizer from gguf model (#9565) 2023-11-30 14:56:12 +08:00
Wang, Jian4
a0a80d232e LLM: Add qlora cpu distributed readme (#9561)
* init readme

* add distributed guide

* update
2023-11-30 13:42:30 +08:00
Chen, Zhentao
c8e0c2ed48 Fixed dumped logs in harness (#9549)
* install transformers==4.34.0

* modify output_path as a directory

* add device and task to output dir parents
2023-11-30 12:47:56 +08:00
Qiyuan Gong
d85a430a8c Using bigdl-llm-init instead of bigdl-nano-init (#9558)
* Replace `bigdl-nano-init` with `bigdl-llm-init`.
* Install `bigdl-llm` instead of `bigdl-nano`.
* Remove nano in README.
2023-11-30 10:10:29 +08:00
Yuwen Hu
34503efa6a Fix cpu pinned embedding (#9556) 2023-11-29 18:27:56 +08:00
binbin Deng
4ff2ca9d0d LLM: fix loss error on Arc (#9550) 2023-11-29 15:16:18 +08:00
Yishuo Wang
65121c7997 support loading q4_1/q5_0/q5_1/q8_0 gguf model (#9546) 2023-11-29 14:40:37 +08:00
Wang, Jian4
b824754256 LLM: Update for cpu qlora mpirun (#9548) 2023-11-29 10:56:17 +08:00
Yuwen Hu
5f5ca38b74 [LLM Doc] Fix api doc rendering error (#9542)
* Fix api rendering error

* Fix python style
2023-11-29 09:17:09 +08:00
Yishuo Wang
a86c6e0b56 [LLM] support loading gguf model (#9544) 2023-11-28 15:51:15 +08:00
Xiangyu Tian
916c338772 fix bugs in vllm length check (#9543) 2023-11-28 11:09:54 +08:00
WeiguangHan
5098bc3544 LLM: enable previous models (#9505)
* enable previous models

* test mistral model

* for test

* run models separately

* test all models

* for test

* revert the llm_performance_test.yaml
2023-11-28 10:21:07 +08:00
Zhao Changmin
e7e0cd3b5e CPU Pinned embedding Layer (#9538)
* CPU Pinned embedding
2023-11-28 09:46:31 +08:00
Guancheng Fu
963a5c8d79 Add vLLM-XPU version's README/examples (#9536)
* test

* test

* fix last kv cache

* add xpu readme

* remove numactl for xpu example

* fix link error

* update max_num_batched_tokens logic

* add explanation

* add xpu environment version requirement

* refine gpu memory

* fix

* fix style
2023-11-28 09:44:03 +08:00
Guancheng Fu
b6c3520748 Remove xformers from vLLM-CPU (#9535) 2023-11-27 11:21:25 +08:00
binbin Deng
2b9c7d2a59 LLM: quick fix alpaca qlora finetuning script (#9534) 2023-11-27 11:04:27 +08:00
Yuwen Hu
11fa3de290 Add setup support of win gpu for bigdl-llm (#9512) 2023-11-24 17:49:21 +08:00
Chen, Zhentao
45820cf3b9 add optimize model option (#9530) 2023-11-24 17:10:49 +08:00
binbin Deng
6bec0faea5 LLM: support Mistral AWQ models (#9520) 2023-11-24 16:20:22 +08:00
Ruonan Wang
914a5a5a27 LLM: fix abnormal Mistral GPU accuracy by updating rms_norm (#9529) 2023-11-24 15:37:50 +08:00
SONG Ge
3d24823cda hot-fix mistral kv_cache (#9528) 2023-11-24 14:33:04 +08:00
Zhao Changmin
42b7a16bc5 Replace torch.bmm with safe_bmm (#9519)
* replace bmm with safe one

* rename args and deprecated warning
2023-11-24 12:16:48 +08:00
Jason Dai
b3178d449f Update README.md (#9525) 2023-11-23 21:45:20 +08:00
Jason Dai
82898a4203 Update GPU example README (#9524) 2023-11-23 21:20:26 +08:00
Jason Dai
064848028f Update README.md (#9523) 2023-11-23 21:16:21 +08:00
Ruonan Wang
b63aae8a8e LLM: add flash attention support for llama (#9518)
* add initial flash attention for llama

* accelerate fp32 first token by changing to fp16 in advance

* support fp32
2023-11-23 18:40:18 +08:00
Guancheng Fu
bf579507c2 Integrate vllm (#9310)
* done

* Rename structure

* add models

* Add structure/sampling_params,sequence

* add input_metadata

* add outputs

* Add policy,logger

* add and update

* add parallelconfig back

* core/scheduler.py

* Add llm_engine.py

* Add async_llm_engine.py

* Add tested entrypoint

* fix minor error

* Fix everything

* fix kv cache view

* fix

* fix

* fix

* format&refine

* remove logger from repo

* try to add token latency

* remove logger

* Refine config.py

* finish worker.py

* delete utils.py

* add license

* refine

* refine sequence.py

* remove sampling_params.py

* finish

* add license

* format

* add license

* refine

* refine

* Refine line too long

* remove exception

* so dumb style-check

* refine

* refine

* refine

* refine

* refine

* refine

* add README

* refine README

* add warning instead of error

* fix padding

* add license

* format

* format

* format fix

* Refine vllm dependency (#1)

vllm dependency clear

* fix licence

* fix format

* fix format

* fix

* adapt LLM engine

* fix

* add license

* fix format

* fix

* Moving README.md to the correct position

* Fix readme.md

* done

* guide for adding models

* fix

* Fix README.md

* Add new model readme

* remove ray-logic

* refactor arg_utils.py

* remove distributed_init_method logic

* refactor entrypoints

* refactor input_metadata

* refactor model_loader

* refactor utils.py

* refactor models

* fix api server

* remove vllm.stucture

* revert by txy 1120

* remove utils

* format

* fix license

* add bigdl model

* Refer to a specific commit

* Change code base

* add comments

* add async_llm_engine comment

* refine

* formatted

* add worker comments

* add comments

* add comments

* fix style

* add changes

---------

Co-authored-by: xiangyuT <xiangyu.tian@intel.com>
Co-authored-by: Xiangyu Tian <109123695+xiangyuT@users.noreply.github.com>
Co-authored-by: leonardozcm <leonardo1997zcm@gmail.com>
2023-11-23 16:46:45 +08:00
Heyang Sun
48fbb1eb94 support ccl (MPI) distributed mode in alpaca_qlora_finetuning_cpu (#9507) 2023-11-23 10:58:09 +08:00
Qiyuan Gong
0f0c6bb631 [LLM] Fix Qwen registered_causal_mask is None (#9513)
* Add registered_causal_mask init based on 2abd8e5777.
2023-11-23 09:28:04 +08:00
Heyang Sun
11fa5a8a0e Fix QLoRA CPU dispatch_model issue about accelerate (#9506) 2023-11-23 08:41:25 +08:00
Heyang Sun
1453046938 install bigdl-llm in deepspeed cpu inference example (#9508) 2023-11-23 08:39:21 +08:00
binbin Deng
86743fb57b LLM: fix transformers version in CPU finetuning example (#9511) 2023-11-22 15:53:07 +08:00
binbin Deng
1a2129221d LLM: support resume from checkpoint in Alpaca QLoRA (#9502) 2023-11-22 13:49:14 +08:00
Ruonan Wang
139e98aa18 LLM: quick fix benchmark (#9509) 2023-11-22 10:19:57 +08:00
WeiguangHan
c2aeb4d1e8 del model after test (#9504) 2023-11-21 18:41:50 +08:00
Ruonan Wang
076d106ef5 LLM: GPU QLoRA update to bf16 to accelerate gradient checkpointing (#9499)
* update to bf16 to accelerate gradient checkpoint

* add utils and fix ut
2023-11-21 17:08:36 +08:00
Cheen Hau, 俊豪
3e39828420 Update all in one benchmark readme (#9496)
* Add gperftools install to all in one benchmark readme

* Update readme
2023-11-21 14:57:16 +08:00
binbin Deng
b7ae572ac3 LLM: update Alpaca QLoRA finetuning example on GPU (#9492) 2023-11-21 14:22:19 +08:00
Wang, Jian4
c5cb3ab82e LLM : Add CPU alpaca qlora example (#9469)
* init

* update xpu to cpu

* update

* update readme

* update example

* update

* add refer

* add guide to train different datasets

* update readme

* update
2023-11-21 09:19:58 +08:00
binbin Deng
96fd26759c LLM: fix QLoRA finetuning example on CPU (#9489) 2023-11-20 14:31:24 +08:00
Xin Qiu
50b01058f1 enable new q4_1 (#9479) 2023-11-17 14:58:57 +08:00
binbin Deng
3dac21ac7b LLM: add more example usages about alpaca qlora on different hardware (#9458) 2023-11-17 09:56:43 +08:00
Heyang Sun
921b263d6a update deepspeed install and run guide in README (#9441) 2023-11-17 09:11:39 +08:00
Zhao Changmin
30abd304a7 LLM: Fix baichuan pre-normalize model tensor assigning issue when loading (#9481)
* No need to normalize when loading
2023-11-16 21:57:28 +08:00
WeiguangHan
bc06bec90e LLM: modify the script to generate html results more accurately (#9445)
* modify the script to generate html results more accurately

* resolve some comments

* revert some codes
2023-11-16 19:50:23 +08:00
Ruonan Wang
c0ef70df02 llm: quick fix of fast_rms_norm (#9480) 2023-11-16 14:42:16 +08:00
Yina Chen
d5263e6681 Add awq load support (#9453)
* Support directly loading GPTQ models from huggingface

* fix style

* fix tests

* change example structure

* address comments

* fix style

* init

* address comments

* add examples

* fix style

* fix style

* fix style

* fix style

* update

* remove

* meet comments

* fix style

---------

Co-authored-by: Yang Wang <yang3.wang@intel.com>
2023-11-16 14:06:25 +08:00
Ruonan Wang
d2c064124a LLM: update rms related usage to suport ipex 2.1 new api (#9466)
* update rms related usage

* fix style
2023-11-16 11:21:50 +08:00
Yuwen Hu
731b0aaade Empty cache after embedding to cpu (#9477) 2023-11-16 10:52:30 +08:00
WeiguangHan
c487b53f21 LLM: only run arc perf test nightly (#9448)
* LLM: only run arc perf test nightly

* deleted unused python scripts

* rebase main
2023-11-15 19:38:14 +08:00
WeiguangHan
0d55bbd9f1 LLM: ajust the order of some models (#9470) 2023-11-15 17:04:59 +08:00
Xin Qiu
170e0072af chatglm2 correctness test (#9450)
* chatglm2 ut

* some update

* chatglm2 path

* fix

* add print
2023-11-15 15:44:56 +08:00
Ruonan Wang
0f82b8c3a0 LLM: update qlora example (#9454)
* update qlora example

* fix loss=0
2023-11-15 09:24:15 +08:00
Chen, Zhentao
dbbdb53a18 fix multiple gpu usage (#9459) 2023-11-14 17:06:27 +08:00
Chen, Zhentao
d19ca21957 patch bigdl-llm model to harness by binding instead of patch file (#9420)
* add run_llb.py

* fix args interpret

* modify outputs

* update workflow

* add license

* test mixed 4 bit

* update readme

* use autotokenizer

* add timeout

* refactor workflow file

* fix working directory

* fix env

* throw exception if some jobs failed

* improve terminal outputs

* Disable var which causes the run to get stuck

* fix unknown precision

* fix key error

* directly output config instead

* rm harness submodule
2023-11-14 12:51:39 +08:00
Yang Wang
51d07a9fd8 Support directly loading gptq models from huggingface (#9391)
* Support directly loading GPTQ models from huggingface

* fix style

* fix tests

* change example structure

* address comments

* fix style

* address comments
2023-11-13 20:48:12 -08:00
WeiguangHan
d109275333 temporarily disable the test of some models (#9434) 2023-11-13 18:50:53 +08:00
Chen, Zhentao
0ecb9efb05 use AutoTokenizer to enable more models (#9446) 2023-11-13 17:47:43 +08:00
Cengguang Zhang
ece5805572 LLM: add chatglm3-6b to latency benchmark test. (#9442) 2023-11-13 17:24:37 +08:00
Chen, Zhentao
5747e2fe69 fix multiple gpu usage of harness (#9444) 2023-11-13 16:53:23 +08:00
Heyang Sun
da6bbc8c11 fix deepspeed dependencies to install (#9400)
* remove redundant parameter from deepspeed install

* Update install.sh

* Update install.sh
2023-11-13 16:42:50 +08:00
Yuwen Hu
4faf5af8f1 [LLM] Add perf test for core on Windows (#9397)
* temporary stop other perf test

* Add framework for core performance test with one test model

* Small fix and add platform control

* Comment out lp for now

* Add missing ymal file

* Small fix

* Fix sed contents

* Small fix

* Small path fixes

* Small fix

* Add update to ftp

* Small upload fix

* add chatglm3-6b

* LLM: add model names

* Keep repo id same as ftp and temporarily make baichuan2 first priority

* change order

* Remove temp if false and separate pr and nightly results

* Small fix

---------

Co-authored-by: jinbridge <2635480475@qq.com>
2023-11-13 13:58:40 +08:00
Zheng, Yi
9b5d0e9c75 Add examples for Yi-6B (#9421) 2023-11-13 10:53:15 +08:00
SONG Ge
2888818b3a [LLM] Support mixed_fp8 on Arc (#9415)
* ut gpu allocation memory fix

* support mix_8bit on arc

* rename mixed_4bit to mixed_fp4 and mixed_8bit to mixed_fp8

* revert unexpected changes

* revert unexpected changes

* unify common logits

* rename in llm xmx_checker

* fix typo error and re-unify
2023-11-13 09:26:30 +08:00
Wang, Jian4
ac7fbe77e2 Update qlora readme (#9416) 2023-11-12 19:29:29 +08:00
Yining Wang
d7334513e1 codeshell: fix wrong links (#9417) 2023-11-12 19:22:33 +08:00
Zheng, Yi
0674146cfb Add cpu and gpu examples of distil-whisper (#9374)
* Add distil-whisper examples

* Fixes based on comments

* Minor fixes

---------

Co-authored-by: Ariadne330 <wyn2000330@126.com>
2023-11-10 16:09:55 +08:00
Ziteng Zhang
ad81b5d838 Update qlora README.md (#9422) 2023-11-10 15:19:25 +08:00
Heyang Sun
b23b91407c fix llm-init on deepspeed missing lib (#9419) 2023-11-10 13:51:24 +08:00
SONG Ge
dfb00e37e9 [LLM] Add model correctness test on ARC for llama and falcon (#9347)
* add correctness test on arc for llama model

* modify layer name

* add falcon ut

* refactor and add ut for falcon model

* modify lambda positions and update docs

* replace loading previous input with the last decode layer's output

* switch lower bound to single model instead of using the common one

* make the code implementation simple

* fix gpu action allocation memory issue
2023-11-10 13:48:57 +08:00
dingbaorong
36fbe2144d Add CPU examples of fuyu (#9393)
* add fuyu cpu examples

* add gpu example

* add comments

* add license

* remove gpu example

* fix inference time
2023-11-09 15:29:19 +08:00
Heyang Sun
df8e4d7889 [LLM] apply allreduce and bias to training in LowBitLinear (#9395) 2023-11-09 14:35:54 +08:00
Wang, Jian4
40cead6b5b LLM: Fix CPU qlora dtype convert issue (#9394) 2023-11-09 14:34:01 +08:00
WeiguangHan
34449cb4bb LLM: add remaining models to the arc perf test (#9384)
* add remaining models

* modify the filepath which stores the test result on ftp server

* resolve some comments
2023-11-09 14:28:42 +08:00
Ruonan Wang
bfca76dfa7 LLM: optimize QLoRA by updating lora convert logic (#9372)
* update convert logic of qlora

* update

* refactor and further improve performance

* fix style

* meet code review
2023-11-08 17:46:49 +08:00
binbin Deng
54d95e4907 LLM: add alpaca qlora finetuning example (#9276) 2023-11-08 16:25:17 +08:00
binbin Deng
97316bbb66 LLM: highlight transformers version requirement in mistral examples (#9380) 2023-11-08 16:05:03 +08:00
Ruonan Wang
7e8fb29b7c LLM: optimize QLoRA by reducing convert time (#9370) 2023-11-08 13:14:34 +08:00
Chen, Zhentao
298b64217e add auto triggered acc test (#9364)
* add auto triggered acc test

* use llama 7b instead

* fix env

* debug download

* fix download prefix

* add cut dirs

* fix env of model path

* fix dataset download

* full job

* source xpu env vars

* use matrix to trigger model run

* reset batch=1

* remove redirect

* remove some trigger

* add task matrix

* add precision list

* test llama-7b-chat

* use /mnt/disk1 to store model and datasets

* remove installation test

* correct downloading path

* fix HF vars

* add bigdl-llm env vars

* rename file

* fix hf_home

* fix script path

* rename as harness evalution

* rerun
2023-11-08 10:22:27 +08:00
Yishuo Wang
bfd9f88f0d [LLM] Use fp32 as dtype when batch_size <=8 and qtype is q4_0/q8_0/fp8 (#9365) 2023-11-08 09:54:53 +08:00
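The rule in the title above, written out as a hedged helper (the function name is an assumption):

```python
import torch

def compute_dtype(batch_size: int, qtype: str) -> torch.dtype:
    if batch_size <= 8 and qtype in ("q4_0", "q8_0", "fp8"):
        return torch.float32
    return torch.float16
```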
WeiguangHan
84ab614aab LLM: add more models and skip runtime error (#9349)
* add more models and skip runtime error

* upgrade transformers

* temporarily removed Mistral-7B-v0.1

* temporarily disable the upload of arc perf result
2023-11-08 09:45:53 +08:00
Heyang Sun
fae6db3ddc [LLM] refactor cpu low-bit forward logic (#9366)
* [LLM] refactor cpu low-bit forward logic

* fix style

* Update low_bit_linear.py

* Update low_bit_linear.py

* refine
2023-11-07 15:09:16 +08:00
Heyang Sun
af94058203 [LLM] Support CPU deepspeed distributed inference (#9259)
* [LLM] Support CPU Deepspeed distributed inference

* Update run_deepspeed.py

* Rename

* fix style

* add new codes

* refine

* remove annotated codes

* refine

* Update README.md

* refine doc and example code
2023-11-06 17:56:42 +08:00
Jin Qiao
f9bf5382ff Fix: add aquila2 in README (#9362) 2023-11-06 16:37:57 +08:00
Jin Qiao
e6b6afa316 LLM: add aquila2 model example (#9356) 2023-11-06 15:47:39 +08:00
Xin Qiu
1420e45cc0 Chatglm2 rope optimization on xpu (#9350) 2023-11-06 13:56:34 +08:00
Yining Wang
9377b9c5d7 add CodeShell CPU example (#9345)
* add CodeShell CPU example

* fix some problems
2023-11-03 13:15:54 +08:00
ZehuaCao
ef83c3302e Use to test llm-performance on spr-perf (#9316)
* Update llm_performance_tests.yml

* Update llm_performance_tests.yml

* Update action.yml

* Create cpu-perf-test.yaml

* Update action.yml

* Update action.yml

* Update llm_performance_tests.yml

* Update llm_performance_tests.yml

* Update llm_performance_tests.yml

* Update llm_performance_tests.yml

* Update llm_performance_tests.yml

* Update llm_performance_tests.yml

* Update llm_performance_tests.yml

* Update llm_performance_tests.yml

* Update llm_performance_tests.yml

* Update llm_performance_tests.yml

* Update llm_performance_tests.yml
2023-11-03 11:17:16 +08:00
Yuwen Hu
a0150bb205 [LLM] Move embedding layer to CPU for iGPU inference (#9343)
* Move embedding layer to CPU for iGPU llm inference

* Empty cache after to cpu

* Remove empty cache as it seems to have some negative effect to first token
2023-11-03 11:13:45 +08:00
Cheen Hau, 俊豪
8f23fb04dc Add inference test for Whisper model on Arc (#9330)
* Add inference test for Whisper model

* Remove unnecessary inference time measurement
2023-11-03 10:15:52 +08:00
Zheng, Yi
63411dff75 Add cpu examples of WizardCoder (#9344)
* Add wizardcoder example

* Minor fixes
2023-11-02 20:22:43 +08:00
dingbaorong
2e3bfbfe1f Add internlm_xcomposer cpu examples (#9337)
* add internlm-xcomposer cpu examples

* use chat

* some fixes

* add license

* address shengsheng's comments

* use demo.jpg
2023-11-02 15:50:02 +08:00
Jin Qiao
97a38958bd LLM: add CodeLlama CPU and GPU examples (#9338)
* LLM: add codellama CPU pytorch examples

* LLM: add codellama CPU transformers examples

* LLM: add codellama GPU transformers examples

* LLM: add codellama GPU pytorch examples

* LLM: add codellama in readme

* LLM: add LLaVA link
2023-11-02 15:34:25 +08:00
Chen, Zhentao
d4dffbdb62 Merge harness (#9319)
* add harness patch and llb script

* add readme

* add license

* use patch instead

* update readme

* rename tests to evaluation

* fix typo

* remove nano dependency

* add original harness link

* rename title of usage

* rename BigDLGPULM as BigDLLM

* empty commit to rerun job
2023-11-02 15:14:19 +08:00
Zheng, Yi
63b2556ce2 Add cpu examples of skywork (#9340) 2023-11-02 15:10:45 +08:00
dingbaorong
f855a864ef add llava gpu example (#9324)
* add llava gpu example

* use 7b model

* fix typo

* add in README
2023-11-02 14:48:29 +08:00
Ziteng Zhang
dd3cf2f153 LLM: Add python 3.10 & 3.11 UT
2023-11-02 14:09:29 +08:00
Wang, Jian4
149146004f LLM: Add qlora finetunning CPU example (#9275)
* add qlora finetunning example

* update readme

* update example

* remove merge.py and update readme
2023-11-02 09:45:42 +08:00
WeiguangHan
9722e811be LLM: add more models to the arc perf test (#9297)
* LLM: add more models to the arc perf test

* remove some old models

* install some dependencies
2023-11-01 16:56:32 +08:00
Jin Qiao
6a128aee32 LLM: add ui for portable-zip (#9262) 2023-11-01 15:36:59 +08:00
Jasonzzt
cb7ef38e86 rerun 2023-11-01 15:30:34 +08:00
Jasonzzt
ba148ff3ff test py311 2023-11-01 14:08:49 +08:00
Yishuo Wang
726203d778 [LLM] Replace Embedding layer to fix it on CPU (#9254) 2023-11-01 13:58:10 +08:00
Jasonzzt
7c7a7f2ec1 spr & arc ut with python 3.9 & 3.10 & 3.11 2023-11-01 13:17:13 +08:00
Yang Wang
e1bc18f8eb fix import ipex problem (#9323)
* fix import ipex problem

* fix style
2023-10-31 20:31:34 -07:00
Cengguang Zhang
9f3d4676c6 LLM: Add qwen-vl gpu example (#9290)
* create qwen-vl gpu example.

* add readme.

* fix.

* change input figure and update outputs.

* add qwen-vl pytorch model gpu example.

* fix.

* add readme.
2023-11-01 11:01:39 +08:00
Ruonan Wang
7e73c354a6 LLM: decoupling bigdl-llm and bigdl-nano (#9306) 2023-11-01 11:00:54 +08:00
Yina Chen
2262ae4d13 Support MoFQ4 on arc (#9301)
* init

* update

* fix style

* fix style

* fix style

* meet comments
2023-11-01 10:59:46 +08:00
binbin Deng
8ef8e25178 LLM: improve response speed in multi-turn chat (#9299)
* update

* fix stop word and add chatglm2 support

* remove system prompt
2023-11-01 10:30:44 +08:00
Cengguang Zhang
d4ab5904ef LLM: Add python 3.10 llm UT (#9302)
* add py310 test for llm-unit-test.

* add py310 llm-unit-tests

* add llm-cpp-build-py310

* test

* test

* test.

* test

* test

* fix deactivate.

* fix

* fix.

* fix

* test

* test

* test

* add build chatglm for win.

* test.

* fix
2023-11-01 10:15:32 +08:00
WeiguangHan
03aa368776 LLM: add the comparison between latest arc perf test and last one (#9296)
* add the comparison between latest test and last one to html

* resolve some comments

* modify some code logics
2023-11-01 09:53:02 +08:00
Jin Qiao
96f8158fe2 LLM: adjust dolly v2 GPU example README (#9318) 2023-11-01 09:50:22 +08:00
Jin Qiao
c44c6dc43a LLM: add chatglm3 examples (#9305) 2023-11-01 09:50:05 +08:00
Xin Qiu
06447a3ef6 add malloc and intel openmp to llm deps (#9322) 2023-11-01 09:47:45 +08:00
Cheen Hau, 俊豪
d638b93dfe Add test script and workflow for qlora fine-tuning (#9295)
* Add test script and workflow for qlora fine-tuning

* Test fix export model

* Download dataset

* Fix export model issue

* Reduce number of training steps

* Rename script

* Correction
2023-11-01 09:39:53 +08:00
Ruonan Wang
d383ee8efb LLM: update QLoRA example about accelerate version(#9314) 2023-10-31 13:54:38 +08:00
Cheen Hau, 俊豪
cee9eaf542 [LLM] Fix llm arc ut oom (#9300)
* Move model to cpu after testing so that gpu memory is deallocated

* Add code comment

---------

Co-authored-by: sgwhat <ge.song@intel.com>
2023-10-30 14:38:34 +08:00
dingbaorong
ee5becdd61 use coco image in Qwen-VL (#9298)
* use coco image

* add output

* address yuwen's comments
2023-10-30 14:32:35 +08:00
Yang Wang
163d033616 Support qlora in CPU (#9233)
* support qlora in CPU

* revert example

* fix style
2023-10-27 14:01:15 -07:00
Yang Wang
8838707009 Add deepspeed autotp example readme (#9289)
* Add deepspeed autotp example readme

* change word
2023-10-27 13:04:38 -07:00
dingbaorong
f053688cad add cpu example of LLaVA (#9269)
* add LLaVA cpu example

* Small text updates

* update link

---------

Co-authored-by: Yuwen Hu <yuwen.hu@intel.com>
2023-10-27 18:59:20 +08:00
Zheng, Yi
7f2ad182fd Minor Fixes of README (#9294) 2023-10-27 18:25:46 +08:00
Zheng, Yi
1bff54a378 Display demo.jpg in the README.md of HuggingFace Transformers Agent (#9293)
* Display demo.jpg

* remove demo.jpg
2023-10-27 18:00:03 +08:00
Zheng, Yi
a4a1dec064 Add a cpu example of HuggingFace Transformers Agent (use vicuna-7b-v1.5) (#9284)
* Add examples of HF Agent

* Modify folder structure and add link of demo.jpg

* Fixes of readme

* Merge applications and Applications
2023-10-27 17:14:12 +08:00
Guoqiong Song
aa319de5e8 Add streaming-llm using llama2 on CPU (#9265)
Enable streaming-llm to let the model take infinite inputs; tested on desktop and SPR10
2023-10-27 01:30:39 -07:00
Cheen Hau, 俊豪
6c9ae420a5 Add regression test for optimize_model on gpu (#9268)
* Add MPT model to transformer API test

* Add regression test for optimize_model on gpu.

---------

Co-authored-by: sgwhat <ge.song@intel.com>
2023-10-27 09:23:19 +08:00
Cengguang Zhang
44b5fcc190 LLM: fix pretraining_tp argument issue. (#9281) 2023-10-26 18:43:58 +08:00
WeiguangHan
6b2a32eba2 LLM: add missing function for PyTorch InternLM model (#9285) 2023-10-26 18:05:23 +08:00
Yina Chen
f879c48f98 fp8 convert use ggml code (#9277) 2023-10-26 17:03:29 +08:00
Yina Chen
e2264e8845 Support arc fp4 (#9266)
* support arc fp4

* fix style

* fix style
2023-10-25 15:42:48 +08:00
Cheen Hau, 俊豪
ab40607b87 Enable unit test workflow on Arc (#9213)
* Add gpu workflow and a transformers API inference test

* Set device-specific env variables in script instead of workflow

* Fix status message

---------

Co-authored-by: sgwhat <ge.song@intel.com>
2023-10-25 15:17:18 +08:00
SONG Ge
160a1e5ee7 [WIP] Add UT for Mistral Optimized Model (#9248)
* add ut for mistral model

* update

* fix model path

* upgrade transformers version for mistral model

* refactor correctness ut for mistral model

* refactor mistral correctness ut

* revert test_optimize_model back

* remove mistral from test_optimize_model

* add to revert transformers version back to 4.31.0
2023-10-25 15:14:17 +08:00
Yang Wang
067c7e8098 Support deepspeed AutoTP (#9230)
* Support deepspeed

* add test script

* refactor convert

* refine example

* refine

* refine example

* fix style

* refine example and adapt to latest ipex

* fix style
2023-10-24 23:46:28 -07:00
Yining Wang
a6a8afc47e Add qwen vl CPU example (#9221)
* eee

* add examples on CPU and GPU

* fix

* fix

* optimize model examples

* add Qwen-VL-Chat CPU example

* Add Qwen-VL CPU example

* fix optimize problem

* fix error

* Have updated, benchmark fix removed from this PR

* add generate API example

* Change formats in qwen-vl example

* Add CPU transformer int4 example for qwen-vl

* fix repo-id problem and add Readme

* change picture url

* Remove unnecessary file

---------

Co-authored-by: Yuwen Hu <yuwen.hu@intel.com>
2023-10-25 13:22:12 +08:00
binbin Deng
f597a9d4f5 LLM: update perf test configuration (#9264) 2023-10-25 12:35:48 +08:00
binbin Deng
770ac70b00 LLM: add low_bit option in benchmark scripts (#9257) 2023-10-25 10:27:48 +08:00
WeiguangHan
ec9195da42 LLM: using html to visualize the perf result for Arc (#9228)
* LLM: using html to visualize the perf result for Arc

* deploy the html file

* add python license

* resolve some comments
2023-10-24 18:05:25 +08:00
Jin Qiao
90162264a3 LLM: replace torch.float32 with auto type (#9261) 2023-10-24 17:12:13 +08:00
SONG Ge
bd5215d75b [LLM] Reimplement chatglm fuse rms optimization (#9260)
* re-implement chatglm rope rms

* update
2023-10-24 16:35:12 +08:00
dingbaorong
5a2ce421af add cpu and gpu examples of flan-t5 (#9171)
* add cpu and gpu examples of flan-t5

* address yuwen's comments
* Add explanation why we add modules to not convert
* Refine prompt and add a translation example
* Add an empty line at the end of files

* add examples of flan-t5 using optimize_model api

* address bin's comments

* address binbin's comments

* add flan-t5 in readme
2023-10-24 15:24:01 +08:00
Yining Wang
4a19f50d16 phi-1_5 CPU and GPU examples (#9173)
* eee

* add examples on CPU and GPU

* fix

* fix

* optimize model examples

* have updated

* Warmup and configs added

* Update two tables
2023-10-24 15:08:04 +08:00
SONG Ge
bfc1e2d733 add fused rms optimization for chatglm model (#9256) 2023-10-24 14:40:58 +08:00
Ruonan Wang
b15656229e LLM: fix benchmark issue (#9255) 2023-10-24 14:15:05 +08:00
Guancheng Fu
f37547249d Refine README/CICD (#9253) 2023-10-24 12:56:03 +08:00
binbin Deng
db37edae8a LLM: update langchain api document page (#9222) 2023-10-24 10:13:41 +08:00
Xin Qiu
0c5055d38c add position_ids and fuse embedding for falcon (#9242)
* add position_ids for falcon

* add cpu

* add cpu

* add license
2023-10-24 09:58:20 +08:00
Wang, Jian4
c14a61681b Add load low-bit in model-serving for reduce EPC (#9239)
* init load low-bit

* fix

* fix
2023-10-23 11:28:20 +08:00
Yina Chen
0383306688 Add arc fp8 support (#9232)
* add fp8 support

* add log

* fix style
2023-10-20 17:15:07 +08:00
Yang Wang
118249b011 support transformers 4.34+ for llama (#9229) 2023-10-19 22:36:30 -07:00
Chen, Zhentao
5850241423 correct Readme GPU example and API docstring (#9225)
* update readme to correct GPU usage

* update from_pretrained supported low bit options

* fix style check
2023-10-19 16:08:47 +08:00
WeiguangHan
f87f67ee1c LLM: arc perf test for some popular models (#9188) 2023-10-19 15:56:15 +08:00
Yang Wang
b0ddde0410 Fix removing convert dtype bug (#9216)
* Fix removing convert dtype bug

* fix style
2023-10-18 11:24:22 -07:00
Ruonan Wang
942d6418e7 LLM: fix chatglm kv cache (#9215) 2023-10-18 19:09:53 +08:00
SONG Ge
0765f94770 [LLM] Optimize kv_cache for mistral model family (#9189)
* add kv_cache optimization for mistral model

* kv_cache optimize for mistral

* update style

* update
2023-10-18 15:13:37 +08:00
Ruonan Wang
3555ebc148 LLM: fix wrong length in gptj kv_cache optimization (#9210)
* fix wrong length in gptj kv cache

* update
2023-10-18 14:59:02 +08:00
Shengsheng Huang
6dad8d16df optimize NormHead for Baichuan2 (#9205)
* optimize NormHead for Baichuan2

* fix ut and change name

* rename functions
2023-10-18 14:05:07 +08:00
Jin Qiao
a3b664ed03 LLM: add GPU More-Data-Types and Save/Load example (#9199) 2023-10-18 13:13:45 +08:00
WeiguangHan
b9194c5786 LLM: skip some model tests using certain api (#9163)
* LLM: Skip some model tests using certain api

* initialize variable named result
2023-10-18 09:39:27 +08:00
Ruonan Wang
09815f7064 LLM: fix RMSNorm optimization of Baichuan2-13B/Baichuan-13B (#9204)
* fix rmsnorm of baichuan2-13B

* update baichuan1-13B too

* fix style
2023-10-17 18:40:34 +08:00
Jin Qiao
d7ce78edf0 LLM: fix portable zip README image link (#9201)
* LLM: fix portable zip readme img link

* LLM: make README first image center align
2023-10-17 16:38:22 +08:00
Cheen Hau, 俊豪
66c2e45634 Add unit tests for optimized model correctness (#9151)
* Add test to check correctness of optimized model

* Refactor optimized model test

* Use models in llm-unit-test

* Use AutoTokenizer for bloom

* Print out each passed test

* Remove unused tokenizer from import
2023-10-17 14:46:41 +08:00
Jin Qiao
d946bd7c55 LLM: add CPU More-Data-Types and Save-Load examples (#9179) 2023-10-17 14:38:52 +08:00
Ruonan Wang
c0497ab41b LLM: support kv_cache optimization for Qwen-VL-Chat (#9193)
* support qwen_vl_chat

* fix style
2023-10-17 13:33:56 +08:00
binbin Deng
1cd9ab15b8 LLM: fix ChatGLMConfig check (#9191) 2023-10-17 11:52:56 +08:00
Yang Wang
7160afd4d1 Support XPU DDP training and autocast for LowBitMatmul (#9167)
* support autocast in low bit matmul

* Support XPU DDP training

* fix amp
2023-10-16 20:47:19 -07:00
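(Editor's sketch: a toy illustration of the XPU autocast path this commit enables, with a stand-in `nn.Linear` in place of a real low-bit model; assumes an IPEX XPU build where `torch.xpu.amp.autocast` is available.)

```python
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401, registers the "xpu" device

# Toy stand-ins; in practice this would be a bigdl-llm low-bit model and a real batch.
model = torch.nn.Linear(16, 16).to("xpu")
x = torch.randn(4, 16, device="xpu")

with torch.xpu.amp.autocast(dtype=torch.bfloat16):  # bf16 autocast on XPU
    loss = model(x).sum()
loss.backward()
```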
Ruonan Wang
77afb8796b LLM: fix convert of chatglm (#9190) 2023-10-17 10:48:13 +08:00
dingbaorong
af3b575c7e expose modules_to_not_convert in optimize_model (#9180)
* expose modules_to_not_convert in optimize_model

* some fixes
2023-10-17 09:50:26 +08:00
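(Editor's sketch: what the newly exposed parameter looks like in use — skip quantizing numerically sensitive layers, e.g. flan-t5's `lm_head` as in the flan-t5 example above, while converting the rest; the model name is illustrative.)

```python
from transformers import AutoModelForSeq2SeqLM
from bigdl.llm import optimize_model

# Hedged sketch: leave lm_head in full precision, quantize the remaining layers.
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")
model = optimize_model(model, modules_to_not_convert=["lm_head"])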
Cengguang Zhang
5ca8a851e9 LLM: add fuse optimization for Mistral. (#9184)
* add fuse optimization for mistral.

* fix.

* fix

* fix style.

* fix.

* fix error.

* fix style.

* fix style.
2023-10-16 16:50:31 +08:00
Jiao Wang
49e1381c7f update rope (#9155) 2023-10-15 21:51:45 -07:00
Jason Dai
b192a8032c Update llm-readme (#9176) 2023-10-16 10:54:52 +08:00
binbin Deng
a164c24746 LLM: add kv_cache optimization for chatglm2-6b-32k (#9165) 2023-10-16 10:43:15 +08:00
Yang Wang
7a2de00b48 Fixes for xpu Bf16 training (#9156)
* Support bf16 training

* Use a stable transformer version

* remove env

* fix style
2023-10-14 21:28:59 -07:00
Cengguang Zhang
51a133de56 LLM: add fuse rope and norm optimization for Baichuan. (#9166)
* add fuse rope optimization.

* add rms norm optimization.
2023-10-13 17:36:52 +08:00
Jin Qiao
db7f938fdc LLM: add replit and starcoder to gpu pytorch model example (#9154) 2023-10-13 15:44:17 +08:00
Jin Qiao
797b156a0d LLM: add dolly-v1 and dolly-v2 to gpu pytorch model example (#9153) 2023-10-13 15:43:35 +08:00
Yishuo Wang
259cbb4126 [LLM] add initial bigdl-llm-init (#9150) 2023-10-13 15:31:45 +08:00
Cengguang Zhang
433f408081 LLM: Add fuse rope and norm optimization for Aquila. (#9161)
* add fuse norm optimization.

* add fuse rope optimization
2023-10-13 14:18:37 +08:00
SONG Ge
e7aa67e141 [LLM] Add rope optimization for internlm (#9159)
* add rope and norm optimization for internlm and gptneox

* revert gptneox back and split from PR #9155

* add norm_forward

* style fix

* update

* update
2023-10-13 14:18:28 +08:00
Jin Qiao
f754ab3e60 LLM: add baichuan and baichuan2 to gpu pytorch model example (#9152) 2023-10-13 13:44:31 +08:00
Ruonan Wang
b8aee7bb1b LLM: Fix Qwen kv_cache optimization (#9148)
* first commit

* ut pass

* accelerate rotate half by using common util function

* fix style
2023-10-12 15:49:42 +08:00
binbin Deng
69942d3826 LLM: fix model check before attention optimization (#9149) 2023-10-12 15:21:51 +08:00
JIN Qiao
1a1ddc4144 LLM: Add Replit CPU and GPU example (#9028) 2023-10-12 13:42:14 +08:00
JIN Qiao
d74834ff4c LLM: add gpu pytorch-models example llama2 and chatglm2 (#9142) 2023-10-12 13:41:48 +08:00
Ruonan Wang
4f34557224 LLM: support num_beams in all-in-one benchmark (#9141)
* support num_beams

* fix
2023-10-12 13:35:12 +08:00
Ruonan Wang
62ac7ae444 LLM: fix inaccurate input / output tokens of current all-in-one benchmark (#9137)
* first fix

* fix all apis

* fix
2023-10-11 17:13:34 +08:00
binbin Deng
eb3fb18eb4 LLM: improve PyTorch API doc (#9128) 2023-10-11 15:03:39 +08:00
binbin Deng
995b0f119f LLM: update some gpu examples (#9136) 2023-10-11 14:23:56 +08:00
Ruonan Wang
1c8d5da362 LLM: fix llama tokenizer for all-in-one benchmark (#9129)
* fix tokenizer for gpu benchmark

* fix ipex fp16

* meet code review

* fix
2023-10-11 13:39:39 +08:00
binbin Deng
2ad67a18b1 LLM: add mistral examples (#9121) 2023-10-11 13:38:15 +08:00
Ruonan Wang
1363e666fc LLM: update benchmark_util.py for beam search (#9126)
* update reorder_cache

* fix
2023-10-11 09:41:53 +08:00
Guoqiong Song
e8c5645067 add LLM example of aquila on GPU (#9056)
* aquila, dolly-v1, dolly-v2, vicuna
2023-10-10 17:01:35 -07:00
Ruonan Wang
388f688ef3 LLM: update setup.py to add bigdl-core-xe package (#9122) 2023-10-10 15:02:48 +08:00
Zhao Changmin
1709beba5b LLM: Explicitly close pickle file pointer before removing temporary directory (#9120)
* fp close
2023-10-10 14:57:23 +08:00
Yuwen Hu
0e09dd926b [LLM] Fix example test (#9118)
* Update llm example test link due to example layout change

* Add better change detection
2023-10-10 13:24:18 +08:00
Ruonan Wang
ad7d9231f5 LLM: add benchmark script for Max gpu and ipex fp16 gpu (#9112)
* add pvc bash

* meet code review

* rename to run-max-gpu.sh
2023-10-10 10:18:41 +08:00
binbin Deng
e4d1457a70 LLM: improve transformers style API doc (#9113) 2023-10-10 09:31:00 +08:00
Yuwen Hu
65212451cc [LLM] Small update to performance tests (#9106)
* small updates to llm performance tests regarding model handling

* Small fix
2023-10-09 16:55:25 +08:00
Zhao Changmin
edccfb2ed3 LLM: Check model device type (#9092)
* check model device
2023-10-09 15:49:15 +08:00
binbin Deng
5e9962b60e LLM: update example layout (#9046) 2023-10-09 15:36:39 +08:00
Yina Chen
4c4f8d1663 [LLM]Fix Arc falcon abnormal output issue (#9096)
* update

* update

* fix error & style

* fix style

* update train

* to input_seq_size
2023-10-09 15:09:37 +08:00
Zhao Changmin
548e4dd5fe LLM: Adapt transformers models for optimize model SL (#9022)
* LLM: Adapt transformers model for SL
2023-10-09 11:13:44 +08:00
Ruonan Wang
f64257a093 LLM: basic api support for esimd fp16 (#9067)
* basic api support for fp16

* fix style

* fix

* fix error and style

* fix style

* meet code review

* update based on comments
2023-10-09 11:05:17 +08:00
JIN Qiao
65373d2a8b LLM: adjust portable zip content (#9054)
* LLM: adjust portable zip content

* LLM: adjust portable zip README
2023-10-09 10:51:19 +08:00
Xin Qiu
b3e94a32d4 change log4error import (#9098) 2023-10-08 09:23:28 +08:00
Kai Huang
78ea7ddb1c Combine apply_rotary_pos_emb for gpt-neox (#9074) 2023-10-07 16:27:46 +08:00
Yang Wang
36dd4afd61 Fix llama when rope scaling is not None (#9086)
* Fix llama when rope scaling is not None

* fix style

* fix style
2023-10-06 13:27:37 -07:00
Yang Wang
fcb1c618a0 using bigdl-llm fused rope for llama (#9066)
* optimize llama xpu rope

* fix bug

* fix style

* refine append cache

* remove check

* do not cache cos sin

* remove unnecessary changes

* clean up

* fix style

* check for training
2023-10-06 09:57:29 -07:00
Jiao Wang
aefa5a5bfe Qwen kv cache (#9079)
* qwen and aquila

* update

* update

* style
2023-10-05 11:59:17 -07:00
Jiao Wang
d5ca1f32b6 Aquila KV cache optimization (#9080)
* update

* update

* style
2023-10-05 11:10:57 -07:00
Yang Wang
88565c76f6 add export merged model example (#9018)
* add export merged model example

* add sources

* add script

* fix style
2023-10-04 21:18:52 -07:00
Yang Wang
0cd8f1c79c Use ipex fused rms norm for llama (#9081)
* also apply rmsnorm

* fix cpu
2023-10-04 21:04:55 -07:00
Cengguang Zhang
fb883100e7 LLM: support chatglm-18b convert attention forward in benchmark scripts. (#9072)
* add chatglm-18b convert.

* fix if statement.

* fix
2023-09-28 14:04:52 +08:00
Yishuo Wang
6de2189e90 [LLM] fix chatglm main choice (#9073) 2023-09-28 11:23:37 +08:00
Cengguang Zhang
ad62c58b33 LLM: Enable jemalloc in benchmark scripts. (#9058)
* enable jemalloc.

* fix readme.
2023-09-26 15:37:49 +08:00
Cengguang Zhang
b4a1266ef0 [WIP] LLM: add kv cache support for internlm. (#9036)
* LLM: add kv cache support for internlm

* add internlm apply_rotary_pos_emb

* fix.

* fix style.
2023-09-25 14:16:59 +08:00
Ruonan Wang
975da86e00 LLM: fix gptneox kv cache (#9044) 2023-09-25 13:03:57 +08:00
Cengguang Zhang
26213a5829 LLM: Change benchmark bf16 load format. (#9035)
* LLM: Change benchmark bf16 load format.

* comment on bf16 chatglm.

* fix.
2023-09-22 17:38:38 +08:00
JinBridge
023555fb1f LLM: Add one-click installer for Windows (#8999)
* LLM: init one-click installer for windows

* LLM: fix typo in one-click installer readme

* LLM: one-click installer try except logic

* LLM: one-click installer add dependency

* LLM: one-click installer adjust README.md

* LLM: one-click installer split README and add zip compress in setup.bat

* LLM: one-click installer verified internlm and llama2 and replace gif

* LLM: remove one-click installer images

* LLM: finetune the one-click installer README.md

* LLM: fix typo in one-click installer README.md

* LLM: rename one-click installer to portable executable

* LLM: rename other places to portable executable

* LLM: rename the zip filename to executable

* LLM: update .gitignore

* LLM: add colorama to setup.bat
2023-09-22 14:46:30 +08:00
Jiao Wang
028a6d9383 MPT model optimize for long sequence (#9020)
* mpt_long_seq

* update

* update

* update

* style

* style2

* update
2023-09-21 21:27:23 -07:00
Ruonan Wang
b943d73844 LLM: refactor kv cache (#9030)
* refactor utils

* meet code review; update all models

* small fix
2023-09-21 21:28:03 +08:00
Cengguang Zhang
868511cf02 LLM: fix kv cache issue of bloom and falcon. (#9029) 2023-09-21 18:12:20 +08:00
Ruonan Wang
bf51ec40b2 LLM: Fix empty cache (#9024)
* fix

* fix

* update example
2023-09-21 17:16:07 +08:00
Yina Chen
714884414e fix error (#9025) 2023-09-21 16:42:11 +08:00
binbin Deng
edb225530b add bark (#9016) 2023-09-21 12:24:58 +08:00
SONG Ge
fa47967583 [LLM] Optimize kv_cache for gptj model family (#9010)
* optimize gptj model family attention

* add license and comment for dolly-model

* remove xpu mentioned

* remove useless info

* code style

* style fix

* code style in gptj fix

* remove gptj arch

* move apply_rotary_pos_emb into utils

* kv_seq_length update

* use hidden_states instead of query layer to reach batch size
2023-09-21 10:42:08 +08:00
Cengguang Zhang
b3cad7de57 LLM: add bloom kv cache support (#9012)
* LLM: add bloom kv cache support

* fix style.
2023-09-20 21:10:53 +08:00
Kai Huang
156af15d1e Add NF3 (#9008)
* add nf3

* grammar
2023-09-20 20:03:07 +08:00
Kai Huang
6981745fe4 Optimize kv_cache for gpt-neox model family (#9015)
* override gptneox

* style

* move to utils

* revert
2023-09-20 19:59:19 +08:00
JinBridge
48b503c630 LLM: add example of aquila (#9006)
* LLM: add example of aquila

* LLM: replace AquilaChat with Aquila

* LLM: shorten prompt of aquila example
2023-09-20 15:52:56 +08:00
Cengguang Zhang
735a17f7b4 LLM: add kv cache to falcon family. (#8995)
* add kv cache to falcon family.

* fix: import error.

* refactor

* update comments.

* add two version falcon attention forward.

* fix

* fix.

* fix.

* fix.

* fix style.

* fix style.
2023-09-20 15:36:30 +08:00
Ruonan Wang
94a7f8917b LLM: fix optimized kv cache for baichuan-13b (#9009)
* fix baichuan 13b

* fix style

* fix

* fix style
2023-09-20 15:30:14 +08:00
Yang Wang
c88f6ec457 Experiment XPU QLora Finetuning (#8937)
* Support xpu finetuning

* support xpu finetuning

* fix style

* fix style

* fix style

* refine example

* add readme

* refine readme

* refine api

* fix fp16

* fix example

* refactor

* fix style

* fix compute type

* add qlora

* refine training args

* fix example

* fix style

* fast path for inference

* address comments

* refine readme

* revert lint
2023-09-19 10:15:44 -07:00
Jason Dai
51518e029d Update llm readme (#9005) 2023-09-19 20:01:33 +08:00
Ruonan Wang
249386261c LLM: add Baichuan2 cpu example (#9002)
* add baichuan2 cpu examples

* add link

* update prompt
2023-09-19 18:08:30 +08:00
Ruonan Wang
004c45c2be LLM: Support optimized kv_cache for baichuan family (#8997)
* add initial support for baichuan attention

* support baichuan1

* update based on comment

* update based on comment

* support baichuan2

* update link, change how to judge baichuan2

* fix style

* add model parameter for pos emb

* update based on comment
2023-09-19 15:38:54 +08:00
Xin Qiu
37bb0cbf8f Speed up gpt-j in gpubenchmark (#9000)
* Speedup gpt-j in gpubenchmark

* meet code review
2023-09-19 14:22:28 +08:00
Zhao Changmin
2a05581da7 LLM: Apply low_cpu_mem_usage algorithm on optimize_model API (#8987)
* low_cpu_mem_usage
2023-09-18 21:41:42 +08:00
Cengguang Zhang
8299b68fea update readme. (#8996) 2023-09-18 17:06:15 +08:00
binbin Deng
c1d25a51a8 LLM: add optimize_model example for bert (#8975) 2023-09-18 16:18:35 +08:00
Cengguang Zhang
74338fd291 LLM: add auto torch dtype in benchmark. (#8981) 2023-09-18 15:48:25 +08:00
Ruonan Wang
cabe7c0358 LLM: add baichuan2 example for arc (#8994)
* add baichuan2 examples

* add link

* small fix
2023-09-18 14:32:27 +08:00
binbin Deng
0a552d5bdc LLM: fix installation on windows (#8989) 2023-09-18 11:14:54 +08:00
Ruonan Wang
32716106e0 update use_cache=True (#8986) 2023-09-18 07:59:33 +08:00
Xin Qiu
64ee1d7689 update run_transformer_int4_gpu (#8983)
* xpuperf

* update run.py

* clean up

* update

* update

* meet code review
2023-09-15 15:10:04 +08:00
Zhao Changmin
16b9412e80 tie_word_embeddings (#8977)
2023-09-15 10:17:09 +08:00
JinBridge
c12b8f24b6 LLM: add use_cache=True for all gpu examples (#8971) 2023-09-15 09:54:38 +08:00
Guancheng Fu
d1b62ef2f2 [bigdl-llm] Remove serving-dep from all_requires (#8980)
* Remove serving-dep from all_requires

* pin fastchat version
2023-09-14 16:59:24 +08:00
Yishuo Wang
bcf456070c fix bloom-176b int overflow (#8973) 2023-09-14 14:37:57 +08:00
Ruonan Wang
dd57623650 LLM: reduce GPU memory for optimize_model=True (#8965)
* reduce gpu memory for llama & chatglm

* change to device type
2023-09-13 17:27:09 +08:00
binbin Deng
be29c75c18 LLM: refactor gpu examples (#8963)
* restructure

* change to hf-transformers-models/
2023-09-13 14:47:47 +08:00
Cengguang Zhang
cca84b0a64 LLM: update llm benchmark scripts. (#8943)
* update llm benchmark scripts.

* change tranformer_bf16 to pytorch_autocast_bf16.

* add autocast in transformer int4.

* revert autocast.

* add "pytorch_autocast_bf16" to doc

* fix comments.
2023-09-13 12:23:28 +08:00
SONG Ge
7132ef6081 [LLM Doc] Add optimize_model doc in transformers api (#8957)
* add optimize in from_pretrained

* add api doc for load_low_bit

* update api docs following comments

* update api docs

* update

* reword comments
2023-09-13 10:42:33 +08:00
Zhao Changmin
c32c260ce2 LLM: Add save/load API in optimize_model to support general pytorch model (#8956)
* support hf format SL
2023-09-13 10:22:00 +08:00
Ruonan Wang
4de73f592e LLM: add gpu example of chinese-llama-2-7b (#8960)
* add gpu example of chinese-llama2

* update model name and link

* update name
2023-09-13 10:16:51 +08:00
Guancheng Fu
0bf5857908 [LLM] Integrate FastChat as a serving framework for BigDL-LLM (#8821)
* Finish changing

* format

* add licence

* Add licence

* fix

* fix

* Add xpu support for fschat

* Fix patch

* Also install webui dependencies

* change setup.py dependency installs

* fix

* format

* final test
2023-09-13 09:28:05 +08:00
Yuwen Hu
cb534ed5c4 [LLM] Add Arc demo gif to readme and readthedocs (#8958)
* Add arc demo in main readme

* Small style fix

* Realize using table

* Update based on comments

* Small update

* Try to solve the height problem

* Small fix

* Update demo for inner llm readme

* Update demo video for readthedocs

* Small fix

* Update based on comments
2023-09-13 09:23:52 +08:00
Zhao Changmin
dcaa4dc130 LLM: Support GQA on llama kvcache (#8938)
* support GQA
2023-09-12 12:18:40 +08:00
binbin Deng
2d81521019 LLM: add optimize_model examples for llama2 and chatglm (#8894)
* add llama2 and chatglm optimize_model examples

* update default usage

* update command and some descriptions

* move folder and remove general_int4 descriptions

* change folder name
2023-09-12 10:36:29 +08:00
Zhao Changmin
f00c442d40 fix accelerate (#8946)
Co-authored-by: leonardozcm <leonardozcm@gmail.com>
2023-09-12 09:27:58 +08:00
Yang Wang
16761c58be Make llama attention stateless (#8928)
* Make llama attention stateless

* fix style

* fix chatglm

* fix chatglm xpu
2023-09-11 18:21:50 -07:00
Zhao Changmin
e62eda74b8 refine (#8912)
Co-authored-by: leonardozcm <leonardozcm@gmail.com>
2023-09-11 16:40:33 +08:00
Yina Chen
df165ad165 init (#8933) 2023-09-11 14:30:55 +08:00
Ruonan Wang
b3f5dd5b5d LLM: update q8 convert xpu&cpu (#8930) 2023-09-08 16:01:17 +08:00
Yina Chen
33d75adadf [LLM]Support q5_0 on arc (#8926)
* support q5_0

* delete

* fix style
2023-09-08 15:52:36 +08:00
Yuwen Hu
ca35c93825 [LLM] Fix langchain UT (#8929)
* Change dependency version for langchain uts

* Downgrade pandas version instead; and update example readme accordingly
2023-09-08 13:51:04 +08:00
Xin Qiu
ea0853c0b5 update benchmark_utils readme (#8925)
* update readme

* meet code review
2023-09-08 10:30:26 +08:00
Yang Wang
ee98cdd85c Support latest transformer version (#8923)
* Support latest transformer version

* fix style
2023-09-07 19:01:32 -07:00
Yang Wang
25428b22b4 Fix chatglm2 attention and kv cache (#8924)
* fix chatglm2 attention

* fix bf16 bug

* make model stateless

* add utils

* cleanup

* fix style
2023-09-07 18:54:29 -07:00
Yina Chen
b209b8f7b6 [LLM] Fix arc qtype != q4_0 generate issue (#8920)
* Fix arc precision!=q4_0 generate issue

* meet comments
2023-09-07 08:56:36 -07:00
Cengguang Zhang
3d2efe9608 LLM: update llm latency benchmark. (#8922) 2023-09-07 19:00:19 +08:00
binbin Deng
7897eb4b51 LLM: add benchmark scripts on GPU (#8916) 2023-09-07 18:08:17 +08:00
Xin Qiu
d8a01d7c4f fix chatglm in run.py (#8919) 2023-09-07 16:44:10 +08:00
Xin Qiu
e9de9d9950 benchmark for native int4 (#8918)
* native4

* update

* update

* update
2023-09-07 15:56:15 +08:00
Ruonan Wang
c0797ea232 LLM: update setup to specify bigdl-core-xe version (#8913) 2023-09-07 15:11:55 +08:00
Ruonan Wang
057e77e229 LLM: update benchmark_utils.py to handle do_sample=True (#8903) 2023-09-07 14:20:47 +08:00
Yang Wang
c34400e6b0 Use new layout for xpu qlinear (#8896)
* use new layout for xpu qlinear

* fix style
2023-09-06 21:55:33 -07:00
Zhao Changmin
8bc1d8a17c LLM: Fix discards in optimize_model with non-hf models and add openai whisper example (#8877)
* openai-whisper
2023-09-07 10:35:59 +08:00
Xin Qiu
5d9942a3ca transformer int4 and native int4's benchmark script for 32 256 1k 2k input (#8871)
* transformer

* move

* update

* add header

* update all-in-one

* clean up
2023-09-07 09:49:55 +08:00
Yina Chen
bfc71fbc15 Add known issue in arc voice assistant example (#8902)
* add known issue in voice assistant example

* update cpu
2023-09-07 09:28:26 +08:00
Yuwen Hu
db26c7b84d [LLM] Update readme gif & image url to the ones hosted on readthedocs (#8900) 2023-09-06 20:04:17 +08:00
SONG Ge
7a71ced78f [LLM Docs] Remain API Docs Issues Solution (#8780)
* langchain readthedocs update

* solve langchain.llms.transformersllm issues

* langchain.embeddings.transformersembeddings/transformersllms issues

* update docs for get_num_tokens

* add low_bit api doc

* add optimizer model api doc

* update rst index

* fix comments style

* update docs following the comments

* update api doc
2023-09-06 16:29:34 +08:00
Xin Qiu
49a39452c6 update benchmark (#8899) 2023-09-06 15:11:43 +08:00
Kai Huang
4a9ff050a1 Add qlora nf4 (#8782)
* add nf4

* dequant nf4

* style
2023-09-06 09:39:22 +08:00
xingyuan li
704a896e90 [LLM] Add perf test on xpu for bigdl-llm (#8866)
* add xpu latency job
* update install method
* remove duplicated workflow
* add perf upload
2023-09-05 17:36:24 +09:00
Zhao Changmin
95271f10e0 LLM: Rename low bit layer (#8875)
* rename lowbit

---------

Co-authored-by: leonardozcm <leonardozcm@gmail.com>
2023-09-05 13:21:12 +08:00
Yina Chen
74a2c2ddf5 Update optimize_model=True in llama2 chatglm2 arc examples (#8878)
* add optimize_model=True in llama2 chatglm2 examples

* add ipex optimize in gpt-j example
2023-09-05 10:35:37 +08:00
Jason Dai
5e58f698cd Update readthedocs (#8882) 2023-09-04 15:42:16 +08:00
Song Jiaming
7b3ac66e17 [LLM] auto performance test fix specific settings to template (#8876) 2023-09-01 15:49:04 +08:00
Yang Wang
242c9d6036 Fix chatglm2 multi-turn streamchat (#8867) 2023-08-31 22:13:49 -07:00
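(Editor's sketch: for context, the multi-turn streaming pattern this fix targets. `stream_chat` is ChatGLM2's own chat API, reached through bigdl-llm's `AutoModel` wrapper; the prompts are illustrative and the exact signature is assumed from the upstream model.)

```python
from transformers import AutoTokenizer
from bigdl.llm.transformers import AutoModel

model_path = "THUDM/chatglm2-6b"
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModel.from_pretrained(model_path, load_in_4bit=True,
                                  trust_remote_code=True)

history = []
for prompt in ["Hi", "Tell me a joke"]:
    for response, history in model.stream_chat(tokenizer, prompt, history=history):
        pass  # each iteration yields the partial response so far
    print(response)
```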
Song Jiaming
c06f1ca93e [LLM] auto perf test to output to csv (#8846) 2023-09-01 10:48:00 +08:00
Zhao Changmin
9c652fbe95 LLM: Whisper long segment recognize example (#8826)
* LLM: Long segment recognize example
2023-08-31 16:41:25 +08:00
Yishuo Wang
a232c5aa21 [LLM] add protobuf in bigdl-llm dependency (#8861) 2023-08-31 15:23:31 +08:00
xingyuan li
de6c6bb17f [LLM] Downgrade amx build gcc version and remove avx flag display (#8856)
* downgrade to gcc 11
* remove avx display
2023-08-31 14:08:13 +09:00
Yang Wang
3b4f4e1c3d Fix llama attention optimization for XPU (#8855)
* Fix llama attention optimization for XPU

* fix chatglm2

* fix typo
2023-08-30 21:30:49 -07:00
Shengsheng Huang
7b566bf686 [LLM] add new API for optimize any pytorch models (#8827)
* add new API for optimize any pytorch models

* change test util name

* revise API and update UT

* fix python style

* update ut config, change default value

* change defaults, disable ut transcribe
2023-08-30 19:41:53 +08:00
Xin Qiu
8eca982301 windows add env (#8852) 2023-08-30 15:54:52 +08:00
Zhao Changmin
731916c639 LLM: Enable attempting loading method automatically (#8841)
* enable auto load method

* warning error

* logger info

---------

Co-authored-by: leonardozcm <leonardozcm@gmail.com>
2023-08-30 15:41:55 +08:00
Yishuo Wang
bba73ec9d2 [LLM] change chatglm native int4 checkpoint name (#8851) 2023-08-30 15:05:19 +08:00
Yina Chen
55e705a84c [LLM] Support the rest of AutoXXX classes in Transformers API (#8815)
* add transformers auto models

* fix
2023-08-30 11:16:14 +08:00
Zhao Changmin
887018b0f2 Update ut save&load (#8847)
Co-authored-by: leonardozcm <leonardozcm@gmail.com>
2023-08-30 10:32:57 +08:00
Yina Chen
3462fd5c96 Add arc gpt-j example (#8840) 2023-08-30 10:31:24 +08:00
Ruonan Wang
f42c0bad1b LLM: update GPU doc (#8845) 2023-08-30 09:24:19 +08:00
Jason Dai
aab7deab1f Reorganize GPU examples (#8844) 2023-08-30 08:32:08 +08:00
Yang Wang
a386ad984e Add Data Center GPU Flex Series to Readme (#8835)
* Add Data Center GPU Flex Series to Readme

* remove

* update starcoder
2023-08-29 11:19:09 -07:00
Yishuo Wang
7429ea0606 [LLM] support transformer int4 + amx int4 (#8838) 2023-08-29 17:27:18 +08:00
Ruonan Wang
ddff7a6f05 Update readme of GPU to specify oneapi version(#8820) 2023-08-29 13:14:22 +08:00
Zhao Changmin
bb31d4fe80 LLM: Implement hf low_cpu_mem_usage with 1xbinary file peak memory on transformer int4 (#8731)
* 1x peak memory
2023-08-29 09:33:17 +08:00
Yina Chen
35fdf94031 [LLM]Arc starcoder example (#8814)
* arc starcoder example init

* add log

* meet comments
2023-08-28 16:48:00 +08:00
xingyuan li
6a902b892e [LLM] Add amx build step (#8822)
* add amx build step
2023-08-28 17:41:18 +09:00
Ruonan Wang
eae92bc7da llm: quick fix path (#8810) 2023-08-25 16:02:31 +08:00
Ruonan Wang
0186f3ab2f llm: update all ARC int4 examples (#8809)
* update GPU examples

* update other examples

* fix

* update based on comment
2023-08-25 15:26:10 +08:00
Song Jiaming
b8b1b6888b [LLM] Performance test (#8796) 2023-08-25 14:31:45 +08:00
Yang Wang
9d0f6a8cce rename math.py in example to avoid conflict (#8805) 2023-08-24 21:06:31 -07:00
SONG Ge
d2926c7672 [LLM] Unify Langchain Native and Transformers LLM API (#8752)
* deprecate BigDLNativeTransformers and add specific LMEmbedding method

* deprecate and add LM methods for langchain llms

* add native params to native langchain

* new imple for embedding

* move ut from bigdlnative to casual llm

* rename embeddings api and examples update align with usage updating

* docqa example hot-fix

* add more api docs

* add langchain ut for starcoder

* support model_kwargs for transformer methods when calling causalLM and add ut

* ut fix for transformers embedding

* update for langchain causal supporting transformers

* remove model_family in readme doc

* add model_families params to support more models

* update api docs and remove chatglm embeddings for now

* remove chatglm embeddings in examples

* new refactor for ut to add bloom and transformers llama ut

* disable llama transformers embedding ut
2023-08-25 11:14:21 +08:00
binbin Deng
5582872744 LLM: update chatglm example to be more friendly for beginners (#8795) 2023-08-25 10:55:01 +08:00
Yina Chen
7c37424a63 Fix voice assistant example input error on Linux (#8799)
* fix linux error

* update

* remove alsa log
2023-08-25 10:47:27 +08:00
Yang Wang
bf3591e2ff Optimize chatglm2 for bf16 (#8725)
* make chatglm works with bf16

* fix style

* support chatglm v1

* fix style

* fix style

* add chatglm2 file
2023-08-24 10:04:25 -07:00
xingyuan li
c94bdd3791 [LLM] Merge windows & linux nightly test (#8756)
* fix download statement
* add check before build wheel
* use curl to upload files
* windows unittest won't upload converted model
* split llm-cli test into windows & linux versions
* update tempdir creation method
* fix nightly converted model name
* windows llm-cli starcoder test temporarily disabled
* remove taskset dependency
* rename llm_unit_tests_linux to llm_unit_tests
2023-08-23 12:48:41 +09:00
Jason Dai
dcadd09154 Update llm document (#8784) 2023-08-21 22:34:44 +08:00
Yishuo Wang
611c1fb628 [LLM] change default n_threads of native int4 langchain API (#8779) 2023-08-21 13:30:12 +08:00
Yishuo Wang
3d1f2b44f8 LLM: change default n_threads of native int4 models (#8776) 2023-08-18 15:46:19 +08:00
Yishuo Wang
2ba2133613 fix starcoder chinese output (#8773) 2023-08-18 13:37:02 +08:00
binbin Deng
548f7a6cf7 LLM: update convert of llama family to support llama2-70B (#8747) 2023-08-18 09:30:35 +08:00
Yina Chen
4afea496ab support q8_0 (#8765) 2023-08-17 15:06:36 +08:00
Ruonan Wang
e9aa2bd890 LLM: reduce GPU 1st token latency and update example (#8763)
* reduce 1st token latency

* update example

* fix

* fix style

* update readme of gpu benchmark
2023-08-16 18:01:23 +08:00
binbin Deng
06609d9260 LLM: add qwen example on arc (#8757) 2023-08-16 17:11:08 +08:00
SONG Ge
f4164e4492 [BigDL LLM] Update readme for unifying transformers API (#8737)
* update readme doc

* fix readthedocs error

* update comment

* update exception error info

* invalidInputError instead

* fix readme typo error and remove import error

* fix more typo
2023-08-16 14:22:32 +08:00
Song Jiaming
c1f9af6d97 [LLM] chatglm example and transformers low-bit examples (#8751) 2023-08-16 11:41:44 +08:00
Ruonan Wang
8805186f2f LLM: add benchmark tool for gpu (#8760)
* add benchmark tool for gpu

* update
2023-08-16 11:22:10 +08:00
binbin Deng
97283c033c LLM: add falcon example on arc (#8742) 2023-08-15 17:38:38 +08:00
binbin Deng
8c55911308 LLM: add baichuan-13B on arc example (#8755) 2023-08-15 15:07:04 +08:00
binbin Deng
be2ae6eb7c LLM: fix langchain native int4 voice assistant example (#8750) 2023-08-14 17:23:33 +08:00
Ruonan Wang
d28ad8f7db LLM: add whisper example for arc transformer int4 (#8749)
* add whisper example for arc int4

* fix
2023-08-14 17:05:48 +08:00
Yishuo Wang
77844125f2 [LLM] Support chatglm cache (#8745) 2023-08-14 15:10:46 +08:00
Ruonan Wang
faaccb64a2 LLM: add chatglm2 example for Arc (#8741)
* add chatglm2 example

* update

* fix readme
2023-08-14 10:43:08 +08:00
binbin Deng
b10d7e1adf LLM: add mpt example on arc (#8723) 2023-08-14 09:40:01 +08:00
binbin Deng
e9a1afffc5 LLM: add internlm example on arc (#8722) 2023-08-14 09:39:39 +08:00
SONG Ge
aceea4dc29 [LLM] Unify Transformers and Native API (#8713)
* re-open pr to run on latest runner

* re-add examples and ut

* rename ut and move deprecate to warning instead of raising an error info

* ut fix
2023-08-11 19:45:47 +08:00
Yishuo Wang
f91035c298 [LLM] fix chatglm native int4 emoji output (#8739) 2023-08-11 15:38:41 +08:00
binbin Deng
77efcf7b1d LLM: fix ChatGLM2 native int4 stream output (#8733) 2023-08-11 14:51:50 +08:00
Ruonan Wang
ca3e59a1dc LLM: support stop for starcoder native int4 stream (#8734) 2023-08-11 14:51:30 +08:00
Song Jiaming
e292dfd970 [WIP] LLM transformers api for langchain (#8642) 2023-08-11 13:32:35 +08:00
Yishuo Wang
3d5a7484a2 [LLM] fix bloom and starcoder memory release (#8728) 2023-08-11 11:18:19 +08:00
xingyuan li
02ec01cb48 [LLM] Add bigdl-core-xe dependency when installing bigdl-llm[xpu] (#8716)
* add bigdl-core-xe dependency
2023-08-10 17:41:42 +09:00
Shengsheng Huang
7c56c39e36 Fix GPU examples README to use bigdl-core-xe (#8714)
* Update README.md

* Update README.md
2023-08-10 12:53:49 +08:00
Yina Chen
6d1ca88aac add voice assistant example (#8711) 2023-08-10 12:42:14 +08:00
Song Jiaming
e717e304a6 LLM first example test and template (#8658) 2023-08-10 10:03:11 +08:00
Ruonan Wang
1a7b698a83 [LLM] support ipex arc int4 & add basic llama2 example (#8700)
* first support of xpu

* make it work on gpu

update setup

update

add GPU llama2 examples

add use_optimize flag to disable optimize for gpu

fix style

update gpu example readme

fix

* update example, and update env

* fix setup to add cpp files

* replace jit with aot to avoid data leak

* rename to bigdl-core-xe

* update installation in example readme
2023-08-09 22:20:32 +08:00
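(Editor's sketch: the basic Arc INT4 flow this PR introduces, roughly; the model name and prompt are illustrative, and at the time an explicit `intel_extension_for_pytorch` import was still required.)

```python
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401, needed at the time
from transformers import LlamaTokenizer
from bigdl.llm.transformers import AutoModelForCausalLM

# Quantize to INT4 on load, then move the model to the Arc GPU ("xpu").
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf",
                                             load_in_4bit=True).to("xpu")
tokenizer = LlamaTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

input_ids = tokenizer("What is AI?", return_tensors="pt").input_ids.to("xpu")
with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```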
Jason Dai
d03218674a Update llm readme (#8703) 2023-08-09 14:47:26 +08:00
Kai Huang
1b65288bdb Add api doc for LLM (#8605)
* api doc initial

* update desc
2023-08-08 18:17:16 +08:00
binbin Deng
4c44153584 LLM: add Qwen transformers int4 example (#8699) 2023-08-08 11:23:09 +08:00
Yishuo Wang
710b9b8982 [LLM] add linux chatglm pybinding binary file (#8698) 2023-08-08 11:16:30 +08:00
binbin Deng
ea5d7aff5b LLM: add chatglm native int4 transformers API (#8695) 2023-08-07 17:52:47 +08:00
Yishuo Wang
6da830cf7e [LLM] add chatglm pybinding binary file in setup.py (#8692) 2023-08-07 09:41:03 +08:00
Cengguang Zhang
ebcf75d506 feat: set transformers lib version. (#8683) 2023-08-04 15:01:59 +08:00
Yishuo Wang
ef08250c21 [LLM] chatglm pybinding support (#8672) 2023-08-04 14:27:29 +08:00
Yishuo Wang
5837cc424a [LLM] add chatglm pybinding binary file release (#8677) 2023-08-04 11:45:27 +08:00
Yang Wang
b6468bac43 optimize chatglm2 long sequence (#8662)
* add chatglm2

* optimize a little

* optimize chatglm long sequence

* fix style

* address comments and fix style

* fix bug
2023-08-03 17:56:24 -07:00
Yang Wang
3407f87075 Fix llama kv cache bug (#8674) 2023-08-03 17:54:55 -07:00
Yina Chen
59903ea668 llm linux support avx & avx2 (#8669) 2023-08-03 17:10:59 +08:00
xingyuan li
110cfb5546 [LLM] Remove old windows nightly test code (#8668)
Remove old Windows nightly test code triggered by task scheduler
Add new Windows nightly workflow for nightly testing
2023-08-03 17:12:23 +09:00
xingyuan li
610084e3c0 [LLM] Complete windows unittest (#8611)
* add windows nightly test workflow
* use github runner to run pr test
* model load should use lowbit
* remove tmp dir after testing
2023-08-03 14:48:42 +09:00
binbin Deng
a15a2516e6 add (#8659) 2023-08-03 10:12:10 +08:00
Xin Qiu
0714888705 build windows avx dll (#8657)
* windows avx

* add to actions
2023-08-03 02:06:24 +08:00
Yina Chen
119bf6d710 [LLM] Support linux cpp dynamic load .so (#8655)
* support linux cpp dynamic load .so

* update cli
2023-08-02 20:15:45 +08:00
Zhao Changmin
ca998cc6f2 LLM: Mute shape mismatch output (#8601)
* LLM: Mute shape mismatch output
2023-08-02 16:46:22 +08:00
Zhao Changmin
04c713ef06 LLM: Disable transformer api pretraining_tp (#8645)
* disable pretraining_tp
2023-08-02 11:26:01 +08:00
binbin Deng
6fc31bb4cf LLM: first update descriptions for ChatGLM transformers int4 example (#8646) 2023-08-02 11:00:56 +08:00
Yang Wang
cbeae97a26 Optimize Llama Attention to reduce KV cache memory copy (#8580)
* Optimize llama attention to reduce KV cache memory copy

* fix bug

* fix style

* remove git

* fix style

* fix style

* fix style

* fix tests

* move llama attention to another file

* revert

* fix style

* remove jit

* fix
2023-08-01 16:37:58 -07:00
binbin Deng
39994738d1 LLM: add chat & stream chat example for ChatGLM2 transformers int4 (#8636) 2023-08-01 14:57:45 +08:00
xingyuan li
cdfbe652ca [LLM] Add chatglm support for llm-cli (#8641)
* add chatglm build
* add llm-cli support
* update git
* install cmake
* add ut for chatglm
* add files to setup
* fix bug causing permission error when sf lacks file
2023-08-01 14:30:17 +09:00
Zhao Changmin
d6cbfc6d2c LLM: Add requirements in whisper example (#8644)
* LLM: Add requirements in whisper example
2023-08-01 12:07:14 +08:00
Zhao Changmin
3e10260c6d LLM: llm-convert support chatglm family (#8643)
* convert chatglm
2023-08-01 11:16:18 +08:00
Yina Chen
a607972c0b [LLM]LLM windows load -api.dll (#8631)
* temp

* update

* revert setup.py
2023-07-31 13:47:20 +08:00
xingyuan li
3361b66449 [LLM] Revert llm-cli to disable selecting executables on Windows (#8630)
* revert vnni file select
* revert setup.py
* add model-api.dll
2023-07-31 11:15:44 +09:00
binbin Deng
3dbab9087b LLM: add llama2-7b native int4 example (#8629) 2023-07-28 10:56:16 +08:00
binbin Deng
fb32fefcbe LLM: support tensor input of native int4 generate (#8620) 2023-07-27 17:59:49 +08:00
Zhao Changmin
5b484ab48d LLM: Support load_low_bit loading models in shards format (#8612)
* shards_model

---------

Co-authored-by: leonardozcm <leonaordo1997zcm@gmail.com>
2023-07-26 13:30:01 +08:00
binbin Deng
fcf8c085e3 LLM: add llama2-13b native int4 example (#8613) 2023-07-26 10:12:52 +08:00
Song Jiaming
650b82fa6e [LLM] add CausalLM and Speech UT (#8597) 2023-07-25 11:22:36 +08:00
Zhao Changmin
af201052db avoid malloc all missing keys in fp32 (#8600) 2023-07-25 09:48:51 +08:00
binbin Deng
3f24202e4c [LLM] Add more transformers int4 example (Llama 2) (#8602) 2023-07-25 09:21:12 +08:00
Jason Dai
0f8201c730 llm readme update (#8595) 2023-07-24 09:47:49 +08:00
Yuwen Hu
ba42a6da63 [LLM] Set torch_dtype default value to 'auto' for transformers low bit from_pretrained API 2023-07-21 17:55:00 +08:00
Yuwen Hu
bbde423349 [LLM] Add current Linux UT inference tests to nightly tests (#8578)
* Add current inference uts to nightly tests

* Change test model from chatglm-6b to chatglm2-6b

* Add thread num env variable for nightly test

* Fix urls

* Small fix
2023-07-21 13:26:38 +08:00
Yang Wang
feb3af0567 Optimize transformer int4 memory footprint (#8579) 2023-07-20 20:22:13 -07:00
Yang Wang
57e880f63a [LLM] use pytorch linear for large input matrix (#8492)
* use pytorch linear for large input matrix

* only works on server

* fix style

* optimize memory

* first check server

* revert

* address comments

* fix style
2023-07-20 09:54:25 -07:00
Yuwen Hu
6504e31a97 Small fix (#8577) 2023-07-20 16:37:04 +08:00
Yuwen Hu
2266ca7d2b [LLM] Small updates to transformers int4 ut (#8574)
* Small fix to transformers int4 ut

* Small fix
2023-07-20 13:20:25 +08:00
xingyuan li
7b8d9c1b0d [LLM] Add dependency file check in setup.py (#8565)
* add package file check
2023-07-20 14:20:08 +09:00
Song Jiaming
411d896636 LLM first transformers UT (#8514)
* ut

* transformers api first ut

* name

* dir issue

* use chatglm instead of chatglm2

* omp

* set omp in sh

* source

* taskset

* test

* test omp

* add test
2023-07-20 10:16:27 +08:00
Yuwen Hu
cad78740a7 [LLM] Small fixes to the Whisper transformers INT4 example (#8573)
* Small fixes to the whisper example

* Small fix

* Small fix
2023-07-20 10:11:33 +08:00
binbin Deng
7a9fdf74df [LLM] Add more transformers int4 example (Dolly v2) (#8571)
* add

* add trust_remote_code
2023-07-19 18:20:16 +08:00
Zhao Changmin
e680af45ea LLM: Optimize Langchain Pipeline (#8561)
* LLM: Optimize Langchain Pipeline

* load in low bit
2023-07-19 17:43:13 +08:00
Shengsheng Huang
616b7cb0a2 add more langchain examples (#8542)
* update langchain descriptions

* add mathchain example

* update readme

* update readme
2023-07-19 17:42:18 +08:00
binbin Deng
457571b44e [LLM] Add more transformers int4 example (InternLM) (#8557) 2023-07-19 15:15:38 +08:00
xingyuan li
b6510fa054 fix move/download dll step (#8564) 2023-07-19 12:17:07 +09:00
xingyuan li
c52ed37745 fix starcoder dll name (#8563) 2023-07-19 11:55:06 +09:00
Zhao Changmin
3dbe3bf18e transformer_int4 (#8553) 2023-07-19 08:33:58 +08:00
Zhao Changmin
49d636e295 [LLM] whisper model transformer int4 verification and example (#8511)
* LLM: transformer api support

* va

* example

* revert

* pep8

* pep8
2023-07-19 08:33:20 +08:00
Yina Chen
9a7bc17ca1 [LLM] llm supports vnni link on windows (#8543)
* support win vnni link

* fix style

* fix style

* use isa_checker

* fix

* typo

* fix

* update
2023-07-18 16:43:45 +08:00
Yina Chen
4582b6939d [LLM]llm gptneox chat (#8527)
* linux

* support win

* merge upstream & support vnni lib in chat
2023-07-18 11:17:17 +08:00
Jason Dai
1ebc43b151 Update READMEs (#8554) 2023-07-18 11:06:06 +08:00
Yuwen Hu
ee70977c07 [LLM] Transformers int4 example small typo fixes (#8550) 2023-07-17 18:15:32 +08:00
Yuwen Hu
1344f50f75 [LLM] Add more transformers int4 examples (Falcon) (#8546)
* Initial commit

* Add Falcon examples and other small fix

* Small fix

* Small fix

* Update based on comments

* Small fix
2023-07-17 17:36:21 +08:00
Yuwen Hu
de772e7a80 Update mpt for prompt tuning (#8547) 2023-07-17 17:33:54 +08:00
binbin Deng
f1fd746722 [LLM] Add more transformers int4 example (vicuna) (#8544) 2023-07-17 16:59:55 +08:00
Xin Qiu
fccae91461 Add load_low_bit save_load_bit to AutoModelForCausalLM (#8531)
* transformers save_low_bit load_low_bit

* update example and add readme

* update

* update

* update

* add ut

* update
2023-07-17 15:29:55 +08:00
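(Editor's sketch: the save/load round trip this PR adds — quantize once from the full checkpoint, save the low-bit weights, then reload them directly on later runs; the paths and model name are illustrative.)

```python
from bigdl.llm.transformers import AutoModelForCausalLM

# First run: quantize from the full-precision checkpoint, save low-bit weights.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf",
                                             load_in_4bit=True)
model.save_low_bit("./llama2-7b-int4")  # illustrative path

# Later runs: load the low-bit checkpoint directly, no full checkpoint needed.
model = AutoModelForCausalLM.load_low_bit("./llama2-7b-int4")
```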
binbin Deng
808a64d53a [LLM] Add more transformers int4 example (starcoder) (#8540) 2023-07-17 14:41:19 +08:00
xingyuan li
e57db777e0 [LLM] Setup.py & llm-cli update for windows vnni binary files (#8537)
* update setup.py
* update llm-cli
2023-07-17 12:28:38 +09:00
binbin Deng
f56b5ade4c [LLM] Add more transformers int4 example (chatglm2) (#8539) 2023-07-14 17:58:33 +08:00
binbin Deng
92d33cf35a [LLM] Add more transformers int4 example (phoenix) (#8520) 2023-07-14 17:58:04 +08:00
Yuwen Hu
e0f0def279 Remove unused example for now (#8538) 2023-07-14 17:32:50 +08:00
binbin Deng
b397e40015 [LLM] Add more transformers int4 example (RedPajama) (#8523) 2023-07-14 17:30:28 +08:00
Yuwen Hu
7bf3e10415 [LLM] Add more int4 transformers examples (MOSS) (#8532)
* Add Moss example

* Small fix
2023-07-14 16:41:41 +08:00
Yuwen Hu
59b7287ef5 [LLM] Add more transformers int4 example (Baichuan) (#8522)
* Add example model Baichuan

* Small updates to client windows settings

* Small refactor

* Small fix
2023-07-14 16:41:29 +08:00
Yuwen Hu
ca6e38607c [LLM] Add more transformers examples (ChatGLM) (#8521)
* Add example for chatglm v1 and other small fixes

* Small fix

* Small further fix

* Small fix

* Update based on comments & updates for client windows recommended settings

* Small fix

* Small refactor

* Small fix

* Small fix

* Small fix to dolly v1

* Small fix
2023-07-14 16:41:13 +08:00
xingyuan li
c87853233b [LLM] Add windows vnni binary build step (#8518)
* add windows vnni build step
* update build info
* add download command
2023-07-14 17:24:39 +09:00
Yishuo Wang
6320bf201e LLM: fix memory access violation (#8519) 2023-07-13 17:08:08 +08:00
xingyuan li
60c2c0c3dc Bug fix for merged pr #8503 (#8516) 2023-07-13 17:26:30 +09:00
Yuwen Hu
349bcb4bae [LLM] Add more transformers int4 example (Dolly v1) (#8517)
* Initial commit for dolly v1

* Add example for Dolly v1 and other small fix

* Small output updates

* Small fix

* fix based on comments
2023-07-13 16:13:47 +08:00
Xin Qiu
90e3d86bce rename low bit type name (#8512)
* change qx_0 to sym_intx

* update

* fix typo

* update

* fix type

* fix style

* add python doc

* meet code review

* fix style
2023-07-13 15:53:31 +08:00
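(Editor's sketch: after this rename, the ggml-style qtype strings become symmetric/asymmetric int names, e.g. `q4_0` → `sym_int4` and `q4_1` → `asym_int4`, so loading looks roughly like this; the model name is illustrative.)

```python
from bigdl.llm.transformers import AutoModelForCausalLM

# "sym_int4" replaces the old "q4_0" spelling; other widths follow the same scheme.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf",
                                             load_in_low_bit="sym_int4")
```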
xingyuan li
4f152b4e3a [LLM] Merge the llm.cpp build and the pypi release (#8503)
* checkout llm.cpp to build new binary
* use artifact to get latest built binary files
* rename quantize
* modify all release workflow
2023-07-13 16:34:24 +09:00
Yuwen Hu
bcde8ec83e [LLM] Small fix to MPT Example (#8513) 2023-07-13 14:33:21 +08:00
Zhao Changmin
ba0da17b40 LLM: Support AutoModelForSeq2SeqLM transformer API (#8449)
* LLM: support AutoModelForSeq2SeqLM transformer API
2023-07-13 13:33:51 +08:00
Yishuo Wang
86b5938075 LLM: fix llm pybinding (#8509) 2023-07-13 10:27:08 +08:00
Yuwen Hu
fcc352eee3 [LLM] Add more transformers_int4 examples (MPT) (#8498)
* Update transformers_int4 readme, and initial commit for mpt

* Update example for mpt

* Small fix and recover transformers_int4_pipeline_readme.md for now

* Update based on comments

* Small fix

* Small fix

* Update based on comments
2023-07-13 09:41:16 +08:00
Zhao Changmin
23f6a4c21f LLM: Optimize transformer int4 loading (#8499)
* LLM: Optimize transformer int4 loading
2023-07-12 15:25:42 +08:00
Yishuo Wang
dd3f953288 Support vnni check (#8497) 2023-07-12 10:11:15 +08:00
Xin Qiu
cd7a980ec4 Transformer int4 add qtype, support q4_1 q5_0 q5_1 q8_0 (#8481)
* quant in Q4 5 8

* meet code review

* update readme

* style

* update

* fix error

* fix error

* update

* fix style

* update

* Update README.md

* Add load_in_low_bit
2023-07-12 08:23:08 +08:00
Yishuo Wang
db39d0a6b3 LLM: disable mmap by default for better performance (#8467) 2023-07-11 09:26:26 +08:00
Yuwen Hu
52c6b057d6 Initial LLM Transformers example refactor (#8491) 2023-07-10 17:53:57 +08:00
Junwei Deng
254a7aa3c4 bigdl-llm: add voice-assistant example migrated from the langchain use-case document (#8468) 2023-07-10 16:51:45 +08:00
Yishuo Wang
98bac815e4 specify numpy version (#8489) 2023-07-10 16:50:16 +08:00
Zhao Changmin
81d655cda9 LLM: transformer int4 save and load (#8462)
* LLM: transformer int4 save and load
2023-07-10 16:34:41 +08:00
binbin Deng
d489775d2c LLM: fix inconsistency between output token number and max_new_token (#8479) 2023-07-07 17:31:05 +08:00