Commit graph

1962 commits

Author SHA1 Message Date
Cheen Hau, 俊豪
b2aa267f50 Enhance LLM GPU installation document (#9828)
* Improve gpu install doc

* Add troubleshooting for setvars.sh not being run properly.

* Further improvements

* 2024.x.x -> 2024.0

* Fixes

* Fix Install BigDL-LLM From Wheel: bigdl-llm[xpu_2.0]

* Remove "export USE_XETLA=OFF" for Max GPU
2024-01-09 16:30:50 +08:00
Yuwen Hu
aebed4b7bc Enable llm gpu tests for PyTorch 2.1 (#9863) 2024-01-09 16:29:02 +08:00
Yuwen Hu
23fc888abe Update llm gpu xpu default related info to PyTorch 2.1 (#9866) 2024-01-09 15:38:47 +08:00
Jason Dai
a3725b0816 Update readme (#9865) 2024-01-09 15:19:42 +08:00
Yishuo Wang
36496d60ac only use quantize kv cache on MTL (#9862) 2024-01-09 13:24:02 +08:00
ZehuaCao
146076bdb5 Support llm-awq backend (#9856)
* Support for LLM-AWQ Backend

* fix

* Update README.md

* Add awqconfig

* modify init

* update

* support llm-awq

* fix style

* fix style

* update

* fix AwqBackendPackingMethod not found error

* fix style

* update README

* fix style

---------

Co-authored-by: Uxito-Ada <414416158@qq.com>
Co-authored-by: Heyang Sun <60865256+Uxito-Ada@users.noreply.github.com>
Co-authored-by: cyita <yitastudy@gmail.com>
2024-01-09 13:07:32 +08:00
Ruonan Wang
fea6f16057 LLM: add mlp fusion for fp8e5 and update related check (#9860)
* update mlp fusion

* fix style

* update
2024-01-09 09:56:32 +08:00
binbin Deng
294fd32787 LLM: update DeepSpeed AutoTP example with GPU memory optimization (#9823) 2024-01-09 09:22:49 +08:00
Yuwen Hu
5ba1dc38d4 [LLM] Change default Linux GPU install option to PyTorch 2.1 (#9858)
* Update default xpu to ipex 2.1

* Update related install ut support correspondingly

* Add arc ut tests for both ipex 2.0 and 2.1

* Small fix

* Disable ipex 2.1 test for now as oneapi 2024.0 has not been installed on the test machine

* Update document for default PyTorch 2.1

* Small fix

* Small fix

* Small doc fixes

* Small fixes
2024-01-08 17:16:17 +08:00
Mingyu Wei
ed81baa35e LLM: Use default typing-extension in LangChain examples (#9857)
* remove typing extension downgrade in readme; minor fixes of code

* fix typos in README

* change default question of docqa.py
2024-01-08 16:50:55 +08:00
Jiao Wang
3b6372ab12 Fix Llama transformers 4.36 support (#9852)
* support 4.36

* style

* update

* update

* update

* fix merge

* update
2024-01-08 00:32:23 -08:00
Chen, Zhentao
1b585b0d40 set fp8 default as e5m2 (#9859) 2024-01-08 15:53:57 +08:00
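FP8 e5m2 (1 sign, 5 exponent, 2 mantissa bits) shares its exponent layout with IEEE float16, so one way to picture the format is that an e5m2 value is exactly the high byte of a float16. The sketch below is illustrative only (round-toward-zero via byte truncation, not the kernel BigDL-LLM actually uses):

```python
import struct

def quantize_e5m2(x: float) -> float:
    """Quantize x to fp8 e5m2, rounding toward zero.

    e5m2 is the high byte of an IEEE float16 encoding, so zeroing
    the low byte of the float16 drops the bottom 8 mantissa bits.
    """
    hi, _lo = struct.pack(">e", x)  # ">e" = big-endian float16
    return struct.unpack(">e", bytes([hi, 0]))[0]

# 3.3 -> float16 0x429A -> truncate low byte -> 0x4200 -> 3.0
print(quantize_e5m2(3.3))   # 3.0
print(quantize_e5m2(-0.3))  # -0.25
```

The trade-off this demonstrates: e5m2 keeps the full float16 exponent range but only 2 mantissa bits, which is why it can cover widely varying KV cache magnitudes at the cost of precision.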
Chen, Zhentao
cad5c2f516 fixed harness deps version (#9854)
* fixed harness deps version

* fix typo
2024-01-08 15:22:42 +08:00
Kai Huang
62a3c0fc16 Fix quick build doc (#9853) 2024-01-08 14:26:34 +08:00
Ruonan Wang
dc995006cc LLM: add flash attention for mistral / mixtral (#9846)
* add flash attention for mistral

* update

* add flash attn for mixtral

* fix style
2024-01-08 09:51:34 +08:00
Yishuo Wang
afaa871144 [LLM] support quantize kv cache to fp8 (#9812) 2024-01-08 09:28:20 +08:00
Jiao Wang
248ae7fad2 Llama optimize_model to support transformers 4.36 (#9818)
* support 4.36

* style

* update

* update

* update
2024-01-05 11:30:18 -08:00
WeiguangHan
4269a585b2 LLM: arc perf test using ipex 2.1 (#9837)
* LLM: upgrade to ipex_2.1 for arc perf test

* revert llm_performance_tests.yml
2024-01-05 18:12:19 +08:00
Yuwen Hu
86f86a64a2 Small fixes to ipex 2.1 UT support (#9848) 2024-01-05 17:36:21 +08:00
Ruonan Wang
a60bda3324 LLM: update check for deepspeed (#9838) 2024-01-05 16:44:10 +08:00
Yuwen Hu
f25d23dfbf [LLM] Add support for PyTorch 2.1 install in UT for GPU (#9845)
* Add support for ipex 2.1 install in UT and fix perf test

* Small fix
2024-01-05 16:13:18 +08:00
Ruonan Wang
16433dd959 LLM: fix first token judgement of flash attention (#9841)
* fix flash attention

* meet code review

* fix
2024-01-05 13:49:37 +08:00
Yuwen Hu
ad4a6b5096 Fix langchain UT by not downgrading typing-extension (#9842) 2024-01-05 13:38:04 +08:00
Yina Chen
f919f5792a fix kv cache out of bound (#9827) 2024-01-05 12:38:57 +08:00
Ruonan Wang
5df31db773 LLM: fix accuracy issue of chatglm3 (#9830)
* add attn mask for first token

* fix

* fix

* change attn calculation

* fix

* fix

* fix style

* fix style
2024-01-05 10:52:05 +08:00
Jinyi Wan
3147ebe63d Add cpu and gpu examples for SOLAR-10.7B (#9821) 2024-01-05 09:50:28 +08:00
WeiguangHan
ad6b182916 LLM: change the color of peak diff (#9836) 2024-01-04 19:30:32 +08:00
Xiangyu Tian
38c05be1c0 [LLM] Fix dtype mismatch in Baichuan2-13b (#9834) 2024-01-04 15:34:42 +08:00
Ruonan Wang
8504a2bbca LLM: update qlora alpaca example to change lora usage (#9835)
* update example

* fix style
2024-01-04 15:22:20 +08:00
Ziteng Zhang
05b681fa85 [LLM] IPEX auto importer set on by default (#9832)
* Set BIGDL_IMPORT_IPEX default to True

* Remove import intel_extension_for_pytorch as ipex from GPU example
2024-01-04 13:33:29 +08:00
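The commit above flips the `BIGDL_IMPORT_IPEX` flag to on by default, so GPU examples no longer need an explicit `import intel_extension_for_pytorch as ipex`. A minimal sketch of the on-by-default flag pattern (the variable name comes from the commit message; the exact parsing inside bigdl-llm may differ):

```python
import os

def ipex_auto_import_enabled(environ=os.environ) -> bool:
    """Return True unless BIGDL_IMPORT_IPEX is explicitly disabled.

    The flag defaults to "1" (on), matching the new default in #9832.
    """
    return environ.get("BIGDL_IMPORT_IPEX", "1").lower() not in ("0", "false")
```

Users who hit conflicts with the auto-import would set `BIGDL_IMPORT_IPEX=0` in their shell to opt out.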
Wang, Jian4
4ceefc9b18 LLM: Support bitsandbytes config on qlora finetune (#9715)
* test support bitsandbytesconfig

* update style

* update cpu example

* update example

* update readme

* update unit test

* use bfloat16

* update logic

* use int4

* set default bnb_4bit_use_double_quant

* update

* update example

* update model.py

* update

* support lora example
2024-01-04 11:23:16 +08:00
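The `bnb_4bit_*` options in the commit above map onto bitsandbytes' 4-bit quantization. As a hedged, pure-Python illustration of the underlying idea (basic symmetric absmax quantization to int4; the real bitsandbytes kernels, NF4, and double quantization add further machinery):

```python
def absmax_quantize_int4(weights):
    """Symmetric absmax quantization to signed int4 range [-7, 7].

    The scale maps the largest-magnitude weight onto +/-7; every
    weight is then rounded to the nearest representable step.
    """
    scale = max(abs(w) for w in weights) / 7
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int4 codes."""
    return [v * scale for v in q]
```

Double quantization (the `bnb_4bit_use_double_quant` flag) goes one step further and quantizes the per-block `scale` values themselves to save additional memory.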
WeiguangHan
9a14465560 LLM: add peak diff (#9789)
* add peak diff

* small fix

* revert yml file
2024-01-03 18:18:19 +08:00
Mingyu Wei
f4eb5da42d disable arc ut (#9825) 2024-01-03 18:10:34 +08:00
Ruonan Wang
20e9742fa0 LLM: fix chatglm3 issue (#9820)
* fix chatglm3 issue

* small update
2024-01-03 16:15:55 +08:00
Wang, Jian4
a54cd767b1 LLM: Add gguf falcon (#9801)
* init falcon

* update convert.py

* update style
2024-01-03 14:49:02 +08:00
Guancheng Fu
0396fafed1 Update BigDL-LLM-inference image (#9805)
* upgrade to oneapi 2024

* Pin level-zero-gpu version

* add flag
2024-01-03 14:00:09 +08:00
Yishuo Wang
5c6543e070 Reorganize LLM GPU installation document (#9777) 2024-01-03 13:53:05 +08:00
Jason Dai
3ab3105bab Update readme (#9816) 2024-01-03 12:07:00 +08:00
Yuwen Hu
668c2095b1 Remove unnecessary warning when installing llm (#9815) 2024-01-03 10:30:05 +08:00
dingbaorong
f5752ead36 Add whisper test (#9808)
* add whisper benchmark code

* add librispeech_asr.py

* add bigdl license
2024-01-02 16:36:05 +08:00
binbin Deng
6584539c91 LLM: fix installation of codellama (#9813) 2024-01-02 14:32:50 +08:00
Kai Huang
4d01069302 Temp remove baichuan2-13b 1k from arc perf test (#9810) 2023-12-29 12:54:13 +08:00
dingbaorong
a2e668a61d fix arc ut test (#9736) 2023-12-28 16:55:34 +08:00
Qiyuan Gong
f0f9d45eac [LLM] IPEX import support bigdl-core-xe-21 (#9769)
Add support for bigdl-core-xe-21.
2023-12-28 15:23:58 +08:00
dingbaorong
a8baf68865 fix csv_to_html (#9802) 2023-12-28 14:58:51 +08:00
Guancheng Fu
5857a38321 [vLLM] Add option to adjust KV_CACHE_ALLOC_BLOCK_LENGTH (#9782)
* add option kv_cache_block

* change var name
2023-12-28 14:41:47 +08:00
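`KV_CACHE_ALLOC_BLOCK_LENGTH` controls the granularity of KV cache preallocation, which is why making it adjustable is useful: larger blocks mean fewer reallocations during generation, but more memory reserved beyond the final token. A hedged back-of-the-envelope sketch of that trade-off (illustrative arithmetic, not the bigdl-llm allocator):

```python
import math

def kv_alloc_stats(seq_len: int, block_len: int):
    """Blocks allocated and slots reserved-but-unused for one sequence."""
    blocks = math.ceil(seq_len / block_len)
    wasted = blocks * block_len - seq_len
    return blocks, wasted

# A 1000-token sequence with 256-token blocks needs 4 blocks,
# leaving 24 preallocated slots unused.
print(kv_alloc_stats(1000, 256))
```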
Ruonan Wang
99bddd3ab4 LLM: better FP16 support for Intel GPUs (#9791)
* initial support

* fix

* fix style

* fix

* limit esimd usage condition

* refactor code

* fix style

* small fix

* meet code review

* small fix
2023-12-28 13:30:13 +08:00
Yishuo Wang
7d9f6c6efc fix cpuinfo error (#9793) 2023-12-28 09:23:44 +08:00
Wang, Jian4
7ed9538b9f LLM: support gguf mpt (#9773)
* add gguf mpt

* update
2023-12-28 09:22:39 +08:00
Cengguang Zhang
d299f108d0 update falcon attention forward. (#9796) 2023-12-28 09:11:59 +08:00