Yuwen Hu
5ba1dc38d4
[LLM] Change default Linux GPU install option to PyTorch 2.1 ( #9858 )
* Update default xpu to ipex 2.1
* Update related install ut support correspondingly
* Add arc ut tests for both ipex 2.0 and 2.1
* Small fix
* Disable ipex 2.1 test for now as oneapi 2024.0 has not been installed on the test machine
* Update document for default PyTorch 2.1
* Small fix
* Small fix
* Small doc fixes
* Small fixes
2024-01-08 17:16:17 +08:00
Mingyu Wei
ed81baa35e
LLM: Use default typing-extension in LangChain examples ( #9857 )
* remove typing-extensions downgrade in readme; minor code fixes
* fix typos in README
* change default question of docqa.py
2024-01-08 16:50:55 +08:00
Jiao Wang
3b6372ab12
Fix Llama transformers 4.36 support ( #9852 )
* support 4.36
* style
* update
* update
* update
* fix merge
* update
2024-01-08 00:32:23 -08:00
Chen, Zhentao
1b585b0d40
set fp8 default as e5m2 ( #9859 )
2024-01-08 15:53:57 +08:00
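For context on the e5m2 default chosen above: fp8 has two common encodings, e5m2 (5 exponent bits, 2 mantissa bits: wider dynamic range, coarser precision) and e4m3 (narrower range, finer precision), and e5m2's fp16-like range makes it the safer drop-in default. A minimal sketch of the difference, assuming PyTorch >= 2.1 for its float8 dtypes (illustrative, not bigdl-llm code):

```python
# Compare the two common fp8 formats; requires PyTorch >= 2.1.
import torch

for dtype in (torch.float8_e5m2, torch.float8_e4m3fn):
    info = torch.finfo(dtype)
    # e5m2: 5 exponent bits -> wide range, 2 mantissa bits -> coarse steps
    # e4m3: 4 exponent bits -> narrow range, 3 mantissa bits -> finer steps
    print(dtype, "max:", info.max, "smallest normal:", info.tiny, "eps:", info.eps)
```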
Chen, Zhentao
cad5c2f516
fixed harness deps version ( #9854 )
* fixed harness deps version
* fix typo
2024-01-08 15:22:42 +08:00
Kai Huang
62a3c0fc16
Fix quick build doc ( #9853 )
2024-01-08 14:26:34 +08:00
Ruonan Wang
dc995006cc
LLM: add flash attention for mistral / mixtral ( #9846 )
* add flash attention for mistral
* update
* add flash attn for mixtral
* fix style
2024-01-08 09:51:34 +08:00
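The change above swaps the materialized attention matrix for a fused flash-attention kernel on these models. A generic sketch of the pattern, using PyTorch 2.x's scaled_dot_product_attention as a stand-in for the project's own kernel (shapes and mask handling are illustrative, not taken from the patch):

```python
# Generic sketch (not the actual bigdl-llm kernel): PyTorch 2.x routes
# scaled_dot_product_attention to a fused flash-attention implementation
# when device, dtype and shapes allow, and falls back to math otherwise.
import torch
import torch.nn.functional as F

batch, heads, seq_len, head_dim = 1, 8, 1024, 64  # illustrative sizes
q = torch.randn(batch, heads, seq_len, head_dim)
k = torch.randn(batch, heads, seq_len, head_dim)
v = torch.randn(batch, heads, seq_len, head_dim)

# is_causal=True applies the autoregressive mask inside the kernel, so
# the (seq_len x seq_len) score matrix is never materialized in memory.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([1, 8, 1024, 64])
```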
Yishuo Wang
afaa871144
[LLM] support quantize kv cache to fp8 ( #9812 )
2024-01-08 09:28:20 +08:00
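The fp8 KV cache above stores past keys and values in 8 bits instead of 16. A sketch of the basic round trip, assuming PyTorch >= 2.1 float8 dtypes; the tensor layout and cast points are illustrative, not the commit's actual code:

```python
# Illustrative fp8 round trip for one KV cache entry (not bigdl-llm code).
import torch

# (batch, heads, seq_len, head_dim) key states produced during prefill
key_states = torch.randn(1, 8, 128, 64, dtype=torch.float16)

# Store: casting to e5m2 halves cache memory while keeping fp16's range.
key_cache_fp8 = key_states.to(torch.float8_e5m2)

# Load: cast back to the compute dtype before attention reads the cache;
# the rounding error is bounded by e5m2's 2-bit mantissa.
key_restored = key_cache_fp8.to(torch.float16)
print((key_states - key_restored).abs().max())
```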
Jiao Wang
248ae7fad2
Llama optimize_model to support transformers 4.36 ( #9818 )
* support 4.36
* style
* update
* update
* update
2024-01-05 11:30:18 -08:00
WeiguangHan
4269a585b2
LLM: arc perf test using ipex 2.1 ( #9837 )
* LLM: upgrade to ipex_2.1 for arc perf test
* revert llm_performance_tests.yml
2024-01-05 18:12:19 +08:00
Yuwen Hu
86f86a64a2
Small fixes to ipex 2.1 UT support ( #9848 )
2024-01-05 17:36:21 +08:00
Ruonan Wang
a60bda3324
LLM: update check for deepspeed ( #9838 )
2024-01-05 16:44:10 +08:00
Yuwen Hu
f25d23dfbf
[LLM] Add support for PyTorch 2.1 install in UT for GPU ( #9845 )
* Add support for ipex 2.1 install in UT and fix perf test
* Small fix
2024-01-05 16:13:18 +08:00
Ruonan Wang
16433dd959
LLM: fix first token judgement of flash attention ( #9841 )
* fix flash attention
* meet code review
* fix
2024-01-05 13:49:37 +08:00
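The "first token judgement" this fix targets is the prefill/decode split: the flash-attention path applies only to the first (prefill) call, before any KV cache exists. A hedged sketch of the general check, not the exact condition in the patch:

```python
# Illustrative predicate (the real condition in #9841 may differ):
# prefill has no cached keys/values yet; every later decode step does.
def is_first_token(past_key_value) -> bool:
    return past_key_value is None or len(past_key_value) == 0
```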
Yuwen Hu
ad4a6b5096
Fix langchain UT by not downgrading typing-extension ( #9842 )
2024-01-05 13:38:04 +08:00
Yina Chen
f919f5792a
fix kv cache out of bound ( #9827 )
2024-01-05 12:38:57 +08:00
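Out-of-bound failures like the one fixed above typically come from writing past the end of a preallocated KV cache. A sketch of the guard such code needs, with illustrative shapes and block size rather than the actual patch:

```python
# Grow a preallocated (batch, heads, capacity, head_dim) cache before
# writing, instead of indexing past its end (illustrative only).
import torch

def append_kv(cache: torch.Tensor, new_kv: torch.Tensor, used: int,
              block: int = 256):
    needed = used + new_kv.shape[-2]
    if needed > cache.shape[-2]:
        # Extend by whole blocks so later appends stay in bounds.
        extra = ((needed - cache.shape[-2] + block - 1) // block) * block
        pad = torch.zeros(*cache.shape[:-2], extra, cache.shape[-1],
                          dtype=cache.dtype, device=cache.device)
        cache = torch.cat([cache, pad], dim=-2)
    cache[..., used:needed, :] = new_kv
    return cache, needed
```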
Ruonan Wang
5df31db773
LLM: fix accuracy issue of chatglm3 ( #9830 )
* add attn mask for first token
* fix
* fix
* change attn calculation
* fix
* fix
* fix style
* fix style
2024-01-05 10:52:05 +08:00
Jinyi Wan
3147ebe63d
Add cpu and gpu examples for SOLAR-10.7B ( #9821 )
2024-01-05 09:50:28 +08:00
WeiguangHan
ad6b182916
LLM: change the color of peak diff ( #9836 )
2024-01-04 19:30:32 +08:00
Xiangyu Tian
38c05be1c0
[LLM] Fix dtype mismatch in Baichuan2-13b ( #9834 )
2024-01-04 15:34:42 +08:00
Ruonan Wang
8504a2bbca
LLM: update qlora alpaca example to change lora usage ( #9835 )
* update example
* fix style
2024-01-04 15:22:20 +08:00
Ziteng Zhang
05b681fa85
[LLM] IPEX auto importer set on by default ( #9832 )
* Set BIGDL_IMPORT_IPEX default to True
* Remove import intel_extension_for_pytorch as ipex from GPU example
2024-01-04 13:33:29 +08:00
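A minimal sketch of the auto-import behaviour described above, assuming the BIGDL_IMPORT_IPEX flag named in the commit gates it (the real hook lives inside bigdl-llm and may be wired differently):

```python
# Auto-import sketch: enabled unless the user explicitly opts out.
import os

if os.environ.get("BIGDL_IMPORT_IPEX", "1") != "0":  # default: on
    try:
        import intel_extension_for_pytorch as ipex  # noqa: F401
    except ImportError:
        pass  # CPU-only installs simply skip the XPU extension
```

With the default on, GPU scripts can drop their explicit `import intel_extension_for_pytorch as ipex` line, which is what the second bullet removes from the example.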
Wang, Jian4
4ceefc9b18
LLM: Support bitsandbytes config on qlora finetune ( #9715 )
* test support bitsandbytesconfig
* update style
* update cpu example
* update example
* update readme
* update unit test
* use bfloat16
* update logic
* use int4
* set default bnb_4bit_use_double_quant
* update
* update example
* update model.py
* update
* support lora example
2024-01-04 11:23:16 +08:00
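The commit body above names the settings it standardizes on: int4 weights, double quantization on by default, and bfloat16 compute. A sketch of the corresponding Hugging Face BitsAndBytesConfig (the nf4 quant type is the usual companion choice, assumed here rather than taken from the patch):

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # "use int4"
    bnb_4bit_use_double_quant=True,         # default per the commit body
    bnb_4bit_quant_type="nf4",              # assumed, not in the commit
    bnb_4bit_compute_dtype=torch.bfloat16,  # "use bfloat16"
)
```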
WeiguangHan
9a14465560
LLM: add peak diff ( #9789 )
* add peak diff
* small fix
* revert yml file
2024-01-03 18:18:19 +08:00
Mingyu Wei
f4eb5da42d
disable arc ut ( #9825 )
2024-01-03 18:10:34 +08:00
Ruonan Wang
20e9742fa0
LLM: fix chatglm3 issue ( #9820 )
* fix chatglm3 issue
* small update
2024-01-03 16:15:55 +08:00
Wang, Jian4
a54cd767b1
LLM: Add gguf falcon ( #9801 )
* init falcon
* update convert.py
* update style
2024-01-03 14:49:02 +08:00
Guancheng Fu
0396fafed1
Update BigDL-LLM-inference image ( #9805 )
* upgrade to oneapi 2024
* Pin level-zero-gpu version
* add flag
2024-01-03 14:00:09 +08:00
Yishuo Wang
5c6543e070
Reorganize LLM GPU installation document ( #9777 )
2024-01-03 13:53:05 +08:00
Jason Dai
3ab3105bab
Update readme ( #9816 )
2024-01-03 12:07:00 +08:00
Yuwen Hu
668c2095b1
Remove unnecessary warning when installing llm ( #9815 )
2024-01-03 10:30:05 +08:00
dingbaorong
f5752ead36
Add whisper test ( #9808 )
* add whisper benchmark code
* add librispeech_asr.py
* add bigdl license
2024-01-02 16:36:05 +08:00
binbin Deng
6584539c91
LLM: fix installation of codellama ( #9813 )
2024-01-02 14:32:50 +08:00
Kai Huang
4d01069302
Temp remove baichuan2-13b 1k from arc perf test ( #9810 )
2023-12-29 12:54:13 +08:00
dingbaorong
a2e668a61d
fix arc ut test ( #9736 )
2023-12-28 16:55:34 +08:00
Qiyuan Gong
f0f9d45eac
[LLM] IPEX import support bigdl-core-xe-21 ( #9769 )
Add support for bigdl-core-xe-21.
2023-12-28 15:23:58 +08:00
dingbaorong
a8baf68865
fix csv_to_html ( #9802 )
2023-12-28 14:58:51 +08:00
Guancheng Fu
5857a38321
[vLLM] Add option to adjust KV_CACHE_ALLOC_BLOCK_LENGTH ( #9782 )
* add option kv_cache_block
* change var name
2023-12-28 14:41:47 +08:00
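A sketch of what an adjustable KV_CACHE_ALLOC_BLOCK_LENGTH can look like as an environment-variable override; the fallback value and the exact wiring into the vLLM code path are assumptions, not taken from the patch:

```python
import os

# Larger blocks mean fewer cache reallocations but more reserved memory;
# 256 is an illustrative fallback, not the project's actual default.
KV_CACHE_ALLOC_BLOCK_LENGTH = int(os.environ.get("KV_CACHE_ALLOC_BLOCK_LENGTH", "256"))
```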
Ruonan Wang
99bddd3ab4
LLM: better FP16 support for Intel GPUs ( #9791 )
* initial support
* fix
* fix style
* fix
* limit esimd usage condition
* refactor code
* fix style
* small fix
* meet code review
* small fix
2023-12-28 13:30:13 +08:00
Yishuo Wang
7d9f6c6efc
fix cpuinfo error ( #9793 )
2023-12-28 09:23:44 +08:00
Wang, Jian4
7ed9538b9f
LLM: support gguf mpt ( #9773 )
* add gguf mpt
* update
2023-12-28 09:22:39 +08:00
Cengguang Zhang
d299f108d0
update falcon attention forward ( #9796 )
2023-12-28 09:11:59 +08:00
Shaojun Liu
a5e5c3daec
set warm_up: 3, num_trials: 50 for cpu stress test ( #9799 )
2023-12-28 08:55:43 +08:00
dingbaorong
f6bb4ab313
Arc stress test ( #9795 )
* add arc stress test
* trigger ci
* trigger CI
* trigger ci
* disable ci
2023-12-27 21:02:41 +08:00
Kai Huang
40eaf76ae3
Add baichuan2-13b to Arc perf ( #9794 )
* add baichuan2-13b
* fix indent
* revert
2023-12-27 19:38:53 +08:00
Yuwen Hu
dfe28c58bb
Small upload fix for igpu-perf test ( #9792 )
2023-12-27 15:50:58 +08:00
Shaojun Liu
6c75c689ea
bigdl-llm stress test for stable version ( #9781 )
* 1k-512 2k-512 baseline
* add cpu stress test
* update yaml name
* update
* update
* clean up
* test
* update
* update
* update
* test
* update
2023-12-27 15:40:53 +08:00
dingbaorong
5cfb4c4f5b
Arc stable version performance regression test ( #9785 )
* add arc stable version regression test
* empty gpu mem between different models
* trigger ci
* comment out spr test
* trigger ci
* address kai's comments and disable ci
* merge fp8 and int4
* disable ci
2023-12-27 11:01:56 +08:00
binbin Deng
40edb7b5d7
LLM: fix getting environment variable settings ( #9787 )
2023-12-27 09:11:37 +08:00
Kai Huang
689889482c
Reduce max_cache_pos to lower Baichuan2-13B memory usage ( #9694 )
* optimize baichuan2 memory
* fix
* style
* fp16 mask
* disable fp16
* fix style
* empty cache
* revert empty cache
2023-12-26 19:51:25 +08:00