Commit graph

3260 commits

Author SHA1 Message Date
Yina Chen
45c730ff39
Chatglm support compresskv (#11690)
* chatglm4 support compresskv

* fix

* fix style

* support chatglm2

* fix quantkv conflict

* fix style
2024-08-01 18:20:20 +08:00
Qiyuan Gong
762ad49362
Add RANK_WAIT_TIME into DeepSpeed-AutoTP to avoid CPU memory OOM (#11704)
* DeepSpeed-AutoTP will start multiple processes to load models and convert them in CPU memory. If model/rank_num is large, this will lead to OOM. Add RANK_WAIT_TIME to reduce memory usage by controlling model reading parallelism.
2024-08-01 18:16:21 +08:00
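The RANK_WAIT_TIME mechanism described in that commit can be sketched as follows. This is a hypothetical illustration, not the actual DeepSpeed-AutoTP code; the function names and the exact environment-variable handling are assumptions:

```python
import os
import time


def rank_wait_seconds(rank: int) -> int:
    """Seconds this rank should wait before loading the model.

    Staggering ranks by RANK_WAIT_TIME means only a few ranks read
    and convert the model in CPU memory at once, capping peak usage.
    """
    wait_per_rank = int(os.environ.get("RANK_WAIT_TIME", "0"))
    return rank * wait_per_rank


def load_model_staggered(rank: int, load_fn):
    # Rank 0 starts immediately; rank k waits k * RANK_WAIT_TIME seconds.
    time.sleep(rank_wait_seconds(rank))
    return load_fn()
```

With `RANK_WAIT_TIME=0` (the default here), all ranks load simultaneously, reproducing the original OOM-prone behavior; a larger value trades startup latency for lower peak CPU memory.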
hxsz1997
8ef4caaf5d
add 3k and 4k input of nightly perf test on iGPU (#11701)
* Add 3k&4k input in workflow for iGPU (#11685)

* add 3k&4k input in workflow

* comment for test

* comment models for accelerate test

* remove OOM models

* modify typo

* change test model (#11696)

* reverse test models (#11700)
2024-08-01 14:17:46 +08:00
Guancheng Fu
afeca38a47
Fix import vllm condition (#11682) 2024-07-31 13:50:01 +08:00
Ruonan Wang
54bf3a23a6
add fallback for unsupported k-quants (#11691)
* add fallback

* fix style

* fix
2024-07-31 11:39:58 +08:00
Zijie Li
5079ed9e06
Add Llama3.1 example (#11689)
* Add Llama3.1 example

Add Llama3.1 example for Linux arc and Windows MTL

* Changes made to adjust compatibilities

transformers changed to 4.43.1

* Update index.rst

* Update README.md

* Update index.rst

* Update index.rst

* Update index.rst
2024-07-31 10:53:30 +08:00
Jin, Qiao
6e3ce28173
Upgrade glm-4 example transformers version (#11659)
* upgrade glm-4 example transformers version

* move pip install in one line
2024-07-31 10:24:50 +08:00
Jin, Qiao
a44ab32153
Switch to conhost when running on NPU (#11687) 2024-07-30 17:08:06 +08:00
Wang, Jian4
b119825152
Remove tgi parameter validation (#11688)
* remove validation

* add min warm up

* remove unneeded source
2024-07-30 16:37:44 +08:00
Yina Chen
670ad887fc
Qwen support compress kv (#11680)
* Qwen support compress kv

* fix style

* fix
2024-07-30 11:16:42 +08:00
hxsz1997
9b36877897
disable default quantize_kv of GQA on MTL (#11679)
* disable default quantize_kv of GQA on MTL

* fix style

* fix style

* fix style

* fix style

* fix style

* fix style
2024-07-30 09:38:46 +08:00
Yishuo Wang
c02003925b
add mlp for gemma2 (#11678) 2024-07-29 16:10:23 +08:00
RyuKosei
1da1f1dd0e
Combine two versions of run_wikitext.py (#11597)
* Combine two versions of run_wikitext.py

* Update run_wikitext.py

* Update run_wikitext.py

* aligned the format

* update error display

* simplified argument parser

---------

Co-authored-by: jenniew <jenniewang123@gmail.com>
2024-07-29 15:56:16 +08:00
Yishuo Wang
6f999e6e90
add sdp for gemma2 (#11677) 2024-07-29 15:15:47 +08:00
Ruonan Wang
c11d5301d7
add sdp fp8 for llama (#11671)
* add sdp fp8 for llama

* fix style

* refactor
2024-07-29 13:46:22 +08:00
Yishuo Wang
7f88ce23cd
add more gemma2 optimization (#11673) 2024-07-29 11:13:00 +08:00
Yishuo Wang
3e8819734b
add basic gemma2 optimization (#11672) 2024-07-29 10:46:51 +08:00
Jason Dai
418640e466
Update install_gpu.md 2024-07-27 08:30:10 +08:00
Guoqiong Song
336dfc04b1
fix 1482 (#11661)
Co-authored-by: rnwang04 <ruonan1.wang@intel.com>
2024-07-26 12:39:09 -07:00
Heyang Sun
ba01b85c13
empty cache only for 1st token, not rest tokens, to speed up (#11665) 2024-07-26 16:46:21 +08:00
Yina Chen
fc7f8feb83
Support compress kv (#11642)
* mistral snapkv

* update

* mtl update

* update

* update

* update

* add comments

* style fix

* fix style

* support llama

* llama use compress kv

* support mistral 4.40

* fix style

* support diff transformers versions

* move snapkv util to kv

* fix style

* meet comments & small fix

* revert all in one

* fix indent

---------

Co-authored-by: leonardozcm <leonardo1997zcm@gmail.com>
2024-07-26 16:02:00 +08:00
Yishuo Wang
6bcdc6cc8f
fix qwen2 cpu (#11663) 2024-07-26 13:41:51 +08:00
Wang, Jian4
23681fbf5c
Support codegeex4-9b for lightweight-serving (#11648)
* add options, support prompt and not return end_token

* enable openai parameter

* set do_sample None and update style
2024-07-26 09:41:03 +08:00
Guancheng Fu
86fc0492f4
Update oneccl used (#11647)
* Add internal oneccl

* fix

* fix

* add oneccl
2024-07-26 09:38:39 +08:00
Guancheng Fu
a4d30a8211
Change logic for detecting if vllm is available (#11657)
* fix

* fix
2024-07-25 15:24:19 +08:00
Qiyuan Gong
0c6e0b86c0
Refine continuation get input_str (#11652)
* Remove duplicate code in continuation get input_str.
* Avoid infinite loop in all-in-one due to test_length not in the list.
2024-07-25 14:41:19 +08:00
RyuKosei
2fbd375a94
update several models for nightly perf test (#11643)
Co-authored-by: Yishuo Wang <yishuo.wang@intel.com>
2024-07-25 14:06:08 +08:00
Xiangyu Tian
4499d25c26
LLM: Fix ParallelLMHead convert in vLLM cpu (#11654) 2024-07-25 13:07:19 +08:00
binbin Deng
777e61d8c8
Fix qwen2 & int4 on NPU (#11646) 2024-07-24 13:14:39 +08:00
Yishuo Wang
1b3b46e54d
fix chatglm new model (#11639) 2024-07-23 13:44:56 +08:00
Xu, Shuo
7f80db95eb
Change run.py in benchmark to support phi-3-vision in arc-perf (#11638)
Co-authored-by: ATMxsp01 <shou.xu@intel.com>
2024-07-23 09:51:36 +08:00
Xiangyu Tian
060792a648
LLM: Refine Pipeline Parallel FastAPI (#11587)
Refine Pipeline Parallel FastAPI
2024-07-22 15:52:05 +08:00
Shaojun Liu
4d56ef5646
Fix openssf issue (#11632) 2024-07-22 14:14:28 +08:00
Ruonan Wang
ac97b31664
update cpp quickstart about ONEAPI_DEVICE_SELECTOR (#11630)
* update

* update

* small fix
2024-07-22 13:40:28 +08:00
Yuwen Hu
af6d406178
Add section title for conduct graphrag indexing (#11628) 2024-07-22 10:23:26 +08:00
Wang, Jian4
1eed0635f2
Add lightweight serving and support tgi parameter (#11600)
* init tgi request

* update openai api

* update for pp

* update and add readme

* add to docker

* add start bash

* update

* update

* update
2024-07-19 13:15:56 +08:00
Xiangyu Tian
d27a8cd08c
Fix Pipeline Parallel dtype (#11623) 2024-07-19 13:07:40 +08:00
Yishuo Wang
d020ad6397
add save_low_bit support for DiskEmbedding (#11621) 2024-07-19 10:34:53 +08:00
Guoqiong Song
380717f50d
fix gemma for 4.41 (#11531)
* fix gemma for 4.41
2024-07-18 15:02:50 -07:00
Guoqiong Song
5a6211fd56
fix minicpm for transformers>=4.39 (#11533)
* fix minicpm for transformers>=4.39
2024-07-18 15:01:57 -07:00
Yishuo Wang
0209427cf4
Add disk_embedding parameter to support put Embedding layer on CPU (#11617) 2024-07-18 17:06:06 +08:00
Yuwen Hu
2478e2c14b
Add check in iGPU perf workflow for results integrity (#11616)
* Add csv check for igpu benchmark workflow (#11610)

* add csv check for igpu benchmark workflow

* ready to test

---------

Co-authored-by: ATMxsp01 <shou.xu@intel.com>

* Restore the temporarily removed models in iGPU-perf (#11615)

Co-authored-by: ATMxsp01 <shou.xu@intel.com>

---------

Co-authored-by: Xu, Shuo <100334393+ATMxsp01@users.noreply.github.com>
Co-authored-by: ATMxsp01 <shou.xu@intel.com>
2024-07-18 14:13:16 +08:00
Xiangyu Tian
4594a3dd6c
LLM: Fix DummyLayer.weight device in Pipeline Parallel (#11612) 2024-07-18 13:39:34 +08:00
Ruonan Wang
4da93709b1
update doc/setup to use onednn gemm for cpp (#11598)
* update doc/setup to use onednn gemm

* small fix

* Change TOC of graphrag quickstart back
2024-07-18 13:04:38 +08:00
Yishuo Wang
f4077fa905
fix llama3-8b npu long input stuck (#11613) 2024-07-18 11:08:17 +08:00
Zhao Changmin
e5c0058c0e
fix baichuan (#11606) 2024-07-18 09:43:36 +08:00
Guoqiong Song
bfcdc35b04
phi-3 on "transformers>=4.37.0,<=4.42.3" (#11534) 2024-07-17 17:19:57 -07:00
Guoqiong Song
d64711900a
Fix cohere model on transformers>=4.41 (#11575)
* fix cohere model for 4.41
2024-07-17 17:18:59 -07:00
Guoqiong Song
5b6eb85b85
phi model readme (#11595)
Co-authored-by: rnwang04 <ruonan1.wang@intel.com>
2024-07-17 17:18:34 -07:00
Shaojun Liu
2b17536424
Fix python style check: update python version to 3.11 (#11601)
* Update python version to 3.11
2024-07-17 15:39:46 +08:00