Commit graph

3260 commits

Each entry below lists the author, short SHA-1, commit message (with PR number), and commit date.
Wang, Jian4
9c15abf825
Refactor fastapi-serving and add one-card serving (#11581)
* init fastapi-serving one card

* mv api code to source

* update worker

* update for style-check

* add worker

* update bash

* update

* update worker name and add readme

* rename update

* rename to fastapi
2024-07-17 11:12:43 +08:00
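
[Editor's note: as a purely illustrative, hypothetical sketch of the one-card FastAPI serving pattern the refactor above touches — the endpoint name, request schema, and placeholder generate logic below are not taken from the PR:]

```python
# Hypothetical one-card serving skeleton; endpoint name, request schema and
# the echoed "generation" are illustrative, not the PR's actual implementation.
from fastapi import FastAPI
from pydantic import BaseModel
import uvicorn

app = FastAPI()

class GenerateRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 128

@app.post("/generate")
async def generate(req: GenerateRequest):
    # In the real serving code a worker holding the model on a single XPU card
    # would run generation; here we just echo to keep the sketch self-contained.
    output = f"[model output for: {req.prompt[:32]}...]"
    return {"generated_text": output, "max_new_tokens": req.max_new_tokens}

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
```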
Jason Dai
373ccbbb0c
Update README.md (#11592) 2024-07-16 22:13:43 +08:00
Yishuo Wang
5837bc0014
fix chatglm3 npu output (#11590) 2024-07-16 18:16:30 +08:00
Guancheng Fu
06930ab258
Enable ipex-llm optimization for lm head (#11589)
* basic

* Modify convert.py

* fix
2024-07-16 16:48:44 +08:00
Heyang Sun
365adad59f
Support LoRA ChatGLM with Alpaca Dataset (#11580)
* Support LoRA ChatGLM with Alpaca Dataset

* refine

* fix

* add 2-card alpaca
2024-07-16 15:40:02 +08:00
Yina Chen
99c22745b2
fix qwen 14b fp6 abnormal output (#11583) 2024-07-16 10:59:00 +08:00
Yishuo Wang
c279849d27
add disk embedding api (#11585) 2024-07-16 10:43:39 +08:00
Xiangyu Tian
79c742dfd5
LLM: Add XPU Memory Optimizations for Pipeline Parallel (#11567)
Add XPU Memory Optimizations for Pipeline Parallel
2024-07-16 09:44:50 +08:00
Yuwen Hu
f06d2f72fb
Add GraphRAG QuickStart (#11582)
* Add framework for graphrag quickstart

* Add quickstart contents for graphrag

* Small fixes and add toc

* Update for graph

* Small fixes
2024-07-16 09:27:54 +08:00
Xin Qiu
91409ffe8c
Add mtl AOT packages in faq.md (#11577)
* Update faq.md

* Update faq.md

* Update faq.md

* Update faq.md

* Update faq.md
2024-07-16 08:46:03 +08:00
Ch1y0q
50cf563a71
Add example: MiniCPM-V (#11570) 2024-07-15 10:55:48 +08:00
Zhao Changmin
06745e5742
Add npu benchmark all-in-one script (#11571)
* npu benchmark
2024-07-15 10:42:37 +08:00
Yishuo Wang
019da6c0ab
use mlp silu_mul fusion in qwen2 to optimize memory usage (#11574) 2024-07-13 16:32:54 +08:00
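
[Editor's note: for context on what the qwen2 silu_mul fusion above targets — Qwen2-style MLPs compute silu(gate_proj(x)) * up_proj(x); fusing the activation and elementwise multiply avoids materializing two large intermediates. A rough PyTorch illustration of the unfused reference pattern (not the ipex-llm kernel):]

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Qwen2StyleMLP(nn.Module):
    """Unfused reference of the SwiGLU-style MLP used in Qwen2-like models."""
    def __init__(self, hidden_size: int, intermediate_size: int):
        super().__init__()
        self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # silu(gate) * up is the elementwise step a fused silu_mul kernel
        # computes in one pass, so the two intermediate tensors of size
        # intermediate_size never have to coexist in memory.
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))
```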
Xu, Shuo
13a72dc51d
Test MiniCPM performance on iGPU in a more stable way (#11573)
* Test MiniCPM performance on iGPU in a more stable way

* small fix

---------

Co-authored-by: ATMxsp01 <shou.xu@intel.com>
2024-07-12 17:07:41 +08:00
Xiangyu Tian
0981b72275
Fix /generate_stream api in Pipeline Parallel FastAPI (#11569) 2024-07-12 13:19:42 +08:00
Yishuo Wang
a945500a98
fix internlm xcomposer stream chat (#11564) 2024-07-11 18:21:17 +08:00
Zhao Changmin
b9c66994a5
add npu sdp (#11562) 2024-07-11 16:57:35 +08:00
binbin Deng
2b8ad8731e
Support pipeline parallel for glm-4v (#11545) 2024-07-11 16:06:06 +08:00
Xiangyu Tian
7f5111a998
LLM: Refine start script for Pipeline Parallel Serving (#11557)
Refine start script and readme for Pipeline Parallel Serving
2024-07-11 15:45:27 +08:00
Xu, Shuo
1355b2ce06
Add model Qwen-VL-Chat to iGPU-perf (#11558)
* Add model Qwen-VL-Chat to iGPU-perf

* small fix

---------

Co-authored-by: ATMxsp01 <shou.xu@intel.com>
2024-07-11 15:39:02 +08:00
Zhao Changmin
105e124752
optimize phi3-v encoder npu performance and add multimodal example (#11553)
* phi3-v

* readme
2024-07-11 13:59:14 +08:00
Cengguang Zhang
70ab1a6f1a
LLM: unify memory optimization env variables. (#11549)
* LLM: unify memory optimization env variables.

* fix comments.
2024-07-11 11:01:28 +08:00
Wang, Jian4
51f2effb05
Add xpu-tgi manually_build (#11556) 2024-07-11 10:35:40 +08:00
Xu, Shuo
028ad4f63c
Add model phi-3-vision-128k-instruct to iGPU-perf benchmark (#11554)
* try to improve MiniCPM performance

* Add model phi-3-vision-128k-instruct to iGPU-perf benchmark

---------

Co-authored-by: ATMxsp01 <shou.xu@intel.com>
2024-07-10 17:26:30 +08:00
Yishuo Wang
994e49a510
optimize internlm xcomposer performance again (#11551) 2024-07-10 17:08:56 +08:00
Xu, Shuo
61613b210c
try to improve MiniCPM performance (#11552)
Co-authored-by: ATMxsp01 <shou.xu@intel.com>
2024-07-10 16:58:23 +08:00
Yishuo Wang
82f9514303
optimize internlm xcomposer2 performance (#11550) 2024-07-10 15:57:04 +08:00
Zhao Changmin
3c16c9f725
Optimize baichuan on NPU (#11548)
* baichuan_npu
2024-07-10 13:18:48 +08:00
Yuwen Hu
8982ab73d5
Add Yi-6B and StableLM to iGPU perf test (#11546)
* Add transformer4.38.2 test to igpu benchmark (#11529)
  * add transformer4.38.1 test to igpu benchmark
  * use transformers4.38.2 & fix csv name error in 4.38 workflow
  * add model Yi-6B-Chat & remove temporarily most models
  Co-authored-by: ATMxsp01 <shou.xu@intel.com>

* filter some errorlevel (#11541)
  Co-authored-by: ATMxsp01 <shou.xu@intel.com>

* Restore the temporarily removed models in iGPU-perf (#11544)
  * filter some errorlevel
  * restore the temporarily removed models in iGPU-perf
  Co-authored-by: ATMxsp01 <shou.xu@intel.com>

---------

Co-authored-by: Xu, Shuo <100334393+ATMxsp01@users.noreply.github.com>
Co-authored-by: ATMxsp01 <shou.xu@intel.com>
2024-07-09 18:51:23 +08:00
Yishuo Wang
7dc6756d86
add disk embedding (#11543) 2024-07-09 17:38:40 +08:00
Zhao Changmin
76a5802acf
update NPU examples (#11540)
* update NPU examples
2024-07-09 17:19:42 +08:00
Yishuo Wang
99b2802d3b
optimize qwen2 memory (#11535) 2024-07-09 17:14:01 +08:00
Yishuo Wang
2929eb262e
support npu glm4 (#11539) 2024-07-09 15:46:49 +08:00
Xiangyu Tian
a1cede926d
Fix update_kv_cache in Pipeline-Parallel-Serving for glm4-9b model (#11537) 2024-07-09 14:08:04 +08:00
Cengguang Zhang
fa81dbefd3
LLM: update multi gpu write csv in all-in-one benchmark. (#11538) 2024-07-09 11:14:17 +08:00
Xin Qiu
69701b3ec8
fix typo in python/llm/scripts/README.md (#11536) 2024-07-09 09:53:14 +08:00
Jason Dai
099486afb7
Update README.md (#11530) 2024-07-08 20:18:41 +08:00
binbin Deng
66f6ffe4b2
Update GPU HF-Transformers example structure (#11526) 2024-07-08 17:58:06 +08:00
Xu, Shuo
f9a199900d
add model RWKV/v5-Eagle-7B-HF to igpu benchmark (#11528)
Co-authored-by: ATMxsp01 <shou.xu@intel.com>
2024-07-08 15:50:16 +08:00
Shaojun Liu
9b37ca6027
remove (#11527) 2024-07-08 15:49:52 +08:00
Yishuo Wang
c26651f91f
add mistral npu support (#11523) 2024-07-08 13:17:15 +08:00
Jun Wang
5a57e54400
[ADD] add 5 new models for igpu-perf (#11524) 2024-07-08 11:12:15 +08:00
Xu, Shuo
64cfed602d
Add new models to benchmark (#11505)
* Add new models to benchmark

* remove Qwen/Qwen-VL-Chat to pass the validation

---------

Co-authored-by: ATMxsp01 <shou.xu@intel.com>
2024-07-08 10:35:55 +08:00
binbin Deng
252426793b
Fix setting of use_quantize_kv_cache on different GPU in pipeline parallel (#11516) 2024-07-08 09:27:01 +08:00
Yishuo Wang
7cb09a8eac
optimize qwen2 memory usage again (#11520) 2024-07-05 17:32:34 +08:00
Yuwen Hu
8f376e5192
Change igpu perf to mainly test int4+fp16 (#11513) 2024-07-05 17:12:33 +08:00
Jun Wang
1efb6ebe93
[ADD] add transformer_int4_fp16_loadlowbit_gpu_win api (#11511)
* [ADD] add transformer_int4_fp16_loadlowbit_gpu_win api

* [UPDATE] add int4_fp16_lowbit config and description

* [FIX] fix run.py mistake

* [FIX] fix run.py mistake

* [FIX] fix indent; change dtype=float16 to model.half()
2024-07-05 16:38:41 +08:00
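
[Editor's note: the transformer_int4_fp16_loadlowbit_gpu_win benchmark entry above exercises ipex-llm's save-low-bit / load-low-bit flow in int4 with fp16 compute. A minimal sketch of that flow, assuming an XPU-enabled ipex-llm install; the model name and save directory are placeholders, and this is not the benchmark harness itself:]

```python
# Minimal sketch of the int4 + fp16 load-low-bit flow; placeholder paths/model.
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "meta-llama/Llama-2-7b-chat-hf"   # placeholder model id
low_bit_path = "./llama2-7b-int4"              # placeholder save directory

# First run: quantize to 4-bit on load and save the low-bit weights to disk.
model = AutoModelForCausalLM.from_pretrained(
    model_path, load_in_4bit=True, optimize_model=True, trust_remote_code=True
)
model.save_low_bit(low_bit_path)

# Later runs: load the pre-quantized weights directly (what "loadlowbit"
# measures), then move to the Intel GPU with fp16 compute dtype.
model = AutoModelForCausalLM.load_low_bit(low_bit_path, trust_remote_code=True)
model = model.half().to("xpu")
```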
Zhao Changmin
f7e957aaf9
Clean npu dtype branch (#11515)
* clean branch

* create_npu_kernels
2024-07-05 15:45:26 +08:00
Yishuo Wang
14ce058004
add chatglm3 npu support (#11518) 2024-07-05 15:31:27 +08:00
Xin Qiu
a31f2cbe13
update minicpm.py (#11517)
* update minicpm

* meet code review
2024-07-05 15:25:44 +08:00