jenniew | 1a360823ce | Fix typos | 2024-04-12 14:13:21 +08:00
jenniew | cdbb1de972 | Mark Color Modification | 2024-04-12 14:00:50 +08:00
jenniew | 9bbfcaf736 | Mark Color Modification | 2024-04-12 13:30:16 +08:00
jenniew | bb34c6e325 | Mark Color Modification | 2024-04-12 13:26:36 +08:00
jenniew | b151a9b672 | edit csv_to_html to combine en & zh | 2024-04-11 17:35:36 +08:00
jenniew | 591bae092c | combine english and chinese, remove nan | 2024-04-08 19:37:51 +08:00
Cengguang Zhang | 7c43ac0164 | LLM: optimize llama native sdp for split qkv tensor (#10693) | 2024-04-08 17:48:11 +08:00
  * LLM: optimize llama native sdp for split qkv tensor.
  * fix block real size.
  * fix comment.
  * fix style.
  * refactor.
Xin Qiu | 1274cba79b | stablelm fp8 kv cache (#10672) | 2024-04-08 15:16:46 +08:00
  * stablelm fp8 kvcache
  * update
  * fix
  * change to fp8 matmul
  * fix style
  * fix
  * fix
  * meet code review
  * add comment
Yishuo Wang | 65127622aa | fix UT threshold (#10689) | 2024-04-08 14:58:20 +08:00
Cengguang Zhang | c0cd238e40 | LLM: support llama2 8k input with w4a16. (#10677) | 2024-04-08 11:43:15 +08:00
  * LLM: support llama2 8k input with w4a16.
  * fix comment and style.
  * fix style.
  * fix comments and split tensor to quantized attention forward.
  * fix style.
  * refactor name.
  * fix style.
  * fix style.
  * fix style.
  * refactor checker name.
  * refactor native sdp split qkv tensor name.
  * fix style.
  * fix comment, rename variables.
  * fix co-existence of intermediate results.
Zhicun | 321bc69307 | Fix llamaindex ut (#10673) | 2024-04-08 09:47:51 +08:00
  * fix llamaindex ut
  * add GPU ut
yb-peng | 2d88bb9b4b | add test api transformer_int4_fp16_gpu (#10627) | 2024-04-07 15:47:17 +08:00
  * add test api transformer_int4_fp16_gpu
  * update config.yaml and README.md in all-in-one
  * modify run.py in all-in-one
  * re-order test-api
  * re-order test-api in config
  * modify README.md in all-in-one
  * modify README.md in all-in-one
  * modify config.yaml
  ---------
  Co-authored-by: pengyb2001 <arda@arda-arc21.sh.intel.com>
  Co-authored-by: ivy-lv11 <zhicunlv@gmail.com>
Wang, Jian4 | 47cabe8fcc | LLM: Fix no return_last_logit running bigdl_ipex chatglm3 (#10678) | 2024-04-07 15:27:58 +08:00
  * fix no return_last_logits
  * update only for chatglm
Wang, Jian4 | 9ad4b29697 | LLM: CPU benchmark using tcmalloc (#10675) | 2024-04-07 14:17:01 +08:00
binbin Deng | d9a1153b4e | LLM: upgrade deepspeed in AutoTP on GPU (#10647) | 2024-04-07 14:05:19 +08:00
Jin Qiao | 56dfcb2ade | Migrate portable zip to ipex-llm (#10617) | 2024-04-07 13:58:58 +08:00
  * change portable zip prompt to ipex-llm
  * fix chat with ui
  * add no proxy
Zhicun | 9d8ba64c0d | Llamaindex: add tokenizer_id and support chat (#10590) | 2024-04-07 13:51:34 +08:00
  * add tokenizer_id
  * fix
  * modify
  * add from_model_id and from_model_id_low_bit
  * fix typo and add comment
  * fix python code style
  ---------
  Co-authored-by: pengyb2001 <284261055@qq.com>
Jin Qiao | 10ee786920 | Replace with IPEX-LLM in example comments (#10671) | 2024-04-07 13:29:51 +08:00
  * Replace with IPEX-LLM in example comments
  * More replacement
  * revert some changes
Xiangyu Tian | 08018a18df | Remove not-imported MistralConfig (#10670) | 2024-04-07 10:32:05 +08:00
Cengguang Zhang | 1a9b8204a4 | LLM: support int4 fp16 chatglm2-6b 8k input. (#10648) | 2024-04-07 09:39:21 +08:00
Jiao Wang | 69bdbf5806 | Fix vllm print error message issue (#10664) | 2024-04-05 15:08:13 -07:00
  * update chatglm readme
  * Add condition to invalidInputError
  * update
  * update
  * style
Jason Dai | 29d97e4678 | Update readme (#10665) | 2024-04-05 18:01:57 +08:00
Xin Qiu | 4c3e493b2d | fix stablelm2 1.6b (#10656) | 2024-04-03 22:15:32 +08:00
  * fix stablelm2 1.6b
  * meet code review
Jin Qiao | cc8b3be11c | Add GPU and CPU example for stablelm-zephyr-3b (#10643) | 2024-04-03 16:28:31 +08:00
  * Add example for StableLM
  * fix
  * add to readme
Heyang Sun | 6000241b10 | Add Deepspeed Example of FLEX Mistral (#10640) | 2024-04-03 16:04:17 +08:00
Shaojun Liu | d18dbfb097 | update spr perf test (#10644) | 2024-04-03 15:53:55 +08:00
Yishuo Wang | 702e686901 | optimize starcoder normal kv cache (#10642) | 2024-04-03 15:27:02 +08:00
Xin Qiu | 3a9ab8f1ae | fix stablelm logits diff (#10636) | 2024-04-03 15:08:12 +08:00
  * fix logits diff
  * Small fixes
  ---------
  Co-authored-by: Yuwen Hu <yuwen.hu@intel.com>
Zhicun | b827f534d5 | Add tokenizer_id in Langchain (#10588) | 2024-04-03 14:25:35 +08:00
  * fix low-bit
  * fix
  * fix style
  ---------
  Co-authored-by: arda <arda@arda-arc12.sh.intel.com>
Zhicun | f6fef09933 | fix prompt format for llama-2 in langchain (#10637) | 2024-04-03 14:17:34 +08:00
Jiao Wang | 330d4b4f4b | update readme (#10631) | 2024-04-02 23:08:02 -07:00
Kai Huang | c875b3c858 | Add seq len check for llama softmax upcast to fp32 (#10629) | 2024-04-03 12:05:13 +08:00
Jiao Wang | 4431134ec5 | update readme (#10632) | 2024-04-02 19:54:30 -07:00
Jiao Wang | 23e33a0ca1 | Fix qwen-vl style (#10633) | 2024-04-02 18:41:38 -07:00
  * update
  * update
binbin Deng | 2bbd8a1548 | LLM: fix llama2 FP16 & bs>1 & autotp on PVC and ARC (#10611) | 2024-04-03 09:28:04 +08:00
Jiao Wang | 654dc5ba57 | Fix Qwen-VL example problem (#10582) | 2024-04-02 12:17:30 -07:00
  * update
  * update
  * update
  * update
Yuwen Hu | fd384ddfb8 | Optimize StableLM (#10619) | 2024-04-02 18:58:38 +08:00
  * Initial commit for stablelm optimizations
  * Small style fix
  * add dependency
  * Add mlp optimizations
  * Small fix
  * add attention forward
  * Remove quantize kv for now as head_dim=80
  * Add merged qkv
  * fix license
  * Python style fix
  ---------
  Co-authored-by: qiuxin2012 <qiuxin2012cs@gmail.com>
binbin Deng | 27be448920 | LLM: add cpu_embedding and peak memory record for deepspeed autotp script (#10621) | 2024-04-02 17:32:50 +08:00
Yishuo Wang | ba8cc6bd68 | optimize starcoder2-3b (#10625) | 2024-04-02 17:16:29 +08:00
Shaojun Liu | a10f5a1b8d | add python style check (#10620) | 2024-04-02 16:17:56 +08:00
  * add python style check
  * fix style checks
  * update runner
  * add ipex-llm-finetune-qlora-cpu-k8s to manually_build workflow
  * update tag to 2.1.0-SNAPSHOT
Cengguang Zhang | 58b57177e3 | LLM: support bigdl quantize kv cache env and add warning. (#10623) | 2024-04-02 15:41:08 +08:00
  * LLM: support bigdl quantize kv cache env and add warning.
  * fix style.
  * fix comments.
Kai Huang | 0a95c556a1 | Fix starcoder first token perf (#10612) | 2024-04-02 09:21:38 +08:00
  * add bias check
  * update
Cengguang Zhang | e567956121 | LLM: add memory optimization for llama. (#10592) | 2024-04-02 09:07:50 +08:00
  * add initial memory optimization.
  * fix logic.
  * fix logic.
  * remove env var check in mlp split.
Keyan (Kyrie) Zhang | 01f491757a | Modify the link in Langchain-upstream ut (#10608) | 2024-04-01 17:03:40 +08:00
  * Modify the link in Langchain-upstream ut
  * fix langchain-upstream ut
Ruonan Wang | bfc1caa5e5 | LLM: support iq1s for llama2-70b-hf (#10596) | 2024-04-01 13:13:13 +08:00
Ruonan Wang | d6af4877dd | LLM: remove ipex.optimize for gpt-j (#10606) | 2024-04-01 12:21:49 +08:00
  * remove ipex.optimize
  * fix
  * fix
Yishuo Wang | 437a349dd6 | fix rwkv with pip installer (#10591) | 2024-03-29 17:56:45 +08:00
WeiguangHan | 9a83f21b86 | LLM: check user env (#10580) | 2024-03-29 17:19:34 +08:00
  * LLM: check user env
  * small fix
  * small fix
  * small fix
Keyan (Kyrie) Zhang | 848fa04dd6 | Fix typo in Baichuan2 example (#10589) | 2024-03-29 13:31:47 +08:00
Ruonan Wang | 0136fad1d4 | LLM: support iq1_s (#10564) | 2024-03-29 09:43:55 +08:00
  * init version
  * update utils
  * remove unused code