Yishuo Wang | 8086554d33 | use new fp16 sdp in llama and mistral (#10734) | 2024-04-12 10:49:02 +08:00
Yang Wang | 019293e1b9 | Fuse MOE indexes computation (#10716) | 2024-04-11 10:12:55 -07:00
  * try moe
  * use c++ cpu to compute indexes
  * fix style
binbin Deng | 70ed9397f9 | LLM: fix AttributeError of FP16Linear (#10740) | 2024-04-11 17:03:56 +08:00
Keyan (Kyrie) Zhang | 1256a2cc4e | Add chatglm3 long input example (#10739) | 2024-04-11 16:33:43 +08:00
  * Add long context input example for chatglm3
  * Small fix
  * Small fix
  * Small fix
hxsz1997 | fd473ddb1b | Merge pull request #10730 from MargarettMao/MargarettMao-parent_folder | 2024-04-11 15:45:24 +08:00
  Edit ppl update_HTML_parent_folder
Qiyuan Gong | 2d64630757 | Remove transformers version in axolotl example (#10736) | 2024-04-11 14:02:31 +08:00
  * Remove transformers version in axolotl requirements.txt
yb-peng | 2685c41318 | Modify all-in-one benchmark (#10726) | 2024-04-11 13:38:50 +08:00
  * Update 8192 prompt in all-in-one
  * Add cpu_embedding param for linux api
  * Update run.py
  * Update README.md
Xiangyu Tian | 301504aa8d | Fix transformers version warning (#10732) | 2024-04-11 13:12:49 +08:00
Wenjing Margaret Mao | 9bec233e4d | Delete python/llm/test/benchmark/perplexity/update_html_in_parent_folder.py | 2024-04-11 07:21:12 +08:00
  Delete due to repetition
Cengguang Zhang | 4b024b7aac | LLM: optimize chatglm2 8k input. (#10723) | 2024-04-10 16:59:06 +08:00
  * LLM: optimize chatglm2 8k input.
  * rename.
Yuxuan Xia | cd22cb8257 | Update Env check Script (#10709) | 2024-04-10 15:06:00 +08:00
  * Update env check bash file
  * Update env-check
Shaojun Liu | 29bf28bd6f | Upgrade python to 3.11 in Docker Image (#10718) | 2024-04-10 14:41:27 +08:00
  * install python 3.11 for cpu-inference docker image
  * update xpu-inference dockerfile
  * update cpu-serving image
  * update qlora image
  * update lora image
  * update document
Qiyuan Gong | b727767f00 | Add axolotl v0.3.0 with ipex-llm on Intel GPU (#10717) | 2024-04-10 14:38:29 +08:00
  * Add axolotl v0.3.0 support on Intel GPU.
  * Add finetune example on llama-2-7B with Alpaca dataset.
Wang, Jian4 | c9e6d42ad1 | LLM: Fix chatglm3-6b-32k error (#10719) | 2024-04-10 11:24:06 +08:00
  * fix chatglm3-6b-32k
  * update style
Keyan (Kyrie) Zhang | 585c174e92 | Read the value of KV_CACHE_ALLOC_BLOCK_LENGTH from the environment variables (#10707) | 2024-04-10 10:48:46 +08:00
  * Read the value of KV_CACHE_ALLOC_BLOCK_LENGTH from the environment variables.
  * Fix style
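
This commit makes the KV cache block length configurable at runtime. A minimal sketch of the pattern it describes, assuming the usual env-var-with-fallback idiom; the default of 256 is an illustrative assumption, not taken from the commit:

```python
import os

# Allow the KV cache allocation block length to be overridden through an
# environment variable, falling back to a default when it is unset.
# The fallback value 256 is an assumption for illustration only.
KV_CACHE_ALLOC_BLOCK_LENGTH = int(os.environ.get("KV_CACHE_ALLOC_BLOCK_LENGTH", 256))
```
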
Jiao Wang | d1eaea509f | update chatglm readme (#10659) | 2024-04-09 14:24:46 -07:00
Jiao Wang | 878a97077b | Fix llava example to support transformers 4.36 (#10614) | 2024-04-09 13:47:07 -07:00
  * fix llava example
  * update
Jiao Wang | 1e817926ba | Fix low memory generation example issue in transformers 4.36 (#10702) | 2024-04-09 09:56:52 -07:00
  * update cache in low memory generate
  * update
Yuwen Hu | 97db2492c8 | Update setup.py for bigdl-core-xe-esimd-21 on Windows (#10705) | 2024-04-09 18:21:21 +08:00
  * Support bigdl-core-xe-esimd-21 for windows in setup.py
  * Update setup-llm-env accordingly
Zhicun | b4147a97bb | Fix dtype mismatch error (#10609) | 2024-04-09 17:50:33 +08:00
  * fix llama
  * fix
  * fix code style
  * add torch type in model.py
  Co-authored-by: arda <arda@arda-arc19.sh.intel.com>
Shaojun Liu | f37a1f2a81 | Upgrade to python 3.11 (#10711) | 2024-04-09 17:41:17 +08:00
  * create conda env with python 3.11
  * recommend to use Python 3.11
  * update
Yishuo Wang | 8f45e22072 | fix llama2 (#10710) | 2024-04-09 17:28:37 +08:00
Yishuo Wang | e438f941f2 | disable rwkv5 fp16 (#10699) | 2024-04-09 16:42:11 +08:00
Cengguang Zhang | 6a32216269 | LLM: add llama2 8k input example. (#10696) | 2024-04-09 16:02:37 +08:00
  * LLM: add llama2-32K example.
  * refactor name.
  * fix comments.
  * add IPEX_LLM_LOW_MEM notes and update sample output.
Wenjing Margaret Mao | 289cc99cd6 | Update README.md (#10700) | 2024-04-09 16:01:12 +08:00
  Edit "summarize the results"
Wenjing Margaret Mao | d3116de0db | Update README.md (#10701) | 2024-04-09 15:50:25 +08:00
  edit "summarize the results"
Chen, Zhentao | d59e0cce5c | Migrate harness to ipexllm (#10703) | 2024-04-09 15:48:53 +08:00
  * migrate to ipexlm
  * fix workflow
  * fix run_multi
  * fix precision map
  * rename ipexlm to ipexllm
  * rename bigdl to ipex in comments
Keyan (Kyrie) Zhang | 1e27e08322 | Modify example from fp32 to fp16 (#10528) | 2024-04-09 15:45:49 +08:00
  * Modify example from fp32 to fp16
  * Remove Falcon from fp16 example for now
  * Remove MPT from fp16 example
binbin Deng | 44922bb5c2 | LLM: support baichuan2-13b using AutoTP (#10691) | 2024-04-09 14:06:01 +08:00
Yina Chen | c7422712fc | mistral 4.36 use fp16 sdp (#10704) | 2024-04-09 13:50:33 +08:00
Ovo233 | dcb2038aad | Enable optimization for sentence_transformers (#10679) | 2024-04-09 12:33:46 +08:00
  * enable optimization for sentence_transformers
  * fix python style check failure
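
A minimal sketch of what this commit enables: passing a sentence-transformers model through ipex-llm's `optimize_model`. The model name and the `sym_int4` setting are illustrative assumptions, not taken from the commit:

```python
from sentence_transformers import SentenceTransformer
from ipex_llm import optimize_model

# Load an embedding model (model name is an illustrative assumption).
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Replace supported layers with ipex-llm low-bit kernels; sym_int4 is one
# common low_bit choice, assumed here for illustration.
model = optimize_model(model, low_bit="sym_int4")

embeddings = model.encode(["IPEX-LLM accelerates embedding models on Intel hardware."])
print(embeddings.shape)
```
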
Yang Wang | 5a1f446d3c | support fp8 in xetla (#10555) | 2024-04-08 13:22:09 -07:00
  * support fp8 in xetla
  * change name
  * adjust model file
  * support convert back to cpu
  * factor
  * fix bug
  * fix style
Cengguang Zhang | 7c43ac0164 | LLM: optimize llama native sdp for split qkv tensor (#10693) | 2024-04-08 17:48:11 +08:00
  * LLM: optimize llama native sdp for split qkv tensor.
  * fix block real size.
  * fix comment.
  * fix style.
  * refactor.
Xin Qiu | 1274cba79b | stablelm fp8 kv cache (#10672) | 2024-04-08 15:16:46 +08:00
  * stablelm fp8 kvcache
  * update
  * fix
  * change to fp8 matmul
  * fix style
  * fix
  * fix
  * meet code review
  * add comment
Yishuo Wang | 65127622aa | fix UT threshold (#10689) | 2024-04-08 14:58:20 +08:00
Cengguang Zhang | c0cd238e40 | LLM: support llama2 8k input with w4a16. (#10677) | 2024-04-08 11:43:15 +08:00
  * LLM: support llama2 8k input with w4a16.
  * fix comment and style.
  * fix style.
  * fix comments and split tensor to quantized attention forward.
  * fix style.
  * refactor name.
  * fix style.
  * fix style.
  * fix style.
  * refactor checker name.
  * refactor native sdp split qkv tensor name.
  * fix style.
  * fix comment rename variables.
  * fix co-existence of intermediate results.
Zhicun | 321bc69307 | Fix llamaindex ut (#10673) | 2024-04-08 09:47:51 +08:00
  * fix llamaindex ut
  * add GPU ut
yb-peng | 2d88bb9b4b | add test api transformer_int4_fp16_gpu (#10627) | 2024-04-07 15:47:17 +08:00
  * add test api transformer_int4_fp16_gpu
  * update config.yaml and README.md in all-in-one
  * modify run.py in all-in-one
  * re-order test-api
  * re-order test-api in config
  * modify README.md in all-in-one
  * modify README.md in all-in-one
  * modify config.yaml
  Co-authored-by: pengyb2001 <arda@arda-arc21.sh.intel.com>
  Co-authored-by: ivy-lv11 <zhicunlv@gmail.com>
Wang, Jian4 | 47cabe8fcc | LLM: Fix no return_last_logit running bigdl_ipex chatglm3 (#10678) | 2024-04-07 15:27:58 +08:00
  * fix no return_last_logits
  * update only for chatglm
Wang, Jian4 | 9ad4b29697 | LLM: CPU benchmark using tcmalloc (#10675) | 2024-04-07 14:17:01 +08:00
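
For context, a minimal sketch of running a CPU benchmark with tcmalloc preloaded as the allocator; the library path and the `run.py` entry point are illustrative assumptions, not taken from the commit:

```python
import os
import subprocess

# Preload tcmalloc so the benchmarked process uses it instead of the default
# allocator. The .so path varies by distro; this one is an assumption.
env = dict(os.environ, LD_PRELOAD="/usr/lib/x86_64-linux-gnu/libtcmalloc.so.4")
subprocess.run(["python", "run.py"], env=env, check=True)  # hypothetical benchmark script
```
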
binbin Deng | d9a1153b4e | LLM: upgrade deepspeed in AutoTP on GPU (#10647) | 2024-04-07 14:05:19 +08:00
Jin Qiao | 56dfcb2ade | Migrate portable zip to ipex-llm (#10617) | 2024-04-07 13:58:58 +08:00
  * change portable zip prompt to ipex-llm
  * fix chat with ui
  * add no proxy
Zhicun | 9d8ba64c0d | Llamaindex: add tokenizer_id and support chat (#10590) | 2024-04-07 13:51:34 +08:00
  * add tokenizer_id
  * fix
  * modify
  * add from_model_id and from_model_id_low_bit
  * fix typo and add comment
  * fix python code style
  Co-authored-by: pengyb2001 <284261055@qq.com>
Jin Qiao | 10ee786920 | Replace with IPEX-LLM in example comments (#10671) | 2024-04-07 13:29:51 +08:00
  * Replace with IPEX-LLM in example comments
  * More replacement
  * revert some changes
Xiangyu Tian | 08018a18df | Remove not-imported MistralConfig (#10670) | 2024-04-07 10:32:05 +08:00
Cengguang Zhang | 1a9b8204a4 | LLM: support int4 fp16 chatglm2-6b 8k input. (#10648) | 2024-04-07 09:39:21 +08:00
Jiao Wang | 69bdbf5806 | Fix vllm print error message issue (#10664) | 2024-04-05 15:08:13 -07:00
  * update chatglm readme
  * Add condition to invalidInputError
  * update
  * update
  * style
Jason Dai | 29d97e4678 | Update readme (#10665) | 2024-04-05 18:01:57 +08:00
Xin Qiu | 4c3e493b2d | fix stablelm2 1.6b (#10656) | 2024-04-03 22:15:32 +08:00
  * fix stablelm2 1.6b
  * meet code review
Jin Qiao | cc8b3be11c | Add GPU and CPU example for stablelm-zephyr-3b (#10643) | 2024-04-03 16:28:31 +08:00
  * Add example for StableLM
  * fix
  * add to readme