Commit graph

2794 commits

Author SHA1 Message Date
jenniew
591bae092c
combine English and Chinese, remove NaN 2024-04-08 19:37:51 +08:00
Cengguang Zhang
7c43ac0164
LLM: optimize llama native sdp for split qkv tensor (#10693)
* LLM: optimize llama native sdp for split qkv tensor.

* fix block real size.

* fix comment.

* fix style.

* refactor.
2024-04-08 17:48:11 +08:00
Xin Qiu
1274cba79b
stablelm fp8 kv cache (#10672)
* stablelm fp8 kvcache

* update

* fix

* change to fp8 matmul

* fix style

* fix

* fix

* meet code review

* add comment
2024-04-08 15:16:46 +08:00
Yishuo Wang
65127622aa
fix UT threshold (#10689) 2024-04-08 14:58:20 +08:00
Cengguang Zhang
c0cd238e40
LLM: support llama2 8k input with w4a16. (#10677)
* LLM: support llama2 8k input with w4a16.

* fix comment and style.

* fix style.

* fix comments and split tensor to quantized attention forward.

* fix style.

* refactor name.

* fix style.

* fix style.

* fix style.

* refactor checker name.

* refactor native sdp split qkv tensor name.

* fix style.

* fix comment rename variables.

* fix co-existence of intermediate results.
2024-04-08 11:43:15 +08:00
Shaojun Liu
db7c5cb78f
update model path for spr perf test (#10687)
* update model path for spr perf test

* revert
2024-04-08 10:21:56 +08:00
Zhicun
321bc69307
Fix llamaindex ut (#10673)
* fix llamaindex ut

* add GPU ut
2024-04-08 09:47:51 +08:00
Keyan (Kyrie) Zhang
a11b708135
Modify the .md link in chatchat readthedoc (#10681) 2024-04-07 16:33:32 +08:00
yb-peng
2d88bb9b4b
add test api transformer_int4_fp16_gpu (#10627)
* add test api transformer_int4_fp16_gpu

* update config.yaml and README.md in all-in-one

* modify run.py in all-in-one

* re-order test-api

* re-order test-api in config

* modify README.md in all-in-one

* modify README.md in all-in-one

* modify config.yaml

---------

Co-authored-by: pengyb2001 <arda@arda-arc21.sh.intel.com>
Co-authored-by: ivy-lv11 <zhicunlv@gmail.com>
2024-04-07 15:47:17 +08:00
Wang, Jian4
47cabe8fcc
LLM: Fix no return_last_logit running bigdl_ipex chatglm3 (#10678)
* fix no return_last_logits

* update only for chatglm
2024-04-07 15:27:58 +08:00
Shengsheng Huang
33f90beda0
fix quickstart docs (#10676) 2024-04-07 14:26:59 +08:00
Wang, Jian4
9ad4b29697
LLM: CPU benchmark using tcmalloc (#10675) 2024-04-07 14:17:01 +08:00
binbin Deng
d9a1153b4e
LLM: upgrade deepspeed in AutoTP on GPU (#10647) 2024-04-07 14:05:19 +08:00
Jin Qiao
56dfcb2ade
Migrate portable zip to ipex-llm (#10617)
* change portable zip prompt to ipex-llm

* fix chat with ui

* add no proxy
2024-04-07 13:58:58 +08:00
Zhicun
9d8ba64c0d
Llamaindex: add tokenizer_id and support chat (#10590)
* add tokenizer_id

* fix

* modify

* add from_model_id and from_mode_id_low_bit

* fix typo and add comment

* fix python code style

---------

Co-authored-by: pengyb2001 <284261055@qq.com>
2024-04-07 13:51:34 +08:00
Jin Qiao
10ee786920
Replace with IPEX-LLM in example comments (#10671)
* Replace with IPEX-LLM in example comments

* More replacement

* revert some changes
2024-04-07 13:29:51 +08:00
Xiangyu Tian
08018a18df
Remove not-imported MistralConfig (#10670) 2024-04-07 10:32:05 +08:00
Cengguang Zhang
1a9b8204a4
LLM: support int4 fp16 chatglm2-6b 8k input. (#10648) 2024-04-07 09:39:21 +08:00
Jason Dai
ab87b6ab21
Update readme (#10669) 2024-04-07 09:13:45 +08:00
Jiao Wang
69bdbf5806
Fix vllm print error message issue (#10664)
* update chatglm readme

* Add condition to invalidInputError

* update

* update

* style
2024-04-05 15:08:13 -07:00
Jason Dai
29d97e4678
Update readme (#10665) 2024-04-05 18:01:57 +08:00
Yang Wang
ac65ab65c6
Update llama_cpp_quickstart.md (#10663) 2024-04-04 11:00:50 -07:00
Jason Dai
6699d86192
Update index.rst (#10660) 2024-04-04 20:37:33 +08:00
Tom Aarsen
8abf4da1bc
README: Fix typo: tansformers -> transformers (#10657) 2024-04-04 08:54:48 +08:00
Xin Qiu
4c3e493b2d
fix stablelm2 1.6b (#10656)
* fix stablelm2 1.6b

* meet code review
2024-04-03 22:15:32 +08:00
Shengsheng Huang
22f09f618a
update the video demo (#10655) 2024-04-03 20:51:01 +08:00
Jason Dai
7c08d83d9e
Update quickstart (#10654) 2024-04-03 20:43:22 +08:00
Shengsheng Huang
f84e72e7af
revise ollama quickstart (#10653) 2024-04-03 20:35:34 +08:00
yb-peng
f789c2eee4
add ollama quickstart (#10649)
Co-authored-by: arda <arda@arda-arc12.sh.intel.com>
2024-04-03 19:33:39 +08:00
Shengsheng Huang
1ae519ec69
add langchain-chatchat quickstart (#10652) 2024-04-03 19:23:09 +08:00
Shengsheng Huang
45437ddc9a
update indexes, move some sections in coding quickstart to webui (#10651) 2024-04-03 18:18:49 +08:00
Shengsheng Huang
c26e06d5cf
update coding quickstart and webui quickstart for warmup note (#10650) 2024-04-03 17:18:28 +08:00
Yuwen Hu
5b096c39a6
Change style for video rendering (#10646) 2024-04-03 16:31:02 +08:00
Jin Qiao
cc8b3be11c
Add GPU and CPU example for stablelm-zephyr-3b (#10643)
* Add example for StableLM

* fix

* add to readme
2024-04-03 16:28:31 +08:00
Heyang Sun
6000241b10
Add Deepspeed Example of FLEX Mistral (#10640) 2024-04-03 16:04:17 +08:00
Shaojun Liu
d18dbfb097
update spr perf test (#10644) 2024-04-03 15:53:55 +08:00
Heyang Sun
4f6df37805
fix wrong cpu core num seen by docker (#10645) 2024-04-03 15:52:25 +08:00
Yishuo Wang
702e686901
optimize starcoder normal kv cache (#10642) 2024-04-03 15:27:02 +08:00
Xin Qiu
3a9ab8f1ae
fix stablelm logits diff (#10636)
* fix logits diff

* Small fixes

---------

Co-authored-by: Yuwen Hu <yuwen.hu@intel.com>
2024-04-03 15:08:12 +08:00
Ovo233
97c626d76f
add continue quickstart (#10610)
Co-authored-by: Shengsheng Huang <shengsheng.huang@intel.com>
2024-04-03 14:50:11 +08:00
Zhicun
b827f534d5
Add tokenizer_id in Langchain (#10588)
* fix low-bit

* fix

* fix style

---------

Co-authored-by: arda <arda@arda-arc12.sh.intel.com>
2024-04-03 14:25:35 +08:00
Zhicun
f6fef09933
fix prompt format for llama-2 in langchain (#10637) 2024-04-03 14:17:34 +08:00
Jiao Wang
330d4b4f4b
update readme (#10631) 2024-04-02 23:08:02 -07:00
Kai Huang
c875b3c858
Add seq len check for llama softmax upcast to fp32 (#10629) 2024-04-03 12:05:13 +08:00
Shaojun Liu
1aef3bc0ab
verify and refine ipex-llm-finetune-qlora-xpu docker document (#10638)
* verify and refine finetune-xpu document

* update export_merged_model.py link

* update link
2024-04-03 11:33:13 +08:00
Shaojun Liu
0779ca3db0
Bump ossf/scorecard-action to v2.3.1 (#10639)
* Bump ossf/scorecard-action to v2.3.1

* revert
2024-04-03 11:14:18 +08:00
Jiao Wang
4431134ec5
update readme (#10632) 2024-04-02 19:54:30 -07:00
Shaojun Liu
dfcf08c58a
update ossf/scorecard-action to fix TUF invalid key bug (#10635) 2024-04-03 09:55:32 +08:00
Jiao Wang
23e33a0ca1
Fix qwen-vl style (#10633)
* update

* update
2024-04-02 18:41:38 -07:00
binbin Deng
2bbd8a1548
LLM: fix llama2 FP16 & bs>1 & autotp on PVC and ARC (#10611) 2024-04-03 09:28:04 +08:00