Shaojun Liu
924e01b842
Create scorecard.yml ( #10559 )
2024-03-27 16:51:10 +08:00
Guancheng Fu
04baac5a2e
Fix fastchat top_k ( #10560 )
* fix -1 top_k
* fix
* done
2024-03-27 16:01:58 +08:00
binbin Deng
fc8c7904f0
LLM: fix torch_dtype setting when applying fp16 optimization through optimize_model ( #10556 )
2024-03-27 14:18:45 +08:00
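For readers unfamiliar with the API this commit touches, below is a minimal sketch of applying fp16 optimization through optimize_model; the model id and the low_bit="fp16" value are illustrative assumptions, not taken from the commit.

```python
# Hedged sketch: apply fp16 optimization via ipex_llm.optimize_model.
# The model id and low_bit value are assumptions for illustration only.
import torch
from transformers import AutoModelForCausalLM
from ipex_llm import optimize_model

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",  # illustrative model id
    torch_dtype=torch.float16,        # the torch_dtype this commit keeps consistent
)
model = optimize_model(model, low_bit="fp16")  # assumed low-bit option for fp16
```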
Ruonan Wang
ea4bc450c4
LLM: add esimd sdp for pvc ( #10543 )
* add esimd sdp for pvc
* update
* fix
* fix batch
2024-03-26 19:04:40 +08:00
Jin Qiao
817ef2d1de
Add verified models in document index ( #10546 )
* Add verified models in document index
* try to adjust column width
* try to adjust column width
* try to adjust column width
* try to adjust column width
* try replace link
* change to ipex-llm-tutorial
* try use raw html
* adjust table header
2024-03-26 18:25:32 +08:00
Jin Qiao
b78289a595
Remove ipex-llm dependency in readme ( #10544 )
2024-03-26 18:25:14 +08:00
Xiangyu Tian
11550d3f25
LLM: Add length check for IPEX-CPU speculative decoding ( #10529 )
Add length check for IPEX-CPU speculative decoding.
2024-03-26 17:47:10 +08:00
Guancheng Fu
a3b007f3b1
[Serving] Fix fastchat breakage ( #10548 )
* fix fastchat
* fix doc
2024-03-26 17:03:52 +08:00
Yishuo Wang
69a28d6b4c
fix chatglm ( #10540 )
2024-03-26 16:01:00 +08:00
Shaojun Liu
2ecd737474
change bigdl-llm-tutorial to ipex-llm-tutorial in README ( #10547 )
* update bigdl-llm-tutorial to ipex-llm-tutorial
* change to ipex-llm-tutorial
2024-03-26 15:19:53 +08:00
Shaojun Liu
bb9be70105
replace bigdl-llm with ipex-llm ( #10545 )
2024-03-26 15:12:38 +08:00
Shaojun Liu
c563b41491
add nightly_build workflow ( #10533 )
* add nightly_build workflow
* add create-job-status-badge action
* update
* update
* update
* update setup.py
* release
* revert
2024-03-26 12:47:38 +08:00
binbin Deng
0a3e4e788f
LLM: fix mistral hidden_size setting for deepspeed autotp ( #10527 )
2024-03-26 10:55:44 +08:00
Xin Qiu
1dd40b429c
enable fp4 fused mlp and qkv ( #10531 )
* enable fp4 fused mlp and qkv
* update qwen
* update qwen2
2024-03-26 08:34:00 +08:00
Yuwen Hu
9367db7f2b
Small typo fix ( #10535 )
2024-03-25 18:48:44 +08:00
Yuwen Hu
c182acef3f
[Doc] Update IPEX-LLM Index Page ( #10534 )
* Update readthedocs readme before Latest Update
* Update before quick start section in index page
* Update quickstart section
* Further updates for Code Example
* Small fix
* Small fix
* Fix migration guide style
2024-03-25 18:43:32 +08:00
Shaojun Liu
93e6804bfe
update nightly test ( #10520 )
* trigger nightly test
* trigger perf test
* update bigdl-llm to ipex-llm
* revert
2024-03-25 18:22:05 +08:00
Yuwen Hu
e0ea7b8244
[Doc] IPEX-LLM Doc Layout Update ( #10532 )
* Fix navigation bar to 1
* Remove unnecessary python api
* Fixed failed langchain native api doc
* Change index page layout
* Update quicklink for IPEX-LLM
* Simplify toc and add bigdl-llm migration guide
* Update readthedocs readme
* Add missing index link for bigdl-llm migration guide
* Update logo image and repo link
* Update copyright
* Small fix
* Update copyright
* Update top nav bar
* Small fix
2024-03-25 16:23:56 +08:00
Shengsheng Huang
de5bbf83de
update Linux quickstart and migration guide formatting ( #10530 )
* update linux quickstart and formats of migration
* update quickstart
* update format
2024-03-25 15:38:02 +08:00
Shaojun Liu
c8a1c76304
Update README.md ( #10524 )
2024-03-25 13:47:45 +08:00
Jason Dai
5b76f88a8f
Update README.md ( #10518 )
2024-03-25 13:37:01 +08:00
Shengsheng Huang
d7d0e66b18
move migration guide to quickstart ( #10521 )
2024-03-25 11:50:49 +08:00
Dongjie Shi
c4dbd21cfc
update readthedocs project name ( #10519 )
* update readthedocs project name
* update readthedocs project name
2024-03-25 11:44:35 +08:00
Wang, Jian4
16b2ef49c6
Update document by heyang ( #30 )
2024-03-25 10:06:02 +08:00
Wang, Jian4
e2d25de17d
Update docker by heyang ( #29 )
2024-03-25 10:05:46 +08:00
Wang, Jian4
5dc121ee5e
Add guide for running bigdl-example using ipex-llm libs ( #28 )
* add guide
* update
2024-03-22 17:17:21 +08:00
Wang, Jian4
a1048ca7f6
Update setup.py, add new actions, and add compatible mode ( #25 )
* update setup.py
* add new action
* add compatible mode
2024-03-22 15:44:59 +08:00
Wang, Jian4
9df70d95eb
Refactor bigdl.llm to ipex_llm ( #24 )
* Rename bigdl/llm to ipex_llm
* rm python/llm/src/bigdl
* from bigdl.llm to from ipex_llm
2024-03-22 15:41:21 +08:00
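To make the scope of this refactor concrete, here is a before/after import sketch; the class and arguments shown are the commonly documented transformers-style entry point and are assumptions rather than lines from this commit.

```python
# Before the refactor (bigdl-llm layout):
#   from bigdl.llm.transformers import AutoModelForCausalLM
# After the refactor (ipex-llm layout):
from ipex_llm.transformers import AutoModelForCausalLM

# Illustrative usage; model id and options are placeholders.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B-Chat",
    load_in_4bit=True,
    trust_remote_code=True,
)
```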
Jin Qiao
cc5806f4bc
LLM: add save/load example for hf-transformers ( #10432 )
2024-03-22 13:57:47 +08:00
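A rough sketch of the save/load flow such an example typically demonstrates; save_low_bit / load_low_bit are the helpers commonly exposed by this API, and the paths and model id here are placeholders.

```python
from ipex_llm.transformers import AutoModelForCausalLM

# Convert once and persist the low-bit weights (placeholder paths/ids).
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf", load_in_4bit=True
)
model.save_low_bit("./llama2-7b-int4")

# Later: reload the already-converted weights without re-quantizing.
model = AutoModelForCausalLM.load_low_bit("./llama2-7b-int4")
```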
Ruonan Wang
a7da61925f
LLM: add Windows-related info in llama-cpp quickstart ( #10505 )
* first commit
* update
* add image, update Prerequisites
* small fix
2024-03-22 13:51:14 +08:00
Wang, Jian4
34d0a9328c
LLM: Speed-up mixtral in pipeline parallel inference ( #10472 )
* speed-up mixtral
* fix style
2024-03-22 11:06:28 +08:00
Cengguang Zhang
b9d4280892
LLM: fix baichuan7b quantize kv abnormal output. ( #10504 )
* fix abnormal output.
* fix style.
* fix style.
2024-03-22 10:00:08 +08:00
Yishuo Wang
f0f317b6cf
fix a typo in yuan ( #10503 )
2024-03-22 09:40:04 +08:00
Cheen Hau, 俊豪
a7d38bee94
WebUI quickstart: add instruct chat mode and tested models ( #10436 )
* Add instruct chat mode and tested models
* Fix table
* Remove falcon from 'tested models'
* Fixes
* Open image in new window
2024-03-21 20:15:32 +08:00
Kai Huang
92ee2077b3
Update Linux Quickstart ( #10499 )
* fix quick start
* update toc
* expose docker
2024-03-21 20:13:21 +08:00
Guancheng Fu
3a3756b51d
Add FastChat bigdl_worker ( #10493 )
* done
* fix format
* add licence
* done
* fix doc
* refactor folder
* add license
2024-03-21 18:35:05 +08:00
Ruonan Wang
8d0ea1b9b3
LLM: add initial QuickStart for linux cpp usage ( #10418 )
* add first version
* update content and add link
* --amend
* update based on new usage
* update usage based on new pr
* temp save
* basic stable version
* change to backend
2024-03-21 17:35:58 +08:00
Xin Qiu
dba7ddaab3
add sdp fp8 for qwen llama436 baichuan mistral baichuan2 ( #10485 )
* add sdp fp8
* fix style
* fix qwen
* fix baichuan 13
* revert baichuan 13b and baichuan2-13b
* fix style
* update
2024-03-21 17:23:05 +08:00
Kai Huang
30f111cd32
lm_head empty_cache for more models ( #10490 )
* modify constraint
* fix style
2024-03-21 17:11:43 +08:00
Yuwen Hu
1579ee4421
[LLM] Add nightly igpu perf test for INT4+FP16 1024-128 ( #10496 )
2024-03-21 16:07:06 +08:00
Yuxuan Xia
3d59c74a0b
Linux quick start ( #10391 )
* Fix Baichuan2 prompt format
* Add linux quick start guide
* Modify the linux installation quick start
* Adjust Linux quick start
* Adjust Linux quick start
* Add linux quick start screenshots
* Revert Baichuan2 changes
* Fix linux quick start typo
* Fix linux quick start typos
* Remove linux quick start downgrade kernel
* Change linux quick start bigdl install
* Modify linux quick start
2024-03-21 16:02:29 +08:00
binbin Deng
2958ca49c0
LLM: add patching function for llm finetuning ( #10247 )
2024-03-21 16:01:01 +08:00
hxsz1997
158a49986a
Add quickstart for installing bigdl-llm in Docker on Windows with Intel GPU ( #10421 )
* add quickstart for installing bigdl in Docker on Windows with Intel GPU
* modify the inference command
* add note of required disk space
* add the issue of iGPU
2024-03-21 15:57:27 +08:00
Zhicun
5b97fdb87b
update deepseek example readme ( #10420 )
* update readme
* update
* update readme
2024-03-21 15:21:48 +08:00
hxsz1997
a5f35757a4
Migrate langchain rag cpu example to gpu ( #10450 )
* add langchain rag on gpu
* add rag example in readme
* add trust_remote_code in TransformersEmbeddings.from_model_id
* add trust_remote_code in TransformersEmbeddings.from_model_id in cpu
2024-03-21 15:20:46 +08:00
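A hedged sketch of the embedding call this commit extends; only TransformersEmbeddings.from_model_id and the new trust_remote_code flag come from the commit message, while the import path and model id are assumptions.

```python
# Assumed import path; the commit only names the class and classmethod.
from ipex_llm.langchain.embeddings import TransformersEmbeddings

embeddings = TransformersEmbeddings.from_model_id(
    model_id="BAAI/bge-small-en-v1.5",  # illustrative embedding model
    trust_remote_code=True,             # flag added by this commit
)
```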
Heyang Sun
c672e97239
Fix CPU finetuning docker ( #10494 )
* Fix CPU finetuning docker
* Update README.md
2024-03-21 11:53:30 +08:00
binbin Deng
85ef3f1d99
LLM: add empty cache in deepspeed autotp benchmark script ( #10488 )
2024-03-21 10:51:23 +08:00
Xiangyu Tian
5a5fd5af5b
LLM: Add speculative benchmark on CPU/XPU ( #10464 )
Add speculative benchmark on CPU/XPU.
2024-03-21 09:51:06 +08:00
Ruonan Wang
28c315a5b9
LLM: fix deepspeed error of finetuning on xpu ( #10484 )
2024-03-21 09:46:25 +08:00
Kai Huang
021d77fd22
Remove softmax upcast fp32 in llama ( #10481 )
* update
* fix style
2024-03-20 18:17:34 +08:00