Xin Qiu
37bb0cbf8f
Speed up gpt-j in gpubenchmark ( #9000 )
* Speedup gpt-j in gpubenchmark
* meet code review
2023-09-19 14:22:28 +08:00
Zhao Changmin
2a05581da7
LLM: Apply low_cpu_mem_usage algorithm on optimize_model API ( #8987 )
* low_cpu_mem_usage
2023-09-18 21:41:42 +08:00
Cengguang Zhang
8299b68fea
update readme. ( #8996 )
2023-09-18 17:06:15 +08:00
binbin Deng
c1d25a51a8
LLM: add optimize_model example for bert ( #8975 )
2023-09-18 16:18:35 +08:00
Cengguang Zhang
74338fd291
LLM: add auto torch dtype in benchmark. ( #8981 )
2023-09-18 15:48:25 +08:00
Ruonan Wang
cabe7c0358
LLM: add baichuan2 example for arc ( #8994 )
* add baichuan2 examples
* add link
* small fix
2023-09-18 14:32:27 +08:00
Guancheng Fu
7353882732
add Dockerfile ( #8993 )
2023-09-18 13:25:37 +08:00
binbin Deng
0a552d5bdc
LLM: fix installation on windows ( #8989 )
2023-09-18 11:14:54 +08:00
Xiangyu Tian
52878d3e5f
[PPML] Enable TLS in Attestation API Serving for LLM finetuning ( #8945 )
Add enableTLS flag to enable TLS in Attestation API Serving for LLM finetuning.
2023-09-18 09:32:25 +08:00
Ruonan Wang
32716106e0
update use_cache=True ( #8986 )
2023-09-18 07:59:33 +08:00
Xin Qiu
64ee1d7689
update run_transformer_int4_gpu ( #8983 )
* xpuperf
* update run.py
* clean up
* update
* update
* meet code review
2023-09-15 15:10:04 +08:00
Heyang Sun
aeef73a182
Tell User How to Find Fine-tuned Model in README ( #8985 )
* Tell User How to Find Fine-tuned Model in README
* Update README.md
2023-09-15 13:45:40 +08:00
Zhao Changmin
16b9412e80
tie_word_embeddings ( #8977 )
tie_word_embeddings
2023-09-15 10:17:09 +08:00
JinBridge
c12b8f24b6
LLM: add use_cache=True for all gpu examples ( #8971 )
2023-09-15 09:54:38 +08:00
Guancheng Fu
d1b62ef2f2
[bigdl-llm] Remove serving-dep from all_requires ( #8980 )
* Remove serving-dep from all_requires
* pin fastchat version
2023-09-14 16:59:24 +08:00
Yishuo Wang
bcf456070c
fix bloom-176b int overflow ( #8973 )
2023-09-14 14:37:57 +08:00
Wang Jian
7563b26ca9
Occlum fastchat build: use no-cache and update order ( #8972 )
2023-09-14 14:05:15 +08:00
Ruonan Wang
dd57623650
LLM: reduce GPU memory for optimize_model=True ( #8965 )
* reduce gpu memory for llama & chatglm
* change to device type
2023-09-13 17:27:09 +08:00
binbin Deng
be29c75c18
LLM: refactor gpu examples ( #8963 )
* restructure
* change to hf-transformers-models/
2023-09-13 14:47:47 +08:00
Cengguang Zhang
cca84b0a64
LLM: update llm benchmark scripts. ( #8943 )
* update llm benchmark scripts.
* change transformer_bf16 to pytorch_autocast_bf16.
* add autocast in transformer int4.
* revert autocast.
* add "pytorch_autocast_bf16" to doc
* fix comments.
2023-09-13 12:23:28 +08:00
SONG Ge
7132ef6081
[LLM Doc] Add optimize_model doc in transformers api ( #8957 )
* add optimize in from_pretrained
* add api doc for load_low_bit
* update api docs following comments
* update api docs
* update
* reword comments
2023-09-13 10:42:33 +08:00
Zhao Changmin
c32c260ce2
LLM: Add save/load API in optimize_model to support general pytorch model ( #8956 )
* support hf format SL
2023-09-13 10:22:00 +08:00
Ruonan Wang
4de73f592e
LLM: add gpu example of chinese-llama-2-7b ( #8960 )
* add gpu example of chinese-llama2
* update model name and link
* update name
2023-09-13 10:16:51 +08:00
Guancheng Fu
0bf5857908
[LLM] Integrate FastChat as a serving framework for BigDL-LLM ( #8821 )
* Finish changing
* format
* add licence
* Add licence
* fix
* fix
* Add xpu support for fschat
* Fix patch
* Also install webui dependencies
* change setup.py dependency installs
* fix
* format
* final test
2023-09-13 09:28:05 +08:00
Yuwen Hu
cb534ed5c4
[LLM] Add Arc demo gif to readme and readthedocs ( #8958 )
* Add arc demo in main readme
* Small style fix
* Realize using table
* Update based on comments
* Small update
* Try to solve the height problem
* Small fix
* Update demo for inner llm readme
* Update demo video for readthedocs
* Small fix
* Update based on comments
2023-09-13 09:23:52 +08:00
Jason Dai
448a9e813a
Update Readme ( #8959 )
2023-09-12 17:27:26 +08:00
Zhao Changmin
dcaa4dc130
LLM: Support GQA on llama kvcache ( #8938 )
* support GQA
2023-09-12 12:18:40 +08:00
binbin Deng
2d81521019
LLM: add optimize_model examples for llama2 and chatglm ( #8894 )
* add llama2 and chatglm optimize_model examples
* update default usage
* update command and some descriptions
* move folder and remove general_int4 descriptions
* change folder name
2023-09-12 10:36:29 +08:00
Zhao Changmin
f00c442d40
fix accelerate ( #8946 )
Co-authored-by: leonardozcm <leonardozcm@gmail.com>
2023-09-12 09:27:58 +08:00
Yang Wang
16761c58be
Make llama attention stateless ( #8928 )
* Make llama attention stateless
* fix style
* fix chatglm
* fix chatglm xpu
2023-09-11 18:21:50 -07:00
Zhao Changmin
e62eda74b8
refine ( #8912 )
Co-authored-by: leonardozcm <leonardozcm@gmail.com>
2023-09-11 16:40:33 +08:00
Yina Chen
df165ad165
init ( #8933 )
2023-09-11 14:30:55 +08:00
Xiangyu Tian
4dce238867
Fix incorrect usage in Finetuning docs for enabling TDX ( #8932 )
2023-09-08 16:03:14 +08:00
Ruonan Wang
b3f5dd5b5d
LLM: update q8 convert xpu&cpu ( #8930 )
2023-09-08 16:01:17 +08:00
Yina Chen
33d75adadf
[LLM] Support q5_0 on arc ( #8926 )
* support q5_0
* delete
* fix style
2023-09-08 15:52:36 +08:00
Yuwen Hu
ca35c93825
[LLM] Fix langchain UT ( #8929 )
* Change dependency version for langchain uts
* Downgrade pandas version instead; and update example readme accordingly
2023-09-08 13:51:04 +08:00
Xin Qiu
ea0853c0b5
update benchmark_utils readme ( #8925 )
* update readme
* meet code review
2023-09-08 10:30:26 +08:00
Xiangyu Tian
ea6d4148e9
[PPML] Add attestation for LLM Finetuning ( #8908 )
Add TDX attestation for LLM Finetuning in TDX CoCo
---------
Co-authored-by: Heyang Sun <60865256+Uxito-Ada@users.noreply.github.com>
2023-09-08 10:24:04 +08:00
Yang Wang
ee98cdd85c
Support latest transformer version ( #8923 )
* Support latest transformer version
* fix style
2023-09-07 19:01:32 -07:00
Yang Wang
25428b22b4
Fix chatglm2 attention and kv cache ( #8924 )
* fix chatglm2 attention
* fix bf16 bug
* make model stateless
* add utils
* cleanup
* fix style
2023-09-07 18:54:29 -07:00
Yina Chen
b209b8f7b6
[LLM] Fix arc qtype != q4_0 generate issue ( #8920 )
* Fix arc precision!=q4_0 generate issue
* meet comments
2023-09-07 08:56:36 -07:00
Cengguang Zhang
3d2efe9608
LLM: update llm latency benchmark. ( #8922 )
2023-09-07 19:00:19 +08:00
binbin Deng
7897eb4b51
LLM: add benchmark scripts on GPU ( #8916 )
2023-09-07 18:08:17 +08:00
Xin Qiu
d8a01d7c4f
fix chatglm in run.py ( #8919 )
2023-09-07 16:44:10 +08:00
Xin Qiu
e9de9d9950
benchmark for native int4 ( #8918 )
* native4
* update
* update
* update
2023-09-07 15:56:15 +08:00
Ruonan Wang
c0797ea232
LLM: update setup to specify bigdl-core-xe version ( #8913 )
2023-09-07 15:11:55 +08:00
Yuwen Hu
c152c719ea
Update bigdl logo url to the one hosted on readthedocs ( #8911 )
2023-09-07 14:40:57 +08:00
Ruonan Wang
057e77e229
LLM: update benchmark_utils.py to handle do_sample=True ( #8903 )
2023-09-07 14:20:47 +08:00
Yuwen Hu
3d1c7e7082
Small link fix ( #8910 )
2023-09-07 13:35:29 +08:00
Yang Wang
c34400e6b0
Use new layout for xpu qlinear ( #8896 )
* use new layout for xpu qlinear
* fix style
2023-09-06 21:55:33 -07:00