Commit graph

1530 commits

binbin Deng
330e67e2c0 LLM: update example doc page (#9186) 2023-10-17 16:26:11 +08:00
Cheen Hau, 俊豪
66c2e45634 Add unit tests for optimized model correctness (#9151)
* Add test to check correctness of optimized model

* Refactor optimized model test

* Use models in llm-unit-test

* Use AutoTokenizer for bloom

* Print out each passed test

* Remove unused tokenizer from import
2023-10-17 14:46:41 +08:00
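
A minimal sketch of the kind of correctness check this PR adds (hedged: the loose tolerance and the helper shape are illustrative assumptions, not the PR's exact test code):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from bigdl.llm import optimize_model  # the repo's optimization entry point

    def check_optimized_model(model_path, prompt="Hello"):
        tokenizer = AutoTokenizer.from_pretrained(model_path)
        inputs = tokenizer(prompt, return_tensors="pt")
        ref = AutoModelForCausalLM.from_pretrained(model_path)
        with torch.no_grad():
            ref_logits = ref(**inputs).logits
        opt = optimize_model(AutoModelForCausalLM.from_pretrained(model_path))
        with torch.no_grad():
            opt_logits = opt(**inputs).logits
        # low-bit conversion is lossy, so compare with a loose tolerance
        assert torch.allclose(ref_logits, opt_logits, atol=1e-1, rtol=1e-1)
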
Jin Qiao
d946bd7c55 LLM: add CPU More-Data-Types and Save-Load examples (#9179) 2023-10-17 14:38:52 +08:00
Ruonan Wang
c0497ab41b LLM: support kv_cache optimization for Qwen-VL-Chat (#9193)
* support qwen_vl_chat

* fix style
2023-10-17 13:33:56 +08:00
binbin Deng
1cd9ab15b8 LLM: fix ChatGLMConfig check (#9191) 2023-10-17 11:52:56 +08:00
Yang Wang
7160afd4d1 Support XPU DDP training and autocast for LowBitMatmul (#9167)
* support autocast in low bit matmul

* Support XPU DDP training

* fix amp
2023-10-16 20:47:19 -07:00
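
From the user side, the autocast support roughly enables a training step like this (a sketch assuming intel_extension_for_pytorch registers the "xpu" device; not the PR's code):

    import torch
    import intel_extension_for_pytorch as ipex  # noqa: F401 - registers "xpu"

    def train_step_bf16(model, batch, optimizer):
        # autocast runs the low-bit matmul compute under bf16 on XPU
        optimizer.zero_grad()
        with torch.autocast(device_type="xpu", dtype=torch.bfloat16):
            loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        return loss.item()
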
Ruonan Wang
77afb8796b LLM: fix convert of chatglm (#9190) 2023-10-17 10:48:13 +08:00
dingbaorong
af3b575c7e expose modules_to_not_convert in optimize_model (#9180)
* expose modules_to_not_convert in optimize_model

* some fixes
2023-10-17 09:50:26 +08:00
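
Roughly, the exposed argument lets callers exclude named submodules from low-bit conversion, e.g. (a sketch; "lm_head" is just an illustrative module to keep in full precision):

    from transformers import AutoModelForCausalLM
    from bigdl.llm import optimize_model

    model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
    # leave the output head unconverted to preserve generation quality
    model = optimize_model(model, modules_to_not_convert=["lm_head"])
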
Cengguang Zhang
5ca8a851e9 LLM: add fuse optimization for Mistral. (#9184)
* add fuse optimization for mistral.

* fix.

* fix

* fix style.

* fix.

* fix error.

* fix style.

* fix style.
2023-10-16 16:50:31 +08:00
Jiao Wang
49e1381c7f update rope (#9155) 2023-10-15 21:51:45 -07:00
Jason Dai
b192a8032c Update llm-readme (#9176) 2023-10-16 10:54:52 +08:00
binbin Deng
a164c24746 LLM: add kv_cache optimization for chatglm2-6b-32k (#9165) 2023-10-16 10:43:15 +08:00
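
The kv_cache optimizations in this series share one idea: preallocate the cache and grow it in place rather than torch.cat-ing on every decoding step. A generic sketch of the technique (not the chatglm2-specific code):

    import torch

    def append_kv(k_cache, v_cache, new_k, new_v, seq_len):
        # caches are preallocated [batch, heads, max_len, head_dim];
        # copy the new step in place instead of reallocating with torch.cat
        step = new_k.size(2)
        k_cache[:, :, seq_len:seq_len + step] = new_k
        v_cache[:, :, seq_len:seq_len + step] = new_v
        return k_cache[:, :, :seq_len + step], v_cache[:, :, :seq_len + step]
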
Lilac09
326ef7f491 add README for llm-inference-cpu (#9147)
* add README for llm-inference-cpu

* modify README

* add README for llm-inference-cpu on Windows
2023-10-16 10:27:44 +08:00
Yang Wang
7a2de00b48 Fixes for xpu Bf16 training (#9156)
* Support bf16 training

* Use a stable transformer version

* remove env

* fix style
2023-10-14 21:28:59 -07:00
Cengguang Zhang
51a133de56 LLM: add fuse rope and norm optimization for Baichuan. (#9166)
* add fuse rope optimization.

* add rms norm optimization.
2023-10-13 17:36:52 +08:00
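
The rope half of these fusions computes the standard rotary position embedding; for reference, the common unfused formulation that the fused kernels replace:

    import torch

    def rotate_half(x):
        # split the head dim in two and rotate: (x1, x2) -> (-x2, x1)
        x1, x2 = x.chunk(2, dim=-1)
        return torch.cat((-x2, x1), dim=-1)

    def apply_rotary_pos_emb(q, k, cos, sin):
        # q, k: [batch, heads, seq, head_dim]; cos/sin broadcast over batch/heads
        return (q * cos) + (rotate_half(q) * sin), (k * cos) + (rotate_half(k) * sin)
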
Jin Qiao
db7f938fdc LLM: add replit and starcoder to gpu pytorch model example (#9154) 2023-10-13 15:44:17 +08:00
Jin Qiao
797b156a0d LLM: add dolly-v1 and dolly-v2 to gpu pytorch model example (#9153) 2023-10-13 15:43:35 +08:00
Yishuo Wang
259cbb4126 [LLM] add initial bigdl-llm-init (#9150) 2023-10-13 15:31:45 +08:00
Cengguang Zhang
433f408081 LLM: Add fuse rope and norm optimization for Aquila. (#9161)
* add fuse norm optimization.

* add fuse rope optimization
2023-10-13 14:18:37 +08:00
SONG Ge
e7aa67e141 [LLM] Add rope optimization for internlm (#9159)
* add rope and norm optimization for internlm and gptneox

* revert gptneox back and split with PR #9155

* add norm_forward

* style fix

* update

* update
2023-10-13 14:18:28 +08:00
Jin Qiao
f754ab3e60 LLM: add baichuan and baichuan2 to gpu pytorch model example (#9152) 2023-10-13 13:44:31 +08:00
Ruonan Wang
b8aee7bb1b LLM: Fix Qwen kv_cache optimization (#9148)
* first commit

* ut pass

* accelerate rotate half by using common util function

* fix style
2023-10-12 15:49:42 +08:00
binbin Deng
69942d3826 LLM: fix model check before attention optimization (#9149) 2023-10-12 15:21:51 +08:00
JIN Qiao
1a1ddc4144 LLM: Add Replit CPU and GPU example (#9028) 2023-10-12 13:42:14 +08:00
JIN Qiao
d74834ff4c LLM: add gpu pytorch-models example llama2 and chatglm2 (#9142) 2023-10-12 13:41:48 +08:00
Ruonan Wang
4f34557224 LLM: support num_beams in all-in-one benchmark (#9141)
* support num_beams

* fix
2023-10-12 13:35:12 +08:00
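
In transformers terms, the new knob just forwards to generate(); a hypothetical timed beam-search call of the kind the benchmark measures:

    import time
    import torch

    def timed_generate(model, tokenizer, prompt, num_beams=4, max_new_tokens=32):
        inputs = tokenizer(prompt, return_tensors="pt")
        start = time.perf_counter()
        with torch.no_grad():
            out = model.generate(**inputs, num_beams=num_beams,
                                 max_new_tokens=max_new_tokens)
        elapsed = time.perf_counter() - start
        return tokenizer.decode(out[0], skip_special_tokens=True), elapsed
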
Ruonan Wang
62ac7ae444 LLM: fix inaccurate input / output tokens of current all-in-one benchmark (#9137)
* first fix

* fix all apis

* fix
2023-10-11 17:13:34 +08:00
Lilac09
e02fbb40cc add bigdl-llm-tutorial into llm-inference-cpu image (#9139)
* add bigdl-llm-tutorial into llm-inference-cpu image

* modify Dockerfile

* modify Dockerfile
2023-10-11 16:41:04 +08:00
ZehuaCao
65dd73b62e Update manually_build.yml (#9138)
* Update manually_build.yml

fix llm-serving-tdx image build dir

* Update manually_build.yml
2023-10-11 15:07:09 +08:00
binbin Deng
eb3fb18eb4 LLM: improve PyTorch API doc (#9128) 2023-10-11 15:03:39 +08:00
Ziteng Zhang
4a0a3c376a Add stand-alone mode on cpu for finetuning (#9127)
* Added steps for finetune on CPU in stand-alone mode

* Add stand-alone mode to bigdl-lora-finetuing-entrypoint.sh

* delete redundant docker commands

* Update README.md

Switch to intelanalytics/bigdl-llm-finetune-cpu:2.4.0-SNAPSHOT and append example outputs so users can verify the run

* Update bigdl-lora-finetuing-entrypoint.sh

Add some tunable parameters

* Add parameters --cpus and -e WORKER_COUNT_DOCKER

* Modified the CPU number range parameters

* Set -ppn to CCL_WORKER_COUNT

* Add related configuration suggestions in README.md
2023-10-11 15:01:21 +08:00
binbin Deng
995b0f119f LLM: update some gpu examples (#9136) 2023-10-11 14:23:56 +08:00
Ruonan Wang
1c8d5da362 LLM: fix llama tokenizer for all-in-one benchmark (#9129)
* fix tokenizer for gpu benchmark

* fix ipex fp16

* meet code review

* fix
2023-10-11 13:39:39 +08:00
binbin Deng
2ad67a18b1 LLM: add mistral examples (#9121) 2023-10-11 13:38:15 +08:00
Ruonan Wang
1363e666fc LLM: update benchmark_util.py for beam search (#9126)
* update reorder_cache

* fix
2023-10-11 09:41:53 +08:00
Guoqiong Song
e8c5645067 add LLM example of aquila on GPU (#9056)
* aquila, dolly-v1, dolly-v2, vicuna
2023-10-10 17:01:35 -07:00
Yuwen Hu
dc70fc7b00 Update performance tests for dependency of bigdl-core-xe-esimd (#9124) 2023-10-10 19:32:17 +08:00
Lilac09
30e3c196f3 Merge pull request #9108 from Zhengjin-Wang/main
Add instructions for chat.py in bigdl-llm-cpu
2023-10-10 16:40:52 +08:00
Lilac09
1e78b0ac40 Optimize LoRA Docker by Shrinking Image Size (#9110)
* modify dockerfile

* modify dockerfile
2023-10-10 15:53:17 +08:00
Ruonan Wang
388f688ef3 LLM: update setup.py to add bigdl-core-xe package (#9122) 2023-10-10 15:02:48 +08:00
Zhao Changmin
1709beba5b LLM: Explicitly close pickle file pointer before removing temporary directory (#9120)
* fp close
2023-10-10 14:57:23 +08:00
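
The underlying pattern is generic: a file handle left open inside a temporary directory can make its removal fail (notably on Windows). A minimal illustration of the fix:

    import os
    import pickle
    import tempfile

    with tempfile.TemporaryDirectory() as tmp:
        path = os.path.join(tmp, "weights.pkl")
        f = open(path, "wb")
        pickle.dump({"w": [1, 2, 3]}, f)
        f.close()  # close explicitly *before* the directory is removed
    # cleanup now succeeds with no live handle into tmp
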
Yuwen Hu
0e09dd926b [LLM] Fix example test (#9118)
* Update llm example test link due to example layout change

* Add better change detection
2023-10-10 13:24:18 +08:00
Ruonan Wang
ad7d9231f5 LLM: add benchmark script for Max gpu and ipex fp16 gpu (#9112)
* add pvc bash

* meet code review

* rename to run-max-gpu.sh
2023-10-10 10:18:41 +08:00
Lilac09
6264381f2e Merge pull request #9117 from Zhengjin-Wang/manually_build
add llm-serving-xpu to GitHub Actions
2023-10-10 10:09:06 +08:00
Zhengjin Wang
0dbb3a283e amend manually_build 2023-10-10 10:03:23 +08:00
Zhengjin Wang
bb3bb46400 add llm-serving-xpu to GitHub Actions 2023-10-10 09:48:58 +08:00
binbin Deng
e4d1457a70 LLM: improve transformers style API doc (#9113) 2023-10-10 09:31:00 +08:00
Yuwen Hu
65212451cc [LLM] Small update to performance tests (#9106)
* small updates to llm performance tests regarding model handling

* Small fix
2023-10-09 16:55:25 +08:00
Zhao Changmin
edccfb2ed3 LLM: Check model device type (#9092)
* check model device
2023-10-09 15:49:15 +08:00
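
One common way to implement such a guard (a sketch; the PR's exact check may differ):

    import torch

    def model_device_type(model: torch.nn.Module) -> str:
        # the first parameter's device is a reasonable proxy for the whole model
        return next(model.parameters()).device.type  # "cpu", "xpu", "cuda", ...
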
Heyang Sun
2c0c9fecd0 refine LLM containers (#9109) 2023-10-09 15:45:30 +08:00