Wang, Jian4
496bb2e845
LLM: Support loading BaiChuan model family GGUF models ( #9685 )
...
* support baichuan model family gguf model
* update gguf generate.py
* add verify models
* add support model_family
* update
* update style
* update type
* update readme
* update
* remove support model_family
2023-12-15 13:34:33 +08:00
Yishuo Wang
9a330bfc2b
fix fuse mlp when using q5_0 or fp8 ( #9689 )
2023-12-14 16:16:05 +08:00
Xin Qiu
5e46e0e5af
fix baichuan2-7b 1st token performance regression on xpu ( #9683 )
...
* fix baichuan2-7b 1st token performance regression
* add comments
* fix style
2023-12-14 09:58:32 +08:00
Yishuo Wang
09ca540f9b
use fuse mlp in qwen ( #9672 )
2023-12-13 17:20:08 +08:00
Ruonan Wang
c7741c4e84
LLM: update moe block convert to optimize rest token latency of Mixtral ( #9669 )
...
* update moe block convert
* further accelerate final_hidden_states
* fix style
* fix style
2023-12-13 16:17:06 +08:00
Xiangyu Tian
1c6499e880
[LLM] vLLM: Support Mixtral Model ( #9670 )
...
Add Mixtral support for BigDL vLLM.
2023-12-13 14:44:47 +08:00
Ruonan Wang
dc5b1d7e9d
LLM: integrate sdp kernel for FP16 rest token inference on GPU [DG2/ATSM] ( #9633 )
...
* integrate sdp
* update api
* fix style
* meet code review
* fix
* distinguish mtl from arc
* small fix
2023-12-13 11:29:57 +08:00
Qiyuan Gong
5b0e7e308c
[LLM] Add support for empty activation ( #9664 )
...
* Add support for empty activation, e.g., [0, 4096]. Empty activation is allowed by PyTorch.
* Add comments.
2023-12-13 11:07:45 +08:00
SONG Ge
284e7697b1
[LLM] Optimize ChatGLM2 kv_cache to support beam_search on ARC ( #9579 )
...
* optimize kv_cache to support beam_search on Arc
* correctness test update
* fix query_length issue
* simplify implementation
* only enable the optimization on gpu device
* limit beam_search support to gpu devices with batch_size > 1
* add comments for beam_search case and revert ut change
* meet comments
* add more comments to describe the difference between multiple cases
2023-12-13 11:02:14 +08:00
Ziteng Zhang
8931f2eb62
[LLM] Fix transformer qwen size mismatch and rename causal_mask ( #9655 )
...
* Fix size mismatch caused by context_layer
* Change registered_causal_mask to causal_mask
2023-12-12 20:57:40 +08:00
binbin Deng
59ce86d292
LLM: support optimize_model=True for Mixtral model ( #9657 )
2023-12-12 16:41:26 +08:00
Heyang Sun
9f02f96160
[LLM] support for Yi AWQ model ( #9648 )
2023-12-11 14:07:34 +08:00
Xin Qiu
82255f9726
Enable fused layernorm ( #9614 )
...
* bloom layernorm
* fix
* layernorm
* fix
* fix
* fix
* style fix
* fix
* replace nn.LayerNorm
2023-12-11 09:26:13 +08:00
Yina Chen
70f5e7bf0d
Support peft LoraConfig ( #9636 )
...
* support peft loraconfig
* use testcase to test
* fix style
* meet comments
2023-12-08 16:13:03 +08:00
Xin Qiu
0b6f29a7fc
add fused rms norm for Yi and Qwen ( #9640 )
2023-12-08 16:04:38 +08:00
Xin Qiu
5636b0ba80
set new linear status ( #9639 )
2023-12-08 11:02:49 +08:00
Yuwen Hu
6f34978b94
[LLM] Add more performance tests for win iGPU (more in-out pairs, RWKV model) ( #9626 )
...
* Add support for loading rwkv models using from_pretrained api
* Temporarily enable pr tests
* Add RWKV in tests and more in-out pairs
* Add rwkv for 512 tests
* Make iterations smaller
* Change back to nightly trigger
2023-12-07 18:55:16 +08:00
Ruonan Wang
d9b0c01de3
LLM: fix unlora module in qlora finetune ( #9621 )
...
* fix unlora module
* split train and inference
2023-12-07 16:32:02 +08:00
Yishuo Wang
7319f2c227
use fused mlp in baichuan2 ( #9620 )
2023-12-07 15:50:57 +08:00
Xiangyu Tian
deee65785c
[LLM] vLLM: Delete last_kv_cache before prefilling ( #9619 )
...
Remove last_kv_cache before prefilling to reduce peak memory usage.
2023-12-07 11:32:33 +08:00
Xiangyu Tian
0327169b50
[LLM] vLLM: fix memory leak in prepare_kv_cache ( #9616 )
...
Revert modification in prepare_kv_cache to fix memory leak.
2023-12-07 10:08:18 +08:00
Xin Qiu
13d47955a8
use fused rms norm in chatglm2 and baichuan ( #9613 )
...
* use fused rms norm in chatglm2 and baichuan
* style fix
2023-12-07 09:21:41 +08:00
Yina Chen
404e101ded
QALora example ( #9551 )
...
* Support qa-lora
* init
* update
* update
* update
* update
* update
* update merge
* update
* fix style & update scripts
* update
* address comments
* fix typo
* fix typo
---------
Co-authored-by: Yang Wang <yang3.wang@intel.com>
2023-12-06 15:36:21 +08:00
Guancheng Fu
6978b2c316
[VLLM] Change padding patterns for vLLM & clean code ( #9609 )
...
* optimize
* fix minor error
* optimizations
* fix style
2023-12-06 15:27:26 +08:00
Zheng, Yi
d154b38bf9
Add llama2 gpu low memory example ( #9514 )
...
* Add low memory example
* Minor fixes
* Update readme.md
2023-12-05 17:29:48 +08:00
Ziteng Zhang
65934c9f4f
[LLM] Fix Qwen causal_mask and attention_mask size mismatching ( #9600 )
...
* Fix #9582, caused by Qwen's modified modeling_qwen.py 7f62181c94 (d2h-049182)
2023-12-05 15:15:54 +08:00
Qiyuan Gong
f211f136b6
Configurable TORCH_LINEAR_THRESHOLD from env ( #9588 )
...
* Add TORCH_LINEAR_THRESHOLD from env (BIGDL_LLM_LINEAR_THRESHOLD)
* Change default to 512
2023-12-05 13:19:47 +08:00
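This commit makes the crossover point between the low-bit kernel and native torch.nn.Linear configurable. A minimal tuning sketch, assuming the variable is read when bigdl.llm is imported (the value 1024 is illustrative):

```python
import os

# Inputs larger than the threshold fall back to native torch.nn.Linear
# instead of the low-bit kernel (see #8492 below); default is now 512
# per this commit. Set before importing bigdl.llm.
os.environ["BIGDL_LLM_LINEAR_THRESHOLD"] = "1024"

from bigdl.llm.transformers import AutoModelForCausalLM
```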
Xiangyu Tian
5c03651309
[LLM] vLLM: Add Preempt for scheduler ( #9568 )
...
Implement Preempt_by_recompute method for vllm.
2023-12-03 20:16:25 +08:00
Xin Qiu
69c49d21f5
use fused rms norm ( #9572 )
...
* use fused rms norm
* meet code review
2023-11-30 21:47:41 +08:00
Yishuo Wang
7f6465518a
support loading llama tokenizer from gguf model ( #9565 )
2023-11-30 14:56:12 +08:00
Yuwen Hu
34503efa6a
Fix cpu pinned embedding ( #9556 )
2023-11-29 18:27:56 +08:00
binbin Deng
4ff2ca9d0d
LLM: fix loss error on Arc ( #9550 )
2023-11-29 15:16:18 +08:00
Yishuo Wang
65121c7997
support loading q4_1/q5_0/q5_1/q8_0 gguf model ( #9546 )
2023-11-29 14:40:37 +08:00
Yuwen Hu
5f5ca38b74
[LLM Doc] Fix api doc rendering error ( #9542 )
...
* Fix api rendering error
* Fix python style
2023-11-29 09:17:09 +08:00
Yishuo Wang
a86c6e0b56
[LLM] support loading gguf model ( #9544 )
2023-11-28 15:51:15 +08:00
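A sketch of how a GGUF checkpoint might be loaded through this path; the from_gguf-style entry point and file name are assumptions, and the later commits above extend it to q4_1/q5_0/q5_1/q8_0 ( #9546 ), the llama tokenizer ( #9565 ), and the BaiChuan family ( #9685 ):

```python
from bigdl.llm.transformers import AutoModelForCausalLM

# Assumed from_gguf-style entry point: convert a llama-family .gguf file
# into a low-bit transformers model plus its tokenizer in one call.
model, tokenizer = AutoModelForCausalLM.from_gguf("llama-2-7b-chat.Q4_0.gguf")
```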
Xiangyu Tian
916c338772
fix bugs in vllm length check ( #9543 )
2023-11-28 11:09:54 +08:00
Zhao Changmin
e7e0cd3b5e
CPU Pinned embedding Layer ( #9538 )
...
* CPU Pinned embedding
2023-11-28 09:46:31 +08:00
Guancheng Fu
963a5c8d79
Add vLLM-XPU version's README/examples ( #9536 )
...
* test
* test
* fix last kv cache
* add xpu readme
* remove numactl for xpu example
* fix link error
* update max_num_batched_tokens logic
* add explanation
* add xpu environment version requirement
* refine gpu memory
* fix
* fix style
2023-11-28 09:44:03 +08:00
Guancheng Fu
b6c3520748
Remove xformers from vLLM-CPU ( #9535 )
2023-11-27 11:21:25 +08:00
binbin Deng
6bec0faea5
LLM: support Mistral AWQ models ( #9520 )
2023-11-24 16:20:22 +08:00
Ruonan Wang
914a5a5a27
LLM: fix abnormal Mistral GPU accuracy by updating rms_norm ( #9529 )
2023-11-24 15:37:50 +08:00
SONG Ge
3d24823cda
hot-fix mistral kv_cache ( #9528 )
2023-11-24 14:33:04 +08:00
Zhao Changmin
42b7a16bc5
Replace torch.bmm with safe_bmm ( #9519 )
...
* replace bmm with safe one
* rename args and add deprecation warning
2023-11-24 12:16:48 +08:00
Ruonan Wang
b63aae8a8e
LLM: add flash attention support for llama ( #9518 )
...
* add initial flash attention for llama
* accelerate fp32 first token by changing to fp16 in advance
* support fp32
2023-11-23 18:40:18 +08:00
Guancheng Fu
bf579507c2
Integrate vllm ( #9310 )
...
* done
* Rename structure
* add models
* Add structure/sampling_params,sequence
* add input_metadata
* add outputs
* Add policy,logger
* add and update
* add parallelconfig back
* core/scheduler.py
* Add llm_engine.py
* Add async_llm_engine.py
* Add tested entrypoint
* fix minor error
* Fix everything
* fix kv cache view
* fix
* fix
* fix
* format&refine
* remove logger from repo
* try to add token latency
* remove logger
* Refine config.py
* finish worker.py
* delete utils.py
* add license
* refine
* refine sequence.py
* remove sampling_params.py
* finish
* add license
* format
* add license
* refine
* refine
* Refine line too long
* remove exception
* so dumb style-check
* refine
* refine
* refine
* refine
* refine
* refine
* add README
* refine README
* add warning instead of error
* fix padding
* add license
* format
* format
* format fix
* Refine vllm dependency (#1 )
vllm dependency clear
* fix licence
* fix format
* fix format
* fix
* adapt LLM engine
* fix
* add license
* fix format
* fix
* Moving README.md to the correct position
* Fix readme.md
* done
* guide for adding models
* fix
* Fix README.md
* Add new model readme
* remove ray-logic
* refactor arg_utils.py
* remove distributed_init_method logic
* refactor entrypoints
* refactor input_metadata
* refactor model_loader
* refactor utils.py
* refactor models
* fix api server
* remove vllm.structure
* revert by txy 1120
* remove utils
* format
* fix license
* add bigdl model
* Refer to a specific commit
* Change code base
* add comments
* add async_llm_engine comment
* refine
* formatted
* add worker comments
* add comments
* add comments
* fix style
* add changes
---------
Co-authored-by: xiangyuT <xiangyu.tian@intel.com>
Co-authored-by: Xiangyu Tian <109123695+xiangyuT@users.noreply.github.com>
Co-authored-by: leonardozcm <leonardo1997zcm@gmail.com>
2023-11-23 16:46:45 +08:00
Qiyuan Gong
0f0c6bb631
[LLM] Fix Qwen registered_causal_mask is None ( #9513 )
...
* Add registered_causal_mask init based on 2abd8e5777.
2023-11-23 09:28:04 +08:00
Ruonan Wang
076d106ef5
LLM: GPU QLoRA update to bf16 to accelerate gradient checkpointing ( #9499 )
...
* update to bf16 to accelerate gradient checkpointing
* add utils and fix ut
2023-11-21 17:08:36 +08:00
Xin Qiu
50b01058f1
enable new q4_1 ( #9479 )
2023-11-17 14:58:57 +08:00
Zhao Changmin
30abd304a7
LLM: Fix baichuan pre-normalize model tensor assigning issue when loading ( #9481 )
...
* No need to normalize when loading
2023-11-16 21:57:28 +08:00
Ruonan Wang
c0ef70df02
llm: quick fix of fast_rms_norm ( #9480 )
2023-11-16 14:42:16 +08:00
Yina Chen
d5263e6681
Add awq load support ( #9453 )
...
* Support directly loading GPTQ models from huggingface
* fix style
* fix tests
* change example structure
* address comments
* fix style
* init
* address comments
* add examples
* fix style
* fix style
* fix style
* fix style
* update
* remove
* meet comments
* fix style
---------
Co-authored-by: Yang Wang <yang3.wang@intel.com>
2023-11-16 14:06:25 +08:00
Ruonan Wang
d2c064124a
LLM: update rms related usage to support ipex 2.1 new api ( #9466 )
...
* update rms related usage
* fix style
2023-11-16 11:21:50 +08:00
Yuwen Hu
731b0aaade
Empty cache after embedding to cpu ( #9477 )
2023-11-16 10:52:30 +08:00
Yang Wang
51d07a9fd8
Support directly loading gptq models from huggingface ( #9391 )
...
* Support directly loading GPTQ models from huggingface
* fix style
* fix tests
* change example structure
* address comments
* fix style
* address comments
2023-11-13 20:48:12 -08:00
SONG Ge
2888818b3a
[LLM] Support mixed_fp8 on Arc ( #9415 )
...
* ut gpu allocation memory fix
* support mix_8bit on arc
* rename mixed_4bit to mixed_fp4 and mixed_8bit to mixed_fp8
* revert unexpected changes
* revert unexpected changes
* unify common logits
* rename in llm xmx_checker
* fix typo error and re-unify
2023-11-13 09:26:30 +08:00
Heyang Sun
df8e4d7889
[LLM] apply allreduce and bias to training in LowBitLinear ( #9395 )
2023-11-09 14:35:54 +08:00
Wang, Jian4
40cead6b5b
LLM: Fix CPU qlora dtype convert issue ( #9394 )
2023-11-09 14:34:01 +08:00
Ruonan Wang
bfca76dfa7
LLM: optimize QLoRA by updating lora convert logic ( #9372 )
...
* update convert logic of qlora
* update
* refactor and further improve performance
* fix style
* meet code review
2023-11-08 17:46:49 +08:00
Ruonan Wang
7e8fb29b7c
LLM: optimize QLoRA by reducing convert time ( #9370 )
2023-11-08 13:14:34 +08:00
Yishuo Wang
bfd9f88f0d
[LLM] Use fp32 as dtype when batch_size <=8 and qtype is q4_0/q8_0/fp8 ( #9365 )
2023-11-08 09:54:53 +08:00
Heyang Sun
fae6db3ddc
[LLM] refactor cpu low-bit forward logic ( #9366 )
...
* [LLM] refactor cpu low-bit forward logic
* fix style
* Update low_bit_linear.py
* Update low_bit_linear.py
* refine
2023-11-07 15:09:16 +08:00
Heyang Sun
af94058203
[LLM] Support CPU deepspeed distributed inference ( #9259 )
...
* [LLM] Support CPU Deepspeed distributed inference
* Update run_deepspeed.py
* Rename
* fix style
* add new codes
* refine
* remove annotated codes
* refine
* Update README.md
* refine doc and example code
2023-11-06 17:56:42 +08:00
Xin Qiu
1420e45cc0
Chatglm2 rope optimization on xpu ( #9350 )
2023-11-06 13:56:34 +08:00
Yuwen Hu
a0150bb205
[LLM] Move embedding layer to CPU for iGPU inference ( #9343 )
...
* Move embedding layer to CPU for iGPU llm inference
* Empty cache after to cpu
* Remove empty cache as it seems to have some negative effect on first token
2023-11-03 11:13:45 +08:00
Yishuo Wang
726203d778
[LLM] Replace Embedding layer to fix it on CPU ( #9254 )
2023-11-01 13:58:10 +08:00
Yang Wang
e1bc18f8eb
fix import ipex problem ( #9323 )
...
* fix import ipex problem
* fix style
2023-10-31 20:31:34 -07:00
Yina Chen
2262ae4d13
Support MoFQ4 on arc ( #9301 )
...
* init
* update
* fix style
* fix style
* fix style
* meet comments
2023-11-01 10:59:46 +08:00
Yang Wang
163d033616
Support qlora in CPU ( #9233 )
...
* support qlora in CPU
* revert example
* fix style
2023-10-27 14:01:15 -07:00
Cengguang Zhang
44b5fcc190
LLM: fix pretraining_tp argument issue. ( #9281 )
2023-10-26 18:43:58 +08:00
WeiguangHan
6b2a32eba2
LLM: add missing function for PyTorch InternLM model ( #9285 )
2023-10-26 18:05:23 +08:00
Yina Chen
f879c48f98
fp8 convert use ggml code ( #9277 )
2023-10-26 17:03:29 +08:00
Yina Chen
e2264e8845
Support arc fp4 ( #9266 )
...
* support arc fp4
* fix style
* fix style
2023-10-25 15:42:48 +08:00
Yang Wang
067c7e8098
Support deepspeed AutoTP ( #9230 )
...
* Support deepspeed
* add test script
* refactor convert
* refine example
* refine
* refine example
* fix style
* refine example and adapt to latest ipex
* fix style
2023-10-24 23:46:28 -07:00
Jin Qiao
90162264a3
LLM: replace torch.float32 with auto type ( #9261 )
2023-10-24 17:12:13 +08:00
SONG Ge
bd5215d75b
[LLM] Reimplement chatglm fuse rms optimization ( #9260 )
...
* re-implement chatglm rope rms
* update
2023-10-24 16:35:12 +08:00
SONG Ge
bfc1e2d733
add fused rms optimization for chatglm model ( #9256 )
2023-10-24 14:40:58 +08:00
Guancheng Fu
f37547249d
Refine README/CICD ( #9253 )
2023-10-24 12:56:03 +08:00
binbin Deng
db37edae8a
LLM: update langchain api document page ( #9222 )
2023-10-24 10:13:41 +08:00
Wang, Jian4
c14a61681b
Add load low-bit in model-serving to reduce EPC ( #9239 )
...
* init load low-bit
* fix
* fix
2023-10-23 11:28:20 +08:00
Yina Chen
0383306688
Add arc fp8 support ( #9232 )
...
* add fp8 support
* add log
* fix style
2023-10-20 17:15:07 +08:00
Yang Wang
118249b011
support transformers 4.34+ for llama ( #9229 )
2023-10-19 22:36:30 -07:00
Chen, Zhentao
5850241423
correct Readme GPU example and API docstring ( #9225 )
...
* update readme to correct GPU usage
* update from_pretrained supported low bit options
* fix style check
2023-10-19 16:08:47 +08:00
Yang Wang
b0ddde0410
Fix removing convert dtype bug ( #9216 )
...
* Fix removing convert dtype bug
* fix style
2023-10-18 11:24:22 -07:00
Ruonan Wang
942d6418e7
LLM: fix chatglm kv cache ( #9215 )
2023-10-18 19:09:53 +08:00
SONG Ge
0765f94770
[LLM] Optimize kv_cache for mistral model family ( #9189 )
...
* add kv_cache optimization for mistral model
* kv_cache optimize for mistral
* update style
* update
2023-10-18 15:13:37 +08:00
Ruonan Wang
3555ebc148
LLM: fix wrong length in gptj kv_cache optimization ( #9210 )
...
* fix wrong length in gptj kv cache
* update
2023-10-18 14:59:02 +08:00
Shengsheng Huang
6dad8d16df
optimize NormHead for Baichuan2 ( #9205 )
...
* optimize NormHead for Baichuan2
* fix ut and change name
* rename functions
2023-10-18 14:05:07 +08:00
Ruonan Wang
09815f7064
LLM: fix RMSNorm optimization of Baichuan2-13B/Baichuan-13B ( #9204 )
...
* fix rmsnorm of baichuan2-13B
* update baichuan1-13B too
* fix style
2023-10-17 18:40:34 +08:00
Ruonan Wang
c0497ab41b
LLM: support kv_cache optimization for Qwen-VL-Chat ( #9193 )
...
* support qwen_vl_chat
* fix style
2023-10-17 13:33:56 +08:00
binbin Deng
1cd9ab15b8
LLM: fix ChatGLMConfig check ( #9191 )
2023-10-17 11:52:56 +08:00
Yang Wang
7160afd4d1
Support XPU DDP training and autocast for LowBitMatmul ( #9167 )
...
* support autocast in low bit matmul
* Support XPU DDP training
* fix amp
2023-10-16 20:47:19 -07:00
Ruonan Wang
77afb8796b
LLM: fix convert of chatglm ( #9190 )
2023-10-17 10:48:13 +08:00
dingbaorong
af3b575c7e
expose modules_to_not_convert in optimize_model ( #9180 )
...
* expose modules_to_not_convert in optimize_model
* some fixes
2023-10-17 09:50:26 +08:00
Cengguang Zhang
5ca8a851e9
LLM: add fuse optimization for Mistral. ( #9184 )
...
* add fuse optimization for mistral.
* fix.
* fix
* fix style.
* fix.
* fix error.
* fix style.
* fix style.
2023-10-16 16:50:31 +08:00
Jiao Wang
49e1381c7f
update rope ( #9155 )
2023-10-15 21:51:45 -07:00
binbin Deng
a164c24746
LLM: add kv_cache optimization for chatglm2-6b-32k ( #9165 )
2023-10-16 10:43:15 +08:00
Yang Wang
7a2de00b48
Fixes for xpu Bf16 training ( #9156 )
...
* Support bf16 training
* Use a stable transformer version
* remove env
* fix style
2023-10-14 21:28:59 -07:00
Cengguang Zhang
51a133de56
LLM: add fuse rope and norm optimization for Baichuan. ( #9166 )
...
* add fuse rope optimization.
* add rms norm optimization.
2023-10-13 17:36:52 +08:00
Cengguang Zhang
433f408081
LLM: Add fuse rope and norm optimization for Aquila. ( #9161 )
...
* add fuse norm optimization.
* add fuse rope optimization
2023-10-13 14:18:37 +08:00
SONG Ge
e7aa67e141
[LLM] Add rope optimization for internlm ( #9159 )
...
* add rope and norm optimization for internlm and gptneox
* revert gptneox back and split with pr #9155
* add norm_forward
* style fix
* update
* update
2023-10-13 14:18:28 +08:00
Ruonan Wang
b8aee7bb1b
LLM: Fix Qwen kv_cache optimization ( #9148 )
...
* first commit
* ut pass
* accelerate rotate half by using common util function
* fix style
2023-10-12 15:49:42 +08:00
binbin Deng
69942d3826
LLM: fix model check before attention optimization ( #9149 )
2023-10-12 15:21:51 +08:00
binbin Deng
eb3fb18eb4
LLM: improve PyTorch API doc ( #9128 )
2023-10-11 15:03:39 +08:00
Zhao Changmin
1709beba5b
LLM: Explicitly close pickle file pointer before removing temporary directory ( #9120 )
...
* fp close
2023-10-10 14:57:23 +08:00
binbin Deng
e4d1457a70
LLM: improve transformers style API doc ( #9113 )
2023-10-10 09:31:00 +08:00
Zhao Changmin
edccfb2ed3
LLM: Check model device type ( #9092 )
...
* check model device
2023-10-09 15:49:15 +08:00
Yina Chen
4c4f8d1663
[LLM]Fix Arc falcon abnormal output issue ( #9096 )
...
* update
* update
* fix error & style
* fix style
* update train
* to input_seq_size
2023-10-09 15:09:37 +08:00
Zhao Changmin
548e4dd5fe
LLM: Adapt transformers models for optimize model SL ( #9022 )
...
* LLM: Adapt transformers model for SL
2023-10-09 11:13:44 +08:00
Ruonan Wang
f64257a093
LLM: basic api support for esimd fp16 ( #9067 )
...
* basic api support for fp16
* fix style
* fix
* fix error and style
* fix style
* meet code review
* update based on comments
2023-10-09 11:05:17 +08:00
Xin Qiu
b3e94a32d4
change log4error import ( #9098 )
2023-10-08 09:23:28 +08:00
Kai Huang
78ea7ddb1c
Combine apply_rotary_pos_emb for gpt-neox ( #9074 )
2023-10-07 16:27:46 +08:00
Yang Wang
36dd4afd61
Fix llama when rope scaling is not None ( #9086 )
...
* Fix llama when rope scaling is not None
* fix style
* fix style
2023-10-06 13:27:37 -07:00
Yang Wang
fcb1c618a0
using bigdl-llm fused rope for llama ( #9066 )
...
* optimize llama xpu rope
* fix bug
* fix style
* refine append cache
* remove check
* do not cache cos sin
* remove unnecessary changes
* clean up
* fix style
* check for training
2023-10-06 09:57:29 -07:00
Jiao Wang
aefa5a5bfe
Qwen kv cache ( #9079 )
...
* qwen and aquila
* update
* update
* style
2023-10-05 11:59:17 -07:00
Jiao Wang
d5ca1f32b6
Aquila KV cache optimization ( #9080 )
...
* update
* update
* style
2023-10-05 11:10:57 -07:00
Yang Wang
88565c76f6
add export merged model example ( #9018 )
...
* add export merged model example
* add sources
* add script
* fix style
2023-10-04 21:18:52 -07:00
Yang Wang
0cd8f1c79c
Use ipex fused rms norm for llama ( #9081 )
...
* also apply rmsnorm
* fix cpu
2023-10-04 21:04:55 -07:00
Cengguang Zhang
fb883100e7
LLM: support chatglm-18b convert attention forward in benchmark scripts. ( #9072 )
...
* add chatglm-18b convert.
* fix if statement.
* fix
2023-09-28 14:04:52 +08:00
Yishuo Wang
6de2189e90
[LLM] fix chatglm main choice ( #9073 )
2023-09-28 11:23:37 +08:00
Cengguang Zhang
b4a1266ef0
[WIP] LLM: add kv cache support for internlm. ( #9036 )
...
* LLM: add kv cache support for internlm
* add internlm apply_rotary_pos_emb
* fix.
* fix style.
2023-09-25 14:16:59 +08:00
Ruonan Wang
975da86e00
LLM: fix gptneox kv cache ( #9044 )
2023-09-25 13:03:57 +08:00
Jiao Wang
028a6d9383
MPT model optimize for long sequence ( #9020 )
...
* mpt_long_seq
* update
* update
* update
* style
* style2
* update
2023-09-21 21:27:23 -07:00
Ruonan Wang
b943d73844
LLM: refactor kv cache ( #9030 )
...
* refactor utils
* meet code review; update all models
* small fix
2023-09-21 21:28:03 +08:00
Cengguang Zhang
868511cf02
LLM: fix kv cache issue of bloom and falcon. ( #9029 )
2023-09-21 18:12:20 +08:00
Ruonan Wang
bf51ec40b2
LLM: Fix empty cache ( #9024 )
...
* fix
* fix
* update example
2023-09-21 17:16:07 +08:00
Yina Chen
714884414e
fix error ( #9025 )
2023-09-21 16:42:11 +08:00
SONG Ge
fa47967583
[LLM] Optimize kv_cache for gptj model family ( #9010 )
...
* optimize gptj model family attention
* add license and comment for dolly-model
* remove xpu mentioned
* remove useless info
* code style
* style fix
* code style in gptj fix
* remove gptj arch
* move apply_rotary_pos_emb into utils
* kv_seq_length update
* use hidden_states instead of query layer to reach batch size
2023-09-21 10:42:08 +08:00
Cengguang Zhang
b3cad7de57
LLM: add bloom kv cache support ( #9012 )
...
* LLM: add bloom kv cache support
* fix style.
2023-09-20 21:10:53 +08:00
Kai Huang
156af15d1e
Add NF3 ( #9008 )
...
* add nf3
* grammar
2023-09-20 20:03:07 +08:00
Kai Huang
6981745fe4
Optimize kv_cache for gpt-neox model family ( #9015 )
...
* override gptneox
* style
* move to utils
* revert
2023-09-20 19:59:19 +08:00
Cengguang Zhang
735a17f7b4
LLM: add kv cache to falcon family. ( #8995 )
...
* add kv cache to falcon family.
* fix: import error.
* refactor
* update comments.
* add two version falcon attention forward.
* fix
* fix.
* fix.
* fix.
* fix style.
* fix style.
2023-09-20 15:36:30 +08:00
Ruonan Wang
94a7f8917b
LLM: fix optimized kv cache for baichuan-13b ( #9009 )
...
* fix baichuan 13b
* fix style
* fix
* fix style
2023-09-20 15:30:14 +08:00
Yang Wang
c88f6ec457
Experiment XPU QLora Finetuning ( #8937 )
...
* Support xpu finetuning
* support xpu finetuning
* fix style
* fix style
* fix style
* refine example
* add readme
* refine readme
* refine api
* fix fp16
* fix example
* refactor
* fix style
* fix compute type
* add qlora
* refine training args
* fix example
* fix style
* fast path for inference
* address comments
* refine readme
* revert lint
2023-09-19 10:15:44 -07:00
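The example added here wires bigdl-llm low-bit layers into peft for QLoRA on XPU. A condensed sketch along the lines of this era's example code; module paths, flags, and the model id should be treated as assumptions:

```python
import torch
from bigdl.llm.transformers import AutoModelForCausalLM
from bigdl.llm.transformers.qlora import get_peft_model, prepare_model_for_kbit_training
from peft import LoraConfig

# Load the base model in NF4 and move it to the Intel GPU for finetuning.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf",
                                             load_in_low_bit="nf4",
                                             optimize_model=False,
                                             torch_dtype=torch.float16)
model = model.to("xpu")
model = prepare_model_for_kbit_training(model)

# Standard peft LoRA config; only the adapter weights are trained.
config = LoraConfig(r=8, lora_alpha=32,
                    target_modules=["q_proj", "k_proj", "v_proj"],
                    lora_dropout=0.05, bias="none", task_type="CAUSAL_LM")
model = get_peft_model(model, config)
```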
Ruonan Wang
004c45c2be
LLM: Support optimized kv_cache for baichuan family ( #8997 )
...
* add initial support for baichuan attention
* support baichuan1
* update based on comment
* update based on comment
* support baichuan2
* update link, change how to judge baichuan2
* fix style
* add model parameter for pos emb
* update based on comment
2023-09-19 15:38:54 +08:00
Zhao Changmin
2a05581da7
LLM: Apply low_cpu_mem_usage algorithm on optimize_model API ( #8987 )
...
* low_cpu_mem_usage
2023-09-18 21:41:42 +08:00
Zhao Changmin
16b9412e80
tie_word_embeddings ( #8977 )
...
tie_word_embeddings
2023-09-15 10:17:09 +08:00
Yishuo Wang
bcf456070c
fix bloom-176b int overflow ( #8973 )
2023-09-14 14:37:57 +08:00
Ruonan Wang
dd57623650
LLM: reduce GPU memory for optimize_model=True ( #8965 )
...
* reduce gpu memory for llama & chatglm
* change to device type
2023-09-13 17:27:09 +08:00
SONG Ge
7132ef6081
[LLM Doc] Add optimize_model doc in transformers api ( #8957 )
...
* add optimize in from_pretrained
* add api doc for load_low_bit
* update api docs following comments
* update api docs
* update
* reword comments
2023-09-13 10:42:33 +08:00
Zhao Changmin
c32c260ce2
LLM: Add save/load API in optimize_model to support general pytorch model ( #8956 )
...
* support hf format SL
2023-09-13 10:22:00 +08:00
Guancheng Fu
0bf5857908
[LLM] Integrate FastChat as a serving framework for BigDL-LLM ( #8821 )
...
* Finish changing
* format
* add licence
* Add licence
* fix
* fix
* Add xpu support for fschat
* Fix patch
* Also install webui dependencies
* change setup.py dependency installs
* fix
* format
* final test
2023-09-13 09:28:05 +08:00
Zhao Changmin
dcaa4dc130
LLM: Support GQA on llama kvcache ( #8938 )
...
* support GQA
2023-09-12 12:18:40 +08:00
Yang Wang
16761c58be
Make llama attention stateless ( #8928 )
...
* Make llama attention stateless
* fix style
* fix chatglm
* fix chatglm xpu
2023-09-11 18:21:50 -07:00
Zhao Changmin
e62eda74b8
refine ( #8912 )
...
Co-authored-by: leonardozcm <leonardozcm@gmail.com>
2023-09-11 16:40:33 +08:00
Yina Chen
df165ad165
init ( #8933 )
2023-09-11 14:30:55 +08:00
Ruonan Wang
b3f5dd5b5d
LLM: update q8 convert xpu&cpu ( #8930 )
2023-09-08 16:01:17 +08:00
Yina Chen
33d75adadf
[LLM]Support q5_0 on arc ( #8926 )
...
* support q5_0
* delete
* fix style
2023-09-08 15:52:36 +08:00
Yang Wang
ee98cdd85c
Support latest transformer version ( #8923 )
...
* Support latest transformer version
* fix style
2023-09-07 19:01:32 -07:00
Yang Wang
25428b22b4
Fix chatglm2 attention and kv cache ( #8924 )
...
* fix chatglm2 attention
* fix bf16 bug
* make model stateless
* add utils
* cleanup
* fix style
2023-09-07 18:54:29 -07:00
Yina Chen
b209b8f7b6
[LLM] Fix arc qtype != q4_0 generate issue ( #8920 )
...
* Fix arc precision!=q4_0 generate issue
* meet comments
2023-09-07 08:56:36 -07:00
Yang Wang
c34400e6b0
Use new layout for xpu qlinear ( #8896 )
...
* use new layout for xpu qlinear
* fix style
2023-09-06 21:55:33 -07:00
Zhao Changmin
8bc1d8a17c
LLM: Fix discards in optimize_model with non-hf models and add openai whisper example ( #8877 )
...
* openai-whisper
2023-09-07 10:35:59 +08:00
SONG Ge
7a71ced78f
[LLM Docs] Resolve Remaining API Docs Issues ( #8780 )
...
* langchain readthedocs update
* solve langchain.llms.transformersllm issues
* langchain.embeddings.transformersembeddings/transformersllms issues
* update docs for get_num_tokens
* add low_bit api doc
* add optimize_model api doc
* update rst index
* fix comments style
* update docs following the comments
* update api doc
2023-09-06 16:29:34 +08:00
Kai Huang
4a9ff050a1
Add qlora nf4 ( #8782 )
...
* add nf4
* dequant nf4
* style
2023-09-06 09:39:22 +08:00
Zhao Changmin
95271f10e0
LLM: Rename low bit layer ( #8875 )
...
* rename lowbit
---------
Co-authored-by: leonardozcm <leonardozcm@gmail.com>
2023-09-05 13:21:12 +08:00
Yang Wang
242c9d6036
Fix chatglm2 multi-turn streamchat ( #8867 )
2023-08-31 22:13:49 -07:00
xingyuan li
de6c6bb17f
[LLM] Downgrade amx build gcc version and remove avx flag display ( #8856 )
...
* downgrade to gcc 11
* remove avx display
2023-08-31 14:08:13 +09:00
Yang Wang
3b4f4e1c3d
Fix llama attention optimization for XPU ( #8855 )
...
* Fix llama attention optimization for XPU
* fix chatglm2
* fix typo
2023-08-30 21:30:49 -07:00
Shengsheng Huang
7b566bf686
[LLM] add new API for optimize any pytorch models ( #8827 )
...
* add new API for optimize any pytorch models
* change test util name
* revise API and update UT
* fix python style
* update ut config, change default value
* change defaults, disable ut transcribe
2023-08-30 19:41:53 +08:00
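The new API takes an already-loaded PyTorch model and swaps supported layers for low-bit equivalents in place. A minimal sketch; the model id is a placeholder and the default low-bit behavior is as described in this era's examples:

```python
from transformers import AutoModelForCausalLM  # any PyTorch model, not only HF
from bigdl.llm import optimize_model

# Load a model normally, then replace supported modules with low-bit ones.
model = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_3b")
model = optimize_model(model)  # int4 optimization by default per the examples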
Xin Qiu
8eca982301
windows add env ( #8852 )
2023-08-30 15:54:52 +08:00
Zhao Changmin
731916c639
LLM: Enable attempting loading method automatically ( #8841 )
...
* enable auto load method
* warning error
* logger info
---------
Co-authored-by: leonardozcm <leonardozcm@gmail.com>
2023-08-30 15:41:55 +08:00
Yishuo Wang
bba73ec9d2
[LLM] change chatglm native int4 checkpoint name ( #8851 )
2023-08-30 15:05:19 +08:00
Yina Chen
55e705a84c
[LLM] Support the rest of AutoXXX classes in Transformers API ( #8815 )
...
* add transformers auto models
* fix
2023-08-30 11:16:14 +08:00
Yishuo Wang
7429ea0606
[LLM] support transformer int4 + amx int4 ( #8838 )
2023-08-29 17:27:18 +08:00
Zhao Changmin
bb31d4fe80
LLM: Implement hf low_cpu_mem_usage with 1x binary file peak memory on transformer int4 ( #8731 )
...
* 1x peak memory
2023-08-29 09:33:17 +08:00
SONG Ge
d2926c7672
[LLM] Unify Langchain Native and Transformers LLM API ( #8752 )
...
* deprecate BigDLNativeTransformers and add specific LMEmbedding method
* deprecate and add LM methods for langchain llms
* add native params to native langchain
* new imple for embedding
* move ut from bigdlnative to causal llm
* rename embeddings api and update examples to align with usage updates
* docqa example hot-fix
* add more api docs
* add langchain ut for starcoder
* support model_kwargs for transformer methods when calling causalLM and add ut
* ut fix for transformers embedding
* update for langchain causal supporting transformers
* remove model_family in readme doc
* add model_families params to support more models
* update api docs and remove chatglm embeddings for now
* remove chatglm embeddings in examples
* new refactor for ut to add bloom and transformers llama ut
* disable llama transformers embedding ut
2023-08-25 11:14:21 +08:00
Yang Wang
bf3591e2ff
Optimize chatglm2 for bf16 ( #8725 )
...
* make chatglm work with bf16
* fix style
* support chatglm v1
* fix style
* fix style
* add chatglm2 file
2023-08-24 10:04:25 -07:00
Yishuo Wang
611c1fb628
[LLM] change default n_threads of native int4 langchain API ( #8779 )
2023-08-21 13:30:12 +08:00
Yishuo Wang
3d1f2b44f8
LLM: change default n_threads of native int4 models ( #8776 )
2023-08-18 15:46:19 +08:00
Yishuo Wang
2ba2133613
fix starcoder chinese output ( #8773 )
2023-08-18 13:37:02 +08:00
binbin Deng
548f7a6cf7
LLM: update convert of llama family to support llama2-70B ( #8747 )
2023-08-18 09:30:35 +08:00
Yina Chen
4afea496ab
support q8_0 ( #8765 )
2023-08-17 15:06:36 +08:00
Ruonan Wang
e9aa2bd890
LLM: reduce GPU 1st token latency and update example ( #8763 )
...
* reduce 1st token latency
* update example
* fix
* fix style
* update readme of gpu benchmark
2023-08-16 18:01:23 +08:00
SONG Ge
f4164e4492
[BigDL LLM] Update readme for unifying transformers API ( #8737 )
...
* update readme doc
* fix readthedocs error
* update comment
* update exception error info
* invalidInputError instead
* fix readme typo error and remove import error
* fix more typo
2023-08-16 14:22:32 +08:00
Yishuo Wang
77844125f2
[LLM] Support chatglm cache ( #8745 )
2023-08-14 15:10:46 +08:00
SONG Ge
aceea4dc29
[LLM] Unify Transformers and Native API ( #8713 )
...
* re-open pr to run on latest runner
* re-add examples and ut
* rename ut and change deprecation to a warning instead of raising an error
* ut fix
2023-08-11 19:45:47 +08:00
Yishuo Wang
f91035c298
[LLM] fix chatglm native int4 emoji output ( #8739 )
2023-08-11 15:38:41 +08:00
binbin Deng
77efcf7b1d
LLM: fix ChatGLM2 native int4 stream output ( #8733 )
2023-08-11 14:51:50 +08:00
Ruonan Wang
ca3e59a1dc
LLM: support stop for starcoder native int4 stream ( #8734 )
2023-08-11 14:51:30 +08:00
Yishuo Wang
3d5a7484a2
[LLM] fix bloom and starcoder memory release ( #8728 )
2023-08-11 11:18:19 +08:00
Ruonan Wang
1a7b698a83
[LLM] support ipex arc int4 & add basic llama2 example ( #8700 )
...
* first support of xpu
* make it work on gpu
update setup
update
add GPU llama2 examples
add use_optimize flag to disable optimize for gpu
fix style
update gpu example readme
fix
* update example, and update env
* fix setup to add cpp files
* replace jit with aot to avoid data leak
* rename to bigdl-core-xe
* update installation in example readme
2023-08-09 22:20:32 +08:00
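With this commit the int4 path runs on Intel Arc through ipex. A compressed sketch of the GPU flow from the llama2 example; the model id and prompt are placeholders:

```python
import torch
import intel_extension_for_pytorch as ipex  # registers the 'xpu' device
from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import LlamaTokenizer

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf",
                                             load_in_4bit=True)
model = model.to("xpu")  # run the int4 model on the Arc GPU

tokenizer = LlamaTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
input_ids = tokenizer("What is AI?", return_tensors="pt").input_ids.to("xpu")
with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```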
Kai Huang
1b65288bdb
Add api doc for LLM ( #8605 )
...
* api doc initial
* update desc
2023-08-08 18:17:16 +08:00
binbin Deng
ea5d7aff5b
LLM: add chatglm native int4 transformers API ( #8695 )
2023-08-07 17:52:47 +08:00
Yishuo Wang
ef08250c21
[LLM] chatglm pybinding support ( #8672 )
2023-08-04 14:27:29 +08:00
Yang Wang
b6468bac43
optimize chatglm2 long sequence ( #8662 )
...
* add chatglm2
* optimize a little
* optimize chatglm long sequence
* fix style
* address comments and fix style
* fix bug
2023-08-03 17:56:24 -07:00
Yang Wang
3407f87075
Fix llama kv cache bug ( #8674 )
2023-08-03 17:54:55 -07:00
binbin Deng
a15a2516e6
add ( #8659 )
2023-08-03 10:12:10 +08:00
Yina Chen
119bf6d710
[LLM] Support linux cpp dynamic load .so ( #8655 )
...
* support linux cpp dynamic load .so
* update cli
2023-08-02 20:15:45 +08:00
Zhao Changmin
ca998cc6f2
LLM: Mute shape mismatch output ( #8601 )
...
* LLM: Mute shape mismatch output
2023-08-02 16:46:22 +08:00
Zhao Changmin
04c713ef06
LLM: Disable transformer api pretraining_tp ( #8645 )
...
* disable pretraining_tp
2023-08-02 11:26:01 +08:00
Yang Wang
cbeae97a26
Optimize Llama Attention to reduce KV cache memory copy ( #8580 )
...
* Optimize llama attention to reduce KV cache memory copy
* fix bug
* fix style
* remove git
* fix style
* fix style
* fix style
* fix tests
* move llama attention to another file
* revert
* fix style
* remove jit
* fix
2023-08-01 16:37:58 -07:00
xingyuan li
cdfbe652ca
[LLM] Add chatglm support for llm-cli ( #8641 )
...
* add chatglm build
* add llm-cli support
* update git
* install cmake
* add ut for chatglm
* add files to setup
* fix bug causing permission error when sf lacks file
2023-08-01 14:30:17 +09:00
Zhao Changmin
3e10260c6d
LLM: llm-convert support chatglm family ( #8643 )
...
* convert chatglm
2023-08-01 11:16:18 +08:00
Yina Chen
a607972c0b
[LLM] LLM windows load -api.dll ( #8631 )
...
* temp
* update
* revert setup.py
2023-07-31 13:47:20 +08:00
xingyuan li
3361b66449
[LLM] Revert llm-cli to disable selecting executables on Windows ( #8630 )
...
* revert vnni file select
* revert setup.py
* add model-api.dll
2023-07-31 11:15:44 +09:00
binbin Deng
fb32fefcbe
LLM: support tensor input of native int4 generate ( #8620 )
2023-07-27 17:59:49 +08:00
Zhao Changmin
5b484ab48d
LLM: Support load_low_bit loading models in shards format ( #8612 )
...
* shards_model
---------
Co-authored-by: leonardozcm <leonaordo1997zcm@gmail.com>
2023-07-26 13:30:01 +08:00
Zhao Changmin
af201052db
avoid malloc all missing keys in fp32 ( #8600 )
2023-07-25 09:48:51 +08:00
Yuwen Hu
ba42a6da63
[LLM] Set torch_dtype default value to 'auto' for transformers low bit from_pretrained API
2023-07-21 17:55:00 +08:00
Yang Wang
feb3af0567
Optimize transformer int4 memory footprint ( #8579 )
2023-07-20 20:22:13 -07:00
Yang Wang
57e880f63a
[LLM] use pytorch linear for large input matrix ( #8492 )
...
* use pytorch linear for large input matrix
* only works on server
* fix style
* optimize memory
* first check server
* revert
* address comments
* fix style
2023-07-20 09:54:25 -07:00
Zhao Changmin
e680af45ea
LLM: Optimize Langchain Pipeline ( #8561 )
...
* LLM: Optimize Langchain Pipeline
* load in low bit
2023-07-19 17:43:13 +08:00
Zhao Changmin
49d636e295
[LLM] whisper model transformer int4 verification and example ( #8511 )
...
* LLM: transformer api support
* va
* example
* revert
* pep8
* pep8
2023-07-19 08:33:20 +08:00
Yina Chen
9a7bc17ca1
[LLM] llm supports vnni link on windows ( #8543 )
...
* support win vnni link
* fix style
* fix style
* use isa_checker
* fix
* typo
* fix
* update
2023-07-18 16:43:45 +08:00
Yina Chen
4582b6939d
[LLM] llm gptneox chat ( #8527 )
...
* linux
* support win
* merge upstream & support vnni lib in chat
2023-07-18 11:17:17 +08:00
Xin Qiu
fccae91461
Add load_low_bit/save_low_bit to AutoModelForCausalLM ( #8531 )
...
* transformers save_low_bit load_low_bit
* update example and add readme
* update
* update
* update
* add ut
* update
2023-07-17 15:29:55 +08:00
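The pair added here lets a once-quantized model be persisted and reloaded without touching the original fp checkpoint again. A minimal sketch; model id and paths are placeholders:

```python
from bigdl.llm.transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("decapoda-research/llama-7b-hf",
                                             load_in_4bit=True)
model.save_low_bit("./llama-7b-4bit")  # persist the already-quantized weights

# Later sessions reload directly, skipping the fp checkpoint and quantization.
model = AutoModelForCausalLM.load_low_bit("./llama-7b-4bit")
```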
xingyuan li
e57db777e0
[LLM] Setup.py & llm-cli update for windows vnni binary files ( #8537 )
...
* update setup.py
* update llm-cli
2023-07-17 12:28:38 +09:00
Yishuo Wang
6320bf201e
LLM: fix memory access violation ( #8519 )
2023-07-13 17:08:08 +08:00
Xin Qiu
90e3d86bce
rename low bit type name ( #8512 )
...
* change qx_0 to sym_intx
* update
* fix typo
* update
* fix type
* fix style
* add python doc
* meet code review
* fix style
2023-07-13 15:53:31 +08:00
Zhao Changmin
ba0da17b40
LLM: Support AutoModelForSeq2SeqLM transformer API ( #8449 )
...
* LLM: support AutoModelForSeq2SeqLM transformer API
2023-07-13 13:33:51 +08:00
Yishuo Wang
86b5938075
LLM: fix llm pybinding ( #8509 )
2023-07-13 10:27:08 +08:00
Zhao Changmin
23f6a4c21f
LLM: Optimize transformer int4 loading ( #8499 )
...
* LLM: Optimize transformer int4 loading
2023-07-12 15:25:42 +08:00
Yishuo Wang
dd3f953288
Support vnni check ( #8497 )
2023-07-12 10:11:15 +08:00
Xin Qiu
cd7a980ec4
Transformer int4 add qtype, support q4_1 q5_0 q5_1 q8_0 ( #8481 )
...
* quant in Q4 5 8
* meet code review
* update readme
* style
* update
* fix error
* fix error
* update
* fix style
* update
* Update README.md
* Add load_in_low_bit
2023-07-12 08:23:08 +08:00
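Besides the q4_0 default, from_pretrained now takes an explicit qtype through the load_in_low_bit string added in this PR. A sketch, using the sym_intx/asym_intx spellings introduced by the later rename in #8512 above:

```python
from bigdl.llm.transformers import AutoModelForCausalLM

# Pick the qtype explicitly: q5_1 in ggml terms, spelled asym_int5 after the
# #8512 rename (q4_0/q4_1/q5_0/q5_1/q8_0 map to sym/asym int4/int5/int8).
model = AutoModelForCausalLM.from_pretrained("decapoda-research/llama-7b-hf",
                                             load_in_low_bit="asym_int5")
```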
Yishuo Wang
db39d0a6b3
LLM: disable mmap by default for better performance ( #8467 )
2023-07-11 09:26:26 +08:00
Zhao Changmin
81d655cda9
LLM: transformer int4 save and load ( #8462 )
...
* LLM: transformer int4 save and load
2023-07-10 16:34:41 +08:00
binbin Deng
d489775d2c
LLM: fix inconsistency between output token number and max_new_token ( #8479 )
2023-07-07 17:31:05 +08:00
Ruonan Wang
2f77d485d8
LLM: Initial support of langchain transformer int4 API ( #8459 )
...
* first commit of transformer int4 and pipeline
* basic examples
temp save for embeddings
support embeddings and docqa example
* fix based on comment
* small fix
2023-07-06 17:50:05 +08:00
binbin Deng
14626fe05b
LLM: refactor transformers and langchain class name ( #8470 )
2023-07-06 17:16:44 +08:00
binbin Deng
77808fa124
LLM: fix n_batch in starcoder pybinding ( #8461 )
2023-07-05 17:06:50 +08:00
Yina Chen
f2bb469847
[WIP] LLM llm-cli chat mode ( #8440 )
...
* fix timezone
* temp
* Update linux interactive mode
* modify init text for interactive mode
* meet comments
* update
* win script
* meet comments
2023-07-05 14:04:17 +08:00
binbin Deng
e54e52b438
LLM: fix n_batch in bloom pybinding ( #8454 )
2023-07-04 15:10:32 +08:00
Yang Wang
449aea7ffc
Optimize transformer int4 loading memory ( #8400 )
...
* Optimize transformer int4 loading memory
* move cast to convert
* default setting low_cpu_mem_usage
2023-06-30 20:12:12 -07:00
Zhao Changmin
cc76ec809a
check out dir ( #8395 )
2023-06-27 21:28:39 +08:00
Xin Qiu
e68d631c0a
gptq2ggml: support loading safetensors model. ( #8401 )
...
* update convert gptq to ggml
* update convert gptq to ggml
* gptq to ggml
* update script
* meet code review
* meet code review
2023-06-27 11:19:33 +08:00
binbin Deng
19e19efb4c
LLM: raise warning instead of error when use unsupported parameters ( #8382 )
2023-06-26 13:23:55 +08:00
Shengsheng Huang
c113ecb929
[LLM] langchain bloom, UT's, default parameters ( #8357 )
...
* update langchain default parameters to align w/ api
* add ut's for llm and embeddings
* update inference test script to install langchain deps
* update tests workflows
---------
Co-authored-by: leonardozcm <changmin.zhao@intel.com>
2023-06-25 17:38:00 +08:00
Shengsheng Huang
446175cc05
transformer api refactor ( #8389 )
...
* transformer api refactor
* fix style
* add huggingface tokenizer usage in example and make ggml tokenizer option 1 and huggingface tokenizer option 2
* fix style
2023-06-25 17:15:33 +08:00
Yang Wang
ce6d06eb0a
Support directly quantizing huggingface transformers into 4bit format ( #8371 )
...
* Support directly quantizing huggingface transformers into 4bit format
* refine example
* license
* fix bias
* address comments
* move to ggml transformers
* fix example
* fix style
* fix style
* address comments
* rename
* change API
* fix style
* add lm head to conversion
* address comments
2023-06-25 16:35:06 +08:00
binbin Deng
03c5fb71a8
LLM: fix ModuleNotFoundError when use llm-cli ( #8378 )
2023-06-21 15:03:14 +08:00
Ruonan Wang
7296453f07
LLM: support starcoder in llm-cli ( #8377 )
...
* support starcoder in cli
* small fix
2023-06-21 14:38:30 +08:00
Ruonan Wang
50af0251e4
LLM: First commit of StarCoder pybinding ( #8354 )
...
* first commit of starcoder
* update setup.py and fix style
* add starcoder_cpp, fix style
* fix style
* support windows binary
* update pybinding
* fix style, add avx2 binary
* small fix
* fix style
2023-06-21 13:23:06 +08:00
Yuwen Hu
7ef1c890eb
[LLM] Supports GPTQ convert in transfomers-like API, and supports folder outfile for llm-convert ( #8366 )
...
* Add docstrings to llm_convert
* Small docstrings fix
* Unify outfile type to be a folder path for either gptq or pth model_format
* Supports gptq model input for from_pretrained
* Fix example and readme
* Small fix
* Python style fix
* Bug fix in llm_convert
* Python style check
* Fix based on comments
* Small fix
2023-06-20 17:42:38 +08:00
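This PR documents llm_convert and unifies outfile as a folder path for either input format. A hedged sketch of the call; the parameter names follow the docstrings referenced here and should be treated as assumptions:

```python
from bigdl.llm import llm_convert

# Convert a HuggingFace pth checkpoint into an int4 ggml binary; outfile is
# a folder path for both 'pth' and 'gptq' model_format per this PR.
ggml_path = llm_convert(model="/path/to/llama-7b-hf/",
                        outfile="/path/to/converted/",
                        outtype="int4",
                        model_family="llama",
                        model_format="pth")
```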
Zhao Changmin
4ec46afa4f
LLM: Align converting GPTQ model API with transformer style ( #8365 )
...
* LLM: Align GPTQ API with transformer style
2023-06-20 14:27:41 +08:00
Ruonan Wang
f99d348954
LLM: convert and quantize support for StarCoder ( #8359 )
...
* basic support for starcoder
* update from_pretrained
* fix bug and fix style
2023-06-20 13:39:35 +08:00
binbin Deng
5f4f399ca7
LLM: fix bugs during supporting bloom in langchain ( #8362 )
2023-06-20 13:30:37 +08:00
Zhao Changmin
30ac9a70f5
LLM: fix expected 2 blank lines ( #8360 )
2023-06-19 18:10:02 +08:00
Zhao Changmin
c256cd136b
LLM: Fix ggml return value ( #8358 )
...
* ggml return original value
2023-06-19 17:02:56 +08:00
Zhao Changmin
d4027d7164
fix typos in llm_convert ( #8355 )
2023-06-19 16:17:21 +08:00
Zhao Changmin
4d177ca0a1
LLM: Merge convert pth/gptq model script into one shell script ( #8348 )
...
* convert model in one
* model type
* license
* readme and pep8
* ut path
* rename
* readme
* fix docs
* without lines
2023-06-19 11:50:05 +08:00
Ruonan Wang
9daf543e2f
LLM: Update convert of gptneox to sync with new libgptneox.so ( #8345 )
2023-06-15 16:28:50 +08:00
Ruonan Wang
f7f4e65788
LLM: support int8 and tmp_path for from_pretrained ( #8338 )
2023-06-15 14:48:21 +08:00
Ruonan Wang
5094970175
LLM: update convert_model to support int8 ( #8326 )
...
* update example and convert_model for int8
* reset example
* fix style
2023-06-15 09:25:07 +08:00
binbin Deng
f64e703083
LLM: first add _tokenize, detokenize and _generate for bloom pybinding ( #8316 )
2023-06-14 17:29:57 +08:00
Xin Qiu
5576679a92
add convert-gptq-to-ggml.py to bigdl-llama ( #8298 )
2023-06-14 14:51:51 +08:00
Ruonan Wang
a6c4b733cb
LLM: Update subprocess to show error message ( #8323 )
...
* update subprocess
* fix style
2023-06-13 16:43:37 +08:00
Shengsheng Huang
02c583144c
[LLM] langchain integrations and examples ( #8256 )
...
* langchain integrations and examples
* add licences and rename
* add licences
* fix license issues and change backbone to model_family
* update examples to use model_family param
* fix linting
* fix code style
* exclude langchain integration from stylecheck
* update langchain examples and update integrations based on latest changes
* update simple llama-cpp-python style API example
* remove bloom in README
* change default n_threads to 2 and remove redundant code
---------
Co-authored-by: leonardozcm <changmin.zhao@intel.com>
2023-06-12 19:22:07 +08:00
xingyuan li
c4028d507c
[LLM] Add unified default value for cli programs ( #8310 )
...
* add unified default value for threads and n_predict
2023-06-12 16:30:27 +08:00
binbin Deng
5d5da7b2c7
LLM: optimize namespace and remove unused import logic ( #8302 )
2023-06-09 15:17:49 +08:00
Ruonan Wang
5d0e130605
LLM: fix convert path error of gptneox and bloom on windows ( #8304 )
2023-06-09 10:10:19 +08:00
Yina Chen
7bfa0fcdf9
fix style ( #8300 )
2023-06-08 16:52:17 +08:00
Yina Chen
637b72f2ad
[LLM] llm transformers api support batch actions ( #8288 )
...
* llm transformers api support batch actions
* align with transformer
* meet comment
2023-06-08 15:10:08 +08:00
xingyuan li
ea3cf6783e
LLM: Command line wrapper for llama/bloom/gptneox ( #8239 )
...
* add llama/bloom/gptneox wrapper
* add readme
* upload binary main file
2023-06-08 14:55:22 +08:00
binbin Deng
08bdfce2d8
LLM: avoid unnecessary torch import except in converting process ( #8297 )
2023-06-08 14:24:58 +08:00
binbin Deng
f9e2bda04a
LLM: add stop words and enhance output for bloom pybinding ( #8280 )
2023-06-08 14:06:06 +08:00
Yina Chen
1571ba6425
remove unused import gptneox_cpp ( #8293 )
2023-06-08 11:04:47 +08:00
Yina Chen
2c037e892b
fix-transformers-neox ( #8285 )
2023-06-07 14:44:43 +08:00
Ruonan Wang
39ad68e786
LLM: enhancements for convert_model ( #8278 )
...
* update convert
* change output name
* add description for input_path, add check for input_values
* basic support for command line
* fix style
* update based on comment
* update based on comment
2023-06-07 13:22:14 +08:00
Junwei Deng
2d14e593f0
LLM: Support generate(max_new_tokens=...), tokenize and decode for transformers-like API ( #8283 )
...
* first push
* fix pep8
2023-06-07 11:50:35 +08:00
Yina Chen
11cd2a07e0
[LLM] llm transformers format interface first part ( #8276 )
...
* llm-transformers-format
* update
* fix style
2023-06-06 17:17:37 +08:00
Pingchuan Ma (Henry)
a3f353b939
[LLM] add disclaimer about long loading time for LLM model converting ( #8279 )
2023-06-06 17:15:13 +08:00
Yuwen Hu
64bc123dd3
[LLM] Add transformers-like API from_pretrained ( #8271 )
...
* Init commit for bigdl.llm.transformers.AutoModelForCausalLM
* Temp change to avoid name conflicts with external transformers lib
* Support downloading model from huggingface
* Small python style fix
* Change location of transformers to avoid library conflicts
* Add return value for converted ggml binary ckpt path for convert_model
* Avoid repeated loading of shared library and adding some comments
* Small fix
* Path type fix and docstring fix
* Small fix
* Small fix
* Change cache dir to pwd
2023-06-06 17:04:16 +08:00
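The commit that starts the transformers-like API: loading and int4 quantization collapse into one call. A minimal sketch; the model id is a placeholder:

```python
from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import LlamaTokenizer

# Drop-in replacement for the HuggingFace class: one extra flag quantizes
# the checkpoint to int4 while it loads.
model = AutoModelForCausalLM.from_pretrained("decapoda-research/llama-7b-hf",
                                             load_in_4bit=True)
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
```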
xingyuan li
38be471140
[LLM] convert_model bug fix ( #8274 )
...
* Renamed all bloomz to bloom in ggml/model & utils/convert_util.py
* Add an optional parameter for specifying the model conversion path to avoid running out of disk space
2023-06-06 15:16:42 +08:00
Ruonan Wang
8bd2992a8d
LLM: accelerate sampling of gptneox and update quantize ( #8262 )
...
* update quantize & accelerate sampling
* fix style check
* fix style error
2023-06-05 15:36:00 +08:00
Jun Wang
2bc0e7abbb
[llm] Add convert_model api ( #8244 )
...
* add convert_model api
* change the model_path to input_path
* map int4 to q4_0
* fix blank line
* change bloomz to bloom
* remove default model_family
* change dtype to lower first
2023-06-03 10:18:29 +08:00
Yuwen Hu
e290660b20
[LLM] Add so shared library for Bloom family models ( #8258 )
...
* Add so file downloading for bloom family models
* Supports selecting avx2/avx512 .so for bloom
2023-06-02 17:39:40 +08:00
Yina Chen
657ea0ee50
[LLM] Fix linux load libs for NeoX and llama ( #8257 )
...
* init
* add license
* fix style
2023-06-02 17:03:17 +08:00
Yuwen Hu
286b010bf1
[LLM] First push for Bloomz pybinding ( #8252 )
...
* Initial commit to move bloom pybinding to bigdl-llm
* Revise path for shared library
* Small fix
2023-06-02 14:41:04 +08:00
Junwei Deng
350d31a472
LLM: first push gptneox pybinding ( #8234 )
...
* first push gptneox pybinding
* fix
* fix code style and add license
---------
Co-authored-by: binbin <binbin1.deng@intel.com>
2023-06-02 09:28:00 +08:00
binbin Deng
3a9aa23835
LLM: fix and update related license in llama pybinding ( #8250 )
2023-06-01 17:09:15 +08:00
binbin Deng
e56f24b424
LLM: first push llama pybinding ( #8241 )
...
* first push llama binding
* update dll
2023-06-01 10:59:15 +08:00
binbin Deng
8421af51ae
LLM: support converting to ggml format ( #8235 )
...
* add convert
* fix
* fix
* fix
* try
* test
* update check
* fix
* fix
2023-05-31 15:20:06 +08:00
Ruonan Wang
c890609d1e
LLM: Support package/quantize for llama.cpp/redpajama.cpp on Windows ( #8236 )
...
* support windows of llama.cpp
* update quantize
* update version of llama.cp submodule
* add gptneox.dll
* add quantize-gptneox.exe
2023-05-31 14:47:12 +08:00
Pingchuan Ma (Henry)
1f913a6941
[LLM] Add LLM pep8 coding style checking ( #8233 )
...
* add LLM pep8 coding checking
* resolve bugs in testing scripts and code style revision
2023-05-30 15:58:14 +08:00
Ruonan Wang
4638b85f3e
[llm] Initial support of package and quantize ( #8228 )
...
* first commit of CMakeLists.txt to include llama & gptneox
* initial support of quantize
* update cmake for only consider linux now
* support quantize interface
* update based on comment
2023-05-26 16:36:46 +08:00
Junwei Deng
ea22416525
LLM: add first round files ( #8225 )
2023-05-25 11:29:18 +08:00