Jasonzzt
ba148ff3ff
test py311
2023-11-01 14:08:49 +08:00
Jasonzzt
7c7a7f2ec1
spr & arc ut with python 3.9 & 3.10 & 3.11
2023-11-01 13:17:13 +08:00
Cengguang Zhang
d4ab5904ef
LLM: Add python 3.10 llm UT ( #9302 )
...
* add py310 test for llm-unit-test.
* add py310 llm-unit-tests
* add llm-cpp-build-py310
* test
* test
* test.
* test
* test
* fix deactivate.
* fix
* fix.
* fix
* test
* test
* test
* add build chatglm for win.
* test.
* fix
2023-11-01 10:15:32 +08:00
Xin Qiu
06447a3ef6
add malloc and intel openmp to llm deps ( #9322 )
2023-11-01 09:47:45 +08:00
Yishuo Wang
259cbb4126
[LLM] add initial bigdl-llm-init ( #9150 )
2023-10-13 15:31:45 +08:00
Ruonan Wang
388f688ef3
LLM: update setup.py to add bigdl-core-xe package ( #9122 )
2023-10-10 15:02:48 +08:00
binbin Deng
0a552d5bdc
LLM: fix installation on windows ( #8989 )
2023-09-18 11:14:54 +08:00
Guancheng Fu
d1b62ef2f2
[bigdl-llm] Remove serving-dep from all_requires ( #8980 )
...
* Remove serving-dep from all_requires
* pin fastchat version
2023-09-14 16:59:24 +08:00
Guancheng Fu
0bf5857908
[LLM] Integrate FastChat as a serving framework for BigDL-LLM ( #8821 )
...
* Finish changing
* format
* add licence
* Add licence
* fix
* fix
* Add xpu support for fschat
* Fix patch
* Also install webui dependencies
* change setup.py dependency installs
* fix
* format
* final test
2023-09-13 09:28:05 +08:00
Zhao Changmin
f00c442d40
fix accelerate ( #8946 )
...
Co-authored-by: leonardozcm <leonardozcm@gmail.com>
2023-09-12 09:27:58 +08:00
Ruonan Wang
c0797ea232
LLM: update setup to specify bigdl-core-xe version ( #8913 )
2023-09-07 15:11:55 +08:00
Yishuo Wang
a232c5aa21
[LLM] add protobuf in bigdl-llm dependency ( #8861 )
2023-08-31 15:23:31 +08:00
xingyuan li
6a902b892e
[LLM] Add amx build step ( #8822 )
...
* add amx build step
2023-08-28 17:41:18 +09:00
Song Jiaming
b8b1b6888b
[LLM] Performance test ( #8796 )
2023-08-25 14:31:45 +08:00
xingyuan li
02ec01cb48
[LLM] Add bigdl-core-xe dependency when installing bigdl-llm[xpu] ( #8716 )
...
* add bigdl-core-xe dependency
2023-08-10 17:41:42 +09:00
Ruonan Wang
1a7b698a83
[LLM] support ipex arc int4 & add basic llama2 example ( #8700 )
...
* first support of xpu
* make it work on gpu
update setup
update
add GPU llama2 examples
add use_optimize flag to disable optimize for gpu
fix style
update gpu example readme
fix
* update example, and update env
* fix setup to add cpp files
* replace jit with aot to avoid data leak
* rename to bigdl-core-xe
* update installation in example readme
2023-08-09 22:20:32 +08:00
Yishuo Wang
710b9b8982
[LLM] add linux chatglm pybinding binary file ( #8698 )
2023-08-08 11:16:30 +08:00
Yishuo Wang
6da830cf7e
[LLM] add chatglm pybinding binary file in setup.py ( #8692 )
2023-08-07 09:41:03 +08:00
Cengguang Zhang
ebcf75d506
feat: set transformers lib version. ( #8683 )
2023-08-04 15:01:59 +08:00
Yishuo Wang
ef08250c21
[LLM] chatglm pybinding support ( #8672 )
2023-08-04 14:27:29 +08:00
Yina Chen
59903ea668
llm linux support avx & avx2 ( #8669 )
2023-08-03 17:10:59 +08:00
Xin Qiu
0714888705
build windows avx dll ( #8657 )
...
* windows avx
* add to actions
2023-08-03 02:06:24 +08:00
Yina Chen
119bf6d710
[LLM] Support linux cpp dynamic load .so ( #8655 )
...
* support linux cpp dynamic load .so
* update cli
2023-08-02 20:15:45 +08:00
xingyuan li
cdfbe652ca
[LLM] Add chatglm support for llm-cli ( #8641 )
...
* add chatglm build
* add llm-cli support
* update git
* install cmake
* add ut for chatglm
* add files to setup
* fix bug causing permission error when sf lacks file
2023-08-01 14:30:17 +09:00
Zhao Changmin
3e10260c6d
LLM: llm-convert support chatglm family ( #8643 )
...
* convert chatglm
2023-08-01 11:16:18 +08:00
xingyuan li
3361b66449
[LLM] Revert llm-cli to disable selecting executables on Windows ( #8630 )
...
* revert vnni file select
* revert setup.py
* add model-api.dll
2023-07-31 11:15:44 +09:00
xingyuan li
7b8d9c1b0d
[LLM] Add dependency file check in setup.py ( #8565 )
...
* add package file check
2023-07-20 14:20:08 +09:00
xingyuan li
b6510fa054
fix move/download dll step ( #8564 )
2023-07-19 12:17:07 +09:00
xingyuan li
c52ed37745
fix starcoder dll name ( #8563 )
2023-07-19 11:55:06 +09:00
xingyuan li
e57db777e0
[LLM] Setup.py & llm-cli update for windows vnni binary files ( #8537 )
...
* update setup.py
* update llm-cli
2023-07-17 12:28:38 +09:00
xingyuan li
60c2c0c3dc
Bug fix for merged PR #8503 ( #8516 )
2023-07-13 17:26:30 +09:00
xingyuan li
4f152b4e3a
[LLM] Merge the llm.cpp build and the pypi release ( #8503 )
...
* checkout llm.cpp to build new binary
* use artifact to get latest built binary files
* rename quantize
* modify all release workflow
2023-07-13 16:34:24 +09:00
Yishuo Wang
dd3f953288
Support vnni check ( #8497 )
2023-07-12 10:11:15 +08:00
Yishuo Wang
98bac815e4
specify numpy version ( #8489 )
2023-07-10 16:50:16 +08:00
Yina Chen
f2bb469847
[WIP] LLM llm-cli chat mode ( #8440 )
...
* fix timezone
* temp
* Update linux interactive mode
* modify init text for interactive mode
* meet comments
* update
* win script
* meet comments
2023-07-05 14:04:17 +08:00
Ruonan Wang
50af0251e4
LLM: First commit of StarCoder pybinding ( #8354 )
...
* first commit of starcoder
* update setup.py and fix style
* add starcoder_cpp, fix style
* fix style
* support windows binary
* update pybinding
* fix style, add avx2 binary
* small fix
* fix style
2023-06-21 13:23:06 +08:00
Zhao Changmin
4d177ca0a1
LLM: Merge convert pth/gptq model script into one shell script ( #8348 )
...
* convert model in one
* model type
* license
* readme and pep8
* ut path
* rename
* readme
* fix docs
* without lines
2023-06-19 11:50:05 +08:00
Ruonan Wang
9fda7e34f1
LLM: fix version control ( #8342 )
2023-06-15 15:18:50 +08:00
Ruonan Wang
8840dadd86
LLM: binary file version control on SourceForge ( #8329 )
...
* support version control for llm based on date
* update action
2023-06-15 09:53:27 +08:00
xingyuan li
ea3cf6783e
LLM: Command line wrapper for llama/bloom/gptneox ( #8239 )
...
* add llama/bloom/gptneox wrapper
* add readme
* upload binary main file
2023-06-08 14:55:22 +08:00
Yina Chen
6990328e5c
[LLM]Add bloom quantize in setup.py ( #8295 )
...
* add bloom quantize in setup.py
* fix
2023-06-08 11:18:22 +08:00
Ruonan Wang
aa91657019
LLM: add bloom dll/exe in setup ( #8284 )
2023-06-08 09:28:28 +08:00
Ruonan Wang
39ad68e786
LLM: enhancements for convert_model ( #8278 )
...
* update convert
* change output name
* add description for input_path, add check for input_values
* basic support for command line
* fix style
* update based on comment
* update based on comment
2023-06-07 13:22:14 +08:00
Pingchuan Ma (Henry)
2ed5842448
[LLM] add convert's python deps for LLM ( #8260 )
...
* add python deps for LLM
* update release.sh
* change deps group name
* update all
* fix update
* test fix
* update
2023-06-06 16:01:17 +08:00
Yuwen Hu
e290660b20
[LLM] Add so shared library for Bloom family models ( #8258 )
...
* Add so file downloading for bloom family models
* Supports selecting of avx2/avx512 so for bloom
2023-06-02 17:39:40 +08:00
Yina Chen
91a1528fce
[LLM]Support for linux package (llama, NeoX) & quantize (llama) ( #8246 )
...
* temp
* update
* update
* remove cmake
* runtime get platform -> change platform name using sed
* update
* update
* add platform flags(default: current platform) & delete legacy libs & add neox quantize
2023-06-02 13:51:35 +08:00
Ruonan Wang
3fd716d422
LLM: update setup.py to add missing data ( #8240 )
2023-06-01 10:25:43 +08:00
Ruonan Wang
c890609d1e
LLM: Support package/quantize for llama.cpp/redpajama.cpp on Windows ( #8236 )
...
* support windows of llama.cpp
* update quantize
* update version of llama.cpp submodule
* add gptneox.dll
* add quantize-gptneox.exe
2023-05-31 14:47:12 +08:00
Ruonan Wang
4638b85f3e
[llm] Initial support of package and quantize ( #8228 )
...
* first commit of CMakeFiles.txt to include llama & gptneox
* initial support of quantize
* update cmake for only consider linux now
* support quantize interface
* update based on comment
2023-05-26 16:36:46 +08:00
Junwei Deng
ea22416525
LLM: add first round files ( #8225 )
2023-05-25 11:29:18 +08:00