Commit graph

80 commits

Author SHA1 Message Date
SONG Ge
5b83493b1a
Add ipex-llm npu option in setup.py (#11858)
* add ipex-llm npu release

* update example doc

* meet latest release changes
2024-08-20 17:29:49 +08:00
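An install option like the npu one above is typically wired through a setuptools `extras_require` table in setup.py. A minimal hedged sketch of that mechanism follows; the package and dependency names are placeholders, not ipex-llm's actual requirement lists:

```python
# Hedged sketch: how a setup.py extras_require table maps an install option
# such as "[npu]" onto an extra dependency list. All names are illustrative.
base_requires = ["numpy"]                # placeholder core dependencies
npu_requires = ["example-npu-runtime"]   # placeholder npu-only dependencies

extras_require = {
    "all": base_requires + npu_requires,
    "npu": npu_requires,  # `pip install example-llm[npu]` would pull these in
}

def resolve_install(extra=None):
    """Return the full dependency list for `pip install pkg[extra]`."""
    deps = list(base_requires)
    if extra is not None:
        deps += extras_require[extra]
    return deps
```

Passing a dict like this as the `extras_require=` argument of `setuptools.setup()` is what makes `pip install pkg[npu]` resolve the extra dependencies.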
Ruonan Wang
4da93709b1
update doc/setup to use onednn gemm for cpp (#11598)
* update doc/setup to use onednn gemm

* small fix

* Change TOC of graphrag quickstart back
2024-07-18 13:04:38 +08:00
Qiyuan Gong
5d7c9bf901
Upgrade accelerate to 0.23.0 (#11331)
* Upgrade accelerate to 0.23.0
2024-06-17 15:03:11 +08:00
Shaojun Liu
401013a630
Remove chatglm_C Module to Eliminate LGPL Dependency (#11178)
* remove chatglm_C.**.pyd to solve ngsolve weak copyright vulnerability

* fix style check error

* remove chatglm native int4 from langchain
2024-05-31 17:03:11 +08:00
Ruonan Wang
9bfbf78bf4
update api usage of xe_batch & fp16 (#11164)
* update api usage

* update setup.py
2024-05-29 15:15:14 +08:00
Yina Chen
b6b70d1ba0
Divide core-xe packages (#11131)
* temp

* add batch

* fix style

* update package name

* fix style

* add workflow

* use temp version to run uts

* trigger performance test

* trigger win igpu perf

* revert workflow & setup
2024-05-28 12:00:18 +08:00
Shaojun Liu
373f9e6c79
add ipex-llm-init.bat for Windows (#11082)
* add ipex-llm-init.bat for Windows

* update setup.py
2024-05-24 14:26:25 +08:00
Jiao Wang
0a06a6e1d4
Update tests for transformers 4.36 (#10858)
* update unit test

* update

* update

* update

* update

* update

* fix gpu attention test

* update

* update

* update

* update

* update

* update

* update example test

* replace replit code

* update

* update

* update

* update

* set safe_serialization false

* perf test

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* delete

* update

* update

* update

* update

* update

* update

* revert

* update
2024-05-24 10:26:38 +08:00
Yuwen Hu
d36b41d59e
Add setuptools limitation for ipex-llm[xpu] (#11102)
* Add setuptool limitation for ipex-llm[xpu]

* llamaindex option update
2024-05-22 18:20:30 +08:00
Shaojun Liu
584439e498
update homepage url for ipex-llm (#11094)
* update homepage url

* Update python version to 3.11

* Update long description
2024-05-22 11:10:44 +08:00
Xiangyu Tian
612a365479
LLM: Install CPU version torch with extras [all] (#10868)
Modify setup.py to install CPU version torch with extras [all]
2024-05-16 10:39:55 +08:00
Yuwen Hu
fb656fbf74
Add requirements for oneAPI pypi packages for windows Intel GPU users (#11009) 2024-05-14 13:40:54 +08:00
Yuwen Hu
9f6358e4c2
Deprecate support for pytorch 2.0 on Linux for ipex-llm >= 2.1.0b20240511 (#10986)
* Remove xpu_2.0 option in setup.py

* Disable xpu_2.0 test in UT and nightly

* Update docs for deprecated pytorch 2.0

* Small doc update
2024-05-11 12:33:35 +08:00
Yuwen Hu
5c9eb5d0f5
Support llama-index install option for upstreaming purposes (#10866)
* Support llama-index install option for upstreaming purposes

* Small fix

* Small fix
2024-04-23 19:08:29 +08:00
Yuwen Hu
97db2492c8
Update setup.py for bigdl-core-xe-esimd-21 on Windows (#10705)
* Support bigdl-core-xe-esimd-21 for windows in setup.py

* Update setup-llm-env accordingly
2024-04-09 18:21:21 +08:00
Cheen Hau, 俊豪
1c5eb14128
Update pip install to use --extra-index-url for ipex package (#10557)
* Change to 'pip install .. --extra-index-url' for readthedocs

* Change to 'pip install .. --extra-index-url' for examples

* Change to 'pip install .. --extra-index-url' for remaining files

* Fix URL for ipex

* Add links for ipex US and CN servers

* Update ipex cpu url

* remove readme

* Update for github actions

* Update for dockerfiles
2024-03-28 09:56:23 +08:00
Shaojun Liu
c563b41491
add nightly_build workflow (#10533)
* add nightly_build workflow

* add create-job-status-badge action

* update

* update

* update

* update setup.py

* release

* revert
2024-03-26 12:47:38 +08:00
Wang, Jian4
a1048ca7f6
Update setup.py and add new actions and add compatible mode (#25)
* update setup.py

* add new action

* add compatible mode
2024-03-22 15:44:59 +08:00
Wang, Jian4
9df70d95eb
Refactor bigdl.llm to ipex_llm (#24)
* Rename bigdl/llm to ipex_llm

* rm python/llm/src/bigdl

* from bigdl.llm to from ipex_llm
2024-03-22 15:41:21 +08:00
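A rename like `bigdl.llm` → `ipex_llm` (with the "compatible mode" mentioned in the previous commit) is often paired with a shim that aliases the old import path to the new module so existing code keeps working. A minimal sketch of that general technique, using stand-in module names rather than the real packages, and not necessarily how ipex-llm actually implements it:

```python
import sys
import types

# Stand-in for the renamed package (in reality: ipex_llm).
new_pkg = types.ModuleType("new_pkg")
new_pkg.optimize_model = lambda m: m  # placeholder API

# Compatibility shim: register both the new and the old import path,
# so `import old_pkg` keeps resolving after the rename.
sys.modules["new_pkg"] = new_pkg
sys.modules["old_pkg"] = new_pkg

import old_pkg  # binds to the aliased module via sys.modules
```

Because the interpreter consults `sys.modules` before searching for a module on disk, both import paths resolve to the same module object.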
Ruonan Wang
66b4bb5c5d LLM: update setup to provide cpp for windows (#10448) 2024-03-18 18:20:55 +08:00
Wang, Jian4
fe8976a00f LLM: Support gguf models use low_bit and fix no json(#10408)
* support others model use low_bit

* update readme

* update to add *.json
2024-03-15 09:34:18 +08:00
Ruonan Wang
2be8bbd236 LLM: add cpp option in setup.py (#10403)
* add llama_cpp option

* meet code review
2024-03-13 20:12:59 +08:00
ZehuaCao
267de7abc3 fix fschat DEP version error (#10325) 2024-03-06 16:15:27 +08:00
Ruonan Wang
19260492c7 LLM: fix action/installation error of mpmath (#10223)
* fix

* test

* fix

* update
2024-02-23 16:14:53 +08:00
Yuwen Hu
5ba1dc38d4 [LLM] Change default Linux GPU install option to PyTorch 2.1 (#9858)
* Update default xpu to ipex 2.1

* Update related install ut support correspondingly

* Add arc ut tests for both ipex 2.0 and 2.1

* Small fix

* Disable ipex 2.1 test for now as oneapi 2024.0 has not been installed on the test machine

* Update document for default PyTorch 2.1

* Small fix

* Small fix

* Small doc fixes

* Small fixes
2024-01-08 17:16:17 +08:00
Yuwen Hu
668c2095b1 Remove unnecessary warning when installing llm (#9815) 2024-01-03 10:30:05 +08:00
Ruonan Wang
1456d30765 LLM: add dot to option name in setup (#9682) 2023-12-13 20:57:27 +08:00
Ruonan Wang
9b9cd51de1 LLM: update setup to provide new install option to support ipex 2.1 & oneapi 2024 (#9647)
* update setup

* default to 2.0 now

* meet code review
2023-12-13 17:31:56 +08:00
Yuwen Hu
11fa3de290 Add setup support of win gpu for bigdl-llm (#9512) 2023-11-24 17:49:21 +08:00
Jasonzzt
cb7ef38e86 rerun 2023-11-01 15:30:34 +08:00
Jasonzzt
ba148ff3ff test py311 2023-11-01 14:08:49 +08:00
Jasonzzt
7c7a7f2ec1 spr & arc ut with python 3.9 & 3.10 & 3.11 2023-11-01 13:17:13 +08:00
Cengguang Zhang
d4ab5904ef LLM: Add python 3.10 llm UT (#9302)
* add py310 test for llm-unit-test.

* add py310 llm-unit-tests

* add llm-cpp-build-py310

* test

* test

* test.

* test

* test

* fix deactivate.

* fix

* fix.

* fix

* test

* test

* test

* add build chatglm for win.

* test.

* fix
2023-11-01 10:15:32 +08:00
Xin Qiu
06447a3ef6 add malloc and intel openmp to llm deps (#9322) 2023-11-01 09:47:45 +08:00
Yishuo Wang
259cbb4126 [LLM] add initial bigdl-llm-init (#9150) 2023-10-13 15:31:45 +08:00
Ruonan Wang
388f688ef3 LLM: update setup.py to add bigdl-core-xe package (#9122) 2023-10-10 15:02:48 +08:00
binbin Deng
0a552d5bdc LLM: fix installation on windows (#8989) 2023-09-18 11:14:54 +08:00
Guancheng Fu
d1b62ef2f2 [bigdl-llm] Remove serving-dep from all_requires (#8980)
* Remove serving-dep from all_requires

* pin fastchat version
2023-09-14 16:59:24 +08:00
Guancheng Fu
0bf5857908 [LLM] Integrate FastChat as a serving framework for BigDL-LLM (#8821)
* Finish changing

* format

* add licence

* Add licence

* fix

* fix

* Add xpu support for fschat

* Fix patch

* Also install webui dependencies

* change setup.py dependency installs

* fix

* format

* final test
2023-09-13 09:28:05 +08:00
Zhao Changmin
f00c442d40 fix accelerate (#8946)
Co-authored-by: leonardozcm <leonardozcm@gmail.com>
2023-09-12 09:27:58 +08:00
Ruonan Wang
c0797ea232 LLM: update setup to specify bigdl-core-xe version (#8913) 2023-09-07 15:11:55 +08:00
Yishuo Wang
a232c5aa21 [LLM] add protobuf in bigdl-llm dependency (#8861) 2023-08-31 15:23:31 +08:00
xingyuan li
6a902b892e [LLM] Add amx build step (#8822)
* add amx build step
2023-08-28 17:41:18 +09:00
Song Jiaming
b8b1b6888b [LLM] Performance test (#8796) 2023-08-25 14:31:45 +08:00
xingyuan li
02ec01cb48 [LLM] Add bigdl-core-xe dependency when installing bigdl-llm[xpu] (#8716)
* add bigdl-core-xe dependency
2023-08-10 17:41:42 +09:00
Ruonan Wang
1a7b698a83 [LLM] support ipex arc int4 & add basic llama2 example (#8700)
* first support of xpu

* make it work on gpu

update setup

update

add GPU llama2 examples

add use_optimize flag to disable optimize for gpu

fix style

update gpu example readme

fix

* update example, and update env

* fix setup to add cpp files

* replace jit with aot to avoid data leak

* rename to bigdl-core-xe

* update installation in example readme
2023-08-09 22:20:32 +08:00
Yishuo Wang
710b9b8982 [LLM] add linux chatglm pybinding binary file (#8698) 2023-08-08 11:16:30 +08:00
Yishuo Wang
6da830cf7e [LLM] add chatglm pybinding binary file in setup.py (#8692) 2023-08-07 09:41:03 +08:00
Cengguang Zhang
ebcf75d506 feat: set transformers lib version. (#8683) 2023-08-04 15:01:59 +08:00
Yishuo Wang
ef08250c21 [LLM] chatglm pybinding support (#8672) 2023-08-04 14:27:29 +08:00