Commit graph

90 commits

Author SHA1 Message Date
Yuwen Hu
b2e56a2e03
Add release support for option xpu_arc (#12422)
* Add release support for xpu-arc

* Dependency update
2024-12-02 17:16:04 +08:00
Yuwen Hu
e61ae88c5b
Upgrade dependency for xpu_lnl and xpu_arl options (#12424) 2024-11-21 18:37:15 +08:00
Yuwen Hu
516b578104
Support cpp release for ARL on Windows (#12189)
* Support cpp Windows release for ARL

* Temp commit for test

* Remove temp commit
2024-10-14 17:20:31 +08:00
Yuwen Hu
ddcdf47539
Support Windows ARL release (#12183)
* Support release for ARL

* Small fix

* Small fix to doc

* Temp for test

* Remove temp commit for test
2024-10-11 18:30:52 +08:00
Shaojun Liu
724b2ae66d
add npu-level0 pipeline.dll to ipex-llm (#12181)
* add npu-level0 pipeline.dll to ipex-llm

* test

* update runner label

* fix

* update
2024-10-11 16:05:20 +08:00
Yuwen Hu
aef1f671bd
Support LNL Windows release (#12169)
* Release for LNL on Windows

* Temp commit for release test

* Change option name

* Remove temp commit and change option name

* temp commit for test again

* Remove temp commit
2024-10-09 17:41:10 +08:00
Ruonan Wang
48d9092b5a
upgrade OneAPI version for cpp Windows (#12063)
* update version

* update quickstart
2024-09-12 11:12:12 +08:00
Shaojun Liu
b11b28e9a9
update CORE_XE_VERSION to 2.6.0 (#11929) 2024-08-27 13:10:13 +08:00
Shaojun Liu
c5b51d41fb
Update pypi tag to 2.2.0.dev0 (#11895) 2024-08-22 16:48:09 +08:00
Yuwen Hu
5e8286f72d
Update ipex-llm default transformers version to 4.37.0 (#11859)
* Update default transformers version to 4.37.0

* Add dependency requirements for qwen and qwen-vl

* Temp fix transformers version for these not yet verified models

* Skip qwen test in UT for now as it requires transformers<4.37.0
2024-08-20 17:37:58 +08:00
SONG Ge
5b83493b1a
Add ipex-llm npu option in setup.py (#11858)
* add ipex-llm npu release

* update example doc

* meet latest release changes
2024-08-20 17:29:49 +08:00
Ruonan Wang
4da93709b1
update doc/setup to use onednn gemm for cpp (#11598)
* update doc/setup to use onednn gemm

* small fix

* Change TOC of graphrag quickstart back
2024-07-18 13:04:38 +08:00
Qiyuan Gong
5d7c9bf901
Upgrade accelerate to 0.23.0 (#11331)
* Upgrade accelerate to 0.23.0
2024-06-17 15:03:11 +08:00
Shaojun Liu
401013a630
Remove chatglm_C Module to Eliminate LGPL Dependency (#11178)
* remove chatglm_C.**.pyd to solve ngsolve weak copyleft vuln

* fix style check error

* remove chatglm native int4 from langchain
2024-05-31 17:03:11 +08:00
Ruonan Wang
9bfbf78bf4
update api usage of xe_batch & fp16 (#11164)
* update api usage

* update setup.py
2024-05-29 15:15:14 +08:00
Yina Chen
b6b70d1ba0
Divide core-xe packages (#11131)
* temp

* add batch

* fix style

* update package name

* fix style

* add workflow

* use temp version to run uts

* trigger performance test

* trigger win igpu perf

* revert workflow & setup
2024-05-28 12:00:18 +08:00
Shaojun Liu
373f9e6c79
add ipex-llm-init.bat for Windows (#11082)
* add ipex-llm-init.bat for Windows

* update setup.py
2024-05-24 14:26:25 +08:00
Jiao Wang
0a06a6e1d4
Update tests for transformers 4.36 (#10858)
* update unit test

* update

* fix gpu attention test

* update example test

* replace replit code

* set safe_serialization false

* perf test

* delete

* revert
2024-05-24 10:26:38 +08:00
Yuwen Hu
d36b41d59e
Add setuptools limitation for ipex-llm[xpu] (#11102)
* Add setuptools limitation for ipex-llm[xpu]

* llamaindex option update
2024-05-22 18:20:30 +08:00
Shaojun Liu
584439e498
update homepage url for ipex-llm (#11094)
* update homepage url

* Update python version to 3.11

* Update long description
2024-05-22 11:10:44 +08:00
Xiangyu Tian
612a365479
LLM: Install CPU version torch with extras [all] (#10868)
Modify setup.py to install CPU version torch with extras [all]
2024-05-16 10:39:55 +08:00
Yuwen Hu
fb656fbf74
Add requirements for oneAPI pypi packages for windows Intel GPU users (#11009) 2024-05-14 13:40:54 +08:00
Yuwen Hu
9f6358e4c2
Deprecate support for pytorch 2.0 on Linux for ipex-llm >= 2.1.0b20240511 (#10986)
* Remove xpu_2.0 option in setup.py

* Disable xpu_2.0 test in UT and nightly

* Update docs for deprecated pytorch 2.0

* Small doc update
2024-05-11 12:33:35 +08:00
Yuwen Hu
5c9eb5d0f5
Support llama-index install option for upstreaming purposes (#10866)
* Support llama-index install option for upstreaming purposes

* Small fix

* Small fix
2024-04-23 19:08:29 +08:00
Yuwen Hu
97db2492c8
Update setup.py for bigdl-core-xe-esimd-21 on Windows (#10705)
* Support bigdl-core-xe-esimd-21 for windows in setup.py

* Update setup-llm-env accordingly
2024-04-09 18:21:21 +08:00
Cheen Hau, 俊豪
1c5eb14128
Update pip install to use --extra-index-url for ipex package (#10557)
* Change to 'pip install .. --extra-index-url' for readthedocs

* Change to 'pip install .. --extra-index-url' for examples

* Change to 'pip install .. --extra-index-url' for remaining files

* Fix URL for ipex

* Add links for ipex US and CN servers

* Update ipex cpu url

* remove readme

* Update for github actions

* Update for dockerfiles
2024-03-28 09:56:23 +08:00
Shaojun Liu
c563b41491
add nightly_build workflow (#10533)
* add nightly_build workflow

* add create-job-status-badge action

* update

* update

* update

* update setup.py

* release

* revert
2024-03-26 12:47:38 +08:00
Wang, Jian4
a1048ca7f6
Update setup.py and add new actions and add compatible mode (#25)
* update setup.py

* add new action

* add compatible mode
2024-03-22 15:44:59 +08:00
Wang, Jian4
9df70d95eb
Refactor bigdl.llm to ipex_llm (#24)
* Rename bigdl/llm to ipex_llm

* rm python/llm/src/bigdl

* from bigdl.llm to from ipex_llm
2024-03-22 15:41:21 +08:00
Ruonan Wang
66b4bb5c5d LLM: update setup to provide cpp for windows (#10448) 2024-03-18 18:20:55 +08:00
Wang, Jian4
fe8976a00f LLM: Support gguf models using low_bit and fix missing json (#10408)
* support others model use low_bit

* update readme

* update to add *.json
2024-03-15 09:34:18 +08:00
Ruonan Wang
2be8bbd236 LLM: add cpp option in setup.py (#10403)
* add llama_cpp option

* meet code review
2024-03-13 20:12:59 +08:00
ZehuaCao
267de7abc3 fix fschat DEP version error (#10325) 2024-03-06 16:15:27 +08:00
Ruonan Wang
19260492c7 LLM: fix action/installation error of mpmath (#10223)
* fix

* test

* update
2024-02-23 16:14:53 +08:00
Yuwen Hu
5ba1dc38d4 [LLM] Change default Linux GPU install option to PyTorch 2.1 (#9858)
* Update default xpu to ipex 2.1

* Update related install ut support correspondingly

* Add arc ut tests for both ipex 2.0 and 2.1

* Small fix

* Disable ipex 2.1 test for now as oneapi 2024.0 has not been installed on the test machine

* Update document for default PyTorch 2.1

* Small fix

* Small fix

* Small doc fixes

* Small fixes
2024-01-08 17:16:17 +08:00
Yuwen Hu
668c2095b1 Remove unnecessary warning when installing llm (#9815) 2024-01-03 10:30:05 +08:00
Ruonan Wang
1456d30765 LLM: add dot to option name in setup (#9682) 2023-12-13 20:57:27 +08:00
Ruonan Wang
9b9cd51de1 LLM: update setup to provide new install option to support ipex 2.1 & oneapi 2024 (#9647)
* update setup

* default to 2.0 now

* meet code review
2023-12-13 17:31:56 +08:00
Yuwen Hu
11fa3de290 Add setup support of win gpu for bigdl-llm (#9512) 2023-11-24 17:49:21 +08:00
Jasonzzt
cb7ef38e86 rerun 2023-11-01 15:30:34 +08:00
Jasonzzt
ba148ff3ff test py311 2023-11-01 14:08:49 +08:00
Jasonzzt
7c7a7f2ec1 spr & arc ut with python 3.9 & 3.10 & 3.11 2023-11-01 13:17:13 +08:00
Cengguang Zhang
d4ab5904ef LLM: Add python 3.10 llm UT (#9302)
* add py310 test for llm-unit-test.

* add py310 llm-unit-tests

* add llm-cpp-build-py310

* test

* fix deactivate.

* fix

* add build chatglm for win.
2023-11-01 10:15:32 +08:00
Xin Qiu
06447a3ef6 add malloc and intel openmp to llm deps (#9322) 2023-11-01 09:47:45 +08:00
Yishuo Wang
259cbb4126 [LLM] add initial bigdl-llm-init (#9150) 2023-10-13 15:31:45 +08:00
Ruonan Wang
388f688ef3 LLM: update setup.py to add bigdl-core-xe package (#9122) 2023-10-10 15:02:48 +08:00
binbin Deng
0a552d5bdc LLM: fix installation on windows (#8989) 2023-09-18 11:14:54 +08:00
Guancheng Fu
d1b62ef2f2 [bigdl-llm] Remove serving-dep from all_requires (#8980)
* Remove serving-dep from all_requires

* pin fastchat version
2023-09-14 16:59:24 +08:00
Guancheng Fu
0bf5857908 [LLM] Integrate FastChat as a serving framework for BigDL-LLM (#8821)
* Finish changing

* format

* add licence

* fix

* Add xpu support for fschat

* Fix patch

* Also install webui dependencies

* change setup.py dependency installs

* fix

* format

* final test
2023-09-13 09:28:05 +08:00
Zhao Changmin
f00c442d40 fix accelerate (#8946)
Co-authored-by: leonardozcm <leonardozcm@gmail.com>
2023-09-12 09:27:58 +08:00