Wang, Jian4 | a54cd767b1 | 2024-01-03 14:49:02 +08:00
LLM: Add gguf falcon (#9801)
* init falcon
* update convert.py
* update style

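The GGUF commits in this log (falcon and mpt above; bloom, mixtral, mistral, and baichuan further down) all feed the same load path. A minimal sketch of that path, assuming bigdl-llm's `from_gguf` entry point and a placeholder checkpoint file name:

```python
# Hedged sketch, not the verbatim example from these PRs: load a llama.cpp
# GGUF checkpoint through bigdl-llm's transformers-style API. from_gguf
# converts the GGUF weights at load time and returns a matching tokenizer.
from bigdl.llm.transformers import AutoModelForCausalLM

# Placeholder file name; any of the model families verified in these commits
# (falcon, mpt, bloom, mixtral, mistral, baichuan, llama) should work.
model, tokenizer = AutoModelForCausalLM.from_gguf("mistral-7b-v0.1.Q4_0.gguf")

inputs = tokenizer("What is AI?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
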
Wang, Jian4 | 7ed9538b9f | 2023-12-28 09:22:39 +08:00
LLM: support gguf mpt (#9773)
* add gguf mpt
* update

Heyang Sun | 66e286a73d | 2023-12-25 16:08:09 +08:00
Support for Mixtral AWQ (#9775)
* Support for Mixtral AWQ
* Update README.md
* Update README.md
* Update awq_config.py
* Update README.md
* Update README.md

Wang, Jian4 | 984697afe2 | 2023-12-21 14:06:25 +08:00
LLM: Add bloom gguf support (#9734)
* init
* update bloom add merges
* update
* update readme
* update for llama error
* update

Heyang Sun | 1fa7793fc0 | 2023-12-19 13:54:38 +08:00
Load Mixtral GGUF Model (#9690)
* Load Mixtral GGUF Model
* refactor
* fix empty tensor when to cpu
* update gpu and cpu readmes
* add dtype when set tensor into module

Wang, Jian4 | b8437a1c1e | 2023-12-15 13:37:39 +08:00
LLM: Add gguf mistral model support (#9691)
* add mistral support
* need to upgrade transformers version
* update

Wang, Jian4 | 496bb2e845 | 2023-12-15 13:34:33 +08:00
LLM: Support load BaiChuan model family gguf model (#9685)
* support baichuan model family gguf model
* update gguf generate.py
* add verify models
* add support model_family
* update
* update style
* update type
* update readme
* update
* remove support model_family

ZehuaCao | 877229f3be | 2023-12-14 09:55:47 +08:00
[LLM] Add Yi-34B-AWQ to verified AWQ models (#9676)
* verify Yi-34B-AWQ
* update

ZehuaCao | 503880809c | 2023-12-13 15:39:31 +08:00
verify codeLlama (#9668)

ZehuaCao | 45721f3473 | 2023-12-11 14:26:05 +08:00
verify llava (#9649)

Heyang Sun | 9f02f96160 | 2023-12-11 14:07:34 +08:00
[LLM] support for Yi AWQ model (#9648)

Heyang Sun | 3811cf43c9 | 2023-12-07 16:02:20 +08:00
[LLM] update AWQ documents (#9623)
* [LLM] update AWQ and verified models' documents
* refine
* refine links
* refine

Jason Dai | 51b668f229 | 2023-12-06 18:21:54 +08:00
Update GGUF readme (#9611)

dingbaorong | a7bc89b3a1 | 2023-12-06 16:00:05 +08:00
remove q4_1 in gguf example (#9610)
* remove q4_1
* fixes

dingbaorong | 89069d6173 | 2023-12-06 15:17:54 +08:00
Add gpu gguf example (#9603)
* add gpu gguf example
* some fixes
* address kai's comments
* address json's comments

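The GPU example added here presumably follows the usual bigdl-llm XPU pattern. A sketch under that assumption (the file name is a placeholder, and an Intel GPU build of PyTorch with intel_extension_for_pytorch is required):

```python
# Hedged sketch of the GPU flow: load the GGUF file on CPU, then move the
# resulting low-bit model and the inputs to the 'xpu' device.
import intel_extension_for_pytorch as ipex  # noqa: F401  (registers 'xpu')
from bigdl.llm.transformers import AutoModelForCausalLM

model, tokenizer = AutoModelForCausalLM.from_gguf("llama-2-7b.Q4_0.gguf")
model = model.to("xpu")

inputs = tokenizer("What is AI?", return_tensors="pt").to("xpu")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0].cpu(), skip_special_tokens=True))
```
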
Qiyuan Gong | d85a430a8c | 2023-11-30 10:10:29 +08:00
Using bigdl-llm-init instead of bigdl-nano-init (#9558)
* Replace `bigdl-nano-init` with `bigdl-llm-init`.
* Install `bigdl-llm` instead of `bigdl-nano`.
* Remove nano in README.

binbin Deng | 6bec0faea5 | 2023-11-24 16:20:22 +08:00
LLM: support Mistral AWQ models (#9520)

Yina Chen | d5263e6681 | 2023-11-16 14:06:25 +08:00
Add awq load support (#9453)
* Support directly loading GPTQ models from huggingface
* fix style
* fix tests
* change example structure
* address comments
* fix style
* init
* address comments
* add examples
* fix style
* fix style
* fix style
* fix style
* update
* remove
* meet comments
* fix style
---------
Co-authored-by: Yang Wang <yang3.wang@intel.com>

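For the AWQ support added in #9453 (and the models verified in the commits above: Mistral, Yi, Yi-34B, Mixtral), loading plausibly goes through bigdl-llm's standard 4-bit path rather than a dedicated API. A sketch under that assumption; the repo id is illustrative:

```python
# Hedged sketch: point from_pretrained at an AWQ checkpoint on the Hugging
# Face Hub; the AWQ weights are converted to bigdl-llm's 4-bit format on load.
from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

repo_id = "TheBloke/Mistral-7B-v0.1-AWQ"  # illustrative AWQ checkpoint
model = AutoModelForCausalLM.from_pretrained(repo_id,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
```
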
Yang Wang | 51d07a9fd8 | 2023-11-13 20:48:12 -08:00
Support directly loading gptq models from huggingface (#9391)
* Support directly loading GPTQ models from huggingface
* fix style
* fix tests
* change example structure
* address comments
* fix style
* address comments

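Direct GPTQ loading (#9391) presumably mirrors the AWQ flow above: the GPTQ checkpoint is fetched from the Hub and converted into bigdl-llm's 4-bit format on load. A sketch under that assumption, with an illustrative repo id:

```python
# Hedged sketch of direct GPTQ loading; the checkpoint id is a placeholder.
from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

repo_id = "TheBloke/Llama-2-7B-GPTQ"  # illustrative GPTQ checkpoint
model = AutoModelForCausalLM.from_pretrained(repo_id,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
```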