[LLM]Add Yi-34B-AWQ to verified AWQ model. (#9676)
* verify Yi-34B-AWQ * update
parent 68a4be762f
commit 877229f3be
2 changed files with 2 additions and 0 deletions
@@ -12,6 +12,7 @@ This example shows how to directly run 4-bit AWQ models using BigDL-LLM on Intel
 - [vicuna-13B-v1.5-AWQ](https://huggingface.co/TheBloke/vicuna-13B-v1.5-AWQ)
 - [llava-v1.5-13B-AWQ](https://huggingface.co/TheBloke/llava-v1.5-13B-AWQ)
 - [Yi-6B-AWQ](https://huggingface.co/TheBloke/Yi-6B-AWQ)
+- [Yi-34B-AWQ](https://huggingface.co/TheBloke/Yi-34B-AWQ)
 
 ## Requirements
 
@@ -12,6 +12,7 @@ This example shows how to directly run 4-bit AWQ models using BigDL-LLM on Intel
 - [vicuna-13B-v1.5-AWQ](https://huggingface.co/TheBloke/vicuna-13B-v1.5-AWQ)
 - [llava-v1.5-13B-AWQ](https://huggingface.co/TheBloke/llava-v1.5-13B-AWQ)
 - [Yi-6B-AWQ](https://huggingface.co/TheBloke/Yi-6B-AWQ)
+- [Yi-34B-AWQ](https://huggingface.co/TheBloke/Yi-34B-AWQ)
 
 ## Requirements
 
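The README hunks above only extend the list of verified AWQ models; the diff does not include the runner script itself. For context, below is a minimal sketch of how such a 4-bit AWQ model is typically loaded through BigDL-LLM's transformers-style API. The model path, prompt, and generation parameters are illustrative assumptions, not part of this commit.

```python
# Minimal sketch (assumed usage): load a verified AWQ model with BigDL-LLM
# and run a short generation. "TheBloke/Yi-34B-AWQ" is used only as an
# example path from the verified-model list above.
from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "TheBloke/Yi-34B-AWQ"

# load_in_4bit=True loads the weights in 4-bit low-precision format
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

prompt = "What is AI?"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The exact invocation for each model (chat template, GPU/XPU placement, generation arguments) is defined by the example script this README documents, not by the sketch above.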