[LLM] Support llm-awq vicuna-7b-1.5 on arc (#9874)
* support llm-awq vicuna-7b-1.5 on arc
This commit is contained in:
parent 3e05c9e11b
commit e76d984164
1 changed file with 5 additions and 0 deletions
@@ -4,6 +4,7 @@ This example shows how to directly run 4-bit AWQ models using BigDL-LLM on Intel
## Verified Models
### Auto-AWQ Backend
- [Llama-2-7B-Chat-AWQ](https://huggingface.co/TheBloke/Llama-2-7B-Chat-AWQ)
- [CodeLlama-7B-AWQ](https://huggingface.co/TheBloke/CodeLlama-7B-AWQ)
- [Mistral-7B-Instruct-v0.1-AWQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-AWQ)
@@ -15,6 +16,10 @@ This example shows how to directly run 4-bit AWQ models using BigDL-LLM on Intel
- [Yi-34B-AWQ](https://huggingface.co/TheBloke/Yi-34B-AWQ)
- [Mixtral-8x7B-Instruct-v0.1-AWQ](https://huggingface.co/ybelkada/Mixtral-8x7B-Instruct-v0.1-AWQ)
### llm-AWQ Backend
- [vicuna-7b-1.5-awq](https://huggingface.co/ybelkada/vicuna-7b-1.5-awq)
## Requirements
To run these examples with BigDL-LLM, we have some recommended requirements for your machine; please refer to [here](../../../README.md#requirements) for more information.
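
For context, running one of the verified AWQ checkpoints listed above generally follows BigDL-LLM's transformers-style API. The snippet below is a minimal sketch, not the exact example script in this repository; the model id, prompt, and generation parameters are illustrative assumptions.

```python
# Minimal sketch: load a 4-bit AWQ checkpoint with BigDL-LLM and run it on an
# Intel GPU (e.g. Arc). Assumes BigDL-LLM's transformers-style API; the model
# id, prompt, and generation settings below are placeholders.
import torch
import intel_extension_for_pytorch as ipex  # enables the 'xpu' device
from transformers import AutoTokenizer
from bigdl.llm.transformers import AutoModelForCausalLM

model_path = "ybelkada/vicuna-7b-1.5-awq"  # any verified AWQ model listed above

# load_in_4bit=True loads the quantized weights directly in 4-bit
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
model = model.to("xpu")  # move the model to the Intel GPU

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

prompt = "What is AI?"
input_ids = tokenizer.encode(prompt, return_tensors="pt").to("xpu")
with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```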