verify codeLlama (#9668)
This commit is contained in:
parent 1c6499e880
commit 503880809c

2 changed files with 2 additions and 0 deletions
@@ -5,6 +5,7 @@ This example shows how to directly run 4-bit AWQ models using BigDL-LLM on Intel
 ## Verified Models
 - [Llama-2-7B-Chat-AWQ](https://huggingface.co/TheBloke/Llama-2-7B-Chat-AWQ)
+- [CodeLlama-7B-AWQ](https://huggingface.co/TheBloke/CodeLlama-7B-AWQ)
 - [Mistral-7B-Instruct-v0.1-AWQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-AWQ)
 - [Mistral-7B-v0.1-AWQ](https://huggingface.co/TheBloke/Mistral-7B-v0.1-AWQ)
 - [vicuna-7B-v1.5-AWQ](https://huggingface.co/TheBloke/vicuna-7B-v1.5-AWQ)
@@ -5,6 +5,7 @@ This example shows how to directly run 4-bit AWQ models using BigDL-LLM on Intel
 ## Verified Models
 - [Llama-2-7B-Chat-AWQ](https://huggingface.co/TheBloke/Llama-2-7B-Chat-AWQ)
+- [CodeLlama-7B-AWQ](https://huggingface.co/TheBloke/CodeLlama-7B-AWQ)
 - [Mistral-7B-Instruct-v0.1-AWQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-AWQ)
 - [Mistral-7B-v0.1-AWQ](https://huggingface.co/TheBloke/Mistral-7B-v0.1-AWQ)
 - [vicuna-7B-v1.5-AWQ](https://huggingface.co/TheBloke/vicuna-7B-v1.5-AWQ)
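Every entry in the verified list above links to a model card under the `TheBloke` namespace on the Hugging Face Hub. As a minimal sketch (the helper name and list constant are illustrative, not part of BigDL-LLM), the URLs follow a single pattern that can be built from the model names:

```python
# Hypothetical helper: builds Hugging Face Hub URLs for the verified AWQ models.
BASE = "https://huggingface.co"

# Model names taken from the "Verified Models" list in the diff above.
VERIFIED_AWQ_MODELS = [
    "Llama-2-7B-Chat-AWQ",
    "CodeLlama-7B-AWQ",
    "Mistral-7B-Instruct-v0.1-AWQ",
    "Mistral-7B-v0.1-AWQ",
    "vicuna-7B-v1.5-AWQ",
]

def hub_url(name: str, namespace: str = "TheBloke") -> str:
    """Return the Hub URL for a model repo under the given namespace."""
    return f"{BASE}/{namespace}/{name}"

print(hub_url("CodeLlama-7B-AWQ"))
# → https://huggingface.co/TheBloke/CodeLlama-7B-AWQ
```

The newly added CodeLlama-7B-AWQ entry resolves the same way as the other four verified models.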