Update self-speculative readme (#9986)
This commit is contained in:
parent
b27e5a27b9
commit
3bc3d0bbcd
6 changed files with 12 additions and 12 deletions
@@ -1,5 +1,5 @@
-# BigDL-LLM Speculative Decoding Optimization for Large Language Model on Intel GPUs
-You can use BigDL-LLM to run almost every Huggingface Transformer model with speculative decoding optimizations on Intel GPUs. This directory contains example scripts to help you quickly get started using BigDL-LLM to run some popular open-source models in the community. Each model has its own dedicated folder, where you can find detailed instructions on how to install and run it.
+# Self-Speculative Decoding for Large Language Model FP16 Inference using BigDL-LLM on Intel GPUs
+You can use BigDL-LLM to run FP16 inference for any Huggingface Transformer model with ***self-speculative decoding*** on Intel GPUs. This directory contains example scripts to help you quickly get started running some popular open-source models using self-speculative decoding. Each model has its own dedicated folder, where you can find detailed instructions on how to install and run it.
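At a glance, every per-model example in this directory follows the same loading pattern. The sketch below is illustrative only: the checkpoint path is a placeholder, and the loading arguments (`optimize_model=True`, `load_in_low_bit="fp16"`, `speculative=True`) are assumptions drawn from this pattern — the script inside each model folder is authoritative.

```python
import torch
import intel_extension_for_pytorch as ipex  # registers the "xpu" device with PyTorch
from transformers import AutoTokenizer
from bigdl.llm.transformers import AutoModelForCausalLM

# Placeholder checkpoint for illustration; each model folder pins its own.
model_path = "meta-llama/Llama-2-7b-chat-hf"

# Load the full-precision target model. `speculative=True` is assumed here to
# derive the draft model from the target itself, so no separate draft
# checkpoint is needed -- that is what makes the decoding "self"-speculative.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    optimize_model=True,
    torch_dtype=torch.float16,   # FP16 inference
    load_in_low_bit="fp16",
    speculative=True,
    trust_remote_code=True,
    use_cache=True,
).to("xpu")

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
inputs = tokenizer("Once upon a time", return_tensors="pt").to("xpu")

# Generation is unchanged: the draft/verify loop runs inside generate(), so
# accepted draft tokens speed things up without changing the greedy output.
with torch.inference_mode():
    output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The point to notice is that only the target model's weights are loaded; the draft model used for speculation is built from the target, not downloaded separately.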
## Verified Hardware Platforms
@@ -1,5 +1,5 @@
# Baichuan2
-In this directory, you will find examples on how you could apply BigDL-LLM speculative decoding optimizations on Baichuan2 models on [Intel GPUs](../README.md). For illustration purposes, we utilize the [baichuan-inc/Baichuan2-7B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat) and [baichuan-inc/Baichuan2-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat) as reference Baichuan2 models.
+In this directory, you will find examples of how you can run Baichuan2 FP16 inference with self-speculative decoding using BigDL-LLM on [Intel GPUs](../README.md). For illustration purposes, we utilize the [baichuan-inc/Baichuan2-7B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat) and [baichuan-inc/Baichuan2-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat) as reference Baichuan2 models.
## 0. Requirements
To run these examples with BigDL-LLM on Intel GPUs, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.
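If you want to gauge the end-to-end speedup on your own machine, a small timing harness like the sketch below is one option. It is hedged: it assumes a `model` and `tokenizer` already loaded with `speculative=True` as in the loading sketch above (e.g. the Baichuan2-7B-Chat checkpoint with `trust_remote_code=True`), and that `intel_extension_for_pytorch` provides `torch.xpu.synchronize`.

```python
import time

import intel_extension_for_pytorch as ipex  # provides the torch.xpu namespace
import torch


def tokens_per_second(model, tokenizer, prompt: str, max_new_tokens: int = 128) -> float:
    """Time one greedy generation on an Intel GPU and return tokens/second."""
    inputs = tokenizer(prompt, return_tensors="pt").to("xpu")

    with torch.inference_mode():
        # Warm-up run: the first generation pays one-time JIT/cache costs
        # and would otherwise skew the measurement.
        model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
        torch.xpu.synchronize()

        start = time.perf_counter()
        output = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
        torch.xpu.synchronize()  # wait for the GPU before stopping the clock
        elapsed = time.perf_counter() - start

    new_tokens = output.shape[1] - inputs["input_ids"].shape[1]
    return new_tokens / elapsed
```

Running the same harness with the model loaded *without* `speculative=True` gives a plain-FP16 baseline to compare against.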
@@ -1,5 +1,5 @@
-# Chatglm3
-In this directory, you will find examples on how you could apply BigDL-LLM speculative decoding optimizations on ChatGLM3 models on [Intel GPUs](../README.md). For illustration purposes, we utilize the [THUDM/chatglm3-6b](https://huggingface.co/THUDM/chatglm3-6b) as a reference ChatGLM3 model.
+# ChatGLM3
+In this directory, you will find examples of how you can run ChatGLM3 FP16 inference with self-speculative decoding using BigDL-LLM on [Intel GPUs](../README.md). For illustration purposes, we utilize the [THUDM/chatglm3-6b](https://huggingface.co/THUDM/chatglm3-6b) as a reference ChatGLM3 model.
## 0. Requirements
To run these examples with BigDL-LLM on Intel GPUs, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.
@@ -1,5 +1,5 @@
-# Llama2
-In this directory, you will find examples on how you could apply BigDL-LLM speculative decoding optimizations on Llama2 models on [Intel GPUs](../README.md). For illustration purposes, we utilize the [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) and [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) as reference Llama2 models.
+# LLaMA2
+In this directory, you will find examples of how you can run LLaMA2 FP16 inference with self-speculative decoding using BigDL-LLM on [Intel GPUs](../README.md). For illustration purposes, we utilize the [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) and [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) as reference Llama2 models.
## 0. Requirements
To run these examples with BigDL-LLM on Intel GPUs, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.
@@ -1,5 +1,5 @@
# Mistral
-In this directory, you will find examples on how you could apply BigDL-LLM speculative decoding optimizations on Mistral models on [Intel GPUs](../README.md). For illustration purposes, we utilize the [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) and [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as reference Mistral models.
+In this directory, you will find examples of how you can run Mistral FP16 inference with self-speculative decoding using BigDL-LLM on [Intel GPUs](../README.md). For illustration purposes, we utilize the [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) and [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as reference Mistral models.
## 0. Requirements
To run these examples with BigDL-LLM on Intel GPUs, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.
@@ -1,5 +1,5 @@
# Qwen
-In this directory, you will find examples on how you could apply BigDL-LLM speculative decoding optimizations on Qwen models on [Intel GPUs](../README.md). For illustration purposes, we utilize the [Qwen/Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat) and [Qwen/Qwen-14B-Chat](https://huggingface.co/Qwen/Qwen-14B-Chat) as reference Qwen models.
+In this directory, you will find examples of how you can run Qwen FP16 inference with self-speculative decoding using BigDL-LLM on [Intel GPUs](../README.md). For illustration purposes, we utilize the [Qwen/Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat) and [Qwen/Qwen-14B-Chat](https://huggingface.co/Qwen/Qwen-14B-Chat) as reference Qwen models.
## 0. Requirements
To run these examples with BigDL-LLM on Intel GPUs, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.