Update llm README.md (#8431)
Parent: 2fd751de7a · Commit: 2da21163f8

1 changed file with 109 additions and 109 deletions
## BigDL-LLM

**`bigdl-llm`** is a library for running ***LLM*** (large language model) on your Intel ***laptop*** using INT4 with very low latency.

*(It is built on top of the excellent work of [llama.cpp](https://github.com/ggerganov/llama.cpp), [gptq](https://github.com/IST-DASLab/gptq), [ggml](https://github.com/ggerganov/ggml), [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), [gptq_for_llama](https://github.com/qwopqwop200/GPTQ-for-LLaMa), [bitsandbytes](https://github.com/TimDettmers/bitsandbytes), [redpajama.cpp](https://github.com/togethercomputer/redpajama.cpp), [gptneox.cpp](https://github.com/byroneverson/gptneox.cpp), [bloomz.cpp](https://github.com/NouamaneTazi/bloomz.cpp/), etc.)*

### Demos

See the ***optimized performance*** of `phoenix-inst-chat-7b`, `vicuna-13b-v1.1`, and `starcoder-15b` models on a 12th Gen Intel Core CPU below.

<p align="center">
            <img src="https://github.com/bigdl-project/bigdl-project.github.io/blob/master/assets/llm-7b.gif" width='33%' /> <img src="https://github.com/bigdl-project/bigdl-project.github.io/blob/master/assets/llm-13b.gif" width='33%' /> <img src="https://github.com/bigdl-project/bigdl-project.github.io/blob/master/assets/llm-15b5.gif" width='33%' />
            <img src="https://github.com/bigdl-project/bigdl-project.github.io/blob/master/assets/llm-models.png" width='85%'/>
</p>

### Working with `bigdl-llm`

#### Install

You may install **`bigdl-llm`** as follows:

```bash
pip install --pre --upgrade bigdl-llm[all]
```

#### Download Model

You may download any PyTorch model in Hugging Face *Transformers* format (including *FP16*, *FP32*, or *GPTQ-4bit*).
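
For example, one way to fetch such a checkpoint is with the `huggingface_hub` package (a minimal sketch; the package and the `decapoda-research/llama-7b-hf` repo id are illustrative choices, not requirements of `bigdl-llm`):

```python
# a minimal sketch: download a Hugging Face Transformers checkpoint locally
# (the huggingface_hub package and the repo id below are illustrative assumptions)
from huggingface_hub import snapshot_download

model_path = snapshot_download(repo_id="decapoda-research/llama-7b-hf")
print(model_path)  # local directory containing the downloaded model files
```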

#### Run Model

You may run the models using **`bigdl-llm`** through one of the following APIs:
1. [CLI (command line interface) Tool](#cli-tool)
2. [Hugging Face `transformers`-style API](#hugging-face-transformers-style-api)
3. [LangChain API](#langchain-api)
4. [`llama-cpp-python`-style API](#llama-cpp-python-style-api)

#### CLI Tool

Currently the `bigdl-llm` CLI supports the *LLaMA* (e.g., *vicuna*), *GPT-NeoX* (e.g., *redpajama*), *BLOOM* (e.g., *phoenix*) and *GPT2* (e.g., *starcoder*) model architectures; for other models, you may use the `transformers`-style or LangChain APIs.

 - ##### Convert model

   You may convert the downloaded model into native INT4 format using `llm-convert`.

   ```bash
   # convert a PyTorch (fp16 or fp32) model;
   # the llama/bloom/gptneox/starcoder model families are currently supported
   llm-convert "/path/to/model/" --model-format pth --model-family "bloom" --outfile "/path/to/output/"

   # convert a GPTQ-4bit model;
   # only the llama model family is currently supported
   llm-convert "/path/to/model/" --model-format gptq --model-family "llama" --outfile "/path/to/output/"
   ```

   > An example GPTQ model can be found [here](https://huggingface.co/TheBloke/vicuna-13B-1.1-GPTQ-4bit-128g/tree/main).

 - ##### Run model

   You may run the converted model using `llm-cli` (*built on top of `main.cpp` in [llama.cpp](https://github.com/ggerganov/llama.cpp)*).

   ```bash
   # help
   # the llama/bloom/gptneox/starcoder model families are currently supported
   llm-cli -x gptneox -h

   # text completion
   # the llama/bloom/gptneox/starcoder model families are currently supported
   llm-cli -t 16 -x gptneox -m "/path/to/output/model.bin" -p 'Once upon a time,'
   ```

#### Hugging Face `transformers`-style API

You may run the models using the `transformers`-style API in `bigdl-llm`.

- ##### Using native INT4 format

  You may convert Hugging Face *Transformers* models into native INT4 format for maximum performance as follows.

  *(Currently only the llama/bloom/gptneox/starcoder model families are supported; for other models, you may use the [Hugging Face `transformers` INT4 format](#using-hugging-face-transformers-int4-format).)*

  ```python
  # convert the model
  from bigdl.llm import llm_convert
  bigdl_llm_path = llm_convert(model='/path/to/model/',
                               outfile='/path/to/output/', outtype='int4', model_family="llama")

  # load the converted model
  from bigdl.llm.transformers import BigdlForCausalLM
  llm = BigdlForCausalLM.from_pretrained("/path/to/output/model.bin", ...)

  # run the converted model
  input_ids = llm.tokenize(prompt)
  output_ids = llm.generate(input_ids, ...)
  output = llm.batch_decode(output_ids)
  ```
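
  The `...` above stand for model- and generation-specific arguments. A minimal end-to-end sketch, assuming a converted *llama*-family model (the prompt text and the `max_new_tokens=32` setting are illustrative):

  ```python
  # a minimal end-to-end sketch, assuming a converted llama-family model;
  # the prompt string and max_new_tokens value are illustrative
  from bigdl.llm.transformers import BigdlForCausalLM

  llm = BigdlForCausalLM.from_pretrained("/path/to/output/model.bin", model_family="llama")
  prompt = "What is AI?"
  input_ids = llm.tokenize(prompt)                          # tokenize the prompt
  output_ids = llm.generate(input_ids, max_new_tokens=32)   # generate up to 32 new tokens
  output = llm.batch_decode(output_ids)                     # decode back to text
  print(output)
  ```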

- ##### Using Hugging Face `transformers` INT4 format

  You may apply INT4 optimizations to any Hugging Face *Transformers* model as follows.

  ```python
  # load a Hugging Face Transformers model with INT4 optimizations
  from bigdl.llm.transformers import AutoModelForCausalLM
  model = AutoModelForCausalLM.from_pretrained('/path/to/model/', load_in_4bit=True)

  # run the optimized model
  from transformers import AutoTokenizer
  tokenizer = AutoTokenizer.from_pretrained(model_path)
  input_ids = tokenizer.encode(input_str, ...)
  output_ids = model.generate(input_ids, ...)
  output = tokenizer.batch_decode(output_ids)
  ```
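
  A filled-in sketch of the same flow (the prompt, `return_tensors="pt"`, and `max_new_tokens=32` settings are illustrative assumptions; only the `load_in_4bit=True` call above is specific to `bigdl-llm`):

  ```python
  # illustrative end-to-end sketch of the transformers INT4 path;
  # the prompt and generation settings below are assumptions, not bigdl-llm requirements
  from bigdl.llm.transformers import AutoModelForCausalLM
  from transformers import AutoTokenizer

  model_path = '/path/to/model/'
  model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True)
  tokenizer = AutoTokenizer.from_pretrained(model_path)

  input_ids = tokenizer.encode("What is AI?", return_tensors="pt")
  output_ids = model.generate(input_ids, max_new_tokens=32)
  print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
  ```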

#### LangChain API

You may convert Hugging Face *Transformers* models into *native INT4* format (currently only the *llama*/*bloom*/*gptneox*/*starcoder* model families are supported), and then run the converted models using the LangChain API in `bigdl-llm` as follows.

```python
from bigdl.llm.langchain.llms import BigdlLLM
from bigdl.llm.langchain.embeddings import BigdlLLMEmbeddings
from langchain.chains.question_answering import load_qa_chain

embeddings = BigdlLLMEmbeddings(model_path='/path/to/converted/model.bin',
                                model_family="llama", ...)
bigdl_llm = BigdlLLM(model_path='/path/to/converted/model.bin',
                     model_family="llama", ...)

doc_chain = load_qa_chain(bigdl_llm, ...)
doc_chain.run(...)
```
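
The `...` above stand for additional model and chain arguments. A minimal question-answering sketch under the same assumptions (the `chain_type="stuff"` choice, the in-memory `Document`, and the question text follow standard LangChain usage and are illustrative, not specific to `bigdl-llm`):

```python
# illustrative sketch: answer a question over a small in-memory document set;
# chain_type, the documents, and the question are assumptions for demonstration only
from bigdl.llm.langchain.llms import BigdlLLM
from langchain.chains.question_answering import load_qa_chain
from langchain.docstore.document import Document

bigdl_llm = BigdlLLM(model_path='/path/to/converted/model.bin', model_family="llama")
docs = [Document(page_content="BigDL-LLM runs large language models on Intel laptops using INT4.")]

doc_chain = load_qa_chain(bigdl_llm, chain_type="stuff")
answer = doc_chain.run(input_documents=docs, question="What does BigDL-LLM do?")
print(answer)
```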

#### `llama-cpp-python`-style API

You may also run the converted models using the `llama-cpp-python`-style API in `bigdl-llm` as follows. (`llama-cpp-python` is a popular Python binding for `llama.cpp`; `bigdl-llm` keeps this familiar API and extends it to other model families, e.g., gptneox and bloom.)

```python
from bigdl.llm.models import Llama, Bloom, Gptneox

llm = Bloom("/path/to/converted/model.bin", n_threads=4)
result = llm("what is ai")
```
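
The other classes in `bigdl.llm.models` follow the same call pattern; for instance, a sketch for a converted llama-family model (the file path and `n_threads` value are illustrative):

```python
# the same call pattern with the Llama class; the path and n_threads are illustrative
from bigdl.llm.models import Llama

llm = Llama("/path/to/converted/llama-model.bin", n_threads=4)
result = llm("what is ai")
print(result)
```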

### `bigdl-llm` Dependencies

The native code/libraries in `bigdl-llm` have been built using the following tools; in particular, a lower `GLIBC` version on your Linux system may be incompatible with `bigdl-llm`.

| Model family | Platform | Compiler           | GLIBC |
| ------------ | -------- | ------------------ | ----- |
| llama        | Linux    | GCC 9.3.1          | 2.17  |
| llama        | Windows  | MSVC 19.36.32532.0 |       |
| gptneox      | Linux    | GCC 9.3.1          | 2.17  |
| gptneox      | Windows  | MSVC 19.36.32532.0 |       |
| bloom        | Linux    | GCC 9.4.0          | 2.29  |
| bloom        | Windows  | MSVC 19.36.32532.0 |       |
| starcoder    | Linux    | GCC 9.4.0          | 2.29  |
| starcoder    | Windows  | MSVC 19.36.32532.0 |       |
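
To check the GLIBC version on your Linux system against the table above, one option is Python's standard `platform` module (a sketch; the reported value may be empty on non-glibc systems):

```python
# report the C library and version Python was linked against (Linux);
# compare the reported glibc version with the table above
import platform

libc, version = platform.libc_ver()
print(libc, version)  # e.g. "glibc 2.31"
```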