[LLM] BigDL-LLM Documentation Initial Version (#8833)
* Change order of LLM in header
* Some updates to footer
* Add BigDL-LLM index page and basic file structure
* Update index page for key features
* Add initial content for BigDL-LLM in 5 mins
* Improvement to footnote
* Add initial contents based on current contents we have
* Add initial quick links
* Small fix
* Rename file
* Hide cli section for now and change model supports to examples
* Hugging Face format -> Hugging Face transformers format
* Add placeholder for GPU supports
* Add GPU related content structure
* Add cpu/gpu installation initial contents
* Add initial contents for GPU supports
* Add image link to LLM index page
* Hide tips and known issues for now
* Small fix
* Update based on comments
* Small fix
* Add notes for Python 3.9
* Add placeholder for optimize model & reveal CLI; small revision
* Examples: add GPU part
* Hide CLI part again for first version of merging
* Add key features optimize_model part (#1)
* Change gif links to the ones hosted on GitHub
* Small fix

---------

Co-authored-by: plusbang <binbin1.deng@intel.com>
Co-authored-by: binbin Deng <108676127+plusbang@users.noreply.github.com>
This commit is contained in: parent 49a39452c6 · commit cf6a620bae
21 changed files with 689 additions and 21 deletions
@@ -2,6 +2,18 @@
 <p class="bd-links__title">Quick Links</p>
 <div class="navbar-nav">
 <ul class="nav">
+<li>
+<strong class="bigdl-quicklinks-section-title">LLM QuickStart</strong>
+<input id="quicklink-cluster-llm" type="checkbox" class="toctree-checkbox" />
+<label for="quicklink-cluster-llm" class="toctree-toggle">
+<i class="fa-solid fa-chevron-down"></i>
+</label>
+<ul class="nav bigdl-quicklinks-section-nav">
+<li>
+<a href="doc/LLM/Overview/llm.html">BigDL-LLM in 5 minutes</a>
+</li>
+</ul>
+</li>
 <li>
 <strong class="bigdl-quicklinks-section-title">Orca QuickStart</strong>
 <input id="quicklink-cluster-orca" type="checkbox" class="toctree-checkbox" />
@@ -12,10 +24,10 @@
 <li>
 <a href="doc/Orca/Howto/tf2keras-quickstart.html">Scale TensorFlow 2 Applications</a>
 </li>
 <li>
 <a href="doc/Orca/Howto/pytorch-quickstart.html">Scale PyTorch Applications</a>
 </li>
 <li>
 <a href="doc/Orca/Howto/ray-quickstart.html">Run Ray programs on Big Data clusters</a>
 </li>
 </ul>
@@ -31,16 +43,20 @@
 <a href="doc/Nano/QuickStart/pytorch_train_quickstart.html">PyTorch Training Acceleration</a>
 </li>
 <li>
-<a href="doc/Nano/QuickStart/pytorch_quantization_inc_onnx.html">PyTorch Inference Quantization with ONNXRuntime Acceleration </a>
+<a href="doc/Nano/QuickStart/pytorch_quantization_inc_onnx.html">PyTorch Inference Quantization
+with ONNXRuntime Acceleration </a>
 </li>
 <li>
-<a href="doc/Nano/QuickStart/pytorch_openvino.html">PyTorch Inference Acceleration using OpenVINO</a>
+<a href="doc/Nano/QuickStart/pytorch_openvino.html">PyTorch Inference Acceleration using
+OpenVINO</a>
 </li>
 <li>
-<a href="doc/Nano/QuickStart/tensorflow_train_quickstart.html">Tensorflow Training Acceleration</a>
+<a href="doc/Nano/QuickStart/tensorflow_train_quickstart.html">Tensorflow Training
+Acceleration</a>
 </li>
 <li>
-<a href="doc/Nano/QuickStart/tensorflow_quantization_quickstart.html">Tensorflow Quantization Acceleration</a>
+<a href="doc/Nano/QuickStart/tensorflow_quantization_quickstart.html">Tensorflow Quantization
+Acceleration</a>
 </li>
 </ul>
 </li>
@@ -67,7 +83,8 @@
 </label>
 <ul class="nav bigdl-quicklinks-section-nav">
 <li>
-<a href="doc/Chronos/QuickStart/chronos-tsdataset-forecaster-quickstart.html">Basic Forecasting</a>
+<a href="doc/Chronos/QuickStart/chronos-tsdataset-forecaster-quickstart.html">Basic
+Forecasting</a>
 </li>
 <li>
 <a href="doc/Chronos/QuickStart/chronos-autotsest-quickstart.html">Forecasting using AutoML</a>
@@ -19,6 +19,46 @@ subtrees:
       - file: doc/Application/powered-by
         title: "Powered by"
+  - entries:
+      - file: doc/LLM/index
+        title: "LLM"
+        subtrees:
+          - entries:
+              - file: doc/LLM/Overview/llm
+                title: "LLM in 5 minutes"
+              - file: doc/LLM/Overview/install
+                title: "Installation"
+                subtrees:
+                  - entries:
+                      - file: doc/LLM/Overview/install_cpu
+                        title: "CPU"
+                      - file: doc/LLM/Overview/install_gpu
+                        title: "GPU"
+              - file: doc/LLM/Overview/KeyFeatures/index
+                title: "Key Features"
+                subtrees:
+                  - entries:
+                      - file: doc/LLM/Overview/KeyFeatures/transformers_style_api
+                        subtrees:
+                          - entries:
+                              - file: doc/LLM/Overview/KeyFeatures/hugging_face_format
+                              - file: doc/LLM/Overview/KeyFeatures/native_format
+                      - file: doc/LLM/Overview/KeyFeatures/optimize_model
+                      - file: doc/LLM/Overview/KeyFeatures/langchain_api
+                      # - file: doc/LLM/Overview/KeyFeatures/cli
+                      - file: doc/LLM/Overview/KeyFeatures/gpu_supports
+              - file: doc/LLM/Overview/examples
+                title: "Examples"
+                subtrees:
+                  - entries:
+                      - file: doc/LLM/Overview/examples_cpu
+                        title: "CPU"
+                      - file: doc/LLM/Overview/examples_gpu
+                        title: "GPU"
+              # - file: doc/LLM/Overview/known_issues
+              #   title: "Tips and Known Issues"
+              - file: doc/PythonAPI/LLM/index
+                title: "API Reference"
   - entries:
       - file: doc/Orca/index
@@ -329,13 +369,6 @@ subtrees:
       - file: doc/PPML/QuickStart/tpc-ds_with_sparksql_on_k8s
       - file: doc/PPML/Overview/azure_ppml_occlum
       - file: doc/PPML/Overview/secure_lightgbm_on_spark
-  - entries:
-      - file: doc/LLM/index
-        title: "LLM"
-        subtrees:
-          - entries:
-              - file: doc/PythonAPI/LLM/index
-                title: "API Reference"
   - entries:
       - file: doc/UserGuide/contributor
docs/readthedocs/source/doc/LLM/Overview/KeyFeatures/cli.md (new file, 40 lines)
@@ -0,0 +1,40 @@
# CLI (Command Line Interface) Tool

```eval_rst
.. note::

   Currently the ``bigdl-llm`` CLI supports *LLaMA* (e.g., vicuna), *GPT-NeoX* (e.g., redpajama), *BLOOM* (e.g., phoenix) and *GPT2* (e.g., starcoder) model architectures; for other models, you may use the ``transformers``-style or LangChain APIs.
```

## Convert Model

You may convert the downloaded model into native INT4 format using `llm-convert`.

```bash
# convert PyTorch (fp16 or fp32) model;
# llama/bloom/gptneox/starcoder model family is currently supported
llm-convert "/path/to/model/" --model-format pth --model-family "bloom" --outfile "/path/to/output/"

# convert GPTQ-4bit model
# only llama model family is currently supported
llm-convert "/path/to/model/" --model-format gptq --model-family "llama" --outfile "/path/to/output/"
```

## Run Model

You may run the converted model using `llm-cli` or `llm-chat` (built on top of `main.cpp` in [`llama.cpp`](https://github.com/ggerganov/llama.cpp)).

```bash
# help
# llama/bloom/gptneox/starcoder model family is currently supported
llm-cli -x gptneox -h

# text completion
# llama/bloom/gptneox/starcoder model family is currently supported
llm-cli -t 16 -x gptneox -m "/path/to/output/model.bin" -p 'Once upon a time,'

# chat mode
# llama/gptneox model family is currently supported
llm-chat -m "/path/to/output/model.bin" -x llama
```
@@ -0,0 +1,47 @@
# GPU Supports

You may apply INT4 optimizations to any Hugging Face *Transformers* model on devices with Intel GPUs as follows:

```python
# import ipex
import intel_extension_for_pytorch as ipex

# load Hugging Face Transformers model with INT4 optimizations on Intel GPUs
from bigdl.llm.transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained('/path/to/model/',
                                             load_in_4bit=True,
                                             optimize_model=False)
model = model.to('xpu')
```

```eval_rst
.. note::

   You may apply INT8 optimizations as follows:

   .. code-block:: python

      model = AutoModelForCausalLM.from_pretrained('/path/to/model/',
                                                   load_in_low_bit="sym_int8",
                                                   optimize_model=False)
      model = model.to('xpu')
```

After loading the Hugging Face *Transformers* model, you may easily run the optimized model as follows:

```python
# run the optimized model
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_path)
input_ids = tokenizer.encode(input_str, ...).to('xpu')
output_ids = model.generate(input_ids, ...)
output = tokenizer.batch_decode(output_ids)
```

```eval_rst
.. seealso::

   See the complete examples `here <https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/transformers_int4/GPU>`_
```
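Putting the two snippets above together, a minimal end-to-end sketch might look as follows. This is illustrative only: the model path, prompt and generation settings are placeholders, and your own inputs may differ.

```python
import intel_extension_for_pytorch as ipex  # required so the 'xpu' device is available
import torch
from transformers import AutoTokenizer
from bigdl.llm.transformers import AutoModelForCausalLM

model_path = '/path/to/model/'  # placeholder for your downloaded model folder

# load with INT4 optimizations and move the model to the Intel GPU
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             optimize_model=False)
model = model.to('xpu')

tokenizer = AutoTokenizer.from_pretrained(model_path)

with torch.inference_mode():
    # tokenize on CPU, then move the input ids to the GPU
    input_ids = tokenizer.encode('Once upon a time,', return_tensors='pt').to('xpu')
    output_ids = model.generate(input_ids, max_new_tokens=32)
    # move the generated ids back to CPU before decoding
    print(tokenizer.batch_decode(output_ids.cpu(), skip_special_tokens=True)[0])
```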
@@ -0,0 +1,54 @@
# Hugging Face ``transformers`` Format

## Load in Low Precision

You may apply INT4 optimizations to any Hugging Face *Transformers* model as follows:

```python
# load Hugging Face Transformers model with INT4 optimizations
from bigdl.llm.transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained('/path/to/model/', load_in_4bit=True)
```

After loading the Hugging Face *Transformers* model, you may easily run the optimized model as follows:

```python
# run the optimized model
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_path)
input_ids = tokenizer.encode(input_str, ...)
output_ids = model.generate(input_ids, ...)
output = tokenizer.batch_decode(output_ids)
```

```eval_rst
.. seealso::

   See the complete examples `here <https://github.com/intel-analytics/BigDL/blob/main/python/llm/example/transformers/transformers_int4>`_

.. note::

   You may apply more low-bit optimizations (including INT8, INT5 and INT4) as follows:

   .. code-block:: python

      model = AutoModelForCausalLM.from_pretrained('/path/to/model/', load_in_low_bit="sym_int5")

   See the complete example `here <https://github.com/intel-analytics/BigDL/blob/main/python/llm/example/transformers/transformers_low_bit>`_.
```

## Save & Load

After the model is optimized using INT4 (or INT8/INT5), you may save and load the optimized model as follows:

```python
model.save_low_bit(model_path)

new_model = AutoModelForCausalLM.load_low_bit(model_path)
```

```eval_rst
.. seealso::

   See the examples `here <https://github.com/intel-analytics/BigDL/blob/main/python/llm/example/transformers/transformers_low_bit>`_
```
@@ -0,0 +1,19 @@
BigDL-LLM Key Features
================================

You may run the LLMs using ``bigdl-llm`` through one of the following APIs:

* |transformers_style_api|_

  * |hugging_face_transformers_format|_
  * `Native Format <./native_format.html>`_

* `General PyTorch Model Supports <./optimize_model.html>`_
* `LangChain API <./langchain_api.html>`_
* `GPU Supports <./gpu_supports.html>`_

.. |transformers_style_api| replace:: ``transformers``-style API
.. _transformers_style_api: ./transformers_style_api.html

.. |hugging_face_transformers_format| replace:: Hugging Face ``transformers`` Format
.. _hugging_face_transformers_format: ./hugging_face_format.html
@@ -0,0 +1,57 @@
# LangChain API

You may run the models using the LangChain API in `bigdl-llm`.

## Using Hugging Face `transformers` INT4 Format

You may run any Hugging Face *Transformers* model (with INT4 optimizations applied) using the LangChain API as follows:

```python
from bigdl.llm.langchain.llms import TransformersLLM
from bigdl.llm.langchain.embeddings import TransformersEmbeddings
from langchain.chains.question_answering import load_qa_chain

embeddings = TransformersEmbeddings.from_model_id(model_id=model_path)
bigdl_llm = TransformersLLM.from_model_id(model_id=model_path, ...)

doc_chain = load_qa_chain(bigdl_llm, ...)
output = doc_chain.run(...)
```

```eval_rst
.. seealso::

   See the examples `here <https://github.com/intel-analytics/BigDL/blob/main/python/llm/example/langchain/transformers_int4>`_.
```

## Using Native INT4 Format

You may also convert Hugging Face *Transformers* models into native INT4 format, and then run the converted models using the LangChain API as follows.

```eval_rst
.. note::

   * Currently only the llama/bloom/gptneox/starcoder/chatglm model families are supported; for other models, you may use the Hugging Face ``transformers`` INT4 format as described `above <./langchain_api.html#using-hugging-face-transformers-int4-format>`_.

   * You may choose the corresponding API developed for specific native models to load the converted model.
```

```python
from bigdl.llm.langchain.llms import LlamaLLM
from bigdl.llm.langchain.embeddings import LlamaEmbeddings
from langchain.chains.question_answering import load_qa_chain

# switch to ChatGLMEmbeddings/GptneoxEmbeddings/BloomEmbeddings/StarcoderEmbeddings to load other models
embeddings = LlamaEmbeddings(model_path='/path/to/converted/model.bin')
# switch to ChatGLMLLM/GptneoxLLM/BloomLLM/StarcoderLLM to load other models
bigdl_llm = LlamaLLM(model_path='/path/to/converted/model.bin')

doc_chain = load_qa_chain(bigdl_llm, ...)
doc_chain.run(...)
```

```eval_rst
.. seealso::

   See the examples `here <https://github.com/intel-analytics/BigDL/blob/main/python/llm/example/langchain/native_int4>`_.
```
@@ -0,0 +1,32 @@
# Native Format

You may also convert Hugging Face *Transformers* models into native INT4 format for maximum performance as follows.

```eval_rst
.. note::

   Currently only the llama/bloom/gptneox/starcoder/chatglm model families are supported; you may use the corresponding API to load the converted model. (For other models, you can use the Hugging Face ``transformers`` format as described `here <./hugging_face_format.html>`_.)
```

```python
# convert the model
from bigdl.llm import llm_convert
bigdl_llm_path = llm_convert(model='/path/to/model/',
                             outfile='/path/to/output/', outtype='int4', model_family="llama")

# load the converted model
# switch to ChatGLMForCausalLM/GptneoxForCausalLM/BloomForCausalLM/StarcoderForCausalLM to load other models
from bigdl.llm.transformers import LlamaForCausalLM
llm = LlamaForCausalLM.from_pretrained("/path/to/output/model.bin", native=True, ...)

# run the converted model
input_ids = llm.tokenize(prompt)
output_ids = llm.generate(input_ids, ...)
output = llm.batch_decode(output_ids)
```

```eval_rst
.. seealso::

   See the complete example `here <https://github.com/intel-analytics/BigDL/blob/main/python/llm/example/transformers/native_int4/native_int4_pipeline.py>`_
```
@@ -0,0 +1,22 @@
# General PyTorch Model Supports

You may apply BigDL-LLM optimizations to any PyTorch model for acceleration, not just Hugging Face *Transformers* models. With BigDL-LLM, PyTorch models (in FP16/BF16/FP32) can be optimized with low-bit quantization (supported precisions include INT4/INT5/INT8).

You can easily enable BigDL-LLM INT4 optimizations on any PyTorch model as follows:

```python
# Create or load any PyTorch model
model = ...

# Add only two lines to enable BigDL-LLM INT4 optimizations on the model
from bigdl.llm import optimize_model
model = optimize_model(model)
```

After optimizing the model, you may run it directly with no API changes and lower inference latency (see the end-to-end sketch at the end of this page).

```eval_rst
.. seealso::

   See the examples for Hugging Face *Transformers* models `here <https://github.com/intel-analytics/BigDL/blob/main/python/llm/example/transformers/general_int4>`_, and examples for other general PyTorch models `here <https://github.com/intel-analytics/BigDL/blob/main/python/llm/example/pytorch-model>`_.
```
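As a more concrete end-to-end sketch, the snippet below applies `optimize_model` to a Hugging Face model loaded with the vanilla `transformers` API. The model id, prompt and generation settings are illustrative only; any PyTorch model and its usual inputs work the same way.

```python
import torch
from transformers import AutoModelForCausalLM, LlamaTokenizer
from bigdl.llm import optimize_model

# load a model in its original precision with the vanilla transformers API
model = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_3b_v2")
tokenizer = LlamaTokenizer.from_pretrained("openlm-research/open_llama_3b_v2")

# the only BigDL-LLM-specific step: apply INT4 optimizations
model = optimize_model(model)

# the inference code itself is unchanged
with torch.inference_mode():
    input_ids = tokenizer.encode("Q: What is CPU?\nA:", return_tensors="pt")
    output = model.generate(input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```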
@@ -0,0 +1,10 @@
``transformers``-style API
================================

You may run the LLMs using the ``transformers``-style API in ``bigdl-llm``:

* |hugging_face_transformers_format|_
* `Native Format <./native_format.html>`_

.. |hugging_face_transformers_format| replace:: Hugging Face ``transformers`` Format
.. _hugging_face_transformers_format: ./hugging_face_format.html
docs/readthedocs/source/doc/LLM/Overview/examples.rst (new file, 9 lines)
@@ -0,0 +1,9 @@
BigDL-LLM Examples
================================

You can use BigDL-LLM to run any Hugging Face *Transformers* model with INT4 optimizations on either servers or laptops.

Here, we provide examples to help you quickly get started using BigDL-LLM to run some popular open-source models in the community. Please refer to the appropriate guide based on your device:

* `CPU <./examples_cpu.html>`_
* `GPU <./examples_gpu.html>`_
docs/readthedocs/source/doc/LLM/Overview/examples_cpu.md (new file, 26 lines)
@@ -0,0 +1,26 @@
# BigDL-LLM Examples: CPU

Here, we provide some examples of how you could apply BigDL-LLM INT4 optimizations on popular open-source models in the community.

To run these examples, please first refer to [here](./install_cpu.html) for more information about how to install ``bigdl-llm``, its requirements, and best practices for setting up your environment.

The following models have been verified on either servers or laptops with Intel CPUs.

| Model     | Example |
|-----------|----------------------------------------------------------|
| LLaMA *(such as Vicuna, Guanaco, Koala, Baize, WizardLM, etc.)* | [link1](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/native_int4), [link2](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/transformers_int4/vicuna) |
| LLaMA 2   | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/transformers_int4/llama2) |
| MPT       | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/transformers_int4/mpt) |
| Falcon    | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/transformers_int4/falcon) |
| ChatGLM   | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/transformers_int4/chatglm) |
| ChatGLM2  | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/transformers_int4/chatglm2) |
| Qwen      | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/transformers_int4/qwen) |
| MOSS      | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/transformers_int4/moss) |
| Baichuan  | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/transformers_int4/baichuan) |
| Dolly-v1  | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/transformers_int4/dolly_v1) |
| Dolly-v2  | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/transformers_int4/dolly_v2) |
| RedPajama | [link1](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/native_int4), [link2](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/transformers_int4/redpajama) |
| Phoenix   | [link1](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/native_int4), [link2](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/transformers_int4/phoenix) |
| StarCoder | [link1](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/native_int4), [link2](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/transformers_int4/starcoder) |
| InternLM  | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/transformers_int4/internlm) |
| Whisper   | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/transformers_int4/whisper) |
docs/readthedocs/source/doc/LLM/Overview/examples_gpu.md (new file, 26 lines)
@@ -0,0 +1,26 @@
# BigDL-LLM Examples: GPU

Here, we provide some examples of how you could apply BigDL-LLM INT4 optimizations on popular open-source models in the community.

To run these examples, please first refer to [here](./install_gpu.html) for more information about how to install ``bigdl-llm``, its requirements, and best practices for setting up your environment.

```eval_rst
.. important::

   Only Linux systems are supported for now; Ubuntu 22.04 is preferred.
```

The following models have been verified on either servers or laptops with Intel GPUs.

| Model     | Example |
|-----------|----------------------------------------------------------|
| LLaMA 2   | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/gpu/llama2) |
| MPT       | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/gpu/mpt) |
| Falcon    | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/gpu/falcon) |
| ChatGLM2  | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/gpu/chatglm2) |
| Qwen      | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/gpu/qwen) |
| Baichuan  | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/gpu/baichuan) |
| StarCoder | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/gpu/starcoder) |
| InternLM  | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/gpu/internlm) |
| Whisper   | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/gpu/whisper) |
| GPT-J     | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/gpu/gpt-j) |
docs/readthedocs/source/doc/LLM/Overview/install.rst (new file, 7 lines)
@@ -0,0 +1,7 @@
BigDL-LLM Installation
================================

Here, we provide instructions on how to install ``bigdl-llm`` and best practices for setting up your environment. Please refer to the appropriate guide based on your device:

* `CPU <./install_cpu.html>`_
* `GPU <./install_gpu.html>`_
docs/readthedocs/source/doc/LLM/Overview/install_cpu.md (new file, 71 lines)
@@ -0,0 +1,71 @@
# BigDL-LLM Installation: CPU

## Quick Installation

Install BigDL-LLM with CPU support using pip:

```bash
pip install bigdl-llm[all]
```

```eval_rst
.. note::

   The ``all`` option installs all the dependencies required for common LLM application development.

.. important::

   ``bigdl-llm`` is tested with Python 3.9, which is recommended for best practices.
```

## Recommended Requirements

Listed below are the recommended hardware and OS for a smooth BigDL-LLM experience on CPU:

* Hardware

  * PCs equipped with a 12th Gen Intel® Core™ processor or higher, and at least 16GB RAM
  * Servers equipped with Intel® Xeon® processors, and at least 32GB RAM

* Operating System

  * Ubuntu 20.04 or later
  * CentOS 7 or later
  * Windows 10/11, with or without WSL

## Environment Setup

For optimal performance with LLM models using BigDL-LLM optimizations on Intel CPUs, here are some best practices for setting up your environment:

First, we recommend using [Conda](https://docs.conda.io/en/latest/miniconda.html) to create a Python 3.9 environment:

```bash
conda create -n llm python=3.9
conda activate llm

pip install bigdl-llm[all] # install bigdl-llm for CPU with 'all' option
```

Then, to run an LLM model with BigDL-LLM optimizations (taking `example.py` as an example):

```eval_rst
.. tabs::

   .. tab:: Client

      It is recommended to run directly with full utilization of all CPU cores:

      .. code-block:: bash

         python example.py

   .. tab:: Server

      It is recommended to run with all the physical cores of a single socket:

      .. code-block:: bash

         # e.g. for a server with 48 cores per socket
         export OMP_NUM_THREADS=48
         numactl -C 0-47 -m 0 python example.py
```
docs/readthedocs/source/doc/LLM/Overview/install_gpu.md (new file, 65 lines)
@@ -0,0 +1,65 @@
# BigDL-LLM Installation: GPU

## Quick Installation

Install BigDL-LLM with GPU support using pip:

```bash
pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu
```

```eval_rst
.. note::

   The above command will install ``intel_extension_for_pytorch==2.0.110+xpu`` by default. You can install a specific ``ipex``/``torch`` version to suit your needs.

.. important::

   ``bigdl-llm`` is tested with Python 3.9, which is recommended for best practices.
```

## Recommended Requirements

BigDL-LLM GPU support has been verified on:

* Intel Arc™ A-Series Graphics
* Intel Data Center GPU Flex Series

To apply Intel GPU acceleration, there are several steps for tool installation and environment preparation:

* Step 1: Only Linux systems are supported for now; Ubuntu 22.04 is preferred.
* Step 2: Please refer to our [driver installation](https://dgpu-docs.intel.com/driver/installation.html) for general purpose GPU capabilities.
```eval_rst
.. note::

   IPEX 2.0.110+xpu requires the Intel GPU driver version `Stable 647.21 <https://dgpu-docs.intel.com/releases/stable_647_21_20230714.html>`_.
```
* Step 3: Download and install the [Intel® oneAPI Base Toolkit](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit-download.html). OneMKL and the DPC++ compiler are required; the other components are optional.
```eval_rst
.. note::

   IPEX 2.0.110+xpu requires the Intel® oneAPI Base Toolkit version >= 2023.2.0.
```

## Environment Setup

For optimal performance with LLM models using BigDL-LLM optimizations on Intel GPUs, here are some best practices for setting up your environment:

First, we recommend using [Conda](https://docs.conda.io/en/latest/miniconda.html) to create a Python 3.9 environment:

```bash
conda create -n llm python=3.9
conda activate llm

pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu # install bigdl-llm for GPU
```

Then, to run an LLM model with BigDL-LLM optimizations, several environment variables are recommended:

```bash
# configure OneAPI environment variables
source /opt/intel/oneapi/setvars.sh

export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
```
docs/readthedocs/source/doc/LLM/Overview/known_issues.md (new file, 1 line)
@@ -0,0 +1 @@
# BigDL-LLM Known Issues
docs/readthedocs/source/doc/LLM/Overview/llm.md (new file, 68 lines)
@@ -0,0 +1,68 @@
# BigDL-LLM in 5 minutes

You can use BigDL-LLM to run any [*Hugging Face Transformers*](https://huggingface.co/docs/transformers/index) PyTorch model. It automatically optimizes and accelerates LLMs using low-precision (INT4/INT5/INT8) techniques, modern hardware accelerations and the latest software optimizations.

Hugging Face transformers-based applications can run on BigDL-LLM with a one-line code change, and you'll immediately observe significant speedup<sup><a href="#footnote-perf" id="ref-perf">[1]</a></sup>.

Here, let's take a relatively small LLM, i.e. [open_llama_3b_v2](https://huggingface.co/openlm-research/open_llama_3b_v2), with BigDL-LLM INT4 optimizations as an example.

## Load a Pretrained Model

Simply use the one-line `transformers`-style API in `bigdl-llm` to load `open_llama_3b_v2` with INT4 optimization (by specifying `load_in_4bit=True`) as follows:

```python
from bigdl.llm.transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path="openlm-research/open_llama_3b_v2",
                                             load_in_4bit=True)
```

```eval_rst
.. tip::

   `open_llama_3b_v2 <https://huggingface.co/openlm-research/open_llama_3b_v2>`_ is a pretrained large language model hosted on Hugging Face. ``openlm-research/open_llama_3b_v2`` is its Hugging Face model id. ``from_pretrained`` will automatically download the model from Hugging Face to a local cache path (e.g. ``~/.cache/huggingface``), load the model, and convert it to ``bigdl-llm`` INT4 format.

   Downloading the model through this API may take a long time. You can also download the model yourself and set ``pretrained_model_name_or_path`` to the local path of the downloaded model. This way, ``from_pretrained`` will load and convert the model directly from the local path without downloading it.
```

## Load Tokenizer

You also need a tokenizer for inference. Just use the official `transformers` API to load `LlamaTokenizer`:

```python
from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained(pretrained_model_name_or_path="openlm-research/open_llama_3b_v2")
```

## Run LLM

Now you can do model inference in exactly the same way as with the official `transformers` API:

```python
import torch

with torch.inference_mode():
    prompt = 'Q: What is CPU?\nA:'

    # tokenize the input prompt from string to token ids
    input_ids = tokenizer.encode(prompt, return_tensors="pt")

    # predict the next tokens (maximum 32) based on the input token ids
    output = model.generate(input_ids,
                            max_new_tokens=32)

    # decode the predicted token ids to output string
    output_str = tokenizer.decode(output[0], skip_special_tokens=True)

    print(output_str)
```

------

<div>
    <p>
        <sup><a href="#ref-perf" id="footnote-perf">[1]</a>
            Performance varies by use, configuration and other factors. <code><span>bigdl-llm</span></code> may not optimize to the same degree for non-Intel products. Learn more at <a href="https://www.Intel.com/PerformanceIndex">www.Intel.com/PerformanceIndex</a>.
        </sup>
    </p>
</div>
@@ -1,14 +1,53 @@
 BigDL-LLM
 =========================
 
-BigDL-LLM is a library for running LLM (Large Language Models) on your Intel laptop using INT4 with very low latency (for any Hugging Face Transformers model).
+.. raw:: html
+
+   <p>
+      <strong>BigDL-LLM</strong> is a library for running <strong>LLM</strong> (large language model) on your Intel <strong>laptop</strong> or <strong>GPU</strong> using INT4 with very low latency <sup><a href="#footnote-perf" id="ref-perf">[1]</a></sup> (for any <strong>PyTorch</strong> model).
+   </p>
 
 -------
 
 .. grid:: 1 2 2 2
    :gutter: 2
 
+   .. grid-item-card::
+
+      **Get Started**
+      ^^^
+
+      Documents in this section help you get started quickly with BigDL-LLM.
+
+      +++
+      :bdg-link:`BigDL-LLM in 5 minutes <./Overview/quick_start.html>` |
+      :bdg-link:`Installation <./Overview/install.html>`
+
+   .. grid-item-card::
+
+      **Key Features Guide**
+      ^^^
+
+      Each guide in this section provides you with in-depth information, concepts and knowledge about BigDL-LLM key features.
+
+      +++
+
+      :bdg-link:`transformers-style <./Overview/KeyFeatures/transformers_style_api.html>` |
+      :bdg-link:`Optimize Model <./Overview/KeyFeatures/optimize_model.html>` |
+      :bdg-link:`LangChain <./Overview/KeyFeatures/langchain_api.html>` |
+      :bdg-link:`GPU <./Overview/KeyFeatures/gpu_supports.html>`
+
+   .. grid-item-card::
+
+      **Examples & Tutorials**
+      ^^^
+
+      Examples contain scripts to help you quickly get started using BigDL-LLM to run some popular open-source models in the community.
+
+      +++
+
+      :bdg-link:`Examples <./Overview/examples.html>`
+
    .. grid-item-card::
 
       **API Document**
@@ -20,6 +59,18 @@
      :bdg-link:`API Document <../PythonAPI/LLM/index.html>`
 
+------
+
+.. raw:: html
+
+   <div>
+      <p>
+         <sup><a href="#ref-perf" id="footnote-perf">[1]</a>
+            Performance varies by use, configuration and other factors. <code><span>bigdl-llm</span></code> may not optimize to the same degree for non-Intel products. Learn more at <a href="https://www.Intel.com/PerformanceIndex">www.Intel.com/PerformanceIndex</a>.
+         </sup>
+      </p>
+   </div>
+
 .. toctree::
    :hidden:
@@ -1,4 +1,4 @@
-BigDL-LLM Transformers API
+BigDL-LLM `transformers`-style API
 =====================
 
 llm.transformers.model
@@ -10,7 +10,12 @@ The BigDL Project
 ---------------------------------
 BigDL-LLM: low-Bit LLM library
 ---------------------------------
-`bigdl-llm <https://github.com/intel-analytics/BigDL/tree/main/python/llm>`_ is a library for running **LLM** (large language model) on your Intel **laptop** or **GPU** using INT4 with very low latency [*]_ (for any **PyTorch** model).
+.. raw:: html
+
+   <p>
+      <a href="https://github.com/intel-analytics/BigDL/tree/main/python/llm"><code><span>bigdl-llm</span></code></a> is a library for running <strong>LLM</strong> (large language model) on your Intel <strong>laptop</strong> or <strong>GPU</strong> using INT4 with very low latency <sup><a href="#footnote-perf" id="ref-perf">[1]</a></sup> (for any <strong>PyTorch</strong> model).
+   </p>
 
 .. note::
@@ -33,8 +38,8 @@ See the **optimized performance** of ``chatglm2-6b``, ``llama-2-13b-chat``, and
 .. raw:: html
 
    <p align="center">
-      <img src="https://github.com/bigdl-project/bigdl-project.github.io/blob/master/assets/chatglm2-6b.gif?raw=true" width='30%' /> <img src="https://github.com/bigdl-project/bigdl-project.github.io/blob/master/assets/llama-2-13b-chat.gif?raw=true" width='30%' /> <img src="https://github.com/bigdl-project/bigdl-project.github.io/blob/master/assets/llm-15b5.gif?raw=true" width='30%' />
-      <img src="https://github.com/bigdl-project/bigdl-project.github.io/blob/master/assets/llm-models3.png?raw=true" width='76%'/>
+      <a href="https://llm-assets.readthedocs.io/en/latest/_images/chatglm2-6b.gif"><img src="https://llm-assets.readthedocs.io/en/latest/_images/chatglm2-6b.gif" width='30%'></a> <a href="https://llm-assets.readthedocs.io/en/latest/_images/llama-2-13b-chat.gif"><img src="https://llm-assets.readthedocs.io/en/latest/_images/llama-2-13b-chat.gif" width='30%' ></a> <a href="https://llm-assets.readthedocs.io/en/latest/_images/llm-15b5.gif"><img src="https://llm-assets.readthedocs.io/en/latest/_images/llm-15b5.gif" width='30%' ></a>
+      <img src="https://llm-assets.readthedocs.io/en/latest/_images/llm-models3.png" width='76%'>
 </p>
 
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -151,4 +156,12 @@ Choosing the right BigDL library
 ------
 
-.. [*] Performance varies by use, configuration and other factors. ``bigdl-llm`` may not optimize to the same degree for non-Intel products. Learn more at www.Intel.com/PerformanceIndex.
+.. raw:: html
+
+   <div>
+      <p>
+         <sup><a href="#ref-perf" id="footnote-perf">[1]</a>
+            Performance varies by use, configuration and other factors. <code><span>bigdl-llm</span></code> may not optimize to the same degree for non-Intel products. Learn more at <a href="https://www.Intel.com/PerformanceIndex">www.Intel.com/PerformanceIndex</a>.
+         </sup>
+      </p>
+   </div>