diff --git a/docs/readthedocs/source/_templates/sidebar_quicklinks.html b/docs/readthedocs/source/_templates/sidebar_quicklinks.html
index b691dc62..6d701556 100644
--- a/docs/readthedocs/source/_templates/sidebar_quicklinks.html
+++ b/docs/readthedocs/source/_templates/sidebar_quicklinks.html
@@ -2,6 +2,18 @@
Quick Links
@@ -31,16 +43,20 @@
PyTorch Training Acceleration
- PyTorch Inference Quantization with ONNXRuntime Acceleration
+ PyTorch Inference Quantization
+ with ONNXRuntime Acceleration
- PyTorch Inference Acceleration using OpenVINO
+ PyTorch Inference Acceleration using
+ OpenVINO
- Tensorflow Training Acceleration
+ Tensorflow Training
+ Acceleration
- Tensorflow Quantization Acceleration
+ Tensorflow Quantization
+ Acceleration
@@ -67,7 +83,8 @@
-
- Basic Forecasting
+ Basic
+ Forecasting
-
Forecasting using AutoML
diff --git a/docs/readthedocs/source/_toc.yml b/docs/readthedocs/source/_toc.yml
index fdba5c9a..9cba0641 100644
--- a/docs/readthedocs/source/_toc.yml
+++ b/docs/readthedocs/source/_toc.yml
@@ -19,6 +19,46 @@ subtrees:
- file: doc/Application/powered-by
title: "Powered by"
+ - entries:
+ - file: doc/LLM/index
+ title: "LLM"
+ subtrees:
+ - entries:
+ - file: doc/LLM/Overview/llm
+ title: "LLM in 5 minutes"
+ - file: doc/LLM/Overview/install
+ title: "Installation"
+ subtrees:
+ - entries:
+ - file: doc/LLM/Overview/install_cpu
+ title: "CPU"
+ - file: doc/LLM/Overview/install_gpu
+ title: "GPU"
+ - file: doc/LLM/Overview/KeyFeatures/index
+ title: "Key Features"
+ subtrees:
+ - entries:
+ - file: doc/LLM/Overview/KeyFeatures/transformers_style_api
+ subtrees:
+ - entries:
+ - file: doc/LLM/Overview/KeyFeatures/hugging_face_format
+ - file: doc/LLM/Overview/KeyFeatures/native_format
+ - file: doc/LLM/Overview/KeyFeatures/optimize_model
+ - file: doc/LLM/Overview/KeyFeatures/langchain_api
+ # - file: doc/LLM/Overview/KeyFeatures/cli
+ - file: doc/LLM/Overview/KeyFeatures/gpu_supports
+ - file: doc/LLM/Overview/examples
+ title: "Examples"
+ subtrees:
+ - entries:
+ - file: doc/LLM/Overview/examples_cpu
+ title: "CPU"
+ - file: doc/LLM/Overview/examples_gpu
+ title: "GPU"
+ # - file: doc/LLM/Overview/known_issues
+ # title: "Tips and Known Issues"
+ - file: doc/PythonAPI/LLM/index
+ title: "API Reference"
- entries:
- file: doc/Orca/index
@@ -329,13 +369,6 @@ subtrees:
- file: doc/PPML/QuickStart/tpc-ds_with_sparksql_on_k8s
- file: doc/PPML/Overview/azure_ppml_occlum
- file: doc/PPML/Overview/secure_lightgbm_on_spark
- - entries:
- - file: doc/LLM/index
- title: "LLM"
- subtrees:
- - entries:
- - file: doc/PythonAPI/LLM/index
- title: "API Reference"
- entries:
- file: doc/UserGuide/contributor
diff --git a/docs/readthedocs/source/doc/LLM/Overview/KeyFeatures/cli.md b/docs/readthedocs/source/doc/LLM/Overview/KeyFeatures/cli.md
new file mode 100644
index 00000000..7c21f2f8
--- /dev/null
+++ b/docs/readthedocs/source/doc/LLM/Overview/KeyFeatures/cli.md
@@ -0,0 +1,40 @@
+# CLI (Command Line Interface) Tool
+
+```eval_rst
+
+.. note::
+
+   Currently ``bigdl-llm`` CLI supports *LLaMA* (e.g., vicuna), *GPT-NeoX* (e.g., redpajama), *BLOOM* (e.g., phoenix) and *GPT2* (e.g., starcoder) model architectures; for other models, you may use the ``transformers``-style or LangChain APIs.
+```
+
+## Convert Model
+
+You may convert the downloaded model into native INT4 format using `llm-convert`.
+
+```bash
+# convert PyTorch (fp16 or fp32) model;
+# llama/bloom/gptneox/starcoder model family is currently supported
+llm-convert "/path/to/model/" --model-format pth --model-family "bloom" --outfile "/path/to/output/"
+
+# convert GPTQ-4bit model
+# only llama model family is currently supported
+llm-convert "/path/to/model/" --model-format gptq --model-family "llama" --outfile "/path/to/output/"
+```
+
+## Run Model
+
+You may run the converted model using `llm-cli` or `llm-chat` (built on top of `main.cpp` in [`llama.cpp`](https://github.com/ggerganov/llama.cpp)).
+
+```bash
+# help
+# llama/bloom/gptneox/starcoder model family is currently supported
+llm-cli -x gptneox -h
+
+# text completion
+# llama/bloom/gptneox/starcoder model family is currently supported
+llm-cli -t 16 -x gptneox -m "/path/to/output/model.bin" -p 'Once upon a time,'
+
+# chat mode
+# llama/gptneox model family is currently supported
+llm-chat -m "/path/to/output/model.bin" -x llama
+```
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/LLM/Overview/KeyFeatures/gpu_supports.md b/docs/readthedocs/source/doc/LLM/Overview/KeyFeatures/gpu_supports.md
new file mode 100644
index 00000000..7cbd65c3
--- /dev/null
+++ b/docs/readthedocs/source/doc/LLM/Overview/KeyFeatures/gpu_supports.md
@@ -0,0 +1,47 @@
+# GPU Supports
+
+You may apply INT4 optimizations to any Hugging Face *Transformers* model on devices with Intel GPUs as follows:
+
+```python
+# import ipex
+import intel_extension_for_pytorch as ipex
+
+# load Hugging Face Transformers model with INT4 optimizations on Intel GPUs
+from bigdl.llm.transformers import AutoModelForCausalLM
+
+model = AutoModelForCausalLM.from_pretrained('/path/to/model/',
+ load_in_4bit=True,
+ optimize_model=False)
+model = model.to('xpu')
+```
+
+```eval_rst
+.. note::
+
+ You may apply INT8 optimizations as follows:
+
+ .. code-block:: python
+
+ model = AutoModelForCausalLM.from_pretrained('/path/to/model/',
+ load_in_low_bit="sym_int8",
+ optimize_model=False)
+ model = model.to('xpu')
+```
+
+After loading the Hugging Face *Transformers* model, you may easily run the optimized model as follows:
+
+```python
+# run the optimized model
+from transformers import AutoTokenizer
+
+tokenizer = AutoTokenizer.from_pretrained(model_path)
+input_ids = tokenizer.encode(input_str, ...).to('xpu')
+output_ids = model.generate(input_ids, ...)
+output = tokenizer.batch_decode(output_ids)
+```
+
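+For clarity, a minimal end-to-end sketch is shown below, putting the above pieces together on an Intel GPU; the model path, prompt and generation arguments are illustrative assumptions rather than part of the official example:
+
+```python
+# illustrative sketch: load a Hugging Face Transformers model with BigDL-LLM INT4
+# optimizations, move it to the Intel GPU ('xpu') and generate a short completion
+import intel_extension_for_pytorch as ipex  # registers the 'xpu' device
+from transformers import AutoTokenizer
+from bigdl.llm.transformers import AutoModelForCausalLM
+
+model_path = '/path/to/model/'  # hypothetical local model path
+
+model = AutoModelForCausalLM.from_pretrained(model_path,
+                                             load_in_4bit=True,
+                                             optimize_model=False)
+model = model.to('xpu')
+
+tokenizer = AutoTokenizer.from_pretrained(model_path)
+input_ids = tokenizer.encode('Once upon a time,', return_tensors='pt').to('xpu')
+output_ids = model.generate(input_ids, max_new_tokens=32)
+print(tokenizer.batch_decode(output_ids))
+```
+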
+```eval_rst
+.. seealso::
+
+ See the complete examples `here `_
+```
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/LLM/Overview/KeyFeatures/hugging_face_format.md b/docs/readthedocs/source/doc/LLM/Overview/KeyFeatures/hugging_face_format.md
new file mode 100644
index 00000000..11b1a040
--- /dev/null
+++ b/docs/readthedocs/source/doc/LLM/Overview/KeyFeatures/hugging_face_format.md
@@ -0,0 +1,54 @@
+# Hugging Face ``transformers`` Format
+
+## Load in Low Precision
+You may apply INT4 optimizations to any Hugging Face *Transformers* model as follows:
+
+```python
+# load Hugging Face Transformers model with INT4 optimizations
+from bigdl.llm.transformers import AutoModelForCausalLM
+
+model = AutoModelForCausalLM.from_pretrained('/path/to/model/', load_in_4bit=True)
+```
+
+After loading the Hugging Face *Transformers* model, you may easily run the optimized model as follows:
+
+```python
+# run the optimized model
+from transformers import AutoTokenizer
+
+tokenizer = AutoTokenizer.from_pretrained(model_path)
+input_ids = tokenizer.encode(input_str, ...)
+output_ids = model.generate(input_ids, ...)
+output = tokenizer.batch_decode(output_ids)
+```
+
+```eval_rst
+.. seealso::
+
+ See the complete examples `here `_
+
+.. note::
+
+   You may apply other low-bit optimizations (including INT8, INT5 and INT4) as follows:
+
+ .. code-block:: python
+
+ model = AutoModelForCausalLM.from_pretrained('/path/to/model/', load_in_low_bit="sym_int5")
+
+ See the complete example `here `_.
+```
+
+## Save & Load
+After the model is optimized using INT4 (or INT8/INT5), you may save and load the optimized model as follows:
+
+```python
+model.save_low_bit(model_path)
+
+new_model = AutoModelForCausalLM.load_low_bit(model_path)
+```
+
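+For instance, the re-loaded low-bit model can then be used for inference directly; a sketch is shown below, where the paths, prompt and generation arguments are illustrative placeholders:
+
+```python
+# illustrative sketch: reload the saved low-bit model and run inference with it
+from transformers import AutoTokenizer
+from bigdl.llm.transformers import AutoModelForCausalLM
+
+saved_path = '/path/to/saved/low-bit/model/'   # hypothetical save directory
+original_path = '/path/to/original/model/'     # hypothetical original model directory
+
+new_model = AutoModelForCausalLM.load_low_bit(saved_path)
+tokenizer = AutoTokenizer.from_pretrained(original_path)  # tokenizer still comes from the original model
+
+input_ids = tokenizer.encode('What is AI?', return_tensors='pt')
+output_ids = new_model.generate(input_ids, max_new_tokens=32)
+print(tokenizer.batch_decode(output_ids))
+```
+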
+```eval_rst
+.. seealso::
+
+ See the examples `here `_
+```
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/LLM/Overview/KeyFeatures/index.rst b/docs/readthedocs/source/doc/LLM/Overview/KeyFeatures/index.rst
new file mode 100644
index 00000000..4914196b
--- /dev/null
+++ b/docs/readthedocs/source/doc/LLM/Overview/KeyFeatures/index.rst
@@ -0,0 +1,19 @@
+BigDL-LLM Key Features
+================================
+
+You may run the LLMs using ``bigdl-llm`` through one of the following APIs:
+
+* |transformers_style_api|_
+
+ * |hugging_face_transformers_format|_
+ * `Native Format <./native_format.html>`_
+
+* `General PyTorch Model Supports <./optimize_model.html>`_
+* `LangChain API <./langchain_api.html>`_
+* `GPU Supports <./gpu_supports.html>`_
+
+.. |transformers_style_api| replace:: ``transformers``-style API
+.. _transformers_style_api: ./transformers_style_api.html
+
+.. |hugging_face_transformers_format| replace:: Hugging Face ``transformers`` Format
+.. _hugging_face_transformers_format: ./hugging_face_format.html
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/LLM/Overview/KeyFeatures/langchain_api.md b/docs/readthedocs/source/doc/LLM/Overview/KeyFeatures/langchain_api.md
new file mode 100644
index 00000000..c954402b
--- /dev/null
+++ b/docs/readthedocs/source/doc/LLM/Overview/KeyFeatures/langchain_api.md
@@ -0,0 +1,57 @@
+# LangChain API
+
+You may run the models using the LangChain API in `bigdl-llm`.
+
+## Using Hugging Face `transformers` INT4 Format
+
+You may run any Hugging Face *Transformers* model (with INT4 optimizations applied) using the LangChain API as follows:
+
+```python
+from bigdl.llm.langchain.llms import TransformersLLM
+from bigdl.llm.langchain.embeddings import TransformersEmbeddings
+from langchain.chains.question_answering import load_qa_chain
+
+embeddings = TransformersEmbeddings.from_model_id(model_id=model_path)
+bigdl_llm = TransformersLLM.from_model_id(model_id=model_path, ...)
+
+doc_chain = load_qa_chain(bigdl_llm, ...)
+output = doc_chain.run(...)
+```
+
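+Once loaded this way, `TransformersLLM` can be used like any other LangChain LLM. A minimal sketch with a plain `LLMChain` is shown below; the prompt template, model path and chain setup are illustrative assumptions rather than part of the official example:
+
+```python
+# illustrative sketch: use the BigDL-LLM TransformersLLM inside a standard LangChain LLMChain
+from langchain import LLMChain, PromptTemplate
+from bigdl.llm.langchain.llms import TransformersLLM
+
+template = "Q: {question}\nA:"  # hypothetical prompt template
+prompt = PromptTemplate(template=template, input_variables=["question"])
+
+llm = TransformersLLM.from_model_id(model_id='/path/to/model/')  # hypothetical local model path
+chain = LLMChain(llm=llm, prompt=prompt)
+
+print(chain.run("What is CPU?"))
+```
+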
+```eval_rst
+.. seealso::
+
+ See the examples `here `_.
+```
+
+## Using Native INT4 Format
+
+You may also convert Hugging Face *Transformers* models into native INT4 format, and then run the converted models using the LangChain API as follows.
+
+```eval_rst
+.. note::
+
+ * Currently only llama/bloom/gptneox/starcoder/chatglm model families are supported; for other models, you may use the Hugging Face ``transformers`` INT4 format as described `above <./langchain_api.html#using-hugging-face-transformers-int4-format>`_.
+
+ * You may choose the corresponding API developed for specific native models to load the converted model.
+```
+
+```python
+from bigdl.llm.langchain.llms import LlamaLLM
+from bigdl.llm.langchain.embeddings import LlamaEmbeddings
+from langchain.chains.question_answering import load_qa_chain
+
+# switch to ChatGLMEmbeddings/GptneoxEmbeddings/BloomEmbeddings/StarcoderEmbeddings to load other models
+embeddings = LlamaEmbeddings(model_path='/path/to/converted/model.bin')
+# switch to ChatGLMLLM/GptneoxLLM/BloomLLM/StarcoderLLM to load other models
+bigdl_llm = LlamaLLM(model_path='/path/to/converted/model.bin')
+
+doc_chain = load_qa_chain(bigdl_llm, ...)
+doc_chain.run(...)
+```
+
+```eval_rst
+.. seealso::
+
+ See the examples `here `_.
+```
diff --git a/docs/readthedocs/source/doc/LLM/Overview/KeyFeatures/native_format.md b/docs/readthedocs/source/doc/LLM/Overview/KeyFeatures/native_format.md
new file mode 100644
index 00000000..ce70a980
--- /dev/null
+++ b/docs/readthedocs/source/doc/LLM/Overview/KeyFeatures/native_format.md
@@ -0,0 +1,32 @@
+# Native Format
+
+You may also convert Hugging Face *Transformers* models into native INT4 format for maximum performance as follows.
+
+```eval_rst
+.. note::
+
+ Currently only llama/bloom/gptneox/starcoder/chatglm model families are supported; you may use the corresponding API to load the converted model. (For other models, you can use the Hugging Face ``transformers`` format as described `here <./hugging_face_format.html>`_).
+```
+
+```python
+# convert the model
+from bigdl.llm import llm_convert
+bigdl_llm_path = llm_convert(model='/path/to/model/',
+ outfile='/path/to/output/', outtype='int4', model_family="llama")
+
+# load the converted model
+# switch to ChatGLMForCausalLM/GptneoxForCausalLM/BloomForCausalLM/StarcoderForCausalLM to load other models
+from bigdl.llm.transformers import LlamaForCausalLM
+llm = LlamaForCausalLM.from_pretrained("/path/to/output/model.bin", native=True, ...)
+
+# run the converted model
+input_ids = llm.tokenize(prompt)
+output_ids = llm.generate(input_ids, ...)
+output = llm.batch_decode(output_ids)
+```
+
+```eval_rst
+.. seealso::
+
+ See the complete example `here `_
+```
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/LLM/Overview/KeyFeatures/optimize_model.md b/docs/readthedocs/source/doc/LLM/Overview/KeyFeatures/optimize_model.md
new file mode 100644
index 00000000..eeb7a3c1
--- /dev/null
+++ b/docs/readthedocs/source/doc/LLM/Overview/KeyFeatures/optimize_model.md
@@ -0,0 +1,22 @@
+# General PyTorch Model Supports
+
+You may apply BigDL-LLM optimizations to any PyTorch model for acceleration, not just Hugging Face *Transformers* models. With BigDL-LLM, PyTorch models (in FP16/BF16/FP32) can be optimized with low-bit quantization (supported precisions include INT4/INT5/INT8).
+
+You can easily enable BigDL-LLM INT4 optimizations on any PyTorch model as follows:
+
+```python
+# Create or load any Pytorch model
+model = ...
+
+# Add only two lines to enable BigDL-LLM INT4 optimizations on model
+from bigdl.llm import optimize_model
+model = optimize_model(model)
+```
+
+After optimizing the model, you may run it directly, with no API changes and lower inference latency.
+
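+As a concrete illustration, the sketch below applies `optimize_model` to a model loaded through the official Hugging Face `transformers` API; the model id, prompt and generation arguments are assumptions made for the example:
+
+```python
+# illustrative sketch: load a model with the official transformers API,
+# then enable BigDL-LLM INT4 optimizations with optimize_model
+import torch
+from transformers import AutoModelForCausalLM, AutoTokenizer
+from bigdl.llm import optimize_model
+
+model_id = 'openlm-research/open_llama_3b_v2'  # example model, assumed for illustration
+model = AutoModelForCausalLM.from_pretrained(model_id)
+model = optimize_model(model)  # enable BigDL-LLM INT4 optimizations
+
+tokenizer = AutoTokenizer.from_pretrained(model_id)
+with torch.inference_mode():
+    input_ids = tokenizer.encode('Q: What is CPU?\nA:', return_tensors='pt')
+    output = model.generate(input_ids, max_new_tokens=32)
+print(tokenizer.decode(output[0], skip_special_tokens=True))
+```
+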
+```eval_rst
+.. seealso::
+
+   See the examples for Hugging Face *Transformers* models `here `_. Examples for other general PyTorch models can be found `here `_.
+```
diff --git a/docs/readthedocs/source/doc/LLM/Overview/KeyFeatures/transformers_style_api.rst b/docs/readthedocs/source/doc/LLM/Overview/KeyFeatures/transformers_style_api.rst
new file mode 100644
index 00000000..2e4723a2
--- /dev/null
+++ b/docs/readthedocs/source/doc/LLM/Overview/KeyFeatures/transformers_style_api.rst
@@ -0,0 +1,10 @@
+``transformers``-style API
+================================
+
+You may run the LLMs using the ``transformers``-style API in ``bigdl-llm``.
+
+* |hugging_face_transformers_format|_
+* `Native Format <./native_format.html>`_
+
+.. |hugging_face_transformers_format| replace:: Hugging Face ``transformers`` Format
+.. _hugging_face_transformers_format: ./hugging_face_format.html
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/LLM/Overview/examples.rst b/docs/readthedocs/source/doc/LLM/Overview/examples.rst
new file mode 100644
index 00000000..e61f1c4b
--- /dev/null
+++ b/docs/readthedocs/source/doc/LLM/Overview/examples.rst
@@ -0,0 +1,9 @@
+BigDL-LLM Examples
+================================
+
+You can use BigDL-LLM to run any Hugging Face *Transformers* model with INT4 optimizations on either servers or laptops.
+
+Here, we provide examples to help you quickly get started using BigDL-LLM to run some popular open-source models in the community. Please refer to the appropriate guide based on your device:
+
+* `CPU <./examples_cpu.html>`_
+* `GPU <./examples_gpu.html>`_
diff --git a/docs/readthedocs/source/doc/LLM/Overview/examples_cpu.md b/docs/readthedocs/source/doc/LLM/Overview/examples_cpu.md
new file mode 100644
index 00000000..7fdb934b
--- /dev/null
+++ b/docs/readthedocs/source/doc/LLM/Overview/examples_cpu.md
@@ -0,0 +1,26 @@
+# BigDL-LLM Examples: CPU
+
+Here, we provide some examples of how you can apply BigDL-LLM INT4 optimizations to popular open-source models in the community.
+
+To run these examples, please first refer to the [CPU installation guide](./install_cpu.html) for how to install ``bigdl-llm``, along with its requirements and best practices for setting up your environment.
+
+The following models have been verified on either servers or laptops with Intel CPUs.
+
+| Model | Example |
+|-----------|----------------------------------------------------------|
+| LLaMA *(such as Vicuna, Guanaco, Koala, Baize, WizardLM, etc.)* | [link1](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/native_int4), [link2](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/transformers_int4/vicuna) |
+| LLaMA 2 | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/transformers_int4/llama2) |
+| MPT | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/transformers_int4/mpt) |
+| Falcon | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/transformers_int4/falcon) |
+| ChatGLM | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/transformers_int4/chatglm) |
+| ChatGLM2 | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/transformers_int4/chatglm2) |
+| Qwen | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/transformers_int4/qwen) |
+| MOSS | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/transformers_int4/moss) |
+| Baichuan | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/transformers_int4/baichuan) |
+| Dolly-v1 | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/transformers_int4/dolly_v1) |
+| Dolly-v2 | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/transformers_int4/dolly_v2) |
+| RedPajama | [link1](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/native_int4), [link2](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/transformers_int4/redpajama) |
+| Phoenix | [link1](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/native_int4), [link2](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/transformers_int4/phoenix) |
+| StarCoder | [link1](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/native_int4), [link2](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/transformers_int4/starcoder) |
+| InternLM | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/transformers_int4/internlm) |
+| Whisper | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/transformers/transformers_int4/whisper) |
diff --git a/docs/readthedocs/source/doc/LLM/Overview/examples_gpu.md b/docs/readthedocs/source/doc/LLM/Overview/examples_gpu.md
new file mode 100644
index 00000000..48b83b59
--- /dev/null
+++ b/docs/readthedocs/source/doc/LLM/Overview/examples_gpu.md
@@ -0,0 +1,26 @@
+# BigDL-LLM Examples: GPU
+
+Here, we provide some examples of how you can apply BigDL-LLM INT4 optimizations to popular open-source models in the community.
+
+To run these examples, please first refer to the [GPU installation guide](./install_gpu.html) for how to install ``bigdl-llm``, along with its requirements and best practices for setting up your environment.
+
+```eval_rst
+.. important::
+
+   Only Linux is supported for now, and Ubuntu 22.04 is preferred.
+```
+
+The following models have been verified on either servers or laptops with Intel GPUs.
+
+| Model | Example |
+|-----------|----------------------------------------------------------|
+| LLaMA 2 | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/gpu/llama2) |
+| MPT | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/gpu/mpt) |
+| Falcon | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/gpu/falcon) |
+| ChatGLM2 | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/gpu/chatglm2) |
+| Qwen | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/gpu/qwen) |
+| Baichuan | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/gpu/baichuan) |
+| StarCoder | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/gpu/starcoder) |
+| InternLM | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/gpu/internlm) |
+| Whisper | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/gpu/whisper) |
+| GPT-J | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/gpu/gpt-j) |
diff --git a/docs/readthedocs/source/doc/LLM/Overview/install.rst b/docs/readthedocs/source/doc/LLM/Overview/install.rst
new file mode 100644
index 00000000..1ec5ea2d
--- /dev/null
+++ b/docs/readthedocs/source/doc/LLM/Overview/install.rst
@@ -0,0 +1,7 @@
+BigDL-LLM Installation
+================================
+
+Here, we provide instructions on how to install ``bigdl-llm`` and best practices for setting up your environment. Please refer to the appropriate guide based on your device:
+
+* `CPU <./install_cpu.html>`_
+* `GPU <./install_gpu.html>`_
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/LLM/Overview/install_cpu.md b/docs/readthedocs/source/doc/LLM/Overview/install_cpu.md
new file mode 100644
index 00000000..763fd09a
--- /dev/null
+++ b/docs/readthedocs/source/doc/LLM/Overview/install_cpu.md
@@ -0,0 +1,71 @@
+# BigDL-LLM Installation: CPU
+
+## Quick Installation
+
+Install BigDL-LLM for CPU support using pip:
+
+```bash
+pip install bigdl-llm[all]
+```
+
+```eval_rst
+.. note::
+
+   The ``all`` option will install all the dependencies required for common LLM application development.
+
+.. important::
+
+   ``bigdl-llm`` is tested with Python 3.9, which is the recommended version.
+```
+
+## Recommended Requirements
+
+Here we list the recommended hardware and OS for a smooth experience with BigDL-LLM optimizations on CPU:
+
+* Hardware
+
+  * PCs equipped with 12th Gen Intel® Core™ processors or higher, and at least 16GB of RAM
+  * Servers equipped with Intel® Xeon® processors, and at least 32GB of RAM
+
+* Operating System
+
+ * Ubuntu 20.04 or later
+ * CentOS 7 or later
+ * Windows 10/11, with or without WSL
+
+## Environment Setup
+
+For optimal performance when running LLMs with BigDL-LLM optimizations on Intel CPUs, here are some best practices for setting up your environment:
+
+First, we recommend using [Conda](https://docs.conda.io/en/latest/miniconda.html) to create a Python 3.9 environment:
+
+```bash
+conda create -n llm python=3.9
+conda activate llm
+
+pip install bigdl-llm[all] # install bigdl-llm for CPU with 'all' option
+```
+
+Then, to run an LLM with BigDL-LLM optimizations (taking `example.py` as an example):
+
+```eval_rst
+.. tabs::
+
+ .. tab:: Client
+
+ It is recommended to run directly with full utilization of all CPU cores:
+
+ .. code-block:: bash
+
+ python example.py
+
+ .. tab:: Server
+
+ It is recommended to run with all the physical cores of a single socket:
+
+ .. code-block:: bash
+
+ # e.g. for a server with 48 cores per socket
+ export OMP_NUM_THREADS=48
+ numactl -C 0-47 -m 0 python example.py
+```
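+
+To confirm that the installation works, you can run the quick example from [BigDL-LLM in 5 minutes](./llm.html), sketched below with the same `open_llama_3b_v2` model (downloading the model may take a while):
+
+```python
+# quick smoke test: load open_llama_3b_v2 with BigDL-LLM INT4 optimizations and generate
+from bigdl.llm.transformers import AutoModelForCausalLM
+from transformers import LlamaTokenizer
+
+model = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_3b_v2",
+                                             load_in_4bit=True)
+tokenizer = LlamaTokenizer.from_pretrained("openlm-research/open_llama_3b_v2")
+
+input_ids = tokenizer.encode("Q: What is CPU?\nA:", return_tensors="pt")
+output = model.generate(input_ids, max_new_tokens=32)
+print(tokenizer.decode(output[0], skip_special_tokens=True))
+```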
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/LLM/Overview/install_gpu.md b/docs/readthedocs/source/doc/LLM/Overview/install_gpu.md
new file mode 100644
index 00000000..5429c150
--- /dev/null
+++ b/docs/readthedocs/source/doc/LLM/Overview/install_gpu.md
@@ -0,0 +1,65 @@
+# BigDL-LLM Installation: GPU
+
+## Quick Installation
+
+Install BigDL-LLM for GPU support using pip:
+
+```bash
+pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu
+```
+
+```eval_rst
+.. note::
+
+   The above command will install ``intel_extension_for_pytorch==2.0.110+xpu`` by default. You can install a specific ``ipex``/``torch`` version to suit your needs.
+
+.. important::
+
+   ``bigdl-llm`` is tested with Python 3.9, which is the recommended version.
+```
+
+## Recommended Requirements
+
+BigDL-LLM GPU support has been verified on:
+
+* Intel Arc™ A-Series Graphics
+* Intel Data Center GPU Flex Series
+
+To apply Intel GPU acceleration, several steps of tool installation and environment preparation are required:
+
+* Step 1: only Linux is supported for now, and Ubuntu 22.04 is preferred.
+* Step 2: please refer to our [driver installation](https://dgpu-docs.intel.com/driver/installation.html) guide for general purpose GPU capabilities.
+ ```eval_rst
+ .. note::
+
+     IPEX 2.0.110+xpu requires the Intel GPU driver version to be `Stable 647.21 `_.
+ ```
+* Step 3: you also need to download and install the [Intel® oneAPI Base Toolkit](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit-download.html). OneMKL and the DPC++ compiler are required, while other components are optional.
+ ```eval_rst
+ .. note::
+
+     IPEX 2.0.110+xpu requires the Intel® oneAPI Base Toolkit version to be 2023.2.0 or later.
+ ```
+
+## Environment Setup
+
+For optimal performance when running LLMs with BigDL-LLM optimizations on Intel GPUs, here are some best practices for setting up your environment:
+
+First, we recommend using [Conda](https://docs.conda.io/en/latest/miniconda.html) to create a Python 3.9 environment:
+
+```bash
+conda create -n llm python=3.9
+conda activate llm
+
+pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu # install bigdl-llm for GPU
+```
+
+Then, to run an LLM with BigDL-LLM optimizations, the following environment variables are recommended:
+
+```bash
+# configures OneAPI environment variables
+source /opt/intel/oneapi/setvars.sh
+
+export USE_XETLA=OFF
+export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
+```
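+
+As a quick sanity check (not part of the official setup steps), a minimal sketch to verify that PyTorch can see the Intel GPU is shown below:
+
+```python
+# illustrative sanity check: confirm that the XPU device is visible to PyTorch
+import torch
+import intel_extension_for_pytorch as ipex  # noqa: F401  (registers the 'xpu' device)
+
+print(torch.xpu.is_available())       # expected to print True on a correctly configured system
+print(torch.xpu.get_device_name(0))   # name of the first Intel GPU
+```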
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/LLM/Overview/known_issues.md b/docs/readthedocs/source/doc/LLM/Overview/known_issues.md
new file mode 100644
index 00000000..23a521a5
--- /dev/null
+++ b/docs/readthedocs/source/doc/LLM/Overview/known_issues.md
@@ -0,0 +1 @@
+# BigDL-LLM Known Issues
\ No newline at end of file
diff --git a/docs/readthedocs/source/doc/LLM/Overview/llm.md b/docs/readthedocs/source/doc/LLM/Overview/llm.md
new file mode 100644
index 00000000..a13605a5
--- /dev/null
+++ b/docs/readthedocs/source/doc/LLM/Overview/llm.md
@@ -0,0 +1,68 @@
+# BigDL-LLM in 5 minutes
+
+You can use BigDL-LLM to run any [*Hugging Face Transformers*](https://huggingface.co/docs/transformers/index) PyTorch model. It automatically optimizes and accelerates LLMs using low-precision (INT4/INT5/INT8) techniques, modern hardware accelerations, and the latest software optimizations.
+
+Hugging Face transformers-based applications can run on BigDL-LLM with a one-line code change, and you'll immediately observe a significant speedup[1].
+
+Here, let's take a relatively small LLM, i.e. [open_llama_3b_v2](https://huggingface.co/openlm-research/open_llama_3b_v2), and BigDL-LLM INT4 optimizations as an example.
+
+## Load a Pretrained Model
+
+Simply use the one-line `transformers`-style API in `bigdl-llm` to load `open_llama_3b_v2` with INT4 optimization (by specifying `load_in_4bit=True`) as follows:
+
+```python
+from bigdl.llm.transformers import AutoModelForCausalLM
+
+model = AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path="openlm-research/open_llama_3b_v2",
+ load_in_4bit=True)
+```
+
+```eval_rst
+.. tip::
+
+   `open_llama_3b_v2 `_ is a pretrained large language model hosted on Hugging Face. ``openlm-research/open_llama_3b_v2`` is its Hugging Face model id. ``from_pretrained`` will automatically download the model from Hugging Face to a local cache path (e.g. ``~/.cache/huggingface``), load the model, and convert it to the ``bigdl-llm`` INT4 format.
+
+   It may take a long time to download the model using the API. You can also download the model yourself and set ``pretrained_model_name_or_path`` to the local path of the downloaded model. This way, ``from_pretrained`` will load and convert the model directly from the local path without downloading it again.
+```
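+
+For instance, if you have already downloaded the model to a local folder, the call would look like the following (the directory name is a hypothetical placeholder):
+
+```python
+# load from a hypothetical local directory instead of the Hugging Face model id
+model = AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path="/path/to/open_llama_3b_v2",
+                                             load_in_4bit=True)
+```
+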
+## Load Tokenizer
+
+You also need a tokenizer for inference. Just use the official `transformers` API to load `LlamaTokenizer`:
+
+```python
+from transformers import LlamaTokenizer
+
+tokenizer = LlamaTokenizer.from_pretrained(pretrained_model_name_or_path="openlm-research/open_llama_3b_v2")
+```
+
+## Run LLM
+
+Now you can do model inference exactly the same way as with the official `transformers` API:
+
+```python
+import torch
+
+with torch.inference_mode():
+ prompt = 'Q: What is CPU?\nA:'
+
+ # tokenize the input prompt from string to token ids
+ input_ids = tokenizer.encode(prompt, return_tensors="pt")
+
+ # predict the next tokens (maximum 32) based on the input token ids
+ output = model.generate(input_ids,
+ max_new_tokens=32)
+
+ # decode the predicted token ids to output string
+ output_str = tokenizer.decode(output[0], skip_special_tokens=True)
+
+ print(output_str)
+```
+
+------
+
+
+
+
+ Performance varies by use, configuration and other factors. bigdl-llm may not optimize to the same degree for non-Intel products. Learn more at www.Intel.com/PerformanceIndex.
+
+
+
diff --git a/docs/readthedocs/source/doc/LLM/index.rst b/docs/readthedocs/source/doc/LLM/index.rst
index ca9e7491..23e7a759 100644
--- a/docs/readthedocs/source/doc/LLM/index.rst
+++ b/docs/readthedocs/source/doc/LLM/index.rst
@@ -1,14 +1,53 @@
BigDL-LLM
=========================
+.. raw:: html
-BigDL-LLM is a library for running LLM (Large Language Models) on your Intel laptop using INT4 with very low latency (for any Hugging Face Transformers model).
+
+ BigDL-LLM is a library for running LLM (large language model) on your Intel laptop or GPU using INT4 with very low latency [1] (for any PyTorch model).
+
-------
.. grid:: 1 2 2 2
:gutter: 2
+ .. grid-item-card::
+
+ **Get Started**
+ ^^^
+
+      Documents in this section help you get started quickly with BigDL-LLM.
+
+ +++
+      :bdg-link:`BigDL-LLM in 5 minutes <./Overview/llm.html>` |
+ :bdg-link:`Installation <./Overview/install.html>`
+
+ .. grid-item-card::
+
+ **Key Features Guide**
+ ^^^
+
+      Each guide in this section provides you with in-depth information, concepts and knowledge about BigDL-LLM key features.
+
+ +++
+
+ :bdg-link:`transformers-style <./Overview/KeyFeatures/transformers_style_api.html>` |
+ :bdg-link:`Optimize Model <./Overview/KeyFeatures/optimize_model.html>` |
+ :bdg-link:`LangChain <./Overview/KeyFeatures/langchain_api.html>` |
+ :bdg-link:`GPU <./Overview/KeyFeatures/gpu_supports.html>`
+
+ .. grid-item-card::
+
+ **Examples & Tutorials**
+ ^^^
+
+ Examples contain scripts to help you quickly get started using BigDL-LLM to run some popular open-source models in the community.
+
+ +++
+
+ :bdg-link:`Examples <./Overview/examples.html>`
+
.. grid-item-card::
**API Document**
@@ -20,7 +59,19 @@ BigDL-LLM is a library for running LLM (Large Language Models) on your Intel lap
:bdg-link:`API Document <../PythonAPI/LLM/index.html>`
+------
+
+.. raw:: html
+
+
+
+
+ Performance varies by use, configuration and other factors. bigdl-llm may not optimize to the same degree for non-Intel products. Learn more at www.Intel.com/PerformanceIndex.
+
+
+
+
.. toctree::
:hidden:
- BigDL-LLM Document
\ No newline at end of file
+ BigDL-LLM Document
diff --git a/docs/readthedocs/source/doc/PythonAPI/LLM/transformers.rst b/docs/readthedocs/source/doc/PythonAPI/LLM/transformers.rst
index 519b2d27..62b05ac7 100644
--- a/docs/readthedocs/source/doc/PythonAPI/LLM/transformers.rst
+++ b/docs/readthedocs/source/doc/PythonAPI/LLM/transformers.rst
@@ -1,4 +1,4 @@
-BigDL-LLM Transformers API
+BigDL-LLM `transformers`-style API
=====================
llm.transformers.model
diff --git a/docs/readthedocs/source/index.rst b/docs/readthedocs/source/index.rst
index fc5a9549..3fb1f454 100644
--- a/docs/readthedocs/source/index.rst
+++ b/docs/readthedocs/source/index.rst
@@ -10,7 +10,12 @@ The BigDL Project
---------------------------------
BigDL-LLM: low-Bit LLM library
---------------------------------
-`bigdl-llm `_ is a library for running **LLM** (large language model) on your Intel **laptop** or **GPU** using INT4 with very low latency [*]_ (for any **PyTorch** model).
+
+.. raw:: html
+
+
+ bigdl-llm is a library for running LLM (large language model) on your Intel laptop or GPU using INT4 with very low latency [1] (for any PyTorch model).
+
.. note::
@@ -33,8 +38,8 @@ See the **optimized performance** of ``chatglm2-6b``, ``llama-2-13b-chat``, and
.. raw:: html
-
-
+
+
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -151,4 +156,12 @@ Choosing the right BigDL library
------
-.. [*] Performance varies by use, configuration and other factors. ``bigdl-llm`` may not optimize to the same degree for non-Intel products. Learn more at www.Intel.com/PerformanceIndex.
+.. raw:: html
+
+
+
+
+ Performance varies by use, configuration and other factors. bigdl-llm may not optimize to the same degree for non-Intel products. Learn more at www.Intel.com/PerformanceIndex.
+
+
+