[LLM Doc] Restructure (#10322)

* Add quick link guide to sidebar

* Add QuickStart to TOC

* Update quick links in main page

* Hide some sections in More for the top nav bar

* Restructure FAQ sections

* Small fix
Yuwen Hu 2024-03-05 14:35:55 +08:00 committed by GitHub
parent 1e6f0c6f1a
commit 566e9bbb36
8 changed files with 46 additions and 121 deletions


@@ -3,111 +3,40 @@
 <div class="navbar-nav">
     <ul class="nav">
         <li>
-            <strong class="bigdl-quicklinks-section-title">LLM QuickStart</strong>
-            <input id="quicklink-cluster-llm" type="checkbox" class="toctree-checkbox" />
-            <label for="quicklink-cluster-llm" class="toctree-toggle">
+            <strong class="bigdl-quicklinks-section-title">BigDL-LLM Quickstart</strong>
+            <input id="quicklink-cluster-llm-quickstart" type="checkbox" class="toctree-checkbox" />
+            <label for="quicklink-cluster-llm-quickstart" class="toctree-toggle">
                 <i class="fa-solid fa-chevron-down"></i>
             </label>
             <ul class="nav bigdl-quicklinks-section-nav">
                 <li>
-                    <a href="doc/LLM/Overview/llm.html">BigDL-LLM in 5 minutes</a>
+                    <a href="doc/LLM/Quickstart/install_windows_gpu.html">Install BigDL-LLM on Windows with Intel GPU</a>
+                </li>
+                <li>
+                    <a href="doc/LLM/Quickstart/webui_quickstart.html">Use Text Generation WebUI on Windows with Intel GPU</a>
                 </li>
             </ul>
         </li>
         <li>
-            <strong class="bigdl-quicklinks-section-title">Orca QuickStart</strong>
-            <input id="quicklink-cluster-orca" type="checkbox" class="toctree-checkbox" />
-            <label for="quicklink-cluster-orca" class="toctree-toggle">
+            <strong class="bigdl-quicklinks-section-title">BigDL-LLM Installation</strong>
+            <input id="quicklink-cluster-llm-installation" type="checkbox" class="toctree-checkbox" />
+            <label for="quicklink-cluster-llm-installation" class="toctree-toggle">
                 <i class="fa-solid fa-chevron-down"></i>
             </label>
             <ul class="bigdl-quicklinks-section-nav">
                 <li>
-                    <a href="doc/Orca/Howto/tf2keras-quickstart.html">Scale TensorFlow 2 Applications</a>
+                    <a href="doc/LLM/Overview/install_cpu.html">CPU</a>
                 </li>
                 <li>
-                    <a href="doc/Orca/Howto/pytorch-quickstart.html">Scale PyTorch Applications</a>
+                    <a href="doc/LLM/Overview/install_gpu.html">GPU</a>
                 </li>
-                <li>
-                    <a href="doc/Orca/Howto/ray-quickstart.html">Run Ray programs on Big Data clusters</a>
-                </li>
             </ul>
         </li>
         <li>
-            <strong class="bigdl-quicklinks-section-title">Nano QuickStart</strong>
-            <input id="quicklink-cluster-nano" type="checkbox" class="toctree-checkbox" />
-            <label for="quicklink-cluster-nano" class="toctree-toggle">
-                <i class="fa-solid fa-chevron-down"></i>
-            </label>
-            <ul class="nav bigdl-quicklinks-section-nav">
-                <li>
-                    <a href="doc/Nano/QuickStart/pytorch_train_quickstart.html">PyTorch Training Acceleration</a>
-                </li>
-                <li>
-                    <a href="doc/Nano/QuickStart/pytorch_quantization_inc_onnx.html">PyTorch Inference Quantization with ONNXRuntime Acceleration</a>
-                </li>
-                <li>
-                    <a href="doc/Nano/QuickStart/pytorch_openvino.html">PyTorch Inference Acceleration using OpenVINO</a>
-                </li>
-                <li>
-                    <a href="doc/Nano/QuickStart/tensorflow_train_quickstart.html">Tensorflow Training Acceleration</a>
-                </li>
-                <li>
-                    <a href="doc/Nano/QuickStart/tensorflow_quantization_quickstart.html">Tensorflow Quantization Acceleration</a>
-                </li>
-            </ul>
-        </li>
-        <li>
-            <strong class="bigdl-quicklinks-section-title">DLlib QuickStart</strong>
-            <input id="quicklink-cluster-dllib" type="checkbox" class="toctree-checkbox" />
-            <label for="quicklink-cluster-dllib" class="toctree-toggle">
-                <i class="fa-solid fa-chevron-down"></i>
-            </label>
-            <ul class="nav bigdl-quicklinks-section-nav">
-                <li>
-                    <a href="doc/DLlib/QuickStart/python-getting-started.html">Python QuickStart</a>
-                </li>
-                <li>
-                    <a href="doc/DLlib/QuickStart/scala-getting-started.html">Scala QuickStart</a>
-                </li>
-            </ul>
-        </li>
-        <li>
-            <strong class="bigdl-quicklinks-section-title">Chronos QuickStart</strong>
-            <input id="quicklink-cluster-chronos" type="checkbox" class="toctree-checkbox" />
-            <label for="quicklink-cluster-chronos" class="toctree-toggle">
-                <i class="fa-solid fa-chevron-down"></i>
-            </label>
-            <ul class="nav bigdl-quicklinks-section-nav">
-                <li>
-                    <a href="doc/Chronos/QuickStart/chronos-tsdataset-forecaster-quickstart.html">Basic Forecasting</a>
-                </li>
-                <li>
-                    <a href="doc/Chronos/QuickStart/chronos-autotsest-quickstart.html">Forecasting using AutoML</a>
-                </li>
-                <li>
-                    <a href="doc/Chronos/QuickStart/chronos-anomaly-detector.html">Anomaly Detection</a>
-                </li>
-            </ul>
-        </li>
-        <li>
-            <strong class="bigdl-quicklinks-section-title">PPML QuickStart</strong>
-            <input id="quicklink-cluster-ppml" type="checkbox" class="toctree-checkbox" />
-            <label for="quicklink-cluster-ppml" class="toctree-toggle">
-                <i class="fa-solid fa-chevron-down"></i>
-            </label>
-            <ul class="nav bigdl-quicklinks-section-nav">
-                <li>
-                    <a href="doc/PPML/Overview/quicktour.html">Hello World Example</a>
-                </li>
-                <li>
-                    <a href="doc/PPML/QuickStart/end-to-end.html">End-to-End Example</a>
-                </li>
-            </ul>
+            <a href="doc/LLM/Overview/FAQ/faq.html">
+                <strong class="bigdl-quicklinks-section-title">BigDL-LLM FAQ</strong>
+            </a>
         </li>
     </ul>
 </div>


@@ -34,6 +34,12 @@ subtrees:
             title: "CPU"
           - file: doc/LLM/Overview/install_gpu
             title: "GPU"
+      - file: doc/LLM/Quickstart/index
+        title: "Quickstart"
+        subtrees:
+          - entries:
+              - file: doc/LLM/Quickstart/install_windows_gpu
+              - file: doc/LLM/Quickstart/webui_quickstart
       - file: doc/LLM/Overview/KeyFeatures/index
         title: "Key Features"
         subtrees:
@@ -64,14 +70,8 @@ subtrees:
       #   title: "Tips and Known Issues"
       - file: doc/PythonAPI/LLM/index
         title: "API Reference"
-      - file: doc/LLM/Overview/FAQ/index
+      - file: doc/LLM/Overview/FAQ/faq
         title: "FAQ"
-        subtrees:
-          - entries:
-              - file: doc/LLM/Overview/FAQ/general_info
-                title: "General Info & Concepts"
-              - file: doc/LLM/Overview/FAQ/resolve_error
-                title: "How to Resolve Errors"
   - entries:
       - file: doc/Orca/index


@@ -37,7 +37,7 @@ sys.path.insert(0, os.path.abspath("../../../python/llm/src/"))
 # -- Project information -----------------------------------------------------
 html_theme = "pydata_sphinx_theme"
 html_theme_options = {
-    "header_links_before_dropdown": 9,
+    "header_links_before_dropdown": 3,
     "icon_links": [
         {
             "name": "GitHub Repository for BigDL",


@@ -1,8 +1,13 @@
-# FAQ: How to Resolve Errors
+# Frequently Asked Questions (FAQ)
 
-Refer to this section for common issues faced while using BigDL-LLM.
+## General Info & Concepts
 
-## Installation Error
+### GGUF format usage with BigDL-LLM?
+
+BigDL-LLM supports running GGUF/AWQ/GPTQ models on both [CPU](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/CPU/HF-Transformers-AutoModels/Advanced-Quantizations) and [GPU](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations).
+
+Please also refer to [here](https://github.com/intel-analytics/BigDL?tab=readme-ov-file#latest-update-) for our latest support.
+
+## How to Resolve Errors
 
 ### Fail to install `bigdl-llm` through `pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu`
@@ -10,9 +15,6 @@ You could try to install BigDL-LLM dependencies for Intel XPU from source archiv
 - For Windows system, refer to [here](https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html#install-bigdl-llm-from-wheel) for the steps.
 - For Linux system, refer to [here](https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html#id3) for the steps.
 
-## Runtime Error
-
 ### PyTorch is not linked with support for xpu devices
 
 1. Before running on Intel GPUs, please make sure you've prepared the environment following the [installation instructions](https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html).
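As context for step 1 above, a correctly prepared environment can be sanity-checked from Python before touching any GPU code. This is a minimal sketch, not part of BigDL-LLM itself; the helper name `xpu_stack_present` is ours:

```python
import importlib.util

def xpu_stack_present() -> bool:
    """True only if both torch and intel_extension_for_pytorch are importable."""
    return (
        importlib.util.find_spec("torch") is not None
        and importlib.util.find_spec("intel_extension_for_pytorch") is not None
    )

if not xpu_stack_present():
    # Missing either package means PyTorch cannot register the 'xpu' device.
    print("XPU stack incomplete: follow the GPU installation instructions first")
```

Checking `find_spec` avoids importing the heavyweight modules just to discover they are absent.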
@@ -21,7 +23,7 @@ You could try to install BigDL-LLM dependencies for Intel XPU from source archiv
 4. If you have multiple GPUs, you could refer to [here](https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/KeyFeatures/multi_gpus_selection.html) for details about GPU selection.
 5. If you do inference using the optimized model on Intel GPUs, you also need to set `to('xpu')` for input tensors.
 
-### import `intel_extension_for_pytorch` error on Windows GPU
+### Import `intel_extension_for_pytorch` error on Windows GPU
 
 Please refer to [here](https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html#error-loading-intel-extension-for-pytorch) for a detailed guide. We list the possible missing requirements in your environment which could lead to this error.
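For the GPU-selection tip (step 4 above), one common approach is to pin a device through a oneAPI environment variable before any framework import. This is a hedged sketch under the assumption that your system uses Level Zero; the value `level_zero:0` (first GPU) is illustrative only:

```python
import os

# Must be set before importing torch / intel_extension_for_pytorch,
# since the oneAPI runtime reads it at initialization time.
os.environ["ONEAPI_DEVICE_SELECTOR"] = "level_zero:0"

# After loading an optimized model on the GPU, the input tensors must be
# moved there too, e.g. input_ids = input_ids.to('xpu').
print(os.environ["ONEAPI_DEVICE_SELECTOR"])
```

Setting the variable in the launching shell instead of the script works equally well and keeps the script portable.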
@@ -50,7 +52,7 @@ This error is caused by out of GPU memory. Some possible solutions to decrease G
 2. You could try `model = model.float16()` or `model = model.bfloat16()` before moving the model to GPU to use less GPU memory.
 3. You could try setting `cpu_embedding=True` when calling `from_pretrained` of an AutoClass or the `optimize_model` function.
 
-### failed to enable AMX
+### Failed to enable AMX
 
 You could use `export BIGDL_LLM_AMX_DISABLED=1` to disable AMX manually and solve this error.
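The AMX workaround above is a single environment variable change; a minimal shell sketch (the script name in the comment is only a placeholder):

```shell
# Disable AMX for BigDL-LLM in the current shell session
export BIGDL_LLM_AMX_DISABLED=1

# Subsequent runs launched from this shell skip AMX, e.g.:
#   python generate.py ...
echo "$BIGDL_LLM_AMX_DISABLED"
```

Put the `export` in your shell profile only if you want the setting to persist across sessions.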
@@ -58,7 +60,7 @@ You could use `export BIGDL_LLM_AMX_DISABLED=1` to disable AMX manually and solv
 You may encounter this error during finetuning on multiple GPUs. Please try `sudo apt install level-zero-dev` to fix it.
 
-### random and unreadable output of Gemma-7b-it on Arc770 ubuntu 22.04 due to driver and OneAPI missmatching.
+### Random and unreadable output of Gemma-7b-it on Arc770 Ubuntu 22.04 due to driver and oneAPI mismatch
 
 If the driver and oneAPI versions mismatch, it will lead to errors when BigDL uses XMX (short prompts) for speeding up.
 The output of `What's AI?` may look like below:


@@ -1,10 +0,0 @@
-# FAQ: General Info & Concepts
-
-Refer to this section for general information about BigDL-LLM.
-
-## BigDL-LLM Support
-
-### GGUF format usage with BigDL-LLM?
-
-BigDL-LLM supports running GGUF/AWQ/GPTQ models on both [CPU](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/CPU/HF-Transformers-AutoModels/Advanced-Quantizations) and [GPU](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations).
-Please also refer to [here](https://github.com/intel-analytics/BigDL?tab=readme-ov-file#latest-update-) for our latest support.


@@ -1,7 +0,0 @@
-Frequently Asked Questions (FAQ)
-================================
-
-You could refer to corresponding page to find solutions of your requirement:
-
-* `General Info & Concepts <./general_info.html>`_
-* `How to Resolve Errors <./resolve_error.html>`_


@@ -0,0 +1,11 @@
+BigDL-LLM Quickstart
+================================
+
+.. note::
+
+   We are adding more Quickstart guides.
+
+This section includes efficient guides to show you how to:
+
+* `Install BigDL-LLM on Windows with Intel GPU <./install_windows_gpu.html>`_
+* `Use Text Generation WebUI on Windows with Intel GPU <./webui_quickstart.html>`_


@@ -56,7 +56,7 @@ It applies to Intel Core Ultra and Core 12 - 14 gen integrated GPUs (iGPUs), as
   ```bash
   pip install --pre --upgrade bigdl-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/
   ```
-  > Note: If yuu encounter network issues while installing IPEX, refer to [this guide](https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html#install-bigdl-llm-from-wheel) for troubleshooting advice.
+  > Note: If you encounter network issues while installing IPEX, refer to [this guide](https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html#install-bigdl-llm-from-wheel) for troubleshooting advice.
 
 * You can verify if bigdl-llm is successfully installed by simply importing a few classes from the library. For example, in the Python interactive shell, execute the following import command:
 
   ```python