LLM: add initial FAQ page (#10055)

This commit is contained in:
binbin Deng 2024-02-01 09:43:39 +08:00 committed by GitHub
parent d2d3f6b091
commit 4b92235bdb
4 changed files with 37 additions and 0 deletions


@@ -64,6 +64,14 @@ subtrees:
# title: "Tips and Known Issues"
- file: doc/PythonAPI/LLM/index
  title: "API Reference"
- file: doc/LLM/Overview/FAQ/index
title: "FAQ"
subtrees:
- entries:
- file: doc/LLM/Overview/FAQ/general_info
title: "General Info & Concepts"
- file: doc/LLM/Overview/FAQ/resolve_error
title: "How to Resolve Errors"
- entries:
- file: doc/Orca/index - file: doc/Orca/index


@@ -0,0 +1,10 @@
# FAQ: General Info & Concepts
Refer to this section for general information about BigDL-LLM.
## BigDL-LLM Support
### How can I use GGUF models with BigDL-LLM?
BigDL-LLM supports running GGUF/AWQ/GPTQ models on both [CPU](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/CPU/HF-Transformers-AutoModels/Advanced-Quantizations) and [GPU](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations).
Please also refer to [here](https://github.com/intel-analytics/BigDL?tab=readme-ov-file#latest-update-) for our latest support.
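As a minimal sketch of what loading a GGUF checkpoint with `bigdl-llm` can look like (the `from_gguf` call and the file path below are assumptions drawn from the linked examples; check those examples for the authoritative usage):

```python
# Sketch: loading a quantized GGUF checkpoint with bigdl-llm.
# The from_gguf API and the file path are assumptions; see the linked
# CPU/GPU Advanced-Quantizations examples for the exact usage.
from bigdl.llm.transformers import AutoModelForCausalLM

# Load the GGUF file directly; the path below is a placeholder.
model, tokenizer = AutoModelForCausalLM.from_gguf("llama-2-7b-chat.Q4_0.gguf")
```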


@@ -0,0 +1,7 @@
Frequently Asked Questions (FAQ)
================================
Refer to the corresponding page below for answers to your questions:
* `General Info & Concepts <./general_info.html>`_
* `How to Resolve Errors <./resolve_error.html>`_


@@ -0,0 +1,12 @@
# FAQ: How to Resolve Errors
Refer to this section for common issues faced while using BigDL-LLM.
## Runtime Error
### PyTorch is not linked with support for xpu devices
1. Before running on Intel GPUs, please make sure you've set up your environment following the [installation instructions](https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html).
2. If you are using an older version of `bigdl-llm` (specifically, older than 2.5.0b20240104), you need to manually add `import intel_extension_for_pytorch as ipex` at the beginning of your code.
3. After optimizing the model with BigDL-LLM, you need to move the model to the GPU through `model = model.to('xpu')`.
4. If you have multiple GPUs, refer to [here](https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/KeyFeatures/multi_gpus_selection.html) for details about GPU selection.
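Taken together, the steps above can be sketched as follows (a minimal outline, assuming `bigdl-llm` and its GPU dependencies are installed; the model id is a placeholder):

```python
# Sketch of the recommended flow on Intel GPUs, following the steps above.
# Assumes bigdl-llm is installed per the GPU installation instructions.

# For bigdl-llm older than 2.5.0b20240104, uncomment the next line (step 2):
# import intel_extension_for_pytorch as ipex
from bigdl.llm.transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",  # placeholder model id
    load_in_4bit=True,                # let BigDL-LLM optimize the model
)
model = model.to("xpu")               # step 3: move the optimized model to the GPU
```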