LLM: add initial FAQ page (#10055)
commit 4b92235bdb
parent d2d3f6b091
4 changed files with 37 additions and 0 deletions
@@ -64,6 +64,14 @@ subtrees:
             # title: "Tips and Known Issues"
             - file: doc/PythonAPI/LLM/index
               title: "API Reference"
+            - file: doc/LLM/Overview/FAQ/index
+              title: "FAQ"
+              subtrees:
+                - entries:
+                    - file: doc/LLM/Overview/FAQ/general_info
+                      title: "General Info & Concepts"
+                    - file: doc/LLM/Overview/FAQ/resolve_error
+                      title: "How to Resolve Errors"
 
     - entries:
         - file: doc/Orca/index
10  docs/readthedocs/source/doc/LLM/Overview/FAQ/general_info.md  Normal file

@@ -0,0 +1,10 @@
# FAQ: General Info & Concepts

Refer to this section for general information about BigDL-LLM.

## BigDL-LLM Support

### Does BigDL-LLM support GGUF format models?

Yes. BigDL-LLM supports running GGUF/AWQ/GPTQ models on both [CPU](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/CPU/HF-Transformers-AutoModels/Advanced-Quantizations) and [GPU](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations).

Please also refer to [here](https://github.com/intel-analytics/BigDL?tab=readme-ov-file#latest-update-) for our latest support.
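As a rough illustration of what the GGUF support above looks like in practice — a sketch only, not official documentation: the `from_gguf` entry point and its return value are assumptions based on the linked Advanced-Quantizations examples, and the model filename is hypothetical:

```python
# Sketch: assumes bigdl-llm is installed and a local GGUF checkpoint exists.
# `from_gguf` and its (model, tokenizer) return value are assumptions taken
# from the linked examples and may differ across bigdl-llm versions.
from bigdl.llm.transformers import AutoModelForCausalLM

# Load a quantized GGUF checkpoint directly; both the model and its
# tokenizer are reconstructed from the single file.
model, tokenizer = AutoModelForCausalLM.from_gguf("llama-2-7b-chat.Q4_0.gguf")

prompt = "What is BigDL-LLM?"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```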
7  docs/readthedocs/source/doc/LLM/Overview/FAQ/index.rst  Normal file

@@ -0,0 +1,7 @@
Frequently Asked Questions (FAQ)
================================

You can refer to the corresponding page below to find answers to your questions:

* `General Info & Concepts <./general_info.html>`_
* `How to Resolve Errors <./resolve_error.html>`_
12  docs/readthedocs/source/doc/LLM/Overview/FAQ/resolve_error.md  Normal file

@@ -0,0 +1,12 @@
# FAQ: How to Resolve Errors

Refer to this section for common issues faced while using BigDL-LLM.

## Runtime Error

### PyTorch is not linked with support for xpu devices

1. Before running on Intel GPUs, please make sure you have prepared your environment by following the [installation instructions](https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html).
2. If you are using an older version of `bigdl-llm` (specifically, older than 2.5.0b20240104), you need to manually add `import intel_extension_for_pytorch as ipex` at the beginning of your code.
3. After optimizing the model with BigDL-LLM, you need to move the model to GPU through `model = model.to('xpu')`.
4. If you have multiple GPUs, refer to [here](https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/KeyFeatures/multi_gpus_selection.html) for details on GPU selection.
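The version cutoff in step 2 above can be expressed as a small helper — a minimal sketch, assuming the nightly-build version scheme `X.Y.Zb<date>` used by `bigdl-llm`; the helper name is hypothetical and not part of the library:

```python
# Hypothetical helper (not part of bigdl-llm) illustrating the cutoff in
# step 2: builds of bigdl-llm older than 2.5.0b20240104 need a manual
# `import intel_extension_for_pytorch as ipex` before loading models.
def needs_manual_ipex_import(version: str) -> bool:
    base, sep, build = version.partition("b")
    if sep and build.isdigit():
        # Nightly build: compare the date suffix, e.g. "20240103" < "20240104".
        return int(build) < 20240104
    # Plain release: compare the dotted version against 2.5.0.
    return tuple(int(p) for p in base.split(".")) < (2, 5, 0)
```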