From 94cb16fe40e94333549d52dd65dd3af863203ce9 Mon Sep 17 00:00:00 2001
From: Yuwen Hu <54161268+Oscilloscope98@users.noreply.github.com>
Date: Wed, 21 Feb 2024 17:58:40 +0800
Subject: [PATCH] [LLM] Small updates to Win GPU Install Doc (#10199)

* Make Offline installer as default for win gpu doc for oneAPI
* Small other fixes
---
 README.md                                               | 2 +-
 docs/readthedocs/source/doc/LLM/Overview/install_gpu.md | 8 +++++---
 python/llm/README.md                                    | 4 +---
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/README.md b/README.md
index 9de3dd6f..b7de202b 100644
--- a/README.md
+++ b/README.md
@@ -162,6 +162,7 @@ Over 40 models have been optimized/verified on `bigdl-llm`, including *LLaMA/LLa
 | Baichuan2 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/baichuan2) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/baichuan2) |
 | InternLM | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/internlm) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/internlm) |
 | Qwen | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/qwen) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/qwen) |
+| Qwen1.5 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/qwen1.5) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/qwen1.5) |
 | Qwen-VL | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/qwen-vl) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/qwen-vl) |
 | Aquila | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/aquila) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/aquila) |
 | Aquila2 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/aquila2) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/aquila2) |
@@ -187,7 +188,6 @@ Over 40 models have been optimized/verified on `bigdl-llm`, including *LLaMA/LLa
 | Bark | [link](python/llm/example/CPU/PyTorch-Models/Model/bark) | [link](python/llm/example/GPU/PyTorch-Models/Model/bark) |
 | SpeechT5 | | [link](python/llm/example/GPU/PyTorch-Models/Model/speech-t5) |
 | Ziya-Coding-34B-v1.0 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/ziya) | |
-| Qwen1.5 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/qwen1.5) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/qwen1.5) |
 
 ***For more details, please refer to the `bigdl-llm` [Document](https://test-bigdl-llm.readthedocs.io/en/main/doc/LLM/index.html), [Readme](python/llm), [Tutorial](https://github.com/intel-analytics/bigdl-llm-tutorial) and [API Doc](https://bigdl.readthedocs.io/en/latest/doc/PythonAPI/LLM/index.html).***
 
diff --git a/docs/readthedocs/source/doc/LLM/Overview/install_gpu.md b/docs/readthedocs/source/doc/LLM/Overview/install_gpu.md
index 1b881992..f1d0054c 100644
--- a/docs/readthedocs/source/doc/LLM/Overview/install_gpu.md
+++ b/docs/readthedocs/source/doc/LLM/Overview/install_gpu.md
@@ -24,6 +24,11 @@ Intel® oneAPI Base Toolkit 2024.0 installation methods:
 
 ```eval_rst
 .. tabs::
+   .. tab:: Offline installer
+      Download and install `Intel® oneAPI Base Toolkit `_ version 2024.0 through Offline Installer.
+
+      During installation, you could just continue with "Recommended Installation". If you would like to continue with "Custom Installation", please note that oneAPI Deep Neural Network Library, oneAPI Math Kernel Library, and oneAPI DPC++/C++ Compiler are required, the other components are optional.
+
    .. tab:: PIP installer
       Pip install oneAPI in your working conda environment.
 
@@ -33,9 +38,6 @@ Intel® oneAPI Base Toolkit 2024.0 installation methods:
 
       .. note::
         Activating your working conda environment will automatically configure oneAPI environment variables.
-
-   .. tab:: Offline installer
-      Download and install `Intel® oneAPI Base Toolkit `_ version 2024.0. oneAPI Deep Neural Network Library, oneAPI Math Kernel Library, and oneAPI DPC++/C++ Compiler are required, the other components are optional.
 ```
 
 ### Install BigDL-LLM From PyPI
diff --git a/python/llm/README.md b/python/llm/README.md
index be0646a5..334296b5 100644
--- a/python/llm/README.md
+++ b/python/llm/README.md
@@ -56,6 +56,7 @@ Over 20 models have been optimized/verified on `bigdl-llm`, including *LLaMA/LLa
 | Baichuan2 | [link](example/CPU/HF-Transformers-AutoModels/Model/baichuan2) | [link](example/GPU/HF-Transformers-AutoModels/Model/baichuan2) |
 | InternLM | [link](example/CPU/HF-Transformers-AutoModels/Model/internlm) | [link](example/GPU/HF-Transformers-AutoModels/Model/internlm) |
 | Qwen | [link](example/CPU/HF-Transformers-AutoModels/Model/qwen) | [link](example/GPU/HF-Transformers-AutoModels/Model/qwen) |
+| Qwen1.5 | [link](example/CPU/HF-Transformers-AutoModels/Model/qwen1.5) | [link](example/GPU/HF-Transformers-AutoModels/Model/qwen1.5) |
 | Qwen-VL | [link](example/CPU/HF-Transformers-AutoModels/Model/qwen-vl) | [link](example/GPU/HF-Transformers-AutoModels/Model/qwen-vl) |
 | Aquila | [link](example/CPU/HF-Transformers-AutoModels/Model/aquila) | [link](example/GPU/HF-Transformers-AutoModels/Model/aquila) |
 | Aquila2 | [link](example/CPU/HF-Transformers-AutoModels/Model/aquila2) | [link](example/GPU/HF-Transformers-AutoModels/Model/aquila2) |
@@ -63,7 +64,6 @@ Over 20 models have been optimized/verified on `bigdl-llm`, including *LLaMA/LLa
 | Whisper | [link](example/CPU/HF-Transformers-AutoModels/Model/whisper) | [link](example/GPU/HF-Transformers-AutoModels/Model/whisper) |
 | Phi-1_5 | [link](example/CPU/HF-Transformers-AutoModels/Model/phi-1_5) | [link](example/GPU/HF-Transformers-AutoModels/Model/phi-1_5) |
 | Flan-t5 | [link](example/CPU/HF-Transformers-AutoModels/Model/flan-t5) | [link](example/GPU/HF-Transformers-AutoModels/Model/flan-t5) |
-| Qwen-VL | [link](example/CPU/HF-Transformers-AutoModels/Model/qwen-vl) | |
 | LLaVA | [link](example/CPU/PyTorch-Models/Model/llava) | [link](example/GPU/PyTorch-Models/Model/llava) |
 | CodeLlama | [link](example/CPU/HF-Transformers-AutoModels/Model/codellama) | [link](example/GPU/HF-Transformers-AutoModels/Model/codellama) |
 | Skywork | [link](example/CPU/HF-Transformers-AutoModels/Model/skywork) | |
@@ -82,8 +82,6 @@ Over 20 models have been optimized/verified on `bigdl-llm`, including *LLaMA/LLa
 | Bark | [link](example/CPU/PyTorch-Models/Model/bark) | [link](example/GPU/PyTorch-Models/Model/bark) |
 | SpeechT5 | | [link](example/GPU/PyTorch-Models/Model/speech-t5) |
 | Ziya-Coding-34B-v1.0 | [link](example/CPU/HF-Transformers-AutoModels/Model/ziya) | |
-| Qwen1.5 | [link](example/CPU/HF-Transformers-AutoModels/Model/qwen1.5) | [link](example/GPU/HF-Transformers-AutoModels/Model/qwen1.5) |
-
 
 ### Working with `bigdl-llm`
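
For context, the "PIP installer" tab that this patch demotes to the secondary option installs the required oneAPI 2024.0 components (oneAPI DPC++/C++ Compiler runtime, oneAPI Math Kernel Library, oneAPI Deep Neural Network Library) from PyPI inside the working conda environment. A minimal sketch of that route — the exact package names and version pins are assumptions based on the components the doc lists as required, not taken from this patch:

```shell
# Sketch (assumed package names/versions): install the oneAPI 2024.0 runtime
# components the doc names as required, into the active conda environment.
pip install dpcpp-cpp-rt==2024.0.2 mkl-dpcpp==2024.0.0 onednn==2024.0.0
```

As the patched note says, with the pip route the oneAPI environment variables are configured automatically when the conda environment is activated, whereas the Offline installer route requires sourcing/calling the toolkit's own environment script.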