Update readme (#10303)
This commit is contained in:
parent 1db20dd1d0
commit 367b1db4f7
2 changed files with 3 additions and 1 deletion
@@ -63,7 +63,8 @@ See the ***optimized performance*** of `chatglm2-6b` and `llama-2-13b-chat` models
 
 ### `bigdl-llm` quickstart
 
-- [Windows GPU installation](https://test-bigdl-llm.readthedocs.io/en/main/doc/LLM/Quickstart/install_windows_gpu.html)
+- [Windows GPU installation](https://bigdl.readthedocs.io/en/latest/doc/LLM/Quickstart/install_windows_gpu.html)
+- [Run BigDL-LLM in Text-Generation-WebUI](https://bigdl.readthedocs.io/en/latest/doc/LLM/Quickstart/webui_quickstart.html)
 - [Run BigDL-LLM using Docker](docker/llm)
 - [CPU INT4](#cpu-int4)
 - [GPU INT4](#gpu-int4)
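The "CPU INT4" quickstart that this list points to covers 4-bit inference on Intel CPUs. As a rough illustration only, here is a minimal sketch of that flow, assuming the transformers-style `bigdl.llm` API the quickstarts describe; the model id and prompt below are placeholders, not taken from this diff.

```python
# Minimal CPU INT4 sketch (assumes the transformers-style bigdl-llm API;
# the model id and prompt are placeholders, not part of this commit).
from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_id = "meta-llama/Llama-2-13b-chat-hf"  # placeholder checkpoint

# load_in_4bit=True quantizes the weights to INT4 while loading
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("What is BigDL-LLM?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```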
@@ -83,6 +83,7 @@ See the **optimized performance** of ``chatglm2-6b`` and ``llama-2-13b-chat`` models
 ============================================
 
 - `Windows GPU installation <doc/LLM/Quickstart/install_windows_gpu.html>`_
+- `Run BigDL-LLM in Text-Generation-WebUI <doc/LLM/Quickstart/webui_quickstart.html>`_
 - `Run BigDL-LLM using Docker <https://github.com/intel-analytics/BigDL/tree/main/docker/llm>`_
 - `CPU quickstart <#cpu-quickstart>`_
 - `GPU quickstart <#gpu-quickstart>`_
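The "GPU quickstart" linked above covers the same INT4 flow on Intel GPUs. A hedged sketch of the difference follows, assuming the same `bigdl.llm` API plus an XPU-enabled PyTorch stack with `intel_extension_for_pytorch` installed; apart from moving the model and inputs to `xpu`, it mirrors the CPU sketch above.

```python
# Minimal GPU (XPU) INT4 sketch; assumes intel_extension_for_pytorch is
# installed so the 'xpu' device is available. Model id and prompt are placeholders.
import intel_extension_for_pytorch as ipex  # registers the 'xpu' device
from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_id = "meta-llama/Llama-2-13b-chat-hf"  # placeholder checkpoint

model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)
model = model.to("xpu")  # move the INT4 model to the Intel GPU
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("What is BigDL-LLM?", return_tensors="pt").to("xpu")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0].cpu(), skip_special_tokens=True))
```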