diff --git a/README.md b/README.md
index 3d1832e4..0971b143 100644
--- a/README.md
+++ b/README.md
@@ -63,7 +63,8 @@ See the ***optimized performance*** of `chatglm2-6b` and `llama-2-13b-chat` mode
 
 ### `bigdl-llm` quickstart
 
-- [Windows GPU installation](https://test-bigdl-llm.readthedocs.io/en/main/doc/LLM/Quickstart/install_windows_gpu.html)
+- [Windows GPU installation](https://bigdl.readthedocs.io/en/latest/doc/LLM/Quickstart/install_windows_gpu.html)
+- [Run BigDL-LLM in Text-Generation-WebUI](https://bigdl.readthedocs.io/en/latest/doc/LLM/Quickstart/webui_quickstart.html)
 - [Run BigDL-LLM using Docker](docker/llm)
 - [CPU INT4](#cpu-int4)
 - [GPU INT4](#gpu-int4)
diff --git a/docs/readthedocs/source/index.rst b/docs/readthedocs/source/index.rst
index c5d4c1e2..d077e4cf 100644
--- a/docs/readthedocs/source/index.rst
+++ b/docs/readthedocs/source/index.rst
@@ -83,6 +83,7 @@ See the **optimized performance** of ``chatglm2-6b`` and ``llama-2-13b-chat`` mo
 ============================================
 
 - `Windows GPU installation `_
+- `Run BigDL-LLM in Text-Generation-WebUI <https://bigdl.readthedocs.io/en/latest/doc/LLM/Quickstart/webui_quickstart.html>`_
 - `Run BigDL-LLM using Docker `_
 - `CPU quickstart <#cpu-quickstart>`_
 - `GPU quickstart <#gpu-quickstart>`_