From 33f90beda0d15ed1295dc008808e29c080828113 Mon Sep 17 00:00:00 2001
From: Shengsheng Huang
Date: Sun, 7 Apr 2024 14:26:59 +0800
Subject: [PATCH] fix quickstart docs (#10676)

---
 .../source/doc/LLM/Quickstart/chatchat_quickstart.md | 6 +++---
 .../source/doc/LLM/Quickstart/continue_quickstart.md | 4 ++--
 .../source/doc/LLM/Quickstart/webui_quickstart.md    | 2 +-
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/docs/readthedocs/source/doc/LLM/Quickstart/chatchat_quickstart.md b/docs/readthedocs/source/doc/LLM/Quickstart/chatchat_quickstart.md
index 9121f809..0e8a9bf4 100644
--- a/docs/readthedocs/source/doc/LLM/Quickstart/chatchat_quickstart.md
+++ b/docs/readthedocs/source/doc/LLM/Quickstart/chatchat_quickstart.md
@@ -32,9 +32,9 @@ See the Langchain-Chatchat architecture below ([source](https://github.com/chatc
 
 Follow the guide that corresponds to your specific system and GPU type from the links provided below:
 
-- For systems with Intel Core Ultra integrated GPU: [Windows Guide](./INSTALL_win_mtl.md)
-- For systems with Intel Arc A-Series GPU: [Windows Guide](./INSTALL_windows_arc.md) | [Linux Guide](./INSTALL_linux_arc.md)
-- For systems with Intel Data Center Max Series GPU: [Linux Guide](./INSTALL_linux_max.md)
+- For systems with Intel Core Ultra integrated GPU: [Windows Guide](https://github.com/intel-analytics/Langchain-Chatchat/blob/ipex-llm/INSTALL_win_mtl.md)
+- For systems with Intel Arc A-Series GPU: [Windows Guide](https://github.com/intel-analytics/Langchain-Chatchat/blob/ipex-llm/INSTALL_windows_arc.md) | [Linux Guide](https://github.com/intel-analytics/Langchain-Chatchat/blob/ipex-llm/INSTALL_linux_arc.md)
+- For systems with Intel Data Center Max Series GPU: [Linux Guide](https://github.com/intel-analytics/Langchain-Chatchat/blob/ipex-llm/INSTALL_linux_max.md)
 
 ### How to use RAG
 
diff --git a/docs/readthedocs/source/doc/LLM/Quickstart/continue_quickstart.md b/docs/readthedocs/source/doc/LLM/Quickstart/continue_quickstart.md
index 91bb30f2..0f75491c 100644
--- a/docs/readthedocs/source/doc/LLM/Quickstart/continue_quickstart.md
+++ b/docs/readthedocs/source/doc/LLM/Quickstart/continue_quickstart.md
@@ -59,7 +59,7 @@ Follow the steps in [Model Download](https://ipex-llm.readthedocs.io/en/latest/d
 
 ```eval_rst
 .. note::
 
-   If you don't need to use the API service anymore, you can follow the instructions in [Exit WebUI](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/webui_quickstart.html#exit-the-webui) to stop the service.
+   If you don't need to use the API service anymore, you can refer to `Exit WebUI <https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/webui_quickstart.html#exit-the-webui>`_ to stop the service.
 
 ```
@@ -104,7 +104,7 @@ In `config.json`, you'll find the `models` property, a list of the models that y
 }
 ```
 
-### 5. How to Use Continue
+### 5. How to Use `Continue`
 For detailed tutorials please refer to [this link](https://continue.dev/docs/how-to-use-continue). Here we are only showing the most common scenarios.
 
 #### Ask about highlighted code or an entire file
diff --git a/docs/readthedocs/source/doc/LLM/Quickstart/webui_quickstart.md b/docs/readthedocs/source/doc/LLM/Quickstart/webui_quickstart.md
index b5ca43b4..b59f7714 100644
--- a/docs/readthedocs/source/doc/LLM/Quickstart/webui_quickstart.md
+++ b/docs/readthedocs/source/doc/LLM/Quickstart/webui_quickstart.md
@@ -55,7 +55,7 @@ Configure oneAPI variables by running the following command in **Anaconda Prompt
 
 ```eval_rst
 .. note::
-   For more details about runtime configurations, `refer to this guide `_
+   For more details about runtime configurations, refer to `this guide `_
 ```
 
 ```cmd
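Most of the hunks above swap Markdown links for reStructuredText external-link syntax (`` `label <url>`_ ``) inside `eval_rst` blocks. As an aside, a minimal sketch (a hypothetical helper, not part of this repo) of a regex check that distinguishes a well-formed RST external link from one whose `<url>` target is missing:

```python
import re

# reStructuredText external link: `link text <https://example.com>`_
RST_LINK = re.compile(r"`([^`<>]+?)\s+<(\S+?)>`_")

def find_rst_links(text: str):
    """Return (label, url) pairs for every well-formed RST external link in text."""
    return [(m.group(1), m.group(2)) for m in RST_LINK.finditer(text)]

# A link with an explicit <url> target matches; a link missing its target does not.
fixed = "refer to `this guide <https://example.com/guide.html>`_ for details"
broken = "`refer to this guide `_"  # no <url>: label and target were fused

print(find_rst_links(fixed))   # [('this guide', 'https://example.com/guide.html')]
print(find_rst_links(broken))  # []
```

A check like this could run in docs CI to flag `eval_rst` blocks where a link lost its target, which is the kind of breakage this patch repairs by hand.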