diff --git a/docs/mddocs/DockerGuides/vllm_cpu_docker_quickstart.md b/docs/mddocs/DockerGuides/vllm_cpu_docker_quickstart.md
index 231e7bfa..115da812 100644
--- a/docs/mddocs/DockerGuides/vllm_cpu_docker_quickstart.md
+++ b/docs/mddocs/DockerGuides/vllm_cpu_docker_quickstart.md
@@ -115,4 +115,22 @@ wrk -t8 -c8 -d15m -s payload-1024.lua http://localhost:8000/v1/completions --tim
#### Offline benchmark through benchmark_vllm_throughput.py
-Please refer to this [section](../Quickstart/vLLM_quickstart.md#5performing-benchmark) on how to use `benchmark_vllm_throughput.py` for benchmarking.
+```bash
+cd /llm
+wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json
+
+source ipex-llm-init -t
+export MODEL="YOUR_MODEL"
+
+python3 ./benchmark_vllm_throughput.py \
+ --backend vllm \
+ --dataset ./ShareGPT_V3_unfiltered_cleaned_split.json \
+ --model $MODEL \
+ --num-prompts 1000 \
+ --seed 42 \
+ --trust-remote-code \
+ --enforce-eager \
+ --dtype bfloat16 \
+ --device cpu \
+ --load-in-low-bit bf16
+```
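+
+`YOUR_MODEL` above is a placeholder: set it to the Hugging Face model id or the local model path inside the container that you want to benchmark. The value below is purely illustrative; substitute your own model:
+
+```bash
+# Hypothetical example value; replace with the actual model id or local directory you use
+export MODEL="meta-llama/Llama-2-7b-chat-hf"
+```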
diff --git a/docs/mddocs/Quickstart/index.md b/docs/mddocs/Quickstart/index.md
index efbaa868..9e4fa976 100644
--- a/docs/mddocs/Quickstart/index.md
+++ b/docs/mddocs/Quickstart/index.md
@@ -5,12 +5,17 @@
This section includes efficient guide to show you how to:
-- [`bigdl-llm` Migration Guide](./bigdl_llm_migration.md)
+## Install
+
+- [`bigdl-llm` Migration Guide](./bigdl_llm_migration.md)
- [Install IPEX-LLM on Linux with Intel GPU](./install_linux_gpu.md)
- [Install IPEX-LLM on Windows with Intel GPU](./install_windows_gpu.md)
- [Install IPEX-LLM in Docker on Windows with Intel GPU](./docker_windows_gpu.md)
-- [Run PyTorch Inference on Intel GPU using Docker (on Linux or WSL)](./docker_benchmark_quickstart.md)
+
+## Inference
+
- [Run Performance Benchmarking with IPEX-LLM](./benchmark_quickstart.md)
+- [Run PyTorch Inference on Intel GPU using Docker (on Linux or WSL)](./docker_benchmark_quickstart.md)
- [Run Local RAG using Langchain-Chatchat on Intel GPU](./chatchat_quickstart.md)
- [Run Text Generation WebUI on Intel GPU](./webui_quickstart.md)
- [Run Open WebUI on Intel GPU](./open_webui_with_ollama_quickstart.md)
@@ -20,7 +25,14 @@ This section includes efficient guide to show you how to:
- [Run llama.cpp with IPEX-LLM on Intel GPU](./llama_cpp_quickstart.md)
- [Run Ollama with IPEX-LLM on Intel GPU](./ollama_quickstart.md)
- [Run Llama 3 on Intel GPU using llama.cpp and ollama with IPEX-LLM](./llama3_llamacpp_ollama_quickstart.md)
+- [Run RAGFlow with IPEX-LLM on Intel GPU](./ragflow_quickstart.md)
+
+## Serving
+
- [Run IPEX-LLM Serving with FastChat](./fastchat_quickstart.md)
- [Run IPEX-LLM Serving with vLLM on Intel GPU](./vLLM_quickstart.md)
-- [Finetune LLM with Axolotl on Intel GPU](./axolotl_quickstart.md)
- [Run IPEX-LLM serving on Multiple Intel GPUs using DeepSpeed AutoTP and FastApi](./deepspeed_autotp_fastapi_quickstart.md)
+
+## Finetune
+
+- [Finetune LLM with Axolotl on Intel GPU](./axolotl_quickstart.md)
\ No newline at end of file
diff --git a/docs/mddocs/Quickstart/ragflow_quickstart.md b/docs/mddocs/Quickstart/ragflow_quickstart.md
new file mode 100644
index 00000000..161831d9
--- /dev/null
+++ b/docs/mddocs/Quickstart/ragflow_quickstart.md
@@ -0,0 +1,278 @@
+# Run RAGFlow with IPEX-LLM on Intel GPU
+
+[RAGFlow](https://github.com/infiniflow/ragflow) is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding; by integrating it with [`ipex-llm`](https://github.com/intel-analytics/ipex-llm), users can now easily leverage local LLMs running on Intel GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max).
+
+
+*See the demo of RAGFlow running Qwen2:7B on Intel Arc A770 below.*
+
+
+
+
+## Quickstart
+
+### 0. Prerequisites
+
+- CPU >= 4 cores
+- RAM >= 16 GB
+- Disk >= 50 GB
+- Docker >= 24.0.0 & Docker Compose >= v2.26.1
+
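+You can quickly verify these prerequisites on Linux with a few standard commands (a minimal sketch; output formats vary by distribution):
+
+```bash
+nproc                    # number of CPU cores (should be >= 4)
+free -h                  # total RAM (should be >= 16 GB)
+df -h /                  # free disk space (should be >= 50 GB)
+docker --version         # should be >= 24.0.0
+docker compose version   # should be >= v2.26.1
+```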
+
+### 1. Install and Start `Ollama` Service on Intel GPU
+
+Follow the steps in the [Run Ollama with IPEX-LLM on Intel GPU Guide](./ollama_quickstart.md) to install and run Ollama on Intel GPU. Ensure that `ollama serve` is running correctly and can be accessed through a local URL (e.g., `http://127.0.0.1:11434`) or a remote URL (e.g., `http://your_ip:11434`).
+
+
+
+```eval_rst
+.. important::
+
+ If `RAGFlow` is not deployed on the same machine as Ollama (i.e., `RAGFlow` needs to connect to a remote Ollama service), you must configure the Ollama service to accept connections from any IP address. To achieve this, set or export the environment variable `OLLAMA_HOST=0.0.0.0` before executing the command `ollama serve`.
+
+.. tip::
+
+ If your local LLM is running on Intel Arc™ A-Series Graphics with Linux OS (Kernel 6.2), it is recommended to additionally set the following environment variable for optimal performance before executing `ollama serve`:
+
+ .. code-block:: bash
+
+ export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
+```
+
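+For reference, a minimal sketch of starting the service with these variables set might look like the following (see the Ollama quickstart linked above for the complete environment setup):
+
+```bash
+# Only needed when RAGFlow connects to Ollama from another machine
+export OLLAMA_HOST=0.0.0.0
+# Recommended for Intel Arc A-Series Graphics on Linux (kernel 6.2)
+export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
+./ollama serve
+```
+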
+### 2. Pull Model
+
+Now we need to pull a model for RAG using Ollama. Here we use the [Qwen/Qwen2-7B](https://huggingface.co/Qwen/Qwen2-7B) model as an example. Open a new terminal window and run the following command to pull [`qwen2:latest`](https://ollama.com/library/qwen2):
+
+
+```eval_rst
+.. tabs::
+ .. tab:: Linux
+
+ .. code-block:: bash
+
+ export no_proxy=localhost,127.0.0.1
+ ./ollama pull qwen2:latest
+
+ .. tab:: Windows
+
+ Please run the following command in Miniforge or Anaconda Prompt.
+
+ .. code-block:: cmd
+
+ set no_proxy=localhost,127.0.0.1
+ ollama pull qwen2:latest
+
+.. seealso::
+
+ Besides Qwen2, there are other LLM models you might want to explore, such as Llama3, Phi3, Mistral, etc. You can find all available models in the `Ollama model library <https://ollama.com/library>`_. Simply search for the model, pull it in a similar manner, and give it a try.
+```
+
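+After the pull completes, you can optionally confirm that the model is available locally (on Windows, drop the leading `./`):
+
+```bash
+./ollama list
+```
+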
+### 3. Start `RAGFlow` Service
+
+
+```eval_rst
+.. note::
+
+ The steps in Section 3 have been verified on Linux systems only.
+```
+
+
+#### 3.1 Download `RAGFlow`
+
+You can either clone the repository or download the source zip from [github](https://github.com/infiniflow/ragflow/archive/refs/heads/main.zip):
+
+```bash
+$ git clone https://github.com/infiniflow/ragflow.git
+```
+
+#### 3.2 Environment Settings
+
+Ensure `vm.max_map_count` is set to at least 262144. To check the current value of `vm.max_map_count`, use:
+
+```bash
+$ sysctl vm.max_map_count
+```
+
+##### Changing `vm.max_map_count`
+
+To set the value temporarily, use:
+
+```bash
+$ sudo sysctl -w vm.max_map_count=262144
+```
+
+To make the change permanent and ensure it persists after a reboot, add or update the following line in `/etc/sysctl.conf`:
+
+```bash
+vm.max_map_count=262144
+```
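+
+For example, a minimal sketch of applying the permanent change (assuming the setting is not already present in the file):
+
+```bash
+$ echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
+$ sudo sysctl -p
+```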
+
+#### 3.3 Start the `RAGFlow` Server Using Docker
+
+Pull the pre-built Docker images and start up the server:
+
+```eval_rst
+.. note::
+
+ Running the following commands automatically downloads the *dev* version of the RAGFlow Docker image. To download and run a specific version instead, update `RAGFLOW_VERSION` in **docker/.env** to the intended version, for example `RAGFLOW_VERSION=v0.7.0`, before running the commands below.
+```
+
+
+```bash
+$ export no_proxy=localhost,127.0.0.1
+$ cd ragflow/docker
+$ chmod +x ./entrypoint.sh
+$ docker compose up -d
+```
+
+
+```eval_rst
+.. note::
+
+ The core image is about 9 GB in size and may take a while to load.
+```
+
+Once the server is up and running, check its status with:
+
+```bash
+$ docker logs -f ragflow-server
+```
+
+Upon successful deployment, you will see logs in the terminal similar to the following:
+
+```bash
+ ____ ______ __
+ / __ \ ____ _ ____ _ / ____// /____ _ __
+ / /_/ // __ `// __ `// /_ / // __ \| | /| / /
+ / _, _// /_/ // /_/ // __/ / // /_/ /| |/ |/ /
+/_/ |_| \__,_/ \__, //_/ /_/ \____/ |__/|__/
+ /____/
+
+* Running on all addresses (0.0.0.0)
+* Running on http://127.0.0.1:9380
+* Running on http://x.x.x.x:9380
+INFO:werkzeug:Press CTRL+C to quit
+```
+
+
+You can now open a browser and access the RAGFlow web portal. With the default settings, simply enter `http://IP_OF_YOUR_MACHINE` (without the port number), since the default HTTP serving port `80` can be omitted. If RAGFlow is deployed on the same machine as your browser, you can also access the web portal at `http://127.0.0.1` or `http://localhost`.
+
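+If the portal does not load, a quick check (container names may differ slightly between RAGFlow versions) is to confirm that the server container is running and see which host ports it publishes:
+
+```bash
+$ docker ps --filter "name=ragflow-server"
+```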
+
+### 4. Using `RAGFlow`
+
+```eval_rst
+.. note::
+
+ For detailed information about how to use RAGFlow, visit the README of the `RAGFlow official repository <https://github.com/infiniflow/ragflow>`_.
+
+```
+
+#### Log-in
+
+If this is your first time using RAGFlow, you will need to register. After registering, log in with your new account to access the portal.
+
+
+
+
+#### Configure `Ollama` service URL
+
+Access the Ollama settings through **Settings -> Model Providers** in the menu. Fill in the **Base URL** with your Ollama service URL (e.g., `http://your_ip:11434`), and then click the **OK** button at the bottom.
+
+
+
+
+
+
+If the connection is successful, you will see the model listed under **Show more models**, as illustrated below.
+
+
+
+
+
+```eval_rst
+.. note::
+
+ If you want to use an Ollama server hosted at a different URL, simply update the **Ollama Base URL** to the new URL and press the **OK** button again to re-confirm the connection to Ollama.
+```
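+
+If RAGFlow cannot connect, a quick way to verify that the Ollama service is reachable from the machine running RAGFlow is to query Ollama's model list endpoint (replace the URL with your actual Base URL):
+
+```bash
+$ curl http://your_ip:11434/api/tags
+```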
+
+#### Create Knowledge Base
+
+Go to **Knowledge Base** by clicking on **Knowledge Base** in the top bar. Click the **+Create knowledge base** button on the right. You will be prompted to input a name for the knowledge base.
+
+
+
+
+
+
+#### Edit Knowledge Base
+
+After entering a name, you will be directed to edit the knowledge base. Click on **Dataset** on the left, then click **+ Add file -> Local files**. Upload your file in the pop-up window and click **OK**.
+
+
+
+After the upload is successful, you will see a new record in the dataset. The _**Parsing Status**_ column will show `UNSTARTED`. Click the green start button in the _**Action**_ column to begin file parsing. Once parsing is finished, the _**Parsing Status**_ column will change to **SUCCESS**.
+
+
+
+
+Next, go to **Configuration** on the left menu and click **Save** at the bottom to save the changes.
+
+
+
+
+
+#### Chat with the Model
+
+Start new conversations by clicking **Chat** in the top navbar.
+
+On the left side, create a conversation by clicking **Create an Assistant**. Under **Assistant Setting**, give it a name and select your knowledge bases.
+
+
+
+
+
+
+
+Next, go to **Model Setting**, choose the model you added through Ollama, and disable the **Max Tokens** toggle. Finally, click **OK** to start.
+
+```eval_rst
+.. tip::
+
+ Enabling the **Max Tokens** toggle may result in very short answers.
+```
+
+
+
+
+
+
+
+Input your questions into the message textbox at the bottom (e.g., **Message Resume Assistant**), and click the button on the right to get responses.
+
+
+
+
+
+#### Exit
+
+To shut down the RAGFlow server, press **Ctrl+C** in the terminal to stop following the server logs, run `docker compose stop` in the `ragflow/docker` directory to stop the containers, and then close your browser tab.