<!-- ipex-llm/docs/readthedocs/source/_templates/sidebar_quicklinks.html -->
<nav class="bd-links">
<p class="bd-links__title">Quick Links</p>
<div class="navbar-nav">
<ul class="nav">
<li>
<a href="doc/LLM/index.html">
<strong class="bigdl-quicklinks-section-title">IPEX-LLM Document</strong>
</a>
</li>
<li>
<a href="doc/LLM/Quickstart/bigdl_llm_migration.html">
<strong class="bigdl-quicklinks-section-title"><code>bigdl-llm</code> Migration Guide</strong>
</a>
</li>
<li>
<strong class="bigdl-quicklinks-section-title">IPEX-LLM Quickstart</strong>
<input id="quicklink-cluster-llm-quickstart" type="checkbox" class="toctree-checkbox" />
<label for="quicklink-cluster-llm-quickstart" class="toctree-toggle">
<i class="fa-solid fa-chevron-down"></i>
</label>
<ul class="nav bigdl-quicklinks-section-nav">
<li>
<a href="doc/LLM/Quickstart/install_linux_gpu.html">Install IPEX-LLM on Linux with Intel GPU</a>
</li>
<li>
<a href="doc/LLM/Quickstart/install_windows_gpu.html">Install IPEX-LLM on Windows with Intel GPU</a>
</li>
<li>
<a href="doc/LLM/Quickstart/chatchat_quickstart.html">Run Local RAG using Langchain-Chatchat on Intel GPU</a>
</li>
<li>
<a href="doc/LLM/Quickstart/webui_quickstart.html">Run Text Generation WebUI on Intel GPU</a>
</li>
<li>
<a href="doc/LLM/Quickstart/continue_quickstart.html">Run Coding Copilot (Continue) in VSCode with Intel GPU</a>
</li>
<li>
<a href="doc/LLM/Quickstart/dify_quickstart.html">Run Dify on Intel GPU</a>
</li>
<li>
<a href="doc/LLM/Quickstart/open_webui_with_ollama_quickstart.html">Run Open WebUI with IPEX-LLM on Intel GPU</a>
</li>
<li>
<a href="doc/LLM/Quickstart/benchmark_quickstart.html">Run Performance Benchmarking with IPEX-LLM</a>
</li>
<li>
<a href="doc/LLM/Quickstart/llama_cpp_quickstart.html">Run llama.cpp with IPEX-LLM on Intel GPU</a>
</li>
<li>
<a href="doc/LLM/Quickstart/ollama_quickstart.html">Run Ollama with IPEX-LLM on Intel GPU</a>
</li>
<li>
<a href="doc/LLM/Quickstart/llama3_llamacpp_ollama_quickstart.html">Run Llama 3 on Intel GPU using llama.cpp and ollama with IPEX-LLM</a>
</li>
<li>
<a href="doc/LLM/Quickstart/fastchat_quickstart.html">Run IPEX-LLM Serving with FastChat</a>
</li>
<li>
<a href="doc/LLM/Quickstart/vLLM_quickstart.html">Run IPEX-LLM Serving with vLLM</a>
</li>
<li>
<a href="doc/LLM/Quickstart/axolotl_quickstart.html">Finetune LLM with Axolotl on Intel GPU</a>
</li>
<li>
<a href="doc/LLM/Quickstart/privateGPT_quickstart.html">Run PrivateGPT with IPEX-LLM on Intel GPU</a>
</li>
<li>
<a href="doc/LLM/Quickstart/deepspeed_autotp_fastapi_quickstart.html">Run IPEX-LLM serving on Multiple Intel GPUs
using DeepSpeed AutoTP and FastApi</a>
</li>
<li>
<a href="doc/LLM/Quickstart/ragflow_quickstart.html">Run RAGFlow using Ollama with IPEX_LLM</a>
</li>
</ul>
</li>
<li>
<strong class="bigdl-quicklinks-section-title">IPEX-LLM Docker Guides</strong>
<input id="quicklink-cluster-llm-docker" type="checkbox" class="toctree-checkbox" />
<label for="quicklink-cluster-llm-docker" class="toctree-toggle">
<i class="fa-solid fa-chevron-down"></i>
</label>
<ul class="bigdl-quicklinks-section-nav">
<li>
<a href="doc/LLM/DockerGuides/docker_windows_gpu.html">Overview of IPEX-LLM Containers</a>
</li>
<li>
<a href="doc/LLM/DockerGuides/docker_pytorch_inference_gpu.html">Python Inference with `ipex-llm` on Intel GPU </a>
</li>
<li>
<a href="doc/LLM/DockerGuides/docker_run_pytorch_inference_in_vscode.html">VSCode LLM Development with `ipex-llm` on Intel GPU</a>
</li>
<li>
<a href="doc/LLM/DockerGuides/docker_cpp_xpu_quickstart.html">llama.cpp/Ollama/Open-WebUI with `ipex-llm` on Intel GPU</a>
</li>
<li>
<a href="doc/LLM/DockerGuides/fastchat_docker_quickstart.html">FastChat with `ipex-llm` on Intel GPU</a>
</li>
<li>
<a href="doc/LLM/DockerGuides/vllm_docker_quickstart.html">vLLM with `ipex-llm` on Intel GPU</a>
</li>
<li>
<a href="doc/LLM/DockerGuides/vllm_cpu_docker_quickstart.html">vLLM with `ipex-llm` on Intel CPU</a>
</li>
</ul>
</li>
<li>
<strong class="bigdl-quicklinks-section-title">IPEX-LLM Installation</strong>
<input id="quicklink-cluster-llm-installation" type="checkbox" class="toctree-checkbox" />
<label for="quicklink-cluster-llm-installation" class="toctree-toggle">
<i class="fa-solid fa-chevron-down"></i>
</label>
<ul class="bigdl-quicklinks-section-nav">
<li>
<a href="doc/LLM/Overview/install_cpu.html">CPU</a>
</li>
<li>
<a href="doc/LLM/Overview/install_gpu.html">GPU</a>
</li>
</ul>
</li>
<li>
<a href="doc/LLM/Overview/FAQ/faq.html">
<strong class="bigdl-quicklinks-section-title">IPEX-LLM FAQ</strong>
</a>
</li>
</ul>
</div>
</nav>