ipex-llm/docs/readthedocs/source/_templates/sidebar_quicklinks.html
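<!-- Sidebar "Quick Links" template for the IPEX-LLM readthedocs site; the
     Sphinx theme is assumed to include this file into the page sidebar. -->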

<nav class="bd-links">
  <p class="bd-links__title">Quick Links</p>
  <div class="navbar-nav">
    <ul class="nav">
      <li>
        <a href="doc/LLM/index.html">
          <strong class="bigdl-quicklinks-section-title">IPEX-LLM Document</strong>
        </a>
      </li>
      <li>
        <a href="doc/LLM/Quickstart/bigdl_llm_migration.html">
          <strong class="bigdl-quicklinks-section-title"><code>bigdl-llm</code> Migration Guide</strong>
        </a>
      </li>
      <li>
        <strong class="bigdl-quicklinks-section-title">IPEX-LLM Quickstart</strong>
        <input id="quicklink-cluster-llm-quickstart" type="checkbox" class="toctree-checkbox" />
        <label for="quicklink-cluster-llm-quickstart" class="toctree-toggle">
          <i class="fa-solid fa-chevron-down"></i>
        </label>
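        <!--
          The input/label pair above is a CSS-only collapse toggle: the hidden
          checkbox stores the section's open/closed state, and the label (the
          chevron icon) flips it when clicked, so no JavaScript is needed for
          expanding or collapsing the section. A minimal sketch of the
          stylesheet rules the theme is assumed to ship for this pattern
          (selector names come from this template; the rules themselves are an
          assumption, not copied from the theme):

            .toctree-checkbox { display: none; }                     /* hide the raw checkbox */
            .toctree-checkbox:not(:checked) ~ ul { display: none; }  /* collapse the sibling list */
            .toctree-checkbox:checked ~ label i { transform: rotate(180deg); }  /* flip the chevron */
        -->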
        <ul class="nav bigdl-quicklinks-section-nav">
          <li>
            <a href="doc/LLM/Quickstart/install_linux_gpu.html">Install IPEX-LLM on Linux with Intel GPU</a>
          </li>
          <li>
            <a href="doc/LLM/Quickstart/install_windows_gpu.html">Install IPEX-LLM on Windows with Intel GPU</a>
          </li>
          <li>
            <a href="doc/LLM/Quickstart/docker_windows_gpu.html">Install IPEX-LLM in Docker on Windows with Intel GPU</a>
          </li>
          <li>
            <a href="doc/LLM/Quickstart/docker_pytorch_inference_gpu.html">Run PyTorch Inference on Intel GPU using Docker (on Linux or WSL)</a>
          </li>
          <li>
            <a href="doc/LLM/Quickstart/chatchat_quickstart.html">Run Local RAG using Langchain-Chatchat on Intel GPU</a>
          </li>
          <li>
            <a href="doc/LLM/Quickstart/webui_quickstart.html">Run Text Generation WebUI on Intel GPU</a>
          </li>
          <li>
            <a href="doc/LLM/Quickstart/continue_quickstart.html">Run Coding Copilot (Continue) in VSCode with Intel GPU</a>
          </li>
          <li>
            <a href="doc/LLM/Quickstart/dify_quickstart.html">Run Dify on Intel GPU</a>
          </li>
          <li>
            <a href="doc/LLM/Quickstart/open_webui_with_ollama_quickstart.html">Run Open WebUI with IPEX-LLM on Intel GPU</a>
          </li>
          <li>
            <a href="doc/LLM/Quickstart/benchmark_quickstart.html">Run Performance Benchmarking with IPEX-LLM</a>
          </li>
          <li>
            <a href="doc/LLM/Quickstart/llama_cpp_quickstart.html">Run llama.cpp with IPEX-LLM on Intel GPU</a>
          </li>
          <li>
            <a href="doc/LLM/Quickstart/ollama_quickstart.html">Run Ollama with IPEX-LLM on Intel GPU</a>
          </li>
          <li>
            <a href="doc/LLM/Quickstart/llama3_llamacpp_ollama_quickstart.html">Run Llama 3 on Intel GPU using llama.cpp and Ollama with IPEX-LLM</a>
          </li>
          <li>
            <a href="doc/LLM/Quickstart/fastchat_quickstart.html">Run IPEX-LLM Serving with FastChat</a>
          </li>
          <li>
            <a href="doc/LLM/Quickstart/axolotl_quickstart.html">Finetune LLM with Axolotl on Intel GPU</a>
          </li>
          <li>
            <a href="doc/LLM/Quickstart/privateGPT_quickstart.html">Run PrivateGPT with IPEX-LLM on Intel GPU</a>
          </li>
          <li>
            <a href="doc/LLM/Quickstart/deepspeed_autotp_fastapi_quickstart.html">Run IPEX-LLM Serving on Multiple Intel GPUs using DeepSpeed AutoTP and FastAPI</a>
          </li>
        </ul>
      </li>
      <li>
        <strong class="bigdl-quicklinks-section-title">IPEX-LLM Installation</strong>
        <input id="quicklink-cluster-llm-installation" type="checkbox" class="toctree-checkbox" />
        <label for="quicklink-cluster-llm-installation" class="toctree-toggle">
          <i class="fa-solid fa-chevron-down"></i>
        </label>
        <ul class="nav bigdl-quicklinks-section-nav">
          <li>
            <a href="doc/LLM/Overview/install_cpu.html">CPU</a>
          </li>
          <li>
            <a href="doc/LLM/Overview/install_gpu.html">GPU</a>
          </li>
        </ul>
      </li>
      <li>
        <a href="doc/LLM/Overview/FAQ/faq.html">
          <strong class="bigdl-quicklinks-section-title">IPEX-LLM FAQ</strong>
        </a>
      </li>
    </ul>
  </div>
</nav>