<!-- ipex-llm/docs/readthedocs/source/_templates/sidebar_quicklinks.html -->
<nav class="bd-links">
<p class="bd-links__title">Quick Links</p>
<div class="navbar-nav">
<ul class="nav">
<li>
<strong class="bigdl-quicklinks-section-title">BigDL-LLM Quickstart</strong>
<input id="quicklink-cluster-llm-quickstart" type="checkbox" class="toctree-checkbox" />
<label for="quicklink-cluster-llm-quickstart" class="toctree-toggle">
<i class="fa-solid fa-chevron-down"></i>
</label>
<ul class="nav bigdl-quicklinks-section-nav">
<li>
<a href="doc/LLM/Quickstart/install_linux_gpu.html">Install BigDL-LLM on Linux with Intel GPU</a>
</li>
<li>
<a href="doc/LLM/Quickstart/install_windows_gpu.html">Install BigDL-LLM on Windows with Intel GPU</a>
</li>
<li>
<a href="doc/LLM/Quickstart/docker_windows_gpu.html">Install BigDL-LLM in Docker on Windows with Intel GPU</a>
</li>
<li>
<a href="doc/LLM/Quickstart/webui_quickstart.html">Use Text Generation WebUI on Windows with Intel GPU</a>
</li>
<li>
<a href="doc/LLM/Quickstart/benchmark_quickstart.html">BigDL-LLM Benchmarking</a>
</li>
<li>
<a href="doc/LLM/Quickstart/llama_cpp_quickstart.html">Use llama.cpp with BigDL-LLM on Intel GPU</a>
</li>
</ul>
</li>
<li>
<strong class="bigdl-quicklinks-section-title">BigDL-LLM Installation</strong>
<input id="quicklink-cluster-llm-installation" type="checkbox" class="toctree-checkbox" />
<label for="quicklink-cluster-llm-installation" class="toctree-toggle">
<i class="fa-solid fa-chevron-down"></i>
</label>
<ul class="nav bigdl-quicklinks-section-nav">
<li>
<a href="doc/LLM/Overview/install_cpu.html">CPU</a>
</li>
<li>
<a href="doc/LLM/Overview/install_gpu.html">GPU</a>
</li>
</ul>
</li>
<li>
<a href="doc/LLM/Overview/FAQ/faq.html">
<strong class="bigdl-quicklinks-section-title">BigDL-LLM FAQ</strong>
</a>
</li>
</ul>
</div>
</nav>