# IPEX-LLM Quickstart
> [!NOTE]
> We are continuing to add more Quickstart guides.

This section includes efficient guides that show you how to:
- [`bigdl-llm` Migration Guide](./bigdl_llm_migration.md)
- [Install IPEX-LLM on Linux with Intel GPU](./install_linux_gpu.md)
- [Install IPEX-LLM on Windows with Intel GPU](./install_windows_gpu.md)
- [Install IPEX-LLM in Docker on Windows with Intel GPU](./docker_windows_gpu.md)
- [Run PyTorch Inference on Intel GPU using Docker (on Linux or WSL)](./docker_benchmark_quickstart.md)
- [Run Performance Benchmarking with IPEX-LLM](./benchmark_quickstart.md)
- [Run Local RAG using Langchain-Chatchat on Intel GPU](./chatchat_quickstart.md)
- [Run Text Generation WebUI on Intel GPU](./webui_quickstart.md)
- [Run Open WebUI on Intel GPU](./open_webui_with_ollama_quickstart.md)
- [Run PrivateGPT with IPEX-LLM on Intel GPU](./privateGPT_quickstart.md)
- [Run Coding Copilot (Continue) in VSCode with Intel GPU](./continue_quickstart.md)
- [Run Dify on Intel GPU](./dify_quickstart.md)
- [Run llama.cpp with IPEX-LLM on Intel GPU](./llama_cpp_quickstart.md)
- [Run Ollama with IPEX-LLM on Intel GPU](./ollama_quickstart.md)
- [Run Llama 3 on Intel GPU using llama.cpp and ollama with IPEX-LLM](./llama3_llamacpp_ollama_quickstart.md)
- [Run IPEX-LLM Serving with FastChat](./fastchat_quickstart.md)
- [Run IPEX-LLM Serving with vLLM on Intel GPU](./vLLM_quickstart.md)
- [Finetune LLM with Axolotl on Intel GPU](./axolotl_quickstart.md)
- [Run IPEX-LLM serving on Multiple Intel GPUs using DeepSpeed AutoTP and FastApi](./deepspeed_autotp_fastapi_quickstart.md)