# IPEX-LLM Quickstart

> **Note**
>
> We are adding more Quickstart guides.

This section includes efficient guides to show you how to:

## Install

- [`bigdl-llm` Migration Guide](./bigdl_llm_migration.md)
- [Install IPEX-LLM on Linux with Intel GPU](./install_linux_gpu.md)
- [Install IPEX-LLM on Windows with Intel GPU](./install_windows_gpu.md)
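
For a quick sanity check after following one of the install guides above, the snippet below is a minimal, illustrative sketch (not taken verbatim from the guides): it loads IPEX-LLM's PyTorch integration and runs a small matrix multiplication on the Intel GPU. It assumes `ipex-llm[xpu]` and its oneAPI runtime dependencies are already set up in the active Python environment.

```python
# Illustrative sanity check for an IPEX-LLM GPU installation.
# Assumes `ipex-llm[xpu]` and the oneAPI runtime are already installed and configured.
import torch
from ipex_llm.transformers import AutoModelForCausalLM  # importing ipex_llm pulls in Intel Extension for PyTorch, which registers the 'xpu' device

# Multiply two small random tensors on the Intel GPU;
# a successful run prints torch.Size([1, 1, 40, 40]).
tensor_1 = torch.randn(1, 1, 40, 128).to("xpu")
tensor_2 = torch.randn(1, 1, 128, 40).to("xpu")
print(torch.matmul(tensor_1, tensor_2).size())
```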
 
## Inference

- [Run Performance Benchmarking with IPEX-LLM](./benchmark_quickstart.md)
- [Run Local RAG using Langchain-Chatchat on Intel GPU](./chatchat_quickstart.md)
- [Run Text Generation WebUI on Intel GPU](./webui_quickstart.md)
- [Run Open WebUI on Intel GPU](./open_webui_with_ollama_quickstart.md)
- [Run PrivateGPT with IPEX-LLM on Intel GPU](./privateGPT_quickstart.md)
- [Run Coding Copilot (Continue) in VSCode with Intel GPU](./continue_quickstart.md)
- [Run Dify on Intel GPU](./dify_quickstart.md)
- [Run llama.cpp with IPEX-LLM on Intel GPU](./llama_cpp_quickstart.md)
- [Run Ollama with IPEX-LLM on Intel GPU](./ollama_quickstart.md)
- [Run Llama 3 on Intel GPU using llama.cpp and ollama with IPEX-LLM](./llama3_llamacpp_ollama_quickstart.md)
- [Run RAGFlow with IPEX-LLM on Intel GPU](./ragflow_quickstart.md)
- [Run GraphRAG with IPEX-LLM on Intel GPU](./graphrag_quickstart.md)
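
For orientation before diving into the guides above, here is a minimal, illustrative sketch of what 4-bit LLM inference with IPEX-LLM on an Intel GPU typically looks like in Python. The model id is only a placeholder, and the exact API details should be confirmed against the individual quickstarts.

```python
# Illustrative sketch of 4-bit LLM inference with IPEX-LLM on an Intel GPU.
# The model id below is a placeholder; other supported Hugging Face models work similarly.
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "meta-llama/Llama-2-7b-chat-hf"  # placeholder model id

# Load the model with 4-bit low-bit optimization and move it to the Intel GPU ('xpu').
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True, trust_remote_code=True)
model = model.to("xpu")
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

prompt = "What is IPEX-LLM?"
with torch.inference_mode():
    input_ids = tokenizer.encode(prompt, return_tensors="pt").to("xpu")
    output = model.generate(input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```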
 
## Serving

- [Run IPEX-LLM Serving with FastChat](./fastchat_quickstart.md)
- [Run IPEX-LLM Serving with vLLM on Intel GPU](./vLLM_quickstart.md)
- [Run IPEX-LLM Serving on Multiple Intel GPUs using DeepSpeed AutoTP and FastAPI](./deepspeed_autotp_fastapi_quickstart.md)
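
Once one of the serving stacks above is running, clients typically talk to it over an OpenAI-compatible REST API. The snippet below is a minimal sketch of such a request; the host, port, route, and model name are assumptions for illustration and depend on how you launch the server (see the respective guides).

```python
# Illustrative sketch: query an OpenAI-compatible endpoint exposed by a serving backend
# such as FastChat or vLLM. Host, port, and model name are assumptions for illustration.
import requests

response = requests.post(
    "http://localhost:8000/v1/chat/completions",  # assumed local endpoint
    json={
        "model": "Llama-2-7b-chat-hf",            # placeholder model name
        "messages": [{"role": "user", "content": "What is IPEX-LLM?"}],
        "max_tokens": 64,
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```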