Quick Links

  • IPEX-LLM Documentation
  • bigdl-llm Migration Guide
  • IPEX-LLM Quickstart
    • Install IPEX-LLM on Linux with Intel GPU
    • Install IPEX-LLM on Windows with Intel GPU
    • Run Local RAG using Langchain-Chatchat on Intel GPU
    • Run Text Generation WebUI on Intel GPU
    • Run Coding Copilot (Continue) in VSCode with Intel GPU
    • Run Dify on Intel GPU
    • Run Open WebUI with IPEX-LLM on Intel GPU
    • Run Performance Benchmarking with IPEX-LLM
    • Run llama.cpp with IPEX-LLM on Intel GPU
    • Run Ollama with IPEX-LLM on Intel GPU
    • Run Llama 3 on Intel GPU using llama.cpp and Ollama with IPEX-LLM
    • Run IPEX-LLM Serving with FastChat
    • Run IPEX-LLM Serving with vLLM
    • Finetune LLM with Axolotl on Intel GPU
    • Run PrivateGPT with IPEX-LLM on Intel GPU
    • Run IPEX-LLM Serving on Multiple Intel GPUs using DeepSpeed AutoTP and FastAPI
    • Run RAGFlow using Ollama with IPEX-LLM
  • IPEX-LLM Docker Guides
    • Overview of IPEX-LLM Containers
    • Python Inference with `ipex-llm` on Intel GPU
    • VSCode LLM Development with `ipex-llm` on Intel GPU
    • llama.cpp/Ollama/Open-WebUI with `ipex-llm` on Intel GPU
    • FastChat with `ipex-llm` on Intel GPU
    • vLLM with `ipex-llm` on Intel GPU
    • vLLM with `ipex-llm` on Intel CPU
  • IPEX-LLM Installation
    • CPU
    • GPU
  • IPEX-LLM FAQ