IPEX-LLM Quickstart
================================

.. note::

   We are adding more Quickstart guides.

This section includes efficient guides that show you how to:

* |bigdl_llm_migration_guide|_
* `Install IPEX-LLM on Linux with Intel GPU <./install_linux_gpu.html>`_
* `Install IPEX-LLM on Windows with Intel GPU <./install_windows_gpu.html>`_
* `Install IPEX-LLM in Docker on Windows with Intel GPU <./docker_windows_gpu.html>`_
* `Run PyTorch Inference on Intel GPU using Docker (on Linux or WSL) <./docker_benchmark_quickstart.html>`_
* `Run Performance Benchmarking with IPEX-LLM <./benchmark_quickstart.html>`_
* `Run Local RAG using Langchain-Chatchat on Intel GPU <./chatchat_quickstart.html>`_
* `Run Text Generation WebUI on Intel GPU <./webui_quickstart.html>`_
* `Run Open WebUI on Intel GPU <./open_webui_with_ollama_quickstart.html>`_
* `Run PrivateGPT with IPEX-LLM on Intel GPU <./privateGPT_quickstart.html>`_
* `Run Coding Copilot (Continue) in VSCode with Intel GPU <./continue_quickstart.html>`_
* `Run Dify on Intel GPU <./dify_quickstart.html>`_
* `Run llama.cpp with IPEX-LLM on Intel GPU <./llama_cpp_quickstart.html>`_
* `Run Ollama with IPEX-LLM on Intel GPU <./ollama_quickstart.html>`_
* `Run Llama 3 on Intel GPU using llama.cpp and ollama with IPEX-LLM <./llama3_llamacpp_ollama_quickstart.html>`_
* `Run IPEX-LLM Serving with FastChat <./fastchat_quickstart.html>`_
* `Run IPEX-LLM Serving with vLLM on Intel GPU <./vLLM_quickstart.html>`_
* `Finetune LLM with Axolotl on Intel GPU <./axolotl_quickstart.html>`_
* `Run IPEX-LLM serving on Multiple Intel GPUs using DeepSpeed AutoTP and FastApi <./deepspeed_autotp_fastapi_quickstart.html>`_

.. |bigdl_llm_migration_guide| replace:: ``bigdl-llm`` Migration Guide
.. _bigdl_llm_migration_guide: bigdl_llm_migration.html
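Not sure which guide to start with? As a first taste, below is a minimal sketch of low-bit LLM inference on an Intel GPU with IPEX-LLM. The model ID and prompt are placeholders, and the sketch assumes IPEX-LLM is already installed with Intel GPU support (see the installation guides above for the exact steps).

.. code-block:: python

   # Minimal sketch: load a Hugging Face model through IPEX-LLM's
   # transformers-style API with 4-bit weight quantization, then run
   # generation on an Intel GPU (the "xpu" device).
   # "meta-llama/Llama-2-7b-chat-hf" is an example model ID; substitute your own.
   from transformers import AutoTokenizer
   from ipex_llm.transformers import AutoModelForCausalLM

   model = AutoModelForCausalLM.from_pretrained(
       "meta-llama/Llama-2-7b-chat-hf",
       load_in_4bit=True,  # quantize weights to INT4 while loading
   )
   model = model.to("xpu")  # move the optimized model to the Intel GPU

   tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
   input_ids = tokenizer("What is IPEX-LLM?", return_tensors="pt").input_ids.to("xpu")

   output = model.generate(input_ids, max_new_tokens=32)
   print(tokenizer.decode(output[0], skip_special_tokens=True))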