IPEX-LLM Quickstart
Note
We are adding more Quickstart guides.
This section includes concise guides that show you how to:
Install
- `bigdl-llm` Migration Guide
- Install IPEX-LLM on Linux with Intel GPU
- Install IPEX-LLM on Windows with Intel GPU
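After following one of the install guides above, a quick way to confirm that PyTorch can see your Intel GPU is the snippet below (a minimal sketch; it assumes `ipex-llm[xpu]` and the Intel GPU driver are already set up):

```python
# Sanity check after installing IPEX-LLM with Intel GPU support.
# Assumes ipex-llm[xpu] was installed per one of the guides above.
import torch
import intel_extension_for_pytorch  # noqa: F401 -- registers the 'xpu' device

print(torch.xpu.is_available())  # True if the Intel GPU is visible to PyTorch
```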
Inference
- Run Performance Benchmarking with IPEX-LLM
- Run Local RAG using Langchain-Chatchat on Intel GPU
- Run Text Generation WebUI on Intel GPU
- Run Open WebUI on Intel GPU
- Run PrivateGPT with IPEX-LLM on Intel GPU
- Run Coding Copilot (Continue) in VSCode with Intel GPU
- Run Dify on Intel GPU
- Run llama.cpp with IPEX-LLM on Intel GPU
- Run Ollama with IPEX-LLM on Intel GPU
- Run Llama 3 on Intel GPU using llama.cpp and ollama with IPEX-LLM
- Run RAGFlow with IPEX-LLM on Intel GPU
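Most of the inference guides above build on IPEX-LLM's Hugging Face-style Python API. The sketch below shows the common pattern (the model id and prompt are placeholders; see the individual guides for tuned, end-to-end setups):

```python
# A minimal inference sketch with IPEX-LLM's transformers-style API.
# Assumes ipex-llm[xpu] is installed; the model id below is a placeholder.
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Llama-2-7b-chat-hf"  # placeholder model id
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True)
model = model.to("xpu")  # move the low-bit model to the Intel GPU

tokenizer = AutoTokenizer.from_pretrained(model_path)
inputs = tokenizer("What is IPEX-LLM?", return_tensors="pt").to("xpu")
output = model.generate(inputs.input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```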
Serving
- Run IPEX-LLM Serving with FastChat
- Run IPEX-LLM Serving with vLLM on Intel GPU
- Run IPEX-LLM serving on Multiple Intel GPUs using DeepSpeed AutoTP and FastApi
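Once one of the serving backends above is running, it exposes an OpenAI-compatible HTTP API. Below is a sketch of a client request (the host, port, and served model name are assumptions; the actual values come from your launch command in the linked guides):

```python
# Query an OpenAI-compatible endpoint exposed by a FastChat/vLLM setup.
# Host, port, and model name are assumptions -- adjust to your deployment.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",  # assumed default address
    json={
        "model": "Llama-2-7b-chat-hf",  # placeholder served-model name
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```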