# IPEX-LLM Quickstart
> **Note**: We are adding more Quickstart guides.

This section includes concise guides that show you how to do the following (a minimal usage sketch follows the list):
- `bigdl-llm` Migration Guide
- Install IPEX-LLM on Linux with Intel GPU
- Install IPEX-LLM on Windows with Intel GPU
- Install IPEX-LLM in Docker on Windows with Intel GPU
- Run PyTorch Inference on Intel GPU using Docker (on Linux or WSL)
- Run Performance Benchmarking with IPEX-LLM
- Run Local RAG using Langchain-Chatchat on Intel GPU
- Run Text Generation WebUI on Intel GPU
- Run Open WebUI on Intel GPU
- Run PrivateGPT with IPEX-LLM on Intel GPU
- Run Coding Copilot (Continue) in VSCode with Intel GPU
- Run Dify on Intel GPU
- Run llama.cpp with IPEX-LLM on Intel GPU
- Run Ollama with IPEX-LLM on Intel GPU
- Run Llama 3 on Intel GPU using llama.cpp and Ollama with IPEX-LLM
- Run IPEX-LLM Serving with FastChat
- Run IPEX-LLM Serving with vLLM on Intel GPU
- Finetune LLM with Axolotl on Intel GPU
- Run IPEX-LLM Serving on Multiple Intel GPUs using DeepSpeed AutoTP and FastAPI
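
For orientation, here is a minimal sketch of the pattern most of these guides build on: loading a Hugging Face model with IPEX-LLM's INT4 optimization and running inference on an Intel GPU (the `xpu` device). The model ID is a placeholder, and the snippet assumes `ipex-llm[xpu]` has already been installed per the Linux or Windows install quickstart above; see the individual guides for complete, verified steps.

```python
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "meta-llama/Llama-2-7b-chat-hf"  # placeholder: any supported HF model

# load_in_4bit=True applies IPEX-LLM's INT4 optimization at load time
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
model = model.to("xpu")  # move the optimized model to the Intel GPU

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
with torch.inference_mode():
    input_ids = tokenizer.encode("What is IPEX-LLM?", return_tensors="pt").to("xpu")
    output = model.generate(input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```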