IPEX-LLM Quickstart
================================
.. note::
   We are adding more Quickstart guides.

This section provides concise guides that show you how to:

* |bigdl_llm_migration_guide|_
* `Install IPEX-LLM on Linux with Intel GPU <./install_linux_gpu.html>`_
* `Install IPEX-LLM on Windows with Intel GPU <./install_windows_gpu.html>`_
* `Install IPEX-LLM in Docker on Windows with Intel GPU <./docker_windows_gpu.html>`_
* `Run PyTorch Inference on Intel GPU using Docker (on Linux or WSL) <./docker_benchmark_quickstart.html>`_
* `Run Performance Benchmarking with IPEX-LLM <./benchmark_quickstart.html>`_
* `Run Local RAG using Langchain-Chatchat on Intel GPU <./chatchat_quickstart.html>`_
* `Run Text Generation WebUI on Intel GPU <./webui_quickstart.html>`_
* `Run Open WebUI on Intel GPU <./open_webui_with_ollama_quickstart.html>`_
* `Run PrivateGPT with IPEX-LLM on Intel GPU <./privateGPT_quickstart.html>`_
* `Run Coding Copilot (Continue) in VSCode with Intel GPU <./continue_quickstart.html>`_
* `Run Dify on Intel GPU <./dify_quickstart.html>`_
* `Run llama.cpp with IPEX-LLM on Intel GPU <./llama_cpp_quickstart.html>`_
* `Run Ollama with IPEX-LLM on Intel GPU <./ollama_quickstart.html>`_
* `Run Llama 3 on Intel GPU using llama.cpp and ollama with IPEX-LLM <./llama3_llamacpp_ollama_quickstart.html>`_
* `Run IPEX-LLM Serving with FastChat <./fastchat_quickstart.html>`_
* `Finetune LLM with Axolotl on Intel GPU <./axolotl_quickstart.html>`_
* `Run IPEX-LLM Serving on Multiple Intel GPUs using DeepSpeed AutoTP and FastAPI <./deepspeed_autotp_fastapi_quickstart.html>`_
.. |bigdl_llm_migration_guide| replace:: ``bigdl-llm`` Migration Guide
.. _bigdl_llm_migration_guide: bigdl_llm_migration.html