IPEX-LLM Quickstart
================================
.. note::
   We are adding more Quickstart guides.
This section includes efficient guides that show you how to:
* |bigdl_llm_migration_guide|_
* `Install IPEX-LLM on Linux with Intel GPU <./install_linux_gpu.html>`_
* `Install IPEX-LLM on Windows with Intel GPU <./install_windows_gpu.html>`_
* `Install IPEX-LLM in Docker on Windows with Intel GPU <./docker_windows_gpu.html>`_
* `Run Performance Benchmarking with IPEX-LLM <./benchmark_quickstart.html>`_
* `Run Local RAG using Langchain-Chatchat on Intel GPU <./chatchat_quickstart.html>`_
* `Run Text Generation WebUI on Intel GPU <./webui_quickstart.html>`_
* `Run Open WebUI on Intel GPU <./open_webui_with_ollama_quickstart.html>`_
* `Run Coding Copilot (Continue) in VSCode with Intel GPU <./continue_quickstart.html>`_
* `Run llama.cpp with IPEX-LLM on Intel GPU <./llama_cpp_quickstart.html>`_
* `Run Ollama with IPEX-LLM on Intel GPU <./ollama_quickstart.html>`_
* `Run Llama 3 on Intel GPU using llama.cpp and ollama with IPEX-LLM <./llama3_llamacpp_ollama_quickstart.html>`_
* `Run IPEX-LLM Serving with FastChat <./fastchat_quickstart.html>`_
.. |bigdl_llm_migration_guide| replace:: ``bigdl-llm`` Migration Guide
.. _bigdl_llm_migration_guide: bigdl_llm_migration.html