ipex-llm/docs/readthedocs/source/doc/LLM/Quickstart
Last commit: Shengsheng Huang, 586a151f9c, 2024-05-14 17:56:11 +08:00
update the README and reorganize the docker guides structure. (#11016)
* modified docker install guide into overview
| File | Last commit | Date |
|------|-------------|------|
| axolotl_quickstart.md | Add axolotl main support and axolotl Llama-3-8B QLoRA example (#10984) | 2024-05-14 13:43:59 +08:00 |
| benchmark_quickstart.md | Minor fix for quick start (#10857) | 2024-04-23 15:22:01 +08:00 |
| bigdl_llm_migration.md | Minor fix for quick start (#10857) | 2024-04-23 15:22:01 +08:00 |
| chatchat_quickstart.md | make images clickable (#10939) | 2024-05-06 20:24:15 +08:00 |
| continue_quickstart.md | update quickstart (#10923) | 2024-04-30 18:19:31 +08:00 |
| deepspeed_autotp_fastapi_quickstart.md | LLM: Refine README of AutoTP-FastAPI example (#10960) | 2024-05-08 16:55:23 +08:00 |
| dify_quickstart.md | update private gpt quickstart and a small fix for dify (#10969) | 2024-05-09 13:57:45 +08:00 |
| fastchat_quickstart.md | LLM: Enable Speculative on Fastchat (#10909) | 2024-05-06 10:06:20 +08:00 |
| index.rst | Quickstart: Run PyTorch Inference on Intel GPU using Docker (on Linux or WSL) (#10970) | 2024-05-14 12:58:31 +08:00 |
| install_linux_gpu.md | Fix apt install oneapi scripts (#10891) | 2024-04-26 16:39:37 +08:00 |
| install_windows_gpu.md | Small update for GPU configuration related doc (#10770) | 2024-04-15 18:43:29 +08:00 |
| llama3_llamacpp_ollama_quickstart.md | update llama.cpp usage of llama3 (#10975) | 2024-05-09 16:44:12 +08:00 |
| llama_cpp_quickstart.md | update troubleshooting of llama.cpp (#10990) | 2024-05-13 11:18:38 +08:00 |
| ollama_quickstart.md | add version for llama.cpp and ollama (#10982) | 2024-05-11 09:20:31 +08:00 |
| open_webui_with_ollama_quickstart.md | revise private GPT quickstart and a few fixes for other quickstarts (#10967) | 2024-05-08 21:18:20 +08:00 |
| privateGPT_quickstart.md | update private gpt quickstart and a small fix for dify (#10969) | 2024-05-09 13:57:45 +08:00 |
| webui_quickstart.md | Small update for GPU configuration related doc (#10770) | 2024-04-15 18:43:29 +08:00 |